diff --git a/.github/ISSUE_TEMPLATE/Bug.md b/.github/ISSUE_TEMPLATE/Bug.md index cf17d3d7f..35ebb8e21 100644 --- a/.github/ISSUE_TEMPLATE/Bug.md +++ b/.github/ISSUE_TEMPLATE/Bug.md @@ -33,18 +33,18 @@ The Rclone Developers -#### Which OS you are using and how many bits (eg Windows 7, 64 bit) +#### Which OS you are using and how many bits (e.g. Windows 7, 64 bit) -#### Which cloud storage system are you using? (eg Google Drive) +#### Which cloud storage system are you using? (e.g. Google Drive) -#### The command you were trying to run (eg `rclone copy /tmp remote:tmp`) +#### The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`) -#### A log from the command with the `-vv` flag (eg output from `rclone -vv copy /tmp remote:tmp`) +#### A log from the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 5fa8d96fd..4ab7fac12 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -12,10 +12,10 @@ When filing an issue, please include the following information if possible as well as a description of the problem. Make sure you test with the [latest beta of rclone](https://beta.rclone.org/): - * Rclone version (eg output from `rclone -V`) - * Which OS you are using and how many bits (eg Windows 7, 64 bit) - * The command you were trying to run (eg `rclone copy /tmp remote:tmp`) - * A log of the command with the `-vv` flag (eg output from `rclone -vv copy /tmp remote:tmp`) + * Rclone version (e.g. output from `rclone -V`) + * Which OS you are using and how many bits (e.g. Windows 7, 64 bit) + * The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`) + * A log of the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`) * if the log contains secrets then edit the file with a text editor first to obscure them ## Submitting a pull request ## @@ -48,7 +48,7 @@ When ready - run the unit tests for the code you changed go test -v -Note that you may need to make a test remote, eg `TestSwift` for some +Note that you may need to make a test remote, e.g. `TestSwift` for some of the unit tests. Note the top level Makefile targets @@ -170,7 +170,7 @@ with modules beneath. * log - logging facilities * march - iterates directories in lock step * object - in memory Fs objects - * operations - primitives for sync, eg Copy, Move + * operations - primitives for sync, e.g. Copy, Move * sync - sync directories * walk - walk a directory * fstest - provides integration test framework @@ -207,7 +207,7 @@ from those during the release process. See the `make doc` and `make website` targets in the Makefile if you are interested in how. You don't need to run these when adding a feature. -Documentation for rclone sub commands is with their code, eg +Documentation for rclone sub commands is with their code, e.g. `cmd/ls/ls.go`. Note that you can use [GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository) @@ -364,7 +364,7 @@ See the [testing](#testing) section for more information on integration tests. Add your fs to the docs - you'll need to pick an icon for it from [fontawesome](http://fontawesome.io/icons/). Keep lists of remotes in -alphabetical order of full name of remote (eg `drive` is ordered as +alphabetical order of full name of remote (e.g. `drive` is ordered as `Google Drive`) but with the local file system last. 
* `README.md` - main GitHub page diff --git a/MAINTAINERS.md b/MAINTAINERS.md index f2c8fe7e6..00774c926 100644 --- a/MAINTAINERS.md +++ b/MAINTAINERS.md @@ -45,7 +45,7 @@ Rclone uses the labels like this: If it turns out to be a bug or an enhancement it should be tagged as such, with the appropriate other tags. Don't forget the "good first issue" tag to give new contributors something easy to do to get going. -When a ticket is tagged it should be added to a milestone, either the next release, the one after, Soon or Help Wanted. Bugs can be added to the "Known Bugs" milestone if they aren't planned to be fixed or need to wait for something (eg the next go release). +When a ticket is tagged it should be added to a milestone, either the next release, the one after, Soon or Help Wanted. Bugs can be added to the "Known Bugs" milestone if they aren't planned to be fixed or need to wait for something (e.g. the next go release). The milestones have these meanings: diff --git a/RELEASE.md b/RELEASE.md index 12450880a..40c989d01 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -48,8 +48,8 @@ If rclone needs a point release due to some horrendous bug: Set vars - * BASE_TAG=v1.XX # eg v1.52 - * NEW_TAG=${BASE_TAG}.Y # eg v1.52.1 + * BASE_TAG=v1.XX # e.g. v1.52 + * NEW_TAG=${BASE_TAG}.Y # e.g. v1.52.1 * echo $BASE_TAG $NEW_TAG # v1.52 v1.52.1 First make the release branch. If this is a second point release then diff --git a/backend/azureblob/azureblob.go b/backend/azureblob/azureblob.go index bc2211888..2d895cf1f 100644 --- a/backend/azureblob/azureblob.go +++ b/backend/azureblob/azureblob.go @@ -274,7 +274,7 @@ func validateAccessTier(tier string) bool { // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ - 401, // Unauthorized (eg "Token has expired") + 401, // Unauthorized (e.g. "Token has expired") 408, // Request Timeout 429, // Rate exceeded. 500, // Get occasional 500 Internal Server Error diff --git a/backend/b2/b2.go b/backend/b2/b2.go index d1d289e3b..be32640ed 100644 --- a/backend/b2/b2.go +++ b/backend/b2/b2.go @@ -290,7 +290,7 @@ func (o *Object) split() (bucket, bucketPath string) { // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ - 401, // Unauthorized (eg "Token has expired") + 401, // Unauthorized (e.g. "Token has expired") 408, // Request Timeout 429, // Rate exceeded. 500, // Get occasional 500 Internal Server Error @@ -1440,7 +1440,7 @@ func (o *Object) Size() int64 { // Make sure it is lower case // // Remove unverified prefix - see https://www.backblaze.com/b2/docs/uploading.html -// Some tools (eg Cyberduck) use this +// Some tools (e.g. Cyberduck) use this func cleanSHA1(sha1 string) (out string) { out = strings.ToLower(sha1) const unverified = "unverified:" diff --git a/backend/cache/cache.go b/backend/cache/cache.go index 340da76ff..089d9c3b5 100644 --- a/backend/cache/cache.go +++ b/backend/cache/cache.go @@ -68,7 +68,7 @@ func init() { CommandHelp: commandHelp, Options: []fs.Option{{ Name: "remote", - Help: "Remote to cache.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).", + Help: "Remote to cache.\nNormally should contain a ':' and a path, e.g. 
\"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).", Required: true, }, { Name: "plex_url", @@ -581,7 +581,7 @@ Some valid examples are: "0:10" -> the first ten chunks Any parameter with a key that starts with "file" can be used to -specify files to fetch, eg +specify files to fetch, e.g. rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye diff --git a/backend/chunker/chunker.go b/backend/chunker/chunker.go index 1c6c70f4b..59470f2aa 100644 --- a/backend/chunker/chunker.go +++ b/backend/chunker/chunker.go @@ -42,7 +42,7 @@ import ( // used mostly for consistency checks (lazily for performance reasons). // Other formats can be developed that use an external meta store // free of these limitations, but this needs some support from -// rclone core (eg. metadata store interfaces). +// rclone core (e.g. metadata store interfaces). // // The following types of chunks are supported: // data and control, active and temporary. @@ -140,7 +140,7 @@ func init() { Name: "remote", Required: true, Help: `Remote to chunk/unchunk. -Normally should contain a ':' and a path, eg "myremote:path/to/dir", +Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).`, }, { Name: "chunk_size", @@ -464,7 +464,7 @@ func (f *Fs) setChunkNameFormat(pattern string) error { // filePath can be name, relative or absolute path of main file. // // chunkNo must be a zero based index of data chunk. -// Negative chunkNo eg. -1 indicates a control chunk. +// Negative chunkNo e.g. -1 indicates a control chunk. // ctrlType is type of control chunk (must be valid). // ctrlType must be "" for data chunks. // @@ -994,7 +994,7 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, remote st } // Wrapped remote may or may not have seen EOF from chunking reader, - // eg. the box multi-uploader reads exactly the chunk size specified + // e.g. the box multi-uploader reads exactly the chunk size specified // and skips the "EOF" read. Hence, switch to next limit here. if !(c.chunkLimit == 0 || c.chunkLimit == c.chunkSize || c.sizeTotal == -1 || c.done) { silentlyRemove(ctx, chunk) @@ -1183,7 +1183,7 @@ func (c *chunkingReader) Read(buf []byte) (bytesRead int, err error) { if c.chunkLimit <= 0 { // Chunk complete - switch to next one. // Note #1: - // We might not get here because some remotes (eg. box multi-uploader) + // We might not get here because some remotes (e.g. box multi-uploader) // read the specified size exactly and skip the concluding EOF Read. // Then a check in the put loop will kick in. // Note #2: @@ -1387,7 +1387,7 @@ func (f *Fs) Purge(ctx context.Context, dir string) error { // However, if rclone dies unexpectedly, it can leave hidden temporary // chunks, which cannot be discovered using the `list` command. // Remove does not try to search for such chunks or to delete them. -// Sometimes this can lead to strange results eg. when `list` shows that +// Sometimes this can lead to strange results e.g. when `list` shows that // directory is empty but `rmdir` refuses to remove it because on the // level of wrapped remote it's actually *not* empty. // As a workaround users can use `purge` to forcibly remove it. 
diff --git a/backend/chunker/chunker_test.go b/backend/chunker/chunker_test.go index e5f1bb181..4acdf5b5a 100644 --- a/backend/chunker/chunker_test.go +++ b/backend/chunker/chunker_test.go @@ -15,10 +15,10 @@ import ( // Command line flags var ( - // Invalid characters are not supported by some remotes, eg. Mailru. + // Invalid characters are not supported by some remotes, e.g. Mailru. // We enable testing with invalid characters when -remote is not set, so // chunker overlays a local directory, but invalid characters are disabled - // by default when -remote is set, eg. when test_all runs backend tests. + // by default when -remote is set, e.g. when test_all runs backend tests. // You can still test with invalid characters using the below flag. UseBadChars = flag.Bool("bad-chars", false, "Set to test bad characters in file names when -remote is set") ) diff --git a/backend/crypt/crypt.go b/backend/crypt/crypt.go index 41ebb5516..f94de0a49 100644 --- a/backend/crypt/crypt.go +++ b/backend/crypt/crypt.go @@ -30,7 +30,7 @@ func init() { CommandHelp: commandHelp, Options: []fs.Option{{ Name: "remote", - Help: "Remote to encrypt/decrypt.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).", + Help: "Remote to encrypt/decrypt.\nNormally should contain a ':' and a path, e.g. \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).", Required: true, }, { Name: "filename_encryption", @@ -76,7 +76,7 @@ NB If filename_encryption is "off" then this option will do nothing.`, }, { Name: "server_side_across_configs", Default: false, - Help: `Allow server-side operations (eg copy) to work across different crypt configs. + Help: `Allow server-side operations (e.g. copy) to work across different crypt configs. Normally this option is not what you want, but if you have two crypts pointing to the same backend you can use it. diff --git a/backend/drive/drive.go b/backend/drive/drive.go index 5310d45d5..b822c7608 100755 --- a/backend/drive/drive.go +++ b/backend/drive/drive.go @@ -435,7 +435,7 @@ need to use --ignore size also.`, }, { Name: "server_side_across_configs", Default: false, - Help: `Allow server-side operations (eg copy) to work across different drive configs. + Help: `Allow server-side operations (e.g. copy) to work across different drive configs. This can be useful if you wish to do a server-side copy between two different Google drives. Note that this isn't enabled by default @@ -1690,7 +1690,7 @@ func (f *Fs) listRRunner(ctx context.Context, wg *sync.WaitGroup, in chan listRE if len(paths) == 1 { // don't check parents at root because // - shared with me items have no parents at the root - // - if using a root alias, eg "root" or "appDataFolder" the ID won't match + // - if using a root alias, e.g. "root" or "appDataFolder" the ID won't match i = 0 // items at root can have more than one parent so we need to put // the item in just once. @@ -2440,7 +2440,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { usage := &fs.Usage{ Used: fs.NewUsageValue(q.UsageInDrive), // bytes in use Trashed: fs.NewUsageValue(q.UsageInDriveTrash), // bytes in trash - Other: fs.NewUsageValue(q.Usage - q.UsageInDrive), // other usage eg gmail in drive + Other: fs.NewUsageValue(q.Usage - q.UsageInDrive), // other usage e.g. 
gmail in drive } if q.Limit > 0 { usage.Total = fs.NewUsageValue(q.Limit) // quota of bytes that can be used diff --git a/backend/http/http.go b/backend/http/http.go index ba90b4bf0..8b33688a7 100644 --- a/backend/http/http.go +++ b/backend/http/http.go @@ -58,7 +58,7 @@ The input format is comma separated list of key,value pairs. Standard For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'. -You can set multiple headers, eg '"Cookie","name=value","Authorization","xxx"'. +You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'. `, Default: fs.CommaSepList{}, Advanced: true, diff --git a/backend/hubic/hubic.go b/backend/hubic/hubic.go index 92a977ad8..818586e24 100644 --- a/backend/hubic/hubic.go +++ b/backend/hubic/hubic.go @@ -71,7 +71,7 @@ func init() { type credentials struct { Token string `json:"token"` // OpenStack token Endpoint string `json:"endpoint"` // OpenStack endpoint - Expires string `json:"expires"` // Expires date - eg "2015-11-09T14:24:56+01:00" + Expires string `json:"expires"` // Expires date - e.g. "2015-11-09T14:24:56+01:00" } // Fs represents a remote hubic diff --git a/backend/local/local.go b/backend/local/local.go index 61ab1eddb..511c89c35 100644 --- a/backend/local/local.go +++ b/backend/local/local.go @@ -87,13 +87,13 @@ Normally rclone checks the size and modification time of files as they are being uploaded and aborts with a message which starts "can't copy - source file is being updated" if the file changes during upload. -However on some file systems this modification time check may fail (eg +However on some file systems this modification time check may fail (e.g. [Glusterfs #2206](https://github.com/rclone/rclone/issues/2206)) so this check can be disabled with this flag. If this flag is set, rclone will use its best efforts to transfer a file which is being updated. If the file is only having things -appended to it (eg a log) then rclone will transfer the log file with +appended to it (e.g. a log) then rclone will transfer the log file with the size it had the first time rclone saw it. If the file is being modified throughout (not just appended to) then diff --git a/backend/onedrive/onedrive.go b/backend/onedrive/onedrive.go index b19130748..3c21e4d6d 100755 --- a/backend/onedrive/onedrive.go +++ b/backend/onedrive/onedrive.go @@ -274,7 +274,7 @@ listing, set this option.`, }, { Name: "server_side_across_configs", Default: false, - Help: `Allow server-side operations (eg copy) to work across different onedrive configs. + Help: `Allow server-side operations (e.g. copy) to work across different onedrive configs. This can be useful if you wish to do a server-side copy between two different Onedrives. 
Note that this isn't enabled by default diff --git a/backend/qingstor/qingstor.go b/backend/qingstor/qingstor.go index 501c727ee..8c746edd9 100644 --- a/backend/qingstor/qingstor.go +++ b/backend/qingstor/qingstor.go @@ -207,7 +207,7 @@ func (o *Object) split() (bucket, bucketPath string) { func qsParseEndpoint(endpoint string) (protocol, host, port string, err error) { /* Pattern to match an endpoint, - eg: "http(s)://qingstor.com:443" --> "http(s)", "qingstor.com", 443 + e.g.: "http(s)://qingstor.com:443" --> "http(s)", "qingstor.com", 443 "http(s)//qingstor.com" --> "http(s)", "qingstor.com", "" "qingstor.com" --> "", "qingstor.com", "" */ diff --git a/backend/s3/s3.go b/backend/s3/s3.go index b1c05d158..4cc1c9068 100644 --- a/backend/s3/s3.go +++ b/backend/s3/s3.go @@ -225,7 +225,7 @@ func init() { Help: "Use this if unsure. Will use v4 signatures and an empty region.", }, { Value: "other-v2-signature", - Help: "Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.", + Help: "Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.", }}, }, { Name: "endpoint", @@ -1016,7 +1016,7 @@ The minimum is 0 and the maximum is 5GB.`, Help: `Chunk size to use for uploading. When uploading files larger than upload_cutoff or files with unknown -size (eg from "rclone rcat" or uploaded with "rclone mount" or google +size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google photos or google docs) they will be uploaded as multipart uploads using this chunk size. @@ -1121,7 +1121,7 @@ if false then rclone will use virtual path style. See [the AWS S3 docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro) for more info. -Some providers (eg AWS, Aliyun OSS, Netease COS or Tencent COS) require this set to +Some providers (e.g. AWS, Aliyun OSS, Netease COS or Tencent COS) require this set to false - rclone will do this automatically based on the provider setting.`, Default: true, @@ -1133,7 +1133,7 @@ setting.`, If this is false (the default) then rclone will use v4 authentication. If it is set then rclone will use v2 authentication. -Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.`, +Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.`, Default: false, Advanced: true, }, { @@ -1223,7 +1223,7 @@ See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rcl // Constants const ( - metaMtime = "Mtime" // the meta key to store mtime in - eg X-Amz-Meta-Mtime + metaMtime = "Mtime" // the meta key to store mtime in - e.g. X-Amz-Meta-Mtime metaMD5Hash = "Md5chksum" // the meta key to store md5hash in // The maximum size of object we can COPY - this should be 5GiB but is < 5GB for b2 compatibility // See https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/76 @@ -1306,7 +1306,7 @@ type Object struct { lastModified time.Time // Last modified meta map[string]*string // The object metadata if known - may be nil mimeType string // MimeType of object - may be "" - storageClass string // eg GLACIER + storageClass string // e.g. 
GLACIER } // ------------------------------------------------------------ diff --git a/backend/sugarsync/sugarsync.go b/backend/sugarsync/sugarsync.go index 30c5a65f7..c81042480 100644 --- a/backend/sugarsync/sugarsync.go +++ b/backend/sugarsync/sugarsync.go @@ -576,7 +576,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, } newID = resp.Header.Get("Location") if newID == "" { - // look up ID if not returned (eg for syncFolder) + // look up ID if not returned (e.g. for syncFolder) var found bool newID, found, err = f.FindLeaf(ctx, pathID, leaf) if err != nil { diff --git a/backend/swift/swift.go b/backend/swift/swift.go index a9f456134..68ffeefd3 100644 --- a/backend/swift/swift.go +++ b/backend/swift/swift.go @@ -51,7 +51,7 @@ default for this is 5GB which is its maximum value.`, Name: "no_chunk", Help: `Don't chunk files during streaming upload. -When doing streaming uploads (eg using rcat or mount) setting this +When doing streaming uploads (e.g. using rcat or mount) setting this flag will cause the swift backend to not upload chunked files. This will limit the maximum upload size to 5GB. However non chunked @@ -272,7 +272,7 @@ func (f *Fs) Features() *fs.Features { // retryErrorCodes is a slice of error codes that we will retry var retryErrorCodes = []int{ - 401, // Unauthorized (eg "Token has expired") + 401, // Unauthorized (e.g. "Token has expired") 408, // Request Timeout 409, // Conflict - various states that could be resolved on a retry 429, // Rate exceeded. diff --git a/backend/webdav/webdav.go b/backend/webdav/webdav.go index 0d2d431c4..bbf8bc729 100644 --- a/backend/webdav/webdav.go +++ b/backend/webdav/webdav.go @@ -81,7 +81,7 @@ func init() { IsPassword: true, }, { Name: "bearer_token", - Help: "Bearer token instead of user/pass (eg a Macaroon)", + Help: "Bearer token instead of user/pass (e.g. a Macaroon)", }, { Name: "bearer_token_command", Help: "Command to run to get a bearer token", diff --git a/bin/test-repeat.sh b/bin/test-repeat.sh index 1899a3f27..9ab243f07 100755 --- a/bin/test-repeat.sh +++ b/bin/test-repeat.sh @@ -14,7 +14,7 @@ don't fail very often. Syntax: $0 [flags] -Note that flags for 'go test' need to be expanded, eg '-test.v' instead +Note that flags for 'go test' need to be expanded, e.g. '-test.v' instead of just '-v'. '-race' does not need to be expanded. Flags this script understands diff --git a/bin/tidy-beta b/bin/tidy-beta index 48f26ffeb..d884f49ce 100755 --- a/bin/tidy-beta +++ b/bin/tidy-beta @@ -3,7 +3,7 @@ version="$1" if [ "$version" = "" ]; then - echo "Syntax: $0 [delete]" + echo "Syntax: $0 [delete]" exit 1 fi dry_run="--dry-run" diff --git a/cmd/about/about.go b/cmd/about/about.go index 854e30e1d..9b5d83df9 100644 --- a/cmd/about/about.go +++ b/cmd/about/about.go @@ -61,14 +61,14 @@ Where the fields are: * Used: total size used * Free: total amount this user could upload. * Trashed: total amount in the trash - * Other: total amount in other storage (eg Gmail, Google Photos) + * Other: total amount in other storage (e.g. Gmail, Google Photos) * Objects: total number of objects in the storage Note that not all the backends provide all the fields - they will be missing if they are not known for that backend. Where it is known that the value is unlimited the value will also be omitted. -Use the --full flag to see the numbers written out in full, eg +Use the --full flag to see the numbers written out in full, e.g. 
Total: 18253611008 Used: 7993453766 @@ -76,7 +76,7 @@ Use the --full flag to see the numbers written out in full, eg Trashed: 104857602 Other: 8849156022 -Use the --json flag for a computer readable output, eg +Use the --json flag for a computer readable output, e.g. { "total": 18253611008, diff --git a/cmd/backend/backend.go b/cmd/backend/backend.go index af270061b..e10b48163 100644 --- a/cmd/backend/backend.go +++ b/cmd/backend/backend.go @@ -47,7 +47,7 @@ for more info). rclone backend features remote: -Pass options to the backend command with -o. This should be key=value or key, eg: +Pass options to the backend command with -o. This should be key=value or key, e.g.: rclone backend stats remote:path stats -o format=json -o long diff --git a/cmd/cmd.go b/cmd/cmd.go index 0114a8f4e..309536410 100644 --- a/cmd/cmd.go +++ b/cmd/cmd.go @@ -495,7 +495,7 @@ func AddBackendFlags() { done := map[string]struct{}{} for i := range fsInfo.Options { opt := &fsInfo.Options[i] - // Skip if done already (eg with Provider options) + // Skip if done already (e.g. with Provider options) if _, doneAlready := done[opt.Name]; doneAlready { continue } diff --git a/cmd/dedupe/dedupe.go b/cmd/dedupe/dedupe.go index e0df76f17..637981964 100644 --- a/cmd/dedupe/dedupe.go +++ b/cmd/dedupe/dedupe.go @@ -30,7 +30,7 @@ names and offers to delete all but one or rename them to be different. This is only useful with backends like Google Drive which can have -duplicate file names. It can be run on wrapping backends (eg crypt) if +duplicate file names. It can be run on wrapping backends (e.g. crypt) if they wrap a backend which supports duplicate file names. In the first pass it will merge directories with the same name. It @@ -43,7 +43,7 @@ This means that for most duplicated files the ` + "`dedupe`" + ` command will not be interactive. ` + "`dedupe`" + ` considers files to be identical if they have the -same file path and the same hash. If the backend does not support hashes (eg crypt wrapping +same file path and the same hash. If the backend does not support hashes (e.g. crypt wrapping Google Drive) then they will never be found to be identical. If you use the ` + "`--size-only`" + ` flag then files will be considered identical if they have the same size (any hash will be ignored). This diff --git a/cmd/genautocomplete/genautocomplete_bash.go b/cmd/genautocomplete/genautocomplete_bash.go index f371d690c..f649cb480 100644 --- a/cmd/genautocomplete/genautocomplete_bash.go +++ b/cmd/genautocomplete/genautocomplete_bash.go @@ -19,7 +19,7 @@ var bashCommandDefinition = &cobra.Command{ Generates a bash shell autocompletion script for rclone. This writes to /etc/bash_completion.d/rclone by default so will -probably need to be run with sudo or as root, eg +probably need to be run with sudo or as root, e.g. sudo rclone genautocomplete bash diff --git a/cmd/genautocomplete/genautocomplete_fish.go b/cmd/genautocomplete/genautocomplete_fish.go index a60230c40..bafe15ffa 100644 --- a/cmd/genautocomplete/genautocomplete_fish.go +++ b/cmd/genautocomplete/genautocomplete_fish.go @@ -19,7 +19,7 @@ var fishCommandDefinition = &cobra.Command{ Generates a fish autocompletion script for rclone. This writes to /etc/fish/completions/rclone.fish by default so will -probably need to be run with sudo or as root, eg +probably need to be run with sudo or as root, e.g. 
sudo rclone genautocomplete fish diff --git a/cmd/genautocomplete/genautocomplete_zsh.go b/cmd/genautocomplete/genautocomplete_zsh.go index 6a2c1c4a0..6cc352a65 100644 --- a/cmd/genautocomplete/genautocomplete_zsh.go +++ b/cmd/genautocomplete/genautocomplete_zsh.go @@ -19,7 +19,7 @@ var zshCommandDefinition = &cobra.Command{ Generates a zsh autocompletion script for rclone. This writes to /usr/share/zsh/vendor-completions/_rclone by default so will -probably need to be run with sudo or as root, eg +probably need to be run with sudo or as root, e.g. sudo rclone genautocomplete zsh diff --git a/cmd/hashsum/hashsum.go b/cmd/hashsum/hashsum.go index 1994b1935..55a2d5b90 100644 --- a/cmd/hashsum/hashsum.go +++ b/cmd/hashsum/hashsum.go @@ -31,7 +31,7 @@ Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool. -Run without a hash to see the list of supported hashes, eg +Run without a hash to see the list of supported hashes, e.g. $ rclone hashsum Supported hashes are: diff --git a/cmd/help.go b/cmd/help.go index ce1e452ef..7cd2d34f0 100644 --- a/cmd/help.go +++ b/cmd/help.go @@ -297,7 +297,7 @@ func showBackend(name string) { var standardOptions, advancedOptions fs.Options done := map[string]struct{}{} for _, opt := range backend.Options { - // Skip if done already (eg with Provider options) + // Skip if done already (e.g. with Provider options) if _, doneAlready := done[opt.Name]; doneAlready { continue } diff --git a/cmd/ls/lshelp/lshelp.go b/cmd/ls/lshelp/lshelp.go index b9bec0841..6827e584c 100644 --- a/cmd/ls/lshelp/lshelp.go +++ b/cmd/ls/lshelp/lshelp.go @@ -21,6 +21,6 @@ Note that ` + "`ls` and `lsl`" + ` recurse by default - use "--max-depth 1" to s The other list commands ` + "`lsd`,`lsf`,`lsjson`" + ` do not recurse by default - use "-R" to make them recurse. Listing a non existent directory will produce an error except for -remotes which can't have empty directories (eg s3, swift, gcs, etc - +remotes which can't have empty directories (e.g. s3, swift, gcs, etc - the bucket based remotes). ` diff --git a/cmd/lsf/lsf.go b/cmd/lsf/lsf.go index 92221bc26..90b43d031 100644 --- a/cmd/lsf/lsf.go +++ b/cmd/lsf/lsf.go @@ -72,7 +72,7 @@ output: o - Original ID of underlying object m - MimeType of object if known e - encrypted name - T - tier of storage if known, eg "Hot" or "Cool" + T - tier of storage if known, e.g. "Hot" or "Cool" So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last. diff --git a/cmd/lsjson/lsjson.go b/cmd/lsjson/lsjson.go index ad467bf8b..789a60229 100644 --- a/cmd/lsjson/lsjson.go +++ b/cmd/lsjson/lsjson.go @@ -65,11 +65,11 @@ may be repeated). If --hash-type is set then it implies --hash. If --no-modtime is specified then ModTime will be blank. This can speed things up on remotes where reading the ModTime takes an extra -request (eg s3, swift). +request (e.g. s3, swift). If --no-mimetype is specified then MimeType will be blank. This can speed things up on remotes where reading the MimeType takes an extra -request (eg s3, swift). +request (e.g. s3, swift). If --encrypted is not specified the Encrypted won't be emitted. @@ -91,7 +91,7 @@ If the directory is a bucket in a bucket based backend, then The time is in RFC3339 format with up to nanosecond precision. 
The number of decimal digits in the seconds will depend on the precision that the remote can hold the times, so if times are accurate to the -nearest millisecond (eg Google Drive) then 3 digits will always be +nearest millisecond (e.g. Google Drive) then 3 digits will always be shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are accurate to the nearest second (Dropbox, Box, WebDav etc) no digits will be shown ("2017-05-31T16:15:57+01:00"). diff --git a/cmd/mount/test/seekers.go b/cmd/mount/test/seekers.go index 71d08acbc..ae5eba5c3 100644 --- a/cmd/mount/test/seekers.go +++ b/cmd/mount/test/seekers.go @@ -21,7 +21,7 @@ var ( maxBlockSize = flag.Int("b", 1024*1024, "Max block size to read") simultaneous = flag.Int("transfers", 16, "Number of simultaneous files to open") seeksPerFile = flag.Int("seeks", 8, "Seeks per file") - mask = flag.Int64("mask", 0, "mask for seek, eg 0x7fff") + mask = flag.Int64("mask", 0, "mask for seek, e.g. 0x7fff") ) func init() { diff --git a/cmd/mount2/node.go b/cmd/mount2/node.go index e1369607f..90f98379a 100644 --- a/cmd/mount2/node.go +++ b/cmd/mount2/node.go @@ -235,7 +235,7 @@ func (ds *dirStream) Next() (de fuse.DirEntry, errno syscall.Errno) { // defer log.Trace(nil, "")("de=%+v, errno=%v", &de, &errno) fi := ds.nodes[ds.i] de = fuse.DirEntry{ - // Mode is the file's mode. Only the high bits (eg. S_IFDIR) + // Mode is the file's mode. Only the high bits (e.g. S_IFDIR) // are considered. Mode: getMode(fi), diff --git a/cmd/mountlib/mount.go b/cmd/mountlib/mount.go index b4c3aef1e..9e1dfeaeb 100644 --- a/cmd/mountlib/mount.go +++ b/cmd/mountlib/mount.go @@ -260,7 +260,7 @@ applications won't work with their files on an rclone mount without "--vfs-cache-mode writes" or "--vfs-cache-mode full". See the [File Caching](#file-caching) section for more info. -The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, +The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache. diff --git a/cmd/rc/rc.go b/cmd/rc/rc.go index 4e0b997bb..6cf204a56 100644 --- a/cmd/rc/rc.go +++ b/cmd/rc/rc.go @@ -92,7 +92,7 @@ Will place this in the "arg" value Use --loopback to connect to the rclone instance running "rclone rc". This is very useful for testing commands without having to run an -rclone rc server, eg: +rclone rc server, e.g.: rclone rc --loopback operations/about fs=/ diff --git a/cmd/serve/dlna/cds.go b/cmd/serve/dlna/cds.go index 0b5f4b198..451d5636b 100644 --- a/cmd/serve/dlna/cds.go +++ b/cmd/serve/dlna/cds.go @@ -23,7 +23,7 @@ import ( ) // Add a minimal number of mime types to augment go's built in types -// for environments which don't have access to a mime.types file (eg +// for environments which don't have access to a mime.types file (e.g. // Termux on android) func init() { for _, t := range []struct { diff --git a/cmd/serve/dlna/dlnaflags/dlnaflags.go b/cmd/serve/dlna/dlnaflags/dlnaflags.go index 701870c3a..520c76f33 100644 --- a/cmd/serve/dlna/dlnaflags/dlnaflags.go +++ b/cmd/serve/dlna/dlnaflags/dlnaflags.go @@ -11,7 +11,7 @@ var Help = ` ### Server options Use ` + "`--addr`" + ` to specify which IP address and port the server should -listen on, eg ` + "`--addr 1.2.3.4:8000` or `--addr :8080`" + ` to listen to all +listen on, e.g. ` + "`--addr 1.2.3.4:8000` or `--addr :8080`" + ` to listen to all IPs. 
Use ` + "`--name`" + ` to choose the friendly server name, which is by diff --git a/cmd/serve/ftp/ftp.go b/cmd/serve/ftp/ftp.go index 38c9ba856..22cee3ef6 100644 --- a/cmd/serve/ftp/ftp.go +++ b/cmd/serve/ftp/ftp.go @@ -79,7 +79,7 @@ or you can make a remote of type ftp to read and write it. ### Server options Use --addr to specify which IP address and port the server should -listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all +listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. diff --git a/cmd/serve/http/http.go b/cmd/serve/http/http.go index 04724174a..0c430a7ad 100644 --- a/cmd/serve/http/http.go +++ b/cmd/serve/http/http.go @@ -32,7 +32,7 @@ var Command = &cobra.Command{ over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it. -You can use the filter flags (eg --include, --exclude) to control what +You can use the filter flags (e.g. --include, --exclude) to control what is served. The server will log errors. Use -v to see access logs. diff --git a/cmd/serve/httplib/httplib.go b/cmd/serve/httplib/httplib.go index 2034908a0..d28fe253f 100644 --- a/cmd/serve/httplib/httplib.go +++ b/cmd/serve/httplib/httplib.go @@ -30,7 +30,7 @@ var Help = ` ### Server options Use --addr to specify which IP address and port the server should -listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all +listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. diff --git a/cmd/serve/serve.go b/cmd/serve/serve.go index 24a1f832c..07a550716 100644 --- a/cmd/serve/serve.go +++ b/cmd/serve/serve.go @@ -38,7 +38,7 @@ var Command = &cobra.Command{ Use: "serve [opts] ", Short: `Serve a remote over a protocol.`, Long: `rclone serve is used to serve a remote over a given protocol. This -command requires the use of a subcommand to specify the protocol, eg +command requires the use of a subcommand to specify the protocol, e.g. rclone serve http remote: @@ -46,7 +46,7 @@ Each subcommand has its own options which you can see in their help. `, RunE: func(command *cobra.Command, args []string) error { if len(args) == 0 { - return errors.New("serve requires a protocol, eg 'rclone serve http remote:'") + return errors.New("serve requires a protocol, e.g. 'rclone serve http remote:'") } return errors.New("unknown protocol") }, diff --git a/cmd/serve/sftp/sftp.go b/cmd/serve/sftp/sftp.go index c442db261..8395de48c 100644 --- a/cmd/serve/sftp/sftp.go +++ b/cmd/serve/sftp/sftp.go @@ -61,7 +61,7 @@ var Command = &cobra.Command{ over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it. -You can use the filter flags (eg --include, --exclude) to control what +You can use the filter flags (e.g. --include, --exclude) to control what is served. The server will log errors. Use -v to see access logs. diff --git a/cmd/touch/touch.go b/cmd/touch/touch.go index c686e103e..c96348cd2 100644 --- a/cmd/touch/touch.go +++ b/cmd/touch/touch.go @@ -46,9 +46,9 @@ unless the --no-create flag is provided. If --timestamp is used then it will set the modification time to that time instead of the current time. Times may be specified as one of: -- 'YYMMDD' - eg. 17.10.30 -- 'YYYY-MM-DDTHH:MM:SS' - eg. 2006-01-02T15:04:05 -- 'YYYY-MM-DDTHH:MM:SS.SSS' - eg. 2006-01-02T15:04:05.123456789 +- 'YYMMDD' - e.g. 
17.10.30 +- 'YYYY-MM-DDTHH:MM:SS' - e.g. 2006-01-02T15:04:05 +- 'YYYY-MM-DDTHH:MM:SS.SSS' - e.g. 2006-01-02T15:04:05.123456789 Note that --timestamp is in UTC if you want local time then add the --localtime flag. diff --git a/cmd/tree/tree.go b/cmd/tree/tree.go index 1c69a6d0e..6ced45ee2 100644 --- a/cmd/tree/tree.go +++ b/cmd/tree/tree.go @@ -85,7 +85,7 @@ For example 1 directories, 5 files -You can use any of the filtering options with the tree command (eg +You can use any of the filtering options with the tree command (e.g. --include and --exclude). You can also use --fast-list. The tree command has many options for controlling the listing which diff --git a/docs/README.md b/docs/README.md index 1eeb56175..e5b1c3561 100644 --- a/docs/README.md +++ b/docs/README.md @@ -37,7 +37,7 @@ so it is easy to tweak stuff. │   │   ├── footer.copyright.html - copyright footer │   │   ├── footer.html - footer including scripts │   │   ├── header.html - the whole html header -│   │   ├── header.includes.html - header includes eg css files +│   │   ├── header.includes.html - header includes e.g. css files │   │   ├── menu.html - left hand side menu │   │   ├── meta.html - meta tags for the header │   │   └── navbar.html - top navigation bar diff --git a/docs/content/_index.md b/docs/content/_index.md index 112913ae3..639120397 100644 --- a/docs/content/_index.md +++ b/docs/content/_index.md @@ -86,7 +86,7 @@ Rclone helps you: - MD5, SHA1 hashes are checked at all times for file integrity - Timestamps are preserved on files - Operations can be restarted at any time - - Can be to and from network, eg two different cloud providers + - Can be to and from network, e.g. two different cloud providers - Can use multi-threaded downloads to local disk - [Copy](/commands/rclone_copy/) new or changed files to cloud storage - [Sync](/commands/rclone_sync/) (one way) to make a directory identical diff --git a/docs/content/alias.md b/docs/content/alias.md index d49d2c1eb..6325dd7f3 100644 --- a/docs/content/alias.md +++ b/docs/content/alias.md @@ -9,7 +9,7 @@ description: "Remote Aliases" The `alias` remote provides a new name for another remote. Paths may be as deep as required or a local path, -eg `remote:directory/subdirectory` or `/directory/subdirectory`. +e.g. `remote:directory/subdirectory` or `/directory/subdirectory`. During the initial setup with `rclone config` you will specify the target remote. The target remote can either be a local path or another remote. diff --git a/docs/content/azureblob.md b/docs/content/azureblob.md index c15c07034..ddc590f5e 100644 --- a/docs/content/azureblob.md +++ b/docs/content/azureblob.md @@ -7,7 +7,7 @@ description: "Rclone docs for Microsoft Azure Blob Storage" ----------------------------------------- Paths are specified as `remote:container` (or `remote:` for the `lsd` -command.) You may put subdirectories in too, eg +command.) You may put subdirectories in too, e.g. `remote:container/path/to/dir`. Here is an example of making a Microsoft Azure Blob Storage @@ -104,7 +104,7 @@ as they can't be used in JSON strings. MD5 hashes are stored with blobs. However blobs that were uploaded in chunks only have an MD5 if the source remote was capable of MD5 -hashes, eg the local disk. +hashes, e.g. the local disk. ### Authenticating with Azure Blob Storage @@ -127,7 +127,7 @@ container level SAS URL right click on a container in the Azure Blob explorer in the Azure portal. 
If you use a container level SAS URL, rclone operations are permitted -only on a particular container, eg +only on a particular container, e.g. rclone ls azureblob:container diff --git a/docs/content/b2.md b/docs/content/b2.md index 10e334826..82e113d9a 100644 --- a/docs/content/b2.md +++ b/docs/content/b2.md @@ -9,7 +9,7 @@ description: "Backblaze B2" B2 is [Backblaze's cloud storage system](https://www.backblaze.com/b2/). Paths are specified as `remote:bucket` (or `remote:` for the `lsd` -command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`. +command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`. Here is an example of making a b2 configuration. First run @@ -181,7 +181,7 @@ If you wish to remove all the old versions then you can use the `rclone cleanup remote:bucket` command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, -eg `rclone cleanup remote:bucket/path/to/stuff`. +e.g. `rclone cleanup remote:bucket/path/to/stuff`. Note that `cleanup` will remove partially uploaded files from the bucket if they are more than a day old. diff --git a/docs/content/box.md b/docs/content/box.md index 1198f8bcd..237c88c62 100644 --- a/docs/content/box.md +++ b/docs/content/box.md @@ -8,7 +8,7 @@ description: "Rclone docs for Box" Paths are specified as `remote:path` -Paths may be as deep as required, eg `remote:directory/subdirectory`. +Paths may be as deep as required, e.g. `remote:directory/subdirectory`. The initial setup for Box involves getting a token from Box which you can do either in your browser, or with a config.json downloaded from Box diff --git a/docs/content/cache.md b/docs/content/cache.md index 74a75e6ac..f439ea8de 100644 --- a/docs/content/cache.md +++ b/docs/content/cache.md @@ -51,7 +51,7 @@ XX / Cache a remote [snip] Storage> cache Remote to cache. -Normally should contain a ':' and a path, eg "myremote:path/to/dir", +Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). remote> local:/test Optional: The URL of the Plex server @@ -313,7 +313,7 @@ Here are the standard options specific to cache (Cache a remote). #### --cache-remote Remote to cache. -Normally should contain a ':' and a path, eg "myremote:path/to/dir", +Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). - Config: remote diff --git a/docs/content/changelog.md b/docs/content/changelog.md index 05db4dd92..88739d615 100644 --- a/docs/content/changelog.md +++ b/docs/content/changelog.md @@ -268,7 +268,7 @@ description: "Rclone Changelog" * Bug Fixes * docs - * Disable smart typography (eg en-dash) in MANUAL.* and man page (Nick Craig-Wood) + * Disable smart typography (e.g. en-dash) in MANUAL.* and man page (Nick Craig-Wood) * Update install.md to reflect minimum Go version (Evan Harris) * Update install from source instructions (Nick Craig-Wood) * make_manual: Support SOURCE_DATE_EPOCH (Morten Linderud) @@ -373,7 +373,7 @@ all the docs and Edward Barker for helping re-write the front page. 
* Add `--check-first` to do all checking before starting transfers (Nick Craig-Wood) * Add `--track-renames-strategy` for configurable matching criteria for `--track-renames` (Bernd Schoolmann) * Add `--cutoff-mode` hard,soft,cautious (Shing Kit Chan & Franklyn Tackitt) - * Filter flags (eg `--files-from -`) can read from stdin (fishbullet) + * Filter flags (e.g. `--files-from -`) can read from stdin (fishbullet) * Add `--error-on-no-transfer` option (Jon Fautley) * Implement `--order-by xxx,mixed` for copying some small and some big files (Nick Craig-Wood) * Allow `--max-backlog` to be negative meaning as large as possible (Nick Craig-Wood) @@ -817,7 +817,7 @@ all the docs and Edward Barker for helping re-write the front page. * Check config names more carefully and report errors (Nick Craig-Wood) * Remove error: can't use `--size-only` and `--ignore-size` together. (Nick Craig-Wood) * filter: Prevent mixing options when `--files-from` is in use (Michele Caci) - * serve sftp: Fix crash on unsupported operations (eg Readlink) (Nick Craig-Wood) + * serve sftp: Fix crash on unsupported operations (e.g. Readlink) (Nick Craig-Wood) * Mount * Allow files of unknown size to be read properly (Nick Craig-Wood) * Skip tests on <= 2 CPUs to avoid lockup (Nick Craig-Wood) @@ -833,7 +833,7 @@ all the docs and Edward Barker for helping re-write the front page. * Azure Blob * Disable logging to the Windows event log (Nick Craig-Wood) * B2 - * Remove `unverified:` prefix on sha1 to improve interop (eg with CyberDuck) (Nick Craig-Wood) + * Remove `unverified:` prefix on sha1 to improve interop (e.g. with CyberDuck) (Nick Craig-Wood) * Box * Add options to get access token via JWT auth (David) * Drive @@ -1048,7 +1048,7 @@ all the docs and Edward Barker for helping re-write the front page. * controlled with `--multi-thread-cutoff` and `--multi-thread-streams` * Use rclone.conf from rclone executable directory to enable portable use (albertony) * Allow sync of a file and a directory with the same name (forgems) - * this is common on bucket based remotes, eg s3, gcs + * this is common on bucket based remotes, e.g. s3, gcs * Add `--ignore-case-sync` for forced case insensitivity (garry415) * Implement `--stats-one-line-date` and `--stats-one-line-date-format` (Peter Berbec) * Log an ERROR for all commands which exit with non-zero status (Nick Craig-Wood) @@ -1319,7 +1319,7 @@ all the docs and Edward Barker for helping re-write the front page. * Add support for PEM encrypted private keys (Fabian Möller) * Add option to force the usage of an ssh-agent (Fabian Möller) * Perform environment variable expansion on key-file (Fabian Möller) - * Fix rmdir on Windows based servers (eg CrushFTP) (Nick Craig-Wood) + * Fix rmdir on Windows based servers (e.g. CrushFTP) (Nick Craig-Wood) * Fix rmdir deleting directory contents on some SFTP servers (Nick Craig-Wood) * Fix error on dangling symlinks (Nick Craig-Wood) * Swift @@ -1350,7 +1350,7 @@ all the docs and Edward Barker for helping re-write the front page. * sensitive operations require authorization or the `--rc-no-auth` flag * config/* operations to configure rclone * options/* for reading/setting command line flags - * operations/* for all low level operations, eg copy file, list directory + * operations/* for all low level operations, e.g. 
copy file, list directory * sync/* for sync, copy and move * `--rc-files` flag to serve files on the rc http server * this is for building web native GUIs for rclone @@ -1745,7 +1745,7 @@ Point release to fix hubic and azureblob backends. * rc: fix setting bwlimit to unlimited * rc: take note of the --rc-addr flag too as per the docs * Mount - * Use About to return the correct disk total/used/free (eg in `df`) + * Use About to return the correct disk total/used/free (e.g. in `df`) * Set `--attr-timeout default` to `1s` - fixes: * rclone using too much memory * rclone not serving files to samba @@ -1984,7 +1984,7 @@ Point release to fix hubic and azureblob backends. * Retry lots more different types of errors to make multipart transfers more reliable * Save the config before asking for a token, fixes disappearing oauth config * Warn the user if --include and --exclude are used together (Ernest Borowski) - * Fix duplicate files (eg on Google drive) causing spurious copies + * Fix duplicate files (e.g. on Google drive) causing spurious copies * Allow trailing and leading whitespace for passwords (Jason Rose) * ncdu: fix crashes on empty directories * rcat: fix goroutine leak @@ -2412,7 +2412,7 @@ Point release to fix hubic and azureblob backends. * New B2 API endpoint (thanks Per Cederberg) * Set maximum backoff to 5 Minutes * onedrive - * Fix URL escaping in file names - eg uploading files with `+` in them. + * Fix URL escaping in file names - e.g. uploading files with `+` in them. * amazon cloud drive * Fix token expiry during large uploads * Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors @@ -2453,7 +2453,7 @@ Point release to fix hubic and azureblob backends. * Skip setting the modified time for objects > 5GB as it isn't possible. * Backblaze B2 * Add --b2-versions flag so old versions can be listed and retrieved. - * Treat 403 errors (eg cap exceeded) as fatal. + * Treat 403 errors (e.g. cap exceeded) as fatal. * Implement cleanup command for deleting old file versions. * Make error handling compliant with B2 integrations notes. * Fix handling of token expiry. @@ -2625,7 +2625,7 @@ Point release to fix hubic and azureblob backends. * This could have deleted files unexpectedly on sync * Always check first with `--dry-run`! * Swift - * Stop SetModTime losing metadata (eg X-Object-Manifest) + * Stop SetModTime losing metadata (e.g. X-Object-Manifest) * This could have caused data loss for files > 5GB in size * Use ContentType from Object to avoid lookups in listings * OneDrive @@ -2788,7 +2788,7 @@ Point release to fix hubic and azureblob backends. ## v1.09 - 2015-02-07 -* windows: Stop drive letters (eg C:) getting mixed up with remotes (eg drive:) +* windows: Stop drive letters (e.g. C:) getting mixed up with remotes (e.g. drive:) * local: Fix directory separators on Windows * drive: fix rate limit exceeded errors diff --git a/docs/content/chunker.md b/docs/content/chunker.md index 9bc2b95a4..9b815f726 100644 --- a/docs/content/chunker.md +++ b/docs/content/chunker.md @@ -17,7 +17,7 @@ a remote. First check your chosen remote is working - we'll call it `remote:path` here. Note that anything inside `remote:path` will be chunked and anything outside -won't. This means that if you are using a bucket based remote (eg S3, B2, swift) +won't. This means that if you are using a bucket based remote (e.g. S3, B2, swift) then you should probably put the bucket in the remote `s3:bucket`. Now configure `chunker` using `rclone config`. 
We will call this one `overlay` @@ -38,7 +38,7 @@ XX / Transparently chunk/split large files [snip] Storage> chunker Remote to chunk/unchunk. -Normally should contain a ':' and a path, eg "myremote:path/to/dir", +Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). Enter a string value. Press Enter for the default (""). remote> remote:path @@ -118,7 +118,7 @@ the potential chunk files are accounted for, grouped and assembled into composite directory entries. Any temporary chunks are hidden. List and other commands can sometimes come across composite files with -missing or invalid chunks, eg. shadowed by like-named directory or +missing or invalid chunks, e.g. shadowed by like-named directory or another file. This usually means that wrapped file system has been directly tampered with or damaged. If chunker detects a missing chunk it will by default print warning, skip the whole incomplete group of chunks but @@ -140,7 +140,7 @@ characters defines the minimum length of a string representing a chunk number. If decimal chunk number has less digits than the number of hashes, it is left-padded by zeros. If the decimal string is longer, it is left intact. By default numbering starts from 1 but there is another option that allows -user to start from 0, eg. for compatibility with legacy software. +user to start from 0, e.g. for compatibility with legacy software. For example, if name format is `big_*-##.part` and original file name is `data.txt` and numbering starts from 0, then the first chunk will be named @@ -211,7 +211,7 @@ guarantee given hash for all files. If wrapped remote doesn't support it, chunker will then add metadata to all files, even small. However, this can double the amount of small files in storage and incur additional service charges. You can even use chunker to force md5/sha1 support in any other remote -at expense of sidecar meta objects by setting eg. `chunk_type=sha1all` +at expense of sidecar meta objects by setting e.g. `chunk_type=sha1all` to force hashsums and `chunk_size=1P` to effectively disable chunking. Normally, when a file is copied to chunker controlled remote, chunker @@ -282,7 +282,7 @@ suffix during operations. Many file systems limit base file name without path by 255 characters. Using rclone's crypt remote as a base file system limits file name by 143 characters. Thus, maximum name length is 231 for most files and 119 for chunker-over-crypt. A user in need can change name format to -eg. `*.rcc##` and save 10 characters (provided at most 99 chunks per file). +e.g. `*.rcc##` and save 10 characters (provided at most 99 chunks per file). Note that a move implemented using the copy-and-delete method may incur double charging with some cloud storage providers. @@ -308,7 +308,7 @@ Here are the standard options specific to chunker (Transparently chunk/split lar #### --chunker-remote Remote to chunk/unchunk. -Normally should contain a ':' and a path, eg "myremote:path/to/dir", +Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). - Config: remote diff --git a/docs/content/crypt.md b/docs/content/crypt.md index 12d3030e4..3348085c1 100644 --- a/docs/content/crypt.md +++ b/docs/content/crypt.md @@ -18,7 +18,7 @@ removable drives. Before configuring the crypt remote, check the underlying remote is working. In this example the underlying remote is called `remote:path`. 
Anything inside `remote:path` will be encrypted and anything outside -will not. In the case of an S3 based underlying remote (eg Amazon S3, +will not. In the case of an S3 based underlying remote (e.g. Amazon S3, B2, Swift) it is generally advisable to define a crypt remote in the underlying remote `s3:bucket`. If `s3:` alone is specified alongside file name encryption, rclone will encrypt the bucket name. @@ -42,7 +42,7 @@ XX / Encrypt/Decrypt a remote [snip] Storage> crypt Remote to encrypt/decrypt. -Normally should contain a ':' and a path, eg "myremote:path/to/dir", +Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). remote> remote:path How to encrypt the filenames. @@ -281,7 +281,7 @@ Here are the standard options specific to crypt (Encrypt/Decrypt a remote). #### --crypt-remote Remote to encrypt/decrypt. -Normally should contain a ':' and a path, eg "myremote:path/to/dir", +Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). - Config: remote @@ -350,7 +350,7 @@ Here are the advanced options specific to crypt (Encrypt/Decrypt a remote). #### --crypt-server-side-across-configs -Allow server-side operations (eg copy) to work across different crypt configs. +Allow server-side operations (e.g. copy) to work across different crypt configs. Normally this option is not what you want, but if you have two crypts pointing to the same backend you can use it. @@ -545,7 +545,7 @@ encoding is modified in two ways: * we strip the padding character `=` `base32` is used rather than the more efficient `base64` so rclone can be -used on case insensitive remotes (eg Windows, Amazon Drive). +used on case insensitive remotes (e.g. Windows, Amazon Drive). ### Key derivation ### diff --git a/docs/content/docs.md b/docs/content/docs.md index fb67ab727..dc1061648 100644 --- a/docs/content/docs.md +++ b/docs/content/docs.md @@ -68,7 +68,7 @@ Its syntax is like this Syntax: [options] subcommand Source and destination paths are specified by the name you gave the -storage system in the config file then the sub path, eg +storage system in the config file then the sub path, e.g. "drive:myfolder" to look at "myfolder" in Google drive. You can define as many storage paths as you like in the config file. @@ -219,12 +219,12 @@ Here are some gotchas which may help users unfamiliar with the shell rules ### Linux / OSX ### -If your names have spaces or shell metacharacters (eg `*`, `?`, `$`, +If your names have spaces or shell metacharacters (e.g. `*`, `?`, `$`, `'`, `"` etc) then you must quote them. Use single quotes `'` by default. rclone copy 'Important files?' remote:backup -If you want to send a `'` you will need to use `"`, eg +If you want to send a `'` you will need to use `"`, e.g. rclone copy "O'Reilly Reviews" remote:backup @@ -234,12 +234,12 @@ shell. ### Windows ### -If your names have spaces in you need to put them in `"`, eg +If your names have spaces in you need to put them in `"`, e.g. rclone copy "E:\folder name\folder name\folder name" remote:backup If you are using the root directory on its own then don't quote it -(see [#464](https://github.com/rclone/rclone/issues/464) for why), eg +(see [#464](https://github.com/rclone/rclone/issues/464) for why), e.g. rclone copy E:\ remote:backup @@ -289,7 +289,7 @@ quicker than a download and re-upload. Server side copies will only be attempted if the remote names are the same. 
-This can be used when scripting to make aged backups efficiently, eg +This can be used when scripting to make aged backups efficiently, e.g. rclone sync -i remote:current-backup remote:previous-backup rclone sync -i /path/to/files remote:current-backup @@ -315,7 +315,7 @@ time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". Options which use SIZE use kByte by default. However, a suffix of `b` for bytes, `k` for kBytes, `M` for MBytes, `G` for GBytes, `T` for -TBytes and `P` for PBytes may be used. These are the binary units, eg +TBytes and `P` for PBytes may be used. These are the binary units, e.g. 1, 2\*\*10, 2\*\*20, 2\*\*30 respectively. ### --backup-dir=DIR ### @@ -467,7 +467,7 @@ objects to transfer is held in memory before the transfers start. ### --checkers=N ### The number of checkers to run in parallel. Checkers do the equality -checking of files during a sync. For some storage systems (eg S3, +checking of files during a sync. For some storage systems (e.g. S3, Swift, Dropbox) this can take a significant amount of time so they are run in parallel. @@ -483,7 +483,7 @@ This is useful when the remote doesn't support setting modified time and a more accurate sync is desired than just checking the file size. This is very useful when transferring between remotes which store the -same hash type on the object, eg Drive and Swift. For details of which +same hash type on the object, e.g. Drive and Swift. For details of which remotes support which hash type see the table in the [overview section](/overview/). @@ -521,7 +521,7 @@ for Rclone to use it, it will never be created automatically. If you run `rclone config file` you will see where the default location is for you. -Use this flag to override the config location, eg `rclone +Use this flag to override the config location, e.g. `rclone --config=".myconfig" .config`. ### --contimeout=TIME ### @@ -568,7 +568,7 @@ See the overview [features](/overview/#features) and which feature does what. This flag can be useful for debugging and in exceptional circumstances -(eg Google Drive limiting the total volume of Server Side Copies to +(e.g. Google Drive limiting the total volume of Server Side Copies to 100GB/day). ### -n, --dry-run ### @@ -956,7 +956,7 @@ This means that: - the destination is not listed minimising the API calls - files are always transferred -- this can cause duplicates on remotes which allow it (eg Google Drive) +- this can cause duplicates on remotes which allow it (e.g. Google Drive) - `--retries 1` is recommended otherwise you'll transfer everything again on a retry This flag is useful to minimise the transactions if you know that none @@ -1012,7 +1012,7 @@ When using this flag, rclone won't update modification times of remote files if they are incorrect as it would normally. This can be used if the remote is being synced with another tool also -(eg the Google Drive client). +(e.g. the Google Drive client). ### --order-by string ### @@ -1033,7 +1033,7 @@ This can have a modifier appended with a comma: - `mixed` - order so that the smallest is processed first for some threads and the largest for others If the modifier is `mixed` then it can have an optional percentage -(which defaults to `50`), eg `size,mixed,25` which means that 25% of +(which defaults to `50`), e.g. `size,mixed,25` which means that 25% of the threads should be taking the smallest items and 75% the largest. The threads which take the smallest first will always take the smallest first and likewise the largest first threads. 
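As an illustration of the `mixed` modifier described above, a hedged sketch using placeholder paths:

```
# 25% of the transfer threads take the smallest files first,
# the remaining 75% take the largest files first
rclone copy --order-by size,mixed,25 /path/to/files remote:backup
```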
The `mixed` @@ -1127,7 +1127,7 @@ This is useful if you uploaded files with the incorrect timestamps and you now wish to correct them. This flag is **only** useful for destinations which don't support -hashes (eg `crypt`). +hashes (e.g. `crypt`). This can be used any of the sync commands `sync`, `copy` or `move`. @@ -1140,7 +1140,7 @@ to see if there is an existing file on the destination. If this file matches the source with size (and checksum if available) but has a differing timestamp then instead of re-uploading it, rclone will update the timestamp on the destination file. If the checksum does not -match rclone will upload the new file. If the checksum is absent (eg +match rclone will upload the new file. If the checksum is absent (e.g. on a `crypt` backend) then rclone will update the timestamp. Note that some remotes can't set the modification time without @@ -1287,7 +1287,7 @@ This can be useful for running rclone in a script or `rclone mount`. ### --syslog-facility string ### -If using `--syslog` this sets the syslog facility (eg `KERN`, `USER`). +If using `--syslog` this sets the syslog facility (e.g. `KERN`, `USER`). See `man syslog` for a list of possible facilities. The default facility is `DAEMON`. @@ -1301,7 +1301,7 @@ For example to limit rclone to 10 HTTP transactions per second use 0.5`. Use this when the number of transactions per second from rclone is -causing a problem with the cloud storage provider (eg getting you +causing a problem with the cloud storage provider (e.g. getting you banned or rate limited). This can be very useful for `rclone mount` to control the behaviour of @@ -1400,7 +1400,7 @@ there were IO errors`. ### --fast-list ### -When doing anything which involves a directory listing (eg `sync`, +When doing anything which involves a directory listing (e.g. `sync`, `copy`, `ls` - in fact nearly every command), rclone normally lists a directory and processes it before using more directory lists to process any subdirectories. This can be parallelised and works very @@ -1408,7 +1408,7 @@ quickly using the least amount of memory. However, some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. These tend to -be the bucket based remotes (eg S3, B2, GCS, Swift, Hubic). +be the bucket based remotes (e.g. S3, B2, GCS, Swift, Hubic). If you use the `--fast-list` flag then rclone will use this method for listing directories. This will have the following consequences for @@ -1671,7 +1671,7 @@ Developer options These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented -here which are used for testing. These start with remote name eg +here which are used for testing. These start with remote name e.g. `--drive-test-option` - see the docs for the remote in question. ### --cpuprofile=FILE ### @@ -1781,7 +1781,7 @@ Logging rclone has 4 levels of logging, `ERROR`, `NOTICE`, `INFO` and `DEBUG`. By default, rclone logs to standard error. This means you can redirect -standard error and still see the normal output of rclone commands (eg +standard error and still see the normal output of rclone commands (e.g. `rclone ls`). By default, rclone will produce `Error` and `Notice` level messages. @@ -1802,7 +1802,7 @@ If you use the `--log-file=FILE` option, rclone will redirect `Error`, If you use the `--syslog` flag then rclone will log to syslog and the `--syslog-facility` control which facility it uses. 
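For example, a sketch of logging to syslog under the `USER` facility (the paths are placeholders):

```
# send rclone's log output to syslog instead of standard error
rclone sync -i /path/to/files remote:backup --syslog --syslog-facility USER
```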
-Rclone prefixes all log messages with their level in capitals, eg INFO +Rclone prefixes all log messages with their level in capitals, e.g. INFO which makes it easy to grep the log file for different kinds of information. @@ -1897,11 +1897,11 @@ you must create the `..._TYPE` variable as above. The various different methods of backend configuration are read in this order and the first one with a value is used. -- Flag values as supplied on the command line, eg `--drive-use-trash`. -- Remote specific environment vars, eg `RCLONE_CONFIG_MYREMOTE_USE_TRASH` (see above). -- Backend specific environment vars, eg `RCLONE_DRIVE_USE_TRASH`. -- Config file, eg `use_trash = false`. -- Default values, eg `true` - these can't be changed. +- Flag values as supplied on the command line, e.g. `--drive-use-trash`. +- Remote specific environment vars, e.g. `RCLONE_CONFIG_MYREMOTE_USE_TRASH` (see above). +- Backend specific environment vars, e.g. `RCLONE_DRIVE_USE_TRASH`. +- Config file, e.g. `use_trash = false`. +- Default values, e.g. `true` - these can't be changed. So if both `--drive-use-trash` is supplied on the config line and an environment variable `RCLONE_DRIVE_USE_TRASH` is set, the command line @@ -1909,9 +1909,9 @@ flag will take preference. For non backend configuration the order is as follows: -- Flag values as supplied on the command line, eg `--stats 5s`. -- Environment vars, eg `RCLONE_STATS=5s`. -- Default values, eg `1m` - these can't be changed. +- Flag values as supplied on the command line, e.g. `--stats 5s`. +- Environment vars, e.g. `RCLONE_STATS=5s`. +- Default values, e.g. `1m` - these can't be changed. ### Other environment variables ### diff --git a/docs/content/downloads.md b/docs/content/downloads.md index 43b42f38b..c69898b93 100644 --- a/docs/content/downloads.md +++ b/docs/content/downloads.md @@ -40,7 +40,7 @@ to master. Note these are named like {Version Tag}.beta.{Commit Number}.{Git Commit Hash} -eg +e.g. v1.53.0-beta.4677.b657a2204 @@ -54,7 +54,7 @@ Some beta releases may have a branch name also: {Version Tag}-beta.{Commit Number}.{Git Commit Hash}.{Branch Name} -eg +e.g. v1.53.0-beta.4677.b657a2204.semver diff --git a/docs/content/drive.md b/docs/content/drive.md index d369d0778..df3524a05 100644 --- a/docs/content/drive.md +++ b/docs/content/drive.md @@ -8,7 +8,7 @@ description: "Rclone docs for Google drive" Paths are specified as `drive:path` -Drive paths may be as deep as required, eg `drive:directory/subdirectory`. +Drive paths may be as deep as required, e.g. `drive:directory/subdirectory`. The initial setup for drive involves getting a token from Google drive which you need to do in your browser. `rclone config` walks you @@ -397,7 +397,7 @@ be in multiple folders at once](https://cloud.google.com/blog/products/g-suite/s Shortcuts are files that link to other files on Google Drive somewhat like a symlink in unix, except they point to the underlying file data -(eg the inode in unix terms) so they don't break if the source is +(e.g. the inode in unix terms) so they don't break if the source is renamed or moved about. Be default rclone treats these as follows. @@ -490,7 +490,7 @@ Here are some examples for allowed and prohibited conversions. This limitation can be disabled by specifying `--drive-allow-import-name-change`. When using this flag, rclone can convert multiple files types resulting -in the same document type at once, eg with `--drive-import-formats docx,odt,txt`, +in the same document type at once, e.g. 
with `--drive-import-formats docx,odt,txt`, all files having these extension would result in a document represented as a docx file. This brings the additional risk of overwriting a document, if multiple files have the same stem. Many rclone operations will not handle this name change @@ -956,7 +956,7 @@ Number of API calls to allow without sleeping. #### --drive-server-side-across-configs -Allow server-side operations (eg copy) to work across different drive configs. +Allow server-side operations (e.g. copy) to work across different drive configs. This can be useful if you wish to do a server-side copy between two different Google drives. Note that this isn't enabled by default @@ -1188,7 +1188,7 @@ and upload the files if you prefer. #### Limitations of Google Docs #### Google docs will appear as size -1 in `rclone ls` and as size 0 in -anything which uses the VFS layer, eg `rclone mount`, `rclone serve`. +anything which uses the VFS layer, e.g. `rclone mount`, `rclone serve`. This is because rclone can't find out the size of the Google docs without downloading them. diff --git a/docs/content/dropbox.md b/docs/content/dropbox.md index c7448035e..8b5f5e83d 100644 --- a/docs/content/dropbox.md +++ b/docs/content/dropbox.md @@ -8,7 +8,7 @@ description: "Rclone docs for Dropbox" Paths are specified as `remote:path` -Dropbox paths may be as deep as required, eg +Dropbox paths may be as deep as required, e.g. `remote:directory/subdirectory`. The initial setup for dropbox involves getting a token from Dropbox diff --git a/docs/content/faq.md b/docs/content/faq.md index 2af1b3020..83de3074e 100644 --- a/docs/content/faq.md +++ b/docs/content/faq.md @@ -8,7 +8,7 @@ Frequently Asked Questions ### Do all cloud storage systems support all rclone commands ### -Yes they do. All the rclone commands (eg `sync`, `copy` etc) will +Yes they do. All the rclone commands (e.g. `sync`, `copy` etc) will work on all the remote storage systems. ### Can I copy the config from one machine to another ### @@ -40,7 +40,7 @@ Eg ### Using rclone from multiple locations at the same time ### You can use rclone from multiple places at the same time if you choose -different subdirectory for the output, eg +different subdirectory for the output, e.g. ``` Server A> rclone sync -i /tmp/whatever remote:ServerA @@ -48,7 +48,7 @@ Server B> rclone sync -i /tmp/whatever remote:ServerB ``` If you sync to the same directory then you should use rclone copy -otherwise the two instances of rclone may delete each other's files, eg +otherwise the two instances of rclone may delete each other's files, e.g. ``` Server A> rclone copy /tmp/whatever remote:Backup @@ -56,14 +56,14 @@ Server B> rclone copy /tmp/whatever remote:Backup ``` The file names you upload from Server A and Server B should be -different in this case, otherwise some file systems (eg Drive) may +different in this case, otherwise some file systems (e.g. Drive) may make duplicates. ### Why doesn't rclone support partial transfers / binary diffs like rsync? ### Rclone stores each file you transfer as a native object on the remote cloud storage system. This means that you can see the files you -upload as expected using alternative access methods (eg using the +upload as expected using alternative access methods (e.g. using the Google Drive web interface). There is a 1:1 mapping between files on your hard disk and objects created in the cloud storage system. 
diff --git a/docs/content/fichier.md b/docs/content/fichier.md index 0051dce9c..fa789e504 100644 --- a/docs/content/fichier.md +++ b/docs/content/fichier.md @@ -12,7 +12,7 @@ the API. Paths are specified as `remote:path` -Paths may be as deep as required, eg `remote:directory/subdirectory`. +Paths may be as deep as required, e.g. `remote:directory/subdirectory`. The initial setup for 1Fichier involves getting the API key from the website which you need to do in your browser. diff --git a/docs/content/filtering.md b/docs/content/filtering.md index d21ccb6cf..2074d1a63 100644 --- a/docs/content/filtering.md +++ b/docs/content/filtering.md @@ -118,7 +118,7 @@ directories. Directory matches are **only** used to optimise directory access patterns - you must still match the files that you want to match. -Directory matches won't optimise anything on bucket based remotes (eg +Directory matches won't optimise anything on bucket based remotes (e.g. s3, swift, google compute storage, b2) which don't have a concept of directory. @@ -162,7 +162,7 @@ This would exclude A similar process is done on directory entries before recursing into them. This only works on remotes which have a concept of directory (Eg local, google drive, onedrive, amazon drive) and not on bucket -based remotes (eg s3, swift, google compute storage, b2). +based remotes (e.g. s3, swift, google compute storage, b2). ## Adding filtering rules ## @@ -233,7 +233,7 @@ backup and no others. This adds an implicit `--exclude *` at the very end of the filter list. This means you can mix `--include` and `--include-from` with the -other filters (eg `--exclude`) but you must include all the files you +other filters (e.g. `--exclude`) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use `--filter-from`. @@ -258,7 +258,7 @@ This is useful if you have a lot of rules. This adds an implicit `--exclude *` at the very end of the filter list. This means you can mix `--include` and `--include-from` with the -other filters (eg `--exclude`) but you must include all the files you +other filters (e.g. `--exclude`) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use `--filter-from`. @@ -352,7 +352,7 @@ want to back up regularly with these absolute paths: To copy these you'd find a common subdirectory - in this case `/home` and put the remaining files in `files-from.txt` with or without -leading `/`, eg +leading `/`, e.g. user1/important user1/dir/file @@ -430,7 +430,7 @@ transferred. This can also be an absolute time in one of these formats -- RFC3339 - eg "2006-01-02T15:04:05Z07:00" +- RFC3339 - e.g. "2006-01-02T15:04:05Z07:00" - ISO8601 Date and time, local timezone - "2006-01-02T15:04:05" - ISO8601 Date and time, local timezone - "2006-01-02 15:04:05" - ISO8601 Date - "2006-01-02" (YYYY-MM-DD) @@ -481,7 +481,7 @@ Normally a `--include "file.txt"` will not match a file called ## Quoting shell metacharacters ## The examples above may not work verbatim in your shell as they have -shell metacharacters in them (eg `*`), and may require quoting. +shell metacharacters in them (e.g. `*`), and may require quoting. Eg linux, OSX diff --git a/docs/content/flags.md b/docs/content/flags.md index d89d381ef..aab6eadf5 100755 --- a/docs/content/flags.md +++ b/docs/content/flags.md @@ -90,7 +90,7 @@ These flags are available for every command. --no-traverse Don't traverse destination file system on copy. 
--no-unicode-normalization Don't normalize unicode characters in filenames. --no-update-modtime Don't update destination mod-time if files identical. - --order-by string Instructions on how to order the transfers, eg 'size,descending' + --order-by string Instructions on how to order the transfers, e.g. 'size,descending' --password-command SpaceSepList Command for supplying password for encrypted configuration. -P, --progress Show progress during transfer. -q, --quiet Print as little stuff as possible @@ -135,7 +135,7 @@ These flags are available for every command. --suffix string Suffix to add to changed files. --suffix-keep-extension Preserve the extension when using --suffix. --syslog Use Syslog for logging - --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") + --syslog-facility string Facility for syslog, e.g. KERN,USER,... (default "DAEMON") --timeout duration IO idle timeout (default 5m0s) --tpslimit float Limit HTTP transactions per second to this. --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) @@ -239,7 +239,7 @@ and may be set in the config file. --crypt-password string Password or pass phrase for encryption. (obscured) --crypt-password2 string Password or pass phrase for salt. Optional but recommended. (obscured) --crypt-remote string Remote to encrypt/decrypt. - --crypt-server-side-across-configs Allow server-side operations (eg copy) to work across different crypt configs. + --crypt-server-side-across-configs Allow server-side operations (e.g. copy) to work across different crypt configs. --crypt-show-mapping For all files listed show how the names encrypt. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. @@ -260,7 +260,7 @@ and may be set in the config file. --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms) --drive-root-folder-id string ID of the root folder --drive-scope string Scope that rclone should use when requesting access from drive. - --drive-server-side-across-configs Allow server-side operations (eg copy) to work across different drive configs. + --drive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different drive configs. --drive-service-account-credentials string Service Account Credentials JSON blob --drive-service-account-file string Service Account Credentials JSON file path --drive-shared-with-me Only show files that are shared with me. @@ -377,7 +377,7 @@ and may be set in the config file. --onedrive-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot) --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-no-versions Remove all versions on modifying operations - --onedrive-server-side-across-configs Allow server-side operations (eg copy) to work across different onedrive configs. + --onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs. --onedrive-token string OAuth Access Token as a JSON blob. --onedrive-token-url string Token server url. --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size. 
(default 10M) @@ -511,7 +511,7 @@ and may be set in the config file. --union-create-policy string Policy to choose upstream on CREATE category. (default "epmfs") --union-search-policy string Policy to choose upstream on SEARCH category. (default "ff") --union-upstreams string List of space separated upstreams. - --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) + --webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon) --webdav-bearer-token-command string Command to run to get a bearer token --webdav-pass string Password. (obscured) --webdav-url string URL of http host to connect to diff --git a/docs/content/googlecloudstorage.md b/docs/content/googlecloudstorage.md index ae60d8527..2db32ef15 100644 --- a/docs/content/googlecloudstorage.md +++ b/docs/content/googlecloudstorage.md @@ -7,7 +7,7 @@ description: "Rclone docs for Google Cloud Storage" ------------------------------------------------- Paths are specified as `remote:bucket` (or `remote:` for the `lsd` -command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`. +command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`. The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. `rclone config` walks you diff --git a/docs/content/http.md b/docs/content/http.md index 3ecb26e9a..823474b76 100644 --- a/docs/content/http.md +++ b/docs/content/http.md @@ -133,7 +133,7 @@ The input format is comma separated list of key,value pairs. Standard For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'. -You can set multiple headers, eg '"Cookie","name=value","Authorization","xxx"'. +You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'. - Config: headers diff --git a/docs/content/hubic.md b/docs/content/hubic.md index 989bdcf7a..64e93867b 100644 --- a/docs/content/hubic.md +++ b/docs/content/hubic.md @@ -9,7 +9,7 @@ description: "Rclone docs for Hubic" Paths are specified as `remote:path` Paths are specified as `remote:container` (or `remote:` for the `lsd` -command.) You may put subdirectories in too, eg `remote:container/path/to/dir`. +command.) You may put subdirectories in too, e.g. `remote:container/path/to/dir`. The initial setup for Hubic involves getting a token from Hubic which you need to do in your browser. `rclone config` walks you through it. @@ -179,7 +179,7 @@ default for this is 5GB which is its maximum value. Don't chunk files during streaming upload. -When doing streaming uploads (eg using rcat or mount) setting this +When doing streaming uploads (e.g. using rcat or mount) setting this flag will cause the swift backend to not upload chunked files. This will limit the maximum upload size to 5GB. However non chunked diff --git a/docs/content/install.md b/docs/content/install.md index 2e8e83c74..3afc065b7 100644 --- a/docs/content/install.md +++ b/docs/content/install.md @@ -108,7 +108,7 @@ on a minimal Alpine linux image. The `:latest` tag will always point to the latest stable release. You can use the `:beta` tag to get the latest build from master. You can -also use version tags, eg `:1.49.1`, `:1.49` or `:1`. +also use version tags, e.g. `:1.49.1`, `:1.49` or `:1`. 
``` $ docker pull rclone/rclone:latest diff --git a/docs/content/jottacloud.md b/docs/content/jottacloud.md index 361b7ce69..fb0eb41a5 100644 --- a/docs/content/jottacloud.md +++ b/docs/content/jottacloud.md @@ -13,7 +13,7 @@ also several whitelabel versions which should work with this backend. Paths are specified as `remote:path` -Paths may be as deep as required, eg `remote:directory/subdirectory`. +Paths may be as deep as required, e.g. `remote:directory/subdirectory`. ## Setup diff --git a/docs/content/koofr.md b/docs/content/koofr.md index 4096a5918..a5f0881fd 100644 --- a/docs/content/koofr.md +++ b/docs/content/koofr.md @@ -8,7 +8,7 @@ description: "Rclone docs for Koofr" Paths are specified as `remote:path` -Paths may be as deep as required, eg `remote:directory/subdirectory`. +Paths may be as deep as required, e.g. `remote:directory/subdirectory`. The initial setup for Koofr involves creating an application password for rclone. You can do that by opening the Koofr diff --git a/docs/content/local.md b/docs/content/local.md index 60060ecef..780306f32 100644 --- a/docs/content/local.md +++ b/docs/content/local.md @@ -6,7 +6,7 @@ description: "Rclone docs for the local filesystem" {{< icon "fas fa-hdd" >}} Local Filesystem ------------------------------------------- -Local paths are specified as normal filesystem paths, eg `/path/to/wherever`, so +Local paths are specified as normal filesystem paths, e.g. `/path/to/wherever`, so rclone sync -i /home/source /tmp/destination @@ -28,14 +28,14 @@ for Windows and OS X. There is a bit more uncertainty in the Linux world, but new distributions will have UTF-8 encoded files names. If you are using an -old Linux filesystem with non UTF-8 file names (eg latin1) then you +old Linux filesystem with non UTF-8 file names (e.g. latin1) then you can use the `convmv` tool to convert the filesystem to UTF-8. This tool is available in most distributions' package managers. If an invalid (non-UTF8) filename is read, the invalid characters will be replaced with a quoted representation of the invalid bytes. The name `gro\xdf` will be transferred as `gro‛DF`. `rclone` will emit a debug -message in this case (use `-v` to see), eg +message in this case (use `-v` to see), e.g. ``` Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf" @@ -295,7 +295,7 @@ treats a bind mount to the same device as being on the same filesystem. **NB** This flag is only available on Unix based systems. On systems -where it isn't supported (eg Windows) it will be ignored. +where it isn't supported (e.g. Windows) it will be ignored. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/local/local.go then run make backenddocs" >}} ### Standard Options @@ -368,13 +368,13 @@ Normally rclone checks the size and modification time of files as they are being uploaded and aborts with a message which starts "can't copy - source file is being updated" if the file changes during upload. -However on some file systems this modification time check may fail (eg +However on some file systems this modification time check may fail (e.g. [Glusterfs #2206](https://github.com/rclone/rclone/issues/2206)) so this check can be disabled with this flag. If this flag is set, rclone will use its best efforts to transfer a file which is being updated. If the file is only having things -appended to it (eg a log) then rclone will transfer the log file with +appended to it (e.g. 
a log) then rclone will transfer the log file with the size it had the first time rclone saw it. If the file is being modified throughout (not just appended to) then diff --git a/docs/content/mailru.md b/docs/content/mailru.md index 090332f70..3e5f43a64 100644 --- a/docs/content/mailru.md +++ b/docs/content/mailru.md @@ -12,7 +12,7 @@ Currently it is recommended to disable 2FA on Mail.ru accounts intended for rclo ### Features highlights ### -- Paths may be as deep as required, eg `remote:directory/subdirectory` +- Paths may be as deep as required, e.g. `remote:directory/subdirectory` - Files have a `last modified time` property, directories don't - Deleted files are by default moved to the trash - Files and directories can be shared via public links diff --git a/docs/content/mega.md b/docs/content/mega.md index 9cee961ba..1c7308ceb 100644 --- a/docs/content/mega.md +++ b/docs/content/mega.md @@ -17,7 +17,7 @@ features of Mega using the same client side encryption. Paths are specified as `remote:path` -Paths may be as deep as required, eg `remote:directory/subdirectory`. +Paths may be as deep as required, e.g. `remote:directory/subdirectory`. Here is an example of how to make a remote called `remote`. First run: diff --git a/docs/content/memory.md b/docs/content/memory.md index a37fa8b70..4d8cb6611 100644 --- a/docs/content/memory.md +++ b/docs/content/memory.md @@ -9,7 +9,7 @@ description: "Rclone docs for Memory backend" The memory backend is an in RAM backend. It does not persist its data - use the local backend for that. -The memory backend behaves like a bucket based remote (eg like +The memory backend behaves like a bucket based remote (e.g. like s3). Because it has no parameters you can just use it with the `:memory:` remote name. @@ -46,7 +46,7 @@ y/e/d> y ``` Because the memory backend isn't persistent it is most useful for -testing or with an rclone server or rclone mount, eg +testing or with an rclone server or rclone mount, e.g. rclone mount :memory: /mnt/tmp rclone serve webdav :memory: diff --git a/docs/content/onedrive.md b/docs/content/onedrive.md index 4dbeb59ef..07cd98b3d 100644 --- a/docs/content/onedrive.md +++ b/docs/content/onedrive.md @@ -8,7 +8,7 @@ description: "Rclone docs for Microsoft OneDrive" Paths are specified as `remote:path` -Paths may be as deep as required, eg `remote:directory/subdirectory`. +Paths may be as deep as required, e.g. `remote:directory/subdirectory`. The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. `rclone config` walks @@ -298,7 +298,7 @@ listing, set this option. #### --onedrive-server-side-across-configs -Allow server-side operations (eg copy) to work across different onedrive configs. +Allow server-side operations (e.g. copy) to work across different onedrive configs. This can be useful if you wish to do a server-side copy between two different Onedrives. Note that this isn't enabled by default diff --git a/docs/content/opendrive.md b/docs/content/opendrive.md index 2c5c5d12e..1e0a406b5 100644 --- a/docs/content/opendrive.md +++ b/docs/content/opendrive.md @@ -8,7 +8,7 @@ description: "Rclone docs for OpenDrive" Paths are specified as `remote:path` -Paths may be as deep as required, eg `remote:directory/subdirectory`. +Paths may be as deep as required, e.g. `remote:directory/subdirectory`. Here is an example of how to make a remote called `remote`. 
First run: diff --git a/docs/content/overview.md b/docs/content/overview.md index 84e670185..19e9bd8bb 100644 --- a/docs/content/overview.md +++ b/docs/content/overview.md @@ -90,7 +90,7 @@ these will be set when transferring from the cloud storage system. ### Case Insensitive ### If a cloud storage systems is case sensitive then it is possible to -have two files which differ only in case, eg `file.txt` and +have two files which differ only in case, e.g. `file.txt` and `FILE.txt`. If a cloud storage system is case insensitive then that isn't possible. @@ -103,7 +103,7 @@ depending on OS. * Windows - usually case insensitive, though case is preserved * OSX - usually case insensitive, though it is possible to format case sensitive - * Linux - usually case sensitive, but there are case insensitive file systems (eg FAT formatted USB keys) + * Linux - usually case sensitive, but there are case insensitive file systems (e.g. FAT formatted USB keys) Most of the time this doesn't cause any problems as people tend to avoid files whose name differs only by case even on case sensitive @@ -241,7 +241,7 @@ disable the encoding completely with `--backend-encoding None` or set Encoding takes a comma separated list of encodings. You can see the list of all available characters by passing an invalid value to this -flag, eg `--local-encoding "help"` and `rclone help flags encoding` +flag, e.g. `--local-encoding "help"` and `rclone help flags encoding` will show you the defaults for the backends. | Encoding | Characters | @@ -257,7 +257,7 @@ will show you the defaults for the backends. | Dot | `.` | | DoubleQuote | `"` | | Hash | `#` | -| InvalidUtf8 | An invalid UTF-8 character (eg latin1) | +| InvalidUtf8 | An invalid UTF-8 character (e.g. latin1) | | LeftCrLfHtVt | CR 0x0D, LF 0x0A,HT 0x09, VT 0x0B on the left of a string | | LeftPeriod | `.` on the left of a string | | LeftSpace | SPACE on the left of a string | @@ -302,7 +302,7 @@ This can be specified using the `--local-encoding` flag or using an ### MIME Type ### MIME types (also known as media types) classify types of documents -using a simple text classification, eg `text/html` or +using a simple text classification, e.g. `text/html` or `application/pdf`. Some cloud storage systems support reading (`R`) the MIME type of diff --git a/docs/content/pcloud.md b/docs/content/pcloud.md index d4160439e..682e0f3ad 100644 --- a/docs/content/pcloud.md +++ b/docs/content/pcloud.md @@ -8,7 +8,7 @@ description: "Rclone docs for pCloud" Paths are specified as `remote:path` -Paths may be as deep as required, eg `remote:directory/subdirectory`. +Paths may be as deep as required, e.g. `remote:directory/subdirectory`. The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. `rclone config` walks you through it. diff --git a/docs/content/premiumizeme.md b/docs/content/premiumizeme.md index b58f06049..e0db7c9d2 100644 --- a/docs/content/premiumizeme.md +++ b/docs/content/premiumizeme.md @@ -8,7 +8,7 @@ description: "Rclone docs for premiumize.me" Paths are specified as `remote:path` -Paths may be as deep as required, eg `remote:directory/subdirectory`. +Paths may be as deep as required, e.g. `remote:directory/subdirectory`. The initial setup for [premiumize.me](https://premiumize.me/) involves getting a token from premiumize.me which you need to do in your browser. `rclone config` walks you through it. 
diff --git a/docs/content/putio.md b/docs/content/putio.md index 78f7fc68b..cad2f7f71 100644 --- a/docs/content/putio.md +++ b/docs/content/putio.md @@ -8,7 +8,7 @@ description: "Rclone docs for put.io" Paths are specified as `remote:path` -put.io paths may be as deep as required, eg +put.io paths may be as deep as required, e.g. `remote:directory/subdirectory`. The initial setup for put.io involves getting a token from put.io diff --git a/docs/content/qingstor.md b/docs/content/qingstor.md index e0e6e1eef..03df7d534 100644 --- a/docs/content/qingstor.md +++ b/docs/content/qingstor.md @@ -7,7 +7,7 @@ description: "Rclone docs for QingStor Object Storage" --------------------------------------- Paths are specified as `remote:bucket` (or `remote:` for the `lsd` -command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`. +command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`. Here is an example of making an QingStor configuration. First run diff --git a/docs/content/rc.md b/docs/content/rc.md index 490ebf307..69b262184 100644 --- a/docs/content/rc.md +++ b/docs/content/rc.md @@ -218,7 +218,7 @@ background. The `job/status` call can be used to get information of the background job. The job can be queried for up to 1 minute after it has finished. -It is recommended that potentially long running jobs, eg `sync/sync`, +It is recommended that potentially long running jobs, e.g. `sync/sync`, `sync/copy`, `sync/move`, `operations/purge` are run with the `_async` flag to avoid any potential problems with the HTTP request and response timing out. @@ -298,7 +298,7 @@ $ rclone rc --json '{ "group": "job/1" }' core/stats This takes the following parameters - command - a string with the command name -- fs - a remote name string eg "drive:" +- fs - a remote name string e.g. "drive:" - arg - a list of arguments for the backend command - opt - a map of string to string of options @@ -371,7 +371,7 @@ Some valid examples are: "0:10" -> the first ten chunks Any parameter with a key that starts with "file" can be used to -specify files to fetch, eg +specify files to fetch, e.g. rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye @@ -695,7 +695,7 @@ Returns the following values: This shows the current version of go and the go runtime -- version - rclone version, eg "v1.53.0" +- version - rclone version, e.g. "v1.53.0" - decomposed - version number as [major, minor, patch] - isGit - boolean - true if this was compiled from the git version - isBeta - boolean - true if this is a beta version @@ -759,11 +759,11 @@ Results - finished - boolean - duration - time in seconds that the job ran for -- endTime - time the job finished (eg "2018-10-26T18:50:20.528746884+01:00") +- endTime - time the job finished (e.g. "2018-10-26T18:50:20.528746884+01:00") - error - error from the job or empty string for no error - finished - boolean whether the job has finished or not - id - as passed in above -- startTime - time the job started (eg "2018-10-26T18:50:20.528336039+01:00") +- startTime - time the job started (e.g. "2018-10-26T18:50:20.528336039+01:00") - success - boolean - true for success false otherwise - output - output of the job as would have been returned if called synchronously - progress - output of the progress related to the underlying job @@ -865,7 +865,7 @@ Eg This takes the following parameters -- fs - a remote name string eg "drive:" +- fs - a remote name string e.g. 
"drive:" The result is as returned from rclone about --json @@ -877,7 +877,7 @@ See the [about command](/commands/rclone_size/) command for more information on This takes the following parameters -- fs - a remote name string eg "drive:" +- fs - a remote name string e.g. "drive:" See the [cleanup command](/commands/rclone_cleanup/) command for more information on the above. @@ -887,10 +887,10 @@ See the [cleanup command](/commands/rclone_cleanup/) command for more informatio This takes the following parameters -- srcFs - a remote name string eg "drive:" for the source -- srcRemote - a path within that remote eg "file.txt" for the source -- dstFs - a remote name string eg "drive2:" for the destination -- dstRemote - a path within that remote eg "file2.txt" for the destination +- srcFs - a remote name string e.g. "drive:" for the source +- srcRemote - a path within that remote e.g. "file.txt" for the source +- dstFs - a remote name string e.g. "drive2:" for the destination +- dstRemote - a path within that remote e.g. "file2.txt" for the destination **Authentication is required for this call.** @@ -898,8 +898,8 @@ This takes the following parameters This takes the following parameters -- fs - a remote name string eg "drive:" -- remote - a path within that remote eg "dir" +- fs - a remote name string e.g. "drive:" +- remote - a path within that remote e.g. "dir" - url - string, URL to read from - autoFilename - boolean, set to true to retrieve destination file name from url See the [copyurl command](/commands/rclone_copyurl/) command for more information on the above. @@ -910,7 +910,7 @@ See the [copyurl command](/commands/rclone_copyurl/) command for more informatio This takes the following parameters -- fs - a remote name string eg "drive:" +- fs - a remote name string e.g. "drive:" See the [delete command](/commands/rclone_delete/) command for more information on the above. @@ -920,8 +920,8 @@ See the [delete command](/commands/rclone_delete/) command for more information This takes the following parameters -- fs - a remote name string eg "drive:" -- remote - a path within that remote eg "dir" +- fs - a remote name string e.g. "drive:" +- remote - a path within that remote e.g. "dir" See the [deletefile command](/commands/rclone_deletefile/) command for more information on the above. @@ -931,7 +931,7 @@ See the [deletefile command](/commands/rclone_deletefile/) command for more info This takes the following parameters -- fs - a remote name string eg "drive:" +- fs - a remote name string e.g. "drive:" This returns info about the remote passed in; @@ -988,8 +988,8 @@ This command does not have a command line equivalent so use this instead: This takes the following parameters -- fs - a remote name string eg "drive:" -- remote - a path within that remote eg "dir" +- fs - a remote name string e.g. "drive:" +- remote - a path within that remote e.g. "dir" - opt - a dictionary of options to control the listing (optional) - recurse - If set recurse directories - noModTime - If set return modification time @@ -1010,8 +1010,8 @@ See the [lsjson command](/commands/rclone_lsjson/) for more information on the a This takes the following parameters -- fs - a remote name string eg "drive:" -- remote - a path within that remote eg "dir" +- fs - a remote name string e.g. "drive:" +- remote - a path within that remote e.g. "dir" See the [mkdir command](/commands/rclone_mkdir/) command for more information on the above. 
@@ -1021,10 +1021,10 @@ See the [mkdir command](/commands/rclone_mkdir/) command for more information on This takes the following parameters -- srcFs - a remote name string eg "drive:" for the source -- srcRemote - a path within that remote eg "file.txt" for the source -- dstFs - a remote name string eg "drive2:" for the destination -- dstRemote - a path within that remote eg "file2.txt" for the destination +- srcFs - a remote name string e.g. "drive:" for the source +- srcRemote - a path within that remote e.g. "file.txt" for the source +- dstFs - a remote name string e.g. "drive2:" for the destination +- dstRemote - a path within that remote e.g. "file2.txt" for the destination **Authentication is required for this call.** @@ -1032,10 +1032,10 @@ This takes the following parameters This takes the following parameters -- fs - a remote name string eg "drive:" -- remote - a path within that remote eg "dir" +- fs - a remote name string e.g. "drive:" +- remote - a path within that remote e.g. "dir" - unlink - boolean - if set removes the link rather than adding it (optional) -- expire - string - the expiry time of the link eg "1d" (optional) +- expire - string - the expiry time of the link e.g. "1d" (optional) Returns @@ -1049,8 +1049,8 @@ See the [link command](/commands/rclone_link/) command for more information on t This takes the following parameters -- fs - a remote name string eg "drive:" -- remote - a path within that remote eg "dir" +- fs - a remote name string e.g. "drive:" +- remote - a path within that remote e.g. "dir" See the [purge command](/commands/rclone_purge/) command for more information on the above. @@ -1060,8 +1060,8 @@ See the [purge command](/commands/rclone_purge/) command for more information on This takes the following parameters -- fs - a remote name string eg "drive:" -- remote - a path within that remote eg "dir" +- fs - a remote name string e.g. "drive:" +- remote - a path within that remote e.g. "dir" See the [rmdir command](/commands/rclone_rmdir/) command for more information on the above. @@ -1071,8 +1071,8 @@ See the [rmdir command](/commands/rclone_rmdir/) command for more information on This takes the following parameters -- fs - a remote name string eg "drive:" -- remote - a path within that remote eg "dir" +- fs - a remote name string e.g. "drive:" +- remote - a path within that remote e.g. "dir" - leaveRoot - boolean, set to true not to delete the root See the [rmdirs command](/commands/rclone_rmdirs/) command for more information on the above. @@ -1083,7 +1083,7 @@ See the [rmdirs command](/commands/rclone_rmdirs/) command for more information This takes the following parameters -- fs - a remote name string eg "drive:path/to/dir" +- fs - a remote name string e.g. "drive:path/to/dir" Returns @@ -1098,8 +1098,8 @@ See the [size command](/commands/rclone_size/) command for more information on t This takes the following parameters -- fs - a remote name string eg "drive:" -- remote - a path within that remote eg "dir" +- fs - a remote name string e.g. "drive:" +- remote - a path within that remote e.g. "dir" - each part in body represents a file to be uploaded See the [uploadfile command](/commands/rclone_uploadfile/) command for more information on the above. 
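To tie a couple of the calls above together, a hedged sketch of invoking them with `rclone rc` against an rclone instance that is already listening for remote control commands (e.g. started with `rclone rcd`); the remote name and paths are placeholders and, depending on your setup, you may also need `--rc-user`/`--rc-pass`:

```
# total object count and size under a path
rclone rc operations/size fs=drive:path/to/dir

# remove an empty directory
rclone rc operations/rmdir fs=drive: remote=dir
```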
@@ -1165,8 +1165,8 @@ This shows all possible plugins by a mime type This takes the following parameters -- type: supported mime type by a loaded plugin eg (video/mp4, audio/mp3) -- pluginType: filter plugins based on their type eg (DASHBOARD, FILE_HANDLER, TERMINAL) +- type: supported mime type by a loaded plugin e.g. (video/mp4, audio/mp3) +- pluginType: filter plugins based on their type e.g. (DASHBOARD, FILE_HANDLER, TERMINAL) and returns @@ -1264,8 +1264,8 @@ check that parameter passing is working properly. This takes the following parameters -- srcFs - a remote name string eg "drive:src" for the source -- dstFs - a remote name string eg "drive:dst" for the destination +- srcFs - a remote name string e.g. "drive:src" for the source +- dstFs - a remote name string e.g. "drive:dst" for the destination See the [copy command](/commands/rclone_copy/) command for more information on the above. @@ -1276,8 +1276,8 @@ See the [copy command](/commands/rclone_copy/) command for more information on t This takes the following parameters -- srcFs - a remote name string eg "drive:src" for the source -- dstFs - a remote name string eg "drive:dst" for the destination +- srcFs - a remote name string e.g. "drive:src" for the source +- dstFs - a remote name string e.g. "drive:dst" for the destination - deleteEmptySrcDirs - delete empty src directories if set @@ -1289,8 +1289,8 @@ See the [move command](/commands/rclone_move/) command for more information on t This takes the following parameters -- srcFs - a remote name string eg "drive:src" for the source -- dstFs - a remote name string eg "drive:dst" for the destination +- srcFs - a remote name string e.g. "drive:src" for the source +- dstFs - a remote name string e.g. "drive:dst" for the destination See the [sync command](/commands/rclone_sync/) command for more information on the above. @@ -1309,7 +1309,7 @@ directory cache. Otherwise pass files or dirs in as file=path or dir=path. Any parameter key starting with file will forget that file and any -starting with dir will forget that dir, eg +starting with dir will forget that dir, e.g. rclone rc vfs/forget file=hello file2=goodbye dir=home/junk @@ -1363,7 +1363,7 @@ If no paths are passed in then it will refresh the root directory. rclone rc vfs/refresh Otherwise pass directories in as dir=path. Any parameter key -starting with dir will refresh that directory, eg +starting with dir will refresh that directory, e.g. rclone rc vfs/refresh dir=home/junk dir2=data/misc @@ -1396,9 +1396,9 @@ formatted to be reasonably human readable. ### Error returns -If an error occurs then there will be an HTTP error status (eg 500) +If an error occurs then there will be an HTTP error status (e.g. 500) and the body of the response will contain a JSON encoded error object, -eg +e.g. ``` { diff --git a/docs/content/remote_setup.md b/docs/content/remote_setup.md index d26981857..3552ebe23 100644 --- a/docs/content/remote_setup.md +++ b/docs/content/remote_setup.md @@ -9,7 +9,7 @@ Some of the configurations (those involving oauth2) require an Internet connected web browser. If you are trying to set rclone up on a remote or headless box with no -browser available on it (eg a NAS or a server in a datacenter) then +browser available on it (e.g. a NAS or a server in a datacenter) then you will need to use an alternative means of configuration. There are two ways of doing it, described below. 
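One of those approaches, sketched here under the assumption that you have access to another machine with a working browser: configure the remote there, then copy the resulting config file to the headless box (the paths and hostname are placeholders):

```
# on the machine with a browser
rclone config        # set up the remote interactively
rclone config file   # shows where rclone.conf lives

# copy the config to the headless box
scp ~/.config/rclone/rclone.conf user@headless-box:~/.config/rclone/
```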
diff --git a/docs/content/s3.md b/docs/content/s3.md index cba5ada88..4e3e879a9 100644 --- a/docs/content/s3.md +++ b/docs/content/s3.md @@ -23,7 +23,7 @@ The S3 backend can be used with a number of different providers: {{< /provider_list >}} Paths are specified as `remote:bucket` (or `remote:` for the `lsd` -command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`. +command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`. Once you have made a remote (see the provider specific section above) you can use it like this: @@ -366,7 +366,7 @@ The different authentication methods are tried in this order: - Session Token: `AWS_SESSION_TOKEN` (optional) - Or, use a [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-multiple-profiles.html): - Profile files are standard files used by AWS CLI tools - - By default it will use the profile in your home directory (eg `~/.aws/credentials` on unix based systems) file and the "default" profile, to change set these environment variables: + - By default it will use the profile in your home directory (e.g. `~/.aws/credentials` on unix based systems) file and the "default" profile, to change set these environment variables: - `AWS_SHARED_CREDENTIALS_FILE` to control which file. - `AWS_PROFILE` to control which profile to use. - Or, run `rclone` in an ECS task with an IAM role (AWS only). @@ -615,7 +615,7 @@ Leave blank if you are using an S3 clone and you don't have a region. - "" - Use this if unsure. Will use v4 signatures and an empty region. - "other-v2-signature" - - Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH. + - Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH. #### --s3-endpoint @@ -1206,7 +1206,7 @@ The minimum is 0 and the maximum is 5GB. Chunk size to use for uploading. When uploading files larger than upload_cutoff or files with unknown -size (eg from "rclone rcat" or uploaded with "rclone mount" or google +size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google photos or google docs) they will be uploaded as multipart uploads using this chunk size. @@ -1346,7 +1346,7 @@ if false then rclone will use virtual path style. See [the AWS S3 docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro) for more info. -Some providers (eg AWS, Aliyun OSS or Netease COS) require this set to +Some providers (e.g. AWS, Aliyun OSS or Netease COS) require this set to false - rclone will do this automatically based on the provider setting. @@ -1362,7 +1362,7 @@ If true use v2 authentication. If this is false (the default) then rclone will use v4 authentication. If it is set then rclone will use v2 authentication. -Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH. +Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH. - Config: v2_auth - Env Var: RCLONE_S3_V2_AUTH @@ -1599,7 +1599,7 @@ server_side_encryption = storage_class = ``` -Then use it as normal with the name of the public bucket, eg +Then use it as normal with the name of the public bucket, e.g. rclone lsd anons3:1000genomes @@ -1631,7 +1631,7 @@ server_side_encryption = storage_class = ``` -If you are using an older version of CEPH, eg 10.2.x Jewel, then you +If you are using an older version of CEPH, e.g. 10.2.x Jewel, then you may need to supply the parameter `--s3-upload-cutoff 0` or put this in the config file as `upload_cutoff 0` to work around a bug which causes uploading of small files to fail. 
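A sketch of that workaround on the command line, with the remote name and bucket as placeholders (the equivalent config file entry is `upload_cutoff = 0` as noted above):

```
# work around the small-file upload bug on old (pre-Jewel) CEPH releases
rclone copy --s3-upload-cutoff 0 /path/to/files ceph-remote:bucket
```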
diff --git a/docs/content/seafile.md b/docs/content/seafile.md index 46fa854c7..9fc27b8d9 100644 --- a/docs/content/seafile.md +++ b/docs/content/seafile.md @@ -16,7 +16,7 @@ This is a backend for the [Seafile](https://www.seafile.com/) storage service: There are two distinct modes you can setup your remote: - you point your remote to the **root of the server**, meaning you don't specify a library during the configuration: -Paths are specified as `remote:library`. You may put subdirectories in too, eg `remote:library/path/to/dir`. +Paths are specified as `remote:library`. You may put subdirectories in too, e.g. `remote:library/path/to/dir`. - you point your remote to a specific library during the configuration: Paths are specified as `remote:path/to/dir`. **This is the recommended mode when using encrypted libraries**. (_This mode is possibly slightly faster than the root mode_) diff --git a/docs/content/sftp.md b/docs/content/sftp.md index aef29698e..f3dab1dc1 100644 --- a/docs/content/sftp.md +++ b/docs/content/sftp.md @@ -203,7 +203,7 @@ advanced option. Note that there seem to be various problems with using an ssh-agent on macOS due to recent changes in the OS. The most effective work-around -seems to be to start an ssh-agent in each session, eg +seems to be to start an ssh-agent in each session, e.g. eval `ssh-agent -s` && ssh-add -A @@ -498,7 +498,7 @@ the disk of the root on the remote. `about` will fail if it does not have shell access or if `df` is not in the remote's PATH. -Note that some SFTP servers (eg Synology) the paths are different for +Note that some SFTP servers (e.g. Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them using `disable_hashcheck` is a good idea. diff --git a/docs/content/sharefile.md b/docs/content/sharefile.md index 1ec346122..a24ee5525 100644 --- a/docs/content/sharefile.md +++ b/docs/content/sharefile.md @@ -99,7 +99,7 @@ To copy a local directory to an ShareFile directory called backup rclone copy /home/source remote:backup -Paths may be as deep as required, eg `remote:directory/subdirectory`. +Paths may be as deep as required, e.g. `remote:directory/subdirectory`. ### Modified time and hashes ### diff --git a/docs/content/sugarsync.md b/docs/content/sugarsync.md index c66bf471b..82a629639 100644 --- a/docs/content/sugarsync.md +++ b/docs/content/sugarsync.md @@ -90,7 +90,7 @@ To copy a local directory to an SugarSync folder called backup Paths are specified as `remote:path` -Paths may be as deep as required, eg `remote:directory/subdirectory`. +Paths may be as deep as required, e.g. `remote:directory/subdirectory`. **NB** you can't create files in the top level folder you have to create a folder, which rclone will create as a "Sync Folder" with diff --git a/docs/content/swift.md b/docs/content/swift.md index 491b78a68..8544e0741 100644 --- a/docs/content/swift.md +++ b/docs/content/swift.md @@ -16,7 +16,7 @@ Commercial implementations of that being: * [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html) Paths are specified as `remote:container` (or `remote:` for the `lsd` -command.) You may put subdirectories in too, eg `remote:container/path/to/dir`. +command.) You may put subdirectories in too, e.g. `remote:container/path/to/dir`. Here is an example of making a swift configuration. First run @@ -446,7 +446,7 @@ default for this is 5GB which is its maximum value. Don't chunk files during streaming upload. 
-When doing streaming uploads (eg using rcat or mount) setting this +When doing streaming uploads (e.g. using rcat or mount) setting this flag will cause the swift backend to not upload chunked files. This will limit the maximum upload size to 5GB. However non chunked @@ -510,7 +510,7 @@ So this most likely means your username / password is wrong. You can investigate further with the `--dump-bodies` flag. This may also be caused by specifying the region when you shouldn't -have (eg OVH). +have (e.g. OVH). #### Rclone gives Failed to create file system: Response didn't have storage url and auth token #### diff --git a/docs/content/tardigrade.md b/docs/content/tardigrade.md index 6af32d2ac..464f9402d 100644 --- a/docs/content/tardigrade.md +++ b/docs/content/tardigrade.md @@ -126,7 +126,7 @@ y/e/d> y ## Usage Paths are specified as `remote:bucket` (or `remote:` for the `lsf` -command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`. +command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`. Once configured you can then use `rclone` like this. diff --git a/docs/content/union.md b/docs/content/union.md index 06cb39fc8..4f321b6c5 100644 --- a/docs/content/union.md +++ b/docs/content/union.md @@ -9,13 +9,13 @@ description: "Remote Unification" The `union` remote provides a unification similar to UnionFS using other remotes. Paths may be as deep as required or a local path, -eg `remote:directory/subdirectory` or `/directory/subdirectory`. +e.g. `remote:directory/subdirectory` or `/directory/subdirectory`. During the initial setup with `rclone config` you will specify the upstream remotes as a space separated list. The upstream remotes can either be a local paths or other remotes. Attribute `:ro` and `:nc` can be attach to the end of path to tag the remote as **read only** or **no create**, -eg `remote:directory/subdirectory:ro` or `remote:directory/subdirectory:nc`. +e.g. `remote:directory/subdirectory:ro` or `remote:directory/subdirectory:nc`. Subfolders can be used in upstream remotes. Assume a union remote named `backup` with the remotes `mydrive:private/backup`. Invoking `rclone mkdir backup:desktop` diff --git a/docs/content/webdav.md b/docs/content/webdav.md index 219f7f481..b4846156d 100644 --- a/docs/content/webdav.md +++ b/docs/content/webdav.md @@ -8,7 +8,7 @@ description: "Rclone docs for WebDAV" Paths are specified as `remote:path` -Paths may be as deep as required, eg `remote:directory/subdirectory`. +Paths may be as deep as required, e.g. `remote:directory/subdirectory`. To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are @@ -61,7 +61,7 @@ Enter the password: password: Confirm the password: password: -Bearer token instead of user/pass (eg a Macaroon) +Bearer token instead of user/pass (e.g. a Macaroon) bearer_token> Remote config -------------------- @@ -161,7 +161,7 @@ Password. #### --webdav-bearer-token -Bearer token instead of user/pass (eg a Macaroon) +Bearer token instead of user/pass (e.g. a Macaroon) - Config: bearer_token - Env Var: RCLONE_WEBDAV_BEARER_TOKEN diff --git a/docs/content/yandex.md b/docs/content/yandex.md index 003b045b5..4538523bd 100644 --- a/docs/content/yandex.md +++ b/docs/content/yandex.md @@ -82,7 +82,7 @@ excess files in the path. rclone sync -i /home/local/directory remote:directory -Yandex paths may be as deep as required, eg `remote:directory/subdirectory`. +Yandex paths may be as deep as required, e.g. 
`remote:directory/subdirectory`. ### Modified time ### diff --git a/fs/config.go b/fs/config.go index 9697dbb07..d02a0f494 100644 --- a/fs/config.go +++ b/fs/config.go @@ -162,14 +162,14 @@ func NewConfig() *ConfigInfo { return c } -// ConfigToEnv converts a config section and name, eg ("myremote", +// ConfigToEnv converts a config section and name, e.g. ("myremote", // "ignore-size") into an environment name // "RCLONE_CONFIG_MYREMOTE_IGNORE_SIZE" func ConfigToEnv(section, name string) string { return "RCLONE_CONFIG_" + strings.ToUpper(strings.Replace(section+"_"+name, "-", "_", -1)) } -// OptionToEnv converts an option name, eg "ignore-size" into an +// OptionToEnv converts an option name, e.g. "ignore-size" into an // environment name "RCLONE_IGNORE_SIZE" func OptionToEnv(name string) string { return "RCLONE_" + strings.ToUpper(strings.Replace(name, "-", "_", -1)) diff --git a/fs/config/configflags/configflags.go b/fs/config/configflags/configflags.go index b118fc815..f3bdf116e 100644 --- a/fs/config/configflags/configflags.go +++ b/fs/config/configflags/configflags.go @@ -119,7 +119,7 @@ func AddFlags(flagSet *pflag.FlagSet) { flags.FVarP(flagSet, &fs.Config.MultiThreadCutoff, "multi-thread-cutoff", "", "Use multi-thread downloads for files above this size.") flags.IntVarP(flagSet, &fs.Config.MultiThreadStreams, "multi-thread-streams", "", fs.Config.MultiThreadStreams, "Max number of streams to use for multi-thread downloads.") flags.BoolVarP(flagSet, &fs.Config.UseJSONLog, "use-json-log", "", fs.Config.UseJSONLog, "Use json log format.") - flags.StringVarP(flagSet, &fs.Config.OrderBy, "order-by", "", fs.Config.OrderBy, "Instructions on how to order the transfers, eg 'size,descending'") + flags.StringVarP(flagSet, &fs.Config.OrderBy, "order-by", "", fs.Config.OrderBy, "Instructions on how to order the transfers, e.g. 'size,descending'") flags.StringArrayVarP(flagSet, &uploadHeaders, "header-upload", "", nil, "Set HTTP header for upload transactions") flags.StringArrayVarP(flagSet, &downloadHeaders, "header-download", "", nil, "Set HTTP header for download transactions") flags.StringArrayVarP(flagSet, &headers, "header", "", nil, "Set HTTP header for all transactions") diff --git a/fs/fs.go b/fs/fs.go index fee8854f4..0a3dd730c 100644 --- a/fs/fs.go +++ b/fs/fs.go @@ -490,7 +490,7 @@ type Usage struct { Total *int64 `json:"total,omitempty"` // quota of bytes that can be used Used *int64 `json:"used,omitempty"` // bytes in use Trashed *int64 `json:"trashed,omitempty"` // bytes in trash - Other *int64 `json:"other,omitempty"` // other usage eg gmail in drive + Other *int64 `json:"other,omitempty"` // other usage e.g. gmail in drive Free *int64 `json:"free,omitempty"` // bytes which can be uploaded before reaching the quota Objects *int64 `json:"objects,omitempty"` // objects in the storage system } @@ -1079,7 +1079,7 @@ type Disconnecter interface { // // These are automatically inserted in the docs type CommandHelp struct { - Name string // Name of the command, eg "link" + Name string // Name of the command, e.g. 
"link" Short string // Single line description Long string // Long multi-line description Opts map[string]string // maps option name to a single line help diff --git a/fs/log/log.go b/fs/log/log.go index f505b99c3..fbd1f8118 100644 --- a/fs/log/log.go +++ b/fs/log/log.go @@ -18,7 +18,7 @@ type Options struct { File string // Log everything to this file Format string // Comma separated list of log format options UseSyslog bool // Use Syslog for logging - SyslogFacility string // Facility for syslog, eg KERN,USER,... + SyslogFacility string // Facility for syslog, e.g. KERN,USER,... } // DefaultOpt is the default values used for Opt diff --git a/fs/log/logflags/logflags.go b/fs/log/logflags/logflags.go index 5753878c6..a788bbf4a 100644 --- a/fs/log/logflags/logflags.go +++ b/fs/log/logflags/logflags.go @@ -15,5 +15,5 @@ func AddFlags(flagSet *pflag.FlagSet) { flags.StringVarP(flagSet, &log.Opt.File, "log-file", "", log.Opt.File, "Log everything to this file") flags.StringVarP(flagSet, &log.Opt.Format, "log-format", "", log.Opt.Format, "Comma separated list of log format options") flags.BoolVarP(flagSet, &log.Opt.UseSyslog, "syslog", "", log.Opt.UseSyslog, "Use Syslog for logging") - flags.StringVarP(flagSet, &log.Opt.SyslogFacility, "syslog-facility", "", log.Opt.SyslogFacility, "Facility for syslog, eg KERN,USER,...") + flags.StringVarP(flagSet, &log.Opt.SyslogFacility, "syslog-facility", "", log.Opt.SyslogFacility, "Facility for syslog, e.g. KERN,USER,...") } diff --git a/fs/operations/lsjson.go b/fs/operations/lsjson.go index 781ef6f8a..0a0066950 100644 --- a/fs/operations/lsjson.go +++ b/fs/operations/lsjson.go @@ -78,7 +78,7 @@ type ListJSONOpt struct { ShowHash bool `json:"showHash"` DirsOnly bool `json:"dirsOnly"` FilesOnly bool `json:"filesOnly"` - HashTypes []string `json:"hashTypes"` // hash types to show if ShowHash is set, eg "MD5", "SHA-1" + HashTypes []string `json:"hashTypes"` // hash types to show if ShowHash is set, e.g. "MD5", "SHA-1" } // ListJSON lists fsrc using the options in opt calling callback for each item diff --git a/fs/operations/rc.go b/fs/operations/rc.go index ce66f353a..755d46285 100644 --- a/fs/operations/rc.go +++ b/fs/operations/rc.go @@ -23,8 +23,8 @@ func init() { Title: "List the given remote and path in JSON format", Help: `This takes the following parameters -- fs - a remote name string eg "drive:" -- remote - a path within that remote eg "dir" +- fs - a remote name string e.g. "drive:" +- remote - a path within that remote e.g. "dir" - opt - a dictionary of options to control the listing (optional) - recurse - If set recurse directories - noModTime - If set return modification time @@ -74,7 +74,7 @@ func init() { Title: "Return the space used on the remote", Help: `This takes the following parameters -- fs - a remote name string eg "drive:" +- fs - a remote name string e.g. "drive:" The result is as returned from rclone about --json @@ -120,10 +120,10 @@ func init() { Title: name + " a file from source remote to destination remote", Help: `This takes the following parameters -- srcFs - a remote name string eg "drive:" for the source -- srcRemote - a path within that remote eg "file.txt" for the source -- dstFs - a remote name string eg "drive2:" for the destination -- dstRemote - a path within that remote eg "file2.txt" for the destination +- srcFs - a remote name string e.g. "drive:" for the source +- srcRemote - a path within that remote e.g. "file.txt" for the source +- dstFs - a remote name string e.g. 
"drive2:" for the destination +- dstRemote - a path within that remote e.g. "file2.txt" for the destination `, }) } @@ -161,7 +161,7 @@ func init() { {name: "cleanup", title: "Remove trashed files in the remote or path", noRemote: true}, } { op := op - remote := "- remote - a path within that remote eg \"dir\"\n" + remote := "- remote - a path within that remote e.g. \"dir\"\n" if op.noRemote { remote = "" } @@ -175,7 +175,7 @@ func init() { Title: op.title, Help: `This takes the following parameters -- fs - a remote name string eg "drive:" +- fs - a remote name string e.g. "drive:" ` + remote + op.help + ` See the [` + op.name + ` command](/commands/rclone_` + op.name + `/) command for more information on the above. `, @@ -183,7 +183,7 @@ See the [` + op.name + ` command](/commands/rclone_` + op.name + `/) command for } } -// Run a single command, eg Mkdir +// Run a single command, e.g. Mkdir func rcSingleCommand(ctx context.Context, in rc.Params, name string, noRemote bool) (out rc.Params, err error) { var ( f fs.Fs @@ -277,7 +277,7 @@ func init() { Title: "Count the number of bytes and files in remote", Help: `This takes the following parameters -- fs - a remote name string eg "drive:path/to/dir" +- fs - a remote name string e.g. "drive:path/to/dir" Returns @@ -313,10 +313,10 @@ func init() { Title: "Create or retrieve a public link to the given file or folder.", Help: `This takes the following parameters -- fs - a remote name string eg "drive:" -- remote - a path within that remote eg "dir" +- fs - a remote name string e.g. "drive:" +- remote - a path within that remote e.g. "dir" - unlink - boolean - if set removes the link rather than adding it (optional) -- expire - string - the expiry time of the link eg "1d" (optional) +- expire - string - the expiry time of the link e.g. "1d" (optional) Returns @@ -354,7 +354,7 @@ func init() { Title: "Return information about the remote", Help: `This takes the following parameters -- fs - a remote name string eg "drive:" +- fs - a remote name string e.g. "drive:" This returns info about the remote passed in; @@ -434,7 +434,7 @@ func init() { Help: `This takes the following parameters - command - a string with the command name -- fs - a remote name string eg "drive:" +- fs - a remote name string e.g. "drive:" - arg - a list of arguments for the backend command - opt - a map of string to string of options diff --git a/fs/options.go b/fs/options.go index e8310045f..a85cd6f00 100644 --- a/fs/options.go +++ b/fs/options.go @@ -136,7 +136,7 @@ func (o *RangeOption) Decode(size int64) (offset, limit int64) { // FixRangeOption looks through the slice of options and adjusts any // RangeOption~s found that request a fetch from the end into an // absolute fetch using the size passed in and makes sure the range does -// not exceed filesize. Some remotes (eg Onedrive, Box) don't support +// not exceed filesize. Some remotes (e.g. Onedrive, Box) don't support // range requests which index from the end. func FixRangeOption(options []OpenOption, size int64) { if size == 0 { diff --git a/fs/rc/internal.go b/fs/rc/internal.go index a820c908a..3f2b02644 100644 --- a/fs/rc/internal.go +++ b/fs/rc/internal.go @@ -172,7 +172,7 @@ func init() { Help: ` This shows the current version of go and the go runtime -- version - rclone version, eg "v1.53.0" +- version - rclone version, e.g. 
"v1.53.0" - decomposed - version number as [major, minor, patch] - isGit - boolean - true if this was compiled from the git version - isBeta - boolean - true if this is a beta version @@ -417,7 +417,7 @@ func rcRunCommand(ctx context.Context, in Params) (out Params, err error) { var allArgs = []string{} if command != "" { - // Add the command eg: ls to the args + // Add the command e.g.: ls to the args allArgs = append(allArgs, command) } // Add all from arg @@ -425,7 +425,7 @@ func rcRunCommand(ctx context.Context, in Params) (out Params, err error) { allArgs = append(allArgs, cur) } - // Add flags to args for eg --max-depth 1 comes in as { max-depth 1 }. + // Add flags to args for e.g. --max-depth 1 comes in as { max-depth 1 }. // Convert it to [ max-depth, 1 ] and append to args list for key, value := range opt { if len(key) == 1 { diff --git a/fs/rc/jobs/job.go b/fs/rc/jobs/job.go index c5a423fac..d27a7788b 100644 --- a/fs/rc/jobs/job.go +++ b/fs/rc/jobs/job.go @@ -244,11 +244,11 @@ Results - finished - boolean - duration - time in seconds that the job ran for -- endTime - time the job finished (eg "2018-10-26T18:50:20.528746884+01:00") +- endTime - time the job finished (e.g. "2018-10-26T18:50:20.528746884+01:00") - error - error from the job or empty string for no error - finished - boolean whether the job has finished or not - id - as passed in above -- startTime - time the job started (eg "2018-10-26T18:50:20.528336039+01:00") +- startTime - time the job started (e.g. "2018-10-26T18:50:20.528336039+01:00") - success - boolean - true for success false otherwise - output - output of the job as would have been returned if called synchronously - progress - output of the progress related to the underlying job diff --git a/fs/rc/webgui/rc.go b/fs/rc/webgui/rc.go index 2c8263219..32a641715 100644 --- a/fs/rc/webgui/rc.go +++ b/fs/rc/webgui/rc.go @@ -244,8 +244,8 @@ func init() { This takes the following parameters -- type: supported mime type by a loaded plugin eg (video/mp4, audio/mp3) -- pluginType: filter plugins based on their type eg (DASHBOARD, FILE_HANDLER, TERMINAL) +- type: supported mime type by a loaded plugin e.g. (video/mp4, audio/mp3) +- pluginType: filter plugins based on their type e.g. (DASHBOARD, FILE_HANDLER, TERMINAL) and returns diff --git a/fs/sync/rc.go b/fs/sync/rc.go index 0b6ee1622..50c360d48 100644 --- a/fs/sync/rc.go +++ b/fs/sync/rc.go @@ -22,8 +22,8 @@ func init() { Title: name + " a directory from source remote to destination remote", Help: `This takes the following parameters -- srcFs - a remote name string eg "drive:src" for the source -- dstFs - a remote name string eg "drive:dst" for the destination +- srcFs - a remote name string e.g. "drive:src" for the source +- dstFs - a remote name string e.g. "drive:dst" for the destination ` + moveHelp + ` See the [` + name + ` command](/commands/rclone_` + name + `/) command for more information on the above.`, diff --git a/fstest/test_all/config.go b/fstest/test_all/config.go index 9d3440d15..609e4c0a5 100644 --- a/fstest/test_all/config.go +++ b/fstest/test_all/config.go @@ -56,7 +56,7 @@ func (b *Backend) includeTest(t *Test) bool { // MakeRuns creates Run objects the Backend and Test // // There can be several created, one for each combination of optional -// flags (eg FastList) +// flags (e.g. 
FastList) func (b *Backend) MakeRuns(t *Test) (runs []*Run) { if !b.includeTest(t) { return runs diff --git a/fstest/test_all/test_all.go b/fstest/test_all/test_all.go index b3ab7f35d..22c3032f4 100644 --- a/fstest/test_all/test_all.go +++ b/fstest/test_all/test_all.go @@ -29,9 +29,9 @@ var ( // Flags maxTries = flag.Int("maxtries", 5, "Number of times to try each test") maxN = flag.Int("n", 20, "Maximum number of tests to run at once") - testRemotes = flag.String("remotes", "", "Comma separated list of remotes to test, eg 'TestSwift:,TestS3'") - testBackends = flag.String("backends", "", "Comma separated list of backends to test, eg 's3,googlecloudstorage") - testTests = flag.String("tests", "", "Comma separated list of tests to test, eg 'fs/sync,fs/operations'") + testRemotes = flag.String("remotes", "", "Comma separated list of remotes to test, e.g. 'TestSwift:,TestS3'") + testBackends = flag.String("backends", "", "Comma separated list of backends to test, e.g. 's3,googlecloudstorage") + testTests = flag.String("tests", "", "Comma separated list of tests to test, e.g. 'fs/sync,fs/operations'") clean = flag.Bool("clean", false, "Instead of testing, clean all left over test directories") runOnly = flag.String("run", "", "Run only those tests matching the regexp supplied") timeout = flag.Duration("timeout", 60*time.Minute, "Maximum time to run each test for before giving up") diff --git a/lib/structs/structs.go b/lib/structs/structs.go index d1cb676c9..9c6ee05b5 100644 --- a/lib/structs/structs.go +++ b/lib/structs/structs.go @@ -40,7 +40,7 @@ func SetFrom(a, b interface{}) { // // This copies the public members only from b to a. This is useful if // you can't just use a struct copy because it contains a private -// mutex, eg as http.Transport. +// mutex, e.g. as http.Transport. func SetDefaults(a, b interface{}) { pt := reflect.TypeOf(a) t := pt.Elem() diff --git a/lib/terminal/terminal.go b/lib/terminal/terminal.go index b3007a93b..4eda1abab 100644 --- a/lib/terminal/terminal.go +++ b/lib/terminal/terminal.go @@ -92,7 +92,7 @@ func WriteString(s string) { } // Out is an io.Writer which can be used to write to the terminal -// eg for use with fmt.Fprintf(terminal.Out, "terminal fun: %d\n", n) +// e.g. for use with fmt.Fprintf(terminal.Out, "terminal fun: %d\n", n) var Out io.Writer // Write sends out to the VT100 terminal. diff --git a/vfs/rc.go b/vfs/rc.go index 2fde29d1c..081e9f5da 100644 --- a/vfs/rc.go +++ b/vfs/rc.go @@ -63,7 +63,7 @@ If no paths are passed in then it will refresh the root directory. rclone rc vfs/refresh Otherwise pass directories in as dir=path. Any parameter key -starting with dir will refresh that directory, eg +starting with dir will refresh that directory, e.g. rclone rc vfs/refresh dir=home/junk dir2=data/misc @@ -180,7 +180,7 @@ directory cache. Otherwise pass files or dirs in as file=path or dir=path. Any parameter key starting with file will forget that file and any -starting with dir will forget that dir, eg +starting with dir will forget that dir, e.g. rclone rc vfs/forget file=hello file2=goodbye dir=home/junk ` + getVFSHelp, diff --git a/vfs/vfs.go b/vfs/vfs.go index 0b1151b2f..b0f50a11e 100644 --- a/vfs/vfs.go +++ b/vfs/vfs.go @@ -14,7 +14,7 @@ // // The vfs package returns Error values to signal precisely which // error conditions have ocurred. It may also return general errors -// it receives. It tries to use os Error values (eg os.ErrExist) +// it receives. It tries to use os Error values (e.g. os.ErrExist) // where possible. 
//go:generate sh -c "go run make_open_tests.go | gofmt > open_test.go" diff --git a/vfs/vfscommon/cachemode.go b/vfs/vfscommon/cachemode.go index 9df7e8452..2a4f87c9d 100644 --- a/vfs/vfscommon/cachemode.go +++ b/vfs/vfscommon/cachemode.go @@ -12,7 +12,7 @@ type CacheMode byte // CacheMode options const ( CacheModeOff CacheMode = iota // cache nothing - return errors for writes which can't be satisfied - CacheModeMinimal // cache only the minimum, eg read/write opens + CacheModeMinimal // cache only the minimum, e.g. read/write opens CacheModeWrites // cache all files opened with write intent CacheModeFull // cache all files opened in any mode )
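For readers of this patch, the last hunk touches the `CacheMode` constants in `vfs/vfscommon/cachemode.go`, which back the `--vfs-cache-mode` flag. Below is a rough, self-contained sketch (not part of the patch) of how an iota-based enum like this is typically mapped to and from the lower-case names a user types on the command line, e.g. `--vfs-cache-mode writes`; the helper names (`cacheModeNames`, `parseCacheMode`) and the `main` function are illustrative assumptions, not rclone's actual implementation.

    package main

    import (
        "fmt"
        "strings"
    )

    // CacheMode mirrors the enum from the hunk above, for illustration only.
    type CacheMode byte

    const (
        CacheModeOff     CacheMode = iota // cache nothing
        CacheModeMinimal                  // cache only the minimum, e.g. read/write opens
        CacheModeWrites                   // cache all files opened with write intent
        CacheModeFull                     // cache all files opened in any mode
    )

    // cacheModeNames is an assumed lookup table: the index matches the iota value.
    var cacheModeNames = []string{"off", "minimal", "writes", "full"}

    // String returns the lower-case command line name for the mode.
    func (m CacheMode) String() string {
        if int(m) < len(cacheModeNames) {
            return cacheModeNames[m]
        }
        return fmt.Sprintf("CacheMode(%d)", m)
    }

    // parseCacheMode converts a user supplied string (e.g. "writes") back
    // into a CacheMode, matching case-insensitively.
    func parseCacheMode(s string) (CacheMode, error) {
        for i, name := range cacheModeNames {
            if strings.EqualFold(s, name) {
                return CacheMode(i), nil
            }
        }
        return CacheModeOff, fmt.Errorf("unknown cache mode %q", s)
    }

    func main() {
        m, err := parseCacheMode("writes")
        if err != nil {
            panic(err)
        }
        fmt.Println(m, m >= CacheModeWrites) // prints: writes true
    }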