docs: use --interactive instead of -i in examples to avoid confusion

commit b9d9f9edb0 (parent c40b706186)
Author: albertony
Date: 2023-01-20 21:47:36 +01:00
31 changed files with 65 additions and 65 deletions


@@ -3322,9 +3322,9 @@ This takes an optional directory to trash which make this easier to
use via the API.
rclone backend untrash drive:directory
-rclone backend -i untrash drive:directory subdir
+rclone backend --interactive untrash drive:directory subdir
-Use the -i flag to see what would be restored before restoring it.
+Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it.
Result:
@@ -3354,7 +3354,7 @@ component will be used as the file name.
If the destination is a drive backend then server-side copying will be
attempted if possible.
-Use the -i flag to see what would be copied before copying.
+Use the --interactive/-i or --dry-run flag to see what would be copied before copying.
`,
}, {
Name: "exportformats",


@@ -65,8 +65,8 @@ a bucket or with a bucket and path.
Long: `This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.
-Note that you can use -i/--dry-run with this command to see what it
-would do.
+Note that you can use --interactive/-i or --dry-run with this command to see what
+it would do.
rclone backend cleanup oos:bucket/path/to/object
rclone backend cleanup -o max-age=7w oos:bucket/path/to/object


@@ -4103,9 +4103,9 @@ Usage Examples:
rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS]
rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS]
-This flag also obeys the filters. Test first with -i/--interactive or --dry-run flags
+This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags
-rclone -i backend restore --include "*.txt" s3:bucket/path -o priority=Standard
+rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard
All the objects shown will be marked for restore, then
@@ -4173,8 +4173,8 @@ a bucket or with a bucket and path.
Long: `This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.
-Note that you can use -i/--dry-run with this command to see what it
-would do.
+Note that you can use --interactive/-i or --dry-run with this command to see what
+it would do.
rclone backend cleanup s3:bucket/path/to/object
rclone backend cleanup -o max-age=7w s3:bucket/path/to/object
@@ -4190,8 +4190,8 @@ Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
Long: `This command removes any old hidden versions of files
on a versions enabled bucket.
-Note that you can use -i/--dry-run with this command to see what it
-would do.
+Note that you can use --interactive/-i or --dry-run with this command to see what
+it would do.
rclone backend cleanup-hidden s3:bucket/path/to/dir
`,
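The max-age values above (24h, 7w) use rclone's duration suffixes, "parsed as per the rest of rclone, 2h, 7d, 7w etc." A minimal sketch of that suffix scheme, assuming plain single-letter suffixes only (this is not rclone's actual parser, which also accepts compound forms like 1h30m):

```python
# Hypothetical sketch of rclone-style duration suffixes: a number
# followed by a single unit letter (s, m, h, d, w).
from datetime import timedelta

_UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}

def parse_age(text: str) -> timedelta:
    """Parse e.g. '24h' or '7w' into a timedelta."""
    value, unit = text[:-1], text[-1]
    return timedelta(seconds=float(value) * _UNITS[unit])

print(parse_age("24h"))  # 1 day, 0:00:00
```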


@@ -36,7 +36,7 @@ want to delete files from destination, use the
**Important**: Since this can cause data loss, test first with the
` + "`--dry-run` or the `--interactive`/`-i`" + ` flag.
-rclone sync -i SOURCE remote:DESTINATION
+rclone sync --interactive SOURCE remote:DESTINATION
Note that files in the destination won't be deleted if there were any
errors at any point. Duplicate objects (files with the same name, on


@@ -52,7 +52,7 @@ unless ` + "`--no-create`" + ` or ` + "`--recursive`" + ` is provided.
If ` + "`--recursive`" + ` is used then recursively sets the modification
time on all existing files that is found under the path. Filters are supported,
-and you can test with the ` + "`--dry-run`" + ` or the ` + "`--interactive`" + ` flag.
+and you can test with the ` + "`--dry-run`" + ` or the ` + "`--interactive`/`-i`" + ` flag.
If ` + "`--timestamp`" + ` is used then sets the modification time to that
time instead of the current time. Times may be specified as one of:


@@ -67,7 +67,7 @@ List the contents of a container
Sync `/home/local/directory` to the remote container, deleting any excess
files in the container.
-rclone sync -i /home/local/directory remote:container
+rclone sync --interactive /home/local/directory remote:container
### --fast-list


@@ -72,7 +72,7 @@ List the contents of a bucket
Sync `/home/local/directory` to the remote bucket, deleting any
excess files in the bucket.
-rclone sync -i /home/local/directory remote:bucket
+rclone sync --interactive /home/local/directory remote:bucket
### Application Keys


@@ -257,7 +257,7 @@ style or chunk naming scheme is to:
- Create another directory (most probably on the same cloud storage)
and configure a new remote with desired metadata format,
hash type, chunk naming etc.
-- Now run `rclone sync -i oldchunks: newchunks:` and all your data
+- Now run `rclone sync --interactive oldchunks: newchunks:` and all your data
will be transparently converted in transfer.
This may take some time, yet chunker will try server-side
copy if possible.


@@ -23,7 +23,7 @@ want to delete files from destination, use the
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
-rclone sync -i SOURCE remote:DESTINATION
+rclone sync --interactive SOURCE remote:DESTINATION
Note that files in the destination won't be deleted if there were any
errors at any point. Duplicate objects (files with the same name, on


@@ -21,7 +21,7 @@ unless `--no-create` or `--recursive` is provided.
If `--recursive` is used then recursively sets the modification
time on all existing files that is found under the path. Filters are supported,
-and you can test with the `--dry-run` or the `--interactive` flag.
+and you can test with the `--dry-run` or the `--interactive`/`-i` flag.
If `--timestamp` is used then sets the modification time to that
time instead of the current time. Times may be specified as one of:


@@ -662,7 +662,7 @@ as `eremote:`.
To sync the two remotes you would do
-rclone sync -i remote:crypt remote2:crypt
+rclone sync --interactive remote:crypt remote2:crypt
And to check the integrity you would do


@@ -94,7 +94,7 @@ storage system in the config file then the sub path, e.g.
You can define as many storage paths as you like in the config file.
-Please use the [`-i` / `--interactive`](#interactive) flag while
+Please use the [`--interactive`/`-i`](#interactive) flag while
learning rclone to avoid accidental data loss.
Subcommands
@@ -104,7 +104,7 @@ rclone uses a system of subcommands. For example
rclone ls remote:path # lists a remote
rclone copy /local/path remote:path # copies /local/path to the remote
-rclone sync -i /local/path remote:path # syncs /local/path to the remote
+rclone sync --interactive /local/path remote:path # syncs /local/path to the remote
The main rclone commands with most used first
@@ -396,11 +396,11 @@ file or directory like this then use the full path starting with a
So to sync a directory called `sync:me` to a remote called `remote:` use
-rclone sync -i ./sync:me remote:path
+rclone sync --interactive ./sync:me remote:path
or
-rclone sync -i /full/path/to/sync:me remote:path
+rclone sync --interactive /full/path/to/sync:me remote:path
Server Side Copy
----------------
@@ -433,8 +433,8 @@ same.
This can be used when scripting to make aged backups efficiently, e.g.
-rclone sync -i remote:current-backup remote:previous-backup
-rclone sync -i /path/to/files remote:current-backup
+rclone sync --interactive remote:current-backup remote:previous-backup
+rclone sync --interactive /path/to/files remote:current-backup
## Metadata support {#metadata}
@@ -621,7 +621,7 @@ excluded by a filter rule.
For example
-rclone sync -i /path/to/local remote:current --backup-dir remote:old
+rclone sync --interactive /path/to/local remote:current --backup-dir remote:old
will sync `/path/to/local` to `remote:current`, but for any files
which would have been updated or deleted will be stored in
@@ -1086,7 +1086,7 @@ Add an HTTP header for all download transactions. The flag can be repeated to
add multiple headers.
```
-rclone sync -i s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar"
+rclone sync --interactive s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar"
```
See the GitHub issue [here](https://github.com/rclone/rclone/issues/59) for
@@ -1098,7 +1098,7 @@ Add an HTTP header for all upload transactions. The flag can be repeated to add
multiple headers.
```
-rclone sync -i ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar"
+rclone sync --interactive ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar"
```
See the GitHub issue [here](https://github.com/rclone/rclone/issues/59) for
@@ -1208,7 +1208,7 @@ This can be useful as an additional layer of protection for immutable
or append-only data sets (notably backup archives), where modification
implies corruption and should not be propagated.
-### -i / --interactive {#interactive}
+### -i, --interactive {#interactive}
This flag can be used to tell rclone that you wish a manual
confirmation before destructive operations.
@@ -1219,7 +1219,7 @@ especially with `rclone sync`.
For example
```
-$ rclone delete -i /tmp/dir
+$ rclone delete --interactive /tmp/dir
rclone: delete "important-file.txt"?
y) Yes, this is OK (default)
n) No, skip this
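The short and long spellings are the same flag, so this docs change is purely cosmetic. As an illustrative analogy only (rclone itself is written in Go and uses the spf13/pflag library, not Python), short/long flag aliasing works like this:

```python
# Illustrative analogy: "-i" and "--interactive" are one flag with two
# spellings, so examples written with the long form behave identically.
import argparse

parser = argparse.ArgumentParser(prog="rclone-ish")
parser.add_argument("-i", "--interactive", action="store_true",
                    help="ask for confirmation before destructive operations")

print(parser.parse_args(["-i"]).interactive)             # True
print(parser.parse_args(["--interactive"]).interactive)  # True
```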
@@ -1372,7 +1372,7 @@ When the limit is reached all transfers will stop immediately.
Rclone will exit with exit code 8 if the transfer limit is reached.
-## --metadata / -M
+## -M, --metadata
Setting this flag enables rclone to copy the metadata from the source
to the destination. For local backends this is ownership, permissions,
@@ -1791,7 +1791,7 @@ or with `--backup-dir`. See `--backup-dir` for more info.
For example
-rclone copy -i /path/to/local/file remote:current --suffix .bak
+rclone copy --interactive /path/to/local/file remote:current --suffix .bak
will copy `/path/to/local` to `remote:current`, but for any files
which would have been updated or deleted have .bak added.
@@ -1800,7 +1800,7 @@ If using `rclone sync` with `--suffix` and without `--backup-dir` then
it is recommended to put a filter rule in excluding the suffix
otherwise the `sync` will delete the backup files.
-rclone sync -i /path/to/local/file remote:current --suffix .bak --exclude "*.bak"
+rclone sync --interactive /path/to/local/file remote:current --suffix .bak --exclude "*.bak"
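The interaction described above (syncing with `--suffix .bak` plus an `--exclude "*.bak"` rule) can be sketched with glob matching. This is a hypothetical illustration, not rclone's actual filter engine:

```python
# Hypothetical sketch: without an exclude rule matching the backup
# suffix, .bak files left by an earlier run would be in scope for the
# next sync and could be deleted; the exclude pattern keeps them out.
from fnmatch import fnmatch

def in_sync_scope(name: str, excludes: list[str]) -> bool:
    return not any(fnmatch(name, pat) for pat in excludes)

print(in_sync_scope("report.txt", ["*.bak"]))      # True
print(in_sync_scope("report.txt.bak", ["*.bak"]))  # False
```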
### --suffix-keep-extension ###


@@ -33,7 +33,7 @@ The syncs would be incremental (on a file by file basis).
e.g.
-rclone sync -i drive:Folder s3:bucket
+rclone sync --interactive drive:Folder s3:bucket
### Using rclone from multiple locations at the same time ###
@@ -42,8 +42,8 @@ You can use rclone from multiple places at the same time if you choose
different subdirectory for the output, e.g.
```
-Server A> rclone sync -i /tmp/whatever remote:ServerA
-Server B> rclone sync -i /tmp/whatever remote:ServerB
+Server A> rclone sync --interactive /tmp/whatever remote:ServerA
+Server B> rclone sync --interactive /tmp/whatever remote:ServerB
```
If you sync to the same directory then you should use rclone copy


@@ -723,7 +723,7 @@ and `-v` first.
In conjunction with `rclone sync`, `--delete-excluded` deletes any files
on the destination which are excluded from the command.
-E.g. the scope of `rclone sync -i A: B:` can be restricted:
+E.g. the scope of `rclone sync --interactive A: B:` can be restricted:
rclone --min-size 50k --delete-excluded sync A: B:


@@ -99,7 +99,7 @@ List the contents of a directory
Sync `/home/local/directory` to the remote directory, deleting any
excess files in the directory.
-rclone sync -i /home/local/directory remote:directory
+rclone sync --interactive /home/local/directory remote:directory
### Anonymous FTP


@@ -172,7 +172,7 @@ List the contents of a bucket
Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.
-rclone sync -i /home/local/directory remote:bucket
+rclone sync --interactive /home/local/directory remote:bucket
### Service Account support


@@ -117,7 +117,7 @@ List the contents of an album
Sync `/home/local/images` to the Google Photos, removing any excess
files in the album.
-rclone sync -i /home/local/image remote:album/newAlbum
+rclone sync --interactive /home/local/image remote:album/newAlbum
### Layout


@@ -91,7 +91,7 @@ List the contents of a directory
Sync the remote `directory` to `/home/local/directory`, deleting any excess files.
-rclone sync -i remote:directory /home/local/directory
+rclone sync --interactive remote:directory /home/local/directory
### Setting up your own HDFS instance for testing


@@ -99,7 +99,7 @@ List the contents of a directory
Sync the remote `directory` to `/home/local/directory`, deleting any excess files.
-rclone sync -i remote:directory /home/local/directory
+rclone sync --interactive remote:directory /home/local/directory
### Read only


@@ -28,7 +28,7 @@ List the contents of a item
Sync `/home/local/directory` to the remote item, deleting any excess
files in the item.
-rclone sync -i /home/local/directory remote:item
+rclone sync --interactive /home/local/directory remote:item
## Notes
Because of Internet Archive's architecture, it enqueues write operations (and extra post-processings) in a per-item queue. You can check item's queue at https://catalogd.archive.org/history/item-name-here . Because of that, all uploads/deletes will not show up immediately and takes some time to be available.


@@ -8,7 +8,7 @@ versionIntroduced: "v0.91"
Local paths are specified as normal filesystem paths, e.g. `/path/to/wherever`, so
-rclone sync -i /home/source /tmp/destination
+rclone sync --interactive /home/source /tmp/destination
Will sync `/home/source` to `/tmp/destination`.


@@ -123,7 +123,7 @@ List the contents of a directory
Sync `/home/local/directory` to the remote path, deleting any
excess files in the path.
-rclone sync -i /home/local/directory remote:directory
+rclone sync --interactive /home/local/directory remote:directory
### Modified time


@@ -629,11 +629,11 @@ OneDrive supports `rclone cleanup` which causes rclone to look through
every file under the path supplied and delete all version but the
current version. Because this involves traversing all the files, then
querying each file for versions it can be quite slow. Rclone does
-`--checkers` tests in parallel. The command also supports `-i` which
-is a great way to see what it would do.
+`--checkers` tests in parallel. The command also supports `--interactive`/`-i`
+or `--dry-run` which is a great way to see what it would do.
-rclone cleanup -i remote:path/subdir # interactively remove all old version for path/subdir
-rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir
+rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir
+rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir
**NB** Onedrive personal can't currently delete versions


@@ -92,7 +92,7 @@ List the contents of a bucket
Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.
-rclone sync -i /home/local/directory remote:bucket
+rclone sync --interactive /home/local/directory remote:bucket
### --fast-list


@@ -55,7 +55,7 @@ List the contents of a bucket
Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.
-rclone sync -i /home/local/directory remote:bucket
+rclone sync --interactive /home/local/directory remote:bucket
## Configuration
@@ -459,10 +459,10 @@ $ rclone -q --s3-versions ls s3:cleanup-test
### Cleanup
If you run `rclone cleanup s3:bucket` then it will remove all pending
-multipart uploads older than 24 hours. You can use the `-i` flag to
-see exactly what it will do. If you want more control over the expiry
-date then run `rclone backend cleanup s3:bucket -o max-age=1h` to
-expire all uploads older than one hour. You can use `rclone backend
+multipart uploads older than 24 hours. You can use the `--interactive`/`-i`
+or `--dry-run` flag to see exactly what it will do. If you want more control over the
+expiry date then run `rclone backend cleanup s3:bucket -o max-age=1h`
+to expire all uploads older than one hour. You can use `rclone backend
list-multipart-uploads s3:bucket` to see the pending multipart
uploads.


@@ -113,7 +113,7 @@ List the contents of a library
Sync `/home/local/directory` to the remote library, deleting any
excess files in the library.
-rclone sync -i /home/local/directory seafile:library
+rclone sync --interactive /home/local/directory seafile:library
### Configuration in library mode
@@ -209,7 +209,7 @@ List the contents of a directory
Sync `/home/local/directory` to the remote library, deleting any
excess files in the library.
-rclone sync -i /home/local/directory seafile:
+rclone sync --interactive /home/local/directory seafile:
### --fast-list


@@ -109,7 +109,7 @@ List the contents of a directory
Sync `/home/local/directory` to the remote directory, deleting any
excess files in the directory.
-rclone sync -i /home/local/directory remote:directory
+rclone sync --interactive /home/local/directory remote:directory
Mount the remote path `/srv/www-data/` to the local path
`/mnt/www-data`


@@ -389,7 +389,7 @@ Use the `size` command to print the total size of objects in a bucket or a folder
Use the `sync` command to sync the source to the destination,
changing the destination only, deleting any excess files.
-rclone sync -i --progress /home/local/directory/ remote:bucket/path/to/dir/
+rclone sync --interactive --progress /home/local/directory/ remote:bucket/path/to/dir/
The `--progress` flag is for displaying progress information.
Remove it if you don't need this information.
@@ -399,15 +399,15 @@ to see exactly what would be copied and deleted.
The sync can be done also from Storj to the local file system.
-rclone sync -i --progress remote:bucket/path/to/dir/ /home/local/directory/
+rclone sync --interactive --progress remote:bucket/path/to/dir/ /home/local/directory/
Or between two Storj buckets.
-rclone sync -i --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
+rclone sync --interactive --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
Or even between another cloud storage and Storj.
-rclone sync -i --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
+rclone sync --interactive --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
## Limitations


@@ -134,7 +134,7 @@ List the contents of a container
Sync `/home/local/directory` to the remote container, deleting any
excess files in the container.
-rclone sync -i /home/local/directory remote:container
+rclone sync --interactive /home/local/directory remote:container
### Configuration from an OpenStack credentials file


@@ -83,7 +83,7 @@ List the contents of a directory
Sync `/home/local/directory` to the remote path, deleting any
excess files in the path.
-rclone sync -i /home/local/directory remote:directory
+rclone sync --interactive /home/local/directory remote:directory
Yandex paths may be as deep as required, e.g. `remote:directory/subdirectory`.


@@ -103,7 +103,7 @@ List the contents of a directory
Sync `/home/local/directory` to the remote path, deleting any
excess files in the path.
-rclone sync -i /home/local/directory remote:directory
+rclone sync --interactive /home/local/directory remote:directory
Zoho paths may be as deep as required, eg `remote:directory/subdirectory`.