docs: cleanup header levels in backend docs (#5698)

albertony 2021-10-14 15:40:18 +02:00 committed by GitHub
parent ceaafe6620
commit c2597a4fa3
45 changed files with 608 additions and 523 deletions


@ -137,7 +137,7 @@ func showHelp(fsInfo *fs.RegInfo) error {
if len(cmds) == 0 {
return errors.Errorf("%s backend has no commands", name)
}
fmt.Printf("### Backend commands\n\n")
fmt.Printf("## Backend commands\n\n")
fmt.Printf(`Here are the commands specific to the %s backend.
Run them with
@ -154,7 +154,7 @@ These can be run on a running backend using the rc command
`, name)
for _, cmd := range cmds {
fmt.Printf("#### %s\n\n", cmd.Name)
fmt.Printf("### %s\n\n", cmd.Name)
fmt.Printf("%s\n\n", cmd.Short)
fmt.Printf(" rclone backend %s remote: [options] [<arguments>+]\n\n", cmd.Name)
if cmd.Long != "" {


@ -315,7 +315,7 @@ func showBackend(name string) {
optionsType = "advanced"
continue
}
fmt.Printf("### %s Options\n\n", strings.Title(optionsType))
fmt.Printf("### %s options\n\n", strings.Title(optionsType))
fmt.Printf("Here are the %s options specific to %s (%s).\n\n", optionsType, backend.Name, backend.Description)
optionsType = "advanced"
for _, opt := range opts {


@ -23,6 +23,8 @@ Invoking `rclone mkdir backup:../desktop` is exactly the same as invoking
The empty path is not allowed as a remote. To alias the current directory
use `.` instead.
## Configuration
Here is an example of how to make an alias called `remote` for a local folder.
First run:


@ -22,7 +22,7 @@ keys see [the forum](https://forum.rclone.org/t/rclone-has-been-banned-from-amaz
If you happen to know anyone who works at Amazon then please ask them
to re-instate rclone into the Amazon Drive developer program - thanks!
## Setup
## Configuration
The initial setup for Amazon Drive involves getting a token from
Amazon which you need to do in your browser. `rclone config` walks
@ -125,7 +125,7 @@ To copy a local directory to an Amazon Drive directory called backup
rclone copy /home/source remote:backup
### Modified time and MD5SUMs ###
### Modified time and MD5SUMs
Amazon Drive doesn't allow modification times to be changed via
the API so these won't be accurate or used for syncing.
@ -133,7 +133,7 @@ the API so these won't be accurate or used for syncing.
It does store MD5SUMs so for a more accurate sync, you can use the
`--checksum` flag.
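For example, an illustrative sync based on MD5SUMs rather than modification times (assuming the `remote` from the example above):

    rclone sync -i --checksum /home/source remote:backup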
#### Restricted filename characters
### Restricted filename characters
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
@ -143,7 +143,7 @@ It does store MD5SUMs so for a more accurate sync, you can use the
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
### Deleting files ###
### Deleting files
Any files you delete with rclone will end up in the trash. Amazon
don't provide an API to permanently delete files, nor to empty the
@ -151,7 +151,7 @@ trash, so you will have to do that with one of Amazon's apps or via
the Amazon Drive website. As of November 17, 2016, files are
automatically deleted by Amazon from the trash after 30 days.
### Using with non `.com` Amazon accounts ###
### Using with non `.com` Amazon accounts
Let's say you usually use `amazon.co.uk`. When you authenticate with
rclone it will take you to an `amazon.com` page to log in. Your
@ -284,7 +284,7 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
{{< rem autogenerated options stop >}}
### Limitations ###
## Limitations
Note that Amazon Drive is case insensitive so you can't have a
file called "Hello.doc" and one called "hello.doc".


@ -9,6 +9,8 @@ Paths are specified as `remote:container` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g.
`remote:container/path/to/dir`.
## Configuration
Here is an example of making a Microsoft Azure Blob Storage
configuration for a remote called `remote`. First run:
@ -66,13 +68,13 @@ files in the container.
rclone sync -i /home/local/directory remote:container
### --fast-list ###
### --fast-list
This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.
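For example, an illustrative sync using fewer transactions (assuming the `remote` and container from the examples above):

    rclone sync -i --fast-list /home/local/directory remote:container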
### Modified time ###
### Modified time
The modified time is stored as metadata on the object with the `mtime`
key. It is stored using RFC3339 Format time with nanosecond
@ -99,7 +101,7 @@ These only get replaced if they are the last character in the name:
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
### Hashes ###
### Hashes
MD5 hashes are stored with blobs. However blobs that were uploaded in
chunks only have an MD5 if the source remote was capable of MD5
@ -407,7 +409,8 @@ Public access level of a container: blob, container.
- Allow full public read access for container and blob data.
{{< rem autogenerated options stop >}}
### Limitations ###
## Limitations
MD5 sums are only uploaded with chunked files if the source has an MD5
sum. This will always be the case for a local to azure copy.
@ -420,7 +423,8 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
See [rclone about](https://rclone.org/commands/rclone_about/)
### Azure Storage Emulator Support ###
## Azure Storage Emulator Support
You can test rclone with the storage emulator locally. To do this make sure the Azure storage
emulator is installed locally, then set up a new remote with `rclone config` following the
instructions in the introduction and set the `use_emulator` config option to `true`. You do not need to provide a default account name


@ -10,6 +10,8 @@ B2 is [Backblaze's cloud storage system](https://www.backblaze.com/b2/).
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
## Configuration
Here is an example of making a b2 configuration. First run
rclone config
@ -71,7 +73,7 @@ excess files in the bucket.
rclone sync -i /home/local/directory remote:bucket
### Application Keys ###
### Application Keys
B2 supports multiple [Application Keys for different access permission
to B2 Buckets](https://www.backblaze.com/b2/docs/application_keys.html).
@ -87,13 +89,13 @@ Note that you must put the _applicationKeyId_ as the `account` you
can't use the master Account ID. If you try then B2 will return 401
errors.
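In the config file that means putting the application key ID in `account` and the application key itself in `key`, along these lines (a sketch with placeholder values):

```
[remote]
type = b2
account = your_application_key_id
key = your_application_key
```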
### --fast-list ###
### --fast-list
This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.
### Modified time ###
### Modified time
The modified time is stored as metadata on the object as
`X-Bz-Info-src_last_modified_millis` as milliseconds since 1970-01-01
@ -104,7 +106,7 @@ Modified times are used in syncing and are fully supported. Note that
if a modification time needs to be updated on an object then it will
create a new version of the object.
#### Restricted filename characters
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
@ -122,7 +124,7 @@ re-transfer files. If you want rclone not to replace \ then see the
`--b2-encoding` flag below and remove the `BackSlash` from the
string. This can be set in the config.
### SHA1 checksums ###
### SHA1 checksums
The SHA1 checksums of the files are checked on upload and download and
will be used in the syncing process.
@ -144,7 +146,7 @@ large files without SHA1 checksums. This may be fixed in the future
Files with sizes below `--b2-upload-cutoff` will always have an SHA1
regardless of the source.
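To inspect the SHA1 sums rclone reads from B2 you can use the `sha1sum` command, e.g. (assuming a remote named `remote`):

    rclone sha1sum remote:bucket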
### Transfers ###
### Transfers
Backblaze recommends that you do lots of transfers simultaneously for
maximum speed. In tests from my SSD equipped laptop the optimum
@ -159,7 +161,7 @@ a 96 MiB RAM buffer by default. There can be at most `--transfers` of
these in use at any moment, so this sets the upper limit on the memory
used.
### Versions ###
### Versions
When rclone uploads a new version of a file it creates a [new version
of it](https://www.backblaze.com/b2/docs/file_versions.html).
@ -223,7 +225,7 @@ $ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
```
### Data usage ###
### Data usage
It is useful to know how many requests are sent to the server in different scenarios.
@ -261,7 +263,7 @@ start and finish the upload) and another 2 requests for each chunk:
/b2api/v1/b2_finish_large_file
```
#### Versions ####
#### Versions
Versions can be viewed with the `--b2-versions` flag. When it is set
rclone will show and act on older versions of files. For example
@ -290,7 +292,7 @@ server to the nearest millisecond appended to them.
Note that when using `--b2-versions` no file write operations are
permitted, so you can't upload files or delete them.
### B2 and rclone link ###
### B2 and rclone link
Rclone supports generating file share links for private B2 buckets.
They can either be for a file for example:
@ -509,7 +511,8 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
### Limitations
## Limitations
`rclone about` is not supported by the B2 backend. Backends without
this capability cannot determine free space for an rclone mount or


@ -13,6 +13,8 @@ The initial setup for Box involves getting a token from Box which you
can do either in your browser, or with a config.json downloaded from Box
to use JWT authentication. `rclone config` walks you through it.
## Configuration
Here is an example of how to make a remote called `remote`. First run:
rclone config
@ -99,7 +101,7 @@ To copy a local directory to a Box directory called backup
rclone copy /home/source remote:backup
### Using rclone with an Enterprise account with SSO ###
### Using rclone with an Enterprise account with SSO
If you have an "Enterprise" account type with Box with single sign on
(SSO), you need to create a password to use Box with rclone. This can
@ -110,7 +112,7 @@ Once you have done this, you can setup your Enterprise Box account
using the same procedure detailed above, with the password you
have just set.
### Invalid refresh token ###
### Invalid refresh token
According to the [box docs](https://developer.box.com/v2.0/docs/oauth-20#section-6-using-the-access-and-refresh-tokens):
@ -194,7 +196,7 @@ d) Delete this remote
y/e/d> y
```
### Modified time and hashes ###
### Modified time and hashes
Box allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@ -203,7 +205,7 @@ not.
Box supports SHA1 type hashes, so you can use the `--checksum`
flag.
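For example, an illustrative check of a local directory against Box, comparing sizes and SHA1 sums (assuming the `remote` from the example above):

    rclone check /home/source remote:backup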
#### Restricted filename characters
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
@ -222,14 +224,14 @@ These only get replaced if they are the last character in the name:
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
### Transfers ###
### Transfers
For files above 50 MiB rclone will use a chunked transfer. Rclone will
upload up to `--transfers` chunks at the same time (shared among all
the multipart uploads). Chunks are buffered in memory and are
normally 8 MiB so increasing `--transfers` will increase memory use.
### Deleting files ###
### Deleting files
Depending on the enterprise settings for your user, the item will
either be actually deleted from Box or moved to the trash.
@ -240,7 +242,7 @@ may take a very long time.
Emptying the trash via the WebUI does not have this limitation
so it is advised to empty the trash via the WebUI.
### Root folder ID ###
### Root folder ID
You can set the `root_folder_id` for rclone. This is the directory
(identified by its `Folder ID`) that rclone considers to be the root
@ -397,7 +399,7 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
{{< rem autogenerated options stop >}}
### Limitations ###
## Limitations
Note that Box is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".


@ -22,7 +22,7 @@ the use of the cache backend to minimize API hits and by-and-large
these are out of date and the cache backend isn't needed in those
scenarios any more.
## Setup
## Configuration
To get started you just need to have an existing remote which can be configured
with `cache`.


@ -10,6 +10,8 @@ during upload to the wrapped remote and transparently assembles them back
when the file is downloaded. This allows you to effectively overcome size limits
imposed by storage providers.
## Configuration
To use it, first set up the underlying remote following the configuration
instructions for that remote. You can also use a local pathname instead of
a remote.
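Once configured, a chunker remote might look something like this in the config file (an illustrative sketch; the section name, wrapped remote and chunk size are placeholders, and `chunk_size` is assumed to be the relevant option):

```
[overlay]
type = chunker
remote = remote:bucket
chunk_size = 100M
```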


@ -5,7 +5,8 @@ description: "Compression Remote"
# {{< icon "fas fa-compress" >}}Compress (Experimental)
### Warning
## Warning
This remote is currently **experimental**. Things may break and data may be lost. Anything you do with this remote is
at your own risk. Please understand the risks associated with using experimental code and don't use this remote in
critical applications.
@ -13,6 +14,8 @@ critical applications.
The `Compress` remote adds compression to another remote. It is best used with remotes containing
many large compressible files.
## Configuration
To use this remote, all you need to do is specify another remote and a compression mode to use:
```
@ -66,11 +69,13 @@ y/e/d> y
```
### Compression Modes
Currently only gzip compression is supported. It provides a decent balance between speed and size and is well
supported by other applications. Compression strength can further be configured via an advanced setting where 0 is no
compression and 9 is strongest compression.
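A configured compress remote might look something like this (a sketch with assumed option names `mode` and `level`; check the `rclone config` prompts for the authoritative names):

```
[compressed]
type = compress
remote = remote:path
mode = gzip
level = 9
```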
#### Filetype
### File types
If you open a remote wrapped by compress, you will see that there are many files with an extension corresponding to
the compression algorithm you chose. These files are standard files that can be opened by various archive programs,
but they have some hidden metadata that allows them to be used by rclone.


@ -59,7 +59,7 @@ based on XSalsa20 cipher and Poly1305 for integrity.
by default, but this has some implications and is therefore
possible to turn off.
### Configuration
## Configuration
Here is an example of how to make a remote called `secret`.


@ -9,6 +9,8 @@ Paths are specified as `drive:path`
Drive paths may be as deep as required, e.g. `drive:directory/subdirectory`.
## Configuration
The initial setup for drive involves getting a token from Google drive
which you need to do in your browser. `rclone config` walks you
through it.
@ -111,7 +113,7 @@ To copy a local directory to a drive directory called backup
rclone copy /home/source remote:backup
### Scopes ###
### Scopes
Rclone allows you to select which scope you would like for rclone to
use. This changes what type of token is granted to rclone. [The
@ -120,19 +122,19 @@ here](https://developers.google.com/drive/v3/web/about-auth).
The scopes are
#### drive ####
#### drive
This is the default scope and allows full access to all files, except
for the Application Data Folder (see below).
Choose this one if you aren't sure.
#### drive.readonly ####
#### drive.readonly
This allows read only access to all files. Files may be listed and
downloaded but not uploaded, renamed or deleted.
#### drive.file ####
#### drive.file
With this scope rclone can read/view/modify only those files and
folders it creates.
@ -145,19 +147,19 @@ to be sure confidential data on your drive is not visible to rclone.
Files created with this scope are visible in the web interface.
#### drive.appfolder ####
#### drive.appfolder
This gives rclone its own private area to store files. Rclone will
not be able to see any other files on your drive and you won't be able
to see rclone's files from the web interface either.
#### drive.metadata.readonly ####
#### drive.metadata.readonly
This allows read only access to file names only. It does not allow
rclone to download or upload data, or rename or delete files or
directories.
### Root folder ID ###
### Root folder ID
You can set the `root_folder_id` for rclone. This is the directory
(identified by its `Folder ID`) that rclone considers to be the root
@ -190,7 +192,7 @@ There doesn't appear to be an API to discover the folder IDs of the
Note also that rclone can't access any data under the "Backups" tab on
the google drive web interface yet.
### Service Account support ###
### Service Account support
You can set up rclone with Google Drive in an unattended mode,
i.e. not tied to a specific end-user Google account. This is useful
@ -205,7 +207,7 @@ credentials file into the rclone config file, you can set
`service_account_credentials` with the actual contents of the file
instead, or set the equivalent environment variable.
#### Use case - Google Apps/G-suite account and individual Drive ####
#### Use case - Google Apps/G-suite account and individual Drive
Let's say that you are the administrator of a Google Apps (old) or
G-suite account.
@ -216,7 +218,7 @@ We'll call the domain **example.com**, and the user
There are a few steps we need to go through to accomplish this:
##### 1. Create a service account for example.com #####
##### 1. Create a service account for example.com
- To create a service account and obtain its credentials, go to the
[Google Developer Console](https://console.developers.google.com).
- You must have a project - create one if you don't.
@ -231,7 +233,7 @@ with something that identifies your client. "Role" can be empty.
If you ever need to remove access, press the "Delete service
account key" button.
##### 2. Allowing API access to example.com Google Drive #####
##### 2. Allowing API access to example.com Google Drive
- Go to example.com's admin console
- Go into "Security" (or use the search bar)
- Select "Show more" and then "Advanced settings"
@ -245,7 +247,7 @@ It is a ~21 character numerical string.
`https://www.googleapis.com/auth/drive`
to grant access to Google Drive specifically.
##### 3. Configure rclone, assuming a new install #####
##### 3. Configure rclone, assuming a new install
```
rclone config
@ -262,7 +264,7 @@ y/n> # Auto config, n
```
##### 4. Verify that it's working #####
##### 4. Verify that it's working
- `rclone -v --drive-impersonate foo@example.com lsf gdrive:backup`
- The arguments do:
- `-v` - verbose logging
@ -278,7 +280,7 @@ Note: in case you configured a specific root folder on gdrive and rclone is unab
`rclone -v lsf gdrive:backup`
### Shared drives (team drives) ###
### Shared drives (team drives)
If you want to configure the remote to point to a Google Shared Drive
(previously known as Team Drives) then answer `y` to the question
@ -317,7 +319,7 @@ d) Delete this remote
y/e/d> y
```
### --fast-list ###
### --fast-list
This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
@ -356,11 +358,11 @@ large folder (10600 directories, 39000 files):
- without `--fast-list`: 22:05 min
- with `--fast-list`: 58s
### Modified time ###
### Modified time
Google drive stores modification times accurate to 1 ms.
#### Restricted filename characters
### Restricted filename characters
Only Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
@ -368,7 +370,7 @@ as they can't be used in JSON strings.
In contrast to other backends, `/` can also be used in names and `.`
or `..` are valid names.
### Revisions ###
### Revisions
Google drive stores revisions of files. When you upload a change to
an existing file to google drive using rclone it will create a new
@ -380,14 +382,14 @@ was
* They are deleted after 30 days or 100 revisions (whatever comes first).
* They do not count towards a user storage quota.
### Deleting files ###
### Deleting files
By default rclone will send all files to the trash when deleting
files. If deleting them permanently is required then use the
`--drive-use-trash=false` flag, or set the equivalent environment
variable.
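For example, to delete the contents of `remote:tmp` permanently rather than moving them to the trash (an illustrative path):

    rclone delete --drive-use-trash=false remote:tmp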
### Shortcuts ###
### Shortcuts
In March 2020 Google introduced a new feature in Google Drive called
[drive shortcuts](https://support.google.com/drive/answer/9700156)
@ -427,7 +429,7 @@ The [rclone backend](https://rclone.org/commands/rclone_backend/) command can be
Shortcuts can be completely ignored with the `--drive-skip-shortcuts` flag
or the corresponding `skip_shortcuts` configuration setting.
### Emptying trash ###
### Emptying trash
If you wish to empty your trash you can use the `rclone cleanup remote:`
command which will permanently delete all your trashed files. This command
@ -437,7 +439,7 @@ Note that Google Drive takes some time (minutes to days) to empty the
trash even though the command returns within a few seconds. No output
is echoed, so there will be no confirmation even using -v or -vv.
### Quota information ###
### Quota information
To view your current quota you can use the `rclone about remote:`
command which will display your usage limit (quota), the usage in Google
@ -445,7 +447,7 @@ Drive, the size of all files in the Trash and the space used by other
Google services such as Gmail. This command does not take any path
arguments.
#### Import/Export of google documents ####
#### Import/Export of google documents
Google documents can be exported from and uploaded to Google Drive.
@ -1221,7 +1223,7 @@ Use the -i flag to see what would be copied before copying.
{{< rem autogenerated options stop >}}
### Limitations ###
## Limitations
Drive has quite a lot of rate limiting. This causes rclone to be
limited to transferring about 2 files per second only. Individual
@ -1233,7 +1235,7 @@ see User rate limit exceeded errors, wait at least 24 hours and retry.
You can disable server-side copies with `--disable copy` to download
and upload the files if you prefer.
#### Limitations of Google Docs ####
### Limitations of Google Docs
Google docs will appear as size -1 in `rclone ls` and as size 0 in
anything which uses the VFS layer, e.g. `rclone mount`, `rclone serve`.
@ -1251,7 +1253,7 @@ correct size and be downloadable. Whether it will work or not depends
on the application accessing the mount and the OS you are running -
experiment to find out if it does work for you!
### Duplicated files ###
### Duplicated files
Sometimes, for no reason I've been able to track down, drive will
duplicate a file that rclone uploads. Drive unlike all the other
@ -1265,7 +1267,7 @@ Use `rclone dedupe` to fix duplicated files.
Note that this isn't just a problem with rclone, even Google Photos on
Android duplicates files on drive sometimes.
### Rclone appears to be re-copying files it shouldn't ###
### Rclone appears to be re-copying files it shouldn't
The most likely cause of this is the duplicated file issue above - run
`rclone dedupe` and check your logs for duplicate object or directory
@ -1280,7 +1282,7 @@ Waiting a moderate period of time between attempts (estimated to be
approximately 1 hour) and/or not using --fast-list both seem to be
effective in preventing the problem.
### Making your own client_id ###
## Making your own client_id
When you use rclone with Google drive in its default configuration you
are using rclone's client_id. This is shared between all the rclone


@ -10,6 +10,8 @@ Paths are specified as `remote:path`
Dropbox paths may be as deep as required, e.g.
`remote:directory/subdirectory`.
## Configuration
The initial setup for dropbox involves getting a token from Dropbox
which you need to do in your browser. `rclone config` walks you
through it.
@ -67,7 +69,7 @@ To copy a local directory to a dropbox directory called backup
rclone copy /home/source remote:backup
### Dropbox for business ###
### Dropbox for business
Rclone supports Dropbox for business and Team Folders.
@ -84,7 +86,7 @@ You can then use team folders like this `remote:/TeamFolder` and
A leading `/` for a Dropbox personal account will do nothing, but it
will take an extra HTTP transaction so it should be avoided.
### Modified time and Hashes ###
### Modified time and Hashes
Dropbox supports modified times, but the only way to set a
modification time is to re-upload the file.
@ -390,7 +392,7 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
{{< rem autogenerated options stop >}}
### Limitations ###
## Limitations
Note that Dropbox is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".
@ -418,7 +420,7 @@ non-personal account otherwise the visibility may not be correct.
[forum discussion](https://forum.rclone.org/t/rclone-link-dropbox-permissions/23211) and the
[dropbox SDK issue](https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75).
### Get your own Dropbox App ID ###
## Get your own Dropbox App ID
When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users.


@ -13,6 +13,8 @@ Paths are specified as `remote:path`
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Configuration
The initial setup for 1Fichier involves getting the API key from the website which you
need to do in your browser.
@ -85,7 +87,7 @@ normal file system).
Duplicated files cause problems with the syncing and you will see
messages in the log about duplicates.
#### Restricted filename characters
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
@ -172,7 +174,8 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
- Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
### Limitations
## Limitations
`rclone about` is not supported by the 1Fichier backend. Backends without
this capability cannot determine free space for an rclone mount or


@ -10,6 +10,8 @@ Fabric™](https://storagemadeeasy.com/about/) which provides a software
solution to integrate and unify File and Object Storage accessible
through a global file system.
## Configuration
The initial setup for the Enterprise File Fabric backend involves
getting a token from the Enterprise File Fabric which you need to
do in your browser. `rclone config` walks you through it.


@ -15,6 +15,8 @@ Paths are specified as `remote:path`. If the path does not begin with
a `/` it is relative to the home directory of the user. An empty path
`remote:` refers to the user's home directory.
## Configuration
To create an FTP configuration named `remote`, run
rclone config
@ -103,7 +105,6 @@ excess files in the directory.
rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=`rclone obscure dummy`
### Implicit TLS ###
Rclone FTP supports implicit FTP over TLS servers (FTPS). This has to
@ -111,6 +112,29 @@ be enabled in the FTP backend config for the remote, or with
[`--ftp-tls`](#ftp-tls). The default FTPS port is `990`, not `21` and
can be set with [`--ftp-port`](#ftp-port).
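For example, an illustrative listing of an implicit-TLS server (assuming a configured remote named `remote`; port 990 is the default, the flag is shown only for clarity):

    rclone lsf remote: --ftp-tls --ftp-port 990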
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
File names cannot end with the following characters. Replacement is
limited to the last character in a file name:
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| SP | 0x20 | ␠ |
Not all FTP servers can have all characters in file names, for example:
| FTP Server| Forbidden characters |
| --------- |:--------------------:|
| proftpd | `*` |
| pureftpd | `\ [ ]` |
This backend's interactive configuration wizard provides a selection of
sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, VsFTPd.
Just hit a selection number when prompted.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/ftp/ftp.go then run make backenddocs" >}}
### Standard Options
@ -266,7 +290,7 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
{{< rem autogenerated options stop >}}
### Limitations
## Limitations
FTP servers acting as rclone remotes must support `passive` mode.
The mode cannot be configured as `passive` is the only supported one.
@ -300,26 +324,3 @@ Rclone's FTP backend could support server-side move but does not
at present.
The `ftp_proxy` environment variable is not currently supported.
#### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
File names cannot end with the following characters. Replacement is
limited to the last character in a file name:
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| SP | 0x20 | ␠ |
Not all FTP servers can have all characters in file names, for example:
| FTP Server| Forbidden characters |
| --------- |:--------------------:|
| proftpd | `*` |
| pureftpd | `\ [ ]` |
This backend's interactive configuration wizard provides a selection of
sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, VsFTPd.
Just hit a selection number when prompted.


@ -8,6 +8,8 @@ description: "Rclone docs for Google Cloud Storage"
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
## Configuration
The initial setup for google cloud storage involves getting a token from Google Cloud Storage
which you need to do in your browser. `rclone config` walks you
through it.
@ -520,7 +522,8 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
- Default: Slash,CrLf,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
### Limitations
## Limitations
`rclone about` is not supported by the Google Cloud Storage backend. Backends without
this capability cannot determine free space for an rclone mount or


@ -13,7 +13,7 @@ Google Photos.
limitations, so please read the [limitations section](#limitations)
carefully to make sure it is suitable for your use.
## Configuring Google Photos
## Configuration
The initial setup for google photos involves getting a token from Google Photos
which you need to do in your browser. `rclone config` walks you
@ -113,7 +113,7 @@ files in the album.
rclone sync -i /home/local/image remote:album/newAlbum
## Layout
### Layout
As Google Photos is not a general purpose cloud storage system the
backend is laid out to help you navigate it.
@ -221,100 +221,6 @@ filesystem and it is a good target for repeated syncing.
The `shared-album` directory shows albums shared with you or by you.
This is similar to the Sharing tab in the Google Photos web interface.
## Limitations
Only images and videos can be uploaded. If you attempt to upload files
that are not videos or images, or formats that Google Photos doesn't understand,
rclone will upload the file, then Google Photos will give an error
when it is turned into a media item.
Note that all media items uploaded to Google Photos through the API
are stored in full resolution at "original quality" and **will** count
towards your storage quota in your Google Account. The API does
**not** offer a way to upload in "high quality" mode.
`rclone about` is not supported by the Google Photos backend. Backends without
this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union
remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
See [rclone about](https://rclone.org/commands/rclone_about/)
### Downloading Images
When Images are downloaded this strips EXIF location (according to the
docs and my tests). This is a limitation of the Google Photos API and
is covered by [bug #112096115](https://issuetracker.google.com/issues/112096115).
**The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort**
### Downloading Videos
When videos are downloaded they are downloaded in a really compressed
version of the video compared to downloading it via the Google Photos
web interface. This is covered by [bug #113672044](https://issuetracker.google.com/issues/113672044).
### Duplicates
If a file name is duplicated in a directory then rclone will add the
file ID into its name. So two files called `file.jpg` would then
appear as `file {123456}.jpg` and `file {ABCDEF}.jpg` (the actual IDs
are a lot longer alas!).
If you upload the same image (with the same binary data) twice then
Google Photos will deduplicate it. However it will retain the
filename from the first upload which may confuse rclone. For example
if you uploaded an image to `upload` then uploaded the same image to
`album/my_album` the filename of the image in `album/my_album` will be
what it was uploaded with initially, not what you uploaded it with to
`album`. In practice this shouldn't cause too many problems.
### Modified time
The date shown of media in Google Photos is the creation date as
determined by the EXIF information, or the upload date if that is not
known.
This is not changeable by rclone and is not the modification date of
the media on local disk. This means that rclone cannot use the dates
from Google Photos for syncing purposes.
### Size
The Google Photos API does not return the size of media. This means
that when syncing to Google Photos, rclone can only do a file
existence check.
It is possible to read the size of the media, but this needs an extra
HTTP HEAD request per media item so is **very slow** and uses up a lot of
transactions. This can be enabled with the `--gphotos-read-size`
option or the `read_size = true` config parameter.
If you want to use the backend with `rclone mount` you may need to
enable this flag (depending on your OS and application using the
photos) otherwise you may not be able to read media off the mount.
You'll need to experiment to see if it works for you without the flag.
### Albums
Rclone can only upload files to albums it created. This is a
[limitation of the Google Photos API](https://developers.google.com/photos/library/guides/manage-albums).
Rclone can only remove files it uploaded from albums it created.
### Deleting files
Rclone can remove files from albums it created, but note that the
Google Photos API does not allow media to be deleted permanently so
this media will still remain. See [bug #109759781](https://issuetracker.google.com/issues/109759781).
Rclone cannot delete files anywhere except under `album`.
### Deleting albums
The Google Photos API does not support deleting albums - see [bug #135714733](https://issuetracker.google.com/issues/135714733).
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/googlephotos/googlephotos.go then run make backenddocs" >}}
### Standard Options
@ -431,3 +337,97 @@ listings and won't be transferred.
- Default: false
{{< rem autogenerated options stop >}}
## Limitations
Only images and videos can be uploaded. If you attempt to upload files
that are not videos or images, or formats that Google Photos doesn't understand,
rclone will upload the file, then Google Photos will give an error
when it is turned into a media item.
Note that all media items uploaded to Google Photos through the API
are stored in full resolution at "original quality" and **will** count
towards your storage quota in your Google Account. The API does
**not** offer a way to upload in "high quality" mode.
`rclone about` is not supported by the Google Photos backend. Backends without
this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union
remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
See [rclone about](https://rclone.org/commands/rclone_about/)
### Downloading Images
When Images are downloaded this strips EXIF location (according to the
docs and my tests). This is a limitation of the Google Photos API and
is covered by [bug #112096115](https://issuetracker.google.com/issues/112096115).
**The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort**
### Downloading Videos
When videos are downloaded they are downloaded in a really compressed
version of the video compared to downloading it via the Google Photos
web interface. This is covered by [bug #113672044](https://issuetracker.google.com/issues/113672044).
### Duplicates
If a file name is duplicated in a directory then rclone will add the
file ID into its name. So two files called `file.jpg` would then
appear as `file {123456}.jpg` and `file {ABCDEF}.jpg` (the actual IDs
are a lot longer alas!).
If you upload the same image (with the same binary data) twice then
Google Photos will deduplicate it. However it will retain the
filename from the first upload which may confuse rclone. For example
if you uploaded an image to `upload` then uploaded the same image to
`album/my_album` the filename of the image in `album/my_album` will be
what it was uploaded with initially, not what you uploaded it with to
`album`. In practice this shouldn't cause too many problems.
### Modified time
The date shown of media in Google Photos is the creation date as
determined by the EXIF information, or the upload date if that is not
known.
This is not changeable by rclone and is not the modification date of
the media on local disk. This means that rclone cannot use the dates
from Google Photos for syncing purposes.
### Size
The Google Photos API does not return the size of media. This means
that when syncing to Google Photos, rclone can only do a file
existence check.
It is possible to read the size of the media, but this needs an extra
HTTP HEAD request per media item so is **very slow** and uses up a lot of
transactions. This can be enabled with the `--gphotos-read-size`
option or the `read_size = true` config parameter.
If you want to use the backend with `rclone mount` you may need to
enable this flag (depending on your OS and application using the
photos) otherwise you may not be able to read media off the mount.
You'll need to experiment to see if it works for you without the flag.
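For example, an illustrative listing with sizes enabled (using the `album/newAlbum` path from the example above):

    rclone ls --gphotos-read-size remote:album/newAlbum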
### Albums
Rclone can only upload files to albums it created. This is a
[limitation of the Google Photos API](https://developers.google.com/photos/library/guides/manage-albums).
Rclone can only remove files it uploaded from albums it created.
### Deleting files
Rclone can remove files from albums it created, but note that the
Google Photos API does not allow media to be deleted permanently so
this media will still remain. See [bug #109759781](https://issuetracker.google.com/issues/109759781).
Rclone cannot delete files anywhere except under `album`.
### Deleting albums
The Google Photos API does not support deleting albums - see [bug #135714733](https://issuetracker.google.com/issues/135714733).


@ -10,6 +10,8 @@ distributed file-system, part of the [Apache Hadoop](https://hadoop.apache.org/)
Paths are specified as `remote:` or `remote:path/to/dir`.
## Configuration
Here is an example of how to make a remote called `remote`. First run:
rclone config
@ -146,11 +148,6 @@ the following characters are also replaced:
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8).
### Limitations
- No server-side `Move` or `DirMove`.
- Checksums not implemented.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/hdfs/hdfs.go then run make backenddocs" >}}
### Standard Options
@ -228,3 +225,8 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
- Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
## Limitations
- No server-side `Move` or `DirMove`.
- Checksums not implemented.


@ -14,6 +14,8 @@ issue, or send a pull request!)
Paths are specified as `remote:` or `remote:path/to/dir`.
## Configuration
Here is an example of how to make a remote called `remote`. First
run:
@ -185,7 +187,8 @@ If you set this option, rclone will not do the HEAD request. This will mean
- Default: false
{{< rem autogenerated options stop >}}
### Limitations
## Limitations
`rclone about` is not supported by the HTTP backend. Backends without
this capability cannot determine free space for an rclone mount or


@ -10,6 +10,8 @@ Paths are specified as `remote:path`
Paths are specified as `remote:container` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:container/path/to/dir`.
## Configuration
The initial setup for Hubic involves getting a token from Hubic which
you need to do in your browser. `rclone config` walks you through it.
@ -205,7 +207,7 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
{{< rem autogenerated options stop >}}
### Limitations ###
## Limitations
This uses the normal OpenStack Swift mechanism to refresh the Swift
API credentials and ignores the expires field returned by the Hubic


@ -14,30 +14,33 @@ Paths are specified as `remote:path`
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Setup
## Authentication types
### Default Setup
Some of the whitelabel versions use a different authentication method than the official service,
and you have to choose the correct one when setting up the remote.
### Standard authentication
To configure Jottacloud you will need to generate a personal security token in the Jottacloud web interface.
You will find the option to do so in your [account security settings](https://www.jottacloud.com/web/secure)
(for a whitelabel version you need to find this page in its web interface).
Note that the web interface may refer to this token as a JottaCli token.
### Legacy Setup
### Legacy authentication
If you are using one of the whitelabel versions (Elgiganten, Com Hem Cloud) you may not have the option
to generate a CLI token. In this case you'll have to use the legacy authentication. To do this, select
yes when the setup asks for legacy authentication and enter your username and password.
The rest of the setup is identical to the default setup.
### Telia Cloud Setup
### Telia Cloud authentication
Similar to other whitelabel versions Telia Cloud doesn't offer the option of creating a CLI token, and
additionally uses a separate authentication flow where the username is generated internally. To setup
rclone to use Telia Cloud, choose Telia Cloud authentication in the setup. The rest of the setup is
identical to the default setup.
### Example
## Configuration
Here is an example of how to make a remote called `remote` with the default setup. First run:
@ -164,7 +167,7 @@ true for crypted remotes (in older versions the crypt backend would not
calculate hashes for uploads from local disk, so the Jottacloud
backend had to do it as described above).
#### Restricted filename characters
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
@ -267,7 +270,7 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
{{< rem autogenerated options stop >}}
### Limitations
## Limitations
Note that Jottacloud is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".
@ -277,7 +280,7 @@ looking unicode equivalent. For example if a file has a ? in it will be mapped t
Jottacloud only supports filenames up to 255 characters in length.
### Troubleshooting
## Troubleshooting
Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove
operations to previously deleted paths to fail. Emptying the trash should help in such cases.


@ -9,6 +9,8 @@ Paths are specified as `remote:path`
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Configuration
The initial setup for Koofr involves creating an application password for
rclone. You can do that by opening the Koofr
[web application](https://app.koofr.net/app/admin/preferences/password),
@ -84,7 +86,7 @@ To copy a local directory to a Koofr directory called backup
rclone copy /home/source remote:backup
#### Restricted filename characters
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
@ -165,7 +167,7 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
{{< rem autogenerated options stop >}}
### Limitations ###
## Limitations
Note that Koofr is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".


@ -11,6 +11,8 @@ Local paths are specified as normal filesystem paths, e.g. `/path/to/wherever`,
Will sync `/home/source` to `/tmp/destination`.
## Configuration
For consistency's sake one can also configure a remote of type
`local` in the config file, and access the local filesystem using
rclone remote paths, e.g. `remote:path/to/wherever`, but it is probably


@ -9,7 +9,7 @@ description: "Mail.ru Cloud"
Currently it is recommended to disable 2FA on Mail.ru accounts intended for rclone until support for it is eventually implemented.
### Features highlights ###
## Features highlights
- Paths may be as deep as required, e.g. `remote:directory/subdirectory`
- Files have a `last modified time` property, directories don't
@ -22,7 +22,7 @@ Currently it is recommended to disable 2FA on Mail.ru accounts intended for rclo
- If a particular file is already present in storage, one can quickly submit file hash
instead of long file upload (this optimization is supported by rclone)
### Configuration ###
## Configuration
Here is an example of making a mailru configuration. First create a Mail.ru Cloud
account and choose a tariff, then run
@ -107,12 +107,12 @@ excess files in the path.
rclone sync -i /home/local/directory remote:directory
### Modified time ###
### Modified time
Files support a modification time attribute with up to 1 second precision.
Directories do not have a modification time, which is shown as "Jan 1 1970".
### Hash checksums ###
### Hash checksums
Hash sums use a custom Mail.ru algorithm based on SHA1.
If file size is less than or equal to the SHA1 block size (20 bytes),
@ -120,7 +120,7 @@ its hash is simply its data right-padded with zero bytes.
Hash sum of a larger file is computed as a SHA1 sum of the file data
bytes concatenated with a decimal representation of the data length.
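As a rough sketch of that scheme in Go (an illustration of the description above, not rclone's actual implementation):

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"strconv"
)

// mrHash illustrates the scheme described above: data of at most the
// SHA1 block size (20 bytes) hashes to itself right-padded with zero
// bytes; anything larger hashes to SHA1(data + decimal length).
func mrHash(data []byte) []byte {
	if len(data) <= sha1.Size {
		h := make([]byte, sha1.Size)
		copy(h, data) // right-padded with zero bytes
		return h
	}
	d := sha1.New()
	d.Write(data)
	d.Write([]byte(strconv.Itoa(len(data)))) // append decimal length
	return d.Sum(nil)
}

func main() {
	fmt.Println(hex.EncodeToString(mrHash([]byte("some file contents longer than twenty bytes"))))
}
```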
### Emptying Trash ###
### Emptying Trash
Removing a file or directory actually moves it to the trash, which is not
visible to rclone but can be seen in a web browser. The trashed file
@ -129,12 +129,12 @@ and free some quota, you can use the `rclone cleanup remote:` command,
which will permanently delete all your trashed files.
This command does not take any path arguments.
### Quota information ###
### Quota information
To view your current quota you can use the `rclone about remote:`
command which will display your usage limit (quota) and the current usage.
#### Restricted filename characters
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
@ -153,15 +153,6 @@ the following characters are also replaced:
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
### Limitations ###
File size limits depend on your account. A single file size is limited to 2G
for a free account and unlimited for paid tariffs. Please refer to the Mail.ru
site for the total uploaded size limits.
Note that Mailru is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/mailru/mailru.go then run make backenddocs" >}}
### Standard Options
@ -315,3 +306,12 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
## Limitations
File size limits depend on your account. A single file size is limited to 2G
for a free account and unlimited for paid tariffs. Please refer to the Mail.ru
site for the total uploaded size limits.
Note that Mailru is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".


@ -18,6 +18,8 @@ Paths are specified as `remote:path`
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Configuration
Here is an example of how to make a remote called `remote`. First run:
rclone config
@ -79,11 +81,11 @@ To copy a local directory to an Mega directory called backup
rclone copy /home/source remote:backup
### Modified time and hashes ###
### Modified time and hashes
Mega does not support modification times or hashes yet.
#### Restricted filename characters
### Restricted filename characters
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
@ -93,7 +95,7 @@ Mega does not support modification times or hashes yet.
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
### Duplicated files ###
### Duplicated files
Mega can have two files with exactly the same name and path (unlike a
normal file system).
@ -103,7 +105,7 @@ messages in the log about duplicates.
Use `rclone dedupe` to fix duplicated files.
### Failure to log-in ###
### Failure to log-in
Mega remotes seem to get blocked (reject logins) under "heavy use".
We haven't worked out the exact blocking rules but it seems to be
@ -216,7 +218,7 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
{{< rem autogenerated options stop >}}
### Limitations ###
## Limitations
This backend uses the [go-mega go library](https://github.com/t3rm1n4l/go-mega) which is an open source
Go library implementing the Mega API. There doesn't appear to be any


@ -12,6 +12,8 @@ The memory backend behaves like a bucket based remote (e.g. like
s3). Because it has no parameters you can just use it with the
`:memory:` remote name.
## Configuration
You can configure it as a remote like this with `rclone config` too if
you want to:
@ -51,11 +53,11 @@ testing or with an rclone server or rclone mount, e.g.
rclone serve webdav :memory:
rclone serve sftp :memory:
### Modified time and hashes ###
### Modified time and hashes
The memory backend supports MD5 hashes and modification times accurate to 1 ns.
#### Restricted filename characters
### Restricted filename characters
The memory backend replaces the [default restricted characters
set](/overview/#restricted-characters).


@ -9,6 +9,8 @@ Paths are specified as `remote:path`
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Configuration
The initial setup for OneDrive involves getting a token from
Microsoft which you need to do in your browser. `rclone config` walks
you through it.
@ -116,7 +118,7 @@ To copy a local directory to a OneDrive directory called backup
rclone copy /home/source remote:backup
### Getting your own Client ID and Key ###
### Getting your own Client ID and Key
You can use your own Client ID if the default (`client_id` left blank)
one doesn't work for you or you see lots of throttling. The default
@ -135,7 +137,7 @@ Client ID and Key by following the steps below:
Now the application is complete. Run `rclone config` to create or edit a OneDrive remote.
Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps.
### Modification time and hashes ###
### Modification time and hashes
OneDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@ -147,7 +149,7 @@ Sharepoint Server support
For all types of OneDrive you can use the `--checksum` flag.
### Restricted filename characters ###
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
@ -182,7 +184,7 @@ These only get replaced if they are the first character in the name:
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
### Deleting files ###
### Deleting files
Any files you delete with rclone will end up in the trash. Microsoft
doesn't provide an API to permanently delete files, nor to empty the
@ -409,14 +411,14 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
{{< rem autogenerated options stop >}}
### Limitations
## Limitations
If you don't use rclone for 90 days the refresh token will
expire. This will result in authorization problems. This is easy to
fix by running the `rclone config reconnect remote:` command to get a
new token and refresh token.
#### Naming ####
### Naming
Note that OneDrive is case insensitive so you can't have a
file called "Hello.doc" and one called "hello.doc".
@ -427,15 +429,15 @@ platforms they are common. Rclone will map these names to and from an
identical looking unicode equivalent. For example if a file has a `?`
in it, it will be mapped to `？` instead.
#### File sizes ####
### File sizes
The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive for Business [(Updated 13 Jan 2021)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize).
#### Path length ####
### Path length
The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones.
#### Number of files ####
### Number of files
OneDrive seems to be OK with at least 50,000 files in a folder, but at
100,000 rclone will get errors listing the directory like `couldnt
@ -444,7 +446,7 @@ list files: UnknownError:`. See
An official document about the limitations for different types of OneDrive can be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa).
### Versions
## Versions
Every change in a file on OneDrive causes the service to create a new
version of the file. This counts against a user's quota. For
@ -500,7 +502,7 @@ Note: This will disable the creation of new file versions, but will not remove a
8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag)
9. Restore the versioning settings after using rclone. (Optional)
### Cleanup
## Cleanup
OneDrive supports `rclone cleanup` which causes rclone to look through
every file under the path supplied and delete all versions but the
@ -514,15 +516,15 @@ is a great way to see what it would do.
**NB** Onedrive personal can't currently delete versions
### Troubleshooting ###
## Troubleshooting ##
#### Excessive throttling or blocked on SharePoint
### Excessive throttling or blocked on SharePoint
If you experience excessive throttling or is being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: `--user-agent "ISV|rclone.org|rclone/v1.55.1"`
The specific details can be found in the Microsoft document: [Avoid getting throttled or blocked in SharePoint Online](https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling)
#### Unexpected file size/hash differences on Sharepoint ####
### Unexpected file size/hash differences on Sharepoint
It is a
[known](https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631)
@ -546,7 +548,7 @@ file to be converted in place to a format that is functionally equivalent
but which will no longer trigger the size discrepancy. Once all problematic files
are converted you will no longer need the ignore options above.
#### Replacing/deleting existing files on Sharepoint gets "item not found" ####
### Replacing/deleting existing files on Sharepoint gets "item not found"
It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue
that Sharepoint (not OneDrive or OneDrive for Business) may return "item not
@ -561,7 +563,7 @@ the directory `rclone-backup-dir` on backend `mysharepoint`, you may use:
--backup-dir mysharepoint:rclone-backup-dir
```
#### access\_denied (AADSTS65005) ####
### access\_denied (AADSTS65005)
```
Error: access_denied
@ -573,7 +575,7 @@ This means that rclone can't use the OneDrive for Business API with your account
However, there are other ways to interact with your OneDrive account. Have a look at the webdav backend: https://rclone.org/webdav/#sharepoint
#### invalid\_grant (AADSTS50076) ####
### invalid\_grant (AADSTS50076)
```
Error: invalid_grant
@ -583,7 +585,7 @@ Description: Due to a configuration change made by your administrator, or becaus
If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run `rclone config`, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: `Already have a token - refresh?`. For this question, answer `y` and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend.
#### Invalid request when making public links ####
### Invalid request when making public links
On Sharepoint and OneDrive for Business, `rclone link` may return an "Invalid
request" error. A possible cause is that the organisation admin didn't allow
View File
@ -9,6 +9,8 @@ Paths are specified as `remote:path`
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Configuration
Here is an example of how to make a remote called `remote`. First run:
rclone config
@ -61,13 +63,13 @@ To copy a local directory to an OpenDrive directory called backup
rclone copy /home/source remote:backup
### Modified time and MD5SUMs ###
### Modified time and MD5SUMs
OpenDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
not.
#### Restricted filename characters
### Restricted filename characters
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
@ -151,7 +153,7 @@ increase memory use.
{{< rem autogenerated options stop >}}
### Limitations ###
## Limitations
Note that OpenDrive is case insensitive so you can't have a
file called "Hello.doc" and one called "hello.doc".
View File
@ -9,6 +9,8 @@ Paths are specified as `remote:path`
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Configuration
The initial setup for pCloud involves getting a token from pCloud which you
need to do in your browser. `rclone config` walks you through it.
@ -92,7 +94,7 @@ be re-uploaded.
pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256
hashes in the EU region, so you can use the `--checksum` flag.
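For example, a sketch of a sync that compares checksums instead of sizes and modification times (remote name and path are placeholders):

    rclone sync -i --checksum /home/local/directory remote:backup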
#### Restricted filename characters
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
@ -104,13 +106,13 @@ the following characters are also replaced:
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
### Deleting files ###
### Deleting files
Deleted files will be moved to the trash. Your subscription level
will determine how long items stay in the trash. `rclone cleanup` can
be used to empty the trash.
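For example, to empty the trash for the whole remote (the remote name is a placeholder):

    rclone cleanup remote: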
### Root folder ID ###
### Root folder ID
You can set the `root_folder_id` for rclone. This is the directory
(identified by its `Folder ID`) that rclone considers to be the root
View File
@ -9,6 +9,8 @@ Paths are specified as `remote:path`
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Configuration
The initial setup for [premiumize.me](https://premiumize.me/) involves getting a token from premiumize.me which you
need to do in your browser. `rclone config` walks you through it.
To copy a local directory to a premiumize.me directory called backup
rclone copy /home/source remote:backup
### Modified time and hashes ###
### Modified time and hashes
premiumize.me does not support modification times or hashes, therefore
syncing will default to `--size-only` checking. Note that using
`--update` will work.
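For example, a sketch relying on `--update` (remote name and path are placeholders):

    rclone sync -i --update /home/local/directory remote:backup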
#### Restricted filename characters
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
@ -133,7 +135,7 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
{{< rem autogenerated options stop >}}
### Limitations ###
## Limitations
Note that premiumize.me is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".
View File
@ -10,6 +10,8 @@ Paths are specified as `remote:path`
put.io paths may be as deep as required, e.g.
`remote:directory/subdirectory`.
## Configuration
The initial setup for put.io involves getting a token from put.io
which you need to do in your browser. `rclone config` walks you
through it.
@ -94,7 +96,7 @@ To copy a local directory to a put.io directory called backup
rclone copy /home/source remote:backup
#### Restricted filename characters
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
View File
@ -8,6 +8,8 @@ description: "Rclone docs for QingStor Object Storage"
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
## Configuration
Here is an example of making a QingStor configuration. First run
rclone config
@ -91,13 +93,13 @@ files in the bucket.
rclone sync -i /home/local/directory remote:bucket
### --fast-list ###
### --fast-list
This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.
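For example (bucket name and path are placeholders):

    rclone sync -i --fast-list /home/local/directory remote:bucket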
### Multipart uploads ###
### Multipart uploads
rclone supports multipart uploads with QingStor which means that it can
upload files bigger than 5 GiB. Note that files uploaded with multipart
@ -109,7 +111,7 @@ removed with `rclone cleanup remote:bucket` just for one bucket
remove incomplete multipart uploads so it may be necessary to run this
from time to time.
### Buckets and Zone ###
### Buckets and Zone
With QingStor you can list buckets (`rclone lsd`) using any zone,
but you can only access the content of a bucket from the zone it was
@ -117,7 +119,7 @@ created in. If you attempt to access a bucket from the wrong zone,
you will get an error, `incorrect zone, the bucket is not in 'XXX'
zone`.
### Authentication ###
### Authentication
There are two ways to supply `rclone` with a set of QingStor
credentials. In order of precedence:
@ -282,7 +284,8 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
- Default: Slash,Ctl,InvalidUtf8
{{< rem autogenerated options stop >}}
### Limitations
## Limitations
`rclone about` is not supported by the qingstor backend. Backends without
this capability cannot determine free space for an rclone mount or
View File
@ -8,7 +8,7 @@ description: "Rclone docs for Amazon S3"
The S3 backend can be used with a number of different providers:
{{< provider_list >}}
{{< provider name="AWS S3" home="https://aws.amazon.com/s3/" config="/s3/#amazon-s3" start="true" >}}
{{< provider name="AWS S3" home="https://aws.amazon.com/s3/" config="/s3/#configuration" start="true" >}}
{{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}}
{{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
{{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
@ -45,9 +45,12 @@ files in the bucket.
rclone sync -i /home/local/directory remote:bucket
## AWS S3 {#amazon-s3}
## Configuration
Here is an example of making an s3 configuration. First run
Here is an example of making an s3 configuration for the AWS S3 provider.
Most of this applies to the other providers as well; any differences are described [below](#providers).
First run
rclone config
@ -248,7 +251,7 @@ d) Delete this remote
y/e/d>
```
### Modified time ###
### Modified time
The modified time is stored as metadata on the object as
`X-Amz-Meta-Mtime` as floating point since the epoch, accurate to 1 ns.
@ -352,7 +355,7 @@ there for more details.
Setting this flag increases the chance for undetected upload failures.
### Hashes ###
### Hashes
For small objects which weren't uploaded as multipart uploads (objects
sized below `--s3-upload-cutoff` if uploaded with rclone) rclone uses
@ -371,7 +374,7 @@ This will mean that these objects do not have an MD5 checksum.
Note that reading this from the object takes an additional `HEAD`
request as the metadata isn't returned in object listings.
### Cleanup ###
### Cleanup
If you run `rclone cleanup s3:bucket` then it will remove all pending
multipart uploads older than 24 hours. You can use the `-i` flag to
@ -381,7 +384,7 @@ expire all uploads older than one hour. You can use `rclone backend
list-multipart-uploads s3:bucket` to see the pending multipart
uploads.
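For example, to inspect pending uploads and then clean them up interactively (the bucket name is a placeholder):

    rclone backend list-multipart-uploads s3:bucket
    rclone cleanup -i s3:bucket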
#### Restricted filename characters
### Restricted filename characters
S3 allows any valid UTF-8 string as a key.
@ -404,7 +407,7 @@ work with the SDK properly:
| . | |
| .. | |
### Multipart uploads ###
### Multipart uploads
rclone supports multipart uploads with S3 which means that it can
upload files bigger than 5 GiB.
@ -435,7 +438,7 @@ use more memory. The default values are high enough to gain most of
the possible performance without using too much memory.
### Buckets and Regions ###
### Buckets and Regions
With Amazon S3 you can list buckets (`rclone lsd`) using any region,
but you can only access the content of a bucket from the region it was
@ -443,7 +446,7 @@ created in. If you attempt to access a bucket from the wrong region,
you will get an error, `incorrect region, the bucket is not in 'XXX'
region`.
### Authentication ###
### Authentication
There are a number of ways to supply `rclone` with a set of AWS
credentials, with and without using the environment.
@ -470,7 +473,7 @@ The different authentication methods are tried in this order:
If none of these options actually end up providing `rclone` with AWS
credentials then S3 interaction will be non-authenticated (see below).
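A minimal sketch of the environment variable route, assuming a remote configured with `env_auth = true` (the variable values and remote name are placeholders):

```
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=XXX
rclone lsd s3remote:
```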
### S3 Permissions ###
### S3 Permissions
When using the `sync` subcommand of `rclone` the following minimum
permissions are required to be available on the bucket being written to:
@ -525,14 +528,14 @@ Notes on above:
For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
that will generate one or more buckets that will work with `rclone sync`.
### Key Management System (KMS) ###
### Key Management System (KMS)
If you are using server-side encryption with KMS then you must make
sure rclone is configured with `server_side_encryption = aws:kms`
otherwise you will find you can't transfer small objects - these will
create checksum errors.
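A hypothetical config snippet with SSE-KMS enabled (the remote name is a placeholder; authentication fields are omitted):

```
[s3remote]
type = s3
provider = AWS
server_side_encryption = aws:kms
```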
### Glacier and Glacier Deep Archive ###
### Glacier and Glacier Deep Archive
You can upload objects using the glacier storage class or transition them to glacier using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).
The bucket can still be synced or copied into normally, but if rclone
@ -1908,7 +1911,13 @@ Then use it as normal with the name of the public bucket, e.g.
You will be able to list and copy data but not upload it.
## Ceph
## Providers
### AWS S3
This is the provider used as the main example and described in the [configuration](#configuration) section above.
### Ceph
[Ceph](https://ceph.com/) is an open source unified, distributed
storage system designed for excellent performance, reliability and
@ -1964,7 +1973,7 @@ removed).
Because this is a json dump, it is encoding the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine.
## Dreamhost
### Dreamhost
Dreamhost [DreamObjects](https://www.dreamhost.com/cloud/storage/) is
an object storage system based on CEPH.
@ -1988,7 +1997,7 @@ server_side_encryption =
storage_class =
```
## DigitalOcean Spaces
### DigitalOcean Spaces
[Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean.
@ -2034,7 +2043,7 @@ rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
```
## IBM COS (S3)
### IBM COS (S3)
Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM's Cloud Object Storage System (formerly Cleversafe). For more information visit: http://www.ibm.com/cloud/object-storage
@ -2206,7 +2215,7 @@ acl> 1
rclone delete IBM-COS-XREGION:newbucket/file.txt
```
## Minio
### Minio
[Minio](https://minio.io/) is an object storage server built for cloud application developers and devops.
@ -2273,7 +2282,7 @@ So once set up, for example to copy files into a bucket
rclone copy /path/to/files minio:bucket
```
## Scaleway
### Scaleway
[Scaleway](https://www.scaleway.com/object-storage/) Object Storage allows you to store anything from backups, logs and web assets to documents and photos.
Files can be dropped from the Scaleway console or transferred through the Scaleway API and CLI or using any S3-compatible tool.
@ -2295,7 +2304,7 @@ server_side_encryption =
storage_class =
```
## SeaweedFS
### SeaweedFS
[SeaweedFS](https://github.com/chrislusf/seaweedfs/) is a distributed storage system for
blobs, objects, files, and data lake, with O(1) disk seek and a scalable file metadata store.
@ -2345,7 +2354,7 @@ So once set up, for example to copy files into a bucket
rclone copy /path/to/files seaweedfs_s3:foo
```
## Wasabi
### Wasabi
[Wasabi](https://wasabi.com) is a cloud-based object storage service for a
broad range of applications and use cases. Wasabi is designed for
@ -2458,7 +2467,7 @@ server_side_encryption =
storage_class =
```
## Alibaba OSS {#alibaba-oss}
### Alibaba OSS {#alibaba-oss}
Here is an example of making an [Alibaba Cloud (Aliyun) OSS](https://www.alibabacloud.com/product/oss/)
configuration. First run:
@ -2568,7 +2577,7 @@ d) Delete this remote
y/e/d> y
```
## Tencent COS {#tencent-cos}
### Tencent COS {#tencent-cos}
[Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost.
@ -2700,7 +2709,7 @@ Name Type
cos s3
```
## Netease NOS
### Netease NOS
For Netease NOS configure as per the configurator `rclone config`,
setting the provider to `Netease`. This will automatically set
View File
@ -11,7 +11,7 @@ This is a backend for the [Seafile](https://www.seafile.com/) storage service:
- Encrypted libraries are also supported.
- It supports 2FA enabled users
### Root mode vs Library mode ###
## Configuration
There are two distinct modes in which you can set up your remote:
- you point your remote to the **root of the server**, meaning you don't specify a library during the configuration:
@ -19,7 +19,7 @@ Paths are specified as `remote:library`. You may put subdirectories in too, e.g.
- you point your remote to a specific library during the configuration:
Paths are specified as `remote:path/to/dir`. **This is the recommended mode when using encrypted libraries**. (_This mode is possibly slightly faster than the root mode_)
### Configuration in root mode ###
### Configuration in root mode
Here is an example of making a seafile configuration for a user with **no** two-factor authentication. First run
@ -113,7 +113,7 @@ excess files in the library.
rclone sync -i /home/local/directory seafile:library
### Configuration in library mode ###
### Configuration in library mode
Here's an example of a configuration in library mode with a user that has two-factor authentication enabled. You will be asked for your 2FA code at the end of the configuration, and rclone will attempt to authenticate you:
@ -210,7 +210,7 @@ excess files in the library.
rclone sync -i /home/local/directory seafile:
### --fast-list ###
### --fast-list
Seafile version 7+ supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
@ -218,7 +218,7 @@ docs](/docs/#fast-list) for more details.
Please note this is not supported on seafile server version 6.x
#### Restricted filename characters
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
@ -232,7 +232,7 @@ the following characters are also replaced:
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
### Seafile and rclone link ###
### Seafile and rclone link
Rclone supports generating share links for non-encrypted libraries only.
They can either be for a file or a directory:
@ -253,7 +253,7 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/
Please note a share link is unique for each file or directory. If you run a link command on a file/dir
that has already been shared, you will get the exact same link.
### Compatibility ###
### Compatibility
It has been actively tested using the [seafile docker image](https://github.com/haiwen/seafile-docker) of these versions:
- 6.3.4 community edition
View File
@ -29,6 +29,8 @@ directory for remote machine (i.e. `/`)
good example of this. rsync.net, on the other hand, requires users to
OMIT the leading /.
## Configuration
Here is an example of making an SFTP configuration. First run
rclone config
@ -108,7 +110,7 @@ Mount the remote path `/srv/www-data/` to the local path
rclone mount remote:/srv/www-data/ /mnt/www-data
### SSH Authentication ###
### SSH Authentication
The SFTP remote supports three authentication methods:
@ -562,7 +564,7 @@ Set to 0 to keep connections indefinitely.
{{< rem autogenerated options stop >}}
### Limitations ###
## Limitations
SFTP supports checksums if the same login has shell access and `md5sum`
or `sha1sum` as well as `echo` are in the remote's PATH.
View File
@ -7,6 +7,8 @@ description: "Rclone docs for Citrix ShareFile"
[Citrix ShareFile](https://sharefile.com) is a secure file sharing and transfer service aimed at businesses.
## Configuration
The initial setup for Citrix ShareFile involves getting a token from
Citrix ShareFile which you can do in your browser. `rclone config` walks you
through it.
@ -101,7 +103,7 @@ To copy a local directory to an ShareFile directory called backup
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
### Modified time and hashes ###
### Modified time and hashes
ShareFile allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@ -110,21 +112,14 @@ not.
ShareFile supports MD5 type hashes, so you can use the `--checksum`
flag.
### Transfers ###
### Transfers
For files above 128 MiB rclone will use a chunked transfer. Rclone will
upload up to `--transfers` chunks at the same time (shared among all
the multipart uploads). Chunks are buffered in memory and are
normally 64 MiB so increasing `--transfers` will increase memory use.
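For example, a sketch trading memory for parallelism (remote name and path are placeholders):

    rclone copy --transfers 8 /home/source remote:backup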
### Limitations ###
Note that ShareFile is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".
ShareFile only supports filenames up to 256 characters in length.
#### Restricted filename characters
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
@ -232,7 +227,12 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
### Limitations
## Limitations
Note that ShareFile is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".
ShareFile only supports filenames up to 256 characters in length.
`rclone about` is not supported by the Citrix ShareFile backend. Backends without
this capability cannot determine free space for an rclone mount or
View File
@ -9,6 +9,8 @@ description: "Rclone docs for SugarSync"
active synchronization of files across computers and other devices for
file backup, access, syncing, and sharing.
## Configuration
The initial setup for SugarSync involves getting a token from SugarSync which you
can do with rclone. `rclone config` walks you through it.
@ -95,13 +97,13 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
create a folder, which rclone will create as a "Sync Folder" with
SugarSync.
### Modified time and hashes ###
### Modified time and hashes
SugarSync does not support modification times or hashes, therefore
syncing will default to `--size-only` checking. Note that using
`--update` will work as rclone can read the time files were uploaded.
#### Restricted filename characters
### Restricted filename characters
SugarSync replaces the [default restricted characters set](/overview/#restricted-characters)
except for DEL.
@ -109,7 +111,7 @@ except for DEL.
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in XML strings.
### Deleting files ###
### Deleting files
Deleted files will be moved to the "Deleted items" folder by default.
@ -248,7 +250,8 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
- Default: Slash,Ctl,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
### Limitations
## Limitations
`rclone about` is not supported by the SugarSync backend. Backends without
this capability cannot determine free space for an rclone mount or
View File
@ -17,6 +17,8 @@ Commercial implementations of that being:
Paths are specified as `remote:container` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:container/path/to/dir`.
## Configuration
Here is an example of making a swift configuration. First run
rclone config
@ -133,7 +135,7 @@ excess files in the container.
rclone sync -i /home/local/directory remote:container
### Configuration from an OpenStack credentials file ###
### Configuration from an OpenStack credentials file
An OpenStack credentials file typically looks something
like this (without the comments)
@ -165,7 +167,7 @@ tenant = $OS_TENANT_NAME
Note that you may (or may not) need to set `region` too - try without first.
### Configuration from the environment ###
### Configuration from the environment
If you prefer you can configure rclone to use swift using a standard
set of OpenStack environment variables.
@ -179,7 +181,7 @@ the
variables](https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment)
in the docs for the swift library.
### Using an alternate authentication method ###
### Using an alternate authentication method
If your OpenStack installation uses a non-standard authentication method
that might not yet be supported by rclone or the underlying swift library,
@ -190,7 +192,7 @@ If they are both provided, the other variables are ignored. rclone will
not try to authenticate but instead assume it is already authenticated
and use these two variables to access the OpenStack installation.
#### Using rclone without a config file ####
#### Using rclone without a config file
You can use rclone with swift without a config file, if desired, like
this:
@ -202,13 +204,13 @@ export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
rclone lsd myremote:
```
### --fast-list ###
### --fast-list
This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.
### --update and --use-server-modtime ###
### --update and --use-server-modtime
As noted below, the modified time is stored as metadata on the object. It is
used by default for all operations that require checking the time a file was
@ -221,6 +223,25 @@ sufficient to determine if it is "dirty". By using `--update` along with
`--use-server-modtime`, you can avoid the extra API call and simply upload
files whose local modtime is newer than the time it was last uploaded.
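For example (container name and path are placeholders):

    rclone sync -i --update --use-server-modtime /home/local/directory remote:container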
### Modified time
The modified time is stored as metadata on the object as
`X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
ns.
This is a de facto standard (used in the official python-swiftclient
amongst others) for storing the modification time for an object.
### Restricted filename characters
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL | 0x00 | ␀ |
| / | 0x2F | ／ |
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/swift/swift.go then run make backenddocs" >}}
### Standard Options
@ -481,34 +502,15 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
{{< rem autogenerated options stop >}}
### Modified time ###
The modified time is stored as metadata on the object as
`X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
ns.
This is a de facto standard (used in the official python-swiftclient
amongst others) for storing the modification time for an object.
### Restricted filename characters
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| NUL | 0x00 | ␀ |
| / | 0x2F | ／ |
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
### Limitations ###
## Limitations
The Swift API doesn't return a correct MD5SUM for segmented files
(Dynamic or Static Large Objects) so rclone won't check or use the
MD5SUM for these.
### Troubleshooting ###
## Troubleshooting
#### Rclone gives Failed to create file system for "remote:": Bad Request ####
### Rclone gives Failed to create file system for "remote:": Bad Request
Due to an oddity of the underlying swift library, it gives a "Bad
Request" error rather than a more sensible error when the
@ -520,19 +522,20 @@ investigate further with the `--dump-bodies` flag.
This may also be caused by specifying the region when you shouldn't
have (e.g. OVH).
#### Rclone gives Failed to create file system: Response didn't have storage url and auth token ####
### Rclone gives Failed to create file system: Response didn't have storage url and auth token
This is most likely caused by forgetting to specify your tenant when
setting up a swift remote.
### OVH Cloud Archive ###
## OVH Cloud Archive
To use rclone with OVH cloud archive, first use `rclone config` to set up a `swift` backend with OVH, choosing `pca` as the `storage_policy`.
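An abridged, hypothetical config showing just the relevant key (authentication fields are omitted):

```
[ovh-archive]
type = swift
storage_policy = pca
```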
#### Uploading Objects ####
### Uploading Objects
Uploading objects to OVH cloud archive is no different from regular object storage; simply run the command you like (move, copy or sync) to upload the objects. Once uploaded the objects will show in a "Frozen" state within the OVH control panel.
#### Retrieving Objects ####
### Retrieving Objects
To retrieve objects use `rclone copy` as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following:
View File
@ -9,7 +9,7 @@ description: "Rclone docs for Tardigrade"
cost-effective object storage service that enables you to store, back up, and
archive large amounts of data in a decentralized manner.
## Setup
## Configuration
To make a new Tardigrade configuration you need one of the following:
* Access Grant that someone else shared with you.
@ -122,6 +122,70 @@ d) Delete this remote
y/e/d> y
```
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/tardigrade/tardigrade.go then run make backenddocs" >}}
### Standard Options
Here are the standard options specific to tardigrade (Tardigrade Decentralized Cloud Storage).
#### --tardigrade-provider
Choose an authentication method.
- Config: provider
- Env Var: RCLONE_TARDIGRADE_PROVIDER
- Type: string
- Default: "existing"
- Examples:
- "existing"
- Use an existing access grant.
- "new"
- Create a new access grant from satellite address, API key, and passphrase.
#### --tardigrade-access-grant
Access Grant.
- Config: access_grant
- Env Var: RCLONE_TARDIGRADE_ACCESS_GRANT
- Type: string
- Default: ""
#### --tardigrade-satellite-address
Satellite Address. Custom satellite address should match the format: `<nodeid>@<address>:<port>`.
- Config: satellite_address
- Env Var: RCLONE_TARDIGRADE_SATELLITE_ADDRESS
- Type: string
- Default: "us-central-1.tardigrade.io"
- Examples:
- "us-central-1.tardigrade.io"
- US Central 1
- "europe-west-1.tardigrade.io"
- Europe West 1
- "asia-east-1.tardigrade.io"
- Asia East 1
#### --tardigrade-api-key
API Key.
- Config: api_key
- Env Var: RCLONE_TARDIGRADE_API_KEY
- Type: string
- Default: ""
#### --tardigrade-passphrase
Encryption Passphrase. To access existing objects enter passphrase used for uploading.
- Config: passphrase
- Env Var: RCLONE_TARDIGRADE_PASSPHRASE
- Type: string
- Default: ""
{{< rem autogenerated options stop >}}
## Usage
Paths are specified as `remote:bucket` (or `remote:` for the `lsf`
@ -236,70 +300,7 @@ Or even between another cloud storage and Tardigrade.
rclone sync -i --progress s3:bucket/path/to/dir/ tardigrade:bucket/path/to/dir/
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/tardigrade/tardigrade.go then run make backenddocs" >}}
### Standard Options
Here are the standard options specific to tardigrade (Tardigrade Decentralized Cloud Storage).
#### --tardigrade-provider
Choose an authentication method.
- Config: provider
- Env Var: RCLONE_TARDIGRADE_PROVIDER
- Type: string
- Default: "existing"
- Examples:
- "existing"
- Use an existing access grant.
- "new"
- Create a new access grant from satellite address, API key, and passphrase.
#### --tardigrade-access-grant
Access Grant.
- Config: access_grant
- Env Var: RCLONE_TARDIGRADE_ACCESS_GRANT
- Type: string
- Default: ""
#### --tardigrade-satellite-address
Satellite Address. Custom satellite address should match the format: `<nodeid>@<address>:<port>`.
- Config: satellite_address
- Env Var: RCLONE_TARDIGRADE_SATELLITE_ADDRESS
- Type: string
- Default: "us-central-1.tardigrade.io"
- Examples:
- "us-central-1.tardigrade.io"
- US Central 1
- "europe-west-1.tardigrade.io"
- Europe West 1
- "asia-east-1.tardigrade.io"
- Asia East 1
#### --tardigrade-api-key
API Key.
- Config: api_key
- Env Var: RCLONE_TARDIGRADE_API_KEY
- Type: string
- Default: ""
#### --tardigrade-passphrase
Encryption Passphrase. To access existing objects enter passphrase used for uploading.
- Config: passphrase
- Env Var: RCLONE_TARDIGRADE_PASSPHRASE
- Type: string
- Default: ""
{{< rem autogenerated options stop >}}
### Limitations
## Limitations
`rclone about` is not supported by the rclone Tardigrade backend. Backends without
this capability cannot determine free space for an rclone mount or
@ -309,7 +310,7 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
See [rclone about](https://rclone.org/commands/rclone_about/)
### Known issues
## Known issues
If you get errors like `too many open files` this usually happens when the default `ulimit` for system max open files is exceeded. Native Storj protocol opens a large number of TCP connections (each of which is counted as an open file). For a single upload stream you can expect 110 TCP connections to be opened. For a single download stream you can expect 35. This batch of connections will be opened for every 64 MiB segment and you should also expect TCP connections to be reused. If you do many transfers you eventually open a connection to most storage nodes (thousands of nodes).
View File
@ -24,75 +24,7 @@ There will be no special handling of paths containing `..` segments.
Invoking `rclone mkdir backup:../desktop` is exactly the same as invoking
`rclone mkdir mydrive:private/backup/../desktop`.
### Behavior / Policies
The behavior of the union backend is inspired by [trapexit/mergerfs](https://github.com/trapexit/mergerfs). All functions are grouped into 3 categories: **action**, **create** and **search**. These functions and categories can be assigned a policy which dictates what file or directory is chosen when performing that behavior. Any policy can be assigned to a function or category though some may not be very useful in practice. For instance: **rand** (random) may be useful for file creation (create) but could lead to very odd behavior if used for `delete` if there were more than one copy of the file.
#### Function / Category classifications
| Category | Description | Functions |
|----------|--------------------------|-------------------------------------------------------------------------------------|
| action | Writing Existing file | move, rmdir, rmdirs, delete, purge and copy, sync (as destination when file exists) |
| create | Create non-existing file | copy, sync (as destination when file does not exist) |
| search | Reading and listing file | ls, lsd, lsl, cat, md5sum, sha1sum and copy, sync (as source) |
| N/A | | size, about |
#### Path Preservation
Policies, as described below, are of two basic types: `path preserving` and `non-path preserving`.
All policies which start with `ep` (**epff**, **eplfs**, **eplus**, **epmfs**, **eprand**) are `path preserving`. `ep` stands for `existing path`.
A path preserving policy will only consider upstreams where the relative path being accessed already exists.
When using non-path preserving policies paths will be created in target upstreams as necessary.
#### Quota Relevant Policies
Some policies rely on quota information. These policies should be used only if your upstreams support the respective quota fields.
| Policy | Required Field |
|------------|----------------|
| lfs, eplfs | Free |
| mfs, epmfs | Free |
| lus, eplus | Used |
| lno, eplno | Objects |
To check if your upstream supports the field, run `rclone about remote: [flags]` and see if the required field exists.
#### Filters
Policies basically search upstream remotes and create a list of files / paths for functions to work on. The policy is responsible for filtering and sorting. The policy type defines the sorting but filtering is mostly uniform as described below.
* No **search** policies filter.
* All **action** policies will filter out remotes which are tagged as **read-only**.
* All **create** policies will filter out remotes which are tagged **read-only** or **no-create**.
If all remotes are filtered an error will be returned.
#### Policy descriptions
The policy definitions are inspired by [trapexit/mergerfs](https://github.com/trapexit/mergerfs) but not exactly the same. Some policy definitions could differ due to the much larger latency of remote file systems.
| Policy | Description |
|------------------|------------------------------------------------------------|
| all | Search category: same as **epall**. Action category: same as **epall**. Create category: act on all upstreams. |
| epall (existing path, all) | Search category: Given this order configured, act on the first one found where the relative path exists. Action category: apply to all found. Create category: act on all upstreams where the relative path exists. |
| epff (existing path, first found) | Act on the first one found, by the time upstreams reply, where the relative path exists. |
| eplfs (existing path, least free space) | Of all the upstreams on which the relative path exists choose the one with the least free space. |
| eplus (existing path, least used space) | Of all the upstreams on which the relative path exists choose the one with the least used space. |
| eplno (existing path, least number of objects) | Of all the upstreams on which the relative path exists choose the one with the least number of objects. |
| epmfs (existing path, most free space) | Of all the upstreams on which the relative path exists choose the one with the most free space. |
| eprand (existing path, random) | Calls **epall** and then randomizes. Returns only one upstream. |
| ff (first found) | Search category: same as **epff**. Action category: same as **epff**. Create category: Act on the first one found by the time upstreams reply. |
| lfs (least free space) | Search category: same as **eplfs**. Action category: same as **eplfs**. Create category: Pick the upstream with the least available free space. |
| lus (least used space) | Search category: same as **eplus**. Action category: same as **eplus**. Create category: Pick the upstream with the least used space. |
| lno (least number of objects) | Search category: same as **eplno**. Action category: same as **eplno**. Create category: Pick the upstream with the least number of objects. |
| mfs (most free space) | Search category: same as **epmfs**. Action category: same as **epmfs**. Create category: Pick the upstream with the most available free space. |
| newest | Pick the file / directory with the largest mtime. |
| rand (random) | Calls **all** and then randomizes. Returns only one upstream. |
### Setup
## Configuration
Here is an example of how to make a union called `remote` for local folders.
First run:
@ -171,6 +103,74 @@ Copy another local directory to the union directory called source, which will be
rclone copy C:\source remote:source
### Behavior / Policies
The behavior of the union backend is inspired by [trapexit/mergerfs](https://github.com/trapexit/mergerfs). All functions are grouped into 3 categories: **action**, **create** and **search**. These functions and categories can be assigned a policy which dictates what file or directory is chosen when performing that behavior. Any policy can be assigned to a function or category though some may not be very useful in practice. For instance: **rand** (random) may be useful for file creation (create) but could lead to very odd behavior if used for `delete` if there were more than one copy of the file.
### Function / Category classifications
| Category | Description | Functions |
|----------|--------------------------|-------------------------------------------------------------------------------------|
| action | Writing Existing file | move, rmdir, rmdirs, delete, purge and copy, sync (as destination when file exists) |
| create | Create non-existing file | copy, sync (as destination when file does not exist) |
| search | Reading and listing file | ls, lsd, lsl, cat, md5sum, sha1sum and copy, sync (as source) |
| N/A | | size, about |
### Path Preservation
Policies, as described below, are of two basic types: `path preserving` and `non-path preserving`.
All policies which start with `ep` (**epff**, **eplfs**, **eplus**, **epmfs**, **eprand**) are `path preserving`. `ep` stands for `existing path`.
A path preserving policy will only consider upstreams where the relative path being accessed already exists.
When using non-path preserving policies paths will be created in target upstreams as necessary.
### Quota Relevant Policies
Some policies rely on quota information. These policies should be used only if your upstreams support the respective quota fields.
| Policy | Required Field |
|------------|----------------|
| lfs, eplfs | Free |
| mfs, epmfs | Free |
| lus, eplus | Used |
| lno, eplno | Objects |
To check if your upstream supports the field, run `rclone about remote: [flags]` and see if the required field exists.
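For example (the remote name is a placeholder and the output is illustrative):

```
$ rclone about remote:
Total:   17 GiB
Used:    7.444 GiB
Free:    1.315 GiB
```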
### Filters
Policies basically search upstream remotes and create a list of files / paths for functions to work on. The policy is responsible for filtering and sorting. The policy type defines the sorting but filtering is mostly uniform as described below.
* No **search** policies filter.
* All **action** policies will filter out remotes which are tagged as **read-only**.
* All **create** policies will filter out remotes which are tagged **read-only** or **no-create**.
If all remotes are filtered an error will be returned.
### Policy descriptions
The policy definitions are inspired by [trapexit/mergerfs](https://github.com/trapexit/mergerfs) but not exactly the same. Some policy definitions could differ due to the much larger latency of remote file systems.
| Policy | Description |
|------------------|------------------------------------------------------------|
| all | Search category: same as **epall**. Action category: same as **epall**. Create category: act on all upstreams. |
| epall (existing path, all) | Search category: Given this order configured, act on the first one found where the relative path exists. Action category: apply to all found. Create category: act on all upstreams where the relative path exists. |
| epff (existing path, first found) | Act on the first one found, by the time upstreams reply, where the relative path exists. |
| eplfs (existing path, least free space) | Of all the upstreams on which the relative path exists choose the one with the least free space. |
| eplus (existing path, least used space) | Of all the upstreams on which the relative path exists choose the one with the least used space. |
| eplno (existing path, least number of objects) | Of all the upstreams on which the relative path exists choose the one with the least number of objects. |
| epmfs (existing path, most free space) | Of all the upstreams on which the relative path exists choose the one with the most free space. |
| eprand (existing path, random) | Calls **epall** and then randomizes. Returns only one upstream. |
| ff (first found) | Search category: same as **epff**. Action category: same as **epff**. Create category: Act on the first one found by the time upstreams reply. |
| lfs (least free space) | Search category: same as **eplfs**. Action category: same as **eplfs**. Create category: Pick the upstream with the least available free space. |
| lus (least used space) | Search category: same as **eplus**. Action category: same as **eplus**. Create category: Pick the upstream with the least used space. |
| lno (least number of objects) | Search category: same as **eplno**. Action category: same as **eplno**. Create category: Pick the upstream with the least number of objects. |
| mfs (most free space) | Search category: same as **epmfs**. Action category: same as **epmfs**. Create category: Pick the upstream with the most available free space. |
| newest | Pick the file / directory with the largest mtime. |
| rand (random) | Calls **all** and then randomizes. Returns only one upstream. |
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/union/union.go then run make backenddocs" >}}
### Standard Options
View File
@ -12,14 +12,11 @@ Paths are specified as `remote:path`
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
### Setup
## Configuration
To configure an Uptobox backend you'll need your personal API token. You'll find it in your
[account settings](https://uptobox.com/my_account)
### Example
Here is an example of how to make a remote called `remote` with the default setup. First run:
rclone config
@ -88,7 +85,7 @@ To copy a local directory to an Uptobox directory called backup
Uptobox supports neither modified times nor checksums.
#### Restricted filename characters
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
@ -132,7 +129,7 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
{{< rem autogenerated options stop >}}
### Limitations
## Limitations
Uptobox will delete inactive files that have not been accessed in 60 days.
View File
@ -9,6 +9,8 @@ Paths are specified as `remote:path`
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Configuration
To configure the WebDAV remote you will need to have a URL for it, and
a username and password. If you know what kind of system you are
connecting to then rclone can enable extra features.
@ -218,11 +220,11 @@ You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'
{{< rem autogenerated options stop >}}
## Provider notes ##
## Provider notes
See below for notes on specific providers.
### Owncloud ###
### Owncloud
Click on the settings cog in the bottom right of the page and this
will show the WebDAV URL that rclone needs in the config step. It
@ -230,13 +232,13 @@ will look something like `https://example.com/remote.php/webdav/`.
Owncloud supports modified times using the `X-OC-Mtime` header.
### Nextcloud ###
### Nextcloud
This is configured in an identical way to Owncloud. Note that
Nextcloud initially did not support streaming of files (`rcat`) whereas
Owncloud did, but [this](https://github.com/nextcloud/nextcloud-snap/issues/365) seems to be fixed as of 2020-11-27 (tested with rclone v1.53.1 and Nextcloud Server v19).
### Sharepoint Online ###
### Sharepoint Online
Rclone can be used with Sharepoint provided by OneDrive for Business
or Office365 Education Accounts.
@ -277,7 +279,7 @@ user = YourEmailAddress
pass = encryptedpassword
```
### Sharepoint with NTLM Authentication ###
### Sharepoint with NTLM Authentication
Use this option in case your (hosted) Sharepoint is not tied to OneDrive accounts and uses NTLM authentication.
@ -306,7 +308,9 @@ vendor = sharepoint-ntlm
user = DOMAIN\user
pass = encryptedpassword
```
#### Required Flags for SharePoint ####
#### Required Flags for SharePoint
As SharePoint does some special things with uploaded documents, you won't be able to use the document's size or the document's hash to compare if a file has been changed since the upload / which file is newer.
For Rclone calls copying files (especially Office files such as .docx, .xlsx, etc.) from/to SharePoint (like copy, sync, etc.), you should append these flags to ensure Rclone uses the "Last Modified" datetime property to compare your documents:
@ -315,7 +319,7 @@ For Rclone calls copying files (especially Office files such as .docx, .xlsx, et
--ignore-size --ignore-checksum --update
```
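Putting it together, a sketch of a copy (remote name and paths are placeholders):

    rclone copy --ignore-size --ignore-checksum --update /home/source remote:backup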
### dCache ###
### dCache
dCache is a storage system that supports many protocols and
authentication/authorisation schemes. For WebDAV clients, it allows
@ -346,7 +350,7 @@ obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config f
Macaroons may also be obtained from the dCacheView
web-browser/JavaScript client that comes with dCache.
### OpenID-Connect ###
### OpenID-Connect
dCache also supports authenticating with OpenID-Connect access tokens.
OpenID-Connect is a protocol (based on OAuth 2.0) that allows services
View File
@ -7,6 +7,8 @@ description: "Yandex Disk"
[Yandex Disk](https://disk.yandex.com) is a cloud storage solution created by [Yandex](https://yandex.com).
## Configuration
Here is an example of making a yandex configuration. First run
rclone config
@ -83,27 +85,27 @@ excess files in the path.
Yandex paths may be as deep as required, e.g. `remote:directory/subdirectory`.
### Modified time ###
### Modified time
Modified times are supported and are stored accurate to 1 ns in custom
metadata called `rclone_modified` in RFC3339 with nanoseconds format.
### MD5 checksums ###
### MD5 checksums
MD5 checksums are natively supported by Yandex Disk.
### Emptying Trash ###
### Emptying Trash
If you wish to empty your trash you can use the `rclone cleanup remote:`
command which will permanently delete all your trashed files. This command
does not take any path arguments.
### Quota information ###
### Quota information
To view your current quota you can use the `rclone about remote:`
command which will display your usage limit (quota) and the current usage.
#### Restricted filename characters
### Restricted filename characters
The [default restricted characters set](/overview/#restricted-characters)
are replaced.
@ -111,25 +113,6 @@ are replaced.
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
### Limitations ###
When uploading very large files (bigger than about 5 GiB) you will need
to increase the `--timeout` parameter. This is because Yandex pauses
(perhaps to calculate the MD5SUM for the entire file) before returning
confirmation that the file has been uploaded. The default handling of
timeouts in rclone is to assume a 5 minute pause is an error and close
the connection - you'll see `net/http: timeout awaiting response
headers` errors in the logs if this is happening. Setting the timeout
(in minutes) to twice the maximum file size in GiB should be enough, so if you want
to upload a 30 GiB file set a timeout of `2 * 30 = 60m`, that is
`--timeout 60m`.
Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription.
Token generation will work without a mail account, but Rclone won't be able to complete any actions.
```
[403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.
```
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/yandex/yandex.go then run make backenddocs" >}}
### Standard Options
@ -200,3 +183,22 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
## Limitations
When uploading very large files (bigger than about 5 GiB) you will need
to increase the `--timeout` parameter. This is because Yandex pauses
(perhaps to calculate the MD5SUM for the entire file) before returning
confirmation that the file has been uploaded. The default handling of
timeouts in rclone is to assume a 5 minute pause is an error and close
the connection - you'll see `net/http: timeout awaiting response
headers` errors in the logs if this is happening. Setting the timeout
(in minutes) to twice the maximum file size in GiB should be enough, so if you want
to upload a 30 GiB file set a timeout of `2 * 30 = 60m`, that is
`--timeout 60m`.
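For example, for a 30 GiB upload (remote name and path are placeholders):

    rclone copy --timeout 60m /home/source/big.file remote:backup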
Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription.
Token generation will work without a mail account, but Rclone won't be able to complete any actions.
```
[403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.
```
View File
@ -7,6 +7,8 @@ description: "Zoho WorkDrive"
[Zoho WorkDrive](https://www.zoho.com/workdrive/) is a cloud storage solution created by [Zoho](https://zoho.com).
## Configuration
Here is an example of making a zoho configuration. First run
rclone config
@ -103,20 +105,20 @@ excess files in the path.
Zoho paths may be as deep as required, e.g. `remote:directory/subdirectory`.
### Modified time ###
### Modified time
Modified times are currently not supported for Zoho Workdrive.
### Checksums ###
### Checksums
No checksums are supported.
### Usage information ###
### Usage information
To view your current quota you can use the `rclone about remote:`
command which will display your current usage.
#### Restricted filename characters
### Restricted filename characters
Only control characters and invalid UTF-8 are replaced. In addition most
Unicode full-width characters are not supported at all and will be removed