Commit Graph

780 Commits

Author SHA1 Message Date
Nick Craig-Wood b52a39a84e drive: fix merge breakage
In 2f5a2d3c48 an incorrect merge caused compilation to fail
2020-05-01 13:02:32 +01:00
Nick Craig-Wood 2f5a2d3c48 drive: Don't return nil Object with nil error from newObject* functions.
Before this change the newObject* functions could return object=nil
with err=nil.  The results of these functions are passed outside of the
backend code (eg in Copy, Move) and returning a nil object with a nil
error leads to crashes elsewhere as it breaks expectations.

After this change we return (nil, fs.ErrorObjectNotFound) in these
cases. In the one place this is actually needed internally (when turning
items into listings) we detect that error and use it to mean the
directory item should be skipped.

This problem was noticed while testing the shortcuts code. It
shouldn't happen normally but it is conceivable it could.
2020-04-30 17:11:36 +01:00
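A minimal sketch of the contract change described above; `newObjectFromItem`, its `isFile` flag, and the cut-down `Object` type are hypothetical stand-ins for the drive backend's internals:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrorObjectNotFound mirrors fs.ErrorObjectNotFound.
var ErrorObjectNotFound = errors.New("object not found")

type Object struct{ name string }

// newObjectFromItem is a stand-in for the drive backend's newObject*
// helpers. Instead of returning (nil, nil) when the item is not a file,
// it now returns a sentinel error, so callers such as Copy and Move
// never receive a nil object with a nil error.
func newObjectFromItem(isFile bool, name string) (*Object, error) {
	if !isFile {
		return nil, ErrorObjectNotFound // previously: return nil, nil
	}
	return &Object{name: name}, nil
}

func main() {
	// Internally, when turning items into listings, the sentinel error
	// means "skip this directory item" rather than "fail".
	if _, err := newObjectFromItem(false, "subdir"); errors.Is(err, ErrorObjectNotFound) {
		fmt.Println("skipping directory item")
	}
}
```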
Nick Craig-Wood 74d9dabdff b2: force the case of the SHA1 to lowercase - fixes #4162
Apparently some tools (eg duplicati) upload the SHA1 in uppercase to
b2 to be stored in the `large_file_sha1` metadata. This patch forces
it to lower case.
2020-04-29 17:08:21 +01:00
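The fix itself is a one-line normalisation; this sketch assumes a hypothetical `cleanSHA1` helper around the stored metadata value:

```go
package main

import (
	"fmt"
	"strings"
)

// cleanSHA1 is a hypothetical helper showing the fix: normalise the
// SHA1 stored in b2's large_file_sha1 metadata, since some tools
// (eg duplicati) upload it in uppercase.
func cleanSHA1(sha1 string) string {
	return strings.ToLower(sha1)
}

func main() {
	fmt.Println(cleanSHA1("356A192B7913B04C54574D18C28D46E6395428AB"))
}
```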
Nick Craig-Wood 90d738b561 cache: implement rclone backend stats command 2020-04-29 10:10:57 +01:00
Nick Craig-Wood e2916f3a55 local: implement backend command "noop" for testing purposes 2020-04-29 10:10:57 +01:00
Nick Craig-Wood 37a53570d4 azureblob: implement memory pooling to control memory use
This commit implements memory pooling to control excessive memory use
as was implemented in the s3 backend.
2020-04-28 17:47:10 +01:00
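A sketch of the pooling technique, assuming a fixed chunk size; rclone's real implementation lives in lib/pool and adds mmap support and timed flushing, but the effect on memory use is the same:

```go
package main

import (
	"fmt"
	"sync"
)

const chunkSize = 4 * 1024 * 1024 // assumed upload chunk size

// bufPool reuses fixed-size upload buffers instead of allocating a new
// one per chunk, capping memory use at roughly concurrency * chunkSize.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, chunkSize) },
}

func uploadChunk(data []byte) {
	buf := bufPool.Get().([]byte)
	defer bufPool.Put(buf) // hand the buffer back for reuse
	n := copy(buf, data)
	fmt.Printf("uploaded %d bytes from a pooled buffer\n", n)
}

func main() {
	uploadChunk([]byte("hello"))
	uploadChunk([]byte("world"))
}
```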
Nick Craig-Wood ee7219aa20 azureblob: add --azureblob-disable-checksum flag 2020-04-28 17:47:10 +01:00
Nick Craig-Wood b1d8da484b azureblob: retry InvalidBlobOrBlock error as it may indicate block concurrency problems
According to Microsoft support this error can be caused by

> A timing/concurrency issue where the PUT operations are happening
> about the same time for a single blob. The Put Block List operation
> writes a blob by specifying the list of block IDs that make up the
> blob. In order to be written as part of a blob, a block must have
> been successfully written to the server in a prior Put Block
> operation.
>
> Documentation reference:
>
> https://docs.microsoft.com/en-us/rest/api/storageservices/put-block
>
> This error can happen when doing concurrent upload commits after you
> have started the upload but before you commit. In that case, the
> upload fails. The application can retry this error or attempt some
> other recovery action based on the required scenario.

See: https://forum.rclone.org/t/error-while-syncing-with-azure-blob-storage-x-ms-error-code-invalidbloborblock/15561
2020-04-28 17:47:10 +01:00
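A simplified sketch of the retry predicate; the real backend inspects the typed Azure storage error rather than matching strings, and the exact list of retryable codes here is illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// retryErrorCodes lists x-ms-error-code values treated as retryable.
// InvalidBlobOrBlock is now included because it can indicate a
// transient block-concurrency problem during a multipart upload.
var retryErrorCodes = []string{
	"InternalError",
	"OperationTimedOut",
	"InvalidBlobOrBlock",
}

func shouldRetry(err error) bool {
	if err == nil {
		return false
	}
	for _, code := range retryErrorCodes {
		if strings.Contains(err.Error(), code) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(shouldRetry(fmt.Errorf("x-ms-error-code: InvalidBlobOrBlock")))
}
```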
Nick Craig-Wood 4e869e03f7 s3: improve docs for --s3-disable-checksum 2020-04-28 17:47:10 +01:00
Nick Craig-Wood 52c9647b06 b2: improve docs for --b2-disable-checksum 2020-04-28 17:47:10 +01:00
Nick Craig-Wood 551a829eba googlephotos: don't put an image in error message - fixes #4144
For a certain class of broken or missing images, Google Photos puts an
image in the error message.

Before this fix we blindly chucked it into the error message.

After this fix we replace it with some sensible text.
2020-04-28 16:51:47 +01:00
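One way to implement the check, sketched with Go's standard content sniffing; `errorText` is a hypothetical helper, not the backend's actual function:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// errorText replaces an image error body with readable text instead of
// dumping raw image bytes into the error message.
func errorText(body []byte) string {
	if ct := http.DetectContentType(body); strings.HasPrefix(ct, "image/") {
		return "broken or missing image (Google Photos returned " + ct + " as the error body)"
	}
	return string(body)
}

func main() {
	pngHeader := []byte("\x89PNG\r\n\x1a\n")
	fmt.Println(errorText(pngHeader))
}
```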
Adam Stroud 8e91f83174 googlecloudstorage: Add ARCHIVE storage class to help 2020-04-27 11:40:21 +01:00
buengese 7f776c64f0 fichier: implement custom pacer to deal with the new rate limiting 2020-04-26 20:38:56 +02:00
David 0c0ed2fe04 box: Remove unnecessary iat from jws claims 2020-04-23 17:52:14 +01:00
Nick Craig-Wood ab6ed256e5 putio: add support for --header-upload and --header-download #59 2020-04-23 15:55:52 +01:00
Nick Craig-Wood 7c98ecd3ab putio: make downloading files use the rclone http Client
This fixes `--header-download` and stops these transactions being missed
by `--dump bodies` and `--tpslimit`
2020-04-23 15:48:30 +01:00
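A sketch of the idea: route downloads through the configured client instead of `http.DefaultClient`, so requests pass through rclone's instrumented transport. `newClient` stands in for rclone's HTTP client constructor:

```go
package main

import (
	"net/http"
	"time"
)

// newClient stands in for rclone's HTTP client constructor: the client
// it returns uses rclone's instrumented transport, so requests are seen
// by --dump bodies and counted against --tpslimit.
func newClient() *http.Client {
	return &http.Client{Timeout: 60 * time.Second}
}

// download previously used http.DefaultClient, which bypassed the
// instrumented transport and hence the flags above.
func download(url string) (*http.Response, error) {
	return newClient().Get(url)
}

func main() {
	_, _ = download("https://example.com/file")
}
```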
Nick Craig-Wood b502a74cff gcs: add support for --header-upload and --header-download #59 2020-04-23 11:41:57 +01:00
Nick Craig-Wood 8e9c25063a swift: add support for --header-upload and --header-download #59 2020-04-23 11:34:36 +01:00
Tim Gallant c390fc8100 onedrive: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant 14f6ce1e77 premiumizeme: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant 385542e2f9 sharefile: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant fc946d0c44 fichier: pass options to rest.Opts for uploadFile 2020-04-23 11:07:21 +01:00
Tim Gallant 854c84d0ca pcloud: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant 90bd0eb44c webdav: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant 3130f870bb sugarsync: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant 51b617f601 yandex: pass options to rest.Opts for upload 2020-04-23 11:07:21 +01:00
Tim Gallant 011ca244b2 jottacloud: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant 9ea1361044 googlephotos: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant 776966e22c opendrive: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant 01cb256b84 box: pass options to rest.Opts for uploadPart 2020-04-23 11:07:21 +01:00
Tim Gallant 0b0163dde2 box: pass options to rest.Opts for upload 2020-04-23 11:07:21 +01:00
Tim Gallant 38123c70eb b2: pass options to rest.Opts for Update 2020-04-23 11:07:21 +01:00
Tim Gallant 5cb7229a16 s3: add support for HTTPOption 2020-04-23 11:07:21 +01:00
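All of the "pass options to rest.Opts" commits above share one shape, sketched below with stand-in types (`OpenOption` and `Opts` mirror `fs.OpenOption` and `rest.Opts`): the options received by `Put`/`Update` are forwarded to the HTTP layer instead of being dropped.

```go
package main

import "fmt"

// OpenOption mirrors fs.OpenOption: per-transfer options such as
// --header-upload are carried as key/value HTTP headers.
type OpenOption struct{ Key, Value string }

// Opts is a cut-down stand-in for rest.Opts; the real struct has an
// Options field which the HTTP layer turns into request headers.
type Opts struct {
	Method  string
	Path    string
	Options []OpenOption
}

// Update shows the shape of the change in each backend: the options
// received by Put/Update are forwarded into rest.Opts rather than
// being dropped on the floor.
func Update(options []OpenOption) Opts {
	return Opts{
		Method:  "PUT",
		Path:    "/upload",
		Options: options, // previously omitted, so custom headers were lost
	}
}

func main() {
	fmt.Printf("%+v\n", Update([]OpenOption{{Key: "X-Custom", Value: "1"}}))
}
```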
Nick Craig-Wood f8039deb7c s3: fix detection of BucketAlreadyOwnedByYou and BucketAlreadyExists error
This was being silently ignored until this commit.

e2bf91452a s3: report errors on bucket creation (mkdir) correctly
2020-04-22 18:14:03 +01:00
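A sketch of the combined effect of this commit and e2bf91452a below, with string matching standing in for the typed AWS error codes:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// makeBucket returns creation errors to the caller instead of
// swallowing them, while still treating "we already own it" as success.
func makeBucket(create func() error) error {
	err := create()
	if err == nil {
		return nil
	}
	if strings.Contains(err.Error(), "BucketAlreadyOwnedByYou") {
		return nil // our bucket already exists: mkdir succeeds
	}
	// BucketAlreadyExists (owned by someone else) and everything else
	// now surface to the user; previously they were silently dropped.
	return err
}

func main() {
	fmt.Println(makeBucket(func() error { return errors.New("AccessDenied") }))
}
```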
Sunil Patra 39319b4858 box: Added support for interchangeable root folder for Box backend (@Sunil-P) - #3422 2020-04-22 17:00:13 +01:00
Sunil Patra 4af5c9aed7 pCloud: Added support for interchangeable root folder for pCloud backend - #3957 2020-04-22 16:58:01 +01:00
David Bramwell 8a3c4c6a7b box: add token renew function for jwt auth - Fixes #4901 2020-04-22 16:53:03 +01:00
Nick Craig-Wood 1648c1a0f3 crypt: calculate hashes for uploads from local disk
Before this change crypt would not calculate hashes for files it was
uploading. This is because, in the general case, they have to be
downloaded, encrypted and hashed, which is too resource intensive.

However this causes backends which need the hash before uploading
(eg s3/b2 when uploading chunked files) not to have a hash of the
file. This causes cryptcheck to complain about missing hashes on large
files uploaded via s3/b2.

This change calculates hashes for the upload if the upload is coming
from a local filesystem. It does this by encrypting and hashing the
local file, re-using the code used by cryptcheck. For a local disk this
is not a lot more intensive than just calculating the hash.

See: https://forum.rclone.org/t/strange-output-for-cryptcheck/15437
Fixes: #2809
2020-04-22 11:33:48 +01:00
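The approach can be sketched as below; `encrypt` is an identity stand-in for crypt's real cipher, and SHA1 stands in for whichever hash the wrapped backend wants:

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"io"
	"strings"
)

// encrypt is an identity stand-in for crypt's real encrypter, which
// wraps the reader with the same encryption used for the upload.
func encrypt(r io.Reader) io.Reader { return r }

// localHash re-reads a local file through the encrypter and hashes the
// result, giving the wrapped backend (eg s3/b2) the hash up front. For
// a local disk the extra read is cheap.
func localHash(open func() (io.Reader, error)) (string, error) {
	in, err := open()
	if err != nil {
		return "", err
	}
	h := sha1.New()
	if _, err := io.Copy(h, encrypt(in)); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", h.Sum(nil)), nil
}

func main() {
	open := func() (io.Reader, error) { return strings.NewReader("file contents"), nil }
	sum, _ := localHash(open)
	fmt.Println(sum)
}
```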
Nick Craig-Wood 44b1a591a8 crypt: get rid of the unused Cipher interface as it obfuscated the code 2020-04-22 11:33:48 +01:00
Nick Craig-Wood bbb6f94377 fstest: create AssertTimeEqualWithPrecision from CheckTimeEqualWithPrecision 2020-04-22 11:33:00 +01:00
Nick Craig-Wood cd3c699f28 lib/readers: factor ErrorReader from multiple sources 2020-04-19 15:18:49 +01:00
Nick Craig-Wood 36d2c46bcf local: factor PreAllocate and SetSparse to lib/file 2020-04-19 15:18:49 +01:00
Daven 4c258787b5 googlephotos: make the start year configurable - fixes #3630 2020-04-15 18:08:07 +01:00
Nick Craig-Wood e2bf91452a s3: report errors on bucket creation (mkdir) correctly
Before this fix errors on bucket creation were being silently
swallowed.

See: https://forum.rclone.org/t/rclone-with-brand-new-aws-account-for-s3/15590
2020-04-15 13:13:13 +01:00
Michał Matczuk 6893ce0bbf s3: do not resize buf on put to memBuf
This is handled by the Pool implementation.
2020-04-11 16:35:48 +01:00
Michał Matczuk 399cf18013 s3: use single memory pool
Previously we had a map of pools for different chunk sizes.
In practice the mapping was not very useful and required a lock.
Pools of a size other than ChunkSize can only occur with huge files
(over 10k * ChunkSize), and pooling would only pay off with a bunch of
identically sized huge files; in such a case ChunkSize should most
likely be increased instead.

The mapping and its lock are replaced with a single pool initialised for ChunkSize; in other cases a pool is allocated and freed on a per-file basis.
2020-04-11 16:34:05 +01:00
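A sketch of the new strategy with `sync.Pool` standing in for rclone's pool type:

```go
package main

import (
	"fmt"
	"sync"
)

const chunkSize = 5 * 1024 * 1024 // the configured ChunkSize

// chunkPool is the single shared pool for the common case, replacing
// the old locked map of pools keyed by buffer size.
var chunkPool = sync.Pool{
	New: func() interface{} { return make([]byte, chunkSize) },
}

// getBuf serves ChunkSize buffers from the shared pool; any other size
// (only huge files, over 10k chunks) gets a one-off allocation that is
// released per file.
func getBuf(size int) []byte {
	if size == chunkSize {
		return chunkPool.Get().([]byte)
	}
	return make([]byte, size)
}

func putBuf(buf []byte) {
	if len(buf) == chunkSize {
		chunkPool.Put(buf)
	}
}

func main() {
	buf := getBuf(chunkSize)
	fmt.Println("pooled buffer of", len(buf), "bytes")
	putBuf(buf)
}
```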
buengese 64b5105edd jottacloud: implement cleanup 2020-04-11 16:42:25 +02:00
buengese 2c2f4a6a05 jottacloud: implement --jottacloud-trashed-only 2020-04-11 16:42:25 +02:00
Jack Anderson 815ae7df45 backend/s3: add SSE-C support for AWS, Ceph, and MinIO 2020-03-31 18:16:45 +01:00
Nick Craig-Wood ff0a299bfb drive: don't delete files with multiple parents to avoid data loss
Rclone can't safely delete files with multiple parents without
PATCHing the parents list. This can be done, but since multiple
parents are going away, to be replaced by drive shortcuts, we return an
error for now.

See #4013
2020-03-31 17:28:32 +01:00
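The guard can be sketched as follows; `remove` and `errMultipleParents` are hypothetical names for the check added to the delete path:

```go
package main

import (
	"errors"
	"fmt"
)

var errMultipleParents = errors.New("can't delete file with multiple parents without data loss")

// remove guards the delete path: a file visible in several directories
// is refused, because deleting it would remove every reference unless
// the parents list were PATCHed first.
func remove(parents []string) error {
	if len(parents) > 1 {
		return errMultipleParents
	}
	fmt.Println("deleted")
	return nil
}

func main() {
	fmt.Println(remove([]string{"dirA", "dirB"}))
}
```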