I set Duplicacy up to back up my movies to iDrive e2. I split my libraries to reduce their individual size, yet I am still running into issues with Duplicacy taking a long time to index each library, which then results in it failing when comparing the file list to the cloud. I suspect it fails because of the nightly reset of my IP - a behaviour dictated by my ISP that I cannot change. The few seconds my connection drops do not seem to affect uploads, but when the drop happens during indexing, the run fails with the error: `ERROR LIST_FILES Failed to list the directory chunks/: RequestError: send request failed`.

The directories in question range from 500 GB to 5 TB with 300 to 3,500 files each. I am running Duplicacy in a Docker container under unRAID on shucked WD Elements 12 TB drives with read speeds of 100-190 MB/s (depending on where the data sits on the platter). The system is equipped with an i5-11600K and 32 GB of RAM. According to unRAID, Duplicacy generates around 2-4 MB/s of reads on the drive while using about 100 MB of RAM and 0% CPU, so the indexing seems to be IOPS-limited, which I cannot change.

I'll be moving from 37 Mb/s upload to 100 Mb/s shortly - that might help too, but ideally I'd have a complete backup before the move. Should I split up my movie directories further to make indexing easier for Duplicacy? What is my best bet here?

Your issue seems to be related to the ListAllFiles() phase of an initial backup. You probably already have a large number of chunks uploaded - due to previously aborted backups - and it's taking a long time to list them.
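If the nightly disconnect is what kills the run, one common workaround is to simply re-run the backup until it completes: Duplicacy does not re-upload chunks that already exist in the storage, so each restart should pick up roughly where the last attempt left off. A minimal sketch, assuming the Duplicacy CLI is on the PATH inside the container and the repository was already initialized with `duplicacy init`; the repository path, thread count, and retry delay below are placeholders:

```sh
#!/bin/sh
# Re-run the initial backup until it finishes, so a nightly IP reset
# only costs a restart instead of the whole run. Chunks already in
# the storage are skipped rather than uploaded again.
# Assumptions: duplicacy is on PATH; /path/to/repository was set up
# with "duplicacy init". Adjust the path, -threads value, and delay.
cd /path/to/repository || exit 1
until duplicacy backup -stats -threads 4; do
    echo "backup failed (nightly disconnect?); retrying in 60s" >&2
    sleep 60
done
echo "initial backup completed"
```

Note that this does not make the chunk listing itself any faster; it only turns the nightly failure into something the run recovers from without manual intervention.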