<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Issue with Lengthy and Failing Updates]]></title><description><![CDATA[<p dir="auto">Hi everyone,</p>
<p dir="auto">I'm having trouble with updates on my system. Each update takes incredibly long, sometimes up to 10 hours, and even then it frequently fails.<br />
I use an IDrive bucket with rsync for the backups. Even though I've attempted various adjustments and optimizations, the problems with update speed and reliability persist.</p>
<p dir="auto">Here is the log output I've obtained when attempting to update:</p>
<pre><code>box:shell backup-snapshot/app_cc8f455d-4285-4133-9234-9767a4977f23: /usr/bin/sudo -S -E --close-from=4 /home/yellowtent/box/src/scripts/backupupload.js snapshot/app_cc8f455d-4285-4133-9234-9767a4977f23 rsync {"localRoot":"/home/yellowtent/appsdata/cc8f455d-4285-4133-9234-9767a4977f23","layout":[]} errored BoxError: backup-snapshot/app_cc8f455d-4285-4133-9234-9767a4977f23 exited with code null signal SIGKILL
May 03 08:29:55 box:taskworker Task took 26992.838 seconds
May 03 08:29:55 box:tasks setCompleted - 9854: {"result":null,"error":{"stack":"BoxError: Backuptask crashed\n at runBackupUpload (/home/yellowtent/box/src/backuptask.js:163:15)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async uploadAppSnapshot (/home/yellowtent/box/src/backuptask.js:360:5)\n at async backupAppWithTag (/home/yellowtent/box/src/backuptask.js:382:5)\n at async fullBackup (/home/yellowtent/box/src/backuptask.js:503:29)","name":"BoxError","reason":"Internal Error","details":{},"message":"Backuptask crashed"}}
May 03 08:29:55 box:tasks update 9854: {"percent":100,"result":null,"error":{"stack":"BoxError: Backuptask crashed\n at runBackupUpload (/home/yellowtent/box/src/backuptask.js:163:15)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async uploadAppSnapshot (/home/yellowtent/box/src/backuptask.js:360:5)\n at async backupAppWithTag (/home/yellowtent/box/src/backuptask.js:382:5)\n at async fullBackup (/home/yellowtent/box/src/backuptask.js:503:29)","name":"BoxError","reason":"Internal Error","details":{},"message":"Backuptask crashed"}}
[no timestamp]  Backuptask crashed
[no timestamp]  }
</code></pre>
<p dir="auto">Could you please advise me on how to address this issue? Thank you in advance for your assistance.</p>
]]></description><link>https://forum.cloudron.io/topic/11663/issue-with-lengthy-and-failing-updates</link><generator>RSS for Node</generator><lastBuildDate>Wed, 22 Apr 2026 18:06:55 GMT</lastBuildDate><atom:link href="https://forum.cloudron.io/topic/11663.rss" rel="self" type="application/rss+xml"/><pubDate>Fri, 03 May 2024 08:42:28 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Tue, 17 Jun 2025 20:49:17 GMT]]></title><description><![CDATA[<h1>Closed due to inactivity</h1>
<p dir="auto">Note: a lot has changed for backups, and even more will change with Cloudron 9.<br />
If this issue persists, please reopen it as a new topic.</p>
]]></description><link>https://forum.cloudron.io/post/108863</link><guid isPermaLink="true">https://forum.cloudron.io/post/108863</guid><dc:creator><![CDATA[james]]></dc:creator><pubDate>Tue, 17 Jun 2025 20:49:17 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Tue, 28 May 2024 07:31:52 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/archos" aria-label="Profile: archos">@<bdi>archos</bdi></a> for bugs like the Synapse one, I am fixing our code to produce better logs. Of late, there have been a bunch of backup-related issues. It's not clear (yet) whether this is a bug in Node, in some Node module, or at the upstream provider. The error happens randomly and the logs are not helpful. I will update this post when I have a patch.</p>
]]></description><link>https://forum.cloudron.io/post/89102</link><guid isPermaLink="true">https://forum.cloudron.io/post/89102</guid><dc:creator><![CDATA[girish]]></dc:creator><pubDate>Tue, 28 May 2024 07:31:52 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Sun, 26 May 2024 05:56:19 GMT]]></title><description><![CDATA[<p dir="auto">I apologize for the previous question; the issue was apparently caused by the idrive-e2 provider, as I experienced it on other servers with Cloudron as well. The backup is now working again, except for Matrix Synapse.</p>
]]></description><link>https://forum.cloudron.io/post/89015</link><guid isPermaLink="true">https://forum.cloudron.io/post/89015</guid><dc:creator><![CDATA[archos]]></dc:creator><pubDate>Sun, 26 May 2024 05:56:19 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Sat, 25 May 2024 06:16:51 GMT]]></title><description><![CDATA[<p dir="auto">I've tried manual backups several times, but each time I end up with the same error. I would really appreciate any help or advice. Thank you very much for the information.</p>
<pre><code>May 25 08:12:10 box:tasks update 10025: {"percent":6.555555555555555,"message":"Copying with concurrency of 110"}
May 25 08:12:10 box:tasks update 10025: {"percent":6.555555555555555,"message":"Copying files from 0-4"}
May 25 08:12:10 box:tasks update 10025: {"percent":6.555555555555555,"message":"Copying config.json"}
May 25 08:12:10 box:tasks update 10025: {"percent":6.555555555555555,"message":"Copying data/env"}
May 25 08:12:10 box:tasks update 10025: {"percent":6.555555555555555,"message":"Copying fsmetadata.json"}
May 25 08:12:10 box:tasks update 10025: {"percent":6.555555555555555,"message":"Copying postgresqldump"}
May 25 08:12:10 box:tasks update 10025: {"percent":6.555555555555555,"message":"Retrying (1) copy of postgresqldump. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 08:12:30 box:tasks update 10025: {"percent":6.555555555555555,"message":"Retrying (2) copy of postgresqldump. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 08:12:50 box:tasks update 10025: {"percent":6.555555555555555,"message":"Retrying (3) copy of postgresqldump. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 08:13:10 box:tasks update 10025: {"percent":6.555555555555555,"message":"Retrying (4) copy of postgresqldump. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 08:13:31 box:tasks update 10025: {"percent":6.555555555555555,"message":"Retrying (5) copy of postgresqldump. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 08:13:51 box:tasks update 10025: {"percent":6.555555555555555,"message":"Retrying (6) copy of postgresqldump. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 08:14:11 box:tasks update 10025: {"percent":6.555555555555555,"message":"Retrying (7) copy of postgresqldump. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 08:14:31 box:tasks update 10025: {"percent":6.555555555555555,"message":"Retrying (8) copy of postgresqldump. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 08:14:51 box:tasks update 10025: {"percent":6.555555555555555,"message":"Retrying (9) copy of postgresqldump. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}

</code></pre>
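<p dir="auto">The steady 20-second spacing of the "Retrying (n)" lines above matches a fixed-delay retry loop. A hedged sketch of that pattern (illustrative only, not Cloudron's actual implementation; <code>copyOnce</code> is a hypothetical stand-in for the S3 copy call):</p>

```javascript
// Fixed-delay retry, mirroring the ~20 s gaps between "Retrying (n)"
// lines in the log above. A transient error is absorbed; a persistent
// server-side 500 exhausts all attempts and the last error is rethrown.
async function copyWithRetry(copyOnce, { attempts = 10, delayMs = 20000 } = {}) {
  let lastError;
  for (let i = 1; i <= attempts; i++) {
    try {
      return await copyOnce();
    } catch (error) {
      lastError = error;
      console.log(`Retrying (${i}) copy. Error: ${error.message}`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```

<p dir="auto">Since the provider keeps returning the same InternalError for minutes on end here, the failure looks server-side rather than a transient blip that retries could ride out.</p>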
]]></description><link>https://forum.cloudron.io/post/88986</link><guid isPermaLink="true">https://forum.cloudron.io/post/88986</guid><dc:creator><![CDATA[archos]]></dc:creator><pubDate>Sat, 25 May 2024 06:16:51 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Sat, 25 May 2024 06:06:24 GMT]]></title><description><![CDATA[<p dir="auto">Hello everyone, it's been quiet for a few days since I turned off the automatic backups at Matrix, but today the backup failed again. It's frustrating, and from what I see on the forum, I'm not the only one having issues with backups.<br />
Error copying archoslinux-backup/snapshot/app_070963ae-8ef3-4bcb-bab3-7ca671f4a72b/fsmetadata.json (148 bytes): InternalError InternalError: We encountered an internal error, please try again.: cause(EOF)</p>
<pre><code>May 25 01:00:06 box:tasks update 10016: {"percent":3.7777777777777777,"message":"Copying files from 0-3"}
May 25 01:00:06 box:tasks update 10016: {"percent":3.7777777777777777,"message":"Copying config.json"}
May 25 01:00:06 box:tasks update 10016: {"percent":3.7777777777777777,"message":"Copying data/coolwsd.xml"}
May 25 01:00:06 box:tasks update 10016: {"percent":3.7777777777777777,"message":"Copying fsmetadata.json"}
May 25 01:00:06 box:tasks update 10016: {"percent":3.7777777777777777,"message":"Retrying (1) copy of fsmetadata.json. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 01:00:26 box:tasks update 10016: {"percent":3.7777777777777777,"message":"Retrying (2) copy of fsmetadata.json. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 01:00:46 box:tasks update 10016: {"percent":3.7777777777777777,"message":"Retrying (3) copy of fsmetadata.json. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 01:01:06 box:tasks update 10016: {"percent":3.7777777777777777,"message":"Retrying (4) copy of fsmetadata.json. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 01:01:26 box:tasks update 10016: {"percent":3.7777777777777777,"message":"Retrying (5) copy of fsmetadata.json. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 01:01:46 box:tasks update 10016: {"percent":3.7777777777777777,"message":"Retrying (6) copy of fsmetadata.json. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 01:02:07 box:tasks update 10016: {"percent":3.7777777777777777,"message":"Retrying (7) copy of fsmetadata.json. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 01:02:27 box:tasks update 10016: {"percent":3.7777777777777777,"message":"Retrying (8) copy of fsmetadata.json. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 01:02:47 box:tasks update 10016: {"percent":3.7777777777777777,"message":"Retrying (9) copy of fsmetadata.json. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 01:03:07 box:tasks update 10016: {"percent":3.7777777777777777,"message":"Retrying (10) copy of fsmetadata.json. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 01:03:27 box:storage/s3 copy: s3 copy error when copying archoslinux-backup/snapshot/app_070963ae-8ef3-4bcb-bab3-7ca671f4a72b/fsmetadata.json: InternalError: We encountered an internal error, please try again.: cause(EOF)
May 25 01:03:27 box:tasks update 10016: {"percent":3.7777777777777777,"message":"Retrying (11) copy of fsmetadata.json. Error: InternalError: We encountered an internal error, please try again.: cause(EOF) 500"}
May 25 01:03:27 box:tasks update 10016: {"percent":3.7777777777777777,"message":"Copied 3 files with error: BoxError: Error copying archoslinux-backup/snapshot/app_070963ae-8ef3-4bcb-bab3-7ca671f4a72b/fsmetadata.json (148 bytes): InternalError InternalError: We encountered an internal error, please try again.: cause(EOF)"}
May 25 01:03:27 box:backuptask copy: copied to 2024-05-24-230002-287/app_collabora.arch-linux.cz_v1.17.3 errored. error: Error copying archoslinux-backup/snapshot/app_070963ae-8ef3-4bcb-bab3-7ca671f4a72b/fsmetadata.json (148 bytes): InternalError InternalError: We encountered an internal error, please try again.: cause(EOF)
May 25 01:03:27 box:taskworker Task took 205.487 seconds
May 25 01:03:27 box:tasks setCompleted - 10016: {"result":null,"error":{"stack":"BoxError: Error copying archoslinux-backup/snapshot/app_070963ae-8ef3-4bcb-bab3-7ca671f4a72b/fsmetadata.json (148 bytes): InternalError InternalError: We encountered an internal error, please try again.: cause(EOF)\n at Response.done (/home/yellowtent/box/src/storage/s3.js:338:48)\n at Request.&lt;anonymous&gt; (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:367:18)\n at Request.callListeners (/home/yellowtent/box/node_modules/aws-sdk/lib/sequential_executor.js:106:20)\n at Request.emit (/home/yellowtent/box/node_modules/aws-sdk/lib/sequential_executor.js:78:10)\n at Request.emit (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:686:14)\n at Request.transition (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:22:10)\n at AcceptorStateMachine.runTo (/home/yellowtent/box/node_modules/aws-sdk/lib/state_machine.js:14:12)\n at /home/yellowtent/box/node_modules/aws-sdk/lib/state_machine.js:26:10\n at Request.&lt;anonymous&gt; (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:38:9)\n at Request.&lt;anonymous&gt; (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:688:12)","name":"BoxError","reason":"External Error","details":{},"message":"Error copying archoslinux-backup/snapshot/app_070963ae-8ef3-4bcb-bab3-7ca671f4a72b/fsmetadata.json (148 bytes): InternalError InternalError: We encountered an internal error, please try again.: cause(EOF)"}}
May 25 01:03:27 box:tasks update 10016: {"percent":100,"result":null,"error":{"stack":"BoxError: Error copying archoslinux-backup/snapshot/app_070963ae-8ef3-4bcb-bab3-7ca671f4a72b/fsmetadata.json (148 bytes): InternalError InternalError: We encountered an internal error, please try again.: cause(EOF)\n at Response.done (/home/yellowtent/box/src/storage/s3.js:338:48)\n at Request.&lt;anonymous&gt; (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:367:18)\n at Request.callListeners (/home/yellowtent/box/node_modules/aws-sdk/lib/sequential_executor.js:106:20)\n at Request.emit (/home/yellowtent/box/node_modules/aws-sdk/lib/sequential_executor.js:78:10)\n at Request.emit (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:686:14)\n at Request.transition (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:22:10)\n at AcceptorStateMachine.runTo (/home/yellowtent/box/node_modules/aws-sdk/lib/state_machine.js:14:12)\n at /home/yellowtent/box/node_modules/aws-sdk/lib/state_machine.js:26:10\n at Request.&lt;anonymous&gt; (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:38:9)\n at Request.&lt;anonymous&gt; (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:688:12)","name":"BoxError","reason":"External Error","details":{},"message":"Error copying archoslinux-backup/snapshot/app_070963ae-8ef3-4bcb-bab3-7ca671f4a72b/fsmetadata.json (148 bytes): InternalError InternalError: We encountered an internal error, please try again.: cause(EOF)"}}
[no timestamp]  Error copying archoslinux-backup/snapshot/app_070963ae-8ef3-4bcb-bab3-7ca671f4a72b/fsmetadata.json (148 bytes): InternalError InternalError: We encountered an internal error, please try again.: cause(EOF)

</code></pre>
]]></description><link>https://forum.cloudron.io/post/88985</link><guid isPermaLink="true">https://forum.cloudron.io/post/88985</guid><dc:creator><![CDATA[archos]]></dc:creator><pubDate>Sat, 25 May 2024 06:06:24 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Tue, 21 May 2024 06:47:23 GMT]]></title><description><![CDATA[<p dir="auto">The issue with the backup was likely caused by the Matrix app. I turned off the backup in Matrix and initiated a new backup. Within two hours, the backup was successfully completed. I'll see how the backup turns out tonight.<br />
Something is wrong with Matrix Synapse.</p>
]]></description><link>https://forum.cloudron.io/post/88760</link><guid isPermaLink="true">https://forum.cloudron.io/post/88760</guid><dc:creator><![CDATA[archos]]></dc:creator><pubDate>Tue, 21 May 2024 06:47:23 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Mon, 20 May 2024 17:57:37 GMT]]></title><description><![CDATA[<p dir="auto">Unfortunately, the backups fail to complete again. I have tried to manually initiate the backup process twice, but each time the backup fails with an error, specifically with the Matrix application. I have no idea what to do. I also tried using a Hetzner box, but the backups take even longer there.</p>
<p dir="auto"><img src="/assets/uploads/files/1716225811755-sn%C3%ADmek-obrazovky-po%C5%99%C3%ADzen%C3%BD-2024-05-20-19-23-09-resized.png" alt="Snímek obrazovky pořízený 2024-05-20 19-23-09.png" class=" img-fluid img-markdown" /></p>
]]></description><link>https://forum.cloudron.io/post/88748</link><guid isPermaLink="true">https://forum.cloudron.io/post/88748</guid><dc:creator><![CDATA[archos]]></dc:creator><pubDate>Mon, 20 May 2024 17:57:37 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Fri, 17 May 2024 09:57:47 GMT]]></title><description><![CDATA[<p dir="auto">I would recommend investigating whether the same files in the app and in the backup are indeed identical.<br />
rsync would not copy identical files all over again.<br />
I use rsync to back up to an external hard drive - my Cloudron instance is around ~ 250 GB and <em>all</em> my backups (7 daily, 4 weekly, 12 monthly) going back to June 2023 occupy ~ 350 GB.<br />
The same backups that I push via <a href="https://docs.cloudron.io/guides/community/restic-rclone/" target="_blank" rel="noopener noreferrer nofollow ugc">restic to Onedrive</a> going back to June 2022 (!) occupy ~ 310 GB (restic is very good at compression and de-duplication).</p>
]]></description><link>https://forum.cloudron.io/post/88608</link><guid isPermaLink="true">https://forum.cloudron.io/post/88608</guid><dc:creator><![CDATA[necrevistonnezr]]></dc:creator><pubDate>Fri, 17 May 2024 09:57:47 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Fri, 17 May 2024 09:04:59 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/necrevistonnezr" aria-label="Profile: necrevistonnezr">@<bdi>necrevistonnezr</bdi></a> The server occupies 960.45 GB and I currently maintain only two backups, with a total size of 2 TB. The attached image shows the log from today's backup of the Calibre application. I uploaded books to this library about two years ago. Since then, I haven't uploaded any new books to Calibre and no one else has made any changes, yet every day, the entire 10 GB of books is being backed up. It doesn't make sense for the two backups to occupy 2 TB when the server itself only uses nearly 1 TB.<br />
<img src="/assets/uploads/files/1715936693461-sn%C3%ADmek-obrazovky-po%C5%99%C3%ADzen%C3%BD-2024-05-17-10-56-15-resized.png" alt="Snímek obrazovky pořízený 2024-05-17 10-56-15.png" class=" img-fluid img-markdown" /></p>
]]></description><link>https://forum.cloudron.io/post/88605</link><guid isPermaLink="true">https://forum.cloudron.io/post/88605</guid><dc:creator><![CDATA[archos]]></dc:creator><pubDate>Fri, 17 May 2024 09:04:59 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Fri, 17 May 2024 07:24:46 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/archos" aria-label="Profile: archos">@<bdi>archos</bdi></a> said in <a href="/post/88598">Issue with Lengthy and Failing Updates</a>:</p>
<blockquote>
<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/nebulon" aria-label="Profile: nebulon">@<bdi>nebulon</bdi></a> Hello, thank you for the information. I find it odd that rsync always backs up all the files. Yes, the first backup is large, but afterwards only the changed files should be backed up, not all the videos from Peertube and many GB from Nextcloud every day.</p>
</blockquote>
<p dir="auto">Are you sure those files that <em>seem</em> identical but are copied regardless are not modified in any way? Other timestamps for example?</p>
]]></description><link>https://forum.cloudron.io/post/88601</link><guid isPermaLink="true">https://forum.cloudron.io/post/88601</guid><dc:creator><![CDATA[necrevistonnezr]]></dc:creator><pubDate>Fri, 17 May 2024 07:24:46 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Fri, 17 May 2024 04:24:25 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/nebulon" aria-label="Profile: nebulon">@<bdi>nebulon</bdi></a> Hello, thank you for the information. I find it odd that rsync always backs up all the files. Yes, the first backup is large, but afterwards only the changed files should be backed up, not all the videos from Peertube and many GB from Nextcloud every day. I've tried several backup services and kept two daily and four weekly backups, but rsync always backs up the entire server each time. That's why I now keep only two backups. I use rsync on another server as well, and there is no problem there.<br />
That's why the backups are slow when the server backs up almost 1 TB of data daily.</p>
]]></description><link>https://forum.cloudron.io/post/88598</link><guid isPermaLink="true">https://forum.cloudron.io/post/88598</guid><dc:creator><![CDATA[archos]]></dc:creator><pubDate>Fri, 17 May 2024 04:24:25 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Wed, 15 May 2024 14:10:15 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/nebulon" aria-label="Profile: nebulon">@<bdi>nebulon</bdi></a> said in <a href="/post/88512">Issue with Lengthy and Failing Updates</a>:</p>
<blockquote>
<p dir="auto">This is of course no great answer for the moment. Not just specific to this, but 100s of GB on top of connections of varying stability, plus remote storage protocols and endpoints that behave inconsistently in our experience (especially the cheaper they get), make this all not easy. However, there is much room to improve on our side over time.</p>
</blockquote>
<p dir="auto">A list of recommended solutions would help: backup works with this, this, and this service; it's unreliable with that, that, and that service.</p>
<p dir="auto">The next big help would be for backups to pace themselves for the long haul. You could pre-estimate the backup size and refuse to run backups on intervals that do not allow enough time to finish (hourly, and perhaps even daily, is not enough for some crazy Cloudroner with hundreds of gigabytes of data: we were just mailed by someone who uses our software with 260 TB of storage, argh, all at consumer level, not enterprise level).</p>
<p dir="auto">Next, you could cap backup speed so it does not max out the bandwidth of a working server.</p>
<p dir="auto">Yes, there's lots Cloudron could do to make backups easier and more successful.</p>
<p dir="auto">Ours fail all the time despite following guidelines. We had to manually remove 16 GB of failed updates from our server (failed updates should auto-delete), and then manually clean out Nextcloud's deleted files (which Cloudron could make automatic by default; even if the Nextcloud developers don't care about server space, since they plan for dedicated servers and object storage, our intrepid leaders at Cloudron could). And Cloudron didn't tell us we had to upgrade from Ubuntu 18.04 LTS for backups and updates to work properly, so we were caught in a closed loop. It's not easy setting up a service as complex as Cloudron, but there is some low-hanging fruit here.</p>
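<p dir="auto">The pacing idea above can be made concrete with a back-of-the-envelope pre-flight check: estimate the total size and the measured upload rate, and refuse to start a backup that cannot finish inside its schedule interval. A hedged sketch (all names and numbers illustrative, not a Cloudron feature):</p>

```javascript
// Pre-flight check: can a backup of sizeGB finish within intervalHours
// at a sustained upload rate of rateMBps? (1 GB = 1024 MB here.)
function backupFitsInterval(sizeGB, rateMBps, intervalHours) {
  const estimatedSeconds = (sizeGB * 1024) / rateMBps;
  return estimatedSeconds <= intervalHours * 3600;
}

// ~900 GB at 10 MB/s needs about 25.6 hours -- too long for a daily run.
console.log(backupFitsInterval(900, 10, 24)); // → false
```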
]]></description><link>https://forum.cloudron.io/post/88518</link><guid isPermaLink="true">https://forum.cloudron.io/post/88518</guid><dc:creator><![CDATA[foliovision]]></dc:creator><pubDate>Wed, 15 May 2024 14:10:15 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Wed, 15 May 2024 12:05:18 GMT]]></title><description><![CDATA[<p dir="auto">I was looking at it just now; apparently backups have succeeded afterwards, however one backup run in this case takes around 5 hours. Since the backup works on live data for those 5 hours, a lot of things may interfere (like the database connection issues in this run)... all this is of course not ideal, but the current strategy is to start all over again if a backup fails midway, to avoid a random state, with the goal of eventually succeeding.</p>
<p dir="auto">This is of course no great answer for the moment. Not just specific to this, but 100s of GB on top of connections of varying stability, plus remote storage protocols and endpoints that behave inconsistently in our experience (especially the cheaper they get), make this all not easy <img src="https://forum.cloudron.io/assets/plugins/nodebb-plugin-emoji/emoji/android/1f615.png?v=c34f2a691b3" class="not-responsive emoji emoji-android emoji--confused" style="height:23px;width:auto;vertical-align:middle" title=":/" alt="😕" />. However, there is much room to improve on our side over time.</p>
<p dir="auto">Maybe we have to revisit this and look into similar use-case setups to see how fine-grained and how periodically other platforms deal with this, especially when cost per GB or I/O is a factor.</p>
]]></description><link>https://forum.cloudron.io/post/88512</link><guid isPermaLink="true">https://forum.cloudron.io/post/88512</guid><dc:creator><![CDATA[nebulon]]></dc:creator><pubDate>Wed, 15 May 2024 12:05:18 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Tue, 14 May 2024 16:19:43 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/nebulon" aria-label="Profile: nebulon">@<bdi>nebulon</bdi></a> Hello,<br />
Unfortunately, the backup got stuck again today, this time during the Privatebin update. The update has not been completed since yesterday, and neither has the backup.</p>
<pre><code>:database Connection 7252 error: Packets out of order. Got: 0 Expected: 2 PROTOCOL_PACKETS_OUT_OF_ORDER
May 14 10:12:21 box:database Connection 7258 error: Packets out of order. Got: 0 Expected: 2 PROTOCOL_PACKETS_OUT_OF_ORDER
May 14 10:12:21 box:database Connection 7256 error: Packets out of order. Got: 0 Expected: 2 PROTOCOL_PACKETS_OUT_OF_ORDER
May 14 10:12:21 box:database Connection 7253 error: Packets out of order. Got: 0 Expected: 2 PROTOCOL_PACKETS_OUT_OF_ORDER
May 14 10:12:21 box:database Connection 7255 error: Packets out of order. Got: 0 Expected: 2 PROTOCOL_PACKETS_OUT_OF_ORDER

</code></pre>
]]></description><link>https://forum.cloudron.io/post/88468</link><guid isPermaLink="true">https://forum.cloudron.io/post/88468</guid><dc:creator><![CDATA[archos]]></dc:creator><pubDate>Tue, 14 May 2024 16:19:43 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Sat, 11 May 2024 07:18:36 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/nebulon" aria-label="Profile: nebulon">@<bdi>nebulon</bdi></a> Thank you very much for attending to my issue.</p>
]]></description><link>https://forum.cloudron.io/post/88309</link><guid isPermaLink="true">https://forum.cloudron.io/post/88309</guid><dc:creator><![CDATA[archos]]></dc:creator><pubDate>Sat, 11 May 2024 07:18:36 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Fri, 10 May 2024 15:53:32 GMT]]></title><description><![CDATA[<p dir="auto">To update this thread: there seems to be a race where the backup cleaner kicks in while the full backup is still busy backing up apps. App backups take a very long time, so the cleaner starts purging app backups from the database while the run is in progress; those entries then cannot be referenced at the end, and the whole backup errors.</p>
<p dir="auto">This will be good to have fixed; currently testing a patch...</p>
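<p dir="auto">The race described above is a classic cleaner-vs-worker overlap. A minimal sketch of one way to guard it (purely illustrative; the actual Cloudron patch may differ): the cleaner skips purging whenever a full backup is still in flight.</p>

```javascript
// Guarding the cleaner against a still-running backup: the cleaner
// checks a shared flag and defers purging until the backup finishes.
let backupInProgress = false;

async function runFullBackup(doBackup) {
  backupInProgress = true;
  try {
    return await doBackup();
  } finally {
    backupInProgress = false;
  }
}

function cleanBackups(purge) {
  if (backupInProgress) return 'skipped'; // retry on the next schedule
  purge();
  return 'purged';
}
```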
]]></description><link>https://forum.cloudron.io/post/88297</link><guid isPermaLink="true">https://forum.cloudron.io/post/88297</guid><dc:creator><![CDATA[nebulon]]></dc:creator><pubDate>Fri, 10 May 2024 15:53:32 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Tue, 07 May 2024 12:01:31 GMT]]></title><description><![CDATA[<p dir="auto">So two backups were completed successfully, but today the backup failed again. Even though it goes through rsync, it backs up the entire Cloudron each time instead of making incremental backups.<br />
That seems strange to me.</p>
<pre><code>May 07 07:30:11 box:tasks setCompleted - 9891: {"result":null,"error":{"stack":"BoxError: Backup not found\n at Object.setState (/home/yellowtent/box/src/backups.js:238:42)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async rotateAppBackup (/home/yellowtent/box/src/backuptask.js:303:5)\n at async backupAppWithTag (/home/yellowtent/box/src/backuptask.js:383:12)\n at async fullBackup (/home/yellowtent/box/src/backuptask.js:503:29)","name":"BoxError","reason":"Not found","details":{},"message":"Backup not found"}}
May 07 07:30:11 box:tasks update 9891: {"percent":100,"result":null,"error":{"stack":"BoxError: Backup not found\n at Object.setState (/home/yellowtent/box/src/backups.js:238:42)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async rotateAppBackup (/home/yellowtent/box/src/backuptask.js:303:5)\n at async backupAppWithTag (/home/yellowtent/box/src/backuptask.js:383:12)\n at async fullBackup (/home/yellowtent/box/src/backuptask.js:503:29)","name":"BoxError","reason":"Not found","details":{},"message":"Backup not found"}}
May 07 07:30:11 box:taskworker Task took 23409.098 seconds
[no timestamp]  Backup not found
</code></pre>
]]></description><link>https://forum.cloudron.io/post/88133</link><guid isPermaLink="true">https://forum.cloudron.io/post/88133</guid><dc:creator><![CDATA[archos]]></dc:creator><pubDate>Tue, 07 May 2024 12:01:31 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Sat, 04 May 2024 08:53:05 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/girish" aria-label="Profile: girish">@<bdi>girish</bdi></a> I sent an email to support. Thank you very much.</p>
]]></description><link>https://forum.cloudron.io/post/87967</link><guid isPermaLink="true">https://forum.cloudron.io/post/87967</guid><dc:creator><![CDATA[archos]]></dc:creator><pubDate>Sat, 04 May 2024 08:53:05 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Sat, 04 May 2024 08:18:57 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/archos" aria-label="Profile: archos">@<bdi>archos</bdi></a> strange... it says that it copied the files but after that it fails. If you reach out to us at <a href="mailto:support@cloudron.io" target="_blank" rel="noopener noreferrer nofollow ugc">support@cloudron.io</a> , I can debug further.</p>
]]></description><link>https://forum.cloudron.io/post/87964</link><guid isPermaLink="true">https://forum.cloudron.io/post/87964</guid><dc:creator><![CDATA[girish]]></dc:creator><pubDate>Sat, 04 May 2024 08:18:57 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Sat, 04 May 2024 06:47:45 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/archos" aria-label="Profile: archos">@<bdi>archos</bdi></a> I added more memory for backups and created a new bucket. The first manually initiated backup went smoothly, but the second, automatic one ended with 'Backup failed' again. I am attaching the log below. What's also strange is that the second backup again took 8 hours. I have set up the backups using rsync, but it still re-uploads all videos, photos, books, etc., which is a tremendous amount of data; I have a total of 912 GB used. So, I'm not sure where the problem could be.<br />
I also suspect that rsync uploads all the data from the server every day. That would match the size of the first bucket, which is 3.64 TB, even though I only keep two daily backups. I would be grateful for any advice.</p>
<pre><code>box:tasks update 9868: {"percent":55.05405405405404,"message":"Copied 321629 files with error: null"}
May 04 07:20:32 box:tasks setCompleted - 9868: {"result":null,"error":{"stack":"BoxError: Backup not found\n at Object.setState (/home/yellowtent/box/src/backups.js:238:42)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async rotateAppBackup (/home/yellowtent/box/src/backuptask.js:303:5)\n at async backupAppWithTag (/home/yellowtent/box/src/backuptask.js:383:12)\n at async fullBackup (/home/yellowtent/box/src/backuptask.js:503:29)","name":"BoxError","reason":"Not found","details":{},"message":"Backup not found"}}
May 04 07:20:32 box:tasks update 9868: {"percent":100,"result":null,"error":{"stack":"BoxError: Backup not found\n at Object.setState (/home/yellowtent/box/src/backups.js:238:42)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async rotateAppBackup (/home/yellowtent/box/src/backuptask.js:303:5)\n at async backupAppWithTag (/home/yellowtent/box/src/backuptask.js:383:12)\n at async fullBackup (/home/yellowtent/box/src/backuptask.js:503:29)","name":"BoxError","reason":"Not found","details":{},"message":"Backup not found"}}
May 04 07:20:32 box:taskworker Task took 22829.343 seconds
[no timestamp]  Backup not found

</code></pre>
]]></description><link>https://forum.cloudron.io/post/87952</link><guid isPermaLink="true">https://forum.cloudron.io/post/87952</guid><dc:creator><![CDATA[archos]]></dc:creator><pubDate>Sat, 04 May 2024 06:47:45 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Fri, 03 May 2024 12:10:18 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/girish" aria-label="Profile: girish">@<bdi>girish</bdi></a> Ok, thank you for the information. I use backup on IDrive e2. The backups are performed daily, but I only keep two. It's true that lately the backups have occasionally failed. I will try creating a new bucket and maybe now that I've added more memory, it will be better.</p>
]]></description><link>https://forum.cloudron.io/post/87912</link><guid isPermaLink="true">https://forum.cloudron.io/post/87912</guid><dc:creator><![CDATA[archos]]></dc:creator><pubDate>Fri, 03 May 2024 12:10:18 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Fri, 03 May 2024 11:57:01 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/archos" aria-label="Profile: archos">@<bdi>archos</bdi></a> Yes, it should upload only changed files. Which backend are you using? If it's uploading everything again every day, it's most likely because no previous attempt has ever succeeded. Once the first attempt succeeds, future attempts won't re-upload.</p>
]]></description><link>https://forum.cloudron.io/post/87909</link><guid isPermaLink="true">https://forum.cloudron.io/post/87909</guid><dc:creator><![CDATA[girish]]></dc:creator><pubDate>Fri, 03 May 2024 11:57:01 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Fri, 03 May 2024 11:27:28 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/girish" aria-label="Profile: girish">@<bdi>girish</bdi></a> Thank you for the response; I tried adding more memory. There was only 7 GB available for backup. I have another question: shouldn't rsync be uploading only changed files? For example, it uploads photos in Nextcloud and videos in Peertube anew every day.</p>
]]></description><link>https://forum.cloudron.io/post/87903</link><guid isPermaLink="true">https://forum.cloudron.io/post/87903</guid><dc:creator><![CDATA[archos]]></dc:creator><pubDate>Fri, 03 May 2024 11:27:28 GMT</pubDate></item><item><title><![CDATA[Reply to Issue with Lengthy and Failing Updates on Fri, 03 May 2024 11:19:03 GMT]]></title><description><![CDATA[<p dir="auto"><a class="plugin-mentions-user plugin-mentions-a" href="/user/archos" aria-label="Profile: archos">@<bdi>archos</bdi></a> How much memory have you given the backup task? Backups -&gt; Configure -&gt; Advanced. Maybe try increasing that a lot more.</p>
]]></description><link>https://forum.cloudron.io/post/87900</link><guid isPermaLink="true">https://forum.cloudron.io/post/87900</guid><dc:creator><![CDATA[girish]]></dc:creator><pubDate>Fri, 03 May 2024 11:19:03 GMT</pubDate></item></channel></rss>