  • Backup problem, not clear what it is

    Support
    4
    0 Votes
    4 Posts
    107 Views
    nebulon

    Ah, this is good information and indeed likely a problem for rsync backups with a lot of files (not sure how the rate limit turns out in the end though).

  • Backup failed on backblaze. Status 408?

    Solved Support
    4
    1 Votes
    4 Posts
    142 Views
    J

    @daixiwen ah ok. I would ignore the errors. It's normal for upstream services to go down once in a while...

    I just checked https://status.backblaze.com/ and unfortunately there is no uptime history there. But it says "Backblaze performs regular maintenance every Thursday from 11:30 am to 1:30 p.m. Pacific Time with the aim to consistently improve our systems and services."

  • backup failed (CR 8.0.6)

    Solved Support
    7
    0 Votes
    7 Posts
    185 Views
    matix131997

    Just today I got a notification about a failed backup of the server where I have Nextcloud installed (150 GB); I back up once a week. I should add that the previous backups were successful. The provider is Backblaze B2.

    Oct 06 01:02:32 box:storage/s3 Upload progress: {"loaded":1073741824,"part":2,"key":"102/snapshot/app_50d07945-bf1b-4345-8e8c-c146e86c12a7.tar.gz.enc"}
    Jan 01 01:24:00 node:events:496
        throw er; // Unhandled 'error' event
        ^

    write EPIPE
        at WriteWrap.onWriteComplete [as oncomplete] (node:internal/stream_base_commons:94:16)
    'error' event on TLSSocket instance at:
        at emitErrorNT (node:internal/streams/destroy:169:8)
        at emitErrorCloseNT (node:internal/streams/destroy:128:3)
        at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
      errno: -32,
      code: 'EPIPE',
      syscall: 'write'
    }

    v20.12.2
    Oct 06 01:02:32 box:shell backup-snapshot/app_50d07945-bf1b-4345-8e8c-c146e86c12a7: /usr/bin/sudo -S -E --close-from=4 /home/yellowtent/box/src/scripts/backupupload.js snapshot/app_50d07945-bf1b-4345-8e8c-c146e86c12a7 tgz {"localRoot":"/home/yellowtent/appsdata/50d07945-bf1b-4345-8e8c-c146e86c12a7","layout":[]} errored BoxError: backup-snapshot/app_50d07945-bf1b-4345-8e8c-c146e86c12a7 exited with code 1 signal null
        at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:122:19)
        at ChildProcess.emit (node:events:518:28)
        at ChildProcess.emit (node:domain:488:12)
        at ChildProcess._handle.onexit (node:internal/child_process:294:12) {
      reason: 'Shell Error',
      details: {},
      code: 1,
      signal: null
    }
    Oct 06 01:02:32 box:taskworker Task took 152.05 seconds
    Oct 06 01:02:32 box:tasks setCompleted - 2443: {"result":null,"error":{"stack":"BoxError: Backuptask crashed\n at runBackupUpload (/home/yellowtent/box/src/backuptask.js:164:15)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async uploadAppSnapshot (/home/yellowtent/box/src/backuptask.js:361:5)\n at async backupAppWithTag (/home/yellowtent/box/src/backuptask.js:383:5)\n at async fullBackup (/home/yellowtent/box/src/backuptask.js:504:29)","name":"BoxError","reason":"Internal Error","details":{},"message":"Backuptask crashed"}}
    Oct 06 01:02:32 box:tasks update 2443: {"percent":100,"result":null,"error":{"stack":"BoxError: Backuptask crashed\n at runBackupUpload (/home/yellowtent/box/src/backuptask.js:164:15)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async uploadAppSnapshot (/home/yellowtent/box/src/backuptask.js:361:5)\n at async backupAppWithTag (/home/yellowtent/box/src/backuptask.js:383:5)\n at async fullBackup (/home/yellowtent/box/src/backuptask.js:504:29)","name":"BoxError","reason":"Internal Error","details":{},"message":"Backuptask crashed"}}
    Backuptask crashed
        at runBackupUpload (/home/yellowtent/box/src/backuptask.js:164:15)
        at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
        at async uploadAppSnapshot (/home/yellowtent/box/src/backuptask.js:361:5)
        at async backupAppWithTag (/home/yellowtent/box/src/backuptask.js:383:5)
        at async fullBackup (/home/yellowtent/box/src/backuptask.js:504:29)

    On other servers the backups are successful.

  • Reducing backup costs / Backup to pCloud

    Moved Discuss
    20
    1 Votes
    20 Posts
    2k Views
    randyjc

    You could try/experiment with using rclone:
    create a config for your desired remote, for example Google Drive,
    mount it via systemd, and then point your backups to that location.

    For example:

    [Unit]
    Description=rclone Service Google Drive Mount
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=notify
    Environment=RCLONE_CONFIG=/root/.config/rclone/rclone.conf
    RestartSec=5
    ExecStart=/usr/bin/rclone mount google:cloudron /mnt/google \
    # This is for allowing users other than the user running rclone access to the mount
      --allow-other \
    # Dropbox is a polling remote so this value can be set very high and any changes are detected via polling.
      --dir-cache-time 9999h \
    # Log file location
      --log-file /root/.config/rclone/logs/rclone-google.log \
    # Set the log level
      --log-level INFO \
    # This is setting the file permission on the mount to user and group have the same access and other can read
      --umask 002 \
    # This sets up the remote control daemon so you can issue rc commands locally
      --rc \
    # This is the default port it runs on
      --rc-addr 127.0.0.1:5574 \
    # no-auth is used as no one else uses my server
      --rc-no-auth \
    # The local disk used for caching
      --cache-dir=/cache/google \
    # This is used for caching files to local disk for streaming
      --vfs-cache-mode full \
    # This limits the cache size to the value below
      --vfs-cache-max-size 50G \
    # Speed up the reading: Use fast (less accurate) fingerprints for change detection
      --vfs-fast-fingerprint \
    # Wait before uploading
      --vfs-write-back 1m \
    # This limits the age in the cache if the size is reached and it removes the oldest files first
      --vfs-cache-max-age 9999h \
    # Disable HTTP2
    #--disable-http2 \
    # Set the tpslimit
      --tpslimit 12 \
    # Set the tpslimit-burst
      --tpslimit-burst 0
    ExecStop=/bin/fusermount3 -uz /mnt/google
    ExecStartPost=/usr/bin/rclone rc vfs/refresh recursive=true --url 127.0.0.1:5574 _async=true
    Restart=on-failure
    User=root
    Group=root

    [Install]
    WantedBy=multi-user.target

    # https://github.com/animosity22/homescripts/blob/master/systemd/rclone-drive.service
  • Using Backblaze as a backup option gives me an error

    Solved Support
    3
    0 Votes
    3 Posts
    354 Views
    girish

    @stvslkt What is the endpoint URL you are entering? It should be like https://s3.us-west-002.backblazeb2.com or just s3.us-west-002.backblazeb2.com.

  • remote path while restoring

    Solved Support
    5
    0 Votes
    5 Posts
    640 Views
    jagadeesh-s2104

    @girish Thank you!! It worked with the version parameter!

  • How to find the size of each app's backup?

    Solved Support
    5
    1 Votes
    5 Posts
    577 Views
    jagadeesh-s2104

    @jagadeesh-s2104 Clean backup is a good idea. Thank you!

  • Backup size growing

    Solved Support
    25
    0 Votes
    25 Posts
    3k Views
    christiaan

    @girish ah yes, that makes sense. And recent ones are being deleted, yes. Thanks again.

  • Backblaze not removing old backups

    Support
    2
    0 Votes
    2 Posts
    383 Views
    girish

    @dylightful if you click the 'cleanup backups' button in the Backups view, is there anything interesting in the logs?

  • No backups because of NullPointerException

    Solved Support
    15
    0 Votes
    15 Posts
    910 Views
    G

    @girish said in No backups because of NullPointerException:

    @guyds said in No backups because of NullPointerException:

    The first level is the per-app level. Right now, apps are already backed up one by one, but they are neither stored nor reported individually. And this is the missing feature in my opinion.

    Ah no, they are listed in app -> backups. So, even if you do a full backup, each individual app backup will be listed in app -> backups.

    Yes, they are listed at the app level, but there's no reporting at the app level because the backup succeeds or fails at the box level.

    You can also use that backup to restore/clone the app.

    Yes, but only if the backup succeeds for all apps.

    So, again, I think the current issue is that everything is treated as a whole while it makes more sense in my opinion to treat each app individually and then in the end (optionally?) bundle the individual parts as a whole.

    Yes, they are treated individually; it's actually very close to what you want. The only issue is that when a full backup fails, the successful individual app backups that are part of that failed full backup get removed/cleaned up every night.

    Exactly, they are per app but not treated like that in the end, because success or failure is determined by the whole box.

    I have made a task to fix the behavior, let's see.

    Great, thanks!

  • Easy way to "reset" backups...

    Solved Support
    10
    0 Votes
    10 Posts
    1k Views
    ericdrgn

    @girish Awesome thank you, I appreciate the advice. All seems to be working great now.

    Also thanks @marcusquinn

  • Are (encrypted) rsync backups not "incremental" on B2?

    Solved Support
    16
    0 Votes
    16 Posts
    2k Views
    girish

    For restic, can you upvote here - https://forum.cloudron.io/topic/1575/backup-improvements-restic-backend ? It should be possible to add a new restic backend like tgz/rsync. Though, we still have to figure out how we can "support" it. But that's a separate topic.

  • Signature validation failed

    Solved Support
    4
    0 Votes
    4 Posts
    576 Views
    X

    Fixed it by taking a new backup after I reinstated my API credentials.

  • Backblaze backups failed most of the time

    Support
    2
    0 Votes
    2 Posts
    371 Views
    girish

    @vladimir Can you try this change https://git.cloudron.io/cloudron/box/-/commit/bedcd6fccf58830b316318699375bc1f582a5d7a ? The file on Cloudron is /home/yellowtent/box/src/storage/s3.js. Essentially change the timeout from 3000 * 1000 to 0. And also change maxRetries to 10. You don't need to restart anything after the change since the code changes are immediately picked up.

    (See also https://forum.cloudron.io/topic/3680/backup-issues-with-minio)
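
    To illustrate the kind of change that commit makes, here is a rough sketch (not the actual Cloudron source), assuming box/src/storage/s3.js builds its client with the AWS SDK v2 for JavaScript; the real option names and surrounding code are in the commit linked above:

    // Hedged sketch only: assumes the AWS SDK v2 S3 client; see the linked commit for the real code.
    const AWS = require('aws-sdk');

    const s3 = new AWS.S3({
        // ...endpoint, credentials and bucket settings as configured in the Backups view...
        maxRetries: 10,     // retry transient upload errors more often
        httpOptions: {
            timeout: 0      // was 3000 * 1000 ms (50 minutes); 0 disables the client-side socket timeout
        }
    });

    The idea is that long uploads of large backups are no longer cut off by a client-side timeout, and intermittent Backblaze errors are retried instead of failing the whole backup.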

  • 0 Votes
    10 Posts
    1k Views
    M

    I can confirm that, now that the lifecycle settings are correct, backups get physically deleted from B2.
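
    For anyone landing here with the same symptom: Backblaze can keep deleted or overwritten files around as hidden older versions until a lifecycle rule removes them, which is why the lifecycle settings matter here. As a hypothetical illustration (field names from Backblaze's b2_update_bucket API, not Cloudron code), a rule roughly equivalent to B2's "Keep only the last version of the file" preset could look like this:

    // Hypothetical example only, not Cloudron code: a B2 lifecycle rule roughly
    // equivalent to the "Keep only the last version of the file" bucket preset.
    const lifecycleRules = [
        {
            fileNamePrefix: '',              // apply to every object in the bucket
            daysFromUploadingToHiding: null, // do not auto-hide current versions
            daysFromHidingToDeleting: 1      // physically delete hidden/old versions a day after hiding
        }
    ];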

  • 0 Votes
    28 Posts
    2k Views
    A

    @girish Great! Thanks again for your help debugging this and adding more configuration. Huge help for larger backups like mine.

  • 0 Votes
    13 Posts
    2k Views
    M

    Good points, thanks, gonna switch to CIFS then! 🙂