  • 1 Votes
    20 Posts
    751 Views
    randyjcR

    You could try/experiment with using rclone:
    create a config for your desired remote, for example Google Drive,
    mount that via systemd, and then point your backups at that location.

    For example:

    [Unit]
    Description=rclone Service Google Drive Mount
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=notify
    Environment=RCLONE_CONFIG=/root/.config/rclone/rclone.conf
    RestartSec=5
    ExecStart=/usr/bin/rclone mount google:cloudron /mnt/google \
    # Allow users other than the one running rclone to access the mount
      --allow-other \
    # Google Drive is a polling remote, so this can be set very high; changes are detected via polling
      --dir-cache-time 9999h \
    # Log file location
      --log-file /root/.config/rclone/logs/rclone-google.log \
    # Set the log level
      --log-level INFO \
    # File permissions on the mount: user and group get the same access, others can read
      --umask 002 \
    # Start the remote control daemon so rc commands can be issued locally
      --rc \
    # Default port the rc daemon listens on
      --rc-addr 127.0.0.1:5574 \
    # no-auth is used as no one else uses my server
      --rc-no-auth \
    # Local disk used for caching
      --cache-dir=/cache/google \
    # Cache files to local disk for streaming
      --vfs-cache-mode full \
    # Limit the cache to the size below
      --vfs-cache-max-size 50G \
    # Speed up reading: use fast (less accurate) fingerprints for change detection
      --vfs-fast-fingerprint \
    # Wait before uploading
      --vfs-write-back 1m \
    # Limit the age of cached files; when the size limit is reached, the oldest files are removed first
      --vfs-cache-max-age 9999h \
    # Disable HTTP2
    #--disable-http2 \
    # Set the tpslimit
      --tpslimit 12 \
    # Set the tpslimit-burst
      --tpslimit-burst 0
    ExecStop=/bin/fusermount3 -uz /mnt/google
    ExecStartPost=/usr/bin/rclone rc vfs/refresh recursive=true --url 127.0.0.1:5574 _async=true
    Restart=on-failure
    User=root
    Group=root

    [Install]
    WantedBy=multi-user.target

    # https://github.com/animosity22/homescripts/blob/master/systemd/rclone-drive.service
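
    To wire this up, a minimal sequence could look like the following (a sketch; the unit file name /etc/systemd/system/rclone-google.service is an assumption, not given above):

    # Assumption: the unit above was saved as /etc/systemd/system/rclone-google.service
    sudo systemctl daemon-reload                          # pick up the new unit file
    sudo systemctl enable --now rclone-google.service     # start now and on every boot
    systemctl status rclone-google.service                # verify the mount came up
    ls /mnt/google                                        # the remote should be browsable here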
  • 0 Votes
    3 Posts
    146 Views
    girishG

    @stvslkt What is the endpoint URL you are entering? It should be like https://s3.us-west-002.backblazeb2.com or just s3.us-west-002.backblazeb2.com.
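
    A quick way to sanity-check such an endpoint outside Cloudron (a sketch, assuming the aws CLI and a B2 application key; the key names and region are placeholders):

    # Placeholders: substitute your own B2 application key and region
    export AWS_ACCESS_KEY_ID=<keyID>
    export AWS_SECRET_ACCESS_KEY=<applicationKey>
    aws s3 ls --endpoint-url https://s3.us-west-002.backblazeb2.com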

  • remote path while restoring

    Solved Support
    5
    0 Votes
    5 Posts
    300 Views
    jagadeesh-s2104J

    @girish Thank you!! It worked with the version parameter!

  • 1 Votes
    5 Posts
    190 Views
    jagadeesh-s2104J

    @jagadeesh-s2104 Clean backup is a good idea. Thank you!

  • Backup size growing

    Solved Support
    25
    0 Votes
    25 Posts
    1k Views
    christiaanC

    @girish ah yes, that makes sense. And recent ones are being deleted, yes. Thanks again.

  • 0 Votes
    2 Posts
    208 Views
    girishG

    @dylightful If you click the 'cleanup backups' button in the Backups view, is there anything interesting in the logs?
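
    Tailing the box log while pressing the button is often the fastest way to see what cleanup is doing (a sketch; the log path below is the usual location on current Cloudron installs, but verify on yours):

    # Watch backup/cleanup activity while clicking 'cleanup backups' in the UI
    tail -f /home/yellowtent/platformdata/logs/box.log | grep -iE 'backup|cleanup'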

  • 0 Votes
    15 Posts
    694 Views
    G

    @girish said in No backups because of NullPointerException:

    @guyds said in No backups because of NullPointerException:

    The first level is the per app level. Right now, apps are already backed up one by one, but they are not stored nor reported individually. And this is the missing feature in my opinion.

    Ah no, they are listed in app -> backups. So, even if you do a full backup, each individual app backup will be listed in app -> backups.

    Yes, they are listed at the app level, but there's no reporting at the app level because the backup succeeds or fails at the box level.

    You can also use that backup to restore/clone the app.

    Yes, but only if the backup succeeds for all apps.

    So, again, I think the current issue is that everything is treated as a whole while it makes more sense in my opinion to treat each app individually and then in the end (optionally?) bundle the individual parts as a whole.

    Yes, so they are treated individually, it's actually very close to what you want. The only issue is that when a full backup fails, those successful individual app backups that are part of a failed full backup will get removed/cleaned up every night.

    Exactly, they are per app but not treated like that in the end, because success or failure is determined for the whole box.

    I have made a task to fix the behavior, let's see.

    Great, thanks!

  • Easy way to "reset" backups...

    Solved Support
    10
    0 Votes
    10 Posts
    589 Views
    ericdrgnE

    @girish Awesome thank you, I appreciate the advice. All seems to be working great now.

    Also thanks @marcusquinn

  • 0 Votes
    16 Posts
    977 Views
    girishG

    For restic, can you upvote here - https://forum.cloudron.io/topic/1575/backup-improvements-restic-backend ? It should be possible to add a new restic backend like tgz/rsync. Though, we still have to figure out how we can "support" it. But that's a separate topic.
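
    For a feel of what a restic backend would buy, a manual run looks roughly like this (a sketch only, not an existing Cloudron feature; the repository URL, password handling, and the choice of /home/yellowtent/appsdata as the source are all assumptions):

    # Sketch: manual restic run against an S3-compatible bucket (all names are placeholders)
    export RESTIC_REPOSITORY=s3:https://s3.us-west-002.backblazeb2.com/my-cloudron-backups
    export RESTIC_PASSWORD=...                             # repository encryption password
    restic init                                            # one-time repository setup
    restic backup /home/yellowtent/appsdata                # deduplicated, incremental snapshot
    restic forget --keep-daily 7 --keep-weekly 4 --prune   # retention, analogous to cleanup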

  • Signature validation failed

    Solved Support
    4
    0 Votes
    4 Posts
    339 Views
    X

    Fixed it by taking a new backup after I reinstated my API credentials.

  • 0 Votes
    2 Posts
    249 Views
    girishG

    @vladimir Can you try this change: https://git.cloudron.io/cloudron/box/-/commit/bedcd6fccf58830b316318699375bc1f582a5d7a ? The file on Cloudron is /home/yellowtent/box/src/storage/s3.js. Essentially, change the timeout from 3000 * 1000 to 0, and also change maxRetries to 10. You don't need to restart anything afterwards, since the code changes are picked up immediately.

    (See also https://forum.cloudron.io/topic/3680/backup-issues-with-minio)
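
    If applying the change by hand, it amounts to something like the following (a sketch; check the linked commit for the exact strings before editing, since the sed patterns below assume the file literally contains them):

    # Assumption: s3.js contains these literal option values; verify against the commit first
    sed -i 's/timeout: 3000 \* 1000/timeout: 0/'   /home/yellowtent/box/src/storage/s3.js
    sed -i 's/maxRetries: [0-9]\+/maxRetries: 10/' /home/yellowtent/box/src/storage/s3.js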

  • 0 Votes
    10 Posts
    471 Views
    M

    I can confirm that, now that the lifecycle settings are correct, backups get physically deleted from B2.
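
    For anyone hitting the same problem, the relevant B2 lifecycle rule can be set from the CLI (a sketch; the bucket name is a placeholder, and the syntax shown is the classic b2 CLI, while newer releases use 'b2 bucket update'):

    # 'daysFromHidingToDeleting: 1' makes B2 actually delete hidden file versions after a day
    b2 authorize-account <applicationKeyId> <applicationKey>
    b2 update-bucket --lifecycleRules '[{
        "fileNamePrefix": "",
        "daysFromUploadingToHiding": null,
        "daysFromHidingToDeleting": 1
    }]' my-cloudron-backups allPrivate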

  • 0 Votes
    28 Posts
    1k Views
    A

    @girish Great! Thanks again for your help debugging this and adding more configuration. Huge help for larger backups like mine.

  • 0 Votes
    13 Posts
    933 Views
    M

    Good points, thanks, gonna switch to CIFS then! 🙂
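
    For anyone following along, a CIFS mount for backups typically looks something like this (a sketch; the server, share, and credential file paths are placeholders):

    # Placeholders throughout: substitute your own server, share, and credentials
    sudo apt-get install -y cifs-utils
    sudo mkdir -p /mnt/backups
    sudo mount -t cifs //nas.example.com/backups /mnt/backups \
        -o credentials=/etc/cifs-credentials,uid=root,gid=root,vers=3.0
    # /etc/cifs-credentials contains lines like:
    #   username=backupuser
    #   password=...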