Cloudron Forum

pitadavespa

@pitadavespa
Posts: 15 | Topics: 3 | Shares: 0 | Groups: 0 | Followers: 0 | Following: 0

Posts


  • optimizing VPS for backups
    P pitadavespa

    @nebulon I appreciate your advice. The backup server is not in the same location as the Cloudron server. It isn't even in the same country (as with all my backups).

    Maybe I wasn't clear: the server has an OS disk (NVMe) and an HDD for backups. The mount I showed is the HDD mount. It was just to check whether any filesystem mount options were needed to improve performance.

    Thank you all.

    Support backups

  • optimizing VPS for backups
    P pitadavespa

    @nebulon said in optimizing VPS for backups:

    EXT4 and XFS are basically treated the same way from Cloudron's side, so there is no difference in that aspect.

    Just to be on the same page: you are still using the same sshfs config (rsync with hardlinks) as before, but the remote is no longer a Hetzner Storage Box, rather your own server with some disks mounted?

    If this is the case, then there is no difference at all from a Cloudron perspective. Maybe the disk I/O is just not as fast in your setup as with Hetzner?

    Hi, @nebulon. Thank you for your time.
    Yes, the config is the same, except that backups are now being sent to my own server. It has an NVMe boot disk and an HDD storage disk for Cloudron backups.
    The first backup, 764GB, takes a few hours, just like with the Hetzner Storage Box. Disk I/O is (at least) good enough; I easily manage 500Mbps. With lots of small files it's less, of course.

    What differs is the subsequent backups, where (almost) no data is transferred and only hard links are "updated". That takes around 4h. I tested with an XFS-mounted partition and I'm now testing with EXT4. I think this one will handle the operation a bit better. Let's see.

    The partition is mounted like this: UUID=xxxxxxxxxx /mnt/backups ext4 defaults,noatime,nodiratime,data=ordered,commit=15 0 2
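
    In case it's useful, the options the kernel actually applied can be checked on the live mount (findmnt is part of util-linux):

    # show the effective mount options for the backup filesystem
    findmnt -no OPTIONS /mnt/backups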

    Any tips to make it better?

    Thanks

    Support backups

  • optimizing VPS for backups
    P pitadavespa

    @BrutalBirdie said in optimizing VPS for backups:

    Because of a fix @girish implemented that runs some operations directly via ssh on the storage box instead of over fuse sshfs > network > mount, yada yada.

    Thanks.
    @girish Could this fix be used in my use case?

    --

    Since yesterday I've been testing backups with an XFS partition.
    I think something's different compared to EXT4.
    On EXT4, after the first backup almost no data was transferred. It was just a matter of "taking care of" the hardlinks, although that took a few hours.
    On XFS, if I'm seeing it right, the hardlinks in the snapshot folder were deleted, and now data is being transferred again from the Cloudron server.
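
    To confirm whether the new backup still shares data with the previous one, counting multiply-linked files should work (the path below is just an example, not necessarily Cloudron's exact layout):

    # files with a link count > 1 are shared with another backup via hardlinks;
    # a count near zero would mean the snapshot was effectively re-copied
    find /mnt/backups -type f -links +1 | wc -l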

    Can this be related to the filesystem?
    Or maybe the way the partition is mounted?

    Thanks for your help.

    PS: this is not a problem for me, things are working just fine. I'm just trying to understand, so I can optimize it if possible.

    Support backups

  • optimizing VPS for backups
    P pitadavespa

    [Screenshot from 2025-04-11 10-43-18.png]

    Support backups

  • optimizing VPS for backups
    P pitadavespa

    1st backup took around 10h. 745GB.
    2nd backup (rsync) took 3h06m, with 10 concurrent connections.
    Now running the 3rd backup (rsync) with 100 concurrent connections, to check if it's faster than the 2nd.

    I think all this time is spent just deleting and adding hardlinks. Is there any way I can optimize it? Maybe on the server side?
    On the Hetzner Storage Box this was way faster, just a few minutes.
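
    One way to check that theory, I think, is to time a pure hardlink copy of the latest backup directly on the backup server; if that alone takes hours, the bottleneck is metadata I/O on the HDD rather than rsync or the network (the directory names are just examples):

    # create a hardlink-only "copy" of the latest backup and time it,
    # then remove the test tree again
    time cp -al /mnt/backups/snapshot /mnt/backups/linktest
    rm -rf /mnt/backups/linktest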

    Many thanks to you all.

    Support backups

  • optimizing VPS for backups
    P pitadavespa

    Hi.
    Yes, I know. I have it set to 12GB, although I'm only using 10 parallel connections.

    Since the backup started (the first one), around 7h ago, it has copied 675GB, but now it's doing "something" that takes forever. iftop shows around 80Kbps of data transfer from the source server.
    I'm not sure how much more time it will take.

    Support backups

  • optimizing VPS for backups
    P pitadavespa

    Hi.

    When I used a Hetzner Storage Box for backups, everything worked just fine. Incremental backups took only around 4 minutes for a 600GB backup. Just great.

    I wanted to move to my own server, so I've been doing some tests.
    Everything works just fine. I'm using sshfs, rsync with hardlinks.
    The first backup, around 700GB now, takes around 10 hours (lots of small photos and such, backup server in a different country, etc.), which I consider normal.

    BUT, the next backups, although (almost) no data is transferred between the servers, take around 3.5 hours.
    I think it's because lots of hardlinks are being added/deleted, and that takes a lot of time.
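
    As far as I understand (I don't know Cloudron's exact command line), the pattern is roughly the classic rsync hardlink rotation, where every unchanged file still costs one metadata operation on the destination even though no data is copied:

    # rough sketch of hardlink-based incremental backups, not Cloudron's
    # actual invocation, and with illustrative paths: unchanged files are
    # hard-linked against the previous backup instead of transferred again
    rsync -a --delete \
      --link-dest=/mnt/backups/backup-previous \
      /path/to/app/data/ /mnt/backups/backup-new/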

    My questions are:

    1. I tested with ext4 and am now testing with xfs. Should I expect xfs to be faster at dealing with hardlinks?
    2. Any tips for mounting the partitions?
    3. Any ideas to make the whole process faster?

    The server is capable enough, with no high CPU load and no I/O wait, so I don't think that's the issue.
    Also, I'm using just 10 concurrent connections, because with more I didn't notice any speedup, and since it's sshfs with some latency involved (although only 25ms), I didn't want to saturate the SSH connection.

    Any tips are appreciated.
    Thanks!

    Support backups

  • Slow to delete files
    P pitadavespa

    Thank you.

    Cubby

  • Slow to delete files
    P pitadavespa

    @nebulon yes, exactly what you describe.

    Do you have any plans to implement those changes?
    Is it possible for you to do it? (I know this is a side project of yours)

    I'm really enjoying Cubby. I tested several alternatives, mainly because Cubby doesn't have user quotas, but I'd like to stay with Cubby.

    Thanks,
    L

    Cubby

  • Slow to delete files
    P pitadavespa

    Hi.

    I'm using Cubby on an NFS-mounted volume (same DC, 0.2ms latency, 10Gbps network).
    Deleting files is very slow, but deleting a folder containing those same files is (fairly) quick.

    I also tested Cubby on the same NVMe disk as the server, and the behaviour is the same.

    There's a process, recollindex, that runs when files are deleted. It takes many minutes when hundreds of files are deleted.
    Is there anything that can be done (maybe on my side) to avoid this?
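
    In case it helps to reproduce: while deleting a few hundred files, I just watch whether the indexer is started once per file or once for the whole batch, with something like:

    # poll for running recollindex processes once per second while deleting files
    watch -n1 'pgrep -a recollindex'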

    Thanks,
    Luis

    Cubby

  • Collabora word docs bigger than 100KB - cannot save
    P pitadavespa

    Yes, it has.
    Thank you very much!

    Cubby

  • [Feature-Request] Add Size limitation for users
    P pitadavespa

    @pistou79

    This would be great.

    Cubby feature-request cubby

  • Collabora word docs bigger than 100KB - cannot save
    P pitadavespa

    That's great. Thanks!

    Cubby

  • Collabora word docs bigger than 100KB - cannot save
    P pitadavespa

    Hi!

    The error is from the Cubby instance.
    I'm now testing it with a different Collabora server (outside Cloudron), and the same error shows up in Cubby.

    The .docx file itself is not relevant. I even created a new one. If it has enough text to reach (around) 100KB, or if you put an image (screenshots, for example) in the doc, it gives the error.

    Thank you for your time!

    Cubby

  • Collabora word docs bigger than 100KB - cannot save
    P pitadavespa

    Hi!
    This is my first post here. I just started using Cloudron (still on the free version, doing some tests) and I'm very happy I found it.

    I have this issue: when using Cubby and Collabora (installed on the same server), I can't seem to save Word files (haven't tested other types) bigger than 100KB.

    Error log:

    Nov 22 11:03:45 PayloadTooLargeError: request entity too large
    Nov 22 11:03:45     at readStream (/app/code/node_modules/raw-body/index.js:163:17)
    Nov 22 11:03:45     at getRawBody (/app/code/node_modules/raw-body/index.js:116:12)
    Nov 22 11:03:45     at read (/app/code/node_modules/body-parser/lib/read.js:79:3)
    Nov 22 11:03:45     at rawParser (/app/code/node_modules/body-parser/lib/types/raw.js:81:5)
    Nov 22 11:03:45     at Layer.handle [as handle_request] (/app/code/node_modules/express/lib/router/layer.js:95:5)
    Nov 22 11:03:45     at next (/app/code/node_modules/express/lib/router/route.js:149:13)
    Nov 22 11:03:45     at tokenAuth (/app/code/backend/routes/users.js:131:5)
    Nov 22 11:03:45     at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
    Nov 22 11:03:45   expected: 416711,
    Nov 22 11:03:45   expose: true,
    Nov 22 11:03:45   length: 416711,
    Nov 22 11:03:45   limit: 102400,
    Nov 22 11:03:45   status: 413,
    Nov 22 11:03:45   statusCode: 413,
    Nov 22 11:03:45   type: 'entity.too.large',

    I tried changing client_max_body_size in nginx, but with no success.
    Is there anything else I can do?
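
    The limit: 102400 in the log makes me think the 100KB cap sits in the app's own body parser rather than in nginx. A quick way to see which layer answers the 413 is to POST an oversized dummy body and look at the response (the URL below is just a placeholder, not the real Cubby endpoint):

    # build a ~200KB dummy payload and POST it; an nginx-generated 413 returns
    # an HTML error page, while the app's body parser returns its own error text
    head -c 200000 /dev/zero > /tmp/payload.bin
    curl -sk -o - -w '\nHTTP %{http_code}\n' -X POST \
      --data-binary @/tmp/payload.bin https://cubby.example.com/some/endpoint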

    Thanks!

    Cubby