Cloudron Forum

djxx (@djxx): 77 posts, 14 topics

Posts

  • sshfs backup duplicates network traffic?
djxx

    @james said in sshfs backup duplicates network traffic?:

    So, sshfs tries to issue a remote copy command but falls back to sshfs based copy if it fails for some reason.

    What is your provider for sshfs? Most people here use Hetzner Storage Boxes.

I'm my own provider 🙂 I'm just using a standard SSH install on Proxmox, and the files are stored on a ZFS cluster. I don't know of anything that would stop the copy command from working; what can I do to check / troubleshoot this?


  • sshfs backup duplicates network traffic?
djxx

    I am configuring my backup to use sshfs and noticed this while it was running:

    Copying /mnt/cloudronbackup/snapshot/mail.tar.gz to /mnt/cloudronbackup/2025-07-27-215345-102/mail_v8.3.2.tar.gz

    On the remote server I can see the snapshot and timestamped directory (e.g. 2025-07-27-215345-102) while the backup is running.

    Based on the network traffic, it seems that while it is moving each file from snapshot to the timestamped directory, it is literally using copy, which means the file has to make another round trip. If I'm not mistaken, this means the network usage for this backup will be 3x the size of the file.

    It seems like this is happening:

    1. Cloudron makes an archive
    2. Cloudron sends this file to the snapshot folder
    3. Cloudron receives the file back again (part of copy)
    4. Cloudron sends the file again to the timestamped folder (part of copy)

    Wouldn't it be much more efficient (and faster) to issue a mv command to move the file rather than have a round trip?

    Also, it seems like another side effect is the snapshot folder keeps the files there until the next run - requiring 2x the space for the backup.
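A local sketch of why mv would help (my illustration, not Cloudron's actual backup code; the /tmp paths are made up): mv within one filesystem is a metadata-only rename and touches no file data, while cp re-reads and re-writes every byte. Over an sshfs mount the same distinction holds; cp round-trips the data through the client, while mv becomes a single remote rename request.

```shell
# Create a dummy "snapshot" archive, then move it to a timestamped dir.
mkdir -p /tmp/snapdemo/snapshot /tmp/snapdemo/2025-07-27
dd if=/dev/zero of=/tmp/snapdemo/snapshot/mail.tar.gz bs=1M count=1 status=none
before=$(stat -c %i /tmp/snapdemo/snapshot/mail.tar.gz)
# mv within one filesystem is a rename: same inode, no bytes copied.
mv /tmp/snapdemo/snapshot/mail.tar.gz /tmp/snapdemo/2025-07-27/mail.tar.gz
after=$(stat -c %i /tmp/snapdemo/2025-07-27/mail.tar.gz)
[ "$before" = "$after" ] && echo "rename: same inode, no data copied"
```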


  • SSHFS read speed significantly slower than scp with the same target
djxx

@nebulon Can you tell me how / where to edit this so it uses the options -o direct_io,compression=no? Also, is it safe to do so, and how long will the change persist?

sshfs volume

  • XMPP Server - Prosody
djxx

I'm happy to say that I've moved my XMPP server from NethServer to Cloudron. While this is probably not a common move, I am sharing some notes here in case it helps someone else. Also, perhaps this'll cause Cloudron to show up in a few more searches. 😄

    • Install XMPP on Cloudron using the steps above. A bit manual for now!

    • Dump your ejabberd data (that's the XMPP server NethServer uses) with this command:
      /opt/ejabberd-20.04/bin/ejabberdctl --config-dir /etc/ejabberd dump /etc/ejabberd/xmpp_dump.txt

    • Download this dump file locally

• For ease, clone the prosody source to your local computer so you can use the migration tools without installing needless packages on Cloudron. You'll need to run ./configure and make - but you don't need to actually install it.

• Don't be a Lua noob. I spent a while struggling to get my Lua environment set up, and thought I needed to run the tools like lua ejabberd2prosody.lua but got lots of errors about missing dependencies. Once I figured out you need to execute it directly, like ./ejabberd2prosody.lua, things worked fine.

    • run the ejabberd2prosody.lua script on your xmpp_dump.txt file:
      ./tools/ejabberd2prosody.lua ~/Desktop/xmpp_migrate/xmpp_dump.txt

• Create a migrator configuration (or use the one I've pasted below). It basically takes everything from the file-based storage format and puts it into the SQLite format, since that's how the Cloudron prosody is configured. Docs:

      • https://prosody.im/doc/migrator
      • https://prosody.im/doc/storage
    • Run the migrator script:
      ./tools/migration/prosody-migrator.lua --config=./migrator.cfg.lua prosody_files database

    • Turn off your Cloudron XMPP app

    • Copy the resulting prosody.sqlite file into your Cloudron XMPP's /app/data folder. It will be in the /data folder under your local prosody directory.

    • Turn on your Cloudron XMPP app

Your bookmarks, rosters, etc. will now be transferred to your new server! This doesn't appear to move archived messages (mod_mam), probably because most prosody servers aren't configured to store these permanently, so they don't bother migrating them.

    I only noticed one issue while migrating. When I first ran the migrator script it gave me errors about topics being empty on some MUCs. I thought I was being smart and edited the code to handle the blanks. This caused me to be unable to join the MUCs on Prosody on certain XMPP clients because Prosody expects there to be a Topic for every MUC.

    Once I manually adjusted the MUC topics to be non-empty, the other clients started working fine.

    Another almost-issue is that Gajim needed to be restarted a few times to start using OMEMO properly. I think the other MUC issues may have thrown it into an error state.

    prosody_files {
        hosts = {
            -- each VirtualHost to be migrated must be represented
            ["domain.com"] = {
                "accounts";
                "account_details";
                "account_flags";
                "account_roles";
                "accounts_cleanup";
                "auth_tokens";
                "invite_token";
                "roster";
                "vcard";
                "vcard_muc";
                "private";
                "blocklist";
                "privacy";
                "archive";
                "archive_cleanup";
                "archive_prefs";
                "muc_log";
                "muc_log_cleanup";
                "persistent";
                "config";
                "state";
                "cloud_notify";
                "cron";
                "offline";
                "pubsub_nodes";
                "pubsub_data";
                "pep";
                "pep_data";
                "skeletons";
                "smacks_h";
                "tombstones";
                "upload_stats";
                "uploads";
            };
            ["conference.domain.com"] = {
                "accounts";
                "account_details";
                "account_flags";
                "account_roles";
                "accounts_cleanup";
                "auth_tokens";
                "invite_token";
                "roster";
                "vcard";
                "vcard_muc";
                "private";
                "blocklist";
                "privacy";
                "archive";
                "archive_cleanup";
                "archive_prefs";
                "muc_log";
                "muc_log_cleanup";
                "persistent";
                "config";
                "state";
                "cloud_notify";
                "cron";
                "offline";
                "pubsub_nodes";
                "pubsub_data";
                "pep";
                "pep_data";
                "skeletons";
                "smacks_h";
                "tombstones";
                "upload_stats";
                "uploads";
            };
        };
    
        type = "internal"; -- the default file based backend
        path = "/home/user/code/prosody-build/prosody-0.12.4/data/";
    }
    
    database {
        -- The migration target does not need 'hosts'
        type = "sql";
        driver = "SQLite3";
        database = "prosody.sqlite";
    }
    
    App Packaging & Development

  • Trying to add an sshfs mounted location as a regular file system volume type in Cloudron
djxx

    I'm facing the same issue with Nextcloud and trying to tune the performance of SSHFS (https://forum.cloudron.io/topic/13852/sshfs-read-speed-significantly-slower-than-scp-with-the-same-target/9) . The answer, at least for Nextcloud, is to adjust the configuration file. Instead of trying to "trick" Cloudron into accepting an SSHFS mount point as the primary storage, just adjust the applications config file to point to the mount point. Not sure if it will work for Immich, but it works for Nextcloud.

Also, I'm doing this with Nextcloud for the exact same reason - I want to manage my pictures. I'm trying the "Memories" plugin in Nextcloud which has pretty good reviews. I'll probably move on to Immich next for testing. 🙂

    Support volumes sshfs

  • SSHFS read speed significantly slower than scp with the same target
djxx

    @robi - I wonder if it still does any buffering when writing to /dev/null ? Since that's what the dd read command above does. In any case, this suggestion caused me to revisit the direct_io option. It says it disables the kernel paging cache, which does seem to give the most consistent performance improvement.

    Yet Another Data Point - I did a lot more testing today, and I think I'm as far as I can go. The good news: I can consistently get 16 - 25 MB/s read speeds.

    TL;DR: using this command gives me the best read performance (2x-3x improvement): nice -n -10 sshfs -s -o direct_io,compression=no

    Why I'm using these options:

    direct_io

    direct_io disables caching, and had quite an interesting effect on reads.

Using the -f -d options I was able to watch the packets going through. I was wrong before about the writes being bigger than the reads; they're not. But the writes are issued with much more parallelism than the reads.

    Before direct_io:

    [01315] READ
      [01308]           DATA    32781bytes (31ms)
      [01309]           DATA    32781bytes (31ms)
      [01310]           DATA    32781bytes (31ms)
      [01311]           DATA    32781bytes (31ms)
    [01316] READ
    [01317] READ
    [01318] READ
    [01319] READ
      [01312]           DATA    32781bytes (31ms)
      [01313]           DATA    32781bytes (31ms)
      [01314]           DATA    32781bytes (31ms)
      [01315]           DATA    32781bytes (31ms)
    

    READ requests 4 chunks at a time, waits for them, and then requests 4 more.

    [05895] WRITE
      [05827]         STATUS       28bytes (34ms)
      [05828]         STATUS       28bytes (34ms)
      [05829]         STATUS       28bytes (35ms)
      [05830]         STATUS       28bytes (35ms)
      [05831]         STATUS       28bytes (35ms)
      [05832]         STATUS       28bytes (34ms)
      [05833]         STATUS       28bytes (34ms)
      [05834]         STATUS       28bytes (34ms)
      [05835]         STATUS       28bytes (34ms)
    [05896] WRITE
    [05897] WRITE
    

    WRITE requests at least 60 chunks at a time, and sometimes I saw over 100 chunks pending.

    After turning on direct_io, the reads look more like the writes:

    [06342] READ
    [06343] READ
    [06344] READ
      [06313]           DATA    32781bytes (31ms)
      [06314]           DATA    32781bytes (31ms)
      [06315]           DATA    32781bytes (31ms)
      [06316]           DATA    32781bytes (31ms)
      [06317]           DATA    32781bytes (32ms)
      [06318]           DATA    32781bytes (32ms)
      [06319]           DATA    32781bytes (32ms)
      [06320]           DATA    32781bytes (32ms)
      [06321]           DATA    32781bytes (33ms)
      [06322]           DATA    32781bytes (35ms)
      [06323]           DATA    32781bytes (35ms)
      [06324]           DATA    32781bytes (36ms)
      [06325]           DATA    32781bytes (36ms)
      [06326]           DATA    32781bytes (36ms)
      [06327]           DATA    32781bytes (37ms)
    

Note the difference in the chunk IDs: it's allowing up to 31 chunks to be pending before requesting more.

    I think this is the primary reason for the speed increase.

    -s for single threading

I noticed that running it on a single thread made the degradation of repeated file reads less pronounced. Instead of dropping back to 8 MB/s after a few reads, it does 25 MB/s reads at least 5-6 times (500-600 MB) before dropping down to 16 MB/s. Also, it recovers back to 25 MB/s over time, whereas with multi-threading I needed to restart the SSHFS connection to get 25 MB/s speeds again.

    nice

Since there seems to be an element of CPU bottleneck (as resolved by running in a single process), I also wanted to give this process priority. It seems to help the session get more 25 MB/s reads before slowing down.

    compression=no

Because we're now on one thread, and we're hogging lots of CPU time, I disabled compression. I didn't notice a difference in throughput with it on, but turning it off helps reduce CPU load.

    Next Steps:

    I will run this test a few more times, and probably even adjust my mount for the volume manually to see if it helps performance.

    There is definitely some element of throttling / filling up, because repeated reads in the same session can get slower, and starting a new session can help the speed go back up. I'm not sure if this is on the client side or the server side. Any insights would be greatly appreciated.

    Even though I wish there was a clearer answer, I'll be happy if the 2x boost to read speed works.

    P.S. - I even tried a "high performance SSH" binary hpnssh, and it did not make a noticeable difference in my tests.
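To pull the options together, here is the tuned mount spelled out end-to-end (a sketch: the host and paths are placeholders, not my real setup; nice -n -10 needs root, drop it otherwise):

```shell
# Assemble the tuned sshfs invocation from the options discussed above.
SSHFS_OPTS="direct_io,compression=no"
REMOTE="user@storage.example.com:/tank/data"  # hypothetical server/path
MNT="/mnt/data"                               # hypothetical mount point
# Echoed rather than run so the sketch is safe to paste; remove the
# echo to actually mount, and unmount later with: fusermount3 -u /mnt/data
echo nice -n -10 sshfs -s -o "$SSHFS_OPTS" "$REMOTE" "$MNT"
```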

sshfs volume

  • XMPP Server - Prosody
djxx

    @nebulon - as you may see from my other posts, I'm all in on Cloudron ;). I'm in the process of migrating my last server from NethServer to Cloudron; XMPP is one of those services I couldn't move without. I plan to move forward with my manual approach in this thread, but am still interested to see when this could become an official app. Does Cloudron 9 have an ETA?

    App Packaging & Development

  • SSHFS read speed significantly slower than scp with the same target
djxx

Another data point. I tried using sshfs on my LAN to the data server, and I got 112 MB/s write and 117 MB/s read - both of which are right at the theoretical limit of the gigabit connection. Not to mention it's taunting me with the read speed actually being faster than the write speed. 😧

I decided to do another test with my laptop <-> Hetzner server. So now we're transferring between the SSD on the VPS and the SSD on my laptop. The speeds are the same:
    writing from laptop to VPS: 50 MB/s
    reading from VPS to laptop: 7 MB/s

    I checked and both my laptop and Cloudron are using the same version of SSHFS and Fuse:

    SSHFS version 3.7.3
    FUSE library version 3.14.0
    using FUSE kernel interface version 7.31
    fusermount3 version: 3.14.0
    

    To get the best picture possible of the traffic, I put a fast watch on the connection:
    watch -n 0 ss -t state established dst <server_ip>/24

With this, I can see that the write sizes are ~10x bigger than the read sizes. I'm wondering if this is why the performance difference is only seen in WAN situations; 10x the round trips hurts a lot more on the WAN than the LAN. And for those of us with storage boxes in Europe and servers outside of Europe, 10x the round trips really hurts.

    I finally stumbled across this article (https://www.admin-magazine.com/HPC/Articles/Sharing-Data-with-SSHFS) which does some pretty detailed performance testing and tuning with SSHFS.

    The options they suggest for the sshfs mount didn't help much. I did notice that using direct_io can sometimes make the read speeds go up to 20 MB/s, but it's not reliable.

    I'm wondering if we're now into the realm of TCP configuration (which is the article's option #2) to increase a TCP buffer sizes. This would be a server-wide change, and is out of my depth. What are your thoughts, @nebulon ?

sshfs volume

  • SSHFS read speed significantly slower than scp with the same target
djxx

As a data point, I was also using a Hetzner storage box previously. 10 MB/s writes, 1.5 MB/s reads. The speeds used to be faster, but I moved my server to the US and storage boxes aren't offered in the US. Moving the data to my own server gave me a ~5x write speed increase and a ~5x read speed increase - but I still think that a 7 MB/s read vs a 50 MB/s write points to some configuration issue.

    Since SSHFS is barely maintained, are there other volume options coming? From what I've read, CIFS could work if we override the user ID at the mount configuration level.

sshfs volume

  • SSHFS read speed significantly slower than scp with the same target
djxx

    @james Thanks for your response. Yes, I'm aware the tools are different, and that sshfs will have more overhead. But the scp speed shows a maximum throughput of 50 MB/s, and I cannot believe that a properly configured sshfs connection has an 85% performance hit vs scp.

    The remote server I'm mounting through SSHFS is a server in my LAN and my cloudron server is with Hetzner. My home connection is gigabit fiber.

    I did my testing by mounting via SSHFS and just seeing how quickly I could read/write with this:

    dd if=/dev/zero of=tempfile bs=1M count=100 conv=fdatasync
    dd if=tempfile of=/dev/null bs=1M count=100
    

    For my SCP test, I just copied the same file both directions.

    I tried different sizes and counts for dd; the writing (of=tempfile) is always 50MB/s, and the reading (if=tempfile) is always 7MB/s or less.
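One extra check worth doing with the same dd pair (my suggestion, not from the thread): run it against a temp directory on purely local storage first, so a slow local disk can be ruled out before blaming sshfs.

```shell
# Baseline the local disk with the same commands used on the sshfs mount.
tmp="$(mktemp -d)"
# Write 100 MB with fdatasync so the throughput figure includes the flush.
dd if=/dev/zero of="$tmp/tempfile" bs=1M count=100 conv=fdatasync 2>&1 | tail -n 1
# Read it back, discarding the data.
dd if="$tmp/tempfile" of=/dev/null bs=1M count=100 2>&1 | tail -n 1
```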

sshfs volume

  • SSHFS read speed significantly slower than scp with the same target
djxx

    I'm trying to use an SSHFS volume and noticing some speed issues.

    With SSHFS, I can write to the remote server at 50 MB/s, but I can only read at 7 MB/s.

    When using SCP with the same remote server, I can read and write at 50 MB/s.

    The network link, local disk, and remote disk speeds all exceed 50 MB/s.

    Has anyone else experienced this, and possibly have a fix?

sshfs volume

  • n8n - Puppeteer - shared libraries failure
djxx

@BrutalBirdie - thanks, but these are Node modules and not OS modules. Luckily, I did find a workaround. Step by step, I found all the missing modules, downloaded the deb files from https://packages.debian.org/, then extracted them into /app/data/libatk/ (because that was the first missing library).
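For anyone repeating this, the missing-library hunt can be sketched with ldd (the chrome path is the Puppeteer cache location from the error above, version varies; the fallback binary is only so the sketch runs anywhere):

```shell
# ldd lists the shared objects a binary needs and flags absent ones.
CHROME="$HOME/.cache/puppeteer/chrome/linux-135.0.7049.84/chrome-linux64/chrome"
BIN="$CHROME"
[ -x "$BIN" ] || BIN="$(command -v ls)"   # fallback purely for illustration
ldd "$BIN" | awk '/not found/ {print $1}'
# For each missing lib, fetch the matching .deb from packages.debian.org
# and unpack it without touching dpkg's (read-only) database, e.g.:
#   dpkg-deb -x libatk1.0-0_2.36.0-2_amd64.deb /app/data/libatk  # version hypothetical
```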


    After this, I discovered the entrypoint script calls an editable script and edited /app/data/env.sh with these lines:

    export PATH="/app/data/libatk:$PATH"
    export LD_LIBRARY_PATH="/app/data/libatk"
    

Once I did this, I was able to enable Add Container Arguments in the Puppeteer node's options in n8n and it worked just fine. The container arguments have it ignore all sandboxing best practices (which is OK since we're running in a container).

    It would be good if the base n8n application for Cloudron would include the dependencies mentioned above, because my hack will get overwritten at each application update.

    N8N

  • n8n - Puppeteer - shared libraries failure
djxx

I'm trying to use Puppeteer in n8n and getting an error. Normally I would just install the package, but since these apps run in a Docker container and /var/lib/dpkg is read-only, that doesn't seem like it will work. Does anyone have a workaround, either for getting Puppeteer to work or for installing packages in an app? I don't want to use a 3rd-party browserless/headless service.

Problem in node 'Puppeteer'

    Failed to launch/connect to browser: Failed to launch the browser process! /home/cloudron/.cache/puppeteer/chrome/linux-135.0.7049.84/chrome-linux64/chrome: error while loading shared libraries: libatk-1.0.so.0: cannot open shared object file: No such file or directory TROUBLESHOOTING: https://pptr.dev/troubleshooting


    N8N

  • XMPP Server - Prosody
djxx

    @robi I'm not an expert, but I did do some research on this to respond to Nebulon above. The main thing to note here is that the XMPP protocol expects different domains for different functions. So yes - it's an architectural choice that is mostly beyond our control. Each subdomain represents a different module, and the modules talk to each other (and other servers). They need a way to identify themselves, and they use the domain.

    I did some reading and it seems it is theoretically possible to make a server that utilizes different ports to differentiate the modules under a single sub-domain, but this would require tweaking and re-compiling an XMPP server.

Even if this were done, with all XMPP protocols followed, there's always the chance that some not-fully-compliant client that works with every well-known XMPP server would not work with ours, because we've chosen to deviate so far from the norm while still technically following the standards. Making these upstream changes would require someone much more familiar with XMPP servers than me.

    While I look forward to Cloudron allowing us to package more complex services, I think XMPP is in a pretty good place for the above-average admin; it currently takes less than 5 minutes to set it up. I will probably end up making a cron job to sync the certs into my app's storage volume when I deploy this to production. If I think it's useful to others, and Cloudron 9 isn't out yet, I'll share it here.

    App Packaging & Development

  • XMPP Server - Prosody
djxx

    @nebulon In general, XMPP isn't going to actually serve any web pages. I installed an extra module for health checking because Cloudron requires it - but there's no reason it needs to be exposed externally. BOSH may use HTTP, but it doesn't serve responses that can be interpreted as web pages.

    Personally, I don't think it hurts that there's a 404 on an HTTP/S port for an application that doesn't serve HTTP. Putting up a big page like "You found an XMPP server!" just invites trouble from the bad people on the internet.

    App Packaging & Development

  • XMPP Server - Prosody
djxx

    @nebulon understood. If needed, I'm sure we could make it not crash, even if it's not really a functional XMPP server. And for me I'm at least happy I have a way to run XMPP on my server. I'm just trying to share the work with others.

    When is the next version scheduled for release?

    App Packaging & Development

  • XMPP Server - Prosody
djxx

The root cert will always be needed to get user@domain.com addresses, and not having the other sub-domains would make the app basically unusable. The extra sub-domains are for things like uploading files, proxying connections when a direct connection is unavailable, and allowing people from other servers to contact users on your XMPP instance. So, if all the extra domains were removed you would only have an XMPP chat that works locally on your server, and without pictures. These limitations defeat the purpose of having XMPP, in my opinion.

    After looking again, it seems we could drop the "proxy" sub-domain since we're using Cloudron's TURN server (I would need to test). But there's still a non-zero number of extra domains needed, as well as the TLD cert.

    App Packaging & Development

  • XMPP Server - Prosody
djxx

@nebulon - I just tried the alias approach, but it gives the error alias location 'domain.tld' is in use. So it seems the cert workaround will still be needed until there is a manifest option to get the TLD cert without using the alias.

    App Packaging & Development

  • XMPP Server - Prosody
djxx

    @nebulon Thanks for taking a look through. I am certainly open to making changes to make it a compliant package - I just don't know what those changes are.

    I didn't realize that the TLD could be added as an alias which would give the cert. I was using /etc/certs before I did the hacky workaround to get the TLD. I will try the approach you mentioned and make sure things are still working.

One question is, apparently certs for the root domain are also required, even though no DNS records for that are set?

Correct. The XMPP specification uses SRV records to point to a specific server. It does need the cert to validate usernames like user@domain.com even if you choose to host the XMPP server somewhere else entirely.

Otherwise there are naturally discrepancies elsewhere, given that this started (as far as I can tell) from some upstream Docker image which will not fit, but we can get through one by one.

    Can you tell me a bit more about this? I changed it to use the Cloudron base, and confirmed that all the actions it does to build the server are successful. I did my best to follow putting data under /app/data, and runtime files under /run, according to the documentation.

    For the extra required DNS records, we have to see if and how we can integrate this in the platform to support those. We already have some well-known type records, maybe it fits there.

    This would be nice, but it's really not that difficult (compared to the entirety of setting up an XMPP server). I think if the package was installable and the only extra steps are some DNS changes, it'd still be very easy for people who want XMPP to do.

    App Packaging & Development

  • XMPP Server - Prosody
djxx

I guess some clear setup instructions would help 🙂

    • install the app to the 'xmpp' subdomain
    • add extra DNS CNAME/A records pointing to the same server:
      • pubsub
      • proxy
      • upload
      • conference
    • add these domains as aliases for the app
    • add DNS SRV records:
      • _xmpp-client._tcp.domain.tld port 5222
      • _xmpps-client._tcp.domain.tld port 5223
      • _xmpp-server._tcp.domain.tld port 5269
    • make a storage volume called "Extra Storage"
    • mount the storage volume to the app
    • copy certs into the storage under the subdirectory app_xmpp
• for my server, the command looks like this (run on the Cloudron host directly, not in the app terminal):
      • cp -f /home/yellowtent/platformdata/nginx/cert/* /mnt/HC_Volume_102134826/app_xmpp/
    • restart the app to make sure it's starting up with all pieces in place (extra domains, DNS records, and certs)
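Since the platform certs get renewed, the copy step above has to be repeated periodically; a crontab sketch for the Cloudron host (the schedule is just an example, and the paths match the command above - adjust the volume path to yours):

```
# Re-sync the platform certs into the XMPP app's volume nightly at 03:00,
# so renewed certs propagate without manual intervention.
0 3 * * * cp -f /home/yellowtent/platformdata/nginx/cert/* /mnt/HC_Volume_102134826/app_xmpp/
```

Restart the app after a renewal lands so prosody picks up the new certs.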

    Then you can check for compliance with these two tools:

    • https://compliance.conversations.im
    • https://connect.xmpp.net/
    App Packaging & Development