Cloudron makes it easy to run web apps like WordPress, Nextcloud, and GitLab on your server.


  • 25 Votes
    63 Posts
    26k Views
    andreasdueren
    @jdaviescoates Would you be willing to run a custom app as a standalone application for recordings? I set something up for our org which I'm hosting at recording.cloud.tld.com, but the package hardcodes some opinionated things and secrets. For example, I set up recording and speaker-diarized transcription (which is super slow on CPU). I'm willing to open source this, but the package requires more work for a general audience, which I'm only willing to put in if this is of value to other people.
  • Languagetool: Download of n-grams ignores path set in env

    Solved LanguageTool
    10
    1
    0 Votes
    10 Posts
    3k Views
    G
    I would suggest adding the following to the documentation here: https://docs.cloudron.io/packages/languagetool/ If you want to use a volume to install and store n-grams, then after creating that volume [link to doc], mount it to the LanguageTool application under the "Storage" menu and set it as a "read and write" volume. Normally volumes are reachable via /mnt/volumes/<volume-id>, but once mounted to an application they are reachable via /media/<volume-id>. Therefore the command for installing the n-grams is /app/pkg/install-ngrams.sh /media/<volume-id> en es and the path in /app/data/env has to be NGRAMS_DATASET_PATH=/media/<volume-id>. That may help users like me who want a more step-by-step guide; the steps are sketched below.
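    A condensed sketch of those steps. The app location and volume id are placeholders; cloudron exec is one way to get a shell inside the app (the dashboard's web terminal works too):

        # 1. In the dashboard: create the volume, then mount it read-write
        #    to the LanguageTool app under "Storage".
        # 2. Open a shell inside the app and install the n-grams onto it:
        cloudron exec --app languagetool.example.com
        /app/pkg/install-ngrams.sh /media/<volume-id> en es
        # 3. Point LanguageTool at the dataset by setting, in /app/data/env:
        #    NGRAMS_DATASET_PATH=/media/<volume-id>
        # 4. Restart the app so the new environment takes effect.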
  • [Bug] Cubby: Drag & drop multiple files only uploads the first file

    Unsolved Cubby
    6
    0 Votes
    6 Posts
    52 Views
    nebulon
    I found a fix for this: https://git.cloudron.io/apps/cubby/-/commit/3ea54dc85266177ffde5b35f83e207c9ba5ef527 It will be part of the next package release.
  • 0 Votes
    8 Posts
    821 Views
    humptydumpty
    BTW, if I click on "Back to login" and log in with my username/password, it logs me into Cloudron and shows me my dashboard! On a positive note, I now have a Cloudron dashboard desktop app.
    1 Vote
    7 Posts
    58 Views
    Supaiku
    Honestly, it's a very convenient bug. I kind of like that it attempts to restore to the other location. However, it would be a problem if the file system were too small to support that. It would be great if, when it errors, there were a clear way to resolve the error.
  • Grist - Package Updates

    Pinned Grist
    7
    0 Votes
    7 Posts
    269 Views
    Package Updates
    [0.5.1] Update grist-core to 1.7.11. Full Changelog
    • Optional authentication using getgrist.com accounts
    • Import from Airtable
    • New environment variables give better control over who can access your Grist instance: GRIST_PERSONAL_ORGS disables personal organizations, while GRIST_ORG_CREATION_ANYONE prevents non-admins from creating new organizations
    • Configurable email notifications for suggestions
    • Allow the maximum options limit on Forms to be configured (defaults to 30 options, configurable up to 1000)
    • Limit gVisor to 8 processes by default
    • Display references and reference lists in a friendlier fashion
    • Prevent conditional formatting changes from being displayed as suggestions
    • Add a confirmation dialog when a resource is being shared publicly
    • Hide the bell icon showing connection state when Grist is connected and functioning normally
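    A hedged example of the two new flags written as environment variables. Whether Grist expects true/false values here (and where you set them on a given deployment) is an assumption, so check the upstream changelog before relying on these exact values:

        # Disable personal organizations (value semantics assumed):
        export GRIST_PERSONAL_ORGS=false
        # Only admins may create new organizations (value semantics assumed):
        export GRIST_ORG_CREATION_ANYONE=false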
  • 2 Votes
    7 Posts
    64 Views
    robi
    Also as off-grid, local-only devices doing home automation and security, for example.
  • The Matrix is coming

    Off-topic
    13
    2 Votes
    13 Posts
    78 Views
    jdaviescoates
    @timconsidine what would be the right things to do from your perspective? The recently published Warm Homes Plan and Local Power Plan are both pretty decent IMHO (although there's still LOTS of room for improvement and not enough detail about implementation yet). IMHO there is much, MUCH more political corruption tied to keeping us all hooked on fossil fuels than there likely ever will be with renewables (which by their very nature are inherently more distributed).
  • Notification being unhelpful

    Unsolved Support dns porkbun
    4
    0 Votes
    4 Posts
    42 Views
    E
    @james said in Notification being unhelpful: "Hello @ekevu123, where did you see this log message?" In the main notifications of the server instance. @joseph said in Notification being unhelpful: "This is coming from porkbun - https://status.porkbun.com/ . They are having some service issues." Oh, thank you, that wasn't clear!
  • Czech Translation for Cloudron Now 100% Complete 🇨🇿

    Discuss
    5
    5 Votes
    5 Posts
    33 Views
    archos
    @nebulon said in Czech Translation for Cloudron Now 100% Complete: "This is great! We will ship the next Cloudron version with Czech (internally 9.1 is already released for new installs, so it will be added to the next patch release)." That's great news, thank you!
  • 0 Votes
    17 Posts
    115 Views
    girish
    Right, that postinstall message is totally misleading. I have removed it and am making a new package now. Thanks for reporting, @scooke @jdaviescoates
  • Cubby & Collabora integration

    Locked Cubby
    18
    0 Votes
    18 Posts
    2k Views
    nebulon
    This is fixed with the latest package version. I will lock this thread as it seems to collect different issues; please open a new thread in the Cubby category for new Collabora issues with Cubby.
  • Cubby - Package Updates

    Pinned Cubby
    104
    0 Votes
    104 Posts
    44k Views
    Package Updates
    [2.7.2] Update Cubby to 2.7.2. Fix opening office documents which require path encoding.
  • Point arbitrary domains to a cloudron app?

    Feature Requests
    6
    0 Votes
    6 Posts
    71 Views
    nebulon
    Not sure if I get the whole point, but is there a reason not to make that domain known to Cloudron? You can add it via the "manual" provider if you manage those DNS records externally anyway. Once added, you can set those domains in the app's settings in your Cloudron dashboard and things should work as you want.
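    If the domain stays managed externally, a quick sanity check is that it resolves to the same server as the app's existing location. The hostnames below are placeholders, not taken from the thread:

        # Assumed setup: the app already runs at app.cloudron.example and
        # www.external-domain.com should reach it as an alias.
        dig +short A app.cloudron.example
        dig +short A www.external-domain.com
        # Both should print the Cloudron server's IP before you add the
        # alias domain to the app's settings in the dashboard.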
  • Ente for cloudron, help for testing wanted.

    App Packaging & Development
    24
    9 Votes
    24 Posts
    2k Views
    andreasdueren
    Updated Ente Package: andreasdueren/ente-cloudron:0.6.3
    Fix: startup.log no longer grows unbounded. The startup log was being appended across every restart, growing to 4-5 GB and causing very slow backups. It is now truncated on each startup, so only the current session's logs are kept. Backups will be significantly faster from the next restart onward.
    Upstream changes (ente-io/ente):
    • 792f28c: README: add Locker Obtainium and GitHub release links
    • 9810269: [mob][locker] Prevent duplicate default collections during signup
    • 26549fc: [mob][photos] VectorDB write index fix
    • 686706b: Toggle to let ML run continuously
    • 922784b: Internal toggle to let ML run continuously without interruption
    Upstream: https://github.com/ente-io/ente/commit/922784b
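    A minimal sketch of the kind of truncation described above. The log path and start command are assumptions for illustration, not taken from the actual package:

        #!/bin/bash
        LOG=/app/data/startup.log   # assumed location
        # Truncate instead of appending across restarts, so the log only
        # holds the current session and never grows unbounded.
        : > "$LOG"
        exec /app/code/start-ente.sh >> "$LOG" 2>&1   # hypothetical start command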
  • Bookstack - Package Updates

    Pinned BookStack
    150
    0 Votes
    150 Posts
    117k Views
    Package Updates
    [1.46.7] Update BookStack to 25.12.8. Full Changelog
    • Fixed content filtering removing the link target attribute, which impacted "New Window" links (#6034)
    • Fixed content filtering to not remove user references in comments
    • Updated PHP package versions
  • Collabora Online - Package Updates

    Pinned Collabora Online (CODE)
    160
    0 Votes
    160 Posts
    89k Views
    Package Updates
    [1.46.0] Update code to 25.04.9.2.1
  • Struggling to Replace MinIO - Advice Welcome!

    Discuss
    7
    1 Vote
    7 Posts
    141 Views
    jadudm
    Depending on your appetite for loss, I would consider backups-in-depth. That is, one backup site is not a backup. Use rsync-based backup over SSHFS to Hetzner or similar. You will want to select "use hardlinks" and, if you want it, encryption. The use of hardlinks is, essentially, your de-duplication. (See below.)

    For a second layer of depth, I would consider a (daily? weekly? monthly?) backup of your primary backup site to a secondary. This could be a sync to AWS S3, for example. Note that any S3-based backup (B2, Cloudflare ObjectSomething, etc.) will have both a storage cost and an API cost. If you are dealing with millions of small files in your backups, the API costs will become real, because dedupe requires checking each object and then possibly transferring it (multiple PUT/GET requests per file). S3 can automatically keep multiple versions of a file; you could use this for in-place rotation/updating of files. If you are doing an S3 backup, you can use lifecycle rules to automatically move your S3 content to Glacier, which is much cheaper than "hot" S3 storage. But you pay a penalty if you download or delete too early or too often.

    As a third, cheap-ish option, get a 2- or 4-bay NAS that can run TrueNAS and put a pair of 8-12 TB HDDs in it, configured as a ZFS mirrored pair. Run a cron job once per day/week to pull down the contents of the Hetzner box. (Your cron job will want to, again, use rsync with hardlinks.) You now have a local machine mirroring your hot backups. It is arguably more expensive than some other options (~600 USD up front), but you don't have any "we might run out of space" issues. And because you're using it to pull, you don't have any weird networking problems: just rsync the data down over SSH.

    Whatever you are doing, consider targeting two different destinations at two different times (per day/alternating/etc.), or some combination of backups that gives you multiple copies at multiple sites. That could be Hetzner in two regions with backups run on alternating days, or a storage box plus a clone pulled down every day to a local NAS, or ... or ...

    Ultimately, your 150 GB is small. If you're increasing by a few GB per week, you're likely to reach roughly 1 TB/year. Not knowing your company's finances, this is generally considered a small amount of data. Optimizing for cost immediately is possibly less important than just getting the backups somewhere. Other strategies could involve backing up to the NAS locally first and then using a cron job to borg or rsync to a remote host (possibly more annoying to set up), but you might have more dedupe options then. (borg has dedupe built in, I think.) I have a suspicion that your desire to use object storage might be a red herring. But, again, I don't know your constraints/budget/needs/concerns.

    Deduplication: if you use rsync with hardlinks, each daily backup will automatically dedupe unchanged files. A hardlink is a pointer to a file. So if you upload super_ai_outputs_day_1.md to your storage on Monday and it remains unchanged for the rest of time, each subsequent day's backup is just a hardlink to that file. It will, for all intents and purposes, take up zero disk space. So if you are backing up large numbers of small-to-medium sized files that do not change, SSHFS/rsync with hardlinks will naturally dedupe your unchanging old data.
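    A minimal sketch of that hardlink pattern, independent of Cloudron's built-in "use hardlinks" option. The source path and SSHFS mountpoint are assumptions:

        #!/bin/bash
        # Rotating daily snapshots: unchanged files become hardlinks into
        # the previous snapshot, so they take no extra space.
        SRC=/srv/data/                      # what to back up (assumed)
        DEST=/mnt/hetzner-box/snapshots     # SSHFS-mounted target (assumed)
        TODAY=$(date +%F)
        mkdir -p "$DEST/$TODAY"
        # On the very first run "latest" does not exist yet; rsync warns
        # and simply makes a full copy.
        rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$TODAY/"
        # Repoint "latest" at the newest snapshot for the next run.
        ln -sfn "$DEST/$TODAY" "$DEST/latest"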
    This will not do binary deduplication of different files. So if you're looking for a backup solution that would, say, identify that two 1 GB files have an identical 500 MB middle and somehow dedupe that, you need more sophisticated tools and strategies. Rsync with hardlinks just makes sure that the same file, backed up every day, does not take (# days * size) of space; it takes the original size of the file plus a directory entry in the FS for each link.

    Note, though, that if you copy a snapshot of your hardlinked backups to an object store, every file may take its full size for every day. I'm possibly wrong on that, but I'm not confident that most tools would know what to do with those hardlinks when copying to an object store. I think you'd end up multiplying your disk usage significantly, because your backup tool will have to create a copy of each file in the object store. (Most object stores have no notion of symlinks/hardlinks.) An experiment with a subset of the data, or even a few files, will tell you the answer to that question.

    If you have other questions, you can ask here, or DM me.
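    And a hedged sketch of the Glacier lifecycle rule mentioned above, using the AWS CLI. Bucket name and prefix are placeholders:

        # Transition objects under snapshots/ to Glacier after 30 days.
        aws s3api put-bucket-lifecycle-configuration \
          --bucket my-backup-bucket \
          --lifecycle-configuration '{
            "Rules": [{
              "ID": "archive-old-backups",
              "Status": "Enabled",
              "Filter": {"Prefix": "snapshots/"},
              "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]
            }]
          }'
        # Mind the retrieval/early-deletion penalties noted above before
        # pulling archived objects back out or deleting them.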
  • Email group forwarding

    Unsolved Support mail forwarding
    4
    1 Vote
    4 Posts
    33 Views
    james
    Hello @da5nsy. @da5nsy said in Email group forwarding: "Currently, we need to go into the mailbox and manually set up the email forwarding." "The mailbox" is the client, I assume, so either @sogo, @roundcube, @snappymail, or some other client like Thunderbird. If you are using @sogo, you could use the sogo-tool to update user preferences like forwarding with automation; a sketch follows below. But since you did not share which mail client you are using, this is just one example.
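    A heavily hedged sketch, assuming SOGo: sogo-tool can read and write per-user preferences, so forwarding could be scripted instead of configured per mailbox. The exact JSON shape of the Forward key is an assumption; dump an account you configured by hand first and copy its structure:

        # Inspect what the web UI stored for a user who already has
        # forwarding set up:
        cloudron exec --app sogo.example.com -- \
          sogo-tool user-preferences get settings alice
        # Then write the same structure for other users (JSON shape assumed):
        cloudron exec --app sogo.example.com -- \
          sogo-tool user-preferences set settings alice \
          Forward '{"enabled": 1, "forwardAddress": ["team@example.com"]}'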
  • FreeScout - Package Updates

    Pinned FreeScout
    262
    0 Votes
    262 Posts
    274k Views
    Package Updates
    [1.15.32] Update FreeScout to 1.8.207. Full Changelog
    • Allow fetching and sending emails via Google Workspace OAuth (#5241)
    • Fixed array_filter() in DB config (#5230)
    • Fixed checking user authorization when changing a conversation's customer (#5232)
    • Check user access to the mailbox in the empty_folder ajax action (Security)
    • Check customer visibility when merging customers (Security)
    • Add PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT to DB config (#5230)
    • Fixed parsing of an email part's Content-Type ending with a semicolon
    • Fixed "Undefined array key" error when sending a reply to a Phone conversation (#5236)
    • Sanitize the file name at the beginning of Helper::sanitizeUploadedFileName() (Security)