Cloudron makes it easy to run web apps like WordPress, Nextcloud, and GitLab on your server.
  • 1 Vote
    8 Posts
    82 Views
    girish
    I have fixed this for the upcoming release.
  • Cubby - Package Updates

    Pinned Cubby
    0 Votes
    105 Posts
    44k Views
    Package Updates
    [2.7.4] Update cubby to 2.7.4
    - Add grid view mode
    - Fix dropping multiple files in Chromium browsers
  • Firefly III - Package Updates

    Pinned Firefly III
    0 Votes
    131 Posts
    62k Views
    Package Updates
    [3.11.1] Update firefly-iii to 6.5.1 Full Changelog
    - MR 11808 (Add Thai baht to Currency Seeder) reported by @CinnamonPyro
    - Issue 11817 (500 error if the internet is inaccessible while checking for updates) reported by @NoiTheCat
    - Issue 11814 (Budget: error with CRON after switching user range view) reported by @fabienfitoussi
    - Issue 11750 (500 error when creating the first user with a USD balance; works after refresh) reported by @pinalgirkar
    - Security issue where any authenticated user with API access also had read access to the /api/v1/users endpoint. Authenticated users could see other users' email addresses, blocked status, and roles, even when not admin. No actual financial data was exposed, just the users' info itself. Extra checks were added to the /api/v1/users endpoints.
  • Traccar - Package Updates

    Pinned Traccar
    1 Vote
    49 Posts
    13k Views
    Package Updates
    [1.23.2] Update traccar to 6.12.2 Full Changelog
  • Change Detection - Package Updates

    Pinned Change Detection
    1 Vote
    143 Posts
    45k Views
    Package Updates
    [1.29.1] Update changedetection.io to 0.54.2 Full Changelog
    - Fixing change_datetime notification token (and adding test) by @dgtlmoon in #3922
    - Notification token {{diff}} can accept arguments like {{diff_added(lines=5, context=2)}} by @dgtlmoon in #3923
    - Processor extensible API for updating by @dgtlmoon in #3902
    - Update jsonpath-ng requirement from ~=1.7.0 to ~=1.8.0 by @dependabot[bot] in #3929
    - Bump the all group with 2 updates by @dependabot[bot] in #3931
    - Unresolvable hostnames should still be added; they are security-checked at fetch time by @dgtlmoon in #3933
  • 25 Votes
    63 Posts
    26k Views
    andreasdueren
    @jdaviescoates Would you be willing to run a custom app as a standalone application for recordings? I set something up for our org which I'm hosting at recording.cloud.tld.com. But the package hardcodes some opinionated things and secrets. For example, I set up recording and speaker-diarized transcription (which is super slow on CPU). I'm willing to open-source this, but the package requires more work for a general audience, which I'm only willing to put in if it is of value to other people.
  • LanguageTool: Download of n-grams ignores path set in env

    Solved LanguageTool
    0 Votes
    10 Posts
    3k Views
    I would suggest adding the following to the documentation here: https://docs.cloudron.io/packages/languagetool/
    If you want to use a volume to install and store n-grams, then after creating that volume [link to doc], mount it to the LanguageTool application under the "Storage" menu and set it as a "read and write" volume. Volumes are normally reachable at /mnt/volumes/<volume-id>, but once mounted into an application they are reachable at /media/<volume-id>. The command to install the n-grams is therefore:
    /app/pkg/install-ngrams.sh /media/<volume-id> en es
    and the path in /app/data/env has to be:
    NGRAMS_DATASET_PATH=/media/<volume-id>
    That may help users like me who want a more step-by-step guide.
  • 0 Votes
    8 Posts
    839 Views
    humptydumpty
    BTW, if I click on "Back to login" and log in with my username/password, it logs into Cloudron and shows me my dashboard! On a positive note, I now have a Cloudron dashboard desktop app.
  • Grist - Package Updates

    Pinned Grist
    0 Votes
    7 Posts
    277 Views
    Package Updates
    [0.5.1] Update grist-core to 1.7.11 Full Changelog
    - Optional authentication using getgrist.com accounts
    - Import from Airtable
    - New environment variables have been added to provide better control over which users are able to access your Grist instance. GRIST_PERSONAL_ORGS will disable personal organizations, while GRIST_ORG_CREATION_ANYONE will prevent any non-admins from creating new organizations.
    - Configurable email notifications for suggestions
    - Allow the maximum options limit on Forms to be configured (defaults to 30 options, configurable up to 1000)
    - Limit gVisor to 8 processes by default
    - Display references and reference lists in a friendlier fashion
    - Prevent conditional formatting changes from being displayed as suggestions
    - Add a confirmation dialog when a resource is being shared publicly
    - Hide the bell icon showing connection state when Grist is connected and functioning normally
  • 2 Votes
    7 Posts
    112 Views
    robi
    Also as off-grid, local-only devices doing home automation and security, for example.
  • The Matrix is coming

    Off-topic
    13
    2 Votes
    13 Posts
    100 Views
    jdaviescoates
    @timconsidine what would be the right things to do from your perspective? The recently published Warm Homes Plan and Local Power Plan are both pretty decent IMHO (although there's still LOTS of room for improvement and not enough detail about implementation yet). IMHO there is much, MUCH more political corruption related to keeping us all hooked on fossil fuels than there likely ever will be with renewables (which by their very nature are inherently more distributed).
  • Notification being unhelpful

    Solved Support dns porkbun
    0 Votes
    4 Posts
    51 Views
    @james said in Notification being unhelpful: "Hello @ekevu123 Where did you see this log message?" In the main notifications of the server instance. @joseph said in Notification being unhelpful: "This is coming from porkbun - https://status.porkbun.com/ . They are having some service issues." Oh, thank you, that wasn't clear!
  • Czech Translation for Cloudron Now 100% Complete 🇨🇿

    Discuss
    5 Votes
    5 Posts
    43 Views
    archos
    @nebulon said in Czech Translation for Cloudron Now 100% Complete: "This is great! We will ship the next Cloudron version then with Czech (internally 9.1 is released for new installs already, so it will be added to the next patch release)." That's great news, thank you!
  • 0 Votes
    17 Posts
    126 Views
    girish
    Right, that postinstall is totally misleading. I have removed it and am making a new package now. Thanks for reporting, @scooke @jdaviescoates
  • Cubby & Collabora integration

    Locked Cubby
    18
    0 Votes
    18 Posts
    2k Views
    nebulon
    This is fixed with the latest package version. I will lock this thread as it seems to collect different issues. Please open a new thread in the Cubby category for new Collabora issues with Cubby.
  • Ente for Cloudron, help with testing wanted

    App Packaging & Development
    24
    9 Votes
    24 Posts
    2k Views
    andreasdueren
    Updated Ente package: andreasdueren/ente-cloudron:0.6.3
    Fix: startup.log no longer grows unbounded. The startup log was being appended to across every restart, growing to 4-5 GB and causing very slow backups. It is now truncated on each startup; only the current session's logs are kept. Backups will be significantly faster from the next restart onward.
    Upstream changes (ente-io/ente):
    • 792f28c: README: add Locker Obtainium and GitHub release links
    • 9810269: [mob][locker] Prevent duplicate default collections during signup
    • 26549fc: [mob][photos] VectorDB write index fix
    • 686706b: Toggle to let ML run continuously
    • 922784b: Internal toggle to let ML run continuously without interruption
    Upstream: https://github.com/ente-io/ente/commit/922784b
  • BookStack - Package Updates

    Pinned BookStack
    0 Votes
    150 Posts
    118k Views
    Package Updates
    [1.46.7] Update BookStack to 25.12.8 Full Changelog
    - Fixed content filtering removing the link target attribute, which would impact "New Window" links. (#6034)
    - Fixed content filtering to not remove user references in comments.
    - Updated PHP package versions.
  • Collabora Online - Package Updates

    Pinned Collabora Online (CODE)
    0 Votes
    160 Posts
    89k Views
    Package Updates
    [1.46.0] Update code to 25.04.9.2.1
  • Struggling to Replace MinIO - Advice Welcome!

    Discuss
    1 Vote
    7 Posts
    152 Views
    jadudm
    Depending on your appetite for loss, I would consider backups-in-depth. That is, one backup site is not a backup. Use rsync-based backup over SSHFS to Hetzner or similar. You will want to select "use hardlinks" and, if you want it, encryption. The use of hardlinks is, essentially, your de-duplication. (See below.)

    For a second layer of depth, I would consider a (daily? weekly? monthly?) backup of your primary backup site to a secondary. This could be a sync to AWS S3, for example. Note that any S3-based backup (B2, Cloudflare ObjectSomething, etc.) will have both a storage cost and an API cost. If you are dealing with millions of small files in your backups, the API costs will become real, because dedupe requires checking each object and then possibly transferring it (multiple PUT/GET requests per file). S3 has the ability to automatically keep multiple versions of a file; you could use this to have an in-place rotation/update of files. If you are doing an S3 backup, you can use lifecycle rules to automatically move your S3 content to Glacier. This is much cheaper than "hot" S3 storage, but you pay a penalty if you download/delete too early or too often.

    As a third, cheap-ish option, get a 2- or 4-bay NAS that can run TrueNAS and put a pair of 8-12TB HDDs in it. Configure the disks as a ZFS mirrored pair. Run a cron job once per day/week to pull down the contents of the Hetzner box. (Your cron will want to, again, use rsync with hardlinks.) You now have a local machine mirroring your hot backups. It is arguably more expensive than some other options (~600 USD up front), but you don't have any "we might run out of space" issues. And because you're using it to pull, you don't have any weird networking problems: just SCP the data down. (Or rsync it down over SSH.)

    Whatever you are doing, consider targeting two different destinations at two different times (per day/alternating/etc.). Or, consider having some combination of backups that gives you multiple copies at multiple sites. That could be Hetzner in two regions with backups run on alternating days, or it could be that you back up to a storage box and pull down a clone every day to a local NAS, or ... or ...

    Ultimately, your 150GB is small. If you're increasing by a few GB per week, you're likely to have 1TB/year. Not knowing your company's finances, this is generally considered a small amount of data. Trying to optimize for cost immediately is possibly less important than just getting the backups somewhere. Other strategies could involve backing up to the NAS locally first and then using a cron job to borg or rsync to a remote host (possibly more annoying to set up), etc. But you might have more dedupe options then. (borg has dedupe built in, I think, but...) I have a suspicion that your desire to use object storage might be a red herring. But, again, I don't know your constraints/budget/needs/concerns.

    Deduplication: If you use rsync with hardlinks, then each daily backup will automatically dedupe unchanged files. A hardlink is a pointer to a file. So, if you upload super_ai_outputs_day_1.md to your storage on Monday, and it remains unchanged for the rest of time, then each subsequent day's backup is going to be a hardlink to that file. It will, for all intents and purposes, take up zero additional disk space. So if you are backing up large numbers of small-to-medium-sized files that do not change, SSHFS/rsync with hardlinks is going to naturally dedupe your unchanging old data.

    This will not do binary deduplication across different files. So, if you're looking for a backup solution that would (say) identify that two 1GB files have an identical middle 500MB, and somehow dedupe that... you need more sophisticated tools and strategies. Rsync/hardlinks just makes sure that the same file, backed up every day, does not take (# days * size) of space. It just takes the original size of the file plus a directory entry in the FS for each link.

    Note, though, that if you sync a snapshot of your hardlinked backups to an object store, every file may take its full size for every day. I'm possibly wrong on that, but I'm not confident that most tools would know what to do with those hardlinks when copying to an object store. I think you'd end up multiplying your storage usage significantly, because your backup tool will have to create a copy of each file in the object store. (Most object stores do not have a notion of symlinks/hardlinks.) An experiment with a subset of the data, or even a few files, will tell you the answer to that question. If you have other questions, you can ask here, or DM me.
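    The hardlink dedupe described above can be sketched in a few lines. This is a minimal, self-contained Python illustration of the mechanism, not any particular backup tool: the directory layout and file name are made up, and `os.link` stands in for what an rsync hardlink rotation (e.g. `rsync --link-dest`) does for each unchanged file.

    ```python
    import os
    import tempfile

    # Two "daily" backup directories that share one on-disk copy of an
    # unchanged file. The second directory entry costs no extra data blocks.
    base = tempfile.mkdtemp()
    day1 = os.path.join(base, "backup-day1")
    day2 = os.path.join(base, "backup-day2")
    os.makedirs(day1)
    os.makedirs(day2)

    path1 = os.path.join(day1, "report.md")
    with open(path1, "w") as f:
        f.write("unchanged contents\n")

    # What rsync with hardlinks does for every unchanged file:
    # create a hardlink instead of copying the data.
    path2 = os.path.join(day2, "report.md")
    os.link(path1, path2)

    s1, s2 = os.stat(path1), os.stat(path2)
    print(s1.st_ino == s2.st_ino)  # True: both names point at the same inode
    print(s1.st_nlink)             # 2: two directory entries, one copy of the data
    ```

    Deleting one of the two names leaves the other intact; the data blocks are freed only when the last link is removed, which is why rotating out old daily snapshots is safe.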
  • Email group forwarding

    Unsolved Support mail forwarding
    4
    1 Votes
    4 Posts
    39 Views
    james
    Hello @da5nsy
    @da5nsy said in Email group forwarding: "Currently, we need to go into the mailbox and manually set up the email forwarding." "The mailbox" is the client, I assume, so either @sogo, @roundcube, @snappymail, or some other client like Thunderbird. If you are using @sogo, you could use sogo-tool to update user preferences like forwarding with automation. But since you did not share which mail client you are using, this is just one example.