  • Keila - Package Updates

    Pinned Keila
    0 Votes
    4 Posts
    269 Views
    Package Updates
    [0.3.0] Update keila to 0.18.0
    Full Changelog
    - Added public archive links for campaigns
    - Added ability to review sent campaigns
    - Added $empty operator to Contact query language and segment editor
    - Added segment filtering based on campaign interactions
    - Added text color tool to Block Editor
    - Added social media icons block to Block Editor (see #444)
    - success_url in contact forms is now processed with Liquid, allowing personalized redirects
    - Added support for id_type in API Contact deletion endpoint
    - Preview emails are now prefixed with "[Preview]"
    - Added Spanish translation (thanks @PedroMartpico)
  • 9 Votes
    24 Posts
    5k Views
    timconsidine
    @scooke I didn't see any minimum requirements on the Garage docs site. My instance is idling away at 20 MiB, but it is not under load, and the app is using the default 256 MB max memory limit. Disk space: well, that all depends on your storage needs, obviously. It's a very nice app, but I don't think it is a resource hog. Check out these links (see lower down the page):
    https://medium.com/@kryukz/garage-standalone-your-lightweight-s3-compatible-object-storage-journey-5073bd51b566
    https://portalzine.de/day-38-garage-object-storage-the-self-hosted-s3-alternative-7-days-of-docker/
  • 4 Votes
    4 Posts
    61 Views
    girish
    Good catch. It seems nc is not part of the base image (at least not explicitly). I have replaced that code with bash logic.
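    The post doesn't show the replacement, but a common way to check a TCP port from pure bash (no nc) is the shell's built-in /dev/tcp pseudo-device; a minimal sketch, where the host and port are placeholders rather than the package's actual values:

        #!/bin/bash
        # Wait for a TCP service without nc, using bash's /dev/tcp
        # pseudo-device. host/port are placeholder values.
        host="127.0.0.1"
        port=3306

        until timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; do
            echo "waiting for ${host}:${port}..."
            sleep 1
        done
        echo "${host}:${port} is reachable"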
  • Searx - Package Updates

    Pinned SearXNG
    0 Votes
    97 Posts
    27k Views
    Package Updates
    [2.72.0] Update searxng to 3d88876
  • Tiny Tiny RSS - Package Updates

    Pinned Tiny Tiny RSS
    0 Votes
    95 Posts
    27k Views
    Package Updates
    [2.72.0] Update tt-rss to b69713c
  • LanguageTool - Package Updates

    Pinned LanguageTool
    1 Vote
    28 Posts
    5k Views
    Package Updates
    [1.21.0] Update languagetool to 09f5b9a
  • Actual - Package Updates

    Pinned Actual
    0 Votes
    39 Posts
    8k Views
    Package Updates
    [1.21.0] Update actual to 26.1.0
    Full Changelog
    Breaking changes:
    - Removed support for legacy CSV import format
    Features:
    - Added new budgeting overview page
    - Improved sync performance with cloud accounts
    Bug fixes:
    - Fixed issue with transaction duplication on import
    - Resolved crash when opening settings on macOS
    - Corrected display issue in dark mode
    - Fixed bug causing incorrect currency conversion rates
  • Dawarich - Package Updates

    Pinned Dawarich
    0 Votes
    14 Posts
    457 Views
    Package Updates
    [1.4.1] Update dawarich to 0.37.2
    Full Changelog
    - Months are now correctly ordered (Jan-Dec) in the year-end digest chart instead of being sorted alphabetically.
    - Time spent in a country and city is now calculated correctly for the year-end digest email. #2104
    - Updated Trix to fix an XSS vulnerability. #2102
    - Map v2 UI no longer blocks when the Immich/Photoprism integration has a bad URL or is unreachable. Added a 10-second timeout to photo API requests and improved error handling to prevent UI freezing during initial load. #2085
    - In Map v2 settings, you can now enable the map to be rendered as a globe.
  • How to stop "TURN" service

    Support turn
    0 Votes
    13 Posts
    3k Views
    girish
    @crazybrad I think that would be ideal, yeah. I can look into it. Can you make a feature request thread?
  • 0 Votes
    1 Post
    6 Views
    No one has replied
  • Focus on Business Apps

    Discuss
    12 Votes
    74 Posts
    15k Views
    scooke
    @dualoswinwiz I find it difficult to imagine that you as an Enterprise have found and evaluated ONLY Cloudron for your Enterprise needs. Other than the obvious in-house solution route (which involves yearly salaries far exceeding the price of Cloudron), or the vendor route (which also far exceeds Cloudron's pricing, but includes lock-in), please share the working, functional, productive alternative(s) you've found and evaluated; in other words, what your plan is or will be. Thank you. I could share a similar evaluation of how Microsoft just doesn't meet the needs of X communities (not Twitter), but that doesn't mean it's not useful for others.
  • Garage packaging status, next steps

    App Packaging & Development
    1 Vote
    13 Posts
    263 Views
    timconsidine
    @robi said in Garage packaging status, next steps: "More options is better than less IMHO." In that spirit, I made my own package, principally so I could learn about Garage: https://forum.cloudron.io/post/117911
  • Vaultwarden fails to start after update – DB migration error (SSO)

    Solved Vaultwarden
    1 Vote
    36 Posts
    632 Views
    ChristopherMag
    I ran the command sed -i 's/\r$//g' /app/data/fix_db.sh to fix the line-ending characters, then ran bash /app/data/fix_db.sh again, and it ran as expected. I disabled recovery mode and confirmed things are back to working as expected.
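    A quick way to confirm the CRLF problem before (and after) applying that fix; a minimal sketch assuming the same path as above, using only standard tools in the app container:

        #!/bin/bash
        # Detect and strip Windows-style CRLF line endings from a script.
        script=/app/data/fix_db.sh

        # `file` reports "CRLF line terminators" for DOS-style files.
        file "$script"

        # Keep a backup, then drop the trailing \r from every line.
        cp "$script" "$script.bak"
        sed -i 's/\r$//' "$script"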
  • 3 Votes
    7 Posts
    198 Views
    jadudm
    Everything I'm about to say is independent of your actual needs and what you want to achieve. Are you hosting for others? Do you need to be able to restore within hours? Then some of what I say is not for you. If you're instead looking for some "I messed up, everything is gone, and I need a way to recover, even if it takes me a few days," then some of what I say will be more applicable.
    I have generally built my backups in tiers, for redundancy and disaster recovery:
    - I would back up the first tier to something close and "hot." That is, use rsync or a direct mount to your instance's local disk. I would consider an SSHFS-mounted disk a second option, if #1 does not have enough space.
    - I would have a cron that then backs up your backup. If you back up 3x/day using Cloudron (using hardlinks, to preserve space), I would then do a daily copy to "somewhere else." That could be via restic/duplicati/etc. as you prefer (see the sketch after this post).
    - I would do a weekly dump to S3 (again, via cron if possible), and consider doing that as a single large file (if feasible), or multiple smaller files. Those could be straight tar files, or tar.gz if you think the data is compressible. Set up a lifecycle rule to move it quickly (one day?) to Glacier if you're thinking about cost. At the end of the month, keep only one monthly in Glacier. I'm not sure what the deletion costs would be if you delete that quickly, so some thought may need to be given here.
    That's perhaps a lot, but it depends on your data and what you're doing. You could also go the other way: if you think your cloud backup costs will be too high, you could do the following:
    - Pick up a $300 NAS and a pair of 8-16TB hard drives
    - Install TrueNAS on it, and put the disks in a ZFS mirror
    - Set up a cron on the NAS to pull down your Cloudron backups on a periodic (daily/weekly) basis. restic or similar will be your friend here.
    That's the... $800 or so solution, but you would weigh that cost against how much you're going to be paying in cloud storage. (That is, if you decide you're going to be paying $200+/year for backups, perhaps the NAS is going to start to look attractive.) The incremental backups should get smaller once you get the initial pull done (in terms of size to pull down).
    A variation of the NAS approach is where you buy one external drive, run your backups locally, and pray the drive doesn't die underneath you, or worse, fail when you have to do a restore. I would personally chuck a single drive out the window, but some people love to gamble. Recovery from the offline backup will be annoying and painful: you'd have to upload it, and then configure your restore to point at it. However, it would be your "last ditch" recovery approach.
    This comes back to my opening point: your backups are dictated, in no small part, by your budget and needs. If you have money to spare, use a direct- or SSHFS-mounted disk, and just back up to it. If you are looking for some savings, you can price out S3-based storage (B2 tends to be cheapest, I think, but don't forget to estimate how many operations your backup will need; those API calls can get expensive if you have enough small objects in your backup). Moving to Glacier is possible if you use AWS, and it is significantly cheaper per TB. Having at least one disconnected backup (and a sequence of them) matters in the event of things like ransomware-style attacks (if that is a threat vector for you). Ultimately, each layer adds cost, complexity, and time to data recovery.
    Finally, remember: your backups are only as good as your recovery procedures and testing. If you never test your backups, you might discover you did something wrong all along and have been wasting time from the beginning. I find Cloudron's backups to be remarkably robust, and was surprised (pleasantly!) by a recent restore. But if you mangle backups via cron, etc., then you're just spending a lot of money moving zeros and ones around...
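    A minimal sketch of the "cron that backs up your backup" tier described above, assuming Cloudron writes its backups to the default /var/backups and that a restic repository has already been initialized; the repository URL and password file are placeholders:

        #!/bin/bash
        # /etc/cron.daily/offsite-backup -- copy Cloudron's local backups
        # to an offsite restic repository. Assumes `restic init` was run
        # once against the repository; paths and URL below are examples.
        set -euo pipefail

        export RESTIC_REPOSITORY="sftp:backup@nas.example.com:/srv/restic"
        export RESTIC_PASSWORD_FILE=/root/.restic-password

        # Snapshot the Cloudron backup directory (restic deduplicates).
        restic backup /var/backups

        # Keep a bounded history: 7 daily, 4 weekly, 6 monthly snapshots.
        restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune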
  • Pydio free file sharing

    App Wishlist
    11 Votes
    41 Posts
    11k Views
    timconsidine
    I have found that Pydio Cells needs a decent amount of memory. 1 GB might be a viable minimum. I am experimenting with 2 GB and will report back on how that goes.
  • Wildcard Alias added, but no https

    App Packaging & Development
    0 Votes
    9 Posts
    116 Views
    I see, I think. I'll try to work the API example into the start.sh script that runs when the app is started.
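    Purely as an illustration of that idea (the endpoint path, token variable, and JSON body below are hypothetical placeholders, not Cloudron's documented API), a start.sh can make such an API call once before launching the app:

        #!/bin/bash
        # start.sh sketch: perform a one-time HTTP API call at startup.
        # NOTE: /api/v1/example, API_TOKEN, and the JSON payload are
        # hypothetical placeholders; substitute the real API call here.
        set -eu

        curl -fsS -X POST \
            -H "Authorization: Bearer ${API_TOKEN:?API_TOKEN must be set}" \
            -H "Content-Type: application/json" \
            -d '{"alias":"*.example.com"}' \
            "https://my.example.com/api/v1/example" \
            || echo "API call failed; continuing startup"

        # ...then hand off to the app's real entrypoint (placeholder path).
        exec /app/code/server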
  • 0 Votes
    14 Posts
    2k Views
    @james Thank you, James, for the information.
  • VPS with 8GB RAM, only 4 used

    Solved Support
    0 Votes
    7 Posts
    71 Views
    sebastienserre
    Thank you for all your explanations.
  • cloudron cli uninstall not working?

    App Packaging & Development
    0 Votes
    5 Posts
    43 Views
    timconsidine
    @James I used sudo because the system told me I don't have permissions. Both which and sudo which tell me I have /opt/homebrew/bin/cloudron. I updated and upgraded brew, but it's still 5.14.7, and there are no brew doctor errors. I tried to remove the brew version of cloudron, but it is rejecting that; I will investigate. Thank you for the pointers.
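    For reference, the Cloudron CLI is distributed as an npm package, so one way to replace a stale Homebrew copy is roughly the following (the brew formula name is an assumption):

        #!/bin/bash
        # Swap a Homebrew-installed Cloudron CLI for the npm package.
        # The formula name "cloudron" is an assumption here.
        brew uninstall cloudron || true   # may fail if it came from a tap

        npm install -g cloudron           # the documented install method

        hash -r                           # reset the shell's command cache
        which cloudron && cloudron --version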
  • Cloudron v9: huge disk I/O, is this normal/safe/needed?

    Unsolved Support graphs
    2 Votes
    25 Posts
    818 Views
    imc67I
    I enabled this en within seconds the log file was enormous, I asked ChatGPT to analyse it and here is it's observations: (too technical for me): Some observations after briefly enabling the MySQL general log (Cloudron v9) I enabled the MySQL general log only for a short time because of disk I/O concerns, but even within a few minutes a clear pattern showed up. What I’m seeing: A very high number of INSERT INTO session (...) and INSERT ... ON DUPLICATE KEY UPDATE These happen continuously and come from 172.18.0.1 As far as I understand, this IP is the Docker bridge gateway in Cloudron, so it likely represents multiple apps I temporarily disabled Matomo to rule that out, but disk I/O and session-related writes did not noticeably decrease, so it does not seem to be the main contributor. From the log it looks like: Multiple applications are storing sessions in MySQL Session rows are updated on almost every request This can generate a lot of InnoDB redo log and disk I/O, even with low traffic Nothing looks obviously broken, but I’m trying to understand whether this level of session write activity is: expected behavior in Cloudron v9 something that can be tuned or configured or if there are recommended best practices (e.g. Redis for sessions) Any guidance on how Cloudron expects apps to handle sessions, or how to reduce unnecessary MySQL write I/O, would be much appreciated. Thanks for looking into this.