Take no offense, I'm not assuming anything lol just want to hear more about your own experience. Thanks.
It works for me. Their support a couple of years ago wasn't as good, but has improved a bunch since then.
Hey thanks for the review folk 🙂
That's what I thought, since the complaints I'd found were relatively old, and I also figured a folk in the Cloudron community wouldn't mislead us, would they?
Truly, prices are hard to beat.
@timconsidine Yes, as mentioned by @humptydumpty. Maybe start with: what tools do I need to have ready (and in some cases get my feet wet with) before starting, since they'll be in use during the whole process?
Cloudron CLI of course, what else?
A docker registry?
A GIT server?
A VPS with docker installed?
A Docker desktop version? (as well as the previous?)
How are they connected?
Maybe also an example of how one is (or should be) set up?
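To make the question concrete, here's a rough sketch of what I understand the basic packaging loop to be with the Cloudron CLI (the domain, registry setup, and app location below are placeholders/assumptions on my part, not verified steps):

```shell
# Install the Cloudron CLI (distributed via npm)
npm install -g cloudron

# Log in to your own Cloudron's dashboard domain
cloudron login my.example.com

# From a directory containing a CloudronManifest.json and Dockerfile:
# build the app image and push it to your configured Docker registry
cloudron build

# Install (or later update) the app on your Cloudron for testing
cloudron install --location testapp.example.com
```

If someone could confirm or correct this flow, and explain where a self-hosted git server or Docker Desktop fit in (if at all), that would answer most of my questions.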
The changelog is not comprehensive; it doesn't have a 1:1 relation with all changes, since changes also go into the UI. The only way to know about small changes is the git log.
Totally fair. I was surprised to see that one missing since there's a ton of smaller changes still in the changelog in previous versions. I always thought the changelog was quite comprehensive before, but maybe it's geared more high-level now.
The API style in Cloudron you are referring to is commonly called a REST API, although Cloudron does not strictly follow REST patterns. It can be used with a multitude of clients. Some are libraries to be used within other programs, and these exist for virtually all programming languages by now, I would guess. Other clients are browsers, or command-line tools like curl.
I think it might help you a lot to read up about REST APIs which should clarify things for you.
Just a simple example: without an access token (so using just the public API), run the following in a terminal with curl installed:
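Something along these lines (the domain is a placeholder, and the exact endpoint path is an assumption on my part — check the API docs for your Cloudron version):

```shell
# Query a public (unauthenticated) endpoint of the Cloudron REST API;
# replace my.example.com with your own Cloudron's dashboard domain
curl https://my.example.com/api/v1/cloudron/status
```

The response is JSON, which is typical of this kind of API and easy to consume from any language.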
will not allow for adding essentially multiple DNS provider for the same domain
Just to be clear, I'm not looking to store multiple DNS credentials. If I switch a domain in Cloudron from DigitalOcean to Vultr, I'd expect Cloudron to clear out the old credentials/API keys, so it definitely doesn't need to retain two or more at any given time for a single domain. Hopefully that clarifies that part. Definitely no need to go down the rabbit hole of refactoring, haha.
If there were an option to not resolve DNS records on the Cloudron side, one could change the DNS backend, hit re-sync DNS in the Cloudron dashboard, wait for the new nameservers to have all records in sync, and then switch the nameservers for the domain itself.
That's basically what's being asked here... to remove the requirement to double-check the nameservers in the first place before being able to save the API keys on the domain. I understand why Cloudron does it: to help avoid DNS propagation issues. However, at the end of the day, Cloudron works as a script to populate the DNS records in any provider we give it access to, so it should not be restricted to only doing so if the nameservers are configured correctly.

The current limitation prevents admins from having Cloudron set up the DNS records in the new location first, which of course is best practice before switching the nameservers at the domain level; you don't want to be doing it afterwards. 😉

If the Cloudron team feels it's still necessary to check for nameserver pointers, a good compromise would be to warn but allow a user to move past it, so Cloudron can freely set up the DNS records wherever we provide access, regardless of nameservers.
Does the above make sense? I can clarify if needed. Basically just hoping that limitation can be removed in the product as it's impeding what I'd think are some important tasks to be able to do before changing nameservers on a domain.
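For what it's worth, once the records exist at the new provider, you can verify them from the outside before flipping the nameservers by querying the new provider's nameservers directly (the nameserver host and domain below are placeholders — substitute your new provider's):

```shell
# Query the new provider's nameserver directly, bypassing the
# currently-delegated nameservers, to confirm records are in place
dig @ns1.vultr.com example.com A +short
dig @ns1.vultr.com example.com MX +short
```

That check is exactly what becomes impossible to prepare for when Cloudron refuses to write records until the delegation has already moved.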
I would be interested to know if someone does this outside of Cloudron for their existing business/personal use. Seems like a lot of work (and probably just easier to switch out your NS provider if they are flaky). We use route53 for cloudron.io and AFAIK it has never gone down.
@necrevistonnezr Great questions! I'm still pondering this myself, but got delayed with some other projects.
I don't have an HA setup yet, but I'd assume the easiest way to achieve it may be to duplicate the server (i.e. restore a Cloudron backup on a new server), then use rsync to keep them in sync (the boxdata directory, for example), and then use DNS for failover: give the second server's MX record a higher priority value so it's last in the selection, and thus only used if the first MX server isn't responding.
Just my initial two cents anyways, but haven't thought it out thoroughly yet. Will probably give it more thought once the Cloudron multi-server management feature is in-place as it may make things a little bit easier.
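A very rough sketch of the two pieces I have in mind (hostnames and paths are assumptions, and this glosses over consistency concerns entirely):

```shell
# Keep the standby's boxdata in sync with the primary
# (run periodically, e.g. from cron on the primary)
rsync -aHAX --delete /home/yellowtent/boxdata/ standby-host:/home/yellowtent/boxdata/

# DNS zone fragment for MX failover: the lower preference number
# is preferred, so the standby (20) is only tried if the primary fails
#   example.com.  IN MX 10 my.example.com.
#   example.com.  IN MX 20 standby.example.com.
```

Again, just a sketch; a real setup would need to deal with databases and app state, not only files.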
I tested FileRun in a LAMP app (after reading @scooke's reply here) and it does exactly what I need, without all the added bloat and unpolished UI of Nextcloud.
✔ It sorts pictures by date taken
✔ Has a better UI than Nextcloud (personal opinion)
✔ It is a lot faster in previewing images
✔ Works with the Nextcloud desktop and Android clients (or any other app that supports WebDAV)
✔ Can organize photos in albums and collections (I am not really interested in this, but it's a nice to have)
I hope that it will officially come to Cloudron at some point.
@BrutalBirdie Sorry have been busy and thanks for documenting my steps 👍 👍
The Docker part is done in /etc/crowdsec/acquis.yml
I'm not entirely convinced my regex works appropriately, or even that this is entirely needed in the Cloudron context (I have one Cloudron with it and one without, and I'm not seeing much difference).
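For reference, the Docker acquisition section I mean looks roughly like this in /etc/crowdsec/acquis.yml (the catch-all regex is exactly the part I'm unsure about — adjust to match only the containers you care about):

```yaml
# Read logs from running containers via the Docker socket;
# container_name_regexp selects which containers to acquire from
source: docker
container_name_regexp:
  - ".*"
labels:
  type: docker
```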
@3246 Are you backing up /snapshot/ or the parent directory?
/snapshot/ is sufficient for a daily backup as it holds the current status of all files - versioning etc. is done by restic.
Also, how did you calculate these directory sizes?
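(In case it helps: one common way to get these numbers is `du`; the path below is just a placeholder for wherever your backups live.)

```shell
# Summarize the size of each backup subdirectory, human-readable,
# sorted smallest to largest; override SNAPSHOT_DIR for your setup
SNAPSHOT_DIR="${SNAPSHOT_DIR:-/var/backups/snapshot}"
du -sh "$SNAPSHOT_DIR"/* | sort -h
```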
The top values are the ones usually needed for best performance by certain page builders and important plugins in WordPress. Plus, I have a few sites that host videos / other large files, so I keep the sizes somewhat large too and tweak as needed for each customer website.
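To give an idea of the kind of directives I mean (the specific values here are illustrative, not a recommendation — tune them per site and workload):

```ini
; php.ini overrides commonly raised for heavy WordPress sites
upload_max_filesize = 256M
post_max_size = 256M
memory_limit = 512M
max_execution_time = 300
max_input_vars = 5000
```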
Also, in the wp-config.php file, I set the following property, which makes the WordPress memory limit match that of the php.ini file rather than keeping them separate. In some cases it may be wise to keep them separate; it's really a case-by-case basis. I just find I often keep them the same, so doing it this way means I only have to set it in one spot and removes the risk of them drifting apart unintentionally...
define( 'WP_MEMORY_LIMIT', ini_get( 'memory_limit' ) ); // sets memory limit to match php.ini
Then for the actual application itself, I currently set a 1 GB memory limit on each one.