Since I opened that issue back then, I want to mention that in the meantime I have developed my own backup solution, which I think is better than Borg. I am now 90% done with version 2 of it and will definitely use it to back up my Cloudron instance.
As mentioned before, I use the built-in daily unencrypted backup to the local filesystem using the rsync backend and the "3 daily, 4 weekly, 6 monthly" schedule.
The "snapshot" folder of those daily backups is then backed up, encrypted, via restic to OneDrive (which is included in my O365 subscription) on the same schedule as above.
This gives me mountable(!) remote point-in-time backups.
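A setup like this can be scripted. The sketch below is an assumption about how such a pipeline might look, not the author's actual script: it assumes an rclone remote named `onedrive` has already been configured, and that the Cloudron rsync backend writes to `/var/backups/snapshot` (both names are placeholders for your own paths).

```shell
#!/bin/sh
# Hypothetical paths and remote names -- adjust to your own setup.
SNAPSHOT_DIR=/var/backups/snapshot                     # Cloudron rsync backend output
export RESTIC_REPOSITORY=rclone:onedrive:cloudron-backups
export RESTIC_PASSWORD_FILE=/root/.restic-password

# Encrypted, deduplicated upload of the latest local snapshot
restic backup "$SNAPSHOT_DIR"

# Apply the same retention schedule as the local backups
restic forget --prune --keep-daily 3 --keep-weekly 4 --keep-monthly 6
```

The repository can later be browsed as a filesystem with `restic mount /mnt/restic`, which is what makes the remote point-in-time backups mountable.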
Here's someone's account of the dos and don'ts of ZFS.
Creating and managing single/dual drive ZFS filesystem on Linux
Do NOT use ZFS in these cases:
you want to use ZFS on a single external USB drive (worst case, data corruption will happen on an unclean dismount, and you would have to recreate the whole dataset)
you want to use ZFS on a single drive and you do not have any external drive for backup purposes (why? when the zpool is not cleanly dismounted/exported, some data can get corrupted permanently, and ZFS will have no mirror drive from which it can automatically recover valid data, unless you add a second drive of the same type and size for parity/redundancy)
you do not have hours of your time to learn the basics of ZFS management (this page does cover the most basic tasks, though)
The majority of the following commands will work on all Linux distributions, though the first part of the tutorial uses Arch/Manjaro Linux packages and package manager. On Ubuntu I was able to set up ZFS using "sudo apt install zfsutils-linux". If you have another distribution, you need to find out whether it has packages for ZFS (and the kernel modules).
Upgrade and update the system, then reboot (in case a new kernel was installed since the last reboot)
sudo pacman -S linux-latest-zfs
sudo /sbin/modprobe zfs
if modprobe does not work, try "sudo pacman -R linux-latest-zfs" and try method B:
find and install the zfs packages matching your kernel:
sudo pacman -Ss zfs|grep -i linux
sudo pacman -S linux123-zfs
pamac install zfs-dkms
enable ZFS support in the kernel (it was not enabled in 5.8.16-2-MANJARO after a reboot, but once enabled by the following command it persists)
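On systemd-based distributions, one common way to load the module and make it persist across reboots is a modules-load.d entry; this is a sketch of that general approach, not necessarily the exact command the author used:

```shell
# Load the zfs kernel module now
sudo /sbin/modprobe zfs

# Make it load automatically on every boot (systemd-based distros)
echo zfs | sudo tee /etc/modules-load.d/zfs.conf
```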
sudo smartctl -a /dev/sdb|grep -i "sector size"
Sector Sizes: 512 bytes logical, 4096 bytes physical
(smartctl is in package "smartmontools")
It was suggested here https://forum.proxmox.com/threads/how-can-i-set-the-correct-ashift-on-zfs.58242/post-268384 to pass the parameter ashift=12 to the following "zpool create" command for drives with a 4096-byte physical sector size, and ashift=13 for 8K physical sectors. If ashift is not defined, ZFS autodetects it, and I do not know how reliable that autodetection is.
attempt to create a pool named "poolname" on an HDD of choice (use a disk that stores no important data, or it will be lost; also unmount the drive first, for example using gparted)
or the same command, except the pool is created across 2 physical drives (of the same size, otherwise the pool will not use all the space on the bigger drive?) where one is used for redundancy (recommended, to reduce the risk of irreversible data corruption and to roughly double read performance)
B) sudo zpool create -o ashift=12 -o feature@async_destroy=enabled -o feature@empty_bpobj=enabled -o feature@lz4_compress=enabled poolname mirror /dev/disk/by-id/DRIVE1-ID-HERE /dev/disk/by-id/DRIVE2-ID-HERE (find the drive IDs with "ls -l /dev/disk/by-id/")
(for 4 drives in two mirrors, it should be: zpool create poolname mirror drive1id drive2id mirror drive3id drive4id)
Regarding the recordsize parameter: it was suggested in places like https://blog.programster.org/zfs-record-size and https://jrs-s.net/2019/04/03/on-zfs-recordsize/ and https://www.reddit.com/r/zfs/comments/8l20f5/zfs_record_size_is_smaller_really_better/ that for a drive holding large media files it is better to increase the record size from 128K to 512K, so I did that for my multimedia drive. The manual page for zfs linked above says, however, that this value is only a suggestion and that ZFS automatically adjusts sizes per usage pattern. The unofficial articles also say the record size should be similar to the size of the typical storage operation within the dataset, which may not match the file size itself. "zpool iostat -r" shows the distribution/counts of operation sizes; if the zpool is a single drive, you can maybe also use "sudo iostat -axh 3 /dev/zpooldrivename" and check "rareq-sz" (read average request size).
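Put concretely, the recordsize change described above might look like this (pool and dataset names are placeholders; note the new value only applies to files written after the change):

```shell
# Set a larger record size for a dataset holding large media files
sudo zfs set recordsize=512K poolname/multimedia

# Verify the current value
zfs get recordsize poolname/multimedia

# Inspect the actual I/O request-size distribution to sanity-check the choice
zpool iostat -r poolname
```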
gracefully unmount the pools (I think this is necessary, or the pool will be marked as suspended and a machine restart will be needed):
sudo zpool export -a
mount the pools:
sudo zpool import -a
(if it fails, you have to mount manually, list disk names (ls -l /dev/disk/by-id/), then: sudo zpool import -a -d /dev/disk/by-id/yourdisk1name-part1 -d /dev/disk/by-id/yourdisk2name-part1 )
If a pool is encrypted, an additional command is needed (the -l parameter prompts for the passphrase; without it, ZFS complains "encryption key not loaded"):
sudo zfs mount -a -l
attach a new drive (if the existing one is a non-redundant single drive, the result will be a mirror (something like RAID1, with enhanced reads, 1-drive fault tolerance, and data self-healing); if the existing drive is part of a mirror, it becomes a three-way mirror):
zpool attach poolname existingdrive newdrive
destroy (delete) all snapshots (no prompt):
sudo zfs list -H -o name -t snapshot -r POOLNAME | xargs -n1 sudo zfs destroy
destroy (delete) dataset (no prompt):
sudo zfs destroy poolname/enc
destroy (delete) whole pool (no prompt):
sudo zpool destroy poolname
If you are OK with HDD activity increasing at times when regular activity is low or none, consider enabling automatic monthly scrubbing (a kind of runtime "fsck" that checks files and can even repair them on replicated devices (mirror/raidz)).
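One common way to schedule such a monthly scrub is a root cron entry; this is a sketch (pool name and timing are placeholders, and some distributions ship a systemd timer for this instead):

```shell
# /etc/cron.d/zfs-scrub -- scrub "poolname" at 02:00 on the 1st of each month
0 2 1 * * root /usr/sbin/zpool scrub poolname

# Check progress and results later with:
#   zpool status poolname
```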
ZIL - ZFS intent log is allocated from blocks within the main pool. However, it might be possible to get better sequential write performance using separate intent log devices (SLOG) such as NVRAM.
SLOG - It's just a really fast place/device to store the ZIL (ZFS Intent Log). Most systems do not write anything close to 4 GB to the ZIL (check with "cat /proc/spl/kstat/zfs/zil"). ZFS will not benefit from more SLOG storage than the maximum ARC size, which is half of system memory on Linux by default. A SLOG device can only increase throughput and decrease latency in a workload with many sync writes.
ARC - Adaptive Replacement Cache is the ZFS read cache in the main memory (DRAM).
L2ARC - Second Level Adaptive Replacement Cache is used to store read cache data outside of main memory. ... use read-optimized SSDs (no need for mirroring/fault tolerance)
Cache - These devices (typically SSDs) are managed by L2ARC to provide an additional layer of caching between main memory and disk. For read-heavy workloads where the working set is much larger than what can be cached in main memory, cache devices allow much more of the working set to be served from low-latency media. Cache devices provide the greatest performance improvement for random read workloads of mostly static content. (zpool add POOLNAME cache DEVICENAME)
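Turning the SLOG and cache notes above into commands, adding such devices to an existing pool might look like this (device paths are placeholders; a SLOG is best mirrored or power-loss-protected, since losing it during a crash can lose recent sync writes):

```shell
# Add a mirrored SLOG (separate intent log) on two fast SSDs
sudo zpool add poolname log mirror /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2

# Add an L2ARC cache device (no redundancy needed; cached data is disposable)
sudo zpool add poolname cache /dev/disk/by-id/ssd3

# Watch how much the ZIL is actually being used
cat /proc/spl/kstat/zfs/zil
```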
Yes, you can add new domains to Cloudron. Depending on where the authoritative servers are for the domain, you can use the dedicated API-based setups (i.e. AWS, DO, Namecheap, Linode, etc), or you can instead use the Wildcard setup for any other service that's not explicitly listed in the providers list.
I run it on a rented dedicated server at OneProvider; much better bang for the buck compared to a VPS:
CPU: Intel Xeon 4 cores
Hard Drives: 2x 2TB (HDD SATA)
Bandwidth: Unmetered @ 1Gbps
I have mainly media apps on it (Jellyfin, SickChill, ...), but also a few websites with Ghost and WordPress that I host for friends, and a few utilities (VPN, Nextcloud, Etherpad, ...).
True, though I suppose it would provide more immediate value to current users of Cloudron if we package apps requested in the App Wishlist forum sorted by upvote order desc. (I suppose this is the reason why keeping the App Wishlist clean is a good idea.)
I agree - I usually use the LAMP app now purely for testing (hence my last comment). I've packaged quite a few apps for Cloudron (with 1 on the store front too!)
@robi I have had the same experience, discovering a message that was sent to me months before.
I had not thought about using a PTT service. At a quick glance, I think the one that shows the most promise is the Discord sandbox, as Discord allows people to communicate via text and also send files. But then there is the issue that it is active for some (the tech-savvy) but passive for the people who are not.
Thanks for the info. I am going to look into it deeper.
@atrilahiji I have seen elsewhere in the forum, that you are knowledgeable about Moodle. Do you think what I am thinking is possible even if a little crazy?
A bit of a wild guess: the mail "from" is usually <> for bounce mail. So this seems like either a poor attempt at denial of service, or maybe those IPs know that some mail software misbehaves with such carefully crafted mail.
Ah very interesting, I appreciate that insight. It was definitely strange when I saw it happening - so many requests at once. I'll keep an eye out for it. Sounds like it's all good then as far as Cloudron is concerned. 🙂 Thanks Girish.
The backdoor was removed before it was compiled into a binary for admins to download, so there is no issue for anyone running PHP. However, this does prove to be an issue in regards to PHP's safety - they have moved to GitHub (@girish mentions this in his reply) and will be monitoring pushes and merges into the code base more closely.
PHP's Own Nikita Popov: "The changes were on the development branch for PHP 8.1, which is due to release at the end of the year" which means the code has not been distributed. It's a big deal but not as big as everyone is making it out to be.
I wouldn't mind having two apps to get the job done, as long as they could sync perfectly. I think that's the biggest challenge, they should be able to work together.
Has anyone frequently used opencart or oxid eshop? Could they be synced with moodle or canvas LMS?
What we are looking for is that the ecommerce system, in addition to all the necessary features in an online store, has the ability to easily design the main pages: the store, product page, home page ... this seems to be possible with odoo.
It would also seem like a good option for Cloudron to manage WP + Woo.
I wonder if nginxproxymanager is an app or something that we have to make sure Cloudron should integrate with? I feel it's the latter. If that's the case, let us know what is needed on the Cloudron side to make proxying work.
What I mean is: nginx proxy manager should be your "front" and Cloudron is just one of the apps it proxies to.
If there is an API, maybe we can at some point look into integrating with nginx proxy manager, i.e. an app installation can add entries into nginx proxy manager. Of course, this is viable only if nginx proxy manager is a supported and reasonably popular product. I remember we had similar ideas for integrating with Cloud Firewalls to open up ports automatically.
@d19dotca since we had some recent timeout issue with the services view (and mail being a service here) it could be that also here the mail container or the server overall is busy during that time and thus the search queries timeout. If you see this happening again, a look into the system/box logs as well as mail service logs should reveal this then.
@uiguy It's all opportunity cost - just because we can doesn't mean we should, if someone else has already done it and there's a community with a vested interest in continuing to.
My team (not part of Cloudron, just fans and occasional contributors) are crack-commando developers; the last thing I want them consumed with is dev-ops wheel reinventing when we have a world of wonder in evolving organisations based on the apps themselves.
I can replace the clutch in my car, build furniture, clean my own house, do accounts and go to the shops to buy something - not really the best use of time nowadays though.
I've seen almost every alternative under the sun in this space, but where do you stop?
Many have said similar - no-one has yet shown any of us a faster way to achieve the same and repeatedly.
All open minds here - it's easy to throw technology names around, we all have search engines for that too - experience tells me claims & realities often differ.
If you want to live in dev-ops, then Terraform K8S with GitLab CI.
If you can do everything Cloudron will do faster, I'll personally pay for your licence for wasting your time.
Proof is in the pudding though, you either try something and learn or you don't.
No-one's here to convince anyone of anything, we just all have places to be, people to help, and value the time-saving, community for just getting on with minimum drama.
This ain't the corporate political brand name dropping world here - in fact I've found it to be a community of doers that don't wear or care for any brand names. We're here for a good time, not a promotion, pitch or long time. No-one's paid to help anyone, karma is the currency of a community greater than any one opinion, and I'm here to be enlightened, challenged and proven wrong to be right next time.
I once sat in a cafe, overhearing frustrated project managers for a bank talking about abandoning a £2m investment in an Azure & Dynamics setup because it was never ending and a constant turnover of "certified" junior developers.
They could have had it all done and moved on with something like Odoo - but hey, some people still say that no-one ever got fired for recommending Microsoft! Or did they? 😉
@plusone-nick Yeah, that's application & storage layer but the reason for looking at Scale Computing is OS layer, so it covers server & client OS images. Kinda like your own private hosting provider in a box (or boxes).