@infogulch if you can find an email relay service to relay outgoing messages, then you can have your email server at home just fine. This is the setup I have been using for a long time. In my case I use Postmark as the relay to send out emails, but maybe there are more privacy-focused relays by now. I haven't had to think about Postmark after the initial setup; it has worked flawlessly ever since.
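For anyone trying something similar outside of Cloudron (Cloudron itself has a relay setting in the Email section of the dashboard), on a plain Postfix box the relay configuration is roughly the lines below. The Postmark hostname/port and the credentials file path are from memory, so double-check them against Postmark's SMTP docs before relying on them:
sudo postconf -e 'relayhost = [smtp.postmarkapp.com]:587'
sudo postconf -e 'smtp_sasl_auth_enable = yes'
sudo postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'   # credentials go in /etc/postfix/sasl_passwd (see Postmark's docs for the token format)
sudo postconf -e 'smtp_tls_security_level = encrypt'
sudo postmap /etc/postfix/sasl_passwd && sudo systemctl reload postfix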
I was able to help the project get their dockerfile off the ground so that it builds and runs with just docker, but it still has a ways to go. Here are some things I think would need to be improved before it would be suitable for cloudron:
Reduce the image size by copying the binary artifacts out of the build stages (a rough multi-stage sketch follows this list). This is a bit complex because the project is built with Rust/wasm-pack and runs a wasm module on the frontend, and uses Haskell Stack for the backend. I'm not familiar with Docker image-size optimizations for either of these toolchains.
Change the backend to save all data into the /data volume. This will require some changes in the Haskell code.
Get deps-only build steps working. This would be a nice-to-have for the devs to reduce build times after the first build (which can take up to 40 minutes!), but this one isn't a deal breaker for Cloudron. I tried, but it caused some compilation output issues, so it is commented out for now.
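As a rough illustration of the first point, here is a multi-stage sketch; stage names, paths, base images, and the binary name are all assumptions and would need adapting to the real project layout:
# build the frontend wasm bundle
FROM rust:1 AS frontend-build
RUN cargo install wasm-pack && rustup target add wasm32-unknown-unknown
WORKDIR /src/frontend
COPY frontend/ .
RUN wasm-pack build --release --target web --out-dir /out/pkg

# build the backend with Stack
FROM haskell:9.6 AS backend-build
WORKDIR /src/backend
COPY backend/ .
RUN stack build --copy-bins --local-bin-path /out/bin

# small runtime image: copy only the built artifacts out of the build stages
# (the backend binary will likely need some runtime libs, e.g. libgmp, installed here)
FROM debian:bookworm-slim
COPY --from=backend-build /out/bin/server /usr/local/bin/server
COPY --from=frontend-build /out/pkg /app/static/pkg
CMD ["/usr/local/bin/server"]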
The author was very nice and quite receptive to changes. If anyone else works on this, you may want to just aim at getting the changes merged upstream.
@moocloud_matt As soon as someone comes in reporting a real case where their NIC is saturated and it's slowing down their server, I'd be interested in pursuing a solution. I've never seen such a real case on these forums (though I may just be misinformed), so I'd have to tilt towards YAGNI until one appears.
Object storage is not magic, it's just data like everything else. If it's just used to serve file attachments for an app with 25 users I wouldn't expect it to be a bottleneck.
Then log in to authenticate with proxyauth at the subdomain you chose to install it at. The https://promnesia.my.example.com/status endpoint displays a short JSON status (the app is not designed to be interacted with from the web).
Install the extension.
Go to the extension options and set the "Backend source" URL to your domain (e.g. https://promnesia.my.example.com)
It seems that logging into the app in the browser enables the extension to authenticate correctly. (This makes me want to critically review which extensions I have installed 😬.)
Why does a note-taking app have to store data on MinIO/S3? Notes can't possibly require that much space, can they?
From observing how people write notes now, I can see how individual notes could require a lot of space - I've seen colleagues and others commonly create "rich" notes with hi-res (10-12+ MB) images and 50+ MB video files.
From looking at some of my notes, they can be essentially "containers" with 5-10 hi-res pictures.
we usually find that these extensions are very specific and sometimes it is way more work to write a general system than to just hard-code it
One of the things I recognized right away about the way you guys develop Cloudron is a laser focus on implementing practical, minimal, incremental solutions instead of chasing maximum general power right out of the gate. It leads to a more stable system that I don't have to constantly fuss over -- thank you for that! I agree that this is probably the right approach for matrix plugins as well, at least for now.
I think the way issue 569 was resolved is a good example of this.
I think it makes more sense to continue the matrix discussion on that thread, which I just replied to. Thanks for taking the time to read and respond. 🙂
True, though I suppose it would provide more immediate value to current users of Cloudron if we packaged apps requested in the App Wishlist forum, sorted by upvotes in descending order. (I suppose this is the reason why keeping the App Wishlist clean is a good idea.)
I agree - I usually use the LAMP app now purely for testing (hence my last comment). I've packaged quite a few apps for Cloudron (with one on the App Store too!)
Here's someone's account of do's and don'ts with ZFS.
Creating and managing single/dual drive ZFS filesystem on Linux
Do NOT use ZFS in these cases:
you want to use ZFS on a single external USB drive (worst case, data corruption will happen on an unclean dismount, and you would have to recreate the whole dataset)
you want to use ZFS on a single drive and you do not have any external drive for backup purposes (why? when the zpool is not cleanly dismounted/exported, some data can get corrupted permanently and ZFS will have no other mirror drive from which it can automatically recover valid data, unless you add a secondary drive of the same type and size for parity/redundancy)
you do not have hours of your time to learn the basics of ZFS management (this page covers only the most basic things)
The majority of the following commands will work on all Linux distributions, though the first part of the tutorial uses Arch/Manjaro Linux packages and their package manager. On Ubuntu I was able to set up ZFS using the command "sudo apt install zfsutils-linux". If you have another distribution, you need to find out whether it has packages for ZFS (and the kernel modules).
Upgrade and update the system and reboot (in case a new kernel was installed since the last reboot)
sudo pacman -S linux-latest-zfs
sudo /sbin/modprobe zfs
if modprobe does not work, try "sudo pacman -R linux-latest-zfs" and try method B:
list the installed kernel(s) and install the matching zfs packages for these:
sudo pacman -Ss zfs|grep -i linux
sudo pacman -S linux123-zfs   (replace "linux123" with the package matching your kernel version from the search above, e.g. linux59-zfs)
pamac install zfs-dkms
enable ZFS support in the kernel (it was not enabled in 5.8.16-2-MANJARO after reboot, but once enabled by the following command it persists)
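The command meant here is presumably the modprobe shown above; assuming a standard OpenZFS install, loading and verifying the module looks like this:
sudo /sbin/modprobe zfs
lsmod | grep zfs    # confirm the kernel module is loaded
zfs version         # show the userland and kernel module versions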
check the drive's physical sector size (used below to choose the ashift value):
sudo smartctl -a /dev/sdb|grep -i "sector size"
Sector Sizes: 512 bytes logical, 4096 bytes physical
(smartctl is in package "smartmontools")
It was suggested here https://forum.proxmox.com/threads/how-can-i-set-the-correct-ashift-on-zfs.58242/post-268384 to use the parameter ashift=12 in the following "zpool create" command for drives with a 4096-byte physical sector size, and ashift=13 for drives with 8K physical sectors. If ashift is not defined, then ZFS autodetects it, and I do not know how good that autodetection is.
attempt to create a pool named "poolname" on the HDD of your choice (use a disk that stores no important data or it will be lost, and unmount the drive first, maybe using gparted):
A) sudo zpool create -o ashift=12 -o feature@async_destroy=enabled -o feature@empty_bpobj=enabled -o feature@lz4_compress=enabled poolname /dev/disk/by-id/DRIVE-ID-HERE
or the same command, but the pool will be created across 2 physical drives (of the same size, otherwise the pool will not use all the space on the bigger drive) where one is used for redundancy (recommended to reduce the risk of irreversible data corruption and roughly double read performance):
B) sudo zpool create -o ashift=12 -o feature@async_destroy=enabled -o feature@empty_bpobj=enabled -o feature@lz4_compress=enabled poolname mirror /dev/disk/by-id/DRIVE1-ID-HERE /dev/disk/by-id/DRIVE2-ID-HERE
(find the drive IDs with: ls -l /dev/disk/by-id/)
(for 4 drives as two mirrored pairs (striped mirrors), it would be: zpool create poolname mirror drive1id drive2id mirror drive3id drive4id)
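After creating the pool you can check which ashift was actually used; zdb ships with the ZFS utilities, though the exact invocation and output can vary between versions:
sudo zdb -C poolname | grep ashift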
Regarding the recordsize parameter: it was suggested in places like https://blog.programster.org/zfs-record-size and https://jrs-s.net/2019/04/03/on-zfs-recordsize/ and https://www.reddit.com/r/zfs/comments/8l20f5/zfs_record_size_is_smaller_really_better/ that for a drive holding large media files it is better to increase the record size from 128k to 512k, so I did that for my multimedia drive. The zfs manual page, though, says this value is only a suggestion and that ZFS automatically adjusts sizes per usage patterns. The unofficial articles also say the record size should be similar to the size of the typical storage operation within the dataset, which may differ from the file size itself. "zpool iostat -r" shows the distribution/counts of operation sizes; if the zpool is a single drive, you may also be able to use "sudo iostat -axh 3 /dev/zpooldrivename" and check "rareq-sz" (the average read request size).
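As a concrete example of the above, assuming a dataset named poolname/media that holds large media files (the dataset name is made up for illustration):
sudo zfs set recordsize=512K poolname/media
zfs get recordsize poolname/media    # confirm the value; it only applies to newly written blocks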
gracefully unmount the pools (I think this is necessary, or the pool will be marked as suspended and a computer restart will be needed):
sudo zpool export -a
mount the pools:
sudo zpool import -a
(if it fails, you have to import manually: list the disk names (ls -l /dev/disk/by-id/), then: sudo zpool import -a -d /dev/disk/by-id/yourdisk1name-part1 -d /dev/disk/by-id/yourdisk2name-part1 )
If a pool is encrypted, then an additional command is needed (the -l parameter prompts for the passphrase, otherwise it complains "encryption key not loaded"):
sudo zfs mount -a -l
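The commands above assume an encrypted dataset already exists; with native OpenZFS encryption (0.8+), creating one like the poolname/enc reused in the destroy example further down would look roughly like this:
sudo zfs create -o encryption=on -o keyformat=passphrase poolname/enc
# encryption=on picks the default cipher and prompts for a passphrase; later mounts then need the -l shown above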
attach a new drive (if the existing one is a non-redundant single drive, the result will be a mirror (something like RAID1, with enhanced read performance, 1-drive fault tolerance and data self-healing); if the existing drive is part of a mirror, it becomes a three-way mirror):
zpool attach poolname existingdrive newdrive
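After attaching, the new drive has to resilver before it actually provides redundancy; progress can be watched with:
sudo zpool status -v poolname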
destroy (delete) all snapshots (no prompt):
sudo zfs list -H -o name -t snapshot -r POOLNAME|sudo xargs -n1 zfs destroy
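Since that pipeline deletes without prompting, it may be worth reviewing the list first; zfs destroy also supports a dry run (the dataset/snapshot name below is a placeholder):
sudo zfs list -t snapshot -r POOLNAME
sudo zfs destroy -nv POOLNAME/dataset@snapshotname    # -n = dry run, -v = print what would be destroyed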
destroy (delete) dataset (no prompt):
sudo zfs destroy poolname/enc
destroy (delete) whole pool (no prompt):
sudo zpool destroy poolname
If you are OK with HDD activity increasing at times when regular activity is low or none, then consider enabling automatic scrubbing (a kind of runtime "fsck" that checks files and can even repair them on replicated devices (mirror/raidz)). The following sets up the monthly task:
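One way to do this, assuming cron is available on the system and the pool is named poolname (a systemd timer works just as well):
echo '0 3 1 * * root /usr/sbin/zpool scrub poolname' | sudo tee /etc/cron.d/zpool-scrub
# runs "zpool scrub poolname" at 03:00 on the first day of every month; check the results later with "zpool status poolname"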
ZIL - ZFS intent log is allocated from blocks within the main pool. However, it might be possible to get better sequential write performance using separate intent log devices (SLOG) such as NVRAM.
SLOG - It's just a really fast place/device to store the ZIL (ZFS intent log). Most systems do not write anything close to 4GB to the ZIL (cat /proc/spl/kstat/zfs/zil). ZFS will not benefit from more SLOG storage than the maximum ARC size, which is half of system memory on Linux by default. An SLOG device can only increase throughput and decrease latency in a workload with many sync writes.
ARC - Adaptive Replacement Cache is the ZFS read cache in the main memory (DRAM).
L2ARC - Second Level Adaptive Replacement Cache is used to store read cache data outside of the main memory. ... use read-optimized SSDs (no need for mirroring/fault tolerance)
Cache - These devices (typically an SSD) are managed by L2ARC to provide an additional layer of caching between main memory and disk. For read-heavy workloads, where the working set size is much larger than what can be cached in main memory, using cache devices allows much more of this working set to be served from low-latency media. Using cache devices provides the greatest performance improvement for random read workloads of mostly static content. (zpool add POOLNAME cache DEVICENAME)
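To make the above concrete, adding a dedicated SLOG and a cache device to an existing pool looks like this; the pool name and device paths are placeholders:
sudo zpool add poolname log /dev/disk/by-id/FAST-SSD-ID      # dedicated SLOG device for the ZIL
sudo zpool add poolname cache /dev/disk/by-id/READ-SSD-ID    # L2ARC cache device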
This was required as part of another feature - making it possible to restore Cloudron without updating DNS (https://git.cloudron.io/cloudron/box/-/issues/737). Currently, it is not at a domain level but at a global level (can always add it later).