@humptydumpty
We got it some weeks ago as a partner, but the API docs are pretty unstable; they have already changed twice, and we reported some bugs too.
Let's wait for a stable release.
@savity
You can install ClamAV on a second machine and connect Nextcloud to it over the network.
ClamAV needs a minimum of 2 GB of RAM, better 4 GB.
This is the easy way to add antivirus to Nextcloud.
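If it helps, here is a minimal sketch (my addition, not part of the original reply) to check that the remote clamd is reachable from the Nextcloud host; it assumes clamd is configured with its default TCP port 3310 and uses the newline-framed PING command from the clamd protocol, which clamd answers with PONG.

// ping-clamd.js - verify a remote clamd answers on its TCP socket
const net = require('net');

const host = process.argv[2] || '192.168.1.50'; // hypothetical address of the ClamAV machine
const socket = net.createConnection({ host, port: 3310 }, () => {
  socket.write('nPING\n'); // newline-framed PING; clamd should reply with PONG
});

socket.on('data', (data) => {
  console.log('clamd replied:', data.toString().trim()); // expect: PONG
  socket.end();
});

socket.on('error', (err) => {
  console.error('cannot reach clamd:', err.message);
});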
@jtippetts
Accounting is different in every country; in Switzerland a good one is Banana.
@jdaviescoates said in Most cost effective server for cloudron:
laptop
For that, go for a big DRAM-less SSD if it's only for backup.
They are cheap.
@jdaviescoates
Two options:
CMR HDD: good if your backups are scheduled daily and you delete data often.
SMR HDD: good if you use that drive for monthly or secondary backups, where you won't interact with the storage frequently.
CMR costs more and has less storage capacity.
SMR costs less and has more storage capacity.
Good brands are:
Seagate IronWolf (NAS + Prosumer multi-disk setup)
Seagate Exos (more Enterprise)
@jdaviescoates
What will you use them for: video storage, small files, backups?
@opensourced
Spamhaus just checks whether your IP/domain is listed in their database; being listed means that your server, or the previous owner of that IP, used it for spamming or malicious activity.
A good way to understand what's happening on your server is to lock down the SSH port to key-only authentication (and check whether there is an SSH key that isn't one of yours), then look for any process you don't recognize or any Docker container that shouldn't be there.
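As a quick way to see whether an IP is listed, here is a small sketch (my addition, not from the original reply) that does a standard DNSBL lookup against Spamhaus ZEN: reverse the IPv4 octets, append zen.spamhaus.org, and an A-record answer means the IP is listed, while NXDOMAIN means it is not. Note that Spamhaus may refuse queries coming through some large public resolvers.

// dnsbl-check.js - check an IPv4 address against the Spamhaus ZEN blocklist
const dns = require('dns').promises;

async function checkSpamhaus(ip) {
  // DNSBL convention: reversed octets, queried under the list's zone
  const query = ip.split('.').reverse().join('.') + '.zen.spamhaus.org';
  try {
    const answers = await dns.resolve4(query);
    console.log(`${ip} is LISTED (return codes: ${answers.join(', ')})`);
  } catch (err) {
    if (err.code === 'ENOTFOUND' || err.code === 'ENODATA') {
      console.log(`${ip} is not listed`);
    } else {
      throw err; // resolver failure, not a clean result
    }
  }
}

checkSpamhaus('127.0.0.2'); // documented test address, always listed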
We use GitLab CI/CD, but even n8n can be used as a CI/CD if you want to go no-code.
@ekevu123
You can limit what a user can do with the operator role, and update the app using a custom Docker image, with the API and a REST client or the Cloudron CLI (which works flawlessly in a CI/CD).
When you containerize any app for Cloudron, you should follow the idea of updating the image rather than the software inside it, because it's the same principle as any Docker install.
As I suggested, you could just set up a CI/CD and give your developer access to the Git repo plus operator access to Cloudron for reading logs and accessing the terminal.
This lets you update the app while keeping Cloudron's advantages of stability, security, backups, ... and still gives you the extra flexibility you are asking for; a rough sketch of what the pipeline step could look like is below.
I know that's not the answer you expect, but Cloudron is built around an opinionated layout (for easy maintenance and stability); for a DIY solution, Portainer may be a better fit given the flexibility it offers.
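To make the CI/CD idea concrete, here is a rough sketch of the kind of call a pipeline job could make against the Cloudron API with an API token. The route and payload below are my assumptions for illustration, not the documented API, so check the Cloudron API docs (or simply use the Cloudron CLI) for the real endpoint and fields.

// update-app.js - hypothetical sketch of triggering an app update from a CI job
// NOTE: the endpoint path and body fields are illustrative assumptions, not the documented API.
const CLOUDRON = 'https://my.example.com';        // your Cloudron dashboard domain
const TOKEN = process.env.CLOUDRON_API_TOKEN;     // API token with enough rights
const APP_ID = process.env.APP_ID;                // id of the app to update
const IMAGE = process.env.CI_IMAGE;               // image tag built earlier in the pipeline

async function updateApp() {
  const res = await fetch(`${CLOUDRON}/api/v1/apps/${APP_ID}/update`, { // assumed route
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${TOKEN}`,
    },
    body: JSON.stringify({ image: IMAGE }), // assumed payload shape
  });
  if (!res.ok) throw new Error(`update failed: ${res.status}`);
  console.log('update triggered');
}

updateApp().catch((err) => { console.error(err); process.exit(1); });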
@jdaviescoates
I don't see it the same way: if you can specify in the manifest which directories are writable (or better, where a volume should be mounted), you can back them up just as you do with /app/data.
The same goes for updates and migrations: updates are managed by the start script or by shipping the new code directly in the image, so this doesn't change; it stays exactly as it is now and is just as secure, because you still have the read-only filesystem lock, only some directories become writable and they are specified in the manifest.
And we have proof that it works, because the Compose specification is easy to use and makes it easy to set up backups of the volumes mounted on a container.
@girish
I think the big deal is just the read-only FS.
Being able to select which directories are R+W, and not just /app/data, could be a good way to simplify packaging for Cloudron.
@ekevu123 storage space is checked every 12h, not in real time.
EDIT:
Yes, collectd is set to monitor twice every 24h.
@ekevu123
Just to let you know: Cloudron uses GiB and MiB, so remember to convert the 128 GB.
Check whether you have old MySQL dump files on your storage; that's a common cause of unexpected space usage.
In general, remember that Cloudron's stats use binary units, not SI units.
So in any case your math can be a bit off because of that too.
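To show why the numbers can look off, here is the conversion for the 128 GB example (my own illustration): vendors advertise SI gigabytes of 10^9 bytes, while a GiB is 2^30 bytes.

// gb-to-gib.js - why a "128 GB" disk shows up as roughly 119 GiB
const GB = 1e9;        // SI gigabyte, what disk vendors advertise
const GiB = 2 ** 30;   // binary gibibyte, what Cloudron's stats use

const bytes = 128 * GB;
console.log((bytes / GiB).toFixed(1) + ' GiB'); // ~119.2 GiB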
@jdaviescoates
I was not sure about that after the change to the fair-code license, but it was written in the Commons Clause license.
@roofboard
Just two notes on that:
@subven
More or less it's correct, but the difference is that n8n workers need access to the DB and Redis for the queue.
And they don't work correctly (they add a lot of delay) if you use webhooks.
@roofboard
You would have to expose psql and keep the credentials the same.
And both are features not currently available in Cloudron.
@robi they use 3rd-party datacenters outside Germany.
And their networking infrastructure was already not great in Germany; now, for example, they don't specify how much bandwidth they have at the 3rd-party locations.
So I would not trust them too much.
We had colocation in their datacenter in the past and we ran into really big issues with their support and especially their networking; at that time (2019-ish) they didn't use any software-defined networking, so it was hard to understand some behaviors, and customer support was not well trained.
@girish
as you know we use an external mail proxy; imapsieve is a good tool to get spam/ham feedback from our users.
Second, it can be used to automate flows based on where the user has moved the mail.
Hello to all,
one of the features currently missing is IMAP-Sieve; maybe I'm mistaken, but Cloudron supports Sieve only for SMTP connections/incoming email. https://doc.dovecot.org/configuration_manual/sieve/plugins/imapsieve/
@marcusquinn
Partially it's solvable using the additional Docker container feature in Cloudron, for instance to have a specific database.
I really think most features would be covered if we had a more open and robust manifest, like multiple-volume support (in the manifest).
Maybe also a script that is executed before a backup snapshot is taken, so you have the option to stop the app and dump your DB.
Managing VMs is difficult, and doing it reliably requires specific kernel modules, so I think we can still use other software for that.
But LXC was born to be different from Docker:
LXC: 1 container == 1 OS
Docker container (still a Linux container): 1 service/app.
Maybe I'm joining this discussion too late, but from what I understand the big issue is flexibility versus Cloudron's ideals of easy, safe (backups) and it-just-works.
Probably expanding the manifest specification and openly allowing the file format to be used outside Cloudron as well would help its spread and increase the number of 3rd-party apps.
So these 2 changes could help:
If CloudronManifest were an open standard, like Docker Compose is based on https://compose-spec.io/, other devs could build a CLI tool to install apps based on it, and it would give the community the ability to trust that the format can still be used outside Cloudron in the future, should Cloudron change its mind about supporting its community or get sold.
Improving the manifest by introducing more than the 3 directories available now, allowing the dev to set whether a directory is writable and whether it needs to be backed up (a hypothetical sketch follows below).
With the list of directories that need to be backed up available in the manifest file, backups can be standardized, and at the same time it becomes easier and more convenient to containerize apps for Cloudron.
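To illustrate that second point, here is a purely hypothetical manifest fragment (my sketch; the "directories" key does not exist in today's CloudronManifest): each extra directory would declare whether it is writable and whether it should be included in backups.

// Hypothetical CloudronManifest extension - the "directories" key below is invented for illustration.
{
  "id": "com.example.myapp",
  "directories": [
    { "path": "/app/data",    "writable": true,  "backup": true  },
    { "path": "/app/uploads", "writable": true,  "backup": true  },
    { "path": "/app/cache",   "writable": true,  "backup": false }
  ]
}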
Hello,
we are facing some consistency issues with how mail is handled from external servers to Cloudron versus from Cloudron to Cloudron.
If you use an external mail proxy with some filtering rules, those are applied only to incoming email, not to internal email.
I think that sending internal traffic through the external SMTP should be an option, or maybe even the default.
@scooke
ACKEE_ALLOW_ORIGIN is the domain from which the JS tracking script is allowed to send analytics to Ackee.
For example, in our case it is moocloud.ch.
All the other options are on the JS tracker side, not in Ackee.
Here you can find those options: https://github.com/electerious/ackee-tracker#-options
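For reference, a minimal sketch of how those tracker-side options are passed; the option names are taken from my reading of the linked README, so double-check them against the version you run, and the server URL and domain id below are placeholders.

// tracker.js - configuring ackee-tracker on the website side
import * as ackeeTracker from 'ackee-tracker';

const instance = ackeeTracker.create('https://ackee.example.com', { // placeholder Ackee server URL
  detailed: false,        // don't collect detailed data (screen size, language, ...)
  ignoreLocalhost: true,  // skip visits from localhost
  ignoreOwnVisits: true,  // skip visits from logged-in Ackee users
});

instance.record('your-domain-id'); // placeholder: the domain id shown in the Ackee UI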
We use GitKraken with VS Code, or the Git integration in IntelliJ/GoLand.
@scooke
You need to enable an option in Ackee if you want to track yourself; ignoring your own visits is actually a good idea if you have a low-traffic website.
@TechoutDev
OpenVZ is mostly a container itself, and it's a better fit as a platform to install a single stack than a complex solution like Cloudron.
KVM, Hyper-V, or vSphere are a better fit for that.
Hello @girish,
I just lost a lot of time on a bug related to storage and then discovered that what the Cloudron UI shows is in fact binary. Could the labels be updated to the correct ones, with the "i" in between the letters, so that it's clear which unit of measurement is being used?
I know it's a silly thing to complain about, but it can save some time when trying to understand what's happening.
@d19dotca
that is also because Nginx talks to PHP through PHP-FPM over FastCGI rather than embedding PHP in the web server (Apache can also use FPM in some setups), which already gives a good boost.
Nextcloud and WP could take good advantage of it, and Nginx could help solve the issue of the main image being too fat.
It's easy to use Nginx in a multi-layer image.
ARM CPUs in servers are more and more common, and for a lot of workloads they are a better fit, especially if you use a modern language that takes advantage of multithreading.
I think it's time for Cloudron to start supporting it, or at least working on support; for now, 3 big cloud providers offer instances based on ARM CPUs (AWS, Azure, Alibaba).
Hetzner announced that they will start buying CPUs from Ampere too, and with the rise of requests on the forum I think DigitalOcean is not far behind.
Obviously this is just my personal opinion; what do you think?
@timconsidine
if you use Windows, just go with Hyper-V; it's the most optimized and is included with the OS.
Or WSL can do a lot since the v2 update, even with Docker Desktop support.
If you are on Linux and you want a UI, just go for the simple GNOME Boxes, which uses QEMU.
This is all valid if you don't want to go with a professional solution, but for homelabs and testing they are pretty good.
@girish any plan to improve the backup solution in Cloudron, maybe with restic as the engine?
@girish
Do you think it's possible to change the import interval at the Cloudron level?
@marcusquinn
They have Win10 Pro, so Hyper-V is easy to use and pre-installed; it could be faster and easier than using Proxmox.
@humptydumpty said in Which NVME is best for servers?:
custom top cover to accommodate a 80mm fan for better airflow since it'll be on 24/7
See how it goes; the i5-6500T is not that power-hungry, so you should be fine even for a 24/7 server.
@humptydumpty said in Which NVME is best for servers?:
less
Less is better for writing a lot of random data without burning through the NAND's lifetime.
Every time you have to write, delete, or modify a bit of data in one column, all the layers need to be rewritten, which means that data needs to be moved.
So more layers means more work and less performance on spikes of small random writes.
We have a lot of Dell OptiPlex machines; they are great, and I hope you will have a good experience with the EliteDesk. But now I'm so sad because there is no Linux support for the M1 CPU, so no real ARM server for small/medium providers.
@humptydumpty
The EVO, if I remember correctly, uses 3- or 4-layer NAND, so it's not really ideal.
The WD Red is not the best for performance, but its firmware is probably more optimized for storage in a NAS or server.
@avatar1024 said in What's coming in Cloudron 8.0:
(e.g. this one on my side)
Same here, restic support could be a great idea for backups as an alternative to rsync.
Hello to all,
I'm trying n8n a bit, to see how it performs on various tasks that we currently do with bash or Python scripts.
But I wasn't able to use the Node crypto functions, even when I set the ENV to export NODE_FUNCTION_ALLOW_EXTERNAL=*
or
export NODE_FUNCTION_ALLOW_EXTERNAL=crypto
I even tried adding the ENV to the container directly via the API, but nothing.
My test code in the Function node is this:
let crypto;
// build a single test item to return from the Function node
const items = [{
  json: {
    "is disable": "yes"
  }
}];
try {
  // dynamic import of the built-in crypto module
  crypto = await import('crypto');
} catch (err) {
  // the import is blocked unless n8n is configured to allow it
  return items;
}
return items;
I'm not a good JS dev and my understanding is pretty basic (I survive thanks to debuggers), so it's highly possible that my code just doesn't work, but maybe it's some weird implementation detail of the Docker container.
@doodlemania2
normally "phishing" tag in an antispam is triggered by a link that hides a different URL.
Like cloudron.com, cloudron.com --> pointing to cloudron.io.
RspamD support, with the addition of some module logo detection, and there could be some issue there, but is pretty rare as a filter because it cost too much in resources for a big install.
@msbt
the issue could start with a lot of concurrent users and a lot of small changes to files, because SMB doesn't work at the FS level but at the file/block-of-data level.
If Hetzner offered NFS, that could be a solution, or even better if there were support for iSCSI.
@marcusquinn
Hetzner doesn't use remote storage; they use a remote clustered DB + local storage.
So it's not possible to compare the two installs (Cloudron + Storage Box vs Hetzner Storage Share): performance will be a lot better on Storage Share, by orders of magnitude, especially on small files.
@girish
Yes I did, and how IMAP import works is poorly documented.
I think the IMAP poll should even be every minute, as it is for FreeScout.
IMAP import is way too slow, I think every 5 (or more) minutes. Is it possible to change that, or has any of you found a way to trigger it more often, with a cron maybe?
@systemaddict
Directus is good software, especially because it handles SQL migrations automatically; and in this pSQL is a lot faster and is, in general, a better SQL database than MySQL, with support for more features than MySQL. Directus will emulate those features for you so you won't feel the difference (obviously you'll spend resources on this).
In the end, it's not a big deal to clone the app from GitLab and change the manifest and the start.sh file to use MySQL (see the sketch below the links).
https://git.cloudron.io/cloudron/directus-app
https://docs.cloudron.io/packaging/tutorial/
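For context, the switch mostly means requesting a different database addon in the manifest and reading the matching environment variables in start.sh; a simplified sketch of what I mean (check the actual CloudronManifest.json in the repo above for the full file):

// CloudronManifest.json (simplified sketch): request the mysql addon instead of postgresql
{
  "addons": {
    "mysql": {},        // was: "postgresql": {}
    "localstorage": {}
  }
}
// start.sh would then read the CLOUDRON_MYSQL_* variables instead of CLOUDRON_POSTGRESQL_*.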
@JOduMonT
We use a mail proxy for all outgoing mail, and most of our customers want to have a tracker, but their mails don't go to spam/marketing on Gmail.
Probably it's only the headers generated by Mautic that make it get recognized as marketing.
@marcusquinn
You can offer that pricing anyway if you go for a 3+ server install with TrueNAS Scale or any other OS plus a software-defined storage solution.
@marcusquinn said in Make (Hetzner) Storage Box mounting for Nextcloud etc a native Cloudron feature:
that was used through the federation feature
can you expand on your idea? Because, to my knowledge, federation doesn't allow access to users stored on other Nextcloud instances; it only allows sharing with them.
@marcusquinn said in Make (Hetzner) Storage Box mounting for Nextcloud etc a native Cloudron feature:
If they can do it with satisfactory speed, we should be able to as well please
kind of; they don't mount a remote directory as /data, they have it locally. Instead they use MySQL in a cluster, which reduces downtime and gives great performance.
The only way you can get that performance with a remote storage server is using iSCSI or a software-defined-storage filesystem.
@marcusquinn said in Make (Hetzner) Storage Box mounting for Nextcloud etc a native Cloudron feature:
Storage Boxes are at least taking daily and on-demand snapshots
I was interested because until now we have used our own storage server, and knowing the costs, I didn't think they could offer that price with a backup of the volume included on other servers.
And in fact, it is not included: only their Nextcloud storage has backups, so not the Storage Box, only the Storage Share.
@luckow
I would actually prefer our Jitsi to be private.
I have tried to do it with a manual install (to be honest I didn't dedicate enough time to it), but I couldn't get it working.
Hello,
has somebody already tried to integrate Jitsi with Rocket.Chat? If yes, how did you do it?
@msbt
Snapshots are copy-on-write / ZFS-like snapshots.
@marcusquinn
it's not impossible to containerize it, but it's almost impossible to do it for Cloudron.
It needs to be installed with docker-compose because it uses multiple Docker containers to run: