Posts by pbischoff

All non-binary documents vanished after upgrade to 5.4, only top level folders left
@kqcav I hope you got it working already, but I had the exact same issue after updating the NextCloud package from 4.23.4 to version 5 (which changes auth from LDAP to OIDC) and was able to solve it.
In my case, almost every file and directory we use in our Nextcloud belonged to one Nextcloud user, who was sharing them with a group of users. We had to log in as this user once with the new OIDC "Login with Cloudron" flow, and all files and directories were back again.
-
Cloudron as LDAP Directory Server with around two thousand Users
Hello everyone,
I was asked to develop an account management solution via LDAP for an institution with around 2000 users. In this scenario only an LDAP directory server is needed; the external apps that consume the LDAP directory are already in place.
I immediately thought of Cloudron and its directory server for external apps, because we also use it very successfully in our own company. But there we use it for about 10 people, whereas in this case it would have to manage 2000 users.
Does anyone have experience using the Cloudron directory server with several thousand users as an LDAP provider for external apps and can give me an assessment of whether this is a good idea?
Or can someone from the Cloudron team say whether the Cloudron directory server is designed to handle that many users, or where problems might lie?
Thank you very much for your answers.
Kind regards
Philipp
-
Oversized Disk Usage
@girish I just wanted to say thank you for your help two weeks ago!
I couldn't get Postgres running because there was too little storage left on the disk, so on such short notice I had no idea how to delete the data out of Postgres.
For us this issue was very critical and I had to fix it very fast, so I just deleted the dump file, which meant losing the Directus instance data, but that was okay in this case.
Again: thank you very much and best regards!
-
Oversized Disk Usage
@girish Yes, it's this file! It has 447 GB!
Do I have to manually modify this file, or will just deleting the data within Postgres do the trick?
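For anyone finding this later: deleting rows alone does not make Postgres return disk space to the operating system; the files only shrink after a full vacuum. A minimal sketch (database and table names are placeholders, not from this thread):

    # Deleted rows leave dead space inside the table files; VACUUM FULL rewrites
    # the tables and returns the space to the OS (it takes an exclusive lock):
    psql -U postgres -d <database> -c 'VACUUM FULL;'
    # If an entire table (e.g. an old log table) can go, TRUNCATE frees its
    # space immediately without a rewrite:
    psql -U postgres -d <database> -c 'TRUNCATE TABLE <log_table>;'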
-
Oversized Disk Usage
@girish Okay, this is really curious. When I look under "Storage" of that app in the Cloudron Dashboard it tells me: "The app is currently using 9.5 MB of storage (as of 1 hours ago)."
-
Oversized Disk Usage
@girish Okay, thank you, that helped very much.
I identified a Directus instance that uses almost 500 GB. I think this is database data, because a cron process fetches data periodically and writes logs for every fetch. How can I check whether it really is the database data or some files?
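One way to check is to compare what Postgres itself reports against a du of the app's data on disk. A sketch with generic names (the actual app data path and database name depend on the installation):

    # Ask Postgres how big each database is, largest first:
    psql -U postgres -c "SELECT datname, pg_size_pretty(pg_database_size(datname)) FROM pg_database ORDER BY pg_database_size(datname) DESC;"
    # Compare with the files the app stores on disk (path is illustrative):
    du -sh /home/yellowtent/appsdata/<app-id>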
-
Oversized Disk Usage
@girish And the yellowtent/ directory uses 858 GB.
-
Oversized Disk Usage
@girish Yes, it looks like that!
-
Oversized Disk Usage
@girish Okay, can you give me a hint which directories are the ones that Cloudron uses?
I already checked all directories under /var. There, only lib/ holds 135 GB and the rest are under 100 MB.
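For reference, a generic way to walk the filesystem one level at a time and spot the heavy directories (standard GNU coreutils, nothing Cloudron-specific):

    # Show the size of each immediate subdirectory, largest last:
    du -h --max-depth=1 / 2>/dev/null | sort -h
    # Then descend into whichever entry dominates, e.g.:
    du -h --max-depth=1 /home 2>/dev/null | sort -h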
-
Oversized Disk Usage
@girish Thanks for your answer.
As I said, "For Backups I used S3 bucket from the beginning."
So the backups were NEVER on disk.
The disk usage of /var/backups is 2.1 MB.
-
Oversized Disk Usage
Hello everyone,
I have a very similar problem.
I am wondering what makes "Everything else (Ubuntu, etc)" 495.02 GB big.
For backups I used an S3 bucket from the beginning. I also already decreased the retention policy and cleaned up backups, but nothing changed in disk usage.
Thanks for your help and best regards
Philipp
-
Scaling/vCPU core usage of Directus in Cloudron
@girish Thank you for your answer.
I did a bit of load testing, and that confirmed again that only one vCPU core is used.
But this is a problem for almost every Node.js app in the Cloudron App Store, isn't it? Do they all use only one vCPU core?
Then I came across pm2 (https://pm2.keymetrics.io/), a process manager for Node.js with an integrated load balancer. I have never taken a deeper look at how you package your apps on Cloudron, but would it theoretically be possible to include pm2 in the Directus app setup for Cloudron, or do you have any conditions that forbid "external" dependencies like pm2? Or are there any other obstacles that would speak against that?
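For illustration, this is what pm2's cluster mode looks like in its generic form (server.js is a placeholder entry point; the actual start command of the Directus package would differ):

    # -i max forks one worker per available vCPU core and load-balances
    # incoming connections across them:
    pm2 start server.js -i max
    pm2 ls          # list the workers and their CPU/memory usage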
I was thinking about playing around with it a little bit.
Thanks for your help again and best regards
Philipp
-
Scaling/vCPU core usage of Directus in Cloudron
Hello everyone,
I want to use Directus, installed via Cloudron, as the data and content management system for a web project. I was wondering how many concurrent requests this setup could handle on a server with multiple virtual CPU cores.
This led me to the question of how scaling in Cloudron and Directus actually works. The Directus backend runs on Node.js and is single-threaded. In the following GitHub discussion the tech lead of Directus, Rijk van Zanten, says that Directus has no vertical Node process scaling and that they scale horizontally with multiple container instances: https://github.com/directus/directus/discussions/9781#discussioncomment-1645326
If I understand Cloudron correctly, there is a single container running for every app, and there is no option to run multiple container instances of one app.
Actual Question
So am I getting it right that Directus on Cloudron always uses only one virtual CPU core?
So if I have a server with 4 vCPUs and run Cloudron with only Directus installed on it, 3 of those vCPUs go unused?
Thanks for any help
Philipp
-
NextCloud Backup fails with "Message: Invalid part number: 10001"
@girish Thank you very much, that solved the problem.
The "change the upload part size" hint would maybe be great for a Backup section in the Troubleshooting knowledge base.
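For context, the arithmetic behind the fix: S3-compatible object storage allows at most 10,000 parts per multipart upload (hence the error at part number 10001), so the part size must be at least the archive size divided by 10,000.

    # Minimum part size for a ~250 GB archive:
    echo "scale=1; 250 * 1024 / 10000" | bc   # => 25.6 MB, so e.g. 32 MB parts stay under the limit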
I also received an e-mail where a backup section in Troubleshooting was mentioned. But that section does not exist:
-
NextCloud Backup fails with "Message: Invalid part number: 10001"
Hello everyone,
Since I migrated about 250 GB to my NextCloud app, the automatic backup runs for several hours and then fails with the error:

    box:backuptask runBackupUpload: result - {"result":"Error uploading snapshot/app_52cfd53a-8842-4b5b-afa5-836e3af76c54.tar.gz. Message: Invalid part number: 10001 HTTP Code: InvalidPart"}

MY PREREQUISITES
- Platform version: v7.3.4 (Ubuntu 22.04.1 LTS)
- Backup Provider: ionos-objectstorage
- Backup Storage Format: tgz
Thanks for your help
-
Move to new Server destroyed Baserow
@nebulon You were right.
With the new version 7.3 the restore worked immediately, and now Baserow is running again.
Thank you for the great support!
-
Move to new Server destroyed Baserow
@nebulon Thanks for your advice.
I was wondering how I can update to a "stable" 7.3 version. Normally it should update automatically to the newest version, if I understand your knowledge base correctly. But my Cloudron has been on version 7.2.5 for a while now and is not updating automatically. Am I doing something wrong in some config or setting?
When I check for updates, it says that I can update to version 7.3.2, but that it is a prerelease and updating would be at my own risk.
-
Move to new Server destroyed Baserow
@nebulon Also, in Services it looks like Postgres is only using 0.5 GB of the 3 GB.
-
Move to new Server destroyed Baserow
@nebulon Yes, I raised the limit to almost 3 GB and got the same behavior. Maybe there is a way to look into the logs to see why the task fails?
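A hedged sketch of where to look, assuming the standard Cloudron layout (the log path and unit name are from memory and may differ between versions):

    # The platform's own log, where restore tasks usually report errors:
    tail -f /home/yellowtent/platformdata/logs/box.log
    # The Cloudron supervisor also runs as a systemd unit named "box":
    journalctl -u box -f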
-
Move to new Server destroyed Baserow
Thanks for your quick replies @nebulon and @girish
I increased the memory limit of Postgres to 1 GB and triggered the restore again, but got the same behavior.
After that I observed that MySQL also seems to reach its memory limit, so I increased the MySQL limit to 1 GB as well.
I triggered the restore and got the same behavior: the task has been stuck in "Restoring addons" for hours now.
[Screenshot: memory utilization of services]
This is a screenshot of the Services page.
MySQL and Postgres are not reaching their memory limits.
But the server has 4 GB of RAM overall. Maybe this is the limiting factor? (One way to check is sketched below.)
Thanks for your help
Philipp
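A generic sketch for checking whether overall RAM is the bottleneck during the restore (standard tools; Cloudron apps and services run in Docker containers):

    free -h                    # total/used/available memory on the host
    docker stats --no-stream   # one-shot per-container CPU/memory usage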