Nextcloud Migration Error
-
Thank you @nebulon and @joseph for your help.
The PostgreSQL service was already set to the maximum available 64GB RAM (out of 132GB total RAM).
@nebulon here is the requested log: (thank you again)
Jul 17 06:51:43 2024-07-17 04:51:43.846 UTC [1062] ERROR: temporary file size exceeds temp_file_limit (1048576kB)
Jul 17 06:51:43 2024-07-17 04:51:43.846 UTC [1062] STATEMENT: CREATE INDEX oc_notifications_object ON public.oc_notifications USING btree (object_type, object_id);
Jul 17 06:51:43 2024-07-17 04:51:43.848 UTC [1007] root@db57af86418ca34a61b18acbf6308fb45e CONTEXT: parallel worker
Jul 17 06:51:43 2024-07-17 04:51:43.848 UTC [1007] root@db57af86418ca34a61b18acbf6308fb45e ERROR: temporary file size exceeds temp_file_limit (1048576kB)
Jul 17 06:51:43 2024-07-17 04:51:43.848 UTC [1007] root@db57af86418ca34a61b18acbf6308fb45e STATEMENT: CREATE INDEX oc_notifications_object ON public.oc_notifications USING btree (object_type, object_id);
Jul 17 06:51:43 2024-07-17 04:51:43.851 UTC [49] LOG: background worker "parallel worker" (PID 1062) exited with exit code 1
Jul 17 06:51:44 CONTEXT: parallel worker
Jul 17 06:51:44 restore: failed to restore database. code=3
Jul 17 06:51:44 restore: stderr from db import: ERROR: temporary file size exceeds temp_file_limit (1048576kB)
Jul 17 22:20:03 [GET] /healthcheck
Jul 17 22:20:03 healthcheck: disconnected
The PostgreSQL Dump file is 15GB.
From what we understand here, the limit is 1GB.
How can we increase this limit or solve this problem another way?
-
That is some large Nextcloud database instance! So I think the issue is that the postgres service creates a temp file in /tmp within the container. This is a transient volume; can you check whether the disk it is created on has enough space for that temporary file? You can find the actual system path via
docker inspect postgresql
and look for the Source of the /tmp Mount. I think we will have to rewrite that part to not rely on tmp files in the future.
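For example, something like this should narrow the output down to the /tmp mount and show the free space on the disk behind it (just a sketch; it assumes the container is named postgresql and that jq is installed on the host):
TMP_SOURCE=$(docker inspect postgresql --format '{{ json .Mounts }}' | jq -r '.[] | select(.Destination == "/tmp") | .Source')
df -h "$TMP_SOURCE"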
-
This might be a regression I introduced in Cloudron 8. @creative567145 are you already on Cloudron 8?
Assuming you are on Cloudron 8, here's a workaround. A bit rough, but it should work:
- SSH into the server and run docker exec -ti postgresql /bin/bash
- Edit the file /var/lib/postgresql/14/main/postgresql.conf. There is a field there, temp_file_limit = 1GB. Remove that line altogether (the default is unlimited).
- Restart the service with supervisorctl restart postgresql
- Now try the import. (A quick way to double-check the new setting is sketched below.)
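If you want to verify the limit is really gone before re-running the import, something like this should work from the container shell opened in the first step (a sketch; the -U postgres superuser login is an assumption about how authentication is configured in that container):
psql -U postgres -c "SHOW temp_file_limit;"
After removing the line and restarting, it should report -1, i.e. no limit.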
-
Thank you @nebulon and @girish very much!
Yes @nebulon, it's quite a big database. This Nextcloud has around 6000 users.
The disk is a 1TB+ SSD, so that was OK.
Thank you @girish for the detailed steps. They were very helpful!
The import was a success.
Note: The change isn't persistent across a reboot.
It would be nice to change this:
max_connections = 500
to this:
max_connections = 3000
If the server is powerful enough, it should be able to sustain the 6x increase.
The only challenge is how to make the change persistent after a reboot.
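(For reference, one possible way to make such a setting stick is PostgreSQL's ALTER SYSTEM, which writes the value to postgresql.auto.conf in the data directory instead of the regenerated postgresql.conf. This is only a sketch; whether Cloudron preserves postgresql.auto.conf across reboots, and whether the psql superuser login below works inside the container, are assumptions.)
docker exec -ti postgresql /bin/bash
# inside the container:
psql -U postgres -c "ALTER SYSTEM SET max_connections = 3000;"
# max_connections only takes effect after a restart
supervisorctl restart postgresql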
-
Thank you @girish, that will be very useful.
-
I wish we could do that, but this specific client doesn't want the configuration change made. He is afraid that something might break. We assured him that all would be OK, but he insists, and we must respect his wishes.
Perhaps in the near future we will try it for some other client with a high number of users.