Thank you.
pitadavespa
Posts
-
Slow to delete files
@nebulon yes, exactly what you describe.
Do you have any plans to implement those changes?
Is it possible for you to do it? (I know this is a side project of yours.)
I'm really enjoying Cubby. I tested several alternatives, mainly because Cubby doesn't have user quotas, but I'd like to stay with Cubby.
Thanks,
L -
Slow to delete files
Hi.
I'm using Cubby on an NFS-mounted volume (same DC, 0.2 ms latency, 10 Gb/s network).
Deleting files is very slow, but deleting a folder containing those same files is (fairly) quick.
I also tested Cubby on the same NVMe disk as the server, and the behaviour is the same.
There's a process, recollindex, that runs when files are deleted. It can take many minutes when hundreds of files are deleted.
Is there anything that can be done (maybe on my side) to avoid this?
Thanks,
Luis -
Collabora word docs bigger than 100KB - cannot save
Yes, it has.
Thank you very much! -
Size limitation
This would be great.
-
Collabora word docs bigger than 100KB - cannot save
That's great. Thanks!
-
Collabora word docs bigger than 100KB - cannot save
Hi!
The error is from the cubby instance.
I'm now testing it with a different Collabora server (outside Cloudron), and the same error shows in Cubby.
The .docx file itself is not relevant; I even created a new one. If it has enough text to reach (around) 100KB, or if you put an image (screenshots, for example) in the doc, it gives the error.
Thank you for your time!
-
Collabora word docs bigger than 100KB - cannot save
Hi!
This is my first post here. I just started using Cloudron (still on the free version, doing some tests) and I'm very happy I found it.
I have this issue: when using Cubby and Collabora (installed on the same server), I can't seem to save Word files (haven't tested other types) bigger than 100KB.
Error log:
Nov 22 11:03:45 PayloadTooLargeError: request entity too large
Nov 22 11:03:45     at readStream (/app/code/node_modules/raw-body/index.js:163:17)
Nov 22 11:03:45     at getRawBody (/app/code/node_modules/raw-body/index.js:116:12)
Nov 22 11:03:45     at read (/app/code/node_modules/body-parser/lib/read.js:79:3)
Nov 22 11:03:45     at rawParser (/app/code/node_modules/body-parser/lib/types/raw.js:81:5)
Nov 22 11:03:45     at Layer.handle [as handle_request] (/app/code/node_modules/express/lib/router/layer.js:95:5)
Nov 22 11:03:45     at next (/app/code/node_modules/express/lib/router/route.js:149:13)
Nov 22 11:03:45     at tokenAuth (/app/code/backend/routes/users.js:131:5)
Nov 22 11:03:45     at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
Nov 22 11:03:45   expected: 416711,
Nov 22 11:03:45   expose: true,
Nov 22 11:03:45   length: 416711,
Nov 22 11:03:45   limit: 102400,
Nov 22 11:03:45   status: 413,
Nov 22 11:03:45   statusCode: 413,
Nov 22 11:03:45   type: 'entity.too.large'
Nov 22 11:03:45 }
I tried changing client_max_body_size in nginx, but no success.
Is there anything else I can do?
Thanks!
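The numbers in the error object explain why the nginx change had no effect: `limit: 102400` is the 100 KiB default limit of Express's body parser, so the 413 is raised inside the Node app itself, before nginx's client_max_body_size ever matters. A minimal sketch of the arithmetic:

```javascript
// The error object in the log reports the two numbers that matter:
const limit = 102400;     // in-app raw-body limit (100 KiB, the body-parser default)
const docLength = 416711; // size of the .docx payload being saved ("length" in the log)

// The 413 fires because the payload exceeds the in-app limit,
// so a larger nginx client_max_body_size cannot help.
console.log(docLength > limit); // prints: true
```

If that's the cause, the fix would have to land in the app's own body-parser configuration (for example, something like `express.raw({ limit: '10mb' })`) — a hypothetical change, since Cubby's actual code isn't shown here.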