If an app is not ready yet, should I delete it from the wishlist?
Just added journey.cloud to the list, but discovered they are still working on it.
-
Journey.cloud - feature rich diary
@timconsidine Got it. Should I close this thread until then?
-
Journey.cloud - feature rich diary
https://journey.cloud offers a feature-rich journalling app, especially for personal experiences.
There seems to be a Dockerfile already available:
https://github.com/Journey-Cloud/self-hosted-boilerplate -
tarExtract pipeline error: Invalid tar header
Yes, I got the error mentioned before on a newly created backup without encryption. The tarball was around 13 GB.
I was only able to restore backups older than one month for the Nextcloud app. For all 12+ other apps, restoring the most recent backup (younger than one month) worked like a charm.
It gives me the impression that CIFS with the Hetzner Storage Box is a risky choice.
Therefore, a hint in the docs might save other folks a lot of grief. I would like to offer two additional pieces of feedback beyond the CIFS issue and the docs:
- It does not seem possible to use the cloudron CLI decryption command with a password that includes special characters. I tried to escape with " but it didn't work. Probably something to document; maybe there is a way to escape it after all (see the sketch after this list).
- Is there any way to do backup validation? It is concerning to see backups reported as successful while they are actually corrupt.
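For reference, here is roughly what I mean (file names and the passphrase are placeholders; I am assuming the password is passed straight through the shell, so single quotes should keep special characters intact, and that a healthy backup decrypts to a plain gzipped tar):
```
# Single quotes stop the shell from expanding $, !, # and friends in the passphrase;
# pass it on to the decryption command quoted as "$PASSPHRASE".
PASSPHRASE='p@ss!w0rd$with#chars'

# Manual validation of a downloaded, already-decrypted backup tarball:
gzip -t app_backup.tar.gz               # verifies the gzip stream
tar -tzf app_backup.tar.gz > /dev/null  # verifies the tar structure
```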
And thanks a lot for the support, @girish and @nebulon, for pointing me in the right direction! I resolved it by choosing a different cloud provider and a different storage protocol; I use S3 now. Backing up and restoring with encryption worked completely fine!
Thank you! -
tarExtract pipeline error: Invalid tar header
I just found this hint in the docs:
Hetzner Storage Box: "We recommend using SSHFS for Hetzner Storage Box since it is much faster and efficient storage wise compared to CIFS. When using Hetzner Storage Box with CIFS, the Remote Directory is /backup for the main account. For sub accounts, the Remote Directory is /subaccount."
It is probably worth expanding this to hint that there seems to be some unreliable component or interaction in the way the Storage Box is used by Cloudron.
The issue in this thread is related to CIFS with the Hetzner Storage Box.
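In case it helps anyone landing here, this is roughly what an SSHFS mount of a Storage Box looks like (the username, hostname and mount point are example values; check Hetzner's docs for the port and remote path that apply to your account):
```
# Example values only - replace with your own Storage Box credentials.
sudo apt install sshfs
mkdir -p /mnt/storagebox
sshfs -p 23 u123456@u123456.your-storagebox.de:/home /mnt/storagebox \
    -o reconnect,ServerAliveInterval=15
```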
-
tarExtract pipeline error: Invalid tar header
Thanks, I am getting a different error:
An error occurred during the import operation: External Error: tarExtract pipeline error: invalid stored block lengths
Are there any limitations with CIFS? (A quick integrity check I plan to run is sketched below.)
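A simple way to test whether large files survive the CIFS mount intact (the mount point is a placeholder; I am assuming the backup share is mounted somewhere like /mnt/cloudronbackup):
```
# Write a 2 GB random test file, copy it across the CIFS mount,
# and compare checksums on both sides.
dd if=/dev/urandom of=/tmp/cifs-test.bin bs=1M count=2048
cp /tmp/cifs-test.bin /mnt/cloudronbackup/cifs-test.bin
sha256sum /tmp/cifs-test.bin /mnt/cloudronbackup/cifs-test.bin
```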
-
tarExtract pipeline error: Invalid tar header
Thanks. Luckily I discovered that I hadn't deleted the old server yet. I spun it up again and took another backup - the new backup seems to be corrupt again.
Not sure what I am missing -
tarExtract pipeline error: Invalid tar header
Thanks, I was not aware of that.
When I try to decrypt, it doesn't work, although I use the same password as for the other apps: Error: Could not decrypt: Invalid password or tampered file (mac mismatch)
I am confused now, because I am not aware of having used a different password to encrypt this single app (Nextcloud) compared to the other apps on the same Cloudron instance.
Any other ideas?
-
tarExtract pipeline error: Invalid tar header
I downloaded the backed-up file of the Nextcloud app and tried to decrypt it with commands like the following (I also tried other algorithms):
openssl enc -d -aes-128-cbc -in app_nextclouddomain.com_v4.22.5.tar.gz.enc -out file.tar.gz
No luck. Any other ideas on how to restore the Nextcloud app?
If the encryption password of the backup were wrong, the other apps wouldn't have been restored successfully either.
Thoughts:
- Is there any possibility that the encryption passwords differ per app?
- Could a removed volume be an issue? I removed a volume shortly before the backup, but the app should have been consistent at backup time. (Some basic sanity checks I ran are sketched below.)
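For completeness, the basic checks I ran before guessing at openssl parameters (the file names are the ones from above; I am assuming a successful decryption should yield a plain gzipped tar):
```
# What does the blob actually look like? Encrypted data should appear as random bytes.
file app_nextclouddomain.com_v4.22.5.tar.gz.enc

# If a decryption attempt produced file.tar.gz, verify it really is a valid
# gzip/tar before blaming the backup itself.
gzip -t file.tar.gz && tar -tzf file.tar.gz > /dev/null
```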
-
tarExtract pipeline error: Invalid tar header
Thanks for your input!
It didn't restore, in the sense that the Nextcloud app started restoring like all the other Cloudron apps but ended up with the error message:
Error : External Error - tarExtract pipeline error: incorrect header check
The status of the Nextcloud app is "Error".
I tried restoring the whole Cloudron server again, as well as uninstalling the Nextcloud app and importing only the backed-up version. Neither of them worked. It sounds to me as if the password or the decryption algorithm were wrong.
The backed-up file is on the server; I will try to decrypt it manually. The Nextcloud backup is 12 GB in size, and the server has more than 100 GB free.
-
Cloudron should become its own backup provider
@humptydumpty Valid points, but the main focus here is how Cloudron could simplify backups by providing an already-configured solution. Sure, it may require some initial setup, but the idea is that it reduces ongoing manual work and offers peace of mind. If one fails to set up the backup, data is eventually lost; lost DNS entries are less severe. Things like running Restic or leveraging higher support levels could be added advantages if Cloudron manages the infrastructure. Let's keep the discussion about whether that kind of convenience and integration is worth considering, rather than comparing unrelated scenarios.
-
Cloudron should become its own backup provider
The main advantage of using Cloudron's storage over managing backups with a provider like DigitalOcean directly is simplicity and convenience. Cloudron could offer a storage solution that's already configured for you, so there's no need to mess with setting things up yourself. It's essentially a one-click solution: everything's integrated seamlessly within Cloudron.
Yes, you could handle backups directly with DO or other providers, but having Cloudron manage the backup configuration can save time and reduce the potential for errors. You don't have to worry about ensuring all the settings are right: Cloudron takes care of it, and if anything goes wrong, their support has a better idea of the setup. For someone who values ease of use, that could justify the markup.
While Cloudron may not offer more control over issues related to the provider itself, the advantage lies in reducing your manual workload. You’re essentially paying for that added convenience and peace of mind.
-
Cloudron should become its own backup provider
I would consider a backup service offered by Cloudron.
Precisely because of "seamlessly": I am pretty confident I would be the one to hit the single edge case where the backup doesn't work, just because I messed up my config -
tarExtract pipeline error: Invalid tar header
All apps apart from one were successfully restored.
Furthermore, I discovered that the general backup section of Cloudron contains this message:
Task was stopped because the server was restarted or crashed
This could be related to a server reboot; I didn't verify that the backup process had completed before rebooting the server.
But if this backup is corrupt, would I need to restore the complete Cloudron? -
tarExtract pipeline error: Invalid tar header
After trying to restore Cloudron on a new server, I am getting this error while attempting to restore Nextcloud:
[no timestamp] tarExtract pipeline error: Invalid tar header. Maybe the tar is corrupted or it needs to be gunzipped?
It worries me that there is no timestamp.
So far I have tried to:
- reboot the server (multiple times)
- restore older backups (not just the most recent one)
Further logs before the log above:
Oct 11 20:09:04 - [POST] /clear Oct 11 20:12:35 box:backupformat/tgz tarExtract: ./data/appdata_ocofjozos9pc/preview/2/0/9/d/1/9/5/83606/4096-4096-max.png 98592 file to /home/yellowtent/appsdata/39b0914d-1add-4a32-966f-d331fb91f589/data/appdata_ocofjozos9pc/preview/2/0/9/d/1/9/5/83606/4096-4096-max.png [...] Oct 11 20:12:35 box:apptask run: app error for state pending_restore: BoxError: tarExtract pipeline error: Invalid tar header. Maybe the tar is corrupted or it needs to be gunzipped? at tarExtract (/home/yellowtent/box/src/backupformat/tgz.js:225:26) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async /home/yellowtent/box/src/backupformat/tgz.js:248:9 at async promiseRetry (/home/yellowtent/box/src/promise-retry.js:17:20) at async Object.download (/home/yellowtent/box/src/backupformat/tgz.js:244:5) at async download (/home/yellowtent/box/src/backuptask.js:100:5) at async Object.downloadApp (/home/yellowtent/box/src/backuptask.js:133:5) at async install (/home/yellowtent/box/src/apptask.js:368:9) { reason: 'External Error', details: {} } Oct 11 20:12:35 box:tasks setCompleted - 2703: {"result":null,"error":{"stack":"BoxError: tarExtract pipeline error: Invalid tar header. Maybe the tar is corrupted or it needs to be gunzipped?\n at tarExtract (/home/yellowtent/box/src/backupformat/tgz.js:225:26)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async /home/yellowtent/box/src/backupformat/tgz.js:248:9\n at async promiseRetry (/home/yellowtent/box/src/promise-retry.js:17:20)\n at async Object.download (/home/yellowtent/box/src/backupformat/tgz.js:244:5)\n at async download (/home/yellowtent/box/src/backuptask.js:100:5)\n at async Object.downloadApp (/home/yellowtent/box/src/backuptask.js:133:5)\n at async install (/home/yellowtent/box/src/apptask.js:368:9)","name":"BoxError","reason":"External Error","details":{},"message":"tarExtract pipeline error: Invalid tar header. Maybe the tar is corrupted or it needs to be gunzipped?"}} Oct 11 20:12:35 box:tasks update 2703: {"percent":100,"result":null,"error":{"stack":"BoxError: tarExtract pipeline error: Invalid tar header. Maybe the tar is corrupted or it needs to be gunzipped?\n at tarExtract (/home/yellowtent/box/src/backupformat/tgz.js:225:26)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async /home/yellowtent/box/src/backupformat/tgz.js:248:9\n at async promiseRetry (/home/yellowtent/box/src/promise-retry.js:17:20)\n at async Object.download (/home/yellowtent/box/src/backupformat/tgz.js:244:5)\n at async download (/home/yellowtent/box/src/backuptask.js:100:5)\n at async Object.downloadApp (/home/yellowtent/box/src/backuptask.js:133:5)\n at async install (/home/yellowtent/box/src/apptask.js:368:9)","name":"BoxError","reason":"External Error","details":{},"message":"tarExtract pipeline error: Invalid tar header. 
Maybe the tar is corrupted or it needs to be gunzipped?"}} Oct 11 20:12:35 box:taskworker Task took 241.42 seconds [no timestamp] 2024-10-11T17:53:16Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1010 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 12:C 11 Oct 2024 17:53:16.518 * Configuration loaded [no timestamp] 2024-10-11T17:53:16Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1010 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 12:C 11 Oct 2024 17:53:16.518 * Redis version=7.2.1, bits=64, commit=00000000, modified=0, pid=12, just started [no timestamp] 2024-10-11T17:53:16Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1010 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 12:M 11 Oct 2024 17:53:16.565 * RDB memory usage when created 0.90 Mb [no timestamp] 2024-10-11T17:53:16Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1010 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 12:M 11 Oct 2024 17:53:16.566 * Done loading RDB, keys loaded: 0, keys expired: 0. [no timestamp] 2024-10-11T17:53:16Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1010 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 12:M 11 Oct 2024 17:53:16.566 * Ready to accept connections tcp [no timestamp] 2024-10-11T17:53:18Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1010 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 2024-10-11 17:53:18,146 INFO success: redis entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) [no timestamp] 2024-10-11T17:55:07Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1010 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 12:M 11 Oct 2024 17:55:07.306 * Saving the final RDB snapshot before exiting. [no timestamp] 2024-10-11T17:55:07Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1010 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 12:M 11 Oct 2024 17:55:07.308 * DB saved on disk [no timestamp] 2024-10-11T17:55:07Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1010 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 12:M 11 Oct 2024 17:55:07.308 * Removing the pid file. [no timestamp] 2024-10-11T17:55:07Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1010 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 12:M 11 Oct 2024 17:55:07.309 # Redis is now ready to exit, bye bye... [no timestamp] 2024-10-11T17:55:07Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1010 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 12:signal-handler (1728669307) Received SIGTERM scheduling shutdown... 
[no timestamp] 2024-10-11T17:55:07Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1010 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 2024-10-11 17:55:07,274 INFO waiting for redis, redis-service to die [no timestamp] 2024-10-11T17:58:47Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 2024-10-11 17:58:47,457 INFO Included extra file "/etc/supervisor/conf.d/redis-service.conf" during parsing [no timestamp] 2024-10-11T17:58:47Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 2024-10-11 17:58:47,457 INFO Included extra file "/etc/supervisor/conf.d/redis.conf" during parsing [no timestamp] 2024-10-11T17:58:47Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 2024-10-11 17:58:47,482 CRIT Server 'unix_http_server' running without any HTTP authentication checking [no timestamp] 2024-10-11T17:58:47Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 2024-10-11 17:58:47,483 INFO supervisord started with pid 1 [no timestamp] 2024-10-11T17:58:48Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:C 11 Oct 2024 17:58:48.694 * Configuration loaded [no timestamp] 2024-10-11T17:58:48Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:C 11 Oct 2024 17:58:48.694 * Redis version=7.2.1, bits=64, commit=00000000, modified=0, pid=13, just started [no timestamp] 2024-10-11T17:58:48Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 17:58:48.778 * RDB age 221 seconds [no timestamp] 2024-10-11T17:58:48Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 17:58:48.778 * RDB memory usage when created 0.83 Mb [no timestamp] 2024-10-11T17:58:48Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 17:58:48.779 * DB loaded from disk: 0.002 seconds [no timestamp] 2024-10-11T17:58:48Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 17:58:48.779 * Done loading RDB, keys loaded: 0, keys expired: 0. [no timestamp] 2024-10-11T17:58:48Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 17:58:48.779 * Ready to accept connections tcp [no timestamp] 2024-10-11T17:58:50Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 2024-10-11 17:58:50,463 INFO success: redis-service entered RUNNING state, process has stayed up for > than 1 seconds (startsecs) [no timestamp] 2024-10-11T18:07:05Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 18:07:05.456 * Saving the final RDB snapshot before exiting. [no timestamp] 2024-10-11T18:07:05Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 18:07:05.458 * Removing the pid file. [no timestamp] 2024-10-11T18:07:05Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 18:07:05.459 # Redis is now ready to exit, bye bye... 
[no timestamp] 2024-10-11T18:07:05Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:signal-handler (1728670025) Received SIGTERM scheduling shutdown... [no timestamp] 2024-10-11T18:07:05Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1009 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 2024-10-11 18:07:05,461 INFO stopped: redis (exit status 0) [no timestamp] 2024-10-11T18:07:33Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1011 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 2024-10-11 18:07:33,814 INFO Included extra file "/etc/supervisor/conf.d/redis.conf" during parsing [no timestamp] 2024-10-11T18:07:33Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1011 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 2024-10-11 18:07:33,832 INFO RPC interface 'supervisor' initialized [no timestamp] 2024-10-11T18:07:33Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1011 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 2024-10-11 18:07:33,833 CRIT Server 'unix_http_server' running without any HTTP authentication checking [no timestamp] 2024-10-11T18:07:33Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1011 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 2024-10-11 18:07:33,833 INFO supervisord started with pid 1 [no timestamp] 2024-10-11T18:07:34Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1011 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:C 11 Oct 2024 18:07:34.906 * Configuration loaded [no timestamp] 2024-10-11T18:07:34Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1011 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 18:07:34.909 * monotonic clock: POSIX clock_gettime [no timestamp] 2024-10-11T18:07:34Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1011 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 18:07:34.921 # Failed to write PID file: Permission denied [no timestamp] 2024-10-11T18:07:34Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1011 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 18:07:34.921 * Running mode=standalone, port=6379. [no timestamp] 2024-10-11T18:07:34Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1011 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 18:07:34.922 * Server initialized [no timestamp] 2024-10-11T18:07:34Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1011 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 18:07:34.927 * DB loaded from disk: 0.006 seconds [no timestamp] 2024-10-11T18:07:34Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1011 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 18:07:34.927 * Loading RDB produced by version 7.2.1 [no timestamp] 2024-10-11T18:07:34Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1011 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 18:07:34.927 * RDB age 29 seconds [no timestamp] 2024-10-11T18:07:34Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1011 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 18:07:34.927 * RDB memory usage when created 0.90 Mb [no timestamp] 2024-10-11T18:07:34Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1011 redis-39b0914d-1add-4a32-966f-d331fb91f589 - 13:M 11 Oct 2024 18:07:34.927 * Ready to accept connections tcp [no timestamp] 2024-10-11T18:07:35Z username redis-39b0914d-1add-4a32-966f-d331fb91f589 1011 redis-39b0914d-1add-4a32
-
saving you the homework: recommended windows & iOS apps for Navidrome
Thanks, really helpful!
Do you also have a recommendation for updating song metadata (artist, etc.) automatically?
-
CPU Usage Graph with 400%
In my Cloudron instance, the CPU Usage graph is scaled up to 400%, although the actual value hovers around 20%.
Is there a way to configure it, e.g. with a max of 100%?
-
Docker Error 500 - Unable to pull image on same instance
The error doesn't appear anymore. I am not sure why (Cloudron was updated to v8 in the meantime).
The remaining errors are "typical image build errors". Thanks for your support!
-
Docker Error 500 - Unable to pull image on same instance
Thank you!
Basically, pushing an image to the registry with docker push works, also on the server (see below). But I seem to have changed something, and when deploying the app with the cloudron npm command, I get an error about an invalid manifest. I will have to dig into it (I have all the required parameters); probably my project structure is wrong:
App installation error: Installation failed: Unable to pull image dockerhub.domain.com/myuser/myapp. message: (HTTP code 404) unexpected - manifest for dockerhub.domain.com/myuser/myapp:latest not found: manifest unknown: manifest unknown statusCode: 404
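For my own debugging notes, this is roughly what I plan to check next (the registry URL and image name are the placeholders from above; I am assuming the registry exposes the standard Docker Registry v2 API):
```
# Which tags were actually pushed under this name? The 404 suggests
# the :latest manifest was never pushed to this repository.
curl -u myuser https://dockerhub.domain.com/v2/myuser/myapp/tags/list

# Push an explicit tag and reference that same tag when installing,
# instead of relying on :latest.
docker tag myapp dockerhub.domain.com/myuser/myapp:0.1.0
docker push dockerhub.domain.com/myuser/myapp:0.1.0
```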
Logs on server for docker push:
2024-08-11T13:35:40.000Z time="2024-08-11T13:35:40.047968853Z" level=info msg="response completed" go.version=go1.20.8 http.request.host="localhost:5000" http.request.id=ea838230-bda2-414b-aa83-599c00a48956 http.request.method=GET http.request.remoteaddr="[::1]:59814" http.request.uri="/v2" http.request.useragent="Mozilla (CloudronHealth)" http.response.contenttype="text/html; charset=utf-8" http.response.duration="104.432µs" http.response.status=301 http.response.written=39
2. Below is the response for curl -v https://dockerhub.domain.com/v2/_catalog:
[user]$ curl -v https://dockerhub.domain.com/v2/_catalog
- Host dockerhub.domain.com:443 was resolved.
- IPv6: (none)
- IPv4: xxx.xxx.xxx.xxx
- Trying xxx.xxx.xxx.xxx:443...
- Connected to dockerhub.domain.com (xxx.xxx.xxx.xxx) port 443
- ALPN: curl offers h2,http/1.1
- TLSv1.3 (OUT), TLS handshake, Client hello (1):
- CAfile: /etc/pki/tls/certs/ca-bundle.crt
- CApath: none
- TLSv1.3 (IN), TLS handshake, Server hello (2):
- TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
- TLSv1.3 (IN), TLS handshake, Certificate (11):
- TLSv1.3 (IN), TLS handshake, CERT verify (15):
- TLSv1.3 (IN), TLS handshake, Finished (20):
- TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
- TLSv1.3 (OUT), TLS handshake, Finished (20):
- SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / x25519 / id-ecPublicKey
- ALPN: server accepted h2
- Server certificate:
- subject: CN=*.domain.com
- start date: 2024 GMT
- expire date: 2024 GMT
- subjectAltName: host "dockerhub.domain.com" matched cert's "*.domain.com"
- issuer: C=US; O=Let's Encrypt; CN=E5
- SSL certificate verify ok.
- Certificate level 0: Public key type EC/secp384r1 (384/192 Bits/secBits), signed using ecdsa-with-SHA384
- Certificate level 1: Public key type EC/secp384r1 (384/192 Bits/secBits), signed using sha256WithRSAEncryption
- Certificate level 2: Public key type RSA (4096/152 Bits/secBits), signed using sha256WithRSAEncryption
- using HTTP/2
- [HTTP/2] [1] OPENED stream for https://dockerhub.domain.com/v2/_catalog
- [HTTP/2] [1] [:method: GET]
- [HTTP/2] [1] [:scheme: https]
- [HTTP/2] [1] [:authority: dockerhub.domain.com]
- [HTTP/2] [1] [:path: /v2/_catalog]
- [HTTP/2] [1] [user-agent: curl/8.6.0]
- [HTTP/2] [1] [accept: */*]
GET /v2/_catalog HTTP/2
Host: dockerhub.domain.com
User-Agent: curl/8.6.0
Accept: */*
- TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
- TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
- old SSL session ID is stale, removing
< HTTP/2 302
< server: nginx
< date: Sun, 11 Aug 2024 13:19:02 GMT
< content-type: text/html
< content-length: 138
< location: https://dockerhub.domain.com/login?redirect=/v2/_catalog
< strict-transport-security: max-age=63072000
< x-xss-protection: 1; mode=block
< x-download-options: noopen
< x-content-type-options: nosniff
< x-permitted-cross-domain-policies: none
< referrer-policy: same-origin
< cache-control: no-cache
< set-cookie: authToken=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT
<
<html>
<head><title>302 Found</title></head>
<body>
<center><h1>302 Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
- Connection #0 to host dockerhub.domain.com left intact
-
Nextcloud seems to be stuck in restart loop after upgrading to Cloudron v8.0.3
I was able to solve the issue by stopping the Nextcloud app and then rebooting the Cloudron server. After the server came up again, I waited for a grace period and then started the Nextcloud app again.
It should be mentioned that the reboot of the server took exceptionally long, more than 45 minutes; I didn't investigate the root cause.
This topic can be marked as solved.