Contabo installation error
-
@girish just got the same issue on a clean new Contabo VPS with Ubuntu 22.
=> Cloudron version 7.3.6 ... it appears to be related to node v16.18.1.
This is the end of the cloudron-setup log file:
```
Setting up docker-ce-cli (5:20.10.21~3-0~ubuntu-focal) ...
Setting up pigz (2.4-1) ...
Setting up git-man (1:2.25.1-1ubuntu3.10) ...
Setting up docker-ce (5:20.10.21~3-0~ubuntu-focal) ...
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.
Setting up git (1:2.25.1-1ubuntu3.10) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for systemd (245.4-4ubuntu3.19) ...
2023-02-23T11:37:39 ==> installer: switching nginx to ubuntu package
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Reading package lists...
Building dependency tree...
Reading state information...
Package 'nginx' is not installed, so not removed
Use 'apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Reading package lists...
Building dependency tree...
Reading state information...
nginx-full is already the newest version (1.18.0-0ubuntu1.4).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2023-02-23T11:37:42 ==> installer: installing/updating node 16.18.1
node-v16.18.1-linux-x64/bin/
node-v16.18.1-linux-x64/bin/node
gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
```
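The `gzip: stdin: unexpected end of file` and `tar: Unexpected EOF in archive` lines mean the node tarball came down truncated: the stream ended mid-archive while the installer was unpacking it. One way to check whether the VPS can pull the archive intact is to download it by hand and verify it against the published checksums. A minimal sketch, assuming the standard nodejs.org/dist layout (the setup script itself may fetch from a different location or mirror):

```bash
#!/bin/bash
# Sketch: manually fetch the node tarball the installer was unpacking and
# verify it against the checksums published on nodejs.org. The URL layout
# below is the standard nodejs.org/dist one; the setup script may download
# from elsewhere.
set -euo pipefail

VERSION=16.18.1
TARBALL="node-v${VERSION}-linux-x64.tar.gz"

# Fetch the tarball and the release's SHA256 checksum list
curl -fLO "https://nodejs.org/dist/v${VERSION}/${TARBALL}"
curl -fLO "https://nodejs.org/dist/v${VERSION}/SHASUMS256.txt"

# Verify just the entry for this tarball; a truncated download fails here
grep " ${TARBALL}\$" SHASUMS256.txt | sha256sum -c -

# Belt and braces: confirm the archive itself unpacks cleanly
tar -tzf "${TARBALL}" > /dev/null && echo "archive OK"
```

If this verifies reliably on the affected VPS, the failed install was most likely a transient download hiccup; if it keeps failing or truncating, the host's network path is suspect, which would fit the pattern reported below.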
-
For the record, I had the exact same issue this morning on a new Cloudron install attempt with an SSDNodes VPS running a fresh Ubuntu 22.
The change offered by @benborges worked and allowed me to complete the Cloudron installation.
While two different hosts hitting the same issue might still be a coincidence, I think it is worth mentioning, since it makes a one-off network glitch look less likely.
Hopefully this makes sense. I am not planning on re-deploying things just yet, so I would not be in a position to test for recurrence. With this in mind, I am happy to help in any way you see relevant.