The original issue is that the zone was added in Cloudflare, but the domain's nameservers were never pointed at Cloudflare. In that state the Cloudflare API returns an empty name_servers array in the zone response, which made our code crash.
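As a minimal sketch of the defensive check that was missing (the NAME_SERVERS variable here stands in for the parsed name_servers field of the API response; it is a hypothetical placeholder, not Cloudron's actual code):

```shell
# Hypothetical: NAME_SERVERS holds the comma-joined name_servers field
# from the Cloudflare zone response. It is empty until the domain's NS
# records are actually switched to Cloudflare.
NAME_SERVERS=""

# Guard before using the value instead of assuming it is populated
if [ -z "$NAME_SERVERS" ]; then
  echo "zone exists but nameservers not yet pointed at Cloudflare"
else
  echo "assigned nameservers: $NAME_SERVERS"
fi
```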
No, you have to migrate using a technique like https://docs.cloudron.io/guides/migrate-wordpress/
Note that depending on your developer setup, you may not be able to migrate at all, especially if you have edited core files or used plugins that modify files and change code.
You could experiment with rclone:
Create a config for your desired remote, for example Google Drive.
Mount it via systemd and then point your backups at that location.
For example:
[Unit]
Description=rclone Service Google Drive Mount
Wants=network-online.target
After=network-online.target
[Service]
Type=notify
Environment=RCLONE_CONFIG=/root/.config/rclone/rclone.conf
RestartSec=5
ExecStart=/usr/bin/rclone mount google:cloudron /mnt/google \
# Allow users other than the one running rclone to access the mount
--allow-other \
# Google Drive is a polling remote, so this value can be set very high; changes are detected via polling
--dir-cache-time 9999h \
# Log file location
--log-file /root/.config/rclone/logs/rclone-google.log \
# Set the log level
--log-level INFO \
# Set file permissions on the mount so group has the same access as user and others can read
--umask 002 \
# This sets up the remote control daemon so you can issue rc commands locally
--rc \
# This is the default port it runs on
--rc-addr 127.0.0.1:5574 \
# no-auth is used as no one else uses my server
--rc-no-auth \
# The local disk used for caching
--cache-dir=/cache/google \
# This is used for caching files to local disk for streaming
--vfs-cache-mode full \
# This limits the cache size to the value below
--vfs-cache-max-size 50G \
# Speed up the reading: Use fast (less accurate) fingerprints for change detection
--vfs-fast-fingerprint \
# Wait before uploading
--vfs-write-back 1m \
# Maximum age of objects in the cache; oldest files are evicted first when the size limit is reached
--vfs-cache-max-age 9999h \
# Disable HTTP2
#--disable-http2 \
# Set the tpslimit
--tpslimit 12 \
# Set the tpslimit-burst
--tpslimit-burst 0
ExecStop=/bin/fusermount3 -uz /mnt/google
ExecStartPost=/usr/bin/rclone rc vfs/refresh recursive=true --url 127.0.0.1:5574 _async=true
Restart=on-failure
User=root
Group=root
[Install]
WantedBy=multi-user.target
# https://github.com/animosity22/homescripts/blob/master/systemd/rclone-drive.service
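The unit above assumes the remote, mount point, and cache directory already exist. A hedged sketch of the one-time setup (remote name google and paths chosen to match the unit; the service filename rclone-google.service is an assumption, so adjust to your environment). This is an environment-dependent setup fragment, not something to run verbatim:

```shell
# Create the rclone remote interactively; the name must match "google:" in the unit
rclone config        # choose: n) new remote, name "google", storage "drive"

# Create the mount point, VFS cache directory, and log directory used by the unit
mkdir -p /mnt/google /cache/google /root/.config/rclone/logs

# Install and start the unit (assuming it was saved as rclone-google.service)
cp rclone-google.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now rclone-google.service
```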
@eddowding said in Updating Cloudron to a stable version:
I was migrating servers when I noticed that I'm on 7.2.5, but the latest is 7.3.2.
When moving to the new server, you can install the matching release with ./cloudron-setup --version 7.2.5 . 7.3 came out last week and we are still ironing out some regressions; we expect to start rolling it out next week.
Mostly, you can build your own app package for this. You could look at the various tech-stack examples referenced at https://docs.cloudron.io/packaging/cheat-sheet/#examples
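As a rough sketch, a Cloudron package is essentially a Dockerfile plus a CloudronManifest.json; the field names below follow the packaging docs, but every value here is an illustrative placeholder, not from this thread:

```json
{
  "id": "com.example.myapp",
  "title": "My App",
  "author": "Example <hello@example.com>",
  "description": "Hypothetical example package",
  "version": "0.1.0",
  "manifestVersion": 2,
  "healthCheckPath": "/",
  "httpPort": 8000,
  "addons": { "localstorage": {} }
}
```

This sits alongside a Dockerfile (typically based on the cloudron/base image); you then build and install it on your instance with the Cloudron CLI's cloudron build and cloudron install commands.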