My attempts failed, unfortunately. However, I am happy to use GitLab's export groups and projects feature and rebuild our new GitLab server that way. We are fine with this process, but Cloudron seems to be great for starting new services from a fresh state or for transferring existing projects with the same setup (using the same database, for example). I have some questions regarding that, but I'll open a new thread for them. Thank you all for your suggestions.
Best posts made by paul.toone
-
RE: gitlab-ee question
-
RE: Custom Apps
Just an update: I was able to get NextCloud converted over. Dumping the existing MySQL database with PostgreSQL compatibility did not work. I'm next going to test out user mapping via Cloudron LDAP, and once that is complete I'll update all my notes into a more readable format. But the process was essentially this:
-
Have a fresh NextCloud App running on Cloudron and immediately enter recovery mode
-
Bring up a temporary machine running PostgreSQL
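If the temporary machine is Ubuntu/Debian (which the apt commands in the next step assume), installing PostgreSQL is roughly:
- sudo apt install postgresql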
-
Build pgloader on the temporary PostgreSQL Server
- sudo apt install sbcl unzip libsqlite3-dev gawk curl make freetds-dev libzip-dev
- curl -fsSLO https://github.com/dimitri/pgloader/archive/v3.6.2.tar.gz
- tar xvf v3.6.2.tar.gz
- cd pgloader-3.6.2/
- make pgloader
- sudo mv ./build/bin/pgloader /usr/local/bin/
-
Create a postgres user
- sudo -u postgres createuser --interactive -P
- <NewPostgresUser> (username)
- Set the password for this role <NewPostgresPassword>
- Make the user a super user
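If the superuser question gets missed during the interactive prompt, the role can also be promoted afterwards (a sketch using the same placeholder):
- sudo -u postgres psql -c "ALTER USER <NewPostgresUser> WITH SUPERUSER;"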
-
Create an empty postgres DB
- sudo -u postgres createdb <emptyDBName>
-
Create a user on the MySQL DB and give it all privileges on the NextCloud DB
- mysql> CREATE USER '<someMySQLUserName>'@'<postgresServerIP>' IDENTIFIED BY '<SomeMySQLUserPassword>';
- mysql> GRANT ALL ON <nextcloudDB>.* TO '<someMySQLUserName>'@'<postgresServerIP>';
- mysql> FLUSH PRIVILEGES;
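Before running pgloader, it's worth confirming the new user can actually reach the database from the PostgreSQL machine (placeholders as above; this assumes MySQL is listening on the network):
- mysql -h <mysqlServerIP> -u <someMySQLUserName> -p <nextcloudDB>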
-
From the PostgreSQL server, run pgloader
- pgloader mysql://<someMySQLUserName>:<SomeMySQLUserPassword>@mysql_server_ip/<nextcloudDB> postgresql://<NewPostgresUser>:<NewPostgresPassword>@localhost/<emptyDBName>
-
Dump the PostgreSQL Database
- pg_dump <emptyDBName> > pgdump.sql
-
Get CLOUDRON_POSTGRESQL_USERNAME from the app container running the NextCloud app.
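For example, from the app's web terminal (these are the same variables used in the import commands further down):
- env | grep CLOUDRON_POSTGRESQL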
-
sed the .sql file and replace the schema with public
-
sed the .sql file and replace the <table owners> with CLOUDRON_POSTGRESQL_USERNAME
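A rough sketch of those two sed passes, assuming pgloader created a schema named after the MySQL database and that <NewPostgresUser> ended up as the table owner; <cloudronPostgresUsername> stands for the literal value of CLOUDRON_POSTGRESQL_USERNAME grabbed in the previous step:
- sed -i 's/<nextcloudDB>\./public./g' pgdump.sql
- sed -i 's/OWNER TO <NewPostgresUser>/OWNER TO <cloudronPostgresUsername>/g' pgdump.sql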
-
Open a web terminal to the NextCloud App that is in recovery mode and upload the .sql file via the Upload to /tmp button
-
Run the following commands grabbed from https://docs.cloudron.io/guides/import-postgresql/
- sed -e 's/CREATE EXTENSION/-- CREATE EXTENSION/g' -e 's/COMMENT ON EXTENSION/-- COMMENT ON EXTENSION/g' /tmp/pgdump.sql > /tmp/pgdump_mod.sql
- PGPASSWORD=${CLOUDRON_POSTGRESQL_PASSWORD} psql -h ${CLOUDRON_POSTGRESQL_HOST} -p ${CLOUDRON_POSTGRESQL_PORT} -U ${CLOUDRON_POSTGRESQL_USERNAME} -d ${CLOUDRON_POSTGRESQL_DATABASE} -c "DROP SCHEMA public CASCADE; CREATE SCHEMA public"
- PGPASSWORD=${CLOUDRON_POSTGRESQL_PASSWORD} psql -h ${CLOUDRON_POSTGRESQL_HOST} -p ${CLOUDRON_POSTGRESQL_PORT} -U ${CLOUDRON_POSTGRESQL_USERNAME} -d ${CLOUDRON_POSTGRESQL_DATABASE} --set ON_ERROR_STOP=on --file=/tmp/pgdump_mod.sql
-
Move just the data from the old NextCloud server into the data directory of the NextCloud App on Cloudron. This is found in the app's config page | Storage Section; it should be something like:
/home/yellowtent/appsdata/<containerHash>/data
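A rough sketch of that copy, assuming SSH access to the Cloudron host and that <oldNextcloudDataDir> and <containerHash> are filled in (paths and flags are illustrative only):
- rsync -a <oldNextcloudDataDir>/ root@<cloudronHost>:/home/yellowtent/appsdata/<containerHash>/data/
-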
Bring the container out of recovery and boot up
Again, I'll update these notes if/when I try again and get Cloudron's LDAP to map to the users in the database, but for now, maybe these steps will help someone looking to convert an existing installation over to a Cloudron app.
-
RE: Custom Apps
@brutalbirdie Fantastic advice. I'll give it a shot and see how it goes. Thanks for your help.
-
RE: Custom Apps
@brutalbirdie Yeah, I'm concerned about this as well. I'll definitely keep notes on the progress.
-
RE: Custom Apps
One other note: the .sql file originally failed to import due to the dump having the following line:
CREATE SCHEMA public;
I commented that out and the import ran fine. I'm guessing there is a command to dump the Postgres DB without a CREATE statement, but someone smarter than me probably knows it. I haven't used Postgres a day in my life until yesterday, so I'm still getting to know it. I have only used MySQL, MariaDB, and MS SQL, and all of those have an option to dump without the CREATE statement, but I figured it was easier to comment that line out of the SQL file than to look up the command and do a new dump.
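For anyone scripting this, the same workaround can be done the way the Cloudron import guide handles extensions, e.g. (adjust the path to wherever the dump lives):
- sed -i 's/^CREATE SCHEMA public;/-- CREATE SCHEMA public;/' /tmp/pgdump.sql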
Latest posts made by paul.toone
-
RE: Manifest Environment Variable
@girish Thank you for those commands. I think I'll just use cloudron push after I run cloudron install on my image server.
Also, not sure if there is a specific way to go about this, but I could strip this down a bit to have a base install for the app if it's of use to the Cloudron community. I'm sure it would have to be polished by the devs, but it is an install for TimeTrex CE, which is timeclock software that our company uses.
-
RE: Manifest Environment Variable
@girish Right, but this container is a migration with an existing salt. It's alright, I can just manually put the salt in, as I have to restore the pgsql each time I deploy to test anyway. I appreciate the response though.
-
RE: Manifest Environment Variable
@mehdi I was hoping it could be done on deployment. Currently, I just have to go in from the host and edit the salt in the local storage add-on I'm using.
But, I know Cloudron is mostly for new apps, not migrating existing apps over, so I figured it would not be possible in the manifest file.
-
RE: Manifest Environment Variable
@brutalbirdie I'm moving an app that is already in production, so my salt doesn't change, but I don't want it out in public. For every new install, yes, the salt is unique.
-
RE: Manifest Environment Variable
@brutalbirdie Close, it's the app's salt password.
-
Manifest Environment Variable
I was looking on the forum to see if there was a way to add a custom environment variable via the manifest file. I have a container for a custom app that has one component that needs to be kept private. I am posting the image publicly on Docker Hub and would generally put this variable in via the Dockerfile, but since it's public, I'd like to keep this one component secret.
So, is there a way to set a custom environment variable from the manifest file?
-
RE: gitlab-ee question
@girish You are correct, it would be difficult no matter what. Moving from omnibus to source is a difficult task. It pretty much boiled down to the secrets file not being converted correctly. The other issue was uploading a tar.gz backup to the container. I waited over 2 hours to upload the file, and at 99% uploaded it just said that the upload failed with no other information on why. My upload speed is 50 Mb/s, and I know this is more on DO's side, but I'm still puzzled as to why the upload failed.
Either way, we're headed in the right direction and I'm excited with the possibilities.