Database connections using SSH workaround
-
Hello,
I have two cloudrons. I want to hook up nocodb on one to the postgres on the other.
However, it appears that nocodb does not support database connections over ssh.
Given that apps cannot "see" the host network of the machine, I can't point it at a port opened by an ssh tunnel on the server either.
Any ideas on how to achieve this?
I do have a VPN set up between the two cloudrons -- if there's a simple way of bridging the docker and vpn networks, that would suffice. (Admittedly, this would be unfamiliar territory for me.)
Thanks
-
I have another issue that's possibly related. I've used nocodb to connect to the local postgres. When I try to edit a table, I get the toast notification: "Cannot read properties of undefined (reading 'column_name')"
I'm unable to write any row data to the database table, though I can add new columns. EDIT: Scratch this; I figured out that including a primary key on the table avoids this error.
-
I managed to find this post that sounds like what I'm trying to do, but I'm not able to get it to work. I get back the following error:
(psycopg2.OperationalError) connection to server at "x.x.x.x", port 5555 failed: FATAL: no pg_hba.conf entry for host "172.18.0.1", user "my_user", database "my_db", no encryption (Background on this error at: https://sqlalche.me/e/14/e3q8)
x.x.x.x: either the origin IP or the VPN IP; both return the same error. Using the FQDN doesn't work either.
Port 5555: the external port I've coupled to the internal postgres server.
user my_user and database my_db: I created a custom user and database within the target cloudron's postgres server (using root credentials).
I found two pg_hba.conf files. In the first one, under /var, I added an entry to temporarily whitelist all IPs for the user, to no avail. The second, under /etc, is not writable.
Is the second one causing the issue? I don't think it's possible to edit it, as it's a mounted file.
-
Just had an epiphany...
I had to create several roles to get the custom user and db working, in order to circumvent the cloudron restrictions in place on the database:
# samerole specifies that the requested user must be a member of the
# role with the same name as the requested database
host    samerole    all    0.0.0.0/0    md5
In light of this, I've just tried logging in as root (after granting said roles), and it works. So the current issue is the cloudron database security, not the bridge; that's progress.
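For reference, the role setup described above can be sketched with psql. The names my_user and my_db are the placeholders from the earlier error message, and the exact statements are an assumption on my part, not a record of what was actually run:

```shell
# Sketch only: make my_user a member of a role named after the database,
# so the samerole rule in pg_hba.conf matches. Run as the superuser.
psql -U postgres -c "CREATE ROLE my_db NOLOGIN;"
psql -U postgres -c "CREATE ROLE my_user LOGIN PASSWORD 'changeme';"
psql -U postgres -c "GRANT my_db TO my_user;"
psql -U postgres -c "SELECT pg_reload_conf();"  # pick up pg_hba.conf edits
```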
- Does this post now belong in another topic?
- How to deal with the database user security stuff? I don't want to be using root.
-
You shouldn't use the root database user.
I have two cloudrons. I want to hook up nocodb on one to the postgres on the other.
However, it appears that nocodb does not support database connections over ssh.
Since this gets asked often, I've now written a guide on SSH tunnels - https://docs.cloudron.io/guides/ssh-tunnel/
For example, you can set up a local forwarding tunnel from server A to server B with something like:
/usr/bin/ssh -NT -L 172.18.0.1:4000:172.18.30.2:5432 root@server_b_ip
If you run the above on server A, then nocodb on server A can connect to server B's postgresql at port 4000 on IP 172.18.0.1. 172.18.0.1 is the internal docker bridge IP, so it's quite secure.
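A plain ssh tunnel dies when the connection drops. One way to keep it up (an assumption on my part, not something from the guide) is autossh:

```shell
# Keep the forwarding tunnel alive across network drops. Assumes autossh
# is installed on server A and key-based root login to server B works.
autossh -M 0 -f -NT \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -L 172.18.0.1:4000:172.18.30.2:5432 root@server_b_ip
```

With -M 0, autossh relies on the ServerAliveInterval keepalives to detect a dead connection and restart the tunnel.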
You can test the connection on Server A like:
PGPASSWORD=65bd069b8997ed746665a2c84b2de9454d2976dab2bbfe811e34f193e8b7ce9c871b26f2644e15467f1aa4e963eafa0d740c65ee20b98f409acc22595f773bca psql -h 172.18.0.1 -p 4000 -U user8ad81c1740b9483391160eb14440a4f9 -d db8ad81c1740b9483391160eb14440a4f9
(The values come from nocodb on server B. Run
env | grep POSTGRESQL
in the web terminal of the app.)
-
I restarted a couple of times and retried my setup with my custom postgres user, and it's working now. But I'm not sure it will hold; it seemed to flip-flop between working and not working due to a security error. I'm using Superset and Nocodb to test this out.
To re-clarify the setup:
1. Use this method to expose an external port that connects to cloudron B's postgres. E.g.: 5555.
2. Use socat to couple cloudron B's postgres to the external port, as described here.
3. Log into postgres and create a new user that, via custom roles, has access to all the necessary databases (I don't know if this is recommended, so I'll omit the details of what I did here). E.g.: my_user has access to db_1, db_2, db_3.
4. Go into the app and log in with the cloudron B IP + external port + the custom db user, etc.
I will continue to test this out and post back if the errors come back again.
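For step 2, the socat coupling looks roughly like this; the internal postgres IP and port here are assumptions (check them on cloudron B), and 5555 is the externally whitelisted port from step 1:

```shell
# On cloudron B: forward the whitelisted external port 5555 to the
# internal postgres server. fork handles multiple client connections;
# reuseaddr lets the listener restart without waiting on TIME_WAIT.
socat TCP-LISTEN:5555,fork,reuseaddr TCP:172.18.30.2:5432
```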
@girish I had tried the SSH tunnel method the other day, with no success, but it appears I should have been using the docker interface instead, as in your example. Is either method meaningfully preferred over the other?
-
I guess socat is fine too if it works for your use case. Instead of creating a new postgres user, you can just use the database user which is created for nocodb. They are in the env vars of the apps - https://docs.cloudron.io/packaging/addons/#postgresql . Note that when the postgresql database upgrades, your user will not be preserved, since it is not part of any backup and cloudron has no idea about this user either (so it's best not to modify postgres).
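Those addon env vars can be assembled into a single connection URI for testing. The CLOUDRON_POSTGRESQL_* names follow the addon documentation linked above; double-check them in the app's web terminal:

```shell
# Build a postgres:// URI from the Cloudron postgresql addon env vars.
PG_URI="postgres://${CLOUDRON_POSTGRESQL_USERNAME}:${CLOUDRON_POSTGRESQL_PASSWORD}@${CLOUDRON_POSTGRESQL_HOST}:${CLOUDRON_POSTGRESQL_PORT}/${CLOUDRON_POSTGRESQL_DATABASE}"
echo "$PG_URI"
```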
-
@girish said in Database connections using SSH workaround:
I guess socat is fine too if it works for your use case. Instead of creating a new postgres user, you can just use the database user which is created for nocodb. They are in the env vars of the apps - https://docs.cloudron.io/packaging/addons/#postgresql . Note that when the postgresql database upgrades, your user will not be preserved, since it is not part of any backup and cloudron has no idea about this user either (so it's best not to modify postgres).
Re database upgrades, are additional databases I create also excluded from backups?
-
@cdrm yes, additional databases are also excluded from backups.