Ubuntu 20.04 "landscape" user account running mysqld
-
Okay, so I ran another test and I think this makes more sense now and sort of validates what you mentioned above.
I spun up a new VPS on Vultr on Ubuntu 20.04 (not the Cloudron marketplace app version), and before doing anything else I immediately uninstalled landscape, which freed up UID 107. Then I installed Cloudron, and of course Cloudron created the mysql user with UID 107, which can be verified in the /etc/passwd file too.
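For reference, this is roughly what I ran on the fresh VPS before installing Cloudron (the package is landscape-common on the stock image; exact steps may differ slightly on other providers):

# Remove the landscape client and its system account so UID 107 is no longer taken
sudo apt-get purge -y landscape-common
sudo deluser --system landscape
# Should print nothing once the account is gone
getent passwd 107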
Here's my current server:
tcpdump:x:105:111::/nonexistent:/usr/sbin/nologin
sshd:x:106:65534::/run/sshd:/usr/sbin/nologin
pollinate:x:108:1::/var/cache/pollinate:/bin/false
systemd-network:x:109:114:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:110:115:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
systemd-timesync:x:111:116:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin
systemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin
mysql:x:112:118:MySQL Server,,,:/nonexistent:/bin/false
unbound:x:113:119::/var/lib/unbound:/usr/sbin/nologin
nginx:x:114:120:nginx user,,,:/nonexistent:/bin/false
yellowtent:x:1000:1000::/home/yellowtent:/bin/sh
Notice MySQL took UID 112, since at install time the landscape user had already been created with UID 107.
On a new server install where I purged landscape prior to installing Cloudron, the MySQL user then takes UID 107 since it's "available":
tcpdump:x:105:111::/nonexistent:/usr/sbin/nologin
sshd:x:106:65534::/run/sshd:/usr/sbin/nologin
pollinate:x:108:1::/var/cache/pollinate:/bin/false
systemd-network:x:109:114:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:110:115:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
systemd-timesync:x:111:116:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin
systemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin
mysql:x:107:113:MySQL Server,,,:/nonexistent:/bin/false
unbound:x:112:118::/var/lib/unbound:/usr/sbin/nologin
nginx:x:113:119:nginx user,,,:/nonexistent:/bin/false
yellowtent:x:1000:1000::/home/yellowtent:/bin/sh
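As far as I understand it, that's just how Ubuntu allocates system accounts: adduser --system hands out the first free UID in the reserved system range, so whichever package gets installed first claims the lowest available number. The range and its current occupants can be checked like this (file paths are the stock Ubuntu ones):

# Reserved system-UID range used by adduser, plus the useradd equivalents
# (the login.defs entries may be commented out and just show the defaults)
grep -E '(FIRST|LAST)_SYSTEM_UID' /etc/adduser.conf
grep -E 'SYS_UID_(MIN|MAX)' /etc/login.defs
# Which UIDs in that range are already taken
getent passwd | awk -F: '$3 >= 100 && $3 < 1000 {print $3, $1}' | sort -n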
So when I look at the top output on the new server, there's no bare UID shown anymore; basically everything appears to run as the mysql user, since mysql now has UID 107, which matches the UID used in the container images:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
614 mysql 20 0 1069132 141280 35112 S 0.0 14.1 0:01.01 mysqld
643 root 20 0 725468 92732 49580 S 0.0 9.2 0:00.35 dockerd
935 yellowt+ 20 0 638720 61260 30140 S 0.0 6.1 0:01.11 node
521 root 20 0 749848 46900 26172 S 0.0 4.7 0:00.32 containerd
497 yellowt+ 20 0 579328 34044 27896 S 0.0 3.4 0:00.16 node
579 root 20 0 110808 21048 13392 S 0.0 2.1 0:00.07 unattended-upgr
504 root 20 0 31976 18176 10464 S 0.0 1.8 0:00.09 networkd-dispat
[...]
So I guess if I want to "clean" this up (even though there's really no issue at all, aside from UID 107 having no associated username), I should reimage my server, purge landscape, install Cloudron, and only then restore from the backup. Admittedly that's really not necessary; I should just get used to seeing UID 107 in the process list and top output and remember that it comes from the container UIDs rather than an actual Ubuntu account (except, of course, when it actually is MySQL running, lol).
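In the meantime, a workable way to tell the two apart (just my own approach, using the mysqld PID 614 from the top output above as the example):

# Container processes carry a .../docker/<container-id> path in their cgroup;
# processes started from host accounts don't
cat /proc/614/cgroup
ps -o pid,user,uid,comm -p 614
# Or go the other way and ask Docker what each container is running, as seen from the host
docker ps --format '{{.ID}} {{.Names}}'
docker top <container-id>    # substitute an ID from the previous command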
Hopefully the above makes some sense.
-
@girish - I stumbled across this article: https://blog.dbi-services.com/how-uid-mapping-works-in-docker-containers/
The article seems to imply (unless I'm misunderstanding it) that it may be a good practice to create a user on the host system that matches up with the user running the service in each container. So in other words, taking mysql as the example... on the Ubuntu host the mysql user should be a known user that's created with UID 2000 for example, and then in the Docker container for MySQL it'll also run with a UID of 2000. Then for something like mongodb, it'll be a host user with UID 2001 for example and then the user running mongodb in the container will also run with the UID of 2001. You're definitely allowed to create users with a specific UID, so hopefully this is all doable.
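To make that concrete, here's a rough sketch of the idea (the 2000 UID, the svc-mysql account name, and the docker run example are all made up for illustration, not anything Cloudron actually uses):

# Check the UID is actually free on a stock Ubuntu image, then pin a host account to it
getent passwd 2000 || sudo useradd --system --uid 2000 --no-create-home --shell /usr/sbin/nologin svc-mysql
# Run the container so its main process uses the same numeric UID...
docker run -d --rm --name uid-demo --user 2000:2000 ubuntu:20.04 sleep 600
# ...and the host's process list now shows the matching account name instead of a bare UID
ps -o pid,user,uid,comm -C sleep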
The above makes it a bit more secure from what I'm reading, plus of course for the "OCD" in all of us it keeps things cleaner and easier to understand what's happening when looking at top or ps output, for example. Another article that is similar in nature is this one: https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf
I'm just wondering if this is an improvement that could and should be made to Cloudron's images, at least for the fundamental services that Cloudron deploys and runs both on the host and in containers.
I guess I'm not convinced the current setup is the ideal setup for people. Since Cloudron is meant to be deployed on brand new Ubuntu systems, there should be no real need to accommodate other UIDs which may be present on some providers and not others, because you can simply choose a really high UID that no default Ubuntu image should be using. Hopefully the above makes sense. Please correct me if I'm misunderstanding some of this.
-
@d19dotca I think your understanding is correct. It's definitely possible to have all the UIDs in sync. However, most of the addon users (mongo, postgres, redis, etc.) don't exist on the host at all and only exist in containers. So we would then have to create these dummy users on the host.
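Just to illustrate what that would look like on the host side (the account names and UIDs below are placeholders, not the UIDs our addon containers actually use):

# Purely illustrative: one dummy system account per addon, pinned to the UID the
# corresponding container runs as
for svc in cloudron-mongodb:2001 cloudron-postgres:2002 cloudron-redis:2003; do
    name=${svc%%:*}; uid=${svc##*:}
    sudo useradd --system --uid "$uid" --no-create-home --shell /usr/sbin/nologin "$name"
done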
-
@girish I think that might be an improvement worth considering, no? Making the necessary changes would (I think)...
- Make it clear to monitoring tools and manual checks by users which processes are local to the host and which are from containers, particularly useful when multiple instances of a service are being run both on the host and in a container.
- Follow what appears to be "best practices" when running containers.
- Improve security in certain situations.
Admittedly these may be minor and not worth the overhead, but now that I'm aware of the behaviour, I'm a bit irked by it: it currently prevents me from easily identifying which services are container-run and which are local to the host, and it makes it confusing as to which user is actually running a listed process.
-