Ubuntu 20.04 "landscape" user account running mysqld
-
@d19dotca said in Ubuntu 20.04 "landscape" user account running mysqld:
Okay, so I understand how UIDs work at the Ubuntu system level, but I guess where I'm confused is how Cloudron specifies a UID for its main process in each container, and why they're all the same (in my case they're all UID 107 in each service container; is it always 107, even on new installs?). I think that's what's throwing me off. Why is Cloudron using UID 107, for example, as the running user in all of its service containers (like mysql, dovecot, postgresql, etc.)?
Ah, I think that's just a happy coincidence. We build all the containers out of the base image. So if you run
docker run -ti cloudron/base:3.0.0 /bin/bash
and then inspect /etc/passwd:
systemd-resolve:x:103:104:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
messagebus:x:104:106::/nonexistent:/usr/sbin/nologin
redis:x:105:107::/var/lib/redis:/usr/sbin/nologin
sshd:x:106:65534::/run/sshd:/usr/sbin/nologin
cloudron:x:1000:1000:Cloudron,,,:/home/cloudron:/bin/bash
It ends at 106, so the first user installed afterwards gets UID 107. In the addon containers, the first thing we do is install the database program, which adds its own database user. So those users happen to get 107.
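To see that allocation in action, here's a small sketch that finds the next free UID starting from a floor, roughly what useradd does for system accounts (the real logic also honors SYS_UID_MIN/SYS_UID_MAX in /etc/login.defs and checks /etc/group; this is simplified):

```shell
#!/bin/sh
# Sketch: find the next free UID at or above a given floor, by probing the
# user database the way useradd roughly does for system accounts.
next_free_uid() {
    uid="${1:-100}"
    while getent passwd "$uid" >/dev/null; do
        uid=$((uid + 1))
    done
    echo "$uid"
}

# Inside cloudron/base:3.0.0, where the passwd entries above stop at 106,
# this would print 107.
next_free_uid 100
```

On any given host the number printed depends on which UIDs are already taken, which is exactly why the addon users land on 107 in the base image but mysql landed on 112 on a host where landscape already held 107.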
-
@girish Ahhh okay, haha, then I guess that makes sense. What a bizarre thing, lol. So if I understand correctly, the next UID in your service containers will be UID 107, since the Docker images you base them on already go up to UID 106. And in my case, since Vultr's Ubuntu image uses landscape as UID 107 at the operating system level, that's why it all looks weird for me. Okay, I think that makes more sense to me now, haha.
Though one last question... if there was no UID 107 in the /etc/passwd file (which I confirmed last night is the case with OVH's Ubuntu image), then when Cloudron sets up its containers and its user is given UID 107 in the container, why does
top
not show 107 there, since there's no system UID 107? I should investigate that a bit more; I think that's the last part of my curiosity, haha. Thanks for bearing with me and explaining everything, Girish! I really appreciate it. Always love learning from the experts on these things.
-
@d19dotca said in Ubuntu 20.04 "landscape" user account running mysqld:
@girish Ahhh okay, haha, then I guess that makes sense. What a bizarre thing, lol. So if I understand correctly, the next UID in your service containers will be UID 107, since the Docker images you base them on already go up to UID 106. And in my case, since Vultr's Ubuntu image uses landscape as UID 107 at the operating system level, that's why it all looks weird for me. Okay, I think that makes more sense to me now, haha.
Yup, that's exactly right! I have to say I never noticed this myself. Good spot!
-
@d19dotca said in Ubuntu 20.04 "landscape" user account running mysqld:
Though one last question... if there was no UID 107 in the /etc/passwd file (which I confirmed last night is the case with OVH's Ubuntu image), then when Cloudron sets up its containers and its user is given UID 107 in the container, why does top not show 107 there, since there's no system UID 107?
It totally should! What does it show, if not the raw number "107"?
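For reference, tools like top and ps resolve a numeric UID to a name through the user database (what getent passwd queries) and fall back to printing the raw number when no entry exists. A minimal sketch of that lookup:

```shell
#!/bin/sh
# Sketch: resolve a numeric UID to a name the way top/ps effectively do,
# falling back to the raw number when the user database has no entry.
uid_to_name() {
    name=$(getent passwd "$1" | cut -d: -f1)
    echo "${name:-$1}"
}

uid_to_name 0        # → root (UID 0 exists on any standard Linux system)
uid_to_name 64321    # prints "64321" unless such a user happens to exist
```

So if the host has no UID 107 entry, the process list should show the bare "107" for the container processes.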
-
So, on Vultr this was uuidd (107) in the host image. I removed that line entirely from /etc/passwd. Then I ran ps aux | grep mysql and got the raw 107 as expected. All the other stuff like postgres, mongo, etc. shows the raw 107 as well:
107 4065 0.4 5.8 1559384 235096 ? Sl May27 23:31 /usr/sbin/mysqld
-
@girish Yeah, that's on Vultr though, and matches my latest experience too. But what I mean is that when I've used Cloudron on other providers like LunaNode, OVH, etc., I've never seen this issue before. I suspect it's because there is no UID 107 in those providers' Ubuntu images, but if that were the case then I'd presumably see "107" in all my
top
output, which I've never ever noticed before. That's what I want to try and figure out next; curiosity is getting the better of me, haha. This has been a very interesting puzzle and learning experience.
-
Okay, so I ran another test and I think this makes more sense now and sort of validates what you mentioned above.
I spun up a new VPS on Vultr with Ubuntu 20.04 (not the Cloudron marketplace app version), and before doing anything else I immediately uninstalled landscape, which removed UID 107. Then I installed Cloudron, and of course Cloudron created the mysql user with UID 107, which can be verified in the /etc/passwd file too.
Here's my current server:
tcpdump:x:105:111::/nonexistent:/usr/sbin/nologin
sshd:x:106:65534::/run/sshd:/usr/sbin/nologin
pollinate:x:108:1::/var/cache/pollinate:/bin/false
systemd-network:x:109:114:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:110:115:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
systemd-timesync:x:111:116:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin
systemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin
mysql:x:112:118:MySQL Server,,,:/nonexistent:/bin/false
unbound:x:113:119::/var/lib/unbound:/usr/sbin/nologin
nginx:x:114:120:nginx user,,,:/nonexistent:/bin/false
yellowtent:x:1000:1000::/home/yellowtent:/bin/sh
Notice MySQL took UID 112 since, at the time of install, the landscape user had already been created with UID 107.
On a new server install where I purged landscape prior to installing Cloudron, the MySQL user then takes UID 107 since it's "available":
tcpdump:x:105:111::/nonexistent:/usr/sbin/nologin
sshd:x:106:65534::/run/sshd:/usr/sbin/nologin
pollinate:x:108:1::/var/cache/pollinate:/bin/false
systemd-network:x:109:114:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:110:115:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
systemd-timesync:x:111:116:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin
systemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin
mysql:x:107:113:MySQL Server,,,:/nonexistent:/bin/false
unbound:x:112:118::/var/lib/unbound:/usr/sbin/nologin
nginx:x:113:119:nginx user,,,:/nonexistent:/bin/false
yellowtent:x:1000:1000::/home/yellowtent:/bin/sh
So when I look at the top output, no raw UID is shown anymore; everything appears to run as the mysql user, since its UID 107 now matches the UID used in the container images:
PID USER     PR NI VIRT    RES    SHR   S %CPU %MEM TIME+   COMMAND
614 mysql    20 0  1069132 141280 35112 S 0.0  14.1 0:01.01 mysqld
643 root     20 0  725468  92732  49580 S 0.0  9.2  0:00.35 dockerd
935 yellowt+ 20 0  638720  61260  30140 S 0.0  6.1  0:01.11 node
521 root     20 0  749848  46900  26172 S 0.0  4.7  0:00.32 containerd
497 yellowt+ 20 0  579328  34044  27896 S 0.0  3.4  0:00.16 node
579 root     20 0  110808  21048  13392 S 0.0  2.1  0:00.07 unattended-upgr
504 root     20 0  31976   18176  10464 S 0.0  1.8  0:00.09 networkd-dispat
[...]
So I guess if I want to "clean" this up (even though there's really no issue at all, other than UID 107 having no associated username), I should reimage my server, purge landscape, install Cloudron, and only then restore from backup. Though I admit this really isn't necessary; I should just get used to seeing UID 107 in the process list and top output, and know that it's coming from the container UIDs rather than an actual account in Ubuntu (except when it really is MySQL running, lol).
Hopefully the above makes some sense.
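For anyone wanting to do the same check before installing Cloudron on a fresh host, a quick sketch:

```shell
#!/bin/sh
# Sketch: check whether UID 107 is already taken on the host (e.g. by Vultr's
# landscape or uuidd user) before installing Cloudron, so the container UIDs
# will line up with a named host user.
if entry=$(getent passwd 107); then
    echo "UID 107 is taken by: ${entry%%:*}"
else
    echo "UID 107 is free"
fi
```

If it's taken, removing the owning package before the Cloudron install should let the mysql user claim 107, as in the test above.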
-
@girish - I stumbled across this article: https://blog.dbi-services.com/how-uid-mapping-works-in-docker-containers/
The article seems to imply (unless I'm misunderstanding it) that it may be good practice to create a user on the host system that matches the user running the service in each container. In other words, taking mysql as the example: on the Ubuntu host, the mysql user would be a known user created with, say, UID 2000, and the MySQL container would then also run with UID 2000. Then for something like mongodb, there'd be a host user with, say, UID 2001, and the user running mongodb in the container would also run with UID 2001. You're definitely allowed to create users with a specific UID, so hopefully this is all doable.
The above makes it a bit more secure from what I'm reading, plus of course, for the "OCD" in all of us, it keeps things cleaner and makes it easier to understand what's happening when looking at
top
or ps
output, for example. Another article similar in nature is this one: https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf
I'm just wondering if this is an improvement that should and could be made to Cloudron's images for at least the fundamental services that Cloudron deploys and runs in both host and containers.
I guess I'm not convinced the current setup is the ideal one for people. Since Cloudron is meant to be deployed on brand-new Ubuntu systems, there should be no real need to accommodate other UIDs which may be present on some providers and not others, because you can simply choose a really high UID that no default Ubuntu image would be using. Hopefully the above makes sense. Please correct me if I'm misunderstanding some of this.
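As a sketch of what that pairing could look like on the container side (the UID 2000 and the image tag here are illustrative assumptions, not Cloudron's actual scheme), the image build would pin the service user's UID instead of letting useradd pick the next free one:

```dockerfile
# Hypothetical sketch, not Cloudron's actual Dockerfile: pin the service user
# to a fixed, deliberately high UID (2000 is an arbitrary illustrative choice)
# so a same-UID user can be created on the host.
FROM cloudron/base:3.0.0
RUN useradd --system --uid 2000 --no-create-home \
            --shell /usr/sbin/nologin mysql
USER mysql
```

With a matching `useradd --system --uid 2000 ... mysql` on the host, top and ps would then show the name rather than a bare number, regardless of which UIDs the provider's image already used.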
-
@d19dotca I think your understanding is correct. It's definitely possible to have all the UIDs in sync. However, most of the addon users (mongo, postgres, redis, etc.) don't exist on the host at all and only exist in containers. So we would then have to create these dummy users on the host.
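As a sketch of what creating those dummy host users could look like (the names and UIDs here are pure assumptions for illustration), one could generate the useradd invocations from a small map and review them before running anything as root:

```shell
#!/bin/sh
# Sketch: emit (not execute) useradd commands for no-login dummy host users
# matching hypothetical container UIDs; pipe to "sudo sh" only after review.
gen_dummy_users() {
    while read -r name uid; do
        [ -n "$name" ] || continue
        echo "useradd --system --uid $uid --no-create-home" \
             "--shell /usr/sbin/nologin $name"
    done
}

gen_dummy_users <<'EOF'
mysql 2000
mongodb 2001
postgres 2002
redis 2003
EOF
```

Printing the commands rather than executing them keeps the sketch safe to run anywhere; the actual user creation would need root and a UID scheme agreed with the container images.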
-
@girish I think that might be an improvement worth considering, no? Making the necessary changes would (I think)...
-
Make it clear, both to monitoring tools and to users checking manually, which running processes are local to the host and which come from containers; this is particularly useful when multiple instances of a service run both on the host and in containers.
-
Follow what appears to be "best practices" when running containers.
-
Improve security in certain situations.
Admittedly these may be minor and not worth the overhead, but now that I'm aware of the behaviour, I'm a bit irked by it: it currently prevents me from easily identifying which services are container-run and which are local to the host, and makes it confusing which user is actually running a listed process.
-