Ubuntu 20.04 "landscape" user account running mysqld
-
Okay, so here's my theory of what happened. I'd love it if @girish could sanity check this for me.
In previous installs, I never saw UID 107 used by either Ubuntu or the Cloudron services. When I checked a default install on OVH, where I had never seen landscape before, it turns out landscape DID exist but never showed up in `top` because it had a UID of 113 or something like that, not 107. Since the Vultr Ubuntu image has landscape at UID 107 by default, all the container services running as UID 107 (mysql, postgresql, dovecot, etc.) would then appear to be run by the landscape user, when really it was just a mix of different users all sharing UID 107.
Thus when I removed landscape, which also removed the user mapped to UID 107, and since the Cloudron services coincidentally use UID 107, `top` output now shows the raw user 107.
Curious though... where does UID 107 come from? I.e. how did Cloudron pick that UID, and why is that UID shared among all the containers while the usernames inside the containers differ?
Is my current setup best practice? Is there any issue with only a UID being shown and no associated username? Would this be resolved if I reimaged the appliance and removed landscape before even starting the Cloudron install (last time I removed it a few minutes into the install, so I wonder if I did it too late)? Would it then create a user properly for that UID? I briefly tried that on a new, smaller VPS in Vultr, and it seemed like it created the MySQL user at UID 107 instead, so the mysql user in Ubuntu appeared to be running many of Cloudron's services. But I should probably test again.
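When I do test it again, I'll probably check the mapping with something like this (the `mysql` container name here is just a guess on my part, so treat it as an assumption):

```sh
# what top/ps display is just the process's numeric UID translated through the
# host's /etc/passwd; with the landscape entry gone it falls back to the raw number
ps -eo pid,uid,user,comm | grep mysqld

# the container's own idea of who UID 107 is ("mysql" container name is a guess)
docker exec mysql getent passwd 107
```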
I've never before seen a bare UID with no associated username when looking at `top` on my Cloudron servers, so I feel like this is just a very unusual circumstance: Cloudron is running its own services as UID 107 inside containers, but the Ubuntu image on Vultr had UID 107 assigned to landscape, causing the confusion.
Hopefully the above makes sense. Would love your insight into this.
-
@d19dotca I got a bit lost with all the notes, but I think you are looking for some understanding of why UIDs are not consistent? On Linux, there are only user IDs (user names are just a "friendly" thing for the user, looked up from /etc/passwd). Among those user IDs, 0 (root) is special; the rest are all the same to the kernel. Ultimately, non-0 UIDs control the permissions on "files" and "processes". Also, the IDs are allocated dynamically. So, if you install mysql after 10 different programs, the mysql user will have a different ID than if you had installed it first. For the kernel and the end user, it makes no difference what the IDs are. The mysql files in the file system get the right dynamically allocated IDs.
Now, containers have their own UID namespace, but Cloudron does not use this feature (yet). The UIDs only control "files" as said above, and each container has its own file system. So, the UIDs can be totally different inside each container (depending on how you installed programs inside it) but functionally the same (sorry, don't know how to explain this better without writing a full article).
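Maybe a throwaway example makes it concrete (nothing Cloudron-specific here, just stock Ubuntu bits):

```sh
# on a Vultr 20.04 host, UID 107 resolves to the landscape entry; on an image
# without that entry, this prints nothing at all
getent passwd 107

# each container resolves names against its *own* /etc/passwd, so the same
# number is simply unassigned (or a different user) inside a fresh container
docker run --rm ubuntu:20.04 getent passwd 107
```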
-
@girish haha, sorry, I was kind of just brain dumping as I learned more about it myself, didn't mean to make it extra confusing.
Okay, so I understand how UIDs work at the Ubuntu system level, but I guess where I'm confused is how Cloudron specifies a UID for its main process in each container, and why they are all the same (in my case they're all UID 107 in each service container; is it always 107 even in new installs?). I think that's what's throwing me off. Why is Cloudron using UID 107, for example, as the running user in all of its service containers (like mysql, dovecot, postgresql, etc.)?
Additionally, I have never seen a bare UID when looking at `top` before, so is there perhaps something not "registered" properly in my install?
As I understand it, Cloudron's containers, at least in my case, are running their main service accounts with UID 107. In Ubuntu, UID 107 happened to match the landscape user. So when I uninstalled landscape, which removed that user too, there was no longer a system-level UID 107 in Ubuntu, but I guess `top` still sees UID 107 from the container running processes like mysqld, so it outputs the raw 107.
I have just never seen this before on any of the many other servers I've built with Cloudron... so I'm confused about why this is happening seemingly only on this server, where UID 107 comes from in Cloudron's service containers, etc.
Hopefully that clarifies a little where my confusion lies and what I'm hoping to understand better.
-
@d19dotca said in Ubuntu 20.04 "landscape" user account running mysqld:
Okay, so I understand how UIDs work at the Ubuntu system level, but I guess where I'm confused is how Cloudron specifies a UID for its main process in each container, and why they are all the same (in my case they're all UID 107 in each service container; is it always 107 even in new installs?). I think that's what's throwing me off. Why is Cloudron using UID 107, for example, as the running user in all of its service containers (like mysql, dovecot, postgresql, etc.)?
Ah, I think that's just a happy coincidence. We build all the containers out of the base image. So if you run

```sh
docker run -ti cloudron/base:3.0.0 /bin/bash
```

and then inspect /etc/passwd:

```
systemd-resolve:x:103:104:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
messagebus:x:104:106::/nonexistent:/usr/sbin/nologin
redis:x:105:107::/var/lib/redis:/usr/sbin/nologin
sshd:x:106:65534::/run/sshd:/usr/sbin/nologin
cloudron:x:1000:1000:Cloudron,,,:/home/cloudron:/bin/bash
```

It ends at 106, and the first user created after that gets UID 107. In the addon containers, the first thing we do is install the database program, which adds its own database user. So, they happen to get 107.
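You can reproduce that from scratch if you're curious, roughly like this (the adduser call is just an illustration of a package creating its service user, not literally what the addon Dockerfiles do, and it assumes the lower system UIDs are already taken, as in the passwd dump above):

```sh
# the base image's system accounts stop at UID 106 (cloudron itself is 1000)
docker run --rm cloudron/base:3.0.0 tail -5 /etc/passwd

# ...so the next system user created on top of it lands on 107
docker run --rm cloudron/base:3.0.0 bash -c "adduser --system --quiet mysql && id mysql"
```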
-
@girish Ahhh okay, haha, then I guess that makes sense. What a bizarre thing, lol. So if I understand correctly, the next user created in your service containers gets UID 107, since the base Docker image already goes up to UID 106. And in my case, since Vultr's Ubuntu image assigns UID 107 to landscape at the operating-system level, that's why it all looks weird for me. Okay, I think that makes more sense to me now. haha.
Though one last question... if there is no UID 107 in the host's /etc/passwd (which I confirmed last night is the case with OVH's Ubuntu image), then when Cloudron sets up its containers and their user is UID 107 inside the container, why doesn't `top` show the raw 107 there, since there's no system UID 107? I should investigate that a bit more; I think that's the last part of my curiosity. haha.
Thanks for bearing with me and explaining everything, Girish! I really appreciate it. Always love learning from the experts on these things.
-
@d19dotca said in Ubuntu 20.04 "landscape" user account running mysqld:
@girish Ahhh okay, haha, then I guess that makes sense. What a bizarre thing, lol. So if I understand correctly, the next user created in your service containers gets UID 107, since the base Docker image already goes up to UID 106. And in my case, since Vultr's Ubuntu image assigns UID 107 to landscape at the operating-system level, that's why it all looks weird for me. Okay, I think that makes more sense to me now. haha.
Yup, that's exactly right! I have to say I never noticed this myself, good spot
-
@d19dotca said in Ubuntu 20.04 "landscape" user account running mysqld:
Though one last question... if there is no UID 107 in the host's /etc/passwd (which I confirmed last night is the case with OVH's Ubuntu image), then when Cloudron sets up its containers and their user is UID 107 inside the container, why doesn't top show the raw 107 there, since there's no system UID 107?
It totally should! What does it show, if not the raw number "107"?
-
So, on Vultr this was uuidd (107) in the host image. I removed that line entirely from /etc/passwd. Then I ran `ps aux | grep mysql` and got the raw 107 as expected. All the other stuff like postgres, mongo, etc. shows the raw 107 as well.

```
107   4065  0.4  5.8 1559384 235096 ?  Sl  May27  23:31 /usr/sbin/mysqld
```
-
@girish Yeah, that's on Vultr though, and matches my latest experience too. But what I mean is that when I've used Cloudron on other providers like LunaNode, OVH, etc., I've never seen this issue before. I suspect it's because there is no UID 107 in those providers' Ubuntu images, but if that were the case then I'd presumably see "107" in all my `top` output, which I've never noticed before. That's what I want to try and figure out next; curiosity is getting the better of me. haha. This has been a very interesting puzzle and learning experience.
-
Okay, so I ran another test and I think this makes more sense now and sort of validates what you mentioned above.
I spun up a new VPS on Vultr with Ubuntu 20.04 (not the Cloudron marketplace app version), and before I did anything else, I immediately uninstalled landscape. This removed the UID 107 entry. Then I installed Cloudron, and sure enough the install created the mysql user with UID 107, which can be verified in the /etc/passwd file too.
Here's my current server:
```
tcpdump:x:105:111::/nonexistent:/usr/sbin/nologin
sshd:x:106:65534::/run/sshd:/usr/sbin/nologin
pollinate:x:108:1::/var/cache/pollinate:/bin/false
systemd-network:x:109:114:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:110:115:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
systemd-timesync:x:111:116:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin
systemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin
mysql:x:112:118:MySQL Server,,,:/nonexistent:/bin/false
unbound:x:113:119::/var/lib/unbound:/usr/sbin/nologin
nginx:x:114:120:nginx user,,,:/nonexistent:/bin/false
yellowtent:x:1000:1000::/home/yellowtent:/bin/sh
```
Notice that mysql took UID 112, since at the time of install the landscape user already held UID 107 (I have since removed it, which is why it no longer appears above).
On a new server install where I purged landscape prior to installing Cloudron, the MySQL user then takes UID 107 since it's "available":
```
tcpdump:x:105:111::/nonexistent:/usr/sbin/nologin
sshd:x:106:65534::/run/sshd:/usr/sbin/nologin
pollinate:x:108:1::/var/cache/pollinate:/bin/false
systemd-network:x:109:114:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:110:115:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
systemd-timesync:x:111:116:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin
systemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin
mysql:x:107:113:MySQL Server,,,:/nonexistent:/bin/false
unbound:x:112:118::/var/lib/unbound:/usr/sbin/nologin
nginx:x:113:119:nginx user,,,:/nonexistent:/bin/false
yellowtent:x:1000:1000::/home/yellowtent:/bin/sh
```
So when I look at the `top` output, there's no bare UID shown anymore, and basically everything appears to run as the mysql user, since its UID of 107 matches the UID used inside the container images:
```
  PID USER      PR  NI    VIRT    RES   SHR S  %CPU  %MEM    TIME+ COMMAND
  614 mysql     20   0 1069132 141280 35112 S   0.0  14.1  0:01.01 mysqld
  643 root      20   0  725468  92732 49580 S   0.0   9.2  0:00.35 dockerd
  935 yellowt+  20   0  638720  61260 30140 S   0.0   6.1  0:01.11 node
  521 root      20   0  749848  46900 26172 S   0.0   4.7  0:00.32 containerd
  497 yellowt+  20   0  579328  34044 27896 S   0.0   3.4  0:00.16 node
  579 root      20   0  110808  21048 13392 S   0.0   2.1  0:00.07 unattended-upgr
  504 root      20   0   31976  18176 10464 S   0.0   1.8  0:00.09 networkd-dispat
  [...]
```
So I guess if I want to "clean" this up (even though there's really no issue, aside from UID 107 having no associated username), I should reimage my server, purge landscape first, install Cloudron, and then restore from backup. Though I admit this really isn't necessary; I should just get used to seeing UID 107 in the process list and `top` output, and remember that it's coming from the container UIDs rather than an actual account in Ubuntu (except when it actually is MySQL running, lol).
Hopefully the above makes some sense.
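(For anyone else curious, this is roughly how I've been confirming that a given 107/mysql process shown by top actually belongs to a container rather than the host; nothing Cloudron-specific:)

```sh
# grab the PID of the mysqld that top shows, then look at its raw credentials
pid=$(pgrep -o mysqld)
grep Uid: /proc/$pid/status      # the kernel itself only stores these numbers

# an entry mentioning docker here means the process lives inside a container,
# even though top lists it alongside ordinary host processes
cat /proc/$pid/cgroup
```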
-
@girish - I stumbled across this article: https://blog.dbi-services.com/how-uid-mapping-works-in-docker-containers/
The article seems to imply (unless I'm misunderstanding it) that it may be good practice to create a user on the host system that matches the user running the service in each container. In other words, taking mysql as the example: on the Ubuntu host, the mysql user would be a known user created with, say, UID 2000, and the MySQL container would also run with UID 2000. For something like mongodb, it would be a host user with UID 2001, and the user running mongodb in the container would also use UID 2001. You're definitely allowed to create users with a specific UID, so hopefully this is all doable.
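Something like the following is the pattern I have in mind; the UID, paths, names, and image here are completely made up for illustration, and whether a given image tolerates being run as an arbitrary UID depends on the image:

```sh
# host side: a dedicated user with a fixed, known UID
sudo useradd --uid 2000 --no-create-home --shell /usr/sbin/nologin svc-mysql
sudo mkdir -p /srv/mysql-data && sudo chown 2000:2000 /srv/mysql-data

# container side: run the service as the same numeric UID, so host tools
# (top, ps, ls -l) resolve it to "svc-mysql" instead of a bare number
docker run -d --name mysql-demo --user 2000:2000 \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -v /srv/mysql-data:/var/lib/mysql mysql:8.0
```

The point being that the numeric owner the kernel tracks and the name shown by host tools would then line up by construction.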
The above approach makes things a bit more secure from what I'm reading, plus of course, for the "OCD" in all of us, it keeps things cleaner and makes it easier to understand what's happening when looking at `top` or `ps` output, for example.
Another article that's similar in nature is this one: https://medium.com/@mccode/understanding-how-uid-and-gid-work-in-docker-containers-c37a01d01cf
I'm just wondering if this is an improvement that could and should be made to Cloudron's images, at least for the fundamental services that Cloudron deploys and runs on the host and in containers.
I guess I'm not convinced the current setup is ideal. Since Cloudron is meant to be deployed on brand-new Ubuntu systems, there should be no real need to accommodate other UIDs that may be present on some providers and not others, because you could simply choose a really high UID that no default Ubuntu image would be using. Hopefully the above makes sense. Please correct me if I'm misunderstanding some of this.
-
@d19dotca I think your understanding is correct. It's definitely possible to have all the UIDs in sync. However, most of the addon users (mongo, postgres, redis, etc.) don't exist on the host at all and only exist in containers. So, we would then have to create these dummy users on the host, etc.
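For example, the dummy entry for the mysql addon would be something like this on the host (a sketch only, not something Cloudron does today; the name is arbitrary and it only changes how tools like top, ps and ls display the number):

```sh
# reserve UID 107 on the host under a readable name; permissions and container
# behaviour are unchanged, only what top/ps/ls display for that UID changes
sudo useradd --system --uid 107 --no-create-home \
    --shell /usr/sbin/nologin cloudron-mysql
```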
-
@girish I think that might be an improvement worth considering, no? Making the necessary changes would (I think):
- Make it clear to monitoring tools, and to users checking manually, which running processes are local to the host and which come from containers; this is particularly useful when multiple instances of a service are running both on the host and in a container.
- Follow what appear to be "best practices" when running containers.
- Improve security in certain situations.
Admittedly these may be minor and not worth the overhead, but now that I'm aware of the behaviour, I'm a bit irked by it, as it currently prevents me from easily identifying which services are container-run and which are local to the host, and makes it confusing which user is actually running a listed process.
-