Memory and disk totals, how are they calculated?
-
I am curious how the memory and disk totals are calculated. The reason I ask is that Cloudron shows the following:
Memory: RAM (7.13 GB) + Swap (4.29 GB) in MB
Disk: 11.74 GB of 51.85 GB available
However, if I SSH into the server and run the free -h command for memory and the df -h command for disk space, I get different totals.
Memory:
```
              total        used        free      shared  buff/cache   available
Mem:           6802        3887         452         535        2462        2090
Swap:          4095        1209        2886
```
Disk:
```
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        49G   38G   11G  78% /
```
The most pressing example is my external disk for backups... it is a 64 GB block storage disk and Cloudron is calculating it at 67.51 GB, so well over 3 GB larger than it actually is.
I'm worried that the values are not being calculated correctly. Knowing how they're currently calculated would be helpful in verifying. It isn't a giant difference, but enough of a difference (particularly on the disk space front) that I'd just like to do a little digging.
-
Regarding the memory usage, it is calculated from nodejs' os.totalmem() call and then divided by factors of 1000 according to SI units. I am not sure whether a base-2 calculation with 1024 would be more suitable.

free -h --si

should give you the same totals that Cloudron calculates (with rounding due to the -h argument). I am not exactly sure why your total memory comes out as 6802 in

free -h

What are your server specs here? What does

cat /proc/meminfo

tell you?

The same applies to disk space, where we also follow SI units. But I remember @girish having dug into this in much more detail.
-
The SI units (1000) vs binary units (1024) confusion is endless. We use SI units for disk sizes and binary units for memory sizes.

Please compare the outputs with

df -H

(note the capital -H, not the lowercase -h), and

du --block-size=1000

and so on. For

free

, you shouldn't need the --si, since we do use binary units there.
-
@girish Ah interesting, I see. So I ran df -H and it does indeed show 52 GB instead of 49 GB for the root drive. This is the first time I had seen that form of the command; I've always used the lowercase -h.
Here's the thing, though, and where I could see there being confusion (thus the "confusion is endless" part): you mention that you use SI units (1000) instead of binary units (1024), but that seems to apply only to the OS-level queries. I say that because setting up resources for individual apps in Cloudron is all done in binary (1024), not SI units. The minimum memory for WordPress, for example, isn't 200; it's 256.
I think that inconsistency is potentially confusing for people, particularly since your technical audience typically deals in 1024 rather than 1000.
Is it a possibility to use the binary values in calculating disk size instead? Is there a benefit to using the SI units instead of binary?
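For reference, the gap between the two conventions in this thread is plain conversion arithmetic; this illustrative snippet renders the same byte count both ways:

```javascript
// Illustration only: one raw byte count rendered in binary (GiB) vs SI (GB).
// 49 GiB of disk space expressed in bytes:
const bytes = 49 * 1024 ** 3;          // 52,613,349,376 bytes

const gib = bytes / 1024 ** 3;         // binary units, as df -h reports (49)
const gb  = bytes / 1000 ** 3;         // SI units, as df -H reports (~52.61)

console.log(`${gib} GiB == ${gb.toFixed(2)} GB`);
```

So the 49G from df -h and the 52 GB from df -H describe the same partition; only the divisor differs.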
-
@nebulon It's a 7 GB machine, which is probably why it's an odd number. Here's the output of the command you requested in case this helps:
```
ubuntu@my:~$ cat /proc/meminfo
MemTotal:        6965748 kB
MemFree:          323536 kB
MemAvailable:    2480972 kB
Buffers:          733972 kB
Cached:          1456664 kB
SwapCached:       296884 kB
Active:          2840428 kB
Inactive:        2504388 kB
Active(anon):    1754880 kB
Inactive(anon):  1901776 kB
Active(file):    1085548 kB
Inactive(file):   602612 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       4194300 kB
SwapFree:        2349720 kB
Dirty:                68 kB
Writeback:             0 kB
AnonPages:       2960480 kB
Mapped:           620172 kB
Shmem:            513068 kB
Slab:            1039824 kB
SReclaimable:     761684 kB
SUnreclaim:       278140 kB
KernelStack:       33488 kB
PageTables:       137352 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     7677172 kB
Committed_AS:   15388884 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:     1527664 kB
DirectMap2M:     5640192 kB
```
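For what it's worth, that MemTotal line reconciles both numbers seen earlier in the thread (plain conversion arithmetic, not Cloudron code; note that /proc/meminfo's "kB" is actually KiB, i.e. multiples of 1024 bytes):

```javascript
// MemTotal from /proc/meminfo above, in KiB
const memTotalKiB = 6965748;
const bytes = memTotalKiB * 1024;

// Binary units: what `free` prints as the Mem total, in MiB
const mib = Math.floor(bytes / 1024 ** 2);   // 6802, matching `free`

// SI units: what the Cloudron dashboard shows
const gbSI = (bytes / 1000 ** 3).toFixed(2); // '7.13', matching "RAM (7.13 GB)"

console.log(mib, gbSI);
```

So both tools read the same kernel value; 6802 and 7.13 GB differ only in the unit convention applied.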
-
Using SI units for disk size was not a conscious choice. I think what happened is that I used binary units when writing the memory-related code and @nebulon used SI units for the disk-related code. I think it makes sense to use binary units everywhere, especially since most people are probably not going to pass

--si

etc. to the tools. In fact, I didn't know about those flags until I was trying to track down the discrepancy between the graphs and the tools myself.

I opened an issue for the next release - https://git.cloudron.io/cloudron/box/-/issues/689
-
Hello @girish,
Just lost a lot of time on a bug related to storage, and I then discovered that what the Cloudron UI shows is in fact binary. Could the labels be updated to the correct ones, with the "i" in between the letters, so that it's clear which unit of measurement is being used?
- KiB kibibyte
- MiB mebibyte
- GiB gibibyte
I know this is a trivial thing to complain about, but it can save some time in understanding what's happening.
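A hypothetical helper for labelling binary values with the IEC suffixes suggested above might look like this (illustrative only, not Cloudron's implementation; the function name is made up):

```javascript
// Hypothetical formatter using binary (base-1024) units with the
// unambiguous IEC labels KiB/MiB/GiB.
function formatIEC(bytes) {
    const units = ['B', 'KiB', 'MiB', 'GiB', 'TiB'];
    let value = bytes;
    let i = 0;
    while (value >= 1024 && i < units.length - 1) {
        value /= 1024;
        i++;
    }
    // Show two decimals for small scaled values, whole numbers otherwise
    return `${value.toFixed(value < 10 && i > 0 ? 2 : 0)} ${units[i]}`;
}

console.log(formatIEC(256 * 1024 ** 2)); // '256 MiB' — e.g. the WordPress minimum
```

With labels like these, a reader can tell at a glance whether a value was computed with 1024 or 1000.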