Disk space usage seems incorrect on external disk
-
@d19dotca It looks something like this now:
Among other fixes:
- It tells you when the data was collected
- Refresh to calculate the current disk usage immediately
- We use SI units (1000) for reporting disk usage. We use IEC units (1024) for memory usage. This seems to be some industry pseudo-standard.
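As a hedged illustration of the SI/IEC difference, GNU coreutils' numfmt can render the same byte count both ways (the byte figure here is the root-disk size reported later in this thread):

```shell
# The same byte count rendered in SI (powers of 1000) vs. IEC (powers of 1024).
# 181372190720 is the /dev/vda1 size from the df output later in this thread.
numfmt --to=si 181372190720    # -> 182G  (what `df -H` and SI-based tools show)
numfmt --to=iec 181372190720   # -> 169G  (what a 1024-based tool would show)
```

The 13 GB gap between the two renderings of one identical byte count is why numbers from different tools can look wildly different while both being "correct".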
-
@girish said in Disk space usage seems incorrect on external disk:
We use SI units (1000) for reporting disk usage. We use IEC units (1024) for memory usage. This seems to be some industry pseudo-standard.
Perfect, thanks Girish! The SI units seem to be more accurate from what I can see. I just checked on my MacBook Pro: the GUI shows 494 GB of disk space, which matches the disk size reported by
df -H
as well, so the numbers are easy to compare.
-
@girish Out of curiosity... how often will it check for updated disk sizes by default without manual intervention?
Btw, I think it'd be nice if the GUI showed free space too in an easy manner instead of having us do the math ourselves haha. For your consideration...
Or maybe it could be closer to how Apple displays it: the used portion stays as you have it, but the graph itself also shows the free space still available?
-
@d19dotca said in Disk space usage seems incorrect on external disk:
@girish Out of curiosity... how often will it check for updated disk sizes by default without manual intervention?
Currently, it's manual. You have to click the refresh button at the top right to trigger a computation. I guess we can make this periodic for the next release, but the issue is that this value always gets "out of date", so we weren't sure there is value in periodic computation (besides, it causes a lot of disk activity).
-
@girish - I just updated and there still seems to be something wrong with the calculation... 169 GB used at the top for the external disk, but "this disk contains" shows only 156 GB used? How is there such a big difference when all that's on the disk is backup files from Cloudron?
Does the external disk get recalculated too when I refresh it from the top?
-
Update:
For my main disk...
df -H
shows this for /dev/vda1 (root):
/dev/vda1  182G  100G  73G  58%  /
Yet Cloudron shows the following: (108 GB used)
Where is it getting that extra nearly 9 GB of usage from? The total disk size is close enough (181.37 GB vs the 182 GB from the command); it's just the usage figure that seems quite inaccurate, especially since we're using SI units now in version 7.3.2.
For my backup disk it's a similar situation:
df -H
shows this for /dev/vdb1 (cloudronbackup):
/dev/vdb1  253G  205G  36G  86%  /mnt/cloudronbackup
Yet Cloudron shows this: (217 GB used above graph, 204 GB used below)
In this case, for the external backup disk, the lower "Used" number of 204.51 GB is pretty close to the 205 GB from the command, so that seems accurate. But then where is the extra ~12 GB in the above-graph "used" number coming from?
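A hedged observation, not a confirmed account of Cloudron's internals: the above-graph number matches "size minus available" rather than df's Used column. df's Used excludes filesystem overhead (for example ext4's reserved blocks), so Size - Used - Avail is normally non-zero:

```shell
# Figures from the `df -H` output above, in GB (SI units).
size=253; used=205; avail=36

echo $((size - avail))         # 217: matches the above-graph "used" number
echo $((size - used - avail))  # 12: the gap, plausibly reserved blocks / overhead
```

If that guess is right, both numbers are "correct"; they just answer different questions (space not available vs. space occupied by files).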
There still seem to be a lot of discrepancies in the calculations here, unfortunately. Cc @girish
-
@d19dotca said in Disk space usage seems incorrect on external disk:
Btw, I think it'd be nice if the GUI showed free space too in an easy manner instead of having us do the math ourselves
+1
Been saying this for ages
-
@girish - I noticed you marked this as Solved 4 days ago, but I don't believe this has actually been solved.
There's a pretty large discrepancy in usage for my external disk as seen in earlier screenshots, and I don't see that having been resolved yet.
For example, "used" under the "this disk contains:" section is 204 GB, but the graph shows 217 GB used. It seems like a defect that multiple different "used" numbers are presented, and they differ by quite a few GBs too (roughly a 5-20 GB discrepancy).
Even in your own screenshot from earlier, the totals underneath the graph add up to about 36 GB, yet above the graph it shows 41 GB used (this is the smallest difference I've seen so far, but that's still a large 5 GB of unaccounted-for used space).
-
@d19dotca said in Disk space usage seems incorrect on external disk:
@girish - I noticed you marked this as Solved 4 days ago, but I don't believe this has actually been solved.
Also:
it'd be nice if the GUI showed free space too in an easy manner instead of having us do the math ourselves
-
-
@d19dotca Can you give us the output of
df -B1 --output=source,fstype,size,used,avail,pcent,target
? This is the command used to get disk information. The du information comes from the du command. Can you check if running
du -DsB1 <path>
matches what you see in the graph for the apps and backup directories? Finally, for docker itself, the size comes from
docker system df
output.
-
@jdaviescoates yes, your message has been relayed to @nebulon
-
@girish said in Disk space usage seems incorrect on external disk:
@d19dotca Can you give us the output of
df -B1 --output=source,fstype,size,used,avail,pcent,target
? This is the command used to get disk information. The du information comes from the du command. Can you check if running
du -DsB1 <path>
matches what you see in the graph for the apps and backup directories? Finally, for docker itself, the size comes from
docker system df
output.
Hi Girish. Happy to help here. I ran the commands as requested.
The output for that df command is below (my backup disk is currently full, need to expand that in a minute, haha):
ubuntu@my:~$ df -B1 --output=source,fstype,size,used,avail,pcent,target
Filesystem  Type          1B-blocks          Used         Avail Use% Mounted on
udev        devtmpfs     4125618176             0    4125618176   0% /dev
tmpfs       tmpfs         834379776       6078464     828301312   1% /run
/dev/vda1   ext4       181372190720   99085045760   72606834688  58% /
tmpfs       tmpfs        4171890688             0    4171890688   0% /dev/shm
tmpfs       tmpfs           5242880             0       5242880   0% /run/lock
tmpfs       tmpfs        4171890688             0    4171890688   0% /sys/fs/cgroup
/dev/vdb1   ext4       252515119104  252498403328             0 100% /mnt/cloudronbackup
/dev/loop1  squashfs       58327040      58327040             0 100% /snap/core18/2560
/dev/loop2  squashfs       66322432      66322432             0 100% /snap/core20/1623
/dev/loop4  squashfs       50331648      50331648             0 100% /snap/snapd/17336
/dev/loop0  squashfs       58327040      58327040             0 100% /snap/core18/2566
/dev/loop3  squashfs       50331648      50331648             0 100% /snap/snapd/17029
/dev/loop6  squashfs       71303168      71303168             0 100% /snap/lxd/22526
/dev/loop5  squashfs       71172096      71172096             0 100% /snap/lxd/22753
/dev/loop7  squashfs       65011712      65011712             0 100% /snap/core20/1611
tmpfs       tmpfs         834375680             0     834375680   0% /run/user/1000
This seems to show an issue... if I'm understanding the output correctly, my /dev/vda1 (root) disk is about 181 GB with about 99 GB used, yet the UI shows 108.77 GB used (screenshot included):
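One arithmetic that reproduces the UI number exactly, offered as a guess rather than a confirmed account of how Cloudron computes it: the UI's 108.77 GB equals size minus available, while df's Used column excludes filesystem overhead (ext4 reserves about 5% of blocks for root by default, which is close to the gap here):

```shell
# Byte figures for /dev/vda1 from the df output above.
size=181372190720
used=99085045760
avail=72606834688

echo $((size - avail))          # 108765356032 bytes = 108.77 GB: the UI's number
echo $((size - used - avail))   # 9680310272 bytes ~ 9.7 GB: reserved/overhead gap
```

Under this reading, the "extra nearly 9 GB" is space that is unavailable to the user but not occupied by files, so df's Used and the UI's used answer slightly different questions.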
Hopefully that clarifies the issue a bit by showing the discrepancies between what's shown in the UI vs what's shown in the command line terminal on the VM.
The du command output is below:
ubuntu@my:~$ sudo du -DsB1 /home/yellowtent/platformdata/
9498304512	/home/yellowtent/platformdata/
ubuntu@my:~$ sudo du -DsB1 /home/yellowtent/boxdata/
46611869696	/home/yellowtent/boxdata/
Based on that output and my UI, I believe the values of each item beneath the graphs are correct, but the total numbers calculated at the top (above the graph) are incorrect, or at least the above-graph used space number seems to be incorrect.
The other docker command output is below:
ubuntu@my:~$ sudo docker system df
TYPE            TOTAL   ACTIVE  SIZE     RECLAIMABLE
Images          29      21      15.58GB  6.058GB (38%)
Containers      94      70      0B       0B
Local Volumes   606     138     1.559GB  1.02GB (65%)
Build Cache     0       0       0B       0B
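For what it's worth, summing the per-item figures above (my own arithmetic, not something Cloudron reports) gives roughly 73 GB, well under df's 99 GB Used for the root disk, so any "everything else" slice would have to cover a sizeable remainder:

```shell
# du totals in bytes, plus docker sizes converted from the GB figures above.
platformdata=9498304512
boxdata=46611869696
images=15580000000     # 15.58GB from `docker system df`
volumes=1559000000     # 1.559GB from `docker system df`

echo $((platformdata + boxdata + images + volumes))   # 73249174208 ~ 73.2 GB (SI)
```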
-