Cloudron makes it easy to run web apps like WordPress, Nextcloud, and GitLab on your server.


  • Redis Errors - Entire Cloudron Down

    Solved Support
    2
    0 Votes
    2 Posts
    66 Views

Part of the issue was that redis was getting stuck. Restarting the box code fixed it: systemctl restart box. Another part of the issue is https://forum.cloudron.io/topic/9795/3-2-0-update-not-showing-up/9 . I am pushing a fix.

  • What's the meaning of these log entries?

    Solved Support
    2
    0 Votes
    2 Posts
    84 Views

Redis logs are of the format pid:role timestamp loglevel message. The pid is container-local.

The role is one of:

X: sentinel
C: RDB/AOF writing child
S: slave
M: master

Unfortunately, there is no way to change the log format in redis (afaik). We really just need the message (the other fields are not useful on Cloudron). See also https://github.com/redis/redis/issues/2545
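Since the format can't be changed in redis itself, the extra fields can be stripped when reading the logs. A minimal sketch with standard tools, assuming the space-separated field layout described above (the sample line is a typical redis startup message):

```shell
# Keep only the message field from redis log lines of the form
# "pid:role day month time loglevel message".
line='15:M 10 Nov 21:05:05.039 # Server initialized'
echo "$line" | cut -d' ' -f6-   # prints: Server initialized
```

The same `cut -d' ' -f6-` works in a pipeline when tailing a log file.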

  • 0 Votes
    14 Posts
    379 Views

@privsec 512MB is the highest I've got any of mine set to, and that's mostly just me ramping things up for Nextcloud in the hope it somehow speeds it up. In reality it's not even using half of that. The most I've got on any WordPress website is 256MB, but again it's seemingly not even using half of that most of the time. So strange that yours went up so insanely high.

1 Vote
    2 Posts
    179 Views

Ah, thanks for reporting. I have opened https://git.cloudron.io/cloudron/box/-/issues/808 internally.

  • 0 Votes
    6 Posts
    3k Views

    @robi removed 🙂

  • 0 Votes
    4 Posts
    303 Views

    @girish yeah, I've already bumped the memory up, thanks.

  • RediSearch

    Support
    6
    0 Votes
    6 Posts
    434 Views

@vladimir What other configuration does RediSearch need, though? I think it needs to be laid out specifically so that the amazing Cloudron team can figure out what, if anything, can be safely added to the current redis server configuration, or whether the configuration is needed within the WordPress server/app instead. When you deploy RediSearch, does it error on anything? What errors does it show in the user interface and the logs, for example?

  • 0 Votes
    3 Posts
    233 Views

@girish Thanks, after looking through a long list, it turns out it was Snipe-IT, which we don't use and which I had just installed as a test. So I'm stopping it for now, but maybe it's worth knowing in case the app itself is a bit hungrier than it should be for some reason.

  • Interesting WARNINGs in the Kutt App Logs

    Solved Kutt
    2
    0 Votes
    2 Posts
    187 Views
Nov 10 13:05:05 15:M 10 Nov 21:05:05.039 * Running mode=standalone, port=6379.
Nov 10 13:05:05 15:M 10 Nov 21:05:05.039 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
Nov 10 13:05:05 15:M 10 Nov 21:05:05.039 # Server initialized
Nov 10 13:05:05 15:M 10 Nov 21:05:05.039 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
Nov 10 13:05:05 15:M 10 Nov 21:05:05.039 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.

I think this is https://forum.cloudron.io/topic/2721/please-disable-transparent-hugepage . Upstream didn't respond; the warning can be safely ignored.
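For anyone running redis on their own host (outside Cloudron's managed setup) who does want to silence these warnings, the fixes are the ones the log messages themselves suggest. A sketch, to be run as root:

```shell
# Kernel settings the redis warnings ask for (run as root).
# On Cloudron these warnings can generally be ignored, per the reply above.
sysctl vm.overcommit_memory=1                             # allow background saves under low memory
echo never > /sys/kernel/mm/transparent_hugepage/enabled  # disable THP for this boot
# To persist: add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and the THP
# echo line to /etc/rc.local, as the warnings describe. Restart redis afterwards.
```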

  • Bug when Resizing Apps

    Solved Support
    3
    0 Votes
    3 Posts
    263 Views

    @girish Thanks, this fixed the issue! However, during subsequent testing I found that Redis goes down approx. 30% of the time when I stop the app. For testing I am using Scrumblr but I have seen similar issues with GitLab too. Is this an issue locally for me on my VPS, or could there be something else going on?

    Cheers,

    Ross

  • Redis Object cache

    WordPress (Developer)
    3
1 Vote
    3 Posts
    289 Views

    @vova have you solved it?

  • Please disable transparent hugepage

    Solved Support
    16
    0 Votes
    16 Posts
    1k Views

As I said on the other thread, I seem to have managed to install Cloudron on Ubuntu 20.04 LTS by following your advice to use ./cloudron-setup --version 5.6.3. I've disabled automatic upgrades and I'll wait a bit before upgrading to version 6.
    All is good!

  • 0 Votes
    15 Posts
    577 Views

@nebulon about 15-20 every second. When it stops, there is a line box:apphealthmonitor app health: 28 alive / 4 dead. Nothing about accessing any sites, though.

  • 0 Votes
    5 Posts
    216 Views

    Can you check with docker ps -a if that container is in fact there and then restart it with docker restart <containerid>?
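The suggested check could be scripted roughly like this (a sketch; the `name=redis` filter is an assumption about how the container is named on the host):

```shell
# Find the redis container even if it has exited, then restart it.
# The "redis" name filter is a guess; adjust to match `docker ps -a` output.
id=$(docker ps -a --filter "name=redis" --format '{{.ID}}' | head -n1)
[ -n "$id" ] && docker restart "$id"
```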

  • 0 Votes
    7 Posts
    243 Views

OK, I pushed an unmanaged WP update that updates the plugins, but this won't affect existing installs. You have to update the plugins yourself.

  • Cloudron 5.2 update failed

    Solved Support
    8
    0 Votes
    8 Posts
    441 Views

    @valkalon Cloudron support is still the best 😉

1 Vote
    7 Posts
    299 Views

@dylightful Not yet. It's implemented, but we didn't merge it into 7.4, so we will put it in 7.5.

  • 0 Votes
    2 Posts
    222 Views

@necrevistonnezr Indeed, previous versions of Cloudron would allocate more memory for redis based on the app's memory limit. This limit was not settable by the user. In 5.2, we have made the redis limits visible and configurable by the user. Our 5.2 migration script could ideally have been smarter and allocated more memory to redis.

  • 0 Votes
    8 Posts
    472 Views

    @alex-adestech That was quite some debugging session 🙂

Wanted to leave some notes here... The server was an EC2 R5 xlarge instance. It worked well, but resizing any app would just hang, and eventually the whole server would stop responding. One curious thing was that the server had 32GB of RAM and ~20GB was in buff/cache in the free -m output. I have never seen the kernel cache so much. We also found this backtrace in the dmesg output:

INFO: task docker:111571 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
docker D 0000000000000000 0 111571 1 0x00000080
 ffff881c01527ab0 0000000000000086 ffff881c332f5080 ffff881c01527fd8
 ffff881c01527fd8 ffff881c01527fd8 ffff881c332f5080 ffff881c01527bf0
 ffff881c01527bf8 7fffffffffffffff ffff881c332f5080 0000000000000000
Call Trace:
 [<ffffffff8163a909>] schedule+0x29/0x70
 [<ffffffff816385f9>] schedule_timeout+0x209/0x2d0
 [<ffffffff8108e4cd>] ? mod_timer+0x11d/0x240
 [<ffffffff8163acd6>] wait_for_completion+0x116/0x170
 [<ffffffff810b8c10>] ? wake_up_state+0x20/0x20
 [<ffffffff810ab676>] __synchronize_srcu+0x106/0x1a0
 [<ffffffff810ab190>] ? call_srcu+0x70/0x70
 [<ffffffff81219ebf>] ? __sync_blockdev+0x1f/0x40
 [<ffffffff810ab72d>] synchronize_srcu+0x1d/0x20
 [<ffffffffa000318d>] __dm_suspend+0x5d/0x220 [dm_mod]
 [<ffffffffa0004c9a>] dm_suspend+0xca/0xf0 [dm_mod]
 [<ffffffffa0009fe0>] ? table_load+0x380/0x380 [dm_mod]
 [<ffffffffa000a174>] dev_suspend+0x194/0x250 [dm_mod]
 [<ffffffffa0009fe0>] ? table_load+0x380/0x380 [dm_mod]
 [<ffffffffa000aa25>] ctl_ioctl+0x255/0x500 [dm_mod]
 [<ffffffffa000ace3>] dm_ctl_ioctl+0x13/0x20 [dm_mod]
 [<ffffffff811f1ef5>] do_vfs_ioctl+0x2e5/0x4c0
 [<ffffffff8128bc6e>] ? file_has_perm+0xae/0xc0
 [<ffffffff811f2171>] SyS_ioctl+0xa1/0xc0
 [<ffffffff816408d9>] ? do_async_page_fault+0x29/0xe0
 [<ffffffff81645909>] system_call_fastpath+0x16/0x1b

This led to a Red Hat knowledge base article, but the answer there is locked. More debugging led to several similar answers. The final answer was found here:

sudo sysctl -w vm.dirty_ratio=10
sudo sysctl -w vm.dirty_background_ratio=5

With the explanation "By default Linux uses up to 40% of the available memory for file system caching. After this mark has been reached the file system flushes all outstanding data to disk, causing all following IOs to go synchronous. For flushing this data out to disk there is a time limit of 120 seconds by default. In the case here the IO subsystem is not fast enough to flush the data within" that limit. Crazy 🙂 After we applied those settings, it actually worked (!). Still cannot believe that choosing an AWS instance type matters that much.
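To make the fix from this thread survive a reboot, the same values can also be written to the sysctl configuration. A sketch, assuming a standard Ubuntu layout where /etc/sysctl.conf is read at boot:

```shell
# Lower the dirty-page thresholds so the kernel flushes to disk earlier,
# instead of letting ~40% of RAM accumulate and then stalling writers.
sudo sysctl -w vm.dirty_ratio=10
sudo sysctl -w vm.dirty_background_ratio=5
# Persist the values across reboots.
printf 'vm.dirty_ratio = 10\nvm.dirty_background_ratio = 5\n' | sudo tee -a /etc/sysctl.conf
```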

  • 0 Votes
    3 Posts
    156 Views

    There is now a sticky post under this category to follow updates.