  • 0 Votes
    15 Posts
    396 Views
    girishG

    @warg said in The turn service ran out of memory:

    Could this be related to malicious clients, e.g. errors created by bots that try TURN-specific exploits?

    It does seem to be the case.

    Is anyone else seeing such messages in the turn service logs?
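
    For anyone who wants to check their own logs, a rough way to look at the turn service output directly (assuming the TURN server runs in its own Docker container whose name contains "turn"; the exact name may differ on your install):

    # find the turn container and tail its recent logs
    docker ps --format '{{.Names}}' | grep -i turn
    docker logs --since 1h <turn-container-name>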

  • 1 Votes
    2 Posts
    113 Views
    nottheendN

    This topic can be marked as closed. I decided to buy an instance with more memory and it worked well.

  • 0 Votes
    5 Posts
    486 Views
    A

    @derin Mine is 8GB, which is good enough not to stress about it. I think it's better to go higher so we don't have to worry about it much, but always make sure you have enough :)

  • Spikes in RAM use

    Unsolved Support
    8
    0 Votes
    8 Posts
    386 Views
    J

    Over the past few days, I've been getting alerts that apps which typically have had no issues are running out of memory, yet the System Info / System Memory graph isn't showing any spikes in RAM usage. I'm still trying to run this down, but I'm having some trouble doing so and need to keep digging.
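
    One quick way to narrow this down (a rough sketch, not an official Cloudron procedure) is to compare per-app memory directly, since each app runs in its own Docker container:

    # snapshot of current memory usage and limit per container
    docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"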

  • Huge memory spikes

    Support
    5
    0 Votes
    5 Posts
    397 Views
    robiR

    @girish That's not it. You see OOMs because people install all sorts of apps and try to abuse the system, like WordPress with large plugins.

    My system has 0 OOMs. None.

    Mine also has enough IO, as I asked them to limit the number of new customers on the host system.

    # dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync ; rm /tmp/test1.img
    1+0 records in
    1+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.50861 s, 195 MB/s

    Doesn't explain 48GB mem spikes.

    This only seems to be happening on 20.04 Cloudrons, not 18.04.

    Btw, the main issue I think you're having is the choice of that NY datacenter; its networking isn't great. They seem to allow burst traffic but then quickly throttle down or experience congestion.

    Seattle has been way better for me.

  • 2 Votes
    7 Posts
    356 Views
    girishG

    I think there was a task to integrate better with Prometheus or Grafana, but it got lost. Let me check if we want to do this for 7.2.

  • WiKiss

    App Wishlist
    3
    0 Votes
    3 Posts
    359 Views
    L

    @murgero Well spotted! I didn't check the history. 🙄

  • 1 Votes
    3 Posts
    305 Views
    L

    @timconsidine Thanks, that is good to hear. I think it is quite natural when somebody discovers Cloudron to go wild installing things to try them and that takes quite a lot of RAM.

    One other benefit of smaller applications is that there are often fewer lines where bad code could hide or where things could go wrong.

  • 0 Votes
    7 Posts
    523 Views
    timconsidineT

    @murgero thank you, good advice, will try it out

  • 0 Votes
    4 Posts
    355 Views
    nebulonN

    To conclude this, it was a memory issue. The instance as a whole was a bit overcommitted.

    If the backup task is idle, it won't consume any memory. Also, Cloudron does not reserve memory based on the limits set, neither for the backup task nor for apps. The limit is just there to prevent rogue apps or the backup task from killing other apps.
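
    If you suspect the same kind of overcommitment, a quick check with standard Linux tooling (nothing Cloudron-specific) is to compare total memory and swap against what the kernel has actually committed:

    # overall memory and swap in human-readable form
    free -h
    # committed memory vs. the commit limit
    grep -i commit /proc/meminfo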

  • 0 Votes
    4 Posts
    329 Views
    jdaviescoatesJ

    @girish yeah, I've already bumped the memory up, thanks.

  • Should my swap equal my RAM?

    Solved Support
    6
    0 Votes
    6 Posts
    624 Views
    M

    @girish Increasing the swap seems to have cured the problem. Additionally, there may have been a problem caused by a misreading of the amount of swap: originally the Linode said there was only 512MB, but Cloudron said there was almost 1GB.

    I have increased the swap from 512MB to 2048MB and I haven't received any app out-of-memory notifications since then, so that seems to have cured it.

    Thank you for your help.
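
    For reference, checking and growing swap with a swap file on a typical Ubuntu server looks roughly like this (a generic sketch; the /swapfile path and 2G size are just examples, and Linode images may manage swap differently):

    swapon --show && free -h              # confirm current swap
    fallocate -l 2G /swapfile             # create a 2 GB swap file
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots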

  • 1 Votes
    16 Posts
    540 Views
    robiR

    @d19dotca Yes, the limits are there to protect against the noisy neighbor problem which exists when many processes are competing for the same resources and ONE uses up more than their fair share.

    Technically we could have all 30 Apps be set to 1+GB on a 16GB RAM system and it would work fine until one App behaved badly. Then the system would be in trouble as the OOM killer would select a potentially critical service to kill.

    With limits, the system is happy, and the killing happens in containers instead.
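
    If you want to confirm that a given app was killed by its container limit rather than a host-wide OOM, you can ask Docker and the kernel log directly (container names vary; "mail" below is just an example):

    # did the OOM killer terminate a process inside this container?
    docker inspect --format '{{.State.OOMKilled}}' mail
    # recent OOM kill events on the host
    dmesg -T | grep -i 'killed process'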

  • 0 Votes
    6 Posts
    1k Views
    M

    Usually, this error is thrown when the Java Virtual Machine cannot allocate an object because it is out of memory, and no more memory could be made available by the garbage collector.

    Therefore you pretty much have two options:

    Increase the maximum memory your program is allowed to use with the -Xmx option (for instance, for 1024 MB: -Xmx1024m)

    Modify your program so that it needs less memory, using smaller data structures and getting rid of objects that are no longer used at some point in your program

    Increasing the heap size is only a temporary fix; it will crash again somewhere else. To avoid these issues, write high-performance code:

    Use local variables wherever possible.
    Make sure you select the correct object type (e.g. choosing between String, StringBuffer and StringBuilder).
    Use a sensible coding approach in your program (e.g. static vs. non-static variables).
    Review other parts of your code that could be improved, and consider multithreading where appropriate.
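
    For completeness, setting the maximum heap and capturing a heap dump on failure looks roughly like this (generic JVM flags; app.jar and the 1024 MB value are placeholders):

    # raise the maximum heap to 1024 MB and write a heap dump if it still overflows
    java -Xmx1024m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -jar app.jar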
  • 3 Votes
    7 Posts
    274 Views
    LonkleL

    I'm with @nebulon, I see this as difficult to support, harder to code for - and more user friction. I usually never support more user friction unless it's dire. I think configuring after installation is more than acceptable. It's not like we can't write scripts using their API to macro your suggestion too, but I digress. 😂

  • 0 Votes
    5 Posts
    210 Views
    girishG

    @d19dotca You can try https://git.cloudron.io/cloudron/box/-/commit/3c565defca000feb21c47e9d8c9ada4c62c11cef and then systemctl restart box.

    Alternatively, you can just use the docker CLI until the next release:

    /usr/bin/docker update --memory 805306368 --memory-swap 1610612736 mail

    The containers are named mysql, postgresql, mail, mongodb.
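
    To verify that the new limits took effect (a quick check, assuming the same container name):

    # memory and swap limits in bytes as Docker sees them
    docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.MemorySwap}}' mail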

  • 0 Votes
    7 Posts
    393 Views
    girishG

    Correct. We don't need to re-create the container to change the memory limit. But we still need to restart it after adjusting the memory limit because of limitations in our packaging.

    I think over time we have learnt that it is not a good idea to set up Apache to auto-scale based on the container memory limit; those things are very dependent on the app/plugin use. I think Java apps require a restart at this point since the JVM gets passed the heap memory as a flag on startup; maybe there is a workaround for this as well, we have to investigate.

    But at that point, we can at least make the memory limit code not re-create the container, which I think is where the bulk of the slowness is.

  • 1 Votes
    3 Posts
    261 Views
    ruihildtR

    @nebulon Thanks for your reply.

    After thinking about it, the chosen graph is better for viewing memory usage (which fluctuates much more than disk).

    I would then suggest separating the two pieces of information:

    keep the current graph for app usage (the Y axis adapting to the app using the most memory)
    add a bar for total memory used, modelled on the total disk usage bar (could be a single colour for memory usage)

    This would help maximize the amount of information you can visualize in one go and help detect spikes better.

  • 0 Votes
    8 Posts
    862 Views
    d19dotcaD

    FYI - related discussion: https://forum.cloudron.io/topic/5960/setting-memory_limit-dynamically-in-wordpress-developer-package

  • restore rocket chat

    Support
    2
    0 Votes
    2 Posts
    220 Views
    girishG

    Usually, "Killed" means it ran out of memory when importing. Can you try to restore again and watch the "dmesg -w" output in parallel to see if that is indeed the case?

    You can bump the memory limit for the restore logic as indicated here - https://cloudron.io/documentation/troubleshooting/#backups
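
    A concrete way to watch for the OOM kill while the restore runs (standard kernel log filtering, nothing Cloudron-specific):

    # follow the kernel log and show only OOM killer activity
    dmesg -w | grep -iE 'out of memory|killed process'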