  • 0 Votes
    2 Posts
    741 Views
    J
    Just found the solution here: https://forum.cloudron.io/topic/1279/receive-email/4 The problem was that port 25 is closed by default on AWS EC2 instances. As soon as I opened it in the AWS console, I was able to receive emails.
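    For anyone hitting the same thing, a minimal sketch of opening inbound port 25 with the AWS CLI (the security group ID below is a placeholder; use the group attached to your instance):

      # allow inbound SMTP (port 25) from anywhere on the instance's security group
      aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 25 \
        --cidr 0.0.0.0/0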
  • Can't Move to new Server

    Solved Support aws restore namecheap
    9
    1 Votes
    9 Posts
    2k Views
    alkomy
    @shai Yeah, it's tricky. Thanks for this hint. Anyway, here's a screenshot for lazy ppl: [image: 1607555574126-screenshot-2020-12-10-at-01.05.01.png] [image: 1607555601662-screenshot-2020-12-10-at-01.07.01.png]
  • Cloudron super slow and crashes at least once a month

    Solved Support ec2 aws performance
    5
    0 Votes
    5 Posts
    1k Views
    girish
    @dreamcatch22 With AWS, I have usually found that it's either a) CPU credit throttling kicking in - are you aware of https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-credits-baseline-concepts.html ? AFAIK, even Lightsail has the same limitation (but not 100% sure). Or b) some machines have a lot of memory but perform very poorly. For example, the EC2 R5 xlarge has 32GB of RAM (!) yet performs very poorly with Docker - see the notes at https://forum.cloudron.io/post/17488. Not sure if either of those applies to you. What instance are you using?
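    If the CPU-credit theory fits, one way to check is the CPUCreditBalance metric in CloudWatch (only meaningful on burstable T2/T3 instances; the instance ID below is a placeholder):

      # average CPU credit balance over the last hour, sampled every 5 minutes
      aws cloudwatch get-metric-statistics \
        --namespace AWS/EC2 --metric-name CPUCreditBalance \
        --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
        --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
        --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
        --period 300 --statistics Average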
  • 0 Votes
    9 Posts
    2k Views
    girish
    @carbonbee So, you are saying that there are backups in S3 that are not listed in the Cloudron dashboard? And those backups are not getting removed?
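    To compare what is actually in the bucket against what the dashboard lists, a quick sketch with the AWS CLI (the bucket name is a placeholder for your backup location):

      # list everything in the backup bucket with sizes and a total
      aws s3 ls s3://my-cloudron-backups/ --recursive --human-readable --summarize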
  • Aws Backup error : EPIPE HTTP Code : NetworkingError

    Solved Support backups aws
    10
    0 Votes
    10 Posts
    2k Views
    girish
    @jordanurbs Yes, this is in 6.0.1. Are you on 6.0.1? https://git.cloudron.io/cloudron/box/-/commit/bedcd6fccf58830b316318699375bc1f582a5d7a has the fix. Do you have any error logs?
  • Limit CPU Usage to S3

    Support backups aws cpu
    11
    0 Votes
    11 Posts
    2k Views
    marcusquinn
    @mehdi Very much interested in this topic. I love my American friends, but the EU certainly isn't without capability to respond to the data-mining free-for-all antics of the last decade. I share occasional posts on Twitter on these subjects if anyone's interested.
  • 0 Votes
    19 Posts
    3k Views
    A
    @scooke said in Installation on an AWS EC2 server (T2.Micro) at AWS China hangs: @andreas True, it doesn't matter where the host company or head office is located. Anything in China, for China, is going to be under lots of scrutiny. Unfortunately true; we must accept this learning curve.
  • Best aws s3 backup storage class

    Discuss backups glacier aws
    15
    0 Votes
    15 Posts
    2k Views
    robi
    Good points @mehdi @CarbonBee. See if you can find another project that already integrates with Glacier and how they handle it. If it's OSS, the code will be reusable, easing integration for devs.
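    As a reference point, one common approach is an S3 lifecycle rule that transitions older backup objects to Glacier; a minimal sketch (the bucket name, prefix and 30-day threshold are assumptions, not Cloudron behaviour):

      # move objects under backup/ to Glacier 30 days after creation
      aws s3api put-bucket-lifecycle-configuration \
        --bucket my-cloudron-backups \
        --lifecycle-configuration '{
          "Rules": [{
            "ID": "backups-to-glacier",
            "Status": "Enabled",
            "Filter": { "Prefix": "backup/" },
            "Transitions": [{ "Days": 30, "StorageClass": "GLACIER" }]
          }]
        }'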
  • 0 Votes
    5 Posts
    1k Views
    M
    Thanks, Girish. That patch worked. I've successfully set things up. Cheers, Mark
  • Some kind of redis error breaking the app

    Support redis ec2 aws
    8
    0 Votes
    8 Posts
    2k Views
    girish
    @alex-adestech That was quite some debugging session. Wanted to leave some notes here...

    The server was an EC2 R5 xlarge instance. It worked well, but resizing any app would just hang, and eventually the whole server would stop responding. One curious thing was that the server had 32GB of RAM and ~20GB was in buff/cache in the free -m output - I have never seen the kernel caching so much. We also found this backtrace in the dmesg output:

      INFO: task docker:111571 blocked for more than 120 seconds.
      "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      docker D 0000000000000000 0 111571 1 0x00000080
      ffff881c01527ab0 0000000000000086 ffff881c332f5080 ffff881c01527fd8
      ffff881c01527fd8 ffff881c01527fd8 ffff881c332f5080 ffff881c01527bf0
      ffff881c01527bf8 7fffffffffffffff ffff881c332f5080 0000000000000000
      Call Trace:
      [<ffffffff8163a909>] schedule+0x29/0x70
      [<ffffffff816385f9>] schedule_timeout+0x209/0x2d0
      [<ffffffff8108e4cd>] ? mod_timer+0x11d/0x240
      [<ffffffff8163acd6>] wait_for_completion+0x116/0x170
      [<ffffffff810b8c10>] ? wake_up_state+0x20/0x20
      [<ffffffff810ab676>] __synchronize_srcu+0x106/0x1a0
      [<ffffffff810ab190>] ? call_srcu+0x70/0x70
      [<ffffffff81219ebf>] ? __sync_blockdev+0x1f/0x40
      [<ffffffff810ab72d>] synchronize_srcu+0x1d/0x20
      [<ffffffffa000318d>] __dm_suspend+0x5d/0x220 [dm_mod]
      [<ffffffffa0004c9a>] dm_suspend+0xca/0xf0 [dm_mod]
      [<ffffffffa0009fe0>] ? table_load+0x380/0x380 [dm_mod]
      [<ffffffffa000a174>] dev_suspend+0x194/0x250 [dm_mod]
      [<ffffffffa0009fe0>] ? table_load+0x380/0x380 [dm_mod]
      [<ffffffffa000aa25>] ctl_ioctl+0x255/0x500 [dm_mod]
      [<ffffffffa000ace3>] dm_ctl_ioctl+0x13/0x20 [dm_mod]
      [<ffffffff811f1ef5>] do_vfs_ioctl+0x2e5/0x4c0
      [<ffffffff8128bc6e>] ? file_has_perm+0xae/0xc0
      [<ffffffff811f2171>] SyS_ioctl+0xa1/0xc0
      [<ffffffff816408d9>] ? do_async_page_fault+0x29/0xe0
      [<ffffffff81645909>] system_call_fastpath+0x16/0x1b

    Which led to this Red Hat article, but the answer to that is locked. More debugging led to answers like this and this. The final answer was found here:

      sudo sysctl -w vm.dirty_ratio=10
      sudo sysctl -w vm.dirty_background_ratio=5

    With the explanation: "By default Linux uses up to 40% of the available memory for file system caching. After this mark has been reached the file system flushes all outstanding data to disk causing all following IOs going synchronous. For flushing out this data to disk there is a time limit of 120 seconds by default. In the case here the IO subsystem is not fast enough to flush the data within". Crazy. After we put those settings in, it actually worked (!). Still cannot believe that choosing the right AWS instance matters that much.
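    If anyone wants those values to survive a reboot, a small sketch assuming a standard /etc/sysctl.d setup (the file name is arbitrary):

      # persist the writeback tuning and reload all sysctl configuration
      printf 'vm.dirty_ratio = 10\nvm.dirty_background_ratio = 5\n' | sudo tee /etc/sysctl.d/99-vm-dirty.conf
      sudo sysctl --system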