Cloudron Forum


App becomes unresponsive, Error "reached MaxRequestWorkers setting …"

Solved · Nextcloud · 5 Posts · 3 Posters · 1.0k Views
  simon · #1

    Hi there,

    For some time now, I have noticed that the Nextcloud GUI freezes for a few minutes. Most recently, I wanted to empty the trash.

    The log shows the following error.

    [mpm_prefork:error] [pid 1] AH00161: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting
    => Healtheck error: Error: Timeout of 7000ms exceeded
    

    The problem has already been discussed in the forum with other apps, but always without a conclusive result.

    girish (Staff) · #2

      Let's try to reach a conclusion this time around 🙂

      The solution here is:

      • Edit /app/data/apache/mpm_prefork.conf.
      • Adjust the MaxRequestWorkers setting there.
      • Adjust the app's memory limit accordingly as well.
      • Restart the app.
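A minimal sketch of what /app/data/apache/mpm_prefork.conf could look like after the edit. The directives are standard Apache prefork settings, but every value below is an illustrative assumption, not a recommendation; size them per the upstream docs linked below:

```apache
# /app/data/apache/mpm_prefork.conf -- illustrative values only
StartServers             5
MinSpareServers          5
MaxSpareServers         10
MaxRequestWorkers      100
MaxConnectionsPerChild   0
```

Remember that raising MaxRequestWorkers without also raising the app's memory limit just moves the failure from queued requests to out-of-memory kills.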

      The correct values to put in the prefork config are best explained by the upstream docs; see https://httpd.apache.org/docs/2.4/mod/prefork.html . The configuration depends heavily on how much memory you have, what load you expect, and so on.

      Unfortunately, there is no easy formula for these values. For example, it is not possible to derive the memory limit automatically: it depends on how much memory each Apache process might consume, which in turn depends on what kind of PHP script it is loading (Nextcloud in this instance), which in turn depends on what Nextcloud is doing in that specific request (image manipulation in memory, or just a simple response).
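That said, a common back-of-envelope starting point (not from the post; both input numbers below are assumptions you must replace with your own measurements) is to divide the app's memory limit by the average resident size of one Apache+PHP worker:

```shell
#!/bin/sh
# Rough sizing for MaxRequestWorkers: memory limit / avg worker RSS.
# Both values are assumptions -- measure your own average, e.g. with:
#   ps -o rss= -C apache2 | awk '{s+=$1; n++} END {print int(s/n/1024)}'
APP_MEMORY_MB=2048   # memory limit configured for the app in Cloudron
AVG_WORKER_MB=60     # typical RSS of one prefork worker serving PHP
echo $(( APP_MEMORY_MB / AVG_WORKER_MB ))   # prints 34
```

This only gives a ceiling that avoids swapping; the right value may be lower if each request is heavy, as girish notes above.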

      girish (Staff) · #3

        If it helps in understanding:

        • Each HTTP request is processed by a "worker".
        • If you are expecting 100 requests a second, you want 100 workers.
        • If you configure only 50 workers, the remaining 50 requests are queued internally by Apache and processed as workers free up.
        • A single Nextcloud user can generate any number of requests at any instant. When a user loads the Nextcloud dashboard, hundreds of things are downloaded, and each is a separate request. The same goes for the file syncer, which also works in terms of requests.
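The second and third bullets can be written out as a tiny arithmetic sketch, using the 100-request / 50-worker numbers from the post:

```shell
#!/bin/sh
# Requests beyond MaxRequestWorkers are not dropped; Apache queues them
# and serves them as workers become free.
REQUESTS=100      # simultaneous requests arriving
MAX_WORKERS=50    # configured MaxRequestWorkers
QUEUED=$(( REQUESTS > MAX_WORKERS ? REQUESTS - MAX_WORKERS : 0 ))
echo "served now: $MAX_WORKERS, queued: $QUEUED"   # served now: 50, queued: 50
```

The queue itself is bounded (ListenBacklog), so sustained overload eventually surfaces as the timeout error in the original post.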
          robi · #4

          It might be good to upgrade the Nextcloud package to use redbean.dev instead of Apache, for more scalability and speed.

          Conscious tech

            simon · #5

            Many thanks for the quick help and the detailed explanation.
            As I said, the problem occurred when emptying the trash; I assume that many workers are requested for that.
            I have increased MaxRequestWorkers from 15 to 100 and slightly increased the RAM allocated to the app.
            As far as I could test, the problem seems to be solved.

            • nebulon marked this topic as a question
            • nebulon marked this topic as solved