Cloudron Forum

"Unlock instructions" email due to brute force attack on gitlab users

allanbowe (#1)

We have been running v1.104.4 for the past 5 days. Suddenly a large number of our GitLab users have received the message below (both Cloudron and external login accounts). There are no failed sign-in attempts in the log that I can see. Has anyone else had this issue?

    From: GitLab git.app@xxxx
    Sent: xxxx
    To: xxxxx
    Subject: Unlock instructions

    GitLab
    Hello, xxxx!

    Your GitLab account has been locked due to an excessive number of unsuccessful sign in attempts. You can wait for your account to automatically unlock in 10 minutes or you can click the link below to unlock now.
    Unlock account

    If you did not initiate these sign-in attempts, please reach out to your administrator or enable two-factor authentication (2FA) on your account.

james (Staff) (#2)

      Hello @allanbowe
      Thanks for reporting.
      Are there really no logs inside GitLab showing any failed login attempts for the affected accounts?

allanbowe (#3)

Not on the day that it happened. Happy to DM a (sanitised) copy.

james (Staff) (#4)

          Hello @allanbowe
Just to clarify, I am not talking about the Cloudron app log of the GitLab app, but about the GitLab administration UI itself: the affected user accounts and the system logs inside the GitLab administration view.
Is that what you meant when you offered to send a copy in a DM?

allanbowe (#5)

I was thinking about Cloudron, yes.

The GitLab admin UI doesn't appear to have a "System Logs" option:

[screenshot of the GitLab admin area]

The affected user accounts are indeed locked.

james (Staff) (#6)

              @allanbowe see https://gitlab.com/gitlab-org/gitlab/-/issues/233377 for details about Authentication Log for users

allanbowe (#7)

                Gotcha. According to the link you provided:

                Audit events of failed logins are currently recorded only on GitLab Starter and visible 
                under GitLab Premium (via Admin Area > Audit Log)🤕. Having those events surfaced 
                under Authentication log would means either one of these two things:
                
                1. Move failed login audit events completely to GitLab Core
                2. Add an extra EE version of Authentication log for licensed customer (i.e. for GitLab Premium)
                

                Here is what I do see in that location:

[screenshot of the Admin Area Audit Log]

                So for some reason the failed events are not shown.

What is also interesting: when the event happened, the affected user(s) DID have a value in the "Locked account email verification code last sent at:" field under "/admin/users/XXXX". But after restarting the box, that entry is empty again. I'm not sure whether that happened automatically after 10 minutes, though.
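If you need to inspect or clear such a lock without waiting for the 10-minute timeout, here is a minimal sketch using a Rails console. The console invocation below is the standard one for source-style GitLab installs and may need adjusting for the Cloudron package; the username is a placeholder, and access_locked? / unlock_access! are Devise's standard lock helpers.

    # From a shell inside the GitLab app container (source-install layout):
    cd /home/git/gitlab
    sudo -u git -H bundle exec rails console -e production

    # Then, inside the console:
    #   user = User.find_by(username: 'some_username')   # placeholder
    #   user.access_locked?    # => true while the account is locked
    #   user.unlock_access!    # clears the lock immediately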

allanbowe (#8)

I found the Cloudron app logs of the GitLab app under /home/yellowtent/platformdata/logs/$APPID and tried the following:

[screenshot of the grep commands tried]
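They were roughly along these lines (the exact commands are in the screenshot; treat these as illustrative):

    # illustrative only - the actual commands are in the screenshot above
    grep -ri "fail" /home/yellowtent/platformdata/logs/$APPID/
    grep -ri "unlock" /home/yellowtent/platformdata/logs/$APPID/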

                  Neither gave any result.

I wasn't able to find the GitLab internal logs.

james (Staff) (#9)

                    The image you shared is of one user.
                    Is this your user or one of the locked users?
                    You can impersonate such a user and view this section as the user.

allanbowe (#10)

It was from my user, but I also did this with one of the locked users and got the same result (it only shows successful logins). This was after the restart.

allanbowe (#11)

I found this thread, which implies that it is a known issue in GitLab: https://gitlab.com/gitlab-org/gitlab/-/issues/297473

allanbowe (#12)

                          I found the logs - they were inside the container at /home/git/gitlab/log

Running grep -i "failed" revealed that the attack started in the early morning of 20th July. Somehow the list of usernames was known (probably related to the issue linked in my previous post), and sign-in requests have been coming in from random IP addresses ever since. A sketch of how to reproduce this search follows the log excerpt below.

                          First 5 entries shown below (this pattern has continued since):

                          ./application_json.log:{"severity":"INFO","time":"2025-07-20T03:17:13.349Z","correlation_id":"xxx","meta.caller_id":"SessionsController#create","meta.feature_category":"system_access","meta.organization_id":1,"meta.remote_ip":"156.146.59.50","meta.client_id":"ip/156.146.59.50","message":"Failed Login: username=xxx1 ip=156.146.59.50"}
                          ./application_json.log:{"severity":"INFO","time":"2025-07-20T03:18:20.163Z","correlation_id":"xxx","meta.caller_id":"SessionsController#create","meta.feature_category":"system_access","meta.organization_id":1,"meta.remote_ip":"193.176.84.35","meta.client_id":"ip/193.176.84.35","message":"Failed Login: username=xxx2 ip=193.176.84.35"}
                          ./application_json.log:{"severity":"INFO","time":"2025-07-20T03:18:39.636Z","correlation_id":"xxx","meta.caller_id":"SessionsController#create","meta.feature_category":"system_access","meta.organization_id":1,"meta.remote_ip":"20.205.138.223","meta.client_id":"ip/20.205.138.223","message":"Failed Login: username=xxxx3 ip=20.205.138.223"}
                          ./application_json.log:{"severity":"INFO","time":"2025-07-20T03:19:04.255Z","correlation_id":"xxx","meta.caller_id":"SessionsController#create","meta.feature_category":"system_access","meta.organization_id":1,"meta.remote_ip":"98.152.200.61","meta.client_id":"ip/98.152.200.61","message":"Failed Login: username=xxx4 ip=98.152.200.61"}
                          ./application_json.log:{"severity":"INFO","time":"2025-07-20T03:21:03.314Z","correlation_id":"xxx","meta.caller_id":"SessionsController#create","meta.feature_category":"system_access","meta.organization_id":1,"meta.remote_ip":"200.34.32.138","meta.client_id":"ip/200.34.32.138","message":"Failed Login: username=xxx5 ip=200.34.32.138"}
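For reference, a minimal sketch of that search from a shell inside the GitLab app container (e.g. via the app's web terminal); the jq step is illustrative and assumes jq is available:

    cd /home/git/gitlab/log

    # Count the failed logins
    grep -c "Failed Login" application_json.log

    # Each line of application_json.log is a standalone JSON object,
    # so jq can pull out time, source IP and message:
    grep "Failed Login" application_json.log \
      | jq -r '[.time, .["meta.remote_ip"], .message] | @tsv' \
      | head -n 5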
                          
allanbowe (#13)

So it appears that unauthenticated users (i.e. attackers) are able to enumerate usernames by brute force, because the corresponding API endpoints do not require authentication: https://gitlab.com/gitlab-org/gitlab/-/issues/297473

Furthermore, the GitLab team does not plan to fix the issue:

                            • https://gitlab.com/gitlab-org/gitlab/-/issues/16179
                            • https://gitlab.com/gitlab-org/gitlab/-/issues/336601

                            To mitigate the risk from such attacks in the future we took the following measures:

                            Actions taken on the server:

• Installed Fail2ban (a configuration sketch follows after the lists below)

                            Actions taken on the platform (cloudron):

                            • Removed several platform apps that were not being used
                            • Restricted visibility of (and access to) the gitlab instance to just those who need it
                            • Removed several users

                            Actions taken on the gitlab instance (cloudron container):

• Enabled 2FA (we had it natively via Cloudron / OIDC, but not for guest users)
• Removed OAuth applications that are no longer used
• Enabled "Deactivate dormant users after a period of inactivity" (90 days)
• Enabled "Enable unauthenticated API request rate limit" (1 per second)
• Enabled "Enable unauthenticated web request rate limit" (1 per second)
• Enabled "Enable authenticated API request rate limit" (2 per second)
                            • Deleted 3 inactive runners and removed one active but no longer needed runner
                            • Removed dormant users

                            Suggestions (to the packaging team) for improvement:

• A hardened GitLab configuration "out of the box" in Cloudron
• Updates to the documentation (e.g. that the logs are located under /home/git/gitlab/log). Maybe even surfacing that location in the file explorer, to make log capture / analysis easier.
• Options within the Cloudron platform itself to more aggressively reject IP addresses. It was noted that some attacker IPs were reused after some time.
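Regarding the Fail2ban item above, here is a minimal sketch of a filter and jail keyed to the "Failed Login" lines in application_json.log. Treat it as an illustration rather than a tested config: the file names and ban/find times are arbitrary, and the logpath in particular is an assumption, since Fail2ban runs on the host and the GitLab JSON log must be reachable there (adjust it to wherever the container's log directory is exposed on your server).

    # /etc/fail2ban/filter.d/gitlab-failed-login.conf  (hypothetical file name)
    [Definition]
    # Matches: "message":"Failed Login: username=xxx ip=1.2.3.4"
    failregex = "message":"Failed Login: username=\S+ ip=<HOST>"
    # The log uses ISO 8601 timestamps in a "time" field; a custom
    # datepattern may be needed for Fail2ban to parse them.

    # /etc/fail2ban/jail.d/gitlab.local  (hypothetical file name)
    [gitlab-failed-login]
    enabled  = true
    filter   = gitlab-failed-login
    port     = http,https
    # assumption: host path where the container's log directory is reachable
    logpath  = /path/to/gitlab/log/application_json.log
    maxretry = 5
    findtime = 600
    bantime  = 86400

With these example values, an IP that fails five logins within ten minutes is banned for a day.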
jdaviescoates (#14)

                              @allanbowe said in "Unlock instructions" email due to brute force attack on gitlab users:

                              Suggestions (to the packaging team) for improvement:

                              A hardened Gitlab configuration "out of the box" in cloudron
                              Updates to the documentation (eg that the logs location is under /home/git/gitlab/log). Maybe even putting that location in the file explorer, to make log capture / analysis easier.
                              Options within the cloudron platform itself to more aggressively reject IP addresses. It was noted that some attacker IPs were re-used after some time.
                              

                              Agreed.

I'd add: copy the salient bits of the above post (or at least link to it) into the docs too.

                              I use Cloudron with Gandi & Hetzner

allanbowe (#15)

                                Just discovered a setting at the following path: /admin/application_settings/general#js-visibility-settings

                                Section: Restricted visibility levels
                                Setting: Public - If selected, only administrators are able to create public groups, projects, and snippets. Also, profiles are only visible to authenticated users.

[screenshot of the Restricted visibility levels setting]

After checking this and testing with curl, the /api/v4/users/XXX endpoints now consistently return a 404, whether authenticated or not!
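A quick way to verify this (a sketch; the domain and user ID are placeholders, and the PRIVATE-TOKEN header is only needed for the authenticated check):

    # Unauthenticated request: should now print 404
    curl -s -o /dev/null -w "%{http_code}\n" https://git.example.com/api/v4/users/42

    # Authenticated as a regular (non-admin) user: also 404 after the change
    curl -s -o /dev/null -w "%{http_code}\n" \
      -H "PRIVATE-TOKEN: <personal_access_token>" \
      https://git.example.com/api/v4/users/42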

                                I suspect this is the fix, but will wait and see if there are any more "Unlock Instructions" emails tonight / tomorrow.

Weirdly, after checking this checkbox and hitting save, it appears unchecked again straight away, but refreshing the page shows that it was indeed saved.

Another side note: our email logs showed a large number of requests coming from a subdomain of https://academyforinternetresearch.org/

So it seems that this could be an issue on their radar.

allanbowe (#16)

So it turns out this does NOT stop the "Unlock instructions" emails from being sent. They continue even after forcing 2FA for all users.
What is more, we even get the emails for internal staff who don't have a password at all, because they authenticate via OIDC in Cloudron.

                                  Any suggestions?

allanbowe (#17)

One thought is that, now that the usernames are "known", the attacker can continue the login attempts (even though they are futile).

So our new approach is to delete the old accounts and create new ones.

james (Staff) (#18)

                                      Hello @allanbowe
I would assume this is a temporary automated attack.
It might be a good idea to only allow access to Cloudron or GitLab from a VPN for a few days.
That way the bots will notice that action was taken and will either not resume for a while or stop completely.

allanbowe (#19)

Is there a way to restrict access to a Cloudron app to users on the Cloudron VPN?

I did not realise this was a feature; it would be amazing and very (very) useful indeed.

james (Staff) (#20)

                                          Hello @allanbowe
Not by default from Cloudron (maybe in the future).

I would advise temporarily editing the GitLab NGINX config to only allow certain IP addresses.
This manual change will be reset with every Cloudron / server / app restart, so it really is temporary.

                                          Example for APP ID 682ca768-93e5-4bcb-a760-677daa9a8e3b

                                          Go into the application NGINX config folder:

                                          cd /home/yellowtent/platformdata/nginx/applications/682ca768-93e5-4bcb-a760-677daa9a8e3b
                                          

                                          Edit the sub.domain.tld.conf file, in this case dokuwiki.cloudron.dev:

                                          nano dokuwiki.cloudron.dev.conf
                                          

Inside the https server block of that file, add:

# https server
server {
    [...]
    # allow localhost
    allow 127.0.0.1;
    # allow the Cloudron proxy
    allow 172.18.0.1;
    # allow this server's public IPv4
    allow REDACTED-IPV4;
    # allow this server's public IPv6
    allow REDACTED-IPV6;
    # allow another specific IPv4, e.g. a VPN
    allow VPN-IP;
    # deny everything else
    deny all;
    [...]
}
                                          

                                          Reload the NGINX service:

                                          systemctl reload nginx.service
                                          

Any request from an IP that is not explicitly allowed will then get a 403 Forbidden:

[screenshot of the 403 Forbidden response]

Keep in mind, every Cloudron / server / app restart will reset this change!
