Cloudron Forum


IPv4 outbound broken from app containers since today - n8n / Pipedrive

Solved | Support
Tags: ovh, networking, routing
adm1n (#1)

    Hi,

    Since today (April 9, 2026), outbound IPv4 connectivity from app containers is broken. Nothing was changed on our side. Restoring an n8n backup from several days ago did not fix the issue.

    Symptoms:

    • Workflows using Pipedrive (and Webflow) fail with: "The host is unreachable, perhaps the server is offline"
    • Logs show: connect EHOSTUNREACH 172.67.68.102:443 and connect EHOSTUNREACH 104.18.188.228:443
    • Error also appears for community nodes fetching: Error while fetching community nodes: connect EHOSTUNREACH 172.67.68.102:443

    Diagnosis performed:

    1. DNS is working — after fixing the upstream DNS (changed from OVH DNS 213.186.33.99 to 1.1.1.1 on ens3/ens4), nslookup api.pipedrive.com resolves correctly from inside the container.

    2. IPv6 works, IPv4 does not — curl https://api.pipedrive.com succeeds via IPv6 but curl -4 https://api.pipedrive.com fails with "No route to host" from inside the container.

    3. Host can reach the IPs fine — ping 172.67.68.102 and ping 104.18.188.228 both succeed from the host.

    4. Container network looks correct:

      • Default route: default via 172.18.0.1 dev eth0
      • Container IP: 172.18.18.25/16
    5. iptables / NAT look correct:

      • MASQUERADE rule exists for 172.18.0.0/16 → !br-c372a117c03f
      • cloudron_blocklist ipset is nearly empty (8 entries), does not contain Cloudflare IPs
      • DOCKER-CT, DOCKER-FORWARD, DOCKER-USER chains reviewed — nothing obviously blocking
    6. The issue seems to affect all app containers for external IPv4, not just n8n.
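For reference, the checks above can be collected in one place (commands as run inside the affected container; the hostnames and IPs are the ones from this report). The classification at the end is a sketch: EHOSTUNREACH is a kernel routing error, not a DNS failure or a firewall REJECT, which is what points the investigation at route selection on the host.

```shell
# Checks from inside the app container (hostname/IPs as reported above):
#   nslookup api.pipedrive.com                           # DNS: resolves after switching to 1.1.1.1
#   curl -6 -sS https://api.pipedrive.com -o /dev/null   # IPv6 path: succeeds
#   curl -4 -sS https://api.pipedrive.com -o /dev/null   # IPv4 path: "No route to host"
#   ip route                                             # default via 172.18.0.1 dev eth0
# Classify the error string seen in the n8n logs:
err='connect EHOSTUNREACH 172.67.68.102:443'
case "$err" in
  *EHOSTUNREACH*) echo 'kernel routing failure (no route to host)' ;;
  *ECONNREFUSED*) echo 'remote end refused the connection' ;;
  *)              echo 'other error' ;;
esac
```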

    Question: Did something change on the Cloudron platform side today (routing, iptables rules, network configuration) that could explain why IPv4 forwarding from containers to external IPs stopped working, while IPv6 remains functional?

    Host info:

    • Cloudron host ID: 9fa34633-859b-460d-a883-d1d3f5030f54-0
    • Affected app container: 53ed2453-c0d6-49ca-96f9-666104462c2f (n8n)
    • VPS provider: OVH / OpenStack

    Please let us know what to check next or if a platform-level fix is needed.

    Thank you.

james (Staff) (#2)

      Hello @adm1n

      @adm1n said:

      Question: Did something change on the Cloudron platform side today (routing, iptables rules, network configuration) that could explain why IPv4 forwarding from containers to external IPs stopped working, while IPv6 remains functional?

      If you see no Cloudron update in your recent system backups, no, nothing changed.

      Did you try to reboot the server?

adm1n (#3)

Hello. Yes, of course: we tried rebooting the host server and the n8n container several times.

nebulon (Staff) (#4)

Did you update Cloudron recently, in a way that might coincide with this?

adm1n (#5)

            No recent Cloudron update on my end — the issue was flagged by a colleague when a workflow stopped working. The error was "Host Unreachable" on outbound container traffic, which kicked off my investigation.

            After a full network audit, here's what I found:

            • Two default routes coexist at identical metric 100 — one via ens3 (public, 135.125.47.1, healthy) and one via ens4 (OVH private vRack, 10.10.0.1, dead — ARP FAILED, 100% packet loss).
            • For locally-originated traffic, the kernel prefers ens3 via source address selection, so the host itself has no issue reaching the internet.
            • For forwarded container traffic (from 172.18.0.0/16), there's no source preference, so the kernel picks ens4 first (FIB order) — and the packet dies there.
            • Firewall rules are not the issue: DOCKER-FORWARD already accepts traffic from br-c372a117c03f (34K packets counted), MASQUERADE is present and active, rp_filter is set to loose.
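A quick way to spot this class of problem is to look for more than one default route sharing the same metric. A minimal sketch, using sample route lines that mirror the audit above (on a live host you would pipe `ip -4 route show` into the awk instead):

```shell
# Sample `ip -4 route` output mirroring the audit above (assumed values):
routes='default via 135.125.47.1 dev ens3 proto dhcp metric 100
default via 10.10.0.1 dev ens4 proto dhcp metric 100
10.10.0.0/24 dev ens4 proto kernel scope link'
# Count default routes per metric; any count > 1 is a tie that the kernel
# must break by FIB order for forwarded traffic.
echo "$routes" | awk '/^default/ { count[$NF]++ }
  END { for (m in count) if (count[m] > 1) print "duplicate default routes at metric " m }'
```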

            My proposed fix is to suppress the default route advertised by ens4's DHCP in Netplan (dhcp4-overrides: use-routes: false), keeping the 10.10.0.0/24 connected route intact for private network access.

            Before I apply anything — am I on the right track? And is there anything Cloudron-side that manages iptables or routing that I should be aware of before touching Netplan?

nebulon (Staff) (#6)

Can't say I am fully following the issue, but Cloudron itself does not interfere with netplan or the network setup on a VPS. So it may be worth checking with OVH what they have changed there.

Cloudron's iptables rules are all set up in https://git.cloudron.io/platform/box/-/blob/master/setup/start/cloudron-firewall.sh?ref_type=heads

adm1n (#7)

                Issue resolved — posting the full diagnosis in case it helps others.


                Root cause

                OVH private network (vRack / ens4) DHCP was injecting a default route at the same metric (100) as the public interface (ens3). For locally-originated traffic this was harmless — the kernel picked ens3 via source-address selection. But for forwarded Docker traffic (172.18.0.0/16), there's no source preference, so the kernel fell back to FIB order and picked ens4 first. The ens4 gateway (10.10.0.1) doesn't actually exist on OVH private networks — ARP resolution failed, the kernel returned EHOSTUNREACH, and every outbound container connection died with No route to host.

                • IPv6 unaffected — only one IPv6 default gateway existed (on ens3), no competition.
                • Host itself unaffected — source-address selection always preferred ens3 for locally-originated packets.
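The first-match behavior can be illustrated with a toy lookup over the same route list (assumed FIB order as described above; on a live host, `ip route get <dst> from <container-ip> iif <bridge>` shows the kernel's actual choice for the forwarded path):

```shell
# Toy model: two equal-metric IPv4 defaults, in the FIB order described above.
fib='default via 10.10.0.1 dev ens4 metric 100
default via 135.125.47.1 dev ens3 metric 100'
# Locally-originated traffic: source-address selection prefers the interface
# that owns the chosen source IP, here the public address on ens3.
echo 'host traffic      -> 135.125.47.1 (ens3)  via source-address selection'
# Forwarded traffic from 172.18.0.0/16: no local source to prefer, so the
# first matching FIB entry wins, and here that is the dead vRack gateway.
echo "$fib" | head -n1 | awk '{ print "forwarded traffic ->", $3, "(" $5 ")  first FIB match" }'
```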

                Fix applied — /etc/netplan/50-cloud-init.yaml

                ens4:
                  match:
                    macaddress: "fa:16:3e:ee:32:6a"
                  dhcp4: true
                  dhcp4-overrides:
                    use-routes: false
                    use-dns: false
                  set-name: "ens4"
                  mtu: 1500
                
                netplan apply
                

                This drops the phantom default route while keeping the 10.10.0.0/24 connected route intact, so private network traffic still works. Container outbound IPv4 restored immediately.


                Note for OVH + vRack users — if you run Cloudron with a second interface for private networking, add use-routes: false on that interface. OVH's DHCP silently advertises a default gateway that doesn't route to the internet, which breaks Docker NAT for all containers while leaving the host itself seemingly fine.

nebulon (Staff) (#8)

                  Glad you found the issue and thanks for sharing the fix.

nebulon has marked this topic as solved.
