Cloudron Forum


AnythingLLM - AI business intelligence tool

App Wishlist · 27 Posts · 11 Posters · 6.3k Views · 17 Watching
• jagan (#4)

    I am using AnythingLLM on the desktop and it blows Open WebUI away for RAG and Agents!
    If we can have AnythingLLM on Cloudron, it would be a gamechanger for groups to collaborate in a workspace with shared documents.

• LoudLemur (#5)

I can see that AnythingLLM supports the excellent LM Studio, though I am not sure how you would use the two together. It is obviously not just a replacement.

• LoudLemur (#6)

Check out the developer here. He looks like a good guy. Isn't he good at giving a technical presentation?

Let's support AnythingLLM on Cloudron! It is wonderful!

        https://iv.ggtyler.dev/watch?v=0vZ69AIP_hM

• LoudLemur referenced this topic
• LoudLemur (#7)

          https://github.com/Mintplex-Labs/anything-llm/issues/1540#issuecomment-2132429437

• LoudLemur (#8)

            Bumpety Bump Bump Bump!

• firmansi (#9)

I hope the Cloudron team can package this app for Cloudron as an alternative to Open WebUI; the latest release seems more and more stable.

• micmc (#10)

                @jagan said in AnythingLLM - AI business intelligence tool:

                I am using AnythingLLM on the desktop and it blows Open WebUI away for RAG and Agents!
                If we can have AnythingLLM on Cloudron, it would be a gamechanger for groups to collaborate in a workspace with shared documents.

Maybe you could explain to us how AnythingLLM "blows" OW away, and whether this is still relevant today? OW has developed a great deal since then, so it would be good to see.

To start with, this is a Desktop app and, yes, there is a "cloud" version; however, the only "cloud version" available is run from Docker (which should make it easier for Cloudron to start with).

Also, given the capabilities of the Desktop app, how relevant would a "cloud version" on Cloudron be?
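For anyone curious about that Docker route, a minimal Compose sketch might look like the following. The image name, port, and storage variable follow the upstream Mintplex-Labs README as I understand it; treat this as an unverified sketch and check the current AnythingLLM Docker documentation before relying on it.

```yaml
# Unverified sketch: running AnythingLLM's server image with Docker Compose.
services:
  anythingllm:
    image: mintplexlabs/anythingllm   # official image per the upstream README
    ports:
      - "3001:3001"                   # web UI / API
    environment:
      - STORAGE_DIR=/app/server/storage
    volumes:
      - anythingllm_storage:/app/server/storage  # persist documents, vector DB, settings
volumes:
  anythingllm_storage:
```

On Cloudron the platform would manage ports, TLS, and volumes itself, so a package would mainly need to map this persistent storage directory into Cloudron's data model.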

                AI Intelligencia RED PILL Podcast
                (coming soon...)

• LoudLemur (#11)

                  AnythingLLM has had enormous improvements since it was first requested by @TheMoodBoardz
                  In particular, support for agents and RAG has been advanced to try and keep up with the huge demand in this area.

                  Installing AnythingLLM on GNU+Linux now automatically installs Ollama, too.

Here is a summary of the main advances. If you want to make RAG and AI agents easier, AnythingLLM would be a great application to support.

Comparing v1.0.0 and v1.9.0 of AnythingLLM, here are the most significant changes, categorized into key areas:

                  Agent System Overhaul

                  • Complete redesign of the agent experience with real-time streaming tool calls
                  • Agents can now download and ingest web files (PDF, Excel, CSV) during conversations
                  • All providers and models support agentic streaming capabilities

                  Local LLM Integration

• Added Microsoft Foundry Local integration for Windows/macOS (free local LLM solution)
                  • Linux now includes Ollama (0.11.4) built-in for immediate local LLM support
                  • ARM64 support for both Linux and Windows platforms

                  Platform & Infrastructure

• Expanded Linux support with automatic AppArmor rules and .desktop file creation
                  • NVIDIA NIM being phased out in favor of newer solutions
                  • Upgraded core Electron version for improved stability

                  UI/UX Improvements

                  • Model swap capability during chats (Ctrl/Cmd+Shift+L)
                  • Workspace and thread searching in sidebar
                  • System prompt version tracking and restoration
                  • Enhanced mobile support with Android beta app

                  Developer Features

                  • MCP (Model Context Protocol) compatibility
                  • No-code AI agent builder
                  • Enhanced API endpoints and developer tools
                  • Multi-modal support for both open and closed-source LLMs

                  Internationalization

                  • Added Portuguese and Estonian translations
                  • Improved multi-language OCR support
                  • Expanded localization coverage

                  The update represents a massive leap from the initial version, transforming from a basic document chat system to a comprehensive AI agent platform with robust local operation capabilities and extensive customization options.


• micmc (#12)

@LoudLemur Yeah, AnythingLLM is amazing. I believe it is much cleaner and more intuitive than Open WebUI.

                    One can install it online as well via Coolify or EasyPanel and it's quite impressive out of the box too.

                    Thanks for sharing this summary.

                    AI Intelligencia RED PILL Podcast
                    (coming soon...)


• humptydumpty (#13)

                      @LoudLemur said in AnythingLLM - AI business intelligence tool:

                      Bumpety Bump Bump Bump!

[GIF: The Office fist bump]


• timconsidine (App Dev) (#14)

@humptydumpty / @loudlemur I started working on an AnythingLLM package, but (a) got diverted and (b) wondered whether it was needed, because the native app is so good.
So it's parked for now.

                        Indie app dev, scratching my itches, lover of Cloudron PaaS


• timconsidine (App Dev) (#15)

Errr, I missed the point slightly.

                          Summary

If you are the only user and just want to chat with your local documents, stick with the Desktop App.
If you want to share knowledge with a team, provide a chat interface for others, or access your LLM tools from your phone/browser, self-hosting on Cloudron is the way to go.

                          So I am picking up packaging AnythingLLM again.

                          Indie app dev, scratching my itches, lover of Cloudron PaaS

• timconsidine (App Dev) (#16)

🍾
AnythingLLM is packaged!
A little bit of testing and clean-up is still needed.

Does anyone want it?
It just depends on whether I make the repo public or private.

The Cloudron team don't seem to want custom apps anymore <tease>

                            Indie app dev, scratching my itches, lover of Cloudron PaaS


• Kubernetes (App Dev) (#17)

                              @timconsidine looks like you are 100% in a flow of productivity 🙂


• humptydumpty (#18)

                                @timconsidine said in AnythingLLM - AI business intelligence tool:

                                Does anyone want it ?

I was interested until ChatGPT told me it requires a GPU to run reasonably fast, or else it falls back to the CPU. Unless it's mistaken, and it would work okay for text prompts on older servers/CPUs (6th-gen Intel)?


• timconsidine (App Dev) (#19)

@Kubernetes Ha! I wish. My good friend TRAE helps me a lot, now with the help of the context7 and Sequential Thinking MCPs.

Amazing how stupid they can be at times, though! 🤣

                                  Indie app dev, scratching my itches, lover of Cloudron PaaS


• timconsidine (App Dev) (#20)

@humptydumpty That could be a good point.

But I am NOT running on a GPU.
I just have a good Hetzner server (128 GB RAM, good CPU).

I guess it's a limbo dance of trial and error: how low can you go?

I have a 'spare' testing Cloudron instance with very modest specs (just a mini PC) and will give it a go over the weekend if I have a chance.

                                    Indie app dev, scratching my itches, lover of Cloudron PaaS


• humptydumpty (#21)

@timconsidine My home servers are 6th- or 7th-gen Intel (i5-6500T or i5-7500T), both with 32GB RAM. I'll give it a try if you make it public on CCAI. What's the worst that could happen 😂?


• timconsidine (App Dev) (#22)

                                        @humptydumpty 👍
                                        I will tidy it up (remove some debug code) and let you know when on CCAI.

                                        Indie app dev, scratching my itches, lover of Cloudron PaaS

• robi (#23)

                                          The performance depends on the model you load and run.

One user found he could run a small model (Gemma 3) on his CPUs as multiple agents that front other API-based models and agents, which do the heavier lifting.
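As a hypothetical illustration of that setup: if a Cloudron package exposes AnythingLLM's standard environment configuration, a CPU-only install could point the LLM provider at a local Ollama serving a small model, while agents call out to API-hosted models for heavier work. The key names below follow AnythingLLM's `.env.example` as I understand it, and the model tag is only an example; treat all of it as unverified and check the upstream docs for your version.

```ini
# Hypothetical .env fragment (key names per AnythingLLM's .env.example; verify for your version)
LLM_PROVIDER='ollama'
OLLAMA_BASE_PATH='http://127.0.0.1:11434'   # local Ollama instance
OLLAMA_MODEL_PREF='gemma3:4b'               # small model that is practical CPU-only
OLLAMA_MODEL_TOKEN_LIMIT=4096
```

Quantized models in the 1B–4B parameter range are generally the realistic ceiling for older CPUs like the 6th/7th-gen Intels mentioned above.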

                                          Conscious tech

