
Cloudron Forum


AnythingLLM - AI business intelligence tool

App Wishlist
23 Posts · 10 Posters · 4.6k Views
timconsidine (App Dev), #14, in reply to humptydumpty:

    @LoudLemur said in AnythingLLM - AI business intelligence tool:

        Bumpety Bump Bump Bump!

    (The Office fist-bump GIF)

@humptydumpty / @loudlemur I started working on an AnythingLLM package, but (a) got diverted and (b) wondered whether it was needed, because the native app is so good.
So it's parked for now.

Custom apps available at CCAI https://ccai.appx.uk (ask if an app is not listed in the catalogue).
timconsidine (App Dev), #15:

Errr, I missed the point slightly.

Summary

If you are the only user and just want to chat with your local documents, stick with the Desktop App.
If you want to share knowledge with a team, provide a chat interface for others, or access your LLM tools from your phone or browser, self-hosting on Cloudron is the way to go.

So I am picking up packaging AnythingLLM again.

Custom apps available at CCAI https://ccai.appx.uk (ask if an app is not listed in the catalogue).
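For the self-hosted scenario above, the usual way to reach the instance from a phone or another machine is plain HTTPS against its developer API. The sketch below is a minimal illustration only, under stated assumptions: the hostname, workspace slug, and API key are hypothetical, and the `POST /api/v1/workspace/<slug>/chat` endpoint shape and `textResponse` field are recalled from AnythingLLM's developer API documentation, so verify them against the API docs built into your own instance before relying on this.

```python
# Minimal sketch: chatting with a self-hosted AnythingLLM instance from any
# device, instead of the desktop app.
# Assumptions (verify against your instance's built-in API docs):
#   - BASE_URL points at your Cloudron-hosted AnythingLLM (hypothetical hostname)
#   - API_KEY is a developer API key generated in the AnythingLLM admin UI
#   - the workspace chat endpoint is POST /api/v1/workspace/<slug>/chat
import requests

BASE_URL = "https://anythingllm.example.com"   # hypothetical Cloudron app URL
API_KEY = "YOUR_API_KEY"                       # generated in the AnythingLLM UI
WORKSPACE = "team-docs"                        # hypothetical workspace slug

def ask(message: str) -> str:
    """Send one chat message to a workspace and return the model's reply."""
    resp = requests.post(
        f"{BASE_URL}/api/v1/workspace/{WORKSPACE}/chat",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"message": message, "mode": "chat"},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json().get("textResponse", "")

if __name__ == "__main__":
    print(ask("Summarise the onboarding document for a new team member."))
```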
timconsidine (App Dev), #16:

🍾 AnythingLLM is packaged! A little bit of testing and clean-up is still needed.

Does anyone want it? That just determines whether I make the repo public or private.

The Cloudron team don't seem to want custom apps anymore <tease>

Custom apps available at CCAI https://ccai.appx.uk (ask if an app is not listed in the catalogue).
Kubernetes (App Dev), #17:

@timconsidine looks like you are 100% in a flow of productivity 🙂
humptydumpty, #18:

@timconsidine said in AnythingLLM - AI business intelligence tool:

    Does anyone want it?

I was interested until ChatGPT told me it requires a GPU to work reasonably fast, or else it falls back to the CPU. Unless it's mistaken, and it would work okay for text prompts on older servers/CPUs (6th-gen Intel)?
timconsidine (App Dev), #19:

@Kubernetes ha! I wish. My good friend TRAE helps me a lot, now with the help of the context7 and Sequential Thinking MCPs.

Amazing how stupid they can be at times, though! 🤣

Custom apps available at CCAI https://ccai.appx.uk (ask if an app is not listed in the catalogue).
timconsidine (App Dev), #20:

@humptydumpty that could be a good point.

But I am NOT running on a GPU. I just have a good Hetzner server (128 GB RAM, good CPU).

I guess it's a limbo-dance of trial and error: how low can you go?

I have a 'spare' testing Cloudron instance with very modest specs (just a mini PC) and will give it a go over the weekend if I have a chance.

Custom apps available at CCAI https://ccai.appx.uk (ask if an app is not listed in the catalogue).
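The "how low can you go" question mostly comes down to tokens per second for generation on the CPU. Here is a minimal sketch of such a check, assuming a local Ollama server is running on the box under test with a small model already pulled; Ollama is not something the thread specifies, just one common CPU-friendly backend that AnythingLLM can point at.

```python
# Minimal sketch of a "how low can you go" check: measure CPU-only generation
# speed on a candidate server before committing to it.
# Assumption: a local Ollama server is running on the machine under test
# and a small model such as gemma3 has already been pulled.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "gemma3"   # small model, as mentioned later in the thread

def tokens_per_second(prompt: str) -> float:
    """Run one non-streaming generation and compute tokens/sec from Ollama's stats."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=600,
    )
    resp.raise_for_status()
    data = resp.json()
    # eval_count = generated tokens, eval_duration = generation time in nanoseconds
    return data["eval_count"] / (data["eval_duration"] / 1e9)

if __name__ == "__main__":
    tps = tokens_per_second("Explain what AnythingLLM does in two sentences.")
    print(f"~{tps:.1f} tokens/sec on this CPU")
```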
humptydumpty, #21:

@timconsidine My home servers are 6th- or 7th-gen Intel (i5-6500T or i5-7500T), both with 32 GB RAM. I'll give it a try if you make it public on CCAI. What's the worst that could happen 😂?
timconsidine (App Dev), #22:

@humptydumpty 👍
I will tidy it up (remove some debug code) and let you know when it's on CCAI.

Custom apps available at CCAI https://ccai.appx.uk (ask if an app is not listed in the catalogue).
robi, #23:

The performance depends on the model you load and run.

One guy found he could run a small model (Gemma 3) on his CPUs as multiple agents that front other API-based models and agents, which do the heavier lifting.

Conscious tech
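The setup described here is essentially a router: a small CPU-hosted model handles the cheap front-line work and escalates to heavier API-based models only when needed. Below is a minimal sketch of that shape, assuming the same local Ollama/gemma3 backend as the earlier sketch for the small model; the remote call is a hypothetical placeholder, not any particular SDK.

```python
# Minimal sketch of the pattern described above: a small CPU-hosted model
# (assumed here to be gemma3 served by a local Ollama) acts as the front agent
# and only hands work off to a heavier API-based model when needed.
import requests

def local_small_model(prompt: str) -> str:
    """Small CPU model doing the cheap front-line work."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "gemma3", "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["response"]

def remote_api_model(prompt: str) -> str:
    """Placeholder for the heavier API-based model that does the heavy lifting."""
    raise NotImplementedError("wire this to your hosted model provider")

def needs_heavy_lifting(task: str) -> bool:
    """Let the small model decide whether to escalate; keep its job simple."""
    verdict = local_small_model(
        "Answer only YES or NO: does this task need a large, expensive model?\n" + task
    )
    return verdict.strip().upper().startswith("YES")

def handle(task: str) -> str:
    """Front every request with the small local model; escalate only when needed."""
    return remote_api_model(task) if needs_heavy_lifting(task) else local_small_model(task)
```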