Cloudron Forum

H2O LLM Studio, no-code GUI, fine-tuning LLMs

App Wishlist · 4 Posts · 2 Posters · 1.3k Views
robi wrote: #1
[image: H2O LLM Studio logo]

Welcome to H2O LLM Studio, a framework and no-code GUI designed for fine-tuning state-of-the-art large language models (LLMs).
https://github.com/h2oai/h2o-llmstudio/


With H2O LLM Studio, you can:

• easily and effectively fine-tune LLMs without any coding experience.
• use a graphical user interface (GUI) designed specifically for large language models.
• fine-tune any LLM using a wide variety of hyperparameters.
• use recent fine-tuning techniques such as Low-Rank Adaptation (LoRA) and 8-bit model training with a low memory footprint.
• use advanced evaluation metrics to judge the answers generated by the model.
• track and compare model performance visually; Neptune integration is also available.
• chat with your model and get instant feedback on its performance.
• easily export your model to the Hugging Face Hub and share it with the community.
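To make the LoRA bullet above concrete, here is a minimal numeric sketch of the idea itself — not H2O LLM Studio's API, and the layer sizes are made up for illustration. The pretrained weight matrix stays frozen; only a low-rank update B·A is trained, which is why the memory footprint is so much smaller than full fine-tuning.

```python
import numpy as np

# Illustrative sketch of Low-Rank Adaptation (LoRA), with assumed sizes.
# A frozen pretrained weight W is adapted by adding a trainable
# low-rank product B @ A, so only A and B need gradients.
d_out, d_in, rank = 768, 768, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # pretrained weight, frozen
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, init 0

def adapted_forward(x):
    # Effective weight is W + B @ A; since B starts at zero, the adapted
    # layer is initially identical to the pretrained one.
    return (W + B @ A) @ x

full_params = W.size            # parameters touched by full fine-tuning
lora_params = A.size + B.size   # parameters trained under LoRA
print(f"full fine-tuning: {full_params} params")
print(f"LoRA (rank {rank}): {lora_params} params "
      f"({100 * lora_params / full_params:.1f}%)")
```

For this toy layer LoRA trains about 2% of the parameters, which is the mechanism behind the "low memory footprint" claim.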

      Quickstart

For questions, discussion, or just hanging out, come and join the Discord!

We offer several ways of getting started quickly: using the CLI for fine-tuning LLMs, or notebooks that open on Kaggle or in Colab.

      Conscious tech

LoudLemur wrote (edited): #2

Well done on finding this one, @robi! I was about to suggest it myself.
How did I miss it here?

        https://huggingface.co/h2oai
        https://gpt-gm.h2o.ai/

        There is a video of a chap using it with Falcon 40b here:
        https://invidious.io.lol/watch?v=H8Dx-iUY49s&quality=dash

        I can't seem to find that model there, at the moment. https://falcon.h2o.ai/

Near the very end, you can see him upload (or link by URL) a document, which he then interrogates using the AI.

LoudLemur wrote: #3

          @robi On what sort of VPS would you consider running this?

robi wrote: #4

@LoudLemur No different from the ones you would use to run the other projects that can run models.
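As a rough back-of-envelope for the VPS question (my own estimate, not a figure from the thread): weight memory scales with parameter count times bytes per parameter, which is why the 8-bit option mentioned earlier roughly halves the footprint versus fp16. The sketch below counts weights only; activations, optimizer state, and KV cache add substantially more during fine-tuning.

```python
# Back-of-envelope weight-memory estimate (weights only; training needs
# considerably more for gradients, optimizer state, and activations).
def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1e9

for n_billion in (7, 40):  # e.g. a 7B model, and Falcon 40B from above
    fp16 = weight_memory_gb(n_billion * 1e9, 2)  # 16-bit weights
    int8 = weight_memory_gb(n_billion * 1e9, 1)  # 8-bit quantized
    print(f"{n_billion}B params: ~{fp16:.0f} GB fp16, ~{int8:.0f} GB int8")
```

So even quantized, Falcon 40B class models want ~40 GB for weights alone, which points at GPU servers rather than a typical small VPS.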

            Conscious tech
