Cloudron Forum


AgenticSeek: open, local Manus AI alternative. Powered by Deepseek R1. No APIs, no $456 monthly bills. Enjoy an AI agent that reasons, codes, and browses with no worries.

App Wishlist
1 Posts 1 Posters 1.2k Views 1 Watching
robi wrote on last edited by
    #1

    AgenticSeek: Manus-like AI powered by Deepseek R1 Agents.

    A fully local alternative to Manus AI: a voice-enabled AI assistant that codes, explores your filesystem, browses the web, and corrects its mistakes, all without sending a byte of data to the cloud. Built on reasoning models like DeepSeek R1, this autonomous agent runs entirely on your hardware, keeping your data private.

    Visit AgenticSeek License Discord

    🛠️ Work in Progress – Looking for contributors!


    Do a web search to find tech startup in Japan working on cutting edge AI research

    Make a snake game in Python

    Scan my network with nmap, find out who is connected?

    Hey can you find where is contract.pdf?

    Features:

    • 100% Local: No cloud, runs on your hardware. Your data stays yours.

    • Voice interaction: Voice-enabled natural interaction.

    • Filesystem interaction: Use bash to navigate and manipulate your files effortlessly.

    • Code what you ask: It can write, debug, and run code in Python, C, Golang, and more languages on the way.

    • Autonomous: If a command flops or code breaks, it retries and fixes it by itself.

    • Agent routing: Automatically picks the right agent for the job.

    • Divide and Conquer: For big tasks, spins up multiple agents to plan and execute.

    • Tool-Equipped: From basic search to flight APIs and file exploration, every agent has its own tools.

    • Memory: Remembers what's useful, your preferences, and conversations from past sessions.

    • Web Browsing: Autonomous web navigation is underway.

    Searching the web with AgenticSeek:


    See media/examples for other use case screenshots.


    Installation

    Make sure you have ChromeDriver and Docker installed.

    For issues related to ChromeDriver, see the Chromedriver Issues section below.
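
    To confirm both prerequisites are available before continuing, a quick check such as the following should work on most systems (this assumes chromedriver and docker are already on your PATH):

    chromedriver --version   # should report a version matching your installed Chrome
    docker --version         # confirms the Docker CLI is available
    docker info              # fails if the Docker daemon is not running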

    1️⃣ Clone the repository and setup

    git clone https://github.com/Fosowl/agenticSeek.git
    cd agenticSeek
    mv .env.example .env
    

    2️⃣ Create a virtual env

    python3 -m venv agentic_seek_env
    source agentic_seek_env/bin/activate     
    # On Windows: agentic_seek_env\Scripts\activate
    

    3️⃣ Install package

    Automatic Installation:

    ./install.sh
    

    Manually:

    pip3 install -r requirements.txt
    # or
    python3 setup.py install
    

    Run locally on your machine

    We recommend using at least Deepseek 14B; smaller models struggle with tool use and quickly forget context.

    1️⃣ Download Models

    Make sure you have Ollama installed.
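
    If Ollama is not installed yet, the official one-line installer from ollama.com (Linux) plus a version check looks like this; on macOS or Windows, use the installer from the Ollama website instead:

    # Linux installer from ollama.com (review the script before piping it to sh)
    curl -fsSL https://ollama.com/install.sh | sh
    ollama --version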

    Download the deepseek-r1:14b model from DeepSeek

    ollama pull deepseek-r1:14b
    

    2️⃣ Run the Assistant (Ollama)

    Start the ollama server

    ollama serve
    

    Change the config.ini file to set the provider_name to ollama and provider_model to deepseek-r1:14b

    NOTE: deepseek-r1:14b is an example; use a bigger model if your hardware allows it.
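
    For example, if you have 24GB+ of VRAM (see the FAQ below), a larger variant such as the 32B tag on Ollama may handle tool use and context better:

    ollama pull deepseek-r1:32b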

    [MAIN]
    is_local = True
    provider_name = ollama
    provider_model = deepseek-r1:14b
    provider_server_address = 127.0.0.1:11434
    

    Start all services:

    ./start_services.sh
    

    Run the assistant:

    python3 main.py
    

    See the Usage section if you don't understand how to use it

    See the Known issues section if you are having issues

    See the Run with an API section if your hardware can't run deepseek locally


    Usage

    Warning: the agent routing system currently works poorly with non-English text, because the routing model was trained on English text. We are working hard to fix this; please use English for now.

    Make sure the services are up and running with ./start_services.sh, then run AgenticSeek with python3 main.py:

    ./start_services.sh
    python3 main.py
    

    You will be prompted with >>>, which indicates that AgenticSeek is waiting for you to type instructions. You can also use speech to text by setting listen = True in the config.

    Here are some usage examples:

    Coding/Bash

    Help me with matrix multiplication in Golang

    Scan my network with nmap, find if any suspicious devices are connected

    Make a snake game in python

    Web search

    Do a web search to find cool tech startup in Japan working on cutting edge AI research

    Can you find on the internet who created agenticSeek?

    Can you find on which website I can buy a rtx 4090 for cheap

    File system

    Hey can you find where is million_dollars_contract.pdf i lost it

    Show me how much space I have left on my disk

    Find and read the README.md and follow the install instruction

    Casual

    Tell me a joke

    Where is flight ABC777 ? my mom is on that plane

    what is the meaning of life ?

    After you type your query, agenticSeek will allocate the best agent for the task.

    Because this is an early prototype, the agent routing system might not always allocate the right agent based on your query.

    Therefore, you should be very explicit about what you want and how the AI should proceed. For example, if you want it to conduct a web search, do not say:

    Do you know some good countries for solo-travel?

    Instead, ask:

    Do a web search and find out which are the best countries for solo-travel


    Run the LLM on your own server

    If you have a powerful computer or a server you can use, but you want to drive it from your laptop, you have the option of running the LLM on a remote server.

    1️⃣ Set up and start the server scripts

    On your "server", the machine that will run the AI model, get its IP address:

    ip a | grep "inet " | grep -v 127.0.0.1 | awk '{print $2}' | cut -d/ -f1
    

    Note: For Windows or macOS, use ipconfig or ifconfig respectively to find the IP address.
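
    For reference, one common way to get the address on each platform (the interface name is an assumption; adjust to your setup):

    # macOS: the primary interface is usually en0
    ipconfig getifaddr en0
    # Windows (cmd or PowerShell): look for the IPv4 Address line
    ipconfig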

    Clone the repository and then run the script stream_llm.py in server/:

    python3 server_ollama.py
    

    2️⃣ Run it

    Now on your personal computer:

    Clone the repository.

    Change the config.ini file to set provider_name to server and provider_model to deepseek-r1:14b. Set provider_server_address to the IP address of the machine that will run the model.

    [MAIN]
    is_local = False
    provider_name = server
    provider_model = deepseek-r1:14b
    provider_server_address = x.x.x.x:5000
    

    Run the assistant:

    ./start_services.sh
    python3 main.py
    

    Run with an API

    Clone the repository.

    Set the desired provider in config.ini:

    [MAIN]
    is_local = False
    provider_name = openai
    provider_model = gpt-4o
    provider_server_address = 127.0.0.1:5000 # can be set to anything, not used
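
    The OpenAI provider also needs an API key. AgenticSeek reads keys from the .env file created during setup; the exact variable name below is an assumption, so check .env.example for the names the project actually expects:

    # in agenticSeek/.env (hypothetical variable name, verify against .env.example)
    OPENAI_API_KEY="sk-..."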
    

    Run the assistant:

    ./start_services.sh
    python3 main.py
    

    Speech to Text

    Speech to text is disabled by default; you can enable it by setting listen to True in config.ini:

    listen = True
    

    Speech to text waits for an AI name as a trigger keyword before it starts listening. You can change the AI name by changing agent_name in config.ini:

    agent_name = Friday
    

    It will work better if you use a common English name like John or Emma.

    After hearing its name, AgenticSeek will listen until it hears one of the following confirmation keywords:

    "do it", "go ahead", "execute", "run", "start", "thanks", "would ya", "please", "okay?", "proceed", "continue", "go on", "do that", "go it", "do you understand?"
    

    Providers

    The table below shows the available providers:

    Provider     Local?  Description
    Ollama       Yes     Run LLMs locally with ease, using Ollama as the LLM provider
    Server       Yes     Host the model on another machine and run it from your local machine
    OpenAI       No      Use the ChatGPT API (non-private)
    Deepseek     No      Deepseek API (non-private)
    HuggingFace  No      Hugging Face API (non-private)

    To select a provider, change config.ini:

    is_local = False
    provider_name = openai
    provider_model = gpt-4o
    provider_server_address = 127.0.0.1:5000
    

    is_local: should be True for any locally running LLM, otherwise False.

    provider_name: Select the provider to use by its name, see the provider list above.

    provider_model: Set the model to use by the agent.

    provider_server_address: can be set to anything if you are not using the server provider.
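
    For contrast, the fully local setup from earlier in this post uses the Ollama provider, so is_local flips to True and the server address points at the local Ollama port:

    is_local = True
    provider_name = ollama
    provider_model = deepseek-r1:14b
    provider_server_address = 127.0.0.1:11434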

    Known issues

    Chromedriver Issues

    Known error #1: chromedriver mismatch

    Exception: Failed to initialize browser: Message: session not created: This version of ChromeDriver only supports Chrome version 113 Current browser version is 134.0.6998.89 with binary path

    This happens if there is a mismatch between your browser and chromedriver versions.

    You need to download the latest version from:

    https://developer.chrome.com/docs/chromedriver/downloads

    If you're using Chrome version 115 or newer, go to:

    https://googlechromelabs.github.io/chrome-for-testing/

    And download the chromedriver version matching your OS.
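
    After installing, you can verify that the two versions line up; the major version numbers should match (this assumes both binaries are on your PATH):

    google-chrome --version   # on macOS, check Chrome > About Google Chrome instead
    chromedriver --version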


    FAQ

    Q: What hardware do I need?

    7B Model: GPU with 8GB VRAM. 14B Model: 12GB GPU (e.g., RTX 3060). 32B Model: 24GB+ VRAM.
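
    To see how much VRAM an NVIDIA GPU has, the standard nvidia-smi query flags can be used:

    nvidia-smi --query-gpu=name,memory.total --format=csv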

    Q: Why Deepseek R1 over other models?

    Deepseek R1 excels at reasoning and tool use for its size, and we think it's a solid fit for our needs. Other models work fine, but Deepseek is our primary pick.

    Q: I get an error running main.py. What do I do?

    Ensure Ollama is running (ollama serve), your config.ini matches your provider, and dependencies are installed. If none of that works, feel free to raise an issue.
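
    A quick sanity check before opening an issue might look like this (assuming the default Ollama port from config.ini and the project's virtual env):

    # Ollama should answer on its default port and list your pulled models
    curl http://127.0.0.1:11434/api/tags
    # reinstall dependencies inside the virtual env if imports fail
    pip3 install -r requirements.txt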

    Q: How do I join the Discord?

    Ask in the Community section for an invite.

    Q: Can it really run 100% locally?

    Yes. With the Ollama or Server providers, all speech-to-text, LLM, and text-to-speech models run locally. Non-local options (OpenAI or other APIs) are optional.

    Q: How come it is older than Manus?

    We started this as a fun side project to make a fully local, Jarvis-like AI. However, with the rise of Manus, we saw the opportunity to redirect some effort toward making yet another alternative.

    Q: How is it better than Manus?

    It's not, but we prioritize local execution and privacy over a cloud-based approach. It's a fun, accessible alternative!

    Conscious tech
