AutoGPT Classic Agent Setup


📋 Requirements

  • Linux / macOS

  • Windows (WSL)

  • Windows

!!! attention We recommend setting up AutoGPT with WSL. Some things don't work exactly the same on Windows and we currently can't provide specialized instructions for all those cases.

Setting up AutoGPT

Getting AutoGPT

Since we don't ship AutoGPT as a desktop application, you'll need to download the project from GitHub and give it a place on your computer.

  • To get the latest bleeding edge version, use master.

  • If you're looking for more stability, check out the latest AutoGPT release.

!!! note These instructions don't apply if you're looking to run AutoGPT as a docker image. Instead, check out the Docker setup guide.

Completing the Setup

Once you have cloned or downloaded the project, you can find the AutoGPT Agent in the original_autogpt/ folder. Inside this folder you can configure the AutoGPT application with a .env file and (optionally) a JSON configuration file:

  • .env for environment variables, which are mostly used for sensitive data like API keys

  • a JSON configuration file to customize certain features of AutoGPT's Components

See the Configuration reference for a list of available environment variables.

  1. Find the file named .env.template. This file may be hidden by default in some operating systems due to the dot prefix. To reveal hidden files, follow the instructions for your operating system (Windows, macOS).

  2. Create a copy of .env.template and call it .env; if you're already in a command prompt/terminal window:
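    From the original_autogpt/ folder, that's:

    ```shell
    # copy the template to a live config file
    cp .env.template .env
    ```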

  3. Open the .env file in a text editor.

  4. Set API keys for the LLM providers that you want to use: see below.

  5. Enter any other API keys or tokens for services you would like to use.

    !!! note To activate and adjust a setting, remove the # prefix.

  6. Save and close the .env file.

  7. Optional: run poetry install to install all required dependencies. The application also checks for and installs any required dependencies when it starts.

  8. Optional: configure the JSON file (e.g. config.json) with your desired settings. The application will use default settings if you don't provide a JSON configuration file. Learn how to set up the JSON configuration file.

You should now be able to explore the CLI (./autogpt.sh --help) and run the application.
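For example (a sketch; run --help to see the subcommands your version actually provides):

```shell
cd original_autogpt

# list available subcommands and options
./autogpt.sh --help

# start the agent (subcommand per the CLI's own help output)
./autogpt.sh run
```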

See the user guide for further instructions.

Setting up LLM providers

You can use AutoGPT with any of the following LLM providers. Each of them comes with its own setup instructions.

AutoGPT was originally built on top of OpenAI's GPT-4, but now you can get similar and interesting results using other models/providers too. If you don't know which to choose, you can safely go with OpenAI*.

* subject to change

OpenAI

!!! attention To use AutoGPT with GPT-4 (recommended), you need to set up a paid OpenAI account with some money in it. Please refer to OpenAI for further instructions. Free accounts are limited to GPT-3.5 with only 3 requests per minute.

  1. Make sure you have a paid account with some credits set up: Settings > Organization > Billing

  2. Get your OpenAI API key from the API keys page

  3. Open .env

  4. Find the line that says OPENAI_API_KEY=

  5. Insert your OpenAI API Key directly after = without quotes or spaces:
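    For example (placeholder value; substitute your own secret key):

    ```ini
    # .env — placeholder, not a real key
    OPENAI_API_KEY=sk-qwertykeysxxxx
    ```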

    !!! info "Using a GPT Azure instance" If you want to use GPT on an Azure instance, set USE_AZURE to True and make an Azure configuration file.
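    A minimal sketch of the .env side (the companion Azure configuration file's name and format are assumptions; check the repository's template files for the current layout):

    ```ini
    # .env — route OpenAI calls through an Azure instance
    USE_AZURE=True
    ```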

!!! important Keep an eye on your API costs on the Usage page.

Anthropic

  1. Make sure you have credits in your account: Settings > Plans & billing

  2. Get your Anthropic API key from Settings > API keys

  3. Open .env

  4. Find the line that says ANTHROPIC_API_KEY=

  5. Insert your Anthropic API Key directly after = without quotes or spaces:
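    For example (placeholder value; substitute your own secret key):

    ```ini
    # .env — placeholder, not a real key
    ANTHROPIC_API_KEY=sk-ant-xxxx
    ```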

  6. Set SMART_LLM and/or FAST_LLM to the Claude 3 model you want to use. See Anthropic's models overview for info on the available models. Example:
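    Model names change over time; treat these as illustrative:

    ```ini
    # .env — a capable model for planning, a fast one for routine calls
    SMART_LLM=claude-3-opus-20240229
    FAST_LLM=claude-3-haiku-20240307
    ```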

!!! important Keep an eye on your API costs on the Usage page.

Groq

!!! note Although Groq is supported, its built-in function calling API isn't mature. Any features using this API may experience degraded performance. Let us know your experience!

  1. Get your Groq API key from Settings > API keys

  2. Open .env

  3. Find the line that says GROQ_API_KEY=

  4. Insert your Groq API Key directly after = without quotes or spaces:
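    For example (placeholder value; substitute your own secret key):

    ```ini
    # .env — placeholder, not a real key
    GROQ_API_KEY=gsk_xxxx
    ```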

  5. Set SMART_LLM and/or FAST_LLM to the Groq model you want to use. See Groq's models overview for info on the available models. Example:
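    Model names change over time; treat this one as illustrative:

    ```ini
    # .env — example Groq-hosted model
    SMART_LLM=llama3-70b-8192
    ```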

Llamafile

With llamafile you can run models locally, which means no need to set up billing, and guaranteed data privacy.

For more information and in-depth documentation, check out the llamafile documentation.

!!! warning At the moment, llamafile only serves one model at a time. This means you cannot set SMART_LLM and FAST_LLM to two different llamafile models.

!!! warning Due to known upstream issues, llamafiles don't work on WSL. To use a llamafile with AutoGPT in WSL, you will have to run the llamafile in Windows (outside WSL).

!!! note These instructions will download and use mistral-7b-instruct-v0.2.Q5_K_M.llamafile. mistral-7b-instruct-v0.2 is currently the only tested and supported model. If you want to try other models, you'll have to add them to LlamafileModelName in llamafile.py. For optimal results, you may also have to add some logic to adapt the message format, like LlamafileProvider._adapt_chat_messages_for_mistral_instruct(..) does.

  1. Run the llamafile serve script:
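    A sketch, assuming the serve script lives at scripts/llamafile/serve.py in your checkout (verify the path in the repository):

    ```shell
    # downloads the llamafile on first run, then starts the local server
    ./scripts/llamafile/serve.py
    ```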

    The first time this is run, it will download a file containing the model + runtime, which may take a while and use a few gigabytes of disk space.

    To force GPU acceleration, add --use-gpu to the command.

  2. In .env, set SMART_LLM/FAST_LLM or both to mistral-7b-instruct-v0.2

  3. If the server is running on a different address than http://localhost:8080/v1, set LLAMAFILE_API_BASE in .env to the right base URL.
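Putting steps 2 and 3 together, the relevant .env lines might look like this (the last line is only needed if your server is not on the default address):

```ini
# .env — llamafile can only back one model at a time
SMART_LLM=mistral-7b-instruct-v0.2
FAST_LLM=mistral-7b-instruct-v0.2
LLAMAFILE_API_BASE=http://localhost:8080/v1
```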
