Setting Up AutoGPT (Self-Host)

Introduction

This guide will help you set up the server and builder for the project.

!!! warning
    DO NOT FOLLOW ANY OUTSIDE TUTORIALS AS THEY WILL LIKELY BE OUT OF DATE

Prerequisites

To set up the server, you need to have Node.js (with NPM) and Docker (with Docker Compose) installed.

Checking if you have Node.js & NPM installed

We use Node.js to run our frontend application.

If you need assistance installing Node.js: https://nodejs.org/en/download/

NPM is included with Node.js, but if you need assistance installing NPM: https://docs.npmjs.com/downloading-and-installing-node-js-and-npm

You can check if you have Node.js & NPM installed by running the following commands:

```shell
node -v
npm -v
```

Once you have Node.js installed, you can proceed to the next step.

Checking if you have Docker & Docker Compose installed

Docker containerizes applications, while Docker Compose orchestrates multi-container Docker applications.

If you need assistance installing Docker: https://docs.docker.com/desktop/

Docker Compose is included with Docker Desktop, but if you need assistance installing it separately: https://docs.docker.com/compose/install/

You can check if you have Docker and Docker Compose installed by running the following commands:
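For example:

```shell
docker -v
docker compose version
```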

Once you have Docker and Docker Compose installed, you can proceed to the next step.

Raspberry Pi 5 Specific Notes

On Raspberry Pi 5 with Raspberry Pi OS, the default 16K page size will cause issues with the supabase-vector container (expected: 4K). To fix this, edit /boot/firmware/config.txt and add:

```ini
kernel=kernel8.img
```

Then reboot. You can check your page size with:

```bash
getconf PAGESIZE
```

16384 means 16K (incorrect), and 4096 means 4K (correct). After adjusting, `docker compose up -d --build` should work normally. See supabase/supabase#33816 for additional context.

If you're self-hosting AutoGPT locally, we recommend using our official setup script to simplify the process. This will install dependencies (like Docker), pull the latest code, and launch the app with minimal effort.

For macOS/Linux:

For Windows (PowerShell):

This method is ideal if you're setting up for development or testing and want to skip manual configuration.

Manual Setup

Cloning the Repository

The first step is cloning the AutoGPT repository to your computer. To do this, open a terminal window in a folder on your computer and run:
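For example, to clone over HTTPS:

```shell
git clone https://github.com/Significant-Gravitas/AutoGPT.git
```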

If you get stuck, follow this guide.

Once that's complete you can continue the setup process.

Running the AutoGPT Platform

To run the platform, follow these steps:

  • Navigate to the autogpt_platform directory inside the AutoGPT folder:

  • Copy the .env.default file to .env in autogpt_platform:

    This command will copy the .env.default file to .env in the autogpt_platform directory. You can modify the .env file to add your own environment variables.

  • Run the platform services:

    This command will start all the necessary backend services defined in the docker-compose.yml file in detached mode.
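Assuming the repository was cloned into a folder named AutoGPT, the three steps above might look like:

```shell
cd AutoGPT/autogpt_platform    # 1. navigate to the platform directory
cp .env.default .env           # 2. create your local environment file
docker compose up -d --build   # 3. start all services in detached mode
```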


🛠️ Using the Makefile for Common Tasks

The repository includes a Makefile with helpful commands to streamline setup and development. You may use make commands as an alternative to calling Docker or scripts directly.

Most-used Makefile commands

Inside the autogpt_platform directory, you can use:

| Command | What it Does |
| --- | --- |
| `make start-core` | Start just the core services (Supabase, Redis, RabbitMQ) in background |
| `make stop-core` | Stop the core services |
| `make logs-core` | Tail the logs for core services |
| `make format` | Format & lint backend (Python) and frontend (TypeScript) code |
| `make migrate` | Run backend database migrations |
| `make run-backend` | Run the backend FastAPI server |
| `make run-frontend` | Run the frontend Next.js development server |

Example usage:
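For instance, a typical development session might start with:

```shell
make start-core    # bring up Supabase, Redis, RabbitMQ
make migrate       # apply database migrations
make run-backend   # start the FastAPI server
```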

You can always check available Makefile recipes by running:
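Assuming the Makefile follows the common convention of defining a help target:

```shell
make help
```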

(or just inspecting the Makefile itself).


Checking if the application is running

You can check if the server is running by visiting http://localhost:3000 in your browser.

Notes:

By default, the services run on the following ports:

  • Frontend UI Server: 3000

  • Backend WebSocket Server: 8001

  • Execution API REST Server: 8006

Additional Notes

You may want to change your encryption key in the .env file in the autogpt_platform/backend directory.

To generate a new encryption key, run the following command in Python:
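If the project expects a Fernet-style key (32 random bytes, urlsafe base64 — an assumption worth verifying against the backend's .env comments), the standard library can produce a compatible value:

```python
import base64
import os

# Generate 32 random bytes and encode them as urlsafe base64 --
# the same format produced by cryptography's Fernet.generate_key().
key = base64.urlsafe_b64encode(os.urandom(32)).decode()
print(key)
```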

Or run the following command in the autogpt_platform/backend directory:

Then, replace the existing key in the autogpt_platform/backend/.env file with the new one.

📌 Windows Installation Note

When installing Docker on Windows, it is highly recommended to select WSL 2 instead of Hyper-V. Using Hyper-V can cause compatibility issues with Supabase, leading to the supabase-db container being marked as unhealthy.

Steps to enable WSL 2 for Docker:

  1. Ensure that your Docker settings use WSL 2 as the default backend:

    • Open Docker Desktop.

    • Navigate to Settings > General.

    • Check Use the WSL 2 based engine.

  2. Restart Docker Desktop.

Already Installed Docker with Hyper-V?

If you initially installed Docker with Hyper-V, you don’t need to reinstall it. You can switch to WSL 2 by following these steps:

  1. Open Docker Desktop.

  2. Go to Settings > General.

  3. Enable Use the WSL 2 based engine.

  4. Restart Docker.

🚨 Warning: Enabling WSL 2 may erase your existing containers and build history. If you have important containers, consider backing them up before switching.

For more details, refer to Docker's official documentation.

Development

Frontend Development

Running the frontend locally

To run the frontend locally, you need to have Node.js and pnpm installed on your machine.

Install Node.js to manage dependencies and run the frontend application.

Install pnpm to manage the frontend dependencies.

Run the service dependencies (backend, database, message queues, etc.):

Go to the autogpt_platform/frontend directory:

Install the dependencies:

Generate the API client:

Run the frontend application:
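Put together, the steps above might look like the following sketch. The pnpm script names are assumptions; check frontend/package.json and docker-compose.yml for the exact names:

```shell
# from autogpt_platform: start backend, database, message queues, etc.
docker compose up -d

cd frontend
pnpm install               # install dependencies
pnpm generate:api-client   # script name assumed; regenerates the typed API client
pnpm dev                   # start the Next.js dev server on port 3000
```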

Formatting & Linting

Auto formatter and linter are set up in the project. To run them:

Format the code:

Lint the code:

Or for both frontend and backend, from the root:
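As a sketch (the pnpm script names are assumptions; `make format` is listed in the Makefile section above):

```shell
# inside autogpt_platform/frontend
pnpm format
pnpm lint

# or both frontend and backend, from the directory containing the Makefile
make format
```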

Testing

To run the tests, you can use the following command:
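For example (script name assumed; see frontend/package.json for the actual test scripts):

```shell
pnpm test
```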

Backend Development

Running the backend locally

To run the backend locally, you need to have Python 3.10 or higher installed on your machine.

Install Poetry to manage dependencies and virtual environments.

Run the backend dependencies (database, message queues, etc.):

Or equivalently with Makefile:

Go to the autogpt_platform/backend directory:

Install the dependencies:

Run the backend server:

Or from within autogpt_platform:
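Put together, using the Makefile targets listed earlier (run from the autogpt_platform directory):

```shell
make start-core                  # start database, message queues, etc.
(cd backend && poetry install)   # install Python dependencies into a virtualenv
make run-backend                 # start the FastAPI server
```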

Formatting & Linting

Auto formatter and linter are set up in the project. To run them:

Format the code:

Lint the code:

Or format both frontend and backend at once:
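A sketch, assuming Poetry scripts named `format` and `lint` are defined in backend/pyproject.toml:

```shell
# inside autogpt_platform/backend
poetry run format
poetry run lint

# or both frontend and backend via the Makefile
make format
```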

Testing

To run the tests:
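For example, with pytest (the same runner used for block tests):

```shell
# inside autogpt_platform/backend
poetry run pytest
```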

Adding a New Agent Block

To add a new agent block, you need to create a new class that inherits from Block and provides the following information:

  • All the block code should live in the blocks (backend.blocks) module.

  • input_schema: the schema of the input data, represented by a Pydantic object.

  • output_schema: the schema of the output data, represented by a Pydantic object.

  • run method: the main logic of the block.

  • test_input & test_output: the sample input and output data for the block, which will be used to auto-test the block.

  • You can mock the functions declared in the block using the test_mock field for your unit tests.

  • Once you finish creating the block, you can test it by running poetry run pytest backend/blocks/test/test_block.py -s.

  • Create a Pull Request to the dev branch of the repository with your changes so you can share it with the community :)
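The anatomy described above can be sketched as a self-contained toy. Every name here is illustrative: in the real codebase the Block base class comes from the backend.blocks module hierarchy and the schemas are Pydantic models, not plain dicts.

```python
# Toy sketch of an agent block's anatomy. Illustrative only: the real Block
# base class and Pydantic input/output schemas live in the backend codebase.

class WordCounterBlock:
    """Counts the words in a text input."""

    # In the real codebase these are Pydantic models, not plain dicts.
    input_schema = {"text": str}
    output_schema = {"word_count": int}

    # Sample data used to auto-test the block.
    test_input = {"text": "hello agent world"}
    test_output = {"word_count": 3}

    def run(self, input_data: dict) -> dict:
        # Main logic of the block.
        return {"word_count": len(input_data["text"].split())}


block = WordCounterBlock()
assert block.run(block.test_input) == block.test_output
```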
