LLM Providers

There are several providers that AutoGPT users can choose from to run inference with LLMs.

Llama API

Llama API is a Meta-hosted API service that helps you integrate Llama models quickly and efficiently. Using OpenAI compatibility endpoints, you can easily access the power of Llama models without the need for complex setup or configuration!

Join the waitlist to get access!
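Because the service exposes OpenAI-compatible endpoints, it can also be called directly with the official openai Python client. The snippet below is a minimal sketch, not an official integration: the base URL and the LLAMA_API_KEY environment variable name are assumptions for illustration, so check the Llama API documentation for the exact values issued with your account.

```python
import os

from openai import OpenAI

# Minimal sketch of calling an OpenAI-compatible endpoint.
# The base_url and environment variable name below are assumptions;
# replace them with the values from your Llama API account.
client = OpenAI(
    api_key=os.environ["LLAMA_API_KEY"],          # hypothetical env var name
    base_url="https://api.llama.com/compat/v1/",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="Llama-4-Scout-17B-16E-Instruct-FP8",   # one of the supported model names listed below
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)

print(response.choices[0].message.content)
```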

Try the Llama API provider by selecting any of the following LLM Model names from the AI blocks:

  • Llama-4-Scout-17B-16E-Instruct-FP8

  • Llama-4-Maverick-17B-128E-Instruct-FP8

  • Llama-3.3-8B-Instruct

  • Llama-3-70B-Instruct
