Amazon’s AI Agents for MariaDB Contributions

Bardia Hassanzadeh presentation

We at MariaDB Foundation are thrilled to see Amazon at the forefront of applying artificial intelligence to open source contributions — with MariaDB as their pilot. 

At the May 6th MariaDB meetup in Bremen, Bardia Hassanzadeh, PhD, presented the Upstream Pilot tool: an AI-based assistant designed to help developers identify, analyze, and resolve open issues in MariaDB more effectively.

This initiative is the most promising we’ve seen for improving the open source contribution process. Bardia and Hugo Wen of Amazon gave us a preview last week and we were already very impressed.

It’s not just about coding faster — it’s about thinking better about what’s to be done. For example, when getting started in contributing to complex open source infrastructure projects, the hardest part can be choosing the right issue, understanding what needs to be done, and clarifying vague Jira descriptions.

Development phases

The Upstream Pilot is currently in its first incarnation, for internal Amazon use. It will be developed in three phases, with the goal of becoming publicly available within MariaDB Jira. The solution will have “a human in the loop”, but to a decreasing degree in each phase.

The three phases are: 

  • Phase 1: Single-Agent Workflow – Initially, a serverless architecture connected to Amazon Bedrock and OpenSearch enabled task triage and basic AI assistance.
  • Phase 2: Assisted Multi-Agent Workflow – Specialized agents handle planning, coding, debugging, and review — making it easier for humans to contribute high-quality patches.
  • Phase 3: Autonomous Multi-Agent Workflow – The long-term vision includes minimal human involvement, with agents that adapt, learn, and improve over time.
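The phased hand-off above can be pictured as a pipeline in which each specialized role is a stage and a human gate sits between stages. The sketch below is purely illustrative, assuming hypothetical agent functions (`plan`, `code`, `review`) and an approval callback; it is not Amazon’s actual architecture.

```python
# Illustrative sketch of a multi-agent workflow with a human in the loop.
# All names (plan, code, review, human_approves) are hypothetical.

from typing import Callable, Optional

def plan(ticket: str) -> str:
    # A planning agent would turn a ticket into an implementation plan.
    return f"plan for {ticket}"

def code(plan_text: str) -> str:
    # A coding agent would turn the plan into a patch.
    return f"patch implementing: {plan_text}"

def review(patch: str) -> str:
    # A review agent would critique the patch before submission.
    return f"review notes on: {patch}"

def run_pipeline(ticket: str,
                 human_approves: Callable[[str], bool]) -> Optional[str]:
    """Run plan -> code -> review, pausing for human approval after each stage."""
    artifact = ticket
    for stage in (plan, code, review):
        artifact = stage(artifact)
        if not human_approves(artifact):
            return None  # human rejected; stop the workflow here
    return artifact

# In Phase 3, the approval callback would say yes far more often,
# reducing the degree of human involvement over time.
result = run_pipeline("MDEV-12345", human_approves=lambda artifact: True)
```

In this toy form, moving from Phase 2 to Phase 3 is just a change in how often `human_approves` defers to the agents.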

Eating their own dog food

The Amazon RDS team tried out the tool in their latest internal hackathon, and it showed concrete results. Developers resolved issues and submitted upstream patches to MariaDB with major time savings in both analysis and implementation. 

As a pre-evaluation step, the tool assesses each Jira task on:

  • Difficulty level
  • Potential impact of a fix
  • Estimate of work in days
  • And most importantly, what actually needs to be done

This helped the six participants quickly choose a suitable Jira task, instead of spending lots of time browsing Jira tickets.

Jira tickets unfortunately often have rather poor issue descriptions. The AI tool generates an analysis of the ticket, an improved description, and an implementation proposal. These help developers quickly get a grip on what the ticket is about and where to start. In the current phase, the AI assistance stops at this step, but as mentioned, Amazon has further plans to expand AI’s part in the development process.
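To make the pre-evaluation concrete: the four signals listed above could be captured in a small per-ticket record and used to rank candidate tasks. The shape below is our own guess, with hypothetical field names and a naive ranking rule; it is not the tool’s real output format.

```python
# Hypothetical per-ticket pre-evaluation record; all field names are assumptions.
from dataclasses import dataclass

@dataclass
class TriageReport:
    ticket: str
    difficulty: int        # 1 (easy) .. 5 (hard)
    impact: int            # 1 (low) .. 5 (high) potential impact of a fix
    estimate_days: float   # rough effort estimate in days
    summary: str           # what actually needs to be done

def pick_task(reports: list[TriageReport]) -> TriageReport:
    """Naive example policy: prefer high impact, then low difficulty."""
    return max(reports, key=lambda r: (r.impact, -r.difficulty))

reports = [
    TriageReport("MDEV-A", difficulty=4, impact=2, estimate_days=5.0,
                 summary="..."),
    TriageReport("MDEV-B", difficulty=2, impact=4, estimate_days=1.5,
                 summary="..."),
]
best = pick_task(reports)  # MDEV-B: higher impact, lower difficulty
```

A record like this, attached to each ticket, is what lets a newcomer skip the “browsing Jira for hours” step and go straight to a task that fits.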

The Amazon team’s concrete results were submitted code for:

  • MDEV-35461: Remove redundant checks for standard library functions
  • MDEV-28823: Secure mariadb-secure-installation output file with chmod 
  • MDEV-36397: Record change_user command in MTR output 
  • MDEV-36641: Implement Oracle Compatibility – INITCAP function
  • MDEV-35599: Which llstr(value, buff) can we replace with “%lld” ?

These items are in various stages of being reviewed internally at AWS, being submitted to MariaDB Foundation, and being reviewed and merged into the code base.

Looking to the Future

We’re especially excited that Amazon is not only developing this internally but actively considering how to make the tool available for anyone to use when contributing to MariaDB. This openness reflects a real commitment to open source.

We’re on the cusp of an AI revolution that can make even the hardest open source infrastructure projects more accessible and collaborative — and we are proud that Amazon has chosen MariaDB as its pilot project.

Links