TAF – Test Automation Framework for MariaDB
TAF is a reproducible, contract‑driven benchmarking and testing framework developed under the guidance of the MariaDB Foundation and built together with the MariaDB Server community. Its purpose is to make database performance testing transparent, repeatable, and comparable across MariaDB, MySQL, and future database systems.
TAF standardizes how benchmarks are defined, executed, profiled, and reported. It removes guesswork, eliminates configuration drift, and produces normalized results that can be compared across versions, workloads, and hardware environments. The framework is designed to be contributor‑proof, easy to extend, and suitable for both local testing and large‑scale automated benchmarking.
What TAF Does
TAF provides a unified, extensible system for:
- Installing and preparing database servers for benchmarking
- Running reproducible workloads across multiple database systems
- Collecting normalized performance metrics
- Integrating system profilers such as Perf
- Generating structured reports for comparison and analysis
- Supporting cross‑database benchmarking with consistent workflows
- Enabling contributors to add new databases, workloads, profilers, and reporters without modifying core logic
TAF is built for engineering teams, performance analysts, and contributors who need reliable, comparable results.
Why TAF Matters
Benchmarking databases is traditionally difficult due to:
- inconsistent configuration
- non‑reproducible test environments
- differences in SQL dialects
- manual setup steps
- unclear comparison narratives
- lack of normalized output formats
- difficulty comparing different database systems
TAF solves these problems by enforcing strict contracts for every stage of a benchmark run. This ensures that results are comparable across:
- MariaDB versions
- MariaDB vs MySQL
- different hardware
- different workloads
- different contributors
TAF makes benchmarking predictable, transparent, and scientifically rigorous.
Plugin Architecture
TAF is built entirely around a clean, contract‑driven plugin model. Every major component of a benchmark run is implemented as a plugin, allowing contributors to extend the framework without modifying core logic.
TAF currently supports four plugin types:
1. Database Maker Plugins
Maker plugins define how TAF installs, initializes, starts, stops, and validates a database server.
Examples:
- MariaDB
- MySQL
A maker plugin encapsulates:
- install logic
- initialization and bootstrap
- start/stop routines
- SQL dialect selection
- optional client build logic
The selected maker plugin becomes the active database for the run.
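As a rough illustration, the maker contract above could be sketched as follows. This is a hypothetical Python rendering; the actual plugin interface, method names, and implementation language are defined by the TAF codebase, not here.

```python
from abc import ABC, abstractmethod


class DatabaseMaker(ABC):
    """Illustrative maker contract: install, initialize, start, stop,
    and expose the SQL dialect for the active database (names assumed)."""

    @abstractmethod
    def install(self, version: str) -> None: ...

    @abstractmethod
    def initialize(self) -> None: ...

    @abstractmethod
    def start(self) -> None: ...

    @abstractmethod
    def stop(self) -> None: ...

    @abstractmethod
    def dialect(self) -> str:
        """SQL dialect the test suite should use for this server."""


class MariaDBMaker(DatabaseMaker):
    """Toy MariaDB maker that only records the actions it would take."""

    def __init__(self) -> None:
        self.log: list[str] = []

    def install(self, version: str) -> None:
        self.log.append(f"install mariadb-{version}")

    def initialize(self) -> None:
        self.log.append("initialize datadir")

    def start(self) -> None:
        self.log.append("start server")

    def stop(self) -> None:
        self.log.append("stop server")

    def dialect(self) -> str:
        return "mariadb"
```

Because the run only ever talks to the abstract contract, swapping MariaDB for MySQL is a matter of selecting a different maker plugin.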
2. Test Suite Plugins
A test suite plugin defines the workload being executed.
Current and planned workloads:
- Sysbench
- HammerDB TPROC‑C
- HammerDB TPROC‑H
Each test suite plugin provides:
- PreTestSetup
- TestSetup
- TestRun
- TestPost
- TestCleanup
- workload metadata (threads, duration, defaults)
- SQL dialect requirements
This allows TAF to run completely different workloads without changing the core framework.
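The five lifecycle hooks listed above can be modeled as a small sketch. The hook names mirror the stages in the list; the Python class shape, the metadata fields, and the result schema are assumptions for illustration only.

```python
class SysbenchLikeSuite:
    """Toy test suite exposing the five lifecycle hooks plus metadata."""

    def __init__(self, threads: int = 16, duration_s: int = 60) -> None:
        self.threads = threads        # workload metadata
        self.duration_s = duration_s
        self.stages: list[str] = []   # records hook execution order

    def pre_test_setup(self) -> None:
        self.stages.append("PreTestSetup")

    def test_setup(self) -> None:
        self.stages.append("TestSetup")

    def test_run(self) -> dict:
        self.stages.append("TestRun")
        return {"tps": 0.0}           # placeholder normalized result

    def test_post(self) -> None:
        self.stages.append("TestPost")

    def test_cleanup(self) -> None:
        self.stages.append("TestCleanup")


def run_stages(suite: SysbenchLikeSuite) -> dict:
    """Drive the hooks in their fixed, deterministic order."""
    suite.pre_test_setup()
    suite.test_setup()
    result = suite.test_run()
    suite.test_post()
    suite.test_cleanup()
    return result
```

The core framework only knows this hook sequence, which is why a Sysbench suite and a HammerDB suite can be swapped without touching it.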
3. Reporter Plugins
Reporter plugins generate output from a benchmark run. Multiple reporters can be used in a single execution.
Current plugins:
- raw text
- JSON
- HTML
- charts
- tables
- archive bundles
Reporters receive structured, normalized results and never influence test execution.
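Because reporters only consume normalized results, they can be pure functions of the result data. The sketch below shows two toy reporters over a hypothetical result schema; the real output formats and field names are defined by TAF.

```python
import json


def json_reporter(results: dict) -> str:
    """Render a normalized benchmark result as a JSON document."""
    return json.dumps(results, indent=2, sort_keys=True)


def text_reporter(results: dict) -> str:
    """Render the same result as plain 'key: value' lines."""
    return "\n".join(f"{key}: {value}" for key, value in sorted(results.items()))
```

Running both reporters over one result set is how a single execution can emit raw text, JSON, HTML, and more at the same time.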
4. Profiler Plugins
Profiler plugins integrate external profiling tools into the benchmark workflow.
Current and planned profilers:
- Perf (implemented)
- Intel VTune (planned)
- flamegraph generation
- future profilers via the same contract
Profiler plugins allow TAF to collect system‑level performance data during a run without modifying the test suite or database plugin.
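A profiler plugin essentially brackets the workload with start and stop hooks. The sketch below models that contract; the class names are assumptions, and a real Perf plugin would spawn an actual `perf record` process rather than logging strings.

```python
class Profiler:
    """Illustrative profiler contract: start before the workload, stop after."""

    def start(self, pid: int) -> None:
        raise NotImplementedError

    def stop(self) -> None:
        raise NotImplementedError


class PerfProfiler(Profiler):
    """Toy Perf integration that only records the commands it would run."""

    def __init__(self) -> None:
        self.commands: list[str] = []

    def start(self, pid: int) -> None:
        # A real plugin would launch this against the server process.
        self.commands.append(f"perf record -p {pid} -g")

    def stop(self) -> None:
        self.commands.append("stop perf recording")


def profiled_run(profilers, workload, pid: int):
    """Wrap a workload callable with every active profiler."""
    for p in profilers:
        p.start(pid)
    try:
        return workload()
    finally:
        for p in profilers:
            p.stop()
```

Because the wrapping happens outside the test suite and maker plugins, adding VTune or flame graph generation later means adding a class, not editing a workload.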

How TAF Works
TAF executes a benchmark in a clean, deterministic sequence:
- Load the selected database maker plugin
- Install and initialize the database
- Load the selected test suite plugin
- Prepare the workload environment
- Optionally load profiler plugins
- Run the workload
- Collect normalized results
- Generate reports using reporter plugins
All configuration is driven by simple user properties files, ensuring reproducibility and clarity.
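A run configuration might look like the following properties file. Every key name shown here is hypothetical, chosen only to illustrate the idea; the real property names are defined by the framework.

```properties
# hypothetical TAF run configuration (key names are illustrative)
maker=mariadb
maker.version=11.4
suite=sysbench
suite.threads=16
suite.duration=300
profilers=perf
reporters=json,html
```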
Results and Reporting
Every run produces structured, normalized results, which the configured reporter plugins turn into artifacts such as raw text, JSON, HTML, charts, tables, and archive bundles for comparison and analysis.
Who Develops TAF
TAF is coordinated by the MariaDB Foundation:
- We set direction
- We define specifications
- We review contributions
- We maintain CI/CD
- We promote use cases
- We support benchmarking transparency
The MariaDB Server community develops:
- maker plugins for database systems
- test suite plugins
- profiler integrations
- reporter plugins
- benchmarking improvements
- cross‑database comparison tooling
Contributors include individuals, companies, and organizations interested in transparent, reproducible benchmarking.
Documentation
- GitHub repository: https://github.com/mariadb/taf