This project compares the performance of three different backend implementations:
- FastAPI (Synchronous)
- FastAPI (Asynchronous)
- Express.js
The comparison is done through load testing using Artillery.io, measuring response times, throughput, and error rates under various load conditions.
```
.
├── fastapi_sync/       # Synchronous FastAPI implementation
├── fastapi_async/      # Asynchronous FastAPI implementation
├── express/            # Express.js implementation
├── postgres/           # Database initialization scripts
├── artillery_tests/    # Load testing configuration and scripts
├── documentation/      # Example reports and analysis
└── docker-compose.yml
```
- Docker and Docker Compose
- Node.js (for running Artillery)
- Artillery.io (version 2.0.21)
This project uses Docker Compose profiles to manage different service configurations. Profiles allow us to run specific sets of services without starting the entire stack. The following profiles are defined:
- `fastapi-sync`: Runs the synchronous FastAPI implementation with PostgreSQL
- `fastapi-async`: Runs the asynchronous FastAPI implementation with PostgreSQL
- `express`: Runs the Express.js implementation with PostgreSQL

Each profile can be activated using the `--profile` flag with `docker compose` commands. For example:

```bash
docker compose --profile fastapi-sync up
```
This approach allows us to:
- Run different implementations independently
- Compare performance without interference
- Save resources by only running necessary services
- Maintain clean separation between different implementations
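As an illustrative sketch of how this works, profiles are attached to individual services in the Compose file. The excerpt below is hypothetical (service settings, image tags, and build paths are assumptions, not this project's actual `docker-compose.yml`):

```yaml
# Hypothetical docker-compose.yml excerpt illustrating Compose profiles.
services:
  postgres:
    image: postgres:16          # assumed image tag
    # No "profiles" key: always enabled, so every profile gets the database.
  fastapi-sync:
    build: ./fastapi_sync
    profiles: ["fastapi-sync"]  # starts only with --profile fastapi-sync
    depends_on:
      - postgres
  express:
    build: ./express
    profiles: ["express"]       # starts only with --profile express
    depends_on:
      - postgres
```

Services carrying a `profiles` key stay dormant unless their profile is selected, which is what keeps the three implementations from running at once.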
Each service in this project is configured with the following resource constraints:
- CPU: Limited to 0.5 cores (50% of a single CPU core)
- Memory: Limited to 512MB RAM
These limitations ensure:
- Fair comparison between different implementations
- Controlled resource usage during load testing
- Consistent performance measurements
- Prevention of resource exhaustion
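In Compose, caps like these are typically declared per service under `deploy.resources`. The snippet below is a sketch of the limits described above; the exact keys in this project's `docker-compose.yml` may differ:

```yaml
# Sketch of the per-service resource caps described above.
deploy:
  resources:
    limits:
      cpus: "0.5"    # half of one CPU core
      memory: 512M   # hard memory cap
```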
Install Artillery and the metrics-by-endpoint plugin globally, then verify the installation:

```bash
npm install -g artillery@2.0.21
npm install -g artillery-plugin-metrics-by-endpoint
artillery version
```

For more installation options, see https://www.artillery.io/docs/get-started/get-artillery
The project includes a comprehensive load testing suite using Artillery.io that simulates various real-world scenarios:
read-heavy:
- Simulates high-volume `GET` requests
- Includes random pagination and optional filtering
- Evaluates system performance under read-intensive workloads
- Identifies latency trends and potential bottlenecks in read paths
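A minimal read-heavy scenario might look like the sketch below. The target URL, endpoint path, and rates are assumptions for illustration; see `artillery_tests/read-heavy-test.yml` for the actual configuration:

```yaml
config:
  target: "http://localhost:8000"   # assumed host/port of the API under test
  phases:
    - duration: 120                 # seconds
      arrivalRate: 50               # new virtual users per second
scenarios:
  - name: read-heavy
    flow:
      - get:
          # Random pagination; the path and query parameters are illustrative
          url: "/items?page={{ $randomNumber(1, 100) }}&limit=20"
```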
write-heavy:
- Simulates concurrent `POST`, `PUT`, and `DELETE` operations
- Stress-tests the database's write throughput
- Measures system stability and responsiveness under write-heavy traffic
- Useful for tuning transactions, indexing, and batch inserts
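The write-heavy flow could mix the three methods along these lines (endpoint paths and payloads here are hypothetical, not taken from this project's config):

```yaml
scenarios:
  - name: write-heavy
    flow:
      - post:
          url: "/items"                              # assumed endpoint
          json: { name: "item-{{ $randomNumber(1, 100000) }}" }
      - put:
          url: "/items/{{ $randomNumber(1, 1000) }}" # assumed ID range
          json: { name: "updated" }
      - delete:
          url: "/items/{{ $randomNumber(1, 1000) }}"
```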
spike:
- Introduces sudden, large spikes in traffic
- Tests the system's elasticity and autoscaling behavior (if any)
- Measures degradation in response time and error rates during and after spikes
- Helps validate rate limiting and failover mechanisms
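Spikes are typically modeled in Artillery as back-to-back phases with a sudden jump in arrival rate, roughly like this (durations and rates are illustrative):

```yaml
phases:
  - duration: 60
    arrivalRate: 10     # baseline traffic
  - duration: 30
    arrivalRate: 200    # sudden spike
  - duration: 60
    arrivalRate: 10     # recovery window to observe post-spike behavior
```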
stress:
- Gradually increases the load to identify system breaking points
- Helps expose:
  - Memory or CPU bottlenecks
  - Slow database queries
  - Throughput limits
- Evaluates how the system recovers after hitting its limits
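Artillery expresses a gradual climb with `rampTo` inside a phase; a stress ramp could be sketched as (numbers are illustrative):

```yaml
phases:
  - duration: 300
    arrivalRate: 10
    rampTo: 150        # climb steadily toward the breaking point
```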
soak:
- Simulates continuous moderate load over an extended duration
- Detects issues like:
  - Memory leaks
  - Connection pool exhaustion
  - Resource starvation over time
- Helps validate system stability and reliability for long-running deployments
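A soak phase is simply one long, steady phase; the duration and rate below are illustrative, not the project's actual values:

```yaml
phases:
  - duration: 7200     # two hours of sustained traffic
    arrivalRate: 25    # moderate, constant load
```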
breakpoint-read:
- Incrementally increases the number of read (`GET`) requests
- Identifies the exact point where:
  - Response times degrade
  - Error rates increase
- Useful for testing:
  - Caching mechanisms
  - Database read pools
  - Horizontal read scaling limits
breakpoint-write:
- Gradually increases the number of write operations (`POST`, `PUT`, `DELETE`)
- Identifies the load threshold where:
  - Writes start failing or slowing down
  - CPU, memory, or DB locks become critical
- Helps optimize:
  - Write throughput
  - Transaction handling
  - Concurrency limits
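Breakpoint tests can step the arrival rate up in discrete stages so the failing threshold is easy to read off the report. The stages below are illustrative:

```yaml
phases:
  - { duration: 60, arrivalRate: 25 }
  - { duration: 60, arrivalRate: 50 }
  - { duration: 60, arrivalRate: 100 }
  - { duration: 60, arrivalRate: 200 }   # expect degradation somewhere in here
```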
The project includes an automated test runner script (`run_artillery_test.sh`) that handles:
- Service orchestration using Docker Compose
- Test execution with Artillery.io
- Report generation in both JSON and HTML formats
- Automatic cleanup of test resources
```bash
./run_artillery_test.sh <profile> <test-type>
```
Available profiles:
- `fastapi-sync`: FastAPI with synchronous operations
- `fastapi-async`: FastAPI with asynchronous operations
- `express`: Express.js implementation

Available test types:
- `read-heavy`: Read-intensive workload simulation
- `write-heavy`: Write-intensive workload simulation
- `spike`: Sudden traffic spike simulation
- `stress`: Sustained high load testing
- `soak`: Long-running stability test
- `breakpoint-read`: Read operation threshold testing
- `breakpoint-write`: Write operation threshold testing
Example:

```bash
# Run read-heavy test on FastAPI async implementation
./run_artillery_test.sh fastapi-async read-heavy
```
The project also includes a `run_all_artillery_tests.sh` script to execute the entire test suite. This script will:
- Iterate through all profiles (`fastapi-sync`, `fastapi-async`, `express`)
- Run every test type for each profile
- Generate a summary report at the end

This is useful for comprehensive performance validation across all implementations.

Usage:

```bash
./run_all_artillery_tests.sh
```
Test results are automatically generated in two formats:
- JSON reports (`artillery_tests/reports/<profile>-<test-type>.json`)
  - Detailed metrics and raw data
  - Suitable for programmatic analysis
  - Contains timing, error rates, and throughput data
- HTML reports (`artillery_tests/reports/<profile>-<test-type>-report.html`)
  - Visual representation of test results
  - Interactive charts and graphs
  - Summary statistics and key metrics
Each test scenario is configured in a YAML file under the `artillery_tests` directory:
- `read-heavy-test.yml`: Read operation simulation
- `write-heavy-test.yml`: Write operation simulation
- `spike-test.yml`: Traffic spike simulation
- `stress-test.yml`: Sustained load testing
- `soak-test.yml`: Long-running stability test
- `breakpoint-read-test.yml`: Read operation threshold testing
- `breakpoint-write-test.yml`: Write operation threshold testing
The test configurations can be customized to adjust:
- Virtual user count
- Request rates
- Test duration
- Endpoint patterns
- Custom scenarios and functions
Monitor Docker container resources during tests:

```bash
docker stats
```

To stop and remove all containers and volumes:

```bash
docker compose down -v
```
This project provides comprehensive documentation and pre-generated reports in the `documentation/` directory.
An interactive dashboard for visualizing and comparing performance metrics is available on Looker Studio.
- Public URL: View the Performance Dashboard
Here's a snapshot of what a Looker Dashboard looks like:
A Looker Dashboard PDF version can be downloaded here.
Reports for all test scenarios are available in `documentation/artillery_generated_reports`. These can serve as a reference for what to expect from your own test runs.
Here's a snapshot of what an HTML report looks like:
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
Author: Siva Sai Krishna