Floci vs MiniStack vs LocalStack
Hello,
I've seen a lot of people asking about this, so I decided to try it myself. I wrote an article comparing the two solutions (with LocalStack's documented numbers as a reference), based on:
- 30 API latency tests (5 runs each = 150 calls per tool, 300 total)
- 500 SQS throughput calls per tool
- 31 service support checks per tool
Total: ~1,362 API calls across both containers
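For anyone who wants to reproduce the latency numbers, here's a minimal sketch of the median-of-5 measurement. The boto3 wiring in the trailing comment is illustrative only — the endpoint port and dummy credentials depend on how you run each container.

```python
import statistics
import time

def measure_median_ms(op, runs=5):
    """Time op() `runs` times and return the median latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        op()
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

# Against a local container it would be wired up roughly like this:
# import boto3
# s3 = boto3.client("s3", endpoint_url="http://localhost:4566",
#                   aws_access_key_id="test", aws_secret_access_key="test",
#                   region_name="us-east-1")
# print(measure_median_ms(lambda: s3.list_buckets()))
```

The median rather than the mean keeps a single GC pause or cold code path from skewing a 5-run sample.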
Image Size
| Image | Size |
|---|---|
| nahuelnucera/ministack:latest | 211 MB |
| hectorvent/floci:latest | 276 MB |
| localstack/localstack:latest | ~1.0 GB |
MiniStack is 24% smaller than Floci and roughly a fifth the size of LocalStack. MiniStack uses Alpine + Python + Node.js; Floci uses a JVM-based stack.
Startup Time
| Tool | First Response |
|---|---|
| MiniStack | |
| Floci | |
| LocalStack | |
MiniStack starts instantly. No JVM warm-up, no class loading. The ASGI server is ready before the health check even fires.
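"First response" here was measured as time from container start to the first answered HTTP request. A sketch of that probe (the health URL is whatever endpoint your container exposes — treat the path as an assumption):

```python
import time
import urllib.error
import urllib.request

def time_to_first_response(url, timeout=60.0, interval=0.05):
    """Poll `url` until the server answers; return seconds elapsed.

    Any HTTP status (even an error page) counts as "up"; a refused
    connection means the container is still booting.
    """
    start = time.perf_counter()
    while time.perf_counter() - start < timeout:
        try:
            urllib.request.urlopen(url, timeout=1)
        except urllib.error.HTTPError:
            pass  # server responded with an HTTP status: it's up
        except OSError:
            time.sleep(interval)  # connection refused: keep polling
            continue
        return time.perf_counter() - start
    raise TimeoutError(f"no response from {url} within {timeout}s")
```

Start the poll loop before `docker run` returns, or you'll miss the first few hundred milliseconds of boot time.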
API Latency (median of 5 runs, single operation)
S3
| Operation | Floci | MiniStack | Difference |
|---|---|---|---|
| CreateBucket | 5.9 ms | 5.6 ms | -5% |
| PutObject (1 KB) | 6.3 ms | 6.4 ms | +2% |
| PutObject (100 KB) | 10.6 ms | 7.3 ms | -31% |
| GetObject | 4.8 ms | 5.4 ms | +13% |
| ListObjectsV2 | 5.5 ms | 6.2 ms | +13% |
S3 is competitive. Floci edges ahead on small reads. MiniStack is significantly faster on larger writes (100 KB: 31% faster).
SQS
| Operation | Floci | MiniStack | Difference |
|---|---|---|---|
| CreateQueue | 4.5 ms | 4.4 ms | -2% |
| SendMessage | 9.8 ms | 8.3 ms | -15% |
| ReceiveMessage | 7.8 ms | 6.5 ms | -17% |
MiniStack is consistently faster on SQS operations.
DynamoDB
| Operation | Floci | MiniStack | Difference |
|---|---|---|---|
| CreateTable | 3.7 ms | 3.4 ms | -8% |
| PutItem | 3.7 ms | 4.2 ms | +14% |
| GetItem | 3.8 ms | 4.4 ms | +16% |
| Query | 4.3 ms | 5.0 ms | +16% |
| Scan | 4.2 ms | 4.6 ms | +10% |
Floci wins on DynamoDB read/write operations. This is likely due to Java's optimized JSON parsing for the DynamoDB wire format.
Other Services
| Operation | Floci | MiniStack | Difference |
|---|---|---|---|
| SNS CreateTopic | 3.9 ms | 3.8 ms | -3% |
| SNS Publish | 8.5 ms | 8.8 ms | +4% |
| IAM CreateRole | 5.0 ms | 5.9 ms | +18% |
| STS GetCallerIdentity | 5.1 ms | 4.5 ms | -12% |
| SSM PutParameter | 6.6 ms | 4.7 ms | -29% |
| SSM GetParameter | 4.7 ms | 5.2 ms | +11% |
| SecretsManager Create | 4.8 ms | 4.4 ms | -8% |
| SecretsManager Get | 4.7 ms | 4.4 ms | -6% |
| EventBridge PutRule | 5.3 ms | 4.7 ms | -11% |
| EventBridge PutEvents | 4.8 ms | 5.5 ms | +15% |
| Kinesis CreateStream | 5.6 ms | 5.1 ms | -9% |
| CW PutMetricData | 4.9 ms | 4.4 ms | -10% |
| Logs CreateLogGroup | 6.5 ms | 4.6 ms | -29% |
| Route53 CreateHostedZone | ERR | 4.3 ms | Floci doesn't support Route53 |
MiniStack is faster on SSM, SecretsManager, CloudWatch, and Logs. Floci is faster on IAM and EventBridge PutEvents. Route53 only works on MiniStack.
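The 31 service support checks worked by issuing one representative call per service and classifying the result. A simplified sketch (the real check should distinguish an AWS-style error response, which still proves the service is implemented, from a connection failure or "unknown operation" reply — here any exception counts as ERR):

```python
def check_support(probes):
    """Run one representative API call per service.

    `probes` maps a service name to a zero-arg callable; a call that
    returns normally is recorded as "YES", any exception as "ERR".
    """
    results = {}
    for service, probe in probes.items():
        try:
            probe()
            results[service] = "YES"
        except Exception:
            results[service] = "ERR"
    return results

# e.g. with boto3 clients pointed at the container (illustrative names):
# check_support({"s3": s3.list_buckets,
#                "route53": r53.list_hosted_zones})
```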
Throughput
| Test | Floci | MiniStack |
|---|---|---|
| SQS SendMessage x500 | 221 ops/s | 233 ops/s |
On sustained SQS throughput, MiniStack is 5% faster. Earlier cold-start runs showed Floci ahead, but with warm containers MiniStack pulls slightly ahead.
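The throughput number is simply sustained back-to-back calls over wall-clock time; a sketch (the `sqs` client in the comment is illustrative):

```python
import time

def ops_per_second(op, n=500):
    """Issue op() back-to-back n times and return sustained ops/s."""
    start = time.perf_counter()
    for _ in range(n):
        op()
    return n / (time.perf_counter() - start)

# e.g. ops_per_second(lambda: sqs.send_message(QueueUrl=queue_url,
#                                              MessageBody="x"))
```

Note this measures single-threaded, synchronous throughput — with concurrent senders both tools would likely post higher numbers.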
Memory Usage
| State | Floci | MiniStack |
|---|---|---|
| At idle | 26 MB | 38 MB |
| After 500+ operations | 56 MB | 39 MB |
Interesting: Floci uses less memory at idle (JVM lazy class loading) but grows to 56 MB after load. MiniStack starts at 38 MB and barely grows. Over time, MiniStack's memory profile is more predictable.
Service Coverage
| Service | Floci | MiniStack |
|---|---|---|
| S3 | YES | YES |
| SQS | YES | YES |
| SNS | YES | YES |
| DynamoDB | YES | YES |
| Lambda | YES | YES |
| IAM | YES | YES |
| STS | YES | YES |
| SecretsManager | YES | YES |
| CloudWatch Logs | YES | YES |
| SSM | YES | YES |
| EventBridge | YES | YES |
| Kinesis | YES | YES |
| CloudWatch Metrics | YES | YES |
| SES | YES | YES |
| Step Functions | YES | YES |
| Cognito | YES | YES |
| RDS | YES | YES |
| CloudFormation | YES | YES |
| ACM | YES | YES |
| KMS | YES | YES |
| ECS | NO | YES |
| ElastiCache | NO | YES |
| Glue | NO | YES |
| Athena | NO | YES |
| Firehose | NO | YES |
| Route53 | NO | YES |
| EC2/VPC | NO | YES |
| EMR | NO | YES |
| ELBv2/ALB | NO | YES |
| WAF v2 | NO | YES |
| ECR | NO | YES |
| Total | 20 | 31 |
MiniStack supports 55% more services. The gap is particularly significant for infrastructure-heavy workloads (ECS, RDS with real Docker, EC2/VPC, Route53, ALB).
Feature Comparison
| Feature | MiniStack | Floci | LocalStack Free |
|---|---|---|---|
| Lambda Python execution | YES | YES | YES |
| Lambda Node.js execution | YES | NO | YES |
| Lambda warm workers | YES | NO | YES |
| RDS real Postgres/MySQL | YES | YES | NO (Pro) |
| ECS real Docker containers | YES | NO | NO (Pro) |
| ElastiCache real Redis | YES | NO | NO (Pro) |
| Athena real SQL (DuckDB) | YES | NO | NO (Pro) |
| CloudFormation | YES (12 types) | YES | YES |
| Step Functions TestState API | YES | NO | NO |
| SFN Mock Config (SFN Local compat) | YES | NO | YES |
| State persistence | YES (20 services) | NO | Partial |
| S3 disk persistence | YES | YES | YES |
| Detached mode (-d / --stop) | YES | NO | NO |
| Terraform v6 compatible | YES | Partial | YES |
| AWS SDK v2 chunked encoding | YES | NO | YES |
| Testcontainers examples | Java, Go, Python | NO | Java |
| docker run one-liner | YES | YES | YES |
| PyPI installable | YES | NO | YES |
What Floci Does Better
Let's be honest about where Floci wins:
- DynamoDB read latency — 15-16% faster on GetItem/Query/Scan. Java's JSON processing is well-optimized for DynamoDB's wire format.
- Idle memory — 26 MB vs 38 MB at cold start. JVM defers class loading.
What MiniStack Does Better
- 11 more services — ECS, ElastiCache, Glue, Athena, Route53, EC2, EMR, ALB, WAF, Firehose, ECR.
- Real infrastructure — RDS spins up actual Postgres/MySQL. ECS runs real containers. Athena runs real SQL via DuckDB.
- Lambda Node.js — warm worker pool for both Python and Node.js.
- State persistence — 20 services survive restarts.
- Faster on most operations — SSM, SecretsManager, SQS, CloudWatch, Logs are 15-30% faster.
- Terraform v6 ready — EC2 stubs, S3 Control routing, DynamoDB WarmThroughput.
- Smaller image — 211 MB vs 276 MB (24% smaller).
Created by Nahuel Nucera, one of the maintainers of MiniStack.
