API Performance Benchmarks
Real latency data from production infrastructure. Measured via curl timing across 1,000 requests per endpoint.
Latency Distribution
P50, P95, and P99 percentile latencies across all services and endpoint categories. Bar widths are scaled relative to the slowest measured P99 (4200ms).
Throughput & Rate Limits
Default per-API-key rate limits. Contact support for higher tiers. All limits apply per individual API key, not per IP.
Every response includes the X-RateLimit-Remaining, X-RateLimit-Reset, and X-RateLimit-Limit headers so your agent can self-throttle proactively. When limits are exceeded, the API returns HTTP 429 with a Retry-After header.
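As a sketch of proactive self-throttling, a client can decide how long to pause from the remaining-budget and reset-time headers before firing the next request. The endpoint URL and `$API_KEY` below are placeholders; only the header names and the 429/Retry-After behavior come from this page.

```shell
# throttle_delay REMAINING RESET NOW
# REMAINING: value of X-RateLimit-Remaining
# RESET:     value of X-RateLimit-Reset, assumed here to be epoch seconds
# NOW:       current epoch seconds
# Prints how many seconds to sleep before the next request.
throttle_delay() {
  remaining=$1; reset=$2; now=$3
  if [ "$remaining" -gt 0 ]; then
    echo 0                      # budget left in this window: no wait
  else
    delay=$(( reset - now ))    # window exhausted: wait until it resets
    [ "$delay" -lt 0 ] && delay=0
    echo "$delay"
  fi
}

# Hypothetical usage against a Purple Flea endpoint (URL is a placeholder):
#   hdrs=$(curl -s -D - -o /dev/null -H "Authorization: Bearer $API_KEY" "$URL")
#   remaining=$(printf '%s' "$hdrs" | awk -F': ' 'tolower($1)=="x-ratelimit-remaining"{print $2+0}')
#   reset=$(printf '%s' "$hdrs" | awk -F': ' 'tolower($1)=="x-ratelimit-reset"{print $2+0}')
#   sleep "$(throttle_delay "$remaining" "$reset" "$(date +%s)")"
# On an HTTP 429 response, sleep for the Retry-After value instead.
```

The reset-time format (epoch seconds vs. seconds-until-reset) is an assumption; check the actual header value before relying on this arithmetic.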
Purple Flea vs Alternatives
How Purple Flea's managed API layer compares to building your own integrations. Latency figures are median (P50) measurements.
| Use case | Purple Flea | Alternative | Verdict | Notes |
|---|---|---|---|---|
| Casino game results (provably fair flip, dice, crash) | 45 ms (Casino API) | 200 ms+ (custom backend, on-chain proof gen) | 4x faster | Purple Flea pre-generates proof infrastructure; DIY requires a full on-chain round trip per game. |
| Perpetual trading: market data (prices, funding rates, OI) | 120 ms (Trading API) | 80 ms (direct Hyperliquid REST/WS) | +40 ms overhead | Purple Flea adds auth, routing, and key management; direct Hyperliquid is faster for pure market-data reads. |
| Perpetual trading: execution (open / close / stop-loss) | 850 ms (Trading API) | 850 ms (direct Hyperliquid signing) | Same speed | Execution latency is dominated by Hyperliquid's matching engine; Purple Flea adds negligible overhead, and the gain is key-management simplicity. |
| Multi-chain balance lookup (EVM, Solana, Tron, BTC) | 85 ms (Wallet API) | 500 ms+ (self-hosted RPC nodes) | 6x faster | Purple Flea uses geo-distributed, pre-warmed RPC clusters; self-hosted nodes require cold state sync and infrastructure maintenance. |
| Domain search + availability (ENS, .sol, Unstoppable, etc.) | 95 ms (Domains API) | 300 ms+ (direct registry RPC calls) | 3x faster | Purple Flea aggregates across registries in parallel with cached availability state; querying each registry individually is sequential by default. |
Measurement Methodology
Benchmarks are run from our production monitoring infrastructure. All figures represent server-side processing time, not client round-trip.
Benchmark command
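The page states that measurements use curl timing over 1,000 requests per endpoint. A command in that spirit, not the exact production harness, might look like the following; the URL and `$API_KEY` are placeholders:

```shell
# bench_endpoint URL N: issue N sequential requests and print curl's
# total transfer time (seconds) for each one, one value per line.
bench_endpoint() {
  url=$1; n=$2; i=1
  while [ "$i" -le "$n" ]; do
    curl -s -o /dev/null -w '%{time_total}\n' \
         -H "Authorization: Bearer $API_KEY" "$url"
    i=$((i + 1))
  done
}

# Hypothetical run (URL and API_KEY are placeholders):
#   bench_endpoint "https://api.example.invalid/v1/endpoint" 1000 > timings.txt
```

Requests are issued sequentially, matching the methodology below, so each timing reflects a single in-flight request rather than queueing under load.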
Sample size & outlier handling
- 1,000 requests per endpoint per measurement run
- Top 1% of results (10 samples) are excluded as outliers before percentile calculation
- Runs are performed sequentially, not concurrently, to isolate per-request latency
- Warm-up: first 50 requests are discarded to allow connection pooling to stabilize
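The post-processing implied by the rules above (discard 50 warm-up requests, trim the top 1%, then take percentiles) can be sketched as a small pipeline. `timings.txt` is a hypothetical file of one latency per line, and the nearest-rank percentile method is an assumption; the page does not say which interpolation production uses.

```shell
# pct P: read numeric values (one per line) on stdin and print the
# P-th percentile using the nearest-rank method.
pct() {
  sort -n | awk -v p="$1" '{a[NR]=$0} END {
    idx = int((p / 100) * NR + 0.999999)   # ceil of p% of N
    if (idx < 1) idx = 1
    print a[idx]
  }'
}

# Hypothetical end-to-end use on timings.txt:
#   total=$(wc -l < timings.txt)
#   kept=$(( total - 50 ))                # discard the 50 warm-up requests
#   trim=$(( kept - kept / 100 ))         # then drop the top 1% as outliers
#   tail -n +51 timings.txt | sort -n | head -n "$trim" > clean.txt
#   for p in 50 95 99; do printf 'P%s: %ss\n' "$p" "$(pct "$p" < clean.txt)"; done
```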
What the percentiles mean
- P50 (median): Half of all requests complete faster than this value. The best proxy for typical user experience.
- P95: 95% of requests complete faster. A good indicator of how slow requests feel in normal usage.
- P99: 99% of requests complete faster. Covers nearly all edge cases; 1 in 100 requests may exceed this.
Infrastructure context
The benchmark runner is co-located in the same data center region as the APIs (US-East), so the figures represent server processing latency. Client-facing latency adds network round-trip time depending on geography: typically 20–80 ms for US users and 80–200 ms for EU/Asia.
Measurement frequency
Full benchmark suites run every 6 hours. Values on this page reflect the most recent completed run. Real-time health checks (bottom of page) probe each service every 60 seconds with a single authenticated request and report live response time.
Uptime & Reliability
30-day rolling uptime across all services. Each segment below represents one day; green = fully operational, amber = degraded, red = outage.
Real-Time Health Checks
Live probe of each service health endpoint. Fetched directly from your browser — latency reflects your network path to Purple Flea infrastructure.
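A single-request probe like the ones described above can be sketched as follows. The health endpoint URL is a placeholder, and treating any non-200 response as down is an assumed policy, not this page's exact classification logic.

```shell
# probe URL: print "up <seconds>" if the endpoint answers HTTP 200
# within 5 seconds, otherwise print "down" and return non-zero.
probe() {
  out=$(curl -s -o /dev/null --max-time 5 \
        -w '%{http_code} %{time_total}' "$1") || { echo down; return 1; }
  code=${out%% *}
  if [ "$code" = "200" ]; then
    echo "up ${out#* }"
  else
    echo down; return 1
  fi
}

# Hypothetical usage (URL is a placeholder):
#   probe "https://api.example.invalid/v1/health" || echo "service degraded"
```

Because this runs from your own machine, the reported time includes your network path, which is exactly what the in-browser checks on this page reflect.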