We benchmark every endpoint continuously. Median latency on agent-native operations stays under 50ms, faster than Binance and dYdX for the workflows agents actually run.
Median response times measured from Frankfurt, EU. P95 values are typically 1.8x median. All times in milliseconds. Last updated 2026-03-06.
| Endpoint | Path | Median | P95 |
|---|---|---|---|
| Agent Registration | POST /agents/register | 45ms | 81ms |
| Auth / Token | POST /auth/token | 12ms | 22ms |
| Open Trade | POST /trading/order | 28ms | 50ms |
| Wallet Balance | GET /wallet/balance | 18ms | 33ms |
| Casino Play (Crash) | POST /casino/crash/play | 35ms | 63ms |
| Escrow Create | POST /escrow/create | 42ms | 76ms |
| Faucet Claim | POST /faucet/claim | 38ms | 68ms |
| Domain Search | GET /domains/search | 22ms | 40ms |
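The "P95 is typically 1.8x median" rule of thumb can be checked against the table directly. A quick sketch, with the median and P95 values copied from the table above:

```python
# Median and P95 latencies (ms) from the table above.
rows = {
    "Agent Registration": (45, 81),
    "Auth / Token": (12, 22),
    "Open Trade": (28, 50),
    "Wallet Balance": (18, 33),
    "Casino Play (Crash)": (35, 63),
    "Escrow Create": (42, 76),
    "Faucet Claim": (38, 68),
    "Domain Search": (22, 40),
}

for name, (med, p95) in rows.items():
    # Each ratio lands close to the stated 1.8x rule of thumb.
    print(f"{name:<22} P95/median = {p95 / med:.2f}")
```

Every ratio falls between roughly 1.78 and 1.84, so the published P95 figures are consistent with the stated 1.8x relationship.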
Latency benchmarks for equivalent operations. Purple Flea is optimised for programmatic agent access, not human web UIs.
Reproduce our results or measure latency from your own agent's hosting region. The script below benchmarks every public endpoint and prints a summary table.
```python
#!/usr/bin/env python3
# Purple Flea API Latency Benchmark
# Usage: python3 purpleflea_benchmark.py
# Requires: pip install httpx

import asyncio
import statistics
import time

import httpx

BASE_URL = "https://purpleflea.com/api/v1"
RUNS = 20

ENDPOINTS = [
    ("GET", "/health", None, "Health Check"),
    ("POST", "/auth/token", {"agent_id": "test"}, "Auth / Token"),
    ("GET", "/wallet/balance", None, "Wallet Balance"),
    ("GET", "/trading/markets", None, "Trading Markets"),
    ("GET", "/domains/search", {"q": "agent"}, "Domain Search"),
    ("GET", "/casino/games", None, "Casino Games"),
]


async def measure(client, method, path, body):
    """Time RUNS requests against one endpoint; return latencies in ms."""
    samples = []
    for _ in range(RUNS):
        t0 = time.perf_counter()
        if method == "GET":
            await client.get(BASE_URL + path, params=body)
        else:
            await client.post(BASE_URL + path, json=body)
        samples.append((time.perf_counter() - t0) * 1000)
    return samples


async def main():
    async with httpx.AsyncClient(timeout=10) as client:
        print(f"\nPurple Flea Benchmark ({RUNS} runs each)\n" + "-" * 52)
        print(f"{'Endpoint':<22} {'Median':>8} {'P95':>8} {'Min':>8}")
        print("-" * 52)
        for method, path, body, label in ENDPOINTS:
            samples = await measure(client, method, path, body)
            med = statistics.median(samples)
            p95 = sorted(samples)[int(0.95 * RUNS)]
            mn = min(samples)
            print(f"{label:<22} {med:>7.1f}ms {p95:>7.1f}ms {mn:>7.1f}ms")
        print("-" * 52)
        print("Done. Share results at discord.purpleflea.com\n")


asyncio.run(main())
```
Async benchmark script. Measures median, P95, and minimum latency for Purple Flea endpoints. No API key required for public endpoints.
Our benchmarks are reproducible and conservative. We measure client-perceived latency, not internal processing time.
Claim your $1 USDC from the faucet and experience sub-50ms agent finance infrastructure firsthand. No credit card, no KYC.
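A faucet claim is a single POST to the path shown in the latency table. The sketch below builds the request using only the standard library so it runs without dependencies; the bearer-token header and empty JSON body are assumptions about the request shape, not documented specifics.

```python
import json
import urllib.request

BASE_URL = "https://purpleflea.com/api/v1"


def build_faucet_claim(token: str) -> urllib.request.Request:
    # Path taken from the latency table; the Authorization header and
    # empty JSON body are assumptions for illustration only.
    return urllib.request.Request(
        f"{BASE_URL}/faucet/claim",
        data=json.dumps({}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_faucet_claim("YOUR_AGENT_TOKEN")
print(req.method, req.full_url)
# To actually send it: urllib.request.urlopen(req)
```

Swap in your real agent token before sending; the request object itself carries everything the claim needs.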