
Market Regime Detection for AI Agents

Purple Flea Research · March 6, 2026 · 25 min read

The market's character shifts constantly — trending, ranging, volatile, calm. A strategy that works in a trend destroys capital in a range. This guide covers the statistical toolkit for detecting regimes in real time: ADX, Hurst exponent, autocorrelation, Hidden Markov Models, and volatility clustering — plus a complete Python RegimeDetector class and regime-conditioned strategy switching for Purple Flea agents.

Trending vs Ranging Markets

Market regimes are persistent statistical states that govern how prices behave. The two fundamental regimes are trending and ranging. In a trending regime, returns exhibit positive autocorrelation — yesterday's move predicts a move in the same direction. In a ranging regime, returns are negatively autocorrelated — the market oscillates around a mean, and momentum signals fail.
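This distinction is easy to see in simulation. The sketch below (illustrative code, not part of the Purple Flea SDK) generates AR(1) return series with a positive and a negative persistence coefficient and measures their lag-1 autocorrelation:

```python
import numpy as np

def simulate_ar1(phi: float, n: int = 5000, seed: int = 0) -> np.ndarray:
    """Simulate AR(1) returns: r_t = phi * r_{t-1} + noise."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    noise = rng.standard_normal(n)
    for t in range(1, n):
        r[t] = phi * r[t - 1] + noise[t]
    return r

def ac1(r: np.ndarray) -> float:
    """Lag-1 autocorrelation of a return series."""
    return float(np.corrcoef(r[:-1], r[1:])[0, 1])

trending = simulate_ar1(phi=0.4)    # persistent: "trending" character
ranging  = simulate_ar1(phi=-0.4)   # anti-persistent: "ranging" character
print(f"AC1 trending: {ac1(trending):+.2f}")   # clearly positive
print(f"AC1 ranging : {ac1(ranging):+.2f}")    # clearly negative
```

A momentum rule profits in the first series and bleeds in the second; the sign of AC1 is what separates them.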

Trending Regime

Positive autocorrelation — moves persist and momentum compounds. Mean-reversion strategies get run over. ADX >25. Hurst >0.55. Strategy: trend-follow, trail stops, let winners run.

Ranging Regime

Negative or near-zero autocorrelation. Mean-reversion strategies work. Momentum strategies whipsaw. ADX <20. Hurst ≤0.50. Strategy: fade extremes, tight take-profit, delta-neutral.

Volatile / Crisis Regime

VIX/BVOL elevated, realized volatility spiking, correlation across assets converging to 1. All directional strategies struggle. Strategy: reduce exposure, only volatility-long positions.

Calm / Accumulation Regime

Low realized volatility, tight bid-ask spreads, low volume. Market is coiling energy. Strategy: build positions, harvest funding rates, sell options premium.

The key insight for agents: no strategy is universally best. The expected edge of any strategy is conditional on the current regime. A well-architected agent selects or weights strategies based on a real-time regime probability vector, rather than hard-switching.

ADX: Average Directional Index

The Average Directional Index (ADX) measures trend strength, not direction. It ranges from 0–100 and is derived from two directional movement indicators: +DI (positive) and -DI (negative).

ADX = Wilder EMA(100 × |+DI − -DI| / (+DI + -DI), period=14)

The computation pipeline:

  1. True Range (TR) = max(High−Low, |High−Close_prev|, |Low−Close_prev|)
  2. +DM = max(High − High_prev, 0) if High−High_prev > Low_prev−Low, else 0
  3. -DM = max(Low_prev − Low, 0) if Low_prev−Low > High−High_prev, else 0
  4. +DI = 100 × Wilder EMA(+DM, 14) / Wilder EMA(TR, 14)
  5. -DI = 100 × Wilder EMA(-DM, 14) / Wilder EMA(TR, 14)
  6. DX = 100 × |+DI − -DI| / (+DI + -DI)
  7. ADX = Wilder EMA(DX, 14)
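The Wilder EMA used in steps 4 through 7 differs from a standard EMA: it uses a 1/period smoothing factor and seeds with a simple average of the first `period` values. A minimal sketch (matching the `wilder_ema` helper in the RegimeDetector code later in this guide):

```python
import numpy as np

def wilder_ema(series, period: int = 14) -> np.ndarray:
    """Wilder smoothing: seed with the mean of the first `period` values,
    then out[t] = out[t-1] * (period-1)/period + x / period."""
    series = np.asarray(series, dtype=float)
    out = [series[:period].mean()]
    for x in series[period:]:
        out.append(out[-1] * (period - 1) / period + x / period)
    return np.array(out)

# Sanity check: a constant series stays constant under any smoothing.
print(wilder_ema([5.0] * 20, period=14))  # all values are 5.0
```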
ADX Value | Regime Interpretation | Suggested Strategy Class
< 15 | Very weak / no trend (ranging) | Mean reversion, market making
15–25 | Weakening or developing trend | Neutral / monitor only
25–40 | Moderate trend | Trend following with confirmation
40–60 | Strong trend | Aggressive trend following
> 60 | Extremely strong trend (often unsustainable) | Hold but watch for exhaustion
ADX Limitation

ADX is a lagging indicator — it confirms a trend after it has already developed. Use it for regime classification, not prediction. Combine with leading indicators (Hurst, autocorrelation) for earlier regime shift detection.

Hurst Exponent

The Hurst exponent H quantifies the long-range dependence of a time series. For financial price returns:

  • H > 0.5: Persistent (trending) — past moves predict future moves in the same direction
  • H = 0.5: Random walk — no autocorrelation (classic efficient market hypothesis)
  • H < 0.5: Anti-persistent (mean-reverting) — past up-moves predict future down-moves

The most practical estimation method for agents is the Rescaled Range (R/S) method: divide the return series into sub-series, compute the range-to-standard-deviation ratio for each length, then regress log(R/S) on log(n) — the slope is H.

H = slope of log(R/S) vs. log(n) where R/S = range / std_dev

Practical Hurst for Agents

Compute Hurst on rolling 100–500 candle windows. A 4-hour candle window of 200 bars gives a 33-day lookback — enough to characterize the current regime without being too slow to adapt. Use the following thresholds:

  • H > 0.58: High-confidence trending — run momentum strategies
  • 0.42 ≤ H ≤ 0.58: Neutral — run neutral or low-leverage strategies
  • H < 0.42: High-confidence mean-reverting — run mean-reversion strategies
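These thresholds translate directly into a small classifier (the function name and labels are illustrative, chosen to match the regime names used later in this guide):

```python
def classify_hurst(h: float) -> str:
    """Map a Hurst estimate to a coarse regime label using the
    0.58 / 0.42 thresholds above."""
    if h > 0.58:
        return "trending"        # high-confidence persistence
    if h < 0.42:
        return "mean_reverting"  # high-confidence anti-persistence
    return "neutral"             # close to a random walk

print(classify_hurst(0.63))  # trending
print(classify_hurst(0.50))  # neutral
print(classify_hurst(0.35))  # mean_reverting
```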

Autocorrelation as Regime Signal

The first-order autocorrelation of returns (AC1) is the correlation between today's return and yesterday's. It is the most direct measure of momentum vs mean-reversion:

AC1 = corr(r_t, r_{t-1}) over rolling window of N periods

AC1 > 0: momentum regime. AC1 < 0: mean-reversion regime. AC1 ≈ 0: efficient / random. In crypto, AC1 at the 1-hour candle level fluctuates between approximately -0.15 and +0.20 for BTC. Extreme readings (< -0.10 or > +0.12) are actionable regime signals.

Autocorrelation is Noisy

AC1 on raw 1-hour returns has high variance. Smooth it with a 10-period EMA and require it to maintain a threshold for at least 3 consecutive periods before acting on it as a regime signal.
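The smoothing-plus-persistence rule can be sketched as follows (a minimal illustration assuming a rolling AC1 series has already been computed; the helper name is ours):

```python
import pandas as pd

def persistent_signal(ac1_series: pd.Series,
                      threshold: float = 0.10,
                      span: int = 10,
                      min_consecutive: int = 3) -> bool:
    """True only if the EMA-smoothed AC1 has held above `threshold`
    for the last `min_consecutive` periods."""
    smoothed = ac1_series.ewm(span=span, adjust=False).mean()
    recent = smoothed.tail(min_consecutive)
    return bool(len(recent) == min_consecutive and (recent > threshold).all())

# Noisy raw AC1 readings that have recently settled above the threshold:
raw = pd.Series([0.02, -0.05, 0.08, 0.15, 0.18, 0.20, 0.22, 0.21, 0.23, 0.25])
print(persistent_signal(raw))  # True: the smoothed tail holds above 0.10
```

The same filter with a negated threshold detects persistent mean-reversion readings.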

Hidden Markov Models for Regime Detection

Hidden Markov Models (HMMs) formalize regime detection as a statistical inference problem. The market is modeled as an unobserved (hidden) Markov chain with K states. At each timestep, the hidden state emits an observable (return, volatility) from a state-specific distribution (usually Gaussian).

HMM components:

  • Transition matrix A: A[i,j] = P(next state is j | current state is i). Captures regime persistence.
  • Emission distributions: Each state has its own mean (μ) and variance (σ²) for returns. Low-μ low-σ = calm; high-μ = trending; high-σ low-μ = volatile.
  • Initial probabilities π: P(state at t=0).

Training uses the Baum-Welch algorithm (expectation-maximization). Inference uses the Viterbi algorithm to decode the most likely sequence of hidden states, or the forward algorithm to compute current-state probabilities in real time.

Choosing K (Number of States)

For crypto markets, 3 states (calm/trending/volatile) or 4 states (calm/bull-trend/bear-trend/crisis) are most interpretable. More than 5 states tend to overfit on the training window. Use BIC or AIC on held-out data to select K systematically.
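BIC can be computed by hand from a fitted model's log-likelihood; the helpers below are a sketch (the parameter count shown is for a diagonal-covariance Gaussian HMM, and the commented loop assumes hmmlearn's `GaussianHMM.score`, used in the HMM example later in this guide):

```python
import numpy as np

def gaussian_hmm_n_params(k: int, d: int) -> int:
    """Free parameters of a K-state, D-feature diagonal-covariance HMM:
    transitions K*(K-1) + initial probs (K-1) + means K*D + variances K*D."""
    return k * (k - 1) + (k - 1) + 2 * k * d

def bic(log_likelihood: float, n_params: int, n_obs: int) -> float:
    """Bayesian Information Criterion: lower is better."""
    return -2.0 * log_likelihood + n_params * np.log(n_obs)

# Sketch of the selection loop over candidate K:
#   for k in range(2, 6):
#       model = GaussianHMM(n_components=k, covariance_type="diag").fit(X)
#       score = bic(model.score(X), gaussian_hmm_n_params(k, X.shape[1]), len(X))
#   pick the K with the lowest score

print(gaussian_hmm_n_params(3, 2))  # 20
```

The log(n_obs) penalty is what stops BIC from always preferring more states.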

Volatility Regimes

Volatility itself has regimes, independent of price direction. Volatility clustering — the tendency for large moves to follow large moves — is one of the most persistent empirical regularities in financial data. GARCH(1,1) captures this: today's variance is a weighted sum of yesterday's variance and yesterday's squared return.

σ²_t = ω + α × ε²_{t-1} + β × σ²_{t-1}
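The recursion is a few lines of code once the parameters are fixed (the values below are illustrative; in practice ω, α, β are estimated by maximum likelihood):

```python
import numpy as np

def garch_variance(returns: np.ndarray, omega: float = 1e-6,
                   alpha: float = 0.1, beta: float = 0.85) -> np.ndarray:
    """Filter conditional variance:
    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1},
    initialized at the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# With alpha = beta = 0 the filter collapses to constant variance omega:
print(garch_variance(np.zeros(5), omega=1e-4, alpha=0.0, beta=0.0))
```

One large return bumps the next period's variance, which then decays at rate β: that is volatility clustering in mechanical form.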

For agent risk management, it's simpler to classify realized volatility (rolling standard deviation of hourly returns over a 24-hour window, annualized with × √8760) into three buckets:

  • Calm (<30% annualized): Full position sizing, sell options, harvest funding
  • Normal (30–80% annualized): Standard sizing, standard strategies
  • Volatile (>80% annualized): Half position sizing, no new entries, tighten stops
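These bucket rules translate directly into a classifier (names are illustrative; the annualization matches the `annualized_vol` helper in the RegimeDetector below):

```python
import numpy as np

def realized_vol_annualized(hourly_returns: np.ndarray) -> float:
    """Std dev of the last 24 hourly returns, annualized (8760 hours/year)."""
    return float(np.std(hourly_returns[-24:]) * np.sqrt(8760))

def vol_bucket(ann_vol: float) -> str:
    """Map annualized volatility to the three risk buckets above."""
    if ann_vol < 0.30:
        return "calm"      # full sizing, sell options, harvest funding
    if ann_vol <= 0.80:
        return "normal"    # standard sizing and strategies
    return "volatile"      # half sizing, no new entries, tighter stops

print(vol_bucket(0.25), vol_bucket(0.50), vol_bucket(1.10))  # calm normal volatile
```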

Macro Regime Classification

Beyond price-based regimes, macro financial conditions define the broader risk environment. Agents trading crypto should be aware of three macro regimes:

Macro Regime | Signals | Crypto Impact | Agent Response
Risk-On | Equities rising, credit spreads tight, USD weak, BTC dominance rising | Bullish for BTC and altcoins, funding rates positive | Long bias, harvest positive funding, hold longer
Risk-Off | Equities falling, USD/JPY rising, VIX elevated, safe-haven flows | BTC correlates with equities in acute phase, then decouples | Reduce long exposure, short altcoins, hold stables
Stagflation | High inflation + low growth, yield curve flat or inverted | BTC as inflation hedge narrative activates (mixed) | Neutral-to-long BTC, avoid leveraged altcoin exposure

Agents can proxy macro regime via the /market/macro-regime endpoint in the Purple Flea API, which aggregates BTC/SPX correlation, DXY trend, and credit spread data into a simple three-state signal.

Strategy Switching Based on Detected Regime

The payoff of regime detection comes from strategy switching. Rather than hard-switching (which creates whipsaw risk near regime boundaries), use a soft weighting approach: each strategy gets a weight proportional to its regime probability score.

python — strategy weight allocation by regime
"""
Soft regime-based strategy allocation.
Each strategy has a 'regime_affinity' dict mapping regime names to [0,1] scores.
"""
from dataclasses import dataclass
import numpy as np

@dataclass
class Strategy:
    name: str
    regime_affinity: dict  # regime_name -> score in [0, 1]

STRATEGIES = [
    Strategy("momentum_long",   {"trending": 0.9, "ranging": 0.05, "volatile": 0.1, "calm": 0.3}),
    Strategy("mean_reversion",  {"trending": 0.05, "ranging": 0.9, "volatile": 0.1, "calm": 0.5}),
    Strategy("funding_harvest", {"trending": 0.4, "ranging": 0.6, "volatile": 0.0, "calm": 0.9}),
    Strategy("delta_neutral",   {"trending": 0.2, "ranging": 0.8, "volatile": 0.5, "calm": 0.7}),
    Strategy("cash_only",       {"trending": 0.0, "ranging": 0.0, "volatile": 1.0, "calm": 0.0}),
]

def compute_strategy_weights(regime_probs: dict) -> dict:
    """
    Given regime probabilities, compute normalized allocation weight for each strategy.
    regime_probs: {'trending': 0.3, 'ranging': 0.5, 'volatile': 0.1, 'calm': 0.1}
    """
    weights = {}
    for strat in STRATEGIES:
        score = sum(
            regime_probs.get(regime, 0.0) * affinity
            for regime, affinity in strat.regime_affinity.items()
        )
        weights[strat.name] = score
    # Normalize to sum to 1
    total = sum(weights.values())
    if total > 0:
        weights = {k: v/total for k, v in weights.items()}
    return weights

# Example
regime_probs = {"trending": 0.15, "ranging": 0.60, "volatile": 0.05, "calm": 0.20}
alloc = compute_strategy_weights(regime_probs)
for name, w in sorted(alloc.items(), key=lambda x: -x[1]):
    print(f"  {name:20s}: {w*100:.1f}%")

Python RegimeDetector Class

The following RegimeDetector class consolidates all regime indicators into a single interface that returns a probability distribution over regime states.

python — full RegimeDetector class
"""
RegimeDetector: Consolidates ADX, Hurst, AC1, volatility, and HMM signals
into a single real-time regime probability vector.

Requires: pip install numpy pandas requests
(ADX, Hurst, and AC1 are computed inline below; hmmlearn is only needed
for the separate HMM example later in this guide)
"""
import numpy as np
import pandas as pd
import requests
import logging
from dataclasses import dataclass, field
from typing import Optional
from collections import deque

log = logging.getLogger("regime-detector")

REGIMES = ["trending", "ranging", "volatile", "calm"]

# ── Hurst Exponent (R/S method) ──────────────────────────────────────────────

def hurst_rs(returns: np.ndarray, min_chunk: int = 8) -> float:
    """Compute Hurst exponent via Rescaled Range analysis."""
    n = len(returns)
    if n < 32:
        return 0.5  # Not enough data
    rs_values, ns = [], []
    for chunk_size in [n//8, n//4, n//2, n]:
        if chunk_size < min_chunk:
            continue
        chunks = [returns[i:i+chunk_size] for i in range(0, n-chunk_size+1, chunk_size)]
        rs_per_chunk = []
        for chunk in chunks:
            mean = chunk.mean()
            deviation = np.cumsum(chunk - mean)
            r = deviation.max() - deviation.min()
            s = chunk.std(ddof=1)
            if s > 0:
                rs_per_chunk.append(r / s)
        if rs_per_chunk:
            rs_values.append(np.mean(rs_per_chunk))
            ns.append(chunk_size)
    if len(ns) < 2:
        return 0.5
    log_ns = np.log(ns)
    log_rs = np.log(rs_values)
    h, _ = np.polyfit(log_ns, log_rs, 1)
    return float(np.clip(h, 0.0, 1.0))


# ── ADX Computation ──────────────────────────────────────────────────────────

def compute_adx(high: np.ndarray, low: np.ndarray, close: np.ndarray,
                period: int = 14) -> float:
    """Return the current ADX value (last value of the ADX series)."""
    n = len(close)
    if n < period * 2:
        return 25.0  # default neutral
    tr, pdm, ndm = [], [], []
    for i in range(1, n):
        h, l, pc = high[i], low[i], close[i-1]
        tr.append(max(h - l, abs(h - pc), abs(l - pc)))
        up = high[i] - high[i-1]
        dn = low[i-1] - low[i]
        pdm.append(up if up > dn and up > 0 else 0)
        ndm.append(dn if dn > up and dn > 0 else 0)

    def wilder_ema(series, p):
        out = [np.mean(series[:p])]
        for x in series[p:]:
            out.append(out[-1] * (p-1)/p + x * 1/p)
        return np.array(out)

    atr = wilder_ema(tr, period)
    pdi = 100 * wilder_ema(pdm, period) / atr
    ndi = 100 * wilder_ema(ndm, period) / atr
    dx  = 100 * np.abs(pdi - ndi) / (pdi + ndi + 1e-9)
    adx = wilder_ema(dx, period)
    return float(adx[-1])


# ── Autocorrelation (lag-1) ──────────────────────────────────────────────────

def autocorrelation_1(returns: np.ndarray) -> float:
    if len(returns) < 10:
        return 0.0
    return float(pd.Series(returns).autocorr(lag=1))


# ── Volatility Regime ────────────────────────────────────────────────────────

def annualized_vol(returns: np.ndarray, periods_per_year: int = 8760) -> float:
    """For hourly returns, periods_per_year=8760."""
    if len(returns) < 5:
        return 0.5
    return float(returns.std() * np.sqrt(periods_per_year))


# ── Main RegimeDetector ──────────────────────────────────────────────────────

@dataclass
class RegimeDetector:
    api_key: str
    symbol: str = "BTC/USDC"
    candle_interval: str = "1h"
    lookback: int = 200         # candles to use for computations
    adx_period: int = 14
    _cache: deque = field(default_factory=lambda: deque(maxlen=500), repr=False)
    _session: requests.Session = field(default_factory=requests.Session, repr=False)

    def __post_init__(self):
        self._session.headers.update({"X-API-Key": self.api_key})

    def fetch_candles(self) -> pd.DataFrame:
        resp = self._session.get(
            "https://purpleflea.com/api/v1/market/candles",
            params={"symbol": self.symbol, "interval": self.candle_interval, "limit": self.lookback}
        )
        resp.raise_for_status()
        df = pd.DataFrame(resp.json()["candles"],
                          columns=["timestamp","open","high","low","close","volume"])
        df = df.astype({"open": float, "high": float, "low": float, "close": float, "volume": float})
        df["returns"] = df["close"].pct_change().fillna(0.0)
        return df

    def detect(self) -> dict:
        """
        Returns regime probability dict and intermediate indicator values.
        Example: {
          'probs': {'trending': 0.15, 'ranging': 0.55, 'volatile': 0.10, 'calm': 0.20},
          'adx': 18.3, 'hurst': 0.48, 'ac1': -0.06, 'ann_vol': 0.42,
          'regime': 'ranging'
        }
        """
        df = self.fetch_candles()
        returns = df["returns"].values
        high    = df["high"].values
        low     = df["low"].values
        close   = df["close"].values

        adx     = compute_adx(high, low, close, self.adx_period)
        hurst   = hurst_rs(returns[-200:])
        ac1     = autocorrelation_1(returns[-100:])
        ann_vol = annualized_vol(returns[-24:])   # 24h realized vol

        # ── Score each regime 0-1 ─────────────────────────────────────────
        scores = {}

        # Trending: high ADX + high Hurst + positive AC1
        scores["trending"] = (
            np.clip((adx - 15) / 40, 0, 1) * 0.4 +
            np.clip((hurst - 0.5) / 0.3, 0, 1) * 0.35 +
            np.clip(ac1 / 0.15, 0, 1) * 0.25
        )

        # Ranging: low ADX + Hurst near 0.5 or below + negative AC1
        scores["ranging"] = (
            np.clip((30 - adx) / 25, 0, 1) * 0.40 +
            np.clip((0.55 - abs(hurst - 0.5)) / 0.55, 0, 1) * 0.30 +
            np.clip(-ac1 / 0.15, 0, 1) * 0.30
        )

        # Volatile: high realized vol is the primary signal
        scores["volatile"] = (
            np.clip((ann_vol - 0.60) / 0.80, 0, 1) * 0.70 +
            np.clip((adx - 30) / 40, 0, 1) * 0.30
        )

        # Calm: low realized vol + ranging character
        scores["calm"] = (
            np.clip((0.40 - ann_vol) / 0.35, 0, 1) * 0.60 +
            np.clip((25 - adx) / 25, 0, 1) * 0.40
        )

        # Normalize to probabilities
        total = sum(scores.values()) + 1e-9
        probs = {r: scores[r] / total for r in REGIMES}
        regime = max(probs, key=probs.get)

        return {
            "probs":   probs,
            "regime":  regime,
            "adx":     round(adx, 2),
            "hurst":   round(hurst, 4),
            "ac1":     round(ac1, 4),
            "ann_vol": round(ann_vol, 4),
        }

    def detect_with_strategy_weights(self) -> dict:
        """Convenience: detect regime and compute strategy allocations."""
        result = self.detect()
        alloc = compute_strategy_weights(result["probs"])
        result["strategy_weights"] = alloc
        return result


def compute_strategy_weights(regime_probs: dict) -> dict:
    """Soft strategy weighting based on regime probabilities."""
    affinities = {
        "momentum_long":    {"trending": 0.9, "ranging": 0.05, "volatile": 0.1,  "calm": 0.3},
        "mean_reversion":   {"trending": 0.05, "ranging": 0.9, "volatile": 0.1,  "calm": 0.5},
        "funding_harvest":  {"trending": 0.4, "ranging": 0.6,  "volatile": 0.0,  "calm": 0.9},
        "delta_neutral":    {"trending": 0.2, "ranging": 0.8,  "volatile": 0.5,  "calm": 0.7},
        "cash_only":        {"trending": 0.0, "ranging": 0.0,  "volatile": 1.0,  "calm": 0.0},
    }
    weights = {}
    for strat, aff in affinities.items():
        weights[strat] = sum(regime_probs.get(r, 0) * s for r, s in aff.items())
    total = sum(weights.values()) + 1e-9
    return {k: round(v/total, 4) for k, v in weights.items()}


# ── Regime-Conditioned Position Sizing ──────────────────────────────────────

@dataclass
class RegimePositionSizer:
    """Scales position size by regime confidence and volatility."""
    base_risk_pct: float = 0.01  # 1% account risk per trade by default
    regime_multipliers: dict = field(default_factory=lambda: {
        "trending": 1.2,    # Slightly larger in confirmed trends
        "ranging":  0.8,    # Reduce size in choppy ranges
        "volatile": 0.3,    # Drastically reduce in crisis
        "calm":     1.0,    # Normal sizing in calm markets
    })
    volatility_scalar: bool = True

    def compute_size(self, account_usd: float, entry: float, stop: float,
                     regime_result: dict) -> dict:
        """
        Returns recommended position size in USD notional.
        Uses: (account * risk%) * regime_multiplier / stop_distance%
        """
        regime = regime_result["regime"]
        confidence = regime_result["probs"][regime]  # 0-1
        ann_vol    = regime_result["ann_vol"]

        # Base dollar risk
        base_risk_usd = account_usd * self.base_risk_pct

        # Regime adjustment (blend with confidence)
        r_mult = self.regime_multipliers.get(regime, 1.0)
        # Blend: full multiplier at full confidence, 1.0 at zero confidence
        adj_mult = 1.0 + (r_mult - 1.0) * confidence

        # Volatility scalar: reduce size when vol is elevated
        vol_mult = 1.0
        if self.volatility_scalar:
            vol_mult = np.clip(0.40 / (ann_vol + 0.01), 0.2, 1.5)

        stop_dist = abs(entry - stop) / entry
        if stop_dist < 0.001:
            stop_dist = 0.001  # minimum 0.1% stop to avoid division explosion

        notional = (base_risk_usd * adj_mult * vol_mult) / stop_dist

        return {
            "notional_usd":    round(notional, 2),
            "qty_at_entry":    round(notional / entry, 8),
            "regime":          regime,
            "regime_mult":     round(adj_mult, 3),
            "vol_mult":        round(vol_mult, 3),
            "stop_dist_pct":   round(stop_dist * 100, 3),
        }


# ── Entrypoint ────────────────────────────────────────────────────────────────

if __name__ == "__main__":
    import os, json
    API_KEY = os.environ.get("PURPLEFLEA_API_KEY", "pf_live_<your_key>")
    detector = RegimeDetector(api_key=API_KEY, symbol="BTC/USDC", lookback=200)
    result = detector.detect_with_strategy_weights()
    print(json.dumps(result, indent=2))

    sizer = RegimePositionSizer(base_risk_pct=0.01)
    sizing = sizer.compute_size(
        account_usd=10000,
        entry=95000,
        stop=93000,
        regime_result=result
    )
    print("\nPosition sizing:", json.dumps(sizing, indent=2))

HMM-Based Regime Detection

For agents that want a more principled probabilistic approach, here is a Gaussian HMM implementation using hmmlearn:

python — Gaussian HMM for regime detection
"""
Trains a 3-state Gaussian HMM on historical returns + volatility features.
Uses hmmlearn: pip install hmmlearn
"""
import numpy as np
import requests
from hmmlearn.hmm import GaussianHMM

def fetch_features(api_key: str, symbol: str = "BTC/USDC",
                   interval: str = "4h", limit: int = 1000) -> np.ndarray:
    """Fetch OHLCV and compute return + log-vol features."""
    resp = requests.get(
        "https://purpleflea.com/api/v1/market/candles",
        params={"symbol": symbol, "interval": interval, "limit": limit},
        headers={"X-API-Key": api_key}
    )
    resp.raise_for_status()
    data = resp.json()["candles"]
    closes = np.array([float(c[4]) for c in data])
    returns = np.diff(np.log(closes))
    # 5-bar rolling realized vol
    vol = np.array([returns[max(0,i-5):i].std() for i in range(1, len(returns)+1)])
    # Feature matrix: [return, log(vol + 1e-8)]
    X = np.column_stack([returns, np.log(vol + 1e-8)])
    return X

class MarketHMM:
    def __init__(self, n_states: int = 3, n_iter: int = 100):
        self.n_states = n_states
        self.model = GaussianHMM(
            n_components=n_states, covariance_type="diag",
            n_iter=n_iter, random_state=42
        )
        self.state_labels: dict = {}  # state_id -> regime_name

    def fit(self, X: np.ndarray):
        self.model.fit(X)
        # Label states by mean return (ascending):
        # lowest mean = bearish/volatile, middle = ranging, highest = bullish/trending
        means = self.model.means_[:, 0]  # first feature = return
        order = np.argsort(means)
        names = (["bearish", "ranging", "bullish"] if self.n_states == 3
                 else [f"state_{i}" for i in range(self.n_states)])
        self.state_labels = {int(order[i]): names[i] for i in range(self.n_states)}
        return self

    def decode(self, X: np.ndarray) -> list:
        """Return list of regime labels for each timestep."""
        _, states = self.model.decode(X, algorithm="viterbi")
        return [self.state_labels[s] for s in states]

    def predict_proba(self, X: np.ndarray) -> np.ndarray:
        """Return state probability matrix (T x K)."""
        # Posterior state probabilities via the forward-backward algorithm
        return self.model.predict_proba(X)

    def current_state(self, X: np.ndarray) -> dict:
        """Return current regime name and probabilities."""
        proba = self.predict_proba(X)
        last_proba = proba[-1]
        state_id = int(np.argmax(last_proba))
        regime = self.state_labels[state_id]
        probs = {self.state_labels[i]: float(last_proba[i]) for i in range(self.n_states)}
        return {"regime": regime, "probs": probs, "confidence": float(last_proba[state_id])}


# Example usage
if __name__ == "__main__":
    import os
    API_KEY = os.environ.get("PURPLEFLEA_API_KEY", "pf_live_<your_key>")
    X = fetch_features(API_KEY, symbol="BTC/USDC", interval="4h", limit=800)
    hmm = MarketHMM(n_states=3).fit(X)
    current = hmm.current_state(X[-50:])   # Use last 50 bars for decoding
    print("Current regime:", current["regime"])
    print("Probabilities :", current["probs"])
    print("Confidence    :", f"{current['confidence']:.1%}")

Purple Flea API Integration

The Purple Flea market API provides all the data needed for regime detection. Key endpoints:

python — Purple Flea regime data endpoints
import requests

BASE = "https://purpleflea.com/api/v1"
KEY  = "pf_live_<your_key>"
H    = {"X-API-Key": KEY}

# Historical candles for indicators
candles = requests.get(f"{BASE}/market/candles",
    params={"symbol": "BTC/USDC", "interval": "1h", "limit": 200},
    headers=H).json()

# Pre-computed market microstructure data
micro = requests.get(f"{BASE}/market/microstructure",
    params={"symbol": "BTC/USDC"}, headers=H).json()
print("Bid-ask spread:", micro["spread_bps"], "bps")
print("Order book depth:", micro["depth_usd_1pct"])

# Volatility surface (implied vs realized)
vol = requests.get(f"{BASE}/market/volatility",
    params={"symbol": "BTC/USDC"}, headers=H).json()
print("Realized vol (24h):", vol["realized_24h"])
print("Implied vol (ATM) :", vol["implied_atm"])

# Macro regime signal (aggregated)
macro = requests.get(f"{BASE}/market/macro-regime", headers=H).json()
print("Macro regime:", macro["regime"])  # 'risk-on', 'risk-off', 'stagflation'
print("BTC/SPX corr:", macro["btc_spx_corr_30d"])

Summary and Regime Checklist

Regime detection is the meta-layer above individual strategies. Get it right, and every strategy in your agent's toolkit performs better. Use this checklist:

  • Compute ADX, Hurst, and AC1 on a rolling window — never on fixed historical data alone
  • Blend indicator signals into a soft probability vector — avoid binary regime switching
  • Use HMMs for a principled probabilistic framework when you have sufficient training data
  • Always include a volatility regime dimension separate from trend/range classification
  • Monitor macro regime for high-level risk-on/risk-off context
  • Scale position size down in volatile and uncertain regimes automatically
  • Backtest each strategy separately per regime to validate expected edge before deploying
  • Log detected regime with every trade for post-trade performance attribution
Start Detecting Regimes Now

Get your API key at purpleflea.com/api-keys and access historical candle data, volatility surface, and macro regime signals. New agents can claim free funds at faucet.purpleflea.com to backtest without risk.


Related: Perpetual Futures Guide for AI Agents · Wallet Architecture for Production AI Agents · Best MCP Tools for Agents in 2026