# Monte Carlo Simulation for AI Financial Agents
Monte Carlo simulation is one of the most powerful tools in a financial AI agent's repertoire. By running thousands of stochastic scenarios, an agent can answer questions that no closed-form formula can: "What is the probability that my trading strategy survives a 2008-style drawdown?" or "Given the casino's house edge, how many bets can I sustain before hitting ruin?"
This guide covers the full Monte Carlo toolkit for financial agents: geometric Brownian motion price paths, variance reduction techniques, casino strategy simulation, portfolio risk metrics (VaR/CVaR), Kelly criterion validation, and a complete Python simulation framework for Purple Flea's environments.
Why Monte Carlo? Financial distributions are fat-tailed, non-Gaussian, and path-dependent. Monte Carlo makes no distributional assumptions — it simply generates many possible futures and counts how often each outcome occurs.
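To make the counting idea concrete, here is a minimal sketch with purely hypothetical numbers (Student-t daily returns, not fitted to any real market): generate many 30-day futures and count the share that end more than 20% down.

```python
import numpy as np

rng = np.random.default_rng(42)
n_scenarios, n_days = 100_000, 30

# Hypothetical fat-tailed daily log-returns: Student-t (3 dof) scaled so the
# standard deviation is ~4% per day (the std of t_3 is sqrt(3), hence the divisor)
daily = rng.standard_t(df=3, size=(n_scenarios, n_days)) * 0.04 / np.sqrt(3)

# Each row is one possible future; estimating the probability is just counting
terminal = np.exp(daily.sum(axis=1)) - 1
p_big_loss = (terminal < -0.20).mean()
print(f"P(30-day loss worse than -20%) ≈ {p_big_loss:.1%}")
```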
## 1. Geometric Brownian Motion
The simplest model for financial price paths is Geometric Brownian Motion (GBM). While its assumptions (constant volatility, log-normal returns, no jumps) are violated in practice, GBM provides an excellent baseline for agent strategy evaluation.
In discrete form, the price at time t+Δt is:

S_{t+Δt} = S_t · exp((μ − σ²/2)·Δt + σ·√Δt · Z),  Z ~ N(0, 1)
```python
import numpy as np
import matplotlib.pyplot as plt
from typing import Optional


class GBMSimulator:
    """
    Geometric Brownian Motion price path simulator.

    Used to stress-test Purple Flea trading agent strategies.
    """

    def __init__(self, s0: float, mu: float, sigma: float,
                 dt: float = 1 / 365 / 24,  # hourly steps
                 seed: Optional[int] = None):
        """
        Args:
            s0: Initial price
            mu: Annualized drift (e.g., 0.10 for 10% annual return)
            sigma: Annualized volatility (e.g., 0.80 for 80% vol, typical crypto)
            dt: Time step in years
        """
        self.s0 = s0
        self.mu = mu
        self.sigma = sigma
        self.dt = dt
        if seed is not None:
            np.random.seed(seed)

    def simulate_paths(self, n_steps: int, n_paths: int) -> np.ndarray:
        """
        Generate n_paths price paths of length n_steps.

        Returns:
            np.ndarray of shape (n_paths, n_steps + 1)
        """
        # Generate all random draws at once (vectorized, fast)
        Z = np.random.standard_normal((n_paths, n_steps))
        drift = (self.mu - 0.5 * self.sigma ** 2) * self.dt
        diffusion = self.sigma * np.sqrt(self.dt)
        log_returns = drift + diffusion * Z
        log_prices = np.cumsum(log_returns, axis=1)
        prices = self.s0 * np.exp(
            np.hstack([np.zeros((n_paths, 1)), log_prices])
        )
        return prices

    def plot_fan(self, paths: np.ndarray, title: str = 'GBM Paths',
                 n_display: int = 50):
        """Plot fan chart of price paths with percentile bands."""
        pcts = np.percentile(paths, [5, 25, 50, 75, 95], axis=0)
        t = np.arange(paths.shape[1])
        fig, ax = plt.subplots(figsize=(10, 5), facecolor='#09090B')
        ax.set_facecolor('#09090B')
        # Display sample paths
        for i in range(min(n_display, len(paths))):
            ax.plot(t, paths[i], color='#A855F7', alpha=0.05, linewidth=0.5)
        # Percentile bands
        ax.fill_between(t, pcts[0], pcts[4], alpha=0.15, color='#A855F7')
        ax.fill_between(t, pcts[1], pcts[3], alpha=0.25, color='#A855F7')
        ax.plot(t, pcts[2], color='#A855F7', linewidth=2, label='Median')
        ax.set_title(title, color='#FAFAFA', fontsize=14, pad=12)
        ax.set_xlabel('Steps', color='#A1A1AA')
        ax.set_ylabel('Price', color='#A1A1AA')
        ax.tick_params(colors='#A1A1AA')
        for spine in ax.spines.values():
            spine.set_edgecolor('#27272A')
        ax.legend(facecolor='#111113', labelcolor='#FAFAFA')
        plt.tight_layout()
        return fig
```
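A quick standalone sanity check on this discretization (it re-implements the same update rather than importing the class): the simulated mean terminal price should match the closed-form GBM expectation E[S_T] = S0 · e^{μT} to well within Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(7)
s0, mu, sigma = 100.0, 0.10, 0.80
dt, n_steps, n_paths = 1 / 365 / 24, 24, 200_000  # one day of hourly steps

# Same discretization the simulator uses
Z = rng.standard_normal((n_paths, n_steps))
log_r = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * Z
s_T = s0 * np.exp(log_r.sum(axis=1))

T = n_steps * dt
analytic = s0 * np.exp(mu * T)  # exact E[S_T] under GBM
print(s_T.mean(), analytic)     # agreement to well under 1%
```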
## 2. Jump-Diffusion Models for Crypto
Crypto assets exhibit sudden price jumps — exchange hacks, regulatory announcements, large liquidation cascades. The Merton jump-diffusion model extends GBM with a Poisson jump process:

S_{t+Δt} = S_t · exp((μ − σ²/2 − λκ)·Δt + σ·√Δt · Z + J),  κ = e^{μ_j + σ_j²/2} − 1

where J is a compound Poisson process with intensity λ and jump multipliers ~ LogNormal(μ_j, σ_j) (i.e., log jump sizes ~ N(μ_j, σ_j)), and the −λκ term compensates the drift for the expected jump effect.
```python
class JumpDiffusionSimulator(GBMSimulator):
    """
    Merton jump-diffusion model for crypto price paths.

    Better captures fat tails and sudden drawdowns than plain GBM.
    """

    def __init__(self, s0: float, mu: float, sigma: float,
                 jump_intensity: float = 2.0,  # jumps per year
                 jump_mean: float = -0.05,     # avg jump size (log)
                 jump_std: float = 0.10,       # jump size std
                 dt: float = 1 / 365 / 24,
                 seed: Optional[int] = None):
        super().__init__(s0, mu, sigma, dt, seed)
        self.lam = jump_intensity
        self.mj = jump_mean
        self.sj = jump_std

    def simulate_paths(self, n_steps: int, n_paths: int) -> np.ndarray:
        """Simulate paths with both diffusion and jump components."""
        # Diffusion component, with drift compensated for the expected jump
        Z = np.random.standard_normal((n_paths, n_steps))
        kappa = np.exp(self.mj + 0.5 * self.sj ** 2) - 1
        drift = (self.mu - 0.5 * self.sigma ** 2 - self.lam * kappa) * self.dt
        diffusion = self.sigma * np.sqrt(self.dt) * Z
        # Jump component: Poisson arrivals per step. The sum of k i.i.d.
        # Normal(mj, sj) log-jumps is Normal(k * mj, sqrt(k) * sj), which
        # np.random.normal produces directly (scale is 0 when no jump occurs).
        n_jumps = np.random.poisson(self.lam * self.dt, (n_paths, n_steps))
        jump_sizes = np.random.normal(self.mj * n_jumps,
                                      self.sj * np.sqrt(n_jumps))
        log_returns = drift + diffusion + jump_sizes
        log_prices = np.cumsum(log_returns, axis=1)
        prices = self.s0 * np.exp(
            np.hstack([np.zeros((n_paths, 1)), log_prices])
        )
        return prices
```
## 3. Casino Strategy Simulation
Monte Carlo is tailor-made for evaluating casino strategies on Purple Flea. Unlike idealized mathematical proofs (which assume infinite bankroll), simulation captures the reality of finite resources and sequential bet dependence.
### Ruin Probability Analysis
```python
class CasinoSimulator:
    """
    Monte Carlo simulator for Purple Flea casino strategies.

    Evaluates ruin risk, expected session length, and profit distribution.
    """

    def __init__(self, house_edge: float = 0.01,  # 1% default
                 initial_balance: float = 1000.0):
        self.house_edge = house_edge
        self.initial_balance = initial_balance

    def simulate_flat_bet(
        self,
        bet_size: float,
        n_bets: int,
        n_simulations: int = 10_000
    ) -> dict:
        """
        Flat betting strategy: same bet every round.

        Returns probability distribution of outcomes.
        """
        # Win probability for even-money bet with house edge
        p_win = 0.5 - self.house_edge / 2
        # Simulate all outcomes at once
        outcomes = np.random.binomial(1, p_win, (n_simulations, n_bets))
        net_per_bet = np.where(outcomes == 1, bet_size, -bet_size)
        balance_paths = self.initial_balance + np.cumsum(net_per_bet, axis=1)
        # Prepend initial balance
        balance_paths = np.hstack([
            np.full((n_simulations, 1), self.initial_balance),
            balance_paths
        ])
        # Ruin: balance drops to 0 or below at any point
        ruin_mask = (balance_paths <= 0).any(axis=1)
        final_balances = balance_paths[:, -1]
        # For ruined simulations, find when ruin occurred
        ruin_steps = []
        for i in np.where(ruin_mask)[0]:
            step = np.argmax(balance_paths[i] <= 0)
            ruin_steps.append(step)
        return {
            'ruin_probability': ruin_mask.mean(),
            'expected_final': final_balances.mean(),
            'median_final': np.median(final_balances),
            'p5_final': np.percentile(final_balances, 5),
            'p95_final': np.percentile(final_balances, 95),
            'avg_ruin_step': np.mean(ruin_steps) if ruin_steps else None,
            'theoretical_ev': self.initial_balance - n_bets * bet_size * self.house_edge,
            'balance_paths': balance_paths,
        }

    def simulate_kelly(
        self,
        kelly_fraction: float = 0.25,  # quarter-Kelly is common
        n_bets: int = 500,
        n_simulations: int = 10_000
    ) -> dict:
        """
        Kelly-proportional betting strategy.

        Bet = kelly_fraction * full_kelly * current_balance
        """
        p_win = 0.5 - self.house_edge / 2
        # For even-money bets, full Kelly = p - q. With a house edge this is
        # negative, so Kelly says bet nothing; the $1 floor below models a
        # forced table minimum.
        full_kelly = p_win - (1 - p_win)
        effective_kelly = kelly_fraction * max(0, full_kelly)
        balance_paths = np.full((n_simulations, n_bets + 1), 0.0)
        balance_paths[:, 0] = self.initial_balance
        for step in range(n_bets):
            current = balance_paths[:, step]
            # Min bet $1 while solvent; ruined players stop betting
            bet = np.where(current > 0,
                           np.maximum(1.0, current * effective_kelly),
                           0.0)
            wins = np.random.rand(n_simulations) < p_win
            balance_paths[:, step + 1] = np.where(
                wins,
                current + bet,
                np.maximum(0.0, current - bet)
            )
        ruin_mask = (balance_paths <= 0).any(axis=1)
        return {
            'ruin_probability': ruin_mask.mean(),
            'expected_final': balance_paths[:, -1].mean(),
            'median_final': np.median(balance_paths[:, -1]),
            'p5_final': np.percentile(balance_paths[:, -1], 5),
            'p95_final': np.percentile(balance_paths[:, -1], 95),
            'kelly_used': effective_kelly,
            'balance_paths': balance_paths,
        }

    def simulate_martingale(
        self,
        base_bet: float = 5.0,
        max_bet: float = 500.0,
        n_bets: int = 200,
        n_simulations: int = 5_000
    ) -> dict:
        """
        Martingale strategy: double after each loss.

        WARNING: demonstrates why this strategy fails at scale.
        """
        p_win = 0.5 - self.house_edge / 2
        results = {'final': [], 'ruin': []}
        for _ in range(n_simulations):
            balance = self.initial_balance
            bet = base_bet
            ruined = False
            for step in range(n_bets):
                if balance <= 0:
                    ruined = True
                    break
                actual_bet = min(bet, balance, max_bet)
                if np.random.rand() < p_win:
                    balance += actual_bet
                    bet = base_bet  # reset after win
                else:
                    balance -= actual_bet
                    bet = min(bet * 2, max_bet)
            results['final'].append(balance)
            results['ruin'].append(ruined)
        finals = np.array(results['final'])
        return {
            'ruin_probability': np.mean(results['ruin']),
            'expected_final': finals.mean(),
            'median_final': np.median(finals),
            'p5_final': np.percentile(finals, 5),
            'p95_final': np.percentile(finals, 95),
        }
```
**The Martingale trap:** Simulations consistently show that Martingale achieves a high win rate in the short run but suffers catastrophic ruin events. The expected value is always negative under a house edge — doubling bets cannot overcome negative EV.
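The underlying arithmetic is worth spelling out: by linearity of expectation, E[total profit] = Σ E[profit of bet i] = −edge · Σ E[bet size], which is negative for any bet-sizing scheme. A quick check under the same 1% even-money edge used by CasinoSimulator:

```python
house_edge = 0.01
p_win = 0.5 - house_edge / 2  # 0.495 for an even-money bet

# EV of one bet of size b: b * p_win - b * (1 - p_win) = -b * house_edge
for b in (5.0, 10.0, 160.0, 500.0):  # sizes a doubling martingale reaches
    ev = b * p_win - b * (1 - p_win)
    print(f"bet {b:>6.0f}: EV = {ev:+.2f}")  # negative at every size
```

Since every individual bet has negative expectation, no sequence of bet sizes can sum to a positive expected total.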
### Strategy Comparison
```python
def compare_strategies(initial_balance: float = 1000.0,
                       n_bets: int = 500,
                       n_sims: int = 10_000) -> None:
    """Compare betting strategies and print a summary table."""
    sim = CasinoSimulator(house_edge=0.01, initial_balance=initial_balance)
    flat = sim.simulate_flat_bet(bet_size=10, n_bets=n_bets, n_simulations=n_sims)
    kelly = sim.simulate_kelly(kelly_fraction=0.25, n_bets=n_bets, n_simulations=n_sims)
    mart = sim.simulate_martingale(base_bet=5, n_bets=n_bets, n_simulations=n_sims)
    print(f"\n{'Strategy':<14} {'Ruin%':>7} {'Median':>9} {'P5':>9} {'P95':>9}")
    print("-" * 52)
    for name, res in [('Flat Bet', flat), ('Quarter Kelly', kelly), ('Martingale', mart)]:
        print(f"{name:<14} {res['ruin_probability']*100:>6.1f}% "
              f"${res['median_final']:>8.0f} "
              f"${res['p5_final']:>8.0f} "
              f"${res['p95_final']:>8.0f}")

# compare_strategies()
```
## 4. Value-at-Risk and CVaR
Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) are the standard risk metrics for financial AI agents. Monte Carlo computes these without any parametric assumption:
```python
def compute_var_cvar(
    portfolio_values: np.ndarray,
    confidence: float = 0.95
) -> dict:
    """
    Compute portfolio VaR and CVaR from Monte Carlo simulation results.

    Args:
        portfolio_values: Final portfolio values from n_simulations, shape (n,)
        confidence: e.g. 0.95 for 95% VaR

    Returns:
        dict with VaR, CVaR (Expected Shortfall), and percentile info
    """
    # Returns relative to the mean final value
    returns = (portfolio_values - portfolio_values.mean()) / portfolio_values.mean()
    alpha = 1 - confidence
    var_pct = np.percentile(returns, alpha * 100)
    cvar_pct = returns[returns <= var_pct].mean()
    var_dollar = -var_pct * portfolio_values.mean()
    cvar_dollar = -cvar_pct * portfolio_values.mean()
    return {
        'var_pct': var_pct * 100,
        'cvar_pct': cvar_pct * 100,
        'var_dollar': var_dollar,
        'cvar_dollar': cvar_dollar,
        'confidence': confidence * 100,
        'n_scenarios': len(portfolio_values),
        # Distribution summary
        'p1': np.percentile(returns, 1) * 100,
        'p5': np.percentile(returns, 5) * 100,
        'p25': np.percentile(returns, 25) * 100,
        'p50': np.percentile(returns, 50) * 100,
        'p75': np.percentile(returns, 75) * 100,
        'p95': np.percentile(returns, 95) * 100,
        'p99': np.percentile(returns, 99) * 100,
    }
```
```python
def portfolio_risk_report(
    simulator: GBMSimulator,
    n_steps: int = 720,  # 30 days of hourly prices
    n_paths: int = 10_000,
    position_size: float = 10_000.0
) -> dict:
    """Generate a Monte Carlo risk report for a long position."""
    paths = simulator.simulate_paths(n_steps, n_paths)
    final_prices = paths[:, -1]
    portfolio_values = (final_prices / simulator.s0) * position_size
    var_cvar = compute_var_cvar(portfolio_values)
    # Maximum drawdown distribution (drawdowns are negative numbers)
    running_max = np.maximum.accumulate(paths, axis=1)
    drawdowns = (paths - running_max) / running_max
    max_drawdowns = drawdowns.min(axis=1)
    return {
        **var_cvar,
        'median_drawdown': np.median(max_drawdowns) * 100,
        # 5th percentile of the (negative) drawdowns = the drawdown
        # exceeded in only 5% of scenarios
        'p95_drawdown': np.percentile(max_drawdowns, 5) * 100,
        'ruin_probability': (portfolio_values <= 0).mean() * 100,
    }
```
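When wiring VaR/CVaR into an agent, it is worth validating the percentile plumbing against a case with a known answer. For Normal(0, σ) returns, the 95% VaR is 1.645σ and the 95% CVaR is about 2.063σ; a standalone check (independent of the helpers above, with a hypothetical σ):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.02                                 # hypothetical daily return vol
returns = rng.normal(0.0, sigma, 1_000_000)

var95 = -np.percentile(returns, 5)           # loss exceeded in 5% of scenarios
cvar95 = -returns[returns <= -var95].mean()  # mean loss within that 5% tail

print(var95, 1.6449 * sigma)                 # analytic 95% VaR: sigma * z_0.95
print(cvar95, 2.0627 * sigma)                # analytic 95% CVaR: sigma * phi(z)/0.05
```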
## 5. Kelly Criterion Validation via Simulation
The Kelly criterion prescribes the optimal bet fraction for a repeated gamble with known edge. For a binary outcome with win probability p and even odds:

f* = p − (1 − p) = 2p − 1
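This closed form can be cross-checked numerically: the expected log-growth per bet is g(f) = p·ln(1+f) + (1−p)·ln(1−f), and maximizing it over a grid of fractions recovers f* = 2p − 1:

```python
import numpy as np

p = 0.52  # win probability with a 2% edge
f_grid = np.linspace(0.0, 0.5, 5001)

# Expected log-growth per bet as a function of bet fraction f
growth = p * np.log1p(f_grid) + (1 - p) * np.log1p(-f_grid)

f_star = f_grid[np.argmax(growth)]
print(f_star, 2 * p - 1)  # numeric optimum vs. closed-form Kelly fraction
```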
Kelly maximizes the long-run growth rate, but its properties are best understood through simulation, not just theory:
```python
def kelly_fraction_sweep(
    p_win: float = 0.52,  # slight edge (2%)
    n_bets: int = 1_000,
    n_sims: int = 5_000,
    fractions: Optional[list] = None
) -> dict:
    """
    Sweep over bet fractions to find the empirically optimal fraction.

    Reports growth rate, ruin probability, and median terminal wealth.
    """
    if fractions is None:
        fractions = np.linspace(0.01, 0.50, 25)
    full_kelly = 2 * p_win - 1
    results = []
    for f in fractions:
        # Simulate n_sims wealth paths of n_bets multiplicative bets
        uniform = np.random.rand(n_sims, n_bets)
        wins = (uniform < p_win).astype(float)
        losses = 1 - wins
        # Multiplicative return each bet: *(1+f) on a win, *(1-f) on a loss
        bet_multipliers = 1 + f * wins - f * losses
        cumulative = np.cumprod(bet_multipliers, axis=1)
        final_wealth = cumulative[:, -1]
        ruin = (final_wealth <= 0).mean()
        median_wl = np.median(final_wealth)
        mean_log_return = np.log(final_wealth[final_wealth > 0]).mean() \
            if (final_wealth > 0).any() else -np.inf
        results.append({
            'fraction': f,
            'fraction_of_kelly': f / full_kelly,
            'ruin_probability': ruin,
            'median_final': median_wl,
            'mean_log_return': mean_log_return,
        })
    # Find the optimal fraction by median terminal wealth
    best_idx = np.argmax([r['median_final'] for r in results])
    best = results[best_idx]
    return {
        'full_kelly': full_kelly,
        'optimal_fraction': best['fraction'],
        'optimal_fraction_of_kelly': best['fraction_of_kelly'],
        'optimal_median_wealth': best['median_final'],
        'sweep_results': results,
        'recommendation': (
            f"Use {best['fraction_of_kelly']:.0%} of Kelly "
            f"({best['fraction']:.1%} bet size) for maximum median wealth"
        )
    }
```
**Kelly insight:** Simulations consistently show that half-Kelly or quarter-Kelly delivers near-optimal long-run growth with dramatically lower ruin probability and drawdown. Full Kelly is mathematically optimal only for expected log-wealth — not over any finite time horizon or with estimation error in p_win.
## 6. Variance Reduction Techniques
Naive Monte Carlo requires many scenarios for accurate tail estimates. Variance reduction techniques get more accurate results from fewer simulations — critical for agents that need to make real-time decisions.
### Antithetic Variates
For every random sample Z, also evaluate the path at −Z. The two paths are negatively correlated, which substantially reduces estimator variance for payoffs that are monotone in Z:
```python
def simulate_antithetic(simulator: GBMSimulator,
                        n_steps: int,
                        n_paths: int) -> np.ndarray:
    """
    Antithetic variates: halves the required paths for the same accuracy.

    Generate n_paths/2 normal paths + n_paths/2 mirrored paths.
    """
    assert n_paths % 2 == 0, "n_paths must be even"
    half = n_paths // 2
    Z = np.random.standard_normal((half, n_steps))
    drift = (simulator.mu - 0.5 * simulator.sigma ** 2) * simulator.dt
    diff = simulator.sigma * np.sqrt(simulator.dt)

    def paths_from_Z(z):
        log_r = drift + diff * z
        return simulator.s0 * np.exp(
            np.hstack([np.zeros((len(z), 1)), np.cumsum(log_r, axis=1)])
        )

    normal = paths_from_Z(Z)
    antithetic = paths_from_Z(-Z)
    return np.vstack([normal, antithetic])
```
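The payoff from pairing can be measured directly. This standalone sketch estimates E[S_T] for a one-step GBM terminal price two ways, at the same total budget of random draws; with the terminal price monotone in Z, the antithetic standard error comes out markedly smaller:

```python
import numpy as np

rng = np.random.default_rng(1)
s0, mu, sigma, T = 100.0, 0.05, 0.6, 0.25
n = 100_000  # total random draws for each estimator

def terminal(z):
    # Exact one-step GBM terminal price for standard-normal draws z
    return s0 * np.exp((mu - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)

plain = terminal(rng.standard_normal(n))

z_half = rng.standard_normal(n // 2)
paired = 0.5 * (terminal(z_half) + terminal(-z_half))  # antithetic pairs

se_plain = plain.std() / np.sqrt(n)
se_anti = paired.std() / np.sqrt(n // 2)
print(se_plain, se_anti)  # the antithetic standard error is substantially smaller
```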
### Control Variates

A control variate exploits a quantity whose expectation is known analytically (here the terminal GBM price, with mean S0 · e^{μT}) to cancel sampling noise from the estimate:

```python
def simulate_control_variate(simulator: GBMSimulator,
                             n_steps: int,
                             n_paths: int,
                             payoff_fn) -> tuple:
    """
    Control variate: use the known GBM mean as a control.

    Reduces variance by exploiting known analytical properties.
    """
    paths = simulator.simulate_paths(n_steps, n_paths)
    payoffs = np.array([payoff_fn(p) for p in paths])
    # Control: final GBM price has known mean = S0 * exp(mu * T)
    T = n_steps * simulator.dt
    analytical_mean = simulator.s0 * np.exp(simulator.mu * T)
    control_vals = paths[:, -1]
    # Optimal coefficient beta = Cov(payoff, control) / Var(control)
    cov = np.cov(payoffs, control_vals)[0, 1]
    var_c = np.var(control_vals)
    beta = cov / (var_c + 1e-12)
    # Corrected payoff estimate (same mean, lower variance)
    corrected = payoffs - beta * (control_vals - analytical_mean)
    return corrected.mean(), corrected.std() / np.sqrt(n_paths)
```
## 7. Scenario Analysis and Stress Testing
Beyond random sampling, agents should simulate specific historical stress scenarios to validate robustness:
```python
class ScenarioAnalyzer:
    """
    Run named stress scenarios against a Purple Flea agent strategy.

    Includes historical crypto drawdown events and custom shocks.
    """

    SCENARIOS = {
        'covid_crash': {
            'description': 'COVID crash (March 2020): -50% in 2 days',
            'shock': -0.50, 'shock_days': 2, 'recovery_vol_mult': 3.0
        },
        'luna_collapse': {
            'description': 'LUNA collapse (May 2022): -99% in 7 days',
            'shock': -0.99, 'shock_days': 7, 'recovery_vol_mult': 5.0
        },
        'ftx_collapse': {
            'description': 'FTX collapse (Nov 2022): -30% in 2 days',
            'shock': -0.30, 'shock_days': 2, 'recovery_vol_mult': 2.5
        },
        'btc_halving_rally': {
            'description': 'Post-halving rally: +200% in 90 days',
            'shock': +2.00, 'shock_days': 90, 'recovery_vol_mult': 1.5
        },
        'regulatory_ban': {
            'description': 'Hypothetical regulatory ban: -70% in 5 days',
            'shock': -0.70, 'shock_days': 5, 'recovery_vol_mult': 4.0
        },
    }

    def __init__(self, initial_price: float = 50_000.0,
                 base_vol: float = 0.80):
        self.s0 = initial_price
        self.base_vol = base_vol

    def run_scenario(self, scenario_name: str,
                     strategy_fn,
                     n_paths: int = 1_000,
                     post_shock_steps: int = 720) -> dict:
        """
        Simulate a named stress scenario and evaluate an agent strategy.

        Args:
            strategy_fn: Callable(price_path) -> final_portfolio_value
        """
        sc = self.SCENARIOS[scenario_name]
        shock_price = self.s0 * (1 + sc['shock'])
        # Post-shock GBM with elevated volatility
        post_sim = GBMSimulator(
            s0=shock_price,
            mu=0.0,  # no drift assumption in stress scenario
            sigma=self.base_vol * sc['recovery_vol_mult'],
            dt=1 / 365 / 24
        )
        post_paths = post_sim.simulate_paths(post_shock_steps, n_paths)
        # Prepend the shock event (linear hourly decline into the trough)
        shock_path = np.linspace(self.s0, shock_price,
                                 sc['shock_days'] * 24 + 1)
        full_paths = np.hstack([
            np.tile(shock_path, (n_paths, 1)),
            post_paths[:, 1:]
        ])
        portfolio_values = np.array([strategy_fn(p) for p in full_paths])
        return {
            'scenario': scenario_name,
            'description': sc['description'],
            'shock_magnitude': sc['shock'] * 100,
            'portfolio_mean': portfolio_values.mean(),
            'portfolio_p5': np.percentile(portfolio_values, 5),
            'portfolio_p50': np.percentile(portfolio_values, 50),
            'portfolio_p95': np.percentile(portfolio_values, 95),
            'survival_rate': (portfolio_values > 0).mean() * 100,
        }

    def run_all(self, strategy_fn, n_paths: int = 500) -> list:
        """Run all named scenarios and return results sorted worst-first."""
        results = []
        for name in self.SCENARIOS:
            try:
                r = self.run_scenario(name, strategy_fn, n_paths)
                results.append(r)
            except Exception as e:
                results.append({'scenario': name, 'error': str(e)})
        return sorted(results, key=lambda x: x.get('portfolio_p5', 0))
```
## 8. Monte Carlo for Purple Flea Escrow Workflows
Monte Carlo simulation extends beyond price paths. For the Purple Flea Escrow service, agents can simulate multi-party payment workflows to estimate:
- Expected completion time distributions for multi-agent workflows
- Probability of dispute given agent reputation distributions
- Net fees after referral income at varying transaction volumes
```python
def simulate_escrow_economics(
    n_transactions: int = 1_000,
    avg_tx_size: float = 500.0,
    fee_rate: float = 0.01,       # 1% Purple Flea escrow fee
    referral_rate: float = 0.15,  # 15% referral on fees
    dispute_rate: float = 0.02,   # 2% of transactions disputed
    n_sims: int = 10_000
) -> dict:
    """
    Simulate escrow revenue for a referring agent.

    Models transaction size variation and dispute costs.
    """
    # Transaction sizes: log-normal (fat right tail for large deals)
    tx_sizes = np.random.lognormal(
        mean=np.log(avg_tx_size),
        sigma=0.8,
        size=(n_sims, n_transactions)
    )
    # Fees per transaction
    fees = tx_sizes * fee_rate
    # Referral income (agent's cut)
    referral_income = fees * referral_rate
    # Disputes: randomly select a fraction and halve the associated revenue
    is_dispute = np.random.rand(n_sims, n_transactions) < dispute_rate
    referral_income *= np.where(is_dispute, 0.5, 1.0)
    total_referral = referral_income.sum(axis=1)
    total_volume = tx_sizes.sum(axis=1)
    return {
        'expected_referral_income': total_referral.mean(),
        'median_referral_income': np.median(total_referral),
        'p5_referral_income': np.percentile(total_referral, 5),
        'p95_referral_income': np.percentile(total_referral, 95),
        'expected_volume': total_volume.mean(),
        'effective_referral_rate': total_referral.mean() / total_volume.mean() * 100,
        # Average share of transactions disputed, in percent
        'dispute_cost_pct': is_dispute.mean() * 100,
        'n_simulations': n_sims,
    }
```
## 9. Full Trading Strategy Simulation
```python
class TradingStrategySimulator:
    """
    Complete Monte Carlo backtester for Purple Flea trading strategies.

    Simulates price paths and evaluates strategy performance over each path.
    """

    def __init__(self, price_simulator, initial_capital: float = 10_000.0,
                 fee_rate: float = 0.001,     # 0.1% per trade
                 max_position: float = 1.0):  # 100% of capital max
        self.sim = price_simulator
        self.capital = initial_capital
        self.fee_rate = fee_rate
        self.max_position = max_position

    def _run_momentum(self, prices: np.ndarray,
                      window: int = 20) -> float:
        """
        Simple momentum strategy: buy when price > SMA(window), sell otherwise.

        Returns final portfolio value.
        """
        capital = self.capital
        position = 0.0
        for i in range(window, len(prices)):
            sma = prices[i - window:i].mean()
            price = prices[i]
            if price > sma and position == 0:
                # Buy: size the order so cost including fees fits in capital
                qty = (capital * self.max_position) / (price * (1 + self.fee_rate))
                cost = qty * price * (1 + self.fee_rate)
                if cost <= capital:
                    capital -= cost
                    position = qty
            elif price <= sma and position > 0:
                # Sell
                proceeds = position * price * (1 - self.fee_rate)
                capital += proceeds
                position = 0.0
        # Mark any remaining position to market at the final price
        return capital + position * prices[-1]

    def _run_mean_reversion(self, prices: np.ndarray,
                            window: int = 30,
                            entry_z: float = 2.0) -> float:
        """
        Z-score mean reversion: buy when z < -entry_z, sell when z > 0.
        """
        capital = self.capital
        position = 0.0
        for i in range(window, len(prices)):
            rolling = prices[i - window:i]
            z = (prices[i] - rolling.mean()) / (rolling.std() + 1e-8)
            price = prices[i]
            if z < -entry_z and position == 0:
                # Half-size entries, fee-inclusive sizing
                qty = (capital * self.max_position * 0.5) / (price * (1 + self.fee_rate))
                cost = qty * price * (1 + self.fee_rate)
                if cost <= capital:
                    capital -= cost
                    position = qty
            elif z > 0 and position > 0:
                proceeds = position * price * (1 - self.fee_rate)
                capital += proceeds
                position = 0.0
        return capital + position * prices[-1]

    def compare_strategies(self, n_paths: int = 5_000,
                           n_steps: int = 720) -> dict:
        """Run both strategies over n_paths and compare statistics."""
        paths = self.sim.simulate_paths(n_steps, n_paths)
        momentum_results = np.array([
            self._run_momentum(p) for p in paths
        ])
        reversion_results = np.array([
            self._run_mean_reversion(p) for p in paths
        ])
        buy_hold_results = (paths[:, -1] / paths[:, 0]) * self.capital

        def summarize(vals, name):
            return {
                'strategy': name,
                'median_final': np.median(vals),
                'p5_final': np.percentile(vals, 5),
                'p95_final': np.percentile(vals, 95),
                'beat_bh_rate': (vals > buy_hold_results).mean() * 100,
                # Fraction of paths losing more than half the starting capital
                'ruin_rate': (vals < self.capital * 0.5).mean() * 100,
                # Risk-adjusted excess return over the horizon (not annualized)
                'sharpe_proxy': (vals.mean() - self.capital) / (vals.std() + 1e-8),
            }

        return {
            'momentum': summarize(momentum_results, 'Momentum SMA'),
            'mean_reversion': summarize(reversion_results, 'Mean Reversion'),
            'buy_hold': summarize(buy_hold_results, 'Buy & Hold'),
        }
```
## 10. Implementation Tips for Production Agents
| Challenge | Solution | Python Tool |
|---|---|---|
| Speed (10K paths) | Vectorize with NumPy; avoid Python loops | `np.cumsum`, `np.cumprod` |
| Memory (large paths) | Stream paths in batches; save only statistics | Generator functions |
| Reproducibility | Use a seeded `Generator` per simulation | `np.random.default_rng(seed)` |
| Parallelism | Split paths across CPU cores | `multiprocessing.Pool` |
| Fat tails | Use Student-t or jump-diffusion instead of GBM | `scipy.stats.t` |
| Correlation | Simulate correlated assets with Cholesky decomposition | `np.linalg.cholesky` |
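The last row of the table can be sketched as follows (the 0.7 correlation and the two vol figures are hypothetical): draw i.i.d. standard normals, multiply by the Cholesky factor of the target correlation matrix, and use the resulting columns as shocks for each asset's GBM returns.

```python
import numpy as np

rng = np.random.default_rng(3)
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])   # target correlation between the two assets
L = np.linalg.cholesky(corr)

n_steps = 50_000
Z = rng.standard_normal((n_steps, 2))
Z_corr = Z @ L.T                # rows now carry the target correlation

# Feed the correlated shocks into two GBM log-return series
sigma = np.array([0.80, 0.90])  # annualized vols (hypothetical)
dt = 1 / 365 / 24
log_returns = -0.5 * sigma ** 2 * dt + sigma * np.sqrt(dt) * Z_corr

empirical = np.corrcoef(log_returns.T)[0, 1]
print(empirical)                # close to the 0.7 target
```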
### Parallelized Monte Carlo
```python
from multiprocessing import Pool


def _simulate_chunk(args):
    """Worker function for parallel Monte Carlo."""
    sim, n_steps, chunk_size, seed = args
    np.random.seed(seed)  # distinct seed per worker process
    return sim.simulate_paths(n_steps, chunk_size)


def parallel_monte_carlo(simulator: GBMSimulator,
                         n_steps: int,
                         n_paths: int,
                         n_workers: int = 4) -> np.ndarray:
    """Distribute Monte Carlo across CPU cores."""
    chunk_size = n_paths // n_workers  # note: drops any remainder paths
    seeds = np.random.randint(0, 1_000_000, n_workers)
    args = [(simulator, n_steps, chunk_size, int(s)) for s in seeds]
    with Pool(n_workers) as pool:
        chunks = pool.map(_simulate_chunk, args)
    return np.vstack(chunks)
```
## 11. Getting Started
Ready to run Monte Carlo simulations against Purple Flea's live data? Here's the fastest path:
- Claim free tokens via faucet.purpleflea.com to fund your test agent
- Get historical price data from Purple Flea's trading API at `/api/trading/history`
- Calibrate GBM parameters (μ, σ) from historical returns
- Run `compare_strategies(n_paths=10_000)` before deploying any real capital
- Check stress scenarios: your strategy must survive all 5 named scenarios above
- Connect via MCP: use Purple Flea MCP to pipe simulation results directly into your agent's decision loop
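The calibration step can be sketched as below. The synthetic price series stands in for hourly closes from the trading API; the estimators themselves are the standard ones for GBM log returns. Note that volatility is pinned down well even by short windows, while drift estimates stay extremely noisy, which is a good reason to stress-test across a range of μ rather than trust a point estimate.

```python
import numpy as np

rng = np.random.default_rng(5)
dt = 1 / 365 / 24  # hourly bars, in years

# Synthetic stand-in for a year of hourly closes pulled from the trading API
true_mu, true_sigma = 0.10, 0.80
z = rng.standard_normal(24 * 365)
prices = 100 * np.exp(np.cumsum((true_mu - 0.5 * true_sigma ** 2) * dt
                                + true_sigma * np.sqrt(dt) * z))

# Annualized GBM estimates from log returns
log_r = np.diff(np.log(prices))
sigma_hat = log_r.std(ddof=1) / np.sqrt(dt)
mu_hat = log_r.mean() / dt + 0.5 * sigma_hat ** 2
print(mu_hat, sigma_hat)  # sigma_hat lands near 0.80; mu_hat is very noisy
```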
**Paper trading first:** Purple Flea's sandbox environment lets you run real-time simulations with live price feeds — but without risk. Validate your Monte Carlo models against actual market behavior before committing capital.
## Conclusion
Monte Carlo simulation gives financial AI agents the ability to quantify uncertainty — arguably the most important capability in risk management. Key takeaways from this guide:
- Use GBM as a baseline price model, upgraded to jump-diffusion for crypto
- Always run 10,000+ scenarios for stable tail estimates (VaR, CVaR)
- Simulate named stress scenarios (COVID crash, LUNA collapse) before deploying any strategy
- Validate Kelly fractions empirically — theory says full Kelly, practice says half-Kelly
- Use antithetic variates and control variates to get accurate estimates with fewer scenarios
- Parallelize with `multiprocessing` for production-speed simulation
- Apply the same framework to casino bankroll management and escrow revenue forecasting
Purple Flea's six services — Casino, Trading, Wallet, Domains, Faucet, and Escrow — all have stochastic outcomes amenable to Monte Carlo analysis. Use the Faucet's free tokens to test your models risk-free before scaling to live capital.