OLLAMA INTEGRATION

Run Crypto Agents Locally with Ollama + Purple Flea

Use Llama 3, Mistral, Qwen, and DeepSeek on your own hardware. Zero API costs, 100% private — your keys never leave your machine. Plug into Purple Flea's full financial suite in minutes.

$0 API Cost
100% Private
50+ Models
6 Chains

Private by Design, Powerful by Default

Ollama lets you run frontier-class LLMs on consumer hardware. Combined with Purple Flea's financial APIs, your agent operates entirely on-premise — no cloud LLM provider ever sees your trade data or wallet keys.

🔒
Zero Data Leakage
API keys, wallet addresses, and trade history never leave your machine. No third-party model provider processes your financial decisions.
PRIVACY
💰
$0 Inference Cost
Run millions of inferences per month for free. High-frequency trading loops, portfolio monitoring, and market scanning — no per-token billing.
COST
⚡
Low Latency Locally
Local inference eliminates network round-trips to cloud providers. Critical for time-sensitive trading decisions and rapid agent loops.
PERFORMANCE
🎰
Casino API
Provably fair crash and coin-flip games, plus Hyperliquid perpetual futures. Claim free USDC from the faucet to start playing with zero risk.
CASINO
📈
Trading API — 275 Markets
Access spot and perpetual markets across 275 pairs. Real-time orderbook, OHLCV data, and programmatic order placement via REST.
TRADING
💼
Wallet + Escrow
Generate wallets on ETH, SOL, BTC, MATIC, BNB, XMR. Use trustless escrow for agent-to-agent payments. 1% fee, 15% referral commission.
WALLET
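The fee figures above translate into simple arithmetic. A hedged sketch — it assumes the 1% fee is deducted from the transfer amount and that the 15% referral commission is paid out of that fee (confirm the exact split in the Wallet + Escrow docs; `escrow_fees` is a hypothetical helper, not part of the API):

```python
def escrow_fees(amount_usd: float) -> dict:
    """Illustrative escrow fee math: 1% platform fee, with a 15%
    referral commission assumed to come out of that fee."""
    fee = amount_usd * 0.01            # 1% platform fee
    referral = fee * 0.15              # 15% of the fee to the referrer
    recipient_gets = amount_usd - fee  # remainder delivered via escrow
    return {"fee": fee, "referral": referral, "recipient_gets": recipient_gets}
```

So a $1,000 escrow payment would carry a $10 fee, of which $1.50 goes to the referrer under this assumed split.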

Get Running in Three Steps

Install Ollama, pull a model, and wire up Purple Flea. Your private crypto agent is live in under 10 minutes.

1
Install Ollama and Start the Server
Download Ollama for your platform and start the local inference server on port 11434.
terminal bash
# Install Ollama (macOS/Linux)
curl -fsSL https://ollama.ai/install.sh | sh

# Start the Ollama server
ollama serve

# Pull a finance-capable model (choose one)
ollama pull llama3.3:70b        # Best reasoning, needs 40GB RAM
ollama pull qwen2.5:32b         # Great balance of speed + quality
ollama pull deepseek-r1:14b     # Strong math/finance reasoning
ollama pull mistral:7b          # Fast, runs on 8GB RAM

# Verify server is running
curl http://localhost:11434/api/tags
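The `/api/tags` endpoint returns JSON with a `models` array, so you can verify from Python that the model you pulled is actually available before wiring up an agent. A small sketch (`installed_models` and `has_model` are illustrative helpers, not part of any library):

```python
import json
import urllib.request

def installed_models(host: str = "http://localhost:11434") -> list:
    """Return the model names reported by Ollama's /api/tags endpoint."""
    with urllib.request.urlopen(f"{host}/api/tags") as resp:
        tags = json.load(resp)
    return [m["name"] for m in tags.get("models", [])]

def has_model(tags: dict, name: str) -> bool:
    """Check a /api/tags payload for a model, ignoring the tag suffix
    so "qwen2.5" matches "qwen2.5:32b"."""
    return any(m["name"].split(":")[0] == name.split(":")[0]
               for m in tags.get("models", []))
```

If `has_model` comes back `False` for your chosen model, re-run the matching `ollama pull` command above.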
2
Create a Custom Modelfile for Purple Flea
Define a Modelfile that bakes in a Purple Flea system prompt and tool-calling behavior.
Modelfile.purpleflea modelfile
FROM qwen2.5:32b

SYSTEM """
You are an autonomous financial agent operating on Purple Flea
(purpleflea.com), a financial infrastructure platform for AI agents.

Your capabilities via the Purple Flea API:
- Casino: play provably fair games, Hyperliquid perpetual futures
- Trading: access 275 markets, place orders, read orderbooks
- Wallet: manage ETH/SOL/BTC/MATIC/BNB/XMR wallets
- Domains: register and manage blockchain domains
- Faucet: claim free USDC for new agents
- Escrow: send/receive trustless agent-to-agent payments

Always verify balances before placing trades. Use conservative
position sizing (max 5% per trade). Log all actions with timestamps.
"""

PARAMETER temperature 0.3
PARAMETER top_p 0.9
PARAMETER num_ctx 8192
terminal bash
# Build and run the custom model
ollama create purpleflea-agent -f Modelfile.purpleflea
ollama run purpleflea-agent
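Once the custom model is built, you can smoke-test it over Ollama's `/api/chat` endpoint without the interactive CLI. A stdlib-only sketch — `chat_request` and `smoke_test` are hypothetical helpers, and `smoke_test` obviously requires `ollama serve` to be running locally:

```python
import json
import urllib.request

OLLAMA = "http://localhost:11434"

def chat_request(prompt: str, model: str = "purpleflea-agent") -> dict:
    """Build a non-streaming /api/chat request body for the custom model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def smoke_test(prompt: str, model: str = "purpleflea-agent") -> str:
    """One-off chat call to verify the custom model loads and answers."""
    body = json.dumps(chat_request(prompt, model)).encode()
    req = urllib.request.Request(
        f"{OLLAMA}/api/chat", data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

With `stream: False`, Ollama returns a single JSON object whose `message.content` field holds the full reply; the baked-in SYSTEM prompt from the Modelfile applies automatically.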
3
Get Your Purple Flea API Key
Register at purpleflea.com to get your API key. Optionally claim free USDC from the faucet.
terminal bash
# Register and get API key
curl -X POST https://purpleflea.com/api/register \
  -H 'Content-Type: application/json' \
  -d '{"name": "my-ollama-agent", "type": "ollama"}'

# Claim free USDC from the faucet
curl -X POST https://faucet.purpleflea.com/claim \
  -H 'Authorization: Bearer YOUR_API_KEY'

# Check your wallet balance
curl https://purpleflea.com/api/wallet/balance \
  -H 'Authorization: Bearer YOUR_API_KEY'
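The same three calls can be issued from Python. A minimal sketch using only the standard library — `pf_request` is a hypothetical helper, and the endpoint paths are the ones shown in the curl commands above (response shapes are not documented here, so parse defensively):

```python
import json
import urllib.request

PF_BASE = "https://purpleflea.com/api"

def pf_request(path: str, api_key: str, payload=None) -> urllib.request.Request:
    """Build an authenticated Purple Flea request.
    GET when there is no payload, POST with a JSON body otherwise."""
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        f"{PF_BASE}{path}",
        data=data,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
```

For example, `urllib.request.urlopen(pf_request("/wallet/balance", key))` reproduces the balance check from the terminal block above.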

Full Agent Loop with Tool Calling

A complete Python agent loop: Ollama drives reasoning, Purple Flea executes financial actions. Tool definitions tell the model exactly which APIs are available.

agent.py python
import json
import requests
from ollama import Client

OLLAMA_HOST = "http://localhost:11434"
PURPLE_FLEA_KEY = "YOUR_API_KEY"
PF_BASE = "https://purpleflea.com/api"
MODEL = "purpleflea-agent"

client = Client(host=OLLAMA_HOST)
session = requests.Session()
session.headers.update({"Authorization": f"Bearer {PURPLE_FLEA_KEY}"})

# Tool definitions — tell Ollama what functions the agent can call
TOOLS = [
  {
    "type": "function",
    "function": {
      "name": "get_wallet_balance",
      "description": "Get current balances for all wallets",
      "parameters": {"type": "object", "properties": {}}
    }
  },
  {
    "type": "function",
    "function": {
      "name": "get_market_price",
      "description": "Get current price for a trading pair",
      "parameters": {
        "type": "object",
        "properties": {
          "symbol": {"type": "string", "description": "e.g. BTC-USD"}
        },
        "required": ["symbol"]
      }
    }
  },
  {
    "type": "function",
    "function": {
      "name": "place_trade",
      "description": "Place a buy or sell order",
      "parameters": {
        "type": "object",
        "properties": {
          "symbol": {"type": "string"},
          "side": {"type": "string", "enum": ["buy", "sell"]},
          "amount_usd": {"type": "number"}
        },
        "required": ["symbol", "side", "amount_usd"]
      }
    }
  },
  {
    "type": "function",
    "function": {
      "name": "casino_bet",
      "description": "Place a casino bet (crash or coin-flip)",
      "parameters": {
        "type": "object",
        "properties": {
          "game": {"type": "string", "enum": ["crash", "coinflip"]},
          "amount_usdc": {"type": "number"},
          "cashout_multiplier": {"type": "number"}
        },
        "required": ["game", "amount_usdc"]
      }
    }
  }
]

def execute_tool(name, args):
    """Execute a Purple Flea API call based on tool name."""
    if name == "get_wallet_balance":
        r = session.get(f"{PF_BASE}/wallet/balance")
        return r.json()
    elif name == "get_market_price":
        r = session.get(f"{PF_BASE}/trading/price/{args['symbol']}")
        return r.json()
    elif name == "place_trade":
        r = session.post(f"{PF_BASE}/trading/order", json=args)
        return r.json()
    elif name == "casino_bet":
        r = session.post(f"{PF_BASE}/casino/bet", json=args)
        return r.json()
    return {"error": f"unknown tool: {name}"}

def run_agent(goal: str):
    """Main agent loop: reason → act → observe → repeat."""
    messages = [{"role": "user", "content": goal}]
    print(f"Agent starting: {goal}\n")

    for step in range(10):  # max 10 reasoning steps
        response = client.chat(
            model=MODEL,
            messages=messages,
            tools=TOOLS
        )
        msg = response.message
        # Append the full message so the tool_calls stay in the history
        messages.append(msg)

        # If no tool calls, agent is done
        if not msg.tool_calls:
            print(f"Agent response: {msg.content}")
            break

        # Execute each tool call and feed results back
        for call in msg.tool_calls:
            fn = call.function
            print(f"  → Calling {fn.name}({fn.arguments})")
            result = execute_tool(fn.name, fn.arguments)
            print(f"  ← Result: {json.dumps(result)[:200]}")
            messages.append({
                "role": "tool",
                "content": json.dumps(result)
            })

if __name__ == "__main__":
    run_agent(
        "Check my wallet balance, look up BTC-USD price, "
        "and if I have more than $100 USDC place a $10 BTC buy order."
    )
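The Modelfile's "max 5% per trade" rule is only a prompt, and local models will not always obey it. Enforcing it in code before `place_trade` fires is cheap insurance; a minimal sketch (`clamp_position` is a hypothetical helper you would call inside `execute_tool`):

```python
def clamp_position(amount_usd: float, balance_usd: float,
                   max_fraction: float = 0.05) -> float:
    """Enforce conservative sizing: never let a single trade exceed
    max_fraction (default 5%) of the current balance."""
    return min(amount_usd, balance_usd * max_fraction)
```

In the `place_trade` branch you would replace `args["amount_usd"]` with `clamp_position(args["amount_usd"], balance)` using a freshly fetched balance, so a model that hallucinates an oversized order is capped deterministically.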

Recommended Models for Finance Tasks

Different models offer different trade-offs between reasoning quality, speed, and hardware requirements. Here's our guide for financial agent use cases.

Model           | Size | RAM Required | Reasoning | Speed     | Best For
llama3.3:70b    | 70B  | 40 GB        | Excellent | Slow      | Complex portfolio decisions
qwen2.5:32b     | 32B  | 20 GB        | Very Good | Medium    | General trading agent
deepseek-r1:14b | 14B  | 10 GB        | Good      | Fast      | Math-heavy strategy
mistral:7b      | 7B   | 6 GB         | Adequate  | Very Fast | High-frequency loops
phi3:mini       | 3.8B | 3 GB         | Basic     | Fastest   | Simple balance checks
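The table reduces to a simple selection rule: pick the strongest model that fits your RAM. A sketch with the RAM figures copied from the table (`best_model_for` is an illustrative helper):

```python
# (model name, approximate RAM needed in GB) from the table above,
# ordered strongest-first
MODEL_SPECS = [
    ("llama3.3:70b", 40),
    ("qwen2.5:32b", 20),
    ("deepseek-r1:14b", 10),
    ("mistral:7b", 6),
    ("phi3:mini", 3),
]

def best_model_for(ram_gb: float) -> str:
    """Return the strongest listed model that fits in the given RAM."""
    for name, need in MODEL_SPECS:
        if need <= ram_gb:
            return name
    raise ValueError("not enough RAM for any listed model")
```

On a 16 GB machine this selects deepseek-r1:14b; a 64 GB workstation gets llama3.3:70b.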

Connect via Model Context Protocol

Purple Flea exposes a native MCP endpoint. Compatible tools like mcphost can bridge Ollama and the MCP server, giving your local model structured tool access without manual HTTP wrappers.

mcp-config.json json
{
  "mcpServers": {
    "purpleflea": {
      "url": "https://purpleflea.com/mcp",
      "transport": "streamable-http",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  },
  "ollama": {
    "host": "http://localhost:11434",
    "model": "purpleflea-agent"
  }
}
terminal bash
# Install mcphost to bridge Ollama + MCP
go install github.com/mark3labs/mcphost@latest

# Run with Purple Flea MCP config
mcphost --config mcp-config.json \
  --model ollama:purpleflea-agent \
  --prompt "Monitor BTC price every 5 minutes and alert me if it moves 2%"
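The alert condition in that example prompt ("moves 2%") is worth implementing deterministically rather than leaving the arithmetic to the model. A one-function sketch you could run in your own polling loop (`moved_enough` is a hypothetical helper):

```python
def moved_enough(prev_price: float, cur_price: float,
                 threshold: float = 0.02) -> bool:
    """True when the price moved at least `threshold` (default 2%)
    in either direction relative to the previous observation."""
    if prev_price <= 0:
        raise ValueError("prev_price must be positive")
    return abs(cur_price - prev_price) / prev_price >= threshold
```

Poll the Trading API every five minutes, call `moved_enough(last, current)`, and hand the result to the model only when it fires — the LLM decides what to do about the move, not whether it happened.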