LiteLLM Integration

Purple Flea for LiteLLM

Define your crypto tools once using OpenAI's function calling format, then run them with any LLM. Switch between GPT-4o, Claude 3.5, Gemini Pro, Llama 3.1, and 100+ other models; your Purple Flea tools work everywhere.

Get Free API Key → View Docs

What is LiteLLM?

LiteLLM is a unified interface for 100+ LLM providers. Write your agent code once in the OpenAI format; point it at any model without changing a line.

🔀
Provider Agnostic
One API interface for OpenAI, Anthropic, Google, Mistral, Cohere, Together AI, Replicate, Ollama, Azure, AWS Bedrock, and 90+ other providers.
⚙️
Function Calling Everywhere
LiteLLM normalizes function/tool calling across providers. Define your Purple Flea tools once in the OpenAI schema and they work with every model that supports tool use.
💰
Cost Optimization
Route simple calls to cheap models, fall back on failures, and load-balance across providers. LiteLLM can cut your LLM costs by 60-80% without changing your agent logic.

Works with 100+ Models Including:

GPT-4o
Claude 3.5 Sonnet
Gemini 1.5 Pro
Llama 3.1 405B
Mistral Large
Cohere Command
DeepSeek V3
Qwen 2.5
AWS Bedrock
Azure OpenAI
Ollama (local)
Together AI

Define Once, Run Anywhere

Define your Purple Flea tools in OpenAI format, then use them with any model via LiteLLM's completion() function.

Define Purple Flea Tools

```python
import json

import litellm
import requests

PURPLEFLEA_API_KEY = "YOUR_API_KEY"
BASE = "https://api.purpleflea.com/v1"

# Tool definitions (OpenAI function calling format)
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_crypto_price",
            "description": "Get current price for any cryptocurrency",
            "parameters": {
                "type": "object",
                "properties": {
                    "symbol": {"type": "string", "description": "e.g. BTC, ETH, SOL"}
                },
                "required": ["symbol"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "open_trade",
            "description": "Open a perpetual futures trade",
            "parameters": {
                "type": "object",
                "properties": {
                    "symbol": {"type": "string"},
                    "side": {"type": "string", "enum": ["long", "short"]},
                    "size": {"type": "number"},
                    "leverage": {"type": "integer", "default": 1},
                },
                "required": ["symbol", "side", "size"],
            },
        },
    },
]
```
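Before dispatching a tool call, it can be worth checking the model's arguments against the schema's required fields, since models occasionally omit parameters. The `validate_args` helper below is an illustrative sketch, not part of LiteLLM or the Purple Flea SDK (the trimmed `tool_defs` list just keeps the example self-contained):

```python
def validate_args(tool_defs: list, name: str, args: dict) -> list:
    """Return the list of required parameters missing from a tool call."""
    for t in tool_defs:
        fn = t["function"]
        if fn["name"] == name:
            required = fn["parameters"].get("required", [])
            return [p for p in required if p not in args]
    raise KeyError(f"unknown tool: {name}")

# Trimmed copy of the open_trade definition for demonstration
tool_defs = [{
    "type": "function",
    "function": {
        "name": "open_trade",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {"type": "string"},
                "side": {"type": "string"},
                "size": {"type": "number"},
            },
            "required": ["symbol", "side", "size"],
        },
    },
}]

# A tool call that forgot the mandatory 'size' parameter
print(validate_args(tool_defs, "open_trade", {"symbol": "BTC", "side": "long"}))  # → ['size']
```

If anything comes back missing, you can return an error message in the tool result and let the model retry rather than hitting the trading API with bad input.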

Run with Any LLM

```python
def execute_tool(tool_call):
    name = tool_call.function.name
    args = json.loads(tool_call.function.arguments)
    if name == "get_crypto_price":
        r = requests.get(
            f"{BASE}/trading/price/{args['symbol']}",
            headers={"X-API-Key": PURPLEFLEA_API_KEY},
        )
        return r.json()
    elif name == "open_trade":
        r = requests.post(
            f"{BASE}/trading/open",
            json=args,
            headers={"X-API-Key": PURPLEFLEA_API_KEY},
        )
        return r.json()

# Works with GPT-4o
response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Is BTC bullish? Open a long if yes."}],
    tools=tools,
)

# Switch to Claude with ZERO code changes
response = litellm.completion(
    model="claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "Is BTC bullish? Open a long if yes."}],
    tools=tools,
)

# Or Llama via Ollama
response = litellm.completion(
    model="ollama/llama3.1:70b",
    messages=[{"role": "user", "content": "Is BTC bullish? Open a long if yes."}],
    tools=tools,
)
```

Full Agentic Loop

```python
def run_agent(model: str, task: str):
    messages = [{"role": "user", "content": task}]
    while True:
        response = litellm.completion(model=model, messages=messages, tools=tools)
        msg = response.choices[0].message
        if not msg.tool_calls:
            return msg.content
        # Execute all tool calls
        messages.append(msg)
        for tc in msg.tool_calls:
            result = execute_tool(tc)
            messages.append({
                "role": "tool",
                "tool_call_id": tc.id,
                "content": json.dumps(result),
            })

# Run the SAME agent across different models
task = "Check BTC price, evaluate market conditions, and trade accordingly"
print(run_agent("gpt-4o", task))
print(run_agent("claude-3-5-sonnet-20241022", task))
print(run_agent("gemini/gemini-1.5-pro", task))
```

Cost-Optimized Routing

```python
from litellm import Router

# Route expensive tasks to capable models,
# cheap tasks to efficient models
router = Router(model_list=[
    {"model_name": "analysis", "litellm_params": {"model": "gpt-4o"}},
    {"model_name": "execution", "litellm_params": {"model": "gpt-4o-mini"}},
])

# Split the tool list defined earlier into individual tools
get_crypto_price_tool, open_trade_tool = tools

# Use the powerful model for market analysis
analysis = router.completion(
    model="analysis",
    messages=[{"role": "user", "content": "Analyze BTC technicals in depth"}],
    tools=[get_crypto_price_tool],
)

# Use the cheaper model for trade execution
execution = router.completion(
    model="execution",
    messages=[{"role": "user", "content": f"Execute: {analysis.choices[0].message.content}"}],
    tools=[open_trade_tool],
)
```
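To see why splitting traffic this way matters, here is back-of-the-envelope arithmetic. The per-token prices below are illustrative placeholders, not quoted rates; check your provider's current pricing:

```python
# Illustrative prices in $ per 1M input tokens (placeholders, not real quotes)
PRICE = {"gpt-4o": 2.50, "gpt-4o-mini": 0.15}

def monthly_cost(calls: int, tokens_per_call: int, model: str) -> float:
    return calls * tokens_per_call / 1_000_000 * PRICE[model]

# 10,000 calls/month at ~2,000 input tokens each
all_big = monthly_cost(10_000, 2_000, "gpt-4o")
split = monthly_cost(1_000, 2_000, "gpt-4o") + monthly_cost(9_000, 2_000, "gpt-4o-mini")

print(f"all gpt-4o: ${all_big:.2f}")  # → all gpt-4o: $50.00
print(f"routed:     ${split:.2f}")    # → routed:     $7.70
print(f"saved:      {1 - split / all_big:.0%}")
```

Under these assumptions, sending only the 10% of calls that need deep analysis to the large model cuts the bill by roughly 85%, which is where savings claims of this kind come from.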

Why LiteLLM + Purple Flea?

🔄
Model Portability
Your crypto trading agent built on GPT-4o today will work on the next frontier model tomorrow, with zero code changes to your Purple Flea tool definitions.
🏠
Local Models Supported
Run your agent with a local Llama model via Ollama. Purple Flea handles the on-chain operations while your LLM stays private on your own hardware.
🛡️
Fallback Resilience
Configure LiteLLM to automatically fall back to a secondary LLM if the primary fails. Your crypto trading agent keeps running even during provider outages.
📉
Cost Optimization
Use a cheap model for simple tasks (checking balances, formatting data) and a powerful model only when making trading decisions. Reduce LLM costs dramatically.
🧪
A/B Testing Models
Compare trading performance across different LLMs using the same Purple Flea tools. Discover which model makes better trading decisions with your strategy.
🔒
One Key, Any Model
Your Purple Flea API key works regardless of which LLM provider you're using. Manage one crypto credential while freely switching AI providers.
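The fallback pattern above can be configured through LiteLLM's Router, or expressed as a plain wrapper like this sketch. The wrapper takes the completion function as a parameter so it can be demonstrated offline; `complete_with_fallback` and the `flaky` stub are illustrative, not LiteLLM APIs:

```python
def complete_with_fallback(models, complete, **kwargs):
    """Try each model in order; return the first successful response."""
    last_err = None
    for model in models:
        try:
            return complete(model=model, **kwargs)
        except Exception as err:  # provider outage, rate limit, timeout, ...
            last_err = err
    raise RuntimeError(f"all models failed: {models}") from last_err

# In production you would pass litellm.completion as `complete`:
#   complete_with_fallback(["gpt-4o", "claude-3-5-sonnet-20241022"],
#                          litellm.completion, messages=[...], tools=tools)

# Offline demonstration with a stub whose primary provider is down
def flaky(model, **kwargs):
    if model == "gpt-4o":
        raise TimeoutError("primary provider down")
    return f"answered by {model}"

print(complete_with_fallback(["gpt-4o", "claude-3-5-sonnet-20241022"], flaky))
# → answered by claude-3-5-sonnet-20241022
```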

FAQ

Does every LLM on LiteLLM support function calling?
Most frontier models do (GPT-4, Claude, Gemini, Mistral Large, etc.). Smaller models have varying support. LiteLLM's documentation lists function calling compatibility per provider.
Is the Purple Flea Python SDK compatible with LiteLLM?
Yes. You can use the Purple Flea SDK as the tool implementation while using LiteLLM for the LLM calls. They're independent layers: LiteLLM manages the AI, Purple Flea manages the crypto.
Can I use LiteLLM's proxy server with Purple Flea tools?
Absolutely. Define your Purple Flea tools as custom functions in LiteLLM proxy config. This lets multiple applications share the same tool definitions through a central proxy.
Does Purple Flea have native integrations beyond LiteLLM?
Yes: dedicated packages for LangChain, CrewAI, MCP servers for Claude Desktop, and integrations for AutoGen, Google ADK, and more.

Any Model. Any Agent. One Crypto Stack.

Get your free Purple Flea API key and define your tools once โ€” then let LiteLLM run them with whatever LLM fits your use case and budget.