
Tokenomics Analysis for AI Agents: Evaluating DeFi Tokens

A systematic framework for autonomous agents to evaluate DeFi token quality: from supply mechanics and vesting schedules to protocol revenue, value accrual, and on-chain scoring.

At a glance: 8 scoring dimensions · FDV as the key valuation metric · on-chain data sources · the P/F ratio as the core revenue metric

1. Token Supply Mechanics: The Foundation of Valuation

Before any price analysis, an agent must understand the complete supply picture. Many retail investors focus on current circulating supply while ignoring the enormous token unlocks scheduled for future dates — a critical analytical gap.

Supply Types and Their Implications

- FDV: true valuation ceiling
- Circulating %: current float / max supply
- Unlock Rate: new supply entering monthly
- Inflation %: annual emission rate
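
The first three of these follow directly from raw supply figures (the unlock rate additionally requires a vesting schedule, covered in the next section). A minimal sketch with hypothetical numbers; the helper name is illustrative:

```python
from typing import Optional

def supply_metrics(price: float, circulating: float, total: float,
                   max_supply: Optional[float], annual_new_tokens: float) -> dict:
    """Headline supply metrics from raw figures (illustrative helper)."""
    cap = max_supply if max_supply is not None else total  # fall back when uncapped
    return {
        'fdv': price * cap,                                 # valuation ceiling
        'circulating_pct': circulating / cap * 100,         # current float
        'annual_inflation_pct': annual_new_tokens / circulating * 100,
    }

# Hypothetical token: $2.00, 400M circulating, 900M minted, 1B max, 40M new/yr
supply_metrics(2.0, 400e6, 900e6, 1e9, 40e6)
# {'fdv': 2000000000.0, 'circulating_pct': 40.0, 'annual_inflation_pct': 10.0}
```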

Supply Inflation and Emission Schedules

Inflationary tokens continuously dilute holders. An agent must model the annual inflation rate and compare it against expected protocol growth to determine whether the dilution is economically justified.

from dataclasses import dataclass, field
from typing import List, Dict, Optional
from datetime import datetime, date
import httpx
import asyncio

@dataclass
class TokenSupplyProfile:
    symbol: str
    current_price: float
    circulating_supply: float
    total_supply: float
    max_supply: Optional[float]   # None if uncapped
    annual_emission_rate: float    # new tokens per year as % of circulating
    locked_supply_pct: float       # % of total in lockups/vesting

    @property
    def market_cap(self) -> float:
        return self.current_price * self.circulating_supply

    @property
    def fdv(self) -> Optional[float]:
        if self.max_supply:
            return self.current_price * self.max_supply
        return self.current_price * self.total_supply

    @property
    def float_ratio(self) -> float:
        """Circulating / total. Low ratio = large future unlock pressure."""
        base = self.max_supply or self.total_supply
        return self.circulating_supply / base

    @property
    def fdv_to_mcap_ratio(self) -> Optional[float]:
        """High ratio = large future dilution pressure. >5 = red flag."""
        fdv = self.fdv
        if fdv and self.market_cap > 0:
            return fdv / self.market_cap
        return None
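
To make these properties concrete, consider a hypothetical token trading at $1.50 with 200M of a 1B max supply circulating; the same arithmetic the dataclass encodes gives:

```python
price, circulating, max_supply = 1.50, 200e6, 1e9

market_cap = price * circulating        # $300M actually priced in
fdv = price * max_supply                # $1.5B valuation ceiling
float_ratio = circulating / max_supply  # 0.20: 80% of supply still to come
fdv_to_mcap = fdv / market_cap          # 5.0: right at the red-flag threshold
```

A 5.0 ratio means four tokens will eventually enter circulation for every one trading today; demand must grow roughly fivefold just to hold price constant.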

2. Vesting Schedules and Unlock Events

Token vesting schedules define when locked tokens (held by teams, investors, foundations) become tradeable. Major unlocks are often significant bearish catalysts as early investors who paid fractions of current market price gain the ability to sell.

Types of Vesting Structures

Vesting Type | Structure | Risk Level | Red Flag?
Cliff + Linear | Lock X months, then linear release over Y months | Medium | Only if cliff very short
Pure Linear | Constant daily/monthly release from day 1 | Low-Medium | No
Cliff Dump | All tokens unlock on single date | Very High | Yes
No Vesting | Team/investors hold unlocked tokens | Extreme | Yes
Performance-gated | Unlocks tied to protocol milestones | Low | No

from datetime import date, timedelta

@dataclass
class VestingSchedule:
    category: str         # 'team', 'investors', 'ecosystem', etc.
    total_tokens: float
    cliff_months: int
    vesting_months: int   # after cliff
    start_date: date

    def cliff_end(self) -> date:
        return self.start_date + timedelta(days=self.cliff_months * 30)

    def monthly_unlock(self) -> float:
        if self.vesting_months == 0:
            return self.total_tokens   # cliff dump
        return self.total_tokens / self.vesting_months

    def unlocked_as_of(self, check_date: date) -> float:
        cliff_end = self.cliff_end()
        if check_date < cliff_end:
            return 0.0
        if self.vesting_months == 0:
            return self.total_tokens
        months_vested = min(
            (check_date - cliff_end).days / 30,
            self.vesting_months
        )
        return min(self.total_tokens, self.monthly_unlock() * months_vested)

    def unlock_pressure_score(self, next_months: int = 6) -> float:
        """Fraction of total tokens unlocking in next N months (0-1)."""
        today = date.today()
        future = today + timedelta(days=next_months * 30)
        now_unlocked = self.unlocked_as_of(today)
        future_unlocked = self.unlocked_as_of(future)
        return (future_unlocked - now_unlocked) / (self.total_tokens + 1e-10)
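
As a worked example with hypothetical numbers (a 120M-token team allocation, 12-month cliff, 24-month linear vesting from 2024-01-01), the same month-approximation arithmetic gives:

```python
from datetime import date, timedelta

total_tokens, cliff_months, vesting_months = 120e6, 12, 24
start = date(2024, 1, 1)
cliff_end = start + timedelta(days=cliff_months * 30)  # ~12 months of 30-day months

def unlocked(check: date) -> float:
    """Mirrors VestingSchedule.unlocked_as_of above."""
    if check < cliff_end:
        return 0.0
    months = min((check - cliff_end).days / 30, vesting_months)
    return min(total_tokens, total_tokens / vesting_months * months)

unlocked(date(2024, 6, 1))                 # 0.0: still inside the cliff
unlocked(cliff_end + timedelta(days=360))  # 60000000.0: half vested after ~12 months
```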
Unlock Red Flag

If a token's team/VC allocation represents more than 20% of total supply with a cliff ending in the next 90 days, this is a strong sell signal. Early investors typically hold tokens acquired at prices 100-1000x below the current market price.
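
This heuristic is easy to mechanize. A minimal sketch (function name and parameters are illustrative; the thresholds follow the rule above):

```python
from datetime import date, timedelta

def cliff_red_flag(allocation_tokens: float, total_supply: float,
                   cliff_end: date, window_days: int = 90,
                   threshold: float = 0.20) -> bool:
    """True when a team/VC allocation above 20% of supply has a cliff
    expiring within the next 90 days."""
    big_allocation = allocation_tokens / total_supply > threshold
    cliff_imminent = date.today() <= cliff_end <= date.today() + timedelta(days=window_days)
    return big_allocation and cliff_imminent
```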

3. Fully Diluted Valuation Analysis

FDV (Fully Diluted Valuation) is the most important single number in tokenomics analysis. It answers the question: "What is the total market cap if every token that will ever exist were circulating right now?"

FDV/Market Cap Ratio Interpretation

A token with a market cap of $500M and an FDV of $10B has a 20x ratio. This means 95% of the token supply has not yet entered circulation. Buyers at current prices are implicitly betting that demand will grow 20x faster than supply unlocks.

FDV/MC Ratio | Interpretation | Agent Action
< 1.5x | Most supply already circulating — low dilution risk | Favorable signal
1.5x – 3x | Moderate future unlocks — acceptable for strong protocols | Analyze unlock timeline
3x – 7x | Significant dilution pressure ahead | Require strong fundamentals to justify
> 7x | Extreme dilution risk — mostly unreleased supply | Avoid unless extraordinary catalyst
Uncapped | Permanent inflation — no hard cap | Model emission rate vs revenue growth
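
The bands above reduce to a small classifier (label strings are illustrative):

```python
from typing import Optional

def classify_fdv_mc(ratio: Optional[float]) -> str:
    """Map an FDV/MC ratio onto the interpretation bands in the table above."""
    if ratio is None:
        return 'uncapped'             # no hard cap: model emissions vs revenue instead
    if ratio < 1.5:
        return 'low_dilution'
    if ratio < 3:
        return 'moderate_unlocks'
    if ratio < 7:
        return 'significant_dilution'
    return 'extreme_dilution'

classify_fdv_mc(20.0)   # 'extreme_dilution': the 20x example above
```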

4. Protocol Revenue vs Token Price: The P/F Ratio

The Price-to-Fees ratio (P/F) is the DeFi equivalent of a stock's P/E ratio. It measures how much the market is willing to pay per dollar of protocol fee revenue. Unlike earnings, protocol fees are often fully verifiable on-chain in real time.

Annualized Fee Revenue Calculation

import httpx
import asyncio
from datetime import datetime, timedelta

async def get_protocol_fees(protocol_slug: str, days: int = 30) -> dict:
    """Fetch protocol fee data from DeFiLlama fees API."""
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            f"https://api.llama.fi/summary/fees/{protocol_slug}",
            params={"dataType": "dailyFees"}
        )
        data = resp.json()

    daily_fees = data.get('totalDataChart', [])
    recent = [d[1] for d in daily_fees[-days:]]

    avg_daily = sum(recent) / len(recent) if recent else 0
    annualized = avg_daily * 365

    return {
        'avg_daily_fees_usd': avg_daily,
        'annualized_fees_usd': annualized,
        '30d_total': sum(recent),
        'data_points': len(recent)
    }

def price_to_fees_ratio(market_cap: float, annualized_fees: float) -> Optional[float]:
    """P/F ratio. Below 25 generally indicates fair value for strong protocols."""
    if annualized_fees <= 0:
        return None
    return market_cap / annualized_fees

def interpret_pf(pf: Optional[float]) -> str:
    if pf is None: return "no_revenue"
    if pf < 10:    return "undervalued"
    if pf < 25:    return "fair_value"
    if pf < 50:    return "growth_premium"
    if pf < 100:   return "speculative"
    return "detached_from_revenue"
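
Plugging hypothetical numbers through the same arithmetic: a protocol earning $120k/day in fees against a $900M market cap:

```python
avg_daily_fees = 120_000           # 30-day average, USD
market_cap = 900e6

annualized = avg_daily_fees * 365  # 43,800,000 USD/yr
pf = market_cap / annualized       # ~20.5: lands in the "fair_value" band
```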

5. Value Accrual Mechanisms

Owning a protocol's token is only valuable if the token actually captures value from protocol success. Many tokens have high valuations but weak or nonexistent value accrual mechanisms.

The Value Accrual Spectrum

Value Accrual Scoring Rubric

- Buyback + Burn: 10/10
- Revenue Share to Stakers: 8/10
- Governance + Treasury: 5/10
- Liquidity Mining Only: 2/10
- Pure Speculation: 0/10

6. Governance Token Evaluation

Many DeFi protocols use governance tokens to distribute decision-making. But the value of governance rights depends heavily on how real that power is in practice. An agent must distinguish genuine governance from cosmetic governance.

Governance Quality Assessment Dimensions

async def assess_governance(protocol_slug: str) -> dict:
    """Fetch and score governance data from Snapshot + on-chain sources."""
    async with httpx.AsyncClient() as client:
        # Snapshot API for off-chain governance stats
        snapshot_resp = await client.post(
            "https://hub.snapshot.org/graphql",
            json={"query": f"""
                query {{
                  space(id: "{protocol_slug}.eth") {{
                    proposalsCount
                    votesCount
                    followersCount
                    strategies {{ name }}
                  }}
                }}
            """}
        )
        gov_data = snapshot_resp.json().get('data', {}).get('space', {})

    proposals = gov_data.get('proposalsCount', 0)
    votes = gov_data.get('votesCount', 0)
    participation_per_prop = votes / max(proposals, 1)

    score = 0
    if proposals > 20:  score += 2          # Active governance history
    if participation_per_prop > 100: score += 3  # High engagement per proposal
    if participation_per_prop > 500: score += 2  # Very high engagement

    return {
        'proposals': proposals,
        'total_votes': votes,
        'avg_votes_per_proposal': participation_per_prop,
        'governance_score': score,
        'assessment': 'active' if score >= 5 else 'weak'
    }

7. Full TokenomicsAnalyzer: On-Chain Scoring System

The complete agent integrates all the above dimensions into a single, composable scoring engine. Each dimension is scored 0-10 and weighted to produce a final composite score.

class TokenomicsAnalyzer:
    """
    Autonomous agent for evaluating DeFi token tokenomics quality.
    Produces a composite score (0-100) across 8 dimensions.
    Integrates with DeFiLlama, Snapshot, and on-chain RPC data.
    """

    WEIGHTS = {
        'fdv_ratio':       0.15,   # FDV/MC — dilution risk
        'float_ratio':     0.10,   # circulating/total
        'unlock_pressure': 0.20,   # near-term unlock schedule
        'pf_ratio':        0.20,   # price-to-fees valuation
        'value_accrual':   0.15,   # fee buyback vs pure governance
        'emission_rate':   0.08,   # annual inflation %
        'governance':      0.07,   # governance activity
        'revenue_trend':   0.05,   # fees growing or shrinking
    }

    def __init__(self, rpc_url: Optional[str] = None):
        self.rpc_url = rpc_url
        self.client = httpx.AsyncClient(timeout=20)

    def score_fdv_ratio(self, ratio: Optional[float]) -> float:
        if ratio is None: return 5.0
        if ratio <= 1.5:  return 10.0
        if ratio <= 3.0:  return 7.0
        if ratio <= 7.0:  return 4.0
        if ratio <= 15.0: return 2.0
        return 0.0

    def score_pf_ratio(self, pf: Optional[float]) -> float:
        if pf is None: return 0.0  # no revenue = worst score
        if pf < 10:    return 10.0
        if pf < 25:    return 8.0
        if pf < 50:    return 5.0
        if pf < 100:   return 3.0
        return 1.0

    def score_unlock_pressure(self, schedules: List[VestingSchedule]) -> float:
        """Score based on total unlock pressure over next 6 months."""
        total_tokens = sum(s.total_tokens for s in schedules) + 1e-10
        pressure = sum(s.unlock_pressure_score(6) * s.total_tokens for s in schedules) / total_tokens
        if pressure < 0.02:  return 10.0
        if pressure < 0.05:  return 7.0
        if pressure < 0.10:  return 4.0
        if pressure < 0.20:  return 2.0
        return 0.0

    def score_value_accrual(self, accrual_type: str) -> float:
        mapping = {
            'buyback_burn': 10.0,
            'revenue_share': 8.0,
            'governance_treasury': 5.0,
            'liquidity_mining': 2.0,
            'none': 0.0
        }
        return mapping.get(accrual_type, 2.0)

    def score_emission_rate(self, annual_pct: float) -> float:
        if annual_pct <= 0:    return 10.0   # deflationary
        if annual_pct <= 2:    return 9.0
        if annual_pct <= 5:    return 7.0
        if annual_pct <= 10:   return 5.0
        if annual_pct <= 25:   return 3.0
        return 0.0

    async def analyze(self, supply: TokenSupplyProfile, schedules: List[VestingSchedule],
                       protocol_slug: str, accrual_type: str) -> dict:
        """Run full tokenomics analysis and return composite score."""
        fees = await get_protocol_fees(protocol_slug)
        gov = await assess_governance(protocol_slug)
        pf = price_to_fees_ratio(supply.market_cap, fees['annualized_fees_usd'])

        scores = {
            'fdv_ratio':       self.score_fdv_ratio(supply.fdv_to_mcap_ratio),
            'float_ratio':     min(supply.float_ratio * 10, 10.0),
            'unlock_pressure': self.score_unlock_pressure(schedules),
            'pf_ratio':        self.score_pf_ratio(pf),
            'value_accrual':   self.score_value_accrual(accrual_type),
            'emission_rate':   self.score_emission_rate(supply.annual_emission_rate),
            'governance':      min(gov['governance_score'] / 7 * 10, 10.0),
            'revenue_trend':   5.0   # placeholder: implement trailing 30d vs 90d comparison
        }

        composite = sum(scores[k] * self.WEIGHTS[k] * 10 for k in scores)

        return {
            'symbol': supply.symbol,
            'composite_score': round(composite, 1),
            'dimension_scores': scores,
            'market_cap': supply.market_cap,
            'fdv': supply.fdv,
            'fdv_to_mcap': supply.fdv_to_mcap_ratio,
            'annualized_fees': fees['annualized_fees_usd'],
            'pf_ratio': pf,
            'pf_interpretation': interpret_pf(pf),
            'governance': gov,
            'recommendation': self._recommendation(composite)
        }

    def _recommendation(self, score: float) -> str:
        if score >= 75: return "STRONG_FUNDAMENTAL_BUY"
        if score >= 60: return "ACCEPTABLE_FUNDAMENTALS"
        if score >= 45: return "WEAK_FUNDAMENTALS_CAUTION"
        return "AVOID_POOR_TOKENOMICS"
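
The weighting arithmetic inside analyze() is a pure function and can be checked in isolation. A sketch using the WEIGHTS table above with hypothetical dimension scores:

```python
WEIGHTS = {
    'fdv_ratio': 0.15, 'float_ratio': 0.10, 'unlock_pressure': 0.20,
    'pf_ratio': 0.20, 'value_accrual': 0.15, 'emission_rate': 0.08,
    'governance': 0.07, 'revenue_trend': 0.05,
}

def composite(scores: dict) -> float:
    """Weighted sum of 0-10 dimension scores, scaled to 0-100."""
    return round(sum(scores[k] * WEIGHTS[k] * 10 for k in WEIGHTS), 1)

composite({k: 10.0 for k in WEIGHTS})   # 100.0: perfect scores max out the scale
composite({k: 5.0 for k in WEIGHTS})    # 50.0: weights sum to 1.0
```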

8. Common Tokenomics Red Flags and Green Flags

Beyond the quantitative scoring, an agent must pattern-match against known red and green flag patterns in tokenomics design. These are signal patterns identified across hundreds of DeFi protocols.

Red Flags

- Team/VC cliff ending within 90 days on an allocation above 20% of supply
- FDV/MC ratio above 7x with most supply still locked
- No vesting for insiders, or a single-date cliff dump
- Value accrual limited to liquidity mining emissions or pure speculation

Green Flags

- FDV/MC below 1.5x: most supply already circulating
- Buyback-and-burn or revenue share to stakers
- Performance-gated unlocks tied to protocol milestones
- P/F ratio under 25 backed by growing fee revenue

Agent Integration Note

The TokenomicsAnalyzer can be integrated as a pre-trade filter in any position-taking agent. Before allocating capital to a new DeFi token, run the analyzer and require a minimum composite score of 55/100 before proceeding. Tokens below this threshold should require extraordinary catalyst evidence to justify override.
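
As a sketch, the filter is a one-line gate; the 55/100 threshold follows the note above, and the override flag is a hypothetical parameter representing catalyst evidence:

```python
def pretrade_gate(composite_score: float, catalyst_override: bool = False,
                  min_score: float = 55.0) -> bool:
    """Allow capital allocation only above the minimum tokenomics score,
    unless extraordinary catalyst evidence is explicitly asserted."""
    return composite_score >= min_score or catalyst_override
```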

9. On-Chain Data Sources for Autonomous Agents

Token analysis is only as good as its underlying data. An autonomous agent needs reliable, machine-readable on-chain and protocol data sources that don't require human authentication or CAPTCHA resolution.

Recommended Data APIs

Data Type | API Endpoint | Free Tier | Agent Use Case
Protocol fees/revenue | api.llama.fi/summary/fees/{protocol} | Yes (unlimited) | P/F ratio calculation
TVL data | api.llama.fi/tvl/{protocol} | Yes | Protocol size/growth tracking
Token prices | api.coingecko.com/api/v3/simple/price | Yes (rate-limited) | Market cap calculation
Token supply | api.coingecko.com/api/v3/coins/{id} | Yes | Circulating / FDV metrics
Governance votes | hub.snapshot.org/graphql | Yes | Participation rates
Unlock schedules | token.unlocks.app/api | Limited | Vesting event calendar
On-chain transfers | api.etherscan.io/api | 5 req/s free | Whale wallet tracking

class OnChainDataFetcher:
    """Aggregates tokenomics data from multiple public APIs."""

    def __init__(self):
        self.client = httpx.AsyncClient(timeout=15)
        self._cache = {}

    async def get_coingecko_data(self, token_id: str) -> dict:
        """Fetch token supply and market data from CoinGecko."""
        url = f"https://api.coingecko.com/api/v3/coins/{token_id}"
        params = {"localization": "false", "tickers": "false",
                  "community_data": "false", "developer_data": "false"}
        resp = await self.client.get(url, params=params)
        data = resp.json()
        mkt = data.get('market_data', {})

        return {
            'symbol': data['symbol'].upper(),
            'price_usd': mkt.get('current_price', {}).get('usd', 0),
            'market_cap': mkt.get('market_cap', {}).get('usd', 0),
            'fdv': mkt.get('fully_diluted_valuation', {}).get('usd'),
            'circulating_supply': mkt.get('circulating_supply', 0),
            'total_supply': mkt.get('total_supply', 0),
            'max_supply': mkt.get('max_supply'),
        }

    async def get_protocol_tvl_trend(self, slug: str, days: int = 90) -> dict:
        """Fetch TVL trend to evaluate protocol health/growth."""
        resp = await self.client.get(f"https://api.llama.fi/protocol/{slug}")
        data = resp.json()
        tvl_series = data.get('tvl', [])[-days:]

        if len(tvl_series) < 2:
            return {'trend': 'unknown', 'current_tvl': 0}

        current_tvl = tvl_series[-1]['totalLiquidityUSD']
        start_tvl = tvl_series[0]['totalLiquidityUSD']
        growth_pct = (current_tvl - start_tvl) / (abs(start_tvl) + 1e-10) * 100

        return {
            'current_tvl': current_tvl,
            'start_tvl': start_tvl,
            'growth_pct_90d': growth_pct,
            'trend': 'growing' if growth_pct > 10 else ('declining' if growth_pct < -10 else 'stable')
        }
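
The ±10% trend bands can be exercised offline on a synthetic TVL series (hypothetical numbers), without touching the API:

```python
def classify_tvl_trend(tvl_usd: list) -> str:
    """Same growth bands as get_protocol_tvl_trend above, applied to a raw series."""
    if len(tvl_usd) < 2:
        return 'unknown'
    growth_pct = (tvl_usd[-1] - tvl_usd[0]) / (abs(tvl_usd[0]) + 1e-10) * 100
    return 'growing' if growth_pct > 10 else ('declining' if growth_pct < -10 else 'stable')

classify_tvl_trend([100e6, 105e6, 130e6])   # 'growing': +30% over the window
```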

10. Building a Token Watchlist Agent

Rather than analyzing tokens on-demand, a production agent should maintain a continuously updated watchlist of tokens under surveillance — running the full TokenomicsAnalyzer nightly and flagging tokens whose scores change significantly.

import logging

class TokenWatchlistAgent:
    """
    Nightly batch agent that scores a watchlist of tokens and
    alerts when fundamental conditions change materially.
    """
    def __init__(self, watchlist: List[dict], alert_threshold: float = 10.0):
        self.watchlist = watchlist   # [{coingecko_id, llama_slug, accrual_type}, ...]
        self.threshold = alert_threshold
        self.history: Dict[str, list] = {}
        self.analyzer = TokenomicsAnalyzer()
        self.fetcher = OnChainDataFetcher()
        self.logger = logging.getLogger('TokenWatchlistAgent')

    async def score_token(self, token_config: dict) -> dict:
        cg_data = await self.fetcher.get_coingecko_data(token_config['coingecko_id'])
        supply = TokenSupplyProfile(
            symbol=cg_data['symbol'],
            current_price=cg_data['price_usd'],
            circulating_supply=cg_data['circulating_supply'],
            total_supply=cg_data['total_supply'],
            max_supply=cg_data['max_supply'],
            annual_emission_rate=token_config.get('annual_emission_pct', 5.0),
            locked_supply_pct=token_config.get('locked_pct', 0.3)
        )
        schedules = token_config.get('vesting_schedules', [])
        result = await self.analyzer.analyze(
            supply, schedules, token_config['llama_slug'], token_config['accrual_type']
        )
        return result

    async def run_nightly(self):
        self.logger.info("Starting nightly tokenomics scan...")
        alerts = []
        for token in self.watchlist:
            try:
                result = await self.score_token(token)
                sym = result['symbol']
                score = result['composite_score']

                if sym in self.history and self.history[sym]:
                    prev_score = self.history[sym][-1]['composite_score']
                    delta = score - prev_score
                    if abs(delta) >= self.threshold:
                        alerts.append({
                            'symbol': sym, 'prev': prev_score,
                            'current': score, 'delta': delta,
                            'direction': 'improved' if delta > 0 else 'deteriorated'
                        })

                if sym not in self.history:
                    self.history[sym] = []
                self.history[sym].append(result)

            except Exception as e:
                self.logger.error(f"Failed to score {token}: {e}")

        self.logger.info(f"Scan complete. {len(alerts)} material changes detected.")
        for alert in alerts:
            self.logger.warning(f"ALERT {alert['symbol']}: {alert['direction']} by {abs(alert['delta']):.1f} pts")
        return alerts
Analyst Note

Tokenomics quality is not static. A protocol that scores 70/100 today may score 40/100 six months from now due to a large team unlock or declining fee revenue. Run the watchlist agent nightly and treat score changes as primary signals — not just the absolute level.

11. Integrating Tokenomics Scores with Trading Decisions

A tokenomics score on its own is a fundamental filter, not a trading signal. The full workflow requires combining the fundamental score with technical momentum signals to produce high-conviction trade setups.

Four-Quadrant Decision Matrix

Tokenomics Score | Price Momentum | Recommended Action | Position Size
High (>65) | Bullish | Strong long — fundamentals and technicals aligned | Full size
High (>65) | Bearish | Accumulate on weakness — fundamentals support dip-buying | Half size, DCA
Low (<45) | Bullish | Tactical long only — momentum play without fundamental support | Quarter size, tight stop
Low (<45) | Bearish | Avoid or short — poor fundamentals accelerate downtrend | Zero or short
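
The matrix reduces to a pure decision function; mid-range scores (45-65) fall into the tactical bucket by default. A sketch with illustrative action labels:

```python
def quadrant_action(score: float, momentum: str) -> tuple:
    """Map (tokenomics score, momentum) to (action, position size fraction)."""
    if score > 65 and momentum == 'bullish':
        return ('strong_long', 1.0)     # fundamentals and technicals aligned
    if score > 65 and momentum == 'bearish':
        return ('accumulate_dca', 0.5)  # buy weakness with fundamental support
    if score < 45 and momentum == 'bearish':
        return ('avoid_or_short', 0.0)
    return ('tactical_only', 0.25)      # momentum without fundamental support

quadrant_action(72, 'bullish')   # ('strong_long', 1.0)
```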

Signal Pipeline Architecture

import pandas as pd

class TokenSignalPipeline:
    """
    Combines tokenomics fundamental scores with technical signals
    to generate position sizing recommendations for autonomous agents.
    """
    def __init__(self, analyzer: TokenomicsAnalyzer, trading_api_url: str, api_key: str):
        self.analyzer = analyzer
        self.trading_url = trading_api_url
        self.client = httpx.AsyncClient(headers={'Authorization': f'Bearer {api_key}'})

    async def get_momentum_signal(self, symbol: str) -> str:
        """Simple momentum: price vs 20-day and 50-day EMAs."""
        resp = await self.client.get(
            f'{self.trading_url}/candles/{symbol}',
            params={'interval': '1d', 'limit': 60}
        )
        closes = pd.Series([c[4] for c in resp.json()])
        ema20 = closes.ewm(span=20).mean().iloc[-1]
        ema50 = closes.ewm(span=50).mean().iloc[-1]
        price = closes.iloc[-1]

        if price > ema20 > ema50:
            return 'bullish'
        elif price < ema20 < ema50:
            return 'bearish'
        return 'neutral'

    async def generate_signal(self, token_config: dict, symbol: str,
                               max_capital: float) -> dict:
        """Combine fundamental and technical signals into a position recommendation."""
        fundamental = await self.analyzer.analyze(
            token_config['supply'], token_config['schedules'],
            token_config['llama_slug'], token_config['accrual_type']
        )
        momentum = await self.get_momentum_signal(symbol)

        score = fundamental['composite_score']
        strong_fundamentals = score >= 65
        weak_fundamentals = score < 45

        if strong_fundamentals and momentum == 'bullish':
            size_fraction = 1.0
            action = 'long'
            conviction = 'high'
        elif strong_fundamentals and momentum == 'bearish':
            size_fraction = 0.4
            action = 'long_dca'
            conviction = 'medium'
        elif weak_fundamentals and momentum == 'bearish':
            size_fraction = 0
            action = 'avoid'
            conviction = 'low'
        else:
            size_fraction = 0.25
            action = 'tactical_only'
            conviction = 'low'

        return {
            'symbol': symbol,
            'action': action,
            'size_usd': max_capital * size_fraction,
            'conviction': conviction,
            'fundamental_score': score,
            'momentum': momentum,
            'recommendation': fundamental['recommendation']
        }
