AI/ML · 2026-05-12 · 64 min read

I shipped an MCP server for crypto airdrops — install in 1 config line

By Weston G

Agent-Ready Web3 Intelligence: Exposing Curated Crypto Data via Hosted MCP Servers

Current Situation Analysis

The integration of Large Language Models (LLMs) into developer workflows has exposed a critical gap in Web3 data accessibility. While retail users and developers increasingly rely on LLMs for research and decision-making, the underlying data sources remain fragmented. Most crypto intelligence exists as static web directories, unstructured blog posts, or complex RPC endpoints that require significant boilerplate to query.

This disconnect creates three primary friction points:

  1. Context Switching: Users must manually navigate websites to gather data, then paste it into LLM prompts, breaking the flow of agent-assisted workflows.
  2. Data Staleness: Crypto campaigns, particularly airdrops and incentive programs, have dynamic lifecycles. Local tools or cached datasets quickly become obsolete, leading to hallucinations or outdated advice.
  3. Wallet Analysis Complexity: Determining eligibility requires querying multiple chains. A single wallet address may interact with Ethereum, Solana, Base, Linea, Arbitrum, Polygon, and BSC. Aggregating this data requires fan-out logic that most standard LLM clients cannot perform natively.

The industry often overlooks the value of curated, structured datasets exposed via the Model Context Protocol (MCP). While raw on-chain data is abundant, hand-vetted intelligence—filtered for scams, verified timelines, and actionable steps—provides higher signal-to-noise ratios. Exposing this curated data as MCP tools allows LLM clients to perform natural-language filtering, wallet-aware analysis, and tool composition without requiring users to manage API keys or local dependencies.

WOW Moment: Key Findings

The shift from static directories to hosted MCP servers fundamentally changes how agents interact with Web3 data. By hosting the endpoint and using a transport bridge, developers can decouple data freshness from client updates.

The following comparison highlights the operational advantages of a hosted MCP architecture over traditional approaches for dynamic Web3 data:

| Approach | Data Freshness | Client Maintenance | Multi-Chain Aggregation | Privacy Model |
|---|---|---|---|---|
| Hosted MCP Server | Real-time (server-side updates) | Zero (config only) | Native (server fan-out) | Stateless (no address logging) |
| Local Stdio Package | Stale until npm update | High (user must update) | Limited by local env | Local execution |
| REST API + Custom Script | Real-time | High (script maintenance) | Requires custom logic | Depends on implementation |
| Web Scraping | Brittle / unreliable | High (parser breaks) | Manual per-site | Risk of IP blocks |

Why this matters: A hosted MCP server with an mcp-remote bridge enables a "set and forget" model for end-users. The data refreshes on the server side (e.g., weekly freshness sweeps), so all connected clients receive the latest intelligence immediately. This is particularly valuable for datasets that change frequently, such as active campaign lists, where a local package would require constant redistribution. Additionally, server-side fan-out to multiple RPCs allows for complex wallet analysis that would be computationally expensive or network-restricted in a local client environment.

Core Solution

The architecture centers on a JSON-RPC 2.0 endpoint served over HTTP, exposing three distinct tools designed for agent consumption. The implementation leverages serverless functions to handle requests, ensuring scalability and zero infrastructure management.

Tool Definitions

The server exposes three tools with strict schemas to ensure reliable LLM invocation.

1. Campaign Discovery Tool: Retrieves a filtered list of active campaigns. Supports sorting and risk profiling.

```typescript
interface FetchCampaignsRequest {
  chain_filter?: string;      // e.g., "solana", "base", "ethereum"
  risk_profile?: "verified" | "unverified";
  sort_by?: "deadline" | "added" | "cost_floor";
  max_results?: number;
}

interface CampaignSummary {
  slug: string;
  project_name: string;
  chain: string;
  deadline: string;
  cost_floor_usd: number;
  risk_flag: "verified" | "unverified";
  official_url: string;
}
```

2. Campaign Detail Tool: Fetches comprehensive information for a specific campaign, including action steps and effort estimates.

```typescript
interface RetrieveDetailsRequest {
  campaign_slug: string;
}

interface CampaignDetails {
  slug: string;
  project_name: string;
  chain: string;
  description: string;
  action_steps: string[];
  weekly_effort_hours: number;
  cost_floor_usd: number;
  risk_notes: string;
  official_url: string;
}
```

3. Wallet Exposure Tool: Analyzes a wallet address across multiple chains to identify relevant campaigns. This tool performs a fan-out query to public RPCs.

```typescript
interface AnalyzeWalletRequest {
  wallet_address: string;     // EVM or Solana format
}

interface WalletAnalysisResult {
  address: string;
  active_chains: string[];    // Chains with detected activity
  relevant_campaigns: CampaignSummary[];
  analysis_timestamp: string;
}
```
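Before querying the dataset, the server should normalize these inputs so vague LLM invocations still produce sensible queries. The following sketch is illustrative rather than part of the published schema: the helper name `normalizeCampaignRequest`, the `"all"` sentinel, and the default/clamp values (`verified`, 10, 50) are all assumptions.

```typescript
// Illustrative input normalization for the campaign discovery tool:
// fills defaults for omitted optional fields and clamps max_results.
interface FetchCampaignsRequest {
  chain_filter?: string;
  risk_profile?: "verified" | "unverified";
  sort_by?: "deadline" | "added" | "cost_floor";
  max_results?: number;
}

function normalizeCampaignRequest(
  req: FetchCampaignsRequest
): Required<FetchCampaignsRequest> {
  return {
    // "all" is a sentinel meaning "no chain filter" (assumption)
    chain_filter: (req.chain_filter ?? "all").toLowerCase(),
    risk_profile: req.risk_profile ?? "verified", // safest default for agents
    sort_by: req.sort_by ?? "deadline",
    // Clamp to a sane 1..50 window so an LLM cannot request the whole dataset
    max_results: Math.min(Math.max(req.max_results ?? 10, 1), 50),
  };
}
```

Defaulting `risk_profile` to `"verified"` means an agent that forgets the parameter still gets the scam-filtered view, which matches the curated-data thesis above.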

Architecture Decisions

  • JSON-RPC 2.0 over HTTP: This protocol provides a standardized method for tool invocation and result retrieval. It supports batch requests and error handling, making it robust for agent workflows.
  • Hosted Endpoint: The server is deployed on a serverless platform (e.g., Vercel). This ensures the data is always current. The underlying dataset undergoes weekly freshness sweeps, and the hosted endpoint reflects these updates instantly.
  • mcp-remote Bridge: Most MCP clients expect a stdio transport. The mcp-remote bridge wraps the HTTP endpoint, allowing clients to connect via a simple command without implementing HTTP transport logic.
  • CORS-Open Policy: The endpoint allows cross-origin requests, enabling browser-based MCP clients and web integrations to interact with the server directly.
  • Privacy-First Design: The wallet analysis tool reads from public RPCs and does not log addresses. This ensures user privacy while providing actionable insights.
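To make the JSON-RPC framing concrete, the sketch below builds the envelope an MCP client sends for a tool call. The `tools/call` method name and `{ name, arguments }` params shape follow the MCP specification; the request-ID counter and helper name are illustrative.

```typescript
// Builds a JSON-RPC 2.0 envelope for an MCP tools/call request.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: { name: string; arguments: Record<string, unknown> };
}

let nextId = 1; // monotonically increasing request ID

function buildToolCall(
  tool: string,
  args: Record<string, unknown>
): JsonRpcRequest {
  return {
    jsonrpc: "2.0",
    id: nextId++,
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
}

// Example:
// buildToolCall("analyze_wallet_exposure", { wallet_address: "0xabc..." })
```

Because JSON-RPC 2.0 supports batching, several such envelopes can be sent as a single array, which is how a client could compose the discovery and detail tools in one round trip.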

Implementation Example

The following TypeScript snippet demonstrates the server-side handler structure for the wallet analysis tool, highlighting parallel RPC execution:

```typescript
import { JsonRpcServer } from 'mcp-jsonrpc';
import { rpcClients } from './rpc-clients';
import { campaignDatabase } from './campaigns';      // curated dataset module
import type { AnalyzeWalletRequest } from './types'; // tool schemas shown above

const server = new JsonRpcServer();

server.registerTool('analyze_wallet_exposure', async (params: AnalyzeWalletRequest) => {
  const { wallet_address } = params;

  // Fan-out to 7 public RPCs in parallel
  const chainPromises = [
    rpcClients.ethereum.checkActivity(wallet_address),
    rpcClients.base.checkActivity(wallet_address),
    rpcClients.solana.checkActivity(wallet_address),
    // ... other chains
  ];

  // allSettled tolerates individual chain failures without failing the batch
  const results = await Promise.allSettled(chainPromises);

  // flatMap keeps only fulfilled results with detected activity,
  // and narrows away the 'rejected' branch in a type-safe way
  const activeChains = results.flatMap(r =>
    r.status === 'fulfilled' && r.value.hasActivity ? [r.value.chain] : []
  );

  // Filter campaigns based on active chains
  const relevantCampaigns = campaignDatabase.filter(c =>
    activeChains.includes(c.chain)
  );

  return {
    address: wallet_address,
    active_chains: activeChains,
    relevant_campaigns: relevantCampaigns,
    analysis_timestamp: new Date().toISOString()
  };
});

export default server.handler();
```

Pitfall Guide

When implementing hosted MCP servers for Web3 data, several common pitfalls can degrade performance, security, or usability.

  1. Latency in Multi-Chain Fan-Out

    • Explanation: Querying multiple RPCs sequentially can cause timeouts, especially if some chains are slow or rate-limited.
    • Fix: Use Promise.allSettled to execute queries in parallel. Implement timeouts per chain and gracefully handle failures without blocking the entire response.
  2. Ambiguous Tool Schemas

    • Explanation: LLMs may struggle to invoke tools if parameter descriptions are vague or if optional parameters lack clear defaults.
    • Fix: Provide detailed descriptions for every parameter. Use enums for constrained inputs (e.g., risk_profile). Include examples in the schema documentation.
  3. Privacy Leaks via Logging

    • Explanation: Wallet analysis tools process sensitive addresses. Logging these addresses can violate user privacy and create compliance risks.
    • Fix: Ensure the server is stateless and does not persist addresses. Use ephemeral processing and disable access logs for sensitive endpoints.
  4. Transport Mismatch

    • Explanation: Clients may expect stdio transport, but the server only supports HTTP. This causes connection failures.
    • Fix: Use the mcp-remote bridge to wrap the HTTP endpoint. This allows clients to connect via npx mcp-remote <url> without custom transport code.
  5. Lack of Eligibility Depth

    • Explanation: A presence check (e.g., "wallet touched chain X") does not guarantee eligibility. Many campaigns require specific actions like volume thresholds or NFT holdings.
    • Fix: Clearly document the limitations of the analysis. Provide links to official eligibility criteria. Consider adding per-entry eligibility checks if public scoring APIs are available.
  6. CORS Misconfiguration

    • Explanation: Browser-based clients may be blocked if the server does not allow cross-origin requests.
    • Fix: Configure the server to return Access-Control-Allow-Origin: * headers. Ensure preflight OPTIONS requests are handled correctly.
  7. Stale Data Assumptions

    • Explanation: Users may assume the data is real-time, but the server might cache responses aggressively.
    • Fix: Implement cache invalidation strategies. For dynamic data, use short TTLs or server-side refresh triggers. Display a "last verified" timestamp in responses.
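The per-chain timeout fix from Pitfall 1 can be sketched as a small wrapper. The `withTimeout` helper below is illustrative; it pairs with `Promise.allSettled` so a timed-out chain surfaces as a `rejected` entry instead of stalling the whole fan-out.

```typescript
// Wraps a chain query with a per-chain timeout. On timeout the promise
// rejects, which Promise.allSettled records as 'rejected' without
// failing the rest of the batch.
function withTimeout<T>(promise: Promise<T>, ms: number, chain: string): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`${chain} RPC timed out after ${ms}ms`)),
      ms
    );
    promise.then(
      value => { clearTimeout(timer); resolve(value); },
      err => { clearTimeout(timer); reject(err); }
    );
  });
}

// Usage inside the fan-out (chain names as in the handler above):
// const results = await Promise.allSettled([
//   withTimeout(rpcClients.ethereum.checkActivity(addr), 3000, "ethereum"),
//   withTimeout(rpcClients.solana.checkActivity(addr), 3000, "solana"),
// ]);
```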

Production Bundle

Action Checklist

  • Validate Tool Schemas: Ensure all tool definitions include clear descriptions, types, and examples for LLM consumption.
  • Test Parallel Execution: Verify that multi-chain queries use parallel RPC calls with appropriate timeouts and error handling.
  • Audit Privacy Controls: Confirm that no wallet addresses or sensitive data are logged or persisted by the server.
  • Configure CORS: Set Access-Control-Allow-Origin: * and handle OPTIONS preflight requests.
  • Implement Freshness Checks: Add a mechanism to track and display the last verification date for data entries.
  • Monitor Latency: Set up alerts for high response times, especially during peak usage or RPC congestion.
  • Document Limitations: Clearly state what the tools can and cannot do (e.g., presence check vs. full eligibility).
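The freshness-check item above can be as simple as comparing a "last verified" timestamp against a TTL. A minimal sketch, where the `isStale` helper and the example 7-day TTL are assumptions, not part of the shipped server:

```typescript
// Flags a dataset entry as stale once its "last verified" timestamp
// exceeds a TTL in days. Clients can surface this alongside results.
function isStale(lastVerifiedIso: string, ttlDays: number, now: Date = new Date()): boolean {
  const ageMs = now.getTime() - new Date(lastVerifiedIso).getTime();
  return ageMs > ttlDays * 24 * 60 * 60 * 1000;
}

// Example: with a weekly sweep, entries older than 7 days are flagged.
// isStale(entry.last_verified, 7)
```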

Decision Matrix

| Scenario | Recommended Approach | Why | Cost Impact |
|---|---|---|---|
| Dynamic data with weekly updates | Hosted MCP Server | Ensures all clients receive updates instantly without manual intervention. | Low (serverless costs scale with usage) |
| Static reference data | Local Stdio Package | Reduces network latency and dependency on external endpoints. | Zero (bundled with client) |
| High-security internal tools | Local Stdio with Auth | Keeps data processing within the user's environment. | Medium (requires secure distribution) |
| Multi-chain wallet analysis | Hosted MCP Server | Leverages server-side RPC fan-out for comprehensive analysis. | Low (serverless RPC calls) |

Configuration Template

Add the following configuration to your MCP client (e.g., Claude Desktop, Cursor) to connect to the hosted server:

```json
{
  "mcpServers": {
    "web3-intelligence": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://your-mcp-endpoint.example.com/api/mcp"
      ]
    }
  }
}
```

Quick Start Guide

  1. Obtain Endpoint: Deploy the MCP server to your preferred serverless platform and note the HTTP endpoint URL.
  2. Add Configuration: Insert the JSON configuration block into your MCP client's settings file.
  3. Restart Client: Reload your MCP client to establish the connection via the mcp-remote bridge.
  4. Invoke Tool: Use natural language to query the tools. Example: "Show me verified airdrops on Solana with a deadline this week."
  5. Verify Response: Ensure the client receives structured JSON data and that the LLM can interpret and present the results accurately.
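To verify step 5 without a full MCP client, you can probe the endpoint directly with a `tools/list` request. This sketch assumes the placeholder URL from the config template and Node 18+ for the built-in `fetch`; the validation helper name is illustrative.

```typescript
// Smoke-tests the deployed endpoint: sends tools/list and checks that
// the response is a well-formed JSON-RPC result with a tools array.
const ENDPOINT = "https://your-mcp-endpoint.example.com/api/mcp";

// Structural check on a JSON-RPC tools/list response body.
function isValidToolsListResponse(body: unknown): boolean {
  const b = body as { jsonrpc?: string; result?: { tools?: unknown } };
  return b?.jsonrpc === "2.0" && Array.isArray(b?.result?.tools);
}

async function smokeTest(): Promise<string[]> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }),
  });
  const body = await res.json();
  if (!isValidToolsListResponse(body)) {
    throw new Error("Endpoint did not return a valid tools/list response");
  }
  return (body.result.tools as { name: string }[]).map(t => t.name);
}

// Invoke after deploying:
// smokeTest().then(names => console.log("Tools exposed:", names));
```

If the returned tool names match the three schemas above, the `mcp-remote` bridge and any connected client should work without further configuration.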