MEV Bot Analysis - November 9, 2025

Executive Summary

Bot Status: RUNNING (Container: mev-bot-dev-master-dev)
Health: 🟡 OPERATIONAL but DEGRADED due to severe rate limiting
Primary Issue: Excessive 429 rate limit errors from the public RPC endpoint

Current Status

Container Health

Container: mev-bot-dev-master-dev
Status: Up 7 minutes (healthy)
Branch: master-dev
Ports: 8080:8080, 9090:9090
Image: localhost/mev-bot:dev-master-dev

Core Services Status

  • MEV Bot Started Successfully
  • Arbitrage Service Running
  • Arbitrage Detection Engine Active
  • Metrics Server Running (port 9090)
  • Block Processing Active
  • Pool Discovery Working
  • ⚠️ RPC Connection SEVERELY RATE LIMITED

Issues Identified

🔴 CRITICAL: RPC Rate Limiting

Severity: CRITICAL
Impact: HIGH - Degraded performance, missed opportunities

Details:

  • 2,354 instances of "429 Too Many Requests" errors in 7 minutes
  • Average: ~5.6 rate limit errors per second
  • RPC Endpoint: https://arb1.arbitrum.io/rpc (public, free tier)

Error Examples:

[ERROR] Failed to get L2 block 398369920: 429 Too Many Requests
[DEBUG] Registry 0x0000000022D53366457F9d5E68Ec105046FC4383 failed: 429 Too Many Requests
[DEBUG] Batch fetch attempt 1 failed with transient error: 429 Too Many Requests

Root Cause:

  1. Using public RPC endpoint with very strict rate limits
  2. Bot configured for 5 requests/second, but the public endpoint allows fewer
  3. Concurrent queries to multiple registries (Curve, Uniswap, etc.)
  4. Batch fetching generates multiple parallel requests

🟡 MEDIUM: Configuration Mismatch

Current config.dev.yaml settings:

arbitrum:
  rpc_endpoint: "https://arb1.arbitrum.io/rpc"
  ws_endpoint: ""
  rate_limit:
    requests_per_second: 5    # Too high for public endpoint
    max_concurrent: 3
    burst: 10

Current .env settings:

# Expected to hold the premium Chainstack endpoint, but it is not being used
ARBITRUM_RPC_ENDPOINT=https://arb1.arbitrum.io/rpc
# Premium endpoint appears to be commented out or unset

🟡 MEDIUM: Batch Fetch Failures

Details:

  • ~200 instances of "Failed to fetch batch 0-1: batch fetch V3 data failed after 3 attempts"
  • Pools failing: Non-standard contracts and new/untested pools
  • Blacklist growing: 907 total blacklisted pools

Recommendations

1. 🔴 IMMEDIATE: Switch to Premium RPC Endpoint

Action: Use the Chainstack premium endpoint from .env

Current .env has:

ARBITRUM_RPC_ENDPOINT=https://arb1.arbitrum.io/rpc
ARBITRUM_WS_ENDPOINT=

Check whether a premium endpoint is available in the environment or secrets before switching.

Implementation:

# config.dev.yaml
arbitrum:
  rpc_endpoint: "${CHAINSTACK_RPC_ENDPOINT:-https://arb1.arbitrum.io/rpc}"

2. 🟡 URGENT: Reduce Rate Limits

Action: Configure conservative rate limits for public endpoint

Implementation:

# config.dev.yaml - for public endpoint
arbitrum:
  rate_limit:
    requests_per_second: 2    # Reduced from 5
    max_concurrent: 1         # Reduced from 3
    burst: 3                  # Reduced from 10
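
These three knobs map naturally onto a token bucket plus a concurrency semaphore. A minimal sketch of how a Go client could enforce them (assuming the golang.org/x/time/rate package; the limitedClient wrapper is illustrative, not the bot's actual client):

package rpc

import (
    "context"

    "golang.org/x/time/rate"
)

// limitedClient enforces requests_per_second/burst via a token bucket
// and max_concurrent via a buffered-channel semaphore.
type limitedClient struct {
    limiter *rate.Limiter
    sem     chan struct{}
}

func newLimitedClient(rps float64, burst, maxConcurrent int) *limitedClient {
    return &limitedClient{
        limiter: rate.NewLimiter(rate.Limit(rps), burst),
        sem:     make(chan struct{}, maxConcurrent),
    }
}

// do blocks until both a rate token and a concurrency slot are available.
func (c *limitedClient) do(ctx context.Context, call func(context.Context) error) error {
    if err := c.limiter.Wait(ctx); err != nil {
        return err
    }
    c.sem <- struct{}{}        // acquire slot
    defer func() { <-c.sem }() // release slot
    return call(ctx)
}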

3. 🟡 URGENT: Add Fallback RPC Endpoints

Action: Configure multiple fallback RPC endpoints

Implementation:

fallback_endpoints:
  - url: "https://arbitrum-rpc.publicnode.com"
    rate_limit:
      requests_per_second: 1
      max_concurrent: 1
      burst: 2
  - url: "https://arb-mainnet.g.alchemy.com/v2/demo"
    rate_limit:
      requests_per_second: 1
      max_concurrent: 1
      burst: 2
  - url: "https://1rpc.io/arb"
    rate_limit:
      requests_per_second: 1
      max_concurrent: 1
      burst: 2
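
When the primary endpoint starts returning 429s, the client should walk this list rather than hammering the same URL. A minimal failover sketch in Go, building on the limitedClient sketch above (the isRateLimited helper is a stand-in for however the bot actually classifies 429 errors):

package rpc

import (
    "context"
    "errors"
    "fmt"
)

var errTooManyRequests = errors.New("429 Too Many Requests")

// endpoint pairs an RPC URL with its own rate limiter.
type endpoint struct {
    url    string
    client *limitedClient
}

// callWithFailover tries the primary endpoint first (eps[0]) and walks
// the fallback list whenever a call fails with a rate-limit error.
func callWithFailover(ctx context.Context, eps []endpoint,
    call func(ctx context.Context, url string) error) error {
    var lastErr error
    for _, ep := range eps {
        err := ep.client.do(ctx, func(ctx context.Context) error {
            return call(ctx, ep.url)
        })
        if err == nil {
            return nil
        }
        lastErr = err
        if !isRateLimited(err) {
            return err // non-429 errors should not trigger failover
        }
    }
    return fmt.Errorf("all endpoints rate limited: %w", lastErr)
}

func isRateLimited(err error) bool {
    return errors.Is(err, errTooManyRequests)
}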

4. 🟢 OPTIMIZATION: Implement Exponential Backoff

Action: Enhance retry logic with exponential backoff

Current: Fixed retry delays (1s, 2s, 3s)
Recommended: Exponential backoff (1s, 2s, 4s, 8s, 16s)
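
A minimal sketch of that schedule in Go (the retryWithBackoff helper is illustrative; jitter is added because synchronized retries tend to re-trigger the rate limit):

package rpc

import (
    "context"
    "math/rand"
    "time"
)

// retryWithBackoff retries op with delays 1s, 2s, 4s, 8s, 16s, plus up
// to 50% random jitter to avoid synchronized retry bursts.
func retryWithBackoff(ctx context.Context, attempts int, op func() error) error {
    delay := time.Second
    var err error
    for i := 0; i < attempts; i++ {
        if err = op(); err == nil {
            return nil
        }
        if i == attempts-1 {
            break // no point sleeping after the final attempt
        }
        jitter := time.Duration(rand.Int63n(int64(delay / 2)))
        select {
        case <-time.After(delay + jitter):
        case <-ctx.Done():
            return ctx.Err()
        }
        delay *= 2 // 1s -> 2s -> 4s -> 8s -> 16s
    }
    return err
}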

5. 🟢 OPTIMIZATION: Cache Pool Data More Aggressively

Action: Increase cache expiration times

Implementation:

uniswap:
  cache:
    enabled: true
    expiration: 600       # Increased from 300s to 10 minutes
    max_size: 2000        # Increased from 1000
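
The saving comes from serving repeat pool lookups from memory until the entry expires. A minimal TTL-cache sketch in Go (a generic map with per-entry deadlines; illustrative only, not the bot's actual cache):

package cache

import (
    "sync"
    "time"
)

// ttlCache stores values with a per-entry expiration deadline.
type ttlCache[V any] struct {
    mu  sync.Mutex
    ttl time.Duration
    m   map[string]entry[V]
}

type entry[V any] struct {
    val     V
    expires time.Time
}

func newTTLCache[V any](ttl time.Duration) *ttlCache[V] {
    return &ttlCache[V]{ttl: ttl, m: make(map[string]entry[V])}
}

// get returns the cached value unless it is missing or expired.
func (c *ttlCache[V]) get(key string) (V, bool) {
    c.mu.Lock()
    defer c.mu.Unlock()
    e, ok := c.m[key]
    if !ok || time.Now().After(e.expires) {
        var zero V
        return zero, false
    }
    return e.val, true
}

// put stores a value and stamps it with a fresh deadline.
func (c *ttlCache[V]) put(key string, val V) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.m[key] = entry[V]{val: val, expires: time.Now().Add(c.ttl)}
}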

6. 🟢 ENHANCEMENT: Reduce Curve Registry Queries

Action: Disable or limit Curve pool queries for now

Since Curve queries are generating many 429 errors and most Arbitrum volume is on Uniswap/Camelot, consider reducing Curve registry checks.
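
One low-effort way to do this is a config gate around registry discovery. A hypothetical sketch in Go (the Registry interface and the enabled map are illustrative, not the bot's actual types):

package discovery

import "context"

// Registry is a minimal stand-in for a DEX pool registry source.
type Registry interface {
    Name() string
    Query(ctx context.Context) error
}

// queryEnabled skips registries that are switched off in config,
// e.g. disabling "curve" while rate limits are being tuned.
func queryEnabled(ctx context.Context, regs []Registry, enabled map[string]bool) {
    for _, r := range regs {
        if !enabled[r.Name()] {
            continue // skipped: avoids the 429-prone Curve registry calls
        }
        _ = r.Query(ctx) // errors handled by the caller's retry logic
    }
}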

Performance Metrics

Block Processing

  • Blocks Processed: ~1,000+ blocks in 7 minutes
  • Processing Rate: ~2.4 blocks/second
  • Transaction Volume: 6-12 transactions per block
  • DEX Transactions: Minimal DEX activity detected

Error Rates

  • Rate Limit Errors: 2,354 (avg 5.6/second)
  • Batch Fetch Failures: ~200
  • Pool Blacklisted: 907 total
  • Success Rate: Low due to rate limiting

Immediate Action Plan

Priority 1: Fix Rate Limiting

# 1. Check for premium endpoint credentials
grep -iE 'chainstack|alchemy|infura' .env

# 2. Update config with conservative limits
# Edit config/config.dev.yaml

# 3. Restart container
./scripts/dev-env.sh rebuild master-dev

Priority 2: Monitor Improvements

# Watch for 429 errors
./scripts/dev-env.sh logs -f | grep "429"

# Check error rate
podman logs mev-bot-dev-master-dev 2>&1 | grep "429" | wc -l

Priority 3: Optimize Configuration

  • Reduce concurrent requests
  • Increase cache times
  • Add more fallback endpoints
  • Implement smarter retry logic

Positive Findings

Despite the rate limiting issues:

  • Bot architecture is sound
  • All services starting correctly
  • Block processing working
  • Pool discovery functional
  • Arbitrage detection engine running
  • Retry logic handling errors gracefully
  • No crashes or panics
  • Container healthy and stable

Conclusion

The bot is NOT stopped - it's running but severely degraded by rate limiting.

The primary issue is using a public RPC endpoint that can't handle the bot's request volume. Switching to a premium endpoint or drastically reducing request rates will resolve the issue.

Estimated Impact of Fixes:

  • 🔴 Switch to premium RPC → 95% error reduction
  • 🟡 Reduce rate limits → 70% error reduction
  • 🟢 Add fallbacks → Better reliability
  • 🟢 Increase caching → 20% fewer requests

Next Steps: Apply recommended fixes in priority order.