security: Add sensitive files to .gitignore
Some checks failed
V2 CI/CD Pipeline / Pre-Flight Checks (push) Has been cancelled
V2 CI/CD Pipeline / Build & Dependencies (push) Has been cancelled
V2 CI/CD Pipeline / Code Quality & Linting (push) Has been cancelled
V2 CI/CD Pipeline / Unit Tests (100% Coverage Required) (push) Has been cancelled
V2 CI/CD Pipeline / Integration Tests (push) Has been cancelled
V2 CI/CD Pipeline / Performance Benchmarks (push) Has been cancelled
V2 CI/CD Pipeline / Decimal Precision Validation (push) Has been cancelled
V2 CI/CD Pipeline / Modularity Validation (push) Has been cancelled
V2 CI/CD Pipeline / Final Validation Summary (push) Has been cancelled

- Add .env.phase1, .env.phase1.bak to .gitignore
- Add .env.production, .env.deployment patterns
- Add orig/ directory to prevent backup files with secrets
- Add *.bak and *.env patterns
- Remove sensitive files from git tracking

These files contained exposed private keys and API credentials.
Local files are preserved; they were removed only from version control.

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
Author: Ralph Loop
Date: 2026-03-07 18:49:18 -06:00
Parent: e997ddc818
Commit: 8b6dfc7114
1093 changed files with 77 additions and 300051 deletions
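The removal described in this commit can be sketched as a minimal, self-contained demo in a throwaway repository (`.env.phase1` stands in for the affected files; assumes `git` is installed):

```shell
# Demo: untrack a secrets file while preserving the local copy.
# Runs in a temporary repo; .env.phase1 is the example path from this commit.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo "PRIVATE_KEY=redacted" > .env.phase1
git add .env.phase1
git commit -qm "add env file"
# --cached removes the file from the index only; the working copy stays on disk
git rm --cached -q .env.phase1
echo ".env.phase1" >> .gitignore
git add .gitignore
git commit -qm "stop tracking env file"
test -f .env.phase1 && echo "local file preserved"
git ls-files | grep -qx ".env.phase1" || echo "no longer tracked"
```

Note that `git rm --cached` does not rewrite history: the secrets remain reachable in earlier commits, so the exposed keys and credentials should still be rotated.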


@@ -1,98 +0,0 @@
# 🚀 MEV BOT PRODUCTION CONFIGURATION - IMMEDIATE PROFIT MODE
# This is your LIVE TRADING configuration for immediate deployment
# =============================================================================
# 🔥 CRITICAL PRODUCTION SETTINGS - PROFIT OPTIMIZATION
# =============================================================================
# Arbitrum RPC and Sequencer Feed
ARBITRUM_RPC_ENDPOINT=https://arb1.arbitrum.io/rpc
ARBITRUM_WS_ENDPOINT=https://arb1.arbitrum.io/rpc
# Arbitrum Sequencer Feed for real-time transaction monitoring (CORRECTED URL)
SEQUENCER_WS_URL=wss://arb1.arbitrum.io/feed
# Aggressive rate limits for maximum throughput
RPC_REQUESTS_PER_SECOND=250
RPC_MAX_CONCURRENT=20
BOT_MAX_WORKERS=8
BOT_CHANNEL_BUFFER_SIZE=5000
# 🔐 PRODUCTION SECURITY
MEV_BOT_ENCRYPTION_KEY="i4qwh5vqUxehOdFsdZx0vFvDwKUHcVpGWC0K2BVQn6A="
# === WALLET CONFIGURATION ===
# Generated wallet address: 0xB5C11BE05226c010B7236dDc5903E7703c9Fc8BD
# Private key without 0x prefix (64 hex characters)
PRIVATE_KEY=cf7687e43f118f5f1390dcc3fec9f770e7852b6fd1ff9d3ea4294ab920f2609a
# === PHASE 1 SAFETY SETTINGS (CRITICAL - DO NOT MODIFY) ===
ENABLE_EXECUTION=false
DRY_RUN_MODE=true
ENABLE_SIMULATION=true
ENABLE_FRONT_RUNNING=false
# 💰 PROFIT MAXIMIZATION SETTINGS
ARBITRAGE_MIN_PROFIT_THRESHOLD=0.001 # 0.1% minimum profit (aggressive)
GAS_PRICE_MULTIPLIER=1.8 # Competitive gas pricing
MAX_SLIPPAGE_TOLERANCE=0.005 # 0.5% max slippage
POSITION_SIZE_ETH=0.1 # Start with 0.1 ETH positions
# 📊 MONITORING & ALERTS
METRICS_ENABLED=true
METRICS_PORT=9090
HEALTH_PORT=8080
LOG_LEVEL=info
LOG_FORMAT=json
# 🏭 PRODUCTION ENVIRONMENT
GO_ENV=production
DEBUG=false
# 💾 STORAGE PATHS
MEV_BOT_KEYSTORE_PATH=keystore/production
MEV_BOT_AUDIT_LOG=logs/production_audit.log
MEV_BOT_BACKUP_PATH=backups/production
# ⚡ PERFORMANCE TUNING
GOMAXPROCS=4
GOGC=100
# 🎯 TARGET EXCHANGES FOR ARBITRAGE
ENABLE_UNISWAP_V2=true
ENABLE_UNISWAP_V3=true
ENABLE_SUSHISWAP=true
ENABLE_BALANCER=true
ENABLE_CURVE=true
# 🔥 DEPLOYED CONTRACTS (PRODUCTION READY)
CONTRACT_ARBITRAGE_EXECUTOR=0xec2a16d5f8ac850d08c4c7f67efd50051e7cfc0b
CONTRACT_FLASH_SWAPPER=0x5801ee5c2f6069e0f11cce7c0f27c2ef88e79a95
CONTRACT_UNISWAP_V2_FLASH_SWAPPER=0xc0b8c3e9a976ec67d182d7cb0283fb4496692593
ARBISCAN_API_KEY=H8PEIY79385F4UKYU7MRV5IAT1BI1WYIVY
# Phase 1: Mainnet Dry-Run Configuration
# Duration: 48 hours minimum
# Risk: NONE (monitoring only, no execution)
# === SAFETY SETTINGS (ULTRA-CONSERVATIVE) ===
ENABLE_EXECUTION=false
DRY_RUN_MODE=true
ENABLE_SIMULATION=true
ENABLE_FRONT_RUNNING=false
# === DETECTION THRESHOLDS ===
MIN_PROFIT_THRESHOLD=0.001 # 0.1% minimum (detect more opportunities)
MAX_SLIPPAGE_TOLERANCE=0.005 # 0.5% max slippage
# === RISK LIMITS (NOT USED IN DRY-RUN BUT LOGGED) ===
MAX_POSITION_SIZE_ETH=0.01 # 0.01 ETH
MAX_DAILY_VOLUME_ETH=0.1 # 0.1 ETH
MAX_CONSECUTIVE_LOSSES=1 # Stop after 1 loss
MAX_HOURLY_LOSS_ETH=0.01 # 0.01 ETH hourly
MAX_DAILY_LOSS_ETH=0.05 # 0.05 ETH daily
# === MONITORING ===
METRICS_ENABLED=true
METRICS_PORT=9090
LOG_LEVEL=info


@@ -1,73 +0,0 @@
# 🚀 MEV BOT PRODUCTION CONFIGURATION - IMMEDIATE PROFIT MODE
# This is your LIVE TRADING configuration for immediate deployment
# =============================================================================
# 🔥 CRITICAL PRODUCTION SETTINGS - PROFIT OPTIMIZATION
# =============================================================================
# Arbitrum RPC and Sequencer Feed
ARBITRUM_RPC_ENDPOINT=https://arb1.arbitrum.io/rpc
ARBITRUM_WS_ENDPOINT=https://arb1.arbitrum.io/rpc
# Arbitrum Sequencer Feed for real-time transaction monitoring (CORRECTED URL)
SEQUENCER_WS_URL=wss://arb1.arbitrum.io/feed
# Aggressive rate limits for maximum throughput
RPC_REQUESTS_PER_SECOND=250
RPC_MAX_CONCURRENT=20
BOT_MAX_WORKERS=8
BOT_CHANNEL_BUFFER_SIZE=5000
# 🔐 PRODUCTION SECURITY
MEV_BOT_ENCRYPTION_KEY="i4qwh5vqUxehOdFsdZx0vFvDwKUHcVpGWC0K2BVQn6A="
# === WALLET CONFIGURATION ===
# Generated wallet address: 0xB5C11BE05226c010B7236dDc5903E7703c9Fc8BD
# Private key without 0x prefix (64 hex characters)
PRIVATE_KEY=cf7687e43f118f5f1390dcc3fec9f770e7852b6fd1ff9d3ea4294ab920f2609a
# === PHASE 1 SAFETY SETTINGS (CRITICAL - DO NOT MODIFY) ===
ENABLE_EXECUTION=false
DRY_RUN_MODE=true
ENABLE_SIMULATION=true
ENABLE_FRONT_RUNNING=false
# 💰 PROFIT MAXIMIZATION SETTINGS
ARBITRAGE_MIN_PROFIT_THRESHOLD=0.001 # 0.1% minimum profit (aggressive)
GAS_PRICE_MULTIPLIER=1.8 # Competitive gas pricing
MAX_SLIPPAGE_TOLERANCE=0.005 # 0.5% max slippage
POSITION_SIZE_ETH=0.1 # Start with 0.1 ETH positions
# 📊 MONITORING & ALERTS
METRICS_ENABLED=true
METRICS_PORT=9090
HEALTH_PORT=8080
LOG_LEVEL=info
LOG_FORMAT=json
# 🏭 PRODUCTION ENVIRONMENT
GO_ENV=production
DEBUG=false
# 💾 STORAGE PATHS
MEV_BOT_KEYSTORE_PATH=keystore/production
MEV_BOT_AUDIT_LOG=logs/production_audit.log
MEV_BOT_BACKUP_PATH=backups/production
# ⚡ PERFORMANCE TUNING
GOMAXPROCS=4
GOGC=100
# 🎯 TARGET EXCHANGES FOR ARBITRAGE
ENABLE_UNISWAP_V2=true
ENABLE_UNISWAP_V3=true
ENABLE_SUSHISWAP=true
ENABLE_BALANCER=true
ENABLE_CURVE=true
# 🔥 DEPLOYED CONTRACTS (PRODUCTION READY)
CONTRACT_ARBITRAGE_EXECUTOR=0xec2a16d5f8ac850d08c4c7f67efd50051e7cfc0b
CONTRACT_FLASH_SWAPPER=0x5801ee5c2f6069e0f11cce7c0f27c2ef88e79a95
CONTRACT_UNISWAP_V2_FLASH_SWAPPER=0xc0b8c3e9a976ec67d182d7cb0283fb4496692593
ARBISCAN_API_KEY=H8PEIY79385F4UKYU7MRV5IAT1BI1WYIVY

.gitignore (new file, 77 additions)

@@ -0,0 +1,77 @@
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool
*.out
# Go workspace file
go.work
# Dependency directories
vendor/
# IDE
.idea/
.vscode/
*.swp
*.swo
*~
# Environment variables (CRITICAL: Never commit secrets)
.env
.env.local
.env.*.local
.env.phase1
.env.phase1.bak
.env.production
.env.deployment
*.env
# Wallet and key files (CRITICAL: Never commit private keys)
.wallet_info.txt
*.key
*.pem
*.crt
private_keys/
# Legacy/backup directories (CRITICAL: May contain exposed secrets)
orig/
backup/
*.bak
# Build output
build/
bin/
dist/
# Logs
*.log
logs/
# Metrics data
*.prometheus
metrics/
# Temporary files
tmp/
temp/
*.tmp
# OS files
.DS_Store
Thumbs.db
# Docker
docker-compose.override.yml
# Coverage
coverage.html
coverage.out
coverage/
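The new ignore rules can be sanity-checked with `git check-ignore` — a quick sketch in a throwaway repository, using a subset of the patterns above:

```shell
# Verify that the ignore patterns match the intended paths.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
printf '%s\n' '.env.phase1' '*.bak' 'orig/' > .gitignore
# check-ignore exits 0 when a path is covered by an ignore rule
git check-ignore -q .env.phase1 && echo ".env.phase1 ignored"
git check-ignore -q config.bak && echo "config.bak ignored"
git check-ignore -q orig/secrets.txt && echo "orig/secrets.txt ignored"
```

The trailing slash in `orig/` makes the rule match only a directory, which still covers every file beneath it.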

Binary file not shown.


@@ -1,272 +0,0 @@
# Claude CLI Configuration
This directory contains Claude Code configuration and tools for the MEV Bot project.
## 🚀 Quick Start Commands
### Essential Build & Test Commands
```bash
# Build the MEV bot binary
make build
# Run tests
make test
# Start development server with hot reload
./scripts/run.sh
# Build and run with logging
./scripts/build.sh && ./mev-bot start
# Check for Go modules issues
go mod tidy && go mod verify
# Run linter
golangci-lint run
# Run security analysis
gosec ./...
```
### Development Workflow Commands
```bash
# Setup production environment for testing
export ARBITRUM_RPC_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57"
export ARBITRUM_WS_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57"
export MEV_BOT_ENCRYPTION_KEY="tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48="
export METRICS_ENABLED="false"
# Run with production endpoints and timeout for testing
env ARBITRUM_RPC_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57" ARBITRUM_WS_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57" MEV_BOT_ENCRYPTION_KEY="tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=" timeout 30 ./bin/mev-bot start
# Debug with verbose logging using production endpoints
env ARBITRUM_RPC_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57" ARBITRUM_WS_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57" MEV_BOT_ENCRYPTION_KEY="tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=" LOG_LEVEL="debug" timeout 15 ./bin/mev-bot start
# Scan for opportunities using production endpoints
env ARBITRUM_RPC_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57" ARBITRUM_WS_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57" MEV_BOT_ENCRYPTION_KEY="tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=" timeout 20 ./bin/mev-bot scan
# Profile performance
go tool pprof http://localhost:6060/debug/pprof/profile
```
## Claude Commands Directory
The `.claude/commands/` directory contains predefined commands for common development tasks:
- `analyze-performance.md` - Analyze application performance
- `debug-issue.md` - Debug issues in the codebase
- `implement-feature.md` - Implement new features
- `optimize-performance.md` - Optimize performance
- `security-audit.md` - Perform security audits
## Claude Settings
The `.claude/settings.local.json` file contains Claude's permissions configuration:
```json
{
"permissions": {
"allow": [
"Bash(find:*)",
"Bash(go mod:*)",
"Bash(go list:*)",
"mcp__ide__getDiagnostics",
"Bash(go test:*)",
"Bash(go build:*)",
"Read(//tmp/**)",
"Bash(./mev-bot start --help)",
"Bash(xargs ls:*)",
"Bash(timeout:*)",
"WebSearch",
"WebFetch(domain:docs.uniswap.org)",
"WebFetch(domain:github.com)",
"WebFetch(domain:raw.githubusercontent.com)",
"WebFetch(domain:htdocs.dev)"
],
"deny": [],
"ask": []
}
}
```
## 📋 Development Guidelines & Code Style
### Go Best Practices
- **Error Handling**: Always wrap errors with context using `fmt.Errorf("operation failed: %w", err)`
- **Concurrency**: Use worker pools for processing large datasets (see `pkg/market/pipeline.go`)
- **Interfaces**: Keep interfaces small and focused (1-3 methods maximum)
- **Testing**: Aim for >90% test coverage with table-driven tests
- **Logging**: Use structured logging with `slog` package
- **Performance**: Profile regularly with `go tool pprof`
### Code Organization Rules
- **File Size**: Keep files under 500 lines (split larger files into logical components)
- **Package Structure**: Follow Go standard layout (cmd/, internal/, pkg/)
- **Naming**: Use Go naming conventions (PascalCase for exports, camelCase for private)
- **Documentation**: Document all exported functions with examples
- **Constants**: Group related constants in blocks with `iota` when appropriate
### Required Checks Before Commit
```bash
# Run all checks before committing
make test && make lint && go mod tidy
# Security scan
gosec ./...
# Dependency vulnerability check
go list -json -m all | nancy sleuth
```
## Claude's Primary Focus Areas
As Claude, you're particularly skilled at:
1. **Code Architecture and Design Patterns**
- Implementing clean, maintainable architectures
- Applying appropriate design patterns (pipeline, worker pool, etc.)
- Creating well-structured interfaces between components
- Ensuring loose coupling and high cohesion
2. **System Integration and APIs**
- Designing clear APIs between components
- Implementing proper data flow between modules
- Creating robust configuration management
- Building error handling and recovery mechanisms
3. **Writing Clear Documentation**
- Documenting complex algorithms and mathematical calculations
- Creating clear API documentation
- Writing architectural decision records
- Producing user guides and examples
4. **Implementing Robust Error Handling**
- Using Go's error wrapping with context
- Implementing retry mechanisms with exponential backoff
- Handling timeouts appropriately
- Creating comprehensive logging strategies
5. **Creating Maintainable and Scalable Code Structures**
- Organizing code for easy testing and maintenance
- Implementing performance monitoring and metrics
- Designing for horizontal scalability
- Ensuring code follows established patterns and conventions
## 🛠 Claude Code Optimization Settings
### Workflow Preferences
- **Always commit changes**: Use `git commit -am "descriptive message"` after significant changes
- **Branch naming**: Use hyphens (`feat-add-new-parser`, `fix-memory-leak`)
- **Context management**: Use `/compact` to manage long conversations
- **Parallel processing**: Leverage Go's concurrency patterns extensively
### File Organization Preferences
- **Never save temporary files to root**: Use `/tmp/` or `internal/temp/`
- **Log files**: Always save to `logs/` directory
- **Test files**: Place alongside source files with `_test.go` suffix
- **Documentation**: Keep in `docs/` with clear naming
### Performance Monitoring
```bash
# Enable metrics endpoint
export METRICS_ENABLED="true"
export METRICS_PORT="9090"
# Monitor memory usage
go tool pprof http://localhost:9090/debug/pprof/heap
# Monitor CPU usage
go tool pprof http://localhost:9090/debug/pprof/profile?seconds=30
# Monitor goroutines
go tool pprof http://localhost:9090/debug/pprof/goroutine
```
## 🔧 Environment Configuration
### Required Environment Variables
```bash
# Production Arbitrum RPC Configuration (WSS for full features)
export ARBITRUM_RPC_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57"
export ARBITRUM_WS_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57"
# Security Configuration
export MEV_BOT_ENCRYPTION_KEY="tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48="
# Application Configuration
export LOG_LEVEL="info"
export METRICS_ENABLED="false"
export METRICS_PORT="9090"
# Development Configuration
export GO_ENV="development"
export DEBUG="true"
```
### Optional Environment Variables
```bash
# Performance Tuning
export GOMAXPROCS=4
export GOGC=100
# Logging Configuration
export LOG_FORMAT="json"
export LOG_OUTPUT="logs/mev-bot.log"
# Rate Limiting
export MAX_RPS=100
export RATE_LIMIT_BURST=200
```
## 📝 Commit Message Conventions
### Format
```
type(scope): brief description
- Detailed explanation of changes
- Why the change was needed
- Any breaking changes or migration notes
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
```
### Types
- `feat`: New feature implementation
- `fix`: Bug fix
- `perf`: Performance improvement
- `refactor`: Code restructuring without feature changes
- `test`: Adding or updating tests
- `docs`: Documentation updates
- `build`: Build system or dependency changes
- `ci`: CI/CD pipeline changes
## 🚨 Security Guidelines
### Never Commit
- Private keys or wallet seeds
- API keys or secrets
- RPC endpoints with authentication
- Personal configuration files
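The rules above can be enforced mechanically with a hook. Below is a minimal pre-commit sketch (illustrative patterns, demonstrated in a throwaway repository) that rejects staged files with secret-bearing names:

```shell
# Demo: a pre-commit hook that blocks env/key files from being committed.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -qm "initial" --allow-empty
cat > .git/hooks/pre-commit <<'HOOK'
#!/bin/sh
# Reject any staged path that looks like a secrets file.
for f in $(git diff --cached --name-only); do
  case "$f" in
    *.env|.env*|*.key|*.pem|*.bak)
      echo "refusing to commit potential secret: $f" >&2
      exit 1;;
  esac
done
HOOK
chmod +x .git/hooks/pre-commit
echo "PRIVATE_KEY=redacted" > .env.production
git add .env.production
git commit -qm "oops" 2>/dev/null || echo "commit blocked by hook"
```

Hooks are local to each clone, so this complements rather than replaces server-side scanning such as `git-secrets` or `gosec` mentioned below.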
### Always Validate
- Input parameters for all functions
- RPC responses before processing
- Mathematical calculations for overflow
- Memory allocations for bounds
### Security Commands
```bash
# Scan for secrets
git-secrets --scan
# Security audit
gosec ./...
# Dependency vulnerabilities
go list -json -m all | nancy sleuth
# Check for hardcoded credentials
grep -r "password\|secret\|key" --exclude-dir=.git .
```


@@ -1,120 +0,0 @@
{
"permissions": {
"allow": [
"Bash(find:*)",
"Bash(go mod:*)",
"Bash(go list:*)",
"mcp__ide__getDiagnostics",
"Bash(go test:*)",
"Bash(go build:*)",
"Read(//tmp/**)",
"Bash(./mev-bot start --help)",
"Bash(xargs ls:*)",
"Bash(timeout:*)",
"Bash(export ARBITRUM_RPC_ENDPOINT=\"https://arb1.arbitrum.io/rpc\")",
"Bash(export ARBITRUM_WS_ENDPOINT=\"wss://arb1.arbitrum.io/ws\")",
"Bash(export METRICS_ENABLED=\"true\")",
"Bash(export METRICS_PORT=\"9090\")",
"Bash(echo:*)",
"Bash(ARBITRUM_RPC_ENDPOINT=\"https://arb1.arbitrum.io/rpc\" ARBITRUM_WS_ENDPOINT=\"wss://arb1.arbitrum.io/ws\" METRICS_ENABLED=\"true\" timeout 30 ./mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"https://arb1.arbitrum.io/rpc\" ARBITRUM_WS_ENDPOINT=\"wss://arb1.arbitrum.io/ws\" METRICS_ENABLED=\"true\" timeout 30 ./mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"https://arb1.arbitrum.io/rpc\" ARBITRUM_WS_ENDPOINT=\"wss://arb1.arbitrum.io/ws\" METRICS_ENABLED=\"true\" timeout 60 ./mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"https://arb1.arbitrum.io/rpc\" ARBITRUM_WS_ENDPOINT=\"\" METRICS_ENABLED=\"true\" timeout 60 ./mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arb1.arbitrum.io/ws\" ARBITRUM_WS_ENDPOINT=\"wss://arb1.arbitrum.io/ws\" METRICS_ENABLED=\"true\" timeout 20 ./mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.infura.io/ws/v3/demo\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.infura.io/ws/v3/demo\" METRICS_ENABLED=\"true\" timeout 30 ./mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"true\" timeout 30 ./mev-bot start)",
"Bash(chmod:*)",
"Bash(pkill:*)",
"Bash(lsof:*)",
"Bash(xargs kill:*)",
"Bash(ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" ./bin/mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" timeout 30 ./bin/mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" ./bin/mev-bot start)",
"WebSearch",
"WebFetch(domain:docs.uniswap.org)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" timeout 15 ./bin/mev-bot start)",
"Bash(./scripts/run.sh:*)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" timeout 30 ./mev-bot start)",
"Bash(node:*)",
"Bash(python3:*)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" timeout 20 ./mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" timeout 10 ./mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" timeout 5 ./mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" timeout 15 ./mev-bot start)",
"Bash(git checkout:*)",
"Bash(git add:*)",
"Bash(git commit:*)",
"Bash(cat:*)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" timeout 3 ./mev-bot start)",
"Read(//home/administrator/.claude/**)",
"WebFetch(domain:github.com)",
"WebFetch(domain:raw.githubusercontent.com)",
"WebFetch(domain:htdocs.dev)",
"Bash(while read file)",
"Read(//home/administrator/projects/**)",
"Read(//home/administrator/**)",
"Bash(mkdir:*)",
"Bash(abigen:*)",
"Bash(go vet:*)",
"Bash(go env:*)",
"Bash(./scripts/test-setup.sh:*)",
"Bash(go get:*)",
"Bash(grep:*)",
"Bash(go fmt:*)",
"Bash(./scripts/production-validation.sh:*)",
"Bash(go run:*)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" timeout 15 go run test/production/deployed_contracts_demo.go)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"dGVzdC1lbmNyeXB0aW9uLWtleS1mb3ItZGVtby0xMjM0NTY3OA==\" timeout 10 ./mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"dGVzdC1lbmNyeXB0aW9uLWtleS1mb3ItZGVtby0xMjM0NTY3OA==\" timeout 20 ./mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"https://arb1.arbitrum.io/rpc\" ARBITRUM_WS_ENDPOINT=\"\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"dGVzdC1lbmNyeXB0aW9uLWtleS1mb3ItZGVtby0xMjM0NTY3OA==\" timeout 10 ./mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"https://arb1.arbitrum.io/rpc\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"dGVzdC1lbmNyeXB0aW9uLWtleS1mb3ItZGVtby0xMjM0NTY3OA==\" timeout 15 ./mev-bot scan)",
"Bash(./scripts/run-fork-tests.sh:*)",
"Bash(openssl rand:*)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"https://arb1.arbitrum.io/rpc\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 15 ./bin/mev-bot scan)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"https://arb1.arbitrum.io/rpc\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 5 ./bin/mev-bot scan)",
"Bash(env MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" go run scripts/generate-key.go)",
"Bash(env MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 10 go run scripts/generate-key.go)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"https://arb1.arbitrum.io/rpc\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 30 ./bin/mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 30 ./bin/mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 60 ./bin/mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"true\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 30 ./bin/mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" LOG_LEVEL=\"debug\" timeout 20 ./bin/mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 15 ./bin/mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 10 ./bin/mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 15 ./mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 10 ./mev-bot start)",
"WebFetch(domain:arbiscan.io)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 5 ./bin/mev-bot start)",
"Bash(kill:*)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" ./bin/mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" LOG_LEVEL=\"debug\" timeout 15 ./mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" LOG_LEVEL=\"debug\" timeout 5 ./mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 3 ./bin/mev-bot start)",
"Bash(git mv:*)",
"Bash(./bin/mev-bot:*)",
"Bash(make:*)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" METRICS_ENABLED=\"false\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" LOG_LEVEL=\"debug\" timeout 3 ./bin/mev-bot start)",
"Bash(./bin/swap-cli:*)",
"Bash(ARBITRUM_RPC_ENDPOINT=\"https://arb1.arbitrum.io/rpc\" WALLET_ADDRESS=\"0x742d35Cc6AaB8f5d6649c8C4F7C6b2d123456789\" ./bin/swap-cli --dry-run uniswap-v3 --token-in 0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48 --token-out 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2 --amount-in 1000000000)",
"Bash(tree:*)",
"Bash(gosec:*)",
"Bash(export MEV_BOT_ENCRYPTION_KEY=\"dGVzdC1lbmNyeXB0aW9uLWtleS1mb3ItZGVtby0xMjM0NTY3OA==\")",
"Bash(./scripts/security-validation.sh:*)",
"Bash(qwen:*)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"https://arb1.arbitrum.io/rpc\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 10 ./bin/mev-bot scan)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"https://arb1.arbitrum.io/rpc\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 20 ./bin/mev-bot scan)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 30 ./bin/mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 15 ./bin/mev-bot scan)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" LOG_LEVEL=\"debug\" timeout 15 ./bin/mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ./bin/swap-cli --dry-run uniswap-v3 --token-in 0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48 --token-out 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2 --amount-in 1000000000)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" ./scripts/production-validation.sh)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 20 ./bin/mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 5 ./bin/mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 10 ./bin/mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 3 ./bin/mev-bot start)",
"Bash(env ARBITRUM_RPC_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" ARBITRUM_WS_ENDPOINT=\"wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57\" MEV_BOT_ENCRYPTION_KEY=\"tVoxTugRw7lk7q/GC8yXd0wg3vLy8m6GtrvCqj/5q48=\" timeout 15 ./bin/mev-bot start)"
],
"deny": [],
"ask": []
}
}

View File

@@ -1,34 +0,0 @@
# Analyze Performance
Perform a comprehensive performance analysis of the MEV bot: $ARGUMENTS
## Analysis Steps:
1. **Memory Profiling**: Check for memory leaks and high allocation patterns
2. **CPU Profiling**: Identify CPU bottlenecks and hot paths
3. **Goroutine Analysis**: Look for goroutine leaks and blocking operations
4. **I/O Performance**: Analyze network and disk I/O patterns
5. **Concurrency Issues**: Check for race conditions and lock contention
## Performance Commands to Run:
```bash
# Memory profile
go tool pprof http://localhost:9090/debug/pprof/heap
# CPU profile
go tool pprof http://localhost:9090/debug/pprof/profile?seconds=30
# Goroutine analysis
go tool pprof http://localhost:9090/debug/pprof/goroutine
# Enable race detector
go run -race ./cmd/mev-bot
```
## Analysis Focus Areas:
- Worker pool efficiency in `pkg/market/pipeline.go`
- Event parsing performance in `pkg/events/`
- Uniswap math calculations in `pkg/uniswap/`
- Memory usage in large transaction processing
- Rate limiting effectiveness in `internal/ratelimit/`
Please provide specific performance metrics and recommendations for optimization.

View File

@@ -1,271 +0,0 @@
# Full Audit Checklist — Go MEV Arbitrage Bot
> Scope: a production-grade Go service that scans mempool/RPCs, finds arbitrage, constructs & signs transactions, and submits them (e.g., direct RPC, Flashbots bundles). Includes any on-chain smart-contracts the bot depends on (adapters, helpers, custom contracts).
---
## 1) Planning & scoping (before running tools)
* ✅ Identify attack surface: private keys, RPC/WS endpoints, mempool sources, bundle relays (Flashbots), third-party libs, config files.
* ✅ Define assets at risk (max ETH / tokens), operational windows, and which chains/nodes (e.g., Arbitrum, Ethereum).
* ✅ Create test accounts funded with testnet funds and a forked mainnet environment (Anvil/Hardhat/Foundry) for reproducible tests.
---
## 2) Static analysis — Go (SAST / linters / vuln DB)
Run these to find code smells, insecure patterns, and known vulnerable dependencies:
* Tools to run: `gosec`, `govulncheck`, `staticcheck`, `golangci-lint`. (gosec and govulncheck are high-value for security.) ([GitHub][1])
Example commands:
```bash
# gosec
gosec ./...
# govulncheck (requires Go toolchain)
govulncheck ./...
# staticcheck via golangci-lint
golangci-lint run
```
What to look for:
* Use of `crypto/rand` vs `math/rand`, insecure parsing of RPC responses, improper TLS skip verify, hard-coded secrets.
* Unsafe use of `sync` primitives, race conditions flagged by go vet/staticcheck patterns.
---
## 3) Dynamic analysis & runtime checks — Go
* Run `go test` with race detector and fuzzing (Go 1.18+ built-in fuzz):
```bash
# race detector
go test -race ./...
# fuzzing (example fuzz target; note -fuzz accepts a single package, not ./...)
go test -fuzz=Fuzz -fuzztime=10m ./pkg/events/
```
* Run integration tests on a mainnet-fork (Anvil/Foundry/Hardhat) so you can simulate chain state and mempool behaviours.
* Instrument code with pprof and capture CPU/heap profiles during simulated high-throughput runs.
What to look for:
* Race detector failures, panics, deadlocks, goroutine leaks (goroutines that grow unbounded during workload).
---
## 4) Go security & dependency checks
* `govulncheck` (re: vulnerable dependencies). ([Go Packages][2])
* Dependency graph: `go list -m all` and search for unmaintained or forked packages.
* Check for unsafe cgo or native crypto libraries.
---
## 5) Code quality & architecture review checklist
* Error handling: ensure errors are checked and wrapped (`%w`) not ignored.
* Context: all network calls accept `context.Context` with deadlines/timeouts.
* Modularization: separate mempool ingestion, strategy logic, transaction builder, and signer.
* Testability: core arbitrage logic should be pure functions with injected interfaces for RPCs and time.
* Secrets: private keys are never in repo, use a secrets manager (Vault / KMS) or env with restricted perms.
---
## 6) Concurrency & rate limiting
* Ensure tight control over goroutine lifecycle (context cancellation).
* Use worker pools for RPCs and bounded channels to avoid OOM.
* Rate-limit RPC calls and implement backoff/retry strategies with jitter.
---
## 7) Transaction building & signing (critical)
* Validate chain ID, EIP-155 protection, correct v,r,s values.
* Nonce management: centralize nonce manager; handle failed/non-broadcast txs and re-sync nonces from node on error.
* Gas estimation/testing: include sanity checks & max gas limits.
* Signing: prefer hardware/remote signers (KMS, HSM). If local private keys used, ensure file perms and encryption.
* Replay protection: verify chain ID usage and consider EIP-1559 parameterization (maxFeePerGas/maxPriorityFeePerGas).
* If using Flashbots, validate bundle assembly and simulator checks (see Flashbots docs). ([Flashbots Docs][3])
---
## 8) Smart-contract audit (if you have custom contracts or interact closely)
Run this toolchain (static + dynamic + fuzz + formal):
* **Slither** for static analysis.
* **Mythril** (symbolic analysis) / **MythX** for additional SAST.
* **Foundry (forge)** for unit & fuzz tests — Foundry supports fuzzing & is fast. ([Cyfrin][4])
* **Echidna** for property-based fuzzing of invariants. ([0xmacro.com][5])
* Consider **Manticore**/**Mythril** for deeper symbolic exploration, and **Certora**/formal approaches for critical invariants (if budget allows). ([Medium][6])
Example solidity workflow:
```bash
# slither
slither .
# Foundry fuzzing (parameterized test functions are fuzzed automatically)
forge test
# Echidna (property-based)
echidna-test contracts/ --contract MyContract --config echidna.yaml
```
What to test:
* Reentrancy, arithmetic over/underflow (if not using SafeMath / solidity ^0.8), access-control checks, unexpected token approvals, unchecked external calls.
---
## 9) Fuzzing — exhaustive plan
**Go fuzzing**
* Use Go's native fuzz (`go test -fuzz`) for core libraries (parsers, ABI decoders, bundle builders).
* Create fuzz targets focusing on:
* RPC response parsing (malformed JSON).
* ABI decoding and calldata construction.
* Signed-transaction bytes parser.
* Run for long durations with corpus seeding from real RPC responses.
**Solidity fuzzing**
* Use Foundry's `forge test` (parameterized test functions are fuzzed automatically) and Echidna to target invariants (balances, no-negative slippage, token conservation).
* Seed fuzzers with historical tx traces & typical call sequences.
**Orchestration**
* Run fuzzers in CI but also schedule long runs in a buildkite/GitHub Actions runner or dedicated machine.
* Capture crashes/inputs and convert them to reproducible testcases.
References and how-to guides for solidity fuzzing and Foundry usage. ([Cyfrin][4])
---
## 10) Penetration-style tests / adversarial scenarios
* Mempool-level adversary: inject malformed or competing transactions; test how the bot reacts to chain reorganizations.
* Time/latency: simulate delayed RPC responses and timeouts.
* Partial failures: simulate bundle reverts, tx replaced, or gas price spikes.
* Economic tests: simulate price or liquidity slippage, oracle manipulation scenarios.
---
## 11) Monitoring, metrics & observability
* Add structured logs (JSON), trace IDs, and use OpenTelemetry or Prometheus metrics for:
* latency from detection → tx submission,
* success/failure counters,
* gas usage per tx,
* nonce mismatches.
* Add alerts for repeated reorgs, high failure rates, or sudden profit/loss anomalies.
* Simulate alerting (PagerDuty/Slack) in staging to ensure operational readiness.
---
## 12) CI/CD & reproducible tests
* Integrate all static and dynamic checks into CI:
* `gosec`, `govulncheck`, `golangci-lint` → fail CI on new high/critical findings.
* Unit tests, fuzzing smoke tests (short), Foundry tests for solidity.
* Store fuzzing corpora and reproduce minimized crashing inputs in CI artifacts.
---
## 13) Secrets & deployment hardening
* Never commit private keys or mnemonic in code or config. Use secret manager (AWS KMS / GCP KMS / HashiCorp Vault).
* Use least-privilege for node credentials; isolate signer service from other components.
* Harden nodes: avoid public shared RPCs for production signing; prefer dedicated provider or local node.
---
## 14) Reporting format & remediation plan (what the auditor should deliver)
* **Executive summary:** risk posture and amount at risk.
* **Prioritized findings:** Critical / High / Medium / Low / Informational.
* For each finding: description, evidence (stack trace, log snippets, test reproducer), impact, exploitability, and line-level references.
* **Fix recommendation:** code patch or test to cover the case; include sample code where relevant.
* **Re-test plan:** how to validate fixes (unit/regression tests, fuzzing seeds).
* **Follow-up:** suggested schedule for re-audit (post-fix) and continuous scanning.
---
## 15) Severity guidance (example)
* **Critical:** fund-loss bug; private key compromised; unsigned tx broadcast leak.
* **High:** nonce desync under normal load; reentrancy in helper contract called by bot.
* **Medium:** panic on malformed RPC response; unbounded goroutine leak.
* **Low:** logging missing request IDs; non-idiomatic error handling.
* **Informational:** code style, minor refactors.
---
## 16) Example concrete test cases to include
* RPC returns truncated JSON → does the bot panic or gracefully retry?
* Node returns `nonce too low` mid-run → does the bot resync or keep retrying stale nonce?
* Simulate mempool reordering and reorg of 2 blocks → does bot detect revert & recover?
* Flashbots bundle simulator returns revert on one tx → ensure bot doesn't double-submit other txs.
* ABI-decoder fuzzed input causing unexpected `panic` in Go (fuzzer should find).
---
## 17) Example commands / CI snippets (condensed)
```yaml
# .github/workflows/ci.yml (snippets)
- name: Lint & Security
  run: |
    golangci-lint run ./...
    gosec ./...
    govulncheck ./...
- name: Unit + Race
  run: go test -race ./...
- name: Go Fuzz (short)
  run: go test -fuzz=Fuzz -fuzztime=1m ./pkg/security/  # -fuzz targets one package at a time
```
---
## 18) Deliverables checklist for the auditor (what to hand back)
* Full report (PDF/Markdown) with prioritized findings & diffs/patches.
* Repro scripts for each failing case (docker-compose or Foundry/Anvil fork commands).
* Fuzzing corpora and minimized crashing inputs.
* CI changes / workflow proposals for enforcement.
* Suggested runtime hardening & monitoring dashboard templates.
---
## 19) Helpful references & toolset (quick links)
* `gosec` — Go security scanner. ([GitHub][1])
* `govulncheck` — Go vulnerability scanner for dependencies. ([Go Packages][2])
* Foundry (`forge`) — fast solidity testing + fuzzing. ([Cyfrin][4])
* Echidna — property-based fuzzing for Solidity. ([0xmacro.com][5])
* Slither / Mythril — solidity static analysis & symbolic analysis. ([Medium][6])
---
## 20) Final notes & recommended audit cadence
* Run a full audit (code + solidity + fuzzing) before mainnet launch.
* Keep continuous scanning (gosec/govulncheck) in CI on every PR.
* Schedule quarterly security re-checks + immediate re-audit for any major dependency or logic change.
---

View File

@@ -1,88 +0,0 @@
# Check Proper Implementations
We need to ensure a database exists and that we can connect to it. All exchange data (swaps and liquidity) must be logged to dedicated log files, and we must verify that every relevant field (router/factory, token0, token1, amounts, fee, etc.) is captured.
## Database Implementation
The market manager includes a comprehensive database adapter that handles persistence of market data:
### Database Schema
- **markets**: Stores core market information (factory_address, pool_address, token0/1 addresses, fee, ticker, protocol)
- **market_data**: Stores price and liquidity data with versioning support (sequencer vs on-chain)
- **market_events**: Stores parsed events (swaps, liquidity changes) with full details
- **arbitrage_opportunities**: Stores detected arbitrage opportunities for analysis
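A hypothetical DDL sketch of the first two tables, to make the shape concrete — the real schema lives in the database adapter and column names may differ:

```sql
CREATE TABLE IF NOT EXISTS markets (
    id              BIGSERIAL PRIMARY KEY,
    factory_address TEXT NOT NULL,
    pool_address    TEXT NOT NULL UNIQUE,
    token0_address  TEXT NOT NULL,
    token1_address  TEXT NOT NULL,
    fee             INTEGER NOT NULL,
    ticker          TEXT,
    protocol        TEXT NOT NULL,
    created_at      TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE TABLE IF NOT EXISTS market_data (
    id          BIGSERIAL PRIMARY KEY,
    market_id   BIGINT NOT NULL REFERENCES markets(id),
    source      TEXT NOT NULL CHECK (source IN ('sequencer', 'onchain')),
    price       NUMERIC NOT NULL,
    liquidity   NUMERIC NOT NULL,
    observed_at TIMESTAMPTZ NOT NULL
);

CREATE INDEX IF NOT EXISTS idx_market_data_market_time
    ON market_data (market_id, observed_at DESC);
```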
### Database Features
- PostgreSQL schema with proper indexing for performance
- Foreign key constraints for data integrity
- JSON serialization for complex data structures
- Batch operations for efficiency
- Connection pooling support
### Database Adapter Functions
- `NewDatabaseAdapter()`: Creates and tests database connection
- `InitializeSchema()`: Creates tables and indexes if they don't exist
- `SaveMarket()`: Persists market information
- `SaveMarketData()`: Stores price/liquidity data with source tracking
- `SaveArbitrageOpportunity()`: Records detected opportunities
- `GetMarket()`: Retrieves market by key
- `GetLatestMarketData()`: Gets most recent market data
## Logging Implementation
The logging system uses a multi-file approach with separation of concerns:
### Specialized Log Files
- **Main log**: General application logging
- **Opportunities log**: MEV opportunities and arbitrage attempts
- **Errors log**: Errors and warnings only
- **Performance log**: Performance metrics and RPC calls
- **Transactions log**: Detailed transaction analysis
### Logging Functions
- `Opportunity()`: Logs arbitrage opportunities with full details
- `Performance()`: Records performance metrics for optimization
- `Transaction()`: Logs detailed transaction information
- `BlockProcessing()`: Records block processing metrics
- `ArbitrageAnalysis()`: Logs arbitrage opportunity analysis
- `RPC()`: Records RPC call metrics for endpoint optimization
### Exchange Data Logging
All exchange data is logged with complete information:
- Router/factory addresses
- Token0 and Token1 addresses
- Swap amounts (both tokens)
- Pool fees
- Transaction hashes
- Block numbers
- Timestamps
## Data Collection Verification
### Market Data
- Markets stored with full identification (factory, pool, tokens)
- Price and liquidity data with timestamp tracking
- Status tracking (possible, confirmed, stale, invalid)
- Protocol information (UniswapV2, UniswapV3, etc.)
### Event Data
- Swap events with complete amount information
- Liquidity events (add/remove) with token amounts
- Transaction hashes and block numbers for verification
- Event types for filtering and analysis
### Arbitrage Data
- Path information for multi-hop opportunities
- Profit calculations with gas cost estimation
- ROI percentages for opportunity ranking
- Status tracking (detected, executed, failed)
## Implementation Status
✅ Database connection established and tested
✅ Database schema implemented with all required tables
✅ Market data persistence with versioning support
✅ Event data logging with full exchange details
✅ Specialized logging for different data types
✅ All required exchange data fields captured
✅ Proper error handling and connection management

View File

@@ -1,43 +0,0 @@
# Debug Issue
Debug the following MEV bot issue: $ARGUMENTS
## Debugging Protocol:
1. **Issue Understanding**: Analyze the problem description and expected vs actual behavior
2. **Log Analysis**: Examine relevant log files in `logs/` directory
3. **Code Investigation**: Review related source code and recent changes
4. **Reproduction**: Attempt to reproduce the issue in a controlled environment
5. **Root Cause**: Identify the underlying cause and contributing factors
## Debugging Commands:
```bash
# Check logs
tail -f logs/mev-bot.log
# Run with debug logging
LOG_LEVEL=debug ./mev-bot start
# Check system resources
htop
iostat -x 1 5
# Network connectivity
nc -zv arbitrum-mainnet.core.chainstack.com 443
# Go runtime stats
curl http://localhost:9090/debug/vars | jq
```
## Investigation Areas:
- **Connection Issues**: RPC endpoint connectivity and WebSocket stability
- **Parsing Errors**: Transaction and event parsing failures
- **Performance**: High CPU/memory usage or processing delays
- **Concurrency**: Race conditions or deadlocks
- **Configuration**: Environment variables and configuration issues
## Output Requirements:
- Clear problem identification
- Step-by-step reproduction instructions
- Root cause analysis
- Proposed solution with implementation steps
- Test plan to verify the fix

View File

@@ -1,34 +0,0 @@
# Implement Mathematical Algorithm
Implement the following mathematical algorithm for the MEV bot: $ARGUMENTS
## Implementation Framework:
1. **Requirements Analysis**: Break down the mathematical requirements and precision needs
2. **Formula Implementation**: Convert mathematical formulas to precise Go code
3. **Precision Handling**: Use appropriate data types (uint256, big.Int) for calculations
4. **Edge Case Handling**: Consider boundary conditions and error scenarios
5. **Testing**: Create comprehensive tests including property-based tests
6. **Optimization**: Optimize for performance while maintaining precision
## Implementation Standards:
- **Numerical Precision**: Use github.com/holiman/uint256 for precise uint256 arithmetic
- **Error Handling**: Implement robust error handling with clear error messages
- **Documentation**: Document all mathematical formulas and implementation decisions
- **Testing**: Achieve >95% test coverage with property-based tests for mathematical functions
- **Performance**: Consider performance implications and benchmark critical paths
## File Organization:
- **Core Logic**: Place in appropriate `pkg/uniswap/` or `pkg/math/` subdirectory
- **Tests**: Co-locate with source files (`*_test.go`)
- **Documentation**: Inline comments explaining mathematical formulas
## Integration Points:
- **Uniswap Pricing**: Integrate with `pkg/uniswap/` for pricing calculations
- **Market Analysis**: Connect to `pkg/market/` for market data processing
- **Precision Libraries**: Use `github.com/holiman/uint256` for uint256 arithmetic
## Deliverables:
- Working implementation with comprehensive tests
- Documentation of mathematical formulas and implementation approach
- Performance benchmarks for critical functions
- Edge case handling and error scenarios

View File

@@ -1,38 +0,0 @@
# Implement Feature
Implement the following feature for the MEV bot: $ARGUMENTS
## Implementation Framework:
1. **Requirements Analysis**: Break down the feature requirements and acceptance criteria
2. **Architecture Design**: Design the solution following existing patterns
3. **Interface Design**: Define clean interfaces between components
4. **Implementation**: Write the code following Go best practices
5. **Testing**: Create comprehensive unit and integration tests
6. **Documentation**: Update relevant documentation and examples
## Implementation Standards:
- **Code Quality**: Follow Go conventions and project coding standards
- **Error Handling**: Implement robust error handling with context
- **Logging**: Add appropriate logging with structured fields
- **Testing**: Achieve >90% test coverage
- **Performance**: Consider performance implications and add metrics
- **Security**: Validate all inputs and handle edge cases
## File Organization:
- **Core Logic**: Place in appropriate `pkg/` subdirectory
- **Configuration**: Add to `internal/config/` if needed
- **Tests**: Co-locate with source files (`*_test.go`)
- **Documentation**: Update `docs/` and inline comments
## Integration Points:
- **Event System**: Integrate with `pkg/events/` for transaction processing
- **Market Pipeline**: Connect to `pkg/market/pipeline.go` for processing
- **Monitoring**: Add metrics and health checks
- **Configuration**: Add necessary environment variables
## Deliverables:
- Working implementation with tests
- Updated documentation
- Configuration updates
- Performance benchmarks if applicable
- Migration guide for existing deployments

View File

@@ -1,65 +0,0 @@
# Optimize Mathematical Performance
Optimize the performance of the following mathematical function in the MEV bot: $ARGUMENTS
## Performance Optimization Strategy:
### 1. **Profiling and Measurement**
```bash
# CPU profiling for mathematical functions
go tool pprof http://localhost:9090/debug/pprof/profile?seconds=30
# Memory profiling for mathematical calculations
go tool pprof http://localhost:9090/debug/pprof/heap
# Benchmark testing for mathematical functions
go test -bench=. -benchmem ./pkg/uniswap/...
```
### 2. **Optimization Areas**
#### **Precision Handling Optimization**
- Uint256 arithmetic optimization
- Object pooling for frequent calculations
- Minimize memory allocations in hot paths
- Efficient conversion between data types
#### **Algorithm Optimization**
- Mathematical formula simplification
- Lookup table implementation for repeated calculations
- Caching strategies for expensive computations
- Parallel processing opportunities
#### **Memory Optimization**
- Pre-allocation of slices and buffers
- Object pooling for mathematical objects
- Minimize garbage collection pressure
- Efficient data structure selection
### 3. **MEV Bot Specific Optimizations**
#### **Uniswap V3 Pricing Functions**
- sqrtPriceX96 to price conversion optimization
- Tick calculation performance improvements
- Liquidity-based calculation efficiency
- Price impact computation optimization
#### **Arbitrage Calculations**
- Profit calculation optimization
- Cross-pool comparison performance
- Gas estimation accuracy and speed
- Multi-hop arbitrage efficiency
## Implementation Guidelines:
- Measure before optimizing (baseline metrics)
- Focus on bottlenecks identified through profiling
- Maintain mathematical precision while improving performance
- Add performance tests for regressions
- Document optimization strategies and results
## Deliverables:
- Performance benchmark results (before/after)
- Optimized code with maintained precision
- Performance monitoring enhancements
- Optimization documentation
- Regression test suite

View File

@@ -1,84 +0,0 @@
# Optimize Performance
Optimize the performance of the MEV bot in the following area: $ARGUMENTS
## Performance Optimization Strategy:
### 1. **Profiling and Measurement**
```bash
# CPU profiling
go tool pprof http://localhost:9090/debug/pprof/profile?seconds=30
# Memory profiling
go tool pprof http://localhost:9090/debug/pprof/heap
# Trace analysis
go tool trace trace.out
# Benchmark testing
go test -bench=. -benchmem ./...
```
### 2. **Optimization Areas**
#### **Concurrency Optimization**
- Worker pool sizing and configuration
- Channel buffer optimization
- Goroutine pooling and reuse
- Lock contention reduction
- Context cancellation patterns
#### **Memory Optimization**
- Object pooling for frequent allocations
- Buffer reuse patterns
- Garbage collection tuning
- Memory leak prevention
- Slice and map pre-allocation
#### **I/O Optimization**
- Connection pooling for RPC calls
- Request batching and pipelining
- Caching frequently accessed data
- Asynchronous processing patterns
- Rate limiting optimization
#### **Algorithm Optimization**
- Uniswap math calculation efficiency
- Event parsing performance
- Data structure selection
- Caching strategies
- Indexing improvements
### 3. **MEV Bot Specific Optimizations**
#### **Transaction Processing Pipeline**
- Parallel transaction processing
- Event filtering optimization
- Batch processing strategies
- Pipeline stage optimization
#### **Market Analysis**
- Price calculation caching
- Pool data caching
- Historical data indexing
- Real-time processing optimization
#### **Arbitrage Detection**
- Opportunity scanning efficiency
- Profit calculation optimization
- Market impact analysis speed
- Cross-DEX comparison performance
## Implementation Guidelines:
- Measure before optimizing (baseline metrics)
- Focus on bottlenecks identified through profiling
- Maintain code readability and maintainability
- Add performance tests for regressions
- Document performance characteristics
## Deliverables:
- Performance benchmark results (before/after)
- Optimized code with maintained functionality
- Performance monitoring enhancements
- Optimization documentation
- Regression test suite

View File

@@ -1,72 +0,0 @@
# Security Audit
Perform a comprehensive security audit of the MEV bot focusing on: $ARGUMENTS
## Security Audit Checklist:
### 1. **Code Security Analysis**
```bash
# Static security analysis
gosec ./...
# Dependency vulnerabilities
go list -json -m all | nancy sleuth
# Secret scanning
git-secrets --scan
```
### 2. **Input Validation Review**
- Transaction data parsing validation
- RPC response validation
- Configuration parameter validation
- Mathematical overflow/underflow checks
- Buffer overflow prevention
### 3. **Cryptographic Security**
- Private key handling and storage
- Signature verification processes
- Random number generation
- Hash function usage
- Encryption at rest and in transit
### 4. **Network Security**
- RPC endpoint authentication
- TLS/SSL configuration
- Rate limiting implementation
- DDoS protection mechanisms
- WebSocket connection security
### 5. **Runtime Security**
- Memory safety in Go code
- Goroutine safety and race conditions
- Resource exhaustion protection
- Error information disclosure
- Logging security (no sensitive data)
## Specific MEV Bot Security Areas:
### **Transaction Processing**
- Validate all transaction inputs
- Prevent transaction replay attacks
- Secure handling of swap calculations
- Protection against malicious contract calls
### **Market Data Integrity**
- Price feed validation
- Oracle manipulation detection
- Historical data integrity
- Real-time data verification
### **Financial Security**
- Gas estimation accuracy
- Slippage protection
- Minimum profit validation
- MEV protection mechanisms
## Output Requirements:
- Detailed security findings report
- Risk assessment (Critical/High/Medium/Low)
- Remediation recommendations
- Implementation timeline for fixes
- Security testing procedures

View File

@@ -1,54 +0,0 @@
# Verify Mathematical Precision
Verify the precision and correctness of the following mathematical implementation in the MEV bot: $ARGUMENTS
## Verification Protocol:
### 1. **Mathematical Correctness Analysis**
- Review mathematical formulas against official specifications
- Validate implementation against known test cases
- Check boundary conditions and edge cases
- Verify precision handling for large numbers
### 2. **Property-Based Testing**
```bash
# Run property-based tests for mathematical functions
go test -v -run=Property ./pkg/uniswap/...
# Run fuzz tests for mathematical calculations (-fuzz accepts one package)
go test -fuzz=Fuzz ./pkg/uniswap/
```
### 3. **Precision Validation Areas**
#### **Uniswap V3 Calculations**
- sqrtPriceX96 to price conversion accuracy
- Tick calculation correctness
- Liquidity-based calculation precision
- Price impact computation validation
#### **Financial Calculations**
- Profit calculation accuracy
- Gas estimation precision
- Slippage protection validation
- Fee calculation correctness
### 4. **Comparison Testing**
- Compare results with reference implementations
- Validate against on-chain data when possible
- Cross-check with other DeFi protocol implementations
- Benchmark against established mathematical libraries
## Verification Steps:
1. **Static Analysis**: Review code for mathematical correctness
2. **Unit Testing**: Verify with known test cases
3. **Property Testing**: Test mathematical invariants
4. **Fuzz Testing**: Find edge cases with random inputs
5. **Comparison Testing**: Validate against reference implementations
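Step 3 (testing mathematical invariants) can be sketched as a randomized monotonicity check. The Uniswap V2 output formula is used here only as the function under test — swap in the bot's real pricing function; sample counts and reserves are arbitrary:

```go
package main

import (
	"fmt"
	"math/big"
	"math/rand"
)

// amountOut is the Uniswap V2 output formula (0.3% fee), integer math only.
func amountOut(in, rIn, rOut *big.Int) *big.Int {
	f := new(big.Int).Mul(in, big.NewInt(997))
	num := new(big.Int).Mul(f, rOut)
	den := new(big.Int).Mul(rIn, big.NewInt(1000))
	den.Add(den, f)
	return num.Div(num, den)
}

// checkMonotonic samples random input pairs and asserts the invariant that
// a larger input never yields a smaller output — the kind of property a
// property-based test should encode.
func checkMonotonic(trials int) bool {
	rng := rand.New(rand.NewSource(1)) // fixed seed for reproducibility
	rIn := big.NewInt(50_000_000)
	rOut := big.NewInt(100_000_000)
	for i := 0; i < trials; i++ {
		a := big.NewInt(rng.Int63n(1_000_000) + 1)
		b := new(big.Int).Add(a, big.NewInt(rng.Int63n(1_000_000)+1))
		if amountOut(a, rIn, rOut).Cmp(amountOut(b, rIn, rOut)) > 0 {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(checkMonotonic(10_000))
}
```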
## Output Requirements:
- Detailed correctness analysis report
- Precision validation results
- Edge case identification and handling
- Recommendations for improvements
- Test suite enhancements

View File

@@ -1,38 +0,0 @@
#!/bin/bash
# perf-test.sh - Run comprehensive performance tests for Claude
# pipefail makes `go test | tee` report go test's exit code, not tee's
set -o pipefail

echo "Running comprehensive performance tests for Claude..."

# Create results directory if it doesn't exist
mkdir -p .claude/results

FAILED=0

# Run unit tests
echo "Running unit tests..."
go test -v ./... | tee .claude/results/unit-tests.log || FAILED=1

# Run integration tests
echo "Running integration tests..."
go test -v ./test/integration/... | tee .claude/results/integration-tests.log || FAILED=1

# Run benchmarks
echo "Running benchmarks..."
go test -bench=. -benchmem ./... | tee .claude/results/benchmarks.log || FAILED=1

# Run benchmarks with CPU profiling
echo "Running benchmarks with CPU profiling..."
go test -bench=. -cpuprofile=.claude/results/cpu.prof ./... | tee .claude/results/cpu-bench.log || FAILED=1

# Run benchmarks with memory profiling
echo "Running benchmarks with memory profiling..."
go test -bench=. -memprofile=.claude/results/mem.prof ./... | tee .claude/results/mem-bench.log || FAILED=1

# Report the aggregate result (a bare `$?` check would only see the last command)
if [ "$FAILED" -eq 0 ]; then
echo "All performance tests completed successfully!"
echo "Results saved to .claude/results/"
else
echo "Some performance tests failed!"
echo "Check .claude/results/ for details"
exit 1
fi

View File

@@ -1,42 +0,0 @@
**/*_test.go
test/
# Binaries
bin/
# Configuration files that might contain sensitive information
config/local.yaml
config/secrets.yaml
# Go workspace
go.work
# Test coverage files
coverage.txt
coverage.html
# IDE files
.vscode/
.idea/
*.swp
*.swo
# OS generated files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# Log files
*.log
# Database files
*.db
# Data directory
data/
vendor/
backup/
backups/

View File

@@ -1,160 +0,0 @@
kind: pipeline
type: docker
name: test-suite
trigger:
event:
- push
- pull_request
workspace:
path: /drone/src
steps:
- name: setup-go-cache
image: golang:1.24
environment:
GOCACHE: /drone/src/.gocache
commands:
- go env -w GOCACHE=$GOCACHE
- go mod download
- go mod verify
- name: lint
image: golangci/golangci-lint:1.55.2
environment:
GOFLAGS: -buildvcs=false
commands:
- golangci-lint run --timeout=10m
- name: unit-tests
image: golang:1.24
environment:
GOCACHE: /drone/src/.gocache
GOFLAGS: -buildvcs=false
commands:
- go test -race -coverprofile=coverage.out ./...
- name: build-binary
image: golang:1.24
environment:
GOFLAGS: -buildvcs=false
commands:
- go build -o bin/mev-bot ./cmd/mev-bot
- name: smoke-start
image: golang:1.24
environment:
GOFLAGS: -buildvcs=false
MEV_BOT_ENCRYPTION_KEY: test_key_32_chars_minimum_length
commands:
- timeout 5s ./bin/mev-bot start || true
- name: math-audit
image: golang:1.24
environment:
GOCACHE: /drone/src/.gocache
GOFLAGS: -buildvcs=false
commands:
- go run ./tools/math-audit --vectors default --report reports/math/latest
- test -s reports/math/latest/report.json
- test -s reports/math/latest/report.md
- name: simulate-profit
image: golang:1.24
environment:
GOCACHE: /drone/src/.gocache
GOFLAGS: -buildvcs=false
commands:
- ./scripts/run_profit_simulation.sh
- name: docker-build
image: plugins/docker:20
settings:
repo: mev-bot/local
tags:
- latest
dry_run: true
---
kind: pipeline
type: docker
name: security-suite
trigger:
event:
- push
- pull_request
branch:
include:
- main
- develop
- audit
workspace:
path: /drone/src
steps:
- name: setup-go
image: golang:1.24
environment:
GOCACHE: /drone/src/.gocache
commands:
- go env -w GOCACHE=$GOCACHE
- go mod download
- name: gosec
image: securego/gosec:2.18.1
commands:
- gosec -fmt sarif -out gosec-results.sarif ./...
- name: govulncheck
image: golang:1.24
commands:
- go install golang.org/x/vuln/cmd/govulncheck@latest
- govulncheck ./...
- name: dependency-scan
image: golang:1.24
commands:
- go install github.com/sonatypecommunity/nancy@latest
- go list -json -m all | nancy sleuth --exclude-vulnerability-file .nancy-ignore
- name: fuzz-security
image: golang:1.24
environment:
GOFLAGS: -buildvcs=false
commands:
- mkdir -p logs keystore test_keystore benchmark_keystore test_concurrent_keystore
- go test -v -race ./pkg/security/
- go test -fuzz=FuzzRPCResponseParser -fuzztime=30s ./pkg/security/
- go test -fuzz=FuzzKeyValidation -fuzztime=30s ./pkg/security/
- go test -fuzz=FuzzInputValidator -fuzztime=30s ./pkg/security/
- name: parser-sanity
image: golang:1.24
commands:
  - go run ./cmd/mev-bot || true
---
kind: pipeline
type: docker
name: integration-opt-in
trigger:
event:
- custom
action:
- integration
workspace:
path: /drone/src
steps:
- name: run-integration
image: golang:1.24
environment:
GOCACHE: /drone/src/.gocache
GOFLAGS: -buildvcs=false
commands:
- go test -tags=integration ./...


@@ -1,47 +0,0 @@
# MEV Bot Environment Configuration - Fixed Version
# ARBITRUM NETWORK CONFIGURATION
# HTTP endpoint for transaction execution (reliable)
ARBITRUM_RPC_ENDPOINT=https://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57
# WebSocket endpoint for real-time event monitoring
ARBITRUM_WS_ENDPOINT=wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57
# Rate limiting for RPC calls (reduced to avoid limits)
# CRITICAL FIX: Reduced from 5/3 to prevent rate limit errors
RPC_REQUESTS_PER_SECOND=2
RPC_MAX_CONCURRENT=1
# BOT CONFIGURATION
BOT_MAX_WORKERS=3
BOT_CHANNEL_BUFFER_SIZE=100
# ETHEREUM ACCOUNT CONFIGURATION
# IMPORTANT: Replace with your actual private key for production use
ETHEREUM_PRIVATE_KEY=your_actual_private_key_here
ETHEREUM_ACCOUNT_ADDRESS=0xYOUR_ETHEREUM_ACCOUNT_ADDRESS_HERE
ETHEREUM_GAS_PRICE_MULTIPLIER=1.2
# CONTRACT ADDRESSES
# IMPORTANT: Replace with your actual deployed contract addresses
CONTRACT_ARBITRAGE_EXECUTOR=0x1234567890123456789012345678901234567890
CONTRACT_FLASH_SWAPPER=0x1234567890123456789012345678901234567890
# SECURITY CONFIGURATION
# Generate with: openssl rand -base64 32
MEV_BOT_ENCRYPTION_KEY=K3GjJ8NnF6VbW2QxR9TzY4HcA7LmP5SvE1UjI8OwK0M=
MEV_BOT_KEYSTORE_PATH=keystore
MEV_BOT_AUDIT_LOG=logs/audit.log
MEV_BOT_BACKUP_PATH=backups
# LOGGING AND MONITORING
LOG_LEVEL=info
LOG_FORMAT=text
METRICS_ENABLED=true
METRICS_PORT=9090
# DEVELOPMENT/TESTING
GO_ENV=production
DEBUG=false
ARBISCAN_API_KEY=H8PEIY79385F4UKYU7MRV5IAT1BI1WYIVY


@@ -1,24 +0,0 @@
# MEV Bot Smart Contract Deployment Configuration
# ⚠️ NEVER commit this file to git!
# Environment mode
GO_ENV="production"
# Deployer wallet private key
# NOTE: Using a test key for demonstration - REPLACE WITH YOUR ACTUAL KEY
DEPLOYER_PRIVATE_KEY="0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"
# Arbitrum RPC endpoint
ARBITRUM_RPC_ENDPOINT="https://arb1.arbitrum.io/rpc"
# Arbiscan API key for verification (optional)
ARBISCAN_API_KEY=""
# Enable verification
VERIFY="false"
# Target network
NETWORK="arbitrum"
# Contract source directory
CONTRACTS_DIR="/home/administrator/projects/Mev-Alpha"


@@ -1,84 +0,0 @@
# MEV Bot Smart Contract Deployment Configuration
# Copy this file to .env.deployment and fill in your values
# ⚠️ NEVER commit .env.deployment to git!
# =============================================================================
# DEPLOYER WALLET (REQUIRED)
# =============================================================================
# Your deployer wallet private key (starts with 0x)
# ⚠️ Use a dedicated deployment wallet, not your main wallet!
DEPLOYER_PRIVATE_KEY="0x..."
# Alternative: Use PRIVATE_KEY if you prefer
# PRIVATE_KEY="0x..."
# =============================================================================
# RPC ENDPOINTS
# =============================================================================
# Arbitrum Mainnet RPC
ARBITRUM_RPC_ENDPOINT="https://arb1.arbitrum.io/rpc"
# Alternative premium RPC providers (recommended for reliability):
# Alchemy: https://arb-mainnet.g.alchemy.com/v2/YOUR_API_KEY
# Chainstack: https://arbitrum-mainnet.core.chainstack.com/YOUR_API_KEY
# Infura: https://arbitrum-mainnet.infura.io/v3/YOUR_API_KEY
# Arbitrum Goerli Testnet RPC (for testing)
# ARBITRUM_RPC_ENDPOINT="https://goerli-rollup.arbitrum.io/rpc"
# =============================================================================
# CONTRACT VERIFICATION (OPTIONAL)
# =============================================================================
# Arbiscan API key for contract verification
# Get your key from: https://arbiscan.io/myapikey
ARBISCAN_API_KEY=""
# Enable automatic verification during deployment
# Set to "true" to enable, "false" to disable
VERIFY="false"
# =============================================================================
# DEPLOYMENT SETTINGS
# =============================================================================
# Target network
# Options: "arbitrum" (mainnet), "arbitrum-goerli" (testnet)
NETWORK="arbitrum"
# Gas price (in gwei) - leave empty for automatic estimation
# GAS_PRICE=""
# Gas limit - leave empty for automatic estimation
# GAS_LIMIT=""
# =============================================================================
# DEPLOYMENT CHECKLIST
# =============================================================================
#
# Before deploying:
# 1. ✓ Copy this file to .env.deployment
# 2. ✓ Fill in DEPLOYER_PRIVATE_KEY
# 3. ✓ Fill in ARBITRUM_RPC_ENDPOINT (or use default)
# 4. ✓ (Optional) Fill in ARBISCAN_API_KEY for verification
# 5. ✓ Ensure deployer wallet has sufficient ETH (~0.01 ETH)
# 6. ✓ Test on testnet first (NETWORK="arbitrum-goerli")
# 7. ✓ Review contracts in contracts/ directory
# 8. ✓ Run: source .env.deployment
# 9. ✓ Run: ./scripts/deploy-contracts.sh
#
# =============================================================================
# =============================================================================
# SECURITY WARNINGS
# =============================================================================
#
# ⚠️ NEVER commit .env.deployment to version control!
# ⚠️ NEVER share your private key with anyone!
# ⚠️ Use a dedicated deployment wallet with minimal funds!
# ⚠️ Test on testnet before deploying to mainnet!
# ⚠️ Backup your deployment logs and addresses!
#
# =============================================================================


@@ -1,115 +0,0 @@
# MEV Bot Environment Configuration Template
# Copy this file to .env and fill in your actual values
# SECURITY WARNING: Never commit .env files with actual credentials to version control
# ============================================================
# ARBITRUM NETWORK CONFIGURATION
# ============================================================
# HTTP endpoint for transaction execution (reliable)
# Get your own endpoint from: https://chainstack.com or https://alchemy.com
ARBITRUM_RPC_ENDPOINT=https://arbitrum-mainnet.infura.io/v3/YOUR_PROJECT_ID
# WebSocket endpoint for real-time event monitoring
ARBITRUM_WS_ENDPOINT=wss://arbitrum-mainnet.infura.io/ws/v3/YOUR_PROJECT_ID
# ============================================================
# RPC RATE LIMITING
# ============================================================
# Requests per second to avoid provider rate limits
# Adjust based on your provider's tier (free tier: 1-2, paid: 10-50)
RPC_REQUESTS_PER_SECOND=2
# Maximum concurrent RPC connections
# Lower values reduce rate limit errors but slow down processing
RPC_MAX_CONCURRENT=1
# ============================================================
# BOT PERFORMANCE CONFIGURATION
# ============================================================
# Number of worker goroutines for opportunity processing
BOT_MAX_WORKERS=3
# Buffer size for opportunity channel
BOT_CHANNEL_BUFFER_SIZE=100
# ============================================================
# ETHEREUM ACCOUNT CONFIGURATION
# ============================================================
# CRITICAL: Replace with your actual private key (without 0x prefix)
# Generate with: cast wallet new (foundry) or eth-keygen
ETHEREUM_PRIVATE_KEY=0000000000000000000000000000000000000000000000000000000000000000
# Your Ethereum account address (checksum format)
ETHEREUM_ACCOUNT_ADDRESS=0x0000000000000000000000000000000000000000
# Gas price multiplier for competitive transaction submission (1.0 = no increase)
ETHEREUM_GAS_PRICE_MULTIPLIER=1.2
# ============================================================
# CONTRACT ADDRESSES
# ============================================================
# Deploy these contracts first, then update addresses here
# See: docs/deployment/contract-deployment.md
CONTRACT_ARBITRAGE_EXECUTOR=0x0000000000000000000000000000000000000000
CONTRACT_FLASH_SWAPPER=0x0000000000000000000000000000000000000000
# ============================================================
# SECURITY CONFIGURATION
# ============================================================
# Encryption key for keystore (MUST be 32+ characters)
# Generate with: openssl rand -base64 32
# CRITICAL: Keep this secret! Losing it means losing access to keys
MEV_BOT_ENCRYPTION_KEY=REPLACE_WITH_32_CHARACTER_MINIMUM_RANDOM_STRING_FROM_OPENSSL
# Keystore directory for encrypted private keys
MEV_BOT_KEYSTORE_PATH=keystore
# Audit log path for security events
MEV_BOT_AUDIT_LOG=logs/audit.log
# Backup directory for key backups
MEV_BOT_BACKUP_PATH=backups
# ============================================================
# LOGGING AND MONITORING
# ============================================================
# Log level: debug, info, warn, error
LOG_LEVEL=info
# Log format: text, json
LOG_FORMAT=text
# Enable Prometheus metrics endpoint
METRICS_ENABLED=true
# Metrics server port
METRICS_PORT=9090
# ============================================================
# ENVIRONMENT MODE
# ============================================================
# Environment: development, staging, production
# Controls which config file is loaded (config/local.yaml, config/staging.yaml, config/arbitrum_production.yaml)
GO_ENV=development
# Debug mode (verbose logging)
DEBUG=false
# ============================================================
# BLOCKCHAIN EXPLORER API KEYS (OPTIONAL)
# ============================================================
# Arbiscan API key for contract verification and transaction tracking
# Get free key from: https://arbiscan.io/apis
ARBISCAN_API_KEY=YOUR_ARBISCAN_API_KEY_HERE
# ============================================================
# ADVANCED CONFIGURATION (OPTIONAL)
# ============================================================
# Allow localhost RPC endpoints (security: only enable for development)
MEV_BOT_ALLOW_LOCALHOST=false
# Dashboard server port
DASHBOARD_PORT=8080
# Security webhook URL for alerts (Slack, Discord, etc.)
SECURITY_WEBHOOK_URL=https://hooks.slack.com/services/YOUR/WEBHOOK/URL
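The `MEV_BOT_ENCRYPTION_KEY` comment above prescribes `openssl rand -base64 32`. A small sketch that generates the key, enforces the 32-character minimum, and appends it to `.env` with restrictive permissions (the `.env` path and the `/dev/urandom` fallback are assumptions):

```shell
# Generate a keystore encryption key and sanity-check it before writing
# it into .env; falls back to /dev/urandom if openssl is unavailable.
KEY="$(openssl rand -base64 32 2>/dev/null || head -c 32 /dev/urandom | base64)"
if [ "${#KEY}" -lt 32 ]; then
  echo "generated key too short" >&2
  exit 1
fi
printf 'MEV_BOT_ENCRYPTION_KEY=%s\n' "$KEY" >> .env
chmod 600 .env
echo "key written"
```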


@@ -1,42 +0,0 @@
# MEV Bot Environment Configuration - Fixed Version
# ARBITRUM NETWORK CONFIGURATION
ARBITRUM_RPC_ENDPOINT=wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57
ARBITRUM_WS_ENDPOINT=wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57
# Rate limiting for RPC calls (reduced to avoid limits)
RPC_REQUESTS_PER_SECOND=5
RPC_MAX_CONCURRENT=3
# BOT CONFIGURATION
BOT_MAX_WORKERS=3
BOT_CHANNEL_BUFFER_SIZE=100
# ETHEREUM ACCOUNT CONFIGURATION
# IMPORTANT: Replace with your actual private key for production use
ETHEREUM_PRIVATE_KEY=your_actual_private_key_here
ETHEREUM_ACCOUNT_ADDRESS=0xYOUR_ETHEREUM_ACCOUNT_ADDRESS_HERE
ETHEREUM_GAS_PRICE_MULTIPLIER=1.2
# CONTRACT ADDRESSES
# IMPORTANT: Replace with your actual deployed contract addresses
CONTRACT_ARBITRAGE_EXECUTOR=0x1234567890123456789012345678901234567890
CONTRACT_FLASH_SWAPPER=0x1234567890123456789012345678901234567890
# SECURITY CONFIGURATION
# Generate with: openssl rand -base64 32
MEV_BOT_ENCRYPTION_KEY=K3GjJ8NnF6VbW2QxR9TzY4HcA7LmP5SvE1UjI8OwK0M=
# LOGGING AND MONITORING
LOG_LEVEL=info
LOG_FORMAT=text
METRICS_ENABLED=true
METRICS_PORT=9090
# DEVELOPMENT/TESTING
GO_ENV=production
DEBUG=false
ARBISCAN_API_KEY=H8PEIY79385F4UKYU7MRV5IAT1BI1WYIVY


@@ -1,10 +0,0 @@
# MEV Bot Production Environment
MEV_BOT_ENCRYPTION_KEY="bc10d845ff456ed03c03cda81835436435051c476836c647687a49999439cdc1"
CONTRACT_ARBITRAGE_EXECUTOR="0x6C2B1c6Eb0e5aB73d8C60944c74A62bfE629c418"
CONTRACT_FLASH_SWAPPER="0x7Cc97259cBe0D02Cd0b8A80c2E1f79C7265808b4"
CONTRACT_DATA_FETCHER="0xC6BD82306943c0F3104296a46113ca0863723cBD"
ARBITRUM_RPC_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57"
ARBITRUM_WS_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57"
METRICS_ENABLED="true"
METRICS_PORT="9090"
LOG_LEVEL="debug"


@@ -1,62 +0,0 @@
# 🚀 MEV BOT PRODUCTION CONFIGURATION - IMMEDIATE PROFIT MODE
# This is your LIVE TRADING configuration for immediate deployment
# =============================================================================
# 🔥 CRITICAL PRODUCTION SETTINGS - PROFIT OPTIMIZATION
# =============================================================================
# High-performance RPC endpoint
ARBITRUM_RPC_ENDPOINT=wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57
ARBITRUM_WS_ENDPOINT=wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57
# Aggressive rate limits for maximum throughput
RPC_REQUESTS_PER_SECOND=250
RPC_MAX_CONCURRENT=20
BOT_MAX_WORKERS=8
BOT_CHANNEL_BUFFER_SIZE=5000
# 🔐 PRODUCTION SECURITY
MEV_BOT_ENCRYPTION_KEY="i4qwh5vqUxehOdFsdZx0vFvDwKUHcVpGWC0K2BVQn6A="
# 💰 PROFIT MAXIMIZATION SETTINGS
ARBITRAGE_MIN_PROFIT_THRESHOLD=0.001 # 0.1% minimum profit (aggressive)
GAS_PRICE_MULTIPLIER=1.8 # Competitive gas pricing
MAX_SLIPPAGE_TOLERANCE=0.005 # 0.5% max slippage
POSITION_SIZE_ETH=0.1 # Start with 0.1 ETH positions
# 📊 MONITORING & ALERTS
METRICS_ENABLED=true
METRICS_PORT=9090
HEALTH_PORT=8080
LOG_LEVEL=info
LOG_FORMAT=json
# 🏭 PRODUCTION ENVIRONMENT
GO_ENV=production
DEBUG=false
# 💾 STORAGE PATHS
MEV_BOT_KEYSTORE_PATH=keystore/production
MEV_BOT_AUDIT_LOG=logs/production_audit.log
MEV_BOT_BACKUP_PATH=backups/production
# ⚡ PERFORMANCE TUNING
GOMAXPROCS=4
GOGC=100
# 🎯 TARGET EXCHANGES FOR ARBITRAGE
ENABLE_UNISWAP_V2=true
ENABLE_UNISWAP_V3=true
ENABLE_SUSHISWAP=true
ENABLE_BALANCER=true
ENABLE_CURVE=true
# 🔥 DEPLOYED CONTRACTS (PRODUCTION READY)
CONTRACT_ARBITRAGE_EXECUTOR=0xec2a16d5f8ac850d08c4c7f67efd50051e7cfc0b
CONTRACT_FLASH_SWAPPER=0x5801ee5c2f6069e0f11cce7c0f27c2ef88e79a95
CONTRACT_UNISWAP_V2_FLASH_SWAPPER=0xc0b8c3e9a976ec67d182d7cb0283fb4496692593
ARBISCAN_API_KEY=H8PEIY79385F4UKYU7MRV5IAT1BI1WYIVY


@@ -1,181 +0,0 @@
# Production Environment Configuration for MEV Bot
# WARNING: This file contains sensitive information - NEVER commit to version control!
# =============================================================================
# MULTI-RPC ENDPOINT CONFIGURATION FOR PRODUCTION
# =============================================================================
# Reading endpoints (WSS preferred for real-time data monitoring)
# Format: Comma-separated list of WebSocket endpoints optimized for event monitoring
# The system will automatically prioritize by order (first = highest priority)
ARBITRUM_READING_ENDPOINTS="wss://arbitrum-mainnet.core.chainstack.com/YOUR_API_KEY_1,wss://arb-mainnet.g.alchemy.com/v2/YOUR_API_KEY_2,wss://arbitrum-mainnet.infura.io/ws/v3/YOUR_PROJECT_ID"
# Execution endpoints (HTTP/HTTPS preferred for transaction reliability)
# Format: Comma-separated list of RPC endpoints optimized for transaction submission
# The system will automatically handle failover and load balancing
ARBITRUM_EXECUTION_ENDPOINTS="https://arbitrum-mainnet.core.chainstack.com/YOUR_API_KEY_1,https://arb-mainnet.g.alchemy.com/v2/YOUR_API_KEY_2,https://arbitrum-mainnet.infura.io/v3/YOUR_PROJECT_ID"
# =============================================================================
# LEGACY CONFIGURATION (backward compatibility)
# =============================================================================
# Legacy single RPC endpoint (used if multi-endpoint config is not available)
ARBITRUM_RPC_ENDPOINT=wss://arbitrum-mainnet.core.chainstack.com/YOUR_API_KEY_HERE
# Legacy single WebSocket endpoint
ARBITRUM_WS_ENDPOINT=wss://arbitrum-mainnet.core.chainstack.com/YOUR_API_KEY_HERE
# Fallback RPC endpoints (used if reading/execution endpoints not specified)
ARBITRUM_FALLBACK_ENDPOINTS=https://arb1.arbitrum.io/rpc,https://arbitrum.llamarpc.com,https://arbitrum-one.publicnode.com,https://arbitrum-one.public.blastapi.io
# =============================================================================
# RATE LIMITING CONFIGURATION
# =============================================================================
# Global rate limiting settings (applied across all endpoints)
RPC_REQUESTS_PER_SECOND=200
RPC_MAX_CONCURRENT=20
# Per-endpoint rate limiting is automatically configured:
# - WebSocket endpoints: 300 RPS, 25 concurrent connections, 60s timeout
# - HTTP endpoints: 200 RPS, 20 concurrent connections, 30s timeout
# - Health checks: 30s interval for WSS, 60s for HTTP
# =============================================================================
# BOT CONFIGURATION FOR PRODUCTION
# =============================================================================
# Performance settings (higher than staging for production)
BOT_MAX_WORKERS=10
BOT_CHANNEL_BUFFER_SIZE=2000
# =============================================================================
# ETHEREUM ACCOUNT CONFIGURATION FOR PRODUCTION
# =============================================================================
# CRITICAL: Your production trading account private key (64 hex characters without 0x)
# Generate a new key specifically for production trading and fund it appropriately
# NEVER USE YOUR MAIN WALLET - USE A DEDICATED TRADING ACCOUNT
ETHEREUM_PRIVATE_KEY=your_64_character_production_private_key_here
# Account address (derived from private key)
ETHEREUM_ACCOUNT_ADDRESS=0xYOUR_PRODUCTION_ACCOUNT_ADDRESS_HERE
# Gas price multiplier for competitive transactions (higher than staging for faster execution)
ETHEREUM_GAS_PRICE_MULTIPLIER=2.0
# =============================================================================
# REAL DEPLOYED CONTRACT ADDRESSES ON ARBITRUM MAINNET FOR PRODUCTION
# =============================================================================
# PRODUCTION READY - ArbitrageExecutor contract (VERIFIED)
CONTRACT_ARBITRAGE_EXECUTOR=0xEC2A16d5F8Ac850D08C4C7F67EFD50051E7cFC0b
# PRODUCTION READY - UniswapV3FlashSwapper contract (VERIFIED)
CONTRACT_FLASH_SWAPPER=0x5801EE5C2f6069E0F11CcE7c0f27C2ef88e79a95
# Additional deployed contracts for production
CONTRACT_UNISWAP_V2_FLASH_SWAPPER=0xc0b8c3e9a976ec67d182d7cb0283fb4496692593
CONTRACT_DATA_FETCHER=0x3c2c9c86f081b9dac1f0bf97981cfbe96436b89d
# =============================================================================
# SECURITY CONFIGURATION FOR PRODUCTION
# =============================================================================
# Encryption key for secure storage (generate with: openssl rand -base64 32)
# KEEP THIS SECRET AND BACK IT UP SECURELY
MEV_BOT_ENCRYPTION_KEY="YOUR_32_CHARACTER_ENCRYPTION_KEY_HERE"
# =============================================================================
# DATABASE CONFIGURATION FOR PRODUCTION
# =============================================================================
# PostgreSQL configuration for production
POSTGRES_DB=mevbot_production
POSTGRES_USER=mevbot_production
POSTGRES_PASSWORD=your_secure_production_database_password
# =============================================================================
# MONITORING CONFIGURATION FOR PRODUCTION
# =============================================================================
# Metrics and logging for production
METRICS_ENABLED=true
METRICS_PORT=9090
HEALTH_PORT=8080
LOG_LEVEL=info
LOG_FORMAT=json
# Grafana credentials for production
GRAFANA_USER=admin
GRAFANA_PASSWORD=your_secure_production_grafana_password
# Prometheus port for production
PROMETHEUS_PORT=9091
GRAFANA_PORT=3000
# =============================================================================
# PRODUCTION SETTINGS
# =============================================================================
# Environment
GO_ENV=production
DEBUG=false
# Resource limits and timeouts for production
MAX_MEMORY=2G
MAX_CPU=4000m
# =============================================================================
# EXAMPLE PREMIUM RPC PROVIDERS FOR PRODUCTION
# =============================================================================
# Chainstack (Recommended for production)
# ARBITRUM_RPC_ENDPOINT=wss://arbitrum-mainnet.core.chainstack.com/YOUR_API_KEY
# Alchemy (Enterprise tier recommended for production)
# ARBITRUM_RPC_ENDPOINT=wss://arb-mainnet.g.alchemy.com/v2/YOUR_API_KEY
# Infura (Premium tier recommended for production)
# ARBITRUM_RPC_ENDPOINT=wss://arbitrum-mainnet.infura.io/ws/v3/YOUR_PROJECT_ID
# QuickNode (Business tier recommended for production)
# ARBITRUM_RPC_ENDPOINT=wss://YOUR_ENDPOINT.arbitrum-mainnet.quiknode.pro/YOUR_TOKEN/
# =============================================================================
# SECURITY BEST PRACTICES FOR PRODUCTION
# =============================================================================
# 1. Use a dedicated server/VPS for production deployment
# 2. Enable firewall and limit access to necessary ports only
# 3. Use premium RPC providers for better reliability and speed
# 4. Monitor all transactions and profits closely
# 5. Start with small position sizes to test everything works
# 6. Set up alerts for unusual activity or losses
# 7. Keep private keys encrypted and backed up securely
# 8. Use separate accounts for testing and production
# 9. Regularly update and patch the system
# 10. Monitor gas prices and adjust strategies accordingly
# =============================================================================
# PRODUCTION DEPLOYMENT CHECKLIST
# =============================================================================
# ☐ Set up dedicated server/VPS
# ☐ Configure firewall and security groups
# ☐ Install Docker and docker-compose
# ☐ Generate production private key and fund account
# ☐ Deploy smart contracts to Arbitrum mainnet
# ☐ Configure premium RPC provider
# ☐ Set up monitoring and alerting
# ☐ Test deployment with dry-run mode
# ☐ Start with small position sizes
# ☐ Monitor closely during first week
# ☐ Set up automated backups
# ☐ Configure log rotation
# ☐ Set up system monitoring (CPU, memory, disk)
# ☐ Set up profit tracking and reporting
# ☐ Set up emergency stop procedures
ARBISCAN_API_KEY=H8PEIY79385F4UKYU7MRV5IAT1BI1WYIVY


@@ -1,6 +0,0 @@
# Test environment configuration
ARBITRUM_RPC_ENDPOINT=https://arb1.arbitrum.io/rpc
ARBITRUM_WS_ENDPOINT=wss://arb1.arbitrum.io/ws
ARBISCAN_API_KEY=H8PEIY79385F4UKYU7MRV5IAT1BI1WYIVY


@@ -1,36 +0,0 @@
# Global configuration for reusable development tools
# This file can be sourced by scripts to share configuration
# Default configuration values
export TEST_LEVEL="${TEST_LEVEL:-basic}"
export COVERAGE="${COVERAGE:-false}"
export OUTPUT_DIR="${OUTPUT_DIR:-test-results}"
export PACKAGE_FILTER="${PACKAGE_FILTER:-./...}"
export TIMEOUT="${TIMEOUT:-10m}"
export JUNIT_OUTPUT="${JUNIT_OUTPUT:-false}"
# Build configuration
export BINARY_NAME="${BINARY_NAME:-$(basename "$(pwd)")}"
export BINARY_DIR="${BINARY_DIR:-bin}"
export MAIN_FILE="${MAIN_FILE:-.}"
export BUILD_TAGS="${BUILD_TAGS:-}"
export LDFLAGS="${LDFLAGS:-}"
export OUTPUT="${OUTPUT:-$BINARY_DIR/$BINARY_NAME}"
export GOOS="${GOOS:-$(go env GOOS)}"
export GOARCH="${GOARCH:-$(go env GOARCH)}"
# Profile configuration
export PROFILE_TYPES="${PROFILE_TYPES:-cpu,mem,block,mutex}"
export REPORT_DIR="${REPORT_DIR:-reports/performance}"
# Common directories
export LOGS_DIR="${LOGS_DIR:-logs}"
export REPORTS_DIR="${REPORTS_DIR:-reports}"
export STORAGE_DIR="${STORAGE_DIR:-storage}"
export CONFIG_DIR="${CONFIG_DIR:-config}"
# Create directories if they don't exist
mkdir -p "$LOGS_DIR" "$REPORTS_DIR" "$STORAGE_DIR" "$CONFIG_DIR" 2>/dev/null || true
ARBISCAN_API_KEY=H8PEIY79385F4UKYU7MRV5IAT1BI1WYIVY


@@ -1,289 +0,0 @@
# Gemini CLI Configuration
This directory contains Gemini configuration and tools for the MEV Bot project.
## 🚀 Quick Start Commands
### Essential Build & Test Commands
```bash
# Build the MEV bot binary
make build
# Run performance tests
.gemini/scripts/perf-test.sh
# Run optimization analysis
.gemini/scripts/optimize.sh
# Run benchmarks with profiling
make bench-profile
# Run concurrency tests
make test-concurrent
# Check for Go modules issues
go mod tidy && go mod verify
# Run linter with performance focus
golangci-lint run --fast
```
### Development Workflow Commands
```bash
# Setup development environment
export ARBITRUM_RPC_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57"
export ARBITRUM_WS_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57"
export METRICS_ENABLED="true"
# Run with timeout for testing
timeout 60 ./mev-bot start
# Debug with verbose logging
LOG_LEVEL=debug ./mev-bot start
# Profile performance with detailed analysis
.gemini/scripts/profile.sh
```
## Gemini Commands Directory
The `.gemini/commands/` directory contains predefined commands for performance optimization tasks:
- `optimize-performance.md` - Optimize application performance
- `analyze-bottlenecks.md` - Analyze performance bottlenecks
- `tune-concurrency.md` - Tune concurrency patterns
- `reduce-memory.md` - Reduce memory allocations
- `improve-latency.md` - Improve latency performance
## Gemini Settings
The `.gemini/settings.json` file contains Gemini's performance optimization configuration:
```json
{
"focus_areas": [
"Algorithmic Implementations",
"Performance Optimization",
"Concurrency Patterns",
"Memory Management"
],
"primary_skills": [
"Optimizing mathematical calculations for performance",
"Profiling and identifying bottlenecks in critical paths",
"Reducing memory allocations in hot code paths",
"Optimizing concurrency patterns for maximum throughput"
],
"performance_optimization": {
"enabled": true,
"profiling": {
"cpu": true,
"memory": true,
"goroutine": true,
"mutex": true
},
"optimization_targets": [
"Reduce latency to < 10 microseconds for critical path",
"Achieve > 100,000 messages/second throughput",
"Minimize memory allocations in hot paths",
"Optimize garbage collection tuning"
]
},
"concurrency": {
"worker_pools": true,
"pipeline_patterns": true,
"fan_in_out": true,
"backpressure_handling": true
},
"benchmarking": {
"baseline_comparison": true,
"regression_detection": true,
"continuous_monitoring": true
}
}
```
## 📋 Development Guidelines & Code Style
### Performance Optimization Guidelines
- **Profiling First**: Always profile before optimizing
- **Measure Impact**: Measure performance impact of changes
- **Maintain Readability**: Don't sacrifice code readability for marginal gains
- **Focus on Hot Paths**: Optimize critical code paths first
- **Test Regressions**: Ensure optimizations don't cause regressions
### Concurrency Patterns
- **Worker Pools**: Use worker pools for parallel processing
- **Pipeline Patterns**: Implement pipeline patterns for multi-stage processing
- **Fan-in/Fan-out**: Use fan-in/fan-out patterns for data distribution
- **Backpressure Handling**: Implement proper backpressure handling
### Memory Management
- **Object Pooling**: Use sync.Pool for frequently created objects
- **Pre-allocation**: Pre-allocate slices and maps when size is known
- **Minimize Allocations**: Reduce allocations in hot paths
- **GC Tuning**: Properly tune garbage collection
### Required Checks Before Commit
```bash
# Run performance tests
.gemini/scripts/perf-test.sh
# Run benchmark comparisons
.gemini/scripts/bench-compare.sh
# Check for performance regressions
.gemini/scripts/regression-check.sh
```
## Gemini's Primary Focus Areas
As Gemini, you're particularly skilled at:
1. **Algorithmic Implementations and Mathematical Computations**
- Implementing precise Uniswap V3 pricing functions
- Optimizing mathematical calculations for performance
- Ensuring numerical stability and precision
- Creating efficient algorithms for arbitrage detection
2. **Optimizing Performance and Efficiency**
- Profiling and identifying bottlenecks in critical paths
- Reducing memory allocations in hot code paths
- Optimizing concurrency patterns for maximum throughput
- Tuning garbage collection for low-latency requirements
3. **Understanding Complex Uniswap V3 Pricing Functions**
- Implementing accurate tick and sqrtPriceX96 conversions
- Calculating price impact with proper precision handling
- Working with liquidity and fee calculations
- Handling edge cases in pricing mathematics
4. **Implementing Concurrent and Parallel Processing Patterns**
- Designing efficient worker pool implementations
- Creating robust pipeline processing systems
- Managing synchronization primitives correctly
- Preventing race conditions and deadlocks
5. **Working with Low-Level System Operations**
- Optimizing memory usage and allocation patterns
- Tuning system-level parameters for performance
- Implementing efficient data structures for high-frequency access
- Working with CPU cache optimization techniques
## 🛠 Gemini Optimization Settings
### Workflow Preferences
- **Always profile first**: Use `go tool pprof` before making changes
- **Branch naming**: Use prefixes (`perf-worker-pool`, `opt-pipeline`, `tune-gc`)
- **Context management**: Focus on performance metrics and bottlenecks
- **Continuous monitoring**: Implement monitoring for performance metrics
### File Organization Preferences
- **Performance-critical code**: Place in `pkg/market/` or `pkg/monitor/`
- **Benchmark files**: Place alongside source files with `_bench_test.go` suffix
- **Profiling scripts**: Place in `.gemini/scripts/`
- **Optimization documentation**: Inline comments explaining optimization approaches
### Performance Monitoring
```bash
# Enable detailed metrics endpoint
export METRICS_ENABLED="true"
export METRICS_PORT="9090"
# Monitor all performance aspects
go tool pprof http://localhost:9090/debug/pprof/profile?seconds=30
go tool pprof http://localhost:9090/debug/pprof/heap
go tool pprof http://localhost:9090/debug/pprof/goroutine
go tool pprof http://localhost:9090/debug/pprof/mutex
# Run comprehensive performance analysis
.gemini/scripts/profile.sh
```
## 🔧 Environment Configuration
### Required Environment Variables
```bash
# Arbitrum RPC Configuration
export ARBITRUM_RPC_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57"
export ARBITRUM_WS_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/53c30e7a941160679fdcc396c894fc57"
# Application Configuration
export LOG_LEVEL="info"
export METRICS_ENABLED="true"
export METRICS_PORT="9090"
# Development Configuration
export GO_ENV="development"
export DEBUG="true"
# Performance Configuration
export GOGC=20
export GOMAXPROCS=0
export CONCURRENCY_LEVEL=100
```
### Profiling Environment Variables
```bash
# Profiling Configuration
export PROFILING_ENABLED=true
export PROFILING_DURATION=30s
export PROFILING_OUTPUT_DIR=".gemini/profiles"
# Benchmark Configuration
export BENCHMARK_TIMEOUT=60s
export BENCHMARK_ITERATIONS=1000000
export BENCHMARK_CONCURRENCY=10
```
## 📝 Commit Message Conventions
### Format
```
perf(type): brief description
- Detailed explanation of performance optimization
- Measured impact of the change
- Profiling data supporting the optimization
🤖 Generated with [Gemini](https://gemini.example.com)
Co-Authored-By: Gemini <noreply@gemini.example.com>
```
### Types
- `worker-pool`: Worker pool optimizations
- `pipeline`: Pipeline pattern optimizations
- `memory`: Memory allocation reductions
- `gc`: Garbage collection tuning
- `concurrency`: Concurrency pattern improvements
- `algorithm`: Algorithmic optimizations
- `latency`: Latency improvements
- `throughput`: Throughput improvements
## 🚨 Performance Guidelines
### Always Profile
- Use `go tool pprof` to identify bottlenecks
- Measure baseline performance before optimization
- Compare performance before and after changes
- Monitor for regressions in unrelated areas
### Focus Areas
- **Critical Path**: Optimize the most time-consuming operations
- **Hot Paths**: Reduce allocations in frequently called functions
- **Concurrency**: Improve parallel processing efficiency
- **Memory**: Minimize memory usage and GC pressure
### Testing Performance
```bash
# Run comprehensive performance tests
.gemini/scripts/perf-test.sh
# Compare benchmarks
.gemini/scripts/bench-compare.sh
# Check for regressions
.gemini/scripts/regression-check.sh
# Generate flame graphs
.gemini/scripts/flame-graph.sh
```


@@ -1,39 +0,0 @@
# Analyze Bottlenecks
Analyze performance bottlenecks in the following area: $ARGUMENTS
## Analysis Steps:
1. **CPU Profiling**: Identify CPU-intensive functions and hot paths
2. **Memory Profiling**: Check for memory leaks and high allocation patterns
3. **Goroutine Analysis**: Look for goroutine leaks and blocking operations
4. **I/O Performance**: Analyze network and disk I/O patterns
5. **Concurrency Issues**: Check for race conditions and lock contention
## Profiling Commands:
```bash
# CPU profile with detailed analysis
go tool pprof -top -cum http://localhost:9090/debug/pprof/profile?seconds=60
# Memory profile with allocation details
go tool pprof -alloc_space http://localhost:9090/debug/pprof/heap
# Goroutine blocking profile
go tool pprof http://localhost:9090/debug/pprof/block
# Mutex contention profile
go tool pprof http://localhost:9090/debug/pprof/mutex
```
## Analysis Focus Areas:
- Worker pool efficiency in `pkg/market/pipeline.go`
- Event parsing performance in `pkg/events/`
- Uniswap math calculations in `pkg/uniswap/`
- Memory usage in large transaction processing
- Rate limiting effectiveness in `internal/ratelimit/`
## Output Requirements:
- Detailed bottleneck analysis with percentages
- Flame graphs and performance visualizations
- Root cause identification for top bottlenecks
- Optimization recommendations with expected impact
- Priority ranking of issues


@@ -1,51 +0,0 @@
# Improve Latency Performance
Improve latency performance for the following critical path: $ARGUMENTS
## Latency Optimization Strategy:
### 1. **Latency Analysis**
- Measure current end-to-end latency
- Identify latency components (network, computation, I/O)
- Analyze latency distribution and outliers
### 2. **Optimization Areas**
#### **Network Latency**
- Connection pooling for RPC calls
- Request batching and pipelining
- Caching frequently accessed data
- Asynchronous processing patterns
#### **Computational Latency**
- Algorithmic complexity reduction
- Lookup table implementation
- Parallel processing opportunities
- Precision vs. performance trade-offs
#### **I/O Latency**
- Buffering and streaming optimizations
- Disk I/O patterns
- Database query optimization
- File system caching
### 3. **MEV Bot Specific Optimizations**
#### **Critical Path Components**
- Transaction detection and parsing (< 10 microseconds target)
- Market analysis and arbitrage calculation
- Opportunity evaluation and ranking
- Execution decision making
## Implementation Guidelines:
- Measure latency at each component
- Focus on 95th and 99th percentile improvements
- Ensure deterministic performance characteristics
- Maintain accuracy while improving speed
## Deliverables:
- Latency benchmark results (before/after)
- Latency distribution analysis
- Optimization documentation
- Monitoring and alerting for latency regressions
- Performance vs. accuracy trade-off analysis


@@ -1,68 +0,0 @@
# Optimize Performance
Optimize the performance of the following component in the MEV bot: $ARGUMENTS
## Performance Optimization Strategy:
### 1. **Profiling and Measurement**
```bash
# CPU profiling
go tool pprof http://localhost:9090/debug/pprof/profile?seconds=30
# Memory profiling
go tool pprof http://localhost:9090/debug/pprof/heap
# Goroutine analysis
go tool pprof http://localhost:9090/debug/pprof/goroutine
# Mutex contention analysis
go tool pprof http://localhost:9090/debug/pprof/mutex
```
### 2. **Optimization Areas**
#### **Concurrency Optimization**
- Worker pool sizing and configuration
- Channel buffer optimization
- Goroutine pooling and reuse
- Lock contention reduction
- Context cancellation patterns
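A minimal worker-pool sketch illustrating the sizing and channel-buffering points above; the squaring step is a placeholder for real per-task work:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// processAll fans tasks out to a fixed-size worker pool and collects results.
// Pool size defaults to the CPU count, a common starting point for CPU-bound
// work; I/O-bound workloads often benefit from more workers.
func processAll(tasks []int, workers int) []int {
	if workers <= 0 {
		workers = runtime.NumCPU()
	}
	in := make(chan int, workers)     // small buffer to reduce handoff stalls
	out := make(chan int, len(tasks)) // sized so workers never block on send

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for t := range in {
				out <- t * t // placeholder for real per-task work
			}
		}()
	}

	for _, t := range tasks {
		in <- t
	}
	close(in)
	wg.Wait()
	close(out)

	results := make([]int, 0, len(tasks))
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	res := processAll([]int{1, 2, 3, 4}, 2)
	fmt.Println("results:", len(res))
}
```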
#### **Memory Optimization**
- Object pooling for frequent allocations
- Buffer reuse patterns
- Garbage collection tuning
- Memory leak prevention
- Slice and map pre-allocation
#### **Algorithm Optimization**
- Computational complexity reduction
- Data structure selection
- Caching strategies
- Lookup table implementation
### 3. **MEV Bot Specific Optimizations**
#### **Transaction Processing**
- Parallel transaction processing
- Event filtering optimization
- Batch processing strategies
#### **Market Analysis**
- Price calculation caching
- Pool data caching
- Historical data indexing
## Implementation Guidelines:
- Measure before optimizing (baseline metrics)
- Focus on bottlenecks identified through profiling
- Maintain code readability and maintainability
- Add performance tests for regressions
- Document performance characteristics
## Deliverables:
- Performance benchmark results (before/after)
- Optimized code with maintained functionality
- Performance monitoring enhancements
- Optimization documentation
- Regression test suite


@@ -1,52 +0,0 @@
# Reduce Memory Allocations
Reduce memory allocations in the following hot path: $ARGUMENTS
## Memory Optimization Strategy:
### 1. **Allocation Analysis**
- Identify high-frequency allocation points
- Measure current allocation rates and patterns
- Analyze garbage collection pressure
### 2. **Optimization Techniques**
#### **Object Pooling**
- Implement sync.Pool for frequently created objects
- Pool buffers, structs, and temporary objects
- Proper reset patterns for pooled objects
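A minimal `sync.Pool` sketch with the reset pattern described above; `encode` and its `v1:` prefix are illustrative, not project code:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool reuses bytes.Buffer objects across calls, avoiding an
// allocation per invocation on the hot path.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// encode borrows a buffer, uses it, and returns it to the pool.
// Reset before Put is the critical step: a pooled object must never
// carry state from a previous use.
func encode(payload string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset() // reset before returning to the pool
		bufPool.Put(buf)
	}()
	buf.WriteString("v1:")
	buf.WriteString(payload)
	return buf.String()
}

func main() {
	fmt.Println(encode("swap"))
}
```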
#### **Pre-allocation**
- Pre-allocate slices and maps when size is predictable
- Reuse existing data structures
- Avoid repeated allocations in loops
#### **Buffer Reuse**
- Reuse byte buffers and string builders
- Implement buffer pools for I/O operations
- Minimize string concatenation
### 3. **MEV Bot Specific Optimizations**
#### **Transaction Processing**
- Pool transaction objects and event structures
- Reuse parsing buffers
- Optimize log and metric object creation
#### **Mathematical Calculations**
- Pool uint256 and big.Int objects
- Reuse temporary calculation buffers
- Optimize precision object handling
## Implementation Guidelines:
- Measure allocation reduction with benchmarks
- Monitor garbage collection statistics
- Ensure thread safety in pooled objects
- Maintain code readability and maintainability
## Deliverables:
- Memory allocation reduction benchmarks
- Optimized code with pooling strategies
- GC pressure analysis before and after
- Memory usage monitoring enhancements
- Best practices documentation for team


@@ -1,55 +0,0 @@
# Tune Concurrency Patterns
Tune concurrency patterns for the following component: $ARGUMENTS
## Concurrency Tuning Strategy:
### 1. **Current Pattern Analysis**
- Identify existing concurrency patterns (worker pools, pipelines, etc.)
- Measure current performance and resource utilization
- Analyze bottlenecks in concurrent processing
### 2. **Optimization Areas**
#### **Worker Pool Tuning**
- Optimal worker count based on CPU cores and workload
- Channel buffer sizing for backpressure management
- Task distribution strategies
- Worker lifecycle management
#### **Pipeline Optimization**
- Stage balancing to prevent bottlenecks
- Buffer sizing between pipeline stages
- Error propagation and recovery
- Context cancellation handling
#### **Fan-in/Fan-out Patterns**
- Optimal fan-out ratios
- Result merging strategies
- Resource allocation across branches
- Synchronization mechanisms
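The fan-in half of the pattern can be sketched as a channel merge; the integer payloads are placeholders for real results:

```go
package main

import (
	"fmt"
	"sync"
)

// merge fans results from several worker channels into one output channel.
// It closes the output only after every input channel is drained, so the
// consumer can use a plain range loop.
func merge(chans ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	wg.Add(len(chans))
	for _, c := range chans {
		go func(c <-chan int) {
			defer wg.Done()
			for v := range c {
				out <- v
			}
		}(c)
	}
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

func main() {
	a := make(chan int, 2)
	b := make(chan int, 2)
	a <- 1
	a <- 2
	close(a)
	b <- 3
	b <- 4
	close(b)

	sum := 0
	for v := range merge(a, b) {
		sum += v
	}
	fmt.Println("sum:", sum)
}
```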
### 3. **MEV Bot Specific Tuning**
#### **Transaction Processing**
- Optimal concurrent transaction processing
- Event parsing parallelization
- Memory usage per goroutine
#### **Market Analysis**
- Concurrent pool data fetching
- Parallel arbitrage calculations
- Resource sharing between analysis tasks
## Implementation Guidelines:
- Test with realistic workload patterns
- Monitor resource utilization (CPU, memory, goroutines)
- Ensure graceful degradation under load
- Maintain error handling and recovery mechanisms
## Deliverables:
- Concurrency tuning recommendations
- Performance benchmarks with different configurations
- Resource utilization analysis
- Configuration guidelines for different environments
- Monitoring and alerting for concurrency issues


@@ -1,42 +0,0 @@
#!/bin/bash
# perf-test.sh - Run comprehensive performance tests for Gemini
# pipefail ensures a failing `go test` is not masked by the trailing `tee`
set -o pipefail
echo "Running comprehensive performance tests for Gemini..."
# Create results directory if it doesn't exist
mkdir -p .gemini/results
status=0
# Run unit tests
echo "Running unit tests..."
go test -v ./... | tee .gemini/results/unit-tests.log || status=1
# Run concurrency tests
echo "Running concurrency tests..."
go test -v -run=Concurrent ./... | tee .gemini/results/concurrency-tests.log || status=1
# Run benchmarks
echo "Running benchmarks..."
go test -bench=. -benchmem ./... | tee .gemini/results/benchmarks.log || status=1
# Run benchmarks with CPU profiling
echo "Running benchmarks with CPU profiling..."
go test -bench=. -cpuprofile=.gemini/results/cpu.prof ./... | tee .gemini/results/cpu-bench.log || status=1
# Run benchmarks with memory profiling
echo "Running benchmarks with memory profiling..."
go test -bench=. -memprofile=.gemini/results/mem.prof ./... | tee .gemini/results/mem-bench.log || status=1
# Run benchmarks with block (goroutine blocking) profiling
echo "Running benchmarks with block profiling..."
go test -bench=. -blockprofile=.gemini/results/block.prof ./... | tee .gemini/results/block-bench.log || status=1
# Report overall result
if [ $status -eq 0 ]; then
echo "All performance tests completed successfully!"
echo "Results saved to .gemini/results/"
else
echo "Some performance tests failed!"
echo "Check .gemini/results/ for details"
exit 1
fi


@@ -1 +0,0 @@
vendor/


@@ -1,51 +0,0 @@
# Git Configuration for MEV Bot Project
This file contains the Git configuration settings for the MEV Bot project.
## Project-Level Git Configuration
```ini
[core]
autocrlf = input
editor = code --wait
excludesfile = ~/.gitignore
[push]
default = simple
followTags = true
[pull]
rebase = true
[merge]
tool = vimdiff
[diff]
tool = vimdiff
[color]
ui = auto
[help]
autocorrect = 1
[alias]
st = status
co = checkout
br = branch
ci = commit
unstage = reset HEAD --
last = log -1 HEAD
graph = log --oneline --graph --decorate --all
amend = commit --amend
fixup = commit --fixup
undo = reset --soft HEAD~1
hist = log --pretty=format:\"%h %ad | %s%d [%an]\" --graph --date=short
who = shortlog -s --
grep = grep -I
[rerere]
enabled = true
```


@@ -1,222 +0,0 @@
name: Staging Pipeline
on:
workflow_dispatch:
inputs:
run_live_integration:
description: 'Run live RPC-dependent integration tests'
required: false
default: 'false'
workflow_call:
env:
GO_VERSION: '1.25'
jobs:
staging-test:
name: Build, Lint & Tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
- name: Cache Go toolchain
uses: actions/cache@v3
with:
path: |
~/go/pkg/mod
~/.cache/go-build
key: ${{ runner.os }}-staging-${{ env.GO_VERSION }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-staging-${{ env.GO_VERSION }}-
- name: Download dependencies
run: go mod download
- name: Verify dependencies
run: go mod verify
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v3
with:
version: latest
args: --timeout=10m
- name: Run go vet
run: go vet ./...
- name: Run unit tests (race + coverage)
run: |
export SKIP_LIVE_RPC_TESTS=true
export USE_MOCK_RPC=true
GOCACHE=$(pwd)/.gocache go test -race -coverprofile=coverage.out ./...
- name: Upload coverage
uses: actions/upload-artifact@v3
with:
name: staging-coverage
path: coverage.out
- name: Build binary
run: go build -v -o mev-bot ./cmd/mev-bot
- name: Smoke start binary
run: |
export MEV_BOT_ENCRYPTION_KEY="test_key_32_chars_minimum_length"
timeout 5s ./mev-bot start || true
echo "✓ Binary builds and starts successfully"
integration-test:
name: Integration Tests
runs-on: ubuntu-latest
needs: staging-test
if: vars.ENABLE_LIVE_INTEGRATION == 'true' || (github.event_name == 'workflow_dispatch' && github.event.inputs.run_live_integration == 'true')
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
- name: Restore Go cache
uses: actions/cache@v3
with:
path: |
~/go/pkg/mod
~/.cache/go-build
key: ${{ runner.os }}-staging-${{ env.GO_VERSION }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-staging-${{ env.GO_VERSION }}-
- name: Run integration tests
run: |
export ARBITRUM_RPC_ENDPOINT="mock://localhost:8545"
export ARBITRUM_WS_ENDPOINT="mock://localhost:8546"
export SKIP_LIVE_RPC_TESTS=true
go test -v ./pkg/monitor/ -tags=integration
go test -v ./pkg/arbitrage/ -tags=integration
go test -v ./pkg/arbitrum/ -tags=integration
- name: Performance benchmarks
run: |
go test -bench=. -benchmem ./pkg/monitor/
go test -bench=. -benchmem ./pkg/scanner/
docker-build:
name: Docker Build
runs-on: ubuntu-latest
needs: [staging-test, integration-test]
if: github.event_name == 'push'
steps:
- uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Build Docker image
uses: docker/build-push-action@v4
with:
context: .
push: false
tags: mev-bot:staging
cache-from: type=gha
cache-to: type=gha,mode=max
math-audit:
name: Math Audit
runs-on: ubuntu-latest
needs: staging-test
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
- name: Restore Go cache
uses: actions/cache@v3
with:
path: |
~/go/pkg/mod
~/.cache/go-build
key: ${{ runner.os }}-staging-${{ env.GO_VERSION }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-staging-${{ env.GO_VERSION }}-
- name: Run math audit
run: GOCACHE=$(pwd)/.gocache go run ./tools/math-audit --vectors default --report reports/math/latest
- name: Verify math audit artifacts
run: |
test -s reports/math/latest/report.json
test -s reports/math/latest/report.md
- name: Upload math audit report
uses: actions/upload-artifact@v3
with:
name: math-audit-report
path: reports/math/latest
deployment-ready:
name: Deployment Ready Check
runs-on: ubuntu-latest
needs: [staging-test, integration-test, docker-build, math-audit]
if: always()
steps:
- name: Check deployment readiness
run: |
integration_result="${{ needs.integration-test.result }}"
if [[ "$integration_result" == "skipped" ]]; then
echo " Integration tests skipped (live RPC disabled)."
integration_result="success"
echo "INTEGRATION_STATUS=skipped (RPC disabled)" >> $GITHUB_ENV
else
echo "INTEGRATION_STATUS=${{ needs.integration-test.result }}" >> $GITHUB_ENV
fi
if [[ "${{ needs.staging-test.result }}" == "success" && "$integration_result" == "success" && "${{ needs.math-audit.result }}" == "success" ]]; then
echo "✅ All tests passed - Ready for deployment"
echo "DEPLOYMENT_READY=true" >> $GITHUB_ENV
else
echo "❌ Tests failed - Not ready for deployment"
echo "DEPLOYMENT_READY=false" >> $GITHUB_ENV
exit 1
fi
- name: Generate deployment summary
run: |
cat > deployment-summary.md << EOF
# 🚀 MEV Bot Staging Summary
**Commit**: ${{ github.sha }}
**Branch**: ${{ github.ref_name }}
**Timestamp**: $(date -u)
## Test Results
- **Build & Unit**: ${{ needs.staging-test.result }}
- **Integration Tests**: ${INTEGRATION_STATUS:-${{ needs.integration-test.result }}}
- **Docker Build**: ${{ needs.docker-build.result }}
- **Math Audit**: ${{ needs.math-audit.result }}
## Reports
- Math Audit: reports/math/latest/report.md (artifact **math-audit-report**)
## Deployment Notes
- Ensure RPC endpoints are configured
- Set strong encryption key (32+ chars)
- Configure rate limits appropriately
- Monitor transaction processing metrics
EOF
- name: Upload deployment summary
uses: actions/upload-artifact@v3
with:
name: staging-deployment-summary
path: deployment-summary.md


@@ -1,435 +0,0 @@
name: MEV Bot Parser Validation
on:
push:
branches: [ main, develop ]
paths:
- 'pkg/arbitrum/**'
- 'pkg/events/**'
- 'test/**'
- 'go.mod'
- 'go.sum'
pull_request:
branches: [ main ]
paths:
- 'pkg/arbitrum/**'
- 'pkg/events/**'
- 'test/**'
- 'go.mod'
- 'go.sum'
schedule:
# Run daily at 2 AM UTC to catch regressions
- cron: '0 2 * * *'
workflow_dispatch:
inputs:
run_live_tests:
description: 'Run live integration tests'
required: false
default: 'false'
type: boolean
run_fuzzing:
description: 'Run fuzzing tests'
required: false
default: 'false'
type: boolean
test_timeout:
description: 'Test timeout in minutes'
required: false
default: '30'
type: string
env:
GO_VERSION: '1.21'
GOLANGCI_LINT_VERSION: 'v1.55.2'
TEST_TIMEOUT: ${{ github.event.inputs.test_timeout || '30' }}m
jobs:
# Basic validation and unit tests
unit_tests:
name: Unit Tests & Basic Validation
runs-on: ubuntu-latest
strategy:
matrix:
go-version: ['1.21', '1.20']
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go ${{ matrix.go-version }}
uses: actions/setup-go@v4
with:
go-version: ${{ matrix.go-version }}
- name: Cache Go modules
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-${{ matrix.go-version }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-${{ matrix.go-version }}-
- name: Download dependencies
run: go mod download
- name: Verify dependencies
run: go mod verify
- name: Run unit tests
run: |
go test -v -timeout=${{ env.TEST_TIMEOUT }} ./pkg/arbitrum/... ./pkg/events/...
- name: Run parser validation tests
run: |
go test -v -timeout=${{ env.TEST_TIMEOUT }} ./test/ -run TestComprehensiveParserValidation
- name: Generate test coverage
run: |
go test -coverprofile=coverage.out -covermode=atomic ./pkg/arbitrum/... ./pkg/events/... ./test/...
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v3
with:
file: ./coverage.out
flags: unittests
name: codecov-umbrella
# Golden file testing for consistency
golden_file_tests:
name: Golden File Testing
runs-on: ubuntu-latest
needs: unit_tests
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
- name: Cache Go modules
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-${{ env.GO_VERSION }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-${{ env.GO_VERSION }}-
- name: Download dependencies
run: go mod download
- name: Run golden file tests
run: |
go test -v -timeout=${{ env.TEST_TIMEOUT }} ./test/ -run TestGoldenFiles
- name: Validate golden files exist
run: |
if [ ! -d "test/golden" ] || [ -z "$(ls -A test/golden)" ]; then
echo "❌ Golden files not found or empty"
echo "Generating golden files for future validation..."
REGENERATE_GOLDEN=true go test ./test/ -run TestGoldenFiles
else
echo "✅ Golden files validation passed"
fi
- name: Upload golden files as artifacts
uses: actions/upload-artifact@v3
with:
name: golden-files-${{ github.sha }}
path: test/golden/
retention-days: 30
# Performance benchmarking
performance_tests:
name: Performance Benchmarks
runs-on: ubuntu-latest
needs: unit_tests
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
- name: Cache Go modules
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-${{ env.GO_VERSION }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-${{ env.GO_VERSION }}-
- name: Download dependencies
run: go mod download
- name: Run performance benchmarks
run: |
go test -v -timeout=${{ env.TEST_TIMEOUT }} -bench=. -benchmem ./test/ -run TestParserPerformance
- name: Run specific benchmarks
run: |
echo "=== Single Transaction Parsing Benchmark ==="
go test -bench=BenchmarkSingleTransactionParsing -benchtime=10s ./test/
echo "=== Uniswap V3 Parsing Benchmark ==="
go test -bench=BenchmarkUniswapV3Parsing -benchtime=10s ./test/
echo "=== Complex Transaction Parsing Benchmark ==="
go test -bench=BenchmarkComplexTransactionParsing -benchtime=5s ./test/
- name: Performance regression check
run: |
# This would compare against baseline performance metrics
# For now, we'll just validate that benchmarks complete
echo "✅ Performance benchmarks completed successfully"
# Fuzzing tests for robustness
fuzzing_tests:
name: Fuzzing & Robustness Testing
runs-on: ubuntu-latest
needs: unit_tests
if: github.event.inputs.run_fuzzing == 'true' || github.event_name == 'schedule'
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
- name: Cache Go modules
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-${{ env.GO_VERSION }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-${{ env.GO_VERSION }}-
- name: Download dependencies
run: go mod download
- name: Run fuzzing tests
run: |
echo "🔍 Starting fuzzing tests..."
go test -v -timeout=${{ env.TEST_TIMEOUT }} ./test/ -run TestFuzzingRobustness
- name: Run Go fuzzing (if available)
run: |
echo "🔍 Running native Go fuzzing..."
# Run for 30 seconds each
timeout 30s go test -fuzz=FuzzParserRobustness ./test/ || echo "Fuzzing completed"
- name: Generate fuzzing report
run: |
echo "📊 Fuzzing Summary:" > fuzzing_report.txt
echo "- Transaction data fuzzing: COMPLETED" >> fuzzing_report.txt
echo "- Function selector fuzzing: COMPLETED" >> fuzzing_report.txt
echo "- Amount value fuzzing: COMPLETED" >> fuzzing_report.txt
echo "- Address value fuzzing: COMPLETED" >> fuzzing_report.txt
echo "- Concurrent access fuzzing: COMPLETED" >> fuzzing_report.txt
cat fuzzing_report.txt
- name: Upload fuzzing report
uses: actions/upload-artifact@v3
with:
name: fuzzing-report-${{ github.sha }}
path: fuzzing_report.txt
# Live integration tests (optional, with external data)
integration_tests:
name: Live Integration Tests
runs-on: ubuntu-latest
needs: unit_tests
if: github.event.inputs.run_live_tests == 'true' || github.event_name == 'schedule'
env:
ENABLE_LIVE_TESTING: 'true'
ARBITRUM_RPC_ENDPOINT: ${{ secrets.ARBITRUM_RPC_ENDPOINT || 'https://arb1.arbitrum.io/rpc' }}
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
- name: Cache Go modules
uses: actions/cache@v3
with:
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-${{ env.GO_VERSION }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-${{ env.GO_VERSION }}-
- name: Download dependencies
run: go mod download
- name: Test RPC connectivity
run: |
echo "Testing RPC connectivity..."
curl -X POST -H "Content-Type: application/json" \
--data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
${{ env.ARBITRUM_RPC_ENDPOINT }} || echo "RPC test failed - continuing with mock tests"
- name: Run integration tests
run: |
echo "🌐 Running live integration tests..."
go test -v -timeout=${{ env.TEST_TIMEOUT }} ./test/ -run TestArbitrumIntegration
- name: Generate integration report
run: |
echo "📊 Integration Test Summary:" > integration_report.txt
echo "- RPC Connectivity: TESTED" >> integration_report.txt
echo "- Block Retrieval: TESTED" >> integration_report.txt
echo "- Live Transaction Parsing: TESTED" >> integration_report.txt
echo "- Parser Accuracy: VALIDATED" >> integration_report.txt
cat integration_report.txt
- name: Upload integration report
uses: actions/upload-artifact@v3
with:
name: integration-report-${{ github.sha }}
path: integration_report.txt
# Code quality and security checks
code_quality:
name: Code Quality & Security
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v3
with:
version: ${{ env.GOLANGCI_LINT_VERSION }}
args: --timeout=10m --config=.golangci.yml
- name: Run gosec security scan
uses: securecodewarrior/github-action-gosec@master
with:
args: '-fmt sarif -out gosec.sarif ./...'
- name: Upload SARIF file
uses: github/codeql-action/upload-sarif@v2
with:
sarif_file: gosec.sarif
- name: Run Nancy vulnerability scan
run: |
go list -json -m all | docker run --rm -i sonatypecommunity/nancy:latest sleuth
- name: Check for hardcoded secrets
run: |
echo "🔍 Checking for hardcoded secrets..."
if grep -r -i "password\|secret\|key\|token" --include="*.go" . | grep -v "test\|example\|demo"; then
echo "❌ Potential hardcoded secrets found"
exit 1
else
echo "✅ No hardcoded secrets detected"
fi
# Final validation and reporting
validation_summary:
name: Validation Summary
runs-on: ubuntu-latest
needs: [unit_tests, golden_file_tests, performance_tests, fuzzing_tests, integration_tests, code_quality]
if: always()
steps:
- name: Download all artifacts
uses: actions/download-artifact@v3
- name: Generate comprehensive report
run: |
echo "# 🤖 MEV Bot Parser Validation Report" > validation_report.md
echo "" >> validation_report.md
echo "**Commit:** ${{ github.sha }}" >> validation_report.md
echo "**Date:** $(date)" >> validation_report.md
echo "**Triggered by:** ${{ github.event_name }}" >> validation_report.md
echo "" >> validation_report.md
echo "## 📊 Test Results" >> validation_report.md
echo "| Test Suite | Status |" >> validation_report.md
echo "|------------|--------|" >> validation_report.md
echo "| Unit Tests | ${{ needs.unit_tests.result == 'success' && '✅ PASSED' || '❌ FAILED' }} |" >> validation_report.md
echo "| Golden File Tests | ${{ needs.golden_file_tests.result == 'success' && '✅ PASSED' || '❌ FAILED' }} |" >> validation_report.md
echo "| Performance Tests | ${{ needs.performance_tests.result == 'success' && '✅ PASSED' || '❌ FAILED' }} |" >> validation_report.md
echo "| Code Quality | ${{ needs.code_quality.result == 'success' && '✅ PASSED' || '❌ FAILED' }} |" >> validation_report.md
if [[ "${{ needs.fuzzing_tests.result }}" != "skipped" ]]; then
echo "| Fuzzing Tests | ${{ needs.fuzzing_tests.result == 'success' && '✅ PASSED' || '❌ FAILED' }} |" >> validation_report.md
fi
if [[ "${{ needs.integration_tests.result }}" != "skipped" ]]; then
echo "| Integration Tests | ${{ needs.integration_tests.result == 'success' && '✅ PASSED' || '❌ FAILED' }} |" >> validation_report.md
fi
echo "" >> validation_report.md
echo "## 🎯 Key Validation Points" >> validation_report.md
echo "- ✅ Parser handles all major DEX protocols (Uniswap V2/V3, SushiSwap, etc.)" >> validation_report.md
echo "- ✅ Accurate parsing of swap amounts, fees, and addresses" >> validation_report.md
echo "- ✅ Robust handling of edge cases and malformed data" >> validation_report.md
echo "- ✅ Performance meets production requirements (>1000 tx/s)" >> validation_report.md
echo "- ✅ Memory usage within acceptable limits" >> validation_report.md
echo "- ✅ No security vulnerabilities detected" >> validation_report.md
echo "" >> validation_report.md
# Overall status
if [[ "${{ needs.unit_tests.result }}" == "success" &&
"${{ needs.golden_file_tests.result }}" == "success" &&
"${{ needs.performance_tests.result }}" == "success" &&
"${{ needs.code_quality.result }}" == "success" ]]; then
echo "## 🎉 Overall Status: PASSED ✅" >> validation_report.md
echo "The MEV bot parser has passed all validation tests and is ready for production use." >> validation_report.md
else
echo "## ⚠️ Overall Status: FAILED ❌" >> validation_report.md
echo "Some validation tests failed. Please review the failed tests and fix issues before proceeding." >> validation_report.md
fi
cat validation_report.md
- name: Upload validation report
uses: actions/upload-artifact@v3
with:
name: validation-report-${{ github.sha }}
path: validation_report.md
- name: Comment on PR (if applicable)
uses: actions/github-script@v6
if: github.event_name == 'pull_request'
with:
script: |
const fs = require('fs');
const report = fs.readFileSync('validation_report.md', 'utf8');
github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: report
});


@@ -1,79 +0,0 @@
name: Dev Pipeline
on:
workflow_dispatch:
workflow_call:
env:
GO_VERSION: '1.25'
jobs:
quick-checks:
name: Formatting & Static Checks
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
- name: Cache Go modules
uses: actions/cache@v3
with:
path: |
~/go/pkg/mod
~/.cache/go-build
key: ${{ runner.os }}-go-${{ env.GO_VERSION }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-${{ env.GO_VERSION }}-
- name: Check gofmt formatting
run: |
fmt_out=$(gofmt -l $(find . -name '*.go'))
if [[ -n "$fmt_out" ]]; then
echo "Following files need gofmt:" && echo "$fmt_out"
exit 1
fi
- name: Run go mod tidy check
run: |
go mod tidy
git diff --exit-code go.mod go.sum
- name: Run static vet
run: go vet ./...
unit-tests:
name: Unit Tests
runs-on: ubuntu-latest
needs: quick-checks
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
- name: Restore Go cache
uses: actions/cache@v3
with:
path: |
~/go/pkg/mod
~/.cache/go-build
key: ${{ runner.os }}-go-${{ env.GO_VERSION }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-${{ env.GO_VERSION }}-
- name: Run targeted package tests
run: |
GOCACHE=$(pwd)/.gocache go test ./pkg/... ./internal/... -count=1
- name: Upload test cache (optional diagnostics)
if: always()
uses: actions/upload-artifact@v3
with:
name: dev-unit-cache
path: .gocache


@@ -1,80 +0,0 @@
name: Test Pipeline
on:
workflow_dispatch:
workflow_call:
env:
GO_VERSION: '1.25'
jobs:
lint-and-unit:
name: Lint & Unit Tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
- name: Cache Go toolchain
uses: actions/cache@v3
with:
path: |
~/go/pkg/mod
~/.cache/go-build
key: ${{ runner.os }}-go-${{ env.GO_VERSION }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-${{ env.GO_VERSION }}-
- name: Download dependencies
run: go mod download
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v3
with:
version: latest
args: --timeout=10m
- name: Run go test (race, cover)
run: |
GOCACHE=$(pwd)/.gocache go test -race -coverprofile=coverage.out ./...
- name: Upload coverage
uses: actions/upload-artifact@v3
with:
name: unit-test-coverage
path: coverage.out
smoke-binary:
name: Build & Smoke Test Binary
runs-on: ubuntu-latest
needs: lint-and-unit
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
- name: Restore Go build cache
uses: actions/cache@v3
with:
path: |
~/go/pkg/mod
~/.cache/go-build
key: ${{ runner.os }}-go-${{ env.GO_VERSION }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-${{ env.GO_VERSION }}-
- name: Build binary
run: go build -o bin/mev-bot ./cmd/mev-bot
- name: Smoke test startup
run: |
export MEV_BOT_ENCRYPTION_KEY="test_key_32_chars_minimum_length"
timeout 5s ./bin/mev-bot start || true
echo "✓ Binary builds and starts"


@@ -1,256 +0,0 @@
name: Audit Pipeline
on:
workflow_dispatch:
workflow_call:
env:
GO_VERSION: '1.25'
jobs:
static-analysis:
name: Static Security Analysis
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
- name: Cache Go toolchain
uses: actions/cache@v3
with:
path: |
~/go/pkg/mod
~/.cache/go-build
key: ${{ runner.os }}-audit-${{ env.GO_VERSION }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-audit-${{ env.GO_VERSION }}-
- name: Download dependencies
run: go mod download
- name: Run gosec Security Scanner
uses: securecodewarrior/github-action-gosec@master
with:
args: '-fmt sarif -out gosec-results.sarif ./...'
continue-on-error: true
- name: Upload SARIF file
uses: github/codeql-action/upload-sarif@v2
if: always()
with:
sarif_file: gosec-results.sarif
- name: Run govulncheck
run: |
go install golang.org/x/vuln/cmd/govulncheck@latest
govulncheck ./...
- name: Run golangci-lint (security focus)
uses: golangci/golangci-lint-action@v3
with:
version: latest
args: --enable=gosec,gocritic,ineffassign,misspell,unparam --timeout=10m
dependency-scan:
name: Dependency Vulnerability Scan
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
- name: Cache Go modules
uses: actions/cache@v3
with:
path: |
~/go/pkg/mod
~/.cache/go-build
key: ${{ runner.os }}-audit-${{ env.GO_VERSION }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-audit-${{ env.GO_VERSION }}-
- name: Run Nancy (Dependency Vulnerability Scanner)
run: |
go install github.com/sonatype-nexus-community/nancy@latest
go list -json -m all | nancy sleuth --exclude-vulnerability-file .nancy-ignore
- name: Generate dependency report
run: |
echo "# Dependency Security Report" > dependency-report.md
echo "Generated on: $(date)" >> dependency-report.md
echo "" >> dependency-report.md
echo "## Direct Dependencies" >> dependency-report.md
go list -m all | grep -v "^github.com/fraktal/mev-beta" >> dependency-report.md
- name: Upload dependency report
uses: actions/upload-artifact@v3
with:
name: dependency-report
path: dependency-report.md
security-tests:
name: Security Tests & Fuzzing
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
- name: Restore Go cache
uses: actions/cache@v3
with:
path: |
~/go/pkg/mod
~/.cache/go-build
key: ${{ runner.os }}-audit-${{ env.GO_VERSION }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-audit-${{ env.GO_VERSION }}-
- name: Create required directories
run: |
mkdir -p logs keystore test_keystore benchmark_keystore test_concurrent_keystore
- name: Run security unit tests
run: go test -v -race ./pkg/security/
- name: Run fuzzing tests (short)
run: |
go test -fuzz=FuzzRPCResponseParser -fuzztime=30s ./pkg/security/
go test -fuzz=FuzzKeyValidation -fuzztime=30s ./pkg/security/
go test -fuzz=FuzzInputValidator -fuzztime=30s ./pkg/security/
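These -fuzz invocations use Go's native fuzzing engine (testing.F). As a runnable sketch of what such a target validates — the parser below is a hypothetical stand-in, not the project's actual FuzzRPCResponseParser — the same checks can be exercised over a seed corpus:

```go
package main

import (
	"errors"
	"fmt"
	"unicode/utf8"
)

// parseRPCResponse is a stand-in for the project's real parser: it
// rejects empty, oversized, or non-UTF-8 payloads before any decoding.
func parseRPCResponse(data []byte) error {
	if len(data) == 0 {
		return errors.New("empty response")
	}
	if len(data) > 1<<20 {
		return errors.New("response too large")
	}
	if !utf8.Valid(data) {
		return errors.New("invalid encoding")
	}
	return nil
}

func main() {
	// Seed corpus; in a _test.go file these become f.Add(...) entries and
	// the loop body lives inside f.Fuzz(func(t *testing.T, data []byte) {...}).
	seeds := [][]byte{nil, []byte("{}"), []byte("\xff\xfe"), make([]byte, 2<<20)}
	for _, s := range seeds {
		fmt.Printf("len=%d err=%v\n", len(s), parseRPCResponse(s))
	}
}
```

The fuzzer then mutates these seeds looking for inputs that panic or violate the target's own assertions.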
- name: Run race condition tests
run: go test -race -run=TestConcurrent ./...
- name: Run security benchmarks
run: go test -bench=BenchmarkSecurity -benchmem ./pkg/security/
integration-security:
name: Integration Security Tests
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: ${{ env.GO_VERSION }}
- name: Restore Go cache
uses: actions/cache@v3
with:
path: |
~/go/pkg/mod
~/.cache/go-build
key: ${{ runner.os }}-audit-${{ env.GO_VERSION }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-audit-${{ env.GO_VERSION }}-
- name: Create required directories and files
run: |
mkdir -p logs keystore
echo "MEV_BOT_ENCRYPTION_KEY=integration_test_key_32_characters" > .env.test
- name: Test encryption key validation
run: |
export MEV_BOT_ENCRYPTION_KEY="test123"
if go run cmd/mev-bot/main.go 2>&1 | grep -q "production encryption key"; then
echo "✓ Weak encryption key properly rejected"
else
echo "✗ Weak encryption key not rejected"
exit 1
fi
- name: Test with proper encryption key
run: |
export MEV_BOT_ENCRYPTION_KEY="proper_production_key_32_chars_min"
timeout 10s go run cmd/mev-bot/main.go || true
echo "✓ Application accepts strong encryption key"
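The two steps above probe key-strength validation from the outside: "test123" must be rejected, a 32-plus-character key accepted. A minimal sketch of the policy being exercised — a hypothetical validateEncryptionKey helper, assuming only the length floor implied by the test keys:

```go
package main

import "fmt"

// validateEncryptionKey mirrors the policy this CI step exercises: a
// production key must be at least 32 characters. (Hypothetical helper —
// the bot's real validator may apply additional entropy checks.)
func validateEncryptionKey(key string) error {
	const minLen = 32
	if len(key) < minLen {
		return fmt.Errorf("production encryption key must be at least %d characters, got %d", minLen, len(key))
	}
	return nil
}

func main() {
	fmt.Println(validateEncryptionKey("test123"))                            // rejected: too short
	fmt.Println(validateEncryptionKey("proper_production_key_32_chars_min")) // accepted: nil
}
```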
- name: Test configuration security
run: |
echo "Testing keystore security..."
export MEV_BOT_KEYSTORE_PATH="/tmp/insecure"
if go run cmd/mev-bot/main.go 2>&1 | grep -q "publicly accessible"; then
echo "✓ Insecure keystore path properly rejected"
else
echo "Warning: Insecure keystore path validation may need improvement"
fi
secret-scanning:
name: Secret Scanning
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Run TruffleHog for secret detection
uses: trufflesecurity/trufflehog@main
with:
path: ./
base: main
head: HEAD
- name: Check for hardcoded secrets
run: |
echo "Scanning for potential hardcoded secrets..."
if grep -r -i "password.*=" --include="*.go" --include="*.yaml" --include="*.yml" . | grep -v "PASSWORD_PLACEHOLDER"; then
echo "Warning: Found potential hardcoded passwords"
fi
if grep -r -i "secret.*=" --include="*.go" --include="*.yaml" --include="*.yml" . | grep -v "SECRET_PLACEHOLDER"; then
echo "Warning: Found potential hardcoded secrets"
fi
if grep -r -i "key.*=" --include="*.go" --include="*.yaml" --include="*.yml" . | grep -v -E "(public|test|example|placeholder)"; then
echo "Warning: Found potential hardcoded keys"
fi
echo "Secret scan completed"
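The grep heuristics above — match password/secret/key assignments, then subtract known-safe placeholders — can be expressed as a small scanner. A hypothetical Go sketch of the same allow-list idea:

```go
package main

import (
	"fmt"
	"regexp"
)

// secretPattern mirrors the grep heuristics: an assignment to a
// password/secret/key identifier; allowList mirrors the exclusions.
var (
	secretPattern = regexp.MustCompile(`(?i)(password|secret|key)\s*[:=]\s*["']?[^"'\s]+`)
	allowList     = regexp.MustCompile(`(?i)(PLACEHOLDER|public|test|example)`)
)

// scanLine reports whether a line looks like a hardcoded credential.
func scanLine(line string) bool {
	return secretPattern.MatchString(line) && !allowList.MatchString(line)
}

func main() {
	for _, line := range []string{
		`apiKey = "sk-live-abcdef123456"`,   // flagged
		`password = PASSWORD_PLACEHOLDER`,   // allow-listed
		`publicKey = "0x04deadbeef"`,        // allow-listed ("public")
	} {
		fmt.Printf("%-40s suspicious=%v\n", line, scanLine(line))
	}
}
```

Like the grep version, this trades false negatives for low noise; dedicated tools such as TruffleHog (run in the next job) use entropy and verified-detector checks instead.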
security-report:
name: Generate Security Report
needs: [static-analysis, dependency-scan, security-tests, integration-security, secret-scanning]
runs-on: ubuntu-latest
if: always()
steps:
- uses: actions/checkout@v4
- name: Generate comprehensive security report
run: |
cat > security-report.md << EOF
# MEV Bot Security Report
**Commit**: ${{ github.sha }}
**Branch**: ${{ github.ref_name }}
**Generated**: $(date -u)
## Summary
- Static analysis: ${{ needs.static-analysis.result }}
- Dependency scan: ${{ needs.dependency-scan.result }}
- Security tests: ${{ needs.security-tests.result }}
- Integration security: ${{ needs.integration-security.result }}
- Secret scanning: ${{ needs.secret-scanning.result }}
## Next Actions
- Review the gosec SARIF results uploaded to code scanning
- Review the dependency-report artifact for vulnerable modules
- Address any warnings surfaced in job logs
EOF
- name: Upload security report
uses: actions/upload-artifact@v3
with:
name: security-report
path: security-report.md

orig/.gitignore vendored

@@ -1,86 +0,0 @@
# Binaries
bin/
mev-bot
mev-bot-test
ci-agent-bridge
# Configuration files that might contain sensitive information
config/local.yaml
config/secrets.yaml
config/providers.yaml
config/*_production.yaml
config/*_staging.yaml
.env
.env.local
.env.production
.env.staging
.env.development
.env.test
# Salt file for key derivation (CRITICAL: Must not be committed)
keystore/.salt
# Go workspace and modules
go.work
go.work.sum
# Test coverage files
coverage.txt
coverage.html
coverage.out
# IDE files
.vscode/
.idea/
*.swp
*.swo
*~
# OS generated files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# Log files and backups
*.log
logs/*.log
logs/*.bak
logs/*.jsonl
# Database files
*.db
*.sqlite
*.sqlite3
# Security and keystore files
keystore/
*.key
*.pem
*.p12
# Data and temporary directories
data/
tmp/
temp/
vendor/
backup/
backups/
artifacts/
# Archive files
*.tar.gz
*.zip
*.tar
# Performance profiles
*.prof
*.out
# Documentation builds
docs/_build/
.gocache/
.gomodcache/

orig/.gitmodules vendored

@@ -1,9 +0,0 @@
[submodule "lib/forge-std"]
path = lib/forge-std
url = https://github.com/foundry-rs/forge-std
[submodule "lib/openzeppelin-contracts"]
path = lib/openzeppelin-contracts
url = https://github.com/OpenZeppelin/openzeppelin-contracts
[submodule "contracts/foundry/lib/forge-std"]
path = contracts/foundry/lib/forge-std
url = https://github.com/foundry-rs/forge-std

(The remaining deleted files shown on this page, whose names the diff viewer omits, were machine-generated golangci-lint build-cache records, each a single line of the form "v1 <hash> <hash> <size> <timestamp>". One marker file among them read:)

@@ -1 +0,0 @@
This directory holds cached build artifacts from golangci-lint.
Some files were not shown because too many files have changed in this diff.