docs: add comprehensive V2 requirements documentation

- Created MODULARITY_REQUIREMENTS.md with component independence rules
- Created PROTOCOL_SUPPORT_REQUIREMENTS.md covering 13+ protocols
- Created TESTING_REQUIREMENTS.md enforcing 100% coverage
- Updated CLAUDE.md with strict feature/v2/* branch strategy

Requirements documented:
- Component modularity (standalone + integrated)
- 100% test coverage enforcement (non-negotiable)
- All DEX protocols (Uniswap V2/V3/V4, Curve, Balancer V2/V3, Kyber Classic/Elastic, Camelot V2/V3 with all Algebra variants)
- Proper decimal handling (critical for calculations)
- Pool caching with multi-index and O(1) mappings
- Market building with essential arbitrage detection values
- Price movement detection with decimal precision
- Transaction building (single and batch execution)
- Pool discovery and caching
- Comprehensive validation at all layers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Administrator
Date: 2025-11-10 14:26:56 +01:00
Parent: c54c569f30
Commit: f41adbe575
4 changed files with 2063 additions and 0 deletions

CLAUDE.md (new file, 351 lines)

@@ -0,0 +1,351 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Status: V2 Architecture Planning
This repository is currently in **V2 planning phase**. The V1 codebase has been moved to `orig/` for preservation while V2 architecture is being designed.
**Current State:**
- V1 implementation: `orig/` (frozen for reference)
- V2 planning documents: `docs/planning/`
- Active development: Not yet started (planning phase)
## Repository Structure
```
mev-bot/
├── docs/
│ └── planning/ # V2 architecture and task breakdown
│ ├── 00_V2_MASTER_PLAN.md
│ └── 07_TASK_BREAKDOWN.md
└── orig/ # V1 codebase (preserved)
├── cmd/mev-bot/ # V1 application entry point
├── pkg/ # V1 library code
│ ├── events/ # Event parsing (monolithic)
│ ├── monitor/ # Arbitrum sequencer monitoring
│ ├── scanner/ # Arbitrage scanning
│ ├── arbitrage/ # Arbitrage detection
│ ├── market/ # Market data management
│ └── pools/ # Pool discovery
├── internal/ # V1 private code
├── config/ # V1 configuration
├── go.mod # V1 dependencies
└── README_V1.md # V1 documentation
```
## V1 Reference (orig/)
### Building and Running V1
```bash
cd orig/
go build -o ../bin/mev-bot-v1 ./cmd/mev-bot/main.go
../bin/mev-bot-v1 start
```
### V1 Architecture Overview
- **Monolithic parser**: Single parser handling all DEX types
- **Basic validation**: Limited validation of parsed data
- **Single-index cache**: Pool cache by address only
- **Event-driven**: Real-time Arbitrum sequencer monitoring
### Critical V1 Issues (driving V2 refactor)
1. **Zero address tokens**: Parser returns zero addresses when transaction calldata unavailable
2. **Parsing accuracy**: Generic parser misses protocol-specific edge cases
3. **No validation audit trail**: Silent failures, no discrepancy logging
4. **Inefficient lookups**: Single-index cache, no liquidity ranking
5. **Stats disconnection**: Events detected but not reflected in metrics
See `orig/README_V1.md` for complete V1 documentation.
## V2 Architecture Plan
### Key Improvements
1. **Per-exchange parsers**: Individual parsers for UniswapV2, UniswapV3, SushiSwap, Camelot, Curve
2. **Multi-layer validation**: Strict validation at parser, monitor, and scanner layers
3. **Multi-index cache**: Lookups by address, token pair, protocol, and liquidity
4. **Background validation**: Audit trail comparing parsed vs cached data
5. **Observable by default**: Comprehensive metrics, structured logging, health monitoring
### V2 Directory Structure (planned)
```
pkg/
├── parsers/ # Per-exchange parser implementations
│ ├── factory.go # Parser factory pattern
│ ├── interface.go # Parser interface definition
│ ├── uniswap_v2.go # UniswapV2-specific parser
│ ├── uniswap_v3.go # UniswapV3-specific parser
│ └── ...
├── validation/ # Validation pipeline
│ ├── validator.go # Event validator
│ ├── rules.go # Validation rules
│ └── background.go # Background validation channel
├── cache/ # Multi-index pool cache
│ ├── pool_cache.go
│ ├── index_by_address.go
│ ├── index_by_tokens.go
│ └── index_by_liquidity.go
└── observability/ # Metrics and logging
├── metrics.go
└── logger.go
```
### Implementation Roadmap
See `docs/planning/07_TASK_BREAKDOWN.md` for detailed atomic tasks (~99 hours total):
- **Phase 1: Foundation** (11 hours) - Interfaces, logging, metrics
- **Phase 2: Parser Refactor** (45 hours) - Per-exchange parsers
- **Phase 3: Cache System** (16 hours) - Multi-index cache
- **Phase 4: Validation Pipeline** (13 hours) - Background validation
- **Phase 5: Migration & Testing** (14 hours) - V1/V2 comparison
## Development Workflow
### V1 Commands (reference only)
```bash
cd orig/
# Build
make build
# Run tests
make test
# Run V1 bot
./bin/mev-bot start
# View logs
./scripts/log-manager.sh analyze
```
### V2 Development (when started)
**DO NOT** start V2 implementation without:
1. Reviewing `docs/planning/00_V2_MASTER_PLAN.md`
2. Reviewing `docs/planning/07_TASK_BREAKDOWN.md`
3. Creating task branch from `feature/v2-prep`
4. Following atomic task breakdown
## Key Principles for V2 Development
### 1. Fail-Fast with Visibility
- Reject invalid data immediately at source
- Log all rejections with detailed context
- Never allow garbage data to propagate downstream
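A minimal sketch of what this looks like in Go, assuming a hypothetical `SwapEvent` type and Go's standard `log/slog` logger rather than the project's own logging package:
```go
package validation

import (
	"errors"
	"log/slog"
	"math/big"
)

// SwapEvent is a minimal stand-in for the parsed event type used in this sketch.
type SwapEvent struct {
	TxHash  string
	Token0  string
	Token1  string
	Amount0 *big.Int
}

var ErrInvalidEvent = errors.New("invalid swap event")

// CheckEvent rejects bad data at the source and logs the rejection with context,
// so garbage never propagates downstream.
func CheckEvent(log *slog.Logger, e *SwapEvent) error {
	switch {
	case e == nil:
		log.Error("rejected event", "reason", "nil event")
		return ErrInvalidEvent
	case e.Token0 == "" || e.Token1 == "":
		log.Error("rejected event", "reason", "missing token address", "tx", e.TxHash)
		return ErrInvalidEvent
	case e.Amount0 == nil || e.Amount0.Sign() == 0:
		log.Error("rejected event", "reason", "zero amount", "tx", e.TxHash)
		return ErrInvalidEvent
	}
	return nil
}
```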
### 2. Single Responsibility
- One parser per exchange type
- One validator per data type
- One cache per index type
### 3. Observable by Default
- Every component emits metrics
- Every operation is logged with context
- Every error includes stack trace and state
### 4. Test-Driven
- Unit tests for every parser (100% coverage, per TESTING_REQUIREMENTS.md)
- Integration tests for full pipeline
- Chaos testing for failure scenarios
### 5. Atomic Tasks
- Each task < 2 hours (from 07_TASK_BREAKDOWN.md)
- Clear dependencies between tasks
- Testable success criteria
## Architecture Patterns Used
### V1 (orig/)
- **Monolithic parser**: Single `EventParser` handling all protocols
- **Pipeline pattern**: Multi-stage processing with worker pools
- **Event-driven**: WebSocket subscription to Arbitrum sequencer
- **Connection pooling**: RPC connection management with failover
### V2 (planned)
- **Factory pattern**: Parser factory routes to protocol-specific parsers
- **Strategy pattern**: Per-exchange parsing strategies
- **Observer pattern**: Background validation observes all parsed events
- **Multi-index pattern**: Multiple indexes over same pool data
- **Circuit breaker**: Automatic failover on cascading failures
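A sketch of how the factory routing could look; the `Protocol` values and the trimmed `Parser` interface are assumptions drawn from the planned structure, not final APIs:
```go
package parsers

import "fmt"

// Protocol identifies a DEX protocol; the values here are illustrative.
type Protocol string

const (
	ProtocolUniswapV2 Protocol = "uniswap_v2"
	ProtocolUniswapV3 Protocol = "uniswap_v3"
)

// Parser is a trimmed-down stand-in for the planned parser interface.
type Parser interface {
	SupportedProtocols() []Protocol
}

// Factory routes parse requests to the protocol-specific parser.
type Factory struct {
	parsers map[Protocol]Parser
}

func NewFactory() *Factory {
	return &Factory{parsers: make(map[Protocol]Parser)}
}

// Register associates a parser with a protocol.
func (f *Factory) Register(p Protocol, parser Parser) {
	f.parsers[p] = parser
}

// GetParser returns the parser registered for a protocol, or an error.
func (f *Factory) GetParser(p Protocol) (Parser, error) {
	parser, ok := f.parsers[p]
	if !ok {
		return nil, fmt.Errorf("no parser registered for protocol %q", p)
	}
	return parser, nil
}
```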
## Common Development Tasks
### Analyzing V1 Code
```bash
# Find monolithic parser
cat orig/pkg/events/parser.go
# Review arbitrage detection
cat orig/pkg/arbitrage/detection_engine.go
# Understand pool cache
cat orig/pkg/pools/discovery.go
```
### Creating V2 Components
Follow task breakdown in `docs/planning/07_TASK_BREAKDOWN.md`:
**Example: Creating UniswapV2 Parser (P2-002 through P2-009)**
1. Create `pkg/parsers/uniswap_v2.go`
2. Define struct with logger and cache dependencies
3. Implement `ParseLog()` for Swap events
4. Implement token extraction from pool cache
5. Implement validation rules
6. Add Mint/Burn event support
7. Implement `ParseReceipt()` for multi-event handling
8. Write comprehensive unit tests
9. Integration test with real Arbiscan data
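A hypothetical skeleton covering steps 1 through 3 above; the `Logger`, `PoolCache`, and `Event` types are placeholders for the real shared interfaces defined in the planning docs:
```go
// Hypothetical skeleton for pkg/parsers/uniswap_v2/parser.go (steps 1-3).
package uniswap_v2

import (
	"errors"

	"github.com/ethereum/go-ethereum/core/types"
)

// Logger and PoolCache stand in for the shared interfaces injected at construction.
type Logger interface {
	Info(msg string, fields ...any)
	Error(msg string, fields ...any)
}

type PoolCache interface {
	// Get returns cached pool metadata for a pool address (hex string here for brevity).
	Get(address string) (any, error)
}

// Event is a placeholder for the parsed swap event type.
type Event struct{}

// Parser is the UniswapV2-specific parser with injected dependencies.
type Parser struct {
	logger Logger
	cache  PoolCache
}

// NewParser wires the parser with its logger and pool cache.
func NewParser(logger Logger, cache PoolCache) *Parser {
	return &Parser{logger: logger, cache: cache}
}

// ParseLog is stubbed until later tasks implement Swap event decoding.
func (p *Parser) ParseLog(log *types.Log, tx *types.Transaction) (*Event, error) {
	return nil, errors.New("not implemented")
}
```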
### Testing Strategy
```bash
# Unit tests (when V2 implementation starts)
go test ./pkg/parsers/... -v
# Integration tests
go test ./tests/integration/... -v
# Benchmark parsers
go test ./pkg/parsers/... -bench=. -benchmem
# Load testing
go test ./tests/load/... -v
```
## Git Workflow
### Branch Strategy (STRICTLY ENFORCED)
**ALL V2 development MUST use feature branches:**
```bash
# Branch naming convention (REQUIRED)
feature/v2/<component>/<task-id>-<description>
# Examples:
feature/v2/parsers/P2-002-uniswap-v2-base
feature/v2/cache/P3-001-address-index
feature/v2/validation/P4-001-validation-rules
```
**Branch Rules:**
1. ✅ **ALWAYS** create feature branch from `feature/v2-prep`
2. ✅ **NEVER** commit directly to `feature/v2-prep` or `master-dev`
3. ✅ Branch name MUST match task ID from `07_TASK_BREAKDOWN.md`
4. ✅ One branch per atomic task (< 2 hours work)
5. ✅ Delete branch after merge
**Example Workflow:**
```bash
# 1. Create feature branch
git checkout feature/v2-prep
git pull origin feature/v2-prep
git checkout -b feature/v2/parsers/P2-002-uniswap-v2-base
# 2. Implement task P2-002
# ... make changes ...
# 3. Test with 100% coverage (REQUIRED)
go test ./pkg/parsers/uniswap_v2/... -coverprofile=coverage.out
# MUST show 100% coverage
# 4. Commit
git add .
git commit -m "feat(parsers): implement UniswapV2 parser base structure
- Created UniswapV2Parser struct with dependencies
- Implemented constructor with logger and cache injection
- Stubbed all interface methods
- Added 100% test coverage
Task: P2-002
Coverage: 100%
Tests: 15/15 passing
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
# 5. Push and create PR
git push -u origin feature/v2/parsers/P2-002-uniswap-v2-base
# 6. After merge, delete branch
git branch -d feature/v2/parsers/P2-002-uniswap-v2-base
```
### Commit Message Format
```
type(scope): brief description
- Detailed changes
- Why the change was needed
- Breaking changes or migration notes
Task: [TASK-ID from 07_TASK_BREAKDOWN.md]
Coverage: [100% REQUIRED]
Tests: [X/X passing - MUST be 100%]
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
```
**Types**: `feat`, `fix`, `perf`, `refactor`, `test`, `docs`, `build`, `ci`
**Examples:**
```bash
# Good commit
feat(parsers): implement UniswapV3 swap parsing
- Added ParseSwapEvent for V3 with signed amounts
- Implemented decimal scaling for token precision
- Added validation for sqrtPriceX96 and liquidity
Task: P2-011
Coverage: 100%
Tests: 23/23 passing
# Bad commit (missing task ID, coverage info)
fix: parser bug
```
## Important Notes
### What NOT to Do
- ❌ Modify V1 code in `orig/` (except for critical bugs)
- ❌ Start V2 implementation without reviewing planning docs
- ❌ Skip atomic task breakdown from `07_TASK_BREAKDOWN.md`
- ❌ Implement workarounds instead of fixing root causes
- ❌ Allow zero addresses or zero amounts to propagate
### What TO Do
- ✅ Read `docs/planning/00_V2_MASTER_PLAN.md` before starting
- ✅ Follow task breakdown in `07_TASK_BREAKDOWN.md`
- ✅ Write tests before implementation (TDD)
- ✅ Use strict validation at all layers
- ✅ Add comprehensive logging and metrics
- ✅ Fix root causes, not symptoms
## Key Files to Review
### Planning Documents
- `docs/planning/00_V2_MASTER_PLAN.md` - Complete V2 architecture
- `docs/planning/07_TASK_BREAKDOWN.md` - Atomic task list (99+ hours)
- `orig/README_V1.md` - V1 documentation and known issues
### V1 Reference Implementation
- `orig/pkg/events/parser.go` - Monolithic parser (to be replaced)
- `orig/pkg/monitor/concurrent.go` - Arbitrum monitor (to be enhanced)
- `orig/pkg/pools/discovery.go` - Pool discovery (cache to be multi-indexed)
- `orig/pkg/arbitrage/detection_engine.go` - Arbitrage detection (to be improved)
## Contact and Resources
- V2 Planning: `docs/planning/`
- V1 Reference: `orig/`
- Architecture diagrams: In `00_V2_MASTER_PLAN.md`
- Task breakdown: In `07_TASK_BREAKDOWN.md`
---
**Current Phase**: V2 Planning
**Next Step**: Begin Phase 1 implementation (Foundation)
**Estimated Time**: 12-13 weeks for complete V2 implementation

MODULARITY_REQUIREMENTS.md (new file, 408 lines)

@@ -0,0 +1,408 @@
# V2 Modularity Requirements
## Core Principle: Standalone Components
**Every component MUST be able to run independently OR as part of the integrated system.**
## Component Independence Rules
### 1. Zero Hard Dependencies
- Each component communicates through well-defined interfaces only
- NO direct imports between sibling components
- NO shared state except through explicit interfaces
- Each component has its own configuration
### 2. Standalone Executability
Every component must support:
```go
// Example: Each parser can be used standalone
parser := uniswap_v2.NewParser(logger, cache)
event, err := parser.ParseLog(log, tx)
// OR as part of factory
factory := parsers.NewFactory()
factory.Register(ProtocolUniswapV2, parser)
```
### 3. Interface-First Design
Define interfaces BEFORE implementation:
```go
// pkg/parsers/interface.go
type Parser interface {
ParseLog(log *types.Log, tx *types.Transaction) (*Event, error)
ParseReceipt(receipt *types.Receipt, tx *types.Transaction) ([]*Event, error)
SupportedProtocols() []Protocol
ValidateEvent(event *Event) error
}
// Each parser implements this independently
```
## Component Modularity Matrix
| Component | Standalone Use Case | Integrated Use Case | Interface |
|-----------|-------------------|---------------------|-----------|
| UniswapV2Parser | Parse individual V2 transactions | Part of ParserFactory | `Parser` |
| UniswapV3Parser | Parse individual V3 transactions | Part of ParserFactory | `Parser` |
| PoolCache | Standalone pool lookup service | Shared cache for all parsers | `PoolCache` |
| EventValidator | Validate any event independently | Part of validation pipeline | `Validator` |
| AddressIndex | Standalone address lookup | Part of multi-index cache | `Index` |
| TokenPairIndex | Standalone pair lookup | Part of multi-index cache | `Index` |
| BackgroundValidator | Standalone validation service | Part of monitoring pipeline | `BackgroundValidator` |
## Directory Structure for Modularity
```
pkg/
├── parsers/
│ ├── interface.go # Parser interface (shared)
│ ├── factory.go # Factory for integration
│ ├── uniswap_v2/ # Standalone package
│ │ ├── parser.go # Can be imported independently
│ │ ├── parser_test.go # Self-contained tests
│ │ └── README.md # Standalone usage docs
│ ├── uniswap_v3/ # Standalone package
│ │ ├── parser.go
│ │ ├── parser_test.go
│ │ └── README.md
│ └── sushiswap/ # Standalone package
│ ├── parser.go
│ ├── parser_test.go
│ └── README.md
├── cache/
│ ├── interface.go # Cache interfaces
│ ├── pool_cache.go # Main cache (uses indexes)
│ ├── indexes/ # Standalone index packages
│ │ ├── address/
│ │ │ ├── index.go # Standalone address index
│ │ │ └── index_test.go
│ │ ├── tokenpair/
│ │ │ ├── index.go # Standalone pair index
│ │ │ └── index_test.go
│ │ └── liquidity/
│ │ ├── index.go # Standalone liquidity index
│ │ └── index_test.go
├── validation/
│ ├── interface.go # Validator interface
│ ├── validator.go # Main validator
│ ├── rules/ # Standalone rule packages
│ │ ├── zero_address/
│ │ │ ├── rule.go # Standalone rule
│ │ │ └── rule_test.go
│ │ ├── zero_amount/
│ │ │ ├── rule.go
│ │ │ └── rule_test.go
│ │ └── pool_cache/
│ │ ├── rule.go
│ │ └── rule_test.go
│ └── background/
│ ├── validator.go # Standalone background validator
│ └── validator_test.go
```
## Testing Requirements for Modularity
### 1. Unit Tests (Component Isolation)
Each component has 100% independent unit tests:
```go
// pkg/parsers/uniswap_v2/parser_test.go
func TestParser_Standalone(t *testing.T) {
// NO dependencies on other parsers
// NO dependencies on factory
// Uses mocks for interfaces only
logger := NewMockLogger()
cache := NewMockCache()
parser := NewParser(logger, cache)
event, err := parser.ParseLog(mockLog, mockTx)
assert.NoError(t, err)
assert.NotNil(t, event)
}
```
### 2. Integration Tests (Component Composition)
Test components working together:
```go
// tests/integration/parser_factory_test.go
func TestParserFactory_Integration(t *testing.T) {
// Test all parsers working through factory
factory := parsers.NewFactory()
// Each parser registered independently
factory.Register(ProtocolUniswapV2, uniswap_v2.NewParser(logger, cache))
factory.Register(ProtocolUniswapV3, uniswap_v3.NewParser(logger, cache))
// Test factory routing
parser, err := factory.GetParser(ProtocolUniswapV2)
assert.NoError(t, err)
}
```
### 3. Standalone Executability Tests
Each component has example main:
```go
// examples/uniswap_v2_parser/main.go
func main() {
// Demonstrate standalone usage
logger := logger.New("info", "text", "")
cache := cache.NewPoolCache()
parser := uniswap_v2.NewParser(logger, cache)
// Parse single transaction
event, err := parser.ParseLog(log, tx)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Parsed event: %+v\n", event)
}
```
## Dependency Injection Pattern
All components use constructor injection:
```go
// Bad: Hard dependency
type UniswapV2Parser struct {
cache *PoolCache // Hard dependency!
}
// Good: Interface dependency
type UniswapV2Parser struct {
cache PoolCache // Interface - can be mocked or replaced
}
func NewParser(logger Logger, cache PoolCache) *UniswapV2Parser {
return &UniswapV2Parser{
logger: logger,
cache: cache,
}
}
```
## Configuration Independence
Each component has its own config:
```go
// pkg/parsers/uniswap_v2/config.go
type Config struct {
EnableValidation bool
CacheTimeout time.Duration
MaxRetries int
}
// Standalone usage
config := uniswap_v2.Config{
EnableValidation: true,
CacheTimeout: 5 * time.Minute,
MaxRetries: 3,
}
parser := uniswap_v2.NewParserWithConfig(logger, cache, config)
// Integrated usage
factory.RegisterWithConfig(
ProtocolUniswapV2,
parser,
config,
)
```
## Interface Contracts
### Minimal Interface Surface
Each interface has 1-3 methods maximum:
```go
// Good: Focused interface
type Parser interface {
ParseLog(log *types.Log, tx *types.Transaction) (*Event, error)
}
// Bad: God interface
type Parser interface {
ParseLog(log *types.Log, tx *types.Transaction) (*Event, error)
ParseReceipt(receipt *types.Receipt, tx *types.Transaction) ([]*Event, error)
ValidateEvent(event *Event) error
GetStats() Stats
Configure(config Config) error
// ... too many responsibilities
}
```
### Interface Segregation
Split large interfaces:
```go
// Split responsibilities
type LogParser interface {
ParseLog(log *types.Log, tx *types.Transaction) (*Event, error)
}
type ReceiptParser interface {
ParseReceipt(receipt *types.Receipt, tx *types.Transaction) ([]*Event, error)
}
type EventValidator interface {
ValidateEvent(event *Event) error
}
// Compose as needed
type Parser interface {
LogParser
ReceiptParser
EventValidator
}
```
## Build Tags for Optional Components
Use build tags for optional features:
```go
// pkg/parsers/uniswap_v2/parser.go
//go:build !minimal
// Full implementation with all features
// pkg/parsers/uniswap_v2/parser_minimal.go
//go:build minimal
// Minimal implementation for embedded systems
```
Build options:
```bash
# Full build (all components)
go build ./...
# Minimal build (core only)
go build -tags minimal ./...
# Custom build (specific parsers only)
go build -tags "uniswap_v2 uniswap_v3" ./...
```
## Component Communication Patterns
### 1. Synchronous (Direct Call)
For tight coupling when needed:
```go
event, err := parser.ParseLog(log, tx)
```
### 2. Asynchronous (Channels)
For loose coupling:
```go
eventChan := make(chan *Event, 100)
go parser.ParseAsync(logs, eventChan)
```
### 3. Pub/Sub (Event Bus)
For many-to-many communication:
```go
bus := eventbus.New()
parser.Subscribe(bus, "parsed")
validator.Subscribe(bus, "parsed")
bus.Publish("parsed", event)
```
### 4. Interface Composition
For static composition:
```go
type CompositeParser struct {
v2 *uniswap_v2.Parser
v3 *uniswap_v3.Parser
}
func (c *CompositeParser) Parse(log *types.Log) (*Event, error) {
// Route to appropriate parser
}
```
## Success Criteria for Modularity
Each component MUST:
- [ ] Compile independently (`go build ./pkg/parsers/uniswap_v2`)
- [ ] Test independently (`go test ./pkg/parsers/uniswap_v2`)
- [ ] Run standalone example (`go run examples/uniswap_v2_parser/main.go`)
- [ ] Have zero sibling dependencies
- [ ] Communicate only through interfaces
- [ ] Include standalone usage documentation
- [ ] Pass integration tests when composed
- [ ] Support mock implementations for testing
## Anti-Patterns to Avoid
### ❌ Circular Dependencies
```go
// pkg/parsers/uniswap_v2/parser.go
import "pkg/parsers/uniswap_v3" // BAD!
// pkg/parsers/uniswap_v3/parser.go
import "pkg/parsers/uniswap_v2" // BAD!
```
### ❌ Shared Mutable State
```go
// BAD: Global shared state
var globalCache *PoolCache
func (p *Parser) Parse(log *types.Log) (*Event, error) {
pool := globalCache.Get(log.Address) // BAD!
}
```
### ❌ Hard-coded Dependencies
```go
// BAD: Creates own dependencies
func NewParser() *Parser {
return &Parser{
cache: NewPoolCache(), // BAD! Should be injected
}
}
```
### ❌ Leaky Abstractions
```go
// BAD: Exposes internal structure
type Parser interface {
GetInternalCache() *PoolCache // BAD! Leaks implementation
}
```
## Migration Strategy
When moving from V1 to V2:
1. **Extract Interface**: Define interface from V1 implementation
2. **Create Package**: Move to standalone package
3. **Inject Dependencies**: Replace hard dependencies with interfaces
4. **Add Tests**: Unit tests for standalone operation
5. **Create Example**: Standalone usage example
6. **Document**: README with standalone and integrated usage
7. **Verify**: Check all modularity criteria
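An illustrative sketch of steps 1 and 3, using invented names to show the before/after shape of the dependency:
```go
package migration

// Sketch of migration steps 1-3: extract an interface from the V1 type and
// inject it instead of the concrete dependency. All names are illustrative.

// V1ParserConcrete stands in for the V1 monolithic parser type.
type V1ParserConcrete struct{}

func (p *V1ParserConcrete) ParseLog(data []byte) (string, error) { return "event", nil }

// Step 1: extract the interface the callers actually need.
type LogParser interface {
	ParseLog(data []byte) (string, error)
}

// Step 3: callers depend on the interface and receive it via injection,
// so V1 and V2 implementations are interchangeable.
type Monitor struct {
	parser LogParser
}

func NewMonitor(parser LogParser) *Monitor {
	return &Monitor{parser: parser}
}
```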
## Component Checklist
Before marking any component as "complete":
- [ ] Compiles independently
- [ ] Tests independently (100% coverage)
- [ ] Has standalone example
- [ ] Has README with usage
- [ ] Zero sibling dependencies
- [ ] All dependencies injected through interfaces
- [ ] Can be mocked for testing
- [ ] Integrated into factory/orchestrator
- [ ] Integration tests pass
- [ ] Performance benchmarks exist
- [ ] Documentation complete
---
**Principle**: If you can't run it standalone, it's not modular enough.
**Guideline**: If you can't mock it, it's too coupled.
**Rule**: If it has circular dependencies, redesign it.

PROTOCOL_SUPPORT_REQUIREMENTS.md (new file, 590 lines)

@@ -0,0 +1,590 @@
# V2 Protocol Support Requirements
## Critical Requirement: Complete Protocol Coverage
**Every protocol MUST be parsed correctly with 100% accuracy and 100% test coverage.**
## Supported DEX Protocols (Complete List)
### Uniswap Family
1. **Uniswap V2**
- Constant product AMM (x * y = k)
- Event: `Swap(address indexed sender, uint amount0In, uint amount1In, uint amount0Out, uint amount1Out, address indexed to)`
- Pool info: token0, token1, reserves
- Fee: 0.3% (30 basis points)
2. **Uniswap V3**
- Concentrated liquidity AMM
- Event: `Swap(address indexed sender, address indexed recipient, int256 amount0, int256 amount1, uint160 sqrtPriceX96, uint128 liquidity, int24 tick)`
- Pool info: token0, token1, fee (100/500/3000/10000), tickSpacing, sqrtPriceX96, liquidity, tick
- CRITICAL: Amounts are signed (int256), handle negative values correctly
3. **Uniswap V4** (planned)
- Hooks-based architecture
- Event: TBD (monitor for mainnet deployment)
- Pool info: Dynamic based on hooks
### Curve Finance
4. **Curve StableSwap**
- Stable asset AMM
- Event: `TokenExchange(address indexed buyer, int128 sold_id, uint256 tokens_sold, int128 bought_id, uint256 tokens_bought)`
- Pool info: coins array, A (amplification coefficient), fee
- CRITICAL: Use int128 for token IDs, proper decimal handling
### Balancer
5. **Balancer V2**
- Weighted pool AMM
- Event: `Swap(bytes32 indexed poolId, address indexed tokenIn, address indexed tokenOut, uint256 amountIn, uint256 amountOut)`
- Pool info: poolId, tokens array, weights, swapFee
- CRITICAL: Uses poolId instead of pool address
6. **Balancer V3** (if deployed on Arbitrum)
- Next-gen weighted pools
- Event: Monitor for deployment
- Pool info: TBD
### Kyber Network
7. **Kyber Classic**
- Dynamic reserve AMM
- Event: `KyberTrade(address indexed src, address indexed dest, uint srcAmount, uint dstAmount)`
- Pool info: reserveId, tokens, rate
8. **Kyber Elastic**
- Concentrated liquidity (similar to Uniswap V3)
- Event: `Swap(address indexed sender, address indexed recipient, int256 deltaQty0, int256 deltaQty1, uint160 sqrtP, uint128 liquidity, int24 currentTick)`
- Pool info: token0, token1, swapFeeUnits, tickDistance
- CRITICAL: Different field names than Uniswap V3 but similar math
### Camelot (Arbitrum Native)
9. **Camelot V2**
- Uniswap V2 fork with dynamic fees
- Event: `Swap(address indexed sender, uint amount0In, uint amount1In, uint amount0Out, uint amount1Out, address indexed to)`
- Pool info: token0, token1, stableSwap (boolean), fee0, fee1
- CRITICAL: Fees can be different for token0 and token1
10. **Camelot V3 (Algebra V1)**
- Event: `Swap(address indexed sender, address indexed recipient, int256 amount0, int256 amount1, uint160 price, uint128 liquidity, int24 tick)`
- Pool info: token0, token1, fee, tickSpacing (from factory)
- Algebra V1 specific
11. **Camelot V3 (Algebra V1.9)**
- Enhanced Algebra with adaptive fees
- Event: Same as Algebra V1 but with `communityFee` field
- Pool info: token0, token1, fee, communityFee, tickSpacing
- CRITICAL: Fee can be dynamic
12. **Camelot V3 (Algebra Integral)**
- Latest Algebra version with plugins
- Event: `Swap(address indexed sender, address indexed recipient, int256 amount0, int256 amount1, uint160 price, uint128 liquidity, int24 tick, uint16 fee)`
- Pool info: token0, token1, fee (in event!), tickSpacing, plugin address
- CRITICAL: Fee is emitted in event, not stored in pool
13. **Camelot V3 (Algebra Directional - All Versions)**
- Directional liquidity (different fees for buy/sell)
- Event: `Swap(address indexed sender, address indexed recipient, int256 amount0, int256 amount1, uint160 price, uint128 liquidity, int24 tick, uint16 feeZeroToOne, uint16 feeOneToZero)`
- Pool info: token0, token1, feeZeroToOne, feeOneToZero, tickSpacing
- CRITICAL: Two separate fees based on direction
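When routing raw logs to these parsers, topic0 is the Keccak-256 hash of the canonical event signature. The sketch below computes a few of them with go-ethereum's `crypto` package; the canonicalized signatures (`uint` written as `uint256`) should be verified against the deployed ABIs before use:
```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	// Canonical signatures derived from the event definitions listed above.
	signatures := map[string]string{
		"UniswapV2 Swap":      "Swap(address,uint256,uint256,uint256,uint256,address)",
		"UniswapV3 Swap":      "Swap(address,address,int256,int256,uint160,uint128,int24)",
		"Curve TokenExchange": "TokenExchange(address,int128,uint256,int128,uint256)",
		"Balancer V2 Swap":    "Swap(bytes32,address,address,uint256,uint256)",
	}
	for name, sig := range signatures {
		// topic0 = keccak256(canonical signature); used to dispatch logs to parsers.
		topic0 := crypto.Keccak256Hash([]byte(sig))
		fmt.Printf("%-22s %s\n", name, topic0.Hex())
	}
}
```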
## Required Pool Information Extraction
For EVERY pool discovered, we MUST extract:
### Essential Fields
- `address` - Pool contract address
- `token0` - First token address (MUST NOT be zero address)
- `token1` - Second token address (MUST NOT be zero address)
- `protocol` - Protocol type (UniswapV2, UniswapV3, etc.)
- `poolType` - Pool type (ConstantProduct, Concentrated, StableSwap, etc.)
### Protocol-Specific Fields
#### V2-Style (Uniswap V2, SushiSwap, Camelot V2)
- `reserve0` - Token0 reserves
- `reserve1` - Token1 reserves
- `fee` - Fee in basis points (usually 30 = 0.3%)
#### V3-Style (Uniswap V3, Kyber Elastic, Camelot V3)
- `sqrtPriceX96` - Current price (Q64.96 format)
- `liquidity` - Current liquidity
- `tick` - Current tick
- `tickSpacing` - Tick spacing (from factory)
- `fee` - Fee tier (100/500/3000/10000) OR dynamic fee
#### Curve
- `A` - Amplification coefficient
- `fee` - Fee in basis points
- `coins` - Array of coin addresses (can be > 2)
#### Balancer
- `poolId` - Vault pool ID (bytes32)
- `tokens` - Array of token addresses
- `weights` - Array of token weights
- `swapFee` - Swap fee percentage
### Metadata Fields
- `factory` - Factory contract that created this pool
- `createdBlock` - Block number when pool was created
- `createdTx` - Transaction hash of pool creation
- `lastUpdated` - Timestamp of last update
- `token0Decimals` - Decimals for token0 (CRITICAL for calculations)
- `token1Decimals` - Decimals for token1 (CRITICAL for calculations)
- `token0Symbol` - Symbol for token0 (for logging)
- `token1Symbol` - Symbol for token1 (for logging)
## Parsing Requirements
### 1. Sequencer Event Reading
```go
type SequencerReader interface {
// Subscribe to new blocks
Subscribe(ctx context.Context) (<-chan *types.Block, error)
// Get full transaction receipts
GetReceipts(ctx context.Context, txHashes []common.Hash) ([]*types.Receipt, error)
// Parse block for DEX transactions
ParseBlock(block *types.Block) ([]*Transaction, error)
}
```
### 2. Multi-Protocol Parser
```go
type ProtocolParser interface {
// Identify if transaction is for this protocol
IsProtocolTransaction(tx *types.Transaction) bool
// Parse swap event
ParseSwapEvent(log *types.Log) (*SwapEvent, error)
// Parse mint/burn events
ParseLiquidityEvent(log *types.Log) (*LiquidityEvent, error)
// Extract pool info from logs
ExtractPoolInfo(logs []*types.Log) (*PoolInfo, error)
// Validate parsed data
Validate(event *SwapEvent) error
}
type SwapEvent struct {
PoolAddress common.Address
Token0 common.Address // MUST NOT be zero
Token1 common.Address // MUST NOT be zero
Amount0In *big.Int // MUST NOT be nil or zero (one of In/Out)
Amount0Out *big.Int
Amount1In *big.Int // MUST NOT be nil or zero (one of In/Out)
Amount1Out *big.Int
Sender common.Address
Recipient common.Address
TxHash common.Hash
BlockNumber uint64
LogIndex uint
Timestamp uint64
// V3-specific
SqrtPriceX96 *big.Int
Liquidity *big.Int
Tick int32 // int24 on-chain
// Protocol identification
Protocol Protocol
PoolType PoolType
}
```
### 3. Amount Parsing Rules
**CRITICAL: Proper Decimal Handling**
```go
// Example: Parse Uniswap V2 swap
func (p *UniswapV2Parser) ParseSwap(log *types.Log) (*SwapEvent, error) {
// Decode event
event := new(UniswapV2SwapEvent)
if err := p.abi.UnpackIntoInterface(event, "Swap", log.Data); err != nil {
return nil, fmt.Errorf("decode Swap event: %w", err)
}
// Get token decimals (CRITICAL!)
poolInfo := p.cache.GetPool(log.Address)
token0Decimals := poolInfo.Token0Decimals
token1Decimals := poolInfo.Token1Decimals
// MUST use proper decimal scaling
amount0In := ScaleAmount(event.Amount0In, token0Decimals)
amount0Out := ScaleAmount(event.Amount0Out, token0Decimals)
amount1In := ScaleAmount(event.Amount1In, token1Decimals)
amount1Out := ScaleAmount(event.Amount1Out, token1Decimals)
return &SwapEvent{
Amount0In: amount0In,
Amount0Out: amount0Out,
Amount1In: amount1In,
Amount1Out: amount1Out,
}, nil
}
// Decimal scaling helper
func ScaleAmount(amount *big.Int, decimals uint8) *big.Int {
// Scale to 18 decimals for internal representation
scale := new(big.Int).Exp(
big.NewInt(10),
big.NewInt(int64(18 - decimals)),
nil,
)
return new(big.Int).Mul(amount, scale)
}
```
## Pool Discovery Requirements
### 1. Factory Event Monitoring
```go
type PoolDiscovery interface {
// Monitor factory for pool creation
MonitorFactory(ctx context.Context, factoryAddress common.Address) error
// Discover pools from transaction
DiscoverFromTransaction(tx *types.Transaction, receipt *types.Receipt) ([]*PoolInfo, error)
// Verify pool exists and get info
VerifyPool(ctx context.Context, poolAddress common.Address) (*PoolInfo, error)
// Save discovered pool
SavePool(pool *PoolInfo) error
}
```
### 2. Pool Caching Strategy
```go
type PoolCache interface {
// Add pool to cache
Add(pool *PoolInfo) error
// Get pool by address (O(1))
Get(address common.Address) (*PoolInfo, error)
// Get pools by token pair (O(1))
GetByTokenPair(token0, token1 common.Address) ([]*PoolInfo, error)
// Get pools by protocol (O(1))
GetByProtocol(protocol Protocol) ([]*PoolInfo, error)
// Get top pools by liquidity
GetTopByLiquidity(limit int) ([]*PoolInfo, error)
// Update pool data
Update(address common.Address, updates *PoolUpdates) error
// Save to persistent storage
SaveToDisk(path string) error
// Load from persistent storage
LoadFromDisk(path string) error
}
```
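A minimal in-memory sketch of the multi-index idea behind this interface, covering only `Add`, `Get`, and `GetByTokenPair`; liquidity ranking, persistence, and de-duplication on re-add are omitted:
```go
package cache

import (
	"fmt"
	"sync"

	"github.com/ethereum/go-ethereum/common"
)

// PoolInfo is a stand-in for the full pool record; only the indexed fields matter here.
type PoolInfo struct {
	Address common.Address
	Token0  common.Address
	Token1  common.Address
}

type pairKey struct{ a, b common.Address }

// canonical orders the pair so (A,B) and (B,A) hit the same bucket.
func canonical(t0, t1 common.Address) pairKey {
	if t0.Hex() > t1.Hex() {
		t0, t1 = t1, t0
	}
	return pairKey{t0, t1}
}

// MemoryPoolCache keeps two indexes over the same pools: by address and by token pair.
type MemoryPoolCache struct {
	mu     sync.RWMutex
	byAddr map[common.Address]*PoolInfo
	byPair map[pairKey][]*PoolInfo
}

func NewMemoryPoolCache() *MemoryPoolCache {
	return &MemoryPoolCache{
		byAddr: make(map[common.Address]*PoolInfo),
		byPair: make(map[pairKey][]*PoolInfo),
	}
}

// Add inserts the pool into both indexes.
func (c *MemoryPoolCache) Add(pool *PoolInfo) error {
	if pool == nil {
		return fmt.Errorf("nil pool")
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	c.byAddr[pool.Address] = pool
	key := canonical(pool.Token0, pool.Token1)
	c.byPair[key] = append(c.byPair[key], pool)
	return nil
}

// Get looks a pool up by address in O(1).
func (c *MemoryPoolCache) Get(address common.Address) (*PoolInfo, error) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	pool, ok := c.byAddr[address]
	if !ok {
		return nil, fmt.Errorf("pool %s not cached", address.Hex())
	}
	return pool, nil
}

// GetByTokenPair returns all pools for a pair via an O(1) map lookup.
func (c *MemoryPoolCache) GetByTokenPair(t0, t1 common.Address) ([]*PoolInfo, error) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	pools := c.byPair[canonical(t0, t1)]
	out := make([]*PoolInfo, len(pools))
	copy(out, pools)
	return out, nil
}
```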
### 3. Market Building with Mapping
```go
type MarketBuilder interface {
// Build market from pools
BuildMarket(pools []*PoolInfo) (*Market, error)
// Update market on new swap
UpdateOnSwap(market *Market, swap *SwapEvent) (*PriceMovement, error)
// Get market by token pair (using mapping for O(1) access)
GetMarket(token0, token1 common.Address) (*Market, error)
}
type Market struct {
Token0 common.Address
Token1 common.Address
Token0Decimals uint8 // Needed for exact price calculations
Token1Decimals uint8 // Needed for exact price calculations
Pools map[common.Address]*PoolState // Mapping for O(1) access
BestBid *big.Float // Best price to buy token0
BestAsk *big.Float // Best price to sell token0
MidPrice *big.Float // Mid-market price
Liquidity *big.Int // Total liquidity
LastUpdate uint64 // Timestamp
}
type PoolState struct {
Address common.Address
Protocol Protocol
CurrentPrice *big.Float // With proper decimals
Reserve0 *big.Int
Reserve1 *big.Int
Fee uint32
// V3-specific
SqrtPriceX96 *big.Int
Liquidity *big.Int
Tick int32 // int24 on-chain
}
```
### 4. Price Movement Detection
```go
type PriceMovement struct {
Market *Market
OldPrice *big.Float // Before swap
NewPrice *big.Float // After swap
PriceChange *big.Float // Absolute change
PercentMove float64 // Percentage movement
TriggeredBy *SwapEvent
Timestamp uint64
// Arbitrage opportunity flag
IsArbitrageOpportunity bool
ExpectedProfit *big.Float
}
// CRITICAL: Proper decimal handling in price calculation
func CalculatePriceMovement(market *Market, swap *SwapEvent) (*PriceMovement, error) {
oldPrice := market.MidPrice
// Update pool state with proper decimals
pool := market.Pools[swap.PoolAddress]
pool.Reserve0 = new(big.Int).Sub(pool.Reserve0, swap.Amount0Out)
pool.Reserve0 = new(big.Int).Add(pool.Reserve0, swap.Amount0In)
pool.Reserve1 = new(big.Int).Sub(pool.Reserve1, swap.Amount1Out)
pool.Reserve1 = new(big.Int).Add(pool.Reserve1, swap.Amount1In)
// Calculate new price with EXACT decimal precision
newPrice := CalculatePrice(pool.Reserve0, pool.Reserve1,
market.Token0Decimals, market.Token1Decimals)
// Calculate percentage movement
priceChange := new(big.Float).Sub(newPrice, oldPrice)
percentMove := new(big.Float).Quo(priceChange, oldPrice)
percentMove.Mul(percentMove, big.NewFloat(100))
percent, _ := percentMove.Float64()
return &PriceMovement{
Market: market,
OldPrice: oldPrice,
NewPrice: newPrice,
PriceChange: priceChange,
PercentMove: percent,
TriggeredBy: swap,
Timestamp: swap.Timestamp,
}, nil
}
```
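`CalculatePrice` is referenced above but not defined here; a minimal sketch under the assumption that price is quoted as token1 per token0 for a constant-product pool, with both reserves scaled by their token decimals:
```go
package market

import "math/big"

// CalculatePrice returns the token1-per-token0 price from raw reserves,
// scaling each side by its token decimals so the result is decimal-exact
// regardless of token precision. Assumes reserve0 > 0 (validated upstream).
func CalculatePrice(reserve0, reserve1 *big.Int, dec0, dec1 uint8) *big.Float {
	// Scale each reserve to a human-readable amount: reserve / 10^decimals.
	scale0 := new(big.Float).SetInt(new(big.Int).Exp(big.NewInt(10), big.NewInt(int64(dec0)), nil))
	scale1 := new(big.Float).SetInt(new(big.Int).Exp(big.NewInt(10), big.NewInt(int64(dec1)), nil))

	r0 := new(big.Float).SetPrec(256).SetInt(reserve0)
	r1 := new(big.Float).SetPrec(256).SetInt(reserve1)
	r0.Quo(r0, scale0)
	r1.Quo(r1, scale1)

	// Price of token0 quoted in token1 for a constant-product pool.
	return new(big.Float).SetPrec(256).Quo(r1, r0)
}
```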
## Arbitrage Detection Requirements
### 1. Essential Market Values
```go
type ArbitrageMarket struct {
// Token pair
TokenA common.Address
TokenB common.Address
// All pools for this pair
Pools map[common.Address]*PoolState // O(1) access
// Price quotes from each pool
Quotes map[common.Address]*Quote
// Liquidity depth
LiquidityDepth map[common.Address]*LiquidityBracket
// Best execution path
BestBuyPool common.Address
BestSellPool common.Address
// Arbitrage opportunity
SpreadPercent float64
ExpectedProfit *big.Float
OptimalAmount *big.Int
}
type Quote struct {
Pool common.Address
InputAmount *big.Int
OutputAmount *big.Int
Price *big.Float // With exact decimals
Fee uint32
Slippage float64 // Expected slippage %
}
type LiquidityBracket struct {
Pool common.Address
Amounts []*big.Int // Different trade sizes
Outputs []*big.Int // Expected outputs
Slippages []float64 // Slippage at each amount
}
```
### 2. Arbitrage Calculator
```go
type ArbitrageCalculator interface {
// Find arbitrage opportunities
FindOpportunities(market *ArbitrageMarket) ([]*Opportunity, error)
// Calculate optimal trade size
CalculateOptimalSize(opp *Opportunity) (*big.Int, error)
// Calculate expected profit (after gas)
CalculateProfit(opp *Opportunity, tradeSize *big.Int) (*big.Float, error)
// Build execution transaction
BuildTransaction(opp *Opportunity, tradeSize *big.Int) (*types.Transaction, error)
}
type Opportunity struct {
Market *ArbitrageMarket
BuyPool common.Address
SellPool common.Address
BuyPrice *big.Float // Exact decimals
SellPrice *big.Float // Exact decimals
Spread float64 // Percentage
OptimalAmount *big.Int
ExpectedProfit *big.Float // After fees and gas
GasCost *big.Int
NetProfit *big.Float // After ALL costs
Confidence float64 // 0-1 confidence score
}
```
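A simplified sketch of the spread computation behind `FindOpportunities`; it ignores fees, gas, and slippage, which the real calculator must subtract before flagging an opportunity:
```go
package arbitrage

import "math/big"

// Quote is trimmed to the fields needed for a spread check.
type Quote struct {
	Pool  string
	Price *big.Float // token1 per token0, exact decimals
}

// BestSpread scans the quotes for a pair and returns the best buy (lowest price)
// and best sell (highest price) pools plus the spread as a percentage.
func BestSpread(quotes []Quote) (buy, sell Quote, spreadPct float64) {
	if len(quotes) == 0 {
		return Quote{}, Quote{}, 0
	}
	buy, sell = quotes[0], quotes[0]
	for _, q := range quotes[1:] {
		if q.Price.Cmp(buy.Price) < 0 {
			buy = q
		}
		if q.Price.Cmp(sell.Price) > 0 {
			sell = q
		}
	}
	// spread% = (sell - buy) / buy * 100
	diff := new(big.Float).Sub(sell.Price, buy.Price)
	ratio := new(big.Float).Quo(diff, buy.Price)
	ratio.Mul(ratio, big.NewFloat(100))
	spreadPct, _ = ratio.Float64()
	return buy, sell, spreadPct
}
```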
## Transaction Building Requirements
### 1. Single Execution
```go
type SingleExecutor interface {
// Execute single arbitrage trade
Execute(ctx context.Context, opp *Opportunity) (*types.Transaction, error)
// Build transaction data
BuildTxData(opp *Opportunity) ([]byte, error)
// Estimate gas
EstimateGas(ctx context.Context, txData []byte) (uint64, error)
// Sign and send
SignAndSend(ctx context.Context, tx *types.Transaction) (common.Hash, error)
}
```
### 2. Batch Execution
```go
type BatchExecutor interface {
// Execute multiple arbitrage trades in one transaction
BatchExecute(ctx context.Context, opps []*Opportunity) (*types.Transaction, error)
// Build multicall data
BuildMulticall(opps []*Opportunity) ([]byte, error)
// Optimize batch order for maximum profit
OptimizeBatchOrder(opps []*Opportunity) []*Opportunity
// Calculate batch gas savings
CalculateGasSavings(opps []*Opportunity) (*big.Int, error)
}
// Example multicall structure
type Multicall struct {
Targets []common.Address // Contract addresses
Calldatas [][]byte // Call data for each
Values []*big.Int // ETH value for each
}
```
## Validation Requirements
### 1. Pool Data Validation
```go
// MUST validate ALL fields
func ValidatePoolInfo(pool *PoolInfo) error {
if pool.Address == (common.Address{}) {
return errors.New("pool address is zero")
}
if pool.Token0 == (common.Address{}) {
return errors.New("token0 is zero address")
}
if pool.Token1 == (common.Address{}) {
return errors.New("token1 is zero address")
}
if pool.Token0 == pool.Token1 {
return errors.New("token0 and token1 are the same")
}
if pool.Token0Decimals == 0 || pool.Token0Decimals > 18 {
return errors.New("invalid token0 decimals")
}
if pool.Token1Decimals == 0 || pool.Token1Decimals > 18 {
return errors.New("invalid token1 decimals")
}
// Protocol-specific validation
switch pool.PoolType {
case PoolTypeConstantProduct:
if pool.Reserve0 == nil || pool.Reserve0.Sign() <= 0 {
return errors.New("invalid reserve0")
}
if pool.Reserve1 == nil || pool.Reserve1.Sign() <= 0 {
return errors.New("invalid reserve1")
}
case PoolTypeConcentrated:
if pool.SqrtPriceX96 == nil || pool.SqrtPriceX96.Sign() <= 0 {
return errors.New("invalid sqrtPriceX96")
}
if pool.Liquidity == nil || pool.Liquidity.Sign() < 0 {
return errors.New("invalid liquidity")
}
}
return nil
}
```
### 2. Swap Event Validation
```go
func ValidateSwapEvent(event *SwapEvent) error {
// Zero address checks
if event.Token0 == (common.Address{}) {
return errors.New("token0 is zero address")
}
if event.Token1 == (common.Address{}) {
return errors.New("token1 is zero address")
}
if event.PoolAddress == (common.Address{}) {
return errors.New("pool address is zero")
}
// Amount validation (at least one must be non-zero)
hasAmount0 := (event.Amount0In != nil && event.Amount0In.Sign() > 0) ||
(event.Amount0Out != nil && event.Amount0Out.Sign() > 0)
hasAmount1 := (event.Amount1In != nil && event.Amount1In.Sign() > 0) ||
(event.Amount1Out != nil && event.Amount1Out.Sign() > 0)
if !hasAmount0 {
return errors.New("both amount0In and amount0Out are zero")
}
if !hasAmount1 {
return errors.New("both amount1In and amount1Out are zero")
}
// Logical validation (can't have both in and out for same token)
if event.Amount0In != nil && event.Amount0In.Sign() > 0 &&
event.Amount0Out != nil && event.Amount0Out.Sign() > 0 {
return errors.New("amount0In and amount0Out both positive")
}
return nil
}
```
## Testing Requirements
See `03_TESTING_REQUIREMENTS.md` for comprehensive testing strategy.
Each parser MUST have:
- Unit tests for all event types (100% coverage)
- Integration tests with real Arbiscan data
- Edge case tests (zero amounts, max values, etc.)
- Decimal precision tests
- Gas estimation tests
---
**CRITICAL**: All protocols must be supported. All decimals must be handled correctly. All validation must pass. No exceptions.

TESTING_REQUIREMENTS.md (new file, 714 lines)

@@ -0,0 +1,714 @@
# V2 Testing Requirements
## Non-Negotiable Standards
**100% Test Coverage Required**
**100% Test Passage Required**
**Zero Tolerance for Failures**
## Testing Philosophy
Every line of code MUST be tested. Every edge case MUST be covered. Every failure MUST be fixed before merging.
## Coverage Requirements
### Code Coverage Targets
```bash
# Minimum coverage requirements (ENFORCED)
Overall Project: 100%
Per Package: 100%
Per File: 100%
Branch Coverage: 100%
```
### Coverage Verification
```bash
# Run coverage report
go test ./... -coverprofile=coverage.out -covermode=atomic
# View coverage by package
go tool cover -func=coverage.out
# MUST show 100% for every file
pkg/parsers/uniswap_v2/parser.go:100.0%
pkg/parsers/uniswap_v3/parser.go:100.0%
pkg/cache/pool_cache.go:100.0%
# ... etc
# Generate HTML report
go tool cover -html=coverage.out -o coverage.html
# CI/CD enforcement
coverage=$(go tool cover -func=coverage.out | grep total | awk '{print $3}' | sed 's/%//')
if (( $(echo "$coverage < 100" | bc -l) )); then
echo "FAILED: Coverage below 100%"
exit 1
fi
```
## Test Types and Requirements
### 1. Unit Tests (100% Coverage Required)
Every function, every method, every code path MUST be tested.
```go
// Example: Complete unit test coverage
package uniswap_v2
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestParser_ParseSwapEvent_Success(t *testing.T) {
// Test successful parsing
}
func TestParser_ParseSwapEvent_ZeroAddress(t *testing.T) {
// Test zero address rejection
}
func TestParser_ParseSwapEvent_ZeroAmounts(t *testing.T) {
// Test zero amount rejection
}
func TestParser_ParseSwapEvent_MaxValues(t *testing.T) {
// Test maximum value handling
}
func TestParser_ParseSwapEvent_MinValues(t *testing.T) {
// Test minimum value handling
}
func TestParser_ParseSwapEvent_InvalidLog(t *testing.T) {
// Test invalid log handling
}
func TestParser_ParseSwapEvent_NilTransaction(t *testing.T) {
// Test nil transaction handling
}
func TestParser_ParseSwapEvent_Decimals(t *testing.T) {
// Test decimal precision handling
tests := []struct{
name string
token0Dec uint8
token1Dec uint8
amount0In *big.Int
expected *big.Int
}{
{"USDC-WETH", 6, 18, big.NewInt(1000000), ...},
{"DAI-USDC", 18, 6, big.NewInt(1000000000000000000), ...},
{"WBTC-WETH", 8, 18, big.NewInt(100000000), ...},
}
// Test all combinations
}
// MUST test ALL error paths
func TestParser_ParseSwapEvent_AllErrors(t *testing.T) {
tests := []struct{
name string
log *types.Log
wantErr string
}{
{"nil log", nil, "log is nil"},
{"wrong signature", wrongSigLog, "invalid signature"},
{"malformed data", malformedLog, "failed to decode"},
// ... every possible error
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := parser.ParseSwapEvent(tt.log)
require.Error(t, err)
assert.Contains(t, err.Error(), tt.wantErr)
})
}
}
```
### 2. Integration Tests (Real Data Required)
Test with REAL transactions from Arbiscan.
```go
// Example: Integration test with real Arbiscan data
func TestUniswapV2Parser_RealTransaction(t *testing.T) {
// Load real transaction from Arbiscan
// txHash: 0x1234...
realTx := loadTransaction("testdata/uniswap_v2_swap_0x1234.json")
realReceipt := loadReceipt("testdata/uniswap_v2_receipt_0x1234.json")
parser := NewParser(logger, cache)
events, err := parser.ParseReceipt(realReceipt, realTx)
require.NoError(t, err)
require.Len(t, events, 1)
event := events[0]
// Verify against known values from Arbiscan
assert.Equal(t, "0x...", event.Token0.Hex())
assert.Equal(t, "0x...", event.Token1.Hex())
assert.Equal(t, big.NewInt(1000000), event.Amount0In)
// ... verify all fields match Arbiscan
}
// MUST test multiple real transactions per protocol
func TestUniswapV2Parser_RealTransactions_Comprehensive(t *testing.T) {
testCases := []string{
"0x1234", // Basic swap
"0x5678", // Swap with ETH
"0x9abc", // Multi-hop swap
"0xdef0", // Large amount swap
"0x2468", // Small amount swap
}
for _, txHash := range testCases {
t.Run(txHash, func(t *testing.T) {
// Load and test real transaction
})
}
}
```
### 3. Edge Case Tests (Comprehensive)
Test EVERY edge case imaginable.
```go
func TestParser_EdgeCases(t *testing.T) {
tests := []struct{
name string
setupFunc func() *types.Log
expectError bool
errorMsg string
}{
// Boundary values
{"max uint256", setupMaxUint256, false, ""},
{"min uint256", setupMinUint256, false, ""},
{"zero amount", setupZeroAmount, true, "zero amount"},
// Token edge cases
{"same token0 and token1", setupSameTokens, true, "same token"},
{"token0 > token1 (not sorted)", setupUnsorted, false, ""},
// Decimal edge cases
{"0 decimals", setupZeroDecimals, true, "invalid decimals"},
{"19 decimals", setup19Decimals, true, "invalid decimals"},
{"different decimals", setupDifferentDecimals, false, ""},
// Amount edge cases
{"both in amounts zero", setupBothInZero, true, "zero amount"},
{"both out amounts zero", setupBothOutZero, true, "zero amount"},
{"negative amount (V3)", setupNegativeAmount, false, ""},
// Address edge cases
{"zero token0 address", setupZeroToken0, true, "zero address"},
{"zero token1 address", setupZeroToken1, true, "zero address"},
{"zero pool address", setupZeroPool, true, "zero address"},
{"zero sender", setupZeroSender, true, "zero address"},
// Data edge cases
{"empty log data", setupEmptyData, true, "empty data"},
{"truncated log data", setupTruncatedData, true, "invalid data"},
{"extra log data", setupExtraData, false, ""},
// Overflow cases
{"amount overflow", setupOverflow, false, ""},
{"price overflow", setupPriceOverflow, false, ""},
{"liquidity overflow", setupLiquidityOverflow, false, ""},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
log := tt.setupFunc()
event, err := parser.ParseSwapEvent(log)
if tt.expectError {
require.Error(t, err)
assert.Contains(t, err.Error(), tt.errorMsg)
assert.Nil(t, event)
} else {
require.NoError(t, err)
assert.NotNil(t, event)
}
})
}
}
```
### 4. Decimal Precision Tests (Critical)
MUST test decimal handling with EXACT precision.
```go
func TestDecimalPrecision(t *testing.T) {
tests := []struct{
name string
token0Decimals uint8
token1Decimals uint8
reserve0 *big.Int
reserve1 *big.Int
expectedPrice string // Exact decimal string
}{
{
name: "USDC/WETH (6/18)",
token0Decimals: 6,
token1Decimals: 18,
reserve0: big.NewInt(1000000), // 1 USDC
reserve1: big.NewInt(1e18), // 1 WETH
expectedPrice: "1.000000000000000000", // Exact
},
{
name: "WBTC/WETH (8/18)",
token0Decimals: 8,
token1Decimals: 18,
reserve0: big.NewInt(100000000), // 1 WBTC
reserve1: new(big.Int).Mul(big.NewInt(155), big.NewInt(1e17)), // 15.5 WETH
expectedPrice: "15.500000000000000000", // Exact
},
{
name: "DAI/USDC (18/6)",
token0Decimals: 18,
token1Decimals: 6,
reserve0: big.NewInt(1e18), // 1 DAI
reserve1: big.NewInt(999000), // 0.999 USDC
expectedPrice: "0.999000000000000000", // Exact
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
price := CalculatePrice(
tt.reserve0,
tt.reserve1,
tt.token0Decimals,
tt.token1Decimals,
)
// MUST match exactly
assert.Equal(t, tt.expectedPrice, price.Text('f', 18))
})
}
}
// Test rounding errors
func TestDecimalRounding(t *testing.T) {
// Ensure no precision loss through multiple operations
initial := new(big.Float).SetPrec(256).SetFloat64(1.123456789012345678)
// Simulate multiple swaps
result := initial
for i := 0; i < 1000; i++ {
result = simulateSwap(result)
}
// Should maintain precision
diff := new(big.Float).Abs(new(big.Float).Sub(initial, result))
tolerance := new(big.Float).SetFloat64(1e-15)
assert.True(t, diff.Cmp(tolerance) < 0, "precision loss detected")
}
```
### 5. Concurrency Tests (Thread Safety)
Test concurrent access to shared resources.
```go
func TestPoolCache_Concurrency(t *testing.T) {
cache := NewPoolCache()
// Concurrent writes
var wg sync.WaitGroup
for i := 0; i < 100; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
pool := createTestPool(id)
err := cache.Add(pool)
assert.NoError(t, err)
}(i)
}
// Concurrent reads
for i := 0; i < 100; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
pool, err := cache.Get(testAddress(id))
assert.NoError(t, err)
if pool != nil {
assert.NotNil(t, pool.Token0)
}
}(i)
}
wg.Wait()
// Verify no data corruption
for i := 0; i < 100; i++ {
pool, err := cache.Get(testAddress(i))
require.NoError(t, err)
if pool != nil {
ValidatePoolInfo(pool) // MUST pass validation
}
}
}
// Test race conditions
func TestPoolCache_RaceConditions(t *testing.T) {
// Run with: go test -race
cache := NewPoolCache()
done := make(chan bool)
// Writer goroutine
go func() {
for i := 0; i < 1000; i++ {
cache.Add(createTestPool(i))
}
done <- true
}()
// Reader goroutines
for i := 0; i < 10; i++ {
go func() {
for j := 0; j < 1000; j++ {
cache.Get(testAddress(j % 100))
}
done <- true
}()
}
// Wait for completion
for i := 0; i < 11; i++ {
<-done
}
}
```
### 6. Performance Tests (Benchmarks Required)
MUST benchmark all critical paths.
```go
func BenchmarkParser_ParseSwapEvent(b *testing.B) {
parser := setupParser()
log := createTestLog()
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, _ = parser.ParseSwapEvent(log)
}
}
// MUST meet performance targets
func BenchmarkParser_ParseSwapEvent_Target(b *testing.B) {
parser := setupParser()
log := createTestLog()
b.ResetTimer()
start := time.Now()
for i := 0; i < b.N; i++ {
_, _ = parser.ParseSwapEvent(log)
}
elapsed := time.Since(start)
avgTime := elapsed / time.Duration(b.N)
// MUST complete in < 1ms
if avgTime > time.Millisecond {
b.Fatalf("Too slow: %v per operation (target: < 1ms)", avgTime)
}
}
// Memory allocation benchmarks
func BenchmarkParser_ParseSwapEvent_Allocs(b *testing.B) {
parser := setupParser()
log := createTestLog()
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, _ = parser.ParseSwapEvent(log)
}
// Check allocation count
// Should minimize allocations
}
```
### 7. Protocol-Specific Tests
Each protocol MUST have comprehensive test suite.
```go
// Uniswap V2
func TestUniswapV2_AllEventTypes(t *testing.T) {
tests := []string{"Swap", "Mint", "Burn", "Sync"}
// Test all event types
}
// Uniswap V3
func TestUniswapV3_AllEventTypes(t *testing.T) {
tests := []string{"Swap", "Mint", "Burn", "Flash", "Collect"}
// Test all event types + V3 specific logic
}
// Curve
func TestCurve_AllPoolTypes(t *testing.T) {
tests := []string{"StableSwap", "CryptoSwap", "Tricrypto"}
// Test all Curve variants
}
// Camelot Algebra versions
func TestCamelot_AllAlgebraVersions(t *testing.T) {
tests := []string{"AlgebraV1", "AlgebraV1.9", "AlgebraIntegral", "AlgebraDirectional"}
// Test all Algebra variants with different fee structures
}
```
### 8. Integration Test Suite
Complete end-to-end testing.
```go
func TestE2E_FullPipeline(t *testing.T) {
// Setup full system
monitor := setupMonitor()
factory := setupParserFactory()
cache := setupCache()
validator := setupValidator()
arbDetector := setupArbitrageDetector()
// Feed real block data
block := loadRealBlock("testdata/block_12345.json")
// Process through full pipeline
txs := monitor.ParseBlock(block)
for _, tx := range txs {
events, err := factory.ParseTransaction(tx)
require.NoError(t, err)
for _, event := range events {
// Validate
err = validator.Validate(event)
require.NoError(t, err)
// Update cache
cache.UpdateFromEvent(event)
// Check for arbitrage
opps, err := arbDetector.FindOpportunities(event)
require.NoError(t, err)
for _, opp := range opps {
// Verify opportunity is valid
ValidateOpportunity(opp)
}
}
}
// Verify final state
assert.True(t, cache.Size() > 0)
assert.True(t, validator.GetStats().TotalValidated > 0)
}
```
## Test Data Requirements
### 1. Real Transaction Data
Store real Arbiscan data in `testdata/`:
```
testdata/
├── uniswap_v2/
│ ├── swap_0x1234.json
│ ├── mint_0x5678.json
│ └── burn_0x9abc.json
├── uniswap_v3/
│ ├── swap_0xdef0.json
│ ├── mint_0x2468.json
│ └── flash_0x1357.json
├── curve/
│ └── exchange_0xace0.json
└── camelot/
├── algebra_v1_swap_0xfff0.json
├── algebra_integral_swap_0xeee0.json
└── algebra_directional_swap_0xddd0.json
```
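The integration tests above call `loadTransaction`/`loadReceipt` helpers that are not shown; a possible receipt loader (with a `*testing.T` parameter added for error reporting) might look like this, assuming fixtures are stored in go-ethereum's standard receipt JSON encoding:
```go
package integration // lives in a _test.go helper file

import (
	"encoding/json"
	"os"
	"testing"

	"github.com/ethereum/go-ethereum/core/types"
)

// loadReceipt reads a receipt fixture exported from Arbiscan (or an archive node)
// into go-ethereum's types.Receipt. Adjust the decoding if fixtures are stored
// in a custom shape rather than the standard receipt JSON layout.
func loadReceipt(t *testing.T, path string) *types.Receipt {
	t.Helper()
	raw, err := os.ReadFile(path)
	if err != nil {
		t.Fatalf("read fixture %s: %v", path, err)
	}
	var receipt types.Receipt
	if err := json.Unmarshal(raw, &receipt); err != nil {
		t.Fatalf("decode fixture %s: %v", path, err)
	}
	return &receipt
}
```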
### 2. Test Data Generation
```go
// Generate comprehensive test data
func GenerateTestData() {
protocols := []Protocol{
ProtocolUniswapV2,
ProtocolUniswapV3,
ProtocolCurve,
// ... all protocols
}
for _, protocol := range protocols {
// Generate edge cases
generateZeroAddressCases(protocol)
generateZeroAmountCases(protocol)
generateMaxValueCases(protocol)
generateDecimalCases(protocol)
generateOverflowCases(protocol)
}
}
```
## CI/CD Integration
### Pre-commit Hooks
```bash
#!/bin/bash
# .git/hooks/pre-commit
# Run tests
go test ./... -v
if [ $? -ne 0 ]; then
echo "Tests failed. Commit rejected."
exit 1
fi
# Check coverage
coverage=$(go test ./... -coverprofile=coverage.out -covermode=atomic | \
go tool cover -func=coverage.out | \
grep total | \
awk '{print $3}' | \
sed 's/%//')
if (( $(echo "$coverage < 100" | bc -l) )); then
echo "Coverage is ${coverage}% (required: 100%). Commit rejected."
exit 1
fi
echo "All tests passed with 100% coverage. ✓"
```
### GitHub Actions
```yaml
name: Tests
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.24'
- name: Run tests
run: go test ./... -v -race -coverprofile=coverage.out -covermode=atomic
- name: Check coverage
run: |
coverage=$(go tool cover -func=coverage.out | grep total | awk '{print $3}' | sed 's/%//')
if (( $(echo "$coverage < 100" | bc -l) )); then
echo "Coverage is ${coverage}% (required: 100%)"
exit 1
fi
echo "Coverage: ${coverage}% ✓"
- name: Upload coverage
uses: codecov/codecov-action@v3
with:
files: ./coverage.out
fail_ci_if_error: true
```
## Test Execution Commands
```bash
# Run all tests with coverage
go test ./... -v -race -coverprofile=coverage.out -covermode=atomic
# Run specific package tests
go test ./pkg/parsers/uniswap_v2/... -v
# Run with race detector
go test ./... -race
# Run benchmarks
go test ./... -bench=. -benchmem
# Run only unit tests
go test ./... -short
# Run integration tests
go test ./... -run Integration
# Generate coverage report
go tool cover -html=coverage.out -o coverage.html
# Check coverage percentage
go tool cover -func=coverage.out | grep total
```
## Coverage Enforcement Rules
1. **No merging without 100% coverage**
- PR CI/CD must show 100% coverage
- Manual override NOT allowed
2. **No skipping tests**
- `t.Skip()` NOT allowed except for known external dependencies
- Must document reason for skip
3. **No ignoring failures**
- All test failures MUST be fixed
- Cannot merge with failing tests
4. **Coverage for all code paths**
- Every `if` statement tested (both branches)
- Every `switch` case tested
- Every error path tested
- Every success path tested
## Test Documentation
Each test file MUST include:
```go
/*
Package uniswap_v2_test provides comprehensive test coverage for UniswapV2Parser.
Test Coverage:
- ParseSwapEvent: 100%
- ParseMintEvent: 100%
- ParseBurnEvent: 100%
- ValidateEvent: 100%
Edge Cases Tested:
- Zero addresses (rejected)
- Zero amounts (rejected)
- Maximum values (accepted)
- Decimal precision (verified)
- Concurrent access (safe)
Real Data Tests:
- 5 real Arbiscan transactions
- All event types covered
- Multiple pool types tested
Performance:
- Parse time: < 1ms (verified)
- Memory: < 1KB per parse (verified)
*/
```
---
**ABSOLUTE REQUIREMENT**: 100% coverage, 100% passage, zero tolerance for failures.