chore(ai): add comprehensive CLI configurations for all AI assistants

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
Krypto Kajun
2025-09-14 10:09:55 -05:00
parent 2c4f663728
commit a410f637cd
34 changed files with 2391 additions and 5 deletions


@@ -0,0 +1,39 @@
# Analyze Bottlenecks
Analyze performance bottlenecks in the following area: $ARGUMENTS
## Analysis Steps:
1. **CPU Profiling**: Identify CPU-intensive functions and hot paths
2. **Memory Profiling**: Check for memory leaks and high allocation patterns
3. **Goroutine Analysis**: Look for goroutine leaks and blocking operations
4. **I/O Performance**: Analyze network and disk I/O patterns
5. **Concurrency Issues**: Check for race conditions and lock contention
## Profiling Commands:
```bash
# CPU profile with detailed analysis
go tool pprof -top -cum "http://localhost:9090/debug/pprof/profile?seconds=60"
# Memory profile with allocation details
go tool pprof -alloc_space http://localhost:9090/debug/pprof/heap
# Goroutine blocking profile
go tool pprof http://localhost:9090/debug/pprof/block
# Mutex contention profile
go tool pprof http://localhost:9090/debug/pprof/mutex
```
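Note that the block and mutex profiles above stay empty unless profiling is enabled inside the bot process. A minimal sketch, assuming the service exposes pprof via `net/http/pprof` on port 9090 as the URLs above imply:
```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on http.DefaultServeMux
	"runtime"
)

func main() {
	// Record every blocking event (rate is in nanoseconds spent blocked; 1 = everything).
	runtime.SetBlockProfileRate(1)
	// Report roughly 1 in 5 mutex contention events.
	runtime.SetMutexProfileFraction(5)

	// Serve the pprof endpoints that the commands above query.
	go func() {
		log.Println(http.ListenAndServe("localhost:9090", nil))
	}()

	select {} // stand-in for the bot's real work
}
```
Enabling these profiles has a small runtime cost, so the sampling rates should be tuned, or gated behind a flag, in production.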
## Analysis Focus Areas:
- Worker pool efficiency in `pkg/market/pipeline.go`
- Event parsing performance in `pkg/events/`
- Uniswap math calculations in `pkg/uniswap/`
- Memory usage in large transaction processing
- Rate limiting effectiveness in `internal/ratelimit/`
## Output Requirements:
- Detailed bottleneck analysis with percentages
- Flame graphs and performance visualizations
- Root cause identification for top bottlenecks
- Optimization recommendations with expected impact
- Priority ranking of issues


@@ -0,0 +1,51 @@
# Improve Latency Performance
Improve latency performance for the following critical path: $ARGUMENTS
## Latency Optimization Strategy:
### 1. **Latency Analysis**
- Measure current end-to-end latency
- Identify latency components (network, computation, I/O)
- Analyze latency distribution and outliers
### 2. **Optimization Areas**
#### **Network Latency**
- Connection pooling for RPC calls (see the sketch after this list)
- Request batching and pipelining
- Caching frequently accessed data
- Asynchronous processing patterns
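For the connection-pooling item above, a minimal sketch of an HTTP client kept warm for a single JSON-RPC endpoint; the pool sizes and timeouts are illustrative assumptions, not values from this repository:
```go
package main

import (
	"net/http"
	"time"
)

// newRPCClient returns an http.Client that keeps connections to the RPC node
// warm, so each call avoids a fresh TCP and TLS handshake on the hot path.
func newRPCClient() *http.Client {
	transport := &http.Transport{
		MaxIdleConns:        100,              // total idle connections kept across hosts
		MaxIdleConnsPerHost: 32,               // idle connections kept to the single RPC host
		IdleConnTimeout:     90 * time.Second, // recycle idle connections after this long
	}
	return &http.Client{
		Transport: transport,
		Timeout:   2 * time.Second, // hard cap per request; tune to the latency budget
	}
}

func main() {
	client := newRPCClient()
	_ = client // the bot would reuse this one client for all RPC calls
}
```
Reusing a single pooled client for every RPC call is what removes repeated handshakes from the critical path.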
#### **Computational Latency**
- Algorithmic complexity reduction
- Lookup table implementation
- Parallel processing opportunities
- Precision vs. performance trade-offs
#### **I/O Latency**
- Buffering and streaming optimizations
- Disk I/O patterns
- Database query optimization
- File system caching
### 3. **MEV Bot Specific Optimizations**
#### **Critical Path Components**
- Transaction detection and parsing (< 10 microseconds target)
- Market analysis and arbitrage calculation
- Opportunity evaluation and ranking
- Execution decision making
## Implementation Guidelines:
- Measure latency at each component
- Focus on 95th and 99th percentile improvements (see the measurement sketch after this list)
- Ensure deterministic performance characteristics
- Maintain accuracy while improving speed
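As one way to track the percentile targets above, a small sketch that records per-call durations and reports p95/p99; in a real deployment this would more likely be a histogram metric, and every name here is illustrative:
```go
package main

import (
	"fmt"
	"math/rand"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0 < p <= 100) of recorded durations.
func percentile(samples []time.Duration, p float64) time.Duration {
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := int(float64(len(sorted)-1) * p / 100.0)
	return sorted[idx]
}

func main() {
	var samples []time.Duration
	for i := 0; i < 1000; i++ {
		start := time.Now()
		// Stand-in for the critical-path work being measured.
		time.Sleep(time.Duration(rand.Intn(200)) * time.Microsecond)
		samples = append(samples, time.Since(start))
	}
	fmt.Printf("p95=%v p99=%v\n", percentile(samples, 95), percentile(samples, 99))
}
```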
## Deliverables:
- Latency benchmark results (before/after)
- Latency distribution analysis
- Optimization documentation
- Monitoring and alerting for latency regressions
- Performance vs. accuracy trade-off analysis


@@ -0,0 +1,68 @@
# Optimize Performance
Optimize the performance of the following component in the MEV bot: $ARGUMENTS
## Performance Optimization Strategy:
### 1. **Profiling and Measurement**
```bash
# CPU profiling
go tool pprof "http://localhost:9090/debug/pprof/profile?seconds=30"
# Memory profiling
go tool pprof http://localhost:9090/debug/pprof/heap
# Goroutine analysis
go tool pprof http://localhost:9090/debug/pprof/goroutine
# Mutex contention analysis
go tool pprof http://localhost:9090/debug/pprof/mutex
```
### 2. **Optimization Areas**
#### **Concurrency Optimization**
- Worker pool sizing and configuration
- Channel buffer optimization
- Goroutine pooling and reuse
- Lock contention reduction
- Context cancellation patterns
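A minimal sketch of the context-cancellation item above: a consumer that exits promptly when its context is cancelled instead of leaking; channel sizes and timings are illustrative:
```go
package main

import (
	"context"
	"fmt"
	"time"
)

// consume drains jobs until the context is cancelled or the channel closes,
// so shutdown never leaves the goroutine blocked on a receive.
func consume(ctx context.Context, jobs <-chan int) {
	for {
		select {
		case <-ctx.Done():
			fmt.Println("worker stopping:", ctx.Err())
			return
		case j, ok := <-jobs:
			if !ok {
				return
			}
			_ = j // stand-in for real processing
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	jobs := make(chan int, 8)
	go consume(ctx, jobs)
	jobs <- 1
	<-ctx.Done()                      // wait for the timeout to trigger cancellation
	time.Sleep(10 * time.Millisecond) // give the worker a moment to log and exit
}
```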
#### **Memory Optimization**
- Object pooling for frequent allocations
- Buffer reuse patterns
- Garbage collection tuning
- Memory leak prevention
- Slice and map pre-allocation
#### **Algorithm Optimization**
- Computational complexity reduction
- Data structure selection
- Caching strategies
- Lookup table implementation
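For the lookup-table item above, a minimal sketch that replaces a per-call computation with a table precomputed at startup; the fee tiers are the standard Uniswap v3 ones, but the helper and its float math are illustrative only (the real code would presumably use fixed-point arithmetic):
```go
package main

import "fmt"

// feeToMultiplier maps a pool fee (in hundredths of a basis point) to the
// multiplier applied to input amounts, precomputed once instead of being
// recomputed on every swap evaluation.
var feeToMultiplier = map[uint32]float64{
	500:   1 - 0.0005, // 0.05% tier
	3000:  1 - 0.003,  // 0.30% tier
	10000: 1 - 0.01,   // 1.00% tier
}

func amountAfterFee(amountIn float64, feeTier uint32) float64 {
	if m, ok := feeToMultiplier[feeTier]; ok {
		return amountIn * m
	}
	// Fall back to the direct formula for unknown tiers.
	return amountIn * (1 - float64(feeTier)/1_000_000)
}

func main() {
	fmt.Println(amountAfterFee(1000, 3000)) // 997
}
```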
### 3. **MEV Bot Specific Optimizations**
#### **Transaction Processing**
- Parallel transaction processing
- Event filtering optimization
- Batch processing strategies
#### **Market Analysis**
- Price calculation caching
- Pool data caching
- Historical data indexing
## Implementation Guidelines:
- Measure before optimizing (baseline metrics)
- Focus on bottlenecks identified through profiling
- Maintain code readability and maintainability
- Add performance regression tests (see the benchmark sketch after this list)
- Document performance characteristics
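A minimal sketch of such a baseline benchmark; `parseEvent` and the package name are hypothetical stand-ins for whichever hot path is being optimized:
```go
package events

import "testing"

// parseEvent is a hypothetical stand-in for the code path under optimization.
func parseEvent(data []byte) int {
	total := 0
	for _, b := range data {
		total += int(b)
	}
	return total
}

// BenchmarkParseEvent records a baseline for both speed and allocations, so a
// later optimization can be compared against it.
func BenchmarkParseEvent(b *testing.B) {
	payload := make([]byte, 512)
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = parseEvent(payload)
	}
}
```
Run with `go test -bench=ParseEvent -benchmem` before and after the change, and compare the two outputs with `benchstat`.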
## Deliverables:
- Performance benchmark results (before/after)
- Optimized code with maintained functionality
- Performance monitoring enhancements
- Optimization documentation
- Regression test suite


@@ -0,0 +1,52 @@
# Reduce Memory Allocations
Reduce memory allocations in the following hot path: $ARGUMENTS
## Memory Optimization Strategy:
### 1. **Allocation Analysis**
- Identify high-frequency allocation points
- Measure current allocation rates and patterns
- Analyze garbage collection pressure
### 2. **Optimization Techniques**
#### **Object Pooling**
- Implement sync.Pool for frequently created objects
- Pool buffers, structs, and temporary objects
- Use proper reset patterns when returning objects to the pool (sketched below)
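A minimal sketch of the pool-plus-reset pattern above; the `event` struct is a hypothetical placeholder for whatever object is allocated on the hot path:
```go
package main

import (
	"fmt"
	"sync"
)

// event is a hypothetical hot-path object that would otherwise be allocated
// for every processed log.
type event struct {
	Topics []string
	Data   []byte
}

// reset clears the object without freeing its backing arrays, so the next
// user reuses the existing capacity.
func (e *event) reset() {
	e.Topics = e.Topics[:0]
	e.Data = e.Data[:0]
}

var eventPool = sync.Pool{
	New: func() any { return new(event) },
}

func process(payload []byte) {
	ev := eventPool.Get().(*event)
	defer func() {
		ev.reset() // always reset before returning to the pool
		eventPool.Put(ev)
	}()

	ev.Data = append(ev.Data, payload...)
	fmt.Println("processed", len(ev.Data), "bytes")
}

func main() {
	process([]byte("swap log"))
}
```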
#### **Pre-allocation**
- Pre-allocate slices and maps when the size is predictable (see the sketch after this list)
- Reuse existing data structures
- Avoid repeated allocations in loops
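A minimal sketch of the pre-allocation item above, contrasting a slice grown by repeated `append` with one sized up front; the sizes are illustrative:
```go
package main

import "fmt"

// collectGrowing appends into a nil slice, forcing several reallocations as
// the backing array grows.
func collectGrowing(n int) []int {
	var out []int
	for i := 0; i < n; i++ {
		out = append(out, i)
	}
	return out
}

// collectPreallocated sizes the slice once, so the loop performs no further
// allocations.
func collectPreallocated(n int) []int {
	out := make([]int, 0, n)
	for i := 0; i < n; i++ {
		out = append(out, i)
	}
	return out
}

func main() {
	fmt.Println(len(collectGrowing(1024)), len(collectPreallocated(1024)))
}
```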
#### **Buffer Reuse**
- Reuse byte buffers and string builders
- Implement buffer pools for I/O operations
- Minimize string concatenation
### 3. **MEV Bot Specific Optimizations**
#### **Transaction Processing**
- Pool transaction objects and event structures
- Reuse parsing buffers
- Optimize log and metric object creation
#### **Mathematical Calculations**
- Pool uint256 and big.Int objects
- Reuse temporary calculation buffers
- Optimize precision object handling
## Implementation Guidelines:
- Measure allocation reduction with benchmarks
- Monitor garbage collection statistics
- Ensure thread safety in pooled objects
- Maintain code readability and maintainability
## Deliverables:
- Memory allocation reduction benchmarks
- Optimized code with pooling strategies
- GC pressure analysis before and after
- Memory usage monitoring enhancements
- Best practices documentation for team


@@ -0,0 +1,55 @@
# Tune Concurrency Patterns
Tune concurrency patterns for the following component: $ARGUMENTS
## Concurrency Tuning Strategy:
### 1. **Current Pattern Analysis**
- Identify existing concurrency patterns (worker pools, pipelines, etc.)
- Measure current performance and resource utilization
- Analyze bottlenecks in concurrent processing
### 2. **Optimization Areas**
#### **Worker Pool Tuning**
- Optimal worker count based on CPU cores and workload (see the pool sketch after this list)
- Channel buffer sizing for backpressure management
- Task distribution strategies
- Worker lifecycle management
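A minimal sketch of a worker pool whose size and channel buffer are explicit knobs, in the spirit of the tuning points above; the defaults shown are illustrative, not taken from `pkg/market/pipeline.go`:
```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// runPool processes jobs with `workers` goroutines reading from a buffered
// channel; the buffer provides backpressure instead of unbounded queuing.
func runPool(workers, buffer int, jobs []int) []int {
	in := make(chan int, buffer)
	out := make(chan int, buffer)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range in {
				out <- j * j // stand-in for real work
			}
		}()
	}

	// Feed the pool, then signal completion by closing the input channel.
	go func() {
		for _, j := range jobs {
			in <- j
		}
		close(in)
	}()

	// Close the output once every worker has drained its input.
	go func() {
		wg.Wait()
		close(out)
	}()

	var results []int
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	workers := runtime.NumCPU() // a common starting point for CPU-bound work
	fmt.Println(len(runPool(workers, 2*workers, []int{1, 2, 3, 4, 5})))
}
```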
#### **Pipeline Optimization**
- Stage balancing to prevent bottlenecks
- Buffer sizing between pipeline stages
- Error propagation and recovery
- Context cancellation handling
#### **Fan-in/Fan-out Patterns**
- Optimal fan-out ratios
- Result merging strategies (see the fan-in sketch after this list)
- Resource allocation across branches
- Synchronization mechanisms
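For the result-merging item above, a minimal fan-in sketch that combines several result channels into one; the channel contents are illustrative:
```go
package main

import (
	"fmt"
	"sync"
)

// merge fans results from several channels into a single channel and closes
// the output once every input has been drained.
func merge(inputs ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	wg.Add(len(inputs))
	for _, ch := range inputs {
		go func(c <-chan int) {
			defer wg.Done()
			for v := range c {
				out <- v
			}
		}(ch)
	}
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

func main() {
	a, b := make(chan int, 2), make(chan int, 2)
	a <- 1
	a <- 2
	b <- 3
	close(a)
	close(b)
	for v := range merge(a, b) {
		fmt.Println(v)
	}
}
```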
### 3. **MEV Bot Specific Tuning**
#### **Transaction Processing**
- Optimal concurrent transaction processing
- Event parsing parallelization
- Memory usage per goroutine
#### **Market Analysis**
- Concurrent pool data fetching
- Parallel arbitrage calculations
- Resource sharing between analysis tasks
## Implementation Guidelines:
- Test with realistic workload patterns
- Monitor resource utilization (CPU, memory, goroutines)
- Ensure graceful degradation under load
- Maintain error handling and recovery mechanisms
## Deliverables:
- Concurrency tuning recommendations
- Performance benchmarks with different configurations
- Resource utilization analysis
- Configuration guidelines for different environments
- Monitoring and alerting for concurrency issues