chore(ai): add comprehensive CLI configurations for all AI assistants
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
289  .gemini/GEMINI.md  (new file)
@@ -0,0 +1,289 @@
# Gemini CLI Configuration

This directory contains Gemini configuration and tools for the MEV Bot project.

## 🚀 Quick Start Commands

### Essential Build & Test Commands
```bash
# Build the MEV bot binary
make build

# Run performance tests
.gemini/scripts/perf-test.sh

# Run optimization analysis
.gemini/scripts/optimize.sh

# Run benchmarks with profiling
make bench-profile

# Run concurrency tests
make test-concurrent

# Check for Go modules issues
go mod tidy && go mod verify

# Run linter with performance focus
golangci-lint run --fast
```

### Development Workflow Commands
```bash
# Setup development environment
export ARBITRUM_RPC_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/73bc682fe9c5bd23b42ef40f752fa89a"
export ARBITRUM_WS_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/73bc682fe9c5bd23b42ef40f752fa89a"
export METRICS_ENABLED="true"

# Run with timeout for testing
timeout 60 ./mev-bot start

# Debug with verbose logging
LOG_LEVEL=debug ./mev-bot start

# Profile performance with detailed analysis
.gemini/scripts/profile.sh
```

## Gemini Commands Directory

The `.gemini/commands/` directory contains predefined commands for performance optimization tasks:

- `optimize-performance.md` - Optimize application performance
- `analyze-bottlenecks.md` - Analyze performance bottlenecks
- `tune-concurrency.md` - Tune concurrency patterns
- `reduce-memory.md` - Reduce memory allocations
- `improve-latency.md` - Improve latency performance

## Gemini Settings

The `.gemini/settings.json` file contains Gemini's performance optimization configuration:

```json
{
  "focus_areas": [
    "Algorithmic Implementations",
    "Performance Optimization",
    "Concurrency Patterns",
    "Memory Management"
  ],
  "primary_skills": [
    "Optimizing mathematical calculations for performance",
    "Profiling and identifying bottlenecks in critical paths",
    "Reducing memory allocations in hot code paths",
    "Optimizing concurrency patterns for maximum throughput"
  ],
  "performance_optimization": {
    "enabled": true,
    "profiling": {
      "cpu": true,
      "memory": true,
      "goroutine": true,
      "mutex": true
    },
    "optimization_targets": [
      "Reduce latency to < 10 microseconds for critical path",
      "Achieve > 100,000 messages/second throughput",
      "Minimize memory allocations in hot paths",
      "Optimize garbage collection tuning"
    ]
  },
  "concurrency": {
    "worker_pools": true,
    "pipeline_patterns": true,
    "fan_in_out": true,
    "backpressure_handling": true
  },
  "benchmarking": {
    "baseline_comparison": true,
    "regression_detection": true,
    "continuous_monitoring": true
  }
}
```

## 📋 Development Guidelines & Code Style

### Performance Optimization Guidelines
- **Profiling First**: Always profile before optimizing
- **Measure Impact**: Measure the performance impact of every change
- **Maintain Readability**: Don't sacrifice code readability for marginal gains
- **Focus on Hot Paths**: Optimize critical code paths first
- **Test Regressions**: Ensure optimizations don't cause regressions

### Concurrency Patterns
- **Worker Pools**: Use worker pools for parallel processing
- **Pipeline Patterns**: Implement pipeline patterns for multi-stage processing
- **Fan-in/Fan-out**: Use fan-in/fan-out patterns for data distribution
- **Backpressure Handling**: Implement proper backpressure handling

### Memory Management
- **Object Pooling**: Use sync.Pool for frequently created objects
- **Pre-allocation**: Pre-allocate slices and maps when size is known
- **Minimize Allocations**: Reduce allocations in hot paths
- **GC Tuning**: Properly tune garbage collection

### Required Checks Before Commit
```bash
# Run performance tests
.gemini/scripts/perf-test.sh

# Run benchmark comparisons
.gemini/scripts/bench-compare.sh

# Check for performance regressions
.gemini/scripts/regression-check.sh
```

## Gemini's Primary Focus Areas

As Gemini, you're particularly skilled at:

1. **Algorithmic Implementations and Mathematical Computations**
   - Implementing precise Uniswap V3 pricing functions
   - Optimizing mathematical calculations for performance
   - Ensuring numerical stability and precision
   - Creating efficient algorithms for arbitrage detection

2. **Optimizing Performance and Efficiency**
   - Profiling and identifying bottlenecks in critical paths
   - Reducing memory allocations in hot code paths
   - Optimizing concurrency patterns for maximum throughput
   - Tuning garbage collection for low-latency requirements

3. **Understanding Complex Uniswap V3 Pricing Functions**
   - Implementing accurate tick and sqrtPriceX96 conversions
   - Calculating price impact with proper precision handling
   - Working with liquidity and fee calculations
   - Handling edge cases in pricing mathematics

4. **Implementing Concurrent and Parallel Processing Patterns**
   - Designing efficient worker pool implementations
   - Creating robust pipeline processing systems
   - Managing synchronization primitives correctly
   - Preventing race conditions and deadlocks

5. **Working with Low-Level System Operations**
   - Optimizing memory usage and allocation patterns
   - Tuning system-level parameters for performance
   - Implementing efficient data structures for high-frequency access
   - Working with CPU cache optimization techniques

## 🛠 Gemini Optimization Settings

### Workflow Preferences
- **Always profile first**: Use `go tool pprof` before making changes
- **Branch naming**: Use prefixes (`perf-worker-pool`, `opt-pipeline`, `tune-gc`)
- **Context management**: Focus on performance metrics and bottlenecks
- **Continuous monitoring**: Implement monitoring for performance metrics

### File Organization Preferences
- **Performance-critical code**: Place in `pkg/market/` or `pkg/monitor/`
- **Benchmark files**: Place alongside source files with `_bench_test.go` suffix
- **Profiling scripts**: Place in `.gemini/scripts/`
- **Optimization documentation**: Inline comments explaining optimization approaches

### Performance Monitoring
```bash
# Enable detailed metrics endpoint
export METRICS_ENABLED="true"
export METRICS_PORT="9090"

# Monitor all performance aspects
go tool pprof "http://localhost:9090/debug/pprof/profile?seconds=30"
go tool pprof http://localhost:9090/debug/pprof/heap
go tool pprof http://localhost:9090/debug/pprof/goroutine
go tool pprof http://localhost:9090/debug/pprof/mutex

# Run comprehensive performance analysis
.gemini/scripts/profile.sh
```

## 🔧 Environment Configuration

### Required Environment Variables
```bash
# Arbitrum RPC Configuration
export ARBITRUM_RPC_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/73bc682fe9c5bd23b42ef40f752fa89a"
export ARBITRUM_WS_ENDPOINT="wss://arbitrum-mainnet.core.chainstack.com/73bc682fe9c5bd23b42ef40f752fa89a"

# Application Configuration
export LOG_LEVEL="info"
export METRICS_ENABLED="true"
export METRICS_PORT="9090"

# Development Configuration
export GO_ENV="development"
export DEBUG="true"

# Performance Configuration
export GOGC=20
export GOMAXPROCS=0  # non-positive values are ignored by the runtime (defaults to all cores)
export CONCURRENCY_LEVEL=100
```

### Profiling Environment Variables
```bash
# Profiling Configuration
export PROFILING_ENABLED=true
export PROFILING_DURATION=30s
export PROFILING_OUTPUT_DIR=".gemini/profiles"

# Benchmark Configuration
export BENCHMARK_TIMEOUT=60s
export BENCHMARK_ITERATIONS=1000000
export BENCHMARK_CONCURRENCY=10
```

## 📝 Commit Message Conventions

### Format
```
perf(type): brief description

- Detailed explanation of performance optimization
- Measured impact of the change
- Profiling data supporting the optimization

🤖 Generated with [Gemini](https://gemini.example.com)
Co-Authored-By: Gemini <noreply@gemini.example.com>
```

### Types
- `worker-pool`: Worker pool optimizations
- `pipeline`: Pipeline pattern optimizations
- `memory`: Memory allocation reductions
- `gc`: Garbage collection tuning
- `concurrency`: Concurrency pattern improvements
- `algorithm`: Algorithmic optimizations
- `latency`: Latency improvements
- `throughput`: Throughput improvements

## 🚨 Performance Guidelines

### Always Profile
- Use `go tool pprof` to identify bottlenecks
- Measure baseline performance before optimization
- Compare performance before and after changes
- Monitor for regressions in unrelated areas

### Focus Areas
- **Critical Path**: Optimize the most time-consuming operations
- **Hot Paths**: Reduce allocations in frequently called functions
- **Concurrency**: Improve parallel processing efficiency
- **Memory**: Minimize memory usage and GC pressure

### Testing Performance
```bash
# Run comprehensive performance tests
.gemini/scripts/perf-test.sh

# Compare benchmarks
.gemini/scripts/bench-compare.sh

# Check for regressions
.gemini/scripts/regression-check.sh

# Generate flame graphs
.gemini/scripts/flame-graph.sh
```

39  .gemini/commands/analyze-bottlenecks.md  (new file)
@@ -0,0 +1,39 @@
# Analyze Bottlenecks

Analyze performance bottlenecks in the following area: $ARGUMENTS

## Analysis Steps:
1. **CPU Profiling**: Identify CPU-intensive functions and hot paths
2. **Memory Profiling**: Check for memory leaks and high allocation patterns
3. **Goroutine Analysis**: Look for goroutine leaks and blocking operations
4. **I/O Performance**: Analyze network and disk I/O patterns
5. **Concurrency Issues**: Check for race conditions and lock contention

## Profiling Commands:
```bash
# CPU profile with detailed analysis
go tool pprof -top -cum "http://localhost:9090/debug/pprof/profile?seconds=60"

# Memory profile with allocation details
go tool pprof -alloc_space http://localhost:9090/debug/pprof/heap

# Goroutine blocking profile
go tool pprof http://localhost:9090/debug/pprof/block

# Mutex contention profile
go tool pprof http://localhost:9090/debug/pprof/mutex
```

## Analysis Focus Areas:
- Worker pool efficiency in `pkg/market/pipeline.go`
- Event parsing performance in `pkg/events/`
- Uniswap math calculations in `pkg/uniswap/`
- Memory usage in large transaction processing
- Rate limiting effectiveness in `internal/ratelimit/`

## Output Requirements:
- Detailed bottleneck analysis with percentages
- Flame graphs and performance visualizations
- Root cause identification for top bottlenecks
- Optimization recommendations with expected impact
- Priority ranking of issues

51  .gemini/commands/improve-latency.md  (new file)
@@ -0,0 +1,51 @@
# Improve Latency Performance

Improve latency performance for the following critical path: $ARGUMENTS

## Latency Optimization Strategy:

### 1. **Latency Analysis**
- Measure current end-to-end latency
- Identify latency components (network, computation, I/O)
- Analyze latency distribution and outliers

### 2. **Optimization Areas**

#### **Network Latency**
- Connection pooling for RPC calls
- Request batching and pipelining
- Caching frequently accessed data
- Asynchronous processing patterns

#### **Computational Latency**
- Algorithmic complexity reduction
- Lookup table implementation
- Parallel processing opportunities
- Precision vs. performance trade-offs

#### **I/O Latency**
- Buffering and streaming optimizations
- Disk I/O patterns
- Database query optimization
- File system caching

### 3. **MEV Bot Specific Optimizations**

#### **Critical Path Components**
- Transaction detection and parsing (< 10 microseconds target)
- Market analysis and arbitrage calculation
- Opportunity evaluation and ranking
- Execution decision making

## Implementation Guidelines:
- Measure latency at each component
- Focus on 95th and 99th percentile improvements
- Ensure deterministic performance characteristics
- Maintain accuracy while improving speed

## Deliverables:
- Latency benchmark results (before/after)
- Latency distribution analysis
- Optimization documentation
- Monitoring and alerting for latency regressions
- Performance vs. accuracy trade-off analysis

68  .gemini/commands/optimize-performance.md  (new file)
@@ -0,0 +1,68 @@
# Optimize Performance

Optimize the performance of the following component in the MEV bot: $ARGUMENTS

## Performance Optimization Strategy:

### 1. **Profiling and Measurement**
```bash
# CPU profiling
go tool pprof "http://localhost:9090/debug/pprof/profile?seconds=30"

# Memory profiling
go tool pprof http://localhost:9090/debug/pprof/heap

# Goroutine analysis
go tool pprof http://localhost:9090/debug/pprof/goroutine

# Mutex contention analysis
go tool pprof http://localhost:9090/debug/pprof/mutex
```

### 2. **Optimization Areas**

#### **Concurrency Optimization**
- Worker pool sizing and configuration
- Channel buffer optimization
- Goroutine pooling and reuse
- Lock contention reduction
- Context cancellation patterns

#### **Memory Optimization**
- Object pooling for frequent allocations
- Buffer reuse patterns
- Garbage collection tuning
- Memory leak prevention
- Slice and map pre-allocation

#### **Algorithm Optimization**
- Computational complexity reduction
- Data structure selection
- Caching strategies
- Lookup table implementation

### 3. **MEV Bot Specific Optimizations**

#### **Transaction Processing**
- Parallel transaction processing
- Event filtering optimization
- Batch processing strategies

#### **Market Analysis**
- Price calculation caching
- Pool data caching
- Historical data indexing

## Implementation Guidelines:
- Measure before optimizing (baseline metrics)
- Focus on bottlenecks identified through profiling
- Maintain code readability and maintainability
- Add performance tests to guard against regressions
- Document performance characteristics

## Deliverables:
- Performance benchmark results (before/after)
- Optimized code with maintained functionality
- Performance monitoring enhancements
- Optimization documentation
- Regression test suite

52  .gemini/commands/reduce-memory.md  (new file)
@@ -0,0 +1,52 @@
# Reduce Memory Allocations

Reduce memory allocations in the following hot path: $ARGUMENTS

## Memory Optimization Strategy:

### 1. **Allocation Analysis**
- Identify high-frequency allocation points
- Measure current allocation rates and patterns
- Analyze garbage collection pressure

### 2. **Optimization Techniques**

#### **Object Pooling**
- Implement sync.Pool for frequently created objects
- Pool buffers, structs, and temporary objects
- Use proper reset patterns for pooled objects

#### **Pre-allocation**
- Pre-allocate slices and maps when size is predictable
- Reuse existing data structures
- Avoid repeated allocations in loops

#### **Buffer Reuse**
- Reuse byte buffers and string builders
- Implement buffer pools for I/O operations
- Minimize string concatenation

### 3. **MEV Bot Specific Optimizations**

#### **Transaction Processing**
- Pool transaction objects and event structures
- Reuse parsing buffers
- Optimize log and metric object creation

#### **Mathematical Calculations**
- Pool uint256 and big.Int objects
- Reuse temporary calculation buffers
- Optimize precision object handling

## Implementation Guidelines:
- Measure allocation reduction with benchmarks
- Monitor garbage collection statistics
- Ensure thread safety in pooled objects
- Maintain code readability and maintainability

## Deliverables:
- Memory allocation reduction benchmarks
- Optimized code with pooling strategies
- GC pressure analysis before and after
- Memory usage monitoring enhancements
- Best practices documentation for team

55  .gemini/commands/tune-concurrency.md  (new file)
@@ -0,0 +1,55 @@
# Tune Concurrency Patterns

Tune concurrency patterns for the following component: $ARGUMENTS

## Concurrency Tuning Strategy:

### 1. **Current Pattern Analysis**
- Identify existing concurrency patterns (worker pools, pipelines, etc.)
- Measure current performance and resource utilization
- Analyze bottlenecks in concurrent processing

### 2. **Optimization Areas**

#### **Worker Pool Tuning**
- Optimal worker count based on CPU cores and workload
- Channel buffer sizing for backpressure management
- Task distribution strategies
- Worker lifecycle management

#### **Pipeline Optimization**
- Stage balancing to prevent bottlenecks
- Buffer sizing between pipeline stages
- Error propagation and recovery
- Context cancellation handling

#### **Fan-in/Fan-out Patterns**
- Optimal fan-out ratios
- Result merging strategies
- Resource allocation across branches
- Synchronization mechanisms

### 3. **MEV Bot Specific Tuning**

#### **Transaction Processing**
- Optimal concurrent transaction processing
- Event parsing parallelization
- Memory usage per goroutine

#### **Market Analysis**
- Concurrent pool data fetching
- Parallel arbitrage calculations
- Resource sharing between analysis tasks

## Implementation Guidelines:
- Test with realistic workload patterns
- Monitor resource utilization (CPU, memory, goroutines)
- Ensure graceful degradation under load
- Maintain error handling and recovery mechanisms

## Deliverables:
- Concurrency tuning recommendations
- Performance benchmarks with different configurations
- Resource utilization analysis
- Configuration guidelines for different environments
- Monitoring and alerting for concurrency issues

42  .gemini/scripts/perf-test.sh  (new executable file)
@@ -0,0 +1,42 @@
#!/bin/bash

# perf-test.sh - Run comprehensive performance tests for Gemini

set -o pipefail  # without this, `tee` masks go test failures in the pipelines below

echo "Running comprehensive performance tests for Gemini..."

# Create results directory if it doesn't exist
mkdir -p .gemini/results

status=0

# Run unit tests
echo "Running unit tests..."
go test -v ./... | tee .gemini/results/unit-tests.log || status=1

# Run concurrency tests
echo "Running concurrency tests..."
go test -v -run=Concurrent ./... | tee .gemini/results/concurrency-tests.log || status=1

# Run benchmarks
echo "Running benchmarks..."
go test -bench=. -benchmem ./... | tee .gemini/results/benchmarks.log || status=1

# NOTE: go test rejects profile flags with multiple packages; replace ./...
# with a single package path when profiling.

# Run benchmarks with CPU profiling
echo "Running benchmarks with CPU profiling..."
go test -bench=. -cpuprofile=.gemini/results/cpu.prof ./... | tee .gemini/results/cpu-bench.log || status=1

# Run benchmarks with memory profiling
echo "Running benchmarks with memory profiling..."
go test -bench=. -memprofile=.gemini/results/mem.prof ./... | tee .gemini/results/mem-bench.log || status=1

# Run benchmarks with block profiling
echo "Running benchmarks with block profiling..."
go test -bench=. -blockprofile=.gemini/results/block.prof ./... | tee .gemini/results/block-bench.log || status=1

# Check for errors accumulated across all runs
if [ $status -eq 0 ]; then
    echo "All performance tests completed successfully!"
    echo "Results saved to .gemini/results/"
else
    echo "Some performance tests failed!"
    echo "Check .gemini/results/ for details"
    exit 1
fi