# Mathematical Optimizations

This document details the mathematical optimizations implemented in the MEV bot to improve performance of Uniswap V3 pricing calculations.

## Overview

The MEV bot performs frequent Uniswap V3 pricing calculations as part of its arbitrage detection mechanism. These calculations involve expensive mathematical operations that can become performance bottlenecks when executed at high frequency. This document describes the optimizations implemented to reduce the computational overhead of these operations.

## Optimized Functions

### 1. `SqrtPriceX96ToPrice`

**Original Implementation:**

- Computes 2^192 on each call
- Uses big.Float for precision

**Cached Implementation (`SqrtPriceX96ToPriceCached`):**

- Pre-computes and caches the 2^192 constant
- Uses sync.Once to ensure thread-safe initialization

**Performance Improvement:**

- ~24% faster (1406 ns/op → 1060 ns/op)
- Fewer memory allocations (472 B/op → 368 B/op, 9 → 6 allocs/op)
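
For illustration, a minimal sketch of what such a cached variant can look like, assuming the standard Uniswap V3 relationship price = sqrtPriceX96^2 / 2^192 and the `q192`/`initConstants` helpers shown under Implementation Details below. The function in the repository may differ in detail.

```go
// Sketch only: assumes price = sqrtPriceX96^2 / 2^192 and the cached q192
// constant initialized by initConstants() (see Implementation Details).
func SqrtPriceX96ToPriceCached(sqrtPriceX96 *big.Int) *big.Float {
    initConstants() // computes 2^192 once, instead of on every call

    // Square exactly in big.Int, then divide in big.Float.
    numerator := new(big.Int).Mul(sqrtPriceX96, sqrtPriceX96)
    return new(big.Float).Quo(
        new(big.Float).SetInt(numerator),
        new(big.Float).SetInt(q192),
    )
}
```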

### 2. `PriceToSqrtPriceX96`

**Original Implementation:**

- Computes 2^96 on each call
- Uses big.Float for precision

**Cached Implementation (`PriceToSqrtPriceX96Cached`):**

- Pre-computes and caches the 2^96 constant
- Uses sync.Once to ensure thread-safe initialization

**Performance Improvement:**

- ~19% faster (1324 ns/op → 1072 ns/op)
- Fewer memory allocations (480 B/op → 376 B/op, 13 → 10 allocs/op)
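
Correspondingly, a sketch of the inverse conversion, assuming sqrtPriceX96 = sqrt(price) * 2^96 and the cached `q96` constant. Again this is illustrative, not necessarily the exact repository code.

```go
// Sketch only: assumes sqrtPriceX96 = sqrt(price) * 2^96 and the cached q96
// constant from initConstants(); the result is truncated to an integer.
func PriceToSqrtPriceX96Cached(price *big.Float) *big.Int {
    initConstants() // computes 2^96 once, instead of on every call

    sqrtPrice := new(big.Float).Sqrt(price)                             // sqrt(price)
    scaled := new(big.Float).Mul(sqrtPrice, new(big.Float).SetInt(q96)) // * 2^96

    result, _ := scaled.Int(nil) // drop the fractional part
    return result
}
```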

### 3. Optimized Versions with uint256

We also implemented experimental versions that use uint256 operations where appropriate:

**`SqrtPriceX96ToPriceOptimized`:**

- Uses uint256 for the squaring operation
- Converts to big.Float only for the final division

**`PriceToSqrtPriceX96Optimized`:**

- Experimental implementation using uint256
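
The optimized variants are not reproduced in full here, but a rough sketch of the squaring path shows where the type-conversion overhead comes from. This sketch assumes a library such as github.com/holiman/uint256; the actual dependency and code may differ.

```go
import (
    "math/big"

    "github.com/holiman/uint256" // assumed uint256 library; may differ in the repo
)

// Sketch only: square sqrtPriceX96 in uint256 when it safely fits, otherwise
// fall back to big.Int. The FromBig/ToBig conversions at the boundaries are
// the overhead discussed under Key Findings below.
func SqrtPriceX96ToPriceOptimized(sqrtPriceX96 *big.Int) *big.Float {
    initConstants()

    var squared *big.Int
    if sqrtPriceX96.BitLen() <= 128 { // square is guaranteed to fit in 256 bits
        x, _ := uint256.FromBig(sqrtPriceX96)
        squared = new(uint256.Int).Mul(x, x).ToBig()
    } else {
        squared = new(big.Int).Mul(sqrtPriceX96, sqrtPriceX96)
    }

    // The division still happens in big.Float, as in the cached version.
    return new(big.Float).Quo(
        new(big.Float).SetInt(squared),
        new(big.Float).SetInt(q192),
    )
}
```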

## Benchmark Results

```text
BenchmarkSqrtPriceX96ToPriceCached-4       1240842    1060 ns/op    368 B/op    6 allocs/op
BenchmarkPriceToSqrtPriceX96Cached-4        973719    1072 ns/op    376 B/op   10 allocs/op
BenchmarkSqrtPriceX96ToPriceOptimized-4     910021    1379 ns/op    520 B/op   10 allocs/op
BenchmarkPriceToSqrtPriceX96Optimized-4     763767    1695 ns/op    496 B/op   14 allocs/op
BenchmarkSqrtPriceX96ToPrice-4              908228    1406 ns/op    472 B/op    9 allocs/op
BenchmarkPriceToSqrtPriceX96-4              827798    1324 ns/op    480 B/op   13 allocs/op
```
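
These figures come from Go's standard benchmarking harness (`go test -bench . -benchmem`). A benchmark of roughly the following shape produces them; the sample input value is an assumption for illustration only.

```go
// Hypothetical benchmark shape; the input (2^96, i.e. price ≈ 1) is illustrative.
func BenchmarkSqrtPriceX96ToPriceCached(b *testing.B) {
    sqrtPriceX96, _ := new(big.Int).SetString("79228162514264337593543950336", 10)
    b.ReportAllocs()
    for i := 0; i < b.N; i++ {
        _ = SqrtPriceX96ToPriceCached(sqrtPriceX96)
    }
}
```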

## Key Findings

1. **Caching constants:** Pre-computing expensive constants such as 2^96 and 2^192 provides a significant performance improvement.

2. **Memory allocations:** Reducing allocations is crucial for performance in high-frequency code paths.

3. **uint256 overhead:** While uint256 operations can be faster for certain calculations, the cost of converting between big.Int/big.Float and uint256 can offset those gains, as the Optimized benchmarks above show.

## Implementation Details

### Cached Constants

We use sync.Once to ensure thread-safe, one-time initialization of the cached constants:

```go
var (
    // Cached constants to avoid recomputing them
    q96  *big.Int
    q192 *big.Int
    once sync.Once
)

// initConstants initializes the cached constants
func initConstants() {
    once.Do(func() {
        q96 = new(big.Int).Exp(big.NewInt(2), big.NewInt(96), nil)
        q192 = new(big.Int).Exp(big.NewInt(2), big.NewInt(192), nil)
    })
}
```
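
As a design note, if lazy initialization is not required, an equivalent approach is to build the constants eagerly at package load time. This is shown only as an alternative sketch, not necessarily what the repository does.

```go
// Alternative sketch: eager package-level initialization (no sync.Once),
// using a left shift to build the powers of two.
var (
    q96Eager  = new(big.Int).Lsh(big.NewInt(1), 96)  // 2^96
    q192Eager = new(big.Int).Lsh(big.NewInt(1), 192) // 2^192
)
```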

### Usage in Functions

All optimized functions call initConstants() to ensure the constants are initialized before use:

```go
// SqrtPriceX96ToPriceCached converts sqrtPriceX96 to a price using cached constants
func SqrtPriceX96ToPriceCached(sqrtPriceX96 *big.Int) *big.Float {
    // Initialize cached constants
    initConstants()

    // ... rest of implementation
}
```
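
A hypothetical call site, just to show how the cached helpers are used together (values and formatting are illustrative):

```go
// Illustrative usage: round-trip a pool's sqrtPriceX96 through both helpers.
sqrtPriceX96, _ := new(big.Int).SetString("79228162514264337593543950336", 10) // 2^96 → price ≈ 1.0
price := SqrtPriceX96ToPriceCached(sqrtPriceX96)
back := PriceToSqrtPriceX96Cached(price)
fmt.Printf("price=%s sqrtPriceX96=%s\n", price.Text('f', 6), back.String())
```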

## Future Optimization Opportunities

1. **Further uint256 integration:** Explore more opportunities to use uint256 operations while minimizing type-conversion overhead.

2. **Lookup tables:** Pre-computed lookup tables for frequently used values could provide additional performance improvements.

3. **Assembly optimizations:** Hand-optimized assembly implementations for critical paths could provide further gains.

4. **Approximation algorithms:** Faster approximation algorithms could be considered for calculations that are less precision-sensitive.

## Conclusion

The implemented optimizations provide significant performance improvements for the MEV bot's Uniswap V3 pricing calculations. In the benchmarks above, the cached versions of the core functions are roughly 19-24% faster than the original implementations, with fewer memory allocations. These improvements allow the bot to evaluate more arbitrage opportunities at lower latency.