PixCloak Compression Algorithm Documentation

Version 2.1.0 • Last Updated: January 15, 2024


1. Algorithm Overview

1.1 Core Principles

PixCloak's compression algorithm is built on three core principles:

  • Privacy-First: All processing happens locally in the user's browser
  • Quality-Optimized: Maintains visual quality while achieving target file sizes
  • Performance-Focused: Optimized for speed and memory efficiency

1.2 Architecture

Input Image → Preprocessing → Compression Pipeline → Quality Check → Output
     ↓              ↓                   ↓										↓
   Canvas    Format Detection       WebP/JPEG         Size Validation
     ↓              ↓                   ↓										↓
  Metadata   Quality Analysis      Optimization        Final Output

1.3 Supported Formats

Input Formats

  • JPEG (.jpg, .jpeg)
  • PNG (.png)
  • WebP (.webp)
  • BMP (.bmp)
  • GIF (.gif)

Output Formats

  • WebP (recommended)
  • JPEG (compatibility)
  • PNG (lossless)

2. WebP Implementation

2.1 WebP Encoder Configuration

const webpConfig = {
  quality: 85,              // Default quality (0-100)
  method: 6,                // Compression method (0 = fastest, 6 = slowest/best)
  alpha_quality: 100,       // Alpha channel quality (0-100)
  lossless: false,          // Lossless compression
  near_lossless: 100,       // Near-lossless preprocessing (0-100; lossless mode only)
  smart_subsample: true,    // Smart chroma subsampling
  filter_type: 1,           // Filter type (0-4)
  filter_strength: 0,       // Filter strength (0-100)
  filter_sharpness: 0,      // Filter sharpness (0-7)
  filter_smooth: 0,         // Filter smoothness (0-100)
  pass: 10,                 // Number of entropy-analysis passes (1-10)
  show_compressed: false,   // Export the compressed picture for inspection
  preprocessing: 0,         // Preprocessing filter (0-2)
  partitions: 0,            // Number of partitions (0-3)
  partition_limit: 0,       // Partition size limit (0-100)
  emulate_jpeg_size: false, // Match JPEG file size at the same quality
  thread_level: false,      // Enable multi-threaded encoding
  low_memory: false,        // Low-memory mode
  use_delta_palette: false, // Use delta palette
  use_sharp_yuv: false      // Use sharp RGB→YUV conversion
};
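
A full option set like this is only honored by a libwebp-style encoder (for example, a WebAssembly build of libwebp); the browser's built-in Canvas API exposes only the single quality parameter. A minimal sketch of that built-in path (encodeWebP is an illustrative name, not part of the public API):

async function encodeWebP(canvas, quality) {
  // Browser Canvas path: only `quality` is configurable.
  return new Promise((resolve, reject) => {
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error('WebP encoding failed'))),
      'image/webp',
      quality / 100 // toBlob expects 0-1; PixCloak's scale is 0-100
    );
  });
}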

2.2 Quality Optimization

Our quality optimization uses an iterative, binary-search-style approach to converge on the optimal quality setting (sketched after the steps below):

  1. Start with quality = 85 (empirically determined optimal starting point)
  2. Compress image and measure file size
  3. If size > target: decrease quality by 10
  4. If size < target * 0.9: increase quality by 5
  5. Repeat until size is within 5% of target
  6. Fine-tune with ±1 quality adjustments
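
A minimal sketch of this search, assuming the document's 0-100 quality scale and estimating output size from the base64 data URL (length × 3/4 ≈ bytes); all names are illustrative:

function binarySearchQuality(canvas, targetSizeKB, startQuality = 85) {
  // Estimate the encoded size in KB for a given quality setting.
  const sizeKB = (q) =>
    (canvas.toDataURL('image/webp', q / 100).length * 3) / 4 / 1024;

  let quality = startQuality;
  // Coarse phase: -10 when over target, +5 when under 90% of target.
  for (let i = 0; i < 20; i++) {
    const size = sizeKB(quality);
    if (size > targetSizeKB && quality > 10) quality -= 10;
    else if (size < targetSizeKB * 0.9 && quality <= 95) quality += 5;
    else break;
  }
  // Fine phase: ±1 steps until the size sits just under the target.
  while (quality < 100 && sizeKB(quality + 1) <= targetSizeKB) quality += 1;
  while (quality > 1 && sizeKB(quality) > targetSizeKB) quality -= 1;
  return quality;
}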

2.3 Size Prediction

We use a machine learning model to predict the optimal quality setting based on the following inputs (the entropy feature is sketched after this list):

  • Image dimensions (width × height)
  • Image complexity (entropy, edge density)
  • Color distribution
  • Target file size
  • Output format
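
The model itself is not reproduced here, but the complexity features can be computed directly from canvas pixel data. A sketch of the entropy feature (estimateEntropy is an illustrative name, not part of the public API):

function estimateEntropy(canvas) {
  // Shannon entropy of the luminance histogram: one of the complexity
  // features listed above. Higher entropy generally means a larger
  // compressed size at a given quality.
  const ctx = canvas.getContext('2d');
  const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const hist = new Array(256).fill(0);
  for (let i = 0; i < data.length; i += 4) {
    // Rec. 601 luma from RGB
    const luma = Math.round(0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2]);
    hist[luma]++;
  }
  const total = data.length / 4;
  let entropy = 0;
  for (const count of hist) {
    if (count === 0) continue;
    const p = count / total;
    entropy -= p * Math.log2(p);
  }
  return entropy; // bits per pixel, 0-8
}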

3. Quality Control

3.1 Quality Metrics

SSIM (Structural Similarity)

Measures structural similarity between original and compressed images. Range: 0-1 (higher is better)

PSNR (Peak Signal-to-Noise Ratio)

Measures signal-to-noise ratio in decibels. Range: 0-∞ dB (higher is better)

Compression Ratio

Percentage reduction in file size relative to the original. Range: 0-100% (higher means a smaller output file)
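
Of these, PSNR is the simplest to compute directly: it follows from the mean squared error between the two images. A sketch, assuming both images have been decoded into same-sized ImageData buffers:

function psnr(original, compressed) {
  // PSNR = 10 * log10(MAX^2 / MSE), with MAX = 255 for 8-bit channels.
  // Identical images have MSE = 0, conventionally reported as Infinity.
  const a = original.data, b = compressed.data;
  let sumSq = 0, n = 0;
  for (let i = 0; i < a.length; i += 4) {
    for (let c = 0; c < 3; c++) { // R, G, B (alpha ignored)
      const d = a[i + c] - b[i + c];
      sumSq += d * d;
      n++;
    }
  }
  const mse = sumSq / n;
  return mse === 0 ? Infinity : 10 * Math.log10((255 * 255) / mse);
}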

3.2 Quality Thresholds

Use Case             Min SSIM   Min PSNR   Quality Range
Professional Photos  0.95       35 dB      90-100
Web Images           0.90       30 dB      75-90
Social Media         0.85       25 dB      60-80
Thumbnails           0.80       20 dB      40-70
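
In code, these thresholds reduce to a simple lookup table (key names are illustrative, not part of a public API):

const QUALITY_THRESHOLDS = {
  professional: { minSSIM: 0.95, minPSNR: 35, qualityRange: [90, 100] },
  web:          { minSSIM: 0.90, minPSNR: 30, qualityRange: [75, 90] },
  social:       { minSSIM: 0.85, minPSNR: 25, qualityRange: [60, 80] },
  thumbnail:    { minSSIM: 0.80, minPSNR: 20, qualityRange: [40, 70] }
};

// Check a compressed result against the thresholds for its use case.
function meetsThresholds(useCase, ssim, psnrDB) {
  const t = QUALITY_THRESHOLDS[useCase];
  return ssim >= t.minSSIM && psnrDB >= t.minPSNR;
}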

4. Performance Metrics

4.1 Compression Speed

Small Images

< 1MP: ~50ms
< 5MP: ~200ms
< 10MP: ~500ms

Medium and Large Images

10-20MP: ~1-2s
20-50MP: ~3-5s
50MP+: ~5-10s

4.2 Memory Usage

Memory usage is optimized for browser environments:

  • Peak Memory: 2-3× original image size
  • Processing Memory: 1-2× original image size
  • Output Memory: 0.1-0.5× original image size

4.3 Browser Performance

Browser       WebP Support   Performance   Memory Efficiency
Chrome 90+    ✅ Native      Excellent     High
Firefox 65+   ✅ Native      Good          High
Safari 14+    ✅ Native      Good          Medium
Edge 90+      ✅ Native      Excellent     High

5. Browser Compatibility

5.1 Feature Detection

function supportsWebP() {
  // Browsers without WebP encoding silently fall back to PNG in
  // toDataURL, so checking the returned MIME prefix reveals support.
  const canvas = document.createElement('canvas');
  canvas.width = 1;
  canvas.height = 1;
  return canvas.toDataURL('image/webp').indexOf('data:image/webp') === 0;
}

function supportsCanvas() {
  const canvas = document.createElement('canvas');
  return !!(canvas.getContext && canvas.getContext('2d'));
}

5.2 Fallback Strategy

  1. Primary: WebP compression (if supported)
  2. Fallback 1: JPEG compression (if WebP not supported)
  3. Fallback 2: PNG compression (for lossless requirements)
  4. Fallback 3: Original format (if compression fails)
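
A sketch of this chain, built on the detection helpers from section 5.1 (needsLossless is an assumed caller-supplied flag):

function pickOutputFormat(needsLossless = false) {
  if (!supportsCanvas()) return null;       // fallback 3: cannot compress, keep original
  if (needsLossless) return 'image/png';    // fallback 2: lossless requirement
  if (supportsWebP()) return 'image/webp';  // primary
  return 'image/jpeg';                      // fallback 1
}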

5.3 Progressive Enhancement

The algorithm automatically adapts based on browser capabilities:

  • Modern Browsers: Full WebP support with advanced features
  • Legacy Browsers: JPEG fallback with basic compression
  • Mobile Browsers: Optimized for touch interfaces and limited memory

6. Implementation Details

6.1 Core Algorithm

async function compressImage(imageFile, targetSizeKB) {
  // Decode the file into an offscreen canvas.
  const canvas = await loadImageToCanvas(imageFile);
  // Extract complexity features (entropy, edge density, ...).
  const analysis = analyzeImage(canvas);
  // Predict a starting quality, then refine it against the size target.
  let quality = predictQuality(analysis, targetSizeKB);
  quality = binarySearchQuality(canvas, targetSizeKB, quality);
  // toDataURL expects quality in 0-1; PixCloak's scale is 0-100.
  const compressedData = canvas.toDataURL('image/webp', quality / 100);
  return validateOutput(compressedData, targetSizeKB);
}
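
The helpers above are internal to PixCloak; as one illustration, loadImageToCanvas could be implemented with createImageBitmap (a sketch, not the shipped implementation):

async function loadImageToCanvas(imageFile) {
  // Decode the file off the main rendering path, then draw it
  // onto a canvas sized to the image.
  const bitmap = await createImageBitmap(imageFile);
  const canvas = document.createElement('canvas');
  canvas.width = bitmap.width;
  canvas.height = bitmap.height;
  canvas.getContext('2d').drawImage(bitmap, 0, 0);
  bitmap.close(); // release the decoded bitmap promptly
  return canvas;
}

// Usage sketch: compress a user-selected file to roughly 200 KB.
document.querySelector('#file-input')?.addEventListener('change', async (event) => {
  const [file] = event.target.files;
  if (file) await compressImage(file, 200);
});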

6.2 Error Handling

Input Validation

  • File size limits (max 50MB)
  • Image dimension limits (max 8192×8192)
  • Format validation
  • Corrupted file detection
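
A sketch of these checks, where MAX_FILE_MB and MAX_DIMENSION mirror the limits above and createImageBitmap's rejection doubles as corrupted-file detection:

const MAX_FILE_MB = 50;
const MAX_DIMENSION = 8192;

async function validateInput(file) {
  if (file.size > MAX_FILE_MB * 1024 * 1024) {
    throw new Error(`File exceeds ${MAX_FILE_MB} MB limit`);
  }
  let bitmap;
  try {
    bitmap = await createImageBitmap(file); // rejects on corrupt/unknown formats
  } catch {
    throw new Error('Unsupported or corrupted image file');
  }
  const tooLarge = bitmap.width > MAX_DIMENSION || bitmap.height > MAX_DIMENSION;
  bitmap.close();
  if (tooLarge) {
    throw new Error(`Dimensions exceed ${MAX_DIMENSION}×${MAX_DIMENSION}`);
  }
  return true;
}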

Processing Errors

  • Memory allocation failures
  • Canvas rendering errors
  • Compression failures
  • Quality search timeouts

Output Validation

  • File size verification
  • Quality threshold checks
  • Format consistency
  • Data integrity validation

7. Testing Methodology

7.1 Test Data Sets

Portrait Photos

100 professional headshots
Sizes: 1MP - 20MP
Formats: JPEG, PNG

Product Images

200 e-commerce photos
Sizes: 2MP - 50MP
Formats: JPEG, WebP

Social Media

150 social media images
Sizes: 0.5MP - 10MP
Formats: JPEG, PNG, WebP

7.2 Performance Testing

All performance tests are conducted on standardized hardware:

  • CPU: Intel i7-10700K / AMD Ryzen 7 3700X
  • RAM: 16GB DDR4
  • Browser: Chrome 90+, Firefox 88+, Safari 14+
  • OS: Windows 10, macOS 11, Ubuntu 20.04

7.3 Quality Testing

Quality metrics are calculated using:

  • SSIM: Structural Similarity Index (0-1 scale)
  • PSNR: Peak Signal-to-Noise Ratio (dB)
  • Visual Assessment: Human evaluation by 10+ reviewers
  • Automated Testing: Regression testing on 1000+ images

Conclusion

PixCloak's compression algorithm represents a significant advancement in browser-based image processing. By combining modern web technologies with optimized compression techniques, we achieve:

  • Privacy: Complete local processing with no data uploads
  • Quality: Superior compression ratios with minimal quality loss
  • Performance: Fast processing suitable for real-time applications
  • Compatibility: Broad browser support with graceful degradation

Open Source Commitment

This algorithm is fully open source and available for research, modification, and commercial use. We encourage contributions from the community to further improve compression quality and performance.