MainWorkerFactory Demo

Each card runs one or more workers via foreman.runWorker(). The UI stays responsive throughout.
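The call shape can be sketched as follows. The task registry and the in-process dispatch below are illustrative assumptions, not the library's actual internals; only the `foreman.runWorker()` name comes from the demo:

```javascript
// Hypothetical sketch of a foreman-style runner. In the real demo each task
// executes in a Worker; here the task map and direct invocation are
// assumptions made purely for illustration.
const registry = {
  // A task receives a plain payload and returns a structured-cloneable result.
  square: (payload) => payload.value * payload.value,
};

const foreman = {
  // Resolves with the task's result; a real implementation would postMessage
  // the payload to a Worker instead of calling the function directly.
  runWorker(name, payload) {
    const task = registry[name];
    if (!task) return Promise.reject(new Error(`unknown task: ${name}`));
    return Promise.resolve().then(() => task(payload));
  },
};
```

With this shape, `foreman.runWorker("square", { value: 9 })` resolves with `81` while the main thread stays free.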

Benchmark

Main Thread vs Worker Threads

Runs 6 × 350×350 matrix multiplications twice — first sequentially on the main thread (UI freezes), then in parallel across 6 workers (UI stays live). Each task takes ~1 s, so main thread = ~6 s vs workers = ~1 s.

Worker: multiplyMatrices · Tasks: 6 × 350×350 matrix multiply (~1 s each) · Concurrency: 6
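The CPU-bound task each worker runs can be sketched as a naive dense multiply over flat row-major arrays; this is an illustrative plain function, not the demo's actual `multiplyMatrices` worker code:

```javascript
// Naive O(n^3) dense matrix multiply over flat row-major arrays.
// At n = 350 this is the ~1 s CPU burn each benchmark task performs.
function multiplyMatrices(a, b, n) {
  const out = new Float64Array(n * n);
  for (let i = 0; i < n; i++) {
    for (let k = 0; k < n; k++) {
      const aik = a[i * n + k]; // hoist a[i][k] out of the inner loop
      for (let j = 0; j < n; j++) {
        out[i * n + j] += aik * b[k * n + j];
      }
    }
  }
  return out;
}
```

Run sequentially six times this blocks for ~6 s; handed to six workers, the six calls overlap and finish in roughly the time of one.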

        
CPU-bound

Expensive Computation

Busy-waits for ~10 s in a dedicated worker. The page stays interactive the whole time.

Worker: exp1 · Retries: 3
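A minimal sketch of the busy-wait (the duration is a parameter here; the `exp1` worker's internals are not shown in the demo, so this is an assumption about its shape):

```javascript
// Burns CPU for roughly `ms` milliseconds by spinning on the clock.
// On the main thread this would freeze the page; inside a worker it does not.
function busyWait(ms) {
  const end = Date.now() + ms;
  let spins = 0;
  while (Date.now() < end) spins++; // pure CPU work, no yielding
  return spins;
}
```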

        
Data generation

Generate Random Data

Generates 300 000 mixed-type items across 13 concurrent workers.

Worker: generateRandomData · Concurrency: 13
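One way the per-worker generation might look; the item shapes below are assumptions, since the demo only says "mixed-type items":

```javascript
// Sketch of mixed-type item generation. Each of the 13 workers would run
// this over its own share of the 300 000 items.
function generateRandomData(count) {
  const kinds = ["number", "string", "boolean"];
  const items = [];
  for (let i = 0; i < count; i++) {
    const kind = kinds[i % kinds.length]; // cycle through the types
    if (kind === "number") items.push(Math.random() * 1000);
    else if (kind === "string") items.push(Math.random().toString(36).slice(2, 10));
    else items.push(Math.random() < 0.5);
  }
  return items;
}
```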

        
Partition + transform

Transform Array

Generates 300 000 items then partitions and transforms them across 8 workers with prefix / suffix / currency options.

Pipeline: generateRandomData → transformArray · Concurrency: 8
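The partition-then-transform step can be sketched as below; the exact semantics of the prefix / suffix / currency options are assumptions based on the card's wording:

```javascript
// Split an array into `parts` near-equal slices, one per worker.
function partition(items, parts) {
  const size = Math.ceil(items.length / parts);
  const slices = [];
  for (let i = 0; i < items.length; i += size) slices.push(items.slice(i, i + size));
  return slices;
}

// Per-slice transform: numbers get a currency format, everything else
// gets wrapped in the prefix/suffix. Option handling is illustrative.
function transformArray(slice, { prefix = "", suffix = "", currency = "" } = {}) {
  return slice.map((item) =>
    typeof item === "number" ? `${currency}${item.toFixed(2)}` : `${prefix}${item}${suffix}`
  );
}
```

Each of the 8 workers would receive one slice from `partition(items, 8)` plus the options object.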

        
Structured data

Large List Transform

Generates 30 000 user records then normalises names, rates, emails, dates and prices across workers.

Pipeline: generateListTransformArrayTestData → listTransformArray · Concurrency: 10
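A per-record normalisation pass might look like this; the field names and rules are assumptions inferred from the card's description (names, emails, dates, prices):

```javascript
// Normalise one user record: title-case the name, lower-case the email,
// truncate the date to YYYY-MM-DD, round the price to 2 decimal places.
function normaliseUser(user) {
  return {
    name: user.name
      .trim()
      .replace(/\s+/g, " ")
      .split(" ")
      .map((w) => w[0].toUpperCase() + w.slice(1).toLowerCase())
      .join(" "),
    email: user.email.trim().toLowerCase(),
    joined: new Date(user.joined).toISOString().slice(0, 10),
    price: Math.round(user.price * 100) / 100,
  };
}
```

Each worker would map `normaliseUser` over its slice of the 30 000 records.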

        
Heavy data processing

Image Processing Pipeline

Generates 4 synthetic 512×512 RGBA images (noise), then runs a greyscale pass followed by a 3×3 box-blur kernel on each — all off the main thread. Simulates a real photo-editing or CV pipeline.

Pipeline: generateImageData → processImageData · Images: 4 × 512×512 px · Concurrency: 8
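The two passes can be sketched over flat RGBA pixel data; this is an illustrative version of the kernels, not the demo's exact `processImageData` code:

```javascript
// Greyscale pass: luma-weighted average of R, G, B; alpha untouched.
function greyscale(pixels) {
  const out = new Uint8ClampedArray(pixels.length);
  for (let i = 0; i < pixels.length; i += 4) {
    const y = 0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2];
    out[i] = out[i + 1] = out[i + 2] = y;
    out[i + 3] = pixels[i + 3];
  }
  return out;
}

// 3×3 box blur: each output channel is the mean of its in-bounds neighbours.
function boxBlur3x3(pixels, width, height) {
  const out = new Uint8ClampedArray(pixels.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      for (let c = 0; c < 4; c++) {
        let sum = 0, n = 0;
        for (let dy = -1; dy <= 1; dy++) {
          for (let dx = -1; dx <= 1; dx++) {
            const nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
              sum += pixels[(ny * width + nx) * 4 + c];
              n++;
            }
          }
        }
        out[(y * width + x) * 4 + c] = sum / n;
      }
    }
  }
  return out;
}
```

At 512×512 the blur alone touches ~2.4 M channel reads per image, which is exactly the kind of work worth keeping off the main thread.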

        
Long list data transform

Server Log Analyser

Generates 500 000 realistic server log entries (auth, payments, search…), then partitions and analyses them across 8 workers — computing error rates, avg latency, slowest requests and top errors. Simulates a monitoring dashboard.

Pipeline: generateLogs → analyzeLogs · Volume: 500 000 entries · Concurrency: 8
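The aggregation step can be sketched like this, assuming a log-entry shape of `{ route, status, latencyMs }` (the real `analyzeLogs` fields may differ):

```javascript
// Single-pass stats over one worker's partition of the log entries:
// error rate, average latency, slowest requests, top error routes.
function analyzeLogs(entries) {
  const errors = entries.filter((e) => e.status >= 500);
  const totalLatency = entries.reduce((sum, e) => sum + e.latencyMs, 0);
  const byRoute = {};
  for (const e of errors) byRoute[e.route] = (byRoute[e.route] || 0) + 1;
  return {
    errorRate: entries.length ? errors.length / entries.length : 0,
    avgLatencyMs: entries.length ? totalLatency / entries.length : 0,
    slowest: [...entries].sort((a, b) => b.latencyMs - a.latencyMs).slice(0, 3),
    topErrors: Object.entries(byRoute).sort((a, b) => b[1] - a[1]),
  };
}
```

Each of the 8 workers would analyse its own partition; the per-partition results then merge cheaply on the main thread.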

        
Long delayed task

Concurrent Batch Jobs

Spawns 6 independent tasks (report, export, sync, backup…) each with a random 2–5 s delay, running concurrently across 6 workers. Simulates a job queue where tasks must not block each other or the UI.

Pipeline: generateDelayedTasks → runDelayedTask · Tasks: 6 × 2–5 s · Concurrency: 6
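The job-queue pattern can be sketched as follows; task names and delays are illustrative, and the real `runDelayedTask` worker may carry more state:

```javascript
// Build n independent jobs, each with its own random delay in [minMs, maxMs).
function makeTasks(names, minMs, maxMs) {
  return names.map((name) => ({
    name,
    delayMs: minMs + Math.floor(Math.random() * (maxMs - minMs)),
  }));
}

// Each task resolves after its own delay. Run through Promise.all the six
// jobs overlap, so total wall time ≈ the longest delay, not the sum.
function runDelayedTask({ name, delayMs }) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(`${name} done`), delayMs)
  );
}

// Usage sketch:
// await Promise.all(makeTasks(["report", "export", "sync"], 2000, 5000).map(runDelayedTask));
```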

        
Error handling

Flaky Tasks + Auto-Retry

Runs 8 tasks where half have a 70% failure rate. The framework retries each failed shard up to 3 times. Shows which tasks eventually succeeded and which exhausted all retries.

Pipeline: generateFlakyTasks → flakyTask · Retries: 3 per shard · Concurrency: 8
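The retry policy can be sketched as a wrapper; this synchronous version is an illustrative assumption about how the framework's per-shard retries behave, not its actual code:

```javascript
// Attempt `task` up to `retries` extra times (so retries = 3 means at most
// 4 attempts). Reports whether the task eventually succeeded and how many
// attempts it consumed, mirroring the card's success/exhausted split.
function withRetries(task, retries) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return { ok: true, value: task(attempt), attempts: attempt + 1 };
    } catch (err) {
      lastError = err;
    }
  }
  return { ok: false, error: lastError, attempts: retries + 1 };
}
```

A task with a 70% failure rate survives 4 attempts with probability 1 − 0.7⁴ ≈ 99.76%, so with 3 retries most flaky shards in the demo eventually succeed.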

        
Network I/O

Fetch & Enrich Posts

Fetches 20 posts from a public REST API inside a worker, then enriches each post with a word-count and title-cased heading — without touching the main thread.

Worker: fetchAndEnrichPosts · Source: jsonplaceholder.typicode.com · Concurrency: 1
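The enrichment step can be sketched like this. The `title` and `body` fields are the real jsonplaceholder post fields; `wordCount`, `titleCase`, and the fetch URL parameters are this sketch's own assumptions:

```javascript
// Count whitespace-separated words in a post body.
const wordCount = (text) => text.split(/\s+/).filter(Boolean).length;

// Upper-case the first letter of each word in a title.
const titleCase = (text) => text.replace(/\b[a-z]/g, (ch) => ch.toUpperCase());

// Attach the derived fields to a post without mutating the original.
function enrichPost(post) {
  return { ...post, wordCount: wordCount(post.body), heading: titleCase(post.title) };
}

// Inside the worker, roughly:
// const res = await fetch("https://jsonplaceholder.typicode.com/posts?_limit=20");
// const enriched = (await res.json()).map(enrichPost);
```

Because the fetch and the mapping both happen in the worker, the main thread only ever sees the final enriched array.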

        
Partial results

Distributed Search (Shard Failures)

Queries 8 search shards concurrently. Every 3rd shard is intentionally unavailable. Results from healthy shards are merged and ranked — the UI shows partial data rather than a full failure.

Pipeline: generateSearchShards → searchShard · Shards: 8 (every 3rd fails) · Retries: 0 (fail fast)
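The merge step can be sketched over the output of `Promise.allSettled`; the hit shape (`{ id, score }`) is an assumption for illustration:

```javascript
// Merge settled shard queries: hits from fulfilled shards are pooled and
// ranked by score; rejected shards are reported but do not sink the search.
function mergeShardResults(settled) {
  const hits = [];
  const failedShards = [];
  settled.forEach((result, shard) => {
    if (result.status === "fulfilled") hits.push(...result.value);
    else failedShards.push(shard);
  });
  hits.sort((a, b) => b.score - a.score);
  return { hits, failedShards };
}

// Usage sketch:
// const settled = await Promise.allSettled(shards.map(searchShard));
// const { hits, failedShards } = mergeShardResults(settled);
```

`allSettled` (rather than `all`) is what makes the partial-results behaviour possible: one rejected shard no longer rejects the whole batch.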