Documentation Index

Fetch the complete documentation index at: https://fluxcrud.mahimai.dev/llms.txt

Use this file to discover all available pages before exploring further.

The Batcher

When inserting thousands of records, issuing individual await repo.create() calls is slow because each call pays a full network round-trip. The Batcher queues items in a buffer and flushes them in chunks, drastically reducing the number of round-trips.
from fluxcrud.async_patterns import Batcher

# Usage via context manager on an existing Repository instance
async with repo.batch_writer(batch_size=100) as writer:
    for i in range(5000):
        # Adds to the buffer; auto-flushes every 100 items
        await writer.add({"name": f"Item {i}"})
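The buffering behavior described above can be sketched in a few lines. This is a minimal illustration, not fluxcrud's actual implementation; the `flush_fn` callback and the `SimpleBatcher` name are assumptions for the sake of the example:

```python
import asyncio

class SimpleBatcher:
    """Minimal sketch of a buffered batch writer."""

    def __init__(self, flush_fn, batch_size=100):
        self.flush_fn = flush_fn      # hypothetical async callable taking a list of items
        self.batch_size = batch_size
        self.buffer = []

    async def add(self, item):
        self.buffer.append(item)
        # Auto-flush once the buffer reaches the chunk size
        if len(self.buffer) >= self.batch_size:
            await self.flush()

    async def flush(self):
        if self.buffer:
            await self.flush_fn(self.buffer)
            self.buffer = []

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        # Flush any remaining partial batch on context exit
        await self.flush()
```

The context-manager exit is what guarantees the final partial batch (e.g. the last 4,900 % 100 items) is never silently dropped.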

ParallelExecutor

Processing many independent tasks (such as calling external APIs or running multiple distinct DB selects) sequentially is inefficient. ParallelExecutor runs them concurrently, but caps concurrency with a Semaphore to avoid overwhelming your database or exhausting its connection pool.
from fluxcrud.async_patterns import ParallelExecutor

# Bind i per iteration via a default argument; a bare `lambda: api.fetch_data(i)`
# would capture the loop variable by reference and call with the last id only
tasks = [lambda i=i: api.fetch_data(i) for i in ids]

# Run at most 50 at once
results = await ParallelExecutor.gather_limited(limit=50, tasks=tasks)
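The limiting mechanism can be sketched with asyncio's own primitives. This is a minimal illustration of the semaphore pattern, not fluxcrud's internals; the `gather_limited` signature here mirrors the one above but is written from scratch:

```python
import asyncio

async def gather_limited(limit, tasks):
    """Run zero-argument async callables with at most `limit` in flight."""
    sem = asyncio.Semaphore(limit)

    async def run(task):
        # Each task waits for a semaphore slot before starting
        async with sem:
            return await task()

    # gather preserves the input order of results
    return await asyncio.gather(*(run(t) for t in tasks))
```

Because asyncio.gather preserves argument order, results line up with the input tasks even though completion order varies.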