Hyper-Efficient
Message Streaming
at Laser Speed
Apache Iggy (Incubating) is a high-performance, persistent message streaming platform written in Rust, capable of processing millions of messages per second with ultra-low latency.
Built for performance
Designed from the ground up with io_uring and a thread-per-core, shared-nothing architecture. Each CPU core runs its own shard, pinned and NUMA-aware. No locks on the hot path, no GC pauses, no thread contention.
Ultra-High Performance
Process millions of messages per second with predictable low latency, thanks to Rust combined with io_uring and a thread-per-core, shared-nothing architecture.
Zero-Copy Serialization
Custom zero-copy (de)serialization for improved performance and reduced memory usage, working directly with binary data.
Multiple Transport Protocols
Support for QUIC, TCP, WebSocket, and HTTP protocols with TLS encryption, giving you flexibility in how clients connect.
Multi-Language SDKs
Client libraries available for Rust, C#, Java, Go, Python, Node.js, and C++, with more languages on the way.
Consumer Groups & Partitioning
Built-in support for consumer groups with cooperative rebalancing, partitioning, and horizontal scaling across connected clients.
Security & Access Control
TLS on all transports, per-stream and per-topic permissions, Personal Access Tokens for programmatic access, optional AES-256-GCM encryption at rest.
Built-in Monitoring
OpenTelemetry logs & traces, Prometheus metrics, and built-in benchmarking tools for performance monitoring.
Multi-Tenant Support
Stream abstraction for multi-tenancy, configurable message retention policies, and tiered storage planned for the future.
How it works
Messages flow from producers through streams and topics into partitioned, append-only segment files on disk. Pick your language and start streaming in minutes.
Producers send messages
Connect via TCP, QUIC, WebSocket or HTTP. Messages are routed to the target partition using balanced, key-based or explicit partitioning.
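The key-based routing mode can be sketched in plain Rust: hash the message key, then take it modulo the partition count so every message with the same key lands on the same partition. This is an illustrative model (the function name and hash choice are made up here), not Iggy's actual implementation.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Key-based partitioning: hash the message key and reduce it modulo
/// the partition count, so all messages sharing a key are routed to
/// the same partition. (Illustrative only; Iggy's hash differs.)
fn partition_for_key(key: &str, partition_count: u32) -> u32 {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    (hasher.finish() % u64::from(partition_count)) as u32
}

fn main() {
    let partition = partition_for_key("order-123", 8);
    // The same key is always routed to the same partition.
    assert_eq!(partition, partition_for_key("order-123", 8));
    println!("order-123 -> partition {}", partition);
}
```

Balanced partitioning instead spreads messages across partitions round-robin, and explicit partitioning lets the producer name the partition directly.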
Shard receives and buffers
Each partition is owned by exactly one CPU-pinned shard. Messages are buffered in a memory journal, then flushed to disk via vectored I/O through io_uring.
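The buffer-then-flush step amounts to writing many buffered message frames with one vectored write instead of one syscall each. A minimal sketch using std's portable `write_vectored` (Iggy issues these writes through io_uring; the `flush_journal` helper here is hypothetical):

```rust
use std::io::{IoSlice, Write};

/// Flush a batch of buffered message frames to a writer with a single
/// vectored write: one syscall for the whole journal rather than one
/// per message. (Sketch only; Iggy does this via io_uring.)
fn flush_journal<W: Write>(journal: &[Vec<u8>], sink: &mut W) -> std::io::Result<usize> {
    let slices: Vec<IoSlice> = journal.iter().map(|m| IoSlice::new(m)).collect();
    sink.write_vectored(&slices)
}

fn main() -> std::io::Result<()> {
    // Messages accumulate in an in-memory journal (modeled as a Vec of frames).
    let journal = vec![b"msg-1".to_vec(), b"msg-2".to_vec(), b"msg-3".to_vec()];
    // Stand-in for the append-only segment file on disk.
    let mut segment: Vec<u8> = Vec::new();
    let written = flush_journal(&journal, &mut segment)?;
    println!("flushed {} bytes in one vectored write", written);
    Ok(())
}
```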
Segments store on disk
Data lands in append-only .log files with .index files for offset and timestamp lookups. Segments are sealed at 1 GiB and rotated automatically.
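An offset lookup against a .index file boils down to a binary search over sorted (offset, byte position) entries, falling back to the nearest entry at or before the requested offset. A simplified model, with made-up struct and function names rather than Iggy's on-disk format:

```rust
/// One index entry: maps a message offset to its byte position in the .log file.
/// (Simplified; the real .index entry layout differs.)
struct IndexEntry {
    offset: u64,
    position: u64,
}

/// Resolve where to start reading in the .log file for a requested offset,
/// via binary search over the sorted index. With no exact match, use the
/// closest entry at or before the offset and scan forward from there.
fn lookup(index: &[IndexEntry], offset: u64) -> Option<u64> {
    match index.binary_search_by_key(&offset, |e| e.offset) {
        Ok(i) => Some(index[i].position),
        Err(0) => None,                        // offset precedes the first entry
        Err(i) => Some(index[i - 1].position), // nearest indexed offset below
    }
}

fn main() {
    let index = vec![
        IndexEntry { offset: 0, position: 0 },
        IndexEntry { offset: 100, position: 4096 },
        IndexEntry { offset: 200, position: 9210 },
    ];
    // Reading from offset 150 starts at the entry indexed for offset 100.
    println!("offset 150 -> byte {:?}", lookup(&index, 150));
}
```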
Consumers poll at any offset
Read from the beginning, a specific offset, a timestamp, or continue from the last committed position. Consumer groups distribute partitions for horizontal scaling.
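The core invariant of a consumer group is that each partition is read by exactly one member. A round-robin assignment sketch captures the idea; Iggy's cooperative rebalancing protocol is more involved than this, and the names below are illustrative:

```rust
/// Assign each partition to exactly one group member, round-robin.
/// (Sketch of the invariant only, not Iggy's rebalancing protocol.)
fn assign_partitions(partition_count: u32, members: &[&str]) -> Vec<(u32, String)> {
    (0..partition_count)
        .map(|p| (p, members[p as usize % members.len()].to_string()))
        .collect()
}

fn main() {
    // Four partitions spread across two consumers in the same group.
    let assignments = assign_partitions(4, &["consumer-a", "consumer-b"]);
    for (partition, member) in &assignments {
        println!("partition {} -> {}", partition, member);
    }
}
```

When a member joins or leaves, the group recomputes the assignment so partitions are redistributed without double consumption.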
use iggy::prelude::*;
let client = IggyClient::from_connection_string(
"iggy://iggy:iggy@localhost:8090"
)?;
client.connect().await?;
let producer = client
.producer("orders", "events")?
.direct(
DirectConfig::builder()
.batch_length(100)
.build(),
)
.partitioning(Partitioning::balanced())
.build();
producer.init().await?;
let msg = IggyMessage::from_str("order-123")?;
producer.send(vec![msg]).await?;
Complete ecosystem
Iggy is more than a server. Integrate with external systems, manage everything from your browser or terminal, and connect LLMs to your streaming infrastructure.
Connectors
Dynamically loaded Rust plugins for data integration. Source data from PostgreSQL or Elasticsearch, or sink to MongoDB, Elasticsearch, Apache Iceberg, or Quickwit. Built-in data transforms.
MCP Server
Model Context Protocol server exposing 40+ tools for LLM integration. Works with Claude Desktop and other MCP clients via stdio and HTTP transports.
Web UI
SvelteKit dashboard for stream/topic management, message browsing with JSON/string/XML decoders, user management, server logs, and real-time terminal.
CLI
Full-featured command-line interface with named connection contexts, session-based login, and shell completions for bash/zsh/fish/powershell.