Redis-compatible. Adaptive tiered storage (memory → SSD → S3). Built-in feature flags, rate limiting, config management, and semantic caching for AI workloads — all in one system.
RESP3 wire protocol — every Redis client (Jedis, Lettuce, redis-cli) works without any code changes.
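Compatibility works because commands on the wire are just RESP arrays of bulk strings, identical in RESP2 and RESP3 (RESP3 mainly adds richer reply types). A minimal sketch of how any client frames a command — the function name here is illustrative, not part of any client library:

```python
def encode_command(*parts: str) -> bytes:
    """Encode a command as a RESP array of bulk strings,
    e.g. SET key value -> *3\r\n$3\r\nSET\r\n..."""
    out = [f"*{len(parts)}\r\n".encode()]
    for p in parts:
        data = p.encode()
        # Each argument is a bulk string: $<byte-length>\r\n<bytes>\r\n
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)
```

Because this framing is the whole client-side contract, anything that speaks it — Jedis, Lettuce, redis-cli — connects unmodified.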
Hot data lives in memory, warm data on NVMe SSD, cold data auto-archived to S3. Zero manual configuration.
Percentage rollouts, instant kill switch, per-user targeting — without LaunchDarkly or a separate service.
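Percentage rollouts are typically implemented by hashing the user ID into a stable bucket, so the same user always lands on the same side of the flag. A sketch of that technique — the function and flag names are hypothetical, not this system's API:

```python
import hashlib

def in_rollout(flag: str, user_id: str, percentage: int) -> bool:
    """Deterministically bucket a user into 0-99 and compare to the rollout %.

    Hashing flag+user together means each flag rolls out to an
    independent slice of users.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 100
    return bucket < percentage
```

Setting the percentage to 0 acts as the kill switch: every user hashes to a bucket >= 0, so the flag is off everywhere, instantly and deterministically.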
Token bucket and sliding window algorithms. Enforced consistently across all nodes via CRDT counters.
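The token bucket algorithm itself is simple: a bucket refills at a fixed rate and each request spends a token. A single-node sketch (the distributed version replicates the counter via CRDTs, which this toy omits):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity`, then throttle to `refill_rate` req/sec."""

    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity        # maximum burst size
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Sliding window differs by counting timestamped requests in a trailing interval instead of spending tokens; both reduce to a counter that CRDTs can merge without coordination.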
Cache LLM responses by meaning, not exact string match: "capital of France?" hits the same entry as "what is France's capital?" Saves 60–80% on LLM costs for repetitive query workloads.
Strong consistency across a 3-node cluster. Leader failure recovers in under 500ms. No data loss.