The Feldera Blog
An unbounded stream of technical articles from the Feldera team
How Feldera Customers Slash Cloud Spend (10x and beyond)
By using compute resources proportional to the size of each change, rather than the size of the whole dataset, businesses can dramatically cut compute spend for their analytics.
Stream Integration
In this blog post we informally introduce one core streaming operation: integration. We show that integration is a simple, useful, and fundamental stream processing primitive, which is used not only in computing systems like Feldera, but also by organisms to interact with their environment.
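As a rough sketch of the idea (not Feldera's actual implementation), integration turns a stream of changes into a stream of cumulative values: each output is the running sum of all deltas seen so far.

```python
def integrate(deltas):
    """Toy sketch of the 'integration' primitive: consume a stream of
    changes (deltas) and emit the running total after each one."""
    total = 0
    for d in deltas:
        total += d
        yield total

# A stream of per-tick changes becomes a stream of cumulative totals.
list(integrate([1, -2, 5, 0, 3]))  # -> [1, -1, 4, 4, 7]
```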
The Babel tower of SQL dialects
You would think that almost 40 years should be enough for all SQL vendors and implementors to have converged on a well-defined syntax and semantics. In reality, the opposite has happened.
Batch Analytics at Warp Speed: A User Guide
The laws of computational complexity tell us that some computations require time. While we may not make batch jobs much faster, we can replace them with something better: always-on incremental pipelines that update the results in real time as input data changes.
Toward Real-Time Medallion Architecture
We replaced periodic Spark jobs with the Feldera Incremental View Maintenance (IVM) engine. Feldera picks up changes to lower-tier tables and updates higher-tier tables in real time, reducing the end-to-end latency of the pipeline from hours to minutes.
Universal IVM: Incremental View Maintenance for the Modern Data Stack
Incremental View Maintenance is a paradoxical concept: it has lived in the database community's collective consciousness for decades, yet has never fully materialized as a feature in any modern DB. In my view, a complete IVM engine must support arbitrary SQL queries, over data of any size, fully incrementally: processing input changes without full recomputation.
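To make the "processing input changes without full recomputation" point concrete, here is a minimal sketch (hypothetical, not Feldera's API) of incrementally maintaining a view equivalent to `SELECT SUM(x) FROM t`: the view is adjusted by the delta alone, so the work done is proportional to the size of the change, not the table.

```python
class IncrementalSum:
    """Toy incremental maintenance of the view SELECT SUM(x) FROM t.

    Instead of rescanning the table on every update, the maintained
    value is adjusted by the change alone: insertions add, deletions
    subtract."""

    def __init__(self):
        self.view = 0

    def apply_delta(self, inserted=(), deleted=()):
        # Work is proportional to len(inserted) + len(deleted),
        # independent of how many rows the table holds.
        self.view += sum(inserted) - sum(deleted)
        return self.view

v = IncrementalSum()
v.apply_delta(inserted=[10, 20, 30])       # view is now 60
v.apply_delta(inserted=[5], deleted=[20])  # view is now 45
```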
Incremental Update 23
Better settings display.
Incremental Update 22
New Rust crate, storage on by default, and new Delta Lake connector settings.
Cutting Down Rust Compile Times From 30 to 2 Minutes With One Thousand Crates
We compile SQL into Rust. One customer wrote so much SQL, it turned into 100k+ lines of Rust — and took 30+ minutes to build. The fix? Split it into over a thousand crates. Now we get full CPU utilization and sub-3-minute builds. Here's how we made Rust compile times scale with hardware.