The simplest and cheapest event ingestion solution.
curl ... $PANCAKE_ENDPOINT/rest/write_to_partition -d '{
"tableName": "purchases",
"rows": [{
"user_id": "abc",
"dollar_amount": 12.34
}]
}'
spark.sql("select avg(dollar_amount) from purchases")
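The curl call above can also be issued from application code. Below is a minimal Python sketch that builds and serializes the same write_to_partition request body; the endpoint URL, any auth headers, and the full set of request fields are assumptions based only on the example shown here, not a complete description of the PancakeDB REST API.

```python
import json
import os

def build_write_request(table_name, rows):
    # Mirrors the JSON body from the curl example: a table name plus a
    # list of row objects mapping column names to values.
    return {"tableName": table_name, "rows": rows}

payload = build_write_request(
    "purchases",
    [{"user_id": "abc", "dollar_amount": 12.34}],
)
body = json.dumps(payload)

# PANCAKE_ENDPOINT is assumed to be set in the environment, as in the
# curl example; the localhost fallback here is purely illustrative.
url = os.environ.get("PANCAKE_ENDPOINT", "http://localhost:8080") + "/rest/write_to_partition"

# An HTTP client of your choice (e.g. requests.post(url, data=body))
# would then send the write; this sketch stops at constructing it.
```

Once rows are written this way, they are immediately queryable from batch engines, as the `spark.sql` read above shows.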
PancakeDB solves a problem that has stymied data engineering for a decade: making streaming data accessible to batch and offline analysis.
Write like a stream, read like a ton of bricks.
With write latency of only 10ms and read throughput of millions of values per second per connection, PancakeDB is unlike anything you've seen before.
Its new columnar format uses 30-50% less network bandwidth and storage than .snappy.parquet.
How is this possible?
Read the white paper.
The simplicity data engineers crave.
Founded by a data engineer to solve event ingestion universally, PancakeDB lowers storage, compute, and engineering costs dramatically.
For Small Companies
For Large Companies