The simplest and cheapest event ingestion solution.

Write directly from your services and streams:
curl ... $PANCAKE_ENDPOINT/rest/write_to_partition -d '{
  "tableName": "purchases",
  "rows": [{
    "user_id": "abc",
    "dollar_amount": 12.34
  }]
}'
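
If you'd rather call the endpoint from application code than shell out to curl, the same write is an ordinary JSON POST. Below is a minimal Python sketch using the requests library; the path and payload mirror the curl call above, while reading PANCAKE_ENDPOINT from the environment (and any authentication your deployment requires) is an assumption made here for illustration.

import os
import requests

# Same write as the curl example: POST one row to the purchases table.
# Auth headers, if your deployment needs them, are omitted here.
endpoint = os.environ["PANCAKE_ENDPOINT"]
response = requests.post(
    f"{endpoint}/rest/write_to_partition",
    json={
        "tableName": "purchases",
        "rows": [
            {"user_id": "abc", "dollar_amount": 12.34},
        ],
    },
)
response.raise_for_status()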
Do analytics and batch processing with real-time freshness:
spark.sql("select avg(dollar_amount) from purchases")
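
For a slightly fuller picture of the read side, here is a minimal PySpark sketch that wraps the same query in a standalone job. It assumes the purchases table is already visible in Spark's catalog through your PancakeDB integration; everything shown is standard Spark API, nothing PancakeDB-specific.

from pyspark.sql import SparkSession

# Standard Spark session; assumes "purchases" is already registered
# in the catalog by your PancakeDB integration.
spark = SparkSession.builder.appName("purchases-analytics").getOrCreate()

# The same aggregate as above, with freshly written rows included.
avg_purchase = spark.sql("select avg(dollar_amount) from purchases")
avg_purchase.show()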

PancakeDB solves a problem that has stymied data engineering for a decade: making streaming data accessible to batch and offline analysis.

For the same data, .snappy.parquet uses 194MB whereas PancakeDB's new format uses only 116MB.

Write like a stream, read like a ton of bricks.

With write latency of only 10ms and read throughput of millions of values per second per connection, PancakeDB is unlike anything you've seen before. Its new columnar format uses 30-50% less network bandwidth and storage than .snappy.parquet; the 194MB-vs-116MB comparison above works out to roughly a 40% reduction.

How is this possible? Read the white paper.


The simplicity data engineers crave.

Founded by a data engineer to solve event ingestion universally, PancakeDB lowers storage, compute, and engineering costs dramatically.

For Small Companies

For Large Companies

Diagrams: Company C's event pipeline before PancakeDB vs. after PancakeDB.
Questions? Contact us.