Signal & alerting
Real-time pipelines that filter a firehose — on-chain events, off-chain APIs, market data — down to the hits that actually matter, pushed where you need them with sub-second latency. Built to stay up under aggressive rate limits.
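The core of that kind of pipeline can be sketched in a few lines: a threshold filter in front of a token bucket, so pushes stay under a downstream rate limit even when the firehose spikes. A minimal stdlib-only sketch; the event tuples, threshold, and rates are illustrative, not from any client system.

```rust
use std::time::Instant;

/// Token bucket: refills `rate` tokens per second, capped at `capacity`.
/// An outbound push is allowed only if a whole token is available.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    rate: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(rate: f64, burst: f64) -> Self {
        TokenBucket { capacity: burst, tokens: burst, rate, last: Instant::now() }
    }

    fn try_take(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.last = now;
        // Refill proportionally to elapsed time, never past capacity.
        self.tokens = (self.tokens + elapsed * self.rate).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Hypothetical event stream: (source, lamports moved). The filter keeps
    // only transfers above a threshold; the bucket caps outbound pushes.
    let events = vec![("wallet_a", 5_000u64), ("wallet_b", 2_000_000), ("wallet_c", 9_500_000)];
    let mut bucket = TokenBucket::new(10.0, 2.0); // 10 pushes/s, burst of 2
    let mut pushed = 0;
    for (src, lamports) in events {
        // Short-circuit: a token is only spent when the event passes the filter.
        if lamports >= 1_000_000 && bucket.try_take() {
            println!("ALERT {src}: {lamports} lamports");
            pushed += 1;
        }
    }
    println!("pushed {pushed}");
}
```

In production the bucket sits per downstream (Telegram, Discord, a paging webhook), each with its own rate, so one noisy channel can't starve the others.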
I'm stifl — a solo engineer in the EU. I build the private dashboards, monitors, and trading rails that crypto teams run internally to move before the market and keep their name off the wire.
Internal consoles with real auth — permissioned access, session handling, audit trails, and authenticated webhooks. The kind of ops surface that starts as a Notion board and needs to stop being one.
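One small but load-bearing detail of authenticated webhooks is comparing a presented credential in constant time, so response timing doesn't leak how many leading bytes matched. A minimal sketch with a hypothetical helper; real deployments would verify an HMAC signature over the request body rather than a bearer token.

```rust
/// Constant-time comparison of a presented webhook token against the stored
/// secret: XOR every byte pair and OR the results, so the work done does not
/// depend on where the first mismatch occurs.
/// (Hypothetical helper for illustration, not a client's auth layer.)
fn token_matches(presented: &[u8], secret: &[u8]) -> bool {
    if presented.len() != secret.len() {
        return false;
    }
    presented.iter().zip(secret).fold(0u8, |acc, (a, b)| acc | (a ^ b)) == 0
}

fn main() {
    let secret = b"wh_secret_0123456789abcdef";
    assert!(token_matches(b"wh_secret_0123456789abcdef", secret));
    assert!(!token_matches(b"wh_secret_0123456789abcdeX", secret));
    assert!(!token_matches(b"short", secret));
    println!("ok");
}
```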
Shred-level ingestion before the block lands, Jito bundles with landed-rate tracking, priority-fee curves tuned to the block producer rather than the RPC. Built for the tail where a missed slot is a missed fill.
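"Tuned to the block producer rather than the RPC" means roughly this: instead of one global number from `getRecentPrioritizationFees`, bucket recent landed fees by slot leader and quote a percentile for the leader you're about to hit. A stdlib-only sketch; the leader names, fee samples, and percentile choice are illustrative.

```rust
/// Per-leader priority-fee percentile: filter recent (leader, fee) samples
/// down to the target leader, sort, and pick the requested percentile by
/// nearest-rank. Returns None when no samples exist for that leader.
fn fee_for_leader(samples: &[(&str, u64)], leader: &str, pct: f64) -> Option<u64> {
    let mut fees: Vec<u64> = samples
        .iter()
        .filter(|(l, _)| *l == leader)
        .map(|(_, f)| *f)
        .collect();
    if fees.is_empty() {
        return None;
    }
    fees.sort_unstable();
    let idx = ((fees.len() - 1) as f64 * pct).round() as usize;
    Some(fees[idx])
}

fn main() {
    // Illustrative samples, not live chain data: leader_b demands far more
    // than leader_a to land, which a single global quote would average away.
    let samples = [
        ("leader_a", 1_000), ("leader_a", 5_000), ("leader_a", 2_000),
        ("leader_b", 50_000), ("leader_b", 80_000),
    ];
    println!("{:?}", fee_for_leader(&samples, "leader_a", 0.75));
    println!("{:?}", fee_for_leader(&samples, "leader_b", 0.75));
}
```

The interesting part is upstream of this function: keeping the sample window fresh enough that a leader's fee regime change shows up within a rotation or two.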
Anchor programs written for the hot path — tight CPI chains, account layouts that hold under load, compute profiled slot-by-slot. Native Rust when the CU budget doesn't forgive the framework.
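Compute profiling starts with back-of-envelope budgeting: sum per-step CU estimates for a CPI chain and check them against the limit before trusting the hot path to it. A trivial sketch; the step names and CU numbers are made up, not measurements from a shipped program.

```rust
/// Check whether a chain of (label, estimated CU) steps fits a compute-unit
/// limit. Labels exist only for profiling output; the check is just a sum.
fn fits_budget(steps: &[(&str, u32)], limit: u32) -> bool {
    steps.iter().map(|(_, cu)| cu).sum::<u32>() <= limit
}

fn main() {
    // Illustrative estimates for a swap-and-settle instruction.
    let chain = [
        ("deserialize accounts", 8_000u32),
        ("swap CPI", 85_000),
        ("settle CPI", 40_000),
        ("event emit", 3_000),
    ];
    // Solana's default budget is 200k CU per instruction; request more with
    // ComputeBudgetInstruction::set_compute_unit_limit when a chain won't fit.
    println!("fits in 200k: {}", fits_budget(&chain, 200_000));
}
```

The real discipline is replacing these estimates with measured numbers from `solana_program::log::sol_log_compute_units` calls under mainnet-shaped account data, since CU cost moves with account size and CPI depth.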
Client names withheld by default.
One person, one Signal thread, no account manager, no standup theater. The same hands that write the Anchor program keep the systemd unit alive at 3am — the only model I trust for work where being half a slot early is the entire product.
I keep a private repo of every nasty bug I've shipped into mainnet and what the fix was. I re-read it before I touch anything money-moving. It's the closest thing I have to a process, and it's the reason my client list is short on purpose — I'd rather ship three things that hold than ten that need a post-mortem.
Send a paragraph: what you need, what's on-chain already, when it has to work. I reply within 24 hours on weekdays, usually faster.