Our business needs us to deliver big improvements to our analytics infrastructure and our client SDKs, on a reliable cadence and with low tolerance for regressions. This session covers techniques we've developed that use the entropy of production to make this possible. I'll present a methodology developers can use to do the same, especially in contexts with a lot of variability in how their software is used. With these ideas, we've been able to make predictable 20% to 40% improvements to the speed of our analytics infrastructure every quarter, with a team of two engineers.
Dan is CTO at Heap, where he uses PostgreSQL, Kafka, Flink, Redis, CitusDB, and Spark to build distributed analytics infrastructure. He's been known to get a little too much satisfaction out of solving problems with PL/pgSQL or an imprudent bash one-liner. Dan earned B.S. degrees in Computer Science and Mathematics from Stanford, where he spent most of his time studying machine learning. He likes hiking and building physical things.
Data Council, PO Box 2087, Wilson, WY 83014, USA - Phone: +1 (415) 800-4938 - Email: community (at) datacouncil.ai