At New Relic we've been heavy users of Kafka for a while now, so it's no surprise that we picked up Kafka Streams quite early too. Our teams have built several services with it that are already running in production, at the full scale of New Relic's data.
In this talk I would like to share our experience with Kafka Streams. We will first discuss when it's a good idea to use it, what your infrastructure should be prepared for, and how your architecture might change. We will then go through the most common use cases, such as data aggregation and enrichment. Last but not least, we will talk through hardening your service before deploying it to production: what metrics to watch, what to alert on, and what custom instrumentation is recommended.
By the end of this talk, you should have a clear idea of when you might want to use Kafka Streams at your company and what you can expect from running it in production.
Alex leads a team of engineers at New Relic who build products on top of the data New Relic acquires from cloud providers on behalf of its customers. Before that, he built a scalable data acquisition platform for integrating New Relic with cloud vendors.
He is passionate about building data products from the ground up: starting with an IPython notebook, building solutions using a batch approach, and, where scalability or latency demands it, reimplementing them as streaming systems.