Technical Talks

Making Humans and Code GPU-Capable at Mailchimp

Emily Curtin | Staff MLOps Engineer | Intuit Mailchimp

What happens when you have a bunch of data scientists, a bunch of new and old projects, a big grab-bag of runtime environments, and you need to get all those humans and all that code access to GPUs? Come see how the ML Eng team at Mailchimp wrestled first with connecting abstract containerized processes to very-not-abstract hardware, then scaled that process across tons of humans and projects. We’ll talk through the technical how-to with Docker, NVIDIA, and Kubernetes, but all good ML Engineers know that wrangling the tech is only half the battle, and the human factors can be the trickiest part.
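
The abstract doesn't include code, but as a rough illustration of the kind of wiring involved: on Kubernetes, a containerized workload typically reaches a GPU by requesting the nvidia.com/gpu resource, which the NVIDIA device plugin maps to physical hardware on the node. The sketch below uses the official Kubernetes Python client; the pod name, image, and namespace are placeholders, not Mailchimp's actual setup.

    # Minimal sketch: request one GPU for a containerized job on Kubernetes.
    # Assumes the cluster runs the NVIDIA device plugin, which exposes GPUs
    # as the schedulable resource "nvidia.com/gpu". All names are placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running in-cluster

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="cuda",
                    image="nvidia/cuda:12.2.0-base-ubuntu22.04",
                    command=["nvidia-smi"],  # print visible GPUs, then exit
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"}  # one whole GPU
                    ),
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)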

3 Key Takeaways:

  • An overview of the call stack from container through orchestration framework and OS, all the way down to real GPU hardware (see the sketch after this list)
  • How ML Eng at Mailchimp provides GPU-compatible dev environments for many different projects and data scientists
  • An experienced take on how to balance data scientists’ human needs against heavy system optimization (spoiler alert: favor the humans)
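
As a hedged illustration of that container-to-hardware call stack (not code from the talk), the checks below walk down the layers a containerized process actually sees: the host's NVIDIA driver exposed into the container, the CUDA runtime shipped in the image, and the physical device itself. It assumes PyTorch, purely as an example framework.

    # Sketch of a layer-by-layer GPU visibility check inside a container.
    # Assumes PyTorch is installed; any CUDA-aware framework would do.
    import subprocess

    import torch

    # Layer 1: the host's NVIDIA driver, mounted into the container by the
    # NVIDIA container runtime. If this fails, nothing above it can work.
    print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)

    # Layer 2: the CUDA runtime bundled into the image alongside the framework.
    print("CUDA runtime version:", torch.version.cuda)

    # Layer 3: the actual hardware, as the framework sees it.
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))
    else:
        print("No GPU visible to this process.")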

Emily Curtin
Staff MLOps Engineer | Intuit Mailchimp

Emily is a Staff MLOps Engineer at Intuit Mailchimp, meaning she gets paid to say "it depends" and "well actually." Professionally she leads a crazy good team focused on helping Data Scientists do higher quality work faster and more intuitively. Non-professionally she paints huge landscapes and hurricanes in oils, crushes sweet V1s (as long as they're not too crimpy), rides her bike, reads a lot, and bothers her cats. She lives in Atlanta, GA, which is inarguably the best city in the world, with her husband Ryan who's a pretty darn cool guy.
