As data engineers, we often begin with a focus on building a scalable platform for whatever challenge we have at hand. AI researchers have historically thought about problems differently, deferring concerns about reliable, scalable performance in production until later. With deep learning models becoming an integrated part of many widely used products, AI research needs a path to make it into products with real production demands. At the same time, research needs to continue to explore new modeling approaches, so those production demands can't remove the necessary flexibility from the model authorship workflow.
PyTorch 1.0 is a platform that uniquely enables AI research to be productionized in a way that serves the needs of both AI researchers and engineers building production systems. The 1.0 release combines the production capabilities of Caffe2 and the broad model support of ONNX with the research flexibility of PyTorch. With one platform, you can take the latest techniques coming out of research and rapidly incorporate them into your production systems.
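To make the research-to-production path concrete, here is a minimal sketch of one workflow PyTorch 1.0 enables: tracing an eagerly authored model into a TorchScript program that can be serialized and later loaded for serving without the original model code. The model architecture and file name are illustrative, not from the original text.

```python
import torch
import torch.nn as nn

# A small model authored in the flexible, eager research style.
# The architecture here is purely illustrative.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Trace the model into a TorchScript program: a self-contained,
# serializable representation suitable for production runtimes.
example_input = torch.randn(1, 4)
traced = torch.jit.trace(model, example_input)
traced.save("model.pt")

# A production system can reload the program without the original
# Python class definitions and run inference directly.
loaded = torch.jit.load("model.pt")
output = loaded(example_input)
```

The same traced module can also serve as the starting point for an ONNX export, which is how models move to other runtimes supported by the ONNX ecosystem.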
Jeff Smith builds AI technology and the teams behind it. Currently, he supports the team building the open source PyTorch AI platform at Facebook AI Research and beyond. He's the author of Machine Learning Systems: Designs that scale. While working at the intersection of functional programming, distributed systems, and machine learning, he coined the term reactive machine learning to describe an ideal machine learning architecture and associated set of techniques. Prior to joining FAIR, he built teams and technology for AI products like x.ai and Amelia.