Simon Kozlov: The difference between "human performance" in industry versus in an academic setting. We expected to see differences between humans and machine learning models in settings where humans perform at their best - careful, thoughtful, and dedicating their full attention to the task.
However, in many practical situations, humans get tired of repetitive work, can't concentrate for long periods of time, and have a lot of things on their mind other than the specific task at hand - in general, they act "human"!
These factors change the tradeoffs significantly - suddenly even a modestly accurate model can help catch significant issues and assist humans in their work! And if you figure out how to get feedback from them in real-time, this can make a model even more accurate. Rinse, repeat.
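The loop described above - route only the model's uncertain cases to humans, then feed their corrections back as training data - can be sketched in a few lines. This is a minimal illustration with hypothetical names and a toy threshold "model", not the speakers' actual system:

```python
# Human-in-the-loop sketch: a modestly accurate model auto-handles
# confident predictions and queues only low-margin items for review.
# All names here are illustrative assumptions.

def model_predict(x, threshold):
    """Toy classifier: the score is the input itself;
    items near the threshold are treated as uncertain."""
    label = x >= threshold
    confidence = abs(x - threshold)
    return label, confidence

def review_queue(items, threshold, min_confidence=0.1):
    """Split items into those needing human review and those
    the model can label automatically."""
    needs_review, auto_labeled = [], []
    for x in items:
        label, conf = model_predict(x, threshold)
        if conf < min_confidence:
            needs_review.append((x, label))
        else:
            auto_labeled.append((x, label))
    return needs_review, auto_labeled

items = [0.05, 0.48, 0.52, 0.95]
to_review, auto = review_queue(items, threshold=0.5)
# Humans only see the two borderline items; corrections collected here
# would become new labeled data for the next retraining round.
```

In a real pipeline the human labels gathered from `to_review` are appended to the training set and the model is retrained periodically, which is the "rinse, repeat" part.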
Q: What do you think a listener will get out of this talk vs. other talks on distributed data processing and data versioning that they've previously heard?
Simon Kozlov: Most of the time, machine learning performance is limited by the quality of the data that is input to the model. This talk is about turning raw production data into high-quality datasets, which requires experts to provide labels and judgment.
However, there are two big problems that we've found with experts: first, they’re very hard to find. Second, they never agree with each other.
In our talk, we’ll cover several ideas and practices that will help listeners deal with both issues.
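One common way to quantify the "experts never agree" problem before trying to fix it is an inter-annotator agreement statistic such as Cohen's kappa, which corrects raw agreement for what two annotators would agree on by chance. This is a generic illustration, not a method the speakers confirm using:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two annotators on the same items,
    corrected for chance: (p_observed - p_expected) / (1 - p_expected)."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    # Chance agreement: probability both pick the same class independently.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical experts labeling the same six items.
expert_1 = ["spam", "spam", "ham", "ham", "spam", "ham"]
expert_2 = ["spam", "ham",  "ham", "ham", "spam", "spam"]
kappa = cohens_kappa(expert_1, expert_2)
```

Here the experts agree on 4 of 6 items (67%), but after the chance correction kappa is only about 0.33 - a concrete reminder that raw agreement rates overstate how aligned experts really are.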
About the Startups Track
The data-oriented Startups Track at DataEngConf features dozens of startups forging ahead with innovative approaches to data and new data technologies. We find the most interesting startups at the intersection of ML, AI, data infrastructure, and new applications of data science, and highlight them in technical talks by the CTOs and lead engineers who are building these platforms.