Structured data extraction is not a new topic in the world of AI – but it is a discipline that is being completely upended by the advent of large language models. While research continues on whether LLMs are more cost-efficient than traditional NLP methods for data extraction tasks, it is clear that LLMs excel at intuitively grasping the intricacies of language, particularly when extracting structured data from unstructured text. For teams running traditional models, where does it make sense to shift to LLMs for structured data extraction today? How can you overcome issues around data privacy, hallucination, and the difficulty of evaluating performance – particularly with AI-assisted evaluation?
Informed by work with dozens of enterprises running LLM apps in production and by research on what works, this session will dive into emerging best practices and how best to leverage the OpenAI function calling API and open-source tools to ensure LLM apps are deployed responsibly – and perform reliably.
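As a flavor of the approach the session covers, here is a minimal sketch of structured extraction via OpenAI function calling. The tool name `extract_contact` and its fields are hypothetical, chosen for illustration; the talk does not prescribe a specific schema. Note the defensive parse of the returned arguments, since models can still emit malformed JSON.

```python
import json

# Hypothetical extraction schema (illustrative field names, not from the talk).
# Function-calling APIs accept a JSON Schema describing the structured output.
EXTRACT_CONTACT_TOOL = {
    "type": "function",
    "function": {
        "name": "extract_contact",
        "description": "Extract contact details from unstructured text.",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "title": {"type": "string"},
                "company": {"type": "string"},
            },
            "required": ["name"],
        },
    },
}


def parse_tool_arguments(raw_arguments: str) -> dict:
    """Parse the JSON argument string a tool call returns.

    Models can hallucinate malformed JSON, so guard the parse instead of
    trusting the output blindly.
    """
    try:
        return json.loads(raw_arguments)
    except json.JSONDecodeError:
        return {}


# The API call itself would look roughly like this (requires the `openai`
# package and an API key, so it is shown rather than executed):
#
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(
#       model="gpt-4o",
#       messages=[{"role": "user",
#                  "content": "Amber Roberts is an ML Growth Lead at Arize AI."}],
#       tools=[EXTRACT_CONTACT_TOOL],
#       tool_choice={"type": "function",
#                    "function": {"name": "extract_contact"}},
#   )
#   args = parse_tool_arguments(
#       response.choices[0].message.tool_calls[0].function.arguments)
```

Forcing the tool via `tool_choice` guarantees the model responds with a tool call rather than free text, which is what makes the output machine-parseable in the first place.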
Amber Roberts is an ML Growth Lead at Arize AI, an ML observability company built for maintaining models in production. Previously, Amber was a product manager of AI at Splunk and the Head of Artificial Intelligence at Insight Data Science. A Carnegie Fellow, Amber has an MS in Astrophysics from the Universidad de Chile.