Evaluate, monitor, and continually improve GenAI apps for quality in production using Azure AI
Evaluating, comparing, deploying, and monitoring models in production is an essential part of the AI lifecycle, but the process can be arduous without pre-built tooling. In this session, you’ll learn how to configure and use Azure AI's built-in model evaluation and monitoring capabilities to evaluate, deploy, and monitor your GenAI application, receive timely alerts, and continually improve it as data and user behavior change over time.

About William Alpine:

Will is a Product Manager on Azure AI Platform's Responsible AI tooling team, working at the intersection of Responsible AI, User Experience, and Inference/MLOps. He specializes in building compelling UX backed by technically complex data and AI platform capabilities. A champion of Green Software Engineering, he contributes to open-source sustainability tooling and standards through the Green Software Foundation. He draws on deep experience in hardware, IoT, manufacturing, data science, renewable energy, and mechanical engineering. He holds an M.S. in Technology Innovation, Connected Devices from the University of Washington and a B.S. in Mechanical Engineering from Virginia Commonwealth University.