AI/ML - Engineering Program Manager, Search Human Annotation
Seattle, WA (United States)
Imagine what you could do here. At Apple, new ideas have a way of becoming extraordinary products, services, and customer experiences very quickly. Do you love taking on challenges that create a positive impact? Are you passionate about enabling ground-breaking intelligent experiences? We are looking for people like you!
The Siri Search team's mission is to deliver compelling information-seeking experiences for Apple users. The team continuously innovates core technologies to empower a great user experience across multiple search modalities, powering search products in Safari, Spotlight, Siri Assistant, the News app, Lookup, and many more. A critical enabler of continuous Search product improvement is scalable, rigorous, and comprehensive search quality evaluation. We're looking for someone to build and lead the human annotation program that empowers deep, precise quality evaluation across search modalities, at both the component and end-product level, for on-device and server search.
In this role, you will lead roadmap planning, annotation strategy, budgeting, and partnership management for the Search human annotation program, scaling Siri human annotation to support the next generation of search products through annotation-based evaluation excellence. Additionally, you will be the org's specialist on human annotation, raising the bar for effective and efficient use of human judgment in product evaluation and development across the AI/ML org. You will work with world-class data scientists, machine learning engineers, evaluation tooling engineers, the Human Annotation Operations team, and product teams to build the highest-quality user experience that over 1 billion Apple customers love.
- Lead the roadmap and detailed execution of future human annotation platforms and programs to deliver world-class offline search quality evaluation, in close collaboration with data scientists on the Search analytics team
- Contribute to brainstorming on continuous improvement of human evaluation methodology, as well as solutions and workarounds for human evaluation challenges (e.g., on-device search evaluation)
- Drive organization-wide initiatives to scale human annotation tooling and processes
- Lead allocation of grading resources across Siri teams
- Partner with procurement and vendors to meet human annotation resource demand
- Collaborate with the Search analytics, Engineering, Privacy, Siri Annotation Operations, and Evaluation Tooling and Platform teams to manage dependencies, requirements, and execution, ensuring excellence in human-grading-based quality evaluation
- Organize data sharing initiatives for Siri teams to maximize the utilization and value of our human grading data assets
- Establish standard methodologies for human annotation efficacy and efficiency in the Siri org