As the lead for risk assessment and testing of AI models, you will ensure that control implementations meet external regulations, internal standards, and best practices. Your role will involve defining the technical vision and strategic roadmap for AI controls and testing, including continuous monitoring, evaluation, and reporting of AI systems. To stay ahead of the regulatory landscape, you will work closely with the legal team to quickly adapt approaches to reflect new requirements.
You will play a critical role in prioritizing areas for risk assessment and mitigation, guiding the responsible development and deployment of AI systems. You will conduct testing of AI models as part of the governance process, validate testing conducted by internal teams, and provide actionable insights and recommendations for improvement in response to market changes or new regulatory demands. In collaboration with cross-functional teams, you will spearhead the development of tools, automation strategies, and data pipelines that support scalable AI risk management and empower product and engineering teams to use those tools and playbooks for their own independent risk mitigation.
You will assist in developing standardized reporting templates tailored to the needs of both technical data scientists and senior leadership, facilitating clear communication of results.
Your collaboration will extend to model owners and senior management, to whom you will present findings, assess their implications for risk management, and propose enhancements to AI models.