You will be a core member of Periscope’s technology team with responsibilities that range from developing and deploying our core enterprise products to ensuring that McKinsey’s craft stays on the leading edge of technology.
In this role, you will design, develop, and maintain scalable data pipelines and systems using Databricks on Azure. You will implement and optimize ETL processes with a focus on performance, cost-efficiency, and reliability, and you will use advanced Databricks technologies (DLT, Unity Catalog, Delta Sharing, SQL Warehouse) to enhance data management and sharing capabilities.
You will lead and execute data engineering projects from inception to completion, ensuring timely delivery and high quality. You will also continuously monitor, troubleshoot, and improve data pipelines and workflows to keep them performant and cost-effective.
You will collaborate with cross-functional teams to understand data needs and deliver solutions that meet business requirements, and you will mentor junior engineers on best practices, emerging technologies, and efficient data engineering techniques.