Why join us?
Bombardier is a global leader in aviation, focused on designing, manufacturing and servicing the world's most exceptional business jets and specialized mission platforms. Bombardier has been successful in setting the highest standards because we are a people-centric business that fosters passion, diversity and authenticity.
Prioritizing employee growth and development, we empower everyone to reach their full potential on their own terms.
Our Benefits
With our employees’ well-being top of mind, we offer a comprehensive and competitive Benefits Program, which includes the following:
- Insurance plans (medical, life insurance, and more)
- Employee Assistance Program
- Competitive base salary
What are your contributions to the team?
- Lead and administer the enterprise data platform, including the data warehouse (DWH) and data lakes:
- Ensure smooth operation, security, and performance of the data platform.
- Create and maintain an optimal, reliable data pipeline architecture:
- Design and implement efficient data pipelines to meet business needs.
- Optimize data extraction, transformation, and loading (ETL) processes.
- Utilize SQL and big-data technologies (Hadoop, MapReduce, Hive, Spark, Kafka, Pig, data streaming, NoSQL) as required.
- Implement life cycle management processes (DevOps) for data systems:
- Enable continuous integration and continuous deployment (CI/CD) of data solutions.
- Automate deployment, monitoring, and maintenance of data infrastructure.
- Integrate data from various sources and manage big data:
- Collect and consolidate data from external sources.
How to thrive in this role?
- 10 years of experience in a Data Engineer / Data Specialist role.
- Experience coaching/leading a small team (technical leadership).
- Knowledge of Agile/Scrum project delivery, DevOps, and CI/CD practices.
- Advanced knowledge of SQL, query authoring, and relational databases.
- Experience optimizing big-data pipelines (storage, file formats, partitioning, Spark, Python, streaming).
- Efficient at performing root-cause analysis and applying long-term fixes.
- Experience designing and building data transformation, data structures, metadata frameworks, and automated workload management.
- Familiarity with data protection measures, data privacy, and collaboration with Cyber teams.
- Good knowledge of Azure data services (Azure Data Factory, Synapse, Azure Data Lake Storage, Event Hubs, PolyBase, Databricks, Delta Lake, Cognitive Services, etc.).
- Proficiency in object-oriented and functional scripting languages: Python, Java, C++, Scala, etc.