Lead Data Engineer

Jobot
Job Summary
Location
Houston, TX
Job Type
Full-time, Direct Hire
Visa
Any Valid Visa
Salary
$150,000 - $190,000 per year
Experience
5+ Years
Posted
27 Feb 2025
Job Description
Remote - Lead Data Engineer - up to $190K base - join a team building systems to make data-driven business decisions

This Jobot Job is hosted by: Lucas Watson
Are you a fit? Easy Apply now by clicking the "Apply" button and sending us your resume.
Salary: $150,000 - $190,000 per year

A bit about us:

Our client, in the financial services industry, is seeking a Lead Data Engineer to join their team. This is a full-time, direct hire, remote role that can pay $150-190K base salary plus benefits, depending on experience.

Why join us?

This role is ideal for someone who thrives in a dynamic, fast-paced environment, enjoys solving complex data problems, and is passionate about driving innovation in data engineering. If you're looking to make an impact on the financial landscape with cutting-edge data solutions, this could be for you!

Job Details

Core Responsibilities:

  • Lead the design and implementation of end-to-end data pipelines, from extraction (API, scraping, pyodbc) to cleansing/transformation (Python, T-SQL) and loading into SQL databases or data lakes.
  • Oversee the development of robust data architectures that support efficient querying and analytics, ensuring high-performing, scalable data workflows.
  • Collaborate with data scientists, software developers, business intelligence teams, and stakeholders to develop and deploy data solutions that meet business needs.
  • Ensure smooth coordination between engineering and other teams to translate business requirements into technical solutions.
  • Guide the development of data models and business schemas, ensuring they are optimized for both relational (3NF) and dimensional (Kimball) architectures.
  • Lead the creation of scalable, reliable data models and optimize them for performance and usability.
  • Develop and maintain the infrastructure for large-scale data solutions, leveraging cloud platforms (e.g., Azure) and containerization technologies (e.g., Docker).
  • Lead the use of modern data platforms such as Snowflake and Fabric, ensuring their effective use in large-scale data solutions.
  • Manage and optimize data pipelines using tools such as Apache Airflow, Prefect, dbt, and SSIS, ensuring that all stages of the pipeline (ETL) are efficient, scalable, and reliable.
  • Ensure robust testing, monitoring, and validation of all data systems and pipelines.
  • Drive continuous improvement in data engineering processes and practices, ensuring they remain cutting-edge, efficient, and aligned with industry best practices.
  • Foster a culture of clean code, best practices, and rigorous testing across the team.

Required Skills:

  • Strong experience with data pipeline design and implementation, including extract, transform, and load (ETL) processes.
  • Proficiency in SQL (Postgres, SQL Server) and experience with modern data warehouse solutions (e.g., Snowflake, Fabric).
  • Expertise in Python for data engineering tasks, including data manipulation (Pandas, NumPy) and workflow management (Dask, PySpark, FastAPI).
  • Solid knowledge of cloud platforms (Azure, AWS) and big data technologies (Hadoop, Spark).
  • Hands-on experience with Docker, Kubernetes, and containerized environments.
  • Strong understanding of dimensional modeling (Kimball), relational database design (3NF), and data architecture best practices.
  • Experience with API development, including building and managing API integrations.
  • Proficiency with orchestration tools such as Prefect or Airflow for workflow management.
  • Strong focus on testing and validation, ensuring that all data systems meet reliability and performance standards.


Experience & Qualifications:

  • 5+ years of experience in data engineering roles, with a proven track record of developing and maintaining data pipelines and architectures.
  • Experience working with large-scale data platforms and cloud environments.
  • Strong background in relational databases, dimensional data modeling, and cloud-native solutions.
  • Familiarity with data engineering tools such as Apache Airflow, Prefect, and cloud storage platforms.
  • Excellent problem-solving skills, with the ability to navigate complex technical challenges.


Interested in hearing more? Easy Apply now by clicking the "Apply" button.