Principal Data Engineer

S M Software Solutions Inc
Job Summary
Location: Austin, TX 78716
Job Type: Contract
Visa: Any Valid Visa
Salary: PayRate
Qualification: BCA
Experience: 2 Years - 10 Years
Posted: 15 Mar 2025
Job Description

If you find this opportunity aligns with your career goals and interests, we kindly request that you send your documents to us by 4 Mar 2025 at your earliest convenience.


Job Title: AMPDE25-Principal Data Engineer

Client Name: LPL Financials

Office location: Fort Mill or Austin, US (Hybrid)

Duration: Full-time


Job Description

We are looking for a skilled Data Engineer to join our team and help build robust, scalable, and efficient data pipelines. The ideal candidate will have strong expertise in AWS, Python, Spark, ETL Pipelines, SQL, and Pytest. This role involves designing, implementing, and optimizing data pipelines to support analytics, business intelligence, and machine learning initiatives.

Key Responsibilities:

  1. Design, develop, and maintain ETL pipelines using AWS services, Python, and Spark.
  2. Optimize data ingestion, transformation, and storage processes for high-performance data processing.
  3. Work with structured and unstructured data, ensuring data integrity, quality, and governance.
  4. Develop SQL queries to extract and manipulate data efficiently from relational databases.
  5. Implement data validation and testing frameworks using Pytest to ensure data accuracy and reliability (a minimal sketch follows this list).
  6. Collaborate with data scientists, analysts, and software engineers to build scalable data solutions.
  7. Monitor and troubleshoot data pipelines to ensure smooth operation and minimal downtime.
  8. Stay up to date with industry trends, tools, and best practices for data engineering and cloud technologies.
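
To give a flavor of the Pytest-based data validation described in item 5, here is a minimal sketch of checks over a pipeline's output, assuming a hypothetical pandas DataFrame of trade records; the column names and rules are illustrative assumptions, not taken from this posting.

```python
# Minimal sketch: Pytest data-validation checks over a hypothetical
# pandas DataFrame of trade records. Column names and rules are
# illustrative assumptions, not taken from this posting.
import pandas as pd
import pytest


@pytest.fixture
def trades():
    # Hypothetical sample data standing in for a pipeline's output.
    return pd.DataFrame({
        "trade_id": [1, 2, 3],
        "amount": [100.0, 250.5, 75.25],
        "settled": [True, False, True],
    })


def test_trade_ids_are_unique(trades):
    # Primary-key integrity: no duplicate trade_id values.
    assert trades["trade_id"].is_unique


def test_amounts_are_positive(trades):
    # Domain rule: every amount must be strictly positive.
    assert (trades["amount"] > 0).all()


def test_no_missing_values(trades):
    # Completeness: the frame contains no nulls.
    assert not trades.isnull().any().any()
```

Run with `pytest`; in practice, checks like these would be wired into the CI/CD pipelines mentioned under the required skills below.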

Required Skills & Qualifications:

  1. 10+ years of experience in Data Engineering or a related field.
  2. Strong proficiency in AWS (S3, Glue, Lambda, EMR, Redshift, etc.) for cloud-based data processing.
  3. Hands-on experience with Python for data processing and automation.
  4. Expertise in Apache Spark for distributed data processing.
  5. Solid understanding of ETL pipeline design and data warehousing concepts.
  6. Proficiency in SQL for querying and managing relational databases.
  7. Experience writing unit and integration tests using Pytest.
  8. Familiarity with CI/CD pipelines and version control systems (e.g., Git).
  9. Strong problem-solving skills and ability to work in a fast-paced environment.

Preferred Qualifications:

  1. Experience with Terraform, Docker, or Kubernetes.
  2. Knowledge of big data tools such as Apache Kafka or Airflow.
  3. Exposure to data governance and security best practices.