6 month contract (good chance at FTE conversion)
Hybrid in Wilmington, DE (3 days onsite weekly)
Large client in banking industry
Job Description:
Requirements:
Ab Initio exposure or knowledge is a MUST-have; we can be more flexible on AWS, Hadoop, and Spark.
Minimum of 5+ years of experience
Experience with big data, preferably in a large, complex organization
Must have a hands-on ETL background and pipeline development experience.
Must have a Core Java background (not just J2EE).
Scala or Python may be acceptable in place of Java, but one of these languages is a must.
Additional Skills: Must have Hadoop and Spark experience.
Nice to have:
• AWS cloud experience (preferred)
• Hadoop and Spark knowledge (we are migrating legacy ETLs to a Spark-based processing framework)
• Knowledge of Kafka, Unix, and Python
• Hands-on experience delivering system design, application development, testing, and operational stability
• Advanced proficiency in one or more programming languages such as Java or Python
• Advanced knowledge of software applications and technical processes, with considerable in-depth knowledge in one or more technical disciplines (e.g., cloud, Hadoop, Spark)
• Practical cloud-native experience, preferably with AWS
• Experience developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database query languages
• Proven experience in event-driven/streaming architecture with Kafka, modeling RDBMS and NoSQL databases, and Unix shell scripting
• Strong experience architecting, designing, and implementing ETL solutions; Ab Initio experience is a plus
• Working proficiency with a variety of software engineering toolsets
#Pando