The role requires someone with an innovative mindset and the ability to conduct independent research.
Key Responsibilities:
- Design, test, and optimize models using large-scale data systems, ETL processes, and data pipelines.
- Explore, clean, and preprocess data to create high-performance, customized models.
- Apply algorithmic thinking to develop and deploy data models with a focus on domain-specific analysis.
- Continuously refine models based on performance metrics and feedback from stakeholders.
- Contribute to research and development by investigating state-of-the-art machine learning models, such as transformers and LLMs.
- Enhance operational efficiency through AI/ML initiatives.
Skills, Experience, and Qualifications:
- PhD from a reputable institution with a strong track record of published research.
- Experience in deploying models and managing large datasets.
- Proficiency in Python and R, and in SQL for data manipulation.
- Hands-on experience with platforms like Databricks.
- Knowledge of NLP techniques and experience implementing LLMs.
- Expertise in time series analysis and algorithmic problem-solving.
- Experience in transfer learning, hyperparameter tuning, and machine learning theory.
Preferred:
- Strong analytical skills for solving complex problems systematically.
- Ability to collaborate and communicate effectively across multidisciplinary teams.
- Strong communication skills to explain technical findings to non-technical stakeholders.