Software Engineer - ML/LLM Inference

  • Alldus
Job Summary
Location
San Francisco, CA 94199
Job Type
Contract
Visa
Any Valid Visa
Salary
Pay Rate
Qualification
BCA
Experience
2 Years - 10 Years
Posted
23 Jan 2025
Job Description

My client is searching for a talented engineer to work on ML/LLM inference and serving. They specialize in developing next-generation LLM fine-tuning and inference engines.


We are seeking a motivated Software Engineer specializing in Machine Learning (ML) and Large Language Model (LLM) inference to join our ML Inference team. In this role, you will bridge the gap between AI/ML research and systems programming to build and enhance our next-generation LLM Inference Engine, playing a crucial part in optimizing the performance, scalability, and efficiency of our LLM serving systems.


Key Responsibilities:


Develop and Enhance Inference Engine:

  • Design, implement, and optimize the next-generation LLM Inference Engine.
  • Integrate the latest LLM inference techniques from research to enhance latency and throughput.


Performance Optimization:

  • Conduct deep performance optimizations across multiple layers of the technology stack, including PyTorch, C++, and CUDA.
  • Analyze and improve system performance to meet the demands of various use cases.


Customer Collaboration:

  • Work closely with customers to understand specific performance requirements and optimize solutions accordingly.
  • Provide technical expertise and support to ensure successful deployment and operation of inference systems.


Technical Leadership:

  • Define the roadmap and technical vision for the inference stack.
  • Lead initiatives to drive innovation and maintain the competitive edge of our inference technologies.


Infrastructure Development:

  • Collaborate with partner teams to build and maintain scalable, multi-replica serving infrastructure.
  • Ensure the reliability and scalability of LLM serving systems to handle increasing workloads.


Qualifications:


Technical Skills:

  • Proficiency in systems programming languages such as C++.
  • Strong experience with machine learning frameworks, particularly PyTorch.
  • Expertise in GPU programming and CUDA for performance optimization.
  • Solid understanding of AI/ML concepts, especially related to large language models.


Experience:

  • Proven experience in developing and optimizing ML/LLM inference systems.
  • Demonstrated ability to integrate research advancements into production systems.
  • Experience with performance tuning and profiling across various technology stacks.
  • Experience with vLLM.
