Machine Learning Intern
Posted November 13
Our pedigree comes from companies like Microsoft, Broadcom, Inphi, Intel, Texas Instruments, Lucent, MIPS and Wave Computing. Our past successes include building chips for all the global cloud hyperscalers: Amazon, Facebook, Google, Microsoft, Alibaba, and Tencent, along with enterprise and mobile operators such as China Mobile, Cisco, Nokia, Ciena, Reliance Jio, Verizon, and AT&T. We are recognized leaders in the mixed-signal DSP connectivity space, now applying our skills to next-generation AI.
Location:
Hybrid, working onsite at our Santa Clara, CA headquarters 3 days per week.
The role: Machine Learning Intern
What you will do:
- Research and analyze existing KV-Cache implementations used in LLM inference, particularly those that store past key/value states as lists of PyTorch tensors.
- Investigate “Paged Attention” mechanisms that leverage dedicated CUDA data structures to optimize memory for variable sequence lengths.
- Design and implement a torch-native dynamic KV-Cache model that can be integrated seamlessly within PyTorch.
- Model KV-Cache behavior within the PyTorch compute graph to improve compatibility with torch.compile and facilitate the export of the compute graph.
- Conduct experiments to evaluate memory utilization and inference efficiency on d-Matrix hardware.
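To make the paged-attention idea above concrete: instead of growing one contiguous tensor per sequence, a paged KV-cache allocates fixed-size pages from a shared pool and tracks each sequence's pages in a block table. The following is a minimal illustrative sketch in plain Python, not d-Matrix's or any library's actual implementation; Python lists stand in for the GPU-resident page pool, and all names (`PagedKVCache`, `PAGE_SIZE`, etc.) are hypothetical:

```python
# Minimal sketch of a paged KV-cache with a per-sequence block table.
# Real systems keep the pages in device memory; plain lists stand in here.

PAGE_SIZE = 4  # tokens per page (real systems typically use 16 or more)

class PagedKVCache:
    def __init__(self, num_pages):
        # Free-list of physical page indices; each page holds PAGE_SIZE slots.
        self.free_pages = list(range(num_pages))
        self.pages = [[None] * PAGE_SIZE for _ in range(num_pages)]
        # Per-sequence block table: logical page order -> physical page index.
        self.block_tables = {}
        self.lengths = {}

    def append(self, seq_id, kv):
        """Append one token's (key, value) pair to a sequence's cache."""
        length = self.lengths.get(seq_id, 0)
        table = self.block_tables.setdefault(seq_id, [])
        if length % PAGE_SIZE == 0:          # current page full: grab a new one
            table.append(self.free_pages.pop())
        page = table[length // PAGE_SIZE]
        self.pages[page][length % PAGE_SIZE] = kv
        self.lengths[seq_id] = length + 1

    def gather(self, seq_id):
        """Return a sequence's KV entries in logical (token) order."""
        out = []
        for i in range(self.lengths.get(seq_id, 0)):
            page = self.block_tables[seq_id][i // PAGE_SIZE]
            out.append(self.pages[page][i % PAGE_SIZE])
        return out

# Two sequences of different lengths share one page pool.
cache = PagedKVCache(num_pages=8)
for t in range(6):
    cache.append("seq_a", ("k%d" % t, "v%d" % t))
cache.append("seq_b", ("k0", "v0"))
```

The point of the design is that each sequence consumes memory one page at a time, so total memory tracks actual sequence lengths rather than a padded maximum.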
Key Objectives:
- Develop an efficient support system for KV-Cache on d-Matrix hardware.
- Create a torch-level modeling framework for dynamic KV-Cache.
- Ensure compatibility of the KV-Cache model with torch.compile and other PyTorch features for optimized graph export.
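One common pattern for making a KV-cache friendly to torch.compile and graph export is a statically shaped, preallocated buffer that is updated in place, so the traced graph never sees a Python list growing or a tensor changing shape. The sketch below shows that pattern in plain Python as an illustration only (lists stand in for preallocated tensors, and the names `make_static_cache` / `cache_update` are hypothetical, loosely analogous to an `index_copy_`-style update):

```python
# Sketch of a statically shaped KV-cache update: the buffer's shape is
# fixed at allocation time and only a write position advances, which is
# the property that keeps the update traceable in a compute graph.

MAX_SEQ_LEN = 8

def make_static_cache():
    # Preallocated slots; a torch version would use torch.zeros(max_len, ...)
    return {"keys": [None] * MAX_SEQ_LEN,
            "values": [None] * MAX_SEQ_LEN,
            "pos": 0}

def cache_update(cache, k, v):
    """Write in place at the current position, then advance it."""
    p = cache["pos"]
    cache["keys"][p] = k
    cache["values"][p] = v
    cache["pos"] = p + 1
    return cache

cache = make_static_cache()
for step in range(3):
    cache_update(cache, "k%d" % step, "v%d" % step)
```

The trade-off is preallocating up to a maximum length; combining this static-shape property with paged allocation is essentially the design space the role above explores.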
What you will bring:
- Currently pursuing a degree in Computer Science, Electrical Engineering, Machine Learning, or a related field.
- Familiarity with PyTorch and deep learning concepts, particularly regarding model optimization and memory management.
- Understanding of hardware-accelerated computation; hands-on CUDA programming experience is a plus.
- Strong programming skills in Python, with experience in PyTorch.
- Analytical mindset with the ability to approach problems creatively.
Preferred Qualifications:
- Experience with deep learning model inference optimization.
- Knowledge of data structures used in machine learning for memory and compute efficiency.
- Experience with hardware-specific optimization, especially on custom accelerators such as d-Matrix's, is an advantage.
This role is ideal for a self-motivated intern interested in applying advanced memory management techniques in the context of large-scale machine learning inference. If you’re passionate about optimizing machine learning models and are excited to explore cutting-edge solutions in model inference, we encourage you to apply.
Equal Opportunity Employment Policy
d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication, and a willingness to embrace challenges and learn together every day.
d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.