
AI Frameworks Intern Engineer

  • Full Time Internship
  • Santa Clara, CA
  • Applications have closed

Intel

Job Description
Do you have a strong passion for optimizing cutting-edge HPC, datacenter, and client SW for maximum performance on the latest HW? We are looking for individuals who are interested in optimizing the world’s leading Machine Learning / Deep Learning frameworks for current and future Intel datacenter/client CPUs and GPUs.

This is a product development position with the end goal of high-quality, high-performance, secure product SW that makes the latest cutting-edge HW shine. You will start optimization pre-silicon and have access to HW shortly after it is first powered on. Product innovation and publication are encouraged, and there are opportunities to collaborate with research partners to develop ideas and translate them into the product.

The Machine Learning Performance (MLP) division is at the leading edge of the AI revolution at Intel, covering the full stack from applied ML, to ML/DL and data analytics frameworks, to Intel oneAPI AI libraries, and CPU/GPU HW/SW co-design for AI acceleration. It is an organization with a strong technical atmosphere, a spirit of innovation, friendly teamwork, and engineers with diverse backgrounds. The Deep Learning Frameworks and Libraries (DLFL) department is responsible for optimizing leading DL frameworks on Intel platforms. We also develop the popular oneAPI Deep Neural Network Library (oneDNN) and the oneDNN Graph library. Our goal is to lead in Deep Learning performance on both CPUs and GPUs. We work closely with other Intel business units and industry partners.

You will work on software development and optimizations in the following areas:

  • Analyze Deep Learning models and framework implementations to identify performance bottlenecks and optimization opportunities.
  • Accelerate frameworks such as PyTorch on Intel platforms by contributing optimizations and features directly to the public framework source or to pluggable open-source extension modules. These frameworks are primarily written in C++ and Python.
  • Develop low-precision, high-performance versions of popular models to take advantage of new instructions and architectures designed to accelerate Deep Learning.

The ideal candidate will also exhibit the following behavioral traits:

  • Ability to work in a dynamic, team-oriented environment
  • Ability to collaborate closely with teammates at multiple US sites, as well as with related teams in other countries, working virtually on the same product
  • Positive, can-do attitude and a desire to deliver results and winning products
  • Excellent written and oral communication skills
You should have a passion for low-level optimization and performance, close to the HW, as well as for good SW engineering practice and usability.
Qualifications

You must possess the below minimum qualifications to be initially considered for this position. Preferred qualifications are in addition to the minimum requirements and are considered a plus factor in identifying top candidates.

Minimum Qualifications:

  • Active student pursuing a master's degree or PhD in Computer Science, Data Analytics, or a related technical field
  • 1+ years of experience with C, C++, and/or Python

Preferred Qualifications:

  • Research, publications, or coursework related to Deep Learning
  • Previous internship experience in the field of AI
  • Experience with TensorFlow and/or PyTorch

The listed requirements can be obtained through a combination of industry-relevant job experience, internships, and schoolwork, classes, or research.