EDB-IPP Project: Advancing GPU Optimization for Large Language Models
Job Details
Vacancies
1 position
Experience Required
No experience required
Job Description
Rakuten Asia, in partnership with the Economic Development Board (EDB) through the Industrial Postgraduate Programme (IPP), is seeking prospective PhD candidates. We are looking for individuals with a strong grounding in deep learning, machine learning, and natural language processing to contribute to our research projects.
Essential requirements include proven hands-on expertise and strong engineering skillsets, specifically in the development and training of PyTorch models.
IPP Programme Benefits
Candidates selected for this programme will receive full sponsorship for their postgraduate studies and will be hired by Rakuten Asia upon successful completion of the programme.
Collaboration Model
The collaboration will include joint PhD student supervision, shared access to computational resources for large-scale model compression experiments, and regular research exchanges. Output will include high-impact publications, open-source tools, and demonstrable prototypes of efficient AI.
Project Outline
Introduction
Rakuten is committed to advancing the frontier of AI infrastructure, with a strong focus on optimizing large-scale GPU clusters for training and serving Large Language Models (LLMs). As models grow in size and complexity—ranging from dense architectures to mixture-of-experts (MoE)—achieving efficiency across training, inference, and deployment has become increasingly critical. Our GPU Optimization department combines deep systems expertise with significant computational resources, and we are seeking strategic collaborations with leading universities to tackle these challenges jointly.
Proposed Research Areas
We propose collaborative research in the following areas, with flexibility to refine topics based on mutual expertise:
- Efficient Scheduling for Sparse & Dense LLMs
Design token-aware, load-balanced scheduling algorithms for MoE and hybrid LLM workloads that reduce inter-GPU communication and optimize heterogeneous cluster utilization.
- Efficient Inference for State Space Models
Develop high-throughput, low-latency inference techniques for state space models, leveraging their linear-time properties to outperform traditional attention mechanisms in long-context scenarios.
- Memory-Aware Training & Serving
Explore advanced quantization, memory-efficient checkpointing, offloading strategies, and dynamic memory management techniques to support training and inference of ultra-large models.
- Scalable Parallelism for LLMs
Investigate hybrid parallelism (data, model, pipeline, expert) and communication-reduction strategies tailored for scaling LLMs across thousands of GPUs.
- Hardware-Aware Optimization
Develop compiler, kernel, and data layout optimizations that fully exploit features of modern GPU architectures, improving throughput for both dense and sparse model operations.
- High-Throughput, Low-Latency Inference
Create optimized model serving strategies using speculative decoding, continuous batching, expert routing, and adaptive computation for production-grade LLM applications.
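To illustrate the linear-time property of state space models noted above, here is a minimal sketch (our assumption: a single-channel SSM with scalar parameters `A`, `B`, `C`; production layers are multi-channel, vectorized, and hardware-optimized):

```python
def ssm_scan(A, B, C, inputs):
    """Run the recurrence h_t = A*h_{t-1} + B*x_t, with readout y_t = C*h_t.

    A single pass over the sequence: O(L) time and O(1) state per channel,
    versus the O(L^2) pairwise score computation of full attention.
    """
    h = 0.0
    outputs = []
    for x in inputs:
        h = A * h + B * x      # state update: constant work per token
        outputs.append(C * h)  # readout from the hidden state
    return outputs

# Example: an impulse input decays geometrically through the state.
print(ssm_scan(0.5, 1.0, 2.0, [1.0, 0.0, 0.0]))  # → [2.0, 1.0, 0.5]
```

Because each token touches only a fixed-size state, long-context inference cost grows linearly with sequence length, which is the efficiency lever this research area targets.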
Rakuten is an equal opportunities employer and welcomes applications regardless of sex, marital status, ethnic origin, sexual orientation, religious belief or age.
RAKUTEN ASIA PTE. LTD.
This is a direct application to RAKUTEN ASIA PTE. LTD.; no recruitment agencies are involved.