LLMs for Vehicles

The automotive industry is undergoing a significant transformation, with software playing an increasingly vital role. Large language models (LLMs), and in particular optimized small language models (SLMs), are emerging as powerful tools to enhance in-vehicle experiences. This post delves into LLMs for vehicles: what they are, how we can benefit from them, their real-world use cases, and how they are optimized for in-vehicle function-calling. We will also briefly touch on specific efforts such as the Mercedes-Benz LLM work.
What Are LLMs and SLMs?
LLMs (Large Language Models) are sophisticated AI models trained on vast amounts of text data. They excel at understanding and generating human-like text, enabling a wide range of applications such as natural language processing, text generation, and question answering. However, traditional LLMs are often too large to be deployed on resource-constrained devices such as those found in vehicles.
This is where SLMs (small language models) come into play. SLMs are smaller, more efficient counterparts of LLMs, designed to run on edge devices with limited compute and memory. They are optimized for size and speed while retaining strong performance, making them well suited to in-vehicle applications.
How Can We Benefit from LLMs and SLMs in Vehicles?
The integration of LLMs into vehicles, particularly through SLMs, offers numerous benefits:
- Enhanced User Experience: Natural, intuitive voice commands make interacting with vehicle systems easier and more user-friendly.
- Personalization: SLMs can learn user preferences and adapt vehicle settings accordingly.
- Seamless Integration: New features and updates can be integrated more quickly, reducing development time.
- Dynamic Control: Vehicle settings, such as seat heating, lighting, and temperature, can be controlled dynamically based on driver conditions.
- Reduced Distractions: Voice-activated controls minimize the need for manual adjustments, enhancing driving safety.
- Improved Safety: Natural language understanding of vehicle data and the surrounding environment gives the vehicle more accurate information for assistance and control, ultimately making driving safer.
Real-World Use Cases of LLMs and SLMs in Vehicles
Real-world applications of LLMs and SLMs in vehicles are rapidly expanding, transforming in-car experiences:
- Voice Assistants: Responding to voice commands for setting navigation, making calls, or playing music.
- Interior Control: Dynamically adjusting vehicle settings such as seat heating, ambient lighting, and temperature based on user preferences.
- Real-Time Information: Providing real-time updates on traffic, weather, and nearby points of interest.
- Personalized Recommendations: Suggesting music, points of interest, or routes based on past preferences and driving habits.
- On-Demand Information Access: Answering user questions about vehicle functions or maintenance.
- Integration with External Services: Connecting with external applications for seamless control of smart home devices or scheduling apps.
- Adaptive Driver Assistance Systems: Enhancing driver assist systems with better awareness of the environment and the driver.
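To make the voice-assistant and function-calling use cases concrete, here is a minimal sketch of how structured output from a language model could be dispatched to vehicle functions. All function names here are hypothetical illustrations, not any production automotive API:

```python
# Toy dispatcher mapping a model's structured output to vehicle functions.
# All names are invented for illustration, not a real automotive API.

def set_seat_heating(level: int) -> str:
    return f"seat heating set to level {level}"

def set_ambient_light(color: str) -> str:
    return f"ambient light set to {color}"

# Registry the model's structured output is resolved against.
FUNCTIONS = {
    "set_seat_heating": set_seat_heating,
    "set_ambient_light": set_ambient_light,
}

def dispatch(call: dict) -> str:
    """Execute a structured call like {'name': ..., 'args': {...}}."""
    fn = FUNCTIONS.get(call["name"])
    if fn is None:
        raise ValueError(f"unknown function: {call['name']}")
    return fn(**call["args"])

# Example: the model has turned "warm up my seat" into a structured call.
print(dispatch({"name": "set_seat_heating", "args": {"level": 3}}))
# → seat heating set to level 3
```

The key design point is that the model never touches hardware directly: it emits a constrained, structured request, and a conventional dispatcher validates and executes it.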
Optimizing Small Language Models for In-Vehicle Function-Calling
Deploying SLMs effectively in vehicles requires careful optimization. The paper highlights several techniques used to optimize small language models for in-vehicle function-calling:
- Model Pruning: Reduces model size by removing less important connections or layers. Two variants are employed:
  - Depth-wise pruning removes entire layers, selected by a similarity measure.
  - Width-wise pruning reduces the dimensionality of layers through techniques such as Principal Component Analysis (PCA).
- Healing: Fine-tuning the pruned model to recover lost performance, using techniques such as Low-Rank Adaptation (LoRA) and full fine-tuning.
- Quantization: Reducing the numerical precision of model weights to further decrease the size and computational requirements.
- Task-Specific Fine-Tuning: Training models on custom datasets for in-vehicle function-calling, incorporating specialized tokens that map language model outputs to gRPC-based vehicle functions.
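To illustrate the depth-wise pruning idea from the list above, here is a toy sketch: layers whose output is most similar to their input change the hidden state least and are candidates for removal. The similarity metric and the synthetic activations are assumptions for illustration; real pruning operates on actual model activations:

```python
import numpy as np

# Toy depth-wise pruning: rank transformer layers by how similar their
# output is to their input; the most "transparent" layers are pruned first.
# Synthetic activations only -- a sketch of the idea, not a real model.

rng = np.random.default_rng(0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend hidden states entering and leaving each of 6 layers.
hidden_in = [rng.normal(size=16) for _ in range(6)]
# Layer 2 barely changes its input; the other layers change it a lot.
hidden_out = [h + rng.normal(scale=(0.01 if i == 2 else 1.0), size=16)
              for i, h in enumerate(hidden_in)]

# Layers with the highest input/output similarity are pruned first.
sims = [cosine(i, o) for i, o in zip(hidden_in, hidden_out)]
prune_order = sorted(range(6), key=lambda i: sims[i], reverse=True)
print("prune first:", prune_order[0])  # layer 2, which changes its input least
```

Healing (LoRA or full fine-tuning) would then recover the accuracy lost by removing those layers.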
Specifically, the optimization involves:
- Utilizing special MB tokens for vehicle functions, so that the language model's output maps directly to controllable vehicle functions.
- Employing a multi-step prompt design to generate high-quality training examples.
- Leveraging lightweight runtimes like llama.cpp for on-device inference.
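The special-token mapping in the first bullet can be sketched as follows. The token format and function name here are invented for illustration; the actual MB tokens are not public:

```python
import re

# Sketch of mapping special control tokens in model output to vehicle
# function calls. The <fn:...> token format and the function name are
# hypothetical; the real MB token scheme is not public.

TOKEN_RE = re.compile(r"<fn:(\w+)>(.*?)</fn>")

def extract_calls(model_output: str):
    """Return (function_name, argument) pairs found in the output text."""
    return TOKEN_RE.findall(model_output)

out = "Sure, warming up. <fn:set_seat_heating>3</fn>"
print(extract_calls(out))  # [('set_seat_heating', '3')]
```

Each extracted pair would then be translated into the corresponding gRPC request to the vehicle's function interface.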
This combination of techniques enables efficient deployment of LLMs for vehicles on resource-constrained automotive hardware.
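The quantization step mentioned above can be illustrated with a minimal symmetric int8 scheme: weights are stored as 8-bit integers plus one float scale. This is a simplified sketch; deployed runtimes such as llama.cpp use more elaborate block-wise formats:

```python
import numpy as np

# Minimal symmetric int8 weight quantization: store weights as 8-bit
# integers plus a single float scale, reconstructing approximate values
# at inference time. A sketch only; real runtimes quantize block-wise.

def quantize_int8(w: np.ndarray):
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.02, -0.5, 0.31, 0.127], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(np.max(np.abs(w - w_hat)))  # rounding error is bounded by scale / 2
```

Each weight now costs 1 byte instead of 4, a 4x memory reduction at the price of a small, bounded rounding error.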
Mercedes-Benz LLM Model
Mercedes-Benz, like many automotive manufacturers, is actively exploring LLMs to enhance the in-car experience. While the specific details of its current production model are not the focus of the paper, the research presented is closely aligned with those goals: optimized SLMs such as Phi-3 mini, fine-tuned on a purpose-built in-vehicle function-calling dataset, represent a concrete effort to advance in-car LLM technology.
The approach demonstrates how real-time, on-device inference for functions like voice commands, ambient adjustments, or maintenance requests is made possible through these optimization techniques, enabling richer in-vehicle experiences.
Read more in the paper published by the Mercedes-Benz Research & Development team.