Micron: Future Cars Will Require 300GB of Memory
01
New Automotive Architectures Are Disrupting Processor and Memory Choices
The surge in data from advanced driver-assistance and autonomous driving sensors, along with the need to make real-time decisions based on such data, is placing unprecedented demands on in-vehicle memory and storage subsystems.
As more mechanical functions are replaced by electronic systems and vehicle intelligence continues to advance, several challenges facing the automotive sector are increasingly mirroring those in large-scale data centers. To ensure priority for critical functions such as automatic emergency braking, lane centering, rear-view camera image processing, and suspension adjustment, data must be transferred at extremely high speeds within and between processing units and memory.
At the same time, these vehicles incorporate a diverse set of functions with varying criticality. For instance, certain features within the infotainment system are vital for driver awareness, while others are not. The challenge here is to manage the vehicle both as a single integrated system and as a "system of systems," in which some functions take priority over others. The optimal approach to addressing this challenge is to increase bandwidth, reduce latency, and conduct more granular partitioning: determining which components should be deployed where, which manufacturing processes to use, and the corresponding costs.
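The "system of systems" prioritization described above can be sketched as a simple priority-queue arbiter: traffic from all subsystems shares one channel, but safety-critical messages are always served first. The subsystem names and priority values below are illustrative assumptions, not any OEM's actual scheme.

```python
# Minimal mixed-criticality arbitration sketch: lower number = higher priority.
import heapq
import itertools

PRIORITY = {"braking": 0, "rear_camera": 1, "lane_centering": 1, "infotainment": 9}
_counter = itertools.count()   # tie-breaker preserves FIFO order within a priority

queue = []

def send(source: str, payload: str) -> None:
    heapq.heappush(queue, (PRIORITY[source], next(_counter), source, payload))

def dispatch() -> tuple:
    _, _, source, payload = heapq.heappop(queue)
    return source, payload

send("infotainment", "album art")
send("braking", "obstacle ahead")
print(dispatch()[0])   # braking is served first despite arriving later
```

The same idea scales up to quality-of-service classes on automotive Ethernet, where switches rather than software perform the arbitration.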
David Fritz, Vice President of Mixed Physical and Virtual Systems for Automotive, Aerospace and Defense at Siemens EDA, stated: “When we talk about things like 10Gb automotive Ethernet with quality of service guarantees, traditional automotive engineers ask, ‘How do I ensure this signal actually reaches the braking system within 100 milliseconds?’ My response is, ‘Do you see that building two blocks away? If you take a twisted-pair Ethernet cable, run it all the way around that building, and then back here, that’s probably only a few microseconds of latency—and you’re worried about milliseconds.’ That’s because the transmission rate is so high. Even if some arbitration occurs in between, you have more than enough time to resolve it. So the concern about whether my data can still get from point A to point B quickly enough when the system is busy essentially disappears. And the worry about how to partition the 1.5Mb/s CAN bus to ensure data arrives on time vanishes as well. That’s the difference between megabits and gigabits.”
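Fritz's back-of-the-envelope claim is easy to check: one-way delay on a 10 Gb/s link is propagation plus serialization, and both are microseconds. The propagation figure and frame size below are typical assumed values, not measurements.

```python
# Sanity check on the "around the building and back" example: even hundreds of
# meters of cable at 10 Gb/s yields only a few microseconds of one-way latency.

PROP_DELAY_NS_PER_M = 5       # typical copper propagation delay (assumption)
LINK_RATE_BPS = 10e9          # 10 Gb/s automotive Ethernet
FRAME_BYTES = 1500            # one full-size Ethernet payload (assumption)

def one_way_latency_us(cable_m: float) -> float:
    """Propagation + serialization delay for one frame, in microseconds."""
    propagation_s = cable_m * PROP_DELAY_NS_PER_M * 1e-9
    serialization_s = FRAME_BYTES * 8 / LINK_RATE_BPS
    return (propagation_s + serialization_s) * 1e6

print(f"{one_way_latency_us(400):.2f} us")   # ~3.2 us against a 100 ms budget
```

Even with arbitration delays several times larger than this, the result stays four to five orders of magnitude inside a 100 ms deadline, which is the heart of Fritz's argument.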
This will have a significant impact on overall vehicle design. Fritz explained: “If you need to send a high-priority data packet from point A to point B on the network, and the network is extremely busy because it’s transmitting a video frame—nowadays some OEMs have 16, 20 cameras mounted around the vehicle—you want to process that data as close to the vehicle edge as possible. This reduces bandwidth requirements.”
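The bandwidth savings from edge processing can be made concrete with rough numbers. Everything below (resolution, frame rate, bits per pixel, metadata size) is an illustrative assumption, but the order-of-magnitude gap is the point.

```python
# Raw video from many cameras vs. edge-extracted object metadata.

CAMERAS = 20
WIDTH, HEIGHT = 1920, 1080    # per-camera resolution (assumption)
FPS = 30
BITS_PER_PIXEL = 12           # raw sensor output depth (assumption)

raw_gbps = CAMERAS * WIDTH * HEIGHT * FPS * BITS_PER_PIXEL / 1e9

# If each camera instead ships ~100 KB of detected-object metadata per frame:
edge_gbps = CAMERAS * 100e3 * 8 * FPS / 1e9

print(f"raw: {raw_gbps:.1f} Gb/s, edge-processed: {edge_gbps:.2f} Gb/s")
```

Shipping raw frames would saturate even a 10 Gb/s backbone, while edge-processed results leave ample headroom, which is why detection runs as close to the sensor as possible.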
Chinese OEMs understand that even when data from 20 high-resolution cameras streams in all at once, the system can still process it, store frame images, and perform AI analysis, even in the event of a collision. They can feed data to AI algorithms rapidly because they now have a large number of SoCs with latency at the nanosecond or even picosecond level. Their competitors, by contrast, have only a few ECUs, and at best these independent ECUs are connected by communication channels with just a few megabits of bandwidth. Ultimately, vehicles must be designed the same way SoCs are.
This also enables automakers to employ a diverse range of processing units and memories, focusing on areas that truly require peak performance while considering where specifications can be downgraded, how much power consumption different functions demand, and the overall cost.
Amir Kia, Senior Product Manager at Imagination Technologies, stated: “Traditionally, these functions relied on MPUs or DSPs, but there is growing interest in leveraging GPUs for some of these tasks today. For example, many companies already use GPUs in cockpit infotainment and in-vehicle display applications. Developers have realized that the flexibility of GPUs allows them to efficiently handle both compute and graphics workloads. Rather than integrating additional accelerators, they see value in extending the capabilities of existing GPUs to manage infotainment and boost compute performance, thereby reducing system overhead. This shift also creates opportunities to use smaller MPUs in these systems or minimize reliance on DSPs.”
02
Towards Software-Defined Cars
For automakers, many of these changes are fundamental, and it has only been in the past decade that they have begun shifting their focus from ECUs (Electronic Control Units) to a software-defined approach. The advantage of this shift is that different systems and subsystems can be designed like modules within an SoC, then integrated anywhere and in any manner that makes sense. This, in turn, makes it easier to determine how much bandwidth is required where, how much memory capacity is needed, which type of memory is best suited for each location, and which data carries the highest priority.
Kia noted: “Everyone is working toward moving to a more centralized architecture. We currently use many distributed ECUs, and we want to transition to a more centralized infrastructure. Some customer platforms are extremely compute-intensive, resulting in large volumes of real-time data from sensors and display systems. Some customers have six cameras, while others have eight to 12 cameras, all transmitting simultaneously. This creates massive high-speed data exchange within the system, and they are seeking to consolidate all of this processing onto a single SoC.”
A software-defined vehicle is fundamentally different from a set of function-dedicated ECUs. While individual systems must perform their required tasks, the central logic in such vehicles can also make real-time decisions involving multiple systems. To achieve this, however, the right data must be made accessible so that the system can act upon it.
Adiel Bahrouch, Director of Silicon IP Business Development at Rambus, stated: “High-resolution sensors, AI accelerators, and safety-critical workloads all converge on shared memory and storage subsystems. Without sufficient bandwidth, these subsystems quickly become performance bottlenecks. If memory cannot feed data to compute engines fast enough, chip utilization drops and latency rises, which directly impacts safety and user experience. A hierarchical memory and storage architecture—ranging from ultra-high-speed on-chip memory to high-capacity persistent storage—ensures that each workload achieves the right balance between bandwidth, latency, capacity, and cost, ultimately delivering safe, responsive, and feature-rich vehicles.”
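The hierarchy Bahrouch describes amounts to placing each workload in the cheapest tier that still meets its bandwidth and latency needs. A toy model, where the tier numbers are illustrative orders of magnitude rather than vendor specifications:

```python
# Pick the cheapest memory tier that satisfies a workload's requirements.

TIERS = [  # (name, bandwidth GB/s, latency ns, relative cost per GB) - assumptions
    ("on-chip SRAM", 1000,     1, 100.0),
    ("LPDDR5 DRAM",    50,   100,   1.0),
    ("UFS flash",       2, 50000,   0.1),
]

def place(workload: str, need_gbps: float, max_latency_ns: float) -> str:
    candidates = [t for t in TIERS if t[1] >= need_gbps and t[2] <= max_latency_ns]
    return min(candidates, key=lambda t: t[3])[0]   # cheapest qualifying tier

print(place("ADAS frame buffer", need_gbps=30, max_latency_ns=500))
```

Run against different workloads, the same rule sends scratchpad data to SRAM, sensor frame buffers to DRAM, and logs and firmware to flash, which matches the "DRAM for computing, NAND for data" rule of thumb cited later in this article.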
As these architectural shifts reshape the automotive industry, memory technology selection has become increasingly critical. Michael Basca, Vice President of Products and Systems at Micron Technology, noted: “As you progress from L3 to L4 and beyond, model complexity, sophistication, and efficiency remain key focus areas for OEMs. We have all seen robotaxis getting stuck in certain traffic scenarios, so it is clear we have not yet reached a point where models can handle every extreme edge case. At least on the storage side, these models will likely continue to grow in size for some time to come, with greater efficiency being a longer-term objective.”
Figure 1: ADAS Bandwidth Requirements. Source: Micron Technology
Furthermore, as in-vehicle language models grow more complex, vendors will find themselves requiring greater memory capacity and higher bandwidth, while striving to balance both against cost. Ferro noted: “Take Tesla as an example—you might find four LPDDR devices. They originally thought, ‘We can use less GDDR,’ but now they are actually using similar capacities. As a result, many customers are considering moving to LPDDR6, because they now need that capacity along with the other benefits of LPDDR.”
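The pull toward faster LPDDR generations comes down to per-device peak bandwidth, which is simply per-pin data rate times bus width. A quick sketch; the per-pin rates are nominal figures and the x32 device width is an assumption about configuration, not a quote from any datasheet:

```python
# Peak bandwidth per LPDDR device: data rate (MT/s) x bus width (bits) / 8.

def peak_gbs(mt_per_s: int, bus_bits: int = 32) -> float:
    """Peak bandwidth in GB/s for one device at the given per-pin rate."""
    return mt_per_s * bus_bits / 8 / 1000

print(f"LPDDR5:  {peak_gbs(6400):.1f} GB/s")
print(f"LPDDR5X: {peak_gbs(8533):.1f} GB/s")
```

The same formula applies to whatever per-pin rates LPDDR6 devices ultimately ship at, which is why each generation jump matters so much for the multi-camera, model-heavy workloads discussed above.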
High Bandwidth Memory (HBM), which is stacked DRAM connected through through-silicon vias, is not yet deployable in automotive applications today due to reliability concerns related to TSVs and vibration. However, with the growing demand for high-performance memory—even at the expense of lower-cost memory options in some cases—it has certainly come onto the radar of several companies.
Yu Yang, Chief Analyst for Automotive Semiconductors at Yole Group, stated: “The memory industry is highly concentrated, with a small number of leading players holding dominant positions, and production capacity is shared across all other industries. Therefore, understanding the memory industry is critical for OEMs aiming to transform. A relatively recent example is the sharp surge in DDR4 memory prices over the past few months, driven by AI demand, capacity reallocation, or speculation in the distribution channels.”
According to Yole Group, the memory types and their applications in automotive currently include:
DRAM (LPDDR4/5, GDDR6): Used in ADAS domain controllers, central computing, smart sensors, and digital cockpit SoCs;
NAND flash memory, including:
    eMMC/UFS: Used for infotainment, telematics, and ADAS software storage;
    NVMe SSD: Used in emerging L3+ autonomous driving computing, as well as EDR/DSSAD storage;
    SLC NAND: Used in telematics, RF modules, and high-durability logging;
NOR flash memory: Used for boot and security code in ADAS sensors, gateways, zone controllers, and MCUs;
Other non-volatile memories (EEPROM, FRAM, nvSRAM): Used for calibration data, configuration parameters, and low-density event logging.
A common rule of thumb is: DRAM is for computing, NAND is for data, and NOR is for code.
03
Other memory types
DRAM keeps getting faster, while SRAM remains the go-to memory when the highest performance is required. However, other types of memory are also gradually entering automotive applications.
Seitzer from Synopsys noted: “SRAM supports real-time computing tasks, while MRAM and RRAM offer high density, low power consumption, and non-volatility, making them well-suited for OTA updates, data logging, and configuration retention. These memory options address the automotive industry’s needs for optimal power efficiency, performance, and reliability.”
In addition, some data can be pre-processed in the vehicle and stored locally before being sent to the cloud for less time-sensitive tasks, such as analyzing vehicle behavior or updating map changes across a fleet.
Amit Kumar, Director of Product Marketing and Management for Automotive at Cadence’s Tensilica product group, said: “This data is not uploaded to the cloud immediately; instead, it is stored for several hours or even a day, depending on the cloud service provider being used—AWS, Microsoft Azure, or Google Cloud. This type of data flow is typically accumulated in the vehicle first and then structured for analysis in a data warehouse.”
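The store-and-forward flow Kumar describes can be sketched as a local buffer that records telemetry in real time and drains it to the cloud hours later. The event fields are illustrative, and `upload` is a hypothetical stand-in for an AWS, Azure, or Google Cloud client call.

```python
# Minimal store-and-forward telemetry sketch: append locally, drain later.
import json
import os
import tempfile

class TelemetryBuffer:
    def __init__(self, path: str):
        self.path = path

    def record(self, event: dict) -> None:
        """Called in real time; appends one event to local non-volatile storage."""
        with open(self.path, "a") as f:
            f.write(json.dumps(event) + "\n")

    def drain(self, upload) -> int:
        """Called hours later; ships buffered events to the cloud and clears them."""
        with open(self.path) as f:
            events = [json.loads(line) for line in f]
        for e in events:
            upload(e)
        os.remove(self.path)
        return len(events)

buf = TelemetryBuffer(os.path.join(tempfile.mkdtemp(), "telemetry.log"))
buf.record({"type": "hard_brake", "speed_kph": 62})
buf.record({"type": "lane_change", "speed_kph": 95})
print(buf.drain(lambda e: None))   # 2 events uploaded
```

Buffering to flash and uploading in batches is exactly why the article's rule of thumb assigns "data" to NAND: the write pattern is sequential, append-heavy, and tolerant of high latency.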
Flash memory is particularly useful for this. Seitzer said: “Flash memory—non-volatile, long-term—is still widely used in ECUs and central controllers. Non-volatile storage retains data throughout the entire lifespan of the vehicle, providing persistent storage for firmware, logs, and security assets. When accessing off-chip data, interfaces such as eMMC, UFS, and PCIe for high-bandwidth applications are utilized. Security is ensured through encryption, authentication, and compliance with automotive safety standards.”
Each OEM determines its own memory and storage architecture based on the available options and target markets.
Carrie Browen, Product Manager for the Software-Defined Vehicle business line at Keysight EDA, said: “Recordings from external cameras, as well as from interior cameras when enabled, can be used for ‘fleet learning’ to improve advanced driver assistance and fully autonomous driving functions. These are typically short video clips related to safety events, such as collisions or airbag deployments. For example, Tesla describes different data categories, such as ‘Autopilot Analytics & Improvements’ and ‘Road Segment Data Analytics,’ which are used to train and optimize driver assistance and navigation functions. Some data, such as dashcam footage and Sentry Mode storage—used to monitor for threats around a parked vehicle—is processed locally in the vehicle unless you explicitly enable sharing. In practice, some data is stored on the vehicle, while some is stored in Tesla’s data centers and its partners’ facilities for AI training, service, and support operations.”
Today, high-speed DRAM is typically used as near-compute memory, while flash and other non-volatile storage options provide data backup and redundancy. But these lines are beginning to blur.
Browen said: “Future architectures will gain greater flexibility by using more hybrid memory hierarchies, integrating traditional DRAM and flash memory within a single module or package. For camera and sensor data used in AI improvements, annotation and review tools allow authorized employees and contractors to view short clips and images to label objects and driving scenes. Media reports on annotation operations in the past have described such interfaces but have not disclosed the exact technical architecture.”
04
Conclusion
Automobiles are increasingly becoming complex “systems of systems,” integrating more memory and processing units, along with increasingly novel methods of data transmission and storage.
Randy White, Memory Solutions Project Manager at Keysight EDA, said: “In-vehicle compute requirements, including infotainment and ADAS, are increasingly demanding higher memory bandwidth and greater capacity. The low latency offered by local inference inside the vehicle, compared to cloud-based processing, ensures real-time processing and mission-critical timing.”
These are all stepping stones along the path to fully autonomous driving. Given the trajectory of this technology, that day is likely not far off.