Memory Controller

A memory controller is a hardware component within a computer system that manages the flow of data between the central processing unit (CPU), main memory, and other devices [6]. It serves as a critical intermediary in the memory hierarchy, coordinating access requests so that data is delivered efficiently and reliably. As an essential element of memory management hardware, the memory controller enables the microprocessor to interact with the memory and input/output devices it needs to perform its computational tasks [4]. Memory controllers are broadly classified by their integration: either as a discrete chip on the motherboard or, in modern systems, as a component integrated directly into the CPU or system-on-a-chip (SoC) package. Their design and performance are fundamental to overall system stability and speed, as they dictate how quickly the processor can access the data it needs to execute instructions.

The primary function of a memory controller is to handle read and write commands from the CPU or other system agents, translating them into the specific electrical signaling protocols required by the memory modules, such as DDR SDRAM. A key characteristic is its management of timing, addressing, and data buffering to maintain data integrity. Controllers often incorporate multiple independent channels to increase bandwidth; for instance, some system-on-chip designs feature two independent memory controllers that must be configured separately and have no means of communicating directly with each other [3]. This architecture allows concurrent memory operations. However, access requests can encounter contention: if the memory component is already servicing another access, a new request cannot execute immediately, which introduces latency [5]. Modern controllers also include advanced features such as error-correcting code (ECC) support and power-management states to enhance reliability and efficiency.
The significance of the memory controller has grown with increasing demands for data throughput in computing. In high-performance applications, such as gaming consoles and servers, the immersive experience and computational power depend on data flowing to and from the processor at speeds far beyond earlier home-electronics systems, a task managed by the memory controller [1]. Its integration directly into the CPU die, a common practice in contemporary microarchitectures from companies like Intel and AMD, reduces latency and increases bandwidth by providing a more direct connection to the processor cores [2][7]. This integration is a standard feature in modern desktop, mobile, and embedded platforms, including Intel's 12th Generation Core processors [8]. As a result, the design and capabilities of the memory controller are pivotal in applications ranging from personal computers and data centers to graphics processing and embedded systems, directly influencing overall system responsiveness and capability.

Overview

A memory controller is a digital circuit that manages the flow of data between a computer's central processing unit (CPU) and its main memory, typically dynamic random-access memory (DRAM). This hardware component acts as an intermediary, translating memory access requests from the processor into the specific electrical signals and timing sequences required by the memory modules [14]. Its primary functions include initiating memory read and write cycles, handling address multiplexing, generating necessary control signals like row address strobe (RAS) and column address strobe (CAS), and performing critical refresh operations to maintain data integrity in DRAM cells [13]. The performance and efficiency of the memory controller directly influence overall system speed, power consumption, and stability, making it a cornerstone of modern computing architectures [14].

Architectural Integration and Evolution

Historically, memory controllers were located on the motherboard's northbridge chipset, separate from the CPU. This configuration introduced latency due to the need for communication across the front-side bus (FSB). Modern processor design has largely abandoned this approach in favor of integrating the memory controller directly onto the CPU die, a paradigm known as an integrated memory controller (IMC) [14]. This integration significantly reduces memory access latency and increases available memory bandwidth by establishing a direct, high-speed connection between the processor cores and the memory subsystem. For instance, in Intel's 12th Generation Core processors (Alder Lake architecture), the memory controller is a fundamental component of the processor's system agent, managing interfaces to both DDR4 and DDR5 memory technologies [14]. Similarly, AMD's accelerated processing units (APUs) integrate the memory controller to facilitate low-latency access for both CPU and graphics cores [13].

Core Functional Responsibilities

The memory controller executes a complex set of operations to manage DRAM, which stores data in a grid of capacitors that require periodic refreshing. A primary duty is address decoding and command scheduling. The controller receives logical addresses from the CPU and translates them into the physical row and column addresses of the DRAM array. It then schedules the precise sequence of commands—such as ACTIVATE (to open a row), READ/WRITE (to access a column), and PRECHARGE (to close a row)—adhering to strict timing parameters like tRCD (RAS-to-CAS delay), tCL (CAS latency), and tRP (row precharge time) [14]. These timings, measured in clock cycles (e.g., DDR5-4800 with CL40), are critical for performance and are programmed into the controller's mode registers.

Another critical function is refresh management. DRAM capacitors leak charge over time; to prevent data loss, every row must typically be refreshed every 64 milliseconds. The memory controller autonomously issues refresh commands, which can be distributed across the refresh interval (distributed refresh) or performed in bursts (burst refresh). During refresh cycles, the affected memory banks are unavailable for read/write operations, so the controller must arbitrate access to minimize the performance impact [14].

Error detection and correction is also a key responsibility. Many controllers implement error-correcting code (ECC), which adds extra bits (e.g., 8 bits for a 64-bit data word) to detect and correct single-bit errors and detect double-bit errors. The controller calculates and checks these codes on every write and read operation, enhancing data reliability, especially in server and workstation environments [14].
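
The row-hit versus row-miss cost implied by these timings can be sketched with a toy single-bank scheduler. The cycle counts are illustrative, loosely modeled on DDR5-4800 CL40; a real controller tracks many more constraints (tRAS, tFAW, tRFC) across every bank and rank:

```python
class BankScheduler:
    """Toy single-bank DRAM command scheduler enforcing tRCD, CL, and tRP
    (all in clock cycles). Illustrative only."""

    def __init__(self, tRCD=39, CL=40, tRP=39):
        self.tRCD, self.CL, self.tRP = tRCD, CL, tRP
        self.open_row = None      # currently activated row, if any
        self.ready_cycle = 0      # earliest cycle the bank accepts a command

    def read(self, row, col, now):
        """Schedule a read; return the cycle at which data is available."""
        t = max(now, self.ready_cycle)
        if self.open_row != row:
            if self.open_row is not None:
                t += self.tRP              # PRECHARGE: close the open row
            t += self.tRCD                 # ACTIVATE: open the target row
            self.open_row = row
        data_cycle = t + self.CL           # READ: column access latency
        self.ready_cycle = t + 1           # next command slot (simplified)
        return data_cycle

sched = BankScheduler()
miss = sched.read(row=7, col=0, now=0)     # row miss: tRCD + CL cycles
hit = sched.read(row=7, col=8, now=miss)   # row hit: CL cycles only
```

The sketch shows why controllers reorder requests to favor open rows: a row hit costs only the CAS latency, while a miss also pays activation (and, for a conflicting open row, precharge) time.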

Interface and Protocol Management

Modern memory controllers support sophisticated double data rate (DDR) interfaces. They generate the differential clock signals (CK_t/CK_c) and manage the data strobes (DQS_t/DQS_c) that are used to capture data on both the rising and falling edges of the clock, effectively doubling the data transfer rate. For a DDR5-4800 interface, the controller operates the I/O at 2400 MHz (the clock frequency), resulting in a data rate of 4800 megatransfers per second (MT/s) [14]. The controller's physical layer (PHY) handles impedance calibration (ZQ calibration) and manages on-die termination (ODT) to ensure signal integrity across the memory channel.

Controllers also manage channel and rank organization. A single memory channel can support one or more ranks (sets of memory chips sharing the same bus). The controller interleaves accesses across ranks and channels to improve parallelism and bandwidth utilization. For example, a dual-channel controller can simultaneously access two independent 64-bit data paths, aggregating to a 128-bit total width, thereby doubling peak theoretical bandwidth compared to a single channel [14].
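
The bandwidth arithmetic above can be checked directly; here 1 GB/s means 10^9 bytes per second:

```python
def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits=64, channels=1):
    """Peak theoretical bandwidth in GB/s (1 GB = 1e9 bytes)."""
    bytes_per_transfer = bus_width_bits // 8
    return transfer_rate_mts * 1e6 * bytes_per_transfer * channels / 1e9

single = peak_bandwidth_gbs(4800)               # DDR5-4800, one 64-bit channel
dual = peak_bandwidth_gbs(4800, channels=2)     # dual channel doubles the peak
```

This yields 38.4 GB/s per 64-bit channel at 4800 MT/s and 76.8 GB/s for the dual-channel case; real sustained bandwidth is lower due to refresh, turnaround, and scheduling overheads.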

Power and Performance States

Advanced memory controllers implement several power management features to improve energy efficiency. These include:

  • Dynamic frequency and voltage scaling: The controller can lower the memory clock frequency and corresponding voltage during periods of low activity to save power [14].
  • Self-refresh and power-down modes: The controller can command the DRAM modules to enter low-power states where they use internal timers to perform refreshes (self-refresh) or where all but essential circuitry is powered down. Exiting these states incurs a latency penalty [14].
  • Active power management: Techniques like data bus inversion (DBI) are used to minimize the number of data lines toggling state during a transfer, reducing I/O power consumption [14].
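
The exit-latency penalty noted above makes entering a low-power state a trade-off. A toy energy model can illustrate the break-even reasoning; every power and latency number here is invented for illustration, not a datasheet value:

```python
def self_refresh_saves(idle_ns, p_idle_mw=150.0, p_sref_mw=20.0, exit_ns=200.0):
    """True if entering self-refresh for an idle gap saves energy.

    Energy is in picojoules (mW * ns). Entering costs self-refresh power
    for the gap plus full power burned during the exit-latency penalty.
    All parameter defaults are illustrative assumptions.
    """
    stay_pj = p_idle_mw * idle_ns
    enter_pj = p_sref_mw * idle_ns + p_idle_mw * exit_ns
    return enter_pj < stay_pj

short_gap = self_refresh_saves(100)    # too short: exit penalty dominates
long_gap = self_refresh_saves(1000)    # long enough to pay off
```

With these numbers the break-even idle time works out to roughly 230 ns; a real controller applies similar reasoning using observed traffic patterns and programmable idle thresholds.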

Configuration and Programmability

Memory controllers are highly configurable through a set of mode registers (MRs) programmed during system initialization by the BIOS/UEFI. These registers set operational parameters such as:

  • Burst length (typically BL16 for DDR4/DDR5)
  • Read/write latency (CL, tRCD, tRP)
  • Command timing (tRFC, tFAW)
  • Enabling features like gear-down mode or decision feedback equalization (DFE) for signal integrity [14]

The controller performs memory training at boot, a process in which it tests and adjusts timing delays for the command/address and data buses to compensate for PCB trace-length variations, ensuring reliable communication [14].

In summary, the memory controller is a sophisticated, integrated hardware unit essential for efficient, reliable, and high-performance memory access in contemporary computing systems. Its design encompasses complex scheduling algorithms, signal integrity management, power optimization, and protocol handling, directly impacting the capabilities of the processors and platforms it serves [13][14].
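
The latency parameters above are programmed in clock cycles, so absolute latency follows from the clock rate. A quick check (the DDR4 comparison point is an illustrative example, not drawn from the text):

```python
def cas_latency_ns(cl_cycles, transfer_rate_mts):
    """Absolute CAS latency in ns. DDR transfers twice per clock, so the
    clock runs at half the MT/s rate; latency is CL cycles at that clock."""
    clock_mhz = transfer_rate_mts / 2
    return cl_cycles / clock_mhz * 1000.0   # cycles / MHz = us; *1000 = ns

ddr5 = cas_latency_ns(40, 4800)   # DDR5-4800 CL40 -> ~16.7 ns
ddr4 = cas_latency_ns(16, 3200)   # DDR4-3200 CL16 -> 10 ns
```

Despite the larger CL number, absolute CAS latency stays in the same tens-of-nanoseconds range across generations because the clock frequency scales up along with it.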

History

Early Foundations and Discrete Controllers

The conceptual foundation for memory control emerged alongside the earliest digital computers, which required coordinated systems to manage data flow between processing units and storage. However, the modern memory controller's evolution is inextricably linked to the development of semiconductor memory. A pivotal moment occurred in 1970 with the debut of the IBM System/370 Model 145, the first commercial computer to feature main memory constructed entirely from silicon-based monolithic integrated circuits rather than magnetic-core technology [15]. This shift to silicon memory chips, which would soon become the universal symbol of computer technology, necessitated more sophisticated electronic interfaces to manage their operation [15].

Throughout the 1970s and 1980s, memory controllers existed as discrete components, implemented in dedicated support chips and later consolidated into the "northbridge" of PC chipset architectures. These separate chips acted as intermediaries between the central processing unit (CPU) and dynamic random-access memory (DRAM), handling the complex timing, signaling, and protocol translation required. This arrangement was functional but introduced latency, because signals had to travel off the CPU die and through external buses.

The Integration Revolution

A paradigm shift began in the late 1990s and early 2000s as microprocessor designers sought to eliminate the performance bottleneck created by external memory controllers. The key innovation was moving the memory controller from a separate chip onto the same die as the CPU itself. This architectural change, known as an integrated memory controller (IMC), dramatically reduced latency by providing a direct, high-speed path between the processor cores and the DRAM. As noted earlier, the primary duty of this integrated controller is to manage the critical interface, translating processor requests into the specific commands required by the memory technology in use. A major driver for this integration was the increasing performance disparity between rapidly accelerating CPU clock speeds and the slower-growing bandwidth of external front-side buses. By integrating the controller, microprocessor platforms could achieve significantly higher memory bandwidth and lower latency, which became crucial for data-intensive applications. This transition marked a fundamental rethinking of platform architecture, concentrating core system functions onto the microprocessor die [14].

Driving Advanced Computing Platforms

The integrated memory controller became a cornerstone for advanced computing platforms, enabling new classes of performance. It is responsible for the efficient transfer of data between the processor and the DRAM, as well as for overseeing essential DRAM maintenance operations [14]. This capability proved vital for cutting-edge systems. For instance, Sony's Cell microprocessor, co-developed with Toshiba and IBM for the PlayStation 3 (PS3), relied on an exceptionally high-bandwidth memory interface to achieve its intended immersive experience. The system demanded data flow speeds "way beyond anything achieved in a home-electronics system" at the time, a feat enabled by its advanced memory controller architecture [1]. The press coverage lavished on the PS3's technologies highlighted how integral memory control was to realizing the potential of novel processor designs like the Cell [1]. This era solidified the IMC as not merely a supporting component but a critical enabler of overall system performance, particularly for graphics processing units (GPUs) and heterogeneous processors where massive, parallel data streams are the norm.

Modern Evolution and Specialization

In contemporary microarchitectures, the memory controller has evolved into a highly sophisticated and often specialized unit. Building on the fundamental timing and command scheduling discussed previously, modern controllers incorporate advanced features such as:

  • Multi-channel architectures: Supporting two (dual-), four (quad-), or even eight (octal-) independent memory channels to multiply aggregate bandwidth.
  • Command queue optimization: Reordering pending memory requests to maximize bus utilization and minimize idle time, a process essential for hiding the inherent latency of DRAM accesses.
  • Error correction code (ECC): On-the-fly detection and correction of bit errors to improve data integrity, especially critical in server and workstation environments.
  • Power management: Implementing various low-power states for the memory interface and DRAM modules to conserve energy during periods of low activity.
  • Technology agnosticism: Modern controllers are designed to be adaptable, supporting multiple generations of DRAM (e.g., DDR4 and DDR5) through programmable timing parameters and signaling protocols.

The performance of these controllers is a primary factor in platform capability. For example, a controller managing a DDR5-4800 interface operates its I/O at a base clock frequency of 2400 MHz to achieve the noted data rate. Furthermore, techniques like memory prefetching, where the controller anticipates data needs and loads data into caches before it is explicitly requested, have become standard for boosting application performance [4]. This evolution reflects the ongoing concentration of platform functionality onto the microprocessor, a trend that continues to define high-performance computing [14]. Today's memory controllers are integral, complex subsystems that play a decisive role in determining the real-world performance of everything from mobile devices to supercomputers.
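
The multi-channel interleaving listed above comes down to how the controller slices a physical address into fields. The layout below (64-byte lines, one channel bit just above the line offset, then bank and row bits) is an illustrative assumption; real address mappings vary widely by design:

```python
def decode(addr, line_bits=6, ch_bits=1, bank_bits=2):
    """Split a physical address into (channel, bank, row, line offset)
    under a toy interleaving scheme. Field layout is an assumption."""
    col_off = addr & ((1 << line_bits) - 1)   # offset within a 64-byte line
    addr >>= line_bits
    channel = addr & ((1 << ch_bits) - 1)     # low bits select the channel
    addr >>= ch_bits
    bank = addr & ((1 << bank_bits) - 1)      # then the bank
    row = addr >> bank_bits                   # remaining bits are the row
    return channel, bank, row, col_off

a = decode(0x0000)   # cache line 0
b = decode(0x0040)   # the very next 64-byte line
```

Because the channel bit sits just above the line offset, consecutive cache lines land on alternating channels, which is what lets a sequential stream use both channels' bandwidth concurrently.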

Significance

The memory controller is a critical determinant of overall system performance, power efficiency, and architectural flexibility in modern computing. Its design and implementation directly influence a platform's capabilities, from consumer desktops to high-performance servers and specialized embedded systems [20][22]. The controller's role extends far beyond basic command scheduling to encompass complex optimization algorithms, system-level integration strategies, and enabling novel processor architectures that would be impractical with external memory management.

Architectural Integration and System Performance

The integration of the memory controller directly onto the CPU die, a transition pioneered in the early 2000s, fundamentally reshaped system architecture and performance benchmarks. This move was largely driven by the need to eliminate the bottleneck of the external front-side bus (FSB), which struggled to keep pace with rapidly increasing CPU clock speeds [19]. By placing the controller on-die, the path to main memory was drastically shortened, reducing latency and increasing available bandwidth. This architectural shift was a key differentiator in competitive landscapes; for instance, AMD's introduction of the integrated memory controller in its Athlon 64 (K8) architecture provided a significant performance advantage over Intel's contemporary NetBurst architecture, which relied on a northbridge-based controller connected via a slower FSB [19]. The performance of these integrated controllers remains a primary factor in defining platform capability across generations [22].

This integration also rendered the traditional northbridge—a chipset component historically responsible for memory control and high-speed CPU interfaces—obsolete for this function in modern designs [16]. The elimination of this external component reduced system latency and paved the way for more direct and efficient communication pathways, such as Intel's QuickPath Interconnect (QPI) and AMD's Infinity Fabric, which link processors, memory, and I/O directly [21]. The design of the controller, including its scheduling algorithms, queue depths, and support for advanced protocols like bank grouping and same-bank refresh, is therefore central to realizing the full potential of a microprocessor's core architecture [20].

Enabling Specialized and High-Performance Designs

Memory controllers are pivotal in enabling specialized processor designs tailored for specific computational domains. A prominent example is the Cell Broadband Engine processor, developed jointly by Sony, Toshiba, and IBM for the PlayStation 3. The immersive experience targeted by this system depended on data flowing to and from the synergistic processing elements (SPEs) at speeds "way beyond anything achieved in a home-electronics system" at the time. This demanded an exceptionally high-bandwidth memory interface (the Element Interconnect Bus) and a sophisticated memory controller capable of managing concurrent, high-speed data streams between the PowerPC core, the SPEs, and both XDR DRAM and graphics memory. This controller design was integral to the Cell processor's ability to perform massive parallel computations for gaming and scientific workloads.

Similarly, the flexibility of licensing architectures like ARM allows system-on-chip (SoC) designers to customize memory controllers to meet precise performance, power, and cost targets for embedded and mobile applications [17]. A designer can tailor the controller's width, number of channels, supported memory type (LPDDR4/5, DDR4/5), and power management features to align with the specific requirements of a smartphone, IoT device, or automotive system. This contrasts with traditional desktop CPUs, where, as noted in some integrated designs, the memory controller can lock the platform into a specific memory generation (e.g., only DDR2 or DDR3), limiting upgrade paths without a new CPU and motherboard [19].

Reliability, Error Management, and Signal Integrity

In data-critical environments such as servers, workstations, and enterprise storage, the memory controller's role in ensuring data integrity is paramount. Modern controllers implement advanced error checking and correcting (ECC) schemes that go beyond simple single-bit correction. These include:

  • Chipkill and SDDC (Single Device Data Correction) technologies, which can correct multi-bit errors from a complete DRAM chip failure
  • Patrol scrubbing, which proactively reads memory locations to detect and correct errors before they cause a system fault
  • Address parity protection for commands and addresses on the memory bus
  • Post-package repair (PPR), which can remap addresses to spare rows within a DRAM device to isolate failing cells [18][14]
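
The single-bit correction these schemes build on can be shown with a toy Hamming code over 8 data bits. This is a teaching sketch: production controllers use wider codes (e.g., 8 check bits over a 64-bit word) plus an overall parity bit for double-bit detection, alongside the chip-level schemes listed above:

```python
def ecc_encode(data):
    """Hamming-encode 8 data bits into a 12-bit codeword, bit positions
    1-12. Parity bits sit at power-of-two positions; each covers every
    position whose index has that bit set."""
    word, d = {}, 0
    for pos in range(1, 13):
        if pos & (pos - 1):                  # not a power of two: data bit
            word[pos] = (data >> d) & 1
            d += 1
    for p in (1, 2, 4, 8):
        word[p] = 0
        word[p] = sum(word[i] for i in range(1, 13) if i & p) & 1
    return word

def ecc_correct(word):
    """Return (corrected word, syndrome); a nonzero syndrome is exactly
    the position of a single flipped bit."""
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(word[i] for i in range(1, 13) if i & p) & 1:
            syndrome |= p
    if syndrome:
        word[syndrome] ^= 1                  # flip the erroneous bit back
    return word, syndrome

cw = ecc_encode(0b10110010)
cw[5] ^= 1                                   # inject a single-bit error
fixed, syndrome = ecc_correct(cw)            # syndrome names position 5
```

The controller performs the equivalent of `ecc_encode` on every write and `ecc_correct` on every read, which is why ECC costs extra storage (the check bits) and a little latency but is transparent to software.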

These features are managed directly by the memory controller and are essential for meeting the reliability requirements of cloud, data center, and high-performance computing (HPC) workloads [22]. Furthermore, the controller is responsible for maintaining signal integrity at high data rates. For interfaces like DDR5-4800, where the I/O operates at 2400 MHz, the controller must implement precise impedance calibration, write leveling, and data eye training to ensure commands and data are correctly sampled amidst noise and signal skew across the physical motherboard traces [20][14].

Influence on Memory Technology Adoption and System Design

The memory controller acts as the essential bridge between the CPU's instruction set architecture and the physical DRAM technology. Its design dictates which memory standards a platform can support (e.g., DDR4, DDR5, LPDDR5, HBM). Consequently, controller capabilities directly drive the adoption of new memory technologies into the market. Features like bank grouping, same-bank refresh, and support for fine-granularity refresh modes are implemented in the controller to maximize efficiency and bandwidth utilization for specific DRAM generations [20][14]. Moreover, the controller's power management capabilities have a major impact on system energy consumption. Techniques include:

  • Dynamic frequency and voltage scaling for the memory interface
  • Partial-array self-refresh (PASR), which puts only portions of memory into low-power refresh mode
  • Controller clock gating during idle periods
  • Deep power-down modes for entire memory channels when not in use [20][14]

These optimizations are crucial for extending battery life in mobile devices and reducing operational costs in large-scale data centers. The controller's prefetching algorithms, which predict and fetch data from memory before the CPU explicitly requests it, are also a key area of optimization. Effective prefetching can hide memory latency and significantly boost application performance, especially for data-intensive workloads in gaming, scientific simulation, and artificial intelligence [17][20].

In summary, the significance of the memory controller lies in its role as the central nervous system for a computer's memory hierarchy. It is not merely a passive interface but an active, intelligent component whose design choices in integration, scheduling, reliability, and power management fundamentally shape the performance envelope, efficiency, and capabilities of the entire computing platform, from embedded sensors to exascale supercomputers [17][20][22][14].
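
A minimal sequential-stride prefetcher of the kind alluded to above might look like the following sketch; real designs track strides per instruction pointer with confidence counters and throttling, so this is only an illustration of the core idea:

```python
class StridePrefetcher:
    """Toy stride prefetcher: once the same stride is seen twice in a row,
    prefetch `depth` addresses ahead of the current access."""

    def __init__(self, depth=2):
        self.last = None      # previous demand address
        self.stride = None    # stride observed on the previous access
        self.depth = depth    # how far ahead to prefetch

    def access(self, addr):
        """Record a demand access; return addresses worth prefetching."""
        out = []
        if self.last is not None:
            s = addr - self.last
            if s != 0 and s == self.stride:       # stride confirmed
                out = [addr + s * i for i in range(1, self.depth + 1)]
            self.stride = s
        self.last = addr
        return out

pf = StridePrefetcher()
pf.access(0x100)          # no history yet
pf.access(0x140)          # stride 0x40 observed, not yet confirmed
hits = pf.access(0x180)   # confirmed: prefetch 0x1C0 and 0x200
```

Requiring the stride to repeat before prefetching is the simplest form of confidence: it avoids polluting caches and wasting bandwidth on random access patterns.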

Applications and Uses

The memory controller's fundamental role in managing data flow between processors and memory has made it a critical component across diverse computing domains. Its applications range from enabling general-purpose computing architectures to optimizing specialized systems for artificial intelligence and mobile devices. The design and implementation of the memory controller are often tailored to meet specific performance, power, and cost targets dictated by its application environment [15].

Enabling Processor and Platform Evolution

The integration of the memory controller directly onto the CPU die, a trend pioneered by companies like AMD, was a pivotal architectural shift. This move was largely driven by the need to mitigate the performance bottleneck created by the increasing disparity between accelerating CPU clock speeds and the slower-growing bandwidth of external front-side buses connecting to northbridge-based memory controllers [24]. By placing the controller on-die, latency was dramatically reduced and bandwidth scalability was improved, which became essential for supporting multi-core processors and complex workloads. This integration, however, introduced a degree of platform inflexibility, as it often locked a CPU generation into supporting a specific memory technology, such as exclusively DDR2 or DDR3, until interfaces evolved to support multiple standards [24]. The design flexibility afforded by licensing models, such as those from ARM, allows system-on-chip (SoC) manufacturers to customize memory controller implementations extensively. This enables the creation of processors finely tuned for particular applications, whether the priority is maximum performance, minimal power consumption for battery-operated devices, or lowest cost for high-volume embedded systems [15]. This customization extends to the number of memory channels, support for different memory types (e.g., LPDDR, DDR), and the sophistication of power management states.

Optimizing High-Performance and Enterprise Computing

In enterprise servers and high-performance computing (HPC) clusters, the memory controller is a cornerstone of system capability. Modern server processors, such as AMD's EPYC series, feature multiple integrated memory controllers to drive a high number of memory channels. For instance, a single 4th Gen AMD EPYC processor can support 12 channels of DDR5 memory, with the controller managing the signaling and protocols to deliver aggregate bandwidth exceeding 450 GB/s [24]. This immense bandwidth is crucial for data-intensive applications in databases, scientific simulations, and virtualization. The controller's advanced scheduling algorithms and support for large memory capacities directly influence overall platform performance and scalability for these workloads [24].

The evolution of memory controllers has also been shaped by competitive pressures and the need for standardization. Historical competition in the microprocessor market, such as Intel being challenged by Zilog, drove rapid innovation and feature adoption across the industry, including in supporting peripheral and memory subsystems [10]. Furthermore, the establishment of standardized interfaces, managed in part by the memory controller's adherence to JEDEC specifications, ensures interoperability between memory modules from various vendors and system platforms, a necessity for a stable ecosystem [10].

Accelerating Artificial Intelligence and Graphics Workloads

The demands of artificial intelligence (AI) and machine learning have led to specialized applications for memory controllers, particularly within graphics processing units (GPUs). AI models utilize a wide array of neural network architectures, many of which are characterized by massive parameter counts and require rapid streaming of training or inference data [25]. GPU memory controllers are engineered to handle the exceptionally high bandwidth requirements of these workloads. Techniques like aggressive prefetching are employed to predict and fetch data needed by thousands of concurrent threads before the data is explicitly requested, thereby hiding memory latency and keeping computational units saturated [27]. As noted in research on boosting application performance, such prefetching logic adds complexity but is highly effective at reducing stalls and does not fundamentally alter the programming model [27]. These controllers manage specialized high-bandwidth memory technologies like GDDR6 and HBM (High Bandwidth Memory), which are optimized for throughput rather than latency. The controller's ability to manage multiple, very wide memory channels and service a tremendous number of concurrent memory requests is a primary factor in the performance of AI training systems, such as those built on the NVIDIA H100 GPU platform [25].
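
The need to service "a tremendous number of concurrent memory requests" follows from Little's law: the requests in flight must cover the bandwidth-latency product. The bandwidth, latency, and request-size figures below are round illustrative numbers, not measured values for any particular GPU:

```python
def inflight_requests(bandwidth_gbs, latency_ns, request_bytes=128):
    """Little's law: concurrency = bandwidth * latency / request size.
    GB/s * ns conveniently multiplies out to bytes in flight."""
    return bandwidth_gbs * latency_ns / request_bytes

# Sustaining an assumed 3000 GB/s at 400 ns memory latency with
# 128-byte requests requires thousands of requests in flight at once.
needed = inflight_requests(3000, 400)
```

This is why GPU memory controllers are built around deep request queues and massive thread-level parallelism rather than the low per-request latency that CPU controllers optimize for.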

Power Management in Mobile and Embedded Systems

In mobile phones, tablets, and other battery-powered embedded systems, the memory controller's role in power management is as critical as its role in performance. These controllers are designed to support low-power double data rate (LPDDR) memory standards. They implement sophisticated power states, allowing portions of the memory array and the interface itself to enter low-power modes during periods of inactivity. The transition between these states is managed dynamically by the controller based on memory traffic patterns [26]. Legal and technical agreements governing these advanced memory technologies, such as LPDDR5X, often include specific confidentiality and licensing terms that dictate how controller IP can be implemented, ensuring compliance with power and signaling specifications [26]. The controller's efficiency directly impacts device battery life. Furthermore, in these integrated environments, the memory controller's design is closely coupled with the application processor's needs, supporting features like "partial array self-refresh" where only the portion of DRAM containing needed data is kept active, thereby conserving power beyond the fundamental refresh operations required for all DRAM [23].

Ensuring Reliability and System Stability

Beyond performance and power, memory controllers incorporate features essential for data integrity and system reliability, particularly in mission-critical and enterprise environments. These include:

  • Error Correction Code (ECC) support: The controller calculates and checks ECC bits for data read from and written to memory, detecting and correcting single-bit errors and detecting multi-bit errors [24].
  • Command scheduling and arbitration: Advanced algorithms prioritize memory requests to minimize latency for critical operations while maintaining efficient utilization of the memory bus, a complex task with multi-channel architectures [24].
  • Signal integrity management: For high-speed interfaces like DDR5-4800, where the I/O operates at 2400 MHz, the controller incorporates circuitry to compensate for signal attenuation and timing skews on the physical traces between the CPU and memory modules [24].

The controller also manages the initialization and training of the memory interface at system boot, calibrating timing parameters to ensure stable operation across variable environmental conditions. This foundational role in maintaining the integrity of the data pathway between storage and processing units makes the memory controller an indispensable component in virtually all modern digital systems, from the smallest Internet of Things (IoT) sensors to the largest supercomputers [23][24].
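
Boot-time training amounts to sweeping a delay setting and centering it within the window of settings that sample data correctly (the "data eye"). In this sketch, `passes_at` stands in for actually sampling the bus at a given delay tap, and the window bounds are invented:

```python
def train_delay(delays, passes_at):
    """Pick the center of the widest contiguous window of passing delays."""
    runs, run = [], []
    for d in delays:
        if passes_at(d):
            run.append(d)                # extend the current passing window
        elif run:
            runs.append(run)             # window closed; record it
            run = []
    if run:
        runs.append(run)
    best = max(runs, key=len)            # widest data eye wins
    return best[len(best) // 2]          # operate at its center

# Toy eye: delay taps 12 through 27 sample the data correctly.
center = train_delay(range(64), lambda d: 12 <= d <= 27)   # -> 20
```

Centering in the widest window maximizes margin against voltage and temperature drift; real training repeats this per lane and per rank for both read and write paths.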

References

  1. Expressway To Your Skull - https://spectrum.ieee.org/expressway-to-your-skull
  2. [PDF] intel microarchitecture white paper - https://www.intel.com/content/dam/doc/white-paper/intel-microarchitecture-white-paper.pdf
  3. Memory Controller (MC) - 001 - ID:655258 - https://edc.intel.com/content/www/us/en/design/ipla/software-development-platforms/client/platforms/alder-lake-desktop/12th-generation-intel-core-processors-datasheet-volume-1-of-2/001/memory-controller-mc/
  4. Computing Platforms - https://www.sciencedirect.com/science/article/pii/B9780128053874000042
  5. Memory Controller - an overview - https://www.sciencedirect.com/topics/computer-science/memory-controller
  6. What is Memory Controller? - https://www.jotrin.com/technology/details/what-is-memory-controller
  7. Memory controller integration - https://forums.tomshardware.com/threads/memory-controller-integration.412285/
  8. Advantages of ARM architecture SOC array servers over traditional X86 architecture. - https://2nc.com/article/1.html
  9. A look at IBM S/360 core memory: In the 1960s, 128 kilobytes weighed 610 pounds - http://www.righto.com/2019/04/a-look-at-ibm-s360-core-memory-in-1960s.html
  10. Birth of a standard: The Intel 8086 Microprocessor - https://stevemorse.org/pcw2b.html
  11. [PDF] 9800722 03 The 8086 Family Users Manual Oct79 - http://bitsavers.org/components/intel/8086/9800722-03_The_8086_Family_Users_Manual_Oct79.pdf
  12. [PDF] VintageIntelMicrochipsRev4 - https://www.cpushack.com/wp-content/uploads/2018/06/VintageIntelMicrochipsRev4.pdf
  13. [PDF] 52259 KB G Series Product Data Sheet - https://www.amd.com/content/dam/amd/en/documents/archived-tech-docs/datasheets/52259_KB_G-Series_Product_Data_Sheet.pdf
  14. Memory controller - https://grokipedia.com/page/Memory_controller
  15. The IBM System/370 | IBM - https://www.ibm.com/history/system-370
  16. What is the Northbridge and What Does it Do? - https://www.lenovo.com/us/en/glossary/northbridge/
  17. SDRAM Memory Systems: Embedded Test & Measurement Challenges - https://www.tek.com/en/documents/primer/sdram-memory-systems-embedded-test-measurement-challenges
  18. Error checking and correcting for burst DRAM devices - https://patents.google.com/patent/US5740188A/en
  19. AMD’s Athlon 64: Getting the Basics Right - https://chipsandcheese.com/p/amds-athlon-64-getting-the-basics-right
  20. [PDF] HC20.26.630 - https://old.hotchips.org/wp-content/uploads/hc_archives/hc20/3_Tues/HC20.26.630.pdf
  21. [PDF] performance quickpath architecture paper - https://www.intel.com/content/dam/doc/white-paper/performance-quickpath-architecture-paper.pdf
  22. AMD "Zen" Core Architecture - https://www.amd.com/en/technologies/zen-core.html
  23. [PDF] 20170000291 - https://ntrs.nasa.gov/api/citations/20170000291/downloads/20170000291.pdf
  24. [PDF] 4th gen amd epyc processor architecture whitepaper - https://www.amd.com/content/dam/amd/en/documents/products/epyc/4th-gen-amd-epyc-processor-architecture-whitepaper.pdf
  25. NVIDIA H100 GPU - https://www.nvidia.com/en-us/data-center/h100/
  26. LPDDR5X - https://www.micron.com/products/memory/dram-components/lpddr5x
  27. Boosting Application Performance with GPU Memory Prefetching - https://developer.nvidia.com/blog/boosting-application-performance-with-gpu-memory-prefetching/
  28. DRAM Memory Types: FPO EDO BEDO . . - https://www.electronics-notes.com/articles/electronic_components/semiconductor-ic-memory/dynamic-ram-dram-edo-bedo-fpm.php