An Introduction to SRAM: History, Working Principles, Applications, and Limitations - 115DAB/WS2024 GitHub Wiki
Group 4: Henry Jiang, Gary Su, Yifu Li
Abstract
This article introduces the concept of Static Random Access Memory (SRAM) in an encyclopedic approach. The article contains a high-level introduction, the history of SRAM, the basic mechanism, and the application and limitations of current SRAM technologies.
Keywords
RAM, SRAM, Memory Array, Sense Amplifier, Transistor
Introduction
Random-access memory (RAM) is a type of data storage directly connected to the CPU, allowing for high-speed read and write operations. Compared to Sequential Access Memory, SRAM’s advantage lies in eliminating the need for sequential reading, providing uniform access time to all memory addresses. RAM is widely used in high-speed devices but is volatile, meaning it loses data when power is off [1].
Static RAM (SRAM) offers continuous data storage without periodic refreshing, making it suitable for high-bandwidth applications like registers and caches. SRAM cells, typically based on flip-flops built with CMOS transistors and controlled by the wordline and bitline, store data with low latency (usually around 50ns [2]) and reasonable energy efficiency. However, its structural complexity and high manufacturing cost compared to other memory mechanisms hinder its broad adoption as main memory [3].
History
The advent of MOS transistor technology in 1959 catalyzed the advancement of SRAM. Robert Norman developed the first semiconductor bipolar-transistor SRAM in 1963, preceding John Schmidt's 64-bit p-MOS transistor SRAM by a year [4]. IBM commercialized SRAM shortly thereafter with its SP95 memory chip. Improved semiconductor fabrication techniques after 1970 fostered progressively denser chips, such as the Intel 2147 SRAM introduced in 1976, with capacities eventually reaching megabits. By the 1980s, SRAM had become widely employed in processors, and research since then has focused on enhancing SRAM density, speed, reliability, and power efficiency [5][6].
Background of Memory Arrays
The memory array, a fundamental element within data storage systems, plays a pivotal role in storing and retrieving digital information. SRAM systems feature the most prevalent implementation of memory arrays.
In the context of a memory array with a dimension of 2^N ✕ M bits, consisting of 2^N rows and M columns, storing or retrieving data demands an input of N bits of address data. For example, a 2^3 ✕ 5 memory array will need 3 bits of address data to locate the targeted 5 bits of output data.
Fig.1 Memory Array Basic Structure
As shown in Fig 1, the system receives N bits of address input and produces M bits of target data as output. The address is directed to a decoder responsible for activating exactly one of the entries, known as the wordlines, in the memory array. Each wordline is associated with a specific row and activates that row's bit cells for data reading and editing purposes. The bitlines are introduced in the system to control reading and writing of data in the bit cells: they communicate with the individual bit cells aligned with the active wordline. In read mode, the bitlines extract data to the output, while in write mode, they edit the data within the bit cells [7].
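The addressing scheme above can be sketched in software. The following is a minimal behavioral model, not real hardware: `decode` plays the role of the row decoder (one-hot wordlines), and `read_row` returns the M bits selected by the active wordline. The function names and list-based memory are illustrative assumptions.

```python
def decode(address_bits):
    """One-hot decoder: turn an N-bit address into 2**N wordline signals.

    Exactly one wordline is driven to 1 for a given address.
    """
    n = len(address_bits)
    index = int("".join(str(b) for b in address_bits), 2)
    return [1 if row == index else 0 for row in range(2 ** n)]

def read_row(memory, address_bits):
    """Activate one wordline and return the M bits stored in that row."""
    wordlines = decode(address_bits)
    row = wordlines.index(1)
    return memory[row]

# A 2**3 x 5 array: 3 address bits select one of 8 rows of 5 bits each.
memory = [[0] * 5 for _ in range(8)]
memory[5] = [1, 0, 1, 1, 0]
print(read_row(memory, [1, 0, 1]))  # prints [1, 0, 1, 1, 0]
```

Note how N = 3 address bits suffice to select any of the 2^3 = 8 rows, while the row width M is independent of the address width.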
SRAM Basic Operation
Read Operation
Fig.2 SRAM Cell [8]
For the read operation, as shown in Fig 2, the BL (bitline) and BLB (bitline bar) are first precharged to a reference voltage, usually half of VDD. This establishes a stable reference for the changes detected on the bitlines after a cell is activated. To activate the cell of interest, its horizontal WL (wordline) must be brought to logic "1", which opens T1 and T2. The data bit stored at node Q then charges or discharges BL: BL is charged if Q is "1" and discharged if Q is "0". BLB behaves in a complementary way to BL, so a small voltage difference develops between the two bitlines. This difference is amplified by the Sense Amplifier (SA) in later stages.
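The read sequence can be summarized with a small behavioral sketch. This is a simplification under stated assumptions: VDD is normalized to 1.0 V, and the small bitline swing developed by the cell is modeled as a fixed offset rather than a transistor-level discharge.

```python
VDD = 1.0

def read_cell(q, precharge=VDD / 2, swing=0.1):
    """Model one SRAM read: precharge both bitlines, open the access
    transistors, and let the stored bit pull BL and BLB apart."""
    bl, blb = precharge, precharge      # both bitlines start at VDD/2
    if q == 1:                          # Q = "1": BL rises, BLB falls
        bl += swing
        blb -= swing
    else:                               # Q = "0": BL falls, BLB rises
        bl -= swing
        blb += swing
    return bl, blb

bl, blb = read_cell(1)
print(bl > blb)  # prints True: the differential encodes a stored "1"
```

The key point the model captures is that the cell never drives a full logic level onto the bitlines; it only creates a small differential, which the sense amplifier later resolves.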
Write Operation
For the SRAM write operation, similar to the read operation, the process begins by activating the wordline corresponding to a specific row of memory cells. In this stage, a pair of bit lines associated with the targeted memory cell is conditioned to the desired logic state, representing the data to be written.
For instance, to write a logic "1" into a memory cell, the true bit line (BL) is conditioned to a voltage level of VDD. This conditioning results in the desired logic state for the cell's storage nodes, influencing the state of Q to be a logic "1". At the same time, the complementary BL is conditioned to the opposite voltage level, such as VSS, to set Q into logic “0” [8]. The cross-coupled inverter ensures a proper latch of data, and the conditioned bit lines reinforce the retention of the logic state within the memory cell. After the write cycle, the wordline will be deactivated, and the bitlines are again precharged to the reference voltage in preparation for the next access cycle.
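The write sequence above can also be sketched behaviorally. This is an illustrative model, not the cell's analog behavior: the cross-coupled latch is reduced to "Q follows whichever bitline is driven high," and the dictionary cell representation is an assumption.

```python
VDD, VSS = 1.0, 0.0

def write_cell(cell, bit):
    """Drive the bitlines to the desired state while the wordline is high;
    the cross-coupled inverters then latch Q and its complement QB."""
    bl = VDD if bit == 1 else VSS       # true bitline carries the data
    blb = VSS if bit == 1 else VDD      # complement bitline is the opposite
    cell["Q"] = 1 if bl > blb else 0    # the latch resolves toward BL
    cell["QB"] = 1 - cell["Q"]          # QB is always the complement of Q
    return cell

cell = {"Q": 0, "QB": 1}
write_cell(cell, 1)
print(cell)  # prints {'Q': 1, 'QB': 0}
```

Driving both bitlines to full, opposite levels is what distinguishes a write from a read, where the bitlines are left near the precharge voltage and only nudged by the cell.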
Precharger
Fig.3 Precharge Circuit [9]
In SRAM, the precharge circuit restores the bitlines to a reference voltage before read or write operations. Figure 3 shows a simple precharge circuit with three PMOS transistors. The PMOS transistors on BL and BLB connect the bitlines to VCC, and an additional equalization PMOS between the two bitlines ensures the same voltage level across them. The signal PRE# is pulled low to logic "0" during precharge, which turns all three PMOS transistors on so that the bitlines charge to the desired reference level.
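The active-low control logic can be captured in a few lines. This is a logic-level sketch only; the equalization behavior is folded into returning the same VCC value for both bitlines, and the name `precharge` is an illustrative assumption.

```python
VCC = 1.0

def precharge(bl, blb, pre_n):
    """Active-low precharge: when PRE# = 0, all three PMOS transistors
    conduct, pulling both bitlines to VCC and equalizing them."""
    if pre_n == 0:            # PMOS turns on with a low gate voltage
        return VCC, VCC
    return bl, blb            # PRE# = 1: bitlines are left as they are

print(precharge(0.3, 0.7, 0))  # prints (1.0, 1.0)
```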
Sense Amplifier (S/A)
During the read cycle, the sense amplifier plays a role in detecting subtle voltage changes between two precharged bitlines and amplifying the difference to determine whether a cell is storing logic "1" or "0". If the two bitlines are not at the same voltage level, the sense amplifier may misread the input, resulting in inaccurate output. To avoid misreads, the sense amplifier should wait for a specific duration after the opening of T1 and T2. This delay allows the cell to fully discharge onto the bitlines, creating a sufficiently large voltage difference for the sense amplifier [8].
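The sensing constraint described above, that a minimum bitline swing must develop before the amplifier fires, can be modeled as follows. The threshold value and error handling are illustrative assumptions, not parameters of a real sense amplifier.

```python
def sense(bl, blb, min_swing=0.05):
    """Latch-style sense amplifier model: amplify the bitline difference
    to a full logic level once it exceeds a minimum usable swing."""
    diff = bl - blb
    if abs(diff) < min_swing:
        # Firing too early risks a misread; the controller should wait
        # longer after the wordline opens T1 and T2.
        raise ValueError("bitline swing too small to sense reliably")
    return 1 if diff > 0 else 0

print(sense(0.55, 0.45))  # prints 1: BL above BLB reads as logic "1"
```

In a real design this timing margin is set by a delayed sense-enable signal rather than a software check, but the trade-off is the same: waiting longer gives a larger, safer differential at the cost of read latency.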
Application
Since each SRAM cell requires more transistors than other memory mechanisms, SRAM has higher production costs and occupies more physical space per bit. Therefore, SRAM is predominantly applied where high bandwidth and moderate capacity are needed. The current major applications of SRAM are as follows:
1) Computers & Microcontrollers
SRAM finds essential application in speed-sensitive storage within modern CPUs, serving as registers and high-speed caches. In microprocessors, SRAM offers on-chip storage capacities ranging from 32 bytes to several megabytes. SRAM was once used as the main memory for early-stage computers.
2) Embedded Systems
SRAM is also widely used in embedded systems such as sensors, automobiles, and appliances for fast data-accessing, user interaction, and real-time data processing. It is also used as a buffer in networking devices.
3) Non-volatile Storage Devices
In devices requiring reliable data storage and fast access, SRAM is combined with non-volatile memory to achieve both goals simultaneously. Data in such devices is written to and read from SRAM at high speed, while the non-volatile memory provides slower but persistent storage.
Limitation
1) Integration Level
SRAM has a low integration level compared to other memory systems, meaning a limited ability to pack a large number of memory cells into a given area or chip size. Each bit cell consists of six transistors, which is more area-consuming than other storage structures (e.g., capacitor-based storage and floating-gate transistors). Besides, SRAM is designed for fast access times and low-latency operation, but this high performance comes at the cost of larger, power-hungry cells [10]. To maintain a stable state, SRAM cells require a continuous power supply. The design choices made to optimize performance contribute to larger cell sizes, resulting in a lower integration level.
2) Power Budget
SRAM cells often operate at higher voltage levels compared to other memory types. The increased voltage is necessary for reliable and stable operation at high performance, but it contributes to higher energy consumption. Moreover, the flip-flop architecture employed in the SRAM bit cell requires a continuous power supply to maintain the stored data. This constant power supply to each cell unit results in higher power consumption compared to other memory types [11].
3) Production
SRAM is usually more expensive than other types of memory systems because of its cell complexity and size, as mentioned. The manufacturing process for SRAM is also more intricate and demanding. This complexity increases the likelihood of defects during manufacturing, leading to a higher percentage of rejected chips and, consequently, higher costs [12].
Conclusion
SRAM, a double-edged sword in the realm of computer memory, is favored for its fast data access in high-performance computing applications but is less favorable in scenarios where size, power, and cost-effectiveness are critical factors. Its inherent advantages include fast read and write speeds, low-latency operation, and suitability for use as a high-speed cache. However, these advantages come at a cost: the 6T design leads to a larger integration area and higher manufacturing costs. Nevertheless, SRAM remains crucial in applications that prioritize speed and reliability.
References
[1] K. Mehra and T. Sharma, “Analysis of Sram Cell Design,” Proceedings of the Third International Conference on Advanced Informatics for Computing Research, Jun. 2019. doi:10.1145/3339311.3339338
[2] S. Bhattacharya et al., “Analysis of read speed latency in 6T‐SRAM cell using multi‐layered graphene nanoribbon and CU based nano‐interconnects for High Performance Memory Circuit Design,” ETRI Journal, vol. 45, no. 5, pp. 910–921, Nov. 2022. doi:10.4218/etrij.2022-0068
[3] W. Gul, M. Shams, and D. Al-Khalili, “SRAM cell design challenges in modern deep Sub-Micron Technologies: An overview,” Micromachines, vol. 13, no. 8, p. 1332, Aug. 2022. doi:10.3390/mi13081332
[4] M. Koyanagi, “Chapter 17 History of MOS Memory Evolution on DRAM and SRAM,” The Institute of Electrical and Electronics Engineers, 2023
[5] S. Khan and S. Hamdioui, “Trends and challenges of SRAM reliability in the nano-scale era,” 5th International Conference on Design & Technology of Integrated Systems in Nanoscale Era, Mar. 2010. doi:10.1109/dtis.2010.5487565
[6] E. Abbasian and M. Gholipour, “Design of a Schmitt-trigger-based 7t SRAM cell for variation resilient low-energy consumption and reliable internet of things applications,” AEU - International Journal of Electronics and Communications, vol. 138, p. 153899, Aug. 2021. doi:10.1016/j.aeue.2021.153899
[7] X. Xue et al., “Design and performance analysis of 32 × 32 memory array SRAM for low-power applications,” Electronics, vol. 12, no. 4, p. 834, Feb. 2023. doi:10.3390/electronics12040834
[8] H. Jiang, X. Peng, S. Huang, and S. Yu, “CIMAT: A compute-in-memory architecture for on-chip training based on transpose SRAM arrays,” IEEE Transactions on Computers, pp. 1–1, 2020. doi:10.1109/tc.2020.2980533
[9] “Precharging circuits in SRAM,” Electrical Engineering Stack Exchange. https://electronics.stackexchange.com/questions/116599/precharging-circuits-in-sram (accessed Feb. 16, 2024).
[10] E. Morifuji, D. Patil, M. Horowitz, and Y. Nishi, “Power optimization for SRAM and its scaling,” IEEE Transactions on Electron Devices, vol. 54, no. 4, pp. 715–722, Apr. 2007. doi:10.1109/ted.2007.891869
[11] C. M. R. Prabhu and A. K. Singh, “A proposed SRAM cell for low power consumption during write operation,” Microelectronics International, vol. 26, no. 1, pp. 37–42, Jan. 2009. doi:10.1108/13565360910923151
[12] N. Snehith, S. Kumar, and S. Rao, “Design and performance analysis of 256 bit SRAM using different SRAM cell in 45NM CMOS technology,” International Journal of Modern Trends in Engineering & Research, vol. 4, no. 3, pp. 216–222, Apr. 2017. doi:10.21884/ijmter.2017.4111.0pwpc