Moore's Law and its Future - 115DAB/WS2024 GitHub Wiki
Group 7: Haoyu Zhang, William Shih, Trevor Cai
The creation of the transistor at Bell Labs in 1947 sparked a technological revolution, offering a compact alternative to vacuum tubes. Since that groundbreaking moment, the evolution of transistors has been propelled by relentless research and experimentation. This constant pursuit of smaller dimensions and better performance eventually led to the formulation of what we know as Moore's Law: the prediction that transistor density doubles roughly every two years, which has served as both a historical record and a predictive model of increasing computer performance, as shown in Figure 1 below.
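The doubling rule can be written as a simple exponential, density(t) ≈ density(0) · 2^(t/2) with t in years. A minimal sketch of this projection (the 2,300-transistor starting point is the Intel 4004, used purely as an illustrative baseline):

```python
def projected_density(base_density: float, years: float) -> float:
    """Transistor count projected by Moore's Law:
    a doubling every two years."""
    return base_density * 2 ** (years / 2)

# Illustrative baseline: ~2,300 transistors (Intel 4004, 1971).
base = 2300
for years in (0, 2, 10, 20):
    print(f"after {years:2d} years: {projected_density(base, years):,.0f}")
```

Twenty years of doublings multiply the count by 2^10 = 1024, which is why the curve in Figure 1 is plotted on a logarithmic axis.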
Figure 1: Performance Gains since the baseline of when Moore's Law began
With each increase in transistor density, we have historically observed corresponding increases in thread performance, clock frequency, power, and number of cores.
Intensive research is ongoing across various aspects of transistor technology, including materials, innovative designs, and new techniques, all aimed at pushing transistors to be smaller and more efficient.
EUV lithography has revolutionized transistor manufacturing by enabling the creation of significantly smaller and more densely packed transistors than was previously possible with standard photolithography techniques. The precise control offered by EUV lithography allows for the fabrication of intricate transistor designs with features on the nanometer scale. Additionally, EUV lithography has streamlined the semiconductor manufacturing process by reducing the need for complex multi-patterning techniques required by traditional optical lithography methods.
One pivotal advancement in the relentless pursuit of reducing transistor dimensions has been the integration of strain engineering. In the early 2000s, researchers began exploring the application of mechanical stress to semiconductor materials to enhance carrier mobility, improving the overall performance of transistors. By inducing strain in the crystal lattice of the transistor's channel, electrons experienced less resistance, leading to increased speed and efficiency [3].
Another breakthrough came with the adoption of high-k dielectrics. In the mid-2000s, the limitations of silicon dioxide became evident, leading researchers to explore alternative materials. High-k dielectrics, such as hafnium-based compounds, offered improved insulation properties, effectively reducing power leakage and enabling the continuation of Moore's Law [4].
The emergence of Fin Field-Effect Transistor (FinFET) technology marked a significant shift in transistor design. In the early 2010s, FinFETs introduced a three-dimensional structure that allowed for better control of the transistor channel and reduced leakage currents. This design innovation significantly improved the electrostatic control of the transistor, enabling further size reduction. More recently, Gate-All-Around (GAA) transistors, which surround the channel entirely with gate material, offer even more precise control over the flow of current, yielding superior electrostatics and reduced leakage relative to FinFETs.
Researchers have been actively exploring a range of new materials to enhance the performance and miniaturization of transistors beyond traditional silicon. Gallium Nitride (GaN), a wide-bandgap semiconductor characterized by its high electron mobility and operational capabilities at elevated temperatures and frequencies, shows promise for power electronics and high-frequency devices. Indium Gallium Arsenide (InGaAs), a III-V compound semiconductor with very high electron mobility, offers high-speed and low-power operation suitable for communication systems and infrared detectors. Transition Metal Dichalcogenides (TMDs) like molybdenum disulfide (MoS2) and tungsten diselenide (WSe2) offer tunable bandgaps and high carrier mobility for novel transistor designs. Additionally, Silicon Germanium (SiGe) alloys integrated into silicon-based processes enhance transistor performance, particularly in speed and power efficiency [5].
The future trajectory of Moore’s Law is no longer focused on miniaturizing transistors but rather on the integration of specialized hardware. This new specialized hardware in turn calls for complex compilers that optimize code and assign tasks to the appropriate units, ensuring seamless integration for the user.
Figure 2: Hardware Specialization Trajectory
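One way to picture how such a compiler or runtime might assign work is a dispatch table mapping task types to specialized back ends. This is a hypothetical toy sketch, not a real toolchain; the task names and "back ends" below are invented for illustration, with plain Python functions standing in for accelerators:

```python
# Toy scheduler: route each task to the "hardware" best suited to it.
# The back ends are stand-ins for real accelerators (GPU, DSP, ...).

def run_on_gpu(task: str) -> str:
    return f"{task} -> GPU (dense parallel arithmetic)"

def run_on_dsp(task: str) -> str:
    return f"{task} -> DSP (streaming signal processing)"

def run_on_cpu(task: str) -> str:
    return f"{task} -> CPU (general-purpose fallback)"

# The "compiler's" knowledge of which unit suits which workload.
DISPATCH = {
    "matmul": run_on_gpu,
    "fir_filter": run_on_dsp,
}

def schedule(task: str) -> str:
    # Unrecognized tasks fall back to the general-purpose core.
    return DISPATCH.get(task, run_on_cpu)(task)

print(schedule("matmul"))
print(schedule("parse_json"))
```

A real heterogeneous compiler makes this decision from cost models and hardware descriptions rather than a hand-written table, but the shape of the problem — matching workload character to unit — is the same.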
Processing speeds have increased dramatically faster than data-transfer speeds, resulting in large latencies whenever data must traverse the memory hierarchy. With advancements in artificial intelligence, new algorithms are emerging that involve far less data movement and much higher arithmetic density. These take advantage of fast processing speeds while limiting the latency we typically experience when pulling data from memory or disk [1]. Despite fast processors, research continues on how to improve our hardware for even greater performance. Software bloat has also been growing, as more and more users program in high-level languages such as Python, resulting in large inefficiencies compared to lower-level languages such as C and Java [2]. The gap in computation time shown in Table 1 is exacerbated when such high-level languages are used for computation-heavy applications such as machine learning.
Table 1: Computational Time for multiplying two 4096x4096 matrices
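The effect behind Table 1 (from [1]) can be reproduced at small scale: a pure-Python triple loop versus a vectorized NumPy call computing the same product. The matrix size is shrunk here so the naive version finishes quickly; at 4096×4096 the pure-Python version would run for hours:

```python
import time
import numpy as np

def naive_matmul(A, B):
    """Textbook triple-loop multiply over Python lists of lists."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

n = 128  # small enough for the interpreted loop to finish promptly
A = np.random.rand(n, n)
B = np.random.rand(n, n)

t0 = time.perf_counter()
C_naive = naive_matmul(A.tolist(), B.tolist())
t1 = time.perf_counter()
C_np = A @ B  # dispatches to an optimized, compiled BLAS kernel
t2 = time.perf_counter()

print(f"pure Python: {t1 - t0:.3f}s, NumPy: {t2 - t1:.5f}s")
```

The exact speedup depends on the machine and BLAS library, but the interpreted loop is typically orders of magnitude slower, which is the "bloat" the table quantifies.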
Additional research has also gone into hardware streamlining to push parallelism and locality, as shown in Figure 2. The two dominant strategies being pursued are processor simplification and domain specialization, ultimately leading to modularity. Domain specialization is critical for speeding up common tasks that would otherwise take a long time on general-purpose processors; by accelerating these specialized tasks, we improve the overall efficiency of the system [2]. Ultimately, by maximizing processing power and locality, we can significantly improve the performance of our devices.
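The locality half of that argument can be demonstrated even from Python: traversing a two-dimensional array along its memory layout (row by row, for NumPy's default ordering) touches consecutive addresses, while column-wise traversal strides across memory and wastes cache. A small sketch of the effect (timings vary by machine, so none are asserted):

```python
import time
import numpy as np

n = 2000
M = np.random.rand(n, n)  # row-major (C-order) by default

# Row-wise traversal: each slice is a contiguous block of memory.
t0 = time.perf_counter()
row_total = sum(M[i, :].sum() for i in range(n))
t1 = time.perf_counter()

# Column-wise traversal: each slice strides across the whole array.
col_total = sum(M[:, j].sum() for j in range(n))
t2 = time.perf_counter()

print(f"row-wise: {t1 - t0:.4f}s, column-wise: {t2 - t1:.4f}s")
```

Both traversals compute the same grand total; only the order in which memory is touched differs, and that order alone changes the running time.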
Another area of improvement is energy consumption. As transistors get progressively smaller, their energy consumption decreases, but resistive losses in the wiring have remained relatively constant. This lack of power improvement means we must search for new means of data transfer, namely photonics [1]. Research has shown that optical interconnects are significantly more energy efficient than traditional wiring and provide five times more bandwidth; this bandwidth density is also well matched to the pin density of copper-pillar or solder-microbump packaging, allowing for relatively easy integration into current systems [2]. However, serious adoption of photonics will shift the focus to other limiting factors that bottleneck its advantages. Wider integration of the technology will also require more than a one-to-one replacement of existing links and switches; it demands a system-level view of how to redesign and integrate. Other research into inventing a new “transistor” is being conducted as well, now focused on architectural-level impact rather than simply physical characteristics [2]. Yet the amount of system-level change and adaptation needed before these technologies can be applied is precisely what makes competing with and replacing CMOS so difficult.
As we push for more performance out of our devices, it is clear that Moore’s Law is no longer the golden rule it was in the past. With much research being done to push the state of the art in both hardware and software, our desire to squeeze out efficiency and increase computational power forces our creativity to expand hardware and computer architecture beyond what we know today. We don’t know what will take us over the wall Moore’s Law now presents: maybe it will be the development of a new transistor, maybe the integration of light, maybe improving how hardware runs our software, maybe all of the above. What we do know is that our thinking must go beyond the shrinking of transistors, and the brightest minds in the field are doing just that.
[1] Charles E. Leiserson et al., “There’s plenty of room at the Top: What will drive computer performance after Moore’s law?”. Science 368, eaam9744 (2020). doi: 10.1126/science.aam9744
[2] J. Shalf, “The future of computing beyond Moore’s Law,” Phil. Trans. R. Soc. A, vol. 378, 20190061, 2020. doi: 10.1098/rsta.2019.0061
[3] S. Datta et al., "High mobility Si/SiGe strained channel MOS transistors with HfO2/TiN gate stack," IEEE International Electron Devices Meeting 2003, Washington, DC, USA, 2003, pp. 28.1.1-28.1.4, doi: 10.1109/IEDM.2003.1269365.
[4] R. Chau, S. Datta, M. Doczy, B. Doyle, J. Kavalieros and M. Metz, "High-κ/metal-gate stack and its MOSFET characteristics," IEEE Electron Device Letters, vol. 25, no. 6, pp. 408-410, June 2004, doi: 10.1109/LED.2004.828570.
[5] S. Sladic, M. De Santis, E. Zivic and W. Giemacki, "Paradigm Changes in Power Electronics Caused by Emerging Materials," 2022 International Congress on Advanced Materials Sciences and Engineering (AMSE), Opatija, Croatia, 2022, pp. 1-4, doi: 10.1109/AMSE51862.2022.10036673.