History of Speed and Memory in Computing Systems

Introduction

This paper outlines the significant improvements in computer memory and speed from their origins to the present. It traces the path from the early, bulky computers that were limited in both memory and storage to today's advanced devices that process vast amounts of information at high speed. The shift from large, mechanical parts to compact, electronic microchips has been pivotal in this journey. These developments have made computers not only faster and more reliable but also more accessible, changing how we use technology every day.

Early Beginnings

(Figure: the ENIAC)

Computer architecture took its first significant steps in the 1940s with machines like the ENIAC (pictured above), which were large and not especially fast: the ENIAC took up about 1,500 square feet and contained some 70,000 resistors. In terms of speed, it could perform about 5,000 additions per second, a very impressive figure for its time. The ENIAC itself stored data in vacuum-tube accumulators, while its immediate successors, such as the EDVAC, relied on "delay line memory", a complex system in which tubes filled with mercury carried sound waves that represented the data. The most significant downside of these early memories was that they were completely volatile: if the storage lost power, all data was lost. Such systems seem clunky and slow today, but the ENIAC and its contemporaries paved the way for the exponential technological growth that soon followed.

Intermediate Evolutions

In 1958, Jack Kilby created the first integrated circuit, a revolutionary invention that allowed for the miniaturization of electronic circuits. Before integrated circuits, components like transistors, resistors, and capacitors were individually soldered onto circuit boards, which took up a lot of space and often failed under heat and stress. Integrated circuits changed the game by packing these components onto a single small chip of semiconductor material, usually silicon. This drastically reduced the size and cost of electronic devices while also increasing their reliability and performance. Compare the size of one of the ENIAC's giant vacuum tubes to an integrated circuit: it is clear that signals can travel orders of magnitude faster across the tiny distances inside the chip.

(Figures: an integrated circuit from 1965; an ENIAC vacuum tube)

Furthermore, because so much circuitry could be packed into such a small area, computers could keep far more memory immediately available for use.

(Figure: growth in transistor counts per chip over time)

In 1965, Gordon Moore, who later co-founded Intel, made an astute observation: the number of transistors on an integrated circuit roughly doubled every two years. He predicted that this exponential improvement would continue well into the future. As the figure above shows, Moore's prediction held remarkably well for decades.
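To make the pace of that prediction concrete, the short sketch below projects transistor counts under an idealized two-year doubling period. The 1971 starting point of roughly 2,300 transistors (the order of magnitude of the earliest microprocessors) is an illustrative assumption, not a figure taken from the sources above.

```python
# Minimal illustration of Moore's Law as an idealized exponential.
# The base year and base count below are illustrative assumptions.

def projected_transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Project a transistor count for a given year, assuming a doubling every two years."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Even with such a crude model, the projected counts climb from thousands to tens of billions within fifty years, which is the kind of growth the figure above depicts.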

Exponential Advancements

Before the integrated circuit, memory was built from magnetic core technology, which was slow, bulky, and had to be assembled by hand. The acceleration of integrated circuit technology allowed for the introduction of semiconductor memory, a much faster and more reliable alternative. Two notable types of semiconductor memory are Static Random Access Memory (SRAM) and Dynamic Random Access Memory (DRAM), both of which remain the standard forms of memory in computers today. Storage capacity increased exponentially, from kilobytes to gigabytes.

Integrated circuits drove a similar advancement in computer speed. Integrating transistors onto silicon chips meant that electrical signals had far shorter distances to travel. Consequently, the time it took a computer to process data dropped substantially, and memory access times fell from microseconds to nanoseconds.
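A rough back-of-the-envelope calculation shows why distance matters so much. The sketch below assumes a signal speed of about two-thirds the speed of light and compares room-scale wiring with an on-chip distance; the specific numbers are illustrative assumptions, not measurements of any particular machine.

```python
# Back-of-the-envelope propagation delay: why shorter wires mean faster signals.
# The assumed signal speed (~2/3 the speed of light) and the distances are illustrative.

SIGNAL_SPEED_M_PER_S = 2e8  # roughly two-thirds the speed of light in a wire

def propagation_delay_ns(distance_m):
    """Time for a signal to traverse a path of the given length, in nanoseconds."""
    return distance_m / SIGNAL_SPEED_M_PER_S * 1e9

print(f"Room-scale wiring (10 m): {propagation_delay_ns(10):.2f} ns per traversal")
print(f"On-chip distance (1 cm):  {propagation_delay_ns(0.01):.4f} ns per traversal")
```

A ten-metre path costs tens of nanoseconds per traversal, while a centimetre on a chip costs a small fraction of a nanosecond, consistent with the drop from microsecond- to nanosecond-scale access times described above.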

Modern Times

At this point, Moore's Law has started to hit a wall. Making transistors even smaller runs into fundamental physical and economic limits: shrinking the already minuscule transistors requires extremely advanced fabrication methods, and power density becomes so high that the resulting heat interferes with the operation and reliability of the transistors. Advancements in computer architecture are therefore now often driven by specialized technologies designed to perform specific tasks with exceptional speed.

One example is the evolution of GPUs into parallel processing powerhouses. Originally designed for rendering graphics, GPUs evolved to accelerate a wide range of computing tasks, from scientific simulations to deep learning. With the surge in artificial intelligence and machine learning workloads, the computational prowess of GPUs became essential. General-purpose computing on GPUs (GPGPU) soon emerged, allowing systems to perform many kinds of complex calculations far more quickly than traditional CPUs.
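As a loose illustration of the data-parallel style of work GPUs excel at, the sketch below applies one arithmetic operation to millions of elements at once instead of looping over them one by one. It uses NumPy on the CPU purely as an analogy for the programming model; it is not actual GPU code, and the array size is an arbitrary choice.

```python
# A loose analogy for GPU-style data parallelism: one operation applied to many
# elements at once (vectorized) versus one element at a time (a Python loop).
# This runs on the CPU with NumPy and only illustrates the programming model.
import time

import numpy as np

data = np.random.rand(10_000_000)

start = time.perf_counter()
looped = [x * 2.0 + 1.0 for x in data]   # element by element, like a serial CPU loop
loop_time = time.perf_counter() - start

start = time.perf_counter()
vectorized = data * 2.0 + 1.0            # whole array in one data-parallel operation
vector_time = time.perf_counter() - start

print(f"Element-by-element loop: {loop_time:.2f} s")
print(f"Data-parallel (vectorized) form: {vector_time:.3f} s")
```

The same idea, spread across thousands of GPU cores rather than a single vectorized CPU call, is what makes GPUs so effective for simulations and deep learning workloads.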

Conclusion

This paper has traced the journey of computer memory and processing speed from their early stages to the advanced systems used today. The transition from early, bulky computers to streamlined devices was driven by the shift from large mechanical components to tiny microchips, with a particular focus on the integrated circuit's impact in the 1960s.

These advancements have not only enhanced the speed and reliability of computers but have also broadened their accessibility. While Moore's Law faces challenges, the rise of specialized technologies, such as GPUs and AI optimizations, continues to propel advancements in computer architecture.

References

https://www.hp.com/ca-en/shop/offer.aspx?p=computer-history-all-about-the-eniac

https://www.techtarget.com/whatis/definition/integrated-circuit-IC

https://www.investopedia.com/terms/m/mooreslaw.asp

https://www.techtarget.com/searchvirtualdesktop/definition/GPU-graphics-processing-unit

"Computer Organization and Design" by Pattersson and Hennessy.