# History of Computing (SYSC3020-Winter2016/SYSC3020LectureNotes GitHub Wiki)
Ancient (1960s and earlier) computers filled entire rooms, and they were essentially all different. This means that programs had to be written specifically for each computer; they could not be ported from one computer to another.
Programs were written as a sequence of binary codes which encoded the electric signals (+5V or 0) to be sent, in sequence, to the different inputs. These codes encoded arithmetic operations, storing values in registers, etc. Programs were fed into the computer using punch cards.
- Main problems: debugging, porting programs from one computer to another, and reusing programs (writing each program was very labor-intensive!)
- Sound familiar? Nowadays we still have trouble debugging and porting programs between operating systems and environments, and we would like to save money by reusing code better.
Since the 1950s there has been huge progress in the way we write code, first with the use of assembly language, then with the introduction of high-level languages.
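To get a feel for the gap these languages closed, here is a small sketch (not from the notes, just an illustration in Python) contrasting a one-line high-level operation with the explicit step-by-step work an early programmer had to encode by hand in binary:

```python
# High level: one line expresses the whole computation.
values = [3, 1, 4, 1, 5]
total = sum(values)

# On an early machine, the programmer encoded each step as binary codes,
# roughly equivalent to the explicit loop below: fetch a value, add it to
# an accumulator register, advance, test and branch, store the result.
acc = 0
for v in values:   # each iteration = several hand-written machine instructions
    acc += v

print(total, acc)  # both compute 14
```

The high-level version is also portable: any machine with a Python interpreter runs it unchanged, whereas the hand-encoded version was tied to one machine's instruction set.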
What are the next steps?
- Even higher-level languages! JavaScript dialects (e.g., CoffeeScript) compile down to JavaScript; Python can be compiled to C or to JVM-compatible bytecode.
- Generate code using frameworks or UML tools: describe your program's functionality directly with a UML diagram and generate the code from it.
- Use containers and virtualization to solve portability issues. Similar idea to the JVM: Docker can run arbitrary code in a container (the code does not need to be written specifically for Docker).
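The bytecode idea from the first point can be seen in standard Python itself: the interpreter compiles source code to bytecode for its virtual machine, and the standard `dis` module lets us inspect it. A minimal sketch:

```python
import dis

def add(a, b):
    return a + b

# CPython has already compiled this function to bytecode; the raw
# bytecode is stored in the function's code object as bytes.
print(type(add.__code__.co_code))

# dis renders the bytecode as human-readable virtual-machine instructions
# (the exact opcodes vary between Python versions).
dis.dis(add)
```

Because the virtual machine, not the hardware, defines these instructions, the same compiled program can run on any platform with a compatible interpreter, which is exactly the portability trick the JVM and, in a looser sense, containers exploit.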