Evolution and Performance
First Generation: Vacuum Tubes
The first general-purpose electronic digital computer was ENIAC, which was built from vacuum tubes. (Colossus existed before ENIAC, but it could not be used as a general-purpose machine.) It is hard to imagine now, but in that era before the PC appeared, people thought of the computer's purpose as encryption/decryption and complicated numerical calculation.
The Von Neumann Machine
We use PCs, laptops, and other devices (smartphones, tablets, ...) easily and frequently. All of these devices follow the Von Neumann machine's concept, the stored-program concept. The diagram below shows the stored-program concept. (It is labeled IAS, because the first Von Neumann machine was named IAS.) A minimal code sketch of the idea follows the figure.
Ref: Computer Organization and Architecture 5th edition,
William Stallings, P.40
[Figure: structure of the IAS machine (stored-program concept)]
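To make the stored-program concept concrete, here is a minimal sketch in Python (a made-up mini instruction set, not the real IAS): instructions and data sit in the same memory, and the CPU simply fetches, decodes, and executes them in a loop.

# Minimal stored-program machine sketch (hypothetical mini ISA, not the IAS).
# Instructions and data share one memory; the CPU fetches and executes in a loop.
memory = [
    ("LOAD", 5),     # acc = memory[5]
    ("ADD", 6),      # acc = acc + memory[6]
    ("STORE", 7),    # memory[7] = acc
    ("HALT", None),
    None,            # unused cell
    10,              # data at address 5
    32,              # data at address 6
    0,               # result will be written to address 7
]

pc, acc = 0, 0                  # program counter and accumulator
while True:
    op, addr = memory[pc]       # fetch and decode
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[7])                # 42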
Second Generation: Transistors
After the transistor was developed, vacuum tubes were replaced by transistors, because the transistor is more powerful and efficient than the vacuum tube.
Example: IBM 7094
Ref: Computer Organization and Architecture 5th edition,
William Stallings, P.49
[Figure: IBM 7094 configuration]
This machine has a CPU, I/O devices, memory, and, notably, a multiplexor (MUX). The MUX does the job that a system bus does in later machines, acting as the central connection point between the CPU, memory, and I/O channels. Apart from the MUX, the architecture is already similar to that of recent computers.
Third Generation: Integrated Circuits (IC)
Later Generations
Table comparing the performance of processors.
I think the earlier processors in this table are not that useful, so I will skip them.
Ref: Computer Organization and Architecture 5th edition,
William Stallings, P.59
Let's check the feature size. Feature size means the minimum line width on the chip; the smaller this value, the better every aspect of processor performance becomes (clock speed, energy efficiency, integration density, etc.).
Designing for performance
On this topic, I will check one thing: the techniques built into modern processors to increase microprocessor speed (see the sketch after the list below).
- Pipelining
- Branch prediction
- Data flow analysis
- Speculative execution
Ref: Computer Organization and Architecture 5th edition, William Stallings
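As a rough illustration of why pipelining helps (my own toy timing model, not a figure from the book): with an ideal k-stage pipeline and no stalls from branches or hazards, n instructions finish in about k + (n - 1) cycles instead of n * k cycles.

# Toy timing model for instruction pipelining (idealized: no hazards or stalls).
def cycles_unpipelined(n_instructions, k_stages):
    # each instruction runs through all k stages before the next one starts
    return n_instructions * k_stages

def cycles_pipelined(n_instructions, k_stages):
    # the first instruction takes k cycles; after that, one finishes per cycle
    return k_stages + (n_instructions - 1)

n, k = 1000, 5
print(cycles_unpipelined(n, k))                           # 5000 cycles
print(cycles_pipelined(n, k))                             # 1004 cycles
print(cycles_unpipelined(n, k) / cycles_pipelined(n, k))  # speedup approaches k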
Multicore
Almost every recently produced processor is multicore. Multicore means placing multiple processors (cores) on the same chip. (A small sketch of spreading work across cores follows.)
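As a generic illustration of why multicore helps (my own example, not tied to any particular chip): independent, CPU-bound tasks can be spread across worker processes so that each core runs one at a time.

# Spread independent CPU-bound work across cores with a process pool.
from multiprocessing import Pool, cpu_count

def work(n):
    # stand-in for a CPU-bound task
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    items = [200_000] * 8
    with Pool(processes=cpu_count()) as pool:   # roughly one worker per core
        results = pool.map(work, items)         # tasks run in parallel across cores
    print(len(results), "tasks finished")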
The evolution of the Intel x86 architecture
Let's check some terms and skip this chapter.
- CISC: Complex Instruction Set Computer
- RISC: Reduced Instruction Set Computer
Embedded systems and the ARM
Definition
Embedded system: A combination of computer hardware and software, and perhaps additional mechanical or other parts, designed to perform a dedicated function. In many cases, embedded systems are part of a larger system or product, as in the case of an anti-lock braking system in a car.
ARM: Acorn RISC Machine. A family of RISC-based microprocessors and microcontrollers designed by ARM Inc.
Performance Assessment
For this topic I will use an example.
Clock frequency: f = 5 MHz
Cycle time: τ = 1/f = 1/(5 MHz) = 0.2 μs = 200 ns
With MIPS rate = f / (CPI × 10^6), each instruction type gives (a small calculation sketch follows the terms below):
- I1: CPI = 4 cycles → 1.25 MIPS
- I2: CPI = 6 cycles → 0.833 MIPS
- I3: CPI = 3 cycles → 1.667 MIPS
- I4: CPI = 7 cycles → 0.714 MIPS
Terms
- CPI: cycles per instruction
- MIPS: millions of instructions per second
- I: instruction
- f: clock frequency
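A small calculation sketch that just reproduces the arithmetic above, assuming MIPS rate = f / (CPI × 10^6):

# MIPS rate per instruction type: MIPS = f / (CPI * 10**6)
f = 5_000_000  # clock frequency: 5 MHz
tau = 1 / f    # cycle time: 0.2 us = 200 ns

cpi = {"I1": 4, "I2": 6, "I3": 3, "I4": 7}  # cycles per instruction
for name, cycles in cpi.items():
    mips = f / (cycles * 1e6)
    print(f"{name}: CPI = {cycles}, {mips:.3f} MIPS")
# I1: 1.250 MIPS, I2: 0.833 MIPS, I3: 1.667 MIPS, I4: 0.714 MIPS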
Summary of Laws
- Moore's law (1965~)
  - The number of transistors that can be put on a single chip doubles roughly every 18 months.
  - The cost of a chip has remained virtually unchanged during this period of rapid growth in density.
- Hwang's law (2002~2008)
  - The number of transistors (memory density) doubles every year.
- Amdahl's law (see the sketch after this list)
  - Consider a program running on a single processor such that a fraction (1 - f) of the execution time involves code that is inherently serial and a fraction f involves code that is infinitely parallelizable with no scheduling overhead. The speedup when using N processors is then 1 / ((1 - f) + f/N).
- Little's law
  - The general setup is that we have a steady-state system to which items arrive at an average rate of λ (lambda) items per unit time. The items stay in the system an average of W units of time, and there is an average of L items in the system at any one time. Little's law states that L = λW.
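A small numeric sketch of the last two laws (made-up example numbers, using Speedup = 1 / ((1 - f) + f/N) and L = λW):

# Amdahl's law: speedup of a program whose fraction f is perfectly parallelizable
def amdahl_speedup(f, n_processors):
    return 1 / ((1 - f) + f / n_processors)

print(amdahl_speedup(0.9, 8))     # ~4.7x: the 10% serial part limits the speedup
print(amdahl_speedup(0.9, 1000))  # ~9.9x: speedup can never exceed 1 / (1 - f) = 10x

# Little's law: average items in the system L = arrival rate (lambda) * time in system W
arrival_rate = 50      # e.g. 50 requests arrive per second (made-up number)
time_in_system = 0.2   # each request stays 0.2 seconds on average (made-up number)
print(arrival_rate * time_in_system)   # L = 10.0 requests in the system on average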