Chapter 1
People and machines speak different languages, but we have to bridge the gap between them in order to program anything. We will call machine language L0 and people language L1. To give an idea of what these languages look like: machine language consists of long rows of numbers, while people language contains words and abbreviations. When you program in L1, you eventually want the machine to run your program, so the machine must either translate or interpret your L1 code. Translation is the process of turning L1 instructions into their L0 equivalents; the machine then runs the L0 code instead of the L1 code. Interpretation is the other method: you write a program in L0 that takes an L1 program as input and carries out the equivalent L0 work. The difference between them is that in translation the whole L1 program is first converted into an L0 program, and that L0 program is in control of the computer. In interpretation we examine each L1 instruction and execute the equivalent L0 instructions directly; the interpreter is in control, and the L1 program is just data. Hybrid versions also exist. One condition is that L1 cannot be too different from L0 for either approach to work well. But L1 is still far from human friendly. To fix that we can create another language, which we call L2, that is more human friendly and less machine oriented. L2 is translated or interpreted into L1, and L1 in turn into L0. This can go on until a sufficiently human-friendly programming language has been developed.
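As a small illustration, here is a toy sketch in Java of the two methods for a made-up L1 language with only two instructions; the instruction names and the numeric L0 opcodes are invented for this example.

    import java.util.ArrayList;
    import java.util.List;

    // Toy L1 language: "INC" adds 1 to a counter, "PRINT" prints the counter.
    public class TranslateVsInterpret {
        // Translation: convert the whole L1 program into an equivalent L0 program
        // (numeric opcodes) first; afterwards only the L0 program runs.
        static List<Integer> translate(List<String> l1Program) {
            List<Integer> l0Program = new ArrayList<>();
            for (String instr : l1Program) {
                l0Program.add(instr.equals("INC") ? 1 : 2);   // 1 = INC, 2 = PRINT
            }
            return l0Program;
        }

        static void runL0(List<Integer> l0Program) {
            int counter = 0;
            for (int opcode : l0Program) {
                if (opcode == 1) counter++;
                else System.out.println(counter);
            }
        }

        // Interpretation: examine each L1 instruction and carry out the equivalent
        // work directly; the interpreter stays in control, the L1 program is just data.
        static void interpret(List<String> l1Program) {
            int counter = 0;
            for (String instr : l1Program) {
                if (instr.equals("INC")) counter++;
                else System.out.println(counter);
            }
        }

        public static void main(String[] args) {
            List<String> program = List.of("INC", "INC", "PRINT");
            runL0(translate(program));   // translation route: prints 2
            interpret(program);          // interpretation route: prints 2
        }
    }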
We have to note that below L0 there is another level, known as the device level. This is the level
where individual transistors reside. It falls into the realm of electrical engineering and thus is not
covered in this summary.
The digital logic level is where we find objects such as gates. A gate takes one or more inputs (each 0 or 1) and produces an output. Gates can be combined to build a 1-bit memory. Such 1-bit memories can be grouped into units of 16, 32 or 64 bits to form a register. A register can hold a single binary
number up to some maximum.
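As an illustration, here is a minimal simulation in Java of a 1-bit memory built from two NOR gates (an SR latch). The simulation style is an assumption made for this sketch; real hardware is of course not described this way.

    // A 1-bit memory (SR latch) built from two cross-coupled NOR gates.
    public class SrLatch {
        private boolean q = false;      // the stored bit
        private boolean qBar = true;    // complement output

        private static boolean nor(boolean a, boolean b) {
            return !(a || b);
        }

        // Apply the set (S) and reset (R) inputs and let the gates settle.
        public void step(boolean s, boolean r) {
            for (int i = 0; i < 4; i++) {           // a few iterations are enough to stabilise
                boolean newQ = nor(r, qBar);        // Q is the NOR of R and the other output
                boolean newQBar = nor(s, q);        // Q-bar is the NOR of S and Q
                q = newQ;
                qBar = newQBar;
            }
        }

        public boolean read() { return q; }

        public static void main(String[] args) {
            SrLatch bit = new SrLatch();
            bit.step(true, false);                  // S=1: store a 1
            System.out.println(bit.read());         // true
            bit.step(false, false);                 // S=R=0: the latch remembers its value
            System.out.println(bit.read());         // true
            bit.step(false, true);                  // R=1: store a 0
            System.out.println(bit.read());         // false
        }
    }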
The microarchitecture level is where we find a local memory of 8 to 32 registers and a circuit known as the ALU (Arithmetic Logic Unit), which can perform simple arithmetic operations. The local memory and the ALU are connected to form a data path. The data path is controlled either by a program known as the microprogram or directly by the hardware. The microprogram is the interpreter of the instructions from level 2. Its job is to keep fetching instructions and executing them.
The Instruction Set Architecture (ISA) level is where we find the instruction set of this particular computer. Every computer manufacturer publishes a manual for its instruction set. This is where we find the basics of what the computer is capable of.
The operating system machine level contains, depending on the operating system, most of the instructions you would also find at the ISA level. On top of that, the operating system provides additional instructions, a different memory organization, the ability to run multiple programs, and so on. Because some of the instructions are identical to ISA instructions, the operating system can pass them straight down to be interpreted by the microprogram, just like plain ISA instructions, while it processes the instructions not shared with the ISA level itself. Because of this "shared" processing we call this level a hybrid level.
At this point we reach an important dividing line:
The levels below level 4:
– Intended for the interpreters and translators, written by system programmers, that support the upper levels
– Instructions are interpreted
– Closer to machine language
Level 4 and above:
– Aimed at problem-oriented programming
– Instructions are usually translated
– Closer to human language
The assembly language level is where we program the lower levels in a more human-friendly, symbolic form. The assembler translates this symbolic language into machine language. That's it.
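A rough sketch of that translation step in Java; the mnemonics, opcodes and machine-word layout are all invented for the example and do not belong to any real instruction set.

    import java.util.List;
    import java.util.Map;

    // Translate symbolic instructions of the form "MNEMONIC operand" into numeric
    // machine words: opcode in the high bits, operand in the low 12 bits.
    public class ToyAssembler {
        private static final Map<String, Integer> OPCODES =
                Map.of("LOAD", 0x1, "ADD", 0x2, "STORE", 0x3);

        static int assembleLine(String line) {
            String[] parts = line.trim().split("\\s+");
            int opcode = OPCODES.get(parts[0]);
            int operand = Integer.parseInt(parts[1]);
            return (opcode << 12) | (operand & 0xFFF);
        }

        public static void main(String[] args) {
            List<String> source = List.of("LOAD 7", "ADD 8", "STORE 9");
            for (String line : source) {
                System.out.printf("%-8s -> 0x%04X%n", line, assembleLine(line));
            }
        }
    }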
The problem-oriented language level is where we find the familiar programming languages such as C++, Java and Python. These are known as high-level languages. They are usually translated into level 4 or level 3 languages by compilers, but if desired they can be interpreted instead of translated.
In summary, a computer is designed as a series of levels, each built on top of the one below it, much like the layered design of the Internet.
Chapter 2
The CPU (Central Processing Unit) is the "brain" of the computer. It is designed to execute programs stored in the main memory. A bus connects all the components. The control unit is responsible for fetching instructions from the main memory. The ALU performs arithmetic operations. The CPU contains a small amount of high-speed memory (the registers) used to store values temporarily. The disk and the printer are input and output devices. The most important register is the Program Counter (PC), which points to the next instruction to be fetched. Another very important register is the Instruction Register (IR), which holds the instruction currently being executed.
A typical data path works as follows: we feed the ALU the contents of two registers, select the operation to perform on them, and store the result back into a register. This process is known as the data path cycle. The faster the data path cycle, the faster the computer. The contents of registers can later be stored back into memory.
We divide instructions into two groups: register-register and register-memory. Register-register instructions take two operands from the registers as input for the ALU. Register-memory instructions allow an operand to be fetched from memory.
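A minimal sketch in Java of one data path cycle and of the two instruction groups, assuming an invented machine with a small register file, a memory array and an ALU that only knows ADD and SUB.

    public class DataPathCycle {
        static int[] registers = new int[8];
        static int[] memory = new int[256];

        // The ALU: takes two values and an operation code, produces a result.
        static int alu(int a, int b, String op) {
            return op.equals("ADD") ? a + b : a - b;
        }

        // Register-register: both operands come from registers, result goes to a register.
        static void registerRegister(String op, int dest, int src1, int src2) {
            registers[dest] = alu(registers[src1], registers[src2], op);
        }

        // Register-memory: one operand is fetched from memory first.
        static void registerMemory(String op, int dest, int src, int address) {
            registers[dest] = alu(registers[src], memory[address], op);
        }

        public static void main(String[] args) {
            registers[1] = 5;
            registers[2] = 7;
            memory[10] = 100;
            registerRegister("ADD", 3, 1, 2);   // R3 = R1 + R2 = 12
            registerMemory("ADD", 4, 3, 10);    // R4 = R3 + memory[10] = 112
            System.out.println(registers[3] + " " + registers[4]);
        }
    }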
The CPU executes instructions roughly in the following sequence:
1. Fetch instruction from the memory into the instruction register
2. Change the program counter to point to the next instruction
3. Determine the type of instruction fetched
4. If the instruction uses a word from memory, locate that word
5. Fetch the word if needed
6. Execute instruction
7. Go back to step 1
This sequence is known as the fetch-decode-execute cycle. The interesting thing here is that we can recreate this sequence in, for example, Java (see the sketch below). This shows that a program does not have to be executed by hardware; it can also be executed by software. A program that takes another program as input and executes it is what we know as an interpreter. This equivalence of hardware processors and interpreters is important. An interpreter breaks the instructions from its input down into smaller steps known as microinstructions. This allows a machine built around an interpreter to be simpler and less expensive, and the saving grows as the number and complexity of the instructions increase.
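Here is such a sketch: a tiny interpreter in Java that walks through the seven steps above for a made-up machine. The three opcodes (0 = HALT, 1 = LOAD the accumulator from memory, 2 = ADD a memory word to the accumulator), the memory layout and the single accumulator register are all assumptions of this example.

    public class TinyCpu {
        public static void main(String[] args) {
            // Each instruction occupies two memory words: opcode, then address.
            int[] memory = {1, 8, 2, 9, 0, 0, 0, 0, 40, 2};  // program, plus data at 8 and 9
            int pc = 0;          // program counter: points to the next instruction
            int accumulator = 0; // a single working register

            while (true) {
                int ir = memory[pc];                  // 1. fetch the instruction into the IR
                int operandAddr = memory[pc + 1];
                pc += 2;                              // 2. advance the PC to the next instruction
                if (ir == 0) break;                   // 3. decode: HALT stops the loop
                int word = memory[operandAddr];       // 4-5. locate and fetch the memory word
                switch (ir) {                         // 6. execute
                    case 1 -> accumulator = word;
                    case 2 -> accumulator += word;
                }
            }                                         // 7. back to step 1
            System.out.println(accumulator);          // 42
        }
    }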
The quest for faster computers never ends. At the moment most computer architects look to parallelism. Parallelism comes in two forms:
– Instruction-level parallelism
– Processor parallelism
Instruction-level parallelism
Fetching instructions from memory has long been a known bottleneck for instruction execution speed. The first solution was the prefetch buffer: a buffer that fetches instructions from memory in advance, so that when an instruction is needed it is usually already available instead of having to wait for a memory read. Prefetching divides instruction execution into two parts: fetching and actual execution. The concept of a pipeline takes this further by dividing instruction execution into many parts (stages), each with a dedicated piece of hardware, all of which can run in parallel.
Suppose the pipeline has five stages and each stage takes 2 nanoseconds (ns), so one instruction takes 10 ns to execute. At first glance it looks like the machine can run 100 million instructions per second (MIPS), but it does much better than that: after every stage time (2 ns) another instruction completes, which gives 500 MIPS. Pipelining thus allows a trade-off between latency (how long it takes to execute one instruction) and processor bandwidth (how many MIPS the CPU delivers). With T the stage cycle time in nanoseconds and n the number of stages, the latency is T * n nanoseconds and the bandwidth is 1000 / T MIPS.
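Plugging in the example numbers from above (T = 2 ns, n = 5 stages):

    public class PipelineNumbers {
        public static void main(String[] args) {
            double stageTimeNs = 2.0;   // T: time per pipeline stage
            int stages = 5;             // n: number of stages

            double latencyNs = stageTimeNs * stages;             // 10 ns per instruction
            double mipsWithoutPipeline = 1000.0 / latencyNs;     // one instruction per 10 ns: 100 MIPS
            double mipsWithPipeline = 1000.0 / stageTimeNs;      // one completes per stage time: 500 MIPS

            System.out.println(latencyNs + " ns latency");
            System.out.println(mipsWithoutPipeline + " MIPS without pipelining");
            System.out.println(mipsWithPipeline + " MIPS with pipelining");
        }
    }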
If one pipeline is good, then surely two pipelines are better. A single instruction fetch unit fetches pairs of instructions and puts each one into its own pipeline with its own ALU. For the two ALUs to run in parallel, the two instructions must not use the same resource and must not depend on each other's results; this is a hard requirement. The instructions must also be simple enough to be executed in parallel. If this is not the case, only one instruction is issued while the other is paired with the next incoming instruction. When two pipelines are used, the main pipeline is known as the u pipeline and the secondary pipeline is known as the v pipeline.
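A rough sketch of such a pairing check in Java; the Instr record, its fields and the exact rules are invented for this illustration (a real CPU checks many more conditions).

    public class PairingCheck {
        record Instr(String op, int destReg, int srcReg1, int srcReg2, boolean simple) {}

        // Returns true if the second instruction may go down the v pipeline while
        // the first goes down the u pipeline.
        static boolean canPair(Instr first, Instr second) {
            if (!first.simple() || !second.simple()) return false;          // both must be simple
            if (second.srcReg1() == first.destReg()
                    || second.srcReg2() == first.destReg()) return false;   // data dependence
            if (second.destReg() == first.destReg()) return false;          // same result register
            return true;
        }

        public static void main(String[] args) {
            Instr a = new Instr("ADD", 1, 2, 3, true);
            Instr b = new Instr("SUB", 4, 5, 6, true);   // independent of a: can pair
            Instr c = new Instr("SUB", 4, 1, 6, true);   // reads R1, which a writes: cannot pair
            System.out.println(canPair(a, b));   // true
            System.out.println(canPair(a, c));   // false
        }
    }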
Going to four pipelines would be possible, but it creates a lot of duplicate hardware. Instead we keep one pipeline and give it multiple functional units. This is known as a superscalar architecture. The general idea is that instructions are issued faster than they can be executed, several instructions per clock cycle. In theory every stage completes in one stage time, but in practice stage 4 often takes longer to execute an instruction (e.g. a memory read or floating-point arithmetic), which creates time gaps in which stage 3 would have to wait. These gaps are filled by stage 3 passing instructions to other functional units, creating parallelism.
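A toy sketch of that idea in Java: one issue stage hands instructions to whichever functional unit is free, so a slow unit does not stall the rest. The unit names, latencies and instruction names are made up for this example.

    import java.util.HashMap;
    import java.util.Map;

    public class SuperscalarDispatch {
        // busyUntil[unit] = clock cycle at which that unit becomes free again
        static Map<String, Integer> busyUntil = new HashMap<>(
                Map.of("ALU1", 0, "ALU2", 0, "LOAD/STORE", 0, "FLOAT", 0));
        static Map<String, Integer> latency =
                Map.of("ALU1", 1, "ALU2", 1, "LOAD/STORE", 3, "FLOAT", 5);

        // Issue an instruction at the given cycle to a unit of the requested kind.
        static void issue(int cycle, String unit, String instr) {
            int start = Math.max(cycle, busyUntil.get(unit));   // wait only if that unit is busy
            busyUntil.put(unit, start + latency.get(unit));
            System.out.println(instr + " starts on " + unit + " at cycle " + start);
        }

        public static void main(String[] args) {
            issue(0, "FLOAT", "FMUL");        // slow floating-point operation
            issue(1, "ALU1", "ADD");          // integer work keeps flowing meanwhile
            issue(2, "ALU2", "SUB");
            issue(3, "LOAD/STORE", "LOAD");
            issue(4, "FLOAT", "FADD");        // has to wait until cycle 5 for the FLOAT unit
        }
    }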