1.1 - The characteristics of contemporary processors, input, output, and storage devices
1.1.1 - The processor
Function
A processor is a computer chip used to fetch, decode and execute instructions. The purpose of the CPU is to:
Control the movement of data/instructions
Fetch data/instructions from memory
Decode and execute instructions
Perform arithmetic/logical operations
Registers
Registers are memory locations within the processor itself. They are extremely fast, and each one serves a different purpose.
The Registers include:
Program Counter – stores the address of the next instruction which should be carried out. This counter is
incremented every FDE cycle.
Memory Address Register – stores the memory address that data or an instruction should be fetched from or written to.
Current Instruction Register – stores the current instruction which should be decoded and executed by the Control Unit.
Control Unit – fetches and decodes instructions from the processor's instruction set and coordinates their execution by sending control signals to hardware devices and the ALU.
Memory Data Register – stores data and instructions which have just been retrieved from memory, or which are about to be written to it.
Accumulator – Stores results from the ALU and can hold values which have been loaded in from memory.
Buses
Buses are communication channels used to send data between the registers, CPU components and other components in the computer. These buses include:
Data Bus – carries data between hardware devices in the computer.
Control Bus – carries processor commands to hardware devices, as well as status signals returning to the processor.
Address Bus – carries the address of the memory location that data should be sent to or retrieved from.
Arithmetic Logic Unit
The ALU can perform basic arithmetic such as add/subtract/multiply/divide/shift operations, as well as basic logic operations such as AND/OR/NOT/XOR.
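These operations can be modelled with a short Python sketch (a toy illustration of the operations themselves, not of how an ALU is built in hardware; the opcode names are invented for the example):

# Minimal sketch of ALU-style operations on integers.
def alu(op, a, b=0):
    if op == "ADD": return a + b
    if op == "SUB": return a - b
    if op == "MUL": return a * b
    if op == "DIV": return a // b          # integer division
    if op == "SHL": return a << b          # shift left by b places
    if op == "AND": return a & b
    if op == "OR":  return a | b
    if op == "XOR": return a ^ b
    if op == "NOT": return ~a              # bitwise NOT of a
    raise ValueError("unknown operation")

print(alu("ADD", 6, 7))                    # 13
print(alu("AND", 0b1100, 0b1010))          # 8 (0b1000)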
Control Unit
The control unit is the part of the CPU that performs the Fetch-Decode-Execute cycle in which it fetches instructions
from memory, decodes the instructions and then executes each instruction accordingly.
Key Assembly Instructions
STO – Stores the value in the ACC into a given memory address
ADD – Adds a value in memory to the ACC
SUB – Subtracts a value in memory from the ACC
BRA – Branches to an address
BRZ – Branches to an address if the ACC == 0
LDA – Loads a value from a memory address into the ACC
HLT – Halts the program so that no further instructions are fetched
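For illustration, here is a small hypothetical program written with these mnemonics, expressed as Python tuples so that it can be run by the fetch-decode-execute sketch given after the next section (the data addresses 10 and 11 are arbitrary choices for this example):

# Made-up countdown program: repeatedly subtract 1 from the value at
# address 10 until the accumulator reaches 0. Address 11 holds the constant 1.
PROGRAM = [
    ("LDA", 10),    # load the counter into the accumulator
    ("SUB", 11),    # subtract 1
    ("STO", 10),    # store the new counter back to address 10
    ("BRZ", 5),     # if the accumulator is 0, branch to the HLT at address 5
    ("BRA", 0),     # otherwise branch back to the start of the loop
    ("HLT", None),  # halt
]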
FDE Cycle
Fetch:
Copy the value from the PC into the MAR.
Fetch the data at the associated memory address from main memory and store this value in the MDR.
The value in the MDR should then be copied into the CIR.
The PC should then be incremented.
Decode:
The processor separates the value in the CIR into the operation that should be executed and the values required to complete the instruction, including any registers that are to be operated on or used to store results.
Execute:
The instruction is completed accordingly by dispatching it to the appropriate functional unit (e.g. the ALU or load/store unit).
Instructions involving branch operations will update the PC.
Load/store operations will fetch data from, or send data to, main memory. This is done by setting the MAR to the memory location to be read from or written to, and reading or writing the MDR accordingly.
ALU instructions update the ACC, or another destination register if specified.
HLT (halt) instructions will stop any further instructions from being loaded.
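The whole cycle can be made concrete with a short Python sketch. This is a minimal, hypothetical model rather than how a real CPU is implemented: memory is a flat list holding both instructions and data, and the registers are plain variables named after those described above.

# Minimal sketch of a fetch-decode-execute loop for the mnemonics above.
def run(memory):
    pc = 0                      # Program Counter
    acc = 0                     # Accumulator
    while True:
        # Fetch
        mar = pc                # copy the PC into the MAR
        mdr = memory[mar]       # read that address from memory into the MDR
        cir = mdr               # copy the MDR into the CIR
        pc += 1                 # increment the PC
        # Decode
        opcode, operand = cir   # split the instruction into opcode and operand
        # Execute: dispatch to the appropriate operation
        if opcode == "LDA":
            acc = memory[operand]
        elif opcode == "STO":
            memory[operand] = acc
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "SUB":
            acc -= memory[operand]
        elif opcode == "BRA":
            pc = operand        # branches update the PC
        elif opcode == "BRZ" and acc == 0:
            pc = operand
        elif opcode == "HLT":
            return memory       # stop fetching further instructions

# Usage with the countdown program sketched earlier: addresses 0-5 hold the
# instructions, 6-9 are unused, 10 holds the counter and 11 the constant 1.
memory = [
    ("LDA", 10), ("SUB", 11), ("STO", 10), ("BRZ", 5), ("BRA", 0), ("HLT", None),
    None, None, None, None,
    3, 1,
]
print(run(memory)[10])          # prints 0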
Performance
Clock speed – An electronic oscillator produces a signal that synchronises the processor's operations; each pulse triggers the next step of the cycle. A higher clock speed means that instructions are carried out faster.
Bus size – the number of bits that can be transferred per operation. A greater size means that more data can be transferred in the CPU at a time. This is usually equivalent to the word length.
Word length – This refers to the amount of data a processor can handle at a time. This is usually seen as 32
or 64 bit. A greater word length means that larger values can be stored and manipulated.
Amount of available cache – a small amount of fast memory located closer to the CPU, used to reduce the number of accesses to RAM. Cache exists in levels that trade size against speed: L1 cache is the closest, fastest and smallest and exists for each core; L2 is larger, slower and is shared between cores.
Number of cores – More cores allow for more instructions to be completed at once. If a program has been
effectively designed for multiple cores, a task can be divided into subtasks and completed in parallel across
cores (parallel processing). It should be noted that this may require additional overhead so that the task can
be organised and divided between cores correctly.
Pipelining – Pipelining separates the FDE cycle into pipeline stages, and instructions move from one stage to the next each clock cycle. This allows different instructions to occupy different stages at the same time (one instruction can be executed while the next is decoded and a third is fetched), so once the pipeline is full, one instruction can be completed every clock cycle. It should be noted that any instruction which changes the PC, such as BRA and BRZ instructions, causes the pipeline to be flushed.
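The overlap created by pipelining can be visualised with a toy Python sketch (a simplified three-stage model, not a cycle-accurate simulation, and ignoring branch flushes):

# Toy three-stage pipeline: each clock cycle, every instruction moves on one
# stage, so once the pipeline is full one instruction completes per cycle.
instructions = ["I1", "I2", "I3", "I4", "I5"]     # made-up instruction names
fetch = decode = execute = None
cycle = 0
while instructions or fetch or decode:
    cycle += 1
    execute = decode                               # decode stage -> execute stage
    decode = fetch                                 # fetch stage -> decode stage
    fetch = instructions.pop(0) if instructions else None
    print(f"cycle {cycle}: fetch={fetch} decode={decode} execute={execute}")
# 5 instructions finish in 7 cycles rather than the 15 a non-pipelined
# processor would need if each instruction used all three stages in turn.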
Von Neumann vs Harvard
In the Von Neumann architecture, data and instructions are stored alongside each other in the same memory. This enables more flexible use of memory and allows the processor to run a variety of programs without knowledge of them in advance, which makes the Von Neumann architecture ideal for general-purpose computers.
The Harvard architecture stores data and instructions in separate memory units, each with its own buses. This means that the data and instruction memory units can be accessed simultaneously, which reduces the time the processor spends waiting for data to be written or read and so increases performance. It also means that each memory unit can be tailored to a particular system; for example, the instruction memory can be made read-only if the system is designed to only run a specific program, which helps prevent malicious code from being run on the system. This is why the Harvard architecture is often used in embedded systems where speed and security are priorities.
1.1.2 - Types of processors
CISC and RISC
CISC (Complex Instruction Set Computer) processors can execute larger, more specialised instructions that carry out several operations at once but may each take several clock cycles. CISC designs focus on reducing program code size and RAM usage by building more of the computer's functionality into the hardware circuitry. These processors are often used on smaller, embedded systems where speed is not the highest priority and memory is limited.
RISC (Reduced Instruction Set Computer) processors execute smaller, more basic instructions that each carry out a single function and typically complete in one clock cycle, which makes processing more predictable and easier to pipeline. However, RISC programs require more memory, and several RISC instructions may be needed to match one CISC instruction, so the work can take a similar amount of time overall. RISC also relies on more sophisticated compilers to provide the functionality that CISC implements in hardware.
GPUs
GPUs are specialised processors for graphics. They are designed for parallel processing across thousands of cores and have their own dedicated video memory (VRAM). A separate GPU frees the CPU to complete other tasks. Since GPUs are able to carry out many calculations in parallel, they are also well suited to machine learning applications as well as graphics processing.
1.1.3 - Input, Output and Storage
Factors when considering appropriate secondary storage
Capacity
Speed
Cost
Portability
Compatibility
Reliability
Magnetic Disk
Magnetic disks have very high capacity for their cost. They contain many moving parts, which makes these drives less portable and also less reliable, since the moving parts can fail.
A drive may consist of several platters with a magnetic coating. Each side of a platter has a mechanical arm which reads and writes data by checking or changing whether a point on the platter is magnetised. Each platter is divided into radial sectors and circular tracks; the arm can move between tracks and the disk can spin to a different sector. The speed at which a platter spins has a great impact on the read/write speed of the drive.
Magnetic disks can become fragmented when there is not enough contiguous space to store a file, so the file may be stored at several locations on the disk, which makes it take longer to read. Additionally, drives which are almost full will take longer to find space they can allocate to a file.
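A rough sketch of why fragmentation happens, using a deliberately simplified model where the disk is just a list of fixed-size sectors (ignoring platters, tracks and the physical layout): a new file is placed into whatever free sectors exist, whether or not they are next to each other.

# Simplified model: the disk is a list of sectors; None means a free sector.
disk = ["A", "A", None, "B", None, None, "C", None]

def allocate(disk, name, size):
    # Place a file into the first free sectors found, contiguous or not.
    placed = []
    for i, sector in enumerate(disk):
        if sector is None and len(placed) < size:
            disk[i] = name
            placed.append(i)
    return placed                     # the sector numbers now holding the file

# A 3-sector file "D" ends up fragmented across sectors 2, 4 and 5,
# so reading it back requires extra head movement between those sectors.
print(allocate(disk, "D", 3))         # [2, 4, 5]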
Solid State
Solid state drives utilise flash memory and have no moving parts. They have faster read/write speeds than magnetic storage but are more expensive, and so typically have lower capacities. The lack of moving parts means that they are more portable and less likely to experience mechanical failure. Solid state drives do have a limited lifespan, however, since each cell supports only a limited number of write operations, and if a drive is not periodically refreshed the charge stored in its cells can leak and data can be lost.
SSDs utilise NAND flash memory cells and a controller, with transistors trapping charge to represent binary. Cells are organised into pages, and pages are grouped into blocks. Data is read a page at a time, but to write to a used page its whole block must first be erased. This means that as a drive becomes full it will become slower, because writing new data increasingly requires reorganising and rewriting old blocks.
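The page/block behaviour can be sketched in Python under a very simplified model (real SSD controllers use wear levelling and garbage collection, which are ignored here): writing to a page that already holds data forces the whole block to be erased and its other pages rewritten.

# Simplified flash model: a block is a list of pages; None means an erased page.
PAGES_PER_BLOCK = 4
blocks = [[None] * PAGES_PER_BLOCK for _ in range(2)]

def write_page(blocks, block_no, page_no, data):
    # Writing to an already-used page means erasing the whole block and
    # rewriting its surviving pages - the slow path as a drive fills up.
    block = blocks[block_no]
    if block[page_no] is not None:
        survivors = list(block)                  # keep the pages we must preserve
        for i in range(PAGES_PER_BLOCK):
            block[i] = None                      # erase the entire block
        for i, page in enumerate(survivors):
            if i != page_no:
                block[i] = page                  # rewrite the surviving pages
    block[page_no] = data

write_page(blocks, 0, 1, "old")
write_page(blocks, 0, 1, "new")                  # triggers erase-and-rewrite of block 0
print(blocks[0])                                 # [None, 'new', None, None]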
Optical Disk
Optical disks are slower than solid state and magnetic mediums but have the benefit of greater portability and low
cost. Some disks are read-only (CD-ROM / DVD-ROM), some can only be written to once (CD-R / DVD-R) and others
have full read/write functionality (CD-RW / DVD-RW).
Optical disks are read by shining a laser onto the reflective aluminium layer of the disk as it spins. How the light is reflected determines how much light is received by the sensor, and this is interpreted as binary.
Recordable disks can be written to by burning dots into the chemical layer which alters how light is reflected. In CD-
RW / DVD-RW disks, this process is reversible so that the disk can be written to multiple times.
Memory
RAM – Random Access Memory is used to store data and instructions at specific memory addresses which are used
by the CPU to run programs. These programs are loaded from secondary storage into RAM to utilise the faster
read/write abilities. RAM is volatile, which means that all data will be lost when the power is turned off.
DRAM – Dynamic Random-Access Memory is most commonly found as the primary memory in computers because of its fast read/write speeds, low cost per gigabyte and high capacities. This type of memory is relatively cheap because it uses a single transistor and capacitor to store each bit. The capacitor, which holds the state of the bit, gradually leaks charge, so DRAM must be periodically refreshed to avoid data loss, which makes it a more power-hungry way of storing data. DRAM is typically used in RAM sticks.
SRAM – Static Random-Access Memory is an alternative type of memory which requires six transistors to store each bit, so it is more expensive to produce and has lower capacity. However, SRAM has faster read/write speeds and uses less power, as it does not need to be refreshed periodically. This memory is mostly used for CPU cache and registers, where speed is the priority.
ROM – A small amount of read-only, non-volatile memory that stores the BIOS and is used to boot the operating system.
PROM – Programmable ROM. This type of ROM can be written to once after manufacture. It is cheaper to manufacture and allows the latest version of the software to be written to it by the supplier of the electronic device. Used in mobile phones, RFID tags and games controllers.
EPROM – Erasable Programmable ROM. Specialist equipment utilises UV light to erase all data on a ROM. The
EPROM can then be rewritten.
EEPROM – Electrically Erasable Programmable ROM. This allows for individual bytes to be erased and the device can
be reprogrammed.
Virtual Memory – This is a mapping between a process's individual (virtual) address space and the physical memory allocated to it. This mapping means that a process may only access its own allocated memory, so no process can view or manipulate another process's memory. Virtual memory also allows part of secondary storage to be used as an extension of RAM when physical memory runs low.
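A minimal sketch of the mapping idea, assuming a simple per-process page table that maps virtual page numbers to physical frame numbers (the page size, process names and mappings are invented for the example):

# Hypothetical per-process page tables: virtual page number -> physical frame.
PAGE_SIZE = 4096
page_tables = {
    "process_a": {0: 7, 1: 2},
    "process_b": {0: 5},
}

def translate(process, virtual_address):
    # A process can only reach the frames its own table maps, so it cannot
    # read or modify another process's memory.
    page, offset = divmod(virtual_address, PAGE_SIZE)
    table = page_tables[process]
    if page not in table:
        raise MemoryError("page fault: address not mapped for this process")
    return table[page] * PAGE_SIZE + offset

print(translate("process_a", 4100))   # virtual page 1, offset 4 -> 2*4096 + 4 = 8196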
Disk Thrashing – This is when the hard drive is overworked by excessively moving data between primary memory and the virtual memory area on disk. This can damage the hard drive and lead to early drive failure due to the excessive movement of the drive arms and disk spinning.
1.2 - Software and Software Development
1.2.1 - Systems Software
Operating Systems
Operating systems provide a platform on which other applications can run. They allow applications to interface with the hardware and input/output devices through a kernel. The operating system may also be responsible for providing a user interface and security features.
Features of an operating system
The operating system will manage computer resources through:
Error reporting
Memory Management
Utility programs
Scheduling
File Management
Input/Output Device Management
Common Operating Systems
Windows – The most common OS for personal computers.
Linux – Often used for servers or computers that require more flexibility.
MacOS – Has some useful proprietary software but mostly used by people who enjoy wasting money.
Types of Operating Systems
Multi-tasking OSs rapidly switch between tasks, typically giving each a short time slice in turn, to create the impression of multiple programs being executed at the same time on a single core (a minimal sketch of this idea is given after this list).
Multi-user OSs allow multiple users to use the same computer at the same time by running the OS through networked terminals.
An embedded OS controls an embedded system within a larger system such as a car, plane, or medical
equipment.
A Real Time OS is designed and optimised for processes that need to respond to real-time events as quickly
as possible.
Distributed OSs coordinate communication between computers in a network where a complex task may need to be processed by multiple computer nodes.
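A minimal sketch of the time-slicing idea behind multi-tasking (a toy round-robin scheduler, not how any particular operating system implements it; the task names and work units are made up): each task gets a fixed slice of work per turn until it finishes.

from collections import deque

# Toy round-robin scheduler: each task is (name, units of work remaining).
def round_robin(tasks, time_slice=2):
    queue = deque(tasks)
    while queue:
        name, remaining = queue.popleft()
        done = min(time_slice, remaining)
        print(f"running {name} for {done} unit(s)")
        if remaining - done > 0:
            queue.append((name, remaining - done))   # unfinished: back of the queue
        else:
            print(f"{name} finished")

round_robin([("browser", 3), ("music", 1), ("editor", 4)])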
BIOS
The Basic Input Output System is stored in ROM/flash memory and is used to initialise and test a computer's hardware so that it is ready to be used by other software; this process is known as POST (Power-On Self-Test). The BIOS also loads the bootloader from secondary storage, which initialises the operating system.