Lecture 1

Data Storage

The binary number system:

Computer arithmetic is different from the arithmetic that people use:

Binary Notation

When we write a number, we usually represent it using a string of digits between 0 and 9. The only reason for using 10 digits seems to be that we have ten fingers. Computers have no fingers and so are not as restricted!

A decimal number is constructed by summing multiples of the digits:

i.e. thousands, hundreds, tens, units - for a four digit decimal number.

e.g.    576=5*100+7*10+6*1

The number to multiply each digit by is 10^d, where d is the position of the digit (0 for units, 1 for tens, etc.). The base for this power is called the radix. When dealing with computers it is common to use radices other than 10. The most important radices are 2 and 16.

Information is most commonly stored in a digital computer as the presence or absence of a current, a charge or a magnetic field. It is uncommon to find a storage device that can store more than two levels. For this reason, it is very common to encounter radix 2 (or binary) numbers.

In binary, each digit can only be 0 or 1. A binary digit is called a bit. The columns in a binary number represent powers of 2.

i.e. sixteens, eights, fours, twos, units - for a five digit binary number.

Decimal
    Binary
0   0
1   1
2   10
3   11
4   100
5   101
6   110
7   111
8   1000
9   1001
10  1010
11  1011
12  1100
13  1101
14  1110
15  1111
16  10000
17  10001
18  10010
19  10011
20  10100
To convert between binary and decimal representations, use the following methods:

decimal to binary:

  1. Find the largest power of two that is less than or equal to the number.
  2. If the number is greater than or equal to this power of two, print a 1 and subtract the power of two from the number.
  3. Otherwise, print a 0.
  4. Look at the next smallest power of two.
  5. Repeat from step 2 until there are no more powers of two left.
To do this quickly it is a good idea to know the small powers of two: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024.

Example: print 481 in binary. 481 = 256 + 128 + 64 + 32 + 1, so 481 is 111100001 in binary.

To convert from binary to decimal, add the powers of two for each position in the binary number that is 1.

10011=1+2+16=19

Here is some C code to print a number in binary.

These notes contain example programs in C like the one above. If you are reading the notes online, then you can run the program by clicking on the "Run" button above. If you want to change the program, just modify the source code and Run it again.

Hexadecimal notation

Although binary numbers are common when dealing with machine representation of data, they are hard for human beings to work with. To provide a more compact representation, hexadecimal notation is used. Hexadecimal (or Hex) uses base 16, and the digits are:

0123456789abcdef

Decimal
    Binary
         Hex
0   0    0
1   1    1
2   10   2
3   11   3
4   100  4
5   101  5
6   110  6
7   111  7
8   1000 8
9   1001 9
10  1010 a
11  1011 b
12  1100 c
13  1101 d
14  1110 e
15  1111 f
One hex digit is four binary bits and so converting between hex and binary is simple.

010010101101

0100:1010:1101

4:a:d

This number is 4ad in hex.

38a=0011:1000:1010=001110001010

Two hex digits are an 8 bit binary number 00,01,02..ff and can represent a decimal number from 0..255

Four hex digits are a 16 bit binary number and can represent a decimal number from 0..65535

In C, hexadecimal numbers have the prefix 0x and can be used in the same way as a decimal number.

e.g.

i=0x01f3e;
a=a+0x0100;
The 'printf' function allows you to print hexadecimal numbers directly using %x in the format string.
 
 

Finite precision numbers

When working out a calculation on paper by hand, there is always enough space to represent numbers. For a computer things are usually quite different. The amount of memory used for storing a number is fixed (in bits) at the time the machine is designed. It is possible to represent numbers larger than this fixed size but doing so will probably involve extra effort by the programmer. Numbers in a computer thus have a finite precision.

To think about finite precision numbers, consider the set of positive integers that can be represented using three decimal digits. There are 1000 such numbers, from 0..999, but this representation cannot store numbers larger than 999, negative numbers, fractions, or irrational and complex numbers.

When performing arithmetic operations on numbers it is important that the result is also a number. With finite-precision numbers this is not always the case: for example, 999 + 1 cannot be represented in three decimal digits. For binary numbers the precision is usually 8, 16, 32 or 64 bits.
Notice that these examples use unsigned numbers; what happens if they are signed?
 

Lecture 2

Negative Numbers

In a binary number, the bit furthest to the left is called the most significant bit (msb) and the bit furthest to the right is called the least significant bit (lsb).
msb
|
10010010
       |
       lsb
Four systems for representing negative numbers:
  1. Signed Magnitude

     The MSB gives the sign of the number (the sign bit), 0 for positive and 1 for negative. The remaining bits hold the magnitude of the number.
     e.g.: 4=00000100, -4=10000100

  2. One's Complement

     The MSB is also the sign, but to negate a number all the bits are complemented (1 is replaced by 0 and 0 is replaced by 1).
     e.g.: 4=00000100, -4=11111011

     Signed Magnitude and One's Complement are not used often because they have two representations for 0 (+0 and -0).

  3. Excess n Notation

     Add n to the number. n is usually 2^(m-1), where m is the precision; for an 8 bit number, n is 128.
     e.g.: 4=10000100 (4+128=132), -4=01111100 (-4+128=124)

  4. Two's Complement

     This is the same as one's complement, but negative numbers have one added to them so that there are not two zeros (+0 and -0). The MSB is still the sign.
     e.g.: 4=00000100, -4=11111100.
Two's Complement is the most common representation for negative numbers, because it has only one zero and because the same addition and subtraction hardware works for both signed and unsigned numbers. It is the representation used by the machines in these examples.

The diagram below shows the relationship between binary, signed and unsigned numbers for 8 bit two's complement.

 

Moving clockwise around this diagram corresponds to addition; moving anti-clockwise corresponds to subtraction.

Note that there are two places on the diagram where an addition or subtraction will cause a result that is not valid.

  1. For unsigned numbers, crossing the top of the diagram.

     10-11=255.....Wrong.
     250+9=3..........Wrong again.
     These cases cause a Carry.

  2. For signed numbers, crossing the bottom of the diagram.

     120+10=-126.....Wrong.
     -100-29=127.....Wrong again.
     These cases cause an Overflow.

Similar diagrams can be drawn for the other representations of binary numbers to illustrate their disadvantages.

In C, the standard integer types include:

int          unsigned int
long         unsigned long
char         unsigned char

Addition and Subtraction of binary numbers

Two binary numbers A and B are added from right to left, creating a sum and a carry in each bit position.

Since the rightmost bits of A and B can each assume one of two values, four cases must be considered: 0 + 0, 0 + 1, 1 + 0, and 1 + 1.

For the remaining bit positions, the carry into the position can be 0 or 1, so that a total of eight input combinations must be considered.

Subtraction is performed in a similar manner using different rules. For subtraction the carry is often called a borrow.
These operations are performed by the CPU (you rarely have to perform operations on individual bits).

The order of numbers is preserved when representing signed integers using two's complement.
This means that the same addition and subtraction can be used for signed and unsigned numbers.
The only difference is that signed arithmetic can produce overflows and unsigned arithmetic can produce carries.

Lecture 3

Floating Point Numbers

In many calculations the range of numbers is very large. For very large or small numbers, people use scientific notation:
instead of writing out all the digits, we write a mantissa times a power of ten (e.g. 3.0 x 10^12 instead of 3000000000000). Computers can use a similar notation; this is called floating point.
In general, a number n is represented as n = f x B^e. f is called the fraction, or mantissa, and e is called the exponent. B is the base of the number system; 10 for decimal, 2 for binary.

Floating point numbers are better standardised than integers, and most computers use the same representation (called I.E.E.E. 754).

There are two common formats for floating point numbers, single and double precision.

Single precision numbers are stored in 32 bits as shown below.

 __________________________________
|_|________|_______________________|
 |    \               \
 Sign  Exponent       Fraction
(1 bit)(8 bits)       (23 bits)
The sign bit gives the sign of the number as a whole (1 for negative).

The exponent is stored using excess 127 notation.

The fraction is stored in 23 bits and represents a decimal number from 1.0 to nearly 2.0.
The first bit in the fraction must always be one and is not stored.


 

Double Precision numbers are stored in 64 bits as shown below.

 ____________________________________________________
|_|___________|______________________________________|
 |     \                \
 Sign   Exponent         Fraction
(1 bit) (11 bits)        (52 bits)
The sign bit gives the sign of the number as a whole.

The exponent is stored using excess 1023 notation.

The fraction is stored in 52 bits.

        Item             Single        Double
 ________________________________________________
|___Bits in sign_____|_____1______|______1_______|
|___Bits in exponent_|_____8______|______11______|
|___Bits in Fraction_|_____23_____|______52______|
|___Bits, total______|_____32_____|______64______|
|___Exponent system__|_Excess 127_|_Excess 1023__|
|___Exponent Range___|_-126..+127_|_-1022..+1023_|
|___Smallest_________|___2^-126___|____2^-1022___|
|___Largest__________|___2^+128___|____2^+1024___|
|___Decimal range____|_10^-38..10^+38_|_10^-308..10^+308_|
Example: 0.5 is stored as 0x3f000000 in single precision f.p. (0.5 = 1.0 x 2^-1, so sign=0, exponent=126, fraction=0)

The floating point number line for single precision:

                Positive overflow
         Positive numbers        \    
   Positive underflow    \        \         
                    0\    \        \
-------|.  . . ..|--|--|.. . .  .|-------
 \         \       \
  \         \       Negative underflow
   \         Negative Numbers
    Negative Overflow
In C, these floating point numbers are called float (single precision) and double (double precision). There are a large number of mathematical functions that perform operations on floats and doubles.



Floating point calculations may be performed by using integer operations on the fraction and exponent. However, most computers now have special hardware for performing f.p. operations. When the speed of a machine is measured, the results for integer and floating point calculations are usually presented separately.

Characters

Characters are stored as a 7 bit unsigned integer. The mapping from character to integer is defined by the American Standard Code for Information Interchange (ASCII).
                             Second Hex Digit 
        0|  1|  2|  3|  4|  5|  6|  7|  8|  9|  A|  B|  C|  D|  E|  F| 
     |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| 
 F 0 |NUL|SOH|STX|ETX|EOT|ENQ|ACK|BEL| BS| HT| LF| VT| FF| CR| SO| SI| 
 i   |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| 
 r 1 |DLE|DC1|DC2|DC3|DC4|NAK|SYN|ETB|CAN| EM|SUB|ESC| FS| GS| RS| US| 
 s   |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| 
 t 2 | SP|  !|  "|  #|  $|  %|  &|  '|  (|  )|  *|  +|  ,|  -|  .|  /| 
     |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| 
 H 3 |  0|  1|  2|  3|  4|  5|  6|  7|  8|  9|  :|  ;|  <|  =|  >|  ?| 
 e   |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| 
 x 4 |  @|  A|  B|  C|  D|  E|  F|  G|  H|  I|  J|  K|  L|  M|  N|  O| 
     |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| 
 D 5 |  P|  Q|  R|  S|  T|  U|  V|  W|  X|  Y|  Z|  [|  \|  ]|  ^|  _| 
 i   |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| 
 g 6 |  `|  a|  b|  c|  d|  e|  f|  g|  h|  i|  j|  k|  l|  m|  n|  o| 
 I   |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| 
 t 7 |  p|  q|  r|  s|  t|  u|  v|  w|  x|  y|  z|  {|  ||  }|   |DEL| 
     |---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
The representation for each character consists of 7 bits, and all 128 possible bit patterns represent valid characters.

The characters in positions 00 - 0x01f and position 0x07f are special control characters that are used for transmission, printing and control. Of these, 0x0c, 0x0d, 0x0a and 0x07f are most important.

The remaining characters are all printable, and include letters, numbers, punctuation, and a space.

The digits 0-9 appear in sequence, as do the upper and lower case letters. This organization simplifies character manipulation and comparison.

In order to change the character representation of a digit into its numerical value, we can subtract 0x030 from it. In order to convert the ASCII character '5', which is in position 0x035, into the number 5, we compute 0x035 - 0x030 = 0x05.

In order to convert an upper case letter into a lower case letter, we add 0x020. To convert the letter 'H', which is at location 0x048 in the ASCII table, into the letter 'h', which is at position 0x068, we compute 0x048 + 0x020 = 0x068.

In C, characters are always stored using 8 bits. A character can be used in a C program by using either the ASCII code or enclosing the character in single quotes.

Strings

A string is a sequence of characters. In C, strings are stored in consecutive memory locations and are always terminated by a character with a code of 0 (NUL).

Lecture 4

Basic Logic Functions

Consider the box shown above; it has two inputs, x and y, and one output, z. If x, y and z are one bit binary numbers then the box performs a logic function.

Consider all combinations of the two bits, x and y. There are 4 such combinations, 00, 01,10 and 11.

Now consider all the possible logic functions with these two bits as inputs and one bit as the output.
For each of the 4 possible inputs there will be a value for the output. We can completely describe this function using a table:

 x y | z
-----|---
 0 0 | A
 0 1 | B
 1 0 | C
 1 1 | D
This is called a truth table; the values of A, B, C and D completely describe the logic function.
We can now look at all the possible logic functions by looking at the values of A, B, C and D.
Each row in the following table is a logic function. There are 16 possible functions of two variables.
The columns represent D, C, B and A.
  DCBA
  1100 <-x
  1010 <-y
  ----
  0000|0 X
  0001|1 X
  0010|2 X
  0011|3 X
  0100|4 X
  0101|5 X
  0110|6 
z 0111|7 X
  1000|8 
  1001|9 X
  1010|a X
  1011|b X
  1100|c X
  1101|d X
  1110|e
  1111|f X
For example row 6 is a logic function that is 1 when either x is one or y is one but not both.

We will use the hexadecimal row number to describe these functions.

Most of these functions are not useful.

0 and f are trivial, z is always the same.
c and a are also trivial (z=x and z=y)

NOT

Let's look at the simplest possible logic function: a function with a one bit input and a one bit output. If x is one then z is zero; if x is zero then z is one. This function is called the NOT function and is drawn as shown below.

This is the Logic Symbol for NOT.

Let's use NOT to see if we can make some of the two input logic functions redundant.

If we write the functions as n(x,y), where n is 1..f then we can see the following:

1(x,y) is equivalent to NOT e(x,y)
2(x,y) is equivalent to NOT d(x,y)
3(x,y) is equivalent to NOT x
4(x,y) is equivalent to NOT b(x,y)
5(x,y) is equivalent to NOT y
9(x,y) is equivalent to NOT 6(x,y)
7(x,y) is equivalent to NOT 8(x,y)

also

b(x,y) is equivalent to e(NOT x,y)
d(x,y) is equivalent to e(x,NOT y)

This leaves 8, e and 6, which cannot easily be reduced further.
8 is AND, e is OR, 6 is XOR.

AND

The AND function:

z is one, only if x is one AND y is one.

OR

The OR function:

z is one, if x is one OR y is one OR both are one.

Note that this is not the common English usage of OR. If we say:

"This lecture is tedious, lets go to the Cinema OR the Beach"

This does not include both.

XOR

The Exclusive OR function:

z is one, if x is one OR y is one, but not both.

In C, the following operators perform bitwise operations on every bit in an integer or pair of integers.
~    Bitwise NOT
&    Bitwise AND
|    Bitwise OR
^    Bitwise XOR
These should not be confused with the logical operators (&&, || and !). C needs two types of operators because the value used for false is 0 and any other value is true. For example, using bitwise AND, (2 & 4) = 0, but (2 && 4) = 1.

Lecture 5

Example: Build a device to add two numbers together - An adder:
This device will add together A and B to give S. C4 is the carry from the top bit and indicates that the result is larger than 15. C0 is the carry input; by connecting C4 from one device to C0 from another an 8 bit adder can be made.

In order to add two bits together use the following truth table

s is the sum and c is the carry.

s=a XOR b, c=a AND b

In order to add three bits together (two data bits a and b, plus a carry in, cin) use the following truth table:

s=(a XOR b) XOR cin, cout=(a AND b) OR (a AND cin) OR (b AND cin).

Logic Diagram:

Lecture 6

Combinational Logic

Using the three input, two output, logic device we designed last time we can now make a 9 input, 5 output, logic device to add two binary numbers together:

The logic circuit from lecture 5 can be drawn as a single block called a full adder.

If four of these devices are cascaded together, then we have a four bit adder.

Subtraction can be performed in a similar way.

An adder can be used for subtraction because A-B == A+(-B). To get -B, complement B (using NOT gates) and add one to the result (two's complement). Adding one to the result can be done by making C0 = 1.

Inside the central processing unit of a computer there will be a device that performs a number of arithmetic and logic functions. E.g. Add,Sub,And,Or,Xor,Not.

This is called the Arithmetic Logic Unit or ALU.


A, B and S are n bit numbers. F is a binary code to indicate the function to be performed. O is status output from the operation, including Carry, Overflow etc..

Sequential Logic

So far we have looked at logic circuits that have a direct mapping from input to output. Every output is just a logical combination of the input bits. This is called combinational (or combinatorial) logic.

If we allow a device to have an output that depends not only on the present input but on previous inputs (or outputs) then it is called sequential logic.


Now the output depends on the present input and the state. The box contains a combinational logic circuit.

This device can have a "memory", and we can use it to store bits.

It can also be used to control a complex sequence of operations.

Lecture 7

Data and Instructions

Schematic diagram of a simple computer

The CPU communicates with the memory using buses. A bus is just a number of connections that can carry binary numbers in parallel.

The Data Bus carries Instructions and Data to and from the memory.

The Address Bus carries information about where in memory the data is coming from or going to.

The Control Bus is used to tell the memory which way data is going (read/write).

If A is the size (in bits) of the Address bus and W is the size of the data bus, then the memory can be thought of as a large array of locations. Each location holds W bits and there are 2^A locations, numbered from 0 to 2^A - 1.

Execution Cycle

Example

Instruction: Add the contents of memory location 1 to the contents of location 2 and put the result in location 0.


Execution Sequence

  1. Address of the instruction is sent to memory with a READ control signal
  2. Instruction is fetched into control unit
  3. Instruction is decoded.
  4. Address 1 sent to memory with READ signal
  5. Data fetched from memory location 1 into the datapath
  6. Address 2 sent to memory with READ signal
  7. Data fetched from memory location 2 into the datapath
  8. ADD operation is sent to ALU in datapath
  9. Address 0 sent to memory with WRITE signal
  10. Result sent from datapath to memory and written to location 0.

Instruction Sets

What is an instruction set

Each CPU will have a certain repertoire of instructions that it can decode and execute. This is called the instruction set of the CPU.

The instructions are coded in binary and a large number of them are required to execute a program. A program in this form is called a machine language or machine code program. All other computer languages must be translated into machine code before they can be run. The program used to perform translation to machine code is called the compiler.

Lecture 8

Instruction Sets

Information which must be present in an instruction

  1. Operation

     e.g. ADD, SUBTRACT etc.

  2. Where to get the data

     Memory addresses, or the data may be in the datapath already, or no data may be needed.

  3. Where to put the result

     A memory address or perhaps temporarily within the datapath.

  4. Where to find the next instruction

     An address or, to allow decisions to be made, a condition and a choice of addresses.
In order to put all the information necessary into a single instruction it would have the following format:

 Opcode | Destination | Source1 | Source2 | Condition | Next/True | Next/False
 Code   | Address     | Address | Address | Code      | Address   | Address
Not all instructions need to change the order of instruction execution, so Next/False can be a default: make it the instruction in the memory location after the previous one. We now need a special storage location inside the CPU (called a register) to store the default address of the next instruction. This register is called the program counter (PC). To save space we will split the instruction into two types

Type 1 Instructions: ALU operations

 Opcode | Destination | Source1 | Source2
 Code   | Address     | Address | Address

Type 2 Instructions: Control instructions

 Opcode | Condition | Next/True
 Code   | Code      | Address
 

The operation and the test of a condition now occur in different instructions, so there must be a way of holding information about the result of an operation until the condition is tested. This is usually done by having a special set of single bit storage locations inside the datapath called the flags. The flags are set by some or all ALU operations. Space can be saved by making the destination the same as one of the sources; this is often the case anyway, but where it is not, an extra operation is necessary to move the other source to the destination. Space can also be saved by allowing an operand to be a code which represents a temporary storage location within the datapath: a register. Registers are usually referred to by a symbolic name, e.g. A, B, AX, R1, R2 etc.

Operations may now be from memory to a register (Memory-Register operations), from a register to memory (Register-Memory operations) or from one register to another (Register-Register operations)

Type 1 Instructions: Register-Register ALU operations

 Opcode | Destination/Source1 | Source2
 Code   | Register            | Register

Type 2 Instructions: Control instructions

 Opcode | Condition | Next/True
 Code   | Code      | Address

Type 3 Instructions: Memory-Register Transfer instructions

 LD   | Destination | Source
 Code | Register    | Address

Type 4 Instructions: Register-Memory Transfer instructions

 ST   | Destination | Source
 Code | Address     | Register

Now let's design a computer from scratch!

First decide on the instruction set.
Let's have fixed length, 16 bit instructions, 4 general purpose registers and an ALU that performs 5 operations.
This is a list of all the types of instructions.
ADD rd,rs,rt  
SUB rd,rs,rt  
AND rd,rs,rt  
OR rd,rs,rt  
XOR rd,rs,rt  
LW rd,[rs+imm9]  
B dest  
BGT dest
ADD rd,rs,imm6  
SUB rd,rs,imm6  
AND rd,rs,imm6  
OR rd,rs,imm6  
XOR rd,rs,imm6  
SW rd,[rs+imm9]  
BE dest  
BLT dest
rd is the destination register (except for SW where it is the source) and rs and rt are source registers. imm9 and dest, are signed constants, imm6 is an unsigned constant.

B always branches (relative to PC+1), BE branches if the zero flag is set, BLT branches if the carry flag is set and BGT branches if carry is clear. The instructions are coded into a 16 bit word as follows (lsb on the right).
Possible values for type
0 = ALU    3 reg operands 
1 = ALUi   2 regs and an immediate 
2 = LW     load word from memory 
3 = SW     store word to memory 
4 = B      branch 
5 = BE     branch if equal 
6 = BLT    branch if less than 
7 = BGT    branch if greater than
 ALU:           type(3) | rd(2) | rs(2) | op(3) | rt(2) | 0(4)
 ALUi:          type(3) | rd(2) | rs(2) | op(3) | imm6(6)
 LW/SW:         type(3) | rd(2) | rs(2) | imm9(9)
 B/BE/BLT/BGT:  type(3) | dest(13)
The possible values for op are: 0 = ADD, 1 = SUB, 2 = AND, 3 = OR, 4 = XOR
Now we need some hardware - Here is the logic diagram for a simple machine capable of executing these instructions.

An instruction is executed in 5 stages, Fetch, Decode, Execute, Memory and Write.
The important parts of this diagram to notice are the ALU (Top Centre), the Instruction Register (Bottom Left), The Program Counter (Above and to the right of IR), General Purpose Registers (Centre), Data and Address Busses, Memory, and the Control Unit (Centre Right).

If you would like to see this machine in operation, try the following:

Create a temporary directory on the machine you are using.
Press shift and click here to download the logic simulator (save it in your temporary directory)
Do the same with this (also save it in the temporary directory)
Open a window with your temporary directory in it.
Execute log304.exe, this will unpack the program.
If you are NOT using Windows NT execute "log.exe", otherwise execute "logvga.bat".
After the program starts type:

Use  the < and > keys to zoom in and out, use the arrow keys to navigate, press 'g' to see the cpu working, magnify the keyboard and click on it to enter data.
Press 'Q' (capital Q), followed by 'y' to exit.
 

Registers

The following diagram is for a complete CPU: Fill in the locations of these registers.

Lecture 9

Assembly Language

Machine code is very hard to follow because it uses binary code to represent the instructions. To provide a more human friendly version of machine code, assembly language is used.

An assembly language statement is a line of text that translates into a single machine instruction.

Assembly Language is expressed in a more human readable form than the binary instructions and names are allowed for memory locations, registers, operations etc.

For example:

ADD [result],[coursework],[exam]

Example: translate the following C statement to assembly language and machine code.

x=y*(y+z);

Assume x,y and z are stored in memory locations 0,1 and 2 and there are general purpose registers called A,B,C...etc

The 3e,3f,8c,9f and 4e are binary codes that contain the operation and registers for the instruction. Note that I have just made these up and they have no real meaning. (See chapter 2 and appendix C of Brookshear for an example of a simple instruction set)

Example:

if (x==3) x++;
y++;

Example:

while (x--<4) {
  y+=x;
}

What about arrays and pointers?

A pointer is an address that can be stored in a register - it may point to the start of an array.

Example:

for(i=0;i!=100;i++) {
  score[i]=0;
}

Here is a small program that recompiles itself and generates an Assembly Language listing.

The first column is a line number, next is a byte offset, next is the machine code, then labels and assembler directives and finally the assembly language itself. Try changing the program to see what code it produces.

Click here to see the program that is run by the cpu in the last lecture.

Where does Assembly Language fit in compared with other programming languages?

________________________
Specification Languages
   Z, VDM
________________________
Declarative Languages
  Prolog
________________________
Functional Languages
  Lisp, ML
_______________________
High Level Imperative Languages
  Pascal, Ada, C++
_______________________
Low Level Imperative Languages
  C
_______________________
Assembly Language
_______________________
Machine Code
_______________________
Hardware

Lecture 10

Input/Output (I/O)

How does our machine communicate with the outside world?

Simplest Solution: Memory Mapped I/O

Connect up the memory so that accessing certain memory locations causes data to be transferred to or from the outside world.

      ______________
    0|              |
    .|              |
    .|    Memory    |
     |              |
     |______________|
a0000|              |
    .|    I/O       |
     |______________|
b0000|              |
    .|              |
    .|    Memory    |
     |______________|
This diagram shows which parts of the memory are used for storage and which for I/O, it is called the memory map.

For example, the screen on a PC is mapped into memory between address 0xa0000 and 0xaffff. Writing to one of these memory locations causes a dot to appear on the screen.

 ______________
|012345...     |
|              |
|   Screen     |
|              |
|______________|

Interrupts

Memory Mapped I/O is good for output but not for input if it is not known when the input will happen. Such input is called Asynchronous input and is very common.

For example: consider keyboard input. Using Memory Mapped I/O, there may be a memory location holding the ASCII value of the key currently being pressed. The CPU could keep looking at this memory location to see if an input has occurred (this is what my cpu in lecture 8 did).

Unfortunately, the CPU can now do nothing else because if it looked away from the keyboard I/O location, it may miss a key-press (in fact my cpu is so slow that it does sometimes miss key presses).

Better is if the CPU can be told when data is ready. The input device interrupts the CPU to tell it that data is ready.

Interrupt driven I/O

The control bus has an extra signal that can interrupt the CPU called the interrupt request (IRQ). In order for the CPU to identify which device caused the interrupt, each device must identify itself. On a PC this is done by an 8 bit number called the IRQ number. The device will also have memory mapped I/O.

I/O Controllers

For I/O that transfers large quantities of data, it is inefficient for the CPU to be interrupted for each word of data. In this case an I/O Controller can be used to perform the complete I/O operation. The Controller will have access to the Address and Data Bus and perform the I/O while the CPU is free for other things.

Common I/O Devices

Serial

The bits in the data are transferred one at a time. Speed is measured in Bits per Second (bps).

Parallel

Data is transferred as whole words.

Lecture 11

RISC/CISC

Two types of CPU are currently popular. At the moment RISC CPUs are faster and cheaper to make than CISC CPUs. CISCs are currently not far behind in performance and are more popular because of compatibility with previous CPUs.

Making the CPU faster.

How to measure the speed of a computer?
  1. Clock Rate
  2. Million Instructions Per Second (MIPS)
  3. Benchmarks
    1. Synthetic - test programs
    2. Real - real applications
At present, improvements in technology make the clock rates about twice as fast every two years. This is mainly due to increased miniaturisation - Clock 10, 20, 40, 80, 160, 320 MHz.

In fact CPUs are getting twice as fast every 1.5 years. The reason for this is improvements in Computer Architecture.

These techniques allow the CPU to perform operations in parallel - parallel processing or parallelism.

When parallelism is applied at the level of the workings of the CPU it is called fine grained parallelism.

Multiprocessing.

There is a limit to how fast a computer can be made by increased miniaturisation and fine grained parallelism. We have not yet reached that point but many people think we soon will. In order to make machines faster still, coarse grained parallelism is necessary. This type of parallel processing involves using multiple CPUs in one computer.

Summary of topics covered so far

Lecture 12

Data Structures

Computer Science is the study of complex systems. There are two types of model that can be used to describe a complex system.
  1. Data Models
  2. Functional Models
Anything but a trivial problem will involve both types of model. So far we have only looked at functional models (in the form of algorithms).

To solve a problem, it is vital that both types of model are considered.

C data types:

These data types have two properties associated with them:
  1. how they are stored.
  2. the operations that are permitted on them.
For a new data type we will have to provide both of these properties.

The basic data types in C are extremely primitive (in abstract terms), but the language allows new types to be defined using typedef and struct e.g.

 

Pointers

In order to understand data types in C, it is vital to understand pointers. All data is stored in memory; the location of the data in memory is called the address of the data. An address is a number, and this number can be stored just like any other - this is a pointer.

In C, a pointer is defined by putting a * after the name of any other data type.
e.g

        int *i;  // i is an "int *" i.e. a pointer to an int.
        char *c; // c is a "char *" i.e. a pointer to a char.
        student *st;  // st is a pointer to a student;
Initially the pointer does not hold the address of any data; it must be set to an address by:
  1. Allocating space in the memory for the data using malloc.
  2. Assigning the pointer to an address that contains valid data, e.g. using the & operator or from another pointer.
        char *name="martin";
        int *x;
        int j;
        char *c;

        x=&j;
        c=(char *)malloc(100);
Pointers can be manipulated in the same way as integers. In C it is common to use a pointer to reference a structure, e.g.
 

Arrays

An array is an indexed collection of data with a fixed size.

Pointers are used to implement arrays by storing the data sequentially in memory. The pointer holds the address of the first item in the array. To access the other items, an offset is added to the pointer.

Multidimensional arrays are stored by a mapping from a one dimensional array, or by using an array of pointers.

Lists

Arrays have a fixed size and shape; other data structures do not. These can be created using pointers and structs.

A list is an ordered collection of data items. There are a number of important operations on lists.

  1. Insertion - Add an item to the list
  2. Deletion - Delete an item from the list
  3. Lookup - Find an item in the list
  4. Retrieve - Return the n'th item in the list
  5. Head - Return the first item in the list
  6. Tail - Return the last item in the list
At first sight it seems that a list can always be stored using an array. This is true if the list is small and we know that it will not grow beyond a certain size.

Often, a list must be kept in order. In this case, adding and removing items will be time consuming and may involve moving many other list items.

A better representation is a Linked List:

Lecture 13

Linked Lists

Rather than use a single contiguous block of memory for the list, use an individual block for each item. Memory is allocated using malloc. In order to connect the items into a list they must be linked. This is done using a pointer within each item that points to the next item. A NULL pointer (which has the value 0 in C) is used as the pointer from the last item.

To insert an item into the list, change the pointer of the item before it to point to the new item, and make the new item's pointer point to the next item.

To remove an item, change the pointer of the item before it to point to the item after it.

Let's write a program in C to manipulate a list.

First some declarations:

#include <stdio.h>
#include <stdlib.h>   /* for malloc and free */
#include <string.h>   /* for strcpy and strcmp */

#define newptr(x,y) (x *)malloc(sizeof(x)*(y))

typedef struct _list_item {
         char name[64];
         struct _list_item *next;
} list_item;

typedef list_item * list;
typedef list_item * item_ptr;
The 'newptr' macro is just an easy way of using malloc.

The structure will be used to hold a list item. In this case each item is a name containing up to 63 characters.

The typedefs will make the rest of the program easier to follow.

Now a function to add a name to the list so that the list stays in alphabetical order:

list add_to_list(list l, char *name)  {
   item_ptr before;                       // the item before the one we are adding
   item_ptr new_item = newptr(list_item, 1); // allocate space for the new item

   strcpy(new_item->name,name);           // and copy the name to it.

   if(l==NULL || strcmp(name,l->name)<0) {// do we need to put it at
      new_item->next=l;                   // the start of the list?
      return new_item;                    // if so, it becomes the new head pointer.
   }
   before=l;
   while(before->next && strcmp(name,before->next->name)>0) 
      before=before->next;      // find the item before the one we are adding

   new_item->next=before->next; // change the pointers
   before->next=new_item;

   return l;      // head pointer does not change.
}
Memory for the new list item is allocated using malloc and the name is copied into the new structure. If the new item is to be put into the list at the start, this is done by setting its 'next' pointer to be the rest of the list and returning a pointer to it as the new start of the list. Otherwise it is necessary to find the items in the list before and after the one we are inserting and change the pointers as described earlier.

Now a function to delete a name from the list:

list delete_from_list(list l, char *name)  {

   item_ptr delete;   // the item we will delete
   item_ptr before;     // the item before it

   if(!l) return NULL; // if the list is empty do nothing

   if(!strcmp(l->name, name)) { // if the item to delete is the first one
      delete=l;                 // change the head pointer to skip over it
      l=l->next;                
      free(delete);             // free its memory
      return l;                 // return the new head pointer
   }       

   before=l;
   while ( before->next && strcmp(before->next->name, name) != 0 )
      before =  before->next;   // find the item before the one we want to delete

   if(before->next)  {        // if it has been found
      delete=before->next;   
      before->next = delete -> next; // change pointers
      free(delete);        // free memory
   }
   return l;   // head pointer does not change
}
If the item to be deleted is at the top of the list then we delete it and return a pointer to the rest of the list.
If not, we locate the item before the one to be deleted, then the pointer from this is made to point to the item after the one to be deleted. In both cases, memory for the deleted item is freed.

Now to print the entire list:

void print_list(list l)  {
   puts("The list is:");
   while(l)  {
      printf("%s\n",l->name);
      l=l->next;
   }
}
For each item in the list, the printf statement is used to print it. (Note that while(l) executes the loop until l is NULL.)

Now a main function to test our list.

int main() {
   list l=NULL;

   l=add_to_list(l,"Florence");
   l=add_to_list(l,"Dougal");
   l=add_to_list(l,"Dylan");
   l=add_to_list(l,"Zebedee");
   l=add_to_list(l,"Mr Rusty");
   l=add_to_list(l,"Mr McHenry");
   l=add_to_list(l,"Brian");

   print_list(l);

   l=delete_from_list(l,"Brian");
   l=delete_from_list(l,"Dylan");
   l=delete_from_list(l,"Zebedee");
   l=delete_from_list(l,"Dougal");
   l=delete_from_list(l,"Martin");

   print_list(l);
}
The complete program is:
 
When run, the program produces the following output:

Lecture 14

Stacks and Queues

Stacks

A stack is a special case of the list data structure. In a list, items may be added or removed from any point in the list; in a stack items may only be added or removed from the head of the list. The head of the list is often called the top of stack (TOS). The item on the top of the stack will always be the last item to have been put onto the stack. For this reason a stack is also called a last-in-first-out data structure (LIFO).

The term used for adding an element to the stack is push and for removing an element it is pop.

Here are push and pop functions for our example:
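The original push and pop functions are not reproduced in these notes; the following is a sketch using the list_item structure from the linked list example (the buf argument to pop, for returning the popped name, is an assumption):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* same structure as the linked list example */
typedef struct _list_item {
    char name[64];
    struct _list_item *next;
} list_item;

typedef list_item *list;

/* push: add an item at the head of the list (the top of stack) */
list push(list l, char *name) {
    list_item *new_item = (list_item *)malloc(sizeof(list_item));
    strcpy(new_item->name, name);
    new_item->next = l;   /* the old top becomes the second item */
    return new_item;      /* the new item is the new top         */
}

/* pop: remove the head item, copying its name into buf;
   returns the new top of stack (NULL if the stack was empty) */
list pop(list l, char *buf) {
    list_item *old = l;
    if (l == NULL)
        return NULL;      /* empty stack: nothing to pop */
    strcpy(buf, l->name);
    l = l->next;
    free(old);
    return l;
}
```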

Stacks are very important in computer science because they allow a context to be saved. Stacks are so important that most instruction sets include simple stack operations. For example consider the following program:
void functionB()  {
....
  functionC();
....
  functionD();
....
}

void functionA() {
....
  functionB();
....
}
Here .... means that the program is filled out with more C statements. When A calls B, the variables that are active in A must be remembered; this is easily done using a stack frame. A stack frame is a number of stack items used to hold the context for a function while it calls another one. The use of stack frames allows a program to be recursive (i.e. functions may call themselves).

Lecture 15

Queues

A queue is a list in which all additions are made at one end and all deletions are made at the other end. The first item put into the queue will always be the first one out, so a queue is a first-in-first-out (FIFO) data structure. In order to implement a queue, we can use the stack pop function together with another function that adds an item to the end of the queue. To simplify adding an item to the end of the queue we will use another pointer that always holds the address of the last item - the tail pointer.

New definitions:

A function to initialise the list will now be used. Pop will change slightly because it now has to keep track of the tail pointer. We also need a function to add to the end of the queue.
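The code itself is missing from these notes; a sketch of a linked-list queue with a tail pointer (the function names and the use of global head and tail pointers are assumptions):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct _list_item {
    char name[64];
    struct _list_item *next;
} list_item;

typedef list_item *list;

static list head = NULL;   /* front of the queue          */
static list tail = NULL;   /* last item, for fast appends */

void init_queue(void) {
    head = tail = NULL;
}

/* add an item at the tail of the queue */
void add_to_queue(char *name) {
    list_item *new_item = (list_item *)malloc(sizeof(list_item));
    strcpy(new_item->name, name);
    new_item->next = NULL;
    if (tail == NULL)
        head = new_item;       /* queue was empty */
    else
        tail->next = new_item;
    tail = new_item;
}

/* remove the item at the head; returns 0 if the queue was empty */
int remove_from_queue(char *buf) {
    list_item *old = head;
    if (head == NULL)
        return 0;
    strcpy(buf, head->name);
    head = head->next;
    if (head == NULL)
        tail = NULL;           /* queue is now empty again */
    free(old);
    return 1;
}
```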



Queues are useful in programs where one part of the program (the producer) produces data and another part (the consumer) uses that data. If the producer can produce data faster than the consumer can use it then a queue will be necessary to hold unconsumed data. When used in this way, a queue is often called a buffer because it shields one part of the program from the effects of another.

Implementing Stacks and Queues Using Arrays.

Pointers are not the only way of implementing stacks and queues. If the maximum size of the stack or queue is known and is reasonably small, then an array can be used. An integer variable holds the index of the next element in the array to be used for a stack item. Here the bottom of the stack is s[0].





#define N 100   /* maximum number of items */

char *stack[N];


int top_of_stack=0;

int push_name(char *name) {
  if (top_of_stack==N)
    return 1;
  stack[top_of_stack]=malloc(strlen(name)+1);
  strcpy(stack[top_of_stack++],name);
  return 0;
}

char *pop_name() {
  if(top_of_stack==0)
    return "";
  return stack[--top_of_stack];
}
To push an item, the item is copied into the array at the position given by the top of the stack. The top of the stack is then incremented.

To implement a queue using an array we use integers to hold the index of the head and the tail.

Note that the queue may 'wrap around' so that the head is before the tail in the array.

char *queue[N];

int head=0;
int tail=0;


int add_to_head(char *name) {
  if ((head+1)%N==tail)
    return 1;
  queue[head]=(char *)malloc(strlen(name)+1);
  strcpy(queue[head],name);
  head=(head+1)%N;
  return 0;
}
char *remove_from_tail() {
  char *name;
  if(head==tail)
    return NULL;
  name=queue[tail];   // take the item at the tail
  tail=(tail+1)%N;    // then advance the tail index
  return name;
}
Here is a program that implements both a stack and a queue using an array.

Lecture 16

A doubly linked list

This type of linked list contains links going backwards and forwards.

A doubly linked list allows items to be added and removed from the start and the end of the list.

Trees

A tree is a data structure in which items are organised in a hierarchy. An item in a tree is called a node. Trees are usually drawn 'upside down' with the branches at the bottom. The top of the tree is called the root node.

Each node (except the root node) has exactly one node connected above it - its parent. A node can have any number of nodes connected below it - its children.

A node has one parent and zero or more children. Each node is itself the root of a tree comprising all the nodes below it; this is called a subtree.

A tree in which a node may have many children is difficult to implement - each node must store a list of its children. A special case of a tree is a Binary Tree (this is not the same as a B-tree) - each node may have up to two children. The two children for each node are called the left and the right child.

Often a program must visit all the nodes in a tree; this is called traversing the tree. It is important to know the order in which nodes are traversed. There are three simple ways of traversing a tree.
 

  1. Inorder traversal
    1. traverse the left subtree using inorder traversal
    2. visit the root node
    3. traverse the right subtree using inorder traversal
  2. preorder traversal
    1. visit the root node
    2. traverse the left subtree using preorder traversal
    3. traverse the right subtree using preorder traversal
  3. postorder traversal
    1. traverse the left subtree using postorder traversal
    2. traverse the right subtree using postorder traversal
    3. visit the root node
For the tree given earlier, each traversal visits the nodes in a different order. Time to write some C.
#include <stdio.h>
#include <stdlib.h>   /* for malloc */
#include <string.h>   /* for strcmp and strcpy */

#define newptr(x,y) (x *)malloc(sizeof(x)*(y))

typedef struct _node {
  char name[64];
  struct _node *left;
  struct _node *right;
} node;

typedef node *tree;
The structure contains pointers to the left and right subtrees. A function using inorder traversal prints every node in the tree. Trees are very useful if the nodes are sorted, and the search function assumes that the nodes are ordered. The tree is ordered so that the nodes would be visited in alphabetical order if they were traversed using inorder traversal. Another function adds a new node to an ordered tree. Finally, the main part of the program tests our data structure.
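The functions themselves are not reproduced in these notes; a sketch of what they might look like, using the node structure above (repeated here so the sketch is self-contained; the function names are assumptions):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct _node {
    char name[64];
    struct _node *left;
    struct _node *right;
} node;

typedef node *tree;

/* print every node using inorder traversal: left subtree,
   then the node itself, then the right subtree */
void print_tree(tree t) {
    if (t == NULL)
        return;
    print_tree(t->left);
    printf("%s\n", t->name);
    print_tree(t->right);
}

/* search an ordered tree: at each node only one subtree
   needs to be searched */
tree search_tree(tree t, char *name) {
    int c;
    if (t == NULL)
        return NULL;              /* not found */
    c = strcmp(name, t->name);
    if (c == 0)
        return t;                 /* found it */
    return search_tree(c < 0 ? t->left : t->right, name);
}

/* add a new node so that the tree stays ordered */
tree add_to_tree(tree t, char *name) {
    if (t == NULL) {              /* empty spot: put the node here */
        t = (node *)malloc(sizeof(node));
        strcpy(t->name, name);
        t->left = t->right = NULL;
    } else if (strcmp(name, t->name) < 0)
        t->left = add_to_tree(t->left, name);
    else
        t->right = add_to_tree(t->right, name);
    return t;
}
```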
 
The tree that is built by this program is the following:

We can see that it is much quicker to find a piece of data in this tree than in the list. This is because at each node only one subtree needs to be searched.

For this reason trees are commonly used in databases.

One disadvantage of this simple binary tree is that the order of insertion determines the tree structure. If the nodes are inserted in alphabetical order, then the tree degenerates into a linked list. In practice the tree would sometimes need to be reorganised (balanced); such a tree is called a balanced tree.

Lecture 17

Using data structures to store information.

When a data structure, such as a tree or linked list is used to store information, the data is split into two parts.
  1. Key Data
  2. Non Key Data
Key data is the information used to order the data; non-key data is the rest. The key should be chosen carefully so that it uniquely identifies the data. Examples:
 
Key            Non Key
Student ID     Name, Address, Marks
Course Code    Name, Department, Campus
License Plate  Make, Model, Colour
Using a data structure such as a tree, it is easy to search for data using the key. We can still perform searches based on non-key data, but this is more time consuming because the whole tree must be searched.

Hash Tables

It is often the case that data must be stored so that searching is very fast. The response time of a system is often determined by the speed at which it can retrieve data.

A tree is an efficient data structure, but it is not the fastest possible method for data retrieval. The fastest method would be to use an indexed array. If the whole key is converted into a number, then that number can be used as the index into an array, and searching simply involves looking up data in the array. If there are only a limited number of values for the key then this can be useful; unfortunately, this is rarely possible.

A hash table is a combination of an indexed array and a linked list. The key is used to generate a number (called the hash value); this number will not be unique to every key. The function used to generate the number from the key is called the hashing function. Since the hash value is not unique, there will be a number of keys for each hash value; these are stored as a linked list. The hash value is used to index the array and then the linked list for that value is searched.

For example:

A simple hash function for words may be the ASCII value of the first character. The table will have 256 entries and each linked list will be a list of words which have the same first character.

The efficiency of hashing depends on how evenly the hash function distributes keys over the hash values. Our simple example would not be very efficient because there are many more words starting with 't' than with '&'. A better hash function might be to add together all the characters in the key and use this value modulo 256.
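A sketch of that improved hash function (the name hash is an assumption):

```c
#include <stdio.h>

/* sum the characters of the key, modulo the table size (256) */
unsigned int hash(char *key) {
    unsigned int h = 0;
    while (*key)
        h += (unsigned char)*key++;
    return h % 256;
}
```

The returned value would index an array of 256 linked-list heads. Note that keys made of the same letters in a different order still collide; better functions also mix in the position of each character.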

Abstract Data Types.

In C we can define new types using 'typedef struct'. This only creates part of the type - how the type is stored. Along with this we must also write functions to perform the operations of the data type.

Some languages provide a more complete way of creating new types - Abstract Data Types (ADTs). An ADT comprises both the storage of the type and the operations allowed on the type. Languages such as Ada, C++, Haskell and Java allow ADTs. In C we must use functions.

Example: Complex numbers:

This program contains a data structure and some functions that perform operations on it. These definitions allow us to use complex numbers without having to worry about how to perform arithmetic on them. The main program performs a Fourier transform; it would be considerably longer and harder to follow without the ADT.
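The program itself is not reproduced in these notes; a minimal sketch of what the complex number ADT might look like (the names are invented, and only addition and multiplication are shown):

```c
#include <stdio.h>

/* storage for the type */
typedef struct {
    double re;   /* real part      */
    double im;   /* imaginary part */
} complex_t;

/* operations on the type */
complex_t c_add(complex_t a, complex_t b) {
    complex_t r;
    r.re = a.re + b.re;
    r.im = a.im + b.im;
    return r;
}

complex_t c_mul(complex_t a, complex_t b) {
    complex_t r;
    /* (a+bi)(c+di) = (ac-bd) + (ad+bc)i */
    r.re = a.re * b.re - a.im * b.im;
    r.im = a.re * b.im + a.im * b.re;
    return r;
}
```

Code that uses complex numbers then calls c_add and c_mul without needing to know how the arithmetic is carried out.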

Summary

Lecture 18

Algorithms

Algorithms are fundamental to computer science. An algorithm is an abstract description of how a task is to be accomplished. Algorithms are independent of hardware and programming language.
 

Describing Algorithms

Using English

English is good for many things but expressing algorithms is not one of them.

Using a Real Programming Language

Not a bad idea, but an algorithm should be independent of any specific language.

Using Pseudocode

Pseudocode is a kind of simple, imaginary programming language: you can make up new extensions to the language and don't need to worry about where to put semicolons.

Sequence, Selection and Iteration.

A sequence is a series of operations to be performed one after the other. In C, sequences are enclosed in {}.

A selection is a way of choosing between two or more sequences, based on a condition. In C, selection is achieved using:
 
 

or the switch construct.

Iteration is the repetition of a sequence. In C, this is achieved using while and for loops.

Surprisingly, these three forms are sufficient for constructing any algorithm. If it is possible to construct an algorithm to describe a particular process then such an algorithm can be constructed from sequence, selection and iteration alone.

Example:

Find the factorial of a number.

n factorial (n!) is defined as the product of all the positive integers up to and including n; 0! is defined as 1. The factorial of a negative number is undefined.
 
 

Where := means "is defined as". This can be evaluated by the following algorithm (expressed in C).
 
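The C code is missing from these notes; a sketch of the iterative algorithm:

```c
#include <stdio.h>

/* iterative factorial, using only sequence, selection and
   iteration; assumes n >= 0 (negative factorial is undefined) */
long factorial(int n) {
    long result = 1;
    int i;
    for (i = 2; i <= n; i++)   /* 0! and 1! fall through as 1 */
        result = result * i;
    return result;
}
```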
Functions using only sequence, selection and iteration are called iterative.

Recursion

Another way of describing the factorial function is recursively: 0! = 1, and n! = n * (n-1)! for n > 0. This is called a recursive definition because the function is defined in terms of itself. A recursive definition consists of two parts - the end or terminating case (also called the anchor or base) and the recursive or general case. The end case usually gives a value for a specific argument; this allows the recursion to terminate eventually. Here, the definition of fact(0) is used to terminate the recursion.

A non-mathematical example of a recursive definition is that of ancestor.
 
 

The following uses a recursive definition of the factorial function:
 
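The code is missing from these notes; a sketch of the recursive version:

```c
#include <stdio.h>

/* recursive factorial; assumes n >= 0 */
long factorial(int n) {
    if (n == 0)
        return 1;                    /* end case     */
    return n * factorial(n - 1);     /* general case */
}
```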

Comparing this with the iterative version, we see that the recursive version seems more natural and straightforward. It is also more readable.

However, the iterative version will execute faster and use up less storage space.

The reason for this is that, for each recursive call, the arguments and local variables must be saved (by pushing them onto a stack). So, for example, the call factorial(10) will generate 10 recursive calls to factorial. When a call is completed, the arguments and local variables from the calling function must be popped from the top of the stack. All this pushing and popping takes time and occupies storage.

Often it is necessary to choose between a simple, compact and natural recursive algorithm and an iterative one which runs faster and uses less space but is less readable and more difficult to develop.

Using recursion is also appropriate when the data structure to be manipulated is defined recursively. An example of such a structure is a binary tree. To search a binary tree we used a recursive function.

The Towers of Hanoi.

A classic example from the world of recursion is the Towers of Hanoi problem. The problem is to move a pile of disks from one of three pins to another. What is the sequence of moves for a given number of disks?

This is an example of a problem that is very difficult to solve by iteration. Call the pins A,B and C where A is the start pin and C is the end pin.

Suppose there is just one disk. This can be moved directly from A to C. Next suppose there are three disks on A, and that we can transfer the top two disks from A to B using C. We can then move the third disk from A to C. It remains only to transfer the two disks from B to C using A.

We have thus reduced the problem to transferring two disks from one pin to another. This in turn can be reduced to moving one disk from one pin to another, which we know how to do. The recursive solution for n disks is
 
 

The following is a C function for this operation.
void transfer(int n, char start, char end, char work)  {
  if (n > 0)  {
    transfer(n-1, start, work, end);
    printf("Move disk from %c to %c\n",start, end);
    transfer(n-1, work, end, start);
  }
}
When called as: The function prints:

Lecture 19

Towers of Hanoi Continued,
void transfer(int n, char start, char end, char work)  {
  if (n > 0)  {
    transfer(n-1, start, work, end);
    printf("Move disk from %c to %c\n",start, end);
    transfer(n-1, work, end, start);
  }
}
When called for three disks, the execution profile of transfer is: The function prints:


Another example: decimal to binary

Consider the problem of printing an unsigned integer in binary. In the very first lecture we did this by finding the largest power of two smaller than the number and printing 0s and 1s for all the powers of two present. We can use recursion to do this much more elegantly. The end case is when n=0.

Our first attempt might be:

but this will not work for n=0; the end case should instead be when n=0 or n=1:
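The corrected function might look like this (a sketch; the string-building variant is included only so the result can be checked):

```c
#include <stdio.h>
#include <string.h>

/* the end case is n==0 or n==1: print the leading digits (n/2)
   first, then the last digit (n%2) */
void print_binary(unsigned int n) {
    if (n > 1)
        print_binary(n / 2);
    printf("%u", n % 2);
}

/* the same recursion writing into a string instead of stdout;
   buf must initially contain a valid (e.g. empty) string */
void binary_string(unsigned int n, char *buf) {
    if (n > 1)
        binary_string(n / 2, buf);
    sprintf(buf + strlen(buf), "%u", n % 2);
}
```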
In this program we have replaced a 12 line iterative function by a 3 line recursive one.

Sorting

This is one of the simplest sorting algorithms - the bubble sort. An array is sorted by repeatedly passing through it, swapping adjacent elements that are out of order.
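The code is not shown in these notes; a sketch of the bubble sort:

```c
#include <stdio.h>

/* bubble sort: keep passing through the array, swapping adjacent
   out-of-order elements, until a pass makes no swaps */
void bubblesort(int n[], int count) {
    int i, t, swapped = 1;
    while (swapped) {
        swapped = 0;
        for (i = 0; i < count - 1; i++)
            if (n[i] > n[i + 1]) {
                t = n[i];          /* swap n[i] and n[i+1] */
                n[i] = n[i + 1];
                n[i + 1] = t;
                swapped = 1;
            }
    }
}
```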

Quicksort

Quicksort sorts an array using a recursive technique.
int n[10]={53,12,98,63,18,72,80,46,32,21};
53
12
98
63
18
72
80
46
32
21
Consider the problem of sorting the array given above

We attempt to 'partition' the elements in the array with respect to the first element, 53 ( this element is usually referred to as the pivot). This means that we try to put 53 in such a position that all elements to its left are smaller and all elements to its right are larger. If this is done, then the 53 must be in its final position in the sorted array. For example, one method of partitioning might produce:

21
12
32
46
18
53
80
72
63
98
If we sort the portions to the left and right of 53, we will have sorted the entire array. Sorting the original array has been reduced to sorting two smaller arrays. To sort n[0] to n[4], we can partition this portion with respect to the first element, 21. This puts 21 in its final sorted position, leaving us to sort two smaller arrays. This is illustrated by:
 
18
12
21
46
32
53
80
72
63
98
The portions to the left and right of each pivot remain to be sorted. The process is continued until all the little pieces have been sorted.

This sort is easily written as a recursive function.

void quicksort(int n[], int left,int right) {
  int dp;
  if (left<right)  {
    dp=partition(n,left,right);
    quicksort(n,left,dp-1);
    quicksort(n,dp+1,right);
  }
}
The function quicksort, when given two values left and right, sorts the array from element left to element right inclusive. It calls the function 'partition' to partition the elements with respect to some pivot. 'partition' returns the new position of the pivot (called the division point - dp). Now we need to write the partition function.

Assume the first element (n[left]) is the pivot.

  1. Set pivot to the first element.
  2. Scan from the right for an element that is smaller than the pivot.
  3. Scan from the left for an element that is larger than the pivot.
  4. Swap these two elements.
  5. Repeat from step 2 until the scans meet.
  6. Swap the element at the division point with n[left].
Consider the array n from earlier;
 
53
12
98
63
18
72
80
46
32
21
After this partition the array looks like this:
 
46
12
21
32
18
53
80
72
63
98
The following function implements the above algorithm.
int partition(int n[],int left,int right)  {
  int lo,hi,pivot,t;
  pivot=n[left];
  lo=left-1;
  hi=right+1;
  while(lo+1!=hi)  {
    if(n[lo+1]<=pivot)
      lo++;
    else if(n[hi-1]>pivot)
      hi--;
    else {
      t=n[lo+1];
      n[++lo]=n[hi-1];
      n[--hi]=t;
    }
  }
  n[left]=n[lo];
  n[lo]=pivot;
  return lo;
}
Here is a program to test the bubblesort and quicksort.

If you have a browser with Java - here
is a graphical comparison of four sorting algorithms.

 

Lecture 20

Programming Languages

Generations

First generation:

1940

The first computers had to be programmed directly in machine code. Binary numbers were placed in the memory by setting switches.

1950

Second Generation:

By 1950, computers were being programmed in assembly language. As we saw earlier, one line in Assembly Language is translated into one machine instruction by a program called an assembler. At this time, computers were so expensive and slow that this was the only feasible method of programming.

Programs were completely machine dependent and writing a program was a very time consuming task. In order to simplify the writing of programs, small sections of commonly used code were collected together into libraries.

There would be libraries for maths and input/output operations, amongst other things. The program responsible for combining a program and the libraries is called the linker.

To run the program, another program called the loader was used to put the program into memory and start it (something like the main function of assignment 2).

The original program is called the source code. The program before linking is called the object code. The program after linking is called the executable.

1955-

Third Generation:

This generation of languages is machine independent and has to be translated into machine language by a translator. The translator takes high level primitives and compiles a sequence of machine instructions to perform the required function. The translators are often called compilers.

These languages are called high level languages. A program written in a high level language will almost certainly need to be linked with libraries.

Source code --Translate--> Object code --Link--> Executable --Load--> Running Program.

C is an example of a third generation compiled language.

Another approach to executing a program is called interpretation. A program called an interpreter translates the source directly, by emulating a machine which has the source language as its machine code. In assignment 2 you wrote an interpreter for a RISC machine.

Interpretation is often considerably slower than compilation because translation must be performed each time the program is run. It is possible to write a program that is a combination of interpreter and compiler. This is sometimes called a Just In Time (JIT) compiler because it compiles sections of the program as it is being executed, i.e. just in time.

Java is an example of an interpreted language.

Fourth Generation

There are many definitions of fourth generation languages. In general it is a language that requires no programming knowledge to use. A database package may allow the development of a complex application without the need to write any source code. This is done by choosing from various pre-designed options and specifying what is to happen for particular events (sometimes called triggers).

Fifth Generation

The term fifth generation is used to refer to declarative programming languages. These languages concentrate on specifying the problem rather than on how to solve it. From a good specification, the machine should be able to find a method to solve the problem itself. This is a rapidly developing area and, although declarative languages are rather inefficient at the moment, expect this to change.

Paradigms

Describing languages using the idea of generations works badly after the third generation because in practice there has been a split into a number of programming methodologies (called programming paradigms).

The Imperative Paradigm

Machine Languages
FORTRAN
COBOL
ALGOL
BASIC
C
PASCAL
ADA
Imperative languages use sequence, selection and iteration to write programs that tell the machine how to perform a task. The following program solves the 8 queens problem - how can you arrange 8 queens on a chess board so that they don't attack one another? Don't worry about the algorithm, just notice that the program uses sequence, selection, iteration and recursion.

The Object-Oriented Paradigm

SIMULA
Smalltalk
C++
Java
Object-oriented languages force the programmer to concentrate on abstract data types. This is done by having imperative code associated with variables (called objects) and by having a hierarchy of variable types (called classes).
The following program is written in C++; it finds a single solution. Notice that 'queen' is a class that holds information about a single queen. The class also has member functions to move a queen ('advance'), print a list of queens ('print') and check whether it attacks another square. The solution is stored in a linked list. You will find out more about Object Oriented Languages in 59201 and 59234.

The Functional Paradigm

LISP
ML
Scheme
Haskell
Functional languages have no variables and use recursion to perform computation. The advantage of functional languages is that it is easier to prove a functional program correct than an imperative one. The following solution to 8 queens is written in Haskell. Notice that queens is a recursive definition: qu 0 is an empty list and qu (m+1) is defined in terms of qu m; 'safe' checks for all attacks and 'check' checks for a single attack.
You will find out more about Haskell in 59202 and 59301.
Source Code
Query


The Declarative Paradigm

PROLOG
A declarative or logic programming language is one that is based on a subset of mathematical logic. The computer is programmed to infer relationships between values rather than to compute output values from input values.
The following solution to 8 queens is written in Prolog. Understanding Prolog programs is difficult if you are used to imperative languages, because the program does not say how the solution is to be found. A program is just a list of facts (rules); the Prolog interpreter 'runs' the program by trying to find values that make the query true using the rules.
You will find out more about Prolog in 59202 and 59301.



 

Lecture 21

Syntax and Semantics

Any language, computer or human, has these two components. The syntax of a language is a set of rules about how valid sentences are constructed. The semantics of a language concerns the meaning of a sentence. Semantics are very difficult to describe, but we can easily describe the syntax of a language.

The syntax of a language is described by a grammar. Grammars must be precise and unambiguous so there is no possibility of misinterpretation.

BNF

One method for describing the syntax of a language is called Backus-Naur Form (BNF).

BNF consists of a set of rules called production rules, each rule has the form:

<symbol> := <symbol1>|<symbol2><symbol3>|...

Parts of a BNF production rule:

:=  meaning "is defined as"
|   meaning "or"
< > angle brackets used to surround symbols.
The angle brackets distinguish syntax rule symbols (also called non-terminal symbols) from terminal symbols which are written exactly as they are to be represented. A BNF rule defining a nonterminal has the form:
nonterminal := sequence_of_alternatives consisting
               of strings of terminals or nonterminals
               separated by the meta-symbol |
For example, a BNF production rule for a mini-language might be:
<program> :=  program
                   <declaration_sequence>
               begin
                   <statements_sequence>
               end ;
This shows that a mini-language program consists of the keyword "program" followed by the declaration sequence, then the keyword "begin" and the statements sequence, finally the keyword "end" and a semicolon.

Recursive rules are used to define sequences. '..' may be used to simplify long lists of terminal symbols.

<letter> := a..z|A..Z
<digit> := 0..9
<alnum> := <letter>|<digit>
<id> := <letter>|<id><alnum>
This says that an id is a single letter or a letter followed by any number of letters or digits.

Now here is the definition of BNF expressed in BNF (non-terminals are shown in italics and terminals in bold underline to avoid confusion):

<rule> := <<id>> := <expression>
<expression> := <term> | <term> '|' <expression>
<term> := <<id>> | <terminal> | 
          <term><<id>> | <term><terminal>
BNF is not only important for describing syntax rules in books, it is very commonly used (with variants) by syntactic tools. See for example any book on LEX and YACC, the standard UNIX parser generators.
Most Prolog interpreters have an extension that allows you to write BNF grammars. The syntax is slightly different from the one given above: a terminal is enclosed in square brackets, terms are separated by commas, and --> is used instead of :=
i.e.
<rule> := <id> --> <expression>
<expression> := <term> | <term> ',' <expression>
<term> := <id> | [<terminal>] | 
          <term><id> | <term>[<terminal>]


The BNF rules for a simplified C language:
<program> := <decls>|<functiondef>|<program><decls>|
             <program><functiondef>

<all_stmts> := <stmtlist><stop_stmt>|<stmtlist>

<stmtlist> := <stmtlist><stmt>|<stmt>
<stmt> := <compstmt>|<expr>';'|
         if(<condexpr>)<stmt>|
         if(<condexpr>)<stmt>else<stmt>|
         while(<condexpr>)<stmt>|
         do<stmt>while(<condexpr>);|
         for(<exprseq>;<condexpr>;<exprseq>)<stmt>|
         switch(<val>)<casestmts>|;
<compstmt> := {<all_stmts>}|
         {<decls><all_stmts>}
<type> := int|char|unsigned|unsigned char|
          float|short|long|unsigned short|
          unsigned long|double|<usertype>|
          struct <structname> {<decls>}|
          <type> *
<typedef> := typedef <type> <usertype>
<functiondef> := <type> <id>(<args>)<compstmt>|
                 <type> <id>()<compstmt>
<args> := <type> <id>|<args>,<type> <id>
<decls> := <type> <id>;|<type> <id>;<decls>
<exprseq> := <expr>|<exprseq>,<expr>
<stop_stmt> := break;|continue;|
               return;|return <val>;|return(<val>);
<casestmts> := {<cases><default>}
<cases> := <cases><case>|<case>
<case> := case <const>:<stmtlist><stop_stmt>
<default> := default:<stmtlist><stop_stmt>
<expr> := <rhs>|<modify_expr>
<call_expr> := <id>(<arglist>)
<arglist> := <arglist>,<val>|<val>
<modify_expr> := <varname>=<rhs>|*<id>=<rhs>
<rhs> := <binary_expr>|<unary_expr>
<unary_expr> := <simp_expr>|*<id>|&<varname>|
       <call_expr>|<unop><val>|(<type>)<varname>
<binary_expr> := <val><binop><val>

<unop> := +|-|~|!
<binop> :=  -|+|/|*|%|&|&&||||||^
<relop> := <|<=|>|>=
<condexpr> := <val>|<val><relop><val>
<simp_expr> := <varname>|<const>

<varname> := <idlist>.<id>|<id>
<arrayref> := <id><reflist>
<reflist> := [<val>]|<reflist>[<val>]
<idlist> := <id>|<idlist>.<id>

Lecture 22

Theory of Computation

A problem is solvable by a computer if a program can be written to solve instances of the problem.

As you know, this program must be based on an algorithm. When talking about the solvability of problems we want to use the idea of algorithms rather than programs, since we want to simplify the situation as much as possible. We do not want to make things more difficult by including issues such as which hardware or programming language to use.

The idea of an algorithm that we have used up to now has not been formalized. We have seen examples of algorithms such as quicksort and binary search but we have not even defined what an algorithm is.

The idea of algorithms has existed for thousands of years, the ancient Greeks discovered algorithms for finding prime numbers and common denominators. For many years people also thought that every problem could be solved (given enough effort) by an algorithm.

The German mathematician, David Hilbert said:

"In mathematics there is nothing that cannot be known."

In 1928, he proposed a problem called the Entscheidungsproblem: find an algorithm which, given any statement in a formal system, would determine whether it was true or false.

In the 1930's, research showed that this problem is not computable. Kurt Godel, Alonzo Church, Stephen Kleene and Alan Turing all found problems that had no algorithmic solution.

The Church-Turing Thesis.

This is the thesis (i.e. it has not yet been proved true or false) that all computers are equally powerful as problem solvers. Obviously, some computers will be able to solve problems faster than others, but this is not the point. The Church-Turing thesis says that, given sufficient resources (time and memory), there is nothing that one computer can do that any other cannot (eventually). The RISC machine from assignment 2 can do anything your Pentium can. Since nobody has ever found a case that proves this thesis wrong, we will assume that it is true. It also seems reasonable, because we can use one computer to emulate another.

Assuming the truth of the Church-Turing thesis, it will be useful to look at the simplest machine that is still capable of computation.

Combinational logic can not perform computation, it is just a mapping from an input to an output. Sequential logic, on the other hand, can perform a limited form of computation. If we combine a sequential logic device with an external memory, then we have a Turing machine.

Turing Machines

We assume that instances of problems and solutions to instances can be encoded into sequences of 0's and 1's.

A Turing machine is a very simple device. It has a control unit which can be in any one of a finite number of states. These states are given and fixed for any specific machine, i.e. the control unit is a sequential logic device. The machine also has an input/output tape (all the books call this a tape but in reality it is more like a toilet roll). The tape is capable of holding any finite-length sequence of 0's, 1's and blank symbols, and has a read/write head to read and write symbols.

The tape is divided into place-holders; each place-holder can store one symbol.

We assume the symbols to be 0, 1, and the blank symbol. This machine performs computations as follows: the machine always starts in a particular state that we call S0, i.e., the state of the machine's control is S0 at the beginning of any computation.

The instance of the problem, encoded into a finite sequence of 0's and 1's, appears on the tape and the read/write head is positioned on a symbol of the input.

The actions of the machine are determined by a table. Given the state of the control unit, and the symbol currently under the input/output head, the table says to which state the control of the machine must change.

The table also says what symbol overwrites the currently read symbol and the movement of the input/output tape head: left, right or no move.

This diagram shows the components of a Turing Machine.
 


 

There will be a set of states, called accepting states, that will halt the machine. The machine produces an output when it halts.

Definition 1 An algorithm is a Turing Machine that always halts no matter what input is presented to it.

Since we are willing to accept machines whose computations may not terminate, then those machines are not algorithms (they are called partial functions), but we will see that our arguments apply even to those more general machines.

Let us define problems that are solvable by these more general machines.

Definition 2 A problem P is computable if there is a TM that produces, for any instance of the problem, its solution, if one exists; otherwise the TM runs forever.

Example: add one to a binary number.

Assume the TM has the following state table.
 
Current   Current cell   Value to   Direction   New state
state     contents       write      to move     to enter
0         *              *          L           1
1         0              1          L           3
1         1              0          L           2
1         *              *          R           6
2         0              1          L           3
2         1              0          L           2
2         *              1          L           4
3         0              0          L           3
3         1              1          L           3
3         *              *          R           5
4         0              *          R           5
4         1              *          R           5
4         *              *          R           5
5         0              0          R           5
5         1              1          R           5
5         *              *          no move     6
6         STOP           STOP       STOP        STOP

Example of a Turing Machine. Add two numbers:

The numbers are represented in unary; that is to say, a number n is represented by n consecutive 1's. For instance, the number 5 is represented as 11111. The numbers to be added appear on the tape from left to right as follows: a 0, followed by n 1's, followed by a 0, then m 1's, followed by a 0. So, the instance of the problem n=5 and m=3 would appear on the tape as 01111101110. The solution of the instance appears on the tape surrounded by 0's; for this instance the solution is 0111111110.

The machine is fully specified by giving each of its components; these are listed below. In general, a machine M is a seven-tuple of the form (S, s0, I, O, A, b, f), representing the set of states of the machine, the initial state, the set of input symbols, the set of output symbols, the accepting states, the blank symbol and the transition function or program, respectively.

For the addition problem:

[figure: the state table of the addition machine]
Let us see how this Turing Machine solves the problem 01111101110. The TM starts in state 1 and scans the first symbol, a 0.

The program tells it to write a 0 (thus leaving the symbol in that place-holder unchanged), move to the nearest symbol to the right, and change to state 2. Note that there are two choices each for states 2 and 3; the one to be executed depends on the symbol that is scanned.

In this case, we have a 1, so we write the 1 again (leaving that symbol unchanged) and move right without changing state. We continue in this fashion until reaching the 0 that separates the unary representations of 5 and 3.

Now we are past the first number, and we know the number to be added to it will follow. We move right, change to a new state, 3, and read a 1. Now the "addition" occurs: we write a 0 over the 1, move left, go to state 4, read a 0, write a 1 over it, and return to state 2.

The program continues to "loop" in this manner until we have 8 ones in a row on our tape with 2 zeroes at the end. Reading the first 0 (we are in state 2), we move right one place, change state to 3, and read another 0. The second unary number is gone, and we have completed the addition, so we write a blank over the 0, move left, and go to state 5, in which we stop.

The final answer, now on the tape, is a unary representation of 8 with a 0 at each end.

Lecture 23

Non-computable Problems

Godelisation

This is the technique discovered by Kurt Godel for giving numbers to objects in a collection. The numbers are called Godel numbers and can be used to make deductions about the objects. Godel used his numbers for formulas and proofs, but they can also be used to make deductions about programs.

Every Turing machine has a unique Godel number associated with it, a sort of serial number. We can obtain the Godel number by encoding the state table in binary. Since the state table is all that makes one TM different from another, this will be a unique number.

The Halting Problem

Can we build a Turing machine to decide if another Turing machine will halt or not? This would be a very useful algorithm, because we could use it to help decide if our programs are correct or not.

The answer is NO.

To prove this, assume that we have such an algorithm and call it halttester. The algorithm has two inputs: P, a program, and D, the program's data. halttester(P,D) returns 1 if P would halt when executed with input data D; otherwise it returns 0.

Now let's write another function that calls halttester. If we assume that halttester exists, then we can write the following program. If we call funny with a program, then it will only halt if the program does not halt.
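The original listing was a figure; in C-like pseudocode (pseudocode only, since halttester cannot actually be written, which is the whole point), funny might look like this:

```c
/* Pseudocode: halttester does not, and cannot, exist. */
int funny(program P) {
    if (halttester(P, P) == 1)   /* would P halt when given itself? */
        while (1);               /* ...then loop forever            */
    return 0;                    /* otherwise halt immediately      */
}
```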

If we call funny with itself, then it will halt if it doesn't halt, and not halt if it does halt. This makes no sense, so the original assumption that we can write a halttester function must be false.

In practice, it is easy to see if some programs terminate. If the program uses only sequence and selection then it will terminate. However, there are lots of programs that do not obviously terminate.

Consider the following program, which tries to disprove Fermat's last theorem. Assume a function called pow(x,y) exists to return x to the power of y.
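The listing itself was a figure in the original notes; a plausible reconstruction (a sketch, not runnable to completion, and assuming pow works with exact arithmetic) searches for a counterexample and halts only if it finds one:

```c
/* Sketch: search for a counterexample to Fermat's last theorem.
   The bound 'max' grows forever, so every combination of x, y, z
   and n > 2 is eventually tried. The program halts if and only if
   a counterexample exists. */
int main(void) {
    for (long max = 3; ; max++)
        for (long n = 3; n <= max; n++)
            for (long x = 1; x <= max; x++)
                for (long y = 1; y <= max; y++)
                    for (long z = 1; z <= max; z++)
                        if (pow(x, n) + pow(y, n) == pow(z, n))
                            return 0;   /* counterexample found: halt */
}
```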

The mathematician Andrew Wiles has proved that this program will never terminate (his proof of Fermat's last theorem), but the proof is so complex that very few people understand it.

Other non-computable problems

Can we write a program that takes two programs and compares them to see if they perform the same job - NO.

Given two BNF descriptions, can we tell if they describe the same language - NO.

Luckily some problems are computable.

Given a BNF description and a sequence of characters, can we check if the sequence is grammatically correct - YES.

Partial Computability

Some problems are more computable than others. There are inputs to the halting problem for which an output can be produced, e.g. a program that is a straight-line sequence (which must halt) or one that is just a while(1); statement (which cannot). The halting problem is therefore partially computable.

Godel showed that there is no algorithm to decide whether an arbitrary statement about integers is true or not - "Arithmetic is not computable". In fact, arithmetic is not even partially computable. In any arithmetical proof system, there are statements about integers which are true but cannot be proved!!

Complexity

The study of computability looks at what can be computed, complexity looks at how efficiently things can be computed.

Complexity does not relate to the difficulty of writing a program or of understanding a program, but the resources required by a program.

Of all possible problems, only a small subset are computable and only a small subset of these are feasibly computable. Any problem that would take 1000 years to solve would not be feasibly computable. Luckily, the set of feasibly computable problems is large enough to keep us in work for the time being.

The computer resources used by a problem are time, memory and hardware. Time is how long the problem takes to solve; since computers have different internal clock speeds, we measure this in abstract units called steps. Memory is the amount of storage an algorithm uses. Hardware is the physical resources used, e.g. the number of processors.

Consider an algorithm to multiply two numbers by hand using long multiplication.
 

If the algorithm is presented with two n-digit numbers, it will need to add together n rows containing n (or n+1) digits. Each row can be computed in n steps and there are n rows. The total time taken for this problem is thus n*n or n^2 steps. The execution time of the algorithm is proportional to n squared.

The same problem can be solved by another algorithm.

      a b  c d
      1984*6713

               ac = 19*67             = 1273
 (a+b)(c+d)-ac-bd = (103*80)-1273-1092 =   5875
               bd = 84*13             =     1092
                                         --------
                                         13318592

(the three partial results are shifted two places each,
 i.e. 1273*10^4 + 5875*10^2 + 1092 = 13318592)
This algorithm takes a time proportional to n^1.59. The fastest known algorithm for multiplying two numbers on a sequential machine has an execution time proportional to n log n log log n. On a parallel computer, numbers can be multiplied in a time proportional to log n.

Thus different algorithms to solve the same problem may have different complexities. Often, different algorithms use resources in different ways. In this case, the programmer must choose a trade off, and try to balance the resources used.

Lecture 24

Complexity

As we saw last week, different algorithms to solve the same problem may have different complexities - how can this complexity be measured?

The execution time of an algorithm will be measured in steps. The time taken for each step is not very important; we only have to wait a few years for technology to reduce it. What is important is the relationship between the size of the input data and the total number of steps required.

If there are n 'pieces' of input data (characters, numbers, strings etc.) then we can express the number of steps as a function of n, e.g. n^2, 2n log n, 4n^2+5n+100. In a function such as 4n^2+5n+100, as n gets larger, the 4n^2 term will dominate the other terms. The dominating term is called the asymptotic behavior of the algorithm. This behavior will often be all that we need to know about the complexity of the algorithm.
Size of input   log2 n            n                 n^2               2^n
data (n)        microseconds      microseconds      microseconds      microseconds
10              .000003 seconds   .00001 seconds    .0001 seconds     .001 seconds
100             .000007 seconds   .0001 seconds     .01 seconds       10^14 centuries
1000            .00001 seconds    .001 seconds      1 second          astronomical
10000           .000013 seconds   .01 seconds       1.7 minutes       astronomical
100000          .000017 seconds   .1 seconds        2.8 hours         astronomical
Any constant factor in the dominant term can also be ignored, because this is the same as changing the size of a step. After ignoring all the non-dominant terms and constants, we get an approximate measure of complexity. This is described using 'O-notation' (big-oh notation). For example, an algorithm is considered O(n^2) if it takes a time roughly proportional to n^2 for large n.

Algorithms whose complexity is of the form O(n^C), where C is a constant, are called polynomial algorithms. Polynomial algorithms often give practical solutions.

Algorithms whose complexity is of the form O(C^n), where C is a constant, are called exponential algorithms. These algorithms are usually impractical, since the amount of computation soon becomes enormous, even for relatively small n.

Example: what is the complexity of the bubble sort?

void bubble(int n[],int no) {
        int i,j,t;
        for(i=0;i<no;i++)
          for(j=0;j<no-i-1;j++)
            if(n[j]>n[j+1])  {
              t=n[j];
              n[j]=n[j+1];
              n[j+1]=t;
            }
      }
Here, we can use 'no' as n. If the if statement in the inner loop represents one step, then the number of steps taken is:

  (n-1)+(n-2)+...+2+1 = n(n-1)/2

Thus, the complexity of the bubble sort is O(n^2).

How about the quicksort:

void quicksort(int n[], int left,int right) {
  int dp;
  if (left<right)  {
    dp=partition(n,left,right);
    quicksort(n,left,dp-1);
    quicksort(n,dp+1,right);
  }
}
This algorithm will take different times depending on the input data, so we must do both a best case and a worst case complexity analysis.

Assume that the partition algorithm takes n steps. The worst case for quicksort occurs when the array is already sorted. In this case, the division point will always be the first position in the array, and the quicksort function will be called n times, for arrays of size (n-1),(n-2),...,1. Quicksort then has exactly the same complexity as the bubble sort: O(n^2).

The best case occurs when the division point is always half way through the array. Now the quicksort function will be called twice for arrays of size n/2, four times for arrays of size n/4, ... and n times for arrays of size 1. Each of these sets of calls (2*n/2, 4*n/4, 8*n/8, ..., n*1) takes n steps. There are log2 n sets of calls, and so the best case complexity of the quicksort is O(n log2 n).
 

In practice, the performance of the quicksort for randomly ordered data is nearer the best case. We can stop the quicksort having O(n^2) complexity for ordered data by choosing the pivot randomly.

Every problem has an inherent complexity, i.e. the complexity of the best algorithm that can solve it. We do not know the inherent complexity of many problems; in these cases we only know the complexity of the best algorithm found so far. This is referred to as an upper bound on the complexity.

Lecture 25

NP completeness and the Traveling Salesperson Problem.

Consider the following problem:

A salesperson has a number of towns to visit. How can they save fuel by visiting the towns in the order that gives the shortest total path?

One method for solving this is to look at all possible paths and choose the shortest. Unfortunately, for n towns there are (n-1)! possible paths; 16! is 20922789888000, so even for small numbers of cities this method is impractical.

It is possible to find an approximate solution to the TSP by using a non-deterministic algorithm. Instead of going through every possibility, we choose one path at random and try to improve it by making small changes, where each small change tries to reduce the total path length. This type of algorithm is called a gradient descent algorithm; it is a polynomial algorithm.

This algorithm is not guaranteed to find the optimum solution, but it will usually find a good one. Problems that can be solved in this way are called NP (nondeterministic polynomial) problems. There is a set of similar NP problems (including the TSP) that are called NP complete.

Correctness

Most computer programs contain errors - bugs. How can we write programs without bugs?

Two ways - testing and proving.

Testing

Use test data to see if the program works. This will test the program for most cases, but it is no guarantee that the program will be correct for all cases of input data, e.g. the Pentium floating-point bug.

Proving

A proof attempts to show that the program actually does what it is supposed to.

Two types - Formal and Informal

An informal proof is an extension of the documentation of the program to include an argument that it does what it is supposed to. This is really just a representation of good understanding of the program. Your documentation should be an informal proof.

A formal proof uses logical assertions to prove the correctness of a program.

In order to prove that an algorithm is correct, it is necessary to know what the problem to be solved is. There must be a specification for the problem. This is usually written in a formal specification language such as Z or VDM.

Discovering if an algorithm is correct from its specification is a non-computable problem. It is up to the programmer to prove correctness. One way of doing this is to use assertions.

An assertion is a statement about the state of the machine during execution of the program. There are three types - preconditions, postconditions and loop invariants.

The precondition is a true statement about the state of the machine before an operation begins. The postcondition is a statement about the state of the machine after the operation has terminated. The operation here could be a C statement, a block of C statements or an algorithm.

A loop invariant is a statement about the state of the machine during execution of a loop.

If it can be shown that the postcondition will be true if the algorithm terminates then the program is said to be partially correct.

If the program is partially correct and it can be proved to always terminate then it is called totally correct.

To prove total correctness, it is necessary to prove that some value increases or decreases every time the loop executes until the loop condition is satisfied. Example: prove the correctness of a factorial algorithm.
 

This algorithm is totally correct: it is partially correct because the loop invariant will always be true, since n! is defined as n*(n-1)!, and it will always terminate because n starts at 1 and increases by one each time round the loop, so n must eventually reach 10.

Now let's try something a little more difficult: the partition algorithm. Given an array x[0..n-1], x is partitioned (about a value r, at position a) if:

for all k: (0 <= k <= a implies x[k] <= r) and (a < k <= n-1 implies x[k] > r)

This is read as:
for all possible values of k
if k is between 0 and a inclusive then x[k] is less than or equal to r
if k is greater than a and less than or equal to n-1 then x[k] is greater than r
Let's split this into two conditions, P and Q.

let P([0..a]) be: for all k in [0..a], x[k] <= r

let Q([b..n-1]) be: for all k in [b..n-1], x[k] > r

We need to prove that after the algorithm terminates: P([0..a]) && Q([a+1..n-1]). Now the algorithm with all assertions:
 
a=-1;
b=n;
                      // { P([]) && Q([]) } 
while(a+1 != b)  {
                      // { P([0..a]) && Q([b..n-1]) } 
  if (x[a+1]<=r) {
     a=a+1;           // { P([0..a-1]) && x[a]<=r && Q([b..n-1]) }
     }                // { P([0..a]) && Q([b..n-1]) } 
  else if (x[b-1]>r) {
     b=b-1;           // { P([0..a]) && Q([b+1..n-1]) && x[b]>r } 
     }                // { P([0..a]) && Q([b..n-1]) }
  else {
                      // { x[b-1]<=r && x[a+1]>r }
     t=x[a+1];
     x[a+1]=x[b-1];   // { x[a+1]<=r && t>r } 
     x[b-1]=t;        // { x[a+1]<=r && x[b-1]>r }
     a=a+1;           // { P([0..a-1]) && x[a]<=r && Q([b..n-1]) }
     b=b-1;           // { P([0..a]) && Q([b+1..n-1]) && x[b]>r }
                      // { P([0..a]) && Q([b..n-1]) }
  }
}                     // { P([0..a]) && Q([b..n-1]) && a+1==b }
                      // { P([0..a]) && Q([a+1..n-1]) }
The program terminates because (b-a) decreases on every iteration of the loop: each branch increments a, decrements b, or does both. The only possible problem would be (b-a) being decremented by 2 in the swap branch and skipping over the terminating condition a+1 == b. For this to happen we would need a+2 == b before the swap, i.e. b-1 == a+1, so that after x[a+1] and x[b-1] are swapped we would have x[a+1] <= r and x[a+1] > r at the same time. This is not possible, so the algorithm must terminate.

Operating Systems

What is an Operating System?

 A program that acts as an intermediary between a user of a computer and the computer hardware.

A systems program which controls all the computer's resources and provides a base upon which application programs can be written.
 

Operating system goals:

  1.      Execute user programs and make solving user problems easier.
  2.      Make the computer system convenient to use.
  3.      Use the computer hardware in an efficient manner.

System services

The OS often acts as an intermediary between user programs and hardware. For example, to print a message on the screen, a user program could manipulate the hardware directly, but it is easier to call an operating system function to do this for you. When you use printf in a C program this is what you are really doing. If the operating system is the only program that manipulates hardware then there will be less chance of a user program using hardware badly. Calls to the operating system are usually called system calls.

Hardware controlled by the OS fits into three categories
 

  1. CPU
  2. Memory
  3. Input/Output
Efficient use of the CPU is called Process Management.
Efficient use of the Memory is Memory Management
Input Output covers many things, we will look at File Management.

Process Management

How can we efficiently use the CPU? This question only makes sense if there is more than one program running. An OS that supports running more than one program at once is called a multitasking OS.

Multitasking


Multitasking lets each process think that it has complete control of the CPU. There are two types of multitasking, pre-emptive and non pre-emptive.

Non pre-emptive multitasking.

The CPU is allocated to a process until the process performs a system call. This is simple to implement, but it is not always fair, as it allows a process to monopolise the CPU: if a process enters an infinite loop, no other process will ever get control of the CPU. This system is used by Windows 3.1 and MacOS.

Pre-emptive multitasking.

Before the CPU is allocated to a process, a hardware timer is set up to interrupt the process after a set time (called the time quantum). No process can ever take over control of the CPU. The system may run slightly slower than a non pre-emptive system because of the overhead of the extra process switches, but as long as the time quantum is large compared to the process switch time this effect will be negligible. This type of multitasking is used by Windows 95/98/NT, UNIX, and many other OS's.

Multiuser OS's

Windows 95/98/NT are multitasking, but they are really only single-user operating systems. Some OS's allow more than one user at the same time; obviously each user must have a different screen and keyboard, but the CPU and disk space are shared.

Scheduling

A multitasking OS must decide which of a number of waiting processes to run next. This is called CPU scheduling. A CPU scheduling algorithm must minimise the average time a process has to wait for the CPU and maximise the utilisation of the CPU. A simple scheduling algorithm called round-robin executes the waiting processes in a fixed cyclic order, one time quantum each.
 

Memory Management

If an OS is running more than one process, then memory must be allocated to each process. Most OS's have some form of memory protection scheme, whereby a process has no access to the memory of any other process.

The real layout of the memory is called the physical memory. The memory 'seen' by a particular process is called the logical memory.

Most modern CPU's incorporate a device called a Memory Management Unit that translates from logical to physical addresses.

File Management

A file is an abstraction used to name data in secondary storage (a hard disk). The hard disk can be thought of as a large, slow, memory; organisation of files on the hard disk is called the file system. A very simple file system is called contiguous allocation - just fill up the disk with files from start to finish. For example, after the creation of five files the disk may look like this:


A major problem with this system is that files cannot grow in size unless other files are moved.

Another problem with contiguous allocation is that if files are deleted, the disk will have gaps between files.

If file2 and file4 are now deleted, the disk will look like:


When a file is created, the operating system must look through a list of the gaps to see if there is one large enough. The disk may have enough space left on it for the file, but if the space is scattered over many gaps then file creation will fail.

A better system is called indexed allocation.

The hard disk is divided into small chunks called blocks or clusters. Block sizes vary between 512 bytes and 64K bytes.

At a fixed position on the disk, is a table that keeps track of which block is allocated to which file. The following diagram shows how file6 is stored at two different places on the disk. Indexed allocation allows files to grow in size and makes good use of disk space if the block size is small. If the block size is large then space will be wasted in partially filled blocks.

A Comparison of three operating systems:

Windows 98

Has pre-emptive multitasking, memory protection for some applications and a simple form of indexed allocation.

MacOS

Uses non pre-emptive multitasking, simple memory protection and indexed allocation.

UNIX

Has pre-emptive multitasking, memory protection and indexed allocation.
 

Networks

Hardware

Types

LAN Local Area Network

A LAN is a small network within a single building or location.

WAN Wide Area Network

A WAN links machines in different locations.

Topology

Point to Point

A simple connection from one machine to another. e.g. dialup link through a modem.

Bus

Many machines connected by a single pathway. Only one machine can use the bus at once. Machines must ask to use the bus and may have to wait. e.g. ethernet.

Star

Each machine is connected to a single point.

Routing

When a message is to be sent between LANs or over a WAN, it must go through a device called a router. A router may be used to restrict access from machines off the LAN; in this case it is called a firewall.

Example

This diagram shows the Albany Campus Network

Software

For machines to communicate they must use an agreed protocol, and messages must travel through different types of hardware between different applications. The easiest way to understand all this is to use a layered approach.

The software has four layers

Application Layer : a set of library functions for sending messages etc. e.g. sockets.

Transport Layer: packages up the message for transportation to its final destination.

Network Layer: packages up the message for transportation to the nearest router en route to the destination, or to the local machine.

Link Layer: send the message over the local network.
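The layering can be sketched as nested wrapping: on the sender, each layer adds its own header around the message, and the matching layer on the receiver strips it off. The header names below are invented for illustration.

```python
def send(message, dest, router):
    packet = ('TRANSPORT', dest, message)   # transport: final destination
    frame = ('NETWORK', router, packet)     # network: next hop (a router)
    return ('LINK', frame)                  # link: put it on the local wire

def receive(frame):
    _, (_, _, packet) = frame               # strip link and network headers
    _, dest, message = packet               # strip the transport header
    return dest, message

dest, msg = receive(send('hello', 'machine-B', 'router-1'))
```

The key point the sketch shows is symmetry: each layer only ever talks to its peer layer on the other machine, via the headers it added.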

The following diagram shows how a message is sent between two machines:




The following diagram shows a message passing through two machines acting as routers.




The most common set of network protocols is called TCP/IP (transmission control protocol/ internet protocol)

example: Ask for a web page from a remote machine:

Application (web browser) uses the application layer to open a connection to the remote machine:

Transport layer breaks message down into packets and adds a destination address and sequence number to each.

Network layer adds address of router to each packet.

Link layer asks for ethernet bus, is given it and puts the message on the bus.

A router sees the message on the bus and forwards it from router to router until a router on the same LAN as the destination is found. This router puts the message on its ethernet bus.

Destination machine link layer sees a message for it on the bus and gives it to the network layer.

Network layer passes message to the transport layer.

Transport layer uses sequence numbers to reconstruct message.

Application layer passes message to waiting application (web server).
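The transport layer's part of this walkthrough - breaking the message into packets with sequence numbers, then using the sequence numbers to reconstruct it - can be sketched directly. The packet size and field layout here are invented for illustration.

```python
PACKET_SIZE = 4

def packetize(message, dest):
    """Split a message into (destination, sequence number, data) packets."""
    return [(dest, seq, message[i:i + PACKET_SIZE])
            for seq, i in enumerate(range(0, len(message), PACKET_SIZE))]

def reassemble(packets):
    """Rebuild the message by sorting packets on their sequence numbers."""
    return ''.join(data for _, _, data in sorted(packets, key=lambda p: p[1]))

packets = packetize('GET /index.html', 'www.example.com')
packets.reverse()                   # simulate out-of-order arrival
reassemble(packets)                 # the original message comes back
```

Reversing the packet list before reassembly shows why the sequence numbers matter: packets routed independently may arrive in any order.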









Lecture 26

Artificial Intelligence

Can Machines be intelligent?

This is a difficult question and to answer it we need to define intelligence and thinking.

Some aspects of intelligence:

In the 1950s, a test was proposed by Alan Turing to determine whether or not a machine is intelligent - the Turing Test.

The Turing Test

The basis of the Turing Test is for a person to attempt to distinguish between a machine and a human being. For fairness, the machine and the human are isolated from the person carrying out the test, and messages are exchanged via a keyboard and screen. If the person cannot distinguish between the computer and the human being, then the computer must be intelligent. This test is often criticised because it only tests a limited aspect of intelligence.

Some people think that even if a machine could pass the Turing Test it may still not be intelligent.

Each year, there is a Turing test contest called the Loebner Prize. This is a transcript from the winner for 1998.

The Chinese Room Problem

A (non Chinese speaking) person is locked in a room and given the algorithm for a program that could pass the Turing test in Chinese. He/she is asked questions in Chinese, applies the algorithm by hand, and feeds the answers back. The room will appear intelligent but the person inside understands no Chinese, so is there any intelligence present?

This argument is criticised because it may well be possible for the complete system (i.e. the room and the person inside) to be intelligent without the person being intelligent.

Some people say that passing the Turing Test is sufficient to prove intelligence but not necessary. In other words, a machine may fail the Turing Test but still be intelligent.

There are plenty of examples of computer systems that perform tasks that would require intelligence if they were performed by a human being.

Types of AI Tasks

One possible classification of AI tasks is into 3 classes: Mundane problems, Formal problems and Expert Problems.

Mundane Tasks

Formal Tasks

Expert Tasks

Rule based systems -
         if (conditions) then action

Example of AI problem solving.

Problem: A farmer has a hungry fox, a fat goose and a bag of grain. The farmer needs to cross a river, but his boat can only carry two things.

Constraints: the fox and the goose cannot be left together; the goose and the grain cannot be left together.

How to cross the river?

The problem is hard to solve from its English language representation.

Try visual/graphical representation:

This is called a search tree. The search tree is not a data structure like the trees we saw earlier. It is a representation of the problem that allows us to solve it.

To solve this problem we need only follow the tree from its root node to any leaf node.

Lecture 27

Search Strategies

Consider the following problem:
      T---4----G--4--C
     /\        |         W
    /  \       5        /
   3    5      |       3
  /      \     |      /
 A---4----H-2--B--4--P
The diagram above shows 8 towns (A,T,H,B,G,C,P and W) connected by roads. The distance along each road is also shown on the diagram.

We want to get from town A to town W.

There are two questions we will ask about this journey.

  1. Is it possible? i.e. find any route.
  2. What is the shortest route?
Algorithms to answer the first question are called some path algorithms. Algorithms to answer the second question are called optimal path algorithms.
                       A
                      / \
                    /     \ 
                  /         \ 
                /             \
              /                 \
            /                      \
          /                          \
         H                            T
        / \                          / \
       /   \                        /   \
      /     \                      /     \
     /       \                    /       \
    B         T                  H         G
   / \         \                /         / \
  G   P         G              B         B   C
 / \   \       / \            / \       / \
T   C   W     B   C          P   G     H   P
             /              /     \         \
            P              W       C         W
             \
              W
This diagram is a search tree for the map. The leaf nodes represent places where there is nowhere to go that has not already been visited.

From the tree it can be seen that there are four possible routes to W.

`Some Path' Algorithms

A 'some path' algorithm to find a route between A and W is easy to write: just search the tree until a W is found. There are two useful techniques for doing this: search the tree either from left to right or from top to bottom.

Searching from left to right is called depth first search because we go right to the bottom of the tree first.

Searching from top to bottom, one level at a time, is called breadth first search because we search across the tree first.

Depth First Search

The tree is searched using a preorder traversal.
                       A1
                      / \
                    /     \ 
                  /         \ 
                /             \
              /                 \
            /                      \
          /                          \
         H2                           T
        / \                          / \
       /   \                        /   \
      /     \                      /     \
     /       \                    /       \
    B3        T                  H         G
   / \         \                /         / \
  G4  P7        G              B         B   C
 / \   \       / \            / \       / \
T5  C6  W8    B   C          P   G     H   P
             /              /     \         \
            P              W       C         W
             \
              W
This diagram shows the order that the first 8 nodes are visited for depth first search. After this, the goal is found.

This procedure can be described recursively.

Depth First Search Procedure:

  1. Determine if this node is the goal.
    If it is, then a path has been found.
  2. If the left subtree exists, search it.
  3. If the right subtree exists, search it.
  4. If the goal has been found then the algorithm has succeeded in finding a route; otherwise it has failed.
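The recursive procedure can be sketched in Python. The map is written as an adjacency structure, with each town's neighbours listed in the left-to-right order of the search tree; towns already on the current path are skipped, which is why the tree's branches end where they do.

```python
graph = {
    'A': ['H', 'T'], 'H': ['B', 'T', 'A'], 'T': ['H', 'G', 'A'],
    'B': ['G', 'P', 'H'], 'G': ['T', 'C', 'B'], 'P': ['W', 'B'],
    'C': ['G'], 'W': ['P'],
}

def depth_first(node, goal, path):
    path = path + [node]
    if node == goal:                    # step 1: is this node the goal?
        return path
    for n in graph[node]:               # steps 2-3: search the subtrees
        if n not in path:               # do not revisit towns on the path
            found = depth_first(n, goal, path)
            if found:
                return found
    return None                         # step 4: no route below this node

depth_first('A', 'W', [])   # -> ['A', 'H', 'B', 'P', 'W']
```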

Breadth First Search

                       A1
                      / \
                    /     \ 
                  /         \ 
                /             \
              /                 \
            /                      \
          /                          \
         H2                           T3
        / \                          / \
       /   \                        /   \
      /     \                      /     \
     /       \                    /       \
    B4        T5                 H6        G7
   / \         \                /         / \
  G8  P9        G10            B11       B12 C13
 / \   \       / \            / \       / \
T14 C15 W16   B   C          P   G     H   P
             /              /     \         \
            P              W       C         W
             \
              W
Search Procedure:
  1. Form a one element queue consisting of the root node.
  2. Until the queue is empty or the goal has been reached, determine if the first element in the queue is the goal node.
    1. If it is the goal then do nothing
    2. otherwise: Remove the first element from the queue and add its children (if any) to the back of the queue.
    If the goal has been found then the algorithm has succeeded in finding a route; otherwise it has failed.
Breadth first search will be better than depth first if there are a lot of dead ends with long paths to them.
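The queue-based procedure can be sketched as follows. To be able to report the route (not just that one exists), the queue here holds whole paths rather than single nodes; the map is the adjacency structure from the diagram.

```python
from collections import deque

graph = {
    'A': ['H', 'T'], 'H': ['B', 'T', 'A'], 'T': ['H', 'G', 'A'],
    'B': ['G', 'P', 'H'], 'G': ['T', 'C', 'B'], 'P': ['W', 'B'],
    'C': ['G'], 'W': ['P'],
}

def breadth_first(start, goal):
    queue = deque([[start]])            # step 1: a one-element queue
    while queue:                        # step 2: until the queue is empty
        path = queue[0]
        if path[-1] == goal:            # is the front element the goal?
            return path
        queue.popleft()                 # otherwise remove it and add its
        for n in graph[path[-1]]:       # children to the back of the queue
            if n not in path:
                queue.append(path + [n])
    return None

breadth_first('A', 'W')   # -> ['A', 'H', 'B', 'P', 'W']
```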

`Optimal Path' Algorithms

Now we need to find not just a solution, but the best solution. Assume there is a cost associated with every node. In our case this will be the total distance traveled so far.

The British Museum Algorithm

Search all possible paths. Use Breadth or Depth first but continue after a solution has been found. Choose the solution with the lowest cost.

This method is Impractical except for trivial problems.

The Branch and Bound Algorithm

Instead of searching all possible paths, search in order of lowest cost. The first path found will be optimal.

Search Procedure:

  1. Form a queue of partial paths. Let the initial queue consist of the zero-length, zero-cost path containing only the root node.
  2. Until the queue is empty or the goal has been reached, determine if the first path in the queue reaches the goal node.
    1. If the first path reaches the goal node then do nothing
    2. otherwise:
      1. Remove the first path from the queue.
      2. Form new paths from the removed path by extending one step.
      3. Add the new paths to the queue.
      4. Sort the queue by total cost, with least cost at the front.
    The first path to the goal that is found must be the optimum path.
                       A0
                      / \
                    /     \ 
                  /         \ 
                /             \
              /                 \
            /                      \
          /                          \
         H4                           T3
        / \                          / \
       /   \                        /   \
      /     \                      /     \
     /       \                    /       \
    B6        T9                 H8        G7
   / \         \                /         / \
  G11 P10       G13            B10       B12 C11
 / \   \       / \            / \       / \
T15 C15 W13   B18 C17        P14 G15   H14 P16
             /              /     \         \
            P22            W17     C19       W19
             \
              W25
This diagram shows the distances traveled to reach each node in the tree. From this, it can be seen that the branch and bound algorithm will visit the nodes in the following order.
                       A1
                      / \
                    /     \ 
                  /         \ 
                /             \
              /                 \
            /                      \
          /                          \
         H3                           T2
        / \                          / \
       /   \                        /   \
      /     \                      /     \
     /       \                    /       \
    B4        T7                 H6        G5
   / \         \                /         / \
  G11 P8        G13            B9        B12 C10
 / \   \       / \            / \       / \
T   C   W14   B   C          P   G     H   P
             /              /     \         \
            P              W       C         W
             \
              W
It will take 14 iterations to find the optimal path.
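The branch and bound procedure can be sketched as follows, with the road distances taken from the map diagram. The queue holds (cost, path) pairs and is re-sorted after each extension, so the first complete path to W that reaches the front of the queue is optimal.

```python
graph = {
    'A': {'H': 4, 'T': 3}, 'H': {'A': 4, 'B': 2, 'T': 5},
    'T': {'A': 3, 'G': 4, 'H': 5}, 'B': {'G': 5, 'H': 2, 'P': 4},
    'G': {'B': 5, 'C': 4, 'T': 4}, 'P': {'B': 4, 'W': 3},
    'C': {'G': 4}, 'W': {'P': 3},
}

def branch_and_bound(start, goal):
    queue = [(0, [start])]                    # step 1: zero-cost path
    while queue:
        cost, path = queue[0]                 # step 2: examine the front
        if path[-1] == goal:
            return cost, path                 # first goal path is optimal
        queue.pop(0)                          # 2.2.1: remove the front path
        for n, d in graph[path[-1]].items():  # 2.2.2: extend by one step
            if n not in path:
                queue.append((cost + d, path + [n]))
        queue.sort()                          # 2.2.4: least cost at front
    return None

branch_and_bound('A', 'W')   # -> (13, ['A', 'H', 'B', 'P', 'W'])
```

The route found, A-H-B-P-W with total distance 4+2+4+3 = 13, matches the least-cost W in the annotated tree above.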

Lecture 28

Heuristics

A heuristic is a 'rule of thumb' that can be used to speed up a search algorithm.

A good heuristic can considerably speed up the search.

For example, consider the route finding example from the last lecture.

A good heuristic for this example is the straight line distance to the goal. Our search will be much more efficient if we look at routes going towards the goal first.

      8        5     2 
      T---4----G--4--C   0
     /\        |         W
    /  \       5        /
   3    5      |       3
  /      \     |      /
 A---4----H-2--B--4--P
 10       6.8  6     3

The extra numbers in this diagram are straight line distances from W. They can be used in a modified branch and bound algorithm to find the goal even more efficiently.
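One common modification (essentially the A* algorithm) is to sort the branch and bound queue by distance-travelled-so-far plus the straight-line distance to W, so that routes heading towards the goal are extended first. A sketch, using the straight-line distances from the diagram:

```python
graph = {
    'A': {'H': 4, 'T': 3}, 'H': {'A': 4, 'B': 2, 'T': 5},
    'T': {'A': 3, 'G': 4, 'H': 5}, 'B': {'G': 5, 'H': 2, 'P': 4},
    'G': {'B': 5, 'C': 4, 'T': 4}, 'P': {'B': 4, 'W': 3},
    'C': {'G': 4}, 'W': {'P': 3},
}
h = {'A': 10, 'T': 8, 'H': 6.8, 'G': 5, 'B': 6, 'C': 2, 'P': 3, 'W': 0}

def heuristic_search(start, goal):
    queue = [(h[start], 0, [start])]          # (estimate, cost so far, path)
    while queue:
        _, cost, path = queue[0]
        if path[-1] == goal:
            return cost, path
        queue.pop(0)
        for n, d in graph[path[-1]].items():
            if n not in path:
                queue.append((cost + d + h[n], cost + d, path + [n]))
        queue.sort()                          # smallest estimate at front
    return None

heuristic_search('A', 'W')   # -> (13, ['A', 'H', 'B', 'P', 'W'])
```

Because the straight-line distance never overestimates the remaining road distance, the route found is still the optimal one; the heuristic only changes how quickly the search gets there.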

Neural Networks

What is a Neural Network?

Here is a definition from a book:

It has lots of very simple processing elements (or neurons or units). Each unit has inputs and an output. The unit is defined by a set of parameters (or weights).

Units are connected together in a regular way (a network). Input is placed on input units. There may be a number of hidden units that also perform processing and there will be some output units.

The network is trained by applying example inputs (where we know the correct output).

The weights are then changed so that the outputs are more like the correct outputs.
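A single unit and its training rule can be sketched as follows. This is not any particular network from the lecture, just a minimal illustration (a perceptron-style unit with an invented learning rate, taught the logical AND function): the unit weights its inputs, and each training step nudges the weights and bias towards the correct output.

```python
def output(weights, bias, inputs):
    """The unit fires (outputs 1) if its weighted input total is positive."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Training examples (inputs, correct output) for the AND function.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = [0.0, 0.0], 0.0

for _ in range(10):                     # apply the examples several times
    for inputs, correct in examples:
        error = correct - output(weights, bias, inputs)
        bias += 0.1 * error             # move the weights so the output
        weights = [w + 0.1 * error * x  # is more like the correct output
                   for w, x in zip(weights, inputs)]

[output(weights, bias, i) for i, _ in examples]   # -> [0, 0, 0, 1]
```

After training, the unit reproduces the correct output for every example, even though no rule for AND was ever written down - the behaviour was learned from examples and corrections, which is the essence of the approach NETtalk takes below.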

Example: NETtalk

The problem

Convert English text into the vowel and consonant sounds (phonemes) that make up speech. For example, consider the following words: Albany, Albania, misled and bedraggled.

Traditional Approach

  1. Create if-then rules to encode all the regularities in the language.
  2. Maintain a database of exceptions to these rules.
  3. Build a production system to resolve conflicts.

For example, a "c" can be pronounced either like an "s", as in center, icy, and city, or like a "k", as in cat and crumb. Here is a rule that might encode this difference:

Neural Network Approach

Allow a system to learn how to pronounce words by giving it lots of examples and correcting its mistakes.

  1. Choose an architecture.
  2. Choose a representation for the input and output.
  3. Determine a training set. 16,000 words were chosen randomly from the 20,000 words in Webster's Dictionary.
  4. Train the system. Give the network each word in the training set several times, and indicate the errors for learning.
  5. Test the system. The network was able to respond correctly to 90% of the 4,000 remaining words from the dictionary.

Advantages of the Network Approach

Steps in the training of NETtalk