Week #2 - Introduction to Basic Concurrency and Semaphores


This tutorial deals with multiple processes, mutual exclusion, critical
sections and semaphores.  The aim is to provide examples of how and how not
to synchronize two processes that are sharing data or communicating through
shared data structures.

The general outline will be:

1) Concurrency: Process and Fork
2) Problems with concurrent execution
3) Semaphores
4) Bounded Buffer
5) Monitors

1) Concurrency
==============

Concurrency allows you to run more than one program
"at the same time".  On a single processor machine, 
this is simulated by switching between programs to provide the
illusion of concurrent execution.  These programs are known as 
"processes".

In OOT, a process is defined with the "process" keyword, and is
similar to a procedure.  The process is "called" using the
"fork" keyword, which causes the process to begin executing 
concurrently, while the process that called it also continues 
executing.  This is in contrast to a normal procedure call, where
the calling code waits for the called code to finish executing
before it continues.

Example 1
---------
	process speak( word: string )
		loop
			put word
		end loop
	end speak

	fork speak( "Hi" )
	fork speak( "Ho" )

Example output:

	Hi
	Ho
	Hi
	Hi
	Ho
	HoHi
	Hi

	Ho
	Hi

...etc.


The process will continually print out the string passed to it.

When a fork is encountered, two threads of execution result (actually only
one extra thread is created - one thread is the calling process and the
other is the process named in the fork).  In the example, the original
program (the main program here) forks speak, which creates a process
"speak" and starts it executing.  At the same time, the calling process
(the main program here) continues to execute its code, encounters the
second fork, and executes it.  This creates a new process which uses the
speak code again.  There are now three processes executing: the main
program and two instances of the code in the process "speak".  (In this
example the main program has nothing else to do and will exit immediately.)

The fact that the same code is running in two processes at once is not
a problem in OOT, since processes are "reentrant".  This means that
each process has its own copy of its local data, stack, etc.
They do share global data, however, which is the source of the problems
we'll explore in the next section.  (Although reentrancy sounds like a 
must-have feature, it's not always a safe assumption to make.  MS-DOS is
a notable example of a non-reentrant system.)


The most important thing to note is that with two processes 
running at the same time, their relative order of execution is 
completely unknown beforehand.  The actual order of instructions
being executed can be affected by many factors, such as 
the CPU scheduling algorithm, the number of other processes on the system, etc.
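
If you want to try the same experiment outside OOT, here is a minimal C
sketch using POSIX threads, which are a reasonable stand-in for OOT
processes (pthread_create plays the role of fork; the file name and a
compile line like "cc -pthread speak.c" are only suggestions).  The
interleaving of its output is just as unpredictable as in Example 1.

#include <pthread.h>
#include <stdio.h>

/* Thread body: print the given word forever, like "process speak". */
void *speak(void *arg) {
    const char *word = arg;
    for (;;) {
        printf("%s\n", word);
    }
    return NULL;    /* never reached */
}

int main(void) {
    pthread_t t1, t2;

    /* "fork speak(...)" twice: both threads run concurrently with main. */
    pthread_create(&t1, NULL, speak, "Hi");
    pthread_create(&t2, NULL, speak, "Ho");

    /* Unlike the OOT main program, wait here so the whole process
       doesn't exit and take the threads with it. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
----------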


2) Problems with concurrent execution
=====================================
	
What is wrong with the following code?

EXAMPLE 2:

var x := 0 

process A
	x := x + 1
end A

process B
	x := x + 1
end B

fork A
fork B
----------------------


At first glance this code looks ok, since either
process A or process B must first execute x := x + 1, and
then the other process will execute it.  Either way the
end result is that 2 gets added to x.
Right?  Wrong.

Both processes appear to consist of only one action, adding 1 to x,
but at the assembly language level even this simple action consists of
at least 2 or 3 machine instructions, e.g. load x into a register,
add 1 to it, and write the result back into memory.
The CPU can switch between runnable processes at any point 
between machine instructions (a context switch).  This means
that a switch could happen right in the middle of 
incrementing x, causing the final result to be incorrect.
 
Suppose we had the following sequence of execution.

process a  		process b       	x's value in main memory
---------		---------		------------------------
							5
load x from mem.
register contains 5
add 1 to register
register contains 6
        !!!Context Switch!!!
			load x from mem.
			register contains 5
			add 1 to register
			register contains 6
			write register to mem.		6
        !!!Context Switch!!!
write register to mem.					6

x now has the value of 6.  Another order of execution could 
give the value 7.  

This illustrates the basic problem of concurrent execution:
controlling access to shared resources.  In this case the
resource is the shared variable x.  The problem boils down to something 
we all learned in kindergarten - if someone else is using something, 
you can't have it until they're done.
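
You can watch the lost update happen with a hedged C sketch along the
same lines (the variable names and the million-iteration count are my
own, chosen so that at least one badly timed context switch is nearly
certain).  Run it a few times; the final total usually comes up short
of 2000000.

#include <pthread.h>
#include <stdio.h>

int x = 0;                       /* shared variable, deliberately unprotected */

void *add_one_many_times(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        x = x + 1;               /* load, add, store - NOT atomic */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, add_one_many_times, NULL);
    pthread_create(&b, NULL, add_one_many_times, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    /* We "expect" 2000000, but lost updates usually make it smaller. */
    printf("x = %d\n", x);
    return 0;
}
----------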

Since we can't make processes sit in the corner if they don't
play nicely, we have to make it impossible for them to 
interrupt each other during access to a shared resource.  Any section
of code which accesses a shared resource is known as a "critical section".
While executing in a critical section, no other process should be able to
access the resource protected by the critical section.

So maybe we should set a flag when we enter a critical section, that
other processes can check before accessing the resource.  Let's try it.

Example 3
---------

var using_disk := 0

process A
	loop 
	  exit when using_disk = 0
	end loop
	
	using_disk := 1
	use disk for something
	using_disk := 0	
end A


fork A
fork A

See the problem with this code?  

Suppose we check using_disk and find it to be 0.  We'll then proceed
merrily onward, assuming no one else is using the disk.  If a context
switch were to happen immediately after exiting the loop, but
before setting using_disk to 1, our other instance of process A
could come along, see that using_disk is 0, and decide that it is 
ok to use the disk.  Both processes would then be using the disk
at the same time.  The result - sheer uncontrolled mayhem.  Not what
you want.

Although we're on the right track, this solution just pushes the problem 
further back from the disk.  Now the variable using_disk is a shared 
resource that needs to be protected. Maybe we could use another flag 
to protect this flag?  No, that way lies madness.
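
For the record, here is the same broken flag written with C threads - a
sketch of what NOT to do.  The marked comment is the window where a
context switch lets both threads past the check; the volatile only keeps
the compiler from caching the flag, it does not make the code safe.

#include <pthread.h>
#include <stdio.h>

volatile int using_disk = 0;     /* the flag from Example 3 - NOT safe */

void *process_A(void *arg) {
    const char *name = arg;

    /* busy-wait until the flag says the disk is free */
    while (using_disk != 0) {
        /* spin */
    }
    /* <-- a context switch here lets another thread past the check too */
    using_disk = 1;
    printf("%s is using the disk\n", name);
    using_disk = 0;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, process_A, "first A");
    pthread_create(&t2, NULL, process_A, "second A");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
----------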

3) Semaphores
=============

Luckily for us, someone else has already done the hard work of figuring
out the solution.  In 1968, E. Dijkstra proposed the "semaphore" data
type.  Essentially, a semaphore is an integer variable, with two operations
allowed on it, P() and V().  P and V stand for Dutch words that mean
"test" and "increment", which is kind of unintuitive unless you're Dutch.
A semaphore's implementation in pseudo-code looks like

procedure P(s : semaphore)
  loop 
    exit when not s = 0

    % yield control to another runnable process
    wait   
  end loop

  s := s - 1
end P

procedure V(s : semaphore)
  s := s + 1
end V		


This looks like we've just encapsulated our faulty solution from Example 3
in an abstract data type.  The difference is that a true semaphore uses
an operating system trick to ensure that both P and V are uninterruptible
or "atomic" operations.  See section 8.3.2 in the Nutt textbook if you 
want to know the details.  To use semaphores, you just need to know
that you should P a semaphore before entering a critical section, and
V the same semaphore on exiting the critical section.  If someone else is
in the critical section when P is executed, the semaphore's value will
be 0, and the process will loop until it isn't.  Then it will decrement the
semaphore (back to 0, for a binary semaphore), enter the critical section,
and access the resource.  Calling V when exiting the critical section
increments the semaphore, allowing anyone else waiting in a P to advance.

Our Example 3 rewritten using semaphores looks like this:


Example 4
---------


var using_disk : semaphore

process A
	P(using_disk)
	use disk for something
	V(using_disk)
end A

% initialization - no one is using the disk yet.
using_disk := 1
fork A
fork A
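
OOT has no built-in semaphore type, so for comparison here is a hedged C
sketch of Example 4 using POSIX semaphores: sem_wait plays the role of P
and sem_post plays the role of V (the thread names and the printed
message are invented for illustration).

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t using_disk;                /* binary semaphore guarding the "disk" */

void *process_A(void *arg) {
    const char *name = arg;
    sem_wait(&using_disk);       /* P(using_disk) */
    printf("%s is using the disk\n", name);
    sem_post(&using_disk);       /* V(using_disk) */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;

    /* initialization - no one is using the disk yet, so the count is 1 */
    sem_init(&using_disk, 0, 1);

    pthread_create(&t1, NULL, process_A, "first A");
    pthread_create(&t2, NULL, process_A, "second A");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    sem_destroy(&using_disk);
    return 0;
}
----------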



4) Bounded Buffer 
=================

  Now let's look at a more complicated example of process synchronization.

We have 2 types of processes:
	- a producer P whose only job is to place data in a bounded
	buffer (produces data and places on the buffer to be consumed)
	- A consumer which gets data from the buffer and consumes it (or
	does some other unspeakable act to it).

	P --->  -------------------------  -----> C
		| | | | | | | | | | | | |
		-------------------------
		   Bounded buffer

One must worry about the following problems in the implementation (these
cause the processes to wait):

1) If the buffer is full -> the producer has to wait for an empty slot
2) If the buffer is empty -> the consumer must wait for a data item to be
		placed in the buffer

How long do they wait?
1) Producer can put its data item in the buffer when a consumer
	reads a data item (frees up a buffer slot).
2) Consumer will read a data item when producer puts one
	on the buffer.

Since the buffer is a shared resource, we must use synchronization 
techniques to control access to it.


Example 5
---------


	% set up semaphores
	% one to indicate how many empty slots there are.
	var emptySlot : semaphore

	% one to indicate how many full slots there are.
	var fullSlot : semaphore

	% one to ensure mutual exclusion on the buffer.
	var mutex : semaphore	
	
	% initialize semaphores
	emptySlot = numSlots   % all empty
	fullSlot = 0	       % none full 
	mutex = 1              % no one using buffer initially

	% declare circular buffer and variables pointing to 
	% the start and end of data residing on it
	numfull = 0
	head = 0
	tail = 0

	%initialize buffer to be empty
	for i: all slots
		buffer( i ) = empty (0)
	end for
	

	% Place data in buffer (called by producer)

	process producer()
	    loop
		data = produce a new item	% create something to put in the buffer

		P(emptySlot)		% wait until a slot is free

		P(mutex)		% wait for access to buffer

		buffer( tail ) = data	% put data into buffer
		tail = (tail + 1) mod numSlots
		numfull = numfull + 1

		V(mutex)		% give up access to buffer

		V(fullSlot)		% signal that there is now an occupied slot
	    end loop
	end producer

	process consumer()
	    loop
		P(fullSlot)		% wait if buffer empty

		P(mutex)		% get access to buffer

		data = buffer( head )	% get data
		head = (head + 1) mod numSlots
		numfull = numfull - 1

		V(mutex)		% give up access to buffer

		V(emptySlot)		% signal empty space available

		consume the data item	% use the data outside the critical section
	    end loop
	end consumer


fork producer
fork consumer
----------

Note that the producer can keep filling the buffer until it's full and AT
THE SAME TIME (apart from taking turns with the mutex) the consumer can
keep reading from the buffer until it's empty.
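
The same structure carries over almost line for line to POSIX semaphores.
Below is a hedged C sketch of Example 5; the buffer size, the use of plain
ints as data items and the 20-item workload are invented for illustration,
and a pthread mutex stands in for the binary mutex semaphore.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define NUM_SLOTS 8

int buffer[NUM_SLOTS];          /* circular buffer of (here) plain ints */
int head = 0, tail = 0;

sem_t empty_slot;               /* counts free slots   (starts at NUM_SLOTS) */
sem_t full_slot;                /* counts filled slots (starts at 0) */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;   /* guards the buffer */

void *producer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&empty_slot);              /* P(emptySlot): wait for room   */
        pthread_mutex_lock(&mutex);         /* P(mutex): lock the buffer     */
        buffer[tail] = i;
        tail = (tail + 1) % NUM_SLOTS;
        pthread_mutex_unlock(&mutex);       /* V(mutex)                      */
        sem_post(&full_slot);               /* V(fullSlot): one more item    */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slot);               /* P(fullSlot): wait for an item */
        pthread_mutex_lock(&mutex);         /* P(mutex)                      */
        int data = buffer[head];
        head = (head + 1) % NUM_SLOTS;
        pthread_mutex_unlock(&mutex);       /* V(mutex)                      */
        sem_post(&empty_slot);              /* V(emptySlot): one more hole   */
        printf("consumed %d\n", data);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slot, 0, NUM_SLOTS);    /* all slots empty to start      */
    sem_init(&full_slot, 0, 0);             /* no slots full to start        */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
----------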


5) Monitors
===========

OOT does not support semaphores directly, but instead uses the concept
of a monitor, which accomplishes the same thing and is more 
flexible. Shared resources and data are placed in the monitor. Only one
process thread is allowed to be in the monitor at any one time. When you 
call a procedure or function of the monitor from outside the monitor we 
say you are trying to gain entry to the monitor (or you are trying to 
LOCK the critical section).

In a monitor, there is an entry queue. If the monitor is free (no one
is using it), the first process waiting on the entry queue is allowed
into the monitor. If the monitor is being used, processes trying to
enter it block and wait on the entry queue.

Note that a monitor is written with the same syntax as a module.
The procedures and functions that are exported by a monitor
are the only way a process can gain entry to the monitor.
Note how the monitor procedure is called from each process in Example 6.

Example 6
---------

monitor changex
	export addx
	var x := 0

	procedure addx
		x := x + 1
	end addx
end changex

process a
	changex.addx
end a

process b
	changex.addx
end b

fork a
fork b
	
----------------------

When a process is given access to a monitor it executes
the procedure code, and when it returns from the procedure the monitor
is said to be EXITED by that process (or the critical section is
UNLOCKED). (NOTE THAT ONLY ONE PROCESS CAN BE IN A MONITOR AT ANY TIME
-> THIS IS WHY IT'S USEFUL FOR ENFORCING MUTUAL EXCLUSION)

This is easiest to explain if you view the monitor as a room with the
entry as a door and the shared variables inside the room. The room has
space for only one process.

    
       -------[entry through addx call]------
	|                                   |
 	| 		X		    |
	|				    |
	-------------------------------------

A process leaving the monitor causes the monitor to allow
another process to enter it.
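
C has no monitors either, but you can approximate Example 6 by bundling
the shared data with a mutex and only touching the data inside the
"exported" functions. A hedged sketch (the function and variable names
are my own):

#include <pthread.h>
#include <stdio.h>

/* The "monitor" changex: shared data plus a lock that admits one thread
   at a time.  Callers only touch x through addx(). */
static int x = 0;
static pthread_mutex_t changex_lock = PTHREAD_MUTEX_INITIALIZER;

void addx(void) {
    pthread_mutex_lock(&changex_lock);      /* gain entry to the monitor */
    x = x + 1;
    pthread_mutex_unlock(&changex_lock);    /* exit the monitor */
}

void *process_a(void *arg) { addx(); return NULL; }
void *process_b(void *arg) { addx(); return NULL; }

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, process_a, NULL);
    pthread_create(&b, NULL, process_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("x = %d\n", x);                  /* always 2 this time */
    return 0;
}
----------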

Note: print statements shared between processes must all be handled by one
monitor. Why? The output device (stdout) is a shared resource, and printing
is a mutually exclusive operation, i.e. only one print should be written to
the output at a time -> we need a monitor that treats printing as a
critical section.
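
As a hedged illustration (print_line and worker are names I made up),
such a "printer monitor" might look like this in C:

#include <pthread.h>
#include <stdio.h>

/* A tiny "printer monitor": all output goes through print_line(), so
   only one thread writes a line to stdout at a time. */
static pthread_mutex_t print_lock = PTHREAD_MUTEX_INITIALIZER;

void print_line(const char *msg) {
    pthread_mutex_lock(&print_lock);
    printf("%s\n", msg);
    pthread_mutex_unlock(&print_lock);
}

void *worker(void *arg) {
    for (int i = 0; i < 5; i++)
        print_line((const char *) arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "Hi");
    pthread_create(&t2, NULL, worker, "Ho");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
----------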

Synchronization: Signal/wait and conditions in monitors
-------------------------------------------------------

Going back to Example 1, if we wanted to order the execution 
of the processes (in this case, order the prints) 
we would use MONITORS for synchronization. 

Let's say we wanted to print out Hi then Ho then Hi then Ho
(i.e. alternate between the Hi's and Ho's).
We can do this by modifying example 1 and adding a monitor
for SYNCHRONIZATION (use signal, wait, and condition in monitor).

Condition: this is a special monitor variable which is used to synchronize
execution between processes (kind of like a semaphore - I SAID "KIND OF").
It has its own special waiting queue.

If one process must wait for another process to signal it to continue
execution, then it WAITS on the condition: "wait condition"

When the process executes the WAIT, it is placed on the CONDITION
queue (in example 7, "done" is a condition with a queue). When
a process is put on a condition queue, it effectively releases the monitor
and is BLOCKED onto the queue. When the monitor is released, the monitor
tries to get another process to enter it (as before).

A process which wants to signal another to continue its execution
executes a SIGNAL on the condition. This command starts up the first
blocked process on the condition queue: the signalled process resumes
inside the monitor, and the signaller gives up the monitor and goes to
the head of the entry queue (ASK YOUR TA FOR A NICE ROOM DIAGRAM WITH
THE QUEUES AND CONDITIONS LABELLED).
If no process is waiting on the condition, the signaller continues execution.

SO WAIT queues (and blocks) processes while SIGNAL dequeues (and unblocks)
processes.

Example 7
---------

monitor alternate
        export wait_for

        var done: condition

        procedure wait_for
                signal done
                wait done
        end wait_for
end alternate

process speak( word: string )
        loop
                put word
                alternate.wait_for
        end loop
end speak

fork speak( "Hi" )
fork speak( "Ho" )
----------

In this example, the first process will ask for entry to 
the monitor and gain entry (no one waiting). The second one
waits. The first process executes SIGNAL DONE which does nothing
since no one is waiting on the condition. It then executes
WAIT DONE. It is thus placed on the condition queue for DONE and releases
the monitor and blocks. When the monitor is RELEASED, the second
process gains entry into the monitor and executes the SIGNAL DONE.
Since there is now a process on the condition queue, the second
process is placed at the head of the entry queue. The first process which
was signalled gains access to the monitor again and continues executing
WHERE IT PREVIOUSLY LEFT OFF, i.e. executes END WAIT_FOR which means it
EXITS the monitor. Thus, the process at the head of the entry queue begins
executing (the second process). In this case the process continues executing
WHERE IT PREVIOUSLY LEFT OFF, i.e. executes a WAIT DONE and is blocked. Thus,
the first process will be the one to wake this up when it calls 
procedure wait_for.
(Whew! What a complicated mouthful. Don't worry, it's all a matter of
getting straight the order in which processes are queued, blocked, woken up,
signalled, and resumed.)
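
For comparison, here is a hedged C sketch of Example 7 built on a pthread
condition variable. Pthread condition variables are Mesa-style - a woken
thread must re-check what it was waiting for - so the sketch adds an
explicit turn variable that the OOT monitor does not need; the speaker
struct and all the names are my own.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
static int turn = 0;                  /* which speaker may go next: 0 or 1 */

/* The "alternate" monitor: hand the turn to the other speaker, signal,
   then wait (re-checking the condition) until the turn comes back. */
void wait_for(int me) {
    pthread_mutex_lock(&lock);
    turn = 1 - me;                    /* like "signal done" */
    pthread_cond_signal(&done);
    while (turn != me)                /* like "wait done" */
        pthread_cond_wait(&done, &lock);
    pthread_mutex_unlock(&lock);
}

struct speaker { const char *word; int id; };

void *speak(void *arg) {
    struct speaker *s = arg;
    for (;;) {                        /* runs forever, like Example 7 */
        printf("%s\n", s->word);
        wait_for(s->id);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    struct speaker hi = { "Hi", 0 }, ho = { "Ho", 1 };
    pthread_create(&t1, NULL, speak, &hi);
    pthread_create(&t2, NULL, speak, &ho);
    pthread_join(t1, NULL);           /* never returns; stop with Ctrl-C */
    pthread_join(t2, NULL);
    return 0;
}
----------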


Example 8
---------
Bounded Buffer problem from Example 5, using a monitor instead.

monitor BoundBuffer
	export Send, Receive

	% set up conditions
	% one to signal when there is space in the buffer to place data
	var emptySlot : condition

	% one to signal when there is data in the buffer to consume
	var fullSlot : condition

	% declare circular buffer and variables pointing to 
	% the start and end of data residing on it

	numfull = 0
	head = 0
	tail = 0

	%initialize buffer to be empty
	for i: all slots
		buffer( i ) = empty (0)
	end for

	% Place data in buffer (called by producer)

	procedure Send( data )
		
		% wait until a slot is free

		% put data into buffer
		buffer( tail ) = data
		tail = (tail + 1) mod numSlots	% must cycle around the buffer
		numfull = numfull + 1

		% signal that there is now an occupied slot
	end Send

	procedure Receive( var data )

		% wait if buffer empty

		% get data in buffer
		data = buffer( head )
		head = (head + 1) mod numSlots	% cycle back to 0 if needed
		numfull = numfull - 1

		% signal empty space available
	end Receive
end BoundBuffer

% How the bounded buffer is used:

process producer
	loop
		get data from TTY
		BoundBuffer.Send( data )
	end loop
end producer

process consumer
	loop
		BoundBuffer.Receive( data )
		write out data to TTY
	end loop
end consumer

fork producer
fork consumer
----------

Note that, as before, the producer can keep filling the buffer until it's
full and AT THE SAME TIME (apart from taking turns inside the monitor) the
consumer can keep reading from the buffer until it's empty.


Example 8 (continued)
---------------------
Here are Send and Receive again, this time with the waits and signals
filled in.

        % Place data in buffer (called by producer)

        procedure Send( data )

                % wait until a slot is free
		if buffer full (numfull = number of buffer slots) then
			wait emptySlot
		end if

                % put data into buffer
                buffer( tail ) = data
                tail = (tail + 1) mod numSlots  % must cycle around the buffer
                numfull = numfull + 1

                % signal that there is now an occupied slot
		signal fullSlot

        end Send

        procedure Receive( var data )

                % wait if buffer empty
		if buffer is empty (numfull = 0) then
			wait fullSlot
		end if

                % get data in buffer
                data = buffer( head )
                head = (head + 1) mod numSlots  % cycle back to 0 if needed
                numfull = numfull - 1

                % signal empty space available
		signal emptySlot
	
        end Receive
end BoundBuffer
----------
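
Finally, a hedged C sketch of the completed BoundBuffer monitor for
comparison: one mutex acts as the monitor lock and two condition variables
play the roles of emptySlot and fullSlot. The buffer size, the int data
items and the 20-item producer/consumer workload are invented for
illustration, and the if tests above become while loops because a pthread
condition wait must be re-checked after waking.

#include <pthread.h>
#include <stdio.h>

#define NUM_SLOTS 8

/* A C approximation of the BoundBuffer monitor: the shared state plus one
   mutex (the monitor lock) and two condition variables. */
static int buffer[NUM_SLOTS];
static int head = 0, tail = 0, numfull = 0;

static pthread_mutex_t monitor   = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  emptySlot = PTHREAD_COND_INITIALIZER;  /* space freed */
static pthread_cond_t  fullSlot  = PTHREAD_COND_INITIALIZER;  /* data placed */

void Send(int data) {
    pthread_mutex_lock(&monitor);            /* enter the monitor           */
    while (numfull == NUM_SLOTS)             /* wait until a slot is free   */
        pthread_cond_wait(&emptySlot, &monitor);
    buffer[tail] = data;
    tail = (tail + 1) % NUM_SLOTS;
    numfull = numfull + 1;
    pthread_cond_signal(&fullSlot);          /* an occupied slot now exists */
    pthread_mutex_unlock(&monitor);          /* exit the monitor            */
}

int Receive(void) {
    pthread_mutex_lock(&monitor);
    while (numfull == 0)                     /* wait if buffer empty        */
        pthread_cond_wait(&fullSlot, &monitor);
    int data = buffer[head];
    head = (head + 1) % NUM_SLOTS;
    numfull = numfull - 1;
    pthread_cond_signal(&emptySlot);         /* an empty slot now exists    */
    pthread_mutex_unlock(&monitor);
    return data;
}

void *producer(void *arg) {
    for (int i = 0; i < 20; i++)
        Send(i);
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++)
        printf("received %d\n", Receive());
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
----------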