Race Conditions

In an operating system, multiple processes often read from and write to shared resources (memory, storage, a printer, etc.).

A race condition is a situation in which two or more processes read or write some shared data and the final result depends on exactly who runs when.

It is an undesirable situation that occurs when a device or system attempts to perform two or more operations at the same time, but because of the nature of the device or system, the operations must be done in the proper sequence to be done correctly.

Consider an operating system that provides printing services: a printer system process picks print jobs from a spooler. Each job is placed in a numbered slot, which the spooler tracks using two global variables, "In" and "Out".

Two processes, A and B, need printing services, and each has its own local variable named "next free slot".

  • Process A finds from "In" that slot 7 is free, so it records this number in its local variable, i.e. next free slot = 7.
  • Process A uses up its time slice, and the scheduler now gives the CPU to Process B.
  • Process B also needs to print, so it too reads "In" and records next free slot = 7. Process B submits its job in slot 7, adds 1 to its local variable and stores the result in the global variable, so "In" is now 8.
  • Process A comes back for execution and resumes from where it was switched out. Since it still has 7 in its local variable, it submits its job in slot number 7, overwriting the job submitted by Process B, and again overwrites the global variable, i.e. "In" = 8 (a code sketch of this sequence follows the list).
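
The sequence above can be sketched in a few lines of C. It is only an illustration: spooler[], in and submit_job() are made-up names standing in for the spooler's real data structures, not part of any real printing API.

    #define SPOOL_SIZE 100

    char *spooler[SPOOL_SIZE];    /* shared print-job slots                      */
    int in = 7;                   /* the global "In": index of the next free slot */

    void submit_job(char *job)
    {
        int next_free_slot = in;        /* read the shared variable              */
        /* If the scheduler switches to another process here, that process
           also reads 7, stores its job in slot 7 and sets in to 8.              */
        spooler[next_free_slot] = job;  /* this write overwrites slot 7          */
        in = next_free_slot + 1;        /* and "In" becomes 8 a second time      */
    }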

Mutual Exclusion

Mutual exclusion prevents all other processes from using a shared resource while one process is using it. To avoid race conditions, there must be a way to ensure that if one process is using a shared variable (or file), the other processes are not allowed to use it at the same time.

In other words, while a process is using a shared variable, every other process is excluded from using that same variable; this is called Mutual Exclusion.

Critical Section

The problem in the above example occurred because process B started using the shared variable before process A was finished with it. The part of a program where shared memory is accessed is called the Critical Section. If we arrange things so that no two processes are ever in their critical sections at the same time, we can avoid race conditions.

Various techniques are used for achieving Mutual Exclusion so that when one process is busy updating shared memory in its critical region, no other process can enter its region and cause problems.
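
For a concrete picture of what this looks like in code, the short program below uses a POSIX pthread mutex (a modern primitive, not one of the techniques discussed in this article) so that two threads incrementing a shared counter are never inside their critical sections at the same time; removing the lock and unlock calls reintroduces the race and the final count becomes unpredictable.

    #include <stdio.h>
    #include <pthread.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    int shared_counter = 0;                    /* the shared variable           */

    void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);         /* enter the critical section    */
            shared_counter++;                  /* only one thread runs this     */
            pthread_mutex_unlock(&lock);       /* leave the critical section    */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %d\n", shared_counter);  /* always 200000             */
        return 0;
    }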

Disabling Interrupts

In this technique, each process disables all interrupts immediately after entering its critical section and re-enables them just before leaving it. With interrupts disabled, no clock interrupt can occur, so the CPU cannot be switched to another process while the shared data is being updated.

The problem is that if a process disables interrupts and, for any reason, never re-enables them, the entire system comes to a halt.
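
A minimal sketch of the idea, assuming hypothetical primitives disable_interrupts() and enable_interrupts() (on x86 hardware these would correspond to the cli and sti instructions); the technique is only available to kernel code, so this cannot run as an ordinary user program.

    extern void disable_interrupts(void);  /* hypothetical kernel primitive     */
    extern void enable_interrupts(void);   /* hypothetical kernel primitive     */

    int shared_value;                      /* data shared with other processes  */

    void update_shared_value(int v)
    {
        disable_interrupts();   /* no clock interrupt -> no context switch here */
        shared_value = v;       /* critical section                             */
        enable_interrupts();    /* if this call is ever skipped, the system     */
                                /* stops responding to interrupts entirely      */
    }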

Lock Variables

Here we have a single, shared lock variable whose initial value is 0, meaning the resource is available. A process that wants the resource sets the lock to 1 and enters its critical section. If another process then wants the resource, it finds the lock value is 1 and must wait until the value becomes 0 again.

The problem is that a process may read the lock as 0, but before it can set it to 1, the CPU schedules another process, which also reads 0 and sets the lock to 1. When the first process runs again, it too sets the lock to 1, and both processes end up in their critical regions at the same time, as the sketch below shows.
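
Written out in C, the flawed scheme looks roughly like this; lock, enter_region() and leave_region() are illustrative names for the idea described above, not a real API.

    int lock = 0;                  /* shared: 0 = resource free, 1 = in use     */

    void enter_region(void)
    {
        while (lock != 0)          /* (1) wait until the lock reads 0           */
            ;                      /*     busy-waiting                          */
        /* (2) a context switch between the test above and the assignment
           below lets a second process also see lock == 0                       */
        lock = 1;                  /* (3) now both processes "own" the lock     */
    }

    void leave_region(void)
    {
        lock = 0;                  /* release the resource                      */
    }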

The Producer-Consumer or Bounded Buffer Problem

Two processes share a common buffer. One is a producer, which puts information into the buffer, and the other one is a consumer that takes it out.

If the producer wants to put a new item into the buffer but the buffer is already full, the producer should go to sleep and be awakened by the consumer once it has removed one or more items. Similarly, if the consumer wants to take an item but the buffer is empty, it goes to sleep until the producer puts something into the buffer and wakes it up.

To keep track of the number of items in the buffer we need a variable, count. If the maximum number of items the buffer can hold is N, the producer first tests whether count is N; if it is, the producer goes to sleep, and if it is not, the producer adds an item and increments count.

Similarly, the consumer tests count to see if it is 0. If it is, the consumer goes to sleep; if it is not, the consumer removes an item and decrements count.

The problem is this: the buffer is empty and the consumer has just read count as 0, but before it can go to sleep the scheduler switches to the producer. The producer inserts an item into the buffer, increments count to 1 and sends a wakeup to the consumer.

Since the consumer was not yet asleep, the wakeup signal is lost. When the consumer runs again, it still acts on the value it read earlier (count = 0), even though count is now 1, so it goes to sleep. Eventually the producer fills the buffer completely and also goes to sleep, and both sleep forever. A sketch of this flawed scheme follows.
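
Written as C-like pseudocode, the flawed scheme looks roughly like this; sleep_proc(), wakeup_proc(), produce_item(), insert_item(), remove_item() and consume_item() are hypothetical helpers standing in for the sleep/wakeup calls and the buffer code, and the lost-wakeup window is marked in the comments.

    #define N 100                          /* number of slots in the buffer     */

    int count = 0;                         /* shared count of items             */

    extern void sleep_proc(void);          /* block the calling process         */
    extern void wakeup_proc(void (*p)(void)); /* wake the named process         */
    extern int  produce_item(void);
    extern void insert_item(int item);
    extern int  remove_item(void);
    extern void consume_item(int item);

    void consumer(void);                   /* forward declaration               */

    void producer(void)
    {
        while (1) {
            int item = produce_item();
            if (count == N) sleep_proc();  /* buffer full: go to sleep          */
            insert_item(item);
            count = count + 1;
            if (count == 1)                /* buffer was empty: wake consumer;  */
                wakeup_proc(consumer);     /* this wakeup is lost if the        */
        }                                  /* consumer has read count == 0 but  */
    }                                      /* has not yet gone to sleep         */

    void consumer(void)
    {
        while (1) {
            if (count == 0) sleep_proc();  /* buffer empty: go to sleep         */
            int item = remove_item();
            count = count - 1;
            if (count == N - 1)            /* buffer was full: wake producer    */
                wakeup_proc(producer);
            consume_item(item);
        }
    }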

Semaphore

A semaphore 'S' is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait and signal.

So, when the semaphore's value is 0 it means 'no wakeups', i.e. no wakeup is pending, and when its value is greater than 0 it means 'pending wakeups', i.e. one or more wakeups are pending.

Operations of Semaphore

A semaphore has two operations, called:

i) Wait or Sleep or Down
ii) Signal or Wakeup or Up

 

Wait Operation

  • Tests the semaphore's value to see whether it is greater than 0.
  • If it is, decrements the value and continues processing (the decrement means the process has consumed one pending wakeup).
  • If the value is 0, the process is put to sleep.

Signal Operation

  • Increments the value of the semaphore.
  • If one or more processes are sleeping on this semaphore, picks one and wakes it up (a sketch of both operations follows).
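
Conceptually the two operations can be sketched as below; in a real system each operation is executed atomically (the test, the update and any sleep or wakeup cannot be interrupted partway through), and the queue of sleeping processes is only indicated in the comments.

    typedef struct {
        int value;                 /* number of pending wakeups                 */
        /* plus a queue of processes sleeping on this semaphore                 */
    } semaphore;

    void down(semaphore *s)        /* wait / sleep / down                       */
    {
        if (s->value > 0) {
            s->value = s->value - 1;  /* consume one wakeup and continue        */
        } else {
            /* value is 0: put the calling process to sleep on s's queue        */
        }
    }

    void up(semaphore *s)          /* signal / wakeup / up                      */
    {
        s->value = s->value + 1;      /* add one wakeup                         */
        /* if any processes are sleeping on s, pick one and wake it up          */
    }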

In programming, especially in UNIX systems, semaphores are a technique for coordinating or synchronizing activities in which multiple processes compete for the same operating system resources. A semaphore is a value in a designated place in the operating system (or kernel) storage that each process can check and then change.

Depending on the value that is found, the process can use the resource, or it will find that the resource is already in use and must wait for some period before trying again. Semaphores can be binary (0 or 1) or can have additional values. Typically, a process using semaphores checks the value and then, if it is going to use the resource, changes the value to reflect this so that subsequent semaphore users will know to wait.

Semaphores are commonly used for two purposes: to share a common memory space and to share access to files. They are one of the techniques for inter-process communication (IPC). The C/C++ programming languages provide a set of interfaces, or "functions", for managing semaphores; a short example follows.
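
As a small illustration of those interfaces, the sketch below solves the bounded-buffer problem from the earlier section with POSIX unnamed semaphores (sem_init, sem_wait and sem_post from <semaphore.h>), using two pthreads instead of two separate processes to keep the example short.

    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    #define N 8                            /* number of slots in the buffer     */

    int buffer[N];
    int next_in = 0, next_out = 0;         /* next slot to fill / to empty      */

    sem_t empty;                           /* counts free slots, starts at N    */
    sem_t full;                            /* counts filled slots, starts at 0  */
    sem_t mutex;                           /* binary semaphore guarding buffer  */

    void *producer(void *arg)
    {
        for (int item = 0; item < 20; item++) {
            sem_wait(&empty);              /* sleep if there is no free slot    */
            sem_wait(&mutex);              /* enter the critical section        */
            buffer[next_in] = item;
            next_in = (next_in + 1) % N;
            sem_post(&mutex);              /* leave the critical section        */
            sem_post(&full);               /* wake the consumer if it sleeps    */
        }
        return NULL;
    }

    void *consumer(void *arg)
    {
        for (int i = 0; i < 20; i++) {
            sem_wait(&full);               /* sleep if the buffer is empty      */
            sem_wait(&mutex);
            int item = buffer[next_out];
            next_out = (next_out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty);              /* wake the producer if it sleeps    */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void)
    {
        sem_init(&empty, 0, N);
        sem_init(&full, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }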