Services Provided By Operating Systems

Hello friends! In our previous article we talked about what an Operating System is and how it works. If you haven't read it yet, you can find it here: Introduction to Operating System.

That article gives a good grounding in operating systems. In this post, we will discuss the main services an OS provides to the user, the services that set it apart from other software. Here are the key services provided by operating systems.


Multiprogramming

Multiprogramming is used to improve CPU utilization. In uni-programming (mono-programming), only one process executes at a time: a program is loaded and run in response to the command typed by the user, and after the process completes, the Operating System returns control to the user by prompting on the terminal and waiting for the next command.

Although mono-programming is sometimes used on small computers, it is not used on large computers with multiple users.

Reasons for Multiprogramming

  • Applications are easier to program if we split them into two or more processes.
  • Large computers are used by several people, which leads to the presence of more than one process in memory.
  • Large computers are very expensive, so time in which the CPU sits idle is wasted. To utilize the CPU more efficiently, more processes need to be in memory, so that when one process waits, another may be given control of the CPU.

In mono-programming, if a process is waiting for an I/O operation that takes 40 ms to complete and its computation takes 10 ms, then the CPU sits idle waiting for the I/O operation 80 percent of the time.

CPU Utilization = [ ( CPU Time / Total Time ) x 100 ]
CPU Utilization = [ ( 10 / 50 ) x 100 ]

CPU Utilization = 20 %

CPU Wastage = [ ( I/O Time / Total Time ) x 100 ]
CPU Wastage = [ ( 40 / 50 ) x 100 ]

CPU Wastage = 80 %
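The arithmetic above can be sketched in a few lines of Python (the function name is just illustrative):

```python
def cpu_utilization(cpu_time_ms, io_time_ms):
    """Percentage of total time the CPU spends computing (mono-programming)."""
    total = cpu_time_ms + io_time_ms
    return cpu_time_ms / total * 100

# The example above: 10 ms of computation, 40 ms of I/O wait.
print(cpu_utilization(10, 40))        # 20.0 -> CPU Utilization
print(100 - cpu_utilization(10, 40))  # 80.0 -> CPU Wastage
```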

[Figure: timeline of processes. Exercise: calculate the CPU Utilization and CPU Wastage for this timeline.]

In multiprogramming, the CPU will not sit idle, as the Operating System switches it to the next job. When that job needs to wait, the CPU is switched to another job, and so on.

Process / Job execution in Multi-Programmed Operating System

CPU Utilization = { [ ( CPU Times of P1 + P2 + P3 + P4 ) / Total Time (CPU + I/O) ] x 100 }
CPU Utilization = [ ( (12 + 6 + 4 + 8) / 36 ) x 100 ]

CPU Utilization = 83.33 %
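As a quick sanity check, the multiprogrammed figure can be reproduced with a small Python sketch (names are illustrative):

```python
def multiprog_utilization(cpu_times_ms, total_time_ms):
    """CPU utilization when several jobs overlap their I/O waits."""
    return sum(cpu_times_ms) / total_time_ms * 100

# P1..P4 from the example: CPU bursts of 12, 6, 4 and 8 ms over a 36 ms span.
print(round(multiprog_utilization([12, 6, 4, 8], 36), 2))  # 83.33
```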

[Figure: timeline of processes. Exercise: calculate the CPU Utilization and CPU Wastage for this timeline.]

In multiprogramming, the CPU sits idle only when there is no job to execute.

Multiprogramming is done in two ways:

  • Multiprogramming with Fixed partitions.
  • Multiprogramming with Variable partitions.


Time Sharing (Multi-Tasking):

Time-sharing, also called Multi-tasking, is a logical extension of multiprogramming. The CPU executes multiple jobs by switching among them, and the switches occur so frequently that the user can interact with each program while it is running.

This rapid switching of the CPU back and forth between programs is called “pseudoparallelism”. As the system switches rapidly from one user to the next, each user assumes that only he is using the system, but in actuality the CPU is sharing its time among all the users.

Time-sharing systems are sophisticated. A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer. Each user has a program in memory, and when it executes, only a small portion of CPU time is given to that program.

In that time the program either finishes or needs to perform I/O. Since I/O operations are slow, the operating system rapidly switches the CPU to another program; instead of sitting idle, the CPU's time is given to another process.
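The switching described above is usually driven by a fixed time slice (quantum). A toy round-robin sketch, with illustrative names and unit-less times, shows how the CPU is handed around:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Toy time-sharing scheduler. jobs maps name -> remaining CPU time;
    returns the order in which the CPU was handed to each job."""
    ready = deque(jobs.items())
    timeline = []
    while ready:
        name, remaining = ready.popleft()
        timeline.append(name)            # CPU switched to this job
        remaining -= quantum
        if remaining > 0:                # not finished: back of the queue
            ready.append((name, remaining))
    return timeline

# Three users' programs sharing one CPU with a 2-unit time slice.
print(round_robin({"A": 4, "B": 2, "C": 6}, 2))
# ['A', 'B', 'C', 'A', 'C', 'C']
```

Because the time slice is short relative to human reaction time, each user sees his program make steady progress, which is the pseudoparallelism described above.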


Multi-Processing

Systems that have more than one CPU are called “Multi-processing Systems”. In multiprocessing systems, the CPUs share the computer bus and sometimes memory, I/O devices, etc.

Multi-processor systems are faster and more reliable than systems that cannot do multiprocessing. However, the speed-up ratio with N processors is not N, but less than N: when multiple processors work on the tasks, extra coordination overhead must be handled to keep processing correct, and this overhead lowers the expected gain.

Multiprocessor systems are reliable in the sense that the system will not halt or crash when a processor fails; it will only slow down. Suppose a multiprocessor system has ten processors and one of them fails: the remaining nine pick up the share of the work of the failed processor, so the entire system runs only about 10 percent slower instead of failing altogether.
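The ten-processor example is simple proportional arithmetic, sketched here (names are illustrative, and the redistribution overhead mentioned above is ignored):

```python
def remaining_capacity(total_cpus, failed_cpus):
    """Percentage of processing capacity left after some CPUs fail,
    assuming the remaining work is shared evenly among the survivors."""
    return (total_cpus - failed_cpus) / total_cpus * 100

print(remaining_capacity(10, 1))  # 90.0 -> the system runs about 10% slower
```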

Graceful Degradation

The ability to keep providing services even when a hardware failure occurs is called Graceful Degradation.


A system that is designed for graceful degradation is also called Fail-Soft. Two techniques are mostly used in multiprocessing systems:

  • Symmetric Multiprocessing.
  • Asymmetric Multiprocessing.

In Symmetric Multiprocessing, each processor runs a copy of the Operating System and these copies communicate with each other when needed. Encore’s version of Unix for Multimax Computers uses Symmetric Multiprocessing. This computer has many processors and each processor runs a copy of Unix.

The benefit here is that all processes can run at once (N processes can run at the same time when there are N CPUs) without a deterioration in performance. In such systems, I/O must be controlled carefully to ensure that data reaches the appropriate processor.

Inefficiencies may still occur: since the CPUs are separate, one may sit idle while another is overloaded. To avoid such inefficiencies, a multiprocessing system should allow jobs, resources, etc. to be shared among the various processors. These kinds of systems are delicate and must be written carefully.

In Asymmetric Multiprocessing, each processor is assigned a specific task. The processor that controls the system is called the master processor and it gives instructions to the slave processors. So, in Asymmetric multiprocessing, we have a master-slave relationship. Master processor schedules and allocates work to the slave processors.

Asymmetric multiprocessing is normally used in very large systems, where the most time-consuming activity is the processing of I/O.
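The master-slave relationship can be sketched with threads standing in for processors; the queue plays the role of the master's scheduling channel, and all names here are illustrative:

```python
import queue
import threading

def slave(task_q, done):
    """A slave only executes what the master assigns to it."""
    while True:
        task = task_q.get()
        if task is None:           # master's shutdown signal
            break
        done.append(task * task)   # stand-in for real work
        task_q.task_done()

tasks, results = queue.Queue(), []
workers = [threading.Thread(target=slave, args=(tasks, results)) for _ in range(3)]
for w in workers:
    w.start()
for job in range(5):               # the master allocates the work...
    tasks.put(job)
for _ in workers:                  # ...and tells each slave to stop
    tasks.put(None)
for w in workers:
    w.join()
print(sorted(results))             # [0, 1, 4, 9, 16]
```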


Distributed Systems

Distributed Systems distribute jobs or computation among several physical processors. Several processors interconnected by a communication network form a Distributed System.

The processors in a Distributed System may vary in size and function. They may include small microprocessors, workstations, minicomputers, and even large general-purpose computer systems. These processors are referred to by a number of different names: sites, nodes, computers, machines, etc.

Generally, one site, the Server, has a resource that another site, the client (user) would like to use. It is the purpose of the Distributed System to provide an efficient and convenient environment for this type of Sharing of Resources.

Two schemes are mostly used for building Distributed Systems.

i) Tightly Coupled System

In a Tightly Coupled System, the processors share Memory and a Clock. In such kind of Multiprocessor Systems, communication takes place through the Shared Memory.

ii) Loosely Coupled System

In a Loosely Coupled System, the processors do not share Memory or Clock, and each processor has its own Local Memory, and the processors communicate with each other through various communication lines i.e. High-Speed Buses or Telephone Lines.

Reasons for building Distributed Systems

A Distributed System is designed due to the following reasons

1) Resource Sharing

In a Distributed System, users can share or use the resources available at other sites or machines. A user working in a Distributed Environment can use the Printer (resource) connected with some other computer. Similarly, a user can Access files present on some other computer in a Distributed Environment. So, Processing information in a Distributed Database, Printing files at remote sites and using remote specialized hardware are all examples of resource sharing in a Distributed System.

2) Computation Speed Up

Computation speed can be increased in a Distributed Environment if computation can be partitioned into a number of sub-computations that can run concurrently.
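A minimal sketch of this partitioning, with threads standing in for separate sites (a real distributed system would ship each chunk to a different machine; all names here are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """One sub-computation: the sum of squares over a slice of the data."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, parts=4):
    """Partition the computation into sub-computations that run concurrently."""
    size = max(1, len(data) // parts)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=parts) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum_of_squares(list(range(100))))  # 328350, same as a serial sum
```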

Load Sharing: If a particular site is overloaded with jobs, some of them can be moved or transferred to other sites in the Distributed System that are not overloaded or are lightly loaded. This movement of jobs is called Load Sharing.

3) Reliability

Distributed Systems achieve high reliability in that if one site fails for any reason, the remaining sites can continue operating.

If a system is composed of a number of small machines, each responsible for some important system function, then a single failure (hardware or software) in that environment may halt the whole system.

On the other hand, Distributed Systems are reliable in a way that the system can continue its operation even if some of the sites have failed due to any reason or have stopped working.

4) Communication

Distributed Systems provide a mechanism by which programs can exchange data with one another. A user can transfer files to a system at a geographically distant location. Similarly, electronic mail provides a way of communicating with other users working at remote sites.


Spooling

SPOOL stands for “Simultaneous Peripheral Operations On-Line”. To spool a computer document or task list (or “job”) is to read it in and store it, usually on a hard disk or larger storage medium, so that it can be printed or otherwise processed at a more convenient time (for example, when a printer has finished printing its current document). One can envision spooling as reeling a document or task list onto a spool of thread so that it can be unreeled at a more convenient time.

The spooling of documents for printing and batch job requests still goes on in mainframe computers where many users share a pool of resources. On personal computers, your print jobs (for example, a Web page you want to print) are spooled to an output file on the hard disk if your printer is already printing another file.

The idea of spooling originated in early computer days when the input was read in on punched cards for immediate printing (or processing and then immediately printing of the results). Since the computer operates at a much faster rate than input/output devices such as printers, it was more effective to store the read-in lines on a magnetic disk until they could be conveniently printed when the printer was free and the computer was less busy working on other tasks. Actually, a printer has a buffer but frequently the buffer isn’t large enough to hold the entire document, requiring multiple I/O operations with the printer.
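At its core, a print spooler is a queue drained by a slow consumer. A minimal sketch with a thread standing in for the printer (file names are made up):

```python
import queue
import threading
import time

spool = queue.Queue()   # jobs wait here, as if stored on disk
printed = []

def printer():
    """The slow device drains the spool at its own pace."""
    while True:
        job = spool.get()
        if job is None:            # shutdown signal
            break
        time.sleep(0.01)           # pretend printing is slow
        printed.append(job)

t = threading.Thread(target=printer)
t.start()
for doc in ["report.txt", "webpage.html", "invoice.pdf"]:
    spool.put(doc)                 # the application returns immediately
spool.put(None)
t.join()
print(printed)                     # all three documents, in submission order
```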



Buffering

A buffer is a data area shared by hardware devices or program processes that operate at different speeds or with different sets of priorities. The buffer allows each device or process to operate without being held up by the other. For a buffer to be effective, the designer must consider the size of the buffer and the algorithms for moving data into and out of it. Like a cache, a buffer is a “midpoint holding place”, but it exists not so much to accelerate the speed of activity as to support the coordination of separate activities.

This term is used both in programming and in hardware. In programming, buffering sometimes implies the need to screen data from its final intended destination so that it can be edited or otherwise processed before being moved to a regular file or database.
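The coordination role of a buffer is the classic bounded producer-consumer pattern; a minimal sketch (sizes and names are illustrative):

```python
import queue
import threading

buf = queue.Queue(maxsize=4)       # bounded buffer between the two parties

def producer(items):
    for item in items:
        buf.put(item)              # blocks only when the buffer is full
    buf.put(None)                  # end-of-stream marker

def consumer(out):
    while True:
        item = buf.get()
        if item is None:
            break
        out.append(item)

received = []
threads = [threading.Thread(target=producer, args=(range(10),)),
           threading.Thread(target=consumer, args=(received,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(received)                    # the ten items, in order
```

Neither side ever waits on the other longer than it takes to fill or empty the four-slot buffer, which is exactly the decoupling described above.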