Operating systems and multitasking

Updated Aug 31, 2022 · 14 min read

In this chapter, I’ll continue the series with another important OS concept: multitasking.

Multitasking is a common feature of modern operating systems. It involves switching between processes so that each process gets a slice of the CPU’s time within a given period.

To achieve this, the operating system puts a process that can afford to wait on hold and starts (or resumes) another one.

This switching happens faster than we can perceive, so we never notice it.

This cycle goes on for as long as the system is switched on.

A bit of history on OS multitasking

One of the early forms of multitasking in operating systems was cooperative multitasking.

In cooperative multitasking, each process could take up the CPU’s time for as long as it needed, and it voluntarily gave up the CPU to another waiting process.

There was a caveat to this approach, though.

A poorly written program could hold on to the CPU indefinitely, without ever sharing it with other processes.

And if it crashed due to a bug, the whole system would crash.

Cooperative multitasking was the scheduling scheme in early versions of Microsoft Windows and classic Mac OS. However, except for specific applications, it’s no longer used.

Multitasking in modern times

The multitasking scheme used in today’s systems is preemptive multitasking.

In preemptive multitasking, the operating system has full control over resource allocation and determines how long each process gets the CPU’s time.

The operating system keeps track of the state of each process and uses scheduling algorithms to choose the next process to run.

Each process can be in one of these states during its lifetime in memory:

  1. Created: The process has just been created
  2. Ready: The process is residing in memory, waiting for CPU time
  3. Running: The process’s instructions are being executed by the CPU
  4. Blocked: The process’s execution has been paused while it waits for an event, such as I/O
  5. Terminated: The process has finished and is about to be removed from memory
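
To make the lifecycle concrete, here’s a minimal sketch of these states as a Python enum (the names and the transition note are illustrative, not a real OS API):

```python
from enum import Enum, auto

class ProcessState(Enum):
    """The five classic states a process moves through."""
    CREATED = auto()     # just created by the OS
    READY = auto()       # in memory, waiting for CPU time
    RUNNING = auto()     # instructions being executed by the CPU
    BLOCKED = auto()     # paused, e.g. waiting for an I/O event
    TERMINATED = auto()  # finished, about to be removed from memory

# Typical lifecycle: CREATED -> READY -> RUNNING -> (BLOCKED -> READY)* -> TERMINATED
```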

The act of switching between processes is called context switching.

Context switching involves storing the state of the current process (to make sure it can be resumed at a later point) and starting (or resuming) another process.

How does context switching work?

Context switching is done as a result of a CPU interrupt.

But what is an interrupt, you may ask?

An interrupt is an action taken by the CPU to pause a running process. It takes the form of a signal delivered to the operating system, so the operating system can take the necessary actions on the running process.

An interrupt is issued in response to hardware or software events. This categorizes interrupts into hardware interrupts and software interrupts, respectively.

Upon receiving an interrupt signal (from the CPU), the operating system’s scheduler forcibly blocks the running process and prioritizes another process to run.

When a hardware device causes an interrupt

Hardware interrupts happen due to an electronic or physical state change in a hardware device (e.g., a mouse click or a keyboard keypress) that requires immediate attention from the CPU.

For example, when you move your mouse, an interrupt request (IRQ) is transmitted to the CPU, as soon as the move is detected by the mouse electronics.

Interrupt requests are detected by dedicated hardware, the interrupt controller, which continuously watches for incoming hardware interrupts and delivers them to the CPU.

During each instruction cycle (the fetch-decode-execute cycle), the processor samples the interrupt signal to figure out which device it belongs to; it can be the keyboard, the mouse, a hard disk, etc.

Upon receiving an interrupt request, the CPU interrupts the current process as soon as possible (not always instantly) and sends the appropriate interrupt signal to the operating system’s kernel.

The kernel takes over and, based on the interrupt signal, schedules another process to run; this can be a process that handles the interrupt, such as one that repositions the mouse pointer on the screen.

Not all interrupt requests are served the same way, though.

Some requests don’t need to be handled instantly, while some are time-sensitive and have to be taken care of in real-time.

That’s why when you move your mouse, you never have to wait a few seconds to see the pointer reposition itself.

What is a software interrupt?

A software interrupt is issued by the CPU when it reaches an instruction with one of the following characteristics:

  1. The instruction initiates an I/O operation; for instance, reading input from the keyboard, or writing output to the disk or the screen
  2. The instruction requests a system call (to use the low-level services of the OS)
  3. The instruction leads to an error, such as dividing a number by zero
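
As a rough illustration in Python (a sketch, not a description of exact kernel behavior; syscall details vary by OS), all three cases show up in everyday code:

```python
import os

# 1. I/O: os.write issues a write system call under the hood; the CPU
#    traps into the kernel, which drives the actual device.
os.write(1, b"written via a system call\n")  # fd 1 = standard output

# 2. A system call: os.getpid() asks the kernel for our process ID.
print("my pid:", os.getpid())

# 3. An error: at the machine level, an integer divide by zero triggers
#    a CPU exception; Python detects it in software and raises an
#    exception instead of letting the fault kill the process.
try:
    1 / 0
except ZeroDivisionError:
    print("divide-by-zero was caught and handled")
```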

You’re probably wondering what an I/O operation is, and why it leads to a CPU interrupt.

Let’s break it down.

I/O, also known as input/output, refers to every interaction between the system and the outside world.

Devices involved in this interaction are called Peripheral devices or I/O devices.

Screens, printers, keyboards, hard disks, and network cards are considered peripheral devices.

An example of this two-way communication is when you use a keyboard to put data into the computer and use a screen to get the output.

I/O isn’t just a human-to-machine interaction, though; the communication between the CPU and other components in the system is also considered I/O.

For instance, when a file is being fetched from the hard disk (disk I/O), or an image is being downloaded from the Internet (network I/O).

I/O devices are much slower than the CPU.

The reason is that the CPU works at electronic speeds, while I/O devices might require mechanical movement (like hard disks) or be affected by network latency (like network cards).

Let’s suppose a process initiates an I/O operation like waiting for the user to input a value.

Now, if the CPU couldn’t issue an interrupt and switch to another task while a slow I/O operation was in progress, it would remain idle until the I/O operation completed.

Imagine if Spotify stopped playing every time you typed a URL into your browser’s address bar.

Think of the CPU as a chess grandmaster playing 20 people simultaneously. She moves on to the next player while the current opponent is thinking about their next move, and comes back once they have made it.

Otherwise, the game would take more than a couple of days.

Multitasking was the answer to this significant speed difference between the slow and ultra-fast components of a computer system, allowing the faster part to switch to other tasks while the slower part is still working.
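
Here’s a minimal Python sketch of that idea (the background task is hypothetical, simulated with prints and sleeps): one thread keeps “playing music” while the main thread is blocked on slow keyboard I/O.

```python
import threading
import time

def play_music():
    # A background task that keeps running while the main thread
    # is blocked on slow I/O.
    for _ in range(5):
        print("...music keeps playing...")
        time.sleep(1)

threading.Thread(target=play_music, daemon=True).start()

# input() blocks on keyboard I/O, but instead of leaving the CPU idle,
# the OS simply runs other threads and processes in the meantime.
url = input("Type a URL: ")
print("You typed:", url)
```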

How the operating system handles an interrupt signal

Upon receiving a software interrupt signal from the CPU, the kernel sets the current process’s status to blocked and, with the help of the CPU, swaps the current execution state (the data stored in the CPU registers) out of the CPU.

Next, the OS looks up the interrupt signal in a table called the interrupt vector table (IVT) to find the interrupt handler associated with that interrupt.

An interrupt handler is a piece of code that handles the respective interrupt event. The IVT is a data structure that maps each interrupt to its associated interrupt handler.
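
Conceptually, you can picture the IVT as a lookup table that maps interrupt numbers to handler routines. Here’s a toy Python sketch (the numbers and handler names are made up for illustration; a real IVT lives in kernel memory and holds the addresses of machine-code routines):

```python
def handle_divide_error():
    print("handling a divide-by-zero fault")

def handle_keyboard():
    print("handling a keyboard interrupt: read the keypress")

def handle_disk():
    print("handling a disk interrupt: the requested block is ready")

# Toy interrupt vector table: interrupt number -> handler routine.
ivt = {
    0: handle_divide_error,  # CPU exception
    1: handle_keyboard,      # hardware IRQ
    14: handle_disk,         # hardware IRQ
}

def dispatch(interrupt_number):
    # The kernel looks the interrupt up and runs its handler.
    handler = ivt.get(interrupt_number)
    if handler is None:
        print(f"no handler registered for interrupt {interrupt_number}")
        return
    handler()

dispatch(1)  # prints: handling a keyboard interrupt: read the keypress
```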

The kernel then executes the interrupt handler; it runs in kernel mode, rather than as an ordinary user process.

Once the interrupt handler’s execution is done, the interrupted process can be resumed.

For instance, when an instruction writes data to a file (like a program saving a file), the CPU issues an interrupt signal to the OS.

Consequently, the OS blocks the current process (the one containing the instruction). Then it looks up the appropriate interrupt handler in the IVT and prioritizes it to run; the handler takes care of placing the data on the physical storage device.

Once the data is written to the physical disk, the CPU detects an interrupt request from the disk, which indicates the completion of data storage.

Consequently, the CPU issues a disk interrupt signal to the operating system, which eventually causes the blocked process to resume execution.

Threads and multithreading

The instructions of a process are treated by the OS as a stream of instructions, called a thread, or a thread of execution.

Consequently, the terms process instructions and thread instructions can be used interchangeably.

The thread’s instructions are normally executed sequentially; however, there are times when some instructions don’t have to wait their turn, because they are fully independent of the others.

Let’s suppose we have a process with 100 instructions; the last ten, however, can be executed independently and don’t have to wait until the first ninety are executed.

To benefit from the OS multitasking capabilities, one approach is to put the last 10 instructions in a separate process within the same program.

Most programming languages provide features to help you write programs to run as multiple processes; so once the program runs, multiple processes will be created in the memory.

However, this is not the only way to use the OS multitasking features.

If we just want our independent instructions to run concurrently, and memory protection is not a concern, we can create another thread within the main process; that way, it’ll be one process with two threads of execution.

Every process has at least one thread of execution.

Threads are described as lightweight processes because switching between threads of the same process is much cheaper than a full context switch: they share their parent process’s data (the Process Control Block).

Now, if we put those last 10 instructions in another thread, we’ll have one process with two threads.
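
Here’s a minimal sketch of that idea in Python using the standard threading module (the “independent work” is simulated; deciding what to split off is up to the programmer):

```python
import threading

def independent_work():
    # Stands in for the last ten instructions, which don't depend
    # on the rest of the program.
    total = sum(range(10))
    print("independent work finished, total =", total)

# The main thread runs the first ninety "instructions".
print("main work running...")

# Start a second thread of execution within the same process.
t = threading.Thread(target=independent_work)
t.start()

print("main work continues while the second thread runs")
t.join()  # wait for the second thread to finish before exiting
```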

The operating system schedules a thread much like a process, with one exception: a full context switch isn’t necessary when switching between threads of the same process.

Multithreading is essentially multitasking within a single process.

It’s the programmer who decides whether the instructions should be split into multiple threads or not.

Memory protection doesn’t apply to the threads of a process, as they all share the same Process Control Block and resources, such as the heap and file descriptors.

Since threads share the same resources, it’s important to understand how to synchronize access to those resources, so that the work of one thread isn’t overwritten or corrupted by another thread.
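
For example, here’s a minimal Python sketch of synchronizing access to a shared counter with a lock (without the lock, concurrent increments can be lost):

```python
import threading

counter = 0
lock = threading.Lock()

def increment_many(n):
    global counter
    for _ in range(n):
        # The lock ensures only one thread updates the shared
        # counter at a time, so no increment is lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment_many, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 400000 with the lock in place
```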

A process with multiple threads vs. multiple processes

To improve the performance of a program and use the OS multitasking capabilities, the programmer writes a program either as multiple threads within one process or as multiple independent processes.

But which approach is best?

Well, it depends; each strategy has advantages over the other under different circumstances.

Creating and switching between threads is cheaper than doing so between processes; the reason is that threads share the parent process’s context, so no separate Process Control Block has to be created for each thread.

However, using multiple processes yields better memory management.

For instance, in case of memory shortage, an inactive process can be swapped out to disk, to free up the memory needed for another process.

That cannot happen with multiple threads.

Another benefit of using multiple processes is memory protection, which prevents one process from affecting other processes.
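
Here’s a small Python sketch of that isolation (the variable is illustrative): a change made in a child process doesn’t affect the parent’s copy, because each process has its own memory.

```python
from multiprocessing import Process

data = ["original"]

def child():
    # This runs in a separate process with its own memory,
    # so it only mutates the child's private copy.
    data[0] = "changed by child"
    print("child sees:", data)

if __name__ == "__main__":
    p = Process(target=child)
    p.start()
    p.join()
    # The parent's memory is untouched.
    print("parent sees:", data)  # ['original']
```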

Let’s make this clearer with an example:

Google Chrome manages each tab as a separate process; this means anytime you open a new tab, a new process is created.

Each open tab involves rendering a page and executing its JavaScript code.

If the tabs were programmed as multiple threads, a non-responsive tab would affect the other tabs, and eventually the web browser itself (because threads use the same memory space).

However, if each tab is managed as a process, then thanks to process-level memory protection, bugs and glitches in one tab (one process) won’t have any effect on the other tabs (other processes).

Although creating and maintaining multiple processes has more overhead compared to threads, the Google Chrome team traded this fixed cost for a more reliable application.

Inter-process communication

Although processes cannot access each other’s data, the operating system may provide mechanisms to enable processes to interact with each other in a safe and predictable way – if needed.

This is called inter-process communication (IPC).
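
One common IPC mechanism is a message queue. Here’s a minimal Python sketch using multiprocessing.Queue (the message itself is just an illustration):

```python
from multiprocessing import Process, Queue

def worker(queue):
    # The child process can't touch the parent's memory directly,
    # so it sends a message through the OS-managed queue instead.
    queue.put("hello from the child process")

if __name__ == "__main__":
    queue = Queue()
    p = Process(target=worker, args=(queue,))
    p.start()
    print(queue.get())  # hello from the child process
    p.join()
```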

Alright, we’ve reached the end of this chapter. I hope it gave you a high-level overview of multitasking in operating systems.

In the next chapter, we’ll cover another important OS concept, called Memory Management.

If you think something is missing, or if there’s anything I’ve gotten wrong, please let me know in the comments below.

Disclaimer: This post may contain affiliate links. I might receive a commission if a purchase is made. However, it doesn’t change the cost you’ll pay.
