Operating systems and memory management
In the previous chapter, we discussed how the OS and the CPU team up to make multitasking possible. In this chapter, I'll continue our discussion with another important OS concept: memory management.
Any limited resource needs resource management, and the main memory is no exception.
Memory management is about keeping track of allocated and free memory space, so the OS can allocate memory to new processes.
And if there isn't enough free memory to keep all active processes resident, idle processes may have their memory swapped out to an area of the hard disk called the backing store, making room for the running processes.
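To make the idea concrete, here's a minimal sketch of swapping. It's a toy model, not how a real OS does it: RAM and the backing store are just dictionaries, and `swap_out`/`swap_in` are hypothetical helper names.

```python
# Toy model: process memory images keyed by process ID.
ram = {"pid_1": b"...code and data...", "pid_2": b"...code and data..."}
backing_store = {}  # stands in for a region of the hard disk

def swap_out(pid):
    """Move an idle process's memory from RAM to the backing store."""
    backing_store[pid] = ram.pop(pid)

def swap_in(pid):
    """Bring a swapped-out process back into RAM before it runs again."""
    ram[pid] = backing_store.pop(pid)

swap_out("pid_2")          # pid_2 is idle; free up its RAM
print("pid_2" in ram)      # False
swap_in("pid_2")           # pid_2 is scheduled again
print("pid_2" in ram)      # True
```

The point is the round trip: a swapped-out process loses its place in RAM but keeps its state, so it can resume exactly where it left off once swapped back in.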
Computer programs have no idea of physical memory addresses; they use a virtualized address space instead.
As far as a process knows, it’s the only program residing in the memory.
Every memory address used across the system is a logical memory address.
This address is mapped to the actual physical address by the MMU (Memory Management Unit) upon each read/write request.
The MMU is a hardware component that sits between the CPU and the main memory; in modern systems, it's integrated into the CPU chip itself.
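Here's a simplified sketch of what that mapping looks like with a single base register: the logical address is just an offset that gets added to the process's base. The function name and the figures are made up for illustration, and `MemoryError` stands in for the hardware fault a real MMU would raise.

```python
def translate(logical_addr, base, limit):
    """Map a process-relative (logical) address to a physical one,
    the way a simple base/limit MMU would on every memory access."""
    if not 0 <= logical_addr < limit:
        # Out-of-range access: a real CPU would trap to the OS here.
        raise MemoryError("segmentation fault: address out of range")
    return base + logical_addr

# A process loaded at physical address 30000 with 12000 bytes of space:
print(translate(5000, base=30000, limit=12000))  # 35000
```

Notice that the process itself only ever sees addresses 0 through limit - 1; where its memory actually lives is the MMU's business.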
Let’s go through two different approaches the operating systems take to allocate memory to processes.
Single Contiguous Allocation
Single contiguous allocation is the simplest memory management technique, where the whole memory (except for a small portion reserved for the OS) is allocated to one program at a time.
The MS-DOS operating system allocated memory this way.
A system using a single contiguous allocation may still do multitasking by swapping the whole memory content (temporarily moving it to the hard disk) to switch between processes.
Partitioned Allocation
In partitioned allocation, the memory is split into partitions, and multiple processes are allocated memory at a time.
Partitioned allocation can be done in two ways:
One approach is to divide the memory into equal-sized partitions and allocate each partition to one process.
The problem with this approach is that not all processes need the same amount of memory, and the leftover space in each partition remains unused for as long as the owning process holds it (a waste known as internal fragmentation).
This limits the number of simultaneous processes we can keep in memory.
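A quick bit of arithmetic shows how wasteful fixed partitions can get. The partition size and process sizes below are made-up figures, just for illustration:

```python
partition_size = 64          # KB; every fixed partition is this size
process_needs = [40, 10, 64]  # KB actually needed by three processes

# Leftover space in each partition that no one else can use:
wasted = [partition_size - n for n in process_needs]
print(wasted)       # [24, 54, 0]
print(sum(wasted))  # 78 KB lost to internal fragmentation
```

Three small processes here waste more than a full partition's worth of memory between them, even though the memory is technically "allocated".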
A better approach, however, is to allocate memory to processes based on how much space they need.
The OS then keeps a list of free memory blocks (known as holes) and allocates the most suitable hole (in terms of size) when a new process is created.
This approach is the most common approach we see today in most operating systems.
When choosing a hole, three different strategies are commonly used:
First Fit: Search the list of holes and pick the first one big enough for the process.
Best Fit: Search the list of holes and pick the smallest one that satisfies the process's requirements.
This helps keep the bigger holes intact for processes that need a large chunk of memory.
Worst Fit: Pick the largest hole available, so that the leftover space is still big enough to be useful to another process.
First fit and best fit are the preferred strategies when allocating memory space to a process.
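The three strategies above can be sketched in a few lines. This is a simplified model where each hole is a `(start, length)` pair; the `allocate` function and the hole list are hypothetical, and a real allocator would also split the chosen hole and update the list.

```python
def allocate(holes, size, strategy="first"):
    """Pick a hole for a request of `size` units.
    `holes` is a list of (start, length) free blocks."""
    candidates = [h for h in holes if h[1] >= size]
    if not candidates:
        return None  # no hole is big enough
    if strategy == "first":
        return candidates[0]                          # first hole that fits
    if strategy == "best":
        return min(candidates, key=lambda h: h[1])    # smallest that fits
    if strategy == "worst":
        return max(candidates, key=lambda h: h[1])    # largest available
    raise ValueError(f"unknown strategy: {strategy}")

holes = [(0, 100), (300, 20), (500, 60)]
print(allocate(holes, 15, "first"))  # (0, 100)
print(allocate(holes, 15, "best"))   # (300, 20)
print(allocate(holes, 15, "worst"))  # (0, 100)
```

For the same 15-unit request, best fit picks the snug 20-unit hole and leaves the 100-unit hole intact, while first fit and worst fit both carve into the big one.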
As mentioned in the Multitasking chapter, each memory space given to a certain process is a subset of the main memory.
However, since the allocation is done through a virtualized layer, each process thinks it’s the only process using the memory and doesn’t know other processes even exist.
This level of separation prevents a malicious program (such as malware) from interfering with other programs' data and making them crash.
This strategy is called memory protection.
With memory protection, a process cannot modify the data of another process in the memory – either with good or bad intentions.
As a result, if a program crashes due to a bug or an external issue, the other processes in memory – and even the OS itself – won't be affected.
For instance, if a Google Chrome tab crashes, your data in MS Word is guaranteed to remain intact.
But how does this mechanism work?
Here’s the explanation:
Any running process is associated with two values, base and limit, stored in two CPU registers. These two values define the range of memory addresses that the instructions of a certain process can access.
As a result, any memory access instruction within the process is checked against these two registers by the CPU.
If a memory access is attempted outside this range, the CPU raises a fatal error and the process is terminated immediately.
This limitation only applies to user programs (programs you write or install).
The OS itself, however, is exempt from this memory access limitation, as it's supposed to handle memory allocation for all processes.
Other system programs, such as OS utilities and drivers, are normally exempt from this limitation as well.
Alright, this is the end of the fourth chapter in this series. I hope by now you have a high-level understanding of how a program works within a computer system.
Disclaimer: This post might contain affiliate links. I might receive a commission if a purchase is made. However, it doesn’t change the cost you’ll pay.