Thursday, February 17, 2005

The Kernel and Its Variants

Kernel Definition

The kernel is a program that constitutes the central core of a computer operating system. It has complete control over everything that happens in the system.

A kernel can be contrasted with a shell (such as bash, csh or ksh in Unix-like operating systems), which is the outermost part of an operating system and a program that interacts with user commands. The kernel itself does not interact directly with the user, but rather interacts with the shell and other programs as well as with the hardware devices on the system, including the processor (also called the central processing unit or CPU), memory and disk drives.

The kernel is the first part of the operating system to load into the main memory (RAM) during booting, and it remains in the memory for the entire duration of the computer session. Thus it is important for it to be as small as possible while still providing all the essential services required by the other parts of the operating system and by the various applications.

Because the code that makes up the kernel is needed continuously, it is usually loaded into a protected area of memory, which prevents it from being overwritten by other, less frequently used parts of the operating system or by application programs. The kernel performs its tasks (e.g. executing processes and handling interrupts) in kernel space, whereas everything a user normally does (e.g. writing text in a text editor or running graphical programs in the X Window System) is done in user space. This separation is made in order to prevent user data and kernel data from interfering with one another and thereby diminishing performance or causing the system to become unstable and possibly crash.
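
To make this boundary concrete, the following C sketch (assuming a 32-bit x86 Linux system, where kernel space traditionally begins at address 0xc0000000; the address is purely illustrative) shows what happens when a user-space program tries to read kernel memory: the processor's protection mechanism blocks the access and the kernel terminates the offending process with a segmentation fault, while the rest of the system keeps running.

    /* A user-space process that touches kernel memory is stopped by
     * the hardware protection mechanism and receives SIGSEGV. */
    #include <stdio.h>

    int main(void)
    {
        /* 0xc0000000 is where kernel space traditionally begins on
         * 32-bit x86 Linux; this address is an assumption used only
         * for illustration. */
        volatile char *kernel_addr = (volatile char *)0xc0000000;

        printf("Attempting to read kernel memory...\n");
        char c = *kernel_addr;  /* the kernel delivers SIGSEGV here */
        printf("Read %c (this line is never reached)\n", c);
        return 0;
    }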

When a computer crashes, it actually means the kernel has crashed. If only a single program has crashed but the rest of the system remains in operation, then the kernel itself has not crashed. A crash is a situation in which a program, either a user application or a part of the operating system, stops performing its expected functions and stops responding to other parts of the system. The program might appear to freeze. If such a program is critical to the operation of the kernel, the entire computer could freeze or crash.

The kernel provides basic services for all other parts of the operating system, typically including memory management, process management, file management and I/O (input/output) management (i.e., accessing the peripheral devices). These services are requested by other parts of the operating system or by application programs through a specified set of program interfaces referred to as system calls.
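
As a simple illustration, the following C sketch invokes the write() system call, asking the kernel to perform output on the program's behalf; control passes into kernel space for the duration of the call and then returns to user space.

    /* A minimal system call example: write(2) asks the kernel to
     * copy bytes to standard output (file descriptor 1). */
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "Hello from user space\n";

        /* Control transfers to kernel space for the duration
         * of the call, then returns here. */
        write(1, msg, sizeof(msg) - 1);
        return 0;
    }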

Process management, possibly the most obvious aspect of a kernel to the user, is the part of the kernel that ensures that each process gets its turn to run on the processor and that the individual processes do not interfere with each other by writing to their areas of memory. A process, also referred to as a task, can be defined as an executing (i.e., running) instance of a program.
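
On a Unix-like system, the fork() system call shows this definition in action: it asks the kernel to create a new process that is a running instance of the same program. A minimal C sketch:

    /* fork(2) asks the kernel's process management code to create
     * a new process (task). */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();     /* kernel duplicates this process */

        if (pid == 0) {
            printf("Child process, pid %d\n", getpid());
        } else if (pid > 0) {
            printf("Parent process, pid %d, child %d\n", getpid(), pid);
            wait(NULL);         /* ask the kernel to reap the child */
        } else {
            perror("fork");     /* kernel could not create a process */
            return 1;
        }
        return 0;
    }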

The contents of a kernel vary considerably according to the operating system, but they typically include a scheduler, which determines how the various processes share the kernel's processing time (including in what order); a supervisor, which grants use of the computer to each process when it is scheduled; an interrupt handler, which handles all requests from the various hardware devices (such as disk drives and the keyboard) that compete for the kernel's services; and a memory manager, which allocates the system's address spaces among all users of the kernel's services.
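
A user program can interact with the scheduler directly. The following C sketch uses the standard Unix calls nice() to lower the process's scheduling priority and sched_yield() to hand the processor back voluntarily; the precise effect depends on the scheduler in use.

    /* Interacting with the kernel's scheduler from user space. */
    #include <errno.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Ask the scheduler to treat this process as lower priority.
         * nice() can legitimately return -1, so errno must be checked. */
        errno = 0;
        if (nice(10) == -1 && errno != 0)
            perror("nice");

        /* Give up the CPU; the scheduler decides what runs next. */
        sched_yield();

        printf("Resumed after yielding to the scheduler\n");
        return 0;
    }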

The kernel should not be confused with the BIOS (Basic Input/Output System), though people sometimes conflate the two. The BIOS is an independent program stored in a chip on the motherboard (the main circuit board of a computer) that is used during the booting (i.e., startup) process for such tasks as initializing the hardware and loading the kernel into memory (RAM). Whereas the BIOS always remains in the computer and is specific to its particular hardware, the kernel can be easily replaced or upgraded by changing or upgrading the operating system or, in the case of Linux, by adding a newer kernel or recompiling an existing kernel.

Most kernels have been developed for a specific operating system, and usually there is only one version available for each operating system. For example, the Microsoft Windows 2000 kernel is the sole kernel for Microsoft Windows 2000, and the Microsoft Windows 98 kernel is the only kernel for Microsoft Windows 98. Linux is far more flexible in that there are numerous versions of the Linux kernel, and each of these can be modified in innumerable ways by an informed user.
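
On a running Linux system, the uname() system call reports which kernel is currently in use, which is handy given how often Linux kernels are upgraded or recompiled. A minimal C sketch:

    /* uname(2) asks the kernel to describe itself; on Linux this
     * prints something like "Linux kernel release 2.6.x". */
    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname info;

        if (uname(&info) == -1) {
            perror("uname");
            return 1;
        }
        printf("%s kernel release %s\n", info.sysname, info.release);
        return 0;
    }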

A few kernels have been designed with the intention of being suitable for use with any operating system. The best known of these is the Mach kernel, which was developed at Carnegie Mellon University and is used in the Mac OS X operating system.

The term kernel is frequently used in books and discussions about Linux, whereas it is used less often when discussing some other operating systems, such as Microsoft Windows. The reason is that the kernel is highly configurable in the case of Linux, and the user is encouraged to learn about and modify it and/or download and install updated versions. With the Microsoft Windows operating systems, in contrast, there is relatively little point in discussing kernels because they cannot be modified or replaced.

Categories of Kernels

Kernels can be classified into four broad categories: monolithic kernels, microkernels, hybrid kernels and exokernels. Each has its own advantages and disadvantages.

Monolithic kernels, which have traditionally been used by Unix and Linux, contain all the operating system core functions and the device drivers (small programs that allow the operating system to interact with hardware devices, such as disk drives, video cards and printers). Modern monolithic kernels, such as those of Linux and FreeBSD, feature the ability to load modules at runtime, thereby allowing easy extension of the kernel's capabilities as required, while helping to minimize the amount of code running in kernel space.
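
As a concrete illustration, below is a minimal "hello world" loadable kernel module in C, typical of the Linux 2.6 series; it is compiled against the kernel headers with the kernel's kbuild system and loaded with insmod. Because the code runs in kernel space, printk() writes to the kernel log rather than to a terminal.

    /* A minimal Linux loadable kernel module. */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: module loaded into kernel space\n");
        return 0;               /* a nonzero return aborts the load */
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);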

Microkernels usually provide only minimal services, such as defining memory address spaces, interprocess communication (IPC) and process management. All other functions, such as hardware management, are implemented as processes running independently of the kernel. Examples of microkernel operating systems are AIX, BeOS, Hurd, Mach, Minix and QNX.
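
The heart of a microkernel design is message passing between separate processes. The following user-space C sketch illustrates the idea with an ordinary POSIX pipe; real microkernel IPC primitives (Mach ports, for example) are richer and faster, so this is only an analogy.

    /* Two processes exchanging a message through a kernel-provided
     * channel, the style of communication a microkernel is built on. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        char buf[64];

        if (pipe(fds) == -1) {  /* kernel creates the channel */
            perror("pipe");
            return 1;
        }

        pid_t pid = fork();
        if (pid == -1) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {         /* "server" process sends a reply */
            const char msg[] = "service reply";
            close(fds[0]);
            write(fds[1], msg, sizeof(msg));  /* includes the '\0' */
            return 0;
        }

        close(fds[1]);          /* "client" process receives it */
        if (read(fds[0], buf, sizeof(buf)) > 0)
            printf("received: %s\n", buf);
        wait(NULL);
        return 0;
    }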

Hybrid kernels are similar to microkernels, except that they include additional code in kernel space so that such code can run more swiftly than it would if it were in user space. These kernels represent a compromise that was implemented by some developers before it was demonstrated that pure microkernels can provide high performance. Hybrid kernels should not be confused with monolithic kernels that can load modules after booting, such as Linux.

Most modern operating systems use hybrid kernels, including Microsoft Windows NT, 2000 and XP. Mac OS X also uses a hybrid kernel, as it includes BSD kernel code in its Mach-based kernel. DragonFly BSD, a recent variant of FreeBSD, is the first non-Mach based BSD operating system to employ a hybrid kernel architecture.

Exokernels are a still-experimental approach to operating system design. They differ from the other types of kernels in that their functionality is limited to the protection and multiplexing of the raw hardware, and they provide no hardware abstractions on top of which applications can be constructed. This separation of hardware protection from hardware management enables application developers to determine how to make the most efficient use of the available hardware for each specific program.

Exokernels in themselves are extremely small. However, they are accompanied by library operating systems, which provide application developers with the conventional functionalities of a complete operating system. A major advantage of exokernel-based systems is that they can incorporate multiple library operating systems, each exporting a different API (application programming interface), such as one for Unix and one for Microsoft Windows, thus making it possible to simultaneously run both Unix and Windows applications.

Monolithic Kernels Versus Microkernels

In the early 1990s, many computer scientists considered monolithic kernels to be obsolete, and they predicted that microkernels would revolutionize operating system design. In fact, the development of Linux as a monolithic kernel rather than a microkernel led to a famous flame war between Andrew Tanenbaum, the developer of the Minix operating system, and Linus Torvalds, who originally developed Linux based on Minix.

Proponents of microkernels point out that monolithic kernels have the disadvantage that an error in the kernel can cause the entire system to crash. However, with a microkernel, if a kernel process crashes, it is still possible to prevent a crash of the system as a whole by merely restarting the service that caused the error. Although this sounds sensible, it is questionable how important it is in reality, because operating systems with monolithic kernels such as Linux have become extremely stable and can run for years without crashing.

Another disadvantage cited for monolithic kernels is that they are not portable; that is, they must be rewritten for each new architecture that the operating system is to be ported to. However, in practice, this has not appeared to be a major disadvantage, and it has not stopped Linux from being ported to numerous processors.

Monolithic kernels also appear to have the disadvantage that their source code can become extremely large. For example, the code for the Linux kernel version 2.4.0 is approximately 100MB and contains nearly 3.38 million lines, and that for version 2.6.0 is 212MB and contains 5.93 million lines. This adds to the complexity of maintaining the kernel, and it also makes it difficult for new generations of computer students to study and comprehend the kernel. However, the advocates of monolithic kernels claim that in spite of their size such kernels are easier to design correctly, and thus they can advance more quickly than microkernel-based systems.

Moreover, the size of the compiled kernel is only a tiny fraction of that of the source code, for example roughly 1.1MB in the case of version 2.4 on a typical Red Hat desktop installation. Contributing to the small size of the Linux kernel is its ability to dynamically load modules at runtime, so that the basic kernel contains only those components that are necessary for the system to start itself and to load modules. This approach allows all other components, which have been compiled but not linked into the kernel executable, to be loaded into the slots reserved for them whenever their services are required.

The monolithic Linux kernel can be made extremely small not only because of its ability to dynamically load modules but also because of its ease of customization. In fact, there are some versions that are small enough to fit together with a large number of utilities and other programs on a single floppy disk and still provide a fully functional version of Linux (one of the best known of which is mulinux). This ability to miniaturize its kernel has also led to a rapid growth in the use of Linux in embedded systems.

Although microkernels are very small by themselves, in combination with all their required auxiliary code they are, in fact, often larger than monolithic kernels. Advocates of monolithic kernels also point out that the two-tiered structure of microkernel systems, in which most of the operating system does not converse directly with the hardware, creates a not-insignificant cost in terms of efficiency.

Apparently the advantages of microkernels have not been sufficient to compel the majority of operating systems to adopt this approach. In fact, there are extremely few widely used operating systems today that utilize microkernels -- mainly just AIX and QNX. AIX is a proprietary Unix operating system developed by IBM, a company which has recently been placing an increasing emphasis on Linux. QNX is a highly successful commercial real-time operating system for embedded applications where reliability and small size are of paramount importance.
