Sunday, September 04, 2005

A Quick Setup guide for Bugzilla on Linux

I wanted to set up and configure Bugzilla on my personal computer, so I downloaded the package and started playing around with it. I will run through the setup process, which is quite simple.

My personal computer runs Fedora Core 4, so I didn't have to install any of the basic packages that Bugzilla requires. They are as follows:

1. MySQL (3.22.5 or greater)
2. Perl (5.005 or greater; 5.6.1 is recommended if you wish to use Bundle::Bugzilla)
3. The Perl modules that Bugzilla depends on (installed from CPAN, as described below)
4. A web server (Apache is recommended)
5. Sendmail (version 8.7 or greater)

Step 1: The first step is to download Bugzilla and move the zipped archive to the directory /var/www/html.

Listing 1

$ cd /var/www/html/
$ tar zxvf bugzilla-2.18.tgz
$ mv bugzilla-2.18/ Bugzilla/

Step 2: Now change to the Bugzilla directory (/var/www/html/Bugzilla) and execute the checksetup.pl script as root.

Listing 2

$ su root
# ./checksetup.pl

Step 3: Once this script is executed, it tells you which Perl modules you still need and the corresponding command to install each one from the CPAN repository. The CPAN command looks like this:

Listing 3

$ perl -MCPAN -e 'install "Module::Name"'

Step 4: Issue the above command for each Perl module that you need to install. If your system is connected to the internet, this downloads and installs the requested module automatically.
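As a hypothetical illustration (DBD::mysql is a module the setup script commonly reports as missing; substitute whatever your run actually asks for):

```shell
# Install the MySQL driver module from CPAN (requires internet access).
perl -MCPAN -e 'install "DBD::mysql"'

# Or pull in the whole set at once via the bundle mentioned earlier:
perl -MCPAN -e 'install "Bundle::Bugzilla"'
```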

Configure Bugzilla

Step 5: Once the above steps complete successfully, the script generates a file called localconfig in the same directory (/var/www/html/Bugzilla).

Step 6: Open the localconfig file and configure Bugzilla to use your local database server. You might have to change the following entries in the localconfig file:

Listing 4

$db_host = "localhost"; # where is the database?
$db_port = 3306; # which port to use
$db_name = "test"; # name of the MySQL database
$db_user = "balaji"; # user to attach to the MySQL database

Step 7: Save the settings. The next thing to do is create a database account for Bugzilla. To do this, first connect to MySQL and then issue the following command:

Listing 5

mysql> GRANT SELECT, INSERT, UPDATE, DELETE, INDEX, ALTER, CREATE,
    -> DROP, REFERENCES ON bugs.* TO bugs@localhost
    -> IDENTIFIED BY '$db_pass';
mysql> FLUSH PRIVILEGES;

Step 8: This command creates the bugs user and grants that account the various levels of access it needs to the configured database when connecting locally.

To make sure that the required Perl modules are now present, rerun the checksetup.pl script from the Bugzilla directory.

Listing 6:

$ su root
# ./checksetup.pl

Step 9: Finally, during this run you are asked to configure Bugzilla's administrator account.

Configure Apache

Step 10: Open the configuration file for Apache (/etc/httpd/conf/httpd.conf). First, you need to allow Apache to run CGI scripts outside of the cgi-bin directory. To do so, add or uncomment the following line in httpd.conf:

Listing 7:

AddHandler cgi-script .cgi

Step 11: Next, you need to allow Apache to execute Bugzilla's .cgi files from the Bugzilla directory. Add the following lines:

Listing 8:

<Directory /var/www/html/Bugzilla>
    AddHandler cgi-script .cgi
    Options +Indexes +ExecCGI
    DirectoryIndex index.cgi
    AllowOverride Limit
</Directory>

Step 12: Now you need to configure Apache to serve the index.cgi file when entering the Bugzilla directory, by adding the following to httpd.conf:

Listing 9:

DirectoryIndex index.html index.html.var index.cgi
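Once httpd.conf has been edited, it is worth checking the syntax and restarting Apache; on a Fedora-style system that would look roughly like this (the service name httpd is an assumption that varies by distribution):

```shell
# Verify that the edited configuration parses cleanly, then restart Apache.
apachectl configtest
service httpd restart
```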

Screenshots on how to create new Products and new Components (i.e., Sections and Subsections)

To add a new Section

To add a new subsection for an already created Section

To enter bugs in the appropriate Sections and Subsections

Sunday, August 07, 2005

Snapshot of DNS Ethereal Capture

I wanted to post a screenshot of a DNS query and its corresponding DNS response captured using Ethereal. In my previous post I had shown the output of the dig program, but I thought a screenshot of the decoded DNS query and response would really help readers of this blog see the ingredients of a DNS query.

The query I typed into my Firefox browser, together with its corresponding response, was captured using Ethereal. Here we go!

DNS Query

DNS Response (Screenshot of Answer Section) Screenshot - 1

DNS Response (Screenshot of Authority Section) Screenshot - 2

Hope people find this information useful :-)

Sunday, July 03, 2005

Domain Name System

The DNS plays a critical role in supporting the Internet infrastructure by providing a distributed and fairly robust mechanism that resolves Internet host names into IP addresses and IP addresses back into host names. The DNS also supports other Internet directory-like lookup capabilities to retrieve information pertaining to DNS Name Servers, Canonical Names, Mail Exchangers, etc.

Overview of the DNS
To connect to a system that supports IP, the host initiating the connection must know in advance the IP address of the remote system. An IP address is a 32-bit number that represents the location of the system on a network. The 32-bit address is separated into four octets, and each octet is typically represented by a decimal number. The four decimal numbers are separated from each other by a dot character ("."). Even though four decimal numbers may be easier to remember than a raw 32-bit number, there is a practical limit to how many IP addresses a person can remember without some sort of directory assistance. Such a directory essentially maps host names to IP addresses.
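To make the "32-bit number" concrete, here is a small shell sketch that packs the four octets of a dotted-decimal address back into the single 32-bit value (the address 192.168.1.10 is just an example):

```shell
#!/bin/bash
# Pack the four octets of a dotted-decimal IPv4 address into one 32-bit number.
ip="192.168.1.10"               # example address
IFS=. read -r a b c d <<< "$ip" # split into the four octets
num=$(( (a << 24) | (b << 16) | (c << 8) | d ))
echo "$num"                     # prints 3232235786
```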

The Stanford Research Institute’s Network Information Center (SRI-NIC) became the responsible authority for maintaining unique host names for the Internet. The SRI-NIC maintained a single file, called hosts.txt, and sites would continuously update SRI-NIC with their host name to IP address mappings to add to, delete from, or change in the file. The problem was that as the Internet grew rapidly, so did the file causing it to become increasingly difficult to manage. Moreover, the host names needed to be unique throughout the worldwide Internet. With the growing size of the Internet it became more and more impractical to guarantee the uniqueness of a host name. The need for such things paved the way for the creation of a new system called the Domain Name System.

DNS Design Goals

There were some design goals set at the time the Domain Name System was structured. They are as follows:
  • The primary goal is a consistent name space which will be used for referring to resources.
  • The sheer size of the database and frequency of updates suggest that it must be maintained in a distributed manner, with local caching to improve performance.
  • Where there are tradeoffs between the cost of acquiring data, the speed of updates, and the accuracy of caches, the source of the data should control the tradeoff.
  • The costs of implementing such a facility dictate that it be generally useful, and not restricted to a single application.
  • The system should be useful across a wide spectrum of host capabilities. Both personal computers and large timeshared hosts should be able to use the system, though perhaps in different ways.
Elements of DNS

The DNS has three major components:
  • The Domain Name Space
  • Name Servers
  • Resolvers
The Domain Name Space
The DNS is a hierarchical tree structure whose root node is known as the root domain. A label in a DNS name directly corresponds with a node in the DNS tree structure. A label is an alphanumeric string that uniquely identifies that node from its brothers. Labels are connected together with a dot notation, ".", and a DNS name containing multiple labels represents its path along the tree to the root. Labels are written from left to right. Only one zero length label is allowed and is reserved for the root of the tree. This is commonly referred to as the root zone. Due to the root label being zero length, all FQDNs end in a dot.

Each node has a label, which is zero to 63 octets in length. Brother nodes may not have the same label, although the same label can be used for nodes which are not brothers. The domain name of a node is the list of the labels on the path from the node to the root of the tree. By convention, domain names can be stored with arbitrary case, but domain name comparisons for all present domain functions are done in a case-insensitive manner.

When a user needs to type a domain name, the length of each label is omitted and the labels are separated by dots ("."). Since a complete domain name ends with the root label, this leads to a printed form which ends in a dot. We use this property to distinguish between:
  • a character string which represents a complete domain name (often called "absolute"). For example, "poneria.ISI.EDU."
  • a character string that represents the starting labels of a domain name which is incomplete, and should be completed by local software using knowledge of the local domain (often
    called "relative"). For example, "poneria" used in the ISI.EDU domain.
Example Name Space
The following figure shows a Domain Name Space.

Resource Records
A domain name identifies a node. Each node has a set of resource information, which may be empty. The set of resource information associated with a particular name is composed of separate resource
records (RRs). A DNS RR has six fields:
  • NAME
  • TYPE
  • CLASS
  • TTL
  • RDLENGTH
  • RDATA
The NAME field holds the DNS name, also referred to as the owner name, to which the RR belongs. The TYPE field identifies the type of the RR. This field is necessary because it is not uncommon for a DNS name to have more than one type of RR. The common RR types are listed in the following table.





Type    Record                       Description
A       Address record               Maps an FQDN to an IP address
PTR     Pointer record               Maps an IP address to an FQDN
NS      Name server record           Denotes a name server for a zone
SOA     Start of Authority record    Specifies many attributes concerning the zone, such as the name of the domain (forward or inverse), the administrative contact, the serial number of the zone, the refresh interval, the retry interval, etc.
CNAME   Canonical name record        Defines an alias name and maps it to the absolute (canonical) name
MX      Mail Exchanger record        Used to redirect email for a given domain or host to another host
The owner name is often implicit, rather than forming an integral part of the RR. For example, many name servers internally form tree or hash structures for the name space, and chain RRs off nodes. The remaining RR parts are the fixed header (type, class, TTL), which is consistent for all RRs, and a variable part (RDATA) that fits the needs of the resource being described. The TTL field sets a time limit on how long an RR can be kept in a cache. This limit does not apply to authoritative data in zones; such data is also timed out, but by the refreshing policies for the zone.
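Each of the common record types in the table above can be queried directly with dig; example.com and the address below are only placeholders:

```shell
dig example.com A      # address record
dig example.com MX     # mail exchanger record
dig example.com NS     # name servers for the zone
dig example.com SOA    # start of authority record
dig example.com CNAME  # canonical name (alias) record
dig -x 192.0.2.1       # reverse (PTR) lookup: IP address back to an FQDN
```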

Queries are messages which may be sent to a name server to provoke a response. In the Internet, queries are carried in UDP datagrams or over TCP connections. The response by the name server either answers the question posed in the query, refers the requester to another set of name servers, or signals some error condition. In general, the user does not generate queries directly, but instead makes a request to a resolver, which in turn sends one or more queries to name servers and deals with the error conditions and referrals that may result. DNS queries and responses are carried in a standard message format. The message format has a header containing a number of fixed fields which are always present, and four sections which carry query parameters and RRs.
The most important field in the header is a four-bit field called an opcode which separates different queries. Of the possible 16 values, one (standard query) is part of the official protocol, two (inverse query and status query) are options, one (completion) is obsolete, and the rest are unassigned.

The four sections are:

Question Carries the query name and other query parameters.

Answer Carries RRs which directly answer the query.

Authority Carries RRs which describe other authoritative servers.
May optionally carry the SOA RR for the authoritative
data in the answer section.

Additional Carries RRs which may be helpful in using the RRs in the
other sections.

Standard Queries
A standard query specifies a target domain name (QNAME), query type (QTYPE), and query class (QCLASS) and asks for RRs which match. This type of query makes up such a vast majority of DNS queries that we use the term "query" to mean standard query unless otherwise specified. The QTYPE and QCLASS fields are each 16 bits long, and are a superset of defined types and classes.

The QTYPE field may contain:

a specific type, which matches just that type (e.g., A, PTR).

AXFR special zone transfer QTYPE.

MAILB matches all mailbox-related RRs (e.g., MB and MG).

* matches all RR types.

The QCLASS field may contain:

a specific class, which matches just that class (e.g., IN, CH).

* matches all RR classes.
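These special QTYPE/QCLASS values can be exercised with dig (all the names below are placeholders):

```shell
dig example.com ANY                        # QTYPE *: ask for all RR types of a name
dig @ns1.example.com example.com AXFR      # request a zone transfer (usually refused)
dig @ns1.example.com version.bind TXT CH   # a CHAOS-class query instead of IN
```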

Name Servers
Name servers are the repositories of information that make up the domain database. The database is divided up into sections called zones, which are distributed among the name servers. While name servers can have several optional functions and sources of data, the essential task of a name server is to answer queries using data in its zones. By design, name servers can answer queries in a simple manner; the response can always be generated using only local data, and either contains the answer to the question or a referral to other name servers "closer" to the desired information. A given zone will be available from several name servers to insure its availability in spite of host or communication link failure.
A given name server will typically support one or more zones, but this gives it authoritative information about only a small section of the domain tree. It may also have some cached non-authoritative data about other parts of the tree. The name server marks its responses to queries so that the requester can tell whether the response comes from authoritative data or not.

Dig (Domain Information Groper) is a tool for interrogating DNS name servers. It performs DNS lookups and displays the answers that are returned from the name server(s) that were queried.

dig @server name type

Here server is the name or IP address of the name server to query. This can be an IPv4 address in dotted-decimal notation or an IPv6 address in colon-delimited notation.

Working of Dig
When the supplied server argument is a hostname, dig resolves that name before querying that name server. If no server argument is provided, dig consults /etc/resolv.conf and queries the name servers listed there.
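For instance (both the server address and the query name here are placeholders):

```shell
# Ask the name server at 192.0.2.53 for the A record of www.example.com.
dig @192.0.2.53 www.example.com A

# With no @server argument, dig uses the resolvers from /etc/resolv.conf.
dig www.example.com A
```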

Working Snapshot of Dig

[balaji@localhost ~]$ dig

; <<>> DiG 9.3.1 <<>>
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4738
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 6, ADDITIONAL: 4

; IN A


;; AUTHORITY SECTION: 2779 IN NS 2779 IN NS 2779 IN NS 2779 IN NS 2779 IN NS 2779 IN NS

;; ADDITIONAL SECTION: 520573 IN A 2166 IN A 2166 IN A 2166 IN A

;; Query time: 319 msec
;; WHEN: Sun Jul 17 16:30:10 2005
;; MSG SIZE rcvd: 252

Wednesday, June 22, 2005

Hello World

Hello Everyone

This blog has been silent for over two months, probably the longest gap ever. I have got lots of excuses for that... Lots of work a couple of months back, then I was in Mumbai for a week, and once I came back I got busy developing some tools for my team at office. Last week I took the red pill and joined another big project in the same company. It's for Lucent, and I'll be working on IP, IP-based protocols and UMTS protocols. I am really excited about being part of this great project because I love working on these protocols. What's the project about??? Well!!! I cannot disclose that because it's confidential :-).

I have started blogging again, and I wish to post better technical posts here. I have got lots to read and also lots to write.

Thursday, March 17, 2005

My Nerd Score looks Kewl

Thursday, February 17, 2005

The Kernel and its variants

Kernel Definition

The kernel is a program that constitutes the central core of a computer operating system. It has complete control over everything that happens in the system.

A kernel can be contrasted with a shell (such as bash, csh or ksh in Unix-like operating systems), which is the outermost part of an operating system and a program that interacts with user commands. The kernel itself does not interact directly with the user, but rather interacts with the shell and other programs as well as with the hardware devices on the system, including the processor (also called the central processing unit or CPU), memory and disk drives.

The kernel is the first part of the operating system to load into the main memory (RAM) during booting, and it remains in the memory for the entire duration of the computer session. Thus it is important for it to be as small as possible while still providing all the essential services required by the other parts of the operating system and by the various applications.

Because the code that makes up the kernel is needed continuously, it is usually loaded into a protected area of memory, which prevents it from being overwritten by other, less frequently used parts of the operating system or by application programs. The kernel performs its tasks (e.g. executing processes and handling interrupts) in kernel space, whereas everything a user normally does (e.g. writing text in a text editor or running graphical programs in the X Window System) is done in user space. This separation is made in order to prevent user data and kernel data from interfering with one another and thereby diminishing performance or causing the system to become unstable and possibly crashing.

When a computer crashes, it actually means the kernel has crashed. If only a single program has crashed but the rest of the system remains in operation, then the kernel itself has not crashed. A crash is a situation in which a program, either a user application or a part of the operating system, stops performing its expected functions and stops responding to other parts of the system. The program might appear to freeze. If such a program is critical to the operation of the kernel, the entire computer could freeze or crash.

The kernel provides basic services for all other parts of the operating system, typically including memory management, process management, file management and I/O (input/output) management (i.e., accessing the peripheral devices). These services are requested by other parts of the operating system or by application programs through a specified set of program interfaces referred to as system calls.
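Since system calls are the interface described above, one way to make them visible is strace; this is just an illustrative invocation, assuming the strace utility is installed:

```shell
# Count and summarize the system calls issued by a simple command.
# The output table lists calls such as open, read, write and mmap.
strace -c ls /tmp
```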

Process management, possibly the most obvious aspect of a kernel to the user, is the part of the kernel that ensures that each process gets its turn to run on the processor and that the individual processes do not interfere with each other by writing to their areas of memory. A process, also referred to as a task, can be defined as an executing (i.e., running) instance of a program.

The contents of a kernel vary considerably according to the operating system, but they typically include a scheduler, which determines how the various processes share the kernel's processing time (including in what order), a supervisor, which grants use of the computer to each process when it is scheduled, an interrupt handler, which handles all requests from the various hardware devices (such as disk drives and the keyboard) that compete for the kernel's services and a memory manager, which allocates the system's address spaces among all users of the kernel's services.
People often confuse the kernel with the BIOS (Basic Input/Output System), but the two should not be mixed up. The BIOS is an independent program stored in a chip on the motherboard (the main circuit board of a computer) that is used during the booting (i.e., startup) process for such tasks as initializing the hardware and loading the kernel into memory (RAM). Whereas the BIOS always remains in the computer and is specific to its particular hardware, the kernel can be easily replaced or upgraded by changing or upgrading the operating system or, in the case of Linux, by adding a newer kernel or recompiling an existing kernel.
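On a Linux system you can see which kernel is currently running with two quick commands:

```shell
# Print the release string of the running kernel (e.g. 2.6.11-1.1369_FC4)...
uname -r
# ...and the hardware architecture it was built for.
uname -m
```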

Most kernels have been developed for a specific operating system, and usually there is only one version available for each operating system. For example, the Microsoft Windows 2000 kernel is the sole kernel for Microsoft Windows 2000, and the Microsoft Windows 98 kernel is the only kernel for Microsoft Windows 98. Linux is far more flexible in that there are numerous versions of the Linux kernel, and each of these can be modified in innumerable ways by an informed user.
A few kernels have been designed with the intention of being suitable for use with any operating system. The best known of these is the Mach kernel, which was developed at Carnegie Mellon University and is used in the Mac OS X operating system.

The term kernel is frequently used in books and discussions about Linux, whereas it is used less often when discussing some other operating systems, such as Microsoft Windows. The reason is that the kernel is highly configurable in the case of Linux, and the user is encouraged to learn about and modify it and/or download and install updated versions. With the Microsoft Windows operating systems, in contrast, there is relatively little point in discussing kernels because they cannot be modified or replaced.

Categories of Kernels

Kernels can be classified into four broad categories: monolithic kernels, microkernels, hybrid kernels and exokernels. Each has its own advantages and disadvantages.

Monolithic kernels, which have traditionally been used by Unix and Linux, contain all the operating system core functions and the device drivers (small programs that allow the operating system to interact with hardware devices, such as disk drives, video cards and printers). Modern monolithic kernels, such as those of Linux and FreeBSD, feature the ability to load modules at runtime, thereby allowing easy extension of the kernel's capabilities as required, while helping to minimize the amount of code running in kernel space.
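The runtime module loading mentioned above can be sketched with the standard module utilities; snd_intel8x0 is only a hypothetical example module name:

```shell
lsmod                   # list the modules currently loaded into the kernel
modprobe snd_intel8x0   # load a module and its dependencies (requires root)
rmmod snd_intel8x0      # unload the module again
```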

Microkernels usually provide only minimal services, such as defining memory address spaces, interprocess communication (IPC) and process management. All other functions, such as hardware management, are implemented as processes running independently of the kernel. Examples of microkernel operating systems are AIX, BeOS, Hurd, Mach, Minix and QNX.

Hybrid kernels are similar to microkernels, except that they include additional code in kernel space so that such code can run more swiftly than it would if it were in user space. These kernels represent a compromise that was implemented by some developers before it was demonstrated that pure microkernels can provide high performance. Hybrid kernels should not be confused with monolithic kernels that can load modules after booting, such as Linux.

Most modern operating systems use hybrid kernels, including Microsoft Windows NT, 2000 and XP. Mac OS X also uses a modified microkernel, as it includes BSD kernel code in its Mach-based kernel. DragonFly BSD, a recent variant of FreeBSD, is the first non-Mach based BSD operating system to employ a hybrid kernel architecture.

Exokernels are a still experimental approach to operating system design. They differ from the other types of kernels in that their functionality is limited to the protection and multiplexing of the raw hardware, and they provide no hardware abstractions on top of which applications can be constructed. This separation of hardware protection from hardware management enables application developers to determine how to make the most efficient use of the available hardware for each specific program.

Exokernels are in themselves extremely small. However, they are accompanied by library operating systems, which provide application developers with the conventional functionalities of a complete operating system. A major advantage of exokernel-based systems is that they can incorporate multiple library operating systems, each exporting a different API (application programming interface), such as one for Unix and one for Microsoft Windows, thus making it possible to simultaneously run both Unix and Windows applications.

The Monolithic Kernel Versus Micro Kernel
In the early 1990s, many computer scientists considered monolithic kernels to be obsolete, and they predicted that microkernels would revolutionize operating system design. In fact, the development of Linux as a monolithic kernel rather than a microkernel led to a famous flame war between Andrew Tanenbaum, the developer of the Minix operating system, and Linus Torvalds, who originally developed Linux on Minix.

Proponents of microkernels point out that monolithic kernels have the disadvantage that an error in the kernel can cause the entire system to crash. However, with a microkernel, if a kernel process crashes, it is still possible to prevent a crash of the system as a whole by merely restarting the service that caused the error. Although this sounds sensible, it is questionable how important it is in reality, because operating systems with monolithic kernels such as Linux have become extremely stable and can run for years without crashing.

Another disadvantage cited for monolithic kernels is that they are not portable; that is, they must be rewritten for each new architecture that the operating system is to be ported to. However, in practice, this has not appeared to be a major disadvantage, and it has not stopped Linux from being ported to numerous processors.

Monolithic kernels also appear to have the disadvantage that their source code can become extremely large. For example, the code for the Linux kernel version 2.4.0 is approximately 100MB and contains nearly 3.38 million lines, and that for version 2.6.0 is 212MB and contains 5.93 million lines. This adds to the complexity of maintaining the kernel, and it also makes it difficult for new generations of computer students to study and comprehend the kernel. However, the advocates of monolithic kernels claim that in spite of their size such kernels are easier to design correctly, and thus they can advance more quickly than microkernel-based systems.
Moreover, the size of the compiled kernel is only a tiny fraction of that of the source code, for example roughly 1.1MB in the case of version 2.4 on a typical Red Hat desktop installation. Contributing to the small size of the Linux kernel is its ability to dynamically load modules at runtime, so that the basic kernel contains only those components that are necessary for the system to start itself and to load modules. This approach allows all other components, which have been compiled but not linked into the kernel executable, to be loaded into the slots reserved for them whenever their services are required.

The monolithic Linux kernel can be made extremely small not only because of its ability to dynamically load modules but also because of its ease of customization. In fact, there are some versions that are small enough to fit together with a large number of utilities and other programs on a single floppy disk and still provide a fully functional version of Linux (one of the best known of which is mulinux). This ability to miniaturize its kernel has also led to a rapid growth in the use of Linux in embedded systems.

Although microkernels are very small by themselves, in combination with all their required auxiliary code they are, in fact, often larger than monolithic kernels. Advocates of monolithic kernels also point out that the two-tiered structure of microkernel systems, in which most of the operating system does not converse directly with the hardware, creates a not-insignificant cost in terms of efficiency.

Apparently the advantages of microkernels have not been sufficient to compel the majority of operating systems to adopt this approach. In fact, there are extremely few widely used operating systems today that utilize microkernels -- mainly just AIX and QNX. AIX is a proprietary Unix operating system developed by IBM, a company which has recently been placing an increasing emphasis on Linux. QNX is a highly successful commercial real-time operating system for embedded applications where reliability and small size are of paramount importance.

Saturday, December 11, 2004

Debian Sarge - A Review

What is Debian

Debian GNU/Linux is a bit different from the other Linux distros out there in the market. Debian does not have point releases the way Red Hat does (7.0, 7.1, 7.2, 8.0, 9.0). Instead, Debian maintains three versions with different content: stable, testing, and unstable, currently called Woody, Sarge and Sid respectively. Woody is officially known as Debian 3.0. Any program that will be published in the next version is first compiled, documented, packaged and added to the unstable version.

Installation of Debian Sarge

The Debian installation could be divided into two stages. The first is when you boot the machine with a Debian bootable DVD/CD and do the base install. The second consists of installing packages such as the X Window System, a mail server/client, and the common software packages that make Debian Sarge a complete operating system. After that you get a fully working Debian machine with GNOME or KDE or both (depending on the installation).

So this is what I did to set up Debian Sarge on my system. Here we go...
I've got the Debian Sarge DVD, with which I installed Debian Sarge on my system. I got this DVD when I bought this month's (Dec 2004) PC Quest magazine. In fact, I got PC Quest only for the sake of this DVD :-).

I've got a Samsung DVD-ROM drive, into which I inserted the Debian Sarge DVD. The system booted and I got the Debian Sarge startup screen. I didn't have second thoughts at all: I wanted the 2.6 kernel, so I just typed linux26 at the boot prompt and the installer started. The best part of Debian Sarge is that it has the latest software packages and a beautiful installer. Debian has a reputation for being the toughest distribution to install on any PC, because to install Debian a person needs to know a lot about the hardware in the system; but the Sarge installer is very easy to use, and at the end of the installation you get a fully functional Debian box with a shiny GUI (KDE or GNOME).

I was keeping my fingers crossed till the installation started. The initial screens just asked me to select the language, country and keyboard layout respectively. The installer detects all the hardware plugged into the system. It detected my Network Interface Card (NIC) and tried to configure it with DHCP, but I've got a static IP, so I had to enter the IP of my machine and the other necessary details, and I was able to configure it easily. Next, the installer asked for the hostname and domain name of the machine. I have just got one stand-alone machine, so I just typed local. Then comes the important aspect. Guess what... it's about partitioning! I didn't have any problems, because I already had Knoppix running on my system, so I already had an ext3 partition and swap; all I had to do for this installation was point the installer to format the existing ext3 partition and use it for Sarge. It was a breeze doing that and I really enjoyed it. Then the installer started installing the base system, which hardly took about 10 minutes, after which the system rebooted and I got GRUB on the screen with two options (one was normal mode and the other the recovery mode). It also detected my other operating system. Guess what, Windows XP? No, it's Windows 98. Hip hip hurrah!!! I just selected the first option and booted into my new operating system, and then the second phase of the installation started. In this phase the installer asked me to enter the name for a user and the corresponding password, then the administrator password, and then it asked me about apt-setup. I just inserted my DVD and it indexed all the packages on it. Then it showed me a screen with some options and asked me which package groups to install.
All I did was this: I needed Apache, so I selected Web Server, and of course I needed X Windows, so I selected Desktop Environment. I really didn't want to manually select packages, so I just clicked on Finish and went to sleep. When I got up, I saw the login screen in GUI mode, with the X Window System and GDM running perfectly. I selected KDE as the desktop environment, entered my user name and password, logged into KDE, and was amazed by the number of software packages that were installed. I didn't have to break my head selecting packages, as used to be the process in Debian Woody. I had everything, including OpenOffice and, of course, the GCC compilers.

All in all, I would say that Debian Sarge is a very good distribution, targeted towards servers and desktops alike. I highly recommend it for newbies too, but the one disadvantage users might face is with the internet. If the user has a normal dial-up connection, I don't think he will like seeing his pocket burn when he issues apt-get update or apt-get install; but if the user has a cable connection and does not worry about downloads or the time spent online, then Debian would be, or in fact should be, the way to go.
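For reference, the apt-get workflow I mentioned is just a couple of commands (apache2 is only an example package name):

```shell
# Refresh the package index from the mirrors/CD listed in /etc/apt/sources.list,
# then fetch and install a package with all of its dependencies (requires root).
apt-get update
apt-get install apache2
```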