Old 03-13-2004, 05:51 AM   #1
thegeekster
Member
 
Registered: Dec 2003
Location: USA (Pacific coast)
Distribution: Vector 5.8-SOHO, FreeBSD 6.2
Posts: 513

Rep: Reputation: 34
LinuxInsider.com: What Differentiates Linux from Windows?


This article basically says that M$ Windows code tends to be a bit on the kludgy side (code hacked together just to make something work), which bloats things and slows them down, especially where backwards compatibility has to be maintained, while Unix-type OSes tend to have cleaner code...

Quote:
What Differentiates Linux from Windows?
By Paul Murphy
LinuxInsider
March 11, 2004

Microsoft reacts to marketing pressure to make design decisions favoring running a few processes faster but then finds itself forced first to layer in backward compatibility and then to engage in a patch-and-kludge upgrade process until the code becomes so bloated, slow and unreliable that wholesale replacement is again called for.


What really are the most fundamental differences between Windows variants like 2003/XP and Unix variants like Linux?

From a practical perspective, cost is an obvious differentiator, as are access to source and the ability to run outside the Intel processor environment. But it's possible to argue that those differences are neither real nor important. For example, cost is usually important in business only if the products being compared are otherwise very similar. Some companies have negotiated access to Windows source, and NT 4.0 Server on Alpha was, until quite recently, the fastest way to run any Microsoft OS.

To get beyond superficialities like these, we must look at the fundamental functions of a modern business-oriented operating system and ask how these are implemented by the two groups: Microsoft and the Unix community. Conceptually, all major business-oriented operating systems, including Linux and Windows 2003/XP, are pretty similar because they use similar hardware to achieve similar goals.

Specifically, all of them act as interfaces between hardware and user applications, with most able to provide a single virtual interface to the hardware for multiple -- often concurrent -- user applications. Thus, most have four interlocking layers -- the user (or applications) layer communicates with the OS services layer, which uses kernel services to share access to hardware controllers -- and deliver five kernel functions. The scheduler mediates CPU resource sharing, the memory manager mediates memory sharing, the virtual file system abstracts the hardware to present a common file management interface to all applications, the network interface manages network I/O, and the Inter-Process Communication (IPC) module controls interprocess messaging.
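
To make the four-layer picture concrete, here is a minimal hypothetical C sketch of the five kernel functions the article lists. Every name in it (struct kernel_services, the stub functions) is invented for illustration; no real kernel exposes this exact interface.

Code:
/* Hypothetical sketch of the kernel-services layer: all names are
 * invented for illustration, not any real kernel's API. The stub
 * bodies stand in for access to the hardware controllers. */
#include <stdio.h>

struct kernel_services {
    void (*schedule)(void);                      /* CPU sharing (scheduler) */
    void *(*alloc_page)(void);                   /* memory sharing          */
    int  (*vfs_open)(const char *path);          /* common file interface   */
    int  (*net_send)(const void *buf, int len);  /* network I/O             */
    int  (*ipc_send)(int dest, const char *msg); /* interprocess messaging  */
};

static void stub_schedule(void)          { puts("scheduler: pick next task"); }
static void *stub_alloc_page(void)       { static char page[4096]; return page; }
static int  stub_vfs_open(const char *p) { printf("vfs: open %s\n", p); return 3; }
static int  stub_net_send(const void *b, int n) { (void)b; return n; }
static int  stub_ipc_send(int d, const char *m) { printf("ipc: to %d: %s\n", d, m); return 0; }

int main(void)
{
    /* The applications layer sees only this table, never the hardware. */
    struct kernel_services k = {
        stub_schedule, stub_alloc_page, stub_vfs_open,
        stub_net_send, stub_ipc_send
    };
    k.schedule();
    k.vfs_open("/etc/motd");
    k.ipc_send(42, "hello");
    return 0;
}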

Take any one of these, and the technical differences between how Unix and Microsoft implement the function overwhelm the commonality of terminology and purpose. It is more or less true, for example, that both Windows NT 5.X and Unix variants like Mach and some BSD variants use a modified microkernel design with a preemptive scheduler focused on interruptible thread execution, but that use of the same words is just about as far as the actual similarity goes.

Looking at Implementations
Look at how those ideas are implemented, and what you see is that core design philosophies influence how developers make thousands of small decisions on exactly what the terms mean and how things actually get done. Because the core philosophies behind the operating system design are diametrically opposed, these microdecisions tend to go in opposite directions and thereby most fundamentally differentiate the Microsoft operating systems from Linux.

To the extent, for example, that we know what decisions the Microsoft people made, it appears that they generally made choices preferring efficiency for -- and external controls over -- a small number of processes over scalable multiprocessing and internal process control. In contrast, Unix developers, whether aiming at a true microkernel, like BSD (or Darwin), or a monolithic kernel, like Linux, generally made the opposite choices to favor multiple processes running under adaptive internal controls.

That difference in design philosophy shows up everywhere. In memory management, for example, Windows NT 5.0 and its successors use clustered paging, a working-set memory analogue and a free-memory manager that fires up exactly once per second, while Unix uses an adaptive, page-specific algorithm -- often least-recently used -- to control paging. In Unix, there is no working-set equivalent, and the free-memory manager runs when needed.
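
As a rough illustration of that contrast -- a toy model only, not actual Windows or Linux code; the page structure and thresholds are invented -- an adaptive LRU-style reclaimer runs on demand, while a timer-driven one scans on a fixed interval regardless of memory pressure:

Code:
/* Toy model of the two reclaim styles the article contrasts; the data
 * structures are invented for illustration, not real kernel code. */
#include <stdio.h>

#define NPAGES 4

struct page { int in_use; unsigned long last_access; };
static struct page pages[NPAGES] = {
    {1, 10}, {1, 30}, {1, 20}, {1, 40}
};

/* Adaptive style: runs only when a free page is actually needed and
 * evicts the least-recently-used resident page. */
static int evict_lru(void)
{
    int victim = -1;
    for (int i = 0; i < NPAGES; i++)
        if (pages[i].in_use &&
            (victim < 0 || pages[i].last_access < pages[victim].last_access))
            victim = i;
    if (victim >= 0)
        pages[victim].in_use = 0;
    return victim;
}

/* Timer style: fires on a fixed interval (say, once per second) whether
 * or not anyone needs memory, trimming every page older than max_age. */
static void timer_tick_trim(unsigned long now, unsigned long max_age)
{
    for (int i = 0; i < NPAGES; i++)
        if (pages[i].in_use && now - pages[i].last_access > max_age)
            pages[i].in_use = 0;
}

int main(void)
{
    printf("LRU evicted page %d\n", evict_lru()); /* page 0, the oldest */
    timer_tick_trim(50, 15);    /* one tick: trims pages older than 15 */
    for (int i = 0; i < NPAGES; i++)
        printf("page %d in_use=%d\n", i, pages[i].in_use);
    return 0;
}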

Another way in which the Windows kernel expresses its preference for a small number of core processes is that it runs nonthreaded internally. This choice avoids "object blockage," trading off concurrency and context switching in favor of increased efficiency for, and better control of, a small number of key processes. Similarly, multiprocessor memory management and interprocess communications are tightly integrated with process control to gain better use of Intel's rather limited memory-management hardware, in part by simplifying page management.

In contrast, the Unix approach generally has been to favor process creation and context switching at the cost of some efficiency for long-running processes, to favor multiprocessor memory management at the cost of increased hardware complexity, and to favor process or thread-level independence at the cost of making interprocess communication more difficult.
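
The Unix bias toward cheap process creation shows up directly at the API level: fork() plus a pipe is all it takes to spawn and talk to a cooperating process. A minimal, self-contained example using only standard POSIX calls:

Code:
/* Minimal fork() + pipe example: on Unix, spawning a cooperating
 * process and wiring up IPC is a couple of system calls. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {                      /* child: send one message */
        close(fd[0]);
        const char *msg = "hello from child\n";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        return 0;
    }

    /* parent: read what the child sent, then reap it */
    close(fd[1]);
    char buf[64];
    ssize_t n = read(fd[0], buf, sizeof buf - 1);
    if (n > 0) { buf[n] = '\0'; fputs(buf, stdout); }
    close(fd[0]);
    wait(NULL);
    return 0;
}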

Consequences Beyond Differentiation
These kinds of decisions have consequences beyond fundamentally differentiating the multiuser communications orientation embedded in the Unix approach from the single-user, control-oriented focus in the Microsoft designs. Among these consequences, three groups -- affecting security, scalability and adaptability -- stand out as of interest in today's business environment.

In Windows NT 5.X, for example, the hard-wired nature of the one-second interval at which the balance set manager runs almost certainly allows an attacker with application-level access to crash the kernel more or less at will. Similarly, the hard 50:50 division of the available 32-bit memory space in NT 5.2 and earlier releases can be expected to cause serious application incompatibilities when some future service pack or new release changes that in the run-up to 64-bit system compatibility.

In contrast to intrinsic weaknesses affecting reliability and security, most simple problems affecting scalability can be kludged -- meaning that Microsoft can add temporary fixes as problems are recognized simply by adding code to isolate and work around each kind of special case as it comes up. Thus the "stack" idea found everywhere in NT 5.X, in which one processing object calls another -- which calls another until the process happens to hit one that deals with whatever the problem is -- presents an object lesson in institutionalized kludging.
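
The "stack" pattern being described is essentially a chain of handlers, each one punting to the next until something claims the request, with each new special case becoming one more link. A schematic C version, entirely invented to show the shape rather than taken from any actual NT code:

Code:
/* Schematic of the handler-chain ("stack") style the article criticizes:
 * each object tries the request and, on failure, hands it to the next.
 * Invented for illustration; not actual NT code. */
#include <stdio.h>

struct handler {
    const char *name;
    int (*try_handle)(int request);   /* nonzero = handled */
    struct handler *next;
};

static int dispatch(struct handler *h, int request)
{
    for (; h != NULL; h = h->next) {            /* walk the chain... */
        if (h->try_handle(request)) {
            printf("%s handled request %d\n", h->name, request);
            return 1;
        }
    }
    printf("request %d fell off the end of the chain\n", request);
    return 0;                                /* ...nobody claimed it */
}

/* Each special case added later becomes one more link in the chain. */
static int handles_even(int r)  { return r % 2 == 0; }
static int handles_small(int r) { return r < 10; }

int main(void)
{
    struct handler tail = { "small-case handler", handles_small, NULL };
    struct handler head = { "even-case handler",  handles_even, &tail };
    dispatch(&head, 4);    /* claimed by the first link   */
    dispatch(&head, 7);    /* falls through to the second */
    dispatch(&head, 11);   /* nobody handles it           */
    return 0;
}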

Unix, of course, also has had its share of such kludges. But a key research direction, particularly in the Solaris and BSD communities, has been to remove them and so bring the core OS closer and closer to a clean realization of the original design ideas -- something that's both commercially and practically impossible for Microsoft to do.

For example, although we don't know what Microsoft's interprocess communications management code really looks like, it's a safe bet that the company's code for this is at least an order of magnitude longer, and correspondingly more complex, than that used in a typical BSD kernel -- despite the fact that the BSD approach is both more general and conceptually more complex.

New Ideas Require Change
Some external changes are too complex to be dealt with via kludges, and thus limit the OS's lifetime by constraining what can be achieved before the fundamental design breaks down. For example, the page-management philosophy now embedded in the network, file system and memory-management stacks makes it functionally impossible for Microsoft to copy the page-placement optimizations available for large multiprocessor systems in Solaris 2.8 and later releases without first making fundamental changes to NT 5.X.

Because the change needed to take advantage of new ideas like this tends to be quite fundamental, such changes historically have been accompanied by the addition of new layers of kludged code intended to maintain some semblance of backward compatibility with previous kludges.

Unix hasn't had this problem: its fundamental philosophy and research-based development processes have allowed it to grow consistently closer to an ideal representation of the underlying ideas. Thus a device-dependent application -- like a 1991 copy of Vsifax for SunOS 4.4 -- works perfectly under Solaris 2.9, while Windows 2003/XP Server now contains both a Posix-compliant interface set and four generations of the Win32 interface, yet code written explicitly for devices supported by previous generations still often fails.

Similarly, Solaris-on-Sparc users will experience no need for software change when products like the forthcoming eight-way Niagara CPU assembly hit the market. But Microsoft -- and Intel -- remain trapped in the megahertz race because Microsoft's basic Windows OS design is unable to take full advantage of even today's limited two-way thread concurrency.
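
The claim about needing no software change rests on the standard threading model: a program that already divides its work across POSIX threads picks up extra hardware threads without source changes. A minimal sketch -- the thread count is hard-coded here for brevity, though a real program might query it at run time:

Code:
/* Minimal POSIX threads example: the same source scales to however many
 * hardware threads the machine offers; only NTHREADS would change. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static void *worker(void *arg)
{
    long id = (long)arg;
    printf("worker %ld running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tids[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tids[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tids[i], NULL);
    return 0;
}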

So, what's really the difference between a Unix variant like Linux and any Windows OS? It's that Microsoft reacts to marketing pressure to make design decisions favoring running a few processes faster but then finds itself forced first to layer in backward compatibility and then to engage in a patch-and-kludge upgrade process until the code becomes so bloated, slow and unreliable that wholesale replacement is again called for.

In total contrast, Unix developers advance systems research to provide both long-term continuity and continuous improvement in the software's ability to do more or better with respect to things like throughput, reliability, security and communications.

LinuxInsider.com article...
 
Old 03-13-2004, 07:03 AM   #2
mardanian
Member
 
Registered: Mar 2004
Distribution: Fedora
Posts: 254

Rep: Reputation: 30
Very informative, good job.

 
  

