LinuxQuestions.org
Old 12-16-2009, 11:24 AM   #31
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Rep: Reputation: 258

Quote:
Originally Posted by salasi

In that context, it is unsurprising that AMD, uninhibited by the need to protect a struggling higher-end, more expensive arch, showed more enthusiasm for attacking the higher-end market with x86. Even if you ignore the 64 bit/memory space issue, I can't see any other way in which you can explain the crippling of virtualisation performance in 64 bit Core 2 arch products, but not 32 bit.
That's a good point. Microsoft seems to be following Intel's lead by providing only a limited virtual environment with Windows XP Mode on Windows 7.

Quote:
Originally Posted by salasi
The first x86 processor, surely?
Yes, the first x86 processor. I would be happier if some other architecture was competing with Intel in the PC market. AMD is essentially competing with a different implementation of the same architecture. I think I've seen ARM on a Linux netbook. I'm more familiar with MIPS since I write some embedded software for MIPS.

Quote:
Originally Posted by salasi
Assuming that you mean Xeon...
I never can get that name right. I think (hope) that I've finally sorted out Core, Core 2, and Core 2 Duo / Core 2 Solo. Where does Intel come up with these confusing names?

I'm also known for my inconsistent pronunciation of "Linux". So far most Linux enthusiasts have forgiven me.

Thanks for all the thoughtful comments.
 
Old 12-17-2009, 06:48 AM   #32
salasi
Senior Member
 
Registered: Jul 2007
Location: Directly above centre of the earth, UK
Distribution: SuSE, plus some hopping
Posts: 4,070

Rep: Reputation: 897
Quote:
Originally Posted by Erik_FL
Yes, the first x86 processor.
...but IIRC way after, say, a DEC Alpha, and I'm sure after many of the non-commodity processors.

Quote:
I would be happier if some other architecture was competing with Intel in the PC market. AMD is essentially competing with a different implementation of the same architecture.

And, right now, being absolutely destroyed by Intel on the performance front. Essentially, AMD are only staying in the market by 'fire sale pricing' at the low/medium end, and this isn't a healthy situation, either for consumers or for AMD.

I don't think AMD can really turn this situation around until their new architecture arrives (a new implementation of the x86 arch, if you like), and that's quite some time off; in the interim Intel has a few speed bumps to go. I hope that they know what they are doing.

Quote:
I think I've seen ARM on a Linux netbook. I'm more familiar with MIPS since I write some embedded software for MIPS.
I've seen, or seen reviewed, netbooks on MIPS and ARM. The MIPS one got a poor review for lack of performance, but I suspect that this was caused by choosing a low clock speed chip for cost and power consumption reasons. These processors aren't challenging the top performance x86 chips, but are more comparable with x86 chips of a few years ago (and we thought that they were reasonable at the time) or maybe the Atom chips.


Quote:
I never can get that name right. I think (hope) that I've finally sorted out Core, Core 2, and Core 2 Duo / Core 2 Solo. Where does Intel come up with these confusing names?
Ha! Intel have decided that not only do they feel your pain, but they have the technology to make it worse! Not only is the 'Core' naming on the way out just as people have started to get used to it, the i3/i5/i7 names seem destined to be even more confusing. The 'top end' parts have a different socket from the mid-end/low-end parts. So that'll be the i7 on a different socket from the i5, won't it? No, of course not; that would be too easy to understand. Some are, some aren't.

This is going to cause massive confusion amongst upgraders.

And there are still 'low end' Pentium (not the old Pentium 4, but the cut-down Core 2 parts) and Celeron parts hanging around. Are these being killed off by the i3s? Possibly, but I've not actually seen anything on the subject. And I've not yet seen which socket the i3s will use, so there is scope for further confusion there...

I ought to seek out Intel's latest roadmap, but I don't have any aspirin handy.
 
Old 12-17-2009, 12:58 PM   #33
trademark91
Member
 
Registered: Sep 2009
Distribution: Slackware -current x64
Posts: 372

Rep: Reputation: 74
Until you add more RAM to your system, there will not be any noticeable difference between the 32-bit and 64-bit system.
The main difference between the two is that 64-bit can detect and utilize more system RAM than 32-bit.

Once you add more RAM and upgrade to 64-bit, it should run faster and smoother; however, some programs lack 64-bit compatibility and may therefore become a pain to use.
 
Old 12-17-2009, 01:05 PM   #34
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Quote:
once you add more ram, and upgrade to 64 bit, it should run faster and smoother
Except that it doesn't, and in fact, there's no reason for it to do so, for reasons that have already been listed.
 
Old 12-18-2009, 06:09 AM   #35
vlbaindoor
LQ Newbie
 
Registered: Oct 2009
Location: UK
Distribution: Suse, Fedora, Ubuntu
Posts: 8

Rep: Reputation: 1
History repeats itself

Quote:
Originally Posted by JK3mp
What would be the advantage of running a 64bit system over a 32bit system? I only have 3GB of ram but plan on kickin another 1GIG into it. But i wanna try 64bit Linux(probably slackware) on it. But first im just wondering what the advantages are people have seen who have used both 32/64bit linux. Speed? Smoothness? And also what are major disadvantages such as compatibility, configuration, etc. Any help/comments appreciated...thanks.
Hi there,

Having read through some of the messages here, "history repeats" is the phrase that comes to mind.

During the pre-Windows 95 days the debate elsewhere was: "Are 32-bit OSes and applications any better than 16-bit ones?" And the community went through very similar arguments.

At the present point in time most of us use 32-bit everything, while looking at 64-bit things and asking, "Is 64-bit any better than 32-bit?"

Next decade we will probably be asking, "Are 128-bit OSes and applications any better than 64-bit?" while most of us use 64-bit things!

'Fascinating!', as Spock (Star Trek) would have said!

Last edited by vlbaindoor; 12-18-2009 at 06:11 AM. Reason: 'Fascinating!'
 
Old 12-18-2009, 12:32 PM   #36
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Rep: Reputation: 258
Quote:
Originally Posted by vlbaindoor
Hi there,

Having read through some of the messages here, "history repeats" is the phrase that comes to mind.

During the pre-Windows 95 days the debate elsewhere was: "Are 32-bit OSes and applications any better than 16-bit ones?" And the community went through very similar arguments.

At the present point in time most of us use 32-bit everything, while looking at 64-bit things and asking, "Is 64-bit any better than 32-bit?"

Next decade we will probably be asking, "Are 128-bit OSes and applications any better than 64-bit?" while most of us use 64-bit things!

'Fascinating!', as Spock (Star Trek) would have said!
It's really not the same kind of change as 16-bit to 32-bit. You could use money as an analogy. With 16-bit you would have $65,000. With 32-bit you would have 4 billion dollars. With 64-bit you would have some astronomically huge amount of money (4 billion times 4 billion). Going from $65,000 to 4 billion dollars is a lot more significant than going from 4 billion to an essentially unlimited amount.

16-bit was clearly limiting most applications as well as introducing significant complexity to access data. 32-bit CPUs are still more than adequate for most applications, though there are some that can benefit from 64-bit.

Although the debate may be similar, the arguments for 64-bit are not nearly as strong as those for 32-bit were. A 16-bit address required programs to manage segments (64K blocks) of memory if more space was required for data or instructions. The 32-bit CPUs use a "flat" memory model where programs can see up to 2 billion or 3 billion bytes at the same time. The other 1 billion or 2 billion bytes are usually reserved for parts of the OS to be visible in a program's memory space.

64-bit CPUs also use a flat memory model, but it allows 4 billion times as much space as a 32-bit CPU. Most of that space is empty, since physical RAM sizes are limited to only a tiny fraction of it. It's like having a wallet that will hold 4 billion dollars when you only have $32 to put in the wallet. How much different is that from a wallet that will hold $2? It's really only 16 times more money, and not much use if what you want to buy only costs $1. And the wallet is not your entire savings, because you also have a Linux bank account (physical RAM) that can hold $64, although the Microsoft "bank" is limited to $4.

In the world of technology, "because we can" is often reason enough for change. When something becomes possible and affordable, it often happens even without a definite requirement. With CPUs bumping up against the clock-rate ceiling, squeezing extra performance from the hardware is becoming more difficult. Multiple CPU cores and 64-bit are the current approach to extra performance. Both carry significant extra complexity and have the potential to introduce problems.

In my opinion the two limiting factors of computers are really data storage and RAM speeds. Hard disks are a relatively slow and unreliable way to store data. Many applications are limited by disk access rather than the processing speed of the CPU or RAM. Applications that aren't limited by the hard disk tend to be limited by RAM access times: in many applications the CPU can process data much faster than the RAM can supply and accept it. With multiple CPU cores and 64-bit, the demand on RAM is even higher. We're seeing more memory controllers on the CPU chip (admittedly an AMD innovation that Intel has copied), and I think we may eventually see the RAM on the same chip as the CPU.

Cache memory on CPU chips helps some, but there is a limited amount of it. Cache is also complicated to manage, and it is slowed down because not ALL of the RAM data is in the cache: entries have to be tagged, and it takes some extra time to find the data in the cache and get a cache "hit". RAM running at the same speed as cache would require less addressing time. I think that improvements in hard disk access speed and RAM speed are much more significant than the change from 32-bit to 64-bit CPUs.
 
Old 12-18-2009, 12:42 PM   #37
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
I don't think there's any doubt that it is time for native 64 bit integers. What's not clear is whether it's time for 64 bit addressing. Unfortunately, one implies the other, so in order to get the advantage of the one you have to accept the baggage (in load times and memory footprint) of the other. I'm sure it will sort itself out, eventually. But, given the advantage, in some cases, of 32-bit or even 16-bit addressing, maybe it's not quite time to throw 32 bits away; even on an x86_64 system.
 
Old 12-18-2009, 02:47 PM   #38
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by Quakeboy02
I don't think there's any doubt that it is time for native 64 bit integers.
I think there is plenty of doubt. 64 bit integers, other than those used in support of 64 bit addresses, are a rarely used feature.

X86_64 architecture makes 32 bit integers the natural size. I expect that was done because designers at AMD shared my opinion of 64 bit integers.

X86_64 does make 64 bit integers only a trivial amount slower than 32 bit integers (unlike 32 bit X86, in which 64 bit integer operations must be composed from multiple 32 bit operations). Since that capability didn't cost much to add, it is a nice bonus even though 64 bit integers are rarely used.

Quote:
What's not clear is whether it's time for 64 bit addressing.
The need for more than 32 bits of virtual addressing is also still a minority of PC use. But I'm sure it is a much bigger minority than the applications which have enough natural (non addressing based) need for 64 bit integers to see a measurable performance difference from them.

Quote:
Unfortunately, one implies the other
Only in the other direction. If the designers of AMD64 had thought 64 bit integers were important but 64 bit addresses weren't, that would have been far easier to provide as a similarly semi-compatible evolutionary step for X86. Look at how 64 bit floating point has been supported even on 16 bit x86.

They understood the lack of market for 64 bit integers.

But providing 64 bit addressing without 64 bit integers would be insane. Lots of addressing operations must be supported by integer operations and most architectures use the same registers for integers and addresses. So 64 bit addresses without 64 bit integers makes no sense.

Quote:
But, given the advantage, in some cases, of 32-bit or even 16-bit addressing, maybe it's not quite time to throw 32-bits away; even on an x86_64 bit system.
Since they went to so much trouble to design 64 bit CPUs that do a great job of supporting the previous 32 bit architecture, it now takes only minor advantages for 32 bit to keep good 32 bit support in X86_64 CPUs far into the future.

There are many applications which run better in 32 bit mode.

But the same doesn't apply to 16-bit addressing. It is lame and obsolete. It doesn't make sense even for PC applications that would fit in 64KB. (When cache sizes are bigger than 4GB, it will likely be time to say the same about 32 bit addressing.)

The real reason for CPU backward compatibility is software that can't be recompiled for the new architecture. We don't see very much of that in the Linux world, but elsewhere it is the rule rather than the exception. That justified the big cost of putting all the backward compatibility into the X86_64 architecture. The fact that some applications run a little better in 32 bit even though they also run in 64 bit, would never have justified that cost.

Quote:
Originally Posted by Erik_FL
It's really not the same kind of change as 16-bit to 32-bit.
I totally agree with that (though not with most of your analogies and other material in support of the claim).

The PC industry didn't go from 16 bit flat to 32 bit flat. It was already beyond 16 bit flat before the first IBM PC.

It evolved through increasingly messy segmented designs barely reaching any level of memory support before the declining cost of ram made that level obsolete. There most definitely was not a jump of 16 bits in virtual or physical address size in the transition from 16 bit segmented to 32 bit flat.

One really big difference in "16 -> 32" vs. "32 -> 64" is how messy 16 bit segmented addressing was. So that made the switch to 32 bit flat more compelling.

Another big difference is how overdue the change was, as measured against ram costs. When 64-bit was introduced, 32 bit flat was less insufficient relative to ram costs than 16 bit segmented was at the introduction of 32 bit flat.

The current 64-bit is really 48 bit virtual addressing. That is a full 16 bit jump in virtual address size compared to 32 bit flat (32 bit segmented existed, but virtually no one used it). That 16 bit jump didn't happen between 16 bit segmented and 32 bit flat.

It is taking the software community quite a while to absorb that 16 bit jump. When virtual addressing was invented, virtual address space was much less limited than physical ram. But since then, many times and for long periods, virtual address space has been more limited than ram (3GB per process limit on a PAE system with up to 64GB physical ram and low enough ram prices that 64GB is reasonable).

It takes a while for industry practices to rediscover the power of plentiful virtual address space and to grow up to the limits of 48 bit virtual address space.

Relative to the suggestion of 128 bit addressing (that a 64 -> 128 transition might be pushed by the same factors as 32 -> 64 was), that misses the other 16 bits:

Going from 32-bit to 48-bit was a full 16-bit jump (a factor of 65,536) in virtual memory limits. That puts virtual memory limits a little beyond what current needs would be even if the industry quickly relearned how to make good use of virtual sizes far bigger than physical.

But with a moderate increase in gates within the CPU and a moderate increase in memory management in the OS, the next X86_64 architecture could have as much as another factor of 65536 in virtual size limits (with no disruption or even recompilation of application code) all within "64-bit". So we don't need 64 -> 128 when we use up the factor of 65536 that we got with X86_64, we need it when we use up the next factor of 65536 after that.

Last edited by johnsfine; 12-18-2009 at 03:10 PM.
 
Old 12-18-2009, 03:06 PM   #39
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Quote:
But the same doesn't apply to 16-bit addressing.
I wasn't talking about the oddball segment addressing introduced by the 286. I just meant that some programs work fine in a 16 bit environment, and I don't really see any reason to move them on to 64 bits of address space.

Quote:
The need for more than 32 bits of virtual addressing is also still a minority of PC use.
Well, the kernel truly needs more than 32 bits; otherwise we'd never have seen PAE. I just question whether 64 bits was a wise decision. Sure, it's a power of 2, but wouldn't 48 bits have been a better decision for everyone?
 
Old 12-18-2009, 03:15 PM   #40
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Come to think of it, what do we really need more than 32 bits for other than cipher computations and video editing?
 
Old 12-18-2009, 03:58 PM   #41
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by Quakeboy02
Come to think of it, what do we really need more than 32 bits for other than cipher computations and video editing?
What do you mean by "we"?

Some customers of the product I work on routinely run examples that wouldn't be close to fitting in 4GB of virtual address space. (That product is not at all related to ciphers or to video editing).

We build that product with a closed source compiler that unfortunately is only available as a 32 bit executable. Even building the 32 bit version of the product, there is one module that can't be compiled with optimization when the compiler itself is limited to 3GB. So we need a 64 bit OS to get a full 4GB for a 32 bit compiler to compile one module of the 32 bit version of the product. Even the compiler for 64 bit is 32 bit and we have needed to trim the source code in strange ways so that compiling that module for 64 bit will work with the compiler itself limited to 4GB.

It is a matter of function complexity, not total module size, that determines the virtual memory needed by the compiler. Reducing the function complexity in some problems is just not practical. Pulling parts of the function out (into sub-functions) without resolving the underlying complexity issues would devastate the performance of the final product.

So even C++ compilers are used at the limits of 4GB virtual size.

I expect your web browser and games and whatever run just fine with far less than 3GB of process virtual address space. But the same computer architecture used to play games at home is used for doing real work. Quite a lot of that real work bumps into the 4GB limit.
 
Old 12-18-2009, 04:13 PM   #42
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
John,

Somewhere along the road I seem to have really pissed you off. I am very sorry for whatever it was. Honest.

Quote:
What do you mean by "we"?
I meant it in the normal sense of the word, and I wasn't trying to imply that there were no other needs. The kernel clearly needs more than 32 bits. Databases are potentially a big user, as well as any program needing access to a very large data space; which we could really consider a subset of database, I think. Am I wrong in assuming that your program has a very large dataset?

It's an honest question, John, from one who used to work with large datasets in an airline reservations environment. The fact that I'm now retired doesn't mean that my computing horizons are limited to web browsing and email.
 
Old 12-18-2009, 04:33 PM   #43
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,407

Rep: Reputation: 141
Quote:
Originally Posted by Quakeboy02
Come to think of it, what do we really need more than 32 bits for other than cipher computations and video editing?
Ah, I see. I spoke poorly. So, to clarify: what do we do that needs integers larger than 32 bits other than cipher computations and video editing? John has suggested that we need large integers to access large datasets (or at least that's what I understood him to say).
 
Old 12-18-2009, 04:46 PM   #44
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by Quakeboy02
Somewhere along the road I seem to have really pissed you off.
I didn't think I was pissed off. So I'm sorry I gave that impression.

I also was surprised to reread your "32 bits" statement and see you hadn't said "addresses". I was responding about 32 bit addresses and now it seems that isn't what you meant at all.

As I said earlier, I think the major need for 64 bit integers is only in support of 64 bit addresses.

I also knew my question about the word "we" could sound rude. But it was still intended as a legitimate question. "We" might reasonably mean ordinary home users, and then I'd have a hard time arguing 32 bits (addresses or integers) aren't enough. I think X86_64 might be better even for an ordinary user, but not by a large enough margin to say we "need" it, and also not because of the 64-bit aspect of X86_64 (SSE and twice as many registers matter more to ordinary users).
 
1 member found this post helpful.
Old 12-18-2009, 07:30 PM   #45
Erik_FL
Member
 
Registered: Sep 2005
Location: Boynton Beach, FL
Distribution: Slackware
Posts: 821

Rep: Reputation: 258
16-bit and even to some extent 8-bit CPUs are still being used in embedded applications. The low gate count translates to lower power and space requirements. Lower power usually means lower heat and lower cooling requirements.

Microsoft has done a lot to kill 16-bit software by not supporting the 16-bit Virtual DOS Machine (VDM) or Windows On Windows (WOW) for 16-bit on their 64-bit operating systems. That even caused some 32-bit compatibility issues since some 32-bit applications were still using 16-bit installer programs. I would have been happier if Microsoft had continued to support 16-bit VDM/WOW.

I'm not sure exactly what 16-bit support is present in Linux for Linux or Windows applications. WINE supports 32-bit Windows applications but I'm not sure about 16-bit Windows applications or 64-bit Windows applications.

There are obviously some applications that can benefit from 64-bit CPUs just like there are applications that can benefit from 8-bit or 16-bit CPUs. The trade-off between hardware cost, development cost and design goals affects the choice of CPU architecture.
 
  

