LinuxQuestions.org (/questions/)
-   Linux - Hardware (https://www.linuxquestions.org/questions/linux-hardware-18/)
-   -   x86-64 processor? (https://www.linuxquestions.org/questions/linux-hardware-18/x86-64-processor-4175436354/)

stf92 11-08-2012 11:45 PM

x86-64 processor?
 
As I understand it, the term 32-bit has meaning only as applied to x86 processors. The same goes for 64-bit. Having the option to buy a desktop machine based on the Intel G620, I read here, under advanced technologies, that the processor is Intel(R) 64, that is, Intel 64 is a brand. Whether it is a brand or not does not matter; the question is: can I infer from this Intel page that the G620 is an x86-64 processor?

(b) I used to think 64-bit referred to the external data bus width, but one day I was able to verify that the Pentium I (aka 80586) had a 64-bit-wide external data bus and yet was not advertised as an x86-64 processor. The only way out is to accept what Wikipedia says (first link above) and conclude that 64-bit (resp. 32-bit) refers to the INTERNAL data bus width, i.e., the register size. Am I correct?

replica9000 11-09-2012 12:01 AM

To answer the first part of your question, the G620 is an x86_64 Intel Sandy Bridge processor.
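
If you want to double-check on the machine itself, here is a minimal sketch in C (assuming a Linux box with /proc mounted; the same information is visible with lscpu or by grepping /proc/cpuinfo by hand) that just looks for the "lm" (long mode) flag the kernel reports for x86-64 capable CPUs:

    /* Look for the "lm" (long mode) flag in /proc/cpuinfo; the kernel
       reports it when the CPU supports x86-64.
       A minimal sketch, assuming a readable /proc/cpuinfo. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[1024];
        FILE *f = fopen("/proc/cpuinfo", "r");

        if (!f) {
            perror("fopen");
            return 1;
        }
        while (fgets(line, sizeof line, f)) {
            /* match the whole " lm " token on the flags line */
            if (strncmp(line, "flags", 5) == 0 &&
                (strstr(line, " lm ") || strstr(line, " lm\n"))) {
                puts("CPU supports x86-64 (long mode)");
                fclose(f);
                return 0;
            }
        }
        fclose(f);
        puts("No long mode flag found: 32-bit-only CPU");
        return 0;
    }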

stf92 11-09-2012 12:05 AM

Quote:

Originally Posted by replica9000 (Post 4825506)
To answer the first part of your question, the G620 is an x86_64 Intel Sandy Bridge processor.

Sir, thank you very much.

replica9000 11-09-2012 12:10 AM

To answer the second part, 32bit or 64bit refers to the instruction set.
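
As a rough illustration (a sketch, assuming gcc on an x86-64 Linux box with multilib support), the same C source compiled for each instruction set reports different word and pointer sizes:

    /* Compile the same file for both instruction sets:
         gcc -m64 word.c && ./a.out   -> long and pointers are 8 bytes (x86-64)
         gcc -m32 word.c && ./a.out   -> long and pointers are 4 bytes (32-bit x86) */
    #include <stdio.h>

    int main(void)
    {
        printf("int     : %zu bytes\n", sizeof(int));
        printf("long    : %zu bytes\n", sizeof(long));
        printf("pointer : %zu bytes\n", sizeof(void *));
        return 0;
    }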

cascade9 11-09-2012 12:34 AM

32bit does mean something when applied to non-x86 CPUs.

I really dislike the 'Intel 64' naming. It's confusing (IA-32 is 32-bit x86, IA-64 is Itanium, Intel 64 is x86-64/AMD64), and it has also had other names (EM64T).

BTW, even if the Intel product page says that a CPU supports feature 'X', that doesn't mean it will in the real world. For example, many Intel Atom systems are locked via the BIOS to 32-bit only, even though the CPUs (and chipsets) support 64-bit.

You shouldn't have that problem with G620/LGA 1155 systems... as far as I know.

I'd consider an AMD over absolute bottom of the line Intel CPUs.

stf92 11-09-2012 02:22 AM

Quote:

Originally Posted by replica9000 (Post 4825508)
To answer the second part, 32bit or 64bit refers to the instruction set.

Your definition is also valid because, of course, in going from 32 to 64 bits the op-codes and prefixes had to be modified so that instructions can operate on both 32- and 64-bit registers (the whole register or only its lower half). I prefer mine precisely because it does not refer to the instruction set, a rather complicated subject.

stf92 11-09-2012 02:27 AM

Quote:

Originally Posted by cascade9 (Post 4825517)
I'd consider an AMD over absolute bottom of the line Intel CPUs.

Are you telling me the G620 is "absolute bottom of the line" (among Intel CPUs)? Of course you are speaking about currently released CPUs. But how does it compare with Intel Celeron D?

cascade9 11-09-2012 02:37 AM

Not quite, but near enough.

The only LGA 1155 CPUs slower than the G620 are a few Celeron G5XX dual-cores. The only difference between the G6XX and G5XX CPUs is L3 cache (G6XX is 3MB, G5XX is 2MB).

stf92 11-09-2012 02:55 AM

Quote:

Originally Posted by cascade9 (Post 4825579)
Not quite, but near enough.

The only LGA 1155 CPUs slower than the G620 are a few Celeron G5XX dual-cores. The only difference between the G6XX and G5XX CPUs is L3 cache (G6XX is 3MB, G5XX is 2MB).

The Celeron D I am referring to is 2.26GHz/256/533, while for the G620 I have 2.6GHz/512/1333. So, apart from the L1 and L3 caches and other considerations, we have nearly identical clock frequencies, and the difference is double the cache size and more than double the FSB frequency.

Are these two things so important as to make such a big difference? And I say "big difference" because the Celeron D is very old compared to the G620. The G620 takes DDR3, the Celeron D only DDR: see how old it is! Plus the Celeron D is single-core.

cascade9 11-09-2012 05:19 AM

Quote:

Originally Posted by stf92 (Post 4825574)
But how does it compare with Intel Celeron D?

Sorry, didn't see this bit when I answered before.

Quote:

Originally Posted by stf92 (Post 4825590)
The Celeron D I am referring to is 2.26GHz/256/533, while for the G620 I have 2.6GHz/512/1333. So, apart from the L1 and L3 caches and other considerations, we have nearly identical clock frequencies, and the difference is double the cache size and more than double the FSB frequency.

Are these two things so important as to make such a big difference? And I say "big difference" because the Celeron D is very old compared to the G620. The G620 takes DDR3, the Celeron D only DDR: see how old it is! Plus the Celeron D is single-core.

That would be a Celeron D 315, which is based on the 'Prescott' P4s.

It's hard enough to compare two different CPUs of similar age based on clock speed alone. In the case of a Celeron D vs. a 'Sandy Bridge' iX CPU, MHz is meaningless.

Don't forget that Intel replaced the Pentium D 9XX series (basically 2 x P4 cores on a single CPU; the fastest was the Pentium D 960 @ 3.6GHz, 800MHz FSB, 2 x 2MB cache) with the Core 2 Duo. A Core 2 Duo 6300 (1.83GHz, 2MB cache) is faster everywhere than a Pentium D 960. The Core 2 Duo had several revisions and updates, and also a die shrink, before it was replaced.

The Core 2 Duos were replaced with the iX series, and there have been several revisions, updates and die shrinks since then.

My guess is that, Celeron D 315 vs. the G620, the G620 would be about 8-10 times faster in some situations, and probably something like 2-5 times faster everywhere.

stf92 11-09-2012 06:53 AM

That was very kind of you. One of the main considerations in choosing to buy the machine with the G620 (Gigabyte H61 motherboard), when I already had one with a P4i65G motherboard (Celeron D), was the fact that while the P4i65G takes only DDR, the H61 takes DDR3, and in this way I am assured of RAM availability on the market for a good many years in case I want to expand my memory.

But anyway, I wanted to make sure I wasn't being fooled by the seller, who happened to offer me the Gigabyte H61 machine when he saw my P4i65G one.

stf92 11-09-2012 07:13 AM

A marginal note: is it possible that a random fact, like Microsoft's choice of Intel as its partner (or rather, I don't remember exactly what the arrangement was [oh yes, the choice of the 8088 as the CPU for their OS]), prompted:
(a) The disappearance of mid-range computers, call them minicomputers.
(b) The unbelievable acceleration in the development of microprocessors that followed (Intel was then suffering a bit from the rise of the Z80 in the microcomputer market, which was eclipsing the 8080).
(c) The dominance of Intel in the home/office computer market up to the present day?

johnsfine 11-09-2012 08:03 AM

Quote:

Originally Posted by stf92 (Post 4825502)
(b) I used to think 64-bit referred to the external data bus width, but one day I was able to verify that the Pentium I (aka 80586) had a 64-bit-wide external data bus and yet was not advertised as an x86-64 processor. The only way out is to accept what Wikipedia says (first link above) and conclude that 64-bit (resp. 32-bit) refers to the INTERNAL data bus width, i.e., the register size. Am I correct?

What aspect of a CPU is chosen for tagging it as "64-bit" is almost arbitrary.

Various models of 32-bit X86 had 64 bit internal and external data paths as well as some 64 bit registers and many instructions that operated on 64 bit data or even 128 bit data.

In 32-bit X86, virtual addresses are 32 bits. In X86-64, virtual addresses are 64-bit (but only 48 of those bits are used).

There are a lot of other differences between 32 bit X86 and X86-64. The size of a virtual address is hardly the most important difference. But for the simple tag of "32 bit" vs. "64 bit" the size of a virtual address was used.
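
A small sketch that makes this visible (assuming a typical x86-64 Linux system): pointers occupy 64 bits, yet every user-space address the program actually sees fits comfortably in the low 48 bits:

    /* Pointers are stored in 64 bits on x86-64, but current CPUs only
       implement 48 bits of virtual address space; the user-space
       addresses Linux hands out fit well below 2^47.
       A minimal sketch, assuming a typical x86-64 Linux system. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <inttypes.h>

    int main(void)
    {
        int stack_var;
        void *heap = malloc(16);

        printf("pointer size  : %zu bits\n", 8 * sizeof(void *));
        printf("heap address  : 0x%016" PRIxPTR "\n", (uintptr_t)heap);
        printf("stack address : 0x%016" PRIxPTR "\n", (uintptr_t)&stack_var);

        free(heap);
        return 0;
    }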

Quote:

Originally Posted by stf92 (Post 4825732)
a random fact, like Microsoft's choice of Intel as its partner

Microsoft did not choose Intel. IBM chose a PC design from an Intel application note (a PC design Intel was giving away as a means of encouraging use of the 8088 chip required by that design).
IBM chose Microsoft as the OS vendor, because Microsoft was able to steal the exclusive rights to the OS that had been written by the first company that tried to manufacture and market a PC based on the same Intel design.
IBM chose both Intel and Microsoft (rather than in house design for everything) because they made an abrupt decision that they needed to immediately kill the momentum of the Apple II and they didn't have time to engineer their product themselves.
IBM inserted a proprietary BIOS design between the Intel hardware design (that anyone could freely copy) and the Microsoft OS (that Microsoft could license to whomever they wished). IBM thought the BIOS would protect them against clones, but that ultimately failed, giving Microsoft and Intel control of the PC industry that IBM's marketing muscle had created.

stf92 11-09-2012 08:50 AM

Quote:

Originally Posted by johnsfine (Post 4825770)
What aspect of a CPU is chosen for tagging it as "64-bit" is almost arbitrary.

Various models of 32-bit X86 had 64 bit internal and external data paths as well as some 64 bit registers and many instructions that operated on 64 bit data or even 128 bit data.

In 32-bit X86, virtual addresses are 32 bits. In X86-64, virtual addresses are 64-bit (but only 48 of those bits are used).

There are a lot of other differences between 32 bit X86 and X86-64. The size of a virtual address is hardly the most important difference. But for the simple tag of "32 bit" vs. "64 bit" the size of a virtual address was used.

So taking the ancient 80286 as an example, we have: segment selector size = 16, segment offset size = 16, giving a 32-bit pointer (iAPX 286 Programmer's Reference Manual, p. 6-2), and an x86-32 Intel processor.
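
To put numbers on it, a toy sketch (the selector and offset values below are made up, purely for illustration) of packing the 16-bit selector and 16-bit offset into the 32-bit pointer the manual describes:

    /* A 286 "full pointer" as described above: a 16-bit segment selector
       plus a 16-bit offset, stored together as one 32-bit value.
       The selector and offset are hypothetical example values. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t selector = 0x0010;   /* hypothetical selector */
        uint16_t offset   = 0x1234;   /* hypothetical offset   */
        uint32_t far_ptr  = ((uint32_t)selector << 16) | offset;

        printf("%04X:%04X -> 32-bit pointer 0x%08X\n", selector, offset, far_ptr);
        return 0;
    }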

Quote:

IBM chose Microsoft as the OS vendor, because Microsoft was able to steal the exclusive rights to the OS that had been written by the first company that tried to manufacture and market a PC based on the same Intel design.
I saw this on TV.
Quote:

IBM chose both Intel and Microsoft (rather than in house design for everything) because they made an abrupt decision that they needed to immediately kill the momentum of the Apple II and they didn't have time to engineer their product themselves.
IBM inserted a proprietary BIOS design between the Intel hardware design (that anyone could freely copy) and the Microsoft OS (that Microsoft could license to whomever they wished). IBM thought the BIOS would protect them against clones, but that ultimately failed, giving Microsoft and Intel control of the PC industry that IBM's marketing muscle had created.

How interesting. What an interesting story.

TobiSGD 11-09-2012 10:20 AM

Quote:

Originally Posted by stf92 (Post 4825732)
A marginal note: is it possible that a random fact, like Microsoft's choice of Intel as its partner (or rather, I don't remember exactly what the arrangement was [oh yes, the choice of the 8088 as the CPU for their OS]), prompted:

It was not Intel that chose Microsoft; it was IBM that chose to use an Intel CPU with a Microsoft OS.

Quote:

(a) The disappearance of mid-range computers, call them minicomputers.
Minicomputers, despite their name, were anything but mini. They disappeared because they could be replaced with more powerful microcomputers.

Quote:

(b) The unbelievable acceleration in the development of microprocessors that followed (Intel was then suffering a bit from the rise of the Z80 in the microcomputer market, which was eclipsing the 8080).
The acceleration in development and the huge performance gains were mostly driven by the discovery of the PC as a gaming and multimedia platform.

