LinuxQuestions.org
LinuxQuestions.org > Forums > Linux Forums > Linux - Hardware
Old 11-08-2012, 11:45 PM   #1
stf92
Senior Member
 
Registered: Apr 2007
Location: Buenos Aires.
Distribution: Slackware
Posts: 3,125

Rep: Reputation: 46
x86-64 processor?


As I understand it, the term 32-bit only has meaning as applied to x86 processors, and the same goes for 64-bit. Having the option to buy a desktop machine based on the Intel G620, I read here, under advanced technologies, that the processor supports Intel(R) 64; that is, Intel 64 is a brand. Whether it is a brand or not does not matter, for the question is: can I infer from this Intel page that the G620 is an x86-64 processor?

(b) I used to think 64-bit referred to the external data bus width, but one day I was able to verify that the Pentium I (aka 80586) had a 64-bit-wide external data bus, and yet that CPU was not advertised as an x86-64 processor. The only way out is to accept what Wikipedia says (first link above) and conclude that 64-bit (resp. 32-bit) refers to the INTERNAL data bus width, i.e., the register size. Am I correct?

Last edited by stf92; 11-08-2012 at 11:47 PM.
 
Old 11-09-2012, 12:01 AM   #2
replica9000
Member
 
Registered: Jul 2006
Location: Quahog, Rhode Island
Distribution: Debian 'Sid', FreeBSD, Android
Posts: 639
Blog Entries: 2

Rep: Reputation: 112
To answer the first part of your question, the G620 is an x86_64 Intel Sandy Bridge processor.
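For what it's worth, you can confirm 64-bit capability on a running Linux box independently of Intel's branding: on x86, the 'lm' (long mode) flag in /proc/cpuinfo marks an x86-64-capable CPU. A minimal sketch (the helper name is mine):

```python
# Sketch: decide from /proc/cpuinfo text whether an x86 CPU supports
# 64-bit "long mode".  The 'lm' entry in the "flags" line is the
# indicator, whatever the marketing name (Intel 64, EM64T, AMD64).

def is_x86_64(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line lists 'lm' (long mode)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags") and ":" in line:
            if "lm" in line.split(":", 1)[1].split():
                return True
    return False

# On a real system:  is_x86_64(open("/proc/cpuinfo").read())
sample = "flags\t\t: fpu vme de pse tsc msr pae sse sse2 ht lm constant_tsc"
print(is_x86_64(sample))   # prints True: this flags line contains 'lm'
```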
 
1 member found this post helpful.
Old 11-09-2012, 12:05 AM   #3
stf92
Quote:
Originally Posted by replica9000 View Post
To answer the first part of your question, the G620 is an x86_64 Intel Sandy Bridge processor.
Sir, thank you very much.
 
Old 11-09-2012, 12:10 AM   #4
replica9000
To answer the second part, 32-bit or 64-bit refers to the instruction set.
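One concrete way to see the instruction-set difference: code built for 32-bit x86 uses 4-byte pointers, while x86-64 code uses 8-byte pointers. A small illustrative sketch, not from the thread:

```python
# Sketch: the word size a program was built for shows up as the
# size of a native pointer ('P' in struct's format notation).
import struct

word_bits = struct.calcsize("P") * 8   # 32 on a 32-bit build, 64 on a 64-bit build
print(f"this interpreter is a {word_bits}-bit build")
```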
 
Old 11-09-2012, 12:34 AM   #5
cascade9
Senior Member
 
Registered: Mar 2011
Location: Brisneyland
Distribution: Debian, aptosid
Posts: 3,718

Rep: Reputation: 903
32-bit does mean something when applied to non-x86 CPUs.

I really dislike the 'Intel 64' naming. It's confusing (IA-32 is 32-bit x86, IA-64 is Itanium, Intel 64 is x86-64/AMD64) and it has also had other names (EM64T).

BTW, even if the Intel product page says that a CPU supports feature 'X', that doesn't mean it will in the real world. For example, many Intel Atom systems are locked via the BIOS to 32-bit only, even though the CPUs (and chipsets) support 64-bit.

You shouldn't have that problem with G620/LGA 1155 systems... as far as I know.

I'd consider an AMD over absolute bottom-of-the-line Intel CPUs.
 
Old 11-09-2012, 02:22 AM   #6
stf92
Quote:
Originally Posted by replica9000 View Post
To answer the second part, 32-bit or 64-bit refers to the instruction set.
Your definition is also valid because, of course, in passing from 32 to 64 bits the opcodes and prefixes had to be modified to make the instructions able to operate on both 32- and 64-bit registers (the whole register or only its lower half). I prefer mine precisely because it does not refer to the instruction set, a rather complicated subject.
 
Old 11-09-2012, 02:27 AM   #7
stf92
Quote:
Originally Posted by cascade9 View Post
I'd consider an AMD over absolute bottom of the line Intel CPUs.
Are you telling me the G620 is "absolute bottom of the line" (among Intel CPUs)? Of course you are speaking about currently released CPUs. But how does it compare with the Intel Celeron D?

Last edited by stf92; 11-09-2012 at 02:29 AM.
 
Old 11-09-2012, 02:37 AM   #8
cascade9
Not quite, but near enough.

The only LGA 1155 CPUs slower than the G620 are a few Celeron G5XX dual-cores. The only difference between the G6XX and G5XX CPUs is L3 cache (G6XX is 3MB, G5XX is 2MB).
 
1 member found this post helpful.
Old 11-09-2012, 02:55 AM   #9
stf92
Quote:
Originally Posted by cascade9 View Post
Not quite, but near enough.

The only LGA 1155 CPUs slower than the G620 are a few Celeron G5XX dual-cores. The only difference between the G6XX and G5XX CPUs is L3 cache (G6XX is 3MB, G5XX is 2MB).
The Celeron D I am referring to is 2.26GHz/256/533, while for the G620 I have 2.6GHz/512/1333. So, apart from the L1 and L3 caches and other considerations, we have nearly identical clock frequencies, and the difference is double the cache size and more than double the FSB frequency.

Are these two things so important as to make such a big difference? And I say "big difference" because the Celeron D is very old compared to the G620. The G620 takes DDR3, the Celeron D only DDR: see how old it is! Plus the Celeron D is single-core.
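The ratios in those quoted specs can be tallied quickly; note that such raw spec ratios say little across CPU generations. A sketch using only the figures given in this post:

```python
# Arithmetic sketch using only the spec strings quoted above
# (clock GHz / cache KB / FSB MHz).  Raw ratios only; they do not
# capture per-clock efficiency differences between generations.
celeron_d = {"clock_ghz": 2.26, "cache_kb": 256, "fsb_mhz": 533}
g620      = {"clock_ghz": 2.60, "cache_kb": 512, "fsb_mhz": 1333}

for key in celeron_d:
    print(f"{key}: {g620[key] / celeron_d[key]:.2f}x")
# clock ~1.15x, cache 2.00x, FSB ~2.50x
```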

Last edited by stf92; 11-09-2012 at 02:58 AM.
 
Old 11-09-2012, 05:19 AM   #10
cascade9
Quote:
Originally Posted by stf92 View Post
But how does it compare with Intel Celeron D?
Sorry, didn't see this bit when I answered before.

Quote:
Originally Posted by stf92 View Post
The Celeron D I am referring to is 2.26GHz/256/533, while for the G620 I have 2.6GHz/512/1333. So, apart from the L1 and L3 caches and other considerations, we have nearly identical clock frequencies, and the difference is double the cache size and more than double the FSB frequency.

Are these two things so important as to make such a big difference? And I say "big difference" because the Celeron D is very old compared to the G620. The G620 takes DDR3, the Celeron D only DDR: see how old it is! Plus the Celeron D is single-core.
That would be a Celeron D 315, which is based on the 'Prescott' P4s.

It's hard enough to compare two different CPUs of similar age based on clock speed alone. In the case of a Celeron D vs. a 'Sandy Bridge' iX CPU, MHz is meaningless.

Don't forget that Intel replaced the Pentium D 9XX series (basically 2 x P4 cores on a single CPU; the fastest was the Pentium D 960 @ 3.6GHz, 800MHz FSB, 2 x 2MB cache) with the Core 2 Duo. A Core 2 Duo 6300 (1.83GHz, 2MB cache) is faster everywhere than a Pentium D 960. The Core 2 Duo had several revisions and updates, and also a die shrink, before it was replaced.

The Core 2 Duos were replaced with iX, and there have been several revisions, updates and die shrinks since then.

My guess is that, Celeron D 315 vs. G620, the G620 would be about 8-10 times faster in some situations, and probably something like 2-5 times faster everywhere.

Last edited by cascade9; 11-09-2012 at 05:21 AM.
 
1 member found this post helpful.
Old 11-09-2012, 06:53 AM   #11
stf92
That was very kind of you. One of the main considerations in choosing to buy the machine with the G620 (Gigabyte H61 motherboard), when I already had one with a P4i65G motherboard (Celeron D), was that while the P4i65G takes only DDR, the H61 takes DDR3, and in this way I am assured of RAM availability in the market for a good many years in case I want to extend my memory.

But anyway, I wanted to make sure I wasn't being fooled by the seller, who happened to offer me the Gigabyte H61 machine when he saw my P4i65G one.
 
Old 11-09-2012, 07:13 AM   #12
stf92
A marginal note: is it possible that a random fact, like Microsoft's choice of Intel as its partner (or I don't remember well what kind of arrangement it was [oh yes, the choice of the 8088 for their O.S.]), prompted:
(a) the disappearance of mid-range computers, call them minicomputers;
(b) the unbelievable acceleration in the development of microprocessors that followed (Intel was then suffering a bit from the rise of the Z80 in the microcomputer market, which was eclipsing the 8080);
(c) the dominance of Intel in the home/office computer market up to our days?
 
Old 11-09-2012, 08:03 AM   #13
johnsfine
Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,076

Rep: Reputation: 1110
Quote:
Originally Posted by stf92 View Post
(b) I used to think 64-bit referred to the external data bus width, but one day I could verify that the Pentium I (aka 80586) had an external data bus 64-bit wide and, however, this CPU was not advertised as an x86-64 processor. The only way out is to accept what wikipedia says (first link above) and conclude that 64-bit (resp. 32-bit) refers to the INTERNAL data bus width, i.e., register size. Am I correct?
What aspect of a CPU is chosen for tagging it as "64-bit" is almost arbitrary.

Various models of 32-bit X86 had 64-bit internal and external data paths, as well as some 64-bit registers and many instructions that operated on 64-bit or even 128-bit data.

In 32-bit X86, virtual addresses are 32 bits. In X86-64, virtual addresses are 64-bit (but only 48 of those bits are used).

There are a lot of other differences between 32 bit X86 and X86-64. The size of a virtual address is hardly the most important difference. But for the simple tag of "32 bit" vs. "64 bit" the size of a virtual address was used.
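The 48-bit point can be made concrete: x86-64 requires virtual addresses to be "canonical", i.e. bits 48-63 must be copies of bit 47 (sign extension). An illustrative sketch (the function name is mine):

```python
# Sketch: x86-64 "canonical" virtual addresses.  Only the low 48 bits
# carry information; bits 48-63 must all equal bit 47, so the 64-bit
# space has a large non-canonical hole in the middle.

def is_canonical(addr: int) -> bool:
    top = addr >> 47                          # bit 47 plus all the bits above it
    return top == 0 or top == (1 << 17) - 1   # all zeros or all ones

print(is_canonical(0x0000_7FFF_FFFF_FFFF))  # True: top of the lower half
print(is_canonical(0xFFFF_8000_0000_0000))  # True: bottom of the upper half
print(is_canonical(0x0000_8000_0000_0000))  # False: inside the hole
```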

Quote:
Originally Posted by stf92 View Post
a random fact, like Microsoft's choice of Intel as its partner
Microsoft did not choose Intel. IBM chose a PC design from an Intel application note (a PC design Intel was giving away as a means of encouraging use of the 8088 chip required by that design).
IBM chose Microsoft as the OS vendor, because Microsoft was able to steal the exclusive rights to the OS that had been written by the first company that tried to manufacture and market a PC based on the same Intel design.
IBM chose both Intel and Microsoft (rather than in house design for everything) because they made an abrupt decision that they needed to immediately kill the momentum of the Apple II and they didn't have time to engineer their product themselves.
IBM inserted a proprietary BIOS design between the Intel hardware design (that anyone could freely copy) and the Microsoft OS (that Microsoft could license to whomever they wished). IBM thought the BIOS would protect them against clones, but that ultimately failed, giving Microsoft and Intel control of the PC industry that IBM's marketing muscle had created.

Last edited by johnsfine; 11-09-2012 at 08:18 AM.
 
1 member found this post helpful.
Old 11-09-2012, 08:50 AM   #14
stf92
Quote:
Originally Posted by johnsfine View Post
What aspect of a CPU is chosen for tagging it as "64-bit" is almost arbitrary.

Various models of 32-bit X86 had 64-bit internal and external data paths, as well as some 64-bit registers and many instructions that operated on 64-bit or even 128-bit data.

In 32-bit X86, virtual addresses are 32 bits. In X86-64, virtual addresses are 64-bit (but only 48 of those bits are used).

There are a lot of other differences between 32 bit X86 and X86-64. The size of a virtual address is hardly the most important difference. But for the simple tag of "32 bit" vs. "64 bit" the size of a virtual address was used.
So, taking the ancient 80286 as an example, we have: segment selector size = 16, segment offset size = 16, giving a 32-bit pointer (iAPX 286 Programmer's Reference Manual, p. 6-2), and an x86-32 Intel processor.
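That selector:offset layout can be written down as a simple bit-packing exercise (illustrative only; the 286's translation of selector:offset to a physical address via descriptor tables is not modeled):

```python
# Sketch: an 80286 protected-mode far pointer -- a 16-bit segment
# selector and a 16-bit offset stored together as one 32-bit value.

def pack_far_pointer(selector: int, offset: int) -> int:
    assert 0 <= selector <= 0xFFFF and 0 <= offset <= 0xFFFF
    return (selector << 16) | offset          # selector:offset, 32 bits total

def unpack_far_pointer(ptr: int) -> tuple:
    return ptr >> 16, ptr & 0xFFFF

ptr = pack_far_pointer(0x0008, 0x1234)
print(hex(ptr))   # prints 0x81234 (selector 0x0008, offset 0x1234)
```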

Quote:
IBM chose Microsoft as the OS vendor, because Microsoft was able to steal the exclusive rights to the OS that had been written by the first company that tried to manufacture and market a PC based on the same Intel design.
I saw this on TV.
Quote:
IBM chose both Intel and Microsoft (rather than in house design for everything) because they made an abrupt decision that they needed to immediately kill the momentum of the Apple II and they didn't have time to engineer their product themselves.
IBM inserted a proprietary BIOS design between the Intel hardware design (that anyone could freely copy) and the Microsoft OS (that Microsoft could license to whomever they wished). IBM thought the BIOS would protect them against clones, but that ultimately failed, giving Microsoft and Intel control of the PC industry that IBM's marketing muscle had created.
What an interesting story.

Last edited by stf92; 11-09-2012 at 09:01 AM.
 
Old 11-09-2012, 10:20 AM   #15
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Hanover, Germany
Distribution: Main: Gentoo Others: What fits the task
Posts: 15,580
Blog Entries: 2

Rep: Reputation: 4037
Quote:
Originally Posted by stf92 View Post
A marginal note: is it possible that a random fact, like Microsoft's choice of Intel as its partner (or I don't remember well what kind of arrangement it was [oh yes, the choice of the 8088 for their O.S.]), prompted:
It was not Intel that chose Microsoft; it was IBM that chose to use an Intel CPU with a Microsoft OS.

Quote:
(a) the disappearance of mid-range computers, call them minicomputers;
Minicomputers, despite their name, were anything but mini. They disappeared because they could be replaced with more powerful microcomputers.

Quote:
(b) the unbelievable acceleration in the development of microprocessors that followed (Intel was then suffering a bit from the rise of the Z80 in the microcomputer market, which was eclipsing the 8080);
The acceleration in development and the huge performance gains were caused mostly by the discovery of the PC as a gaming and multimedia platform.
 
  

