Old 12-14-2020, 02:25 PM   #1
business_kid
LQ Guru
 
Registered: Jan 2006
Location: Ireland
Distribution: Slackware, Slarm64 & Android
Posts: 16,409

Rep: Reputation: 2338
CPU speeds, Bus Bandwidth, & similar stuff.


Back in the early days, every instruction always took 4+ stages, one per clock cycle:
  1. Address (for the upcoming instruction)
  2. Read instruction
  3. Compute (= internally decode) instruction
  4. Write output
Some instructions stretched to ≥5 clock cycles, and if you had one of the crappy parts with a combined address & data bus they got much longer, but mostly the CPU speed was crystal/4. All CPU I/O chips hung off this single address & data bus.
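To put rough numbers on that, here's a back-of-envelope sketch in Python; the 8 MHz crystal and the 4-cycle instruction are illustrative assumptions, not figures from any particular datasheet:

[code]
# Throughput of an old-style single-bus CPU (illustrative figures only).
crystal_hz = 8_000_000            # assume an 8 MHz crystal
cpu_hz = crystal_hz / 4           # "mostly the cpu speed was crystal/4"
cycles_per_instruction = 4        # address, read, compute, write
ips = cpu_hz / cycles_per_instruction
print(f"best case: {ips / 1e6:.2f} million instructions/s")   # 0.50 MIPS
[/code]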

Now, in these days of multiple cores and caches, you lose track a bit. Take this example: https://www.solid-run.com/arm-server...b-workstation/
That has 16 ARM Cortex-A72 cores @ 2 GHz. For networking, it is IMHO vastly overspecified. They offer:
  • 1×100 Gbps NIC or 4×25 Gbps NICs
  • plus 4×10 Gbps NICs

I looked at this and it didn't add up. Now they've a set of 2 DIMMs, so maybe it's 8 cores each, but it still didn't seem to compute. I sent some searching questions. I got a reply back on the state of software progress, and a link to their software forum. It's definitely a software WIP. A 10 Gbps NIC is only doing 2.5 Gbps unless they use a 'passthrough IOMMU', in which case it's 6.5-7 Gbps. They're telling me they'll get more cores talking to the NICs, but haven't done the software yet.
  • Sure it's got 16 cores, but how much use are they?
  • Could that thing ever keep a 100 Gb NIC at top whack?
  • What on earth is going on instead of the single buses of old?
  • What sort of bus bandwidth do you need to feed a 100 Gb (12.5 gigabytes per second) NIC? See the rough arithmetic sketched after this list.
  • How does the IOMMU become the bottleneck?
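For the bandwidth question above, a rough Python sketch; the frame size is an assumption, and the core count and clock are taken from the board spec quoted above:

[code]
# Back-of-envelope: keeping a 100 Gbit/s NIC at top whack.
link_bps   = 100e9        # 100 Gbit/s line rate
frame_bits = 1500 * 8     # assume full-size Ethernet payloads
cores      = 16           # 16 Cortex-A72 cores...
core_hz    = 2e9          # ...at 2 GHz

pps = link_bps / frame_bits                # packets per second
cycles_per_pkt = core_hz * cores / pps     # cycle budget per packet
print(f"{pps / 1e6:.2f} Mpps")                             # ~8.33 Mpps
print(f"~{cycles_per_pkt:.0f} CPU cycles per packet")      # ~3840
print(f"raw payload rate: {link_bps / 8 / 1e9:.1f} GB/s")  # 12.5 GB/s
[/code]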

EDIT: Just found this: https://www.solid-run.com/arm-server...b-workstation/

Last edited by business_kid; 12-14-2020 at 02:44 PM. Reason: Wrong url stuck in
 
Old 12-16-2020, 04:07 AM   #2
business_kid
LQ Guru
 
Registered: Jan 2006
Location: Ireland
Distribution: Slackware, Slarm64 & Android
Posts: 16,409

Original Poster
Rep: Reputation: 2338
OK. I've timed out on this. Marking it solved.

I consider myself a hardware guy, with designs under my belt. But it appears everyone is like me - they don't have a clue.
I'm going to grab some figures and play with them to see what I can come up with. If I crack it, I'll report back.
 
Old 12-16-2020, 06:00 AM   #3
business_kid
LQ Guru
 
Registered: Jan 2006
Location: Ireland
Distribution: Slackware, Slarm64 & Android
Posts: 16,409

Original Poster
Rep: Reputation: 2338
Right, I kinda got this as clear as it's ever going to be.

On the old IBM XT/AT, there was ONE CPU and ONE address & data bus, and life was simple. Now it's extremely complex. Your CPU cores sit at several GHz, and everything else is throttled to some extent.

Using 2 DIMMs, there's a 128-line memory bus. The last RAM I read up on had a 6-1-1-1 access cycle. That means (I think): a fresh address costs 6 cycles of wait; then a 64-bit CPU doing a 128-bit read gets 2 successive reads in 2 cycles (so 8 cycles total for a fresh RAM address); increment the address, and another 2 reads = 3 cycles, and so on. And the RAM can't run at core speed, so it's throttled back a bit on top of that.
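Here's that 6-1-1-1 arithmetic as a sketch, reading the timing the conventional way (6 cycles for the first 128-bit beat, 1 cycle for each of the next three); the 1 GHz memory bus clock is an assumption:

[code]
# Effective bandwidth of a 128-bit bus with 6-1-1-1 burst timing.
bus_bytes = 128 // 8                 # 16 bytes per transfer
burst     = [6, 1, 1, 1]             # cycles per beat
cycles    = sum(burst)               # 9 cycles for 4 transfers
mem_clk   = 1.0e9                    # assume a 1 GHz memory bus clock

effective = bus_bytes * len(burst) * mem_clk / cycles
peak      = bus_bytes * mem_clk      # if every cycle were a transfer
print(f"effective: {effective / 1e9:.2f} GB/s vs peak {peak / 1e9:.0f} GB/s")
# effective: 7.11 GB/s vs peak 16 GB/s
[/code]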

Then, rather than one distributed parallel bus, there are many highly efficient serial I/Os dedicated to different parts. SATA, for example, uses one lane and maxes out around 500 MB/s on an SSD. But NVMe sits in a PCIe slot and can use more lanes (16?) to get to ~3 GB/s, which is seriously fast. USB 1/2/3 get their own slower speeds, and it kind of becomes obvious why GPIO is such a pain in the butt.

Caches are thrown around like confetti, and everyone is happy. This gets you away from the hard physical addressing of days of yore, when all µP support chips had fixed physical addresses, which was a hacker's dream come true. So "bus bandwidth" is sales talk, unless somebody can explain exactly what it means. But it probably relates to the potential: all other things being equal, the more cores (at the same speed), the more potential bus bandwidth, if the I/O lanes are correctly routed. It kinda makes me glad I've retired.
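To compare those serial links against the NIC, a quick table; the per-link figures are my ballpark assumptions for SATA III and PCIe 3.0 (~985 MB/s per Gen3 lane after 128b/130b encoding), not measurements:

[code]
# Ballpark throughput of the serial links mentioned above.
GBps = 1e9  # bytes per second
links = {
    "SATA III SSD":      0.55 * GBps,        # ~550 MB/s practical ceiling
    "NVMe on PCIe3 x4":  4 * 0.985 * GBps,   # ~985 MB/s per Gen3 lane
    "100 Gbit/s NIC":    12.5 * GBps,        # 100 Gbit/s of payload
}
for name, rate in links.items():
    print(f"{name:18s} ~{rate / GBps:5.2f} GB/s")
[/code]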
 
Old 12-24-2020, 06:27 AM   #4
obobskivich
Member
 
Registered: Jun 2020
Posts: 596

Rep: Reputation: Disabled
FWIW: I think NVMe can only do up to 4 lanes per device, at least that's what all of the consumer devices I've toyed around with support (and it's 1:1 with the PCIe generation, at least on paper - there's also stuff 'behind' NVMe, since there's a whole ARM CPU, DRAM, etc. between the NAND (however arranged) and the NVMe interface...). What I've seen from fancier arrangements is basically a bridged (e.g. PLX) or bifurcated (on newer systems) setup - some 'device' that takes in x8 or x16 lanes and spits out N*x4 to NVMe devices that you could then soft-RAID back together. That can mean crazy fast speeds - 10 GB/s is probably possible with modern NVMe drives, which should get close to saturating your 100 Gbit network - but as you point out, there's a lot 'in between' that may or may not let that happen. Non-standard stuff like fusionIO was/is after a similar goal, but I think most of that has gone the way of the dodo in favor of NVMe's ubiquity.
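Putting numbers on that bifurcation idea - the ~3.5 GB/s per-drive figure below is an assumed Gen3 x4 ceiling, not a measured one:

[code]
# Sketch: x4 NVMe drives soft-RAIDed to match a 100 Gbit/s NIC.
import math

drive_gbs = 3.5         # assume ~3.5 GB/s per Gen3 x4 NVMe drive
nic_gbs   = 100 / 8     # 100 Gbit/s = 12.5 GB/s
drives    = math.ceil(nic_gbs / drive_gbs)
slots     = 16 // 4     # an x16 slot bifurcated into x4 links
print(f"need >= {drives} drives; a bifurcated x16 slot carries {slots}")
[/code]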
 