Linux - Hardware
07-11-2020, 09:17 AM | #1
LQ Guru
Registered: Mar 2016
Location: Harrow, UK
Distribution: LFS, AntiX, Slackware
Posts: 8,256
A question about cpu frequencies
I have a laptop that runs AntiX and it has conky on the desktop. One of the things that conky shows is the cpu frequency, which seems to vary a lot. The maximum figure is 1600 (I assume MHz), but it is often lower.
How does this relate to the clock speed that you see in cpuinfo, which is clearly a constant? I understand that the tick rate of the cpu is the zeitgeber for the system clock, so presumably this has to be kept constant. I've tried googling but I don't understand the results. Sometimes they say clock rate and sometimes they say cpu frequency, but they don't say how the two relate.
07-11-2020, 02:18 PM | #3
Member
Registered: Aug 2004
Location: pune
Distribution: Slackware
Posts: 371
Processor performance is usually given as a number in either MHz (megahertz) or GHz (gigahertz). That number represents how many times the internal clock inside the CPU ticks, in cycles per second. The clock inside a 2.5GHz CPU ticks 2.5 billion times each second.
But clock frequency isn't a complete measure of performance. Efficiency - how much work the CPU can do in each clock cycle - is also important. This is measured in instructions per cycle, often abbreviated as IPC. A CPU with a very high clock frequency but low IPC may not perform as well as a CPU with a lower clock frequency and high IPC.
source : https://www.pcworld.com/article/221559/cpu.html
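To put rough numbers on that (made-up figures, purely to illustrate the clock x IPC relationship):
Code:
# effective throughput = clock * IPC; a slower clock can still win on IPC
awk 'BEGIN { printf "2.5 GHz x 4 IPC = %.0f G instructions/s\n", 2.5*4;
             printf "3.5 GHz x 2 IPC = %.0f G instructions/s\n", 3.5*2 }'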
07-11-2020, 02:22 PM | #4
LQ Guru
Registered: Jan 2006
Location: Ireland
Distribution: Slackware, Slarm64 & Android
Posts: 17,524
It's now actually a funny relationship. In the 'good' old days, it used to be fixed at 4:1. Your cycles were:
- Address (set your instruction read address)
- Read Data
- Compute
- Write
RAM was specified with an access time. Let's say your access time was 50ns; that meant your max cpu frequency would be 20MHz.
But it wasn't long before they started messing with that.
Now it's got fiercely complicated with thermal throttling, acpid throttling power, ram with 6-1-1-1 cycles, different write cycles, multiple cores, clocking on rising & trailing edges, caches, burst modes, etc. I consider myself a hardware guy, but I haven't a clue, really.
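A quick back-of-envelope check of that 50ns figure:
Code:
# one 50ns memory access per (equal-length) cycle => max clock = 1/50ns
awk 'BEGIN { t = 50e-9; printf "max clock = %.0f MHz\n", 1 / t / 1e6 }'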
07-12-2020, 05:16 AM | #5
LQ Guru
Registered: Mar 2016
Location: Harrow, UK
Distribution: LFS, AntiX, Slackware
Posts: 8,256
Original Poster
So are you saying that the variable "frequency" that conky and similar programs measure is the number of actual cpu operations per sec, which is a stepped down factor of the basic tick rate of the clock (or some further modification of that)? It seems a bit odd to me to have programs reporting on a measure if nobody knows what it means.
Last edited by hazel; 07-12-2020 at 05:43 AM.
07-12-2020, 06:43 AM | #6
Member
Registered: Jun 2020
Posts: 614
Quote:
Originally Posted by hazel
So are you saying that the variable "frequency" that conky and similar programs measure is the number of actual cpu operations per sec, which is a stepped down factor of the basic tick rate of the clock (or some further modification of that)? It seems a bit odd to me to have programs reporting on a measure if nobody knows what it means.
Not entirely.
Short explanation: conky is accurately reporting the 'real time' clock, and cpuinfo is reporting the advertised 'maximum' (non-Turbo) clock. 'Clock speed', 'clock rate', 'clock frequency' and so forth all mean the same thing - I'm not sure if it's a British/American distinction, an old/new distinction, etc, but they are interchangeable. I've seen it written all ways over the years - there doesn't seem to be a 'best convention' here either.
Less short explanation: Modern CPUs (on laptops this goes back to at least the 1st-gen Core chips for Intel, and mobile Athlon (K7) chips for AMD; on desktops more like the Athlon64 and Core 2, broadly speaking) adjust their working clockspeed for power management. Intel calls these different performance levels 'bins' in their marketing literature, but what is being adjusted is the clock multiplier. By lowering the clock multiplier they can also drop vcore and reduce overall power draw - with expanded ACPI states they can optimize task energy for many 'low demand' things.
Long explanation: If you've ever played around with the CPU features in your system's BIOS, you'll usually find two things: a front-side bus clock (or Bclk on newer non-FSB chips - for our purposes these are the same*) and a clock multiplier. The working clockspeed of the CPU = FSB * multiplier, so on a modern Intel chip (like the 10900k), that will be 100MHz * [some value], where [some value] can range up to 53.
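As a sanity check of that formula, with the 10900k numbers from above:
Code:
# working clock = reference clock (Bclk) * multiplier
awk 'BEGIN { printf "100 MHz * 53 = %.1f GHz\n", 100 * 53 / 1000 }'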
What conky is showing is real-time clock information from the CPU - what it is *currently* running at, frequency-wise. CPU info utilities usually show you some advertised 'maximum' speed as reported by cpuid (or referenced against some online database in response to cpuid), but even that concept is nebulous these days - AMD and Intel chips going back a few generations now also have a 'Turbo' feature that allows them to temporarily exceed their advertised clockspeed as long as they stay under a short-term current limit, don't have to exceed their specified maximum vcore, and don't hit thermal trip points. How exactly that trio works out in practice is something of a proprietary formula for Intel and AMD, as the limits, what 'short-term' means, and 'temporarily' are all contextually defined. And for extra confusion, motherboard makers can also substitute values (within reason) for some of those fields (but on laptops you will almost never see this, because modern laptops tend to be very thermally constrained).
So what this ultimately means is that your CPU may advertise a clockspeed of, let's say, 3GHz, but can temporarily run at, let's say, 3.2GHz as long as it isn't overheating or exceeding some other design limit. To use the 10900k as an example again, let's check it out on ARK ( https://ark.intel.com/content/www/us...-5-30-ghz.html) and note the 'UP TO' 5.30GHz. Note the marketing weasel words there. The 10900k is not running at 5.3GHz all the time - instead, if the system is within the above limits, the chip may temporarily boost up to 5.3GHz in certain contexts. The 'all core max clock' ('Base Processor Frequency' in Intel marketing speak) is 3.7GHz. If this were 2004, the 10900k would probably be sold as a '3.7GHz processor' with these stats, not a 5.3GHz chip. With multi-core chips like the 10900k there is also a 'how many cores are active?' variable for clockspeed - the chip will not run at 5.3GHz on 10 cores, but will pick some lower frequency. This doesn't mean the chip is turning off cores, just that if you are loading all 10 (or 20, with HT enabled) threads, you won't see 'the maximum clocks.' AMD CPUs going back a few generations have been able to set clocks per-core or per-module (for Bulldozer); this is a newer feature for Intel (I believe the 10900k is the first generation to do so - someone feel free to correct me if I'm wrong here, but I know at least through Broadwell they cannot do per-core clocking). This can make reading 'cpu frequency' harder, because a lot of applications assume either that A) all cores will be at the same frequency ('because this is how Intel does it') or that B) only cpu0 exists ('because this is how it used to be'), but it is entirely possible to see cpu0 idle at 800-1200MHz while cpu4 (for example) is roaring at 5GHz. I don't remember what conky watches by default, but ideally you will see N CPU frequency reports (where N = total # of cores/modules in the system) with modern equipment.
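If you want to watch that per-core behaviour yourself, the kernel's cpufreq sysfs interface exposes it (assuming cpufreq support is present; values are in kHz):
Code:
# current frequency of each core, as the kernel sees it
for f in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq; do
    printf '%s: %s kHz\n' "$f" "$(cat "$f")"
done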
This is different from the Core 2/Phenom era, when chips basically flipped 'on/off' between 'idle clocks' and 'working clocks' - e.g. my old Core 2 Quad Q9550 would 'idle' at 2GHz and 'work' at 2.8GHz - not very exciting (and not very much power savings). With the expansion of ACPI power states they can better optimize task energy (how many watts it takes to perform a given calculation) for 'light weight' loads - a lot of basic tasks like web browsing, checking email, etc don't rise to the level of 'warp speed, Mr Sulu' and thus can be run at idle or some intermediate point between idle and '100%' (e.g. very modern Intel chips can have 15-20 different 'stops' along the way). So using the 10900k as an example again, it would be entirely reasonable to expect 'basic' usage of the machine to see the CPU somewhere between 1.5-2.5GHz and a fairly low reported VDDC from the FIVR (realistically this can be under 15W), despite the chip being capable of drawing over 200W at full tilt.**
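The 'stops' and limits the OS is allowed to pick from live in the same sysfs tree (scaling_available_frequencies only appears with some drivers, so don't be surprised if it's missing):
Code:
cd /sys/devices/system/cpu/cpu0/cpufreq
cat scaling_governor                    # current policy: ondemand, schedutil, etc
cat cpuinfo_min_freq cpuinfo_max_freq   # hardware limits, in kHz
cat scaling_available_frequencies 2>/dev/null   # discrete 'stops', if exposed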
* What is actually the difference? Historically CPUs communicated with a memory controller hub (MCH) called the 'northbridge', which provided the interface to RAM and other I/O (like AGP or PCI), via the front-side bus. This bus is independent of the CPU clockrate, so there exists a 'divider' - the multiplier - where the CPU's working speed is N times that bus. In the mid-late 2000s, Intel and AMD moved the memory controller into the CPU itself and did away with the MCH - this was done mostly for power efficiency (by the late 2000s, MCH chips could use almost as much power as the CPU!) and cost reasons, but it did help performance in some situations as well (there is somewhat lower latency, after all). So the 'northbridge' or 'chipset' on modern systems is generally just an I/O complex to provide USB, SATA, etc, and the CPU connects directly to its RAM via the internal memory controller. Rather than re-write history, they just create a 'reference clock' on the motherboard that stands in for the FSB clock, and retained the conventional FSB * multiplier relationship. Most Intel chips (especially modern ones) use a 100MHz reference clock, and most AMD chips use a 200MHz reference clock ('higher' or 'lower' here means nothing in terms of performance - this isn't a data bus). Actual I/O between the CPU and the rest of the system is handled via HyperTransport or Infinity Fabric (AMD), QPI or DMI-Link (Intel), or direct PCI Express (both) on these systems.
** Then what does that '105W TDP' number mean? Well, historically TDP actually had a relationship to real (or at least approximated real) power draw and dissipation by the chip. However, since the Pentium 4 era it's basically just a made-up number to put the CPU in a 'class' for marketing purposes. Most Intel chips can draw (in real watts) around double (in some cases more than) their specified TDP, and AMD isn't far behind in many cases (AMD was a lot slower to embrace the funny numbers for TDP, but with Ryzen that has changed entirely). Does that mean your 10900k will always draw more than its rated TDP? No. The metric itself is kind of pointless these days due to the aforesaid dynamic clock/power management, because actual task energy is generally going to be a lot lower than the TDP figure would imply (unless you're doing some big AVX workload or somesuch). Also, how the 'Turbo' and C-states are set up can influence this heavily - a real example from last generation with the Intel 9900k was that many Z390 motherboards would ignore certain near-term power limits, which consistently saw those chips over 95W, but Supermicro's Z390 board enforced the 'reference' power limits and indeed kept the chip inside a 95W envelope. In a lot of applications the performance was identical to the 'wide open' approach, but stress tests and long-term benchmarks could demonstrate substantial performance differences. The takeaway here is that modern TDP numbers are basically worthless.
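If you're curious what the chip actually draws, as opposed to the TDP label, recent Intel chips expose RAPL energy counters through powercap (Intel-specific, the path may vary, and it usually needs root):
Code:
# cumulative package energy in microjoules; the delta over 1s ~= watts
cat /sys/class/powercap/intel-rapl/intel-rapl:0/energy_uj; sleep 1
cat /sys/class/powercap/intel-rapl/intel-rapl:0/energy_uj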
What about 'maximum clocks' and the 'up to'? With modern chips, advertised 'maximum clockspeeds' are essentially worthless marketing fluff. Prior to (if memory serves) Coffee Lake 1 and Ryzen, Intel/AMD would publish accurate information about clockspeed/core relationships, so for example on the Core i7 5775c we know that with 1 core under load the chip can boost up to 37x, and with 4 cores under load that drops to 33x. However, they no longer publish this information (likely because it would challenge the marketing narrative, who knows though). This gets even wackier with laptops/mobile chips, because the 'Processor Base Frequency' is usually some super low number (set where the hardware can actually survive) but the advertised clock is some 'in ideal conditions, lab environment, trained driver on a closed course' creation. For example, the Intel 1060G7 advertises a 'base frequency' of just 1GHz (hello Pentium III!) but a 'maximum frequency up to' 3.8GHz ('hey, that looks like my desktop!') - which do you figure will actually make it onto the product box?
Some more reading if you're curious:
https://www.anandtech.com/tag/turbo-boost
https://www.anandtech.com/show/2832/4
https://www.intel.com/content/www/us...echnology.html
https://www.intel.com/content/www/us...rocessors.html
https://www.amd.com/en/technologies/turbo-core (soooo many 'car analogies' in this one)
https://en.wikipedia.org/wiki/Cool%27n%27Quiet
https://www.phoronix.com/scan.php?pa...item=391&num=1
https://www.tomshardware.com/reviews...pu,1925-7.html
Also +1 to everything pingu_penguin and business_kid said. pingu_penguin's point about 'clockspeed isn't really a good performance metric' is absolutely true, especially in the era of multi-core/multi-threading processors - it is very hard to do a 'straight across' comparison between two chips these days. The days of 'easy' rough comparisons between chips based on clockspeed (and some basic guesstimate of 'relative weighting' thereof) are long gone.
Something else I thought to add - you can view the 'current' CPU frequency for all cores in the terminal with a kind of kludgy command:
Code:
watch -n1 'grep "MHz" /proc/cpuinfo'
This will show (every 1s at least) CPU frequency - in practice modern chips are adjusting their frequency on a much finer scale, but 1s is generally enough to see what's happening across all cores/threads. At 1s polling this matches CPU-Z's behavior on Windows, and is enough to 'catch' the chips going to Turbo (at least on all of the chips I've tried it on).
If you want to fake a (constant) 100% load to see the chip go to Turbo (assuming you have a newer CPU), you can run a simple CPU-bound busy loop and end it by killing the loop processes when you're done (you will have to start multiple instances for a multi-core; see the sketch below).
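A minimal sketch (any CPU-bound one-liner works; yes comes with coreutils):
Code:
yes > /dev/null &   # one busy loop; start one per core/thread
killall yes         # the 'end this' part - kills all the loops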
OR
run the 7zip benchmark, which seems to be an even 'heavier' load (both will show 100% CPU time, but the 7zip benchmark appears to use more complex instructions, so you will usually not see 'full' Turbo speeds). To run the 7zip benchmark you need p7zip-full installed, and then just run the bench command, optionally limited to N threads (where N = number of threads to spawn; the default is 1:1 with available hw threads) - see below. This is useful if you want to see how complex the relationship is between 'overall performance' and 'single core' performance (it can be especially dramatic on newer chips with >8 cores, as their 'max frequencies' are usually much higher when only 1-4 threads are running at 100%).
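Both invocations as a minimal sketch (the b command and the -mmt switch are documented at the link below):
Code:
7z b         # benchmark with the default 1:1 threads per hw thread
7z b -mmt1   # single-threaded run; substitute any N for 1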
More information on the 7zip benchmark:
https://sevenzip.osdn.jp/chm/cmdline/commands/bench.htm
Notes on these commands: I would not do either of these things on a laptop (or something with equivalently constrained cooling). The cooling probably is not going to cope with the CPU at 100% load for any length of time, and forcing the chip to 90°C+ is not a great idea when it can be avoided. As with any 'stress test' kind of command, it's also a good idea to have some sort of temperature monitor running and to keep an eye on it while working.
Last edited by obobskivich; 07-12-2020 at 06:58 AM.
07-12-2020, 07:32 AM | #7
LQ Guru
Registered: Mar 2016
Location: Harrow, UK
Distribution: LFS, AntiX, Slackware
Posts: 8,256
Original Poster
Woah! Too much too soon. Let's simplify a bit. The machine in question is old and has a single-core Via cpu, so anything about modern chips is irrelevant here. I accept that the "advertised" cpu speed given by cpuinfo is a maximum and that the actual clock speed could be less (or, apparently, briefly more if the cpu runs in "turbo" mode). Presumably it is this actual rate that conky shows. But how does that relate to the system clock in the kernel? Every clock needs a timekeeper. The timekeeper for the real-time clock on the motherboard is a piezoelectric crystal (like the one in my wristwatch). The system clock is said to use the cpu as its timekeeper. But how can it if the cpu frequency isn't constant?
07-12-2020, 09:22 AM | #8
Member
Registered: Jun 2020
Posts: 614
Quote:
Originally Posted by hazel
Woah! Too much too soon. Let's simplify a bit. The machine in question is old and has a single-core Via cpu, so anything about modern chips is irrelevant here. I accept that the "advertised" cpu speed given by cpuinfo is a maximum and that the actual clock speed could be less (or, apparently, briefly more if the cpu runs in "turbo" mode). Presumably it is this actual rate that conky shows. But how does that relate to the system clock in the kernel? Every clock needs a timekeeper. The timekeeper for the real-time clock on the motherboard is a piezoelectric crystal (like the one in my wristwatch). The system clock is said to use the cpu as its timekeeper. But how can it if the cpu frequency isn't constant?
I admittedly know a lot less about VIA CPUs - especially the newer ones (like the C7). From what I understand, they are capable of dynamic clocking, but it's probably 'primitive' compared to a modern Intel or AMD chip (mostly because they are so power-efficient to start with), and more like the older Core 2-era stuff that just has an 'up' and a 'down' state.
The system itself has a lot of internal clocks, which are generated by hardware clocks (actual chips or crystals), and those govern things like busses and the CPU reference clock. For 'timing' there is also a 'system timer' that is unrelated to the CPU clock - something like the APIC timer or HPET, which are primarily Intel-derived standards (AMD uses them too). VIA may be using some other implementation for the system timer, or may just use something more Intel-like (this is that weird kind of 'Intel as actual brand of product' vs 'Intel as type of machine' thing). Also, on initialization the kernel estimates delay on the computer in terms of IPS ( https://en.wikipedia.org/wiki/BogoMips) for 'timing' - I'm sure Windows NT and BSD/OS X do similar things on initialization, in addition to being aware of some 'system timer' in the machine.
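You can see the result of that calibration on your own machine with:
Code:
grep -i bogomips /proc/cpuinfo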
All of these timers require OS and application awareness - it isn't just a hardware feature, and it isn't just a software feature. Pure-software timers are also used in some places; for example, physics calculations in the Direct3D API are sometimes based on the timing of frame render calls, which is entirely software, and can result in things running 'too fast' or 'too slow' depending on the system's performance (which is not directly 'clockspeed', but related). From what I understand further, this is 'how it works' in a modern, multi-user, preemptive multitasking OS (like Linux, Windows NT, etc), but in a real-time OS it isn't as 'tidy' and you can end up with applications running 'too fast' or 'too slow' if the CPU speed changes significantly (e.g. this is a well-documented thing with MS-DOS and many old games).
So the 'easy answer' is probably: the CPU clock is not what drives the RTC, PIT/APIC, HPET, etc - those are explicitly separate, and may be accessed if the software is aware of (and compatible with) them; dynamic clocking of the CPU in response to ACPI is likewise the result of the OS being aware of the feature and providing control signals, which the hardware interprets within its constraints (which are potentially inchoately defined); and the OS also has to be vaguely aware of the IPS rate as some unitless measure of latency.
Some wikipedia sources that you may find interesting:
https://en.wikipedia.org/wiki/Intel_8253 (historic)
https://en.wikipedia.org/wiki/High_P...on_Event_Timer (modern)
Also while looking on Wikipedia for the above I found this article, which may be of interest:
https://en.wikipedia.org/wiki/Time_Stamp_Counter
That reminded me of another 'bug' as processors became more modern - the original AMD dual-cores would break a lot of applications that relied on the TSC, because the two 'cores' wouldn't produce identical, sequential outputs, which could lead to some weird results (in some 3D games it was kind of funny to watch - it would plod along at say 30 FPS, then suddenly drop to 1 FPS, then jump to 200 FPS, and then back to 30 FPS, as it went in and out of sync with the 'clock'). AMD did release a software patch for this.
That article also has some interesting nuggets relevant to your question:
"Recent Intel processors include a constant rate TSC (identified by the kern.timecounter.invariant_tsc sysctl on FreeBSD or by the "constant_tsc" flag in Linux's /proc/cpuinfo). With these processors, the TSC ticks at the processor's nominal frequency, regardless of the actual CPU clock frequency due to turbo or power saving states. Hence TSC ticks are counting the passage of time, not the number of CPU clock cycles elapsed."
It does not, however, define 'recent' more meaningfully, but that would be expected to produce similar performance to the APIC/HPET/etc methods where you have an external clock that isn't chained to the CPU's real clock frequency.
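On Linux, checking for the flag mentioned in that quote is a one-liner:
Code:
grep -q constant_tsc /proc/cpuinfo && echo "invariant TSC" || echo "no constant_tsc flag"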
The further you read into this, the more concrete business_kid's point becomes: this turns into a complete mess once you get into multi-core, multi-thread-per-core, dynamically clocked processors running with memory dividers into multi-data-rate memory and so on - which is (to a large extent) thankfully solved by 'external' timers (like HPET or APIC) that eliminate the CPU as the primary 'timing' source (if, of course, the software is set up for that).
07-12-2020, 09:38 AM | #9
LQ Guru
Registered: Mar 2016
Location: Harrow, UK
Distribution: LFS, AntiX, Slackware
Posts: 8,256
Original Poster
Still mostly over my head, but I think I can just about understand this:
Quote:
Originally Posted by obobskivich
That article also has some interesting nuggest to your question:
"Recent Intel processors include a constant rate TSC (identified by the kern.timecounter.invariant_tsc sysctl on FreeBSD or by the "constant_tsc" flag in Linux's /proc/cpuinfo). With these processors, the TSC ticks at the processor's nominal frequency, regardless of the actual CPU clock frequency due to turbo or power saving states. Hence TSC ticks are counting the passage of time, not the number of CPU clock cycles elapsed."
My Lenovo desktop (4-core Intel cpu) which I am using now has this flag set, so presumably it is that constant ticker that serves as zeitgeber for the system clock in the kernel. I haven't checked the laptop for it yet, but I will do so and report back.
ISTR that the kernel's system clock is more accurate than the hardware clock (which depends on a crystal) and that is why most distros reset the hardware clock while closing down.
Last edited by hazel; 07-12-2020 at 09:40 AM.
07-12-2020, 10:19 AM | #10
Member
Registered: Jun 2020
Posts: 614
Modern OS/hardware may also be using HPET - most newer Intel/AMD systems have this enabled. From what I understand this is largely 'abstracted' to/within software - as in, there are software API methods that will give a program a 'timer', but what they're actually invoking at the hardware level is up to their creators' whims (and/or some logic that polls the machine for various timers and chooses one based on some condition). So you have something like: hardware timer -> low-level code that exposes a 'timer' to software -> API calls that reference that -> applications that make API calls for a 'timer' and get whatever they get. I'm not aware of any explicit convention for which 'system timer' gets used (e.g. some applications, like chrony, will use the RTC, while others will use the 'system timer' (as HPET or APIC), and still others probably call the TSC), and I know that in light of all the recent vulnerabilities there has also been a move to fuzz timers for userland, to make the system harder to exploit (especially in web browsers).
By 'reset the hardware clock' if you mean adjusting the RTC, the time service (e.g. chrony) should be continuously and gradually making adjustments to the RTC in response to the NTP server's sync as it corrects for offset (not just at start-up/shut-down). But this isn't related to the 'system timer' or any CPU/bus clocking. This is for actual timekeeping. Some embedded systems run just fine with no RTC, and either get a time update over the network (and rely on system timer to work out the progression of time between updates from NTP) or (I imagine/assume) experience some kind of Groundhog Day-esque scenario of waking up in 1970 at every reset, and survive that just fine.
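For reference, that shutdown-time RTC write is typically just util-linux's hwclock being called from an init script (needs root):
Code:
hwclock --show      # read the RTC
hwclock --systohc   # copy the system clock into the RTC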
07-12-2020, 10:40 AM | #11
LQ Guru
Registered: Mar 2016
Location: Harrow, UK
Distribution: LFS, AntiX, Slackware
Posts: 8,256
Original Poster
Not all of us use an NTP daemon. I don't, because I don't have anything running that needs very accurate timing. I depend on the rtc to keep time more or less correctly between boots, and correct the displayed system time by hand if it's ahead of the wall clock by more than a minute or so. The rtc then gets corrected at shutdown.
My earlier HP desktop had a failed battery and I couldn't replace it because it was soldered on (or rusted in perhaps). It did indeed wake up in 1970 each time it booted, which was a nuisance but hardly a disaster. After all, the first PC that I ever used didn't even have an rtc and the first thing you had to do after switching it on was to set the time.
Of course I could have avoided all that by just not powering off and running the machine 24/7, but I didn't want to waste so much electricity on what was really a trivial problem.
07-12-2020, 11:37 AM | #12
Member
Registered: Jun 2020
Posts: 614
Quote:
Originally Posted by hazel
Not all of us use an NTP daemon. I don't, because I don't have anything running that needs very accurate timing. I depend on the rtc to keep time more or less correctly between boots, and correct the displayed system time by hand if it's ahead of the wall clock by more than a minute or so. The rtc then gets corrected at shutdown.
Fair enough - that was a bad assumption on my part.
Quote:
My earlier HP desktop had a failed battery and I couldn't replace it because it was soldered on (or rusted in perhaps). It did indeed wake up in 1970 each time it booted, which was a nuisance but hardly a disaster. After all, the first PC that I ever used didn't even have an rtc and the first thing you had to do after switching it on was to set the time.
Of course I could have avoided all that by just not powering off and running the machine 24/7, but I didn't want to waste so much electricity on what was really a trivial problem.
Reminds me of my HD-DVD player - poor thing hasn't had Internet access (because all of its associated services no longer exist) in probably 10 years, and just keeps re-living January 2006 over and over again. Still works just fine. 
07-12-2020, 11:47 AM | #13
LQ Guru
Registered: Mar 2016
Location: Harrow, UK
Distribution: LFS, AntiX, Slackware
Posts: 8,256
Original Poster
Can you tell me then why the system clock, which is somehow linked to the cpu frequency, is so much more accurate than the rtc, which uses a crystal?
EDIT: I just found this in dmesg
Code:
[ 0.006462] ACPI: HPET 0x00000000993A1E70 000038 (v01 LENOVO TC-03 00001000 AMI. 00000005)
[ 0.036722] x86/hpet: Will disable the HPET for this platform because it's not reliable
Quote:
Originally Posted by Wikipedia
In 2019 it was decided to blacklist HPET in newer Linux kernels when running on some Intel CPUs (Coffee Lake) because of its instability.
My cpu is a Bay Trail, not a Coffee Lake, but according to Phoronix, their HPETs are blacklisted too. So whatever the kernel is using for its system time, it's not that!
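One way to see what it *is* using: the kernel logs its clocksource decisions at boot and whenever it switches, so this should show what it settled on:
Code:
dmesg | grep -i clocksource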
Last edited by hazel; 07-12-2020 at 12:20 PM.
07-12-2020, 02:32 PM | #14
Member
Registered: Jun 2020
Posts: 614
Quote:
Originally Posted by hazel
Can you tell me then why the system clock, which is somehow linked to the cpu frequency, is so much more accurate than the rtc, which uses a crystal?
The system timer(s) is provided either by the TSC (or some other in-CPU feature, like ARM's udelay) on the CPU, or by a PIT or APIC/HPET, which is usually built into the CPU or the chipset (this is a bit blurry now, with SoCs being so common and CPUs having so many other things baked in) - it's an accurate counter that will experience less jitter than the RTC (in theory it's supposed to, anyway). I think this is on the level of ns (or maybe even fs) vs ms in terms of 'which is more accurate', but as you found below, there's always a difference between 'what it says on the box' and 'what it actually does' - which Linux all too frequently shines a light on.
Quote:
EDIT: I just found this in dmesg
Code:
[ 0.006462] ACPI: HPET 0x00000000993A1E70 000038 (v01 LENOVO TC-03 00001000 AMI. 00000005)
[ 0.036722] x86/hpet: Will disable the HPET for this platform because it's not reliable
My cpu is a Bay Trail, not a Coffee Lake, but according to Phoronix, their HPETs are blacklisted too. So whatever the kernel is using for its system time, it's not that!
I would not be surprised if there's an APIC timer or some other 'legacy' option available too, as part of backwards compatibility with non-HPET-compatible OS/applications (as in, your motherboard or CPU is providing an alternative device too). I think Bay Trail is a very different CPU as well - isn't that an Atom SoC? It may have more of the system's features built into it than a 'normal' CPU (this isn't really good/bad, it's just different).
On the machine I'm typing this on (which is AMD Vishera based) I can run dmesg | grep apic and find the APIC registering about .2s before the HPET timer is registered (dmesg | grep hpet). The RTC registers another .6s later as yet another device (dmesg | grep rtc), and sets the system clock (about 1.1s after APIC and HPET are registered). So it appears 'all three' can coexist just fine during initialization on the same system, and all three are provided for use on a modern(ish) machine. How to determine which one actually underlies a given function is a bit beyond my command-line-fu, especially if they can be blacklisted as you've found, because there would logically have to be some sort of 'fail-over' for those cases. Maybe someone else knows how to better query a running system as to what is being routed where after the devices are registered.
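Edit: it looks like the kernel does expose its active choice via sysfs after all, e.g.:
Code:
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
cat /sys/devices/system/clocksource/clocksource0/available_clocksource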
Last edited by obobskivich; 07-12-2020 at 02:36 PM.
07-13-2020, 03:49 AM | #15
LQ Guru
Registered: Jan 2006
Location: Ireland
Distribution: Slackware, Slarm64 & Android
Posts: 17,524
Personally I doubt that the rtc is inextricably linked to the cpu frequency.
The rtc has only one concern - time. The RTC frequency will probably work out to an exact multiple of 1 millisecond. Frequency on the cpu is varied, and any crystal is in close physical proximity: the crystal for the cpu will have tracks no longer than 15mm to the cpu, and capacitance will be matched in pF. Dragging lines carrying GHz off to wherever the rtc is would be a major source of instability. Likewise, the rtc needs its own crystal, with matched capacitance, within 15mm of the rtc. And you don't want 935.72435967159 crystal ticks per millisecond (or whatever they work with). If the cpu & rtc are phase-linked, it is likely to be at a much lower frequency. It's humbling to think that light travels 30cm in a nanosecond. That makes you realize how important pcb layout is.