Old 05-26-2022, 01:08 AM   #46
baumei
Member
 
Registered: Feb 2019
Location: USA; North Carolina
Distribution: Slackware 15.0 (replacing 14.2)
Posts: 365

Rep: Reputation: 124

Hi "h2-1",
Quote:
Originally Posted by h2-1 View Post
Certainly interested to see what pinxi can get, to me, really old hardware is the real test for inxi/pinxi.
Here is an oldie for you, running Slackware 8.1. :-)
Code:
user1@darkstar:$ uname -a
Linux darkstar 2.4.18 #4 Fri May 31 01:25:31 PDT 2002 i386 unknown
user1@darkstar:$
user1@darkstar:$ ./pinxi -V | grep ^pin
pinxi 3.3.16-14 (2022-05-25)
user1@darkstar:$
user1@darkstar:$ time ./pinxi --gpu -C
CPU:
  Info: single core model: 386 bits: 32 type: UP arch: N/A family: 3
    model-id: 0 microcode: N/A cache: N/A
  Speed: N/A min/max: N/A core: No per core speed data found. bogomips: 7
  Flags: N/A
  Vulnerabilities: No CPU vulnerability/bugs data available.
Graphics:
  Message: No device data found.
  Display: server: No display server data found. Headless machine?
    tty: 80x25
  Message: Unable to show GL data. Required tool glxinfo missing.

real    9m8.654s
user    1m27.910s
sys     7m37.750s
The BIOS is dated 1992/06/06, and I think the motherboard is about the same age. The processor is an AMD Am386DX-40. Main memory is 32MB with parity checking, on eight SIMMs. There is 128KB of cache on the motherboard, and I think it runs at full processor speed (40 MHz). It has a Trident VGA 256K DRAM video card (circa 1989).

Last edited by baumei; 05-28-2022 at 12:02 AM. Reason: To correct video card information.
 
Old 05-26-2022, 02:22 PM   #47
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 556

Original Poster
Rep: Reputation: 316
So slow... lol. But it's a reminder of a gross oversight: pinxi did not have AMD Family 3, the Am386 clan! Since that box worked so hard to generate this report, it seems unfair for it to be excluded from the CPU arch report. So that is corrected now.

I've added built dates for some CPUs, but collecting those is really tedious: no tables have the start-to-end production dates, and some sources often get them wrong. I did cover the main AMD ones, however.

For those interested, the Am386 was built on AMD's 900-1500 nm process node. It would be useful to distinguish different releases of Intel CPUs, like Skylake and Comet Lake, which shipped in 'refreshed' editions as well as the original, but I'm sticking with AMD CPUs, I think...

I've also corrected another oversight: I forgot that AMD spun off its chip fabs as GlobalFoundries in 2009, so I've updated the process nodes for those AMD generations to indicate GF, not AMD. GF failed to get 7nm working, at which point AMD moved to TSMC for Zen 2 and later CPUs.
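
For anyone curious how this sort of static matching can work, here is a minimal Perl sketch of a family-to-arch lookup in the spirit of the cpu_arch data. This is not pinxi's actual code, and the table contents are a tiny illustrative subset.
Code:
use strict;
use warnings;

# Hypothetical AMD family => [ microarch, process node, built years ] table.
# Values here are illustrative only, not pinxi's real data.
my %amd_family = (
    '3'  => [ 'Am386', 'AMD 900-1500nm', '1991-97' ],
    '4'  => [ 'Am486', 'AMD 700nm',      '1993-95' ],
    '25' => [ 'Zen 3', 'TSMC n7 (7nm)',  '2020+'   ],
);

sub cpu_arch {
    my ($family) = @_;
    my $row = $amd_family{$family} or return ('N/A') x 3;
    return @$row;
}

my ($arch, $process, $built) = cpu_arch('3');
print "arch: $arch process: $process built: $built\n";
# arch: Am386 process: AMD 900-1500nm built: 1991-97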

The industry is starting to hit the limits of physics at 5nm, however, and one suspects 3nm may be the last node using the optical lithography methods currently in use. The 3nm ASML machines will run 300 million USD, and may be barely economically viable because they are so slow; the 7nm machines are 150 million USD. It will be interesting to see how they work around this issue. If I understood the Asianometry guy right, the extreme-UV light they use already has a wavelength (13.5nm) larger than the feature sizes, so they have to do multiple passes to create the smaller patterns, which slows the process down and increases wafer costs. I believe 3nm is solved, but it is VERY expensive and very slow. I don't share Jim Keller's optimism about getting to sub-nm levels for chips, at least not without a fundamental change in how these things are made.

Last edited by h2-1; 05-26-2022 at 02:26 PM.
 
Old 05-26-2022, 04:54 PM   #48
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 556

Original Poster
Rep: Reputation: 316
Finally realized it, thanks to someone's comment in a thread somewhere: the i740 was the last standalone graphics card Intel tried before the current new ones. That's why it was its own category in TechPowerUp, and also why its product ID is actually bigger than the gen 1 product IDs. It all makes sense now. TPU really does a good job with their data, I have to say.

I had forgotten they made that; someone upthread mentioned having one, but it didn't register at the time.
 
Old 05-26-2022, 11:00 PM   #49
rokytnji
LQ Veteran
 
Registered: Mar 2008
Location: Waaaaay out West Texas
Distribution: antiX 23, MX 23
Posts: 7,112
Blog Entries: 21

Rep: Reputation: 3474
Code:
harry@biker:~
$ pinxi --gpu -C --zv
CPU:
  Info: model: Intel Core i5 M 520 bits: 64 type: MT MCP arch: Westmere
    process: Intel 32nm family: 6 model-id: 0x25 (37) stepping: 5
    microcode: 0x7
  Topology: cpus: 1x cores: 2 tpc: 2 threads: 4 smt: enabled cache:
    L1: 128 KiB desc: d-2x32 KiB; i-2x32 KiB L2: 512 KiB desc: 2x256 KiB
    L3: 3 MiB desc: 1x3 MiB
  Speed (MHz): avg: 1533 high: 1679 min/max: 1199/2400 boost: enabled
    scaling: driver: acpi-cpufreq governor: performance cores: 1: 1679 2: 1428
    3: 1599 4: 1428 bogomips: 19152
  Flags: ht lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx
  Vulnerabilities: <filter>
Graphics:
  Device-1: Intel Core Processor Integrated Graphics vendor: Dell
    driver: i915 v: kernel arch: Gen5.75 process: Intel 45nm built: 2010 ports:
    active: eDP-1 empty: DP-1, DP-2, HDMI-A-1, HDMI-A-2, VGA-1
    bus-ID: 00:02.0 chip-ID: 8086:0046 class-ID: 0300
  Display: x11 server: X.Org v: 1.20.11 driver: X: loaded: intel gpu: i915
    display-ID: :0.0 screens: 1
  Screen-1: 0 s-res: 1366x768 s-dpi: 96 s-size: 361x203mm (14.21x7.99")
    s-diag: 414mm (16.31")
  Monitor-1: eDP-1 mapped: eDP1 model: Seiko Epson 0x5441 built: 2010
    res: 1366x768 hz: 60 dpi: 118 gamma: 1.2 size: 293x165mm (11.54x6.5")
    diag: 336mm (13.2") ratio: 16:9 modes: 1366x768
  OpenGL: renderer: Mesa DRI Intel HD Graphics (ILK) v: 2.1 Mesa 20.3.5
    direct render: Yes
Code:
harry@biker:~
$ pinxi -b
System:
  Host: biker Kernel: 5.10.57-antix.1-amd64-smp arch: x86_64 bits: 64
    Desktop: IceWM v: 2.9.7
    Distro: antiX-21_x64-full Grup Yorum 31 October 2021
Machine:
  Type: Laptop System: Dell product: Latitude E4310 v: 0001
    serial: <superuser required>
  Mobo: Dell model: 0T6M8G v: A01 serial: <superuser required> BIOS: Dell
    v: A03 date: 07/08/2010
Battery:
  ID-1: BAT0 charge: 38.6 Wh (100.0%) condition: 38.6/48.8 Wh (79.0%)
CPU:
  Info: dual core Intel Core i5 M 520 [MT MCP] speed (MHz): avg: 1706
    min/max: 1199/2400
Graphics:
  Device-1: Intel Core Processor Integrated Graphics driver: i915 v: kernel
  Display: x11 server: X.Org v: 1.20.11 driver: X: loaded: intel
    gpu: i915 resolution: 1366x768~60Hz
  OpenGL: renderer: Mesa DRI Intel HD Graphics (ILK) v: 2.1 Mesa 20.3.5
Network:
  Device-1: Intel 82577LM Gigabit Network driver: e1000e
  Device-2: Intel Centrino Advanced-N 6200 driver: iwlwifi
Drives:
  Local Storage: total: 55.9 GiB used: 30.49 GiB (54.5%)
Info:
  Processes: 161 Uptime: 1h 42m Memory: 7.7 GiB used: 1.11 GiB (14.4%)
  Shell: Bash pinxi: 3.3.16-15
 
Old 05-27-2022, 02:02 PM   #50
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 556

Original Poster
Rep: Reputation: 316
rokytnji, an alternate syntax for Intel Core 1! That should be corrected now in pinxi 3.3.16-17.

Intel Core CPUs were supposed to get their Core version detected, but your CPU string was a variant I had not seen.

Yours is Core 1; normally it's something like Core i3-3565M, indicating in that case Core 3 Mobile. Hopefully this type of variation only occurred on gen 1, before they locked the syntax down, but we'll see.
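
Purely for illustration, a hypothetical Perl sketch of pulling the series, generation, and mobile flag out of those two model-string syntaxes; this is not pinxi's actual matching logic.
Code:
use strict;
use warnings;

# Two real-world syntaxes: gen 1 'i5 M 520' vs the later 'i3-3565M' style.
for my $model ('Intel Core i5 M 520', 'Intel Core i3-3565M') {
    if ($model =~ /\bCore\s+i([357])[\s-]+(M)?\s*(\d{3,4})(M)?/) {
        my $series = $1;
        my $mobile = $2 || $4 || '';   # 'M' can precede or trail the number
        my $number = $3;
        # 3-digit model numbers are gen 1; otherwise the first digit is the gen
        my $gen = length($number) == 3 ? 1 : substr($number, 0, 1);
        printf "%s => Core i%s gen %s%s\n",
            $model, $series, $gen, $mobile ? ' mobile' : '';
    }
}
# Intel Core i5 M 520 => Core i5 gen 1 mobile
# Intel Core i3-3565M => Core i3 gen 3 mobile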

Thanks for finding a variant that broke the assumptions.

I gritted my teeth and added Intel years. There's some overlap in some cases, because some sites list the introduction year rather than the production year, so sometimes the built: value might be a year off. Finding end-of-production dates is difficult and also inconsistent, but at least this is a base, and if anyone ever cares, they can help dig in and make these more accurate. I'd say this is enough.

Trying to determine Intel CPU generations is really complicated, because the same microarch name, like Coffee Lake or Comet Lake, can span two generations.

Last edited by h2-1; 05-27-2022 at 02:03 PM.
 
Old 05-30-2022, 10:53 PM   #51
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 556

Original Poster
Rep: Reputation: 316
Last bits of research, arcana piled on arcana... Yes, for the first time ever, on their new Arc GPU architecture, Intel is using TSMC's n6 process node, not Intel's own, which can't hit n6 yet, despite what TechPowerUp noted incorrectly, lol. There are no canonical docs; it's all just jumbling together different sources.

But many features now have data, except where I don't think the sources are clear enough or reliable enough to be trusted, as with CPU built year, etc.

I'm going to let this percolate a few days then put it out as 3.3.17 and call it good.

Note that I've open-sourced, and made easier to work on, all the static and regex lookup tables pinxi uses internally for these data types, so they can be worked on without touching inxi/pinxi at all. This will at least help me, if nobody else, long term, since these are hard features to maintain over time: it's not dynamic live data, it's manually assembled, strictly empirical matching.

But this thread turned up good corner cases; most posted examples showed places where guesses had been wrong, slightly off, or lacking in some way. Good samples, thanks.

The ideal here is that since EOL CPUs and GPUs are easy to fix or complete, after a while the data will be very good and accurate, and the only slightly fuzzy stuff will be the latest-generation items, but the tools make that a lot easier to work on (disk_vendors.pl, cpu_arch.pl, gpu_ids.pl, gpu_raw.pl, ram_vendors.pl). The tools either contain their own rules, or the data files contain the test data required to produce the updated rules.
 
Old 06-01-2022, 01:09 PM   #52
baumei
Member
 
Registered: Feb 2019
Location: USA; North Carolina
Distribution: Slackware 15.0 (replacing 14.2)
Posts: 365

Rep: Reputation: 124
I found a clear use-case for inxi/pinxi.

A few days ago I started a Linode virtual-server instance (with Linode's Slackware 15.0 image). I was looking around at the configuration Linode had chosen, and among other things I saw they had installed the packages related to a sound card. This made me wonder what on earth sort of virtual server Linode had created, where it was reasonable to install sound-card stuff.

So, I ran pinxi. In this virtual-server instance, Linode is not simulating a sound card, so in my opinion there is no reason for the sound-card software to be installed.
 
Old 06-01-2022, 02:23 PM   #53
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 556

Original Poster
Rep: Reputation: 316
baumei, yes, remote data was always a core use-case for inxi. I was a sysadmin during most of the initial phases of inxi development and used that feature extensively. I still do some remote stuff, and find it very useful every time I'm on a remote box to see what is going on there. While sysadmins don't tend to file many issues or be that public, I believe they are still a key part of the inxi user base. I can tell because every now and then I'll get an issue report about inxi misbehaving when run under Chef or Ansible or something similar, which is not something a standard user would even know exists, let alone use.

In fact, I was just testing something on a remote system. Testing on a small network of remote servers is usually part of the release process, and of the testing/development process, for all new features.

I just found and fixed a series of runlevel/target issues based on a small bug report I got yesterday; I believe it was one of the oldest bugs in current inxi.

Sometimes the information is something I wouldn't have thought to check, like a remote server's operating system having been upgraded to the next version, things like that. Or storage/RAM use, etc.

Last edited by h2-1; 06-01-2022 at 03:08 PM.
 
1 member found this post helpful.
Old 06-02-2022, 01:09 PM   #54
rokytnji
LQ Veteran
 
Registered: Mar 2008
Location: Waaaaay out West Texas
Distribution: antiX 23, MX 23
Posts: 7,112
Blog Entries: 21

Rep: Reputation: 3474
Just went round and round on another forum with an nvidia install thread.

Good thing inxi -Fxz showed me he is running AMD Radeon stuff.

Saved some cursing and headaches. Thanks.
 
1 member found this post helpful.
Old 06-05-2022, 05:18 PM   #55
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 556

Original Poster
Rep: Reputation: 316
For the latest pinxi, 3.3.16-28 (to become inxi 3.3.17), I decided, probably foolishly, to more fully convert the pinxi internals to using array/hash refs (not always, but mostly), which was one of the last low-hanging-fruit items.

I have been testing this, and as far as I can tell, for a very large file or command output it can save around 3ms per referenced (versus copied) hash or array. That's only for big arrays/hashes, though; for small ones, there may be a net overhead to creating a reference versus simply copying a few elements. That's what my profiler (Perl's Devel::NYTProf) suggests, though there are gray zones when getting down to low single-digit ms times. I think I saw a pattern when comparing the two: the few places pinxi seems slower are where I return a ref to a small array or hash rather than a direct copy, but the difference is so small I can't tell if it's real.

I think it worked, because inxi 3.3.16 and pinxi 3.3.16-28 are now running at almost identical speeds, which suggests that despite adding about 700+ lines to pinxi, I managed to squeeze out just enough performance gains to make the two run at basically the same speed.

This optimization lemon is getting squeezed dry, though; there are only a few places left where I can squeak out a gain of 3ms or so, which is about what I see when passing a large array by reference versus by copy, and there are not many files or command outputs that big. I've done these tests over the years, much more in early pinxi development, to see which Perl tricks were actually fastest versus the claims about which to use. Short answer: don't trust established Perl wisdom, test it yourself.
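
For anyone who wants to test such claims themselves, a rough sketch of measuring the copy-versus-reference tradeoff with the core Benchmark module; the array size and sub names are arbitrary, and your numbers will differ.
Code:
use strict;
use warnings;
use Benchmark qw(cmpthese);

# stand-in for a large file slurp or command output
my @big = map { "line $_ of some command output" } 1 .. 100_000;

sub by_copy { my @data = @_;   return scalar @data;  }  # copies every element
sub by_ref  { my ($data) = @_; return scalar @$data; }  # passes one scalar ref

cmpthese(-2, {
    copy => sub { by_copy(@big) },
    ref  => sub { by_ref(\@big) },
});
# On a big array 'ref' wins clearly; with only a few elements the ref
# bookkeeping can cost more than the copy, as noted above.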

Note that the single biggest bottlenecks are /sys live data, namely CPU real speeds and drive temps; both have a delay, because they average the real temp or speed over some milliseconds to get a decent reading. Some of you may recall this from the CPU refactor a while ago, where getting live /sys CPU core speeds turned out to be one of the single biggest slowdowns (it still is, by the way, and drivetemp is a similar slowdown per drive). The other big one, which I can't do anything to improve, is subshell commands, which run only as fast as the tool being run; all I can do there is optimize how the output is handled, but that's a very small part of the overall system time, relatively speaking.

A real oddity which I cannot figure out: new pinxi is running about 1,200 fewer subroutine calls than inxi for -Fa. I wish I knew what I did there, but it was obviously something that really matters.

I ran these carefully, to avoid data buffering, by first running inxi -Fa / pinxi -Fa, then waiting 60 seconds, then running the same command under NYTProf, and waiting 60 seconds between those two runs as well.

Quote:
Profile of inxi for 1.32s (of 1.36s), executing 111186 statements and 61935 subroutine calls in 25 source files and 16 string evals.

Profile of pinxi for 1.34s (of 1.38s), executing 112108 statements and 60735 subroutine calls in 25 source files and 16 string evals.
I really wish I knew where those 1,200 subroutine calls were removed, but I have no idea; I've been optimizing pinxi for a few weeks now, and I don't know which change did it. Keep in mind pinxi also has the full new AMD/Intel GPU data and output features, a significant fix to runlevels and default runlevel (which was completely broken for systemd), and the new CPU data items, built/process.

This will turn out to be the bug-fix/refactor release I'd thought about doing one of these times, so I guess this is it. I now roughly know the only places where I can try for real gains. I even used precompiled regexes, via Perl's qr(...), on heavily looped regexes, like when reading the Xorg.0.log file, but I don't see many other places to optimize beyond returning array refs for some large command outputs or files I might have missed, and even if I find 5-10 more, that's no more than a 30ms gain at best.
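
A minimal sketch of that qr// idea, hoisting a compiled regex out of a hot loop; the log line and pattern are simplified stand-ins, not the real Xorg.0.log parser.
Code:
use strict;
use warnings;

# stand-in for a large Xorg.0.log read into memory
my @log = ('[    10.056] (II) intel(0): Modeline "1366x768"x60.0') x 50_000;

# compile the pattern once, outside the loop
my $modeline = qr/^\[\s*[\d.]+\]\s+\(II\)\s+\w+\(\d+\):\s+Modeline/;

my $hits = 0;
for my $line (@log) {
    $hits++ if $line =~ $modeline;
}
print "matched $hits lines\n";  # matched 50000 lines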

Last edited by h2-1; 06-05-2022 at 05:36 PM.
 
1 member found this post helpful.
Old 06-06-2022, 06:07 AM   #56
zeebra
Senior Member
 
Registered: Dec 2011
Distribution: Slackware
Posts: 1,830
Blog Entries: 17

Rep: Reputation: 638
Quote:
Originally Posted by h2-1 View Post
In other words, should they so chose, distros or others can use these tools and data sources to generate their own matching lists of product ids. ids.pl has output options that let you tweak how the output is sent to screen to fit various different ways you might want it, but it's of course designed to default to how pinxi wants it.
Howdy. I was kind of just skimming your posts (sorry), and it's quite interesting. Have you ever heard of ldetect?

I was disassembling some core parts of Mageia some time ago, and part of that was something called 'ldetect', with something called 'ldetect-lst'. I'm not 100% sure who maintains these things, but it seems that they (Mageia) do it alone, or mainly. It was a while ago that I was doing this, so I can't remember accurately. The data (ldetect-lst) might have come from cooperative sources; I vaguely remember reading some scripts that mentioned a website with hardware-info collection or some such.

Anyway, it seems to be along the same path as what you are doing, and I came to think of the project while skimming your posts. I was rebuilding the stuff when I was playing around with it, and it seems a lot of the Mageia Perl core uses/depends on ldetect.

One of the things about Mageia that strikes me is their excellent, above-average hardware support, and I think some of that can be attributed to ldetect. That's part of the reason why, when I hear newbies say 'hardware xyz doesn't work on distro xyz', I suggest they give Mageia a try.

Anyway, perhaps you would be interested in contributing, or perhaps some efforts could be shared, or perhaps you could use parts of their project/data. It's all data tables of a kind. I still have some of those things left over here, and looking at it, it doesn't seem to have a specific table for graphics cards, at least at a quick glance.
I'd suggest getting in touch with one of the Mageia developers and having a chat about it, at least. Sources for ldetect and ldetect-lst can be found on their website, rpm-source kind of things, if you want to inspect it and find the dev info etc.
 
Old 06-06-2022, 02:17 PM   #57
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 556

Original Poster
Rep: Reputation: 316
I can't tell if the tool was made by PCLinuxOS or Mandriva, one of the two; both were considered tops in user-friendliness in their day, if I remember right.

I'm going to answer this with many words, because it's worth looking closely at such things at least once, to make sure I am not missing something.

Quote:
part of that was something called "ldetect" with something called "ldetect-lst". I'm not 100% sure who maintains these things, but it may seem that they(Mageia) do it alone/mainly.
https://repology.org/project/ldetect/versions

Short TLDR: these are just lists they took from other places. Some I already use, because they are the primary resources everyone uses, like pci.ids, which lspci also uses; but I use them to generate data for inxi's GPU features indirectly, not as part of inxi, since that would be bloat that does nothing for users and can't be used raw anyway. Others I don't need, because other tools already do the work for inxi, and I'd rather leverage their work; I don't need inxi to reimplement lspci and lsusb, although I could, at least in part, and have done in various places where they don't do what inxi needs. The only list that is useful to me, they did not credit the source for, and they are not the source, I guarantee it. The manually generated lists are useless because they are manual, designed only for their tool, and always out of date; worse, they fail to use the original Mandriva EDID tool from what I can tell. I didn't read their entire codebase, of course, but judging from some of the lists, they forgot about that tool.

The project was taken over by Mageia, and it's getting patches currently, so it's not dead. But some VERY old logic, lol; the first file I opened, Cards+, is from 2010.

An amusing note: I had never seen this until today, but I decided to call my raw-data storage directory 'lists', while they called theirs 'lst'. One questions the wisdom of trying to save two characters in a word you will almost never type. A small red flag, to me.

It uses the same pci.ids file I'm using in inxi's product-ID-generating backend tools, and that lspci uses, so there's nothing interesting there; best to get it directly from pci-ids.ucw.cz (same version, 2022-05-18). There are some old and very out-of-date other data files, like pcitable.x86_64; nothing interesting there either. Again, get the data from Nvidia directly, either via their JSON data file for PCI IDs or from their tables. I find their HTML tables easier to work with, so I use those as the data source for my backend tools; there is no better source for those than Nvidia, so why use anything else?

Checking the code and data in the git repo:
http://gitweb.mageia.org/software/ldetect-lst/tree/lst

There are some more out-of-date manual matches, which should, by the way, be using live EDID data, but instead use some manually generated data, dmitable for example. Again, why? Use dmidecode data or /sys data, which is better, and if available, use the /sys EDID binary data directly, which is better still.

Another list comes from a site, linux-usb.org/usb-ids.html, but again, nothing interesting there. If I want to add a list of USB vendors, which isn't the worst idea in the world, I can just create a tool based on my gpu_ids.pl tool and get that data directly into inxi; nothing is gained by going through them, our methods aren't compatible, and it's just primary-source data anyway. However, the USB IDs are an idea. I already have some matching tables internally, for disk vendors, RAM vendors, CPU data, and GPU data, and as long as I make the tools well and the data sources are primary and reliable, adding stuff like this just takes LoC and makes me maintain it. The USB vendors do have some appeal, because inxi's USB data actually is weak in that area. But this data is basically what lsusb uses to generate the vendor/product name strings, so it doesn't add anything unless I wanted to dump lsusb, which I only use to get this data; letting the lsusb guys do the work means I don't have to, and similarly with the lspci guys. They already do this stuff, so no need to rebuild the wheel. I try to do only what nobody, or almost nobody, else does; in other words, value added.

I am tempted; it would be easy to generate these IDs since I just wrote the logic, and this has always been a weak spot in inxi's USB data. Since it's a public data source, it's easy to keep updated. But lsusb already has this data (all they are doing is combining the vendor : product ID matches), so there's no need to do that in inxi anyway; nothing is added except getting rid of a recommended package. Note that inxi doesn't need lsusb to generate USB data, it can get it all from /sys now, and most of it is from /sys; the only parts it needs lsusb for are the vendor name and product string values. Nothing else.
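
To make the usb.ids idea concrete, here is a small sketch of parsing that public list into vendor and product lookups. The file path is an assumption (Debian-family systems tend to ship it at /usr/share/misc/usb.ids), and this is not how inxi or lsusb actually implement it.
Code:
use strict;
use warnings;

# usb.ids layout: '1d6b  Linux Foundation' vendor lines, tab-indented products
my (%vendor, %product, $cur);
open my $fh, '<', '/usr/share/misc/usb.ids' or die "usb.ids: $!";
while (my $line = <$fh>) {
    next if $line =~ /^#/ or $line =~ /^\s*$/;
    last if $line =~ /^C\s/;    # device class tables follow the ID tables
    if ($line =~ /^([0-9a-f]{4})\s+(.+)/) {
        $cur = $1;
        $vendor{$cur} = $2;
    }
    elsif ($cur and $line =~ /^\t([0-9a-f]{4})\s+(.+)/) {
        $product{"$cur:$1"} = $2;
    }
}
close $fh;

print $vendor{'1d6b'}, "\n";        # Linux Foundation
print $product{'1d6b:0002'}, "\n";  # 2.0 root hub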

Now, with this said, there is an uncredited file, PnP IDs, that actually is useful to me: for generating monitor vendors, for which I also have manual matching tables, though I only use a subset, trying to carry only the monitor vendors that will actually show up for inxi users (an embedded industrial device monitor isn't a useful ID for inxi users, in other words). I want to find the source for that file; I looked everywhere and never could find a single one, though a global list is of limited use because I only need maybe 150-200 of the 2000 entries for monitors, and nothing else uses that data. But I added the list to my 'inxi-perl/tools/lists' anyway, because it's good to have. Note that inxi outputs the raw three-letter vendor abbreviation if it fails to make a match, so this table is useful for finding matches and updating inxi, but not for use in inxi directly, since it's too big. Ten to one, if I go through the names I researched and took notes on, I will find errors in their master list; it's a certainty.

But I'd rather know where they got that list from; 100% for sure they did not generate it themselves.

Also, let me add a very real caveat: these lists are sometimes wrong, as in actually incorrect, and you can't ever fully trust the sources. Sometimes the company was sold and the old match has changed; sometimes it's just wrong. This is what I found researching these vendor abbreviations: no list I found online was always right, and trusting them blindly means you will be wrong sometimes. This is not the case with vendor : product IDs, which are generally unique, reliable identifiers.

Then there are some fairly horrifying manually assembled monitor matching files, which are a dead end; you can't rely on those, with too many new products released too often. That should come from EDID data directly, not from manual static data.
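
As an illustration of the abbreviation matching and raw fallback just described, a hypothetical sketch; the table entries are a tiny made-up subset, not inxi's real list, and as the caveat above says, mappings like these can go stale.
Code:
use strict;
use warnings;

# EDID/PnP 3-letter abbreviation => display vendor; illustrative subset only
my %pnp_vendor = (
    AUO => 'AU Optronics',
    BOE => 'BOE',
    LGD => 'LG Display',
    SEC => 'Samsung',   # caveat: assignments shift when companies are sold
);

sub monitor_vendor {
    my ($abbr) = @_;
    $abbr = uc $abbr;
    # fall back to the raw 3-letter code when there is no match
    return exists $pnp_vendor{$abbr} ? $pnp_vendor{$abbr} : $abbr;
}

print monitor_vendor('auo'), "\n";  # AU Optronics
print monitor_vendor('XYZ'), "\n";  # XYZ (no match, raw abbreviation)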

=================================

It's funny you mention Mageia, whose source I had never looked at. When I ended up forking the Perl module Parse::EDID, due to zero maintenance and incomplete features, I noticed it had originally been written by Mandriva, and written in a way that clearly indicated it was meant to solve only a Mandriva-specific set of problems: they had left a few well-documented, fairly trivial-to-complete features unfinished, with only stubs in place. The Perl was not good, and used some really silly things that simply added execution time with zero benefit; actually negative benefit, because it obfuscated something that is really clear and obvious once you remove the extra stuff.

It had largely never been touched since then; 2010 was the last-modified date, but I think that was only a bug fix. It was created in 2005, and probably not much more was done with it after that. I would estimate its real-world live user base, now that it's integrated into inxi, is several orders of magnitude bigger than the original buggy Perl module's.

It's also worth noting that the new Mageia stuff appears not to use that module at all; they are trying to work around not having it, which is weird, since Mandriva created it in the first place, lol.

I contacted the alleged module maintainer to ask if he was interested in an upgraded version of the Perl module (I even made a fully drop-in, feature-identical version, with extras), but got no reply, which is whatever; that module in Perl is dead, and actually not very well designed. The forked version in pinxi/inxi is better, though slightly handicapped by trying to retain drop-in compatibility with the Perl module, but I'm going to dump all that in the next update I do to that code, since it's not good code or logic. The core method was good, it just wasn't well implemented, in my opinion; it was clearly a quick hack meant to solve a specific issue for Mandriva, and then it went into the CPAN modules and was never touched again.

So that's Mandriva > Mageia code. Faster to fork, imo. As an aside, this is the first time I have ever used a substantial block of logic from any other project in inxi/pinxi, and I really hesitated, until I had worked with the stuff long enough to realize it wasn't very good, and there was no reason to pretend it was. It still has an entire block of data that I'm not even sure is ever used by anything; I suspect it was something they made for Mandriva specifically but did not document, because I was not able to trigger output from it, but I left it in place to be on the safe side, since the code was not well commented, and wasn't documented at all, zero, nothing.

Not something that inspires me to then seek out more, lol.

I'm sure other projects have done bits and pieces of the hardware stuff over the years, just as I know other projects have used bits and pieces of inxi (the screenfetch author used to hang out on the #smxi IRC channel, for example, and his desktop logic all came from inxi back when inxi was Bash). I'm also fairly sure that some projects take the core logic and use it in their stuff without crediting inxi. But overall, I find little to emulate in other system-info projects; now and then, rarely, I look at a file or two just to see how they did something I am finishing up, but I never go 'oh, darn it, they did it better', lol. I almost always find their stuff incomplete or inadequate, and I'm better off doing it myself, testing it myself, collating and collecting the core data myself, etc.

I won't name specific projects, but I have now and then looked at code to see if I missed anything, and am always struck that I found significant issues during testing that they had missed, or that their code failed to account for because its design was not flexible enough. Again, no names, lol, but I see this as the rule, not the exception.

I know at least one distro has used, and may still be using, the inxi partition tool in their installer, because it's extremely robust and very powerful, with many layers of double checks and cross checks to ensure that only a truly current mounted partition appears. But most of these folks don't talk to me, so it's just stuff I hear randomly. I know one big Linux gaming company uses inxi to debug issues, or has used it, because they have talked to me now and then. Given the boost the graphics features have gotten now, I assume that type of use will continue.

With all this said, and understanding that working with software or people I may not like or agree with is called 'work', and I get paid to do work (a lot), I have almost no interest in trying to deal with other FOSS projects; I'm way, way, way past that point in my life. I only do stuff I like for free anymore, and if it's not fun, I stop doing it. Dealing with distro politics and bruised egos etc. is definitely not fun, and though I have done it in the past, I will never do it again, with a few exceptions where I won't say never, like Slackware and OpenBSD, because both are very well run and really just do things in a very cool way, at least to me. But most other projects, particularly anything using the consensus model or, worse, corporate for-profit driven development (imo otherwise known as a very time-consuming way to generate mediocre code in most cases), I happily leave to their own devices. I'm happy if they like inxi, or if their users like it, but that's as far into it as I want to get on the distro-organizational level. The individual level is always just what it is: good, bad, benefit, harm, case by case.

I am, however, completely in favor of highly skilled, good people learning the inxi codebase and contributing. Sadly, it just keeps getting deeper into advanced Perl; despite my initial desire to keep the Perl very basic and user-friendly, that type of Perl is very inefficient and slow, as all my optimization testing has shown me time after time, so it's all going away, making the code harder for newbies to work on. That bums me out a bit, to be honest, because I really intended for it not to happen, but it created a conflict between good Perl and Perl that is easy for non-Perl people, which I could not resolve, so I had to drop the user-friendly part; this last upgrade basically dumps most of the last bits of it internally.

My conclusion re notions of interacting with that mageia project: Not Fun. Not Interesting. Not Rewarding. No benefit to inxi users. Pain.

Having already done the distro thing way too much, I decided years ago that I was not going to get attached to any specific distro, though I will happily deal with good people from distros; they are what make inxi work and continue to work. Without their efforts and energy, inxi dies.

Last edited by h2-1; 06-06-2022 at 04:20 PM.
 
2 members found this post helpful.
Old 06-06-2022, 02:45 PM   #58
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 556

Original Poster
Rep: Reputation: 316
Just as an aside: I've always published a ton of raw data. Very few people know it exists, and unlike the relatively undocumented example from the Mageia tool, mine has a ton of information and explanations about why stuff is the way it is, data samples, variation examples, etc. inxi/pinxi internally also have a ton of data samples for very arcane stuff that you might think is a mistake, with samples showing why it's needed; otherwise, who could remember all those endless exceptions and alternate syntaxes? There are also tons of data debuggers with raw-data injection; those are not open, though I may one day open them, but only if I am sure they don't contain any private user data (I don't think they do). The docs have more stuff.

This is a feature very few people are aware of, because it's in the inxi-perl branch, in the docs, tools, and modules directories: lots of raw data, URLs, resources, huge lists of things that nobody will ever remember and that took ages to research. If you want a list of GPU microarchitecture names, inxi-data.txt has it, along with relevant resource links, etc.

https://smxi.org has more end-user-oriented data as well, along with some generally out-of-date developer stuff.

There's so much data that I doubt anyone other than me will ever dig through it all, but it's also not something I will spend time on beyond tidying it up once in a blue moon. It's mostly raw research data, and not designed to be user-friendly, because I don't get paid to do that and it's for me, but it's all open and free data for anyone who wants it. You want a short list of commands to run a huge set of window managers? It's in there; that's part of my WM testing, for example.

One very significant non-open-source part is the set of VMs I use, which are just too big to upload anywhere, but are key, since I have VMs that let you test various features: some with every WM/desktop known to humanity in 2016, 2020, etc., others with the most arcane logical-volume setups, designed to develop, test, and debug the most absurd levels of logical volumes/RAID, and time-consuming to set up. Those will never be opened, which is a drag; I wish I could. About 780 GiB and counting currently, and that includes old install ISOs etc.: Debian Sarge 3.1 and Debian Etch 4.0, to test ultra-legacy systems and Perl compatibility, absolutely critical. I just found a bug in my new code this way: I had used a newer method that hadn't been well documented by Perl, so I didn't know when it had been added, but for sure after Perl 5.008.
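
As an example of the kind of legacy trap meant here: the defined-or operator (//) only exists from Perl 5.10, so code that must still run on a 5.8-era system (Debian Etch shipped Perl 5.8) needs the long form. A sketch, with an assumed config hash:
Code:
use strict;
use warnings;

printf "running under perl %vd\n", $^V;

my %cfg = (width => undef);

# Perl 5.10+ only:
# my $width = $cfg{width} // 80;

# safe on Perl 5.8:
my $width = defined $cfg{width} ? $cfg{width} : 80;

print "width: $width\n";  # width: 80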

So no, inxi is like an iceberg: there is a huge amount under the water nobody sees, and much nobody will ever see, but with patience you can recreate most of it.

So yeah, to keep this project fun, I have to keep it interesting, and to keep it interesting, I have to make it not suck for myself, which is rule one. That means I have to make the stuff do what I want, the way I want, in terms of the coding, development, and debugging tools. Feature requests, etc., often come from other people and distros, stuff I might not care about but which users like; ignore that, and the project dies. And those have sometimes turned into my very favorite features, by the way. Same in acxi: some of my favorite features in it came by request of users, and it took me a while to realize how valuable they are. -y1 in inxi is a good example; that was done for a guy who provides a lot of support and ideas, and it is now one of my favorite output options.

Doing stuff in a way that is not fun or not rewarding to me is called work, not fun. I will happily do work for pay, note, but not for free. I will also not work for free for corporations, unless doing so helps real inxi users. This is why inxi does not support OSX, for example, beyond not crashing when it runs there. The notion of donating my free time to a corporation with close to, if not past, a one-trillion-dollar market cap is one of the most idiotic ideas I've ever seen or heard; I have no idea how these corporations con people into doing that for them.

I don't normally talk about the backend stuff, but given that I just freed some of the tools and polished them so they can be viewed in public, it's been on my mind more. The more of the inxi backend that is open, the more likely someone can work on it correctly, using all of its backend debugging power.

Last edited by h2-1; 06-06-2022 at 03:21 PM.
 
1 member found this post helpful.
Old 06-06-2022, 05:23 PM   #59
zeebra
Senior Member
 
Registered: Dec 2011
Distribution: Slackware
Posts: 1,830
Blog Entries: 17

Rep: Reputation: 638
Quote:
Originally Posted by h2-1 View Post
But most other projects, particularly anything that uses the model of consensus, or worse, corporate for profit, driven development (imo otherwise known as a very time consuming way to generate mediocre code in most cases), I happily leave to their own devices, I'm happy if they like inxi, or if their users like it, but that's as far into it I want to get on the distro-organizational level. The individual level is always just what it is, good, bad, benefit, harm, case by case always.
As a non-expert, I have observed some of those same things, actually, and it does worry me with regard to the future of GNU/Linux.

Quote:
Originally Posted by h2-1 View Post
My conclusion re notions of interacting with that mageia project: Not Fun. Not Interesting. Not Rewarding. No benefit to inxi users. Pain.
I'm not sure about the devs, but the Mageia people and community I have had contact with, or seen, are actually very friendly and pleasant. Which is not necessarily a common thing.

And well, it's interesting that you note the Perl, because when I was ripping apart some of those core components of Mageia (to rebuild something on Slackware), it became pretty clear that the whole distro core (drak etc., from old Mandrake) is Perl-based.

Anyway, I just thought it might be of interest to you after reading your post.
 
Old 06-06-2022, 08:40 PM   #60
h2-1
Member
 
Registered: Mar 2018
Distribution: Debian Testing
Posts: 556

Original Poster
Rep: Reputation: 316
I've interacted with many distros over the years, too many I think, so the patterns start to get easier to see. Also, watching talks by good project team members or leaders versus bad ones, the difference is night and day; it's simply not subtle. I can watch pretty much anyone involved with OpenBSD talk for an hour and rarely feel I wasted my time, for instance.

There are still some really nice distros out there; the MX/antiX pair that came out of Mepis is another good set. It surprised me, I thought it would die and fail, but MX is pretty solid now from what I can tell, and seems to have managed the transition decently. MX is, I think, similar to Mageia: neither creates most of their own packages, they pull from a base core project (Debian and Fedora, respectively, for the two), but both are also their own thing and seem to have sustained themselves for years now. I have ongoing contact with a guy who does a lot of stuff with Mageia/Fedora/openSUSE; if I didn't, I'd have almost no connection at all to the RPM world. I used to deal directly with the Fedora packager of inxi; he was nice, but I have not heard from him in a long time.

I really have grown to like Perl, and view its decline as very unfortunate, because it's far superior for this type of use case compared to something guaranteed to break on a new release down the road, like Python. But fads are not based on technical merit, sadly; they are just fads, and they sweep through the tech world like anywhere else. Like using git instead of curl or wget to get a file, or when svn would have been a better fit, for example. Or HTTP instead of sftp/ssh to download stuff.

But with this said, I increasingly suspect that if you found a lot of Perl in that codebase, it's old code done by Mandrake or Mandriva, not new stuff, or just patched old code. Taking that old stuff and fixing and updating it takes a long time, days, so I don't see that happening often. I know that with the old Parse::EDID Perl module by Mandriva, the code was not good, it was not completed, and it appeared to have been tossed away once written. Once I understood that the errors and bad stuff were not by design, it was trivial to add the missing EDID data; it was in the core spec, so they simply didn't finish the job, and also didn't do a good job with the Perl in the first place. Not encouraging to me. It takes me about a week to really digest something like that and finally realize that the stuff that is weird or looks wrong really is weird and wrong, so there's no reason to keep it. You can never know this until you understand what they were doing, and the signs they left behind show they weren't that good at what they were doing: good enough to get it to do what they needed, but not that good.

This was, in other words, corporate code, done by people who were paid to do it and didn't care about it; when their task was done, they never touched it again, never finished it, and didn't care about the silly code that did nothing useful beyond wasting CPU cycles and making it harder to read. hciconfig, by the way, is similar: it was written by some company, stuck around Linux for ages, and is finally being removed, because the company left and its authors were paid and didn't care. Same thing, really. OpenBSD just dumped their kernel Bluetooth support for the same reason: some bad old Bluetooth code all the BSDs used. Finally Theo said fix this or we toss Bluetooth, so they tossed Bluetooth, without hesitation; the code could not be fixed. FreeBSD still ships that same code, cough cough... enough said.

OpenBSD also uses Perl for their package-manager tools, I think; Debian does too, but I haven't looked at theirs. I am curious about OpenBSD's stuff. inxi supports a core OpenBSD security feature via an OpenBSD Perl module, OpenBSD::Pledge, which they added to their core modules so it would get used, so I used it. Like almost everything from them that I look at or interact with, it's well designed and easy to use. Sadly, it depends on the OpenBSD kernel to work; I think one other OS has those features, but it's not a real OS, more of a test bed someone is doing to learn, I think. I'm also curious about Slackware's stuff; the community impresses me. I miss this level of skill; it used to be much more common in distros, but I find it's fading fast in most of them.
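
For reference, a usage sketch following the documented OpenBSD::Pledge interface; the promise list and the file read are illustrative, and this only runs on OpenBSD, where the module ships with the base system's perl.
Code:
use strict;
use warnings;
use OpenBSD::Pledge;

# promise only the syscall classes this process will need from here on
pledge(qw(stdio rpath)) or die "pledge failed: $!";

# reads still work under the rpath promise...
open my $fh, '<', '/etc/myname' or die "open: $!";
print scalar <$fh>;
close $fh;

# ...but anything outside the promises (exec, sockets, etc.) now kills
# the process instead of silently succeeding.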

Last edited by h2-1; 06-06-2022 at 08:51 PM.
 
1 member found this post helpful.
  

