01-11-2022, 06:36 PM | #1
Senior Member
Registered: Jul 2006
Location: North Carolina
Distribution: CentOS 6, CentOS 7 (with Mate), Ubuntu 16.04 Mate
Posts: 2,127
Which drive goes where?
Many years ago I wanted to build my own PC. While I eventually did that, my normal approach is to buy a minimally configured box from Dell and see how much I can stuff into it. My current workstation is a Precision T3620. I am running CentOS 7 and am planning to switch, probably to Mint or perhaps Ubuntu.
I do a lot of my work in dedicated virtual machines running under VMware Player/Workstation. One is always on my right monitor (portrait) for web browsing. On the left monitor (landscape) I have a VM for my Protonmail accounts, a scratch VM for misc. use, a VM for accessing my investment accounts, a couple with various programming environments, etc. I spin them up as needed. It provides some isolation - sort of my version of Qubes OS. And now for my question...
I have available a couple of high performance PCIe M.2 drives (240 and 480 GB), a decent performance SATA drive (512 GB), and some mechanical drives. I am trying to decide where to install what.
One 4 TB mechanical drive is dedicated data storage for a project which is accessed from one of the VMs so it is spoken for. Another mechanical drive is used for misc. data storage and backup. Again, performance is not an issue for nightly backups. So on to the high performance storage...
Currently I am launching CentOS from a small partition on one of the PCIe drives. This boots quickly but I do not boot the host machine very often. The VM images are of course on the PCIe storage and I will have the /home directories for the VMs on PCIe, shared via NFS from the host and mounted by the VMs. This facilitates backup.
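Roughly speaking the sharing looks like this (the path, subnet and address below are placeholders, not my real values):
Code:
# on the host - export the directory holding the VMs' /home data
# (this line goes in /etc/exports)
/srv/vmhomes  192.168.122.0/24(rw,sync,no_subtree_check)

# reload and verify the export table
exportfs -ra
exportfs -v

# inside each VM - mount it at boot via /etc/fstab, e.g.
# 192.168.122.1:/srv/vmhomes  /home  nfs  defaults,_netdev  0  0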
I am now pondering where to install the new host OS. Considering that most work on the new build will be performed within VMs and not directly on the host - is there any reason to have the host OS running from high-performance storage when it is basically just supporting the VMware hypervisor? I am considering booting the host from a USB thumb drive. I run some data servers that way. Once the OS is booted I do not think there is too much activity on the root file system. The system has 32 GB of RAM so I do not have a swap partition.
Just sort of thinking out loud. Any comments or advice would be appreciated.
TIA,
Ken
01-12-2022, 08:08 PM | #2
LQ Guru
Registered: Jan 2006
Location: Virginia, USA
Distribution: Slackware, Ubuntu MATE, Mageia, and whatever VMs I happen to be playing with
Posts: 19,895
I'm hardly an expert on this, but I'd be inclined to put the OS on a slower drive and the data files that will be frequently accessed on the faster one(s).
As for booting the OS from an external drive, all my instincts cry out against that, but I can't offer a rational argument on said instincts' behalf.
01-13-2022, 09:51 AM | #3
Member
Registered: Jun 2020
Posts: 614
I would agree with frankbell on not booting from external storage if possible - it can lead to weird 'non-booting' scenarios if the system's underlying BIOS doesn't grab the USB device, or if the USB device fails (and in my experience those little USB keys are cheap for a reason). I think your instincts are right that the underlying host OS probably won't suffer from being on a newer mechanical drive (and remember that some modern mechanical drives can achieve performance in the ~200 MB/s range on their own, which is nothing to sneeze at), with the one caveat being: if you're using FDE (full-disk encryption) on the host, I've found booting that from mechanical drives (even high-performance RAID arrays with 10k disks and controllers with DRAM caching) tends to be glacially slow on start-up. I've never investigated 'why' that happens more fully, but I'm assuming it has something to do with random access times - SSDs (of any pedigree) seem to have no trouble with that. If you aren't using FDE on the host this of course would be moot.
I'd also potentially rethink the 'no swap' configuration - 32GB of RAM isn't what it used to be.
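If you'd rather not carve out a partition for it, a swap file is a cheap hedge - roughly something like this (size and path are just examples):
Code:
# create and enable an 8 GB swap file as root
# (fallocate is fine on ext4/xfs; use dd if the filesystem doesn't support it)
fallocate -l 8G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# make it permanent
echo '/swapfile none swap sw 0 0' >> /etc/fstab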
01-13-2022, 10:43 AM | #4
Senior Member
Registered: Jul 2006
Location: North Carolina
Distribution: CentOS 6, CentOS 7 (with Mate), Ubuntu 16.04 Mate
Posts: 2,127
Original Poster
Thanks for the replies.
I am working on a test install to check these things out. Unfortunately the old desktop I planned on using 1) does not support UEFI boot, so the installation/partitioning is a little different from what I would actually run, and 2) the USB3 add-in card is not bootable, so I would have to boot from USB2. To test booting from a flash drive I need to go back to one of my Intel NUCs, which has enough RAM to run a virtual machine but only has a 2-core processor.
As far as using an external drive... I have two low-end and one higher-end Dell servers which I use for archive data storage. For about $150 each the lower-end ones were cheaper than a NAS box. I decided to install the OS on a flash drive so I did not waste a SATA port. I have two pairs of drives in each and I mirror my data between the drives in the pair. As far as the flash drives being fragile... Once everything is installed and configured I take a Clonezilla snapshot of the flash drive and keep that for backup. I do the same at the end of the year. If/when a flash drive fails I simply burn that Clonezilla image to a new flash drive and plug it in.
My higher-end T130 - which I got for $129 (long story) - is the only one which ever has any issues booting from USB. But then again it has been a PITA from the day I got it. Instead of having a bunch of wires flopping around inside like in a PC or the cheap servers, it had a beautifully loomed data and power wiring harness terminating at the 4 hard drive pockets. However, there was nothing on the motherboard for the other end to plug into. After almost an hour on the phone with Dell's premium, 100% US-based, commercial tech support it was determined that I had the wrong wiring harness (it was for a RAID controller which I did not order - I told them that, as it had PERC stamped on it) and I needed the OTHER wiring harness (there were only 2 for that model of machine), which they overnighted to me.
Let me get to some experimenting.
Ken
01-13-2022, 12:18 PM | #5
LQ Guru
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 11,201
Be very sure to use LVM (Logical Volume Management), which many installers enable by default. This will allow you to cleanly separate the physical storage picture from the logical picture that is perceived by applications, and to change either of them independently. Physical storage is grouped into volume groups ("storage pools") and then logical volumes are carved out of those pools. In this way, the combined space available in the entire pool can be put to work.
A variety of commands are available to deal more-or-less automatically with situations that often come up, such as a drive that is beginning to fail.
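The whole flow is only a handful of commands - something along these lines, with device names and sizes obviously being placeholders for your own:
Code:
# initialize the devices as LVM physical volumes
pvcreate /dev/nvme0n1 /dev/sda

# group them into volume groups ("storage pools") - here, fast and slow kept separate
vgcreate vg_fast /dev/nvme0n1
vgcreate vg_slow /dev/sda

# carve logical volumes out of the pools
lvcreate -L 200G -n vmimages vg_fast
lvcreate -L 2T   -n archive  vg_slow

# then put filesystems on them and mount as usual
mkfs.ext4 /dev/vg_fast/vmimages
mkfs.ext4 /dev/vg_slow/archive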
01-16-2022, 08:26 AM | #6
Senior Member
Registered: Jul 2006
Location: North Carolina
Distribution: CentOS 6, CentOS 7 (with Mate), Ubuntu 16.04 Mate
Posts: 2,127
Original Poster
Thanks sundialsvcs,
I am familiar with LVM and in some situations it is certainly of value. However, in the case of what I am building I would have to ask - Is it smart enough to realize that I added some FAST storage to a pool of slower storage? PCIe storage is a LOT faster than mechanical storage.
This makes me think back to an Oracle DBA course I took many years ago re. performance tuning. The Oracle mantra at the time was to use at least 7 physical disks, with data on some, index files on others, etc. Not being a DBA I asked a blasphemous question - what about putting the whole thing on a RAID array? That caused the instructor to jump up and down and turn red in the face. I told him it did not make a hoot of difference. A programmer or user writing bad SQL could bring his database to a crawl even if he had 70 disks. I have seen that happen even on a BIG mainframe.
As to my grand testing plans... I installed Mint 20.3 on a 32 GB USB2 flash drive. It boots my test PC in a reasonable time. Unfortunately, when I went to install VMware Workstation/Player I was told that my i7-860 CPU would not support the latest 3 versions. That means I would have to find an old version of VMware and rebuild my VMs in that version. No thanks. Time to try something else.
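For what it is worth, one can at least see what the CPU actually reports before blaming VMware (I am not certain exactly which flags the newer releases insist on):
Code:
# what the CPU identifies as, and whether VT-x is present
lscpu | grep -i -E 'model name|virtualization'

# dump the full feature-flag list once and eyeball it (vmx, ept, etc.)
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort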
Ken
01-16-2022, 09:25 AM | #7
Member
Registered: Jun 2020
Posts: 614
Quote:
Originally Posted by taylorkh
Thanks sundialsvcs,
I am familiar with LVM and in some situations it is certainly of value. However, in the case of what I am building I would have to ask - Is it smart enough to realize that I added some FAST storage to a pool of slower storage? PCIe storage is a LOT faster than mechanical storage.
Not trying to speak for sundialsvcs (because I don't know if this what they had in mind): LVM supports caching as well - so ostensibly you could have a big pool of relatively slower mechanical disks and a smaller PCIe device as the lvmcache for them. See here for more: https://www.linux.org/docs/man7/lvmcache.html
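As a rough sketch of how that gets wired up (device names and sizes below are placeholders - the man page above has the real details):
Code:
# add the fast PCIe device to the volume group holding the slow LV
pvcreate /dev/nvme0n1p1
vgextend vg_data /dev/nvme0n1p1

# build a cache pool on the fast device and attach it to the slow LV
lvcreate --type cache-pool -L 200G -n fastcache vg_data /dev/nvme0n1p1
lvconvert --type cache --cachepool vg_data/fastcache vg_data/slowdata

# to back it out later:
# lvconvert --splitcache vg_data/slowdata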
I have no idea 'what's best' in terms of performance, and your DBA example above illustrates how obnoxious disk performance can be to quantify in complex settings. Probably 'do it and see how it works' is the best approach.
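If you do want rough numbers rather than gut feel, something like the following gives a quick read (the fio parameters are just a generic starting point, not tuned to anything):
Code:
# crude sequential read check
hdparm -t /dev/sda

# 4k random reads - closer to what VM images actually see
fio --name=randread --filename=/path/to/testfile --size=1G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=30 --time_based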
01-16-2022, 10:14 AM | #8
Senior Member
Registered: Jul 2006
Location: North Carolina
Distribution: CentOS 6, CentOS 7 (with Mate), Ubuntu 16.04 Mate
Posts: 2,127
Original Poster
Thanks obobskivich,
I guess I am a throwback to my two floppy drive Osborne. The thing that spooks me about LVM or RAID is the situation WHEN (not if) a device fails. I always keep at least two copies of important data. More copies if it is really important.
I know that some RAID configurations are supposed to be fault tolerant etc. I recall our second departmental Banyan Vines server some decades ago. It was a super duper machine with 4 spindle-synced hard drives - perhaps a couple of GIGAbytes total. We got messages that one of the drives was throwing errors.
The corporate IT department was SUPPOSED to maintain a hot spare server which could be brought on-line and our data backup restored to it. However - have server, will play with it - and it was not properly configured. The hardware support company was called and their rep came to the site. He was told to remove the offending drive, let's say drive #3, from our server and replace it with drive #3 from the spare server. When booted back up, our server said "that is not MY drive 3" and it would not rebuild the RAID. Next he was told to put back the original, ailing drive. Again, the "not MY drive 3" message. We had to rebuild the server from scratch and restore our data from backup. I think we were down for a couple of days. Longest outage I saw in the whole time we used Banyan.
With my normal process of keeping storage devices and file systems discrete I KNOW what is where and back it up appropriately. When an OS partition/file system gets hosed or corrupted or an update breaks something I can grab a Clonezilla image and restore it. If a device goes bad I can restore everything to a new device and either send the old one back to the manufacturer for warranty repair or just shoot it for data security reasons.
Try it and see what happens is certainly good advice. I need to upgrade my Dell workstation which would allow me to take my time building it out. I am sort of holding out to see what the Intel generation 12 processors amount to.
From what I have read the gen 11 series are just a little faster than gen 10 but use a lot more power. The generation 12 processors are supposed to offer something like the Cadillac Northstar engine, which would run on 4, 6 or 8 cylinders depending on how much horsepower was required. As I understand it, some of the gen 12 processors will have a few lower-power cores which run all the time, and the more powerful and power-hungry cores will only fire up when the workload demands. That sounds neat provided it works.
Thanks again,
Ken
01-16-2022, 04:07 PM | #9
Member
Registered: Jun 2020
Posts: 614
Quote:
Originally Posted by taylorkh
Thanks obobskivich,
I guess I am a throwback to my two floppy drive Osborne. The thing that spooks me about LVM or RAID is the situation WHEN (not if) a device fails. I always keep at least two copies of important data. More copies if it is really important.
I know that some RAID configurations are supposed to be fault tolerant etc. I recall our second departmental Banyan Vines server some decades ago. It was a super duper machine with 4 spindle-synced hard drives - perhaps a couple of GIGAbytes total. We got messages that one of the drives was throwing errors.
The corporate IT department was SUPPOSED to maintain a hot spare server which could be brought on-line and our data backup restored to it. However - have server, will play with it - and it was not properly configured. The hardware support company was called and their rep came to the site. He was told to remove the offending drive, let's say drive #3, from our server and replace it with drive #3 from the spare server. When booted back up, our server said "that is not MY drive 3" and it would not rebuild the RAID. Next he was told to put back the original, ailing drive. Again, the "not MY drive 3" message. We had to rebuild the server from scratch and restore our data from backup. I think we were down for a couple of days. Longest outage I saw in the whole time we used Banyan.
With my normal process of keeping storage devices and file systems discrete I KNOW what is where and back it up appropriately. When an OS partition/file system gets hosed or corrupted or an update breaks something I can grab a Clonezilla image and restore it. If a device goes bad I can restore everything to a new device and either send the old one back to the manufacturer for warranty repair or just shoot it for data security reasons.
Fully understand it, and I've heard/seen/lived my share of horror stories as well. Generally I just run one block device per LVM to make LUKS/FDE easier, but conceptually I don't think going to multi-device LVM would violate the above - you can still image the entire LV as if it were a single device (that's the whole trick with LVM) as long as you have somewhere big enough to take the output. So, for a (super simple) example, say you had 4x 1 TB SSDs together as one LV to be very fast - a single 4 TB hard drive could be the backup there, it'd just be slower. I'm of a similar mind on hardware RAID, and generally don't trust softRAID further than I can throw it - it's useful to get a block device's capacity/performance/whatever where it needs to be, but it isn't a substitute for the 3-2-1 rule (or some other similar backup scheme).
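To make that concrete, one (hedged) way of imaging an LV onto the big slow drive while keeping the source consistent - names and sizes are placeholders:
Code:
# take a small snapshot so the copy sees a frozen view
lvcreate -s -L 20G -n vms_snap vg_fast/vmimages

# stream the snapshot onto the backup drive
dd if=/dev/vg_fast/vms_snap of=/mnt/backup/vmimages.img bs=4M status=progress conv=fsync

# drop the snapshot when done
lvremove -y vg_fast/vms_snap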
Quote:
Try it and see what happens is certainly good advice. I need to upgrade my Dell workstation which would allow me to take my time building it out. I am sort of holding out to see what the Intel generation 12 processors amount to.
From what I have read the gen 11 series are just a little faster than gen 10 but use a lot more power. The generation 12 processors are supposed to offer something like the Cadillac Northstar engine which would run on 4, 6 or 8 cylinders depending on how much horsepower was required. As I understand some of the gen 12 processors will have a few lower power usage cores which run all the time and the more powerful and power consuming cores will only fire up when the workload demands. That sounds neat provided it works.
Thanks again,
Ken
This is going OT but I'm fine with that if you're fine with that:
A lot of the 'Intel uses so much power' stuff is nonsense borne out of 'reviews' that will run Intel chips with all of their power limits/governors removed and then complain that (surprise!) they can pull down some significant power. If allowed to function as designed, most modern Intel CPUs are 65 W TDP devices, with only the 'K' (overclocking/enthusiast) chips offering higher out-of-box TDPs (usually 125 W) - all of that can be disabled on enthusiast-grade systems and can result in some really spectacular power draw figures under heavy synthetic loads. The 11th gen chips indeed don't offer a great performance uplift over the 10th gen, at least on the higher-end desktop SKUs (e.g. 10900 vs 11900), but they did bring some other advances over 10th generation, like PCIe 4.0 support, more PCIe lanes overall, and on mobile they're also a somewhat different architecture (the mobile 11th-gen chips are built on Intel's '10nm' node, and some of them have newer/better graphics cores baked in). Overall, depending on what you're looking at, it's six of one, half a dozen of the other - neither is really a 'problematic' generation despite the histrionics of modern 'reviews.'
On the 12th gen - on the very high end SKUs they've implemented a 'bigLITTLE' scheme similar to what ARM (and Apple) have been doing for a few years, which has caused no end of problems with compatibility (it reminds me a lot of AMD's original roll-out of dual-cores back around 2004-5 and all of the 'timing bugs' associated with those) due to the heterogeneous core layout (they aren't just different clocks, they have different CPUID values and (if I'm not mistaken) featuresets). Overall the performance gains range from very slim to reasonable, but the price is quite a bit higher (especially when you factor in the unbuyable DDR5 they generally need), and in their quest to 'win at synthetic benchmarks' Intel has also pulled power limits off on a lot of the higher end SKUs (like the 12900), which leads to very high peak power draw. Overall, enforcing sane power caps does not hinder performance significantly (because the increase in power draw vs. performance seems to follow a log curve, not a linear one).
As far as the Northstar analogy - that's not quite accurate, but it's not quite wrong either: Intel (and AMD, if you were wondering) has offered per-core PLLs for a few generations now, so indeed the CPU can leave some cores 'idle' while others are heavily loaded in lightly threaded apps (whether this saves any significant power is debatable - from experience I'm going to say 'it probably doesn't, but it sure looks good in the brochure!'), but the big 'gotcha' for 12th gen is the bigLITTLE hybrid design, although their choice to restrict that only to the flagship parts is curious. By contrast, Apple's bigLITTLE designs usually lean more towards 'performance' or 'big' cores as you move up the product stack (e.g. M1 (4+4) vs M1 Max (8+2)), while Intel's high-end chip is 8+8 and their lower-end chips tend to be more like 6+0.
Overall I would probably pass on the 12th gen - let the schedulers mature around bigLITTLE for x86 and come back in a generation or two and it will probably be a lot better situation. That said, 10th/11th gen chips are being discounted/closed out and can be a great value in terms of performance/$, and indeed it may be the case when next-gen Raptor Lake (I'm not sure if they're going to call that '13th gen') launches and things mature a bit, that 12th gen will look like a good value on close-out too. Also remember that DDR5 is very hard to come by, and is the latest scalperware tech, so 12th gen is largely academic for most people unless you're buying one of those funky LGA 1700 boards that take DDR4.
01-16-2022, 05:47 PM | #10
Senior Member
Registered: Jul 2006
Location: North Carolina
Distribution: CentOS 6, CentOS 7 (with Mate), Ubuntu 16.04 Mate
Posts: 2,127
Original Poster
Thanks once more obobskivich,
OT? Who cares. I learn many things in random discussions. As to Intel processors... I have been unable to make sense of their numbering scheme once they got past the Pentium 4. My Osborne had a Zilog Z80. With the CP/M Plus OS I could overcome the 64 kB RAM limit. I had 128 kB of RAM which was divided into eight 16 kB banks which CP/M paged in and out as needed. Then with Intel, the 8088, IBM, the PC architecture with its 640 kB limit... it has been downhill ever since.
I never forgave Ken Olsen, former head of Digital Equipment, for his "VAX Forever" obsession. It kept the Alpha processor from achieving the success I think it deserved. I never had, or could afford, anything with an Alpha CPU but I always wanted one.
With Intel (or AMD) processors it makes one wonder why I bother to harden the OS when the underlying hardware is not secure. Recently I read that even ECC memory can be rowhammered. Perhaps I should tie 64 Raspberry Pi boards into a Beowulf cluster. I think the AMD processors are less vulnerable than most.
My current Dell Precision T3620 with an i7-6700, 32 GB of RAM and an nVidia Quadro K620 video card has plenty of power for what I do. The only issue is with the sound. The on-board sound crapped out and I replaced it with a cheap PCIe board from evilbay. It provides enough sound (although the speakers are reversed left & right). However, when sharing the sound device with various VMs the sound card periodically disappears. I am not sure if this is an issue with the card, CentOS 7 on the host, VMware, or a conflict among them. Then I have to restart the host and everything else. What a PITA.
Ken
01-16-2022, 06:35 PM | #11
Member
Registered: Jun 2020
Posts: 614
Quote:
Originally Posted by taylorkh
Thanks once more obobskivich,
OT? Who cares. I learn many things in random discussions. As to Intel processors... I have been unable to make sense of their numbering scheme once they got past the Pentium 4. My Osborne had a Zilog Z80. With the CP/M Plus OS I could overcome the 64 kB RAM limit. I had 128 kB of RAM which was divided into eight 16 kB banks which CP/M paged in and out as needed. Then with Intel, the 8088, IBM, the PC architecture with its 640 kB limit... it has been downhill ever since.
I've always heard it explained that we can thank the oddities of trademark and copyright law for that - "you can't copyright a number" so the story goes...
More recent Intel nomenclature makes sense, at least within itself, as they've stuck to the 'generation' metaphor pretty well for the last 5-10 years, whereas in the Pentium era they rarely made it simple to figure out which revision of CPU you had (for example there are technically 4 major revisions of the Pentium 4, each with a few sub-versions, and this can make a difference in terms of things like SSE support, VT support, whether it has 64-bit capability, etc. - but they're all just 'Pentium 4 3.20GHz' or somesuch - whereas the 'generation' metaphor at least delineates that, even if it isn't great across their whole offering (e.g. don't try to align desktop/mobile/server generations and make sense of it)).
Quote:
I never forgave Ken Olsen, former head of Digital Equipment, for his "VAX Forever" obsession. It kept the Alpha processor from achieving the success I think it deserved. I never had, or could afford, anything with an Alpha CPU but I always wanted one.
With Intel (or AMD) processors it makes one wonder why I bother to harden the OS when the underlying hardware is not secure. Recently I read that even ECC memory can be rowhammered. Perhaps I should tie 64 Raspberry Pi boards into a Beowulf cluster. I think the AMD processors are less vulnerable than most.
Not a security expert, but: as I understand it the security vulnerability thing in modern CPUs/hardware is pretty much an 'everywhere' problem, and one that journalists have learned sells papers (if you haven't read Hector Martin's commentary on M1RACLES I'd suggest it). How paranoid do you want to be though? There's probably still a few Asus KGPE-D16s kicking around that you could throw libreboot on, and enjoy a Bulldozer or Piledriver CPU (the last x86 CPUs without embedded ARM SoCs running mystery meat blobs), or (if you have lots and lots of money and lots and lots of time) there's always Raptor Computing and the Talos Secure Workstation. I'm not sure either would really be a 'great' day-to-day experience however.  Overall I'd be a lot more concerned about hardening the OS, browser, general Internet habits, etc than abstract vulnerabilities in hardware ('abstract' is maybe not the best word here - but I mean things that may well require a degree in theoretical computer science just to fully get your head around).
Quote:
My current Dell Precision T3620 with an i7-6700, 32 GB of RAM and an nVidia Quadro K620 video card has plenty of power for what I do. The only issue is with the sound. The on-board sound crapped out and I replaced it with a cheap PCIe board from evilbay. It provides enough sound (although the speakers are reversed left & right). However, when sharing the sound device with various VMs the sound card periodically disappears. I am not sure if this is an issue with the card, CentOS7 on the host, VMWare or a conflict among them. Then I have to restart the host and everything else. What a PITA.
Ken
I don't have much experience with VMware, but I know sound passthrough on VirtualBox is, to use your term, 'a PITA' in its own right. I had the best luck with USB audio devices, FWIW (let me extend that: on *nix-based systems I have had the overall best luck with USB audio vs. other implementations, and (ancient) PCI cards come in second - PCIe cards have been a crapshoot). I've been reading about QEMU/KVM recently, but haven't tried deploying it - supposedly it's the 'new good way' to do things, but I'm guessing you know how that goes...
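On the disappearing card - before restarting the whole host it might be worth checking whether the kernel or just the sound server lost it, roughly like this (the module name is a guess; lspci -nnk shows the real one):
Code:
# is the card still on the bus, and which driver claims it?
lspci -nnk | grep -A3 -i audio

# does ALSA still see it?
aplay -l

# try bouncing the driver instead of rebooting (module name is a guess)
modprobe -r snd_hda_intel && modprobe snd_hda_intel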
01-17-2022, 01:06 PM | #12
Senior Member
Registered: Jul 2006
Location: North Carolina
Distribution: CentOS 6, CentOS 7 (with Mate), Ubuntu 16.04 Mate
Posts: 2,127
Original Poster
When my Pentium got too slow I went to a Pentium II. When that got too slow I moved to a Pentium IV. I did have a couple of different clock speed P4s. But now...
Which is "better" an i5-11600K, Xeon W1350, i7-10700k or a Xeon W1270? They are all about the same price in a Dell Precision 3650. Of course it depends on the use. But to figure which benchmarks are relevant to MY use... And Dell offers TWENTY EIGHT different processors for this box. No DEC Alpha though
Ken
01-17-2022, 01:56 PM | #13
Member
Registered: Jun 2020
Posts: 614
Quote:
Originally Posted by taylorkh
When my Pentium got too slow I went to a Pentium II. When that got too slow I moved to a Pentium IV. I did have a couple of different clock speed P4s. But now...
Which is "better" an i5-11600K, Xeon W1350, i7-10700k or a Xeon W1270? They are all about the same price in a Dell Precision 3650. Of course it depends on the use. But to figure which benchmarks are relevant to MY use... And Dell offers TWENTY EIGHT different processors for this box. No DEC Alpha though
Ken
The 11th-gen chips have PCIe 4; the 10th gen do not. x600 is a 6-core part, while x700 is an 8-core part. The Xeons are the same thing (W-1350 = 11600; W-1270 = 10700); the only 'add-on' for Xeon on these single-socket (1P) platforms is ECC support. Simple enough. The 'K' suffix means an unlocked multiplier, but that won't matter on a Dell.
01-17-2022, 02:30 PM | #14
LQ Guru
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 11,201
Of course I never meant to suggest that you didn't know about LVM, and I heartily agree with your observation of the ability of an SQL query-writer to screw up everything!
Sometimes, I wish that SQL query-processors had the option to say: "No. This sucks. You obviously don't know what you are doing. I'm not going to run this..." ... but, I digress ...
---
Having said all that, I believe that LVM might be of benefit here, no matter how you may choose to apply it, simply because it separates the logical view from the physical one. The operating system and its applications now perceive only the logical view, leaving you complete flexibility with regards to the physical setup, along with a variety of tools that are built to help you to resolve issues "without downtime."
Even if you decide to create a logical picture that does more-or-less reflect the physical picture (e.g. consciously grouping similar drives together, or even creating storage-pools which correspond to only one drive), LVM still offers a lot of "goodness." Such as, tools to deal with "the drive that has suddenly begun to emit ominous ticking sounds."
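For example, evacuating the ominously ticking drive is basically this, assuming the volume group has somewhere else to put the data (device names are placeholders):
Code:
# move all extents off the failing physical volume, online
pvmove /dev/sdb1

# then drop it from the volume group and retire the disk
vgreduce vg_data /dev/sdb1
pvremove /dev/sdb1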
Quite frankly, when I first began to study the design, I said to myself: "Now, this is slick. The team which designed this feature really knew what we needed most, and did a bang-up job of providing it." I still feel that way about LVM. ( You know who you are ...)
Last edited by sundialsvcs; 01-17-2022 at 02:40 PM.