Slackware: This forum is for the discussion of Slackware Linux.
Wow!!
Glad it finally worked out for you.
I've got a Sun blade with 32 cores/48 gigs which is (alas) running CentOS at the moment (thanks to my employer - not!).
It's just sitting there doing absolutely nothing - so here I come - Slackware it is!!
Linux really is amazing... All you had to do to access 64 cores and half a TB of RAM was change a single number in the kernel config.
Is there a downside to having a larger number as the default? Just out of curiosity.
Each possible CPU uses 8K of system RAM... but big deal these days, right? At least for x86_64, bumping it to 64 (or even 128 to future-proof it for a while) seems like a worthwhile trade.
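For anyone wanting to try the same change, it's a one-line edit to the kernel `.config` before rebuilding. A minimal sketch; the demo file here stands in for the real config (typically `/usr/src/linux/.config` on Slackware, though your tree's location may differ):

```shell
# Sketch only: bump the maximum possible CPU count in a kernel .config.
# nr_cpus_demo.config is a stand-in for the real /usr/src/linux/.config.
CFG=nr_cpus_demo.config
printf 'CONFIG_NR_CPUS=32\n' > "$CFG"              # stand-in for the stock value
sed -i 's/^CONFIG_NR_CPUS=.*/CONFIG_NR_CPUS=64/' "$CFG"
grep CONFIG_NR_CPUS "$CFG"                          # now CONFIG_NR_CPUS=64
# On a real tree you'd follow this with 'make oldconfig' and a rebuild.
```

At roughly 8K of RAM per possible CPU, CONFIG_NR_CPUS=64 costs about 512K, which is negligible on any machine that actually has that many cores.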
Well yes, I remembered the -j switch but used only 10; it was still fast enough - less than 5 minutes, I think. But I built bzImage and modules only.
Well, I'm working on genome analysis and next-generation sequencing, so the power is needed. It saves days of work and frustration.
That machine also came with CentOS, but I don't like Linux distros designed for clicking people. I've known Slackware for years, so I just changed to it.
The funny thing was that the company which sold us the server said they provide it only with CentOS and that Slackware might not be good. I figured out why: someone had to recompile the kernel for 64 cores. I don't know how they got theirs for CentOS, though. Maybe something like:
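For context on the -j switch: make runs one compile job per -j slot, so sizing it to the core count is the usual move. A hedched sketch; the kernel-tree path in the comment is an assumption, not from the thread:

```shell
# Pick a parallel-build job count from the number of online CPUs.
JOBS=$(nproc)                        # logical CPUs the kernel reports
echo "would build with $JOBS parallel jobs"
# On the actual machine (kernel source assumed to be at /usr/src/linux):
#   make -j"$JOBS" bzImage modules
```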
yum install kernel_which_supports_64_cores
Anyway, now the machine is up and running. All cores recognized.
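Two quick ways to confirm the kernel really brought up every core, using only standard tools:

```shell
# Count the CPUs the kernel brought up; on the 64-core box both should say 64.
nproc                                    # CPUs available to this process
grep -c ^processor /proc/cpuinfo         # logical CPUs the kernel sees
```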
I do not want your machine. I run Slackware 14 on a single-core, 1.66 GHz Atom with 1 GB of RAM (plus a dual-core Celeron 877 netbook). It's OK but hardly fast; in benchmarks it scores just ahead of a 1.2 GHz PIII.
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,541
Rep:
Quote:
Originally Posted by perbh
Wow!!
Glad it finally worked out for you.
I've got a Sun blade with 32 cores/48 gigs which is (alas) running CentOS at the moment (thanks to my employer - not!).
It's just sitting there doing absolutely nothing - so here I come - Slackware it is!!
So I've got a dumb question -- say I've got a blade server, will Slackware install and go without a whole lot of fiddling? Let's say a Sun Blade similar to yours (or whatever blade box).
I certainly don't want to hijack this thread but it is about high-performance computing and I don't know enough about blade servers, so I'm just askin'.
Reason I'm asking is that the county I live in is 1,791 sq miles with a mix of national forests, large and small farms, towns, and a real mix of geography. And GRASS is the berries for dealing with just that.
From the GRASS web page:
Quote:
GRASS GIS, commonly referred to as GRASS (Geographic Resources Analysis Support System), is a free Geographic Information System (GIS) software used for geospatial data management and analysis, image processing, graphics/maps production, spatial modeling, and visualization. GRASS GIS is currently used in academic and commercial settings around the world, as well as by many governmental agencies and environmental consulting companies. GRASS GIS is an official project of the Open Source Geospatial Foundation (OSGeo).
The county needs to do all of what GRASS does (as do most counties throughout the US) and, well, it's a resource hog. Runs fine in 32- and 64-bit Slackware 14.0, but "fine" is relative; it takes a while to do large-scale analysis and 3-D mapping. It do make pretty pictures though.
I'm using it for my own interests (along with GMT for large-scale maps) and am working on the county to abandon an old, clunky, unstable, difficult system (that nobody knows how to use) and, you know, get dragged kicking and screaming into the last quartile of the 20th century. It would be really nice if a blade box would actually "load-'n'-go," if that's possible; anything that requires a lot of fooling around is going to be a flat NO. A dedicated desktop with lots of horsepower would be acceptable, but, hey, if I can lay hands on a used Sun blade server cheap (I can), that would be interesting.
@tronayne:
Most certainly plug-and-play, be it CentOS or Slackware.
The only 'install difference' between a 'blade' and a workstation is the graphics capability - blades are notoriously bad in this respect. I have come across blades that will not support more than 800x600! Until you've got it set up properly with VNC etc., you're CLI-only.
Other than that - I'm really not that impressed with some of these high-spec blades ... checking the CPU activity, they seem to spend a fair proportion of their time just shuffling tasks between CPUs. We were testing a 48-core beast at one time, pushing it to the limit of what our software could throw at it - and it used only 15 of the 48 cores! But then, in our case, I/O is the primary bottleneck - and to avoid using slow (well, everything is relative) disks, we shuffle data between several machines over a 10-gig network, only using disks at the final stage when we run out of other options - gotta save it for posterity!!
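For the 800x600-until-VNC point, the usual workaround sketches out like this (the hostnames, user, and display number are placeholders, not from the thread; Slackware ships a VNC server in its extra packages, but check your install):

```shell
# Typical headless-blade setup, shown as comments:
#   blade$ vncserver :1 -geometry 1600x900          # virtual desktop on :1
#   desk$  ssh -L 5901:localhost:5901 admin@blade   # tunnel the VNC port
#   desk$  vncviewer localhost:1
# A VNC display N listens on TCP port 5900+N, hence 5901 above:
DISPLAY_NUM=1
echo $((5900 + DISPLAY_NUM))    # prints 5901
```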
Last edited by perbh; 06-14-2013 at 12:47 PM.
Reason: mis-spelling
Quote:
Originally Posted by perbh
@tronayne:
Most certainly plug-and-play, be it CentOS or Slackware.
The only 'install difference' between a 'blade' and a workstation is the graphics capability - blades are notoriously bad in this respect. I have come across blades that will not support more than 800x600! Until you've got it set up properly with VNC etc., you're CLI-only.
Huh. Didn't know that (but, then, don't know a heckuva lot about blades, kinda thought they're beyond ordinary mortals, you know). I actually don't care about graphics all that much -- when you're talking global maps (even with terrain), you're not too worried about high definition and the display I have, an Acer 20" that'll do 1600x900 is good enough. I only spec stock Intel graphics controllers 'cause anything else is a waste of money.
I'll keep the advice in mind, though, about the 800x600 thing and vnc and all, thanks.
Quote:
Originally Posted by perbh
Other than that - I'm really not that impressed with some of these high-spec blades ... checking out the cpu-activity, they seem to spend a fair proportion of their time just shuffling tasks between cpu's. We were testing out a 48-core beast at one time, pushing it to the limit of what our software can throw at it - and it used only 15 out of the 48 cores! But then, in our case, i/o is the primary bottleneck - and to avoid using slow (well, everything is relative) disks, we shuffle data between several machines using 10-gig network - only using disks at the final stage when we run out of other options - gotta save it for posterity!!
I'm not thinking monster box, I'm thinking a little-bit-faster box. GIS is heavily disk-driven: lots of data in great big files. I have stuff spread over multiple drives in distinct categories to make things go faster. Place labels and the like (that get put on maps at defined latitude and longitude: roads, bridges, railroads, etc.) are large files, lots of disk I/O, and I don't expect (or get) blazing speed. Think about the data contained in a 10-degree by 10-degree patch of the earth and you get some idea.
Anyway, thanks for input -- I'll look into that Sun Blade a guy wants to get rid of.
@tronayne:
OK - your needs are obviously somewhat different from mine ...
There is nothing to stop you from putting in a better graphics adapter - I've done that to many blades - you just have to be extremely careful that you get the 'right' type of bus connector. My experience is mostly with 'decommissioned' blades (IBM, HP and a couple of Suns). Most blades have a proprietary riser card which will allow you up to 4 extender cards. For the IBMs you are mostly reduced to 'pci-ex-1' (ie 8 bits), the HPs use PCI-X, and the Suns take almost any pci-ex (8/16/32 bits).
I once bought a $400 graphics card for an IBM blade, only to find it wouldn't work because it was 16-bit pci-ex. It was impossible to get an appropriate riser card, so I sat on it for however long and finally put it into one of the Sun blades.
Other than that - they are almost identical to any workstation, but are oh so easily racked :-)
One more thing - blades usually have 15k rpm SAS disks (300 gigs) - again, they differ. Older blades usually had 2 disks, newer ones can have up to 6. These disks are _not_ interchangeable with the more 'normal' SATA drives. It means, though, that if your needs are big disk capacity, I would rather use the blades for heavy processing and in addition have an extra fileserver with some 2-3 terabyte disks ... ymmv
Learn something every day, thanks. Somewhere in the back of my noggin lives a little gnome that mumbles at me now and again; one such mumble was about disk drives in the rack that I sort of forgot about because, well, I'm just not looking at Big Data. 95% of the data I use is text files of vectors (the exception being topographic files that total roughly 4.4G for the entire world, the "patch" files mentioned earlier). Those topo files aren't used too often because they don't manipulate easily -- 10 x 10 degree image files cover a helluva lot of area and aren't too useful for land-use studies and the like. Vector data is text: latitude, longitude, elevation; feed it to an equation that projects a roundish world onto flat paper and go speed-fast. This is actually one of those things where 64-bit shines over 32-bit (it's not easy to make a direct comparison, but doing map projections is heavy arithmetic and 64-bit just goes a lot faster).
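The per-point arithmetic behind "project a roundish world onto flat paper" is easy to illustrate. A toy plate carrée (equirectangular) projection, far simpler than what GMT or GRASS actually use, but it shows the kind of work done for every vector point:

```shell
# Toy equirectangular projection: lat/lon in degrees -> x/y in km.
# Real projections (Robinson, Mercator, ...) add trig per point.
echo "45.0 -83.5" | awk '{
  pi = 3.14159265358979; R = 6371     # mean Earth radius, km
  y = R * $1 * pi / 180               # y scales with latitude
  x = R * $2 * pi / 180               # x scales with longitude
  printf "x=%.1f km  y=%.1f km\n", x, y
}'
# prints: x=-9284.8 km  y=5003.8 km
```

Multiply that by millions of vertices per map layer and the appeal of more cores and faster 64-bit math is obvious.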
The output is PostScript (or HP, or whatever) and you send that to a display, printer or plotter (and wait a while). A map of central Europe with Cold War boundaries, topographic information and "natural" color information is a 66 M PostScript file; on the other hand, a world map using the Robinson (or any other) projection is a 1.3 M PostScript file. The difference is that one is full color that "looks like" the physical area from orbit, the other is simple lines and blue water. Not terribly Big Data. The Robinson runs in 0.81 seconds, the Cold War map takes 1 minute, 21.3 seconds (lots of stuff going on). Those are just maps; when you introduce GRASS into the mix you get layers and layers of information, both geographic and geologic -- soil properties, buildings and what-all for small-to-large areas -- which is compute-intense but not so disk-intense. You know, 6 G ain't a whole lot of disk, and geographic names for the entire world don't occupy a lot of space either (comparatively speaking) -- lots of data, not all that much space, all vector data, essentially text.
It always surprises me how trivial it is when I think about storing these data nowadays -- I remember feeding floppy disks to a running program to get this data projected. I also remember swapping CD-ROMs in and out of drives doing the same thing. Nowadays it's trivial to store a couple of hundred gigabytes' worth and not even think about it. The first mapping program I had was Doug McIlroy's map, which used World Data Bank I and II from the CIA on 9-track tapes (and the CIA produced the data by having kids trace paper maps to digitize the X-Y points). I've massaged that data a few times to get Cold War country boundaries (still using it). It ain't Google Maps but it's good enough for my purposes (and where do you think they got their data?).
So, anyway, I'm going to get hold of my friend and see if he wants to part with that Sun Blade (which, hopefully, has a graphics card in it that will fill a 20" LCD screen nicely). I seem to recall that his dates from 2006 or 2007, maybe later. I'll give 'er a shot and see what happens.
Huh. Didn't know that (but, then, don't know a heckuva lot about blades, kinda thought they're beyond ordinary mortals, you know).
You'd be surprised. Blade servers are just several minimalist computers optimized to occupy the smallest possible space per computer, often with additional integration between them (at least common power, often an internal network). The blade server we were qualifying at my previous employer was typical: eight little four-processor PCs in a 3U package. They all shared a pair of redundant power supplies, and each had an internal gigabit Ethernet interface on an integrated switch (for talking among the eight of them only) in addition to two external gigabit Ethernet interfaces.
From the software's perspective (Ubuntu, in our case) they looked like eight ordinary PCs. We didn't bother trying to run X11 on them, just ssh'd into them from our workstations.