LinuxQuestions.org
LinuxQuestions.org > Forums > Linux Forums > Linux - Distributions > Slackware
Slackware: This Forum is for the discussion of Slackware Linux.
Old 06-12-2013, 11:18 PM   #16
perbh
Member
 
Registered: May 2008
Location: Republic of Texas
Posts: 393

Rep: Reputation: 81

Wow!!
Glad it finally worked out for you.
I've got a Sun blade with 32 cores/48 gigs which is (alas) running CentOS at the moment (thanks to my employer - not!).
It's just sitting there doing absolutely nothing - so here I come - Slackware it is!!
 
Old 06-12-2013, 11:58 PM   #17
D1ver
Member
 
Registered: Jan 2010
Distribution: Slackware 13.37
Posts: 598
Blog Entries: 3

Rep: Reputation: 194
Linux really is amazing... All you had to do to access 64 cores and half a TB of RAM is change a single number in the kernel config...

Is there a downside to having a larger number as the default? Just out of curiosity.
 
Old 06-13-2013, 12:24 AM   #18
volkerdi
Slackware Maintainer
 
Registered: Dec 2002
Location: Minnesota
Distribution: Slackware! :-)
Posts: 2,504

Rep: Reputation: 8461
Quote:
Originally Posted by D1ver View Post
Linux really is amazing... All you had to do to access 64 cores and half a TB of RAM is change a single number in the kernel config...

Is there a downside to having a larger number as the default? Just out of curiosity.
Each possible CPU uses 8K of system RAM... but big deal these days, right? At least for x86_64, bumping it to 64 (or even 128 to future-proof it for a while) seems like a worthwhile trade.
 
10 members found this post helpful.
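For anyone following along, the single number being discussed is the CONFIG_NR_CPUS option in the kernel .config. A minimal sketch of the edit (the demo works on a scratch file so nothing real is touched; in practice you'd edit /usr/src/linux/.config, run make oldconfig or menuconfig, and rebuild):

```shell
# Illustrative only: bump the possible-CPU limit in a kernel config line.
# A stand-in sample file is used here instead of the real .config.
printf 'CONFIG_NR_CPUS=32\n' > /tmp/nr_cpus_demo

sed -i 's/^CONFIG_NR_CPUS=.*/CONFIG_NR_CPUS=64/' /tmp/nr_cpus_demo

grep '^CONFIG_NR_CPUS=' /tmp/nr_cpus_demo   # now reads CONFIG_NR_CPUS=64
```

After the real edit, the supported route is `make oldconfig` followed by a rebuild of the kernel and modules.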
Old 06-13-2013, 12:30 AM   #20
rworkman
Slackware Contributor
 
Registered: Oct 2004
Location: Tuscaloosa, Alabama (USA)
Distribution: Slackware
Posts: 2,559

Rep: Reputation: 1351
I echo volkerdi's sentiments (re "I want your machine").
Wow. :-)
 
2 members found this post helpful.
Old 06-13-2013, 04:30 AM   #21
GazL
LQ Veteran
 
Registered: May 2008
Posts: 6,897

Rep: Reputation: 5019
Quote:
Originally Posted by rworkman View Post
I echo volkerdi's sentiments (re "I want your machine").
Wow. :-)
Me too. I've worked on mainframes with less oomph than that.


Please tell me you remembered the -j 32 when you rebuilt that kernel!
 
1 members found this post helpful.
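For reference, the -j flag sets the number of parallel make jobs. A common sketch is to match it to the core count with nproc (coreutils); the kernel-tree invocation is shown commented out since it only makes sense from inside a configured source tree:

```shell
# Match make's job count to the number of online cores.
JOBS=$(nproc)
echo "would build with: make -j${JOBS} bzImage modules"

# The real invocation, run from the kernel source tree, would be:
# make -j"$JOBS" bzImage modules
```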
Old 06-13-2013, 09:10 PM   #22
Domine
Member
 
Registered: Nov 2005
Posts: 31

Original Poster
Rep: Reputation: 17
Well, yes, I remembered the -j switch, but used only 10. It was still fast enough, less than 5 minutes I think. But I built bzImage and modules only.

I'm working on genome analysis and next-generation sequencing, so the power is needed. It saves days of work and frustration.

That machine also came with CentOS, but I don't like Linux distros designed for clicking people. I've known Slackware for years, so I just changed to it.

The funny thing was that the company which sold us the server said they provide it only with CentOS, and that Slackware might not be good. I figured out why: someone had to recompile the kernel for 64 cores. I don't know how they got theirs for CentOS, though. Maybe something like:

yum install kernel_which_supports_64_cores

Anyway, the machine is now up and running. All cores recognized.

Cheers
D.
 
1 members found this post helpful.
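Two quick ways to confirm that a rebuilt kernel really sees every core (both tools are standard on a Slackware install):

```shell
# Count online CPUs two ways; on a healthy system the numbers agree.
nproc                                # coreutils: CPUs available to this process
grep -c '^processor' /proc/cpuinfo   # one 'processor' stanza per online CPU
```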
Old 06-14-2013, 12:45 AM   #23
Netnovice
Member
 
Registered: Feb 2013
Posts: 94

Rep: Reputation: Disabled
I do not want your machine. I run Slackware 14 on a single-core, 1.66GHz Atom with 1GB of RAM (plus a dual-core Celeron 877 netbook). It's OK but hardly fast; in benchmarks it scores just ahead of a 1.2GHz PIII.

After that, any computer seems fast!
:-D
 
Old 06-14-2013, 01:19 AM   #24
solarfields
Senior Member
 
Registered: Feb 2006
Location: slackalaxy.com
Distribution: Slackware, CRUX
Posts: 1,449

Rep: Reputation: 997
It had to be done...
Attached thumbnail: Screenshot - 06142013 - 09:23:37 AM.png
 
3 members found this post helpful.
Old 06-14-2013, 09:09 AM   #25
tronayne
Senior Member
 
Registered: Oct 2003
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,541

Rep: Reputation: 1065
Quote:
Originally Posted by perbh View Post
Wow!!
Glad it finally worked out for you.
I've got a Sun blade with 32 cores/48 gigs which is (alas) running CentOS at the moment (thanks to my employer - not!).
It's just sitting there doing absolutely nothing - so here I come - Slackware it is!!
So I've got a dumb question -- say I've got a blade server, will Slackware install and go without a whole lot of fiddling? Let's say a Sun Blade similar to yours (or whatever blade box).

I certainly don't want to hijack this thread but it is about high-performance computing and I don't know enough about blade servers, so I'm just askin'.

Reason I'm asking is that the county I live in is 1,791 sq miles with a mix of national forests, large and small farms, towns, and a real mix of geography. And GRASS is the berries for dealing with just that.

From the GRASS web page:
Quote:
GRASS GIS, commonly referred to as GRASS (Geographic Resources Analysis Support System), is a free Geographic Information System (GIS) software used for geospatial data management and analysis, image processing, graphics/maps production, spatial modeling, and visualization. GRASS GIS is currently used in academic and commercial settings around the world, as well as by many governmental agencies and environmental consulting companies. GRASS GIS is an official project of the Open Source Geospatial Foundation (OSGeo).
The county needs to do all of what GRASS does (as do most counties throughout the US) and, well, it's a resource hog. Runs fine in 32- and 64-bit Slackware 14.0, but "fine" is relative; it takes a while to do large-scale analysis and 3-D mapping. It do make pretty pictures though.

I'm using it for my own interests (along with GMT for large-scale maps) and am working on the county to abandon an old, clunky, unstable, difficult system (that nobody knows how to use) and, you know, get dragged kicking and screaming into the last quartile of the 20th century. It would be really nice if a blade box would actually "load-'n'-go" if that's possible (anything that requires a lot of fooling around is going to be a flat NO -- a dedicated desktop with lots of horsepower would be acceptable, but, hey, if I can lay hands on a used blade server (I can, a Sun, cheap), that would be interesting).

Appreciate any thoughts?
 
Old 06-14-2013, 10:16 AM   #26
jhw
Member
 
Registered: Apr 2010
Posts: 83

Rep: Reputation: 32
I'm curious: What are you doing with this beast?
 
Old 06-14-2013, 12:42 PM   #27
perbh
Member
 
Registered: May 2008
Location: Republic of Texas
Posts: 393

Rep: Reputation: 81
@tronayne:
Most certainly plug-n-play, be it CentOS or Slackware.
The only 'install difference' between a 'blade' and a workstation is the graphics capability - the blades are notoriously bad in this respect. I have come across blades that will not support more than 800x600! Until you've got it set up properly with VNC etc., you're CLI-only.
Other than that - I'm really not that impressed with some of these high-spec blades ... checking the CPU activity, they seem to spend a fair proportion of their time just shuffling tasks between CPUs. We were testing a 48-core beast at one time, pushing it to the limit of what our software could throw at it - and it used only 15 of the 48 cores! But then, in our case, I/O is the primary bottleneck - and to avoid using slow (well, everything is relative) disks, we shuffle data between several machines over a 10-gig network - only using disks at the final stage when we run out of other options - gotta save it for posterity!!

Last edited by perbh; 06-14-2013 at 12:47 PM. Reason: mis-spelling
 
2 members found this post helpful.
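When a workload tops out well below the core count like that, one common move is to pin it to a subset of cores and watch the per-core load. A sketch using taskset (util-linux); the echo is a harmless stand-in for a real workload, and the mpstat line is commented out since it needs the sysstat package:

```shell
# Pin a command to core 0 (a real job would go where echo is).
taskset -c 0 echo "pinned to core 0"

# Per-core utilization at 1-second intervals (sysstat package):
# mpstat -P ALL 1
```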
Old 06-14-2013, 01:47 PM   #28
tronayne
Senior Member
 
Registered: Oct 2003
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,541

Rep: Reputation: 1065
Quote:
Originally Posted by perbh View Post
@tronayne:
Most certainly plug-n-play, be it CentOS or Slackware.
The only 'install difference' between a 'blade' and a workstation is the graphics capability - the blades are notoriously bad in this respect. I have come across blades that will not support more than 800x600! Until you've got it set up properly with VNC etc., you're CLI-only.
Huh. Didn't know that (but, then, don't know a heckuva lot about blades, kinda thought they're beyond ordinary mortals, you know). I actually don't care about graphics all that much -- when you're talking global maps (even with terrain), you're not too worried about high definition, and the display I have, an Acer 20" that'll do 1600x900, is good enough. I only spec stock Intel graphics controllers 'cause anything else is a waste of money.

I'll keep the advice in mind, though, about the 800x600 thing and vnc and all, thanks.
Quote:
Originally Posted by perbh View Post
Other than that - I'm really not that impressed with some of these high-spec blades ... checking the CPU activity, they seem to spend a fair proportion of their time just shuffling tasks between CPUs. We were testing a 48-core beast at one time, pushing it to the limit of what our software could throw at it - and it used only 15 of the 48 cores! But then, in our case, I/O is the primary bottleneck - and to avoid using slow (well, everything is relative) disks, we shuffle data between several machines over a 10-gig network - only using disks at the final stage when we run out of other options - gotta save it for posterity!!
I'm not thinking monster-box, I'm thinking a little-bit-faster box. GIS is heavily disk-driven: lots of data in great big files. I have stuff spread over multiple drives in distinct categories to make things go faster. Place labels and the like (that get put on maps at defined latitude and longitude: roads, bridges, railroads, etc.) are large files, lots of disk I/O, and I don't expect (or get) blazing speed. Think about the data contained in a 10-degree by 10-degree patch of the earth and you get some idea.

Anyway, thanks for the input -- I'll look into that Sun Blade a guy wants to get rid of.
 
Old 06-14-2013, 09:26 PM   #29
perbh
Member
 
Registered: May 2008
Location: Republic of Texas
Posts: 393

Rep: Reputation: 81
@tronayne:
OK - your needs are obviously somewhat different from mine ...
There is nothing to stop you from putting in a better graphics adapter - I've done that to many blades - you just gotta be extremely careful that you get the 'right' type of bus connector. My experience is mostly with 'decommissioned' blades (IBM, HP and a coupla Suns). Most blades have a proprietary riser card which will allow you up to 4 extender cards. For the IBMs you are mostly reduced to PCIe x1 (ie 8 bits); the HPs use PCI-X; and the Suns take almost any PCIe (8/16/32 bits).
I once bought a $400 graphics card for an IBM blade, only to find it wouldn't work because it was 16-bit PCIe. Impossible to get an appropriate riser card, so I was sitting on it for however long and finally put it into one of the Sun blades.

Other than that - they are almost identical to any workstation, but are oh so easily racked :-)

One more thing - blades usually have 15k rpm SAS disks (300 gigs) - again, they differ. Older blades usually had 2 disks, newer ones can have up to 6. These disks are _not_ exchangeable with the more 'normal' SATA drives. It means, though, that if your needs are big disk capacity, I would rather use the blades for heavy processing and in addition have an extra fileserver with some 2-3 TB disks ... ymmv

Last edited by perbh; 06-14-2013 at 09:33 PM.
 
Old 06-15-2013, 08:25 AM   #30
tronayne
Senior Member
 
Registered: Oct 2003
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,541

Rep: Reputation: 1065
Learn something every day, thanks. Somewhere in the back of my noggin lives a little gnome that mumbles at me now and again; one such mumble was about disk drives in the rack that I sort of forgot about because, well, I'm just not looking at Big Data. 95% of the data I use is text files of vectors (the exception being topographic files that total roughly 4.4G for the entire world, the "patch" file mentioned earlier). Those topo files aren't used too often because they don't manipulate easily -- 10 x 10 degree image files cover a helluva lot of area and aren't too useful for land-use studies and the like. Vector data is text -- latitude, longitude, elevation -- feed it to an equation that projects a roundish world onto flat paper and go speed-fast. This is actually one of those things where 64-bit shines over 32-bit (not easy to make a direct comparison, but doing map projections is heavy arithmetic and 64-bit just goes a lot faster).

The output is PostScript (or HP, or whatever) and you send that to a display, printer or plotter (and wait a while). A map of central Europe with Cold War boundaries, topographic information and "natural" color information is a 66 M PostScript file; on the other hand, a world map using the Robinson (or any other) projection is a 1.3 M PostScript file. The difference is that one is full-color that "looks like" the physical area from orbit, the other is simple lines and blue water. Not terribly Big Data. The Robinson runs 0.81 seconds, the Cold War map takes 1 minute, 21.3 seconds (lots of stuff going on). Now those are just maps; when you introduce GRASS into the mix you get layers and layers of information, both geographic and geologic, soil properties, buildings and what-all for small-to-large areas, which is compute-intense but not so disk-intense. You know, 6 G ain't a whole lot of disk, and geographic names for the entire world don't occupy a lot of space either (comparatively speaking) -- lots of entries, not all that much space, all vector data, essentially text.

It always surprises me how trivial it is when I think about storing these data nowadays -- I remember feeding floppy disks to a running program to get this data projected. I also remember swapping CD-ROMs in and out of drives doing the same thing. Nowadays it's just trivial to store a couple of hundred gigabytes' worth and not even think about it. The first mapping program I had was Doug McIlroy's map, which used World Data Bank I and II from the CIA on 9-track tapes (and the CIA did the data with kids tracing paper maps to digitize the X-Y points). I've massaged that data a few times to get Cold War country boundaries (still using it). It ain't Google Maps but it's good enough for my purposes (and where do you think they got their data?).

So, anyway, I'm going to get hold of my friend and see if he wants to part with that Sun Blade (and, hopefully, it has a graphics card in it that will fill a 20" LCD screen nicely). I seem to recall that his dates from 2006 or 2007, maybe later. I'll give 'er a shot and see what happens.

Thanks again for the advice and counsel.
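(Sketch, not from the thread: the "equation that projects a roundish world onto flat paper" can be as small as a one-liner. Here is the simplest member of the family, an equirectangular projection, in awk; the earth radius R = 6371 km, the standard parallel of 50 degrees, and the sample coordinates are all illustrative values.)

```shell
# x = R * lon * cos(lat0), y = R * lat, with angles in radians.
printf '10 50\n' | awk '{
    pi = 3.14159265358979; R = 6371       # km
    lat0 = 50 * pi / 180                  # standard parallel
    x = R * ($1 * pi / 180) * cos(lat0)
    y = R * ($2 * pi / 180)
    printf "%.1f %.1f\n", x, y            # roughly 714.7 5559.7 (km on the map plane)
}'
```

Real projections like Robinson add per-latitude scaling tables on top of exactly this kind of arithmetic, which is why a world map can render in under a second.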
 
Old 06-15-2013, 12:13 PM   #31
ttk
Senior Member
 
Registered: May 2012
Location: Sebastopol, CA
Distribution: Slackware64
Posts: 1,038
Blog Entries: 27

Rep: Reputation: 1484

Quote:
Originally Posted by tronayne View Post
Huh. Didn't know that (but, then, don't know a heckuva lot about blades, kinda thought they're beyond ordinary mortals, you know).
You'd be surprised. Blade servers are just several minimalist computers optimized to occupy the smallest possible space per computer, often with additional integration between them (at least common power, often an internal network). The blade server we were qualifying at my previous employer was typical: eight little four-processor PCs in a 3U package. They all shared a pair of redundant power supplies, and each had an internal gigabit ethernet interface on an integrated switch (for talking among the eight of them only) in addition to two external gigabit ethernet interfaces.

From the software's perspective (Ubuntu, in our case) they looked like eight ordinary PCs. We didn't bother trying to run X11 on them; we just ssh'd into them from our workstations.
 