LinuxQuestions.org
Linux - Enterprise This forum is for all items relating to using Linux in the Enterprise.

Old 08-31-2012, 08:16 AM   #1
lifeonatrip
LQ Newbie
 
Registered: Aug 2012
Distribution: Debian
Posts: 14

Rep: Reputation: Disabled
Open Source Mission-Critical Software on Enterprise


Hi all,

I would like some opinions from seasoned Linux/BSD/*nix administrators on why, at so many enterprise-grade companies, it is so hard to get software like Debian or PostgreSQL into production in a live 24/7 environment, because managers don't trust open source software.

Personally, I have always found it very hard to get an open source, community-driven solution (excluding "used everywhere" software like httpd or Tomcat) into an enterprise-grade infrastructure with mission-critical data, because of management, even when the solution is "perfect" for the scenario.
I can't get managers to trust me on software like PostgreSQL, for infrastructures small or huge (regardless of the availability of commercial support).
And even with big players like Oracle, you pay a huge amount of money and the support is not the best thing in the world (IMHO).

So my question is:

In your opinion, would you be willing to rely on community-driven software/systems like PostgreSQL or Debian for mission-critical infrastructure, and why?

Thanks!

(If this is the wrong section of the forum, please move or remove the thread.)

Last edited by lifeonatrip; 08-31-2012 at 08:18 AM. Reason: Spelling Errors
 
Old 08-31-2012, 09:34 AM   #2
dugan
Senior Member
 
Registered: Nov 2003
Location: Canada
Distribution: distro hopper
Posts: 4,766

Rep: Reputation: 1467
Shotgun is Postgres-backed. Mission-critical enough?
 
Old 08-31-2012, 10:18 AM   #3
lifeonatrip
LQ Newbie
 
Registered: Aug 2012
Distribution: Debian
Posts: 14

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by dugan
Shotgun is Postgres-backed. Mission-critical enough?
By mission-critical I mean a solution where, if it goes down for one hour, you lose millions of dollars or someone gets killed.

It's good software anyway :-)
 
Old 08-31-2012, 10:42 AM   #4
szboardstretcher
Senior Member
 
Registered: Aug 2006
Location: Detroit, MI
Distribution: GNU/Linux systemd
Posts: 3,272
Blog Entries: 1

Rep: Reputation: 1055
I would not use Debian on a production system. Why?

1) It's packaged by a community, and that community has shown that it doesn't have great controls in place to keep it secure (see the 2008 Debian OpenSSL weak-key vulnerability that compromised SSH keys).

2) There is only one company in America that supports it professionally. So if you have a mission-critical server that starts throwing strange OS errors you are unfamiliar with, you are out of luck unless you want to depend on forums. (http://wiki.debian.org/DebianEdu/Help/ProfessionalHelp)

Red Hat, CentOS and Scientific Linux can all be supported, for a price, by Red Hat or any Red Hat partner. I have never installed a production system using anything else... aside from an OpenBSD firewall.

As far as Postgres goes... having no experience with Postgres, I would use MySQL. That's just opinion, not based on metrics. If it's good enough for Facebook, it's good enough for me.
 
Old 08-31-2012, 11:07 AM   #5
tronayne
Senior Member
 
Registered: Oct 2003
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,077

Rep: Reputation: 787
Quote:
Originally Posted by lifeonatrip
In your opinion, would you be willing to rely on community-driven software/systems like PostgreSQL or Debian for mission-critical infrastructure, and why?
Well, yeah -- maybe not Debian (dedicated Slackware user) but, yeah. Got a few that have been running for, wow, a year and a half, no problems, no downtime, just sit there and mumble to themselves and serve applications (most of those MySQL but a couple of PostgreSQL, no burps, just work).

Thing is, once you've got a server working it's probably going to continue working until Something Bad Happens (like drive failure, power supply failure, who-knows failure). Doesn't really matter what distribution it is as long as the thing works reliably. You do smart things like RAID, back up critical data regularly, plug it in to a UPS, have a generator that automatically kicks in when the power line goes down, stuff like that.
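Those smart things only pay off if you verify them; a backup job that silently stopped running is worse than none. As a minimal, hypothetical sketch (the directory layout and the 24-hour threshold are illustrative, not anything from this thread), a cron-driven Python check for stale backups could look like:

```python
import time
from pathlib import Path

def newest_backup_age_hours(backup_dir):
    """Age in hours of the most recent file in backup_dir,
    or None if the directory contains no files at all."""
    files = [p for p in Path(backup_dir).iterdir() if p.is_file()]
    if not files:
        return None
    newest = max(p.stat().st_mtime for p in files)
    return (time.time() - newest) / 3600.0

def backup_is_fresh(backup_dir, max_age_hours=24):
    """True only if at least one backup exists and is recent enough."""
    age = newest_backup_age_hours(backup_dir)
    return age is not None and age <= max_age_hours
```

Run it from cron and page somebody whenever it returns False; an unverified backup is, in practice, no backup.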

You pick reliable hardware and software and do all the necessary configuring to keep the bad boys out of the system while allowing the good guys the access they need -- and keep an eye on things -- and there you go. Anymore, this stuff doesn't have to cost an arm, leg and three toes off the other foot to be usable and reliable; if it ain't broke don't screw with it until it is, you know?

PostgreSQL is fine -- well, actually more than fine. So's MySQL. So's Apache. Enterprise usually means Big Bucks and, if you don't need big iron and big software, and what you're using works for you, where's the problem? I wouldn't use a Sunfire to be only a Bugzilla server when any PC platform running Debian, Slackware or whatever would do that just fine, essentially for free. Matter of scale.

Now, I wouldn't try to use either PostgreSQL or MySQL if I was doing point of sale and inventory control for, say, Walmart (they use Informix, last I heard). Might be able to, but you're not going to be addressing multiple-terabyte data bases on PC hardware; that's what Suns are for. But the thing is that 99% of the software you're using is functionally equivalent to Solaris (or HP or IBM Linux) and can handle "small" tasks just fine. "Small" here is a relative term, hundreds of gigabytes isn't exactly trivial, but there's still a lot to be said for light-pipe backplanes connected to cabinets full of RAID disk drives that you can hot-swap. Just a matter of scale (and, yeah, you can upscale both MySQL and PostgreSQL and they run just fine on Sun platforms and do a great job -- it's just that PC platforms are... well, not quite in the same class).

Over the years there have been open source servers installed in computer rooms on the Sneaky Pete model: it's easier to be forgiven for doing something that works than to ask for permission. If it works and there aren't any problems and it didn't cost anything, the little light over the head of the pointy-haired boss might just blink on. The usual argument is "We don't know anything about [fill in whatever here]." That's usually from Microsoft types (who, you know, don't know much of anything about anything to begin with) but it also comes from big shops with lots of identical iron and a crew to take care of it. Can't blame 'em, but it's easier to sneak in a few dedicated Linux servers when it's a Solaris or HP or even IBM shop to begin with and you mention the words "free" and "evaluate" a lot. "Here, you want to see it?" works better than "I'd like to try..."

Is open source "enterprise class?" Hell, yes it is. It's a matter of demonstrating capability and reliability... and not biting off more than you can chew. Keep in mind the limits of your hardware platform (there really aren't significant limits in the operating system -- within reason -- but the hardware can bite you if you're not mindful of its limits). That IBM box that won Jeopardy? Guess what the OS is.

Hope this helps some.
 
4 members found this post helpful.
Old 08-31-2012, 11:33 AM   #6
frieza
Senior Member
 
Registered: Feb 2002
Location: harvard, il
Distribution: Ubuntu 11.4,DD-WRT micro plus ssh,lfs-6.6,Fedora 15,Fedora 16
Posts: 3,104

Rep: Reputation: 369
Talk about mission-critical? The NYSE runs Linux ^^
 
1 member found this post helpful.
Old 08-31-2012, 11:36 AM   #7
szboardstretcher
Senior Member
 
Registered: Aug 2006
Location: Detroit, MI
Distribution: GNU/Linux systemd
Posts: 3,272
Blog Entries: 1

Rep: Reputation: 1055
Quote:
Originally Posted by tronayne
Well, yeah. . . . .
Wonderful post. I think that really answers the OP's question.
 
1 member found this post helpful.
Old 08-31-2012, 08:09 PM   #8
lifeonatrip
LQ Newbie
 
Registered: Aug 2012
Distribution: Debian
Posts: 14

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by tronayne
Well, yeah -- maybe not Debian (dedicated Slackware user) but, yeah. [...]

First of all, thank you for the exhaustive answer; really appreciated.

Now, I want to talk about "first-class hardware" based on my experience.
I agree with you when you say "you're not going to be addressing multiple terabyte data bases on PC hardware", but my question is:

You pay millions of dollars for first-class hardware and yes, it's the best for infrastructures that need to process millions of requests per second against huge databases and serious volumes of data, no doubt about that. But is the software installed on those platforms really so valuable?
Starting from the premise that even the most skilled, best-paid and most reliable development team can make mistakes, is it really a good idea to put their software on that hardware? According to the vendor it's "optimized", I know, but is it really so reliable?

I had experience with an IBM mainframe (System z) and a DB2 database on a 30-million-user installation. After one minor upgrade, z/OS started slowly corrupting the database, and IBM support was not able to solve the problem; the solution was calling in the DB2 development team to fix it on the spot. After the "fix" (4 days of intermittent downtime all over the country) the company lost millions and millions and yes, the lawsuit for the refund is still in progress. (Just for the record, Red Hat's mainframe edition runs on the same hardware as well.)

This is just an extreme case, and not realistic for the majority of implementations, but it gives a good idea of what "first-class software" reliability really is; in the end it is not that different from ordinary open source software when used in these kinds of "ultra high-end" scenarios. (It's still buggy, IMHO.)

I am absolutely not "pro open source at any cost"; I am the first to choose a proprietary solution when I recognize it's the best way to solve a problem. I am just trying to say that the majority of medium/high-end solutions rely on software that is not so "precious" and "valuable". And in the end, even though companies like THIS and THIS are using open source (not necessarily free), it is still too hard to implement in high-end environments because of managers' mindsets.
 
Old 09-01-2012, 09:46 AM   #9
tronayne
Senior Member
 
Registered: Oct 2003
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,077

Rep: Reputation: 787
The value of software? Hm. I think I'd ask what constitutes value? Low cost? Performance? Features? Ease of use?

What I value most is reliability. I don't give a rat's ass about fancy-schmancy bells and whistles, "gooey" interfaces (in fact, I prefer a terminal with ncurses to a GUI any time), functions and features that I can't figure out what they're for or what the heck they do (lots of that floating around). I want to set it down, plug it in, turn it on and have the damned thing work right. I don't want my telephone ringing in the middle of the night.

Too often us worker bees are cursed by marketing. Vendors, who have something to sell, have been known to lobby managements into buying a pig in a poke when, well, there were better options and we're stuck with trying to build a Ferrari out of Yugo parts. One might think Microsoft Vista here.

Data base management systems are, at bottom, nothing but tools; hammers, saws, screwdrivers, wrenches. They don't actually do anything until you put together the grammar and syntax to make them do something; they're only a platform. There are good ones and there are not so good ones -- "good" here means, above all else, reliable. Fast is nice, adherence to ANSI/ISO standards is nice, efficient is nice, runs the same on anything is nice but reliable is king. Over the past 30+ years of working with data bases I've developed some pretty firm convictions about what's good and what's not so good.

I've also developed some strong opinions about operating systems: Unix (and some derivatives), good. Linux, good. Solaris, good. Others, the hell with 'em. I use Slackware exclusively simply because it is complete, solid, dependable and reliable (I have systems that run for months with no attention whatsoever); it's my tool chest and I can count on it. Fancy? Nope. Just elegant.

Unix got developed by a bunch of really smart folks at Bell Telephone Laboratories because they got fed up with reinventing the wheel every time a computer manufacturer released a new box and all their software had to be rewritten to run on it. Royal pain, that. Those folks built tools that worked together and gave programmers the means to quickly turn out reliable applications to, you know, do something.

Lots of stupid stuff went on in the world of Unix and made a mess of things, but this kid in Finland liked using it to do practical things. He couldn't afford a license so he sat down and built himself a look-alike, work-alike and gave it away. Thus Linux. Thank you, Linus.

I've learned that it's always a good idea to have a development and testing platform separate from the production platform. Nobody's perfect, everybody blows one every so often, and you really need to test upgrades away from your income producer. That's true of operating systems, it's true of DBMSs, it's true of any application. I always like to turn users loose with new software because they'll, real quick, find the holes (users tend, in my experience, to whack keys in frustration and try anything just to see what happens); great way to debug stuff. Oops happens (just look at the Java security hole last week). It's what you do to find and eliminate as many oopses as you can before they get into the wild. In the real world we still do a lot of batch processing and, holy toot, can bad things happen really, really fast; so you test first. I never, ever put any update on a production server until it's been wrung dry on a test box (and still...).
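That "wring it dry on a test box" discipline can be partly automated. As a rough sketch (the harness and the check names are illustrative, nothing from this thread), a pre-promotion gate can run a list of smoke checks and refuse to promote the upgrade if any of them fails or raises:

```python
def run_smoke_checks(checks):
    """Run each (name, callable) check; return (all_passed, results).
    An exception inside a check counts as a failure instead of
    aborting the whole run."""
    results = {}
    for name, check in checks:
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False
    return all(results.values()), results

# Illustrative checks; real ones would run a sample query against the
# staging database, hit a health endpoint, replay yesterday's batch, etc.
checks = [
    ("schema_migrates", lambda: True),
    ("sample_query_ok", lambda: 1 + 1 == 2),
]
ok, report = run_smoke_checks(checks)
```

Only when ok comes back True does the update move from the test box to the income producer.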

When it comes to DBMS's the major players are pretty good. In no particular order, MySQL, PostgreSQL, Oracle, Informix all are pretty OK. They've all had problems, those problems get recognized and dealt with fairly quickly and life goes on. They all work with PHP (great tool, PHP). They all run on pretty much every platform worth using. They have individual strengths that can make one more desirable than another (for big stuff and transaction processing I prefer Informix which is kind of the DBMS of choice in the financial business, like NASDAQ, and in point of sale and inventory, like Walmart, KMart and others). I find Oracle to be a behemoth and unworkable for me. Amazon.com, if I remember correctly, uses a stripped-down version of Oracle with PHP. They do rather well with that, methinks. I like MySQL, I like PostgreSQL (which looks a lot like Informix to me). Personal opinions based on having used 'em, flogged 'em and recognizing many of my own shortcomings in the process.

The value in software -- other than the fact that without it you've got a damned expensive door stop or boat anchor -- I believe really lies in what it can (and doesn't!) do. It really does not matter who developed it or how or what it costs if it can do the job you've got to do. Matlab ain't cheap but boy, oh, boy does it do good work. Apache doesn't cost anything (unless you choose to donate support money) and... well, the vast majority of sites on the Internet are running Apache. Open source is a wonderful thing and has produced a plethora of useful tools, applications, you name it. Most of it is quite useful, some of it isn't and there are thousands, perhaps hundreds of thousands (or more!) of people around the world that help keep it viable, useful and reliable.

It's up to us to be smart in the ways we use this invaluable resource and take the necessary steps to prevent the problem you describe. That problem might never have happened had a test bed been used first (I can't imagine administrators not testing first; then again, perhaps they did and the problem just didn't show up). You know, one of the benefits of Linux is that you can inexpensively emulate a production box on a development box and beat it up trying to break things -- it might not run as fast, might not handle extraordinarily large data sets efficiently, but it is doable, and it just makes sense.

Hope this helps some.
 
1 member found this post helpful.
Old 09-01-2012, 04:14 PM   #10
jefro
Guru
 
Registered: Mar 2008
Posts: 11,722

Rep: Reputation: 1445
Large companies tend to buy systems, not just workstations. They like the fact that HP sold them SUSE or IBM offered AIX or Red Hat. They don't care to test out systems and hope they work; they want to buy an entire system or package that may include thousands of computers across the company. They want hard written warranties and technical support.

It would be foolish for any IT manager to scrimp on money for a mission-critical deployment and have the mission fail. Just pay the OS vendor if you need it to work.
 
Old 09-01-2012, 08:10 PM   #11
NyteOwl
Member
 
Registered: Aug 2008
Location: Nova Scotia, Canada
Distribution: Slackware, OpenBSD, others periodically
Posts: 512

Rep: Reputation: 139
It also gives them someone to point the finger at if it does fail.
 
Old 09-04-2012, 11:41 AM   #12
Habitual
Senior Member
 
Registered: Jan 2011
Distribution: Undecided
Posts: 3,472
Blog Entries: 6

Rep: Reputation: Disabled
"Managers".

nuff said.
 
Old 09-05-2012, 08:26 AM   #13
lifeonatrip
LQ Newbie
 
Registered: Aug 2012
Distribution: Debian
Posts: 14

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by NyteOwl
It also gives them someone to point the finger at if it does fail.
That's a very important point.
 
Old 09-05-2012, 08:32 AM   #14
lifeonatrip
LQ Newbie
 
Registered: Aug 2012
Distribution: Debian
Posts: 14

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by tronayne
The value of software? Hm. I think I'd ask what constitutes value? Low cost? Performance? Features? Ease of use? [...]

I agree completely with your point of view; for me the important property of software is reliability, and it usually sits very close to the word "design".
In this IT world, corrupted by 3D graphs and fancy GUIs, it is often hard to get the most reliable and best-fitted solution adopted, because the decision-makers are in love with a great cutting-edge web interface that produces automatic PDF reports.

Anyway, thank you for your reply; a very interesting point of view.
 
  



Tags
bsd, critical, debian, mission, open source

