Slackware: This forum is for the discussion of Slackware Linux.
The big lesson here is don't run -current if you expect things to work properly.
It's not like Debian's -testing branch. The -current branch should be considered pre-alpha. It is unstable, which means that things will break. And Murphy's law dictates that they'll break whenever it's least convenient for you.
Breakage wasn't the issue here. Hardware incapable of fixing it was.
And I used Debian & Debian testing for many years, from '99-'13. I have found that Slackware-current is FAR more stable than Debian testing.
>> I am talking about compiling LibreOffice, not about your computer.
I understood your meaning. It surprises me that compiling LO could gobble up that much RAM.
I started the compile over again, this time on my current_64 laptop (also 4 GB RAM). It's been going for about 10 hours now, and memory use swings between 26% and 50%, mostly staying in the high-30% range. Both processors are pegged, in both frequency scaling and CPU time. Swap is 9% full. I wasn't expecting failure the last time I started this, so I didn't pay attention to resource usage on my desktop computer. This compile is the laptop's sole userland task at the moment, as it was on the desktop in the previous attempt. If I have to give up the laptop or the desktop for 48 hours for a compile, I'd rather have my desktop keyboard to type on.
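For anyone who wants to capture those numbers instead of eyeballing them, a simple logging loop in a second terminal works (the filename and interval here are arbitrary choices):

```shell
# Log RAM/swap usage every 30 seconds while the build runs.
# Run this in another terminal; stop it with Ctrl+C when the build ends.
while true; do
    date >> lo-build-mem.log
    free -m >> lo-build-mem.log   # memory and swap figures, in MiB
    sleep 30
done
```

Afterwards you can grep the log to see exactly when the peak occurred.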
We'll see what happens. If LO really can consume that much RAM during the compile, that's a real eye-opener for me; I wouldn't have expected it. Possibly it's also a side effect of relying on other people's work for too long, letting them compile packages for me instead of doing it myself. Perhaps some things about resource use have changed over the years and I am acquainting myself with them anew.
If true, it would explain LO's odd failure to compile. Not something I would have suspected right away.
I did not keep an eye on the build (because it runs here without issues), but the massive RAM usage is likely to occur at the final linking stage, not during the actual compilation. Chromium has a similar issue, requiring stellar amounts of RAM in the final link.
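One knob worth trying if the link stage is the culprit: reducing build parallelism lowers peak RAM at the cost of wall-clock time. Many Slackware build scripts honor a NUMJOBS variable; whether this particular libreoffice.SlackBuild does is an assumption, so check the script first:

```shell
# Run the build with a single job to cap peak memory use.
# NUMJOBS is a common convention in Slackware build scripts
# (verify the script actually reads it before relying on this).
NUMJOBS=" -j1 " ./libreoffice.SlackBuild
```

A single-job build will take much longer, but several parallel link steps each wanting gigabytes is often what pushes a 4 GB machine over the edge.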
Adding more swap would also help in many cases with a RAM-hungry process... my VM where I compile LibreOffice only has 2 GB RAM configured, but it uses a 16 GB swap partition. That seems to be sufficient.
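For reference, swap can also be added on the fly as a file, without repartitioning. A minimal sketch, run as root; the 16 GB size is just an example:

```shell
# Create and enable a 16 GB swap file (size is illustrative).
dd if=/dev/zero of=/swapfile bs=1M count=16384
chmod 600 /swapfile       # swap files must not be world-readable
mkswap /swapfile
swapon /swapfile
swapon --show             # confirm the new swap is active
# Remove it later with: swapoff /swapfile && rm /swapfile
```

Add a line to /etc/fstab if you want it to persist across reboots.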
Also, do not forget disk usage! LibreOffice eats a lot of temporary disk space too.
All this very good info here in this thread. I, like trollog, am not going to let it go to waste. I have a desktop just sitting around doing nothing and I'm going to set it up to try this again. Even if it is just for fun, it is a great learning experience.
Even though it's an HP, it has a 2.4GHz AMD Phenom X4 and 8GB RAM. The sad part is, most of the time my Core 2 Duo @ 2GHz & 4GB RAM can run circles around it. And my Core 2 Extreme @ 2.8GHz & 4GB RAM just leaves it in the dust, but that is my daily driver and I don't do much on it besides the everyday stuff.
x64_current was updated yesterday, before starting the compile. Another puzzle piece.
Intermittent, bursty use of astronomical amounts of RAM during certain phases of the compile is something I will keep in mind. Small programs certainly do some of this.
Little compile little trouble. Big compile, big trouble. (Same thing they say
about raising children..)
On another note, from a design standpoint-
I don't think this bodes well for the upstream developers. What's the point of a piece of software so large, with such large requirements, that ordinary people can't compile it in a reasonable amount of time on ordinary hardware?
LO may be technically exquisite, but at the same time quite stupid from a design standpoint.
I'm not exactly trying to do this using a raspberry pi here. Maybe they need to get their priorities straight.
I thought about the runlevel issue for a moment before I started. I kept it in 4. I run fluxbox, conky, and 1 xterm for the purposes of executing
the compile. No other user processes.
I'm going to add some swap space (a lot of swap space..), and try again from runlevel 4. I'm not running a fat desktop here. I'll see what happens. If needed I will
shift gears on my thinking.
I'll have to throw swap at it, since adding memory is not an option.
Since AB already put up "fixed" packages, for me it's no longer about obtaining a working LO package.
At this point it's pure orneriness on my part. I can be a stubborn ornery SOB at times, and this is one of those times.
I'm Ahab chasing the white whale now.
I'd like to figure this out. This thread may become a useful reference at exactly the time when AB has limited availability to deal with these issues because of other things he's got going on in his life.
As you can see in the thread, I am not the only one having LO compile problems; at the moment I am just the most vocal one.
And with -current this is always going to be a recurring issue anyway.
Some of the quieter people out there having the same issue may be dodging this bullet by just opting for the slackbuild version as an alternative.
But when the slackbuild breaks, where are we then with this?
Quote:
Originally Posted by trollog
I don't think this bodes well for the upstream developers. What's the point of a piece of software so large, with such large requirements, that ordinary people can't compile it in a reasonable amount of time on ordinary hardware?
I wonder what Microsoft would say if you asked them that very same question?
Quote:
Originally Posted by trollog
LO may be technically exquisite, but at the same time quite stupid from a design standpoint.
They do not expect you to compile it.
There are pre-compiled binaries on their website.
The rpm packages they provide are easily re-packaged into a Slackware-friendly format, and will work 100% with no other tinkering required. You'll even get the little icons in your KDE/Gnome/XFCE menu.
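For anyone who wants to go that route, the repackaging described above can be done with Slackware's stock rpm2txz tool. A sketch, assuming the usual layout of the LibreOffice download bundle (the exact tarball and directory names will vary by version, so adjust the globs):

```shell
# Unpack the RPM bundle from libreoffice.org and convert each RPM
# into a Slackware-friendly .txz package, then install the results.
tar xf LibreOffice_*_Linux_x86-64_rpm.tar.gz
cd LibreOffice_*/RPMS
for r in *.rpm; do
    rpm2txz "$r"          # emits a matching .txz next to each .rpm
done
installpkg *.txz          # as root
```

rpm2tgz works the same way if you prefer gzip-compressed packages.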
The rest of your OS was pre-compiled by someone else. Why does this matter so much? The "problem" has been fixed. Move on.
I think that half of the build time is spent downloading the sources. My download speed is 60 MB/s, but as soon as boost started to download, it crawled along at a few kB/s. Not due to my connection either.
# Needs:
# Build-time: apache-ant,jdk,perl-archive-zip
# removepkg glew first.
# (and for Slackware 13.37, you need to install mozilla-nss and
# also upgrade to the seamonkey and seamonkey-solibs in /patches !)
Skaendo:"All this very good info here in this thread. I, like trollog, am not going to let it go to waste. I have a desktop just sitting around doing nothing and I'm going to set it up to try this again. Even if it is just for fun, it is a great learning experience."
Bingo.
****
rkelsen- you are welcome to STFU and cease posting on this thread, as is anyone else who sees no value in what I'm doing. Move along.
I know the score here- yes, you don't have to compile LO from source. You can settle for pre-compiled, either from upstream, or third parties.
Maybe that is even what upstream intends users to do.
But that is not MY point in this thread. My aim in this thread is to understand the build process from AB's source and to understand why, at the moment, it isn't working on my machines.
If you don't have build advice that leads to the desired aim, then you have nothing to add to this thread.
AB has thrown some productive advice out there that may serve to help. You have thrown nothing useful out there.
FOSS will destroy itself from within because of people like you.
Just move along.
Side Note- Regardless of what upstream's intentions are or are not, if AB didn't intend for people to build LO if they so desired, why did he bother to provide the build scripts on his site as he has done?
Quote:
Originally Posted by trollog
rkelsen- you are welcome to STFU and cease posting on this thread, as is anyone else who sees no value in what I'm doing. Move along.
Whoa! Let's calm down a bit and try to keep things a bit friendlier. I don't think he was intending for his post to be an insult to you or Skaendo. Rather, I think he is trying to explain why the LO developers may not care to try to make the linking process less memory intensive. For probably 99%+ of LO's users, they are using pre-compiled binaries, whether compiled by the distro maintainers or LO themselves.
The majority of those people who package it for others have beefy machines (or even build farms) that have plenty of RAM to deal with a memory-intensive linking process. While the source is obviously available, I don't think the LO developers expect many end-users will want to take the time to compile it themselves (I sure don't, but I have slower hardware). Because of that, they would probably rather devote more time towards features than the linking process.
I don't think there is anything wrong with your efforts, and it is nice to know that people are willing to step up and work on these time-consuming builds if Eric ends up not being able to devote enough time to keep them properly updated. But let's just keep things friendly.