LinuxQuestions.org (/questions/)
-   Slackware (http://www.linuxquestions.org/questions/slackware-14/)
-   -   Is there a way to compile on one computer then install on another? (http://www.linuxquestions.org/questions/slackware-14/is-there-a-way-to-compile-on-one-computer-then-install-on-another-4175439878/)

SlackwareSlacker 12-03-2012 08:23 PM

Is there a way to compile on one computer then install on another?
 
Alright, Google didn't help me much on this subject, so here's the setup. I've got three early-2000s computers with a Celeron, a Pentium, and an Athlon processor, all running Slackware 14; obviously they don't compile very fast. I have a new computer running Slackware64 14 with plenty of horsepower. What I'd like to do is have the new computer compile the source for the older computers, so that I don't have to wait as long and I won't put unneeded stress on them.

If I were doing a kernel compile, I'd assume I could just configure the extracted source on computer A, FTP the source tree to computer B and compile it there, then send it back to computer A and install it. Or I may be completely wrong; I'm fairly new.

I've read about Gentoo having a program that lets you do this but I don't know if there's a Slackware equivalent. All the computers are networked together and are in the same room.

I'm open to just about any ideas on this because waiting an hour and a half for a kernel to compile only to be told that lzma is out of memory SUCKS. Any help is really appreciated.

kite 12-03-2012 09:08 PM

It seems that Alien Bob builds packages for different architectures inside QEMU virtual machines.

TommyC7 12-03-2012 09:25 PM

You can build packages on another machine and transfer the completed package over to the target computer. You can also use distcc, so each machine compiles part of the kernel, which should reduce the overall build time.
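A rough sketch of the distcc side of that, assuming a helper box on the same LAN (the hostname "fastbox", the subnet, and the -j count are made-up examples; for a mixed 32/64-bit setup the helper's compiler also has to emit code for the target arch):

Code:

# On the fast helper machine: start the distcc daemon and allow the LAN.
# 192.168.1.0/24 is an assumed subnet; use your own.
distccd --daemon --allow 192.168.1.0/24

# On the machine driving the build: list the helpers (plus localhost so
# some jobs still run locally).
export DISTCC_HOSTS="fastbox localhost"

# Build with distcc wrapping the compiler; -j is roughly the total number
# of CPU cores across all hosts.
make -j6 CC="distcc gcc"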

SlackwareSlacker 12-03-2012 10:33 PM

Alright, I've heard about distcc and I'm googling it now. I thought that packages still had to be compiled? Or am I wrong? Also, is there a guide somewhere on building packages?

Edit: Say I had source.tar.bz2 and extracted it, ran ./configure, then FTP'd the directory onto my dual-core computer, ran make inside the source directory, FTP'd it back to the weaker computer, and ran make install. Would that work, or would I mess something up?

TommyC7 12-04-2012 02:00 AM

I would highly recommend looking into some of the SlackBuild scripts on SlackBuilds.org. It's much easier to organize Slackware packages with pkgtools.
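For example, building from a SlackBuild on the fast box and installing the result on a slower one might look roughly like this, assuming both machines are the same architecture (the "foo" name, version, and hostname are placeholders, not a real package):

Code:

# On the build machine: unpack the SlackBuild from SlackBuilds.org, drop
# the source tarball next to the script, and run it as root.
tar xzf foo.tar.gz
cd foo
./foo.SlackBuild          # typically leaves /tmp/foo-1.0-i486-1_SBo.tgz

# Copy the finished package to the target machine and install it there
# with pkgtools.
scp /tmp/foo-1.0-i486-1_SBo.tgz root@oldbox:/tmp/
ssh root@oldbox "installpkg /tmp/foo-1.0-i486-1_SBo.tgz"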

jpollard 12-04-2012 08:57 AM

A slightly more general answer is "It depends".

This topic is "cross compiling".

The target architecture has to be supported: it is possible to cross-compile, but only if the compiler has the back end for the target built in. This applies, for example, when compiling on an Intel CPU but targeting an ARM CPU; the compiler (and any libraries it needs) for the target has to be present.

In the case where the target is an Intel-variant CPU (Intel from the 386 to x86-64, AMD...) AND the build computer is also one of those variants, most of the work is already done. gcc has the "-march=xxx" option, and most of the variant architectures are already supported. You can check what is available with "gcc --target-help"; it will list the target-specific options, including the accepted -march values.
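As a rough illustration on an x86_64 box targeting an older 32-bit machine (the flags are only an example; pick -march for the oldest CPU you need, and note that -m32 needs the 32-bit libraries/multilib installed in order to link):

Code:

# List the target-specific options this gcc understands, including the
# accepted -march= values.
gcc --target-help

# Build a 32-bit binary tuned for a generic i686-class CPU from a 64-bit
# host. Linking requires 32-bit libc/development files (multilib).
gcc -m32 -march=i686 -O2 -o hello hello.c

# Confirm what came out.
file hello        # should report a 32-bit ELF executable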

onebuck 12-04-2012 09:37 AM

Member Response
 
Hi,

I do builds all the time on hot hardware for not-so-hot hardware. It's just a matter of organization: I create a '/home/build' directory for each piece of 'not so hot hardware'. Look at this old thread on how to use the 'build' technique; it's dated, but the technique can still be used today. This will help keep things clean when doing builds for different architectures or equipment. You can move things via LAN or sneaker net. :)
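Something along these lines, for instance (the hostnames and package name are just illustrative):

Code:

# One build tree per target machine keeps different builds from mixing.
mkdir -p /home/build/celeronbox /home/build/pentiumbox /home/build/athlonbox

# Build inside the directory for the machine the package is meant for,
# then push the finished package over the LAN (or carry it on removable media).
cd /home/build/celeronbox
# ... run the build here ...
scp foo-1.0-i486-1.tgz root@celeronbox:/tmp/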

HTH!

BTW, Welcome to LQ & Slackware!

T3slider 12-04-2012 11:22 AM

My advice for cross-compiling for other architectures would be different, but if you're just trying to compile software for x86 on an x86_64 host, the easiest way is to just install 32-bit Slackware in a VM, compile the software you need, create a Slackware package, and copy it over to the x86 hosts for installation. You cannot easily compile 64-bit software on a 32-bit box, but the opposite is easy. 64-bit Slackware can run 32-bit VMs just fine without the need for multilib packages, and it will run them at native (or near-native) speeds. In my opinion, using distcc for this is overkill.
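Inside the 32-bit VM, the build-and-package step might look roughly like this (the "foo" name, version, and hostname are placeholders; makepkg and installpkg are part of Slackware's pkgtools):

Code:

# In the 32-bit Slackware VM: build and stage the software into a
# throwaway directory.
./configure --prefix=/usr
make
make install DESTDIR=/tmp/foo-staging

# Turn the staged tree into a Slackware package.
cd /tmp/foo-staging
makepkg -l y -c n /tmp/foo-1.0-i486-1.tgz

# Copy the package to the 32-bit machine and install it there.
scp /tmp/foo-1.0-i486-1.tgz root@oldbox:/tmp/
ssh root@oldbox "installpkg /tmp/foo-1.0-i486-1.tgz"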

SlackwareSlacker 12-04-2012 11:43 AM

Alright, I'll look into cross-compiling after I'm out of school. I thought the ./configure command just set the parameters in the makefile, including which arch to compile for, and then the make command compiled according to those parameters. I'd have to look into building packages, though.

Martinus2u 12-04-2012 02:39 PM

Quote:

Originally Posted by SlackwareSlacker (Post 4842671)
I'd have to look into building packages though.

In general this problem is hard. As others have written, the compiler and even the linker can probably be configured to build a binary for the target architecture. But no such guarantee is possible for makefiles and scripts like configure and SlackBuilds. So your options are either outsourcing just the compilation (i.e. cross-compiling with distcc) or creating a dedicated 32-bit build environment (e.g. via a VM).

I personally use distcc a lot, but always for the same architecture, so I cannot help with the cross-compiling part. :p

jpollard 12-04-2012 05:46 PM

You can use a chroot environment instead of a VM as long as you stay within the same architecture family. It takes about the same amount of disk space, but some may consider it better since everything is fully shared between the two environments, and you don't have to deal with any slowdown from the VM (which isn't all that big anyway).

One advantage is that it is easier to fix a goof; no need to reboot. This doesn't work if you are doing kernel patches, though.
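A very rough sketch of that kind of setup on Slackware64 (the /slack32 path is an assumption, and populating it with a 32-bit Slackware userland, for example via installpkg --root, is left out):

Code:

# Assume a 32-bit Slackware userland is already installed under /slack32.
mount --bind /proc /slack32/proc
mount --bind /sys  /slack32/sys
mount --bind /dev  /slack32/dev

# Enter the 32-bit environment; anything built in here is 32-bit.
chroot /slack32 /bin/bash

# ... run ./configure, make, makepkg as usual, then copy the package out ...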

lazardo 12-05-2012 10:59 PM

Quote:

Originally Posted by onebuck (Post 4842602)
I do builds all the time on hot hardware for not so hot hardware...

I also take advantage of hot hardware, but for the simpler case of like architectures (in this case, all AMD). I create a kernel .config that includes the components common to all three machines and excludes functions/drivers for hardware that isn't present. Built-in or modular doesn't matter; no initrds.

After building the kernel and modules on the fastest machine, I simply rsync /lib/modules/1.2-xyzzy and /boot/{System.map,config,vmlinuz}-1.2-xyzzy to the other machines, fix up lilo, and reboot.
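Concretely, that might look something like this (1.2-xyzzy is the placeholder version from the post, "otherbox" is a made-up hostname, and lilo.conf is assumed to already point at the new image path):

Code:

# On the build machine, after the kernel and modules are built and
# installed locally:
rsync -a /lib/modules/1.2-xyzzy/ otherbox:/lib/modules/1.2-xyzzy/
rsync -a /boot/System.map-1.2-xyzzy /boot/config-1.2-xyzzy \
         /boot/vmlinuz-1.2-xyzzy otherbox:/boot/

# On each target machine (as root): rewrite the boot sector and reboot.
ssh root@otherbox "lilo && reboot"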

Cheers,

SlackwareSlacker 12-10-2012 08:57 PM

Alright, sorry for the slow reply. I just switched ISPs, so I had to get things back up and running before I could start bringing them down again, haha.

Anyway, I'm trying VirtualBox right now; disk space isn't an issue on my desktop. I'll have to look into distcc and rsync because they seem like better solutions. If I understood correctly, distcc takes a compile job and sends parts of the load to whatever computers are hooked up to it, right?

