
memilanuk 05-15-2012 12:45 AM

Can squid be used to cache OS updates/upgrades?
So... I have a virtual LAN running in Virtualbox (currently on Ubuntu 11.10) where I frequently install, update, tinker with, blow away and then reinstall various distros. In the interest of not having to re-download a lot of the same files over and over again, I'm looking at setting up squid on the 'gateway' VM server. Actually, it's already installed, using squid-deb-proxy which acts as a caching proxy for Ubuntu updates and such. That particular variant uses its own config, log, and cache files separate from the main squid program, and is pretty much zero config out of the box (very slick).

I have some questions, though, as I had normally thought of squid mainly as something for accelerating (or blocking) web page viewing, not speeding up file downloads. Please bear with me as I ask some potentially silly questions...

If I have other VMs 'behind' this gateway server with squid, and say box A has CentOS 6.2 on it and downloads various RPMs for an update, will those be cached by squid? If box B also has CentOS 6.2 on it and requests the same updates later, will it get them from the local cache or from the website? (I think I know the answer to this one, but figured I'd ask anyways)

Same scenario, but box A and B have different versions of the same OS... CentOS 5.8 and 6.2... will box B get its updates from the local cache, or from the upstream site?

Same scenario, but box A and B have the same version of very *similar* OSes... like CentOS 6.2 and Scientific Linux 6.2. What happens?

Similar scenario, but box A and B have similar versions of the same file/package from related but different distros - say Ubuntu & Mint, or Debian and Crunchbang. What happens?

If I'm understanding things right, the only time I'd actually *want* the client VMs to share the files from the cache would be in the case of where there are multiple installs of the same OS i.e. both the same 'brand' (CentOS) and the same version number (6.2). In all the others, the machines should download their own file/rpm, and any subsequent request for the exact same file from the same place would then use the cached version. Am I understanding this correctly?

Do I need to do anything special - besides setting up a pretty big cache, with long retention time, to make this happen?

Anything else I'm overlooking?



UndiFineD 05-15-2012 02:28 AM

Squid does cache any files you retrieve from the internet. Another method you may like for multiple machines is this:

devilboy09 05-15-2012 06:15 AM

I don't think so.

memilanuk 05-15-2012 09:09 AM


Already got squid-deb-proxy working, which is pretty slick/amazing for a 'zeroconf' system. Nothing to set up beyond installing the client software - though I believe that just uses avahi to automatically find the right proxy in the background, and a person *could* manually specify the same info (host IP and port number) if they wanted. I probably *will* set up a small mirror at some point in the future... 'just because I can' ;)


Could you be a little more vague? You don't think so... what?

TobiSGD 05-15-2012 09:23 AM

I normally use apt-cacher-ng for those tasks, easy and fast.

memilanuk 05-15-2012 09:27 AM

Again... while I appreciate the debian/ubuntu related suggestions... please bear in mind I'm looking for something to handle multiple *non Debian* distros as well. I've got the deb angle covered, thanks. Please stick to the topic at hand - squid.

memilanuk 05-15-2012 01:01 PM

Okay... looks like the RH-based distros (CentOS, SL, Fedora) *can* update through a proxy by simply adding a line or two in /etc/yum.conf... now it just looks like my squid config (stock, out of the box) needs some work to make everything work.

memilanuk 05-16-2012 11:33 PM

Well... turned out the stock setup for squid3, at least on Ubuntu 12.04 LTS, doesn't have any ACLs defined for the local LAN. I fixed that by adding the lines:


acl lan src

http_access allow lan

It also doesn't set up a disk-based cache, for some reason. So I had to add a line like this:


cache_dir ufs /var/spool/squid3 1000 16 256
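
Putting both changes together, a minimal squid.conf fragment might look like this (the subnet shown is a placeholder - 192.168.56.0/24 is VirtualBox's default host-only network, so substitute whatever your gateway VM's LAN actually uses):

```
# Define the local VM network and let it use the proxy.
# 192.168.56.0/24 is an assumption -- adjust to your actual LAN subnet.
acl lan src 192.168.56.0/24
http_access allow lan

# Enable an on-disk cache: ufs storage, 1000 MB maximum,
# with 16 first-level and 256 second-level cache subdirectories.
cache_dir ufs /var/spool/squid3 1000 16 256
```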
and added this line to /etc/yum.conf immediately after [main]:


Then the RH-based distros started happily using the squid caching proxy server. There's probably some more fine-tuning that could be done, judging by the config file for squid-deb-proxy, but it got things working for now.
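
For reference, the yum proxy setting takes this general form in /etc/yum.conf (the host and port here are placeholders, not from the original post - 3128 is just squid's default listening port):

```
[main]
# Point yum at the squid box; replace the address with your gateway VM's IP.
proxy=http://192.168.56.1:3128/
```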

workent 02-15-2013 04:08 PM

Optimal Squid repo
Hi guys,

[Been a while since last post]
[Similar query - please excuse the cross-post from another forum]

I've been using apt-cacher-ng quite successfully for a number of years now, as I have a number of Ubuntu & Debian (*.deb) systems on my network - bare metal & VMs.

So far, so good, but it doesn't handle the RedHat (*.rpm via yum) & other distros quite as well.

I've now provisioned a VM running Squid3 & it seems to be holding pretty well so far, based largely on the work found here & here.

What I'd really like to do is not filter/cache *all* my traffic, but dedicate the instance to only do so for packages & images (.deb, .rpm, .img, .iso, etc). In doing so I'd also like to sort them into some appropriate mounted NFS shares & store those files indefinitely, unless there are newer versions of the same files: i.e. ISO @ nas:/mnt/iso , DEB @ nas:/mnt/apt , RPM @ nas:/mnt/yum
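
As a sketch of the first half of that (restricting squid to caching package-type files only), something along these lines should work - the extension list, object size, and retention times are assumptions to adjust, and the sorting into NFS shares would need tooling outside squid itself:

```
# Match package and image files by URL path (case-insensitive).
acl packages urlpath_regex -i \.(deb|rpm|img|iso)$

# Cache only those; everything else just passes through uncached.
cache allow packages
cache deny all

# Allow large objects (ISOs) and keep matches for a long time
# (129600 minutes = 90 days; tune to taste).
maximum_object_size 8 GB
refresh_pattern -i \.(deb|rpm|img|iso)$ 129600 100% 129600
```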

What's nice re apt-cacher-ng is that it'll store (& expire) packages that have passed through it, & has some very handy tools to check if those files are still cool, and if not, allows me to purge unreferenced, damaged or bogus .deb's.

Could anyone with ninja squid-fu skills please offer some advice on how to achieve such an outcome?
