
Local Repository Creation and Management

asked 2011-11-14 13:27:09 -0600

dln

updated 2014-09-30 21:54:02 -0600

mether

(I come from a CentOS background and likely show some bias, e.g. toward its more narrow and controlled environment.)

How do folk manage initial load and regular updates in a multi-machine Fedora environment?

In my past I would download a newly released repository to a local server (or cache, if you will) and regularly rsync the updates over time. Local machines are then built and maintained from such local repos.

The advantages include an important reduction in bandwidth time and cost, since software is only downloaded once, and a common software base for all local machines. However, with Fedora the range of available software is significantly richer and deeper, so downloading a copy of the entire repository (or repositories) involves unnecessary traffic and expense under something like the '80/20 rule'.

So, how do folk handle these situations with Fedora? Is there a 'management process' or do folk simply download their updates as many times as they have machines?

One idea: I recall that rpm/yum downloads packages from the authoritative repositories to a cache directory. Only the software we actually use would be downloaded in this scenario. Is it possible to use such a cache as the source for maintaining a local repo?
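For what it's worth, a minimal sketch of that idea (the paths shown are yum's defaults on Fedora of this era and may differ on your systems): tell yum to keep its downloaded packages instead of deleting them after install.

```ini
; /etc/yum.conf -- keep downloaded RPMs in the cache after install
[main]
keepcache=1
cachedir=/var/cache/yum
```

Once updates have accumulated, one could run `createrepo` against the cached package directories and export the result (e.g. over HTTP or NFS) as the `baseurl` of a local repo for the other machines.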

NB: I'll be happy to read web references if you can give pointers, and if 'glue' software is not available I don't mind writing simple-ish scripts...


3 Answers


answered 2011-11-15 10:55:00 -0600

mether

Refer to http://yum.baseurl.org/wiki/YumMultipleMachineCaching for several general hints on the type of solutions you can look at.


answered 2011-11-15 10:38:18 -0600

Jitesh Shah

I've never had a multi-machine environment professionally, but I did have one at home on a slow broadband connection.

What I did was update one machine, NFS-mount its cache directory on another machine, and just run the update there. It works like a charm!
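A minimal sketch of that setup (the hostname and network range are placeholders; it assumes `keepcache=1` is set in `/etc/yum.conf` so the cache actually retains the RPMs):

```
# /etc/exports on the machine that was updated first -- read-only export of the yum cache
/var/cache/yum  192.168.1.0/24(ro,sync)

# /etc/fstab on each other machine -- mount that cache over the local one
master:/var/cache/yum  /var/cache/yum  nfs  ro  0 0
```

With the cache mounted, `yum update` on the second machine finds the packages locally and skips the download.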


answered 2011-11-14 14:09:20 -0600

lzap

updated 2011-11-14 14:10:14 -0600

The thing you are looking for is the Pulp Project:

Pulp is a Python application for managing software repositories and their associated content, such as packages, errata, and distributions. It can replicate software repositories from a variety of supported sources, such as http/https, file system, ISO, and RHN, to a local on-site repository. It provides mechanisms for systems to gain access to these repositories, providing centralized software installation.

You can do everything you describe here, and much more. With pulp-agent installed on the servers you can manage them (install packages/errata/updates, uninstall, etc.) remotely with pulp-admin. With a feature called "repo cloning" you can even create "environments" like production, test, or pre-production. Pulp is very flexible, fast and stable.
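Clients are then pointed at the locally replicated repository with an ordinary yum repo file; a hedged sketch, where the hostname and URL path are hypothetical placeholders (the actual published path depends on your Pulp configuration):

```ini
# /etc/yum.repos.d/local.repo on each managed machine
# "pulp.example.lan" and the baseurl path are placeholders
[local-fedora]
name=Local Fedora mirror (on-site Pulp server)
baseurl=http://pulp.example.lan/pulp/repos/fedora/
enabled=1
gpgcheck=1
```

From then on, `yum update` on every machine pulls only from the local server.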

There is an emerging cloud technology called Katello which leverages Pulp's features. Good luck with them!


Comments

Well, Pulp fetches all the data. Initial download is big, yeah.

lzap ( 2011-11-15 11:04:06 -0600 )

