Infrastructure Tasks to help with LSB 4.0, but not directly 4.0 deliverables
- Need the autobuilder to be able to process one-shot requests (bug 1917)
- Separate the build, result-upload, and package-upload steps, so that a platform can do a "build all" but then skip uploading some results, or upload no packages at all. Right now, not all platforms have identical scripts for handling noarch packages (this is only done in one place)
- Look into fragility
- Package up checkem and the related scripts; these are not yet in bzr
- Set up another example installation to verify the procedure and docs
- Start collecting a list of distros we want to test
- Hardware resource planning: how many VMs can we host currently, and how many more machines do we need to get where we want to be?
- A plan for "development" distros: these are updated frequently, so how do we keep our copies up to date?
We need tools to actually pull out the data of interest. The example page for this just gives a list of completed test runs; we need some way to drill down and find out that, say, test FOO from testsuite BAR is failing on three test distros while passing on twelve. Do we need to push data into a database? Develop some scripts to mine each day's run (without saving it)?
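To illustrate the kind of drill-down query we want, here is a minimal sketch of a mining script. The record layout (`distro`, `suite`, `test`, `passed`) and the function name are hypothetical, invented for illustration; a real script would parse whatever each day's run actually produces.

```python
from collections import defaultdict

def summarize(runs):
    """Aggregate raw test-run records into per-test pass/fail distro lists.

    `runs` is a list of dicts with keys: distro, suite, test, passed.
    (Hypothetical record layout, for illustration only.)
    """
    summary = defaultdict(lambda: {"pass": [], "fail": []})
    for r in runs:
        key = (r["suite"], r["test"])
        summary[key]["pass" if r["passed"] else "fail"].append(r["distro"])
    return summary

# Example: test FOO in suite BAR passes on two distros, fails on one.
runs = [
    {"distro": "fedora",   "suite": "BAR", "test": "FOO", "passed": True},
    {"distro": "opensuse", "suite": "BAR", "test": "FOO", "passed": True},
    {"distro": "debian",   "suite": "BAR", "test": "FOO", "passed": False},
]
result = summarize(runs)
print(result[("BAR", "FOO")]["fail"])  # → ['debian']
```

A script like this could run over each day's results without requiring a database; pushing the same summaries into a DB would only become necessary if we want historical trends.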
Distro Testing / Autobuilding
On two architectures, ia32 and amd64, we want a full test matrix of active VMs for the forthcoming release, plus a few inactive ones for the previous release (or earlier); for back-rev support we need a way to fire up those machines and test against them. We also want separate VMs for autobuilding, which need to be kept clean and distinct from the autotesting ones.
- Stable: RHEL/CentOS, SLES, Ubuntu, Debian stable
- Development: Fedora, OpenSUSE, Ubuntu, Debian testing
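A sketch of what enumerating this test matrix looks like; the distro names come from the lists above, but the data structure itself is just an illustration (and the raw cross-product overcounts slightly, since inactive/previous-release images and any sharing aren't modeled here):

```python
ARCHES = ["ia32", "amd64"]
DISTROS = {
    "stable":      ["RHEL/CentOS", "SLES", "Ubuntu", "Debian stable"],
    "development": ["Fedora", "OpenSUSE", "Ubuntu", "Debian testing"],
}

# Each cell of the matrix is one testing VM: (arch, channel, distro).
matrix = [(arch, channel, distro)
          for arch in ARCHES
          for channel, distros in DISTROS.items()
          for distro in distros]

print(len(matrix))  # → 16 (2 arches x 8 distros)
```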
For the other machines, the range of distributions is limited, so the test matrix will be smaller. Some architectures (s390/s390x) may not have any "development" distributions available.
It seems the two full-test architectures need space for about 12 images for testing, plus another two for building. Of these, we expect nine to be running and the other five to be boot-on-demand. This suggests about 180 GB of disk would be sufficient; 4 GB of memory would be the minimum, with more preferred. For ia32 there are memory limitations, so two machines would be better than one; that is probably true for amd64 as well.
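Spelling out the arithmetic behind that estimate (the per-image disk figure is inferred from the 180 GB budget, not stated anywhere):

```python
testing_images = 12          # rough estimate from the text
building_images = 2
total = testing_images + building_images   # 14 images in all

running = 9
boot_on_demand = total - running           # 5 images booted only on demand

disk_budget_gb = 180
per_image_gb = disk_budget_gb / total      # ~12.9 GB per image
print(round(per_image_gb, 1))
```

At roughly 13 GB per image, the 180 GB figure leaves little slack, which is worth keeping in mind if images grow or the matrix expands.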
(cut-n-pasted from Jeff's status report)
We have feeds for commit notifications, but lots of people prefer email. So we should have commit emails sent to a mailing list that people can subscribe to if they wish.
Version of bzr
Our currently advertised minimum bzr version is 0.8. Besides not being strictly true (due to some mistakes made when creating two of our projects), it's holding us back: most of bzr's performance improvements come with the newer formats supported in 1.0. The plan is to raise the minimum version to 1.0, ensure 1.0 (or newer) is installed everywhere, and migrate all repositories to the new pack format.
When we started using bzr, tags were provided by a plugin. Now bzr has native tags, so we need to migrate tags from the old format to the new. We should do this for devel only; for now, keep the 3.1 and 3.2 branches on the old plugin so that we don't have to update the build infrastructure in those branches.
PQM is slow, a bit unreliable, and otherwise not very convenient. We need real merge-directive support, elimination of the "fake revisions" at merge points, and some alternative ways to submit to PQM. We should also explore ways to allow core developers to commit directly to the bzr trees.
The bzr-webserve web front end is very inefficient. I've been testing loggerhead, which seems to work much better.
David wants bzr to be on a different server from the rest of the web stuff. Loggerhead may satisfy this need, since the bzr-webserve load is a major reason for wanting the move, but we may want to do it anyway.
According to David, this may actually happen by migrating the rest of the web stuff to a different server and leaving bzr behind; if this happens, we may not need to do anything here.
The wiki documentation needs maintenance; some of the pages still refer to the migration from CVS, and many of the above changes will need to be documented.