The Linux Foundation

Minutes Jul 20 2011


Meeting of 20 July 2011, 1100-noon US EDT (Wednesdays). Conference dial-in number: (605) 715-4920, access code: 512468


Russ Herrold (CentOS), Stew Benedict (LF), Mats Wichmann (Intel), Robert Schweikert (SUSE), Jeff Licquia (LF)


No specific agenda;

  • may talk about FHS
  • architecture
  • LSB 5.0
  • planning and schedule issues
  • talk to be given at LinuxCon re: using the LSB

Bring any topics you want to talk about.

  • per Denis (ISPRAS): estimation of library uplift effort

Carry-over action items from 7/13/2011:

  • x Contact ISPRAS and find out how long an uplift typically takes to do. Factors? (number of interfaces, size of library, etc.)
  • Research the current state of the uplifted targets.
  • Research what's available for new specs.


Linux Journal leads this month with an FHS article from the LF. Stew: it seems to be behind a viewer sign-in wall rather than on the unrestricted blog feed; signing in produces a 15-page PDF.

Russ: curiously, at page 3 the PDF omits i386 from its list of LSB-supported architectures.

Scheduling matters: Jeff will be out of pocket travelling (LinuxCon), so the meetings will need an alternative moderator or will take a short summer break (Aug 3 meeting).

Jeff: summary of Denis' (ISPRAS) email on work effort, pasted here:


Well, estimations can be rough, but I can provide some statistics.

Our automated tools (in particular, script) work on a per-header basis, and usually the number of interfaces doesn't matter much (I'd say it's the number of complex types that matters). Usually it takes one day to process 2-3 headers of medium size/medium complexity (~50-60 functions), plus one more day to run devchk/libchk, fix errors, regenerate libchk/devchk/headers, and repeat this until success.

Whether you process a new header from a completely new library or a new header from an existing library, the effort is almost the same. Actually, the most convenient case for the tools is when a library uplift turns into the addition of several new headers. In practice, uplift of an existing header takes approximately the same time as addition of a new one. And if you only want to add a couple of new functions to a big header, it can be easier to prepare appropriate SQL manually (on the basis of existing templates/examples).
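To picture what "prepare appropriate SQL manually" might amount to, here is a toy sketch: the table and column names below are invented for illustration and do not reflect the actual LSB specification database schema. The point is only that adding a couple of new functions to an already-tracked header is a handful of row inserts, patterned on existing entries, rather than a rerun of the whole per-header pipeline.

```python
# Hypothetical sketch only: table/column names are illustrative and are
# NOT the real LSB specification database schema.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Toy stand-in for an interface table keyed by library and header.
cur.execute("""
    CREATE TABLE interface (
        name        TEXT PRIMARY KEY,
        library     TEXT NOT NULL,
        header      TEXT NOT NULL,
        appeared_in TEXT NOT NULL   -- spec version the symbol first appears in
    )
""")

# Uplifting "a couple of new functions" in an existing big header is just
# a few inserts, modeled on existing rows/templates (symbol names invented).
new_symbols = [
    ("example_symbol_new",  "libexample", "example.h", "5.0"),
    ("example_symbol_free", "libexample", "example.h", "5.0"),
]
cur.executemany("INSERT INTO interface VALUES (?, ?, ?, ?)", new_symbols)
conn.commit()

count = cur.execute(
    "SELECT COUNT(*) FROM interface WHERE appeared_in = '5.0'"
).fetchone()[0]
print(count)  # 2
```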

Among the different factors that can affect the uplift process, I can mention the following:

  • C++ libs are in general easy to uplift, since we don't store C++ headers in the DB. On the other hand, we store some C++-specific binary data (e.g., vtables); uplift of such things is not automated at all, but when resources are lacking it's ok to ignore this part (not ideal, but it doesn't break anything; vtables & co. just make the spec more complete).
  • For some GTK headers we use an 'all stuff in one big header' approach, while upstream has dozens of small headers and the 'big' headers simply include the smaller ones. Automated tools can be confused by this.
  • glibc libs are tricky; you can spend a lot of time setting up the automated tools to process their headers correctly.
  • From time to time, you can meet headers with tricky dependencies, where the order of type declarations/header inclusions matters a lot and is quite complex. Sometimes you can spend a whole day fixing such a header. Fortunately, such situations are quite rare.
  • Also note that sometimes we spend a lot of time deciding which constants/macros should go into the LSB headers.


Mats: to some extent, it is perhaps easier to add a new library than to review and uplift an older one, because the rationale behind the prior editorial exclusions is not centrally compiled, but rather exists in the memory of man alone. New matter, being new, lacks such baggage.

Action items: Russ asks Jeff to put up an LF blog copy of the same article.
