Friday, July 25, 2008

Notes on source code preprocessing

(This post contains some reflections on the hypothetical design of an ideal programming language...)

Don't use, or support the use of, source-code preprocessing. Preprocessing means that every tool which operates on the source code (including editors, compilers, static analyzers, etc.) will necessarily need to either support the preprocessor language itself, or call the preprocessor and operate on the resulting source code.

Consider, for example, a simple tool that parses C code and searches for a given text in all the literal strings found in that code. If the source code needs to be preprocessed, then certain parts of the code may not be searched if the part in question is contained within the equivalent of a C-preprocessor #ifdef/#endif block. On the other hand, if we want our tool to also find strings inside these sections, then we can no longer use a C parser because #ifdef and #endif are not recognized by the C parser proper.

(The solution to the problem might include using regular expressions as a kind of heuristic to determine where the strings are, and then doing concatenations, etc. However, this is only an approximate solution, and is in general not entirely satisfactory. What about the more complex task of looking for specific variable or function declarations? In fact, a number of such tools have had to deal with variations of this problem; see for example LXR and Coccinelle.)

Multiple, different types of preprocessors also don't mix very well. For example, embedded-SQL extensions to many programming languages preprocess the source code so that SQL queries are substituted with the (possibly verbose) support code which would otherwise be needed to prepare and execute the query in question. Now we have exactly the same problem as mentioned above: support tools (editors, compilers, etc.) will have to either support the preprocessor language or (more likely) always operate on already-preprocessed source code, with all the aforementioned drawbacks. In some cases, the order of preprocessing will also become significant, or different preprocessor languages might be incompatible.

Another practical aspect of preprocessing is that the code which is inside such #ifdef blocks will be compiled only conditionally; the compiler might not even look at it. This means that the compiler is a lot less useful than it could be. It is one of the main tasks of the compiler to inform the programmer when he/she is writing something which is internally inconsistent, such as calling a function with the wrong argument types. (This is entirely possible if there is a caller of the function inside an #ifdef block and the definition of the function was later changed to take different arguments.)

In conclusion, I think C/C++ could have done a lot better in this area. On the other hand, a lot of other programming languages did get it right. But maybe the need for alternative compilation is greater in the lower-level languages and it's really just a trade-off between performance and usability.

Monday, July 21, 2008

Single-stepping a REP STOS...

I've recently made a startling discovery that explains a LOT about how kmemcheck has been working (or not) on one of my machines.

Yesterday I added the kmemcheck hooks into the DMA API, which means that we should now not give any false-positive errors about DMA-able memory. The patch was essentially a one-liner, since the rest of the DMA API deals with whole pages only (and they come straight from the page allocator, so they're not tracked anyway).

But still I was getting a huge amount of errors from sysfs code. This puzzled me for many hours. The code was apparently okay. In fact, the allocation in question was explicitly being zeroed out (it was calling kzalloc()), so it shouldn't have been possible to even find any use of uninitialized memory in it. I added some code to dump the memory along with the shadow dump, and it showed that the array was indeed being zeroed, but not marked initialized.

Because memset() is such a common operation (and needs to be fast), I've written a custom memset() function that checks whether the target memory is being tracked or not; if it is, we don't have to take any page fault at all, but we can simply zero the memory and set the initialization status to "initialized" at the same time.

I suspected my custom memset() of being in error. So I commented it out. And the result was even more startling; I now got a kmemcheck error on just about every memory access...

Something had to be wrong with the built-in memset. This is its definition:

static inline void * __memset_generic(void * s, char c, size_t count)
{
	int d0, d1;
	__asm__ __volatile__(
		"rep\n\t"
		"stosb"
		: "=&c" (d0), "=&D" (d1)
		: "a" (c), "1" (s), "0" (count)
		: "memory");
	return s;
}
(Source code taken from the Linux Kernel. This code is licensed under the GNU GPL version 2.)

Taking a new look at the first kmemcheck error reported, I got another clue. The kzalloc() allocation was uninitialized except for the very first byte!

I wrote a short userspace program which called the above function and single-stepped it with gdb on my two machines: one P4 at 3.0 GHz and one Pentium Dual-Core at 1.47 GHz. To my great surprise, the P4 skipped the whole REP STOS construct in one go, while the Dual-Core got a trap for each repetition of the STOS.

This is very grave news for kmemcheck. I had tested it on my Dual-Core earlier and thought that it would behave the same on all CPUs; in any case, there's not a word about single-stepping REP instructions in Intel's Software Developer's Manuals that I could find.

There are basically two solutions to the problem:
  1. Black-list the models that don't single-step each repetition
  2. Emulate the instruction (i.e. no single-stepping at all for the REP instructions)
And neither of them is particularly attractive.

But I have started working on instruction emulation to see how far I can get.

I've also contacted Intel and asked for more information about the peculiarity.

I only hope that there are not more unpleasant surprises like this...


PS: I have written a (32-bit) userspace program that backs up my suspicions. Compare these outputs:

$ ./a.out
processor : 7
cpu family : 6
model : 15
model name : Intel(R) Xeon(R) CPU E5320 @ 1.86GHz

Counted 1000 REP STOS instructions (expected 1000).

$ ./a.out
processor : 1
cpu family : 15
model : 6
model name : Intel(R) Pentium(R) 4 CPU 3.00GHz

Counted 55 REP STOS instructions (expected 1000).

Monday, July 14, 2008

Fedora 8 -> 9 upgrade

Yesterday, I decided that it was time to try out Fedora 9; after all, it's been a few months since it was released, and having just made a backup, I found that this was a convenient time to perform the upgrade.

Things did not start out so well, though. The installer seemed incredibly slow this time. There is a progress bar which shows how many packages have been installed, but this could probably have been dropped; the next phase of installation, which I believe is the configuration of the just-installed packages, seemed to take just as long as the installation itself, and that with no progress bar at all (though, admittedly, with a rather honest "This may take a long time").

Well, the installer eventually completed and I rebooted the system. GRUB loaded as usual, except for the new spiffy background image. I picked the most recent kernel in the list, even though the 'fc8' suffix puzzled me somewhat, but no, it didn't boot. Apparently the kernel (or initrd?) was unable to find my LVM volumes, and so couldn't mount my root partition. Well, at least the old kernel was still present in the menu, and choosing this the next time got me all the way to the (now even spiffier) graphical login screen.

My desktop shows up as usual and things seem fine. I decide to grab the most recent updates from the web, as I had downloaded the release DVD which contains packages which are now almost two months old. Unfortunately, all that 'yum' can do for me is to print a message about a missing _sha256 python package and some download instructions. I search the web for the exact error message, and I find a forum where others have the same problem [1]. The suggestion there is to download the RPMs that I need and install them manually using the 'rpm' tool, which I do. This method did in fact work, but only after resolving a few dependencies by hand and downloading amongst other things the updated python and yum packages.

And now that yum again runs successfully, I turn to those updates I've been waiting for. In the world of Free Software, changes happen quickly and the Fedora packages are usually updated quite frequently. However, all I see is a list of just two lone packages which both belong to the ".fc8" set of packages. Did the upgrade somehow forget to switch my package repositories?

No, that question will have to remain unanswered. I do not know the internals of yum, nor do I have any wish to do so (beyond fixing my broken system, obviously). I don't want to spend time on fixing the package manager; that's why I use one to begin with!

For the record, though my opinions might be biased by the experiences just recounted, I am also unimpressed by the new Firefox 3. I feel that the new user interface has been forced upon me; it would have been better to leave the former theme as it was and instead allow the user to opt in to the new one, if he or she so dares. I simply don't like the way things look: the elements of the drop-down URL list/chooser now display the titles of the webpages in addition to the URL itself, an incredibly annoying misfeature when my habits are already tuned to one specific behaviour. I can now also report that Firefox, showing only my Gmail inbox, now defaults to using 10% CPU when idle instead of the usual 3% of Firefox 2.

So thank you, but no thanks. I should have stayed with F8.



PS: When attempting to start gvim (to copy this very text into my browser), I got the following error: "gvim: error while loading shared libraries: cannot open shared object file: No such file or directory".

PPS: Several other programs were broken as well, including git, which made the computer completely useless for me. In the end, I installed F9 from scratch and restored my home directory. Yum has been downloading/installing updates for an hour now. Still, Tomboy Notes crashed while Yum was installing a SELinux policy update.