
Installing Debian under FreeBSD's bhyve

This weekend I tried to move my work dev vm to a slightly beefier vm host. This meant moving the vm from kvm on OmniOS to bhyve on FreeBSD 12.1. After moving the disk images over, I tried to configure vm-bhyve to use them as-is. Unfortunately, grub2-bhyve fails to boot XFS root file systems with CRC support and there was no good way to convert the disk image to get it to boot via UEFI instead (to avoid grub2-bhyve completely). So, I decided to simply reinstall it.

In theory, this shouldn’t have been too difficult since I had the foresight to have /home on a separate virtual disk. In practice, I spent several hours reinstalling Debian Buster over and over, trying to figure out why it’d install and reboot fine, but subsequent boots wouldn’t make it past the EFI firmware.

It turns out that Debian tries to make multi-booting possible and puts its EFI binary into a non-standard location. That, combined with bhyve not persisting EFI variables across shutdowns, means that any boot after the first poweroff does not even try to look at the Debian-specific path.

This is not a Debian bug, but rather a consequence of bhyve’s incomplete EFI support. The easiest way around it is to copy the Debian GRUB binary into the standard fallback location (EFI/BOOT/bootx64.efi) immediately after installation. In other words:

# cd /boot/efi/EFI
# mkdir BOOT
# cp debian/grubx64.efi BOOT/bootx64.efi

That’s all that’s needed. The downside to this is that the copy will not get automatically upgraded when grub gets an update.

For completeness, here are the relevant bits of the vm’s config:

loader="uefi"
graphics="yes"
cpu="6"
memory="1G"
network0_type="virtio-net"
network0_switch="public"
zfs_zvol_opts="volblocksize=4096"
disk0_dev="sparse-zvol"
disk0_type="virtio-blk"
disk0_name="disk0"
disk1_dev="sparse-zvol"
disk1_type="virtio-blk"
disk1_name="disk1"

Creative xor Use

Last month at work I got to try to optimize a function that takes a number and rounds it up to the next power of 2. The previous implementation used a simple loop. I didn’t dive into obscure bit twiddling; instead I used a helper function that was already in the codebase. Yes, I let the compiler do the heavy lifting of turning easy-to-understand code into good machine code. The x86 binary that gcc 6.3 produced contains an interesting idiom, and that’s why I’m writing this entry.

The new code:

#include <stdint.h>

static inline unsigned int bits_required32(uint32_t num)
{
	return num == 0 ? 0 : 32 - __builtin_clz(num);
}

/* Returns x, such that x is the smallest power of 2 >= num. */
uint32_t nearest_power(uint32_t num)
{
	if (num == 0)
		return 1;

	return 1U << bits_required32(num - 1);
}

This is a slightly simplified version of the code, but it demonstrates the optimization quite well.
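
As a quick sanity check, one could compile the two functions above together with a throwaway main like this (my own sketch, not anything from the actual codebase):

#include <assert.h>
#include <stdio.h>

int main(void)
{
	/* exercise the edge cases and a few ordinary inputs */
	assert(nearest_power(0) == 1);
	assert(nearest_power(1) == 1);
	assert(nearest_power(2) == 2);
	assert(nearest_power(3) == 4);
	assert(nearest_power(5) == 8);
	assert(nearest_power(4097) == 8192);
	assert(nearest_power(0x80000000) == 0x80000000);

	printf("all good\n");
	return 0;
}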

The nearest_power function disassembles as:

nearest_power()
    nearest_power:      8b 54 24 04        movl   0x4(%esp),%edx
    nearest_power+0x4:  b8 01 00 00 00     movl   $0x1,%eax
    nearest_power+0x9:  85 d2              testl  %edx,%edx
    nearest_power+0xb:  74 14              je     +0x14	<nearest_power+0x21>
    nearest_power+0xd:  83 ea 01           subl   $0x1,%edx
    nearest_power+0x10: 74 0f              je     +0xf	<nearest_power+0x21>
    nearest_power+0x12: 0f bd d2           bsrl   %edx,%edx
    nearest_power+0x15: b9 20 00 00 00     movl   $0x20,%ecx
    nearest_power+0x1a: 83 f2 1f           xorl   $0x1f,%edx
    nearest_power+0x1d: 29 d1              subl   %edx,%ecx
    nearest_power+0x1f: d3 e0              shll   %cl,%eax
    nearest_power+0x21: c3                 ret    

The first 6 instructions load the argument and deal with num being zero or one; both cases produce the result 1. The remaining 6 instructions are where the actual calculation happens. I’m going to ignore the first half of the function, since the second half is where the interesting things happen.

First, we get the index of the most significant set bit in num - 1 (that is what bsr computes) and stash the value 32 in a register:

    nearest_power+0x12: 0f bd d2           bsrl   %edx,%edx
    nearest_power+0x15: b9 20 00 00 00     movl   $0x20,%ecx

The bit index (%edx) is in the range 0–31.

Here is the really interesting bit:

    nearest_power+0x1a: 83 f2 1f           xorl   $0x1f,%edx

This xors the bit index (i.e., 0–31) with 31. To decipher what this does, I find it easier to consider the top 27 bits and the bottom 5 bits separately.

operand  binary
0x1f     00000000 00000000 00000000 000 11111
%edx     00000000 00000000 00000000 000 xxxxx

The xor of the top bits produces 0 since both the constant 31 and the register containing any of the numbers 0–31 have zeros there.

The xor of the bottom bits negates them since the constant has ones there.

When combined, the xor has the same effect as this C expression:

out = (~in) & 0x1f;

This seems very weird and useless, but it is far from it. It turns out that for inputs 0–31 the above expression is the same as:

out = 31 - in;
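
If that isn’t obvious, a quick brute-force check (again, just a throwaway snippet of mine) shows the two expressions agree for every possible 5-bit input:

#include <assert.h>

int main(void)
{
	unsigned int in;

	/* the xor-based form matches 31 - in for all inputs 0-31 */
	for (in = 0; in < 32; in++)
		assert(((~in) & 0x1f) == (31 - in));

	return 0;
}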

In other words, the xor converts the index of the most significant set bit into the number of leading zeros that __builtin_clz is supposed to return. I think it is really cool that gcc produced this single xor instead of a less optimal multi-instruction version.

The remaining subl and shll then compute 32 minus the leading-zero count (that is, bits_required32(num - 1)) and shift 1 left by that amount to produce the return value.

Why xor?

I think the reason gcc (and clang for that matter) produce this sort of xor instruction instead of a subtraction is very simple: on x86 the sub instruction’s left hand side and the destination must be the same register. That is, on x86 the sub instruction works as:

x -= y;

Since the destination is also the left hand side of the subtraction, it isn’t possible to express out = 31 - in with just one sub; the constant 31 would have to be loaded into the destination register first.
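
To illustrate, here is a rough C sketch (the function names are mine) of what the two approaches boil down to; the sub-based one needs an extra step to load the constant, while the xor needs none:

/* sub-based: the destination has to be loaded with 31 first */
unsigned int using_sub(unsigned int in)
{
	unsigned int out = 31;	/* a mov to set up the destination */

	out -= in;		/* the actual sub: out = out - in */

	return out;
}

/* xor-based: a single instruction, valid because in is in 0-31 */
unsigned int using_xor(unsigned int in)
{
	return in ^ 31;
}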

Anyway, that’s it for today. I hope you enjoyed this as much as I did.

bool bitfield:1

This is the first of hopefully many posts related to interesting pieces of code I’ve stumbled across in the dovecot repository.

Back in 1999, C99 added the _Bool type along with the bool macro in <stdbool.h>. This is old news. The thing I’ve never seen before is what amounts to:

struct foo {
	bool	a:1;
	bool	b:1;
};

Sure, I’ve seen bitfields before—just never with booleans. Since this is C, the obvious thing happens here. The compiler packs the two bool bits into a single byte. In other words, sizeof(struct foo) is 1 (instead of 2 had we not used bitfields).
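
To see the size difference for yourself, here is a small standalone sketch (struct foo_plain is just a name I made up for the non-bitfield variant):

#include <stdbool.h>
#include <stdio.h>

struct foo {		/* the bitfield version from above */
	bool	a:1;
	bool	b:1;
};

struct foo_plain {	/* the same two flags without bitfields */
	bool	a;
	bool	b;
};

int main(void)
{
	/* typically prints 1 and 2, respectively */
	printf("bitfields: %zu\n", sizeof(struct foo));
	printf("plain:     %zu\n", sizeof(struct foo_plain));
	return 0;
}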

The compiler emits pretty compact code as well. For example, suppose we have this simple function:

void set(struct foo *x)
{
	x->b = true;
}

We compile it and disassemble:

$ gcc -c -O2 -Wall -m64 test.c
$ dis -F set test.o
disassembly for test.o

set()
    set:     80 0f 02           orb    $0x2,(%rdi)
    set+0x3: c3                 ret

Had we used non-bitfield booleans, the resulting code would be:

set()
    set:     c6 47 01 01        movb   $0x1,0x1(%rdi)
    set+0x4: c3                 ret

There’s not much of a difference in these simple examples, but in more complicated structures with many boolean flags the structure size difference may be significant.

Of course, the usual caveats about bitfields apply (e.g., the layout is implementation-defined, so the machine’s endianness matters).

Working @ Dovecot

It’s been a hectic couple of weeks, and so this post is a bit delayed. Oh well.

A couple of months ago, I decided that it was time for me to move on work-wise. As a result, four weeks ago, I joined Dovecot Oy (a part of Open-Xchange).

As you may have guessed from the name of the company, I get to spend my time making the Dovecot email server code better, more featureful, and otherwise more excellent. It is certainly a significant but fun change—going from kernel hacking on a fairly unknown operating system to hacking on the world’s most popular IMAP server. Not a day goes by where I’m not surprised just how much functionality is in the Dovecot codebase, or when I get to consult an RFC related to some IMAP extension I didn’t even know existed.

So, with this said, you should expect to see some posts related to Dovecot, Dovecot code, and email in general.
