Josef “Jeff” Sipek

dis(1): support for System/370, System/390, and z/Architecture ELF bins

A few months ago, I came to the conclusion that it would be both fun and educational to add a new disassembler backend to libdisasm—the disassembler library in Illumos. Being a mainframe fan, I decided that implementing a System/390 and z/Architecture disassembler would be fun (I’ve done it before in HVF).

At first, I was targeting only the 390 and z/Architecture, but given that the System/370 is an (almost) trivial subset of the 390 (and there is a spec for 370 ELF files!), I ended up including 370 support as well.

It took a while to get the code written (z/Architecture has so many instructions!) and reviewed, but it finally happened… the commit just landed in the repository.

If you get the latest Illumos bits, you’ll be able to disassemble 370, 390, and z/Architecture binaries with style. For example:

$ dis -F strcmp hvf             
disassembly for hvf

    strcmp:      a7 19 00 00        lghi    %r1,0
    strcmp+0x4:  a7 f4 00 08        j       0x111aec
    strcmp+0x8:  a7 1b 00 01        aghi    %r1,1
    strcmp+0xc:  b9 02 00 55        ltgr    %r5,%r5
    strcmp+0x10: a7 84 00 17        je      0x111b16
    strcmp+0x14: e3 51 20 00 00 90  llgc    %r5,0(%r1,%r2)
    strcmp+0x1a: e3 41 30 00 00 90  llgc    %r4,0(%r1,%r3)
    strcmp+0x20: 18 05              lr      %r0,%r5
    strcmp+0x22: 1b 04              sr      %r0,%r4
    strcmp+0x24: 18 40              lr      %r4,%r0
    strcmp+0x26: a7 41 00 ff        tmll    %r4,255
    strcmp+0x2a: a7 84 ff ef        je      0x111ae0
    strcmp+0x2e: 18 20              lr      %r2,%r0
    strcmp+0x30: 89 20 00 18        sll     %r2,%r0,24(%r0)
    strcmp+0x34: 8a 20 00 18        sra     %r2,%r0,24(%r0)
    strcmp+0x38: b9 14 00 22        lgfr    %r2,%r2
    strcmp+0x3c: 07 fe              br      %r14
    strcmp+0x3e: a7 28 00 00        lhi     %r2,0
    strcmp+0x42: b9 14 00 22        lgfr    %r2,%r2
    strcmp+0x46: 07 fe              br      %r14

I am hoping that this will help document all the places needed to change when adding support for a new ISA to libdisasm.
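One property of these ISAs makes the decoder's job pleasantly easy: the length of every instruction is encoded in the top two bits of its first byte, so a backend can walk the instruction stream before consulting any opcode tables. Here is a minimal sketch of that rule (a hypothetical helper, not the actual libdisasm code):

```c
#include <stddef.h>
#include <stdint.h>

/*
 * On System/370, System/390, and z/Architecture, bits 0-1 of the
 * first byte of an instruction give its length:
 *   00       -> 2 bytes (e.g., RR format)
 *   01 or 10 -> 4 bytes (e.g., RX, RI formats)
 *   11       -> 6 bytes (e.g., RXY, RIL formats)
 */
static size_t
s390_ilen(uint8_t first_byte)
{
	switch (first_byte >> 6) {
	case 0:
		return (2);
	case 1:
	case 2:
		return (4);
	default:
		return (6);
	}
}
```

You can check it against the listing above: `lghi` (first byte `a7`) is 4 bytes, `llgc` (`e3`) is 6, and `lr` (`18`) and `br` (`07`) are 2.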

Happy disassembling!

OpenIndiana (build 151a)

Over the past few months, I’ve played with Solaris — specifically, OpenIndiana, or OI for short. OI is a fork of OpenSolaris. OI’s first release happened on September 14, 2010. Today, exactly a year later, the OI community is proud to announce the release of build 151a. The release notes say it all.

Personally, I find the KVM port to Illumos (the project that forked the OpenSolaris kernel, core libraries, and programs) the most interesting. It’ll let me run (and manage!) virtual machines a bit more easily than I can with VirtualBox. (Since OI now uses Illumos as the core Solaris upstream, it benefits from all the great work done by the companies and individuals that contribute to Illumos.)

In case you are a bit confused, OI aims to be the de facto community Solaris distribution.

Oh, I almost forgot… 151a includes a package with Guilt (developer/versioning/guilt). :)

zEnterprise 196

I’m rather late with this post, but here it goes anyway. In August 2010, IBM announced a new mainframe — the zEnterprise 196.

At first, I wasn’t sure if it was supposed to be a z10 replacement, or if it was an entry-level system like the Multiprise was back in the ESA/390 days. It turns out that it is a z10 replacement.

The system looks like a z10 from the outside; on the inside, of course, it’s different.

The specs for a fully configured system are:

  • 80 customer cores @ 5.2GHz (96 cores total, the 16 extra ones take care of system stuff as well as provide spare cores for failover)
  • 3TB of memory (z10 has 1.5TB limit)

The big new thing is the addition of the zBX. Now, it is possible to have your mainframe hooked up to several POWER (and soon x86) BladeCenters working as one system. I don’t really know much about how this part of the system works, but it sure looks interesting.

HVF v0.16-rc3

Whee! This weekend happened to be filled with coding.

First, I realized that it’s been 15 months since the last HVF release. I looked at the list of commits since then, and it was a sizable enough list to warrant a new release. Since there wasn’t a whole lot of “Oh my! Must have!” material, it ended up being just another release candidate.

Once released, I looked around my patch directories to see what else I should hack at. The installer patch caught my eye.

I had a branch for the DASD loader work for a while — and the loader is complete. On the same branch, I had uncommitted code to implement a simple installer program. I started working on it almost a year ago (at least that’s what I gather from the bug report: bug # 146), but I kept all the code uncommitted. I did some simple cleanup, and committed the work-in-progress code. Then, over the next few hours, I managed to get a very large portion of it done. The last piece that needs to be implemented is the EDF handling — that is, the code that lets HVF use the CMS file system for config files, etc.

In other news, I ran Doxygen on the HVF codebase. The output wasn’t as impressive as I hoped it would be. Part of it is probably because there are way too many functions that aren’t documented (bug # 76). This would be a good starter project for anyone looking to start hacking on HVF. Since I’m on the topic of looking for help, I realized that the HVF website is awful and could use some help — both look & feel as well as content.

Lastly, I’d like to post a link here to the HVF Ohloh page.

Planned Outage - Done

The server is back up. I think all the services have been restarted. If you find something wrong, let me know.

Planned Outage

My server will be off for the next day or two. You’ve been warned.

Old Email Address

I’m going to lose an email address I’ve had for a long while, because of an ISP switch. I haven’t used it as my primary email address in a long time, but either way, you’ll want to update my contact info to

HVF v0.15

Two days ago, I decided to release HVF v0.15. It’s been over a year since I did the v0.14 release. There were 4 -rc’s in between. All in all, there have been 132 commits with lots of changes all around.

You can get the source code via Git (git://, or a tarball.

No more RSS/RSS2

I decided that RSS/RSS2 were evil, and that I was going to provide only Atom feeds. Cope.

Just update your aggregator to use the Atom feed instead:

z10: greener, better, faster, stronger

Alright, I’ve finally managed to write a little entry about this…

On February 26th, IBM announced a new series of mainframes: the z10. It’s still z/Architecture, although they expanded it a bit.

Here’s what it looks like (image shamelessly stolen from the internet):

IBM z10

So, what makes it better, you ask?

It’s faster (up to 64 quad-core 4.4GHz processors), it supports three times as much memory (storage in mainframe speak) as the z9 did (512GB -> 1.5TB), the cores are 50-100% faster depending on the load, and there are other goodies… I’ll just put them in a list — it’s more parsable that way:

  • 64x quad-core 4.4GHz processors

    • 64 kB L1 i-cache
    • 128 kB L1 d-cache
    • 3 MB L2 cache

  • z/Architecture

    • crypto, decimal floating point, and compression accelerators
    • 894 instructions, 75% implemented in hardware
    • 1 MB as well as 4 kB pages (z9 has only 4 kB)

  • 1.5TB memory (z9 had 512GB limit)
  • 50-100% faster execution (depending on the workload)
  • One z10 is equivalent to 1500 x86 servers, but uses 85% less power
  • Available in the second quarter of 2008

IBM also announced that there’s a porting effort to get OpenSolaris to run on the z10.

Powered by blahgd