Josef “Jeff” Sipek

IBM RAMAC

At some point, I came across this wonderful bit of history — the IBM RAMAC. Even though I’m a System/360 fan, I think this is too cool not to share.

First, the memory:

IBM RAMAC

Second, the whole “CPU”:

IBM RAMAC

And third, a video about it:

z/VOS - running x86 code on z

Earlier this year, I heard of a company that tried to make a product out of dynamic binary translation of x86 code to Wikipedia article: z/Architecture. Recently, I decided to look at what they do.

The company is called Mantissa Corporation, and their binary translation product is called z/VOS.

Much like VMware, they cache the translated code; in z/VOS’s case it’s really a must, otherwise I’d guess the cost of translation would make the result unusable. I like how they used VNC (see the demo mentioned below) to give the virtual x86 box a display.
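The caching idea can be sketched in a few lines of C. To be clear, everything below is hypothetical; the structure names and the hash are mine, not z/VOS’s. It just shows the basic trick: translate a block of guest code once, then find it by guest address on every later entry instead of re-translating.

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical sketch of a translated-code cache; none of these names
 * come from z/VOS.  Guest (x86) code is translated a block at a time,
 * and each block is remembered so the expensive translation runs once.
 */
#define NR_BUCKETS	1024

struct xlated_block {
	uint32_t guest_pc;		/* x86 address of the block */
	void *host_code;		/* generated z/Architecture code */
	struct xlated_block *next;	/* hash-chain link */
};

static struct xlated_block *cache[NR_BUCKETS];

static unsigned int hash_pc(uint32_t pc)
{
	return (pc >> 2) % NR_BUCKETS;
}

/* Return the cached translation, or NULL if we have to translate. */
static struct xlated_block *cache_lookup(uint32_t pc)
{
	struct xlated_block *b;

	for (b = cache[hash_pc(pc)]; b; b = b->next)
		if (b->guest_pc == pc)
			return b;

	return NULL;
}

/* Remember a freshly translated block. */
static void cache_insert(struct xlated_block *b)
{
	unsigned int h = hash_pc(b->guest_pc);

	b->next = cache[h];
	cache[h] = b;
}
```

The dispatch loop would call cache_lookup() before every block entry and fall back to the (slow) translator only on a miss.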

There is an official blog that has some interesting bits of information. For example, they hint at how they use multiple address spaces to give the x86 code the illusion of virtual memory. I am not quite sure why they list Wikipedia article: Decimal Floating Point facility as a requirement. Unfortunately, it has been a few months since the last update.

Their website also happens to have a demo of a small x86 assembly operating system starting up and running under z/VOS. I find this fascinating.

TurboHercules

A few days ago, a new company was created: TurboHercules.

As the name implies, they package up Hercules (an IBM mainframe emulator), and provide support for it. They are targeting the platform as a disaster recovery solution.

It shouldn’t directly affect the open source project in a negative way (just like Red Hat cannot prevent people from continuing their work on the Linux Kernel). At the same time, it’ll change the way people look at Hercules.

$23,148,855,308,184,500

Tee hee…an amusing story from BBC News:

A man in the United States popped out to his local petrol station to buy a pack of cigarettes - only to find his card charged $23,148,855,308,184,500.

Sharing the Computer's Time

Earlier today, someone I know sent me this Time article. I started reading the article, but something seemed a bit odd. To not spoil it for you, here’s the text of the article:

The computer has become a mainstay of big business in the U.S., but most small and medium-sized companies still find it too expensive for normal use. Last week two of the biggest computer makers, General Electric and Control Data Corp., introduced new systems that will offer the small business man the same computer advantages as the biggest corporation. Their move to what is called “time sharing” is part of a growing trend to market the computer’s abilities much as a utility sells light or gas.

Dial for the Answer. Business some time ago began using computer centers to process data cards, count receipts or keep track of airline reservations from distant offices. Time sharing goes much beyond that. It links up as many as 500 widely separated customers with one large computer, lets each feed its own problems to the machine by telephone through a simple typewriter console. The time-sharing computer can answer questions in microseconds, is able to shift back and forth swiftly among the diverse programming needs of many companies, small and large.

Although still in its infancy, time sharing is already being used by business, government and universities. Boston’s Raytheon Co. prepares contract proposals, and Arthur D. Little solves problems in applied mechanics through a time-sharing system run by Cambridge’s Bolt Beranek & Newman. Another time-sharing firm, Keydata, will soon take up the problems of Boston distributors of liquor, books, automobile parts and building materials. Control Data, which introduced two time-shared computers last week, will open the U.S.’s biggest sharing center in Los Angeles next year. General Electric already has 88 customers, last week added a New York center to its service centers in Phoenix and Valley Forge, Pa.

From New York, IBM gives shared-time services to 50 customers, including Union Carbide and the Bank of California. Under G.E.’s system, a company can rent the big G.E. 265 for 25 shared hours a month for only $350, compared with a normal monthly rent of $13,000 for individual computers.

Plugging Them In. Some companies have discovered that time sharing has reduced to one-fiftieth the time needed to answer a problem, have found access to a large computer more profitable than ownership of a small or medium-sized machine. The Massachusetts Institute of Technology, one of the pioneers in time sharing, now has 400 users for its IBM 7094 computer, has served scientists as far away as Norway and Argentina. Experts predict that by 1970 time sharing will account for at least half of an estimated $5 billion computer business, will be used as widely and easily as the telephone switchboard.

Yep, that’s right, this article is dated: Friday, Nov. 12, 1965. :)

HVF v0.15

Two days ago, I decided to release HVF v0.15. It’s been over a year since I did the v0.14 release. There were four -rc’s in between. All in all, there have been 132 commits with lots of changes all around.

You can get the source code via Git (git://repo.or.cz/hvf.git), or a tarball.

HVF: Sample Session

Looking at some of my older posts about z/Architecture, I decided to post a sample console session (including some annotations) with the latest version of the code with some work-in-progress patches that I haven’t touched in a while.

Every OS needs a nice banner, right?

                    HH        HH  VV        VV  FFFFFFFFFFFF
                    HH        HH  VV        VV  FFFFFFFFFFFF
                    HH        HH  VV        VV  FF
                    HH        HH  VV        VV  FF
                    HH        HH  VV        VV  FF
                    HHHHHHHHHHHH  VV        VV  FFFFFFF
                    HHHHHHHHHHHH  VV        VV  FFFFFFF
                    HH        HH   VV      VV   FF
                    HH        HH    VV    VV    FF
                    HH        HH     VV  VV     FF
                    HH        HH      VVVV      FF
                    HH        HH       VV       FF

HVF VERSION v0.15-rc4-7-g62eac50

NOW 06:38:44 UTC 2009-04-15

LOGON AT 06:38:45 UTC 2009-04-15

The IPL command isn’t completely done, so for the time being it has the device number hardcoded in.

ipl
WARNING: IPL command is work-in-progress
GUEST IPL HELPER LOADED; ENTERED STOPPED STATE

You can see the device number in R2, the subchannel id (used by SSCH) in R1, and the base address in R12.

d g
GR  0 = 0000000000000000 0000000000010005
GR  2 = 0000000000000a00 0000000000000000
GR  4 = 0000000000000000 0000000000000000
GR  6 = 0000000000000000 0000000000000000
GR  8 = 0000000000000000 0000000000000000
GR 10 = 0000000000000000 0000000000000000
GR 12 = 0000000001000000 0000000000000000
GR 14 = 0000000000000000 0000000000000000

Execution will begin at 16MB; that’s where the loader gets copied.

d psw
PSW = 00080000 81000000 00000000 00000000

The first few instructions of the loader…as disassembled by the built-in disassembler.

d s i1000000.20
R0000000001000000  B234C090      STSCH  144(R12)
R0000000001000004  4770C040      BC     7,64(R0,R12)
R0000000001000008  9680C095      OI     149(R12),128
R000000000100000C  B232C090      MSCH   144(R12)
R0000000001000010  4770C040      BC     7,64(R0,R12)
R0000000001000014  D2070078C060  MVC    120(8,R0),96(R12)
R000000000100001A  5830007C      L      R3,124(R0,R0)
R000000000100001E  4133C03C      LA     R3,60(R3,R12)
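In C, the sequence the loader is implementing looks roughly like this. The SCHIB layout and the stsch()/msch() stubs below are simplified stand-ins I made up for illustration, not HVF code; the real flow is: store the subchannel information block, give up if that fails, set the enable bit in the PMCW, and modify the subchannel.

```c
#include <stdint.h>

/*
 * Simplified sketch of the loader's device-init sequence; the SCHIB
 * layout and the stsch()/msch() stubs are stand-ins I made up, not
 * HVF code.  On real hardware these would be the STSCH and MSCH
 * instructions, each setting a condition code.
 */
struct schib {
	uint32_t intparm;	/* interruption parameter */
	uint8_t  pmcw_flags0;	/* ISC, etc. */
	uint8_t  pmcw_flags1;	/* 0x80 = subchannel enable bit */
	/* ...rest of the PMCW and the SCSW omitted... */
};

/* Stubbed "instructions"; pretend the subchannel exists (cc 0). */
static int stsch(uint32_t sch, struct schib *s)
{
	(void) sch;
	s->intparm = 0;
	s->pmcw_flags0 = 0;
	s->pmcw_flags1 = 0;
	return 0;		/* cc 0: SCHIB stored */
}

static int msch(uint32_t sch, struct schib *s)
{
	(void) sch;
	(void) s;
	return 0;		/* cc 0: subchannel modified */
}

/* Mirror the STSCH / OI / MSCH sequence in the disassembly above. */
static int enable_subchannel(uint32_t sch)
{
	struct schib s;

	if (stsch(sch, &s))
		return -1;	/* nonzero cc: give up (the DIAG path) */

	s.pmcw_flags1 |= 0x80;	/* the OI: set the enable bit */

	if (msch(sch, &s))
		return -1;	/* nonzero cc: give up again */

	return 0;
}
```

Note that the OI in the disassembly hits offset 149(R12), i.e. byte 5 of the SCHIB at 144(R12), which is exactly the flags byte the sketch sets.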

There are real devices. Since this run was under Hercules, these were all defined in hvf.cnf.

q real
CPU RUNNING
STORAGE = 128M
CONS 0009 3215 SCH = 10000
RDR  000C 3505 SCH = 10001
PUN  000D 3525 SCH = 10002
PRT  000E 1403 SCH = 10003
GRAF 0040 3278 SCH = 10004
GRAF 0041 3278 SCH = 10005
TAPE 0580 3590 SCH = 10006

And there are virtual devices (including their subchannel information blocks).

q virtual
CPU STOPPED
STORAGE = 17M
CONS 0009 3215 ON CONS 0009 SCH = 10000
RDR  000C 3505 SCH = 10001
PUN  000D 3525 SCH = 10002
PRT  000E 1403 SCH = 10003
DASD 0191 3390      0 CYL ON DASD 0000 SCH = 10004
d schib all
SCHIB DEV  INT-PARM ISC FLG LP PNO LPU PI MBI  PO PA CHPID0-3 CHPID4-7
10000 0009 00000000   0  01 80  00  00 80 ---- FF 80 00000000 00000000
10001 000C 00000000   0  01 80  00  00 80 ---- FF 80 00000000 00000000
10002 000D 00000000   0  01 80  00  00 80 ---- FF 80 00000000 00000000
10003 000E 00000000   0  01 80  00  00 80 ---- FF 80 00000000 00000000
10004 0191 00000000   0  01 80  00  00 80 ---- FF 80 00000000 00000000

Let ’er rip! Well, it gets past SSCH (well, kind of) and then stops when it doesn’t know what to do with a DIAG.

be
INTRCPT: INST (b234 c0900000)
STSCH handler (raw:0000b234c0900000 addr:0000000001000090 sch:10005)
INTRCPT: INST (8300 00010000)
Unknown/mis-handled intercept code 04, err = -4

Ah, condition code 3; that’s why the loader gave up with a DIAG instead of attempting MSCH.

d psw
PSW = 00083000 81000048 00000000 00000000
d s i1000040.10
R0000000001000040  980FC0C4      LM     R0,R15,196(R12)
R0000000001000044  83000001      DIAG   X’000001’
R0000000001000048  980FC0C4      LM     R0,R15,196(R12)
R000000000100004C  83000000      DIAG   X’000000’

What version is this anyway? Is it 6:45 already?!

q cplevel
HVF version v0.15-rc4-7-g62eac50
IPL at 06:38:44 UTC 2009-04-15
q time
TIME IS 06:45:26 UTC 2009-04-15

P.S. I just realized that the post id for this post is 360. How apt! :)

HOWTO: Installing CentOS 4.x under z/VM

I guess I should mention it here…

Almost 6 months ago, I wrote up another howto: Installing CentOS 4.x under z/VM (the first one being Installing Debian under Hercules).

z10: greener, better, faster, stronger

Alright, I’ve finally managed to write a little entry about this…

On February 26th, IBM announced a new series of mainframes: the z10. It’s still z/Architecture, although they expanded it a bit.

Here’s what it looks like (image shamelessly stolen from the internet):

IBM z10

So, what makes it better, you ask?

It’s faster (up to 64x quad-core 4.4GHz processors), it supports three times as much memory (storage in mainframe speak) as the z9 did (512GB -> 1.5TB), and the cores are 50-100% faster depending on the load. There are other goodies too; I’ll just list everything, since it’s easier to parse that way:

  • 64x quad-core 4.4GHz processors

    • 64 kB L1 i-cache
    • 128 kB L1 d-cache
    • 3 MB L2 cache

  • z/Architecture

    • crypto, decimal floating point, and compression accelerators
    • 894 instructions, 75% implemented in hardware
    • 1 MB as well as 4 kB pages (the z9 has only 4 kB)

  • 1.5TB memory (z9 had 512GB limit)
  • 50-100% faster execution (depending on the workload)
  • One z10 is equivalent to 1500 x86 servers, but uses 85% less power
  • Available in the second quarter of 2008

IBM also announced that there’s a porting effort to get OpenSolaris to run on the z10.

z/VM = pure awesomeness

Today, I got to use z/VM 5.2 a whole lot (well, I didn’t do all that much, but considering that I had never typed a single command in z/VM — only VM/370 — it was a whole lot of using :) ). Long story short, z/VM is totally amazingly awesome.

Here’s an image that I found on IBM’s website:

I [heart] VM

Powered by blahgd