Josef “Jeff” Sipek

Dumping & restoring XFS volumes

Over the past few years, I’ve been using XFS wherever I could. I never really tried to tweak the mkfs options, and therefore most of my filesystems were quite sub-optimal. I managed to get my hands on an external 500GB disk that I decided to use as scratch space for all the data shuffling this would require…

320GB external firewire disk

This was probably the most offensively made fs. Here’s the old info:

meta-data=/dev/sdb1      isize=512    agcount=17, agsize=4724999 blks
         =               sectsz=512   attr=1
data     =               bsize=4096   blocks=78142042, imaxpct=25
         =               sunit=0      swidth=0 blks, unwritten=1
naming   =version 2      bsize=4096  
log      =internal       bsize=4096   blocks=32768, version=2
         =               sectsz=512   sunit=0 blks, lazy-count=0
realtime =none           extsz=65536  blocks=0, rtextents=0

It had 512 byte inodes (instead of the saner, and default, 256 byte inodes) because I was playing around with SELinux when I made this filesystem, and the bigger inodes allow more extended attributes to be stored in the inode itself, improving performance a whole lot. When I first made the fs, it had 16 allocation groups, but I later grew the filesystem by about 10GB that had been used by a FAT32 partition for Windows ↔ Linux data shuffling, which is where the 17th allocation group came from. On a simple disk (i.e., not a RAID 5), 4 allocation groups is far more logical than the 17 I had before. Another thing I wanted to use was lazy-count, which was introduced in 2.6.23 and improves performance when multiple processes are modifying filesystem metadata (create/unlink/mkdir/rmdir). And last, but not least, I wanted to use version 2 inodes.

The simplest way to convert the whole filesystem to use these features is to back up, mkfs, and restore…and that’s what I did.
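
For the record, the new settings correspond to mkfs.xfs options roughly like this (I didn’t save the exact command, so this is a reconstruction rather than what I actually typed):

# mkfs.xfs -f -d agcount=4 -i attr=2 -l version=2,lazy-count=1 /dev/sdb1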

This is what the fs is like after the whole process (note that isize, agcount, attr, and lazy-count changed):

meta-data=/dev/sdb1      isize=256    agcount=4, agsize=19535511 blks
         =               sectsz=512   attr=2
data     =               bsize=4096   blocks=78142042, imaxpct=25
         =               sunit=0      swidth=0 blks
naming   =version 2      bsize=4096
log      =internal       bsize=4096   blocks=32768, version=2
         =               sectsz=512   sunit=0 blks, lazy-count=1
realtime =none           extsz=4096   blocks=0, rtextents=0

dumping…

I mkfs.xfs’d the 500GB disk, and mounted it on /mnt/dump. Since I like tinkering with storage, I couldn’t help but start blktrace for both of the disks (the one being dumped, and the one storing the dump).
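
(The blktrace runs amounted to something like the following, left running for the duration and killed afterwards; the firewire disk is /dev/sdb, while the target’s device name here is just a placeholder:)

# blktrace -d /dev/sdb -o source-dump &
# blktrace -d /dev/sdc -o target-dump &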

Instead of using rsync, tar, or dd, I went with the xfsdump/xfsrestore combo. xfsdump is a lot like tar in that it creates a single file with all the data, but unlike tar, it also saves extended attributes and preserves hole information for sparse files. So, with blktrace running, it was time to start the dump:

# xfsdump -f /mnt/dump/acomdata_xfs.dump -p 60 -J /mnt/acomdata

The dump took about 9300 seconds (2 hours, 35 mins). Here are the graphs created by seekwatcher (which uses the blktrace traces)…The source disk is the firewire disk being dumped, and the target disk is the one being dumped to.
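
(The graphs came out of seekwatcher invocations along these lines; the trace basenames are made up:)

$ seekwatcher -t source-dump -o source-dump.png
$ seekwatcher -t target-dump -o target-dump.png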

source disk

The IO here makes sense: xfsdump scans the entire filesystem and backs up every inode, sorted by inode number (which is a function of the block number). The scattered accesses are due to fragmented files having data all over the place.

target disk

I’m not quite sure why XFS decided to break the dump file into 8 extents. These extents show up nicely as the 8 ascending lines. The horizontal line at ~250GB is the journal being written to. (The seeks/second graph’s y-axis shows that seekwatcher has a bug when there’s very little seeking :) )
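
(To see how a file got laid out, xfs_bmap is the tool to ask; something like this would have listed the 8 extents:)

# xfs_bmap -v /mnt/dump/acomdata_xfs.dump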

…and restoring

After the dump finished, I unmounted the 320GB fs, and ran mkfs on it (lazy-count=1, agcount=4, etc.). Then it was time to mount, start a new blktrace run on the 2 disks, and run xfsrestore — to extract all the files from the dump.

# xfsrestore -f /mnt/dump/acomdata_xfs.dump -p 60 -A -B -J /mnt/acomdata

I used the -A option to NOT restore xattrs, as the only xattrs on the filesystem were some stray SELinux labels that managed to survive.

The restore took a bit longer…12000 seconds (3 hours, 20 minutes). And here are the traces for the restore:

source disk

Reading the 240GB file that was stored in 8 extents created an IO trace that’s pretty self-explanatory. The constant writing to the journal was probably because of the inode access time updates. (And again, seekwatcher managed to round the seeks/second y-axis labels.)

target disk

This looks messy, but it actually isn’t bad at all. The 4 horizontal lines that look a lot like journal writes are probably the superblocks being updated to reflect the inode counts (4 allocation groups == 4 sets of superblock + AG structures).

some analysis…

After the restore, I ran some debug tools to see how clean the filesystem ended up being…
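
(For the curious, this kind of output comes from xfs_db’s frag and freesp commands, roughly like this:)

# xfs_db -r -c frag /dev/sdb1
# xfs_db -r -c freesp /dev/sdb1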

…fragmentation

37945 extents used, ideal 37298 == not bad at all

…free space fragmentation

   from      to extents  blocks    pct
      1       1      19      19   0.00
      2       3       1       3   0.00
     64     127       2     150   0.00
    128     255       1     134   0.00
    512    1023       1     584   0.00
   4096    8191       1    4682   0.02
  32768   65535       1   36662   0.19
 131072  262143       1  224301   1.16
 262144  524287       3 1315076   6.79
 524288 1048575       2 1469184   7.59
1048576 2097151       4 6524753  33.71
2097152 4194303       4 9780810  50.53

== pretty much squeaky clean

…per allocation group block usage

/dev/sdb1:
AG     1K-blocks         Used    Available    Use%
  0     78142044     40118136     38023908     51%
  1     78142044     78142040            4     99%
  2     78142044     42565780     35576264     54%
  3     78142036     74316844      3825192     95%
ALL    312568168    235142800     77425368     75%

I’m somewhat surprised that the 2nd and 4th are near full (well, 2nd ag has only 4kB free!), while the 1st and 3rd are only half full. As you can see, the 320GB disk is 75% used.

Bonus features

I decided to render mpeg versions of the IO traces…

source disk (dump) (4MB)
target disk (dump) (2MB)
source disk (restore) (2MB)
target disk (restore) (4.1MB) ← this is the best one of the bunch
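
(If I remember correctly, seekwatcher can render the movies itself; assuming its --movie option, an invocation would look something like this:)

$ seekwatcher -t target-restore -o target-restore.mpg --movie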

Smile!

Last night, someone (I forget who) challenged me to make a smiley with disk IO and seekwatcher. Well, I couldn’t let such a challenge just pass me by (click to enlarge):

Smile!

All that I used was: blktrace, seekwatcher, python (to do the math: sin, cos, etc.), and dd (to do the disk IO). I am already planning bigger and better things :)
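
For the curious, the general idea is just a loop that computes disk offsets and pokes at them with dd while blktrace records the result. The following is a toy sketch of the approach, not the script I actually used (the device, geometry, and timing are all made up), and it only traces a circle, i.e. the face’s outline:

#!/bin/sh
# Toy sketch: draw a circle on a seekwatcher graph.  The x axis is time
# and the y axis is disk offset, so we sweep left to right and, at each
# step, read from the top and the bottom of the circle.
DEV=/dev/sdb        # hypothetical scratch disk (blktrace watches it)
CENTER_MB=35000     # circle center, as a disk offset in MB
RADIUS_MB=10000     # circle radius, in MB

for step in $(seq 0 100); do
	# x goes from -radius to +radius; the offset delta is sqrt(r^2 - x^2)
	dy=$(python -c "import math; \
x = ($step / 50.0 - 1.0) * $RADIUS_MB; \
print(int(math.sqrt($RADIUS_MB * $RADIUS_MB - x * x)))")
	# one small read at the top and one at the bottom of the circle
	dd if=$DEV of=/dev/null bs=1M count=1 skip=$((CENTER_MB + dy)) 2>/dev/null
	dd if=$DEV of=/dev/null bs=1M count=1 skip=$((CENTER_MB - dy)) 2>/dev/null
	sleep 0.2
done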

XFS, blktrace, seekwatcher

Today I was playing with blktrace and graphing the results with seekwatcher. At one point, I ran acp (which is a lot like tar, but tries to be smarter) on a directory stored on an XFS volume, but I forgot that, months ago, I had created a sparse file 101PB (that’s peta) in size. Well, acp was happily reading all the sparse regions. I killed it and decided to remove the gigantic file, which was totally useless. About 30 seconds into the removal, I realized it would have been great to have a trace of it. Well, I started blktrace, and about 12 minutes later the rm process finished.
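
(In case you’re wondering how one makes such a monster, a sparse file that size costs nothing to create; one way, with GNU dd, is to seek past the end and write nothing. The file name here is just an example:)

$ dd if=/dev/zero of=huge-sparse-file bs=1 count=0 seek=101P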

I graphed it and here’s the result (click for larger version):

XFS removing a large sparse file

At first I was very confused why things looked the way they did, but eventually it dawned on me (after some discussion with Dave Chinner — XFS dude) that it’s all journal log traffic. I quickly ran xfs_info on the filesystem:

meta-data=/dev/sdb1              isize=256    agcount=16, agsize=1120031 blks
         =                       sectsz=512   attr=1
data     =                       bsize=4096   blocks=17920496, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=8750, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

And things just made sense. I calculated the size of the log (bsize times the number of log blocks, i.e., 4096*8750 bytes) to be about 34.2 MB (base 2) or 35.84 MB (base 10). If you look at the graph, you’ll see that the disk offsets accessed were 35001 to 35035 MB, a span of roughly 35MB, which matches the log size! XFS puts the log near the middle of the disk to minimize seeks as much as possible, so as you may have guessed, my disk is about 70GB in size (it’s a U160 73GB SCSI disk).

Linux Kernel Developers Go Insane

This is a continuation of the lguest: The New Kid on the Block post I made the other day.

In response to Rusty’s patches, Linus Torvalds and Alan Cox attempt poetry.

First, Linus…

There’s a reason for [not having enough poetry in the kernel].

There once was a lad from Braidwood
With a wife and a hatred for FUD
He hacked kernels for fun,
couldn’t get them to run.
But he always felt that he should.

See?

So when you say "there’s not enough poetry", next time you’ll know why. You *really* don’t want poetry.

Then Alan Cox replied with modified lyrics to Eleanor Rigby:

Ah look at all the laundered pages
Ah look at all the laundered pages

Handling Pages
Pick up the list and the link where kswap has been
A paging scheme
Runs down the I/O
Watching the queues that now keep me a list of the store
Who is it for

All the laundered pages
Where do they all come from
All the laundered pages
Where do they all belong

Meeting bdflush
Writing the pages of a disk file that no one will clear
No task comes near
Look at it working
Sleeping a lot in the night when there’s no pressure there
What does it care

All the laundered pages
Where do they all come from
All the laundered pages
Where do they all belong

Ah look at all the laundered pages
Ah look at all the laundered pages

Oracle DB
Died under load and was freed along with its name
No admin came
Good old bdflush
Wiping the dirt from the pages as it walks down the chain
Nothing was aged

All the laundered pages
(Ah look at all the laundered pages)
Where do they all come from
All the laundered pages
(Ah look at all the laundered pages)
Where do they all belong

Then, there was an exchange of limericks between Rusty and Alan…

Rusty:

There once was a virtualization coder,
Whose patches kept getting older,
Each time upstream would drop,
His documentation would slightly rot,
SO APPLY MY FUCKING PATCHES OR I’LL KEEP WRITING LIMERICKS.

Alan:

There once was a man they called rusty
Who patches were terribly crusty
Though his patches were right
And Linus was bright
They sat on the list getting dusty.

Rusty:

There was a poetic infection
Which distorted the kernel’s direction,
The code got no time
As they all tried to rhyme
And it shipped needing lots of correction.

And finally, Alan:

Dear Rusty I think that we know
Your code has good things to show
But an unreliable guide
To the poetic aside
Would probably steal the show

Either way, these are the people that write your operating system. :)

lguest: The New Kid on the Block

As most of you know, virtualization doesn’t really interest me, so me writing about lguest is rather unusual. For those who don’t know, lguest is Rusty Russell’s way of saying virtualization sucks and I can make it better (don’t quote me on that).

Yesterday, Rusty sent out a series of 7 patches (1, 2, 3, 4, 5, 6, 7) that contains most of the documentation for lguest. This is not the normal style of documentation you’ll find in the kernel. Here’s Rusty’s description…

Lguest is an adventure, with you, the reader, as Hero. I can’t think of many 5000-line projects which offer both such capability and glimpses of future potential; it is an exciting time to be delving into the source!

But be warned; this is an arduous journey of several hours or more! And as we know, all true Heroes are driven by a Noble Goal. Thus I offer a Beer (or equivalent) to anyone I meet who has completed this documentation.

So get comfortable and keep your wits about you (both quick and humorous). Along your way to the Noble Goal, you will also gain masterly insight into lguest, and hypervisors and x86 virtualization in general.

There is a very large number of totally hilarious comments. It looks like one doesn’t have to be an x86 expert to get a laugh out of them, but knowing a thing or two about the architecture makes it all the more enjoyable.

I can’t help but include a few excerpts here…

Intel provided a special instruction to clear the TS bit for people too cool to use write_cr0() to do it. This "clts" instruction is faster, because all the vowels have been optimized out.

I’m told there are only two stories in the world worth telling: love and hate. So there used to be a love scene here like this:

Launcher: We could make beautiful I/O together, you and I.
Guest: My, that’s a big disk!

Unfortunately, it was just too raunchy for our otherwise-gentle tale.

Just read the patches. They are really amusing :)

Looking up Files, Part II

So, here are more updates about my adventures within the realm of unionfs_lookup (I suggest you read part I first). After my first post about the lookup code, I went back to coding, and I had the pleasure of trying to figure out why I was hitting a BUG_ON() with my new code, but not with the old code.

I made a simple test case: in one terminal, I’d run fsx (a filesystem exerciser) on unionfs:

mount -t unionfs -o dirs=/mnt/foo/b0:/mnt/foo/b1=ro none unionfs/
cd unionfs/
fsx -l 104060000 -q foo

And then mid-way through, I’d insert a branch as the new branch index 0:

mount -o remount,add=/mnt/foo/b0:/mnt/foo/b2=rw /mnt/unionfs

The remount command immediately caused the BUG_ON (that tests for dentry validity) in unionfs_setattr to trigger. It seemed rather odd that the lookup code replacement would do something that’d cause the unionfs dentry to be invalid. I pondered for a bit, and then I tried to insert a number of branches quickly with the old code. Eureka! The same BUG_ON() got triggered. Some lxr-ing later, it became apparent that we need to potentially revalidate inside the inode ops (like unionfs_setattr). Seems kinda obvious now, oh well. I’m also pondering the possibility of changing the VFS to call d_revalidate, but I’m still not sure if that’s the Right Thing(tm) to do.

Until next time!

Looking up Files

So, I spent the last two to three days mucking around with unionfs_lookup. Before I touched it, it was a very, very ugly, 340-line beast that no one on the Unionfs team had wanted to touch in the past year or so, because it seemed that just looking at it the wrong way would make it not work. The function had 4 different modes of operation, which overlapped in subtle ways.

Well, I decided to have some fun and rewrite it from scratch. :) Currently, the lookup function is 210 lines long, and it looks like it is working. It has only one mode, as it should. Since I am still not done, the original lookup code is still there and is used for the other 3 modes. I’ll hack on it some more, and either remove them completely or collapse some of them, because they seem a bit redundant.

In the end, even if the total code size is still around 340 lines, I’ll be happy. Having 340 lines of readable code is way better than 340 lines of barely readable code.

I'd Tap That

So, I just spent the whole night playing around with Systemtap. A neat way to dynamically instrument the kernel.

As far as I know, it uses the kprobe interface to do the hard work. The neat thing about it is that you use a rather safe language to write the code that’ll get inserted into the kernel. This code then gets translated into C code, which then gets compiled as a kernel module.
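
(To give a flavor of what it looks like, here’s a minimal, completely generic one-liner using the standard syscall tapset; it just prints who is opening what, and has nothing to do with anything specific in this post:)

# stap -e 'probe syscall.open { printf("%s(%d) open(%s)\n", execname(), pid(), argstr) }'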

When I first heard about systemtap, I decided to ignore it for whatever reason. Now, I’m kind of sorry I did, because it definitely looks like a very useful tool.

Fun Kernel Experiment

Ok, so after doing all that work on Guilt, I decided that it would be an interesting experiment to resurrect an idea I had about 2.5 years ago…my own kernel patch series. Back then, I called it -js, for rather obvious reasons (my name, if you haven’t noticed).

I wrote a small script which allows me to make a release of a 2.6-js kernel patch series within seconds. I have no idea whether having yet another kernel patch series is worth anything, and I might not actually maintain it for long, but just in case someone is brave enough to try it, here it is: http://www.fsl.cs.sunysb.edu/~jsipek/kernel/js/. Yeah, I could put it on kernel.org, but I won’t unless I see that -js is actually getting somewhere.

Bug reports, etc. are…well…I guess, welcome :)

OLS 2007!

So, I submitted a proposal for OLS 2007, and I got in! Stay tuned for the self-inflicted agony associated with actually writing the paper :)
