Josef “Jeff” Sipek

Simple File System

On three or four occasions over the past four years, I have had a use for a simple file system spec: either to teach people about file systems, or to have a simple file system to implement while learning the idiosyncrasies of an operating system’s VFS layer. This is what I came up with back in 2011 when helping a friend learn about file systems.

Simple File System

The structure is really simple. All multi-byte integers are stored as big endian.

A disk is a linear sequence of blocks. Each block is 512 bytes long. You can read/write a block at a time. The first block on the disk is number 0, the second is 1, etc.

The following is the file system structure. First of all, the file system uses 1024-byte blocks, and therefore you need to issue two disk I/Os to process one file system block’s worth of data.

The first fs block (disk blocks 0 & 1) is reserved; you should not change it in any way.

The second block contains the superblock:

struct superblock {
        uint32_t magic;        /* == 0x42420374 */
        uint32_t root_inode;   /* the disk block containing the root inode */
        uint32_t nblocks;      /* number of blocks on the disk */
        uint32_t _pad[253];    /* unused (should be '\0' filled) */
};

Starting at the third block is the block allocation map. The most significant bit of the first byte of this block represents fs block 0. The next bit represents block 1, etc.
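For illustration, here is a minimal sketch (in C) of how the allocation map bits might be manipulated; the helper names are made up, and the map is assumed to already be in memory.

#include <stdint.h>

/*
 * Test whether fs block 'blkno' is marked allocated in the block
 * allocation map.  'map' points to an in-memory copy of the map; the
 * most significant bit of map[0] corresponds to fs block 0.
 */
static int blk_is_allocated(const uint8_t *map, uint32_t blkno)
{
        return (map[blkno / 8] >> (7 - (blkno % 8))) & 1;
}

/* Mark fs block 'blkno' as allocated. */
static void blk_set_allocated(uint8_t *map, uint32_t blkno)
{
        map[blkno / 8] |= 1 << (7 - (blkno % 8));
}

/* Mark fs block 'blkno' as free. */
static void blk_set_free(uint8_t *map, uint32_t blkno)
{
        map[blkno / 8] &= ~(1 << (7 - (blkno % 8)));
}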

Each file is represented by an inode. The inode contains a number of data block pointers. The first pointer (blocks[0]) contains the first 1024 bytes of the file, blocks[1] the second, etc. The timestamps are in microseconds since 00:00:00 Jan 1, 1900 UTC.

struct inode {
        uint32_t size;         /* file length in bytes */
        uint32_t _pad0;        /* unused (should be 0) */
        uint64_t ctime;        /* creation time stamp */
        uint64_t mtime;        /* last modification time stamp */
        uint16_t nblocks;      /* number of data blocks in this file */
        uint16_t _pad1;        /* unused (should be 0) */
        uint32_t _pad2;        /* unused (should be 0) */
        uint32_t blocks[248];  /* file block ptrs */
};
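As a sketch of how the data block pointers get used, here is one way (in C) to read a whole file into memory. It uses the struct inode above and assumes a hypothetical read_block() helper that reads the 1024 bytes referenced by a block pointer, with all multi-byte fields already converted from big endian.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define FS_BLOCK_SIZE 1024

/* hypothetical: read the 1024 bytes referenced by block pointer 'blkptr' */
extern int read_block(uint32_t blkptr, void *buf);

/*
 * Read the whole file described by 'ino' into a freshly allocated
 * buffer.  Returns NULL on failure.
 */
static void *read_file(const struct inode *ino)
{
        uint8_t block[FS_BLOCK_SIZE];
        uint8_t *data;
        uint32_t copied = 0;
        uint16_t i;

        data = malloc(ino->size);
        if (!data)
                return NULL;

        for (i = 0; i < ino->nblocks && copied < ino->size; i++) {
                uint32_t len = ino->size - copied;

                if (len > FS_BLOCK_SIZE)
                        len = FS_BLOCK_SIZE;

                if (read_block(ino->blocks[i], block)) {
                        free(data);
                        return NULL;
                }

                memcpy(data + copied, block, len);
                copied += len;
        }

        return data;
}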

The root directory is represented by an inode; the data pointed to by this inode’s blocks[] has a special format: it should be treated as an array of directory entries. The filename is space padded to fill the 28-byte field (so, “foo.txt” would be stored as “foo.txt” followed by 21 spaces).

struct direntry {
        char fname[28];        /* the filename */
        uint32_t inode;        /* the inode block ptr */
};
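And to tie the pieces together, here is a sketch of a name lookup in the root directory. It assumes the structures above, the same hypothetical read_block() helper, and that the fields of the root inode passed in have already been converted from big endian.

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

#define FS_BLOCK_SIZE   1024
#define DENTS_PER_BLOCK (FS_BLOCK_SIZE / sizeof(struct direntry))  /* 32 */

/* hypothetical: read the 1024 bytes referenced by block pointer 'blkptr' */
extern int read_block(uint32_t blkptr, void *buf);

/* Compare a NUL-terminated name against a space-padded 28-byte filename. */
static int name_matches(const char *name, const char *fname)
{
        size_t len = strlen(name);

        if (len > 28 || memcmp(name, fname, len))
                return 0;

        /* the rest of the field must be spaces */
        for (; len < 28; len++)
                if (fname[len] != ' ')
                        return 0;

        return 1;
}

/*
 * Find 'name' in the root directory.  On success, returns the block
 * pointer of the file's inode; returns 0 if the name is not found.
 */
static uint32_t lookup_root(const struct inode *root, const char *name)
{
        struct direntry dents[DENTS_PER_BLOCK];
        uint16_t blk;
        size_t i;

        for (blk = 0; blk < root->nblocks; blk++) {
                if (read_block(root->blocks[blk], dents))
                        return 0;

                for (i = 0; i < DENTS_PER_BLOCK; i++)
                        if (name_matches(name, dents[i].fname))
                                return ntohl(dents[i].inode); /* on-disk ints are big endian */
        }

        return 0;
}

With 32-byte directory entries, each 1024-byte block holds exactly 32 of them.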

So, graphically, the file system looks something like:

sb -> rootinode
        |-> direntries
        |     |-> <"foo.txt", inodeptr>
        |     |                 \-> inode
        |     |                       |-> data
        |     |                       |-> data
        |     |                       \-> data
        |     |-> <"bar.txt", inodeptr>
        |     |                 \-> inode
        |     |                       |-> data
        |     |                       |-> data
        |     |                       \-> data
        |     |             .
        |     |             .
        |     |             .
        |     \-> <"xyz.txt", inodeptr>
        |                       \-> inode
        |                             |-> data
        |                             |-> data
        |                             \-> data
        |                   .
        |                   .
        |                   .
        \-> direntries
              |-> <"aaa.txt", inodeptr>
              |                 \-> inode
              |                       |-> data
              |                       |-> data
              |                       \-> data
              |-> <"bbb.txt", inodeptr>
              |                 \-> inode
              |                       |-> data
              |                       |-> data
              |                       \-> data
              |             .
              |             .
              |             .
              \-> <"ccc.txt", inodeptr>
                                \-> inode
                                      |-> data
                                      |-> data
                                      \-> data

Delegating mount/umount Privileges

Recently, I was doing some file system changes. Obviously, I wanted to run them as an unprivileged user. Unfortunately, the test involved mounting and unmounting a filesystem (tmpfs to be specific). At first I was going to set up a sudo rule to allow mount and umount to run without asking for a password. Then I remembered that I should be able to give the unprivileged user the additional privileges. It turns out that there is only one privilege (sys_mount) necessary to delegate…and it is easy to do!

$ usermod -K defaultpriv=basic,sys_mount jeffpc

Then it’s a matter of logging out and back in. We can check using ppriv:

$ ppriv $$
925:    bash
flags = <none>
        E: basic,sys_mount
        I: basic,sys_mount
        P: basic,sys_mount
        L: all
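(If you prefer checking from code instead of ppriv, a small sketch along these lines, using the illumos privilege APIs from <priv.h>, should also work; it is only an illustration, not part of the original setup.)

/* cc check_priv.c  (illumos/Solaris only) */
#include <stdio.h>
#include <priv.h>

int main(void)
{
        priv_set_t *eset;

        eset = priv_allocset();
        if (eset == NULL) {
                perror("priv_allocset");
                return 1;
        }

        /* fetch this process's effective privilege set */
        if (getppriv(PRIV_EFFECTIVE, eset) != 0) {
                perror("getppriv");
                priv_freeset(eset);
                return 1;
        }

        printf("sys_mount is %sin the effective set\n",
               priv_ismember(eset, "sys_mount") ? "" : "not ");

        priv_freeset(eset);
        return 0;
}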

At this point, mounting and unmounting works without sudo or similar user switching:

$ mkdir tmp
$ mount -F tmpfs none /tmp/tmp
$ df -h /tmp/tmp
Filesystem      Size  Used Avail Use% Mounted on
swap            2.6G     0  2.6G   0% /tmp/tmp

FAST 2012

Last week I went to FAST ’12. As always, it’s been fun attending. Overall, the talks were good and gave me a couple of ideas. There was definitely a lot of flash and SSD related work — both of which I don’t find all that exciting. There was however a bunch of work related to dedup and backup.

Anyway, here’s my reading list. I’ve skimmed most of the papers, but I want to take a closer look at these.

  • Characteristics of Backup Workloads in Production Systems
  • WAN Optimized Replication of Backup Datasets Using Stream-Informed Delta Compression
  • Recon: Verifying File System Consistency at Runtime
  • Consistency Without Ordering
  • ZZFS: A Hybrid Device and Cloud File System for Spontaneous Users
  • Serving Large-scale Batch Computed Data with Project Voldemort
  • BlueSky: A Cloud-Backed File System for the Enterprise
  • Extracting Flexible, Replayable Models from Large Block Traces
  • iDedup: Latency-aware, Inline Data Deduplication for Primary Storage

Timesavers: ZFS & BE

I’ve mentioned Boot Environments before. Well, earlier this week BEs and ZFS snapshots saved me a bunch of time. Here’s what happened.

I was in the middle of installing some package (pkg install foo) when my laptop locked up. I had to power cycle it the hard way. When it booted back up, I retried the install, but pkg complained that some state file was corrupted and it didn’t want to do anything. Uh oh. I’ve had a similar issue happen to me on Debian with aptitude, so I knew that the hard way of fixing this was going to take more time than I’d like to dedicate to it (read: none). Thankfully, I use OpenIndiana, which has ZFS and BEs.

  1. Reboot into a BE from a while ago (openindiana-3). The latest BE (openindiana-4) was created by pkg about a month ago as a clone of openindiana-3 during a major upgrade.
  2. Figure out which automatic ZFS snapshot I want to revert to. A matter of running zfs list -t all rpool/ROOT/openindiana-4 | tail -5 and picking the latest snapshot which I believe is from before pkg messed it all up. I ended up going an hour back just to make sure.
  3. Revert the BE. beadm rollback openindiana-4@zfs-auto-snap_hourly-2011-10-25-19h11
  4. Reboot back into openindiana-4.

After the final reboot, everything worked just fine. (Since the home directories are on a different dataset, they were left untouched.)

Total downtime: 5 minutes
Ease of repair: trivial

Your Turn

Do you have a corrupt package manager war story? Did you just restore from backup? Let me know in a comment.

O_PONIES & Other Assorted Wishes

You might have already heard about ext4 “eating” people’s data. That’s simply not true.

While I am far from being a fan of ext4, I feel an obligation to set the record straight. But first, let me give you some references with an approximate timeline. I’m sure I managed to leave out a ton of details.

In mid-January, a bug titled Ext4 data loss showed up in the Ubuntu bug tracker. The complaining users apparently were losing data on system crashes when using ext4. (The fact that Ubuntu likes to include every unstable & crappy driver in their kernels doesn’t help at all.) As part of the discussion, Ted Ts’o explained that the problem wasn’t with ext4 but with applications that did not ensure that the data they wrote was actually safe. People did not like hearing that.

Things went pretty quiet until mid-March. That’s when a slashdot article made it painfully obvious that many of today’s apps are buggy. Some applications (KDE being a whole suite of applications) had gotten used to the fact that ext3 was a very common filesystem used by Linux installations. More specifically, they got used to the behavior that ext3’s default mount option (data=ordered) provided. This is really the issue. The application developers assumed that the POSIX interface gave them more guarantees than it did! To make matters worse, the one way to ensure that the contents of a file get to the disk (the fsync system call) is very expensive on ext3. So over the past (almost) decade that ext3 has been around, application developers have been “trained” (think Wikipedia article: Pavlov reflexes) to not use fsync — on ext3, it’s expensive and the likelihood of you losing data is much lower due to the default mount options. ext4’s fsync implementation, much like other filesystems’ implementations (e.g., XFS), does not suffer from this. (You may have heard about fsync on ext3 being expensive almost a year ago when Firefox was hit by this: Fsyncers and curveballs (the Firefox 3 fsync() problem). Note that in this case, as Ted Ts’o points out, the problem is that Firefox uses the same thread to draw the UI and do IO. That’s plain stupid.)

Over the next few days, Ted Ts’o posted two blog entries about delayed allocation (people seem to like to blame it for data loss): Delayed allocation and the zero-length file problem, and Don’t fear the fsync!

About the same time, Eric Sandeen wrote a blurb about the state of affairs: fsync, sigh. He points out that XFS faced the same issue years ago. When the application developers were confronted about their applications being broken, they just put their fingers in their ears, hummed loudly, and yelled “I can’t hear you!” There is a word for that, and here’s the OED definition for it:

denial,

The asserting (of anything) to be untrue or untenable; contradiction of a statement or allegation as untrue or invalid; also, the denying of the existence or reality of a thing.

The problem is application developers not wanting to believe that it’s an application problem. Well, it really is! Not only are those apps broken, but they are not portable. AIX, IRIX, or Solaris will not give you the same guarantees as ext3!

(Eric is also trying to fight the common misconception that XFS nulls files, which I assure you is not the case: XFS does not null files, and requires no flux.)

About a week later, on an episode of Free Software Round Table, the problem was discussed a bit. They got most of it right :) (Here’s a 55MB mp3 of the show: 2009-03-21.)

When April 1st came about, the linux-fsdevel mailing list got a patch from yours truly: [PATCH] fs: point out any processes using O_PONIES. (The pony thing…it’s a bit of an inside joke among the Linux filesystem developers.) The idea of having O_PONIES first came up in #linuxfs on OFTC. While I don’t remember who first thought of it (my guess would be Eric), I know for sure that it wasn’t me. At the same time, I couldn’t help it, and considering that the patch took only a minute to make (and compile test), it was well worth it.

A few days later, during the Linux Storage and Filesystem workshop, the whole fsync issue got some discussion time. (See “Rename, fsync, and ponies” at Linux Storage and Filesystem workshop, day 1.) The part that really amused me:

Prior to Ted Ts’o’s session on fsync() and rename(), some joker filled the room with coloring-book pages depicting ponies. These pages reflected the sentiment that Ted has often expressed: application developers are asking too much of the filesystem, so they might as well request a pony while they’re at it.

In the comments for that article you can find Ted Ts’o saying:

Actually, it was Josef ’Jeff’ Sipek who deserves the first mention of application programmers asking for ponies, when he posted an April Fools patch submission for the new open flag, O_PONIES — unreasonable file system assumptions desired.

Another file system developer who had worked on two major filesystems (ext4 and XFS) had a t-shirt on that had O_PONIES written on the front. And the joker who distributed the colouring book pages with pictures of ponies was another file system developer working yet another next generation file system.

Application programmers, while they were questioning my competence, judgement, and even my paternity, didn’t quite believe me when I told them that I was the moderate on these issues, but it’s safe to say that most of the file system developers in the room were utterly unsympathetic to the idea that it was a good idea to encourage application programmers to avoid the use of fsync(). About the only one who was also a moderate in the room was Val Aurora (formerly Henson). Both of us recognize that ext3’s data=ordered mode was responsible for people deciding that fsync() was harmful, and I’ve said already that if we had known how badly it would encourage application writers to Do The Wrong Thing, I would have pushed hard not to make data=ordered the default. Unfortunately, memory wasn’t as plentiful in those days, and so the associated page writeback latencies wasn’t nearly as bad ten years ago.

Hrm, I’m not sure how to take it…he makes it sound like I’m an extremist. Jeff — a freedom fighter for sanity of filesystem interfaces! :) As I said, I can’t take credit for the idea of O_PONIES. As I was writing this entry, I mentioned it to Eric and he promptly wrote an entry of his own: Coming clean on O_PONIES. It looks like he isn’t sure that he was the one to invent it! I’ll give him credit for it anyway.

The next day, a group photo of the attendees was taken… You can clearly see Val Aurora wearing an O_PONIES shirt. The idea was Eric’s, and as far as I know, he had his shirt the first day.

Fedora 11 is supposedly going to use ext4 as the default filesystem. When Ars Technica published an article about it (First look: Fedora 11 beta shows promise), some misguided people who think that ext4 eats your data left a bunch of comments… *sigh*

Well, there you have it. That’s the summary of events with some of my thoughts interleaved. If you are writing a userspace application that does file IO, do the right thing: fsync the data you care about (or at least fdatasync).
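To make that advice concrete, here is a minimal sketch of the write-to-a-temporary-file, fsync, then rename pattern that this whole debate was about; the function and file names are made up for illustration.

#include <fcntl.h>
#include <stddef.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Atomically replace 'path' with 'len' bytes of 'buf', making sure the
 * new contents actually reach stable storage before the rename.
 */
static int save_file(const char *path, const char *tmppath,
                     const void *buf, size_t len)
{
        int fd;

        fd = open(tmppath, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
                return -1;

        if (write(fd, buf, len) != (ssize_t)len)
                goto err;

        /* the important part: get the data onto the disk */
        if (fsync(fd))
                goto err;

        if (close(fd))
                return -1;

        /* atomically replace the old file with the new contents */
        if (rename(tmppath, path))
                return -1;

        return 0;

err:
        close(fd);
        unlink(tmppath);
        return -1;
}

For complete durability of the rename itself you would also fsync() the containing directory, but even the version above avoids the zero-length-file surprises described earlier.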

Dumping & restoring XFS volumes

Over the past few years, I’ve been using XFS wherever I could. I never really tried to tweak the mkfs options, and therefore most of my filesystems were quite sub-optimal. I managed to get my hands on an external 500GB disk that I decided to use for all this data shuffling…

320GB external firewire disk

This was probably the most offensively made fs. Here’s the old info:

meta-data=/dev/sdb1      isize=512    agcount=17, agsize=4724999 blks
         =               sectsz=512   attr=1
data     =               bsize=4096   blocks=78142042, imaxpct=25
         =               sunit=0      swidth=0 blks, unwritten=1
naming   =version 2      bsize=4096  
log      =internal       bsize=4096   blocks=32768, version=2
         =               sectsz=512   sunit=0 blks, lazy-count=0
realtime =none           extsz=65536  blocks=0, rtextents=0

It had 512 byte inodes (instead of the more sane, and default, 256 byte inodes) because I was playing around with SELinux when I made this filesystem, and the bigger inodes allow more extended attributes to be stored there — improving performance a whole lot. When I first made the fs, it had 16 allocation groups, but I grew the filesystem by about 10GB that had been used by a FAT32 partition for Windows ↔ Linux data shuffling. On a simple disk (e.g., not a RAID 5), 4 allocation groups is far more logical than the 17 I had before. Another thing I wanted to use is lazy-count. That got introduced in 2.6.23, and it improves performance when multiple processes are modifying filesystem metadata (create/unlink/mkdir/rmdir). And last, but not least, I wanted to use version 2 inodes.

The simplest way to change the filesystem to use these features is to back up, mkfs, and restore…and that’s what I did.

This is what the fs is like after the whole process (note that isize, agcount, attr, and lazy-count changed):

meta-data=/dev/sdb1      isize=256    agcount=4, agsize=19535511 blks
         =               sectsz=512   attr=2
data     =               bsize=4096   blocks=78142042, imaxpct=25
         =               sunit=0      swidth=0 blks
naming   =version 2      bsize=4096
log      =internal       bsize=4096   blocks=32768, version=2
         =               sectsz=512   sunit=0 blks, lazy-count=1
realtime =none           extsz=4096   blocks=0, rtextents=0

dumping…

I mkfs.xfs’d the 500GB disk, and mounted it on /mnt/dump. Since I like tinkering with storage, I couldn’t help but start blktrace for both of the disks (the one being dumped, and the one storing the dump).

Instead of using rsync, tar, or dd, I went with the xfsdump/xfsrestore combo. xfsdump is a lot like tar — it creates a single file with all the data, but unlike tar, it also saves extended attributes, and preserves the hole information for sparse files. So, with blktrace running, it was time to start the dump:

# xfsdump -f /mnt/dump/acomdata_xfs.dump -p 60 -J /mnt/acomdata

The dump took about 9300 seconds (2 hours, 35 mins). Here are the graphs created by seekwatcher (which uses the blktrace traces)…The source disk is the firewire disk being dumped, and the target disk is the one being dumped to.

source disk

The IO here makes sense: xfsdump scans the entire filesystem and backs up every inode, sorted by inode number (which is a function of the block number). The scattered accesses are because of fragmented files having data all over the place.

target disk

I’m not quite sure why XFS decided to break the dump file into 8 extents. These extents show up nicely as the 8 ascending lines. The horizontal line at ~250GB is the journal being written to. (The seeks/second graph’s y-axis shows that seekwatcher has a bug when there’s very little seeking :) )

…and restoring

After the dump finished, I unmounted the 320GB fs, and ran mkfs on it (lazy-count=1, agcount=4, etc.). Then it was time to mount, start a new blktrace run on the 2 disks, and run xfsrestore — to extract all the files from the dump.

# xfsrestore -f /mnt/dump/acomdata_xfs.dump -p 60 -A -B -J /mnt/acomdata

I used the -A option to NOT restore xattrs, as the only xattrs on the filesystem were some stray SELinux labels that had managed to survive.

The restore took a bit longer…12000 seconds (3 hours, 20 minutes). And here are the traces for the restore:

source disk

Reading the 240GB file that was in 8 extents created an IO trace that’s pretty self-explanatory. The constant writing to the journal was probably because of the inode access time updates. (And again, seekwatcher managed to round the seeks/second y-axis labels.)

target disk

This looks messy, but it actually isn’t bad at all. The 4 horizontal lines that look a lot like journal writes are probably the superblocks being updated to reflect the inode counts (4 allocation groups == 4 sets of superblock + AG structures).

some analysis…

After the restore, I ran some debug tools to see how clean the filesystem ended up being…

…fragmentation

37945 extents used, ideal 37298 == not bad at all

…free space fragmentation

   from      to extents  blocks    pct
      1       1      19      19   0.00
      2       3       1       3   0.00
     64     127       2     150   0.00
    128     255       1     134   0.00
    512    1023       1     584   0.00
   4096    8191       1    4682   0.02
  32768   65535       1   36662   0.19
 131072  262143       1  224301   1.16
 262144  524287       3 1315076   6.79
 524288 1048575       2 1469184   7.59
1048576 2097151       4 6524753  33.71
2097152 4194303       4 9780810  50.53

== pretty much squeaky clean

…per allocation group block usage

/dev/sdb1:
AG     1K-blocks         Used    Available    Use%
  0     78142044     40118136     38023908     51%
  1     78142044     78142040            4     99%
  2     78142044     42565780     35576264     54%
  3     78142036     74316844      3825192     95%
ALL    312568168    235142800     77425368     75%

I’m somewhat surprised that the 2nd and 4th allocation groups (AGs 1 and 3) are nearly full (well, AG 1 has only 4kB free!), while the 1st and 3rd are only about half full. As you can see, the 320GB disk is 75% used.

Bonus features

I decided to render mpeg versions of the IO traces…

source disk (dump) (4MB)
target disk (dump) (2MB)
source disk (restore) (2MB)
target disk (restore) (4.1MB) ← this is the best one of the bunch

Smile!

Last night, someone (I forget who) challenged me to make a smiley with disk IO and seekwatcher. Well, I couldn’t let such a challenge just pass me by (click to enlarge):

Smile!

All that I used was: blktrace, seekwatcher, python (to do the math - sin, cos, etc.), and dd (to do the disk io). I am already planning bigger and better things :)

XFS, blktrace, seekwatcher

Today I was playing with blktrace, and graphing the results with seekwatcher. At one point, I ran acp (which is a lot like tar, but tries to be smarter) on a directory stored on an XFS volume, but I forgot that, months ago, I had created a sparse file 101PB (that’s petabytes) in size. Well, acp was happily reading all the sparse regions. I killed it, and decided to remove the gigantic file, which was totally useless. About 30 seconds into the removal, I realized it would have been great to have a trace of it. Well, I started blktrace, and about 12 minutes later the rm process finished.

I graphed it and here’s the result (click for larger version):

XFS removing a large sparse file

At first I was very confused why things looked the way they did, but eventually it dawned on me (after some discussion with Dave Chinner — XFS dude) that it’s all journal log traffic. I quickly ran xfs_info on the filesystem:

meta-data=/dev/sdb1              isize=256    agcount=16, agsize=1120031 blks
         =                       sectsz=512   attr=1
data     =                       bsize=4096   blocks=17920496, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=8750, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0

And things just made sense. I calculated the size of the log (bsize × blocks from the log line above) to be 4096*8750 bytes, or 34.17 MB (base 2) or 35.84 MB (base 10). If you look at the graph, you’ll see that the disk offsets accessed were 35001 to 35035 MB, a span of about 35MB! XFS puts the log near the middle of the disk to minimize seeks as much as possible, so as you may have guessed, my disk is about 70GB in size (it’s a U160 73GB SCSI disk).

Looking up Files, Part II

So, here are more updates about my adventures within the realm of unionfs_lookup (I suggest you read part I first). After my first post about the lookup code, I went back to coding, and I had the pleasure of trying to figure out why I was hitting a BUG_ON() with my new code, but not with the old code.

I made a simple test case: in one terminal, I’d run fsx (a POSIX compliance tester program) on unionfs:

mount -t unionfs -o dirs=/mnt/foo/b0:/mnt/foo/b1=ro none unionfs/
cd unionfs/
fsx -l 104060000 -q foo

And then mid-way through, I’d insert a branch as the new branch index 0:

mount -o remount,add=/mnt/foo/b0:/mnt/foo/b2=rw /mnt/unionfs

The remount command immediately caused the BUG_ON() (that tests for dentry validity) in unionfs_setattr to trigger. It seemed rather odd that the lookup code replacement would do something that’d cause the unionfs dentry to be invalid. I pondered for a bit, and then I tried to insert a number of branches quickly with the old code. Eureka! The same BUG_ON() got triggered. Some lxr-ing later, it became apparent that we need to potentially revalidate inside the inode ops (like unionfs_setattr). Seems kinda obvious now, oh well. I’m also pondering the possibility of changing the VFS to call d_revalidate, but I’m still not sure if that’s the Right Thing(tm) to do.

Until next time!

Looking up Files

So, I spent the last two to three days mucking around with unionfs_lookup. Before I touched it, it was a very, very ugly, 340-line beast that no one on the Unionfs team wanted to touch in the past year or so, because it seemed that just looking at it the wrong way would make it not work. The function had 4 different modes of operation, which overlapped in subtle ways.

Well, I decided to have some fun — rewriting it from scratch. :) Currently, the lookup function is 210 lines long, and it looks like it is working. It has only one mode, as it should. Since I am still not done, the original lookup code is still there, and it is used for the other 3 modes. I’ll hack on it some more, and either remove them completely, or collapse some of them because they seem a bit redundant.

In the end, even if the total code size is still around 340 lines, I’ll be happy. Having 340 lines of readable code is way better than 340 lines of barely readable code.
