Josef “Jeff” Sipek

Think!

Alright, it ain’t rocket science. When you are trying to decide which filesystem to use, and you see a 7-year-old article which talks about people having problems with the fs on Red Hat 7.x (running 2.4.18 kernels), are you going to assume that nothing changed? What if all the developers tell you that things changed? Are you still going to believe the slashdot article? Grrr… No one is forcing you to use this filesystem, so if you believe a 7-year-old /. article, then go away and don’t waste the developers’ & others’ time.

Benchmarking Is Hard, Let's Go Shopping

It’s been a while since I started telling people that benchmarking systems is hard. I’m here today because of an article about an article about an article from the ACM Transactions on Storage. (If anyone refers to this post, they should cite it as “blog post about an article about…” ;) .)

While the statement “benchmarking systems is hard” is true for most of systems benchmarking (yes, that’s an assertion without supporting data, but this is a blog and so I can state these opinions left and right!), the underlying article (henceforth the article) is about filesystem and storage benchmarks specifically.

For those of you who are getting the TL;DR feeling already, here’s a quick summary:

  1. FS benchmarking is hard to get right.
  2. Many commonly accepted fs benchmarks are wrong.
  3. Many people misconfigure benchmarks yielding useless data.
  4. Many people don’t specify their experimental setup properly.

Hrm, I think I just summarized a 56-page journal article in 4 bullet points. I wonder what the authors will have to say about this :)

On a related note, it really bothers me when regular people attempt to “figure out” which filesystem is the best, and they share their findings. It’s the sharing part. Why? Because they are uniformly bad.

Here’s an example of a benchmark gone wrong…

Take Postmark. It’s a rather simple benchmark that simulates the IO workload of an email server. Or does it? What do mail servers do? They read. They write. But above all, they try to ensure that the data actually hit the disk. POSIX specifies a wonderful way to ensure data hits the disk - fsync(2). (You may remember fsync from O_PONIES & Other Assorted Wishes.) So, a typical email server will append a new email to the mail box, and then fsync it. Only then will it acknowledge receipt of the email to the remote host. How often does Postmark run fsync? The answer is simple: never.
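
To make that concrete, here is roughly what the delivery path looks like. This is only a minimal sketch, not code from any real MTA; the function name and the mbox-style delivery are assumptions for illustration:

  /*
   * Sketch of an mbox-style delivery.  Illustrative only -- real mail
   * servers are more involved, but the fsync() before acknowledging the
   * message is the important part, and it is the part Postmark never
   * exercises.
   */
  #include <fcntl.h>
  #include <sys/types.h>
  #include <unistd.h>

  int deliver(const char *mbox, const void *msg, size_t len)
  {
      int fd = open(mbox, O_WRONLY | O_APPEND | O_CREAT, 0600);

      if (fd < 0)
          return -1;

      if (write(fd, msg, len) != (ssize_t) len) {
          close(fd);
          return -1;
      }

      /* make sure the message actually hit stable storage... */
      if (fsync(fd) < 0) {
          close(fd);
          return -1;
      }

      close(fd);

      /* ...and only now acknowledge the message to the remote host */
      return 0;
  }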

Now you may be thinking…I’ve never heard of Postmark, so who uses it? Well, according to the article (the 56-page-long one), out of the 107 papers surveyed, 30 used Postmark. Postmark is so easy to run that even non-experts try to use it. (The people at Phoronix constantly try to pretend that they figured out benchmarking. For example, in EXT4, Btrfs, NILFS2 Performance Benchmarks they are shocked (see page 2) that some filesystems take 500 times longer for one of their silly tests, even though people have pointed out to them what write barriers are, and that they have an impact on performance.)

Granted, non-experts are expected to make mistakes, but you’d expect that people at Sun would know better. Right? Well, they don’t. In their SOLARIS ZFS AND MICROSOFT SERVER 2003 NTFS FILE SYSTEM PERFORMANCE WHITE PAPER (emphasis added by me):

This white paper explores the performance characteristics and differences of Solaris ZFS and the Microsoft Windows Server 2003 NTFS file system through a series of publicly available benchmarks, including BenchW, Postmark, and others.

Sad. Perhaps ZFS isn’t as good as people make it out to be! ;)

Alright, fine, Postmark doesn’t fsync, but it should be otherwise OK, right? Wrong again! Take the default parameters (table taken from the article):

Parameter                 Default Value     Number Disclosed (out of 30)
File sizes                500-10,000 bytes  21
Number of files           500               28
Number of transactions    500               25
Number of subdirectories  0                 11
Read/write block size     512 bytes         7
Operation ratios          equal             16
Buffered I/O              yes               6
Postmark version          -                 7

First of all, note that some parameters weren’t specified by a large number of papers. The other interesting thing is the default configuration. Suppose that all 500 files grow to 10,000 bytes (they’ll actually have random sizes in the specified range). That means that the maximum size they’ll take up is 5,000,000 bytes, or a bit under 5 MiB. Since there’s no fsync, chances are that the data will never hit the disk! This easily explains why the default configuration executes in a fraction of a second. These defaults were reasonable many years ago, but not today. As the article points out:

Having outdated default parameters creates two problems. First, there is no standard configuration, and since different workloads exercise the system differently, the results across research papers are not comparable. Second, not all research papers precisely describe the parameters used, and so results are not reproducible.

Later on in the 3 pages dedicated to Postmark, it states:

An essential feature for a benchmark is accurate timing. Postmark uses the time(2) system call internally, which has a granularity of one sec. There are better timing functions available (e.g., gettimeofday) that have much finer granularity and therefore provide more meaningful and accurate results.
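
To see why one-second granularity is a problem for a benchmark whose default configuration finishes in a fraction of a second, consider this small sketch (not Postmark’s code; do_benchmark_run is a made-up stand-in for the workload):

  /*
   * Illustration of the timing granularity issue -- not Postmark code.
   * time(2) can only report "0 seconds elapsed" for a sub-second run,
   * while gettimeofday(2) reports microseconds.
   */
  #include <stdio.h>
  #include <sys/time.h>
  #include <time.h>
  #include <unistd.h>

  static void do_benchmark_run(void)
  {
      usleep(250000);    /* stand-in for a sub-second workload */
  }

  int main(void)
  {
      struct timeval tv0, tv1;
      time_t t0, t1;

      t0 = time(NULL);
      gettimeofday(&tv0, NULL);

      do_benchmark_run();

      t1 = time(NULL);
      gettimeofday(&tv1, NULL);

      /* most likely prints "0 s"... */
      printf("time(2):         %ld s\n", (long) (t1 - t0));

      /* ...while this shows the real, sub-second elapsed time */
      printf("gettimeofday(2): %.6f s\n",
             (tv1.tv_sec - tv0.tv_sec) +
             (tv1.tv_usec - tv0.tv_usec) / 1e6);

      return 0;
  }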

Anyway, now that we have beaten up Postmark and it is cowering in the corner of the room, let’s take a look at another favorite benchmark people like to use — a compile benchmark.

The great thing about compile benchmarks is that they are really easy to set up. Chances are that someone interested in running benchmarks already has some toolchain set up - so a compile benchmark consists of timing the compile! Easy? Definitely.

One problem with compile benchmarks is that they depend on a whole lot of state. They depend on the hardware configuration, software configuration (do you have libfoo 2.5 or libfoo 2.6 installed?), as well as the version of the toolchain (gcc 2.95? 2.96? 3.0? 3.4? 4.0? 4.2? or is it LLVM? or MSVC? or some other compiler? what about the linker?).

The other problem with them is…well, they are CPU bound. So why are they used for filesystem benchmarks? My argument is that they are useful to demonstrate that the change the researchers made does not incur a significant amount of CPU overhead.

Anyway, I think I’ll stop ranting now. I hope you learned something! You should go read at least the Linux Magazine or Byte and Switch article. They’re good reading. If you are brave enough, feel free to dive into the 56 pages of text. All of these will be less rant-y than this post. Class dismissed!

Amazon Kindle SNAFU

This is why I don’t like DRM or any other scheme which allows someone else to grant/remove access to content which I should have access to - if I buy a (dead-tree) book, I should have access to it until I get rid of it (trash it, donate it, give it to a friend, etc., etc.).

The link talks about books, but the same ideas apply to DRM’d music, movies, and any other content.

The Science News Cycle

May 18, 2009: PhD Comics

Webinar

Today, I found out that “webinar” is not a new concoction. The OED tells us:

orig. U.S. Business

A seminar conducted over the Internet, allowing participants to interact with one another in real time.

1997 DM News (Nexis) 8 Dec. 8 The organization of the Webinar’s content and presentation is being handled by Clarify.

2004 Post-Standard (Syracuse, N.Y.) 18 Apr. E2/6 Participants in the Webinars can e-mail questions to experts, with some addressed during the sessions.

2008 Wall St. Jrnl. 24 June A12 (advt.) Attending our free monthly webinars gives you great exposure to new ideas that can drive your business.

The first mention seems to be from over 11 years ago! I just hope that this “word” disappears soon. :(

In general, I have nothing against new words…well, as long as there is a reason to have them. There is really no reason to add “webinar” to the language as seminar works just as well, if not better.

Jaunty Jackass

From IRC:

<jeffpc> Event: Happy Jaunty Release Day Everyone!
<jeffpc> Where: Corner Brewery
<jeffpc> Why: Because Ubuntu is awesome! Because we're awesome! Right on.
* jeffpc groans
<jeffpc> obiwan: don't you ever take drugs, young man!
<jeffpc> obiwan: or you'll turn into one of those ubuntu lovers
<obiwan> hahahahahahahahhahahaahahahhaa
<obiwan> I was wondering why you cared about Jaunty Jackass or whatever the
         hell they call their releases
<jeffpc> hahaha
<jeffpc> that's awesome
<jeffpc> Jaunty Jackass
<obiwan> they just call it "Jaunty" like we're supposed to mentally fill in
         the other word
<obiwan> well there you go

SMTP

This is a (long overdue) reply to Ilya’s post: SMPT – Time to chuck it.

I’m going to quote it here, and reply to everything in it. Whenever I say "you," I mean Ilya. So, with that said, let’s get started.

E-mail, in particular SMTP (Simple Mail Transfer Protocol) has become an integral part of our lives, people routinely rely on it to send files, and messages. At the inception of SMTP the Internet was only accessible to a relatively small, close nit community; and as a result the architects of SMTP did not envision problems such as SPAM and sender-spoofing. Today, as the Internet has become more accessible, scrupulous people are making use of flaws in SMTP for their profit at the expense of the average Internet user.

Alright, this is pretty much the only thing I agree with.

There have been several attempts to bring this ancient protocol in-line with the current society but the problem of spam keeps creeping in. At first people had implemented simple filters to get rid of SPAM but as the sheer volume of SPAM increased mere filtering became impractical, and so we saw the advent of adaptive SPAM filters which automatically learned to identify and differentiate legitimate email from SPAM. Soon enough the spammers caught on and started embedding their ads into images where they could not be easily parsed by spam filters.

A history lesson…still fine.

AOL (America On Line) flirted with other ideas to control spam, imposing email tax on all email which would be delivered to its user. It seems like such a system might work but it stands in the way of the open principles which have been so important to the flourishing of the internet.

AOL (I believe Microsoft had a similar idea) really managed to think of something truly repulsive. The postal system in the USA didn’t always work the way it does today. A long time ago, the recipient paid for the delivery. AOL’s idea seems a lot like that.

There are two apparent problems at the root of the SMTP protocol which allow for easy manipulation: lack of authentication and sender validation, and lack of user interaction. It would not be difficult to design a more flexible protocol which would allow for us to enjoy the functionality that we are familiar with all the while address some, if not all of the problems within SMTP.

To allow for greater flexibility in the protocol, it would first be broken from a server-server model into a client-server model.

This is the first point I 100% disagree with…

That is, traditionally when one would send mail, it would be sent to a local SMTP server which would then relay the message onto the next server until the email reached its destination. This approach allowed for email caching and delayed-send (when a (receiving) mail server was off-line for hours (or even days) on end, messages could still trickle through as the sending server would try to periodically resend the messages.) Todays mail servers have very high up times and many are redundant so caching email for delayed delivery is not very important.

"Delayed delivery is not very important"?! What? What happened to the whole "better late than never" idiom?

It is not just about uptime of the server. There are other variables one must consider when thinking about the whole system of delivering email. Here’s a short list; I’m sure I’m forgetting something:

  • server uptime
  • server reliability
  • network connection (all the routers between the server and the "source") uptime
  • network connection reliability

It does little to no good if the network connection is flaky. Ilya is arguing that that’s rarely the case, and while I must agree that it isn’t as bad as it used to be back in the ’80s, I also know from experience that networks are very fragile and it doesn’t take much to break them.

A couple of times over the past few years, I noticed that my ISP’s routing tables got screwed up. Within two hours of such a screwup, things returned to normal, but that’s 2 hours of "downtime."

Another instance of a network going haywire: one day, at Stony Brook University, the internet connection stopped working. Apparently, a compromised machine on the university campus caused a campus edge device to become overwhelmed. This eventually led to a complete failure of the device. It took almost a day until the compromised machine was disconnected, the failed device reset, and the backlog of all the traffic on both sides of the router settled down.

Failures happen. Network failures happen frequently. More frequently than I would like them to, more frequently than the network admins would like them to. Failures happen near the user and far away from the user. One can hope that dynamic routing tables keep the internet as a whole functioning, but even those can fail. Want an example? Sure. Not that long ago, the well-known video repository YouTube disappeared off the face of the Earth…well, to some degree. As this RIPE NCC RIS case study shows, on February 24, 2008, Pakistan Telecom decided to announce BGP routes for YouTube’s IP range. The result was that if you tried to access any of YouTube’s servers on the 208.65.152.0/22 subnet, your packets were directed to Pakistan. For about an hour and twenty minutes that was the case. Then YouTube started announcing more granular subnets, diverting some of the traffic back to itself. Eleven minutes later, YouTube announced even more granular subnets, diverting the large bulk of the traffic back to itself. A few dozen minutes later, PCCW Global (Pakistan Telecom’s provider, responsible for forwarding the "offending" BGP announcements to the rest of the world) stopped forwarding the incorrect routing information.

So, networks are fragile, which is why having an email transfer protocol that allows for retransmission is a good idea.

Instead, having direct communication between the sender-client and the receiver-server has many advantages: opens up the possibility for CAPTCHA systems, makes the send-portion of the protocol easier to upgrade, and allows for new functionality in the protocol.

Wow. So much to disagree with!

  • CAPTCHA doesn’t work
  • What about mailing lists? How does the mailing list server answer the CAPTCHAs?
  • How does eliminating server-to-server communication make the protocol easier to upgrade?
  • New functionality is a nice thing in theory, but what do you want from your mail transfer protocol? I, personally, want it to transfer my email between where I send it from and where it is supposed to be delivered to.
  • If anything, eliminating the server-to-server communication would cause the MUAs to be "in charge" of the protocols. This means that at first there would be many competing protocols, until one takes over - not necessarily the better one (Betamax vs. VHS comes to mind).
  • What happens in the case of overzealous firewall admins? What if I really want to send email to bob@example.com, but the firewall (for whatever reason) is blocking all traffic to example.com?

Spam is driven by profit, the spammers make use of the fact that it is cheap to send email. Even the smallest returns on spam amount to good money. By making it more expensive to send spam, it would be phased out as the returns become negative. Charging money like AOL tried, would work; but it is not a good approach, not only does it not allow for senders anonymity but also it rewards mail-administrators for doing a bad job (the more spam we deliver the more money we make).

Yes, it is unfortunately true, money complicates things. Money tends to be the reason why a superior design fails to take hold, and instead something inferior wins - think Betamax vs. VHS. This is why I think something similar would happen with competing mail transfer protocols - the one with the most corporate backing would win, not the one that’s best for people.

Another approach is to make the sender interact with the recipient mail server by some kind of challenge authentication which is hard to compute for a machine but easy for a human, a Turing test. For example the recipient can ask the senders client to verify what is written on an obfuscated image (CAPTCHA) or what is being said on a audio clip, or both so as to minimize the effect on people with handicaps.

Nice thought about the handicapped, but you are forgetting that only 800-900 million people speak English (see Wikipedia article: Wikipedia). That is something on the order of 13-15 percent of the world’s population. Sorry, but "listening comprehension" tests are simply not viable.

Obfuscated image CAPTCHAs (Wikipedia article: CAPTCHA) are "less" of a problem, but then again, one should consider the blind. I am not blind, and as a matter of fact my vision is still rather good (even after years of staring at computer screens), but at times I’m not sure what those "distorted text" CAPTCHAs are even displaying. I can’t even begin to imagine what it must be like for anyone with poor vision.

You seem to be making the assumption that most, if not all, legitimate email comes from humans. While that may be true for your average home user, let’s not forget that email is used by more technical people as well. These people will, and do, use email in creative ways. For example, take me…I receive lots of emails that are generated by all sorts of scripts that I wrote over time. These emails give me the status of a number of systems I care about, and reminders about upcoming events. All in all, you could say that I live inside email. You can’t do a CAPTCHA for the process sending the automated email (there’s no human sending it), and if you do the CAPTCHA on the receiving side, you’re just adding a "click here to display the message" wart to the mail client’s user interface.

Just keep in mind that all those automated emails you get from "root" or even yourself were sent without a human answering a CAPTCHA.

It would be essential to also white list senders so that they do not have to preform a user-interactive challenge to send the email, such that mail from legitimate automated mass senders would get through (and for that current implementation of sieve scripts could be used). In this system, if users were to make wide use of filters, we would soon see a problem. If nearly everyone has a white list entry for Bank Of America what is to prevent a spammer to try to impersonate that bank?

White listing is really annoying, and as you point out, it doesn’t work.

And so this brings us to the next point, authentication, how do you know that the email actually did, originate from the sender. This is one of the largest problems with SMTP as it is so easy to fake ones outgoing email address. The white list has to rely on a verifiable and consistent flag in the email. A sample implementation of such a control could work similar to the current hack to the email system, SPF, in which a special entry is made in the DNS entry which says where the mail can originate from. While this approach is quite effective in a sever-server architecture it would not work in a client-server architecture. Part of the protocol could require the sending client to send a cryptographic-hash of the email to his own receiving mail server, so that the receiving party’s mail server could verify the authenticity of the source of the email. In essence this creates a 3 way handshake between the senders client, the senders (receiving) mail server and the receiver’s mail server.

I tend to stay away from making custom authentication protocols.

In this scheme, what guarantees that the client and his "home server" aren’t both trying to convince the receiving server that the email is really from whom they say it is? In Kerberos, you have a key for each system, and a password for each user. The Kerberos server knows it all, and this central authority is why things work. With SSL certificates, you rely on the strength of the crypto used, as well as blind faith in the certificate authority.

At first it might seem that this process uses up more bandwidth and increases the delay of sending mail but one has to remember that in usual configuration of sending email using IMAP or POP for mail storage one undergoes a similar process,

Umm…while possible, I believe that the very large majority of email is sent via SMTP (and I’m not even counting all the spam).

first email is sent for storage (over IMAP or POP) to the senders mail server and then it is sent over SMTP to the senders email for redirection to the receivers mail server. It is even feasible to implement hooks in the IMAP and POP stacks to talk to the mail sending daemon directly eliminating an additional socket connection by the client.

Why would you want to stick with IMAP and POP? They do share certain ideas with SMTP.

For legitimate mass mail this process would not encumber the sending procedure as for this case the sending server would be located on the same machine as the senders receiving mail server (which would store the hash for authentication), and they could even be streamlined into one monolithic process.

Not necessarily. There are entire businesses that specialize in mailing list maintenance. You pay them, and they give you an account with software that maintains your mailing list. Actually, it’s amusing how similar it is to what spammers do. The major difference is that in the legitimate case, the customer supplies their own list of email addresses to mail. Anyway, my point is, in these cases (and they are more common than you think) the sending machine is not the same computer that the "from" domain’s MX record points to.

Some might argue that phasing out SMTP is a extremely radical idea, it has been an essential part of the internet for 25 years.

Radical? Sure. But my problem is that there is no replacement. All the ideas you have listed have multiple problems - all of which have been identified by others. And so here we are, no closer to the solution.

But then, when is the right time to phase out this archaic and obsolete protocol, or do we commit to use it for the foreseeable future. Then longer we wait to try to phase it out the longer it will take to adopt something new. This protocol should be designed with a way to coexist with SMTP to get over the adoption curve, id est, make it possible for client to check for recipients functionality, if it can accept email by this new protocol then send it by it rather than SMTP.

Sounds great! What is this protocol that would replace SMTP? Oh, right there isn’t one.

The implementation of such a protocol would take very little time, the biggest problem would be with adoption.

Sad, but true.

The best approach for this problem is to entice several large mail providers (such as Gmail or Yahoo) to switch over. Since these providers handle a large fraction of all mail the smaller guys (like myself) would have to follow suit.

You mentioned Gmail…well, last I heard, Gmail’s servers were acting as open proxies. Congratulations! One of your example "if they switch things will be better" email providers is allowing the current spam problem to go on. I guess that makes you right. If Gmail were to use a protocol that didn’t allow for spam to exist, then things would be better.

There is even an incentive for mail providers to re-implement mail protocol, it would save them many CPU-cycles since Bayesian-spam-filters would no longer be that important.

What about generating all those CAPTCHAs you suggested? What about hashing all those emails? Neither activity is free.

By creating this new protocol we would dramatically improve an end users experience online, as there would be fewer annoyances to deal with. Hopefully alleviation of these annoyances will bring faster adoption of the protocol.

I really can’t help but read that as "If we use this magical protocol that will make things better, things will get better!" Sorry, but unless I see some protocol which would be a good candidate, I will remain sceptical.

As a side note, over the past 90 days, I received about 164MB of spam that SpamAssassin caught and procmail promptly shoved into the spam mail box. Do I care? Not enough to jump on the "let’s reinvent the email system" bandwagon. Sure, it eats up some of my server’s clock cycles and some bandwidth, and the spam that gets to me is the few pieces that manage to get through and show up in my inbox. Would I be happy if I didn’t have to use a spam filter, and not have to delete the few random spams by hand? Sure, but at least for the moment, I don’t see a viable alternative.

O_PONIES & Other Assorted Wishes

You might have already heard about ext4 “eating” people’s data. That’s simply not true.

While I am far from being a fan of ext4, I feel an obligation to set the record straight. But first, let me give you some references with an approximate timeline. I’m sure I managed to leave out a ton of details.

In mid-January, a bug titled Ext4 data loss showed up in the Ubuntu bug tracker. The complaining users apparently were losing data on system crashes when using ext4. (The fact that Ubuntu likes to include every unstable & crappy driver in their kernels doesn’t help at all.) As part of the discussion, Ted Ts’o explained that the problem wasn’t with ext4 but with applications that did not ensure that the data they wrote was actually safe. The people did not like hearing that.

Things went pretty quiet until mid-March. That’s when a slashdot article made it painfully obvious that many of today’s apps are buggy. Some applications (KDE being a whole suite of applications) had gotten used to the fact that ext3 was a very common filesystem used by Linux installations. More specifically, they got used to the behavior that ext3’s default mount option (data=ordered) provided. This is really the issue. The application developers assumed that the POSIX interface gave them more guarantees than it did!

To make matters worse, the one way to ensure that the contents of a file get to the disk (the fsync system call) is very expensive on ext3. So over the past (almost) decade that ext3 has been around, application developers have been “trained” (think Wikipedia article: Pavlov reflexes) to not use fsync — on ext3, it’s expensive and the likelihood of you losing data is much lower due to the default mount options. ext4’s fsync implementation, much like other filesystems’ implementations (e.g., XFS), does not suffer from this. (You may have heard about fsync on ext3 being expensive almost a year ago when Firefox was hit by this: Fsyncers and curveballs (the Firefox 3 fsync() problem). Note that in this case, as Ted Ts’o points out, the problem is that Firefox uses the same thread to draw the UI and do IO. That’s plain stupid.)

Over the next few days, Ted Ts’o posted two blog entries about delayed allocation (people seem to like to blame it for data loss): Delayed allocation and the zero-length file problem, and Don’t fear the fsync!.

Around the same time, Eric Sandeen wrote a blurb about the state of affairs: fsync, sigh. He points out that XFS faced the same issue years ago. When the application developers were confronted about their applications being broken, they just put their fingers in their ears, hummed loudly, and yelled “I can’t hear you!” There is a word for that, and here’s the OED definition for it:

denial,

The asserting (of anything) to be untrue or untenable; contradiction of a statement or allegation as untrue or invalid; also, the denying of the existence or reality of a thing.

The problem is application developers not wanting to believe that it’s an application problem. Well, it really is! Not only are those apps broken, but they are not portable. AIX, IRIX, or Solaris will not give you the same guarantees as ext3!

(Eric is also trying to fight the common misconception that XFS nulls files: XFS does not null files, and requires no flux, which I assure you is not the case.)

About a week later, on an episode of Free Software Round Table, the problem was discussed a bit. They got most of it right :) (Here’s a 55MB mp3 of the show: 2009-03-21.)

When April 1st came about, the linux-fsdevel mailing list got a patch from yours truly: [PATCH] fs: point out any processes using O_PONIES. (The pony thing…it’s a bit of an inside joke among the Linux filesystem developers.) The idea of having O_PONIES first came up in #linuxfs on OFTC. While I don’t remember who first thought of it (my guess would be Eric), I know for sure that it wasn’t me. At the same time, I couldn’t help it, and considering that the patch took only a minute to make (and compile test), it was well worth it.

A few days later, during the Linux Storage and Filesystem workshop, the whole fsync issue got some discussion time. (See “Rename, fsync, and ponies” at Linux Storage and Filesystem workshop, day 1.) The part that really amused me:

Prior to Ted Ts’o’s session on fsync() and rename(), some joker filled the room with coloring-book pages depicting ponies. These pages reflected the sentiment that Ted has often expressed: application developers are asking too much of the filesystem, so they might as well request a pony while they’re at it.

In the comments for that article you can find Ted Ts’o saying:

Actually, it was Josef ’Jeff’ Sipek who deserves the first mention of application programmers asking for pones, when he posted an April Fools patch submission for the new open flag, O_PONIES — unreasonable file system assumptions desired.

Another file system developer who had worked on two major filesystems (ext4 and XFS) had a t-shirt on that had O_PONIES written on the front. And the joker who distributed the colouring book pages with pictures of ponies was another file system developer working yet another next generation file system.

Application programmers, while they were questioning my competence, judgement, and even my paternity, didn’t quite believe me when I told them that I was the moderate on these issues, but it’s safe to say that most of the file system developers in the room were utterly unsympathetic to the idea that it was a good idea to encourage application programmers to avoid the use of fsync(). About the only one who was also a moderate in the room was Val Aurora (formerly Henson). Both of us recognize that ext3’s data=ordered mode was responsible for people deciding that fsync() was harmful, and I’ve said already that if we had known how badly it would encourage application writers to Do The Wrong Thing, I would have pushed hard not to make data=ordered the default. Unfortunately, memory wasn’t as plentiful in those days, and so the associated page writeback latencies wasn’t nearly as bad ten years ago.

Hrm, I’m not sure how to take it…he makes it sound like I’m an extremist. Jeff — a freedom fighter for sanity of filesystem interfaces! :) As I said, I can’t take credit for the idea of O_PONIES. As I was writing this entry, I mentioned it to Eric and he promptly wrote an entry of his own: Coming clean on O_PONIES. It looks like he isn’t sure that he was the one to invent it! I’ll give him credit for it anyway.

The next day, a group photo of the attendees was taken… You can clearly see Val Aurora wearing an O_PONIES shirt. The idea was Eric’s, and as far as I know, he had his shirt the first day.

Fedora 11 is supposedly going to use ext4 as the default filesystem. When Ars Technica published an article about it (First look: Fedora 11 beta shows promise), some misguided people, thinking that ext4 eats your data, left a bunch of comments… *sigh*

Well, there you have it. That’s the summary of events with some of my thoughts interleaved. If you are writing a userspace application that does file IO, do the right thing: fsync the data you care about (or at least fdatasync it).
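
In case it helps, here is one way the "write a new file, then rename it over the old one" sequence can be made crash-safe. This is only a sketch under the usual POSIX assumptions (minimal error handling, made-up function name; a paranoid implementation would also fsync the containing directory so the rename itself is durable):

  /*
   * Sketch of a crash-safe "save file" using the write-new-then-rename
   * pattern.  Simplified; real code needs better error handling.
   */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/types.h>
  #include <unistd.h>

  int save_file(const char *path, const void *data, size_t len)
  {
      char tmp[4096];
      int fd;

      snprintf(tmp, sizeof(tmp), "%s.tmp", path);

      fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
      if (fd < 0)
          return -1;

      if (write(fd, data, len) != (ssize_t) len) {
          close(fd);
          unlink(tmp);
          return -1;
      }

      /*
       * This is the step the "trained" applications skip: get the data
       * onto stable storage *before* replacing the old copy.
       */
      if (fsync(fd) < 0) {
          close(fd);
          unlink(tmp);
          return -1;
      }

      close(fd);

      /* atomically replace the old file with the fully written new one */
      return rename(tmp, path);
  }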

TV "Science"

Over the past year or so, I’ve tended to let a few comics accumulate before reading them. One of these accumulated comics is this PhD Comics strip:

If TV science was more like real science

I have to say: bravo! MythBusters always bothered me. I find their pseudo-scientific approach irritating. It’s nice to see that I’m not the only one.

Fedora 10

Recently, I was tasked with installing Fedora 10 on two systems. I have to say, it’s a major pain. Here are my observations.

Live CD

The Fedora CD I got my hands on was a live CD with an icon allowing you to install it to a disk. Neat idea (I first saw it a few years ago with Ubuntu; they used Unionfs with tmpfs) to use dm-zero (IIRC) to provide a virtual disk with stuff. The downside is…it takes ages to get into the installer.

Partition editor crash

I experienced a crash of the installer when I was setting up the partition table. It was pretty annoying, but at least I just had to start the installer again and not reboot. (I didn’t bother saving the trace, but it was related to a NoneType lacking the isEncrypted attribute.)

NetworkManager

This is just plain ugly. Why do I want NetworkManager running on a server? Grr!

No choices

And worst of all…there were no choices to make during the installation process. So crap like X11 and Gnome got installed…I don’t need either on a server system.

Are Linux distros really turning into steaming piles of bloatware? I hope not!
