I'm sure you've seen it too, 'cause it was on Slashdot and if you're fishing here, you're definitely an online junkie. I'm talking about that Anandtech article, of course. The one that tries to compare OS X to Linux and a PowerPC to an x86. Lemme see...this one. No more mysteries, they promise!
None of it's pleasant, but what's the worst part? The mySQL results. I know it's painful - you don't have to look again. All right. So why was the G5, at best, 2/3 the speed of any of the other machines?
I don't have an official or authoritative answer. But I think this might have a lot to do with it.
When you commit a transaction to a database, you want to be sure that the data is fully written. If your machine loses power half a second after the transaction completes, you want to know that the data made it to disk. To enable this, Mac OS X provides the F_FULLFSYNC command, which you call with fcntl(). This forces the OS to write all the pending data to the disk drive, and then forces the disk drive to write all the data in its write cache to the platters. Or rather, it tries to - some ATA and Firewire drives lie and don't actually flush the cache. (The check's in the mail, really...)
F_FULLFSYNC is pretty slow. But if OS X didn't do it, you might end up with no data written, partial data written, or even out-of-order writes if you lose power suddenly.
Well! mySQL performs an F_FULLFSYNC on OS X but not on Linux; as far as I know, Linux doesn't provide a way to do this.
It's true that mySQL calls fsync() on both, but fsync() doesn't force the drive to flush its write cache, so it doesn't necessarily write out the data. Check out http://dev.mysql.com/doc/mysql/en/news-4-1-9.html and Dominic's comments at the bottom. Oh, and if you missed it above, look at this Apple mailing list post.
So OS X takes a performance hit in order to fulfill the contract of transactions. Linux is faster, but if you lose your wall juice, your transaction may not have been written, or may have been only partially written, even though it appeared to succeed. And that's my guess as to the main reason OS X benchmarked slower on mySQL.
Again, this isn't an official explanation, and I'm not qualified to give one. But given that Anandtech missed this issue entirely, I'm not sure they are either.
What about Anandtech's own theory, here? Could the mySQL results be explained by the LMbench results? I must confess, this part left me completely bewildered.
- They claim that making a new thread is called "forking". No, it's not. Calling fork() is forking, and fork() makes processes, not threads.
- They claim that Mac OS X is slower at making threads by benchmarking fork() and exec(). I don't follow this train of thought at all. Making a new process is substantially different from making a new thread, less so on Linux, but very much so on OS X. And, as you can see from their screenshot, there is one mySQL process with 60 threads; neither fork() nor exec() is being called here.
- They claim that OS X does not use kernel threads to implement user threads. But of course it does - see for yourself.
	/* Create the Mach thread for this thread */
	PTHREAD_MACH_CALL(thread_create(mach_task_self(), &kernel_thread), kern_res);
- They claim that OS X has to go through "extra layers" and "several threading wrappers" to create a thread. But anyone can see in that source file that a pthread maps pretty directly to a Mach thread, so I'm clueless as to what "extra layers" they're talking about.
- They guess a lot about the important performance factors, but they never actually profile mySQL. Why not?