It's a long day living in Reseda. There's a freeway runnin' through the yard.
- T Petty
One thing all Apple computer users must keep in mind: they benefit only marginally from the supposed 'Rock Solid Foundation™' of Unix. And this benefit is diminishing all the time.
The benefit of Unix and open source is unquestionable: Unix has been 'out there' for quite a long time now and its code has been vetted and vetted and vetted.
And should a bug or vulnerability rear its head somewhere in a system, affected users need only refer to the module in question - most likely with its own website where the source remains readily available - source exactly as used in their own system.
That's the idea behind open source and that's why open source is so powerful.
It's also the reason security researcher Charlie Miller could so easily poke holes in the iPhone: Apple don't use open source 'out of the box' and they don't keep their open source modules up to date either.
But it's one thing to lapse (to the tune of several years in the case of the technologies used in the iPhone) in updating open source modules - and it's quite another to take these freely available modules, draw them within the walls of Cupertino, and do unspeakable and undocumented things with them.
Things weren't always like this of course. NeXTSTEP ran on a vanilla BSD Unix and OPENSTEP was platform independent. It was only when the Beige Wizards™ inside the fortress decided this stellar technology needed a 'makeover' only they could provide that things started tumbling downhill.
And for a while things were almost manageable. Some people - some developers - were still under the Apple-induced illusion OS X would become 'open' and 'Unix conformant' again. And with the advent of Leopard Apple are touting their 'Unix certificate'. But only the inadept are impressed at this point.
With every release OS X has less in common with Unix and puts greater distance between itself and open source. And with Leopard things have yet again taken a turn for the worse.
For when typical file system behaviour is no longer what you expect you're basically free fallin'.
One of the early clues things were not as they should be was found in the default file system HFS. And for that matter in Apple's makeover of UFS as well. HFS in particular cannot handle Unix hard links. Apple have long since admitted this - yet short of gutting not only their file system but their entire way of thinking there's no way they can fix it.
Tiger provided further clues something was not right. Prior to Tiger special utilities were needed to manage files with resource forks from the Unix command line - tools such as the ADC tools CpMac and MvMac.
Starting with Tiger, Apple started digging into venerable legacy Unix code to change the very way 'their' Unix worked. Standard thirty-year-old Unix commands such as 'cp' and 'mv' will today take care of not only resource forks but possibly other 'Apple' cruft as well.
And the recent 'massive data loss' and other brouhahas show how deep this goes and how dangerous it is: nobody knows where to look in the source code, and with Apple's code of secrecy - even when it comes to their 'open source' modules - there's no guarantee the culprit code can ever be found.
Apple's stillborn Leopard shows evidence things are even worse. Here's a small test anyone can perform with access to both Tiger and Leopard systems.
First a few words on how Unix file systems work. Unix files and directories are both treated as 'streams of data'. And they share the same 'rwx' permissions scheme for user, group, and other.
And it's important to realise that removing the 'w' bit from a file does not mean it can't be tampered with: the file can still be removed and replaced.
This is what fanboys discovered when they scrambled to find a cure for Opener and Oompa Loompa.
To protect against file corruption in this fashion the directory's 'w' bit must also be removed.
There's a converse to the above as well: a directory's 'w' bit can be removed to protect it from the addition of further (unwanted) files. The files already there can be edited as always - but they can't be renamed, they can't be removed, and new files can't be added either.
Like most features of Unix, this one's a cornerstone.
A strange - a very beige - thing happened with Leopard's NSDocumentController. As the name implies this is NeXTSTEP code (or at least was) and it has something to do with controlling documents. Documents as in files run by 'Cocoa document based applications'. Which today is just about everything.
Any time you try to open a file within an application; any time you try to 'save as' a file; any time you use ⌘S to save a file: it's NSDocumentController running the show.
The Likely Scenario
What happens when you hit ⌘S to save a file? Up to Leopard it was pretty easy to figure out. At some point sooner or later the 'overbody' would contact the 'underbody' - Unix - and call one of the following runtime APIs.
int open(const char *, int, mode_t);
FILE *fopen(const char *, const char *);
After that the data is written to the file.
ssize_t write(int, const void *, size_t);
size_t fwrite(const void *, size_t, size_t, FILE *);
And after that the file is closed.
int close(int);
int fclose(FILE *);
Simple enough. And remember: as long as the file has the 'w' bit set you can write to it. No matter what bits its directory happens to have.
And why? Because directories are 'streams of data' just like files. Unix directories contain only two things of interest: file names and inodes.
You can't rename a file in a write protected directory: to do so you have to write to the directory itself.
And this is by design.
The Unlikely Scenario
And suddenly with Apple's OS X 10.5 all this gets tossed topsy turvy on its head. Suddenly the system's NSDocumentController doesn't open files and write to them as Unix (and NeXTSTEP and OPENSTEP and OS X up to now) have always done - as outlined above.
Suddenly NSDocumentController first DELETES the file you want to write to, thereby unlinking its inode; it then attempts to create a NEW file with the same name - but of course with a NEW inode. Whether you think this is a safe procedure is up to you - and whilst you consider it, keep in mind recent scandals centred on keeping data in RAM only and relying completely on nothing bad happening.
And when you're finished shuddering over that consider this: what happens to a file in a write protected directory?
There's a correlate to the above of course. Whilst it's suddenly impossible to write to files you should be able to write to, it's also suddenly possible to write to files you shouldn't be able to write to.
The entire Unix file protection scheme gets tossed topsy turvy on its head.
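The inversion can be simulated with plain Unix calls - the file names here are made up, and whether NSDocumentController does exactly this internally is of course Apple's secret. Run as an unprivileged user, a read-only file in a writable directory behaves like this:

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    const char *path = "report.txt";        /* made-up names */
    const char *alias = "report-link.txt";

    /* A read-only file - with a hard link to it. */
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    write(fd, "original\n", 9);
    close(fd);
    chmod(path, 0444);
    link(path, alias);

    /* Writing in place is refused, exactly as Unix intends. */
    fd = open(path, O_WRONLY | O_TRUNC);
    printf("in-place write: %s\n",
           fd >= 0 ? "succeeded" : strerror(errno));
    if (fd >= 0) close(fd);

    /* But delete-and-recreate sails straight past the file's 'w' bit:
       removing the name is a write to the (writable) directory.       */
    unlink(path);
    fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd >= 0) { write(fd, "replaced\n", 9); close(fd); }
    printf("delete-and-recreate: %s\n",
           fd >= 0 ? "succeeded" : strerror(errno));

    /* And the hard link is now stranded on the old inode. */
    char buf[32] = {0};
    fd = open(alias, O_RDONLY);
    read(fd, buf, sizeof buf - 1);
    close(fd);
    printf("hard link still reads: %s", buf);

    unlink(path);
    unlink(alias);
    return 0;
}
```

The 'protected' contents are gone, replaced behind the 'w' bit's back - and any hard link to the original is silently orphaned on the old inode.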
The ramifications for system administrators are considerable: suddenly all the protection schemes they've used to protect their corporate file systems are out the window. All the remedies they've taken to make their users' systems secure and safe are for naught.
They have to go through directory after directory, think through everything all over again, and replace tried and true Unix algorithms with Apple's latest 'ideas'.
They need all the encouragement you can give them.