More on Codesign
From the forum. 2009-02-18 02:18 AM.
I've had enough of this shit. I've only been looking into it two days but is it ever slow going. It's like a hall of mirrors. Let me explain. Hopefully someone will have some insight. This is a long story and I feel it's incredibly complex.
This is about CLIX and keeping CLIX secure. This is not to say it's not secure today. This is instead to say the overriding concern is always that it must be secure.
The Old Model
The old model used an embedded 'agent' application which functioned as a 'revolving door' for the Unix commands you issued.
CLIX.app/Contents/MacOS/CLIX
CLIX.app/Contents/Resources/clix
The CLIX (upper case) binary is the main Cocoa executable. The clix (lower case) in Resources is the embedded agent, the revolving door.
The integrity of the embedded agent clix is crucial. The main executable at times will be prompted to submit the user's password. The main executable must know the prompt is coming from approved code and not part of some hack attempt.
Therefore the main CLIX executable regularly performs a rather long series of integrity checks on the embedded executable clix. The integrity is checked against facts known about the embedded agent at build time. If the checks fail the main executable basically shuts down. Some cute reactions can be observed: when trying to run a command with a corrupted embedded agent the main executable can close the command sheet, remove all vestiges of the password, refuse to open any more sheets for commands and password, and so forth.
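The post doesn't spell out what those integrity checks are. Purely as an illustration, checking an embedded file against a size and digest recorded at build time might be sketched like this - the hash choice, function names, and constants here are hypothetical, not CLIX's actual code:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* FNV-1a: a simple, deterministic hash. Illustrative only - a
   cryptographic digest would be the stronger choice in real code. */
static uint64_t fnv1a(const unsigned char *data, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ULL;        /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;                 /* FNV prime */
    }
    return h;
}

/* Verify an on-disk agent against facts known at build time.
   Returns 1 only if the file matches both expected size and hash. */
static int agent_intact(const char *path, long expected_size,
                        uint64_t expected_hash)
{
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);
    if (size != expected_size) { fclose(f); return 0; }
    unsigned char *buf = malloc((size_t)size);
    if (!buf || fread(buf, 1, (size_t)size, f) != (size_t)size) {
        free(buf); fclose(f); return 0;
    }
    fclose(f);
    int ok = fnv1a(buf, (size_t)size) == expected_hash;
    free(buf);
    return ok;
}
```

On a failed check the GUI side would then refuse to proceed - close the sheet, wipe the password field, and so on, as described above.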
Can this architecture be hacked? The feasibility is practically zero but theoretically it's possible. But it's an incredibly long shot. To do this - starting with corrupting the embedded agent to harvest passwords - the hacker must thereafter corrupt the main executable to either dispense with the integrity checks or to use bogus data to perform them. This is to say the least a gargantuan task.
A Newer Model
A newer model - used for reasons which take too much time to go into here - involves dispensing with the embedded agent as an independent file and incorporating it into the main CLIX executable as a separate 'code fork' so to speak. By default CLIX.app will run as CLIX.app has always run, but if the CLIX.app executable CLIX is called in a special way it will act not as a GUI (Cocoa) app but only as the embedded agent clix used to act. It's a totally separate fork in the code.
This model preserves the integrity check code, but seemingly for little benefit. OS X binaries are loaded in toto at runtime; they do not serve as their own virtual memory backing store. And if they did, it would be reasonable to assume the system (the VM manager) would lock these files down. But it doesn't, and you can completely remove any Cocoa app from disk once it's running: aside from any further NIBs it might need, it will continue to run flawlessly. The Info.plist data is already loaded, as is the application icon and so forth.
The integrity check code is still there and CLIX will react to any change to what's on disk but nothing can at present get to what's already loaded into memory. The task for a hacker then becomes to hack the command processor code so it harvests passwords. Can this be done? I think so. I think it's still remote as can be. But I think it's more feasible than having to hack two executables at once.
The overall architecture of the newer model is far superior, and it can even be seen as theoretically more secure: an embedded agent is by its very nature shaky, and without integrity check code it becomes incredibly easy to hack as soon as you know how the main binary works. Hacking either model is still as remote as can be, but with the integrity check code in place the old model - which forces an attacker to corrupt two executables at once - may be seen as the more secure of the two, even though the newer model is otherwise a better idea all around.
A Third Model
A third model involves code signing. We've been through this several times already but now I'm starting to dig into it and it's like a hall of mirrors.
I have another app to compare with: Pacifist. Pacifist for 10.5 comes signed. Apple are actually recommending everyone sign all apps for 10.5.
Pacifist uses the system's authorisation services to authenticate you when you want to extract things as root. As I understand it the program can also get a pre-ordered authentication - you can submit your password ahead of time and Pacifist holds onto the authentication and uses it when next you need it. As I understand it this 'pre-order' authentication works only once. I might be wrong. Whatever.
The author claims he's using code signing today because someone could in theory corrupt his program with a 'virus' as he calls it and this would be disastrous as he's taking your password.
[Comment: actually he's not taking your password. I don't believe the author is confused here. The dialog prompts for the password, doesn't hold onto it, and only returns a 'go ahead' that can be used later.]
Basically I concluded that if we can get code signing to work on CLIX we should use it: even if the risk of corruption is nonexistent, the perception that one is more secure with code signing helps a lot of people - and in dire circumstances could possibly get us off the hook. So I've pursued the issue for the past few days. I don't think anyone is ever going to hack this app as it stands today either, but I want that final finishing touch to make the app perfect. So here we go.
I've thus got two apps to compare behaviourally: CLIX and Pacifist. Tonight I figured out how to sign CLIX so that any tampering with its binary or bundle files means it will refuse to run. I used options called 'hard' and 'kill' to (hopefully) instruct the code signing mechanism/operating system to refuse to run the app if something's wrong.
I used HexFiend to go into both CLIX and Pacifist and tamper with character constants. I also tried tampering with only the CLIX icon. Tampering with the CLIX icon means the app won't run. Tampering with character constants in either app means they won't run. You have to tamper within the limits of the binary as the code signer understood it - you can evidently append new code to the back end of a binary and nothing will happen. The apps will still run fine.
All I've done with CLIX is specify on a command line the following.
codesign -o hard -o kill -s Rixstep CLIX.app
'Rixstep' is the name of a certificate I made with Keychain's Certificate Assistant completely as the Apple tutorial instructed.
When I've done this I can no longer tamper with anything in the bundle and expect it to run. The same holds for Pacifist.
Tampering with these bundles produces slightly different results. When the CLIX bundle is tampered with CLIX will simply (silently) refuse to run. This is fine by me. When the Pacifist bundle is tampered with Pacifist will issue an alert and then exit. This tells me three things.
1. Pacifist's code sign is not set to automatically refuse to run the app if it's tampered with.
2. The alert message comes from a strings file in a localisation directory.
3. Pacifist is somehow getting word from somewhere that the integrity check for the code sign failed.
I spent the better part of two days trying to find a programmatic interface for this notification (or query). The Apple documentation repeatedly says 'if all you want to do is check the integrity of your code...' but nowhere does it explain how you're supposed to actually check this!
Pacifist has the string '/usr/bin/codesign' embedded in its main binary. It's possible it calls this program with its own path and the switch '-v' to check integrity; a nonzero return means the code is either not signed or corrupt.
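If that guess is right, the check reduces to spawning the tool against your own bundle path and inspecting the exit status. A sketch under that assumption - the verifier path is parameterised here, and nothing in it is Pacifist's actual code:

```c
#include <stdio.h>
#include <stdlib.h>

/* Run an external verifier (e.g. /usr/bin/codesign) with -v against
   a bundle path. Returns 1 if the verifier exits 0 (signature valid),
   0 on any nonzero exit (unsigned, tampered) or spawn failure. */
static int verified_by_tool(const char *tool, const char *bundle_path)
{
    char cmd[1024];
    /* quoting the path guards against spaces in bundle names */
    snprintf(cmd, sizeof cmd, "%s -v '%s' 2>/dev/null",
             tool, bundle_path);
    return system(cmd) == 0;
}
```

On a live system the call would be something like verified_by_tool("/usr/bin/codesign", pathToOwnBundle) - codesign's documented behaviour is to exit nonzero when verification fails.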
Pacifist also has two further binaries which might in some way be involved.
But What If...
I now created a second certificate. Just for fun. With a different identifier: Radsoft this time. The modus operandi was otherwise identical to the first certificate.
I built a new copy of CLIX, signed the binary with the Rixstep certificate, and ran the app. Ran right. I then altered the CLIX binary with HexFiend and forced codesign to overwrite the Rixstep signature with the new Radsoft certificate.
CLIX still worked fine.
But when I tried the same with Pacifist I got stopped.
I'm sitting here and trying to figure out how Pacifist can know somebody's tampered with its binary. If I overwrite the code sign - how can it know?
This is what it gets down to: no code signature is going to make anything more secure if anyone can go to a command line anywhere and simply overwrite the signature with another one once the application has been altered as desired.
Yet that bastard succeeds in detecting it anyway. And I cannot figure out how.
And What If...
Final considerations. We haven't tried hacking at signed binaries but that doesn't mean one can't. I don't see any reason why a good hacker can't use otool to figure out where the code signing is and remove it. It's just extra code plastered on. If you have access to the binary you should be able to do anything, including remove the code sign completely.
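For what it's worth, the signature isn't entirely anonymous extra code: in a Mach-O binary it hangs off an LC_CODE_SIGNATURE load command, which is exactly the kind of thing otool -l exposes. A hypothetical sketch of locating that load command - the structs mirror Apple's <mach-o/loader.h> but are declared locally so the sketch stands alone:

```c
#include <stdint.h>
#include <string.h>

/* Minimal local mirrors of <mach-o/loader.h> declarations. */
#define MH_MAGIC_64        0xfeedfacf
#define LC_CODE_SIGNATURE  0x1d

struct mach_header_64 {
    uint32_t magic, cputype, cpusubtype, filetype;
    uint32_t ncmds, sizeofcmds, flags, reserved;
};

struct load_command {
    uint32_t cmd, cmdsize;
};

/* Scan a 64-bit Mach-O image in memory; return 1 if an
   LC_CODE_SIGNATURE load command is present, else 0. */
static int has_code_signature(const unsigned char *image, size_t len)
{
    if (len < sizeof(struct mach_header_64)) return 0;
    struct mach_header_64 mh;
    memcpy(&mh, image, sizeof mh);
    if (mh.magic != MH_MAGIC_64) return 0;   /* not 64-bit Mach-O */

    size_t off = sizeof mh;
    for (uint32_t i = 0; i < mh.ncmds; i++) {
        struct load_command lc;
        if (off + sizeof lc > len) return 0;
        memcpy(&lc, image + off, sizeof lc);
        if (lc.cmd == LC_CODE_SIGNATURE) return 1;
        if (lc.cmdsize < sizeof lc) return 0; /* malformed */
        off += lc.cmdsize;
    }
    return 0;
}
```

Anyone with write access to the binary can find that load command the same way - which is the point: the signature is discoverable and, in principle, removable.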
And Terminal.app's binary ships from the factory as root:admin 0775. So any process running on an admin account can tamper with it. And it doesn't matter that it's been signed.
Yes it's a long shot to corrupt Terminal.app but so is corrupting CLIX.app.
So there are questions there I need answers to. If any of you have any suggestions then please post something. Much obliged.
Since then CLIX has adopted the bulletproof 'Houdini' integrity check system, which is incompatible with Apple code-signing. And although 'Houdini' is impervious to hacking, Apple code-signing isn't.