The NeXTonian

Interface Builder

What is so fantastic about Interface Builder? Why did Steve Jobs move mountains to get this technology on his NeXT computers?

To explain that, we have to look a bit at how GUI applications are traditionally constructed.

GUI applications differ from 'console' applications in that they are event driven, and in that a significant part of the code is dedicated to managing the visual interface.

The visual interface serves two purposes: it gives the user feedback on what is going on, and it gives the user a place to input data into the program.

In the parlance of the Smalltalk team at Xerox PARC in Palo Alto, this is the 'model-view-controller' (MVC) paradigm.

MVC applications are divided up logically into three parts.

  1. Model. The actual data the program is editing.
  2. View. What the user sees, clicks on, types into, etc.
  3. Controller. The code that coordinates the model and view.

Take a text editor: a text editor has a buffer which contains the actual contents of the file being edited. This is not the same thing as what you see on screen. What you see on screen is a view of the data - a view of the model.

If your text window is temporarily covered up by another window, the model must be used to render the text in the view again when your text window becomes visible - and that is the job of the controller.

If the user types in some text (in the view), then the model must adjust its internal data accordingly.

And so forth.
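In Cocoa terms the split might be sketched like this - a hypothetical text editor, with illustrative class names (textDidChange: is the notification Cocoa's text system really does send its delegate when the user types):

#import <Cocoa/Cocoa.h>

@interface Document : NSObject                 // model: the buffer itself
{
    NSMutableString *buffer;
}
@end

@interface DocumentController : NSObject       // controller: coordinates the two
{
    Document *document;                        // the model
    NSTextView *textView;                      // the view
}
- (void)textDidChange:(NSNotification *)note;  // view changed: update the model
@end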

Writing GUI programs is not easy, and writing them well - especially on other platforms - is nigh on impossible. This is because the tools available are so poor.

It is possible to code a NeXTSTEP/Cocoa application at the bare metal level, but it is not advisable, and certainly isn't worth it. Getting a Cocoa window on screen with the 'bare metal' approach would look something like this:

#import <Cocoa/Cocoa.h>

// MyView: the application's custom view class (a bare stub here).
@interface MyView : NSView
@end
@implementation MyView
@end

void init() {
    // Illustrative values - a real program would compute these.
    float x = 100.0, y = 100.0, width = 400.0, height = 300.0;
    unsigned int myWindowFlags = NSTitledWindowMask | NSClosableWindowMask |
            NSMiniaturizableWindowMask | NSResizableWindowMask;

    NSRect rc = NSMakeRect(x, y, width, height);
    NSSize size = NSMakeSize(width, height);
    NSView *myView;
    NSWindow *myWindow;

    myWindow = [[NSWindow alloc]
            initWithContentRect:rc
            styleMask:myWindowFlags
            backing:NSBackingStoreBuffered
            defer:NO];

    [myWindow setMinSize:size];
    [myWindow setTitle:@"MyApp"];
    [myWindow setShowsResizeIndicator:YES];
    [myWindow useOptimizedDrawing:YES];
    [myWindow setCanHide:YES];
    [myWindow center];

    myView = [[[MyView alloc] initWithFrame:rc] autorelease];

    [myWindow setContentView:myView];
    [myWindow setDelegate:myView];
    [myWindow makeKeyAndOrderFront:nil];
}

Of course, all you get with that is a window on screen - you don't get any logic. The most important part of the equation - the interaction between user and program - is missing. Doing that takes a lot more code!

Systems like Windows use resource scripts to define windows, their layout, and so on - but they don't go much further.

The main window for the XPT program Autolog looks like this:

1 DIALOG 46, 99, 206, 52
CAPTION "Autolog"
{
    LTEXT "DefaultUserName:", -1, 6, 8, 68, 8
    EDITTEXT EDT1, 78, 6, 64, 12, ES_AUTOHSCROLL
    LTEXT "DefaultDomainName:", -1, 6, 22, 68, 8
    EDITTEXT EDT2, 78, 20, 64, 12, ES_AUTOHSCROLL
    LTEXT "DefaultPassword:", -1, 6, 36, 68, 8
    EDITTEXT EDT3, 78, 34, 64, 12, ES_AUTOHSCROLL
    PUSHBUTTON "&Enable", BTN1, 150, 6, 50, 14
    PUSHBUTTON "&Disable", BTN2, 150, 23, 50, 14
}

It's a script, defining a dialog ('DIALOG') with dimensions 206x52 (in dialog units), the caption 'Autolog', and controls within: left-justified text controls ('LTEXT'), editable text controls ('EDITTEXT'), and a couple of buttons ('PUSHBUTTON').

That's the layout only - the dialog script does not 'live' in any sense of the word. Using the above definition, the programmer can write code to interface with the dialog and its controls.

There's no initialisation of these controls either - the programmer has to do it all in code at application startup. And when the user clicks a button - or when the program is about to exit - the program has to see this happening and run the appropriate code.

LRESULT APIENTRY MainWndProc(HWND hWnd, UINT message,
        WPARAM wParam, LPARAM lParam)
{
    switch (message) {
    case WM_COMMAND:
        switch (LOWORD(wParam)) {
        case BTN1: /* 'Enable' clicked */ break;
        case BTN2: /* 'Disable' clicked */ break;
        }
        return 0;
    }
    return DefWindowProc(hWnd, message, wParam, lParam);
}
'Messages' come in through the window procedure the application registers - here called 'MainWndProc'. The operating system posts these messages to the application; the application picks them up in its message loop and dispatches them back through the operating system, which in turn finds 'MainWndProc' and calls it. Anything the procedure doesn't handle gets passed on to the system default, 'DefWindowProc'.

If the message is 'WM_COMMAND' - if it comes from a menu item or a control such as a button - then the code must determine the numerical identifier of the command to take the appropriate action.
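That 'dispatching back' happens in the application's message loop - every Windows program has one, and it looks more or less like this (GetMessage, TranslateMessage, and DispatchMessage are the standard Win32 calls):

MSG msg;

while (GetMessage(&msg, NULL, 0, 0)) {
    TranslateMessage(&msg);    /* cook raw keystrokes into character messages */
    DispatchMessage(&msg);     /* the system calls MainWndProc from in here */
}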

Anyway, that's the 'Windows' method, and it's roughly the same for MFC and for Linux desktops. You can 'sketch' windows with an editor (or write the script yourself) and the program can use the sketch at runtime. You don't need to specify the height, the width, or anything else at runtime - you do that in the dialog script itself.

But that's how far it goes. And as Jean-Marie Hullot, and later Steve Jobs, knew, that was not far enough. Far from it.

The idea behind Interface Builder is to add 'intelligence' to these lifeless dialog scripts. You still design the window layout in the editor, but you do much more.

Interface Builder speaks of actions and outlets. An action is a message generated by a control in your visual interface. An outlet is a pointer in your code to an element in your visual interface.

When Interface Builder knows what 'class' will be controlling the window you're working on, when it knows what methods and variables are available, you're able to drag your mouse from the one to the other to make the connection.

Take the buttons 'Enable' and 'Disable' in the Autolog code example above. With Interface Builder, the programmer doesn't have to peek at the 'message queue' to see what's happening, and what button the user clicked on (if any); Interface Builder sets up your file so that when the user clicks a button, a specific action is sent directly to your code.

You might have a method called enable: - all you do is drag your mouse from the Enable button to your controlling class, and pick the enable: method out of the box and connect it. Likewise with the Disable button.

And if you need control over the buttons from time to time, to, for example, enable and disable them, then you can have outlets to do so. You can have two variables, enableButton and disableButton, in your code; you let Interface Builder see this code; you drag your mouse so Interface Builder makes a connection.
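On the code side, all of this comes down to a few declarations. A sketch of such a controlling class - the name 'MyController' is illustrative - where the IBOutlet and IBAction markers are what tell Interface Builder what it may connect:

#import <Cocoa/Cocoa.h>

@interface MyController : NSObject
{
    IBOutlet NSButton *enableButton;     // outlet: a pointer to the 'Enable' button
    IBOutlet NSButton *disableButton;    // outlet: a pointer to the 'Disable' button
}
- (IBAction)enable:(id)sender;           // action: sent when 'Enable' is clicked
- (IBAction)disable:(id)sender;          // action: sent when 'Disable' is clicked
@end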

At runtime there is nothing more to write or hookup. You can directly enable your Enable button by writing:

[enableButton setEnabled:YES];

And that's it. You don't have to go off searching for that button first so you can access it - it's all 'freeze dried' in your Interface Builder file. When your program loads at runtime, it's all connected together automatically.
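The earliest point you can touch those connections is awakeFromNib, which Cocoa sends once everything in the file has been unarchived and hooked up - continuing the hypothetical MyController above:

- (void)awakeFromNib
{
    // Both outlets are already connected - no lookup code needed.
    [enableButton setEnabled:YES];
    [disableButton setEnabled:NO];
}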

And because of the power of the NeXTSTEP/Cocoa classes, it's perfectly possible to create entire programs without a single line of code. All you have to do is connect the controls in your window together, and make it so each event - such as the click of a button - sends an action to another control, and so forth. Connect a slider to a text field's takeIntValueFrom: action, for example, and the field tracks the slider with no code behind it at all.


