
The Target-Action Paradigm

Of ad hoc copycats.


Every graphical environment works basically the same way: a user with a keyboard and pointing device drags window title bars, types into text input fields, drags scroll bars, and so forth. Each graphical environment deals with this in its own way: some use straight C, others use C++, and of course Apple use NeXT's Objective-C.

But the NeXT method is unique in offering what is known as the 'target-action paradigm' - yet another reason why NeXTSTEP remains far and away the best. This article looks into the target-action paradigm, compares it with the Windows method and others, and explains how and why those other methods fall short.


Objective-C and NeXTSTEP are built up around sovereign objects that communicate with each other at runtime. The messages they exchange are not 'baked' into programs at build time but dispatched by the Objective-C runtime from one object to another.

Actions are a special type of message, always with a 'void' return value (ie no return value at all) and always with a single argument - most often a pointer to the object sending the message.

If the recipient of an action - the 'target' - needs to know more about the message, it can query the sender: the sender becomes the 'target' of this new message.

-(void)action:(id)sender
{
    // do something such as:
    id reason = [sender whatsup:self]; // not an 'action': it returns a value
}

And so forth. NS objects come built-in with a plethora of useful actions. It's not unusual to be able to create non-trivial applications without a single line of code: the actions needed for the objects of the application to communicate with one another are already coded into the objects and need only be connected with Interface Builder.
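The connection Interface Builder makes graphically amounts to two calls at runtime. A minimal sketch - the method and parameter names are invented, and 'controller' is assumed to implement an action like the one above:

```objc
// Wire a control to a target and action in code - the same
// connection Interface Builder records in a nib.
- (void)wireButton:(NSButton *)button toController:(id)controller
{
    [button setTarget:controller];
    [button setAction:@selector(action:)];
    // clicking the button now sends [controller action:button]
}
```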

Other graphical environments attempting to emulate the target-action paradigm - such as Windows - will instead send numerical values that must in turn be interpreted by the receiver, often using lugubrious 'switch' constructs or tables of function pointers. Subclassing for these purposes is also common in other environments. None of these alternatives work particularly well.

Most importantly, the target-action paradigm paves the way for a clean use of the concept of 'first responder'.

First Responder

An Objective-C action can be sent at runtime with a 'nil' target: instead of a valid pointer, zero is supplied. When this happens, the runtime tries to find the most appropriate receiver for the action, based on the context and the current 'focus'.
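In code, a nil-targeted send is a single call on the application object; a sketch, using the standard 'cut:' action:

```objc
// Send 'cut:' with a nil target: NSApp walks the responder
// chain and delivers the action to the most appropriate
// receiver for the current context.
[NSApp sendAction:@selector(cut:) to:nil from:self];
```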

If a user types in a text field, selects text, and then invokes a 'Cut' command, it is the text field that receives the action. If the user browses through a table, selects a record in the table, and again invokes the 'Cut' command, the same action is instead sent to the table.
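Claiming such an action is simply a matter of implementing it - no registration is needed, because the runtime finds responders by asking them. A hypothetical sketch (the class name is invented):

```objc
// A table that responds to 'cut:' - the same action the text
// field answers to, resolved by context at runtime.
@interface RecordTable : NSTableView
@end

@implementation RecordTable
- (void)cut:(id)sender
{
    // remove the selected record and place it on the pasteboard
    NSLog(@"cutting row %ld", (long)[self selectedRow]);
}
@end
```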

The same message works for all possible targets, and the runtime itself finds the most appropriate target in any one given situation.

When the target is 'nil', the responder chain is traversed, starting with the 'first responder' in the key window - the window currently accepting keyboard input. If the first responder cannot respond to the message, the search continues up the chain until the key window itself is reached. If the key window doesn't want the message, its 'delegate' is checked. If the delegate doesn't want it either and the key window is different from the 'main window', the main window is checked next.

The search continues up the main window's responder chain to the main window itself, then to its delegate, and finally to the application object. If the application object doesn't want the action, its delegate is asked - and if no one at all is interested, you'll hear a brief 'ping' on your OS X computer.
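The traversal can also be queried directly; a sketch using NSApplication's targetForAction::

```objc
// Ask the runtime which object would receive 'cut:' right now.
// A nil result means no one in the chain wants it - the case
// where the user hears the 'ping'.
id target = [NSApp targetForAction:@selector(cut:)];
if (target == nil)
    NSBeep();
```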

The target-action paradigm together with the concept of the responder chain affords a flexible, dynamic, context-sensitive way to process messages. It's the kind of thing you won't find on other platforms using procedural or 'pseudo-OO' languages, and it's unique in the GUI programming world of today.

The World of Windows™

MS Windows and similar 'desktops' such as GNOME and KDE do not have the target-action paradigm. Their programming languages do not afford such possibilities, and their application architecture won't let it happen anyway.

These applications are not object-oriented. Document windows are 'singletons'. Dropping a file on a document window prompts the system to ask you if you want to save the current document before opening the new one. There is no facility for opening further document windows - it's one or the other but never both.

Because application and window are one and the same, menus are not independent of their document windows but bolted to them. When dialogs pop up they lock out messages to the main window, including access to the menu.

The underlying operating system code performs hit tests to determine where the mouse is clicked and if the click occurs outside the dialog - such as on the menu which is actually a part of the main window - the system will generate an 'error beep'.

[This system is used even on OS X but only in extreme situations where the user must respond to an 'alert' before proceeding.]

When a modal dialog is visible on screen, the user must click inside it. The dialog window has an entirely separate piece of code (its 'dialog procedure') to take care of incoming messages. Once the dialog is dismissed, messages are again sent to the main window's own procedure.

And all of this has to be managed behind the scenes by the operating system. It's messy.

Users dealing with dialogs have to click buttons instead of using menus. Buttons are programmed to generate the same type of message as menu items. Dialogs rarely have menus of their own, even though such a thing is not impossible.

All menus and dialog buttons generate messages sent to their window with a number identifying which menu item was clicked on.

The system works by 'pure luck': considering the format of window messages, it's nothing but a fortuitous coincidence that both button and menu clicks can be handled in exactly the same way.

In a truly object-oriented system such as NeXTSTEP, the menu itself is an object which can send messages to anyone at almost any time. In other systems such as Windows, GNOME, and KDE, the menu is an ad hoc copycat - a glued-on rush job made by imitators using inferior development tools, unaware of the underlying concepts.

See Also
The NeXTonian
Ars Forums: Paradigms Lost

Copyright © Rixstep. All rights reserved.