Touch Patterns: Chapter 6 - Programming the iPhone User Experience

by Toby Boudreaux

The most famous feature of the iPhone and iPod Touch is the Multi-Touch interface. Multi-Touch allows a user to interact with a device using one or more fingers on a smooth, consistent physical screen. Touch-based interfaces have existed in prototypes and specialty devices for a while, but the iPhone and iPod Touch introduced the concept to the general consumer market. It’s safe to say that the interaction pattern has proven very effective and popular, inspiring other companies to implement similar systems on their devices.


This excerpt is from Programming the iPhone User Experience. This practical book provides you with a hands-on, example-driven tour of Apple's user interface toolkit, UIKit, and some common design patterns for creating gestural interfaces and multi-touch navigation for the iPhone and iPod Touch. You'll learn how to build applications with Apple's Cocoa Touch frameworks that put the needs of mobile users front and center.


Any new interface requires updated patterns for accepting and handling input and for providing feedback to users. Apple has identified several simple and intuitive patterns not entirely dissimilar from those for traditional mouse use, but specialized for a Multi-Touch interface. Paired with the conceptual patterns and physical hardware are several libraries developers can use to manage user interaction. The currency of Multi-Touch programming is the UITouch class, which is one of many related classes in UIKit.

In Cocoa Touch applications, user input actions like button presses trigger events. The iPhone OS processes a related series of touches by grouping them into Multi-Touch sequences. Possible key events in a hypothetical sequence are listed here:

  • One finger touches the device

  • A second finger optionally touches the device

  • One or both fingers move across the screen

  • One or both fingers lift off the device

  • A series of quick taps, such as a double-tap

The number of touch combinations that can make up a sequence seems endless. For this reason, it’s important to examine established patterns and user expectations when deciding how to implement event management inside an application. In addition to sequences, touch accuracy and the visibility of “hot” controls or areas are vital to providing a good user experience. An application with buttons that are too small or too close together is likely to lead to frustration. This is also true of controls in areas that fingers or thumbs tend to block.

Touches and the Responder Chain

The class that represents touch events is the UITouch class. As a user interacts with the Multi-Touch interface, the operating system continually sends a stream of events to the frontmost application. Each event includes information about all distinct touches in the current sequence. Each snapshot of a touch is represented by an instance of UITouch. The UITouch instance representing a given finger is updated throughout the sequence, until the sequence ends with all fingers being removed from the screen or with an external interruption.

UITouch Overview

As a user moves his finger across the screen, the current UITouch instances are updated to reflect several local (read-only) properties. The UITouch class is described in Figure 6-1.

Figure 6-1. Public UITouch properties and methods

The following is a list of public properties of UITouch:

tapCount: The number of quick, repeated taps associated with the UITouch instance.

timestamp: The time at which the touch was created (when a finger touched the screen) or last updated (when successive taps or fingertip movement occurred).

phase: A constant indicating where the touch is in its lifecycle: UITouchPhaseBegan, UITouchPhaseMoved, UITouchPhaseStationary, UITouchPhaseEnded, or UITouchPhaseCancelled.

view: The view property references the UIView in which the touch originated.

window: Like the view property, the window property references the UIWindow instance in which the touch originated.

In addition to these properties, the UITouch class provides helpful methods for accessing the two-dimensional point (x, y) relative to a given UIView, representing both the current location and the location immediately preceding it. The locationInView: and previousLocationInView: methods accept a UIView instance and return the point (as a CGPoint) in the coordinate space of that view.

UITouch instances are updated constantly, and the values change over time. You can maintain state by copying these properties into an appropriate structure of your choosing as the values change. You cannot simply copy the UITouch instance because UITouch doesn’t conform to the NSCopying protocol.
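Since UITouch can't be copied directly, one approach is to snapshot the properties you care about into a plain structure as events arrive. A minimal sketch of the idea in C; all names here are illustrative stand-ins, not UIKit types:

```c
#include <assert.h>

/* Sketch of a touch-state snapshot: copy the fields you need out of the
 * live touch object each time it changes. Illustrative names only. */
typedef struct {
    double timestamp;   /* when the touch was created or last updated */
    int    tap_count;   /* number of quick, repeated taps */
    int    phase;       /* stands in for a UITouchPhase constant */
    float  x, y;        /* location in some view's coordinate space */
} TouchSnapshot;

static TouchSnapshot snapshot_touch(double timestamp, int tap_count,
                                    int phase, float x, float y) {
    TouchSnapshot s = { timestamp, tap_count, phase, x, y };
    return s;
}
```

In a real application you would record one snapshot per touch per event, giving you a history you own even after the system reuses the UITouch instance.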

The Responder Chain

Cocoa and Cocoa Touch applications both handle UI events by way of a responder chain. The responder chain is a group of responder objects organized hierarchically. A responder object is any object that inherits from UIResponder. Core classes in UIKit that act as responder objects are UIView, UIWindow, and UIApplication, in addition to all UIControl subclasses. Figure 6-2 illustrates the responder chain.

Figure 6-2. The UIKit responder chain

When a user interacts with the device, an event is generated by the operating system in response to the user interaction. An event in this case is an instance of the UIEvent class. Figure 6-3 shows the UIEvent class model.

Figure 6-3. Public UIEvent properties and methods

Each new event moves up the responder chain from the most to least specific object. Not all descendants of UIResponder are required to handle all events. In fact, a responder object can ignore all events. If a responder object lacks a known event handler method for a specific event, the event will be passed up the chain until it is encountered by a responder object willing to handle it. That responder object can choose to pass the event on to the next responder in the chain for further processing, whether or not the responder object has acted on the event.
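The dispatch behavior just described can be sketched in plain C as a linked list of responders; this is an illustration of the pattern, not UIKit's actual implementation, and all names are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* A responder either handles an event or lets it continue to the next
 * responder up the chain. */
typedef struct Responder {
    bool (*handle)(int event_id);   /* NULL means "no handler here" */
    struct Responder *next;         /* next responder up the chain */
} Responder;

static bool dispatch_event(Responder *first, int event_id) {
    for (Responder *r = first; r != NULL; r = r->next) {
        if (r->handle != NULL && r->handle(event_id))
            return true;            /* a responder consumed the event */
    }
    return false;                   /* event fell off the top unhandled */
}

/* Example handlers: a view that ignores everything and a window that
 * handles only event 1 (say, a tap). */
static bool ignore_all(int event_id) { (void)event_id; return false; }
static bool handle_taps(int event_id) { return event_id == 1; }
```

The key property mirrored here is that an object lower in the chain never blocks an event it doesn't recognize; the event simply keeps moving up.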

Becoming a responder object requires two steps:

1. Inherit from UIResponder or a descendant of UIResponder, such as UIView, UIButton, or UIWindow.

2. Override one of four touch-related event handler methods inherited from UIResponder.

The following list contains descriptions of UIResponder event handler methods:

    • The touchesBegan:withEvent: method is called when one or more fingers touch the Multi-Touch surface. Very often, this will represent the initial contact in a sequence of single-finger touches. When objects enable support for multiple touches per sequence (such as with the familiar pinch-to-zoom gesture), this method may be called twice per sequence. To enable multiple touches per sequence, a responder object must declare that it wishes to receive multiple touches per sequence. This is done by sending a message, setMultipleTouchEnabled:, to the instance with an affirmative YES parameter.

A frequent task in the touchesBegan:withEvent: method is determining whether a touch is the initial or a supplemental touch in the sequence. The logic you implement for handling touches and gestures will often depend on state data for the entire sequence; therefore, you will want to initialize that data on the initial touch and only add to it for supplemental touches.

    • The touchesMoved:withEvent: method is called when a finger moves from one point on the screen to another without being lifted. The event will be fired with each pass of the event loop, and not necessarily with each pixel-by-pixel movement. Though the stream is nearly constant, it's worth keeping in mind that the interval between calls is dependent upon the event loop and is thus technically variable.

This method is an excellent point at which to record the location of the full set of UITouch instances delivered with the UIEvent parameter. The touchesMoved:withEvent: method is called very frequently during a touch sequence, so be careful of using it for expensive work.

    • The touchesEnded:withEvent: method is invoked when both fingers (or one, in a single-touch application) are lifted from the Multi-Touch screen. If your responder object accepts multiple touches, it may receive more than one touchesEnded:withEvent: message during the touch sequence, as a second finger makes contact and then leaves the screen.

      As with the touchesCancelled:withEvent: method, you will often perform the bulk of your logic and cleanup operations when this message is received.

    • The touchesCancelled:withEvent: method is called when the touch sequence is canceled by an external factor. Interruptions from the operating system, such as a warning for low memory or an incoming phone call, are fairly common. As you’ll see in this chapter, the art of managing touches often includes managing state around touch sequences, and persisting and updating that state across events. It’s therefore important to use both the touchesEnded:withEvent: and the touchesCancelled:withEvent: methods to perform any operations that manage state. For example, deleting a stack of UITouch objects and committing/undoing a graphics transaction are possible cleanup operations.

    Each event contains the full set of UITouch instances included in the Multi-Touch sequence of which it is a part. Each UITouch contains a pointer to the UIView in which the touch event was generated. Figure 6-4 illustrates the relationship.

    Figure 6-4. Relationship between UIEvent, UITouch, and UIView

Touch Accuracy

An instance of UITouch exposes its location as a two-dimensional CGPoint value. Each CGPoint represents an (x, y) pair of float values. Clearly, even the tiniest fingertip is much larger than a single point on the screen. The iPhone does a great job of training users to expect and accept the approximate fidelity that results from translating a physical touch to a single point in the coordinate space of a view. Still, developers with an appreciation for user experience should pay attention to the perception of accuracy. If a user feels that input results in a loss of precision, frustration is a likely outcome.

The main considerations for touch accuracy are:

  • The size of touchable objects

  • The shape of touchable objects

  • The placement of touchable objects in relation to one another

  • The overlapping of touchable objects


The size of touchable objects is an interesting problem. One of the more curious facets of a portable touch interface is that the main input device (a finger) also obscures the feedback mechanism (the screen). Touching a control, such as a button, should give users visual feedback confirming that their intentions have been communicated to the device. So how does Apple address this issue in UIKit? They attack the issue from many angles.

First, many controls are quite large. By displaying buttons that span approximately 80% of the width of the screen, Apple guarantees that users can see portions of a button in both its default and its highlighted states. This passive confirmation mechanism works very well. Figure 6-5 shows the device emulator included in the iPhone SDK with the Contacts application running. The “Delete Contact” and “Cancel” buttons are good examples of very prominent, large controls.

In addition to expanding the visible area of controls into the periphery, Apple has bolstered the ambient feedback mechanism by changing the hit area of these controls for drags. In desktop Cocoa applications, interaction is canceled when the mouse is dragged outside the visible boundaries of the view handling the event. With Cocoa Touch controls on the iPhone OS, Apple drastically expands the “hot” area of the control on touch. This means that touches require a certain level of accuracy, but the chance of accidentally dragging outside of a control and inadvertently canceling a touch sequence is lessened. This allows users to slightly drag away from a control to visually confirm their input. This implementation pattern is free with standard controls and in many cases with subclasses. When drawing your own views and managing your own hit test logic, you should attempt to copy this functionality to ensure compliance with the new muscle memory users acquire on the iPhone OS. Figure 6-6 displays three similar controls. The first is standard; the second displays the hot area for receiving touches; and the third displays the virtual hit area for active touches. Dragging outside of the area highlighted in the figure cancels the selection.

Figure 6-5. Examples of large buttons in the Contacts application

Figure 6-6. Hot area and active hot area examples

The onscreen keyboard has an elegant solution to the problem of touch area. The size of each button in the keyboard is smaller than the typical adult fingertip. Since the keyboard layout is a standard QWERTY configuration, users are familiar with the location of each key. But because the keyboard is displayed on screen, the standard “home row” finger positions and ingrained muscle memory can’t help accuracy. Apple allows users to confirm the input of each key by briefly expanding the key graphics above the touch location. This pattern is also used in an enhanced form for special keys, such as the .com key added conditionally to the keyboard when the first responder field represents a URL. Figure 6-7 illustrates the touch-and-hold control style.

Figure 6-7. A standard touch-and-hold control

You can use virtual hit areas to enlarge the hot area for a control without changing the visual interface. You can override the pointInside:withEvent: or hitTest:withEvent: methods to create a virtual hit area. These methods are called on a UIView by its superview as part of delivering events along the responder chain. Returning NO from these methods causes the system to move on to the next responder object in the chain; returning YES allows the responder object to handle the event and terminate the trip up the chain. Creating a virtual hit area may be as simple as returning YES for points outside the visible boundaries of the view.

The following example creates an enlarged virtual hit area:

// HotView.h
#import <UIKit/UIKit.h>

@interface HotView : UIView {
    BOOL hot;
}
@end

// HotView.m
#import "HotView.h"

@implementation HotView

- (id)initWithFrame:(CGRect)frame
{
    if (self = [super initWithFrame:frame]) {
        hot = YES;
    }
    return self;
}

#define MARGIN_SIZE 10.0
#define DRAGGING_MARGIN_SIZE 40.0

- (BOOL)point:(CGPoint)point insideWithMargin:(float)margin
{
    CGRect rect = CGRectInset(self.bounds, -margin, -margin);
    return CGRectContainsPoint(rect, point);
}

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    float phasedMargin;
    UITouch *touch = [[event touchesForView:self] anyObject];
    if (touch.phase != UITouchPhaseBegan) {
        phasedMargin = DRAGGING_MARGIN_SIZE;
    } else {
        phasedMargin = MARGIN_SIZE;
    }
    if ([self point:point insideWithMargin:phasedMargin]) {
        return YES;
    } else {
        return NO;
    }
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    NSLog(@"Touches began.");
    hot = YES;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (hot == NO) return;
    CGPoint point = [[touches anyObject] locationInView:self];
    if (![self point:point insideWithMargin:DRAGGING_MARGIN_SIZE]) {
        [self.nextResponder touchesBegan:touches withEvent:event];
        hot = NO;
    }
    NSLog(@"Touch moved.");
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (hot == NO) return;
    NSLog(@"Touches ended.");
    hot = YES;
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (hot == NO) return;
    NSLog(@"Touches cancelled.");
    hot = YES;
}

@end




Designing touch-enabled views with irregular shapes is appropriate in many applications. Luckily, Cocoa Touch application developers can use any of several strategies for deciding when a custom view should handle a touch sequence.

When a touch is being handled by the view hierarchy, the hitTest:withEvent: message is sent to the topmost UIView in the view hierarchy that can handle the touch event. The top view then sends the pointInside:withEvent: message to each of its subviews to help divine which descendant view should handle the event.
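As a rough illustration of that traversal (with coordinate conversion between view spaces simplified and all names hypothetical), hit testing can be modeled as a recursive search through a view tree:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_SUBVIEWS 8

/* A toy view tree: each view has a frame in its parent's coordinate
 * space and an ordered list of subviews (later ones drawn on top). */
typedef struct View {
    float x, y, w, h;
    struct View *subviews[MAX_SUBVIEWS];
    int subview_count;
} View;

static bool point_inside(const View *v, float px, float py) {
    return px >= v->x && px < v->x + v->w &&
           py >= v->y && py < v->y + v->h;
}

/* Return the deepest view containing the point, or NULL if the point
 * misses this view entirely. */
static View *hit_test(View *v, float px, float py) {
    if (!point_inside(v, px, py))
        return NULL;
    /* Later subviews are drawn on top, so scan them first. */
    for (int i = v->subview_count - 1; i >= 0; i--) {
        View *hit = hit_test(v->subviews[i], px - v->x, py - v->y);
        if (hit != NULL)
            return hit;
    }
    return v;   /* no subview claimed it; the view itself is the hit */
}
```

Overriding pointInside:withEvent: on a real UIView customizes exactly the `point_inside` step of this search.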

You can override pointInside:withEvent: to perform any logic required by your custom UIView subclass.

For example, if your view renders itself as a circle centered inside its bounds and you’d like to ignore touches outside the visible circle, you can override pointInside:withEvent: to check the location against the radius of the circle:

-(BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    // Assume the view/circle is 100px square
    CGFloat x = (point.x - 50.0) / 50.0;
    CGFloat y = (point.y - 50.0) / 50.0;
    float h = hypot(x, y);
    return (h < 1.0);
}


If you have an irregular shape that you’ve drawn with CoreGraphics, you can test the CGPoint against the bounds of that shape using similar methods.

In some cases, you may have an image in a touch-enabled UIImageView with an alpha channel and an irregular shape. In such cases, the simplest means of testing against the shape is to compare the pixel at the CGPoint against a bitmap representation of the UIImageView. If the pixel in the image is transparent, you should return NO. For all other values, you should return YES.
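A sketch of that test in plain C, assuming the image has already been rendered into an 8-bit alpha bitmap (the function and parameter names are illustrative, not part of any framework):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* `alpha` is a row-major bitmap of width * height bytes, one alpha
 * value per pixel, such as one drawn from an image into a bitmap
 * context. A point "hits" the shape only when the pixel under it is
 * not fully transparent. */
static bool point_hits_shape(const unsigned char *alpha,
                             size_t width, size_t height,
                             size_t x, size_t y) {
    if (x >= width || y >= height)
        return false;               /* outside the bitmap entirely */
    return alpha[y * width + x] != 0;
}
```

In pointInside:withEvent: you would convert the CGPoint to pixel coordinates and return YES or NO from the result of this lookup.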


The placement of views in relation to one another affects usability and perception of accuracy as much as the size of controls. The iPhone is a portable Multi-Touch device and thus lends itself to accidental or imprecise user input. Applications that assist users by attempting to divine their intentions probably gain an advantage over competing applications with cluttered interfaces that demand focus and precision from users. Virtual hit areas for untouched states are difficult or impossible to use when views are very close together.

When two views touch one another and a finger touches the edges of both, the view most covered by the fingertip will act as the first responder in the responder chain and receive the touch events. Regardless of the view in which the touch originated, you can get the location of a UITouch instance in the coordinate system of any UIView, or in the UIWindow. You can program your views in a way that maintains encapsulation when a UITouch instance is processed:

// Get the location of a UITouch (touch) in a UIView (viewA)
CGPoint locationInViewA = [touch locationInView:viewA];

// Get the location of a UITouch (touch) in a UIView (viewB)
CGPoint locationInViewB = [touch locationInView:viewB];

// Get the location of a UITouch (touch) in the UIView that
// is the current responder
CGPoint locationInSelf = [touch locationInView:self];

// Get the location of a UITouch (touch) in the main window
CGPoint locationInWindow = [touch locationInView:nil];

Depending on the shape and meaning of the view handling a touch event, you should consider placement in relation to a fingertip when appropriate. A great example of this is when dragging a view under a fingertip. If you require precision when users drag a view around the screen, you can improve the user experience by positioning the element slightly above the touch instead of centering it under the touch:

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:self];

    // Positioning directly under the touch
    self.center = location;

    float halfHeight = self.frame.size.height * 0.5;
    CGPoint betterLocation = CGPointMake(location.x, (location.y - halfHeight));

    // Positioning slightly above the touch
    self.center = betterLocation;
}


Overlapping Views

Designing a user experience that allows elements to overlap each other on the z-axis* presents a few key challenges:

  • If the overlapping elements are movable by users or animations, care should be taken to prevent any single element from fully covering another element. If such behavior is expected, users should be given some means of easily accessing underlying elements.

  • If an overlapping area has an irregular shape, the desired behavior is probably to restrict the hit area to the shape and not to the bounding rectangle. Doing so allows touch events to pass “through” the bounding rectangle of the top element to the bottom element.

  • Enlarged virtual hit areas are more difficult to program when touchable views overlap because the logic for passing touch events down the stack could conflict with the logic that facilitates virtual hit areas.

Apple recommends not allowing sibling views to overlap one another, for both usability and performance reasons. You can find additional information on overlapping UIKit views in the iPhone Application Programming Guide, available online from Apple.

Detecting Taps

So far, this chapter has focused on the conceptual side of Multi-Touch programming. The remainder of the chapter will focus on example code showing how to detect and use the main types of touch sequence.

Detecting Single Taps

Single taps are used by standard buttons, links (in browsers and the SMS application), and many other UIControl subclasses. They are also used by the iPhone OS to launch applications. Users touch elements on the screen to communicate intent and, in doing so, expect a response. On the Home screen, the response is to launch an application. With buttons, a specific action is usually expected: search, close, cancel, clear, accept.

* 3D has three axes: x, y, and z. When applied to 2D displays, the z-axis is—to your eyes—the surface of the screen. So when things overlap, it occurs on the z-axis.

Single taps are trivial to detect. The simplest method is to assign an action to a UIControl subclass (versus a custom UIView subclass). This sends a specific message to a given object. For a given UIControl, send the addTarget:action:forControlEvents: message with appropriate parameters to assign a receiving target and action message for any number of control events. This example assumes a UIButton instance in a UIView subclass with the instance variable name button:

-(void) awakeFromNib
{
    [super awakeFromNib];
    [button addTarget:self
               action:@selector(handleButtonPress:)
     forControlEvents:UIControlEventTouchDown];
}

-(IBAction) handleButtonPress:(id)sender
{
    NSLog(@"Button pressed!");
}

For responder objects that are not descendants of UIControl, you can detect single taps within the touchesBegan:withEvent: handler:

-(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    NSUInteger numTaps = [touch tapCount];
    NSLog(@"The number of taps was: %i", numTaps);
    if(numTaps == 1){
        NSLog(@"Single tap detected.");
    }else{
        // Pass the event to the next responder in the chain.
        [self.nextResponder touchesBegan:touches withEvent:event];
    }
}

Detecting Multiple Taps

You can handle multiple taps similarly to single taps. The UITouch tapCount property will increment appropriately to reflect the number of taps within the same sequence. Most computer interaction systems use single and double tap patterns. For special cases, such as certain games, you may wish to allow users to use triple taps—or endless taps. If a sufficient pause between taps occurs, the operating system treats new taps as part of a new sequence. If you’d like to handle repeated tapping with longer pauses, you should write logic that maintains state between multiple touch sequences and treats them as members of the same series within the temporal boundaries you set:

-(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    NSUInteger numTaps = [touch tapCount];
    NSLog(@"The number of taps was: %i", numTaps);
    if(numTaps > 1){
        NSLog(@"Multiple taps detected.");
    }
}
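To extend tap counting across separate sequences with longer pauses, the bookkeeping described above can be reduced to a timestamp comparison. A plain C sketch with illustrative names: each new tap either extends the current series (if it arrives within your chosen time window) or starts a new one:

```c
#include <assert.h>

/* Track a series of taps that may span multiple touch sequences. */
typedef struct {
    double last_tap_time;
    int    series_count;
} TapSeries;

/* Register a tap at `timestamp`; `window` is the maximum gap (in
 * seconds) allowed between taps of the same series. Returns the new
 * series length. */
static int register_tap(TapSeries *s, double timestamp, double window) {
    if (s->series_count > 0 && (timestamp - s->last_tap_time) <= window)
        s->series_count += 1;   /* tap extends the current series */
    else
        s->series_count = 1;    /* too slow; a new series begins */
    s->last_tap_time = timestamp;
    return s->series_count;
}
```

In a responder object, you would call the equivalent of `register_tap` from touchesBegan:withEvent:, using the UITouch timestamp property.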

Detecting Multiple Touches

Handling multiple touches in a sequence is different from handling multiple taps from a single finger. Each UIEvent dispatched up the responder chain can contain multiple UITouch instances, one for each finger on the screen. You can derive the number of touches by counting the touches argument to any of the touch event handlers:

-(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    int numberOfTouches = [touches count];
    NSLog(@"The number of fingers on screen: %i", numberOfTouches);
}


Handling Touch and Hold

An interesting control present in the onscreen keyboard is the .com button that appears when a URL entry field has focus. Quickly tapping the button like any other key inserts the string “.com” into the field. Tapping on the control and holding it down for a moment causes a new subview to appear with a set of similar buttons representing common top-level domain name parts, such as .net and .org.

To program a similar touch-and-hold control, you need to detect that a touch has begun and that an appropriate amount of time has passed without the touch being completed or canceled. There are many ways to do so, but the use of a timer is a simple solution:

// Expander.h
#import <UIKit/UIKit.h>

@interface Expander : UIView {
    UIView *expandedView;
    NSTimer *timer;
}
@end

// Expander.m
#import "Expander.h"

@interface Expander ()
-(void)stopTimer;
-(void)expand:(NSTimer *)theTimer;
-(void)close;
@end

@implementation Expander

-(id)initWithFrame:(CGRect)frame
{
    if(self = [super initWithFrame:frame]){
        self.frame = CGRectMake(0.0, 0.0, 40.0, 40.0);
        self.backgroundColor = [UIColor redColor];
        expandedView = [[UIView alloc] initWithFrame:CGRectZero];
        expandedView.backgroundColor = [UIColor greenColor];
        expandedView.frame = CGRectMake(-100.0, -40.0, 140.0, 40.0);
        expandedView.hidden = YES;
        [self addSubview:expandedView];
    }
    return self;
}

-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    [self stopTimer];
    timer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                             target:self
                                           selector:@selector(expand:)
                                           userInfo:nil
                                            repeats:NO];
    [timer retain];
}

-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    [self stopTimer];
    [self close];
}

-(void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    [self stopTimer];
    [self close];
}

-(void)stopTimer
{
    if([timer isValid]){
        [timer invalidate];
    }
    [timer release];
    timer = nil;
}

-(void)expand:(NSTimer *)theTimer
{
    [self stopTimer];
    expandedView.hidden = NO;
}

-(void)close
{
    expandedView.hidden = YES;
}

-(void)dealloc
{
    [self stopTimer];
    [expandedView release];
    [super dealloc];
}

@end



Handling Swipes and Drags

A UITouch instance persists during an entire drag sequence and is sent to all event handlers set up in a UIView. Each instance has mutable and immutable properties that are relevant to gesture detection.

As a finger moves across the screen, its associated UITouch is updated to reflect the location. The coordinates of the location are stored as a CGPoint and are accessible by way of the locationInView: method of the UITouch class.

Dragging a view is simple. The following example shows the implementation of a simple UIView subclass, Draggable. When handling a touchesMoved:withEvent: message, a Draggable instance will position itself at the point of a touch relative to the coordinate space of its superview:

@implementation Draggable

-(id)initWithFrame:(CGRect)frame
{
    if (self = [super initWithFrame:frame]) {
        self.backgroundColor = [UIColor redColor];
    }
    return self;
}

-(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    NSLog(@"Touched.");
}

-(void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInView:self.superview];
    self.center = location;
}

@end



Swipe detection is slightly more complex than drag management. In the iPhone Application Programming Guide, Apple recommends a strategy for detecting swipes that leads to consistent user behavior across applications. Conforming to the standard set by Apple improves user experience because it builds on, and takes advantage of, muscle memory. For example, UIKit includes built-in support for detecting swipes across table cells, prompting users with a button to delete. Mapping the swipe-to-delete gesture in default applications—and in UIKit as a free feature—helps to “train” users that the swipe is a dismissive gesture. This carries over to other uses of the swipe gesture. Another example is the Photos application. Users can swipe across a photo when viewing a gallery. The gesture dismisses the current photo and, depending on the swipe direction, transitions the next or previous photo into place.
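Reduced to its essentials, the recommended test compares the start and current touch locations: a horizontal swipe must travel far enough along the x-axis while drifting only slightly along the y-axis. A plain C sketch; the threshold values here are illustrative, not Apple's:

```c
#include <assert.h>
#include <stdbool.h>

typedef struct { float x, y; } Point;

/* Record `start` in the equivalent of touchesBegan:withEvent:, then
 * apply this test in touchesMoved:withEvent:. Thresholds (12pt of
 * horizontal travel, at most 4pt of vertical drift) are assumptions. */
static bool is_horizontal_swipe(Point start, Point current) {
    float dx = current.x - start.x;
    float dy = current.y - start.y;
    if (dx < 0) dx = -dx;   /* accept swipes in either direction */
    if (dy < 0) dy = -dy;
    return dx >= 12.0f && dy <= 4.0f;
}
```

Keeping the vertical tolerance tight is what distinguishes a deliberate swipe from an ordinary drag.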

You can leverage the swipe to perform your own equivalent of dismissal. Multiple-touch sequences also enable richer manipulations; the following example uses two touches to rotate a Spinner view as the fingers move around one another:

// MainView.h
@interface MainView : UIView {
    Spinner *spinner;
}
@end

// MainView.m
@interface MainView (PrivateMethods)
-(void)transformSpinnerWithFirstTouch:(UITouch *)firstTouch
                       andSecondTouch:(UITouch *)secondTouch;
-(CGFloat)distanceFromPoint:(CGPoint)fromPoint toPoint:(CGPoint)toPoint;
-(CGPoint)vectorFromPoint:(CGPoint)firstPoint toPoint:(CGPoint)secondPoint;
@end

@implementation MainView

-(id)initWithFrame:(CGRect)frame
{
    if(self = [super initWithFrame:frame]){
        self.multipleTouchEnabled = YES;
        spinner = [[Spinner alloc] initWithFrame:CGRectMake(0.0, 0.0, 50.0, 50.0)];
        spinner.center = self.center;
        [self addSubview:spinner];
    }
    return self;
}

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    if([touches count] != 2){
        return;
    }
    NSArray *allTouches = [touches allObjects];
    UITouch *firstTouch = [allTouches objectAtIndex:0];
    UITouch *secondTouch = [allTouches objectAtIndex:1];
    [self transformSpinnerWithFirstTouch:firstTouch andSecondTouch:secondTouch];
}

-(void)transformSpinnerWithFirstTouch:(UITouch *)firstTouch
                       andSecondTouch:(UITouch *)secondTouch
{
    CGPoint firstTouchLocation = [firstTouch locationInView:self];
    CGPoint firstTouchPreviousLocation = [firstTouch previousLocationInView:self];
    CGPoint secondTouchLocation = [secondTouch locationInView:self];
    CGPoint secondTouchPreviousLocation = [secondTouch previousLocationInView:self];

    CGPoint previousDifference = [self vectorFromPoint:firstTouchPreviousLocation
                                               toPoint:secondTouchPreviousLocation];
    CGAffineTransform newTransform =
        CGAffineTransformScale(spinner.transform, 1.0, 1.0);
    CGFloat previousRotation = atan2(previousDifference.y, previousDifference.x);
    CGPoint currentDifference = [self vectorFromPoint:firstTouchLocation
                                              toPoint:secondTouchLocation];
    CGFloat currentRotation = atan2(currentDifference.y, currentDifference.x);
    CGFloat newAngle = currentRotation - previousRotation;
    newTransform = CGAffineTransformRotate(newTransform, newAngle);
    spinner.transform = newTransform;
}

-(CGFloat)distanceFromPoint:(CGPoint)fromPoint toPoint:(CGPoint)toPoint
{
    float x = toPoint.x - fromPoint.x;
    float y = toPoint.y - fromPoint.y;
    return hypot(x, y);
}

-(CGPoint)vectorFromPoint:(CGPoint)firstPoint toPoint:(CGPoint)secondPoint
{
    CGFloat x = firstPoint.x - secondPoint.x;
    CGFloat y = firstPoint.y - secondPoint.y;
    CGPoint result = CGPointMake(x, y);
    return result;
}

-(void)dealloc
{
    [spinner release];
    [super dealloc];
}

@end



Handling Arbitrary Shapes

The Multi-Touch interface allows developers to create interaction patterns based on simple taps, drags, and flicks. It also opens the door for more complex and engaging interfaces. We’ve seen ways to implement taps (single and multiple) and have explored dragging view objects around the screen. Those examples conceptually bind a fingertip to an object in space, creating an interface through the sense of touch, or haptic experience. There is another way of thinking of touches in relation to user interface objects that is a little more abstract, but nonetheless compelling to users.

The following example creates an interface that displays a grid of simple tiles, as shown in Figure 6-8. Each tile has two states: on and off. When a user taps a tile, it toggles the state and updates the view to use an image that correlates to that state. In addition to tapping, a user can drag over any number of tiles, toggling them as the touch moves in and out of the bounds of the tile.

Figure 6-8. Sample tile-based application

Tapping the “Remove” button at the bottom of the screen removes all tiles in the selected state and triggers a short animation that repositions the remaining tiles:

// Board.h
#import <UIKit/UIKit.h>
#import "Tile.h"

@interface Board : UIView {
    NSMutableArray *tiles;
    Tile *currentTile;
    BOOL hasTiles;
}

@property (nonatomic, retain) NSMutableArray *tiles;
@property (nonatomic, assign) BOOL hasTiles;

-(IBAction)removeSelectedTiles;
-(void)removeTile:(Tile *)tile;

@end

// Board.m
#import "Board.h"

@interface Board (PrivateMethods)

-(void)setup;
-(void)addTile;
-(void)toggleRelevantTilesForTouches:(NSSet *)touches andEvent:(UIEvent *)event;

@end

@implementation Board

@synthesize tiles, hasTiles;

-(id)initWithFrame:(CGRect)frame
{
    if(self = [super initWithFrame:frame]){
        [self setup];
    }
    return self;
}

-(void)addTile
{
    [tiles addObject:[[[Tile alloc] init] autorelease]];
}

-(void)removeTile:(Tile *)tile{

if([tiles containsObject:tile]){ [tiles removeObject:tile]; [tile disappear];

} if([tiles count] < 1){ self.hasTiles = NO; }else{ self.hasTiles = YES; } }


{ Tile *tile; for(tile in tiles){

[self removeTile:tile]; } self.hasTiles = NO;


-(void)willRemoveSubview:(UIView *)subview
{
    [self removeTile:(Tile *)subview];
}

-(IBAction)removeSelectedTiles
{
    Tile *tile;
    NSArray *tilesSnapshot = [NSArray arrayWithArray:tiles];
    for(tile in tilesSnapshot){
        if(tile.selected){
            [self removeTile:tile];
        }
    }
    if([tiles count] < 1){
        self.hasTiles = NO;
    }else{
        self.hasTiles = YES;
    }
}

#define NUM_COLS 4
#define NUM_ROWS 4
#define MARGIN_SIZE 10
#define TILE_COUNT (NUM_COLS * NUM_ROWS)



-(void)setup
{
    if(tiles == nil){
        tiles = [NSMutableArray arrayWithCapacity:TILE_COUNT];
        [tiles retain];
    }
    for(int i = 0; i < TILE_COUNT; i++){
        [self addTile];
    }
    self.backgroundColor = [UIColor whiteColor];
    [self setNeedsDisplay];
}

-(void)layoutSubviews
{
    Tile *tile;
    int currentRow = 0;
    int currentColumn = 0;
    int i = 0;
    float tileSize = (320.0/NUM_COLS) - (MARGIN_SIZE * 1.25);
    float x, y;
    for(tile in tiles){
        // Lay out the tile at the given location
        [self addSubview:tile];
        x = (currentColumn * tileSize) + (MARGIN_SIZE * (currentColumn + 1));
        y = (currentRow * tileSize) + (MARGIN_SIZE * (currentRow + 1));
        [tile appearWithSize:CGSizeMake(tileSize, tileSize)
                     AtPoint:CGPointMake(x, y)];
        if(++i % 4 == 0){
            currentRow++;
            currentColumn = 0;
        }else{
            currentColumn++;
        }
        [tile setNeedsDisplay];
    }
}
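The layout arithmetic can be sanity-checked in plain C. The values below are hypothetical: a 320-point screen width with NUM_COLS of 4 matches the hardcoded % 4 in the loop, while a MARGIN_SIZE of 10 is assumed.

```c
#define NUM_COLS    4
#define MARGIN_SIZE 10.0f

/* Tile edge length: an even division of the screen width, trimmed for margins
   (mirrors the tileSize expression in the Board layout code). */
static float tile_size(void) {
    return (320.0f / NUM_COLS) - (MARGIN_SIZE * 1.25f);
}

/* x origin of the tile in a given column; rows use the same formula for y. */
static float tile_origin(int column) {
    return (column * tile_size()) + (MARGIN_SIZE * (column + 1));
}
```

With these numbers each tile is 67.5 points square; column 0 starts at x = 10 and column 3 ends at x = 310, leaving a 10-point margin on the right edge.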

-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    currentTile = nil;
    [self toggleRelevantTilesForTouches:touches andEvent:event];
}

-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    currentTile = nil;
}

-(void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
{
    currentTile = nil;
}

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    [self toggleRelevantTilesForTouches:touches andEvent:event];
}

-(void)toggleRelevantTilesForTouches:(NSSet *)touches andEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    Tile *tile;
    CGPoint location;
    for(tile in tiles){
        location = [touch locationInView:tile];
        if([tile pointInside:location withEvent:event]){
            // If the touch is still over the same tile, get out
            if(tile == currentTile){
                continue;
            }
            [tile toggleSelected];
            currentTile = tile;
        }
    }
}
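The currentTile bookkeeping is a debounce: while the finger stays inside one tile, repeated touchesMoved: events must not re-toggle it; a toggle fires only when the touch crosses into a different tile. The same logic in plain C (hypothetical point and tile types standing in for CGPoint and Tile):

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct { float x, y; } Point;
typedef struct { float x, y, w, h; bool selected; } Tile;

/* Toggle whichever tile contains the point, but only when it differs from the
   tile toggled by the previous move event; returns the new "current" tile. */
static const Tile *toggle_at(Tile *tiles, size_t count, Point p, const Tile *current) {
    for (size_t i = 0; i < count; i++) {
        Tile *t = &tiles[i];
        bool inside = p.x >= t->x && p.x < t->x + t->w &&
                      p.y >= t->y && p.y < t->y + t->h;
        if (inside) {
            if (t == current)
                return current;   /* still over the same tile: no toggle */
            t->selected = !t->selected;
            return t;             /* remember the newly toggled tile */
        }
    }
    return current;               /* over no tile: keep the last one */
}
```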

-(void)dealloc
{
    [tiles release];
    currentTile = nil; // never retained, so it must not be released here
    [super dealloc];
}

@end

// Tile.h
#import <UIKit/UIKit.h>

@interface Tile : UIView {
    BOOL selected;
    BOOL hasAppeared;
    UIImageView *backgroundView;
}

@property (nonatomic, assign) BOOL selected;

-(void)appearWithSize:(CGSize)size AtPoint:(CGPoint)point;
-(void)moveToPoint:(CGPoint)point;
-(void)disappear;
-(void)toggleSelected;

@end


// Tile.m
#import "Tile.h"

@implementation Tile

@synthesize selected;

-(id)init
{
    if (self = [super init]) {
        self.backgroundColor = [UIColor clearColor];
        backgroundView = [[UIImageView alloc]
            initWithImage:[UIImage imageNamed:@"on.png"]];
        [self addSubview:backgroundView];
        [self sendSubviewToBack:backgroundView];
        self.selected = NO;
        hasAppeared = NO;
    }
    return self;
}


-(void)moveToPoint:(CGPoint)point
{
    [UIView beginAnimations:nil context:nil];
    [UIView setAnimationDuration:0.5];
    CGRect frame = self.frame;
    frame.origin = point;
    self.frame = frame;
    [UIView commitAnimations];
}


-(void)appearWithSize:(CGSize)size AtPoint:(CGPoint)point
{
    // If it's new, have it 'grow' into being
    if(!hasAppeared){
        CGRect frame = self.frame;
        frame.origin = point;
        frame.size = size;
        self.frame = frame;

        // Shrink it
        CGAffineTransform shrinker = CGAffineTransformMakeScale(0.01, 0.01);
        self.transform = shrinker;

        // Start the animations transaction
        [UIView beginAnimations:nil context:nil];
        [UIView setAnimationDuration:0.5];

        // Grow it
        CGAffineTransform grower = CGAffineTransformScale(self.transform, 100.0, 100.0);
        self.transform = grower;

        // Commit the transaction
        [UIView commitAnimations];

        // Flag that I have been on screen
        hasAppeared = YES;
    }else{
        [self moveToPoint:point];
    }
}

-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    if([touch tapCount] == 2){
        [self removeFromSuperview];
    }else{
        [self.nextResponder touchesBegan:touches withEvent:event];
        return;
    }
}




-(void)disappear
{
    [UIView beginAnimations:nil context:nil];
    [UIView setAnimationDuration:0.5];
    CGAffineTransform transform = CGAffineTransformMakeScale(.001, .001);
    self.transform = transform;
    [UIView commitAnimations];
}



-(void)toggleSelected
{
    self.selected = !self.selected;
    if(self.selected){
        backgroundView.image = [UIImage imageNamed:@"off.png"];
    }else{
        backgroundView.image = [UIImage imageNamed:@"on.png"];
    }
}


-(void)layoutSubviews
{
    self.bounds = self.frame;
    backgroundView.frame = self.bounds;
}



{ [backgroundView release]; [super dealloc];



If you enjoyed this excerpt, buy a copy of Programming the iPhone User Experience.