Xcode Archives – What a Buncha Jerks

It’s been a little quiet around here lately, and if this post title hasn’t given it away, it’s because Joel and I have been pretty hard at work getting some products shipped. At the day job, the bossman is generally responsible for shipping the builds off to Apple, so archiving builds isn’t normally my deal. But, a while back I ran into a situation where I couldn’t share an IPA from Xcode Archives; and rather than the app icon, the archive list has some stupid notebook with a sketch that says “Archive” on the front.

I was too impatient to figure it out last time – but tonight I ran into this again while trying to ship a build off to the App Store. Fortunately, clicking “validate” gave me a little more context:

“PegJump” does not contain a single-bundle application or contains multiple products. Please select another archive, or adjust your scheme to create a single-bundle application.

The common theme here? Both applications were using static libraries, and the libraries were also being dumped into the archive. The archive contents can be seen by right-clicking the archive and choosing “Show in Finder”, then right-clicking the .xcarchive and choosing “Show Package Contents”. Being tired and stupid, I tried just deleting the extra files inside the Products directory; this was a terrible idea, and definitely didn’t work.

Some quick googling later, a Stack Overflow post gave me the correct answer: in the build settings of your sub-projects / dependencies, set the Skip Install property to YES. This basically prevents the libraries from being added (installed) to the build archive.
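For reference, the same thing expressed as a raw build setting (a minimal sketch – this goes in the static library target’s build configuration or an .xcconfig, not the app target’s):

// In each static library / dependency target, not the application target
SKIP_INSTALL = YES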

At this point – archiving your project should give you a nice, fully functional archive – ready for validation and sharing.

Voilà, bed time.


Mistakes Were Made: Integral Bounds

Here’s another mistake from the day job. (Why do they pay us? Because we do eventually find and correct our errors?)

“Misaligned” CATextLayers

As you may know, the Core Animation instrument has a flag to “Color Misaligned Images”. This is somewhat poorly named; in fact, it will color misaligned layers in magenta, whether or not they contain images. (It will also color layers containing stretched images in yellow, even if they are aligned correctly.) This is useful for two reasons. First, drawing misaligned layers is a performance hit. The GPU has to do blending to antialias the fractional pixels on the misaligned edge; blending tends to be very expensive on iOS devices. Second, because everything is shifted a fractional pixel and antialiased, it will all look a little bit blurry — problematic when you want crisp, clear images.

But there’s a third issue I just discovered. When using CALayer subclasses that draw content (at least CATextLayer, and possibly others), the actual created content can be wrong! Not just the appearance onscreen, but the bitmap backing the layer! This is particularly pernicious because the position does not even have to be misaligned; it’s enough simply to make the height non-integral. Observe:

See a difference? Well, the first line looks a little less crisp. But there’s more to it than that. Take a look at the top of the capital letters, particularly the curved ones.

It’s not just overly antialiased, it’s actually missing pixels! Now, how do I know that the actual content is wrong, not just the display? There are a couple of options. Since the backing store is an opaque type, I can’t just write it out to file and hope to get a usable image (although I can get a clue from the pixel dimensions — they’re rounded down to the next integral pixel). But I can have the layer render itself in an image context I create, and write that out. More amusingly, I can take advantage of Core Animation’s OpenGL underpinnings and use the contentsRect property, noticing that “If pixels outside the unit rectangles are requested, the edge pixels of the contents image will be extended outwards.” And indeed, I get something fun:

This makes it clear that the top row of pixels from the correct image has been cut off. The extended row is what should be the second row of pixels.

What and Why?

The hint I got from examining the contents seems to tell the story. If the size of the backing store is smaller than the size of the bounds, that fractional pixel simply won’t be drawn. The solution for you is to make sure your bounds are integral. The solution for Apple? That’s a tougher question; there are a lot of options with different tradeoffs. I’m not even sure what they’ve chosen is the wrong one, although it’s unexpected behavior and should be documented. Other options include rounding the dimensions up for the backing store and scaling the content image back down, or continuing to round the dimensions down but then rendering scaled and at an offset.

Post Script: Code

This can go in a simple view controller-based app. You’ll need to link against the QuartzCore and CoreGraphics frameworks and import their headers; define RENDER_IMAGE and/or CONTENTS_RECT to enable the optional blocks below.

- (void)viewWillAppear:(BOOL)animated
{
#ifdef RENDER_IMAGE
	UIGraphicsBeginImageContext(CGSizeMake(200, 370));
	[[UIColor whiteColor] set];
	UIRectFill(CGRectMake(0, 0, 200, 370));
#endif
	[super viewWillAppear:animated];
	// First layer: the fractional height (50.394) is what triggers the bug
	CATextLayer *textLayer = [CATextLayer layer];
	textLayer.position = CGPointMake(120, 220);
	textLayer.bounds = CGRectMake(0, 0, 200, 50.394);
	textLayer.fontSize = 14.f;
	textLayer.foregroundColor = [UIColor blackColor].CGColor;
	textLayer.string = @"Sample Sentence With Curves";
#ifdef CONTENTS_RECT
	textLayer.contentsRect = CGRectMake(0.f, -.1f, 1.f, 1.2f);
#endif
	[self.view.layer addSublayer:textLayer];
	
#ifdef RENDER_IMAGE
	CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 5, 50);
	[textLayer renderInContext:UIGraphicsGetCurrentContext()];
#endif
	
	// Second layer: identical, except the bounds are snapped to integral values
	textLayer = [CATextLayer layer];
	textLayer.position = CGPointMake(120, 270);
	textLayer.bounds = CGRectIntegral(CGRectMake(0, 0, 200, 50.394));
	textLayer.fontSize = 14.f;
	textLayer.foregroundColor = [UIColor blackColor].CGColor;
	textLayer.string = @"Sample Sentence With Curves";
#ifdef CONTENTS_RECT
	textLayer.contentsRect = CGRectMake(0.f, -.1f, 1.f, 1.2f);
#endif
	[self.view.layer addSublayer:textLayer];
	
#ifdef RENDER_IMAGE
	CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0, 50);
	[textLayer renderInContext:UIGraphicsGetCurrentContext()];
	UIImage *render = UIGraphicsGetImageFromCurrentImageContext();
	UIGraphicsEndImageContext();
	[UIImagePNGRepresentation(render) writeToFile:@"/path/to/render.png" atomically:NO];
#endif
}

Mistakes Were Made: Description Isn’t Enough

It’s my turn to take a crack at a “Mistakes Were Made” post, and this one happens to be about my first post on this blog. If you didn’t happen to catch that one – it was a post about improving your debugging process by giving your objects (More) Descriptive Logging. As it turns out, the post wasn’t 100% accurate.

I basically said, in a nutshell, that overriding the description method would allow you to have a custom object description that would be used when the object was added to a formatted string with %@ (like you use in NSLog), or when using the po command in GDB. It turns out, that last bit is where the inaccuracy lies.

Recently at our day job, Joel was trying to get some information about a slew of custom objects that were tucked off in an array. He was getting tired of all the casting and property-to-method converting that was necessary to get the relevant info through GDB; and then he remembered my blog post, and implemented a slick description method that would provide him all the bits he cared about in one single po command. Except, it didn’t work. In this case, Joel’s objects were subclasses of CALayer, and rather than his informative string, all he saw was the standard CALayer output.

<SomeLayer:0x915a8b0; position = CGPoint (0 0); bounds = CGRect (0 0; 100 100); >

We were perplexed; and Joel questioned whether or not I had even tested any of this before making a blog post about it (what a jerk). Some grumbling and searching later, we came up with this document, Technical Note TN2124 (otherwise known as “Mac OS X Debugging Magic”; it’s a worthwhile bookmark, as it’s got gobs of good information for debugging Objective-C). The Cocoa and Cocoa Touch section starts out discussing the same description methods we talked about before, but also contains an interesting note (emphasis added by me).

Note: print-object actually calls the debugDescription method of the specified object. NSObject implements this method by calling through to the description method. Thus, by default, an object’s debug description is the same as its description. However, you can override debugDescription if you want to decouple these; many Cocoa objects do this.

There it is, folks: po doesn’t actually call description, it calls debugDescription. My original tests worked because NSObject doesn’t do anything special with debugDescription and simply calls off to description. If you implement a custom description method and want to keep a superclass’s debugDescription from hijacking it, I’d suggest always just doing the following:

- (NSString *)description
{
	return @"my awesome description";
}

- (NSString *)debugDescription
{
	return [self description];
}

All in the Timing: Keeping Track of Time Passed on iOS

Imagine you’re writing a game called Small Skyscraper. It’s one of a certain type of freemium game: it’s not particularly difficult, but achievements take a lot of time. You make money by selling in-app purchases to reduce the amount of time the user has to wait. Setting aside the value of this kind of game for the moment, let’s think about the game developer’s Sisyphean goal: deterring cheaters.

Cheating

(We all know, or should, that it’s impossible to completely prevent cheating. The user has all of your resources in hand. On a jailbroken device he can modify your code directly; he can modify your plists and resource files; he can spoof your server calls. All the developer can do is make cheating enough work that it’s not worth the average cheater’s effort.)

The first goal for the cheater will be the first impediment to success: the delay required to make progress in-game. He might think about trying to change the time values specified in your xml game object descriptions, or in your code; but well before that he’ll try the simplest possible cheat: changing the system clock.

It is frankly surprising how many time-based games are susceptible to this kind of cheating. The real game on which our Diminutive Domicile example is loosely based is one such game. It’s such an obvious cheating vector — why are all these games falling down in the same way? Well, it turns out this is a difficult problem on iOS. I don’t have a full answer, but I do have some information that might be useful.

Methodologies

There are a number of ways to get a number of representations of “time” on iOS. They boil down to two types: absolute and relative time. Absolute time is what you get back from [NSDate date], CFAbsoluteTimeGetCurrent(), or gettimeofday(). At the lowest level it’s expressed as seconds since the start of time: midnight on January 1, 1970 (or 2001 for the CF function). Thus each point in time is uniquely expressible as an NSDate. Relative time, on the other hand, does not have a fixed reference date. This is the time you get back from CACurrentMediaTime() or  -[NSProcessInfo systemUptime]. Because the reference time can change, CACurrentMediaTime() may return the same value at different times.
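To make the distinction concrete, here’s a minimal sketch of both flavors (variable names are mine; CACurrentMediaTime() requires linking QuartzCore):

// Absolute time: anchored to a fixed reference date; follows the system clock.
NSTimeInterval absoluteTime = [[NSDate date] timeIntervalSince1970];
CFAbsoluteTime cfAbsoluteTime = CFAbsoluteTimeGetCurrent(); // seconds since 2001

// Relative time: anchored to an arbitrary reference (boot); ignores clock changes.
CFTimeInterval mediaTime = CACurrentMediaTime();
NSTimeInterval uptime = [[NSProcessInfo processInfo] systemUptime];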

There is a clear advantage to absolute times — to wit, they are absolute. When your app suspends and resumes some time later, there’s no question about how long you’ve been out. But that’s not really true, or cheating with the system clock wouldn’t work. And indeed, this makes sense: while the user cannot change the reference date, he can change the system’s concept of when right now is. It amounts to the same thing. Thus, the disadvantage: NSDates and so on should and do respect the user’s idea of what time it is — for time zone support if nothing else — rather than the developer’s idea.

Relative times, on the other hand, do not change in response to the system clock. If CACurrentMediaTime() dropped an hour when the device moved between time zones, users watching movies would not be thrilled. This is a useful property.

It turns out that all the relative times rely on the low-level mach_absolute_time(), which is relative despite the name. It returns the time since the system booted, expressed in some machine-dependent time base that we don’t need to worry about at the moment. This is great for us — that’s certainly not something that should change in response to time zones. But it’s also not so great for us! The time since the system booted will reset if the system reboots. That means we can’t completely rely on relative times.

Solutions?

As I said, I don’t have a good answer for this question. Both relative and absolute times on the system have inherent flaws. One idea I haven’t mentioned is to get off the system: have Brief Building call back to a server process to get the absolute, un-modified time. This only works in situations where there is internet access, of course. But it suggests a hybrid solution.

I suggest implementing the server callback, and saving the “known good” server time alongside the system’s relative and absolute times. When the server is available, use that time. When it’s not, use the difference in relative times since the last known server time to extrapolate the present server time.
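Roughly, the bookkeeping looks like this (a sketch; the names are mine, and a real version would persist these values and handle the case where no server time has been seen yet):

// Record the trusted time whenever the server responds.
// (CACurrentMediaTime() requires QuartzCore.)
static NSTimeInterval lastServerTime;      // seconds since 1970, per the server
static CFTimeInterval lastRelativeTime;    // CACurrentMediaTime() at that moment

void RecordServerTime(NSTimeInterval serverTime)
{
	lastServerTime = serverTime;
	lastRelativeTime = CACurrentMediaTime();
}

// When the server is unreachable, extrapolate from the last known good time
// using the relative clock, which the user cannot nudge from Settings.
NSTimeInterval EstimatedCurrentTime(void)
{
	return lastServerTime + (CACurrentMediaTime() - lastRelativeTime);
}

When the server answers, call RecordServerTime() with its value; everywhere else, use EstimatedCurrentTime() instead of consulting the system clock directly.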

This covers Minuscule Mall in every situation except the user rebooting on a plane. As far as I can tell, there is no allowed way to get a more concrete relative time on an iOS device. In this situation, then, the app has a number of reasonable choices. The most draconian is to demand server access, and prevent the user from playing until the plane lands. This might not be so great if the plane is going to an international location (or even if it’s just a long trip). The most permissive option is to assume, in the absence of more reliable information, that the system’s absolute time is correct. Since this is the approach always taken by the Small Skyscraper-type games of today, it’s probably considered acceptable. A reasonable in-between might be to simply ignore the time elapsed between the last recorded time before the reboot, and the first use after. If the game hadn’t been run for a while before the reboot, though, this is still potentially a big loss to the user.

Easier on the Desktop

Things would be different if we could install daemons, or keep a process running after the game has been started, or guarantee internet access. But we can’t.


Lion: Breaking the Boundaries

New operating systems always bring so much to be annoyed by. In an attempt (perhaps already failed!) to look less like a curmudgeon, I’m going to talk about one of these things in a constructive way.

Full-Screen Animations in Non-Full-Screen Apps

You may have noticed that in Lion’s Mail.app, sending an email results in a somewhat disconcerting animation. (If you think this animation is delightful and aids your comprehension, I guess you should stop reading now.) The message window zooms up and offscreen, a visual metaphor for your email dispatching to the cloud to be routed to its intended recipient. Or something.

Similarly, in the Month and Year views of iCal, hitting the “next” button causes the current calendar “page” to peel up and zoom, again, up and offscreen. This time it’s a visual metaphor for time passing, seasons changing, your life slowly slipping away. Seriously, if a student filmmaker has not yet created a montage featuring the flipping calendar page, it’s only because students can’t afford new computers.

The common element here is an animation breaking out of the bounds of the application frame. iCal is a single-window application. Mail is, too. It’s easy to describe the boundary between iCal and not-iCal; it’s simply the frame of the window. While we’ve had transitional animations within apps for some time, animations that break that boundary are, as far as I can tell, a new thing in Lion.

Why Is This a Thing Now?

These new animations are clearly inspired by iOS. The Mail animation is the same one you see when sending an email from a landscape-oriented iPad. And the iCal animation is the same as the iPad’s Calendar app.

These particular animations are useful in the heavily animation-driven interface of iOS devices. They serve as smooth transitions between states, and, a nice bonus on low-powered mobile devices, give the CPU time to do the necessary work of changing states while the GPU entertains the user. Transitioning between states is particularly important in the Mail example: not only is the app going from writing to reading, but also from modal to main — that is, the interface that was unusable while the composition window was up becomes active again.

Is That Necessary on the Desktop?

The desktop comes from a static legacy. Animation used to be costly in every sense: programmer time, cpu time, memory, and so on. Apple’s designers have been on a slow journey of shedding that legacy; the example of iOS has accelerated the change. I’m okay with this! I am perfectly happy to see the desktop become a more animated place, if the animations serve a useful purpose. What I don’t want to see is animations ported from iOS without a serious re-examination of their utility. It’s still important and useful to transition smoothly between states on the desktop, of course. But is that happening?

The iCal animation seems like a no-brainer. Move from month to month, see a pretty animation, what else could one want? But back up a minute. I mean it: hit the “previous” button. The new month “page” doesn’t zoom down from the top of the screen, even though the “next” animation sets that expectation. Instead, it appears at the top of the window, translucent but rapidly gaining opacity as it drapes over the old page. That’s jarring! To my mind, it’s less of a seamless transition than simply swapping window contents. (Not even to mention a simple push animation, as on iPhone.)

The Mail animation is more problematic. For one thing, there’s no modal/main distinction to transition across — when a new message window is open, the rest of the interface is still usable. Perhaps more importantly, it’s not an incidental animation of a custom UI element like the iCal calendar page. It’s a standard system window, suddenly taken out of the user’s control. For the lifetime of the desktop metaphor we’ve had two states for standard windows: open, and thus usable, versus closed, and invisible. Even if the contents of an open window are greyed out, the user can still do window management functions like resizing or repositioning. Now, for the sake of an interstitial animation, Apple has introduced a new state. The window is open, it looks usable, but it is not manageable. (Indeed, it’s running away.) This, to put it mildly, breaks expectations.

Aside from all that app-specific stuff, both of these animations suffer from the issue with which I started the post: they break the application’s boundaries. This doesn’t happen on iOS because an application’s boundaries are necessarily the boundaries of the screen. The zooming window, the flipping page, all are neatly confined within the display. Not so on the desktop. If you have a 27″ iMac and a small iCal window, that flipping page has to travel some serious distance before it can gracefully disappear! Same goes for Mail. More than anything, this is a distraction. For all the effort Apple spent on making the content come to the fore in Lion, the design team seems not to have considered that UI zooming over all of it might have the opposite effect.

What Can You Do?

Well, you can run all your apps in full-screen! This seems to be the use case these animations were designed for. They’ll no longer break the application boundaries, the iCal “previous” animation won’t be broken, and everything will feel much more iOS-like. Power users may not like this idea (I know I don’t); hopefully it won’t become mandatory in 10.8. Interleaving windows from different applications is a big feature of modern window management systems, and I’m not quite ready to give it up.

Or you can turn off the Mail animations! It’s better than nothing. And you can manually disable the iCal animations.

What Should Apple Do?

Accept that not everything from iOS belongs on the desktop! While it’s true that iOS is a rich source of inspiration, the new ideas there stem from the unique challenges and opportunities provided by a tiny, low-power, touch-sensitive device. Not all of those ideas belong on the desktop.

Coming to this realization will take the Apple design team time, but they could get there. We’ve seen them get a little over-excited about new paradigms before, but sometimes they do end up reining themselves in. Remember lickability, or brushed metal? In the meantime, let’s keep our fingers crossed.


CFTree Is Leaking Its Children

It’s 12:40 AM, and I’ve got a client-related deadline tomorrow afternoon – so what am I doing writing a blog post? The real answer is: I’m not really sure; but the more relevant answer is: because this took far too long to track down, and I’d like to save someone else the time I wasted. Essentially it boils down to the fact that the documentation for CFTree is not only misleading, but it seems to be flat-out wrong.

First, you may be sitting there wondering what the hell a CFTree is, and several weeks ago I would have wondered the same thing. CFTree is (as its name suggests) a tree structure; it is a pseudo-collection *, and is used to organize elements in hierarchical relationships. In this case, the lines between the contents of the collection and the collection itself are blurred. A tree can have relationships with other trees (potentially parents and children), and can hold a payload the size of a pointer, which allows you to store integers, pointers to structs, or objects – even other collections if you feel like being all wild and crazy.

This post isn’t supposed to be about all the fancy things you can do with a CFTree, but rather, this line in the documentation for CFTree:

Releasing a tree releases its child trees, and all of their child trees (recursively). Note also that the final release of a tree (when its retain count decreases to zero) causes all of its child trees, and all of their child trees (recursively), to be destroyed, regardless of their retain counts.

The way I interpret that suggests I would have created and (completely) destroyed a tree if I were to do the following: Create Root Tree, Create Child Tree, Append Child Tree, Release Child Tree, Release Tree. As it turns out, destroying the root tree does not release or destroy its child trees. It’s been difficult to track down for a variety of reasons, but I suspected something was wrong (even if I only found one Google result on the topic), and I set out to prove it. Let’s look at some code.

There’s some very non-Cocoa-looking stuff going on here, but the details are something I’d like to dive into in a later post. The important thing is that we’ve created a tree with a release callback pointing to DummyReleaseCallback. This is the function that the CFTree is going to call when it is done with the value passed in to treeContext.info, which is that payload I was talking about before. It’s safe to think of this whole process in the same manner you would an NSArray sending the release message to an object when it is removed from the collection.

static void DummyReleaseCallback(const void *info )
{
	NSLog(@"release %i", (int)info);
}

...

{
	CFTreeContext treeContext;
	treeContext.version = 0;
	treeContext.retain = NULL;
	treeContext.release = DummyReleaseCallback;
	treeContext.copyDescription = NULL;
	treeContext.info = (void *)1;
	
	// DummyReleaseCallback should be called after we release this tree
	CFTreeRef dummyTree = CFTreeCreate(NULL, &treeContext);
	CFRelease(dummyTree);

	...

2011-09-07 01:13:26.836 CFTree[1044:f203] release 1

Perfect, the release callback is being called when the tree is released. Let’s add some children…


	...

	treeContext.info = (void *)2;
	CFTreeRef root = CFTreeCreate(NULL, &treeContext);

	for (NSUInteger i = 0; i < 10; i++) {
		CFTreeContext treeContext;
		treeContext.version = 0;
		treeContext.retain = NULL;
		treeContext.release = DummyReleaseCallback;
		treeContext.copyDescription = NULL;
		treeContext.info = (void *)100 + i;
		
		CFTreeRef newChild = CFTreeCreate(NULL, &treeContext);
		CFTreeAppendChild(root, newChild);
		CFRelease(newChild);
	}
	CFRelease(root);
}
2011-09-07 01:14:09.875 CFTree[1044:f203] release 2

This is where things go wrong; we only see one release log. How about if we remove the children with CFTreeRemoveAllChildren before releasing the root?

2011-09-07 01:19:37.394 CFTree[1122:f203] release 100
2011-09-07 01:19:37.395 CFTree[1122:f203] release 101
2011-09-07 01:19:37.395 CFTree[1122:f203] release 102
2011-09-07 01:19:37.395 CFTree[1122:f203] release 103
2011-09-07 01:19:37.396 CFTree[1122:f203] release 104
2011-09-07 01:19:37.396 CFTree[1122:f203] release 105
2011-09-07 01:19:37.397 CFTree[1122:f203] release 106
2011-09-07 01:19:37.397 CFTree[1122:f203] release 107
2011-09-07 01:19:37.397 CFTree[1122:f203] release 108
2011-09-07 01:19:37.398 CFTree[1122:f203] release 109
2011-09-07 01:19:37.398 CFTree[1122:f203] release 2

That’s a lot more like it! Now – these logs really only tell us what’s happening with the info payload of the trees, not the CFTrees themselves. For the sake of comfort, I decided to take a look at object allocations in Instruments. The screenshots below confirm that destroying only the root tree isn’t sufficient for destroying the entire tree – you can see our 10 child trees lingering around.

What To Do

I’ve filed a bug report with Apple; you should too. In the meantime, this hiccup isn’t going to stop me from using CFTree. For now, I’m destroying the tree manually by traversing it (recursively) and using the CFTreeRemoveAllChildren function on each child tree, starting at the deepest level. The part in the documentation about child trees being retained by their parent and released when removed is accurate; following normal memory management will result in the expected behavior here. This solution isn’t nearly as clean and pretty as CFRelease(rootTree), but for now it will have to do.
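For the curious, the manual teardown amounts to something like this (a sketch; the function name is mine):

// Remove children bottom-up, so every child tree is actually destroyed.
static void MyDestroyTree(CFTreeRef tree)
{
	CFIndex childCount = CFTreeGetChildCount(tree);
	for (CFIndex i = 0; i < childCount; i++) {
		MyDestroyTree(CFTreeGetChildAtIndex(tree, i));
	}
	CFTreeRemoveAllChildren(tree);
}

// Usage: MyDestroyTree(root); CFRelease(root);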

*I say pseudo because it’s not a collection managed by a single object like an NSArray, but rather a collection made up of a relationship of “collection” objects.


CALayer Internals: Contents

It’s right there in the CALayer documentation:

contents
An object that provides the contents of the layer. Animatable.

@property(retain) id contents

Discussion
A layer can set this property to a CGImageRef to display the image as its contents. The default value is nil.

There’s exactly one thing a developer can assign to a layer’s contents property: a CGImageRef. So why is the property declared to be an id?

Back Up a Second.

id is Objective-C’s general object type.  It’s like a void* for objects. We’ve already got kind of a problem here — how is a CGImageRef the same as an Objective-C object? — but short story, Core Foundation pseudo-objects (CFTypes) — and pseudo-objects that derive from CFType (like Core Graphics types) — are set up such that they satisfy the requirements of id. This is a prerequisite for, but not the same as, toll-free bridging. Maybe Jerry will write a post about that in the future!

Anyway.

It’s true that there’s only one thing a developer can write to a layer’s contents, but that’s only half of what a property does. If you read the contents back, you won’t necessarily end up holding a CGImageRef. If the layer has been drawn into, using delegate methods (displayLayer: or drawLayer:inContext:) or subclassing (drawInContext:), you’ll actually get an opaque internal type called CABackingStore. This is, as the name implies, a pixel buffer holding the stuff you see in the layer.

Sounds like we have another problem! There’s no header file for CABackingStore; there’s nothing a well-meaning developer can do with it. Or is there? Although the documentation specifies that developers should set layers’ contents to CGImageRefs, they are actually perfectly happy to share generic contents. That means cloning a layer is as easy as layerB.contents = layerA.contents; no cast required, since they’re both type id! (…if they’re both in the same layer hierarchy*, which on iOS they almost certainly will be.)
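In code, the mirroring trick is about as simple as it sounds (a sketch; sourceLayer stands in for any layer that has already drawn its content):

CALayer *mirror = [CALayer layer];
mirror.frame = sourceLayer.frame;
// Whatever sourceLayer rendered (its backing store) now shows up in mirror too.
mirror.contents = sourceLayer.contents;
[self.view.layer addSublayer:mirror];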

Takeaway

The documentation doesn’t make it clear, but you can set a CALayer‘s contents property to either a CGImageRef, or the contents of another layer. When querying the contents of a layer, don’t expect to get back a CGImageRef, but do expect something that can serve as the contents of something else. Even if new types (internal or external) are added to the API, this will always hold true.


CALayer’s Parallel Universe

Ever tried to animate a UIView’s position?  It’s easy, using UIView animation class methods like animateWithDuration:animations: and friends.  Simply change the position inside the “animations” block, et voila, a pretty animation with the duration of your choice.

But have you ever tried to change that animation while it’s running?  Suppose you’re writing a simple application that animates a view to a point the user touches. The first one works fine, but after that, additional animation blocks will result in the view animating from the previous final location that it was supposed to go to — not from the location it’s very clearly occupying onscreen.

“Why Is That?”*

Now, I’m fudging a little bit for simplicity’s sake. One of the more complicated UIView animation methods allows options, one of which is UIViewAnimationOptionBeginFromCurrentState.  That fixes this problem in one fell swoop — as the name implies, animations will begin from the view’s current state, that being its position onscreen in our example, rather than the final state. But the larger question remains: why is such an option necessary at all? What’s going on behind the scenes? Join me as I draw back the curtain.
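For reference, that one-option fix looks something like this (a sketch reusing the tap handler from the full example later in the post):

- (void)tap:(UITapGestureRecognizer *)gr
{
	[UIView animateWithDuration:1.f
		delay:0
		options:UIViewAnimationOptionBeginFromCurrentState
		animations:^{ touchView.center = [gr locationInView:self.view]; }
		completion:nil];
}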

The Great Cover-Up

You may have heard somewhere along the way that UIViews are “backed” by CALayers. In practical terms, this means that CALayers actually handle the rendering, compositing, and animation of your view.  The UIView class is a relatively complex add-on that knows about things like user interaction (in the form of gesture recognizers), printing, and all the specialized things that UIView subclasses do.  But many of the core UIView methods — things that deal with view hierarchy, display, colors, even hit testing — simply call through to the underlying CALayer, which has similar methods.

Now that we know CALayer is doing all the work under the covers (which are themselves behind a curtain, as you’ll recall), we can talk about exactly how it does its dirty deeds.  What really happens when a view (that is, a layer) animates from point A to B?

The Parallel Universe

It’s not magic!  It’s even better: technology.  Two extraordinarily divergent things happen when you kick off an animation.  The animated property of the view (in our example, position) doesn’t animate at all!  It is immediately set to the final value. If you start an animation with a duration of a year, and in the next line of code read back the view’s position, you’ll get the position you’d expect the view to occupy a year from now. But that makes no sense — the view is onscreen, and it’s clearly not all the way over there. It doesn’t yet appear to have moved at all.  The view seems to be expressing two contradictory pieces of state.

(By the way, the fact that the view’s position jumps to the final value as soon as the animation begins is why the first example I talked about doesn’t work. When you start an animation, it’s internally expressed as “animate from A to B”, and the “A” is implicitly set to be the view’s current position. So when you animate from A to B, and then change it to C halfway through, the view already considers itself to be at B, although it does not appear so to the naked eye. But I suspect you may be more interested in the underlying question at this point! Let’s continue.)

If the view’s position changes instantaneously, but we can watch it travel across the screen, there must be some kind of trick taking place. And indeed, there is an incredibly pervasive trick. The secret is this: the view your code talks to is not the view on screen at all.  Indeed, no UI element you address is ever on screen! Instead, Core Animation creates a parallel view hierarchy, from UIWindow on down. What you see on screen is something like your view’s evil twin.

Did I blow your mind?

Theory

What Core Animation is doing is a low-level Model/View separation, just like the MVC pattern with which you’re familiar.  Wait, isn’t everything we’re talking about a view? Yes, we’re overloading the term here. Now we’re talking about model data about an object that happens to be a UIView, and the view of that model data. The model is the UIView you talk to — it contains the truth about the data (the position of the UIView).  The view is the parallel CALayer on screen — it’s a visual representation of the data. It can animate rather than moving immediately because just as in other MVC situations, the view renders the data however it feels appropriate; it’s not guaranteed to be a one-to-one representation.

This is cool to know, but it’s only of academic interest if you can’t access the parallel view hierarchy. Fortunately, you can! Not on the UIView level, but CALayer’s presentationLayer method gets you there. Terminology time: A layer’s “presentation layer” is the view I was talking about before. To move back and forth between the hierarchies, presentation layers have a “model layer” (accessed through the modelLayer method) that is, as you’d guess, the model — the layer you usually use in your code. Using these two methods, you can jump between the model and view layer hierarchies with ease.

Code

The practical upshot of this: the data of the presentation layer will reflect where things currently are on screen, as opposed to the model layer we’re used to. Suddenly, animating from a view’s current position is simple (although you will have to drop down into Core Animation to do it).  As a refresher, here’s the pertinent part of the example I started with. Remember, the idea here was to animate a view to the user’s touch, but it doesn’t animate cleanly once there’s another animation in effect.

- (void)viewDidLoad
{
	[super viewDidLoad];
	// touchView is assumed to be an instance variable of this view controller
	touchView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 40, 40)];
	touchView.backgroundColor = [UIColor redColor];
	[self.view addSubview:touchView];
	UITapGestureRecognizer *gr = [[UITapGestureRecognizer alloc] 
		initWithTarget:self action:@selector(tap:)];
	[self.view addGestureRecognizer:gr];
}

- (void)tap:(UITapGestureRecognizer*)gr
{
	[UIView animateWithDuration:1.f animations:
	 ^{touchView.center = [gr locationInView:self.view];}];
}

And here are the changes we have to make to use the presentation layer to run the animation from the current location:

- (void)tap:(UITapGestureRecognizer*)gr
{
	CGPoint newPos = [gr locationInView:self.view];
	CGPoint oldPos = [touchView.layer.presentationLayer position];
	CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"position"];
	animation.fromValue = [NSValue valueWithCGPoint:oldPos];
	animation.toValue = [NSValue valueWithCGPoint:newPos];
	touchView.layer.position = newPos;
	[touchView.layer addAnimation:animation forKey:@""];
}

What’s this all about? Just as before, we’re getting the gesture recognizer’s location to determine where we want to animate to. But where before we were depending on the UIView animation method to tell Core Animation to create an implicit animation, now we create our own, aptly called an explicit animation. (More posts on this distinction to come: for now, all that matters is that usually Core Animation will do the right thing for you. That’s an implicit animation.) The basic animation here simply takes a “from” and a “to”, which we fill in appropriately. (The fact that we have to wrap the CGPoints in NSValues is an unfortunate implementation detail.) We then set the final value, and right after, add the animation. It looks like a lot more code than before, but that’s really all that’s necessary, and this methodology can be used to do much more complex stuff than UIView animations are capable of. Check out the CAAnimation subclasses to see how you can do keyframe animations and lots more.

Problem Solved!

And a whole lot more besides. More on all of these concepts to come!


Integers in Your Collections (NSNumber’s not my friend)

Early on in the days of learning Cocoa, I remember coming across a situation where I had a bunch of integers that I needed to keep around, but wasn’t immediately sure how to do that with an NSArray. As is quickly made evident by the documentation, Cocoa collections pretty much all require their values and keys to be Objective-C objects, which an integer (int, NSInteger, or NSUInteger) is not.

- (void)addObject:(id)anObject;
- (void)setValue:(id)value forKey:(NSString *)key;

Based on the plethora of Google results on the topic, it’s obvious that I’m not the only one who’s run into this situation; sadly, nearly all of them indicate that, because collections require objects, the only solution is to wrap your integers with NSNumber. I’m writing this blog post to let you know that there ARE other ways.

(This post got a little long winded – if you don’t care about the academic conversation, go ahead and just skip to the code.)

Why You Should Care: NSNumber Comes With a Cost

Let’s start with why using NSNumbers might not be your best option: Objects are more expensive than scalars. In a nutshell, that’s all there really is to it; NSNumbers are often unnecessarily heavy for the job required of them. Objects require more memory to create than primitives, which in turn requires more CPU cycles to allocate. Another object cost, albeit a much smaller one, is the Objective-C dispatching required for the method calls needed to retrieve and compare basic values of an NSNumber.

Compare the following snippet of code: we are iterating through a loop 1 million times, and assigning our loop counter to a variable with two very different methods – using NSNumbers vs. using NSIntegers. (Logging code removed for brevity)

NSNumber *numberValue;
NSInteger intValue;

for (NSInteger i = 0; i < collectionSize; i++) {
	NSNumber *number = [NSNumber numberWithInt:i];
	numberValue = number;
}

for (NSInteger i = 0; i < collectionSize; i++) {
	intValue = i;
}
// 2.8 GHz i7 iMac
NSNumber 0.20776 Seconds
NSInteger 0.00208 Seconds

// iPad 1
NSNumber 3.76227 Seconds
NSInteger 0.00952 Seconds

Whoa, assigning 1 million integers is 100 times faster than creating and assigning 1 million NSNumbers on an iMac, and nearly 400 times faster on an iPad! In the land of performance optimization, a 100-400x improvement is almost always a win, even if it involves a small amount of extra code complexity.

A Cocoa Flavored Layer Cake

One of the most amazing (and simultaneously intimidating) parts of being an iOS/Mac developer is that for any particular problem, there exists a smorgasbord of API ranging from high-level libraries like Foundation down to straight C. For this occasion, our solution lies in the not-so-scary land that sits comfortably between Foundation and C: Core Foundation. Technically, Core Foundation IS a C API, but a lot of the nitty-gritty details of C have been abstracted away. When it comes to collections, this abstraction relieves us from needing to think about things like dynamically growing the memory backing a collection of unknown capacity.

Anyone familiar with using Foundation should have very little trouble understanding Core Foundation, as much of the API is nearly identical – with the exception that it is C, and therefore procedural. In fact, the two are so closely related that many of the equivalent classes (e.g. NSArray/CFArray) only need to be typecast before they can be used interchangeably (this is called Toll-Free Bridging, and is something we have planned for a future article).

Below is an example that creates mutable instances of a CFDictionary and a CFArray.

CFMutableArrayRef array;
CFMutableDictionaryRef dictionary;
array = CFArrayCreateMutable(NULL, 0, &kCFTypeArrayCallBacks);
dictionary = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

NSString *aKey = @"ultrajoke";
NSString *aString = @"jerry";
CFArrayAppendValue(array, aString);
CFDictionarySetValue(dictionary, aKey, aString);

If you’ve been loving life in the realm of UIKit and Foundation, and haven’t spent any time with Core Foundation or any of the lower level API – it’s possible your head just went spinning as a result of this vastly different looking code. Trust me, it’s not so bad.

  • CFArrayCreateMutable and CFDictionaryCreateMutable – This is C, those are just the function names.
  • NULL – NULL is being passed for the allocator argument, and is the same as using kCFAllocatorDefault. This is something we can dive into more another time, but you know how in Objective-C you see things like [[MyClass alloc] init]? This is kinda like the alloc part. The important thing to know is that this argument impacts how memory is allocated, and you’re probably always going to want to use NULL.
  • 0 – This is just the capacity, the docs tell us that 0 means these collections will grow their capacity (and memory) as needed.
  • &kCFTypeArrayCallBacks, etc – These are pointers to structs of callback functions used for the values/keys, and are the kingpin of this whole article; more on them in a moment.

It’s All About The Callbacks

The sole reason we’ve moved to Core Foundation is that the functions for creating collection objects give us greater control over what happens when things are added and removed (notice I said things, not objects). This control is given by way of the callbacks we mentioned earlier; they vary depending on the collection type, but all of them fall into one of the following five basic types.

  • Retain Callback – Function called when a value is added to the array or dictionary, as a value or key.
  • Release Callback – Function called when a value is removed from the array or dictionary, as a value or key.
  • Copy Description Callback – Function called to get the description of a value. (Remember descriptions from our previous post?)
  • Equal Callback – Function called to determine if one value is equal to another.
  • Hash Callback – Function used to calculate a hash for keys in a dictionary.

Of the five types of callbacks, two sound very “object-y” in nature: Retain and Release. In fact, these are the two that need to change if we want to store integers in our collections; integers aren’t objects, and don’t know anything about retain counts. According to the documentation for CFArrayCallBacks, CFDictionaryKeyCallBacks and CFDictionaryValueCallBacks, passing NULL for the retain and release callbacks results in the collection simply not retaining/releasing those values (or keys). What if we pass NULL for the other callback types? Again we turn to the documentation, and we find that they all have default behaviors that are used when NULL is provided: Description creates a simple description, Equal uses pointer equality, and Hash is derived by converting the pointer into an integer.

If you’ve trudged all the way through this long winded post, you’re probably starting to see where I’m going with this, so let’s look at some code.

The Code

// Non Retained Array and Dictionary
CFMutableArrayRef intArray = CFArrayCreateMutable(NULL, 0, NULL);
CFMutableDictionaryRef intDict = CFDictionaryCreateMutable(NULL, 0, NULL, NULL);

// Dictionary With Non Retained Keys and Object Values
CFMutableDictionaryRef intObjDict = CFDictionaryCreateMutable(NULL, 0, NULL, &kCFTypeDictionaryValueCallBacks);

// Setting values
CFArrayAppendValue(intArray, (void *)79);
CFDictionarySetValue(intDict, (void *)5, (void *)10);
CFDictionarySetValue(intObjDict, (void *)5, @"ultrajoke");

// Getting values
NSInteger arrayInt = (NSInteger)CFArrayGetValueAtIndex(intArray, 0);
NSInteger dictInt = (NSInteger)CFDictionaryGetValue(intDict, (void *)5);
NSString *dictString = (NSString *)CFDictionaryGetValue(intObjDict, (void *)5);

CFRelease(intArray);
intArray = NULL;
CFRelease(intDict);
intDict = NULL;
CFRelease(intObjDict);
intObjDict = NULL;

Yeah, that’s really all there is to it; we simply pass NULL for the callback pointers, which prevents the collections from trying to call retain/release on the values assigned to it. It’s worth pointing out that there are some caveats to be aware of:

  • intArray and intDict are blindly storing pointer sized values, including pointers to objects, integers and booleans – nothing is retained/released.
  • The equal method for intArray and intDict uses “pointer comparison”, which is essentially the direct value that was stored. This means that while you can get away with storing a pointer to an object (that will not be retained), equality is determined by only the memory address.
  • Because the intObjDict dictionary uses kCFTypeDictionaryValueCallBacks, its values MUST be objects (either CFType or NSObject).

Things I Learned at Siggraph

Our legions of dedicated fans (hi Mom) may have noticed a dry spell in the posts of late.  This is partly because I spent last week in beautiful Vancouver, B.C., attending the annual Siggraph conference.  In between time spent watching the year’s best computer animation and learning about morphological antialiasing, I picked up some stuff that’s specifically applicable to graphics development on iOS.  The following are my notes from “Beyond Programmable Shading”, a course about GPU utilization. Note, this is somewhat advanced stuff; if you’re not writing your own shaders, this may not be the blog post for you.

Background

Graphics implementations seem to run in cycles. We go from software graphics to fixed function hardware and back again. OpenGL was all about the fixed function, but with programmable shaders we’re right back in software. Of course there’s a tradeoff to either side; it’s all about flexibility versus speed.

Power

Working on a mobile device, our primary concern is power use. We’ve all seen games that drain the battery in a half hour of play time. The somewhat informed assumption is that doing parallelizable work on the GPU (graphics or otherwise) will always be a win. The reality is more complicated. CPUs take a higher voltage per core, but GPUs have many more cores. (Fixed function hardware, such as a floating point unit, is the cheapest to operate; it’s very fast and very inflexible). Offloading work onto the GPU is only a win if it takes less power overall — not just the power taken to do the work on those cores versus the CPU’s cores, but also the CPU power it takes to upload the data and read it back. For small tasks, this can dominate the time spent running code on the GPU.

Moving forward, let’s assume that we have good reason to run code on the GPU — like, say, graphics. We can’t control the voltage the chip takes when in use, but we can control how often it’s in use. Sounds like a no-brainer, but the best thing we as software developers can do to minimize power use is to minimize how long the chip spends running.

How can we do this?  First, cap your frame rate. The fewer frames per second you draw, the more time the chip spends idle. If you’re writing a graphically complex game that hits 45fps on a good day, you may not think about this; but you could be getting extremely high frame rates on easy content like menus. This can be even worse than expected, because working that fast can cause the chip to heat up, triggering throttling meant to avoid excessive temperatures. That means that when the user closes the menu and gets to the good stuff, you’ll no longer be capable of rendering at as high a frame rate as you’d like.
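On iOS, if your rendering is driven by a CADisplayLink (an assumption – your loop may be set up differently), capping the frame rate is a one-liner:

// Draw on every other vsync: roughly 30fps on a 60Hz display.
displayLink.frameInterval = 2;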

Now that your frame rate is low, optimize the time you spend rendering a frame. Same as before: the less time spent rendering, the more time the chip is idle. Don’t stop optimizing once you hit 60fps; further performance gains, combined with a capped frame rate, will really help power consumption.

Another way to keep the GPU idle is to coalesce work in the frame. Rather than computing the player’s position, then rendering the player, then computing the enemy’s position, then rendering the enemy, and so on, do all your rendering back to back. This will maximize the solid time the GPU can power off. It’s particularly important to keep the idle time in one large chunk rather than many small ones, because there is some latency associated with switching on and off parts or all of the chip.

Miscellany

There are plenty of ways to keep your GPU code fast; you’ve probably seen some of it if you’ve read anything about optimizing shaders. One common tip is to minimize branching. I learned why: when a GPU runs conditionals, it actually evaluates both branches — and not in parallel. For an if/else, it simply masks off writes on the cores that don’t meet the condition; runs the first branch on all cores; reverses the mask; and runs the second branch. That’s potentially a high price to pay! It pays to get clever with mix(), sign(), swizzling, and so on. Fortunately GLSL gives you lots of ways to avoid branching, if you’re willing to take the time to figure them out.

The most time-consuming operation in a shader is reading from memory. GPUs utterly lack the sophisticated caching mechanisms CPUs have; that’s the price for massive parallelism. GPUs are clever about hiding the stalls caused by memory loads by switching to work on other units (vertices or fragments, in our common cases); the trick is making sure there’s enough math for them to do to take up the time. Counterintuitively, a good strategy is often to recompute data rather than taking the time to load it. Those little cores are really fast, and reading from memory is really slow! You’d be surprised how many cosines you can calculate in the time it takes to read from your lookup table.

Bonus Notes on Vancouver

There are way more women than US-average wearing sheer tops. And a way higher incidence than I am used to of slight limps in both genders. Causation?
