Things I Learned at Siggraph II (2012)

In the interests of becoming a more well-rounded individual in the extremely narrow field of computer graphics, I spent last week in sunny LA, attending Siggraph. What follow are my notes and a little bit of synthesis from various sessions that I thought might be of some interest to our audience. I jotted these notes down during the presentations, and it’s very likely I either misunderstood or mis-transcribed things. All errors are my own.

Computer Aesthetics

Turns out, there’s no reliable way for a computer to understand human aesthetics. We like what we like as a result of a huge mess of ad hoc, built-up junk in our evolutionary history, and that mess is difficult to model. Attempts have been made in the past: the most famous of these, the Golden Ratio, is actually not as historically important as we’ve been told. It doesn’t really appear in all the great works of art and culture people like to cite, although modern artists, aware of its reputation, have consciously used it in a sort of self-fulfilling prophecy.

Interestingly, computers can, through the use of evolutionary algorithms, develop their own sense of aesthetics. They’re just not going to like what we like. We can’t use an evolutionary algorithm to bridge this gap because the fitness test can’t be automated – deciding which of two choices is more aesthetically pleasing is of course the problem we’re trying to solve. Thus evolutionary algorithms are bottlenecked by humans making that choice (and are subject to the taste of those particular humans).

Mobile GPU

Big news: at Siggraph, OpenGL ES 3 was announced. It’ll be a while before we have mobile devices running it, but now we can start planning ahead.

The important thing on mobile, as ever, is energy efficiency. This year I learned something new: mobile GPUs are never going to get more power. Power means heat, and if they draw much more than the ~1 watt they currently consume, the chips will melt. Well, OK, not the chips, but the solder balls holding the chips in place. So increases in mobile performance have to come from more energy-efficient hardware and software; we can’t just throw power at the problem like we do on the desktop.

Of course, we as software engineers don’t have much say over the chips. But we can write our code in a way that consumes less power. As I said last time, this mostly means writing more efficient code, so the hardware has a chance to power down. It also means common-sense things like throttling down the frame rate on static or slow content like menus.

There are some things I didn’t talk about last time because I didn’t know them. Reducing bandwidth is of course important; communication between the CPU and GPU takes time and power. One simple way to reduce it is to store only half of a symmetrical texture and mirror it with GL_MIRRORED_REPEAT. Hey, half the bandwidth!
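A sketch of the trick, assuming a texture is already bound to GL_TEXTURE_2D:

// GL_MIRRORED_REPEAT is core in OpenGL ES 2.0.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);
// Use texture coordinates from 0.0 to 2.0 along the mirrored axis; the
// hardware reflects the half-texture, reconstructing the full image.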

You can also reduce the amount of drawing the GPU has to do. Drawing models front to back, rather than vice versa, means the GPU can reject covered pixels and potentially save you a lot of drawing time. Yes, that means the skybox is drawn last, not first.

Changing GPU state takes a ton of time. That means bouncing between buffers for multi-pass effects is expensive. Rather than drawing to the framebuffer, then to an FBO, then compositing that back into the framebuffer, do the FBO first, and change states just once. Furthermore, you should try to draw all your other FBOs and assorted render targets before rendering to the default framebuffer.
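In GL shorthand, the frame looks something like this (the drawing helpers are illustrative placeholders, not real API):

// All offscreen passes first: bind each FBO once and finish with it.
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFBO);   // created elsewhere
drawEffectSource();                                // illustrative helper

// Then touch the default framebuffer exactly once per frame.
glBindFramebuffer(GL_FRAMEBUFFER, defaultFBO);     // the view's framebuffer on iOS
drawScene();                                       // illustrative helper
compositeOffscreenTexture();                       // blend the FBO's texture in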

Lastly, on stuff I should already have known: FBO contents are saved out to main memory. Clearing an FBO before drawing into it on a new frame saves you from having to restore that memory back to the GPU. That’s a huge win for one line of code.
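That one line is just a clear at the top of the frame, before any drawing; pick the bits that match your attachments:

// A full-buffer clear tells the driver that last frame's contents are dead,
// so it can skip restoring the FBO from main memory entirely.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);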

New Stuff

The new ES spec gives us a couple of additional options. To reduce bandwidth, it provides two standard texture compression formats (ETC2 and EAC) that look quite nice compared to the current non-standards (PVRTC on iOS) and tend to compress smaller.

Also, glInvalidateFramebuffer does the same job as glClear for informing the GPU that you don’t need the contents of a framebuffer, without having to keep track of which bits need to be cleared. There’s the added bonus of having a name that indicates what it does.
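Usage is about as simple as it gets; a sketch for an ES 3.0 context:

// After the last draw that needs them, mark attachments as throwaway so
// the driver never writes them back to memory.
const GLenum attachments[] = { GL_COLOR_ATTACHMENT0, GL_DEPTH_ATTACHMENT };
glInvalidateFramebuffer(GL_FRAMEBUFFER, 2, attachments);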

That’s all, folks!


SMShadowedLayer

Impetus

For a recent project, we needed to “simulate” or “fake” the look of pieces of paper in a physical environment. We didn’t need fancy physical modeling or curves and curls and folds, just perspective and shadowing during an animation. With OpenGL, perspective and shadowing are dead simple, but animation is hard. With Core Animation, perspective and animation are dead simple, but shadowing is hard. Being loath to work with a lower-level framework when a high-level one will do, we put together a little CALayer subclass that knows how to render sufficiently accurate shadows on its surface. Note that it doesn’t do inter-object shadows, just shading based on its angle in space. Since the math is essentially the same, we decided to throw in specular highlights free of charge.

How Does It Work?

The math more or less follows the Lambert illumination model. The key concept is that the illumination of a surface is related to the angle between the incoming light ray and the normal of the surface at that point. (Since layers are by definition planar, this is the normal of the layer.) All we do is calculate the dot product of the incoming light with the normal of the layer, and scale the shadow’s strength by that value. (For simplicity, we model the light as a directional source, shooting into the screen.) This means that the closer the layer is to facing you full-on, the less visible the shadow is. The specular highlight works the same way, except it has a sharper falloff: a power of the dot product rather than a linear scale.

I say “more or less” because in the true illumination model, this math would change the color of the surface. We can’t do that for a number of reasons — primarily because you can set an image or other content in a CALayer, and we don’t want to wipe it out — so instead we put a shadow layer and a highlight layer on top of the base layer, and change their opacity.
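Concretely, the per-frame math might look something like the following sketch. The vector helper, the light direction, and the highlight exponent are illustrative guesses, not SMShadowedLayer’s actual internals.

// Directional light shooting straight into the screen.
static const CGFloat kLightDirection[3] = { 0.0f, 0.0f, -1.0f };

static CGFloat SMDot3(const CGFloat a[3], const CGFloat b[3])
{
	return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// layerNormal is the layer's plane normal after its transform is applied.
CGFloat lambert = fabs(SMDot3(kLightDirection, layerNormal));
shadowLayer.opacity = 1.0f - lambert;          // face-on: no shadow at all
highlightLayer.opacity = powf(lambert, 16.0f); // sharper, power-curve falloff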

Animation

To get the shadow and highlight to do the right thing during an implicit animation, we had to find a way to recalculate their opacities during each frame. If we had followed the naive approach, we would have missed changes that occur when, for instance, the layer rotates from facing left to facing right by going through facing center. In that case the shadow should go from dark to light to dark, but an implicit animation would simply go from dark to dark. Instead, we define our own internal animatable property, tell the layer it needs to re-display when that property changes, and then recalculate the shadow math rather than drawing anything in the display method.
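A sketch of that pattern (the class, property, and helper names are illustrative, not necessarily the real internals):

@interface SMShadedLayer : CALayer
// Internal property whose only job is to drive -display during animations.
@property (nonatomic, assign) float shadingProgress;
@end

@implementation SMShadedLayer
@dynamic shadingProgress; // let Core Animation synthesize the animatable property

+ (BOOL)needsDisplayForKey:(NSString *)key
{
	// Animating shadingProgress forces a -display call on every frame.
	if ([key isEqualToString:@"shadingProgress"]) {
		return YES;
	}
	return [super needsDisplayForKey:key];
}

- (void)display
{
	// Called each frame of the animation. Instead of drawing, grab the
	// in-flight transform from the presentation layer and redo the
	// Lambert math against it.
	CATransform3D t = [(CALayer *)[self presentationLayer] transform];
	// The transformed plane normal is the z axis pushed through t.
	CGFloat layerNormal[3] = { t.m31, t.m32, t.m33 };
	// ...feed layerNormal into the dot-product math shown earlier.
}
@end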

We also put in a method to allow users to bypass this extra math, if they know their transform animation won’t cause this kind of nonlinear change in the shading.

Enough Talk, Let’s See It

(Why is there a slight stutter in that video? Because I did not take the time to do this.)

Where Can I Get It?

Here. You may want to follow the Spaceman Labs GitHub account, as it will surely have more interesting stuff in the future. It’s even got some interesting stuff now.


Countries of the World in an NSArray

Continuing our popular series of lists we’ve typed in so you don’t have to (although this one is thanks to sed). Caveats: scraped from a random source on the internet; may not be accurate; provided for entertainment/lorem ipsum purposes only.

NSArray *countries = [NSArray arrayWithObjects:@"Afghanistan", @"Akrotiri", @"Albania", @"Algeria", @"American Samoa", @"Andorra", @"Angola", @"Anguilla", @"Antarctica", @"Antigua and Barbuda", @"Argentina", @"Armenia", @"Aruba", @"Ashmore and Cartier Islands", @"Australia", @"Austria", @"Azerbaijan", @"The Bahamas", @"Bahrain", @"Bangladesh", @"Barbados", @"Bassas da India", @"Belarus", @"Belgium", @"Belize", @"Benin", @"Bermuda", @"Bhutan", @"Bolivia", @"Bosnia and Herzegovina", @"Botswana", @"Bouvet Island", @"Brazil", @"British Indian Ocean Territory", @"British Virgin Islands", @"Brunei", @"Bulgaria", @"Burkina Faso", @"Burma", @"Burundi", @"Cambodia", @"Cameroon", @"Canada", @"Cape Verde", @"Cayman Islands", @"Central African Republic", @"Chad", @"Chile", @"China", @"Christmas Island", @"Clipperton Island", @"Cocos (Keeling) Islands", @"Colombia", @"Comoros", @"Democratic Republic of the Congo", @"Republic of the Congo", @"Cook Islands", @"Coral Sea Islands", @"Costa Rica", @"Cote d'Ivoire", @"Croatia", @"Cuba", @"Cyprus", @"Czech Republic", @"Denmark", @"Dhekelia", @"Djibouti", @"Dominica", @"Dominican Republic", @"Ecuador", @"Egypt", @"El Salvador", @"Equatorial Guinea", @"Eritrea", @"Estonia", @"Ethiopia", @"Europa Island", @"Falkland Islands (Islas Malvinas)", @"Faroe Islands", @"Fiji", @"Finland", @"France", @"French Guiana", @"French Polynesia", @"French Southern and Antarctic Lands", @"Gabon", @"The Gambia", @"Gaza Strip", @"Georgia", @"Germany", @"Ghana", @"Gibraltar", @"Glorioso Islands", @"Greece", @"Greenland", @"Grenada", @"Guadeloupe", @"Guam", @"Guatemala", @"Guernsey", @"Guinea", @"Guinea-Bissau", @"Guyana", @"Haiti", @"Heard Island and McDonald Islands", @"Holy See (Vatican City)", @"Honduras", @"Hong Kong", @"Hungary", @"Iceland", @"India", @"Indonesia", @"Iran", @"Iraq", @"Ireland", @"Isle of Man", @"Israel", @"Italy", @"Jamaica", @"Jan Mayen", @"Japan", @"Jersey", @"Jordan", @"Juan de Nova Island", @"Kazakhstan", @"Kenya", @"Kiribati", @"North Korea", @"South Korea", @"Kuwait", @"Kyrgyzstan", @"Laos", @"Latvia", @"Lebanon", @"Lesotho", @"Liberia", @"Libya", @"Liechtenstein", @"Lithuania", @"Luxembourg", @"Macau", @"Macedonia", @"Madagascar", @"Malawi", @"Malaysia", @"Maldives", @"Mali", @"Malta", @"Marshall Islands", @"Martinique", @"Mauritania", @"Mauritius", @"Mayotte", @"Mexico", @"Federated States of Micronesia", @"Moldova", @"Monaco", @"Mongolia", @"Montserrat", @"Morocco", @"Mozambique", @"Namibia", @"Nauru", @"Navassa Island", @"Nepal", @"Netherlands", @"Netherlands Antilles", @"New Caledonia", @"New Zealand", @"Nicaragua", @"Niger", @"Nigeria", @"Niue", @"Norfolk Island", @"Northern Mariana Islands", @"Norway", @"Oman", @"Pakistan", @"Palau", @"Panama", @"Papua New Guinea", @"Paracel Islands", @"Paraguay", @"Peru", @"Philippines", @"Pitcairn Islands", @"Poland", @"Portugal", @"Puerto Rico", @"Qatar", @"Reunion", @"Romania", @"Russia", @"Rwanda", @"Saint Helena", @"Saint Kitts and Nevis", @"Saint Lucia", @"Saint Pierre and Miquelon", @"Saint Vincent and the Grenadines", @"Samoa", @"San Marino", @"Sao Tome and Principe", @"Saudi Arabia", @"Senegal", @"Serbia", @"Montenegro", @"Seychelles", @"Sierra Leone", @"Singapore", @"Slovakia", @"Slovenia", @"Solomon Islands", @"Somalia", @"South Africa", @"South Georgia and the South Sandwich Islands", @"Spain", @"Spratly Islands", @"Sri Lanka", @"Sudan", @"Suriname", @"Svalbard", @"Swaziland", @"Sweden", @"Switzerland", @"Syria", @"Taiwan", @"Tajikistan", @"Tanzania", @"Thailand", @"Tibet", @"Timor-Leste", @"Togo", 
@"Tokelau", @"Tonga", @"Trinidad and Tobago", @"Tromelin Island", @"Tunisia", @"Turkey", @"Turkmenistan", @"Turks and Caicos Islands", @"Tuvalu", @"Uganda", @"Ukraine", @"United Arab Emirates", @"United Kingdom", @"United States", @"Uruguay", @"Uzbekistan", @"Vanuatu", @"Venezuela", @"Vietnam", @"Virgin Islands", @"Wake Island", @"Wallis and Futuna", @"West Bank", @"Western Sahara", @"Yemen", @"Zambia", @"Zimbabwe", nil];


Goodnight Safari

We’ve mentioned a couple of times over the past few months that there’s a side project keeping us busy. Today we are proud to announce that said project has shipped, and we can tell you all about it (except for stuff we’re contractually bound not to tell you).

Say hello to Goodnight Safari. It’s a digital storybook aimed at children ages two to four. It’s got a lot of interaction that’s perfect for that age group, both directing the story and adding incidental fun flourishes. We think the art is really beautiful and the interactivity is a ton of fun, and all told we’re extremely proud to have been involved in this project. Take a look at the publisher’s page for the book, or check it out on the App Store here.

A lot of our time and energy has gone into this, and we’ve played with plenty of new (to us) technologies along the way. In addition to the blocks stuff we’ve talked about earlier on the blog (which also inspired the development of Sim Deploy), we experimented with other ways to manage timing, with texture atlases, and with audio and video. We ended up creating our own animation framework (which is now the IP of the client). The most fun part of this project was how it let us get out of our comfort zones and learn a lot of new things in a lot of new areas. We’re very proud of what we’ve done, and we hope to have many more cool things to show you in the future.


Sim Deploy

The iPhone Simulator is a big part of any iOS developer’s workflow, but running apps in the simulator that weren’t put there by Xcode can be a major pain. Generally the simulator is considered a developer’s tool, so what’s wrong with installing apps as part of the build process? Based on the combined experiences of Joel and me, it now seems commonplace for one or more non-developers to be part of the development process of an iOS app. Managers, designers, animators, sales – they probably all want to run the latest build on the simulator, without all the difficulties of getting a build environment set up.

To solve this, we’ve put together a little utility – Sim Deploy. It’s nothing fancy; it does what it says on the box: it provides drag-and-drop installation of simulator builds. It also allows for downloading a build from a remote URL. To provide a simple experience, as close to OTA as possible, we added support for a custom URL scheme. This means new builds can be delivered simply by asking a person to click a link. I know, I know — fancy.

If you ever send simulator builds around, give it a shot, and give us some feedback. We’ve already got a couple more little tools in progress that should make this utility even more useful than we think it already is – and we’re more than happy to improve workflow cases we haven’t thought of as well.

Sim Deploy Webpage


Cancel dispatch_after

Joel and I have been working on a project recently that relies pretty heavily on the delayed execution of blocks. It became evident pretty quickly that we needed a way to cancel these blocks. We worked around the problem in kludgy ways initially, but because of the app’s high memory usage, we began running into crashes when delayed blocks retained their objects far longer than we would have liked. I finally had to bite the bullet and write a simple wrapper function that would allow us to cancel blocks that had been delayed using dispatch_after().

We’ve decided to put this code up on GitHub, and you can find it at our new repository.

The function is extremely easy to use, with a usage style similar to the block-based API for NSNotificationCenter. Simply put, the perform_block_after_delay function returns a block handle that allows the delayed execution to be canceled at any time.

@interface SMViewController : UIViewController
{
    SMDelayedBlockHandle _delayedBlockHandle;
}
@end

@implementation SMViewController

- (void)delayBlock
{
    SMDelayedBlockHandle handle = perform_block_after_delay(2.0f, ^{
        // Work

        // The block has fired; release our reference to the handle.
        [_delayedBlockHandle release];
        _delayedBlockHandle = nil;
    });
    _delayedBlockHandle = [handle retain];
}

- (void)cancelBlock
{
    if (nil == _delayedBlockHandle) {
        return;
    }

    // Invoking the handle with YES cancels the delayed block.
    _delayedBlockHandle(YES);
    [_delayedBlockHandle release];
    _delayedBlockHandle = nil;
}

@end

Under The Hood

Under the hood, there’s not much taking place beyond a little block juggling. The block passed to the perform_block_after_delay function is copied and wrapped in a cancel block. The cancel block takes one argument: BOOL cancel. A third block literal is dispatched with the dispatch_after function, and does nothing more than execute the cancel handle, passing an argument of NO.

The magic mostly lies in the cancel handle. When the cancel block is executed, the original delayed block is executed if and only if the passed argument is NO; then, regardless of the argument, the original block and the cancel handle are released and set to nil. This means that whenever the handle is executed, the original block is potentially executed, and then all the blocks are cleaned up. There is no way to stop a block dispatched with dispatch_after from executing (there wouldn’t be a point to this post if there were), so we get around this by gutting the handle after it’s canceled, which ensures that the delayed block never has any real code to perform.
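Putting that together, here’s a sketch of how such a wrapper can be built. This is my reconstruction from the description above, assuming a handle type along the lines of typedef void (^SMDelayedBlockHandle)(BOOL cancel); the repository is the authority on the real code.

typedef void (^SMDelayedBlockHandle)(BOOL cancel);

SMDelayedBlockHandle perform_block_after_delay(NSTimeInterval delay, dispatch_block_t block)
{
    if (nil == block) {
        return nil;
    }

    // Copy the caller's block; the cancel handle owns this copy.
    __block dispatch_block_t delayedBlock = [block copy];
    __block SMDelayedBlockHandle handleCopy = nil;

    SMDelayedBlockHandle handle = ^(BOOL cancel) {
        // Run the original block only when not canceling...
        if (!cancel && delayedBlock) {
            delayedBlock();
        }
        // ...but gut the handle either way, releasing everything it retained.
        [delayedBlock release];
        delayedBlock = nil;
        [handleCopy release];
        handleCopy = nil;
    };

    handleCopy = [handle copy];

    dispatch_time_t when = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delay * NSEC_PER_SEC));
    dispatch_after(when, dispatch_get_main_queue(), ^{
        // If the handle was already canceled, handleCopy is nil and this no-ops.
        SMDelayedBlockHandle strongHandle = [handleCopy retain];
        if (strongHandle) {
            strongHandle(NO);
            [strongHandle release];
        }
    });

    return handleCopy;
}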

The major upshot of all of this is that once a block is executed or canceled, all retains are cleaned up and the memory can potentially be freed; or at least, it’s no longer being retained by a block waiting to execute… eventually.


Xcode Archives – What a Buncha Jerks

It’s been a little quiet around here lately, and if this post’s title hasn’t given it away, it’s because Joel and I have been pretty hard at work getting some products shipped. At the day job, the bossman is generally responsible for shipping the builds off to Apple, so archiving builds isn’t normally my deal. But a while back I ran into a situation where I couldn’t share an IPA from Xcode’s Archives list; rather than the app icon, the archive list showed some stupid notebook with a sketch that says “Archive” on the front.

I was too impatient to figure it out last time – but tonight I ran into this again while trying to ship a build off to the App Store. Fortunately, clicking “validate” gave me a little more context:

“PegJump” does not contain a single-bundle application or contains multiple products. Please select another archive, or adjust your scheme to create a single-bundle application.

The common theme here? Both applications were using static libraries, and the libraries were also being dumped into the archive. The archive contents can be seen by right-clicking the archive and choosing “Show in Finder”, then right-clicking the .xcarchive and choosing “Show Package Contents”. Being tired and stupid, I tried just deleting the extra files inside the Products directory; this was a terrible idea, and definitely didn’t work.

Some quick googling later, a Stack Overflow post gave me the correct answer: in the build settings of your sub-projects / dependencies, set the Skip Install property to YES. This prevents the libraries from being added (installed) to the build archive.
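For the record, the underlying build setting is SKIP_INSTALL, so the xcconfig equivalent would be:

// In the static library targets' configuration only; the app target itself
// must keep SKIP_INSTALL = NO, or nothing will end up in the archive.
SKIP_INSTALL = YES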

At this point, archiving your project should give you a nice, fully functional archive, ready for validation and sharing.

Voilà, bed time.


Mistakes Were Made: Integral Bounds

Here’s another mistake from the day job. (Why do they pay us? Because we do eventually find and correct our errors?)

“Misaligned” CATextLayers

As you may know, the Core Animation instrument has a flag to “Color Misaligned Images”. This is somewhat poorly named; in fact, it will color misaligned layers in magenta, whether or not they contain images. (It will also color layers containing stretched images in yellow, even if they are aligned correctly.) This is useful for two reasons. First, drawing misaligned layers is a performance hit. The GPU has to do blending to antialias the fractional pixels on the misaligned edge; blending tends to be very expensive on iOS devices. Second, because everything is shifted a fractional pixel and antialiased, it will all look a little bit blurry — problematic when you want crisp, clear images.

But there’s a third issue I just discovered. When using CALayer subclasses that draw content (at least CATextLayer, and possibly others), the actual created content can be wrong! Not just the appearance onscreen, but the bitmap backing the layer! This is particularly pernicious because the position does not even have to be misaligned; it’s enough simply to make the height non-integral. Observe:

See a difference? Well, the first line looks a little less crisp. But there’s more to it than that. Take a look at the top of the capital letters, particularly the curved ones.

It’s not just overly antialiased, it’s actually missing pixels! Now, how do I know that the actual content is wrong, not just the display? There are a couple of options. Since the backing store is an opaque type, I can’t just write it out to file and hope to get a usable image (although I can get a clue from the pixel dimensions — they’re rounded down to the next integral pixel). But I can have the layer render itself in an image context I create, and write that out. More amusingly, I can take advantage of Core Animation’s OpenGL underpinnings and use the contentsRect property, noticing that “If pixels outside the unit rectangles are requested, the edge pixels of the contents image will be extended outwards.” And indeed, I get something fun:

This makes it clear that the top row of pixels from the correct image has been cut off. The extended row is what should be the second row of pixels.

What and Why?

The hint I got from examining the contents seems to tell the story. If the size of the backing store is smaller than the size of the bounds, that fractional pixel simply won’t be drawn. The solution for you is to make sure your bounds are integral. The solution for Apple? That’s a tougher question; there are a lot of options with different tradeoffs. I’m not even sure what they’ve chosen is the wrong one, although it’s unexpected behavior and should be documented. Other options include rounding the dimensions up for the backing store, and scaling the content image back down, or continuing to scale the dimensions down but then rendering scaled and at an offset.

Post Script: Code

This can go in a simple view controller-based app. You’ll need to add and include the QuartzCore and CoreGraphics frameworks, and define RENDER_IMAGE and/or CONTENTS_RECT if you want to exercise those paths.

- (void)viewWillAppear:(BOOL)animated
{
#ifdef RENDER_IMAGE
	UIGraphicsBeginImageContext(CGSizeMake(200, 370));
	[[UIColor whiteColor] set];
	UIRectFill(CGRectMake(0, 0, 200, 370));
#endif
	[super viewWillAppear:animated];

	// First layer: non-integral height (50.394), demonstrating the bug.
	CATextLayer *textLayer = [CATextLayer layer];
	textLayer.position = CGPointMake(120, 220);
	textLayer.bounds = CGRectMake(0, 0, 200, 50.394);
	textLayer.fontSize = 14.f;
	textLayer.foregroundColor = [UIColor blackColor].CGColor;
	textLayer.string = @"Sample Sentence With Curves";
#ifdef CONTENTS_RECT
	textLayer.contentsRect = CGRectMake(0.f, -.1f, 1.f, 1.2f);
#endif
	[self.view.layer addSublayer:textLayer];

#ifdef RENDER_IMAGE
	CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 5, 50);
	[textLayer renderInContext:UIGraphicsGetCurrentContext()];
#endif

	// Second layer: identical, except the bounds are rounded to integral
	// values with CGRectIntegral; this one renders correctly.
	textLayer = [CATextLayer layer];
	textLayer.position = CGPointMake(120, 270);
	textLayer.bounds = CGRectIntegral(CGRectMake(0, 0, 200, 50.394));
	textLayer.fontSize = 14.f;
	textLayer.foregroundColor = [UIColor blackColor].CGColor;
	textLayer.string = @"Sample Sentence With Curves";
#ifdef CONTENTS_RECT
	textLayer.contentsRect = CGRectMake(0.f, -.1f, 1.f, 1.2f);
#endif
	[self.view.layer addSublayer:textLayer];

#ifdef RENDER_IMAGE
	CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0, 50);
	[textLayer renderInContext:UIGraphicsGetCurrentContext()];
	UIImage *render = UIGraphicsGetImageFromCurrentImageContext();
	UIGraphicsEndImageContext();
	[UIImagePNGRepresentation(render) writeToFile:@"/path/to/render.png" atomically:NO];
#endif
}

Mistakes Were Made: Description Isn’t Enough

It’s my turn to take a crack at a “Mistakes Were Made” post, and this one happens to be about my first post on this blog. If you didn’t happen to catch that one, it was a post about improving your debugging process by giving your objects (More) Descriptive Logging. As it turns out, the post wasn’t 100% accurate.

In a nutshell, I said that overriding the description method would give you a custom object description that would be used when the object was added to a formatted string with %@ (as in NSLog), or when using the po command in GDB. It turns out that last bit is where the inaccuracy lies.

Recently at our day job, Joel was trying to get some information about a slew of custom objects that were tucked away in an array. He was getting tired of all the casting and property-to-method converting necessary to get the relevant info through GDB; then he remembered my blog post, and implemented a slick description method that would provide him all the bits he cared about in one single po command. Except it didn’t work. In this case, Joel’s objects were subclasses of CALayer, and rather than his informative string, all he saw was the standard CALayer output:

<SomeLayer:0x915a8b0; position = CGPoint (0 0); bounds = CGRect (0 0; 100 100); >

We were perplexed, and Joel questioned whether or not I had even tested any of this before making a blog post about it (what a jerk). Some grumbling and searching later, we came up with Technical Note TN2124 (otherwise known as “Mac OS X Debugging Magic”; it’s a worthwhile bookmark, with gobs of good information for debugging Objective-C). The Cocoa and Cocoa Touch section starts out discussing the same description methods we talked about before, but also contains an interesting note (emphasis added by me):

Note: print-object actually calls the debugDescription method of the specified object. NSObject implements this method by calling through to the description method. Thus, by default, an object’s debug description is the same as its description. However, you can override debugDescription if you want to decouple these; many Cocoa objects do this.

There it is, folks: po doesn’t actually call description, it calls debugDescription. My original tests worked because NSObject doesn’t do anything special with debugDescription and simply calls through to description. If you want to implement a custom description method and keep some superclass from hijacking it, I’d suggest always doing the following:

- (NSString *)description
{
	return @"my awesome description";
}

- (NSString *)debugDescription
{
	return [self description];
}

All in the Timing: Keeping Track of Time Passed on iOS

Imagine you’re writing a game called Small Skyscraper. It’s one of a certain type of freemium game: it’s not particularly difficult, but achievements take a lot of time. You make money by selling in-app purchases to reduce the amount of time the user has to wait. Setting aside the value of this kind of game for the moment, let’s think about the game developer’s Sisyphean goal: deterring cheaters.

Cheating

(We all know, or should, that it’s impossible to completely prevent cheating. The user has all of your resources in hand. On a jailbroken device he can modify your code directly; he can modify your plists and resource files; he can spoof your server calls. All the developer can do is make cheating enough work that it’s not worth the average cheater’s effort.)

The first goal for the cheater will be the first impediment to success: the delay required to make progress in-game. He might think about trying to change the time values specified in your XML game-object descriptions, or in your code; but well before that he’ll try the simplest possible cheat: changing the system clock.

It is frankly surprising how many time-based games are susceptible to this kind of cheating. The real game on which our Diminutive Domicile example is loosely based is one such game. It’s such an obvious cheating vector — why are all these games falling down in the same way? Well, it turns out this is a difficult problem on iOS. I don’t have a full answer, but I do have some information that might be useful.

Methodologies

There are a number of ways to get a number of representations of “time” on iOS. They boil down to two types: absolute and relative time. Absolute time is what you get back from [NSDate date], CFAbsoluteTimeGetCurrent(), or gettimeofday(). At the lowest level it’s expressed as seconds since the start of time: midnight on January 1, 1970 (or 2001 for the CF function). Thus each point in time is uniquely expressible as an NSDate. Relative time, on the other hand, does not have a fixed reference date. This is the time you get back from CACurrentMediaTime() or -[NSProcessInfo systemUptime]. Because the reference time can change, CACurrentMediaTime() may return the same value at different times.
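For reference, here’s the roll call (a fragment; QuartzCore supplies CACurrentMediaTime()):

#include <sys/time.h>                    // gettimeofday()
#import <QuartzCore/QuartzCore.h>        // CACurrentMediaTime()

// Absolute: pinned to a calendar epoch, and affected by the system clock.
NSTimeInterval sinceEpoch = [[NSDate date] timeIntervalSince1970]; // seconds since 1970
CFAbsoluteTime sinceRef = CFAbsoluteTimeGetCurrent();              // seconds since 2001
struct timeval tv;
gettimeofday(&tv, NULL);                                           // 1970 epoch, microsecond precision

// Relative: unaffected by clock changes, but reset by a reboot.
CFTimeInterval mediaTime = CACurrentMediaTime();
NSTimeInterval uptime = [[NSProcessInfo processInfo] systemUptime];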

There is a clear advantage to absolute times — to wit, they are absolute. When your app suspends and resumes some time later, there’s no question about how long you’ve been out. But that’s not really true, or cheating with the system clock wouldn’t work. And indeed, this makes sense: while the user cannot change the reference date, he can change the system’s concept of when right now is. It amounts to the same thing. Thus, the disadvantage: NSDates and so on should and do respect the user’s idea of what time it is — for time zone support if nothing else — rather than the developer’s idea.

Relative times, on the other hand, do not change in response to the system clock. If CACurrentMediaTime() dropped an hour when the device moved between time zones, users watching movies would not be thrilled. This is a useful property.

It turns out that all the relative times rely on the low-level mach_absolute_time(), which is relative despite the name. It returns the time since the system booted, expressed in some machine-dependent time base that we don’t need to worry about at the moment. This is great for us — that’s certainly not something that should change in response to time zones. But it’s also not so great for us! The time since the system booted will reset if the system reboots. That means we can’t completely rely on relative times.
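For the curious, that machine-dependent time base can be queried and converted like so:

#include <mach/mach_time.h>

mach_timebase_info_data_t timebase;
mach_timebase_info(&timebase); // numerator/denominator for ticks -> nanoseconds

uint64_t ticks = mach_absolute_time();
uint64_t nanos = ticks * timebase.numer / timebase.denom;
NSTimeInterval secondsSinceBoot = nanos / 1e9;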

Solutions?

As I said, I don’t have a good answer for this question. Both relative and absolute times on the system have inherent flaws. One idea I haven’t mentioned is to get off the system entirely: have Brief Building call back to a server process to get the absolute, unmodified time. This only works in situations where there is internet access, of course. But it suggests a hybrid solution.

I suggest implementing the server callback, and saving the “known good” server time alongside the system’s relative and absolute times. When the server is available, use that time. When it’s not, use the difference in relative times since the last known server time to extrapolate the present server time.
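Here’s a sketch of that record-and-extrapolate scheme; the storage keys and function names are mine, purely illustrative:

#import <QuartzCore/QuartzCore.h> // CACurrentMediaTime()

static NSString * const kServerTimeKey = @"SMLastKnownServerTime";
static NSString * const kMediaTimeKey  = @"SMMediaTimeAtServerSync";

// Call whenever a fresh, trusted time arrives from the server.
void SMRecordServerTime(NSTimeInterval serverTime)
{
	NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
	[defaults setDouble:serverTime forKey:kServerTimeKey];
	[defaults setDouble:CACurrentMediaTime() forKey:kMediaTimeKey];
	[defaults synchronize];
}

// Best guess at the current server time while offline. Returns -1 if the
// relative clock went backwards (i.e. the device rebooted since the sync).
NSTimeInterval SMExtrapolatedServerTime(void)
{
	NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
	NSTimeInterval lastServerTime = [defaults doubleForKey:kServerTimeKey];
	NSTimeInterval elapsed = CACurrentMediaTime() - [defaults doubleForKey:kMediaTimeKey];
	if (elapsed < 0) {
		return -1; // rebooted; fall back to one of the policies discussed below
	}
	return lastServerTime + elapsed;
}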

This covers Minuscule Mall in every situation except the user rebooting on a plane. As far as I can tell, there is no allowed way to get a more concrete relative time on an iOS device. In this situation, then, the app has a number of reasonable choices. The most draconian is to demand server access, and prevent the user from playing until the plane lands. This might not be so great if the plane is going to an international location (or even if it’s just a long trip). The most permissive option is to assume, in the absence of more reliable information, that the system’s absolute time is correct. Since this is the approach always taken by the Small Skyscraper-type games of today, it’s probably considered acceptable. A reasonable in-between might be to simply ignore the time elapsed between the last recorded time before the reboot, and the first use after. If the game hadn’t been run for a while before the reboot, though, this is still potentially a big loss to the user.

Easier on the Desktop

Things would be different if we could install daemons, or keep a process running after the game has been started, or guarantee internet access. But we can’t.
