SMSquashView

Why Now? Why Me?

Lately we’ve been hearing a lot about adding personality and interest to apps with subtle animation, rather than with complex textures. As an animation guy from old times*, I couldn’t be happier about this new focus in iOS. There have always been lots of rich animation capabilities built into the frameworks, and based on nothing but the WWDC keynote, there’s bound to be even more simple, powerful animation API in iOS 7.

Of course, there’s still plenty that’s not easy. In the 2.5d world of iOS 7, things like SMShadowedLayer are more necessary than ever. Dimensionality is one way to add, if I may, depth to your animations. Another way is to infuse character and realism into the animated object.

Take a look at the principles of animation. Many of these – slow in and slow out, anticipation, timing – are provided by existing iOS animation APIs. More – arcs, follow through – are likely to be provided by iOS 7 APIs. But the most important item on the list, squash and stretch, is not addressed. This is a shame, as it’s a remarkably versatile technique. Used with a delicate touch, it can help the eye follow animation, making it easier to tell where an object is heading and at what rate. Used more liberally, it can add an appealing playfulness and character to the same animation.

I Fixed It For You.

SMSquashView and its backing class, SMSquashLayer, squash and stretch themselves automatically in response to motion. They distort themselves longer in the direction of motion, and shorter in the perpendicular directions. The distortion is proportional to the velocity of the object, and scaled such that its volume remains constant. In short: it does exactly what you’d want it to do, with no configuration required.

How Does It Work?

The math is pretty simple, if you like transformation matrices. Every time the layer’s position changes, we note the time and position delta from last time. Using that information we can calculate the velocity for the current frame of animation. We align a transformation matrix to the direction of travel using the same idea as gluLookAt, then scale according to the magnitude of the velocity.

Animation

Much the same as SMShadowedLayer, we won’t get enough information during implicit animations. Accordingly, we use exactly the same technique: set up a custom internal animatable property, tell the layer it needs to redraw when that property changes, but sneakily do our math rather than drawing when display is called.

Demo

Where Can I Get It?

Here. There’s lots of really interesting open source code on our GitHub account, so you might want to check out the rest while you’re at it. This code in particular is short and quite well-commented (I think!) and might make a fun read.

*I argued for exactly this use of animation in my 2008 WWDC session. Prescient! I am willing to speak about the future of graphics and animation at your conference too, just ask.

Posted in Code, Software | 5 Comments

WWDC Blast

Early this morning we and our friend Alex C. Schaefer launched a new project of special interest to iOS and Mac developers. WWDC Blast is a free service to inform you via SMS the very moment that WWDC tickets go on sale. With tickets having sold out last year in under two hours, there’s no better time to automate this. We won’t spam you, we won’t sell your information, we just like the idea of keeping people as informed as possible. Check it out.

Technically, this project was a fun one. It’s the first web app we’ve launched, and while the guts aren’t incredibly complicated, we’re integrating lots of cool services and frameworks to make some magic happen. We’re using Heroku, Twilio, S3, Twitter Bootstrap, MixPanel, and some other cool stuff that isn’t even user-visible yet. And the list goes on.

Most fun of all is that we put together the whole thing, soup to nuts, concept to launch, in under four days. That’s right: the initial idea was broached Monday morning. We launched early, early Friday.

Not bad.

Posted in Software | Leave a comment

Spatter and Spark

Another year-and-a-bit, another successful client project! As of a few weeks ago, Polk Street Press shipped a brand-new storybook, Spatter and Spark. It’s a genuinely endearing tale of two best friends solving problems together in a way that five-year-olds will find endlessly entertaining. We do too. The story is engaging, the character design and illustration are phenomenal, and there’s just a ton of incidental interactivity to keep the experience fresh read after read. There are also IAP games and activities designed to teach a preschool skill set – they happen to be pretty fun, too. In a very cool innovation, the activities, and some in-story data, are optionally parsed on PSP’s backend to give parents an auto-updating report on their child’s educational progress.

Most importantly from the perspective of this blog, Spaceman Labs is responsible for all the code in the iOS app. We built on all the work we’d done for Goodnight Safari, and were able to make use of a lot of clever stuff we’d done, like cancelable blocks, audio-synced text highlighting, and animated character encapsulation. But we also developed the animation framework significantly. As it’s now Polk Street Press’s IP, we can’t tell you too much about it, but we can say it delivers greatly improved fidelity, performance, and memory utilization (yes, more than just fixing leaks), all of which enables much more detailed, expressive, and long-running animations in the same app size and on the same hardware. We also did a lot of work on moving animation specifications out of code and into artist-maintainable XML, which allowed for rapid iteration without our involvement.

We’re really proud of this project. We love the work we’ve done, and we love the finished product. So far reviews on iTunes seem to agree. Check it out (on the App Store and at Polk Street Press’s website) and let us know what you think.

Posted in Software | Leave a comment

SMPageControl, Meet UIAccessibility

A while back we introduced SMPageControl, our drop-in replacement for UIPageControl with a slew of extra bells and whistles. It was surprisingly well received, and is by far the most popular bit of open source code we’ve released to date. (Thanks for starring the repo on GitHub!)

Unfortunately, it seems that deeming it a “drop-in replacement” wasn’t 100% accurate; while SMPageControl provided a pixel-perfect representation of UIPageControl, it lacked UIAccessibility support altogether. I hate to admit it, but until recently I haven’t paid much attention to accessibility in the iOS apps I’ve worked on. That has changed – for a variety of reasons that I plan to cover in a future blog post – and SMPageControl now really is a drop-in replacement, providing the exact same UIAccessibility behaviors as UIPageControl.

But Wait, There’s More!

In keeping with the theme of being UIPageControl’s Fancy One-Upping Cousin, SMPageControl extends UIPageControl’s accessibility functionality by letting you give each page its own name. The default behavior is to set the accessibility value to "page [current page + 1] of [number of pages]", e.g. "page 1 of 10". If a page has been named, the provided name is prepended to the default accessibility value, e.g. "Search - page 1 of 10". This is extremely useful when using per-page indicator images, where one or more pages are likely to have a specific purpose the user can identify.
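To pin down that format, here’s a tiny sketch in plain C of how such a value could be assembled (SMPageControl itself does this with NSString in Objective-C; the function here is hypothetical):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: builds the accessibility value string described
 * above. Pages are zero-indexed in code but reported one-indexed. */
void accessibilityValue(char *out, size_t outSize,
                        const char *name, int currentPage, int numberOfPages)
{
    if (name != NULL && strlen(name) > 0)
        snprintf(out, outSize, "%s - page %d of %d",
                 name, currentPage + 1, numberOfPages);
    else
        snprintf(out, outSize, "page %d of %d",
                 currentPage + 1, numberOfPages);
}
```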

Example

SMPageControl *pageControl = [[SMPageControl alloc] init];
[pageControl setImage:[UIImage imageNamed:@"searchDot"] forPage:0];
[pageControl setCurrentImage:[UIImage imageNamed:@"currentSearchDot"] forPage:0];
[pageControl setImage:[UIImage imageNamed:@"appleDot"] forPage:1];
[pageControl setCurrentImage:[UIImage imageNamed:@"currentAppleDot"] forPage:1];
[pageControl setName:@"Search" forPage:0];
[pageControl setName:@"Apple" forPage:1];

Version 1.0 and CocoaPods

With the addition of accessibility support, I feel comfortable calling SMPageControl a 1.0; bugs may still surface, and features may be added, but the goal of creating a fully featured page control has been achieved. Having reached this point, I’ve created a 1.0 version tag in the git repo and added it to CocoaPods.

As always, feedback is welcome. We accept pull requests for features and bug fixes at the GitHub repo, so if you’d like to contribute, by all means, please do.

Posted in Uncategorized | Leave a comment

Integrating Asana and Git

Recently we at Spaceman Labs have been testing out different tools and workflows to manage our various projects. After trying a few different packages, we’ve more or less settled on Asana as our issue tracking solution. It’s got a lot of advantages: it’s free, we can have as many projects as we want, and it’s low-friction enough that we’re likely to use it regularly.

That last one is actually the biggest feature – just as the best camera is the one you have with you, the best issue tracker is the one you’ll actually use. All the bells and whistles in the world don’t matter if you’re scared away from the core functionality. But there’s one way Asana could be even easier to use: it doesn’t have a standard way to integrate with our Git workflow. We’d like to be able to commit some code with a comment like “fixes #123” and have that fix reflected in Asana.

Fortunately, all of the groundwork has been done. Asana has a really awesome, and beautifully well-documented, API. And other developers, most notably for our purposes Bruno Costa, have taken that API and put together useful scripts.

The only thing missing from Bruno’s post-commit hook (and go back and read his blog post if you haven’t, it’s great) is the ability to run it non-interactively. Our Git workflow is built around Tower; it’s an awesome tool, but it doesn’t know how to simulate reading from /dev/tty in a hook.

So we needed to make the script non-interactive. This means it loses some potential flexibility, but on the other hand, it’s faster and has a somewhat higher “just works” quotient. Once you set up your git config variables, anyway. While we were at it, we also made it handle multiple ticket numbers per checkin, and differentiate between referencing and closing a ticket. That means you can write a commit message like

"Tweaked the widget; fixed #1, #2, and #3; references #4 and #5; oh yeah, and closes #6"

and have the right thing happen. Neat, right?

You can find the script at our GitHub page; please modify and open pull requests if you’d like to see it do more. (For instance, re-opening a closed ticket would be trivial.) V1 of the script is duplicated below. To run it, save it to your local repo’s .git/hooks/post-commit, and chmod 755 that mother. Run the git config line at the top of the script with the appropriate values, and you’re good to go.

#!/bin/bash

# modified from http://brunohq.com/journal/speed-project-git-hook-for-asana/

# -----------------------
# necessary configuration:
# git config --global user.asana-key "MY_ASANA_API_KEY" (http://app.asana.com/-/account_api)
# -----------------------

apikey=$(git config user.asana-key)
if [ -z "$apikey" ] ; then exit 0; fi

# hold the closed ticket numbers
declare -a closed
# hold the ticket numbers that are not closed, just touched
declare -a referenced
# track whether we're currently closing tickets or just referencing them
closes='NO'

# regex pattern to recognize a story number
taskid_pattern='#([0-9]*)'
# regex pattern to recognize a "closing this ticket" word
closes_pattern='([Ff]ix|[Cc]lose|[Cc]losing)'
# regex pattern to recognize an "and" word (eg "fixes #1, #2, and #3")
and_pattern='([Aa]nd|&)'

# get the checkin comment for parsing
comment=$(git log --pretty=oneline -n1)

# break the commit comment down into words
IFS=' ' read -a words <<< "$comment"

for element in "${words[@]}"
do
    # if we have a task id, save it to the appropriate array
    if [[ $element =~ $taskid_pattern ]]; then
	if [ "${closes}" == "YES" ]; then
	    closed=("${closed[@]}" "${BASH_REMATCH[1]}")
	fi
	referenced=("${referenced[@]}" "${BASH_REMATCH[1]}")
    # or else if we have a "closes" word, set the tracking bool accordingly
    elif [[ $element =~ $closes_pattern ]]; then
	closes='YES'
    # and if we don't, set us back to referencing
    # (if we're an "and", don't change any state)
    elif [[ ! $element =~ $and_pattern ]]; then
	closes='NO'
    fi
done

# touch the stories we've referenced
for element in "${referenced[@]}"
do
    curl -u ${apikey}: https://app.asana.com/api/1.0/tasks/${element}/stories \
         -d "text=just committed ${comment/ / with message:%0A}" > /dev/null 2>&1
done

# close the tasks we've fixed
for element in "${closed[@]}"
do
    curl --request PUT -u ${apikey}: https://app.asana.com/api/1.0/tasks/${element} \
         -d "completed=true" > /dev/null 2>&1
done

Posted in Code | 10 Comments

Mistakes Were Made: Audio and ARC

There’s a mistake I have, to my own great embarrassment, made twice, so I think it’s worth writing up for posterity. This one’s brief: you can laugh at me and move on.

Under ARC, AVAudioPlayers don’t retain themselves while playing.

A brief recap: ARC takes memory management off the minds of developers by automating the insertion of retains and releases according to a simple set of rules. This is much to the delight of new developers, and equally to the chagrin of grizzled oldsters. (I admit to having felt more than a bit of the latter at the outset.)

AVAudioPlayer is the simplest API to get an audio file from your Resources directory to your user’s ears.

AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:audioPath error:nil];
[player play];

And you’re set! Under manual memory management, anyway.

What happens with ARC?  The newly created player is retained. As soon as the method ends, however, there is no longer anything referring to it. Even though it’s still doing stuff — the audio file you’re playing will presumably last longer than a millisecond — it’s released, and happily cuts off the audio early.

Having reasoned through it, the solution is of course trivial. Make the AVAudioPlayer an instance variable of whatever class is creating it, and it won’t be released until the controlling object is. Audio bliss.

Of course, I feel that the mistake is perhaps not entirely mine. AVAudioPlayer really should retain itself while it’s playing audio, and release itself when it finishes. To that end, I’ve filed rdar://12635205; feel free to dupe it if you agree. And for general interest, here’s the sample code I included to demonstrate the problem:

#define IVAR_PLAYER 0

#import "SMViewController.h"
#import <AVFoundation/AVFoundation.h>

@interface SMViewController ()
{
#if IVAR_PLAYER
	AVAudioPlayer *player;
#endif
}

@end

@implementation SMViewController

- (IBAction)playSound:(id)sender {
#if !IVAR_PLAYER
	AVAudioPlayer *player;
#endif
	NSURL *audioPath = [NSURL fileURLWithPath:[[NSBundle mainBundle]
											   pathForResource:@"audio" ofType:@"mp3"]];
	player = [[AVAudioPlayer alloc] initWithContentsOfURL:audioPath
													error:nil];
	[player play];
	
	// delay a bit to keep the run loop alive long enough to let the audio start playing
	// don't do this in production code, obviously! It is a bad way to do basically anything.
	for (int i = 0; i < 1000; i++)
		NSLog(@"%d", i);
}

@end

Posted in Explanation | Leave a comment

SMPageControl: UIPageControl’s Fancy One-Upping Cousin

If you’ve ever spent any time at Dribbble, you know how much designers love to customize UIPageControl, normally in the form of custom spacing or fancy inset-looking page dots. As a developer, you’re probably also keenly aware of the lack of customization that Apple’s built-in class provides. Hell, it wasn’t even until the introduction of iOS 6 that you could change the tint color, and even then, that’s where the flexibility ends.

Given my blogging absence in recent months, and the fact that I have a somewhat immediate need for a highly customizable page control, it seemed like a fun opportunity to spend an afternoon making something I could share. Admittedly, this isn’t a terribly hard problem to solve – UIPageControl is one of the least sophisticated bits of UI in iOS. Despite this being an open-source softball, I was excited by the idea of writing something simple, yet robust enough that I wouldn’t have to tackle this issue ever again.

SMPageControl provides all the functionality of UIPageControl, but also adds the ability to change indicator diameters, margins, and alignment. It also supports using images as the inactive and active page indicators. For greater control, it is also possible to provide index-specific indicator images.

If you’re not interested in the fanfare and hot air about to follow, go ahead and just grab the code from GitHub.

Update: Two new features have been added: a masking mode that allows indicators to use the tint coloring with an image as a clipping mask, and a convenience method for updating the current page by passing in a scroll view.

Find and Replace

Using this class is as simple as they come: you can quite literally just change your class names from UIPageControl to SMPageControl, and SMPageControl will seamlessly provide all of the bland, out-of-the-box functionality you are used to. It also provides the same two (and more) UIAppearance properties, pageIndicatorTintColor and currentPageIndicatorTintColor. I’ve made an effort to precisely mirror the default functionality, so that UIPageControl can be used as normal, while being able to make a quick shift to SMPageControl as soon as extra flexibility is required.

Example:

// Positioning code suppressed
UIPageControl *pageControl = [[UIPageControl alloc] init];
pageControl.numberOfPages = 10;
[pageControl sizeToFit];
[self.view addSubview:pageControl];

SMPageControl *pageControl2 = [[SMPageControl alloc] init];
pageControl2.numberOfPages = 10;
[pageControl2 sizeToFit];
[self.view addSubview:pageControl2];

A Little Bit Retro, a Little Bit Rock and Roll

SMPageControl allows for the tried-and-true page control appearance, with some slightly adjusted layout. Using the fully UIAppearance-compatible properties indicatorDiameter and indicatorMargin, it’s easy to make SMPageControl appear a bit more modern.

// All instances of SMPageControl across the app will inherit this style
SMPageControl *appearance = [SMPageControl appearance];
appearance.indicatorDiameter = 10.0f;
appearance.indicatorMargin = 20.0f;

Alright, so bigger, more whitespace; that’s hardly Dribbble-worthy, right? Well fortunately, SMPageControl has two more powerful properties – also UIAppearance compatible – pageIndicatorImage and currentPageIndicatorImage. These are probably my two favorite properties, as they allow for a wholesale replacement of the active and inactive “dots” with two quick and easy lines of code.

// All instances of SMPageControl across the app will inherit this style
SMPageControl *appearance = [SMPageControl appearance];
appearance.pageIndicatorImage = [UIImage imageNamed:@"pageDot"];
appearance.currentPageIndicatorImage = [UIImage imageNamed:@"currentPageDot"];

Stand Out In A Crowd

The final bit of customization provided by SMPageControl is its support for per-indicator images, a feature that really is the icing on the cake. The easiest example to point to is the all-too-familiar Springboard. Ever since Spotlight was added in iPhone OS 3.0, the Springboard page control has included a little search icon along with those classic circles. SMPageControl makes this sort of customization trivial. Because of the case-by-case nature of how I expect this feature to be used, per-indicator images are not compatible with UIAppearance – but adding them is just as quick and easy.

SMPageControl *pageControl = [[SMPageControl alloc] init];
pageControl.numberOfPages = 10;
[pageControl setImage:[UIImage imageNamed:@"searchDot"] forPage:0];
[pageControl setCurrentImage:[UIImage imageNamed:@"currentSearchDot"] forPage:0];
[pageControl setImage:[UIImage imageNamed:@"appleDot"] forPage:3];
[pageControl setCurrentImage:[UIImage imageNamed:@"currentAppleDot"] forPage:3];
[pageControl sizeToFit];

Feedback and Suggestions Welcome

I suspect that for a problem with such a narrow set of requirements, there isn’t much left that this little project doesn’t cover. That seems to be further supported by the similar feature set of this project and the others I’ve found. (P.S. It’s generally best to do the diligent Googling before writing your own new code to solve solved problems – whoops). However, I already mentioned that I wanted this to be robust enough that I wouldn’t have to chase a similar solution ever again. If there’s something I’ve missed, fill out the comment box just below.

If you’ve read this far, I’m impressed – and it’s probably time you just went and grabbed the code for a bit of tinkering.

Posted in Code | 13 Comments

Premature Completion: An Embarrassing Problem

Working on a project recently, Jerry and I came across an odd bug. We have a two-level UI that allows the user to navigate between several different scroll views. For the sake of keeping things pretty, we want to reset a given scroll view before going back to the navigation interface paradigm. No problem, right? Something like this should do the trick:

[UIView animateWithDuration:2.f animations:^{
	[scrollView scrollRectToVisible:CGRectMake(0, 0, 10, 10) animated:YES];
} completion:^(BOOL finished) {
	[self.delegate resumeNavigation];
}];

Turns out, nope! Somehow this runs the completion block in parallel with the animation block. The solution? Simply changing the YES to a NO in the scrollRectToVisible call.

But Why?

It seems that Core Animation (and therefore UIView animation) does some unexpected, not entirely welcome magic behind the scenes. If there is nothing to do in the animation block, the framework shrugs and runs the completion block immediately, rather than waiting the specified duration.

My guess, and this is just a guess, is that internally Core Animation depends on the animationDidStop:finished: delegate callback from a CAAnimationGroup or similar it sets up to handle everything that happens inside an animation block. When that callback fires, CA knows it’s time to kick off the completion block. If there are no animations created, there is nothing to send a callback. Rather than set a timer to wait for nothing to happen, the completion block runs right away, because why not?

This is seductive reasoning, and has the advantage of being easy to code. Unfortunately, it’s not always what the user of the API expects. (I would venture to say never!) In our case, it means the naive code fails because (again, guessing) asking the scroll view to animate its scrolling sets up another animation context. Asking it not to animate, by contrast, allows Core Animation to create an implicit animation and work its magic.

What’s the Solution?

Unfortunately, for us API clients there really isn’t one. This is more of an informational post than anything: once you’re aware this can be a problem, you may save yourself hours you otherwise would’ve spent in fruitless debugging. Believe me, I’ve been there on this issue.

The best thing you can do is play around and see where exactly unexpected things happen.  As long as you keep the “are there any animations created?” question in mind, whatever behavior you see will be easy to explain. But reasoning a priori and figuring out what to expect is impossible without inside knowledge of the implementation of UIKit. Which brings us to…

Toys

To see just what was going on, I put together a tiny sample project. You can find it here. It has a UIProgressView and a UITextView, both set up to perform some transition either animated or not. The important part of the code looks like this:

- (void)party:(BOOL)animated
{
	[UIView animateWithDuration:2.f animations:^{
		[self.partyProgress setProgress:1 animated:animated];
	} completion:^(BOOL finished) {
		self.partyLabel.text = @"PARTY TIME!";
	}];
}

- (void)study:(BOOL)animated
{
	[UIView animateWithDuration:2.f animations:^{
		[self.studyView scrollRectToVisible:CGRectMake(0, 0, 10, 10) animated:animated];
	} completion:^(BOOL finished) {
		self.studyLabel.text = @"STUDY TIME!";
	}];
}

Try scrolling the text view all the way to the bottom, then compare the difference between animating and not animating inside of an animation block. Then do the same with the progress view. Why would they be different?

There are plenty of other classes in UIKit with methods that take an animated: argument. I encourage you to plug them into this test app to see just what happens. Protect yourself from being surprised next time. Take any precaution you can against the embarrassment and social shunning that inevitably accompany premature completion.

Posted in Explanation | 10 Comments

Things I Learned at Siggraph II (2012)

In the interests of becoming a more well-rounded individual in the extremely narrow field of computer graphics, I spent last week in sunny LA, attending Siggraph. What follow are my notes and a little bit of synthesis from various sessions that I thought might be of some interest to our audience. I jotted these notes down during the presentations, and it’s very likely I either misunderstood or mis-transcribed things. All errors are my own.

Computer Aesthetics

Turns out, there’s no reliable way for a computer to understand human aesthetics. We like what we like as a result of a huge mess of ad hoc built up junk in our evolutionary history; this is difficult to model. Attempts have been made in the past: the most famous of these, the Golden Ratio, is actually not as historically important as we’ve been told. It doesn’t really appear in all the great works of art and culture people like to cite, although modern artists, aware of its reputation, have consciously used it in a sort of self-fulfilling prophecy.

Interestingly, computers can, through the use of evolutionary algorithms, develop their own sense of aesthetics. They’re just not going to like what we like. We can’t use an evolutionary algorithm to bridge this gap because the fitness test can’t be automated – deciding which of two choices is more aesthetically pleasing is of course the problem we’re trying to solve. Thus evolutionary algorithms are bottlenecked by humans making that choice (and are subject to the taste of those particular humans).

Mobile GPU

Big news: at Siggraph, OpenGL ES 3 was announced. It’ll be a while before we have mobile devices running it, but now we can start planning ahead.

The important thing on mobile, as ever, is energy efficiency. This year I learned something new: mobile GPUs are never going to have more power. Power means heat, and if they get much more than the ~1 watt they currently consume, the chips will melt. Well, OK, not the chips, but the solder bumps holding the chips in place. So, increases in mobile performance have to come from more energy-efficient hardware and software; we can’t just throw power at the problem like we do on the desktop.

Of course, we as software engineers don’t have much say over the chips. But we can write our code in a way that consumes less power. As I said last time, this mostly means writing more efficient code, so the hardware has a chance to power down. It also means common-sense things like throttling down the frame rate on static or slow content like menus.

There are some things I didn’t talk about last time, because I didn’t know them. Reducing bandwidth is of course important; communication between CPU and GPU takes time and power. One simple way to do this is to mirror symmetrical textures with GL_MIRRORED_REPEAT. Hey, half the bandwidth!

You can also reduce the amount of drawing the GPU has to do. Drawing models front to back, rather than vice versa, means the GPU can reject covered pixels and potentially save you a lot of drawing time. Yes, that means the skybox is drawn last, not first.

Changing GPU state takes a ton of time. That means bouncing between buffers for multi-pass effects is expensive. Rather than drawing to the framebuffer, then to an FBO, then compositing that back into the framebuffer, do the FBO first, and change states just once. Furthermore, you should try to draw all your other FBOs and assorted render targets before rendering to the default framebuffer.

Lastly on stuff I should already have known: FBO contents are saved to main memory. Clearing them before drawing into them on a new frame saves you from having to restore that memory back to the GPU. That’s a huge win for one line of code.

New Stuff

The new ES spec gives us a couple additional options. To reduce bandwidth, it provides two standard texture compression formats that look quite nice compared to the current non-standards (PVRTC on iOS) and tend to compress smaller.

Also, glInvalidateFramebuffer does the same job as glClear for informing the GPU that you don’t need the contents of a framebuffer, without having to keep track of which bits need to be cleared. There’s the added bonus of having a name that indicates what it does.

That’s all, folks!

Posted in Philosophy | Leave a comment

SMShadowedLayer

Impetus

For a recent project, we needed to “simulate” or “fake” the look of pieces of paper in a physical environment. We didn’t need fancy physical modeling or curves and curls and folds, just perspective and shadowing during an animation. With OpenGL, perspective and shadowing are dead simple, but animation is hard. With Core Animation, perspective and animation are dead simple, but shadowing is hard. Being loath to work with a lower-level framework when a high-level one will do, we put together a little CALayer subclass that knows how to render sufficiently accurate shadows on its surface. Note that it doesn’t do inter-object shadows, just shading based on its angle in space. Since the math is essentially the same, we decided to throw in specular highlights free of charge.

How Does It Work?

The math more or less uses the Lambert illumination model. The key concept is that the illumination of a surface is related to the angle between the incoming light ray and the normal of the surface at that point. (Since layers are by definition planar, this is the normal of the layer.) All we do is calculate the dot product of the incoming light with the normal of the layer, and scale the shadow’s strength by that value. (For simplicity, we model the light as a directional source, shooting into the screen.) This means that the closer the layer is to facing you full-on, the less visible the shadow is. The specular highlight works the same way, except it has a sharper falloff: exponential rather than linear.

I say “more or less” because in the true illumination model, this math would change the color of the surface. We can’t do that for a number of reasons — primarily because you can set an image or other content in a CALayer, and we don’t want to wipe it out — so instead we put a shadow layer and a highlight layer on top of the base layer, and change their opacity.

Animation

To get the shadow and highlight to do the right thing during an implicit animation, we had to find a way to recalculate their opacities during each frame. If we had followed the naive approach, we would have missed changes that occur when, for instance, the layer rotates from facing left to facing right by going through facing center. In that case the shadow should go from dark to light to dark, but an implicit animation would simply go from dark to dark. Instead, we define our own internal animatable property, tell the layer it needs to re-display when that property changes, and then recalculate the shadow math rather than drawing anything in the display method.

We also put in a method to allow users to bypass this extra math, if they know their transform animation won’t cause this kind of nonlinear change in the shading.

Enough Talk, Let’s See It

(Why is there a slight stutter in that video? Because I did not take the time to do this.)

Where Can I Get It?

Here. You may want to follow the Spaceman Labs GitHub account, as it will surely have more interesting stuff in the future. It’s even got some interesting stuff now.

Posted in Code, Software | 1 Comment