Frame rate on the iPhone just reared its ugly head. I wrote a function that lets the user change the opacity of a given layer of a TMX tile map, but that requires changing the opacity of each of the tiles (sprites) in the layer.
But once I did that, the frame rate dropped to something like 3 fps. So what I'm finding is that I need to change only the tiles that are on the screen at the moment. That takes just 0.001 seconds, since I am updating 70 tiles instead of 2,500. The problem is that now I need to update the opacity of tiles whenever I move new ones onto the screen. It'll be a hassle, but it should be much faster.
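Here is a minimal sketch of that idea, assuming the visible region has already been worked out as a rectangle in tile coordinates (the method name, visibleRect, and the opacity parameter below are hypothetical - the working-out of which tiles are on screen is left out):
-(void) setOpacity:(GLubyte)opacity forVisibleTiles:(CCTMXLayer*)tileLayer inRect:(CGRect)visibleRect
{
    // Walk only the tile coordinates that are currently on screen.
    for (int x = visibleRect.origin.x; x < visibleRect.origin.x + visibleRect.size.width; x++) {
        for (int y = visibleRect.origin.y; y < visibleRect.origin.y + visibleRect.size.height; y++) {
            CCSprite* tile = [tileLayer tileAt:ccp(x, y)]; // nil if that cell of the layer is empty
            if (tile)
                tile.opacity = opacity;
        }
    }
}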
Anybody have a better idea?
- Posted using BlogPress from my iPad
Saturday, February 5, 2011
Wednesday, February 2, 2011
Let's get started...
I've been playing around with some bits of cocos2d since our three-week class wound up - messing around with rotation gesture recognizers, adjusting the opacity of everything in a layer at once, putting multiple layers in a TMX file. I think it's time I get the show on the road!
Thursday, January 20, 2011
Finishing Touches
Here are a few things to make your game that much better looking:
Update your icon. The "coconut" icon looks nice, but you may decide you want something more specific to your program. You can do this easily by updating the "Icon.png" file (and its relatives) to something else. You need to at least update the Icon.png file - make sure that its replacement is still 57x57 pixels - the iPhone will automatically round off the corners and apply a gloss look. There are a few other variants - Icon-Small@2x.png, etc. You can update these too - just make sure the replacement graphics are the same size as the originals.
You might also want to update your splash screen. This is the screen that shows up while your program is loading. This is the "Default.png" file in your Resources folder. Again, the easiest way to do this is just to replace Default.png with another copy (the same size) that has your preferred graphics.
Give your game a cool name. There are lots of settings based on this, but you can change all of them at once in the "Project" menu --> Rename... option. It should be pretty straightforward.
Oh, and don't forget sound to go with your graphics!
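If you're working in cocos2d, one quick way to add that sound is the SimpleAudioEngine that comes bundled with CocosDenshion. A minimal sketch (the file names here are placeholders for your own audio files):
#import "SimpleAudioEngine.h" // ships with cocos2d's CocosDenshion
// Somewhere early on, e.g. in your layer's init:
[[SimpleAudioEngine sharedEngine] playBackgroundMusic:@"theme.mp3" loop:YES]; // looping background music
// And when something happens in the game:
[[SimpleAudioEngine sharedEngine] playEffect:@"explosion.caf"]; // one-shot sound effect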
Gesticulate this!
"What did you do in school today, Johnny?"
"Oh, Mom! Mr. Howe taught us how to make some gestures!"
(Before I forget, here is the example project for this.)
In the iOS world, a gesture is a motion that can be recognized by the iPhone, iPad, or iPod touch - such as a tap, a two-finger pinch, a two-finger swipe (aka a pan), or a two-finger rotate. It turns out that there is a built-in way of handling this!
In this post, I'm going to show you how to use a PanGesture - this is when you drag two fingers across the screen together, and the object on the screen moves around with your fingers. In this case, we'll move an entire layer around, one that has a TMXMap in it.
The basic idea is to create an instance of UIPanGestureRecognizer and link it to a function you will write that handles the motion of the object. The second part is getting information about the pan gesture - how far the layer should move - and moving it.
Part 1 is pretty straightforward. I started with a layer class that has another layer, called "gameLayer" in it, and I added a TMXMap to it. (You could add just about anything, but this made for a good example.) I then added a new function, which I called from the "init:" function:
-(void) setupPanGestureRecognition
{
UIPanGestureRecognizer* panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanGesture:)];
[[[CCDirector sharedDirector] openGLView] addGestureRecognizer: panGesture];
[panGesture release];
}
This creates a new PanGestureRecognizer, which is going to tell the target (self) to execute the function (handlePanGesture:) if it happens to notice a pan gesture. We then add it to the main view of the program - the director's openGLView. (And we release it, because we created it with alloc, so we own a reference we no longer need - the openGL view retains it now.)
Then we have to make the actual handlePanGesture: function. It gets information about how much the pan gesture has moved from the beginning of the gesture. Before we can write this function, pop over to the header (.h) file and create a new CGPoint variable, panSoFar. Also, write the header for the function in the (.h) file:
-(IBAction)handlePanGesture:(UIPanGestureRecognizer*)sender;
Then we can write the function in the main (.m) file:
-(IBAction)handlePanGesture:(UIPanGestureRecognizer*)sender
{
if (sender.state == UIGestureRecognizerStateBegan)
panSoFar = ccp(0,0); // reset to a new pan...
CGPoint panFromSender = [sender translationInView:[[CCDirector sharedDirector] openGLView]];
// how far does the system say the image should pan... from the start of the motion
CGPoint panChange=ccpSub(panFromSender, panSoFar); // how far did the pan change since the last time we updated the pan?
panChange.y = panChange.y*-1; // flip the y-axis
gameLayer.position = ccpAdd(gameLayer.position, panChange); // move the gameLayer to match the pan.
panSoFar = panFromSender; //update the "panSoFar" so that next time it will just be a small increment to the pan.
}
Most of this calculation is needed because, even though you move the gameLayer a little bit each time this function is called, the gesture recognizer actually reports how far the gameLayer should have moved since the pan started. So we have to subtract how far we have already moved it to find out how much more to move the layer.
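As an aside, another way to sketch this (not the approach used above, but using UIPanGestureRecognizer's setTranslation:inView: method) is to zero out the recognizer's translation after every update, so each callback reports only the change since the last one and no panSoFar variable is needed:
-(IBAction)handlePanGesture:(UIPanGestureRecognizer*)sender
{
    UIView* glView = [[CCDirector sharedDirector] openGLView];
    CGPoint panChange = [sender translationInView:glView]; // change since we last reset it
    panChange.y = -panChange.y;                            // flip the y-axis for cocos2d
    gameLayer.position = ccpAdd(gameLayer.position, panChange);
    [sender setTranslation:CGPointZero inView:glView];     // reset, so the next call is incremental again
}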
Give it a try!
"Oh, Mom! Mr. Howe taught us how to make some gestures!"
(Before I forget, here is the example project for this.)
In the iOs world, a gesture is a motion that can be recognized by the iPhone, iPad, or iPod - such as a tap, a two finger pinch, a two finger swipe (aka a pan), or a two finger rotate. It turns out that there is a built-in way of handling this!
In this post, I'm going to show you how to use a PanGesture - this is when you drag two fingers across the screen together, and the object on the screen moves around with your fingers. In this case, we'll move an entire layer around, one that has a TMXMap in it.
The basic idea is to create an instance of UIPanGestureRecognizer and link it to a function you will write that handles the motion of the object. The second part is getting information about what the pan gesture involved - how far to move the layer, and doing so.
Part 1 is pretty straightforward. I started with a layer class that has another layer, called "gameLayer" in it, and I added a TMXMap to it. (You could add just about anything, but this made for a good example.) I then added a new function, which I called from the "init:" function:
-(void) setupPanGestureRecognition
{
UIPanGestureRecognizer* panGesture = [[UIPanGestureRecognizer alloc] initWithTarget: self action:@selector:(handlePanGesture:)[;
[[[CCDirector sharedDirector] openGLView] addGestureRecognizer: panGesture];
[panGesture release];
}
This creates a new PanGestureRecognizer, which is going to tell the target (self) to execute the function (handlePanGesture:) if it happens to notice a pan gesture. We then add it to the main view of the program - the director's openGLView. (And we release it, because the initWithTarget: method had retained it, but our class doesn't need to do so - the openGL view has it now.)
Then we have to make the actual handlePanGesture: function. It gets information about how much the pan gesture has moved from the beginning of the gesture. Before we can write this function, pop over to the header (.h) file and create a new CGPoint variable, panSoFar. Also, write the header for the function in the (.h) file:
-(IBAction)handlePanGesture:(UIPanGestureRecognizer*)sender;
Then we can write the function in the main (.m) file:
-(IBAction)handlePanGesture:(UIPanGestureRecognizer*)sender
{
if (sender.state == UIGestureRecognizerStateBegan)
panSoFar = ccp(0,0); // reset to a new pan...
CGPoint panFromSender = [sender translationInView:[[CCDirector sharedDirector] openGLView]];
// how far does the system say the image should pan... from the start of the motion
CGPoint panChange=ccpSub(panFromSender, panSoFar); // how far did the pan change since the last time we updated the pan?
panChange.y = panChange.y*-1; // flip the y-axis
gameLayer.position = ccpAdd(gameLayer.position, panChange); // move the gameLayer to match the pan.
panSoFar = panFromSender; //update the "panSoFar" so that next time it will just be a small increment to the pan.
}
Most of this calculation happens because despite the fact that you are moving the gameLayer a little bit every time this function is called, the information from the gesture recognizer is actually giving you how far the gameLayer needs to be moved from when the pan started. So we have to subtract how far we already moved it to find out how much more we need to move the layer.
Give it a try!
Tuesday, January 18, 2011
Multi-touch, part 2
So in the intervening time since I had to break from the first part of my multitouch post, I've done some more reading and had a revelation:
In your code, to handle touch events, you have to implement several different methods:
touchBegan, touchMoved, touchEnded, and touchCanceled (in case the phone rings - instead of the user picking up their fingers.) And in each case, you are sent a "UITouch" variable and a "UIEvent" variable.
And it's the same UITouch variable each time, from when the touch begins to when it ends. Oh, sure, there are things about it that change - the location of the touch for instance, or the status (begin/move/stationary/end/cancel) of the touch. But the memory location of the touch will be the same when you receive it as a touchBegan as it is when you receive it as a touchEnded!
Why does this matter? Well, frankly it doesn't, if you are using a single-touch model. But it is vital if you go to multi-touch. Because when the user touches the screen with two fingers, you get a touch object for the index finger and a touch object for the thumb. And those two touch objects will remain locked to those fingers until the user lets go. So you can track what each finger is doing.
For example:
User touches screen with index and thumb | touchesBegan --> touch1 and touch2
User moves index finger | touchesMoved --> touch1
User touches screen with pinkie | touchesBegan --> touch3
User swipes all three fingers | touchesMoved --> touch1, touch2, and touch3
User lifts thumb | touchesEnded --> touch2
User lifts index and pinkie | touchesEnded --> touch1 and touch3
One subtle difference you may notice is that instead of touchBegan (and its ilk), which is singular, this table includes touchesBegan (et al) - the plural. This is another difference with multitouch.
Let's step back a sec. In order to receive multitouch information, you first have to tell the view that is receiving the touches that it should receive multitouch info. As I mentioned in an earlier post, in Cocos2d-iOS, this can be done in the AppDelegate's applicationDidFinishLaunching: method, by adding the following:
[glView setMultipleTouchEnabled:YES];
right after the glView variable is initialized.
Then, in your Layer class, you still need to have the registerWithTouchDispatcher function:
-(void)registerWithTouchDispatcher
{
[[CCTouchDispatcher sharedDispatcher] addTargetedDelegate:self priority:0 swallowsTouches:YES];
}
Then you still need to write your responsive functions for when touch events arrive, just the plural version. So instead of the single-touch "began" responder:
-(BOOL)ccTouchBegan:(UITouch*)touch withEvent:(UIEvent*)event
you will write the multi-touch, plural version:
-(BOOL)ccTouchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
Note that in this case, you get an NSSet of touches, rather than a single UITouch. A set is a collection of several things in no particular order, so in this case, you'd get a set of one or more touch objects.
In their book, iPhone Programming: the Big Nerd Ranch Guide, Joe Conway and Aaron Hillegass have an elegant demonstration of how to use these sets of touch objects to maintain an NSDictionary of touches - and draw several lines at once. I won't recreate the whole thing here, but I will give an overview of what they did:
- They created a Line class that kept track of a starting point and an ending point.
- They had a member variable for the class that was an NSMutableDictionary.
- When the calls came into ccTouchesBegan, they encapsulated the UITouch pointers in NSValues so they could be used as keys in the dictionary and set new Line objects as the matching values, with both the start and end set to the touches' locations.
- Then when ccTouchesMoved came in, those touch objects were re-encapsulated in NSValues and used to look up the lines in the dictionary, and the end points of the lines adjusted to the new locations.
- Finally, when the ccTouchesEnded or ccTouchesCanceled came in, those touch objects were used again to look up the Lines in the dictionary. These entries in the dictionary were removed, and the Lines were transferred to more permanent storage in the program.
For the details, you should probably buy their book! It's definitely a good one for general iPhone programming.
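Here is a rough sketch of that dictionary idea (not the book's actual code - Line is a hypothetical class with start and end CGPoint properties, and linesInProgress is an NSMutableDictionary ivar created in init; match the handlers' return type to whatever your cocos2d version declares):
-(void)ccTouchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
{
    for (UITouch* touch in touches) {
        CGPoint location = [self convertTouchToNodeSpace:touch];
        NSValue* key = [NSValue valueWithNonretainedObject:touch]; // wrap the touch so it can be a dictionary key
        Line* line = [[[Line alloc] init] autorelease];
        line.start = location;
        line.end = location; // a brand-new line starts and ends at the same point
        [linesInProgress setObject:line forKey:key];
    }
}
-(void)ccTouchesMoved:(NSSet*)touches withEvent:(UIEvent*)event
{
    for (UITouch* touch in touches) {
        NSValue* key = [NSValue valueWithNonretainedObject:touch]; // same finger -> same UITouch -> same key
        Line* line = [linesInProgress objectForKey:key];
        line.end = [self convertTouchToNodeSpace:touch]; // stretch the line to the finger's new position
    }
}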
Multitouch
Whew! I've been working on providing a good guide to making multitouch fun & easy for your games. I'd hoped to have this finished over the weekend, but I had a few personal setbacks. So let's see what I can do.
First of all, let's start with an overview.
When the user touches the screen with a finger, the iPhone keeps track of that finger for as long as it stays on the screen, until you let go. It fires off messages when you first touch the screen, when you move your finger, and when you release. We've seen this in action with our single-touch functions - ccTouchBegan, ccTouchMoved, and ccTouchEnded. For single-touch handling, that's about all there is.
... except there's this little bit in ccTouchBegan, where we have to return YES. What's with that, anyway? Well, the truth is that there are many different things in the iPhone that could be in charge of listening to that finger: your HelloWorld layer, sprites, other background layers, etc. And when the user touches the screen, the iPhone is going to check with lots of them. But that isn't really efficient once you start moving the finger around or releasing it. It is faster and better if just one part of your program deals with the touch event.
This is why we say that your layer "swallows touches." When your touchBegan method returns a YES, you are really saying, "YES, I'll take responsibility for this touch event - nobody else needs to worry about it. Let me know about any further events associated with this finger." Then all the touchMoved and touchEnded events for this finger only will go to your layer - the rest won't be bothered with it. (Of course, the next time the user touches the screen, the process will start all over.)
So this is what is going on when you say:
[[CCTouchDispatcher sharedDispatcher] addTargetedDelegate: self
priority:0
swallowsTouches: YES];
you're telling the touch dispatcher that the layer should be informed of any touches - the #0 priority lets you get first stab at claiming the touches, and the YES for "swallows touches" means that you may very well claim a touch for your own.
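For instance, here is a small sketch (not from the ShooterGame code) of what claiming a touch can look like - return YES only when the touch lands on something your layer cares about. Here mySprite is a hypothetical CCSprite that is a direct child of the layer:
-(BOOL)ccTouchBegan:(UITouch*)touch withEvent:(UIEvent*)event
{
    CGPoint location = [self convertTouchToNodeSpace:touch]; // touch position in this layer's coordinates
    if (CGRectContainsPoint(mySprite.boundingBox, location))
        return YES; // swallow it - every later move/end for this finger comes to us
    return NO;      // let a lower-priority delegate have it instead
}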
So what does this mean for multitouch? The iPhone is actually very clever - more so than I gave it credit for when I first started programming it. See, the issue is that if you have two fingers on the screen, there are two touch events, and you might grab them both. But what happens if the two fingers were originally at (10,10) and (150,150), and later they are at (150,10) and (10,150)? How could you possibly know which finger went where?
Well, as it turns out, the iPhone does - it tracks the touches for you.
Whoa! The bell just rang, so I'll have to finish this later. Next time we'll see how to activate the multitouch feature, and how to handle many different touches at once. It is similar to what you've seen so far, so don't worry!
...to be continued
Monday, January 17, 2011
SneakyInput - addendum
I've promised a few of you instructions on how to handle multiple touches, but before I do, here is an answer to a more specific question: how do I get a SneakyJoystick and a SneakyButton to work at the same time? In other words, can I move my ship with my left thumb while I fire the guns with my right?
The version of the ShooterGame we wrote together would not do this. It turns out that it is a one-line change to fix this!
You have to go into the file that is called "yourProgramAppDelegate.m" (of course, it has your program name there, not "yourProgram"....) Partway through the "applicationDidFinishLaunching:" function, there is a line where the variable "glView" is created:
EAGLView *glView = [EAGLView viewWithFrame:...... and so forth, for about 4 lines.
Sometime after those lines, but still in the applicationDidFinishLaunching: function, add the following line:
[glView setMultipleTouchEnabled:YES];
... and that should fix it!
(For the curious, this post is where I found this solution.)
(Thanks to Patrick, who found a typo in the one line that you actually need to type in! It's fixed now.)