Friday, December 17, 2010

Here I come, Android - Trying out 3D on Android

Remember my "Foof Engine" for the PC? My ongoing 3D engine project that was seemingly surrounded by ridiculous characters? :) It was truly a prototype for bigger and better things, and I'm still not ready to showcase what's been going on with that.

BUT! Today I figured I should try my hand at 3D on Android! I've been working on some Android apps lately (again, will showcase those when I'm ready) and figured it's time to tackle 3D. First vid here:

http://www.youtube.com/watch?v=LuMMCLcDVIA

Not bad for a day's work, I'd say. This was all from scratch, and most of the work was spent on writing the mesh class and the math. I got a little spoiled by the D3DX and XNA math libraries. It's been a while since I touched OpenGL, but it hasn't changed a bit; anyone familiar with it at all should feel at home with the implementation on Android.
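I'm not posting the engine code itself yet, but for anyone curious, a bare-bones mesh wrapper on Android's OpenGL ES 1.0 API looks roughly like this (class and field names here are illustrative, not the actual Foof code, and it assumes texturing has already been enabled during GL setup):

// Hypothetical minimal mesh wrapper for Android OpenGL ES 1.0 (GL10).
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;

public class SimpleMesh {
    private final FloatBuffer vertices;   // x, y, z per vertex
    private final FloatBuffer texCoords;  // u, v per vertex
    private final int vertexCount;

    public SimpleMesh(float[] verts, float[] uvs) {
        vertexCount = verts.length / 3;
        vertices = toFloatBuffer(verts);
        texCoords = toFloatBuffer(uvs);
    }

    private static FloatBuffer toFloatBuffer(float[] data) {
        FloatBuffer fb = ByteBuffer.allocateDirect(data.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        fb.put(data).position(0);
        return fb;
    }

    // Draw as a plain triangle list; assumes GL_TEXTURE_2D is enabled
    // and textureId points at a texture loaded elsewhere.
    public void draw(GL10 gl, int textureId) {
        gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertices);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, texCoords);
        gl.glDrawArrays(GL10.GL_TRIANGLES, 0, vertexCount);
        gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    }
}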

I rolled a custom model format based on the one I use in the Foof Engine tech demos, but watered down tremendously - no bone weights, no support for normal maps, etc. I think I'll use a simple hierarchy-based animation system rather than more processor-intensive ones like vertex tweening or bones.
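By "hierarchy-based animation" I just mean each node carries a local transform plus a list of children, and world transforms get concatenated down the tree every frame. A rough sketch of the idea (not the actual format or class; it reuses the hypothetical SimpleMesh from above):

import android.opengl.Matrix;
import java.util.ArrayList;
import java.util.List;

// Hypothetical scene-graph node: animating a parent's local transform
// moves all of its children too, with no per-vertex skinning cost.
public class MeshNode {
    public final float[] localTransform = new float[16];  // animated per frame
    public final float[] worldTransform = new float[16];  // derived each frame
    public final List<MeshNode> children = new ArrayList<MeshNode>();
    public SimpleMesh mesh;  // may be null for pure "joint" nodes

    public MeshNode() {
        Matrix.setIdentityM(localTransform, 0);
        Matrix.setIdentityM(worldTransform, 0);
    }

    // Concatenate parentWorld * local, then recurse into the children.
    public void updateWorldTransforms(float[] parentWorld) {
        Matrix.multiplyMM(worldTransform, 0, parentWorld, 0, localTransform, 0);
        for (MeshNode child : children) {
            child.updateWorldTransforms(worldTransform);
        }
    }
}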

I put all my models and textures into the assets/ folder. This is where you should put raw data that you don't want the build tools to assign built-in resource IDs to, so you can access the files by name instead.
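Reading those files back at runtime goes through the AssetManager, something like this (the helper and the file names you pass it are placeholders):

import android.content.Context;
import android.content.res.AssetManager;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Load a raw asset by name, e.g. loadAsset(context, "models/unicorn.msh").
public static byte[] loadAsset(Context context, String name) throws IOException {
    AssetManager assets = context.getAssets();
    InputStream in = assets.open(name);
    try {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        return out.toByteArray();
    } finally {
        in.close();
    }
}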

Textures in the sample were borrowed from CGTextures.com, and the unicorns are actually a snapshot of ones I made in LittleBigPlanet 2. The mushrooms I drew myself - aren't they pretty?

Mushroom Mushroom

But all that aside, the render function right now just clears the screen and then walks a list of meshes, rendering each one. I'm going to create a system of hashes to catalog the textures and meshes like I do in the Foof Engine.
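In rough terms, that render pass plus the planned hash catalog might look like this (made-up names again, reusing the SimpleMesh sketch from above; the real engine will differ):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.microedition.khronos.opengles.GL10;

// Hypothetical renderer: clear, then draw every mesh in the scene list.
// The maps act as the "hash catalog" so each mesh/texture is loaded once.
public class SceneRenderer {
    private final Map<String, SimpleMesh> meshCatalog = new HashMap<String, SimpleMesh>();
    private final Map<String, Integer> textureCatalog = new HashMap<String, Integer>();
    private final List<MeshInstance> scene = new ArrayList<MeshInstance>();

    // Pairs a catalog entry with a texture; names are placeholders.
    public static class MeshInstance {
        public String meshName;
        public String textureName;
    }

    public void render(GL10 gl) {
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        for (MeshInstance instance : scene) {
            SimpleMesh mesh = meshCatalog.get(instance.meshName);
            Integer texture = textureCatalog.get(instance.textureName);
            if (mesh != null && texture != null) {
                mesh.draw(gl, texture.intValue());
            }
        }
    }
}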

I think I can make something cool out of this, so stay tuned. I have some ideas and this, as usual, is just a prototype for bigger and better things.

Monday, December 13, 2010

Unicorns like to dance

Um... so there's a bug in 1.04 of the beta that prevents me from editing music online. Poop. So I loaded an empty level offline to see if I could still edit - and I could! So then one thing led to another...

 http://www.youtube.com/watch?v=nxm-p_MZoJU

The unicorns each have a recorded animation that just moves their arms around; their legs and overall movement are handled by movers, and they also have anti-gravity. The level just keeps emitting them to form the conga line sort of thing.

Saturday, December 11, 2010

Unicorn Nightmare - More LBP2 Ray Caster Fun



So yesterday I worked on this wonder for most of the day: http://www.youtube.com/watch?v=VthaIYW3OVA

This is NOT using the layer glitch, although I've had plenty of ideas for combining the layer glitch with the ray casting method. Raphael has published a level (Sackenstein 3D) that uses my ray casting idea combined with his knowledge of emitting into glitched layers, and below I will detail the key to both methods. Please see my other blog post as well: http://foofles.blogspot.com/2010/11/indepth-look-at-ray-casting-theory-and.html

First off, my ray casters, all of them, are entirely pseudo-3D. That means the perspective effect is just an illusion and is provided by a series of rays hitting walls at incrementing angles. It looks something like this:

Figure 1: Diagram of the "shish kebab" or "spider web" perspective projection of rays.


The calculations recursively travel down each line in the web, stopping when one piece intersects a wall. Due to the arc-like shape, we achieve a perspective effect in the same manner as Wolfenstein 3D or Ken's Labyrinth. This translates well to my method of treating the rendering side of the equation as plotting 2D pixels. Thus, this method is best combined with either thin hologram or solid material in layers, together with a fully flat camera view. Note that a fully flat camera view can still be mixed with depth of field or depth attenuation via hologram brightness or light by layer.

If you are using holographic material, or not using the layer glitch, microchip-based early cancellation is necessary. This can be both a blessing and a curse. On one hand, it's good to be able to cut off all extraneous calculations in the chain. On the other, recursion is slow. You must use a workaround to make sure the microchip recursion does not lag - such as wiring a "dummy" chain of inputs and outputs alongside the "NOT -> Activate" chain. The full implementation of this style of recursive microchip logic is detailed here: http://foofles.blogspot.com/2010/11/indepth-look-at-ray-casting-theory-and.html
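Outside of LBP2 logic, the same idea in ordinary code is just stepping along each ray in small segments and stopping at the first wall cell, with each ray cast at an incrementally larger angle across the FOV. A rough sketch of that (obviously not exportable to LBP2; the map layout, step size, and names are all arbitrary):

// Plain-code illustration of the "shish kebab" method: each ray is a chain of
// segments at an incrementing angle; we walk the chain and stop at the first
// wall hit (the early cancellation that the microchips implement in-game).
public class ArcRayCaster {
    // map[y][x] == 1 means wall; rayCount should be greater than 1.
    public float[] castRays(int[][] map, float camX, float camY,
                            float camAngle, float fov, int rayCount) {
        float[] distances = new float[rayCount];
        float step = 0.1f;          // length of one "segment" in the chain
        float maxDistance = 16.0f;  // how long the whole chain is
        for (int i = 0; i < rayCount; i++) {
            // Each ray gets an incrementally larger angle across the FOV.
            float angle = camAngle - fov / 2f + fov * i / (rayCount - 1);
            float dx = (float) Math.cos(angle) * step;
            float dy = (float) Math.sin(angle) * step;
            float x = camX, y = camY, travelled = 0f;
            while (travelled < maxDistance) {
                x += dx; y += dy; travelled += step;
                if (map[(int) y][(int) x] == 1) {
                    break;  // "impact sensor" fired: cancel this chain early
                }
            }
            distances[i] = travelled;  // drives wall slice height / brightness
        }
        return distances;
    }
}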





To sum up the arc ("shish kebab") method:

Pros:

- Does not require knowledge/use of the 3D layer glitch.
- Full control over the FOV angle.
- Naturally suited to 2D pixel mapping, which is easier on the eyes.
- Can be used with holographic OR solid material.
- Can be mixed with the glitched layers to provide higher-res depth testing for game sprites.

Cons:

- Microchip-based recursion compounds latency - the further down the chain a chip is, the bigger its delay - unless you use workarounds.
- Takes time and effort to get accurate angle increments in the rays.
- Fixed camera height and orientation.
- Works best with a flat camera view; texture mapping per distance must be simulated.


 
Sackenstein 3D works slightly differently. Its perspective effect is closer to true 3D: it uses the glitched layers to provide the sense of depth and perspective. Therefore mixing it with an artificial perspective, as in my arc-based ray routine, is unnecessary and may lead to strange results. Rather, an orthogonal projection is probably better.

Figure 2: Diagram of orthogonal "net" type setup. Recursion is not necessary.




Rather than simulate perspective with differently scaled slices of material, it uses the game's 3D graphics to do it with the glitched layers. In Sackenstein 3D and similar approaches, all the material is solid and emitted within a grid like Figure 2 - imagine the Y axis of this grid as going further away in layers, and the X axis as left and right.

Recursion is unnecessary - we are not trying to render a true orthogonal view, and cutting the rays off so they function as columns would give exactly that effect. Rather, all cells in this grid have an impact sensor - just like the shish kebab method. Let's say it's set to read the tag "WALL". Then all that happens is that each cell is paired up with an emitter in the world, and when a cell impacts a wall, its emitter is turned on (emit with a 0.1 lifespan constantly to keep solid material "on"). The slices of material each take up a thick layer.
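In plain code terms, the grid-of-sensors-paired-with-emitters idea boils down to something like this (again just an illustration of the logic, with made-up names, not anything you can drop into LBP2):

// Illustration of the orthogonal "net" method: every grid cell checks whether
// it overlaps a wall, and if so its paired emitter is switched on. There is no
// chain to walk, so there is no recursion and no early cancellation.
public class OrthogonalGridCaster {
    public interface Emitter {
        void setActive(boolean active);  // stands in for the in-game emitter
    }

    // emitters[row][col] pairs one emitter with one grid cell.
    // row = distance ahead of the camera (mapped to glitched layers),
    // col = offset left/right of the camera.
    public void update(int[][] worldMap, Emitter[][] emitters,
                       int camX, int camY) {
        int rows = emitters.length;
        int cols = emitters[0].length;
        for (int row = 0; row < rows; row++) {
            for (int col = 0; col < cols; col++) {
                int worldX = camX + col - cols / 2;  // centre the grid on camera
                int worldY = camY + row;
                boolean hitWall = worldY >= 0 && worldY < worldMap.length
                        && worldX >= 0 && worldX < worldMap[0].length
                        && worldMap[worldY][worldX] == 1;
                emitters[row][col].setActive(hitWall);  // "impact sensor" result
            }
        }
    }
}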

 





To sum up the glitched-layer ("Sackenstein") method:

Pros:

- Since it is 3D, you can perform effects like rolling the camera or moving it up and down.
- Texture mapping for distance is handled by the game engine.
- Naturally clips against sprites well - put sprites in a thin layer.
- The ray casting portion is extremely simple to set up.

Cons:

- Requires knowledge/use of the 3D layer glitch.
- Wall slices are solid blocks with a fixed orientation relative to the camera, which makes for a visual phenomenon that is very annoying to look at and easily causes confusion and headaches.
- Lack of early cancellation means all impact sensors are being calculated constantly.


Both have their strong points and weak points. As for character sprites: in Unicorn Nightmare I use holographic images for them, which leads to translucency and duplicate ray hits. Using the layer glitch in either method lets you use solid material for sprites instead - since one solid piece cannot be emitted into the same space as another, this minimizes the ghosting effect.

Both, however, share one fatal flaw: too many impact sensors in one place will cause them all to stop working.

This ray caster concept I started last month has really gone to town. I'm very glad I brought this idea to the community, but it is beginning to reach its technical limits. From now on, for all simulated 3D I will use the 90-degree dungeon crawler style variant of my ray caster - stay tuned for a full feature on it this weekend.



Sunday, December 5, 2010

Beaten to the punch, eep!

Anyone reading this blog or watching my YouTube lately probably knows how much time I've been putting into my ray casting techniques - the 360-degree ray caster and the dungeon crawler model, my attempts at full 3D with no layer glitch.

A couple weeks ago I was having a conversation with someone, discussing the limitations - namely clipping sprites against walls. Then it hit me - use the game's depth buffer! I might have to dabble in the layer glitch after all, but I decided to make it a surprise for the community.

But I was beaten to the punch! Check out Sackenstein 3D by Raphael: http://www.youtube.com/watch?v=34S7h0k5oIY&feature=player_embedded

That's OK; at least I got to see what it'd look like in motion, and it looks like the resolution can't be too great :( I think this ray casting idea is starting to reach its functional limitations.

Good job Raphael!