Friday, December 17, 2010

Here I come, Android - Trying out 3D on Android

Remember my "Foof Engine" for the PC? My ongoing 3D engine project which was seemingly surrounded by ridiculous characters? :) It was truly a prototype for bigger and better things, and I'm still not ready to showcase what's been going on with that.

BUT! Today I figured I should try my hand at 3D on Android! I've been working on some Android apps lately (again, will showcase those when I'm ready) and figured it's time to tackle 3D. First vid here:

http://www.youtube.com/watch?v=LuMMCLcDVIA

Not bad for a day's work, I'd say. This was all from scratch, and most of the work went into writing the mesh class and the math; I got a little spoiled by the D3DX and XNA math libraries. It's been a while since I touched OpenGL, but it hasn't changed a bit. Anyone familiar with it should feel right at home with the implementation on Android.
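For anyone curious what that looks like on the Java side, here's a minimal sketch of an OpenGL ES 1.x renderer on Android - class names and parameter values are my own illustration, not taken from my actual project:

```java
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView;
import android.opengl.GLU;

// Hypothetical renderer skeleton: clear the screen, set up a perspective projection,
// and leave a spot for drawing the meshes each frame.
public class FoofRenderer implements GLSurfaceView.Renderer {
    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        gl.glEnable(GL10.GL_DEPTH_TEST);
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        gl.glViewport(0, 0, width, height);
        gl.glMatrixMode(GL10.GL_PROJECTION);
        gl.glLoadIdentity();
        GLU.gluPerspective(gl, 60f, (float) width / height, 0.1f, 100f);
        gl.glMatrixMode(GL10.GL_MODELVIEW);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        gl.glLoadIdentity();
        // ... position the camera and draw each mesh here
    }
}
```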

I rolled a custom model format based on the one I use in the Foof Engine tech demos, but watered down tremendously - no bone weights, no support for normal maps, etc. I think I'll use a simple hierarchy-based animation system instead of more processor-intensive ones like vertex tweening or bones.
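As a rough idea of what such a watered-down format boils down to once loaded, here's a hypothetical mesh class (names and layout are illustrative, not my actual format): just positions, texture coordinates, and indices, with nothing for bone weights or normal maps.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;
import javax.microedition.khronos.opengles.GL10;

// Hypothetical stripped-down mesh: vertex positions and UVs only, drawn with ES 1.x pointers.
public class Mesh {
    private final FloatBuffer vertices;   // x, y, z per vertex
    private final FloatBuffer texCoords;  // u, v per vertex
    private final ShortBuffer indices;
    private final int indexCount;

    public Mesh(float[] verts, float[] uvs, short[] idx) {
        vertices = toFloatBuffer(verts);
        texCoords = toFloatBuffer(uvs);
        indices = toShortBuffer(idx);
        indexCount = idx.length;
    }

    public void draw(GL10 gl) {
        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertices);
        gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, texCoords);
        gl.glDrawElements(GL10.GL_TRIANGLES, indexCount, GL10.GL_UNSIGNED_SHORT, indices);
    }

    private static FloatBuffer toFloatBuffer(float[] data) {
        FloatBuffer fb = ByteBuffer.allocateDirect(data.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        fb.put(data).position(0);
        return fb;
    }

    private static ShortBuffer toShortBuffer(short[] data) {
        ShortBuffer sb = ByteBuffer.allocateDirect(data.length * 2)
                .order(ByteOrder.nativeOrder()).asShortBuffer();
        sb.put(data).position(0);
        return sb;
    }
}
```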

I put all my models and textures into the assets/ folder; this is where you should put raw data that you don't want the IDE to assign built-in ID codes to, so you can access the files by name.
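Reading one of those files then just goes through the AssetManager. A small hypothetical helper (the file name in the comment is made up) looks something like this:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import android.content.Context;

public class AssetLoader {
    // Reads a raw file (e.g. "models/unicorn.mesh") out of the assets folder by name.
    public static byte[] load(Context context, String name) throws IOException {
        InputStream in = context.getAssets().open(name);
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int read;
            while ((read = in.read(buf)) != -1) {
                out.write(buf, 0, read);
            }
            return out.toByteArray();
        } finally {
            in.close();
        }
    }
}
```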

Textures in the sample were borrowed from CGTextures.com, and the unicorns are actually a snapshot of ones I made in LittleBigPlanet 2. The mushrooms I drew myself - aren't they pretty?

Mushroom Mushroom

But all that aside, the render function right now just consists of clearing the screen and then going through a list of meshes and rendering them. I'm going to create a system of hashes to catalog the textures and meshes, like I do in the Foof Engine.
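A sketch of where I'm headed with that catalog - hypothetical names again, using plain HashMaps keyed by name and reusing the Mesh sketch from above, with the render loop staying as dumb as described:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.microedition.khronos.opengles.GL10;

// Hypothetical resource catalog keyed by name, plus the simple clear-and-draw loop.
public class Scene {
    private final Map<String, Mesh> meshes = new HashMap<String, Mesh>();
    private final Map<String, Integer> textures = new HashMap<String, Integer>(); // GL texture ids
    private final List<String> drawList = new ArrayList<String>();

    public void register(String name, Mesh mesh, int glTextureId) {
        meshes.put(name, mesh);
        textures.put(name, glTextureId);
    }

    public void addInstance(String name) {
        drawList.add(name);
    }

    public void render(GL10 gl) {
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        for (String name : drawList) {
            gl.glBindTexture(GL10.GL_TEXTURE_2D, textures.get(name));
            meshes.get(name).draw(gl);
        }
    }
}
```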

I think I can make something cool out of this, so stay tuned. I have some ideas and this, as usual, is just a prototype for bigger and better things.

Monday, December 13, 2010

Unicorns like to dance

Um... so there's a bug in 1.04 of the beta that prevents me from editing music online. Poop. So I loaded an empty level offline to see if I could still edit - and I could! So then one thing led to another...

 http://www.youtube.com/watch?v=nxm-p_MZoJU

The unicorns each have a recorded animation that just moves their arms around; their legs and movement are handled by movers, and they also have anti-gravity. The level just keeps emitting them to form the conga line sort of thing.

Saturday, December 11, 2010

Unicorn Nightmare - More LBP2 Ray Caster Fun



So yesterday I worked on this wonder for most of the day. http://www.youtube.com/watch?v=VthaIYW3OVA

This is NOT using the layer glitch, although I've had plenty of ideas for using the layer glitch with the ray casting method. Raphael has published a level (Sackenstein 3D) that combines my ray casting idea with his knowledge of emitting into glitched layers, and I will detail the key to both methods. Please see my other blog post as well: http://foofles.blogspot.com/2010/11/indepth-look-at-ray-casting-theory-and.html

First off, my ray casters, all of them, are entirely pseudo-3D. That means the perspective effect is just an illusion and is provided by a series of rays hitting walls at incrementing angles. It looks something like this:

Figure 1: Diagram of "shish kebab" or "Spider Web" Perspective Projection of rays.  


The calculations recursively travel down each line in the web, stopping when one piece intersects a wall. Due to the arc-like shape, we achieve a perspective effect in the same manner as Wolfenstein 3D or Ken's Labyrinth. This translates well to my method of treating the rendering side of the equation as plotting 2D pixels. Thus, this approach is best combined with either thin hologram or solid material in layers, plus a fully flat camera view. Note that a fully flat camera view can still be mixed with depth of field or depth attenuation via hologram brightness or light-by-layer. If you're using holographic material or not using the layer glitch, microchip-based early cancellation is necessary. This is both a blessing and a curse: on one hand, it's good to be able to cut off all extraneous calculations in the chain; on the other, recursion is slow. You must use a workaround to make sure the microchip recursion does not lag - such as wiring a "dummy" chain of inputs and outputs alongside the "NOT -> Activate" chain. The full implementation of this style of recursive microchip logic is detailed here: http://foofles.blogspot.com/2010/11/indepth-look-at-ray-casting-theory-and.html
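For anyone who thinks better in code, here's a plain-code analogy (not LBP2 logic; the array name is just illustrative) of what the early-cancellation chain computes: each chunk checks for a wall hit, and a miss hands control to the next chunk.

```java
// Plain-code analogy of the microchip chain: each kebab chunk checks for a wall hit;
// if it misses, it "activates" the next chunk. The loop exits early on the first hit,
// which is exactly what the NOT -> Activate wiring buys you in LBP2.
public class RayChain {
    /** Returns the index of the first chunk that overlaps a wall, or -1 for no hit. */
    public static int castRay(boolean[] chunkTouchesWall) {
        for (int depth = 0; depth < chunkTouchesWall.length; depth++) {
            if (chunkTouchesWall[depth]) {
                return depth;   // impact sensor fired: stop here, light this wall slice
            }
            // NOT gate: no hit at this depth, so the next chunk's microchip gets activated
        }
        return -1;              // ray fell off the end of the kebab: draw background
    }
}
```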





Pros:
  • Does not require knowledge/use of the 3D layer glitch.
  • Full control over the FOV angle.
  • Naturally suited to 2D pixel mapping, which is easier on the eyes.
  • Can be used with holographic OR solid material.
  • Can be mixed with the glitched layers to provide higher-res depth testing for game sprites.

Cons:
  • Microchip-based recursion adds latency - the further down the chain you are, the bigger the delay, unless you use workarounds.
  • Takes time and effort to get accurate angle increments in the rays.
  • Fixed camera height and orientation.
  • Works best with a flat camera view. Texture mapping per distance must be simulated.
 
Sackenstein 3D works slightly differently. The perspective effect is closer to true 3D: it uses the glitched layers to provide the sense of depth and perspective. Mixing that with an artificial perspective like my arc-based ray routine is therefore unnecessary and may lead to strange results; an orthogonal projection is probably better.

Figure 2: Diagram of orthogonal "net" type setup. Recursion is not necessary.




Rather than simulate perspective with differently scaled slices of material, it uses the game's 3D graphics to do it with the glitched layers. In Sackenstein 3D and similar approaches, all the material is solid and emitted within a grid like Figure 2 - imagine the Y axis of this grid as going further away in layers, and the X axis as left and right.

Recursion is unnecessary here - we are not trying to render a true orthogonal view, and we don't want the rays to function in chained columns (if we wired them that way, that is the effect we would get). Rather, every cell in this grid has an impact sensor - just like the shish kebab method. Let's say it's set to read the tag "WALL". Then all that happens is each cell is paired up with an emitter in the world, and when a cell impacts a wall its emitter is turned on (emit with a 0.1 lifespan constantly to turn the solid material "on"). The slices of material each take up a thick layer.
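As a plain-code analogy (nothing LBP2-specific; the array names are illustrative), the whole "net" boils down to copying each cell's impact state straight onto its paired emitter every frame:

```java
// Plain-code analogy of the orthogonal "net": every cell in the grid has an impact
// sensor paired with one emitter in the world. A cell touching a "WALL" tag simply
// turns its emitter on (in LBP2: emit solid material with a 0.1s lifespan every frame).
public class OrthoNet {
    public static void update(boolean[][] cellTouchesWall, boolean[][] emitterOn) {
        for (int depth = 0; depth < cellTouchesWall.length; depth++) {        // Y axis: further layers
            for (int col = 0; col < cellTouchesWall[depth].length; col++) {   // X axis: left/right
                emitterOn[depth][col] = cellTouchesWall[depth][col];
            }
        }
    }
}
```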

 





Pros:
  • Since it is 3D, you can perform effects like the camera rolling or moving up and down.
  • Texture mapping for distance is handled by the game engine.
  • Naturally clips against sprites well - put sprites in a thin layer.
  • The ray casting portion is extremely simple to set up.

Cons:
  • Requires knowledge/use of the 3D layer glitch.
  • Wall slices are solid blocks with a fixed orientation relative to the camera. This makes for a visual phenomenon that is very annoying to look at and easily causes confusion and headaches.
  • Lack of early cancellation means all impact sensors are being calculated constantly.


Both have their strong points and weak points. As for character sprites: in Unicorn Nightmare I use holographic images for them, which leads to translucency and duplicate ray hits. Using the layer glitch in either method lets you use solid material for sprites - meaning one piece cannot emit into the same space as another, minimizing the ghosting effect.

Both however share one fatal flaw: too many impact sensors in one place will cause them all to stop working.

This ray caster concept I started last month has really gone to town. I'm very glad I brought this idea to the community, but it is beginning to reach its technical limits. For all simulated 3D from now on, I will use the 90-degree dungeon crawler style variant of my ray caster - stay tuned for a full feature on it this weekend.



Sunday, December 5, 2010

Beaten to the punch, eep!

Anyone reading this blog or watching my YouTube lately probably knows how much time I've been putting into my ray casting techniques - the 360-degree raycaster and the dungeon crawler model, my attempts at full 3D with no layer glitch.

A couple weeks ago I was having a conversation with someone, discussing the limitations - namely clipping sprites against walls. Then it hit me - use the game's depth buffer! I might have to dabble in the layer glitch after all, but I decided to make it a surprise for the community.

But I was beaten to the punch! Check out Sackenstein 3D by Raphael: http://www.youtube.com/watch?v=34S7h0k5oIY&feature=player_embedded

That's OK - at least I got to see what it'd look like in motion, and it looks like the resolution can't be too great :( I think this ray casting idea is starting to reach its functional limits.

Good job Raphael!

Saturday, November 27, 2010

An in-depth look at ray casting theory and how I've applied it to LittleBigPlanet 2

Full explanation of various methods and theories on implementing a ray casting renderer in LittleBigPlanet 2.
(If you have the beta, you can run the samples here: 3D Raycaster Tech Demo, 3D Raycaster Top Down/Concept View.)
This assumes an intermediate understanding of the tools in LittleBigPlanet 2.

Contents:
  1. Introduction
  2. Preparing Solid Color Wall Slices
  3. Creating the Rays
  4. Creating The Player Chassis
  5. Putting It All Together!
  6. Different Colors And Textures
  7. Backgrounds, Floors, Ceilings
  8. Character and Object Sprites... are a problem.
  9. Conclusions
Part 1: Introduction
Have you ever wondered how games like Wolfenstein 3D or Ken's Labyrinth achieved their 3D effects? The genius is in the simplicity. They utilize a technique called ray casting, and I decided it'd be interesting to try to get this into LittleBigPlanet 2, and here I will detail all my implementations, theories, and observations of the process.

First off: imagine the screen as a grid of columns and rows. For example, at a resolution of 320 x 280 we have 320 columns to represent the width and 280 rows to represent the height. Ray casting in Wolfenstein 3D is achieved by giving each column its own ray and testing that ray against a grid of 2D shapes.

For a 3D-like effect, the engine assumes each ray has an origin at the player's eye. It then gives each ray an angular offset to achieve the desired field of view, which gives us the perspective effect. Think back to art class: lines vanishing into the distance.
Figure 1: Example of a 3D-Like effect by simulating perspective with a vanishing point.

You may notice something in the above image: Things further away appear to get smaller. This is what simple Ray Casting renderers exploit.

There is a reason ray casters like the one in Wolfenstein 3D, and even more advanced ones like the Build engine (Duke Nukem 3D), are called "pseudo-3D": the routine is only calculated in 2D space. In the case of Wolfenstein 3D, it is simplified by using a uniform grid, with each cell being able to hold a wall.

We will impose and exploit three very important limitations: the player's eye never moves up or down, the camera cannot roll side to side (e.g. barrel roll), and walls always go from floor to ceiling. This means the horizon line / vanishing point will be directly in the center of the screen. Renderers like Wolfenstein 3D's operate column by column, as mentioned. As each ray is cast from the eye, it eventually intersects a wall, and to simulate perspective we simply scale the wall texture vertically: the further the ray-to-wall intersection is from the eye, the shorter the wall slice will appear, and the slice is simply centered vertically within the column. When you have a whole series of these lined up side by side, you get your perspective illusion. This is the "3D" effect that we see.

Important note: there is a slight distortion in the perspective, because we're simulating the FOV of something round (the human eye or a camera lens) with something that is flat (the computer screen). There is a small step involved in solving for this error in real raycasters, but it will be ignored in this article - just be mindful of it in case you're wondering what's going on. It can be as simple as scaling the rays so their arc projects onto a flat trapezoid shape instead of a round arc.
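To make the column-by-column idea concrete, here is a minimal sketch in ordinary code - not anything you can drop into LBP2, and the grid, step size, and view distance are made-up illustrative values. One ray per screen column marches outward until it hits a wall cell, and the slice height is screen height divided by the fisheye-corrected distance.

```java
// Minimal column-based raycaster sketch: one ray per screen column, stepped
// outward until it hits a wall cell; slice height is inversely proportional
// to the (fisheye-corrected) distance.
public class ColumnCaster {
    public static float[] castColumns(boolean[][] wallGrid, float eyeX, float eyeY,
                                      float eyeAngle, float fov,
                                      int screenWidth, float screenHeight) {
        float[] sliceHeights = new float[screenWidth];
        for (int col = 0; col < screenWidth; col++) {
            // Each column gets its own ray, offset within the field of view
            float rayAngle = eyeAngle - fov / 2 + fov * col / (screenWidth - 1);
            float dist = 0f;
            float step = 0.05f;                 // march in small increments
            boolean hit = false;
            while (dist < 64f && !hit) {        // 64 = arbitrary max view distance
                dist += step;
                int cx = (int) (eyeX + Math.cos(rayAngle) * dist);
                int cy = (int) (eyeY + Math.sin(rayAngle) * dist);
                if (cy >= 0 && cy < wallGrid.length && cx >= 0 && cx < wallGrid[0].length
                        && wallGrid[cy][cx]) {
                    hit = true;
                }
            }
            // The cos() term removes the fisheye distortion mentioned above
            float corrected = dist * (float) Math.cos(rayAngle - eyeAngle);
            sliceHeights[col] = hit ? screenHeight / corrected : 0f;
        }
        return sliceHeights;
    }
}
```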


Part 2: Preparing Solid Color Wall Slices

In LittleBigPlanet 2 I've used a solid color to present the most basic effect. Holographic material in LBP2 is extremely versatile, so first things first - let us create the wall slices per distance. We'll go through all the different ways I've come up with.

PRACTICAL METHOD A: Do-it-yourself. Create slabs of holo to represent distances and then overlay them.

Figure 2: Slices of holographic material representing different distances in one column.


For your slices, I recommend setting your OFF color to the same as your ON color. If your walls are red, make sure the same shade of red is selected as your OFF color. Disable animation on both unless you want that funky pulsing effect. You'll also see the smaller slices get darker. This is another trick of perspective, distance attenuation: things further away appear darker. So I simply lower the brightness for each piece, down to a minimum of 0.
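In code terms the attenuation is just a linear falloff clamped at zero - a tiny sketch, with maxDistance standing in for whatever your furthest slice represents:

```java
public class Shading {
    // Full brightness up close, fading linearly to a floor of 0 at the max view distance.
    public static float brightnessFor(float distance, float maxDistance) {
        return Math.max(0f, 1f - distance / maxDistance);
    }
}
```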

We have some options here. In both samples I have each of these different heights overlaid per column and just activate the appropriate piece of holo for the distance (Part 3 will cover different ray casting theories and how to translate that to your height chunk). You can also emit the appropriate height rather than having them all overlaid at once: even if it's invisible, having tons of holographic material overlaid at once causes very strange things to happen in the engine - collisions stop registering, players constantly respawn or fall through the level boundaries, etc. Make sure to be working in grid mode the entire time.




Pros:
  • Very easy to design and implement.
  • Dependable.
  • Easy to punch in depth shading / fog, either preset per piece or with dimmer functions.

Cons:
  • Easy for low depth resolutions, but becomes increasingly tedious for high ones.
  • A lot of overlaid holographic material causes engine eccentricities.
  • It may be tricky to get a stable image if emitting the appropriate piece per ray hit distance.
  • Holographic material's base texture may sit oddly on the slices. Experiment with the UV tool.

THEORETICAL METHOD B:   Spread 'em


Here, rather than having to hand-craft each height for the slice, we'd use some combination of pistons, emitters and more to achieve maximum resolution fill with the fewest wires. For example: three pieces of invisible holo - one on the horizon line, one above, and one below, with the two off-center pieces equidistant from the horizon. Attach the outer two to the center holo with pistons, and you'd be able to modulate them in and out in sync to create a boundary. The idea would then be to use emitters to "fill in" holographic material. Possible ways include emitting bits of holo vertically and wiring them to self-destruct when they collide with the barrier (put a tag on the outer pieces, put an impact sensor on the emitted pixels and wire it to a destroyer set to "disappear"), or a single "paint brush" piece of holo that bounces between the barriers and constantly emits a color with a short lifespan (to bounce, use an impact sensor, a toggle, and two movers - e.g. toggle -> move up, toggle -> NOT -> move down).




Pros:
  • Would easily scale to very high depth resolution.
  • A lot less lag in create mode.
  • Potential for fewer wires flying around.
  • Potential for more modularity and easier enhancement.

Cons:
  • Impact sensors are unreliable if something is moving very fast.
  • A bit of work to get the boundaries to scale appropriately.
  • Would take work to get distance shading / fog working.

HAS NOT BEEN SUCCESSFULLY IMPLEMENTED.

Part 3: Creating the Rays

Sounds like a pretty fundamental part of the ray caster, huh? :) Each column is associated with a ray, and all rays have their origin at the eye position. In any case, the rays will need to use an impact sensor to detect a particular tag. If you want to draw "walls", simply draw material and put your "WALL" tag on it. It helps for the walls to be made of holographic material, since it doesn't collide.
Figure 3: Overhead view of a player's field of vision. Blue represents walls. This is the space everything is computed in.
PRACTICAL METHOD A: The Shish Kebab Technique.

I call this the shish kebab because it's like chunks of shish kebab on a skewer. Essentially, for each level of depth you have an independent piece with this microchip on it.

Very simple: just an impact sensor set to INCLUDE TOUCHING - YES, INCLUDE TAG - YES, set to a blue tag with the label "WALL", which is also wired into a NOT gate. If a piece of our ray kebab fails to intersect, it queries the next piece by feeding the output of the NOT gate into the activation input of the next kebab chunk's microchip, like so:

Figure 4: Each link in the ray will check intersection with walls, if there is none it will go to the next one.

This way, we end up recursively checking at predesignated distances and we stop as soon as we hit something. We then wire that impact sensor into our holo slice routine. I combine this with the pre-made slices to get the effect in the sample. Also, since we'll want this to be mobile, glue all the chunks together using the advanced glue tool.

Figure 5: The "ray" is a chain of recursive intersection tests and will cancel out if one of them is true.




Pros:
  • Extremely simple to understand and implement.
  • Extremely stable and dependable.

Cons:
  • Very easy for low depth resolutions, but becomes tedious to manually wire each output at a high res.
  • Too many impact sensors occupying a small space can cause them to not register properly.
  • Also introduces another piece of holo for each step of depth. Multiply that by your horizontal resolution.


THEORETICAL METHODS B & C: The Facehugger and The Bouncing Bullet

I've experimented with piston- and mover-based rays. I had a piston-based ray working, but it broke apart when I added more columns. The idea is that the sensor sits on the end of a piston and tries its best to hug a surface; it was a little jittery, but it did hug the wall (use impact sensors and forward/backward input on the piston). I also experimented with a little piece of holo that was quickly shot out and bounced back to the eye. In both cases, the piece of holo touching the wall would have a tag sensor looking for "EYE" and report its closeness to the render column; from there we can either extract the analog value and activate a specific height piece, or scale and shade one of the theoretical dynamic wall slices.
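If the analog value did come back cleanly, mapping it to a slice is simple. A sketch, assuming (I haven't verified this) that the tag sensor outputs 1.0 at zero distance and falls off to 0.0 at its maximum radius:

```java
public class SliceSelect {
    // Map the sensor's analog signal onto one of the pre-made wall-slice heights:
    // a strong signal means the wall is close, so pick a tall (near) slice.
    public static int sliceFor(float sensorSignal, int numSlices) {
        float distanceFraction = 1f - Math.max(0f, Math.min(1f, sensorSignal));
        return Math.round(distanceFraction * (numSlices - 1));
    }
}
```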




Pros:
  • Would significantly decrease the number of impact sensors and holo onscreen needed to achieve a higher res.
  • A lot fewer wires flying around the level.
  • Would give a full 0-100% range in terms of scale and shading and/or fog.

Cons:
  • Pistons are extremely unstable. They have a tendency to just break apart and go crazy.
  • Movers are unreliable - it's impossible to get enough speed for the bouncing bullet to follow the player well.
  • Even if it were, impact sensors fail at high speeds.
  • Both are prone to ray misses or other eccentricities when rapidly approaching a nearer wall:


Figure 6: An example of a piston based ray. If I were to move the ray to the left quickly, it would ignore the truly nearest wall. There are possible workarounds but they end up putting more stress on the game engine than they're worth.

Part 4: Creating The Player Chassis
(This will NOT cover player-wall collision detection.)
 
As important as ray casting itself is the ability to move around in the world. The fact that I need to glue the ray routines to the chassis posed some challenges of its own. 

Figure 7: Basic view of the chassis (right) and a controllinator seat

But it's not too complicated. For the samples I used cardboard for the chassis, but you can use whatever material you wish. Essentially, we have one piece to which all view rays will be glued (this is their origin). This will receive all rotational and movement input from a controllinator. In my sample I use an advanced mover with local space ON to provide full forward, backward, and strafing movement like in a typical first person shooter. I also use the right stick wired to an advanced rotator to provide turning.

The problem is, if this were left as is, the center of rotation would NOT be the origin of the rays but the center of the entire glued mass - which would give VERY odd results. So, to keep the rotation confined to the origin, I use another mass of material and bolt it to the ray-origin material. On it sit a very strong gyroscope and anti-gravity so it does not fall or rotate, and now the ray-origin disc will rotate properly.

Figure 8: The logic behind the chassis. Very simple but necessary to maintain properly centered rotation.

Part 5: Putting It All Together!

OK, so now you have a single render column, a view ray, and an apparatus to move the player around. We need to copy the column/ray pairs to suit the resolution you want, glue all the rays to the player eye, and rotate them in increments to give us a nice perspective effect. Daunting? Well, here are some tips:
  • ALWAYS have grid snap on when you're creating your column and ray and make sure your ray and column each fit neatly on the grid. This will allow you to copy and paste it along the grid without problems.
  • If doing it all manually, make use of angle snap. In the sample I have a 90-degree FOV, so I use a 45-degree snap on the very first and very last ray. Then I just went in halves and filled in what was too tight for angle snap by eye (the target angles are sketched out in code after this list). So after putting down the very first and last rays to make 90 degrees, I'd split the ray group in half; the one(s) in the middle would be (around) 0 degrees, and I'd glue that in. Then I'd split each half in half, and the one in the middle would be the halfway point between whatever was already glued.
  • Rotational speed on bolts and rotators refers to how many degrees of rotation it will spin per second. You can create a jig that rotates at regular intervals to give you exact and precise angle between rays.
  • With a little bit of work in the column department, you can create a system that automatically creates the rays and columns for you (you'd just have to glue it all afterwards). Keep in mind that the speed of a mover is the number of small grid units it will move per half-second. You can create something that moves along the bottom of your intended frame at regular intervals and that also contains a tag - maybe "Column Anchor". Then, on your column, you'd need to glue a little base to the display portion and give it some logic to follow this "Column Anchor" and deactivate the follower as soon as it's on top of it. Also add a very strong gyroscope pointing straight up. Then have a small piece of material that rotates and emits the column/ray pairs from its center at regular intervals. You could end up with a full and precise array of columns and rays like this, at the cost of a little extra logic per ray. Just remember to glue it to the base afterwards.
  • Keep an eye on the thermometer. Even though an individual column/ray combo takes up almost nothing on the meter, adding a ton of pairs will start to add up fast, especially if you're using the overlayed method. If you're using a series of emitters to emit pre-made heights or you're using dynamic slices, it may be different.
  • If your wall slices all have their heights overlayed internally at all times, creating too high a resolution may break the impact sensors. Be careful and if you're doing it all by hand, do a quick function test before you start glueing.
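The target angles mentioned in the angle-snap tip above are just an even spread across the FOV. In plain code (ray count and FOV are whatever your build uses; the sample's 90-degree FOV over 45 columns is one example):

```java
public class RayFan {
    // Evenly spaced ray angles spanning the FOV, relative to the view direction.
    // e.g. rayAngles(90f, 45) steps from -45 to +45 degrees in increments of 90/44.
    public static float[] rayAngles(float fovDegrees, int rayCount) {
        float[] angles = new float[rayCount];
        for (int i = 0; i < rayCount; i++) {
            angles[i] = -fovDegrees / 2 + fovDegrees * i / (rayCount - 1);
        }
        return angles;
    }
}
```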
Part 6: Different Colors and Textures

What's that? Getting bored of having walls in just one color? Well, there are ways around that. For example, instead of just using "WALL" we could expand on that impact sensor and go something like "if (collides with wall) { collides with blue wall? collides with red wall? }", as shown here.
Figure 9: Example logic to handle multiple colored walls. First there is an "Impacts Wall" check, which will then activate a microchip containing checks for each type of wall. Those are only evaluated if the "Impacts Wall" is true.

This is mainly useful with the shish kebab method; if you only have one sensor piece per ray, you don't need to check for "WALL" first. I am not sure if doing this is more or less efficient than just checking for both red and blue right away, but I have a feeling it should be better to do an extra check on the hit than N checks on everything, hit or miss. Imagine you have 5 different colors - that's 5 different impacts you'd have to check for at each link of the kebab. Mmmm. I love kebab.
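For what it's worth, the bookkeeping behind that hunch as a trivial sketch (C colors, N kebab links):

```java
public class SensorCount {
    // Flat checks evaluate C color sensors at every link, hit or miss.
    public static int flatChecks(int colors, int links)   { return colors * links; }
    // Nested checks evaluate 1 "WALL" sensor per link, plus C color checks only on the hit link.
    public static int nestedChecks(int colors, int links) { return links + colors; }
}
```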

And ... textures? Did I say textures? Yes I did. As mentioned, games like Wolfenstein 3D used a uniform grid to test their rays against, which also made it very easy to do uniform texture mapping... we can translate some of this to LittleBigPlanet 2.

One way you could accomplish it is to create a massive square of your desired texture in game,  and then scale it down for each slice and cut the slice down to size, trying to keep the same general look in each column but scaled down. This works best for textures that would tile well from column to column, like bricks.


Though there's another way that would be better and more open-ended, albeit a little more work...

This theory requires either the PS Eye or ample time to draw by hand in the game - of which I have neither, so I could not test it yet. If anyone has a PS Eye and would like to help me implement a texture-mapped sample, let me know!

First off, let's take a texture. To make things really easy, I'm gonna take one that can tile well in vertical stripes - a brick texture from cgtextures.com. (I'll also explain how to do things for textures that wouldn't tile well like that, for the masochists amongst us)
Figure 10: Basic base texture of a brick wall

Now, let's take a slice that we could tile onto the walls. 
Figure 11: Cropped out what will map to a single column in the renderer

And now we resample this for every height in our depth resolution. (Vertically Only)

Figure 12: A full mipmap chain for a single column for each distance. They are only scaled vertically.

We can then take this into the game via the PS Eye. Remember that black is fully transparent when stickered onto holographic material, use this to create translucency in your sprites and textures.

What about slices that wouldn't neatly tile horizontally? What if you want to sample the texture horizontally? Well, then you'd need to do the above process, but also at uniform points horizontally on the texture... which could turn into a daunting task. For example, you'd have to make stripes of the full res texture, and then create a full mipmap chain of each stripe for your renderer's depth resolution. And that's not even the hard part...

There are many ways I could fathom to read in texture coordinates, but the one I think would be easiest is to maintain a grid-like style to your level as in Wolfenstein 3D, and have each cell have a separate portion and tag for UP, LEFT, RIGHT, DOWN, so you know which side of the wall you hit. Then put little tags at the corners and a tag sensor on your ray, and read the distance to the appropriate corner tag. Then feed the tag sensor into a sequencer so you can map its analog value to a particular slice of the texture. This, however, is a massive feat and would require a lot of effort and holographic material; it might be best tackled by emitting each particular slice as it's actually needed, as I'm sure it would be extremely unstable if it were all overlaid at once. I'm sure this would work. I know this all sounds very abstract, but I cannot test it, so I can't detail a full implementation. Again, if anyone has a PS Eye and is willing to help me, let me know!
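The sequencer mapping at the end would effectively be computing something like the sketch below (the cell size and stripe count are placeholders): the corner-tag distance becomes a 0-1 coordinate along the wall face, which picks one of the pre-made vertical stripes of the texture.

```java
public class TextureLookup {
    // Distance along the wall face (0..cellSize) -> which vertical texture stripe
    // the hit column should display.
    public static int stripeFor(float distanceFromCorner, float cellSize, int stripeCount) {
        float u = Math.max(0f, Math.min(1f, distanceFromCorner / cellSize));
        int stripe = (int) (u * stripeCount);
        return Math.min(stripe, stripeCount - 1);
    }
}
```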

Part 7 - Backgrounds, Floors, Ceilings

You might notice something about Wolfenstein 3D - the floors and ceilings are a solid color. As I said, the horizon point is always in the center of the screen, so you can just freely split the screen in half: floor and ceiling. Years ago when I coded my own raycaster (which I might provide vids and source code of - it's still backed up), every frame I would blit a bitmap of a solid-color floor and ceiling, but they also had a black gradient applied to tie in with my distance-attenuation shading. It's another great help to the 3D illusion.

In my LBP2 raycaster sample, I just used 2 pieces of blue wood (it's very neutral when colored) and gave them a light hue and used the black gradient tool to aid in the illusion. They are always behind the walls being drawn.
Figure 13: The background of the LBP2 ray casting sample

They are dark enough not to produce too many artifacts on the "shaded" walls (remember, their shading is just them being translucent and showing more black), but light enough that you notice when there isn't a wall on top of them. If you were inclined to make colored walls and floors, you could just draw them in for each height slice (or, if you're using dynamic slices, make secondary barriers that rest on the desired view frame and do the same filling technique but for ceiling and floor colors - filling in between the frame and the moving boundaries).

Part 8 - Character and Object Sprites... are a problem.

Tip: Remember that black is totally transparent on holo. Use black backgrounds behind your characters and objects to make a clipping mask.

Of course there's no fun in just running around neat hallways. You need something to shoot at! Unfortunately, there are problems here. It is easy enough to create the sprites themselves; just like creating a slice of the wall per distance, make a scaled copy of the sprite to fit each distance. One problem is clipping against the walls; there isn't any. Sprites would always appear to be drawn over walls no matter what; there is no way to clip them like in Wolfenstein 3D. In that game, when the walls are rendered, something called a depth buffer is also created, and all game sprites are clipped against it so they are not drawn over walls they're behind - and most certainly not outside the rendering frame, as would be the case in LBP2.

As mentioned, you could create the scaled sprites easily enough. For sprites you would NOT be using the same rays or the same ray casting technique as for walls; you'd just use a simple tag sensor on the EYE to detect the distance, and maybe hook it up to a (positional) sequencer to select the appropriate scaled sprite.

For the position on the screen, I suppose you could try to emulate the inverse camera matrix multiplication done in Wolfenstein 3D, but I'm not sure how effective that would be. I think one way would be to make a solid arc encompassing the full FOV that detects intersection with the tag "ENEMY", and have a tag sensor on one corner reading how far from the left or right side the ENEMY is, then use that to manipulate pistons holding the enemy sprites. But again... it's a bit of a pain to get that working, if it can be done.
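For reference, the inverse-camera-transform idea amounts to something like the sketch below - ordinary code, not LBP2 logic, and not something I have built: convert the sprite's world position to an angle and distance relative to the eye, map the angle across the FOV for the horizontal screen position, and scale inversely with the corrected distance.

```java
// Sketch of sprite projection in a Wolfenstein-style renderer.
public class SpriteProjector {
    /** Returns { screenX, spriteSize, distance }, or null if outside the view cone. */
    public static float[] project(float eyeX, float eyeY, float eyeAngle,
                                  float spriteX, float spriteY,
                                  float fov, int screenWidth, float screenHeight) {
        float dx = spriteX - eyeX;
        float dy = spriteY - eyeY;
        float dist = (float) Math.sqrt(dx * dx + dy * dy);
        // Angle of the sprite relative to the view direction, wrapped to [-pi, pi]
        float rel = (float) Math.atan2(dy, dx) - eyeAngle;
        while (rel > Math.PI) rel -= 2 * Math.PI;
        while (rel < -Math.PI) rel += 2 * Math.PI;
        if (dist <= 0f || Math.abs(rel) > fov / 2) return null;
        float screenX = (rel / fov + 0.5f) * screenWidth;           // 0 .. screenWidth
        float size = screenHeight / (dist * (float) Math.cos(rel)); // same scaling as walls
        return new float[] { screenX, size, dist };
    }
}
```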
Another possible option is to have a smaller resolution of solid rays (Not linked like the shish kebab method) and have fixed horizontal coordinates where enemy sprites could appear, and activate the appropriate scale still using the "distance from the eye to ENEMY" technique. Of course this would mean you could only effectively determine one enemy / game sprite at a time... 

Which means yet another option is to use a very low-res shish kebab method for sprites, and use that to determine horizontal coordinates AND scale, without the "distance from EYE to ENEMY" technique. I'd say just use your existing shish kebab, but then you have the problem of duplicate hits, and you'd end up with blurred sprites. That's why the resolution has to be... low. Like anything there are workarounds, but it's all a matter of whether it's worth it, whether it's too much stress on the engine, etc.

All in all, I'd say game sprites in a full-360-degree-freedom, holographic-material-based ray casting engine are just too problematic to implement effectively in LBP2.

Part 9 - Conclusions

So, we can get some pretty interesting pseudo-3D effects in LBP2 and I hope this has helped you understand how I've been going about it. There are some limitations with the engine that prevent it from being too great. Limits with how much holographic material can be onscreen and/or overlapping, limits with how many impact sensors can be active in the same space, etc. put great constraints on how much detail you can squeeze out of your image. 

Using emitter based techniques to draw may help squeeze more real estate, but I doubt I will ever reach my target resolution of 320 x 280 pixels. More research will be highly necessary, and I'm sure I could get a better resolution than I have now. It's all a matter of pushing a little bit at a time to make sure it still works. The sample's resolution is 45 x 22 if I remember correctly, but this was with all the columns being overlaid and not emitted, putting great strain on the engine. I will eventually experiment with emitter based techniques to improve the quality and reduce engine load, but for now the sample will do as a proof of concept and I'm sure this article will also aid its cause. 

Note that there is also a delay to the logic computations in LBP2. You can notice a lag in the fill rate due to the nature of the signals passing down the chain in the shish kebab method.

The techniques and methods here need not be limited to this type of rendering. For example, you can use the ray cast technique to simulate an enemy's line of sight. I will also post a detailed look at an offshoot of this rendering technique, my pseudo-3D dungeon crawler, which easily supports game sprites. I hope you enjoyed this read and look forward to seeing what everyone will come up with. :)