3D API Overview - 1.3.2 Beta

Simeon Admin Mod
edited March 2012 in General Posts: 4,958

PLEASE NOTE THE CODE IN THIS THREAD WILL NOT WORK IN CODEA 1.3.1

I've opened this thread up to everyone so people can examine and comment on the new 3D stuff.

There is no documentation for this stuff yet. So this thread will serve as the beta documentation for now.

You can grab a sample 3D scene here: http://twolivesleft.com/Codea/Projects/3D_Test.codea

Source code on pastebin: http://pastebin.com/gJ6CxXD2


Basic Matrix Control

multMatrix( m )

Multiplies matrix m against the current model matrix.

modelMatrix()
modelMatrix( m )

Called with no parameters, returns the current model matrix. When called with a matrix parameter modelMatrix sets the current model matrix to the specified matrix. Defaults to identity.

projectionMatrix()
projectionMatrix( m )

Called with no parameters, returns the current projection matrix. When called with a matrix parameter projectionMatrix sets the current projection matrix to the specified matrix.

The default projection is an orthographic projection with its origin at the lower left of the screen, extending WIDTH points across and HEIGHT points up to the upper right corner of the screen.

viewMatrix()
viewMatrix( m )

Called with no parameters, returns the current view matrix. When called with a matrix parameter viewMatrix sets the current view matrix to the specified matrix. Defaults to identity.
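
A quick sketch of how these fit together inside draw() (an illustration, not official documentation; the sprite name is just an example):

m = modelMatrix() -- save the current model matrix

multMatrix( matrix():rotate( 45, 0, 1, 0 ) ) -- rotate the world 45 degrees about the Y axis

sprite( "Planet Cute:Character Boy", 0, 0 ) -- drawn with the combined transform

modelMatrix( m ) -- restore the saved matrix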


Basic Types

matrix

Represents a 4x4 matrix (column major). Supports the following operations:

m1 = matrix() -- creates an identity matrix

m2 = matrix( 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 )

m = m1 * m2 -- matrix multiplication

m = m1 * 5 -- scalar multiplication

m = m1 / 5 -- scalar division

m = m:rotate( angle, x, y, z )

m = m:translate( x, y, z )

m = m:scale( x, y, z )

print( m )

vec4
vec3

vec4 is new. vec3 now supports most common operations that vec2 supports.
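
For example (a sketch; the exact method set in this beta may vary, and vec3 normalize is reported missing later in this thread):

v = vec3( 1, 0, 0 ) + vec3( 0, 1, 0 ) -- component-wise addition

d = vec3( 1, 0, 0 ):dot( vec3( 0, 1, 0 ) ) -- 0

c = vec3( 1, 0, 0 ):cross( vec3( 0, 1, 0 ) ) -- (0, 0, 1)

l = v:len() -- length, roughly 1.414

p = vec4( 1, 2, 3, 1 ) -- a point in homogeneous coordinates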


View Control

perspective()
perspective( fov, aspect, zNear, zFar )

This sets the projection matrix to the perspective projection defined by the given parameters. If called with no parameters it defaults to fov=45 degrees, aspect=WIDTH/HEIGHT, zNear=0.1, zFar=(HEIGHT/2) / tan( pi * 60 / 360 ) * 10.

ortho()
ortho( left, right, bottom, top, near, far )

This sets the projection matrix to the orthographic projection defined by the given parameters. If called with no parameters it defaults to ortho( 0, WIDTH, 0, HEIGHT, -10, 10 ).

camera()
camera( eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ )

This sets up the view matrix to emulate a camera positioned at eye and pointing at center, with an up-vector defined by up. Called without parameters it defaults to camera( 0, 0, -10, 0, 0, 0, 0, 1, 0 ).
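
Putting the three together, a minimal 3D draw() might look like this (a sketch using the functions above):

function draw()
    background( 40, 40, 50 )

    perspective( 45, WIDTH/HEIGHT ) -- sets the projection matrix

    camera( 0, 0, -10, 0, 0, 0, 0, 1, 0 ) -- sets the view matrix

    -- model matrix transforms (translate/rotate/scale) and mesh drawing go here
end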


Comments

  • Posts: 2,161

    Is this in testflight yet? I haven't received a notification if it is (it keeps asking if I want to install Codea 1.3.1 (15))

  • Simeon Admin Mod
    edited March 2012 Posts: 4,958

    It's 4am and I still can't get the build uploaded to TestFlight. It's driving me crazy — Xcode 4.3 and iCloud entitlements are causing all sorts of issues.

    In the worst case I'll have to recompile it on my laptop tomorrow morning using the older version of Xcode. That shouldn't have issues.

    Also every attempt to upload the build is taking 15 minutes. The internet connection here is pretty poor.

  • Simeon Admin Mod
    Posts: 4,958

    Finally sorted it out. It should be available now.

  • Posts: 2,161

    Downloading as I type ... Yippee!

  • Bortels Mod
    Posts: 1,557

    Neat! Got the demo, pretty damn cool. Time to mess with things. :-)

  • Posts: 159

    Looks really cool, @Simeon! I'm just getting into learning OpenGLES, so this kind of ties in pretty nicely :)

I'm not sure if the following questions are tangential to what you want to discuss here, but I figured this was likely the best place to ask.

I've just drawn a cube using a mesh with vec3 vertices, and it worked perfectly. Is there any way to texture the different faces of something like this at present? It seems addRect / setRectTex / etc. only support 2D meshes. This may well be beyond the scope of what you want the API to provide.

    Also, is there any way (presumably by using a different shader) that you could provide the ability to add / position light sources? Again, might be outside the scope of what you're going for, but of course the cube I created is just rendered in a flat colour. As I said, I'm just learning OpenGLES, so I'm not sure how difficult / possible this is, or if I'm even asking the wrong question entirely! :)

  • Simeon Admin Mod
    Posts: 4,958

    You can texture them by setting the texture coordinates of each vertex (mesh.texCoords). The addRect API is 2D at the moment. I'm not sure if we'll take that into 3D — maybe addQuad with vec3 arguments would be more suited.

    We haven't exposed GLSL shader functionality. But it's something I'd like to do — definitely not this update though. At the moment you can simulate vertex lighting by computing the normals for each vertex and then computing the light intensity for each vertex given a particular light position. You'd use this to modulate the vertex colours to simulate light.
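
    A rough sketch of that idea (not an official API; it assumes verts and normals are tables of vec3, lightPos is a vec3 of your choosing, and that vec3 supports division by a scalar):

    function litColors( verts, normals, lightPos )
        local cols = {}
        for i, v in ipairs( verts ) do
            local toLight = lightPos - v
            toLight = toLight / toLight:len() -- normalise by hand
            -- diffuse term: clamp the cosine of the angle to [0, 1]
            local intensity = math.max( 0, normals[i]:dot( toLight ) )
            cols[i] = color( 255*intensity, 255*intensity, 255*intensity, 255 )
        end
        return cols
    end

    -- then modulate the mesh colours: myMesh.colors = litColors( verts, normals, lightPos )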

  • edited March 2012 Posts: 159

    Thanks @Simeon - setting texCoords makes perfect sense! Your description of lighting lost me at 'normals' (I know of them, but not really what they are) - I'll have to read up. :)

    Anyways, I made a little cube:

  • Bortels Mod
    Posts: 1,557

Hmm - exposing my ignorance here. I thought that GLSL let you program the lighting model (among other things) - but that the basic lighting models (point sources, ambient, and so on) were "cooked in" - i.e. there was, for example, a GLSL chunk of code for "ambient" you would simply reference. (This may be my old-school-before-shader-pipelines ignorance showing.)

    Even if we can't do our own GLSL, a basic lighting model would be very handy - I don't know how far flat-shaded 3D will go...

  • Simeon Admin Mod
    Posts: 4,958

Basic lighting models are no longer built in to OpenGL ES 2 (they were in OpenGL ES 1). It's very easy to emulate the standard fixed vertex lighting model, but rather than us doing so (and designing an API around it), I feel we would better spend that time implementing a really nice GLSL UI and API.

  • edited March 2012 Posts: 159

    Expanded to a basic 3D tilemap:

  • Posts: 2,820

Yay. This is awesome. Have you thought about just putting this in 1.4 and adding other features too? This is a giant feature in my opinion...
    Thank you so much!

  • Posts: 159

    I presume backfaces are not drawn? I tried adding translucent water blocks, but you can only see through them from one direction.

  • Simeon Admin Mod
    Posts: 4,958

    We'll have to add a flag to enable back face rendering.

  • Posts: 196

    @Simeon - Oh this is awesome guys :D

Can't wait for it to be released to try it out on my code!
    If all goes well I should see a nice framerate boost :D

    If you could get back face culling as a switchable flag it would be nice :P

    Cheers,

    Xavier

  • edited March 2012 Posts: 2,820

Holy s***! I just looked at the code, and BAM! even I can understand it. THANK YOU SO MUCH! It was thought out just the right way, which also limits your work. If you could eventually add the ability to add a light source and turn lighting on and off, that would be nice, but that also means adding an object's luster parameters. YIPPEE! Life just got much more exciting now that I can really comprehend your method. And @frosty: out of curiosity, I just want to see some of your code. Your stuff looks cool.

  • Posts: 2,161

I want to see frosty's code too, because I don't have this working yet. I'm trying to get my head round the matrices, but when I print them I get weird stuff: some entries are even NaNs.

  • Simeon Admin Mod
    Posts: 4,958

    Are you able to share some code that generates NaNs?

  • Posts: 2,161
    function setup()
    end

    function draw()
        if not done then
            print(viewMatrix())
            done = true
        end
    end
    

produces a 4x4 matrix with NaNs in the 4,1 and 4,2 slots (row, column), a 1 in the 1,1 slot, and 0 everywhere else.

  • Simeon Admin Mod
    Posts: 4,958

    Thanks @Andrew — looks like it might be a bug when returning the matrix.

  • edited March 2012 Posts: 159

Here's my cubemap code. I'm not doing anything particularly unusual, just using meshes with 3D coordinates and transforming in 3 dimensions when drawing. @Simeon Am I correct in thinking the normal scale, translate, rotate methods simply transform the modelMatrix? I'm still very new to this stuff!

    function setup()
        displayMode(STANDARD)
    
        parameter("Size",50,500,200)
        parameter("CamHeight", 0, 1000, 300)
        parameter("Angle",-360, 360, 0)
        
        -- all the unique vertices that make up a cube
        local vertices = {
          vec3(-0.5, -0.5,  0.5), -- Left  bottom front
          vec3( 0.5, -0.5,  0.5), -- Right bottom front
          vec3( 0.5,  0.5,  0.5), -- Right top    front
          vec3(-0.5,  0.5,  0.5), -- Left  top    front
          vec3(-0.5, -0.5, -0.5), -- Left  bottom back
          vec3( 0.5, -0.5, -0.5), -- Right bottom back
          vec3( 0.5,  0.5, -0.5), -- Right top    back
          vec3(-0.5,  0.5, -0.5), -- Left  top    back
        }
    
    
        -- now construct a cube out of the vertices above
        local cubeverts = {
          -- Front
          vertices[1], vertices[2], vertices[3],
          vertices[1], vertices[3], vertices[4],
          -- Right
          vertices[2], vertices[6], vertices[7],
          vertices[2], vertices[7], vertices[3],
          -- Back
          vertices[6], vertices[5], vertices[8],
          vertices[6], vertices[8], vertices[7],
          -- Left
          vertices[5], vertices[1], vertices[4],
          vertices[5], vertices[4], vertices[8],
          -- Top
          vertices[4], vertices[3], vertices[7],
          vertices[4], vertices[7], vertices[8],
          -- Bottom
          vertices[5], vertices[6], vertices[2],
          vertices[5], vertices[2], vertices[1],
        }
    
        -- all the unique texture positions needed
        local texvertices = { vec2(0.03,0.24),
                              vec2(0.97,0.24),
                              vec2(0.03,0.69),
                              vec2(0.97,0.69) }
                    
        -- apply the texture coordinates to each triangle
        local cubetexCoords = {
          -- Front
          texvertices[1], texvertices[2], texvertices[4],
          texvertices[1], texvertices[4], texvertices[3],
          -- Right
          texvertices[1], texvertices[2], texvertices[4],
          texvertices[1], texvertices[4], texvertices[3],
          -- Back
          texvertices[1], texvertices[2], texvertices[4],
          texvertices[1], texvertices[4], texvertices[3],
          -- Left
          texvertices[1], texvertices[2], texvertices[4],
          texvertices[1], texvertices[4], texvertices[3],
          -- Top
          texvertices[1], texvertices[2], texvertices[4],
          texvertices[1], texvertices[4], texvertices[3],
          -- Bottom
          texvertices[1], texvertices[2], texvertices[4],
          texvertices[1], texvertices[4], texvertices[3],
        }
        
        -- now we make our 3 different block types
        ms = mesh()
        ms.vertices = cubeverts
        ms.texture = "Planet Cute:Stone Block"
        ms.texCoords = cubetexCoords
        ms:setColors(255,255,255,255)
        
        md = mesh()
        md.vertices = cubeverts
        md.texture = "Planet Cute:Dirt Block"
        md.texCoords = cubetexCoords
        md:setColors(255,255,255,255)
        
        mg = mesh()
        mg.vertices = cubeverts
        mg.texture = "Planet Cute:Grass Block"
        mg.texCoords = cubetexCoords
        mg:setColors(255,255,255,255)   
        
        -- currently doesn't work properly without backfaces
        mw = mesh()
        mw.vertices = cubeverts
        mw.texture = "Planet Cute:Water Block"
        mw.texCoords = cubetexCoords
        mw:setColors(255,255,255,100)
        
        -- stick 'em in a table
        blocks = { mg, md, ms }
        
        -- our scene itself
        -- numbers correspond to block positions in the blockTypes table
        --             bottom      middle      top
        scene = {   { {3, 3, 0}, {2, 0, 0}, {0, 0, 0} },
                    { {3, 3, 3}, {2, 2, 0}, {1, 0, 0} },
                    { {3, 3, 3}, {2, 2, 2}, {1, 1, 0} } }
            
    end
    
    function draw()
       -- First arg is FOV, second is aspect
       perspective(45, WIDTH/HEIGHT)
    
       -- Position the camera up and back, look at origin
       camera(0,CamHeight,-300, 0,0,0, 0,1,0)
    
       -- This sets a dark background color 
       background(40, 40, 50)
    
       -- Make a floor
       translate(0,-Size/2,0)
       rotate(Angle,0,1,0)
       rotate(90,1,0,0)
       sprite("SpaceCute:Background", 0, 0, 300, 300) 
    
        -- render each block in turn
        for zi,zv in ipairs(scene) do
            for yi,yv in ipairs(zv) do
                for xi, xv in ipairs(yv) do
                    -- apply each transform we need: rotate, scale, translate to the correct place
                    resetMatrix()
                    rotate(Angle,0,1,0)
                    
                    local s = Size*0.25
                    scale(s,s,s)
                    
                    translate(xi-2, yi-2, zi-2)    -- renders based on corner
                                                   -- so -2 fudges it near center
                    
                    if xv > 0 then
                        blocks[xv]:draw()
                    end
                end
            end
        end
    end
    
  • Simeon Admin Mod
    Posts: 4,958

That's correct @frosty. They have actually always transformed the model matrix; we just never exposed the model matrix directly before.
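
    For example, these should be equivalent (a sketch; the exact multiplication order isn't documented yet):

    translate( 10, 0, 0 )

    -- is effectively

    multMatrix( matrix():translate( 10, 0, 0 ) )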

  • Simeon Admin Mod
    Posts: 4,958

By the way, I really want to try adding ambient occlusion (by darkening the "inner" vertices) to your blocks. I think that would look great.

  • edited March 2012 Posts: 159

    Nice idea! I just had a quick go and ended up with this:

    Obviously not proper occlusion or anything, but hey :)

  • Posts: 2,161

    Simeon: Ah, okay. That might explain why things didn't work well when I tried setting the matrix to the value that I got out.

Since the discussion prior to this being implemented started out as something completely different, and quickly degenerated into a fight about matrices(!), I'd like to share a couple of links about the process that might be useful to others, and so that I can refer to them in my questions.

    Firstly, from http://www.opengl.org/resources/faq/technical/transformations.htm we have the explanation of what's going on "under the bonnet" (aka hood).

    • Object Coordinates are transformed by the ModelView matrix to produce Eye Coordinates.

    • Eye Coordinates are transformed by the Projection matrix to produce Clip Coordinates.

    • Clip Coordinate X, Y, and Z are divided by Clip Coordinate W to produce Normalized Device Coordinates.

    • Normalized Device Coordinates are scaled and translated by the viewport parameters to produce Window Coordinates.

    Here's a link for those who like formulae (not me - I like armadillos^H^H pictures) http://www.songho.ca/opengl/gl_projectionmatrix.html. If, like me, you prefer pictures then the workflow is in an image on this page: http://www.songho.ca/opengl/gl_transform.html. I found that very useful.

    Okay, so what does all of this mean?

We start by defining an object in 3-space, by specifying a bunch of 3-vectors as coordinates. These are the object coordinates. We don't assume that we are standing at a particular point in space, so once our object is defined we are free to declare that we are looking at it from a particular place. This choice defines a new set of coordinates with the eye at the centre, so that up is up, left is left, and so on. The coordinates of our object in this system are the eye coordinates.

    Now comes the magic bit. We want to draw what we see on a piece of paper (the iPad screen). So we hold up a piece of glass and draw on it exactly what we see through it. But we have a fair amount of choice as to how we hold up that piece of glass: is it orthogonal to us, skewed, rotated, how far away? The clip coordinates reorient space so that the glass is in a standard position and it is space that is skewed (don't worry, it's all relative: no 3D shapes are actually harmed in this process). Now the coordinates are projected onto this piece of glass, and finally drawn on the screen.

    Now, what do we have control over? We have control over the first two steps in this. We can control how we are looking at the scene (the "eye coordinates") and where we are holding the sheet of glass (the "clip coordinates"). What is important here is to define the transformations between them. To control the first, we set the Model and View matrices. To control the second, we set the Projection matrix.

There are two further issues. The first is that we actually have two matrices involved in the first step: the Model and the View matrices. The reason for this is that you often don't just want to transform the coordinates, you want to transform the normal vectors as well, and they transform slightly differently. Separating the two allows this control. This is used in computing lighting effects.

The other issue is that the matrices involved are 4x4 matrices, but the initial coordinates are 3-vectors. They are promoted to 4-vectors by appending a 1 at the end, so (x,y,z) becomes (x,y,z,1). Reading around, I've seen a lot of stuff about projective space and homogeneous coordinates. Whilst it isn't completely unrelated, it's really a load of high-falutin' nonsense. What really matters is that this promotion allows the computer to do a transformation involving a translation by a single matrix multiplication.
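
    Concretely, appending that 1 is what lets a single 4x4 matrix do a translation:

    | 1 0 0 tx |   | x |   | x + tx |
    | 0 1 0 ty | * | y | = | y + ty |
    | 0 0 1 tz |   | z |   | z + tz |
    | 0 0 0 1  |   | 1 |   |   1    |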

    There's probably lots of bits here that aren't quite right, or are a bit vague. But I find I understand stuff better if I try to explain it (for example, I hadn't twigged about how the Model and View matrices interacted). I'm going to experiment to see what happens. As I find out stuff, I'll report back.

  • Posts: 88

@Simeon: would it be possible to also get some 3D drawing primitives, like point(x,y,z) for example? A kind of fast meshgrid generator would also be nice. Just some ideas for drawing 3D math functions or making geometric 3D drawings with hidden faces, hidden lines, ... point(x,y,z) would also be nice for fluid dynamics simulations, something almost impossible today due to lack of speed. Doing the preprocessing on the GPU would speed that up significantly.

  • Simeon Admin Mod
    Posts: 4,958

@Andrew_Stacey the reason for the separation of Model and View is so that the viewpoint can be changed without altering the model matrix. It's not strictly necessary; you can do the whole thing in the Model matrix if you prefer — e.g. you might use translate in Codea to shift the whole scene.

View is just there for convenience: camera() writes to it, but other than that it's up to you how it's used. If you ignore it, it doesn't affect anything.

    Here's how the modelViewProjection matrix is computed:

    modelViewProjection = fixMatrix * projectionMatrix * (viewMatrix * topOfModelMatrixStack)
    

    Ignore fixMatrix — it's an orthographic unit projection that is only used when screen recording comes into effect to invert rendering on the Y axis (necessary because CoreVideo uses flipped texture data).

    modelViewProjection is multiplied against each vertex in your scene.

  • edited March 2012 Posts: 2,161

    This is so good! I'm still figuring out all of how it works, but I've managed to get something to render on the screen at last.

    (How does one embed YouTube videos here?)

    I'm going to have to rip out the innards of my shape explorer code to integrate this.

  • Posts: 196

    @Andrew_Stacey - just type the unmodified youtube.com link

  • Simeon Admin Mod
    edited March 2012 Posts: 4,958

    The best way is actually to choose "Share" in YouTube, then "Embed", then choose "Old Embed Code" with a resolution of 640x480 — if you do that the video comes out nicely sized.

    If you just type the link these forums squish the video for some reason.

    By the way @Andrew - next build fixes those matrix bugs. It just wasn't copying them into Lua properly when accessed.

  • edited March 2012 Posts: 2,820

    Like this:

    <object width="640" height="480"><param name="movie" value="http://www.youtube.com/v/Yk74hJj4sEE?version=3&amp;hl=en_US"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/Yk74hJj4sEE?version=3&amp;hl=en_US" type="application/x-shockwave-flash" width="640" height="480" allowscriptaccess="always" allowfullscreen="true"></embed></object>

  • Posts: 146

Looks very nice so far. May I suggest something like gluUnProject()? It's handy for all sorts of things, like extracting the viewing frustum or finding an object in 3D space from a screen coordinate.

  • Simeon Admin Mod
    Posts: 4,958

    True, unProject would be a good one, @gunnar_z. And easy enough for us to include.

    @Andrew_Stacey that looks really cool, can't wait to see what you do with it.

    @Zoyt good embed code except you should choose the 640x480 size option — it fits the forum size fairly nicely.

  • edited March 2012 Posts: 2,820

And by the way @Andrew_Stacey - it looks almost 4-dimensional with the colors, no shading, and no face edges. And alright Simeon, fixed it.

  • edited March 2012 Posts: 2,820

And here's a fun little animation you might want to check out, just because I'm so excited about Codea 3D. Really sorry the video part doesn't work on iOS, since it doesn't support autoplay.
You can find it on my blog here.

  • Bortels Mod
    Posts: 1,557

    Nice demo, @Zoyt - the text effects are very old-school demoscene.

    But - 27 lines? With the other demos embedded?

    I'd like to see them. :-)

  • Posts: 159

    Er, @Zoyt, in my browser (Safari) your animation is fixed to the top left of the window and covers pretty much all of the content in this thread :/

  • edited March 2012 Posts: 2,820

@Bortels: Sorry, I meant the video. Sadly, it's tons and tons of lines of code (actually I did most of the generating in Hype). Sorry @frosty, mine was zoomed in so it looked like it was a little displaced, but not all the way. If someone could help me figure out how to make this code work without that bug, that would be nice.
    <object width="600" height="600"><embed src="http://flurry.name/zoyt/codea/Codea%203D%20Thanks/Codea%203D%20Thanks.html" width="600" height="600" allowscriptaccess="always" allowfullscreen="true"></embed></object>
You can play around with it in my junked-up and really old thread here. It has no value and I don't care if you do something to it.
    Thanks!
P.S. Sorry the videos don't work on iOS; they don't support autoplay.

  • Posts: 2,161

    I was just about to ask for what @gunnar_z suggested!

I think I might want a little more, though. Here's the scenario: I have a load of shapes in 3D space which I render onto the screen via this method. The user then touches one, and that shape interacts with the touch in some fashion. To do this, I need to translate between the touch and the object in the Codea program, and I think I need to be able to do it in both directions.

    Firstly, I need to figure out which object has been touched. This might be tricky with gluUnProject, because that takes a 3-vector in screen coordinates to a 3-vector in space, but I only have a 2-vector: the touch coordinates. That doesn't correspond to just one point in 3-space but to a line. So what I want to do is loop through my objects, extract their screen coordinates, compare them with the touch coordinate, and figure out which was touched from that. For that, I need a code-level gluProject (or whatever it is called; it should return a 3-vector whose x,y coordinates are the screen coordinates and whose z coordinate is the depth).

    Then, when I've figured out which object is being used, I need to translate the touch information (most likely the deltas) into information that the object can use. In this case, I know how to resolve the ambiguity inherent in going from 2D to 3D, because I know that the original touch position must go to some coordinate relating to the object. So now I want a function that says "Return the 3-vector with the property that if I move the original object by this 3-vector then I get an apparent movement on the screen that matches the touch movement, and it should be parallel (in 3-space) to the screen."

(Incidentally, I already do this in my 3D Shape Explorer, but I have to do it "by hand" and so lose the speed of using the GLU. Doing it "by hand" is not so bad, since I only have to do it when processing touches, which isn't every frame. Still, it would make it easier to interact with touches.)

  • Simeon Admin Mod
    edited March 2012 Posts: 4,958

    @Andrew_Stacey there's a couple of ways to do "picking" in OpenGL.

Unproject: the way to get the "depth" of the screen space coordinate typically involves reading the depth buffer value for the touched pixel. This is not possible in Codea (and even if it were, it would be slow).

    Raycasting: If you store your triangle data in a hierarchical data structure, such as an octree or bounding volume hierarchy, you can construct a ray or line segment from the touched point. You can then efficiently check where it intersects with your scene and return the exact triangle. This is probably the most modern way to pick objects — it's ridiculously easy with a 3D physics engine, since the physics engine will maintain the efficient spatial partitioning of the scene.

Pickbuffer: this is a really old-fashioned way to pick objects, but you can probably pull it off in Codea without much hassle. Basically you render all the objects in your scene into an image that is the size of your screen — this is the "pickbuffer". Each mesh is rendered in a different solid color. Then, when the user touches a point on the screen, you simply query the color of the pixel in the pickbuffer at that location and return the associated mesh. (Thinking about this some more, you can actually render the meshes into a lower-res image for speed, but you lose pixel accuracy.)
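
    A bare-bones sketch of that pickbuffer idea (assuming setContext() and image:get() work in this context; the meshes are drawn untextured in flat colours, so in practice you'd use pick-only copies of your meshes):

    pickImg = image( WIDTH, HEIGHT )

    function renderPickBuffer( meshes )
        setContext( pickImg )
        background( 0, 0, 0, 255 )
        -- use the same perspective()/camera() calls as the visible render here
        for i, m in ipairs( meshes ) do
            m:setColors( i, 0, 0, 255 ) -- encode the mesh index in the red channel
            m:draw()
        end
        setContext()
    end

    function pick( meshes, touchX, touchY )
        local r = pickImg:get( touchX, touchY ) -- red channel holds the index
        return meshes[r] -- nil when the background was touched
    end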

  • Posts: 2,161

    One thing is that I don't always want to ask "Was the touch in the image of this triangle?" Sometimes I want to ask "Was the touch within a particular radius of this rendered point?". For example, if I render a load of spheres on the screen and some are far away, I might want to say that their effective touch radius is larger than their actual radius. So it's not just about picking objects by where they are rendered, but about being able to compare the rendered coordinate with the touch coordinate.
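
    Doing the projection by hand from the matrices Codea exposes looks roughly like this (a sketch; it assumes matrix entries can be indexed m[1] to m[16] in column-major order, as documented above):

    function project( x, y, z )
        local m = projectionMatrix() * viewMatrix() * modelMatrix()
        -- columns first: m[1]..m[4] are the first column, and so on
        local cx = m[1]*x + m[5]*y + m[9]*z  + m[13]
        local cy = m[2]*x + m[6]*y + m[10]*z + m[14]
        local cz = m[3]*x + m[7]*y + m[11]*z + m[15]
        local cw = m[4]*x + m[8]*y + m[12]*z + m[16]
        -- perspective divide, then scale to screen pixels
        local sx = ( cx/cw * 0.5 + 0.5 ) * WIDTH
        local sy = ( cy/cw * 0.5 + 0.5 ) * HEIGHT
        return sx, sy, cz/cw -- z is left in normalized device coordinates
    end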

  • edited March 2012 Posts: 146

gluUnProject gives you a point in 3D space. If you call it twice with different depths, or use your camera position as the second point, you have a vector, with which you can then compute intersections with triangles or spheres. There is only so much you can do for picking objects in 3D space from a 2D projection; this method works well for me. Keep in mind that your representation of objects for rendering need not be the same as for picking them: think, for example, of bounding spheres.
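
    As a sketch (this treats the planned unproject() as hypothetical, since it doesn't exist in Codea yet, and uses bounding spheres of a single radius):

    function pickSphere( touch, centers, radius )
        local near = unproject( touch.x, touch.y, 0 ) -- hypothetical API
        local far  = unproject( touch.x, touch.y, 1 )
        local dir = far - near
        dir = dir / dir:len() -- normalise the ray direction
        for i, c in ipairs( centers ) do
            local t = ( c - near ):dot( dir ) -- closest approach along the ray
            local closest = near + dir * t
            if ( c - closest ):len() <= radius then
                return i -- the ray passes within this bounding sphere
            end
        end
    end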

  • Simeon Admin Mod
    Posts: 4,958

    @Andrew_Stacey in that case I would do as @gunnar_z suggests and have bounding spheres that you test against for hit purposes (i.e. not rendered).

  • Posts: 2,161

That sounds reasonable. Probably also faster than my way, as I only do at most two transforms (the touch at two different depths). Okay, let's have gluUnProject then!

  • Posts: 2,161

Memo to self: to scale the entire scene using matrices (rather than the inbuilt scale function), the method is not to do modelMatrix(s*modelMatrix()), as that does absolutely nothing! (Uniformly scaling all sixteen entries is cancelled by the homogeneous divide.) The scale factor has to be applied only to the first three coordinates (or one could apply the inverse scale factor to the fourth).
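
    That is, something like this (a sketch, scaling about the origin) behaves as expected, whereas the scalar multiply does not:

    modelMatrix( matrix():scale( s, s, s ) * modelMatrix() )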

    Also, the new improved watch stuff is fantastic. I've just been using watch("modelMatrix()") and similar and it is extremely useful.

    Another memo to self: I'm drawing my scene, then want to put the UI stuff on top. All the UI stuff is defined in "screen coordinates". It seems that to restore those, I need to save the viewMatrix from the start, then do resetMatrix() ortho() viewMatrix(savedMatrix). This seems like a fairly common thing to want to do, so maybe a resetWorld() function?

  • Simeon Admin Mod
    edited March 2012 Posts: 4,958

    Glad you like the improved watch(), it's much better at actually watching stuff.

    Since we don't know what matrix you saved, your code:

    resetMatrix()
    ortho()
    viewMatrix(savedMatrix)
    

Would be something that you would have to do. Any resetWorld() that we implement would have to do the following:

    resetMatrix()
    ortho()
    viewMatrix(matrix()) -- set to identity (default)
    
  • Posts: 2,161

Strange, I could have sworn that the first time I tried it, viewMatrix(matrix()) didn't work, which is why I had to save the "initial" view matrix. But now I try it, it does work.

Although it's just a convenience, I do think there's value in a resetWorld() function. Maybe, to justify its existence, it could also include a resetStyle(), so it really is a "put everything back where it came from" function.

  • Posts: 2,161

    Oh, and normalize seems to be missing from the vec3 stuff.

  • Simeon Admin Mod
    Posts: 4,958

    Oh thanks for pointing that out. I must have missed it.

    I'll have to consider the merits of resetWorld() for longer — it probably won't be a decision I can make before the next update.

    Planned additions are: unproject, project, transpose, invert.

  • Simeon Admin Mod
    Posts: 4,958

By the way, I'm trying to think of a good example project for 3D in the next beta. I'm thinking of borrowing the concept of "Tests" from "Physics Lab". Perhaps a "3D Lab" with a number of small scenes. Do any of you fantastic beta testers have a small 3D demo to contribute? Something that can be easily modified to run in a Test framework.
