Set mesh opacity globally

edited March 2012 in Suggestions Posts: 2,161

(I'll put this on the issue tracker as well - remind me if I forget.)

The more I use meshes, the more I like them.

I've been using them for quickly drawing complicated shapes, but have run into something that I'm turning into a feature request: the ability to set the opacity globally.

As I understand it, mesh:setColors sets all the colours of the current list of vertices, so it's a convenience function. But setting the alpha of each vertex isn't always quite right. When using a mesh to draw a complicated shape, it can be computationally irritating to ensure that the triangles don't overlap. Sometimes, it's easiest just to draw a shape with overlapping triangles. But when this happens, the opacities add, meaning that the overlaps are obvious if the shape isn't completely opaque. It would be nice to be able to render the whole mesh and then set the opacity, so that the opacity on overlaps is the same as the opacity on the rest.

I could do this by rendering to an image and then tinting the image with an appropriate alpha, but that adds an extra step to something that oughtn't to need it.

Comments

  • bee
    Posts: 381

    Mesh this, mesh that. It seems that meshes have got my attention and interest. Thanks to @xavier's code, which I'm still trying to understand. The terminology around the usage (vertex, etc.) and my limited English in this regard make my study a bit harder. Thanks to Google and Wikipedia! :D

    Sorry for the interruption. Just ignore me and keep the discussion going. Thank you. :)

  • Simeon Admin Mod
    Posts: 5,054

    @Andrew I'm not sure if I'm clear on this. Do you mean you'd like a way to set the opacity on all vertices to a specific value, while maintaining the RGB components of those vertices?

  • Xavier
    Posts: 196

    @bee - My example probably isn't the best code to learn the mesh API from, as there is a lot of other stuff that might cloud what you're after. I was planning to make a very simple Codea tutorial on 3D projections/transformations, but it will be useless with the next version of Codea :P

    @Simeon - That would actually be nice. Something like overloading the function setColors(R, G, B, A) with setColors(A)?

  • Andrew_Stacey
    Posts: 2,161

    No. I'd like to be able to set the opacity of the mesh as a single object.

    So if the triangles are at, say, (0,0), (100,0), (100,100) and (0,0), (100,0), (0,100) then when I set the global opacity I see a single shape, not two overlapping triangles.
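
    To make that concrete, here's a minimal sketch with the current API (untested, written from memory); with half-opaque vertex colours, the overlap region is visibly more opaque than the rest:

        function setup()
            m = mesh()
            m.vertices = { vec2(0,0), vec2(100,0), vec2(100,100),
                           vec2(0,0), vec2(100,0), vec2(0,100) }
            m:setColors(255, 255, 255, 128) -- half-opaque on every vertex
        end

        function draw()
            background(40, 40, 50)
            m:draw() -- the overlap of the two triangles renders more opaque
        end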

    Take a look at the PGF Manual, page 241 on transparency groups for the idea of what I'm on about.

  • bee
    Posts: 381

    @xavier: Useless? How come?

  • Xavier
    Posts: 196

    @bee - Well right now we need to project our 3D points to our 2D screen, which means quite a few calculations.
    Next Codea version will have built-in matrices and vec3 for meshes. You'll be able to draw a cube in just a few lines of code :)
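
    For instance, the projection step alone looks something like this (a simplified sketch; d is the assumed eye-to-screen distance):

        -- project a 3D point onto the 2D screen, eye at distance d on the z-axis
        function project(v, d)
            local s = d / (d - v.z) -- points further away shrink
            return vec2(v.x * s, v.y * s)
        end

        -- e.g. local p = project(vec3(10, 20, -30), 500)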

  • Andrew_Stacey
    Posts: 2,161

    But meshes can take vec3 objects, I believe. The problem is that these are projected orthogonally to the screen. What we really want is stereographic projection and I haven't seen that in the proposal (as it's not implemented by a matrix).

  • Nat
    Posts: 143

    I like this idea. It's like a tint for the mesh. In fact, would applying the current tint to the mesh be a reasonable way of doing this?

  • Xavier
    Posts: 196

    @Andrew_Stacey - What do you mean by "stereographic projection"? Wikipedia tells me it's a specific way of projecting a sphere onto a plane?
    Either way, if the Codea update brings matrix transformations, couldn't you put that in matrix form, or is that not possible?

  • Andrew_Stacey
    Posts: 2,161

    @Xavier: It isn't a linear transformation, so no: it can't be put in matrix form.

    It might be a term that is used for many different things, so to be clear: by stereographic projection I mean that you project your object onto a plane from a point (the "eye"). It's what you do if you look at a scene through a window and then draw on the window exactly what you see (so it is the right way to project 3D to 2D and make it look "realistic"). If you look for the videos of my 3D shape explorer then you might see what I mean as it's what I use there.

    @Nat: Yes, that would be good. But it has to be applied at the right time. Applying it to the vertices won't work due to the overlap issue. So it has to be applied to the rendered mesh.

  • Nat
    Posts: 143

    @Andrew: I think you mean perspective projection. It can be expressed as a 4x4 matrix if you use homogeneous coordinates (4d vectors) for your points.

  • Simeon Admin Mod
    Posts: 5,054

    @Andrew the next update will retain vec3.z on meshes (rather than discarding it). In addition you will be able to set the current 4x4 transform matrix — for example, to integrate a perspective transform.

    Regarding the opacity setting — you mention that it needs to be applied to the rendered mesh. So internally we'd need to render all meshes to images just to support this setting, which would hurt performance quite a lot. I think perhaps having a helper function (mesh2image()) might be a better way to go. Though maybe it's too specific for us to implement, and it's not too hard to implement on the Lua side.
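
    Something like this on the Lua side, assuming you have setContext() available (rough sketch, untested):

        -- rough sketch of a Lua-side mesh2image() helper:
        -- render the mesh into an image so it can be tinted as a whole
        function mesh2image(m, w, h)
            local img = image(w, h)
            setContext(img) -- draw into the image instead of the screen
            m:draw()
            setContext()    -- back to the screen
            return img
        end

        -- then in draw():
        -- tint(255, 255, 255, 128)
        -- sprite(mesh2image(myMesh, 200, 200), x, y)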

  • Andrew_Stacey
    Posts: 2,161

    @Nat: Sorry, I'm going to pull rank here. There is no way to represent stereographic (aka perspective) projection via a matrix. It is possible to simplify it using matrices but you cannot take in a 3-vector, apply a matrix, and produce the right 2-vector as output. It also cannot be done via an affine transformation, so it can't be done using those "homogeneous coordinates" either.

    What you can do is transform your data using affine transformations before and after the stereographic projection. What this gains you is that you only need to encode one projection, and so it might as well be (x,y,z) -> (x/z,y/z). But the point of doing it in the heart of Codea, rather than as a user implementation, is to get it faster since looping over all the vertices in a mesh can take some time, in which case you probably don't want to encode it as the composition of several functions but as a single function.

    @Simeon: Hmm, in that case I'll work around it and make sure that where I really care about this then my mesh doesn't have overlaps. Rendering to an image each time is going to be expensive if my mesh is dynamic. I can work around this problem - I'd rather you concentrated on better sorting routines.

    In addition you will be able to set the current 4x4 transform matrix

    Huh? What's the fourth coordinate? Are we leaving the Galilean system and leaping straight into relativity? Or are you referring to the same bit as Nat on the Wikipedia page on perspective projection? In which case, I refer you both to the image:

    [Image: final step of projection]

    which is the non-linear step. For the other steps, you don't need 4x4 matrices (requiring 16 numbers). You want 3x3 + translation (only 12 numbers). Indeed, most people will only want to combine rotation, scaling, and translation, in which case the entire transformation can be encoded as a unit quaternion, a real number, and a 3-vector: 8 numbers (ignoring the normalisation on the quaternion which would get you down to 7).
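
    For instance, applying the rotation part needs no matrix at all. A hypothetical sketch (assuming vec3 gains a cross method):

        -- rotate point p by the unit quaternion q = { w = scalar, u = vec3 }
        -- using v' = v + 2 u x (u x v + w v)
        function quatRotate(q, p)
            local t = q.u:cross(p) + p * q.w
            return p + q.u:cross(t) * 2
        end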

    Take a close look at the Space bit on my 3D shape explorer. This is exactly what I do there.

  • Simeon Admin Mod
    edited March 2012 Posts: 5,054

    @Andrew that's exactly it. Rendering an image each time is expensive, and is what we would have to do internally in order to treat the mesh as an image with regards to opacity.

    Regarding the 4x4 transformation matrix: OpenGL uses a 4x4 matrix with homogeneous coordinates for points. Irrespective of necessity, it is the fastest way to operate, because the GPU will do the matrix multiplication against each point in the mesh. This is already being done in Codea for every frame (for every primitive's vertex, not just meshes), so we might as well allow you to set that matrix.

    To make this a little bit clearer, here is what Codea does for every vertex on every frame:

        Position = ModelView * Vertex
    

    Where ModelView is a matrix combining the current View matrix (by default an orthographic projection the width and height of the view) and the Model matrix, a 4x4 matrix representing the current transform state.

    This is the same matrix that gets copied when you do pushMatrix, and it gets set to identity when you do resetMatrix.

    This gets executed on the GPU. It's extremely fast. I believe the iPad 1 hardware can process 15-30 million vertices per second, the iPad 2 is probably double that.

  • Andrew_Stacey
    Posts: 2,161

    And in that setup then a 3-vector (x,y,z) is first promoted to the 4-vector (x,y,z,1). That means that it can handle arbitrary affine transformations by a single matrix multiplication. Neat. You might need to supply some auxiliary functions for translating "traditional" transformations into 4x4-matrix form. As this is in the GPU, I see why it's so fast and why it's what you would expose. No argument from me there.

    The orthographic projection matrix should be something like [1 0 0 0; 0 1 0 0]. So the total result is to apply a 2x4 matrix to the vector (x,y,z,1).

    This still does not get us perspective projections. So while I would welcome the ability, for some projects then it wouldn't help. From what you write, it would appear that I have to apply the matrix last of all. But with perspective projections then I want to apply a matrix, then apply the projection, and finally apply another matrix. So I still have to loop over my vertices and apply a transformation to each one, and then the GPU stuff doesn't really save me a lot.

  • Simeon Admin Mod
    edited March 2012 Posts: 5,054

    I was thinking of exposing the View matrix to you as well. This would allow convenient perspective projection, at least in the way current 3D games and applications handle it through OpenGL.

  • Andrew_Stacey
    Posts: 2,161

    There's clearly a language disconnect here.

    I searched for a bit more information. Unfortunately, most explanations seem geared for people who don't understand mathematics but do understand code. Which is the complete opposite of my situation! I did find the following site: http://www.songho.ca/opengl/gl_projectionmatrix.html which had quite a detailed explanation. In there, is the paragraph:

    Note that both xp and yp depend on ze; they are inversely proportional to -ze. This is an important fact for constructing the GL_PROJECTION matrix. After the eye coordinates are transformed by multiplying by the GL_PROJECTION matrix, the clip coordinates are still homogeneous coordinates. They finally become normalized device coordinates (NDC) when divided by the w-component of the clip coordinates.

    which might help to explain my confusion.

    To do perspective projection (as that's the OpenGL name) you have to do three steps:

    1. Apply a spatial transformation. This is done using the GL_PROJECTION matrix. That it is a 4x4 matrix acting on so-called homogeneous coordinates is an implementation detail and nothing special.
    2. Apply the projection transformation. This is referred to in the final sentence of the quoted segment.
    3. Apply a view transformation. This is an application of a 2x2 matrix.

    Now, when you write something like:

    To make this a little bit clearer, here is what Codea does for every vertex on every frame:

       Position = ModelView * Vertex
    

    then I interpret that as at most implementing the first and third steps. This is because the middle one cannot be implemented as matrix multiplication. I was therefore assuming that it was not implemented at all. Reading around, I find that it can be implemented. That's, in effect, what I'm asking: is it?

    A-ha! Clicking further on that site I got to http://www.songho.ca/opengl/gl_transform.html and the diagram at the top of that is just what I need. The point is that the "divide by w" step is not a matrix transformation. However, there are no parameters involved in that step so to specify the entire transformation, one only needs to give the two matrices, the so-called perspective matrix and view matrix.

    So what I want to know is this: is the "divide by w" step included in what you are going to implement when you put in true 3D support in meshes (for example)?

    If yes, brilliant! Then, if you expose the two matrices I have all the pieces I need. Just please, please, please don't say that you are "applying a matrix". You are applying a non-linear transformation whose parameters are specified by a matrix. But the relationship between matrices and linear transformations is so strong that if you don't specify that you aren't doing this, the implication is that you are.

    If no, please do! Otherwise, I still have to loop over my vertices so I may as well apply the transformation myself.

    (The OpenGL commands appear to be glFrustum() and gluPerspective(). Are these involved at all?)

  • Simeon Admin Mod
    edited March 2012 Posts: 5,054

    When I say "view" I did not mean the viewport transform, I meant a separate 4x4 matrix that specifies the camera's transform. This could be incorporated into the model matrix, but it is common to separate them.

    The "divide by w" could be incorporated into the projection matrix. Which we will allow you to set, and probably include some utility functions such as perspective( fov, aspect, near, far ). Edit: I think I don't mean "divide by w", the perspective division can be incorporated into the projection matrix. OpenGL projects onto the screen, it is currently and has always been doing that in Codea — projecting a 3D scene onto a 2D image plane.

    In fact, here is the 3D API I had in mind (in addition to the matrix, vec4 and improved vec3 classes). Feel free to suggest improvements.

    -- This loads matrix m, replacing the current model matrix
    loadMatrix( m ) -- "modelMatrix" below could make this redundant
    
    -- This multiplies matrix m against the current model matrix
    -- Yes, it's called "apply" but we could name it "multMatrix"
    -- applyMatrix is consistent with processing.org, though
    applyMatrix( m )
    
    -- These configure the view matrix so that the camera "looks at" a given point.
    -- Similar to gluLookAt. Called without arguments it resets the view matrix to 
    -- default parameters.
    camera() 
    camera( eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ )
    
    -- These set the projection matrix using perspective projection. 
    -- No parameters gives some default projection
    perspective()
    perspective( fov, aspect, zNear, zFar )
    
    -- These set the projection matrix using orthographic projection. 
    -- No parameters gives the default ortho( 0, WIDTH, 0, HEIGHT, -10, 10 )
    ortho()
    ortho( left, right, bottom, top, near, far )
    
    -- These replace the matrix directly with a user matrix for the appropriate stage
    -- Called without arguments to get the current matrix.
    modelMatrix( m ) -- Could also be called "transformMatrix"
    viewMatrix( m )
    projectionMatrix( m )
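
    A frame using that API might then look like this (sketch only; names subject to change):

        function draw()
            background(40, 40, 50)
            perspective( 45, WIDTH/HEIGHT, 0.1, 1000 ) -- projection matrix
            camera( 0, 0, 100, 0, 0, 0, 0, 1, 0 )     -- view matrix: eye, centre, up
            rotate( ElapsedTime * 30, 0, 1, 0 )       -- model matrix via the usual transforms
            myMesh:draw()
        end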
    
  • Andrew_Stacey
    Posts: 2,161

    OpenGL projects onto the screen; it is currently, and has always been, doing that in Codea: projecting a 3D scene onto a 2D image plane.

    The difficulty with that phrase is that the word "projects" is ambiguous here! And since no Codea built-in function takes 3D vector objects and does anything with them (just checked mesh: it throws away the z-coordinate), I can't test what is actually going on.

    However, the fact that the code you've posted included "perspective" and "ortho" as separate functions gives me hope! Together with what I've read about OpenGL, I'm almost sure that this is going to be what I would like.

    NB It's going to be messy trying to explain how it all works.

    So at this point, I think I'll retire from the fray and let you get on with coding it.

    Oh, except one more thing: what about z-level ordering? Seems to be called depth-buffering in OpenGL. Can that be enabled on a mesh? Then I wouldn't have to sort my triangles, which would be fantastic.

  • Simeon Admin Mod
    Posts: 5,054

    A depth buffer is already used in Codea. For example, translate(x, y, z) will currently work in Codea 1.3.1, moving things on the z axis.

    The Codea renderer is actually rendering a 3D scene locked to an orthographic projection. This is in the current version (and all versions since the beginning).

    All these extra functions will do is allow you to change some of the matrices that were previously fixed.

  • Andrew_Stacey
    Posts: 2,161

    What I meant by depth buffering was to avoid having to manually sort triangles before they were added to the mesh. At the moment, I compute a load of stuff for the vertices and then do table.sort(vertices, function(a,b) blah blah end). If I could avoid that, it would be great.

    The Codea renderer is actually rendering a 3D scene locked to an orthographic projection ... All these extra functions will do is allow you to change some of the matrices that were previously fixed. [emphasis added]

    Now my hopes are less hopeful. The orthographic/perspective is not one of those matrices.

    Will you remove that lock to allow perspective projection?

  • Simeon Admin Mod
    edited March 2012 Posts: 5,054

    The orthographic/perspective was one of those matrices. I think I misled you when I told you that vertices were computed as follows:

     Position = ModelView * Vertex
    

    ModelView, above, is incorrectly named. It should be ModelViewProjection. The Projection matrix is multiplied into the ModelView matrix on the CPU and then sent to the GPU for multiplication against each vertex.

    I simply copied and pasted the code from my OpenGL shader, where I have it named as ModelView — the projection is kind of assumed, otherwise we would not be able to see them on the screen.

    Edit: by the way, I have a 3D rotating plane with the following code:

    function setup()
        parameter("Distance",-100,0,-20)
        parameter("Size",0.1,20,8)
        parameter("Angle",-180, 180, 0)
    end
    
    function draw()
    
        perspective(45, WIDTH/HEIGHT)
    
        -- This sets a dark background color 
        background(40, 40, 50)
    
        -- This sets the line thickness
        strokeWidth(5)
    
        -- Do your drawing here
        translate(0, 0, Distance)
        rotate(Angle,0,1,0)
    
        noSmooth()
        rectMode(CENTER)
        rect(0, 0, Size, Size)
    end
    

    As you can see it creates a perspective effect on the rect as it is rotated around the Y-axis:

    [Image: Perspective]

    Just to be clear: Codea has always been using a projection matrix multiplied against each vertex. It was just fixed to an orthographic projection before now.

  • Andrew_Stacey
    Posts: 2,161

    The proof of the pudding ..., as they say. That looks right. Any chance of pushing that to beta?

    But please stop saying things that imply that this is happening by multiplying a vector by a matrix. That just ain't so. It may be just semantics, but this is my language - mathematics - that you are mangling. The transformation is more complicated than just matrix multiplication. It is true that all of the parameters are encoded by a matrix, but you are not simply applying that matrix.

    I know it must seem as though I've been making a mountain out of a molehill here. Sorry about that. It's ... mathematics.

  • Simeon Admin Mod
    Posts: 5,054

    @Andrew_Stacey each homogeneous coordinate (vertex) is being multiplied by a matrix that encodes the projection and model transformation. I'm a bit confused about why you say this isn't the case.

  • Dylan Admin Mod
    Posts: 121

    @Andrew_Stacey
    When you apply a 4x4 matrix transformation to a homogeneous point (i.e., [x,y,z,w]), you get another homogeneous point in return (called the Clip Coordinate). Dividing this point by the homogeneous coordinate (w) gives you the perspective division (into Normalized Device Coordinates). This is done by the graphics hardware though, so as far as Codea is concerned the perspective transformation is just a matrix multiplication.

    Mathematically the perspective divide isn't required for the projection matrix to 'work', as all points in a homogeneous space multiplied by a constant (in this case 1/w) are the same point, i.e. it is scale invariant. The division is required to actually produce the image, though (i.e., to extract the points from the homogeneous space into a 2D one).
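
    In plain Lua terms (an illustrative sketch, not Codea API), the two stages are:

        -- multiply homogeneous point p = {x, y, z, 1} by a 4x4 matrix M (nested rows)
        function transform(M, p)
            local c = {}
            for i = 1, 4 do
                c[i] = M[i][1]*p[1] + M[i][2]*p[2] + M[i][3]*p[3] + M[i][4]*p[4]
            end
            return c -- clip coordinates, still homogeneous
        end

        -- the hardware's perspective divide: clip coordinates to NDC
        function perspectiveDivide(c)
            return { c[1]/c[4], c[2]/c[4], c[3]/c[4] }
        end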

    http://www.opengl.org/resources/faq/technical/transformations.htm
    Section 9.011 has more details.

  • Andrew_Stacey
    Posts: 2,161

    This is rapidly getting ... ridiculous. The short version of what I want to say is:

    1. Whatever code is needed to make Simeon's example work, please implement it and get it into beta - I want to work with it.
    2. If I use that code and draw two rectangles which, from the viewer's point of view, are one in front of the other, but I list them in the wrong order in my code, do they come out the right way around or the wrong way around? What if they are part of the same mesh?
    3. Don't refer to "matrix multiplication" in the documentation.

    Okay, here's the long version.

    I am a mathematician. I know that you know that. I'm just reminding you. Moreover, I am teaching about this stuff this semester. So it's not that it's something I half remember from when I learnt it. It's stuff that I know.

    As an end user of Codea, I don't care where the stuff is taking place: whether it is done by Codea, by GL, or by the iPad sending the data away to a support centre in the middle of Birmingham where people sit and work out the transformation by hand. What I care about is: I specify a coordinate (3,4,5) and something is drawn on the screen, and I want to know what went on in the meantime. Take a look at page 23 of today's lecture (6th March, beamer version). Question 3 on that slide is what I'm trying to work out here.

    What I would like to happen is "Perspective Projection" (that I would call Stereographic Projection). From the OpenGL FAQ that Dylan linked to, this is a composition of several steps:

    1. Object Coordinates are transformed by the ModelView matrix to produce Eye Coordinates.

    2. Eye Coordinates are transformed by the Projection matrix to produce Clip Coordinates.

    3. Clip Coordinate X, Y, and Z are divided by Clip Coordinate W to produce Normalized Device Coordinates.

    4. Normalized Device Coordinates are scaled and translated by the viewport parameters to produce Window Coordinates.

    Now, some of those are happening currently. Exactly which, I'm not sure. What is fairly clear is that Step 3, Clip Coordinates to Normalised Device Coordinates, is not happening.

    In the discussion, various people have referred to the matrices involved. Simeon has said that he will provide the ability to set them directly. This is great, but (and this is my key point) this has nothing to do with Step 3. Step 3 is not implemented by anything remotely resembling "matrix multiplication".

    As far as implementation is concerned, Simeon's picture and code demonstrate that whatever he has said, his intention is to enable Step 3. So in terms of code I'm satisfied (though I would like to know about the ordering - that's a follow-up question, and one that I can test when the above is in beta).

    However, code isn't everything. There's also documentation to think of. And also the link between the geometric transformation and the 4x4 matrix implementing it is not the simplest thing for people to understand - I suspect. So there is great potential for confusion here and good documentation could go a long way to alleviating this.

    It would be very wrong to refer to the entire process as "matrix multiplication". And it would be wrong to write the documentation in such a way that it could be misconstrued as saying this (in particular, the end user is not going care about the distinction between what Codea does and what GL does).

    I know this is just "mathematician being pedantic". But there's a reason. The ideal way to write mathematics is not in such a way that it be clear, but in such a way that it not be unclear. The idea is that it should be very, very hard to misunderstand the mathematics. That's not so different from writing documentation.

  • Simeon Admin Mod
    Posts: 5,054

    As Dylan pointed out, step 3 is happening in the current version of Codea. The graphics hardware takes care of this step automatically. It is never explicit in the code (ours or yours), it is just a fundamental part of how OpenGL works to display images on screen.

    Regarding the ordering. This is a bit tricky as it behaves differently for transparent and non-transparent primitives.

    Something is considered transparent if it has "blended pixels" — e.g. a transparent texture on a mesh, a primitive such as ellipse (or even a smooth() rect). Basically: anything that doesn't fill, with opaque pixels, the entirety of the triangles it is composed of.

    With that out of the way, every "fragment" (like a pixel) rendered by Codea writes its depth into the depth buffer. When new drawing takes place, Codea decides to draw the new pixel if its depth value is less than or equal to the value stored in the buffer.

    The reason transparent triangles are a special case is because regardless of their opacity, they write to the depth buffer. They could be completely transparent and will still write to the depth buffer when rendered.

    So for non-transparent polygons (such as a mesh with a solid texture, or colour) the draw-order does not matter. Polygons will draw in the correct order because their fragments will be tested against the value in the depth buffer before drawing.

    With transparent polygons, you must draw them after drawing all non-transparent polygons and order them back-to-front. This will ensure that every fragment that should be visible gets rendered correctly.
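
    In Lua terms that could look like this (rough sketch; opaqueMesh and transparentTriangles are hypothetical names, with each transparent triangle a small mesh carrying a precomputed depth):

        -- draw all opaque geometry first; the depth buffer sorts it out
        opaqueMesh:draw()

        -- then sort transparent triangles back-to-front and draw them last
        table.sort(transparentTriangles, function(a, b)
            return a.depth > b.depth -- furthest from the camera first
        end)
        for _, t in ipairs(transparentTriangles) do
            t.mesh:draw()
        end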

    Does that make sense?

  • Andrew_Stacey
    Posts: 2,161

    As Dylan pointed out, step 3 is happening in the current version of Codea.

    Then I'm very confused. I can't reproduce your picture with your code, or anything looking remotely like your code. It complains about the perspective function. Even when I take that out, I have great difficulty getting your code to produce anything drawn.

    So while I accept that OpenGL is technically doing all of the steps, nonetheless the observable result is that it is somehow crippled. Exactly how to make it so that I, as an end-user, can exploit that is, as the song says, not my department. That it appears to be straightforward to enable is great. Please do so. If trying to figure out what I'm getting at is stopping you, then please ignore me. I'd rather you did the code.

    What you say about the depth ordering sounds absolutely fantastic. I don't think that I do have transparent stuff, but if I did then what I could do is make it opaque when doing rapid changes, and then when things have settled do a one-shot reordering to fix the transparent stuff. Now I'm even more keen to have this capability.

    So please get this into beta! I'll fight you over the documentation at a later date.

  • Simeon Admin Mod
    Posts: 5,054

    I'll try to get a beta out tonight. I'm keen to get your feedback on the API. And keen to see it put to the test.

  • Dylan Admin Mod
    edited March 2012 Posts: 121

    @Andrew_Stacey
    You are correct: step 3 is not matrix multiplication. We know this, so there isn't anything to fight about :)

    We only have control over steps 1 and 2 in hardware-accelerated 3D (and the final step with glViewport, but that usually isn't very interesting), and those consist entirely of matrix multiplication. So it is very common for 3D programmers to refer to the projection matrix as "doing the perspective divide": it sets up the system for the hardware to do the divide correctly by setting the homogeneous coordinates to their correct values.

    PS
    I have a Bachelor's in Mathematics and Computer Science, so while not technically a practicing mathematician, I am trained in it and understand the need for precision in terms :)

  • Simeon Admin Mod
    Posts: 5,054

    I like to write terms randomly at 160 words per minute.

  • Andrew_Stacey
    Posts: 2,161

    Phew!

  • Posts: 622

    Whooosh.

    That would be the sound of most things in this thread going over my head.

    I came up the business side and was taught the same math (if it could be called that) which claims credit default swaps are nifty and safe.
