Explosion shader (code & video)

edited June 2015 in Questions Posts: 2,020

I'm having a very strange issue.

I'm working on an explosion shader. It's inspired by @Andrew_Stacey 's 2D explosion shader here,

http://codea.io/talk/discussion/2257/exploding-mesh-with-shaders

but it's for 3D models. It disintegrates each of the faces along a trajectory derived from the face normal (unlike Andrew's, I assume the model is already multi-faceted, so I don't further subdivide the faces).

Now, the weird thing is, the shader was actually coming along nicely within a large project I'm working on, when I decided to export it to a minimal working example (MWE) so I could work on it further (and share it). Ironically enough, I can't get the custom attribute buffer to work at all in the MWE. In the parent code it works about 70% of the time, but it does sometimes produce this error.

It's probably something really simple, and I've just been staring at it too long, but I can't work out what I'm doing wrong at all.

The bit that triggers the error is when I try to index the custom buffers. When this code runs:

    local origin = m:buffer("origin")
    local trajectory = m:buffer("trajectory")
    origin:resize(#verts) --shouldn't be necessary. Triggers error.
    trajectory:resize(#verts)
    for i=1, #verts do
        origin[i] = ori[i] --triggers error

I get:

error: [string "explode = {}..."]:9: attempt to index a nil value (local 'origin')

The mesh has been set up, it has vertices, colours, normals (so the custom buffers shouldn't need resizing), it has a shader that compiled without error, which uses the custom attribute.

What have I forgotten??

The full MWE is here (it borrows code from @Ignatz , @Andrew_Stacey , and whoever made the Icosphere function):

https://gist.github.com/Utsira/463f656b56fd38ad5b68

I'd be very grateful if someone could put me out of my misery.

Comments

  • Posts: 2,020

    I've figured it out. It was an issue in the plumbing between the vertex and the fragment shader. The latter was expecting a variable vNormal but the former was supplying one called vDirectDiffuse instead. Weird though that it results in an error in a different attribute. I'll post the working version soon, as there's some other elements I'd like help with.

  • Posts: 2,020

    Here's a video. Code is on its way. Does the resolution of the video look really terrible to you? For some reason the stuff I upload to YouTube always ends up looking awful.

  • Posts: 433

    I was just about to post the solution ... but you found it yourself.

  • edited June 2015 Posts: 2,020

    As the video quality is so crappy, here's an image:

    bang bang

  • Posts: 2,020

    The gist link above has been updated with functioning code. The next challenge... gravity (I'll have to refer back to Andrew's 2d shader I think...)

  • Posts: 735

    @yojimbo2000 - looking good!

  • Posts: 2,020

    bang2

  • edited June 2015 Posts: 2,020

    I've updated the code at the gist again so that it now has gravity. The gravity and friction code is adapted again from Andrew's shader. One tricky thing is working out which way is down as the model is rotating. To work out what the world gravity vector is I load the inverse of the model matrix into the shader. Here's the updated vertex shader. Suggestions and criticisms welcome.

    explodeVert=    [[
    
    uniform mat4 modelViewProjection;
    uniform mat4 modelMatrix;
    uniform mat4 modelInverse; //inverse of the model matrix
    uniform vec4 eye; // -- position of camera (x,y,z,1)
    //uniform vec4 light; //--directional light direction (x,y,z,0)
    uniform float fogRadius;
    uniform vec4 lightColor; //--directional light colour
    uniform float time;// animate explosion
    //uniform bool hasTexture;
    const vec4 grav = vec4(0.,0.,-0.02,0.);
    const float friction = 0.02;
    
    attribute vec4 position;
    attribute vec4 color;
    attribute vec2 texCoord;
    attribute vec3 normal;
    attribute vec4 origin; //centre of each face
    attribute vec4 trajectory; // trajectory + w = angular velocity
    
    varying lowp vec4 vColor;
    varying float dist;
    varying highp vec2 vTexCoord;
    varying vec4 vNormal;
    
    void main()
    {
        float angle = time * trajectory.w;
        float angCos = cos(angle);
        float angSin = sin(angle);
        lowp mat2 rotMat = mat2(angCos, angSin, -angSin, angCos); 
        vec3 normRot = normal;
        normRot.xy = rotMat * normRot.xy;
    
        vNormal = normalize(modelMatrix * vec4( normRot, 0.0 ));
       // vDirectDiffuse = lightColor * max( 0.0, dot( norm, light )); // brightness of diffuse light
    
        vec4 gravity = modelInverse * grav; //convert world gravity vector into model coords
        highp vec4 A = gravity/(friction*friction) - vec4(trajectory.xyz, 0.)/friction;
        highp vec4 B = origin - A;
    
        vec4 pos = position - origin; // convert to local
        pos.xy = rotMat * pos.xy; // rotate
        pos += exp(-time*friction)*A + B + time * gravity/friction;
    
        vec4 vPosition = modelMatrix * pos;
    
        dist = clamp(1.0-distance(vPosition.xyz, eye.xyz)/fogRadius+0.1, 0.0, 1.1); //(vPosition.y-eye.y)
    
        vColor = color;
        vTexCoord = texCoord;
        gl_Position = modelViewProjection * pos;
    }
    
    ]],
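An aside on the physics above: the line `pos += exp(-time*friction)*A + B + time * gravity/friction;`, together with the definitions of A and B, is the exact closed-form solution of the damped motion equation x'' = g - f*x', with x(0) = origin and x'(0) = trajectory. A quick one-axis sanity check in Python (illustrative constants, not Codea code), comparing the closed form against step-by-step integration:

```python
import math

g = -0.02          # gravity (one axis), matching the shader's grav.z
f = 0.02           # friction constant
x0, v0 = 1.0, 0.5  # initial position (origin) and velocity (trajectory)

def closed_form(t):
    # Mirrors the shader: pos = exp(-t*f)*A + B + t*g/f
    A = g / (f * f) - v0 / f
    B = x0 - A
    return math.exp(-t * f) * A + B + t * g / f

def euler(t, dt=1e-4):
    # Brute-force integration of x'' = g - f*x'
    x, v = x0, v0
    for _ in range(int(t / dt)):
        v += (g - f * v) * dt
        x += v * dt
    return x

assert abs(closed_form(0.0) - x0) < 1e-9        # starts at the origin
assert abs(closed_form(5.0) - euler(5.0)) < 1e-3  # matches the ODE numerically
```

At t = 0 the expression collapses to A + B = origin, and differentiating once recovers the initial velocity, which is why nothing extra needs to be stored per frame.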
    
  • Posts: 2,020

    Sorry, the code above is a bit of a mess. I'm transitioning from Gouraud (per-vertex) to Phong (per-fragment) lighting, which is why there are bits of unused lighting code still clogging it up. That's what got me into trouble in the first place.

  • Posts: 2,020

    I suppose, really, now that each face is double-sided, I should check for back-facing fragments in the frag shader and invert the normal....

  • Posts: 2,020

    I'm not sure back-face normal inverting makes any difference, as far as I can see. I've read that this is the "correct" way to do double-sided faces. Has anyone tried this?

    Change these lines in the fragment shader :

        vec4 norm = vNormal;
        if (! gl_FrontFacing) norm = -vNormal;
        vec4 vDirectDiffuse = lightColor * max( 0.0, dot( norm, light )); // brightness of diffuse light
    
  • IgnatzIgnatz Mod
    edited June 2015 Posts: 5,396

    Wrt gravity, it might be a little simpler to pass the shader a uniform of the vector (0,-1,0) multiplied by modelMatrix, to tell it which way is down.

  • Posts: 2,020

    Yes, that's a good idea, thank you. Rather than multiplying by the inverse model matrix on every vertex, I can do that calculation once and pass the result to the shader.

    I should also point out, if anyone's interested in using the code in their own projects, that it's for a Z-up orientation (I just like working in Z-up). This means that because I'm only bothering to do a 2D rotation (as the angle computation is much easier, and I was adapting a 2D source), each fragment rotates around the model's up axis (so the fragments appear to change size, and catch the light as they rotate). I think it looks good, and I'm not going to bother adding code to compute rotations around the other axes of each fragment.
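On the rotation convention: GLSL matrix constructors are column-major, so `mat2(angCos, angSin, -angSin, angCos)` has columns (c, s) and (-s, c), i.e. the standard counter-clockwise rotation. A tiny Python check of the equivalent row-major matrix (illustrative, not Codea code):

```python
import math

def rot2(angle):
    """Row-major equivalent of GLSL's column-major mat2(c, s, -s, c)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s], [s, c]]

def apply(m, v):
    return (m[0][0]*v[0] + m[0][1]*v[1],
            m[1][0]*v[0] + m[1][1]*v[1])

# Rotating (1, 0) by 90 degrees gives (0, 1): counter-clockwise
x, y = apply(rot2(math.pi / 2), (1.0, 0.0))
assert abs(x) < 1e-12 and abs(y - 1.0) < 1e-12
```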

  • Posts: 2,020

    OK, trying to implement @Ignatz 's suggestion above. I've got it working, but something rather unexpected has happened:

        m.shader.gravity = modelMatrix():inverse() * vec4(0,0,0,-0.05) --why is the gravity vector on w?
    

    For some reason, when doing this calculation outside of the shader, the down vector is on w, not z. Why?? This is replacing these lines:

        --in draw
         m.shader.modelInverse = modelMatrix():inverse() 
    ... 
    // in the vert shader
        const vec4 grav = vec4(0.,0.,-0.05,0.); //down on the z axis
    ...
        vec4 gravity = modelInverse * grav; //convert world gravity vector into model coords
    
    

    How come down shifts from z to w when the calculation is moved to outside the shader?

  • IgnatzIgnatz Mod
    edited June 2015 Posts: 5,396

    @yojimbo2000 - I guess one reason is that this doesn't work

    v = modelMatrix() * vec4(1,1,1,1)
    --test if we can reverse it
    m = modelMatrix():inverse()
    v2 = m * v  --< (0.94, 1.20, 0.74, 0) --doesn't reverse!
    

    I'm no matrix expert, so I have no immediate explanation.

    However, I think a better approach is this
    1. pass modelMatrix to the vertex shader
    2. apply it to the current position attribute to get the world position
    3. compare that with your gravity vector (0,0,-0.05)

    This is a little more work for the shader, but it should work, and also avoids inverting the model matrix, something our resident mathematician says is a Very.Bad.Idea (possibly because not all matrices will invert).

  • Posts: 2,020
    1. pass modelMatrix to the vertex shader

    2. apply it to the current position attribute to get the world position
    3. compare that with your gravity vector (0,0,-0.05)

    with that approach you'd then also need to create your own viewProjection matrix to use instead of the built-in modelViewProjection. Something like this perhaps:

    vec4 worldPos = modelMatrix * pos;
    // do gravity calculations in world space
    gl_Position = viewProjection * worldPos;
    

    I guess one reason is that this doesn't work

    Damn. So how do you translate a world coordinate into local space? What does your getLocalPoint function look like?

  • IgnatzIgnatz Mod
    edited June 2015 Posts: 5,396

    @yojimbo2000 - Based on previous Codea discussions, converting from world space to object space is done like this

    --in Codea
    m.shader.invModel=modelMatrix():inverse():transpose()
    
    //in the vertex shader  
    uniform mat4 mInvModel;
    
    //in main, convert vector v to object space  
    v = v * mInvModel;
    
  • Posts: 2,020

    Maybe OpenGL's matrix multiplication works differently than Codea's? In the test below I get much better results with modelMatrix():inverse() than you reported above. The result is within rounding error. But if I change it to modelMatrix():inverse():transpose() the result is totally wrong.

    -- ModelMatrix inverse as getLocalPoint
    -- reasonably accurate
    function setup()
        print ("test of using the inverse of the model matrix as a get local point \ntap screen to create a new random transform")
        parameter.watch("mat")
        --   parameter.watch("worldToLocal")
    
        parameter.watch("localCoord")
        parameter.watch("localBackFromWorld")
        parameter.watch("closeness")
        randomTransform()
    end
    
    function randomTransform()
        x,y,z = rand(), rand(), rand()
        a,b,c = rand(720), rand(720),rand(720)
        u,v,w = rand(), rand(), rand()
    end
    
    function rand(range)
        local range = range or 2000
        return (math.random()-0.5)*range
    end
    
    
    function draw()
        background(40, 40, 50)
        translate(x,y,z)
        rotate(a,1,0,0)
        rotate(b,0,1,0)
        rotate(c)
        localCoord = vec3(u,v,w)
        mat = modelMatrix()
        worldToLocal = mat:inverse() -- :transpose() adding transpose makes the results really bad
    
        worldCoord = mat * localCoord
        localBackFromWorld = worldToLocal * worldCoord
        closeness = localBackFromWorld - localCoord
    
    end
    
    function touched(t)
        if t.state==BEGAN then randomTransform() end
    end
    
    
  • IgnatzIgnatz Mod
    Posts: 5,396

    Yes, when trying to reverse modelMatrix in Codea, I probably should have put the vector on the left, which Codea doesn't support.

    I haven't done much conversion from world to object, so I'm not sure.

    Interestingly, inverting modelMatrix transposes the rotation part and negates the translation part.
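That structure can be made precise: for a rigid transform (rotation R plus translation t), the inverse transposes the rotation block and the translation becomes -Rᵀt (only for a pure translation is it simply -t). A quick check with NumPy (random illustrative transform, not Codea code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random rigid transform: rotation R (via QR) plus translation t
q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = q * np.sign(np.linalg.det(q))   # ensure a proper rotation (det = +1)
t = rng.standard_normal(3)

M = np.eye(4)
M[:3, :3], M[:3, 3] = R, t

# The inverse transposes the rotation block and maps t to -R^T t
M_inv = np.eye(4)
M_inv[:3, :3] = R.T
M_inv[:3, 3] = -R.T @ t

assert np.allclose(M_inv, np.linalg.inv(M))
```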

  • Posts: 433

    Ahem, matrices, ahem.

    (Might need a little updating - I need to look up if newer versions of Codea introduced new methods for combining vectors and matrices.)

  • Posts: 2,020

    @Ignatz OK, I didn't realise that you could multiply a vector on the left of a matrix in OpenGL. The code you posted above (m.shader.invModel=modelMatrix():inverse():transpose() ... v = v * mInvModel;) works. I was just hoping that I could convert the world vector into a model vector outside the shader and then pass it in, on the assumption that doing this once in Codea would be faster than doing it, say, several thousand times in the vertex shader (although the GPU does multiplication so blindingly fast, maybe it makes no difference?). Generally (as the code above shows) localCoord = modelMatrix():inverse() * worldCoord seems to work acceptably well in Codea (out by about 0.00001). But for some reason I haven't been able to fathom yet, something peculiar happens when I try to pass that converted world coord into the shader: the value for z needs to be placed in w, which just seems wrong.

    @LoopSpace thanks for that link, wonderfully informative as ever. One thing that I think has changed is that you say that Codea doesn't support multiplying a matrix by a vector, which it does do now (though only with the vector on the right). Apologies if I've misunderstood.

  • IgnatzIgnatz Mod
    Posts: 5,396

    Codea always multiplied on the right, but it never has done on the left

  • Posts: 433

    Codea does now do matrix times vector, but it's weird. You can do m * v where m is a 4x4 matrix and v is a vec3, and you get the result of the transformation described at the bottom of this section of my matrices description, specifically the transformation [x y z] -> [x'', y'', z'']. That isn't all that useful, as it's useless for, say, doing anything with normals. It also means that while you write it as multiplication on the right, it is actually doing multiplication on the left. I haven't investigated what this means for associativity, but it might be weird.

    You can also multiply a matrix by a vec4, but that's just odd. Try it; if you can discern any logic to it then you're better than me!

  • Posts: 2,020

    Ok, I think I've worked out how to pass the world vector for gravity into the shader. In the shader, if you set w to zero, ie vec4(gravity, 0.) then it works. I'll post the working code tomorrow.

  • IgnatzIgnatz Mod
    edited June 2015 Posts: 5,396

    @LoopSpace - can you explain this strange result?

    m=matrix(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16)
        print(m*vec4(1,0,0,0)) -- 1   2  3   4
        print(m*vec4(0,1,0,0)) -- 5   6  7   8
        print(m*vec4(0,0,1,0)) -- 0   0  0   0  << ???
        print(m*vec4(0,0,0,1)) -- 9 10 11 12 << ???
        print(m*vec4(1,1,1,1))
    

    @yojimbo2000 - I think this is the strange thing you were talking about, how z and w swapped around - although (even worse) z is zero above!

  • edited June 2015 Posts: 2,020

    @Ignatz that is weird. I get the same result. You get the correct results if you use a custom mat * vec function, eg:

    m=matrix(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16)
    print(matXVec(m,vec4(1,0,0,0))) -- 1   2  3   4
    print(matXVec(m,vec4(0,1,0,0))) -- 5   6  7   8
    print(matXVec(m,vec4(0,0,1,0))) -- 9 10 11 12
    print(matXVec(m,vec4(0,0,0,1))) -- 13 14 15 16
    print(matXVec(m,vec4(1,1,1,1)))
    
    function matXVec(m,v)
        return vec4(
        m[1]*v.x + m[5]*v.y + m[9]*v.z + m[13]*v.w,
        m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w,
        m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w,
        m[4]*v.x + m[8]*v.y + m[12]*v.z + m[16]*v.w)
    end
    

    I assumed actually that the built-in Codea mat*vec function resembled the above, but it seems it's doing something a bit different. Is this a bug? Or just some alternative way of doing matrix multiplication that I don't know about?

    I often use a 3x3 version of the above function (or 2x2 in 2D) if I just want to get the rotation of the matrix, not the translation or scale. I can't remember where I got this from. Probably from one of you two.

    function vecRotMat(v, m)
        return vec3(
        m[1]*v.x + m[5]*v.y + m[9]*v.z,
        m[2]*v.x + m[6]*v.y + m[10]*v.z,
        m[3]*v.x + m[7]*v.y + m[11]*v.z)
    end
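For what it's worth, the expected outputs in the comments can be reproduced outside Codea: with the matrix stored column-major (so m[1] to m[4] are the first column), matXVec above is ordinary matrix-times-column-vector multiplication. A NumPy check of the same table:

```python
import numpy as np

# matrix(1..16) stored column-major: elements 1..4 are the first column
M = np.arange(1, 17).reshape(4, 4, order='F')

assert (M @ [1, 0, 0, 0] == [1, 2, 3, 4]).all()
assert (M @ [0, 1, 0, 0] == [5, 6, 7, 8]).all()
assert (M @ [0, 0, 1, 0] == [9, 10, 11, 12]).all()
assert (M @ [0, 0, 0, 1] == [13, 14, 15, 16]).all()
assert (M @ [1, 1, 1, 1] == [28, 32, 36, 40]).all()  # sum of the columns
```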
    
  • Posts: 2,020

    The gist linked to above now has the updated version. World gravity is transformed to local gravity in Codea and passed to the shader:

    m.shader.gravity = modelMatrix():inverse() * vec3(0,0,-0.05)
    

    In the shader, the w of gravity is set to zero:

        highp vec4 A = vec4(gravity, 0.)/(friction*friction) - vec4(trajectory.xyz, 0.)/friction;
        highp vec4 B = origin - A;
    
        vec4 pos = position - origin; // convert to local
        pos.xy = rotMat * pos.xy; // rotate
        pos += exp(-time*friction)*A + B + time * vec4(gravity, 0.)/friction;
    

    I guess this is necessary because, like a normal, gravity is a relational vector, rather than one describing a fixed point.

  • IgnatzIgnatz Mod
    Posts: 5,396

    That's a bug, I think

    And yes, directions have a w=0, while points have w=1
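That w = 0 / w = 1 distinction is exactly why the gravity vector needed w set to zero: in a 4x4 transform the translation sits in the last column and only contributes when multiplied by w, so points (w = 1) get translated but directions (w = 0) do not. A minimal illustration (hypothetical translation values, not Codea code):

```python
import numpy as np

T = np.eye(4)
T[:3, 3] = [10.0, 20.0, 30.0]   # a pure translation

point     = np.array([1.0, 2.0, 3.0, 1.0])   # w = 1: a position
direction = np.array([0.0, 0.0, -1.0, 0.0])  # w = 0: e.g. gravity, a normal

assert (T @ point == [11.0, 22.0, 33.0, 1.0]).all()    # translated
assert (T @ direction == [0.0, 0.0, -1.0, 0.0]).all()  # unchanged
```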

  • Posts: 433

    That's the behaviour I was referring to. There should be a mat4 x vec4 method and it should be right. I don't think I ever got round to filing a bug report, though. Any volunteers?

    It's also a bit weird that Codea writes matrix multiplication on the left (m * v) but the effect is multiplication on the right (v * m). It would break associativity, except that I don't know if I would expect associativity with mat4 x vec3 as it's not a natural operation.

    I think that in my own code I overwrite the matrix x vector multiplication with the correct one.

  • Posts: 2,020

    I get very confused by this on the left/ on the right. So is this multiplication on the right?

    function matXVec(m,v)
        return vec4(
        m[1]*v.x + m[5]*v.y + m[9]*v.z + m[13]*v.w,
        m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w,
        m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w,
        m[4]*v.x + m[8]*v.y + m[12]*v.z + m[16]*v.w)
    end
    
  • Posts: 433

    @yojimbo2000 The crucial difference is whether you regard vectors as rows or columns. Then when you write a matrix, depending on your convention either the rows or the columns are where the standard vectors end up. If rows then you need to do v * m and if columns you need to do m * v. It also has an effect on composition. If you want to do A and then B, is the result A * B or B * A? If rows, then it is A * B and if columns then it is B * A.
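The two conventions are transposes of one another, which can be checked numerically: with column vectors you compute m * v and compose "A then B" as B * A; with row vectors you compute v * m against the transposed matrices and compose as A * B. A NumPy sketch (illustrative, not Codea code):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
v = rng.standard_normal(3)

# Column-vector convention: apply A first, then B (composite is B @ A)
col = B @ (A @ v)

# Row-vector convention uses the transposed matrices (composite is A.T @ B.T)
row = (v @ A.T) @ B.T

assert np.allclose(col, row)
```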

  • Posts: 2,020

    Codea / OpenGL is always column matrices, isn't it?

    So, how should we word the bug report? Is there something more specific we can say than "Mat * vec4 is sometimes weird"?

  • Posts: 433

    No, OpenGL is row matrices, contrary to just about everyone else in the world.

    That seems a reasonable bug report! Could say that mat4 * vec4 should be the correct matrix multiplication. For bonus marks, it should be possible to do v * m and m * v and have them come out correctly (ie differently).

  • IgnatzIgnatz Mod
    Posts: 5,396

    I already reported the bug to Simeon. I think my code example above is clear enough.

  • Posts: 2,020

    Thank you both.

  • edited June 2015 Posts: 2,020

    I've added it to the issue tracker. Feel free to vote for it!

    https://bitbucket.org/TwoLivesLeft/core/issue/362/mat4-vec4-produces-odd-results

  • IgnatzIgnatz Mod
    Posts: 5,396

    Simeon said he'll look at it

  • edited February 2016 Posts: 395

    @yojimbo2000 I tried to add a texture to the disintegration shader with
    m.texture = image but did not succeed. Any suggestions?

  • Posts: 2,020

    @piinthesky yeah, the shader in the code is for objects without textures. I'm away from my iPad so can't test this, but the below should work for textured objects. Let me know if it throws errors:

    shaders = {
    explodeVert=    [[
    uniform mat4 modelViewProjection;
    uniform mat4 modelMatrix;
    uniform vec4 eye; // -- position of camera (x,y,z,1)
    //uniform vec4 light; //--directional light direction (x,y,z,0)
    uniform float fogRadius;
    uniform vec4 lightColor; //--directional light colour
    uniform float time;// animate explosion
    //uniform bool hasTexture;
    uniform vec3 gravity; 
    const float friction = 0.02;
    attribute vec4 position;
    attribute vec4 color;
    attribute vec2 texCoord;
    attribute vec3 normal;
    attribute vec4 origin; //centre of each face
    attribute vec4 trajectory; // trajectory + w = angular velocity
    varying lowp vec4 vColor;
    varying float dist;
    varying highp vec2 vTexCoord;
    varying vec4 vNormal;
    varying vec4 vPosition;
    void main()
    {
        float angle = time * trajectory.w;
        float angCos = cos(angle);
        float angSin = sin(angle);
        lowp mat2 rotMat = mat2(angCos, angSin, -angSin, angCos); 
        vec3 normRot = normal;
          normRot.xy = rotMat * normRot.xy; 
        vNormal = normalize(modelMatrix * vec4( normRot, 0.0 ));
       // vDirectDiffuse = lightColor * max( 0.0, dot( norm, light )); // brightness of diffuse light
    
        highp vec4 A = vec4(gravity, 0.)/(friction*friction) - vec4(trajectory.xyz, 0.)/friction;
        highp vec4 B = origin - A;
        vec4 pos = position - origin; // convert to local
        pos.xy = rotMat * pos.xy; // rotate
        pos += exp(-time*friction)*A + B + time * vec4(gravity, 0.)/friction;
        vPosition = modelMatrix * pos; 
    
        dist = clamp(1.0-distance(vPosition.xyz, eye.xyz)/fogRadius+0.1, 0.0, 1.1); //(vPosition.y-eye.y)
    
        vColor = color;
        vTexCoord = texCoord;
        gl_Position = modelViewProjection * pos;
    }
    ]],
    
    
    frag = [[
    precision highp float;
    uniform lowp sampler2D texture;
    uniform float ambient; // --strength of ambient light 0-1
    uniform lowp vec4 aerial;
    uniform vec4 light; //--directional light direction (x,y,z,0)
    uniform vec4 lightColor; //--directional light colour
    uniform vec4 eye; // -- position of camera (x,y,z,1)
    const float specularPower = 48.;
    const float shine = 0.8;
    varying lowp vec4 vColor;
    varying highp vec2 vTexCoord;
    varying float dist;
    varying vec4 vPosition;
    varying vec4 vNormal;
    // varying vec4 vSpecular;
    void main()
    {
    
        lowp vec4 pixel= texture2D( texture, vTexCoord ) * vColor;
        lowp vec4 ambientLight = pixel * ambient;    
    
        vec4 norm = normalize(vNormal);
        if (! gl_FrontFacing) norm = -norm; //invert normal if back facing (double-sided faces)
        vec4 viewDirection = normalize(eye - vPosition);
        vec4 diffuse = lightColor * max( 0.0, dot( norm, light )) * pixel; // brightness of diffuse light
        vec4 specular = vec4(1.,1.,1.,1.) * pow(max(0.0, dot(reflect(light, norm), viewDirection)), specularPower) * shine;
        //  vec4 halfAngle = normalize( viewDirection + light );
        //   float spec = pow( max( 0.0, dot( norm, halfAngle)), specularPower );
        // vec4 specular = vec4(1.,1.,1.,1.) * spec * shine; //
    
        vec4 totalColor = mix(aerial, ambientLight + diffuse + specular, dist * dist);
    
        totalColor.a=1.;
    
        gl_FragColor=totalColor;
    }
    ]]}
    
  • @yojimbo2000 I did not manage to make it work. No error message, but it does not do what I hoped. I set m.texCoords = verts, is that right?

  • IgnatzIgnatz Mod
    Posts: 5,396

    no, m.vertices is for verts, m.texCoords is for texture positions

  • Posts: 395

    @yojimbo2000 If I use verts = m:buffer("position") to get the vertex positions, I don't seem to be able to use #verts to count the number of vertices. Any ideas why that doesn't work?

  • edited March 2016 Posts: 2,020

    No, don't use a buffer for position. Buffers are only for custom attributes, not the built-in ones that the Codea API sets up for you (vertices --> position, texCoords, normals, colors). Use myMesh.vertices or the vertex method to access the position attribute.

    To get the size, you can use myMesh.size or #myMesh.vertices

  • Posts: 395

    Ahhh, thanks

  • IgnatzIgnatz Mod
    Posts: 5,396

    @yojimbo2000 - that's not right - buffers are available for all the built in attributes, see the examples in the reference

    https://codea.io/reference/Shaders.html#buffer

  • Posts: 2,020

    Oh OK. I'm not sure why you'd want to do that for the built-in attributes.

  • IgnatzIgnatz Mod
    Posts: 5,396

    That's so you can change them dynamically, eg if you have a set of separate objects in a single mesh and need to change their positions each frame

  • Posts: 2,020

    Sure, but you can do that by referring directly to the built-in myMesh.vertices table, or using the vertex command. I don't see any need to set up another buffer variable.

  • IgnatzIgnatz Mod
    Posts: 5,396

    you might want to check comparative speed. My understanding is that buffers allow you to change values "in place" which should be faster
