
# Screen position of 3D point? [Solved]

edited January 2015 Posts: 1,976

I've been Googling around for a long time on the subject but haven't found anything that works 100% correctly yet. I'm pretty sure it has to do with a lot of matrices and is an expensive calculation, but does anyone have any idea of how to get the 2D screen point of a 3D point in space?


• edited December 2014 Posts: 5,396
• edited December 2014 Posts: 1,976

@Ignatz I've seen that, but that's not my problem. I want to get the screen coordinate of a 3D point, not which object I pressed. That's kind of the opposite, getting the 3D point of a screen position.

Edit: example: I have a godray shader, but it works in screen space. I need to specify the point on the screen that is the position of the godray light. I can set it to 0.5, 0.5, but then it is in the middle of the screen, wherever I look. If I turn, I want it to move as if it were a 3D point in space.

• Posts: 5,396

@SkyTheCoder - why wouldn't your ray be in 3D space? Why 2D?

• Posts: 1,976

@Ignatz Because 3D godrays would require volumetric rendering and 3D computations of angles and raycasting...

Either way, how would I get a screen position of a 3D point? There are other uses I was thinking of that would require it.

• Posts: 5,396

I think you need professional help, @LoopSpace to the rescue

• Posts: 1,976

@Ignatz I got this off the internet:

``````projectionMatrix() * (viewMatrix() * position_in_3d)
``````

But it always seemed a little bit off, no matter what I did.
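The usual culprit here is the missing perspective divide: the matrix product gives clip-space coordinates whose x and y still have to be divided by the w component, and the resulting [-1, 1] range then mapped to pixels. A plain-Python sketch of the full pipeline (illustrative only, not Codea code; the screen size and matrix layout are assumptions for the example):

```python
import math

WIDTH, HEIGHT = 1024, 768  # assumed screen size for the example

def perspective_matrix(fov_deg, aspect, near, far):
    """Standard OpenGL-style perspective projection matrix (row-major)."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2)
    return [
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ]

def mat_vec(m, v):
    # 4x4 matrix times column vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def world_to_screen(proj, view, p):
    """3D world point -> 2D pixel position, including the perspective divide."""
    x, y, z, w = mat_vec(proj, mat_vec(view, [p[0], p[1], p[2], 1.0]))
    ndc_x, ndc_y = x / w, y / w          # clip space -> NDC in [-1, 1]
    return (WIDTH * (ndc_x + 1) / 2, HEIGHT * (ndc_y + 1) / 2)
```

With an identity view matrix, a point straight down the camera axis lands exactly at the centre of the screen; skipping the divide by w is what makes results look "a little bit off", and increasingly wrong away from the centre.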

• Posts: 3,295

@SkyTheCoder if it is always off in the same direction, maybe you have forgotten a translate() somewhere earlier?

• edited December 2014 Posts: 398

@SkyTheCoder Try this (and yes, you'd need to know a bit about matrices to figure this out otherwise!).

Basically, call this BEFORE your ortho() command once you've done your camera updates with the point in space you're querying as a vec3. Sprite the output using the resulting vec2 AFTER you've done the ortho() bit and reset your viewMatrix back to identity:

(Ps. Based on some great work Andrew Stacey did on this topic a few years ago)

I use this a lot in most of my 3d games ;-)

``````function _3d_to_2d(vector)
-- Takes a vec3 of a point in 3d space and returns a vec2 of the 2d point relating to the screen WIDTH and HEIGHT
-- Should be called prior to resetting the view matrix via ortho()

local m = modelMatrix() * viewMatrix() * projectionMatrix()
m = m:translate(vector.x, vector.y, vector.z)
-- m[13] and m[14] hold the projected x and y; m[16] is the w component for the perspective divide
return vec2(WIDTH * (m[13]/m[16] + 1)/2, HEIGHT * (m[14]/m[16] + 1)/2)
end
``````
• edited December 2014 Posts: 5,396

So what the matrices do is to convert a 3D point to its final position in front of the camera.

Then in the last part, m[13] is the x translation, which is scaled for distance by dividing it by the perspective factor m[16], then the same is done for y with m[14].

Cool B-)

• Posts: 455

yep, just looked up mine and it's the same in an old "game" I never got done...

If you translated already so (0,0,0) is middle of your object for the draw then you can skip the m = m:translate(x,y,z) since modelMatrix already includes your translation. So for instance in the draw for my 3d objects I have something like:

``````function obj:draw()
resetMatrix()
translate(self.position.x, self.position.y, self.position.z)
self.mesh:draw()
local l = modelMatrix()*viewMatrix()*projectionMatrix()
self.screenPosition = vec2((l[13]/l[16]+1)*WIDTH/2, (l[14]/l[16]+1)*HEIGHT/2)
end
``````

And I use the obj.screenPosition(s) later to add some HUD type stuff in 2d drawing mode.

• Posts: 1,976

@andymac3d @spacemonkey Thanks so much! That was exactly what I was looking for!

• Posts: 398

Ditto that @spacemonkey - I tend to use it as a HUD overlay in my own custom GUI class. Really useful for debugging 3D games and tracking attributes attached to objects etc - it's a nightmare otherwise.

Glad people found it useful - Merry Xmas :-D

• Posts: 453

Here's the detailed explanation http://loopspace.mathforge.org/HowDidIDoThat/Codea/Matrices/. The function that andymac3d uses is in sections 8 and 9.

• edited December 2014 Posts: 1,976

All right, I have a bit of a follow-up question that I decided wasn't different enough to create a new discussion - is there any way to get the world position of a screen point? (Along a fixed x, y, or z plane)

Sorry for a bump-like post, but it is a new question...

Example: a 3D top-down level editor, to get the coordinates you wanted to edit at

• Posts: 453

See section 11 of the link I posted just above.

• Posts: 1,976

@LoopSpace I'm not very good with traditional math equations that are written out with all those symbols...do you think you could make a code version?

• Posts: 453

Section 13?

Also, try reading the post from the beginning to get familiar with "traditional math equations". They're sort of useful.

• Posts: 5,396

@SkyTheCoder - here is code that LoopSpace wrote to detect a 2D touch on a 3D object. As I understand it, it works by translating the 2D touch to a ray, and calculating the intersection with one of the planes of the object. While this sounds close to what you want, the code will need adapting, and it is not commented, so good luck.

https://gist.github.com/dermotbalson/5914883

@LoopSpace - to be fair to SkyTheCoder, he has probably never seen math like this. I have, but many years ago, and so I also find the matrix post difficult to follow. I wish I did understand it, because it contains some brilliant stuff.

• Posts: 453

Point taken, but what would make it clearer?

• Posts: 1,976

@LoopSpace @Ignatz Thanks, it works great! Sorry, I didn't learn all the math symbols in school, and I can't seem to find anything on Khan Academy. I understand math when it's code, though.

• Posts: 453

@SkyTheCoder Are there any particular symbols or is it general maths?

If you understand math[s] when it's code, just pretend it is always code. After all, there's not a big difference.

• edited January 2015 Posts: 1,976

@LoopSpace I just don't understand what all the symbols mean: the lines, the placement of numbers and symbols, all that stuff. I can understand when someone explains it to me, but written out in symbols it's like a different language (literally a language, just one I can't read)

• Posts: 455

A cheat I've used in the past is to draw a second version of the scene with setContext, and in that version just do the objects as flat colours. If you colour each object differently then on touch you can get the colour of the touched point, and based on the colour know what was touched.

Or for a surface you need x,y for you can use a gradient colour to determine where the touch landed...

I use this technique for a plane in https://gist.github.com/sp4cemonkey/5208579 from thread http://codea.io/talk/discussion/2465/my-grass-simulation-with-vertex-shaders/p1 although looking at that old code now, it's a bit obscene ;-)

• Posts: 5,396

1. You can calculate the 2D screen position of any 3D point in world space, using the answer to your initial question above

2. Your 3D plane can be seen as a rectangle defined by 4 corner points

3. You can calculate the 2D screen position for (say) the bottom left and top right corners, which gives you width and height, and then

4. interpolate your 2D touch position between the points, to get the position on the plane

No nasty math required...
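The steps above can be sketched in a few lines (plain Python rather than Codea, with hypothetical corner positions; it assumes the plane projects to an axis-aligned rectangle on screen, i.e. it faces the camera, so linear interpolation is valid):

```python
def touch_to_plane(touch, bl, tr, plane_w, plane_h):
    """Interpolate a 2D touch between the projected corners of the plane.

    bl and tr are the screen positions of the bottom-left and top-right
    corners (from the 3D-to-2D projection in steps 2-3 above).
    """
    u = (touch[0] - bl[0]) / (tr[0] - bl[0])   # fraction across the width
    v = (touch[1] - bl[1]) / (tr[1] - bl[1])   # fraction up the height
    return (u * plane_w, v * plane_h)
```

For example, a touch halfway between the two projected corners maps to the middle of a 70x100 plane, i.e. (35, 50).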

• Posts: 5,396

Of course, if your plane is at an angle, straight interpolation won't work. But there's another way, using binary search.

1. Start with a rectangle for the plane as before (before it is drawn)

2. take the centre point of the rectangle, call it c

3. set dx, dy = WIDTH/4, HEIGHT/4

4. calculate the 2D screen position of c (when drawn in 3D, rotated etc), call it c2

5. If touch.x > c2.x, c.x = c.x+ dx, otherwise c.x = c.x- dx

6. do the same for y

7. dx, dy = dx/2, dy/2

8. go back to 4 and repeat until c2 is within 1 pixel of the touch point, or dx <= 1

This looks like a lot of steps, but they are all simple, and even step 4 is only a couple of multiplications, because the required matrix values can be pre-calculated.
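Steps 4-8 can be sketched like this (plain Python, not Codea; `project` is a stand-in for the 3D-to-2D function, faked below as a simple affine map so the sketch is self-contained, and the search assumes the projection is monotonic in each plane axis, as it is for a modestly tilted plane):

```python
def find_plane_point(project, touch, centre, half_w, half_h):
    """Binary-search the plane for the point that projects onto `touch`."""
    cx, cy = centre
    dx, dy = half_w / 2, half_h / 2          # step 3: initial step sizes
    while dx > 0.001:
        sx, sy = project(cx, cy)             # step 4: 2D position of the guess
        # steps 5-6: move the guess towards the touch in each axis
        cx = cx - dx if sx > touch[0] else cx + dx
        cy = cy - dy if sy > touch[1] else cy + dy
        dx, dy = dx / 2, dy / 2              # step 7: halve the step
    return cx, cy
```

Because the step is halved each iteration, the guess can reach any point within twice the initial step of the centre, and the final error is bounded by twice the last step size.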

I believe there is another method, too. Draw your plane to an image in memory, clipping it to the 1x1 pixel that you touched, creating texture coordinates that map from (0,0) to (1,1), and using a fragment shader that encodes the (x,y) texture coordinates into the pixel colours. Then read the touched pixel colour and decode the coordinates.

Both of these methods should give sufficiently exact answers for any purpose, and they should be guaranteed to work, unlike the usual solution of creating a ray by inverting matrices. (Also, I understand these methods, which I like )

• edited January 2015 Posts: 1,976

@spacemonkey I never thought about using a gradient shader and then getting the color from the touch's position!

@Ignatz The main game I was using this for was actually not looking straight down, but sort of at an angle. Wouldn't a "binary search" be a bit laggy at 60 FPS, though? I'd be willing to learn more advanced maths if it was faster.

• Posts: 5,396

@SkyTheCoder - if your plane is full screen, then a gradient shader will only give you an accuracy of about 4 pixels in the x direction, because there are only 256 colour values per channel but 1024 pixels across the screen. That's why I suggested encoding the x and y values in colours, which sounds the same as a gradient shader, except you use the z value to provide additional accuracy for the x and y values. Also clip the image to 1x1.

I'll demo my interpolation; no advanced math is necessary, I assure you.

• Posts: 5,396

@SkyTheCoder - try this. Touch the yellow plane.

It assumes the plane is XY, which is easily changed if necessary.

It takes about 0.00015 sec on my iPad3, which means it takes up just 1/110 of one draw cycle.

It works by starting in the middle, seeing where that point gets drawn, and adjusting the guess to get closer and closer to the touch point. You'll see there is no fancy math, and no dangerous matrix inversions.

``````function setup()
--plane settings
plane={}
plane.centre=vec3(0,0,0)
plane.rotate=vec3(45,0,20)
plane.size=vec2(70,100)
--calculate factors needed for interpolation
--set up camera and rotations in a function so we are sure the settings are the same here and in draw
SetupPerspective()
M=modelMatrix()*viewMatrix()*projectionMatrix()
parameter.text("Touch","") --shows results
end

function draw()
background(50)
translate(plane.centre.x,plane.centre.y,plane.centre.z)
SetupPerspective() --camera and rotation
--draw plane
fill(255,255,0)
rect(-plane.size.x/2,-plane.size.y/2,plane.size.x,plane.size.y)
--draw touch to show it worked
if p then
translate(p.x-plane.size.x/2,p.y-plane.size.y/2,1)
fill(255,0,0)
ellipse(0,0,5)
end
end

function SetupPerspective()
perspective()
camera(0,0,200,0,0,-1)
rotate(plane.rotate.x,1,0,0)
rotate(plane.rotate.y,0,1,0)
rotate(plane.rotate.z,0,0,1)
end

function touched(t)
if t.state==ENDED then
p=GetTouchPosition(t)
Touch=tostring(p)
end
end

--binary iteration
function GetTouchPosition(t)
local c=vec3(plane.centre.x,plane.centre.y,plane.centre.z)
--set initial step size to 1/4 size
local d=vec2(plane.size.x/4,plane.size.y/4)
--iterate until we get down to 1 pixel
while d.x>1 do
--get 2D position of 3D point
local m=M:translate(c.x,c.y,c.z)
local x=(m[13]/m[16]+1)*WIDTH/2
--if it is greater than x, we need to go left, so adjust c by step size, and vice versa
if x>t.x then c.x=c.x-d.x else c.x=c.x+d.x end
--same for y
local y=(m[14]/m[16]+1)*HEIGHT/2
if y>t.y then c.y=c.y-d.y else c.y=c.y+d.y end
--halve step size
d=d/2
end
--calculate position of point on plane
return c-plane.centre+vec3(plane.size.x/2,plane.size.y/2,0)
end

``````
• Posts: 1,976

@Ignatz It works pretty well, but seems a bit jumpy and imprecise, such as in this image (white circle is my touch). Any idea why? @LoopSpace Out of curiosity, could you comment your code? I tried to understand it, but got stuck at all the cofactor functions with all the exponents and modulus.

• Posts: 5,396

@SkyTheCoder - I see what you mean. Try this amended function. It runs slower, but still only takes 1/10 of a draw cycle.

``````function GetTouchPosition(t)
local c=vec3(plane.centre.x,plane.centre.y,plane.centre.z)
local s=0.7 --step
local d=vec2(plane.size.x*s*s,plane.size.y*s*s)
--iterate until we get down to 1 pixel
local error
while true do
local m=M:translate(c.x,c.y,c.z)
local x=(m[13]/m[16]+1)*WIDTH/2
local y=(m[14]/m[16]+1)*HEIGHT/2
--if it is greater than x, we need to go left, so adjust c by step size, and vice versa
if x>t.x then c.x=c.x-d.x else c.x=c.x+d.x end
--same for y
if y>t.y then c.y=c.y-d.y else c.y=c.y+d.y end
--halve step size
d=d*s
error=vec2(x,y):dist(vec2(t.x,t.y))
if error<.2 or d.x<0.05 then break end
end
--calculate position of point on plane, and error
return c-plane.centre+vec3(plane.size.x/2,plane.size.y/2,0),error
end
``````
• Posts: 5,396

@SkyTheCoder - here's one that does it with an image and shader, about as fast as the code immediately above (1/600 second)

``````function setup()
--plane settings
plane={}
plane.centre=vec3(0,0,0)
plane.rotate=vec3(45,0,20)
plane.size=vec3(500,500,0) --vec3, since s.z is used below
--set up mesh, needed for shader
plane.mesh=mesh()
local s=plane.size
local x1,y1,x2,y2,z=-s.x/2,-s.y/2,s.x/2,s.y/2,s.z
plane.mesh.vertices={vec3(x1,y1,z),vec3(x2,y1,z),vec3(x2,y2,z),vec3(x2,y2,z),vec3(x1,y2,z),vec3(x1,y1,z)}
plane.mesh.texCoords={vec2(0,0),vec2(1,0),vec2(1,1),vec2(1,1),vec2(0,1),vec2(0,0)}
plane.mesh:setColors(color(255,255,0))
parameter.text("Touch","") --shows results
end

function draw()
background(50)
pushMatrix()
SetupPerspective()
--draw plane
plane.mesh:draw()
--draw touch to show it worked
if p then
translate(p.x-plane.size.x/2,p.y-plane.size.y/2,1)
fill(255,0,0)
ellipse(0,0,20)
end
popMatrix()
end

function SetupPerspective()
perspective()
camera(0,0,900,0,0,-1)
translate(plane.centre.x,plane.centre.y,plane.centre.z)
rotate(plane.rotate.x,1,0,0)
rotate(plane.rotate.y,0,1,0)
rotate(plane.rotate.z,0,0,1)
end

function touched(t)
if t.state==ENDED then
p=GetPlaneTouchPoint(t)
Touch=p
end
end

function GetPlaneTouchPoint(t)
local img=image(WIDTH,HEIGHT)
setContext(img)
pushMatrix()
SetupPerspective()
clip(t.x-1,t.y-1,3,3)
plane.mesh:draw()
setContext()
popMatrix()
local x2,y2,x1y1=img:get(t.x,t.y)
local y1=math.fmod(x1y1,16)
local x1=(x1y1-y1)/16
local x,y=(x1+x2*16)/4096*plane.size.x,(y1+y2*16)/4096*plane.size.y
return vec2(x,y)
end

--shader source table (name assumed here; the mesh presumably gets it via plane.mesh.shader = shader(planeShader.v, planeShader.f))
planeShader = {
v = [[
uniform mat4 modelViewProjection;
attribute vec4 position;
attribute vec2 texCoord;
varying highp vec2 vTexCoord;

void main()
{
vTexCoord = texCoord;
gl_Position = modelViewProjection * position;
}
]],
f = [[
precision highp float;
varying highp vec2 vTexCoord;

void main()
{
highp vec2 T = vTexCoord * 4096.0;
highp float x1=mod(T.x,16.0);
highp float x2=(T.x - x1) / 16.0;
highp float y1=mod(T.y,16.0);
highp float y2=(T.y - y1) / 16.0;
gl_FragColor = vec4(x2/255.0,y2/255.0,(x1*16.0+y1)/255.0,1.0);
}
]]}

``````
• Posts: 1,976

@Ignatz But now, using it too much causes Codea to crash because it runs out of memory...

• edited January 2015 Posts: 3,295

``````function GetPlaneTouchPoint(t)
local img=image(WIDTH,HEIGHT)
setContext(img)
``````

try

``````local img=image(WIDTH,HEIGHT)
function GetPlaneTouchPoint(t)
setContext(img)
background(0)
``````

that may fix it.

• Posts: 5,396

@Jmv38 - good idea. You don't need the background command, though.

• edited January 2015 Posts: 5,396

Actually, I think the best may be to define img as a global in setup. I was able to do many touches without problems.

NB @SkyTheCoder - the image shader provides high-resolution accuracy up to a screen size of 4096x4096, because the z (blue) value is used to extend the x and y colour values by a factor of 16 each. That's what all the formulae are for.
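The packing can be sketched in plain Python (illustrative, mirroring the encode in the fragment shader and the decode in GetPlaneTouchPoint rather than reproducing them exactly): 12 bits of x and 12 bits of y go into three 8-bit channels, so each axis gets 4096 distinguishable positions instead of 256.

```python
def encode(tx, ty):
    """Pack texture coords in [0, 1) into an (r, g, b) byte triple."""
    X, Y = int(tx * 4096), int(ty * 4096)    # 12 bits per axis
    x1, x2 = X % 16, X // 16                 # low 4 bits, high 8 bits
    y1, y2 = Y % 16, Y // 16
    return (x2, y2, x1 * 16 + y1)            # blue carries both low nibbles

def decode(r, g, b):
    """Recover the 12-bit x and y values from a pixel colour."""
    y1 = b % 16
    x1 = (b - y1) // 16
    return (x1 + r * 16, y1 + g * 16)
```

The round trip is exact for every representable position, which is what makes the single-pixel lookup reliable.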

• Posts: 212

(commenting to get notifications on this topic)

• Posts: 1,976

@Ignatz Now it's working with img as a global, but why was it a little bit off when it was local? @matkatmusic you can also bookmark the thread by pressing the little star icon next to it.

• Posts: 5,396

No idea

• Posts: 5,396

@SkyTheCoder - even better, start by creating an image with all the pixel positions coded, then you only have to do it once, and each touch is a simple lookup.

Run this from setup

``````function Preset()
img=image(WIDTH,HEIGHT)
setContext(img)
pushMatrix()
SetupPerspective()
plane.mesh:draw()
setContext()
popMatrix()
end
``````

Then getting the touch is simply a lookup, super fast

``````function GetPlaneTouchPoint(t)
local x2,y2,x1y1=img:get(t.x,t.y)
local y1=math.fmod(x1y1,16)
local x1=(x1y1-y1)/16
local x,y=(x1+x2*16)/4096*plane.size.x,(y1+y2*16)/4096*plane.size.y
return vec2(x,y)
end
``````
• Posts: 453
• Posts: 5,396

@LoopSpace - there are shader compilation errors, because you can't define shaders before setup runs, in the latest version of Codea. You need to wrap your cube and sphere setup code in functions.

• Posts: 5,396

@LoopSpace - can you please confirm the way you choose the rotation for the objects? Specifically - you choose a rotation vector

``````local th = 2*math.pi*math.random()
local z = 2*math.random() - 1
local r = math.sqrt(1 - z*z)
rotation = vec4(r*math.cos(th),r*math.sin(th),z,360*math.random())
``````

and then rotate

``````mm:rotate(self.rotation.w,self.rotation.x,self.rotation.y,self.rotation.z)
``````

[which is a little confusing at first, because the components of the
rotation vector (x,y,z,w) are used in a different order (w,x,y,z) ]

So the x,y,z components define a normalised axis. z is first given a random value, and the length of (x,y,z) must equal 1, so the squared length of the x,y components must be `1 - z*z`, which explains the choice of r.

Then the x component = `r*math.cos(th)` and y = `r*math.sin(th)`, which is the way you calculate x,y values for angle th and radius r. The squares add up to `r*r` = `1 - z*z`, as required.

So it looks to me as though, having chosen a random direction for z, you then choose a further random rotation on the xy axis such that the overall xyz axis is normalised.
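That construction can be checked directly; a plain-Python sketch of the same axis choice (function name is mine), which always yields a unit-length axis:

```python
import math, random

def random_axis(rng):
    """Uniform random point on the unit sphere, used as a rotation axis."""
    th = 2 * math.pi * rng.random()
    z = 2 * rng.random() - 1                 # z uniform in [-1, 1]
    r = math.sqrt(1 - z * z)                 # so x^2 + y^2 = 1 - z^2
    return (r * math.cos(th), r * math.sin(th), z)
```

Choosing z uniformly and th uniformly is in fact the standard way to sample a uniformly distributed direction on the sphere.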

• Posts: 5,396

@LoopSpace - apologies for all the questions, but these are really valuable functions, so I'm looking at the code very carefully.

I notice that you test for a touch by working through a list of objects, and the first object to claim a touch gets it. If one object is partly in front of another, and you touch the first one, the back object would claim the touch if it was polled first. You provide a solution by pushing a touched object to the back of the list, so if you touch again, you should get the front object this time.

This would be ok if you're just arranging objects on the screen and don't mind a bit of toggling, but if you're playing a fast moving game, touch needs to be both consistent and accurate. If you touch the front object, you expect to always get the front object, and toggling between front and back objects would be a player nightmare.

So I don't see how the code could be used in a game, as it stands. Could you perhaps poll all objects, and if you have multiple claimants, pick the touch point closest to the camera?

• Posts: 5,396

I think there is a bug in line 105: you pass `t` through to `isTouchedBy`, but the touch parameter is `touch`, not `t`, which is therefore nil. The code still works, I think, because your functions have a line `t = t or CurrentTouch`.

• edited January 2015 Posts: 5,396

Should the addRect for the Picture be defined with x,y = 0,0 rather than 0.5, 0.5, since addRect centres the rectangle on the x,y values?

• Posts: 453

@Ignatz

• Shader: Thanks, I hadn't tried running this code with the latest Codea so hadn't spotted that.
• Rotations: Yes, that's about it. The `vec4` is a rotation using the angle-axis encoding. Since the axis is a 3-vector, it felt best to use the `x`, `y`, `z` components of the `vec4` for it, leaving the `w` component for the angle. However, if you define a `vec4` as `r = vec4(1,2,3,4)` then the order of the components is `x`,`y`,`z`,`w`. Hence the `w` component gets swapped from last when defined to first when fed into the rotation. The rotation is chosen "at random" by choosing the axis and the angle each at random (this isn't actually the best way to choose a random rotation, but for this code cheap-and-cheerful is good enough). I don't know how much detail you want me to go into here, but the way I choose the `x`,`y`,`z` components ensures that the axis is chosen properly "at random".
• Touch Order: You have to remember where I'm coming from here. I think I can lay claim to writing the first 3D program in Codea, and it was a shape explorer where you defined a shape by vertices and edges and could move those around. In that situation, it is common to want to select an object that happens to be behind another so the `z`-ordering was never the "right" way to decide which object to query first. If you want to ensure that you always pick the front object, then there are various ways you could modify the querying routine but I'd want to do a bit of testing as to which was fastest before deciding which is best.
• Touch (Bug): Yup, that's a bug. Thanks.
• Picture: I'm consistent, that's the main thing here. The Picture's internal coordinates are set so that it is in the unit square, and that's used when testing touches. If I want the position of the Picture to correspond to its centre, rather than its lower-left corner, then I'd also need to adjust the touch test. But that would be straightforward. I think that the Cube actually has the same property: it is the "unit cube" so the position is actually the position of the lower-left-back corner.

No worries on the questions.

• Posts: 5,396

@LoopSpace - thanks for all of that.

A couple of suggestions.

1. Your classes expect an axis-angle rotation parameter, but (yourself excluded) I doubt that very many Codea users would know how to create that. The alternative of a set of Euler angles, or maybe the current model matrix (which means the user can make the rotation they want, then call the class) will make the code more accessible.

2. Your code allows the user to move shapes along the touched plane. While interesting, this is unlikely to be a common requirement. More common is going to be the need to identify which object was touched, and in some cases, where it was touched. You have a class property which identifies the touch point, so this would not be hard to do. I would add a class function to fetch this value.

3. I do think users will always expect - and want - the front object to be touched. I can't think of any situations where this wouldn't be the case. (If you want to move stuff around, the common behaviour in every program I've ever worked with, is that touch always affects the front object, and if you want to to get at objects behind, you move the front object out of the way).

4. A greatly simplified example that shows how to identify touched objects and touch locations, will make it much easier for other people to use the classes.

Finally, I have a couple of questions on the utility functions at the end.

Why is applymatrix needed, when Codea already allows a 4D matrix x vector operation?

Is there a way to explain how any of _planetoscreen, screentoplane and screenframe work, in other than purely mathematical terms?

For example, I explained some of the code in your matrix post in a comment above like this -

"So what the matrices do is to convert a 3D point to its final position in front of the camera.

Then in the last part, m[13] is the x translation, which is scaled for distance by dividing it by the perspective factor m[16], then the same is done for y with m[14]."

That's the kind of explanation that might help me and others to understand these functions.

• Posts: 453

@Ignatz

1. Well, ideally I'd have it work with quaternions ... I chose angle-axis because that's how Codea handles rotations with the `rotate` function and the `matrix:rotate` method. I didn't want to add unnecessary code beyond what I wanted to demonstrate. I agree that this code is fairly rudimentary, but it was demonstration code rather than sample code, if you see what I mean.

2. The point here was that you can't go straight from the 2D screen to 3D world space. Rather, you should define a "plane of movement" in the 3D world and confine yourself to that; then it is possible to cleanly transfer a touch from the screen to that plane. It is for the object to define that plane, though. The Cube ensures that it is parallel to one of its faces while the Sphere uses the plane orthogonal to the camera.

3. That may be the common behaviour, but it's really really annoying behaviour. I don't like the fact that to get to the object behind then I have to move all the others out of the way! But as I said, this is easy to modify.

4. "Touch location" isn't necessarily well-defined due to the 2d-3d problem.

5. Re "applymatrix". I suspect that it is because when I originally wrote that code, then Codea didn't have this function.

6. I'll have a think and see if I can come up with an explanation.

• Posts: 5,396

@SkyTheCoder

I believe this minimal code - plus @LoopSpace's code - will give you the (x,y) touch point as a fraction of width,height, eg (0.4,0.6)

Assuming you set up your plane with Euler angles, I've provided a function that converts to an axis angle vector as required by LoopSpace.

``````function setup()
plane={}
plane.pos=vec3(0,0,-250)
plane.size=vec3(700,700,1)
plane.rotation=vec3(45,0,20)
--create axis/angle vector from x,y,z Euler rotations
r=AxisAngleFromEuler(plane.rotation)
p=Picture(plane.pos,plane.size,r,"Cargo Bot:Starry Background")
light=vec3(-0.717,0,0.717)
ambient=0.5
end

function draw()
background(50)
perspective()
camera(0,100,1400,0,0,-1)
p:draw()
end

function touched(t)
if t.state==ENDED then
p:isTouchedBy(t)
touchPoint=vec2(p.starttouch.x,p.starttouch.y)
print(touchPoint)
end
end

function AxisAngleFromEuler(r)
pushMatrix()
rotate(r.x,1,0,0) --reorder these if you want
rotate(r.y,0,1,0)
rotate(r.z,0,0,1)
local m=modelMatrix()
popMatrix()
local a=math.deg(math.acos((m[1]+m[6]+m[11]-1)/2))
local d=math.sqrt((m[7]-m[10])^2+(m[9]-m[3])^2+(m[2]-m[5])^2)
local x=(m[7]-m[10])/d
local y=(m[9]-m[3])/d
local z=(m[2]-m[5])/d
return vec4(x,y,z,a)
end
``````
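For what it's worth, the trace/antisymmetric-part extraction inside AxisAngleFromEuler can be sanity-checked outside Codea. A plain-Python sketch with a hand-built rotation matrix (row-major R[i][j], so the indexing differs from Codea's flat m[1..16]; it also assumes the angle is strictly between 0 and 180 degrees so the divisor d is nonzero):

```python
import math

def rot_z(deg):
    """Rotation about the z axis, as a row-major 3x3 matrix."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0],
            [s,  c, 0],
            [0,  0, 1]]

def axis_angle(R):
    """Angle from the matrix trace, axis from the antisymmetric part."""
    angle = math.degrees(math.acos((R[0][0] + R[1][1] + R[2][2] - 1) / 2))
    d = math.sqrt((R[2][1] - R[1][2])**2
                + (R[0][2] - R[2][0])**2
                + (R[1][0] - R[0][1])**2)
    axis = ((R[2][1] - R[1][2]) / d,
            (R[0][2] - R[2][0]) / d,
            (R[1][0] - R[0][1]) / d)
    return axis, angle
```

A 30-degree rotation about z should come back as axis (0, 0, 1) and angle 30, which it does.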
• Posts: 5,396

By the way, I did a speed test comparison of

1. the picture-based approach I suggested earlier, using a coded picture created at startup, so that each touch only requires a single pixel lookup, and

2. the LoopSpace approach

The clear winner is LoopSpace, running about 7x faster.

But since even the "slow" picture approach can do 1,000,000 lookups in about 13 seconds, i.e. one lookup takes 0.08% of a draw cycle, I doubt that it is going to matter either way.