
3D dress-up game?

24 Comments

  • dave1707 Mod
    Posts: 8,030

    @yojimbo2000 When I gave you the path to a file, I should have mentioned that you can read and write files with any extension, even though you can't see them. I mentioned that maybe a year or two ago when I was posting examples about reading and writing files with io commands.

  • Posts: 2,020

    Yeah, I figured none of this is new knowledge. I'd been searching the forum for readText examples but I hadn't got round to searching for the io functions. I guess I've neglected io. I've just relied on global/project data, but actually text assets, plus io actions on Dropbox, are far more suitable for what I'm trying to do. Thanks again for bringing this to my attention.

    I just did a quick test of my new Blender workflow and it works perfectly. Save straight to Dropbox, no need to manipulate or edit the files in any way, load them straight into Codea with io.read, no need for an asynchronous loading class/methods etc. This is a much, much faster workflow than what I was doing before, and is really going to speed up 3D development. I guess that'll teach me to neglect io.
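
    For reference, the io side of that workflow is only a few lines. A minimal sketch (the Dropbox.assets path and the file name here are assumptions; use whatever path dave1707 gave you):

    -- rough sketch: read a .obj saved straight into Codea's Dropbox folder
    local path = os.getenv("HOME").."/Documents/Dropbox.assets/model.obj"
    local f = io.open(path, "r")
    if f then
        local objText = f:read("*a")   -- the whole file as one string
        f:close()
        -- hand objText to the .obj parser from here
    end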

    I'll post my version of the importer when it's ready.

  • Posts: 557

    Wow, lotta talkin'! As for my project, I'm still struggling with abstracting the obj loader so I can see my model. @yojimbo2000, thanks for the Dropbox investigations. You also wrote:

    It's os.remove, not io, and it is present in Codea. It doesn't seem to delete the file as such, though, just deletes its contents (presumably the same as writing an empty "" string).

    ...have you found out anything more about that? Is there a way to actually delete the file "for reals"?

  • Posts: 557

    Whoops, @yojimbo2000, I didn't see your most recent post--sounds like you're building a better version of exactly what I'm trying to do!

  • edited August 2015 Posts: 557

    Yay! More steps down. Originally they were:

    1. get a female 3D model from MakeHuman in a long-sleeved dress and pants
    2. get that model into Codea via the .obj importer
    3. make the meshes for the clothes use a local image resource for texture
    4. get the top meshes to treat an alpha value of 0 as an actual transparency, so we can make the shirt and pants look shorter just by erasing that area on the texture file
    5. skip the whole "image editing UI" thing and just have the file on Dropbox, and edit it with an external image editor

    Got #2 done! It's in Codea yay:

    http://i49.photobucket.com/albums/f271/jwc15/Photo Aug 12 4 18 01 PM_zpszggldjfb.png

    Along the way, #3 just happened naturally -- didn't need the .ply importer to do textures after all.

    And as it happens, #5 is also just a natural by-product of #2. Once the model's reading textures off of Dropbox, any edit to them naturally appears on the actual render the next time the project launches.

    Like so! If I make a big orange spot on the jeans:

    http://i49.photobucket.com/albums/f271/jwc15/z3Djeans_basic_diffuse_orange_dot_zpsfc1ekvin.png

    After a sync with Dropbox, it becomes a bright orange spot on the model:

    http://i49.photobucket.com/albums/f271/jwc15/Photo Aug 12 5 17 52 PM_zpss75eyn8v.png

    Easy-peasy. I guess all that is obvious to people who've done this before. It's new to me though.

    So all that's left is #4. Making transparency on the texture become transparency on the mesh.

    Here's the same pants mesh, but with a hole cut out of it (it shows up as a white hole in the preview below, but it's actually transparent):

    http://i49.photobucket.com/albums/f271/jwc15/z3Djeans_basic_diffuse_zpsc7pe3cif.png

    And here's how it looks on the model:

    http://i49.photobucket.com/albums/f271/jwc15/Photo Aug 12 5 09 19 PM_zpsdv2gonla.png

    @Ignatz, @Yojimbo2000, any advice? I've kept my head above water so far, but I am wayyyyy out of my depth on this here. I don't even know where to start. How do I figure out how to make a mesh treat transparency on its texture as actual transparency?

  • Ignatz Mod
    Posts: 5,396

    Why would a number go missing when they are delimited?

  • Posts: 2,020

    @UberGoober good work!

    How many vertices is that? And how is the performance on your device?

    @Ignatz I dunno. Data rot?

  • edited August 2015 Posts: 2,020

    @UberGoober

    In the fragment shader, take whatever variable gl_FragColor is being set to and do something like this:

    col.a = pixel.a; // where pixel is the texture sampler2D value, i.e. the final alpha is determined just by the texture color, not by lighting etc.
    gl_FragColor = col;
    

    the texture will need to be imported into Codea as a PNG (as JPEGs don't have alpha), which AFAIR means not via the camera roll, and the clothing mesh has to be drawn after the human mesh.

    Great work though.

  • Ignatz Mod
    edited August 2015 Posts: 5,396

    I use discard for transparency

    if (pixel.a==0.0) discard;
    

    I think you will find setting col.a=0 doesn't work properly. I have been creating explosions which become transparent. I spent some time fading the alpha value down to zero in the shader without result, before realising I had to fade r, g and b as well. In other words, to create transparency gradually, you need something like this:

    pixel = pixel * fade; // where fade is a factor from 0 to 1
    

    and for full transparency, fade=0
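
    Putting discard and the fade together, a minimal fragment shader might look something like this (just a sketch: vColor and vTexCoord are the standard Codea varyings, texture is the standard texture uniform, and fade is a hypothetical uniform you would set from Lua):

    uniform lowp sampler2D texture;
    uniform lowp float fade;            // 1.0 = opaque, 0.0 = fully transparent

    varying lowp vec4 vColor;
    varying highp vec2 vTexCoord;

    void main()
    {
        lowp vec4 pixel = texture2D(texture, vTexCoord);
        if (pixel.a == 0.0) discard;    // holes cut out of the texture
        // multiply the whole colour (r, g, b and a) so partial fades work too
        gl_FragColor = pixel * vColor * fade;
    }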

  • Posts: 1,976

    @yojimbo2000 @Ignatz You'll also want to use discard instead of just setting the gl_FragColor's alpha, because OpenGL does some weird things with transparency. If the texture has transparency and you look at the mesh from different angles, you'll get some pretty odd results, with different areas of the mesh not rendering because OpenGL believes they're not in sight, or colors behind a transparent image not being mixed correctly.

  • Ignatz Mod
    edited August 2015 Posts: 5,396

    If that's a new style, I don't like it much.

    PS I can assure you that we (or at least I) struggle as much as anyone else with this stuff. We just started a little sooner, that's all.

  • Ignatz Mod
    edited August 2015 Posts: 5,396

    @SkyTheCoder - I find discard works fine as long as you draw transparent objects last, and in order from back to front

    EDIT - I meant to say setting all the colour values works fine, not just discard.
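
    For what it's worth, a rough sketch of that back-to-front ordering (the table and field names here are made up):

    -- sort transparent objects so the one farthest from the camera draws first
    table.sort(transparentObjects, function(a, b)
        return a.position:dist(cameraPos) > b.position:dist(cameraPos)
    end)
    for _, obj in ipairs(transparentObjects) do
        obj.mesh:draw()
    end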

  • Posts: 557

    So, I'm chuffed. In terms of core functionality, this is already really close. I showed it to my girls, and of course they wanted to touch it and move it. And I realize what they'd like best is to be able to draw right on the model.

    That seems really hard, but then this seemed really hard at first too. And it actually was significant effort, but nowhere near what I'd feared. Talking with you guys, I narrowed it down to five things that seemed doable, and then I was able to actually do it.

    So, trying to continue in the same vein, to implement draw-on-model it seems like I need two core functionalities:

    1. Be able to live-update a texture
    2. Be able to detect where on the texture a person is touching when they tap or drag on the screen.

    ...not sure where to start on those. Any tips or sample projects or tutorials you could point me towards?

  • Posts: 1,976

    @Ignatz Yes, but in most cases, it's difficult to sort the order you draw objects in, and even then, you can get some weird results, i.e. from one angle you look through the transparent object and only see the scene behind it, but from another, you also see the transparent back of the object.

  • Ignatz Mod
    edited August 2015 Posts: 5,396

    Live updating is simply a matter of replacing one texture image with another.

    I can show you how to detect a touch.

    1. Capture the touch x,y position

    2. Draw the next frame to an image in memory, and for all meshes on which you want to trap touches, use a coded version of the image for that mesh. By this, I mean an image whose RGB values are set in a way that identifies the touch position.

    For example, suppose you have several images, any of which could be touched. Create a duplicate image for each of them, and replace the normal colours (for non transparent pixels) with the following
    R = image #
    G = X position of pixel
    B = Y position of pixel

    So for the first image, R will be set to 1 for each pixel, for the second image, R will be set to 2, etc.

    G and B have to be set as a value 0-255, so your resolution (the limit of your accuracy) is 1/255 of the image width/height, which is good enough for fat fingers.
    So if a pixel is at 200,300 on an image which is 400x1200, you set G = 255*200/400 and B = 255*300/1200.

    Before you draw, set the clipping area so Codea only bothers drawing this image to the tiny area around the touch - for speed. If your touch is at x,y, I would just write clip(x-1,y-1,2,2). After you're done, reset clip with clip()

    Now set your special duplicate as the mesh texture and draw, for each mesh. Only bother drawing the meshes that can be touched.

    When you've drawn this image in memory with setContext, get the colour of the pixel at (x,y), decode the RGB values, and now you know which image was touched and where.

    You set the special duplicates up once-only at the beginning, so this detection process will be extremely fast.

    If you want a more dynamic approach, you can write a special shader to encode the pixels for you on the fly.
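
    The decode step at the end is just arithmetic. A rough sketch (the names here are made up; img is the in-memory image you drew with setContext, and x, y is the touch position):

    -- read back the coded pixel at the touch position
    local r, g, b = img:get(x, y)
    local whichImage = r                    -- 1 for the first image, 2 for the second, etc.
    local texX = g / 255 * textureWidth     -- map 0-255 back to texture pixels
    local texY = b / 255 * textureHeight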

  • Ignatz Mod
    Posts: 5,396

    Update. This post of mine

    https://coolcodea.wordpress.com/2015/01/04/191-locating-the-3d-position-of-a-2d-point/

    shows how to use a shader to insert position values into the pixel colours. This post suggests using the third colour to get more accuracy for x,y, but in your case, you will need it to identify which mesh was touched, and pass the mesh id into the shader for this purpose.

    If I'm confusing you, just say. #-o

  • edited August 2015 Posts: 2,020

    @Ignatz 's idea is a good one. In your case it would be simpler, because you're trying to find the texCoord at which the model was touched, not the position, so you wouldn't need to pack extra positional data into the blue channel or take the final step of extrapolating the position from the texCoord (and with a complex form like this one, I'm not sure you could extrapolate the position from the texCoord anyway?). I would go with something like
    col = vec4(texCoord.x, texCoord.y, modelNumber, pixel.a);
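
    Fleshed out, that fragment shader might look roughly like this (a sketch only: vTexCoord is the standard Codea varying, and modelNumber is a hypothetical uniform you would set from Lua for each mesh):

    uniform lowp sampler2D texture;
    uniform lowp float modelNumber;     // e.g. 1.0/255.0 for the shirt, 2.0/255.0 for the jeans

    varying highp vec2 vTexCoord;

    void main()
    {
        lowp vec4 pixel = texture2D(texture, vTexCoord);
        // pack the texture coordinate into r,g and the mesh id into b;
        // keep the texture's alpha so holes in the clothing stay untouchable
        gl_FragColor = vec4(vTexCoord.x, vTexCoord.y, modelNumber, pixel.a);
    }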

  • Ignatz Mod
    edited August 2015 Posts: 5,396

    What I'm saying is that if you are using a pixel colour to store x,y position, there are only 256 values for width and height, but that should be enough.

    My approach gives you back the place on the texture that was touched, and if you want to change the texture image, that's exactly what you need.

    EDIT - I agree with yojimbo2000, I was saying the same thing but I guess I wasn't clear!

  • Ignatz Mod
    edited August 2015 Posts: 5,396

    On the subject of efficient vertex storage and recovery, I had a go at using simple strings, as shown below. Decimals are truncated for this test.

    EDIT - On my iPad3, this encodes 500,000 (vec3) vertices in about 10 seconds and decodes them in 7 seconds.

    function setup()
        m=CreateTestSet(500000) 
        t=os.time() txt=Encode(m)  print("Encoding: "..os.time()-t)
        t=os.time() m2=Decode(txt) print("Decoding: "..os.time()-t)
        for i=1,#m do
            if (m[i]-m2[i]):len()>0.00001 then print("ERRORS!") break end
        end
        print("All done")
    end
    
    function CreateTestSet(n)
        local verts={}
        local rand=function() return math.floor((math.random()-0.5)*20000)/100 end
        for i=1,n do
            verts[i]=vec3(rand(),rand(),rand()) --between -100 and +100
        end
        return verts
    end
    
    --this method is faster than concatenating strings
    --or using tostring to collapse each vector to a string
    function Encode(tbl)
        local tbl2={}
        for i=1,#tbl do
            tbl2[#tbl2+1]=tbl[i].x
            tbl2[#tbl2+1]=tbl[i].y
            tbl2[#tbl2+1]=tbl[i].z
        end
        return table.concat(tbl2,",")
    end
    
    function Decode(txt)
        local tbl={}
        tbl=loadstring("return {"..txt.."}")()
        local tbl2={}
        for i=1,#tbl,3 do
            tbl2[#tbl2+1]=vec3(tbl[i],tbl[i+1],tbl[i+2])
        end
        return tbl2
    end
    
  • Posts: 2,020

    @Ignatz I'm glad you posted that test. I added json to it, and the json is ten times slower! I think it's because you have to make separate sub-calls to json.encode every time you encounter an unsupported type. Incidentally, if you make m a local variable, the tests run 1.5 to 2 times as fast.

    function setup()
        jsonMethods()
        local m=CreateTestSet(500000) 
        print("starting table.concat test")
        local t=os.time() txt=Encode(m)  print("Encoding: "..os.time()-t) --5-6 seconds (iPad Air). 3-4 secs (local)
        t=os.time() m2=Decode(txt) print("Decoding: "..os.time()-t) --2-5 seconds. 2-3 secs (local)
        for i=1,#m do
            if (m[i]-m2[i]):len()>0.00001 then print("ERRORS!") break end
        end
        print("starting json test")
        t=os.time() txt=json.encode(m)  print("Encoding: "..os.time()-t) --63-70 seconds!!  29-45 secs (local) {exception = jsonVec3}
        t=os.time() m2=decodeJson(txt) print("Decoding: "..os.time()-t) --23 seconds. 13-20 secs (local)
        print("All done")
    end
    
    function CreateTestSet(n)
        local verts={}
        local rand=function() return math.floor((math.random()-0.5)*20000)/100 end
        for i=1,n do
            verts[i]=vec3(rand(),rand(),rand()) --between -100 and +100
        end
        return verts
    end
    
    --this method is faster than concatenating strings
    --or using tostring to collapse each vector to a string
    function Encode(tbl)
        local tbl2={}
        for i=1,#tbl do
            tbl2[#tbl2+1]=tbl[i].x
            tbl2[#tbl2+1]=tbl[i].y
            tbl2[#tbl2+1]=tbl[i].z
        end
        return table.concat(tbl2,",")
    end
    
    function Decode(txt)
        local tbl={}
        tbl=loadstring("return {"..txt.."}")()
        local tbl2={}
        for i=1,#tbl,3 do
            tbl2[#tbl2+1]=vec3(tbl[i],tbl[i+1],tbl[i+2])
        end
        return tbl2
    end
    
    function jsonMethods()
        meta = getmetatable(vec3())
        meta.__tojson = function(t)
            return json.encode({t.x,t.y,t.z})
        end
    end
    
    --[[ --as an alternative to setting the meta-method, add this as the exception function
    function jsonVec3(_, v)
        return json.encode({v.x,v.y,v.z})
    end
      ]]
    
    function decodeJson(txt)
        local tab = json.decode(txt)
        local vert = {}
        for i,v in ipairs(tab) do
            vert[i] = vec3(v[1], v[2], v[3])
        end
        return vert
    end
    
  • Ignatz Mod
    Posts: 5,396

    That's a much bigger difference than I'd expect, but I guess being able to use loadstring and work with tables rather than strings helps a lot.

  • edited August 2015 Posts: 2,020

    json is useful for tables with lots of dimensions, but I guess it's overkill for a one-dimensional array.

    I've been doing lots of testing and comparisons, and I'm still undecided as to whether an intermediary mesh format is needed. Once you take all the asynchronous loading stuff out of the obj loader, it becomes a lot simpler: it doesn't have to be a class with methods and state management any more, it can just be a function.

    Also, I realised that part of why my load times were getting slow when there are lots of models is the CalculateAverageNormals function, which I've been using regardless of whether I'm loading from the .obj file or the intermediary format. This means that with the intermediary format the overall project load is faster, but only by a third, not a massive difference. You could save the normals (either in the interim file, or have Blender export them for you) so that they're only calculated once, but that would nearly double the size of the 3D assets. It's a question of hitting the right balance between asset size and asset load speed.

    I'm wondering whether calculating the average normals could be done faster as part of the processing of the .obj file, seeing as the obj file already contains a set of unique points for the model, which is one of the things you need for the calculation. It would mean one less iteration through all of the vertices.
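
    For reference, the average-normals pass over the unique points boils down to something like this (a rough sketch, not the actual CalculateAverageNormals function; it assumes points is the list of unique vec3 positions and faces is the list of index triples parsed from the .obj):

    function averageNormals(points, faces)
        local normals = {}
        for i = 1, #points do normals[i] = vec3(0, 0, 0) end
        for _, f in ipairs(faces) do
            local a, b, c = points[f[1]], points[f[2]], points[f[3]]
            local n = (b - a):cross(c - a) -- face normal (unnormalised, so larger faces weigh more)
            normals[f[1]] = normals[f[1]] + n
            normals[f[2]] = normals[f[2]] + n
            normals[f[3]] = normals[f[3]] + n
        end
        for i = 1, #normals do normals[i] = normals[i]:normalize() end
        return normals
    end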

    Pros of using an intermediary format:

    • load times are around a third faster (4 seconds instead of 6 on an iPad Air for 130,368 vertices worth of models). Not as big a gain as I'd hoped for, because both methods still need the average normals to be calculated. And if average normal calculation can be done at the .obj processing stage, then .obj might have an advantage over the intermediary format.

    • for animation via keyframe interpolation, I pack the vertices from several .obj files into one file (I put colours and texCoords into a separate file, as these don't change from frame to frame). Not sure if this is a pro, but keeps your Dropbox folder tidy.

    • by saving it as a text asset, it automatically gets included in your project when you export. Although, you can achieve a similar effect by changing the extension of the obj file in Dropbox from .obj to .txt (you can do this in the Codea obj loading program using os.rename), and then loading the file with readText instead of io.read. So again, perhaps not such a unique pro.

    Cons

    • 3D asset sizes can be almost twice as big (1404 kb vs 2639 kb for 130,368 verts worth of models). This is using @Ignatz 's minimal comma-separated value format too.

    • it does involve an extra step of importing

  • Ignatz Mod
    Posts: 5,396

    I'm not sure you'll get much gain from doing the normals at the obj processing stage; I suspect most of the effort is in the cross product calcs.

    I think I'd start off using native obj files, because

    1. that keeps the project simpler

    2. it can be added later

    3. other things are more important right now, like getting the animation working effectively

  • edited August 2015 Posts: 557

    @Ignatz, @Yojimbo2000, thanks for the pointers! And code! I'll get working on it. The whole thing is making my head spin, but the blog helps a lot.

    One question, to which the answer might be obvious if I understood the whole thing better: will I be able to save the drawn-on texture as a new file?

  • Posts: 557

    @Ignatz, I just pasted the sample code from your blog into Codea and got this error:

    error: [string "function setup()..."]:84: bad argument #1 to 'get' (number has no integer representation)
    

    Any advice?

  • Posts: 2,020

    Yes, using saveImage.
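
    Something like this (the image variable and the asset key here are hypothetical):

    -- textureImg is whatever image you've been drawing on; sync Dropbox afterwards to push it out
    saveImage("Dropbox:jeans_painted", textureImg)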

  • Ignatz Mod
    edited August 2015 Posts: 5,396

    @UberGoober - Curses! That's caused by another change in Codea versions.

    It means the function on that line requires integers, so any parameters that might be fractional need to have math.floor/ceil applied.

    I added this at line 77 (untested)

        t.x,t.y=math.floor(t.x+0.5),math.floor(t.y+0.5)
    
  • Posts: 557

    @Ignatz: Rats. The new error:

    error: [string "function setup()..."]:78: attempt to index a userdata value (local 't')
    
  • edited August 2015 Posts: 557

    @Ignatz: it's mysterious to me, but I fixed it. I declared local variables touchX and touchY and then defined them as t.x and t.y. Then I pasted in the new variable names wherever t.x and t.y were. And now it works. Huh.

    The working code:

    function GetPlaneTouchPoint(t)   
        --if your rectangle never changes position, delete these commented lines, you don't need them
        --however, if it moves around, you'll need to draw the shader image for each touch
        --in that case, uncomment all these lines
        local touchX, touchY = t.x, t.y
        shaderImg=image(WIDTH,HEIGHT)
        setContext(shaderImg)
        pushMatrix()
        SetupPerspective()
        touchX,touchY=math.floor(touchX+0.5),math.floor(touchY+0.5)
        clip(touchX-1,touchY-1,3,3)
        plane.mesh.shader=shader(PlaneShader.v,PlaneShader.f)
        plane.mesh:draw()
        setContext()
        plane.mesh.shader=nil
        popMatrix()
    
        local x2,y2,x1y1=shaderImg:get(touchX,touchY)
        local y1=math.fmod(x1y1,16)
        local x1=(x1y1-y1)/16
        local x,y=(x1+x2*16)/4096*plane.size.x,(y1+y2*16)/4096*plane.size.y
        return vec2(x,y)
    end
    
  • edited August 2015 Posts: 557

    @Yojimbo2000, the lines preceded by just "v" are the vertices, right?

    If that is the case, there are 1,305. Some of that is the figure model underneath the clothes, which I'm not sure is getting rendered wherever it's fully covered by the clothes.

  • edited August 2015 Posts: 2,020

    Those are the unique points. The mesh will have quite a few more vertices than that. In the obj class, try printing the length of the vertices array after the mesh has been parsed: print("vertices:", #self.v), or whatever the array is called that holds the vertices. I was just curious about how detailed the models from MakeHuman are; it's not something that you particularly need to check, unless you're having performance issues.

  • Posts: 557

    @Yojimbo2000, I'm not sure if that would give you the data you want--MakeHuman exports squares, and I have to take the model into Blender and manually convert everything to triangles.

  • Ignatz Mod
    edited August 2015 Posts: 5,396

    Blender has an option to triangulate faces automatically when you export to obj.

    I'll have a look at that new error today. So you have the touch working?

  • Posts: 557

    @Ignatz any chance you can tell me (or screen cap even) where that option is? The Blender UI is one of the most impenetrable barrages of menus and icons I've ever seen.

    Yes, touch is working on the demo now. Did you see my fix? It seemed like I didn't really do anything at all. Can you explain why that worked / was necessary?

  • Ignatz Mod
    Posts: 5,396

    Blender UI is #%^£€¥?!%€#%%###€%%##*# IMHO. I often lost menus and couldn't get them back. Even simple things like zooming were hard to find.

    As I recall, the triangulate option comes up in the form that appears once you have selected export and the obj format, but it is out of sight at the bottom and you have to scroll down. This is from memory and I will check when I am able to get out of bed (but you should understand that I am currently keeping a very grateful kitty warm, and do not have permission to get up yet).

    I haven't looked at that strange error yet, but will do so.

  • Ignatz Mod
    Posts: 5,396

    I think the problem with that code is another subtle change to Codea that I wasn't aware of.

    I am passing the touch object through to my function, and reassigning the x and y values of that object. Previously that was allowed, but now it appears it is either read only, or the type has changed in some way. Using temporary local variables instead fixes the problem.

  • Ignatz Mod
    Posts: 5,396

    What I said about Blender above was correct. Try it and see.

  • Posts: 557

    ...everybody's hating on reference values these days, amirite. :)

    Ok so I'm trying to grok this.

    1. I need to keep a screen-sized image in memory--usually totally blank
    2. When a touch happens, define a small clipping area around the touch and render that small area in the image in memory with a shader
    3. Somehow the shader knows which point on the texture I'm touching, and I can store that point's x, y in two color values [the main code can't grab those values directly somehow?]
    4. Using that x and y I can modify the texture image on the actual screen, which somehow propagates instantly? I don't have to do some kind of saving and reloading?

    Yeah, a lot I'm confused about there. I think 1 and 2 I could manage with some work. But 3 and 4 have me flummoxed.

  • Ignatz Mod
    edited August 2015 Posts: 5,396

    If you understand shaders, you will understand this process completely.

    1. You don't need to keep an image in memory - just create a temporary one when a touch happens.

    2. The temporary image is full size, but we only draw the part we want, using clip.

    3. The shader is given the texture image and, for each vertex, the x,y position on the texture that applies to it (this is true of any shader - and any mesh using an image texture).

    For any point on the mesh, the shader then simply interpolates the texture x,y position from the given positions for each of the three vertices surrounding the point.

    The main code can't grab this point, because - well, I'll let an earlier post explain.

    Looking up the touched pixel position on the image in memory gives you the exact x,y position of the touch (in encoded form as a colour) on your texture (and nothing else).

    What you do with it is up to you. You can poke a hole in your image and save it, or put a red dot on it, or whatever you like. Then redraw it.

    So steps 1 to 3 simply give you an x,y position of the touch. That's all - and it may not seem like much - but if you read my post above, you'll see that this is extremely difficult!

  • I didn't think this kind of stuff was possible until I saw this discussion! Wow! Impressive!

  • Posts: 557

    @Ignatz - that eBook is very helpful. Zero to sixty in seconds flat.

    So, ok, to summarize the theory (and you've proved it works):

    1. shaders get passed every visible pixel of a 3D model and can change how they're drawn on the screen
    2. so shaders naturally contain the exact information we need, i.e. two different x, y sets:

      • the first set is the position on the model texture of every pixel of the model that is to be drawn to the screen
      • the second set is the position on screen to which each pixel is drawn
    3. the only problem is that, as implemented, shaders cannot directly expose the pairing of these two x, y sets
    4. so we're being very sneaky. When the shader draws to the screen, i.e. draws using the second x, y set, it changes the r, g, b information of each pixel such that the g and the b values are actually the x and y values for the first set
    5. and instead of drawing those pixels to the visible screen, it draws it to a virtual screen in memory
    6. so when a touch happens:

      • Codea sends us the pixel touched
      • We define a small square around that pixel
      • We use the special sneaky shader and render just that square--but we render it to the virtual image in memory, not the actual screen
      • We inspect the g and b values of the pixel at the coordinate of the touch on the virtual image
      • That gives us the x and y values we need to draw directly on the model texture at the exact spot that is 'under' the current touch

    ... right?

  • Ignatz Mod
    Posts: 5,396

    Perfect =D>

  • edited August 2015 Posts: 557

    Trying to think it out in pseudocode.

    --in touched(touch) function:
    
         porthole = areaToRender(touch)
         memoryImage = bufferImageRenderedAt(porthole)
         smuggler = rgbAt(memoryImage, touch)
         if smuggler ~= vec4(0,0,0,0) then
              textureXY = vec2(smuggler.g, smuggler.b)
              drawToModel(textureXY, color)
         end
    

    Sound right? Should be able to fill in those methods...

  • Ignatz Mod
    Posts: 5,396

    sounds good.

    I'd put all that in its own function, to avoid cluttering the touched function

  • Posts: 557

    Something I don't understand here: drawing to the texture seems to only work if done before the first time draw() is called.

    Here's some code I'm using for tests:

    function DrawToMeshTextureTests:randomDraw()
        local texture = self.plane.mesh.texture
        local randomX, randomY = math.random(texture.width), math.random(texture.height)
        local randomPosition = vec2(randomX, randomY)
        self.drawer:drawAt(randomPosition, self.drawColor, self.drawWidth)
    end
    

    Explanation: I've abstracted drawing on textures with a class called DrawOnMeshTexture, which is initialized with a mesh, and thereafter can be used to draw to its texture. self.drawer above is an instance of this class. self.drawColor and self.drawWidth are just convenience variables.

    If I call randomDraw() (the method above) in the setup() function, it works. But if I call it from the touched(touch) function, no go. What's up why huh whaaa?

  • Posts: 2,020

    We'd need to see more code: the draw loop, the drawAt method, etc. (You could also be a bit more specific about what's going wrong!)

    One thing it could be: you're trying to draw to a 2 dimensional image, but you're calling this function from within a 3D space.

    Do you switch back to an orthogonal, 2D space at the end of the draw loop? This is the code you need:

    ortho()
    viewMatrix(matrix())
    

    If you are already doing this, then calling drawAt from touched should be fine (as touched is called at the end of the draw loop), and your problem is something else.

    If you do need to update some 2D textures from within a 3D draw loop, you can force the drawing to take place at the end of the draw loop by putting a tiny delay on it, tween.delay(0.001, do2Ddrawing).
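
    In other words, the end of the draw loop would look roughly like this (a sketch: drawModels stands in for whatever draws your meshes, and the camera values are arbitrary):

    function draw()
        background(40, 40, 50)
        -- 3D pass
        perspective()
        camera(0, 0, 300, 0, 0, 0)
        drawModels()
        -- back to a 2D projection before the frame ends,
        -- so touched() and any setContext image drawing work in 2D
        ortho()
        viewMatrix(matrix())
    end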

  • Posts: 557

    @yojimbo2000: sorry to be unclear: when I say it works when called from setup(), I mean that a random dot gets drawn on the texture. Conversely, when called from touched(touch), nothing happens at all.

    This is my draw() method:

    function draw()
        pushMatrix()
        drawToMeshTextureTests:setupPerspective()
        drawToMeshTextureTests:drawDrawables()
        popMatrix()
    end
    

    Right now I'm working off of Ignatz's demo, so setupPerspective() is the same as his. And drawDrawables simply iterates through a table of objects with draw() methods and calls them.

  • Posts: 2,020

    But after all the 3D drawing is done, do you switch back to a 2D orthogonal projection? Using:

    ortho()
    viewMatrix(matrix())
    

    push/pop matrix is not enough, you need the above 2 lines.

  • edited August 2015 Posts: 557

    @yojimbo2000, that sure worked! Fantastic!

    I'd like to keep as much as possible of the custom code inside the DrawOnMeshTexture class, so what I've done is put those lines at the top of the 2D drawing code in that class, which seems to work just as well. Is there any problem there?
