advance only when a new camera capture frame exists

in Questions · Posts: 15

I'm making some optical flow shader experiments, and I noticed something that appears to be a bit of an issue:
If the draw routine is happening at 60fps, is the camera potentially updating slower than that? My entire program depends on comparing the difference between the current camera frame and the last one, and as far as I can tell I'm not getting a different image from the camera each draw cycle.

Is there a way to wait at the end or beginning of draw until I have a new image from CAMERA?

Comments

  • Posts: 1,966
    You could try only drawing the shader every other frame, so that you run at 30fps. What device are you running on? And are you monitoring DeltaTime to see what fps you're currently getting?
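
    Something along those lines might look like this (a rough, untested sketch; the CAMERA capture here just stands in for the heavier optical flow pass):

    function setup()
        frameCount = 0
    end
    
    function draw()
        background(40, 40, 40)
        frameCount = frameCount + 1
        -- refresh the capture only on even frames (~30fps on a 60fps display)
        if frameCount % 2 == 0 then
            cam = image(CAMERA) or cam   -- keep the old capture if none is ready yet
        end
        if cam then
            sprite(cam, WIDTH/2, HEIGHT/2)
        end
    end
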
  • AxiomCrux Posts: 15

    I ended up making lemonade: I got something cool by processing the camera through another shader and using that capture as the difference texture rather than the raw last camera frame.

    I'm getting a rather large fluctuation; it seems to start off fast (~60fps) and slow down to ~30-40 after a minute or so. Not sure what that's about. I'm just using print(DeltaTime).

    I'm using an iPad Pro at the moment. I'll post some test clips if I can figure out how to get the video files out of Codea; they're currently not showing up in my photo library.

  • CamelCoder Posts: 144

    @AxiomCrux it slows down because print() eventually overloads the output buffer and starts slowing down the FPS. Instead, put this in setup:

    parameter.watch("1/math.floor(DeltaTime)")
    
  • dave1707 Mod
    Posts: 5,620

    If you're using a print statement in the draw function, or in a function called from draw, that will slow the program down after a while and eventually crash Codea. All of those print statements build up in the print buffer, and the program slows down as more and more of them are added.
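
    If you do want a value out of draw, one option is to throttle the printing so the buffer never builds up. A rough sketch (it assumes output.clear is available to empty the print area):

    function setup()
        lastPrint = 0
    end
    
    function draw()
        background(0)
        -- print at most once per second, clearing the old output first
        if ElapsedTime - lastPrint > 1 then
            output.clear()
            print("fps ~ "..math.floor(1/DeltaTime))
            lastPrint = ElapsedTime
        end
    end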

  • dave1707 Mod
    Posts: 5,620

    @AxiomCrux I ran a test program that added 1 to a counter and displayed the text value each draw cycle. I ran the camera for a few seconds and when I looked at the video, I could drag the video frame to show each increment of the counter. So that tells me the camera is capturing each draw cycle.

  • dave1707 Mod
    Posts: 5,620

    @AxiomCrux After more testing, it looks like even though the draw function might slow down due to heavy calculations, the camera still takes its pictures at its own rate. So several camera frames could have the same image until the next draw cycle.

  • dave1707 Mod
    Posts: 5,620

    @AxiomCrux If you're comparing camera images, one way to tell whether you got a different camera image is to put a small colored rectangle in the upper left corner of the image. Each time the image changes, draw a different color. If the color changes, then the image was updated and you can compare the images. That way it won't matter how often the image changes compared to the camera.
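
    One way to read that idea, as a rough sketch (stamping each capture you take with an alternating corner color via setContext, so you can see at a glance when a new capture made it into your pipeline):

    function setup()
        toggle = false
    end
    
    function draw()
        background(0)
        local cap = image(CAMERA)
        if cap then
            toggle = not toggle
            setContext(cap)                  -- draw the marker into the capture itself
            noStroke()
            fill(toggle and color(255,0,0) or color(0,255,0))
            rect(0, cap.height-20, 20, 20)   -- small square in the upper left corner
            setContext()
            sprite(cap, WIDTH/2, HEIGHT/2)
        end
    end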

  • west Posts: 674

    @AxiomCrux Interesting - a while back I was looking at doing frame by frame comparisons trying to emulate a green screen (https://codea.io/talk/discussion/7634/magic-mirror-green-screen-type-effect-with-video)

    A major issue I found was that there is preprocessing of the camera frame for automatic light adjustment - this caused an issue when trying a frame-to-frame comparison, as any new object in the frame would rebalance the lighting. Did this cause an issue for you?

  • AxiomCrux Posts: 15

    @west yeah, the built-in camera processing is a bit of a tricky deal for this, but not a deal breaker.

    @dave1707 how would I compare the camera images to see if the data is different? I tried if oldCAMERA == CAMERA, with oldCAMERA being an image set from CAMERA after the check, but it didn't seem to work. I didn't know if == was even implemented for images.

    @CamelCoder I found watch just after posting. cheers :)

  • AxiomCrux Posts: 15

    By the way, what I'm doing on this project is actually coming out pretty badass now: by using the feedback from my optical flow displacement instead of trying to use the last frame of the camera, the input is ALWAYS different and the imagery gets pushed around like fluid based on the movement. This is what I was after anyway, so it's not crucial to keep figuring out the precise disparity between each frame of CAMERA, though it could still be useful for other things. I'll post some things in another thread soon.

  • AxiomCrux Posts: 15

    Also @CamelCoder, I just tried your version of watch and it is showing inf as the value.

  • AxiomCrux Posts: 15

    @CamelCoder turned out the math.floor needs to be around the 1/ as well
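
    That is:

    parameter.watch("math.floor(1/DeltaTime)")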

  • dave1707 Mod
    edited March 5 Posts: 5,620

    @AxiomCrux There were some posts long ago about comparing images, but I don't think they worked out. I'll see if I can find them. Forget about my suggestion of drawing a colored square. When you capture an image, set a flag to true. If that flag is true, set it to false and compare the current image to the previous image. As long as the flag is false, don't try to compare images. That way you'll only compare images when you get a new image and the flag is true.
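
    A rough sketch of that flag pattern (compareImages is just a placeholder for the actual difference work):

    function setup()
        gotNewImage = false
    end
    
    function draw()
        background(0)
        -- grab a capture every so often (stand-in for "whenever a new frame exists")
        if ElapsedTime - (lastGrab or 0) > 0.1 then
            local cap = image(CAMERA)
            if cap then
                prevImage, newImage = newImage, cap
                gotNewImage = true
                lastGrab = ElapsedTime
            end
        end
        if newImage then sprite(newImage, WIDTH/2, HEIGHT/2) end
        -- only compare when the flag says a fresh capture arrived
        if gotNewImage and prevImage then
            gotNewImage = false
            compareImages(prevImage, newImage)
        end
    end
    
    function compareImages(a, b)
        -- placeholder: the subtraction / difference shader would use a and b here
    end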

    EDIT: do a forum search on compare images and see if anything there will help.

  • AxiomCrux Posts: 15

    @dave1707 hehehe: "if that flag is true, set it to false and <>"

    I'm trying to see precisely where the frames are different, so the flag doesn't do much.

    I'm noticing my program seems to crash eventually, and I'm not sure why. I use the same quad and keep flipping to different contexts and shaders. Should I be using different mesh quads for each? Or is garbage collection or some sort of memory issue a possibility? I'll clean up my code and post it if I can't figure it out.

  • dave1707 Mod
    Posts: 5,620

    Add collectgarbage. Anytime Codea crashes, it's most likely a memory issue. In your first post, you said you didn't think you were getting a different image to compare. The flag was to let you know when you got a new image so you could do the compare.
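
    For example, a sketch of that kind of cleanup (allocate the offscreen image once, reuse it with setContext, and collect garbage each frame so discarded captures don't pile up):

    function setup()
        buf = image(400, 400)          -- one reusable offscreen buffer
    end
    
    function draw()
        background(0)
        local cap = image(CAMERA)      -- temporary capture; becomes garbage next frame
        if cap then
            setContext(buf)            -- draw the capture into the reused buffer
            sprite(cap, buf.width/2, buf.height/2, buf.width, buf.height)
            setContext()
        end
        sprite(buf, WIDTH/2, HEIGHT/2)
        collectgarbage()
    end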

  • AxiomCrux Posts: 15

    Right @dave1707, but how do we know if the images are different? My issue was that often when I was subtracting the two camera frames in my shader I didn't get a resulting difference frame, which would indicate the frames were the same. My question is: how can one know with certainty that the frames are different? I tried comparing OLDCAM (captured to an image at the end of the draw loop) with the CAMERA built-in image constant using ~= or == in an if statement, but that didn't seem to do anything.
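
    As far as I can tell, == between Codea images doesn't compare pixel data, so a CPU-side check would have to look at the pixels itself. A coarse illustrative sketch that samples a sparse grid with image:get (imagesLookEqual is just a made-up helper name, not a built-in):

    function imagesLookEqual(a, b, step, tolerance)
        step = step or 32           -- sample every 32nd pixel
        tolerance = tolerance or 2  -- allow tiny r,g,b differences
        if a.width ~= b.width or a.height ~= b.height then return false end
        for x = 1, a.width, step do
            for y = 1, a.height, step do
                local r1, g1, b1 = a:get(x, y)
                local r2, g2, b2 = b:get(x, y)
                if math.abs(r1 - r2) > tolerance or
                   math.abs(g1 - g2) > tolerance or
                   math.abs(b1 - b2) > tolerance then
                    return false
                end
            end
        end
        return true
    end

    Calling something like imagesLookEqual(OLDCAM, newCapture) once per draw (after checking newCapture isn't nil) is coarse, but it should be enough to tell whether the camera actually delivered new data.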

    Also, I have a feature request that shouldn't be too difficult to implement: I would love to be able to resize the shader lab's real-time preview window, maybe using the now-familiar split-screen style of dragging the divider. Is there a place I should post such a request to potentially get it on the docket? I had a few other requests that may be relatively straightforward to add but would net a large gain in functionality: mainly adding MIDI/OSC and proper audio DSP libraries to the main API. I did a quick Google search for Lua MIDI and Lua audio/DSP, which showed at least a few great open-source libraries on git. I searched the forum and noticed many other people have asked for this over the years. This is getting the thread sidetracked, though, so I can post those in an appropriate feature request subforum if I find one. :)

  • dave1707 Mod
    Posts: 5,620

    @AxiomCrux Can you post any shader code to show what you're doing?

  • dave1707 Mod
    Posts: 5,620

    @AxiomCrux See this link and my code near the bottom of it. Would anything there help?

    https://codea.io/talk/discussion/7660/camera-and-shaders
    
  • dave1707 Mod
    Posts: 5,620

    @AxiomCrux Actually, here's the code with a lot of the unneeded code removed.

    function setup()
        parameter.watch("1/DeltaTime//1")
        img1=readImage("Planet Cute:Character Boy")
        img2=readImage("Planet Cute:Character Pink Girl")
        m=mesh()
        m:addRect(WIDTH/2,HEIGHT-600,img1.width,img1.height)
        m.texture=img1
    end
    
    function draw()
        background(44, 44, 117, 255)
        sprite(img1,WIDTH/2,HEIGHT-200)
        sprite(img2,WIDTH/2,HEIGHT-400)
        m.shader=shader(DS.vs, DS.fs)   -- m.texture (img1) is bound as "texture" in the shader
        m.shader.texture2=img2          -- img2 goes in as the second sampler
        m:draw()                        -- shows only the pixels where the two images differ
    end
    
    DS={   
        vs= [[   uniform mat4 modelViewProjection;
                attribute vec4 position;
                attribute vec2 texCoord;
                varying highp vec2 vTexCoord;
                void main()
                {   vTexCoord = texCoord;
                    gl_Position = modelViewProjection * position;
                }    
            ]],
        fs= [[  precision highp float;
                uniform lowp sampler2D texture;
                uniform lowp sampler2D texture2;
                varying highp vec2 vTexCoord;    
                void main()
                {   lowp vec4 col1 = texture2D( texture,  vTexCoord );
                    lowp vec4 col2 = texture2D( texture2, vTexCoord );
                    if (abs(col2.r-col1.r)<0.01 &&  abs(col2.g-col1.g)<0.01 && 
                            abs(col2.b-col1.b)<0.01)
                        discard; 
                    else 
                        gl_FragColor = abs(col1-col2);
                }
            ]]
    }
    
  • dave1707 Mod
    edited March 12 Posts: 5,620

    @AxiomCrux Here's something I threw together. The top image is a live image. Tap the screen once to capture the first image. Tap the screen again to capture the second image. The difference image shows the pixels of image 1 that differ from image 2 by more than the slider's diff value. The smaller the diff value, the closer the r,g,b,a values have to be between the 2 images. Slide the diff parameter and watch the differences change. Maybe you can use something from this code for what you're trying to do.

    supportedOrientations(PORTRAIT_ANY)
    
    function setup()
        parameter.watch("1/DeltaTime//1")
        parameter.number("diff",0,1,.2)
        cameraSource(CAMERA_BACK)
        size=200
        img1=image(size,size)
        img2=image(size,size)
        getImage1=true
        getImage2=false
        print("The smaller the diff value, the more the images have to be the same.")
    end
    
    function draw()
        background(44, 44, 117, 255)     
        fill(255)
        text("Tap screen to capture image1, then again for image 2",WIDTH/2,HEIGHT-50)
        text("Current image",400,750)
        text("Image 1",400,540)
        text("Image 2",400,330)
        text("Difference (image 1)",400,120)
        collectgarbage()
        if img1~=nil then
            sprite(img1,200,540)
        end
        if showDiff then
            sprite(img2,200,330)
            m.shader=shader(DS.vs, DS.fs)
            m.shader.texture2=img2
            m.shader.dd=diff
            m:draw()
        end    
        i=image(CAMERA)
        if i~=nil then
            sprite(i:copy(WIDTH//2,HEIGHT//2,size,size),200,750)
        end
    end
    
    function getImage()
        if getImage1 then
            i=image(CAMERA)
            if i~=nil then
                -- first tap: keep a cropped copy of the camera as image 1
                img1=i:copy(WIDTH//2,HEIGHT//2,size,size)
                m=mesh()
                m:addRect(200,120,size,size)
                m.texture=img1
                getImage1=false
                getImage2=true
            end
        elseif getImage2 then
            i=image(CAMERA)
            if i~=nil then
                -- second tap: keep image 2 and turn on the difference display
                img2=i:copy(WIDTH//2,HEIGHT//2,size,size)
                getImage2=false
                showDiff=true
            end
        end
    end
    
    function touched(t)
        if t.state==BEGAN then
            getImage()
        end
    end
    
    DS={   
        vs= [[  uniform mat4 modelViewProjection;
                attribute vec4 position;
                attribute vec2 texCoord;
                varying highp vec2 vTexCoord;
                uniform float dd;
                void main()
                {   vTexCoord = texCoord;
                    gl_Position = modelViewProjection * position;
                }    
            ]],
        fs= [[  precision highp float;
                uniform lowp sampler2D texture;
                uniform lowp sampler2D texture2;
                varying highp vec2 vTexCoord;
                uniform float dd;
                void main()
                {   lowp vec4 col1 = texture2D( texture,  vTexCoord );
                    lowp vec4 col2 = texture2D( texture2, vTexCoord );
                    if (abs(col2.r-col1.r)<dd &&  abs(col2.g-col1.g)<dd && 
                            abs(col2.b-col1.b)<dd)
                        discard; 
                    else 
                        gl_FragColor = col1;
                }
            ]]
    }
    