
Difference in edge detection shader response to different inputs [RESOLVED]

edited May 4 in Questions · Posts: 691

Hello,

I'm looking to feed two images into the edge detector shader and compare the resulting edge images. One image will come from the camera in real time, while the other is a reference image captured some time previously. The following code demonstrates this: tap the top half of the screen to swap between the live and reference responses, and tap the bottom half of the screen to capture a new reference image.

In theory, if you lay the iPad on the table, capture a reference image (tap the bottom half of the screen), and then tap the top half of the screen to swap between the live and reference responses, you should get the same result (provided the scene hasn't changed). In practice, however, there is a difference between the two: the reference image looks more "dilated", whereas the live image appears more smoothed.

I suspect it has something to do with how the camera feed is passed to the shader, but I'm not sure. Any help would be appreciated (I want the same edge-detection result from both the live and reference feeds).

-- Use this function to perform your initial setup


displayMode(FULLSCREEN)

-- Uses spacemonkey's motion detection code as a starting point
function setup()
    parameter.number("thresh",0,1,0.08) -- the threshold for difference between the two images
    parameter.integer("im",1,2,2) --1 is the current edge image, 2 is the reference edge image
    bgrefimg = nil
    curedgeimg = nil
    updateflag = 0 -- touched() sets this to 1 to request a new reference capture
    cameraSource(CAMERA_FRONT)
    cw=1024
    ch=768
    cwt=1024/2
    cht=768/2
    curmesh = mesh()

    curmesh.shader = shader("Filters:Edge")

    curID = curmesh:addRect(0, 0, 0, 0)

    bgmesh=mesh()
    bgmesh.shader = shader("Filters:Edge")
    bgID = bgmesh:addRect(0, 0, 0, 0)


end

-- This function gets called once every frame
function draw()

    -- This sets a dark background color
    background(0, 0, 0, 255)
    img=image(CAMERA)
 --   strokeWidth(5)


    -- latest image as a shader
    curmesh.shader.texture = img --feed in captured rather than "live" img
    curmesh:setRect(curID, cw/2,ch/2, cw, ch)
    curmesh.shader.conWeight=1.0/9.0
    curmesh.shader.conPixel=vec2(1/cwt,1/cht) -- 1.0/textureSize

    -- capture the current edge response into an image
    op2=image(cw,ch)
    setContext(op2)
    background(0, 0, 0, 255)
    curmesh:draw()
    setContext()
    curedgeimg=op2

    if updateflag==1 then
        updateBGRef()
        updateflag=0
    end

    -- Draw the selected mesh
    if im==1 then
        curmesh:draw()

    elseif im==2 then
        bgmesh:draw()

    end


    if im==1 then
        text("Current",100,HEIGHT-50)
    else
        text("Reference",100,HEIGHT-50)
    end

    -- free the per-frame camera images; without this the app runs out of memory and crashes
    collectgarbage()

end


function touched(t)

    if t.state==ENDED then
        if t.y>HEIGHT/2 then
            im = im + 1
            if im>2 then im=1 end
        else
            updateflag=1

        end
    end
end

function updateBGRef()
    local op=image(cw,ch)

    --  sound("Game Sounds One:Bell 2")
    setContext(op)
    background(0, 0, 0, 255)
    curmesh:draw()
    setContext()
    bgrefimg=op
    bgmesh.shader.texture=bgrefimg
    bgmesh:setRect(bgID, cw/2,ch/2, cw, ch)
    bgmesh.shader.conWeight=1.0/9.0
    bgmesh.shader.conPixel=vec2(1/cwt,1/cht) -- 1.0/textureSize

end

Comments


    OK - resolved the issue. I was feeding the edge image into the shader (rather than the original camera image) and effectively applying edge detection twice.
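
    For anyone hitting the same thing, a minimal sketch of the fix against the code above: in updateBGRef, capture the raw camera frame and feed that to the reference mesh's shader, instead of rendering curmesh (which has already had the Edge filter applied) into the reference image. That way each path runs the Edge shader exactly once.

    ```lua
    -- Corrected updateBGRef: store the raw camera frame so the
    -- reference mesh's Edge shader performs the only edge-detection pass.
    function updateBGRef()
        -- capture the raw camera image, not curmesh's edge-filtered output
        bgrefimg = image(CAMERA)
        bgmesh.shader.texture = bgrefimg
        bgmesh:setRect(bgID, cw/2, ch/2, cw, ch)
        bgmesh.shader.conWeight = 1.0/9.0
        bgmesh.shader.conPixel = vec2(1/cwt, 1/cht) -- 1.0/textureSize
    end
    ```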
