Hi, I'm looking for a good blur shader, something similar to the blur you get in the iOS Notification Center or OS X Yosemite. The blur shader included doesn't really do what I want. Also, preferably something non-GPU-intensive, as I have a lot of other stuff going on in the background (even if it is GPU intensive, I'd still love to see it and test it). Also, preferably something with an open licence, as this will be used in a game that will most likely be on the App Store in the next couple of months. The background will constantly be changing, as this will be the background for a store which sits "on top" of the menu screen, which has several effects in it.
Comments
Most of the examples I've seen online require 2 passes. It also seems that you'd have to render the underlying scene to an image, and then pass that image as a texture to the shader. I don't know what the performance of that would be like if you were doing it every frame. This one looks interesting:
http://xissburg.com/faster-gaussian-blur-in-glsl/
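The linked article's shaders are GLSL, but the separable-blur principle it relies on can be sketched in plain Python (everything below is illustrative, not the article's code): blurring with a 1D kernel horizontally, then vertically, gives the same result as one pass with the full 2D kernel, at 2k samples per pixel instead of k².

```python
def blur_1d(row, kernel):
    """Convolve one row with a 1D kernel, clamping reads at the edges."""
    r = len(kernel) // 2
    out = []
    for x in range(len(row)):
        acc = 0.0
        for i, w in enumerate(kernel):
            xi = min(max(x + i - r, 0), len(row) - 1)
            acc += w * row[xi]
        out.append(acc)
    return out

def blur_2d_separable(img, kernel):
    # pass 1: horizontal
    tmp = [blur_1d(row, kernel) for row in img]
    # pass 2: vertical (transpose, blur rows, transpose back)
    cols = [blur_1d(list(col), kernel) for col in zip(*tmp)]
    return [list(row) for row in zip(*cols)]

kernel = [0.25, 0.5, 0.25]            # tiny Gaussian-ish 1D kernel
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 1.0                       # single bright pixel
blurred = blur_2d_separable(img, kernel)
print(blurred[2][2])                  # 0.25 = 0.5 * 0.5, the centre of the 2D kernel
```

The two Python passes correspond to the two shader passes in the article, and the intermediate `tmp` is the screen-sized texture you'd have to render between them.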
@Mr_Ninja
the fastest method may be to create an overlay image which can be sprited over the drawn screen, creating the impression of a blur. This is much faster than creating a blur in real time (and does the difference really matter to users?).
The code below demonstrates. Touch the screen to toggle the "blur" on and off. You can adjust the light drop-off towards the edges with this line in the shader: f = f*f*f; (the more you multiply f by itself, the faster the light drops off, and vice versa).
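The post's Codea code isn't reproduced here, but the falloff idea can be sketched CPU-side: build a one-off overlay whose alpha grows towards the edges, then sprite it over the scene every frame. `falloff` and its `power` parameter are made-up names for illustration; `power=3` corresponds to the `f = f*f*f` line above.

```python
import math

def falloff(x, y, w, h, power=3):
    # f is 0 at the centre of the image, 1 at the corners
    dx, dy = x - w / 2, y - h / 2
    f = math.sqrt(dx * dx + dy * dy) / math.sqrt((w / 2) ** 2 + (h / 2) ** 2)
    return min(f ** power, 1.0)   # f = f*f*f when power == 3

# precompute the overlay's alpha channel once, instead of blurring every frame
w, h = 64, 64
overlay_alpha = [[falloff(x, y, w, h) for x in range(w)] for y in range(h)]
print(overlay_alpha[h // 2][w // 2])   # 0.0 at the centre
```

Raising the power makes the mid-screen region stay brighter while the corners still go fully dark, which is the tuning knob the post describes.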
Here's a Gaussian blur shader. It looks great, but on an iPad Air, the FPS drops to 35 with two passes. I haven't tried optimising it, but I'm sure the bottleneck is sending two screen-sized images as textures to the shader, rather than the calculations in the shader itself. i.e. I'm not sure that reducing the number of samples in the shader from 14 would necessarily help. Could be worth trying?
Personally, I wouldn't go to a lot of effort just to blur the background that nobody will notice anyway
Well, it's what the OP asked for :-)
I was wrong about optimisation: if you comment out half the calculations in the fragment shader, it runs at about 58 fps, and still looks acceptably blurry. I'll have to recalculate what the brightness values should be. I can post an optimised version later
Plus, with all of these things, it's often the things you discover along the way. I'm interested in this because I'm currently exploring the possibility of creating an underwater ripple effect for a platform game I'm working on, which would be the same principle of sending the entire screen to an effect shader.
One really fun thing I just discovered: if you comment out every other line in the fragment shader above, you get a really cool after-image trail as the sprites move. With further adaptations so that the blur only goes in one direction, it would be a great effect
Ok, here's an optimised version. Tap to cycle through 3 states: no blur, sideways trails, 2-pass Gaussian. This runs at 59/60 fps on the Air:
On the above code, the weightings aren't right (I made them up). This blog uses the same technique as mine (i.e. 5 readings with fragment interpolation). I might see if I can copy their weightings. You could then emulate the iOS 7 panel effect, fullscreen, at 60 fps.
http://www.sunsetlakesoftware.com/2013/10/21/optimizing-gaussian-blurs-mobile-gpu
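For reference, here's a rough Python sketch of what the linked article's "linear sampling" trick computes. With bilinear filtering enabled, one texture fetch placed between two texels returns a weighted mix of both, so 9 discrete taps collapse into 5 fetches. `linear_samples` and the binomial kernel are my own illustrative choices, not the article's exact numbers.

```python
def linear_samples(kernel):
    """Turn a symmetric, odd-length 1D kernel into (offset, weight) fetches."""
    r = len(kernel) // 2
    fetches = [(0.0, kernel[r])]               # the centre tap stays a plain fetch
    for i in range(1, r + 1, 2):
        w1 = kernel[r + i]
        w2 = kernel[r + i + 1] if r + i + 1 < len(kernel) else 0.0
        w = w1 + w2
        off = (i * w1 + (i + 1) * w2) / w      # weighted position between the two taps
        fetches += [(off, w), (-off, w)]
    return fetches

# 9-tap binomial kernel (row 8 of Pascal's triangle / 256), a close Gaussian fit
kernel = [c / 256 for c in (1, 8, 28, 56, 70, 56, 28, 8, 1)]
fetches = linear_samples(kernel)
print(len(fetches))   # 5 fetches instead of 9 taps
```

In the shader, each (offset, weight) pair would become one texture read at the texture coordinate plus offset times the texel size, which is presumably how the "5 readings" versions in this thread keep the fetch count down.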
You must have a fast iPad!
My iPad 3 runs your code with one pass at 40 FPS, two passes at 30 FPS.
@Ignatz (or anyone else with an iPad 3), try this one. It downsamples by half, meaning there are a quarter as many calculations in the fragment shader. Because OpenGL upscaling adds a blurring effect anyway (as long as you don't invoke noSmooth()), the visual difference between this and full-pixel calculation is unnoticeable, but it should be a lot faster.
Ok, last one I promise. This one lets you set varying amounts of horizontal and vertical blur, and has better weighting of the pixels. I'm not sure, but it could be that the downsampling only quarters the number of calls to the fragment shader on the first pass; it depends on how OpenGL handles upscaling (i.e. does it call the fragment shader once per pixel of the source, or of the destination, when upscaling?). So it's possible that we're getting five-eighths the number of calculations, rather than a quarter. I can't really check this because it all runs at 60 fps for me.
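The five-eighths estimate in that post can be checked with quick arithmetic (the screen size is a notional example):

```python
# Fragment counts for a notional 2048x1536 screen, downsampled by half
# in each dimension.

full = 2048 * 1536                 # fragments per full-screen pass
small = (2048 // 2) * (1536 // 2)  # fragments at half size = a quarter of full

# Best case: both blur passes run at the small size
both_small = 2 * small
print(both_small / (2 * full))     # 0.25 -> a quarter of the work

# The worry above: if the second (upscaling) pass shades one fragment per
# *destination* pixel, only the first pass is cheap
first_small = small + full
print(first_small / (2 * full))    # 0.625 -> five-eighths of the work
```

So the two scenarios in the post correspond to a quarter versus five-eighths of the full-resolution fragment count, which is exactly the uncertainty described.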
Ok, I lied in my previous post, this really is the last one! I added a tween to the x and y radii of the blur to create a fun, "drunken insect eye" effect. Have a play, and let me know your fps (Added it to Codea Community) :
I think that what Notification Center does (it wasn't made in Codea, but if it was) is draw the screen into a small image, then scale that up and display the blurred result. I know it might create some odd effects when things are moving on the blurred screen, but the same happens in the real Notification Center. Try scrolling up this post, and while it's moving, drag down Notification Center.
Yes, I think so. That's what my code does. Change the downsamples variable to affect how small the intermediary image is.
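As a rough illustration of that downscale-then-upscale approach (plain Python, not Codea; a real GPU upscale would interpolate rather than use nearest-neighbour, blurring further):

```python
def downsample(img, factor):
    """Average factor x factor blocks down to a small image."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [img[y + j][x + i] for j in range(factor) for i in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def upsample(img, factor):
    # nearest-neighbour stretch back to full size
    return [[img[y // factor][x // factor]
             for x in range(len(img[0]) * factor)]
            for y in range(len(img) * factor)]

img = [[float((x + y) % 2) for x in range(8)] for y in range(8)]  # checkerboard
small = downsample(img, 2)
blurred = upsample(small, 2)
print(blurred[0][0])   # 0.5: the checkerboard averages out to flat grey
```

No blur shader is involved at all; all of the smoothing comes from averaging on the way down and interpolating on the way up, which is why it's so cheap.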
Wow, thanks for all this! I just came to check, and saw all the code! Not sure which method will work best for me, but I'll definitely look into all of them. Again, thanks a ton!
With my ones, I think you can just look at the last one.
Ok, will do
@yojimbo2000 - that's better, about 55 now
@Ignatz cool, glad to hear that. I put the downsampling code (from my last entry) into my first entry at the top of this thread, the one that samples the texture 15 times per fragment, to see if I could get that up to speed on the iPad Air. At full-pixel it runs at around 35 fps; downsampling by 0.5, at 50 fps. Interestingly though, if you go down to 0.25, it drops back to about 40 fps. So presumably there's a point at which too big a downscale/upscale operation cancels out the benefit of quartering the number of calls to the fragment shader. In this particular case, downsampling by 0.5 seems to be optimal. You could also investigate scaling all of the draw operations too (of the 3 sprites, I mean), to see if that's quicker than drawing a half-size image of the whole scene.
Very good work and thanks for sharing.
I like v5 with the kaleidoscopic effect
I would like to create a face recognition shader (with the camera) for my AI project, but it's very difficult for me
@hpsoft I don't think you could do that with a shader. Your best bet would be to just use image.get.
hmmm, after a bunch of testing, the only shader that worked for me (although only at 20 fps) was the first one by @yojimbo2000. It seems to work well for my needs, and the blur looks good. When I tried the others, they just behaved like the blur shader that comes with Codea: a slightly offset version of the image appeared towards all four corners. In the demo program, though, they worked well, so I'm wondering how to implement one the same way (with the 3 different demos, the code confused me). Any help is appreciated, and thanks again for all the shaders!
This one adds the built-in shader for comparison. It runs quickly on the Air by doing 10 samples per pixel in a single pass. My last one also does 10 samples, but because it does 2 passes of 5, that's effectively 25 samples' worth of blur (5x5). I'm biased of course, but I think my one looks way better than the built-in one ;-)
Here's a Gaussian kernel calculator, if you want to experiment with weightings:
http://dev.theomader.com/gaussian-kernel-calculator/
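A few lines of Python do the same job as the calculator, if you'd rather generate the weight table in code (the function name is my own):

```python
import math

def gaussian_weights(sigma, radius):
    """Normalised 1D Gaussian weights for taps at -radius..radius."""
    w = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    total = sum(w)
    return [x / total for x in w]   # normalise so the weights sum to 1

weights = gaussian_weights(sigma=1.0, radius=2)
print([round(w, 4) for w in weights])
```

Bigger sigma spreads the weight towards the outer taps (a wider blur); normalising keeps the overall brightness constant, which is where the made-up weightings earlier in the thread went wrong.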
@yojimbo2000 I agree with you, your blur looks better. For doing the 2 passes, would you draw the blur on a single axis, write it to an image, then blur that image on the other axis?
yes, that's what the code is doing.
So, first the scene you want blurred is drawn to the blur[1] image. That is then drawn, at a quarter of the size, with the horizontal blurring on the shader, to the blur[2] image (the blurRadii table determines whether the blurring is horizontal or vertical). blur[2] is then drawn back to the screen, at full size, with the vertical blurring added. I suspect you could optimise it further by drawing blur[2] to a third buffer, also a quarter the size of the screen, and then drawing that 3rd buffer back to the screen, full-size. I think at the moment, the processing saving from going down to a quarter of the size is only gained on the first pass, but not the second.
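A back-of-the-envelope fetch count supports that suspicion (assuming 5 texture fetches per fragment in the blur shader and 1 in a plain sprite draw; the screen size is notional):

```python
full = 2048 * 1536      # fragments in a full-screen pass
quarter = full // 4     # fragments in a quarter-size pass

# Current pipeline: horizontal pass into the quarter-size buffer, then a
# full-size vertical blur pass straight back to the screen.
current = 5 * quarter + 5 * full

# Proposed: both blur passes at quarter size into a third buffer, then one
# plain (1-fetch) full-size draw of that buffer to the screen.
proposed = 5 * quarter + 5 * quarter + 1 * full

print(proposed / current)   # 0.56, so a worthwhile saving
```

Under those assumptions the third buffer cuts the total fetches to a bit over half, because the expensive 5-fetch pass never runs at full resolution.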