emme
I'm getting back to working on my lens/shape blur filter and I'm still facing this one problem. I'd like to be able to define any aperture shape for the blur, but I can't figure out an efficient way of doing this.
Basically I need to define all the x/y coordinates within a shape and then scatter those coordinates (and only those) evenly across the image plane. With simple shapes this can be done manually by filling the shape with x/y gradients and then manually transforming it back into a square (or another tileable shape). This can then be tiled and scaled down so that each pixel receives the right coordinates for its offset, and the blur converges to the correct shape as it's sampled repeatedly.

So the problem is: how do I do this procedurally with an arbitrary shape? One solution is to use a bomber to scatter the shape, but this gets quite slow, as a lot of particles are needed to make sure an area is fully covered. Particle overlap doesn't really matter here, as the map can be scaled down so that the errors average out for each pixel. Another solution would be to use a script or a loop to collect n samples from within the shape and fill an area with those, but I imagine this would be even slower (and less accurate).

Larger blurs obviously need a lot of samples to converge to a usable result, so speed is important here. Using a bitmap component as a buffer (for the image being blurred) will speed things up, but will also lead to a loss in accuracy. The bomber method works reasonably well, but perhaps there's a better, smarter way of doing this. Anyone have ideas on how to do this more efficiently?
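To illustrate what I mean by the offset map, here's a rough sketch in Python/NumPy (outside FF, obviously - the function and names are made up for illustration): collect the coordinates inside an arbitrary binary aperture mask and pack a random selection of them into a tileable square.

```python
import numpy as np

def shape_offset_map(mask, side, seed=0):
    """Pack (dx, dy) offsets drawn from inside a binary aperture mask
    into a side x side tile. Sampling the tile repeatedly yields
    offsets that converge to the shape's footprint."""
    ys, xs = np.nonzero(mask)
    # center the offsets on the shape's centroid
    coords = np.stack([xs - xs.mean(), ys - ys.mean()], axis=1)
    rng = np.random.default_rng(seed)
    # draw side*side offsets uniformly (with replacement) from the shape
    idx = rng.integers(0, len(coords), size=side * side)
    return coords[idx].reshape(side, side, 2)

# example: a disk-shaped aperture of radius 16
yy, xx = np.mgrid[:33, :33]
disk = (xx - 16) ** 2 + (yy - 16) ** 2 <= 16 ** 2
tile = shape_offset_map(disk, 8)
```

Drawing with replacement means overlap errors average out when the map is scaled down, same as with the bomber.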
Posted: March 22, 2019 5:55 pm
Sphinxmorpher
Tough one. Stochastic sampling is the only approach that comes to my mind, but it will end up looking grainy and be slow.
Perhaps this "low rank filtering" approach can be of some inspiration: https://www.researchgate.net/publicati...ar_filters
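For what it's worth, the stochastic approach in a nutshell (a Python/NumPy sketch, nothing FF-specific; names are my own): every pixel averages random offsets drawn from inside the aperture mask, which is exactly why it looks grainy at low sample counts.

```python
import numpy as np

def stochastic_shape_blur(img, mask, n_samples=64, seed=0):
    """Monte Carlo shape blur: each pixel averages n_samples offsets
    drawn uniformly from inside the aperture mask, with a fresh
    random draw per pixel (hence the grain)."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()       # center offsets on the shape
    rng = np.random.default_rng(seed)
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    acc = np.zeros((h, w))
    for _ in range(n_samples):
        # independent offset pick for every pixel
        idx = rng.integers(0, len(ys), size=(h, w))
        sy = np.clip(yy + np.round(ys[idx] - cy).astype(int), 0, h - 1)
        sx = np.clip(xx + np.round(xs[idx] - cx).astype(int), 0, w - 1)
        acc += img[sy, sx]
    return acc / n_samples

# sanity check on a flat image with a square aperture
flat = np.ones((12, 12))
out = stochastic_shape_blur(flat, np.ones((5, 5), dtype=bool), n_samples=8)
```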
Posted: March 23, 2019 12:35 pm
emme
Interesting. I remember seeing something like this mentioned in a Computerphile video some time ago. I'll check out the paper and think about this. My feeling is that FF isn't very well suited for this kind of kernel processing, but I might be wrong. Do you have any experience or intuitions about doing this in FF?
And yes, this blurring method is by default very noisy and slow. A 64px blur would need roughly 4000 samples per pixel to fully converge even with optimal sampling. In practice you can usually get away with a lot less, so it can still be quite usable. I'm fine with that part - the goal is to optimize all the other steps, like building the x/y offset coordinate map.

I did some testing: using a bomber is over 5x slower than doing a simple tiling grid. Using a bomber is kind of like doing stochastic sampling in reverse - the actual samples are taken at regular spacings, but the particles (i.e. the offset coordinates) within a pixel are randomly scattered. With the tiling grid method, I add some subpixel distortion to get the same effect. Enabling jittered sampling in the settings works too, but I'd rather not rely on that.

Thanks for the input. I'll have to think about how these techniques would translate to FF.
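For clarity, the tiling-grid sampling I mean looks roughly like this when sketched in Python/NumPy (names made up; the tile holds integer (dx, dy) offsets per texel, e.g. packed from the shape's coordinates):

```python
import numpy as np

def tiled_offset_blur(img, tile, n_passes=64, seed=0):
    """Tiling-grid sampling: an integer (dx, dy) offset tile is
    repeated across the image, and each pass shifts the tiling by a
    random amount (the 'subpixel distortion' jitter) so every pixel
    accumulates different offsets over the passes."""
    h, w = img.shape
    th, tw = tile.shape[:2]
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[:h, :w]
    acc = np.zeros((h, w))
    for _ in range(n_passes):
        jy, jx = rng.integers(0, th), rng.integers(0, tw)
        # each pixel reads one offset from the jittered, tiled map
        off = tile[(yy + jy) % th, (xx + jx) % tw]
        sy = np.clip(yy + off[..., 1], 0, h - 1)
        sx = np.clip(xx + off[..., 0], 0, w - 1)
        acc += img[sy, sx]
    return acc / n_passes

# sanity check: an all-zero offset tile must leave the image unchanged
img = np.arange(36.0).reshape(6, 6)
blurred = tiled_offset_blur(img, np.zeros((4, 4, 2), dtype=int), n_passes=4)
```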
Posted: March 23, 2019 2:06 pm
Sphinxmorpher
I just tried constructing a separable kernel filter, based on loops and bitmap buffers.
I'm sure there is room for improvement - e.g. if the kernel remains the same for the complete image, the kernel weight accumulator loop could be turned into a static lookup, which would activate the sample cache. Now on to that paper...

Separable Kernel Processor.ffxml
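For anyone reading along, here's roughly what the two-pass idea boils down to, sketched in Python/NumPy rather than FF (function names are my own, not from the .ffxml): a horizontal 1D pass followed by a vertical one, which together equal a convolution with the 2D outer-product kernel.

```python
import numpy as np

def convolve1d(img, kernel, axis):
    """Naive 1D convolution along one axis with edge clamping."""
    kernel = np.asarray(kernel, dtype=float)
    r = len(kernel) // 2
    pad = [(0, 0), (0, 0)]
    pad[axis] = (r, r)
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape)
    for i, wgt in enumerate(kernel):
        sl = [slice(None), slice(None)]
        sl[axis] = slice(i, i + img.shape[axis])
        out += wgt * padded[tuple(sl)]
    return out

def separable_blur(img, kernel):
    """Two 1D passes (horizontal then vertical); for a normalized
    kernel this equals one 2D outer-product-kernel convolution."""
    kernel = np.asarray(kernel, dtype=float)
    kernel /= kernel.sum()
    return convolve1d(convolve1d(img, kernel, axis=1), kernel, axis=0)

# delta-image check: the impulse response is the 2D outer-product kernel
delta = np.zeros((7, 7))
delta[3, 3] = 1.0
response = separable_blur(delta, [1, 2, 1])
```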
Posted: March 23, 2019 3:33 pm
emme
That was quick! Clean design - nice work.
Yeah, I guess for lens blur the kernel could stay the same for the whole image, unless you want some advanced stuff like morphing the bokeh shape around the edges of the image, etc.

So for more complex shapes you would need to merge multiple kernels, and possibly run some diagonally too? These kernels would need to be pre-calculated, right? And doing complex arbitrary shapes would still be a problem with this method, right?

I'm also trying to figure out whether there are benefits (in FF) to doing this in separate 1D passes instead of just doing one 2D offset. Not sure - what do you think?

Interesting stuff. Definitely potential to do some cool things with this.
Posted: March 23, 2019 4:55 pm
Sphinxmorpher
Separate 1D passes - well, disregarding the additional buffering, the algorithm is less complex (samples*2 vs samples^2). I imagine the additional bitmap buffer overhead will cancel out the advantage up to a certain sample count, and for larger sample counts (needed for acceptable quality at larger radii), the two-step 1D processing will win.
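In concrete numbers (a trivial Python sketch of the tap counts per pixel):

```python
# taps per pixel for a full 2D kernel vs. two separable 1D passes
def taps_2d(n):
    return n * n          # every kernel cell sampled once

def taps_separable(n):
    return 2 * n          # one horizontal + one vertical pass

for n in (9, 33, 65):
    print(f"{n}px kernel: 2D={taps_2d(n)}, separable={taps_separable(n)}")
# e.g. a 65px kernel: 4225 taps in 2D vs. 130 taps separable
```

So the gap widens fast with radius, which is why the buffering overhead should stop mattering at larger sizes.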
Damn, I can't get my head around that "rank" concept. The explanation is not that clear. Did you get anywhere with it?
Posted: March 24, 2019 4:08 pm
emme
Ok, good to know.
No, I couldn't understand the low-rank stuff either. It gets a bit too technical for me to keep track of.

https://www.youtube.com/watch?v=vNG3ZAd8wCc

I don't know if you've seen this already. This method combines separable complex Gaussian kernels to approximate a disk-shaped blur. Quite limited, but could be useful as long as it's reasonably fast. I'm not sure how you would do the imaginary number part - I think I'll leave that to you if you want to give it a try haha...
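As far as I understand it, the key trick is just that a complex Gaussian exp(-(a - ib)·r²) depends only on r² = x² + y², so it factors into separate x and y terms exactly like a real Gaussian does. A quick NumPy check of that separability (the a, b values here are placeholders I picked for illustration, not the tuned coefficients from the video):

```python
import numpy as np

# A complex Gaussian exp(-(a - i*b) * r^2) factors into x and y parts,
# so it can be applied as two 1D passes; weighted sums of the real and
# imaginary parts of several such kernels can approximate a flat disk.
a, b = 0.86, 1.62                            # illustrative, not tuned
x = np.linspace(-2, 2, 9)
k1d = np.exp(-(a - 1j * b) * x ** 2)         # 1D complex kernel
k2d = np.outer(k1d, k1d)                     # separable 2D build
xx, yy = np.meshgrid(x, x)
direct = np.exp(-(a - 1j * b) * (xx ** 2 + yy ** 2))
print(np.allclose(k2d, direct))              # True: kernel is separable
```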
Posted: March 24, 2019 6:34 pm
Sphinxmorpher
Interesting video - I like his enthusiasm about the subject
Well, I think the method he mentions is basically the same as in the article - they just use different methods to get the coefficients. I also found this article informative about different approaches: http://yehar.com/blog/?p=1495

I made the brute force 2D version - it is too slow to unleash upon this already heavily burdened Earth.

I have some ideas for n-gon kernels with an even number of sides - basically a sum of angled 1D convolutions, perhaps with a slightly reversed kernel curve to compensate for center overlap. A hexagon would require 3 successive or additive steps at different angles.
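The successive-steps version of the hexagon idea, sketched in Python/NumPy (names are my own; this naive version uses nearest-pixel offsets and plain box weights, without the reverse-curve compensation for center overlap):

```python
import numpy as np

def directional_box_blur(img, radius, angle_deg):
    """1D box blur along an arbitrary direction, using nearest-pixel
    offsets (a real filter would interpolate)."""
    theta = np.radians(angle_deg)
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    acc = np.zeros((h, w))
    for t in range(-radius, radius + 1):
        sy = np.clip(np.round(yy + t * np.sin(theta)).astype(int), 0, h - 1)
        sx = np.clip(np.round(xx + t * np.cos(theta)).astype(int), 0, w - 1)
        acc += img[sy, sx]
    return acc / (2 * radius + 1)

def hexagonal_blur(img, radius):
    """Three successive 1D box blurs 60 degrees apart; the combined
    footprint approximates a hexagonal aperture (with a non-flat
    weight falloff, hence the compensation idea)."""
    out = np.asarray(img, dtype=float)
    for angle in (0, 60, 120):
        out = directional_box_blur(out, radius, angle)
    return out

# sanity check: a flat image must stay flat
res = hexagonal_blur(np.ones((16, 16)), radius=3)
```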
Posted: March 26, 2019 3:40 am
emme
Great find, good info on that article.
Haha, it is a bit sad that even with this separable filtering, a Gaussian blur of reasonable quality seems to be about 40-50 times as slow as the built-in Gaussian blur.

Yeah, hexagonal blur should work with 3 modified box blurs at 30-degree angles - or, like you say, any even-sided n-gon. I guess a circle could be approximated with this too. I'll have to try this at some point.
Posted: March 26, 2019 1:59 pm
emme
Posted: March 27, 2019 1:07 pm |