Hi and thank you for your great paper.
I am a bit confused by the input and output image sizes in the proposed method to refine the coarse mask.
Let's say the source images are 4000x3000 and the inferred mask is 256x256, and let's ignore the post-processing steps (denoising, luminosity, etc.). The idea, as I understand it, is to do guided upscaling of the inferred mask to the input image size in order to restore missing details (holes in trees, fine details).
Is the modified guided filter supposed to do this upscaling on its own, or does it only refine the mask at a coarse resolution, i.e. 256x256, or a quarter of the input image size (1024x768)?
Is the 64x scale factor meant for the full-size image? And should I upscale the mask to 4000x3000 with a single bilinear interpolation before applying the modified guided filter? (In the provided code I can see a 256x factor, or is it 4x64?)
In the supplementary material PDF, you say the modified guided filter is used to compute an image at a quarter of the original resolution, followed by bilinear interpolation... but to me that would produce a blurred mask, not a 1:1 quality mask. Do you then apply another small guided filter?
Is the 64x scale factor equivalent to a box filter radius of 64 pixels in the original guided filter implementation?
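To make my confusion concrete, here is a minimal sketch of the pipeline as I currently picture it. This is purely my own guess, not your code: I'm assuming OpenCV's `cv2.ximgproc.guidedFilter` as a stand-in for the modified guided filter, and the `radius=64` / `eps` values are placeholders I chose.

```python
# My own sketch, NOT the repository's code: refine a coarse mask against the
# full-resolution image with a plain guided filter.
# Requires opencv-contrib-python for cv2.ximgproc.guidedFilter.
import cv2
import numpy as np

def refine_mask(image_full, mask_coarse, radius=64, eps=1e-4):
    """image_full: HxWx3 uint8 (e.g. 4000x3000); mask_coarse: hxw float32 in [0, 1] (e.g. 256x256)."""
    h, w = image_full.shape[:2]

    # Step 1 (my assumption): bilinearly upscale the coarse mask to full resolution.
    mask_up = cv2.resize(mask_coarse, (w, h), interpolation=cv2.INTER_LINEAR)

    # Step 2 (my assumption): guided filtering with the full-resolution image as guide,
    # so the mask edges snap back to image edges (holes in trees, fine details).
    guide = image_full.astype(np.float32) / 255.0
    refined = cv2.ximgproc.guidedFilter(guide, mask_up, radius, eps)
    return np.clip(refined, 0.0, 1.0)
```

Is this roughly what happens, or is the guided filtering done at quarter resolution first and only then bilinearly upscaled, as the supplementary material seems to say?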
Many thanks if you can help with my questions.