Regression in 7.1 with Image.getbbox returning None caused by converting the file to RGBA #4849
Comments
Testing, I find this is the result of #4454. If I run your code with https://github.com/python-pillow/Pillow/blob/master/Tests/images/hopper.jpg, converting to RGBA, and save the result, I get an entirely transparent PNG (it's hard to see that I've attached a file there, because it is fully transparent). If I don't convert to RGBA, I get the expected bounding box. So I would conclude that getbbox is working correctly now. If you disagree, please let us know. If you think there is a problem with ImageChops, also let us know and we can talk about that; I'm just hoping to deal with one problem at a time.
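The behaviour described above can be reproduced without any image files. This is a minimal sketch using a synthetic image: an RGBA image whose RGB channels are nonzero but whose alpha channel is zero everywhere (i.e. an entirely transparent image).

```python
from PIL import Image

# A 4x4 RGBA image with nonzero RGB data but alpha == 0 everywhere,
# i.e. an entirely transparent image.
im = Image.new("RGBA", (4, 4), (255, 0, 0, 0))

# Since Pillow 7.1 (PR #4454), fully transparent pixels count as empty,
# so the bounding box of an all-transparent image is None.
print(im.getbbox())  # None

# Dropping the alpha channel restores the pre-7.1 result.
print(im.convert("RGB").getbbox())  # (0, 0, 4, 4)
```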
Thanks for the quick response and all the work on the library. After looking at your response, I agree that getbbox is working according to spec if fully transparent regions are considered as zero despite having non-zero data in the RGB layer. However, this creates a bit of a problem when using it with the ImageChops.difference function on files with a transparency layer: if both images are fully opaque, the entire alpha layer of the difference will be set to 0, making getbbox always return None. This also causes different results depending on the file type used. I can see a use case for both options, so maybe an optional argument or something would be nice, but we can just modify our code and set the alpha layer to 255 for all non-zero RGB pixels before taking the bounding box to get the same results we had before 7.1. Thanks again for your help.
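The workaround mentioned at the end of the comment above (forcing alpha to 255 for all non-zero RGB pixels before taking the bounding box) could be sketched like this. This is not a Pillow API, just one possible implementation of the described idea; the function name is made up for illustration.

```python
from PIL import Image, ImageChops


def force_opaque_where_colored(im):
    """Sketch of the workaround described above (not a Pillow API):
    set alpha to 255 wherever the RGB data is nonzero, so that
    getbbox() counts those pixels again, as it did before 7.1."""
    r, g, b, a = im.split()
    # Pixelwise max of the RGB bands, thresholded: 255 where any band is nonzero.
    colored = ImageChops.lighter(ImageChops.lighter(r, g), b).point(
        lambda v: 255 if v else 0)
    out = im.copy()
    # Keep existing opacity, but force colored pixels fully opaque.
    out.putalpha(ImageChops.lighter(a, colored))
    return out
```

With this applied before `getbbox`, an image whose shape lives only in the RGB bands is no longer reported as empty.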
I imagine you mean if the mode is different? It shouldn't have any change in behaviour based on the file type itself. Thanks for being so considerate in your response.
If both images have no transparency but are in an alpha mode, the behaviour makes sense. The idea that 'zero regions' means black is, I imagine, actually less straightforward than meaning transparent, so perhaps using … We could modify …
I would like to add my vote for adding the behaviour you describe. I imagine more people will encounter this problem, since the change currently breaks a very popular StackOverflow method of trimming whitespace using PIL. (I have code that uses the above StackOverflow solution for trimming non-alpha black & white images saved in RGBA mode. It broke after the update, and I tracked the problem to here.)
It's been a while since I worked on this, but I think the only change that I needed was converting the difference to RGB, to ignore the alpha layer, in order to maintain the pre-7.1 behavior.

From:

```python
diff = ImageChops.difference(img, bg)
diff.getbbox()
```

To:

```python
diff = ImageChops.difference(img, bg)
diff = diff.convert('RGB')
diff.getbbox()
```
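A self-contained sketch of why this conversion helps, using two synthetic fully opaque RGBA images in place of the original files:

```python
from PIL import Image, ImageChops

# Two fully opaque RGBA images that differ in one pixel.
img = Image.new("RGBA", (3, 3), (0, 0, 0, 255))
img.putpixel((1, 1), (255, 0, 0, 255))
bg = Image.new("RGBA", (3, 3), (0, 0, 0, 255))

diff = ImageChops.difference(img, bg)
# Both inputs are opaque, so the difference has alpha == 0 everywhere,
# and getbbox() returns None on Pillow >= 7.1.
print(diff.getbbox())  # None

# Discarding the alpha channel restores the pre-7.1 result.
print(diff.convert("RGB").getbbox())  # (1, 1, 2, 2)
```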
Converting to RGB is the first thing I tried. While this fixes many use cases, the conversion to RGB didn't fully satisfy my test suite (which was completely green prior to 7.1). The RGB-conversion trick fails on binary images that are all black, where the shape is created via 0 / 255 alpha channel values, e.g. black shape pixel = (0, 0, 0, 255), background pixel = (0, 0, 0, 0). Here is my trim_image:

```python
from PIL import Image, ImageChops


def trim_image(pil_image):
    """
    Crop image to remove excess background at image edges.

    https://stackoverflow.com/questions/10615901/trim-whitespace-using-pil

    This solution should be roughly equivalent to trim with fuzz in
    ImageMagick. ($ convert test.jpeg -fuzz 7% -trim test_trimmed.jpeg)
    """
    background_color = detect_background_color(pil_image)
    # Create a solid image in the background color.
    background = Image.new(pil_image.mode, pil_image.size, background_color)
    # Subtract the background from the image, effectively making all background
    # pixels = 0.
    diff = ImageChops.difference(pil_image, background)
    # Subtract a scalar from the differenced image. This is a quick way of
    # saturating all values under 100, 100, 100 to zero.
    # add = (diff + diff) / 2 - 100
    add = ImageChops.add(diff, diff, scale=2.0, offset=-100)
    # Get the bounding box of the remaining pixels.
    bbox = add.getbbox()
    return pil_image.crop(bbox)
```

where `detect_background_color` returns the image's background color as a pixel tuple. Tests follow:

```python
import numpy as np
from PIL import Image

black_dot_transparent_background = Image.fromarray(np.array([
    [[0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]],
    [[0, 0, 0, 0],
     [0, 0, 0, 255],
     [0, 0, 0, 0]],
    [[0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]],
]).astype(np.uint8))

red_dot_black_background = Image.fromarray(np.array([
    [[0, 0, 0, 255],
     [0, 0, 0, 255],
     [0, 0, 0, 255]],
    [[0, 0, 0, 255],
     [255, 0, 0, 255],
     [0, 0, 0, 255]],
    [[0, 0, 0, 255],
     [0, 0, 0, 255],
     [0, 0, 0, 255]],
]).astype(np.uint8))

detect_background_color(black_dot_transparent_background)  # (0, 0, 0, 0)
detect_background_color(red_dot_black_background)  # (0, 0, 0, 255)


def trim_image_convert_rgb(pil_image):
    background_color = detect_background_color(pil_image)
    background = Image.new(pil_image.mode, pil_image.size, background_color)
    diff = ImageChops.difference(pil_image, background)
    add = ImageChops.add(diff, diff, scale=2.0, offset=-100)
    bbox = add.convert("RGB").getbbox()
    return pil_image.crop(bbox)


trim_image(black_dot_transparent_background).size  # (1, 1) ; correct
trim_image(red_dot_black_background).size  # (3, 3) ; broken
trim_image_convert_rgb(black_dot_transparent_background).size  # (3, 3) ; broken
trim_image_convert_rgb(red_dot_black_background).size  # (1, 1) ; correct
```
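One way to satisfy both test cases above (a sketch, not code from the thread) is to union the bounding box of the RGB bands with the bounding box of the alpha band, which matches the pre-7.1 "any nonzero channel" semantics. `getbbox_any_channel` and `trim_any_channel` are made-up names; `trim_any_channel` takes the background color explicitly, since `detect_background_color` is not shown above.

```python
from PIL import Image, ImageChops


def getbbox_any_channel(im):
    """Emulate the pre-7.1 getbbox on an RGBA image: a pixel counts as
    content if ANY band (RGB or alpha) is nonzero, i.e. the union of the
    RGB bounding box and the alpha bounding box."""
    boxes = [b for b in (im.convert("RGB").getbbox(),
                         im.getchannel("A").getbbox()) if b]
    if not boxes:
        return None
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))


def trim_any_channel(im, background_color):
    """trim_image from above, with the bounding box taken over all
    channels of the differenced image."""
    background = Image.new(im.mode, im.size, background_color)
    diff = ImageChops.difference(im, background)
    add = ImageChops.add(diff, diff, scale=2.0, offset=-100)
    return im.crop(getbbox_any_channel(add))
```

With this, both of the thread's test images trim to (1, 1). For what it's worth, later Pillow releases (9.4+, if I'm not mistaken) also added an `alpha_only` argument so that `getbbox(alpha_only=False)` restores the pre-7.1 behaviour directly.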
What did you do?
In version 7.1.0, after converting the file to RGBA, getbbox will return None instead of the bounding box of the difference of the two images. This has happened on every file I have tried it on so far. This was working on 7.0.0 and below.
What did you expect to happen?
It should return the same result as if it wasn't converted.
What actually happened?
It returned None.
What are your OS, Python and Pillow versions?
Here is a small reproducible test case through the Python shell.
Version 7.0.0 (working)
Version 7.1.0 (broken)
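The shell transcripts for the two versions are not preserved above; a reconstruction of the kind of repro described, using synthetic stand-ins for the reporter's files, might look like this:

```python
from PIL import Image, ImageChops

# Stand-ins for the reporter's files: two images that differ in one
# pixel, both converted to RGBA.
img = Image.new("RGB", (8, 8), (10, 20, 30)).convert("RGBA")
bg = Image.new("RGB", (8, 8), (10, 20, 30)).convert("RGBA")
img.putpixel((3, 3), (200, 20, 30, 255))

diff = ImageChops.difference(img, bg)
# Pillow 7.0.0: (3, 3, 4, 4).  Pillow >= 7.1.0: None, because the
# difference of two opaque images is fully transparent.
print(diff.getbbox())
```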
Let me know if you need any other information.