If you want to run the repository in your local environment, you only need to run this script once with `already_downloaded` set to `False`. It will then download the test data automatically (into the folder `TEST_DATA`). After that you can run this script with `already_downloaded` set to `True`.
```python
from pathlib import Path

from natsort import natsorted

from utilities.image_file import read_image, show_image, show_images

TOP_LIGHT_IMAGES_PATHS = []
BOTTOM_LIGHT_IMAGES_PATHS = []
for image_path in Path("test_data").glob("*.png"):
    if "TL" in image_path.name:
        TOP_LIGHT_IMAGES_PATHS.append(image_path)
    elif "BL" in image_path.name:
        BOTTOM_LIGHT_IMAGES_PATHS.append(image_path)

TOP_LIGHT_IMAGES_PATHS = natsorted(TOP_LIGHT_IMAGES_PATHS, key=lambda y: y.name)
BOTTOM_LIGHT_IMAGES_PATHS = natsorted(BOTTOM_LIGHT_IMAGES_PATHS, key=lambda y: y.name)

TOP_LIGHT_IMAGES = [read_image(image_path) for image_path in TOP_LIGHT_IMAGES_PATHS]
BOTTOM_LIGHT_IMAGES = [
    read_image(image_path) for image_path in BOTTOM_LIGHT_IMAGES_PATHS
]

show_images(TOP_LIGHT_IMAGES)
```
The following corrections are applied to the captured images:

- Lens corrections: applied per lens using lens-specific calibration data.
- Image alignment: derived from bit operations on median threshold bitmaps (MTB)¹.
- Color corrections: applied using environment-specific calibration data.
- Light falloff correction: applied per light source using images of a diffuse plane.
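The MTB alignment referenced above¹ is based on the observation that a bitmap thresholded at the image median is largely invariant to exposure changes, so two exposures can be aligned by minimizing the XOR of their bitmaps. The following is a minimal sketch of that idea; `median_threshold_bitmap` and `best_shift` are illustrative helpers, not functions from this repository:

```python
import numpy as np


def median_threshold_bitmap(gray: np.ndarray) -> np.ndarray:
    """Bitmap that is True where a pixel is brighter than the image median."""
    return gray > np.median(gray)


def best_shift(reference: np.ndarray, candidate: np.ndarray, max_shift: int = 4):
    """Exhaustively search small translations of `candidate`.

    The best shift is the one that minimizes the number of differing
    bitmap pixels (the XOR count). Ward's original method speeds this
    up with an image pyramid; this sketch keeps only the core test.
    """
    ref_mtb = median_threshold_bitmap(reference)
    cand_mtb = median_threshold_bitmap(candidate)
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(cand_mtb, (dy, dx), axis=(0, 1))
            err = np.count_nonzero(ref_mtb ^ shifted)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best
```

The exhaustive search is quadratic in `max_shift`; the pyramid variant from the paper reaches shifts of ±64 px with the same per-level cost.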
"""
from photonics.light_corrections import correct_light_falloff
LIGHT_FALLOFF_IMAGES = [read_image(path) for path in LIGHT_FALLOFF_IMAGES_PATHS]
TOP_LIGHT_IMAGES = correct_light_falloff(TOP_LIGHT_IMAGES, LIGHT_FALLOFF_IMAGES)
BOTTOM_LIGHT_IMAGES = correct_light_falloff(BOTTOM_LIGHT_IMAGES, LIGHT_FALLOFF_IMAGES)
del LIGHT_FALLOFF_IMAGES
show_images(TOP_LIGHT_IMAGES)
"""
White balance: applied per light setup using an RGB-based white balance when reading the RAW images.
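As an illustration of what an RGB-based white balance does, the sketch below scales each channel with a gain derived from a neutral gray reference patch. The function `white_balance` and the patch-based approach are assumptions for this example; the repository applies its correction while reading the RAW files:

```python
import numpy as np


def white_balance(image: np.ndarray, gray_patch: np.ndarray) -> np.ndarray:
    """Scale each RGB channel so a reference patch becomes neutral gray.

    `image` and `gray_patch` are float arrays in [0, 1]; the patch is a
    crop of the image showing a neutral gray card under the light setup.
    """
    patch_means = gray_patch.reshape(-1, 3).mean(axis=0)  # per-channel mean
    gains = patch_means.mean() / patch_means              # neutralize the cast
    return np.clip(image * gains, 0.0, 1.0)
```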
Opacity Map: from Differently Lit Images with Edge Detection².
There are currently three different types of datasets and thus three different ways to create a mask:

- classic: Mask is created using a combination of global and adaptive thresholds, skeleton data to preserve fine details, and contour detection to organize and sort out detected elements.
- light table: Mask is created using a trivial threshold and contour detection to organize and sort out detected elements.
- frosted glass: Similar to light table.
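The light-table variant can be sketched in a few lines. `light_table_mask` is a hypothetical helper, and where the repository uses contour detection to sort out elements, this sketch substitutes connected-component labeling via SciPy:

```python
import numpy as np
from scipy import ndimage


def light_table_mask(image: np.ndarray, threshold: float = 0.5,
                     min_area: int = 16) -> np.ndarray:
    """Binary mask from a back-lit (light table) image in [0, 1].

    Objects block the light, so they appear dark: apply a trivial
    threshold, then drop tiny components that are most likely noise.
    """
    mask = image < threshold                 # dark pixels = object
    labels, count = ndimage.label(mask)      # group pixels into elements
    keep = np.zeros_like(mask)
    for element_id in range(1, count + 1):
        component = labels == element_id
        if component.sum() >= min_area:      # sort out tiny specks
            keep |= component
    return keep
```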
```python
from mappings.mask import mask_from_frosted_glass

OPACITY_MAP = mask_from_frosted_glass(BOTTOM_LIGHT_IMAGES)
OPACITY_MAP[OPACITY_MAP >= 255 / 1.5] = 255
OPACITY_MAP[OPACITY_MAP < 255 / 1.5] = 0

show_image(OPACITY_MAP)
```
```python
from mappings.identifier import identifier_map

IDENTIFIER_MAP, element_count = identifier_map(OPACITY_MAP)
print(f"Element count: {element_count}")
show_image(IDENTIFIER_MAP)
```
Element count: 35
Translucency Map: from Differently Lit Images with Exposure Fusion³.
```python
from mappings.translucency import translucency_map

TRANSLUCENCY_MAP = translucency_map(BOTTOM_LIGHT_IMAGES)
TRANSLUCENCY_MAP[OPACITY_MAP == 0] = [0, 0, 0]

show_image(TRANSLUCENCY_MAP)
```
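Exposure fusion³ combines differently exposed images by weighting each pixel by quality measures and averaging. The sketch below keeps only the "well-exposedness" term of Mertens et al. (a Gaussian around mid-gray); the full method also weights by contrast and saturation and blends in a multi-resolution pyramid. `exposure_fusion` is an illustrative helper, not the repository's implementation:

```python
import numpy as np


def exposure_fusion(images: list[np.ndarray], sigma: float = 0.2) -> np.ndarray:
    """Fuse differently exposed grayscale images (floats in [0, 1]).

    Each pixel is weighted by how close it is to mid-gray (0.5), so
    well-exposed pixels dominate the weighted average.
    """
    stack = np.stack(images)                               # (n, h, w)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)          # normalize per pixel
    return (weights * stack).sum(axis=0)
```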
Albedo Map: from Differently Lit Images with Exposure Fusion³.
```python
from mappings.albedo import albedo_map

ALBEDO_MAP = albedo_map(TOP_LIGHT_IMAGES)
ALBEDO_MAP[OPACITY_MAP == 0] = [0, 0, 0]

show_image(ALBEDO_MAP)
```
Normal Map: from Differently Lit Images with Photometric Stereo⁴.
```python
from mappings.normal import normal_map
from utilities.image_interpolation import edge_extend_image

NORMAL_MAP = normal_map(TOP_LIGHT_IMAGES)
NORMAL_MAP[OPACITY_MAP == 0] = [0, 0, 0]
NORMAL_MAP = edge_extend_image(NORMAL_MAP, OPACITY_MAP, 1)

show_image(NORMAL_MAP)
```
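Classic Lambertian photometric stereo⁴ recovers a normal per pixel from images taken under several known light directions: the intensities satisfy I = L · (albedo · n), which is solved in the least-squares sense. A self-contained sketch (not the repository's `normal_map`):

```python
import numpy as np


def photometric_normals(images: np.ndarray, lights: np.ndarray) -> np.ndarray:
    """Recover unit surface normals with photometric stereo.

    images: (n, h, w) grayscale intensities under n light directions.
    lights: (n, 3) unit light direction vectors.
    Solves I = L @ (albedo * n) per pixel via least squares and
    returns unit normals of shape (h, w, 3).
    """
    n, h, w = images.shape
    intensities = images.reshape(n, -1)                       # (n, h*w)
    g, *_ = np.linalg.lstsq(lights, intensities, rcond=None)  # (3, h*w)
    norms = np.linalg.norm(g, axis=0)                         # per-pixel albedo
    normals = g / np.where(norms == 0, 1, norms)              # avoid div by zero
    return normals.T.reshape(h, w, 3)
```

At least three non-coplanar light directions are needed; more lights over-determine the system and reduce noise.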
Roughness Map: from the Neighbor-Variance of the Normals.
```python
from mappings.roughness import roughness_map

ROUGHNESS_MAP = roughness_map(NORMAL_MAP, OPACITY_MAP)

show_image(ROUGHNESS_MAP)
```
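The neighbor-variance idea can be sketched as follows: smooth surfaces have nearly constant normals within a neighborhood (variance close to zero), while bumpy surfaces have strongly varying normals. This `roughness_from_normals` is an assumed illustration using the identity Var[x] = E[x²] - E[x]², not the repository's `roughness_map`:

```python
import numpy as np
from scipy.ndimage import uniform_filter


def roughness_from_normals(normals: np.ndarray, size: int = 3) -> np.ndarray:
    """Roughness as the local variance of normal vectors (h, w, 3).

    Computes Var[x] = E[x^2] - E[x]^2 per channel over a size x size
    neighborhood and sums the channel variances into one roughness value.
    """
    mean = uniform_filter(normals, size=(size, size, 1))
    mean_sq = uniform_filter(normals ** 2, size=(size, size, 1))
    variance = np.clip(mean_sq - mean ** 2, 0, None)  # clip float noise
    return variance.sum(axis=-1)
```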
The height map is calculated by integrating over the gradients of the normals in the normal map. The gradients are calculated by weighting the vertical and horizontal gradient values based on a rotation value. The height map gains accuracy when multiple rotation values are used and the integrated gradient values are combined into a single height value per pixel.
Given the normal vector $\vec{n} = (n_x, n_y, n_z)$, the horizontal and vertical surface gradients are $g_x = -n_x / n_z$ and $g_y = -n_y / n_z$, and for a rotation value $r$ they are weighted into a combined gradient:

$$g_r = \cos(r)\,g_x + \sin(r)\,g_y$$

This will be calculated for every pixel and for every rotation value.

The height values $h_r$ are then obtained by cumulatively summing the gradient values $g_r$ along the rotated integration direction.

This alone is very prone to errors; that is why the rotation is introduced. When the gradient map is re-calculated multiple times with different rotation values and height values are calculated for every re-calculated gradient map, adding these values together drastically improves the resulting height values:

$$h = \frac{1}{|R|} \sum_{r \in R} h_r$$

To make all the height maps comparable, the height map is not normalized per height map, but per fixed height. In theory this means that all height maps are divided by the same factor, but in practice there is a little caveat to that: not all datasets are captured with the same focal length. Thus, a slope of 45° in one normal map pixel does not result in the same height on every dataset, because the effective length of a single pixel does not correspond to the same real-world length. To counteract this problem another factor is introduced: the texel density (pixels per meter, $\rho$).

The texel density $\rho$ converts lengths measured in pixels into real-world lengths.

Given the angle of view $\alpha$ and the distance $d$ between camera and surface, the captured real-world width is $2\,d\,\tan(\alpha/2)$. This and the image width in pixels $w$ yield the texel density:

$$\rho = \frac{w}{2\,d\,\tan(\alpha/2)}$$
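Plugging hypothetical capture values into the texel density relation (the numbers below are illustrative, not from any dataset):

```python
import math

# Hypothetical capture: 60 degree angle of view, camera 0.5 m from the
# surface, 4000 px image width.
angle_of_view = math.radians(60)
distance = 0.5       # meters
width_px = 4000

# Real-world width covered by the image: 2 * d * tan(alpha / 2)
real_world_width = 2 * distance * math.tan(angle_of_view / 2)

# Texel density in pixels per meter: w / (2 * d * tan(alpha / 2))
texel_density = width_px / real_world_width
```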
```python
from mappings.height import height_map

HEIGHT_MAP = height_map(NORMAL_MAP, OPACITY_MAP, height_divisor=50)

show_image(HEIGHT_MAP)
```
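The multi-rotation integration can be sketched in plain NumPy. This simplified `height_from_normals` uses only the 0° and 90° integration paths (row-first and column-first) and averages them; the repository's `height_map` combines more rotation values and applies the texel-density normalization:

```python
import numpy as np


def height_from_normals(normals: np.ndarray) -> np.ndarray:
    """Integrate a height map from a normal map of shape (h, w, 3).

    The surface slopes are gx = -nx/nz and gy = -ny/nz. Heights are
    integrated along two different paths and averaged to dampen the
    error accumulation of a single integration direction.
    """
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    nz = np.where(np.abs(nz) < 1e-6, 1e-6, nz)  # guard near-flat normals
    gx, gy = -nx / nz, -ny / nz

    # Path 1: along the first row, then down each column.
    row = np.cumsum(gx[0]) - gx[0, 0]
    h1 = row[None, :] + np.cumsum(gy, axis=0) - gy[0:1, :]

    # Path 2: down the first column, then along each row.
    col = np.cumsum(gy[:, 0]) - gy[0, 0]
    h2 = col[:, None] + np.cumsum(gx, axis=1) - gx[:, 0:1]

    return (h1 + h2) / 2
```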
Ambient Occlusion Map: from Height Mapping with ray-traced baking.
Footnotes
1. Ward, Greg. "Fast, robust image registration for compositing high dynamic range photographs from hand-held exposures." Journal of Graphics Tools 8.2 (2003): 17-30.
2. Canny, John. "A computational approach to edge detection." IEEE Transactions on Pattern Analysis and Machine Intelligence 6 (1986): 679-698.
3. Mertens, Tom, Jan Kautz, and Frank Van Reeth. "Exposure fusion." 15th Pacific Conference on Computer Graphics and Applications (PG'07). IEEE, 2007.
4. Woodham, Robert J. "Photometric method for determining surface orientation from multiple images." Optical Engineering 19.1 (1980): 139-144.