Portrait Light, an AI-based lighting feature introduced by Google for new Pixel phones in September 2020, lets photographers change the lighting direction and intensity of portraits after capture. Google has published a blog post that explains how the company developed the technology.
In the Pixel Camera app on the Pixel 4, Pixel 4a, Pixel 4a (5G), and Pixel 5, Portrait Light is applied automatically post-capture to default-mode images and Night Sight photos that include people. For Portrait Mode photos, Google says the feature adds more dramatic lighting on top of the shallow depth-of-field effect already applied, resulting in what the company calls a “studio-quality” look.
Google says it believes lighting is a personal choice, which is why the developers also let photographers manually reposition the light and adjust its brightness within Google Photos to match their preference.
Although Portrait Light debuted on the Pixel 4 and Pixel 5 phones, it was also rolled out as an update to older Pixel phones, going back as far as the Pixel 2.
To train an AI model to understand how changing the light pattern affects a human face, Google needed millions of portraits lit from many different directions.
“Portrait Light can add a repositionable light source to the scene, with the initial lighting direction and intensity automatically selected to complement the photo’s existing lighting,” the technology’s developers explain in a blog post. “We accomplished this by leveraging novel machine learning models, each trained using a diverse dataset of photographs captured in the Light Stage computational illumination system.”
The two models Google developed handle automatic directional light placement and synthetic post-capture relighting. The first looks at a given portrait and places a synthetic directional light in the scene, much as a photographer would position an off-camera light source in the real world. The second, given that lighting direction, adds the synthetic light in a way that looks realistic and natural.
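Conceptually, the two models form a simple two-stage pipeline: one network predicts where the light should go, and the other synthesizes the relit image given that direction. The Python sketch below illustrates that data flow only; the class names, method signatures, and the trivial brightening math are hypothetical placeholders, not Google’s published architecture.

```python
import numpy as np

class LightPlacementModel:
    """Hypothetical stand-in for the automatic directional light placement model.

    Looks at a portrait and predicts a 3D direction for a synthetic key
    light, mimicking where a photographer would place an off-camera light.
    """
    def predict_direction(self, portrait: np.ndarray) -> np.ndarray:
        # Placeholder: a real network would infer this from the face and
        # the scene's existing illumination.
        d = np.array([0.5, 0.7, 0.5], dtype=np.float32)
        return d / np.linalg.norm(d)  # unit direction vector

class RelightingModel:
    """Hypothetical stand-in for the synthetic post-capture relighting model.

    Given the portrait and a light direction, it adds synthetic directional
    light so the result looks realistic and natural.
    """
    def relight(self, portrait: np.ndarray, direction: np.ndarray,
                intensity: float = 1.0) -> np.ndarray:
        # Placeholder: uniform brightening stands in for the learned model.
        return np.clip(portrait * (1.0 + 0.3 * intensity), 0.0, 1.0)

def apply_portrait_light(portrait: np.ndarray) -> np.ndarray:
    """End-to-end flow: choose a light direction, then relight the image."""
    direction = LightPlacementModel().predict_direction(portrait)
    return RelightingModel().relight(portrait, direction)

# Usage with a dummy 256x256 RGB image with values in [0, 1]:
result = apply_portrait_light(np.random.rand(256, 256, 3).astype(np.float32))
```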
To teach its artificial intelligence how to execute these two models, Google used its Light Stage computational illumination system:
By placing a subject in the Light Stage and photographing them with its 64 cameras, each positioned at a different viewpoint, while illuminating them one light at a time with the stage’s 331 individually programmable LED light sources, the developers were able to show the AI how the same subject appears under many different lighting conditions.
In the image below, Google illustrates how the images captured on the Light Stage, each lit by a single light, can be combined to recreate the subject’s appearance in any lighting environment:
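The reason single-light captures can be recombined this way is that light transport is linear: a weighted sum of the one-light-at-a-time (OLAT) images reproduces the subject under any lighting environment built from those lights. Below is a minimal NumPy sketch of that principle; the array shapes and the toy light weights are illustrative assumptions, not Google’s actual data or pipeline.

```python
import numpy as np

# Hypothetical stack of one-light-at-a-time (OLAT) captures:
# 331 images, one per LED on the light stage, each H x W x 3.
NUM_LIGHTS = 331
H, W = 256, 256
olat_images = np.random.rand(NUM_LIGHTS, H, W, 3).astype(np.float32)  # placeholder data

def relight(olat_stack: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Simulate a new lighting environment as a weighted sum of OLAT images.

    Because light transport is linear, the subject lit by any combination
    of the stage's LEDs equals the same combination of the OLAT captures.
    weights[i] is the intensity of LED i in the target environment (e.g.,
    sampled from an HDR environment map projected onto the LED directions).
    """
    # Weighted sum over the light axis: (N, H, W, 3) -> (H, W, 3)
    return np.tensordot(weights, olat_stack, axes=([0], [0]))

# Toy example: one strong "key light" from LED 42 plus a dim overall fill.
weights = np.full(NUM_LIGHTS, 0.01, dtype=np.float32)
weights[42] = 1.0
relit = relight(olat_images, weights)
print(relit.shape)  # (256, 256, 3)
```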
Google says it sees Portrait Light as the first step in a series of plans to use AI and machine learning to make post-capture lighting controls more powerful on mobile cameras.
(Via Engadget)