Apple was slated to launch an iOS 13.2 developer beta today featuring its much-anticipated Deep Fusion photography feature, with a public launch likely soon to follow. Unfortunately, the launch has been delayed until further notice, but more details about the feature have surfaced.
According to TechCrunch, Deep Fusion will give photos a higher level of detail than the standard imaging software currently found in most phones by using AI to blend multiple exposures together. Using machine learning, Deep Fusion will capture complex textures such as skin, clouds, and foliage on a per-pixel level to create a highly customized image. The feature will activate automatically depending on lighting conditions: in the brightest situations with the telephoto lens it will shut off, but in darker wide-lens shots it will kick in. Deep Fusion cannot be used with the iPhone's ultra-wide-angle lens.
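Apple has not published how Deep Fusion's blending works, but the general idea of combining several exposures with per-pixel weights is well established in exposure-fusion research. The toy sketch below illustrates that concept only: it weights each pixel by a simple "well-exposedness" heuristic (closeness to mid-gray), not by any neural network, and the frame data and function names are invented for illustration.

```python
import math

def blend_exposures(frames):
    """Per-pixel weighted blend of several exposures of the same scene.

    Each frame is a list of pixel intensities in [0, 1]. Pixels near
    mid-gray (0.5) are treated as well exposed and weighted more heavily,
    a common exposure-fusion heuristic (not Apple's actual algorithm).
    """
    def weight(v):
        # Gaussian centered at mid-gray: favors well-exposed pixels.
        return math.exp(-((v - 0.5) ** 2) / (2 * 0.2 ** 2))

    blended = []
    for i in range(len(frames[0])):
        values = [frame[i] for frame in frames]
        weights = [weight(v) for v in values]
        total = sum(weights)
        blended.append(sum(w * v for w, v in zip(weights, values)) / total)
    return blended

# Three toy "exposures" of a four-pixel scene: under-, mid-, and over-exposed.
under = [0.05, 0.10, 0.20, 0.15]
mid   = [0.40, 0.50, 0.60, 0.55]
over  = [0.90, 0.95, 0.98, 0.97]
result = blend_exposures([under, mid, over])
```

Because the mid-exposure pixels sit closest to mid-gray, the blend leans toward them rather than toward a naive average, which is the intuition behind fusing bracketed exposures for extra detail.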
The feature will only work on 2019 iPhones with the A13 chip, and each image shot with Deep Fusion will take around one second to process and update. Most iPhone users won't even notice. Keep it locked here for more information on Apple's Deep Fusion launch as it arises.