What is AI super resolution? How it improves video images

Digital images begin with a fixed number of pixels in a two-dimensional grid. AI super resolution uses machine learning (ML) algorithms to infer how more pixels can be added to an original image to improve it. Fundamentally, the technology increases the resolution by creating a version of the image with more pixels that can offer greater detail. The algorithms generate the best colors to use for the interpolated pixels. 

How is AI super resolution used?

Super resolution algorithms are commonly used to improve the display of images and video. Many televisions, for instance, may be able to display a grid of 3840 x 2160 pixels, sometimes called 4K (an approximation of the horizontal number of pixels) or ultra high definition (UHD). Many TV signals, however, are broadcast only with grids of 1920 x 1080 pixels, also known as 1080p. AI algorithms convert each pixel in the 1080p signal into a two-by-two grid of pixels, inferring new information to make the image more detailed. 

Super resolution algorithms are also being deployed with digital cameras and medical instrumentation. The algorithms provide higher resolutions that can be essential for engineering, construction, surgery and other practices that rely upon cameras to gather important details. 

How does AI super resolution work?

The visual output of super resolution, sometimes called “upsampling,” varies depending upon the algorithm. The simplest solution is not to try to infer any new detail and simply replace each pixel with four identical pixels of the same color. This creates a larger grid, but no more detail. 
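
As a rough illustration, here is a minimal Python/NumPy sketch of this naive approach, which simply copies each pixel into a two-by-two block:

```python
import numpy as np

def nearest_neighbor_2x(image: np.ndarray) -> np.ndarray:
    """Double the resolution by copying each pixel into a 2x2 block.

    The output has four times as many pixels but no new detail:
    every new pixel reuses an existing color.
    """
    # Repeat rows, then columns; works for grayscale (H, W)
    # and color (H, W, C) arrays alike.
    return np.repeat(np.repeat(image, 2, axis=0), 2, axis=1)

# A 2x2 image becomes a 4x4 image with the same four colors.
tiny = np.array([[0, 255], [128, 64]], dtype=np.uint8)
print(nearest_neighbor_2x(tiny))
```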

Better algorithms infer more detail. Some look at adjacent pixels and arrange for the new pixels to form a smooth transition with their neighbors, for example by fitting linear functions to the local pixel values. Others look for sharp transitions in color and intensify them to make the image appear crisper. 
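
For a sense of how these classic interpolation filters differ in practice, here is a minimal sketch using the Pillow imaging library (the input filename is a hypothetical stand-in):

```python
from PIL import Image

# Upscale a 1080p frame to UHD with three classic filters.
# NEAREST copies pixels, BILINEAR fits linear functions to
# neighboring pixels, and LANCZOS better preserves sharp edges.
frame = Image.open("frame_1080p.png")          # hypothetical input file
target = (frame.width * 2, frame.height * 2)   # 1920x1080 -> 3840x2160

blocky = frame.resize(target, Image.Resampling.NEAREST)
smooth = frame.resize(target, Image.Resampling.BILINEAR)
crisp = frame.resize(target, Image.Resampling.LANCZOS)
```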

Some algorithms track the shifting images from a video feed and use the subtle changes from frame to frame to infer more detailed information. That enables the creation of a higher-resolution image that is consistent with the sequence of images from the original video. 
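
One simple version of this idea is the classic “shift-and-add” technique: place each frame’s samples onto a finer grid, offset by that frame’s estimated sub-pixel shift, and average the overlapping contributions. The toy NumPy sketch below assumes the shifts are already known; it illustrates the principle, not any product’s actual algorithm:

```python
import numpy as np

def fuse_shifted_frames(frames, shifts, scale=2):
    """Toy shift-and-add fusion.

    frames: list of (H, W) float arrays from a video burst
    shifts: list of (dy, dx) sub-pixel offsets, in low-res pixels
    scale:  upsampling factor for the output grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each low-res sample to its nearest high-res cell,
        # offset by the frame's sub-pixel shift.
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(cnt, (hy, hx), 1)
    cnt[cnt == 0] = 1  # leave cells that no sample reached at zero
    return acc / cnt
```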

This entire area is an active subject of research. Some companies are shipping working versions, sometimes bundled with cameras. Others are developing new algorithms for new domains.

What are some types of super resolution?

There are several different approaches to constructing a new image with higher resolution. The simplest begins with a single image and searches for the best way to create a new, denser grid of pixels that approximates the source image. 

Many algorithms seek to double the resolution along each axis, effectively quadrupling the number of total pixels, as with our example of converting a 1080p television feed to UHD. There is no reason, however, why the dimensions need to be exactly doubled: 

  • Algorithms may create any number of new pixels that approximate one or more pixels from the original image. A wide range of algorithms, often implemented with graphics processing units (GPUs), upscale (or increase the size of) images. 
  • Some single-image algorithms use machine learning to find better ways to upsample (or approximate a higher resolution for) a grid of pixels. These algorithms detect hard edges and abrupt transitions and choose new pixels to enhance these features. Pure linear interpolation tends to blur a picture, but better algorithms can produce crisper, more detailed results, as in the sketch after this list.
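
As a concrete, if simplified, illustration of the machine-learning approach, here is a minimal PyTorch sketch loosely modeled on the well-known SRCNN architecture: the input is first enlarged with bicubic interpolation, and a small stack of convolutions then learns to restore the sharp edges that plain interpolation blurs. The network and its layer sizes are illustrative assumptions, not any vendor’s model:

```python
import torch
import torch.nn as nn

class TinySRNet(nn.Module):
    """A minimal SRCNN-style super resolution network."""

    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),
        )

    def forward(self, x):
        # Bicubic interpolation does the upscaling; the convolutions
        # learn a residual correction that sharpens the result.
        x = nn.functional.interpolate(
            x, scale_factor=2, mode="bicubic", align_corners=False)
        return x + self.body(x)

# A (batch, channel, height, width) tensor doubles along each axis.
out = TinySRNet()(torch.rand(1, 3, 135, 240))  # -> (1, 3, 270, 480)
```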

Some super resolution algorithms use a collection of sensors that are shifted into slightly different perspectives. They use several sources of illumination or sound that are also slightly shifted, often by amounts that correspond to the wavelength. This approach is often found in radar ranging systems, which use radio waves, and in ultrasonic sensors, which use sound.

An approach often used for satellite images is to combine results from different colors or wavelengths. This multi-band super resolution can add more precision because the different colors have slightly different optical properties. Normally, the lens and the sensor must be designed to reduce these differences, but the super resolution algorithms use them to improve the final result. 
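
Pan-sharpening is one widely used multi-band technique along these lines: a high-resolution panchromatic (grayscale) band lends its spatial detail to lower-resolution color bands. Here is a toy sketch of a Brovey-style version, offered only to illustrate the idea:

```python
import numpy as np

def brovey_pansharpen(r, g, b, pan):
    """Toy Brovey-style pan-sharpening.

    r, g, b: color bands already upsampled to the pan band's grid
    pan:     high-resolution panchromatic band
    All inputs are float arrays of the same shape, scaled to [0, 1].
    """
    intensity = (r + g + b) / 3.0
    # Scaling each band by pan/intensity injects the pan band's
    # fine spatial detail while roughly preserving the colors.
    ratio = pan / np.maximum(intensity, 1e-6)  # avoid division by zero
    return r * ratio, g * ratio, b * ratio
```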

Some super resolution algorithms work with multiple images, which may have been captured independently in a burst or recovered from a video signal. Combining and aligning such images can make it possible to position a sharp change in color or intensity more precisely.

A big challenge for multi-image super resolution algorithms is sub-pixel alignment. The multiple images will probably not align perfectly. Indeed, some super resolution scientists celebrate the kind of camera shake that a person adds to a sequence of pictures, because it shifts the grid slightly between images. These slight, sub-pixel shifts supply the extra information needed to render the new pixels more accurately. 
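
Phase correlation is one standard way to estimate those sub-pixel shifts. The sketch below uses scikit-image’s phase_cross_correlation on synthetic stand-in frames; the upsample_factor parameter refines the estimate to a fraction of a pixel:

```python
import numpy as np
from skimage.registration import phase_cross_correlation

# Synthetic stand-ins for two frames of a burst: the second frame
# is the first shifted by three pixels along the x axis.
reference = np.random.rand(256, 256)
moving = np.roll(reference, 3, axis=1)

# upsample_factor=100 resolves the shift to 1/100 of a pixel by
# locally upsampling the cross-correlation peak.
shift, error, _ = phase_cross_correlation(
    reference, moving, upsample_factor=100)
print(shift)  # estimated (dy, dx) displacement between the frames
```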

What are the major applications for AI super resolution?

Currently, AI super resolution is commonly applied in the following areas:

Television entertainment

As in our original example, the most common application is to upsample video signals for display on screens with high resolution. The current generation of screens for living rooms and mobile phones delivers higher resolution than many historical video feeds, so the video hardware must upsample the feed before displaying it. To avoid a blocky, pixelated result on the higher quality display, the super resolution algorithm must process the feed intelligently. 

Satellite Imagery

Many satellites take photos of Earth, and the resolution is seldom considered sufficient. Even the most recently captured images lack some of the detail needed for their intended purposes. In some cases, scientists must work with historical data that was gathered at a lower resolution, so filling in detail can be essential for some studies. Satellite imagery also often includes data at a wide range of colors or wavelengths, sometimes including wavelengths that can’t be seen by the human eye. A super resolution algorithm can use all of this information to improve what we see in the visible spectrum. 

Medical Applications

While many super resolution algorithms work with visible light from cameras, the same algorithms and approaches can also improve the detail in images collected from other sensors, such as MRI, CT, x-ray and ultrasound scanners. 

Security Cameras

When investigators are tackling a crime that’s been recorded by a security camera, higher resolution is usually needed. In many cases, the cameras capture a video feed and an AI super resolution application can use multi-image techniques to create a single image with higher resolution. 

What major companies provide AI super resolution?

Both large, established companies and startups provide AI super resolution tools. Among the more established vendors are the following.

  • Google is a leader, with a variety of algorithms. The technology is bundled with the camera app included with some of its high-end mobile phones, such as the Pixel 6. This app integrates information from multiple images, captured in a burst when the shutter button is triggered, to produce higher resolution results. Google also conducts research using different models tuned with machine learning. These experimental models can enlarge images to four, 16 and even 64 times as many pixels.
  • While Apple doesn’t highlight any super resolution algorithms in its mobile phones, it holds several patents that indicate how the company may be folding these algorithms into the phones and deploying them in the background. One uses image stabilization operations to capture and combine multiple images that are offset by less than a pixel.
  • Adobe includes a super resolution algorithm in its Lightroom and Photoshop products. The technology can upsample images using a model trained with machine learning. Using millions of pairs of photos captured with both low and higher resolution, Adobe’s research team trained the AI to recognize some standard pixel structures. The algorithm can double the linear resolution or quadruple the number of pixels. Although this approach works with all image formats, it is most effective when applied to raw files.
  • AMD and Nvidia use super resolution in their video drivers to improve the display for the detailed worlds in some of their games. The algorithms are applied differently from many of the examples in this article, however. Instead of adding resolution to the sensor readings from a camera, the drivers take the synthetic world from inside a game and use similar techniques to improve how their video hardware renders these worlds on the screen. 

How are startups delivering super resolution? 

Startups are also addressing the market.

  • Entropix makes a platform that can increase the resolution of captured images by a factor of up to nine by using multiple frames from video images. The company focuses on improving the accuracy of machine vision algorithms by improving the resolution of images captured with inexpensive cameras. This solution can increase the accuracy of autonomous vehicles, automated inventory management and other applications that rely on machine vision. 
  • Eikon Therapeutics has created algorithms for adding super resolution to microscopy for pharmaceutical applications like drug discovery. The extra resolution can extend the capabilities of microscopes to detect and resolve smaller objects and molecules. The company states that with its technology researchers can see what couldn’t be seen before. 
  • Photobear, DeepAI and VanceAI are among the startups delivering web applications and APIs that professional photographers and other users can employ to upscale their images and improve their resolution. 
  • The Phased Array Company (TPAC) is applying super resolution algorithms to the data it gathers from its array of sensors that can also collect data outside of the visible spectrum. For example, TPAC uses ultrasound results to detect flaws in metal structures and other mechanical and architectural elements. 
  • KP Labs and Mapscaping use super resolution to improve the results from satellite images. This can extend the life of old hardware and enhance the quality of historical data.

What is the real value of super resolution? 

Some question whether super resolution ultimately adds value to the original images. The algorithms create structure and add detail, but can we be certain that the added details are correct? Although the generated images may look good and compare well to what we expect, do we really know what should be there without having taken a higher resolution image in the first place?

This uncertainty exists despite researchers’ best practices, which often include beginning with higher resolution images and then downgrading the resolution before starting their experiments. They can then compare any newly created higher resolution results with the original high resolution images. The algorithms will create new, higher resolution results without having had access to the original, high-resolution images. Although we can test the results in the lab, we still cannot be certain how well the technology actually works in the wild. 
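
A common score for such lab comparisons is the peak signal-to-noise ratio (PSNR), which summarizes in decibels how far a reconstructed image strays from the ground truth. A minimal sketch:

```python
import numpy as np

def psnr(ground_truth: np.ndarray, restored: np.ndarray, peak=255.0):
    """Peak signal-to-noise ratio in decibels. Higher is better;
    identical images score infinity."""
    diff = ground_truth.astype(float) - restored.astype(float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```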

This reality reflects a philosophical gap in our understanding of data and imagery. Some argue that these algorithms create a fictional version of the world. It may appear as we expect, but the new, higher resolution is not backed up by real, higher-resolution data. 

As others point out, however, the algorithms and machine learning models are not simply creating flights of fancy. They are applying rules gathered from millions or billions of training images. When they add detail about hair, they are creating details that are based upon learning just how thin hair might be and how it lies. When the algorithms create scales, wrinkles or blemishes, they are not merely imagining details, but using knowledge and expertise built up over a long training process. The technology leverages a deep knowledge of the world to make informed decisions.

Author: Peter Wayner
Source: Venturebeat
