When it comes to artificial intelligence, Google is no stranger to its possibilities. Having already experimented with AI in its own offerings such as Google Photos, the company is now looking to turn what was once considered a television-only feat into reality.
The feat in question is known as “zoom, enhance”: zooming far into a photo and enhancing the image so that its subject becomes recognizable. In normal practice, doing so would just leave you with a pixelated mess. So how do you create a recognizable image out of that mass of pixels? Google’s answer is deep learning.
As part of the company’s Google Brain project, Google is combining two neural networks to generate clearer images from a mere 8 × 8 pixel source. The first is called the conditioning network, and it is used to match the low-resolution source image against other high-resolution images.
The second neural network is called the prior network, and it is responsible for adding realistic high-resolution details to the source image. Mashing together the output of both networks usually produces a photo that contains a plausible amount of realistic detail.
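The fusion of the two networks can be illustrated with a toy sketch. Everything here is a simplification I am assuming for illustration: `conditioning_logits` stands in for the trained conditioning network (here just a nearest-neighbour upsample turned into per-pixel scores), and `prior_logits` stands in for the trained prior network (here just random noise in place of learned texture). The real system learns both networks; only the fusion step, summing per-pixel logits and sampling each output pixel, reflects the described approach.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditioning_logits(lowres):
    # Stand-in for the conditioning network: upsample the 8x8 source to
    # 32x32 and score each of the 256 intensity values per pixel.
    up = np.kron(lowres, np.ones((4, 4)))        # nearest-neighbour 4x upsample
    bins = np.arange(256)
    # Higher score the closer a candidate intensity is to the upsampled pixel.
    return -np.abs(up[..., None] - bins) / 16.0  # shape (32, 32, 256)

def prior_logits(shape):
    # Stand-in for the prior network: random scores in place of the
    # learned high-resolution detail a real model would contribute.
    return rng.normal(scale=0.1, size=shape)

def fuse_and_sample(lowres):
    # Fuse the two networks by summing their per-pixel logits, then
    # sample each output pixel from the resulting softmax distribution.
    logits = conditioning_logits(lowres) + prior_logits((32, 32, 256))
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    h, w, k = probs.shape
    flat = probs.reshape(-1, k)
    samples = np.array([rng.choice(k, p=p) for p in flat])
    return samples.reshape(h, w)

lowres = rng.integers(0, 256, size=(8, 8))
highres = fuse_and_sample(lowres)
print(highres.shape)  # (32, 32)
```

Because the output is sampled from a distribution rather than copied from the source, every pixel is a guess, which is why the result is plausible detail rather than recovered detail.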
One thing to keep in mind about this system is that the final image isn’t the real thing. It is merely the A.I.’s best guess as to what the subject in the photo actually looks like. While the tech shows promise, it is still miles away from the photo enhancement technology displayed on TV shows like Crime Scene Investigation. Nevertheless, it could prove useful in the future. Those interested in this particular technology can find the research paper here.