Google Maps is one of Google’s most popular apps. One of the features that makes it so popular is Street View. For years, we’ve been able to fire up the app, search for an address, and see a real picture of the place we’re searching for. Not only that, but if you want to see what’s next door, down the street, or a few blocks over, you can do that too.
Street View works by stitching together a seemingly endless number of panoramas. A rig with 15 cameras, called a rosette, is affixed on top of a car, and Google drives around taking pictures with it. It then uses software to stitch these pictures together, giving us the Street View images we see today.
Since Google is using multiple cameras to capture each image, issues can pop up. These include a miscalibration of the cameras’ geometry, timing differences between any of the 15 cameras, or parallax. These issues can lead to tearing or misalignment of images. One notable example of this is the Street View image of the Sydney Opera House seen below. But Google is working on fixing that with a new software algorithm.
So, how did Google go about this? While it may seem easy to line the pictures up, Google had to account for a ton of variables during the process. Parallax, which we mentioned earlier, is caused by each of the cameras on the rosette seeing the scene slightly differently because of their spacing. Since each picture is taken from a slightly different angle, stitching them together becomes a difficult task.
Another issue is timing. While the rosette is one rig, it’s made up of 15 cameras. All of those cameras must be configured to fire at the exact same time. Picture this: you’re in the passenger seat of a car going 35 mph. You have a camera in each hand and you press the shutter button on the camera in your right hand half a second after the camera in your left hand. The cameras will take different pictures because you’re half a second further down the road. Now imagine doing that with 15 cameras.
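To put a number on that thought experiment, here’s a quick back-of-the-envelope calculation of how far the car moves between the two mistimed shutters (the 35 mph speed and half-second offset are from the example above):

```python
# How far does a car at 35 mph travel in the half second between shutters?
mph_to_mps = 1609.344 / 3600   # metres per second in one mile per hour
speed = 35 * mph_to_mps        # ~15.65 m/s
shutter_offset = 0.5           # seconds between the two shutter presses

drift = speed * shutter_offset
print(round(drift, 2))         # → 7.82 (metres between the two viewpoints)
```

Nearly eight metres of drift between two frames that are supposed to show the same scene is why the cameras have to fire in lockstep.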
These are just two examples of what can go wrong when capturing panoramas for Street View. To address them, Google is starting to use a brand new algorithm leveraging optical flow. Optical flow means that the software that analyzes these pictures finds corresponding pixels in images that overlap. Once the software finds these overlapping pixels, it can then correct the offsets it finds.
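The core idea of optical flow — finding corresponding pixels in overlapping images and measuring their offset — can be sketched with a toy block-matching example. This is an illustration of the general technique, not Google’s actual algorithm, and the images here are synthetic:

```python
import numpy as np

# Two synthetic "camera" tiles of the same scene, where the second
# camera sees everything shifted 5 pixels to the right.
rng = np.random.default_rng(0)
left = rng.random((64, 64))
true_shift = 5
right = np.roll(left, true_shift, axis=1)

def estimate_shift(a, b, max_shift=10):
    """Find the horizontal offset that best aligns b with a by
    minimizing the mean squared difference over candidate shifts."""
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.mean((np.roll(b, -s, axis=1) - a) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best

print(estimate_shift(left, right))  # → 5
```

Real optical flow estimates a dense per-pixel motion field rather than a single global shift, but the principle is the same: once you know where corresponding pixels landed, you know the offset to correct.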
When these discrepancies are fixed, you have to make sure that more aren’t created in the process. If you move one part of the image, it affects how the rest of it lines up. To fix any remaining issues, Google must then go in and change the geometry of the entire scene to make sure everything still fits together.
It does this by stretching and compressing other parts of the image to make sure everything continues to line up. It uses the points that it found during the optical flow process as reference points to find where it needs to stretch and where it needs to compress. The process is far from easy, but Google is downsampling photos to make the process a little less computationally stressful.
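The stretch-and-compress step can be sketched in one dimension: given a set of matched reference points (the correspondences found during optical flow), warp a scanline so each landmark lands where it should, stretching the pixels between some landmarks and compressing others. The landmark positions below are made-up toy data, not anything from Google’s pipeline:

```python
import numpy as np

# Matched reference points: where landmarks sit in the source image,
# and where the flow correspondences say they should sit. (Toy values.)
src_points = np.array([0.0, 30.0, 70.0, 99.0])
dst_points = np.array([0.0, 35.0, 65.0, 99.0])

line = np.arange(100, dtype=float)   # one scanline of pixel values

# For every output column, work out which source column to sample from,
# then resample the scanline. Regions between landmarks get stretched
# or compressed so that everything still joins up at the endpoints.
sample_from = np.interp(np.arange(100), dst_points, src_points)
warped = np.interp(sample_from, np.arange(100), line)

# The landmarks now sit at their target positions.
print(warped[35], warped[65])  # → 30.0 70.0
```

In practice this happens in two dimensions across an entire panorama, which is why the downsampling mentioned above matters: fewer pixels means far less work per warp.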
Overall, this new process should result in fewer artifacts showing up in panoramas and better overall geometry. While it’s not perfect, Google is now doing a better job of aligning each part of the panoramas. Also, since it’s software-based, Google doesn’t need to go out and take all new panoramas for Street View to improve.
Google is already rolling out the new software technique and applying it to existing Street View images. If you have time to kill, jump into Google Maps and take a look at some popular points of interest to see the changes. You can also hit the link below for more before and after pictures.