Thanks to TV series and movies, people nowadays believe that when it comes to digital images and videos, everything is possible. Some of you may remember the “never-ending enhance” sequence in Blade Runner or the magic zoom they have in CSI. Then we turn to reality, where cameras with poor components, coupled with Digital Video Recorders (DVRs) set to kill the quality to save storage space, mean that you often end up with a bunch of smashed pixels. Sometimes, you are asked to extract vital information from them.
Contrary to what you see in movies, there are cases in which there’s just nothing to do, except invent information that is not there – something we must avoid, of course. However, sometimes the information you are looking for is not clearly available in any single frame, but it may become much more intelligible by wisely integrating multiple frames. In this post, we’ll show you how two of Amped FIVE’s brand-new filters, Perspective Stabilization and Perspective Super Resolution, can really be a game changer in such situations.
Imagine we must read the license plate of the vehicle found at the bottom of this video.
By scrolling with the mouse, we can look at the actual pixels composing the license plate (it’s important to be able to see the raw data, without any adjustment, to begin with!). The plate is 11 pixels high, not much, but still something. The car stays in the video for several seconds, so we could try to merge the information from some frames into a single, enhanced picture.
An effective way to strongly reduce the noise in a video is Frame Averaging, which simply computes the mean of the frames, pixel by pixel. Unfortunately, your object of interest must be almost still across the frames, otherwise it will smear in the final output. It is thus advisable to apply Local Stabilization to your object before averaging: you select the object in the first frame and let Amped FIVE track it and keep it still in the rest of the video.
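The noise-reduction effect of averaging is easy to demonstrate. The sketch below (plain NumPy, not Amped FIVE’s implementation) builds a synthetic clean frame, adds independent noise to 30 copies of it, and averages them: with N frames, the noise standard deviation drops by roughly a factor of sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "clean" frame and 30 noisy observations of it.
clean = rng.integers(0, 256, size=(24, 64)).astype(np.float64)
frames = [clean + rng.normal(0.0, 20.0, size=clean.shape) for _ in range(30)]

# Frame averaging: the per-pixel mean over the (already aligned) frames.
averaged = np.mean(frames, axis=0)

# With 30 frames of independent noise (sigma = 20), the residual noise
# should shrink to roughly 20 / sqrt(30), i.e. about 3.7.
noise_before = np.std(frames[0] - clean)
noise_after = np.std(averaged - clean)
print(noise_before, noise_after)
```

Note the assumption baked into the example: the frames are perfectly aligned. If the object moves between frames, the mean smears it instead of denoising it, which is exactly why stabilization must come first.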
If we try this in our example, we notice that simple image stabilization does not help much, as you can see below (we also improved contrast). Why is that?
The problem is that the car is not simply translating in the image: as it moves, the perspective of the license plate changes. Therefore, even after stabilization, the license plate still changes from one frame to another, and the amount of change depends on its position with respect to the camera. That’s why pixels are more and more blurred as you move from left to right in the license plate.
Fortunately, in Amped FIVE there are now two filters targeting this exact scenario: Perspective Stabilization and Perspective Super Resolution.
Perspective Stabilization, found under the Stabilization filter category, can stabilize a planar object whose perspective changes across frames. It is based on the concept of image rectification: since we know the license plate is a planar surface, once we know the coordinates of its vertices in all frames we can compute a homography, that is, a mathematical transformation which links the coordinates of the vertices in the current frame to the coordinates of the corresponding vertices in the reference frame. In order to compute the license plate homography between two frames, the pixel coordinates of the plate’s vertices are needed. Unfortunately, tracking pixel coordinates with high accuracy in poor-resolution images is not trivial. In fact, until a few months ago, users had to manually click on the license plate’s vertices in every frame they wanted to stabilize, as allowed by the Perspective Registration filter – a tedious and exhausting job. With Perspective Stabilization the user now only needs to select the vertices in the first (reference) frame; then, the filter will automatically track their position in the following frames.
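To make the homography idea concrete, here is a minimal NumPy sketch (illustrative only, not FIVE’s internals) that estimates the 3x3 homography from four tracked plate corners using the classic Direct Linear Transform (DLT); warping the current frame with this matrix would bring the plate back to its reference position. The corner coordinates below are made up for the example.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst from 4+ point
    pairs, via the Direct Linear Transform (DLT) solved with an SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null-space vector = homography entries
    return H / H[2, 2]             # normalize so H[2, 2] == 1

def apply_homography(H, pt):
    """Map a 2D point through H using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Plate corners in the reference frame (hypothetical coordinates)...
ref = [(10, 10), (110, 12), (112, 40), (8, 38)]
# ...and where the tracker found them in the current frame.
cur = [(30, 25), (120, 20), (125, 55), (28, 50)]

H = homography_from_points(cur, ref)
# Warping the current frame with H sends each tracked corner back to
# its reference position:
print(apply_homography(H, cur[0]))
```

With exactly four non-degenerate correspondences the homography is determined uniquely, so each tracked corner maps back onto its reference corner to numerical precision; with more points, the same SVD gives a least-squares fit.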
Let’s take a look at how we can use this filter for our case. Once we have selected the area (it is recommended to allow some extra pixels outside the license plate; of course, they must be on the same planar surface), the points will be automatically added to the filter parameters and we can now choose Motion Type, Tracking Method, and Interpolation.
Motion Type refers to the expected type of motion the filter needs to track. More complex motions can give more accurate results but do take longer to compute and may be more sensitive to motion blur or video noise. Perspective is the most general setting and is what we are using in our case.
You can choose from three different types of tracking to suit your project:
- Static Tracking compares the current frame with the reference frame where the selection of pixels has been set: it offers the most precise stabilization, but may fail if the shape of the region changes too much.
- Dynamic Tracking compares each frame with the previous one, which can stabilize larger deformations but the position in the stabilized video may drift slightly over time. In practice, it is more robust (works in most situations), but less precise for later frames.
- Hybrid Tracking compares the current frame with both the first and the previous, allowing for the tracking of large deformations but keeping the object steady. It is the method which usually gives the best compromise of robustness and precision.
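The trade-off between Static and Dynamic Tracking comes from how the frame-to-reference transform is obtained. In Dynamic Tracking it is the product of all the frame-to-frame transforms, so small per-step estimation errors accumulate (the “drift” mentioned above). This toy NumPy sketch (a simplified illustration, not FIVE’s internals) shows the chaining with pure translations:

```python
import numpy as np

def compose_to_reference(incrementals):
    """incrementals[k] maps frame k+1 -> frame k; composing them yields
    the transform from the last frame back to frame 0 (the reference).
    Any small error in one step propagates into every later frame."""
    H = np.eye(3)
    for H_step in incrementals:
        H = H @ H_step
    return H

# A toy motion model: each frame-to-frame homography is a pure
# translation of (2, 1) pixels.
step = np.array([[1.0, 0.0, 2.0],
                 [0.0, 1.0, 1.0],
                 [0.0, 0.0, 1.0]])

# After 10 frames, the accumulated transform is a (20, 10) translation.
H_total = compose_to_reference([step] * 10)
print(H_total[0, 2], H_total[1, 2])
```

Static Tracking instead estimates each frame’s transform directly against the reference frame, so there is nothing to accumulate, but the match can fail once the plate’s appearance has changed too much from the reference.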
In our case, Dynamic Tracking proved to be the best option. Under the Output tab, you can select what type of output you would like from the filter.
- Stabilize Video will produce a stabilized video. Choose this if stabilization is all you need, if there are more steps in your workflow, or if you plan to use something like Frame Averaging to reduce noise and perform integration.
- Selection Overlay will draw the warped selection onto the input video.
- Prepare for Super Resolution will leave the video unaltered but adds the transformation matrix to each frame should you want to use Perspective Super Resolution later in a workflow.
Clicking the “Prepare for Super Resolution” button at the bottom of the Output tab will automatically add the Perspective Super Resolution filter.
Perspective Super Resolution
Perspective Super Resolution works alongside Perspective Stabilization to apply the Super Resolution effect to an object that has been the subject of some perspective disturbance. It is automatically loaded into the Chain History after clicking “Prepare for Super Resolution”.
The general idea behind super-resolution is that when you have many low-resolution observations of an object which moves a bit, you can merge the information to obtain a single observation at higher resolution. In a sense, you are trading temporal resolution for spatial resolution. The Perspective Super Resolution filter uses the mathematical transformation previously estimated by the Perspective Stabilization filter to guide such an “information fusion” operation.
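A toy NumPy example makes the fusion idea tangible. Here four low-resolution frames sample the same scene at four different half-pixel phases; because the offsets are known exactly, placing each frame back on the high-resolution grid (“shift-and-add”) reconstructs the scene perfectly. Real footage is far less cooperative: the sub-pixel alignment must be estimated (in FIVE’s case, from the homographies computed by Perspective Stabilization), and noise and blur prevent a perfect result.

```python
import numpy as np

rng = np.random.default_rng(1)
hi = rng.integers(0, 256, size=(32, 32)).astype(float)  # the "true" scene

# Four low-res observations: factor-2 decimations of the scene taken
# at four different sampling phases (half-pixel offsets).
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
lows = [hi[dy::2, dx::2] for dy, dx in offsets]

# Shift-and-add fusion: put each low-res frame back on the high-res
# grid according to its (here, exactly known) offset.
fused = np.zeros_like(hi)
for (dy, dx), low in zip(offsets, lows):
    fused[dy::2, dx::2] = low

print(np.array_equal(fused, hi))
```

The example shows why multiple frames genuinely add information: each low-resolution frame carries samples of the scene that the others missed, and accurate registration is what lets you put them back in the right place.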
We asked for a magnification factor of 5, and obtained this still image:
We compensated for the mild blurring typically introduced by integration techniques by applying a light Optical Deblurring:
Finally, we used the Correct Perspective and Sharpening filters to obtain a frontal view of the license plate and increase the contrast between the characters and the background. As shown below, compared to the original pixels, we obtained quite an improvement!
This case shows how important it is to use the proper tools in the proper order. We showed that a standard stabilization followed by frame integration was not the best choice in this case, because the perspective of the object of interest changes in the video. Thanks to the Perspective Stabilization and Perspective Super Resolution filters, we were able to automatically register frames and merge their information.
Find out more about Amped FIVE at ampedsoftware.com/FIVE.