Taking our stalking/detective/CSI-ing skills to the next level, we now try to post-process images to enhance or remove certain portions without completely destroying the details of the image. As far as I know, this is beyond the point-and-click abilities of image processing programs such as GIMP and even Adobe Photoshop! So what we’ll be doing here is something not everyone knows.

Why the need? Well… let’s say you want to analyze a certain letter from a kidnapper, but they did their best to erase it with scratches. Fear not, for these scratches can be removed by proper filtering. How? We have to deal with the Fourier transform of images to see these scratches in a new light. But before going further, we first have to familiarize ourselves with the most basic things in the FFT world:

Familiarization with the FFT of Common Shapes:

Before going through the complicated images we hope to clean/filter/go-detective-with… we first have to familiarize ourselves with the FFTs of common shapes. Why is this necessary? Well, if we are dealing with images that have ‘patterns’ on them, knowing the FFTs of common shapes and patterns immediately gives us an idea of what type of filter we should be using.

It took a while before I finally mastered the Scilab syntax for the different FFT applications. Below is the code I used to get the FFT of a certain pattern/image (represented by the variable m). For some odd reason, the fft() and fft2() functions did not give the desired results. Thanks to Mam Jing Soriano for figuring out that the working function for our Scilab 5.3.3 and the SIVP toolbox is the multidimensional FFT, mfft(). I mean, honestly… the matrices of our patterns and images are 2D, right? So I really can’t comprehend why fft2() won’t work. Anyway, mfft() asks for at least three parameters: the matrix of the object whose FFT you wish to take, -1 for the forward FFT or 1 for the inverse FFT, and lastly the dimensions of the object. After taking the FFT of the object, immediately displaying it won’t give you a nice output. We still have to make use of the fftshift() function so that the resulting pattern is centered at (0,0), AND we also have to take its abs(), as the output is complex!
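Since the original Scilab snippet isn’t reproduced here, here is a NumPy stand-in for the same recipe (np.fft.fft2 playing the role of mfft(m, -1, …)). The single centered impulse is just a sanity-check pattern of my own choosing, not one of the activity’s patterns: its magnitude spectrum should come out flat.

```python
import numpy as np

# A single centered "Dirac delta" as a sanity-check pattern
n = 128
m = np.zeros((n, n))
m[n // 2, n // 2] = 1.0

FTm = np.fft.fft2(m)                 # forward 2-D FFT; output is complex
FTmc = np.abs(np.fft.fftshift(FTm))  # center the spectrum, take the modulus
```

For a lone impulse the magnitude spectrum is 1 everywhere, which makes it easy to confirm that the shift-then-abs pipeline is wired up correctly.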

In Scilab, the command imshow(FTmcA) will not display the output we are looking for. Again, I’m pretty much puzzled as to why such complications exist in Scilab… and again, Mam Jing Soriano figured out that the grayplot() function gives a proper display of our FFTs. PHEW!

The first pattern deals with two Dirac delta functions placed symmetrically along the x-axis. Basically, this looks like two dots on the x-axis. Constructing the said pattern through the code as seen below:

gives an FFT that looks like repetitive stripes: sinusoidal fringes that vary along the axis of separation, i.e., stripes running perpendicular to the line joining the two dots.

Following the same reasoning, the FFT of two dots placed symmetrically along the y-axis will have stripes running perpendicular to the y-axis.
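Since the original Scilab snippets aren’t reproduced here, here is a quick NumPy check of the fringe orientation (the 32-pixel separation is arbitrary, not the original value):

```python
import numpy as np

# Two unit impulses placed symmetrically about the center along one axis
n = 128
m = np.zeros((n, n))
m[n // 2, n // 2 - 16] = 1.0
m[n // 2, n // 2 + 16] = 1.0

F = np.abs(np.fft.fftshift(np.fft.fft2(m)))

# Every row of the magnitude spectrum is identical: the fringes vary only
# along the separation axis, so the stripes run perpendicular to the line
# joining the two dots
rows_identical = np.allclose(F, F[0, :])
```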

Increasing the diameter of our ‘Dirac deltas’, the next pattern deals with two circles. Again, this pattern was generated through the code:

which we have been using since Day 2 (this just proves that what we learn in the past can always come in handy in the future 🙂 ). However, I’ve learned that this method does not necessarily give perfect circles; you can actually see that they appear to have ‘corners’ …
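As the original snippet isn’t shown here, a sketch of the same meshgrid-style construction in NumPy terms might look like this (the radius and separation are illustrative, not the original values):

```python
import numpy as np

# Two filled circles drawn on a coordinate grid
n = 128
X, Y = np.meshgrid(np.arange(n), np.arange(n))
r, d = 6, 20
left = (X - (n // 2 - d)) ** 2 + (Y - n // 2) ** 2 <= r ** 2
right = (X - (n // 2 + d)) ** 2 + (Y - n // 2) ** 2 <= r ** 2
m = (left | right).astype(float)
# the pixelated edges are why the circles appear to have 'corners'

F = np.abs(np.fft.fftshift(np.fft.fft2(m)))
```

A handy sanity check: the value of the shifted spectrum at the center (the DC term) equals the total area of the circles, and it is the brightest point of the whole spectrum.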

The output looks like an Airy pattern (a bright central circle that gradually fades outward) with sine waves embedded in it (the black stripes). Take note that if we instead have a single circle, the FFT looks like an Airy pattern without the additional sine waves. Besides simply looking at the FFT of the circles, it was also observed that the diameter of the central bright spot is inversely proportional to the diameter of the circle.

If we instead consider squares, brought about by the code below:

The FFT looks like a sinc function propagating along the x- and y-axes.
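A sketch of a single centered square aperture (the half-width w here is illustrative) shows the 2-D sinc-like spectrum, including where its zeros fall:

```python
import numpy as np

# A centered square aperture of side 2*w
n, w = 128, 8
m = np.zeros((n, n))
m[n // 2 - w:n // 2 + w, n // 2 - w:n // 2 + w] = 1.0

F = np.abs(np.fft.fftshift(np.fft.fft2(m)))
# zeros of the sinc pattern fall every n / (2*w) = 8 pixels from the center,
# so a larger square gives a narrower central lobe
```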

Similar to the case of the circles, it was observed that upon increasing the size of the squares, the size of the central bright fringe decreases as well.

Not far from the circular patterns used earlier, the next pattern deals with two Gaussians. Gaussians are commonly used as they resemble actual point sources in real life. The Gaussian pattern was constructed using the fspecial() function (which I’ve already put to use in earlier activities). The resulting Gaussian patterns and their corresponding FFTs are displayed below:

Since we have a pattern that kind of looks like the circles, we’d expect the resulting FFT to look like the Airy pattern. Only this time, because the edges of the Gaussian form a gradient that eventually goes to zero intensity, the outer rings beyond the central bright circle are already negligible as their intensities immediately decrease. It was also observed that since we have two Gaussians in our pattern, the Airy-like pattern has “sine waves” embedded in it as well. Also, as the diameter of the Gaussian was increased (by increasing the value of sigma in our equation), the diameter of the central circular pattern in its FFT decreased.
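A NumPy sketch of the Gaussian pair, standing in for Scilab’s fspecial(‘gaussian’) (the sigma and separation d are illustrative):

```python
import numpy as np

n, sigma, d = 128, 4.0, 24
X, Y = np.meshgrid(np.arange(n), np.arange(n))

def gauss(cx, cy, s):
    # an isotropic Gaussian blob centered at (cx, cy)
    return np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * s ** 2))

m = gauss(n // 2 - d, n // 2, sigma) + gauss(n // 2 + d, n // 2, sigma)

F = np.abs(np.fft.fftshift(np.fft.fft2(m)))
# the spectrum is a smooth envelope (no sharp outer rings) modulated by
# cosine fringes coming from the two-source separation
```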

Now that we look at it, the FFTs are actually the diffraction patterns of a laser passing through an aperture of the same shape! In fact, there’s an equation that gives the resulting electric field of a light beam after passing through an aperture. Of course, that’s already beyond the scope of this activity :).

As a last test, the pattern now deals with evenly spaced 1’s along the x- and y-axes. This was generated through the code:

This is of interest as the resulting FFT looks something like a weave…

Recall that the FFT of two points separated along the x-axis looks like stripes. Now that we have more than two points, that they are evenly spaced, AND that similar points are placed along the y-axis as well, an evenly spaced weave looks reasonable enough as its FFT.
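A NumPy sketch of the pattern (the 16-pixel spacing is illustrative) shows the two perpendicular fringe systems superposing into the weave:

```python
import numpy as np

# Evenly spaced 1's along both the x- and y-axes of the image
n, step = 128, 16
m = np.zeros((n, n))
m[n // 2, ::step] = 1.0   # points along the x-axis
m[::step, n // 2] = 1.0   # points along the y-axis

F = np.abs(np.fft.fftshift(np.fft.fft2(m)))
# the spectrum is a superposition of two crossed stripe systems, one from
# each row/column of impulses, which together look like a weave
```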

Why was this taken? Well… it looks like a pattern we usually encounter in everyday things, right? A close-up of your jeans, baskets, etc. We might just find this useful later on.

Honey-Stars Special:

Another thing we have to familiarize ourselves with is the concept of convolution. Convolution combines two functions into a third, and by the convolution theorem, convolving two functions is equivalent to multiplying their Fourier transforms element by element. Graphically, the value of the convolution at each shift indicates the area of overlap between the two functions being convolved. This is important as it has many applications in different fields like electrical engineering, optics and, of course, image processing [1].

Now, the two images we’ll try to convolve are shown below:

The first one consists of ten 1’s randomly placed in the image. The other consists of a single star shape in the middle. Reminds you of honey stars, right? (PS: I was really hungry at the time I was doing this part of the activity… I seriously had to stop myself from using a pizza as my pattern 🙂 ).

Now, taking the respective FFTs of both images and using Scilab’s per-element multiplication operator (.*) gives us our result. However, since we hope to see the resulting image, we have to take the inverse Fourier transform of the product. As discussed earlier, ifft() does not seem to work, hence mfft() comes into play yet again. This time, though, we set the mode to 1 so that it takes the inverse transform instead.

What’s your guess for the resulting image? Remember that the convolution will more or less look like the first image BUT with the second image ‘stamped’ at each of its points. So theoretically, each of the ten 1’s will be replaced with our ‘star’ pattern.

With that in mind, we get… not only one but TEN scattered honey stars! YUM YUM!
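The whole procedure can be sketched in NumPy terms (the original Scilab code used .* and mfft; here a tiny cross shape stands in for the star, and the dot positions are random placeholders):

```python
import numpy as np

# Ten unit impulses at random positions
rng = np.random.default_rng(0)
n = 64
dots = np.zeros((n, n))
for r, c in rng.integers(5, n - 5, size=(10, 2)):
    dots[r, c] = 1.0

# A tiny cross-shaped "star" centered in its own canvas
star = np.zeros((n, n))
star[n // 2 - 2:n // 2 + 3, n // 2] = 1.0
star[n // 2, n // 2 - 2:n // 2 + 3] = 1.0

# Convolution theorem: multiply the FFTs element-wise, then invert.
# Each impulse in `dots` gets replaced by a copy of the star.
conv = np.real(np.fft.ifft2(np.fft.fft2(dots) * np.fft.fft2(star)))
```

One quick consistency check: the total “mass” of the result equals the product of the two images’ sums, since the DC terms multiply.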

What happens if the pattern used has larger dimensions than our honey stars? We’d still get the same result, though this time the patterns might overlap with one another.

Familiarizing ourselves with this technique will come in handy when we implement the filters on our actual images, as you’ll see later on.

CSI NIP: Episode 1 –  Lunar Analysis

It’s a common practice in astrophotography and landscape photography to make use of a technique termed “stitching”. Like sewing, stitching is the process by which two or more images are merged into a single image. This technique is used when the object extends beyond the field of view of the camera’s lens. Panoramic images, which give us wide views of landscapes and scenes, are results of stitched images. However, one of the main problems of this technique is the parallax error brought about by the instability of your camera, or even just a simple change in the lighting of your view.

For the first episode of our going-detective-with-Scilab, we will try to remedy the errors in a stitched lunar orbiter image taken by NASA. It is clear that the image contains the edges of the ‘stitches’, giving rise to repetitive lines parallel to the x-axis.

Hmm…. Why does this look so familiar? Taking the FFT of the image shows us why…

From earlier, we found that the FFT of a pattern of evenly spaced 1’s along the x- and y-axes is a weave. For this case, it is apparent that the FFT of the repetitive pattern brought by the stitches is simply a set of evenly spaced bright points along the y-axis! This is why FFTs play an important role in so many applications: their ability to reveal the frequencies of repetitive patterns gives us a way to enhance or even remove them.

So now that we know the points in the FFT that give rise to the patterned stitches in our original image, we create a mask to filter them out. The mask in theory should be composed of 1’s and 0’s. If we hope to remove the points along the y-axis, we have to ensure that our mask has 0 values at these points while the rest of the values are 1. Saving the FFT of the image and opening it in Adobe Photoshop, I was able to create a mask that looks something like this:
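In NumPy terms (a sketch only; the actual mask was hand-drawn in Photoshop, and the sizes here are illustrative), such a mask might be built as:

```python
import numpy as np

# Ones everywhere; zeros along the vertical (fy) axis of the shifted
# spectrum, except for a small window around the center so that the DC
# term (the overall brightness of the image) survives
n = 256
mask = np.ones((n, n))
mask[:, n // 2] = 0.0                      # kill the fy-axis column
mask[n // 2 - 2:n // 2 + 3, n // 2] = 1.0  # but keep the DC neighborhood
```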

To apply this mask to our FFT, we make use of the powers of the convolution theorem: multiplying the unshifted FFT of the original image element by element with the fftshifted mask gives the filtered FFT. However, since we hope to see the effect on our image, we again make use of the mfft(image, 1, …) function to take the inverse transform.

Our result?

Woah! The stitch marks are barely visible!!

If you hope to get an even better image, you can construct a better mask. Also, the code I used for this portion of the activity is placed below for future reference:
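Since the original Scilab listing is an image that isn’t reproduced here, the pipeline can be sketched in NumPy terms instead, with a synthetic “stitched” image (horizontal lines repeating every 8 pixels) standing in for the lunar photo:

```python
import numpy as np

# Synthetic stand-in for the lunar image: intensity varies only along y
n = 128
y = np.arange(n)
img = 0.5 + 0.2 * np.sin(2 * np.pi * y / 8)[:, None] * np.ones((1, n))

# Mask drawn on the *shifted* spectrum: zero the fy axis, keep the DC term
mask = np.ones((n, n))
mask[:, n // 2] = 0.0
mask[n // 2, n // 2] = 1.0

# Multiply the unshifted FFT by the de-centered mask, then invert;
# np.fft.ifftshift undoes the centering of the hand-drawn mask
filtered = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.ifftshift(mask)))
# the periodic "stitch" lines are gone; only the mean brightness remains
```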

CSI NIP: Episode 2 – Brush Strokes

For the last part of this activity, we are now to deal with an image of a painting on a canvas.

This image is a portion of Dr. Vincent Dara’s painting entitled “Frederiksborg”. Through the techniques used earlier, we hope to take out the weave pattern of the canvas so that we end up with only the painting. Removing the canvas pattern leaves only the details coming from the painting itself and hence can help reveal the strokes used by the painter.

Making use of our program to look at the FFT of this image gives us:

The FFT clearly contains what looks like alternating bright points. We then assume that these points are the ones that give rise to the pattern brought by the canvas. A filter was yet again created by simply taking note of the locations of these bright spots:

Multiplying the fftshifted version of this filter with the unshifted FFT of the image, then taking the inverse FFT, finally gives us our filtered image.

Yet again, we were able to take out the bulk of the weave pattern, giving us a clear view of the brushstrokes made by the painter. If this is still lacking for your taste, a more accurate filter can be created.

As an additional investigation, we try to look at the FFT of the filter itself. In theory, this should give us the weave pattern we hoped to remove from the original painting. The resulting FFT is as observed.

It does resemble the weave pattern we are seeing; however, it seems to still be lacking at some points. This simply tells us that the filter used could be further improved.

References:

1. Wikipedia. Convolution. Available at http://en.wikipedia.org/wiki/Convolution#Applications. Accessed July 25, 2012.

2. Maricor Soriano. AP186 Activity 7: Enhancement in the Frequency Domain. 2012.

Personal Note:

For this activity, I’d like to give myself a rating of 10/10 as I was able to successfully do all the requirements. This activity actually got me frustrated, as a lot of functions don’t work the way they’re supposed to. The ifft() function was, for some unknown reason, giving a different output. In fact, through my tinkering with the functions, I was able to display the FFTs through the imshow function. How? I simply applied ifft() to the abs() of the FFT and then took the real part… I really don’t know why it worked, but it gave me the chance to save the images of the FFTs properly 🙂 Funny, right? That’s indeed the life of a scientist… trying to figure out something, and then ending up with something new.