In preparation for an extreme ultra-CSI episode that’ll be up next after this activity, we have to familiarize ourselves with another image processing technique that will come in handy later on.

The word Morphology already gives you a faint idea that it deals with shapes and structures. Morphological operations are commonly used on images to enhance, isolate, and improve their details. However, these operations are carried out only on binary images whose elements lie in a two-dimensional space.

A quick review of set theory is needed as it provides the basic rules of our operations. The image below will serve as our reference point. The following elements can be seen:

Set A – pink box, Set B – purple box, C – region within the green blue outline, Set D – yellow box, and the elements butterfly and running man.

  • The butterfly is an element of Set A whereas the running man is not.
  • The complement of D is everything outside it (Set A, the butterfly, the running man, and the whole region of B that is not within D).
  • Set D is a subset of Set B.
  • C is the intersection of Sets A and B.
  • The union of Set A and B is the whole pink and purple regions together.
  • The difference of Set A and B is the whole pink region excluding the part within the green blue outline.

In addition, translation is simply the shifting of the element to another point.

For the morphological operations to work, two images or sets are required. The main pattern is the image whose size we hope to increase or decrease. The structuring element, on the other hand, determines how much the original pattern will change and in what way.

Dilation – Super Size Me!

The structuring element has a set origin (this is where we will place an element of the output if certain conditions are met), and it is translated to every point of the main pattern. At every point where at least a single element of your structuring element intersects your main pattern, the origin becomes an element of the dilated set.

A larger structuring element will then result in a larger output. If a single pixel is used as your structuring element, we end up with exactly the same image as our main pattern.

Erode – The Biggest Loser!

As for the case of the erode operation, the resulting image will be smaller compared to the original image. This is because this operation considers only the points at which your structuring element fits entirely inside your main pattern.

——-

To start off with the activity, we first have to determine the images to which we want to apply the operations. As indicated earlier, there should be two inputs involved: the main pattern which we will be eroding/dilating, and the structuring element which will determine how much the original pattern will be eroded/dilated.

For the main pattern, four different shapes were used as drawn below:

(L-R): A 5×5 square, a triangle with 4 units as its base and 3 units as its height, a cross that has a length of 5 units for each strip and lastly a hollow 10×10 square that has a width of 2 units.

The structuring elements on the other hand are as shown below:

(L-R): A 2×2 square, a 1×2 square, a 2×1 square, and 1’s along the diagonal of a 2×2 square

Going old-school:

Before taking advantage of the gifts of technology, we are to test whether we have really understood the concepts behind the dilate and erode morphological operations by going old-school. Using graphing paper and a marker, the resulting shape of each main pattern eroded with a chosen structuring element is predicted and drawn.

Starting with erode, we expect the shapes to be smaller compared to the original pattern. The matrix of images below shows that I have managed to get shapes that are indeed smaller. In fact, there are instances wherein the original pattern is totally removed! This is expected for certain shapes as the erode operation deals with points at which your structuring element fits entirely within the main pattern. Take for example the case of the cross main pattern, where the 2×2 box structuring element is impossible to embed as the width of the cross is only 1 unit! Hence, the resulting eroded shape is nothing.

Erode Matrix of Images (Theoretical)

In contrast, dilating the shapes results in relatively larger patterns, which is exactly what I observed in my predictions. This is because the dilate operation considers every point at which the structuring element still overlaps the main pattern, even if the overlap is only a single pixel.

Dilate Matrix of Images (Theoretical)

Consistency please?

Knowing what to expect upon dilating and eroding all our patterns with all our structuring elements, it’s time to find out if my predictions are correct. The operations can be carried out in Scilab as readily available functions for morphological operations are present in the SIP and IPD toolboxes.

With the sudden death of my Ubuntu system, I have finally accepted defeat and opted to download Scilab 4.1.2. This version of Scilab matches the last version of the SIP toolbox despite having lots of flaws, such as being unable to run .sce files, so I usually end up typing my codes in Notepad and pasting them on the prompt.

Through the use of Photoshop, I recreated the main patterns as well as the structuring elements as observed below:

Main Patterns

Structural elements

The important thing to take note of in creating these patterns is the consistency of the scale of your units. Initially, I downloaded a graphing paper page from the internet and used this as the basis in creating my patterns. However, upon eroding and dilating them, the results produced images that did not seem to fit my units! I had to redo my patterns using a 3×3-pixel block as a single unit.

Now, after loading all our patterns into Scilab through the imread() function, it is best to ensure that the matrix of each pattern is strictly binary and not grayscale (after all, we are talking about binary operations!). This can easily be done with the im2bw() function, with my threshold point set at 0.7.
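A minimal sketch of this preparation step, with the pattern and structuring-element filenames assumed (the SIP erode() and dilate() calls are discussed next):

pat = imread('cross.bmp');            // main pattern (filename assumed)
se  = imread('se_2x2.bmp');           // structuring element (filename assumed)
pat = im2bw(pat, 0.7);                // force the matrices to be strictly binary
se  = im2bw(se, 0.7);
er  = erode(pat, se);                 // eroded pattern
di  = dilate(pat, se);                // dilated pattern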

The erode() function calls for two inputs: the matrix of the image you wish to erode and the structuring element. Using it on all our patterns gives rise to the matrix of images below:

Eroded Images through Scilab

Comparing this matrix to my matrix of old-school eroded patterns, I am glad to say that they are exactly the same! The only difference between the two is their scaling.

Similarly, the matrix of images for the dilation case shows that the digitally dilated images and my predictions are the same.

Dilated images through Scilab

Personal Note:

Despite getting the right results from my predictions, it took quite a while before I managed to fully understand and master the art of eroding and dilating. I had the hardest time taking to heart the process of dilation, and it was all thanks to Mr. Xavier Tarcelo that I finally figured it out, through the use of an acetate which I had to move box per box on my graphing paper. For this activity, I would like to give myself a grade of 10/10 as I have managed to accomplish all the requirements in the activity as well as understand the techniques and concepts.

 

References:

1. Soriano, Jing. Applied Physics 186  Activity 7 – Morphological Operations. NIP, UP Diliman. 2012.

Taking our stalking/detective/CSI-ing skills to the next level, we now try to post-process images to enhance or remove certain portions without completely destroying the details of the image. This is beyond the abilities of common image processing programs such as GIMP and even Adobe Photoshop (as far as I know)! So this means that what we’ll be doing is something not everyone knows.

Why the need? Well… let’s say you want to analyze a certain letter from a kidnapper, but they did their best to deface it with scratches. Fear not, for these scratches can be removed by proper filtering. How? We have to deal with the Fourier transform of images to be able to see these scratches in a new light. But before going further, we first have to familiarize ourselves with the most basic of things in the FFT world:

Familiarization with the FFT of Common Shapes:

Before going through the complicated images we hope to clean/filter/go-detective-with…. we first have to familiarize ourselves with the FFTs of common shapes. Why is this necessary? Well, since we are now dealing with images that have ‘patterns’ on them, knowing the FFTs of certain common shapes and patterns immediately gives us an idea of what type of filter we should be using.

It took a while before I finally mastered the syntax in Scilab for the different FFT applications. Below is the code I’ve used in getting the FFT of a certain pattern/image (represented by the variable m). For some odd reason, the use of the fft() and fft2() functions did not give the desired results. Thanks to Mam Jing Soriano for figuring out that the working function for our Scilab 5.3.3 and SIVP toolbox is the multidimensional FFT or mfft() function. I mean honestly… the matrices of our patterns and images are 2D, right? So I really can’t comprehend why the fft2() function won’t work. Anyway, the mfft() function asks for at least 3 parameters: the matrix of the object whose FFT you wish to take, -1 for the FFT or 1 for the inverse FFT, and lastly the dimensions of your object. After taking the FFT of the object, immediately displaying it won’t give you a nice output. We still have to make use of the fftshift() function so that the resulting pattern is centered at (0,0), AND we also have to take its abs() as the output will be complex!
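A sketch of what that code might look like, following the steps just described (the variable name FTmcA and the pattern size are assumptions):

nx = 128; ny = 128;                      // assumed pattern size
FTmc  = mfft(m, -1, [nx ny]);            // forward FFT of the pattern m
FTmcA = abs(fftshift(FTmc));             // center the spectrum at (0,0) and take its modulus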

In Scilab, the command imshow(FTmcA) will not display the output we are looking for. Again, I’m pretty much puzzled as to why complications exist in Scilab….  Again, Mam Jing Soriano was able to figure out that the grayplot() function can give the proper display of our FFTs. PHEW!

The first pattern deals with two Dirac delta functions placed symmetrically along the x-axis. Basically this’ll look like two dots along the x-axis. Constructing the said pattern through the code as seen below:
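(The original snippet was posted as an image; here is a rough sketch of how it might have looked, with the image size and dot spacing assumed:)

nx = 128; ny = 128;
m = zeros(nx, ny);                   // blank image; rows taken as y, columns as x
m(ny/2, nx/2 - 10) = 1;              // one dot to the left of center, on the x-axis
m(ny/2, nx/2 + 10) = 1;              // and one to the right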

gives an FFT that looks like repetitive stripes parallel to the x-axis, repeating continuously along the y-axis.

From this, we expect the FFT of two dots placed symmetrically along the y-axis to have stripes that are parallel to the y-axis.

Increasing the diameter of our ‘Dirac deltas’, the next pattern deals with two circles. Again, this pattern was generated through the code:
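(A sketch of that code, reusing the circular-aperture approach; the circle centers and radius are assumed:)

nx = 128; ny = 128;
x = linspace(-1, 1, nx); y = linspace(-1, 1, ny);
[X, Y] = ndgrid(x, y);
r1 = sqrt((X - 0.3).^2 + Y.^2);      // distance from the first circle's center
r2 = sqrt((X + 0.3).^2 + Y.^2);      // distance from the second circle's center
m = zeros(nx, ny);
m(find(r1 < 0.1)) = 1;               // paint both circles
m(find(r2 < 0.1)) = 1;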

which we have been using since Day 2 (this just proves that what we learn from the past can always come in handy in the future 🙂 ). However, I’ve learned that this method does not necessarily give perfect circles. You can actually see that they appear to have ‘corners’…

The output looks like an Airy function (the bright circle that eventually fades as it propagates) with sine waves embedded in it (the black stripes). Take note that if we instead have a single circle, the FFT will look like an Airy function without the additional sine waves in it. Besides simply looking at the FFT of the circles, it was also observed that the diameter of the central bright spot is inversely proportional to the diameter of the circle.

If we instead consider squares produced by the code below:
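(Again a sketch of how that snippet might have looked, with two small squares placed like the circles above and their size assumed:)

nx = 128; ny = 128;
m = zeros(nx, ny);
m(59:70, 29:40) = 1;                 // a 12×12 square left of center
m(59:70, 89:100) = 1;                // and another one right of center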

The FFT looks like a sinc function propagating along the x- and y-axes.

Similar to the case of the circles, it was observed that upon increasing the size of the squares, the size of the central bright fringe decreases.

Not far from the circular patterns used earlier, the next pattern dealt with two Gaussians. Gaussians are commonly used as they resemble actual point sources in real life. The construction of the Gaussian pattern was done through the use of the fspecial() function (which I’ve already made use of in my earlier activities). The resulting Gaussian patterns and their corresponding FFTs are displayed below:
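(A sketch of how such a pair of Gaussians might be built with fspecial(); the blob size, sigma, and placement are all assumptions:)

nx = 128; ny = 128;
g = fspecial('gaussian', [32 32], 4);       // a 32×32 Gaussian blob with sigma = 4
m = zeros(nx, ny);
m(49:80, 21:52)  = g;                       // first Gaussian, left of center
m(49:80, 77:108) = g;                       // second Gaussian, right of center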

Since we have a pattern that somewhat looks like the circles, we’d expect the resulting FFT to be like that Airy function. Only this time, because the edge of a Gaussian is a gradient that eventually goes to 0 intensity, the outer circular rings beyond the central bright circle are already negligible as their intensities immediately decrease. It was also observed that since we have two Gaussians in our pattern, the Airy function has “sine waves” embedded in it as well. Also, as the diameter of the Gaussian was increased (by increasing the value of sigma in our equation), the diameter of the central circular pattern in its FFT decreased.

Now that we look at it, the FFTs are actually the diffraction patterns of a laser as it passes through an aperture of that shape! In fact, there’s an equation that gives the resulting electric field of a light beam upon passing through an aperture. Of course, that’s beyond the scope of our activity already though :).

As a last test, the pattern now deals with evenly spaced 1’s along the x- and y-axes. This was generated through the code:
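(One way to read that description, with the spacing assumed:)

nx = 128; ny = 128;
m = zeros(nx, ny);
m(ny/2, 1:8:nx) = 1;                 // evenly spaced 1's along the x-axis
m(1:8:ny, nx/2) = 1;                 // evenly spaced 1's along the y-axis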

This is of interest as the resulting FFT looks something like a weave…

Recall that the FFT of two points along the x-axis looks like stripes propagating along the y-axis. Now that we have more than two points, add the fact that they are evenly spaced AND that we also have similar points placed along the y-axis, and an evenly spaced weave looks reasonable enough as its FFT.

Why was this taken? Well… this looks like a pattern we’d usually encounter with our things right? Close-up version of your jeans, baskets, etc. We might just find this useful later on.

Honey-Stars Special:

Another thing we have to familiarize ourselves with is the concept of convolution. Convolution is an operation that combines two functions; by the convolution theorem, convolving two functions is equivalent to multiplying their Fourier transforms. Graphically, the convolution at each point indicates the area of overlap of the two functions as one slides over the other. This is of importance as it has a lot of applications in different fields like electrical engineering, optics and of course image processing [1].

Now the two images we’d try to convolve are as observed below:

The first one consists of 10 points of 1’s that are randomly placed in our image. The other one, on the other hand, consists of a single star shape in the middle. Reminds you of honey stars right?  (PS: I was really hungry at the time I was doing this part of the activity… I seriously had to stop myself from using a pizza as my pattern 🙂 ).

Now, taking the respective FFTs of both images and making use of the per-element multiplication operator (.*) in Scilab gives us our result. However, since we hope to see the resulting image, we have to take the inverse Fourier transform of the result. As discussed earlier, the use of ifft() does not seem to work, hence the mfft() function comes into play yet again. This time though we set the mode to 1 so that it takes the inverse Fourier transform instead.
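A sketch of that sequence (the variable names dots and star are assumptions):

FTdots = mfft(dots, -1, size(dots));              // FFT of the ten scattered 1's
FTstar = mfft(star, -1, size(star));              // FFT of the star pattern
result = mfft(FTdots .* FTstar, 1, size(dots));   // per-element product, then inverse FFT
imshow(abs(result) / max(abs(result)));           // normalize the modulus for display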

What’s your guess for our resulting image?  Remember that the convolution more or less will look like that of the first image BUT with the second image ‘translated’ in it. So theoretically the ten 1’s will be replaced with that of our ‘star’ pattern.

With that in mind, we get… not only one but TEN scattered honey stars! YUM YUM!

What happens if in case the pattern used have larger dimensions than that of our honey stars? We’d still get the same results though this time the patterns at times might overlap with one another.

Familiarizing ourselves with this technique can be of purpose when we have to implement the filters to our original images as you’ll see later on.

CSI NIP: Episode 1 –  Lunar Analysis

It’s a common practice in astrophotography and landscape photography to make use of a technique termed “stitching”. Like its namesake in sewing, stitching is the process wherein two or more images are merged, resulting in a single image. This technique is used for cases wherein the object is beyond the limit of the lens of the camera. Panoramic images, which give us a wide view of landscapes and scenes, are results of stitched images. However, one of the main problems of this technique is the parallax error brought by the instability of your camera, or even just a simple change in the lighting of your view.

For the first episode of our going-detective-with-Scilab, we will try to remedy the errors in a stitched lunar orbiter image taken by NASA. It is clear that the image contains the edges of the ‘stitches’, giving rise to repetitive patterns parallel to the x-axis.

Hmm…. Why does this look so familiar? Taking the FFT of the image shows us why…

From earlier, we found that the FFT of a pattern containing evenly spaced 1’s along the x- and y-axes is a weave. For this case, it is apparent that the FFT of the repetitive pattern brought by the stitches is simply a set of evenly spaced 1’s along the y-axis! This is the reason why FFTs play an important role in different applications: their ability to reveal the frequencies of repetitive patterns opens the way to enhancing or even removing them.

So now that we know the points in the FFT that give rise to the patterned stitches in our original image, we now create a mask to filter these out. The mask in theory should be composed of 1’s and 0’s. If we hope to remove the points along the y-axis, we have to ensure that our mask has 0 values on these points while the rest of the values are 1. Saving the FFT of the image and placing it in Adobe Photoshop, I was able to create a mask that looks something like this:

In order to apply this mask to our FFT, we make use of the convolution theorem: multiplying the unshifted FFT of the original image element-wise with the fftshifted mask gives the filtered FFT (which is equivalent to convolving the image itself with the mask’s inverse transform). However, since we hope to see its effect on our image, we again make use of the mfft(image, 1, …) function to take the inverse Fourier transform.

Our result?

Woah! The stitch marks are barely visible!!

If you hope to get an even better image, a better mask can be constructed. Also, the code I have used for this portion of the activity is placed below for future reference:
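(A sketch of how that code might go, with the filenames assumed:)

lunar = double(rgb2gray(imread('lunar.png')));     // stitched lunar orbiter image
mask  = double(im2bw(imread('mask.png'), 0.5));    // mask: 0 on the unwanted frequencies, 1 elsewhere
FTlunar  = mfft(lunar, -1, size(lunar));           // unshifted FFT of the image
filtered = FTlunar .* fftshift(mask);              // apply the fftshifted mask element-wise
clean    = abs(mfft(filtered, 1, size(lunar)));    // back to image space
imshow(clean / max(clean));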

CSI NIP: Episode 2 – Brush Strokes

For the last part of this activity, we are now to deal with an image of a painting on a canvas.

This image is a portion of Dr. Vincent Dara’s painting entitled “Frederiksborg”. Through the techniques used earlier, we hope to take out the weave pattern of the canvas so that we end up with only the painting. Removing the pattern of the canvas leaves only the details coming from the painting and hence can help reveal the strokes used by the painter.

Making use of our program to look at the FFT of this image gives us:

The FFT obviously contains what look like alternating points. We then assume that these points are the ones that give rise to the pattern brought by the canvas. A filter was yet again created by simply taking note of the locations of these bright spots:

Multiplying the fftshifted version of this filter element-wise with the unshifted FFT of the image and taking the inverse FFT finally gives us our filtered image.

Yet again, we were able to take out the bulk of the weave pattern, giving us a clear view of the brush strokes made by the painter. If this is still lacking for your taste, a more accurate filter can be created.

As an additional investigation, we try to look at the FFT of the filter. Theoretically, this is supposed to give us the weave pattern we hope to remove from the original painting. The resulting FFT is as observed.

It does resemble the weave pattern we are seeing, however it seems as if it’s still lacking at some point. This simply tells us that the filter used could be further improved.

References:

1. Wikipedia. Convolution. Available at http://en.wikipedia.org/wiki/Convolution#Applications. Accessed July 25, 2012.

2. Maricor Soriano. AP186 Activity 7: Enhancement in the Frequency Domain. 2012.

Personal Note:

For this activity, I’d like to give myself a rating of 10/10 as I was able to successfully do all the requirements. This activity actually got me frustrated as a lot of functions don’t work the way they are supposed to. The ifft() function was, for some unknown reason, giving a different output. In fact, through my tinkering with the functions, I was able to show the FFTs through the imshow function. How? I simply had to use the ifft() function on the abs() of the FFT and then take its real part… I really don’t know how it worked, but it gave me the chance to save the images of the FFTs properly 🙂 Funny right? That’s indeed the life of a scientist… trying to figure out something and then ending up with something new..

In the art of photography, light can be considered either your friend or your enemy. A friend, as it serves as the illumination of your subjects and brings your images to life; an enemy, as photography’s high dependence on light can be a continuous burden to you. Based on my experience though, it is far better to have a highly intense light source, as the settings of a camera can easily be tweaked to reduce the amount of light entering your optical system. On the other hand, there are limited solutions when there is a lack of light. Sometimes even making use of a flash still gives us dark images.

So how are we to solve this? Through post-processing of course!

This activity concentrates on the technique in manipulating the histogram in order to adjust the brightness of your image. Take for example the image below:

Image of my brother walking towards a moving car on one fine foggy evening in California.

In order to manipulate the pixels of this image, we first have to convert the image to grayscale, as not doing so will further complicate the process. As learned from the previous techniques, the conversion to grayscale as well as the display of the histogram is done by the code snippet below:
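(A sketch of that snippet, with the filename assumed; it also already computes the CDF used further down:)

img  = imread('evening.jpg');            // assumed filename
gray = rgb2gray(img);                    // grayscale version of the image
[counts, cells] = imhist(gray);          // graylevel histogram
PDF = counts / sum(counts);              // normalize to get the graylevel PDF
CDF = cumsum(PDF);                       // cumulative distribution function
plot(cells, CDF);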

Which gives rise to the images below:

It is important to take note here that the graylevel histogram of an image is actually called the graylevel probability distribution function (PDF) once the plot is normalized to the total number of pixels [1]. One can clearly observe that the image is indeed dark as most of the pixels are located on the left portion of the graph, close to 0 (signifying the darkest pixel).

A more interesting quantity we will be using for this activity is called the cumulative distribution function or CDF. The CDF at a given value is the area under the PDF up to that value; simply put, the CDF shows how the total area of your PDF accumulates. This can be computed in Scilab through the use of the cumsum() function, as already shown in the code snippet above. With our PDF illustrated above, its corresponding CDF is as seen in the image beside this paragraph.

The manipulation of the histogram is highly dependent on the shape of our CDF. Hence, we would investigate on the effect of the shape of the CDF to our histogram and eventually the output image.

The new CDFs we hope to implement on our image are as illustrated below:

These CDFs were produced through the use of the code snippet below.
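(A sketch of that snippet, consistent with the description that follows; the Gaussian’s center and width are assumed:)

x2 = 1:255;                                  // pixel intensities
y2lin  = x2;                                 // linear CDF
y2quad = x2.^2;                              // quadratic CDF
g      = exp(-((x2 - 128).^2) / (2*50^2));   // a Gaussian curve
y2gaus = cumsum(g);                          // Gaussian CDF
y2lin = y2lin/max(y2lin); y2quad = y2quad/max(y2quad); y2gaus = y2gaus/max(y2gaus);  // normalize the ideal CDFs to [0,1]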

Both the linear and quadratic cases were done by simply defining x2 as a list starting from 1 to 255. The Gaussian CDF, on the other hand, was produced by first plotting a Gaussian curve followed by the cumsum() function. It is also important that, since our original CDF is normalized (ranging only from 0 to 1), the last line of the code normalizes our ideal CDFs.

Now, having both our ideal CDF and the original CDF, the implementation of the ideal CDF is next. This is done through a process called “backprojection”.  The steps of this method are listed as follows:

Illustrative step for the backprojection method [1]

  1. For each x-value of the original CDF (take note that this ranges from 1 to 255, signifying your pixel intensities),
  2. its corresponding y-value is noted.
  3. The y-value noted for that x-value of the original CDF is then traced to the ideal CDF: we find the nearest y-value present in our ideal CDF,
  4. and then we take note of its corresponding x-value in our ideal CDF.
The backprojection code that I came up with was composed of for loops, as I was unable to fully understand the interp() function suggested by Mam Jing Soriano. Thank you to Mr. Mark Tarcelo for suggesting a solution to my problems during the coding process. A snippet of the code is displayed below:
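(A hypothetical reconstruction of that loop; CDF holds the original image’s CDF, while x2 and y2 hold the chosen ideal CDF’s x- and y-values:)

lookup = zeros(CDF);                         // new intensity for each original intensity
for i = 1:length(CDF)
    below = find(y2 <= CDF(i));              // ideal y-values at or below CDF(i)
    above = find(y2 > CDF(i));               // ideal y-values above CDF(i)
    if isempty(above) then                   // CDF(i) sits at the very top
        lookup(i) = x2($);
    elseif isempty(below) then               // CDF(i) sits at the very bottom
        lookup(i) = x2(1);
    else
        A = CDF(i) - y2(below($));           // gap to the nearest edge below
        B = y2(above(1)) - CDF(i);           // gap to the nearest edge above
        if A < B then
            lookup(i) = x2(below($));        // edge A is closer: use its x-value
        else
            lookup(i) = x2(above(1));        // otherwise use edge B's x-value
        end
    end
end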
The main problem I encountered was that the y-values of the original and ideal CDFs, even though both have the same range, are not exactly the same. This is troublesome when we try to trace back our original CDF’s y-value: simply making use of the find() function will, most of the time, fail to trace it back properly. As a solution, instead of using a find(y2 == CDF(i)) line, we split our ideal y-values at the point of our CDF(i), as seen in the code above. Through splitting, we get a faint idea of the location of our traced y-value. The next step is choosing which point is closest to our original CDF’s y-value. This is where the if and else statements come in: if the gap to edge A is smaller than the gap to edge B, we make use of the value at edge A, and vice versa.
However, this will not yet give our desired image. We have to apply these backprojected pixel values to the image. This was done by replacing each pixel’s original intensity in our image matrix with its new, backprojected intensity. The code below does the trick:
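(A hypothetical sketch of that replacement step, assuming gray holds the grayscale image and lookup the backprojected intensities:)

g = double(gray);                            // work in doubles to keep the comparison simple
newimg = zeros(g);                           // output image, same size as the input
for i = 1:length(lookup)
    newimg(find(g == i-1)) = lookup(i);      // pixels of intensity i-1 get their new value
end
imshow(uint8(newimg));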
The output images are as follows:
It is clear that there are evident changes in our output images. The images appear brighter compared to our original grayscale image. As proof, the tree branch located at the upper right corner of the image is now visible whereas it was not in the original image. It is also evident that the different shapes of our CDFs have different effects. This is of course expected as we are varying the CDF, which is related to our histogram, which in turn is related to the intensities of the pixels of our image.
To check whether or not we have properly backprojected the pixels, the histograms and CDFs of the output images were taken (in theory, the CDFs should resemble our ideal CDFs):
They do resemble our ideal CDFs, except that for the earlier portions of each CDF it is noticeable that the curves are not smooth. In fact, taking a look at the histograms gives us an idea as to why we ended up with these kinds of CDFs.
Some of the low-valued pixel intensities actually have no corresponding pixels at all. The histogram for a linear CDF should show a properly distributed plot. However, since some low-valued intensities have no pixels in them, everything piled up on the other intensity values, giving us high pixel counts there. This problem is brought about by our backprojection method. Recall that we just chose the value of y that gave the smallest difference between the ideal CDF plot and the original CDF plot. A solution would be to figure out a way of determining the x-values for exactly traced y-values of our ideal CDF.
In Photoshop, however (and also in other image editing software), this process is made easy for users. The CDF of an image can be easily observed and manipulated with the Curves option (Image –> Adjustments –> Curves). The manipulation is done by simply changing the CDF through clicking and dragging your cursor. The images below show some snapshots of the process:
Far easier right?
Also, compared to the output images from my own code, the Photoshop-adjusted image still seems smooth. The transition from one shade of gray to another is better compared to our output images, which at times already look too pixelated. This could have been brought about by the problem discussed earlier.
On a personal note, this activity made me think for a long time. I was trying to figure out how to properly trace the y-values back to my ideal plot. I also tried to study the interp() function and make use of it, however I never got around to producing a proper output image with it. Overall, I think I deserve a grade of 9/10 as I was still able to fully accomplish the requirements. However, the limitations of my backprojection script introduced errors into my output images.
Oh well… so much for wanting to retrieve my extra grades from my earlier activities.. 😦

References:

[1] Soriano, Jing. Activity 5 – Enhancement by Histogram Manipulation manual. Applied Physics 186. National Institute of Physics, UPD.

After finally getting everything fixed and prepped for the activity proper (Did you read my previous post that served as this blog’s prequel? Well if you didn’t… YOU SHOULD by clicking here), I can now finally discuss the actual activity:

Have you ever wanted to act all super-spy in a way that makes you feel oh so cool? Ever wondered how it feels to be part of the CSI team? Well I have awesome news for you! The activity for today brings stalking to a whole new level. By the time you finish reading my blog, you’ll have a faint idea of the area of your crush/nemesis/best friend/boss’ house that you can put into good use (such as BLACKMAIL!!! hahaha).

Of course I’m kidding 🙂

With an image of the object whose area you wish to determine, proper post-processing is required. What do I mean by that? Well, we hope to have a clean image wherein the object is well-defined and, as much as possible, the only one existing in your whole image. The ability to separate the background from the object is also important, so the im2bw() function was used before the calculation of the area was done.

There were two methods discussed for determining the area of a properly post-processed image. One is simple pixel counting. Since area is defined as the space occupied by your two-dimensional object, simply counting the number of pixels of your object already gives a rough estimate of its area. This is implemented in Scilab through the simple code seen below:
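(A sketch of that pixel count, with the filename and threshold assumed:)

shape = im2bw(imread('square.bmp'), 0.7);    // binarized shape
area_pixels = length(find(shape));           // count every white pixel
// equivalently: area_pixels = sum(bool2s(shape));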

But then, since I’m an Applied Physics student, challenge is my middle name. The other method is more mathematical and by far more complicated. Fret not though, as it only LOOKS complicated:

The equation seen above is the basic equation of Green’s theorem. It simply indicates that the area of a certain contour (left side of the equation) is equal to an integral over the points on the border of your contour, upon further simplification. Converting our integral into its discrete form (the integral simply becomes a summation!), we have a friendlier equation to make use of.
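For reference, with (x_i, y_i) the boundary points listed in order around the contour, the discrete form is usually written as:

Area = 0.5 * | Σ ( x_i * y_(i+1) − y_i * x_(i+1) ) |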

What’s essential for this theorem to work is the knowledge of the set of points  lying at the boundaries of your contour moving at a certain direction (counterclockwise or clockwise).

Now this is where all hell breaks loose.

Without the SIP toolbox installed in your Scilab, you’d have to be creative and smart enough to figure out how to not only find the edge of your object, but also get the list of points on the edge sorted in a way that traces the object in a specific direction. (Again, I encourage you to read about my journey before this whole activity by clicking here.)

Thanks to the follow() function available with the SIP toolbox, it immediately outputs the list of your sorted coordinates of the edge of your object. But then since we also need the variables Yi+1 and Xi+1.. we have to figure out a way to shift the list that we have from the follow function by moving a single step. I managed to do this by taking advantage of slicing off portions of the list. First I had to place the first element of my original list into a new list ([x2]). The next list I made was the rest of the elements left which is done by the function:

[x1]=x(2:length(x))

How does this work? The 2 signifies the element at which the slice starts, while length(x) signifies the end (for our case, the last element) of the portion you want to slice off your list. Since we want to shift our list one unit forward, we simply combine x1 and x2.

All that’s left to do in order to get the area is to implement the discrete equation of Green’s theorem. Check out my finished code below:
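(A sketch of how that finished code might look, assuming follow() returns column vectors of the sorted boundary coordinates:)

img = im2bw(imread('shape.bmp'), 0.7);       // binarized shape (filename assumed)
[x, y] = follow(img);                        // sorted boundary coordinates from the SIP toolbox
x1 = x(2:length(x)); x2 = x(1);              // the rest of the list, and its first element
xs = [x1; x2];                               // x shifted one step forward (the x_(i+1) values)
y1 = y(2:length(y)); y2 = y(1);
ys = [y1; y2];                               // y shifted one step forward (the y_(i+1) values)
area = 0.5 * abs(sum(x.*ys - y.*xs));        // discrete Green's theorem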

Now off to the fun part – finding the area of different shapes!

Through the use of my code from Activity 2 for creating square apertures, I used three differently sized squares and determined the area of the white region in each through pixel counting as well as Green’s theorem. Below are the tabulated values of the calculated areas, as well as the actual area taken through geometry:

What really surprised me in the results was not the zero percent error from the Green’s theorem method, but rather the high percent error of the pixel counting method! I mean, the idea of simply counting the pixels inside your shape is basically solving for the area as well… I tried to determine where I went wrong but sadly I did not find any lead. Since my squares were generated in Scilab, there are no blurred edges to worry about.

As for Green’s theorem garnering zero percent error, this could have been brought about by the straightforwardness of the shape we’ve dealt with. If we have curved shapes instead… like, for example, circles:

Should we expect greater percent errors for the Green’s theorem?

Surprisingly, the case of the circle shifts things the other way around. The area from the pixel count has a smaller percent error (less than 1% for all cases) while the area from Green’s theorem had errors greater than one percent. The possible reason why percent errors exist for Green’s theorem is the generation of the circles. I used my algorithm from Activity 2 for producing circular apertures, and I noticed that the smaller my circle is, the less it resembles a perfect circle. Remember that pixels are, after all, small squares.

As a last test to see if my code is functional enough to be used for some real stalking business, I tried to find the area of a triangle:

The area from Green’s theorem gave a value of 13321 square pixels. Take note that the triangle (which I produced in Photoshop and saved as a .BMP so as to avoid blurred edges) has a base width of 177 pixels and a height of 153 pixels. Using the known equation for the area of a triangle:

Area = 0.5*(base*height)

The actual area is 13540.5 square pixels, giving us a percent error of 1.62%!

With a more or less low percent error for my algorithm (which could’ve come up from the fact that the shapes are not actually made of straight smooth lines but rather small squares of pixels), we are now ready for the big thing 😀

With China’s bird’s nest stadium recently making its way to the headlines of almost every newspaper back then… I’ve chosen this simple yet exquisite mega-structure as my point of interest. Using one of the many oh-so-useful applications Google has made available for its fans, Google Maps, I located Beijing’s National Stadium and zoomed in enough to fit my browser.

Taking note that the output of the program is in fact in terms of pixels, we have to make use of the things we learned from Activity 1 in converting our units: pixels to meters. It is then essential to make use of the scales in properly converting our units. Thankfully, Google has thought of everything as they have made a scale available in google maps as seen inside the pink border of the image above.

What we need to do is measure the number of pixels for the indicated scale above:

108 pixels is to 100 m

Making use of dimensional analysis, we simply have to multiply our output area by 100 m / 108 pixels twice so that we end up with square meters instead. Hence, the linear conversion factor is 0.926, or (100/108)² ≈ 0.857 when applied to areas.

I’ve post-processed the image above through Photoshop by setting Google maps in map mode (the image above actually came from Google maps in Satellite mode). Converting it into grayscale then into a bitmap mode image and finally deleting the unnecessary pixels… I was left with the image below:

Unbelievable right? The stadium looks like a perfect shape! Now that’s what I call perfect engineering.

Loading the image into my program, I determined the area (via Green’s theorem) to be 87189 square meters.

For comparison, I tried my best to find the actual area of the bird’s nest. However, one has to be careful in simply looking up a value for an area as it does not necessarily mean you have measured the exact same thing. At first I was disappointed when I found the area to be around 258,000 square meters! I mean, if that were truly the case… my program would be, after all, very very wrong! However, this 258,000-square-meter value apparently already includes the garden as well as the other land near the stadium. What I measured was actually ONLY the building itself!

Managing to find the estimate of the building to have an area of only 80,000 square meters, I’ve garnered a percent error of 7.64%! UNBELIEVABLE!

Furthermore, my original plan was to find the area taken up by a certain gigantic lazy pink bunny lying around the mountains of Italy. Don’t believe me? See for yourself by the images below:

Despite the awesomeness of the pink knitted bunny lazing around, I was unable to find a source indicating the total area it covers. Nevertheless, I put my program to use and determined its area to be 8480 square meters. Now that’s ONE BIG BUNNY!

 —

To finish this blog post, I’d like to give myself a grade of 10/10. Although deep inside I’d like to give myself a higher grade (after going through the trouble of having to install Ubuntu and Scilab and ImageMagick and Animal and SIP and VMware, and trying my best to write my own code to replace the follow() function but failing terribly)… I wasn’t able to explore the topic as much as I had hoped to. So in the end, I was unable to give something new to the topic…. Oh well… at least I can finally brag to my friends that I know how Ubuntu/Linux systems work now 😀 That’s a good enough reward I guess.

Did you know that images have their own complexities? They’re not simply just images! There’s a lot more going on inside them! In fact, you can actually group them by different factors. Even though most people don’t really take much time looking into these things, researchers, photographers, graphics artists and those who take images seriously spend an ample amount of time trying to find the best image type to use depending on their needs.

If you actually look at the Image–>Mode toolbar on Photoshop, you’d see something like this:

WOAH! Seriously, what are all these modes that we’re seeing?! Well, lucky for you our activity discusses the basic image types and formats and hopefully after reading through this, you’d get the sense of why such things exist.

What’s your type?

Binary Images

The most basic image type one can encounter would be the typical black and white image. In layman’s terms, though, black and white does not necessarily mean a binary image. As its name implies, binary images are strictly composed of only BLACK, WHITE and nothing in between. The usual photos that are termed black and white contain gradients between black and white… these are not to be confused with binary images, as they are called grayscale images instead (to be discussed later on).

An initially colored image (truecolor type to be exact, which will also be discussed later on) was converted into a binary image with the im2bw function in Scilab. Take note that this function has two parameters: the image you wish to convert and the value of the threshold point. As seen in the image below, the threshold point should be a value between 0 and 1, and it serves as the point (as a fraction) at which pixels with values above it are converted to 1 (taken from the IM2BW SIP function documentation). Displaying the matrix of the converted image, only T’s and F’s were observed, indicating that this is indeed a binary image (not just based on its physical look). If you’d like to convert your matrix from boolean form (T’s and F’s) to something numerical (1’s and 0’s), you can opt to use the bool2s() function in Scilab, as suggested by Ms. Maricor Soriano, PhD.
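(A minimal sketch of that conversion, with the filename and threshold assumed:)

img = imread('butterfly.jpg');       // a truecolor image
bw  = im2bw(img, 0.5);               // binary image: T's and F's
bwn = bool2s(bw);                    // same image as 1's and 0's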

Gray-scale Images

As discussed earlier, the typical black and white images we are so fond of are not really binary images but rather grayscale images. Unlike binary images, grayscale images are composed of values that are not strictly 1’s and 0’s. In fact, they are composed of values ranging from 0 (black) to 255 (white). We’d expect to see more depth through this because of the availability of the gradient, which is very much important in photos.

I’ve managed to compare Scilab’s and Photoshop’s conversions of a colored image to grayscale. Making use of the rgb2gray function in Scilab, or simply clicking Image -> Mode -> Grayscale in Photoshop, seems to give almost similar outputs as observed above. However, upon subtracting the values, I was able to see a difference between the two. If they were indeed exactly the same, we should have seen a completely black difference image instead. Also, taking the maximum values of the matrices of the images through the max function in Scilab, I found that the Photoshop grayscale image has a maximum value of 255 while the Scilab one has only 254. This shows that not all programs use the same method in converting one image type to another. How these programs work their magic is beyond the scope of this post, though it’s pretty interesting to look into if you have time 🙂
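(A sketch of that comparison, with the filenames assumed:)

gs = rgb2gray(imread('butterfly.jpg'));          // Scilab's grayscale conversion
ps = imread('butterfly_ps_gray.jpg');            // the Photoshop-converted version
d  = abs(double(gs) - double(ps));               // nonzero entries mark where the two differ
disp(max(gs)); disp(max(ps));                    // 254 vs 255 in my case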

Truecolor Images 

The most common type of image one encounters today would be truecolor images. At times, they are also called RGB images. The photos you have uploaded on Facebook as well as the different ads and graphics on different websites are usually of this type.

From its name, RGB images are known to have three layers. This can be seen in Scilab upon using the size function. Take for example the image to the left; using the size function on it yields an output of:

265.    189.    3.

According to the Scilab online help as well as Mr. Mark Tarcy, the first element signifies the row size, the second the column size, and the third the number of layers. It is then clear that the image, which was truecolor, has the 3 layers – red, green and blue. I was able to display the individual layers of the original image through the help of Tarcy. This is done by simply indicating the layer you wish to display:

imshow(IMAGE(:,:,1)) for red, 2 for green and 3 for blue

The output of each of these commands is a grayscale image, since it simply shows the intensity of that layer. I’ve added color to the output images through Photoshop just to make it easier for your imagination. It is clear here that there are some details that are not present in the other channels.


Indexed Images

The last common type of image would be indexed images. Again taking a hint from the name, an indexed image is composed of a single matrix of values, where each value serves as an index into a specific color map. The image above was taken from Photoshop, where it shows the different color maps used for indexed images. You can see that the color map for a computer running on Mac OS is different from that of the Windows system.

Why the need for indexed images? The main reason is that they are without doubt MUCH smaller than a truecolor image. However, as seen in the image above (where I’ve saved the same image as an indexed image using Photoshop), the color seems to deteriorate. What did you expect, right? The palette is now limited to specific colors, so there is no doubt you’d get lower quality images in return.

Level UP!

But then there are more image types available. I’ll only discuss one type that I feel very close to 🙂

HDR (High-Dynamic Range)

With photography being one of the things I like doing (oh, FYI, the image I’ve been repeatedly using is actually an image I took with my beloved camera not too long ago), I’ve been inclined to learn different techniques in not only taking amazing pictures but also editing/manipulating them. HDR photography has been my favorite so far as it is able to give amazing, out-of-this-world images. Seen below are my personal favorite shots I have made through HDR:

See? HDR images actually bring more depth to certain things. They are the type of photos you would want to make use of if you want your clouds to stand out. The technique behind making an HDR image (from what I learned) is that you have to take three images of a single scene with varying exposure levels. A program is then used to merge these three together, making it look like everything is in contrast with everything else. Who knew photography could be this fun, right? 😀

Image Formats

Besides different types of images, different formats exist for images. The necessity of these formats basically comes from the need to reduce the space your image takes up on disk. There are different pros and cons in using these formats, and it actually depends on your needs which format your image should be compressed to.

Investigating further: a function is available in Scilab for looking at the properties of an image – imfinfo(‘file location’). This is somewhat similar to simply right-clicking on your image and checking its properties as a Windows user.

.JPEG – You’ll never go wrong with the usual

Earlier, I said that most images are usually of the truecolor type… well, most images we encounter are of the .jpg format. The acronym stands for Joint Photographic Experts Group. The images you take with your camera (well, most of the time) and the images you upload on Facebook… these are almost always .jpg’s.

This is most of the time preferred among other formats because of its ability to compress an image into a smaller size. However, JPEG images are known to use lossy compression, which is effective in compressing the size BUT tends to lower the quality of the image. Most of the time people overlook the quality reduction as it is still not that noticeable to most of them. Plus, there is actually an option (depending on what program you use) where you are asked to choose how “compressed” you want your JPEG to be.

Properties:

FileSize: 211258

Width: 680 pixels

Height: 480 pixels

Bit Depth: 8

Color Type: truecolor

Camera Model: C2020Z

.GIF – The Biggest Loser

Image taken from a friend’s Tumblr site

The image containing the great THOR is saved as a .gif image. It is the next most common image format you may have encountered, especially on the internet. It is known to be the image format which will give you the lowest quality version of your image. It is said to make use of lossless compression, which means that there is almost no loss in the information of the pixels it keeps. However, what makes this format give low quality images comes from the fact that it uses the Web Color Palette (taken from Scantips). From the sample images I’ve shown earlier (indexed images), it is very clear that the image looks tacky when the Web Color palette is used as its color map.

Being a web designer back then, I usually made use of GIF images when I have to deal with images that have to have transparent backgrounds. In addition, animated graphics are known to be only compressed as GIF images! I’m not sure about it now though.

Note: for some odd reason, Scilab can’t seem to open the awesomeness that is called Thor. This may have been caused by the fact that Scilab doesn’t recognize .gif image formats. So in order to give you the properties  of Thor’s image, I simply had to right click on the image and check out the properties tab.

Properties:

FileSize: 794kB (oh don’t worry if it’s larger than the jpg image because this one’s animated! So it should be expected that it’s heavier)

Width: 245 pixels

Height: 213 pixels

.PNG – REBELLION!

As shared by Mam Jing Soriano, PhD, in protest against the sudden patenting of the GIF image format, .PNG was born. They are somewhat similar to GIF’s and at times they could be a bit better. As far as graphics are concerned, PNG’s are known to be able to have transparent backgrounds though I’m not sure about saving animations in this format.

Properties:

FileSize: 108901

Width: 201 pixels

Height: 300 pixels

Bit Depth: 8

ColorType: truecolor

Note: Image has been scaled down for viewing purposes through the help of WordPress’ awesome scale-your-image-down option.

.TIF  – You won’t lose much

Based on my experience, this image format is the one I have found to give me the highest resolution among the common image formats (not counting RAW images of course). It is known to be lossless, indicating that the quality of the image is not much at stake. It is also known to be flexible as you can compress different types of images here (indexed, RGB, 1-bit, etc.). In terms of space, it is a bit heavier than JPGs as it contains more data.

FileSize: 218408

Width: 201 pixels

Height: 300 pixels

Bit Depth: 8

ColorType: truecolor

RAW images – Photographer’s friend

The last image format, which I rarely use, is RAW. For my Canon 550D, these images have the extension .CR2. These RAW images are EXTREMELY HEAVY so I try to use them as little as possible. However, with their size comes a great advantage… this image format easily lets you tinker with your image without having to lose quality. This is the reason why photographers make use of this image format: they can easily change the lighting in a program as if they were changing the settings of the camera! From personal experience, I only make use of RAW images whenever I’d like to make HDRs. That way I can change the exposure level without deteriorating the image, since HDRs need 3 pictures of the same scene with differing exposure levels.


And the winner is….

Comparing the same happy butterfly image I’ve used for the png and tif formats (sadly WordPress doesn’t allow .tif files to be uploaded so I wasn’t able to place it here), you can clearly see that the tif image is heavier than the png. True enough, the tif format contains more data than the png format, as previously discussed. So a summary ranking how ‘heavy’ the different image formats are is presented below:

gif < jpg < png < tif <<< RAW

Removing your past:

Table 8 of Mr. Marlon Daza’s thesis entitled “Characterization of a UV Pre-Ionized TEA CO2 Laser”

The next portion of this activity is to make use of the functions introduced earlier in Scilab to remove the background of the scanned graph we have used earlier.  It starts off with opening your image and converting it into a matrix by the use of the imread(‘image path’) function. With our image originally a truecolor image, it is best to convert it into a grayscale image through the rgb2gray() function so that we’d end up with different values ranging from 0 to 255 referring to the image’s intensity.

Now recall that in using the im2bw function in Scilab, we have to provide a threshold value (between 0 and 1). This threshold value serves as the reference point of the function, wherein intensities with values equal to or greater than the threshold are automatically converted into a T. Now a question arises upon trying to convert our image into a binary image: what’s the best threshold value to use? A more technical approach would be to get hold of the histogram of your grayscale image. This is done in Scilab by the following code (let’s say your grayscaled image is stored in the variable ‘img’):

[count, cells] = imhist(img);

imhist(img,255,”)

The histogram basically shows the number of pixels (y-axis) that fall under a specific intensity value (which varies depending on your bin size). The number of bins is the second parameter of the imhist() function while the third parameter is just the color of your bar graph.

Now, to find the best value for the threshold, we are supposed to look at the point in the histogram where the number of pixels drastically changes. For our case, it is at approximately I = 200. Doing a little math to translate this into a value between 0 and 1:

255 is to 200 as 1 is to ???

We get the value to be 0.78 which gives us a nice clean black and white image of our plot!
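(In Scilab this boils down to a single call, e.g.:)

bw = im2bw(img, 200/255);    // roughly 0.78, the cutoff read off the histogram
imshow(bw);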

This method works for strictly black-and-white images (or at least for an image that has only two shades, as in our case)… When we are dealing with truecolor images, the histogram looks broad, which makes it harder for us to determine which point is best to consider as our threshold value. Take for instance the images below:

Now this should be a bit easy since there are only shades of green and yellow here, right? Well, that’s where you’re wrong!

As you can see from the histogram, we still have a broad plot, which indicates that the details of the image are spread out almost equally over the different colors/intensities (remember, we’re dealing with a truecolor image here!).

Noticing the sudden decrease in the plot at around 150, I tried using the threshold value 0.58 (155/255) and ended up with an image that is still not that desirable:

You can barely tell that it looks like a flower!

Personal Note:

All in all, this activity was fun for me to work on as I’ve always been fond of handling images/graphics. Being a photography enthusiast and a graphics artist back then… I had to learn the differences between the image formats and properties the hard way. I remember creating a very nice graphic but ending up with something crappy the moment I compiled it into a .gif (I had to! I needed the background to be transparent!). It was through my discovery of the magic of the .png format that I found a new way of getting transparent images with nice quality. But in all honesty, it was only today that I learned about the indexed type of images. No wonder whenever I uploaded photos on Facebook I’d always end up with my images looking a little lower in quality! They were probably converted into indexed images.

Anyway, I’d like to give myself a grade of 12/10 for this activity as I believe I was able to discuss briefly everything that needs to be discussed. I even compared the grayscale capabilities of Scilab as well as Photoshop. Plus I was also able to separate the respective R, G and B layers of a truecolor image in Scilab. Lastly, I was also able to discuss the limitations of finding the threshold value for a truecolor type image.

Image taken from http://scilab.org

Another programming language or tool that one may opt to use for anything from numerical analysis to image processing is Scilab. Personally, I call this my very own free and simplified version of MATLAB. Even though I haven’t really bonded with MATLAB that much, as I love hanging out with Python, from what I basically know… Scilab has most of the things I essentially need from MATLAB. In fact, there are instances wherein their syntax is almost exactly the same!

However, what probably makes Scilab lighter than MATLAB in terms of disk space is the fact that modules need to be installed separately. But then that’s also what makes it a bit easier, since there are instances wherein you only need a few functions, and hence you only download the things you need instead of getting the full package. Modules can be downloaded from the Scilab Complementary Modules website.

Since the goal of this class (Applied Physics 186) is “to develop an understanding and mastery of image processing and pattern recognition techniques”, as quoted from the course syllabus of Maricor Soriano, PhD, the module needed for Scilab is the Signal, Image and Video Processing (SIVP) module.

Downloaded? Installed? Let’s Go!!

Plotting a sine wave is simple. Similar to MATLAB, a list starting from x1 and ending at x2 with a step size of s is created by the syntax:

A = [x1:s:x2]

Creating a list for our values of t (time) and plugging it into the sine function will automatically give us our desired output when plotted.
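(For instance, a quick sketch of what this looks like:)

t = [0:0.05:10];         // time values from 0 to 10 in steps of 0.05
y = sin(t);              // sine of each element of the list
plot(t, y);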

To further test our understanding of the basic functions in Scilab, creating synthetic images based on algorithms can be done as a form of practice. As an example given in the manual, a circular aperture code with its corresponding output is shown below:
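(The original code was posted as an image; this is a sketch reconstructed from the description that follows, with the circle radius assumed. The ordering matches the line references below: nx and ny first, then the two linspace calls.)

nx = 200; ny = 200;                  // dimensions of the desired image
x = linspace(-1, 1, nx);             // range of the image along x
y = linspace(-1, 1, ny);             // range of the image along y
[X, Y] = ndgrid(x, y);               // combine x and y into a 2-D grid
r = sqrt(X.^2 + Y.^2);               // distance of each point from the center
A = zeros(nx, ny);                   // start with an all-black image
A(find(r < 0.7)) = 1;                // turn on everything inside the circle
imshow(A);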

In line 1, the variables nx and ny are used to indicate the dimensions of our desired image (for my case I’d have a 200×200 image). The linspace function, as used in lines 2 and 3, simply defines the range of the image. Now, since we are dealing with a circular aperture, we want a 2-D coordinate system. With our x and y values initially set, we now combine these and form a 2-D array through the use of the ndgrid function. However, it is better to deal with circles in the polar coordinate system, and hence we convert x and y to r by using the simple distance formula. Take note that the power operator here has to have a period before it, signifying that the operation is applied to the elements of the array individually.

Wanting a circular aperture, we start off with an array of dimensions nx by ny with all its elements equal to 0, which is done by using the zeros(nx,ny) function. It is important to note that when this is converted into an image, 0 stands for black while 1 stands for white… which is pretty much similar to the on/off analogy for 1 and 0. As the last step, we want to convert certain elements of our zero matrix into 1’s, which will form a circle when plotted. What’s great about Scilab is that it has a find(condition) function that gives the indices of the elements satisfying your condition. Using this, the conversion can be done in a single line!
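Since the actual script appears above only as an image, here is a sketch of how those lines might look (the 0.7 radius is just an example value):

nx = 200; ny = 200;        // desired dimensions of the image
x = linspace(-1, 1, nx);   // range of the image along x
y = linspace(-1, 1, ny);   // range of the image along y
[X, Y] = ndgrid(x, y);     // combine x and y into 2-D arrays
r = sqrt(X.^2 + Y.^2);     // convert to the polar coordinate r, element by element
A = zeros(nx, ny);         // start with an all-black image
A(find(r < 0.7)) = 1;      // turn every pixel inside the circle white
imshow(A);                 // display the result (imshow comes from SIVP)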

Square Aperture – WANTED! Sign

Have you ever wondered how you can make your own wanted sign through the use of different functions in Scilab? This is almost similar to the circular aperture image: you start off with an array of dimensions nx by ny with all its elements equal to 0 (or black, graphically). With the x,y coordinates better suited for shapes like squares, we replace certain elements of our zero matrix with ones by using the list notation (the [::]) to name the indices of the elements we want to change.

My list starts at nx*0.25, which is 1/4 of the way across the image, and ends at nx*0.75, which is 3/4 of the way across. If I only make use of the command

A((nx*0.25):1:(nx*0.75)) = 1;

which, at first, is what I did… I’d only end up with a white elongated line lying parallel to the x-axis. We have to remember that, unlike the circular aperture where the condition was written in terms of the radius r, here we are indexing the 2-D array directly, and hence we have to specify both the row and column ranges:

A((nx*0.25):1:(nx*0.75),(nx*0.25):1:(nx*0.75)) = 1;
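Putting it all together, a sketch of the whole square-aperture script (with illustrative values) could be:

nx = 200; ny = 200;                                    // dimensions of the image
A = zeros(nx, ny);                                     // all-black starting image
A((nx*0.25):1:(nx*0.75), (nx*0.25):1:(nx*0.75)) = 1;   // white square from 1/4 to 3/4 of the image
imshow(A);                                             // display the WANTED!-sign aperture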

Grating – Behind Bars

The next shape we hope to make resembles what prisoners (ages ago) were usually sporting – black and white stripes. I could easily do this shape by starting off with an array of zeros and manually converting certain elements into ones by listing them one by one… but that’d be almost exactly the same as the square aperture done earlier, only with more conditions.

Since codes are written to speed up the process and reduce the work of the user, I had to find an easier method of making the grating. What I thought of was that we, as usual, start off with an array of dimensions nx by ny with all its elements equal to 0. Then, a while loop is used so that it automatically determines the indices of the values you want to change to get your gratings.

But before that, I used a variable which I named grating, which determines how many black and white bars my output will have. The C variable then serves as the starting point, and the D variable as the ending point, of where I’d like to change the values to 1. For while loops, it is customary to make use of a variable that determines whether the loop will continue or not. For my case, the i variable was set to 0, and upon reaching the value i = nx (the edge of the image), the loop stops. The variable fac was essential since this is the factor by which the variables C, D and even i change every time the loop repeats itself. Now, the indices of the elements we hope to convert to 1’s were written similarly to how the square aperture was done, BUT since we would like to get rectangles instead, I set the other condition to be

A(1:1:ny,C:1:D) = 1;

where it simply says that the white bar runs from the top-most row of the image (1) up until the bottom-most row (ny).
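Since the full script was shown only as an image, here is a rough sketch of how that loop could go; the variable names follow the description above, while the actual numbers (image size, number of bars) are assumptions of mine:

nx = 200; ny = 200;           // dimensions of the image
A = zeros(nx, ny);            // all-black starting image
grating = 10;                 // number of white bars (each followed by a black gap)
fac = nx / grating;           // width of one black-plus-white pair, and the step of the loop
C = 1;                        // starting column of the current white bar
D = fac / 2;                  // ending column of the current white bar
i = 0;                        // loop counter; the loop stops once it reaches the image edge
while i < nx
    A(1:1:ny, C:1:D) = 1;     // whiten one vertical bar from the top row (1) to the bottom row (ny)
    C = C + fac;              // move the start of the next white bar
    D = D + fac;              // move the end of the next white bar
    i = i + fac;              // advance towards the edge of the image
end
imshow(A);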

Corrugated Roof – Behind Bars in 3D!

Almost similar to the previous shape, a corrugated roof was the next task. As seen in the image to the left, this resembles the typical jail bars but now it seems to have depth (making it somewhat 3D-ish), all thanks to the presence of a gradient. The mere fact that we are no longer dealing with plain black and white indicates that our end matrix will have values ranging between 0 and 1.

As discussed earlier, the sine wave can easily be made by starting off with a list that signifies our “t”. I created a variable width so that we can easily change the dimension of our image just by varying this variable. Plugging our list into the sin function, we get our desired sine output. If we carelessly plot this, we’d get a single line of length equal to our variable “width” and not our desired block. In order to come up with the square, we have to find a way for our line (or list) to appear stacked, forming an array of similar values along the y-axis. Thankfully, a function that does that exact thing is available in both Scilab and MATLAB. The repmat() function asks for three parameters:

repmat(MATRIX, repeat along rows, repeat along columns);

The first parameter is the list or array that you hope to repeat. The second parameter asks how many times you want to stack the given array along the rows (vertically, our y-axis), while the third repeats it along the columns (horizontally). Since we only want our sine list to be repeated along the y-axis, I set the row repetition to width*10 (the 10 comes from the fact that the step size of our t list is 0.1). The column repetition is set to 1 since we no longer need it repeated in that direction.
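A sketch of this, with the rescaling line being my own addition to map the sine’s [-1, 1] range onto the [0, 1] range mentioned earlier, could be:

width = 10;                    // controls the dimension of the image
t = [0:0.1:width];             // our "t" list with a step size of 0.1
s = sin(t);                    // a single row of the sinusoid
A = repmat(s, width*10, 1);    // stack that row width*10 times along the rows (y-axis)
A = (A + 1) / 2;               // rescale from [-1,1] to [0,1] so it displays as a gray gradient
imshow(A);                     // the corrugated roof
mesh(A);                       // optional: the 3-D view mentioned below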

For fun, I also tried to look at the mesh (or the 3-D plot) of our resulting image.

 

Grating – Behind Bars (REMIX!)

Thankfully, there is a much simpler way of drawing the gratings in Scilab. As suggested by Mr. Gino Borja during class, we can make use of the sine function and convert it into a 2-D matrix. We make use of the property of sine waves of having positive and negative values that alternate periodically and smoothly.

Similar to what I did for the corrugated roof, we initially start with a sinusoidal image. Now we set the conditions: if the value of an element is greater than 0 it is converted into a 1, while if it is less than 0 it is converted into a 0. This is simply applying the technique we’ve been using for the circular aperture (as well as the square aperture).

From 15 lines with the use of while loops, we have managed to reduce the code to 7 lines!
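A sketch of that shorter version (the number of periods is an arbitrary choice of mine, and I am not claiming these are the exact 7 lines) might be:

nx = 200; ny = 200;           // dimensions of the image
x = linspace(0, 10*%pi, nx);  // enough range for several periods of the sine
s = sin(x);                   // one row that swings between positive and negative values
A = repmat(s, ny, 1);         // turn that row into a 2-D matrix
A(find(A > 0)) = 1;           // positive values become the white bars
A(find(A < 0)) = 0;           // negative values become the black gaps
imshow(A);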

Annulus – Eye Spy

The next image is very much related to the first image we’ve done – the circular aperture. To put things simply, we just add another condition: after setting the elements within a radius r = 0.7 to 1, we change the elements within a smaller radius (r = 0.3) back to 0! As simple as that!
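In code, a sketch of that extra condition (re-using the same r array from the circular aperture) would be:

A = zeros(nx, ny);       // start again from an all-black image
A(find(r < 0.7)) = 1;    // the original circular aperture of radius 0.7
A(find(r < 0.3)) = 0;    // punch the smaller circle back to black, leaving the ring
imshow(A);               // the annulus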

Gaussian Transparency – You are my sunshine 🙂

The last image we were tasked to make looks like the sun shining on a foggy day. I’ve managed to come up with two methods of making the Gaussian transparency in Scilab. The first method, whose result is the image located to the left, comes from the basic definition of the Gaussian function. Recall that the Gaussian function is proportional to the exponential of the negative of x squared, exp(-x²/2σ²). But since we hope to end up with a circular Gaussian pattern, we instead plug in r (remember when we converted our x and y axes into polar coordinates). Varying the constants gives us a different size for our gloomy sun.
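A sketch of this first method, with the value of sigma being an arbitrary choice of mine, could be:

// r is the same radial array used for the circular aperture
sigma = 0.3;                    // controls the size of the gloomy sun
A = exp(-r.^2 / (2*sigma^2));   // circular Gaussian: 1 at the center, smoothly fading to 0
imshow(A);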

As for my second method, I made use of the imfilter and fspecial functions to end up with a more defined gloomy sun, as seen in the image to the right. I found that there exists a function in Scilab with different filters already set up and waiting to be used – fspecial. Wanting our end product to be a Gaussian transparency, I chose the “gaussian” mode of the fspecial function, where [200,200] is simply the dimension/size of our filter and 10 the sigma in the Gaussian equation. But fspecial only creates the filter! We need the function imfilter, which takes in the original image you wish to filter and the filter you wish to use. With our original image a circular aperture (variable A in our case) and the filter the one we initially set (variable filt), the end product is the beautiful Gaussian transparency.
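Following that description, a sketch of the second method (the radius of the aperture is again just an example value) might look like:

A = zeros(nx, ny);                            // circular aperture to be blurred
A(find(r < 0.7)) = 1;
filt = fspecial("gaussian", [200,200], 10);   // ready-made Gaussian filter from SIVP
B = imfilter(A, filt);                        // blur the aperture with that filter
imshow(B);                                    // the more defined gloomy sun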

For this activity, I think I deserve a grade of 12/10 as I was able to give more than one technique for producing certain shapes. Even though I planned to make a unique image (such as a tree or a flower) as a challenge for myself, due to lack of time, I wasn’t able to do so anymore.

Personal Notes:

Through this activity, I was able to gain a little bit more confidence when it comes to programming. I have always felt that my programming skills are lacking compared to my classmates’. I’d always have a hard time trying to think of the right algorithm for things and, believe me, Applied Physics 155 and 156 were terrible for me. Add the fact that my mom’s a computer science student while my father’s known to program things that would usually take other people a week (he’s an engineer, by the way)… and the pressure adds up to me belittling myself. I guess I do have an inner programmer in me in some way, right?