Monday, November 4, 2013

More Image Sharpness

ImageMagick has FFT built into it! You just use the appropriate command-line options in ImageMagick's convert application. I found ImageMagick's FFT documentation really helpful in understanding 2D FFTs of images. It just takes two command lines. The first performs the FFT and creates a magnitude and a phase plot.

convert ohelo.png -fft fft_ohelo.png

The phase plot's not really useful to us, so we'll just take the magnitude plot, which is the one with -0 appended to its name. Because it's plotted on a linear scale, it doesn't look like much - almost entirely black. But if we auto-level it and plot it logarithmically, a pattern emerges.

convert fft_ohelo-0.png -auto-level -evaluate log 10000000 fft_ohelo_10000000.png

You may have to play around with the log scaling. It took me several tries to get something usable.
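Since I'm planning some Ruby automation anyway (see the end of this post), here's a rough, untested sketch of how both convert commands could be scripted for a batch of images. The output file names (fft_<name>_log.png, for instance) are my own invention, so adjust to taste.

# fft_batch.rb - untested sketch; runs the forward FFT and the log-scaled
# magnitude step for every image named on the command line.
ARGV.each do |file|
  base = File.basename(file, File.extname(file))
  # forward FFT: writes fft_<base>-0.png (magnitude) and fft_<base>-1.png (phase)
  system("convert", file, "-fft", "fft_#{base}.png")
  # auto-level and log-scale the magnitude image so the pattern is visible
  system("convert", "fft_#{base}-0.png", "-auto-level",
         "-evaluate", "log", "10000000", "fft_#{base}_log.png")
end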

Here is the set of test images:

Straight Out of the Camera
5 Pixel Blur
25 Pixel Blur

 It's hard to tell the difference between the original and the 5 pixel blur, but look at the FFTs.
Straight Out of the Camera
5 Pixel Blur

25 Pixel Blur
The white areas farther from the center represent the highest spatial frequencies - the finest detail - so a sharp image lights up more of them. Blurring the image filters those frequencies out and results in a smaller circle. I'm not sure I understand the horizontal and vertical artifacts, but I think they may have something to do with the rectangular arrangement of the pixels.

To make this more quantifiable, I next plan to use ImageMagick's Sample option to grab radial lines at 5-degree increments between 5 and 85 degrees. Because of the symmetry of the quadrants, it's only necessary to sample from one of them. I'll then average the samples and plot them on an X-Y axis.
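In case I end up doing this with RMagick instead of ImageMagick's command-line sampling, here's a rough, untested sketch of the idea: step outward from the center of the log-scaled FFT magnitude image, read one pixel along each radial line from 5 to 85 degrees, and print the average brightness for each radius.

# radial_profile.rb - untested sketch; averages the FFT magnitude along
# radial lines at 5-degree increments between 5 and 85 degrees.
require 'RMagick'
include Magick

image  = ImageList.new(ARGV[0])
cx     = image.columns / 2
cy     = image.rows / 2
radius = [cx, cy].min

(0...radius).each do |r|
  sum   = 0.0
  count = 0
  (5..85).step(5) do |deg|
    theta = deg * Math::PI / 180.0
    x = cx + (r * Math.cos(theta)).round
    y = cy - (r * Math.sin(theta)).round
    pixel = image.pixel_color(x, y)
    sum += (pixel.red + pixel.green + pixel.blue) / 3.0
    count += 1
  end
  puts "#{r}, #{sum / count}"   # radius, average intensity - ready for a spreadsheet
end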

Also, to make this real, I need to take the pictures with two different cameras or lenses. I think the subject should be something natural, like leaves, because man-made objects in some examples I've seen tend to produce strong diagonal artifacts.

A later post will explore some of these refinements and possibly add some Ruby automation to the process.




Sunday, November 3, 2013

How Sharp is That Lens?

As I read reviews of photographic equipment, I occasionally come across a review in which the reviewer notes that the lens tested was a dud and didn't focus properly. How could I tell if I had a dud lens without a way to compare it to others? Some kind of quantitative method for lens comparison is needed. There are all kinds of sharpness test patterns, but none of them seemed easy to use. Then I saw an example of a pattern of black and white bars that got progressively closer together. You can tell how sharp the lens is by looking at where the bars mush together into gray. I looked at this and realized that it's the spatial equivalent of the swept-sine frequency response test used for audio equipment. What if I approached this like a signal processing problem? A basic test of signal processing equipment is the step response. On a first-order system you can use it to determine the time constant, which is a fundamental metric. Could this be the metric I was looking for? I used a laser printer to make a sheet of paper that was half black and half white, then took three pictures with an old point-and-shoot camera: wide, tele, and purposely blurred. The photos were taken at maximum resolution.


WIDE


 TELE

BLUR




Sampling a line of pixels from left to right across the middle of each image should result in a step response from black to white. To extract the pixel values I used a Ruby script with RMagick:

# usage: ruby this_script.rb <image file> > output.csv
require 'RMagick'
include Magick

puts ARGV[0]

image = ImageList.new(ARGV[0])
midPointY = image.rows / 2

# walk across the middle row and print "x, brightness" for each pixel
(0...image.columns).each do |x|
  pixel = image.pixel_color(x, midPointY)
  puts "#{x}, #{(pixel.red + pixel.green + pixel.blue) / 3}"
end

RMagick is so cool! I sent the output of the script to a spreadsheet to compare the three images.



Look at this! The wide angle has the fastest rise time. You can even see a little second-order ringing that's probably due to the compression algorithm. Interestingly, there's pre-ringing too, because spatial systems are non-causal.

Note that the wide angle isn't sharper simply because the camera had to be closer to the paper to fill the frame. I've noticed that this 12-year-old camera is just not as sharp at the telephoto setting as it used to be, and this graph quantifies my observation. How much of a difference is there between wide and tele? Let's zoom in on the data.



In a first-order system, the time constant is measured at 63% of the final value of the step function. There are a couple of sample points in that area, so I'll use those as an approximation rather than interpolating an exact point. Now we can say that the wide-angle setting is more than twice as sharp as the telephoto setting.
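If I wanted to take the eyeball out of it, the 63% point could be pulled straight from the CSV produced by the RMagick script above. Here's a rough, untested sketch; it assumes the step goes from dark on the left to light on the right, with reasonably flat regions at each end of the scan line.

# rise_63.rb - untested sketch: estimate the "spatial constant" from the
# "x, brightness" CSV produced by the script in this post.
rows   = File.readlines(ARGV[0]).grep(/,/).map { |line| line.split(',').map(&:to_f) }
values = rows.map { |_x, v| v }

black  = values.first(20).reduce(:+) / 20.0   # average of the flat dark region
white  = values.last(20).reduce(:+) / 20.0    # average of the flat light region
target = black + 0.63 * (white - black)

start = values.index { |v| v > black + 0.05 * (white - black) }  # where the edge begins
cross = values.index { |v| v >= target }                         # the 63% point
puts "spatial constant: about #{rows[cross][0] - rows[start][0]} pixels"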

There are still some questions I'd like to answer. How can I compare cameras with different pixel resolution? Can I harmonize my results somehow with the lensmakers' specs? Could I perform an average of successive images to get a better accuracy? How could I compare the center of the lens to the edges? Would deconvolution or FFT be useful analysis tools? These are all questions for a later blog post!









Saturday, November 2, 2013

Transcription Controller in an Afternoon

I have some audio files that I need to transcribe. I figured it would be easy: just load them up in the Audacity audio editor and type away. Not so easy. People talk much faster than I can type, and it's hard to control Audacity while trying to type in the word processor. Fortunately, Audacity has keyboard shortcuts. I just needed a way to connect a foot pedal to Audacity.

An old PS/2 mouse makes a decent foot pedal. I gutted the unit, removed the scroll wheel, and wired the mouse buttons to the I/O cable.


I then cut off the PS2 connector, and wired it to pin 2 of an Arduino. I also added a 10K pull-up resistor. Here's what it looks like assembled and connected.


 The Arduino was programmed to send the following text strings:

15 seconds after boot: "g"
mouse down: "0"
mouse up: "1"

Here's the code (adapted from Arduino Playground):

// digital pin 2 has a pushbutton attached to it.
int pushButton = 2;
// the setup routine runs once when you press reset:
void setup() {
  // initialize serial communication at 9600 bits per second:
  Serial.begin(9600);
  // make the pushbutton's pin an input:
  pinMode(pushButton, INPUT);
  delay(15000);
}
void loop() {
  Serial.println("g");
  int initButtonState=digitalRead(pushButton);
  //loop forever
  while (1) {
    // read the input pin:
    int buttonState = digitalRead(pushButton);
    if (initButtonState != buttonState) {
      // print out the state of the button:
      Serial.println(buttonState);
      // debounce
      delay(5);
    }
    initButtonState = buttonState;
  }
}



I decided to code this in Python because it's a fun and easy language with lots of libraries. The first thing I needed was X-windows automation, and there seem to be a lot of choices. Even though it's been replaced by Xaut, I found Xautomation worked for me. I got Python and Xautomation from the Linux Mint Software Library, but I could have gotten them just as easily using apt-get.

For each of the received characters, I used Xautomation to send the following keystrokes.


g = space p (start playback and pause)
0 = p (un-pause)
1 = comma comma comma comma comma p (back up a little, then pause)


The last piece was the serial link connecting the Arduino to the Python code. pySerial looked like a good library, and to get it I would need python-pip.

sudo apt-get install python-pip
sudo pip install pyserial

pySerial didn't work at first. I found I had to execute the following commands.

sudo usermod -a -G dialout tester
sudo chmod 777 /dev/ttyACM0

The first command gives your user permission to access serial devices; you'll need to log out and back in for the group change to take effect. The second gives you permission to use that particular USB device, and you'll have to run it again each time the device is re-connected. Also, depending on your hardware, your USB device may have a different name (like ttyUSB0). The shortcut to getting pySerial working would be to run Python as root, but that's a very bad idea.

Here's the code, in all its ugliness:

# serial_read_keys.py
import time
import serial
from subprocess import Popen, PIPE

control_f4_sequence = '''keydown Control_L
key F4
keyup Control_L
'''

shift_a_sequence = '''keydown Shift_L
key A
keyup Shift_L
'''


initialize_sequence = '''key space
key P
'''


play_sequence = '''key space
'''

unpause_sequence = '''key P
'''

pause_sequence = '''key P
'''

backup_sequence = '''key comma
'''

def keypress(sequence):
    p = Popen(['xte'], stdin=PIPE)
    p.communicate(input=sequence)

ser = serial.Serial('/dev/ttyACM0',9600)

while (1) :
        #print 'reading line'
        rcvChar = ser.readline()
        # print rcvChar
        if 'g' in rcvChar :
            print 'initialize - play and pause'
            keypress(play_sequence)
            time.sleep(0.1)
            keypress(pause_sequence)
        if '0' in rcvChar :
            print 'unpause'
            keypress(unpause_sequence)
        if '1' in rcvChar :
            print 'backup a little then pause'
            keypress(backup_sequence)
            time.sleep(0.1)
            keypress(backup_sequence)
            time.sleep(0.1)
            keypress(backup_sequence)
            time.sleep(0.1)
            keypress(backup_sequence)
            time.sleep(0.1)
            keypress(backup_sequence)
            time.sleep(0.1)
            keypress(pause_sequence)


I had to do some experimentation, and I left all of that in there so I could document what I had learned.

To do transcription, first open your audio file in Audacity. You may want to use the Effect, Change Tempo menu item to slow down the playback. Now start the Python script. You have 15 seconds to do the following: make sure the Audacity stop button is clicked, then click on the waveform you want to transcribe.

After 15 seconds, the script will click the play button and then immediately click pause. Don't touch anything on your screen again; if you do, Audacity will lose focus and the key-presses won't reach it. So how are you supposed to type the transcription? Use another computer! I neglected to mention that, didn't I?

Go to the other computer, mash down on the mouse with your foot, and the audio will begin to play. Release the mouse and the audio will back up about 5 seconds and then pause. Why does it back up before pausing? So you can more easily sync up your typing. If you want to back up more, double-click the mouse.

One unexpected nice feature I found is that when you start the script, it reboots the Arduino, so you don't have to reach down and press the reset button.

Sunday, October 27, 2013

Quantitative Analysis of My Photographic Style

I'm starting to look for a new camera, moving from an advanced point-and-shoot to a basic DSLR. Choosing a body is hard enough, but what about lenses? The answer is different for everyone; it depends on your style. Do you shoot landscapes, portraits, wildlife, or macro? I currently have a Canon S3 IS super-zoom, which does fairly well at any of these, but mostly I like landscapes, so presumably I should be using wider angle settings. I'm now looking at ultra-wide zooms, but does my picture-taking history show that this is the range of focal lengths I would use most?

First I gathered all the photos I took with my S3 on a 2010 trip to Japan. Using ExifTool, I extracted the focal length setting from every photo I took with that camera. Alternatively, I might have analyzed only my favorite photos, but I decided to look at the larger data set. On my computer, I opened a terminal and moved to the folder containing the photos to be analyzed. In this case the photos were on a SAMBA file server, and in Linux the SAMBA path can be pretty long. I learned that you can just type "cd " in the terminal, then drag the photos folder from the Nemo file browser onto the terminal window to complete the command with the full network path. From there, I typed the following command to get a listing of the focal lengths of all the photo files.

 exiftool -exif:focallength ./ > out.txt

Next I used grep to eliminate the file names.

 grep Focal out.txt > test.txt

This can be done in one step by piping the commands, of course. I imported the file into LibreOffice Calc and parsed it so that I had only a list of focal lengths. Since the S3's sensor is about 1/6 the size of a 35 mm frame, I multiplied all the focal lengths by 6 to get the 35 mm equivalent. Next I needed to break the data into categories. Since focal length is logarithmic, I made 11 categories by starting with the minimum value and repeatedly multiplying by 1.333. I didn't bother with Sturges' Rule or the Rice Rule; I just thought around ten categories would be about right. After applying the FREQUENCY function to the data, I created the following chart.
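The spreadsheet steps could also be scripted. Here's a rough, untested Ruby sketch of the same binning, reading the test.txt file produced by the grep step above; the crop factor of 6 and the 1.333 category ratio are the same values used in the spreadsheet.

# focal_histogram.rb - untested sketch of the binning described above.
CROP = 6.0   # approximate 35 mm crop factor of the Canon S3 IS

# assumes each line of test.txt looks something like "Focal Length : 6.0 mm"
lengths = File.readlines('test.txt').map { |line| line[/[\d.]+/].to_f * CROP }

# eleven logarithmic categories: start at the minimum and multiply by 1.333
edges = [lengths.min]
11.times { edges << edges.last * 1.333 }

edges.each_cons(2) do |lo, hi|
  count = lengths.count { |f| f >= lo && f < hi }
  puts "#{lo.round}-#{hi.round} mm: #{count}"
end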


It looks like over half my photos were taken at the maximum wide-angle setting. The next most used setting was maximum telephoto. There's a slight bump in the middle, around 100 mm. One thing to be aware of is that many point-and-shoot cameras return to maximum wide angle when first powered on, which has the potential to skew the data.

What did I learn? Most of my pictures are taken at maximum wide angle. This means an ultra wide angle zoom might be most useful for me.

So how fast should the lens be? Can I categorize lighting conditions? Is there a single number that characterizes the brightness of the most commonly photographed scenes? The number would have to be some combination of shutter speed, aperture, ISO and sensor size. And that's a task for some other day.


Sunday, May 26, 2013

Planetary Conjunction

Three bright planets were in conjunction tonight - clustered within an arc of three degrees. Over the past few days I had been watching them move closer together. The last few nights the planets played hide and seek with the clouds. Tonight, the night of the tightest cluster, the clouds behaved, and we were treated to this sight that only happens every couple of years.

On the top is Mercury, to the left is Jupiter, and on the bottom is Venus.

More information can be found at NASA.


Using a Gradient Tool to Improve Pictures Taken on a Hazy Day

I really like the Gimp for photo post production. Here's a technique that takes me back to my film and darkroom days of "dodging" and "burning". Dodging is the technique of giving less light to underexposed areas of a negative. Burning adds light to overexposed areas.

This photo was taken on a somewhat hazy day. As you move towards the top of the photo, the scenery fades further into the hazy distance, giving it a lighter appearance.

To darken hills in the distance so that they match the foreground, I used a gradient burn. First, use the toolbox to set the foreground color to a medium gray and then select the gradient tool.


Now set the gradient tool to burn.


Click and drag the cursor from top to bottom.
When you release the cursor, you should see an improved image.
I think this makes a good substitute when you don't have a polarizing filter. Also, unlike a polarizer, it works well regardless of your orientation to the sun.


Sunday, March 24, 2013

Transit of Venus - Last Chance and Last Minute

This was it: the last chance in my lifetime to see Venus pass in front of the Sun, and the last chance to measure, the old-fashioned way, the distance from the Earth to the Sun. That distance, called the astronomical unit (AU), is the basic unit of astronomy. This blog entry documents my process, the assumptions I made, and the rewarding feeling of witnessing one of nature's most amazing events.

It had been a busy week, and I hadn't had time to prepare. On the morning of June 5th, 2012 I scrounged through the garage for what I might need. First the binoculars. I didn't know this, but there's a cap on the front of many binoculars that pops off to reveal a hole, tapped for a 1/4-20 screw.

Next I found a screw, a couple nuts, and an angle bracket to mount the binoculars onto the tripod.


 The rest was done with duct tape, cardboard, and paper. A screen was made from cardboard and paper onto which the image would be projected. Then a shield was made to shade the screen and to cover one side of the binoculars. A paper flap was added to temporarily cover the other side of the binoculars so the intensity of the sun's rays wouldn't damage the binocular's optics between viewings.



I brought the whole rig down to CSU Sacramento, where the staff of the observatory on top of the psychology building was offering views to the public. I figured I could set my rig on a corner of the roof so I could take my measurements and see the transit with their equipment too. Unfortunately, it was way too crowded there, so I set up behind the building.

All in all, the image wasn't too bad, if I do say so myself. Solar north is to the right and east is on top, I believe. The image needs to be flipped and rotated 90 degrees to match a direct view. Venus can clearly be seen, and you can just make out three groups of sunspots. My first goal was accomplished: I had seen the transit.


The other goal - measuring the AU - was next. I wanted to do this as independently as possible; I didn't want to use any primary measurements from others. Of course, to do that I would have to be in two places at the same time, because measurement of the AU is done using parallax. So, by definition, I had to use measurements from someone else at a different latitude. Transit of Venus data from Mauna Kea was readily available, so I used that. Most of the measurements were made with my camera and the clock inside the camera. The only figures I looked up on Wikipedia were the angular diameter of the sun and the circumference of the earth. The angular diameter of the sun can be calculated with a pin-hole viewer, and some day I'll do that. I once saw a PBS documentary in which a team used a compass, a protractor, and a moving van to measure the size of the earth; someday I'll use the same technique and adjust my results accordingly. (Credit: Kepler.)

Across much of the world, this transit could be measured and the AU calculated by documenting the times at which Venus entered and exited the disk of the sun. Unfortunately, in most of the western US, the sun set before Venus exited. My plan was to take a series of pictures and, using the times at which the pictures were taken, plot a line and calculate how long the transit would have taken had I been able to see the whole thing.

After several hours of observing and photographing the transit, I downloaded the pictures. The first problem I noticed was that the sun was all different sizes and not even round in many cases, because I took all the pictures hand-held and off-angle. The solution was to crop all the images to the edge of the sun and resize them to 1000x1000 pixels. I guessed, correctly, that the sun might also be rotated from picture to picture; the photos could be aligned using the sunspots as a guide. Using the cursor in my photo editor, I entered the coordinates of Venus, the coordinates of three sunspots, and the time-stamp of each photo into a spreadsheet. The sunspot I called "c" was the most easily measured, so I used that for my calculations.

time     c x    c y    venus x    venus y
143123   310    561    720        918
143727   312    526    606        944
143758   316    524    618        938
144943   339    522    550        940
151807   348    554    734        826
153106   356    531    752        788
153153   356    550    741        796
153930   360    568    780        758
153941   358    573    772        764
154020   351    543    746        765
154630   356    578    729        790
154725   358    592    753        795
155102   369    591    780        736
155113   375    609    788        738
155454   362    576    764        758
155551   357    544    768        728
170411   332    490    710        669
171348   336    531    728        670
173544   334    528    759        628
175132   360    506    788        585
180101   351    525    789        578
180114   357    530    796        566
180135   354    538    801        562
180216   352    537    792        598
180632   354    540    798        573



The raw points (blue squares) were aligned (green triangles) by rotating them about the center of the sun, through the difference between each photo's sunspot-c angle and the average sunspot-c angle. Then a best-fit line was calculated for the green triangles. I didn't know it at the time, but not knowing the exact position of the sun's north pole would be a contributor to the error in my measurement. If at some time in the future I can find the angle between sunspot c and solar north, I can make that correction.

In the photo below, the average radius of sunspot c is 161 pixels and its average angle is 197 degrees.
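Here's a rough, untested sketch of that rotation step. It assumes the centre of the sun is at (500, 500) in the 1000x1000 crops and uses the 197-degree average angle of sunspot c; the example point is the first row of the table above.

# align_venus.rb - untested sketch of the alignment described above.
CX, CY  = 500.0, 500.0   # centre of the sun in the 1000x1000 crops
AVG_DEG = 197.0          # average angle of sunspot c

def angle_deg(x, y)
  Math.atan2(y - CY, x - CX) * 180.0 / Math::PI
end

# rotate a point about the centre of the sun by deg degrees
def rotate(x, y, deg)
  rad = deg * Math::PI / 180.0
  dx, dy = x - CX, y - CY
  [CX + dx * Math.cos(rad) - dy * Math.sin(rad),
   CY + dx * Math.sin(rad) + dy * Math.cos(rad)]
end

# first row of the table: sunspot c at (310, 561), Venus at (720, 918)
correction = AVG_DEG - angle_deg(310, 561)
puts rotate(720, 918, correction).map(&:round).inspect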



Knowing the path in pixels isn't very useful, so the scale needed to be converted to arc-seconds. Since my pictures have a radius of 500 pixels and the sun has a radius of about 960 arc-seconds, the path of Venus across the sun can be described as

y =  2.34x - 1510 

To get the length of the transit, we need to solve for the intersection of this line with the edge of the sun's disk.

x^2 + y^2 = 960^2

I won't write out all the steps here because I haven't figured out how to use a decent equation editor, so for now I'll just tell you that I calculated the transit length to be 1477 arc-seconds in Sacramento.
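For the record, here's a rough, untested sketch of the skipped algebra: substitute the best-fit line into the circle equation and take the distance between the two intersection points. The coefficients printed above are rounded, so this gives a value slightly different from the 1477 arc-seconds I got from the unrounded spreadsheet data.

# transit_chord.rb - untested sketch: length of the chord where the line
# y = m*x + b crosses the solar disk x^2 + y^2 = r^2.
m, b, r = 2.34, -1510.0, 960.0

# substituting the line into the circle gives
# (1 + m^2) x^2 + 2*m*b x + (b^2 - r^2) = 0
a2 = 1 + m * m
a1 = 2 * m * b
a0 = b * b - r * r

disc = Math.sqrt(a1 * a1 - 4 * a2 * a0)
x1 = (-a1 - disc) / (2 * a2)
x2 = (-a1 + disc) / (2 * a2)

chord = (x2 - x1).abs * Math.sqrt(1 + m * m)
puts "transit chord: #{chord.round} arc-seconds"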

To calculate parallax, I needed to convert the Mauna Kea transit time from seconds to arc-seconds. To do this, I needed to know the speed of Venus across the face of the sun in arc-seconds per second. I thought this would be easy: just average the change in position over the time between observations for every one of the photos. This number turned out to be way off, and I couldn't get any reasonable numbers by taking a subset of consistent-looking observations.
It didn't help that I had moved 15 miles north - you can see the gap in the data - midway through the observation period. I was surprised that it made a difference.

In the end, I just used a ruler to measure the distance between the majority of the points and divided by my total observation time. This enabled me to convert the documented Mauna Kea transit time of 22,500 seconds to 1641 arc-seconds (a rate of roughly 0.073 arc-seconds per second).

I could have calculated the position and orientation of the Mauna Kea transit by solving for a line 1641 arc-seconds long with a slope of 2.34, but I got impatient. I got out my ruler again and drew a line of that length, parallel to my observations. At this point I realized that perhaps parallax was not the perpendicular distance between the two paths, but the north-south distance. My photos were only roughly aligned with the north pole, and the parallax angle can vary wildly if the pole is not aligned. The number I came up with was 141 arc-seconds of parallax between Sacramento and Mauna Kea. This is way too big, which means that the path of Venus across the sun was much more perpendicular to the poles than my photos showed.

I discovered an even more serious error. I realized that since Mauna Kea is to the south of Sacramento, parallax should project the transit line to the north. Since the transit was projected on the sun's northern hemisphere, it means that my calculated transit time should have been greater than Mauna Kea's.

For now there was nothing to do but forge ahead, even with bad numbers - just to complete the exercise. I applied the equations from the Exploratorium's ToV web site. I should mention here that the Exploratorium was my favorite place in the world when I was a kid.


We need to know the N-S distance from Sacramento to Mauna Kea.

The diameter of the earth is 12,700 km.
The latitude of Sacramento is 38.6 degrees.
The latitude of Mauna Kea is 19.8 degrees.

The distance between the two locations should be:

12,700km*(sin(38.6) - sin(19.8))  = 3,620 km

Using the Exploratorium equations:

E = the parallax angle.
V = E/0.72 = 54,000
Da-b = the distance North to South from the two viewing locations.
De-v = the distance from Earth to Venus.
De-s = the distance from Earth to the Sun.

De-v = (0.5*Da-b)/tan(V) = 19,200,000 km
De-s = De-v/0.28 =  68,600,000 km

If you look it up, the correct answer is about 150,000,000 km. My measurements and calculations yielded a result less than half the true value!

Now it's time to call in some help from the professionals at NASA.


From this diagram, I realized that I should be using the Earth's ecliptic rather than the Sun's poles to determine parallax. I could see that the angle between the Earth's ecliptic and the path of Venus was only 9 degrees, rather than the 24 degrees that I got from my photos. I could also read from the graph the velocity of Venus across the face of the sun, which enabled me to calculate that the path across the sun was 1517 arc-seconds at Mauna Kea. The difference is less than the uncertainty in my measurement!

I'm not too disappointed, though. Even the heroic 1769 expedition of James Cook and Joseph Banks failed to return usable data, so I am in good company. It was a fun project, and I got to see the orbital mechanics of our solar system in action!
I also have some ideas for salvaging my data. Perhaps I can get some use out of my move north in the middle of the data collection: the data clearly shows the projected path moving south after I moved north. At least it moved in the right direction!

Data from Australia shows a projection to the north of the one I observed, so hopefully the increased distance can reduce the relative error of my measurements. This site may yield data I can use: Astronomical Association of Queensland (AAQ)

It lists predicted values rather than measured values, but I may use it in a pinch.
 

I also located measured data from S. Bolton in Canberra. In a later post I'll apply this data and re-evaluate my results!


Canberra, latitude -35.230167

Entry:  22:16:41 =  80201 s
        22:34:43 =  81283 s    average:  80742 s
Exit:   04:26:46 = 102406 s
        04:44:02 = 103442 s    average: 102924 s

Duration (exit average - entry average): 22182 s
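As a sanity check, the numbers in that table can be reproduced with a few lines of Ruby (untested sketch; the exit times are on the following day, so 86400 seconds are added):

# canberra_duration.rb - untested sketch reproducing the arithmetic above.
def secs(hms, next_day = false)
  h, m, s = hms.split(':').map(&:to_i)
  h * 3600 + m * 60 + s + (next_day ? 86400 : 0)
end

entry_avg = (secs('22:16:41') + secs('22:34:43')) / 2              # 80742
exit_avg  = (secs('04:26:46', true) + secs('04:44:02', true)) / 2  # 102924
puts exit_avg - entry_avg                                          # 22182 seconds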

Friday, January 25, 2013

GoPro Underwater Pole

Some folks have asked how we made our GoPro fishing videos. First, I have to say that this project was very much inspired by the crew of the yacht Teleport.















Our simple rig was made with schedule 40 PVC pipe. In the drawing below, the GoPro has been rotated 90 degrees around the axis of the pole for clarity.










The rig just needs to be long enough to get under the boat when lowered into the water at an angle. The two one-foot lengths of pipe are used as handles to steer the camera in the general direction of the action. The tee and end caps were glued with normal PVC solvent-type glue. One problem I encountered was that as I tried to put on the last cap, the compression of the air inside kept pushing the cap back off; I had to hold it on tight until the glue set up. The next time I do this, I'll put a threaded coupling on the end of the handle and seal it with a threaded cap.

We were concerned that the GoPro camera might somehow come loose from the clamp during operation. We weren't using the camera'a the float so I had visions of the camera dropping 70 fathoms to the seabed, so we tied the camera to the pole with string. The next concern was what would happen if we dropped the rig in the water. We didn't have time to test it to see if it floated so I tied a rope to it then tied the other end of the rope to a cleat on the boat. Next trip, if we find it doesn't float, the solution may be to wrap the pipe with a "noodle" pool toy. Not only would this increase buoyancy, but it would reduce the sound picked up by the camera when the pole bumps the side of the boat.