These have been busy days since the Kitt Peak observing run. Those observations are critical for helping us find interesting occultation events to try, but they are of no use in the form of raw images. By that I mean that the easy part is over once the pictures are taken. There’s a lot of image processing and analysis required. I have to calibrate the images as well as map the sky coordinates. After that I have to scan the images looking for all the moving things. Most of the moving objects are main-belt asteroids, but a few of them are the slow-moving Kuiper Belt objects that are our targets. Once all these objects are found, I extract their positions and use that information to improve the orbits. Good orbits are the key to letting me predict where these things will be at some future time.
This work, while difficult and time consuming, is made easier by the software that I’ve developed over the past 15 years. One of the nasty realities in professional astronomy is that there is very little standardization in the data I get. Usually, I can count on data from the same camera having the same data format. But this observing run was with a camera that I’ve never used before. Even though this camera is on a telescope I’ve used, the data are just different enough that I had to rework a lot of my software. In the process, I discovered that there was a serious problem in the supporting data. One of the key bits of information I need to know is exactly when each picture was taken. Without a good time, the observations are useless for our project. Well, it turns out the times recorded by the camera were incorrect, off by as much as 12 minutes. That may not sound like a lot to you, but to me it’s huge. Want to know how I figured this out?
Well, it’s like this. Time on telescopes like this is very precious, and I work very hard during my nights to make sure that I’m getting as much data as possible. The ideal thing would be to be collecting light 100% of the time. Unfortunately, there are unavoidable times when you can’t collect light. After each picture we have to stop, read out the image, and save it to the computer disk. This camera is quite fast and can store the 16 mega-pixel image in about 25 seconds. Not as fast as a commercial digital camera, but then it’s much more precise, and getting that precision requires a little extra time. Now, each picture takes about 4 minutes to collect (that’s when the shutter is open and I’m integrating the light coming through the telescope). If the readout were the only time I’m not collecting light, then I could hope for 91% efficiency. That’s pretty good. But there are other things that can eat into your observing efficiency. For instance, the telescope needs to be moved between each picture. If it can be moved and set up in less than 25 seconds, there is no extra overhead. Also, if I’m not very organized I might be sitting there in the control room trying to figure out what to do next, and the system would be waiting on me. Well, I have control over my part of the project and I always know what to do in time. But the telescope motion turned out to take longer than the readout of the image. While observing I knew that we were losing time to moving the telescope, but I didn’t know exactly how much.
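The 91% figure comes from simple arithmetic: the fraction of wall-clock time the shutter is open. Here is a minimal sketch in Python, using the numbers quoted above (the function name is mine, just for illustration):

```python
def observing_efficiency(exposure_s, overhead_s):
    """Fraction of wall-clock time spent collecting light."""
    return exposure_s / (exposure_s + overhead_s)

# A 4-minute (240-second) exposure followed by a 25-second readout.
eff = observing_efficiency(240, 25)
print(f"{eff:.0%}")  # prints "91%"
```

Any extra overhead, like slow telescope motion, simply adds to the 25 seconds in the denominator and drags the efficiency down.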
Ok, so here I am looking at all the new data, wondering just what the efficiency was. So I wrote a simple program to calculate how much dead time there was between each exposure. It really is simple to do: you take the difference in the start times of two consecutive exposures and then subtract the time the shutter was open. The remainder is the overhead. Well, to my surprise, the numbers came out very strange indeed. The overhead for about 20% of the images was negative. Do you know what that means? It implies that some exposures were started before the previous image was completed.
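The dead-time check described above can be sketched in a few lines. This is my own illustrative version, assuming start times in seconds and a fixed exposure length; the real data would of course carry per-image timestamps and exposure times:

```python
def overheads(start_times_s, exposure_s):
    """Dead time between consecutive exposures: the gap between
    start times minus the time the shutter was open."""
    return [t2 - t1 - exposure_s
            for t1, t2 in zip(start_times_s, start_times_s[1:])]

# Example with 240-second exposures. The third start time is bogus:
# it is earlier than the readout allows, so the overhead goes negative,
# exactly the symptom that flagged the camera's broken clock.
starts = [0.0, 265.0, 480.0, 770.0]
print(overheads(starts, 240.0))  # prints [25.0, -25.0, 50.0]
```

A negative number in that list is physically impossible, which is what made it immediately clear that the recorded times, not the camera itself, were at fault.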
That’s impossible! After checking that my quickie program was working right, I then turned to my backup source of information.
One of my ingrained habits while observing is to record a hand-written log of what I was doing. These days most astronomers rely on automated electronic logs based on what the data system knows. Not me. I record information about each picture as an independent check on the system. Most of the time everything is fine and the logs are somewhat superfluous. This time, I was able to use the start times I wrote down to show conclusively that the data system was messed up. I sent a report back to the observatory, and after considerable effort they were able to verify the problem, figure out what happened, and then provide a manual recipe for fixing the data based on their backup information. What a mess. This detour consumed the better part of three days’ worth of work.
Well, no need to recount every last thing I’ve been doing the past couple of weeks. But at this point I’ve scanned about 1/3 of the data, and I successfully recovered 29 out of the 36 objects I was trying to get. I had to write an observing proposal to do this again in the fall; I asked for three more nights. Processing continues on the rest of the data. On top of this, we’re planning the details for the upcoming training workshop next week. I’m very excited about getting together with everyone and getting everyone ready to observe. I think we’re going to have a great time together as we get this project up and running. We may have some challenges caused by the weather. The forecast is not perfect, but I’ll note that it is much better than the weather this weekend.
On Tuesday morning I get on the California Zephyr train, yes, a train, to get to the workshop. This will be a nice break from flying around the world. The scenery should be excellent on the ride, and I’ll have time to continue to work on getting ready for the workshop. I want to thank all of you who signed up to participate. This project is a lot of work, but I’m grateful for your willingness and enthusiasm to be involved. I can’t do it without you, and together we’ll amaze the world. For those coming to the workshop, drive safely, and we’ll see you in Carson City!