Isolda lessons learned

For those of you who tried Isolda, thank you.  Seems like most of us had one difficulty or another but it’s good to get that out of our collective systems early.  I haven’t had a chance yet to review all of the files uploaded.  I really have to get this automated more.  Being on the road non-stop isn’t helping either.  Last week I was in Flagstaff for a Planetary Defense Conference.  Saturday I was at the bottom of Meteor Crater.  Today I’m in Baltimore serving on an advisory committee for the Hubble Space Telescope.

I wanted to share some reflections on last week’s Isolda occultation event.  First, I have to apologize for one of my mistakes here.  I didn’t check on the Moon for this event.  It was really close and pretty bright on event night.  It gave me a lot of trouble getting set up and finding the field.  I was not really able to use anything on the star hop list fainter than Alhena.  If it hadn’t been for PreciseGoTo I would not have found the field at all.  In the end, the moonlight caused me to take longer than anticipated to get on the field and left me very rushed to get the data recorder started.

Aside from the obvious reminders floating around that night, I learned something really important about our cameras.  The concept is a little tricky to explain but the bottom line is that if you use an exposure time (senseup) that is too short, you can fail to detect your object at all.  That meant x12 was a really bad idea.  Kudos to the Carson City folks for figuring this out and running with x48 instead.

Here are the details in case you are wondering.  I took a lot of data a couple of weeks ago getting ready for the Pluto event.  Normally you can take an image with one set of camera parameters and then scale to what you’d expect to see at other settings.  I do this all the time, even for working with the Hubble Space Telescope.  In our case, this calculation doesn’t quite work right, as I found out.  You see, today’s digital detectors are a lot more capable than cameras were at the time the video signal standard (NTSC in the US) was developed.  Video is designed for a fairly limited range in brightness, far less than what a good camera can deliver.  That means you have to do something in the electronics to match the camera signal to the video output.  This is normally labeled “brightness” and “contrast”, same as you’d see on an old TV.
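
To show the kind of scaling I mean, here’s a minimal sketch.  The numbers and the function are made up for illustration; the assumption is a perfectly linear detector, where the signal is proportional to the integration time, so x12 collects a quarter of the light of x48.

```python
# A sketch of the naive scaling calculation (illustrative numbers,
# not measured MallinCAM values).  Assumes a perfectly linear
# detector: signal is proportional to integration time (senseup).

def predicted_signal(reference_signal, reference_senseup, new_senseup):
    """Scale a measured signal to another senseup setting."""
    return reference_signal * (new_senseup / reference_senseup)

# Say a star reads 80 counts above background at x48.  Naively:
for senseup in (12, 24, 48, 96):
    print(f"x{senseup}: {predicted_signal(80, 48, senseup):.0f} counts")
```

The trouble, as the next paragraphs explain, is that the camera’s output isn’t linear all the way down.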

If you were designing the perfect system, there would be a control that would let you set the signal level for the background of your image.  There’s always some background, either from the sky brightness directly or from the noise floor of your detector.  Now, you can think of a video signal as having 256 levels of brightness: 0 would be black, 128 would be grey, 255 would be white, and you have shades in between.  I always prefer to see my background.  That means I’d set the background to be a signal of 5 to 10, depending on how noisy it is.  That way, any source in the sky you can detect will be seen as a brighter bump on the background.

Our MallinCAMs have other ideas about how to set the background, unfortunately.  Now, I have to say that there’s a chance I just haven’t figured out how to configure them to do what I want, but with my current recommended settings this is a problem to watch out for.  As I was saying, the MallinCAM has no problem putting the sky at pure black (signal=0).  That’s what I had for the Isolda event.  The problem with this is that you can’t tell the difference between a signal level of -100 and -1.  It all comes out as 0.  So, not only could I not see the sky, but the star to be occulted was at a signal level below 0 and I only got a few of the brightest stars in my field.
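
To make the clipping problem concrete, here’s a toy model.  I’m assuming the brightness control acts like a simple offset applied before the electronics clip the signal to the 0-255 video range; all the numbers are invented for illustration.

```python
import numpy as np

# Toy model of the clipping problem (numbers made up for illustration).
# "offset" plays the role of the brightness control: it shifts the
# whole signal before the electronics clip it to the 0..255 video range.

def to_video(signal, offset):
    return np.clip(signal + offset, 0, 255)

sky = 0.0          # true sky background, arbitrary camera units
faint_star = 40.0  # a faint star above the sky
signal = np.array([sky, faint_star])

# With the background placed at 8, the star shows up as a bump:
print(to_video(signal, offset=8))      # -> [ 8. 48.]

# With the background pushed well below zero (what I effectively had
# on Isolda night), both come out as 0 and the star is invisible:
print(to_video(signal, offset=-100))   # -> [0. 0.]
```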

How do we deal with this issue?  I’m not entirely sure yet.  I can say that x64 for the upcoming Pluto event is safe.  I really need to characterize the camera better so I can better predict its output.  This will be an ongoing effort in the coming months.  All of you could help if you like, and I’ve also got a couple of bright high school students who are going to work on tasks like this as soon as school lets out.

Oh yes, there’s one other thing that I’ve noted.  The DVR screen makes your images look darker and less useful than they really are.  I put an example of this on the Pluto event page.  This makes it a little tricky to ensure that you are really seeing the sky level when you are in the field.

Busy days

These have been busy days since the Kitt Peak observing run.  Those observations are critical for helping us find interesting occultation events to try, but they are of no use in the form of raw images.  By that comment I mean that the easy part is over once the pictures are taken.  There’s a lot of image processing and analysis that is required.  I have to calibrate the images as well as map the sky coordinates.  After that I have to scan the images looking for all the moving things.  Most of the moving objects are main-belt asteroids but a few of them are the slow-moving Kuiper Belt objects that are our targets.  Once all these objects are found I then extract their positions and use that information to improve the orbits.  Good orbits are the key to letting me predict where these things will be at some future time.
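
To give a flavor of how the moving objects sort themselves out, here’s a toy example.  The detections and the threshold are invented, and the rates are rough from-memory values for objects near opposition; the real search is considerably more involved than a single cut.

```python
# Hypothetical detections with made-up sky motion rates.  Near
# opposition a main-belt asteroid moves very roughly 30-40 arcsec/hour
# while a Kuiper Belt object creeps along at about 3 arcsec/hour, so
# apparent rate alone separates the two populations pretty cleanly.

detections = [
    ("candidate 1", 36.2),   # apparent rate in arcsec/hour
    ("candidate 2", 3.1),
    ("candidate 3", 29.8),
    ("candidate 4", 2.7),
]

KBO_RATE_LIMIT = 10.0  # arcsec/hour; anything slower is a KBO candidate

for name, rate in detections:
    kind = "Kuiper Belt candidate" if rate < KBO_RATE_LIMIT else "main-belt asteroid"
    print(f"{name}: {rate:5.1f} arcsec/hr -> {kind}")
```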

This work, while difficult and time consuming, is made easier by the software that I’ve developed over the past 15 years.  One of the nasty realities in professional astronomy is that there is very little standardization in the data I get.  Usually, I can count on data from the same camera having the same data format.  But, this observing run was with a camera that I’ve never used before.  Even though this camera is on a telescope I’ve used, the data are just different enough that I had to rework a lot of my software.  In the process, I discovered that there was a serious problem in the supporting data.  One of the key bits of information I need to know is exactly when each picture was taken.  Without a good time, the observations are useless for our project.  Well, it turns out the times recorded by the camera were incorrect, off by as much as 12 minutes.  That may not sound like a lot to you but to me it’s huge.  Want to know how I figured this out?

Well, it’s like this.  Time on a telescope like this is very precious and I work very hard during my nights to make sure that I’m getting as much data as possible.  The ideal thing would be to be collecting light 100% of the time.  Unfortunately, there are unavoidable times when you can’t collect light.  After each picture we have to stop, read out the image, and save it to the computer disk.  This camera is quite fast and can store the 16-megapixel image in about 25 seconds.  Not as fast as a commercial digital camera, but then it’s much more precise and getting that precision requires a little extra time.  Now, each picture takes about 4 minutes to collect (that’s when the shutter is open and I’m integrating the light coming through the telescope).  If the readout were the only time I’m not collecting light then I could hope for 91% efficiency.  That’s pretty good.  But, there are other things that can eat into your observing efficiency.  For instance, the telescope needs to be moved between each picture.  If it can be moved and set up in less than 25 seconds there is no extra overhead.  Also, if I’m not very organized I might be sitting there in the control room trying to figure out what to do next and the system would be waiting on me.  Well, I have control over my part of the project and I always know what to do in time.  But, the telescope motion turned out to take longer than the readout of the image.  While observing I knew that we were losing time to moving the telescope but I didn’t know exactly how much.
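
The 91% figure is just the shutter-open time divided by the shutter-open time plus the readout time.  Here is that arithmetic if you want to check it:

```python
# The best-case efficiency arithmetic from the paragraph above.
exposure = 4 * 60   # shutter-open time per picture, in seconds
readout = 25        # time to read out and save the image, in seconds

efficiency = exposure / (exposure + readout)
print(f"best-case efficiency: {efficiency:.0%}")   # -> 91%
```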

Ok, so here I am looking at all the new data, wondering just what the efficiency was.  So, I wrote a simple program to calculate how much dead time there was between each exposure.  It really is simple to do: you take the difference in the start times of two consecutive exposures and then subtract the time the shutter was open (there’s a sketch of this below).  The remainder is the overhead.  Well, to my surprise, the numbers came out very strange indeed.  The overhead for about 20% of the images was negative.  Do you know what that means?  It implies that some exposures were started before the previous image was completed.
That’s impossible!  After checking that my quickie program was working right, I then turned to my backup source of information.
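
Here’s the kind of quickie check I mean.  The timestamps and the data layout are invented for illustration; the real log format was different, but the arithmetic is the same.

```python
# Dead-time check: overhead = (start of next exposure - start of this
# exposure) - shutter-open time.  A negative overhead means the next
# exposure supposedly began before this one finished, which is
# impossible, so the recorded times must be wrong.
from datetime import datetime

exposures = [
    # (start time, shutter-open seconds) -- made-up examples
    (datetime(2013, 3, 12, 4, 10, 0), 240),
    (datetime(2013, 3, 12, 4, 14, 35), 240),
    (datetime(2013, 3, 12, 4, 18, 20), 240),  # starts 15 s "too early"
]

for (t0, exp), (t1, _) in zip(exposures, exposures[1:]):
    overhead = (t1 - t0).total_seconds() - exp
    flag = "  <-- impossible: negative overhead" if overhead < 0 else ""
    print(f"{t0:%H:%M:%S} -> {t1:%H:%M:%S}: overhead {overhead:+.0f} s{flag}")
```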

One of my ingrained habits while observing is that I record a hand-written log of what I’m doing.  These days most astronomers rely on automated and electronic logs that are based on what the data system knows.  Not me.  I record information about each picture as an independent check on the system.  Most of the time everything is fine and the logs are somewhat superfluous.  This time, I was able to use the start times I wrote down to show conclusively that the data system was messed up.  I sent a report back to the observatory and, after considerable effort, they were able to verify the problem, figure out what happened, and work out a manual recipe for fixing the data based on their backup information.  What a mess.  This detour consumed the better part of 3 days’ worth of work.

Well, no need to recount every last thing I’ve been doing the past couple of weeks.  But, at this point I’ve scanned about 1/3 of the data.  I successfully recovered 29 out of 36 objects I was trying to get.  I had to write an observing proposal to do this again in the fall.  I asked for three more nights.  The data processing continues on the rest of the data.  On top of this, we’re planning the details for the upcoming training workshop next week.  I’m very excited about getting together with everyone and getting everyone ready to observe.  I think we’re going to have a great time together as we get this project up and running.  We may have some challenges caused by the weather.  The forecast is not perfect but I’ll note that it is much better than the weather this weekend.

On Tuesday morning I get on the California Zephyr train, yes, a train, to get to the workshop.  This will be a nice break from flying around the world.  The scenery should be excellent on the ride and I’ll have time to continue working on getting ready for the workshop.  I want to thank all of you who signed up to participate.  This project is a lot of work but I’m grateful for your willingness and enthusiasm to be involved.  I can’t do it without you and together we’ll amaze the world.  For those coming to the workshop, drive safe, and we’ll see you in Carson City!

Observing at Kitt Peak, March 11-13

Here I am in the control room of the Mayall 4-meter telescope on top of Kitt Peak in southern Arizona.  This telescope is operated by the National Optical Astronomy Observatory on behalf of the National Science Foundation.  The goal of this observing run is to check in on as many Kuiper Belt objects as I can and measure their current positions.  These measurements help improve the orbits and thus make it possible to more accurately predict their positions.  We need this for RECON because I need to be able to predict when one of these objects will pass in front of a star.

The weather has been great and I’m getting lots of objects.  The goal was to measure 100 on this run and that should be about right.  It’s pretty exhausting work to keep track of everything and stay alert all night long.  The only break time I get is for a little sleep, and I always watch the sunset.  I’m here for three nights and I was able to see a green flash at sunset on two of the three.  Also, there’s a nice little comet hanging out after sunset (Comet PANSTARRS).  It’s more of a binocular object but it’s there and you can see it even without optical aid.