Troubles with failed alignment?

I was practicing for a RECON campaign the other evening.  Everything was up and running and my first alignment of the night was a quick success.  At that point I sent the telescope off to the first RECON field of the night.  After getting there I needed to change the camera setting to a sense-up of x128.  When I touched the camera housing I felt the tiniest of static electricity sparks leap from my finger to the camera.  That’s very unusual but it was a really dry day.

Here’s the thing: at the instant I felt the spark, the sound of the telescope drive motors changed pitch and volume by a little bit.  Of course, the telescope is tracking and it makes its quiet humming sound as it works.  An abrupt change like that got my spidey-senses tingling, but there didn’t seem to be any major problem except that the telescope was no longer tracking perfectly.  I was seeing a very slow but inexorable drift of the field on the camera image.  Not good, but I still wasn’t sure there was much of a problem.  But, better safe than sorry.  I decided to redo the pointing.

Well, redoing the pointing wasn’t helping at all.  I tried three times in a row, changing stars each time, and by the end I knew that something was very wrong.  At least one of those attempts should have worked.  Rather than try, try again without changing anything, I decided I would start all over again.  That means turning off the power to the telescope and letting it sit a minute.  After I turned it back on I was quickly able to get a good alignment and found the RECON campaign field.  So what happened?

I think the static spark caused the GPS information (time or position) to be corrupted.  My theory is that this information is loaded at the very start and isn’t updated again.  After the spark, either a bad position or time meant the computer was doomed to failure since the stars would never appear to be in the right place.  That would also make tracking and pointing all wrong.  The odd thing is that the telescope, camera, and computer all thought everything was fine.

This experience is worth sharing as an example of one way things can go wrong and how to recover.  The key, as always, is to 1) know your equipment, and that means practice, and 2) pay attention to your equipment and what it’s telling you when in use.  I never would have thought a static spark could do this but that notion has now been dispelled.  I wonder how many of you have been inadvertently bitten by this failure mode.  I suspect this will only happen when it is very, very dry out and even then it will still be rare.  In my case, the relative humidity was below 10% and I only got that one little spark the entire night.  I sure am glad it wasn’t just before an actual campaign event!

No More Broken Tripods!

At the Northern RECON Training Workshop in Pasco last month, we figured out an adjustment to the CPC-1100 tripod that fixes a failure mode that has been occurring throughout the project. Over the course of the project, several spreader-arm brackets at the base of the tripod have been broken by team members trying to open overly stiff tripod legs. The fix involves loosening the three leg bolts at the top of the tripod.

Click here to check out the guide for this one-time adjustment . . .


and say goodbye to broken tripods for the remainder of the RECON Project!!!

Laptop Power Supply

A couple of teams have had problems with their laptops that were very puzzling. In this case, a working computer became non-functional and unresponsive. The initial guess was a dead battery, but the machines were reported to have been “plugged in all day”. It turns out that the problem is a bit of confusion over the proper power supply for the laptop. The power connector for the laptop happens to be essentially the same as the barrel connectors for the rest of the equipment. The correct power supply for the laptop provides 19V power. Everything else provides 12V.

The confusion centers on the extra power supply provided with your MallinCAMs. In normal circumstances we don’t use this. A picture of one of these power supplies is shown below:

MallinCAM power supply. This unit provides 12V power. It is not normally used but can be useful for indoor testing.

There are two versions of the CORRECT power supply shown below. Which charger you have depends upon which batch of laptops you received, but both versions have a transformer that is labeled with an output value of 19V. The first version has a removable plug at the transformer end that can be rotated if your wall plug or power strip doesn’t like the orientation you are using. The second version has a removable cord at one end of the transformer that goes to 110V power.

One correct version of the Laptop power supply. This unit provides 19V power, as labeled on the transformer.


A second correct version of the Laptop power supply. This unit provides 19V power, as labeled on the transformer.

As always, the computer will actually tell you if it is charging properly. There is a little red light on the front of the machine that you can see even with the lid closed. Red means it is charging. While running, there is a tool on the task bar that will tell you if you are on battery or wall power.

Setting up Your New RECON Netbook

Here’s what you should find in the netbook box:

  • netbook
  • power cable
  • plug adapter
  • battery
  • dust cloth
  • RCA cable
  • USB video adapter
  • 3 spare fuses for telescope/camera power splitter
  • RECON stickers
  • manufacturer user guides and warranty information

Packing materials such as cardboard, foam, and bubble wrap can be discarded.

Getting Started

First, connect the plug adapter to the end of the power cable.

You can insert the plug adapter in any direction, depending on which orientation best suits your power outlet. Turn the adapter clockwise to lock it in place. To remove the adapter (to adjust the orientation), press the tab and turn the adapter counter-clockwise.

Next, insert the battery into the netbook. 

Plug the power cable into an outlet, then into the netbook. You should see a charging light come on at the front of the netbook.

Open the netbook, remove the protective foam from the keyboard and the screen, then press the power button above the upper left of the keyboard. After it boots up, you will see the Windows desktop.

Date and Time Settings

It is important to make sure your netbook has the correct date and time settings, particularly with respect to the time zone. Move the cursor to the very bottom of the screen to bring up the taskbar. Check to see if the date and time shown in the lower right corner is correct.

Click on the time in the taskbar, then click “Change date and time settings…”

Under “Time zone”, make sure the correct time zone is selected: “(UTC -08:00) Pacific Time (US & Canada)”. If it’s not, click the “Change time zone…” button and select the correct time zone from the drop-down menu.

If the time shown in the taskbar is still not correct, you have two options: 1) Set the time manually by clicking the “Change date and time…” button, or 2) Connect to the internet so that the computer can sync to Microsoft’s internet clock.

For option 2, you can connect to the internet by either inserting an ethernet cable on the left side of the netbook, or clicking the wireless network icon in the taskbar to set up a WiFi connection (if one is available).

Once you’ve established a working internet connection, click the “Internet Time” tab, then click the “Change settings…” button. On the window that pops up, ensure the box next to “Synchronize with an Internet time server” is checked, then click the “Update now” button. (If you get an error message, check your internet connection and try again.) Shortly, you should see the correct time shown in the taskbar. Press “OK” on each window to close it.

Finally, restart the computer so that OccultWatcher and other software will re-calibrate to the correct time zone.

Connecting the Camera System

Identify the yellow RCA connector on the USB video adapter. Connect it to one end of the RCA cable.

Connect the other end of the RCA cable to the output jack of the IOTA-VTI box, and connect the USB end of the adapter to the netbook. (The connections between the IOTA-VTI box and the camera are the same as before.)

 

Congratulations! You’re now ready to begin collecting data with VirtualDub.

Handling Video Data

To date in the project we have been using a mini-DVR for recording video for our occultation events.  In my last posting, I talked about the problems discovered with this approach.  Since then I have been working on setting up a new data recording system. This week marks the completion of that work and new data collection tools will be shipped out to all the teams. This document discusses the changes and the reasoning.  New training documents will also be made available shortly to help with the transition.

We are now going to use netbook computers in the field for data collection. I tried a couple of different systems and settled on an Acer Aspire V5 running Windows 7.  These systems will come pre-configured with VirtualDub, OccultWatcher, LiMovie, Chrome, LibreOffice, Skype, Occult, and a few other useful utilities.  These computers should not be used for general tasks; instead, they are project machines for things like collecting data, transmitting data to SwRI, and signing up for events.  A small amount of customization of each system will be required, such as putting in your OW credentials.

Each computer comes with a video frame grabber interface (StarTech SVID2USB2) and a video cable that will connect to the output of the IOTA-VTI box.  Inside the box are also a couple of spare fuses as well as some of our cool new RECON stickers.  I recommend using at least a few of these stickers on the equipment (telescope, telescope crate, and so on).  If you need more, let me know.

I think you will all like the new system.  It’s not so great for dark adaptation; the screen seems really bright at night even when turned all the way down.  But you’ll be able to get a bigger image on the screen than what’s been possible with the mini-DVR and you’ll find that focusing and finding the field will be easier since you can see the screen from a distance while operating the telescope.  The program, VirtualDub, is what you run to see the video signal coming in.  Once you turn on the capture mode the video is live.  Stay tuned for a more detailed document on how best to set up and use this program.

The data quality from this setup is exceptionally good.  There is a price, however.  The files you have been collecting with the mini-DVR up to now are really quite small.  The new system makes much bigger files.  The data rate is about 200Mb/minute and your upload times will be significantly longer.  Unfortunately, this can’t be helped but the extra time will pay off in the increased scientific value of the data.  Note that the data are compressed pretty well already.  You won’t get much more out of using zip other than to pack a bunch of files into one clump for a single upload.
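To give a feel for what that data rate means in practice, here is a rough back-of-the-envelope sketch (the 10-minute recording length and the 5 Mbit/s upload speed are made-up examples, and I’m reading the 200Mb/minute figure as megabytes):

```python
# Rough file size and upload time for the new VirtualDub recordings.
# Illustrative assumptions: ~200 MB of video per minute (from the post),
# a 10-minute recording, and a 5 Mbit/s upload link.
data_rate_mb_per_min = 200
recording_minutes = 10
upload_mbit_per_s = 5

file_size_mb = data_rate_mb_per_min * recording_minutes
upload_minutes = file_size_mb * 8 / upload_mbit_per_s / 60

print(f"File size: {file_size_mb} MB")           # 2000 MB
print(f"Upload time: {upload_minutes:.0f} min")  # ~53 minutes
```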

Everyone should see their systems arriving late this week or early next week.  I’m trying to get this equipment in your hands in advance of the 2001XR254 event in early March. I’m confident you’ll be able to make the switch in time but in an emergency the mini-DVR will be able to handle this upcoming event.

System Performance

2013 has been an amazing year for the RECON project.  We’ve progressed from an idea to a community in this short period of time.  One element of this effort is to better understand our observing system.  Central to that need are our camera and data system (mini-DVR).  This posting is a summary of lessons learned along with plans for where we’re headed in the future.

Camera

As you know, we have been using the MallinCAM B&W Special.  My earlier testing indicated this would be a good camera for us but it remained to collect actual data and understand this device in gory detail.  Thanks to all team members I have enough information in hand to answer most of my burning questions.  I would like to especially thank Jerry Bardecker (RECON-Gardnerville) for his help in taking test data on short notice and two students from Berthod High School, Jo and Melody, who helped with laboratory-based timing tests.

Timing fidelity

One of the most important measurement goals of our project is to accurately time the disappearance and reappearance of the stars as they are occulted by an asteroid.  There have been numerous rumors floating around the occultation community that the MallinCAMs had some strange timing issues.  The answer, as with all real research projects, turns out to be complicated.  There’s good news and there’s bad news.

Screen shot of top-level camera settings. The three settings in the red box are NOT set correctly. These settings are particularly bad because they can prevent getting an accurate time for an occultation. The settings should be ALC (SHUTTER OFF, LEVEL to the far right), BLC (OFF), and AGC (MANU, set to the maximum).

First the bad news. There are settings for the camera where the data will look good but you will get very poor timing results. The image to the right shows a mis-configured camera.  The incorrect settings in the red box all lead to potentially strange results.  What do they do?  Well, note the setting here for a sense-up of x16.  The sense-up setting controls the effective exposure time for the data.  In this case each image from the camera is supposed to take 16/59.94 seconds (0.267 seconds).  You cannot figure out an accurate disappearance time if you don’t know the exposure time.  The automatic modes shown in this bad example have the effect of letting the camera adjust the sense-up setting at any time it likes.  The idea behind these modes is that it’s all about getting a pretty picture, not about taking accurate occultation data.  This doesn’t always mean the data will be bad. In fact, you can’t really see a problem just by looking at the screen.  It’s only when you extract a lightcurve that you see what’s happening.  In principle, if you knew exactly when each exposure happened and what its duration was, you could get back to the correct timing. Unfortunately, there is no reliable way to ensure knowing this and the result is that you have to guess in order to analyze the data.  I hope I don’t have to spend a lot of time convincing you that guessing is a bad way to do a scientific measurement.  The bottom line here is that you must always have these settings set to the recommended standard and fixed values, forcing the camera to operate in the same way throughout the observation.
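To make the exposure arithmetic concrete, here is a minimal sketch of how a fixed sense-up setting maps to an effective exposure time; the 59.94 fields per second comes from the NTSC video standard discussed later in this post:

```python
# Effective exposure time for a fixed MallinCAM sense-up setting.
# Each sense-up step stacks that many video fields (NTSC: 59.94 fields/s).
FIELD_RATE = 59.94  # fields per second

def exposure_time(senseup: int) -> float:
    """Return the effective exposure in seconds for a sense-up of x<senseup>."""
    return senseup / FIELD_RATE

for s in (12, 16, 48, 64, 128):
    print(f"x{s:<3d} -> {exposure_time(s):.3f} s")
# x16 -> 0.267 s, the example used in the text
```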

Now the good news.  If you use the suggested settings for the camera, the MallinCAM is capable of taking very good occultation data.

Sensitivity

The second area of concern for the camera is to know how sensitive it is.  By this I really mean, what’s the faintest thing that can be seen?  Why does this matter?  A well-known property of the sky we see is that there are more faint stars than there are bright stars. From star catalogs we know there are 116,000 stars that are 9th magnitude or brighter. Even though that sounds like a lot of stars, there aren’t that many occultations we will see with stars this bright and we’d wait a long time between events if that was as faint as we can see.  The situation gets better as you go fainter.  For example, there are 6.1 million stars that are 13th magnitude or brighter.  By going 4 magnitudes fainter the rate of observable occultations increases by more than a factor of 50.  The original baseline plan of RECON was to be able to work on stars this faint.  The first order of business was to make sure we can go this faint but it’s also important to know just how faint we can go in case we can go fainter.  If we can get to 15th magnitude the number of stars goes up to almost 36 million, or an increase in the rate of occultations by another factor of 6.
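Plugging in the star counts quoted above shows how quickly the opportunities grow with limiting magnitude (a sketch that simply treats the ratio of star counts as the ratio of occultation rates, as the text does):

```python
# Star counts by limiting magnitude, from the numbers quoted above.
star_counts = {9: 116_000, 13: 6_100_000, 15: 36_000_000}

print(f"mag 9 -> 13: {star_counts[13] / star_counts[9]:.0f}x more stars")   # ~53x
print(f"mag 13 -> 15: {star_counts[15] / star_counts[13]:.1f}x more stars") # ~5.9x
```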

The question of sensitivity is much more important than you might think at first glance.  In fact, this very question was a central issue in the proposal we submitted to the National Science Foundation in Nov 2013.  We had to make a convincing case about just how sensitive our system is and how that translates into how productive our project will be. Obviously, we want to get the maximum amount of return from the invested grant funding.

During all our campaigns in 2013 it became very obvious to me that we could easily work with 13th magnitude stars.  That’s really great news, though I did have reason to believe it would be true.  The next question is how much fainter we can go. This is a harder question to answer from our occultation attempts since we did not try for any really faint stars.  In fact, working with faint stars on main-belt asteroid occultations has its own challenges since the asteroids are easily seen by our equipment (as you have seen yourself).  With our TNO events, you will never see the TNO itself since it is typically a 23rd-24th magnitude object.  This means we can more easily work with much fainter stars.

Video image of a star field taken by Jerry Bardecker. This image was taken with SENSE-up x128 and a computer-based frame grabber. The red circle is around a 15th magnitude star. The yellow circles are around 13th magnitude stars. The green circle is around an 11th magnitude star. The faintest stars that can be seen here are about 16th magnitude.  This field is at J2000 01:00:34.2 +49:44:09.

In November, I sent a request to Jerry to have him take a picture of a patch of sky where there were a lot of stars of varying brightness.  By taking a short video of this area in a few different sense-up settings I could then simulate our system performance on occultations of fainter stars.  Based on this image we can easily go as faint as magnitude 15. In fact, the true limit is somewhat closer to 16. This is very exciting for the RECON project as it means we will have many more opportunities to work with than I originally estimated.  There is, however, one subtle lesson buried here in these test data.  That lesson is the subject of the next section.  For now, we know that we have a very sensitive camera that can do a really great job on observing occultations.

Data system

The system we’ve been using up to now employs a mini-DVR to record the video data. This unit is pretty convenient and (mostly) easy to use.  It was also really, really cheap and as far as I know, 100% of our units are still functional.  Ok, that’s the good news.  The bad news is that these systems significantly degrade the data coming out of the MallinCAM.

Back in October, I chased an occultation by Patroclus using the standard RECON system.  A few days earlier, Jerry observed an occultation by Dorothea.  Jerry’s system is slightly different from the standard RECON system.  The most important difference is that he uses a computer with a frame grabber to record the video.  This approach is also used by a few other RECON teams that have computers readily available near their telescopes.  I’ve known about frame grabbers and I always knew they were better, but the cost and complexity of this setup is somewhat higher and I wanted to stay as simple and cheap as possible.

About this same time, I sent the Susanville camera and data system to an avid IOTA member and occultation chaser, Tony George.  Tony volunteered to test out our system to see what he could learn. He did this and distributed a document entitled: Comparison of Canon Elura 80 camcorder to mdvr.  The most important thing he did here was to compare a system he understood really well (and that works well) against our mini-DVR.  He set up a test where he sent the same signal to his recorder and our mini-DVR so the exact same data were recorded on both.  This test eliminates any differences in atmospheric conditions or telescope focus variations.  Ideally, one would see the same data in the saved files.  Tony has been trying to get me to use the Canon video recorder for quite some time but I don’t like this path because the recorders are no longer manufactured and it requires a second step of playing the recording back into a computer before the data can be analyzed.  I’ve never worried about the data quality since Tony has shown time and time again that his setup is extremely good.

Anyway, the combination of Tony’s report plus the two occultations from October finally made it clear what’s going on.  There are two key things at work, both having to do with aspects of the mini-DVR.

Timing

The first issue has to do with the task of converting the video signal to digital images. There’s actually an important clue buried in the two images shown earlier in this post. Video images are a bit more complicated than they appear.  The type of video we are working with is called interlaced video, now sometimes called 480i.  The video images we get are 640×480 pixels and we get 29.97 such images per second.  The normal term is actually frame and I’ll use that from now on.  Each frame is made up of 640 columns of pixels and 480 rows, more commonly called lines.  But each frame is actually made up of two fields that have 640 columns and 240 lines.  These fields alternate between encoding the even rows and the odd rows.  Thus, a frame is two adjacent fields interlaced line-by-line into a single image.  The field rate is twice as fast as the frame rate, at 59.94 fields per second.
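Here is a minimal sketch of that structure, showing how one 480i frame splits into its two 240-line fields and how closely the fields are spaced in time (numpy is used just to stand in for a captured frame):

```python
import numpy as np

# A 480i frame is two interlaced fields captured 1/59.94 s apart:
# one field carries the even lines, the other the odd lines.
FRAME_RATE = 29.97            # frames per second
FIELD_RATE = 2 * FRAME_RATE   # 59.94 fields per second

frame = np.zeros((480, 640), dtype=np.uint8)  # stand-in for one captured frame

even_field = frame[0::2, :]   # 240 x 640, the even lines
odd_field = frame[1::2, :]    # 240 x 640, the odd lines

print(even_field.shape, odd_field.shape)             # (240, 640) (240, 640)
print(f"Time between fields: {1/FIELD_RATE:.4f} s")  # 0.0167 s
```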

Our IOTA-VTI box actually keeps track of all this and will superimpose information on each field as it passes through the box.  Look closely at the overlay on the first image above.  P9 tells us that 9 (or more) satellites were seen.  The next bit is obviously a time, 07:34:27. The next two numbers have always been a little mysterious (only because I didn’t fully absorb the IOTA-VTI documentation).  You see 1542 and 1933 toward the lower right. The right-most number, 1933, is the field counter.  It increments by one for every field that the IOTA-VTI sees, counting up from 0 starting from the time it is powered on or last reset. The number to the left, 1542, is the fraction of a second for the time of that field.  One strange thing I’ve always noticed but never understood is that the middle number can show up in two different places.  Thus, looking at this image we can tell that it was passed through the IOTA-VTI box at precisely 07:34:27.1542 UT.  This time refers to the start of the field.  Note that it takes 0.017 seconds to transmit a field down the wire so you have to be careful about which point in the transmission the time refers to.

The second image looks a little different.  Here we see P8, which shows eight satellites were in view of the IOTA-VTI box.  Next on the row there is the time, 06:25:50, followed by three sets of digits.  Furthermore, the middle two numbers are gray, not white, and the last digit of the last number looks messed up.  If you look carefully at the full-resolution image you see that one of the two gray numbers is only on even rows (lines) and the other is only on odd rows.  The last digit can be properly read if you look at just the even or odd lines as well. At this point you can see what the VTI box is doing.  It is adding different text to each field, first on the odd lines and then on the even lines.  When the two fields are combined into a single frame you get this text superimposed on itself.  Where it doesn’t change it looks like a normal character.  Where it changes you see something more complicated.  Armed with this knowledge we can now decipher the timing.  This image (frame) consists of one field that started at 06:25:50.0448 and one field that started at 06:25:50.0615 UT.
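As a quick check on that bookkeeping, here is a sketch that reconstructs the two field times from the second image and confirms they sit exactly one field interval apart:

```python
# The two field times hidden in one frame's overlay, read off the
# second image in this post.
hh, mm, ss = 6, 25, 50   # whole-second time stamp on the frame
frac1 = 0.0448           # fractional second of the first field
frac2 = 0.0615           # fractional second of the second field

t1 = hh * 3600 + mm * 60 + ss + frac1
t2 = hh * 3600 + mm * 60 + ss + frac2

print(f"Field separation: {t2 - t1:.4f} s")  # 0.0167 s, i.e. 1/59.94
```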

The question I started asking myself is this: why does the mini-DVR only show one time and why does the frame grabber show two?  Tony’s report showed me the answer.  The mini-DVR only records one field per frame.  Internally, the mini-DVR samples only one field, and it can be either the even or the odd field.  It then copies the field it got into the other field, making a full frame. This saves time for the recorder since it has to take the frame, process it, and write it out to the memory card.  I’ll have a lot more to say about the processing it does in the next section.  For now, it is sufficient to realize that only half of the image data coming out of the camera actually makes it to the video file.  In terms of the occultation lightcurve, this is equivalent to cutting down the size of our telescope to a 7.78-inch aperture; alternatively, it means we lose nearly a magnitude in sensitivity.  In a frame-grabber system, you get all of the data.
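The aperture equivalence is just the statement that half the light corresponds to half the collecting area; here is a quick check of the arithmetic, assuming the 11-inch aperture of the CPC-1100 and a straight factor-of-two light loss:

```python
import math

# Discarding every other field throws away half the light, which is
# equivalent to halving the telescope's collecting area.
aperture_in = 11.0  # CPC-1100 aperture, in inches

equiv_aperture = aperture_in / math.sqrt(2)  # area scales as diameter squared
mag_loss = 2.5 * math.log10(2)               # a factor of 2 in flux, in magnitudes

print(f"Equivalent aperture: {equiv_aperture:.2f} in")  # ~7.78 in
print(f"Sensitivity loss: {mag_loss:.2f} mag")          # ~0.75 mag
```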

Data quality

Recall that I said the mini-DVR has to process the frames before saving the data to the memory card.  The most important thing that happens here is data compression.  Even 480i video generates a lot of data.  The camera is generating 307,200 bytes of data for every frame.  It’s actually worse at the DVR since it interprets the black and white images as color, making the effective data rate three times higher.  This amounts to just over 26 Mbytes of data per second.  In only three minutes you generate enough data to fill a DVD (roughly 4 Gbytes total capacity).  In six minutes you would fill up our 8Gb memory cards.  This data rate is a problem for everyone, including movie studios that want to cram a 3-hour movie onto one DVD.  The fix is to use data compression.  A lot of effort has gone into this and there are some pretty good tools for it.  However, the best tools can take a long time to do the compression and in our systems you don’t have much time. Our cheap little mini-DVR has just 0.017 seconds to compress and save each frame and that’s a pretty tall order.  In fact, that’s why the DVR throws away every other field.  It has to; otherwise there isn’t enough time to process the data.  This compression can be sped up a lot but a consequence of the compression method chosen is that it must throw away information (that’s called lossy compression).  Once thrown away, you can never get it back.  Another trick the DVR uses is to darken the image just a bit.  In doing so, the sky in the image is set to all black and that makes the compression even more effective.  The result of all this is that we get comparatively small files, like 20-30Mb.  Those sites using frame grabbers are typically uploading much larger files, like a few Gb.
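Here is the raw data-rate arithmetic spelled out (a sketch using the numbers above; the DVD and memory-card capacities are the nominal figures quoted in the text):

```python
# Uncompressed data rate of the video stream, following the text.
width, height = 640, 480
frame_rate = 29.97      # frames per second
color_factor = 3        # the DVR treats the B&W signal as color

bytes_per_frame = width * height                    # 307,200 bytes (1 byte/pixel)
rate = bytes_per_frame * color_factor * frame_rate  # bytes per second

print(f"Data rate: {rate / 2**20:.1f} MB/s")                         # ~26.3 MB/s
print(f"Minutes to fill a 4 GB DVD: {4 * 2**30 / rate / 60:.1f}")    # ~2.6
print(f"Minutes to fill an 8 GB card: {8 * 2**30 / rate / 60:.1f}")  # ~5.2
```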

This compression turns out to cause problems with our data as well.  The consequences aren’t as obvious but the net effect degrades the timing accuracy of our system.  In comparison, a frame-grabber system will typically get timings better than 0.01 seconds while the mini-DVR could be as bad as one second with no way to know how large the real error is.  When you put all of these effects together, it says that the mini-DVR is not a very good choice for our project.  It does work, but we can get more out of our telescope and camera and increase our scientific productivity by doing something better.

The next step

One obvious fix is to find a better mini-DVR.  Sadly, getting a better recorder at this price is just not an option.  There are digital recorders that can do this but they are way too expensive for this project.  An alternative is to switch to using a frame-grabber system. Since learning these lessons I have been testing a computer-based data acquisition system.  I tried two low-cost netbook computers with an inexpensive USB-to-video adapter.  Both of the netbook options work just fine once you get them configured right. The bottom line is that for about $350 we can move to a computer system for recording the data.  That’s about $300 more than the DVR solution but the improvement in the data quality is worth it.

The plan now is to get computers for those sites that aren’t already using them, and move to getting rid of the mini-DVRs.  This upgrade has been incorporated into the budget for the new proposal but I am looking at ways to speed up the process for the current RECON sites.  It will mean learning some new procedures and it will significantly increase the time it takes to upload the data but the results will be well worth the effort.

Isolda lessons learned

For those of you that tried Isolda, thank you.  Seems like most of us had one difficulty or another but it’s good to get that out of our collective systems early.  I haven’t had a chance yet to review all of the files uploaded.  I really have to get this automated more.  Being on the road non-stop isn’t helping either.  Last week I was in Flagstaff for a Planetary Defense Conference.  Saturday I was at the bottom of Meteor Crater.  Today I’m in Baltimore serving on an advisory committee for the Hubble Space Telescope.

I wanted to share some reflections on last week’s Isolda occultation event.  First, I have to apologize for one of my mistakes here.  I didn’t check on the Moon for this event.  It was really close and pretty bright on event night.  It gave me a lot of trouble with getting set up and finding the field.  I was not really able to use anything on the star hop list fainter than Alhena.  If it wasn’t for PreciseGoTo I would not have found the field at all.  In the end, the moonlight caused me to take longer than anticipated to get on the field and I was very rushed to get the data recorder started.

Aside from the obvious reminder lessons floating around that night, I learned something really important about our cameras.  The concept is a little tricky to explain but the bottom line is that if you use an exposure time (sense-up) that is too short, you can fail to detect your object at all.  That meant x12 was a really bad idea.  Kudos to the Carson City folks for figuring this out and running with x48 instead.

Here are the details in case you are wondering.  I took a lot of data a couple of weeks ago getting ready for the Pluto event.  Normally you can take an image with one set of camera parameters and then scale to what you’d expect to see at other settings.  I do this all the time, even for working with the Hubble Space Telescope.  In our case, this calculation doesn’t quite work right, as I found out. You see, today’s digital detectors are a lot more capable than cameras were at the time the video signal standard (NTSC in the US) was developed.  Video is designed for a fairly limited range in brightness, far less than what a good camera can deliver.  That means you have to do something in the electronics to match the camera signal to the video output.  This is normally labeled “brightness” and “contrast”, same as you’d see on an old TV.

If you were designing the perfect system, there would be a control that would let you set the signal level for the background of your image.  There’s always some background, either it’s from the sky brightness directly or it’s from the noise floor of your detector.  Now, you can think of a video signal as having 256 levels of brightness — 0 would be black, 128 would be grey, 255 would be white and you have shades in between.  I always prefer to see my background.  That means I’d set the background to be a signal of 5 to 10, depending on how noisy it is.  That means any source in the sky you can detect will be seen as a brighter bump on the background.

Our MallinCAMs have other ideas about how to set the background, unfortunately.  Now, I have to say that there’s a chance I just haven’t figured out how to configure them to do what I want, but with my current recommended settings this is a problem to watch out for.  As I was saying, the MallinCAM doesn’t have a problem with black sky (signal=0).  That’s what I had for the Isolda event.  The problem with this is that you can’t tell the difference between a signal level of -100 and -1.  It all comes out as 0.  So, not only could I not see the sky, but the star to be occulted was at a signal level below 0 and I only got a few of the brightest stars in my field.
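Here is a tiny sketch of why that clipping is so destructive: any internal signal the camera maps below zero comes out of the video as exactly 0, so a faint star and empty sky become indistinguishable (the signal levels are made up for illustration):

```python
import numpy as np

# Hypothetical internal signal levels: deep background, a faint star
# just below zero, zero sky, and two brighter stars.
true_signal = np.array([-100, -20, -1, 0, 5, 40])

# The video output clips everything to the 0-255 range.
video_out = np.clip(true_signal, 0, 255).astype(np.uint8)

print(video_out)  # [ 0  0  0  0  5 40] -- everything below zero is gone
```

This is exactly why a background pinned at a signal of 5 to 10 is the safer place to be: a faint star then shows up as a bump on the background instead of vanishing into the clip.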

How do we deal with this issue?  I’m not entirely sure yet.  I can say that x64 for the upcoming Pluto event is safe.  I really need to characterize the camera better so I know how to better predict its output.  This will be an ongoing effort in the coming months.  All of you could help if you like and I’ve also got a couple of bright high school students that are going to work on tasks like this as soon as school lets out.

Oh yes, there’s one other thing that I’ve noted.  The DVR screen makes your images look darker and less useful than they really are.  I put an example of this on the Pluto event page.  This makes it a little tricky to ensure that you are really seeing the sky level when you are in the field.

Busy days

These have been busy days since the Kitt Peak observing run.  Those observations are critical for us to help find interesting occultation events to try but they are of no use in the form of images.  By that comment I mean that the easy part is over once the pictures are taken.  There’s a lot of image processing and analysis that is required.  I have to calibrate the images as well as map the sky coordinates.  After that I have to scan the images looking for all the moving things.  Most of the moving objects are main-belt asteroids but a few of them are the slow-moving Kuiper Belt objects that are our targets.  Once all these objects are found I then extract their positions and use that information to improve the orbits.  Good orbits are the key to letting me predict where these things will be at some future time.

This work, while difficult and time consuming, is made easier by the software that I’ve developed over the past 15 years.  One of the nasty realities in professional astronomy is that there is very little standardization in the data I get.  Usually, I can count on data from the same camera having the same data format.  But, this observing run was with a camera that I’ve never used before.  Even though this camera is on a telescope I’ve used, the data are just different enough that I had to rework a lot of my software.  In the process, I discovered that there was a serious problem in the supporting data.  One of the key bits of information I need to know is exactly when each picture was taken.  Without a good time, the observations are useless for our project.  Well, it turns out the times recorded by the camera were incorrect, off by as much as 12 minutes.  That may not sound like a lot to you but to me it’s huge.  Want to know how I figured this out?

Well, it’s like this.  Time on telescopes like this is very precious and I work very hard during my nights to make sure that I’m getting as much data as possible.  The ideal thing would be to be collecting light 100% of the time.  Unfortunately, there are unavoidable times when you can’t collect light.  After each picture we have to stop and read out the image and save it to the computer disk.  This camera is quite fast and can store the 16 mega-pixel image in about 25 seconds.  Not as fast as a commercial digital camera, but then it’s much more precise and getting that precision requires a little extra time.  Now, each picture takes about 4 minutes to collect (that’s when the shutter is open and I’m integrating the light coming through the telescope).  If the readout were the only time I’m not collecting light then I could hope for 91% efficiency.  That’s pretty good.  But, there are other things that can eat into your observing efficiency.  For instance, the telescope needs to be moved between each picture.  If it can be moved and set up in less than 25 seconds there is no extra overhead.  Also, if I’m not very organized I might be sitting there in the control room trying to figure out what to do next and the system would be waiting on me.  Well, I have control over my part of the project and I always know what to do in time.  But, the telescope motion turned out to take longer than the readout of the image.  While observing I knew that we were losing time to moving the telescope but I didn’t know exactly how much.
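The efficiency figure is easy to verify (a quick sketch using the exposure and readout times just quoted):

```python
# Best-case observing efficiency if readout were the only overhead.
exposure_s = 240.0  # ~4 minutes of integration per picture
readout_s = 25.0    # time to read out and store the 16-megapixel image

efficiency = exposure_s / (exposure_s + readout_s)
print(f"Best-case efficiency: {efficiency:.0%}")  # ~91%
```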

Ok, so here I am looking at all the new data.  I was wondering just what the efficiency was. So, I wrote a simple program to calculate how much dead time there was between each exposure.  It really is simple to do: you take the difference in the start times of two exposures and then subtract the time the shutter was open.  The remainder is the overhead.  Well, to my surprise, the numbers came out very strange indeed.  The overhead for about 20% of the images was negative.  Do you know what that means?  It implies that some exposures were started before the previous image was completed.  That’s impossible!  After checking that my quickie program was working right, I then turned to my backup source of information.
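Here is a sketch of that quickie overhead calculation (the start times are hypothetical; the negative-overhead case is the impossible result described above):

```python
# Dead time between consecutive exposures: the difference between
# start times minus the time the shutter was open.
exposure_s = 240.0
start_times = [0.0, 265.0, 530.0, 760.0]  # hypothetical exposure start times (s)

for t0, t1 in zip(start_times, start_times[1:]):
    overhead = (t1 - t0) - exposure_s
    flag = "  <-- impossible: started before the last one finished!" if overhead < 0 else ""
    print(f"overhead = {overhead:6.1f} s{flag}")
```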

One of my ingrained habits while observing is to record a hand-written log of what I’m doing.  These days most astronomers rely on automated and electronic logs that are based on what the data system knows.  Not me.  I record information about each picture as an independent check on the system.  Most of the time everything is fine and the logs are somewhat superfluous.  This time, I was able to use the start times I wrote down to show conclusively that the data system was messed up.  I sent a report back to the observatory and after considerable effort they were able to verify the problem, figure out what happened, and then work out a manual recipe for fixing the data based on their backup information.  What a mess.  This detour consumed the better part of 3 days’ worth of work.

Well, no need to recount every last thing I’ve been doing the past couple of weeks.  But, at this point I’ve scanned about 1/3 of the data.  I successfully recovered 29 out of 36 objects I was trying to get.  I had to write an observing proposal to do this again in the fall.  I asked for three more nights.  The data processing continues on the rest of the data.  On top of this, we’re planning the details for the upcoming training workshop next week.  I’m very excited about getting together with everyone and getting everyone ready to observe.  I think we’re going to have a great time together as we get this project up and running.  We may have some challenges caused by the weather.  The forecast is not perfect but I’ll note that it is much better than the weather this weekend.

On Tuesday morning I get on the California Zephyr train, yes, a train, to get to the workshop.  This will be a nice break from flying around the world.  The scenery should be excellent on the ride and I’ll have time to continue to work on getting ready for the workshop.  I want to thank all of you who signed up to participate.  This project is a lot of work but I’m grateful for your willingness and enthusiasm to be involved.  I can’t do it without you and together we’ll amaze the world.  For those coming to the workshop, drive safe, and we’ll see you in Carson City!

More work on the system

Tonight there’s a full moon.  Normally that’s not a great time for star gazing but it doesn’t matter so much when you use a camera.  I’m out at my observatory in northwest Arizona and the weather is fantastic (as usual).  The goal for tonight was to set up the entire system and get an inventory of cables or gear that is needed to make the systems complete.  I also picked up a few ideas for how to better run our systems along with a few lessons learned.

The DVR is going to be a tricky beast to keep under control.  It seems that when it’s first turned on it immediately begins recording.  A nuisance, as long as you know about it.  There’s also a configuration setting relating to recording length.  What this really means is that if you set it for 5 minutes, then after 5 minutes of recording it will stop recording, save the file, and then immediately start another recording.  No matter what, you still have to tell it to stop recording.  Another issue that’s going to take more study is power.  Our DVR has a built-in battery but I haven’t figured out how long it’s good for.  If we need to power the unit we’ll need some type of power converter.  The simplest thing is a DC-AC converter.  But, if the battery has a USB connector we might be able to power it from that.

Setting up the camera and IOTA-VTI box does require a few cables, some of which came with the equipment and some not.  We’ll need one BNC to RCA video adapter, a couple of DC power cables, and a power splitter cable.

I also had a bit more practice with the telescope.  For now I’m using the stock finder but I really don’t like it.  I see that it definitely affects the balance of the telescope, making it more tail-heavy.  Tonight I used the default “Sky Align” option on the telescope.  I was able to do this while it was quite bright out; in fact, this is the perfect time to do so since you will certainly get the brightest stars that are needed for the alignment procedure.  I did a little bit of testing of the telescope’s ability to find objects.  It worked reasonably well but perhaps not as well as what I’m used to on my old telescope.  Perhaps that’s just because I know the old one better.

I did look at the power connector issue a bit as well.  I got a great suggestion from Dean at Starizona and I tried it out tonight.  He said to take the power cord and tie it around one of the forks and then plug it into the base.  This way, as the telescope moves, the fork pulls the cable around and it avoids putting stress on the power connector.  For me this worked just fine, at least this night.

At the end of my testing I’ve got some video data saved that I can dig into to learn about data quality and timing information.  I want to get reasonably good values for the field of view with and without the focal reducer as well as some idea of how faint this camera can get.

First night with the system

Well, last night I got a chance (finally!) to try out the new equipment.  I got the cameras on Friday (all 14 of them).  We still don’t have some of the accessories for the telescope, most notably the Telrad finders.  But I had enough to let me try everything out on the sky with the full set of equipment.  I learned a few things that I’m still digesting.  One thing was that my personal telescope, a Celestron Nexstar that was the basis for the early work on this project, now has to qualify as an old telescope.  I don’t know how that happened but I’ve had it for 11 years now.  Still seems new to me.  Anyway, Celestron has clearly been busy over that time and the telescope shows signs of many improvements.  More interestingly, I was able to take it out and get it running without reading the manual at all.  Yes, the hand controller is quite different now but I managed ok.  Eventually I’ll get around to reading the manual.

I was able to get the Mallincam Special camera on and take images of the sky through the focal reducer.  The star images look really good, at least on the monitor.  I’m still working on how best to deal with the video images that come out of our video recorder.  There are a few things that are clear now.  1) the video recorder (DVR) looks like it will work but has some operational peculiarities that I need to document.  The device somehow decided to change video formats during my short tests and I need to figure out why and how that happens.  2) I still need to get memory cards for the DVR.  These are on order now.  3) A bigger monitor is certainly nice to have and I have a good option picked out if we have to go this way, but the DVR screen is pretty good on its own.  4) getting power to the gear and routing the video lines is tricky.  There aren’t that many wires but they seem to multiply in the dark.  I had a real rat’s nest going there for a while.  I also need to come up with some additional power cords for the full system.  I was using my pile of engineering gear and I’ll have to do a census on the remaining items that will be needed.  Once that’s done the cameras should be ready to be sent out, most likely in early March.  I will be running additional tests on the camera next week while I’m at my observatory in Arizona.  I haven’t yet decided if I’ll bring the new telescope along for the ride or just stick with my trusty (old) scope for now.  I guess it depends on how full my car gets.