Wednesday, Dec. 5
I’m at my observatory in northwest Arizona now (near 35N, 113W) after a long 14-hour drive from Colorado. It’s always nice to get out here again. The peace and quiet is quite impressive, as are the sunsets and the night-sky views. I have four cameras and two focal reducers to test, all on my own 11-inch telescope. Given the weather forecast, it may be a couple of days before I get clear enough conditions to do anything useful. That’s something of a pity, since the weather as of now is remarkably mild for this time of year. By the time it clears up, the temperature is supposed to drop by 20-30 degrees. Normally in December I see lows in the teens (or colder) and highs in the 40s. Today it got up to the upper 60s. Watch these postings for updates on results from this activity. I have until Sunday night to get some results. Most likely, Friday night will be the best.
Thursday, Dec. 6
The weather cleared up this afternoon, earlier than expected. The skies are beautiful tonight. I’m writing this while I warm up a bit. It’s almost 10pm and the temperature has dropped to 33F. I’ve suffered much worse while observing, but it’s still nice to come in and warm up for a bit. The testing is going quite well and I’ve already got a lot of critical data. I need to process it carefully after getting some sleep, but I can already tell that I have 4 good candidate cameras. I think I even have a clear winner, but I don’t want to say so until I’ve had a chance to analyze the data.
Friday, Dec. 7
Another beautiful night, though again colder than the previous one. Last night I had borderline issues with frost, but it is drier tonight and there were no such problems this time. I have even more data now for checking out the cameras. In fact, I think I have all I need at this point, and I need to spend more time in front of the computer instead of playing outside. I made some progress on processing last night’s data, but not as much as I’d hoped. Somehow the software for reading the video files was no longer on my laptop, and it took some time to figure out that problem. I did get as far as determining the image scale, and thus the field of view, of the cameras. That alone is a big help. Now I have to finish the task of seeing exactly how faint these cameras can go. Right now it looks like the Mallincam HyperPlus and the Watec 120N are tied for the lead, with the Mallincam JR running a close second and the Super Circuits PC165DNR coming in third. But the final word will come from the data itself.
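For anyone curious about the arithmetic, the image scale follows directly from the pixel size and the effective focal length. Here is a minimal Python sketch; the focal length, pixel size, and sensor format below are illustrative placeholders, not my measured values:

```python
# Plate scale and field of view from pixel size and focal length.
# All numbers below are placeholder assumptions, not measured values.

ARCSEC_PER_RADIAN = 206265.0

focal_length_mm = 2800.0 * 0.5   # e.g. 11-inch f/10 SCT with a 0.5x focal reducer
pixel_size_um = 8.4              # typical video-CCD pixel size
pixels_h, pixels_v = 768, 494    # typical NTSC CCD sensor format

# Image scale in arcseconds per pixel: 206265 * pixel size / focal length
scale = ARCSEC_PER_RADIAN * (pixel_size_um / 1000.0) / focal_length_mm

fov_h_arcmin = scale * pixels_h / 60.0
fov_v_arcmin = scale * pixels_v / 60.0

print(f"Image scale: {scale:.2f} arcsec/pixel")
print(f"Field of view: {fov_h_arcmin:.1f} x {fov_v_arcmin:.1f} arcmin")
```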
Saturday, Dec. 8
One of the conclusions that I’ve already reached is in the choice of focal reducers. I’m testing two, one made by Meade and one by Mallincam. As far as I can tell, these two provide essentially the same result. The main difference between the two is in how they mount to the telescope. The Meade unit screws directly onto the back of the telescope and requires a special adapter for a video camera. The adapter I have was specially made for me by friend and fellow occultation enthusiast Gordon Hudson, who lives near Wellington, New Zealand. He might be willing to make a bunch of these, but I’d like a simpler solution. The Mallincam unit screws onto the front of the video camera and is sized like an eyepiece. This one will work with the stock parts that we’ll be getting with the telescope and is definitely the simpler way to go. Almost all of the tests I’m reporting on here were done with the Meade optics, but the results would have been the same with the Mallincam.
I spent most of the day looking at the data. I took some longer video sequences, about 5 minutes long, that were intended as simulated occultation event data. I ran into a snag with breaking these files apart into their component images. The software I was using to do this completely failed with its default values. After a while I was able to discover how to get it to work, but doing so was at the expense of speed. It took my laptop just over an hour to split up a single 5-minute video file (creating over 12,000 individual images). This process went late into the night, and as a result I have still not finished my analysis. I did learn a few interesting things along the way.
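The splitting operation itself is conceptually simple, even if my tool fought me on it. For anyone who wants to try the same thing, here is a minimal sketch using OpenCV in Python; the filename is a placeholder, and this is not the software I was actually using:

```python
# Split a video file into its individual frame images.
import cv2

cap = cv2.VideoCapture("capture.avi")  # placeholder filename
if not cap.isOpened():
    raise IOError("could not open video file")

count = 0
while True:
    ok, frame = cap.read()  # ok is False once the stream is exhausted
    if not ok:
        break
    cv2.imwrite(f"frame_{count:06d}.png", frame)
    count += 1

cap.release()
print(f"wrote {count} frames")
```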
The first thing I looked at was the timing. The video recorder superimposed its clock on the display, and I figured out how to detect when the time display changes. This allows me to accurately time every frame. The video frame rate is supposed to be exactly 30 frames per second. That should mean the time code would change every 30 images, without fail. Well, this is almost true. It is definitely true over a long time span, whether 30 seconds or 5 minutes. But within this time I saw the frame count between time-code changes come in at 29, then a couple of seconds later at 31, which would bring it back to the correct timeline. This is an important detail for when we have real scientific data, but it’s really just a distraction for the camera testing. In the end I had to put this problem aside, and I’ll have to get back to it at some later date.
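To give a flavor of the frame-counting test, here is a rough sketch of one way to do it: watch the region of the image where the clock is drawn and count frames between changes. The region coordinates and the change threshold are assumptions that would have to be tuned to a real recording:

```python
# Count how many frames elapse between changes of the superimposed clock.
import cv2
import numpy as np

ROI = (slice(440, 470), slice(100, 400))  # rows, cols of the clock overlay (assumed)
THRESHOLD = 10.0                          # mean pixel difference counting as a change (tune)

cap = cv2.VideoCapture("capture.avi")     # placeholder filename
prev_roi = None
frames_since_change = 0
intervals = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    roi = gray[ROI].astype(np.float32)
    if prev_roi is not None:
        frames_since_change += 1
        if np.abs(roi - prev_roi).mean() > THRESHOLD:
            intervals.append(frames_since_change)
            frames_since_change = 0
    prev_roi = roi

cap.release()
# A steady 30 means the clock and the video rate agree; occasional 29s
# and 31s are the drift described above.
print(intervals)
```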
The second thing I looked at was whether I can tell when the images change in the video. All of these cameras have a feature that allows reaching fainter stars than a simple video camera can. This is known as either frame integration or “sense-up”, and the setting is expressed as a number of frames, like x2 or x16 (usually a power of 2). A normal video camera will spend 1/30 of a second looking at the scene before the recorded light must be converted onto the video output. So if you integrate two frames (x2), it is like taking a longer exposure: in this case, 1/15 of a second between changes of the video output signal. Now, the video signal cannot change its speed. Thus, a camera that does this must duplicate the integrated frame on the video output so that the video stream still carries 30 frames per second. When you look at the video on a monitor, the images clearly don’t change as rapidly as you go to ever larger integration times.
For our project, we need to be able to work with fainter stars. I have estimated that x16 is the longest we can go (that would be an exposure of 16/30 of a second, or just longer than half a second). I’ll write more about this limit later. Here’s the problem: when analyzing the video image stream, you must know when the image changes, since this tells you where the frame integration starts and stops. If you don’t know the boundaries, your time can be off by up to the frame integration time, and that is not good for us. Now, I know that I can see this effect by looking at the monitor, but the trick is to teach the computer to recognize the transition between frames. Sure, I could do it by hand, but if I want to know when every transition occurs, that would take me a very long time.
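The arithmetic behind that limit is simple enough to show directly. Assuming the nominal NTSC rate, the effective exposure, and therefore the worst-case timing error when the frame boundaries are unknown, works out like this:

```python
# Effective exposure time and worst-case timing uncertainty per sense-up setting.
FRAME_RATE = 29.97  # NTSC frames per second (30/1.001)

for n in (2, 4, 8, 16):
    exposure = n / FRAME_RATE  # seconds the sensor integrates per output image
    # With unknown frame boundaries, an event time can be wrong by up to
    # one full integration period.
    print(f"x{n:2d}: exposure {exposure:.3f} s, worst-case timing error {exposure:.3f} s")
```

For x16 this gives about 0.53 seconds, which is why pinning down the frame boundaries matters so much.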
Most of my day was spent devising algorithms to sense the frame change. I have not solved this problem to my satisfaction yet, and that’s very frustrating. On the other hand, my efforts have at least identified a key issue for these cameras, and one that must be solved for future scientific use of frame-integrating video cameras for occultations. Without getting into all the details, I came up with something that works reasonably well on some of the data I collected. The results surprised me. My baseline camera, the Watec 120N+, works the way I expect. Every 16 frames the image does indeed change. Depending on the data, I can see changes more often than that, but in my best example I only have a 2% false-trigger rate (that’s where I think the frame changed but it didn’t). None of the other cameras were as easy to understand. The Mallincam Hyper Plus was similar but had a higher false-trigger rate. With the Mallincam JR and the PC165DNR, it has so far proven impossible to tell when the frame changes. The techniques that worked on the Watec will either say the frame never changes or that it is always changing. Clearly, there is something about these newer cameras that I do not understand. I either have to decipher what’s going on from the data or talk to the manufacturer to get an explanation.
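For the record, here is a sketch of the simplest family of approaches I mean: whole-frame differencing, where duplicated frames should differ only by recording noise and new integrations should stand out. This is an illustration of the idea, not the exact algorithm I used, and as noted above it does not work on every camera:

```python
# Detect where the integrated image actually changes in the output stream.
import cv2
import numpy as np

cap = cv2.VideoCapture("capture.avi")  # placeholder filename
prev = None
diffs = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if prev is not None:
        diffs.append(np.abs(gray - prev).mean())
    prev = gray

cap.release()

diffs = np.array(diffs)
# Duplicated frames should sit near the noise floor; flag differences
# well above it as candidate frame changes.
threshold = np.median(diffs) + 5.0 * diffs.std()
changes = np.flatnonzero(diffs > threshold) + 1
print("frame-change candidates:", changes)
print("spacing between changes:", np.diff(changes))  # ideally constant, e.g. 16 at x16
```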
At this point I have to switch gears and work on other projects for a while. I’ll revisit the data analysis in a few days but at least I know that I have the data I came to Arizona to get.
Marc,
I had not looked at your blog for a few weeks, but ran through all of the postings today.
A couple of comments on the latest posting. First, the correct NTSC frame rate is 29.97 frames per second, which is equivalent to 1001 milliseconds per 30 frames. The difference from 30.0 per second was introduced to avoid 60-cycle interference problems.
Second, the problems you are having sorting out how the time changes when using the integration features implies that you are not using video time insertion. If that is correct, let me simplify life for you by loaning you an IOTA-VTI for an indefinite period. Whether you ever buy one or not (and I expect you will) it would be my pleasure to assist this early phase of your research by making this extra tool available to you now. It will allow you to make a nearly-direct measurement of system time response.
Walt
You are absolutely right that the time “drift” I was seeing is a consequence of a poor time hack provided by the DVR. As you say, the right way to do that particular test is to use a GPS-based video time inserter. It will comfort you to know that the IOTA-VTI is definitely on my list of parts to acquire for the network. I’ve just been studying the components that I’m worried about. As it happens, I already have a different time box, but I didn’t run it due to some power-distribution issues (I was out of 12V sockets). Still, it never really was my intention to get serious about absolute time on this test. In reality, I was just getting to know the data and didn’t have a conscious goal other than to see if I could take a video stream and understand what I was seeing.
In the process of my tests I think I have run into a serious issue, and that is how to know when you are seeing a “new” frame on the video stream. The frames should be replicated by a deterministic and constant amount, and tying that knowledge to the time from the IOTA-VTI is the key to getting good timing out. I’m rather surprised that it is not easy to see when the images change, and that makes me think I don’t understand how the video/image data is being processed prior to being placed on the video output stream. The difficulty with finding these changes varies between cameras and is most likely also affected by the quality of the recorder. The PC165DNR is the worst and the Watec is the best, while the Mallincams fall in the middle.