This is the second half of part 4 of my Keck NIRC2 observing log. (See Part 1, Part 2, Part 3, and Part 4a.)
The night you’re scheduled is the night you have, and all that matters is what the weather and conditions are while your data is being taken. Sometimes you get the only good night in a month of observing runs, and sometimes you get the day it rains. You might have awesome weather and the instrument malfunctions, costing you hours as the operator and support scientist try to troubleshoot it. I’ve spent my share of time staring at humidity sensors, trying to will them down below the threshold for opening the dome, to no avail. You have to deal with whatever the night throws at you, whether it’s bad weather, amazing conditions, or instrument malfunctions. If I don’t get the conditions I want, tough luck: that’s it for the semester, and I’ll need to try again during the next call for proposals. That’s the gamble of an observing run, and part of the reason I love it.
So I started the night on really bright targets, sorting out the instrument commands, forgetting and remembering to open the camera shutter, and working out the appropriate dither command and offsets while I got situated, because the Kepler field wasn’t high enough in the sky yet. When I got onto the Kepler field, I took a first test image to figure out how long the integration should be. I’m in the linear regime (which means what it sounds like: it’s the range on the detector where each photon hitting the camera CCD generates X electrons. If there are too many counts already on the detector, the CCD switches regimes and no longer generates X electrons per photon, which means you don’t know how many photons from the star you actually received), so if I want double the counts I double the exposure time. I do that and I saturate the detector (so many electrons pile up in a given pixel that they spill over onto the surrounding pixels). Um…. then I go back down by a factor of 4 and I get the same peak counts I got with my first exposure (which was twice as long! What gives?). I call over Marc, my support astronomer, because something’s up. (He was awesome and really helped me get familiar with the instrument and understand what the AO system was doing given the conditions. Thanks Marc!) Either I’m doing something wrong or something’s up. We go through the same logic. Okay, let’s double the exposure. Now we get the same peak counts as the previous exposure that was half as long. What’s going on?
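To make that expectation concrete, here’s a rough sketch of the scaling I was assuming (the numbers are made up for illustration, not actual NIRC2 values): in the linear regime, peak counts should scale directly with exposure time, so you can predict what the next frame should give you.

```python
# Rough sketch of the linear-regime expectation (illustrative numbers only,
# not actual NIRC2 values): peak counts should scale with exposure time.

SATURATION = 10_000  # approximate level where the detector stops being linear (assumed)

def predicted_peak(test_peak, test_exptime, new_exptime):
    """Expected peak counts for a new exposure time if the detector response is linear."""
    return test_peak * (new_exptime / test_exptime)

# e.g. a 5 s test frame peaking at ~1500 counts
peak = predicted_peak(1500, 5.0, 10.0)  # ~3000 counts expected from a 10 s frame
if peak > SATURATION:
    print("Shorten the exposure: the predicted peak would saturate")
```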
It was perfectly clear at the summit above Keck last night, but the turbulence in the atmosphere made it a constant struggle. The adaptive optics system corrects for the effects of the atmosphere and reduces the smearing of the star on the CCD detector, but if your seeing (how big the stars appear in your images due to the atmosphere smearing them out) is changing faster than the AO system can keep up with, and over a wide range, the AO struggles. The wavefront sensors measure how the light is hitting the telescope and direct the mirror to deform to compensate, but by the time that correction is applied the seeing is already something different, so you get a lag. I still get a correction, but it means the counts in the pixel wells of the CCD that the camera actually measures are changing rapidly as the size of the star oscillates back and forth. Not good news.
Here’s the seeing monitor from Mauna Kea for last night. You can see it was rapidly oscillating and heading towards big values. Average seeing on Mauna Kea in the optical is ~0.8 arcseconds.
Red and blue points are from different elevations, and you can see how spread out the points are over a short period of time. The absolute values aren’t necessarily what we measure at the telescopes, but we see the same relative changes.
I’m the observer, so it’s my call what to do. As Marc is explaining that this could be AO lag plus the fact that the seeing is just rapidly changing, I look at my target list notes and see that this Kepler star is in the mid range of the Kepler Input Catalog magnitude range. It’s 13th magnitude (it was high priority), but I have a few brighter stars that are 10th magnitude (in astronomy, lower magnitude equals brighter star). The AO system is using the target itself to do the corrections, so if it’s not getting enough photons to adjust quickly, maybe going to a brighter star will help. So after a bit of playing around with different setups, I make the call to move to a 10th magnitude star on my list.
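For a sense of scale (this is just the standard magnitude-to-flux relation, nothing specific to my targets): each magnitude is a factor of about 2.512 in flux, so a 10th magnitude star delivers roughly 16 times the photons of a 13th magnitude one, which is a big difference for a wavefront sensor trying to keep up with rapidly changing seeing.

```python
# Standard astronomy magnitude-to-flux relation (nothing instrument-specific):
# each magnitude step is a factor of 100**(1/5) ~ 2.512 in flux.

def flux_ratio(m_bright, m_faint):
    """How many times more photons the brighter (lower-magnitude) star delivers."""
    return 10 ** (0.4 * (m_faint - m_bright))

print(flux_ratio(10, 13))  # ~15.8: a 10th-mag star is ~16x brighter than a 13th-mag one
```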
The same thing is happening, but we’re getting better counts and the AO system seems to be behaving a bit better. So that’s it. My observing plan is out the window. I decide targets aren’t going to get observed by priority but in order of magnitude, working my way from brightest to faintest, going from 10th, then 11th, to 12th, and knowing I’m probably going to skip all the 13th and 14th magnitude stars. I had been mulling over getting two colors (J and Ks) for each target before the start of the night. I ultimately decide to only get Ks, and to take J only if I see a faint companion in the image. The conditions could get better, and then I’d be in business; if they do, I’ll move down my list to fainter targets and adjust where I’m pointing Keck next. The seeing did improve in value (it was still varying by the same amount), so I could get 11th and 12th magnitude stars later on before it went back to being bad right before the Kepler field set.
I can take lots of short exposures and read each one out, which carries a big readout overhead, or I can take a bunch of short exposures and coadd them together into a single readout. I go for the latter after consulting Marc. Saturation is at around 10,000 counts, so I decide that if I can keep the counts around 2,000, then even if the seeing is causing fluctuations by a factor of 2-3, I’m still well within the linear regime and can use the observation. I am gambling a bit in that if one of the exposures that gets added together to make the coadd is saturated, I’ve ruined that entire image, but I can check to make sure the counts are what I expect for the peak counts I’m aiming for.
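The back-of-the-envelope version of that gamble, using the same rough numbers as above, looks like this:

```python
# Rough check of the coadd gamble using the approximate numbers from the text.
SATURATION = 10_000   # approximate saturation level in counts
TARGET_PEAK = 2_000   # per-exposure peak counts I'm aiming for
WORST_SWING = 3       # seeing-driven fluctuation factor in peak counts

worst_case = TARGET_PEAK * WORST_SWING   # 6000 counts
print(worst_case < SATURATION)           # True: even a 3x spike stays in the linear regime
```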
I also decide to do overkill on these targets: take 3 times as many exposures+coadds at each position of the 3 point dither and pound on the targets (sometimes repeating the dither pattern again) to try and get useful photons. This is because if there is no contaminating faint star visible, we want an estimate of how bright a companion we could still have detected. I’m already getting lower counts than expected, and the seeing is smearing the stars out over more pixels with additional read noise and sky background, so that decreases my sensitivity. I don’t want to find out that all my observations were useless because they didn’t go deep enough to detect any possible stellar contaminators around these stars.
At this point, all I can do is crank the music up, drink more caffeine, and fight on through the rest of the night. For every target I spend several minutes taking test exposures, getting a feel for the fluctuation in counts, trying to get the peak counts into my goal of ~2000-3000 counts, and making sure the exposure doesn’t seem to be giving me counts in the dangerous non-linear regime. Then I take the coadded exposure, look at the counts, and check whether the peak counts divided by the number of coadded exposures gives me back around ~2000-3000 counts; if not, I need to readjust. I also look at the shape of the star (the point spread function, or PSF): if one of the images saturated, it should start looking funky.
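That per-frame check is simple arithmetic; here’s a sketch of it (the measured coadd peak here is made up for illustration):

```python
# Sketch of the per-coadd sanity check (the measured peak value is made up).
GOAL_LOW, GOAL_HIGH = 2_000, 3_000   # target per-exposure peak counts

n_coadds = 10
coadd_peak = 24_500                  # hypothetical peak counts measured in the coadded frame

per_exposure_peak = coadd_peak / n_coadds   # ~2450 counts per exposure
if not (GOAL_LOW <= per_exposure_peak <= GOAL_HIGH):
    print("Readjust the exposure time before the next dither position")
```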
I can’t help going back and trying to think about where I wasted time, what I should do differently next time on NIRC2, and how to be better for the next observing run. Ultimately I made a decision on what to do for the observing scheme. Some of that came from gut feeling built on my past observing experience, and some from things I’ve learned watching the more senior observers who trained me when bad nights happened and things went wrong, and from asking questions during those runs.
But did I make the right call? Did that give me anything useful? I think there are moments I won the battle (but not the war) and the counts are linear, but only fully reducing the data will tell. I’m planning on trying to do that myself, so it will take some time. I know there’s at least one source with a neighboring star roughly 10% fainter than the Kepler star that I could see at one point in the night, and I managed to get observations in both filter bands so we can use the color to estimate the contaminator’s brightness in the Kepler magnitude. So I’m hoping those observations will be useful.
It’s one of those frustrating nights where all you can do is keep collecting photons and try to deal with Mother Nature the best you can. A big thank you to my operator Joel, who was super knowledgeable, happily answered questions, and really helped make things go smoothly once I was on my own despite the variable weather conditions.
Ultimately, we’ll have to wait and see what the reduced data looks like.
PS. Chris posted my tour of Keck Remote Ops II yesterday, if you want to see what it’s like in Keck HQ.