Live from the Hubble Space Telescope
UPDATE # 5
PART 1: Status on Teacher Kits
PART 2: Las Cruces participants needed to meet Dr. Tombaugh
PART 3: WebChats to share ideas
PART 4: Volunteers wanted
PART 5: An average day in the life of an astronomer (me)
PART 6: Job descriptions from the Hubble Team
PART 7: The mystery of decreasing light
A Teacher's Kit is available to help better integrate the material of this project into classrooms. The Kit includes a 48-page Teacher's Guide with helpful background information and lesson plans/activities for science classes and interdisciplinary work. The Kit also includes a full-color Hubble poster, prints of planets and galaxies, color filters, a diffraction grating, slides of Neptune and Pluto/Charon, UV-sensitive beads and other hands-on materials. A copy of NASA's Space-Based Astronomy booklet, which contains many additional classroom activities, is also included.
To order this kit, send a $10 check or money order to "Passport to Knowledge - LHST", PO Box 1502, Summit, NJ 07902-1502. Please include your name, complete mailing address, position and grade level taught, and how many students you expect to reach.
For those who have already sent in their $10, your kit either was mailed (via Priority Mail) last Friday or Saturday, or it will be mailed this Tuesday or Wednesday. So in all cases you can expect your kit by the end of this week.
Jan Wee writes, "If you are located in the Las Cruces, New Mexico area and are actively participating in the Live From the Hubble Space Telescope project, please contact me ASAP. I would like to share a unique opportunity ... we will be presenting our special birthday greetings to Dr. Clyde Tombaugh on Friday of this week (Feb 23), and we would like to have some students involved in the presentation."
Call Jan Wee at 608-786-2767 for further details or send email to firstname.lastname@example.org
Our regularly scheduled chat time online is continuing. Some folks have gotten lost on the way to the chatroom by entering through a past project. We've taken steps to prevent that from recurring; sorry about the confusion.
Our goal is to gather some feedback from folks about this project and to address some of your concerns. We are particularly interested in chatting with teachers who are planning to use the LHST material with their classes, but everybody is invited. Soon we will announce opportunities in March/April to introduce your students to some Hubble Team experts. But for now, consider this a teacher-focused event.
So please visit Jan and Marc online each Tuesday from 3:00-4:00 PM (Pacific time). We'd love to have you loiter with us. If the time is bad, please send email to email@example.com and suggest an alternate time.
During past projects, we have received comments that some of the updates are too long or that some vocabulary/concepts are too difficult for the average middle schooler. So for this project, in addition to the regular Field Journals, we will be offering an easier-to-read version geared towards an average 5th/6th grader's interests and vocabulary. These messages will be distilled from the regular messages. I am looking for a few volunteers who would be willing to produce these reports. These folks should have a clear understanding of 5th/6th grade reading and comprehension skills. I expect to begin these reports in about two weeks. Volunteers would be expected to write no more than one report per week from an existing Field Journal. If you are interested, please send a note to me at firstname.lastname@example.org. Thank you so much.
Directions for receiving these so-called Junior Journals will be provided in a week or so.
February 13, 1996
I drop Michaela off at daycare. I usually spend about 15 minutes there unpacking and talking to Ann, the supervisor, about Michaela's morning so far and any concerns we might have. Finally I'm commuting to the Space Telescope Science Institute (STScI), about 20 minutes away by interstate. I'm at work by 10 am.
I go directly to my office and log in to my workstation. One of my daily tasks is to monitor the activity of the archive "hotseat", an email address that astronomers can write to with questions regarding the Archive, where all of the data from HST is stored. Most of this data can be accessed by anybody who wants to. Part of my job is making sure that the scientists and educators who need the data for professional reasons can get it fairly easily. There are usually about 20-30 email messages waiting for me, mostly messages asking for help with the Hubble Space Telescope (HST) data archive. I page through those, scanning for the ones with science content. I read the posted replies to all messages; since I just started this job, I don't know the answers to all of the questions. Often I also have email from collaborators who want some images or plots, and I zip those files off as soon as I can (or I forget). It often takes me an hour to sort through email in the morning.
Depending on my deadline and meeting schedule, I try to block out my time for the rest of the day in as large chunks as possible. The tasks that I have include: calibrating and analyzing telescope science data, writing and researching papers and proposals for future work, calling and emailing other scientists with questions, meeting and guiding the work of a science data analyst, and what is called my "functional work." I spend about half of my time on this kind of work.
I don't mind this part at all. The "real" part of my job is to enhance the scientific usefulness of the data archives. All of the data that HST sends back to Earth is stored on optical disks at the Institute. Eventually, all of this data is made public, available to anyone who wants to use it. But data is worthless if no one knows how to access it or use it. Most of the people who work with the archive are computer and software experts who know a lot more than I do about databases and data storage. But they are not as familiar with the science and how an astronomer might want to use the data. That's where I come in. Since I actually use HST data for my own research, and I plan to study more as it becomes public, I can be an interface between the computer scientists and the astronomy professional. Right now I'm helping develop a user survey and I'm updating the archive manuals. I'm also involved with expanding the archive to include other sources of astronomical data, like the 10-meter Keck telescopes on Mauna Kea in Hawaii. I am not directly responsible for any of the day to day activities regarding the care and feeding of the Hubble Space Telescope.
The other 50% of my job is occupied by scientific research. I am going to repeat a little of what I said in my bio. I study clusters of galaxies in order to learn about the origins and contents of the Universe (cosmology) and to study how galaxies evolve in dense environments. I use a lot of tools to do this, from space telescopes like HST to ground-based telescopes in Arizona and Chile, to computers to analyze data and to construct software-based models of clusters of galaxies. I don't just use HST, because HST only gives me one way of looking at clusters of galaxies. HST is very important because it is the only telescope which can see the structure of very distant galaxies. From the ground, distant galaxies look like fuzzy blobs; above the atmosphere, HST shows that many of these fuzzy blobs have disks, spiral arms, regions where massive stars are forming, just like in galaxies near us. The difference is since these galaxies are very far away, we are seeing them as they were many, many years ago. Looking at them is like looking into the Universe in the distant past. I use other telescopes, like orbiting X-ray telescopes, because clusters of galaxies also have huge amounts of very hot gas in between the galaxies, but bound to the cluster along with the galaxies by gravity. The only way to see this gas is by using telescopes and detectors sensitive to X-rays.
Deadlines place the biggest pressures on my job. So far I am most familiar with the scientific deadlines. For example, each observatory accepts proposals for the use of their telescopes about once or twice a year. I have access to all the orbiting telescopes (like HST), and to ground-based observing at the national observatories at Kitt Peak in Arizona and Cerro Tololo in Chile. The proposals contain a description of the project that I'd like to do, why it's an important project, and why it's possible to do it at this or that telescope and instrument. Sometimes the proposals also request money, for travel to the telescopes, for presentation of research at professional meetings, or for publications. The biggest proposals also ask for money to support salaries. I don't need money for my salary, but if I'd like to support a data analyst or a post-doctoral researcher, I must get funds. These proposals are entered into a competition with other proposals for the same telescopes and money, so they must be good and well-written to succeed.
As a scientist, I am evaluated by my papers, so I must both write papers AND write good papers. Right now I have several projects going in various stages of completion. Since I was pregnant with Michaela, I knew I wasn't going to have a lot of new data to analyze, so I made myself write up a backlog of projects, rather than start new ones. So right now, I'm NOT in the midst of writing a paper, but I am planning a new paper with my husband. He worked on the theory and I planned the experiment and took the data, so we're both going to write and submit the paper. We have done this several times before; we're a good team.
My Normal Day usually ends around 6 (Mark picks Michaela up). I try to run every other day at least. If Michaela is feeling well, she goes to bed all fed, bathed, and warm around 9-9:30, leaving me some time to do more reading (recently I had to read 24 NSF proposals in order to evaluate them, and report to a proposal-evaluation panel.) It requires a lot of discipline on our part to give Michaela the time she needs, and work, and do minimal health-maintenance, but so far we're doing ok. As I have alluded to earlier, all bets are off when Michaela is sick because she can't go to daycare and she isn't so predictable. Then I'm glad I'm in a position where either Mark or I can take time off to take care of her. In a most sticky crunch, one of us can even sub for the other.
The highest stress time (but also one of the most emotionally rewarding times) is observing at a ground-based telescope. I'll write about that in a future journal, because it is a rather special time for astronomers.
The most fun part of my job consists of puzzle solving. When something goes wrong with an exposure, part of my job is to track down the problem as quickly as possible and determine how to fix it. Sometimes the problems are with the equipment on the Hubble, and the trick is to find a way to work around the failure. Other times the error is in the programs I write, and I have to find and correct the problem so the failure doesn't happen again (sort of like doing a math problem wrong on your homework, and the teacher makes you do it again to get it right!). All of this activity is done while working with other people (experts in their own areas of responsibilities) to make sure that any changes I make don't cause other problems later; working as a team is very important around here!
I am an instrument scientist for the Faint Object Spectrograph (FOS). The Hubble Space Telescope (HST) has two cameras and two spectrographs. A spectrograph is an instrument that breaks light into its individual colors, just as a prism does, and then measures the amount of light in each color. A spectrograph, however, breaks the light into much finer components than the seven colors we see in a rainbow. This spectrograph can observe objects both in the visible (colors that we can see with our eyes) and in the ultra-violet (UV). Why do you think it is possible to observe in the ultra-violet with the HST since UV radiation is blocked by the earth's atmosphere?
As an instrument scientist I have to make sure that my instrument is working properly, so that other astronomers can use it to study their favorite objects. Since I know how the instrument works, I also help other astronomers to prepare their observing strategy, i.e. a plan on how to conduct their observations with the FOS. After their observations have been completed I help astronomers to analyze their data.
Today, I spent the day helping an astronomer (we call them GOs, or Guest Observers) to plan observations of a Seyfert galaxy. A Seyfert galaxy is an unusual kind of galaxy. In this galaxy the nucleus has a black hole which is giving off energy equivalent to about 10^6 suns!!
February 20, 1996
We want to look for ourselves at how these data change with time by plotting the amount of light collected versus time. However, the data from the astronomical object are flux calibrated, i.e. they are in flux units (ergs*cm^-2*s^-1*A^-1) and were collected over a range of wavelengths. All we really want is the total amount of light seen in an exposure period for each exposure; we prefer the units to be in counts (number of photons) or count rate rather than in flux units, mostly because the numbers are easier to work with. Flux units have a physical meaning and are "instrument-independent" and are therefore useful for comparing observations from different instruments. Count rate is "instrument-dependent", i.e. different instruments will give a different count rate for the same object, as if the instruments speak different languages. For this reason, we calibrate the instruments: to make them speak the same language. Anyway, since we are looking at information from the same instrumental set-up over time, we don't need to convert to flux.
It would take a long time to run the calibration software on all the data files to convert the data back to counts or count rate, so I want to know if I can get close enough to total count rate by multiplying the total flux sum by a constant. The constant will be the average sensitivity for the grating used over the wavelength range of the observations because the way to convert data to flux units is to divide by the sensitivity curve. I can get that constant either by calculating it from the calibration file for absolute sensitivity or by using a value out of the instrument handbook.
I want to do a test to see if multiplying by this average sensitivity is good enough, so I am going to re-calibrate one of the files, turning off the step that converts to flux units. Then I will sum all the pixels in each group and compare the results of a few groups with the result of the total flux * average sensitivity. Actually, I will probably perform the equation on all the total flux points and plot both sets of data to see how they compare.
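The shortcut being tested here can be sketched in a few lines of Python. This is only a rough illustration with invented numbers; the real data live in GEIS files and are handled with IRAF/STSDAS tasks.

```python
# Sketch of the approximation test, with invented numbers.
# Calibration divides count rates by the sensitivity curve to get flux,
# so multiplying summed flux by an average sensitivity should roughly
# recover the summed count rate.

sensitivity = [6.0e12, 7.0e12, 8.0e12]   # sensitivity at each pixel (hypothetical)
counts      = [120.0, 150.0, 130.0]      # count rate at each pixel (hypothetical)

# Forward calibration: flux = counts / sensitivity, pixel by pixel
flux = [c / s for c, s in zip(counts, sensitivity)]

mean_sens = sum(sensitivity) / len(sensitivity)

true_total_counts   = sum(counts)
approx_total_counts = sum(flux) * mean_sens

print(true_total_counts, approx_total_counts)
```

The two totals agree closely when the sensitivity curve is fairly flat over the wavelength range of the observation, which is exactly what the re-calibration test checks.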
Hmmm, they don't even look close...I better examine my constant more closely. Ah, wait, I see. The observations were taken in the Small Science Aperture (SSA) and the constant I used from the handbook was for the Large Science Aperture (LSA).
Since the handbook does not list a sensitivity value for the SSA, I will have to calculate one. First I will find out the range of wavelengths for the data by generating statistics on the wavelength file, whose extension is '.c0h'. I only need to look at the minimum and maximum of the wavelengths and I only need to do one group because all the groups contain the same wavelength information. 'gstat' is a task in a software package called STSDAS (which runs under yet another software package called IRAF) and it performs statistics calculations, such as mean, median, sum, standard deviation, etc. on each group in an HST data file. The input to gstat is 'z*.c0h': I used a wildcard ('*') in the name of the data file because it is the only file in that directory with a '.c0h' extension. I use the 'fields' parameter to limit the output to minimum and maximum and I use the 'group' parameter to limit the output to the 1st group because I know all the groups will contain the same wavelength information.
cl> gstat z*.c0h group=1 fields="min,max"
# Image Statistics for z555010bt.c0h
# GROUP       MIN       MAX
        1   1162.06   1448.45
GHRS observations consist of many types of data images. Without going into all the details, let me just say there is raw science data and calibrated science data. Part of the calibration involves assigning wavelengths to the data points of the calibrated data. The wavelengths are kept in a different file than the science from the object being observed. Silly, I know, but that's how someone decided to do it. Additionally, there can be many groups of data in one file. For this observation, the groups can be roughly correlated with time so that if we sum up all the data in each group and then plot the results sequentially, it is like seeing how the data changed with time. There is always a corresponding group in the wavelength file for each group in the science data; as I said, all the wavelength groups are the same, in this case.
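The group-summing idea described above could be sketched like this in Python (the numbers are invented; the real groups come from the multigroup science file):

```python
# Each observation file holds many groups of data, and the groups are
# roughly ordered in time. Summing the data in each group gives one
# light measurement per time step. Numbers here are invented.

groups = [
    [10.0, 12.0, 11.0],   # group 1: counts per pixel
    [ 9.0, 11.0, 10.0],   # group 2
    [ 7.0,  8.0,  8.5],   # group 3
]

light_curve = [sum(group) for group in groups]

# Plotting light_curve against group number (a proxy for time) shows
# how the amount of collected light changed during the observation.
print(light_curve)  # [33.0, 30.0, 23.5]
```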
So, from the MIN and MAX columns in the above gstat results, I know the range of wavelengths for the data. Next I extract from the calibrated science file (*.c1h) the names of the absolute sensitivity file (abshfile) and the absolute flux wavelength file (nethfile) used to change the count rates to flux. Just like the science data, the wavelengths are kept in a separate file:
cl> hselect z*.c1h abshfile,nethfile yes
zref$e5v09370z.r3h      zref$e5v0936ez.r4h
'hselect' is another IRAF task whose job it is to select information from header files. The input to the task is 'z*.c1h' and the information I want to select is the value of the keywords called 'abshfile' and 'nethfile'. 'yes' just tells 'hselect' to give me the value of the keywords for the input I specified without any other qualifications on which files to work with.
Out of curiosity, I want to see what the range of wavelengths is for the nethfile, so I use gstat again. Since I don't specify the 'fields' parameter, I get the default and since I don't specify 'groups', I get the information for all the groups:
cl> gstat zref$e5v0936ez.r4h
# Image Statistics for zref$e5v0936ez.r4h
# GROUP  NPIX     MEAN    MIDPT   STDDEV      MIN      MAX      SUM
      1   177  1497.92  1497.92  256.198  1057.92  1937.92  265132.
      2   177  1497.92  1497.92  256.198  1057.92  1937.92  265132.
There are two groups for this file: one for each of the apertures. I double check the aperture for the science data and find out which group goes with which aperture for the absolute sensitivity file (abshfile):
cl> hedit z*.c1h aperture .
z555010bt.c1h,APERTURE = SSA
cl> hedit zref$e5v09370z.r3h,zref$e5v09370z.r3h aperture .
zref$e5v09370z.r3h,APERTURE = SSA
zref$e5v09370z.r3h,APERTURE = LSA
'hedit' is just like 'hselect' only 'hselect' puts the output in table format whereas 'hedit' just lists the information in one column.
Then I list out all the pixels in the nethfile. I need to know which pixel in the abshfile corresponds to the wavelengths of the data, so I redirect ('>') the output of the 'listpix' task into the file /tmp/pix.wave:
cl> listpix zref$e5v0936ez.r4h > /tmp/pix.wave
cl> less /tmp/pix.wave
By examining the above file with a paging program called 'less', I find that pixel 22 is close to the minimum wavelength of 1162A (A means Angstroms, or 10^-10 meters) and pixel 79 is close to the maximum wavelength of 1448A. Therefore, I want the mean value of the sensitivity in the abshfile between (and including) those pixels. I don't really need the statistics for both groups, so I only do the one relating to the SSA:
cl> gstat zref$e5v09370z.r3h[22:79] group=1 fields="mean,stddev"
# Image Statistics for zref$e5v09370z.r3h[22:79]
# GROUP        MEAN      STDDEV
        1  7.1690E12  2.0795E12
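The pixel-bracketing step (finding which pixels of the reference file span the data's wavelength range) could be sketched in Python like this. The wavelength grid here is invented; the real values come from 'listpix', and note that IRAF numbers pixels from 1 while Python indexes from 0.

```python
# Hypothetical wavelength of each pixel, evenly spaced for illustration
wavelengths = [1150.0 + 5.0 * i for i in range(80)]

wmin, wmax = 1162.0, 1448.0   # wavelength range of the science data, from gstat

# First pixel at or above wmin, last pixel at or below wmax
lo = min(i for i, w in enumerate(wavelengths) if w >= wmin)
hi = max(i for i, w in enumerate(wavelengths) if w <= wmax)

# The mean sensitivity would then be taken over pixels lo..hi inclusive.
print(lo, hi)
```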
When I multiply my total flux values by the mean of the sensitivity (7.2e12) I am very close to getting the same answers as when I re-calibrate the data, turn off the flux correction, and sum the count rates. So now I know I can use my sums of the total flux, multiplied by the mean sensitivity 7.2e12, to approximate the total count rate without having to re-calibrate all the data. Now it is easy for me to plot all of the data over time to see what the trends look like.
And the best part is that by recording this journal for you, I have kept a record for me of what I did to this data :) Recording what you have done is a very important step to follow in scientific work.