Live from the Hubble Space Telescope
UPDATE # 13
PART 1: Our Neptune and Pluto images are available
PART 2: The Hubble team will answer your questions
PART 3: Starting work on the Neptune images
PART 4: Hawks, aluminum, and math: more work on the archive
PART 5: Program and standard stars for the catalog
PART 6: Trying to understand the changing wavelengths
If you saw the live television program on Thursday you already know
that our observations of Neptune and Pluto produced good data. After a
quick first look, it appears that our images contain exciting new science
information. The next five weeks promise to be interesting as we work
with Planet Advocates Heidi Hammel and Marc Buie to process the data and
discover the scientific significance of our images.
These Hubble pictures are available online on the Web. Go to our home
pages and look in any of the following sections for the images:
- Project News
- Featured Events (students do research with the HST)
- Photo Gallery
THE HUBBLE TEAM WILL ANSWER YOUR QUESTIONS
THE HUBBLE TEAM WILL ANSWER YOUR QUESTIONS
The opportunity to send Email questions to the men and women of the
Hubble team is now available and will remain active until early May. We
are grateful to the HST folks for generously volunteering their time to
support this service.
This section will describe some guidelines and procedures for the process.
K-12 students and educators can Email questions to researchers, engineers
and support staff. This interaction will be supported by a "Smart Filter"
which protects the professional from Internet overload by acting as a
buffer. The actual Email addresses of these experts will remain unlisted.
Also, repetitive questions will be answered from an accumulating database
of replies; thus the valued interaction with the experts will be saved
for original questions. (More information about how you can directly search
this database will follow later).
Tips for asking good questions
Each and every expert is excited about connecting with classrooms. But
it is important to remember that the time and energy of these researchers
is extremely valuable. If possible, please review the materials available
online to gain an overall understanding of the basics. It would be best
to ask questions that are not easily answered elsewhere. For example,
"What does the Hubble Space Telescope do?" would not be an appropriate
We recognize that this creates a gray area about whether or not a question
is appropriate. Simply use your best judgment. Since the main idea is
to excite students about the wonders of science and research, please err
on the side of having the students participate. If you are not sure whether
or not to send a question, send it.
Some teachers have used a group dynamic to refine the questions that
they email to experts. For example, after first studying HST material,
students divide into groups and create a few questions per group. All
of the questions are then shared, and students are given an opportunity
to find answers to their classmates' questions. Those that remain unanswered
are sent to the HST team.
Ideally, the act of sending questions will further engage the student
in their learning. It may help to think back to an early stage of development
when a 3-year-old learns that repeating the word "why" can get parents
to do most of the work in a conversation. The wise parent will try to get
the child involved by asking "why do you want to know?" The same is true
in the classroom. Teachers might want to help students learn to ask good
questions. Here are three questions the students might ask themselves
as they submit their questions:
What do I want to know?
Is this information to be found in a resource I could easily
check (such as a school encyclopedia)?
Why do I want to know it? ("What will I do with the
information?" or "How will I use what I learn?")
The last question is the most interesting. Student reflection on why
they want to know something is a very valuable learning experience.
Logistics of sending in questions (address and format)
Questions will be accepted from now through May 7, 1996. To submit a question
during these times, mail it here.
We will acknowledge and answer all questions as quickly as possible.
Our goal is to provide a basic acknowledgment immediately. In most cases
we should be able to provide an answer within ten days to two weeks.
In the subject field, please put the letters "QA:" before a descriptive
subject. Also, provide a sentence of background information to help the
experts understand the grade level of your students. The following example
should illustrate this idea.
FROM: your Email address
SUBJECT: QA: Hubble mission control
I am an 8th grader from Biloxi, Mississippi. In the television program,
it seemed like there were a lot of people in the control center to operate
Hubble. How many people normally work in this room?
Thanks, Sophie Jackson
One question per message
If you or your class have several questions which are unrelated, we
ask that you please send each unrelated question in a separate Email message
rather than as one message with many different questions. While this may
be inconvenient, it is important because it will help us to keep track
of the questions and ensure that no question remains unanswered. Messages
that do not follow this request will be unnecessarily delayed as we go
through the extra step of splitting up the messages ourselves.
Twenty question limit
Any individual teacher will be limited to submitting a total of twenty
(20) questions during the life of the project. Hopefully this will encourage
more classroom discussion about what students want to know and will lead
to research done before asking questions.
Browsing answers to questions already asked
An archive of question/answer pairs of previously asked questions will
be maintained. This archive is readily available at this address.
Searching question/answer pairs
A capability to search for interesting question/answer pairs is also
available via Email. The system relies on the user choosing one or more
keywords related to their interest. Every existing question/answer pair
will be searched to see if it contains the keywords. Messages with the
keywords will be returned via Email. To initiate a search, send a mail message here.
In the message body, simply include the keywords (multiple keywords
should be separated by spaces).
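A minimal sketch of how such a keyword match might work (our illustration only; the actual search system's code and data format are not described in this update):

```python
def search_qa_pairs(qa_pairs, keywords):
    """Return every question/answer pair containing all of the keywords.

    qa_pairs: list of strings, each holding one question/answer pair.
    keywords: list of keyword strings, as typed in the message body.
    Matching is case-insensitive, as a search service typically would be.
    """
    words = [k.lower() for k in keywords]
    return [pair for pair in qa_pairs
            if all(w in pair.lower() for w in words)]

# A tiny hypothetical archive, just to show the mechanics.
archive = [
    "Q: How many mirrors does Hubble have? A: Two: a primary and a secondary.",
    "Q: What camera takes most pictures? A: WFPC-II, the Wide-Field and Planetary Camera.",
]
matches = search_qa_pairs(archive, ["camera"])
```

Sending several keywords simply narrows the search, since a pair must contain all of them to be returned.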
Receiving all question/answer pairs as they get created
A capability has been set up for those people that would like to receive
ongoing Email with answers to all of the questions asked. Each night,
one mail message will be sent to those interested. This message will contain
a copy of every question/answer pair generated that day. If you are interested
in this feature, send an Email here.
Leave the subject blank and in the message body, write the words:
STARTING WORK ON THE NEPTUNE IMAGES
Heidi Hammel and Wes Lockwood
March 15, 1996
This is Heidi and Wes, writing from MIT on Friday afternoon. You already
know me (Heidi) and Wes Lockwood is a scientist from Lowell Observatory
in Flagstaff, Arizona. We have worked on Neptune data analyses together
for many years. The next series of journals will document how we analyze
the new LHST Neptune images. These will be analyzed right along with some
of our older images, taken in 1994 and 1995. Much of the processing is
the same, and we hope to include the LHST images in the paper we are writing
about Neptune right now.
Today, we downloaded the images from Space Telescope Science Institute:
this means we transferred them over the Internet from a disk in Baltimore,
Maryland, to a disk attached to Heidi's work station at MIT in Cambridge,
Massachusetts. This took about three hours! They are big images! But most
of the image is just sky. The pictures on the LHST web pages were clipped
out to show just Neptune.
Heidi also installed a new disk on her computer, to hold the new data
and the work that we expect to do over the next two weeks. While she was
doing that, Wes reviewed the work we had done the last time we were together,
which was just before Christmas.
We took a quick look at our new images - they're great! The first thing
we noticed was that the clouds were very different from the way the planet
looked in September. The general banded structure (the stripes, like on
Jupiter) looks pretty much the same, but the brightest cloud is now in
the south - not the north! That was a big surprise.
We have a lot of complicated work to do, both on these new LHST data,
and the other data. So we made a To-Do list, and here it is:
* Go over previous to-do list (18 Dec 1995), and document what we have done
* Put disk directories back in order since addition of new disk
* Track down Amanda's email about HST ephemerides
* Track down Mert Davies 1994 reference for Neptune ephemeris
* Get 1994 and 1995 disk-integrated photometry from Lowell
* Refresh memory of solar cycle phase vs. Neptune outbursts - is 1994 a burst?
* Assemble planet center table
* Assemble observation logs table
* Write up navigation procedure
During the next few weeks, we will keep you updated on our progress
as we work.
Heidi and Wes
HAWKS, ALUMINUM, AND MATH: MORE WORK ON THE ARCHIVE
March 8, 1996
I had just packed away my wife's snowboots (we're moving at the end of this
month) and this morning it snowed. Two inches or so, but enough to mess
things up and close some of the schools.
We think the hawks are back! Last year we had a small family of red-tailed
hawks that lived in a nearby park. A couple times a week they would come
over onto the campus and go hunting. We'd see them perched halfway up
a tree, or up on the gutters just below the roof line of the building.
The female was pretty big, the male was about the size of a large crow,
and the juveniles (there were one or two, we were never sure which) got
to be the size of the male by the end of the summer. A couple of people
think they saw the female return in the last few days. I've been keeping
an eye out for her.
Today I'm doing two things at once: testing the changes I described
in my last journal, and doing some database work.
The optical platters we use cost about $300 each, so we try to make
sure things are working pretty well before we actually start writing data.
I've been "burning aluminum" all day, and I'm pretty confident that things
are working correctly. Suzanne (another DADS developer) and I have been
working on this project since about Halloween, and we're both relieved
to see it coming close to the end. And we're anxious to get on to the
next phase of work for SM-97.
I mentioned "burning aluminum" above. I call it that because when we
write to an optical disk, a laser in the drive blows little pits in a
very thin sheet of aluminum trapped between two layers of transparent
plastic. When we write, a high-powered laser burns out the pits. When
we read, a lower-powered laser looks to see what those pits look like.
Once you've burned the pits into the aluminum, it's permanent. You can't
erase it, and we expect the disks to be good for at least 20 years, and
maybe as much as 100 years.
CD-ROMs (and music CDs) work basically the same way, but the pits are
"stamped" using a pressing machine, rather than blowing them out with
a laser. The low-power laser in your CD player works the same way as the one in our optical drives.
To test a new version of the programs that add data to the archive,
we run a standard set of test data through the system. There are about
700 files, for a total of 305 megabytes of data. That's about half of
a typical CD-ROM, or about one twentieth of our big disks. It takes a
couple hours to run all the data through, and that leaves me time to work
on my other problem.
The database work I'm doing involves figuring out how much space we
should reserve when we are making tapes to send to astronomers.
I'm trying to figure out just how big HST Datasets are. A dataset is
a collection of files that together hold all the data for an image or
spectrum. For WFPC-II (Wide-Field and Planetary Camera Two - the camera
that takes most HST pictures) this is a pretty constant number: about
25 megabytes in 10 files. It's pretty constant because the camera takes
the same sort of pictures all the time. Each picture is four "chips" in
an 800x800 array. (A typical PC screen has 1024x768 pixels -- a single
WFPC-II chip is just slightly smaller, but square.) There are a total
of about 40 bytes of information about each pixel, including calibrated
and uncalibrated values, quality information, and other stuff. Since the
size of the picture doesn't change, the size of the dataset doesn't change either.
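As a back-of-the-envelope check on those numbers (our arithmetic, not the archive's actual bookkeeping): an 800x800 array at about 40 bytes per pixel comes out close to the 25 megabytes quoted above.

```python
# Rough size estimate for a WFPC-II image array.
# Assumption (ours): the ~40 bytes per pixel covers calibrated and
# uncalibrated values, quality information, and the other stuff combined.
pixels = 800 * 800          # one 800x800 array
bytes_per_pixel = 40
total_bytes = pixels * bytes_per_pixel
megabytes = total_bytes / 1_000_000   # about 25.6 MB, near the ~25 MB quoted
```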
For the spectrographs, the size of the dataset can vary a lot. This
is because a single dataset can contain multiple spectra. In the case
of the Goddard High Resolution Spectrograph, it can vary from just 38
kilobytes to over 300 megabytes!
But what I want is a "pretty good" estimate of each kind of dataset,
and I can use that to plan how much space I'll need to retrieve a particular
set of data. To get a statistical look at the data, I have this nice complicated
query that gets the minimum, maximum, and average size of "Z-CAL" datasets.
"Z-CAL" datasets are CALibrated science data for the GHRS. (Each instrument
has a letter associated with it: U is for WFPC-II, X is for FOC, Z is
for GHRS.) Once I have all that data, I can also compute the "standard
deviation", which is a kind of average difference in sizes. That gives
me an idea of how much variation there is in size.
Here's another example: If ten people take a test, and they all score
between forty and sixty points, with an average of fifty points, that's
a pretty low standard deviation. If another group of ten take the test,
and half of them score about 20, while the other half score about 80,
the average would still be 50, but the standard deviation would be pretty high.
When you see a large standard deviation like that, you have to decide
if you're seeing different "populations". For example, if you have a test
aimed at eighth graders, and you get five people who score about 20, and
five who score about 80, the fact that you have a large deviation makes
you wonder if maybe the five who scored 20s were perhaps second graders!
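The two test-score groups above can be worked out directly. This is just the standard calculation, using Python's statistics module for illustration (the individual scores are made up to fit the description):

```python
import statistics

# Two hypothetical groups of ten test scores, both averaging 50.
group_a = [42, 45, 48, 50, 50, 50, 52, 53, 55, 55]  # everyone between 40 and 60
group_b = [20, 20, 20, 20, 20, 80, 80, 80, 80, 80]  # half near 20, half near 80

mean_a = statistics.mean(group_a)   # 50: same average...
mean_b = statistics.mean(group_b)   # 50: ...for both groups
sd_a = statistics.pstdev(group_a)   # about 4: scores cluster near the mean
sd_b = statistics.pstdev(group_b)   # 30: a hint of two separate "populations"
```

Identical averages, very different standard deviations: exactly the situation that makes you go looking for those second graders.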
In my case, I've discovered there are two types of GHRS observations:
short, small observations with one or a few spectra, and large observations
that have many spectra. The "mode" I see for those observations is "RAPID",
and I'll have to get one of the astronomer types to explain that operating
mode to me.
That's the kind of math I do pretty regularly: Statistical analysis
of the contents of the archive. I rarely need to do any calculus, though
I know enough to understand how the mathematical "tools" I use work. But
I do a lot of algebra, and use programs that have statistical functions.
Well, my big test is finished, and while most things are working, there
are a couple of problems I need to work on. I'm going to take a break,
get something to drink, and see if I can spot that hawk before I tackle them.
March 15, 1996
I apologize for not being able to write anything last week-- I had 2 midterms
and 3 papers due, and I was barely able to come into work at all! But
now things have calmed down; Spring Break is next week, and I'm finished
with all of my classes.
Our observing run is coming up quickly! Since next week is Spring Break,
I will actually leave for Arizona on Wednesday (my parents live in Arizona,
so I'll spend a few days with them before going down to Kitt Peak.) The
run begins on the night of March 25, and ends on the morning of March
29. I sure hope we'll have good weather!!
Yesterday, my boss sent me a list of all the objects which we should
try to observe--I believe we're going to have a meeting on Monday to plan
everything out. I have never done anything like this, so I'll be learning
as much as you will!!
Since we're building a star catalog, it is very important for us to
be certain that the data we collect is completely accurate. I mentioned
before that part of my job is to remove the instrumental "signatures"
left by the telescope itself. Another very important part of building
the catalog is having a separate set of stars whose photometric information
is already known to compare our stars to. We therefore call the stars
we're observing for our catalog "Program stars"; the stars for which the
information is already known are called "Standard stars." The standard
stars come from a catalog which was compiled by an astronomer named Arno
Landolt. He spent several years observing around the celestial equator
(the celestial equator is just like the earth's equator-- stars near the
celestial equator would be almost directly overhead to a person standing
on earth's equator). Stars near the celestial equator are visible to people
both in the northern hemisphere and the southern hemisphere. His catalog
contains literally hundreds of stars from this region, which are often
called the "Landolt standards."
Have you ever looked at what happens to sunlight when it passes through
a prism? The light separates into the colors of the rainbow, right? Well,
this visible light is just a small part of what scientists call the "electromagnetic
spectrum." The electromagnetic spectrum includes the entire range of waves,
waves can teach us about the amount of energy a star is producing. A picture
of our sun in radio waves would look very different from a picture in
x-rays, and both would look very different from what the sun looks like
in the visible wavelengths -- wavelengths our eyes can see.
When collecting photometric information about the stars, astronomers
typically use five of what we call 'passbands': U (ultraviolet), B (blue),
V (visible), R (red), and I (near-infrared). A passband, as you might
have guessed, is a narrow region of the electromagnetic spectrum. In order
to only study within a certain passband, astronomers have to block out
the rest of the spectrum with very sensitive filters. Of course, many
astronomers choose other filters, but these five are probably the most
common. The first GSPC catalog included data only from the B and V filters;
the second GSPC (which I'm working on) will contain B, V, and R data and
will also include much fainter stars.
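For reference, the five passbands can be listed with their approximate central wavelengths. These are rounded textbook values for the Johnson-Cousins system, added by us for illustration, not figures taken from the GSPC itself:

```python
# Approximate central wavelengths, in nanometers, of the five common
# passbands (rounded, standard textbook values; our addition).
passbands = {
    "U": 365,  # ultraviolet
    "B": 445,  # blue
    "V": 551,  # visible
    "R": 658,  # red
    "I": 806,  # near-infrared
}

# Per the text: the second GSPC adds R (and fainter stars) to the B and V
# data that the first catalog contained.
gspc2_filters = ["B", "V", "R"]
```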
So what does all this have to do with our observing run? Well, when
we go out to the telescope, it's important not only for us to know which
"program" stars to observe for our catalog, but also which "standard"
stars to use. The standard stars help us to account for atmospheric distortions.
When a star is directly overhead, its light will pass through much less
atmosphere than when the star is near the horizon. We therefore observe
these standard stars at many positions in the sky (overhead, at 30 degrees
above the horizon, and so on) so that we can take into account the effects
of the atmosphere when reducing
the data from our program stars. Does this make sense? We already know
what the data from the standard stars SHOULD be, so when we measure them
at many positions in the sky, we can compare the differences between what
the standard star data SHOULD be and what it really is. We can then take
that information and apply it to our program stars.
Pretty neat, huh?
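One common way to apply the standard stars is sketched below. This is a simplified illustration under the usual linear-extinction assumption (dimming proportional to "airmass", the amount of atmosphere the light crosses); the real GSPC reduction is more involved, and all the numbers here are hypothetical.

```python
def fit_extinction(samples):
    """Least-squares fit of: (observed - true) = zero_point + k * airmass.

    samples: list of (airmass, magnitude_difference) pairs measured from
    standard stars whose true magnitudes are already known.
    Airmass is roughly 1 overhead and grows toward the horizon.
    Returns (zero_point, k), where k is the extinction coefficient.
    """
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    zero_point = (sy - k * sx) / n
    return zero_point, k

# Hypothetical standard-star data: 0.1 magnitude of dimming per unit
# airmass, on top of a constant 0.1 magnitude instrumental offset.
samples = [(1.0, 0.20), (1.5, 0.25), (2.0, 0.30), (2.5, 0.35)]
zero_point, k = fit_extinction(samples)

def correct(observed_mag, airmass):
    # Remove the atmosphere's and instrument's signature from a program star.
    return observed_mag - zero_point - k * airmass
```

Once the fit is in hand, every program-star measurement can be corrected using only its observed magnitude and the airmass at which it was taken.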
Well, I'll be sure to write more next week before I leave, and I promise
to write from Kitt Peak!!
TRYING TO UNDERSTAND THE CHANGING WAVELENGTHS
March 13, 1996
Today I am zeroing in on the answer to a GO (General Observer) question
about why wavelengths from data taken in March 94 are off by 2 angstroms.
After looking at his data and seeing how little signal there was, I wondered
how he could tell anything about the wavelengths associated with the individual
spectra. I found that the signal improves (i.e. you could start to see
features as opposed to noise) when the individual spectra are combined
or co-added, i.e. summed together.
Still I was confused because I thought he was trying to compare the
same feature (an absorption (usually) or emission line) in wavelength
space but the data weren't taken at the same central wavelengths at all.
There was hardly any overlap. It turned out he was not comparing the same
feature but different features in redshift space, not wavelength space.
Redshift is the amount a feature moves in wavelength or velocity, etc.
due to the fact that it is moving away from us. This relates to something
you may have heard of called the Doppler effect. It is expressed in terms
of "delta lambda over lambda" which is equal to "v over c". Lambda (the
Greek letter lambda) stands for the rest wavelength, where you expect
to find the feature if it weren't moving away from you. Delta lambda is
the difference in wavelength between where you found the object and the
rest wavelength. "v" is velocity, the speed at which it is moving away.
And "c" is, of course, the speed of light.
The General Observer was plotting his lines of different wavelength
on a redshift (or velocity) scale, then trying to fit them with theoretical
profiles. What he found was that the absorption lines in the March 94
data line up at the same redshift, but the line in the October 95 data
appear to be at a slightly different redshift. And that is the shift he
didn't understand. But I figured out that the wavelength calibration has
changed since the first data were taken. It is quite possible (and I am
hopeful) that if he recalibrates his data the unexpected shift will disappear.
Of course, while I am working on this problem, I also have several more
that I am trying to keep on top of...but my ability or lack of ability
to do more than one chore at a time is the topic for another journal.
Lisa Sherbert, GHRS Data Analyst
Space Telescope Science Institute