28 September 2011
How do we know where rovers are on Mars?
Posted by Ryan Anderson
The other day, my brother posted a very good question as a comment, and I thought it would be worth dedicating a full post to the answer.
So watching this made me wonder how you keep track of the rover(s) once they’re on Mars. Is it done with a sort of GPS? Triangulating with things seen in orbital images? Directly taking pictures of the rover with orbital cameras (can they resolve it)? Dead reckoning?
A related question: How precise is the course laid out for the rover? Do you know pretty much absolutely that you’re going to go between these boulders, then over here, then over there, or is there periodic reevaluation based on what was found that day/week?
So, these days this problem is a lot less complicated than it used to be because of the HiRISE camera on MRO. HiRISE has a resolution of about 30 cm per pixel, meaning that rovers can be directly resolved on the surface. By combining the location of the rover in the image with the spacecraft’s pointing and orbit data, which pin down exactly where on Mars the image was taken, we can pinpoint the rover location nicely. Here is a recent HiRISE image of Opportunity at Endeavour Crater (click for a larger version):
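To get a feel for what 30 cm per pixel means, here’s a quick back-of-the-envelope sketch. The rover widths are rough round numbers I’m assuming for illustration, not official specs:

```python
# Rough sketch: how many HiRISE pixels does a rover span?
HIRISE_RES_M = 0.30  # ~30 cm per pixel, as mentioned in the post

# Approximate rover widths in meters (assumed round figures):
rovers = {
    "MER (Spirit/Opportunity)": 2.3,  # roughly the solar panel span
    "MSL (Curiosity)": 2.8,           # roughly the deck width
}

for name, width_m in rovers.items():
    pixels = width_m / HIRISE_RES_M
    print(f"{name}: ~{pixels:.0f} pixels across")
```

So a rover is only a handful of pixels wide, which is why it shows up as a recognizable bright speck rather than a detailed picture.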
Of course, HiRISE doesn’t snap a photo of the rover every sol, so in between new images, the rover’s location is tracked by comparing the images returned by the rover’s Navcams and Pancam to the HiRISE images, and updating the location based on visible landmarks.
But, I hear you asking, what did they do before HiRISE got to Mars? Well, MOC images were available for MER, and although they couldn’t quite resolve the rovers, some landmarks could be used to estimate the location. But more importantly, the rovers can also be located based on their radio signals. Mars is moving through space relative to the Earth and is spinning as it orbits. This motion causes a detectable shift in the radio frequencies that we receive from the rovers, and when combined with our knowledge of Mars’ orbit, it can be used to tell what longitude the rover is at and how far it is from the planet’s spin axis, with an accuracy of about one meter. The problem is, to really nail down the rover location, you also need the distance from the equator. This can be found by watching how the Doppler shift changes over multiple sols, or by measuring the distance from the Earth to the rover by sending a radio signal and having the rover immediately send a response. Since this uses up bandwidth that would normally be used to send commands or receive data, the distance measurement isn’t done as often as monitoring the Doppler shift. In any case, the rover’s latitude and longitude can both be measured to within a few meters using radio signals.
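To see why the planet’s spin is so useful here, consider a toy calculation. A rover riding on Mars’ surface circles the spin axis once per sol, so its velocity (and hence its Doppler signature) depends on how far it sits from that axis. The X-band frequency below is an assumed round number for a direct-to-Earth link, so treat the results as illustrative, not mission values:

```python
import math

C = 3.0e8            # speed of light, m/s
F_X_BAND = 8.4e9     # Hz; assumed typical X-band downlink frequency
MARS_RADIUS = 3396e3 # Mars equatorial radius, m
MARS_SOL = 88775     # length of one sol, s

def spin_doppler_amplitude(latitude_deg):
    """Peak Doppler shift due to Mars' rotation for a rover at a given latitude.

    The rover's distance from the spin axis is r = R * cos(latitude), so it
    moves at v = 2*pi*r / T around the axis. The line-of-sight component of
    that velocity modulates the received frequency by up to f * v / c,
    repeating once per sol -- which is what ties the signal to position.
    """
    r = MARS_RADIUS * math.cos(math.radians(latitude_deg))
    v = 2 * math.pi * r / MARS_SOL
    return F_X_BAND * v / C

for lat in (0, 2, 15):  # equator, and latitudes near the MER landing sites
    print(f"lat {lat:>2} deg: ~{spin_doppler_amplitude(lat):.0f} Hz diurnal Doppler swing")
```

A swing of several kilohertz on a multi-gigahertz carrier is tiny fractionally, but frequencies can be measured extremely precisely, which is how the fit gets down to meter-level position accuracy.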
As for how the rover’s course is laid out, it is done in two stages. First there is a long-term plan where the team sets a goal and tries to figure out a good path to follow. Opportunity’s trek to Endeavour Crater is an extreme example of this. People on the team used HiRISE images to map out a course that avoided large regions of dangerous sand ripples years before the rover was anywhere close to those obstacles. But then on a day-to-day basis, the drives are dictated by the images from the rover itself that were returned on the previous sol. So, it’s very common to do “drive direction imaging,” where we take a few pictures in the direction we anticipate driving so the rover can be sent instructions that avoid dangerous rocks and end up close to the desired science target for the next sol. Things change on a daily basis, especially when you’re at a major science target like Endeavour Crater, because new images will lead to new questions, and so the team might decide to make an additional observation here, or drive over and compare with a rock over there. The constant planning and adapting to new discoveries and questions is part of what makes a rover mission so exciting.
It’s going to be especially exciting on MSL, where our long-term destination will be a towering mountain of layered rocks in the distance, beckoning us onward. With MER, HiRISE came after the rovers were already there, so as cool as it is to see the rover from orbit, we already sort of knew the site from the ground. With MSL at Gale, we already know so much about the site from orbit that it’s going to be surreal to suddenly see all the familiar sights from ground level and be able to drive over and analyze that outcrop or that crater or that mineral signature.
I can’t wait.
Info on radio ranging from:
Li, R., Di, K., Matthies, L. H., Arvidson, R. E., Folkner, W. M., & Archinal, B. A. (2004). Rover Localization and Landing-Site Mapping Technology for the 2003 Mars Exploration Rover Mission. Photogrammetric Engineering & Remote Sensing, 70(1), 77–90.
I’m totally blown away that the Doppler shift and the timings of the distance measurement signal work, and work so accurately. It seems like the shifts would be absolutely minuscule. If they can measure to within meters, that suggests that altitude differences or what time of day the message was sent would matter for the measurement. That’s really impressive.
Yeah, the stuff they can do with just the radio signal from spacecraft is ridiculous. They can use the tiny variations in spacecraft orbits to map the density variations in the crust. Some of the outer solar system missions have used the radio signal to test general relativity because we’re deeper in the sun’s gravity well than they are.
One thing working in our favor is that tracking Doppler shifts happens in the frequency domain, so the base unit for accuracy would be the wavelength of the radio waves. Differential or interferometric techniques – comparing two measurements – can also turn up much higher accuracies than a single point measure. (I’m not saying that we’ll always know the distance from the Earth to the rover to a single wavelength, but even if we’re a factor of ten off from that, that’s only a couple hundred meters!)
Compared to trying to resolve spatial separations, better measurements almost always come from spectra in the frequency domain.
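To make the “base unit is the wavelength” idea concrete, here’s a quick sketch. The band frequencies are assumed typical values for Mars links (UHF for orbiter relay, X-band for direct-to-Earth), not mission specifications:

```python
C = 3.0e8  # speed of light, m/s

# Assumed typical deep-space radio bands (round frequencies):
bands = {
    "UHF (orbiter relay)": 400e6,
    "X-band (direct-to-Earth)": 8.4e9,
}

for name, freq_hz in bands.items():
    wavelength_m = C / freq_hz
    print(f"{name}: wavelength ~{wavelength_m * 100:.1f} cm")
```

Since the natural measuring stick is centimeters rather than kilometers, meter-level ranging is less magical than it first sounds.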
I only understand a little of this. It is so ridiculous and amazing. This is an excellent question and one I have never heard explained before. I read this twice and it is mindblowing. What happens if it storms and atmospheric static or something messes up the radio signal? Also how long is the delay, at the speed of light, for the info to get from Mars to Earth?
Closest approach found after 5 seconds of googling is 54.6 million km.
To travel 54.6 million km at the speed of light (3 x 10^8 m/s) would take 3.03 minutes. This is at the closest.
For larger separations in our orbits, Wikipedia has this to say: “The one-way communication delay due to the speed of light ranges from about 3 minutes at closest approach (approximated by perihelion of Mars minus aphelion of Earth) to 22 minutes at the largest possible superior conjunction (approximated by aphelion of Mars plus aphelion of Earth).”
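The delay calculation above is just distance divided by the speed of light. Here’s the same arithmetic for both ends of the range; the aphelion distances are assumed round values for illustration:

```python
C_KM_S = 3.0e5  # speed of light, km/s

def one_way_delay_minutes(distance_km):
    """One-way light travel time over a given distance, in minutes."""
    return distance_km / C_KM_S / 60

# Closest approach, from the comment above:
closest_km = 54.6e6
# Largest superior conjunction: aphelion of Mars + aphelion of Earth
# (assumed approximate values, ~249.2 and ~152.1 million km):
farthest_km = 249.2e6 + 152.1e6

print(f"closest:  ~{one_way_delay_minutes(closest_km):.2f} min one way")
print(f"farthest: ~{one_way_delay_minutes(farthest_km):.1f} min one way")
```

And of course those are one-way figures; a round-trip ranging measurement takes twice as long.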
When we get the first images back, I anticipate having the same feeling I get when I see a famous person. “Wait, why is that person not in my television? Am I in the television? What’s going on?!?”