Mars – in 3D!

October 10, 2012 • 11:58 am

by Matthew Cobb

Ever wanted to feel like you were flying high above the surface of Mars, looking down on the landing site of one of the most amazing feats in the history of technology?

Grab your red/cyan 3-D glasses and have a look at this image (click to see full size version).

This picture was tweeted by the Curiosity Rover (OK, I know) @MarsCuriosity: “Mars #3D: Grab your red-cyan glasses to see terrain, my parachute & backshell in 3-D via @HiRISE”

17 thoughts on “Mars – in 3D!”

  1. I hate colored stereograms like this.

    Far superior are just two side-by-side frames. All you have to do is cross your eyes until the two merge, and then you’ve got full-resolution, full-color, undistorted stereo vision. (A minimal sketch of composing such a pair appears after this comment.)

    Anybody know if that sort of a presentation of this scene exists?

    Cheers,

    b&
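
    For anyone who wants to roll their own, here is a minimal sketch of composing that kind of cross-eye pair. It assumes two already-aligned frames of equal size and uses Python with Pillow; the filenames are placeholders, not real HiRISE products.

    ```python
    from PIL import Image

    # Hypothetical, pre-aligned left- and right-eye frames of equal size.
    left = Image.open("left_eye.png")
    right = Image.open("right_eye.png")

    # For cross-eyed free viewing, the right-eye frame goes on the LEFT
    # and the left-eye frame on the RIGHT, so the crossed lines of sight
    # land on the correct images.
    w, h = left.size
    pair = Image.new("RGB", (w * 2, h))
    pair.paste(right, (0, 0))
    pair.paste(left, (w, 0))
    pair.save("crosseye_pair.png")
    ```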

    1. Hi Ben,

      Unfortunately, no. We (HiRISE) don’t produce side-by-side stereograms. We are considering creating them to support applications like 3D TV and such, but we haven’t really gone beyond the thinking-about-it stage. And every time I start thinking about it, it gives me a bad headache!

      To make a long story short, there are some significant technical (and presentation) hurdles to overcome to enable the automated creation of *useful* side-by-side stereo pairs for HiRISE imagery with the sort of ease that we are able to create the anaglyphs. I can’t say that we will never create any side-by-side stereograms, but if and when we do, they will probably be a small number of special products.

      1. If I may, permit me to take a moment to urge you to reconsider your position (and, with luck, to persuade others at NASA to do so as well).

        False-color anaglyphs require special (if, granted, inexpensive) equipment to view, equipment that (with rounding) basically nobody keeps at hand. Without that equipment, the image is next to useless. Even with the equipment, the image doesn’t look right; it’s falsely colored, and full-color images are hopeless.

        With side-by-side stereo, you not only don’t need any special equipment, but the images are perfectly sensible by themselves as regular flat photos — including for people like Adrian above who are going to have problems with 3D vision no matter what.

        What’s more, it’s trivial to turn a side-by-side stereo image into any other format, including false-color for those so inclined (a sketch of that conversion follows this comment). Turning false-color into any other format doesn’t work very well, if at all. Plus, full color works great with side-by-side stereo.

        Lastly, if side-by-side stereo gives you a headache, you’re doin’ it rong. On my MacBook Air, the image on this post is about 4″ wide. With the laptop at a distance of about 2′ from my face, if I focus on my finger at about a 1′ distance, there’s enough convergence for a side-by-side equivalent of the same image to fuse properly. Eye strain really only becomes a problem for extended viewing or for close-up viewing of large images — but, then again, that’s a problem not just for 3D images but for anything.

        Cheers,

        b&
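
        To make the “trivial to convert” point concrete: a minimal sketch (Python with Pillow again, placeholder filenames) of turning an aligned left/right pair into a red-cyan anaglyph: red channel from the left eye, green and blue from the right.

        ```python
        from PIL import Image

        # Hypothetical, pre-aligned inputs of equal size; real pairs
        # need co-registration first.
        left = Image.open("left_eye.png").convert("RGB")
        right = Image.open("right_eye.png").convert("RGB")

        r, _, _ = left.split()   # red channel from the left-eye frame
        _, g, b = right.split()  # green and blue from the right-eye frame

        anaglyph = Image.merge("RGB", (r, g, b))
        anaglyph.save("anaglyph.png")
        ```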

        1. Unfortunately, it’s not a matter of reconsidering our position. The points you make are all valid ones. Within my team we have discussed them, and we agree with them. That’s not the problem; we don’t need to be convinced of any of those issues. We already are.

          The problems are a combination of technical, resource-availability, and even presentational issues. Consider that the image above is a cutout that is just over 1/1000th of the full-sized anaglyph. That full-resolution anaglyph was automatically generated by 1 of about 100 different image-processing pipelines that we have running (overseen by 1 full-time staff member, plus a few others who put in time when they can).

          Now, what we want (or what we need in order to produce side-by-side stereograms with detail similar to what you see in this post) is a similarly automated way to produce full-resolution side-by-side stereograms where the source images are *much* bigger than the screen on your MacBook, but that allows you to pan and zoom around the 2 co-registered images on your small screen so you can pick and choose what areas *you* want to see in stereo. That isn’t a simple problem to solve, and it’s why I alluded to producing only special products in my previous post. There would be far too much manual labor involved for us to generate side-by-side cutouts for images like the one above on a systematic basis. We just don’t have the resources to do it unless the process can be completely automated. (A toy sketch of the tiling side of this appears after this comment.)

          Right now, what is on the table for us is producing *very* low-resolution side-by-side pairs of the full images. I think this is doable, but all you’ll see in these is the large topographic features. The parachute in the image above, and the small craters and such around it, would be completely invisible in the low-res stereogram.
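
          As a toy illustration of the tiling half of that problem, here is a sketch that cuts two pre-aligned images into matching tiles (Python with Pillow; filenames and tile size are placeholders). The genuinely hard parts described above, such as co-registration at full resolution, are assumed away here.

          ```python
          from PIL import Image

          TILE = 512  # tile edge in pixels; an arbitrary choice for this sketch

          def cut_tiles(path, prefix):
              """Slice one image into a grid of TILE x TILE pieces."""
              im = Image.open(path)
              w, h = im.size
              for y in range(0, h, TILE):
                  for x in range(0, w, TILE):
                      tile = im.crop((x, y, min(x + TILE, w), min(y + TILE, h)))
                      tile.save(f"{prefix}_{x}_{y}.png")

          # The two inputs must already be co-registered pixel-for-pixel (the
          # hard part); matching tile names then let a viewer pan both eyes
          # in lockstep.
          cut_tiles("left_eye.png", "L")
          cut_tiles("right_eye.png", "R")
          ```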

          1. I think I understand the dilemma.

            May I suggest?

            Don’t let the perfect be the enemy of the good, and start small.

            Keep your workflow exactly as it is today. But when an image such as this gets singled out for promotion to the public, for just that one (scaled / cropped / whatever) image, present an array of options for 3D viewing.

            The first few times, it’d presumably be a manual process, and perhaps annoying — but it shouldn’t be too terribly burdensome. And it’ll also give you feedback on what the public does and doesn’t like, as well as give you ideas on what does and doesn’t make sense in the workflow.

            From there, it should be a lot easier to scale that up. It may still take a while, but the scope of the job should at least be better defined. And, it may well be that you don’t have to do nearly as much as you fear after all.

            Cheers,

            b&

          2. Yes, that is the solution that we are ultimately heading towards. It will definitely be a piecemeal approach that slowly builds on successive steps, but I can’t say *when* any of it will go into place. The anaglyphs are what our science team uses, and that is what they want, and ultimately they drive what we produce. So these are relatively low on our priority list. I think we’ll get *something* out eventually, but the list of things that people want is long, and our resources are becoming more and more limited.

          3. Seems like you’re doing everything that’s reasonable.

            I’ll leave it with an observation and a last suggestion.

            First, the suggestion. This is really a PR / outreach matter, since the researchers are happy with what they have. Any chance you can foist the whole problem off onto somebody whose job is publicity?

            And the observation: if I were in charge of the Federal budget, you’d have more than ample resources to devote to this sort of thing. And I think that’d be the case if most other Americans were in charge of the budget, too. Just how much amazing science are we missing out on so we can kill a few more brown people?

            Cheers,

            b&

          4. We have *some* funding for public outreach, but as the mission has matured, that funding has dropped like everything else. For the most part, funding for new product development like this is leveraged off of our general operations budget, which is how our image-processing and pipeline-development programmers are funded. It’s perhaps not the best way of handling it, but the alternatives are worse.

  2. rheyd,

    Thanks for the image and taking the time to explain. It is quite nice to see something historic like this and it is appreciated.

    If I understand correctly, these images are or will ultimately be released into the public domain as a matter of policy.

    Many US agencies, space-related and otherwise, are making their data available to the general public to allow further work to be done by independent individuals or other research teams.

    Is it possible that, for example, the original data that were used here could be uploaded somewhere and accessed by anyone?

    A few stunning images later, who knows what might develop?

    1. Johnnie,

      All the raw and processed imagery that we produce is archived in NASA’s Planetary Data System, which is completely free and open for public perusal, downloading, reprocessing, or anything else you could possibly want to do with it. The data return of HiRISE (nearly 90 TB and counting) is such that we were set up as a subnode of the PDS, so we have full control over our own archive. You can peruse all the imagery in “user friendly” form at our official website, http://www.uahirise.org/, or, if you’re a glutton for punishment, you can browse the archive in its raw form at http://hirise-pds.lpl.arizona.edu/PDS/

      Most NASA missions are required to release data to the archive every 3 months, with each release containing data 6 to 9 months old. In our case, the data volumes are so high that we forgo the usual embargo period and release our data monthly, as soon as we have finished processing it. Our most recent release, from last week, has all the data we acquired through September 1st.

      If you are interested, half of the raw and processed source imagery for the above anaglyph is available here: http://www.uahirise.org/ESP_028335_1755. The other half of the data that went into the anaglyph is still processing and is due for release around November 7th.
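
      For anyone who wants to script against the archive, a minimal download sketch in Python using the requests library. The product path below is illustrative only, not a guaranteed file name; real paths should be looked up in the PDS directory tree linked above.

      ```python
      import requests

      # Illustrative URL only; browse http://hirise-pds.lpl.arizona.edu/PDS/
      # for actual product paths before relying on this.
      url = ("http://hirise-pds.lpl.arizona.edu/PDS/RDR/ESP/"
             "ORB_028300_028399/ESP_028335_1755/ESP_028335_1755_RED.JP2")

      with requests.get(url, stream=True) as resp:
          resp.raise_for_status()
          with open("ESP_028335_1755_RED.JP2", "wb") as f:
              for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
                  f.write(chunk)
      ```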

      1. Few people appreciate how much advancements in information technology have permitted advancements in the sciences.

        90 terabytes? Holy fucking shit, that’s insane! Even with gigabit ethernet, it’d take a week and a half to transfer that much data.

        A modern high-end DSLR makes 25 Mbyte single full-resolution RAW frames — and you can easily make a three-foot by four-foot print from such a file that you can stick your nose up against. 90 terabytes is almost 4,000,000 such exposures. That’s enough to cover a few city blocks with large gallery-quality prints.

        The shutters in such DSLRs are rated for 100,000 – 150,000 exposures; you’d go through at least a couple dozen cameras trying to shoot that many pictures. And you’d have to keep the shutter button glued down for weeks on end and swap out thousands of memory cards to get to that much data.

        And the scary part? That 90 terabytes is just a drop in the bucket.
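
        A quick back-of-the-envelope check of those figures, for the skeptical (a minimal sketch; all numbers are as quoted above):

        ```python
        # Sanity-check the figures quoted above.
        total_bytes = 90e12   # ~90 TB archive
        gige_bps = 1e9        # gigabit ethernet

        days = total_bytes * 8 / gige_bps / 86400
        print(f"{days:.1f} days at gigabit speed")  # ~8.3 days: a week and a half

        frames = total_bytes / 25e6   # 25 MB full-resolution RAW frames
        print(f"{frames:,.0f} frames")  # 3,600,000: almost four million

        cameras = frames / 125_000    # mid-range shutter-life rating
        print(f"~{cameras:.0f} worn-out camera bodies")  # ~29: a couple dozen-plus
        ```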

        Then, of course, there are the computational requirements to process that much data once you’ve got it….

        Damn.

        b&

        1. I should clarify the 90TB figure a little bit. The actual data returned from the spacecraft is about 1/3 of that total. The other 2/3 of that figure are the processed imagery.

          It’s still a huge dataset, though. 1 GB compressed is about our typical image size, although some are a little over 2 GB depending on the observing mode.

          And yes, the processing requirements are significant. We’ve already done one reprocessing run, in which we reprocessed our entire dataset with improved calibration; it took us 18 months to get through it all (while also processing new data as it came in). Given our current data volume, it would probably take double that time now.

          1. Again…insane.

            If I’m not mistaken, HiRISE has been sending back pictures for about six years now. Taking the ~30 TB actually returned from the spacecraft (a third of the 90 TB total, per the clarification above), that means the average transmission rate is about 5 terabytes / year, which works out to about 170 KBps. That’s well over a megabit — faster than many American broadband connections, faster than a lot of mobile broadband!
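
            The same arithmetic as a quick check (a minimal sketch; ~30 TB returned over six years, as above):

            ```python
            # Implied sustained downlink rate: ~30 TB returned over ~6 years.
            returned_bytes = 30e12
            seconds = 6 * 365.25 * 86400

            bps = returned_bytes * 8 / seconds
            print(f"{bps / 1e6:.2f} Mbit/s")     # ~1.27 Mbit/s: well over a megabit
            print(f"{bps / 8 / 1024:.0f} KB/s")  # ~155 KB/s (~170 with binary TB)
            ```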

            Damn…NASA gets better download speeds from Mars than my iPhone on T-Mobile does from YouTube!

            Amazing.

            b&

  3. Can I thank rheyd of the HiRISE group for contributing here? I have seen Ed Yong say how pleased he is when people who are involved in a project he’s blogged about (yes, Jerry, this is a blog) chip in on the comments. Now I know how he feels – it’s much appreciated by readers, and by writers.

    1. Hear, hear!

      And hip, hip, hooray!

      Stuff like this makes me proud to be an American taxpayer (even if stuff like the invasion of Afghanistan and Gitmo and drones and extraordinary rendition and the TSA and warrantless wiretaps and and and and all make me ashamed and want to puke).

      When America gets something right, nobody does any better — and NASA is about as right as we’ve done.

      Cheers,

      b&

    2. Thanks! Although it’s a bit of luck of the draw: this is one of the sites that I happen to follow, so I tend to contribute to what I’m interested in.
