EE Times-India > Embedded

NASA cites Astrobotic Technology as key to Mars exploration

Posted: 08 Feb 2016

Keywords: Astrobotic Technology, Mars, space robotics

I was thinking about standard Earth drones as a possibility, but NASA had already explored that with the Aerial Regional-scale Environmental Survey of Mars, or ARES, aircraft. Aerodynamically, flying on Mars is an interesting challenge: the Martian atmosphere is so thin that an aircraft there would encounter conditions similar to those on Earth at an altitude of 30,480 m (100,000 ft). Maintaining flight in such thin air requires either very long wings or a very powerful propulsion system to generate sufficient lift.
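To get a feel for how demanding those conditions are, the standard lift equation L = ½ρv²SC<sub>L</sub> can be solved for the wing area needed at a given speed. The sketch below is purely illustrative: the aircraft mass, cruise speed, and lift coefficient are assumptions, and the surface air densities (1.225 kg/m³ for Earth, roughly 0.020 kg/m³ for Mars) are typical textbook values.

```python
# Illustrative sketch: why flight in the thin Martian atmosphere needs very
# long wings or very high speed. Level flight requires lift
# L = 0.5 * rho * v^2 * S * CL to equal weight W = m * g.

def required_wing_area(mass_kg, rho, g, v, cl):
    """Wing area S (m^2) needed for level flight at speed v (m/s)."""
    weight = mass_kg * g
    return weight / (0.5 * rho * v**2 * cl)

# Assumed numbers: a 150 kg aircraft cruising at 100 m/s with CL = 0.6
mass, v, cl = 150.0, 100.0, 0.6

s_earth = required_wing_area(mass, rho=1.225, g=9.81, v=v, cl=cl)  # sea level
s_mars = required_wing_area(mass, rho=0.020, g=3.71, v=v, cl=cl)   # Mars surface

print(f"Earth wing area: {s_earth:.2f} m^2")
print(f"Mars  wing area: {s_mars:.2f} m^2")
print(f"ratio: {s_mars / s_earth:.1f}x")
```

Even with Mars' weaker gravity helping, the wing area comes out more than 20 times larger than on Earth for the same speed, which is why ARES-style designs trade between long wings and raw propulsive power.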

The drone's propulsion system is another issue. The ARES programme planned to keep it simple with a rocket-based propulsion system, which means bringing all of the propellant along. The Martian atmosphere is mostly carbon dioxide, so there is no ambient oxidizer available for a combustion engine. A bi-propellant rocket fuel was therefore the option considered.

The rocket propellant would have consisted of mono-methyl hydrazine (MMH) and nitrogen tetroxide with a nitric oxide additive (also known as MON-3), a combination used in flight-proven systems. To reduce mission risk, researchers hoped to use available systems with proven reliability wherever possible.
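Because every gram of propellant must be carried from Earth, the propellant budget is driven by the Tsiolkovsky rocket equation. The sketch below is a hedged illustration, not ARES design data: the specific impulse of ~310 s is a typical vacuum figure for MMH/NTO engines, and the 600 m/s delta-v budget is an assumed round number for a short powered flight.

```python
import math

# Hedged sketch: propellant fraction an ARES-style rocket aircraft would need,
# from the Tsiolkovsky rocket equation dv = Isp * g0 * ln(m0 / mf).
# Isp ~ 310 s is a typical vacuum value for MMH/NTO engines (assumption).

G0 = 9.80665  # standard gravity, m/s^2 (appears in the definition of Isp)

def propellant_fraction(delta_v, isp):
    """Fraction of initial mass that must be propellant: 1 - mf/m0."""
    return 1.0 - math.exp(-delta_v / (isp * G0))

# Assumed ~600 m/s total delta-v budget for a short powered flight
frac = propellant_fraction(delta_v=600.0, isp=310.0)
print(f"propellant fraction: {frac:.1%}")
```

Under these assumptions, roughly a fifth of the aircraft's initial mass would be propellant, which illustrates why a simple, reliable bi-propellant system was attractive.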

A multi-rover design

The GNC (guidance, navigation and control) algorithms in these robots/drones are based upon stochastic optimisation; LIDAR will enable the mobile robotic surface probes to operate autonomously, without base-rover control, inside high-risk areas such as caves or lava tubes on Mars or the Moon, where no radio signals can reach.
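As a toy illustration of the stochastic-optimisation idea (not the actual flight algorithm), one navigation step can be framed as sampling many candidate headings, scoring each by progress toward the goal minus a penalty for getting too close to obstacles (a stand-in for LIDAR clearance), and keeping the best sample:

```python
import math
import random

# Toy stochastic-optimisation step for waypoint selection (illustration only).
random.seed(42)

def clearance(point, obstacles):
    """Distance from a point to the nearest obstacle centre."""
    return min(math.dist(point, ob) for ob in obstacles)

def next_waypoint(pos, goal, obstacles, step=1.0, samples=200):
    """Sample random headings; keep the best-scoring candidate waypoint."""
    best, best_score = pos, float("-inf")
    for _ in range(samples):
        theta = random.uniform(0, 2 * math.pi)
        cand = (pos[0] + step * math.cos(theta), pos[1] + step * math.sin(theta))
        # Reward progress toward the goal; cap the clearance bonus at 2 m so
        # that far-from-obstacle candidates are not over-rewarded.
        score = -math.dist(cand, goal) + min(clearance(cand, obstacles), 2.0)
        if score > best_score:
            best, best_score = cand, score
    return best

obstacles = [(3.0, 0.2), (5.0, -0.5)]
pos, goal = (0.0, 0.0), (10.0, 0.0)
for _ in range(15):
    pos = next_waypoint(pos, goal, obstacles)
print(f"final position: ({pos[0]:.2f}, {pos[1]:.2f})")
```

The appeal of sampling-based methods in this setting is that they need no map gradient or radio link, only on-board range data, which suits a probe cut off inside a lava tube.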

This multi-rover design is a Tier-Scalable Reconnaissance (TSR) framework built around multiple mobile robotic surface probes. It replaces the idea of a single heavy, costly, cumbersome rover that could overturn in the rough, hazardous terrain of a lava tube and end the mission in failure.

The use of Expendable Robotic Surface Probes (ERSPs) is shown in Figure 3.


Figure 3: A multi-rover framework functioning autonomously with stochastic optimisation-based guidance, navigation and control (GNC). Operational autonomy is a must: redundant mobile robotic surface probes in the form of mini-rovers are coupled with a base rover, also with stochastic optimisation-based GNC, so that the loss of one or more probes does not jeopardise the mission. The base rover can intelligently plan its own course using the mast-mounted camera and sensors shown here, and can map out the best passage for the deployed probes to take to reach their individually assigned science targets. (Image courtesy of Reference 2)

Depending on its size, the base rover will be able to carry a minimum of two ERSPs, as shown in Figure 3. When the base rover reaches its operations area, the smaller robotic probes would depart via a deployed ramp, and each would journey to its assigned science target. Each probe can have a unique design and functionality, with wheels or treads and scientific instrumentation specific to its purpose. The commonalities between the probes would be elements such as ad-hoc wireless communications equipment, an on-board CPU, batteries, LIDAR, cameras, and a bank of IR/visible/UV flood LEDs.

Once the ERSPs are deployed within a lava tube or cave, they would not necessarily be expected to return to the base rover; they are committed to their task until end-of-life. The base rover would collect and analyse the data returned from each ERSP, and make decisions on new scientific targets to explore. Additionally, the base rover would in turn relay the data and analysis back to Earth, either directly or via an orbiter (Reference 1).

Operational autonomy (Reference 1) is a crucial part of this deployment. It would require:

(1) Automatic characterisation of operational areas from different vantage points

(2) Automatic sensor deployment and data gathering

(3) Automatic feature extraction and region-of-interest identification

(4) Automatic target prediction and prioritisation

(5) Automatic re-deployment and navigation of robotic agents.
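The five steps above form a repeating pipeline. The sketch below lays that pipeline out as plain Python; every function body, name, and data shape is a placeholder assumption used only to show how the stages chain together:

```python
# Placeholder sketch of the five-step autonomy loop (names and data shapes
# are illustrative assumptions, not the actual mission software).

def characterise_area(vantage_points):
    """(1) Characterise the operational area from several vantage points."""
    return {"map": [f"view@{p}" for p in vantage_points]}

def gather_data(area):
    """(2) Deploy sensors and collect raw measurements."""
    return [{"view": v, "reading": i} for i, v in enumerate(area["map"])]

def extract_features(readings):
    """(3) Extract features / regions of interest from the raw data."""
    return [r for r in readings if r["reading"] % 2 == 0]  # stand-in filter

def prioritise(features):
    """(4) Rank candidate science targets by predicted importance."""
    return sorted(features, key=lambda f: -f["reading"])

def redeploy(targets):
    """(5) Send robotic agents to the top-priority targets."""
    return [t["view"] for t in targets[:2]]

area = characterise_area(["north", "east", "floor", "ceiling"])
plan = redeploy(prioritise(extract_features(gather_data(area))))
print("next deployments:", plan)
```

The key design point is the closed loop: step (5) produces new vantage points, which feed back into step (1) on the next cycle without any operator in the loop.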

The Automated Global Feature Analyser


AGFA performs analysis and classification of images of the operational area explored by the rovers (see Figure 4, top centre). Using image-processing algorithms, features are extracted as shown in Figure 4, lower right. Feature vectors are then generated for all the identified targets, as seen in Figure 4, upper right.
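A minimal sketch of the feature-vector idea, assuming tiny grayscale pixel lists as "images" and simple statistics (mean intensity, contrast, size) as stand-ins for AGFA's real image-processing features:

```python
# Minimal sketch: one feature vector per identified target. The targets and
# the feature choices are illustrative assumptions, not AGFA's actual pipeline.

def feature_vector(target_pixels):
    """Summarise a target's pixels as [mean intensity, contrast, pixel count]."""
    n = len(target_pixels)
    mean = sum(target_pixels) / n
    contrast = max(target_pixels) - min(target_pixels)
    return [mean, contrast, n]

targets = {
    "rock_A": [52, 60, 58, 55],          # fairly uniform, low contrast
    "crater_B": [10, 200, 15, 180, 12],  # high-contrast shadows and rim
}
vectors = {name: feature_vector(px) for name, px in targets.items()}
for name, vec in vectors.items():
    print(name, [round(x, 1) for x in vec])
```

Once every target is reduced to a fixed-length vector like this, downstream stages (clustering, prioritisation) can work on numbers rather than raw images.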

AGFA diagram

Figure 4: The operational flow diagram of AGFA needed for a successful autonomous multi-rover framework in exploration (Image courtesy of Reference 3)

The Advanced Target Prioritisation Framework for AGFA

The rovers would need the capability to automatically prioritise targets for possible further detailed investigation. How would that be done? Once their camera data is processed for feature extraction, the targets can be pre-clustered using general-purpose or special-purpose model clustering algorithms. A prioritisation probability is then calculated for the identified targets/clusters, ranking them for further investigation by level of importance.
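The cluster-then-rank step can be sketched as follows. This is an illustration under stated assumptions, not the framework's actual method: a tiny one-dimensional k-means stands in for the clustering algorithm, per-target "interestingness" scores are invented, and the prioritisation probability is a softmax over each cluster's mean score:

```python
import math

# Sketch: pre-cluster targets, then assign each cluster a prioritisation
# probability. Scores, k-means-in-1D, and the softmax rule are assumptions.

def kmeans_1d(xs, k=2, iters=20):
    """Tiny 1D k-means; returns the k groups of values."""
    centroids = [min(xs), max(xs)]
    groups = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda i: abs(x - centroids[i]))].append(x)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return groups

def priorities(groups, temperature=10.0):
    """Softmax over cluster mean scores -> prioritisation probabilities."""
    scores = [sum(g) / len(g) for g in groups]
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [1.0, 2.0, 1.5, 9.0, 10.0, 9.5]  # assumed per-target interestingness
groups = kmeans_1d(scores)
probs = priorities(groups)
for g, p in zip(groups, probs):
    print(f"cluster {g}: priority {p:.2f}")
```

Expressing priority as a probability (rather than a hard ranking) lets the rover hedge: it can commit most resources to the top cluster while still occasionally sampling lower-ranked targets.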

A step further

William L. Whittaker takes the multi-rover design concept described above a step further in Reference 4.

Reference 4 presents a framework consisting of an autonomous frontier- and capability-based task generator and a distributed market-based strategy for coordinating and allocating tasks among the robotic team members. It defines a communication protocol for seamless interaction between the different robots in the system. These robots carry different sensors (the team used for testing included robots with 2D mapping sensors, 3D modelling sensors, or no exteroceptive sensors at all) and have varying levels of mobility.

Tasks can be generated to explore, model, and take science samples. Based on an individual robot's capability and associated cost to perform a task, a robot is autonomously selected for task execution. The robots create coarse online maps and store collected data for high resolution offline modelling since they cannot transmit this data wirelessly in the lava tubes or caves.
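The capability-and-cost selection can be pictured as a simple auction: each task is announced, every robot holding the required capability bids its estimated cost, and the lowest bidder wins. The sketch below is an illustration only; the robot names, capability sets, and the distance-based cost model are all assumptions, not the paper's implementation:

```python
# Illustrative market-based task allocation (assumed names and cost model).

robots = {
    "mapper_2d": {"caps": {"map2d"}, "pos": (0, 0)},
    "modeler_3d": {"caps": {"map2d", "model3d"}, "pos": (5, 5)},
    "sampler": {"caps": {"sample"}, "pos": (2, 1)},
}

def cost(robot, task):
    """Simple assumed cost model: Manhattan distance to the task site."""
    rx, ry = robot["pos"]
    tx, ty = task["site"]
    return abs(rx - tx) + abs(ry - ty)

def auction(task):
    """Collect bids from capable robots; return the cheapest bidder's name."""
    bids = {name: cost(r, task) for name, r in robots.items()
            if task["needs"] in r["caps"]}
    return min(bids, key=bids.get) if bids else None

tasks = [
    {"needs": "map2d", "site": (1, 1)},
    {"needs": "model3d", "site": (1, 1)},
    {"needs": "sample", "site": (4, 0)},
]
assignments = {f"task{i}": auction(t) for i, t in enumerate(tasks)}
print(assignments)
```

Because bids are computed locally from each robot's own state, this style of allocation degrades gracefully: losing one robot simply removes its bids rather than breaking a central plan.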

Some of the robots used in the testing can be seen in Figures 5, 6 and 7.

2D mapping robots

Figure 5: A pair of 2D mapping robots are shown here with SICK scanning LIDARs that scan parallel to the ground. One covers 180° at 1° resolution and the second covers 270° at 0.5° resolution. (Image courtesy of Reference 4)

3D modelling robot

Figure 6: A 3D modelling robot with a SICK scanning LIDAR mounted on a rotating base. It spins the LIDAR as it scans, building a 3D point cloud. (Image courtesy of Reference 4)
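The geometry behind the spinning-LIDAR trick is compact: each 2D return is a range and an in-plane angle, and rotating the scan plane about the vertical axis adds the third dimension. The sketch below shows only that geometry; real drivers also handle timestamps, calibration, and motion compensation, and the specific angles and range used here are assumptions:

```python
import math

# Geometry-only sketch: turning 2D LIDAR returns plus a rotation angle
# into 3D points (calibration and motion compensation omitted).

def to_xyz(r, theta, phi):
    """Map one LIDAR return to Cartesian coordinates.

    r:     measured range (m)
    theta: beam angle within the scan plane (rad)
    phi:   rotation of the scan plane about the vertical z axis (rad)
    """
    # Point in the un-rotated scan plane: x' forward, z up.
    xp = r * math.cos(theta)
    zp = r * math.sin(theta)
    # Rotate the scan plane about the vertical axis by phi.
    return (xp * math.cos(phi), xp * math.sin(phi), zp)

# Build a small cloud: 4 plane rotations x 3 beam angles at a fixed 2 m range.
cloud = [to_xyz(2.0, math.radians(t), math.radians(p))
         for p in (0, 90, 180, 270)
         for t in (-30, 0, 30)]
print(f"{len(cloud)} points; first: "
      f"({cloud[0][0]:.2f}, {cloud[0][1]:.2f}, {cloud[0][2]:.2f})")
```

Sweeping phi through a full revolution while the 2D scanner runs is what lets a single planar sensor produce the dense 3D point clouds needed for cave modelling.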

science sampling robot

Figure 7a: This is a Science Sampling Robot at the mock-cave site. (Image courtesy of Reference 4)

Cone penetrometer

Figure 7b: The Science Sampling Robot's cone penetrometer, used to measure soil properties. (Image courtesy of Reference 4)

This test found some issues needing improvement. One of the allocation mechanisms in this study was distributed; however, task generation was centralised through the operator interface. In an ideal system, individual robots would have the capability to generate and auction tasks based on interesting features they encounter. Releasing this responsibility to individual robots was found to be essential in this type of system.
