
Autonomous visual recognition of known surface landmarks for optical navigation around asteroids

  • N. Rowell, M. N. Dunstan, S. M. Parkes, J. Gil-Fernández, I. Huertas and S. Salehi

Abstract

We present an autonomous visual landmark recognition and pose estimation algorithm designed for use in navigation of spacecraft around small asteroids. Landmarks are selected as generic points on the asteroid surface that produce strong Harris corners in an image under a wide range of viewing and illumination conditions; no particular type of morphological feature is required. The set of landmarks is triangulated to obtain a tightly fitting mesh representing an optimal low-resolution model of the natural asteroid shape, which is used onboard to determine the visibility of each landmark and enables the algorithm to work with highly concave bodies. The shape model is also used to estimate the centre of brightness of the asteroid and eliminate large translation errors prior to the main landmark recognition stage. The algorithm works by refining an initial estimate of the spacecraft position and orientation. Tests with real and synthetic images show good performance under realistic noise conditions. Using simulated images, the median landmark recognition error is 2 m, and the error on the spacecraft position in the asteroid body frame is reduced from 45 m to 21 m at a range of 2 km from the surface. With real images, the translation error at a range of 8 km from the surface increases from 107 m to 119 m, due mainly to the larger range and the lack of sensitivity to translations along the camera boresight. The median numbers of landmarks detected in the simulated and real images are 59 and 44, respectively. This algorithm was partly developed and tested during industrial studies for the European Space Agency's Marco Polo-R asteroid sample return mission.
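The landmark selection and centre-of-brightness steps described above can be illustrated with a short sketch. The following Python fragment is a minimal illustration only, not the authors' onboard implementation: it detects Harris-corner candidates in a single frame with OpenCV and computes the intensity-weighted image centroid that serves as the centre of brightness. The input file name, window sizes and response threshold are all assumed values.

    import cv2
    import numpy as np

    # Hypothetical input: one greyscale frame of the illuminated asteroid.
    img = cv2.imread("asteroid_frame.png", cv2.IMREAD_GRAYSCALE)
    gray = np.float32(img)

    # Harris corner response; candidate landmarks are pixels whose response
    # stays strong across many viewing/illumination conditions (one frame here).
    response = cv2.cornerHarris(gray, blockSize=5, ksize=3, k=0.04)
    corners = np.argwhere(response > 0.01 * response.max())  # (row, col) pairs

    # Centre of brightness: the intensity-weighted centroid of the image,
    # used to remove large translation errors before landmark recognition.
    rows, cols = np.indices(gray.shape)
    total = gray.sum()
    cob_row = (rows * gray).sum() / total
    cob_col = (cols * gray).sum() / total
    print(f"{len(corners)} corner candidates; centre of brightness at "
          f"({cob_row:.1f}, {cob_col:.1f})")

In the paper the corner response is evaluated over many views and illumination conditions so that only consistently strong points are catalogued; a single-frame response, as here, is only the first step of that selection.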
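The pose-refinement step admits a similarly compact sketch. The algorithm refines an initial estimate of the spacecraft position and orientation from matched 2D-3D landmark pairs; OpenCV's iterative solvePnP is used below as a stand-in estimator, and the landmark coordinates, camera intrinsics and initial pose guess are all placeholder values.

    import cv2
    import numpy as np

    # Placeholder data: catalogued landmark positions in the asteroid body
    # frame and their matched pixel detections in the current image.
    object_points = np.random.rand(20, 3).astype(np.float32) * 100.0
    image_points = np.random.rand(20, 2).astype(np.float32) * 1024.0

    camera_matrix = np.array([[1000.0,    0.0, 512.0],
                              [   0.0, 1000.0, 512.0],
                              [   0.0,    0.0,   1.0]])  # assumed intrinsics
    dist_coeffs = np.zeros(5)                            # assume no distortion

    rvec0 = np.zeros((3, 1))                    # initial attitude guess
    tvec0 = np.array([[0.0], [0.0], [2000.0]])  # e.g. 2 km along the boresight

    # Iterative refinement of the initial pose from the correspondences.
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix,
                                  dist_coeffs, rvec0, tvec0,
                                  useExtrinsicGuess=True,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if ok:
        R, _ = cv2.Rodrigues(rvec)
        # Camera (spacecraft) position expressed in the asteroid body frame.
        print("refined position:", (-R.T @ tvec).ravel())

With random correspondences the numbers are of course meaningless; the point is only the shape of the interface: known body-frame landmarks plus image detections in, refined pose out.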



