UNIQUE RESOURCE EXPLORING HOW SPACECRAFT IMAGERY PROVIDES PROFESSIONALS WITH ACCURATE ESTIMATES OF SPACECRAFT TRAJECTORY, WITH REAL-WORLD EXAMPLES AND DETAILED ILLUSTRATIONS
Spacecraft Optical Navigation provides detailed information on the planning and analysis of spacecraft imagery to help determine the trajectory of a spacecraft. The author, an experienced engineer in the field, covers the full range of celestial targets and explains how a spacecraft captures imagery of them.
Aimed at professionals in spacecraft navigation, this book provides an extensive introduction, explains the history of optical navigation, reviews a range of optical methods, and presents real-world examples throughout. Using mathematics, it discusses everything from the orbits, sizes, and shapes of the bodies being imaged to the location and properties of salient features on their surfaces.
Specific sample topics covered in Spacecraft Optical Navigation include:
- camera models, the gnomonic projection, and deviations from it
- modeling optical navigation observables such as the apparent directions to objects, stars, and the limb or terminator
- centerfinding for stars, unresolved and resolved bodies, and landmarks, together with the relevant reflectance laws
- the use of opnav data in orbit determination, including camera calibration
Spacecraft Optical Navigation is an ideal resource for engineers working in spacecraft navigation and optical navigation who want to update their knowledge of the technology and use it in their day-to-day work. The text will also benefit researchers working with spacecraft, particularly in navigation, and professors and lecturers teaching graduate aerospace courses.
WILLIAM M. OWEN JR. is an optical navigation engineer at the Jet Propulsion Laboratory, California Institute of Technology, USA. He has been a member of the technical staff at the Jet Propulsion Laboratory since 1979 and has been semi-retired since 2010. He received his PhD in 1990 and is a past member of the International Astronomical Union's Division A (Fundamental Astronomy) and Division F (Planetary Systems and Astrobiology), among others.
List of Figures
Preface
Acknowledgement
1 Introduction
1.1 Purpose
1.2 Definitions
1.3 Notation
1.4 Rotations
1.5 Left-handed Coordinate Systems
2 History
2.1 The Early Years: Mariner and Viking
2.1.1 Mariner 9
2.1.2 Viking
2.2 Coming of Age: Voyager
2.3 Innovation and Workarounds: Galileo
2.4 Landmarks: NEAR Shoemaker
2.5 Maturity: Cassini
2.6 Autonomy: Deep Space 1, Stardust, Deep Impact
2.6.1 Deep Space 1
2.6.2 Stardust
2.6.3 Deep Impact
2.7 Flight Hardware
2.8 Development of Enabling Technologies
2.8.1 Computers
2.8.2 Detectors
2.9 Star Catalogs
2.10 Stereophotoclinometry
2.11 Future Missions
2.12 Optical Navigation Outside JPL
2.13 Summary
3 Cameras
3.1 The Gnomonic Projection
3.2 Deviations from the Gnomonic Projection
3.2.1 Optical Aberrations
3.2.2 Keystone or "Tip/Tilt" Distortion
3.3 The Creation of a Digital Picture
3.3.1 Vidicon Detectors
3.3.2 CCD Detectors
3.3.3 Active Pixel Sensors
3.4 Picture Flattening
3.5 Readout Smear
4 Modeling Optical Navigation Observables
4.1 Introduction
4.2 The Apparent Direction to an Object
4.3 The Apparent Direction to a Star
4.4 The Apparent Direction to the Limb or Terminator
4.5 The Orientation of the Camera
4.5.1 Three Pointing Angles Are Known
4.5.2 Two Pointing Angles Are Known
4.5.3 Primary and Secondary Axes
4.5.4 Rotation into Camera Coordinates
4.6 Modeling the Gnomonic Projection
4.7 Modeling Distortions and Misalignments
4.7.1 The Acton-Duxbury Model
4.7.2 The OpenCV Distortion Model
4.8 Conversion into Pixel Coordinates
4.9 Summary of Optical Navigation Geometry Calculations
5 Obtaining Optical Navigation Observables
5.1 Introduction
5.2 Centerfinding for Stars
5.2.1 Circular Gaussian Model
5.2.2 Elliptical Gaussian Model
5.2.3 Circular Cauchy Function
5.2.4 Centerfinding Using the Marginal Distribution
5.3 Reflectance Laws
5.3.1 Lambert Scattering
5.3.2 Lommel-Seeliger Scattering
5.3.3 Minnaert Scattering
5.3.4 Hapke Scattering
5.4 Centerfinding for Unresolved Bodies
5.5 Centerfinding for Resolved Bodies
5.5.1 Correlation
5.5.2 Limb Scanning
5.6 Centerfinding for Landmarks
6 Using Opnav Data in Orbit Determination
6.1 Dynamic Partials
6.2 Star Partials
6.3 Optical Partials
6.3.1 Pointing Angles
6.3.2 Focal Length
6.3.3 Distortion Parameters
6.3.4 Pixel Layout ("K Matrix") Parameters
6.3.5 Optical Axis Pixel Coordinates
6.4 Constructing the List of Estimated Parameters
6.5 Camera Calibration
6.5.1 Time-Varying Camera Parameters
Appendix A The Overlapping Plate Method
A.1 Development of Fundamental Catalogs
A.2 The Astrographic Catalog
A.3 Development of the Overlapping Plate Method
A.4 Application to Groundbased Astrometry
Glossary
References
Index
Much of the material in this chapter was originally presented in Owen et al. (2008). It has been updated for this book.
Optical navigation got its start at Jet Propulsion Laboratory (JPL) as an experiment on the Mariner 6 and 7 missions to Mars in 1969 (Duxbury and Breckenridge 1970, Duxbury 1970) and again on Mariner 9 in 1971 (Breckenridge and Acton 1972). The justification for opnav was to ensure quality navigation results at the outer planets: as radio tracking data are geocentric, a radio-only orbit determination solution will tend to become less accurate with increasing distance from the earth.
Opnav was used operationally for both Viking orbiters at Mars, but it came into its own with the Voyager missions to the outer planets. Pictures of the Galilean satellites of Jupiter, of Titan and the smaller icy satellites of Saturn, of the five classical Uranian satellites, and of Triton, Nereid, and Proteus (S/1989 N 1) at Neptune helped immensely to shrink the size of the B-plane error ellipse. Optical navigation engineers were also responsible for several of Voyager's discoveries. The serendipitous discovery of volcanic plumes on Io is best known, but optical navigators also found new satellites at all four outer planets and determined their orbits.
Both hardware and software advances have occurred since the 1970s. Vidicon television cameras have given way to charge-coupled device (CCD) and complementary metal-oxide-semiconductor (CMOS) detectors. Ground data processing has moved from mainframes to minicomputers and now to workstations. We once used special frame buffers and monitors for display; now, our workstations bring up pictures inside an X window. The software has progressed from a mixture of Fortran 66 and assembly language to Fortran 77 and C, and much of it has been rewritten in C++ and Python.
Perhaps the most promising advance in optical navigation technology is the migration from ground processing to onboard processing. JPL's onboard autonomous navigation system was demonstrated on Deep Space 1 and used on Stardust. Autonav's greatest success to date was on Deep Impact, where it ran on both the Impactor and Flyby spacecraft and guided the Impactor onto a collision course with the nucleus of comet 9P/Tempel 1 while the Flyby spacecraft was taking pictures of the event.
Opnav figures to play a prominent role in many future JPL missions: to small bodies, to Uranus and Neptune, and even to support precision landings on the moon or Mars. Whether the processing is done on the ground or onboard, whether the imager is like the dedicated Optical Navigation Camera flown on Mars Reconnaissance Orbiter or a science instrument, optical navigation data will continue to enable the kind of precision navigation which is required for mission success.
Optical navigation as practiced at JPL usually requires knowledge of several disciplines and interactions with both the flight team and the science team at various phases of a mission. Opnav analysts must know enough about the optical and physical characteristics of the onboard cameras to be able to simulate images and command pictures correctly. We must participate in orbit determination (OD) studies to find out how much optical data is necessary to meet navigation accuracy requirements; then we must negotiate for observing time, plan the pictures to be taken, develop the sequence products, and verify their correctness through the uplink process. After the pictures have been obtained, we must process them, extracting the observed locations of the images, and pass the results along to the rest of the navigation team. The optical navigation group must therefore have people who know about optics, some facets of astronomy, surface modeling, spacecraft commanding, image processing, and spacecraft navigation.
The first so-called "optical navigation experiment" was carried out in 1969 on Mariners 6 and 7 to Mars. Tom Duxbury and the late Bill Breckenridge used the "far-encounter planet sensor" to take pictures (50 for Mariner 6, 93 for Mariner 7) of Mars in the last two or three days before each spacecraft's flyby (Duxbury and Breckenridge 1970). Figure 2.1 shows one such image. They measured the location of Mars in each picture by putting a clear plastic overlay on top of a hard copy of the picture, matching a circle on the overlay to Mars, and reading off the coordinates. (The paper makes a point of saying that each picture was measured by two "observers" and the results averaged.) The Mars image centers thus obtained were transferred to punched cards.
Figure 2.1 Mariner 1969 picture of Mars.
Source: NASA.
There were no stars in the pictures; the spacecraft attitude came from real-time telemetry. Careful calibrations before launch and during cruise had given the orientation of the camera relative to the spacecraft. The calibrations also revealed distortions in the camera, which were used to correct the observed image locations.
The image location and camera attitude were then transformed into the observed inertial direction to Mars. The right ascension and declination were the optical observables (not the image coordinates as is now the case). These angles were fed into a navigation filter which read in the current radio OD solution (from magnetic tape) and updated it.
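The chain from measured image location to angular observable is simple enough to sketch. The Python fragment below is a minimal illustration under stated assumptions, not a reconstruction of the Mariner software: it assumes an ideal gnomonic (pinhole) camera with a known focal length in pixels and a known optical-axis pixel location, plus a telemetered camera-to-inertial rotation matrix; the function and parameter names are hypothetical.

```python
import numpy as np

def pixel_to_radec(line, sample, focal_length_px, boresight_px,
                   C_inertial_from_camera):
    """Turn a measured image location into right ascension and declination.

    Assumes an ideal gnomonic (pinhole) camera: `boresight_px` is the
    (sample, line) pixel pierced by the optical axis, `focal_length_px`
    is the focal length expressed in pixels, and `C_inertial_from_camera`
    is the 3x3 rotation from camera to inertial coordinates, taken here
    to come from real-time attitude telemetry.
    """
    # Line of sight in camera coordinates, boresight along +z.
    x = (sample - boresight_px[0]) / focal_length_px
    y = (line - boresight_px[1]) / focal_length_px
    u_cam = np.array([x, y, 1.0])
    u_cam /= np.linalg.norm(u_cam)

    # Rotate into the inertial frame using the telemetered attitude.
    u = C_inertial_from_camera @ u_cam

    # Express the unit vector as the two angular observables.
    ra = np.arctan2(u[1], u[0]) % (2.0 * np.pi)   # right ascension
    dec = np.arcsin(u[2])                          # declination
    return ra, dec
```

In practice the measured location would first be corrected for the calibrated camera distortions mentioned above before being run through a model like this.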
Real-time opnav operations were successful. The pictures were received and processed within the allotted time. The resulting solutions were generally within one sigma of the radio-only solutions, and the major axis of the B-plane error ellipse shrank by half.
After encounter, Duxbury devised a new technique (Duxbury 1970) for determining the center of Mars. Each line of the picture was scanned to find the lit limb, defined as the first of three successive pixels brighter than a threshold. As the limb of an ellipsoidal body projects into an ellipse in the focal plane, they fit an ellipse to the limb points, subject to a priori constraints on its shape from the known shape of Mars and the viewing geometry. The center of the ellipse was identified with the center of the planet. There were trends in the limb residuals indicating systematic effects, particularly near the poles, but the resulting data were good to about 1 pixel east-west and 0.3 pixel north-south. The worse performance in the horizontal direction was attributed to the fact that only the lit limb was used.
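Duxbury's limb-scan idea is easy to sketch in modern terms. The fragment below is a hedged illustration in Python, not the 1970 software: it applies the three-bright-pixels-in-a-row rule line by line and then fits an unconstrained general conic by least squares, whereas the original fit constrained the ellipse's shape using the known figure of Mars and the viewing geometry. All names are invented for illustration.

```python
import numpy as np

def scan_lit_limb(image, threshold):
    """Find one limb point per image line of a 2-D array `image`: the
    first pixel that begins a run of three successive pixels brighter
    than `threshold`."""
    points = []
    for line, row in enumerate(image):
        bright = row > threshold
        for s in range(len(row) - 2):
            if bright[s] and bright[s + 1] and bright[s + 2]:
                points.append((s, line))   # (sample, line)
                break
    return np.array(points, dtype=float)

def fit_ellipse_center(points):
    """Least-squares fit of a general conic
        a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    to the limb points, returning the center of the fitted ellipse."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y])
    coef, *_ = np.linalg.lstsq(A, np.ones(len(x)), rcond=None)
    a, b, c, d, e = coef
    # The center is the point where the conic's gradient vanishes.
    return np.linalg.solve(np.array([[2 * a, b], [b, 2 * c]]),
                           np.array([-d, -e]))
```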
Opnav was demonstrated again in the next mission, Mariner 9 in 1971. Tom Duxbury and Chuck Acton oversaw the creation of two sets of programs: the Optical Navigation Image Processing System (ONIPS), for picture display and image centerfinding, and the Optical Navigation Program set (ONP), for scene prediction, calculation of residuals and partial derivatives with respect to parameters of interest, and filtering. They planned 18 pictures of Deimos and 3 of Phobos, to be taken between 66 and 9 h before Mars orbit insertion. Deimos, farther from Mars, was the preferred target, and two of the Phobos pictures were timed to capture the satellite in transit across Mars. Several sets of calibration pictures, obtained during cruise, served to characterize the performance of the spacecraft attitude system, various camera misalignment angles, electromagnetic distortion in the camera's vidicon scan pattern, and the overall sensitivity of the camera.
Each of the 21 opnav pictures was analyzed within an hour of its receipt. Stars as faint as magnitude 8.9 were detected in 6-s exposures. The rms star residual, in pictures containing more than two stars, was 0.4 pixel. Results from the first 14 pictures were passed to the navigation team, which thereby obtained a more accurate OD at Mars than any previous mission. Improvements to the satellite ephemerides also enabled close-up imagery of Phobos and Deimos during the orbital phase of the mission.
The real benefit of Mariner 9, though, was the development of the opnav camera models, processing techniques, and team procedures. Much of what we do today traces its roots to 1971. ONIPS and ONP are still used, and even some of the program names are unchanged though the code has been rewritten several times.
Duxbury and Acton received NASA's Exceptional Scientific Achievement Medal for this work. They were also accorded the Institute of Navigation's Samuel M. Burka award for their paper (Duxbury and Acton 1972), along with the princely sum of $175 each - but they had to pay their own way to the ceremony (Figure 2.2).
The two Viking missions (1976) used optical navigation in operations not only on approach but also in orbit (Jerath 1978). The approach OD and the insertion maneuver were so accurate that the Viking 1 orbiter overflew the proposed landing site on its first orbit, rendering unnecessary two weeks of contingency operations. Dozens of opnav pictures taken by the orbiters enabled close encounters with both Martian satellites; a 20 km flyby of Deimos was in error by less than 2 km. Opnav was also useful in the orbiters' extended mission: as the amount of radio tracking data decreased, radio-only solutions became less sensitive to the node of the spacecraft's orbit on the plane of the...