The Journal of the Astronautical Sciences, Vol. XXX, No. 1, pp. 1-11,
January-March, 1982.
Copyright © 1982 by the American Astronautical Society, Inc.
Note: This web version is derived from an earlier draft of the paper and may differ in some substantive respects from the final published paper.
The 10-week, 10,000-man-hour project was hosted by the University of Santa Clara and co-directed by James Long and Timothy Healy. Team members were given the task of selecting and defining a number of representative space ventures which would benefit from or even require extensive application of machine intelligence and automation, and assessing existing and foreseeable AI technologies necessary to accomplish the proposed missions [1]. Major support in the field of artificial intelligence was provided by SRI International, and a number of industrial concerns made contributions in the area of system implementation. Further, the study received firm backing from NASA Headquarters personnel, including two unprecedented on-site personal visits by the Agency administrator, Dr. Robert A. Frosch. The strong NASA support signals a new perception of the tremendous potential of artificial intelligence in future space missions.
For purposes of the summer study several specific areas within artificial intelligence were taken as representative of highly sophisticated computer-based systems which may be required in future space applications, including: planning and problem-solving; perception; natural languages; expert systems; automation, teleoperation, and robotics; distributed data management; and cognition and learning [2,3,4]. The latter category includes the concept of an adaptive machine able to study a new situation or environment, formulate hypotheses about the environment, test those hypotheses with additional data, and decide whether or not a new hypothesis should be added to the machine's existing model of the environment. The 1980 study was predicated on the assumption that such machine capabilities will become available by the end of the present century.
NASA has a long record of using automation and computers in its space missions. However, in 1979 Agency activities were examined by an ad hoc advisory committee and criticized for being "5 to 15 years behind the leading edge in computer science and technology" [5]. The committee also found that "the advances and developments in machine intelligence and robotics needed to make future space missions economical and feasible will not happen without a major long-term commitment and centralized, coordinated support." A part of NASA's response to this criticism was to commission the University of Santa Clara study, the results of which are briefly described herein.
IESIS has the following major features: (1) An intelligent satellite network which gathers data in a goal-directed manner, based on specific requests for observation (such as a farmer requesting once-a-week surveillance of his cornfield) and on prior knowledge contained in a detailed, self-correcting "world model"; (2) a user-oriented "natural language" interface which permits requests to be satisfied in plain English, without additional human intervention, using information retrieved from the system library or from direct observations made by a member satellite within the network; (3) a medium-level on-board decision-making capability that optimizes sensor utilization without compromising user requests; and (4) a library of stored information which provides a detailed set of all significant planetary features and resources, adjustable for seasonal and other identifiable variations, and accessible through a comprehensive cross-referencing system.
The heart of IESIS is, however, the world model, a self-correcting description of the environment under observation to any desired level of detail. This eliminates the need for acquiring and storing large quantities of redundant information by making use of two important AI elements: first, a "state component," which defines the physical state of the world to a predetermined accuracy and completeness at some specified time; and, second, a "theory component," which permits derivation of parameters of the world state not explicitly stored in the state component and allows forecasts of the time evolution of the state of the world. IESIS retains the complete Earth model in a ground-based central systems computer and an appropriate subset thereof on board the main satellite. Orbiter sensors still collect extensive data, but only changes in the world model are downlinked, rather than the entire data stream. The result is an effective data compression system which removes redundancy over time.
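The change-only downlink can be illustrated with a minimal sketch, assuming a toy state component (stored parameter values), a toy theory component (simple per-parameter trends), and an arbitrary tolerance; the names and numbers below are illustrative and are not drawn from the original study.

```python
from dataclasses import dataclass

@dataclass
class WorldModel:
    """On-board subset of the world model (illustrative only)."""
    state: dict    # state component: parameter values at a reference time
    trends: dict   # theory component: simple per-parameter rates of change

    def predict(self, key, dt):
        # Derive the expected value at reference_time + dt from state + theory.
        return self.state[key] + self.trends.get(key, 0.0) * dt

def downlink_changes(model, observations, dt, tolerance=0.05):
    """Return only the observations that disagree with the model's prediction;
    everything else is redundant over time and need not be transmitted."""
    deltas = {}
    for key, observed in observations.items():
        predicted = model.predict(key, dt)
        if abs(observed - predicted) > tolerance:
            deltas[key] = observed        # downlink the correction ...
            model.state[key] = observed   # ... and apply it to the on-board model
    return deltas

# Example: only one of three sensed parameters departs from the prediction,
# so only that correction is sent to the ground-based central systems computer.
model = WorldModel(state={"soil_moisture": 0.30, "snow_cover": 0.10, "ndvi": 0.55},
                   trends={"snow_cover": -0.01})
observations = {"soil_moisture": 0.31, "snow_cover": 0.01, "ndvi": 0.56}
print(downlink_changes(model, observations, dt=3.0))   # {'snow_cover': 0.01}
```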
One of the major problems with present interplanetary exploration strategies is that they typically require three distinct stages: initial reconnaissance, exploration, and intensive study. Especially in the case of relatively distant bodies, the sequential character of the examination leads to inordinately lengthy total investigation times. The team concluded that the three stages can be telescoped into a single mission by incorporating advanced machine intelligence to produce a single integrated scientific phase of discovery. On-board AI systems are required to make certain initial decisions about sites to be explored in detail, the nature of the exploration, and the best ways to conduct intensive studies.
As a preliminary shakedown voyage for this new technology which could help pave the way for more ambitious exploratory ventures both within and beyond the Solar System, the team proposed a demonstration mission to Titan (the largest natural satellite of Saturn). This would be capable of independent operation starting from launch in low Earth orbit; navigation and propulsion system control during interplanetary transfer to Saturn; rendezvous with Titan and orbital insertion; automatic landing site decision-making; deployment of various subsatellites, landers, and fliers on and about Titan; and subsequent monitoring and control of atmospheric and surface exploration.
Of course, decisions about succeeding steps in the exploration of Titan could well be made directly by earthbound scientists, since the transmission delay time is only about one hour. However, when explorer craft are sent to other star systems the delay time will stretch to years, and decisions concerning successive stages of investigation must be made on board the spacecraft. The purposes of the Titan mission are to enhance the capabilities of semi-autonomous vehicles in the short term, and to refine and demonstrate the effectiveness and versatility of fully autonomous exploration in the long term.
A major finding of the study team was that automated hypothesis formation is highly desirable for sophisticated interplanetary missions within the Solar System but is absolutely essential for interstellar exploration. Machine intelligences capable of unassisted scientific and operational hypothesis formation must be able to handle three distinct classes of inferential thinking: (1) analytic inference (application of existing scientific classification schemes), (2) inductive inference (logical processes for generating universal statements about an entire domain based on quantitative or symbolic information from a restricted part of that domain), and (3) abductive inference (a method for evolving new information classification schemes using old theories, old schemes, old predictions, and novel contradictory data as inputs). An important feature of the Titan spacecraft is that it would carry a world model of the target for exploration, the best available record of all pertinent features of the body in view of the research to be conducted. The probe would use its sensors to accumulate data about Titan, generate hypotheses about the sensed environment, test these hypotheses using new data, then update the scientific model as required.
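The sense-generate-test-update cycle just described can be expressed as a simple control loop. The sketch below is a schematic rendering under assumed interfaces (the ScientificModel class and the propose and test callables are hypothetical), not the study team's design.

```python
class ScientificModel:
    """Holds accepted hypotheses plus anomalies awaiting explanation."""
    def __init__(self):
        self.accepted, self.anomalies = [], []
    def add(self, hypothesis):
        self.accepted.append(hypothesis)
    def note_anomaly(self, hypothesis, data):
        self.anomalies.append((hypothesis, data))

def explore(model, sense, propose, test, cycles=10):
    """One pass of the on-board hypothesis-formation cycle per iteration.

    sense   : callable returning a batch of new observations
    propose : callable (model, data) -> candidate hypotheses
    test    : callable (hypothesis, data) -> True if consistent
    """
    for _ in range(cycles):
        data = sense()                            # accumulate data about the target
        for hypothesis in propose(model, data):   # generate candidate hypotheses
            if test(hypothesis, sense()):         # test against additional data
                model.add(hypothesis)             # update the scientific model
            else:
                model.note_anomaly(hypothesis, data)
    return model
```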
Such a mission requires important advances in visual, tactile, and force sensors, machine decision-making, adaptability, mobility, and many other areas of AI technology. Rapid advancements now are being made in many of these fields in connection with the automation of factories on Earth. It is, however, in NASA's interest to promote additional directed research into problems of manufacturing unique to the space environment.
The Replicating Systems Concepts team defined, as an ultimate challenge for advanced artificial intelligence and automation, a factory on the Moon which completely replicates itself using only lunar materials and solar energy. The basic concept of machine self-reproduction was shown theoretically feasible decades ago by John von Neumann [13], but actual implementation will be extremely difficult. To arrive at a system capable of building all of its own components and then assembling them into an exact duplicate will require major advances in automated materials processing, computer-aided manufacturing and parts fabrication (CAD/CAM technology), robot assembly techniques, storage and inventory maintenance, inspection and repair capabilities, scheduling, and other aspects of general factory management requiring very sophisticated AI techniques.
The central theoretical issue is closure: Can a real machine system produce and assemble all of the kinds of parts of which it is itself composed? In a generalized terrestrial industrialized economy manned by humans the answer clearly is yes (e.g., American industry), since the set of machines which make all other machines is a subset of the set of all machines. In space, a few percent of total system mass (in particular, those items most difficult to produce, such as ball bearings, motors, or integrated circuits) could feasibly be supplied by Earth-based manufacturers as "vitamin parts." Alternatively, the system could be designed with components of very limited complexity [14]. The minimum size of a self-sufficient "machine economy" remains unknown.
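The closure question can be posed concretely as a fixed-point computation over a parts-production graph: starting from the machines initially present plus any imported "vitamin parts," can every part in the system eventually be produced? The following sketch is a hypothetical formulation of that check, not an analysis from the study.

```python
def producible_parts(producers, seed_machines, vitamin_parts=frozenset()):
    """Compute the set of parts a machine system can eventually produce.

    producers     : dict mapping each part to (machine_needed, input_parts)
    seed_machines : machines initially present in the factory
    vitamin_parts : parts always supplied from Earth
    Closure holds if the returned set covers every part in `producers`.
    """
    have = set(seed_machines) | set(vitamin_parts)
    changed = True
    while changed:                      # fixed-point iteration
        changed = False
        for part, (machine, inputs) in producers.items():
            if part not in have and machine in have and set(inputs) <= have:
                have.add(part)
                changed = True
    return have

# Hypothetical example: the chip and raw regolith come from outside the system;
# everything else must be producible on site for closure to hold.
producers = {
    "casting":   ("furnace",   ["regolith"]),
    "motor":     ("assembler", ["casting", "chip"]),
    "furnace":   ("assembler", ["casting"]),
    "assembler": ("assembler", ["motor", "casting"]),
}
have = producible_parts(producers, seed_machines={"furnace", "assembler"},
                        vitamin_parts={"chip", "regolith"})
print(set(producers) <= have)   # True -> the part set is closed under production
```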
• Mapping and modeling criteria for creation of compact world models.
• Autonomous mapping from orbital imagery.
• Efficient, rapid image processing based on comparisons with world model information.
• Advanced pattern recognition, signature analysis algorithms, and multisensor data/knowledge fusion.
• Explicit models of system users.
• Fast, high-density computer systems suitable for space application of world model computations and processing.
Analytic inferences have received the most complete treatment within the AI research community. For example, rule-based expert systems can apply detailed diagnostic classification schemes to data on events and processes in some given domain and produce appropriate identifications. However, these systems consist solely of complex diagnostic rules describing the phenomena in some domain; they do not include models of the underlying physical processes of these phenomena. In general, state-of-the-art AI treatments of analytic inference fail to link detailed classification schemes with the fundamental models required to deploy this detailed knowledge with maximal efficiency.
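To make the rule-only character of such systems concrete, a minimal forward-chaining classifier can be written as a list of condition-conclusion rules; the rules and feature names below are invented for illustration, and nothing in the program models the physical processes that make the rules true.

```python
# Minimal rule-based classifier: each rule is (conditions, conclusion).
# The knowledge consists entirely of classification rules; there is no model
# of the underlying physics that would explain why the rules hold.
RULES = [
    ({"high_albedo", "low_temperature"}, "ice_field"),
    ({"low_albedo", "linear_feature"},   "lava_channel"),
    ({"high_albedo", "circular_rim"},    "fresh_crater"),
]

def classify(observed_features):
    """Return every conclusion whose conditions are all present in the data."""
    observed = set(observed_features)
    return [conclusion for conditions, conclusion in RULES if conditions <= observed]

print(classify({"high_albedo", "low_temperature", "circular_rim"}))
# ['ice_field', 'fresh_crater']
```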
Inductive inferences receive a less complete treatment than analytic inferences, although some significant advances have been made. For instance, a group at the Czechoslovak Academy of Sciences has developed formal techniques for moving from data about a restricted number of members of a domain, to observation statements which summarize the main features or trends of these data, to a theoretical statement which asserts that an abstractive feature or mathematical function holds for all members of the domain [15]. Another research effort attempts to integrate fundamental models with specific abstractive, or generalizing, techniques. However, this work is at the level of theory development; a working system has yet to be implemented in hardware.
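A toy version of the inductive step, moving from a restricted sample to a universal statement, might look like the sketch below; the candidate predicates and the acceptance criterion (no counterexample in the sample) are illustrative simplifications, not the formal techniques cited above.

```python
# Toy inductive inference: promote a feature observed in every sampled member
# of a restricted part of the domain to a universal statement about the domain.

def induce(samples, candidate_predicates):
    """samples: observed domain members (dicts of measurements).
    candidate_predicates: dict name -> function(member) -> bool.
    Returns the universal statements supported by every sample."""
    return [f"for all x in domain: {name}(x)"
            for name, predicate in candidate_predicates.items()
            if all(predicate(member) for member in samples)]

samples = [{"density": 3.1, "magnetic": False},
           {"density": 3.4, "magnetic": False},
           {"density": 2.9, "magnetic": False}]
predicates = {"density_below_4": lambda m: m["density"] < 4.0,
              "nonmagnetic":     lambda m: not m["magnetic"]}
print(induce(samples, predicates))
# ['for all x in domain: density_below_4(x)', 'for all x in domain: nonmagnetic(x)']
```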
Abductive inference has scarcely been touched by the AI community. Tentative first steps have been taken, as, for example, current efforts in "non-monotonic logic" presented at a recent AI conference held at Stanford University [16,17]. These attempts to deal with the invention of new or revised knowledge structures are hampered (and finally undermined) by their lack of a general theory of abductive inference. One notable exception is the recent work of Frederick Hayes-Roth [18], who takes a theory of abduction developed by Imre Lakatos for mathematical discovery and operationalizes two low-level members of Lakatos' family of abductive inferential types. Still, this work is but a preliminary step toward the implementation of workable systems of mechanized abduction.
In those instances in which the environment is highly restricted with respect to both the domain of discourse (semantics) and the form of appropriate statements (syntax), serviceable interfaces are just possible with state-of-the-art techniques. However, any significant relaxation of semantic and syntactic constraints produces very difficult problems in AI. For general use the following capabilities are highly desirable, and probably necessary, for efficient and effective communication: a domain model, a user model (general, idiosyncratic, contextual), a dialogue model, an explanatory capability, and reasonable default assumptions.
Recognition and understanding of fluent spoken language adds the further complexity of phoneme ambiguity to that of ordinary keyed language. In noise-free environments involving restricted vocabularies, it is possible to achieve relatively high recognition accuracy, though presently not in real time. In more realistic operating scenarios, recognition of fluent speech divorced from semantic understanding is not likely to succeed. The critical need is the coupling of a linguistic understanding system to the spoken natural language recognition process. On a related research front, the physical aspects of machine speech generation are ready for applications, although some additional "cosmetic" work is still necessary for general use.
Some motor-oriented transfer of information from humans to machines already has found limited use, such as light pens, joysticks, and head-eye position detectors employed for military target acquisition. An interesting alternative for space applications is the "show and tell" approach. In this method a human manipulates an iconic model of the real environment. A robot "watches" these actions (perhaps complemented by some further information spoken by the human operator), then duplicates them in the real environment. Robot action need not be real time with respect to human operator action: the machine may analyze the overall plan, ask questions, and cooperatively optimize the original course of action. The operator plays the role of "editor" of the evolving robot program. Show and tell tasks can be constructed piecemeal, thus allowing a job requiring many simultaneous and coordinated events to be described to the machine. Finally, the fidelity of robot actions to the human example may vary in significant ways (e.g., size scale, mass scale, or speed of performance), allowing the machine to optimize the task in a manner alien to human thinking.
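A minimal rendition of the show-and-tell idea is a recorded action list that the operator can edit and the robot can rescale before execution; the class and method names below are hypothetical, intended only to make the workflow concrete.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """One step demonstrated by the operator on the iconic model."""
    verb: str                 # e.g. "move", "grasp", "fasten"
    target: str               # object in the iconic (and real) environment
    displacement_m: float = 0.0

class ShowAndTellProgram:
    """Program built piecemeal by watching the operator, then edited and replayed."""
    def __init__(self):
        self.steps = []
    def record(self, action):            # robot "watches" a demonstrated step
        self.steps.append(action)
    def edit(self, index, **changes):    # operator acts as editor of the program
        for key, value in changes.items():
            setattr(self.steps[index], key, value)
    def replay(self, scale=1.0):         # fidelity may differ, e.g. in size scale
        for step in self.steps:
            yield Action(step.verb, step.target, step.displacement_m * scale)

program = ShowAndTellProgram()
program.record(Action("move", "panel_A", 0.25))
program.record(Action("fasten", "panel_A"))
program.edit(0, displacement_m=0.30)       # operator revises the demonstrated move
for step in program.replay(scale=2.0):     # robot executes at a different size scale
    print(step)
```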
Extraction and purification technologies for processing raw materials on the lunar surface or elsewhere are beyond the current state of the art. Sophisticated, highly automated chemical, electrical, and crystallization techniques must be developed yielding a far broader range of elements and materials than is presently possible. Product component fabrication involves primary shaping and finishing operations. The shaping technologies of greatest utility for fully automatic space manufacturing are casting and powder metallurgy, both of which can produce parts ready for use without further processing. Elimination of manual mold preparation should be sought, possibly through the use of computer-controlled containerless forming. Laser and electron-beam techniques appear promising for highly automated finishing. Product component assembly requires robot/teleoperator vision and end-effectors which are smart, self-preserving, and dexterous. Placement accuracy of 1/1000th inch and repeatability of 5/10,000ths inch are desirable in electronics assembly tasks. High-capacity arms and multi-arm coordination must be developed.
Control of a large-scale space manufacturing or replicating system demands a distributed, hierarchical, dynamic, machine-intelligent information system. For inventory control, an automated storage and retrieval system well-suited to the space environment is necessary. The ability to gauge and measure products (inspection or quality control) is essential, and a general-purpose high-resolution AI vision module is needed for quality control of complex products and components. Advances in artificial intelligence should also include the embodiment of managerial and repair skills in an autonomous, adaptive-control expert system.
An important goal of teleoperator development must be to give the operator the ability to sense remote environments as realistically as possible, an effect termed "telepresence" by Minsky [8,9]. The capacity to closely relate action and reaction at the remote site and at the control room requires major advances in manipulators (coordination and cooperation of multiple manipulator arms and hands); force reflection and servoing; visual, audio, tactile, radar/proximity and other sensors; comprehension of variable conditions of scene illumination, wide or narrow viewing fields, and three-dimensional information via stereo displays, planar beams, or holograms; and systems to circumvent the disorienting effects of communication time delays in sensor/effector feedback loops.
Two other distinct classes of teleoperators may be required for complex, large-scale space operations such as a manufacturing facility or replicating factory. First is a free-flying system combining the technology of the Manned Maneuvering Unit with the safety and versatility of remote manipulation. The free-flying teleoperator can be used for satellite servicing, stockpiling and handling of materials, and other operations requiring autonomous rendezvous, stationkeeping, and docking capabilities. Second, mobile or walking teleoperators may be useful in various manufacturing processes and for handling hazardous materials. The device would automatically move to a designated internal or external work site and perform either preprogrammed or remotely controlled functions. For manufacturing and repair such a system could transport astronauts to the site. Manipulators could be locally controlled for view/clamp/tool operations or as a mobile workbench.
Computer Science and Technology
NASA's role, both now and in the future, is fundamentally one of information acquisition, processing, analysis, and dissemination. This requires a strong institutional expertise in computer science and technology. General computer science research avenues and capabilities required to implement the types of missions proposed by the teams include: computer systems, software, management services, and computer systems engineering. State-of-the-art technology already is a part of Agency programs in the natural sciences, engineering, and space simulation and modeling. A substantial commitment to research in machine intelligence, real-time systems, information retrieval, and supervisory and computer systems is required.