Table of Contents

Foreword by Herbert A. Simon

Preface

1 Introduction

1.1 Why Autonomous Learning Systems?
1.2 What Is Autonomous Learning?
1.3 Approaches to Autonomous Learning
1.4 What Is in This Book?

2 The Basic Definitions

2.1 Learning from the Environment
2.2 The Learner and Its Actions, Percepts, Goals and Models
2.3 The Environment and Its Types
2.3.1 Transparent Environment
2.3.2 Translucent Environment
2.3.3 Uncertain Environment
2.3.4 Semicontrollable Environment
2.4 Examples of Learning from the Environment
2.5 Summary

3 The Tasks of Autonomous Learning

3.1 Model Abstraction
3.1.1 The Choice of Model Forms
3.1.2 The Evaluation of Models
3.1.3 The Revision of Models
3.1.4 Active and Incremental Approximation
3.2 Model Application
3.3 Integration: The Coherent Control
3.4 Views from Other Scientific Disciplines
3.4.1 Function Approximation
3.4.2 Function Optimization
3.4.3 Classification and Clustering
3.4.4 Inductive Inference and System Identification
3.4.5 Learning Finite State Machines and Hidden Markov Models
3.4.6 Dynamic Systems and Chaos
3.4.7 Problem Solving and Decision Making
3.4.8 Reinforcement Learning
3.4.9 Adaptive Control
3.4.10 Developmental Psychology
3.5 Summary

4 Model Abstraction in Transparent Environments

4.1 Experience Spaces and Model Spaces
4.2 Model Construction via Direct Recording
4.3 Model Abstraction via Concept Learning
4.4 Aspects of Concept Learning
4.4.1 The Partial Order of Models
4.4.2 Inductive Biases: Attribute-Based and Structured
4.4.3 Correctness Criteria: PAC and Others
4.5 Learning from Attribute-Based Instances and Related Algorithms
4.6 Learning from Structured Instances: The FOIL Algorithm
4.7 Complementary Discrimination Learning (CDL)
4.7.1 The Framework of Predict-Surprise-Identify-Revise
4.8 Using CDL to Learn Boolean Concepts
4.8.1 The CDL1 Algorithm
4.8.2 Experiments and Analysis
4.9 Using CDL to Learn Decision Lists
4.9.1 The CDL2 Algorithm
4.9.2 Experiments and Analysis
4.10 Using CDL to Learn Concepts from Structured Instances
4.11 Model Abstraction by Neural Networks
4.12 Bayesian Model Abstraction
4.12.1 Concept Learning
4.12.2 Automatic Data Classification
4.12.3 Certainty Grids
4.13 Summary

5 Model Abstraction in Translucent Environments

5.1 The Problems of Construction and Synchronization
5.2 The L* Algorithm for Learning Finite Automata
5.3 Synchronizing L* by Homing Sequences
5.4 The CDL+ Framework
5.5 Local Distinguishing Experiments (LDEs)
5.6 Model Construction with LDEs
5.6.1 Surprise
5.6.2 Identify and Split
5.6.3 Model Revision
5.7 The CDL+1 Learning Algorithm
5.7.1 Synchronization by LDEs
5.7.2 Examples and Analysis
5.8 Discovering Hidden Features When Learning Prediction Rules
5.9 Stochastic Learning Automata
5.10 Hidden Markov Models
5.10.1 The Forward and Backward Procedures
5.10.2 Optimizing Model Parameters
5.11 Summary

6 Model Application

6.1 Searching for Optimal Solutions
6.1.1 Dynamic Programming
6.1.2 The A* Algorithms
6.1.3 Q-Learning
6.2 Searching for Satisficing Solutions
6.2.1 The Real-Time A* Algorithm
6.2.2 Means-Ends Analysis
6.2.3 Distal Supervised Learning
6.3 Applying Models to Predictions and Control
6.4 Designing and Learning from Experiments
6.5 Summary

7 Integration

7.1 Integrating Model Construction and Model Application
7.1.1 Transparent Environments
7.1.2 Translucent Environments
7.2 Integrating Model Abstraction and Model Application
7.2.1 Transparent Environments
7.2.1.1 Distal Learning with Neural Networks
7.2.1.2 Integrating Q-Learning with Generalization
7.2.1.3 Integration via Prediction Sequence
7.2.2 Translucent Environments
7.2.2.1 Integration Using the CDL+ Framework
7.3 Summary

8 The LIVE System

8.1 System Architecture
8.2 Prediction Rules: The Model Representation
8.3 LIVE's Model Description Language
8.3.1 The Syntax
8.3.2 The Interpretation
8.3.3 Matching an Expression to a State
8.4 LIVE's Model Application
8.4.1 Functionality
8.4.2 An Example of Problem Solving
8.4.3 Some Built-in Knowledge for Controlling the Search
8.5 Summary

9 Model Construction through Exploration

9.1 How LIVE Creates Rules from Objects' Relations
9.2 How LIVE Creates Rules from Objects' Features
9.2.1 Constructing Relations from Features
9.2.2 Correlating Actions with Features
9.3 How LIVE Explores the Environment
9.3.1 The Explorative Plan
9.3.2 Heuristics for Generating Explorative Plans
9.4 Discussion

10 Model Abstraction with Experimentation

10.1 The Challenges
10.2 How LIVE Revises Its Rules
10.2.1 Applying CDL to Rule Revision
10.2.2 Explaining Surprises in the Inner Circles
10.2.3 Explaining Surprises in the Outer Circles
10.2.4 Defining New Relations for Explanations
10.2.5 When Overly Specific Rules Are Learned
10.3 Experimentation: Seeking Surprises
10.3.1 Detecting Faulty Rules during Planning
10.3.2 What Is an Experiment?
10.3.3 Experiment Design and Execution
10.3.4 Related Work on Learning from Experiments
10.4 Comparison with Previous Rule-Learning Methods
10.5 Discussion

11 Discovering Hidden Features

11.1 What Are Hidden Features?
11.2 How LIVE Discovers Hidden Features
11.3 Using Existing Functions and Terms
11.4 Using Actions as Well as Percepts
11.5 The Recursive Nature of Theoretical Terms
11.6 Comparison with Other Discovery Systems
11.6.1 Closed-Eye versus Open-Eye Discovery
11.6.2 Discrimination and the STABB System
11.6.3 LIVE as an Integrated Discovery System

12 LIVE's Performance

12.1 LIVE as LOGIC1
12.1.1 Experiments with Different Goals
12.1.2 Experiments with Different Explorations
12.1.3 Experiments with Different Numbers of Disks
12.2 LIVE as LOGIC2 (Translucent Environments)
12.3 LIVE on the Balance Beam
12.3.1 Experiments with Training Instances in Different Orders
12.3.2 Experiments with the Constructors in Different Orders
12.3.3 Experiments with Larger Sets of Constructors
12.4 LIVE's Discovery in Action-Dependent Environments
12.5 LIVE as a HAND-EYE Learner
12.6 Discussion

13 The Future of Autonomous Learning

13.1 The Gap Between Interface and Cognition
13.2 Forever Being Humans' Friends

Appendix A: The Implementations of the Environments

Appendix B: LIVE's Running Trace

Bibliography

Index