Embodied Intelligence

 

Organization of Semantic and Episodic Memory in Motivated Learning of Robots

The main goal of the project is to extend the current state of the art in mechanisms for creating and organizing semantic and episodic memory in motivated learning of robots. Based on these mechanisms, one can build the memory of autonomous systems operating in a changing, complex environment. The episodic memory will be created in interaction with a dynamically constructed semantic memory. This is an innovative approach that goes beyond the current understanding of how both types of memory are organized.

The semantic memory represents knowledge as a set of concepts and the associations between them. Knowledge grows gradually with accumulated experience. The project will develop the semantic memory structure and algorithms for its self-organization.
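As an illustration, the concept-and-association structure described above can be sketched as a weighted graph whose edges are strengthened with accumulated experience. This is a minimal toy sketch; the class and method names are hypothetical, not from the project:

```python
from collections import defaultdict

class SemanticMemory:
    """Concepts as nodes; association strengths grow with experience."""
    def __init__(self):
        self.assoc = defaultdict(float)  # (concept_a, concept_b) -> strength

    def observe(self, concepts):
        # Strengthen associations between all concepts co-occurring
        # in a single experience.
        for a in concepts:
            for b in concepts:
                if a != b:
                    self.assoc[(a, b)] += 1.0

    def related(self, concept, top=3):
        # Concepts most strongly associated with the given one.
        pairs = [(b, w) for (a, b), w in self.assoc.items() if a == concept]
        return [b for b, _ in sorted(pairs, key=lambda p: -p[1])[:top]]

mem = SemanticMemory()
mem.observe(["cup", "table", "kitchen"])
mem.observe(["cup", "kitchen"])
print(mem.related("cup"))  # "kitchen" ranks first (seen together twice)
```

Repeated co-occurrence gradually dominates the ranking, which is the "knowledge grows with experience" property in miniature.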

The episodic memory records sequences of episodes that are relevant to the system's operation. The main tasks in constructing an episodic memory system are creating episodes, recalling memorized episodes, and gradually forgetting less useful ones. The project aims to implement these tasks. So far, no computational models have integrated both memory types.
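The three tasks named above (creating, recalling, and gradually forgetting episodes) can be illustrated with a toy model; the usefulness scoring and decay constants here are assumptions for illustration, not the project's design:

```python
class EpisodicMemory:
    """Toy episodic memory: store episodes, recall by cue, forget the least useful."""
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.episodes = []  # each entry: [list_of_events, usefulness]

    def store(self, events, usefulness=1.0):
        # Creating an episode: a one-shot record of an event sequence.
        self.episodes.append([list(events), usefulness])

    def recall(self, cue):
        # Recalling: return the most useful episode containing the cue;
        # a successful recall reinforces that episode's usefulness.
        matches = [ep for ep in self.episodes if cue in ep[0]]
        if not matches:
            return None
        best = max(matches, key=lambda ep: ep[1])
        best[1] += 0.5
        return best[0]

    def forget(self):
        # Gradual forgetting: decay usefulness, then drop the weakest
        # episodes beyond the memory capacity.
        for ep in self.episodes:
            ep[1] *= 0.9
        self.episodes.sort(key=lambda ep: -ep[1])
        del self.episodes[self.capacity:]
```

Episodes that are recalled often survive forgetting, while rarely used ones decay away, which is the intended "less useful episodes fade" behavior.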

Developing Effective Mechanisms for Robot Perception Using Motivated Learning and Self-Organizing Associative Memory

The main objective of the proposed research is to develop new, effective perceptual mechanisms using a generalized idea of motivated learning (ML), together with new mechanisms for associative learning and inference. In order to use perception to acquire knowledge by interacting with the environment, we plan to refine the associative object recognition and scene representation system, supported by the activities of a robot. We propose to build and test an innovative visual and acoustic perception system for a robot, based on the mechanisms of episodic memory. Our hypothesis is that the perception of visual and audio stimuli will bring the best results when learning systems with memory are applied, capable of gathering and modeling knowledge and creating associative memory for arbitrary spatio-temporal patterns. Thus, one of the challenges of this project is to base the perceptual mechanisms of a motivated agent on visual saccades, associative mechanisms, and an integrated associative memory model.

An additional goal of the project is to create new mechanisms of self-organized semantic memory using attention focus and attention switching, which, in cooperation with the episodic memory, will yield a contextual representation of a robot's actions in its environment. Semantic and episodic memory consolidation is required in order to extend and generalize ML methods. Therefore, new inference mechanisms based on the associative memory model will be proposed. Our hypothesis is that using the associative semantic memory and sequential episodic memory, an autonomous robot can recognize objects and scenes, and predict the outcomes of its actions. Completing these objectives will enhance the capacity of robots to operate in a complex environment. The project is performed by the University of Information Technology and Management in Rzeszow, Poland.

Anticipation-Based Sequence Learning in Spatio-Temporal Memories

 

Temporal sequence learning is one of the most critical components of human intelligence. Prediction is an essential element of a temporal sequence learning model. By predicting correctly, the machine indicates that it knows the current sequence and requires no additional learning. When a prediction is incorrect, learning is triggered, and the machine learns the new input sequence as soon as the sequence is completed.
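The prediction-gated learning rule described above can be sketched as follows, assuming a toy first-order model that stores one successor per symbol (all names are illustrative):

```python
class SequenceLearner:
    """Learns next-symbol transitions only when its prediction fails."""
    def __init__(self):
        self.next_of = {}  # current symbol -> predicted next symbol

    def step(self, current, actual_next):
        predicted = self.next_of.get(current)
        if predicted == actual_next:
            return True    # correct prediction: sequence is known, no learning
        self.next_of[current] = actual_next  # misprediction triggers learning
        return False

learner = SequenceLearner()
pairs = [("A", "B"), ("B", "C"), ("A", "B"), ("B", "C")]
results = [learner.step(c, n) for c, n in pairs]
# first exposure to each transition mispredicts; the repetition is predicted
print(results)  # [False, False, True, True]
```

The key property is that learning effort is spent only on novel or changed sequences, exactly as the paragraph above describes.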

 

In action planning, a temporal sequence of planned actions is generated in search of a satisfactory solution (one that lowers the pain signal). At the end of the sequence (when a solution is discovered), there is a need to remember the steps of this sequence. In the proposed organization, the same network will be responsible for storing and playing back various sequences, thus providing a short-term memory (STM). While the basic role of this STM is to immediately recall a temporal sequence, it is also possible to use this sequence as a training input to the long-term memory (LTM). By repeating the same STM sequence a number of times, the LTM can be trained, thus avoiding the requirement of one-shot learning in the LTM.
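A minimal sketch of this STM-to-LTM replay scheme, under the assumption that the STM stores a sequence in one shot while the LTM strengthens transitions only incrementally (constants and names are illustrative):

```python
class STM:
    """Short-term memory: stores and replays one sequence in a single shot."""
    def __init__(self):
        self.sequence = []

    def store(self, sequence):
        self.sequence = list(sequence)

    def replay(self):
        return list(self.sequence)

class LTM:
    """Long-term memory: strengthens transitions a little on each exposure,
    so it needs repeated replays rather than one-shot learning."""
    def __init__(self):
        self.strength = {}

    def train(self, sequence):
        for a, b in zip(sequence, sequence[1:]):
            self.strength[(a, b)] = self.strength.get((a, b), 0.0) + 0.2

    def knows(self, a, b, threshold=0.5):
        return self.strength.get((a, b), 0.0) >= threshold

stm, ltm = STM(), LTM()
stm.store(["reach", "grasp", "lift"])  # remembered immediately after one success
for _ in range(5):                     # replaying the STM trace trains the LTM
    ltm.train(stm.replay())
```

After several replays the LTM reliably reproduces the transitions, even though a single exposure would not have been enough.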

 

Models and Methods of Adaptive Control in Intelligent Systems

 

Intelligent agents develop their understanding and skills through interaction with the environment.  They explore the environment in search of solutions to their goals.  While some goals are well defined and easy to measure, others are very complex and require a complex evaluation process.  It is important to understand how these higher-level goals are formulated and what role they play in the self-organization of memory.  It is also important to understand how multiple goals compete for attention and how they are managed internally.  A recently conceived low-level model proposes that the emergence of higher-level goals is correlated with the emergence of abstract perceptions and complex skills.  It is important to develop and test models in which goal creation is an integral part of learning.

 

It is not clear how the multiple goals of an agent might be organized in relation to each other, how they might come about and self-organize, or how environmental manipulations might facilitate or impede their establishment and organization. We must first establish what models are possible and then use these models to hypothesize contextual (e.g., instructional) manipulations that should facilitate adaptation to, and performance in, such dynamic contexts.  Structural models of embodied intelligence that link the goal-creation mechanism to perception and action need to be developed and tested.  We need to learn how multiple goals can be evaluated and implemented in such structures, and how the importance and urgency of a goal affect attention shifts and the selection of an action.

 

Building Invariant Sensory Representations through Active Vision

 

The objective of this work is to investigate natural ways of building sensory representations in intelligent systems, using self-organized learning that integrates continuous observation and saccadic movements. The aim of this biologically motivated approach is to achieve visual perception through retina-like sampling of high-resolution images with a lower-resolution artificial retina, combined with sensory-motor coordination.
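A toy version of such retina-like sampling on a small image grid, keeping full resolution near the fixation point and averaging the periphery (the foveation scheme here is an illustrative assumption, not the project's retina model):

```python
def retina_sample(image, fx, fy, fovea=1):
    """Sample a 2D image (list of lists): keep full resolution in a small
    foveal window around the fixation point (fx, fy), and average each
    2x2 peripheral block into a single coarse value."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            if abs(y - fy) <= fovea and abs(x - fx) <= fovea:
                # foveal region: keep all four pixels
                out.extend(image[y][x:x + 2] + image[y + 1][x:x + 2])
            else:
                # peripheral region: one averaged value per 2x2 block
                block = image[y][x:x + 2] + image[y + 1][x:x + 2]
                out.append(sum(block) / 4.0)
    return out

image = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
sample = retina_sample(image, 0, 0)   # fixate the top-left corner
# the fixated block keeps 4 pixels; the other three blocks shrink to 1 value each
print(len(sample))  # 7
```

Moving the fixation point (a saccade) changes which region is sampled in detail, which is how a low-resolution retina can accumulate a high-resolution representation over time.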

 

The system will use an artificial retina model built by modeling the distributions of rods and cones in the human retina.  By repeating saccadic movements and building their invariant temporal correlations at a sufficient level of detail, an object will be represented in internal structures.   The neural network will use hierarchical feedback structures to build object representations, self-organize invariant transformations, act on the images received from the retina model, and control the retina model to sample details of the observed objects.  The network will identify the input image using a winner-take-all scheme after sufficiently accurate saccades.   Using a unique invariance-building scheme, the network will identify different views of the same object.  In addition, it will also learn temporal sequences and make predictions.

Hardware Needs for Machine Intelligence

This project focuses on the design of self-organizing learning hardware modules to support the study of machine intelligence and the development of spatio-temporal associative learning memory in cortical minicolumn structures.  Such structures will benefit from implementation on regular self-organizing arrays of identical processors with programmable sparse interconnections to other processors and asynchronous, data-driven operation.  An FPGA-based system that aims to develop such regular hardware architectures, based on a modular, expandable 3D architecture, is described below:

Self Organizing Learning Array (SOLAR)

- data-driven, self-organizing learning hardware for studying machine intelligence and developing its computational models and structures.

SOLAR is a regular, two- or three-dimensional array of identical processing cells connected to programmable routing channels.  Each cell in the array has the ability to self-organize by adapting its functionality in response to the information contained in its input signals.  Cells choose their input signals from the adjacent routing channels and send their output signals to the routing channels.
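The self-organization of a single cell can be illustrated with a toy model in which the cell selects its most informative input channel and adapts a threshold function to it (the variance-based selection rule here is an assumption for illustration, not the actual SOLAR cell logic):

```python
import statistics

class SolarCell:
    """Toy SOLAR-like cell: chooses an input channel from the routing
    channels and adapts its function (a threshold) to that channel."""
    def __init__(self):
        self.channel = None
        self.threshold = 0.0

    def self_organize(self, channels):
        # channels: dict of channel name -> observed signal values.
        # Pick the most informative channel (highest variance here).
        self.channel = max(channels,
                           key=lambda c: statistics.pvariance(channels[c]))
        # Adapt the cell function: threshold at the chosen channel's mean.
        self.threshold = statistics.mean(channels[self.channel])

    def output(self, value):
        return 1 if value > self.threshold else 0

cell = SolarCell()
cell.self_organize({"ch0": [0.5, 0.5, 0.5], "ch1": [0.0, 1.0, 0.0, 1.0]})
print(cell.channel)  # ch1 (the constant channel carries no information)
```

In the hardware, this input selection corresponds to reprogramming sparse interconnections, which is what distinguishes SOLAR from fixed-template architectures.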

A SOLAR structure in many ways resembles the organization of cellular neural networks (CNNs).  As in a CNN, its architecture is defined by an array of identical cells that adapt their behavior to the input data.  Its neurons are cellular automata, which can be programmed to perform different computational tasks based on data received from their neighbors.  Neurons can be either static or dynamic, depending on their implementation and the types of signals processed.  However, unlike in a CNN, its connectivity structure is not fixed.  In a CNN, the interconnect structure is defined by templates, which limits its learning ability, while in SOLAR the interconnect structure is an element of learning and can be dynamically changed even during the network's operation.  Thus a CNN can be considered a special case of the SOLAR structure.

SOLAR has three advantageous features over typical neural network technology: online learning, dynamically set local interconnections, and dynamically set neuron functions and threshold values. Compared to cellular neural networks, SOLAR has not only dynamically adapting neurons but a dynamically adapting interconnection structure as well.

SOLAR has a hierarchical structure in which data is represented through the network topology and neuron functions.  It learns in interaction with the environment through its interfaces and stores useful knowledge in its distributed, hierarchically organized memory.  The interfaces include sensory inputs and motor outputs, as well as inputs for reinforcement learning signals.  SOLAR is capable of learning through association and uses associative feedback to predict and screen incoming information for selective learning of new features.

SOLAR can be used as an autonomous control system that uses reinforcement learning and other sensory inputs as feedback from the environment in response to its actions.  This implementation of SOLAR is meant to be used to study selected aspects of intelligence in interaction with the environment through planning and motor functions.  There is a close resemblance between the machine's anticipation of the result of its action and its motor control.  Thus, manipulating the environment to optimize the state of the machine with respect to its learned value system, together with planning aimed at finding the anticipated optimum response of the environment, is a simple manifestation of intelligent behavior.  Learning through interaction with the environment builds up the machine's experience and modifies its value system for better planning and future performance.

At the present stage of our research, this learning array can perform intelligent tasks such as pattern recognition, prediction, and modeling of unknown systems.  It can also learn associations between different input patterns and between different sensors.  Its associative learning yields a hierarchical organization of neurons, such that neurons farther away from the sensory inputs represent more abstract features or concepts.  SOLAR should find a wide range of applications, from security, robotics, decision support, and information gathering and learning, through everyday applications in caretaking, monitoring, protection, and guidance, to broad applications in military and commercial areas.

Motivated Learning for the Development of Autonomous Agents

Motivated learning (ML) is a significant extension of reinforcement learning (RL) that drives a machine to develop abstract motivations and choose its own goals. ML also provides a self-organizing system that controls a machine's behavior based on competition between motivations, expressed subconsciously through dynamically changing attention-switching and pain signals. This provides an interplay of externally driven and internally generated signals that control a machine's behavior. It has been demonstrated that ML not only yields a more sophisticated learning mechanism and system of values than RL, but is also more efficient in learning complex relations and delivers better performance than RL in dynamically changing environments.

Motivated learning can be combined with artificial curiosity and reinforcement learning. It enhances their versatility and learning efficiency, particularly in changing environments with complex dependencies between environment parameters.

ML provides a much-needed mechanism for switching a machine's attention to new motivations and implementing internal goals. A motivated learning machine develops and manages its own motivations and selects goals using continuous competition between various levels of pain signals (and possibly attention-switching signals). This form of distributed goal management and competing motivations is equivalent to "central executive" control that may govern the cognitive operation of intelligent machines.
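The winner-take-all competition between pain signals can be sketched as follows; the decay and growth constants are illustrative assumptions, not the published ML dynamics:

```python
class MotivatedAgent:
    """Toy motivated-learning loop: the strongest pain signal wins the
    competition and selects the current goal; acting on a goal reduces
    its pain while unattended pains slowly grow."""
    def __init__(self, pains):
        self.pains = dict(pains)  # motivation -> pain level

    def select_goal(self):
        return max(self.pains, key=self.pains.get)  # winner-take-all

    def act(self):
        goal = self.select_goal()
        for m in self.pains:
            if m == goal:
                self.pains[m] *= 0.5   # acting reduces the winning pain
            else:
                self.pains[m] += 0.1   # neglected motivations build up
        return goal

agent = MotivatedAgent({"hunger": 0.9, "low_battery": 0.4})
actions = [agent.act() for _ in range(4)]
print(actions)  # ['hunger', 'low_battery', 'hunger', 'low_battery']
```

The agent alternates between motivations without any external schedule: distributed goal management emerges from the competition itself, which is the point of the "central executive" analogy above.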

Modeling and Simulation of Cognitive Systems

Despite many efforts, there are no computational models of consciousness that can be used to design conscious intelligent machines. This is mainly attributed to the available definitions of consciousness being human-centered, vague, and incomplete.

Through a biological analysis of consciousness and the concept of machine intelligence, we propose a physical definition of consciousness with the hope of modeling it in intelligent machines. We propose a computational model of consciousness driven by competing motivations, goals, and attention switching. Our proposed organization of the conscious machine model is based on two important observations. First, biological evolution, as well as the development of the human brain, indicates that a functional unit similar to the prefrontal cortex is responsible for the emergence of consciousness. Thus, in our model, consciousness is an emergent phenomenon. Second, a central executive that controls and coordinates all processes, whether conscious or subconscious, and that can perform some of its tasks (such as memory search) using concurrent dynamic programming, is necessary for developing consciousness.

The central executive in our model uses distributed and competing signals that represent goals, motivations, emotions, and attention. Only after the winner of this competition is established does it drive the focus point of a conscious experience. The proposed computational model of consciousness mimics biological systems functionally and retains a well-defined architecture necessary for implementing consciousness in machines.

We propose the concept of mental saccades, which is useful for explaining the attention switching and focusing mechanism from a computational perspective. Our model uses competition among three different types of signals in the cognitive cycle of the agent. The exact mechanism of attention switching is explained using mental saccades, which may not relate directly to human consciousness but are useful for the computational implementation of consciousness in machines.

Presentations:

Mechanism for Consciousness, Oct. 2010

Heidi June 2005

Collaborative Sensing August 2005

Intentional Robot
SOLAR simulation