From Brain to Cytoskeleton

review which discouraged neural net research until Hopfield's resurgence in the 1980s. Hopfield introduced an energy function so that information in a neural net circuit would settle into a series of stable energy states, much like rain water falling on mountains flows through valleys into lakes and rivers. Depending on the rainfall, an information state (i.e. memory, conscious image, thought) would be a given watershed pattern. Hopfield's neural nets are loosely based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of his model produce a content-addressable memory (described by a phase space flow of the state of the system) which correctly yields an entire memory from any sub-part of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, time sequence retention, and insensitivity to failure of individual components. Hopfield nets and similar models are best categorized with the "tabula rasa" view of learning, in which the initial state is taken as a flat energy landscape which becomes progressively contoured, eroded and complicated by direct interactions with the environment.
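The Hopfield scheme described above can be sketched in a few lines of code: a Hebbian outer-product rule stores a pattern in a symmetric weight matrix, and asynchronous single-neuron updates carry any sufficiently similar starting state "downhill" in energy to the stored memory. The network size, stored pattern, and corruption level below are illustrative assumptions, not figures from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(patterns):
    """Hebbian outer-product storage of +/-1 patterns in a symmetric matrix."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)           # no self-connections
    return W

def energy(W, s):
    """Hopfield's energy function; each update can only lower (or keep) it."""
    return -0.5 * s @ W @ s

def recall(W, s, sweeps=10):
    """Asynchronous updates: neurons update one at a time in random order."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

# Store one 8-unit pattern, then recall it from a corrupted sub-part:
# the content-addressable property yields the entire memory.
memory = np.array([[1, -1, 1, 1, -1, -1, 1, -1]], dtype=float)
W = train(memory)
cue = memory[0].copy()
cue[:3] *= -1                          # corrupt three of eight components
restored = recall(W, cue)
print(np.array_equal(restored, memory[0]))   # → True
```

The descent is the "watershed" picture in miniature: the corrupted cue sits partway up a valley wall, and each asynchronous flip moves the state toward the valley floor, which is the stored memory.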

A selectionist approach to neural net theory has been taken by Jean-Pierre Changeux, who pioneered the description of allosteric interactions among proteins. Turning to the brain/mind, Changeux and colleagues (1984, 1985) have proposed a model of learning by selection based on the most recent advances in the statistical mechanics of disordered systems, namely the theory of spin glasses. Spin glasses are materials which are not crystalline, yet whose atoms possess a high degree of similar neighbor relationships and a finite number (i.e. two) of magnetic spin states influenced by their neighbors. Aggregates of "like spin" states beget similar states among neighbors. Consequently the spin states of atoms in a spin glass can be viewed as a network (or cellular automaton) much like a collection of neurons in a nervous system. Changeux also uses terms from mathematical chaos theory, like basins and attractors, to describe the states to which the spin glass model evolves. Unlike the blank slate approach, the brain's initial state is viewed by Changeux as a complex energy landscape with an exuberance of valleys typical of spin glasses. Each valley corresponds to a particular set of active neurons and plays the role of a prerepresentation. An input pattern sets an initial configuration which converges towards a valley whose entry threshold is lowered by synaptic modification. Starting from a hierarchical distribution of valleys, the "lowest" valleys (sea level fjords) would correspond to maximal comprehension, ultimate answer, best correlation. The learning process is viewed as smoothing, gardening, and evolutionary pruning, as already stored information influences the prerepresentations available for the next learning event. Changeux's spin glass model of neural nets is elegant, and successfully presents a hierarchical pattern of static information sorting. Its shortcomings are that it is unidirectional and fails to describe dynamic, "real time" information processing.
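The contrast with the tabula rasa picture can be made concrete with a toy spin glass: random symmetric couplings give the network a rugged energy surface full of valleys before any learning has occurred, and an input pattern merely sets an initial spin configuration that relaxes into the nearest valley, playing the role of a prerepresentation. The network size and the Gaussian coupling distribution below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16

# Random symmetric couplings with zero diagonal: a rugged, pre-contoured
# energy landscape (many valleys) that exists prior to any experience.
J = rng.normal(size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

def energy(s):
    return -0.5 * s @ J @ s

def relax(s):
    """Greedy single-spin descent: flip any spin misaligned with its local
    field until no flip lowers the energy, i.e. a valley floor is reached."""
    s = s.copy()
    improved = True
    while improved:
        improved = False
        for i in range(n):
            if s[i] * (J[i] @ s) < 0:   # flipping spin i lowers the energy
                s[i] = -s[i]
                improved = True
    return s

# An input pattern fixes the initial configuration; relaxation selects
# one of the pre-existing valleys (a prerepresentation).
inp = rng.choice([-1.0, 1.0], size=n)
valley = relax(inp)
print(energy(valley) <= energy(inp))    # → True
```

In Changeux's terms, learning would then act on this landscape (lowering the entry thresholds of useful valleys by synaptic modification) rather than carving valleys into an initially flat surface.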

Another selective connectionist network model of learning is that of George Reeke and Gerald Edelman (1984) of Rockefeller University. They describe two parallel recognition automaton networks which communicate laterally. Automata are dynamic patterns of neighbor interactions capable of information processing (Chapter 1). The two parallel recognition automata which Edelman and Reeke devised have distinct and complementary personalities. They are named Darwin and Wallace after the co-developers of the theory of evolution, and utilize different approaches to the problem of recognition. "Darwin" is highly analytical, keyed to recognizing edges, dimensions, orientation, intensity, color, etc. "Wallace" is more "gestalt" and attempts to merely categorize objects into
