ULTIMATE COMPUTING - Quantum Consciousness Studies


Toward Ultimate Computing

information (Figure 1.3). The cytoskeleton can convey analog patterns which may be connected to symbols (Chapter 8). Although overlooked by AI researchers, the cytoskeleton may take advantage of the same attributes used to describe neural-level networks. Properties of networks which can lead to collective effects among both neurons and cytoskeletal subunits include parallelism, connectionism, and coherent cooperativity.

1.3.1 Parallelism

The previous generations of computer architecture have been based on the von Neumann concept of sequential, serial processing. In serial processing, computing steps are performed consecutively, which is time-consuming, and one false bit of information can cascade to chaotic output. The brain, with its highly parallel nerve tracts, shines as a possible alternative. In parallel computing, information enters a large number of computer pathways which process the data simultaneously. In parallel computers, processors may be independent of each other and proceed at individual tempos. Separate processors, or groups of processors, can address different aspects of a given problem asynchronously. As an example, Reeke and Edelman (1984) have described a computer model of a parallel pair of recognition automata which use complementary features (Chapter 4). Parallel processing requires reconciliation of multiple outputs, which may differ because individual processors are biased differently from their counterparts, perform different functions, or make random errors. Voting or reconciliation must occur by lateral connection, which may also function as associative memory. Output from a parallel array is a collective effect of the input and processing: generally a consensus which depends on multiple features of the original data input and on how it is processed. Parallel and laterally connected tracts of nerve fibers inspired AI researchers to appreciate and embrace parallelism. Cytoskeletal networks within nerve cells are highly parallel and interconnected, a thousand times smaller, and contain millions to billions of cytoskeletal subunits per nerve cell!
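The scheme described above — independent, differently biased processors running asynchronously, reconciled by voting — can be sketched in a few lines of Python. The three "processors" and their bias thresholds here are hypothetical illustrations, not anything specified in the text:

```python
# A minimal sketch of parallel processing with voting/reconciliation.
# The "processors" are hypothetical classifiers, each biased differently,
# that examine the same input in parallel; a majority vote reconciles
# their differing outputs into a consensus.
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def biased_classifier(bias):
    # Each processor applies its own threshold (its "bias") to the signal.
    def classify(signal):
        return 1 if signal > bias else 0
    return classify

# Three processors with different biases (illustrative values).
processors = [biased_classifier(b) for b in (0.3, 0.5, 0.7)]

def consensus(signal):
    # Run all processors simultaneously, then reconcile by majority vote.
    with ThreadPoolExecutor() as pool:
        votes = list(pool.map(lambda p: p(signal), processors))
    return Counter(votes).most_common(1)[0][0]

print(consensus(0.6))  # two of three processors vote 1, so consensus is 1
```

The vote plays the role of the "lateral connection" in the text: the final output depends on the collection of processors, not on any single one.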

Present-day evolution of computers toward parallelism has engendered the “Connection Machine” (Thinking Machines, Inc.), a parallel assembly of 64,000 microprocessors. Early computer scientists would have been impressed with an assembly of 64,000 switches without realizing that each one was a microprocessor. Similarly, present-day cognitive scientists are impressed with the billions of neurons within each human brain without considering that each neuron is itself complex.

Another stage of computer evolution appears as multidimensional network parallelism, or “hypercubes.” Hypercubes are processor networks whose interconnection topology is seen as an “n-dimensional” cube. The “vertices” or “nodes” are the processors and the “edges” are the interconnections. Parallelism in “n dimensions” leads to hypercubes which can maximize available computing potential and, with optimal programming, lead to collective effects. Complex interconnectedness observed among brain neurons and among cytoskeletal structures may be more accurately described as hypercube architecture rather than simple parallelism. Hypercubes are exemplified in Figures 1.4, 1.5, and 1.6.
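The hypercube topology just described has a compact numerical form: label the 2^n nodes with n-bit numbers, and two nodes share an edge exactly when their labels differ in one bit. A short sketch, not from the text but a standard way to express the structure:

```python
# Hypercube ("n-dimensional cube") topology sketch.
# Node labels are n-bit integers; flipping any single bit of a label
# gives the label of a neighboring node, so each node has n neighbors.

def hypercube_neighbors(node, n):
    """Return the n neighbors of `node` in an n-dimensional hypercube."""
    return [node ^ (1 << k) for k in range(n)]

def edge_count(n):
    # Each of the 2**n nodes has n incident edges; every edge is
    # counted twice (once from each endpoint).
    return n * 2**n // 2

# A 3-cube (an ordinary cube) has 8 nodes, each with 3 neighbors:
print(hypercube_neighbors(0b000, 3))  # [1, 2, 4]
print(edge_count(3))                  # 12
```

The single-bit-flip rule is what keeps communication paths short: any two of the 2^n nodes are at most n hops apart.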

AI/roboticist Hans Moravec (1986) of Carnegie-Mellon University has attempted to calculate the “computing power” of a computer and of the human brain. Considering the number of “next states” available per unit time in binary digits, or bits, Moravec arrives at the following conclusions. A microcomputer has a capacity of about 10⁶ bits per second. Moravec calculates the brain's “computing” power by assuming 40 billion neurons which can change states hundreds of times per second, resulting in 40 × 10¹¹ bits per second. Including the cytoskeleton
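Moravec's back-of-envelope figures, as quoted above, can be reproduced as simple arithmetic (taking "hundreds of times per second" as 100, the rate consistent with the stated total):

```python
# Reproducing the estimates quoted in the text.
micro = 10**6              # microcomputer: ~10^6 bits per second
neurons = 40 * 10**9       # 40 billion neurons
switch_rate = 100          # state changes per second (assumed value)

brain = neurons * switch_rate   # 40 x 10^11 = 4 x 10^12 bits per second
print(brain)                    # 4000000000000
print(brain // micro)           # brain exceeds the microcomputer ~4-million-fold
```

On these assumptions the brain out-computes the microcomputer by roughly six orders of magnitude, before the cytoskeleton is even counted.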
