Look into Future Computation
Published in CORE, Nov/Dec, 1996
The aim of computer scientists is to build intelligent machines. This may
sound abstract because intelligence is not a clearly defined term. There
can be varied interpretations, but in general it is accepted that a
machine which can learn, think and do things as humans do can be regarded
as intelligent. It follows that one natural idea for artificial intelligence (AI) is to simulate the functioning of the brain directly on a computer.
The idea of building an intelligent
machine out of artificial neurons (the basic processing units of the brain) has been
around for quite some time. Research in neural networks was virtually
halted in the 1960s because of the difficulty of realizing them in
hardware. In recent years neural networks have become one of the most
active areas in computer science due to the introduction of faster digital
computers, massively parallel computers, and the discovery of powerful learning algorithms.
The human brain consists of about 10^11
interconnected neurons. Hence it is virtually impossible to duplicate
the operations of the brain with the present knowledge of neurons and
computer technology. The new neural network architecture, called
the connectionist architecture, draws its inspiration from known facts about
how the brain works. The connectionist architecture is based on:
- Large numbers of very simple neuron-like processing elements,
- Large numbers of weighted connections between the
elements (the weights represent the knowledge of the network),
- Highly parallel, distributed control, and
- Emphasis on learning internal representations.
Computers are capable of amazing
feats. They can effortlessly store vast quantities of information, they
are very fast, and they can perform large arithmetic calculations without
error. But current AI systems are not good at the simple tasks
which humans routinely perform (such as seeing, talking and commonsense
reasoning). Perhaps the structure of the brain is somehow
suited to these tasks and not suited to tasks like high-speed arithmetic
calculation, and vice versa.
Comparing the human brain with present
day computers, an individual neuron is an extremely slow device compared to
its counterparts in digital computers. Neurons operate in the
millisecond range, which is very far from present computer speeds. Yet
humans can perform extremely complex tasks, like interpreting a visual
scene or understanding a sentence, in just a tenth of a second. In other
words, we do in about a hundred steps what current computers cannot do
in ten million steps. Though it sounds unrealistic, it is a fact,
because unlike a conventional computer the brain contains a huge number of
processing elements that act in parallel. That means we are looking for
massively parallel computers.
Another thing people seem to be able
to do better than computers is handle fuzzy situations. We have very
large memories of visual, auditory, and problem-solving episodes, and
one key operation in solving new problems is finding the closest matches to
old situations. Inexact matching is something brain-style models seem to
be good at. Home appliances such as washing machines and refrigerators based on neuro-fuzzy logic are already in the market.
Neurons are failure-prone devices.
They are constantly dying (you have certainly lost a few since you began
reading this article), and their operation is irregular. But a computer
must operate perfectly; we certainly do not want it to reply in French
when we ask a question in Nepali. Loss of information due to failure can
be handled by keeping more than one copy of the information stored in the
computer. This introduces redundancy into the system, but it works
perfectly even if some components of the computer fail. With present day
VLSI (very large scale integration) circuit technology it is possible to
build a billion component integrated circuit in which 95 percent of the
components work correctly.
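The article does not specify a particular redundancy scheme; one common choice is majority voting over replicated copies (triple modular redundancy). A minimal sketch of that idea, with made-up values:

```python
from collections import Counter

def majority_vote(copies):
    """Return the value held by most replicas, masking an isolated failure."""
    value, _count = Counter(copies).most_common(1)[0]
    return value

# Three copies of a stored value; one replica has failed and returns garbage.
print(majority_vote([42, 42, 17]))  # 42
```

As long as fewer than half the copies fail, the system still answers correctly.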
How can a computer be made intelligent?
Research in this area is biologically inspired.
Researchers are always thinking about the organization and functioning
of the brain in order to build an intelligent system. But knowledge about the
brain's overall operation is too limited to guide the research. It is
believed that each neuron in the brain consists of a cell body. It
receives inputs from other neurons through dendrites, performs some
operation on the inputs, and sends its output to other neurons through its axon.
Artificial neurons are designed
to mimic the structure of biological neurons. A set of inputs, each representing
the output of another neuron, is applied. Each input is multiplied by a
corresponding weight, and all of the weighted inputs are summed to
determine the output of the neuron.
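The weighted-sum neuron just described fits in a few lines. In this sketch the step activation and zero threshold are one common choice, assumed here rather than stated by the article:

```python
def neuron(inputs, weights, threshold=0.0):
    """Artificial neuron: weighted sum of inputs, then a step activation."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Example: three inputs from other neurons, with assumed weights.
print(neuron([1, 0, 1], [0.5, -0.2, 0.4]))  # 0.5 + 0.4 = 0.9 > 0, so it fires: 1
```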
A larger network of such neurons can
be built and trained to perform some function. Training a neural network
means changing the weights of the inputs so that it produces the desired
output. The most often used method of training, known as 'supervised
training', requires pairs of inputs and desired outputs. The actual output
when the inputs are applied is used to calculate the error
(the difference between the desired and actual output). The input weights are
adjusted until the desired output is obtained.
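The adjust-by-error loop described above is essentially the classical perceptron learning rule. A sketch using a step-activation neuron; the OR task, the bias input, the learning rate and the epoch count are all illustrative choices, not from the article:

```python
def output(inputs, weights):
    """Step-activation neuron: weighted sum, then threshold at zero."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) > 0 else 0

def train_supervised(samples, weights, rate=0.1, epochs=20):
    """Supervised training: nudge each weight by the output error."""
    for _ in range(epochs):
        for inputs, desired in samples:
            error = desired - output(inputs, weights)  # desired minus actual
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights

# Learn logical OR; the first input is a constant bias of 1.
samples = [((1, 0, 0), 0), ((1, 0, 1), 1), ((1, 1, 0), 1), ((1, 1, 1), 1)]
weights = train_supervised(samples, [0.0, 0.0, 0.0])
print([output(x, weights) for x, _ in samples])  # [0, 1, 1, 1]
```

After a few passes over the training pairs, the weights settle and the network reproduces every desired output.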
However, supervised training has been
criticized for not being natural, because the human brain does not know
the desired answer beforehand. A newer training method, known as
'unsupervised training', has been proposed which does not require desired
outputs to train. The training process extracts the statistical
properties of the input set and learns to produce consistent outputs for
inputs which are similar.
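One way to make "consistent output for similar inputs" concrete is simple competitive learning, where the unit whose weights lie closest to an input wins and moves toward it. The article does not name a particular algorithm; this sketch, with made-up two-cluster data, is just one example of the unsupervised idea:

```python
def nearest(units, x):
    """Index of the unit whose weight vector is closest to input x."""
    return min(range(len(units)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(units[i], x)))

def train_unsupervised(samples, units, rate=0.5, epochs=10):
    """Competitive learning: the winning unit moves toward each input."""
    for _ in range(epochs):
        for x in samples:
            i = nearest(units, x)
            units[i] = [w + rate * (v - w) for w, v in zip(units[i], x)]
    return units

# Two loose clusters of inputs; no desired outputs are ever given.
samples = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.0)]
units = train_unsupervised(samples, [[0.0, 0.0], [1.0, 0.0]])

# Similar inputs now consistently activate the same unit.
print(nearest(units, (0.05, 0.0)), nearest(units, (0.95, 1.0)))  # 0 1
```

The units drift toward the statistical centres of the two clusters, so the network has extracted structure from the inputs alone.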
Features of artificial neurons
The artificial neurons, which may be implemented in
software or hardware, exhibit a surprising number of the brain's
characteristics:
They learn from experience. They can modify their
behavior in response to their environment.
Once trained, a network's response can be, to a certain
degree, insensitive to minor variations in the input. This ability to
see through noise and distortion to the pattern that lies within is
vital to pattern recognition in a real-world environment. Such a
characteristic is not possible with conventional computers, for which the
inputs must be in a pre-specified pattern. Thus, it produces a system
that can deal with the not-so-perfect world in which we live.
Some neural networks are able to abstract essential
characteristics from inputs containing irrelevant data. For example, a
network can be trained on a sequence of distorted versions of the letter
A. After adequate training, the network will be able to produce a
perfectly formed letter; that is, it has learned to produce something
that it has never seen before.
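The letter-A example amounts to extracting a prototype that was never presented. A toy illustration of the idea using pixel averaging with a majority threshold; a trained network would do this implicitly, and the tiny 5-pixel patterns here are made up:

```python
def learn_prototype(noisy_samples):
    """Keep each pixel that is on in a majority of the distorted samples;
    consistent structure survives while random distortions cancel out."""
    n = len(noisy_samples)
    column_sums = [sum(col) for col in zip(*noisy_samples)]
    return [1 if s > n / 2 else 0 for s in column_sums]

# Three distorted copies of the underlying pattern [1, 0, 1, 1, 0].
samples = [
    [1, 0, 1, 1, 1],  # last pixel flipped on
    [1, 1, 1, 1, 0],  # second pixel flipped on
    [1, 0, 1, 1, 0],  # undistorted
]
print(learn_prototype(samples))  # [1, 0, 1, 1, 0]
```

The recovered pattern matches none of the training samples exactly; the essential shape has been abstracted from the distortions.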
Applicability and future
Artificial neural networks are not a panacea. They are
not well suited for tasks such as calculating the payroll. But because of their
capability to extract essential features from incomplete or noisy data,
they are preferred over conventional computers for a large class of
pattern recognition problems.
Potential applications are those where
human intelligence functions effortlessly and conventional computation
has proven cumbersome or inadequate. This application class is at least
as large as that served by conventional computation, and the vision
arises of artificial neural networks taking their place alongside
conventional computation. This will happen only if fundamental research
yields results at a rapid rate, as today's theoretical foundations are
inadequate to support such a projection.