Looking Ahead to Natural Language Processing

True insight comes from within.
-Plato
Siddhartha said... Wisdom is not communicable. The wisdom which a wise man tries to communicate always sounds foolish... Knowledge can be communicated, but not wisdom. One can find it, live it, be fortified by it, do wonders through it, but one cannot communicate and teach it.
-Hermann Hesse, Siddhartha

The ultimate goal of Human-Computer Interaction is to achieve fluent communication between man and machine. How far can we go?

For communication to be fluent in both the human-to-computer and computer-to-human directions, researchers are attempting to equip computers with communication abilities that resemble those of human beings: the closer the resemblance, the more easily effective communication will take place. One of the main goals is to achieve natural language processing, so that the human no longer has to adapt to the artificial modes of communication that were once the only ones computers could understand. Natural language processing is no easy task, for it involves mastering principles that have so far been barely tackled.

Communication skills can either be implemented directly and manually, or learned by the machine, sometimes with the help of a human tutor. The latter approach has proved far more versatile and promising, provided its techniques are first mastered by researchers. Indeed, many things cannot be communicated even between two humans unless they are experienced firsthand, which is why learning seems to be the only plausible way for computers to exhibit signs of genuine intelligence. Several centers throughout the world have begun to teach robots simple motor skills. For example, a computer can learn to pronounce words and read aloud, a lorry can learn to steer as it approaches curves, and a flexible mechanical arm can learn to inch itself along in one direction, just like a worm. Likewise, natural language processing requires that the computer adapt to unpredictable situations.

Daphne Koller, Ph.D., who works on Artificial Intelligence at Stanford University, explains that the learning process of robots equipped with engineered neural networks is similar to what takes place between the neurons of the human brain, where each cell body sums the amplitudes of all the electric impulses received by its dendrites and then, depending on the resulting amplitude, sends its own signal along its axon toward the dendrites of another neuron. The experience factor, which grows with practice and repetition, is nothing more than the fine-tuning process that balances the weight associated with each input. Yet that weight is the key quantitative measure of the kind of intelligence we humans know how to implement. Koller argues that the difference between the Deep Blue version that Kasparov beat and the Deep Blue version that beat Kasparov lay entirely in that notion of attributing the right weight to each possible move. In the case of Deep Blue, however, it took a grandmaster chess player to assign those weights manually. The real challenge for the future is to create a machine that can assign those weights itself, through its own experience as a player.
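The weighted-sum neuron described above can be sketched in a few lines of Python. This is a toy illustration, not any particular system: the function names, weights, and threshold are all invented for the example.

```python
# A minimal sketch of an artificial neuron: each input carries a weight,
# the neuron sums the weighted inputs, and "experience" amounts to
# fine-tuning those weights. All names and values here are illustrative.

def neuron_output(inputs, weights, threshold=1.0):
    """Fire (return 1) if the weighted sum of inputs exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

def adjust_weights(inputs, weights, desired, rate=0.1):
    """Nudge each weight so the neuron's output moves toward `desired`."""
    error = desired - neuron_output(inputs, weights)
    return [w + rate * error * x for x, w in zip(inputs, weights)]

weights = [0.2, 0.9, 0.4]
signals = [1.0, 1.0, 0.0]
print(neuron_output(signals, weights))  # weighted sum 1.1 > 1.0, so it fires: 1
```

The "experience factor" the passage describes is exactly the repeated application of `adjust_weights`: each correction is small, and only accumulated practice settles the weights into useful values.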

One must differentiate between supervised learning and reinforcement learning. Supervised learning is learning by example: the human user performs a demonstration while the computer observes, registers, and fine-tunes its ability to evaluate different situations. Reinforcement learning, on the other hand, lets the computer act freely and be either rewarded or punished depending on its actions. Reward and punishment here are nothing more than numbers on a rating scale, and the computer is simply programmed to favor higher ratings over lower ones.
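The reward-and-punishment idea can be made concrete in a short sketch. The two-action environment, its payoffs, and the 10% exploration rate below are all invented for illustration; the only point is that "reward" is just a number, and the program is written to favor the action whose numbers have been higher.

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

def reward_for(action):
    """Stand-in environment: action 'b' happens to pay better on average."""
    return random.gauss(1.0, 0.1) if action == "b" else random.gauss(0.2, 0.1)

totals = {"a": 0.0, "b": 0.0}   # accumulated reward per action
counts = {"a": 0, "b": 0}       # how often each action was tried

for step in range(200):
    # Mostly pick the action with the best average reward so far,
    # but explore a random action 10% of the time (and until both are tried).
    if random.random() < 0.1 or counts["a"] == 0 or counts["b"] == 0:
        action = random.choice(["a", "b"])
    else:
        action = max(totals, key=lambda k: totals[k] / counts[k])
    totals[action] += reward_for(action)
    counts[action] += 1

# After enough trials, the learner settles on the better-rewarded action 'b'.
```

Nothing in the loop "understands" the environment; the preference for `b` emerges purely from comparing accumulated ratings, which is the sense in which reward and punishment are nothing more than numbers.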

Arthur Kornberg, M.D., Professor Emeritus of Biochemistry at Stanford University, believes that the future of science and medicine lies in the re-unification of several fields that have been branching away from each other over the last few decades. He explains, not without humor, that much as the "gene hunters" replaced the "enzyme hunters," the "gene hunters" will soon be replaced by the "head hunters" - those who seek to understand the brain. And he foresees a great alliance among computer scientists, biologists, and psychologists to design the tools that will smartly dig out the secrets of the human genome.

The learning example is a good illustration of the reciprocity between computer science and psychology. In the first place, psychologists rationalized and clearly defined the learning process in humans, and their studies became of great help to computer scientists because they suggested a specific approach to the problem of teaching. Interestingly, building those machines and observing them can in turn give feedback to psychologists, for in simple machines the learning process can be isolated from the other psychological interferences that would occur in living organisms. For example, there are analogies to be drawn from the fact that machines generally learn better through supervised learning than through reinforcement learning.

Biology, too, will lend its services to computer science. Understanding the human brain will in turn help design more complex artificial neural networks. In fact, the digital paradigm may eventually yield to an organic one, grounded in the complementarity of the A-T and C-G DNA base pairs. Current neural networks are vanishingly small and simple compared with the connections of the human brain, yet it is precisely such complex connections that allow truly intelligent communication.

The work of computer scientists will also not be complete without the help of cognitive psychology and linguistics. In order to export any psychological trait from the human to the computer, one must first consciously define that trait: as long as we are unable to explain its nature and causes, there will be no strategy for implementing it.