Shamis A.L.
Id: 110365
29.9 EUR

Artificial Minds:
Paths and Prospects

URSS. 256 pp. (English). ISBN 978-5-396-00108-4.
White offset paper

Summary

The book presents a general view of a large number of issues bearing on the modeling of behavior, perception, and thinking. Along with general questions, it considers models of goal-directed behavior, of perception "with understanding", and of active neural mechanisms of the brain. These models are based on the principle of stable nonequilibrium, the maxT principle, and the systems principles of integrity, purposefulness, and activity. The book also describes the application of the principles of integrity, purposefulness, and activity in practical systems for written text recognition, such as Grafit, FineReader, and FormReader.


Contents
Preface
Introduction. Artificial Intelligence: Myth or Reality?
1. Stable Nonequilibrium and a Model of Goal-Directed Behavior
 1.1. Behavior
 1.2. The maxT Principle
 1.3. A Formal Model
 1.4. Motivations and Needs
 1.5. An Actual Problem
 1.6. Emotions
2. The Integral Perception Model and "Recognition with Understanding". Perceptual Thinking
 2.1. Recognition
 2.2. The Psychology of Computer Vision
 2.3. Principles of Handprint Recognition and Their Implementation in the ABBYY Programs FineReader Handprint and FormReader
 2.4. Perceptual Thinking
3. Wholeness and Organization
 3.1. Wholeness and the Theory of Systems
 3.2. Representation of a Holistic Object (System)
 3.3. Organization
 3.4. Occurrence and Change in Organization
 3.5. Stable Nonequilibrium
 3.6. The maxT Principle in the Behavior of Living Organisms and in Man-Made Active Dynamic Systems
 3.7. Freedom of Behavior of Active Dynamic Systems
4. Information
 4.1. Approaches to Determining and Measuring Information
 4.2. The Value of Information
 4.3. Cognition and the Informational Representation of the World
 4.4. Information and Needs
5. A Model of the Problem Environment
 5.1. Some Physiological Premises
 5.2. Special Features of the Task of Behavior
 5.3. Special Features of the Task of Perception
 5.4. Representation of Integral Objects and Situations in the Model of the Problem Environment
 5.5. Structural and Logical Features of the Model of the Problem Environment of a Human
 5.6. Necessary Properties of the Model of the Problem Environment
6. Passive Neural Models: the Neurocomputer
 6.1. Properties and Functions of Neural Models of the Brain that are Necessary to Solve the Basic Problem of Thinking
 6.2. Review of the Components of the Brain
 6.3. Logical Neural Networks of McCulloch and Pitts
 6.4. F. Rosenblatt's Perceptron
 6.5. Formal Feature-Recognition Neural Networks
7. Active Neural Models
 7.1. Activity
 7.2. Yemelyanov-Yaroslavsky's Hypothesis
 7.3. A Model of the Three-Layer Active Network with Backward Inhibition
  7.3.1. Properties of Neurons of the Experimental Model
  7.3.2. Structure and Operation of a Network
  7.3.3. Remembering External Informational Effects on a Three-Layer Neural Network
 7.4. The A-Network: a Model Satisfying the Principle of Stable Nonequilibrium
 7.5. An Automaton for Recognition and Reproduction of Time Sequences
  7.5.1. The Problem
  7.5.2. Scheme of the Automaton
  7.5.3. Training and Reproduction of Experience
  7.5.4. Version with Support Codes
  7.5.5. Experiment
  7.5.6. The Probabilistic Scheme
  7.5.7. The Prospects for Implementation of an Automaton in an Active Neural Model
8. Basic Problems Related to Neural Mechanisms of the Brain
 8.1. The General Evaluation of the Existing Neural Models Revisited
 8.2. Consciousness
 8.3. Imaginative Modeling and Free Will
 8.4. Appearance and Inhibition of Excitation Focuses
 8.5. Obtaining Features, Relations, and Metrical Characteristics
 8.6. Process Management in the Model from the Conceptual Level
 8.7. Construction of Integral Multilevel Hierarchical Structural-Metrical Descriptions
 8.8. The Problem of Modeling Memory
 8.9. Transition from a Neuron to a Neural Ensemble
 8.10. The Problem of Modeling Language and Abstract Thinking
 8.11. Secondary Problems
9. Thinking and Creativity
 9.1. General Remarks
 9.2. Cognitive Thinking
 9.3. Practical (Behavioral) Thinking
 9.4. Mechanisms of Simple Reproductive Practical (Behavioral) Thinking
 9.5. Creativity
  9.5.1. Creative Search
  9.5.2. Creative Thinking
  9.5.3. Operation of the Brain and Some Functions of Creativity
10. Artificial Mind: Myth or Reality?
Conclusion
Bibliography

Preface

This book considers both formal models, including those implemented in computer programs, and qualitative speculative models of perception, behavior, and the neural mechanisms of the brain and thinking. Many related questions are also discussed. I hope that I have succeeded in formulating some useful general (and in some cases, specific) concepts. Some of the specific concepts formed the basis for the development of practical programs for recognizing handwritten texts that were incorporated into the Grafit, FineReader Handprint, and FormReader systems. This book describes these results as well.

What is "thinking"?

This is one of the most interesting questions of our day, and it has occupied me for many years. A problem becomes relevant not only when its solution is expected to be of practical use but also when the necessary prerequisites exist that provide hope for its solution. There is some hope as to the possibility of modeling thinking, based not only on the wealth of experimental material accumulated in psychology and in the physiology of higher nervous activity but also on the capabilities of modern computer technology. In my view, this hope is also related to some experimental results and theoretical concepts described in this book. However, it is also obvious that the prospects for a complete solution to this problem are presently rather vague and remote.

How do we approach the construction of the "engineering of thinking", and where do we begin? The simplest and most natural approach is to equate thinking with perception, especially visual perception, supposing that perception and thinking are one and the same. This was often done at the beginning, in the mid-20th century. Many, myself among them, began to construct models of perception, but this work very quickly degenerated into the development of applied systems for pattern recognition, that is, into the feature-based classification of simple objects considered in isolation. There was not much success in moving from pattern recognition toward an understanding of thinking, but it became clear that although thinking necessarily plays a role in perception, and perception in thinking, thinking is nevertheless something larger than perception, let alone feature-based pattern recognition. In addition, it was realized that answering the question "what is thinking?" requires answering not only "what does the brain do?" but also "how does it do it?" That is, it is primarily important to understand the operation of the neural mechanisms of the brain.

In the early years of cybernetics (and later, bionics), work on modeling the neural mechanisms of the brain went on as intensively as work on modeling perception (pattern recognition). After the successes of W. McCulloch and then F. Rosenblatt, it seemed that this was not very difficult: all that had to be done was to connect threshold summing elements similar to neurons into a network, make this network very large, and everything, or nearly everything, would follow. But it did not happen that way. In the end, everything again reduced to feature-based pattern recognition using formal neural networks. There did remain the idea of Yemelyanov-Yaroslavsky, ignored by nearly everyone, to construct a model of an active neural network consisting of unstable elements. From my standpoint, this rather promising idea was unfortunately compromised by the author himself, who built many unsubstantiated, weakly argued, and pretentious interpretations on it. Chapters 6, 7, and 8 of this book respectively present the results of the development of traditional formal neural networks, active neural-network models of brain mechanisms, and a list and analysis of the basic unsolved problems related to the neural mechanisms of the brain.
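To make the notion of "threshold summing elements" concrete, here is a minimal sketch in Python (not code from the book): a McCulloch-Pitts-style threshold unit together with the classical perceptron weight-update rule that Rosenblatt built on such units. The training data (the logical AND function) and all parameter values are assumptions chosen only for the illustration.

def threshold_neuron(inputs, weights, threshold):
    # McCulloch-Pitts-style unit: fire (1) if the weighted sum reaches the threshold.
    s = sum(x * w for x, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

def train_perceptron(samples, n_inputs, epochs=10, lr=1.0):
    # Classical perceptron rule: shift the weights toward misclassified examples.
    # `samples` is a list of (input_vector, desired_output) pairs with outputs 0/1.
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = threshold_neuron(x, weights, -bias)   # fires when sum + bias >= 0
            err = target - out                          # +1, 0, or -1
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

# Logical AND: a linearly separable function that a single perceptron can learn.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data, n_inputs=2)
for x, t in data:
    print(x, "->", threshold_neuron(x, w, -b), "expected", t)

As the text notes, networks built from such elements end up performing feature-based classification rather than anything resembling thinking.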

Later, after discussions with physiologists, the feeling arose that the problem of thinking cannot be considered in isolation from behavior and from the emotional mechanisms of decision making. At that time, E.S. Bauer's "Theoretical Biology", a book that appeared as if out of the blue, made an immense impression on many, including me. It seemed that the principle of stable nonequilibrium formulated by Bauer, and the understanding of what differentiates the living from the nonliving, must contribute much to the understanding of thinking. The results of work based on these concepts and directed toward the creation of a formal model of behavior (the maxT principle), and the ensuing hypotheses about the basic tasks of thinking, are discussed in Chapters 1 and 3. In addition, Bauer's principle of stable nonequilibrium provides much for the understanding of the active neural mechanisms of the brain.

At the same time, i.e., back in the 1960s, the concept of the role of synergy (mutual assistance) came to the foreground, as regards both the functioning of the entire organism and the operation of neurons in the brain cortex. At the level of the entire organism, the principle of mutual assistance was formulated in Anokhin's theory of functional systems. The idea of the mutual assistance of neurons in optimizing the functional state of the neural network as a whole was expressed in the work of Yemelyanov-Yaroslavsky. Unfortunately, the idea of synergy has not yet taken its place in studies on artificial intelligence and the modeling of thinking.

And finally, in the next turn of the spiral, I was again dealing with visual perception. This was not the classical form of feature-based pattern recognition but an active goal-directed integral perception "with understanding". The results of this work, which have been brought to practical use (Grafit, FineReader Handprint, and FormReader program systems), are described in Chapter 2.

All the problems mentioned above (behavior, perception, and the modeling of the neural mechanisms of the brain and thinking) are touched upon in this book in greater or lesser detail. The concepts described are based on studies carried out in various years. I have tried to combine and interpret all mutually complementary results from a unified standpoint. The implied unity is based, first of all, on the common orientation of these studies toward the understanding of thinking. Second, these studies are unified by the concept of accumulating and maintaining the instability present in every living thing, from a cell up to the entire organism, and, consequently, by the concept of activity, not only at the level of behavior but also at the level of the nerve cells of the brain cortex. Third, the models of behavior, perception, and the neural mechanisms of the brain and thinking considered in this book are unified by the most important principles of wholeness and goal-directedness that they implement.

I am grateful to the very many remarkable and creative people who have worked with me. I wish to mention especially Leonid Yemelyanov-Yaroslavsky, Boris Levit, German Golitsyn, and Andrey Baykov. In recent years I have had the good fortune to work with a remarkable team of talented young programmers and mathematicians developing the FineReader system. For their interesting and productive joint work I am grateful, first of all, to Konstantin Anisimovich, Vadim Tereshchenko, Dmitriy Deryagin, Diar Tuganbayev, and David Yang; and, of course, not only to them.

And, after all, what is "thinking"?

A.Shamis

Introduction
From false knowledge to real ignorance.
Popular

Sam: What do you do?
Pickwick: I'm a philosopher.
Sam: And what are you doing?
Pickwick: I'm thinking.
Sam: And do you often think?
The "Pickwick Club".
Stage version of Charles Dickens' novel

In this book, we speak about thinking: about what it is, and whether it can be implemented outside the brain, i.e., by technical means or programs. In this regard, we are primarily interested in the differences between the living and the nonliving and in whether these differences are fundamental. Hints as to how to answer this question may be sought in studying several models of behavior, perception, and the process of thinking, as well as models of certain neural mechanisms of the brain. Unfortunately, in most cases, we have to limit ourselves to a qualitative (informal) level of study. This may complicate understanding, rendering it ambiguous. On the other hand, ambiguity can be useful, leaving room for imagination and fantasy.

In the mid-20th century, a heated debate was going on in popular-science publications: "Can a machine think?" If the answer was "yes", accusations of a crude mechanistic approach followed; if it was "no", there followed accusations of idealism. Since the subject of the argument was not clearly defined, the simplest and most correct answer would have been a counter-question: "What do you mean by thinking?" Answering this way was equivalent to saying "I don't know". Another question that it would have been quite helpful to clarify was "What is a machine, and what is not (and hence what is living)?" The answer to this was not clear either.

The question "What is thinking?" still lacks a substantive answer; apparently, it must be fundamental in the work on machine modeling of intellect, because prior to modeling the intellect, or just of separate functions of the intellect, it is desirable if possible to define what is being presented for modeling, i.e., to try to answer the questions: what is intellect (thinking), what functions can, and which cannot, be called intellectual?

But even this may not be the most important question. Many years of unsuccessful attempts to understand and model thinking may suggest that thinking is fundamentally different from the usual algorithms and the programs that implement them. Before we can model the brain and thinking, we must try to understand wherein lies the essence of these differences.

What thinking is, at the qualitative, intuitive level, is understood by all; however, no compelling definition exists. The classical definition by Turing, based on the claim that it is not possible to formally discriminate between human thinking and nonthinking, and that the "intelligence" of a machine is defined by convention based on expert analysis, is clearly insufficient for the purposes of modeling. According to Turing's test, only the result of the process of thinking is important, and it is not important how this result was obtained. The test consists of the following: There is a benchmark set by human thinking, with which machine thinking is compared. A given program (machine) is declared to think if a human conducting a dialogue with it cannot determine with whom he is conversing, a machine or a human.

Strictly speaking, Turing did not provide a definition of thinking but merely suggested a test that, in the absence of anything better, can be used to evaluate whether a given computer thinks. This evaluation, based on a superficial comparison of the results of the "thinking" of a machine and of a man, can only be very approximate, because the set of test problems (questions) is not specified in any way. Besides, thinking may be not so much the ability to solve problems as a method of solving them. We also emphasize that Turing's test only allows establishing whether a specific machine can engage in "human" thinking, i.e., precisely the same kind of thinking that a human employs. But there remains the question of whether human thinking is the only possibility.

Other attempts to define thinking are based on the desire to glance inside the "black box" and define the essence of the processes of thinking by the methods of experimental psychology or neuropsychology. Definitions of thinking in this case are constructed by enumeration of experimentally identifiable components of the process. These components may be given, for example, by qualitatively understood ones such as memory, inductive and deductive reasoning, and training, or by less well understood ones such as a knowledge base, a semantic model of the world, intuition, association, insight, imagination, and emotional evaluation. The list may be extended by identifying different components of the process of thinking, for example, by including as obligatory the capability of solving nonalgorithmic creative tasks and consciousness.

It is clear that the problem of defining thinking cannot be solved by such a method because the components of the process of thinking are most often themselves not defined, or are defined very approximately (and, in addition, an inductive definition constructed this way can be considered only a step toward the desired generalization).

Nevertheless, such an approach to the concretization of the problem allows semi-intuitively selecting suitable directions of research and attempting the construction of not only qualitative but also formal partial models of the conjectured intellectual processes.

Work aimed at the understanding and automatization (imitation) of thinking has been conducted under various banners. The first and most prominent of these banners was cybernetics. The word is now most often used as a generic concept for various research avenues unified by the fact that they deal with obtaining, processing, transferring, storing, and using information. The substantive part of cybernetics, as proposed by Norbert Wiener in 1948, consists in generalizing the concept of management and claiming the unity of the principles of management (dealing with processing and using information) in technology, living nature, and society. It was claimed that the same principles of management are implemented in living nature as in technology: deviation control systems based on negative feedback and control by perturbation, that is, the stimulus–reaction scheme (i.e., reflexes).

These concepts fit remarkably well with the principle of behavior formulated at the beginning of the last century by the Russian physiologist I.P. Pavlov as "equilibration with the environment". These ideas underlie the concept of homeostasis (Ashby) and form the foundation of many models of behavior and thinking, both theoretical and technological, constructed on the stimulus–reaction scheme.
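The control-by-deviation scheme with negative feedback mentioned above can be made concrete with a minimal sketch in Python (a thermostat-like regulator). It is only an illustration of the cybernetic scheme, not a model from the book, and the variable names and numbers are invented for the example.

def regulate(setpoint, state, gain=0.5, steps=50, disturbance=-0.2):
    # Deviation control with negative feedback: the corrective action opposes the error.
    for _ in range(steps):
        error = setpoint - state               # deviation from the desired value
        state += gain * error + disturbance    # correction plus a constant external load
    return state

# A homeostatic variable (say, temperature) held near its setpoint despite the load.
print(round(regulate(setpoint=37.0, state=35.0), 2))

The loop settles slightly below the setpoint, which illustrates the purely reactive, equilibrium-seeking character of such schemes.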

The pronouncement of the general principles of cybernetics had both a positive and a negative effect. The positive role was that cybernetics motivated scientists, and primarily representatives of the exact sciences, to study and model information processes related to behavior, perception, and thinking. Complex multidisciplinary studies began to be conducted.

The negative role of the pronouncement of the general principles of cybernetics consisted in simplification of the concepts about the living. Commonality was emphasized, and the difference in principle between living and nonliving was pushed to the background. Excessive attention was given (and is often still given even now) to homeostasis, feedback, the stimulus–reaction scheme, and the problem of "equilibrating with the environment". At the same time, insufficient attention was given to the aspects of activity and goal directedness. (We discuss this in more detail in what follows.)

In addition, the question of the similarity between the organization of the brain and the organization of the computer was discussed with enthusiasm at the early stages. Numerous articles and books appeared, such as "The Brain as a Computational Machine" (F.H. George), "Design for a Brain" (W. Ross Ashby), and "Algorithms of the Mind" (Amosov).

At the start of work on cybernetics, superficial parallels between the brain and the computer were drawn in various respects. Very often, for example, when comparing the brain and a computer, it was stated (and is stated even now) as an important distinction that the computer is a serial computing unit, whereas the brain is an enormous unit incorporating 14 billion neurons working in parallel. The invalidity of such a superficial comparison has long been perfectly obvious, however. The brain cannot be regarded as a powerful computation unit, and the operations performed by the human brain and by the computer cannot be compared: they are entirely different operations.

A human, despite the allegedly parallel organization of his "computing unit", cannot do 100,000 additions per second. Normally, he cannot even do one addition per second. But on the other hand, the human is efficient in solving certain problems that the computer, with all its computational power, cannot solve at all or solves only very slowly, by exhaustive search. The point here is not that the computer is a serial computation unit and the brain is a parallel "device". The point is that the brain and the computer solve their problems totally differently. In the process of thinking, the brain is not controlled by either an algorithm, in the strict sense, or a program. Nor is the process in the brain a computation.

The so-called intellectualization of computers was carried on under various banners, one of which was called bionics. The general goal of bionics was formulated as carrying the "inventions" of nature over into technology. Following this idea, many laboratories were opened in which engineers and physiologists worked together.

One of the most important tasks of bionics was considered to be the use of knowledge from the neurophysiology of the brain in computer technology. However, it became clear rather quickly that there is nothing to carry over from neurophysiology to computer technology; not because physiology lacks sufficiently well-structured and comprehensive information about the operation of the brain, but because this information is not needed for computer technology. The contemporary computer is not similar to the brain. The organization and principles of operation of the modern computer have nothing in common with the organization and principles of operation of the brain.

Naturally, the bionic boom of the 1960s, whose main goal was seen as the creation of technological devices or practically useful programs, ended in complete failure. (There were several exceptions unrelated to thinking; for example, certain results of studies of the echolocation of bats were used in radar.)

Overall, interdisciplinary studies of the principles of operation of the brain, conducted under the general banner of cybernetics or under the narrower banner of bionics, were not useless, if only because they drew the attention of engineers and mathematicians to issues that had previously been considered exclusively the concern of psychologists and physiologists. This led to many attempts to apply both formal analytic methods and methods of modeling to the description of the brain and thinking, which was without a doubt beneficial to the general understanding of the problems.

Yet another banner under which work in the area under discussion was conducted, and is still being conducted, was and remains "Artificial Intelligence" (AI). This research avenue superseded the cybernetic and bionic boom. At the beginning, the optimists believed that a revolution was coming and that the computer would begin to think. However, after it became clear that actual thinking cannot be constructed based on computer technology, the trend shifted from scientific speculations and studies of uncertain prospects to artificial intelligence, that is, to computer solution of difficult informational tasks, those that a human can solve but a computer cannot. Hence, originally, AI did not claim a direct modeling of thinking, but was simply a computer solution of hard-to-formalize "human" problems.

Nevertheless, it was assumed from the very beginning, explicitly or implicitly, that these solutions would permit formulating generalizations and developing specific methods of AI, eventually resulting in machine thinking. Proponents of the new approach correctly supposed (and still suppose) that to arrive at a constructive definition and modeling of thinking, it is useful to proceed from the setup of specific problems to the methods of their solution, introducing "intelligence" as a mechanism necessary for the solution.

Which problems are traditionally assigned to the realm of AI? It turned out that there are many such problems. They include the understanding of natural language by the computer, i.e., question–answer systems, natural-language database access, translation from one language to another, analysis of three-dimensional images, proof of theorems, games, databases, knowledge bases, and so forth.

Some complex studies were conducted under the banners of "expert systems" and "integrated robots". Expert systems concentrate on databases, knowledge bases, informational queries and human interaction with a database, feature-based recognition of situations, and the transition from situations to recommendations and, in certain cases, to control actions. As regards integrated robots, the main questions were (and remain) the visual perception of three-dimensional scenes and the control of movement of a mechanical device, usually a trolley or a manipulator.

The problems mentioned above may apparently be considered specific hard-to-formalize "human" tasks, and there arises the issue of identifying the common features underlying the necessity of thinking. It is stated that any "intellectual" system of management, translation, or perception must be able to construct and use a semantic model of the world. This is certainly true, as it is also true that this problem does not yet have a sufficiently general solution, especially because AI systems typically involve the construction not of active but of passive descriptions. Different systems for representing and using knowledge (for example, systems based on the apparatus of mathematical logic, frame systems, graphs, or semantic networks) are employed in the AI context for representing the problem environment as a multilevel hierarchy of concepts and relations. These may well be steps in the desired direction, but the fundamental problem is to make such systems active, i.e., to make them into "active models of the environment". The principal difference between an active model and a description is discussed in some detail in what follows.
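To make the contrast concrete, here is a toy Python example of the kind of passive description discussed above: a small semantic network in which concepts are nodes and labeled relations are edges, with property inheritance along "is_a" links. The concepts and relation names are invented for the illustration; the point is that such a structure only stores and retrieves facts and by itself does nothing, which is precisely what separates a description from an active model of the environment.

# A toy semantic network: nodes are concepts, labeled edges are relations.
semantic_net = {
    "canary": {"is_a": "bird", "can": "sing", "color": "yellow"},
    "bird":   {"is_a": "animal", "has": "wings", "can": "fly"},
    "animal": {"has": "skin", "can": "move"},
}

def lookup(concept, relation):
    # Follow "is_a" links upward until the relation is found (property inheritance).
    while concept in semantic_net:
        node = semantic_net[concept]
        if relation in node:
            return node[relation]
        concept = node.get("is_a")
    return None

print(lookup("canary", "has"))   # -> 'wings', inherited from 'bird'
print(lookup("canary", "can"))   # -> 'sing', found directly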

Unfortunately, the above problems have not yet been solved at the level that would take us anywhere near the understanding and modeling of the methods of solution and the mechanisms of thinking. No constructive generalizations of AI methods that would be efficiently applicable to different tasks have been worked out. The only common feature of the proposed solutions is that these results are usually based on the traditional formal apparatus, with nonessential modifications, and a "brute force solution", i.e., exhaustive search. In particular, computers now play chess wonderfully. The practical level of the computer's chess play is comparable to that of a world champion. The computer solves this task due to its powerful computational capabilities, basically by exhaustive search and comparison of positions, using both formal and heuristic evaluation rules. Does this provide hints as to the operation of the chess player's brain while playing? Very little, unfortunately, although the chess play itself is a wonderful object for studying thinking.
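The "brute force" play described here, exhaustive comparison of positions combined with formal and heuristic evaluation rules, is in essence depth-limited minimax search. The following generic Python sketch is not the program of any real chess engine; the toy game, the move generator, and the evaluation function are all assumptions of the example.

def minimax(position, depth, maximizing, children, evaluate):
    # Exhaustive game-tree search to a fixed depth with heuristic evaluation at the leaves.
    moves = children(position)
    if depth == 0 or not moves:
        return evaluate(position)
    if maximizing:
        return max(minimax(m, depth - 1, False, children, evaluate) for m in moves)
    return min(minimax(m, depth - 1, True, children, evaluate) for m in moves)

# Toy "game": a position is a number, a move adds 1 or 2, and the heuristic score
# is simply the number itself.
score = minimax(0, depth=4, maximizing=True,
                children=lambda n: [n + 1, n + 2] if n < 10 else [],
                evaluate=lambda n: n)
print(score)

Real programs add alpha-beta pruning and far richer evaluation functions, but the scheme remains search plus evaluation, which is the point made above: it says little about how a chess player's brain works.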

While the problem of computer chess yielded to a "brute force" solution, success in solving many other problems is virtually absent. For example, the Japanese fifth-generation computer project, aimed in the first place at the creation of a "user-friendly" interface based on natural visual and verbal forms of communication between man and computer, was never implemented. It turned out that for the implementation of this project it was not enough to build an "eye" and an "ear"; it was still necessary to explain how the brain operates. (Incidentally, this was understood by many from the very beginning.) For the same reason, qualitative progress is also lacking in tasks such as the analysis of three-dimensional scenes and translation from one language into another. Just as 30 years ago, robots either remain devices for carrying out complex standard technological operations or are increasingly sophisticated toys whose training and behavior do not go beyond the conditioned reflex or, at most, a dynamic stereotype.

Work on pattern recognition is closely related to studies in the AI field. The originally posed general problem of the perception of a complex environment gradually degenerated into the simplified task of classification. Traditional pattern recognition programs and devices are mostly passive feature-based systems for classifying objects considered individually. A mathematical apparatus was developed to solve this problem, and numerous recognition systems were created that work remarkably well in a very broad range of applications. However, serious theoretical achievements that would be significant for understanding the mechanisms of thinking have not been obtained in this direction. These systems lack the properties of living perception, such as integrity, purposefulness, and "recognition with understanding" based on the use of a model of the environment.
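A minimal Python sketch of such a passive feature-based classifier follows: each object, considered in isolation, is reduced to a fixed feature vector and assigned to the class with the nearest centroid. The features and classes are invented for the example and are not from the book.

# Training data: objects reduced to feature vectors [roundness, corner_count / 10].
training = {
    "circle": [[0.9, 0.0], [0.8, 0.1]],
    "square": [[0.2, 0.4], [0.1, 0.4]],
}

def centroid(vectors):
    return [sum(col) / len(col) for col in zip(*vectors)]

centroids = {label: centroid(vecs) for label, vecs in training.items()}

def classify(features):
    # Assign the class whose centroid is nearest in feature space (squared distance).
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(features, centroids[label]))

print(classify([0.85, 0.05]))   # -> 'circle'

Nothing in this scheme uses context, a model of the environment, or a goal, which is why, as argued above, it says little about living perception.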

Work on constructing networks of formal neurons is also related to AI studies. Significant theoretical results, important for the understanding of the mechanisms of thinking, have not been obtained in this field either. In different sections of this book, much is said from a critical stance about perceptrons, modern formal recognizing neural networks (FRNNs), and neurocomputers. The reason is that formal neural networks are given too much attention and are evaluated over-optimistically in the modern scientific, technological, and popular-science literature.

Characterizing the field of AI overall, it can be said that a large part of the work in this field is directed toward the development of algorithms and computer programs for solving complex problems. At the same time, many researchers suggest that many of the higher functions necessary for thinking and manifested in the operation of the brain are not algorithmic. For example, Penrose claims that such indispensable functions are intuition, insight, and, especially, consciousness. This mismatch between the nonalgorithmic nature of the most important functions of the brain and the attempts to implement intelligence programmatically is considered by many to be the dominant cause of the absence of decisive success in understanding and modeling thinking.

Nevertheless, the opinion is sometimes advanced that there will soon appear machines passing the Turing test, intellectually excelling humans, and eventually subduing them. Can a computer program created in the context of an AI-oriented algorithmic approach pass the Turing test? Such a program can be written. The difficulty is that a human trying to "expose" the computer will be guided not only by the content of answers and questions but also by their complexity and by the "humanness" of phrases, expression, the emotional coloration of the text, and so forth. But these difficulties can be overcome. Will computers (programs) that pass the Turing test compete with man and strive for domination? No, as long as they remain purely algorithmic passive systems lacking needs, purposes, desires, and emotions; no, until, for example, they become systems that not only are able to win at chess against a human but also want to do this. Hence, the prospects of rivalry and struggle between computer and man may be discussed not in connection with the hypothetical possibility of creating artificial intelligence but in connection with the even more hypothetical possibility (or impossibility) of creating artificial (nonprotein) life.

It is frequently stated that, as a consequence of the nonalgorithmic nature of consciousness and other functions of the brain, computer simulation of thinking is impossible in principle. This is certainly not so. To understand the principles of operation of the brain in the process of thinking, one need not be limited to the development of algorithms and computer programs for solving specific "human" tasks. Digital (program) modeling of the brain as a physical object seems to be necessary and sufficient. Naturally, this modeling can and must be performed using a computer. In this case, the model may also have nonalgorithmic external functions. The key words in the comparison of artificial intelligence and the mechanisms of human thinking should be not "algorithmic" and "nonalgorithmic" but "passive" and "active". Chapters 7, 8, and 9 describe a possible approach to such modeling.

To summarize, we may say that "Artificial Intelligence" research unites very different and isolated works under a common name, and it will not become a theoretical science with its own subject matter and methods of research until there is a qualitative leap in the understanding of the principles of operation of the brain and the question "What is thinking?" is answered, at least hypothetically. Unfortunately, the prospects for this to happen in the context of what can be called algorithmic AI are presently not very encouraging. Such prospects are not encouraging in the context of modern biology either. Nor are better prospects apparent in the context of "synergetics", a scientific field that has recently come into fashion.

The approach developed in this new field is sometimes called "the theory of everything". It is sometimes claimed that the methods of synergetics are applicable to the description of any systems, including living ones, and even of the brain. Synergetics uses the tools of nonlinear dynamics to describe changes in multistable systems that lose stability under external effects and enter a new stable state as a result of the changes. The transition to a new state is determined by the gradient of the acting forces and by a combination of external and internal random factors. Such a process is characteristic of passive nonliving systems losing stability. Active living systems are constantly in a state of nonequilibrium. Losing stability and moving toward equilibrium, they return to a state of nonequilibrium due to internal and external work. The transition from nonequilibrium to equilibrium also occurs in some phases of the existence and development of a living entity. However, it seems that the active, purposeful maintenance of nonequilibrium is the main goal in the behavior of a living organism and in the operation of the brain. Jumping ahead and using the concepts and terminology of Chapter 3, we may say that the tools and methods of synergetics can adequately describe changes in passive static and passive dynamic systems. Behavior and, consequently, changes in active purposeful dynamic systems are most likely not described by these tools.
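The kind of process that synergetics describes well, a passive multistable system that loses stability and settles into another stable state, can be sketched in Python as overdamped motion in a double-well potential. The potential, time step, and noise level below are arbitrary choices for the illustration and are not taken from the book.

import random

def relax(x, steps=2000, dt=0.01, noise=0.05):
    # Overdamped dynamics in the double-well potential V(x) = (x^2 - 1)^2 / 4.
    # The force -dV/dx = x - x^3 drives the state into one of the two stable
    # minima at x = -1 or x = +1; the noise stands in for random external effects.
    for _ in range(steps):
        force = x - x ** 3
        x += dt * force + noise * random.gauss(0.0, dt ** 0.5)
    return x

random.seed(1)
print(round(relax(0.1), 2))   # settles near one of the passive equilibria, +/-1

An active living system, in the terminology of Chapter 3, would instead spend internal work to climb back out of the well and maintain its nonequilibrium, which is exactly what such passive dynamics does not capture.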

In what follows, we speak in detail about the idea that the two main tasks solved by the brain are the construction of an active hierarchical model of the environment and the use of this model for rapid solution, based on local emotional evaluations, of multiselective, multi-extremal behavioral tasks by reducing them to low-selective, single-extremal tasks. It seems that the mechanisms of solving these tasks are based on the operation of synergistically interacting, mutually assisting active unstable elements. Overall, this is not described by the apparatus of nonlinear dynamics. Therefore, the claims of synergetics to the role of the theory of everything do not seem to be substantiated.
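The reduction described here, from a multiselective, multi-extremal search to a low-selective one guided by local evaluations, can be illustrated with a toy choice problem in Python. Exhaustive enumeration inspects every action sequence, while a greedy rule scores only the immediate consequences of each action; the scoring function stands in, very loosely, for the local evaluations mentioned in the text and is purely an assumption of the example.

from itertools import product

ACTIONS = [-1, 0, 1]

def payoff(path):
    # Global (multi-extremal) value of a whole sequence of actions.
    pos, total = 0, 0
    for a in path:
        pos += a
        total += -abs(pos - 3)      # reward for staying near position 3
    return total

def exhaustive(depth):
    # Multiselective search: evaluate every possible action sequence.
    return max(product(ACTIONS, repeat=depth), key=payoff)

def greedy(depth):
    # Low-selective search: at each step keep only the locally best action.
    pos, path = 0, []
    for _ in range(depth):
        a = max(ACTIONS, key=lambda act: -abs(pos + act - 3))   # local evaluation only
        pos += a
        path.append(a)
    return tuple(path)

print(exhaustive(5))   # 3**5 = 243 whole sequences evaluated
print(greedy(5))       # 3 * 5 = 15 local evaluations

Here the greedy rule needs 15 local evaluations instead of 243 full sequences; of course, such a reduction is only safe when the local evaluations are well matched to the global task.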

Therefore, neither in the field of AI nor in adjacent areas are there significant general results leading to the understanding of thinking. Moreover, it is becoming increasingly clear that passive algorithmic systems will not lead directly to understanding and modeling thinking.

Isolated studies of various problems in AI and adjacent areas can be unified, ordered, and used to some extent on the following basis. The brain arose and evolutionarily developed to ensure the existence of animals, i.e., for survival. A simple functional definition of thinking is possible, based on a concept of what thinking is necessary for (for man or animal). We give such a preliminary definition.

Thinking is an active process in a living brain directed toward

  • construction in the brain of an active model of the environment, necessary and sufficient for the perception of the environment, prediction, and management of purposeful behavior in a multi-extremal dynamic environment;
  • realization of the process of perception;
  • realization of the process of managing behavior;
  • realization of the training process;
  • solution of nonalgorithmic (heuristic and creative) tasks.

Please note the words "active model of the environment" and "active process in a living brain". These words emphasize that for understanding thinking, it is important to understand not only what the brain does but also how it does it. Another important point is the emphasis on the multi-extremality of behavioral tasks.

This very general preliminary definition specifies only the main directions of the necessary research. In what follows, it is used as an initial framework that is to be filled by concrete and more precise, largely hypothetical, content. In particular, we try to move toward the understanding of thinking from the problems of perception and management of behavior, as well as from the understanding of the principles of operation of neuron mechanisms. Of course, the complete understanding of the process of human thinking is hardly possible outside the verbal level (language).

In addition, in attempting to explain the essence of thinking and construct a theory of the operation of the brain, it is impossible to bypass the most baffling problems related to the understanding of the role of consciousness, free will, creativity, emotions, feelings, desires, and sensations.

All this leads us to the following conclusion. To achieve an understanding of the process of thinking, one probably should not begin with algorithm-based AI, i.e., with the solution of difficult problems that require the processing of complex information. One should probably begin with understanding the difference between the living and the nonliving. One may hope that this difference is not only the basis of a special mode of organization of matter but also a foundation of the process of thinking, and that it is material and can be scientifically modeled. The search for this difference, its strict definition, and attempts to model it could be considered a separate branch of science. This branch could be named, for example, "vitalics" (from the Latin vitalis). In contrast to vitalism, the purpose of vitalics would be to find a material and strictly defined basis for such concepts as vital energy, animatedness, entelechy, and consciousness. We will try to achieve some results in this direction by examining behavior, perception, and the operation of the neural mechanisms of the brain. Perhaps the important difference between the living and the nonliving lies in E.S. Bauer's principle of stable nonequilibrium. We discuss this later.

We note again that research on the problem of thinking, considered in isolation from the question of how the living brain operates, may be insufficient. Thinking seems to be not so much "clever" methods of solving difficult tasks as the particular mechanisms of operation of the living brain, and modeling thinking must be based on understanding the principal differences between living and nonliving.

The essence of these differences may be clarified by Bauer's principle of stable nonequilibrium. In later exposition, we attempt to show that it is specifically a stable nonequilibrium of living matter, not just at the cellular level but also at the level of the entire organism (i.e., in constructing a model of the environment and at the level of behavior in the environment), that necessarily entails the maxT principle considered in this book, as well as the principles of integrity, purposefulness, and activity.

And, finally, the most important property distinguishing a living organism capable of thinking is mutual assistance, manifested both at the level of behavior, in the uniting of functional subsystems into a single functional system, and at the level of the neural mechanisms of the brain.

Thinking may be considered to embrace all processes arising in the brain that are related to conscious information processing. These can be different processes, which may be classified in different ways. It is natural to distinguish processes related to the perception and recognition of the environment, the management of behavior, the solution of formal tasks, and creativity. Common to all thinking processes is, first, the reflection of their results in consciousness and, second, the conscious, purposeful management of the process based on the use of a model of the environment and imaginative modeling.

In what follows, we call the processes of perception and imaginative modeling of the environment that occur in the brain perceptual thinking. We call the processes that result in the construction of a model of the environment on the basis of actual perception cognitive thinking. We call the processes aimed at managing behavior or solving formal tasks practical (behavioral) thinking. In addition, in Chapter 9, we speak of creative and reproductive thinking, which are explained there.

Chapters 3, 4, and 5 contain general, largely theoretical, discussions of several issues related to the basic theme. These are preceded by descriptions of the results of solving more specific tasks: a formal model of behavior (Chapter 1) and principles of construction of an "intelligent" system of visual perception (Chapter 2). The latter is illustrated using the example of a practical resolution of the task of reading hand-written texts.

Chapter 6 briefly describes classic and modern models of formal neural networks. In Chapter 7, we outline a possible approach to constructing computerized recursively computable physical models of the active synergistic neural mechanisms of the brain. In Chapter 8, we discuss the main problems whose solution is essential for constructing relatively complete models of the neural mechanisms of thinking. Chapter 9 describes some qualitative theoretical concepts on the operation of the brain and neural mechanisms in the process of thinking and creativity. And, finally, Chapter 10 touches upon the traditional question: "Can a computer think?"


About the Author
Aleksander SHAMIS (born in 1933) is a Candidate of Technical Sciences and a winner of the RF Government Science and Technology Award. For many years he worked in the Scientific and Research Center of Computer Technologies. At present, he is a scientific consultant of the ABBYY Company. For over forty years he has been engaged in theoretical and practical work on artificial intelligence and in the development of applied intellectual technologies. The main sphere of his interests is the modeling of perception and behavior. He is the country's leading scientist in computer visual perception and automatic text recognition, and one of the creators of FineReader and FormReader, popular systems for the recognition of printed and handwritten texts.