Could a Computer Feel Pain?

David W. Croft
Internet CroftDW@Portia.Caltech.Edu, CompuServe [76600,102]

Philosophy 131
Philosophy of Mind and Psychology
Professor Fiona Cowie

California Institute of Technology
Pasadena, California

1994 February 09 Wednesday

Abstract

This paper seeks to define pain in a manner capable of implementation in a computational device, with the goal of developing an optimizing learning algorithm, following the intuitive and philosophical guidelines of biological pain as a working model. It then demonstrates that an implementation of pain alone is insufficient to match the capability of representational systems to find global minima. A definition of the requirements for a representational system is then proposed for the purpose of suggesting a model for implementation.

Introduction

I define pain as a continuously and purposely optimizing input to a feedback system. I proceed by clarifying and restricting the defining terms to the given context. I then demonstrate the robustness of this definition by showing its compatibility with a biologically acceptable intuitive and philosophical viewpoint. I conclude that if a computational device were designed to meet the requirements of this definition, the computer could then be said to feel pain. I further note that this definition of pain does not completely account for higher-order life forms, which are capable of beliefs and intentions, which I label representations. I conclude with a rough sketch of the requirements for a representational system, for the purpose of understanding how a computer could have a mind akin to our own.

Function

A function maps a set of inputs to a single output. To see this, consider the definitions of "function" which follow.

5. Math. a. A variable so related to another that for each value assumed by one there is a value determined for the other. b. A rule of correspondence between two sets such that there is a unique element in one set assigned to each element in the other. (Morris 1982:539)

From the above, it becomes apparent that a function simply maps one set of points to another, as in the equation of a line where we consider x to be the input and y to be the output: y = f(x) = m*x + b. Note that we can remap the output to the input if we take x as a function of y: x = f(y) = (y - b) / m. If we examine definition b of "function", we note that, for each value in the input set x, there is one and only one corresponding value of the output y. Thus, the equation of a circle would not qualify as a function, since for many values of x there are two values of y: a point on the top of the circle and a point directly below it on the bottom.
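
To make this concrete, the line and its remapping might be sketched in the Python programming language as follows; the particular slope m and intercept b are illustrative assumptions of mine, not values from the text.

    # A sketch of the line example: each input x determines exactly one output y.
    # The slope m and intercept b are illustrative values only.

    m, b = 2.0, 1.0

    def f(x):
        """Map each input x to one and only one output y."""
        return m * x + b

    def f_inverse(y):
        """Remap the output back to the input: x = (y - b) / m."""
        return (y - b) / m

    assert f_inverse(f(3.0)) == 3.0  # one input, one output, and back again
    # A circle, by contrast, fails definition b: for most values of x,
    # x*x + y*y == r*r admits two values of y, so it is not a function.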

A deterministic, or non-random, function will give the same output y every time a given input x is presented. That is, the input x completely "determines" the output y. Randomness is a myth. Some have proposed that true randomness can be observed at the very lowest level of nature in quantum mechanics, wherein some probabilistic, or random, forces are at work. The physicist Einstein is said to have expressed his dissatisfaction with this theory with the statement, "God does not play dice." Therefore, if one observes some "random" output y from a function for a given input x, one can only conclude 1) that not all of the inputs have been identified, 2) that the subsequent input x was slightly different, which, especially for non-linear functions, may have caused a measurable difference in the output y, or 3) that the "function" is actually a system.

System

A system, unlike a function, is purposeful. To see this, consider the following definition of a system.

A combination of two or more sets generally physically separated when in operation, and such other assemblies, subassemblies, and parts necessary to perform an operational function or functions. (Parker 1984:428)

To rephrase this, a system is composed of separate sub-systems, which may themselves be composed of sub-sub-systems, and so forth, until it is ultimately just a collection of input-output functions assembled in such a manner as to perform some desired operation. Note that the use of "operational function" in the above definition differs from the term "function" as I have been using it and is more akin to the phrase "desired task" in the definition below.

A system, in its most general form, is defined as a combination and interconnection of several components to perform a desired task. (Ziemer 1983:1)

One of the above definitions states that the component functions are assembled in a way "necessary to perform". The other definition states that the functions of a system are interconnected in such a way as "to perform a desired task." A task is defined as a "function to be performed; objective" (Morris 1982:1245). Thus, a system, unlike a function, always has a designed purpose, objective, task, or goal.
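
As a brief sketch of this idea, the component functions below (the names and numbers are my own illustrative assumptions) are interconnected so that the combination performs a desired task, obstacle avoidance, that neither component performs alone.

    def sense(raw_reading):
        """Component function: convert a raw sensor reading into a distance."""
        return raw_reading * 0.01

    def steer(distance):
        """Component function: map a distance to a steering command."""
        return "turn" if distance < 1.0 else "straight"

    def obstacle_avoider(raw_reading):
        """The system: an interconnection of components performing a desired task."""
        return steer(sense(raw_reading))

    print(obstacle_avoider(50))   # turn
    print(obstacle_avoider(500))  # straight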

Feedback System

A feedback system is capable of determining its output from past as well as present inputs. An interconnection of separate functional components, a system, could allow for the outputs of one or more of the functions to be re-routed back to the input of one or more of the functions. Thus, the system would perform like a function whose inputs include the current inputs and some form of the "history" of prior inputs.
          ------------
X ------>| Function A |----> Y
      -->|            |--
     /    ------------   \
     \    ------------   /
      ---| Function B |<-
          ------------
In the above figure, the output y is a function (A) of the inputs, which include x and some function (B) of the previous outputs y. There is said to be a "feedback" path since the output is "fed back" into the input of the system. This, of course, assumes that there is some realistic delay as the input propagates through the system as well as the feedback path to determine the output.

Further note that, at any point in time, a feedback system can be viewed simply as a new function with respect to its previous input to output relationship if the feedback inputs are not considered. In this case, one could say that the function, or input to output relationship, is "plastic", or modifiable. If the feedback inputs are not considered and the plasticity cannot be predicted in any deterministic fashion, one might even consider the output "random"! To reiterate, a carefully designed collection of component functions, a system, is capable of determining its output from current and previous inputs through the use of feedback.
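
The figure above might be sketched as follows; the one-step delay around the loop and the particular choice of Function B are assumptions of mine for illustration. Note that repeated presentations of the same input x yield different outputs, which an observer ignoring the feedback path might well call "random".

    class FeedbackSystem:
        """Function A, with its output fed back through Function B."""

        def __init__(self):
            self.previous_y = 0.0  # the "history" carried by the feedback path

        def function_b(self, y):
            # Function B: an arbitrary transformation of the previous output.
            return 0.5 * y

        def function_a(self, x):
            # Function A: output determined by the current input x
            # and by the fed-back form of the previous output y.
            y = x + self.function_b(self.previous_y)
            self.previous_y = y  # one-step delay around the loop
            return y

    system = FeedbackSystem()
    print([system.function_a(1.0) for _ in range(4)])  # [1.0, 1.5, 1.75, 1.875]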

Optimizing Input to a Feedback System

An optimizing input to a feedback system is an input which changes the input to output relationship, or function, of a plastic system to better perform its task. To "optimize" is to "make as good or as effective as possible" (Morris 1982:873). "Optimizing", then, is the process of making something as good or as effective as possible. In this context, that "something" is the ability of a system to perform a desired task. To say that an input is optimizing is to say that the input is in the process of making the ability of a plastic system to perform its desired task as good or effective as possible. This allows for the possibility that during the process of optimization the feedback may actually cause the system to be temporarily less optimal in performing its functional task.
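
To sketch this in code (the target-matching task and the learning rate are my own illustrative assumptions), the optimizing input below engages the plasticity of the system by adjusting an internal weight so that the input-to-output mapping better performs its task.

    def optimize(weight, x, target, rate=0.1):
        """Apply one optimizing input: a plastic change to the mapping y = weight * x."""
        y = weight * x            # the current input-to-output relationship
        error = target - y        # the optimizing input
        return weight + rate * error * x

    weight = 0.0
    for _ in range(100):
        weight = optimize(weight, x=1.0, target=3.0)
    print(round(weight, 3))  # approaches 3.0: the mapping has been remapped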

To contrast, one should consider the other type of input to a feedback system, non-optimizing input. Non-optimizing input can be broken down into two sub-types, those inputs that do not affect the performance of the system and those inputs that are in the process of detrimentally affecting the system.

An input that does not affect performance, a "sensation", may simply be processed as input to determine the output in a functional manner given the current state of the system. On the other hand, it may simply be ignored and have no bearing on the output. Whether it engages the plasticity of the system is irrelevant; what matters is that the overall, or net, input-to-output functional mapping is unchanged.

Inputs that are in the process of detrimentally affecting the ability of the system to perform its task include inputs which are purposeful, such as mis-information, and those which are accidental, such as non-rational or accidental performance damage. Note that since we are considering the process of an act instead of the act itself, one can allow for a temporary optimization en route to non-optimization.

Purposely Optimizing Input to a Feedback System

A purposely optimizing input to a feedback system intentionally engages the plasticity of the system in order to optimize its output. To realize this, one must distinguish between purposely and purposelessly optimizing input.

Purposelessly optimizing input is accidental. It is an optimizing input which was not designed to optimize the system. It is irrational. It carries no information or meaning. It does not symbolize or suggest optimization; it simply is optimization. At the level of the individual organism, below the level of the species, random mutation could be considered an example of a purposeless optimizing input.

Continuously, Purposely Optimizing Input to a Feedback System

A continuously and purposely optimizing input to a feedback system purposely optimizes the system as long as the input persists. That is, as long as this input is present, the system will be in the process of optimization. It is insufficient for the input to initially purposely optimize the system; it must continuously cause plastic changes in the input to output mapping of the function of the system.

What Pain Is Not

Pain is a continuously and purposely optimizing input to a feedback system. To see this, let us consider what we have excluded.

Pain is not an input to a function. A function lacks plasticity, so any input to it is simply a sensation to be mapped to an output, such as behavior. Furthermore, this behavior may not have any intentional design, as with a moving object that changes direction semi-randomly with every collision.

Pain is not an input to a system. While a system requires some intent in the performance of its task, it may not be capable of performance feedback. Consider a machine which is designed to move about a room. If an obstacle is placed in its path, it may ram into the obstacle fruitlessly and indefinitely, without any regard to its performance in completing its designed task.

Pain is not an input to a feedback system. Not all inputs to a feedback system are pain. Consider a moving vehicle with a collision detection system. It may sense an obstacle by colliding with it. The collision, as input, may be recorded in the system, such as the number of dents on its chassis. Clearly, there is some form of memory of the event and plasticity of the system, but the plasticity in the chassis will have no effect on its performance. The vehicle could, after acquiring the dent, continue to engage in non-optimizing behavior, such as continuing to collide with the obstacle. We could not consider the input as meaningful or relevant feedback to the task at hand.

Pain is not an optimizing input to a feedback system. Consider a vehicle which, as it collides with obstacles, begins to lose external projections on its chassis as they break off. On subsequent trips, collisions do not occur as the vehicle is becoming smaller. One could say that the system has feedback in that the structure of the vehicle is plastic or mutable and that the reshaping or mutation that occurs in the breaking of the vehicle is optimizing input. From the standpoint of the designer, however, it is doubtful that this optimization would be purposeful or intended. One can see that there is no information in the input, that the feedback is irrational, if, just as unexpectedly as the occurrence of optimization through the breaking of the chassis during collisions, performance degradation occurred.

Pain is not a purposely optimizing input to a feedback system. Consider our aforementioned vehicle, improved so that, upon detecting a collision, it attempts a skillfully chosen series of movements around the obstacle until it succeeds in finding an optimal detour. From the standpoint of the designer, the input, or collision detection mechanism, is purposeful and serves to discover a more optimal route for the vehicle when compared to simply colliding with the obstacle or shearing off external projections. However, on subsequent trips over the same path, we might expect the vehicle to engage in the same collisions and attempt the same series of detours to reach its goal. This settled behavior may clearly be sub-optimal, especially when one considers that the vehicle may repeatedly attempt the very detours that failed it in previous runs. Optimization has stopped. The collisions, then, have become landmarks or guide posts in a passively learned behavior of a complicated system.

Pain

Finally, we can show that pain is a continuously and purposely optimizing input to a feedback system. For our collision detection input to be considered pain, the vehicle must attempt different trajectories to achieve its goal so long as the collisions persist.

It does not matter that the vehicle may never completely avoid all of the obstacles; the requirement is that it continuously attempts to optimize its path so long as the detection of collisions persists.
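
A sketch of this requirement, under my own simplifying assumptions (a route as a list of integer waypoints, obstacles as a set of cells): so long as the collision signal persists, the vehicle keeps attempting different trajectories; the moment the pain ceases, so does the optimization.

    import random

    def collisions(route, obstacles):
        """The pain signal: how many waypoints of the route strike obstacles."""
        return sum(1 for waypoint in route if waypoint in obstacles)

    def drive(route, obstacles, attempts=1000):
        pain = collisions(route, obstacles)
        for _ in range(attempts):
            if pain == 0:
                break  # no pain, no further optimization
            candidate = list(route)
            i = random.randrange(len(candidate))
            candidate[i] += random.choice([-1, 1])  # attempt a different trajectory
            new_pain = collisions(candidate, obstacles)
            if new_pain <= pain:  # keep any attempt that hurts no more
                route, pain = candidate, new_pain
        return route, pain

    print(drive([1, 3, 4, 7, 9], obstacles={3, 4, 7}))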

It also does not matter that the pain may result in no better solution, or that it may even cause a degradation in performance, so long as the purpose of the input continues to be the process of optimization. As a biological example of this last, somewhat confusing, concept, consider the unresolvable chronic pain experienced by our ancestors. Despite the fact that no remedies were available, the purpose and intent of the feedback signal was clear -- something is amiss in the body. Finally, after countless unsuccessful attempts at relieving this pain, progressive steps towards the cure of cancer are being taken. Pain is an input which refuses to be ignored!

Nor does it matter that, if it does manage to find a path which avoids all collisions, it settles upon a sub-optimal choice, or poor path, since the definition requires optimization only during input of the pain. In fact, one "successful" approach our vehicle could take in dealing with the pain would be to ram into an obstacle in such a manner as to destroy the collision detection mechanism. Optimization would then cease. Our biological equivalent would be to apply a drug such as morphine to treat the symptom, the input pain, despite the progression of the degenerative disease, cancer.

Suicide, then, is a perversion of the intent or purpose of pain. It is an undesirable side-effect of pain: the system continues to attempt to optimize itself by minimizing pain, with the net result probably being contrary to the original design. Analogous to this is the fact that monopolies result from the natural progression of the laws of supply and demand, contrary to the goal of efficient capital markets which those laws are meant to serve. The same could be said of evolution if one were to take a pessimistic outlook. For the most part, though, suicide requires a gradient ascent; that is, suicide, as a solution, is frequently avoided because the process begins to hurt more than any original pain.

Representational Systems

It is to be understood that I am not claiming that pain is the only feedback optimization device that we possess. We continuously see examples wherein we behave in such a manner as to intentionally forgo minimization of our pain or to actually increase our pain from one moment to the next. Consider the following examples: the prisoner who denies information despite extreme duress, the soldier who voluntarily risks extreme danger, the martyr who submits to the wrath of oppressors, and the worker who arises every morning from his comfortable home despite a well-stocked refrigerator.

We also note that the test of cognition seems to be this ability to accept some level of current pain in exchange for avoiding some greater future pain, or in order to minimize some other representation of our input space. We observe that in lower life forms the laws of Behaviorism strictly apply, whereas in higher life forms such as ourselves, internal motivations, intents, beliefs, desires, and emotions skew these input-to-output relationships, leading some to believe that we are not deterministic systems.

I propose that representation, the manipulation of symbolic internal inputs, can explain these contrary observations. When we defined pain before, we tacitly assumed that the primary recurrent feedback path of our system was the external environment. The physical laws of nature acted as the teacher that translated output behavior into physical consequences, which were then re-routed back into the system as pain. If, however, we allow for a competing, internal feedback path with different properties from that of pain, we can explain these contradictions in the higher life forms.

A representation is a belief or model of the input space on which optimization can be made. Like pain, the definition of a representational input and those systems capable of representation have certain requirements. Briefly, those requirements include plasticity or feedback; a purpose, or design, or task; and the ability to query or imagine or perform hypothesis testing for an optimal output.

Just as we considered a hypothetical computational device capable of experiencing pain if designed to meet the requirements of the given definition, we could also imagine a computer capable of representational thought should we design it to meet the requirements of this subsequent definition. Consider an autonomous vehicle that, upon encountering a new environment, embarks upon a temporary exploration phase despite the associated cost in increased pain and the prior discovery of a sufficiently optimal path. We might assume from this behavior that the vehicle is attempting to form a global map of its input space, or a global representation, which it could then use to find not just the locally available optimal path that reduces pain in pursuit of its objective, a local minimum, but the best path to minimize collisions considering all of the input space, the global minimum.
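
As a final sketch, under my own assumptions (the pain landscape below is invented for illustration): a pain-only vehicle descends to the nearest local minimum, while a representational vehicle pays the cost of an exploration phase to build an internal map and then queries that map for the global minimum.

    def pain(position):
        """Collisions suffered at each of ten candidate positions (illustrative)."""
        return [5, 3, 2, 4, 6, 3, 1, 0, 2, 4][position]

    def pain_only_vehicle(position=0):
        """Descend the pain gradient; stop when no neighbor hurts less."""
        while True:
            neighbors = [p for p in (position - 1, position + 1) if 0 <= p <= 9]
            best = min(neighbors, key=pain)
            if pain(best) >= pain(position):
                return position  # settles at a local minimum
            position = best

    def representational_vehicle():
        """Explore everything despite the cost, then query the global map."""
        world_model = {position: pain(position) for position in range(10)}
        return min(world_model, key=world_model.get)

    print(pain_only_vehicle())         # 2, a local minimum
    print(representational_vehicle())  # 7, the global minimum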

Conclusion

This paper has sought to define pain in a manner capable of implementation in a computational device, with the goal of developing an optimizing learning algorithm, following the intuitive and philosophical guidelines of biological pain as a working model. It was then shown that an implementation of pain alone is insufficient to match the capability of representational systems to find global minima. A definition of the requirements for a representational system was then proposed for the purpose of suggesting a model capable of implementation.

References

Morris, William, ed. American Heritage Dictionary, 2nd College Edition. Boston: Houghton Mifflin Company, 1982.

Parker, Sybil P., ed. McGraw-Hill Dictionary of Electrical and Electronic Engineering. New York: McGraw-Hill Book Company, 1984.

Ziemer, Rodger, et al. Signals and Systems: Continuous and Discrete. New York: Macmillan Publishing Co., Inc., 1983.


What follows is the material that was scrapped due to time constraints.

	Pain is defined by my dictionary as follows:

1. An unpleasant sensation, occurring in varying degrees of severity as a consequence of injury, disease, or emotional disorder. 2. Suffering or distress. ... [3] To be the cause of pain. (Morris 1982:893)

Re-ordering this from outside the mind to inside the mind, this can be expressed as
1) the physical object that causes pain,
2) the sensory input pain, or
3) the experience of pain.

must have inputs, otherwise pre-ordained behavior
off-line backpropagation of a feed-forward network doesn't qualify; on-line continuous backpropagation does -- the backpropagation is a recurrent feedback loop

PAIN AND PHILOSOPHIES OF MIND

Dualism
	Pain, as defined, integrates intuitively with the dualist concept that the mind and body are separate.
qualitative badness
separate from physical body, not the system
Hell purges
if experienced by body but not mind, could be ignored, so not continuous, so not pain
mindless robots without pain smash into walls
subjective badness
experience of

Idealism
	Pain, as defined, integrates with the idealist concept that everything is composed strictly of mind.
if the existence of pain is ignored

Physicalism
the body is the system

Behaviorism
operant conditioning

Identity Theory
the input is neurotransmitter

Functionalism
the system is composed of functions

Representationalism
LOT, hypothesis testing
optimization and plasticity
beliefs

Computationalism
neural network

IMPLICATIONS
Only optimizing intentional representational systems can have pain.
All optimizing intentional representational systems have pain.

MIND GAMES
	Dualism is often implemented in software through the familiar context of computer games. Many such games allow multiple players separated by thousands of miles to interact in virtual bodies in a virtual reality created by a computer program. It is clear that the behaviors of the simulated "shells" are distinct from and influenced by the inputs from their minds, the players.
pain doesn't affect players

To take it even further, the inputs and outputs of the players could be constrained from birth to the inputs and outputs via the computer connection. In this case, Idealism would be the true state of nature, for all of virtual reality for the players might exist solely in the Mind of God, the Computer.

Furthermore, the players might interact with virtual minds, or simulacrums, controlled and played by the computer. For these beings it could be argued that they are creatures of Physicalism, consisting only of the physical forces which drive the computer.

If these simulacrums were given a degree of adaptable intelligence which allowed them to learn, they could be driven strictly by any arbitrary internal implementation of the input-output laws of Behaviorism.

For these beings, then, mind states such as pain or desire could be identical to a certain flow of mechanics of the computer, as stated in Identity Theory.

Of course, upon closer examination from the viewpoint of Functionalism, these creations, which may seem to exhibit intricate behaviors, could be described completely as a complex composition of the basic functions available in the computer software.

These basic functions could make up a programming language composed of named operations that, when combined in a structured syntax, give semantic meaning to the beliefs, desires, and intentions of these beings. It would therefore be apparent, as stated in the theory of Representationalism, that their Language of Thought, or mentalese, would be constrained by the capabilities of this base programming language.

However, even if this language contained only one word or operation, it could be mathematically proven that, if the right word were chosen (much as all of Boolean logic can be built from the single NAND operation), logical sentences could be built up to fashion any possible thought, including non-rational thoughts involving fuzzy logic and intuitive beliefs described by Computationalism and normally seen only in the neural networks of the players.
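
The NAND case can be sketched briefly; the function names below are my own.

    def nand(a, b):
        """The single chosen "word": true unless both inputs are true."""
        return not (a and b)

    # Every other logical operation can be composed from NAND alone.
    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return not_(nand(a, b))

    def or_(a, b):
        return nand(not_(a), not_(b))

    assert not_(False) and and_(True, True) and or_(False, True)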

In fact, if the inputs and outputs to and from the mind of a player were observed and recorded as they passed through the separating shell to the virtual reality, the virtual body, an artificial neural network could be simultaneously trained to match the general patterns of these inputs and outputs. An Epiphenomenalism might exist as the neurons of the artificial neural network, as part of training, were forced to fire to correspond to outputs given by the player.

If training were successful, that is, if the artificial neural network were generally successful in modeling the input-output patterns of the biological neural network of the player, the player could then release control of the shell to the artificial neural network. The behavior of the shell would then in many respects exhibit the personality and habits of the player while continuing to optimize based upon laws of Behaviorism or artificial neurobiology until the player chose to take "possession" of the shell again.

Hell, reincarnation, meaning of life as a purpose

determinism: The philosophical doctrine that every event, act, and decision is the inevitable consequence of antecedents that are independent of the human will. (Morris 1982:388)

fatalism: The doctrine that all events are predetermined by fate and therefore cannot be changed by human beings. (Morris 1982:492)

