There is certainly evidence that language shapes thought. Different languages encode different cultural ideals and conceptual metaphors. For example, a language that is strongly tied to a subject-verb sentence structure subtly implies that reality is composed of active, discrete subjects affecting the external world. Other languages may focus more on passive qualities of being rather than action.
Some languages require you to attend to other factors in your environment in order to use them. For example, some languages require absolute directions when describing the location of objects--such as "this desk is north of me," rather than simply "this desk is in front of me." Using such a language requires that you pay more attention to cardinal directions than you otherwise might.
Often language is needed to codify and retain an experience. For example, we may intimately experience a dream, but can't remember it two days later unless we write it down or tell it to a friend. Yet only those elements of the dream that can be encoded in the language can be cemented in this way, which means our later recollection of the dream will only contain those parts encodable in our language.
So it seems that language does at least affect our experiences. And it certainly limits our expression of those experiences. If a language contains no word to describe a certain sensation or situation, we need to try to squeeze the experience into other, less adequate words. It can be difficult to express the full nuances of our thoughts and experiences in this way.
Yet I don't think our language restrains us from having complete or novel experiences. People with new ideas invent or borrow new words to describe them; here, thoughts and experience determine language, and not the other way around.
And so I don't believe thoughts are constrained solely by the thinker's language, but that thoughts can be affected or molded when they need to be encoded into that language. The language constrains what can be retained and communicated to others.
I believe other humans are conscious because of their actions. I interpret many of them as indicating mental processes similar to my own. For example, I see signs that other people sometimes pause to reflect and process information, that they have likes and dislikes, that these can form a stable pattern of preference and mannerisms (i.e., a personality), which is tempered by moods or emotions of the moment. What is particularly significant is that when people describe their own internal experiences of the world, they often mirror my own experiences.
I believe that many of these same indicators of thought--information processing, preferences, choices, moods, and reported consciousness--could be present in a machine of sufficient complexity. (Emotions and moods might be somewhat lacking at first in machines.) Descriptions of personal internal experience imply consciousness. Machines can already report certain aspects of their internal state--memory and processor usage, for example. Though I would not, at that stage, call this consciousness (especially if there are no preferences for certain states of being), it seems to be more a difference of degree than a difference of kind.
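As a toy illustration of the kind of self-report mentioned above (not consciousness, just the difference-of-degree starting point), a program can describe a few aspects of its own running state. A minimal sketch in Python, assuming a Unix-like system where the standard resource module is available:

```python
import os
import time
import resource  # standard library, Unix-only


def self_report():
    """Return a simple description of this process's own internal state."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "pid": os.getpid(),                    # which process is speaking
        "cpu_seconds": time.process_time(),    # processor time used so far
        "peak_memory_kb": usage.ru_maxrss,     # kilobytes on Linux, bytes on macOS
    }


report = self_report()
print(report)
```

Of course, such a report involves no preference for one state over another; it is only the mechanical end of the spectrum the paragraph above describes.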
Essentially, I agree with the Turing test: if a human interacting with a machine can't tell it's a machine 70% of the time, we should judge that, for all effective purposes, the machine is as much a thinking thing as a human. I think this level of thinking could be reached by machines. And I think this would change how I interact with them. For example, the droids in the Star Wars movies exhibit a high level of personality, preference, and internal experience. Though still "only" machines, it seems as cruel to mindwipe them as it would be to lobotomize a normal human. Yet I doubt that I will see any serious moral respect for machines during my lifetime. (There is still precious little respect for high-order animals at this point.)
[I do not consider this a pseudo-question. Cherry claims that asking "Can machines think?" is an incompatible mixture of the subjective and material realms. That is, it is nonsensical to ask about the internal subjective experience of a material, objective object. Yet even assuming this Cartesian dualism (which I generally don't like to do), I think Cherry is misguided. By his reasoning, "Can other humans think?" is also a pseudo-question. Though we cannot directly experience another's subjective experiences, I do not believe it is nonsensical or illegitimate to ask whether they might still exist. In any case, the question here is whether machines exhibit behavior that indicates thinking, not whether they actually think. This is certainly not a pseudo-question, even by Cherry's reasoning.]
I see our conception of communication as a frame, or a prototypical scenario. At its core, a model of communication is two people talking to each other. In this case, feedback is very important. A speaker relies on feedback to determine whether the listener can hear the speaker's utterances, recognize the speaker's language, and understand the meaning of the speaker's message. Based on the feedback received, the speaker can refine the discourse in order to correct errors and omissions.
In other, less prototypical forms of communication, feedback is less immediate. For example, mass media tend to publish or broadcast messages in only one direction. Yet television broadcast companies still seek some feedback in the form of ratings, surveys, and focus groups. Book authors and publishers get feedback from book reviews. Sales figures can give some clue as to whether the messages sent are desirable to receive.
As modes of communication change, such as when affected by changing technologies like cell phones, video-conferencing, etc., it may be that speakers (or senders) receive less feedback. For example, if the receiver is not physically present, the speaker cannot rely on clues from the receiver's body language about how the message is being received. This means less chance for error detection and correction. As communication technologies change and evolve, they need to continue to support adequate feedback mechanisms if the quality of communication is not to degrade.
Cherry, Colin. On Human Communication: A Review, a Survey, and a Criticism. Cambridge: Technology Press of Massachusetts Institute of Technology, 1957.
CIS: Week 3
Last Edited: 10 Sept 2004
©2004 by Z. Tomaszewski.