Cherry describes three levels of the semiotic field, which is the study of signs. The levels are syntactics, semantics, and pragmatics. Pragmatics is, according to Cherry, the most inclusive, and it concerns itself with the relationship between signs and their users. Semantics is the study of the relationships between signs and their designata. Syntactics concerns itself only with the relationships between the signs themselves.
These can be said to be different levels of abstraction. Pragmatics includes all the user-based and contextual information of an expression. Semantics removes the environment and concerns itself only with what signs designate in the abstract. (See question 2 below.) Syntactics removes even this designation and looks only at connections between signs. Interestingly, this may still reveal "syntactic truths". For instance, in logic, the following is true: "P -> Q. P. Therefore Q." Here P and Q do not designate any real-world entities, nor do they have any general semantic meaning; yet based on the logical syntax alone, the inference is valid.
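This syntactic validity can be checked purely mechanically, without assigning P or Q any designata at all. A minimal sketch in Python, which simply tries every truth assignment:

```python
from itertools import product

def implies(p, q):
    # Material implication: P -> Q is false only when P is true and Q is false.
    return (not p) or q

# Modus ponens is valid iff, in every truth assignment where both
# premises (P -> Q and P) hold, the conclusion Q also holds.
valid = all(q for p, q in product([True, False], repeat=2)
            if implies(p, q) and p)
print(valid)  # True: the form is valid regardless of what P and Q stand for
```

Nothing here knows what P or Q "mean"; the check operates on the connections between the signs alone.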
I believe the vast majority of communication must be analyzed at the pragmatic level to gain its full meaning. This is because words have shades of meaning depending on the context of their use: the speaker's and listener's past experiences and connections with an utterance, something in the environment that prompts a different interpretation of a word's designatum, or simply the looseness of categories and the prevalence of metaphor in human thought. However, in certain fields or areas, there is a need to limit this pragmatic "range" as much as possible and approach a purely semantic system--one in which each sign designates a particular designatum clearly and unequivocally. This is particularly important in medicine, science, and emergency situations. Examining syntax is helpful in processing utterances, but in everyday communication it cannot substitute for full pragmatic analysis.
A logician can say "The moon is made of green cheese" because he is speaking only at the syntactic or semantic level. At the syntactic level, this sentence follows the formal rules of English grammar, with a subject and predicate, noun phrase and verb phrase. At the semantic level, we can claim that moon designates the large natural satellite that orbits the Earth, and that cheese is a certain type of dairy product. We can (arguably) determine the "meaning" of such a statement without referring to actual experience.
For (some) linguists, however, this abstraction is artificial. Utterances can be tied to the full "pragmatic" range of meaning. We can determine the truth of such a statement by visiting the moon and seeing if it is made of cheese. We can also look at the context of this utterance. Perhaps we are referring to a sculpture of a solar system made of food. The sun is made of yellow Jello, and the moon is made of green cheese. Or the phrase may have been uttered as an exclamation of disbelief--"Yeah, right! And the moon is made of green cheese." Here, the speaker knows the statement is false, but is using it as an analogy or illustration of falsehood. These interpretations are completely missed by a solely logical analysis of syntax and semantics.
Statistical probability involves measuring a large sample of the same repeated event or quality within a system. For example, by counting the occurrences of the letter z in a large corpus of English literature, we can very closely approximate the statistical frequency of the letter z in the English language.
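This kind of counting can be sketched in a few lines of Python. The short pangram below is only a stand-in for a large corpus, so the resulting frequency is illustrative, not a real estimate for English:

```python
def letter_frequency(text, letter):
    # Relative frequency of a letter among the alphabetic characters of a text.
    letters = [ch for ch in text.lower() if ch.isalpha()]
    return letters.count(letter.lower()) / len(letters) if letters else 0.0

corpus = "The quick brown fox jumps over the lazy dog."  # stand-in corpus
print(letter_frequency(corpus, "z"))  # 1 occurrence out of 35 letters
```

With a genuinely large corpus, the same computation converges toward the letter's frequency in the language as a whole.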
Inductive probability refers to an estimate of the chance of a single event based on past experience. For example, if a beginning boxer has lost his first 3 matches, statistical probability would imply that he will lose his fourth match as well. However, his coach may know that the boxer has been undergoing a lot of extra training lately and is starting to get the hang of the sport. And so the coach may say that the boxer has a very high likelihood of winning his next match; this estimate would be based on inductive probability.
The different forms of probability are more or less useful depending on which level of communication you are analyzing. For instance, Shannon's theory of communication is concerned with the transfer of signals. At this level, signals are without particular meaning; they are only a syntax of bits. Thus the only form of probability of use here is statistical probability--making a best guess of what the next bit might be based on the frequency of those that came before.
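A best guess of this kind can be sketched as a toy frequency predictor. This is only an illustration of statistical probability over a meaningless bit stream, not Shannon's actual formalism:

```python
from collections import Counter

def predict_next_bit(bits):
    # Guess the next bit as the most frequent bit seen so far --
    # pure statistical probability over the signal, with no semantics.
    counts = Counter(bits)
    return max(counts, key=counts.get)

stream = [1, 0, 1, 1, 0, 1, 1, 1]
print(predict_next_bit(stream))  # 1, since 1 occurred in 6 of the 8 signals
```

The predictor never asks what the bits mean; it only tallies how often each one has appeared.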
At the level of sentence or logical syntax, you may be able to infer (that is, use inductive probability) that the next element should be a noun or a logical operator. But again, at the purely syntactic level, which particular noun or operator it should be can only be guessed through statistical probability.
At the other end of the spectrum, inductive probability plays a primary role. If we consider all the context that surrounds each act of communication--from the environment, to the speakers' histories, to the previous threads of the conversation--then it seems that every act of communication is practically a new experience. Using statistical probability to determine the meaning or intention of the next utterance would be much less fruitful than an engaged listener making an educated guess.
The semantic level, occurring between these two ends of the spectrum, may benefit from both forms of probability. For instance, when faced with the word moon, we could guess at its designatum based on the statistical frequency with which moon has designated or signified different designata. Or we might make an educated guess based on our past experience that moon generally refers to the large natural satellite orbiting Earth.
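The first, frequency-based guess at a designatum can be sketched with hypothetical sense counts. The senses and numbers below are invented for illustration; they do not come from any real usage data:

```python
# Hypothetical counts of how often "moon" signified each designatum
# in some observed sample of usage -- the numbers are invented.
sense_counts = {
    "Earth's natural satellite": 87,
    "a calendar month (archaic)": 4,
    "to moon about (verb, to daydream)": 9,
}

def most_likely_sense(counts):
    # Statistical-probability guess: pick the most frequently observed sense.
    return max(counts, key=counts.get)

print(most_likely_sense(sense_counts))  # Earth's natural satellite
```

The inductive guess differs only in where the weight comes from: not tallied frequencies, but the listener's own past experience with the word.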
Cherry, Colin. On Human Communication: A Review, a Survey, and a Criticism. Cambridge: Technology Press of Massachusetts Institute of Technology, 1957.
CIS: Week 7
Last Edited: 08 Oct 2004
©2004 by Z. Tomaszewski.