It’s worth saying something about an important difference between the sensory systems we use to represent our experience – visual, auditory, kinaesthetic (feeling), olfactory (smell), and gustatory (taste) – and words as a representational system.
The sensory systems are what’s known as ‘analogue’. An analogue representation bears some obvious relation to the thing it’s representing. So the position of the arrow on a fuel gauge indicates how much fuel is in the tank; if the arrow is halfway along, you know the tank is about half full. It’s the same with sensory representational systems: if you wanted to make a visual representation of a dog, you would draw a picture of a dog. The picture looks like the thing it’s representing. And if you wanted to get across the idea of a really big dog, you would draw a bigger picture.
If you wanted to communicate the idea of a dog in sound, to someone who didn’t speak your language, you would probably make some kind of barking noise – again, there’s a direct relationship between the sound you’re making and the sound a dog would make. One sounds like the other.
Describing a dog in words is different. The English word ‘dog’ doesn’t sound much like any sound a dog would make; nor does the French ‘chien’ or the Spanish ‘perro’. If you don’t speak those languages, hearing those words wouldn’t bring a dog to mind, whereas hearing ‘wuff wuff’ would – because it sounds like the thing it’s representing.
Words are an example of what’s called ‘digital coding’. The difference between analogue and digital representations is the difference between vinyl records on the one hand, and CDs and MP3s on the other. On a vinyl record, the sound wave is recorded as a spiral groove in the plastic. The amplitude and frequency of the sound wave are exactly reproduced in the groove – which means that you could get some sort of recognisable sound out of it even on one of those old-fashioned hand-cranked gramophones.
In a digital recording, on the other hand, the sound wave is encoded as a series of ones and zeros, and you need something that knows the code in order to play it back. If you don’t have the right code, you can’t play it back – so for example you can’t play the sound from a DVD in a regular CD player.
Another example would be the difference between a regular watch with hands (analogue) and a digital watch. The position of the hands on a regular watch tells you roughly what time it is, whereas you need to know what the numerals mean to understand the digital version.
Analogue representation is a continuous spectrum, whereas digital coding is either on or off, one or zero. So if a dog is annoyed, it growls – if it’s more annoyed, it growls louder. The analogue representation of its state – the volume of the growling – varies in proportion to the intensity of its state.
Digital representation doesn’t work like that. The qualities of the representation don’t necessarily bear any relation to the qualities of what’s being represented. If they did, then words meaning ‘big’ would literally be bigger words than words describing something small. As it is, the word ‘big’ is smaller than the word ‘small’ – and both are smaller than the word ‘microscopic’. So if you didn’t speak English, you wouldn’t get any clues about what words mean just by looking at them or hearing them.
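The difference can be sketched in a few lines of Python (the codebook below is made up purely for illustration). With a digital code, the symbols are arbitrary – you need the codebook to get the meaning out – whereas an analogue signal, like the dog’s growl, varies directly with what it represents:

```python
# A toy 'digital' code: the mapping between symbol and meaning is arbitrary.
# Nothing about a codeword hints at what it stands for -- you need the codebook.
codebook = {"101": "dog", "110": "big dog", "001": "small dog"}

def decode(message, book):
    """Look each codeword up in the codebook; fails if you don't have the code."""
    return [book[word] for word in message.split()]

print(decode("110 001", codebook))  # with the right code: ['big dog', 'small dog']

try:
    decode("110 001", {})  # without the codebook, the message is opaque
except KeyError:
    print("can't decode without the codebook")

# By contrast, an analogue representation varies with the thing it represents:
# the more annoyed the dog, the louder the growl.
def growl_volume(annoyance):
    return 10 * annoyance  # volume rises in direct proportion to annoyance
```

Note that `decode` works with any codebook you hand it – the same bit pattern could just as easily mean something else entirely, which is exactly the arbitrariness of words.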
Incidentally, Gregory Bateson, who had a big influence on the development of NLP, pointed out in his book Steps to an Ecology of Mind that animals communicate through analogue channels like sounds, facial expressions like snarling, and body language, and that what they are mostly communicating about is relationships.
We humans have an additional way of communicating, through words – which is a digital system. You have to know the language in order to understand what’s being said.
Human language, says Bateson, is mainly about things. He suggests that because we have hands with opposable thumbs, we tend to think of the world as a set of things we can pick up, manipulate, and do stuff to. This is reflected in our language, by the way – we even talk about relationships and processes as if they were things. This is the source of many of the most important ‘distortions’ in our thinking and communication.
Although we have the digital language of words, and we use it to talk mainly about things, we haven’t lost the other language, the one used by animals – the analogue body language and voice tone that conveys information about relationships. Whenever we say something in words, we are also communicating information about our relationship to the listener by means of body language and voice tone – whether we are consciously aware of it or not. This non-verbal information provides context so the listener can evaluate the true meaning of the words.
It’s quite easy to change the meaning of a statement to its complete opposite, and convey many shades of meaning in between, by changing the non-verbal message that accompanies it. Consider the following straightforward statement: “You’re my best friend”. Notice how different meanings are conveyed:
“*You’re* my best friend” – reassuring
“You’re *my* best friend” – possessive
“You’re my *best* friend” – appreciative
“You’re my best *friend*” – reassuring in a different way: you may not be the best at anything else, but you are my best friend.
Note how adding a rising intonation to any of these turns them into questions, and not very flattering ones:
“You’re my best friend?”
Or using a sarcastic tone turns the meaning of the statement on its head: “You’re my best friend”.
The listener is looking and listening out for that information, whether they are consciously aware of it or not; and where it’s not there (as in email communication) they will unconsciously fill in the missing information from their own maps of the world. This is why you should never use email to communicate information that is emotionally charged or that could have a significant impact on a relationship.
(At this point you might be expecting me to trot out the old statistic that only 7% of the meaning of a communication is conveyed by the words, while 38% comes from voice tone and 55% from body language. I’m not going to, because that statistic has been taken wildly out of context from Albert Mehrabian’s original research – see this article debunking Mehrabian’s alleged 7%–38%–55% rule.)
Because as human beings we think largely in metaphor, we are also able to interpret other elements – such as the time and place of a communication, and the medium used – as further communication about the relationship. This is why a celebrity attracted a wave of disapproval when he dumped his wife by fax, and why Manchester-based personal injury claims company The Accident Group became notorious when they informed their 2,400 workers of their redundancy by text message.
- Think of a communication that didn’t go as well as you had hoped. What was being conveyed by your body language and voice tone? What would you change next time to make sure your message is received as sent?
- Thinking of messages you want to communicate in the future, which are OK to send by email? Which should you do by phone, and which are best done face to face? (If you’ve assigned any to email that could have a significant impact on your relationship with the recipients, you may want to think again.)
- Think of an important message that you are planning to communicate. Put yourself in the shoes of the person or people the message is aimed at. What do you need to change about the manner, timing or medium of your communication for your message to have the maximum positive impact?
Some of the best resources I know for developing the voice tonality side of your non-verbal communication are my friend Jonathan Altfeld’s “Finding Your Irresistible Voice” CD sets – also available as MP3 downloads, these are definitely worth a listen:
Finding Your Irresistible Voice 1:
2-CD set | MP3 Download
Finding Your Irresistible Voice 2:
4-CD set | MP3 Download
© 2011, Andy Smith. All rights reserved.