What makes languages different? Part Two: Signals and Understanding

When we speak, we emit signals1,2, mostly audible ones – although we shouldn’t ignore the accompanying body language that forms the visual part. These signals are formed – encoded – from the items in the mental lexicon. It’s quite astonishing, in fact: whatever form a lexicon item takes in our minds, it is translated into a hugely complex choreography of a great many muscles in our body3. The conversion of an electric signal into sound – the job of a loudspeaker4 – is far more direct and far simpler.

When we hear speech, we receive the audible signal created by that immensely complex dance of muscles. Our brain needs to match this signal against the mental lexicon, finding a combination of items that are similar enough to what we just heard. So, on the receiving end the audio signal is translated into something conceptual, and on the transmitting end (the source), conceptual items are translated into body movements. No wonder we need two very different regions in our brains for this, one to understand speech and another to form (produce) it. This also partly explains why we understand many things we would never actively say ourselves.
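(Just to make that matching idea tangible, here is a minimal Python sketch with an invented five-word ‘mental lexicon’ and an arbitrary similarity threshold. Real recognition works on sound and context, not on spelled-out strings, so take it purely as an illustration.)

```python
import difflib

# Invented toy "mental lexicon" – any real one is incomparably larger.
MENTAL_LEXICON = ["signal", "single", "speech", "speaker", "muscle"]

def recognize(heard, threshold=0.6):
    """Return lexicon items that are 'similar enough' to what was heard."""
    scored = [(difflib.SequenceMatcher(None, heard, item).ratio(), item)
              for item in MENTAL_LEXICON]
    return [item for score, item in sorted(scored, reverse=True) if score >= threshold]

print(recognize("signl"))  # likely ['signal', 'single'] – a noisy signal still finds a match
```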

Another kind of signal is writing5. It is essentially a visible code, transferred from the conceptual form that dwells in the mind. It would be easy to say that writing is just another channel for expressing one’s thoughts, independent of speech. But that’s not true. There are many forms of writing, although all of them follow one of two approaches: either they are directly linked to mental concepts or their combinations, or they attempt to stay as close to speech as possible. If you are interested in how writing came about, see the Wikipedia link for now: http://en.wikipedia.org/wiki/History_of_writing. To my mind, though, writing is a technology6, and as such it merits another post or more.

No matter what form the signal takes, your brain hunts for similarities; it rejoices when it finds one and begins to build up a mental complex of the input (in other words, it tries to understand the input). Either this goes well, or it doesn’t. Maybe the recognized signal is part of a larger construct by which the source intended to refer to a completely different mental concept. If the rest of the construct isn’t similar to anything the brain knows about, the attempt at understanding will eventually fail. Or maybe most of it is recognizable, and the brain will receive enough information to build up more or less the same mental representation as the one the source intended to convey. In the latter case, we can say that the two languages are not so different after all – in that sense, they are related.

(Relationships between languages7 are mostly considered from the historical aspect, as if sharing historical roots implied a level of similarity or mutual understanding – but that is only partly true. We know of languages that are related yet mutually incomprehensible to their speakers. Hungarian and Finnish are good examples.8,9)

When the brain recognizes a phrase (an item in the mental lexicon), it links the phrase to a conceptual construct, a conceptual unit if you will (I think I can safely say that we don’t know what the most basic conceptual unit is for the brain). It is a common mistake to believe that these conceptual units are universal10,11, that they are the same for everyone. True, there are very tangible things – mostly man-made objects – where there can be no question. So you can imagine a set of things whose meaning – whose mental representation – will be the same no matter the culture or the language. As a result, some people, even very recently, have attempted to build machine translation by first conceiving a common language, an interlingua, as a substitute for the mental representation a human uses when translating.

Note: I mean ‘interlingua’ as in http://en.wikipedia.org/wiki/Interlingual_machine_translation rather than the international auxiliary language (http://en.wikipedia.org/wiki/Interlingua).
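To make the interlingua idea concrete, here is a deliberately naive Python sketch: source words are mapped to language-neutral concept identifiers, and the concepts are mapped back out into the target language. The two tiny dictionaries are invented for illustration; real interlingual systems are vastly richer, and the example also shows the catch – it only works for concepts that really are shared.

```python
# Invented toy mappings: Spanish word -> concept ID, concept ID -> English word.
ES_TO_CONCEPT = {"gato": "CAT", "negro": "BLACK", "come": "EAT"}
CONCEPT_TO_EN = {"CAT": "cat", "BLACK": "black", "EAT": "eats"}

def translate_es_en(sentence):
    """Translate by passing every word through a language-neutral concept layer."""
    concepts = [ES_TO_CONCEPT.get(word, "?") for word in sentence.lower().split()]
    return " ".join(CONCEPT_TO_EN.get(concept, "?") for concept in concepts)

print(translate_es_en("gato negro come"))  # -> 'cat black eats': the concepts survive,
                                           # but word order and nuance do not
```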

Most of the time, though, the signal is all that technology has at its disposal. It is the very surface of communication, and today’s translation technology is barely scraping it. From the way this surface is constructed you can find out a lot about the inner world, but it will still be a tiny speck compared to the whole image of the world in your mind.

A computer works by breaking down complex things into several simpler ones, and pretending that the simpler ones can be treated independently. When you create language technology12,13, you do the same: in programming, you have to decide on a unit of language14 that you will use to slice up – segment15,16 – text or speech, and then, at some level, you have to treat those units as independent. Yet no unit of language is ever independent or meaningful on its own. What’s worse, the meaning of words and phrases keeps shifting as the conversation goes on.
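Here is what that slicing looks like in practice – a minimal Python sketch that segments text into sentences and words with naive rules and then holds each unit on its own, exactly the simplification described above. Real segmenters must cope with abbreviations, clitics, compounds and scripts without spaces; this shows only the shape of the idea.

```python
import re

def segment(text):
    """Naively split text into sentences, then each sentence into word-like units."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [re.findall(r"\w+", sentence) for sentence in sentences]

print(segment("No unit is independent. Meaning shifts as we speak."))
# -> [['No', 'unit', 'is', 'independent'], ['Meaning', 'shifts', 'as', 'we', 'speak']]
```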

If you hear only a phrase, or a sentence or two, your mind recognizes that it has received only partial information and begins to build up the rest. Maybe only the mental representation, but on many occasions we begin to form the language, too. If you don’t believe me, go and play a game of Dixit, one of the most ingenious picture/language games of all time.

Cognitive science17 has made many attempts to peek into how the human mind accommodates mental representations, and there have been many efforts to mimic or model parts of the process. We might even be able to induce some sort of computational process that resembles the way humans think about things. Still, I think we can safely say that a computer cannot see much beyond the surface. It cannot build up a mental representation of the world, not even partially. All it has is the surface and some pre-encoded knowledge that humans entered previously.

If we return to the difference or similarity of languages, we see that some languages are more similar than others. Spanish speakers will understand most Catalan speech, for example, and it is easier to learn Slovak if you already speak Russian. Let’s dwell on that adjective ‘easier’ for a moment: the human brain is extremely economical, and it will use the least amount of information and energy to reach the conclusion it needs. Thus, if someone speaks to you in a language closely related to your own, your brain will be content simply to substitute the foreigner’s strangely formed words with the familiar ones in your mental lexicon.

Strangely – or not so strangely, if you come to think of it – machine translation18,19 is a good indicator of how closely two languages are related (not in the historical sense!). Most of today’s machine translation systems work by shoveling together an immense amount of text in both the source and the target language, and deriving statistical conclusions about how often a phrase occurs together with its translation in the target text. They scrape the surface, so to speak. If scraping the surface is enough – that is, the two languages are so close that you get the translation by rewriting some words in some places, or little more – the translation will be good. If the translation isn’t good, you know that the differences between the two languages run deeper than the surface-scraping of today’s technology can reach.
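To show what ‘deriving statistical conclusions’ means in the simplest possible terms, here is a Python sketch that counts, over a tiny invented ‘parallel corpus’, which target words co-occur with each source word and keeps the most frequent pairing. Real systems estimate whole phrase tables (and much more) from millions of sentence pairs; this is only the surface-scraping principle in miniature.

```python
from collections import Counter, defaultdict

# Invented toy parallel corpus: (source sentence, target sentence) pairs.
corpus = [
    ("el gato negro", "the black cat"),
    ("el perro negro", "the black dog"),
    ("un gato", "a cat"),
]

# Count which target words co-occur with each source word.
cooccurrences = defaultdict(Counter)
for source, target in corpus:
    for source_word in source.split():
        cooccurrences[source_word].update(target.split())

# Keep the most frequent target word for each source word.
table = {word: counts.most_common(1)[0][0] for word, counts in cooccurrences.items()}
print(table["gato"])   # -> 'cat': the right answer falls out of raw counts
print(table["negro"])  # -> 'the': co-occurrence alone is easily misled
```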

Explore further

1 http://en.wikipedia.org/wiki/Models_of_communication

2 http://davis.foulger.info/research/unifiedModelOfCommunication.htm

3 http://www-mobile.ecs.soton.ac.uk/speech_codecs/speech_properties.html

4 http://en.wikipedia.org/wiki/Loudspeaker

5 http://en.wikipedia.org/wiki/Writing

6 http://en.wikipedia.org/wiki/Technology

7 http://aboutworldlanguages.com/language-families

8 http://aboutworldlanguages.com/Uralic-Language-Family

9 http://en.wikipedia.org/wiki/Finno-Ugric_languages

10 http://en.wikipedia.org/wiki/Linguistic_universal

11 Evans, Nicholas & Levinson, Stephen C. (2009): The Myth of Language Universals: Language Diversity and Its Importance for Cognitive Science. In: Behavioral and Brain Sciences. Cambridge University Press.

12 Uszkoreit, Hans (2001): Language Technology: A First Overview. http://www.dfki.de/~hansu/LT.pdf

13 http://www.um.edu.mt/linguistics/human_language_technology

14 https://www.boundless.com/psychology/textbooks/boundless-psychology-textbook/language-10/introduction-to-language-60/structure-of-language-234-12769/

15 http://www.monlp.com/2012/03/13/segmenting-words-and-sentences/

16 http://en.wikipedia.org/wiki/Speech_segmentation

17 http://en.wikipedia.org/wiki/Cognitive_science

18 http://ice.he.net/~hedden/intro_mt.html

19 http://www.hutchinsweb.me.uk/IntroMT-TOC.htm
