They say there were 6,909 registered living languages in the world as of 2009 (source: Ethnologue, SIL International, 2009). But several different aspects can make us decide that two individuals speak two different languages. One of these aspects is how well the two of them can understand each other (or whether they can understand each other at all). Another is when the two individuals declare that they speak two different tongues, even though they understand each other without a problem. The latter is the identity aspect, which quickly becomes a language policy dimension, escalating into all-round politics, sometimes with wars fought over it.
I will deliberately ignore this latter dimension and focus on how well those two individuals understand each other. This is where technology can help us. As for technology helping politics, I would not go so far as to envisage some sort of clever decision-planning computer, simply because I can’t imagine one. War, on the other hand, is something that technology can assist pretty well.
Two paragraphs up, I said something very crude: “…they understand each other without a problem.” There is no such thing. There is a song by The Cure (How Beautiful You Are…, 1987) that quite aptly concludes with “…no-one ever knows or loves another” (the theme of the song comes from Baudelaire’s The Eyes of the Poor, by the way). Sociolinguistics acknowledges this by stating that idiolects exist. An idiolect is the use of language as spoken by one individual, or, as Bloch – who first used the term in 1948 – wrote: “The totality of possible utterances of one speaker at one time in using a language to interact with one other speaker is an idiolect…” [as quoted in Hazen 2006].
The point is that when one acquires their first language, a code – a mental lexicon – is built up in their mind, and each item of this code is mapped to, or intertwined with, a concept that also exists in their mind. I am being deliberately vague here: I would not say how large an item in the mental lexicon is – there are competing theories on this, although psycholinguists and neurolinguists seem to agree these days that the mental lexicon consists of phrases or fragments of phrases rather than individual words. Neither would I say how concepts relate to lexicon items. My use of the word ‘mapping’ gives away that I am a computer person: mapping is the primary means in computing of setting up relationships between things that were previously unrelated – at least from the computer’s point of view.
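For the computer people among my readers, this is the sort of mapping I have in mind. The sketch below is purely my own illustration – a toy, not a cognitive model – and every fragment and concept name in it is made up; note that the keys are multi-word fragments rather than single words, in line with what I said above:

```python
# A toy 'mental lexicon': a mapping from phrase fragments to concepts.
# All entries are invented for illustration; no cognitive claim intended.
mental_lexicon = {
    "good morning": "GREETING",
    "a cup of coffee": "BEVERAGE",
    "see you later": "FAREWELL",
}

# Looking up a fragment retrieves the concept it is mapped to.
print(mental_lexicon["a cup of coffee"])  # BEVERAGE
```

In computing, such a mapping is explicit and exact; in a mind, whatever the real relationship is, it is certainly neither.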
In psycholinguistics, there is a debate on whether or not the mental lexicon exists, and if it does, how it works, what role it plays in language acquisition and understanding. Your best entry point to that discussion is probably the Wikipedia article — but don’t stop there, go deep through the reading offered in its References section.
Whether or not the mental lexicon exists – I have no irrefutable proof beyond the evidence its researchers have provided – it is an excellent model for showing how the differences between languages (might) work. So I will continue to assume there is one in everyone’s mind.
Thus, when language is built up in one’s mind, it will be a bit different for every individual – that goes for both the mental lexicon and the ‘mappings’. This happens because every infant has interactions with a different set of people and under different circumstances. Yet this language, this idiolect, won’t be totally independent from everyone else’s: you don’t learn language on your own, only through interaction with others. Sad was the fate of children who were left alone, with no-one to learn to speak from – but I’ll deal with those stories in another post.
The language you learn will be very similar to the language of those who ‘taught’ you to speak. So similar that you will be able to understand them, and many others who had a similar environment in their infancy. When someone says something to you, it might not exist in your mental lexicon in the exact same form. Yet it will be similar enough to at least one item (or a combination of items) that you will be able to recognize it, and form a mental image of it.
When it comes to recognizing and understanding language, the human mind has an amazing error-correction capacity. The pronunciation, the lexical items, and also the grammar can be distorted to a large extent, and the individual at the receiving end will still understand – maybe with more effort. But there is a tipping point: the language at the source can be so different from the one the receiver can recognize that understanding is no longer possible. This is when we say the two of them speak two different languages. To make the two individuals understand each other, we need a third party, competent in both languages, who can transfer an utterance from the source’s language into the receiver’s language: translation must take place.
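The tipping point I describe above can be sketched in code. Below is a minimal, entirely made-up illustration – not a model of human comprehension – that matches a distorted utterance against the items of a toy lexicon by string similarity, and gives up below a threshold; the threshold value and the lexicon items are my own assumptions:

```python
from difflib import SequenceMatcher

def recognize(utterance, lexicon, threshold=0.7):
    """Return the most similar lexicon item, or None if even the best
    match falls below the threshold -- the 'tipping point'."""
    best, score = None, 0.0
    for item in lexicon:
        ratio = SequenceMatcher(None, utterance, item).ratio()
        if ratio > score:
            best, score = item, ratio
    return best if score >= threshold else None

lexicon = ["good morning", "a cup of coffee", "see you later"]

# Distorted but close enough: still recognized.
print(recognize("good mornin", lexicon))
# Too far from anything in the lexicon: recognition fails,
# and a 'translator' would be needed.
print(recognize("jó reggelt", lexicon))
```

The first call succeeds despite the distortion; the second fails, which is where, in my analogy, a third party competent in both codes must step in.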
I am using the term ‘language’ incorrectly here. Language is a community phenomenon. The fact that two individuals can communicate without the need for any transfer (translation, that is) is a relationship between them. In the simplest wording, a particular language is the set of meaningful utterances spoken by a group of individuals who can understand each other. The study of Pragmatics deals with this in more depth.
There are a whole lot of aspects to the difference between the way two individuals speak – extent, nature, origin –, but from this point on, we will be restricted to the one dimension that technology can tackle: the signal, and the combinations of various signals that make up the code we can represent through technology.
To be continued in the next post.
I’m linking sources wherever I can. Those I’m not permitted to link, I list below so that you can look them up.
Hazen, Kirk. “Idiolect.” In: Encyclopedia of Language & Linguistics, vol. 5, pp. 512–513. Elsevier, 2006. (No permission to link.)