If you ask computers, they will probably name ambiguity as public enemy number one. Ambiguity occurs when a word or expression can mean two or more different things, and the word alone won’t tell you which – for that, you need the surroundings, the context, and often a good deal of background knowledge as well. Here’s an example: is a guide a book or a person? Although computational linguistics has methods to handle some of this, resolving ambiguity remains mainly the privilege of the human mind.
Yehoshua Bar-Hillel himself invoked the ambiguity of language to argue that high-quality automatic translation is not feasible – at least not within the generative approach. But I have already devoted several posts to this, so now it’s time for something completely different.
Today I plan to confuse my readers by pointing out that human beings, whether they like it or not, are part of translation technology – or of any technology, for that matter. Yet in the previous post, I argued that humans, for better or worse, tend to refuse to become part of the machine.