Translation, Technology, Training – or Romanticism and Fear

Horror stories – when I taught translation technology to university students, I always started the semester with those. At least three of them. You could ask: weren't my students frightened enough already? My answer is the same one Aragorn gave Frodo and company when they first met at the Prancing Pony: not nearly frightened enough.

The stories I told my students were about translation jobs that simply couldn't be done without help from technology. Translations of thousand-page books to be upgraded to a new edition in six weeks – editing, proofreading, and printing included. Millions of words of automotive manuals to be translated into twenty-plus languages in three weeks. Tens of thousands of words of highly specialized tender documents to be translated over a long weekend, review included, with no compromise possible on the hour of delivery. And the list goes on.

The purpose of these stories was to put technology in perspective for students who had never translated a single word for money. And whenever I introduced a particular feature of the technology, I tried to remember to point to a practical problem it helped solve.


Ambiguity

If you ask computers, they will probably name ambiguity as public enemy number one. Ambiguity occurs when a word or expression can mean two or more different things, and you can't tell which from the word alone – for that, you need the surroundings, or the context, and often a lot of background information as well. Here's an example: is a guide a book or a person? Although computational linguistics has methods to deal with some of this, resolving ambiguity remains largely the privilege of the human mind.
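
One of those methods can be sketched in a few lines. The idea behind the classic Lesk algorithm is to pick the sense whose dictionary definition shares the most words with the surrounding sentence. The "senses" of guide below, and the example sentences, are invented for illustration – a toy sketch, not a production disambiguator:

```python
# Toy word-sense disambiguation in the spirit of the Lesk algorithm:
# choose the sense of "guide" whose definition overlaps most with the
# words around it. The sense inventory here is made up for illustration.

SENSES = {
    "book": "a book of information about a place or subject",
    "person": "a person who shows the way to tourists or travellers",
}

def disambiguate(word_senses, context):
    """Return the sense whose definition shares the most words with the context."""
    context_words = set(context.lower().split())

    def overlap(sense):
        return len(set(word_senses[sense].split()) & context_words)

    return max(word_senses, key=overlap)

print(disambiguate(SENSES, "a guide with information about every subject and place"))
# -> book
print(disambiguate(SENSES, "the guide showed tourists the way to the museum"))
# -> person
```

Real systems refine this with stemming, larger context windows, and sense frequencies – and still get it wrong often enough to keep humans in business.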

Yehoshua Bar-Hillel himself invoked the ambiguity of language to argue that fully automatic high-quality translation is not feasible – at least not if you stay with the generative approach. But I have already spent several posts on this, so now it's time for something completely different.

Today I plan to confuse my readers by pointing out that human beings, whether they like it or not, are part of translation technology – or of any technology, for that matter. Yet in the previous post, I argued that humans, for better or worse, tend to refuse to be part of the machine.


Translation Technology: Replacement or Enhancement?

At one point in The Imitation Game (again), Commander Denniston enters Alan Turing's workshop, shuts down Christopher, the code-breaking machine, then orders Turing off the premises. The machine is not quite complete. Turing, terrified, protects it with his own body and insists that the machine will work. He and his work are saved by fellow code-breakers who stand up for him. Then Denniston gives him one more month to make Christopher work.

The aspiring teams of machine translation research weren't so lucky after the US government, in 1964, set up a committee to look into their progress. The committee, pompously named the Automatic Language Processing Advisory Committee, or ALPAC for short, was active for two years, engaged in discussions, heard testimony – and, in 1966, produced a report that many regarded as the nemesis of machine translation research.


“It is with our good will”

I thought I'd reboot my blog, Dreamers and Doers – so a captatio benevolentiae is in order at this point. In this blog, I collect my thoughts on the history of translation technology – any technology that employs machinery to help people understand each other. At times, I take a peek at the science behind well-known technologies and services (like machine translation). I aspire to write my notes for the non-researcher: the professional translator, the student of engineering or linguistics – practically anyone interested in language and technology. In this, I'm trying to contribute, meager as my contribution may be, to public communication about science and engineering – more specifically, about language technology.


Corpus Cosmology

The generative theory of language (see the previous post for details) is mathematically sound and intellectually appealing. What’s more, it’s well suited for computer processing: for many generative grammars, it’s relatively easy to write a computer program that analyzes or produces texts that match that particular grammar.

At the time generative linguistics was introduced (1957), its computer applications were politically motivated, too: intelligence services hoped for an automatic translation facility that would quickly let them read Russian scientific papers, for example – as is clear from the ALPAC report, which brought about the temporary demise of machine translation and the ascent of machine-assisted human translation.
