
What ChatGPT Gets Right and Wrong and Why It’s Probably a Game-changer for the Localization Industry

Lionbridge’s take on the new technology and next steps to unearth its full potential

The explosion of ChatGPT into the mainstream since its launch on November 30, 2022, has generated unprecedented attention and commentary. After spending a few days (and nights) having conversations with ChatGPT, these are the only questions that I think matter:

  • What does it get right?

  • What does it get wrong?

  • How can we use it?

Want to learn the answers to these questions and what I foresee for this new localization tool? Read on. For more insight, read the whitepaper below.

 


What Does ChatGPT Get Wrong?

There are things that you can’t rely on ChatGPT for:

  1. It doesn’t tell the truth. ChatGPT says what sounds plausible given the information it has; it tells you what you want to hear, not necessarily what is true.

  2. It doesn’t have a clue about the real world. It only knows a curated version of what people say about the real world, using that information to create very convincing language to represent what it’s learned.

  3. It can’t count. I tried replicating Jonas Degrave’s simulation at Engraved using a more complicated calculation, and got an erroneous result. Remarkably, it correctly recognizes that the Python command it’s simulating calls for multiplying the two numbers. The arithmetic itself, however, comes out wrong: it just can’t count.

  4. It can’t think. It will often write something that outlines accurate premises and true statements about what I asked it to do, then confidently apply them erroneously. Basically, it can’t reason; it’s a language model, not a logic engine.

  5. Its humble bragging is a little cringe. ChatGPT will write with deep aplomb and authority, yet sheepishly tells you it’s only here to help, and exercises embarrassing contriteness when you tell it that it got it wrong.
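The “it can’t count” point above is easy to make concrete. Python evaluates arbitrary-precision integer arithmetic exactly, digit for digit, whereas a language model predicting the product as text tends to get the magnitude and leading digits right while drifting in the middle. The operands below are purely illustrative, not the ones from Degrave’s experiment:

```python
# Python computes large integer products exactly, digit for digit.
a = 1234567
b = 7654321
print(a * b)  # → 9449772114007
```

A model that has only ever seen arithmetic as text will typically land near this value but miss some of the interior digits.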

What ChatGPT Gets Right

Having gotten through what ChatGPT isn’t or can’t do, let’s look at what it can do as a text interpreter and generator. ChatGPT can:

  • Write better than you. I developed this strong belief after having many conversations with ChatGPT. It writes at various levels of complexity with a wide range of vocabulary. This generative AI is as good, in my opinion, as the top decile of human content writers.

  • Follow instructions. It will flawlessly change texts in specific ways, both in form and content. Most crucially, it maintains the conversation context and understands when you refer to what it had done before, even in the most vernacular manner.

  • Modify text while keeping meaning. Given any text, and following instructions, ChatGPT makes any modification to the content, form, and style. It maintains the semantic content of the text or modifies it as you ask it to.

  • Manage multilingual terminology, a critical issue in localization. I don’t know yet if it’s realistic for large-scale translation. But I could see ChatGPT doing a decent job of introducing specific terminology while editing previously translated material, even though it didn’t produce the original translation.

  • Detect offensive text. I provided ChatGPT excerpts from a pleading in a federal criminal case with racist and homophobic text messages, then asked it to identify offensive text. It did a great job and was able to explain its reasoning.

  • Perform entity detection. I asked ChatGPT to perform a typical case of entity detection and to place tags around the entities. It missed a couple. But with a couple of additional prompts, it tagged them easily.

  • Classify things to a taxonomy. One of the most mind-boggling things about ChatGPT is how it can apply general knowledge to a specific situation.
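To illustrate the kind of inline-tagged output the entity-detection prompt above asked for, here is a toy sketch. The entity dictionary and labels are invented, and a simple lookup stands in for the model’s actual detection:

```python
import re

# Invented entity dictionary standing in for the model's detection.
ENTITIES = {"Vincent": "PER", "Lionbridge": "ORG", "Paris": "LOC"}

def tag_entities(text: str) -> str:
    """Wrap each known entity in inline tags, e.g. <ORG>Lionbridge</ORG>."""
    for name, label in ENTITIES.items():
        text = re.sub(rf"\b{re.escape(name)}\b",
                      rf"<{label}>{name}</{label}>", text)
    return text

print(tag_entities("Vincent met the Lionbridge team in Paris."))
# → <PER>Vincent</PER> met the <ORG>Lionbridge</ORG> team in <LOC>Paris</LOC>.
```

The point of the format is that the tags travel with the text, so a later prompt (or a downstream tool) can refer back to the marked entities.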

We’ve established that you can’t rely on ChatGPT to say true things or to know what’s correct. (This means content creators would have to check whether it’s talking nonsense.) However, once we have a text containing meaning that we’re happy with, ChatGPT can manipulate or transform its form and content while retaining the encoded meaning. This is an opportunity for localization providers because we don’t have to generate meaningful content from scratch. Let’s take a closer look at the localization activity landscape and how ChatGPT could affect what we’re doing today. This generative AI:

  • Is very skilled at translation. For languages with large corpora, ChatGPT is likely on par with state-of-the-art engines for out-of-the-box Machine Translation output, if not superior in some respects.

  • Seems quite effective at following terminological instructions.

  • Can apply style instructions, either broadly or narrowly. In contrast, getting Machine Translation engines to perform these types of tasks correctly is challenging.

  • Is quite good at categorizing things, particularly for text, to arbitrary taxonomies. This ability is beneficial for the localization industry, as we may want to apply specific instructions to certain types of content and other instructions to others.

  • Excels at editing given text, the most crucial element of quality localization. It’s quite good at the four major components of reviewing and translating text.

  • Helps with content analysis, analyzing text for effective processing, improvement, or ROI. It can help anticipate or preempt translation quality issues, tune content for outcomes like reach, SEO, and CTA/CTR performance, and improve legibility in both the source and the target, among other things.

  • Helps with writing and editing working code. Elite programmers debate whether ChatGPT writes at their level, and it appears not. However, I asked it to write some XML content extraction code, and it produced working code that I could run. It makes creating code (and learning how) easier for non-coders.
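For context on that last point, the snippet below is the sort of XML content-extraction code I mean. It is a minimal sketch with an invented sample document, not the code ChatGPT actually produced:

```python
import xml.etree.ElementTree as ET

# Invented sample document; only the extraction pattern matters.
doc = """<catalog>
  <item id="1"><title>User Guide</title></item>
  <item id="2"><title>API Reference</title></item>
</catalog>"""

root = ET.fromstring(doc)
# Pull the text content of every <title> under an <item>.
titles = [item.findtext("title") for item in root.findall("item")]
print(titles)  # → ['User Guide', 'API Reference']
```

A task like this, trivial for a coder but opaque to a non-coder, is exactly where a working generated snippet lowers the barrier to entry.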

ChatGPT Will Require New Skills and Practices

I’ve developed the beginning of an understanding of what type of prompting ChatGPT requires. This technology takes only natural language as input. To use it in production, we’ll need to develop expertise in effective prompting. Specific transformations of content will likely require successions of prompts, each performing a different task: cleanup, pre- and post-processing, and so on. Learning to use natural language prompts as part of our automation pipelines, in a way that is both contextually relevant and sufficiently predictable in its output, will be an interesting journey.


Where Do We Go from Here?

We now definitively know we can’t ignore this new generative AI. It’s likely to disrupt our industry. So, we must lead and drive that push to language automation, lest we get left behind. ChatGPT can transform and annotate text on par with the average human editor and likely perform these tasks more efficiently. It can perform tasks relying on a diversity of skills virtually no individual human possesses, and it can generalize its knowledge to new situations.

Most importantly, it shows potential for solving some lingering localization automation problems. Of course, it’s one thing to have a conversation with it using toy examples. It’s another thing to imagine using it at scale to perform these actions. Moving forward, we must:

  • Conduct real-world tests at scale to evaluate error rates for each type of localization and editing task investigated here.

  • Analyze detailed macro and micro user journeys occurring within the localization value chains and identify where they will likely be disrupted with this type of text automation.

  • Understand how to prompt and provide relevant context to ChatGPT at scale, and document pitfalls and best practices.

  • Develop the new automation and human-in-the-loop editing workflows, inventing what post-editing and QA will mean tomorrow with such an AI in the loop.

  • Design new automation and User Experience (UX) interaction contexts for both localization agents and customers for each possible improvement opportunity.

  • Ensure the economics of licenses, deployment costs, and maintenance make sense for our business.

Some Thoughts About Language and Real-World Usage of ChatGPT

One of the things that I found most striking was how ChatGPT got complex number operations almost right, but still wrong. It doesn’t cheat. It really learns everything from the language it trains on. The fact that it finds almost the correct result of an operation beyond a certain order of magnitude (and the correct result for smaller numbers) tells us that a language corpus of a sufficient scale contains statistically significant knowledge about the real world. But, it also shows that dedicated formal systems (such as mathematics) are required to produce meaningful, reliable, and accurate information about the real world.
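The contrast with a formal system is easy to demonstrate. Exact integer arithmetic gets every digit right; a limited-precision representation (64-bit floats here, used purely as an analogy for statistical recall) gets the magnitude and leading digits right but misses the tail, much like ChatGPT’s “almost right” products:

```python
a, b = 123456789, 987654321

exact = a * b                      # arbitrary precision: every digit correct
approx = int(float(a) * float(b))  # 64-bit floats: right magnitude, wrong tail

print(exact)                # → 121932631112635269
print(approx == exact)      # → False
print(abs(exact - approx))  # tiny relative to the result, but not zero
```

Being almost right at scale is exactly what a statistical system delivers; being exactly right is what a formal system is for.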

ChatGPT offers a sobering reminder that a self-referential, self-consistent system cannot in and of itself carry the truth of the world, which exists independently. This echoes Gödel’s incompleteness theorem. As conscious beings, we can’t untether our cognition from formal and material systems grounding our understanding of the world in a reality that imposes itself on us and that we cannot define away through language alone.

Get in touch

Have your own translation or localization project? Need to ensure it gets done accurately, quickly, and under budget? We’ll use innovative generative AI like ChatGPT to help. Contact us today to find out more about Lionbridge’s translation and localization services.


AUTHOR
Vincent Henderson, Head of Product Language Services