Last week, one of our tech team members tested the newly launched ChatGPT Translate, and discovered how the prompt did not translate the text but instead answered a question:

ChatGPT translation fault

This is because ChatGPT is vulnerable to prompt injection.

This means when it is used to translate, it processes the text to be translated as part of the users’ instructions. Thus, a contradictory or malicious input can override the intended “translate” task and execute different commands.

This highlights an underlying problem: ChatGPT is built as a large language model, not a translation tool.

Whilst the interface looks like a specialised translation tool, it is essentially an “AI wrapper” – a sleek-looking interface over a general-purpose engine. It is designed to predict the next likely word in a sentence, not to manage the complex, high-stakes requirements of global linguistics.

What happens if you use ChatGPT Translate?

A manufacturing client recently used a general LLM to translate a heavy machinery safety manual into Spanish. Because the AI prioritises next-word probability over technical accuracy, it translated the English instruction “Take once a day” (referring to a maintenance check) into the Spanish word “once” – which means “eleven”.

This is the core issue of hallucination: the AI doesn’t know it’s wrong, it just knows ‘once’ is a valid word in both languages.

Beyond accuracy, you face the privacy black box: with open-sourced LLMs, your sensitive data could be fed back into the model’s training loop, potentially leaking proprietary IP to the public.

Even if there is an enterprise-grade API available that prevents it from doing so – is it auditable if the AI made an error? OpenAI’s Terms of Service explicitly state that users are responsible for evaluating the accuracy of outputs – ultimately putting users at risk.

How ChatGPT Translate compares with GAI Translate

Whilst ChatGPT is a generalist that happens to translate, GAI Translate is a specialist built for enterprise: it is a trained Medium Language Model (MLM), which offers higher precision in specialised domains, no hallucinations, and is customised for enterprise clients.

Unlike LLMs that prioritise fluency over accuracy, GAI Translate is engineered to achieve 100% accuracy. It integrates the latest domain-specific language models with vetted, industry-certified linguists, where your outputs are verified and certified.

Data security is underpinned by robust ISO:27001 information security certification, ensuring that your data is managed within a governed, auditable infrastructure.

How does GAI Translate compare to other tools
How GAI Translate compares with other translation tools.

In short, we provide the engine, the safety checks, and the accountability that general AI tools simply cannot offer.

Conclusion

As demand grows for AI solutions with human verification of outputs, it is essential for buyers to conduct due diligence to ensure products achieve the desired results and are future proofed.

It can be tempting to procure a £1 translation tool, but the long-term cost of data risks, inaccuracies, lack of audit trails and interoperability issues can hinder your adoption of AI. Caveat emptor applies – understand what you are buying before you commit to the promises of AI.

SHARE THIS ARTICLE

RELATED RESOURCES

  • What government officials should know before using ChatGPT for Translation 

    The first role of government is to protect its citizens from harm and that often means communicating safety messages quickly to communities in the language they understand. An impending storm...

    5 MIN READ

  • GAI SLM for mining abstract header

    What mining professionals should know when using a LLM versus GAI SLM for safety-related terminology

    Responses generated by Large Language Models (LLMs) always carry a health warning; that results can contain errors. AI copes well translating generic content, but safety-related terminology is unforgiving. Any...

    5 MIN READ

  • Busting the big 3 myths in AI adoption: what are you doing wrong with AI?

    The conversation about adopting AI tools to translate is clouded by myths - myths that create risks and missed opportunities for global businesses. This week, Be the Business, an organisation that champions...

    5 MIN READ

  • What government officials should know before using ChatGPT for Translation 

    The first role of government is to protect its citizens from harm and that often means communicating safety messages quickly to communities in the language they understand. An impending storm...

    5 MIN READ

  • GAI SLM for mining abstract header

    What mining professionals should know when using a LLM versus GAI SLM for safety-related terminology

    Responses generated by Large Language Models (LLMs) always carry a health warning; that results can contain errors. AI copes well translating generic content, but safety-related terminology is unforgiving. Any...

    5 MIN READ