AI Issue 3 (2025-3)

Do LLMs understand gibberish?

#nonsense_alsomatters

6 min read

In another article in this issue we discuss what gibberish is and how people have been applying it to the arts and sciences for about 200 years. There are nonsensical languages created by mixing real words with random units of speech; there are phrases and sentences in which every individual word is meaningful but the combination makes no sense at all; and there is nonsense built on mimicry.

There are many, many ways to produce nonsense, and since we now have "magic" systems capable of processing tons of words and other language-based data, it is time to see how they behave when faced with gibberish.

We tested Llama, Gemini, and ChatGPT and, to make things challenging, asked each of them to (1) digest a seemingly gibberish text and (2) create a piece of nonsense of its own. We say "seemingly gibberish" because the message we started with was built according to a clear pattern and can be easily decoded once that pattern is found.
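To give a concrete sense of what "seemingly gibberish but pattern-based" can look like, here is a minimal sketch in Python. The scheme below, inserting the marker "ob" before every vowel in the spirit of playground gibberish languages, is our own illustrative assumption, not the actual pattern used in the test; the `encode` and `decode` helpers are hypothetical.

```python
import re

# Toy "gibberish" scheme (an assumption for illustration only):
# insert the marker "ob" before every vowel, so "hello" -> "hobellobo".
# Text encoded this way looks like nonsense but decodes trivially
# once the pattern is spotted.

MARKER = "ob"
VOWELS = "aeiou"

def encode(text: str) -> str:
    """Turn plain text into pattern-based pseudo-gibberish."""
    return "".join(MARKER + ch if ch.lower() in VOWELS else ch for ch in text)

def decode(gibberish: str) -> str:
    """Recover the original text by stripping the marker before each vowel."""
    return re.sub(rf"{MARKER}([{VOWELS}])", r"\1", gibberish, flags=re.IGNORECASE)

if __name__ == "__main__":
    message = "do llms understand gibberish"
    scrambled = encode(message)
    print(scrambled)  # dobo llms obundoberstoband gobibboberobish
    assert decode(scrambled) == message
```

A reader (or a model) that notices the recurring "ob" before vowels can strip it out and read the message straight off, which is exactly the kind of hidden regularity we wanted the models to hunt for.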