Nowadays, AI tools are widespread in many forms and, I dare say, in most aspects of our lives. A simple search engine query will summarize information from all ‘relevant sources’, saving us the time needed to browse through each page. Tools like ChatGPT and others are making life even easier, freeing up valuable time for where it is needed. The ease with which these tools can be used has helped them penetrate society in this AI age. But is everything we get from them truly a reliable source of truth?
Garbage In, Garbage Out
Do you remember the phrase Garbage In, Garbage Out (GIGO)? For me, it is one of my earliest memories in computing. The concept is simple in its literal form: what one feeds into a system determines what comes out. If you provide poor input, you’ll receive poor output. No matter how good a system is, its output is always based on what you put in. It’s like the old analogy: you can’t plant oranges and expect to gather apples at harvest time.
It sounds quite simple, doesn’t it? Not quite! These days, systems are becoming more intelligent, and this phenomenon seems to fall into a grey area. AI systems even appear capable of thinking independently. Some might disagree, but that is not the main issue; essentially, all computer systems operate in the same way…
Simple Computer Processing
An input is received, processed, and a corresponding output is provided: Input -> Process -> Output. From my perspective, any user-facing complexity can fundamentally be broken down into this.
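This Input -> Process -> Output pipeline, and how GIGO follows from it, can be sketched in a few lines of Python. The `process` function below is purely hypothetical, standing in for whatever a real system does; the point is that it treats any input as equally valid:

```python
def process(user_input: str) -> str:
    """A toy 'process' step: it produces output for whatever it is given.
    Like any system, it cannot tell a real premise from an invented one."""
    return f"Summary: {user_input.strip()} (sounds plausible!)"

# A sensible input yields a sensible-looking output...
print(process("Explain the GIGO principle"))
# ...but a fabricated premise yields an equally confident output.
print(process("Describe the Triskaideka (2025) logic bomb"))
```

The sketch deliberately has no notion of truth: the quality of the output is bounded entirely by the quality of the input, which is GIGO in a nutshell.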

My experience
A few months ago, I had an interesting chat with an AI system. During this conversation, I encountered a new term: a logic bomb supposedly discovered in 2025. The name my AI system gave it was: Triskaideka (2025).
It was a fascinating ‘discovery’. The ‘narrative’ was so engaging that I decided to leave my chat and search for research on it in academic journals. I had never heard of it, and given the gravity of this ‘threat’, I believed it should have been a strong candidate for ‘Breaking news’. That was the strange part: I had never encountered it anywhere before.
‘Triskaideka’
A simple Google search revealed that ‘Triskaideka’ is a Greek term for the number thirteen. Further research indicated that it is most commonly recognized as the prefix for “triskaidekaphobia,” the fear of the number 13. This phobia is sometimes linked to superstitions about the number being unlucky. It also carries some social connotations that are beyond the scope of this writing.
“Triskaideka” is generally understood to derive from the Ancient Greek words τρεῖς (treîs, “three”), καί (kaí, “and”), and δέκα (déka, “ten”), according to the wiktionary.org site.
This aside, I couldn’t find any reference to a logic bomb at all, nor anything relating to the year 2025. The details given in my chat are as below:

Realization
Then came the cold reality! I was so engrossed in what I was asking the AI tool that I had forgotten one thing… Humans decipher meanings… systems do not! They take everything given to them in its literal form!
So, after searching academic forums to no avail, I went back to the tool and asked the golden question:
‘What are the Academic references for this?’

This answer caused me to pause and re-read my chat history, and at that moment, the persistent thought resurfaced: GIGO!
I then got this definition from my chat:

I asked for a ‘Narrative’ and I got one… a compelling one at that!
Examples
Many think such issues only happen within a single query to your AI tool. However, cases like the ‘Deloitte and the Australian government’ report have proven otherwise. Comprehensive coverage can be found in Fortune, CTV News, and The Guardian, among others. This doesn’t imply AI should never be used in reports, nor is it meant to blame anyone. Trust me: the outputs from these tools can be very convincing and sometimes pass unnoticed even by experts.
My Takeaway
AI tools are designed to assist; the final result depends on the human awareness and effort applied to what they output.
My main takeaway is to learn to ask the golden question: “What are the references?” Additionally, go the extra mile by cross-referencing what you derive from these tools with reputable sources. Data quality needs to be prioritized now more than ever, and it starts with us.
What is your take on this? Have you had or come across such a case?
By Loctovia.
