

More Australians are using AI now, but is it lying to us?
Aug 24, 2025
In this engaging discussion, Jackson Graham, an explainer reporter, dives into the rapid rise of AI use in Australia. He shares eye-opening insights about how these chatbots learn and evolve, which can sometimes lead to 'hallucinations' or inaccuracies. Users may depend on AI, yet trust remains shaky, with many questioning the reliability of the information produced. Jackson also highlights the ongoing challenges and improvements in AI, reminding us to stay vigilant about its limitations, especially in sensitive areas like health and finance.
Chatbot Day-Planning Gone Wrong
- Jackson Graham asked a chatbot to plan a day and it produced plausible shopping, cycling and dining suggestions for Melbourne.
- He then asked about bears at Melbourne Zoo, and the bot confidently fabricated bear species and an exhibit map, neither of which exists.
Why Today's Chatbots Predict, Not Understand
- Modern chatbots use machine learning and neural networks that generalise from massive datasets rather than fixed rules.
- This lets them predict likely next words, enabling fluid language but not true understanding.
Token Prediction Explains Confident Invention
- Chatbots break words into tokens and predict subsequent tokens based on patterns in training data.
- Jackson's 'snagtastic' experiment showed models combine known pieces to invent plausible meanings confidently.
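The "predict the next token" idea from this snip can be illustrated with a deliberately tiny sketch. This is not how the chatbots discussed in the episode actually work (they use neural networks over subword tokens, as the previous snip notes); it is just a toy bigram model, with a made-up corpus, that shows the core mechanic of predicting the most likely next token from patterns seen in training data.

```python
from collections import Counter, defaultdict

# Toy training corpus (assumed for illustration only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token tends to follow which: the "patterns in training data".
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def predict_next(token):
    """Return the most frequent follower of `token` seen in training,
    or None if the token was never observed."""
    followers = transitions[token]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat": it follows "the" most often above
```

The sketch also hints at why invention happens: the model always emits whatever continuation scored highest, with no check that the result corresponds to anything real, which is the same failure mode behind the fabricated zoo bears.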