On Thursday, October 17, we explained some of the common AI terms in layman’s language. We then went on to discuss the challenges posed by “Generative AI”.
What is AI? In a nutshell, it is “…the ability of a machine to display human-like capabilities such as reasoning, learning, planning and creativity.” The important thing to note is the phrase “human-like capabilities”. It is not human! For more see: https://www.europarl.europa.eu/topics/en/article/20200827STO85804/what-is-artificial-intelligence-and-how-is-it-used
Chris had asked Google’s Gemini AI to define what an algorithm is in non-techie terms. Here’s what Gemini said:
Algorithm
… a list of rules to follow to complete a task or solve a problem. The steps in an algorithm need to be in the right order. Think about an algorithm for getting dressed in the morning. What if you put on your coat before your jumper?
An algorithm in relation to AI is a set of intelligent rules that a computer follows to learn from data, identify patterns, and make its own decisions over time. While a standard algorithm is just a static recipe, an AI algorithm is a recipe that gets better and better the more it is used.
An AI algorithm is like an advanced recipe
Think of a standard algorithm like a simple cooking recipe.
- Ingredients (Input): You add all the specific items listed.
- Instructions (Algorithm): You follow every step in the exact order.
- Finished Dish (Output): You get a predictable result every time.
An AI algorithm, however, is a recipe that learns from experience.
- Ingredients (Data): It’s given a massive database of “ingredients” to analyse. For example, thousands of photos of cats and dogs, but with no labels.
- Instructions (The AI Algorithm): The special recipe tells the computer how to study this data to find patterns on its own. For instance, it might notice that “fluffy ears” and “whiskers” are common in the cat pictures.
- Refinement (Learning): The computer is then given more images and is told whether its conclusions are right or wrong. It uses this feedback to get better at telling the difference between a cat and a dog.
- Finished Dish (The AI): The final “product” is a trained AI model that can accurately identify cats and dogs in new pictures it has never seen before.
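For the more technically curious, the “recipe that learns” idea above can be sketched in a few lines of Python. This is only a toy illustration: the features (“fluffy ears”, “whiskers”, “barks”) and the tiny dataset are invented for the example, and real AI systems use far more sophisticated learning methods.

```python
# Toy illustration of a "recipe that learns".
# A static algorithm applies a fixed rule forever; the learning
# version adjusts its rule weights whenever feedback says it was wrong.
# All features and examples here are invented for illustration.

def train(examples, rounds=20):
    # Start knowing nothing: every feature weight is zero.
    weights = {"fluffy_ears": 0.0, "whiskers": 0.0, "barks": 0.0}
    for _ in range(rounds):
        for features, label in examples:   # label = the "right or wrong" feedback
            score = sum(weights[f] * features[f] for f in weights)
            guess = "cat" if score > 0 else "dog"
            if guess != label:             # refinement step: learn from the mistake
                direction = 1 if label == "cat" else -1
                for f in weights:
                    weights[f] += direction * features[f]
    return weights

# Tiny invented "dataset": 1 = feature present, 0 = absent.
examples = [
    ({"fluffy_ears": 1, "whiskers": 1, "barks": 0}, "cat"),
    ({"fluffy_ears": 0, "whiskers": 0, "barks": 1}, "dog"),
    ({"fluffy_ears": 1, "whiskers": 0, "barks": 1}, "dog"),
    ({"fluffy_ears": 0, "whiskers": 1, "barks": 0}, "cat"),
]

weights = train(examples)
for features, label in examples:
    score = sum(weights[f] * features[f] for f in weights)
    print("cat" if score > 0 else "dog", "- expected:", label)
```

After a couple of passes over the examples, the weights settle so that every animal is classified correctly: the program has “learned” the pattern from feedback rather than being told the rule in advance.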
We then challenged it to explain what an LLM (Large Language Model) is.
An LLM (Large Language Model) is an incredibly powerful computer program that can understand and generate human-like text. Think of it as an expert storyteller and conversationalist that has read and memorised a vast library of text from the internet, books, and articles.
Here’s a breakdown of the key concepts:
• A giant “fill-in-the-blanks” game. The core function of an LLM is to predict the next word in a sentence. Based on the immense amount of text it has read, it knows which word is most likely to follow another. For example, if you type “The cat sat on the…”, the model can predict that the next word will probably be “mat,” “couch,” or “floor”. This simple, repetitive process is what allows it to generate coherent sentences, paragraphs, and even entire essays.
• Vast knowledge. The “Large” in LLM refers to the fact that these models are trained on gigantic datasets—trillions of words from all over the internet, including Wikipedia, books, and news articles. This massive training allows the LLM to learn the patterns, grammar, and nuances of human language.
• Mimics human writing. By learning from this huge corpus of data, the model can generate text in a wide variety of styles, tones, and formats. It can write a news article, summarise a document, translate a language, or even generate computer code, all by mimicking the patterns it has learned.
• It doesn’t “think.” It’s important to remember that the LLM doesn’t actually “understand” language or the world like a human does. It’s a highly sophisticated pattern-matching system. It strings together words based on probabilities derived from its training, which can sometimes lead to factual errors, also known as “hallucinations”.
• How you use it. You interact with an LLM by giving it a text-based “prompt”. This could be a question, a command, or the start of a story. The model then uses what it has learned to generate a relevant response, providing a powerful and versatile tool for a wide range of tasks.
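The “fill-in-the-blanks” idea can be demonstrated with a tiny sketch: count which word follows which in a small sample text, then predict the most likely next word. This is only an illustration of the principle; the corpus is made up, and real LLMs use enormous neural networks trained on trillions of words, not simple counts.

```python
from collections import Counter, defaultdict

# Toy sketch of next-word prediction: for each word, count the
# words seen immediately after it in a tiny invented "corpus".
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the couch . "
    "the dog sat on the floor ."
).split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    # Pick the most frequent follower of `word` in the corpus.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # "cat" follows "the" most often in this corpus
print(predict_next("sat"))   # "sat" is always followed by "on"
```

Repeating this prediction step over and over, at vastly larger scale, is what lets an LLM string together whole sentences and essays.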
So if an LLM is a Large Language Model, it follows that an SLM is a “Small Language Model”. These are specialist models trained on smaller datasets: for example, an SLM might only know the information needed to analyse X-rays in a hospital, or to power a customer service chatbot.
…and what is a Chatbot?
A Chatbot is a computer program that simulates and processes human conversation, either written or spoken. In simple terms, it’s a digital assistant you can “chat” with to get information, complete tasks, or receive help.
You have likely interacted with a chatbot on a company’s website to ask questions about your order or seen one on social media.
Chatbots operate on a simple principle: they receive input from a user and generate an appropriate response. This can be done in a couple of ways:
- Rule-based: The simplest chatbots follow a predetermined path and are limited to specific tasks. If a user asks “What are your hours?”, the bot is programmed to recognise the keywords “hours” and will provide a pre-written response.
- AI-powered: More advanced chatbots use artificial intelligence (AI) and machine learning to understand and interpret more complex, open-ended requests. These bots can learn from interactions to provide more personalised and human-like conversations. For example, they can detect the intent of a question even if it’s phrased in different ways.
Where you might find chatbots
- Customer service: An automated pop-up on a website that can answer frequently asked questions or help you with a simple issue like tracking a package.
- Virtual assistants: The voice assistants on your smartphone or smart speaker, like Siri or Alexa, are examples of AI-powered chatbots.
- E-commerce: A bot that can offer personalised product recommendations or help you with your order while you are shopping online.
- Information retrieval: A chatbot that can quickly pull up information from a company’s database, like employee benefits details from an HR system.
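The rule-based approach described above is simple enough to sketch in a few lines. This is a minimal illustration, not how any particular product works, and the keywords and canned answers are invented for the example.

```python
# Minimal sketch of a rule-based chatbot: spot a keyword in the
# user's message and return a pre-written response.
# Keywords and answers are invented for the example.
RULES = {
    "hours": "We are open 9am to 5pm, Monday to Friday.",
    "track": "You can track your package on our orders page.",
    "refund": "Refunds are processed within 5 working days.",
}

def reply(message):
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    # No rule matched: this is where rule-based bots hit their limit.
    return "Sorry, I don't understand. Let me connect you to a human."

print(reply("What are your hours?"))
print(reply("Can I get a refund please?"))
print(reply("Tell me a joke"))
```

The limitation is obvious: anything outside the keyword list gets the fallback answer. AI-powered chatbots exist precisely to handle the open-ended questions this kind of bot cannot.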
Finally, we moved on to the elephant in the room and the source of many controversies:
Generative AI
Here’s Gemini AI’s description:
“Generative AI is a type of artificial intelligence that creates new, original content by learning from and mimicking patterns in existing data. Instead of just analysing or categorising information, it generates fresh, unique text, images, music, or code in response to a user’s request.”
This is a very positive, glowing definition (you might expect that from an AI describing itself!). To use some derogatory slang terms: generative AI is also a “stochastic parrot” which can create “AI Workslop”, sometimes complete with “hallucinations”.
A “stochastic parrot” is a metaphor for large language models (LLMs) that can produce human-like text but lack true understanding of the content. “Stochastic” comes from a Greek word, “stókhos,” meaning “guess.” These models, which use statistical patterns from vast datasets, effectively “parrot” back text without genuine comprehension of meaning, context, or factual accuracy.
AI Workslop is a term for shoddy work output, such as reports and documents, produced using LLMs, which often includes AI-generated false information known as hallucinations. So we have to be very critical about the material produced by Generative AI.
How Generative AI Works
- Massive data training: It is fed enormous amounts of data, such as billions of web pages, books, and images. During this process, it learns the underlying patterns and structures, like grammar, artistic styles, or musical composition.
- Learning relationships: It doesn’t just memorise the data. It learns the relationships between all the different elements. For example, in language, it learns how words relate to each other and how they form coherent sentences and ideas.
- Generating new content: When a user provides a command, or “prompt,” the AI uses its learned patterns to predict and create new content that fits the request. The output is new, but it is stylistically and structurally similar to its training data.
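The three steps above can be sketched with a toy text generator: learn which words follow which in a tiny “training text”, then create new text by repeatedly predicting a next word from a prompt onward. This is purely illustrative (the training text is invented, and real generative AI uses huge neural models, not word counts), but it shows why the output is new yet stylistically similar to the training data.

```python
import random
from collections import defaultdict

# Toy sketch of train -> learn relationships -> generate.
# The training text is invented for the example.
random.seed(0)  # make the illustration repeatable

training_text = (
    "the cat sat on the mat and the cat slept . "
    "the dog sat on the floor and the dog slept ."
).split()

# Step 1 & 2: "training" = recording which words ever follow which.
followers = defaultdict(list)
for word, nxt in zip(training_text, training_text[1:]):
    followers[word].append(nxt)

# Step 3: generate new text from a prompt by repeatedly
# picking one of the learned follow-on words.
def generate(prompt, length=8):
    words = [prompt]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:  # dead end: nothing ever followed this word
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Every two-word sequence in the output appeared somewhere in the training text, yet the whole sentence may never have: new content, stitched together from learned patterns.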
Common examples:
- Text and code: Tools like ChatGPT can write essays, emails, poems, and computer code.
- Images and art: Programs like Midjourney and DALL-E can generate unique images from simple text descriptions.
- Audio and music: AI can compose original music in different genres or generate realistic voiceovers for videos.
Here is a recent real-life (and expensive) example of AI Workslop:
Deloitte to partially refund Australian government for report with apparent AI-generated errors https://www.ctvnews.ca/business/article/deloitte-to-partially-refund-australian-government-for-report-with-apparent-ai-generated-errors/
Generative AI is making waves in the music and movie industries.
Listen to this Waltz in F-sharp minor, composed by the AI system NotaGen, performed by Stephen Malinowski, with an animated graphical score:
It’s not bad for AI – and it can only get better as the model is trained on more of the huge back catalogue of classical music recordings. And it’s not just classical music that is under threat. Chris commented on the following very recent story:
“…earlier this year, a band called The Velvet Sundown racked up hundreds of thousands of streams on Spotify with retro-pop tracks, generating a million monthly listeners on Spotify. But the band wasn’t real. Every song, image, and even its backstory, had been generated by someone using generative AI.” https://theconversation.com/why-industry-standard-labels-for-ai-in-music-could-change-how-we-listen-262840
This CBC news item plays a clip of Velvet Sundown’s music and discusses some of the implications: if you like a piece of music, does it matter if it was created by AI? What is the future for composers? In the news item, presenter Mark Carcassal plays a snippet of a song he had quickly created using AI software and only a page of notes on the broadcast’s subject material. Impressive!
The movie and TV industries are also scratching their heads about what to do with AI generated actors (AI avatars):
https://variety.com/2025/film/news/ai-actress-tilly-norwood-talent-agents-zurich-summit-1236533454/ AI Actress Tilly Norwood Debuts at Zurich Summit as Industry Grapples With Emerging Tech: We Want Her ‘to Be the Next Scarlett Johansson’
This comedy sketch featuring Tilly is entirely AI-generated. Tilly has been trained on the performances of a huge number of actors and can mimic them in any given situation. It (not “she”) is very versatile!
The company which made this video, Particle6, claims savings of up to 90% in the movie creation process. This could pose a threat to traditional moviemaking, but at the same time, open a door to small independent enterprises. The use of AI avatars also raises ethical questions: Tilly is not a human being, so its acting is not constrained by the protections and restrictions surrounding human actors.
These challenges are becoming more pressing as AI software gets better at mimicking human activities.
We finished off by looking at some of the trendy AI slang being developed by humans to throw insults at the technology:
Trendy AI slang includes terms like “AI washing” (deceptive marketing claiming more AI than there is), “slopper” (someone who uses AI for everything), “Groksucker” (a user of Elon Musk’s Grok chatbot), and “clanker” (a derogatory term for an AI system or bot). Other terms describe the AI experience itself, such as “AI glazing” (overly complimenting AI) and “Shoggoth” (referring to the hidden, alien nature of AI).
For more terms, see: https://archive.ph/20250816164104/https://www.fastcompany.com/91383875/ai-washing-sloppers-5-ai-slang-terms-you-need-to-know-slurs-clanker-groksucker-grok-chatgpt#selection-547.0-733.419
Christine Betterton-Jones – Knowledge Junkie (and slightly crazed human)
