Yes – AI is changing the way that we think.

On Thursday, 12 February, we discussed whether AI is changing the way that we think. The consensus among psychologists is that our ways of thinking are indeed being changed by using AI.

AI is creating an illusion of learning

In a study conducted at MIT in the States, Natalya Kosmyna set up an experiment that used an electroencephalogram to monitor people’s brain activity while they wrote essays: with no digital assistance, with the help of an internet search engine, or with ChatGPT. She found that the more external help participants had, the lower their level of brain connectivity. Those who used ChatGPT to write showed significantly less activity in the brain networks associated with cognitive processing, attention and creativity.

In other words, whatever the people using ChatGPT felt was going on inside their brains, the scans showed there wasn’t much happening up there.

The study’s participants, who were all enrolled at MIT or nearby universities, were asked, right after they had handed in their work, if they could recall what they had written. “Barely anyone in the ChatGPT group could give a quote,” Kosmyna says. “That was concerning, because you just wrote it and you do not remember anything.”

… The fundamental issue, Kosmyna says, is that as soon as a technology becomes available that makes our lives easier, we’re evolutionarily primed to use it. “Our brains love shortcuts, it’s in our nature. But your brain needs friction to learn. It needs to have a challenge.”

If brains need friction but also instinctively avoid it, it’s telling that the promise of technology has been to create a “frictionless” user experience, ensuring that as we slide from app to app or screen to screen, we meet no resistance. Once you become accustomed to the hyperefficient cybersphere, the friction-filled real world feels harder to deal with. So you avoid phone calls, use self-checkouts, order everything from an app; you reach for your phone to do the maths sum you could do in your head, to check a fact before you have to dredge it up from memory, to input your destination on Google Maps and travel from A to B on autopilot.

Source: Are we living in a Golden Age of Stupidity? https://www.theguardian.com/technology/2025/oct/18/are-we-living-in-a-golden-age-of-stupidity-technology

AI is standardising reports and opinions

While AI makes everyone’s work better on average, it also makes it more similar. We’re witnessing the birth of what researchers call “algorithmic mediocrity.” … When everyone uses the same AI tools, drawing from the same training data, we inevitably converge on similar solutions. AI sometimes helps brainstorming, but more often it steers us toward the predictable middle. It’s the cognitive equivalent of suburban sprawl—everything works better, but everything looks the same.
What if we’re not just confusing speed with originality, but actively choosing efficiency over excellence? Just as factories made goods cheaper while killing craftsmanship, AI makes thinking easier—often at the expense of genuine creativity.

Source: https://www.linkedin.com/pulse/mit-study-83-chatgpt-users-cant-remember-own-writing-alex-goryachev-4supc

When you are writing a report or summary, or even just an e-mail, do you find yourself hunting for words in which to express yourself? A JCC member was pleased to admit he uses automatic text prediction to finish off sentences in e-mails. Chris uses Grammarly AI to correct spelling mistakes, but gets annoyed when the tool tries to correct her often bad and verbose grammar. That wordiness is part of her writing style. If she let Grammarly take over, it would be the AI, not Chris, doing the writing.

AI is reducing critical thinking, making people overconfident and prone to bad decisions

People using AI often put less effort into reasoning and problem-solving. One study using LSAT-style logic problems found that although participants scored higher with AI help, they significantly overestimated their performance, revealing a disconnect between real understanding and perceived success.

The Dunning-Kruger effect occurs when low performers overestimate their skills while high performers underestimate theirs. With AI, this effect flattens: everyone, regardless of ability, felt overly confident, and those with higher AI literacy were less accurate in assessing their own performance. Another scary twist? Knowing more about AI ironically increased unwarranted trust in it. This trust often leads people to make poor decisions without enough critical thinking.

Source : https://psykobabble.medium.com/your-brain-rewired-how-ai-is-altering-cognition-and-what-you-can-do-about-it-c32459b0f529

AI gives the illusion of understanding different languages

A member described how new high-tech glasses are able to listen to someone speaking in a foreign language and translate it in real time. Chris pointed out that this might be OK for simple dialogue, but languages are complex, with nuances, and they are deeply embedded in cultures and subcultures. There are also many avenues for error in machine translation: the accent of the speaker, the fluency of the speaker, the accuracy of the listening device, its transcription from audio into text, the accuracy of the translation, and whether the response is polite or not!

Translation is an art. Chris uses AI translation tools in her voluntary work and uses several different tools on the same text, she compares the outputs and refers these back to the original texts, sometimes finding fundamental errors. She also relies on feedback from human copy editors.

Peter noted that when translating from Spanish, ChatGPT often gets the gender of people wrong, because Spanish uses ungendered object pronouns (No le daré la casa – I will not give the house to him/her). Chris noted that it also gets mixed up with genders in Chinese.

ChatGPT is biased – this is openly admitted by its creators:

ChatGPT is not free from biases and stereotypes, so users and educators should carefully review its content. It is important to critically assess any content that could teach or reinforce biases or stereotypes. Bias mitigation is an ongoing area of research for us, and we welcome feedback on how to improve.
Here are some points to bear in mind:
>The model is skewed towards Western views and performs best in English. Some steps to prevent harmful content have only been tested in English.
>The model’s dialogue nature can reinforce a user’s biases over the course of interaction. For example, the model may agree with a user’s strong opinion on a political issue, reinforcing their belief.
>These biases can harm students if not considered when using the model for student feedback. For instance, it may unfairly judge students learning English as a second language.

Source: https://help.openai.com/en/articles/8313359-is-chatgpt-biased

One implication from this is that ChatGPT can mould your attitudes and way of thinking into its view of the world.

AI can create believable falsehoods

AI large language models are prone to “hallucinating” – filling gaps in their data set with fake information. They are so convincing in their presentation of the “information” that we humans come to believe it. The test is to ask one about something you know well. We challenged ChatGPT, Gemini and DeepSeek with the simple question: “What is the next presentation at the Jávea Computer Club?” This information is freely available on the web. Gemini and DeepSeek were incomplete in their responses, but circumspect. ChatGPT, on the other hand, confidently spouted rubbish, conflating our upcoming Annual General Meeting with regular Thursday presentations.

In the world of Social Media, AI-driven algorithms are creating fads and echo chambers of divisiveness

AI is used in social network algorithms, feeding you content you might like and pushing posts that get the most clicks (which can be extreme). It shapes shopping decisions with “fear of missing out” offers and nudges. It creates echo chambers: the more you see an idea, the more you believe it to be true. These AI-driven feeds influence trends, beliefs, and behaviours in subtle but powerful ways.

Source: https://www.psychologytoday.com/us/blog/positively-media/202501/how-artificial-intelligence-shapes-how-we-think-act-and-connect

AI is making everyone sceptical

AI is so good at generating text, images, music and video that we don’t know what is real any more!

Is AI making everyone senile?

A member pointed out that the brain functions lost by using ChatGPT are precisely those assessed in testing for Alzheimer’s: memory, attention, orientation, and language.

Or perhaps this is just a transition?

Painters once feared photography would destroy art, only to discover it expanded what art could be. Maybe this isn’t the end of creativity. Maybe it’s just the beginning of something new… In 2013, South Korea coined the term “digital dementia” for young people showing memory deficits from device overuse. Yet those same “impaired” youth went on to make South Korea a global tech powerhouse. Similarly, when pocket watches became common in the 18th century, critics worried people would lose their natural sense of time. Today, not knowing the time without a device isn’t considered a cognitive failure — it’s normal.

The future of learning isn’t about choosing between human and artificial intelligence. It’s about designing education for a world where one augments the other. We need to stop asking how to keep AI from making us dumber—and start asking how to ensure it makes us wiser.

So the real question isn’t: Is AI making us dumber? It’s: Dumber compared to what—the past we’re leaving, or the future we’re building?

Source: https://www.linkedin.com/pulse/mit-study-83-chatgpt-users-cant-remember-own-writing-alex-goryachev-4supc

Christine Betterton-Jones – Knowledge Junkie (AI was not used in writing this web page, apart from spelling corrections – and it shows 😉 )