Slow learners win over fast learners that use ChatGPT: study

In a rare real-life instance, a video shows a race between a hare and a turtle in which the turtle wins.

It seems to me that ChatGPT turns students into hares when it comes to writing essays and learning.

University students who rely too heavily on LLMs for writing essays appear unable to remember what they purport to have written themselves, according to teachers and researchers. As The Guardian and commentators put it, they “could [not] give a quote” [MCBAIN], or more strongly, they “couldn’t recall a single quote” [LEWIS].

The conclusion of the original study [KOS] puts it more elaborately:

While these [students who rely on ChatGPT] demonstrated substantial improvements over ‘initial’ performance (Session 1) of [non-ChatGPT] group, achieving significantly higher connectivity across frequency bands, they consistently underperformed relative to Session 2 of [non-ChatGPT], and failed to develop the consolidation networks present in Session 3 of [non-ChatGPT]. Original LLM participants might have gained in the initial skill acquisition using LLM for a task, but it did not substitute for the deeper neural integration, which can be observed for the original [non-ChatGPT] group. [KOS, p. 139]

It is a 2025 brain-imaging study from MIT.

In other words, if writing requires phase 1, phase 2, and phase 3, almost like programming requires 33% preparation, 17% coding, and 50% debugging [BROOKS], then ChatGPT appears to be very helpful in phase 1, because students can find and digest information faster, … even draft and edit.

However, the slow readers and slow researchers who do not use ChatGPT in their writing and preparation appear to form the memories and neural activity needed to come out of the entire process with some form of knowledge, while the ChatGPT students may not.

Aesop’s fable

Counter-intuitively, the turtle wins.

I think the careful, slow researcher who uses ChatGPT deliberately is likelier to be the winner in the AI business than the rest.

Sources

AI use declaration

ChatGPT suggested replacing “that” with “who” in:

" slow researchers that use ChatGPT"

AI was not used in any other way than spellchecking.

4 Likes

I’ve used tools like Copilot when developing. I can see how one’s approach to the use of LLMs can drastically change what one ultimately gains from that experience, and whether or not a developer truly understands what they’re making.

I think this is essentially the same issue your find above touches on. I also think that the more novice a person is in any particular subject they’re leveraging LLMs for, the more careful they need to be with how much they lean on LLMs to produce their work, if the goal is to learn.

So in terms of development, a developer can do what’s been dubbed vibe coding: essentially “developing” software by giving plain-language instructions to an LLM, which then generates code. The developer tests the code and, when issues are discovered, reports them back to the LLM, which presents updates and fixes. The key thing here is that the developer remains largely oblivious to the actual workings of the code.

The outcome may be a usable product, but that’s all the developer essentially gains. Any growing understanding of the language, the solution, even the issues and fixes, is absent, as the “thinking” for all of this was ceded to the LLM.

If the goal was to learn, this is a total failure. One could also make the point that developers, for their entire career, are effectively still learning. It’s one thing to know a language, but experience counts for a whole lot more.

If a developer already has a good understanding of a language and frameworks, then perhaps they can leverage an LLM to bounce ideas off, generate chunks of familiar but time-consuming scaffolding, help with a bug, or offer insights into an aspect of code that’s more challenging. Using an LLM in this way is probably closer to how one might discuss ideas with a learned co-worker. Remember, co-workers don’t always get it right either, and the same is true for LLMs :wink:

In the latter approach, the developer is far more connected with their code and the solutions. But I don’t believe this approach is compatible with a student or novice in any particular language; it will likely only stunt their goal of learning.

4 Likes

Now that you’re describing

vibe coding … the developer remains largely oblivious as to the actual workings of the code.

it reminds me of the problem in MOOCs, the video-correspondence-course craze of the early 2010s, better known as edX, Coursera, etc., what later became TikTok; in particular

[PYTHON] http://www.youtube.com/playlist?list=PLBA9BftXYsQxFsCrHyzy5xgwguogRF2oz (Python for Everybody: Exploring Data in Python 3, Severance, Blumenberg, and Hauser 2016)

… was particularly sinister for the tech-savvy iPhone hipsters, who all got quite addicted to the videos, because the executed code really made them feel good (passionate, they say). After two or three series of basic Python, they couldn’t really program independently, or in another language, say, SQL.

By comparison,

[HTDPBOOK] https://htdp.org/2018-01-06/ (How to Design Programs: An Introduction to Programming and Computing, Felleisen, Findler, Flatt, and Krishnamurthi 2018 @ November 16, 2019)

and the companion-course videos

[HTDPVIDEO] https://www.edx.org/masters/micromasters/ubcx-software-development-foundations (‘[How to Design Programs] Software Development’, Holmes, Murphy, Baniassad, and Kiczales 2017 @ October 24, 2025)

rely on test-driven mathematics for teaching, with plenty of preparation time and (TDD) overhead for trivial one-liners in the beginning. Brown University and the University of British Columbia teach how to “design” programs with templates, signatures, data structures, and test cases, a lot of test cases. Because the learners eventually can do software programming regardless of the programming language, the video courses cost $600, while the above-mentioned Python course is now free on YouTube.

_ MOOCs LLMs
PYTHON copy-paste copy-paste
HTDP TDD ?
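To make the TDD-versus-copy-paste row concrete: HtDP’s design recipe works through a data definition, a signature, a purpose statement, examples written as tests, and a template, all before the function body. A minimal sketch of that recipe transposed to Python (HtDP itself uses Racket’s student languages; the function and example values here are my own illustration, not from the book):

```python
# HtDP design recipe, transposed to Python.
# Steps: data definition -> signature -> purpose -> examples-as-tests
# -> template -> body. The overhead is deliberate, even for one-liners.

# Data definition: a temperature is a float, in degrees Celsius.

def celsius_to_fahrenheit(c: float) -> float:
    """Signature: float -> float
    Purpose: convert a temperature from Celsius to Fahrenheit."""
    # Template for atomic data: ... c ...
    return c * 9 / 5 + 32

# Examples are written as tests *before* the body exists; only then
# is the template filled in. A lot of test cases, as the post says.
assert celsius_to_fahrenheit(0) == 32.0
assert celsius_to_fahrenheit(100) == 212.0
assert celsius_to_fahrenheit(-40) == -40.0
```

The point of the recipe is that the thinking (what data goes in, what comes out, what the examples are) happens before any code runs, which is the opposite of copy-pasting a working snippet from a video or an LLM.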

I think you don’t learn anything from LLMs; you need to come prepared and studied to work with them. For instance, for image generators it takes design-specific vocabulary to produce art styles (or architectural styles), be it the name of an artist (or architect), etc. You could learn what an LLM does in response to art-style prompts, but you won’t learn how to draw in a particular style yourself, nor how to describe the art style without the name.

Are we getting dumber with ChatGPT?

2 Likes

I think if we were to reduce this down to the simplest conclusion, that about sums it up :persevering_face:

3 Likes

They’re now in the mail-order business (a.k.a. online ads), too. ChatGPT, on its own, suggests products by reading your calendar in advance; no need to even think about a product.

Google … Microsoft, Amazon, and Meta are building similar systems.

ChatGPT reached 800 million weekly users by September 2025, [4x] faster than social media [in the past].

1 Like

Who needs The Guardian? The peer-reviewed research on our plummeting IQ alone is accumulating, sobering, and sad. Peer-reviewed stuff that rarely makes the mainstream. We’ve already done ourselves in as we begin to devolve instead of evolve.

Good times. :slight_smile:

1 Like

I agree on the “simplest conclusion” :wink:

Over the past several decades, there has been an estimated 13–14% decline in Homo sapiens’ IQ.

Homo sapiens getting dumber is not something caused by AIs, though they may accelerate the pace :rofl:

Ask any AI and it will confirm the info :wink:

3 Likes


I’ve lost a long-time friend over AI. She kept sending me ChatGPT answers, and I had enough at one point and told her that AI is not always right, told her about the Google Gemini debacle, and listed other stuff as well. I told her that I know more than her about IT, so she should at least fact-check with me.

I got told that I’m paranoid and mentally ill, and that I should not be worried about my data (lmao)?! She then told me that AI makes her life easier. She has a newborn, and I really hope she does not end up asking the AI about any health issues the baby might have…

So yes, I’ve seen AI cause a decline in a human’s intelligence first-hand. It’s scary.

And all the students now are getting away with it because there are no good working countermeasures. This will change down the line; until then, we have to live with college graduates who will have a lot less knowledge within their fields of study, which I hope won’t affect important fields like medicine and science.

2 Likes

A decline in Homo Sapiens’ IQ has been observed since the early 1970s.

AI as we know it today cannot possibly have been the cause. However, at this point we may have entered a vicious circle in which AI, in conjunction with other factors hitherto at play, might accelerate the pace of decline.

I am sorry to hear that you have lost your friend over her use of AI. I am sure no AI has such power over an individual as to make them blindly trust it. Maybe it will at some point. The responsibility for how the tool is used, as I see it, lies with your friend and not the tool.

You could still make intelligent use of the tool, in my opinion.

2 Likes

:grin: I see what you did there

There are plenty of countermeasures; not every educator wants to explore them.

2 Likes

Reddit has a group with 70,000 members full of delusional people.

for drug dealers.

2 Likes

for peddlers of many things as well

2 Likes

There are plenty of countermeasures; not every educator wants to explore them.

Most countermeasures output false positives. There have been students wrongfully sanctioned because the tool used “identified” the student’s writing as AI-generated.

Like AI itself, countermeasures are not quite there yet.

1 Like

ahhh grasshopper, but assignment prompts/directions can be tweaked where AI can’t go…

This is in response to @jackkileen in another thread about what G. Hinton did say.

[TIMESTAMP] https://www.youtube.com/watch?v=jrK3PsD3APk&t=3900s (AI: What Could Go :wavy_dash:? With Geoffrey Hinton | The Weekly Show with Jon Stewart 2025 @ October 19, 2025)

JON STEWART: So let’s talk about that. I don’t know if-- what’s China’s role? Because they’re supposedly the big competitor in the AI race. That’s an authoritarian government. I think they have more controls on it than we do.

GEOFFREY HINTON: So I actually went to China recently and got to talk to a member of the politburo. So there’s 24 men in China who control China. I got to talk to one of them who did a postdoc in engineering at Imperial College London. He speaks good English. He’s an engineer. And a lot of the Chinese leadership are engineers. They understand this stuff much better than a bunch of lawyers.

JON STEWART: Did you come out of there more fearful? Or did you think, oh, they’re actually being more reasonable about guardrails?

GEOFFREY HINTON: If you think about the two kinds of risk, the bad actors misusing it and then the existential threat of AI itself becoming a bad actor-- for that second one, I came out more optimistic.

They understand that risk in a way American politicians don’t.

They understand the idea that this is going to get more intelligent than us, and we have to think about what’s going to stop it taking over. And this politburo member I spoke to really understood that very well.

And I think if we’re going to get international leadership on this, at present, it’s going to have to come from Europe and China. It’s not going to come from the US for another 3 and 1/2 years.

JON STEWART: What do you think Europe has done correctly in that?

GEOFFREY HINTON: Europe is interested in regulating it.

JON STEWART: Right.

GEOFFREY HINTON: It’s been good on some things. It’s still been very weak regulations, but they’re better than nothing. But European leaders do understand this existential threat of AI itself taking over.

JON STEWART: But our Congress, we don’t even have committees …

mention “Dua Lipa”

People on YT complain about people on TikTok complaining about AI citing X as source.

This is content slop all the way down.

2 Likes

Dua Lipa gave me brain rot and I don’t care :zany_face:

2 Likes

For clarification, and for busy people who haven’t watched the video: it shows university teachers giving assignments with an unrelated trick phrase, namely “mention Dua Lipa”, buried late in the assignment text. If a student hands in an essay that mentions “Dua Lipa” somewhere, they get an F.

The video shows what universities and schools have already complained about for some time: naive and widespread reliance on the chatbots. The video also appears to show that the problem may affect not just students but a wider chunk of tech enthusiasts under 35.

The Ig Nobel Prize uses a very nice trick: you laugh a lot, and then the “oh wait” moment kicks in. I hope you’re not missing the point of the thread. On the other hand, if it’s just black humor on your part, good banter.

Many reactions sound to me like “I have nothing to hide” when it comes to privacy and security technology.