Yeah I believe that AGI is possible and LLMs could be part of that, though I have no idea if it’s technically feasible within our lifetime.
The problem is that a lot of people out there think that AIs are currently “thinking” and all, which is just not true.
I’ve been a computer programmer since back in the early '70s. I know that in fact you can teach a computer new tricks. You can do it by entering data into data structures. But the point is not how the data gets there. The point is that it is data.
And that’s the same thing that goes on in humans. Our brain is a combination of CPU, GPU and RAM in the form of a massive cobweb of neurons, a neural network that conducts electricity. That is how we think, and it is something no one will talk about.
Everyone keeps pulling the conversation back toward “AI is only a GPU processing data” and away from what I’m saying, which is that our minds are neural GPUs processing data as well.
And how do we know it is the right answer? We know because electromagnetic waves pass through our neural net, calling up information from the structure of our neurons, allowing us to “discover” or “calculate” that the answer is 4.
You are 100% right about LLMs. The only thing everyone is missing is that our minds are LLMs as well.
And without a program to use that data, it would just sit there doing nothing.
You, the Human, have to actually write a program to process and use that data. Now you can get pretty fancy with what the program will do with that data, but at the end of the day the program only does what you tell it to do (not counting bugs or hardware problems of course). A computer program is never going to do anything “by itself” unless it was programmed to do so by a Human.
I mean sure, you could write a program that has a 20% chance every Random(15,120) seconds to do something with the data to make it look like it’s making decisions, but it’s still just following its programming, whatever that is.
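Something like this, to make it concrete (the 20% chance and the 15–120 second wait are just the numbers from above; the data is placeholder strings):

```python
import random
import time

data = ["record A", "record B", "record C"]  # placeholder data

# Run a handful of cycles for demonstration; a real version could loop forever.
for _ in range(5):
    # Wait a random interval between 15 and 120 seconds.
    time.sleep(random.uniform(15, 120))
    # 20% chance to "do something" with the data, so it looks like a decision.
    if random.random() < 0.20:
        print("Acting on:", random.choice(data))
# Whatever it does, it is only ever executing the rules written above.
```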
Sit a Human down with some data and tell them what to do and the Human might disobey you. Not because of a bug in programming, or a hardware fault, but because they made a conscious decision to go against your orders. Sure, most people wouldn’t do that, but they can, unlike a computer program.
A computer program would only ever disobey if it were explicitly programmed to disobey, which wouldn’t make it a conscious choice.
If you’re getting this from the statement you quoted, then I can’t help you.
I’ve never disagreed with this. Yes it is exactly correct. What you are missing is that we have been programmed as well. How? By hundreds of millions of years of natural selection.
Natural selection caused our species and the ancestors of our species to mutate and try different things until we became what we are today. It also caused us to teach our young, so we are born with part of our programming and we are further programmed by our parents and teachers.
In the end the neural network between our ears has been programmed just like the computers we program to print paychecks, run AI models and let us play World of Warcraft.
On the ability of AI to think and reason
There is a new generation of ChatGPT AI that can think and reason. The current series includes o1 and o1-mini, o3 and o3-mini (o2 was skipped due to a trademark conflict with the telecom company O2), and o4 and o4-mini. Note that the ‘o’ is the letter o, not a zero. The full o4 has not been released yet. These AI series are so advanced that our government is demanding that some chip manufacturers embed GPS tracking into these chips.
I won’t go into detail about what this new generation of AI can do, but you can do some research. I might mention that AI is presently developing at an exponential rate.

The only thing everyone is missing is that our minds are LLMs as well.
that is not true. LLM is not a catch-all term synonymous with intelligence, artificial or otherwise.

There is a new generation of ChatGPT AI that can think and reason.
nah.
Oh heck yeah, everyone should just stop learning things, stop creating, stop thinking, only consume. Let an algorithm hallucinate what you want to see. Doesn’t matter if it’s correct, or good information, as long as it’s something that pleases you for 15 seconds.
Frikken losers

that is not true. LLM is not a catch-all term synonymous with intelligence, artificial or otherwise.
Sure fine. I’m responding to people who use “LLM” as a catch-all term which is why I used it in that case. I usually use the words “AI model”. So it would be interesting if you were to substitute that for “LLM” and let us know if you agree with the point I’m making.
That’s the thing though, our current AIs are specialized. An AI designed to fold proteins isn’t going to be able to diagnose cancer, and an AI designed to diagnose cancer isn’t going to be able to carry on a conversation with you, and an AI designed to carry on a conversation can’t fold proteins.
An AGI would be capable of doing all of that, and more.
Humans can do all of it, by the way.
People use the term LLM to refer to LLMs because they’re LLMs.

read that it’s going to hit the tech sector hard.
Only in the sense that greedy corporations will try and replace workers with it, and the remaining workers get twice the workload of fixing garbage AI mistakes.
AI is a massive bubble that, when it collapses, is probably going to be the dot-com bust all over again. The costs associated with the product can only ever be justified if it is able to hit true AGI and start replacing actual jobs. It’s pure speculation/assumption that it will just keep getting better and allow them to recoup their investment later.
Short of some breakthrough that takes everything in a completely different direction, LLMs are a dead end for AGI. Their improvements have almost flatlined, and they require ever more clean, non-slop data to improve, but they’ve already polluted the internet with nearly infinite piles of AI slop that they can’t eat. People have started actively leaving AI poison everywhere to deter AI data scrapers too.

That’s the thing though, our current AIs are specialized. An AI designed to fold proteins isn’t going to be able to diagnose cancer, and an AI designed to diagnose cancer isn’t going to be able to carry on a conversation with you, and an AI designed to carry on a conversation can’t fold proteins.
Same thing with humans. I can’t fold proteins or diagnose cancer and I’m struggling to get my major point across to you.
You keep putting up all sorts of reasons why AI models are not really intelligent or can’t reason but just about everything you put forward proves that humans are not intelligent and can’t reason because humans can’t do all those things either.

People use the term LLM to refer to LLMs because they’re LLMs.
Many times they do, but other times they use it in a broader sense. I’m not saying I agree with that, but I’ve seen others do it. That is why I use “AI models”.

Same thing with humans. I can’t fold proteins or diagnose cancer
I think you really underestimate the ability of our brains.
There’s been literal protein folding games where untrained people learned how to do it. EVE Online even had a real life research project integrated into the game where players would help researchers out.
You could be trained to diagnose cancer.
These are all things that the average human is capable of learning to do.
But asking ChatGPT to fold proteins, or asking a protein folding AI to diagnose cancer, or asking a cancer diagnosing AI to answer questions, is never going to work.
Our current AI models cannot advance beyond their very narrow programming. They can be really good at their specialized tasks, even better than Humans in many cases, but it is literally impossible for them to perform other tasks because they are not designed for it.
The human brain can learn many different tasks. A cancer researcher doesn’t just research cancer, but they can also play WoW, they can talk, they can write, they can create art. Maybe they are really good at playing piano. Maybe they know how to program. Maybe they watch tv shows or engage in political debate. Maybe they had a midlife crisis and used to be a rocket scientist and changed careers.
An AI that diagnoses cancer will never do any of those other things.
An AI that diagnoses cancer will only ever diagnose cancer. There’s no thought behind it, no understanding or comprehension or reasoning. Just pattern identification and predictions.
I do not know how else to explain it to you. You keep trying to compare AI with the human brain, but they are not, as of now, comparable.
The human brain did not evolve over millions of years to fold proteins, diagnose cancer, or even to carry on conversations. I mean, just look at WoW: that’s a game that didn’t even exist more than 20 years ago (excluding alpha and beta, of course), yet humans could pick it up and start playing it on day 1.
You cannot seriously argue that our brains were trained to play WoW through millions of years of evolution, and that our brains are therefore like AI because you could also spend millions of years training an AI to play WoW?
Like, what?
Anyway, I am done with this conversation now.

There’s been literal protein folding games where untrained people learned how to do it. EVE Online even had a real life research project integrated into the game where players would help researchers out.
You could be trained to diagnose cancer.
Yes, and AI models could be trained to play these games as well. You could do it through the “learning” method of trial and error, or you could use the “teaching” method of programming them.
Fact is, you put me and a computer with no protein folding software in a room and neither of us would know how to play the game. Teach me, teach the computer and off both of us go.
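To make the two routes concrete, here is a toy sketch (the action names and reward probabilities are made up purely for illustration): the “taught” version has the answer written in by the programmer, the “learning” version finds it by trial and error against feedback.

```python
import random

# "Teaching": the programmer writes the answer in directly.
def taught_policy():
    return "fold-left"  # hard-coded

# "Learning": try actions, keep a running average of how well each one works.
def learned_policy(trials=1000):
    reward_prob = {"fold-left": 0.8, "fold-right": 0.2}  # hypothetical environment
    avg = {"fold-left": 0.0, "fold-right": 0.0}
    count = {"fold-left": 0, "fold-right": 0}
    for _ in range(trials):
        action = random.choice(list(avg))                       # trial
        r = 1 if random.random() < reward_prob[action] else 0   # feedback
        count[action] += 1
        avg[action] += (r - avg[action]) / count[action]        # incremental mean
    return max(avg, key=avg.get)

print(taught_policy(), learned_policy())  # both end up picking "fold-left"
```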

The human brain can learn many different tasks. …
An AI that diagnoses cancer will never do any of those other things.
I disagree. A computer with an AI application could be taught to do different things just as a person could be taught to do different things. Computers have been able to do multiprogramming since the late 1960s to early 1970s.
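A tiny sketch of what I mean, in modern terms: one machine interleaving two unrelated tasks (the task names are just placeholders, not real diagnostic or chat code):

```python
import threading
import time

def diagnose_task():
    # Stand-in for one kind of work.
    for i in range(3):
        print("analysing scan", i)
        time.sleep(0.1)

def chat_task():
    # Stand-in for a completely different kind of work.
    for i in range(3):
        print("replying to message", i)
        time.sleep(0.1)

# The same computer runs both, interleaved by the scheduler.
t1 = threading.Thread(target=diagnose_task)
t2 = threading.Thread(target=chat_task)
t1.start(); t2.start()
t1.join(); t2.join()
```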

Short of some breakthrough that takes everything in a completely different direction, LLMs are a dead end for AGI. Their improvements have almost flatlined…
Across different models, labs and metrics, AI capability is actually progressing linearly or even curving up on benchmarks (1). A majority of industry leaders and domain experts (some Nobel prize winners among them) now have timelines predicting systems that meet definitions of AGI—at least for cognitive tasks—anywhere between 2026-2040 (2). Academia and industry are also converging into Agent frameworks, with LLMs as the core and glue, that will enable our AI to be autonomous and goal-driven (3).
(1) https://epoch.ai/data/ai-benchmarking-dashboard
(2) https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/#artificial-general-intelligence-timeline
(3) https://arxiv.org/abs/2504.01990

… and they require ever more clean, non-slop data to improve, but they’ve already polluted the internet with nearly infinite piles of AI slop that they can’t eat. People have started actively leaving AI poison everywhere to deter AI data scrapers too.
Generative AI training pipelines are increasingly leveraging synthetic data, distillation and self-play. Demonstrating and documenting these techniques for state-of-the-art open-source LLM training was actually one of DeepSeek’s major contributions (4).
(4) https://www.digitimes.com/news/a20250401PD229/deepseek-2025-01.ai-ceo-beijing.html
“We’ve entered an era where AI is teaching AI,” Lee said. Advanced models now exhibit slow thinking, self-reflection, and the ability to iteratively improve. He described a shift toward a teacher-student framework, where larger models train smaller ones using techniques like model distillation, labeled datasets, and synthetic data generation to accelerate deployment.
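A minimal numeric sketch of the distillation idea Lee describes (the logits, class count and temperature here are made-up values; a real pipeline would apply this loss across whole datasets during training):

```python
import numpy as np

def softmax(logits, T=1.0):
    z = np.exp((logits - logits.max()) / T)
    return z / z.sum()

# Hypothetical logits for one example over 4 classes.
teacher_logits = np.array([4.0, 1.0, 0.5, 0.2])  # large "teacher" model
student_logits = np.array([2.0, 1.5, 0.8, 0.3])  # small "student" model

T = 2.0  # temperature softens the teacher's distribution
teacher_probs = softmax(teacher_logits, T)
student_probs = softmax(student_logits, T)

# Distillation loss: KL divergence pushing the student toward the teacher.
kl = np.sum(teacher_probs * (np.log(teacher_probs) - np.log(student_probs)))
print("distillation loss:", kl)
```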

Humans can catch themselves. We can realise that what we are thinking is ridiculous and not say it.
Artificial Intelligence discussion aside, I laughed out loud IRL on this one. If only humans did this more (or could even do it - some people psychologically are unable at times), the world would be such a better place.

Most AI is actually a tool and not a replacement for humans, which is why it is laughable when people try to claim it will replace humans altogether. If someone tried to replace QA/devs with this tool, it would be a terrible outcome in terms of quality.
This is 100% absolutely true.
But.
I am old enough to remember when ATMs started to be introduced.
Bank staff at the time said that ATMs would not take over the tellers’ jobs because people liked interfacing with other humans, and there were jobs the ATMs couldn’t do.