No, this is where you are wrong. I’m not claiming that AI systems are accurate all the time. I’m responding to a claim that there is a fundamental difference between human reasoning and the way AI comes up with answers. I’m saying that there is not.
I never claimed that LLMs or the other models AI systems use are always perfect. My point is that humans and AI think in similar ways, and both are capable of making mistakes.
Humans have a head start because our brains come pre-wired by millions of years of natural selection acting on our human and pre-human ancestors. Natural selection has also wired us to train our kids, so by the time we are adults our brains have learned things that help with survival. But that is a difference of degree, not evidence of some ability to reason that AI systems lack.
As for your questions, I usually use Copilot or Perplexity, and while they make occasional mistakes, they usually don’t make the kind of mistakes you are talking about.
Again, maybe ChatGPT is just not as good as people make it out to be. The only reason it gets so much press is the big squabble over who would control it when it first came out.