Discussion about this post

Jessica Pin

I have been thinking about this a lot. Based on my conversations with ChatGPT (lol, the irony), neural networks are not and will never be capable of this. They cannot reason. They cannot impose coherence. When asked to resolve incoherence, they make up novel, crazy claims. They take noise and make it noise squared. We need an AI that can figure out what is true when all the "reliable sources" are wrong. It needs to incorporate symbolic and Bayesian reasoning. I have no idea what I'm talking about. But there has been zero improvement in ChatGPT's ability to describe female anatomy correctly, and it is worse than any human, because it is not logical: it mixes errors together and creates error squared in ways no human ever would, since human minds will at least be somewhat coherent.

