Logic Gear

Where Tech Starts

Do AI models hallucinate more than humans?

On Thursday, Anthropic held its first developer event, Code with Claude. At the event, CEO Dario Amodei acknowledged that AI models do hallucinate, presenting things that are not true as if they were. But, he argued, these models hallucinate no more than human beings do.

In his view, it is largely a matter of how you measure it. What an AI model gives you depends heavily on what you ask and how you ask it, and models hallucinate on some kinds of questions far more than others. Amodei did concede, however, that AI hallucinations can be surprising in ways human mistakes are not.

If there is one bullish leader in the AI industry, it is the Anthropic CEO. He argues that people keep looking for hard blocks on what AI can do, and that those blocks are nowhere to be seen, with no hard limit yet stopping models from doing what is expected of them.

Other AI makers and leaders take the opposite position: hallucination is an obstacle on the path to AGI. In their view, today's AI models still have too many holes and can get the answers to even obvious questions badly wrong.

The matter came to light when a lawyer for Anthropic had to apologise in court after using Claude to generate citations for a court filing. The chatbot hallucinated yet again, getting names and titles completely wrong.

Amodei's claim is hard to verify, because most hallucination benchmarks pit AI models against one another rather than against humans. Other leaders compare AI systems with each other, but none of them compare AI with humans. Amodei is doing exactly the opposite.

There are techniques that can reduce AI hallucination. If you use AI for search or for research, much depends on how you prompt it: give precise instructions, and let the model search the web so it can ground its answers in real sources rather than making them up.
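As one illustration of the prompting idea above, the grounding instructions can be assembled programmatically before being sent to a model. This is a minimal sketch only; the `grounded_prompt` helper and its exact wording are hypothetical, not part of any model provider's API:

```python
def grounded_prompt(question: str, allow_web_search: bool = True) -> str:
    """Wrap a user question with instructions that discourage hallucination.

    The wording here is illustrative; in practice you would tune it for
    the specific model you are using.
    """
    rules = [
        "Answer only from sources you can cite.",
        "If you are not sure, say 'I don't know' instead of guessing.",
    ]
    if allow_web_search:
        # Web access lets the model verify specifics instead of inventing them.
        rules.append("Use web search to verify names, titles, and citations.")
    instructions = "\n".join(f"- {rule}" for rule in rules)
    return f"Follow these rules:\n{instructions}\n\nQuestion: {question}"


prompt = grounded_prompt("Who argued the cited case, and in which court?")
```

The resulting string would then be sent as the user or system message in whatever chat API you use; the point is simply that explicit "cite or decline" instructions, plus permission to search, tend to cut down on fabricated specifics.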

Using this approach, many users have found that AI makes far fewer things up when they pay a little attention to how they prompt it. And, as Amodei pointed out, everyone makes mistakes: TV broadcasters, politicians, journalists, and people in all kinds of professions get things wrong in their speeches and writing all the time.

AI makes mistakes of much the same kind. That does not mean it is not intelligent enough or lacks capability. What stands between AI and humans is the approach: if humans approach AI in a better way, it will start giving much better results.
