While artificial intelligence continues to make a significant impact across industries, including notable developments in the cryptocurrency market, large AI language models such as ChatGPT and Claude are not expected to reach human-level intelligence anytime soon. At least not according to Meta's chief AI scientist, Yann LeCun.
What’s Happening in the Field of Artificial Intelligence?
In a recent interview with Time Magazine, LeCun discussed artificial general intelligence (AGI), a loosely defined term for a theoretical AI system capable of performing any task given the right resources.
Although there is no scientific consensus on what a system must be able to do to be considered AGI, Meta CEO and founder Mark Zuckerberg caused a stir when he recently announced that Meta is moving towards developing AGI. In a recent interview with The Verge, Zuckerberg stated:
“We’ve come to the view that we need to build general intelligence to create the products we want to build.”
Significant Statements from a Renowned Figure
LeCun seems to disagree, at least semantically, with Zuckerberg. Speaking to Time, LeCun expressed his dislike for the term AGI, preferring to call it human-level artificial intelligence and pointing out that even humans do not have general intelligence. Regarding Large Language Models (LLMs) like Meta's Llama-2, OpenAI's ChatGPT, and Google's Gemini, LeCun believes they are not even close to the intelligence of a cat, let alone human intelligence:
“Things we take for granted become extremely complex for computers to replicate. Therefore, AGI or human-level artificial intelligence is not coming soon and will require significant perceptual changes.”
LeCun also weighed in on ongoing debates about whether open-source AI systems like Meta's Llama-2 pose a threat to humanity, categorically rejecting the idea that AI poses a significant danger. When asked what would happen if someone with a desire to dominate programmed that intent into an AI, LeCun argued that if such a malicious AI ever existed, smarter and better AI systems would defeat it.