Meta's chief AI scientist, Yann LeCun, said that large language models are “intrinsically unsafe” because they can only answer prompts correctly if they are trained on accurate data.
Large language models (LLMs), which power AI chatbots such as ChatGPT and Gemini and have garnered immense popularity recently, will never match human intelligence when it comes to reasoning and planning, said Meta’s chief AI scientist Yann LeCun.
LeCun, in an interview with the Financial Times, said LLMs currently have a “very limited understanding of logic” and they “do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan… hierarchically.”
He also said that these language models are “intrinsically unsafe” because they can only answer prompts correctly if they are trained on accurate data.
His comments come after Tesla CEO Elon Musk expressed concern over AI chatbots taking over human jobs.
“Animals and humans get very smart very quickly with vastly smaller amounts of training data than current AI systems. Current LLMs are trained on text that would take 20,000 years for a human to read,” LeCun said.
He further explained that these AI models cannot even be compared with animals. Although the chatbots can answer prompts accurately based on their extensive training, they do not really comprehend language, LeCun said.
The responses from these chatbots are simply outputs triggered by certain inputs in the data; they cannot apply what they have learned to new situations or think hierarchically, a trait essential to human-level intelligence.
When asked how AI could reach human-level intelligence, LeCun said that Meta’s Fundamental AI Research (Fair) lab, which has around 500 people, is currently working on a new AI system that can develop common sense and learn how the world around us works.
This approach, known as ‘world modelling’, can prove risky for Meta as investors want quick returns on their AI investments.
LeCun believes that developing Artificial General Intelligence, or AGI, is not a design or technology development problem but a scientific one.