Meta's chief AI scientist, Yann LeCun, said that large language models are “intrinsically unsafe” because they can only answer prompts correctly if they are trained on accurate data.

Large language models (LLMs), which power AI chatbots such as ChatGPT and Gemini and have garnered immense popularity recently, will never be able to match human intelligence when it comes to reasoning and planning, said Meta’s chief AI scientist Yann LeCun.
LeCun, in an interview with the Financial Times, said LLMs currently have a “very limited understanding of logic” and they “do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan… hierarchically.”

He also said that these language models are “intrinsically unsafe” because they can only answer prompts correctly if they are trained on accurate data.
His comments come after Tesla CEO Elon Musk expressed concern over AI chatbots taking over human jobs.
“Animals and humans get very smart very quickly with vastly smaller amounts of training data than current AI systems. Current LLMs are trained on text that would take 20,000 years for a human to read,” LeCun said.
He further said that these AI models cannot even be compared with animals. Although chatbots can answer prompts accurately based on their extensive training, they do not really comprehend language, LeCun explained.
The responses from these chatbots are simply outputs triggered by certain inputs in their training data; the models cannot apply what they have learned to new situations or think hierarchically, a trait essential to human-level intelligence.

Asked how AI can reach human-level intelligence, LeCun said that Meta’s Fundamental AI Research (Fair) lab, which has around 500 people, is currently working on a new AI system that can develop common sense and learn how the world around us works.
This approach, known as ‘world modelling’, could prove risky for Meta, as investors want quick returns on their AI investments.
LeCun believes that developing Artificial General Intelligence, or AGI, is not a design or technology development problem but a scientific one.
