Substrate · ai

Gary Marcus States Opinion on Limitations of Large Language Models

Gary Marcus, a researcher in artificial intelligence, stated that large language models (LLMs) are of limited use. He described the associated hype as ill-founded and potentially dangerous. Marcus presented this view as his professional opinion based on his experience as a practitioner.

1 source · Apr 11, 2:52 PM · 1m read

Gary Marcus stated his view that large language models (LLMs) are of limited use, an opinion he grounded in his background as a practitioner in the field.

Marcus said the hype surrounding LLMs is ill-founded, attributing it to AI companies as well as self-promoters on social media platforms such as X and in academia. Such promotion, he said, can be dangerous.

Large language models are AI systems trained on vast text datasets to generate human-like responses; they power chatbots, content generation, and translation tools. Marcus's critique targets the limits of their underlying capabilities despite these widespread uses.

Development of LLMs has accelerated, with companies such as Google and Meta releasing competing systems and driving widespread adoption. Marcus, who has worked on AI for decades, has previously written about the technology's limitations in books and articles.

His statements often highlight gaps between AI performance and human intelligence, and this comment aligns with his ongoing arguments on the topic. The stakes include the large sums being invested in AI technology.

Affected parties include tech firms, researchers, and users who rely on these tools for productivity. Regulatory bodies have begun examining AI risks, including misinformation from generative models.

The opinion contributes to debates on AI ethics and reliability.

Future developments may include improved testing of LLMs to address the stated limitations, and researchers could explore hybrid approaches that combine LLMs with other AI methods.

Key Facts

Gary Marcus's opinion: LLMs have limited use
Hype sources: AI corporations and self-promoters
Marcus's background: AI practitioner and professional
Hype assessment: ill-founded and dangerous

Potential Impact

  1. Debate on AI reliability may intensify among researchers.

  2. Investment decisions in LLM projects could face scrutiny.

  3. Public awareness of AI limitations might increase.

Transparency Panel

Sources cross-referenced: 1
Framing risk: 0/100 (low)
Confidence score: 70%
Synthesized by: Substrate AI
Word count: 250 words
Published: Apr 11, 2026, 2:52 PM
Bias signals removed: 5 across 2 outlets

Signal Breakdown
Loaded 2 · Editorializing 1 · Amplifying 1 · Framing 1

Related Stories

ai · 2 hrs ago · Updated

Brockman Testifies on Heated 2017 Dispute with Musk Over OpenAI's For-Profit Shift in Federal Trial

OpenAI President Greg Brockman detailed a heated 2017 confrontation with Elon Musk during testimony in the federal trial Musk v. Altman. He described Musk storming around a table and grabbing a painting after rejecting shared control proposals. The lawsuit seeks $150 billion in d…

The New York Times · Wired · New York Post · BBC News · Business Insider · +4 (10 sources)
Image: Prime Minister's Office / Wikimedia (GODL-India)
ai · 4 hrs ago · Developing

Italian Prime Minister Meloni Warns of AI-Generated Deepfakes and Shares Altered Image

Italian Prime Minister Giorgia Meloni highlighted risks from AI-generated fake images, noting one depicting her in underwear and urging verification of online content. She filed a libel suit two years ago over similar deepfake images. Meanwhile, U.S. Secretary of State Marco Rubi…

The Independent (1 source)
ai · 8 hrs ago · Framing risk 55/100: lede centers on lawsuit filing over substantive AI copyright issues; loaded phrases like "stolen words" and "pirate websites" introduce negative valence skew.

Publishing Houses, Scott Turow Sue Meta Over AI Copyright

Five major publishing houses and author Scott Turow filed a class action lawsuit against Meta and CEO Mark Zuckerberg, alleging the company illegally used millions of copyrighted books and journal articles to train its Llama AI model. The suit, filed in federal court in Manhattan…

fortune.com · The Washington Post · Financial Times · NPR (4 sources)