Gary Marcus States Opinion on Limitations of Large Language Models
Gary Marcus, an artificial intelligence researcher, said that large language models (LLMs) are of limited use and described the hype surrounding them as ill-founded and potentially dangerous. He framed the assessment as a professional opinion drawn from his experience as a practitioner in the field.
Marcus said the hype surrounding LLMs is ill-founded, attributing it to AI companies as well as to self-promoters on social media platforms such as X and in academia. Such promotion, he argued, can be dangerous.
Large language models are AI systems trained on vast text datasets to generate human-like responses. They power applications in chatbots, content generation, and translation. Marcus's critique targets the models' underlying capabilities rather than these applications.
LLM development has accelerated as companies such as Google and Meta have released competing systems, leading to widespread adoption. Marcus, who has worked on AI for decades, has previously written about the technology's limitations in books and articles.
His statements often highlight gaps between AI performance and human intelligence, and this latest comment continues that line of argument. At stake is the substantial investment flowing into AI technology.
Affected parties include tech firms, researchers, and users who rely on these tools for productivity. Regulatory bodies are beginning to examine AI risks, including misinformation produced by generative models.
The opinion contributes to debates on AI ethics and reliability.
Future developments may include improved testing for LLMs to address stated limitations. Researchers could explore hybrid approaches combining LLMs with other AI methods for better outcomes.
Potential Impact
1. Debate on AI reliability may intensify among researchers.
2. Investment decisions in LLM projects could face scrutiny.
3. Public awareness of AI limitations might increase.
Related Stories
naturalnews.com — Brockman Testifies on Heated 2017 Dispute with Musk Over OpenAI's For-Profit Shift in Federal Trial
OpenAI President Greg Brockman detailed a heated 2017 confrontation with Elon Musk during testimony in the federal trial Musk v. Altman. He described Musk storming around a table and grabbing a painting after rejecting shared control proposals. The lawsuit seeks $150 billion in d…
Italian Prime Minister Meloni Warns of AI-Generated Deepfakes and Shares Altered Image
Italian Prime Minister Giorgia Meloni highlighted risks from AI-generated fake images, noting one depicting her in underwear and urging verification of online content. She filed a libel suit two years ago over similar deepfake images. Meanwhile, U.S. Secretary of State Marco Rubi…
thenation.com — Publishing Houses, Scott Turow Sue Meta Over AI Copyright
Five major publishing houses and author Scott Turow filed a class action lawsuit against Meta and CEO Mark Zuckerberg, alleging the company illegally used millions of copyrighted books and journal articles to train its Llama AI model. The suit, filed in federal court in Manhattan…