AI in Technology

A more powerful AI is coming. Academia and industry must oversee it – together

December 6, 2024 · 5 Mins Read
Close-up portrait of Sam Altman with a screen reading OpenAI behind him

Sam Altman of OpenAI, the company that developed ChatGPT, says machines will create superintelligence. Research is needed to verify these and other claims. Credit: Jason Redmond/AFP/Getty

“It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I am confident we will get there.”

This is what Sam Altman, CEO of OpenAI, a technology company based in San Francisco, California, wrote on September 23, less than two weeks after the company behind ChatGPT released o1, its most advanced large language model (LLM) to date. Once confined to the realm of science fiction, the question of when we might create artificial general intelligence (AGI) has been made newly relevant by the rise of LLMs in recent years. Although it lacks a precise definition, AGI broadly refers to an AI system capable of human-level reasoning, generalization, planning, and autonomy.

Policymakers around the world are asking questions about AGI, including about its benefits and risks. These questions are not easy to answer, especially because much of the work takes place in the private sector, where studies are not always published openly. What is clear is that AI companies are working to equip their systems with the full range of cognitive abilities that humans enjoy. Companies developing AI models also have a strong incentive to promote the idea that AGI is near, in order to attract interest and therefore investment.

There was consensus among the researchers who spoke to Nature for a news article published this week (see Nature 636, 22–25; 2024) that large language models, such as o1, Google’s Gemini, and Claude, made by San Francisco-based Anthropic, have not yet achieved AGI. And, drawing on lessons from neuroscience, many argue that there are good reasons to think that LLMs on their own never will, and that another technology will be needed for AI to reach human-level intelligence.

Despite the breadth of their capabilities – from generating computer code to summarizing academic papers and answering mathematical questions – the most powerful LLMs have fundamental limitations in how they operate: essentially, they devour a mass of data and use it to predict the next “token” in a sequence. This produces plausible answers to a problem, rather than actually solving it.
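To make the next-token point concrete, here is a minimal, purely illustrative sketch of how such prediction works: a toy bigram model that only ever learns which token tends to follow which, then greedily emits the most plausible continuation. The corpus, the ToyBigramLM class, and its methods are invented for this example and merely stand in for the learned distribution of a real LLM.

```python
from collections import Counter, defaultdict

# Purely illustrative: a toy bigram "language model" that, like an LLM,
# is trained only to predict the next token from the tokens seen so far.
class ToyBigramLM:
    def __init__(self, corpus: str):
        tokens = corpus.split()
        self.next_counts = defaultdict(Counter)
        for current, following in zip(tokens, tokens[1:]):
            self.next_counts[current][following] += 1  # count what follows what

    def next_token(self, token: str):
        counts = self.next_counts.get(token)
        if not counts:
            return None
        # Greedy decoding: emit the statistically most plausible continuation,
        # not necessarily a correct answer to any underlying problem.
        return counts.most_common(1)[0][0]

    def generate(self, prompt: str, max_new_tokens: int = 10) -> str:
        out = prompt.split()
        for _ in range(max_new_tokens):
            nxt = self.next_token(out[-1])
            if nxt is None:
                break
            out.append(nxt)
        return " ".join(out)

lm = ToyBigramLM("the cat sat on the mat and the dog slept on the rug")
# Prints a plausible-sounding continuation of the prompt, whatever the statistics favour.
print(lm.generate("the cat"))
```

Real LLMs replace the bigram table with a neural network trained on vast corpora, but the training objective is the same: predict the next token. Nothing in that objective requires the model to actually solve the underlying problem, which is the limitation described above.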

François Chollet, a former software engineer at Google based in Mountain View, California, and Subbarao Kambhampati, a computer scientist at Arizona State University in Tempe, tested o1’s performance on tasks that require abstract reasoning and planning, and found that it fell short of AGI. For AGI to be achieved, some researchers think, AI systems would need consistent “world models”, or representations of their environment, that they could use to test hypotheses, reason, plan, and generalize knowledge acquired in one area to potentially unlimited other situations.

This is where ideas from neuroscience and cognitive science could power the next advances. Yoshua Bengio’s team at the University of Montreal, Canada, for example, is exploring alternative AI architectures that would better support the construction of coherent world models and the ability to reason using them.

Some researchers say the next advances in AI could come not from the biggest systems, but from smaller, more energy-efficient AI. Smarter systems of the future might also require less data to train if they had the ability to decide which aspects of their environment to sample, rather than simply ingesting whatever they’re fed, says Karl Friston, a theoretical neuroscientist at University College London.
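One way to read this in machine-learning terms is active learning, in which a model scores candidate inputs by its own uncertainty and only ingests the most informative ones. The sketch below illustrates that general idea, not Friston’s actual framework; the synthetic data, the entropy-based selection rule, and the budget of 20 queries per round are arbitrary choices made for the example (the model API follows scikit-learn conventions).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Per-sample entropy of the predicted class distribution; higher means less certain."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

# Synthetic "environment": 2,000 observations with a hidden labelling rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

labelled = list(range(40))        # small starting pool the model has already seen
pool = list(range(40, len(X)))    # everything else it could choose to sample

model = LogisticRegression()
for _ in range(10):
    model.fit(X[labelled], y[labelled])
    probs = model.predict_proba(X[pool])
    # Instead of ingesting everything it is fed, the model queries only the
    # 20 samples it is currently most uncertain about.
    picked = [pool[i] for i in np.argsort(-predictive_entropy(probs))[:20]]
    labelled.extend(picked)
    pool = [i for i in pool if i not in set(picked)]

model.fit(X[labelled], y[labelled])
print(f"trained on {len(labelled)} of {len(X)} samples; "
      f"accuracy on the untouched rest: {model.score(X[pool], y[pool]):.2f}")
```

With this kind of selective sampling, a model can often reach comparable accuracy while labelling only a small fraction of the available data, which is the flavour of data efficiency the paragraph above alludes to.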

Such work demonstrates that researchers from many fields need to be involved in the development of AI. Their involvement will be necessary to verify what the systems are actually capable of, to ensure that they live up to technology companies’ claims, and to identify the advances needed for further progress. At present, however, access to leading AI systems can be difficult for researchers who do not work at companies that can afford the large numbers of graphics processing units (GPUs) needed to train them (A. Khandelwal et al. Preprint on arXiv https://doi.org/nt67; 2024).

To give an idea of the scale of activity: in 2021, US government agencies (excluding the Department of Defense) allocated $1.5 billion to AI research and development, and the European Commission spends about €1 billion ($1.05 billion) per year. By contrast, companies around the world spent more than $340 billion on AI research in 2021 (N. Ahmed et al. Science 379, 884–886; 2023). There are ways that governments could fund AI research on a larger scale, for example by pooling resources. The Confederation of Laboratories for Artificial Intelligence Research in Europe, a non-profit organization based in The Hague, the Netherlands, has suggested creating a “CERN for AI” capable of attracting the same level of talent as AI companies, and thus creating a cutting-edge research environment.

It’s difficult to predict when AGI might arrive – estimates range from a few years to a decade or more. But further big advances in AI are sure to come, and many of them will probably come from industry, given the scale of investment. To ensure that these advances are beneficial, technology companies’ research must be checked against the best current understanding of what constitutes human intelligence, drawing on neuroscience, cognitive science, social science, and other relevant fields. Such publicly funded research is expected to play a key role in the development of AGI.

Humanity must harness all its knowledge to ensure that applications of AI research are robust and its risks are mitigated as much as possible. Governments, businesses, research funders and researchers must recognize their complementary strengths. If they don’t, information that could help improve AI will be missed – and the resulting systems are likely to be unpredictable and therefore dangerous.
