Scientists and AI leaders react to Matt Shumer’s viral essay

February 14, 2026

Scientists and business leaders are reacting to a viral essay warning about AI’s impact on jobs with a mix of agreement and skepticism.

The essay, titled “Something Big Is Coming” and written by OthersideAI co-founder and CEO Matt Shumer, had racked up more than 60 million views on X as of Thursday.

In the 5,000-word essay, Shumer said AI could disrupt everyday life on a “much larger” scale than COVID, a comparison that sparked strong reactions online. He wrote that the changes already underway in the technology sector are likely a preview of disruptions that could soon affect other industries as well.

“Even if there’s a 20% chance of this happening, people deserve to know and have time to prepare,” Shumer told Business Insider’s Brent Griffiths in an interview.

Here’s what some of the sharpest minds in AI say about Shumer’s essay.

David Haber

Haber, a general partner at Andreessen Horowitz, a venture capital firm focused on technology investments, posted on X that Shumer’s essay contains “great advice on how to get ahead in your job at any big company right now.”

“The person who says ‘I used AI to do this analysis in an hour instead of three days’ will be the most valuable person in the room. Not eventually. Right now,” Haber quoted from the essay. “Learn these tools. Become proficient. Demonstrate what’s possible.”

Alexis Ohanian

The Reddit co-founder responded to Shumer’s original post on X with a simple comment: “Great article. Totally agree.”

Since 2023, Reddit has introduced a range of AI-powered tools, from search features that summarize user discussions to AI that refines content recommendations and ad targeting, but Ohanian has recently stressed that the platform needs to retain its humanity to remain competitive.

Eric Markowitz

Markowitz, an author who is also managing partner and director of research at Nightview Capital, a long-term investment firm, responded to Shumer with an essay of nearly equal length, one that criticized the habit of chasing speed and replacing human value simply because it is possible.

“These two worlds – Wall Street and Silicon Valley — have formed a feedback loop of short-termism so tight, so self-reinforcing, that they have confused efficiency with purpose, growth with meaning, and elimination of people with progress,” Markowitz wrote.

“I have two research assistants. Could I replace them with AI? Sure. But their value exceeds their weekly output,” Markowitz added. “They give meaning to my work, and I love seeing the enthusiasm on their faces when they make a new discovery that I could not have found on my own.”

“Let me repeat: we are not our tools. We never have been,” Markowitz wrote in conclusion.

Todd McLees

McLees, the founder of HumanSkills.AI, also weighed in.

“As AI becomes more capable, our role in setting direction, values, and purpose becomes increasingly critical,” McLees said.

“What do you bring to the table when the machine can do the job? That’s the only question that matters when intelligence is abundant,” McLees added. “Shumer sounded the alarm. That’s a good thing. But alarms don’t tell you where to go. You have to find that within yourself.”

Gary Marcus

Marcus, professor emeritus of psychology and neural science at NYU and founder of the AI company Robust.AI, had harsh words for Shumer in his newsletter.

Marcus called Shumer’s blog post “weaponized hype, filled with vivid narrative and marketing talk,” and said Shumer did not provide real data to support the claim that the latest AI can write complicated applications without errors.

“Shumer’s presentation is completely one-sided, omitting many concerns that have been widely expressed here and elsewhere,” Marcus added, after discussing various studies that question the accuracy and productivity gains that AI tools actually provide.

Vishal Misra

Misra, vice dean of computer science and artificial intelligence at Columbia University, responded in a lengthy article on Substack explaining why he doesn’t think AI is as scary as it seems, at least not yet.

Misra wrote that many of the strange AI behaviors that make models appear sentient, such as perceived resistance and self-preservation, are simply the result of their training data.

As for possible job cuts, Misra said he understands the anxiety, but history suggests we may not need to panic.

“When the camera was invented, portrait painters had every reason to panic. Their livelihood depended on a skill that a machine could now approximate,” Misra wrote.

“What happened? Painters did not disappear. They were freed from the obligation to faithfully reproduce reality and ventured into impressionism, cubism, and abstract expressionism,” Misra added. “The camera didn’t kill painting. It freed it.”
