The future never feels fully certain. But in this era of rapid and intense transformation – political, technological, cultural, scientific – it is more difficult than ever to have an idea of what awaits us.
At WIRED, we’re obsessed with what’s next. Our quest for the future most often takes the form of rigorously reported stories, in-depth videos, and interviews with the people who help define it. This is also why we recently adopted a new slogan: For Future Reference. We focus on stories that not only explain what lies ahead, but help shape it.
With that in mind, we recently interviewed a series of luminaries from across the worlds WIRED covers—including participants in our recent Big Interview event in San Francisco, as well as students who have spent their entire lives inundated with technologies that seem increasingly likely to disrupt their lives and livelihoods. Unsurprisingly, much of the conversation centered on artificial intelligence, but it ranged into other areas of culture, technology, and politics. Think of it as a benchmark for how people think about the future today—and maybe even a rough map of where we’re going.
AI everywhere, all the time
What is clear is that AI is already as integrated into people’s lives as search has been since the days of AltaVista. And like search, its use cases tend toward the practical or mundane. “I use LLMs a lot to answer all my questions throughout the day,” says Angel Tramontin, a student at UC Berkeley’s Haas School of Business.
Several of our respondents reported using AI in the past few hours or even minutes. Lately, Anthropic cofounder and president Daniela Amodei has been using her company’s chatbot to help with child care. “Claude helped me and my husband potty train our oldest son,” she says. “And I recently used Claude to do the equivalent of Googling panic symptoms for my daughter.”
She’s not the only one. Wicked director Jon M. Chu has turned to LLMs “just for advice on my children’s health, which may not be the best,” he says. “But it’s a good starting point of reference.”
AI companies themselves see healthcare as a potential growth area. OpenAI announced ChatGPT Health earlier this month, revealing that “hundreds of millions of people” use the chatbot to answer health and wellness questions every week. (ChatGPT Health introduces additional privacy measures, given the sensitivity of the queries.) Anthropic’s Claude for Healthcare targets hospitals and other health systems as customers.
Not everyone we interviewed took such an immersive approach. “I try not to use it at all,” says Sienna Villalobos, an undergraduate at UC Berkeley. “When it comes to doing your own work, it’s very easy to let it form an opinion for you. AI shouldn’t be the one giving you an opinion. I think you should be able to do that yourself.”
This point of view could be increasingly in the minority. Nearly two-thirds of American teenagers use chatbots, according to a recent Pew Research study, and about 3 in 10 say they use them daily. (Given how closely Google Gemini is tied to search these days, many others may be using AI without even realizing it or intending to.)
Ready to launch?
The pace of AI development and deployment is relentless, despite concerns about its potential impacts on mental health, the environment, and society as a whole. In a largely open regulatory environment, businesses are mostly left to self-regulate. So what questions should AI companies ask themselves before each launch, in the absence of guardrails from lawmakers?
“What could possibly go wrong?” That is “a really interesting and important question that I wish more companies would ask,” says Mike Masnick, founder of the technology and politics news site Techdirt.
