Deflating the AI bubble
Angèle Christin, associate professor of communication and Stanford HAI senior research fellow
The billboards in San Francisco say it all: AI everywhere!!! For everything!!! All the time!!! The slightly manic tone of these ads gives a sense of the hopes, and the immense investments, placed in generative AI and AI agents.
So far, financial markets and large technology companies have doubled down on AI, pouring enormous sums of money and human capital into gargantuan computing infrastructure to support its growth and development. Yet there are already signs that AI may not accomplish everything we hope. There are also indications that, in some cases, AI can distract, deskill, and harm people. And some data shows that AI's current development carries enormous environmental costs.
I hope we see more realism about what we can expect from AI. AI is a fantastic tool for certain tasks and processes, and a problematic one for others (hello, students who generate final essays without doing the readings!). In many cases, the impact of AI is likely to be moderate: a boost in efficiency and creativity here, a little extra work and boredom there. I'm particularly excited to see more detailed empirical studies of what AI does and what it cannot do. The bubble may not burst, but it may not grow much more either.
A “ChatGPT moment” for AI in medicine
Curtis Langlotz, professor of radiology, medicine and biomedical data science, senior associate vice provost for research and Stanford HAI principal investigator
Until recently, developing medical AI models was extremely expensive, requiring training data labeled by well-paid medical experts (e.g. labeling a mammogram as benign or malignant). New self-supervised machine learning methods, now widely used by commercial chatbot developers, do not require labels and have significantly reduced the cost of training medical AI models.
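To make the cost contrast concrete, here is a minimal PyTorch sketch, not drawn from Langlotz's work: random tensors stand in for medical scans, and a masked-pixel reconstruction task stands in for the pretext tasks used in practice. The supervised route needs one expert label per image, while the self-supervised route trains on unlabeled images alone.

```python
# Toy sketch, assuming PyTorch; random tensors stand in for medical images.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(          # stand-in image encoder
    nn.Flatten(),
    nn.Linear(64 * 64, 256), nn.ReLU(),
    nn.Linear(256, 128),
)

images = torch.randn(32, 1, 64, 64)            # a batch of synthetic "scans"

# Supervised route: every image needs a costly expert label (benign/malignant).
expert_labels = torch.randint(0, 2, (32,))     # one expert judgment per image
clf_head = nn.Linear(128, 2)
supervised_loss = F.cross_entropy(clf_head(encoder(images)), expert_labels)

# Self-supervised route: the pretext task needs no labels, so any archive of
# unlabeled scans becomes training data. Here: reconstruct the hidden pixels.
decoder = nn.Linear(128, 64 * 64)
keep = (torch.rand_like(images) > 0.75).float()         # keep ~25% of pixels
recon = decoder(encoder(images * keep)).view_as(images)
ssl_loss = ((recon - images) ** 2 * (1 - keep)).mean()  # score hidden pixels only
```

In practice, the self-supervised encoder is pretrained on a large unlabeled archive and then fine-tuned on a much smaller expert-labeled set, which is where the cost reduction comes from.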
Medical AI researchers have been slower to gather the massive datasets needed to leverage self-supervision due to the need to maintain patient data privacy. But self-supervised learning from somewhat smaller datasets has shown promise in radiology, pathology, ophthalmology, dermatology, oncology, cardiology, and many other areas of biomedicine.
Many of us will remember the magical moment when we first discovered the incredible capabilities of chatbots trained with self-supervision. We will soon see a similar “ChatGPT moment” for AI in medicine, when AI models are trained on high-quality health data at a scale rivaling the data used to train chatbots. These new biomedical foundation models will improve the accuracy of medical AI systems and enable new tools to diagnose rare diseases for which training datasets are scarce.
