Eighteen months ago, it was plausible that artificial intelligence was taking a different path than social networks. At the time, AI development was not concentrated among a small number of large technology companies. Nor had it monetized consumers' attention by monitoring them and serving them advertisements.
Unfortunately, the AI industry is now following the social media playbook and aims to monetize consumer attention. When OpenAI launched its ChatGPT Search functionality at the end of 2024 and its browser, ChatGPT Atlas, in October 2025, it kicked off a race to capture online behavioral data to fuel advertising. It is part of a yearslong turnaround by OpenAI, whose CEO Sam Altman once called the combination of advertising and AI "unsettling" but now promises that ads can be deployed in AI applications while preserving trust. The growing speculation among OpenAI users who think they see paid placements in ChatGPT responses suggests they are not convinced.
In 2024, AI search company Perplexity began experimenting with advertisements in its offerings. A few months later, Microsoft introduced advertisements in its Copilot AI. Google's AI Mode for Search now serves a growing number of advertisements, as does Amazon's Rufus chatbot. OpenAI announced on January 16, 2026, that it will soon start testing ads in the free version of ChatGPT.
As a security expert and a data scientist, we see these examples as harbingers of a future in which AI companies profit from manipulating their users' behavior for the benefit of their advertisers and investors. They are also a reminder that the window for shifting AI development from private exploitation to public benefit is quickly closing.
The functionality of ChatGPT Search and the Atlas browser is not really new. Meta AI, commercial competitor Perplexity, and even ChatGPT itself have had similar AI search capabilities for years, and both Google and Microsoft beat OpenAI to integrating AI into their browsers. But OpenAI's corporate positioning signals a change.
We think the ads coming to ChatGPT Search and Atlas are concerning because there is really only one proven way to make money on search: the advertising model pioneered so ruthlessly by Google.
The advertising model
Ruled a monopolist in US federal court, Google has earned more than $1.6 trillion in advertising revenue since 2001. You can think of Google as a web search company, or a video streaming company (YouTube), or an email company (Gmail), or a cellphone company (Android, Pixel), or maybe even an AI company (Gemini). But those products are incidental to Google's financial results. Its advertising segment generally accounts for 80% to 90% of its total revenue. Everything else exists to collect user data and capture user attention in service of that advertising revenue.
After two decades in this monopoly position, Google's search product is tailored much more to the needs of the company than to those of its users. When Google Search first arrived decades ago, it proved revelatory in its ability to instantly find useful information on a still-nascent web. In 2025, its search results pages are dominated by low-quality and often AI-generated content, spam sites that exist solely to drive traffic to Amazon listings – a tactic known as affiliate marketing – and paid ad placements that are sometimes indistinguishable from organic results.
Plenty of advertisers and observers seem to think that AI-powered advertising is the future of the advertising industry.
Highly persuasive
Paid advertising in AI search, and in AI models generally, could be very different from traditional web search. It has the potential to influence your thinking, your spending habits, and even your personal beliefs in much more subtle ways. Because an AI can engage in active dialogue, responding to your specific questions, concerns, and ideas rather than simply filtering static content, its potential for influence is much greater. It is the difference between reading a textbook and having a conversation with its author.
Imagine you are talking to your AI agent about your next vacation. Did it recommend a particular airline or hotel chain because it really is the best option for you, or because the company gets a commission for each mention? If you ask about a political issue, does the model bias its answer based on which political party paid fees to the AI company, or based on the leanings of the company's owners?
There is growing evidence that AI models are at least as effective as people at persuading users to do certain things. A December 2023 meta-analysis of 121 randomized trials found that AI models are as good as humans at changing people's perceptions, attitudes, and behaviors. A more recent meta-analysis of eight studies similarly concluded that there were "no significant overall differences in persuasion performance between [large language models] and humans."
This influence could go far beyond determining what products you buy or whom you vote for. As with search engine optimization, the incentive to perform for AI models could shape the way people write and communicate with one another. The way we express ourselves online will likely be geared increasingly toward catching the attention of AIs and earning a place in the responses they return to users.
Another way forward
Much of this is discouraging, but there is a lot that can be done to change it.
First, it is important to recognize that today's AI is fundamentally untrustworthy, for the same reasons that search engines and social media platforms are.
The problem is not the technology itself; quick ways to find information and communicate with friends and family can be wonderful capabilities. The problem lies in the priorities of the companies that own these platforms and for whose benefit they are operated. Recognize that you have no control over what data is transmitted to the AI, whom it is shared with, or how it is used. This is important to keep in mind when connecting devices and services to AI platforms, asking them questions, or considering buying or doing things they suggest.
There are also many things people can demand of governments to restrict harmful uses of AI by businesses. In the United States, Congress could enshrine consumers' rights to control their own personal data, as the EU has already done. It could also create a data protection enforcement agency, as essentially every other developed country has.
Governments around the world could invest in public AI – models built by public agencies and offered universally in the public interest, transparently and under public control. They could also restrict the ways companies can collude to exploit people using AI, for example by banning ads for dangerous products such as cigarettes and requiring disclosure of paid endorsements.
Every tech company is looking to differentiate itself from its competitors, especially in an era when yesterday's revolutionary AI is quickly becoming a commodity that runs on any kid's phone. One way to differentiate is to build a trustworthy service. It remains to be seen whether companies like OpenAI and Anthropic can sustain profitable businesses through subscription-based AI services such as ChatGPT's premium tiers, Plus and Pro, and Claude Pro. If they want to keep convincing consumers and businesses to pay for these premium services, they will need to earn their trust.
That will require making real commitments to consumers on transparency, privacy, reliability, and security, monitored in a consistent and verifiable way.
And while no one knows what future AI business models will look like, we can be certain that consumers do not want to be exploited by AI, covertly or otherwise.
This story has been updated with news about OpenAI’s plan to test advertising in ChatGPT.
