by Willis Annison

OpenAI is reportedly planning to introduce paid advertising within ChatGPT, with initial testing expected in the United States. According to the information available so far, ads will appear on the free tier and the newly introduced Go tier. Users will be able to dismiss ads and explain why, feedback that is expected to improve ad relevance for those who keep personalisation enabled.

This development has been anticipated for some time, and OpenAI has clearly given significant thought to how advertising will be implemented within the platform. According to OpenAI’s published Advertising Principles, advertisements will not influence ChatGPT’s responses and will be displayed independently of the model’s outputs. OpenAI has also stated that it will not sell user data to advertisers. Given CEO Sam Altman’s long-standing criticism of advertising models such as Google’s, this approach reflects a deliberate effort to prioritise user experience and trust over short-term revenue generation.

That said, the introduction of ads does raise ethical considerations in certain contexts. In transactional or intent-driven use cases – such as researching travel options or shopping for products – advertising can feel both relevant and useful, particularly when implemented transparently and responsibly. In these scenarios, ads may enhance the user experience rather than detract from it.

However, potential issues arise for users who engage with ChatGPT more frequently or more personally. A growing number of individuals use the platform to discuss sensitive subjects, seek emotional support, or offload personal concerns. Unlike traditional search engines, ChatGPT can be exposed to highly intimate emotional context over time. In such cases, displaying ads for paid services based on inferred emotional states could be perceived as manipulative. This concern mirrors longstanding criticism of advertising practices that target individuals during moments of crisis – such as payday loan advertising – which are widely viewed as ethically problematic.

There is also a counterargument to consider. If users are turning to ChatGPT for issues that would benefit from professional intervention, relevant advertising could serve as a helpful prompt. For example, directing someone experiencing mental health challenges to a legitimate support service could encourage them to seek appropriate care. While the outcome may be beneficial, this nonetheless raises questions about consent and vulnerability. Even well-intentioned advertising risks blurring the line between assistance and exploitation when users are in a distressed state.

Ultimately, while advertising within ChatGPT may offer practical benefits and help sustain free access to the platform, its ethical acceptability depends heavily on execution. OpenAI will need to enforce strict safeguards to ensure ads are not displayed in sensitive contexts and do not take advantage of users during vulnerable moments.

Clear boundaries, transparency, and strong protections will be essential to maintaining user trust as the platform evolves.