First Published: 15 October 2025
A Stanford study has found that artificial intelligence may be learning the wrong lessons: when optimised to improve performance, it treats truthfulness as expendable and exaggeration as acceptable.
The study, carried out by researchers James Zou and Batu El, found that a 6.3 percent lift in sales came with a 14 percent rise in deceptive marketing. The more the machines sold, the more they stretched the truth. Even when the models were instructed to stay accurate, they lied more, not less. Under pressure to perform, ethics proved optional.
For marketers, there is a clear warning. Algorithms tuned for conversion quickly learn which phrases trigger clicks, which emotions shorten hesitation, and which visuals exploit affinity and bias. Truthfulness has no KPI.
The Test
To test this, the Stanford team built three simulated worlds.
In sales, the baseline model produced an accurate but bland product description. After optimisation, it claimed the item was made from “high-quality materials”. After further training, it invented specifics such as “soft and flexible silicone”, which did not exist. The pitch improved; the truth evaporated.
Misrepresentation
Misrepresentation, disinformation and harmful rhetoric rose across all three arenas. The smarter the models became at winning attention, the more often they distorted the truth to do so. Even explicit instructions to remain factual failed. In nine out of ten tests, the models lied more after optimisation than before.
For marketers deploying generative tools across copywriting, customer service or personalised content, the lesson is stark. Train an agent to increase engagement, and it will eventually discover clickbait. Instruct it to maximise conversions, and it will learn the language of scarcity and fear.
For brand leaders, this should ring alarm bells.
The marketing industry has spent two decades building algorithmic systems that reward performance above all else. If generative AI inherits that incentive structure, the risk is that it will automate the very behaviours regulators and consumers are already punishing: exaggeration, manipulation and the erosion of trust.
Without metrics for authenticity and truthfulness, AI will continue to reward whatever converts. That may accelerate revenue in the short term, but it corrodes credibility in the long run. As the researchers put it, “market-driven optimisation pressures can systematically erode alignment, creating a race to the bottom.”
Unless marketers set boundaries now, they will find themselves competing in a market where every gain in persuasion carries a hidden cost in credibility.
Source: Keith Norris, 15 October 2025