The way people decide what to buy has changed. Instead of piecing together recommendations from search engines, social media, and friends, more people are turning to large language models (LLMs) like ChatGPT or Claude as their starting point for product research.
Aanchal Gupta, an Executive MBA student at London-based Bayes Business School, spent six months researching how LLMs are changing consumer behavior. Her finding: LLMs aren’t just a new search tool. They’re shifting who people trust when they make purchase decisions.
From days to 40 minutes
The biggest pull toward LLMs isn’t curiosity or novelty. It’s cognitive offloading: the model does the aggregation work. Instead of visiting five product sites, each with a different layout and its own framing of why you should buy their thing, users run a few prompts and get a synthesized answer. Aanchal’s participants reduced skincare research time from several days down to 30–40 minutes.
The comparison table kept coming up in her interviews. Participants loved asking an LLM to compare three products across price, ingredients, and reviews and getting a clean, readable table back. E-commerce sites offer filtering and sorting, but those tools only work on that platform’s inventory. LLMs pull from across the web (reviews, research papers, blogs, retail sites) and synthesize them around the specific question you’re asking.
Trust builds slow and breaks fast
The stickier issue is trust. Aanchal calls it “strong but fragile.” Once a user gets a recommendation that works, especially one that matches a product they already know, the model earns credibility. Use it enough times and it becomes the first stop for any product search, not a fallback.
Broken links, irrelevant responses, or answers that lose context send people straight back to Google. A single bad experience can undo weeks of trust-building.
Age shaped how much patience participants extended before that happened. Younger participants wanted to run the full purchase journey inside the LLM: research, comparison, and checkout. Some were waiting for in-app checkout so they wouldn’t have to leave at all.
Older participants were less likely to have used LLMs for skincare at all. Several tried it for the first time during Aanchal’s interviews and came away surprised by how specific and actionable the recommendations were. The divide shows up most sharply at the point of purchase: both groups browse and research with LLMs, but completing a transaction through one is still a younger-user behavior.
What brands need to change
Almost none of Aanchal’s participants visited a brand website directly during their research. Yet all of them said the site needed to exist. The brand website isn’t a destination anymore. It’s a source of credibility, and LLMs extract from it. That means sites need to carry more information than they were designed to hold: ingredient proof points, product limitations, who the product is actually for, and the reasoning behind each claim. Content written for humans to skim doesn’t give an LLM enough to work with when someone asks a specific question about their skin type.
Aanchal draws a line between optimizing structured data for LLM visibility and actually doing the harder work of understanding what consumers ask AI when they’re in your product category. The brands willing to do the second thing will show up better in the model’s output, not because they’ve gamed a system, but because they’ve given it something real to work with.
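To make the "structured data" half concrete: one common approach is embedding schema.org JSON-LD markup in product pages so the information LLMs extract (ingredients, audience, limitations) is explicit rather than buried in marketing copy. The source doesn't name a specific vocabulary, so schema.org and every product detail below are illustrative assumptions, not her recommendations:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Hydrating Serum",
  "description": "Fragrance-free serum for dry, sensitive skin. Not formulated for oily or acne-prone skin.",
  "brand": { "@type": "Brand", "name": "Example Brand" },
  "additionalProperty": [
    {
      "@type": "PropertyValue",
      "name": "Key ingredient",
      "value": "2% hyaluronic acid"
    },
    {
      "@type": "PropertyValue",
      "name": "Intended skin type",
      "value": "Dry, sensitive"
    }
  ],
  "offers": {
    "@type": "Offer",
    "price": "24.00",
    "priceCurrency": "GBP"
  }
}
```

Markup like this handles visibility. The harder second step she describes is editorial: writing the description, limitation, and audience fields so they actually answer the questions people put to an LLM, such as "is this safe for sensitive skin alongside retinol?"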
For now, most of us are somewhere in the middle: using LLMs for some searches, still Googling for others, and slowly delegating more product research to something that remembers we prefer a three-step skincare routine.
