Too many reviews to read leads to overload & doubt

This page is part of a global project to create a better online review system. If you want to know more or give your feedback, write to [email protected] and we’ll grab a beer ;)

Due to the limitations of ratings (unclear scale, nuances, categories…), and the unreliable nature of average ratings (volatility, not all reviews being equal, unrepresentative sample…), users rely heavily on written reviews for genuine insights.

The problem is that users don’t have time to read them all.

Consumers are unlikely to read every single review because online marketplaces and review platforms can accumulate vast numbers of them. Additionally, these reviews often contain a lot of redundant information (Ma and Wei, 2012¹), which can discourage users from searching further.

It's not just about the time commitment: an overload of information can overwhelm consumers and lead to poor decision-making (Park and Lee, 2008²).

Potential customers face a paradox: they need numerous reviews to ensure they have all the information needed to make a decision, but they can't read them all, and too many reviews can lead to confusion.


Grover, Lim, and Ayyagari (2006)³ identify two conflicting situations associated with information quantity: uncertainty and overload. Information uncertainty occurs when the information needed to facilitate a transaction is unavailable. Information overload, in contrast, happens when more information than necessary is provided, creating a cognitive burden for customers. Both can occur simultaneously: a user may have too much information overall but not enough on the specific criterion that matters to them.

Having a clear overview is crucial: according to one study⁴, only 54.7% of potential customers read at least four product reviews before making a purchase, and about 44% read three or fewer.

This explains why today’s platforms rely so heavily on the average rating.

💡
Exploration
  • Show most helpful reviews first. To reduce the cognitive burden, websites can sort and list reviews by helpfulness ranking, allowing consumers to rely on such rankings to choose which reviews to read (Lu, Wu, and Tseng, 2018⁵). Most platforms already rank reviews by helpfulness, using both upvotes from other customers and an algorithm to determine relevance. This selection could be improved by tailoring it to users’ own criteria, either inferred from previous purchases or added manually (see the first sketch after this list).
  • A curated summary of reviews. Some platforms, like Amazon, have started implementing this to better assist potential customers. While this helps, readers still want to make sure the information is relevant and to know how controversial it might be. They don’t fully trust the summary and still want to dig into real users’ opinions. Again, a paradox.
  • Amazon is testing review summaries but hasn’t deployed them for all products yet.
  • Suggest businesses update descriptions based on reviews. For example, if Airbnb guests flag an apartment as noisy, the platform could suggest the owner add that information to their listing with a simple click. This would help future guests set their expectations correctly (see “Expectations, subjectivity, standards & risks”).
  • A suggestion to add listing details could help Airbnb Hosts manage expectations.
  • Labels. Summarizing a product or service’s strengths with labels can make specific information more accessible to potential customers. For instance, an Airbnb accommodation labeled “Extra clean” would save potential guests from reading all reviews, as they can trust the label derived from user feedback.
  • A challenge on Airbnb would be to create labels that guests find valuable enough not to feel the need to double-check. They would need a distinct colour to stand out and strict validation, just like the “Superhost” or “Guest favorite” badges.
  • Converging or diverging opinions. As proposed in “Repartition of reviewers’ sentiment”, showing whether opinions converge or diverge helps readers gauge how much to trust a specific review and decide whether they need to read more reviews for additional details (see the second sketch after this list).
  • A design proposal for Airbnb.
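
To make the first exploration more concrete, here is a minimal sketch (in Python) of how reviews could be re-ranked by blending community helpfulness votes with a reader’s own criteria. The `Review` fields, the weights, and the topic matching are illustrative assumptions, not any platform’s actual ranking algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    text: str
    rating: float                              # 1-5 stars
    helpful_votes: int                         # "helpful" upvotes from other customers
    total_votes: int                           # all helpfulness votes received
    topics: set = field(default_factory=set)   # e.g. {"cleanliness", "noise"}

def helpfulness_score(review: Review, user_criteria: set,
                      w_votes: float = 0.7, w_match: float = 0.3) -> float:
    """Blend community helpfulness votes with how well the review covers the reader's own criteria."""
    # Laplace-smoothed vote ratio, so reviews with very few votes aren't over-ranked.
    vote_ratio = (review.helpful_votes + 1) / (review.total_votes + 2)
    # Share of the reader's criteria that this review actually mentions.
    match_ratio = len(review.topics & user_criteria) / len(user_criteria) if user_criteria else 0.0
    return w_votes * vote_ratio + w_match * match_ratio

def rank_reviews(reviews: list, user_criteria: set) -> list:
    """Sort reviews so the ones most useful to this particular reader come first."""
    return sorted(reviews, key=lambda r: helpfulness_score(r, user_criteria), reverse=True)

# Example: a guest who cares most about noise and cleanliness
reviews = [
    Review("Spotless flat, very quiet street.", 5, helpful_votes=40, total_votes=45,
           topics={"cleanliness", "noise"}),
    Review("Great location near the station.", 4, helpful_votes=80, total_votes=120,
           topics={"location"}),
]
for r in rank_reviews(reviews, user_criteria={"noise", "cleanliness"}):
    print(round(helpfulness_score(r, {"noise", "cleanliness"}), 2), r.text)
```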
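
The second sketch illustrates the “converging or diverging opinions” exploration with one possible proxy: the dispersion of star ratings. The 0.75-star threshold is an arbitrary assumption chosen purely for illustration.

```python
from statistics import mean, pstdev

def opinion_spread(ratings: list) -> str:
    """Classify whether reviewers broadly agree or disagree, based on how dispersed the ratings are.
    The 0.75-star threshold is an illustrative assumption, not a calibrated value."""
    if len(ratings) < 2:
        return "not enough reviews"
    spread = pstdev(ratings)   # population standard deviation of the star ratings
    label = "converging opinions" if spread < 0.75 else "diverging opinions"
    return f"{label} (average {mean(ratings):.1f}, spread {spread:.2f})"

print(opinion_spread([5, 5, 1, 2, 5, 1]))   # diverging: reviewers strongly disagree
print(opinion_spread([4, 5, 4, 4, 5]))      # converging: reviewers broadly agree
```

A real implementation would likely also weigh the content of the written reviews, but rating spread alone already signals whether a single review is representative of the overall opinion.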

¹ “Measuring the coverage and redundancy of information search services on e-commerce platforms”, Ma and Wei, 2012.

² “eWOM overload and its effect on consumer behavioral intention depending on consumer involvement”, Park and Lee, 2008.

³ “A Citation Analysis of the Evolution and State of Information Systems within a Constellation of Reference Disciplines”, Grover, Lim, and Ayyagari, 2006.

⁴ “How many reviews do you typically read before you make a decision to purchase?”, Bizrate Insights, 2021.

⁵ “Helpfulness of online consumer reviews: A multi-perspective approach”, Lu, Wu, and Tseng, 2018.

➡️ Next up: Satisfaction vs. Performance: understand the difference