Categorization: reviewers should evaluate on specific criteria

This page is part of a global project to create a better online reviews system. If you want to know more or give your feedback, write to [email protected] and we’ll grab a beer ;)

When people evaluate their experiences, they do so based on specific criteria they have in mind. Reviews should clearly indicate the areas of service being evaluated (e.g., for a restaurant: food, atmosphere, service…).

Often, people read negative reviews to check whether the issues mentioned would apply to them (e.g., a reviewer complains about long wait times, but wait time isn’t important to this reader). They also scan other comments, unconsciously noting which criteria come up, and then consolidate this information to make a decision.

Three main issues arise from this:

  1. Time-consuming: Categorizing information from reviews takes time (see “Too Many Reviews to Look At”).
  2. Insufficient information: When there are few comments or they lack detail, the reader doesn’t get the information they need.
  3. Subjectivity: Some categories are objective (e.g., accuracy of description), some are slightly subjective (e.g., cleanliness, noise levels), and some are highly subjective (e.g., value for money, atmosphere). A user giving 3 stars out of 5 might do so based on personal expectations, which may differ from the reader’s.

Airbnb was one of the first businesses to offer reviews based on multiple criteria: cleanliness, check-in, location, communication, accuracy, value.

Airbnb categories

However, a few problems persist:

  • Overall rating dominance: The overall rating remains the main reference, while the category ratings are only visible on the listing page.
  • Lack of clarity: The average rating for each category is hard to interpret. People have a reference point for overall ratings because they can compare listings, but they can’t directly compare cleanliness ratings, making it hard to judge whether a score is good or bad.
  • Missing categories: Important aspects like quietness, amenities, Wi-Fi speed, and bed comfort are often not included. Users will check the comments for these if they’re important to them.
  • Information overload: There’s a lot of information to process, and people don’t have the time to check all category ratings on all listings.
  • Unclear scale: The ambiguity of the rating scale also applies to these scores.

Due to these issues, people often overlook category ratings and focus on written reviews instead.

Google has also recently added several categories to its review framework, indicating a growing interest in categorization.

The consequences of non-categorized content are significant. For example, on HealthGrades, physicians receive bad ratings for being “not nice.” While being nice matters, it isn’t everything: a pleasant physician who prescribes inappropriate medication or fails to refer complex cases to specialists is far more dangerous.

This issue extends to private feedback as well. In NPS surveys, without comments it’s hard to determine what is actually being rated (the product, customer support, operations?), and departments within a company struggle to identify who’s responsible.

Overall, basing an average rating on subjective sentiments is questionable. It can work with a lot of reviews, but not with only a few (see the “Volatility” section).
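
To illustrate with made-up numbers (a rough sketch, not data from any platform): the same single 1-star review barely moves a score backed by hundreds of reviews, but drags down a score backed by only a handful.

```typescript
// Illustrative only: how one new 1-star review shifts an average rating.
function newAverage(currentAvg: number, reviewCount: number, newRating: number): number {
  return (currentAvg * reviewCount + newRating) / (reviewCount + 1);
}

console.log(newAverage(4.8, 5, 1).toFixed(2));   // "4.17": with only 5 reviews, one bad rating changes the picture
console.log(newAverage(4.8, 500, 1).toFixed(2)); // "4.79": with 500 reviews, it is barely visible
```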

💡
Exploration
  • Force the user to categorize their review and remove the “overall rating.” If a user wants to review different aspects of the service/product, they should leave separate reviews for each category. Airbnb partially does this by asking guests to rate each aspect (cleanliness, communication…), but they still ask for a global rating at the beginning, and the comment is also at the “global” level, not category-specific. While this approach would give potential customers a fairer representation, it has downsides: it takes more time to leave a review and can create an information overload for readers. With the right design, though, these issues can be mitigated (a rough data-model sketch follows this list).
  • A design prototype
  • Ask specific questions to the reviewer. For example, asking “Did you receive the package on time and undamaged?” with a “Yes or no” answer captures valuable feedback and adds useful information for potential customers. This approach could be extended to other specific questions related to product quality, customer service, or any aspect that helps reviewers qualify their experience more easily.
  • Ask “what did you most enjoy” and/or “what was wrong” with a selection of choices. This design could be improved further: since a reviewer might have both good and bad feedback on the same topic, the platform could still ask what could be improved about a topic even when it was selected as a highlight.
  • Confirm items of the description. A platform could ask reviewers whether a product or service truly provides something mentioned in the description. As seen in the introductory principles, reviews help confirm or invalidate an option. Checking user experiences on specific description points would be valuable to readers: “Does the product really provide this feature?”
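
To make these explorations more concrete, here is a minimal sketch of what a category-first review schema could look like. Everything below (type names, categories, fields) is hypothetical and not based on any existing platform’s API.

```typescript
// Hypothetical data model combining the ideas above (illustration only).

type ReviewCategory = "cleanliness" | "communication" | "location" | "value";

// One rating and comment per category; there is no overall rating field at all.
interface CategoryReview {
  category: ReviewCategory;
  rating: 1 | 2 | 3 | 4 | 5;
  comment?: string; // scoped to the category, not "global"
}

// Closed questions such as "Did you receive the package on time and undamaged?"
interface ChecklistAnswer {
  question: string;
  answer: boolean;
}

// Confirmation of specific claims made in the listing or product description.
interface DescriptionCheck {
  claim: string;      // e.g. "Fast Wi-Fi"
  confirmed: boolean; // "Does the product really provide this feature?"
}

interface StructuredReview {
  reviewerId: string;
  categoryReviews: CategoryReview[];
  checklist: ChecklistAnswer[];
  descriptionChecks: DescriptionCheck[];
}
```

With this kind of structure, readers could filter or aggregate feedback by category (for example, only cleanliness comments), which is much harder to do with a single overall rating and a free-form comment.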

Give your opinion!

➡️ Next up: Nuances