This page is part of a global project to create a better online review system. If you want to know more or give your feedback, write to [email protected] and we’ll grab a beer ;)
When discussing what’s wrong with online reviews, a frequent answer I get is “fake reviews”. While I believe a system composed only of honest reviews would still present challenges, fake reviews are undeniably a major concern for users.
Knowing this, platforms strive to be transparent and disclose how they handle fake reviews: in 2020, Trustpilot reported removing 5.8% of reviews, and TripAdvisor 3.6%. That’s what we know. In 2018, a Washington Post investigation found that for some popular product categories on Amazon, more than half of the reviews were “questionable.”
Here are common types of fake reviews:
- (Positive) Paid reviews: These probably account for the largest share. It’s well known that some companies pay for good reviews. Research shows that businesses are more likely to engage in review fraud when their reputation is struggling or competition is particularly fierce.
- (Positive) Friends & connections reviews: Common for small businesses. For example, Product Hunt is known for many upvotes coming from the publisher’s own connections.
- (Negative) Competitor reviews: Reviews left by competitors to harm a business’s reputation.
- (Negative) Revenge reviews: When a dissatisfied customer asks their connections to leave bad reviews, flooding the business and lowering its average rating.
Anyone can leave a review on Google, Yelp, or Trustpilot, even for a product they haven’t used, even though these platforms claim they frequently remove fake reviews. They rely on automatic algorithms and manual verification, because fake reviews often follow identifiable patterns. However, with the rise of AI, people can generate lifelike reviews at scale with bots, making it harder to tell real reviews from artificial ones.
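To make “identifiable patterns” concrete, here is a toy sketch of two classic heuristics: near-duplicate text and posting bursts. The interfaces and thresholds are invented for illustration, not any platform’s actual rules:

```typescript
// Toy heuristics for flagging suspicious reviews. Names and thresholds
// are illustrative assumptions, not any platform's actual detector.
interface Review {
  author: string;
  text: string;
  postedAt: Date; // when the review was submitted
}

// Jaccard similarity on word sets: near-duplicate texts score close to 1.
function textSimilarity(a: string, b: string): number {
  const wordsA = new Set(a.toLowerCase().split(/\s+/));
  const wordsB = new Set(b.toLowerCase().split(/\s+/));
  const intersection = [...wordsA].filter((w) => wordsB.has(w)).length;
  const union = new Set([...wordsA, ...wordsB]).size;
  return union === 0 ? 0 : intersection / union;
}

function findSuspicious(reviews: Review[]): Review[] {
  const suspicious = new Set<Review>();
  // Flag reviews that closely duplicate another review.
  for (let i = 0; i < reviews.length; i++) {
    for (let j = i + 1; j < reviews.length; j++) {
      if (textSimilarity(reviews[i].text, reviews[j].text) > 0.8) {
        suspicious.add(reviews[i]);
        suspicious.add(reviews[j]);
      }
    }
  }
  // Burst detection: more than 5 reviews within one hour is unusual
  // for a small business (the threshold is an assumption).
  const sorted = [...reviews].sort((a, b) => +a.postedAt - +b.postedAt);
  for (let i = 5; i < sorted.length; i++) {
    if (+sorted[i].postedAt - +sorted[i - 5].postedAt < 3_600_000) {
      sorted.slice(i - 5, i + 1).forEach((r) => suspicious.add(r));
    }
  }
  return [...suspicious];
}
```

Real detectors combine dozens of such signals with reviewer metadata and machine learning; a sketch like this mostly shows why unsophisticated fraud is catchable.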
And what about corruption?
While not fake reviews per se, a related concern is “corrupted” reviewers: some companies offer discounts and other benefits to customers who leave reviews online.
Although asking customers for a review can be legitimate, rewarding them for it distorts the buyer/seller relationship, leading customers to lie or omit parts of the truth (e.g., leaving a very good review despite an “okay” or bad experience). To a certain extent, this practice can be considered corruption.
You might have been in that situation where the business owner offers a free dessert if you leave a review on Google Maps. It’s not a big deal, but it’s not fair either. These businesses have 50 reviews on average; 10 five-star reviews obtained through this method would raise the average rating from 4.2 to about 4.3 (see the quick calculation below).
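The arithmetic, for the curious:

```typescript
// How 10 five-star reviews shift a 4.2 average over 50 reviews.
const ratingMass = 50 * 4.2;                          // 210 "stars" already given
const newAverage = (ratingMass + 10 * 5) / (50 + 10); // (210 + 50) / 60
console.log(newAverage.toFixed(2));                   // prints "4.33", up from 4.20
```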
- Verify the transaction. The best way to prevent fake reviews is to ensure that a transaction actually occurred. Studies have shown that review manipulation is less prevalent on platforms that require a purchase to post a review. For example, Airbnb verifies transactions before allowing reviews. Amazon uses the “Verified Purchase” label, but due to return policies and low prices, fake reviews are still possible on Amazon. Tools like ReviewMeta and Fakespot analyze reviews to provide a more accurate score. (A minimal gating check is sketched after this list.)
- Disallow copy and paste in the comment field. AI-generated reviews are often written in external tools like ChatGPT and then pasted onto the platform. Disabling pasting would force fraudulent users to retype the text manually, which is often deterrent enough. It doesn’t stop large-scale fraud operations that simulate human input directly (which bot protection can address), but it raises the bar for individual fraudsters. (A paste-blocking sketch follows this list.)
- Use additional markers to verify reviews (e.g., length, depth, the reviewer’s history). One study proposes a moderation system that measures a reviewer’s reputation and review quality, showing promise for ensuring the content quality of an online platform. Companies like Yelp and Trustpilot use content-checking algorithms built on similar markers to remove fake reviews (but the details are opaque and not accessible to the public). (A toy scoring function follows this list.)
- Enable users to flag reviews as “potentially fake.” Making it a collaborative effort, platforms could display this information to readers, e.g., “X customers reported this might be a fake review.” However, a new issue arises: companies might exploit this feature by having collaborators and their networks flag negative reviews, increasing the chances they get removed, something we’ll explore in the dedicated section “Flagging reviews.” Google allows users to report reviews, but it’s unclear how these reports are processed. (A minimal flag counter follows this list.)
- Add a disclaimer about the legal consequences and fines for submitting fake reviews and mention that the platform uses algorithms to detect fraud. Although these tools might not catch all fake reviews, this deterrent could reduce their occurrence.
- Implement additional barriers. When the algorithm identifies a potentially fraudulent review, the website could request more information, such as the date and time of the purchase, the price of the item, or a receipt the reviewer can upload. This would help keep illegitimate reviews out, but it can also discourage legitimate reviewers from finishing. (An escalation sketch closes the examples below.)
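A few illustrative sketches for the points above, all in TypeScript. First, transaction gating: the simplest possible check, assuming a hypothetical order store that the platform already maintains (the interfaces and names are mine, not any platform’s API):

```typescript
// Hypothetical order record: a real platform would query its database.
interface Transaction {
  buyerId: string;
  productId: string;
  date: Date;
}

const transactions: Transaction[] = []; // populated by the checkout flow

// Airbnb-style gating: no matching purchase, no review form.
function canReview(buyerId: string, productId: string): boolean {
  return transactions.some(
    (t) => t.buyerId === buyerId && t.productId === productId,
  );
}
```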
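Second, paste blocking. A browser-side sketch, assuming the review form has a `<textarea id="review-comment">`; note that a determined attacker can still dispatch synthetic input events, so this only adds friction:

```typescript
// Refuse pasted (and dropped) text in the review field so AI-generated
// reviews can't be inserted wholesale.
const field = document.getElementById("review-comment") as HTMLTextAreaElement;

field.addEventListener("paste", (event) => {
  event.preventDefault(); // discard the clipboard content
  alert("Please type your review directly."); // hypothetical UX copy
});

field.addEventListener("drop", (event) => event.preventDefault());
```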
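Third, marker-based scoring. A crude reputation/quality score combining the markers mentioned above; every weight and threshold here is an invented assumption, not the cited study’s actual model:

```typescript
interface ReviewerHistory {
  accountAgeDays: number;
  pastReviews: number;
  pastFlaggedReviews: number;
}

// Score from 0 (very suspicious) to 1 (likely trustworthy).
function reviewTrustScore(text: string, history: ReviewerHistory): number {
  let score = 0;
  if (text.length > 100) score += 1;                      // length
  if (/because|however|although/i.test(text)) score += 1; // rough proxy for depth
  if (history.accountAgeDays > 90) score += 1;            // established account
  if (history.pastReviews >= 3) score += 1;               // has a review history
  if (history.pastFlaggedReviews === 0) score += 1;       // clean record
  return score / 5;
}
```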
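Fourth, collaborative flagging. A minimal counter that dedupes repeat flags from the same user and only surfaces the notice past a threshold (three here, arbitrarily):

```typescript
// reviewId -> set of user ids who flagged it (a Set dedupes repeat flags).
const flags = new Map<string, Set<string>>();

function flagReview(reviewId: string, userId: string): void {
  if (!flags.has(reviewId)) flags.set(reviewId, new Set());
  flags.get(reviewId)!.add(userId);
}

// Text to display to readers, or null if below the threshold.
function flagNotice(reviewId: string): string | null {
  const count = flags.get(reviewId)?.size ?? 0;
  return count >= 3
    ? `${count} customers reported this might be a fake review`
    : null;
}
```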
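Finally, the escalation barrier. Instead of a binary publish/reject decision, a middle band of risk triggers a proof-of-purchase request; the bands are arbitrary, and the risk score is assumed to come from a detector like the ones sketched above:

```typescript
type Decision = "publish" | "request_receipt" | "reject";

// Map a fraud-risk score in [0, 1] to a moderation decision.
function moderationDecision(riskScore: number): Decision {
  if (riskScore < 0.3) return "publish";         // looks legitimate
  if (riskScore < 0.7) return "request_receipt"; // ask for date, price, or a receipt
  return "reject";                               // almost certainly fraudulent
}
```

The middle band trades some friction for legitimate reviewers against fraud resistance, which is exactly the tension the last point describes.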
Fake It Till You Make It: Reputation, Competition, and Yelp Review Fraud, Luca and Zervas, 2016
Promotional Reviews: An Empirical Investigation of Online Review Manipulation, Mayzlin, Dover, and Chevalier, 2014
The Role of Marketing in Social Media: How Online Consumer Reviews Evolve, Chen, Fay, and Wang, 2011