Why reputation systems should not employ downvotes

I have always been more than a bit skeptical of reputation systems that employ so-called “downvotes” or “dislikes,” because I believe that by providing a negative incentive, downvotes suppress legitimate dissent by unconsciously leading people to either agree or stay silent.

Each time you are downvoted, you receive a negative incentive not to voice that opinion again. Even the slightest disagreement with the majority can have this effect, since downvotes can accumulate into an overall negative reputation over time. And even if you ignore the incentive itself, at some point your reputation will become bad enough that you will be ignored or barred from further discussion.

Ask yourself: would you want downvotes employed universally, on a global basis? For example, would it be desirable if millions of people could downvote someone for proclaiming that there is no god? Such a system would obviously be disastrous for minorities, and for the world as a whole.

Downvotes do not correct factual error

In some cases downvotes might cause a person to reflect on what they have written. But that will only happen if the person believes that downvotes are evidence that their submission is actually faulty, rather than a signal that the downvoter acted for reasons other than being objectively right about a factual disagreement.

Reputation systems allow for ambiguous interpretations of the numbers they assign to content. That downvotes mean someone is objectively wrong is just one, rather unlikely, interpretation, given the selection pressure such reputation systems create and the human bias toward groupthink.

Even if all requirements for a successful downvote are met, the person may well be unable to figure out how exactly they are wrong just by observing a change in the number attached to their submission. The information is simply insufficient, which will cause the person either to continue expressing their opinion, or to avoid further discussion and continue to hold wrong beliefs. Worse still, it will likely cause people who are downvoted to become angry at those who downvoted them and retaliate, causing the discussion to degenerate into a fight.

In response to this it is often argued that a little information is better than no information. Yet humans can easily be overwhelmed by too much information, especially information that is easily misjudged and provides little actual feedback. Such information may only add to the overall noise.

This problem could be alleviated if people actually had to explain themselves in order to voice their disagreement. But the ability to downvote someone with a single mouse click discourages people from elaborating on their disagreement.

If people had to actually write a comment to voice their disagreement, everyone would benefit. The person who is wrong would receive an actual explanation of why someone disagrees, and could therefore not easily dismiss the disagreement as irrational dislike of their opinion. The person who disagrees would have to be more specific, perhaps naming concrete reasons, and in doing so might notice that it is they who are wrong, or that their disagreement isn’t as strong as they thought. Further, everyone else reading the conversation would be able to discern whether the parties involved actually understand each other or are merely talking past each other.
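The mechanism proposed above can be made concrete. Here is a minimal sketch, in Python, of a hypothetical voting model in which a downvote is only accepted if it carries a written explanation; the `Post` class, its methods, and the minimum-length threshold are all illustrative assumptions, not any real site’s API:

```python
class Post:
    """A toy post in a reputation system where downvotes require an explanation."""

    def __init__(self, author):
        self.author = author
        self.score = 0
        self.feedback = []  # (voter, explanation) pairs collected from downvotes

    def upvote(self, voter):
        # Upvotes remain one-click: agreement needs no justification here.
        self.score += 1

    def downvote(self, voter, explanation=None):
        # Reject the vote unless it carries a substantive explanation
        # (20 characters is an arbitrary illustrative threshold).
        if not explanation or len(explanation.strip()) < 20:
            raise ValueError("Downvotes must include an explanation.")
        self.score -= 1
        self.feedback.append((voter, explanation))


post = Post(author="alice")
post.upvote("bob")
post.downvote("carol", "The claim in paragraph two contradicts the cited source.")
print(post.score)  # -> 0
```

The design choice is the point: by storing the explanation alongside the vote, the downvoted author and onlookers get the concrete reasons that a bare score change cannot convey.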


  • Stephen Diamond

    Most LW readers don’t understand the real purpose of the reputation system. I didn’t until the insight recently struck me: the real purpose of the reputation system–what function it actually serves–isn’t to select excellent content but to evolve consensus opinions on an eclectic array of topics. This became clear to me because I’ve experimented with postings, intentionally varying the quality and the degree of agreement with the LW “line.” The dominant factor in upvoting and downvoting is agreement.

    In itself, that’s not so unusual a claim. But critics who observe the dominant role of agreement in reputation allocation either confuse agreement with quality or see it as a flaw in a system serving to reward quality. The purpose is rather to evolve a “line.”

    LW’s success is due to its offering nonphilosophical readers a philosophical ideology, for which purpose it must take specific positions. How does it arrive at and then manifest those positions? It does both by downvoting disagreement and upvoting agreement. Since it desires a dynamic consensus on a specific outlook or a narrow range of outlooks, it must use some objective means to coordinate on such an outcome.

    Consider the alternative: Yudkowsky writes a sequence favoring a blend of utilitarianism and intuitionism in ethics. Posters could just coordinate around Yudkowsky’s views, but then you’d have a rather unattractive overt cult. “Reputation” is the mechanism by which the group adopts Yudkowsky’s views. Yudkowsky loses some control: he hasn’t pushed his (misnamed) “infinite-set atheism,” probably because it wouldn’t be upvoted, but he has been able to get LW to endorse most of his ideas.

    The “democracy” of reputation in LW mirrors political democracy these days in the U.S., where billionaires play a disproportionate role in writing the political agenda. (As per the role of the Koch brothers in spawning the Tea Party.) This isn’t a mere metaphor; Yudkowsky is bankrolled by his friend, billionaire Peter Thiel, whose largesse allowed Yudkowsky the time and other resources to post thousands of pages of Sequence material and to build networks of hundreds in New York and the Bay Area. (http://tinyurl.com/9y8ppmv)

    [I had long wondered how an uncharismatic figure lacking a stellar intellect or literary talent could create a huge network and get others to support his ultra-cushy job (at SIAI, it’s recognized that he can’t or won’t devote more than 4 hours to intellectually demanding labor and is required to do nothing but such labor). It’s now been clarified: he has a billionaire friend.]

  • Pingback: Alexander Kruel · SIAI/lesswrong Critiques: Index

  • Pingback: Alexander Kruel · Less Wrong: What annoys me MOST

  • Guest

    “it’s recognized that he can’t or won’t devote more than 4 hours to
    intellectually demanding labor and is required to do nothing but such
    labor”

    This doesn’t seem especially remarkable. I’ve read several people on Hacker News say that they can’t do more than 4 hours or so of actual programming per day, and it (unfortunately) matches my experiences with myself as well…

  • Pingback: Alexander Kruel · The Singularity Institute: How They Brainwash You

  • Pingback: Alexander Kruel · The detailed and specific reasons for why I am criticizing MIRI/LW

  • Tim Tyler

    I think the biggest problem is that votes are anonymous. Anonymity is well known to encourage bad behaviour. Of course plenty of users are anonymous anyway on such chat boards – but even a virtual identity is worth something. Less anonymity would be highly likely to produce better behaviour – according to everything we know about reputation systems.

  • seahen

    Given how few actual trolls LW ever has to deal with, maybe the weight of downvotes should be reduced once too many have been cast. Exceptions could be made for comments from banned users or users who hadn’t met a “probation” requirement; this’d keep the system working if LW were ever raided by a large troll mob or faced an influx of merely clueless new lusers.