I have always been more than a bit skeptical about the reputation system employed on LessWrong. I suspect it subtly pressures people to agree, because even slight disagreement can accumulate into an overall negative reputation over time. And even if, on some level, you don’t care about your reputation, each downvote is an incentive not to voice that opinion the next time, or to change how you present it.
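The accumulation effect can be made concrete with a toy model. The sketch below is purely illustrative (the function name, vote counts, and probabilities are all invented for this example): each post receives a fixed number of votes, and each voter upvotes with some probability. A poster who is disagreed with only slightly more often than not still drifts to a large negative total.

```python
import random

def simulate_karma(posts=200, p_agree=0.45, voters=10, seed=0):
    """Toy model: each of `posts` submissions draws `voters` votes.
    Each voter upvotes (+1) with probability p_agree, otherwise
    downvotes (-1). Returns the poster's cumulative karma."""
    rng = random.Random(seed)
    karma = 0
    for _ in range(posts):
        for _ in range(voters):
            karma += 1 if rng.random() < p_agree else -1
    return karma

# A mildly contrarian poster (45% agreement) ends up deeply negative,
# while a mildly agreeable one (55%) ends up positive.
print(simulate_karma(p_agree=0.45))
print(simulate_karma(p_agree=0.55))
```

The point is not the exact numbers but the asymptotics: with expected score per vote of 2·p_agree − 1, even a small systematic tilt against a poster compounds linearly with activity, which is exactly the incentive to stop voicing the disagreeable opinion.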
I have noticed this in myself: although I believe I never cared much about my rank within the community, at some point I became increasingly reluctant to say anything I knew would cost me reputation.
The problem with downvotes is that the people being downvoted rarely know that they are wrong. If they did, they would have deliberately submitted something they expected to be downvoted, in which case the downvotes are anticipated and have little or no effect on their future behavior.
In some cases downvotes might prompt a person to reflect on what they have written. But that will only happen if the person believes the downvotes are evidence that their submission is actually flawed, rather than a signal that the voters downvoted for reasons unrelated to correctness.
Reputation systems, like the one used on LessWrong, allow for ambiguous interpretations of the number they assign to content. That downvotes mean someone is objectively wrong is just one, rather unlikely, interpretation, given the selection pressure such systems create and the human bias towards groupthink.
Even when all the conditions for a deserved downvote are met, the person may well be unable to figure out how exactly they are wrong from a change in a number attached to their submission. The information is simply insufficient. This leaves the person either continuing to express the opinion, or avoiding further discussion while continuing to hold wrong beliefs.
With respect to reputation systems, it is often argued that a little information is better than none. Yet humans are easily overwhelmed by too much information, especially information that is easily misjudged and provides little feedback. Such information may only add to the overall noise.
The availability of a reputation system discourages people from actually explaining themselves: it lets them let off steam, or ignore cognitive dissonance, by downvoting someone with a single mouse click.
If people had to write a comment to voice their disagreement, everyone would benefit. The person who is wrong would get an actual explanation of why someone disagrees, and could no longer easily assume that the critic simply dislikes their opinion for irrational reasons. The person who disagrees would have to be more specific and name concrete reasons, and might thereby notice that it is they who are wrong, or that the disagreement isn’t as strong as they thought. Further, everyone else reading the conversation could see whether the parties actually understand each other or are talking past each other.
It is trivially true that the LessWrong reputation system would fail if irrational people outnumbered rational people, where “rational” is defined by the criteria of the majority (without implying that those criteria are wrong).
I am quite sure that many valuable opinions are lost under the current reputation system, because many people dislike being voted down according to unknown criteria rather than being engaged in actual argument.
Another problem is that the current reputation system favors non-technical posts. More technical posts often don’t receive as many upvotes as non-technical posts, and technical posts that turn out to be wrong are downvoted more heavily. This discourages rigor and creates an incentive to write posts about basic rationality rather than to tackle important problems collaboratively.
I do see that reputation systems can have positive effects under certain circumstances. But would you want such a system employed on a global basis, where millions could downvote you for saying that there is no God? Obviously such a system would be really bad for the kind of people who read LessWrong, and for the world as a whole.
That means the use of the system on LessWrong rests on the assumption that it will only be used by people much like those LessWrong favors, and will therefore work well for them. But given that LessWrong is an open system, will it always stay that way? At what point will it fail them? How will they notice? Where do they set the threshold?
And if the system works so well that it keeps everyone who doesn’t think like them off LessWrong, how are they going to notice the negative effects of groupthink? Do they trust their truth-seeking abilities enough to notice when the system starts to discourage people who are actually less wrong than LessWrong?
A reputation system works insofar as it maximizes the content that the majority of people are interested in. But that content might be biased and, to some extent, dishonest. Are humans really good enough at collectively deciding what they want to see more of, just by clicking two buttons that adjust a reward number? I am skeptical.
Ask yourself: would a reputation system cause the Tea Party movement to become less wrong? Any reputation system protects itself by making those who benefit most from it defend its value.