PhysicsOverflow is a next-generation academic platform for physicists and astronomers, including a community peer review system and a postgraduate-level discussion forum analogous to MathOverflow.


  Is it possible to have different voting categories?

+ 0 like - 0 dislike
988 views

With research questions, there are different ways a question can be useful. For example, "Explain what Witten is doing in equation 3.14 of paper so and so" might be important, but it does not require original thinking to answer, and it is not difficult for anyone who understands the paper. On the other hand, "Can you give an example of where this model of superconductivity will break down?" might require extensive original thinking to answer, and is a qualitatively different kind of problem.

I think that it would be useful to have two voting categories for questions and two for answers:

for questions: ( ) difficulty, ( ) interesting-ness

so that you can upvote one, the other, or both attributes;

for answers: ( ) correctness, ( ) completeness

so that you can upvote both categories separately.

(Taking dimension10's criticism to heart, perhaps there should be only one voting category for answers, with correctness and completeness merged together; the total rep could then be difficulty * answer-score * interestingness. The important thing is to give research-level questions a hugely greater value than run-of-the-mill student questions, and to give original answers much more credit than unoriginal material; this is how you make a research-level site.)

The voting system can then assign a score which is the difficulty score of the question times the completeness score of the answer, plus the interesting-ness score of the question times the correctness score of the answer. This type of thing is a simple way to ensure that answering difficult research questions will give a crapload more reputation than answering student questions.
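
For concreteness, here is a minimal sketch of how such a combined score could be computed; the function name, weights, and example numbers are all hypothetical, just to illustrate the arithmetic:

```python
def answer_reputation(difficulty, interestingness, correctness, completeness):
    """Hypothetical combined score for one answer to one question.

    difficulty, interestingness: net category votes on the question
    correctness, completeness:   net category votes on the answer
    """
    return difficulty * completeness + interestingness * correctness

# A difficult research question rewards a good answer far more than an
# easy but popular student question answered equally well:
research = answer_reputation(difficulty=8, interestingness=3, correctness=4, completeness=4)
student = answer_reputation(difficulty=0, interestingness=6, correctness=4, completeness=4)
print(research, student)  # 44 24
```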

I was wondering if people think this is a good idea. There is an issue with feature creep--- once you introduce categories, there is always pressure to make the categories expand, and if they expand too much, the site becomes useless. But I think this might be sufficient.

Perhaps "originality" is a third category one can add to questions/answers. The moment you add "originality" voting, you become equivalent to a journal, and if you multiply the reputation gained for answers by their originality score, you attract top contributions immediately, because originality is the most important thing for a researcher. The best researchers will flounder around at first, but after figuring it out, they will give original, creative, correct answers to research-level questions, and should be rewarded for it much more than a person who gives unoriginal rehashes of old stuff, or literature-summary answers to fluff questions like "what are some promising avenues for spintronics research?". So if someone asks "How can you calculate the temperature of loss of superconductivity in High Tc?", and someone produces a new model which gets it right, and which is not in the literature, the originality score might go sky high, even though the model might not be complete and correct.
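
As a hedged sketch of how this originality multiplier could enter the same arithmetic (again, the names and the interpretation of the numbers are purely illustrative):

```python
def reputation_with_originality(difficulty, interestingness,
                                correctness, completeness, originality):
    # Base score from the two question axes and two answer axes, then
    # amplified by how original the answer is: a derivative rehash of
    # the literature gets originality ~ 1, a genuinely new model that
    # turns out to be right gets a much larger factor.
    base = difficulty * completeness + interestingness * correctness
    return base * originality
```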

This way, a person who contributes only a few answers, but correct and original ones to questions assessed to be difficult, can get a large amount of reputation. This will prevent people from asking hundreds of bogus, stupid, low-level questions, answering them, and getting a large amount of rep, because someone who does research can easily accumulate vastly more reputation by answering one difficult question with a penetrating original analysis.

asked Feb 21, 2014 in Discussion by Ron Maimon (7,730 points) [ no revision ]
retagged Jul 11, 2014 by dimension10

1 Answer

+ 1 like - 0 dislike

I will just note, first of all, for other readers, that despite what the title might suggest, this post is not about earning different rep from voting in different categories; it is about having different categories of reasons to vote a post up or down, something like separate ratings (-1, 0, or 1) for different aspects.

I don't think this is necessary. Consider 4 answers: 

  1. Complete, Correct.
  2. Incomplete, Correct.
  3. Complete, Incorrect.
  4. Incomplete, Incorrect.

Different people have different voting criteria (without voting categories). Some would vote for all correct answers, some would vote only for answers which are both correct and complete, and some would vote only if the answer is correct and at least above a minimum threshold of completeness... Clearly, for an intelligent enough crowd, answers 3 and 4 would only get downvotes, while 1 would get more upvotes than 2.

This achieves the same result that would be obtained with your idea of voting categories. Even though a single user is not able to rate both the completeness and the correctness, the overall score after voting by different users who have different criteria will reflect both the completeness and the correctness, combined.
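
As a hedged illustration of this point (the voter thresholds below are made up, only the qualitative ordering matters), one can simulate voters who each cast a single up/down vote according to their own private criterion:

```python
import random

random.seed(0)

# (completeness, correctness) of the four answer types above
answers = {1: (1.0, 1.0), 2: (0.4, 1.0), 3: (1.0, 0.2), 4: (0.4, 0.2)}

def vote(completeness, correctness, voter):
    """One voter's single up/down vote, using their own thresholds."""
    min_completeness, min_correctness = voter
    if correctness < min_correctness:
        return -1                      # wrong answers get downvoted
    return +1 if completeness >= min_completeness else 0

voters = [(random.uniform(0.0, 1.0), random.uniform(0.3, 0.9)) for _ in range(100)]
scores = {k: sum(vote(comp, corr, v) for v in voters)
          for k, (comp, corr) in answers.items()}
print(scores)  # answer 1 scores highest, 2 lower, 3 and 4 net negative
```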

answered Feb 22, 2014 by dimension10 (1,985 points) [ no revision ]

The problem is that a single vote combines too many different things.

There are fluff questions, like "Why does time ordering work for Wick's theorem", which are trivial for anyone to answer, but usually this fluff gets a lot of question upvotes, because students are interested in it, and this pushes the site away from research questions. So there needs to be a "difficulty" rating for questions, separate from "interesting": there are difficult things, and there are things for which a lot of people want to know the answer.

Perhaps completeness and correctness are not the right separate metrics for answers. Perhaps "originality" and "correctness" are the proper measures. But sometimes there are correct insights which only partially answer the question but which are useful, and other times there are complete answers which are not 100% correct but get at the main idea, so I thought this was a good separation at the time I wrote the question.

But the main goal is simply to avoid the anti-research degeneration in questions and answers that happens even at sites like MathOverflow. It is important that a person who solves a major, long-unsolved problem be rewarded better than someone who writes a derivative answer which simply rewords the solution. This doesn't happen even at MathOverflow, even when the problem is at the level of a minor research paper (I know).

A research-level culture is very hard to ensure: you need politics that reward originality and creativity, and give a certain reward to exposition of older material, but only when it is significant exposition. This is not natural on public websites, and even on MathOverflow, after a promising start, the site did not become a research-level thing; rather, it turned into more of a pat-on-the-back place for researchers to suck up to each other socially.

user contributions licensed under cc by-sa 3.0 with attribution required
