Review Markets
Unbiased Peer Review
The Academic Reporting Market involves two essential roles. Methodology Checkers are responsible for identifying methodological or statistical errors and are rewarded based on their findings. Judges assess the inaccuracies pointed out by these checkers, determine their validity, and are rewarded accordingly.
After the scientists submit their research, the methodology checking process begins. During this period, anyone can point out an inaccuracy in the research. These inaccuracies are called Initial Methodological Issues (IMIs).
Next, the judging panel is randomly selected from a pool of scientists who specialise in the article's field, hold a minimum reputation score, and have the requisite funds in their account. A few open seats can also be filled by participants with a proven ability in scientific and statistical methods. Unlike the Fact Reporting Market, there is no lead judge to collate issues, as the issues raised in this market are expected to be fewer and more complex.
The Judges cast a secret vote, assigning a Severity Score and an Accuracy Score to each issue. The Severity Score denotes the importance of the inaccuracy found within the research, and the Accuracy Score denotes the judge's confidence in the issue raised.
Once the judges finish scoring and the outcomes are revealed, the rewards for Methodology Checkers, Judges, and Content Contributors are determined based on the outcomes.
Methodology Checkers are rewarded based on the accuracy and severity of their contributions.
Severity: A score from 0-10 is assigned by a Judge to a methodological issue, which signifies its severity.
Accuracy: A score from 0-10 is assigned by a Judge to a methodological issue, which signifies the Judge's belief in the accuracy of the issue raised.
The Methodology Checking Reward is calculated using the following formulas:
Each Methodology Checker's General Score is calculated using a weighted formula that takes into account the severity and accuracy of their submission.
The Total Score is calculated by summing the General Scores of all Methodology Checkers, providing a baseline for proportional reward distribution.
Once the Total Score is determined, the Methodology Checkers' rewards are calculated proportionally.
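Written out symbolically (the notation is ours: w_s and w_a are the protocol's Severity and Accuracy Weights, and R is the total reward set aside for Methodology Checkers), the three formulas read:

```latex
% General Score of methodological issue i, using the medians of the judges' votes
\mathrm{GeneralScore}_i = w_s \cdot \operatorname{median}(\mathrm{Severity}_i)
                        + w_a \cdot \operatorname{median}(\mathrm{Accuracy}_i)

% Baseline for proportional distribution
\mathrm{TotalScore} = \sum_i \mathrm{GeneralScore}_i

% Reward for the Methodology Checker who raised issue i
\mathrm{Reward}_i = \frac{\mathrm{GeneralScore}_i}{\mathrm{TotalScore}} \times R
```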
In this example, predefined Severity and Accuracy Weights from the protocol are used, along with a fixed reward for the article.
| Parameter | Value |
| --- | --- |
| Severity Weight | 0.4 |
| Accuracy Weight | 0.6 |
| Total Reward of Methodology Checkers | 100 USDC |
Subsequently, all judges provide Severity and Accuracy Scores for each methodological issue. The median of these scores is used.
| Methodological Issue | Severity Score | Accuracy Score |
| --- | --- | --- |
| MI1 | 6 | 7 |
| MI2 | 8 | 8 |
| MI3 | 5 | 7 |
The general score is calculated for all the methodological issues.
General Score = (Median of Severity Scores × Severity Weight) + (Median of Accuracy Scores × Accuracy Weight)
General Score MI1 = (6 × 0.4) + (7 × 0.6) = 6.6
General Score MI2 = (8 × 0.4) + (8 × 0.6) = 8
General Score MI3 = (5 × 0.4) + (7 × 0.6) = 6.2
Total Score = 6.6 + 8 + 6.2 = 20.8
Reward Per Methodology Checker = (General Score / Total Score) x Total Reward of Methodology Checkers
Methodology Checker 1 Reward = (6.6 / 20.8) × 100 = $31.73
Methodology Checker 2 Reward = (8 / 20.8) × 100 = $38.46
Methodology Checker 3 Reward = (6.2 / 20.8) × 100 = $29.81
The final distribution of Methodology Checker Rewards is as follows:
| Methodology Checker | General Score | Reward (USDC) |
| --- | --- | --- |
| Methodology Checker 1 | 6.6 | 31.73 |
| Methodology Checker 2 | 8 | 38.46 |
| Methodology Checker 3 | 6.2 | 29.81 |
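The same calculation can be sketched in a few lines of Python. The weights, median scores, and reward pool are taken from the example above; the variable and function names are illustrative only and not part of the protocol.

```python
from statistics import median

# Protocol parameters used in the example above
SEVERITY_WEIGHT = 0.4
ACCURACY_WEIGHT = 0.6
TOTAL_REWARD = 100  # USDC set aside for Methodology Checkers

# Judges' votes per methodological issue. Single values are used because the
# example above already lists the median severity and accuracy scores.
issues = {
    "MI1": {"severity": [6], "accuracy": [7]},
    "MI2": {"severity": [8], "accuracy": [8]},
    "MI3": {"severity": [5], "accuracy": [7]},
}

# General Score = (median severity × severity weight) + (median accuracy × accuracy weight)
general_scores = {
    name: median(v["severity"]) * SEVERITY_WEIGHT + median(v["accuracy"]) * ACCURACY_WEIGHT
    for name, v in issues.items()
}

total_score = sum(general_scores.values())  # 6.6 + 8 + 6.2 = 20.8

# Each checker's reward is their proportional share of the reward pool
for name, score in general_scores.items():
    reward = score / total_score * TOTAL_REWARD
    print(f"{name}: general score {score:.1f}, reward {reward:.2f} USDC")
# MI1: 31.73, MI2: 38.46, MI3: 29.81, matching the table above
```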
To participate in the Judging Panel, each Judge must stake a certain amount of funds. Judges must either use funds obtained through the funding mechanism that are currently locked in the protocol or stake their own funds. The amount they stake depends on the initial amount the content contributor staked in the article.
Judges are rewarded for each methodological issue (MI) based on the proximity of their accuracy score to the median of all judges' accuracy scores. Using the median ensures that rewards are distributed based on a score that is resilient to outliers.
The Judging Reward is calculated using the following formulas:
The Proximity Score reflects how close a judge's score is to the median, which is the middle score of all judges' scores when sorted. The closer a judge's score to the median, the higher their Proximity Score will be.
The Total Proximity Score is the sum of all individual Proximity Scores from each judge.
Each Judge's Reward is calculated by taking their individual Proximity Score and dividing it by the Total Proximity Score, which is then multiplied by the Total Reward available for the methodological issue.
When all the judges vote with the same accuracy score, they get back their original staked amount.
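In symbols (again our own shorthand: a_j is judge j's accuracy score, m is the panel's median score, and R_MI is the reward pool for the issue, i.e. the sum of the judges' stakes):

```latex
% Proximity of judge j's score to the median; MaxProximity is the width of the
% scoring range (10 - 0 = 10)
\mathrm{Proximity}_j = \mathrm{MaxProximity} - \lvert a_j - m \rvert

% Sum over the whole panel
\mathrm{TotalProximity} = \sum_j \mathrm{Proximity}_j

% Each judge's share of the reward pool for the issue
\mathrm{Reward}_j = \frac{\mathrm{Proximity}_j}{\mathrm{TotalProximity}} \times R_{MI}
```

If every judge votes identically, every Proximity Score equals MaxProximity, each share is 1/n of the pool, and each judge simply recovers their stake.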
In this example, we use a panel of 5 judges, though actual panels may have more. Each judge stakes $10 to evaluate a particular methodological issue (MI) in an article.
| Parameter | Value |
| --- | --- |
| Total Judges | 5 |
| Total Reward | $50 ($10 per judge) |
The following table shows the distribution of accuracy scores for a specific methodological issue, indicating each judge's confidence in the issue raised.
| Judge | Accuracy Score |
| --- | --- |
| J1 | 10 |
| J2 | 8 |
| J3 | 4 |
| J4 | 3 |
| J5 | 2 |
The reward calculation process for Judges consists of the following steps:
Establishing the Median Score:
We sort the given accuracy scores and determine the median value. The median represents the middle value when there is an odd number of observations.
Median (odd n) = the ((n + 1) / 2)-th data point
Sorted Scores: 2, 3, 4, 8, 10
Median Score = 4
Calculating Proximity Score for Each Judge:
Max Proximity = Max Score - Min Score = 10 - 0 = 10
Proximity Score = Max Proximity - |Judge's Score - Median Score|
Judge 1 Proximity Score = 10 - |10 - 4| = 10 - 6 = 4
Judge 2 Proximity Score = 10 - |8 - 4| = 10 - 4 = 6
Judge 3 Proximity Score = 10 - |4 - 4| = 10 - 0 = 10
Judge 4 Proximity Score = 10 - |3 - 4| = 10 - 1 = 9
Judge 5 Proximity Score = 10 - |2 - 4| = 10 - 2 = 8
Calculating the Total Proximity Score:
Total Proximity Score = 4 + 6 + 10 + 9 + 8 = 37
Calculating Each Judge’s Reward:
Assuming a total reward pool of $50 ($10 per judge):
Reward = (Proximity Score / Total Proximity Score) x Total Reward
Judge 1 Reward = (4 / 37) × $50 ≈ $5.41
Judge 2 Reward = (6 / 37) × $50 ≈ $8.11
Judge 3 Reward = (10 / 37) × $50 ≈ $13.51
Judge 4 Reward = (9 / 37) × $50 ≈ $12.16
Judge 5 Reward = (8 / 37) × $50 ≈ $10.81
The final distribution of Judge Rewards is as follows:
| Judge | Proximity Score | Reward |
| --- | --- | --- |
| J1 | 4 | $5.41 |
| J2 | 6 | $8.11 |
| J3 | 10 | $13.51 |
| J4 | 9 | $12.16 |
| J5 | 8 | $10.81 |
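The judging example above can be reproduced with a short Python sketch (names are illustrative only):

```python
from statistics import median

MAX_SCORE, MIN_SCORE = 10, 0
TOTAL_REWARD = 50  # $10 staked by each of the 5 judges

# Accuracy scores cast by the panel for one methodological issue
accuracy_scores = {"J1": 10, "J2": 8, "J3": 4, "J4": 3, "J5": 2}

median_score = median(accuracy_scores.values())  # sorted: 2, 3, 4, 8, 10 -> median 4
max_proximity = MAX_SCORE - MIN_SCORE            # 10

# Proximity Score = Max Proximity - |judge's score - median score|
proximity = {j: max_proximity - abs(s - median_score) for j, s in accuracy_scores.items()}
total_proximity = sum(proximity.values())        # 4 + 6 + 10 + 9 + 8 = 37

# Each judge's reward is their proportional share of the reward pool
for judge, p in proximity.items():
    reward = p / total_proximity * TOTAL_REWARD
    print(f"{judge}: proximity {p}, reward ${reward:.2f}")
# J1 $5.41, J2 $8.11, J3 $13.51, J4 $12.16, J5 $10.81, matching the table above
```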
Content Contributors in Olas have the opportunity to apply for funds through periodic funding rounds, in which they receive donations from active donors. The amount of donations each contributor receives plays a significant role in the quadratic funding mechanism, which determines the funds matched to each contributor. Once secured, these funds are locked in the contributor's account.

A unique feature of Olas is that contributors stake an amount of their locked funds on each article they write. This staking represents a commitment to the quality and reliability of their content, as it involves a direct financial stake in the success and accuracy of the article. Alternatively, contributors can choose to stake their own money instead of participating in funding rounds; doing so can enhance their reputation within the protocol and improve their chances of achieving better results in future funding rounds.
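The matching rule itself is not spelled out here. For reference, the sketch below implements the textbook quadratic-funding (CLR) match, in which each contributor's match grows with the square of the sum of the square roots of their individual donations; the exact formula Olas applies may differ in its details.

```python
from math import sqrt

def quadratic_matches(donations: dict[str, list[float]], matching_pool: float) -> dict[str, float]:
    """Textbook quadratic-funding match, shown for illustration only.

    `donations` maps each contributor to the individual donations they received.
    Each contributor's raw match is (sum of sqrt(donation))^2; the raw matches
    are then scaled so that they sum to the matching pool.
    """
    raw = {c: sum(sqrt(d) for d in ds) ** 2 for c, ds in donations.items()}
    total = sum(raw.values())
    return {c: matching_pool * r / total for c, r in raw.items()}

# Many small donations attract a much larger match than one big donation of the same size
print(quadratic_matches({"Alice": [1] * 100, "Bob": [100]}, matching_pool=1000))
# Alice receives roughly $990 of the match and Bob roughly $10,
# although both raised $100 in donations
```

This is why the number of donors, and not just the total amount donated, matters for the funds matched to each contributor.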
The reward for Content Contributors is calculated based on the following factors:
Article Score: The article score is calculated by summing the medians of the accuracy scores across all methodological issues and dividing the sum by the maximum possible total, i.e. the number of issues multiplied by the maximum accuracy score of 10. For instance, with median accuracy scores of 9, 7, and 2, the sum is 18 out of a maximum of 30, giving a score of 0.6, or 60% (see the short sketch after this list).
Deferred Payout: Irrespective of the article's score, content contributors are guaranteed 20% of their staked amount as a Deferred Payout, ensuring that they receive a fixed portion of their stake regardless of the article score. For example, if a contributor stakes $100, the Deferred Payout is $20 (20% of the staked amount).
Evaluated Stake Payout: After the deferred payout, the remaining 80% of the staked amount is subject to the article's score evaluation. This part of the stake is dependent on the article score. For instance, if a contributor stakes $100 on an article and receives a score of 70%, then 70% of the remaining $80 stake, which is $56, is unlocked and made accessible to the contributor.
Tips Payout: Contributors receive a portion of the tips, given by readers to their articles. The allocation of tips is dependent on the article score. For instance, if an article scores 70%, 70% of the tips go to the contributor, while the remaining 30% is directed to the global pool.
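As a quick check of the first factor, the Article Score from the example above (median accuracy scores of 9, 7, and 2) works out as follows; the helper name is ours.

```python
def article_score(accuracy_medians: list[float], max_score: float = 10) -> float:
    """Sum of the per-issue median accuracy scores divided by the maximum possible total."""
    return sum(accuracy_medians) / (max_score * len(accuracy_medians))

print(article_score([9, 7, 2]))  # 18 / 30 = 0.6, i.e. 60%
```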
The total reward for a Content Contributor is calculated using this formula:

Total Payout = Deferred Payout + Article Score × (Evaluated Stake + Tips)
Let's consider the reward calculation for a content contributor who has staked $200 on their article. Additionally, the article has earned $50 in tips from readers. The Judging Panel reviews the article and assigns it a score of 70%.

| Article Score | Staked Amount | Tips |
| --- | --- | --- |
| 70% | $200 | $50 |
We determine the contributor's total payout by applying this formula step by step.
First, we calculate the Deferred Payout:
Deferred Payout (20% of Staked Amount) = 0.20 × 200 = $40
Next, we calculate the Evaluated Stake and apply the formula:
Evaluated Stake (80% of the Staked Amount, predefined by the protocol) = 0.80 × 200 = $160
Content Contributor's Reward = 40 + 0.7 × (160 + 50) = 40 + 112 + 35 = $187
Of the entire $250 at stake ($200 staked plus $50 in tips), the contributor earns $152 from the locked balance ($40 deferred payout plus $112 evaluated stake payout) and $35 in tips. The global pool receives $48, which is 30% of the evaluated stake, and $15, which is 30% of the tips.
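Finally, the contributor payout in this example can be reproduced with a small sketch; the 20%/80% split, the stake, the tips, and the score come straight from the description above, and the function name is our own.

```python
DEFERRED_SHARE = 0.20   # paid out regardless of the article score
EVALUATED_SHARE = 0.80  # remainder of the stake, subject to the article score

def contributor_payout(staked: float, tips: float, article_score: float) -> dict[str, float]:
    """Split a contributor's stake and tips according to the article score."""
    deferred = DEFERRED_SHARE * staked                  # 0.20 * 200 = 40
    evaluated_stake = EVALUATED_SHARE * staked          # 0.80 * 200 = 160
    evaluated_payout = article_score * evaluated_stake  # 0.70 * 160 = 112
    tips_payout = article_score * tips                  # 0.70 * 50  = 35
    return {
        "contributor_total": deferred + evaluated_payout + tips_payout,  # 187
        "global_pool": (1 - article_score) * (evaluated_stake + tips),   # 48 + 15 = 63
    }

result = contributor_payout(staked=200, tips=50, article_score=0.70)
print({k: round(v, 2) for k, v in result.items()})
# {'contributor_total': 187.0, 'global_pool': 63.0}
```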