Fact Reporting Markets

Unbiased Editorial Review for News and Investigative Journalism


The Fact Reporting Market involves two essential roles. Fact Checkers identify inaccuracies in articles and are rewarded based on their findings. Judges assess the alleged inaccuracies flagged by fact checkers, determine their validity, and are rewarded according to how closely their scores align with those of the other judges.

After the contributors submit their articles, the fact-checking process begins. During this period, anyone can point out a factual inaccuracy or a misleading contextual omission in the article. These submissions are called Initial Fact Checking Questions (IFCQs).

Next, a panel is randomly selected from a pool of journalists who specialise in the relevant article field, hold a minimum reputation score, and have the requisite funds in their accounts. From this group of judges, one individual is assigned the role of Lead Judge.
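As a rough illustration, the selection step might look like the following Python sketch; the pool structure, reputation threshold, and panel size are illustrative assumptions rather than protocol parameters.

```python
import random
from dataclasses import dataclass

@dataclass
class Journalist:
    name: str
    field: str
    reputation: float
    balance: float

def select_panel(pool, article_field, min_reputation, min_balance,
                 panel_size, rng=random):
    """Filter the journalist pool by the eligibility rules described above,
    then draw a random panel and designate one member as Lead Judge."""
    eligible = [j for j in pool
                if j.field == article_field
                and j.reputation >= min_reputation
                and j.balance >= min_balance]
    if len(eligible) < panel_size:
        raise ValueError("not enough eligible judges for this article")
    panel = rng.sample(eligible, panel_size)
    lead_judge = rng.choice(panel)
    return panel, lead_judge
```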

The Lead Judge is responsible for two duties:

  • Assigning a Quality Score to each Fact Checker, based on how well their questions are formulated and the quality of the supporting evidence.

  • Grouping the IFCQs into distinct questions, keeping unique questions separate and merging similar ones.

Subsequently, the questions grouped by the Lead Judge are sent to all the Judges for voting. Each Judge votes on a Severity Score and an Accuracy Score for each grouped question by secret ballot; judges do not know who else is on the panel. The Severity Score denotes the importance of the inaccuracy found within the article, and the Accuracy Score denotes the judge's confidence that the question is valid.

Once the judges finish scoring and the ballots are revealed, the rewards for Fact Checkers, Judges, and Content Contributors are determined.

1.1. Fact Checkers Reward

Fact Checkers are rewarded based on the accuracy, severity, quality, and uniqueness of their contributions.

  • Quality: A score from 0-10 is assigned to a Fact Checker by the Lead Judge depending on the quality of their submission.

  • Uniqueness: Similar questions are collated by the Lead Judge, and unique questions remain separate.

  • Severity: A score from 0-10 is assigned by a Judge to a collated question, which signifies its severity.

  • Accuracy: A score from 0-10 is assigned by a Judge to a collated question, which signifies the Judge's belief in the accuracy of the collated question.

Each Fact Checker's General Score is calculated using a weighted formula that takes into account the severity and quality of their submission, as well as the median of the accuracy scores provided by all judges:

$$S_G = \frac{(M_S \times W_S) + (S_Q \times W_Q)}{CQ + 1} \times \frac{M_A}{10}$$

where $S_G$ is the General Score, $M_S$ the median of severity scores, $W_S$ the severity weight, $S_Q$ the quality score, $W_Q$ the quality weight, $M_A$ the median of accuracy scores, and $CQ$ the number of combined questions (if any).

The Total Score is then calculated by summing the General Scores of all Fact Checkers, providing a baseline for proportional reward distribution:

$$S_T = \sum_{i=1}^{N} S_{G_i}$$

where $S_T$ is the Total Score and $N$ the number of Fact Checkers.

Once the Total Score is determined, each Fact Checker's reward is calculated proportionally:

$$R_{FC} = \frac{S_G}{S_T} \times TR_F$$

where $R_{FC}$ is the Fact Checker's reward and $TR_F$ the total reward available to Fact Checkers.

1.1.1 Example Fact Checking Reward Calculation

In this example, predefined severity and quality weights from the protocol are used, along with a fixed reward for the article.

| Judging Panel Variables | Values |
| --- | --- |
| Severity Weight | 0.7 |
| Quality Weight | 0.3 |
| Total Reward of Fact Checkers | 100 USDC |

The Lead Judge assigns a quality score to each Fact Checker:

| Fact Checker | Quality Score Given by Lead Judge |
| --- | --- |
| FC1 | 7 |
| FC2 | 8 |
| FC3 | 7 |
| FC4 | 4 |

The Lead Judge reviews fact checking questions and determines if any are similar. Questions sent by FC1 and FC2 are identified as similar and are merged into a single question, now referred to as FCQ12.

Subsequently, all judges provide Severity and Accuracy Scores for each question. The medians of these scores are used.

| Fact Checking Question | Median of Severity Scores | Median of Accuracy Scores |
| --- | --- | --- |
| FCQ12 (FC1 and FC2 merged) | 6 | 9 |
| FCQ2 (FC3) | 8 | 7 |
| FCQ3 (FC4) | 5 | 2 |

The general score is calculated for all the fact checking questions.

General Score = [(Median of Severity Scores × Severity Weight) + (Quality Score × Quality Weight)] / (Combined Questions + 1) × (Median of Accuracy Scores / 10)

General Score FC1 = [(6 × 0.7) + (7 × 0.3)] / (1 + 1) × (9 / 10) = 2.835

General Score FC2 = [(6 × 0.7) + (8 × 0.3)] / (1 + 1) × (9 / 10) = 2.97

General Score FC3 = [(8 × 0.7) + (7 × 0.3)] / (0 + 1) × (7 / 10) = 5.39

General Score FC4 = [(5 × 0.7) + (4 × 0.3)] / (0 + 1) × (2 / 10) = 0.94

Total Score = 2.835 + 2.97 + 5.39 + 0.94 = 12.135

Reward Per Fact Checker = (General Score / Total Score) × Total Reward of Fact Checkers

Fact Checker 1 Reward = (2.835 / 12.135) × 100 = $23.36

Fact Checker 2 Reward = (2.97 / 12.135) × 100 = $24.47

Fact Checker 3 Reward = (5.39 / 12.135) × 100 = $44.42

Fact Checker 4 Reward = (0.94 / 12.135) × 100 = $7.75

The final distribution of Fact Checker Rewards is as follows:

| Fact Checker | General Score | Reward |
| --- | --- | --- |
| FC1 | 2.835 | $23.36 |
| FC2 | 2.97 | $24.47 |
| FC3 | 5.39 | $44.42 |
| FC4 | 0.94 | $7.75 |
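The full calculation above can be reproduced with a short script. The following is a minimal Python sketch of the Fact Checker reward computation; the function and data layout are illustrative, not part of the protocol.

```python
def general_score(median_severity, quality, median_accuracy, combined_questions,
                  severity_weight=0.7, quality_weight=0.3):
    """General Score: weighted severity and quality, split across merged
    questions, scaled by the panel's median accuracy (out of 10)."""
    weighted = median_severity * severity_weight + quality * quality_weight
    return weighted / (combined_questions + 1) * (median_accuracy / 10)

# (median severity, quality score, median accuracy, questions merged with)
fact_checkers = {
    "FC1": (6, 7, 9, 1),  # FCQ12, merged with FC2's question
    "FC2": (6, 8, 9, 1),  # FCQ12, merged with FC1's question
    "FC3": (8, 7, 7, 0),
    "FC4": (5, 4, 2, 0),
}
total_reward = 100  # USDC

scores = {fc: general_score(*args) for fc, args in fact_checkers.items()}
total_score = sum(scores.values())
for fc, score in scores.items():
    reward = score / total_score * total_reward
    print(f"{fc}: general score {score:.3f}, reward ${reward:.2f}")
```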

1.2. Judges Reward

To participate in the Judging Panel, each Judge must stake a certain amount of funds. Judges must either use funds obtained through the funding mechanism that are currently locked in the protocol or stake their own funds. The amount they stake depends on the initial amount the content contributor staked on the article, although very large stakes result in a larger judging panel.

Judges are rewarded for each fact checking question (FCQ) based on the proximity of their accuracy score to the median of all judges' accuracy scores. The use of the median ensures that rewards are distributed based on a score that is resilient to outliers.

The Judging Reward is calculated using the following formulas:

$$S_P = P_{max} - |J_A - M_J|$$

where $S_P$ is the Proximity Score, $P_{max}$ the maximum proximity (maximum score minus minimum score), $J_A$ the judge's accuracy vote, and $M_J$ the median of all judges' accuracy scores. The Proximity Score reflects how close a judge's score is to the median, the middle score when all judges' scores are sorted; the closer a judge's score is to the median, the higher their Proximity Score.

$$S_{P_{total}} = \sum_{i=1}^{N} S_{P_i}$$

The Total Proximity Score $S_{P_{total}}$ is the sum of all individual Proximity Scores.

$$R_{J_i} = \frac{S_{P_i}}{S_{P_{total}}} \times R_{J_{total}}$$

Each judge's reward $R_{J_i}$ is their individual Proximity Score divided by the Total Proximity Score, multiplied by the total reward $R_{J_{total}}$ available for the fact checking question.

When all the judges vote with the same accuracy score, they get back their original staked amount.

1.2.1 Example Judging Reward Calculation

In this example, we use a panel of 5 judges, though actual panels may have more judges. Each judge stakes $10 to evaluate a particular fact checking question in an article.

| Variables | Values |
| --- | --- |
| Total Judges | 5 |
| Total Reward | $50 ($10 per judge) |

The following table shows the distribution of accuracy scores for a specific fact checking question, indicating each judge's confidence in the statement's accuracy.

| Judge | Accuracy Score Given |
| --- | --- |
| J1 | 10 |
| J2 | 8 |
| J3 | 4 |
| J4 | 3 |
| J5 | 2 |

The reward calculation process for Judges consists of the following steps:

  • Establishing the Median Score:

    Sort the accuracy scores and take the middle value; for an odd number of observations n, the median is the ((n + 1)/2)-th data point of the sorted list.

    Sorted Scores: 2, 3, 4, 8, 10

    Median Score = 4

  • Calculating Proximity Score for Each Judge:

    Max Proximity = Max Score - Min Score = 10 - 0 = 10

    Proximity Score = Max Proximity - |Judge's Score - Median Score|

    Judge 1 Proximity Score = 10 - |10 - 4| = 10 - 6 = 4

    Judge 2 Proximity Score = 10 - |8 - 4| = 10 - 4 = 6

    Judge 3 Proximity Score = 10 - |4 - 4| = 10 - 0 = 10

    Judge 4 Proximity Score = 10 - |3 - 4| = 10 - 1 = 9

    Judge 5 Proximity Score = 10 - |2 - 4| = 10 - 2 = 8

  • Calculating the Total Proximity Score:

    Total Proximity Score = 4 + 6 + 10 + 9 + 8 = 37

  • Calculating Each Judge’s Reward:

    Assuming a total reward pool of $50 ($10 per judge):

    Reward = (Proximity Score / Total Proximity Score) x Total Reward

    Judge 1 Reward = (4 / 37) × $50 ≈ $5.41

    Judge 2 Reward = (6 / 37) × $50 ≈ $8.11

    Judge 3 Reward = (10 / 37) × $50 ≈ $13.51

    Judge 4 Reward = (9 / 37) × $50 ≈ $12.16

    Judge 5 Reward = (8 / 37) × $50 ≈ $10.81

The final distribution of Judge Rewards is as follows:

| Judge | Proximity Score | Reward |
| --- | --- | --- |
| J1 | 4 | $5.41 |
| J2 | 6 | $8.11 |
| J3 | 10 | $13.51 |
| J4 | 9 | $12.16 |
| J5 | 8 | $10.81 |
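The following minimal Python sketch reproduces the judging reward computation above; the function name and data layout are illustrative. Note that when every judge casts the same vote, each Proximity Score equals the maximum, so every judge receives exactly their $10 stake back, matching the rule stated earlier.

```python
from statistics import median

def judge_rewards(accuracy_votes, total_reward, max_score=10, min_score=0):
    """Distribute the reward pool in proportion to each judge's proximity
    to the median accuracy vote (the median is resilient to outliers)."""
    med = median(accuracy_votes.values())
    max_proximity = max_score - min_score
    proximity = {j: max_proximity - abs(v - med)
                 for j, v in accuracy_votes.items()}
    total_proximity = sum(proximity.values())
    return {j: p / total_proximity * total_reward
            for j, p in proximity.items()}

votes = {"J1": 10, "J2": 8, "J3": 4, "J4": 3, "J5": 2}
for judge, reward in judge_rewards(votes, total_reward=50).items():
    print(f"{judge}: ${reward:.2f}")
```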

1.3. Content Contributor Reward

A unique feature of Olas is that contributors stake an amount on each article they write. This staking process represents a commitment to the quality and reliability of their content, as it involves a direct financial stake in the success and accuracy of the article.

The reward for Content Contributors is calculated based on the following factors:

Article Score: The article score is calculated by summing the medians of the accuracy scores of all fact checking questions and dividing by the maximum possible total. For instance, with median accuracy scores of 9, 7, and 2, the article scores 18 out of a maximum of 30 (assuming a maximum possible accuracy score of 10 per question), resulting in a score of 0.6, or 60% (a small sketch after this list checks this arithmetic).

Deferred Payout: Irrespective of the article's score, content contributors are guaranteed 20% of their staked amount. This ensures that contributing isn't deemed too risky. For example, if a contributor stakes $100, the guaranteed payout is $20 (20% of the staked amount).

Evaluated Stake Payout: The remaining 80% of the staked amount is unlocked in proportion to the article's score. For instance, if a contributor stakes $100 on an article that receives a score of 70%, then 70% of the remaining $80 stake, i.e. $56, is unlocked and made accessible to the contributor.

Tips Payout: Contributors receive a portion of the tips given by readers to their articles. The allocation of tips also depends on the article score: if an article scores 70%, 70% of the tips go to the contributor, while the remaining 30% is directed to the global pool.
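As a quick check of the article-score arithmetic above, here is a minimal Python sketch; the function name is illustrative.

```python
def article_score(fcq_accuracy_medians, max_accuracy=10):
    """Sum the per-question accuracy medians and divide by the maximum
    possible total (number of questions x max accuracy score)."""
    return sum(fcq_accuracy_medians) / (len(fcq_accuracy_medians) * max_accuracy)

print(article_score([9, 7, 2]))  # 18 / 30 = 0.6, i.e. 60%
```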

The total reward for a Content Contributor is calculated using these formulas:

$$S_{AR} = \frac{\sum_{i=1}^{N} M_{A_{FC_i}}}{N \times S_{A_{max}}}$$

where $S_{AR}$ is the Article Score, $M_{A_{FC_i}}$ the median of the accuracy scores for the $i$-th fact checking question, $N$ the number of fact checking questions, and $S_{A_{max}}$ the maximum possible accuracy score.

$$R_C = P_D + S_{AR} \times (P_E + P_T)$$

where $R_C$ is the total Content Contributor reward, $P_D$ the deferred (guaranteed) payout, $P_E$ the evaluated stake payout, and $P_T$ the tips payout.

1.3.1 Example Content Contributor Reward Calculation

Let's consider the reward calculation for a content contributor who has staked $200 on their article. Additionally, this article has earned $50 in tips from readers. The Judging Panel reviews the article and assigns it a score of 70%.

| Article Score (%) | Staked Amount | Tips |
| --- | --- | --- |
| 70% | $200 | $50 |

We determine the total payout for the contributor by using this simple formula:

Total Payout = Deferred Payout + Article Score × (Evaluated Stake Payout + Tips Payout)

First, we calculate the Deferred Payout:

Deferred Payout (20% of Staked Amount) = 0.20 × 200 = $40

Next, we calculate the Evaluated Stake Payout:

Evaluated Stake = 80% of Staked Amount (predefined by the protocol)

Evaluated Stake Payout = 0.80 × 200 = $160

Finally, we combine the parts:

Content Contributor's Reward = 40 + 0.7 × (160 + 50) = $187

Of the entire $250 at stake ($200 staked plus $50 in tips), the contributor earns $152 from the locked balance ($40 deferred payout plus $112 from the evaluated stake) and $35 in tips. The global pool receives $48, which is 30% of the evaluated stake, and $15, which is 30% of the tips.
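A minimal Python sketch of the contributor payout, reproducing the example above; the function name and parameter names are illustrative.

```python
def contributor_reward(staked, tips, article_score, guaranteed_fraction=0.20):
    """Split the contributor's payout into a deferred (guaranteed) part and a
    part scaled by the article score; tips are scaled the same way.
    Returns (contributor payout, amount sent to the global pool)."""
    deferred = guaranteed_fraction * staked            # paid regardless of score
    evaluated_stake = (1 - guaranteed_fraction) * staked
    payout = deferred + article_score * (evaluated_stake + tips)
    global_pool = (1 - article_score) * (evaluated_stake + tips)
    return payout, global_pool

payout, pool = contributor_reward(staked=200, tips=50, article_score=0.7)
print(f"contributor: ${payout:.2f}, global pool: ${pool:.2f}")
# contributor: $187.00, global pool: $63.00
```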


Figure: Olas Judging Panel Architecture