Fact Reporting Markets

Unbiased Editorial Review for News and Investigative Journalism

The Fact Reporting Market involves two essential roles. Fact Checkers identify inaccuracies in articles and are rewarded based on their findings. Judges assess the alleged inaccuracies raised by the fact checkers, determine their validity, and are rewarded according to how closely their scores align with those of the other judges.

After the contributors submit their articles, the fact-checking process begins. During this period, anyone can point out a factual inaccuracy or a misleading contextual omission in the article. These reports are called Initial Fact Checking Questions (IFCQs).

Next, a panel is randomly selected from a pool of journalists who specialise in the relevant article field, have a minimum reputation score, and hold the requisite funds in their accounts. From this group of judges, one individual is assigned the role of Lead Judge.

The Lead Judge is responsible for two duties:

  • Assigning a Quality Score to each Fact Checker, based on how well their questions are formulated and the quality of their supporting evidence.

  • Grouping the IFCQs into distinct questions, keeping unique questions separate and combining similar ones.

Subsequently, the questions grouped by the Lead Judge are sent to all the Judges for voting. Voting on Severity and Accuracy Scores for each grouped question is done by secret ballot; judges do not know who else is on the panel. The Severity Score denotes the level of importance of the inaccuracy found within the article, and the Accuracy Score denotes the judge's confidence in the question posed.

Once the judges finish scoring and the outcomes are revealed, the rewards for Fact Checkers, Judges, and Content Contributors are determined based on the outcomes.

1.1 Fact Checkers' Reward

Fact Checkers are rewarded based on the accuracy, severity, quality, and uniqueness of their contributions.

  • Quality: A score from 0-10 is assigned to a Fact Checker by the Lead Judge depending on the quality of their submission.

  • Uniqueness: Similar questions are collated by the Lead Judge, and unique questions remain separate.

  • Severity: A score from 0-10 is assigned by a Judge to a collated question, which signifies its severity.

  • Accuracy: A score from 0-10 is assigned by a Judge to a collated question, which signifies the Judge's belief in the accuracy of the collated question.

The Fact Checking Reward is calculated using the following formulas:

$$\boxed{S_G = \frac{(M_S \times W_S) + (S_Q \times W_Q)}{CQ + 1} \times \frac{M_A}{10}}$$

where $S_G$ = General Score, $M_S$ = Median of Severity Scores, $M_A$ = Median of Accuracy Scores, $W_S$ = Severity Weight, $W_Q$ = Quality Weight, $S_Q$ = Quality Score, and $CQ$ = Combined Questions (if any).

Each Fact Checker's General Score is calculated using a weighted formula that takes into account the severity and quality of their submission, as well as the median value of the accuracy scores (provided by all judges).

$$\boxed{S_T = \sum_{i=1}^{N} S_{G_i}}$$

where $S_T$ = Total Score and $N$ = Number of Fact Checkers.

The Total Score is then calculated by summing the General Scores of all Fact Checkers, providing a baseline for proportional reward distribution.

$$\boxed{R_{FC} = \frac{S_G}{S_T} \times TR_F}$$

where $R_{FC}$ = Fact Checker Reward and $TR_F$ = Total Reward of Fact Checkers.

Once the Total Score is determined, the Fact Checkers' rewards are calculated proportionally.
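To make the flow concrete, here is a minimal Python sketch of these formulas. The function names, signatures, and default weights (0.7 / 0.3, taken from the example below) are illustrative assumptions, not part of the protocol's codebase:

```python
def general_score(m_s, m_a, s_q, w_s=0.7, w_q=0.3, cq=0):
    """S_G = ((M_S * W_S) + (S_Q * W_Q)) / (CQ + 1) * (M_A / 10).

    m_s, m_a: median severity / accuracy scores voted by the judges
    s_q: Quality Score assigned by the Lead Judge
    cq: number of Combined Questions, if any
    """
    return ((m_s * w_s) + (s_q * w_q)) / (cq + 1) * (m_a / 10)


def fact_checker_rewards(general_scores, total_reward):
    """R_FC = (S_G / S_T) * TR_F, where S_T sums all General Scores."""
    s_t = sum(general_scores)
    return [s_g / s_t * total_reward for s_g in general_scores]
```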

1.1.1 Example Fact Checking Reward Calculation

In this example, predefined severity and quality weights from the protocol are used, along with a fixed reward for the article.

| Judging Panel Variables | Values |
| --- | --- |
| Severity Weight | 0.7 |
| Quality Weight | 0.3 |
| Total Reward of Fact Checkers | 100 USDC |

The Lead Judge assigns a Quality Score to each Fact Checker:

| Fact Checker | Quality Score given by Lead Judge |
| --- | --- |
| FC1 | 7 |
| FC2 | 8 |
| FC3 | 7 |
| FC4 | 4 |

The Lead Judge reviews fact checking questions and determines if any are similar. Questions sent by FC1 and FC2 are identified as similar and are merged into a single question, now referred to as FCQ12.

Subsequently, all judges provide Severity and Accuracy Scores for each question. The median of these scores is used.

| Fact Checking Questions | Median of Severity Scores | Median of Accuracy Scores |
| --- | --- | --- |
| FCQ12 (FC1 and FC2 merged) | 6 | 9 |
| FCQ2 (FC3) | 8 | 7 |
| FCQ3 (FC4) | 5 | 2 |

The General Score is then calculated for each Fact Checker.

General Score = [(Median of Severity Scores × Severity Weight) + (Quality Score × Quality Weight)] / (Combined Questions + 1) × (Median of Accuracy Scores / 10)

FC1 and FC2 share FCQ12's medians (severity 6, accuracy 9) and have Combined Questions = 1, since two questions were merged; FC3 and FC4 have Combined Questions = 0.

General Score FC1 = [(6 × 0.7) + (7 × 0.3)] / (1 + 1) × (9 / 10) = 2.835

General Score FC2 = [(6 × 0.7) + (8 × 0.3)] / (1 + 1) × (9 / 10) = 2.97

General Score FC3 = [(8 × 0.7) + (7 × 0.3)] / (0 + 1) × (7 / 10) = 5.39

General Score FC4 = [(5 × 0.7) + (4 × 0.3)] / (0 + 1) × (2 / 10) = 0.94

Total Score = 2.835 + 2.97 + 5.39 + 0.94 = 12.135

Reward Per Fact Checker = (General Score / Total Score) x Total Reward of Fact Checkers

Fact Checker 1 Reward = (2.835 / 12.135) × 100 = $23.36

Fact Checker 2 Reward = (2.97 / 12.135) × 100 = $24.47

Fact Checker 3 Reward = (5.39 / 12.135) × 100 = $44.42

Fact Checker 4 Reward = (0.94 / 12.135) × 100 = $7.75

The final distribution of Fact Checker Rewards is as follows:

| Fact Checker | General Score | Reward |
| --- | --- | --- |
| Fact Checker 1 | 2.835 | $23.36 |
| Fact Checker 2 | 2.97 | $24.47 |
| Fact Checker 3 | 5.39 | $44.42 |
| Fact Checker 4 | 0.94 | $7.75 |
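Using the hypothetical helpers sketched after the formulas above, this example can be reproduced end to end:

```python
# FC1 and FC2 share FCQ12's medians (severity 6, accuracy 9) and cq=1;
# FC3 and FC4 are unmerged (cq=0).
scores = [
    general_score(6, 9, 7, cq=1),  # FC1 -> 2.835
    general_score(6, 9, 8, cq=1),  # FC2 -> 2.97
    general_score(8, 7, 7),        # FC3 -> 5.39
    general_score(5, 2, 4),        # FC4 -> 0.94
]
print(fact_checker_rewards(scores, 100))
# -> approximately [23.36, 24.47, 44.42, 7.75], summing to 100
```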

1.2 Judges' Reward

To participate in the Judging Panel, each Judge must stake a certain amount of funds, either funds obtained through the funding mechanism that are currently locked in the protocol or their own funds. The amount they stake depends on the initial amount the content contributor staked on the article, although very large stakes will result in a larger judging panel.

Judges are rewarded for each fact checking question (FCQ) based on the proximity of their accuracy score to the median of all judges' accuracy scores. Using the median ensures that rewards are distributed based on a score that is resilient to outliers.

The Judging Reward is calculated using the following formulas:

$$\boxed{S_P = P_{max} - |J_A - M_J|}$$

where $S_P$ = Proximity Score, $P_{max}$ = Max Proximity, $J_A$ = Judge's vote on Accuracy Score, and $M_J$ = Median of All Judges' Accuracy Scores.

The Proximity Score reflects how close a judge's score is to the median, the middle score of all judges' scores when sorted. The closer a judge's score is to the median, the higher their Proximity Score will be.

$$\boxed{S_{P_{total}} = \sum_{i=1}^{N} S_{P_i}}$$

where $S_{P_{total}}$ = Total Proximity Score.

The Total Proximity Score is the sum of all individual Proximity Scores from each judge.

$$\boxed{R_{J_i} = \frac{S_{P_i}}{S_{P_{total}}} \times R_{J_{total}}}$$

where $R_J$ = Judge's Reward and $R_{J_{total}}$ = Total Reward available for Judges.

Each Judge's Reward is calculated by dividing their individual Proximity Score by the Total Proximity Score and multiplying the result by the Total Reward available for the fact-checking question.

When all the judges vote with the same accuracy score, they get back their original staked amount.
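A minimal Python sketch of these formulas, assuming $P_{max} = 10$ (the top of the 0-10 accuracy scale); the names are illustrative, not from the protocol's codebase:

```python
from statistics import median

def judge_rewards(accuracy_votes, total_reward, p_max=10):
    """R_Ji = (S_Pi / S_P_total) * R_J_total."""
    m_j = median(accuracy_votes)  # median of all judges' accuracy votes
    # S_P = P_max - |J_A - M_J|: closer to the median, higher proximity
    proximity = [p_max - abs(vote - m_j) for vote in accuracy_votes]
    s_p_total = sum(proximity)
    # If every judge votes identically, each proximity equals p_max and
    # each judge receives an equal 1/N share, i.e. their stake back.
    return [s_p / s_p_total * total_reward for s_p in proximity]
```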

1.2.1 Example Judging Reward Calculation

In this example, we use a panel of 5 judges, though actual panels may have more judges. Each judge stakes $10 to evaluate a particular fact checking question in an article.

| Variables | Values |
| --- | --- |
| Total Judges | 5 |
| Total Reward | $50 ($10 per Judge) |

The following table shows the distribution of accuracy scores for a specific fact checking question, indicating each judge's confidence in the statement's accuracy.

| Judges | Accuracy Scores Given by Judges |
| --- | --- |
| J1 | 10 |
| J2 | 8 |
| J3 | 4 |
| J4 | 3 |
| J5 | 2 |

The reward calculation process for Judges consists of the following steps:

  • Establishing the Median Score:

    We sort the given accuracy scores and determine the median value. The median represents the middle value when there is an odd number of observations.

    Median (odd n) = ((n + 1) / 2)-th data point

    Sorted Scores: 2, 3, 4, 8, 10

    Median Score = 4

  • Calculating Proximity Score for Each Judge:

    Max Proximity = Max Score - Min Score = 10 - 0 = 10

    Proximity Score = Max Proximity - |Judge's Score - Median Score|

    Judge 1 Proximity Score = 10 - |10 - 4| = 10 - 6 = 4

    Judge 2 Proximity Score = 10 - |8 - 4| = 10 - 4 = 6

    Judge 3 Proximity Score = 10 - |4 - 4| = 10 - 0 = 10

    Judge 4 Proximity Score = 10 - |3 - 4| = 10 - 1 = 9

    Judge 5 Proximity Score = 10 - |2 - 4| = 10 - 2 = 8

  • Calculating the Total Proximity Score: Total Proximity Score = 4 + 6 + 10 + 9 + 8 = 37

  • Calculating Each Judge’s Reward:

    Assuming a total reward pool of $50 ($10 per judge)

    Reward = (Proximity Score / Total Proximity Score) x Total Reward

    Judge 1 Reward = (4 / 37) × $50 ≈ $5.41

    Judge 2 Reward = (6 / 37) × $50 ≈ $8.11

    Judge 3 Reward = (10 / 37) × $50 ≈ $13.51

    Judge 4 Reward = (9 / 37) × $50 ≈ $12.16

    Judge 5 Reward = (8 / 37) × $50 ≈ $10.81

The final distribution of Judge Rewards is as follows:

| Judges | Proximity Score | Reward |
| --- | --- | --- |
| J1 | 4 | $5.41 |
| J2 | 6 | $8.11 |
| J3 | 10 | $13.51 |
| J4 | 9 | $12.16 |
| J5 | 8 | $10.81 |
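Under the same assumptions, the sketch above reproduces this example:

```python
print(judge_rewards([10, 8, 4, 3, 2], 50))
# -> approximately [5.41, 8.11, 13.51, 12.16, 10.81]
```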

1.3 Content Contributor Reward

A unique feature of Olas is that contributors stake an amount on each article they write. This staking process represents a commitment to the quality and reliability of their content, as it involves a direct financial stake in the success and accuracy of the article.

The reward for Content Contributors is calculated based on the following factors:

Article Score: The article score is calculated by summing the medians of the accuracy scores of all fact checking questions and dividing by the maximum possible total (the number of questions times the maximum accuracy score). For instance, with accuracy score medians of 9, 7, and 2, the article scores 18 out of a maximum of 30 (assuming the maximum possible accuracy score is 10), resulting in a score of 0.6, which equates to 60%.

$$\boxed{S_{AR} = \frac{\sum_{i=1}^{N} M_{A_{FC_i}}}{N \times S_{A_{max}}}}$$

where $S_{AR}$ = Article Score, $M_{A_{FC}}$ = Median of the Accuracy Scores for a Fact Checking Question, $N$ = Number of Fact Checking Questions, and $S_{A_{max}}$ = Maximum Possible Accuracy Score.

Guaranteed Payout: Irrespective of the article's score, content contributors are guaranteed 20% of their staked amount as payment (the Deferred Payout in the formula below). This ensures that contributions aren't deemed too risky to make. For example, if a contributor stakes $100, the guaranteed pay is $20 (20% of the staked amount).

Evaluated Stake Payout: The remaining 80% of the staked amount is subject to the article's score evaluation. For instance, if a contributor stakes $100 on an article that receives a score of 70%, then 70% of the remaining $80 stake, i.e. $56, is unlocked and made accessible to the contributor.

Tips Payout: Contributors receive a portion of the tips given by readers to their articles. The allocation of tips depends on the article score. For instance, if an article scores 70%, 70% of the tips go to the contributor, while the remaining 30% is directed to the global pool.

The total reward for a Content Contributor is calculated using this formula:

$$\boxed{R_C = P_D + S_{AR} \times (P_E + P_T)}$$

where $R_C$ = Total Content Contributor Reward, $P_D$ = Deferred Payout, $S_{AR}$ = Article Score, $P_E$ = Evaluated Stake Payout, and $P_T$ = Tips Payout.
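A minimal Python sketch of the contributor payout, assuming the 20% deferred share and the 0-10 accuracy scale are protocol constants; all names are illustrative:

```python
def contributor_reward(stake, tips, fcq_accuracy_medians,
                       s_a_max=10, deferred_share=0.20):
    """R_C = P_D + S_AR * (P_E + P_T)."""
    # Article score S_AR: summed accuracy medians over the maximum possible
    s_ar = sum(fcq_accuracy_medians) / (len(fcq_accuracy_medians) * s_a_max)
    p_d = deferred_share * stake        # guaranteed (deferred) payout
    p_e = (1 - deferred_share) * stake  # evaluated stake payout
    return p_d + s_ar * (p_e + tips)
```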

1.3.1 Example Content Contributor Reward Calculation

Let's consider the reward calculation for a content contributor who has staked $200 on their article. Additionally, this article has earned $50 in tips from readers. The Judging Panel reviews the article and assigns it a score of 70%.

| Article Score (%) | Staked Amount | Tips |
| --- | --- | --- |
| 70% | $200 | $50 |

We determine the total payout for the contributor by using this simple formula:

Deferred Payout + Article Score × (Evaluated Stake Payout + Tips Payout)

First, we calculate the Deferred Payout:

Deferred Payout (20% of Staked Amount) = 0.20 × 200 = $40

Next, we calculate the Evaluated Stake Payout:

Evaluated Stake = 80% of Staked Amount (predefined by the protocol)

Evaluated Stake Payout = 0.80 × 200 = $160

Finally, we apply the formula:

Content Contributor's Reward = 40 + 0.7 × (160 + 50) = $187

Of the entire $250 (stake plus tips), the contributor earns $152 from the locked balance ($40 deferred payout + $112 evaluated stake payout) and $35 in tips. The global pool receives $48, which is 30% of the evaluated stake, and $15, which is 30% of the tips.
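The same figure falls out of the sketch above (a single question with median accuracy 7 is assumed purely to produce the 70% article score):

```python
print(contributor_reward(stake=200, tips=50, fcq_accuracy_medians=[7]))
# -> 187.0  ($40 deferred + 0.7 × ($160 + $50))
```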
