WebTrust

A decentralized Web of Trust implementation.

Inspired by Freenet's Web of Trust plugin (wiki).

Note

The program is functional but still in beta (latest v0.1.2) and under active development.

Fig 1. The temporary WebTrust logo – feel free to contribute a better one!

🕸️ What does it do?

Based on the list of people you trust, those they trust, and so on, each identity gets a "trust score" that represents how trustworthy they are TO YOU. The score you see for an identity will be different from what someone else sees. "Trustworthy" here refers to how likely they are to be sincere, i.e., NOT a scammer, spammer, troll, astroturfer, misinformer, etc. This score is displayed next to usernames on websites.

Important

This is a decentralized solution that requires each user to locally store and process a potentially massive web of trust, which may be infeasible – an alternative would be for social sites to implement such a thing server-side, but of course, this requires trusting the site to display undistorted scores.

👤 What exactly is an "identity" here?

There are two types of identities recognized by this program. A "user" is identified by a public key, whereas non-user identities are accounts on different websites. Both types can be trusted, but only by users, whose trust can be cryptographically signed and verified using their keys. Thus social media accounts can get trust even without the owner using the program, and you don't need to verify ownership of your social accounts – simply add them as trustees so those who trust you will also trust your accounts.
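The two identity types described above can be sketched as plain data structures. This is an illustrative model only — the names (`UserIdentity`, `SiteAccount`, `TrustList`) and fields are assumptions, not the extension's actual schema:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class UserIdentity:
    public_key: str   # users are identified by their public key

@dataclass(frozen=True)
class SiteAccount:
    site: str         # e.g. "example.com"
    username: str     # the account name on that site

@dataclass
class TrustList:
    owner: UserIdentity
    timestamp: int    # lists carry a timestamp so they can be updated
    trustees: list = field(default_factory=list)  # users and/or site accounts

# A user can list a social account as a trustee without the account's owner
# ever running the program.
alice = UserIdentity(public_key="<alice-public-key>")
bob = SiteAccount(site="example.com", username="bob")
graph = TrustList(owner=alice, timestamp=1700000000, trustees=[bob])
```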

v0.1.2 Optional feature: distrust

It is expected that allowing users to also list those they distrust will lead to "trust wars" where people use this feature for those they merely disagree with. If such a score is used to block people, it will stifle debate and discussion, since the score will represent how much a person shares your opinions, rather than how likely they are to be sincere. However, there may be places like focused political groups where the former feature is desired. Thus alongside the 'trust score' (which uses only trust), there is the option to store distrust and display a 'similarity score' (which considers both trust and distrust).

🖥️ How do you use the program?

Fig 2. The popup interface.

The Trust and Distrust buttons are self-explanatory. Remove removes an identity from your direct (dis)trustees if present, and removes their (dis)trustees if not. Click Update Scores after such operations to recompute all scores and prune your graph of trust. Add Alias adds a nickname for the identity.

My Graph downloads your graph. Ask those you trust for their trust graphs and upload them by clicking Upload Graph. This automatically trusts them and updates all scores. Public figures/entities might upload their graphs online.

(Dis)trust lists are signed with a timestamp, so they can be updated, and you should regularly send out your latest graph to your direct contacts. They then have to send out their graph, and so on, so changes to your graph will take a while to propagate to other people. But it's not a bug, it's a feature!

If the change propagated automatically, people would gain others' trust and then suddenly update their own trust list to include their bot army. Then people would continually need to revert changes to their graphs as bot armies randomly appear. But now, this propagation will be stopped by the few direct contacts, instead of everybody needing to revert it.

Note

Your private key is stored in storage.sync and your initial empty (dis)trust lists have a timestamp of 0, so if on a new device, you can sync storage and upload your old graph to restore your data.

🤓 How are scores calculated?

The method used to calculate trust depends on the ways users may try to undermine it. One attack would be to artificially inflate one's trust score with multiple accounts that trust each other, some of which try to gain your direct trust. The solution is to sort users into rings by their shortest distance (in trust steps) from you: only trust from those in closer rings is used to compute a user's trust score.

If the innermost user who trusts you sits in a particular ring, you are by definition in the very next outer ring. Thus, for any user, only trust from the immediately inner ring is considered.
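The ring assignment is a shortest-path computation over the trust graph. A minimal sketch using breadth-first search (the dict-of-sets graph representation is an assumption for illustration, not the extension's storage format):

```python
from collections import deque

def assign_rings(trusts, me):
    """Sort identities into rings by shortest trust distance from `me`.

    `trusts` maps each identity to the set of identities it trusts.
    Ring 0 is yourself; ring k+1 is anyone first reached from ring k.
    """
    ring = {me: 0}
    queue = deque([me])
    while queue:
        node = queue.popleft()
        for trustee in trusts.get(node, ()):
            if trustee not in ring:  # first visit = shortest distance
                ring[trustee] = ring[node] + 1
                queue.append(trustee)
    return ring

# Tiny example: you trust A; A trusts B; B trusts you (cycles are fine).
rings = assign_rings({"me": {"A"}, "A": {"B"}, "B": {"me"}}, "me")
# rings == {"me": 0, "A": 1, "B": 2}
```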

For simplicity, assume a person is either trustworthy or untrustworthy, and that a trustworthy person being wrong in an endorsement (endorsing an untrustworthy person) has probability $Q$, independently of others being wrong. So if $N$ trustworthy people endorse someone, that person being untrustworthy is the same event as all $N$ being wrong, and both have probability $Q^N$. Thus the probability of the endorsee being trustworthy is $1-Q^N$.

The only person guaranteed to be trustworthy is YOU (Ring 0). A person in Ring 1 will be trustworthy with probability $1-Q=P$. If someone in Ring 2 is trusted by $N$ people from Ring 1, they are on average, trusted by $PN$ trustworthy people, which means they can be approximated as being trustworthy with probability $1-Q^{PN}$.

The assumption that trustworthy people are wrong about someone independently of each other is not a very good one; someone who can trick one person is more likely to be able to trick others. But of course, we care more about simplicity than the mathematical accuracy of our scores. Even under that assumption, $1-Q^{PN}$ is just an approximation, but one that's close enough, and will suffice. The exact value is

$\binom{N}{0}\ Q^0\ (1-Q)^N\ (1-Q^N)$ ----- none in Ring 1 are untrustworthy

$+\ \binom{N}{1}\ Q^1\ (1-Q)^{N-1}\ (1-Q^{N-1})$ ----- one in Ring 1 is untrustworthy

$+\ \binom{N}{2}\ Q^2\ (1-Q)^{N-2}\ (1-Q^{N-2})$ ----- two in Ring 1 are untrustworthy

$...$

$=\ \sum_{i=0}^{N}\ \binom{N}{i}\ Q^i\ (1-Q)^{N-i}\ (1-Q^{N-i})$
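A quick numerical check of how the $1-Q^{PN}$ shortcut compares to the exact binomial sum (helper names here are hypothetical):

```python
from math import comb

def exact_trust(N, Q):
    """Exact probability the endorsee is trustworthy, summing over how
    many of the N Ring-1 endorsers are themselves untrustworthy (i)."""
    return sum(comb(N, i) * Q**i * (1 - Q)**(N - i) * (1 - Q**(N - i))
               for i in range(N + 1))

def approx_trust(N, Q):
    """The 1 - Q**(P*N) approximation used for speed, with P = 1 - Q."""
    P = 1 - Q
    return 1 - Q**(P * N)

Q = 0.2
# For N = 1 the exact value is (1-Q) * (1-Q) = 0.64: the single endorser
# must be trustworthy AND not wrong about the endorsee.
for N in (1, 3, 10):
    print(N, exact_trust(N, Q), approx_trust(N, Q))
```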

Similarly, someone trusted by $N$ people, each trustworthy with probability $P_i$, is in turn trustworthy with probability $1-Q^{\sum P_i}$. This is even more of an approximation; for example, the endorsers being untrustworthy are not independent events – endorsers endorsed by the same people are likely to end up being untrustworthy together. But we just need the score to be reliable enough to tell whether someone is genuine, and the priority is fast calculation for millions of accounts.
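Putting the rings and the $1-Q^{\sum P_i}$ rule together, trust can be propagated outward ring by ring. A sketch under the same assumptions as above (the graph and ring structures are illustrative, not the extension's internals):

```python
def trust_scores(trusts, rings, Q):
    """Propagate trust outward: a user's score is 1 - Q**(sum of the
    scores of their trusters in the immediately inner ring).
    `rings` is assumed to come from a shortest-distance pass (e.g. BFS)."""
    score = {u: 1.0 if r == 0 else 0.0 for u, r in rings.items()}  # Ring 0 = you
    for r in range(1, max(rings.values()) + 1):
        for user, ring in rings.items():
            if ring != r:
                continue
            # Sum trustworthiness of trusters one ring closer to you.
            inner = sum(score[t] for t, trustees in trusts.items()
                        if user in trustees and rings.get(t) == r - 1)
            score[user] = 1 - Q**inner
    return score

Q = 0.2
trusts = {"me": {"A", "B"}, "A": {"C"}, "B": {"C"}}
rings = {"me": 0, "A": 1, "B": 1, "C": 2}
scores = trust_scores(trusts, rings, Q)
# A and B: 1 - Q**1 = 0.8;  C: 1 - Q**(0.8 + 0.8)
```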

As for how to calculate the similarity scores taking distrust into account: you don't want people retaliating with distrust to affect the scores of trusted members of the community, so only distrust from higher rings should affect a person. Moreover, since trust is considered from only one ring, and we want distrust to be the inverse of trust, we must do the same for distrust. Thus trust and distrust apply to someone only from the innermost ring that trusts OR distrusts them.

We can use the previous formula $1-Q^{\sum P_i}$ for the similarity score, with some modifications:

  • subtract the probabilities corresponding to the distrusters, and
  • modify the formula to $\text{sgn}(\sum P_i)\ (1-Q^{|\sum P_i|})$ so it's symmetric about the origin and in $(-1,1)$, as opposed to plummeting to $-\infty$ as $\sum P_i$ decreases.

Finally, when displaying the scores, we clamp them all to the $[\min,\max]$ attainable in Ring 1, i.e., $[-(1-Q),\ 1-Q]$, and then map this range to the integers $\{-100,\dots,100\}$.
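The similarity formula and the display mapping above can be sketched as follows (function names are hypothetical):

```python
def similarity_score(sum_p, Q):
    """sgn(sum P_i) * (1 - Q**|sum P_i|): symmetric about the origin and
    bounded in (-1, 1). `sum_p` is trusters' scores minus distrusters'."""
    if sum_p == 0:
        return 0.0
    sign = 1 if sum_p > 0 else -1
    return sign * (1 - Q**abs(sum_p))

def display_score(raw, Q):
    """Clamp to Ring 1's range [-(1-Q), 1-Q], then map to -100..100."""
    bound = 1 - Q
    clamped = max(-bound, min(bound, raw))
    return round(clamped / bound * 100)

Q = 0.2
# A directly trusted person (sum_p = 1) hits the Ring 1 maximum of 1-Q,
# which maps to the displayed maximum of 100.
print(display_score(similarity_score(1.0, Q), Q))
```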
