A decentralized Web of Trust implementation.
Inspired by Freenet's Web of Trust plugin (wiki).
Note
The program is functional but still in beta (latest v0.1.2) and under active development.
Based on the list of people you trust, those they trust, and so on, each identity gets a "trust score" that represents how trustworthy they are TO YOU. The score you see for an identity will be different from what someone else sees. "Trustworthy" here refers to how likely they are to be sincere, i.e., NOT a scammer, spammer, troll, astroturfer, misinformer, etc. This score is displayed next to usernames on websites.
Important
This is a decentralized solution that requires each user to locally store and process a potentially massive web of trust, which may be infeasible. An alternative would be for social sites to implement such scoring server-side, but of course, that requires trusting the site to display undistorted scores.
There are two types of identities recognized by this program. A "user" is identified by a public key, whereas non-user identities are accounts on different websites. Both types can be trusted, but only by users, whose trust can be cryptographically signed and verified using their keys. Thus social media accounts can get trust even without the owner using the program, and you don't need to verify ownership of your social accounts – simply add them as trustees so those who trust you will also trust your accounts.
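For illustration, here is a minimal sketch of how the two identity types and a signed trust list might be modeled. All names and fields here are hypothetical, not the program's actual schema.

```ts
// Hypothetical data model -- illustrative only, not the extension's real schema.

// A "user" identity, addressed by their public key.
interface UserIdentity {
  kind: "user";
  publicKey: string; // e.g., a base64-encoded public key
}

// A non-user identity: an account on some website.
interface SiteIdentity {
  kind: "site-account";
  site: string;     // e.g., "example.com"
  username: string; // the account name on that site
}

type Identity = UserIdentity | SiteIdentity;

// Only users extend trust, so only users sign (dis)trust lists.
// Site accounts can appear as trustees without their owners ever
// running the program.
interface TrustList {
  truster: UserIdentity;
  trustees: Identity[];    // may include the truster's own site accounts
  distrustees: Identity[];
  timestamp: number;       // newer lists supersede older ones
  signature: string;       // over all fields above, by the truster's key
}
```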
It is expected that allowing users to also list those they distrust will lead to "trust wars" where people use this feature against those they merely disagree with. If such a score is used to block people, it will stifle debate and discussion, since the score will represent how much a person shares your opinions rather than how likely they are to be sincere. However, there may be places, like focused political groups, where that kind of filtering is desired. Thus alongside the 'trust score' (which uses only trust), there is the option to store distrust and display a 'similarity score' (which considers both trust and distrust).
The Trust and Distrust buttons are self-explanatory. Remove removes an identity from your direct (dis)trustees if present; otherwise, it removes that identity's own (dis)trustees from your stored graph. Click Update Scores after such operations to recompute all scores and prune your graph of trust. Add Alias adds a nickname for the identity.
My Graph downloads your graph. Ask those you trust for their trust graphs and upload them by clicking Upload Graph. This automatically trusts them and updates all scores. Public figures/entities might upload their graphs online.
(Dis)trust lists are signed with a timestamp, so they can be updated, and you should regularly send out your latest graph to your direct contacts. They then have to send out their graph, and so on, so changes to your graph will take a while to propagate to other people. But it's not a bug, it's a feature!
If changes propagated automatically, people could gain others' trust and then suddenly update their own trust lists to include their bot armies, and everyone would continually need to revert changes to their graphs as bot armies randomly appear. With manual propagation, a bad update is stopped by a few direct contacts instead of everybody needing to revert it.
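As a sketch of why the timestamp matters when accepting an uploaded graph: a list should only be applied if its signature verifies and its timestamp is newer than what is already stored. The shapes and the `verifySignature` helper below are assumptions for illustration, not this project's real API.

```ts
// Illustrative acceptance rule for an incoming signed (dis)trust list.
// Assumed helper, e.g. backed by Web Crypto or a library like tweetnacl:
declare function verifySignature(
  publicKey: string,
  payload: string,
  signature: string
): boolean;

interface IncomingList {
  publicKey: string; // the claimed author
  payload: string;   // serialized trustees/distrustees + timestamp
  timestamp: number;
  signature: string;
}

function shouldAccept(incoming: IncomingList, storedTimestamp?: number): boolean {
  // Reject forgeries: the payload must be signed by the claimed key.
  if (!verifySignature(incoming.publicKey, incoming.payload, incoming.signature)) {
    return false;
  }
  // Reject stale lists: only a strictly newer timestamp supersedes the
  // stored one. A fresh install stores timestamp 0, so any real list wins.
  return storedTimestamp === undefined || incoming.timestamp > storedTimestamp;
}
```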
Note
Your private key is stored in `storage.sync`, and your initial empty (dis)trust lists have a timestamp of 0, so on a new device you can sync storage and upload your old graph to restore your data.
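A minimal sketch of what that restore check might look like in a WebExtension; the storage key name `privateKey` is an assumption, not the extension's documented schema.

```ts
// Hypothetical restore check -- the storage key name is an assumption.
// chrome.storage.sync is synced across devices by the browser itself.
async function restoreKeyIfSynced(): Promise<boolean> {
  const { privateKey } = await chrome.storage.sync.get("privateKey");
  if (typeof privateKey === "string") {
    // The key synced over; uploading your old graph (timestamp > 0)
    // then replaces this fresh install's empty lists (timestamp 0).
    return true;
  }
  return false;
}
```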
🤓 How are scores calculated?
The method to calculate trust depends on the ways in which users may try to undermine it. One way would be to artificially increase their trust scores by having multiple accounts that trust each other, some of which try to gain your direct trust. The solution is to sort users into rings by their shortest distance (in trust steps) from you, and to only use trust from those in closer rings to compute a user's trust score.
But if the innermost user that trusts you is in a particular ring, you will, by definition of shortest distance, be in the very next outer ring. Thus only trust from the next inner ring is considered for a particular user.
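The ring assignment is just a breadth-first search over trust edges with you at the root. A sketch, with an assumed adjacency-map representation of the graph:

```ts
// Assign each user a ring = shortest trust-distance from you (ring 0).
// `trustees` maps a user's public key to the keys of the users they trust.
function computeRings(
  myKey: string,
  trustees: Map<string, string[]>
): Map<string, number> {
  const ring = new Map<string, number>([[myKey, 0]]);
  const queue: string[] = [myKey];
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const next of trustees.get(current) ?? []) {
      if (!ring.has(next)) {
        ring.set(next, ring.get(current)! + 1);
        queue.push(next);
      }
    }
  }
  return ring; // users unreachable via trust edges get no ring (and no score)
}
```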
For simplicity, assume a person is either trustworthy or untrustworthy, and that a trustworthy person being wrong in their endorsement (endorsing an untrustworthy person) is of probability $Q$.
The only person guaranteed to be trustworthy is YOU (Ring 0). A person in Ring 1 will be trustworthy with probability $1-Q$, since your single endorsement is wrong with probability $Q$. More generally, someone trusted by $N$ people who are each trustworthy with probability $P$ gets the trust score $1-Q^{PN}$: roughly $PN$ of the endorsements come from trustworthy people, and every one of them must be wrong for the person to be untrustworthy.
The assumption of trustworthy people being wrong about someone being independent events is not a very good assumption; someone who can trick one person is more likely to be able to trick others. But of course, we care more about simplicity than the mathematical accuracy of our scores. Even making that assumption, $1-Q^{PN}$ is just an approximation, but one that's close enough, and will suffice. The exact value is $1-\left(1-P(1-Q)\right)^N$.
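To fill in that step (a reconstruction from the formulas above): the person is untrustworthy only if every trustworthy endorser is wrong, so averaging $Q^k$ over the binomially distributed number $k$ of trustworthy endorsers gives

$$\Pr[\text{untrustworthy}] = \sum_{k=0}^{N} \binom{N}{k} P^k (1-P)^{N-k}\, Q^k = (1-P+PQ)^N = \bigl(1-P(1-Q)\bigr)^N.$$

The approximation replaces this average of $Q^k$ with $Q^{\mathbb{E}[k]} = Q^{PN}$, which is close when $Q$ is near 1.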
Similarly, someone trusted by $N$ users with differing trust scores $P_1, \dots, P_N$ gets the trust score $1-Q^{\sum_i P_i}$.
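Putting this together, trust scores can be computed ring by ring. A sketch using the approximation above; the constant `Q` is an assumed value, not one taken from the program:

```ts
const Q = 0.3; // assumed wrong-endorsement probability (illustrative value)

// Trust score from the approximation 1 - Q^(sum of trusters' scores).
// Only trusters in the immediately inner ring should be passed in.
function trustScore(innerRingTrusterScores: number[]): number {
  const sumP = innerRingTrusterScores.reduce((acc, p) => acc + p, 0);
  return 1 - Math.pow(Q, sumP);
}

// You (Ring 0) have score 1, so a Ring 1 user trusted only by you
// scores 1 - Q^1 = 1 - Q:
console.log(trustScore([1])); // 0.7 when Q = 0.3
```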
As for how to calculate the similarity scores taking distrust into account: you don't want people retaliating with distrust to affect the scores of trusted members of the community, so only distrust from closer, more trusted rings should affect a person. Moreover, since trust is considered from only one ring, and we want distrust to be the inverse of trust, we must do the same for distrust. Thus trust and distrust only apply to someone from the innermost ring that trusts OR distrusts them.
We can use the previous formula $1-Q^{\sum P_i}$, with two changes:
- subtract the probabilities corresponding to the distrusters, and
- modify the formula to $\text{sgn}\left(\sum P_i\right)\left(1-Q^{\left|\sum P_i\right|}\right)$ so it's symmetric about the origin and stays in $(-1,1)$, as opposed to plummeting to $-\infty$ as $\sum P_i$ decreases.
Finally, when displaying the scores, we can clamp them all to the [min, max] for Ring 1, i.e., to $[-(1-Q),\ 1-Q]$, so that no identity is ever displayed as more trusted (or distrusted) than someone you directly trust (or distrust).
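A sketch of the similarity score with the sign-symmetric formula and the final display clamp (reusing the assumed constant `Q` from the earlier sketch):

```ts
const Q = 0.3; // same assumed constant as in the earlier sketch

// Similarity score: trusters add their P_i, distrusters subtract theirs,
// and the sum S is mapped through sgn(S) * (1 - Q^|S|).
function similarityScore(trusters: number[], distrusters: number[]): number {
  const s =
    trusters.reduce((acc, p) => acc + p, 0) -
    distrusters.reduce((acc, p) => acc + p, 0);
  return Math.sign(s) * (1 - Math.pow(Q, Math.abs(s)));
}

// Clamp displayed scores to Ring 1's range [-(1-Q), 1-Q].
function clampForDisplay(score: number): number {
  const limit = 1 - Q;
  return Math.max(-limit, Math.min(limit, score));
}

// Someone trusted by two users with score 0.7 and distrusted by one:
console.log(clampForDisplay(similarityScore([0.7, 0.7], [0.7]))); // ≈ 0.57
```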