Ripple Labs Inc., a Google-backed technology company with ties to the White House, is funding research into algorithms that fine and “penalize” people for sharing stories deemed “misinformation” or “fake news.”
The research, which is being conducted at the University of Waterloo, follows Ripple positioning its blockchain-based technology for application with Central Bank Digital Currency (CBDC) by governments across the world. The company has reportedly collaborated with over a dozen countries on CBDC ventures, inking partnerships with the Hong Kong Monetary Authority, the Republic of Palau, the Royal Monetary Authority of Bhutan, Montenegro, and Colombia.
The company has also held meetings with high-level Chinese Communist Party officials despite fears that CBDC could easily be used to implement a Chinese-style “social credit score” system in the West. Ripple has also enjoyed financial backing from Chinese Communist Party-linked firms such as ChinaRock Capital Management and China Growth Capital in addition to Google.
Ripple also retains ties to the White House, as its former advisor, Michael Barr, serves as the Federal Reserve’s vice chair for supervision, only the second person to hold that post. A former high-ranking Obama-era Treasury official, Barr is “in charge of developing regulatory policies for cryptocurrencies and stablecoins” and “expected to scale daily oversight of both the biggest lenders and smaller financial firms that play a part in the overall economy.”
The research, led by Chien-Chih Chen, a PhD candidate in electrical and computer engineering at the University of Waterloo, could confirm fears that CBDC could be used to penalize people, even financially, for holding political or social beliefs contrary to mainstream ideas.
“It starts with the publication of a news article on a decentralized platform based on blockchain technology, which provides a transparent, immutable record of all transactions related to news articles. This makes it extremely difficult for users to manipulate or tamper with information. Second comes human intelligence in the form of a quorum of validators who are incentivized with rewards or penalties to assess whether the news story they’re reviewing is true or false,” explains a synopsis of the research.
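The “transparent, immutable record” the synopsis describes is, in blockchain terms, a hash-chained ledger: each event tied to a news article is appended to a chain in which every entry commits to the hash of the one before it, so tampering with an earlier entry breaks every later hash. The sketch below is a loose, assumed illustration of that idea only, not code from the research or from Ripple; the names and data layout are placeholders.

```python
# Illustrative sketch (assumed, not from the research): a hash-chained ledger
# of news-article events. Altering any earlier entry changes every later hash,
# which is what makes the record tamper-evident.

import hashlib
import json


def entry_hash(prev_hash: str, payload: dict) -> str:
    """Hash the previous entry's hash together with this entry's payload."""
    data = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()


class Ledger:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, payload: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        self.entries.append(
            {"payload": payload, "prev": prev, "hash": entry_hash(prev, payload)}
        )

    def verify(self) -> bool:
        """Recompute every hash; any edit to an earlier entry breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev or e["hash"] != entry_hash(prev, e["payload"]):
                return False
            prev = e["hash"]
        return True


ledger = Ledger()
ledger.append({"event": "publish", "article_id": "article-42"})
ledger.append({"event": "vote", "article_id": "article-42", "validator": "alice"})
print(ledger.verify())  # True; editing an earlier payload would make this False
```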
The technology could empower a group of people to decide whether or not a story is “misinformation”:
“The quorum would be a subset of the larger community of users on the platform. A quorum’s members could be chosen at random from people interested in validating news stories or from those with a proven reputation for authenticating news — or a combination of both groups. They’d verify news stories by reading an article and judging its veracity based on their own knowledge and sources. They would then state their opinion on whether or not the article is accurate. The quorum’s collective opinion would be used to establish a consensus on the accuracy of the article. The article would then be validated — or flagged as fake news — based on the outcome of the consensus mechanism.”
"Validators who provide accurate information that aligns with the consensus of the majority would be rewarded while those who provide fake news or inaccurate information would be penalized,” Chen explained.
“Those rewards or penalties could be in the form of various cryptocurrencies,” the research explained.
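Taken together, the quoted passages describe a simple mechanism: a quorum votes on an article, a majority consensus labels it accurate or fake, and validators are rewarded or penalized in cryptocurrency depending on whether they agreed with that consensus. The sketch below is an assumed, minimal illustration of that logic, not the researchers’ actual system; the names, token amounts, and simple-majority rule are all placeholders.

```python
# Minimal sketch (illustrative assumptions only) of the described mechanism:
# a quorum of validators votes on an article, a simple majority forms the
# consensus, and validators are credited or debited tokens depending on
# whether their vote matched that consensus.

from dataclasses import dataclass, field

REWARD = 1.0   # tokens credited for agreeing with the consensus (assumed value)
PENALTY = 1.0  # tokens debited for disagreeing with the consensus (assumed value)


@dataclass
class Validator:
    name: str
    balance: float = 0.0  # running cryptocurrency balance (illustrative)


@dataclass
class ArticleReview:
    article_id: str
    votes: dict = field(default_factory=dict)  # validator name -> True (accurate) / False (fake)

    def cast_vote(self, validator: Validator, is_accurate: bool) -> None:
        self.votes[validator.name] = is_accurate

    def consensus(self) -> bool:
        """Simple majority rule: validated if more than half judged it accurate."""
        yes = sum(1 for v in self.votes.values() if v)
        return yes * 2 > len(self.votes)


def settle(review: ArticleReview, quorum: list[Validator]) -> bool:
    """Apply rewards and penalties once the quorum's consensus is known."""
    outcome = review.consensus()
    for validator in quorum:
        if review.votes.get(validator.name) == outcome:
            validator.balance += REWARD   # aligned with the majority
        else:
            validator.balance -= PENALTY  # out of step with the majority
    return outcome


if __name__ == "__main__":
    quorum = [Validator("alice"), Validator("bob"), Validator("carol")]
    review = ArticleReview(article_id="article-42")
    review.cast_vote(quorum[0], True)
    review.cast_vote(quorum[1], True)
    review.cast_vote(quorum[2], False)
    validated = settle(review, quorum)
    print("validated" if validated else "flagged as fake news")
    for v in quorum:
        print(v.name, v.balance)
```

Note that under this kind of rule, being "penalized" simply means disagreeing with the majority of the quorum, which is the crux of the concern raised above.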
A glaring example of how the term “misinformation” can be improperly used is found in the University of Waterloo’s synopsis of the research, which laments that the spread of fake news may have influenced Brexit and President Donald Trump’s 2016 victory. The university also posits that technology companies have not gone far enough in their censorship efforts:
“The danger of disinformation — or fake news — to democracy is real. There is evidence fake news could have influenced how people voted in two important political events in 2016: Brexit, the exit of the United Kingdom from the European Union, and the U.S. presidential election that put Donald Trump in power. More recently the Canadian government has warned Canadians to be aware of a Russian campaign of disinformation surrounding that country’s war against Ukraine. Although big tech companies, including Facebook and Google, have established policies to prevent the spread of fake news on their platforms, they’ve had limited success.”
The study’s lead researcher admitted he was “confident our system has the potential to be applied in practical situations within the next few years.”