Blurt is, and always will be, a free speech platform where people are free to post their thoughts and ideas, to just Blurt out whatever they feel like. But free speech doesn't mean free rewards!
Blurt only has value when rewards are backed by some sort of value themselves, for example Proof-of-Brain. That value gets eroded when we start seeing "Proof-of-Copy-Paste" and "Proof-of-Plagiarism", which are currently rearing their ugly heads on Blurt.
Good content creates SEO (Search Engine Optimisation) value and discoverability on search engines, which helps Blurt frontends and content get discovered by external users. Plagiarism and copy-paste actually erode the SEO value of Blurt. Users think they are winning by upvoting their own garbage for rewards, but long term they are not helping the token price.
I reiterate that accounts should not be stopped from posting on chain, but they should not be allowed to gain from the reward pool if the content is of low value.
The Core Team has discussed many solutions at length. We certainly don't want to bring back the downvote, because that would cause flag wars and bullying. If you think downvotes are cool, have a look at this YouTube video first about the Downvote Predicament.
I will briefly cover some ideas we looked at.
Code name: Plan A
This is an idea dreamed up by our resident mathematician @rycharde. The idea is that we create an account that is used as a herald/trigger. This account would vote with 1% on posts that are not desirable, and the blockchain would then be coded to simply not pay out rewards at the 7-day mark to posts that have been marked by a micro-vote from the abuse account.
In this model the post will not earn rewards; the curators perhaps still will. However, should we be rewarding them for voting for trash?
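The Plan A payout check could be sketched roughly as follows. The herald account names and the 40%/60% split are illustrative assumptions taken from the severity idea below, not an actual implementation:

```python
# Hypothetical sketch of the Plan A payout check. Account names,
# severity tiers, and percentages are illustrative assumptions.
HERALD_FULL = "abuse.full"        # high severity: forfeit all rewards
HERALD_PARTIAL = "abuse.partial"  # moderate severity: chop off 60%

def payout_multiplier(voters):
    """Return the fraction of the pending payout the author keeps
    at the 7-day mark, based on which herald accounts micro-voted."""
    if HERALD_FULL in voters:
        return 0.0   # marked by the high-severity account: no rewards
    if HERALD_PARTIAL in voters:
        return 0.4   # moderate severity: 60% of rewards removed
    return 1.0       # unmarked posts pay out normally

# A post micro-voted by the moderate-severity account keeps 40%.
print(payout_multiplier({"alice", "abuse.partial"}))  # 0.4
```

Because the herald vote is only a 1% micro-vote, it barely changes the payout value itself; it simply acts as an on-chain flag the payout code can read.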
Advantages: Granular. This method allows spam fighters to target individual posts and not an account as a whole. Two abuse accounts with different severity can be used: one with high severity that signals that all rewards should be forfeited, and another with moderate severity that only chops off, say, 60% of the rewards.
Disadvantages: Centralised approach. With this method a group of people would be given posting authority over the abuse accounts. The accounts would have to be owned by the foundation, and the foundation would add/remove posting authority of abuse fighters as needed, which creates central reliance on the foundation.
The Diversity Index

The Diversity Index (DI), also conceived by @rycharde, would work like a reputation system. If you vote for just a few of the same circle of accounts each day and don't spread your votes to new authors, your DI will be low. If you spread your votes widely, your DI will be high, up to a maximum of 100%.
The idea is that the blockchain will allow you to receive rewards proportionate to your DI score. So say you are an author and you earn 100 BP on a post after curation is deducted, and your DI score is 80%; in that case you will only earn 80 BP in rewards.
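The scaling itself is simple arithmetic; here is the worked example from above as a minimal sketch (the function name is mine, and how the DI score itself is computed is left open):

```python
def di_adjusted_reward(author_reward_bp, di_score_pct):
    """Scale an author's post reward (in BP) by their Diversity
    Index score, expressed as a percentage from 0 to 100."""
    return author_reward_bp * di_score_pct / 100

# The example from the text: 100 BP earned with a DI of 80%
# pays out only 80 BP.
print(di_adjusted_reward(100, 80))  # 80.0
```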
Advantages: The DI incentivises users to curate widely and not circle-jerk the same people. It can also be used alongside any of the other abuse fighting ideas.
Disadvantages: Complex to implement and might be compute-heavy. Doesn't stop users from voting trash, only helps distribute votes.
Human Verification

The concept is that we would add a field to each user account, let's call it "verified", and an external verification service API would be used as an oracle to write to the chain and update that field as TRUE or FALSE. All users would be set to verified = FALSE post-Hardfork and would have to verify with a service such as brightID.org or similar.
Only users verified as human, with the field set to TRUE, would gain access to the rewards pool. All accounts would still be able to post and be ranked using the declined-payout posting method, but could not receive reward payouts until verified.
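The gate could look something like this sketch. The `verified` field is the one proposed above, but the account structure and oracle call shape are assumptions for illustration only:

```python
# Illustrative sketch of the verified-flag gate. The account store
# and oracle update shape are assumptions, not the real chain schema.
accounts = {
    "alice": {"verified": False},
    "bob": {"verified": False},
}

def oracle_update(name, is_human):
    """The external verification service, acting as an oracle,
    writes the verified flag for an account on-chain."""
    accounts[name]["verified"] = is_human

def can_earn_rewards(name):
    # Anyone can still post; only verified humans reach the pool.
    return accounts[name]["verified"]

oracle_update("alice", True)   # alice verifies with the service
print(can_earn_rewards("alice"), can_earn_rewards("bob"))
```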
Advantages: This solution cuts out all sockpuppet accounts from the rewards pool immediately and only lets real humans earn rewards. Depending on the verification partner chosen, it could add trust benefits and access to a wider network of already verified users on the verification partner's network.
Disadvantages: The chain will no longer be self-sovereign and will have an intrinsic dependency on an external service. This service could become corrupt or the oracle could be tampered with to verify bad actors. If a third party is used that requires ID and Address verification, that service could be compromised via leaks or even by court orders from authorities wanting to find out the identities of pseudonymous bloggers they want to target.
Services like BrightID luckily do not require ID or anything; they rely on you getting verified by human friends when you share a single-use QR code with them, and the more friends you verify with, the more human you are. This can be gamed by installing the app on multiple devices, or even virtual devices, and creating multiple virtual profiles that way. Jacob and I tested this, and Jacob created two BrightID identities using two devices he had. If a better, non-gameable Proof-of-Human solution comes along in the future, it could be an option.
Another disadvantage is that verification needs to be done regularly, maybe every 3 months, otherwise a black market for verified accounts will emerge and destroy the whole concept.
Witness Operated Abuse Lists
@jacobgadikian proposed this idea, where witnesses would run abuse lists: essentially, each witness would keep an abuse list on their server with the names of accounts that they don't want to access the rewards pool. There would be a consensus threshold, where say 15 of 20 witnesses would have to have the same name on their lists for the chain to exclude that account from rewards.
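The consensus step reduces to counting how many witness lists an account appears on. A minimal sketch, with the 15/20 threshold from the text and made-up account names:

```python
from collections import Counter

THRESHOLD = 15  # e.g. 15 of 20 witnesses must agree

def excluded_accounts(witness_lists):
    """Return the accounts that appear on at least THRESHOLD of the
    witness abuse lists, and are therefore excluded from rewards."""
    counts = Counter(
        name for lst in witness_lists for name in set(lst)
    )
    return {name for name, n in counts.items() if n >= THRESHOLD}

# 20 witnesses: "spammer" is listed by 16 of them, "edgecase" by
# only 10, so only "spammer" crosses the consensus threshold.
lists = ([["spammer", "edgecase"]] * 10
         + [["spammer"]] * 6
         + [[]] * 4)
print(excluded_accounts(lists))  # {'spammer'}
```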
Advantages: Possibly the easiest to implement, it has been done on Steem before.
Disadvantages: Might be hard to get all witnesses to agree and might be a slow process to get consensus.
The abuse lists will not be public, as they reside on the witness servers, so the process will not be transparent.
With this method witnesses will always be playing catch-up, as abusers can switch to multiple new accounts when the former ones have been abuse-listed.
Witness bribery and corruption could occur; for example, whales that abuse rewards could vote only for witnesses that don't have them on their abuse lists, or pay them off not to add them. The same goes for witness threats, where witnesses could be targeted by abuse-listed users in their personal capacity.
My Proposed Solution - Blockchain Moderators
After speaking with @jacobgadikian, I formulated an idea based on his suggestion above regarding witness operated abuse lists.
The idea is that we want witnesses to be corruption-free and to stay on task securing the blockchain, without having to worry about policing rewards as well; not every witness is good at abuse fighting or even wants to get involved with it. Their job is to be ambassadors and developers of the chain and to run reliable nodes.
So I propose that we create a different set of validators for the rewards pool, voted in the same way as witnesses are: by stake-weighted voting. In this solution, up to 20 moderators would be voted in and tasked with seeking out accounts to be added to their abuse lists.
Much like in Jacob's solution, there would be a consensus threshold of, say, 15/20, where an abuser's name would have to appear on 15 moderator abuse lists before the blockchain suspends that account's access to the rewards pool.
The difference, however, is that the abuse lists would be public and auditable: they would be posted using Custom JSON by each moderator and recorded on-chain, so moderators would not need to run servers like witnesses do.
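A moderator's on-chain update might look something like the sketch below. The `custom_json` id and the add/remove payload keys are illustrative assumptions; only the general operation shape follows the Custom JSON convention used on Graphene-style chains like Blurt:

```python
import json

# Hypothetical shape of a moderator's abuse-list update, broadcast
# as a custom_json operation and recorded publicly on-chain.
op = {
    "type": "custom_json",
    "id": "blurt_abuse_list",                  # assumed op id
    "required_posting_auths": ["moderator01"],  # assumed account
    "json": json.dumps({"add": ["spam.account"], "remove": []}),
}

def apply_update(current_list, payload):
    """Replay one moderator update against their public abuse list,
    so anyone can audit the list by replaying the chain history."""
    updated = set(current_list) | set(payload["add"])
    return updated - set(payload["remove"])

result = apply_update(set(), json.loads(op["json"]))
print(sorted(result))  # ['spam.account']
```

Because every update is an ordinary on-chain operation, anyone can reconstruct each moderator's current list and check when the 15/20 threshold was crossed.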
We can also redirect some of the blockchain inflation to the moderators. Much like witnesses share in 10% of inflationary rewards, moderators could perhaps share in 3%, since they don't have server costs; for the time being this could be deducted from the @blurt.dao allocation.
Advantages: Fast and transparent Custom JSON blacklist updating.
Helps focus witnesses on blockchain security and keeps them free from corruption, bribery, and threats.
Moderators would have to perform well, otherwise they would be unvoted by the community and lose their share of inflation rewards.
Disadvantages: Moderators could still be subjected to corruption, threats, and bribery. Moderators could, however, choose to operate anonymous accounts that are not linked to their social profile accounts; it would be harder to get voted in this way, but perhaps in time they could be voted in for their diligent work rather than their social standing and reputation.
This targets rewards denial at the account level instead of the post level: there is no granularity, just blanket rewards denial until an account is removed from the abuse lists. However, this solution could perhaps be used in conjunction with Plan A, where the 20 moderators able to issue the special trigger vote that demonetises content would be voted in by the community in a decentralised manner. That way specific posts are targeted and not entire accounts, unless an account is placed on autovote for repeat offenders.
Should rewards be limited only on the author's side, or should curators who voted for the garbage content also receive no rewards on posts by the abuse-listed authors they voted for?
This is a lot of food for thought, please take your time to digest it, and please comment below and offer suggestions, thoughts, and improvements.