
Open Source Censorship

muse · Jun 2, 2018, 10:00:13 PM

[The censored "CENSORED" sign is hilarious]

There are three issues with the way censorship is conceived today where I think improvement can be made.

1)    Censorship by businesses is a content choice; it's not a moral requirement to censor or not censor.
2)    Censorship is unhelpfully viewed in either binary or ternary terms. That is, something is either explicit, non-explicit, or illegal (ternary). And if it is legal, it is either explicit or non-explicit (binary).
3)    Censorship is enforced by centralized authority.

Please keep in mind that the words binary and ternary link back to the second point, and the word centralized links back to the third. I will not treat the issue of the meaning of Free Speech in a sovereign State, but I will treat the issue of seeing censorship as binary, ternary, and centralized.

Why Censorship?

There are essentially two kinds of censorship on Minds, just as on most other platforms: Censorship as Morality and Censorship as Preference. Censorship as Morality refers to content that is not permitted to exist on Minds (or some other platform), probably due to it being illegal. Censorship as Preference refers to the preference setting of turning "explicit" content on or off.

1) Censorship as Morality - Legal vs Illegal
Platforms, whether news, social networks, radio, or TV, are all under obligation by their overseeing State to obey certain regulations. What is permitted by the State can, for this article's purposes, be considered a moral judgement. This might mean that some political opinions are viewed as wrong, that drugs are wrong, that some pornography is wrong, etc. All content then falls under legal or illegal. Minds, Facebook, Google, etc. will all ban certain content that falls under the category of illegal.

2) Censorship as Preference - Legal Non-Explicit (Everyone) vs Legal Explicit
Within the category of legal, one can select a preference of whether or not to see Explicit posts. This is true of Minds as well as many other platforms and search engines like DuckDuckGo, or more invasive and insecure search engines such as Google. Some users prefer not to see certain content. Thus, this kind of censorship is just user preference.

Thus, for as long as Minds is held accountable for the content on its platform by both the State's legal requirements and its users' preferential requirements, Minds will need to censor some content; the question is just how to do this so that it pleases the optimal number of people. "No censorship" will annoy a lot of people because, as shown above, much of "censorship" is really just a preference.

The Problem

For some time there has been a degree of kerfuffle over Minds' attitude towards some explicit content. However, so long as Minds meets its regulatory obligations (which could be much lighter if its servers were decentralized), the problem of determining which content should be marked explicit could be solved by using a new, more community-involved paradigm.
At present, users have the ability to flag content as Explicit. So the user is involved in this process, yet final decisions are made by Minds staff (centralized), and users can only choose between Explicit and Non-Explicit (binary).

Beyond Binary Censorship

In order to better suit user preferences for what is Explicit or Non-Explicit, Minds could add additional options for the user, perhaps even dozens. Some initial family-friendly ideas are as follows (a rough code sketch of how they might work appears after the list):

No Drug references
No Pornography
No Hardcore Pornography
No Gore
No Extreme Gore
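
As a rough sketch of how this might work (every label name and type below is hypothetical, invented for illustration, and not an actual Minds API): each post carries a set of content labels, and a user's preference becomes simply the set of labels they block.

```typescript
// Hypothetical content labels a post could carry. Names are illustrative
// only, not an actual Minds API.
type ContentLabel =
  | "drug-reference"
  | "pornography"
  | "hardcore-pornography"
  | "gore"
  | "extreme-gore";

interface Post {
  id: string;
  body: string;
  labels: Set<ContentLabel>;
}

// A user's preference is simply the set of labels they do not want to see.
type BlockList = Set<ContentLabel>;

// A post is visible to a user if it carries none of their blocked labels.
function isVisible(post: Post, blocked: BlockList): boolean {
  for (const label of post.labels) {
    if (blocked.has(label)) return false;
  }
  return true;
}

// Example: a full family-friendly setting that blocks everything above.
const familyFriendly: BlockList = new Set<ContentLabel>([
  "drug-reference",
  "pornography",
  "hardcore-pornography",
  "gore",
  "extreme-gore",
]);
```

The nice property of this shape is that adding a new option is just adding a new label, rather than redefining what "Explicit" means for everyone.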


If these seem Draconian to some, remember this is a user setting, and it may help Minds become more family friendly and make some inroads into settings like schools. How many people really want their 6-year-olds learning about these things? At any rate, since this is a user setting, those parents who wish to get their toddlers clued up on 420 are perfectly capable of doing so.

These are just initial ideas, but ultimately, if Minds took up this task without the next step, it would be defining the categories for users, and not giving them the chance to customize as much as they would like.

Beyond Centralized Censorship

To take the customization settings further than the above, Minds could give users the ability not only to select predefined preference settings but also to create their own custom settings, and to make those custom settings sharable with others. A user could submit these settings to Minds, and they could go into a database which other users could browse to find recommended preferences from others.
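
As a sketch of what a sharable setting could look like (the structure and endpoint below are invented for illustration, not an existing Minds API), a preset is just a named block list with an author attached:

```typescript
// A sharable preset: a named block list with an author, so other users
// can discover and adopt it. (Hypothetical structure and endpoint, not
// an actual Minds API.)
interface CensorshipPreset {
  name: string;      // e.g. "Public-School"
  author: string;    // e.g. "Tom1984"
  blocked: string[]; // the content labels this preset filters out
}

// Publishing a preset is just posting it to the community database.
async function publishPreset(preset: CensorshipPreset): Promise<void> {
  await fetch("https://example.invalid/presets", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(preset),
  });
}
```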

This way, the process of censorship is completely community developed; it is "open source", not only in the options being publicly available, but also in the users' decision-making ability. Explicit content is then completely in the users' power.

From this, popular standards and "gatekeepers" are likely to develop. These might have names such as:

"Public-School" by Tom1984
"Work" by Dick501
"Public-Airport" by Mohammed911

Thus schools, families, airports, etc. could each choose to adopt their own censorship standard, without infringing on the less intense censorship standards of others.
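
Adopting such a standard could then be as simple as importing a preset by name; again, the endpoint and preset names below are illustrative only:

```typescript
// Adopting a shared standard: fetch a preset by name and apply its
// block list locally. (Endpoint and names are illustrative only.)
async function adoptPreset(name: string): Promise<Set<string>> {
  const res = await fetch(
    `https://example.invalid/presets/${encodeURIComponent(name)}`
  );
  const preset: { blocked: string[] } = await res.json();
  return new Set(preset.blocked);
}

// e.g. a school account opting into a community "Public-School" standard:
// const blocked = await adoptPreset("Public-School");
```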

This also a-moralizes the censorship process. Censoring does not need to be considered a moral task (at least, so long as it is "legal" content), but just personal preference and settings suitable to the environment.

Unifying Censorship and Preference

and making advertisements personal while staying private

One problem that Minds will likely need to confront soon is that, because it does not collect information on users, its advertising/boost system is not targeted, unlike Facebook or Google. On the bright side, Minds has not compromised user privacy for advertising revenue. On the dark side, boosting on Minds is less valuable than it could be, and users see boosts that are less interesting to them.

This also means that people with specific niche interests are less likely to find people like them, since the advertisements targeted at the general audience are the ones that are most popular. Niche sub-communities may have more difficulty forming.

Personal data is necessary to create targeted advertising and improve user experience. Minds, taking privacy seriously, has refused to collect personal data. Yet there is another way: allow users more options to choose their own preferences. Unify the concepts of censorship and preference; the only exception needs to be those legal standards for content required by the legislative bureaucracy.
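
A rough sketch of the idea: if boosts carried interest tags and the user's declared preferences stayed on their own device, matching could happen client-side, so no profile ever leaves the user's machine. Everything below is hypothetical, not an existing Minds mechanism.

```typescript
// Client-side boost matching: the user's declared interests never leave
// the device; ranking happens locally. (Hypothetical sketch, not an
// existing Minds mechanism.)
interface Boost {
  id: string;
  tags: string[]; // interest tags declared by whoever boosted the post
}

// Rank boosts by how many of their tags the user has opted into,
// showing the best matches first.
function rankBoosts(boosts: Boost[], interests: Set<string>): Boost[] {
  const score = (b: Boost) =>
    b.tags.filter((t) => interests.has(t)).length;
  return [...boosts].sort((a, b) => score(b) - score(a));
}
```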

Special Thanks to

@RealMindsChan for our chat
@Jon & @Jack for listening
@zeqkris and his blog Future of Advertising
Shirky's brilliant article Ontology is Overrated: Categories, Links, and Tags

Afterword

I am aware of some problems with this article, such as the following:

1)    The difficulty of replacing binary Explicit vs Non-Explicit content with an unlimited number of options. How could a user then mark content as explicit for other users? There is an answer, but it involves difficult graph theory.

2)    The issue of Valuable Targeted Advertisements vs Privacy Protection is ultimately not completely solvable; there must always be a tradeoff. Selecting preferences is itself a privacy tradeoff, as it gives away information about yourself and allows a profile/fingerprint of you to develop which would not otherwise exist.