Boosting Truth in Media - keskival/ai_enabled_transparency_of_governance_and_power GitHub Wiki

What Is Boosting Truth?

Boosting truth is less a collection of solid facts than a process that tests and maintains a preference for truth.

The "post-truth age" in global politics has shown us the risks of losing the value of truth.

How can AI solutions help us boost truth? In social media, advertisers can already pay to boost their messages, and services are available to the rich for removing and obfuscating inconvenient information on the internet. We need to point these tools in the opposite direction, so that these engines enrich the truth rather than corrupt it.

Media is shaped by boosting and filtering. The main question is which content to boost and which to filter.
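To make the boost-and-filter framing concrete, here is a minimal sketch of a feed pipeline: posts get a relevance score, boost multipliers raise selected posts, and filter predicates remove others. All names (`Post`, `rank_feed`, `base_score`) are hypothetical illustrations, not an API from the project.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    base_score: float  # hypothetical relevance score assigned by the platform

def rank_feed(posts, boosts, filters):
    """Drop posts matching any filter predicate, then sort by boosted score."""
    visible = [p for p in posts if not any(f(p) for f in filters)]
    return sorted(visible,
                  key=lambda p: p.base_score * boosts.get(p.text, 1.0),
                  reverse=True)

posts = [Post("ad", 0.5), Post("news", 0.4), Post("spam", 0.9)]
boosts = {"ad": 3.0}                    # e.g. a paid boost multiplier
filters = [lambda p: p.text == "spam"]  # e.g. a disinformation filter
feed = rank_feed(posts, boosts, filters)
# "spam" is filtered out; "ad" outranks "news" because of its boost
```

The point of the sketch is that boosting and filtering are independent levers: the same pipeline serves truth or corruption depending on who controls `boosts` and `filters`.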

Centralization vs Decentralization

There are definite risks in centralizing the determination of what is truthful and what is not. Centralization also assumes that the information needed to judge the validity of content is available to the central actor.

Most social media platforms today boost paid content and block content that violates their centralized community guidelines. Paid content is further personalized to individual users, but blocks, filters, and deboosts are not. People can personally block other people they have problems with, but this remains ineffective against larger-scale disinformation campaigns.

What if people could algorithmically delegate the decisions of boosting and filtering to other people, experts, or institutions they share values with? The platform provider should not make these decisions on behalf of its users. The users should be in control of their media, whether that means advertisement preferences or value-based filters.
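One way such delegation could work is a weighted vote among a user's chosen delegates: each delegate publishes allow/block judgments, and the user's client aggregates them by trust weight. This is a sketch under assumed data shapes (`delegations`, `judgments`, and the delegate names are all hypothetical), not a proposal for a specific protocol.

```python
def delegated_verdict(content_id, delegations, judgments, default=True):
    """Aggregate trusted delegates' allow/block judgments by weighted majority.

    delegations: delegate -> trust weight chosen by the user (hypothetical schema).
    judgments:   delegate -> {content_id: bool}, where False means "filter out".
    """
    total = allow = 0.0
    for delegate, weight in delegations.items():
        verdict = judgments.get(delegate, {}).get(content_id)
        if verdict is None:
            continue  # this delegate has not judged this content
        total += weight
        if verdict:
            allow += weight
    if total == 0:
        return default  # no delegate opinion: fall back to the user's default
    return allow >= total / 2

# A user trusts a fact-checking organisation twice as much as a friend.
delegations = {"fact_checker_org": 2.0, "friend": 1.0}
judgments = {
    "fact_checker_org": {"post42": False},  # flags post42 as disinformation
    "friend": {"post42": True},
}
# post42 is filtered because the weighted majority says so;
# unjudged content falls back to the default of being shown.
```

The design choice worth noting is that the aggregation runs on the user's side from weights the user sets, so the platform never has to be the arbiter of truth.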