19 Comments

[Hot Topic] Should social networks arbitrate / fact-check user posts?

https://www.theshovel.com.au/2020/05/28/mark-zuckerberg-dead-at-36-says-social-media-fact-check/

This is the hot topic in tech right now. What's your indie hacker take on it? It's important for our community because it affects how we build things.

I personally think there should be a middle ground: social networks shouldn't arbitrate or delete posts, but independent people should be able to present counter-arguments falsifying or supporting a post, in a way that's highlighted more prominently than ordinary comments.

What do you think?

  1. 6

    who watches the watchmen?

  2. 3

    Where does fact checking end?

    1. 1

      It ends when a piece of information is decorated with facts and alternate viewpoints. [Personal opinion]

      1. 2

        This comment was deleted a year ago.

  3. 3

    I have to agree with Mark on this. Users should be able (or at least be given the opportunity) to discern what’s true or false — not the company.

    1. 1

      This comment was deleted 4 years ago.

    2. 1

      This comment was deleted a year ago.

  4. 3

    I think fact checking is a great idea. I also think banning users is fine to create the community you want. It definitely gets weird when you're talking about global social networks.

    I guess what we really need is a stable, educated public. This problem is always going to exist to some degree, and unless more people have stability in their lives paired with a good education, it's going to persist at its current, elevated level.

    1. 2

      How do we know social networks have the right judgment on any matter? Isn't "banning" too much power in the hands of a few people?

      Also, a "stable, educated public" is not a reality, and IMO it won't be for the next 100 years.

      1. 1

        I agree :)

        I'll start with your last point: Yep, it's not a reality, so we can look forward to our current mess for a long time! I like to think it'll be much less time than 100 years, but I wouldn't be surprised if you're right.

        To your first point: We don't, and they're filled with garbage. So we basically have the choice of (1) unregulated like 4chan, (2) regulated by private corporations, or (3) regulated by the government. They all sound pretty bad to me ;-)

  5. 2

    The further your field is from mathematics, the fuzzier the definition of what a fact is.

    Recently Richard Dawkins tweeted "What honest, reasonable person could EVER object to being fact-checked?". I immediately wondered how one would go about fact checking opinions about God's existence/non-existence.

  6. 2

    I am all against it.

    Only moderate the posts that encourage real-world violence and some obvious spamming. Maaaaybe cyber-bullying.

    Other than that - no.

    I remember how two or three months ago, Reddit (?) posts claiming the Wuhan lab was the source of the coronavirus were blocked because they were supposedly conspiracy theories without any basis in fact.

    Much later, the media ran stories that US intelligence was actually investigating that as a realistic possibility.

    You can't have "truth arbiters" because no one knows the whole truth.

    What we consider true one month might change the next. Will we do monthly blocking and unblocking of posts based on our current best knowledge? No, that's absurd.

    This whole thing is absurd.

    1. 1

      90%+ of people don't fact-check, and a really large percentage, 75%+, just believe what they see or hear. Influencers are the new kings and queens of the world. If they are allowed to spread false information, we run the risk of a large chunk of the population being fooled and kept in the dark. Last I checked, that's how politics works in most countries, and that, in my opinion, is the biggest con of democracy. I think the bigger a person's influence, the more they should be arbitrated and scrutinized, not only by social networks but by people as well. That's just my opinion, though.

      1. 3

        I just feel like this is a slippery slope.

        What if I write a mathematical statement that is wrong? Will it be taken down?

        What if I joke that my name is Ziggy when it isn't? Will my post be taken down?

        What about comedy, irony, and sarcasm?

        It just scares me. There will be a group of people that will decide what is true and what is false. Am I really the only one that finds that terrifying?

        Let me ask you: do you want to amend the constitution with a law that forbids people from speaking false sentences?

        1. 2

          BTW, I was born in a post-Soviet country.

          A history of censorship and "deciding" what is true and what is not is literally ingrained in our culture — in the books, movies, poems, and songs from that era.

          People fought hard and sometimes even gave their lives so that we can speak whatever the fuck we want.

          You don't want to go that route, silly Americans.

        2. 1

          I absolutely agree on that part. No one should be allowed to "ban" me or "take it down". Everyone should be allowed to say I'm wrong or criticize me. Everyone should also be allowed to criticize my criticism.

          1. 2

            What an irony that now I - a kid from the Soviet bloc - have to ask Americans not to censor stuff...

  7. 1

    The problem with censorship of socially shared content, comments and discussions is that everyone has their own view of what is or is not valid or offensive.

    Facts can be found or twisted to argue any point, because facts are only as good as the bias of their source (and there is ALWAYS bias to some degree, no matter the source) and the opinions of the one presenting them.

    I understand and respect the need to protect your site/service/app reputation and branding, but there is a fine line between doing that and treating people using your service in a condescending way.

    The only viable solution to me is to put the burden on the individual readers, viewers, consumers, etc.

    I have no problem with social media sites flagging content as long as it remains viewable by all users.

    That said, they should allow users to say, "I disagree with this flag" and have it removed from their screen so they do not see it again, and if possible use that to avoid showing them similar flags on content in the future.

    Also, if enough people disagree with the flag (not a majority, but still a significant number of users), then the flag should be removed from the content for everyone.

  8. 1

    Absolutely. The really flagrant stuff should have Snopes-like ratings, or at least a "Potential Misinformation" warning. A huge chunk of people, across political/ideological lines, carry misinformation with them. See the impact here: https://www.scientificamerican.com/article/cognitive-ability-and-vulnerability-to-fake-news/

    This won't stop the problem, but the really bad offenders (think Alex Jones) need to be curbed.

    We wouldn't let Hitler's cabinet and propaganda division post their weaponized social media content in 2020... would we?
    https://encyclopedia.ushmm.org/content/en/article/nazi-propaganda

    “task is not to make an objective study of the truth, in so far as it favors the enemy, and then set it before the masses with academic fairness; its task is to serve our own right, always and unflinchingly.”

  9. 1

    Good question, @swebdev. I heard a good summation: the right to free speech does not imply the right to a bullhorn.

    Imagine someone saying to you, "Hey - build me this communications platform because I want to use it to say wild shit that will cause all sorts of drama."

    I'd say, "Hell, no! Build your own goddamn platform."

    Now imagine they're saying, "Hey - you already built this communications platform, I'm already using it, and I want to continue using it to say wild shit that will cause all sorts of drama, so you need to maintain this thing for me."

    I'd say, "Hell, no! Maintain your own goddamn platform."

    Nobody should be able to force me to work, or force me to continue to work, to provide them a service.

    But the real question is a little more nuanced, like whether or not Twitter should have the legal protection of Section 230 if they're adding commentary to content created by others, which arguably puts them in the "publisher" category (which is not protected by Section 230).

    Speaking of the recent news stories, I personally don't think Twitter's changes are so drastic, given that they've left everything intact and just added a link (in one case) or a warning (in the other). I don't know that those small changes constitute being a "publisher" but I'm open to hearing those arguments. I wish Twitter had been enforcing their TOS the whole time, but it seems they've been inconsistent.

    All of that said, First Amendment issues are where you can find yourself -- if you care about being rational -- defending the rights of the very worst people. For example, the ACLU defending the rights of the Westboro Baptist Church and writing, "We All Need to Defend Speech We Hate."

    So yeah - I'm trying to think of a summary of those thoughts...

    1. As Twitter has the right to manage its own platform, it's not a First Amendment issue.
    2. Without Section 230, we couldn't have platforms like Twitter / FB / Reddit / IH.
    3. It makes sense that a publisher should be liable for libel or malicious & incendiary content.
    4. I don't think Twitter's recent TOS enforcement makes them a publisher.
  10. 1

    Hmm, definitely a tough nut to crack without sufficient resources, and it can still be a slippery slope. For example, we see those images shared with quotes misattributed to famous people, not just memes but motivational sayings. Is that hurting anyone? Then there's satire like The Onion. How do you flag that? So the question is where we draw the line.

    The misquotes don't seem to hurt anyone. But when people rely on information for their health, that's tough. A blog sharing certain diet changes may have helped one person alleviate symptoms of their health condition, but is that advice merely anecdotal, or drawn from focused tests with sufficient sample sizes and controls to validate it?

    Facebook in this case may have sufficient resources (people/moderators) as well as tooling (AI, flagging filters, etc.) to handle this. But it sets a precedent that others may have to follow, spending energy and money on it. Definitely a tough question to answer.
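Comment 7's flagging scheme (a flag stays visible to everyone, each user can individually dismiss it, and once a significant minority of viewers dispute it, it comes off for everyone) can be sketched in code. This is a hypothetical model only: the class, the method names, and the `DISPUTE_THRESHOLD` value are all invented for illustration, not anything any platform actually implements.

```python
# Hypothetical sketch of the flag-dispute idea from comment 7.
# A flag remains attached to a post; each user can dismiss it for
# themselves, and once enough viewers dispute it, it is removed globally.

DISPUTE_THRESHOLD = 0.25  # fraction of viewers disputing; arbitrary choice


class FlaggedPost:
    def __init__(self, post_id: str, flag_note: str):
        self.post_id = post_id
        self.flag_note = flag_note        # e.g. "Potential Misinformation"
        self.viewers: set[str] = set()    # users who have seen the flag
        self.disputers: set[str] = set()  # users who clicked "I disagree"

    def view(self, user_id: str) -> None:
        """Record that a user has seen the flagged post."""
        self.viewers.add(user_id)

    def dispute(self, user_id: str) -> None:
        """User dismisses the flag; it is hidden for them from now on."""
        self.viewers.add(user_id)
        self.disputers.add(user_id)

    def flag_active(self) -> bool:
        """Flag is removed for everyone once the disputing share of
        viewers reaches the threshold (a significant minority, not
        necessarily a majority)."""
        if not self.viewers:
            return True
        return len(self.disputers) / len(self.viewers) < DISPUTE_THRESHOLD

    def flag_visible_to(self, user_id: str) -> bool:
        """A user sees the flag only if it is still globally active
        and they have not personally dismissed it."""
        return self.flag_active() and user_id not in self.disputers
```

With five viewers and a 0.25 threshold, one dispute (a 0.2 share) hides the flag only for that user, while a second dispute (0.4) pushes past the threshold and removes it for everyone.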
