Jumping Ship

You might find the departing messages of some recent Facebook employees, published at BuzzFeed by Ryan Mac and Craig Silverman, interesting or depressing or both. Here’s the one that caught my attention:

“AI will not save us,” wrote Nick Inzucchi, a civic integrity product designer who quit last week. “The implicit vision guiding most of our integrity work today is one where all human discourse is overseen by perfect, fair, omniscient robots owned by [CEO] Mark Zuckerberg. This is clearly a dystopia, but one so deeply ingrained we hardly notice it any more.”

The article also lists the 10 Facebook pages dispensing the greatest volume of hateful content. All are on the right and include Breitbart, Fox News, The Daily Caller, and Donald Trump for President. That could be viewed in more than one way: either those pages actually have lots of followers, lots of volume, and hateful content, or Facebook’s definition of hateful content skews left.

Update

I think it’s worth mentioning that “AI” covers a lot of territory. It could mean a rules-based system, which is inherently biased; it could mean a content-analysis system based on a neural network, which would be essentially impossible to audit for bias; or it could be something else entirely. In the absence of internal knowledge I couldn’t say what it actually means. My guess is that it’s an expert system, i.e. rules-based.
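To make that distinction concrete, here is a minimal, purely illustrative sketch. Everything in it is hypothetical (the terms, the training examples, the scoring scheme); the point is only that in an expert system a designer writes the rules down, while in a learned system the “rules” (here, per-word weights) fall out of labeled data and were never stated by anyone:

```python
# Expert system: a human writes the rules explicitly.
BLOCKED_TERMS = {"slur1", "slur2"}  # hand-curated by the designer (hypothetical)

def rule_based_flag(post: str) -> bool:
    """Flag a post if it contains any hand-listed term."""
    return bool(set(post.lower().split()) & BLOCKED_TERMS)

# Learned system: the per-word weights come from labeled examples,
# so the designer never states the rules directly.
def train_weights(examples):
    """Score each word by how often it appears in flagged vs. clean posts."""
    weights = {}
    for text, flagged in examples:
        for word in set(text.lower().split()):
            weights[word] = weights.get(word, 0) + (1 if flagged else -1)
    return weights

def learned_flag(post: str, weights) -> bool:
    """Flag a post if its summed learned word scores are positive."""
    return sum(weights.get(w, 0) for w in post.lower().split()) > 0

# Toy labeled data (hypothetical): True = flagged, False = clean.
training = [("you people are awful", True),
            ("have a nice day", False),
            ("awful weather today", False)]
w = train_weights(training)
```

Note what auditing each one means: the first classifier's bias is legible (read `BLOCKED_TERMS`), while the second's lives in `w`, a table of numbers whose provenance is the training set, which is why bias in learned systems is so much harder to inspect.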

8 comments
  • steve

    Go read them.

    Steve

  • walt moffett

    Seems to be a denial of agency, a belief that Facebook customers/products can be programmed to bellyfeel BB and whatever else they want to sell that day.

    Hmmph, interesting to see the war between the plaintiff’s bar and FANGs over section 230 in the years ahead.

  • Grey Shambler

    Facebook is boring.
    Lonely people looking for empathy.
    I think they should censor the platform to death.

  • TastyBits

    AI as conceived by 99.99% of people is rules-based, and this includes neural networks. If you know the starting conditions, you can predict the result exactly. A non-rules-based system must allow random inconsistencies, i.e. usually 1 + 1 = 2, but not always.

    Humans are inherently irrational creatures, and there is no way to rationally model irrationality, except irrationally. In essence, AI insanity must be allowed.

    (Asimov’s Three Laws of Robotics need to be inverted.)

  • While I guess a neural network-based system can be conceived of as rules-based, it operates by its own rules, i.e. it “learns” what the rules are from its training. That’s different from a conventional expert system, in which the programmer or designer provides the rules.

    I was using neural networks for certain sorts of problems 30 years ago. Probably should have patented what I was doing.

  • I think they should censor the platform to death.

    They who? The complaint of some progressives about Facebook is that it doesn’t censor conservatives sufficiently.

  • Grey Shambler

    They who?
    Silicon Valley progressive elites.
    If they want a platform resembling a hall of mirrors, I’m fine with that.
    I’m as likely to get my information there as from the Church of Scientology or the moonies. Censor yourself to death.

  • CuriousOnlooker

    In another sense of jumping ship: this week Zero Hedge introduced paid subscriptions.

    What interested me was that Zero Hedge attributes this to progressively becoming verboten on the big social networks, culminating in Google’s demand that it moderate comments on the site to Google’s satisfaction or face demonetization from its ad network.

    Now Zero Hedge is essentially experimenting with whether people will pay to speak their minds with no filter online (though they include other things in their subscription).

    I doubt ZeroHedge is the last content generator scheming to be free of control from big tech.

    Will the experiment work? I don’t know, but it bears watching.
