Facebook Destroys Everything: Part 3

When Covid arrived, I was, like most reasonable people, terrified of the virus. I was also absolutely terrified by the glittering, data-hoovering opportunity that a global pandemic represented for the always-hungry likes of Facebook and Google.

My fears about how Big Tech might take advantage of this planet-sized tragedy only solidified after it came out in early March 2020 that the Trump administration had been holding conference calls with Silicon Valley to discuss how they might be able to work together on battling the pandemic – and if the companies had any useful data they might be willing to share with the federal government.

My mind filled with visions of an unholy alliance between privacy-destroying tech firms and the deranged Trump administration, who could use public health concerns to legally mandate that Americans cough up their health and location data to both Zuck and the MAGA set in exchange for access to Covid testing and vaccines.

There was some precedent for my paranoia.

I’d spent the last decade watching as Facebook sweet-talked governments, medical systems, and non-profits around the world into adopting their platform for communicating with the public about crises, seducing decision-makers with promises of an easy, domesticated solution that would liberate them from having to futz with building and updating their own websites.

I’d also watched in disgust as, after crucial organizations became comfortable with pushing out vital information on Facebook, the company began to make it harder and harder for people to find or view those potentially life-saving posts if they weren’t already logged in. The end-game was obvious: they were building a world where if someone wanted to look at updates from their city government on local flooding, or see what their local hospital was saying about flu vaccinations, they’d have to submit to becoming legible to Facebook first.

Covid, then, represented a massive opportunity for a company that was already so clearly hell-bent on taking advantage of disasters and crises as a way to herd even more organizations and people into its blue, walled-off paddocks.

And while it was true that Facebook and Trump regularly sparred with one another in public, as GOP leaders complained that the platform was unfairly censoring them (when in truth, the site was doing the exact opposite), it was a different story in private.

i was a kid when this happened and it just keeps getting dumber and more insidious the more I read about it

At the time, Facebook policy vice president Joel Kaplan – a former George W Bush policy advisor and notorious participant in the 2000 “Brooks Brothers” riot that helped secure Bush the presidency – was working overtime to win the MAGA set’s trust. Why wouldn’t Zuckerberg and his highly-paid and ethically suspect colleagues take the opportunity to partner, at least for now, with the Trump administration?

Much to my surprise, and relief, both Trump and Facebook spectacularly fumbled the world-domination bag.

In retrospect, it was hardly surprising that the rift between the MAGAs and Big Tech began over disinformation.

In early March, as the world became horribly aware that Covid was both real and destined to become real bad, Facebook joined forces with Google and Twitter to announce that their sites would make a special effort to counter the spread of egregious misinformation about the pandemic.

Then came May 25th, 2020, and the brutal murder of George Floyd at the hands of a bloodthirsty Minneapolis cop. As protests against police brutality ignited across the United States, social media users were confronted with a tsunami of hate speech and disinformation directed against Black Americans and activists. Perhaps anticipating what would happen next, Trump hastened to sign an executive order on “preventing online censorship,” although it was almost entirely symbolic in practice.

President Trump then, in the course of making his own contributions to the fire hose of racist bullshit that swirled around the Internet at the time, crossed a line. In ominous May 29th posts on both Facebook and Twitter, he declared that “once the looting starts, the shooting starts.”

Twitter acted relatively quickly to limit the public’s ability to view or interact with Trump’s post, citing their rules against “glorifying violence.”

Facebook, meanwhile, didn’t do shit.

As both the public and national media took note of the two social media platforms’ distinctly different approaches to Trump’s violent rhetoric, Mark Zuckerberg was eventually forced to say something. In an impressive display of weasel-words, Zuckerberg wrote a lengthy post justifying his decision to leave the President’s egregiously terms-of-service-violating emission up, claiming (as he had before in response to Myanmar) that the company “shouldn’t be the arbiter of truth.”

Mark was, I suspect, surprised when his word salad failed to turn down the heat on both himself and his company.

Repulsed Facebook employees publicly called both Zuckerberg and Joel Kaplan out, accusing their leaders of bending over to accommodate the whims of the GOP. Soon, over 800 advertisers had joined a boycott against the company, including heavy-hitters like Coca-Cola, Ford, and Unilever. Caught between a rock and a hard place, Zuckerberg finally agreed at the end of June to do more to remove violence-inciting posts and to label posts by politicians with virulently policy-flouting content.

While many critics from the left were temporarily quieted by this move, Facebook’s woes weren’t over yet.

zuckerberg and fauci touching base

In mid-July, Zuckerberg, in a rare display of semi-human sentiment, openly criticized the Trump administration’s stunningly shit response to the virus in a live interview with Dr. Anthony Fauci. Soon after the Fauci comments, Zuckerberg insisted to Axios that he didn’t have a secret deal with Trump, as some media outlets had begun to speculate – though he did confirm that he spoke with the President “from time to time.” Trump, for his part, largely kept quiet about these open provocations. For a few days, it seemed like Zuckerberg was, infuriatingly, managing to once again get away with his obfuscating aw-shucks act.

Then in early August, Trump claimed (falsely) in a Fox and Friends interview, which he shared on Facebook, that children are “almost immune” from Covid-19. Facebook, pushing its luck, decided that it would hold the President to its terms of service: it deleted Trump’s video.

what a time to be alive

Predictably, Trump lost his shit, and perhaps even more predictably, he lost his shit during an interview with Geraldo Rivera.

After deeming his comment on Covid to be “a perfect statement, a statement about youth,” he took up his old claim that Facebook was censoring him. “They’re doing anybody, on the right, anybody, any Republican, any conservative Republican is censored and look at the horrible things they say on the left,” Trump wailed to Geraldo’s sympathetic listeners.

By September, Trump was making ominous noises at the White House about taking “concrete legal steps” against social media sites that censored conservatives online. The relationship between the President and Facebook would remain distrustful at best until Trump – grudgingly – left office.

Which was, of course, a good thing. The Trump administration’s wildly unpredictable behavior and constant hostility to Silicon Valley’s prideful overlords ensured that both the government and Big Tech would fail to pull off the frightening privacy-destroying partnership I’d been so afraid of when the pandemic first began.

But bad as the relationship between Trump and Zuck now was, Donald Trump was still allowed on the platform. Which he used to spew claims about voter fraud up to and beyond the 2020 election, and where his supporters openly discussed the plans that would eventually lead to January 6th in ever-more-deranged Facebook groups.

On that particular day of infamy, Facebook did suspend Trump’s account. But only after Twitter did it first. (Trump now has his Facebook account back, but he doesn’t use it much. The moment has passed.)

Facebook found little friendliness from the new Biden administration, populated by staffers who were far less enamored with big tech than the technocrats of the Obama era had been.

thanks Facebook!

Biden’s team immediately criticized the company for failing to adequately control rampant disinformation about the Covid vaccine, as the Democrat-led administration set about frantically picking up the pieces of the GOP’s disastrous pandemic response. Meanwhile, even as it battled with Biden in public, Facebook (per whistleblower revelations) carefully tracked the spread of Covid disinformation internally – while consistently sharing as little of its findings with the new Democrat-led government as possible.

Eventually, Facebook did begrudgingly give in to Biden administration pressure to take down obvious Covid-19 bullshit. It was a move in alignment, you might recall, with what Zuckerberg publicly claimed he was going to do when the pandemic began.

It was also a choice that the GOP is now, as I write this in the summer of 2023, using to bolster their nonsense claims (which they’ve been making in one form or another since 2015) that the Biden administration is unjustly censoring the GOP on social media.

A Louisiana judge recently used this exact rationale to ban federal agencies and officials from working with social media companies to address “protected speech.” And much of the media continues to politely ignore the fact that Trump and the GOP have spent years blatantly pressuring social media companies to cater to them, actions they’ve figured out they can obfuscate by shrieking as loudly as possible about how they’re being oppressed by the Coastal Elite.

blue pretzel/ouroboros

And then came Meta.

At the end of 2021, Zuckerberg, high on an in-house supply incomprehensible to groveling, ground-dwelling peasants like us, announced that his company would be changing its name, placing products like Facebook, Instagram, and WhatsApp under the same blandly ominous title.

What’s more, the whole shambling horror would be pivoting operations over to something he’d dubbed the Metaverse, an incomprehensible concept that was – I think, it’s terrifically hard to say for sure – positioned somewhere in between hideous NFTs of vomiting apes, The Blockchain (such as it is), and a 2005-era VR video game where you don’t have any legs. Supposedly, it was a play to attract more young people, more hip people, to Meta’s increasingly geriatric lineup of products. After all, nothing says youthful cool like dropping fake computer money on virtual branded real estate.

turns out that people just want to be sexy 20-foot dragon ladies in VR worlds, not dead-eyed dorks posing in front of monuments

Unsurprisingly to everyone who isn’t Mark Zuckerberg, the Metaverse was a majestic, world-beating failure. Meta hemorrhaged money, burning billions of dollars in pursuit of a lame product that nobody wanted. The company’s frantic flailing drove even more people away from Facebook’s grotesquely ethically compromised and now terminally lame platform. In early 2022, for the first time ever, Facebook started losing users.

Facebook, or Meta, was by no means dead. But Facebook, surprisingly, had stopped feeling inevitable.

wow, he’s just like us

As the world became aware of Elon Musk’s manure-brained battle to weasel out of buying Twitter in 2022, the attention of what remained of tech journalism shifted away from Zuckerberg’s failings to Musk’s even splashier, rocket-fuel-stained antics. By then, the Metaverse’s incredible, legless failure had conditioned many people to view the company as more absurd than outright evil. I noticed a considerable uptick in fluff pieces about how Mark Zuckerberg was learning BJJ, like a normal human with normal, relatable hobbies.

For Mark, Elon Musk’s incredible two-year effort to light his own reputation on fire has also had the remarkably convenient knock-on effect of making him seem reasonable. “Yes, Zuckerberg’s companies ransack private data and tear apart societies, and he does openly thirst for world domination,” some reasoned, “but you also don’t see him promoting creepy eugenics theories, blowing up rockets in environmentally sensitive areas, or directly meddling in the Ukraine War.”

And so, Zuckerberg and the Metaverse and everything else were able to slink back into the shadows for a bit. Sure, there were still stories about how the company was failing to control hate speech in conflict zones. How it had been slapped with more historically huge and yet affordable fines from the European Union. How people in poor countries were getting charged for their supposedly free Facebook-branded mobile data. But the media had, largely, shifted its coverage of man-made horrors beyond our comprehension to the latest, splashiest abominations that Elon was involved in.

When Elon Musk finally did walk into Twitter HQ with a shit-eating grin and a stupid Home Depot sink in his hands, his status as the Internet’s new Most Hated Man was secured. And it became terribly apparent that Twitter as we knew it, as I knew it, was gone for good, and something much, much worse was going to take its place.

relics from the old, fun internet

Enter Threads. 

Meta’s Twitter-killer features little news by design, in line with Meta’s new hardline strategy against accommodating those press-room bastards who have inflicted so many indignities on them in the past. It also has even less moderation than Facebook or Instagram ever did, echoing both Musk and Zuckerberg’s profoundly cynical, if hard to argue with, realization that governments don’t have the courage to force them to make their websites less evil. Unsurprisingly, the site already has a hate-speech problem.

 Somehow, some people, mourning over the terminally-ill wreck of what was once Twitter, are still hailing Zuckerberg as something of a savior, or at least, as someone who’s substantially less evil than Elon Musk (which is wrong, but is very convenient for Zuck). Others are shrugging and leaning into Threads, shifting back into the once all-powerful idea that Facebook is inevitable, that resisting it is as foolish as shooting into the eye of a hurricane.

As for me? I’m somewhat afraid of Threads, albeit less so now, in August, than I was when it first came out in July, as it’s become clear that the service isn’t becoming the default Twitter-replacement that Meta had so fervently hoped it would be. But I’m also angry about Threads, the kind of rage that develops when you see your oldest and most loathsome enemy somehow survive threat after threat, and continue to shamble hungrily on.

I’m angry about how Mark Zuckerberg and Facebook and all the rest of his horrible companies have been able to spend the last 15 years getting away with it, how they never seem to suffer truly meaningful consequences for constantly, continuously, making the world worse. And I’m also angry about how so many people know what Zuckerberg is, and know what he’s done, and are still willing to give him yet another chance.

facebook has always made me feel like I’m trapped in a Bruegel painting

“Maybe this time, he won’t be evil!” people say, and then he does something evil again, and the same people claim that this was, somehow, a surprise. It’s a lot like inviting the Dread Vampire Zartok into your home, even after he’s drained the blood of your neighbors, because he hasn’t drained your blood yet. It’s a form of collective madness, or at least, it makes me, and everyone else who has spent years trying to warn people, feel mad.

Oh, I’d like to imagine that Mark Zuckerberg sleeps terribly.

That every night, the hungry ghosts of the dead close in upon him.

The small, charred ghosts of the Rohingya children burned alive in their homes, who still smell faintly of smoke and cooked flesh.

The pale and bloated ghosts of the people who drowned in the Mediterranean after fleeing ethnic cleansing in their home countries, whose faces have been nibbled upon by deep-sea fish.

The suicides.

The men and women slowly tortured to death in secret Syrian prison cells. 

They gather around him, and they whisper things that cannot be written into his ear. And he is tormented. 

But that’s a fantasy. 

Mark Zuckerberg is a man who sleeps well. He has hobbies. He enjoys nondescript barbecue sauce. He’s happily married. He has none of the freakish, manic anxiety that swirls around Elon Musk. Zuckerberg is self-assured.

He walks, serene, under a shield of plausible deniability. After all: he didn’t burn those Rohingya villages himself. He didn’t lead the soldiers that chased those Muslim Indians off of their land, or the vigilantes killing their ethnic enemies in Ethiopia.

He didn’t personally destroy the self-esteem of teenage girls, or publicly stream a mass-shooting at a mosque in New Zealand, or coordinate storming the Capitol on January 6th. He didn’t spread the lies that persuaded millions of Americans to wave off the vaccines that might have saved their lives, and he didn’t give those Kenyan moderators the PTSD that makes them see the faces of the screaming dead at night. 

Certainly, Zuckerberg would acknowledge that his website played a role. But who’s to say how much of one? It is so hard to quantify these things. And there are fewer and fewer people left who have the time and the resources to try.

“But can we really blame Facebook for that?” some people will say. “Wasn’t journalism already in trouble before he came along?”

Maybe. But isn’t it interesting how Mark Zuckerberg and his company exist entirely in a cocoon of plausible deniability, in an ecosystem they’ve designed to exquisitely accommodate their own version of reality?

Perhaps I am too hard on Mark Zuckerberg.

Perhaps he deserves another chance to connect the world, like he says he always meant to do. Move fast. Break things. You have to make a few mistakes to get ahead. Just a few little mistakes. 

“They ‘trust me.’ Dumb fucks,” Mark Zuckerberg famously wrote in the early years, before people had learned what he was.

No. I won’t be posting on Threads. 


Facebook Destroys Everything: Part 2

It was April 2016, and Mark Zuckerberg, clad in his usual incredibly expensive cotton t-shirt, told the world that his website – and thus, the entire Internet – was headed to a video-filled future, where live broadcasts and snappy, “snackable” content would push out the old, boring world of words.

Mark told the world that he knew this because he had the data: he knew for a fact that people were spending lots of time watching video, and simply couldn’t get enough of punchy video ads. Anxiety-filled media companies and publications, already wondering if video was the play of the future, scrambled to answer the call. 

ooh, we’re pivoting! ooh, look at us pivot!

Just a few months later, Facebook admitted it had made yet another one of its signature, whimsical little oopsies. It had fucked up the math: it had overestimated video viewership metrics by, it said, about 80 percent. Or, possibly, by 900 percent. Somewhere in that ballpark.

But the evidence that Facebook lied came out too late. The lumbering executive minds of great lumbering companies had already been made up. Print reporters were laid off en masse, and many of those who survived were pressured to spend less time messing around with icky, unprofitable words, and more time on making fun little videos.

And like many millennials who had once dreamed of reporting careers, I watched the bloodbath and regretfully decided that I wasn’t going to bother with pursuing another full-time journalism job either. 

Despite all the cuts and the reshuffling and the chaos, the profits that Mark Zuckerberg had promised for journalism never arrived, and remained a blue-shaded mirage on the far off horizon. In late 2019, Facebook coughed up $40 million to advertisers to settle a lawsuit they’d filed against the company, claiming (it seems, accurately) that Facebook had flagrantly lied to them about how much time users actually spent viewing video ads.

While the media industry eventually concluded the Pivot to Video had been a terrible mistake, the jobs that had been lost in the process never recovered. And Facebook, or Meta, or whatever the terrible thing is called, has soured on journalism too. It’s a far cry from the friendly overtures – hiding a handgun behind its back – that the company was making to the media less than a decade ago.


This summer, in a particularly petulant act, Meta announced that instead of adhering to a new Canadian law that would require social media companies to share profits with publications, its sites would block all links to Canadian news sites instead. Threads, for its part, has rejected journalism entirely, in favor of content – ah, that hideous, bloodless word! – that Threads and Instagram lead Adam Mosseri has deemed more uplifting, more marketable.

 Sum it all up, and you’re left with the conclusion that Facebook seduced the entire journalism industry with promises of riches and security, then turned around and shot it in the knees – not to kill it immediately, but to ensure that it’d bleed out slowly instead. And we’ve all been left to suffer with the results, in a world where fewer and fewer people can make any kind of meaningful living from finding the truth hidden within the great morass of disinformation that the Internet churns out, like guano from an island full of shouting, shitting seabirds. 

welcome to the internet where we SHIT and we SCREAM

Are you starting to detect a pattern here, a through-line, a single blue vein running like the shit-filled intestine of a shrimp through the last decade and a half of lies, conflict, corruption, and death? I can’t address every stunningly ethics-free and immoral thing Meta has done. Not in one article.

I’d be working on it for years, or I’d eventually, after staring too long into one company’s seemingly inexhaustible reserve of unpunished and incredibly public crimes, go mad. I can only run through the violations and the failures as they come to mind, the ones that made the biggest impression on me.

For me, in my narrative, first there was Myanmar, and then came 2016 – that venom-filled year in which I realized that the evils that Facebook had unleashed on Myanmar were coming home. When I first started watching what was happening in Myanmar in 2013, many students of social media culture, like me, operated under the hopeful assumption that the country’s Facebook-enabled descent into hell could at least partially be chalked up to a lack of online literacy.

We reasoned that countries like the US had a solid 20-year head start on being online over places like Myanmar, and that the general global public simply needed time, and perhaps some carefully-crafted public education, to get a better sense of what was real and what was dangerous bullshit on the Internet.

We were incredibly wrong. 

It turned out that the evils enabled by Facebook, and by social media in general, were much more deeply rooted in the tar-filled recesses of the bad bits of the human mind than that. And as the shadowy creeps at Cambridge Analytica secretly sifted through my Facebook data and that of everyone else, I watched my algorithmically barfed-up feed with an ever-increasing sense of nausea. Realizing, as I watched, that it could happen here.

And it was. 

wow i miss this kind of thing so much

The second-cousins of people I’d vaguely known in high school accused my actual friends of being Soros-funded shills for a global Jewish conspiracy. I watched as real-life friendships crumbled, families decided they’d never speak to each other again, and parents accused children of being blood-sucking, welfare-exploiting Communists.

I spent hours a day sucked into pointless, deranged political fights with people I’d never met before, as Facebook’s nasty little algorithm zeroed in on exactly what was most likely to put me over the edge into the Red Mist. The site always was terrible at figuring out which ads would appeal to me, but it did get pretty good at figuring out how to make me stroke-inducingly angry.

Eventually, I came to recognize that the site was twisting human relationships into dark and unrecognizable shapes, working to reform our conversations and our thoughts into patterns legible to marketers: transforming us into creatures easier to sell to, easier to keep locked up inside the confines of Facebook’s ecosystem. I knew all this and yet, as we got closer and closer to the election, I stayed on the repulsive thing, unable to resist watching the fighting, the weird digital-media enabled derangement that seemed to have spread to everyone on the Internet. 

Then, Trump won. 

Facebook lost its hold on me after that; the site repulsed me even more than it had before, as I realized that it had played a decisive role in helping something dark and disgusting in the human mind manifest into a new, far more dangerous, real-world form – and that, by adding my own voice to the collective scream that had come to define the site, I’d helped bring it all into being too. In early 2017, I mothballed my account, scrubbing all the data and removing all my friends.

Did you know that if you deactivate your account, Facebook will keep tracking your data, under the theory that you might come back someday? And did you know that even if you delete your account, even if you’ve never had one to begin with, Facebook will create a zombiefied shadow profile for you anyway – which might include sensitive health data that you’ve entered into your medical provider’s website? Were you aware that Facebook will, at best, take its sweet time to crack down on scammers who appropriate your name and your identity so they can better exploit your elderly relatives? (Or never deal with them at all.) And what’s more, were you warned that you can’t delete a Threads account once you’ve made one without deleting your Instagram account as well, an issue that the company swears it will fix eventually, one of these days/months/decades?

proustian shit for me

After I left Facebook, I turned my attention to Twitter, which was, while a cesspool, a cesspool I found much more suited to my particular slop-seeking tastes. Twitter’s developers had never figured out how to monetize user-data in the grim and shark-like way Facebook had, and Jack Dorsey largely appeared to be too busy gobbling up magic mushrooms and studying erotic yoga poses to make progress on the problem. The site was designed in such a way that I never found myself screaming at someone’s gibbering fascist uncle with a soul patch in darkest Missouri, and it was much easier for me to simply block and ignore the weird conservative wildlife that did, on occasion, stumble across my profile. And most importantly, Twitter never made me feel quite as debased, as repulsive, as angry as Facebook did. 

When the Cambridge Analytica revelations came out in 2018, revealing that a political consulting company had been quietly exploiting user data that Facebook had failed miserably to protect, I felt both horrified and validated. And I was pleased to see that Facebook’s previously relatively-clean public image, already tarnished by how repulsive many people found the site in the lead-up to Trump’s election, was finally, finally beginning to take on serious damage. 

Sure, tons of people still used Facebook, but signs of weakness were appearing, hints that younger, cooler people were beginning to back away from a website that seemed engineered to allow their weird Trump-loving great-uncles to yell at them. Indications that Gen Z kids increasingly regarded Facebook as a place they’d only use (maybe) to wish their grandparents a happy birthday, not a site where they’d ever want to actually hang out. But Instagram was still popular, and Facebook owned that, and WhatsApp was still globally pervasive, and Facebook owned that too. The same blue sheep-paddock, as Meta had correctly deduced, could be made to take on many forms. 

hey, remember this

Zuckerberg apologized for Cambridge Analytica, just like he did when his company was called out for abetting genocide in Myanmar. Zuckerberg went on another one of his Apology Tours in public, as the company (largely behind the scenes) rolled over and pissed at the feet of GOP politicians and MAGA emperor-makers, ceding to the ever-changing, deranged whims of Donald Trump. Zuckerberg even agreed to a photo-op with Trump in the White House, which the President saw fit to post first on Twitter.

And while people trusted Facebook a lot less than they had in 2016, the site, and the company, still seemed horribly inevitable. People had fallen out of love with Facebook, but many of us were getting the uncomfortable feeling that soon, our personal feelings wouldn’t matter anymore. That Mark Zuckerberg’s company was building towards a future where getting a Facebook account would no longer be an actual consumer choice, but a price you’d be forced to pay just to get on the Internet, or to pay your taxes, or to set up a doctor’s appointment.

Exhibit A of this unsettling world-domination strategy? Libra, Facebook’s now-failed June 2019 universal cryptocurrency boondoggle that the company claimed would use the blockchain, or whatever, to help connect the world’s underbanked and digitally-isolated people with the global financial system. It was a financially-focused rebrand of Meta’s now flailing Internet.org strategy to get the entire world onto Facebook (and incidentally, the Internet), the same effort that had helped ensnare Myanmar. Regulators almost immediately responded with suspicion – to their credit – but the company continued for a while to doggedly press on. 

Also connected to Libra, in terms of overall strategy, was Facebook’s new effort to map the entire world with imagery pulled from satellites and drones, using computer vision tools to suss out population figures for 22 different countries, followed up with maps doing the same thing for the majority of the African continent. Facebook’s messaging around the project, much like Libra’s, emphasized the warm and cuddly impacts, focusing on how the data would be used to support charitable causes and humanitarian response efforts. Their releases discreetly ignored the profit motive behind why such a gigantic, publicly-traded company was pumping such vast sums of money and human resources into supposedly charitable projects.

only a little ominous!

For me, and a lot of other Facebook-cynical observers, the unspoken answer was obvious. They were doing all this to herd even more of the planet into their own walled garden, permitting the company to profit off ever more human data, from ever more aspects of modern-day digital life.

What Zuckerberg seemed to want was for the world to view his Facebook as more than just a tech company – as more like an inevitable, unstoppable natural phenomenon. The kind that moves fast and breaks things. And places. And people. 

Consider Facebook’s content moderators: contract employees paid only somewhat above minimum wage, hired by vendors with intentionally bland names, working in satellite offices around the world in locations as far away from Facebook’s actual, highly-compensated employees as possible. People who spend their entire day at work staring into the dark and rotting heart of humanity’s absolute worst impulses, clicking through scene after loathsome scene of screeching men slowly having their heads sawed off, kittens loaded into blenders, Holocaust deniers and mass-shooting victims. Human big-tech byproducts who are able to access a perfunctory amount of mental health support, but who are also achingly aware that they’ll be out on the street if they make a few mistakes in the course of viewing a tsunami of horror.

I have some small sense of what it is like to gaze long into the digital abyss, due to my reporting and research work around conflict and war crimes – but then again, I have no idea at all, because I willingly and knowingly chose to look at these things, was compensated fairly, and received praise and platitudes for taking on the burden. In late 2020, American Facebook moderators settled with the company for $52 million, cash intended to compensate both current and former employees for the psychological damage they’d taken on in the line of duty. The company also agreed to introduce content moderation tools that muted audio by default and swapped video over to black and white – small changes intended to make viewing evidence of a blood-soaked world more bearable.

 But of course the problem isn’t fixed. Of course, Facebook is Still Working On It. This summer, Facebook moderators in Kenya launched their own lawsuit mirroring that filed by their American counterparts, seeking $1.6 billion to compensate them for miserable working conditions, inept psychological counseling, and crippling psychological damage – and for lost jobs, as some moderators claim they were fired in retaliation for attempting to organize a union. On social media, we joke, in a way that’s not really joking, about how our tech overlords have created the Torment Nexus, about how we’re locked in a psychological hell we can’t escape. 

Some of us much more than others.

More next time.

Facebook Destroys Everything: Part 1

I want to tell you a real bummer of a story about Facebook.

The kind of no-fun, downer tale that Adam Mosseri, the head of Threads, Meta’s new social media service, said he doesn’t want his website to support.

I arrived in Myanmar for the first time in November 2012, the same week that the country’s very first ATMs that worked with international credit cards went online. The humble money machine’s arrival was a big deal, one of the clearest signs yet that the oppressive, isolationist military junta that had run the country from 1962 all the way up to 2011 was truly gone. An indicator that Myanmar was entering a new, much more outwardly-focused, era. 

19th street in Yangon in 2013, photo by me

With the fall of the junta came an even bigger deal: the arrival of the relatively free Internet in Myanmar, liberated from the ultra-restrictive controls that the old regime had placed on its citizens’ access to international information. Before, the few bloggers who had managed to skirt the controls and write online, like poet and activist Nay Phone Latt, were met with prison sentences, fines, and violence.

Now, Nay Phone Latt was free, Internet cafes were doing a booming business, and there was even talk of the imminent arrival of publicly-available mobile data. And most exciting of all, people across Myanmar were setting up their very first Facebook accounts. 

I’d come to Myanmar to write about the rise of the Internet, as part of my then-regular beat on tech in Southeast Asia – a subject I’d grown fascinated by ever since I started my first reporting job out of college at the Cambodia Daily in Phnom Penh. It was an opportune time for that kind of thing.

The Arab Spring, and the way in which its fearless millennial-aged leaders had organized on social media platforms that their authoritarian overlords understood poorly, had generated a wave of global optimism about how Facebook and Twitter could, just perhaps, usher in a new era of democracy and empathetic communication, build a perfect framework for a Marketplace of Ideas (and do it all while making a shit-ton of money).

According to some pundits, Mark Zuckerberg might just, in his weird nerd way, heal the world

While I was more skeptical than most about whether the ascendance of social media was a good thing, it was very clear to me that it was important – and so I’d begun my reporting career looking at what Cambodians were doing online, how they were using Facebook to politically organize against their own repressive government, to meet one another, to reach out to a broader technological world. I’d connected with a Myanmar NGO dedicated to digital inclusion, and through them, I got a chance to meet and interview a number of brilliant and extremely online Burmese people, all of them brimming with long-suppressed, almost giddy, optimism about their country’s technological future.

It was hard for me not to share their enthusiasm, their massive relief at finally getting out from under the jackboot of a military regime that had tried to lock them away from the rest of their world for as long as they could remember. I came away from speaking with them with a warm, happy feeling about how online communication maybe, just maybe, really did have the power to unfuck the world. 

I’d also come to Myanmar because of Barack Obama.

The US had sent then-Secretary of State Hillary Clinton to Myanmar on a diplomatic visit in late 2011, restored full diplomatic relations with Myanmar in January 2012, and had begun to roll back long-standing economic sanctions. This extended process of thawing the ice cube was set to culminate with the first-ever trip to Myanmar by a US President, who would meet with both President Thein Sein and the recently-freed and globally iconic opposition leader Aung San Suu Kyi, to congratulate them on their achievements and to implore them to keep up the good work.

picture by me from Yangon in 2012.

On the day of the President’s arrival, I walked through the streets of Yangon towards the university auditorium where he was set to speak. The streets were lined with excited and intensely curious Burmese people, many of whom were wearing t-shirts with Obama’s face on them and waving little paper American flags (sold by enterprising street vendors).

We all watched the massive US motorcade roll by, the President’s enormous black monolith of a car smack-dab in the center of it, and people cheered and shouted and waved, and shook my hand as the nearest American who could be congratulated.

another one of my 2012 photos – the President in Yangon

 Once there, I managed to talk my way into the official White House press pool, and I was able to join the great scrum of jostling foreign correspondents on the balcony of the auditorium as Obama, Clinton, and Suu Kyi embraced each other and spoke to the audience about the rise of a new relationship, a new era. For onlookers, it was easy to get seduced by how picture-perfect it all was, to believe that Myanmar was on the up-and-up, that both the government and its people were headed towards a freer, wealthier future. 

But it was not that simple. Nothing ever is. 

Prior to my first visit, in June 2012, people from the Rakhine Buddhist ethnic group and Muslims from the long-persecuted Rohingya ethnic minority, in Myanmar’s west, had begun fighting with one another, in the latest outburst of tensions that had been flaring up on and off for generations. Myanmar state security forces headed to the scene at President Thein Sein’s request and promptly started making things even worse – rounding up Rohingya (long denied citizenship by the Burmese state) en masse, raiding their villages, raping them, killing them.

After a few months of relative peace, the violence escalated once again in October, right before the President and I arrived in Myanmar. By then, at least 80 people were dead, and it was estimated that somewhere in the ballpark of 100,000 people, almost all Rohingya, had been displaced, burned out of their homes and villages, forced into squalid and desperate refugee camps.

While United Nations experts raised the alarm in Geneva and Human Rights Watch released satellite imagery showing hundreds of burned buildings in Rohingya villages, most global onlookers seemed to regard the violence and the fire as one of those things: regrettable, but not unexpected, and certainly not so awful that it was worth torching newly-established relations over.

Obama explicitly mentioned the Rohingya situation while speaking at the University of Yangon, calling upon Myanmar to “stop incitement and to stop violence.” For his part, President Thein Sein – who’d said just a few short months before that his country didn’t want the Rohingya, and that it’d be best if they were resettled in any country willing to take them – publicly agreed to eleven US-defined human rights commitments, from “taking decisive action in Rakhine” to permitting aid workers to enter certain conflict-wracked areas. Messy. Imperfect. But, from the perspective of the US, good enough for now.

temple in Yangon in 2012. photo by me.

After I got back from that first trip to Yangon, I kept following the Rohingya clashes in Myanmar on the news, watching with growing trepidation as the situation grew ever more terrible, as the deaths piled up, and as ever more Muslims were forced to flee into newly-established and massively growing refugee camps over the border in Bangladesh. I also watched as this growing darkness was reflected on the Internet  – indeed, intensified by it, the online world and the offline world becoming ever more enmeshed, interlocked, impossible to tell apart.

 As far as many newly online people around the world were concerned in the early 2010s, Facebook was the Internet: the single, centralized portal through which they interacted with the rest of the planet, where everything online that bore the slightest relevance to their lives took place. They were part of a millions-strong captive audience, and Facebook had realized that if they played their cards right, if they hurried the process along, they could keep all these people safely locked up in their own custom-designed, eminently profitable enclosures. And they could mask their ambitions by claiming that all they really wanted to do was help people gain economically-vital access to the Internet. 

I’d already been seeing the darkness in Cambodia, where reporters had started to notice an alarming up-tick in violent, intense rhetoric against the Vietnamese minority in Khmer Facebook groups in the run-up to the 2013 elections, as the CNRP opposition party accused them of secretly wanting to take over Cambodia again. And now I was hearing about how Facebook was even worse in Myanmar, as more and more of the nation got online for the very first time: how Buddhist firebrand monks were using the platform to whip newly-online people into paroxysms of anger about the prospect of Muslims taking over their land. Outnumbering them. 

But still, reasonable people had reasonable questions about the causality of it all. Was there a truly direct connection between the violence against Rohingyas and the nastiness on Facebook? Were enough people in Myanmar even online that it’d actually make a difference? Was the way people used Mark Zuckerberg’s platform really, ethically speaking, Mark Zuckerberg’s fault?

I spent the spring of 2013 mulling over these questions, rooting around in the nastier recesses of politically-minded Facebook groups, reading through the then-nascent literature on the ways social media could shape societies that didn’t involve bringing about yet more Arab Springs and busting open secret torture prisons.

In June, I got the chance to go back to Yangon. I’d be writing about the nation’s first-ever Internet Freedom Forum, a gathering dedicated to helping Myanmar’s people take advantage of the new, liberated Internet. Nay Phone Latt spoke at the conference, and so did a number of the other brilliant young Burmese tech enthusiasts I’d met before. The mood was still buoyantly optimistic as we circulated from one Post-It note-filled brainstorming session to the next, as we drank tea, discussed Internet freedom regulations and online privacy. 

And yet, I could detect a slight edge in the air, a certain trepidation that had grown, mutated into new forms, in the few  months since I’d been away. People knew that the country’s fate still remained very much in doubt, and they knew the turn to democracy could evaporate just as quickly as it had come about. At night, I’d walk back to my hotel room through the silent, dark streets of Yangon – a city that was still figuring out what it wanted to do about night life – and sometimes stray dogs would tail me home, lean, rangy beasts with a worrisome, predatory alertness, much more so than I remembered seeing in the local curs in India and in Cambodia. 

vendor in Yangon in 2013 selling/promoting 969 Movement materials, a nationalist, anti-Muslim movement led by extremist monk Ashin Wirathu. photo by me.

 During the conference, we talked about how hateful talk about the Rohingya was starting to pop up on Facebook, about how it was casting an ominous shadow over the good things about helping more people get online. Hopefully, it’d stay relatively isolated, and people could be taught to use and to read social media in more critical, careful ways. Hopefully, the whole thing would represent a nasty but not-unexpected blip on the road towards the Internet helping Myanmar build a better, freer society. 

Hopefully. 

And then, near the end of my visit, I had an honest-to-god Thomas Friedman moment. In a taxi cab.

The driver was a charming young Burmese man who spoke good English, and we chatted about the usual things for a bit: the weather (sticky), how I liked Yangon (quite a bit, hungry dogs aside), and my opinion on Burmese food (I’m a fan).

Then he asked me what I was in town for, and I told him that I’d come to write about the Internet. “Oh, yes, I’ve got a Facebook account now,” he said, with great enthusiasm. “It is very interesting. Learning a lot. I didn’t know about all the bad things the Bengalis had been doing.” 

“Bad things?” I asked, though I knew what he was going to say next. 

“Killing Buddhists, stealing their land. There’s pictures on Facebook. Everyone knows they’re terrorists,” he replied. 

“Oh, fuck,” I thought. 

I was going to write “you know what happened next.” But as I watched social media discourse about the launch of Threads this summer, I realized that a lot of you – good, smart, reasonably well-informed people – don’t know what happened in Myanmar after 2013. Or the role Facebook played.  

 So, here’s a brief summary. 

Internet access ripped across Myanmar after 2013, and so did smartphones, which often came conveniently pre-loaded with the Facebook app. In 2016, Facebook even partnered with Myanmar’s government to launch two products that let people use basic versions of Facebook without having to pay for data: millions of people signed on, eager to talk to their friends and read the news for free on a platform that most assumed was perfectly trustworthy. They also used Facebook to talk about the Rohingya – and there was a lot to talk about, as the violence kept getting worse, as over a hundred thousand Rohingya were pushed into refugee camps. 

In August 2017, a Rohingya armed group attacked military targets and killed civilians in Rakhine state: Myanmar’s security forces responded with total warfare. Soldiers massacred thousands of unarmed people, raped women, and burned down hundreds of villages. Children were incinerated inside their own homes.

scene from one of the enormous refugee camps in Bangladesh. Credit: UN Women/Allison Joyce.

Over 730,000 Rohingya fled across the border into Bangladesh, forced to take up residence in overcrowded refugee camps where they still wait in limbo to this day, subject to the often unsympathetic, cruel whims of the Bangladeshi government. Hundreds of thousands more remained trapped unhappily in Myanmar, existing without rights as a hated, hunted underclass. Experts started to apply terms like “ethnic cleansing” and “genocide” to the Rohingya killings, and few bothered to argue.

The few who did included Aung San Suu Kyi, the erstwhile human rights hero that I’d seen Obama shake hands with just a few years before. After becoming the de facto head of government in 2015, Suu Kyi started to vocally defend the military’s actions against people she deemed to be Muslim terrorists. She was still grumbling about unjust disinformation when she appeared before the International Court of Justice in The Hague in 2019 to defend Myanmar against charges of genocide, praising the same military that had kept her under house arrest for over a decade.

Yet Suu Kyi’s willingness to defend mass murder wasn’t enough to keep her in power.

In February 2021, the military decided that this political liberalization business had gone too far: it reverted to tradition, launching a coup against the government, invalidating the 2020 election, and arresting Aung San Suu Kyi and other officials on highly-suspect allegations of  fraud. The military swiftly locked down Internet access, restricted aid worker freedom of movement, and viciously attacked protesters.

In response, both existing ethnic militias and newly formed ones fought just as ferociously back, creating a brutal civil war that’s still happening today. Nor have things improved for the Rohingya, who still languish in dangerous camps, who are still deprived of rights by governments in both Myanmar and in Bangladesh. Who still drown by the hundreds in overladen boats headed for places where they might, just might, find dignified work. 

As this last dismal decade in Myanmar has unfolded, one thing has become exceedingly clear: Facebook, in its rush to massively profit from getting an entire country on the Internet in just a few short years, played a key role in the country’s slide into hell. During that blood-soaked period from 2016 to 2018, the website’s attention-hunting algorithms pumped vast amounts of ferocious anti-Rohingya content into the feeds of millions of Myanmar Facebook users, and the site failed over and over to counter dangerous hate speech, ignoring pleas from local activists, including some people I knew.

Screen cap from 8/7/2023 of an inflammatory Wirathu interview that’s still publicly visible on Facebook.

Despite Facebook’s claims that it had cracked down on hate speech, in 2020, researchers found Facebook was still promoting anti-Rohingya hate videos from Ashin Wirathu, the extremist monk they’d supposedly banned years before. (Just now, it took me approximately 5 seconds to find an anti-Muslim 2020 interview with Ashin Wirathu, with English subtitles, still up and visible on a Facebook page run by Indian Hindu nationalists – and I wasn’t even asked to log in).

When the military launched its 2021 coup, Facebook promised, like always, that it would take action to reduce the reach of pro-junta posts. But researchers found that the constantly-churning algorithm continued to promote posts advocating for violence anyway.

As I write this, Facebook remains wildly popular in Myanmar, persisting despite the military’s occasional, doomed attempts to ban it in retribution for Facebook banning them – measures which people relatively easily get around with VPNs. The site’s filters still consistently fail to catch ads promoting virulently anti-Rohingya hate speech, and activists are regularly imprisoned by the junta for their anti-government Facebook posts. In Myanmar, as in much of the rest of the world, Facebook has accumulated a power center of its own, wound itself around the very idea of modern, connected life itself.

Nor can Zuckerberg claim it was a mistake, a misunderstanding. Throughout this entire dark period, Facebook knew what it was doing. 

mark zuckerberg at a 2018 keynote about fighting fake news, and we all know how well that went.

In 2018, an independent report commissioned by the company itself concluded that the website had helped fuel genocide. The company agreed with its findings, said it was hiring more Burmese-speaking moderators, and that it was “looking into” creating a human rights policy. (It only got around to actually doing this in 2021.)

The company’s statements on the matter remained bloodless, at a distance: the closest show of actual human emotion came from Adam Mosseri, the current Threads chief and Facebook’s then VP of product management. “Connecting the world isn’t always going to be a good thing,” he conceded on a Slate podcast. “We’re trying to take the issue seriously, but we lose some sleep over this.”

Mark Zuckerberg himself acknowledged, in a 2018 interview with Ezra Klein, that his company’s penchant for encouraging genocide was “a real issue” that “we’re paying a lot of attention to.” It was a familiar Zuckerberg line. A chunk of bloody meat tossed to the press and to the public, a bribe that could get away with being bereft of actual content, actual human sentiment.

In another, even more illuminating, 2018 interview with Recode, Zuckerberg said that he felt “fundamentally uncomfortable sitting here in California at an office, making content policy decisions for people around the world.”

To drive the point home, he added this: “A lot of the most sensitive issues that we faced today are conflicts between our real values, right? Freedom of speech and hate speech and offensive content…. Where is the line, right? And the reality is that different people are drawn to different places, we serve people in a lot of countries around the world, and a lot of different opinions on that.” 

In these words, Zuckerberg expressed his most fundamental perspective, the belief system that has shielded him with remarkable effectiveness from the public anger that he deserves. (He would go on to use almost the exact same phrasing to defend his soft-gloved treatment of Donald Trump in 2020 and 2021).

It’s phrasing that acknowledges the existence of ethical issues with tech, while deftly absolving the person who created these issues in the first place from responsibility for cleaning things up. It’s a message that Facebook is inevitable, inescapable, that humanity will simply have to adapt to its presence.

And it’s a message that allows Zuck to publicly pretend that he’s simply too humble to feel OK with making decisions for other people, even as he works hard, right out in the open, to herd an entire species into his immensely profitable walled garden of a website.

As I write this in 2023, Facebook, or Meta, if we’re going to politely go along with another one of the company’s great squid-ink moves, claims they’re still Working on The Myanmar Problem. I’m sure company spokespeople would agree, if I asked them, that they’re Very Apologetic and that they absolutely still Need to Do Better. 

That’s what Meta always says, after every single damning revelation, after every single time they’re entirely and unequivocally caught doing something wildly immoral.

Zuckerberg and his company have learned this is really all they need to do, that there is little appetite among the truly powerful for holding them accountable. That the lawsuits filed against them by groups like the Rohingya, like the Ethiopians impacted by the war in Tigray, will almost inevitably fail. 

But, no, I don’t blame anyone for not knowing about all this, about what Facebook helped enable in Myanmar, about what it did in Ethiopia, and in Kenya, and in India and South Sudan and in the United States, and a lot of other places besides.

After all, there are way fewer full-time journalists writing about these things than there used to be. Including me.

Enter the Pivot to Video.

Facebook Believes Americans Are Good at Evaluating Their Sources, And Other Comfortable Delusions

oh my god shut up

Mark Zuckerberg would like you to know that he cares a lot about disinformation and bots and propaganda. He is very concerned about this, and is also very aware that he possesses terrifying technological powers. (See, his brow! Consider how it furrows!) And so on January 19th, he made another one of his big announcements.  He’s decided, in his serene wisdom, to trust the people of Facebook to determine what is true. Nothing could possibly go wrong.  

“The hard question we’ve struggled with is how to decide what news sources are broadly trusted in a world with so much division,” Zuckerberg chirped in his announcement (I always imagine him chirping in these, like a smug billionaire chickadee). “We decided that having the community determine which sources are broadly trusted would be most objective.” Users will be asked to rate the credibility of news sources, though only those that Facebook determines they are familiar with, through some mysterious and possibly eldritch method. These “ongoing quality surveys” will then be used to determine which news sources pop up most often in users’ news feeds. Will there be any effort to correct for craven partisan sentiment? No, apparently there will not be. Will there be some mechanism for avoiding another mass and gleeful ratfucking by 4chan and 8chan and whatever other slugbeasts lurk within the Internet? No, apparently there will not be. Everything will be fine!

On January 19th, we learned that Facebook is the last organization in the entire world that still has great faith in the research and assessment powers of the average American. Is Facebook actually that unfathomably, enormously naive? Well, maybe. Or perhaps they are, once again, betting that we are stupid enough to believe that Facebook is making a legitimate effort to correct itself, and that we will then stop being so mad at them. 

Which is insulting. 

Any creature more intelligent than an actual avocado knows that Facebook’s user-rating scheme is doomed to miserable failure. Researchers  Alan Dennis, Antino Kim and Tricia Moravec elegantly diagnosed the project’s many, many problems in a Buzzfeed post, drawing on their research on fake news and news-source ratings. They conclude, as you’d think should be obvious, that user-ratings for news sources are a very different thing than user-ratings for toasters. “Consumer reviews of products like toasters work because we have direct experience using them,” they wrote. “Consumer reviews of news sources don’t work because we can’t personally verify the facts from direct experience; instead, our opinions of news are driven by strong emotional attachments to underlying sociopolitical issues.”

Facebook, if we are to believe that they are not actively hoodwinking us, legitimately believes that the American people have, in the past year, somehow become astute and critical consumers of the news. But this is impossible. Facebook’s magical thinking is roughly equivalent to putting a freezer-burned Hot Pocket in a microwave and hoping that it will, in three minutes, turn into a delicious brick-oven pizza. There is no transmutation and there is no improvement. The Hot Pocket of ignorance and poor civic education will remain flaccid and disappointing no matter how much you hope and wish and pray.

there is some trippy ass clipart for Facebook on pixabay

This doesn’t mean there is no hope for the information ecosystem of the United States. It does not mean that this ongoing nightmare is permanent. As Dennis, Kim, and Moravec suggest, Facebook could grow a spine and start employing actual experts. Experts empowered to filter. Experts who are empowered to deem what is bullshit and what is not. But of course, this is what scares them most of all. See what Zuckerberg wrote in his Big Announcement: “The hard question we’ve struggled with is how to decide what news sources are broadly trusted in a world with so much division. We could try to make that decision ourselves, but that’s not something we’re comfortable with.”

“Not comfortable with.” Consider that wording. They’re not comfortable with doing the one thing that might actually help to dislodge the cerebral-fluid sucking leech that is currently wrapped around the brainstems of the social-media using public. It would be so awful if Facebook was made uncomfortable.

And it will do anything to avoid discomfort. Mark Zuckerberg and Facebook are simply abdicating responsibility again. They know that these “checks” won’t work. They know damn well that hiring editors and engaging in meaningful moderation is what they haven’t tried, what is most likely to work, and what is most likely to earn them the ire of the Trump cult that now squats wetly in the White House. Cowardice has won out, again: they’ve simply come up with another semi-clever way to fob off responsibility on their users. When these “credibility checks” inevitably fail or are compromised by hordes of wild-eyed Pepes, Facebook will, right on schedule, act surprised and aghast, then quickly pretend it never happened. You should be insulted that they think we’ll just keep falling for this. We have to stop falling for this.

These so-called credibility checks are just Facebook’s latest milquetoast and insulting effort to pretend it is dealing with its disinformation problem. Just a few weeks ago, Facebook announced that it would be reducing public content on the news feed, a change meant to social-engineer “meaningful social interactions with family and friends” for its users. This might sound well and good – if you are much more comfortable with being socially-engineered by blank-eyed boys from Silicon Valley than I am – or at least it does until you hear from people who have already undergone this change. Facebook is fond of using countries from markets it deems insignificant as guinea pigs for its changes, and in 2017, Sri Lanka, Guatemala, Cambodia, Slovakia, Bolivia, and Serbia were shoved in the direction of “meaningful social interaction.” (One does wonder about the selection, considering the unpleasant history these nations share.) The results were, to quote local journalists in Guatemala, “catastrophic.” Reporters in these countries suddenly found their publications – important sources of information in fragile political systems – deprived of their largest source of readership and income.

Adam Mosseri, head of Facebook’s News Feed, responded to these reporters’ anguish with the serene, Athenian calm that only tech evangelicals can muster: “The goal of this test is to understand if people prefer to have separate places for personal and public content. We will hear what people say about the experience to understand if it’s an idea worth pursuing any further.” (Whoops, we broke your already-fragile democracy! Move fast! Break things!) Dripping a new shampoo line into a little white bunny rabbit’s quivering eyeballs is also a test. The difference between the two? Testing your new product on embattled reporters in formerly war-torn nations is much more socially acceptable.

Facebook has also recently attempted to socially engineer us into being better citizens. In late 2017, I wrote about Facebook’s ill-considered civic engagement tools or “constituent services,” which were meant to (in a nutshell) make it easier for you to badger your representative or for your representative to badger you back. Using these tools, of course, required a Facebook account – and you also had to tell Facebook where you lived, so it could match you up with your representative.  Facebook would very much like a world in which people need to submit to having a Facebook account to meaningfully communicate with their representatives. Facebook would, we can probably assume, very much like a world where pretty much everything is like Facebook. This is probably not going to change. 

Yes, I know: Zuckerberg furrowed his brow somewhere in his mansion and said that he might consider cutting his profits to reduce the gigantic social problem that he’s engendered. By that, he means doing things that might actually address the disinformation problem: these things might take a variety of forms, from actually hiring experts and editors, to actually paying for news (as, incredibly, Rupert Murdoch just suggested) to hiring and meaningfully compensating a competent army of moderators. But consider our available evidence.  Do we really believe that he’ll flout his (scary) board and do the right thing? Or will he and Facebook once again choose comfort, and do nothing at all? 

“We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard,” said John F. Kennedy, in a quote that I am deadly certain Facebook employees like to trot out as they perfect methods of micro-targeting underpants ads to under-25 men who like trebuchets, or perfect new Messenger stickers of farting cats, or sort-of-accidentally rupture American democracy. Perhaps someday Facebook will develop an appetite for dealing with things that are actually hard, that are actually uncomfortable.

I’m not holding my breath. 

India’s Own Great “Firewall”? Censorship Hits World’s Biggest Democracy

(Photo: flickr, http://www.flickr.com/photos/ghoseb/2788306665/)

India’s tradition of free speech may be facing its biggest obstacle yet, following an end-of-year government push to require Internet giants Facebook, Microsoft, Yahoo and Google to filter their users’ content for “offensive” material.

The crackdown came after Communications Minister Kapil Sibal became aware of photoshopped images of Sonia Gandhi and Prime Minister Manmohan Singh hosted on social networking sites sometime in September, as well as some images deemed offensive to Islam. Sibal swiftly demanded the social media companies remove the offensive material and create human-run monitoring systems for their networks, which would catch such images before they hit the Internet.

The good news is that the companies ignored him, demanding a court order before they would take action—and pointing out in two recent meetings that they would rather not put themselves in a position to decide what is and what isn’t “offensive.” In any case, with approximately 100 million Indians now using the Internet, the companies told Sibal his monitoring plans would be impossible to implement.

One would think that Sibal would leave it at that. And, as of Dec 15, according to a report by the Press Trust of India, Sibal seems to have taken his strident tone down a notch or two, following a meeting with Google, Facebook and Twitter. (His change in tone may be chalked up to the nature of the Indian media itself, a famously vocal bunch of newspapers, writers, and bloggers, just about all of whom seemed to have a choice word or two regarding Sibal’s dreams of censorship.)

Read more at UN Dispatch….