Redefining Corporate Responsibility, DMCA and Publisher Immunity, with John Kheit – ACM 521
Show Notes
Sources referenced in this episode:
- 180 CEOs With Tim Cook Want to Redefine a Corporation’s Purpose
- Apple Context Machine Facebook Group - Facebook
- Bryan's Twitter
- Bryan's blog: GeekTells
I’ve always been a fan of pre-crime. It just makes sense.
How’s Tim Cook’s political savvy looking now? Cook is a heavy hitter in his own league, but China is a different league altogether.
Let us also not confuse taking a stance on soft social issues at home, which is more about building brand when you’re not shipping product, with international politics and profits. The latter is a bigger game, one in which Cook and Apple cannot afford to take too high a moral ground, for fear of looking silly in the small picture, or simply outmatched in the bigger one. Apple might be the world’s largest corporation, but it’s dwarfed by China Inc.
Bryan and John K:
Great discussion. I seldom respond to your ACM podcasts, but this one is an exception.
TLDR: AI can be better deployed to impartially identify speech likely to result in violence, using evidence-based tools at our disposal.
Regarding the first topic about corporations, I concur with John K. Bryan, you should simply concede.
What your first and second topics have in common is the concept of community, whether a community of corporations and their culture of responsibility, or a community of people bound by shared beliefs and practices.
Regarding the issue of free speech (your second topic, I forget what you called it), I think much of the public discussion around this topic is mired in precedent, certainly in the US context of the First Amendment, such that the rightful protection of an individual’s liberty to say hateful things is extended to entities that are not individuals and to products that are not speech.
Speech is about the sharing of ideas, and once people go beyond simply agreeing with those ideas to organising around and practising them, they become a community. A community is not simply a collection of individuals, but of people engaged in a collective identity (eg Bostonians) capable of organising around collective action (eg supporting the Red Sox). If the rubbish collectors for my neighbourhood go on strike, my neighbours and I can organise our own rubbish collection and keep our community clean.
Community therefore serves two purposes: identity, and support for collective or individual action (eg taking my own rubbish to the dump as my personal contribution to the community action plan). At issue with support is, support to what end? If it is support to be better individuals, or to do things that benefit the community, or beyond that, to provide a benefit to others outside of one’s own community, then we seldom have concerns, but applaud that community’s existence. When that community encourages or provides even moral support to its members to do harm to an individual or to another community, or by other means engage in unlawful conduct, then we seldom question the need for law enforcement or other authorities to intervene, prevent that action, and even disband that community.
What is different today from when the Founding Fathers wrote the US Constitution and the Bill of Rights is that, courtesy of the internet, many communities are virtual and independent of physical fellowship. Importantly, virtual communities are organised around ideas, sentiments, beliefs, and objectives conveyed by online speech. When we focus only on the speech, but ignore its organising principle and its product, encouragement and support for collective or individual action, then our analysis is dangerously incomplete.
Further, to argue that such community creation is simply freedom of assembly, albeit virtual, is to ignore the distinction between peaceful assembly and riot. Worse still, if we divorce those actions from their organising principle of speech, we deny ourselves the strategic advantage of effective prevention. We should all have the freedom to assemble and even to peacefully protest; however, assembly to riot, vandalise, terrorise, or murder is unlawful.
The challenge for any free society, then, is striking the balance between permitting free speech and preventing its harmful actions and consequences. I propose that intervention come not at the later stage of community formation, but earlier, on the basis of the speech’s specific content. There are tools that we can borrow from political science and history to assist in that task, one in particular, but it comes with both a caveat and an emerging solution.
When we study demagogues who have used rhetoric (speech) to mobilise and energise a following, and thereby create a community of the like-minded and the malleable, almost invariably they have used a specific tool, referred to in political science as ‘villain-making’, that has invariably resulted in violence: the vilification and demonisation of a specific group, with or without a nominal figurehead or leader. This group is made to embody all that is hateful and ultimately responsible for whatever grievance the demagogue articulates has harmed the community, the nation, or the tribe.
Invariably, such villain-making has resulted in actions ranging from ostracism and segregation, to the creation of ‘re-education camps’, to the assigning of terrorist status, to ethnic cleansing, pogroms, and genocide. The data are unambiguous. Pretending that such speech, although it has resulted in violence and brutality in the past, might today remain harmless meets Einstein’s definition of insanity, namely repeating the same thing and expecting a different outcome (villain-making that results in peace on earth and good will towards men). Speech that incorporates villain-making can therefore be used as a marker of portent and an indicator of the need for intervention.
There is a caveat. We are better at spotting villain-making in people and societies different from our own than we are in ourselves. Witness and compare the rapidity of threat identification and intervention in the West when its villain-making proponents are Islamist jihadists vs nativist racial supremacists, the latter continuing to harm at pace. Thus, we are limited by our cultural biases in how quickly we identify the threat.
A possible solution is a dispassionate arbiter and assayer of speech that will use evidence-based algorithms impartially and without bias, provided that it is trained to do so. Enter AI. AI can be trained to review posts and speeches, and to apply algorithms that look for tells in speech that betray villain-making and demonisation, irrespective of the identity of either the speaker or their target. In practice, such training and improvement will be iterative and get better over time. However, once AI identifies the use of speech that qualifies as villain-making, in the interests of public safety, and following a thorough review by human arbitration, such accounts can be shut down before that speech can accrete a critical mass.
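To make the idea concrete, here is a minimal toy sketch in Python of how such a tell-spotting pass might look. The cue lists, scoring, and threshold are purely illustrative assumptions, not a trained model or any platform’s actual system; a real deployment would use an iteratively trained classifier, as the letter envisions, with human review before any action.

import re

# Hypothetical cue patterns for villain-making rhetoric (illustrative only):
# dehumanising labels and collective-blame framings.
DEHUMANISING = [r"\bvermin\b", r"\binfestation\b", r"\bsubhuman\b", r"\bplague\b"]
COLLECTIVE_BLAME = [r"\bthey are all\b", r"\bevery one of them\b", r"\bthose people\b"]

def villain_making_score(text: str) -> float:
    """Crude 0..1 score: the fraction of cue categories present in the text."""
    lowered = text.lower()
    hits = sum(
        1 for patterns in (DEHUMANISING, COLLECTIVE_BLAME)
        if any(re.search(p, lowered) for p in patterns)
    )
    return hits / 2

def flag_for_human_review(post: str, threshold: float = 0.5) -> bool:
    # Flag rather than auto-remove: the proposal above calls for thorough
    # human arbitration before any account is shut down.
    return villain_making_score(post) >= threshold

print(flag_for_human_review("Those people are vermin; they are all to blame."))  # True

A keyword heuristic like this would of course be far too coarse (and too easy to evade) in practice; the point is only to show where the learned, iteratively improved model would slot into the review pipeline.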
Just a thought, but it is past time to create and deploy modern solutions to a modern take on an age-old problem.
Corporate responsibility. Thanks for providing the biggest laugh-out-loud I’ve had in a very long time. Silicon Valley became concerned that the teachers teaching their children couldn’t afford to buy a house anywhere near the schools in which they taught – and what did it do? Started a charity to LOAN teachers a starter amount to get them into the Silicon Valley property market. How magnanimous!
NOT pay teachers a wage commensurate with the expensive area in which they need to live (everything else is more expensive too, not just housing), because that would destroy the obscene-wage lifestyle of the Silicon Hippies. Then they’d have to pay the servers in their beloved restaurants a decent wage, and so on and so forth. And you can’t have ‘the little people’ getting above their station; the very (obscene) fabric of Silicon Valley society could come undone.
You guys put a smile on my face that lasted all day. Love you.
I was very confused by Mr. Kheit’s comments regarding section 230. Mr. Kheit said any sort of editorializing put a company’s Section 230 protection at risk. This seems to directly contradict what I have heard from Nilay Patel, also an attorney, on The Verge multiple times and from the EFF.
Since I was confused I looked for the text of Section 230 and found it at the Cornell Law School site. This seems to be the relevant section:
“(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).[1]”
I am not a lawyer, but this seems to clearly state that Facebook or any other site can remove anything posted by any of their users because the site finds it objectionable. They have no obligation to be neutral or to function as a public square. Editorializing presents no risk to a site’s Section 230 protection.
If Mr. Kheit or anyone else has information to the contrary I would love to see it.
So Section 230 interacts with the DMCA. The DMCA’s safe harbor provisions exempting liability for copyright infringement are found in Section 512, which you can see here:
https://www.law.cornell.edu/uscode/text/17/512
The relevant part of which says:
(a)Transitory Digital Network Communications.—A service provider shall not be liable for monetary relief, or, except as provided in subsection (j), for injunctive or other equitable relief, for infringement of copyright by reason of the provider’s transmitting, routing, or providing connections for, material through a system or network controlled or operated by or for the service provider, or by reason of the intermediate and transient storage of that material in the course of such transmitting, routing, or providing connections, if—
….
(3) the service provider *does not select* the recipients of the material except as an automatic response to the request of another person
The Section 230 exemption deals with obscenity, not political speech, imo; here is the relevant subsection:
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
As such, imo, if they editorialize beyond obscenity-type material (eg porn, which is constitutionally protected, or harassment, which is not), then they may be treated as a publisher instead of a provider of a public forum. Section 512 shows that selection will help lose the safe harbor exception, and that goes beyond obscenity. Political speech is not obscenity. So I don’t see any editorializing beyond traditional obscenity being supported, and I think the courts won’t be able to write in “political speech/hate speech/etc” as equal to obscenity, particularly when earlier paragraphs of the statute promote political speech, see here:
(a) Findings
The Congress finds the following:
…
(5) Increasingly Americans are relying on interactive media for a variety of political, educational, cultural, and entertainment services.
Of course other folks may argue otherwise and we won’t know until it goes to court and is appealed to the Supreme Court.
Tech companies lost their immunity because they found that if they called themselves NEWS services, their share prices shot up beyond their wildest imaginations. They are now paying the price for that. If you’re a news service, you are responsible for what you publish.
Algorithms were invented to ‘create’ the news services, and as a consequence, algorithms are now responsible for providing brand-safe content for advertisers. Don’t believe for a second that tech companies have any idea (or even care) what’s right or wrong. They are the most amoral of all corporations, many of them having been started by children with no sense of the consequences of their actions. Don’t believe the rhetoric or you’ll just get lost.