It’s a long read, but worth it: Rodrigo Ochigame, a former AI researcher at MIT’s Media Lab, examines Big Tech’s negative influence on AI ethics research.
MIT lent credibility to the idea that big tech could police its own use of artificial intelligence at a time when the industry faced increasing criticism and calls for legal regulation…Meanwhile, corporations have tried to shift the discussion to focus on voluntary “ethical principles,” “responsible practices,” and technical adjustments or “safeguards” framed in terms of “bias” and “fairness” (e.g., requiring or encouraging police to adopt “unbiased” or “fair” facial recognition).
Check It Out: Inside Big Tech’s Manipulation of AI Ethics Research
Andrew:
Rodrigo Ochigame’s piece on the invention of “ethical AI” is indeed thought-provoking. It provides a deep dive into how corporate (private-sector) drivers shape the limits of regulation imposed on AI product development. What it does not address, likely by intent, are the many other vital contextual issues that drive AI development and select for a relatively lax regulatory environment, one that openly eschews conditions such as ‘fairness’ and avoidance of ‘bias’ in military applications. That laxity can further affect commercial applications in other sectors, including law enforcement, human-resources practices, banking and the extension of credit and loans, and other aspects of human profiling that fall under the rubric of population-level surveillance.
At the heart of that theme is a questioning of the motives behind the outsized role of the private sector in shaping the discussion of ethical AI development, and the level of regulation (none to modest) that should be applied, as seen in these two excerpts:
“The MIT-Harvard fund’s initial director was the former “global public policy lead” for AI at Google. Through the fund, Ito and his associates sponsored many projects, including the creation of a prominent conference on “Fairness, Accountability, and Transparency” in computer science; other sponsors of the conference included Google, Facebook, and Microsoft.”
“How did five corporations, using only a small fraction of their budgets, manage to influence and frame so much academic activity, in so many disciplines, so quickly? It is strange that Ito, with no formal training, became positioned as an “expert” on AI ethics, a field that barely existed before 2017. But it is even stranger that two years later, respected scholars in established disciplines have to demonstrate their relevance to a field conjured by a corporate lobby.”
As important as these questions are, they do not address the broader context of the de facto AI arms race between major powers, much of it driven by state-sponsored initiatives for military application, industrial espionage, infrastructure protection and sabotage, and population-level threat identification and containment, often under the aegis of ‘counter-terrorism’. This arms race, unlike its nuclear counterpart, is neither bipolar nor aimed at détente under the threat of mutual assured destruction. Rather, it is a quest for technological hegemony, whose side benefits will be informational, economic, and military dominance.
Also unlike the nuclear arms race, this one is not simply an open private/public partnership but a triad of the corporate, governmental, and academic sectors, whose domains are fungible, whose interests are at once distinct and symbiotic, and whose momentum is a juggernaut whose path regulators will fear to cross lest they compromise national security and their respective nations’ bids for global primacy.
Little of this bodes well for the individual, or for populations whose vulnerability is as yet unsuspected; suffice it to say that whoever’s turn comes first, it will inevitably be followed swiftly by others’, and ultimately by all of ours, likely sooner rather than later.
One real countermeasure against such abuse, not excluded by the current alignment of interests but requiring stricter oversight and regulation, is to separate belligerent from peaceful applications of AI products, notably those with direct domestic and consumer application. These would be subject to peer review, open standards, third-party regulation, and consumer recourse and reporting, along with a publicly available register where both complaints and their redress are logged; in a word, transparency.
Without such bold action, our foray into this brave new world will be less brave than foolhardy, and ultimately self-destructive, with individual liberty and freedom among the greatest casualties.