I’ve previously written about calls to Silicon Valley to embrace ethics, and how companies should have a Chief Ethics Officer. But you can’t really have ethics without morals, and this article explains how we can improve our moral reasoning.
Addressing the issues brought on by artificial intelligence, biological advances, and the information age, I’d like to create a generalized method of moral reasoning that any human being in our current age can use to address issues like gene editing, while remaining faithful to the work of philosophers and historians.
Check It Out: Improving Our Moral Reasoning in the Digital Age
Andrew:
That was an interesting read. Thank you for sharing it.
While I thought the author made a number of good points, well thought out and referenced, particularly his thought exercise differentiating reasoning from rationalisation, and countered a number of objections to his suggestions, I found the article to be more of a dialectical exercise than either a methodology or even an analytical framework. His appeal, as I understood it, is for the reader to exercise reason, avoid rationalisation, and appreciate that machines cannot make moral judgements or even appreciate the morality of a given situation; people do. However, I’d like to think that the matter is both simpler and, paradoxically, more complex, as in more diverse, than his thesis suggests.
Briefly, this can be reduced to a simple question set: who are we, what contribution do we wish to make, and how do we achieve that? Obviously, we do not have consensus on any of those questions as a society, but they are questions that companies, tech companies specifically, should be asking themselves. While answers may differ, Apple have already committed themselves to bettering society by unlocking human potential with creativity tools. However effectively one thinks they have prosecuted that mission, there are trends indicating that this is their intent, including their stance on privacy, their record on security, and even their curation of third-party apps in the App Store, as attested by their recent actions regarding Tumblr. This is why I believe that Apple should be in the business of not only AI, which they are, but robotics, which is as yet unclear.
The complexity arises from the sheer diversity of companies and business models in the tech industry, the competitive nature of the market, and the potential for human greed to corrupt even the best intentions, let alone what avaricious and unscrupulous tech leadership is capable of. This is where the consumer’s role is paramount: demand, not simply supply, helps shape offerings, selecting not only for products that reflect our broader moral values, but against offerings we deem harmful or even ‘immoral’, at least in the context of extant culture, accepting that these notions may evolve as culture does.
There is one other bulwark, not so much against immorality as against its imposition, certainly the imposition of things harmful to society and that society does not want: legislation. More specifically, good, well-informed, evidence-based, and well-crafted legislation.
For either beneficent consumer behaviour or well-crafted and protective legislation, a well-informed, curious, engaged, and reasoning public, with access to information and the ability to voice its opinion through representative government, is essential.
A pretty good article.
“Emphasizing fundamentals of human and machine autonomy, Vallor explained how the abilities to govern our own lives and use our intelligence to make our own choices are a fundamental part of our ethical theories on these issues. AI challenges this with the promise of off-loading many of those choices to machines. But, though we give machines values, a machine doesn’t appreciate those values the way humans do. Machines are mathematically programmed to match patterns in a way that’s completely different from our methods of moral reasoning. The question is how much autonomy we must retain for ourselves so we can maintain the skill of governing ourselves. To be clear, though, Vallor said machines don’t make judgements. Judgements require perceiving the world, and, while machines process data in code, they don’t understand the patterns we perceive as humans. Instead, we have to understand what’s gained and what’s lost in giving that choice to machines. I believe these judgements are exactly where our heuristics about moral reasoning and scientific theories come into play. It’s what separates man from machine.”
I listen to several philosophy podcasts, including Philosophy Bites and Philosophy Bakes No Bread.
The link to the story opens the logo, not the story:
“Check It Out: Improving Moral Reasoning in the Digital Age”
Whoops, just fixed it