Playing on the good side.

30.07.19 05:54 PM By Jordan

Artificial Intelligence (AI) has been a driving force of the Fourth Industrial Revolution and is central to the influence that technology wields today.

However, there are ethical concerns regarding AI. Not only does it pose a threat to jobs where a robot can carry out basic tasks better than a human can, but concerns around data collection and data privacy have also cast a shadow of suspicion over its development.

That said, I recently read several articles which pointed out that there is a strong movement towards the development of ethical AI.

Key investor.

The first article pointed out that Microsoft is eager to lead the way in this area. This is encouraging news for GTconsult, which has a strong business relationship with the company.

The article pointed out that Microsoft announced a $1bn investment in an OpenAI ethical artificial intelligence project backed by Tesla’s Elon Musk and Amazon.

The partnership will be devoted to developing advanced AI models on Microsoft’s Azure cloud computing platform while adhering to “shared principles on ethics and trust,” the companies said in a joint release.

The article added that OpenAI and Microsoft expressed a vision of “artificial general intelligence” (AGI) working with people to help solve daunting problems such as climate change.

OpenAI chief executive Sam Altman said the goal of the effort is to allow artificial intelligence to be “deployed safely and securely and that its economic benefits are widely distributed.”

Democratise AI.

The article pointed out that Microsoft will become the preferred partner for commercialising new “supercomputing” artificial intelligence technologies developed as part of the initiative.

“AI is one of the most transformative technologies of our time and has the potential to help solve many of our world’s most pressing challenges,” Microsoft Chief Executive Satya Nadella told AFP.

“By bringing together OpenAI’s breakthrough technology with new Azure AI supercomputing technologies, our ambition is to democratise AI.”

A joint statement noted that OpenAI is producing “a sequence of increasingly powerful AI technologies, which requires a lot of capital for computational power” and that the agreement with Microsoft would help in commercialising these products.

Risks to humanity.

The article points out that OpenAI was launched in 2015 with financing from tech entrepreneur Musk, LinkedIn founder Reid Hoffman and Peter Thiel, a co-founder of PayPal with Musk who remains active in technology and is an ally of President Donald Trump.

Musk and others have warned that artificial intelligence could pose risks to humanity if mismanaged, allowing the potential emergence of “Terminator”-type killer robots, for example.

The article adds that OpenAI researchers announced early this year that they had developed an AI-based automatic text generator so capable that the organisation decided to keep its details private.

Microsoft is also a member, along with Amazon, Google, Facebook and IBM, of the nonprofit Partnership on AI, which is focused on helping the public understand the technology and practices in the field.

Just good business.

An article I read on ZDNet pointed out that ethical AI is just good business.

The article points out that ethics looms as a vexing issue when it comes to artificial intelligence (AI). Where does AI bias spring from, especially when it’s unintentional? Are companies paying enough attention to it as they plunge full-force into AI development and deployment? Are they doing anything about it? Do they even know what to do about it?

The article adds that wringing bias and unintended consequences out of AI is making its way into the job descriptions of technology managers and professionals, especially as business leaders turn to them for guidance and judgement. The drive to ethical AI means an increased role for technologists in the business, as described in a study of 1,580 executives and 4,400 consumers from the Capgemini Research Institute. The survey was able to make direct connections between AI ethics and business growth: if consumers sense a company is employing AI ethically, they’ll keep coming back; if they sense unethical AI practices, their business is gone.

Competitive pressure.

The article points out that competitive pressure is the reason businesses are pushing AI to its limits and risking crossing ethical lines. “The pressure to implement AI is fueling ethical issues,” the Capgemini authors, led by Anne-Laure Thieullent, managing director of Capgemini’s Artificial Intelligence & Analytics Group, state. “When we asked executives why ethical issues resulting from AI are an increasing problem, the top-ranked reason was the pressure to implement AI.” Thirty-four percent cited this pressure to stay ahead with AI trends.

One-third of respondents reported that ethical issues were not considered while constructing AI systems, the survey shows, and another 31% said their main issue was a lack of people and resources. This is where IT managers and professionals can make the difference.

The article adds that the Capgemini team identified the issues with which IT managers and professionals need to deal:

  • Lack of ethical AI code of conduct or ability to assess deviation from it;
  • Lack of relevant training for developers building AI systems;
  • Ethical issues were not considered when constructing AI systems;
  • Pressure to urgently implement AI without adequately addressing ethical issues; and
  • Lack of resources (funds, people, technology) dedicated to ethical AI systems.

Thieullent and her co-authors have advice for IT managers and professionals taking a leadership role in terms of AI ethics:

Empowerment.

The article adds that AI needs to empower users with more control and the ability to seek recourse: “This means building policies and processes where users can ask for explanations of AI-based decisions.”

Make AI systems transparent and understandable to gain users’ trust: “The teams developing the systems should provide the documentation and information to explain, in simple terms, how certain AI-based decisions are reached and how they affect an individual. These teams also need to document processes for data sets as well as the decision-making systems.”
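To make the idea of explaining AI-based decisions a little more concrete, here is a minimal sketch of my own (not from the article or Capgemini) showing how a team might generate a plain-language explanation for a simple linear scoring model; the feature names, weights and approval threshold are all hypothetical.

```python
# Minimal sketch (illustration only): surfacing a plain-language explanation
# for a decision made by a simple linear scoring model.
# Feature names, weights, and the approval threshold are hypothetical.

FEATURES = ["income", "years_employed", "existing_debt", "missed_payments"]
WEIGHTS = {"income": 0.4, "years_employed": 0.3,
           "existing_debt": -0.5, "missed_payments": -0.8}
THRESHOLD = 0.0  # scores above this are approved

def explain_decision(applicant: dict) -> str:
    """Return a short, human-readable explanation of the model's decision."""
    # Per-feature contribution = weight * (standardised) feature value.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in FEATURES}
    score = sum(contributions.values())
    decision = "approved" if score > THRESHOLD else "declined"

    # Rank features by how strongly they pushed the decision either way.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    top = ", ".join(f"{name} ({value:+.2f})" for name, value in ranked[:2])
    return f"Application {decision} (score {score:+.2f}); main factors: {top}"

# Example usage with standardised (z-scored) inputs.
print(explain_decision({"income": 1.2, "years_employed": 0.5,
                        "existing_debt": 0.8, "missed_payments": 1.5}))
```

Even a toy explanation like this documents which inputs drove a given decision, which is the kind of information the article suggests users should be able to ask for.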

The article adds that companies need to practise good data management and mitigate potential biases in data: “While general management will be responsible for setting good data management practices, it falls on the data engineering, data science and AI teams to ensure those practices are followed through. These teams should incorporate ‘privacy-by-design’ principles in the design and build phase and ensure robustness, repeatability, and auditability of the entire data cycle (raw data, training data, test data, etc.).”

Important checks.

The article points out that, as part of this, IT managers need to “check for accuracy, quality, robustness, and potential biases, including detection of under-represented minorities or events/patterns,” as well as “build adequate data labeling practices and review periodically, store responsibly, so that it is made available for audits and repeatability assessments.”

Keep close scrutiny on datasets: “Focus on ensuring that existing datasets do not create or reinforce existing biases. For example, identifying existing biases in the dataset through use of existing AI tools or through specific checks in statistical patterns of datasets.” This also includes “exploring and deploying systems to check for and correct existing biases in the dataset before developing algorithms,” and “conducting sufficient pre-release trials and post-release monitoring to identify, regulate, and mitigate any existing biases.”
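As an illustration of the kind of dataset check described above, the following is a minimal sketch of my own (not from the article) that flags under-represented groups and large gaps in positive-outcome rates between groups; the column names and thresholds are hypothetical.

```python
# Minimal sketch (illustration only): flagging under-represented groups and
# outcome-rate gaps in a labelled dataset. Column names and thresholds are hypothetical.
from collections import Counter, defaultdict

def audit_dataset(rows, group_key="gender", label_key="label",
                  min_share=0.10, max_rate_gap=0.10):
    """Report groups below a minimum share and gaps in positive-label rates."""
    counts = Counter(r[group_key] for r in rows)
    positives = defaultdict(int)
    for r in rows:
        positives[r[group_key]] += int(r[label_key])

    total = len(rows)
    rates = {g: positives[g] / counts[g] for g in counts}

    for g, n in counts.items():
        if n / total < min_share:
            print(f"Under-represented group: {g} ({n}/{total} rows)")

    gap = max(rates.values()) - min(rates.values())
    if gap > max_rate_gap:
        print(f"Positive-label rate gap across groups is {gap:.2f}: {rates}")

# Example usage with a tiny toy dataset.
toy = [{"gender": "F", "label": 1}, {"gender": "F", "label": 0},
       {"gender": "M", "label": 1}, {"gender": "M", "label": 1},
       {"gender": "M", "label": 1}, {"gender": "M", "label": 0}]
audit_dataset(toy)
```

Checks like this are deliberately simple; in practice they would sit alongside the pre-release trials and post-release monitoring the article describes.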

The article adds that companies need to use technology tools to build ethics in AI: “One of the problems faced by those implementing AI is the black-box nature of deep learning and neural networks. This makes it difficult to build transparency and check for biases.” Increasingly, some companies are deploying tech and building platforms which help tackle this. Thieullent and her co-authors point to encouraging developments in the market, such as IBM’s AI OpenScale, open source tools, and solutions from AI startups that can provide more transparency and check for biases.

Create ethics governance structures and ensure accountability for AI systems: “Create clear roles and structures, assign ethical AI accountability to key people and teams and empower them.” This can be accomplished by “adapting existing governance structures to build accountability within certain teams. For example, the existing ethics lead (e.g., the Chief Ethics Officer) in the organization could be entrusted with the responsibility of also looking into ethical issues in AI.”

It’s also important to assign “senior leaders who would be held accountable for ethical questions in AI.” Thieullent and the Capgemini team also recommend “building internal and external committees responsible for deploying AI ethically, which are independent and therefore under no pressure to rush to AI deployment.”

Build diverse teams to ensure sensitivity towards the full spectrum of ethical issues: “It is important to involve diverse teams. For example, organizations not only need to build more diverse data teams (in terms of gender or ethnicity), but also actively create inter-disciplinary teams of sociologists, behavioral scientists and UI/UX designers who can provide additional perspectives during AI design.”

Powerful explosion.

I recently read an article that provides an interesting blueprint.

The article points out that the field of artificial intelligence is exploding with projects such as IBM Watson, DeepMind’s AlphaZero, and voice recognition used in virtual assistants including Amazon’s Alexa, Apple’s Siri, and Google’s Home Assistant. Because of the increasing impact of AI on people’s lives, concern is growing about how to take a sound ethical approach to future developments.

The article adds that building ethical artificial intelligence requires both a moral approach to building AI systems and a plan for making AI systems themselves ethical. For example, developers of self-driving cars should consider the cars’ social consequences, including ensuring that the cars themselves are capable of making ethical decisions.

Ethical Questions.

The article points out that some major issues that need to be considered include:

  • Should we be worried about the prospect of machines becoming more intelligent than humans, and what can we do about it?
  • What needs to be done to prevent new AI applications from creating mass unemployment?
  • How can an AI application such as face recognition be used for social control in ways that restrict the privacy and freedom of human beings?
  • How can AI systems increase or possibly decrease social biases and inequality? And
  • What are the harms associated with the development of killer robots?

We need a general approach to ethics that can help to answer such questions.

Ethical Challenges.

The article adds that applying ethics to artificial intelligence is difficult because of the lack of a generally accepted ethical framework. Here are some of the challenges that need to be dealt with to come up with ethical AI.

  • Ethical theories are highly controversial. Some people prefer ethical principles established by religious texts such as the Bible or the Quran. Philosophers argue about whether ethics should be based on rights and duties, on the greatest good for the greatest number of people, or on acting virtuously;
  • Acting ethically requires satisfying moral values, but there is no agreement about which values are appropriate or even about what values are. Without an account of the appropriate values that people use when they act ethically, it is impossible to align the values of AI systems with those of humans; and
  • To build an AI system that behaves ethically, ideas about values and right and wrong need to be made sufficiently precise that they can be implemented in algorithms, but precision and algorithms are sorely lacking in current ethical deliberations.

Ethical Plan.

Fortunately, the author of the article has written a book, Natural Philosophy, which presents an account of ethics that can meet these challenges.

  • The author argues that the most plausible ethical theory is one that evaluates actions based on the extent to which they satisfy the vital needs of human beings. Vital needs are ones that are required for human lives and are distinguished from casual wants such as desiring a fancy car. Vital needs include not only biological needs such as food, water, and shelter, but also evidence-based psychological needs such as autonomy, relatedness to other people, and competence to achieve personal and social goals;
  • Accordingly, the appropriate values to be taken into account in ethical decisions are these vital human needs. The justification of such values comes not from religious texts or pure reason, but from empirical research that shows that these needs are in fact crucial to human lives;
  • Evaluating different actions with respect to how well they accomplish different needs for different people is an extraordinarily complex process, but it can be performed by algorithms that balance a multitude of constraints based on which actions satisfy most needs for most people. Such algorithms can be efficiently computed by neural networks and other methods.

Ethical Procedure.

The article adds that, accordingly, the author proposes the following ethical procedure to be carried out by people making decisions about the development of AI; a minimal code sketch of the final steps follows the list. Moreover, this procedure could be implemented in actual machines.

  • List the alternative actions that are worth considering in a particular situation. The ethical deliberation will assess these actions and choose based on moral considerations, not just on personal preferences. For example, government officials can consider whether or not to make military robots more intelligent and autonomous;
  • Identify all the people affected by these actions, including future generations as well as people currently alive. For killer robots, consider people who might be saved as well as ones that would be killed;
  • For each action, assess the extent to which it helps to promote or impede the satisfaction of human vital needs. For killer robots, the consequences to be considered include the survival and other needs of all the people potentially affected by intelligent weapons;
  • Translate the promotion of needs by actions into positive constraints and translate incompatibilities between actions into negative constraints. The result is a large constraint satisfaction network that can be evaluated computationally; and
  • Maximize constraint satisfaction and choose the actions that do the best job of satisfying human needs.
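To make the last two steps more tangible, here is a minimal sketch of my own (not from the article or the book) that scores alternative actions against weighted vital needs and picks the one that best satisfies the constraints. It uses a simple weighted sum rather than the neural-network methods the author mentions, and the actions, needs, weights and scores are all hypothetical.

```python
# Minimal sketch (illustration only): choosing among alternative actions by how
# well they satisfy weighted human needs. Actions, needs, weights, and scores
# are all hypothetical.

NEED_WEIGHTS = {"survival": 1.0, "autonomy": 0.6, "relatedness": 0.5, "competence": 0.4}

# For each action, how strongly it promotes (+) or impedes (-) each need,
# aggregated over everyone affected (scale -1 to +1).
ACTIONS = {
    "deploy_autonomous_weapon": {"survival": -0.7, "autonomy": -0.4,
                                 "relatedness": -0.2, "competence": 0.1},
    "keep_human_in_the_loop":   {"survival": 0.3, "autonomy": 0.2,
                                 "relatedness": 0.1, "competence": 0.2},
    "ban_development":          {"survival": 0.5, "autonomy": 0.0,
                                 "relatedness": 0.2, "competence": -0.1},
}

def constraint_satisfaction(action_effects):
    """Sum of weighted need satisfaction; higher means more constraints met."""
    return sum(NEED_WEIGHTS[n] * effect for n, effect in action_effects.items())

# Evaluate every action and pick the one that best satisfies the constraints.
scores = {a: constraint_satisfaction(e) for a, e in ACTIONS.items()}
best = max(scores, key=scores.get)
for action, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{action}: {score:+.2f}")
print("Chosen action:", best)
```

The real difficulty, of course, lies in agreeing on the needs, the weights and the estimated effects, which is exactly where the ethical debate sits.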

Whatever route is taken, ethical AI needs to be worked towards.

Jordan
