Top 9 ethical issues in artificial intelligence

As we speed along the AI highway, are we getting closer to an ethical crossroads?

https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/

Faced with an automated future, what moral framework should guide us?

Image: Matthew Wiebe

21 Oct 2016

Optimizing logistics, detecting fraud, composing art, conducting research, providing translations: intelligent machine systems are transforming our lives for the better. As these systems become more capable, our world becomes more efficient and consequently richer.

Tech giants such as Alphabet, Amazon, Facebook, IBM and Microsoft – as well as individuals like Stephen Hawking and Elon Musk – believe that now is the right time to talk about the nearly boundless landscape of artificial intelligence. In many ways, this is just as much a new frontier for ethics and risk assessment as it is for emerging technology. So which issues and conversations keep AI experts up at night?

  1. Unemployment. What happens after the end of jobs?

The hierarchy of labour is concerned primarily with automation. As we’ve invented ways to automate jobs, we could create room for people to assume more complex roles, moving from the physical work that dominated the pre-industrial globe to the cognitive labour that characterizes strategic and administrative work in our globalized society.

Look at trucking: it currently employs millions of individuals in the United States alone. What will happen to them if the self-driving trucks promised by Tesla’s Elon Musk become widely available in the next decade? But on the other hand, if we consider the lower risk of accidents, self-driving trucks seem like an ethical choice. The same scenario could happen to office workers, as well as to the majority of the workforce in developed countries.

This is where we come to the question of how we are going to spend our time. Most people still rely on selling their time to have enough income to sustain themselves and their families. We can only hope that this opportunity will enable people to find meaning in non-labour activities, such as caring for their families, engaging with their communities and learning new ways to contribute to human society.

If we succeed with the transition, one day we might look back and think that it was barbaric that human beings were required to sell the majority of their waking time just to be able to live.

  2. Inequality. How do we distribute the wealth created by machines?

Our economic system is based on compensation for contribution to the economy, often assessed using an hourly wage. The majority of companies are still dependent on hourly work when it comes to products and services. But by using artificial intelligence, a company can drastically cut back its reliance on the human workforce, and this means that revenues will go to fewer people. Consequently, individuals who have ownership in AI-driven companies will make all the money.

We are already seeing a widening wealth gap, where start-up founders take home a large portion of the economic surplus they create. In 2014, roughly the same revenues were generated by the three biggest companies in Detroit and the three biggest companies in Silicon Valley … only in Silicon Valley there were 10 times fewer employees.

If we’re truly imagining a post-work society, how do we structure a fair post-labour economy?

  3. Humanity. How do machines affect our behaviour and interaction?

Artificially intelligent bots are becoming better and better at modelling human conversation and relationships. In 2014, a bot named Eugene Goostman won the Turing Challenge for the first time. In this challenge, human raters used text input to chat with an unknown entity, then guessed whether they had been chatting with a human or a machine. Eugene Goostman fooled more than half of the human raters into thinking they had been talking to a human being.

This milestone is only the start of an age in which we will frequently interact with machines as if they were human, whether in customer service or sales. While humans are limited in the attention and kindness that they can expend on another person, artificial bots can channel virtually unlimited resources into building relationships.

Even though not many of us are aware of this, we are already witnesses to how machines can trigger the reward centres in the human brain. Just look at click-bait headlines and video games. These headlines are often optimized with A/B testing, a rudimentary form of algorithmic optimization for content to capture our attention. This and other methods are used to make numerous video and mobile games become addictive. Tech addiction is the new frontier of human dependency.
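As a rough illustration of the A/B testing mentioned above, here is a minimal sketch that compares the click-through rates of two hypothetical headlines with a two-proportion z-test. The counts and the scoring rule are invented for illustration and are not taken from any real publisher's pipeline.

```python
import math

def ab_test_headline(clicks_a, views_a, clicks_b, views_b):
    """Compare click-through rates of two headlines with a two-proportion z-test.

    All inputs are hypothetical counts; this illustrates the rudimentary
    optimization described above, not any site's actual system.
    """
    rate_a = clicks_a / views_a
    rate_b = clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (rate_b - rate_a) / se
    return rate_a, rate_b, z

# Example: headline B earns more clicks per view than headline A.
rate_a, rate_b, z = ab_test_headline(clicks_a=120, views_a=10_000,
                                     clicks_b=180, views_b=10_000)
print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  z = {z:.2f}")
```

A publisher running such a test would simply keep the headline with the significantly higher click-through rate and repeat the process.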

On the other hand, maybe we can think of a different use for software, which has already become effective at directing human attention and triggering certain actions. When used right, this could evolve into an opportunity to nudge society towards more beneficial behaviour. However, in the wrong hands it could prove detrimental.

  4. Artificial stupidity. How can we guard against mistakes?

Intelligence comes from learning, whether you’re human or machine. Systems usually have a training phase in which they “learn” to detect the right patterns and act according to their input. Once a system is fully trained, it can then go into test phase, where it is hit with more examples and we see how it performs.

Obviously, the training phase cannot cover all possible examples that a system may deal with in the real world. These systems can be fooled in ways that humans wouldn’t be. For example, random dot patterns can lead a machine to “see” things that aren’t there. If we rely on AI to bring us into a new world of labour, security and efficiency, we need to ensure that the machine performs as planned, and that people can’t overpower it to use it for their own ends.
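To make the train/test distinction and the "fooling" problem concrete, here is a minimal sketch using synthetic two-dimensional data and a scikit-learn logistic regression: the model scores well on held-out test data, yet assigns a confident label to a random point far outside anything it was trained on. The data and the model choice are illustrative assumptions, not how any production system is built.

```python
# A minimal sketch of the training and test phases described above, and of how
# a model can be confidently wrong on inputs unlike anything it was trained on.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Two well-separated classes of 2-D points stand in for real training data.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(+2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Training phase: the model "learns" to detect patterns from labelled examples.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Test phase: held-out examples estimate how well it generalizes.
print("test accuracy:", model.score(X_test, y_test))

# A "random dot" far outside the training distribution still gets a confident
# label -- the model has no built-in way to say "I've never seen this".
noise = rng.normal(50, 5, (1, 2))
print("class probabilities for the noise point:", model.predict_proba(noise))
```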

  5. Racist robots. How do we eliminate AI bias?

Though artificial intelligence is capable of a speed and capacity of processing far beyond that of humans, it cannot always be trusted to be fair and neutral. Google and its parent company Alphabet are among the leaders in artificial intelligence, as seen in Google’s Photos service, where AI is used to identify people, objects and scenes. But it can go wrong, such as when a camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people.

We shouldn’t forget that AI systems are created by humans, who can be biased and judgemental. Once again, if used right, or if used by those who strive for social progress, artificial intelligence can become a catalyst for positive change.

  6. Security. How do we keep AI safe from adversaries?

The more powerful a technology becomes, the more it can be used for nefarious reasons as well as good ones. This applies not only to robots produced to replace human soldiers, or autonomous weapons, but to AI systems that can cause damage if used maliciously. Because these fights won’t be fought on the battleground only, cybersecurity will become even more important. After all, we’re dealing with a system that is faster and more capable than us by orders of magnitude.


  7. Evil genies. How do we protect against unintended consequences?

It’s not just adversaries we have to worry about. What if artificial intelligence itself turned against us? This doesn’t mean by turning “evil” in the way a human might, or the way AI disasters are depicted in Hollywood movies. Rather, we can imagine an advanced AI system as a “genie in a bottle” that can fulfill wishes, but with terrible unforeseen consequences.

In the case of a machine, there is unlikely to be malice at play, only a lack of understanding of the full context in which the wish was made. Imagine an AI system that is asked to eradicate cancer in the world. After a lot of computing, it spits out a formula that does, in fact, bring about the end of cancer – by killing everyone on the planet. The computer would have achieved its goal of “no more cancer” very efficiently, but not in the way humans intended it.

  8. Singularity. How do we stay in control of a complex intelligent system?

The reason humans are on top of the food chain is not down to sharp teeth or strong muscles. Human dominance is almost entirely due to our ingenuity and intelligence. We can get the better of bigger, faster, stronger animals because we can create and use tools to control them: both physical tools such as cages and weapons, and cognitive tools like training and conditioning.

This poses a serious question about artificial intelligence: will it, one day, have the same advantage over us? We can’t rely on just “pulling the plug” either, because a sufficiently advanced machine may anticipate this move and defend itself. This is what some call the “singularity”: the point in time when human beings are no longer the most intelligent beings on earth.

  9. Robot rights. How do we define the humane treatment of AI?

While neuroscientists are still working on unlocking the secrets of conscious experience, we understand more about the basic mechanisms of reward and aversion. We share these mechanisms with even simple animals. In a way, we are building similar mechanisms of reward and aversion in systems of artificial intelligence. For example, reinforcement learning is similar to training a dog: improved performance is reinforced with a virtual reward.
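A minimal sketch of that "virtual reward" idea, assuming a made-up two-action agent: actions that earn reward gradually come to dominate the agent's value estimates, much as treats shape a dog's behaviour. This is a toy value-update loop, not a full reinforcement-learning algorithm.

```python
# Toy reward-driven learning: the agent's estimate of each action's worth is
# nudged toward the reward it actually receives. Actions and rewards are invented.
import random

values = {"sit": 0.0, "bark": 0.0}   # the agent's estimate of each action's worth
alpha = 0.1                          # learning rate

def reward(action):
    return 1.0 if action == "sit" else 0.0   # "sit" is the behaviour we reinforce

for _ in range(1000):
    # Mostly pick the best-looking action, occasionally explore at random.
    if random.random() > 0.1:
        action = max(values, key=values.get)
    else:
        action = random.choice(list(values))
    values[action] += alpha * (reward(action) - values[action])

print(values)   # the rewarded action ends up with the higher value
```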

Right now, these systems are fairly superficial, but they are becoming more complex and life-like. Could we consider a system to be suffering when its reward functions give it negative input? What’s more, so-called genetic algorithms work by creating many instances of a system at once, of which only the most successful “survive” and combine to form the next generation of instances. This happens over many generations and is a way of improving a system. The unsuccessful instances are deleted. At what point might we consider genetic algorithms a form of mass murder?
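The survive-and-recombine loop described above can be sketched in a few lines. The target string, fitness function and population sizes below are arbitrary illustrative choices; real genetic algorithms differ in their encodings and operators.

```python
# A toy genetic algorithm: many candidate "instances" are scored, the best
# survive and recombine, and the unsuccessful instances are deleted.
import random
import string

TARGET = "machine"                            # arbitrary goal for illustration
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def crossover(a, b):
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

def mutate(s, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for generation in range(200):
    # The most successful instances "survive"; the rest are discarded.
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]
    if fitness(survivors[0]) == len(TARGET):
        break
    # Survivors combine (and mutate) to form the next generation of instances.
    population = survivors + [mutate(crossover(*random.sample(survivors, 2)))
                              for _ in range(80)]

print(generation, survivors[0])
```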

Once we consider machines as entities that can perceive, feel and act, it’s not a huge leap to ponder their legal status. Should they be treated like animals of comparable intelligence? Will we consider the suffering of “feeling” machines?

Some ethical questions are about mitigating suffering, some about risking negative outcomes. While we consider these risks, we should also keep in mind that, on the whole, this technological progress means better lives for everyone. Artificial intelligence has vast potential, and its responsible implementation is up to us.

Written by

Julia Bossmann, President, Foresight Institute

The views expressed in this article are those of the author alone and not the World Economic Forum

Top 10 Predictions for Global Manufacturing in 2018: IDC

By 2020, 60% of manufacturers will rely on digital platforms which will support as much as 30% of their overall revenue.

IW Staff | Dec 19, 2017

http://www.industryweek.com/leadership/top-10-predictions-global-manufacturing-2018-idc

IDC recently released a report, “IDC FutureScape: Worldwide Manufacturing Predictions 2018,” surveying the global manufacturing landscape. When creating its predictions, the firm examined ecosystems and experiences, greater intelligence in operational assets and processes, data capitalization, and the convergence of information technology (IT) and operations. Most of the group’s predictions refer to a continuum of change and digital transformation (DX) within the wider ecosystem of the manufacturing industry and global economy.

“Manufacturers of every size and shape are changing rapidly because of new digital technologies, new competitors, new ecosystems, and new ways of doing business,” said Kimberly Knickle, research vice president, IT Priorities and Strategies, IDC Manufacturing Insights. “Manufacturers that can speed their adoption of digital capabilities in order to create business value will be the leaders of their industry.”

Technologies that will have the greatest impact include cloud, mobile, big data and analytics, and internet of things (IoT). Manufacturers also have high expectations for the business value of technologies that are in earlier stages of adoption, such as robotics, cognitive computing/artificial intelligence (AI), 3D printing, augmented reality/virtual reality (AR/VR), and even blockchain.

Over the next few years, IDC identified some of the most notable changes in the industry:

  • Redefining how businesses design (or define), deliver and monetize products and services
  • Developing new contextualized and customized experiences for customers, employees and partners
  • Increasing coordination and collaboration between IT and line-of-business organizations, as well as among ecosystem participants
  • Changing the nature of work and how it’s accomplished with people, process, and technology coming together

While the predictions offered largely focus on the near- to midterm (2018–2021), the impact of many of these will be felt for years to come.  IDC’s worldwide manufacturing 2018 predictions are:

Prediction 1: By 2020, 60% of the top manufacturers will rely on digital platforms that enhance their investments in ecosystems and experiences and support as much as 30% of their overall revenue.

Prediction 2: By 2021, 20% of the top manufacturers will depend on a secure backbone of embedded intelligence, using IoT, blockchain, and cognitive, to automate large-scale processes and speed execution times by up to 25%.

Prediction 3: By 2020, 75% of all manufacturers will participate in industry clouds, although only one-third of those manufacturers will be monetizing their data contributions.

Prediction 4: By 2019, the need to integrate operational technology and information technology as a result of IoT will have led to more than 30% of all IT and OT technical staff having direct project experience in both fields.

Prediction 5: By 2019, 50% of manufacturers will be collaborating directly with customers and consumers regarding new and improved product designs through cloud-based crowdsourcing, virtual reality, and product

Prediction 6: In 2020, augmented reality and mobile devices will drive the transition to the gig economy in the service industry, with “experts for hire” replacing 20% of dedicated customer and field service workers, starting with consumer durables and electronics.

Prediction 7: By the end of 2020, one-third of all manufacturing supply chains will be using analytics-driven cognitive capabilities, thus increasing cost efficiency by 10% and service performance by 5%.

Prediction 8: By 2020, 80% of supply chain interactions will happen across cloud-based commerce networks, dramatically improving participants’ resiliency and reducing the impact of supply disruptions by up to one-third.

Prediction 9: By 2020, 25% of manufacturers in select subsectors will have balanced production with demand cadence and achieved greater customization through intelligent and flexible assets.

Prediction 10: By 2019, 15% of manufacturers that manage data-intensive production and supply chain processes will be leveraging cloud-based execution models that depend on edge analytics to enable real-time visibility and augment operational flexibility.

Researchers tackle racial, gender bias in artificial intelligence

This is good work and a great check and balance. It may not be perfect, but it is heading in the right direction.


Bias in artificial intelligence can surface in various ways. Some of the best minds are working on the problem.

When Timnit Gebru was a student at Stanford University’s prestigious Artificial Intelligence Lab, she ran a project that used Google Street View images of cars to determine the demographic makeup of towns and cities across the U.S.

While the AI algorithms did a credible job of predicting income levels and political leanings in a given area, Gebru says her work was susceptible to bias — racial, gender, socio-economic. She was also horrified by a ProPublica report that found a computer program widely used to predict whether a criminal will reoffend discriminated against people of colour.

So this year, Gebru, 34, joined a Microsoft Corp. team called FATE — for Fairness, Accountability, Transparency and Ethics in AI. The program was set up three years ago to ferret out biases that creep into AI data and can skew results.

“I started to realize that I have to start thinking about things like bias,” says Gebru, who co-founded Black in AI, a group set up to encourage people of colour to join the artificial intelligence field. “Even my own PhD work suffers from whatever issues you’d have with data set bias.”

In the popular imagination, the threat from AI tends toward the alarmist: self-aware computers turning on their creators and taking over the planet.

The reality (at least for now) turns out to be a lot more insidious but no less concerning to the people working in AI labs around the world. Companies, government agencies and hospitals are increasingly turning to machine learning, image recognition and other AI tools to help predict everything from the creditworthiness of a loan applicant to the preferred treatment for a person suffering from cancer.

The tools have big blind spots that particularly affect women and minorities.

“The worry is if we don’t get this right, we could be making wrong decisions that have critical consequences to someone’s life, health or financial stability,” says Jeannette Wing, director of Columbia University’s Data Sciences Institute.

Researchers at Microsoft, International Business Machines Corp. and the University of Toronto identified the need for fairness in AI systems back in 2011.

Now, in the wake of several high-profile incidents — including an AI beauty contest that chose predominantly white faces as winners — some of the best minds in the business are working on the bias problem.

AI is only as good as the data it learns from. Let’s say programmers are building a computer model to identify dog breeds from images. First, they train the algorithms with photos that are each tagged with breed names. Then they put the program through its paces with untagged photos of Fido and Rover and let the algorithms name the breed based on what they learned from the training data. The programmers see what worked and what didn’t and fine-tune from there.
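A minimal sketch of that tag/train/predict cycle, with synthetic feature vectors standing in for photos and invented breed names; scikit-learn is assumed purely for illustration, and this is not how any production image classifier is actually built.

```python
# Synthetic "tagged photos": each breed's images cluster around different features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
breeds = ["labrador", "poodle", "beagle"]

X_train = np.vstack([rng.normal(i, 0.5, (50, 8)) for i in range(len(breeds))])
y_train = np.repeat(breeds, 50)

# Training phase: the model learns which feature patterns go with which tag.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# "Untagged photos of Fido and Rover": new examples the model has never seen.
X_new = np.vstack([rng.normal(0, 0.5, (1, 8)), rng.normal(2, 0.5, (1, 8))])
print(model.predict(X_new))         # the programmers see what worked...
print(model.predict_proba(X_new))   # ...and fine-tune from there
```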

The algorithms continue to learn and improve, and with more time and data are supposed to become more accurate. Unless bias intrudes.

Bias can surface in various ways. Sometimes, the training data is insufficiently diverse, prompting the software to guess based on what it “knows.” In 2015, Google’s photo software infamously tagged two Black users as “gorillas” because the data lacked enough examples of people of colour.

Even when the data accurately mirrors reality, the algorithms can still get the answer wrong, incorrectly guessing that a particular nurse in a photo or text is female, say, because the data shows fewer men are nurses. In some cases, the algorithms are trained to learn from the people using the software and, over time, pick up the biases of the human users.

AI also has a disconcertingly human habit of amplifying stereotypes. PhD students at the University of Virginia and University of Washington examined a public data set of photos and found that the images of people cooking were 33 per cent more likely to picture women than men. When they ran the images through an AI model, the algorithms said women were 68 per cent more likely to appear in the cooking photos.
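As a rough illustration of what those percentages mean, the sketch below computes the "more likely" figures from hypothetical label counts chosen only to reproduce the quoted 33% and 68%; the study's actual data and amplification metric are more involved.

```python
# Hypothetical counts chosen to reproduce the 33% / 68% figures quoted above.
def pct_more_likely(woman_count, man_count):
    """How much more likely a cooking image is to show a woman than a man."""
    return woman_count / man_count - 1.0

in_data = pct_more_likely(woman_count=400, man_count=300)         # ~33% in the data set
in_predictions = pct_more_likely(woman_count=505, man_count=300)  # ~68% in the model's output

print(f"data set: +{in_data:.0%}   model predictions: +{in_predictions:.0%}")
# Amplification: the model exaggerates an imbalance that was already in the data.
```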

Eliminating bias isn’t easy. Improving the training data is one way. Scientists at Boston University and Microsoft’s New England lab zeroed in on so-called word embeddings — sets of data that serve as a kind of computer dictionary used by all manner of AI programs. In this case, the researchers were looking for gender bias that could lead algorithms to do things such as conclude people named John would make better computer programmers than ones named Mary.

In a paper called “Man is to Computer Programmer as Woman is to Homemaker?” the researchers explain how they combed through the data, keeping legitimate correlations (man is to king as woman is to queen, for one) and altering ones that were biased (man is to doctor as woman is to nurse). In doing so, they created a gender-bias-free public data set and are now working on one that removes racial biases.
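A simplified sketch of the projection idea behind that debiasing work: estimate a "gender direction" from a gendered word pair and remove that component from words that should be neutral. The three-dimensional vectors here are toy values rather than a real embedding such as word2vec or GloVe, and the full method in the paper involves additional steps.

```python
import numpy as np

# Toy word vectors; real embeddings have hundreds of dimensions.
emb = {
    "man":        np.array([ 1.0, 0.2, 0.1]),
    "woman":      np.array([-1.0, 0.2, 0.1]),
    "programmer": np.array([ 0.6, 0.9, 0.3]),   # leans toward "man" in this toy data
    "homemaker":  np.array([-0.6, 0.8, 0.3]),   # leans toward "woman"
}

def unit(v):
    return v / np.linalg.norm(v)

# Gender direction: the difference between a gendered word pair.
g = unit(emb["man"] - emb["woman"])

def debias(v):
    # Remove the component of v that lies along the gender direction.
    return v - np.dot(v, g) * g

for word in ("programmer", "homemaker"):
    before = np.dot(unit(emb[word]), g)
    after = np.dot(unit(debias(emb[word])), g)
    print(f"{word}: gender component {before:+.2f} -> {after:+.2f}")
```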

“We have to teach our algorithms which are good associations and which are bad, the same way we teach our kids,” says Adam Kalai, a Microsoft researcher who co-authored the paper.

He and researchers including Cynthia Dwork — the academic behind the 2011 push toward AI fairness — have also proposed using different algorithms to classify two groups represented in a set of data, rather than trying to measure everyone with the same yardstick.

So for example, female engineering applicants can be evaluated by the criteria best suited to predicting a successful female engineer and not be excluded because they don’t meet criteria that determine success for the larger group. Think of it as algorithmic affirmative action that gets the hiring manager qualified applicants without prejudice or sacrificing too much accuracy.
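A minimal sketch of that "different yardsticks" idea under invented assumptions: success depends on different features in each synthetic group, so a model fitted per group scores its own group better than a single model fitted to everyone. Real proposals are far more careful about fairness definitions and the legal constraints mentioned later in the article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_group(n, weights):
    """Synthetic applicants: success depends on the features differently per group."""
    X = rng.normal(size=(n, 2))
    y = (X @ weights + rng.normal(scale=0.5, size=n)) > 0
    return X, y.astype(int)

# In group A success tracks feature 0; in group B it tracks feature 1.
X_a, y_a = make_group(300, np.array([1.0, 0.1]))
X_b, y_b = make_group(300, np.array([0.1, 1.0]))

# One yardstick for everyone versus a model per group.
pooled = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))
per_group = {"A": LogisticRegression().fit(X_a, y_a),
             "B": LogisticRegression().fit(X_b, y_b)}

print("pooled model, accuracy on group B:   ", pooled.score(X_b, y_b))
print("group-B model, accuracy on group B:  ", per_group["B"].score(X_b, y_b))
```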

While many researchers work on known problems, Microsoft’s Ece Kamar and Stanford University’s Himabindu Lakkaraju are trying to find black holes in the data. These “unknown unknowns” — a conundrum made famous by former secretary of defence Donald Rumsfeld — are the missing areas in a data set the engineer or researcher doesn’t even realize aren’t there.

Using a data set with photos of black dogs and white and brown cats, the software incorrectly labelled a white dog as a cat. Not only was the AI wrong, it was very sure it was right, making it harder to detect the error. Researchers are looking for places where the software had high confidence in its decision, finding mistakes and noting the features that characterize the error. That information is then provided to the system designer with examples they can use to retrain the algorithm.
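A minimal sketch of that hunt for confident mistakes, using synthetic "colour" and "ear shape" features in place of photos: a model trained only on dark dogs and light cats labels white dogs as cats with high confidence, and the confidently wrong cases are collected for the designer to review. The data and model are illustrative assumptions, not the researchers' actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Feature 0 is "brightness" (colour); feature 1 is "ear shape" (the real signal).
# Training data: dogs are dark, cats are light -- so colour looks predictive.
X_train = np.vstack([
    np.column_stack([rng.normal(-2, 0.5, 200), rng.normal(+1, 1, 200)]),   # dark dogs
    np.column_stack([rng.normal(+2, 0.5, 200), rng.normal(-1, 1, 200)]),   # light cats
])
y_train = np.array(["dog"] * 200 + ["cat"] * 200)
model = LogisticRegression().fit(X_train, y_train)

# Deployment data includes white dogs: light-coloured, but with dog-shaped ears.
X_new = np.column_stack([rng.normal(+2, 0.5, 50), rng.normal(+1, 1, 50)])
y_new = np.array(["dog"] * 50)

confidence = model.predict_proba(X_new).max(axis=1)
pred = model.predict(X_new)

# Confident mistakes are the candidate "unknown unknowns" to show the designer.
confident_errors = (pred != y_new) & (confidence > 0.9)
print("confidently wrong on", confident_errors.sum(), "of", len(y_new), "white dogs")
```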

Researchers say it will probably take years to solve the bias problem. While they see promise in various approaches, they consider the challenge not simply technological but legal too because some of the solutions require treating protected classes differently, which isn’t legal everywhere.

What’s more, many AI systems are black boxes; the data goes in and the answer comes out without an explanation for the decision.

Not only does that make it difficult to figure out how bias creeps in; the opaqueness also makes it hard for the person denied parole or the teacher labelled a low performer to appeal, because they have no way of knowing why that decision was reached.

Google researchers are studying how adding some manual restrictions to machine learning systems can make their outputs more understandable without sacrificing output quality, an initiative nicknamed GlassBox. The Defense Advanced Research Projects Agency, or DARPA, is also funding a big effort called explainable AI.

The good news is that some of the smartest people in the world have turned their brainpower on the problem. “The field really has woken up and you are seeing some of the best computer scientists, often in concert with social scientists, writing great papers on it,” says University of Washington computer science professor Dan Weld. “There’s been a real call to arms.”

NEW – Core Powered Future of Manufacturing Blog

Five to ten years from now we will have a very different manufacturing environment, due specifically to exponential technology growth and the impact that Artificial Intelligence and Artificial Consciousness will have on both capability and society.

This blog is intended to bring awareness of these advances to you and to draw your attention to their social impacts. I welcome your thoughts, challenges, and perspectives to bring this blog to life and to help our social relationship with these changes evolve at the same velocity, and with the same impact, as the technologies themselves.

Enjoy