The expanding use of artificial intelligence raises important questions about the relationship between people and technology, and about the impact of these new capabilities on individuals and communities. We struggled with privacy issues and fraud in the early years of the internet, and similar questions are emerging for AI—not just about what it can do, but about what it should do.

AI augments and amplifies human capabilities, and its governance will require a human-centered approach. But that can only be realized if researchers, policymakers, and leaders from government, business, and society come together to develop a shared ethical framework for AI.

The role of partners

Our partners also play a critical role in helping establish the ethical principles that will guide the use of AI going forward. I have no doubt that partner skills will grow, and the AI building blocks will evolve to the point where creating custom AI solutions will be commonplace. But first we need to address the issues with a sense of shared responsibility, and partners are ideally suited to help us ask the right questions.

I encourage you to read the free e-book The Future Computed. Written by Microsoft’s President and Chief Legal Officer Brad Smith and Executive Vice President of Artificial Intelligence and Research Harry Shum, the book takes a fresh look at the role of AI in society.

As Brad Smith explains, “There will be challenges as well as opportunities. We must address the need for strong ethical principles, the evolution of laws, training for new skills and even labor market reforms. This must all come together if we’re going to make the most of AI.”

Brad and Harry outline the societal values that AI needs to respect and provide six principles to guide its development:

  1. Fairness — AI systems should treat everyone in a fair and balanced manner and not affect similarly situated groups in different ways. Developers need to understand how bias can be introduced into AI systems and how it can affect AI-based recommendations. Racism and sexism can also creep into societal data used to train AI systems, so it’s important that those designing AI systems reflect the diversity of the world for which those systems are designed and have the relevant subject matter expertise.
  2. Reliability — Users need to trust that AI systems will perform reliably within a clear set of parameters and respond safely to unanticipated situations. This will require the involvement of domain experts in the design process, systematic evaluation of the data and models, a process for documenting and auditing performance, determination of how and when an AI system seeks human input, and a robust feedback mechanism.
  3. Privacy and Security — Like other digital technologies, AI systems need to protect the privacy and security of their data, or users will not share the data needed to train the AI. Techniques and policies are needed to protect privacy while facilitating access to the data that AI systems require to operate effectively.
  4. Inclusiveness — AI can be a powerful tool for increasing access to information, education, employment, government services, and social and economic opportunities. But to benefit everyone, AI technologies must understand the context, needs, and expectations of the people who use them, and address potential barriers that could unintentionally exclude people.
  5. Transparency — When AI systems help make decisions that impact people’s lives, it’s particularly important that people understand how those decisions were made. Contextual information about how an AI system works and interacts with data will make it easier to identify and raise awareness of potential bias, errors, and unintended outcomes.
  6. Accountability — Those who design and deploy AI systems must be accountable for how their systems operate and should periodically check whether their accountability norms are being adhered to and if they are working effectively.
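The fairness principle above is often checked in practice by comparing outcome rates across similarly situated groups. The sketch below is a minimal illustration of that idea only; the loan-approval data, group names, and the 0.2 threshold are hypothetical assumptions for the example, not a standard or a Microsoft method.

```python
# Minimal sketch of a fairness check: compare an AI system's positive-outcome
# rate across groups. All data below is hypothetical, for illustration only.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def disparity(groups):
    """Largest gap in positive-outcome rate between any two groups."""
    rates = {name: positive_rate(preds) for name, preds in groups.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval predictions for two similarly situated groups.
groups = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 approved (75%)
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3 of 8 approved (37.5%)
}

gap, rates = disparity(groups)
print(rates)
print(f"disparity: {gap:.3f}")
if gap > 0.2:  # threshold is an illustrative choice, not a standard
    print("warning: similarly situated groups treated differently; review for bias")
```

A check like this is only a starting point: it can flag that two groups receive different outcomes, but deciding whether that difference is unfair still requires the domain expertise and diverse perspectives the principles above call for.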

As designers, developers, and consumers of AI technologies and solutions, we all need to deliberate and establish the ethical framework within our own organizations and with our customers.

To that end, Microsoft is addressing these considerations internally with the AI and Ethics in Engineering and Research (AETHER) committee, which includes senior leaders from across Microsoft’s engineering, research, consulting, and legal organizations. AETHER focuses on proactive formulation of internal policies and how to respond to specific issues in a responsible way.

Microsoft is also a founding member of The Partnership on AI, a consortium of business leaders, policymakers, researchers, academics, and representatives of non-governmental groups formed to advance industry discussion around ethical AI.

Will human workers become redundant?

AI policymakers will likely focus first on the collection and use of data to ensure the protection of privacy and proprietary information, but Microsoft is also concerned with the responsible and effective use of the technology. As with previous computing breakthroughs, there is fear that automation and AI will replace jobs. I believe AI will have an impact on workers, but it will also create new opportunities, along with entirely new occupations and categories of work.

We have no way to predict which jobs will be eliminated and which will be created as a result, but we know that workers will need new skills and training to be prepared for the future, and that sufficient talent must be available for critical jobs. We can already see that the transformation taking place is causing a shortage of critical talent across many industries. To overcome the skills gap, we’ll need to ensure that the workforce can continually learn and gain new skills.

Like the new technologies before it, AI shows great potential to improve our daily lives. But the opportunities come with challenges, and by working together we can develop a shared ethical framework for AI that is trusted by all.

Discover the best ways to build AI into your business here.