Focusing on ethical AI in business and government

The World Economic Forum and associate partner Appen are wrestling with the thorny issue of how to create artificial intelligence with a sense of ethics.

Their main area of focus is designing standards and best practices for the responsible training data used to build machine learning and AI applications. It has already been a long process, and it continues.

“A solid training data platform and management strategy is often the most critical component of launching a successful, responsible machine learning-powered product into production,” said Mark Brayan, CEO of Appen, in a statement. Appen has been providing training data to companies building AI for more than 20 years. In 2019, the company created its own Crowd Code of Ethics.

“Ethical, diverse training data is essential to building a responsible AI system,” Brayan added.

Kay Firth-Butterfield, head of AI and machine learning at WEF, said the industry needs guidelines for acquiring and using responsible training data.  Companies should address questions around user permissions, privacy, security, bias, safety and how people are compensated for their work in the AI supply chain, she said.

Every business needs a plan to understand AI and deploy it safely and ethically, she added in a video overview of the Forum’s AI agenda. “The purpose is to think about what are the big issues in AI that really require something be done in the governance area so that AI can flourish.”

“We’re very much advocating a…soft law approach, thinking about standards and guidelines rather than looking to regulation,” she said.

The Forum has issued a number of white papers on ethics and related topics dating back to 2018, including one on responsible limits on facial recognition published in March.

RELATED: Researchers deploy AI to detect bias in AI and humans

In January, the Forum published its AI toolkit for boards of directors, with 12 modules covering the impacts and potential of AI in company strategy, and it is currently building a companion toolkit to bring those insights to CEOs and other C-suite executives.

Another focus area is human-centered AI for human resources: a toolkit for HR professionals to help promote the ethical, human-centered use of AI. Various HR tools developed in recent years rely on AI to hire and retain talent, and the Forum notes that concerns have been raised about AI algorithms encoding bias and discrimination. “Errors in the adoption of AI-based products can also undermine employee trust, leading to lower productivity and job satisfaction,” the Forum added.

Firth-Butterfield will be a keynote speaker at Appen’s annual Train AI conference on October 14.

RELATED: Tech firms grapple with diversity after George Floyd protests