
Featured IAOP Webinar: Building a Culture of Responsible AI    

View the webinar

With humans and machines working together, organizations can get the best of both worlds. The problem is twofold: humans, with their own biases, select the data used to train machines, and humans don’t trust the decisions machines make.

There’s no “easy button” when it comes to building a culture of responsible Artificial Intelligence (AI), shared Phaedra Boinodiris, Global Lead for Responsible AI at IBM and a member of the Cognitive World Think Tank.

“Earning trust in AI is a socio-technological challenge that requires a holistic approach of people, processes and tools,” said Boinodiris in a webinar on this timely topic presented by the IAOP Digital Technologies Center of Excellence.

To design AI that is truly fair, explainable, accountable, and robust, organizations need to ask what kind of relationship they want to have with AI, what culture is required to curate responsible AI, and how they can establish AI governance.

Boinodiris came to IBM from a background in video gaming for entertainment and in promoting opportunities for women gamers, joining in a role focused on using serious games to solve complex problems. She began leading IBM Consulting’s trustworthy AI work three years ago and is pursuing a Ph.D. in AI and ethics.

“I made this pivot in 2018 when I started getting angrier and frustrated by what I was hearing in the news about organizations blatantly using AI for mal intent or organizations, even though they had the very best intentions, causing individual or societal harm,” she said. “There are so many wonderful ways people and organizations use AI to solve all kinds of problems and I had witnessed that in the work I had done in serious games.”

Boinodiris defined AI as “any system capable of simulating human intelligence and thought process.”

She shared stories that keep her up at night of governments and organizations with good intentions using AI models that produced incorrect results. This can lead to loss of opportunity, liberty and finances, as well as social detriment, according to the Future of Privacy Forum, which has researched the potential harms of automated decision making.

Research from Gartner found that 80 percent of AI projects never make it to deployment; they get stuck for a variety of reasons, including a lack of trust in the results of AI models.

To establish trust in a decision made by a machine, organizations need to ask these human-centric questions:

  • Fairness – Is it fair?
  • Explainability – Is it easy to understand?
  • Adversarial Robustness – Did anyone tamper with it?
  • Transparency – Is it accountable?
  • Data privacy – Does it protect my data?   
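
The fairness question, for example, can be made concrete with a simple statistical check. The sketch below is illustrative only; it is not from the webinar or IBM’s tooling, and the column names and data are hypothetical. It computes a disparate impact ratio, a commonly used fairness indicator, over a model’s decisions.

    # Illustrative only: a simple disparate-impact check on model outputs.
    # Column names ("group", "approved") and the data are hypothetical.
    import pandas as pd

    def disparate_impact(df, group_col, outcome_col, privileged, unprivileged):
        """Ratio of favorable-outcome rates (unprivileged / privileged).
        A widely cited rule of thumb flags ratios below 0.8 for review."""
        rates = df.groupby(group_col)[outcome_col].mean()
        return rates[unprivileged] / rates[privileged]

    predictions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0],
    })
    print(disparate_impact(predictions, "group", "approved",
                           privileged="A", unprivileged="B"))  # prints 0.5

Checks like this do not replace the human-centric questions above, but they give teams a repeatable way to monitor one of them.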

Three Recommendations for Building a Culture of Responsible AI 

  1. Establish Trust

Organizations need to reinvent the way they develop AI with a multidisciplinary approach designed to engender trust.  

This is a difficult task, she said, because data is “an artifact of human experience.” Humans create data, or build the machines that create it, and humans are fallible, with more than 180 biases.

“It’s incredibly important to be hyper-aware of our biases because we are choosing what data to use to train our models,” Boinodiris said. “If we are not cognizant about our biases, they can make it into AI and that’s not what we want. To curate AI and data responsibly, companies must be committed to introspectively looking into a mirror.”

A common myth is that AI is all about coding; in reality, 80 percent of data science is determining the right data to use to train AI models, she said. After that, the work is experimentation aimed at making AI models outperform humans with faster, more accurate predictions.
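
One way to act on that data-first emphasis is to audit training data before any model is built. The sketch below is a hypothetical example of such an audit, not a method described in the webinar; the column names and data are assumptions.

    # Illustrative only: a quick pre-training audit of group sizes and
    # positive-label rates, so skew in the training data is visible early.
    # Column names ("gender", "label") and the data are hypothetical.
    import pandas as pd

    def audit_training_data(df, protected_col, label_col):
        """Summarize how many examples each group has and how often the
        favorable label appears, before any model is trained."""
        return (df.groupby(protected_col)[label_col]
                  .agg(count="size", positive_rate="mean"))

    training = pd.DataFrame({
        "gender": ["F", "F", "M", "M", "M", "M"],
        "label":  [0,   1,   1,   1,   0,   1],
    })
    print(audit_training_data(training, "gender", "label"))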

  2. Consider AI ethics at the outset

Another common misunderstanding is that ethics happens at the very end of the process, as a check box for quality assurance. Ethics needs to begin when companies are even considering using an AI model for the first time, she said. At the inception of the idea, organizations need to determine the potential harm and protect against it, empower people and use AI to augment human intelligence with the right intent, according to Boinodiris.

Mitigating for risk holistically requires thinking about the people, culture and the AI governance processes in place as well as the data used to train the model, she noted.        

  3. Start with diversity and involve all skill sets

AI ethics is a team effort that should include individuals with diverse world views as well as skill sets. Social scientists and other specialists, including designers, psychologists, diversity advocates, anthropologists, lawyers and behavioral scientists, need to work hand in hand with data scientists and computer scientists, she said.
