Presented by Defined.ai
As AI is integrated into day-to-day lives, justifiable concerns over its fairness, power, and effects on privacy, speech, and autonomy grow. Join this Report Door Live event for an in-depth look at why ethical AI is essential, and how we can ensure our AI future is a just one.
Watch on demand right here.
“AI is only biased because humans are biased. And there are lots of different types of bias and studies around that,” says Daniela Braga, Founder and CEO of Defined.ai. “All of our human biases are transported into the way we build AI. So how do we work around preventing AI from having bias?”
A big factor, for both the private and public sectors, is lack of diversity on data science teams — but that’s still a difficult ask. Right now, the tech industry is notoriously white and male-dominated, and that doesn’t look like it will change any time soon. Only one in five graduates of computer science programs are women; the numbers for underrepresented minorities are even lower.
The second problem is the bias baked into the data, which then fuels biased algorithms. Braga points to the Google search issue from not so long ago, where searches for terms like “school boy” turned up neutral results, while searches for terms like “school girl” were sexualized. The root cause was gaps in the data, which had been compiled by male researchers who didn’t recognize their own internal biases.
For voice assistants, the problem has long been that the assistant can’t recognize non-white dialects and accents, whether from Black speakers or native Spanish speakers. Datasets need to be constructed to account for gaps like these by researchers who recognize where the blind spots lie, so that models built on that data don’t amplify these gaps in their outputs.
The problem might not sound urgent, but when companies fail to put guardrails around their AI and machine learning models, it hurts their brand, Braga says. Failure to root out bias, or a data privacy breach, is a big hit to a company’s reputation, which translates to a big hit to the bottom line.
“The brand impact of leaks, exposure through the media, the bad reputation of the brand, suspicion around the brand, all have a huge impact,” she says. “Savvy companies need to do a very thorough audit of their data to ensure they’re fully compliant and always updating.”
How companies can combat bias
The primary goal should be building a team with diverse backgrounds and identities.
“Looking beyond your own bias is a hard thing to do,” Braga says. “Bias is so ingrained that people don’t notice that they have it. Only with different perspectives can you get there.”
You should design your datasets to be representative from the outset, or to specifically target gaps as they become known. Further, you should test your models constantly after ingesting new data and retraining, keeping track of builds so that if a problem appears, you can quickly identify the build in which it was introduced. Another important goal is transparency, especially with customers, about how you’re using AI and how you’ve designed the models you’re using. This helps establish trust and builds a stronger reputation for honesty.
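One way to make “representative from the outset” concrete is to audit a dataset’s group composition against target shares before training. The sketch below is a minimal illustration, not any vendor’s tool; the group labels, target fractions, and tolerance are all hypothetical placeholders you would replace with your own demographic schema.

```python
from collections import Counter

def representation_gaps(labels, expected_share, tolerance=0.05):
    """Flag groups whose share of the dataset deviates from a target
    share by more than `tolerance`.

    labels: one group label per example (e.g. a speaker's dialect tag)
    expected_share: dict mapping group -> target fraction of the dataset
    Returns a dict of {group: actual_share - target_share} for gaps.
    """
    counts = Counter(labels)
    total = len(labels)
    gaps = {}
    for group, target in expected_share.items():
        actual = counts.get(group, 0) / total
        if abs(actual - target) > tolerance:
            gaps[group] = round(actual - target, 3)
    return gaps

# Toy voice dataset that over-samples one dialect and under-samples others.
labels = ["dialect_a"] * 80 + ["dialect_b"] * 15 + ["dialect_c"] * 5
targets = {"dialect_a": 0.5, "dialect_b": 0.3, "dialect_c": 0.2}
print(representation_gaps(labels, targets))
# → {'dialect_a': 0.3, 'dialect_b': -0.15, 'dialect_c': -0.15}
```

A check like this can run in a data pipeline alongside each retraining build, so a newly introduced imbalance is caught at the build where it appeared rather than after the model ships.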
Getting a handle on ethical AI
Braga’s number-one piece of advice to a business or tech leader who needs to wrap their head around the practical applications of ethical and responsible AI is to ensure you fully understand the technology.
“Everyone who wasn’t born in tech needs to get an education in AI,” she says. “Education doesn’t mean to go get a PhD in AI — it’s as simple as bringing in an advisor or hiring a team of data scientists that can start building small, quick wins that impact your organization, and understanding that.”
It doesn’t take that much to make a huge impact on cost and automation with strategies that are tailored to your business, but you need to know enough about AI to ensure that you’re ready to handle any ethical or accountability issues that may arise.
“Responsible AI means creating AI systems that are unbiased, that are transparent, that treat data securely and privately,” she says. “It’s on the company to build systems in the right and fair way.”
For an in-depth discussion of ethical AI practices, how companies can get ahead of impending government compliance issues, why ethical AI makes business sense, and more, don’t miss this Report Door On-Demand event!
Access on demand for free.
Attendees will learn:
- How to keep bias out of data to ensure fair and ethical AI
- How interpretable AI aids transparency and reduces business liability
- How impending government regulation will change how we design and implement AI
- How early adoption of ethical AI practices will help you get ahead of compliance issues and costs
Speakers:
- Melvin Greer, Intel Fellow and Chief Data Scientist, Americas
- Noelle Silver, Partner, AI and Analytics, IBM
- Daniela Braga, Founder and CEO, Defined.ai
- Shuchi Rana, Moderator, VentureBeat