How OpenAI’s DALL·E 2 illustrated the challenges of bias in AI

An artificial intelligence program that has impressed the internet with its ability to generate original images from user prompts has also sparked concerns and criticism for what is now a familiar issue with AI: racial and gender bias. 

And while OpenAI, the company behind the program, called DALL·E 2, has sought to address the issues, those efforts have themselves come under scrutiny from technologists who say they amount to a superficial fix for deeper, systemic problems with AI systems.

“This is not just a technical problem. This is a problem that involves the social sciences,” said Kai-Wei Chang, an associate professor at the UCLA Samueli School of Engineering who studies artificial intelligence. There will be a future in which systems better guard against certain biased notions, but as long as society has biases, AI will reflect that, Chang said.

OpenAI released the second version of its DALL·E image generator in April to rave reviews. The program asks users to enter a descriptive prompt, for example: “an astronaut playing basketball with cats in space in a minimalist style.” Drawing on spatial and object awareness, DALL·E then creates four original images meant to reflect the prompt, according to the website.
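For readers who want a concrete sense of that workflow, the sketch below shows what such a request could look like against OpenAI’s hosted Images API using the official openai Python client. It is a minimal illustration only: the model identifier, image size and environment variable are assumptions for the example, not details reported in this article.

    import os
    from openai import OpenAI  # official OpenAI Python client (v1+)

    # Illustrative sketch: generate four images from a text prompt,
    # mirroring the behavior described above. The model name, size and
    # OPENAI_API_KEY environment variable are assumptions, not article details.
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    result = client.images.generate(
        model="dall-e-2",  # assumed model identifier
        prompt="an astronaut playing basketball with cats in space in a minimalist style",
        n=4,               # four images per prompt, as the article describes
        size="1024x1024",  # assumed output resolution
    )

    for i, image in enumerate(result.data, start=1):
        print(f"Image {i}: {image.url}")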

As with many AI programs, it did not take long for some users to start reporting what they saw as signs of bias. OpenAI’s own example captions illustrated the problem: “a builder” produced images featuring only men, while “a flight attendant” produced only images of women. Anticipating such issues, OpenAI had published a “Risks and Limitations” document alongside the program’s limited release, noting that “DALL·E 2 additionally inherits various biases from its training data, and its outputs sometimes reinforce societal stereotypes.”

DALL·E 2 builds on another piece of AI technology created by OpenAI called GPT-3, a natural language processing program trained on hundreds of billions of examples of language from books, Wikipedia and the open internet to create a system that can approximate human writing.

Last week, OpenAI announced that it had implemented new mitigation techniques to help DALL·E generate images that better reflect the diversity of the world’s population, claiming that internal users were 12 times more likely to say the resulting images included people of diverse backgrounds.

The same day, Max Woolf, a data scientist at BuzzFeed who was one…
