What if we could just ask AI to be less biased?


Last week, I published a story about new tools developed by researchers at AI startup Hugging Face and the University of Leipzig that let people see for themselves what kinds of inherent biases AI models have about different genders and ethnicities. 

Although I’ve written a lot about how our biases are reflected in AI models, it still felt jarring to see exactly how pale, male, and stale the humans of AI are. That was particularly true for DALL-E 2, which generates white men 97% of the time when given prompts like “CEO” or “director.”
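If you want a feel for how this kind of audit works, here is a minimal sketch (not the researchers' actual tool) using the open-source diffusers library: generate a batch of images for a profession prompt and look at who the model depicts by default. The checkpoint name and file paths are illustrative assumptions.

```python
# Minimal bias-audit sketch (illustrative, not the researchers' tool):
# generate a batch of images for a profession prompt and inspect
# who the model depicts by default.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any text-to-image checkpoint works
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a portrait photo of a CEO"
for i in range(16):
    # Each sample is an independent draw; the demographic skew across
    # the batch reflects biases baked into the training data.
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"ceo_{i:02d}.png")
```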

And the bias problem runs even deeper than you might think, extending into the broader world created by AI. Because these models are built by American companies and trained on North American data, when they’re asked to generate even mundane everyday items, from doors to houses, they create objects that look American, Federico Bianchi, a researcher at Stanford University, tells me.

As the world becomes increasingly filled with AI-generated imagery, we are going to mostly see images that reflect America’s biases, culture, and values. Who knew AI could end up being a major instrument of American soft power? 

So how do we address these problems? A lot of work has gone into fixing biases in the data sets AI models are trained on. But two recent research papers propose interesting new approaches.

What if, instead of making the training data less biased, you could simply ask the model to give you less biased answers? 

A team of researchers at the Technical University of Darmstadt, Germany, and AI startup Hugging Face developed a tool called Fair Diffusion that makes it easier to tweak AI models to generate the types of images you want. For example, you can generate stock photos of CEOs in different settings and then use Fair Diffusion to swap out the white men in the images for women or people of different ethnicities. 

As the Hugging Face tools show, AI models that generate images on the basis of image-text pairs in their training data default to very strong biases about professions, gender, and ethnicity. The German researchers’ Fair Diffusion tool is based on a technique they developed called semantic guidance, which allows users to guide how the AI system generates images of people and edit the results.  
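Semantic guidance (published by the same TU Darmstadt group as SEGA) has an implementation in Hugging Face's diffusers library, so a rough sketch of a Fair Diffusion-style edit is possible. In the example below, the generation of a "CEO" image is steered away from the model's default male depiction; the checkpoint, editing prompts, and strength values are my own illustrative assumptions, not the paper's exact settings.

```python
# Sketch of a Fair Diffusion-style edit via semantic guidance (SEGA),
# using diffusers' SemanticStableDiffusionPipeline. The checkpoint and
# edit parameters are illustrative assumptions, not the paper's settings.
import torch
from diffusers import SemanticStableDiffusionPipeline

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

out = pipe(
    prompt="a portrait photo of a CEO",
    guidance_scale=7.0,
    # Fix the seed so the edited image can be compared with the
    # unedited generation from the same starting latent.
    generator=torch.Generator("cuda").manual_seed(0),
    # Steer away from "male person" and toward "female person" while
    # semantic guidance keeps the rest of the image largely unchanged.
    editing_prompt=["male person", "female person"],
    reverse_editing_direction=[True, False],
    edit_warmup_steps=[10, 10],
    edit_guidance_scale=[4.0, 4.0],
    edit_threshold=[0.95, 0.95],
)
out.images[0].save("ceo_edited.png")
```

Fixing the random seed is what makes the comparison meaningful: with the same latent, the edit swaps out the person while the composition stays put, which is the behavior Kersting describes below.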

The AI system stays very close to the original image, says Kristian Kersting, a computer science professor at TU Darmstadt who participated in the work. 


Source: https://www.technologyreview.com/2023/03/28/1070390/what-if-we-could-just-ask-ai-to-be-less-biased/
