Diversity and Artificial Intelligence: A Social Work Challenge
By Sue Coyle, MSW
Social Work Today
Vol. 19 No. 6 P. 12

The AI field is far from inclusive, and the consequences are palpable. The field's domination by white men is a major reason why ethical AI design must involve the social sciences and humanities.

Imagine sticking your hand under an automatic soap dispenser only to have no soap come out. You may assume that the dispenser is jammed or out of soap. However, when the person behind you tries their hand, it works. Try again, and you find that the soap dispenser still does not work for you. What is happening? What is the difference between your hand and the other’s?

For a Facebook employee whose video of this very situation went viral in 2017, the answer was skin tone. The sensors in the soap dispenser did not respond to the man’s hand because of how dark his skin is.

This is just one example of how artificial intelligence (AI) can and often does fail to meet the needs of all its consumers. As is increasingly evident, bias persists in AI because it exists in AI’s creators and in the data those creators use.

To best address AI’s many blind spots, the field must work to include people of all genders, races, ethnicities, and backgrounds. In doing so, AI will become better able not only to dispense soap but also to help address issues of social change.

Defining AI
To understand how diversity and inclusion can benefit the AI field and society as a whole, one must first define AI.

“I think about AI as the leveraging of large amounts of data—public and private—to predict and possibly influence behavior,” says Courtney Cogburn, PhD, an associate professor at New York’s Columbia School of Social Work.

In essence, it is the technology employed when a person plays chess with a computer, when a company uses an algorithm to sort applications or determine loan eligibility, and when researchers look to find methods of, for example, best distributing aid after a natural disaster.

“What we’re doing with AI is delegating decision making to computer systems. [It is] the gatekeeper to key parts of the economy and our society,” says Tess Posner, MS, CEO of AI4ALL.

“Often, the way that AI is used is pretty invisible,” she continues. “You wouldn’t know that Netflix recommendations or Spotify playlists are driven by AI algorithms. People are not aware when they’re using AI.”

Who’s Missing
Given its prevalence, one would hope that the field of AI comprises a diverse array of individuals. However, the truth is that AI creators are a largely homogenous group. The lack of inclusion can be seen on many levels.

First, the field is primarily composed of men. “Only 14% of AI researchers around the world are female,” Posner says. “Of those teaching, only 20% are female. If you look at the applicant pool, only 29% of those looking for jobs [in AI] are female.” This representation, she says, is worse than what exists in the computer science field in general.

There are no public data regarding individuals who are gender minorities.

AI creators are also predominantly white. Anecdotally, this is perhaps best seen at the recently launched Institute for Human-Centered Artificial Intelligence at Stanford University in California. Aimed at creating a space of inclusivity for AI creation, the institute announced 121 faculty members at its launch. At least 100 appeared to be white, according to Quartz, an online news source.

Statistically, the AI Now Institute, a research institute that examines the social implications of AI, found that “For black workers, the picture is even worse. For example, only 2.5% of Google’s workforce is black, while Facebook and Microsoft are each at 4%” (Myers West, Whittaker, & Crawford, 2019).

What’s more, when women and minorities are employed in AI or the tech field in general, they are often treated poorly. The Pew Research Center found in a 2017 study that one-half of all women working in STEM (science, technology, engineering, and math) have experienced workplace discrimination. Additionally, “The survey finds a higher share of blacks in STEM jobs report experiencing any of the eight types of racial/ethnic discrimination (62%) than do others in STEM positions (44% of Asians, 42% of Hispanics, and just 13% of whites in STEM jobs say this)” (Funk & Parker, 2018).

However, the problem is not just that the field is predominantly white and male, and less than welcoming to those who aren’t. The problem also lies in the narrow range of disciplines represented in AI.

“We are still lacking in diversity in myriad ways,” says Desmond Upton Patton, PhD, MSW, an associate professor at the Columbia School of Social Work and in the department of sociology, associate dean for curriculum innovation and academic affairs, and director of SAFElab. “It’s not just [that we need] more black and brown people in a tech company. They can produce the same set of biases. It’s about diversity of thought, diversity of lived experience.

“It varies, but normally someone working in AI has a background in computer science or data science,” he continues.

Cogburn agrees. “We,” she says of herself and Patton, “have been working in tech spaces for quite some time and started to realize the absence [of] and need for people trained as social workers in these spaces.

“If you’re thinking about these applications to humanity and important social issues, you really need people who are trained in humanities, trained in social science, and understand the scope of what the implications are,” she adds.

Consequences
But why? What are the consequences of the AI field’s present state?

For one, there are situations like the aforementioned soap dispenser incident, in which a product was built that recognized only lighter skin tones. Failures such as this are not flukes or one-off design faults; they occur every day on a variety of scales.

A 2018 study out of the University of California, Berkeley found that black and Latino homebuyers were charged higher interest rates, even when applying for loans online. The difference was so great that it was estimated to cost black and Latino homebuyers up to $500 million more annually in interest than white homebuyers paid. This happened because the mortgage algorithms used by online lenders encoded bias much as a human loan officer might, producing lending discrimination.

“Humans create algorithms,” Cogburn says. “Humans are flawed in the many ways that we are flawed. The use of data that’s already tainted in terms of misrepresentation of race [means] bad data in, bad data out.”
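Cogburn’s point can be made concrete with a toy model. The Python sketch below uses entirely fabricated data and a hypothetical two-group lending scenario; it illustrates “bad data in, bad data out,” not any real lender’s system. A model trained on historically biased approval decisions simply learns the bias back.

```python
# "Bad data in, bad data out": a toy model trained on historically biased
# loan decisions reproduces the bias. All data here is fabricated, and the
# two groups ("A" and "B") are hypothetical.
import random

random.seed(0)

def historical_decision(group, score):
    # The bias is baked into the training labels: at the same credit
    # score, group B applicants were approved less often than group A.
    threshold = 650 if group == "A" else 700
    return score >= threshold

applicants = [(g, random.randint(550, 800))
              for g in ("A", "B") for _ in range(5000)]
labeled = [(g, s, historical_decision(g, s)) for g, s in applicants]

def learned_threshold(group):
    # A deliberately naive "model": learn each group's empirical approval
    # cutoff from the historical labels. A real classifier would find the
    # same pattern more subtly if group, or a proxy for it, is a feature.
    approved = [s for g, s, ok in labeled if g == group and ok]
    return min(approved)

for group in ("A", "B"):
    print(f"Group {group}: learned cutoff {learned_threshold(group)}")
# Prints cutoffs of 650 and 700: the model faithfully relearns the
# discriminatory rule from the "tainted" historical data.
```

The lesson generalizes: a more sophisticated model would reproduce the same gap whenever group membership, or a proxy for it such as a ZIP code, leaks into the training data.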

Similarly, racial bias has been identified in the AI used for facial recognition. “If we think about police surveillance,” Cogburn says, “it would be nice to have facial recognition so we can catch criminals faster and keep our cities safer. Then we realize, we’re capturing some faces better than others. We’re using data that has disadvantaged certain racial groups. We are surveilling certain communities more often. Then it becomes this sped-up, expanded way to discriminate against particular groups.”
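One common response to the problem Cogburn describes is disaggregated evaluation: reporting a system’s error rate for each demographic group rather than as a single aggregate number, which can hide large disparities. Here is a minimal sketch, with fabricated results standing in for a real labeled test set.

```python
# Disaggregated evaluation: compute error rates per group instead of one
# aggregate accuracy. The records below are fabricated; in a real audit
# they would come from a labeled face-recognition benchmark.
from collections import defaultdict

# Each record: (group, predicted_match, true_match).
results = [
    ("lighter-skinned", True, True), ("lighter-skinned", False, False),
    ("darker-skinned", True, False), ("darker-skinned", False, True),
    # ...thousands more rows in practice
]

counts = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, predicted, actual in results:
    counts[group][0] += predicted != actual
    counts[group][1] += 1

for group, (errors, total) in counts.items():
    print(f"{group}: {errors / total:.0%} error rate ({total} samples)")
```

Note that the aggregate accuracy over these four fabricated rows would read 50% and say nothing about the fact that every error falls on one group.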

AI is also less able to address issues of social change when the individuals at the table do not fully understand the problems and communities they are looking to help.

“In discussions around what AI can do, one issue we need to acknowledge is that it cannot find problems that we should solve,” says Rediet Abebe, a junior fellow at the Harvard Society of Fellows and cofounder of Black in AI. “This is one key place where the perspectives of the researcher or practitioner heavily impact the process. As such, it is very important to ensure that we have diverse and inclusive groups both in terms of demographics and experiences.”

For Patton, that’s where social workers need to come in. “As social workers, we understand the need to treat people with respect, to work with communities, and to be able to help people leverage their voice in different spaces. That is our beginning place. Those values and morals are critical in the deployment and integration of AI for all.”

Collaborations
Fortunately, there are individuals throughout AI looking to rectify the lack of diversity in the field. Take, for example, Abebe’s Black in AI. “Along with Timnit Gebru and others, I cofounded Black in AI for a simple reason—that there was an alarming absence of black individuals in AI,” she says.

“We wanted to create a community of black researchers where we can share our ideas, collaborate with one another, and discuss initiatives to increase our presence and inclusion within this increasingly influential field,” she explains. “The group started as an e-mail chain, then a Facebook group in the spring of 2016. Since then, it has grown to nearly 2,000 members (and counting), as well as many allies from across five continents and over 30 countries.”

Black in AI has hosted workshops at NeurIPS, an international AI conference, and has a platform for sharing ideas and resources, and for starting collaborations. For instance, Abebe met Mwiza Simbeye, chief information officer of AgriPredict, through Black in AI and has since been collaborating with him “on problems related to improving markets for smallholder farms in Zambia,” she says.

There are similar organizations and groups throughout the AI world focused on collaboration, diversity, and social impact, Abebe notes. “Collaboration ensures that we take into account diverse perspectives, both by demographics and by field of expertise.

“We need to approach AI from an interdisciplinary perspective. We have been learning over a number of years that the problems [we] are solving are not purely technical. They interact with a complex and messy world with many inequalities and biases.”

Cogburn agrees but prefers a transdisciplinary approach. “It thinks about those people from different disciplinary perspectives solving the same problems. It wouldn’t suggest that engineering is superior to social work. Both have equal footing and a say in this thing that we’re trying to do together,” she says.

Education
In addition to increased collaboration, there is a focus on providing young people with the opportunity and education they need to enter the field of AI.

AI4ALL specifically aims to give opportunity to youth who would not otherwise have exposure to the field. “We run an AI summer camp that we host at AI-focused research universities,” Posner says. “We focus on three target populations: girls, youth of color, and low-income youth. We work with a variety of local and national partners that refer students.”

This past summer, 11 universities, including Columbia, hosted the summer camp. “For the past three years, I was running a digital scholars lab, bringing in young African American and Latino people. I only had four people every summer,” Patton says. “I wanted to have a bigger impact, more rigorous AI training. One of the members of AI4ALL came to speak, and we were able to get a grant to bring the program to Columbia.”

The students involved in the camp learn how to apply AI to solving social problems. Posner says the 2019 cohort looked at directing aid after a natural disaster, identifying fake news, and solving problems within the health care system, among other issues. “Each camp has up to five different projects that students work on,” she says.

AI4ALL also features Changemakers in AI, an alumni program that offers ongoing support and mentorship to the summer camp participants; the organization will be launching a curriculum for teachers to use in classrooms or after-school settings this October. “Only 45% of high schools in the U.S. teach computer science; from what we’ve seen, very few teach AI,” Posner says.

On the college level, Patton and Cogburn, through the Columbia School of Social Work, have launched a new minor, Emerging Technology, Media, and Society. It is designed to help social work students become more aware of and a part of the technology sector. It will enable them to understand how social work can impact social issues with the aid of technology.

“We’re really trying to be the glue between these different sectors,” Cogburn says.

Both Cogburn and Patton believe there will be significant interest in the minor. “I think that there’s a burgeoning interest,” Patton says. “Many had to reckon with what AI is. It was an awareness issue. They needed it to be in front of them. It wasn’t necessarily a lack of interest.

“Now,” he says, “we are creating a new cadre of young people who are thinking about social work and social justice and technology.”

But it’s not enough, Cogburn adds, simply to train social workers to step into the tech field. “There needs to be critical reflection in terms of training of engineers and data scientists and other people working in the [AI] space,” she says. “They need to have an educational background that helps them better understand the humanity they are claiming to want to help.”

With the proper education and the continued work of individuals such as Abebe, Cogburn, Patton, and Posner, the field of AI may yet become a sector that not only has diversity but also values it, recognizing that without diversity, the work produced is diminished.

Sue Coyle, MSW, is a freelance writer and social worker in the Philadelphia suburbs.

References
Funk, C., & Parker, K. (2018, January 9). Women and men in STEM often at odds over workplace equity. Pew Research Center. Retrieved from https://www.pewsocialtrends.org/2018/01/09/women-and-men-in-stem-often-at-odds-over-workplace-equity/

Myers West, S., Whittaker, M., & Crawford, K. (2019, April). Discriminating systems: Gender, race, and power in AI. AI Now Institute. Retrieved from https://ainowinstitute.org/discriminatingsystems.pdf