Big innovations in machine learning have made some unsettling headlines over the past year, holding a mirror to our own persistent biases by adopting them. When it comes to gender stereotypes, there’s a double jeopardy nestled in how machines learn languages. Babbel’s computational linguist, Kate McCurdy, has been looking at how algorithms conflate semantic and grammatical gender, what this could mean for any application of so-called Artificial Intelligence, and how we might think about correcting course.
So, how about we start by just breaking down your project?
So, I’m looking at grammatical gender in word embeddings. Word embeddings are a natural language processing technology that’s used in a wide range of applications. At the core is an algorithm that learns the meaning of a word from the words that appear around it. In the past few years, we’ve seen pretty major developments in this area. Lots of research is happening, and big companies like Facebook and Google are using these technologies. A couple of years ago, a new algorithm came along that allowed you to train a model quite quickly and get representations of word meaning that seemed really impressive. You could just let it loose on a corpus and it would learn, for example, that “dog” and “cat” and “animal” are all related, or that “apple” and “banana” are related, without anybody explicitly telling it to. This is quite powerful, and it’s being used in a lot of technological applications. But we’ve started to notice that there are some issues with it.
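To make the idea concrete, here is a minimal sketch of training word embeddings and asking them for related words, using the gensim library on a toy corpus. The library choice and the tiny corpus are purely illustrative, not the setup used in the research discussed here:

```python
# Minimal word-embedding sketch with gensim (illustrative only).
from gensim.models import Word2Vec

# Toy corpus: in practice the model is trained on millions of sentences.
sentences = [
    ["the", "dog", "chased", "the", "cat"],
    ["the", "cat", "is", "an", "animal"],
    ["the", "dog", "is", "an", "animal"],
    ["she", "ate", "an", "apple", "and", "a", "banana"],
    ["he", "bought", "an", "apple", "and", "a", "banana"],
]

# Each word gets a dense vector; words that appear in similar contexts
# end up with similar vectors.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, seed=1)

# With enough data, "dog", "cat" and "animal" cluster together, as do
# "apple" and "banana" (a corpus this tiny won't show it reliably).
print(model.wv.most_similar("dog", topn=3))
```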
Because these algorithms are picking up on gendered associations…
Right. The thing is that, while they’re good at learning things that are really useful, like the relationship between “apple” and “banana”, they’re also really good at learning things that are not so useful, representations we probably don’t want them to have. So, last year, a number of researchers published findings showing that these technologies were learning that career terms like “business” and “office” and “salary” were all systematically closer to masculine-associated words like “uncle” and “father”, while terms associated with the home and family were learned in relation to feminine-associated terms.
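One way to see this kind of association, loosely inspired by the tests in that research, is to compare cosine similarities between career words and male- versus female-associated words in a set of pretrained vectors. The pretrained model below is one publicly available example, chosen for illustration only:

```python
# Rough illustration of measuring gendered associations with cosine similarity.
# The pretrained vectors are a public example, not the models from the studies
# mentioned in the interview.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

career = ["business", "office", "salary"]
male_terms, female_terms = ["uncle", "father"], ["aunt", "mother"]

for word in career:
    male_sim = sum(vectors.similarity(word, t) for t in male_terms) / len(male_terms)
    female_sim = sum(vectors.similarity(word, t) for t in female_terms) / len(female_terms)
    # A positive difference means the career word sits closer to the male terms.
    print(word, round(float(male_sim - female_sim), 3))
```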
There was a sort of famous instance of this, in some of the research that came out. One of the more impressive properties of word embeddings is that they can perform what’s called an analogy task, where you take an embedding model and you say “Man is to woman as king is to…”, then let it fill in the blank, like you would with an SAT question, and it gives you “queen”. Impressive, right? But then it turns out that when you say something like “Man is to woman as pilot is to…”, it gives you “flight attendant.” And this really gave people pause. It turns out that simply by training on the statistical patterns of which words appear around which in text, the model winds up with all these word associations that we’re really not interested in seeing. Gender is just the tip of the iceberg. The same researchers have also found problematic associations with regard to race, racialized names, and so on.
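The analogy behaviour can be reproduced with off-the-shelf vectors. A hedged sketch, again using gensim’s downloader and a public pretrained model; exact outputs depend on the vectors you load:

```python
# Sketch of the analogy task described above. Results vary with the vectors used.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # public pretrained vectors

# "Man is to woman as king is to ...?"
print(vectors.most_similar(positive=["woman", "king"], negative=["man"], topn=1))
# Typically returns "queen".

# The same vector arithmetic also surfaces stereotyped completions
# for occupation terms.
print(vectors.most_similar(positive=["woman", "pilot"], negative=["man"], topn=3))
```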
Of course.
It’s basically learning associations that can turn out to be deeply problematic, especially if they feed into other types of applications. One researcher offered a particularly telling scenario. Imagine you’re on Google and you’re looking for candidates for a particular computer programming job. And let’s say you search for candidates at a local university or something. But it turns out, because people’s names also get representations in the text these algorithms are processing, the application could learn that names like Mark and John are more closely associated with computer programming than… Samantha. And then it maybe ranks candidates with men’s names higher up in the list. So, if you’re an employer and you’re searching for candidates, this could actually statistically bias the input you get. And that’s just one of many subtle ways these technologies could collide with real-life situations, with stakes.
So what does the intersection with grammatical gender look like? English is a little less messy in that respect, but this presumably has ramifications for languages that do have grammatical gender.
Yeah, exactly. So, the problems we were just discussing come from other researchers’ observations. What I and the group here at Babbel were looking at is just what you’re talking about: how this sort of technology interacts with languages where you do have grammatical gender. With Spanish or French or German, we know that the word for “father” is not just semantically referring to a man. We know the word is masculine because, in the case of German, der Vater takes a masculine article. In Spanish it would be el padre. So this question of what’s semantically gendered gets put alongside the grammatical gender of the words themselves, marked by their articles. This extends to objects, as well. “Table” happens to be masculine in German, but feminine in French and Spanish.
The interesting thing is that when it comes to humans there’s some logic surrounding the gender reference. Historically, there are associations, many of which are increasingly being contested. In Swedish, for example, they just created a gender-neutral pronoun, hen. There’s rethinking happening all over the place, culturally. But we can say pretty clearly that with objects there’s no clear logic when it comes to gender distinction. The fact that “table” is masculine in German but feminine in Spanish kind of tells us that there’s no real ground truth here. There’s no actual gender property of a table, and the same goes for most objects in the world. But what my group found is that, because these statistical word embedding models are based simply on looking at the words surrounding other words, if you don’t actively think about this and correct for it somehow ahead of time, then when you train a model on German, it will learn that “table” is actually masculine. It’s hanging out over there in the semantic space with the fathers and the brothers, and so on. And that goes for any words associated with the grammatical masculine.
So, in German the word for “athlete” has both masculine and feminine forms: der Sportler and die Sportlerin. And “table” will be closer to the masculine form, as far as how the model learns it. But because “table” is feminine in Spanish, a Spanish model learns that it’s part of the feminine semantic space, hanging out with mothers and aunts and such. What this means is that even though any speaker knows there’s nothing actually masculine or feminine about a table or any other common object, beyond their own mental associations, these models are learning that there is. And this could be influencing the results they provide in applications where they’re used.
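As an illustration, suppose you have German vectors trained on a raw, lower-cased corpus (the file name below is hypothetical). You could check which gendered form the word for “table” sits closer to:

```python
# Illustrative sketch only: "german_vectors.kv" is a hypothetical file standing
# in for German word vectors trained on a raw, lower-cased corpus.
from gensim.models import KeyedVectors

de = KeyedVectors.load("german_vectors.kv")

# Is "Tisch" (table, grammatically masculine) closer to the masculine or the
# feminine form of "athlete"?
print(de.similarity("tisch", "sportler"))    # masculine form
print(de.similarity("tisch", "sportlerin"))  # feminine form
# The pattern described above is that the first similarity tends to come out
# higher, even though a table has no meaningful gender.
```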
Say you’re searching in some product-recommendation space for something to give a friend with a female name. If you do this search in Spanish, you might get different results than if you did it in German, because the gender properties of the words involved differ across those two languages. There are all sorts of subtle ways that could be occurring, but if we don’t notice it, we can’t correct for it.
Are you seeing in your research anything that points to possible corrective interventions?
The super-easy way to correct for this is to just get rid of the article information, right? Skip over the articles when you’re training on the data, on the grounds that they’re not providing meaningful information. That strikes me as a fix that kind of works for some languages, but we’d need to go beyond that for others. In German, for example, articles carry not just gender information but also case. So, it might be worth thinking through a more sophisticated approach. In the research we did, we just did the first easy, obvious thing, as a proof of concept: you can train a model without these grammatical gender biases resulting. But I think actually handling this well will require a bit more thought, because different languages have different needs. And these word embedding models are really developed and innovated in English. So, they reflect that lens: we have an algorithm that gets us close to word meaning in English, but we actually need to think about the specific needs and properties of other languages to be able to generalize that meaningfully.
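A minimal sketch of that easy fix, assuming German text that is already tokenized and lower-cased. The article list is illustrative and German-specific; other languages, and German case marking, would need more care:

```python
# Sketch of the simple intervention described above: drop gender-marked
# articles before training so they can't pull nouns toward either gender.
from gensim.models import Word2Vec

GERMAN_ARTICLES = {"der", "die", "das", "den", "dem", "des",
                   "ein", "eine", "einen", "einem", "einer", "eines"}

def strip_articles(tokens):
    """Remove article tokens from a tokenized, lower-cased sentence."""
    return [tok for tok in tokens if tok not in GERMAN_ARTICLES]

corpus = [["der", "tisch", "ist", "alt"],
          ["die", "sportlerin", "läuft", "schnell"]]  # toy examples

cleaned = [strip_articles(sentence) for sentence in corpus]
model = Word2Vec(cleaned, vector_size=50, window=2, min_count=1)
```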
Are there particular ways this is driving work happening here at Babbel?
Well, we’re looking at different ways to use language technology for learners, right? So, say we’re designing a comprehension task for learners of Spanish or German, where we ask which of a set of words is most like the others. If an approach works well in English but not in Spanish or some other language, we could wind up providing the user with something incorrect, if we don’t anticipate this sort of thing. A model might shorten the semantic distance between two words simply because of their grammatical gender, privileging that over some meaningful relation.
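For instance, gensim’s odd-one-out helper picks the outlier purely by vector distance. A sketch using the same hypothetical German vectors as above shows how grammatical gender could skew that choice:

```python
# Sketch of the comprehension-task concern. "german_vectors.kv" is the same
# hypothetical file as in the earlier example.
from gensim.models import KeyedVectors

de = KeyedVectors.load("german_vectors.kv")

# Intended odd one out: "tisch" (the only non-person). In a space dominated by
# grammatical gender, the model might instead single out the one feminine noun.
print(de.doesnt_match(["vater", "bruder", "mutter", "tisch"]))
```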
Widening the aperture a bit, where do you see the most interesting social implications for this sort of critical examination of these models?
Anywhere we’re using so-called Artificial Intelligence technology, really. I think it’s creeping more and more into our lives, in a lot of ways that are quite opaque. It’s hard to clearly sift out its effects. Really, in our research, we’re shining a light on one of what are likely hundreds or thousands of factors that could be affecting some particular decision a system is making, a decision that has meaning for you or me as a consumer or searcher or whatever, at some point.
Some other really interesting research came out recently showing that there are associations with the semantic roles in images. For example, the semantic association between women and cooking is so strong that some algorithms trained to label images, given a picture of a man cooking, will say it’s a woman, simply because the association is that strong. For the moment, it’s just a data result. I think it’s hard to imagine, now, all the ways this could be significant. Think of systems for sorting employees, algorithms searching for key terms in CVs. This is common practice in a lot of industries. And if you’re not catching this, then these language-specific biases could influence whose CV gets in front of which person at what time.
So, it’d systematize the very employment biases we’re trying to combat.
It could end up affecting the structure of employment. It could end up affecting any space in which automated decision-making is used in a sort of institutional or structural capacity. If something is leaning on technology that’s kind of opaque, you’re gonna have downstream consumer implications. That’s certainly significant. But it can also be used in all these institutional capacities. In any case, because it’s so opaque, it’s really, really hard to anticipate a specific harm. But that’s exactly what makes it so important to be able to draw these things out one by one and point out the potential factors that could be at play.