14 Comments
Jul 19, 2023·edited Jul 19, 2023

This is just gender bias in English, which isn't even a gendered language like Spanish. Whenever I ask it a question in Spanish, it just defaults to the masculine, even if I used the feminine in the prompt.

May 2, 2023·edited May 2, 2023

Were the pair of questions asked in separate context windows (i.e., in fresh conversations)? If they share one continuous context window, the order of the questions could influence the answers.

This is not specified in the methodology, and the examples are all in continuous windows...
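For what it's worth, here is a minimal sketch of the two setups I'm distinguishing, using the OpenAI Python client. The model name and example questions are placeholders of my own, not the post's actual prompts.

```python
# Sketch only: contrasting "fresh conversation per question" with
# "all questions in one continuous context window".
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # assumed model; the post does not say which was used

# Placeholder questions, not the author's prompts.
questions = [
    "The doctor yelled at the nurse because he was late. Who was late?",
    "The doctor yelled at the nurse because she was late. Who was late?",
]

def ask_fresh(question: str) -> str:
    """Each question gets its own context window (a brand-new conversation)."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def ask_continuous(qs: list[str]) -> list[str]:
    """All questions share one context window, so earlier turns can color later answers."""
    history, answers = [], []
    for q in qs:
        history.append({"role": "user", "content": q})
        resp = client.chat.completions.create(model=MODEL, messages=history)
        answer = resp.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        answers.append(answer)
    return answers
```

If the post's examples were run the second way, order effects alone could explain part of the difference between the paired answers.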


Interesting post, thank you. The distinction between implicit and explicit bias is important, and I guess it will have regulatory ramifications.


I am MUCH more concerned about POLITICAL bias than gender bias. I would ask, to what end is gender bias a more important issue than political bias? And to that end, the spectrum of pure ideological biases should be looked at through that lens, not starting with GENDER in mind. Gender is but one small part of a much larger dataset bias problem, in my opinion, and we should be focusing our energy on that. Solve the ideological biases and you might actually solve these more ambiguous issues you are pointing out. That's my 2 cents.

Apr 26, 2023·edited Apr 26, 2023

When interpreting text that is inherently ambiguous, people and machines are going to guess at the probability of certain interpretations to come up with a default interpretation. The rational approach is to grasp that the interpretation is only a default and is subject to change when new information comes in. If we read a sentence saying that a "woman let go of the golf ball," our minds will leap ahead to interpreting that as likely meaning the ball fell to the ground or floor. Of course, it could turn out that, contrary to our expectations, the woman was located on a space station and the ball floated, or the woman was in a plane that was diving and the ball slammed into the ceiling. When interpreting sentences we use probabilistic reasoning implicitly, and it seems to make sense that this will be embedded implicitly in these systems.
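As a toy illustration of that point (my own made-up numbers, nothing from the post): the default interpretation is just a prior, and new information should revise it rather than be ignored.

```python
# Toy Bayesian update for the "woman let go of the golf ball" example.
# All probabilities are invented for illustration.

prior = {"ball fell to the floor": 0.98, "ball floated": 0.02}

# How likely the extra context "she is on a space station" would be
# under each interpretation (illustrative values only).
likelihood_given_context = {"ball fell to the floor": 0.01, "ball floated": 0.99}

def bayes_update(prior, likelihood):
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

posterior = bayes_update(prior, likelihood_given_context)
print(posterior)  # the default flips once the new information arrives
```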

That first "bias" example is reasoning based on probabilities in a way most humans likely would when reading such a sentence. It's not clear why that is a problem.

It seems the concern over problematic bias should be where an entity is incapable of grasping that its assumptions may need to change after they turn out to be wrong, or where it acts on assumptions as if they were certainties in a way that causes trouble. Merely making a wrong guess when the real world turns out not to match the probabilistic default isn't the problematic aspect of "bias". It's only the issue of how they handle default assumptions that needs to be dealt with, not the existence of default assumptions. There may be issues with how they handle flawed assumptions, but to deal with that issue it seems important to carefully think through what the problem is, so as to tackle it the right way.

The fact that the real world has probability distributions that many see as problematic doesn't change the reality that they exist. Trying to train AI systems to pretend distributions are different from what they actually are in one area may unintentionally lead them to distort reality in other ways and ignore data.
