It’s less a bias of the programmer and more a bias of the data, particularly when a factor like gender or ethnicity correlates with something without directly causing it. Crime rates correlate with ethnicity largely because immigrants are poorer on average, and economic standing is a major driver of crime; if your dataset doesn’t include economic standing, any AI will just see “oh, people in group x are way more likely to commit crimes”. This can be prevented, but it’s generally more a risk of overlooking something than of intentional data manipulation (not that that isn’t possible).
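To make the omitted-variable point concrete, here’s a minimal sketch with made-up synthetic data (the “group”/“income” names and all the numbers are illustrative assumptions, not from any real dataset): the outcome depends only on income, but group membership correlates with income, so a model trained without the income column pins the effect on group membership instead.

```python
# Toy sketch of omitted-variable bias with synthetic data.
# "group" has no causal effect on the outcome, but it correlates with
# "income", which does. Drop income from the features and the model
# attributes income's effect to group membership.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

group = rng.integers(0, 2, n)                      # 0 or 1, no causal role
# Group 1 is poorer on average (the confounding correlation).
income = rng.normal(loc=50 - 15 * group, scale=10, size=n)
# Outcome probability depends on income only, not on group.
p = 1 / (1 + np.exp((income - 35) / 5))
y = rng.random(n) < p

full = LogisticRegression(max_iter=1000).fit(
    np.column_stack([group, income]), y)
partial = LogisticRegression(max_iter=1000).fit(
    group.reshape(-1, 1), y)

print("group coef with income in the data: %.3f" % full.coef_[0][0])
print("group coef with income omitted:     %.3f" % partial.coef_[0][0])
# With income present, the group coefficient sits near zero; without it,
# the model "learns" that group 1 is far more likely to have y = 1.
```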
Yes, that’s fair. I guess my comment wasn’t a direct response to yours so much as it made me think: the desire for all the difficult issues (like bias) to just disappear once you remove all the humans from the process* is flawed, and any anticapitalist society should really start from that understanding. One that accepts that conflict will emerge, and that pro-social, “convivial” systems and structures need to emerge to handle it.
*You are right to point out that the “AI” we are talking about is a set of statistical models built from human data, bias included, whereas the hype is that we have Data from Star Trek. These systems hide the human inputs but don’t remove them.
To be honest, I’m tempted to say that the desire to remove humans from the production of society is a fundamentally capitalist one.
While that might be true in some contexts, it makes no sense in the context of my comment.
I’m saying that leftist coders’ personal problems and racism will make their way into the AI, much like they have with capitalist AI.
Humans have many of the same biases and issues regardless of political lean.