Disclaimer: I am asking this for a college class. I need to find an example of AI being used unethically. I figured this would be one of the best places to ask. Maybe this could also serve as a good post to collect examples.
So what have you got?
I’m not really sure if I want to agree here. We’re currently in the middle of a hype wave around LLMs, so that’s what most people mean when they talk about “AI”. Of course that’s wrong. I tend to use the term “machine learning” when I don’t want to confuse people with a loaded term.
And I must say, most (not all) machine learning is done in a problematic way. Tesla cars have been banned from company parking lots, your Alexa saves your private conversations in the cloud, and the algorithms that power the web weigh down on society and spy on me. The successful companies are built on copyright theft or personal data from their users. And none of that is really transparent to anyone. And oftentimes it’s opt-out, if we get a choice at all. But of course there are legitimate uses. I believe a dishwasher or a spam filter can be trained ethically, and probably also image detection for medical applications.
I 100% agree that big tech is using AI in very unethical ways. And this isn’t even new: the chairman of the U.N. Independent International Fact-Finding Mission on Myanmar stated that Facebook played a “determining role” in the Rohingya genocide. And then recently Zuck actually rolled back the programs that were meant to prevent this in the future.
I think quite a few of our current societal issues (in Western societies as well) come from algorithms and filter bubbles. I think that’s the main contributing factor to why people can’t talk to each other any more and everyone gets radicalized further into the extremes. And in the broader picture, the surrounding attention economy fuels populists and does away with any factual view of the world. It’s not AI’s fault, but it’s machine learning that powers these platforms and decides who gets attention and who gets confined into which filter bubble. I think that’s super unhealthy for us. But sure, it’s more the prevailing internet business model to blame here, and not directly the software that powers it. I’ll have to look up what happened to the Rohingya… We get a few other issues with social media as well, which aren’t directly linked to algorithms.