Andy Haymaker

Don’t Boycott AI, Especially If You Hate It

These days, morally sensitive people say things like, “There’s no ethical way to use AI.” Believe me, I get it. I wrote earlier about all the reasons a backlash against the current set of AI models is justified. The big AI companies have hurt their own industry by building it atop massive data theft, all to create AI applications that don’t address pressing societal issues. But issuing holier-than-thou fatwas and advocating a Luddite smashing of the AI power looms won’t fix the problem. Let me explain why, and what to do instead of calling for boycotts.

First of all, get off your high horse. Posting AI-boycott sentiments on the internet is the height of hypocrisy. There is no 100% ethically pure way to use the internet, drive to work, or buy food for your family. Nearly everything in our modern world is built upon a mountain of environmental catastrophes and exploitative business practices. I’m not saying this is okay, but unless you’re living in an off-grid cabin in the woods or in a non-technological community like the Amish, you’re part of the problem. That’s understandable, because nobody who wants to affect the world is living off the grid. Subsistence living isn’t scalable, and engagement with the high-tech world is required to help steer our future. We need to move forward, not back to a romanticized past.

Second, AI isn’t going anywhere. If you think boycotts—or even legislation—can stop the industry in its tracks, you’re wasting your breath tilting at capitalist windmills. The technology is simply too valuable and too easy to build. AI will keep coming, and our best bet for making things better is to steer the technology, not try to stop it.

If you accept that AI is here to stay, then it becomes obvious that we need to change our trajectory, not attempt to throw it into reverse. It’s already happening. Companies like OpenAI are responding to criticism and lawsuits by striking deals with the owners of the intellectual property (IP) that AI models are trained on. The latest ChatGPT searches the internet and provides links to the sources it depends on for up-to-date and accurate information. This is a trivial step that doesn’t address the contributions of working artists and ordinary internet users whose data is being vacuumed up by the companies, but it’s something.

We need to do much more to reward companies for the small steps they take in the right direction. We need to create and promote ways to build AI models and products that reflect the world we want. There are hundreds of journalists and bloggers who will tell you how to opt out of having your online interactions used as training data for AI. What’s the endgame of that strategy? Where are the posts telling you how to opt in to providing data for AI models that are dedicated to providing social goods? Where are the articles on how to get compensated for the data you generate?

If you want a better future, consider moving from an anti-AI stance to a more balanced and future-aware form of AI criticism. As I wrote in my earlier post, we need to move quickly towards opt-in-only consent, attribution and compensation, and positive, pro-social applications. Let’s build AI and robots that take the drudgery out of life so we can hang out together and make art. If we simply oppose AI on principle, the companies will beat us in the courts and push their profit-driven future on us. To create the future we want, we have to build it. Find, or create, organizations that are building AI the right way for the right reasons. Supporting these organizations will not only do more to bring about a positive future, it’ll feel a lot better along the way.