Ethical AI Usage: Fantasy or Reality?

How do we use AI ethically, if that’s at all possible?

Believe it or not, I’m optimistic. After all, AI is just a tool, and tools can be shaped and regulated into something great.

A note before we begin: this piece is primarily about examining necessary considerations. I have offered a few thoughts and tips, but the intent behind this piece is to ensure we’re asking the right questions. Many of these questions are open-ended, and others might not yet have answers. I want to ask them, though, because they are essential to developing an ethical AI framework.

With that, let’s get into it.

Initial Considerations

There are four main considerations we want to take into account at the outset, both for personal use and for model construction: trustworthiness, the impact on labor, transparency, and data privacy.

These considerations relate both to the creation and maintenance of AI and to its use, so bear that in mind as we go.

Trustworthiness

Trustworthiness is just what it sounds like: can we trust AI?

First, we need to be sure to check its outputs. At the moment, the onus falls mostly on the individual, but we can theoretically craft tools to keep AI honest in its outputs, and we can put in work to better shape the training datasets these models are built on to avoid bias in the first place.

Aside from that, we need to think about how AI is being used. Are AI applications being used to deceive or mislead? How so? Is this something the average person can spot and determine, or does this require experts to step in?

As I’ve mentioned before, AI hallucinations are widespread. They may not be the result of malicious intent, but they do call into question the reliability of AI outputs. Then there’s Grok, which is just about the worst chatbot out there and seems to deal almost exclusively in hallucinations (kind of a cool sentence out of context lol). There are also websites using AI to churn out and disseminate fake news articles at an alarming rate. These realities are not the norm, but they are persistent enough to warrant an additional note of caution.

Impact On Labor

The next aspect we want to consider is the impact of AI on labor. Are we using AI in ways that displace human workers? Are we using it in ways that are exploitative of others’ labor? If we are, is that avoidable? If not, how do we account for the displacement?

How AI impacts labor is, in my opinion, the chief ethical concern at the moment. In a society with a basically non-existent safety net, any labor displacement is going to hurt real people and communities, and that harm will bloom into a whole host of other issues. Automation is not necessarily evil, especially when it supplants the need for menial, degrading labor, but what happens to those it replaces? If we are to use AI ethically, we need to ensure that these people are cared for in the fallout, either with a social safety net or with stable employment to fall back on. Ideally, both. This echoes the "just transition" debate in environmental policy, where the move to clean energy raises the same question of what happens to displaced workers.

Transparency

Next, we should account for transparency in AI use. Are we fully disclosing our use of AI tools? Are we giving appropriate context around our use of AI tools? How do we ensure that people follow these guidelines, and what can we do to enforce these standards?

There are some platforms that require disclosure of AI usage (YouTube, for example), but not many, and certainly not enough. Some social platforms have minimal stopgaps in place as well, but again, not enough.

All told, that means we need to be careful about what we see and share. Until better measures are in place, double-check anything you think is suspect and avoid sharing AI-generated content.

Data Privacy

The last consideration is data privacy in AI use.

Generative AI tools require vast amounts of data to work well. Where does that data come from? At the moment, it’s hard to know. There are few, if any, laws regulating what data AI can use and how it uses it, so be aware that your every action may be unintentionally feeding these models.

Know that almost everything you do online at this point may be feeding some AI tool. Uploading photos, commenting on posts and articles, conducting online searches, and completing reCAPTCHAs can all be mined for data and exploited by AI models. That’s not the fault of the models but of the people in control of them and the people who have failed to regulate them. So, be conscious of what you’re putting out there.

Guidance

Like I said at the outset, this piece is more about asking questions than answering them, but there are still a few tips I want to leave you with.

  1. Be cautious about the information you give these tools.
  2. Never include personal or private data in prompts.
  3. Assume everything you type into an AI tool could leak or become public.
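For those inclined to automate tip 2, here is a minimal, illustrative sketch of scrubbing obvious personal data from text before it ever reaches an AI tool. The `scrub_prompt` helper and its regex patterns are my own hypothetical example, not a complete PII detector or any particular vendor's API; real deployments would need far more robust detection.

```python
import re

# Hypothetical, illustrative patterns for common personal data.
# These are a rough sketch and will miss many real-world formats.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_prompt(text: str) -> str:
    """Replace likely personal data with placeholder tags before sending
    the text to any AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email me at jane.doe@example.com or call 555-123-4567."
print(scrub_prompt(prompt))
# -> Email me at [EMAIL] or call [PHONE].
```

The design choice worth noting: the scrub happens on your side, before anything leaves your machine, which is the only place you actually control what gets fed to a model.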

Some Parting Thoughts

It is possible to use AI ethically. It may be difficult to untangle what has already been unleashed on the world, but we have the chance to shape AI’s future. In order to do that, we need to be sure we’re asking the right questions and demanding the right standards. Hopefully, this piece has helped you sort some of those out.