It’s easy to get complacent with GenAI tools that not only answer all our questions but can also analyze our behavior and make binding decisions about who we are and what we can access. But as with many things associated with GenAI, we aren’t looking hard at the risks. The “What Ifs.”

OpenAI’s Age Prediction Model

We can see this with OpenAI’s announced use of age prediction modeling to determine whether a user is underage. It’s hard to argue against something that sounds this good. But there are some risks. Some proverbial slippery slopes.

The program rollout was reported in a recent article in Mashable. The program is designed to make decisions about a user’s age based on factors such as behavior, when the person is active, long-term usage patterns, and how long the account has existed. If the algorithm determines the person is under 18, they will be barred from viewing certain things that are perceived as potentially harmful.
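OpenAI hasn’t published how its model actually works, so to make the idea concrete, here is a purely hypothetical sketch of how signals like the ones described above might be collapsed into an age call. The signal names, weights, and threshold are all invented for illustration; they are not OpenAI’s method.

```python
# Hypothetical illustration only -- OpenAI has not disclosed how its age
# prediction model works. This sketch just shows how a handful of noisy
# behavioral signals might be combined into an "is this user a minor?" score.

from dataclasses import dataclass

@dataclass
class UsageSignals:
    account_age_days: int         # how long the account has existed
    late_night_activity: float    # share of sessions late at night (0-1)
    school_hours_activity: float  # share of sessions on weekday afternoons (0-1)
    topic_youth_score: float      # guess that prompts skew toward teen topics (0-1)

def minor_likelihood(s: UsageSignals) -> float:
    """Return a 0-1 score; the weights here are made up for illustration."""
    score = 0.30 * s.topic_youth_score
    score += 0.25 * s.school_hours_activity
    score += 0.20 * s.late_night_activity
    score += 0.25 * (1.0 if s.account_age_days < 180 else 0.0)
    return score

user = UsageSignals(account_age_days=90, late_night_activity=0.4,
                    school_hours_activity=0.5, topic_youth_score=0.6)
if minor_likelihood(user) > 0.5:
    print("Restrict content pending age verification")  # the binding decision
```

The point isn’t the particular math. It’s that a few indirect proxies get collapsed into a hard yes-or-no decision about a person, and that is exactly where misclassification risk lives.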

This also means OpenAI now maintains records of user classifications, data that could prove valuable beyond the present use.

That’s a powerful tool. Maybe too powerful.

A Slippery Slope

Once GenAI providers have that power, it’s a slippery slope. A provider could no doubt use its algorithms to define who I am, what I do, what my political views are, and how likely I am to do or not do something. It feels like a further erosion of privacy.

And there’s a risk those decisions could be used by others. To use an example, a GenAI tool might conclude from my prompts that I’m the kind of person who is likely to drive too fast or eat unhealthy snacks. What happens if that data gets into an insurance company’s hands? What’s to stop the provider from even selling that data to insurance companies?

Not to mention the fact that these GenAI “decisions” could fuel all sorts of advertisements and other intrusions. 

There’s a lot of mischief that could be done by making these kinds of decisions.

What if these decisions were used for political gain? What if ChatGPT decides that I’m likely to vote Democratic in a swing-state election and that information is provided to others? Or what if, based on an algorithm, it decides that I’m here illegally and reports that?

The Law of Unintended Consequences

It’s also fair to ask of any algorithmic decision: what happens if the algorithm is wrong?

Let’s say I’m using a GenAI tool to do some research on domestic violence. But the algorithm mistakenly decides I’m an abuser and cuts off access. Sure, I can try to rectify it, but how hard will that be? How long will it take? What will it cost in time and energy?

In the meantime, what are the consequences of that label?

And we have seen examples of algorithms making decisions that proved to be just plain wrong. Risk assessment tools were reportedly used to make bail decisions in many courts. But ProPublica later revealed that the algorithm was biased against certain groups of people because of biased data. It’s an example of something that sounded good but ignored the practical realities of data bias. The same thing could happen with GenAI data: the way some people communicate could trigger a decision because the algorithm can’t see its own bias.

These are all dangers, especially when the decisions are tied to a tool whose use is as pervasive as GenAI’s.

But It’s All Good, Right?

The Age Predictor tool is one of those things that’s hard to oppose. We are all in favor of protecting children from bad things on the internet. And there have, of course, been instances where GenAI tools gave inappropriate and allegedly downright dangerous advice to underage users.

But it’s concerning that GenAI providers have virtually unrestrained power to use data from our inputs, make assessments about us, and decide what we can use their tools for. Can we really trust tools that may be biased and hallucinate to make the “correct” decision based on the identified factors? According to the Mashable article, OpenAI does state that if its AI tool makes a mistake and says you are a minor, you can send a selfie off someplace and it will be corrected. But that sounds like a hassle, and who knows how long it would take to resolve or what would happen to your photo once sent.

It’s also a concern for lawyers and legal professionals. We could get classified inappropriately based on something we said in pursuit of client advocacy. It could chill what we say and how we use the tools. And it could create consequences for our clients.

A Need for Guardrails

Of course, OpenAI is a private company and can do what it wants. It’s easy to say if you don’t like what it’s doing, don’t use the product; no one is forcing you to. 

But that ignores the fact that GenAI tools are becoming indispensable at work and everywhere else. It would be hard not to use the tools, especially if you can’t get your work done without them. Which, by the way, brings up the possibility that your work performance and loyalty might be determined by an algorithm based on patterns it deems indicative.

And as Cory Doctorow and others have written and I have discussed, when companies get this powerful, they can essentially do whatever they want, jeopardizing our privacy and placing us in categories that we don’t want to be in or that are simply wrong. And the temptation to use that kind of labeling for nefarious purposes may be too great.

I’ve been around tech and social media vendors long enough to know that relying on a “just trust us” guardrail is naïve. These guys are in it to make a profit and, particularly as they get more powerful, will do just about anything unless restrained.

A Little Restraint, Please

What’s needed are some guardrails: protections and limits on what GenAI providers can and can’t do with the data they collect from us and the patterns they see. We can’t just ignore the dangers. There needs to be regulation, and there need to be penalties for bad use and for mistakes.

Yes, it’s a lofty goal to protect kids. But rather than come up with intrusive algorithms to preclude liability, maybe GenAI providers ought to make their outputs more robust or concentrate on not giving bad advice. Perhaps they could come up with protective measures that don’t make algorithmic decisions about users at all.

The irony is that we’re asking GenAI to make nuanced decisions about human behavior and intent: the same GenAI that we regularly catch making basic factual errors. Maybe it’s time to admit that some decisions are too important to automate, even in the name of child safety.