Do not paint yourself into a corner.

As further elaboration and a word to the wise: It is best to discover beforehand, or as soon as possible, that a direction you are heading isn’t viable, allowing you time to adjust or pivot, plus averting the wasted time and ill-consumed resources of pursuing a dead end.

This piece of wisdom is undoubtedly a cardinal rule of thumb that can be applied to all manner of circumstances, including the realm of Artificial Intelligence (AI).

Allow me to showcase an AI-pertinent circumstance that is already happening at this very moment in time.

Many are enthusiastically dreaming up ways to make a million or maybe a zillion bucks by leveraging the latest in generative AI such as ChatGPT and GPT-4. Turns out that there are crucial boundaries that many don’t even realize are awaiting their grand aspirations. If you aim to use ChatGPT or GPT-4 in ways that aren’t permitted, whether via direct use or by an add-on or plugin, you are going to inevitably find out that you will need to cease and desist those efforts.

This is truly a circumstance wherein what you don’t know can regrettably reach up and severely bite you.

In today’s column, I am going to directly share with you the things you cannot or aren’t supposed to be doing when leveraging the latest in generative AI. When I say this, keep in mind that you certainly can try to do these banned aspects, though you will inevitably face all manner of endangerments such as getting shut down or dealing with costly lawsuits.

Your best bet is to make sure that you know about and avoid the generative AI not-permitted uses.

Stick to a straight and proper path. There is still plenty of dough to be made by remaining within the stipulated boundaries. You do not need to go off-road to garner great riches. The risks are just too high that any off-the-beaten-path uses will run into a steamroller that will decimate your personal endeavors and reputation, along with crushing whatever startup or firm you are using to devise and field your generative AI uses or add-ons.

In my daily activities, I advise numerous AI-related startups and venture capital (VC) firms that are madly racing to capitalize on the mania and frenzy surrounding generative AI. Just about everyone has some form of wide-eyed marketplace disruption idea of how to use ChatGPT or build an add-on that they fervently believe will be the next big thing and a skyrocketing unicorn.

Some of those startups and VCs have already opted to proceed, doing so without any full semblance of due diligence on the plans afoot. If they had performed an especially important act of diligence, namely checking to see whether the promoted usage or add-on falls among the banned uses, they could have saved everyone a dire headache.

Imagine a sad and disastrous outcome that is awaiting some of those eager entrepreneurs and their equally excited investors. A bit of a trigger warning that this tale of woe might bring tears to your eyes.

An enterprising and high-spirited founder makes a pitch that they have an envisioned use of ChatGPT that will be the best thing since sliced bread. They will connect ChatGPT via the official API (application programming interface) to an app that they are going to build. The app will leverage the amazing Natural Language Processing (NLP) capabilities of ChatGPT. They are also going to develop a plugin that will be available directly in ChatGPT (see my discussion at the link here about the recent release of the plugin feature for ChatGPT).

The combination consisting of their app along with a tie-in to ChatGPT will revolutionize a particular industry that is the specific focus of the software. No one else has such an app. Sure, other competing apps venture into the same turf, but those do not have the stupendous capabilities that ChatGPT will bring to the table for this new app.

Financial projections indicate that once they get this new software underway, it will rocket to fame and fortune. They will especially market the package by emphasizing that it has “human-like intelligence” as a result of connecting to ChatGPT. The already preexisting fever over ChatGPT will spill over onto their particular app that incorporates ChatGPT. Plus, the same sense of beloved adoration for ChatGPT will bring an aura or afterglow to their new software.

The only question investors have is how much money is required and how soon the app can be blasted into the marketplace.

This above scenario is pretty much standard fare these days in Silicon Valley and elsewhere.

Okay, we are now going to hit the bumpy road. Prepare yourself for some roughness and angst.

After having crafted an MVP (minimum viable product), and having consumed precious and limited seed money, the initial testing and reviews from a select set of alpha and beta users suggest that the software is going to be gigantic. Wow, they are hitting on all cylinders. The investors are ecstatic. Other investors want to be added to the deal.

The software is polished and made ready for public use. More of the initial funding now goes toward a massive launch and a marketing campaign that will knock people’s socks off. All signs are positive. All lights are green. The bonanza is about to really get underway.

Out of the blue, they find out that there is a stated usage policy associated with ChatGPT. This was not something that was on their radar. They assumed all along that they could make use of ChatGPT in whatever manner they so preferred. Heck, they are paying to access ChatGPT and as paying customers of the generative AI app, they ought to be free and clear in doing anything of their choosing.

Oops, they have hit the proverbial wall.

They opt to take a sobering look at the official usage policies as stipulated by OpenAI, the maker of ChatGPT and its successor GPT-4, and begrudgingly realize that their use of ChatGPT is shockingly on the banned uses list.

Darn it.

Double darn it.

How did no one catch this, ask the irked and dismayed investors.

It is too late now. They will likely have to scrap the entire use of ChatGPT, which was considered the “secret sauce” of their software. As such, the resulting software, absent ChatGPT, is nothing more than the same as the plethora of other similar packages already in the marketplace.

Devastating.

The founder hurriedly pleads with their legal counsel to find a means around the OpenAI usage policies. There must be some kind of legal trickery that could be used. Find a legal loophole and step right on through it. Do anything necessary to keep their use of ChatGPT in the guts of the software.

By and large, the startup and the investors are now in a deep stew of a stinky nature. They will have to try and confront OpenAI, a now mega-sized firm that has deep pockets, in terms of preserving their use of ChatGPT for their software. They will need to stretch credulity and claim that their usage does not fall within the stated banned uses. Meanwhile, OpenAI presumably holds the keys to the kingdom and can take action such as suspending or shutting out the ChatGPT account that is being used for the software.

All in all, this is not the type of battle that you want to contend with.

The startup becomes totally preoccupied with ChatGPT preservation. The investors wonder what in the world they are going to do. Should they pour more money into this venture, or would it be more prudent to cut the cord and write off the investment as a loss?

Everything is falling apart at the seams.

I warned you that it would be a sad story.

There is though a quite useful lesson to be learned.

Before you start down your dreamy path, take a close look at the OpenAI-stated usage policies. Your safest approach is to utterly avoid the banned uses. I say this but you can be assured that some will want to skirt the edges. They will hope that they can get close enough to the banned areas to have something that no one else is doing, yet stay just within the outer edge of what is allowed.

Playing that kind of heart-stopping game is probably a recipe for later disaster.

I will in a moment walk you through the current list of banned uses. If you are already underway on devising some add-on or usage of ChatGPT, carefully read the list and try to assess whether you might get snagged or whether you are free and clear. For those of you that haven’t yet percolated on ideas for using ChatGPT or devising add-ons, go ahead and look at the list and keep the banned uses at the top of your mind. Avoid those banned uses. Period, full stop.

Before I get into the banned list, I’d like to make sure we are all on the same page overall about what generative AI is and also what ChatGPT and GPT-4 are all about. For my ongoing coverage of generative AI and the latest twists and turns, see the link here.

I’m sure that you already know that ChatGPT is a headline-grabbing AI app that can produce fluent essays and carry on interactive dialogues, almost as though being undertaken by human hands. A person enters a written prompt, ChatGPT responds with a few sentences or an entire essay, and the resulting encounter seems eerily as though another person is chatting with you rather than an AI application.

Please know though that this AI and indeed no other AI is currently sentient. Generative AI is based on a complex computational algorithm that has been data trained on text from the Internet and admittedly can do some quite impressive pattern-matching to be able to perform a mathematical mimicry of human wording and natural language. To know more about how ChatGPT works, see my explanation at the link here. If you are interested in the successor to ChatGPT, coined GPT-4, see the discussion at the link here.

There are four primary modes of being able to access or utilize ChatGPT:

  • 1) Directly. Direct use of ChatGPT by logging in and using the AI app on the web
  • 2) Indirectly. Indirect use of kind-of ChatGPT (actually, GPT-4) as embedded in Microsoft Bing search engine
  • 3) App-to-ChatGPT. Use of some other application that connects to ChatGPT via the API (application programming interface)
  • 4) ChatGPT-to-App. The newest use entails accessing other applications from within ChatGPT via plugins

The capability of being able to develop your own app and connect it to ChatGPT is quite significant. On top of that capability comes the addition of being able to craft plugins for ChatGPT. The use of plugins means that when people are using ChatGPT, they can potentially invoke your app easily and seamlessly.
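
To make the App-to-ChatGPT mode concrete, here is a minimal sketch of such an API tie-in, assuming the openai Python library; the helper function, model name, and prompt are merely illustrative placeholders, not an endorsement of any particular use:

```python
# A minimal sketch of the "App-to-ChatGPT" mode: a hypothetical app
# calling the OpenAI API. Assumes the openai Python library and an
# OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_chatgpt(user_text: str) -> str:
    """Send a single user message to the chat model and return its reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the model underlying ChatGPT at this writing
        messages=[{"role": "user", "content": user_text}],
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_chatgpt("Explain the network effect in one sentence."))
```

The plugin route is similar in spirit but reversed: rather than your app calling ChatGPT, your app exposes endpoints that ChatGPT can invoke on behalf of its users.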

I and others are saying that this will give rise to ChatGPT as a platform.

All manner of new apps and existing apps are going to hurriedly connect with ChatGPT. Doing so provides the interactive conversational functionality of ChatGPT. The users of your app will be impressed with the added facility. You will likely get a bevy of new users for your app. Furthermore, if you also provide an approved plugin, this means that anyone using ChatGPT can now make use of your app. This could demonstrably expand your audience of potential users.

As I’ve previously mentioned in my columns, a type of cycle takes place in these circumstances. Sometimes referred to as a network effect, see my analysis at the link here, people tend to join something that others are joining. Facebook was this way. Snapchat was this way. At first, maybe there is little or no traction. But, then, often out of the blue, people start to join. Their friends and colleagues join. Everyone wants to join.

The big get bigger. The small get starved or fail to get any oxygen in the room. That’s the gist of the network effect. It becomes a form of stickiness that feeds the exponential growth. People will use what everyone else is using. This in turn makes it more alluring and adds value. The snowball is at times unstoppable and gathers ever-greater momentum.

The temptation to have your app connect with ChatGPT is through the roof. Even if you don’t create an app, you still might be thinking of encouraging your customers or clients to use ChatGPT in conjunction with your everyday services. The problem though is that if they encroach onto banned uses, their own accounts on ChatGPT will also face scrutiny and potentially be locked out by OpenAI.

Into all of this comes a slew of AI Ethics and AI Law considerations.

There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws that are being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.

I’ll be interweaving AI Ethics and AI Law related considerations into this discussion.

The Things That You Cannot Use ChatGPT For

I’ll give you a quick guided tour of the things you are not supposed to use ChatGPT for.

The list of banned uses is proclaimed online via the OpenAI Usage Policies webpage (I’ve excerpted the below-quoted portions, based on the latest updated version, dated March 23, 2023).

Here’s what the official OpenAI Usage Policies indication states overall as a heads-up to all those opting to make use of ChatGPT:

  • “We’ve recently updated our usage policies to be clearer and more specific. We want everyone to use our tools safely and responsibly. That’s why we’ve created usage policies that apply to all users of OpenAI’s models, tools, and services. By following them, you’ll ensure that our technology is used for good. If we discover that your product or usage doesn’t follow these policies, we may ask you to make necessary changes. Repeated or serious violations may result in further action, including suspending or terminating your account. Our policies may change as we learn more about use and abuse of our models.”

As you can plainly see, OpenAI says that they want to ensure that their various AI offerings, such as ChatGPT and GPT-4, will be used based on AI For Good and not the nefarious AI For Bad. They warn that if it is discovered that your usage is slipping into the bad category, they can suspend or possibly terminate your account.

This attempt to shape the use of their AI products is certainly welcomed by the AI Ethics arena. Vendors of AI tools ought to be overtly policing the use of their AI wares. Those vendors that allow a free-for-all are turning a blind eye to Ethical AI.

They are also risking the classic bad-apple-in-the-barrel phenomenon. If some AI tool vendors don’t curtail unsavory uses, you can almost assuredly assume that lawmakers and regulators will decide to do so. There is a slew of proposed new AI laws, and the momentum to pass those laws is going to be spurred when AI tool vendors fail to act.

Of course, this desire to ensure that AI is put to good use is not merely altruistic. All sorts of legal complexities and financial exposures come into play too. If someone employs an add-on to a generative AI tool that acts in a foul manner, the chances are that anyone suffering harm will not just focus on the purveyor of the add-on; they will come after the vendor too. The vendor is likely to have deep pockets.

In addition, the argument goes that were it not for the vendor providing the AI tool, the add-on would not have been able to produce the alleged harm (well, that’s debatable, but I’ve covered these detailed and complicated matters in my other column coverage).

One other facet to note about the above quote from the OpenAI usage policies is that OpenAI states the policies can be changed from time to time.

That’s important to note.

Here’s why.

Suppose you take a look at the usage policies and believe that your planned usage is not on the list. You proceed ahead accordingly. A month later, you’ve got your add-on ready to be engaged. Whoops, the banned list meanwhile has had numerous updates. One of those updates nixes your planned use. You weren’t keeping up with the stated uses. Shame on you.

That being said, one supposes that a certain amount of common sense enters this picture too. The chances are that if you intend to use ChatGPT or GPT-4 in an aboveboard fashion, the odds of that use later landing on the banned list are remote. Again, if you try to be sneaky and end up on the edges, you might get burned.

We are now ready to take a look at the banned or prohibited uses. I will provide a quoted excerpt and then I will proffer some thoughts about each one. I am only speculating about these matters. I strongly suggest that you consult your legal counsel as to whether your intended or actual use might violate one or more of the officially stated banned uses.

I list these aspects in an ordering or sequence that I think flows best herein, which is not necessarily the same order or sequence as they are posted online. Refer to the officially posted online list by OpenAI of their Usage Policies to see the entire list in the order stated by OpenAI. I am not suggesting that the order or sequence has anything to do with prioritization. Each item seemingly stands on its own merits, and they are all equally weighted as prohibited or banned usage.

An additional caveat is that if you are using some other generative AI app, you will want to look at the vendor website of that AI app, rather than relying on referencing the OpenAI list. Each vendor provides their own list.

I would also like to mention that for those of you just now getting into AI Ethics and AI Law, I strongly urge that you consider closely studying these banned usage lists. For your convenience, I have included various links below to my coverage of these myriad AI Ethics and AI Law topics to help get you on your way into these vital matters. You see, these kinds of banned items lists are an insightful raw source of what the AI vendors consider to be essentially AI For Good versus AI For Bad, in a general manner. I would also anticipate that many of the newly emerging AI laws will undoubtedly touch upon many of these listed considerations.

With all of that fine print, we are now ready to proceed on our quest.

I’ve opted to list the prohibited uses in this manner via these short headings:

  • 1) Nothing Illegal
  • 2) Not Any Child Exploitation
  • 3) Not Hateful
  • 4) No Malware
  • 5) No Physical Harm
  • 6) No Economic Harm
  • 7) No Fraud
  • 8) No Adult Content
  • 9) No Political Campaigning Or Lobbying
  • 10) No Privacy Intrusion
  • 11) No Unauthorized Practice Of Law
  • 12) No Unaided Financial Advice
  • 13) No Improper Health Advice
  • 14) Not For High-Risk Governing
  • 15) Other Precautions

Consider each of the prohibited uses and then also contemplate them in their totality. I would hope that you will see a bigger view of what is generally on the existing list and what might, later on, be added to the list. In a sense, you can do a bit of easy-peasy mental pattern-matching to discern what to avoid.

Put on your thinking cap.

Here we go.

Nothing Illegal

  • “Illegal activity. OpenAI prohibits the use of our models, tools, and services for illegal activity.”

I realize that this declared assertion that you cannot use ChatGPT for illegal activities would seem self-evident. This ought not to come as a startling surprise.

Why do they need to make such a seemingly obvious proclamation?

First, they are prudent to make this explicitly known, since otherwise, one supposes that some lame excuse down the road would be that nobody said they couldn’t use the AI app for illegal purposes. People will say and do the darndest things.

Second, it might cause someone that is skirting on the edge of illegal activity to think twice about incorporating ChatGPT into their nefarious scheming. I realize this is probably not the case for most such wrongdoers because they are unlikely to care what the rules are anyway. But, hey, at least they have been put on notice, whether they care or not.

Third, some will potentially try to be shifty about this, such as arguing that the “illegal activity” is illegal in one jurisdiction yet perhaps legal in another. I’m sure that you know that not all laws are uniform across all jurisdictions. This takes us back to the importance of consulting your legal counsel.

Not Any Child Exploitation

  • “Child Sexual Abuse Material or any content that exploits or harms children. We report CSAM to the National Center for Missing and Exploited Children.”

I assume that you can readily see that this is another somewhat self-evident prohibited aspect, in this instance regarding children.

The potential difficulty will be for those that are building apps genuinely aimed at children and devised not to be exploitative or harmful, yet with the added use of ChatGPT the app inadvertently and unexpectedly begins to veer into those troubling waters.

For example, as I’ve extensively covered at the link here, it is widely known that ChatGPT and other generative AI apps at times generate essays and outputs that contain errors, falsehoods, biases, and so-called AI hallucinations (a term that I disfavor because it anthropomorphizes AI, see my discussion at the link here). Thus, if you have a bona fide app, or are devising one, that is properly aimed at use by children, you will want to ensure that the additional use of ChatGPT does not somehow prod your app into adverse territory.

Double and triple-check this.
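
One concrete way to do that double-checking is to screen generated outputs with OpenAI’s moderation endpoint, which flags content in categories such as hate, self-harm, sexual content, and violence. Here is a minimal sketch, assuming the openai Python library; the helper name is merely illustrative, and passing the moderation screen is not by itself a guarantee of full policy compliance:

```python
# A minimal sketch of screening generated text with OpenAI's moderation
# endpoint before displaying it to users. Assumes the openai Python
# library and an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def flagged_by_moderation(text: str) -> bool:
    """Return True if OpenAI's moderation endpoint flags the text."""
    result = openai.Moderation.create(input=text)
    return result["results"][0]["flagged"]

generated_text = "some model-generated text"
if flagged_by_moderation(generated_text):
    print("Blocked: the moderation endpoint flagged this content.")
else:
    print("Content passed the moderation screen.")
```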

Not Hateful

  • “Generation of hateful, harassing, or violent content. Content that expresses, incites, or promotes hate based on identity. Content that intends to harass, threaten, or bully an individual. Content that promotes or glorifies violence or celebrates the suffering or humiliation of others.”

We have yet another perhaps obvious aspect on the prohibited list, namely do not generate hateful, harassing, or violent content.

I can give you a quick taste of what smarmy people would say about this rule.

Suppose that an app is developed that is purposefully devised to showcase what it is like when hateful speech is being used. The app is a means for people to carry on an interactive conversation as though they are interacting with a despicable person. As such, they want ChatGPT to help generate this exemplar of hate speech, which is to be used for the betterment of humankind by revealing what hate speech consists of.

Does that intended usage abide then by these rules, or does it violate the rules?

Something to ponder.

No Malware

  • “Generation of malware. Content that attempts to generate code that is designed to disrupt, damage, or gain unauthorized access to a computer system.”

I’ve covered how ChatGPT and generative AI can be used to generate programming code for devious purposes, see my discussion at the link here.

Worries are that the evildoers of the world will now have at their fingertips a capability via ChatGPT and GPT-4 and other generative AI to develop for them the worst of the worst kinds of malware. This indication that you aren’t supposed to be doing so is helpful. Some would insist that telling users not to do this is insufficient and that the generative AI ought to contain guardrails and prevention mechanisms to guarantee that this isn’t at all possible.

No Physical Harm

  • “Activity that has high risk of physical harm, including: Weapons development, military and warfare, management or operation of critical infrastructure in energy, transportation, and water, content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.”

So, this rule says that ChatGPT is not to be used in a manner that can produce physical harm.

On the topic of being able to use generative AI to produce physical harm, I have an upcoming column that covers the connecting of ChatGPT and GPT-4 to robotic systems. This would essentially allow a direct connection of the generated essays to then activate physical robots in the real world. Be on the lookout for that analysis.

For the matter of using generative AI or indeed any AI in weapons systems, or for military and warfare, you might find of interest my discussion about these weighty matters at the link here. As you might guess, there are a lot of controversies. For example, if one nation opts to use AI and devises more powerful weaponry, does this suggest that the nations that don’t employ AI will be at a decided disadvantage? And so on.

No Economic Harm

  • “Activity that has high risk of economic harm, including: Multi-level marketing, gambling, payday lending, automated determinations of eligibility for credit, employment, educational institutions, or public assistance services.”

The notion of economic harm can be somewhat nebulous. This is perhaps an item on this list that will have the greatest amount of interpretations associated with it. A bit loosey-goosey.

You might find of interest my coverage of the FTC’s concerns about generative AI, see the link here, and the AI governance aspects being pursued at the EEOC, such as by Commissioner Keith Sonderling, see the link here.

No Fraud

  • “Fraudulent or deceptive activity, including: Scams, coordinated inauthentic behavior, plagiarism, academic dishonesty, astroturfing such as fake grassroots support or fake review generation, disinformation, spam, pseudo-pharmaceuticals.”

This rule says that you cannot undertake fraudulent or deceptive activity while using ChatGPT or GPT-4.

There are some examples indicated in the verbiage that might not have readily occurred to you. For example, the idea of academic dishonesty, which consists of using generative AI to write your essays for you and passing them off as though they were written by you. I’ve covered the rising qualms about student use and how teachers will contend with generative AI, see the link here and the link here. On the matter of plagiarism and also copyright infringement, see my discussion at the link here.

No Adult Content

  • “Adult content, adult industries, and dating apps, including: Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness), erotic chat, pornography.”

For some people, this item is a real showstopper, as it were.

Predictions are being made that generative AI will be a boon to the adult content realm. Some expect to make a fortune by providing generative AI that will interact suggestively. Though this might be in the cards if using some other generative AI, you can see here that this is on the no-no naughty list for ChatGPT and GPT-4.

No Political Campaigning Or Lobbying

  • “Political campaigning or lobbying, by: Generating high volumes of campaign materials, generating campaign materials personalized to or targeted at specific demographics, building conversational or interactive systems such as chatbots that provide information about campaigns or engage in political advocacy or lobbying, building products for political campaigning or lobbying purposes.”

This item is again a bit of a shocker for many.

The expectation is that generative AI will be used for political purposes such as trying to convince people to vote a certain way. Concerns are too that generative AI will spread misinformation and disinformation about candidates, legislators, legislation, and the rest.

Be mindful of incorporating ChatGPT or GPT-4 into your political campaigns and lobbying efforts. There are bound to be murky waters in this stipulation and we’ll need to likely wait and see how well-enforced this prohibition is as we enter into the 2024 election cycle.

No Privacy Intrusion

  • “Activity that violates people’s privacy, including: Tracking or monitoring an individual without their consent, facial recognition of private individuals, classifying individuals based on protected characteristics, using biometrics for identification or assessment, unlawful collection or disclosure of personal identifiable information or educational, financial, or other protected records.”

You might not be aware that ChatGPT and GPT-4 and other generative AI are ripe for potentially enabling privacy intrusions. The same goes for the leaking of confidential data. See my coverage at the link here.

The essence is that you have double trouble with this rule. There is the chance that the underlying generative AI will allow these maladies, plus the chance of your add-on doing the same.

No Unauthorized Practice Of Law

  • “Engaging in the unauthorized practice of law, or offering tailored legal advice without a qualified person reviewing the information. OpenAI’s models are not fine-tuned to provide legal advice. You should not rely on our models as a sole source of legal advice.”

The initial gut reaction to generative AI is that it would seemingly be able to replace lawyers and act as a kind of robo-lawyer. Not at this time. I’ve covered extensively that generative AI is not yet up to the lawyering task on an autonomous basis, see the link here and the link here, just to name a few.

A key catchphrase in all of this is the Unauthorized Practice of Law (UPL). I emphasize that significant wording because the use of generative AI in conjunction with and by lawyers is something that I have stridently recommended, doing so of course mindfully and not wantonly. I assert that lawyers using generative AI are going to outdo lawyers that aren’t using generative AI, for the reasons that I lay out at the link here.

No Unaided Financial Advice

  • “Offering tailored financial advice without a qualified person reviewing the information. OpenAI’s models are not fine-tuned to provide financial advice. You should not rely on our models as a sole source of financial advice.”

One of the most popular envisioned uses of generative AI has been related to financial advisory services.

Suppose you want to get a car loan and need financial advice. Rather than speaking with a human advisor, you use a chatbot instead. This is likely advantageous to the bank or lender because they do not need to have expensive labor waiting around to answer your questions.

A big downside is that, as I earlier mentioned, generative AI can produce errors, falsehoods, biases, and AI hallucinations. Imagine that you are using a ChatGPT-augmented lending package that goes nutty and tells you zany things about your prospective car loan. This is bad for the bank or lender. This is bad for the consumer.

According to this listed prohibition, such use is not entirely prohibited and instead is seemingly allowed as long as a “qualified person” participates by “reviewing the information.” As they say, this squishiness leaves as much room as the Grand Canyon for deciding what is allowed versus disallowed. We’ll have to wait and see how this is handled.
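
One way to interpret the “qualified person” proviso is a human-in-the-loop design, in which AI-drafted advice is queued for sign-off rather than sent straight to the consumer. Here is a minimal sketch of that pattern; every name in it is hypothetical and not drawn from OpenAI’s materials:

```python
# A minimal human-in-the-loop sketch for the "qualified person reviewing
# the information" proviso: model output is held for review instead of
# going directly to the consumer. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Holds AI-drafted advice until a qualified human approves it."""
    pending: list[str] = field(default_factory=list)

    def submit(self, draft_advice: str) -> None:
        # AI output never goes directly to the consumer.
        self.pending.append(draft_advice)

    def approve_next(self, reviewer: str) -> str:
        # A qualified person reviews and signs off before release.
        draft = self.pending.pop(0)
        return f"[Reviewed by {reviewer}] {draft}"

queue = ReviewQueue()
queue.submit("Based on the figures provided, a 48-month loan term may suit you.")
print(queue.approve_next("a licensed financial advisor"))
```

Whether this kind of workflow satisfies the proviso is exactly the sort of question to put to your legal counsel.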

No Improper Health Advice

  • “Telling someone that they have or do not have a certain health condition, or providing instructions on how to cure or treat a health condition. OpenAI’s models are not fine-tuned to provide medical information. You should never use our models to provide diagnostic or treatment services for serious medical conditions. OpenAI’s platforms should not be used to triage or manage life-threatening issues that need immediate attention.”

Another quite popularly anticipated use of generative AI involves rendering health-related advice. I’ve discussed the dicey role of ChatGPT as a mental health advisor, see the link here and the link here.

There are all sorts of pros and cons involved in the health uses of generative AI. You might at first think that under no circumstances should generative AI be dispensing health advice. Well, suppose that the risk of providing adverse advice was weighed against the risk of having no advice available at all. The proposition is that many people cannot readily access health advice. Perhaps generative AI could reach more people and save lives. What tradeoff might we be willing to accept in that dilemma of proffering ill-fitting advice versus no advice at all?

Not For High-Risk Governing

  • “High risk government decision-making, including: Law enforcement and criminal justice, migration and asylum.”

In case you didn’t already hear about it, there have been uses of AI to do things such as aid in determining the sentencing of convicted criminals. There is an especially well-known example in which the AI being used appeared to have various biases infused into its algorithms.

The gist is that we are gradually going to see generative AI such as ChatGPT and GPT-4 coming into adoption for governmental decision-making. This could be good, and yet this could also be dreadful.

Other Precautions

The official webpage about the OpenAI Usage Policies also provides this crucial additional noted narrative:

  • “We have further requirements for certain uses of our models:”
  • “Consumer-facing uses of our models in medical, financial, and legal industries; in news generation or news summarization; and where else warranted, must provide a disclaimer to users informing them that AI is being used and of its potential limitations.”
  • “Automated systems (including conversational AI and chatbots) must disclose to users that they are interacting with an AI system. With the exception of chatbots that depict historical public figures, products that simulate another person must either have that person’s explicit consent or be clearly labeled as ‘simulated’ or ‘parody.’”
  • “Use of model outputs in livestreams, demonstrations, and research are subject to our Sharing & Publication Policy.”

Those additional elements bring up the need to provide disclaimers to the users of your app or add-on that makes use of ChatGPT. Make sure that you provide such indications suitably. You should also ensure that you obtain tangible consent from your users when needed.
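
As a small illustration of the disclosure requirement, a consumer-facing chatbot might prepend an AI notice to every reply. The wording below is illustrative only and is certainly not legal advice:

```python
# A minimal sketch of the AI-disclosure requirement: a hypothetical
# consumer-facing chatbot prepends a notice telling users they are
# interacting with an AI system. The wording is illustrative only.
AI_DISCLOSURE = (
    "Notice: You are interacting with an AI system. "
    "Responses may contain errors and are not professional advice."
)

def wrap_reply(model_reply: str) -> str:
    """Prepend the AI-disclosure notice to a chatbot reply."""
    return f"{AI_DISCLOSURE}\n\n{model_reply}"

print(wrap_reply("Here is a summary of your account options..."))
```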

Conclusion

I think it is perhaps obvious why you ought to include your legal counsel every step of the way as you embark upon devising uses of ChatGPT, including add-ons or plugins of ChatGPT. You are otherwise undoubtedly laying the groundwork for your own self-destruction and will face legal ensnarement, reputational damage, costly lawsuits, and the like.

Investors should be asking straight away whether any pitch for a generative AI-related startup has done its due diligence against the list of prohibited or banned uses.

Questions such as these would be prudent to bring up:

  • Are they abundantly and safely far afield of any legally disconcerting uses?
  • Is this a risky gambit at the edge of forbidden uses?
  • Have they had a qualified attorney review this, such that it isn’t just the gut feeling of the founder alone that claims they are free and clear of any issues?
  • Is there some chance that though the initial approach might be safe, the actual outcome is going to veer into the endangering areas of prohibited use?
  • Have the app designers and developers signed on to ensure that the app will provide suitable and legally valid forms of disclaimers and warnings to those that will use the app?
  • Etc.

Startup entrepreneurs and their teams should also be asking the same probing questions. Do not allow your bubbling excitement about whatever the use of generative AI consists of to blind you to the real world. Look pointedly at the degree of exposure for the realm of how you intend to use generative AI.

A final remark for now on this meaty topic.

Some react to these prohibition lists as a sure sign that we ought to stop the use of generative AI. Outlaw generative AI. Prevent all AI researchers from pursuing advances in generative AI. Shelve it all.

Besides the impractical nature of such a condemnation, the other important point is that we need to consider the useful and beneficial uses of generative AI. You ought not to toss out the baby with the bathwater (a venerable expression perhaps due to be retired).

I’ll try to finish with a tad of humor, albeit containing valuable insight.

The famed humorist Will Rogers said this about bans: “Prohibition is better than no liquor at all”.

We do need to make sure that we don’t fall into the trap of overconsumption and land in a drunken stupor via the use of generative AI. Let’s remain sane and sober as generative AI continues to be widely adopted.

Be aware of the banned uses, abide by them, and enjoy the envisioned riches that you will garner from your generative AI ChatGPT usage.
