
Samsung learned this lesson the hard way on at least three separate occasions, The Economist Korea recently found. Three different employees submitted code, a recording of an internal meeting, and still more confidential source code to the chatbot, exposing that data to the program.

Needless to say, confidential data should stay confidential.

Some governments are taking a strong stance against the ways in which ChatGPT collects user data. Germany has said it may block ChatGPT over data security and privacy concerns, in the wake of Italy doing just that.

Double-Check Sources

ChatGPT’s output is a bigger concern than its input. Because the tool generates its responses on its own, there’s no vetting process for what it decides to tell you.

The same is true for similar generative chat-based AI programs, like Google’s Bard. For one recent report, researchers submitted 100 prompts to Bard, all asking it to explain certain types of online misinformation. In 76 of the 100 cases, Bard generated fake content on those topics.

Double-check all sources the chatbot cites, because sometimes it simply makes them up.
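
If a citation includes a URL, one quick first pass is to confirm that the link even resolves. Here’s a minimal Python sketch of that check using the requests library; the URLs below are placeholders rather than real citations.

```python
# Minimal sketch: confirm that URLs a chatbot cited actually resolve.
# The URLs below are placeholders, not real citations.
import requests

cited_urls = [
    "https://example.com/real-article",
    "https://example.com/possibly-invented-source",
]

for url in cited_urls:
    try:
        response = requests.head(url, allow_redirects=True, timeout=10)
        status = "reachable" if response.ok else f"HTTP {response.status_code}"
    except requests.RequestException as exc:
        status = f"unreachable ({exc.__class__.__name__})"
    print(f"{url}: {status}")
```

A dead link is a strong hint that the source was invented, but a live one proves nothing on its own: you still have to read the page and confirm it supports the claim.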

Check Math and Formulas

The hard sciences aren’t exempt from generative AI’s habit of fibbing. In fact, even a standard calculator handles arithmetic more reliably. That’s because this type of program is built on a large language model, so it “thinks” and speaks in natural language rather than in mathematical formulas. More often than not, ChatGPT will give a natural-language response that’s clear, confident, and incorrect.

AI might well be able to handle the complexities of math in the future, but for now, don’t rely on it to handle your algebra. At the very least, take every solution it delivers with a grain of salt.
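
One cheap safeguard is to substitute the chatbot’s answer back into the original problem before trusting it. The Python sketch below shows the habit; the equation and the claimed answers are invented for illustration.

```python
# Minimal sketch: verify a chatbot's algebra answer by plugging it back in.
# The equation and the claimed answers are made up for illustration.

def check_solution(f, claimed_x, expected, tolerance=1e-9):
    """Return True if f(claimed_x) matches the expected value."""
    return abs(f(claimed_x) - expected) < tolerance

# Suppose the chatbot claims x = 3 solves 2x + 4 = 12.
lhs = lambda x: 2 * x + 4
print(check_solution(lhs, 3, 12))  # False: the correct answer is x = 4
print(check_solution(lhs, 4, 12))  # True
```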

Be Wary of Copyrighted Material

Any machine learning program operates by drawing on a body of existing data. In some cases, a chatbot might pull an entire sentence from a source. If that source was published after 1927, it may still be under copyright in the United States. If your chatbot isn’t digesting its information properly, the final result can easily violate US copyright law.

Run your results through an online plagiarism checker to cut down on the odds that you’re violating copyright.
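
If you’d rather not paste sensitive text into an online checker, the core idea is simple enough to sketch yourself: look for long word sequences that appear verbatim in a known source. The toy Python example below compares six-word phrases between two short strings, both stand-ins for real text; an actual plagiarism checker runs the same comparison against an enormous corpus.

```python
# Toy sketch: flag verbatim n-gram overlap between chatbot output and a
# known source. Both strings are stand-ins for real text.

def ngrams(text, n=6):
    """Return the set of all n-word sequences in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

chatbot_output = "it was the best of times it was the worst of times for chatbots"
known_source = "it was the best of times it was the worst of times"

shared = ngrams(chatbot_output) & ngrams(known_source)
if shared:
    print(f"{len(shared)} six-word phrases appear verbatim; review before publishing")
```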

Learn Which Categories of Data ChatGPT Can’t Handle

ChatGPT can’t do everything.

Take ASCII art, for example. A human can easily tell what image a series of computer-text symbols represents when they’re arranged in the right shape. However, ChatGPT regularly gets this category of art wrong, whether by reproducing gibberish ASCII art or by falsely claiming that a depiction of the cartoon character Shrek is actually the Mona Lisa.

In many cases, ChatGPT can’t handle a complex version of a task, even if it completes the simpler version just fine. Coding is one example: ChatGPT can change the color of a website, but it might not be able to figure out which color scheme will make sense to the human eye.

If you can put in a little trial-and-error time to work out what ChatGPT does best, you’ll have a handy tool. Ask too much of it, or skip the checks on data, sources, math, and plagiarism, and you’ll just be worse off than before.
