
5 Ways to Stay Smart When Using Gen AI, Explained by Computer Science Professors

There is an old saying in journalism: if your mother tells you she loves you, check it out. The point is that you should be skeptical of even the most trustworthy sources. But what if the source isn't your mother but a generative AI model like OpenAI's ChatGPT? Should you trust what your computer tells you?

The key takeaway from a talk delivered by a pair of Carnegie Mellon University computer scientists at South by Southwest this week? No. Check it out.

Artificial intelligence has drawn plenty of attention at this week's conference in Austin, Texas, where experts have discussed the big picture: the technology's future, how it may change the workplace, and more. CMU Assistant Professors Sherry Wu and Maarten Sap focused instead on the here and now, offering tips on how best to use, rather than misuse, the most common generative AI tools, such as chatbots built on large language models.

“They're actually far from perfect, and they aren't actually suitable for all the use cases people want to use them for,” Sap said.

Here are five tips on how to be smart about using AI.

Know what you want

Anyone who has tried to joke on social media sites like X or Bluesky will tell you how difficult it is to convey irony in text. Posters on those sites (the human ones, at least) know when you are not being literal. LLMs do not.

Sap said today's LLMs interpret language literally more than half the time, and they struggle with social reasoning.

Wu said the solution is to make your prompts more specific and structured. Make sure the model knows what you want it to produce. Focus on what you want, and don't assume the LLM will infer your actual intent; spell it out, as in the sketch below.
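As an illustration of that advice, here is a minimal sketch contrasting a vague prompt with a specific, structured one. It assumes the official openai Python package and an example model name; the pattern matters more than the particular API.

```python
# A minimal sketch of the "be specific and structured" advice, using the
# openai Python SDK (an assumption; any chat API follows the same pattern).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Vague: leaves the model to guess the audience, tone, length, and format.
vague_prompt = "Write something about our product launch."

# Specific and structured: states the task, audience, constraints, and output.
structured_prompt = (
    "Write a product-launch announcement.\n"
    "Audience: existing customers.\n"
    "Tone: friendly; no sarcasm or irony.\n"
    "Length: three short paragraphs.\n"
    "Output format: plain text, no headings."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; substitute whichever you use
    messages=[{"role": "user", "content": structured_prompt}],
)
print(response.choices[0].message.content)
```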

The bot is confident, but not accurate

Perhaps the biggest problem with generative AI tools is their tendency to hallucinate, meaning they make things up. Sap said hallucinations can occur up to a quarter of the time, with higher rates in more specialized fields such as law and medicine.

The problem is not just that the chatbot gets things wrong. Sap said it sounds confident in its answer even while being completely wrong.

“This makes it easy for humans to rely on these confident expressions,” he said.

The solution is simple: check the LLM's answers. Wu said you can test a model's self-consistency by asking it the same question multiple times, or by rephrasing the same question, and comparing the outputs; a sketch of this check follows below. You may see different answers each time. “Sometimes, you'll find that the model doesn't really know what it's talking about,” she said.
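Here is a minimal sketch of that self-consistency check. The ask() helper is a hypothetical stand-in for whatever chat API you use; the point is the repeated querying and the tally of distinct answers.

```python
# A minimal sketch of a self-consistency check: ask the same question several
# times and see whether the answers agree. ask() is a hypothetical stand-in.
from collections import Counter

def ask(question: str) -> str:
    """Hypothetical wrapper around your LLM of choice; returns its answer."""
    raise NotImplementedError("wire this up to your chat API")

def self_consistency(question: str, n_trials: int = 5) -> Counter:
    """Ask the same question n_trials times and tally the distinct answers."""
    answers = [ask(question).strip().lower() for _ in range(n_trials)]
    return Counter(answers)

# Usage: widely varying answers suggest the model is guessing.
# tally = self_consistency("What year was Carnegie Mellon University founded?")
# if len(tally) > 1:
#     print("Inconsistent answers - verify externally:", tally)
```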

Most important of all, verify answers against external sources (one such check is sketched below). That also means being careful about asking questions you don't know the answer to. Wu said generative AI's answers are most useful on topics you are already familiar with, because then you can tell what is real and what is not.

“Make a conscious decision about when to rely on the model and when not to,” she said. “Don't believe it just because the model tells you it's very confident.”
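As one illustration of checking against an external source, the sketch below pulls a human-curated reference (Wikipedia's public page-summary endpoint) so you can read it next to the model's answer. It is illustrative only, not an automated fact-checker.

```python
# A minimal sketch of external verification: fetch a human-curated summary
# to read side by side with the model's answer. Illustrative only.
import requests

def wikipedia_summary(title: str) -> str:
    """Fetch the lead summary of a Wikipedia article via its public REST API."""
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.json().get("extract", "")

model_answer = "..."  # whatever the LLM told you
reference = wikipedia_summary("Carnegie_Mellon_University")
print("MODEL:", model_answer)
print("REFERENCE:", reference)  # read both; trust neither blindly
```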

Artificial intelligence cannot keep secrets

LLMs come with a wealth of privacy issues. Not only can information you feed a model end up somewhere you never wanted it on the internet; the model may also repeat it to anyone who asks. In a demonstration with OpenAI's ChatGPT, a model asked to help organize a surprise party revealed the plan to the very person who was supposed to be surprised, Sap said.

“LLMs are not good at reasoning about who should know what and when, or about which information should stay private,” he said.

Wu's advice: do not share sensitive or personal data with an LLM.

“Whenever you share anything with the model, be sure to double-check whether it contains anything you don't want to release to the LLM,” she said.
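One way to act on that advice is to screen a prompt for obvious personal data before it leaves your machine. The sketch below is illustrative only, with two hypothetical regex patterns; real PII detection requires far more than this.

```python
# A minimal sketch of a pre-send privacy check: flag obvious personal data
# (emails, phone-like numbers) in a prompt so a human can review it first.
# Illustrative only - two regexes are nowhere near real PII detection.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\+?\d[\s-]?){7,15}\b"),
}

def flag_pii(prompt: str) -> dict:
    """Return matches per category so a human can review before sending."""
    hits = {}
    for name, pattern in PII_PATTERNS.items():
        found = pattern.findall(prompt)
        if found:
            hits[name] = found
    return hits

prompt = "Plan a surprise party for dana@example.com; call 412-555-0100."
flags = flag_pii(prompt)
if flags:
    print("Review before sending to the LLM:", flags)
```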

Remember, you are talking to a machine

Chatbots are captivating in part because they imitate human speech. But it is all imitation; Sap says there is no real human there. Models say things like “I wonder” and “I imagine” because they are trained on language that includes those words, not because they have an imagination. “The way we use language, these words all imply cognition,” Sap said. “They imply that the language model imagines things, that it has an internal world.”

Thinking of AI models as human can be dangerous and can lead to misplaced trust. Sap said LLMs don't operate the way humans do, and treating them as human can reinforce social stereotypes.

“Humans are prone to over-attributing humanlike qualities to artificial intelligence systems,” he said.

Using an LLM may not make sense

Despite claims that LLMs are capable of high-level research and reasoning, they have not performed well at it, Sap said. It is often suggested that models benchmark at the level of a human with a PhD. But benchmarks are just benchmarks: the tests behind those claims don't mean a model can work at that level in the setting where you plan to use it.

“People's fantasies about the robustness of AI capabilities lead them to make rash decisions in their businesses,” he said.

Wu said that when deciding whether to use a generative AI model for a task, consider the benefits and potential harms of using it, as well as the benefits and potential harms of not using it.
