Here's how to use AI at work to avoid hallucinations and mistakes

• 31 October 2025, 8:42 am
8 min read

Judges around the world are dealing with a growing problem: legal briefs that were generated with the help of artificial intelligence (AI) and submitted with errors such as citations to cases that don’t exist, according to attorneys and court documents.

The trend serves as a cautionary tale for people who are learning to use AI tools at work. Many employers want to hire workers who can use the technology to help with tasks such as conducting research and drafting reports. As teachers, accountants and marketing professionals begin engaging with AI chatbots and assistants to generate ideas and improve productivity, they're also discovering the programs can make mistakes.

A French data scientist and lawyer, Damien Charlotin, has catalogued at least 490 court filings in the past six months that contained “hallucinations,” which are AI responses that contain false or misleading information. The pace is accelerating as more people use AI, he said.

“Even the more sophisticated player can have an issue with this,” Charlotin said. “AI can be a boon. It’s wonderful, but also there are these pitfalls.”

Charlotin, a senior research fellow at HEC Paris, a business school near the French capital, created a database to track cases in which a judge ruled that generative AI produced hallucinated content such as fabricated case law and false quotes.

The majority of rulings are from US cases in which plaintiffs represented themselves without an attorney, he said. While most judges issued warnings about the errors, some levied fines.

But even high-profile companies have submitted problematic legal documents. A federal judge in Colorado ruled that a lawyer for MyPillow Inc. filed a brief containing nearly 30 defective citations as part of a defamation case against the company and its founder, Michael Lindell.

The legal profession isn’t the only one wrestling with AI’s foibles. The AI overviews that appear at the top of web search result pages frequently contain errors.

And AI tools also raise privacy concerns. Workers in all industries need to be cautious about the details they upload or put into prompts to ensure they're safeguarding the confidential information of employers and clients.

Legal and workplace experts share their experiences with AI’s mistakes and describe perils to avoid.

Think of AI as an assistant

Don’t trust AI to make big decisions for you. Instead, treat the tool like an intern to whom you assign tasks and whose completed work you check.

“Think about AI as augmenting your workflow,” said Maria Flynn, CEO of Jobs for the Future, a nonprofit focused on workforce development. It can act as an assistant for tasks such as drafting an email or researching a travel itinerary, but don't think of it as a substitute that can do all of the work, she said.

When preparing for a meeting, Flynn experimented with an in-house AI tool, asking it to suggest discussion questions based on an article she shared with the team.

“Some of the questions it proposed weren’t the right context really for our organisation, so I was able to give it some of that feedback ... and it came back with five very thoughtful questions,” she said.

Check for accuracy

Flynn has also found problems in the output of the AI tool, which is still in a pilot stage. She once asked it to compile information on the work her organisation had done in various states. But the AI tool was treating completed work and funding proposals as the same thing.

“In that case, our AI tool was not able to identify the difference between something that had been proposed and something that had been completed,” Flynn said.

Luckily, she had the institutional knowledge to recognise the errors. “If you’re new in an organisation, ask coworkers if the results look accurate to them,” Flynn suggested.

While AI can help with brainstorming, relying on it to provide factual information is risky. Take the time to check the accuracy of what AI generates, even if it's tempting to skip that step.

“People are making an assumption because it sounds so plausible that it’s right, and it’s convenient,” Justin Daniels, an Atlanta-based attorney and shareholder with the law firm Baker Donelson, said. “Having to go back and check all the cites, or when I look at a contract that AI has summarised, I have to go back and read what the contract says, that’s a little inconvenient and time-consuming, but that’s what you have to do. As much as you think the AI can substitute for that, it can’t.”

Be careful with notetakers

It can be tempting to use AI to record and take notes during meetings. Some tools generate useful summaries and outline action steps based on what was said.

But many jurisdictions require the consent of participants prior to recording conversations. Before using AI to take notes, pause and consider whether the conversation should be kept privileged and confidential, said Danielle Kays, a Chicago-based partner at law firm Fisher Phillips.

Consult with colleagues in the legal or human resources departments before deploying a notetaker in high-risk situations such as investigations, performance reviews or legal strategy discussions, she suggested.

“People are claiming that with use of AI there should be various levels of consent, and that is something that is working its way through the courts,” Kays said. “That is an issue that I would say companies should continue to watch as it is litigated.”

Protect confidential information

If you're using free AI tools to draft a memo or marketing campaign, don't feed them identifying information or corporate secrets. Once you've uploaded that information, it's possible that other people using the same tool might find it.

That's because when other people ask an AI tool questions, it will search available information, including details you revealed, as it builds its answer, Flynn said. “It doesn't discern whether something is public or private," she added.

Seek schooling

If your employer doesn't offer AI training, try experimenting with free tools such as ChatGPT or Microsoft Copilot. Some universities and tech companies offer classes that can help you develop your understanding of how AI works and ways it can be useful.

Courses that teach people how to construct effective AI prompts, or hands-on classes that provide opportunities to practise, are valuable, Flynn said.

Despite potential problems with the tools, learning how they work can be beneficial at a time when they're ubiquitous.

“The largest potential pitfall in learning to use AI is not learning to use it at all,” Flynn said. “We’re all going to need to become fluent in AI, and taking the early steps of building your familiarity, your literacy, your comfort with the tool is going to be critically important.”

