- Sullivan & Cromwell filed AI-generated “hallucinations” including false citations and fabricated precedents in a bankruptcy case.
- The top law firm admitted its comprehensive AI policies were not followed and its secondary review failed to catch the errors.
- Andrew Dietderich, co-head of the firm’s global restructuring group, apologized to federal judge Martin Glenn on Saturday after rival firm Boies Schiller Flexner discovered the mistakes.
Sullivan & Cromwell, the 900-lawyer Wall Street powerhouse, discovered that its own AI tools had quietly invented legal precedents. The firm’s filing supporting creditor groups in the Prince Group bankruptcy case contained fabricated citations and misquoted the US bankruptcy code.
The errors emerged during a routine document exchange with rival firm Boies Schiller Flexner. BSF’s lawyers spotted the hallucinations and flagged them. The incident exposes a gap between AI policy documents and actual practice inside elite legal shops.
“We deeply regret that this has occurred,” said Andrew Dietderich, co-head of Sullivan & Cromwell’s global restructuring group. “I apologise on behalf of our entire team. I also called BSF on Friday to thank them for catching this.”
Sullivan & Cromwell AI Hallucinations Expose Law Firm Risk
The firm maintains what it calls “comprehensive policies and training requirements governing the use of AI tools in legal work.” Those policies explicitly require human verification of AI-generated citations. They were ignored. The secondary review process also “did not identify the inaccurate citations,” according to Dietderich’s Saturday letter to Judge Glenn.
Lawyers are not prohibited from using AI in legal research, but they are ethically bound to verify its accuracy. The profession operates under rules requiring candor to the tribunal and competence in representation. Hallucinations, confident outputs from language models that sound authoritative but are entirely fabricated, directly threaten both obligations.
The Prince Group bankruptcy involves a sprawling scam network allegedly run by Cambodian businessman Suwen Chen. US prosecutors accused Chen of directing forced-labor scam operations and filed a legal action to seize nearly $9 billion in bitcoin allegedly connected to the scheme. Chen was arrested in Cambodia earlier this year and extradited to China. Sullivan & Cromwell represents creditor groups in the bankruptcy proceedings.
Sullivan & Cromwell submitted a corrected filing after discovering the errors. The firm did not identify which AI program generated the hallucinations or name the lawyers responsible. No disciplinary action has been announced. Judge Glenn has not indicated whether sanctions will be imposed.
The incident has prompted fresh scrutiny of AI safeguards in professional settings. Legal tech vendors have marketed AI research tools as productivity boosters; the reality involves tradeoffs between speed and accuracy that many firms are only now confronting. That Boies Schiller Flexner, rather than Sullivan & Cromwell’s own safeguards, caught the errors raises the question of whether peer review has become the primary quality-control mechanism.
Sullivan & Cromwell’s letter called the episode a reminder that “vigilance is required when using AI tools.” The firm said it would implement “additional safeguards” but did not specify what they will involve. The lapse follows a string of similar incidents across the legal profession: federal judges in multiple districts have issued standing orders requiring disclosure of AI use in filings, and some have banned AI-generated briefs entirely.
The Prince Group bankruptcy continues in the Southern District of New York. The corrected filings are now part of the record.
