About Citational

The Belief

Law is a technology. Not in the Silicon Valley sense, not an app, not a platform, not a disruption. Law is one of the oldest technologies humans have: a system of structured reasoning developed over centuries to allocate rights, resolve disputes, and hold power accountable.

We believe artificial intelligence can make law’s daily application more uniform, more accessible, and more faithful to its own principles. But only if these systems are built on foundations that take accuracy as seriously as the discipline itself demands.

The gap between a confident answer and a correct one is where reputations are lost and cases collapse. Closing that gap is the work.


The Problem

Here is a fact about legal AI that should trouble anyone paying attention: the systems most widely trusted are often the least rigorously verified.

In November 2025, Thomson Reuters demonstrated Westlaw’s new “Deep Research” AI in its launch video. In the demo, the system was asked a straightforward question of evidence law: can a laboratory director’s opinion be admitted as lay testimony under Federal Rule of Evidence 701?

Westlaw answered confidently. No, it said: the 9th Circuit had established that such testimony is inadmissible.

The answer was wrong.

Westlaw had cited the correct case, United States v. Elizabeth Holmes. It had found the correct section of the judgment. But it fundamentally misunderstood the court’s actual holding. This wasn’t a hallucination; it was a category error. Westlaw mistook a specific factual finding—that certain directors in this case gave expert testimony—for a universal legal rule. The actual holding was the opposite: the 9th Circuit affirmed that a lab director may testify as a lay witness when the opinion is based on personal perception rather than specialized knowledge.

This is not a trivial error. Thomson Reuters is a multi-billion-dollar legal research behemoth, and its AI will be trusted by people simply because of the brand. Yet that AI could not distinguish between a case-specific fact and a governing principle, between what happened in a case and what the court actually decided.

When legal journalist Rob Freund first saw our system’s contradictory answer, he assumed we had it wrong. “In this example though, it looks like Westlaw got it right and Citational got it wrong?” he wrote. He changed his position only after reading the entire Holmes judgment himself.

Think about that. A professional whose job is reading legal documents was initially fooled by Westlaw’s confident summary. If he can be misled, what happens when this technology reaches associates working under deadline pressure? What happens when it’s integrated into client-facing workflows?

The citation was real. The case was real. The paragraph existed. And the answer was still wrong in the way that matters most.


The Work

We are an AI research and development lab focused on law.

We started with a simple premise: what would it take to build systems that deserve to be trusted? Not systems that seem reliable. Systems that actually are.

This led us to verification infrastructure: methods for checking not just whether citations exist, but whether they mean what a system claims they mean. Whether the authority is still good law. Whether the reasoning has been applied correctly.

It led us to develop a specialized model trained on judicial reasoning across common-law jurisdictions: a system that learns the difference between dictum and holding, between a court’s analysis and a party’s brief, that encodes judicial psychology.

It led us to research on advanced adversarial methods: approaches that stress-test arguments before they reach a courtroom and find weaknesses before opposing counsel does.

Some of this work is visible. Much of it is not yet ready to be.

[Learn about our project and platform, Motion Validator →]

[Learn about our experimental judicial-reasoning model, Shinshō-27B →]


The Lab

We are a small team split between New York and London. Our backgrounds span litigation, legal technology, and systems architecture. We have advised and built for some of the most demanding legal institutions in both jurisdictions.

We are not trying to replace lawyers. We are trying to give them tools that don’t lie to them.


The Conversation

If you’re thinking carefully about what AI means for the law or legal practice — not as hype, but as reality — we’d welcome a conversation.

This is not an invitation to a sales call. It is an invitation to a conversation about what legal AI should become.

contact@citation.al