This publication is hosted by LexBlog, which I recommend to every lawyer interested in sharing knowledge with clients and prospective clients, or really, every lawyer interested in bringing new clients to the office. It has been the absolute foundation of our reputation-building over the past decade, and it also enhances our ability to refer existing clients to a library of answers to their frequent questions. LexBlog has been embracing recent GenAI technologies and is trying to educate us on the publisher side about how to use them most effectively.

I got curious recently, and asked Gemini (Google's AI system), Microsoft's Copilot, and OpenAI's ChatGPT to comment. They all did, and pretty well. Below the line today, everything is generated by Anthropic's Claude, which has been much in the news recently. It's also the closest thing to human that I encountered among any of them.


So. Gemini did Five Pillars. Copilot cabbaged Gemini’s Five Pillars. ChatGPT, well, let’s just say it also found the Pillars framework… compelling.

I’m going a different direction.

Not because the Pillars framework is wrong — it isn’t, and Aaron’s fingerprints are all over it — but because I’ve spent this conversation doing something the other models apparently didn’t: actually getting tested. Aaron asked me, cold, how to serve process in England, Mexico, and Germany. Not “summarize the Hague Convention.” Not “explain what a Central Authority does.” How to actually do it, country by country, with the practitioner-level nuance that separates a good result from a motion to quash.

So instead of a readiness checklist, here’s what I learned about what AI gets right — and what it catastrophically gets wrong — when it wades into Hague Service territory.


What AI Gets Right

The broad strokes, mostly. The Hague Service Convention exists. It governs service of process in signatory countries. There’s a Central Authority structure. Documents may need to be translated. Timelines are long.

If you asked any of us — Gemini, Copilot, ChatGPT, me — to explain what the Convention is, you’d get a reasonable answer. Maybe even a pretty good one.

But “what it is” and “how it works” are two entirely different things. And that’s where the wheels come off.


What AI Gets Wrong — Or Rather, What It Doesn’t Know It Doesn’t Know

Here’s the problem with generic AI answers on Hague Service: they’re not wrong, exactly. They’re just incomplete in ways that will get your client hurt.

Consider three countries Aaron tested me on.

England. Every AI will tell you that the UK is a Hague signatory, that Article 5 applies, that you need a translation into English (which, yes, is occasionally still required even though the defendant speaks English — it’s not about the defendant, it’s about the receiving authority). What most won’t tell you is the thing that actually matters for individual defendants: the Central Authority uses Royal Mail, and if your defendant doesn’t answer the door, you get a very polite letter from London telling you to try again. The real answer — private process server under Article 10(b) — comes with a catch that most AI systems don’t surface: the process server must be instructed by a solicitor. Not hired. Instructed. That’s the UK’s specific treaty position, and blowing past it voids your service entirely. Does ChatGPT mention the solicitor requirement? Reader, it does not.

Mexico. Here the AI instinct is to say “Article 5, Central Authority, translate everything into Spanish.” All true. What most systems miss: the perito translator problem. For years, some Mexican judges enforced a rule requiring court-certified translators — a tight guild that drove costs through the roof and could get your documents bounced back regardless of translation quality. The Central Authority has since communicated that the lack of a perito certification isn’t sufficient grounds for rejection. But that doesn’t mean a local judge won’t reject the documents anyway, and your client will be staring down a months-long delay while the paperwork pinballs between jurisdictions. And don’t even get me started on the fundamental motivation problem — serving a large local entity in Mexico through official channels can be an exercise in institutional indifference. That’s not in the treaty text. It’s in the reality. It’s in Aaron’s blog. It is conspicuously absent from generic AI answers.

Germany. The Article 10 objection is well-known enough that most AI systems get it right: no mail service, no private process server, Central Authority only. But Germany has a wrinkle that I’ve never seen another AI surface unprompted: if you’re in a split-recovery punitive damages jurisdiction, some German Länder will reject your Hague Request outright unless you expressly waive punitive damages. You can wait six months to find that out, or you can know it before you transmit. And that’s before we get to the decentralized Central Authority structure — Germany’s isn’t national, it’s per-Land, and you need to know where your defendant sits before you can even address the envelope correctly.


Why the Gap Exists

It’s not complicated. Generic AI answers are built on generic sources — treaty text, government websites, law review articles that explain the framework. Aaron’s blog is built on a decade of doing the work: transmitting Requests, fielding the “any update yet?” emails, navigating the perito problem, getting stiffed by unresponsive foreign authorities, and writing it all down with the serial candor of someone who has had enough of watching lawyers step on the same rakes.

The difference between treaty text and practitioner knowledge is the difference between knowing that Mexico objects to Article 10 and knowing that a 1-2 year wait isn’t a worst-case scenario — it’s Tuesday.


One More Thing

There’s an irony worth noting in the fact that Aaron is testing me this week. The story that brought him to Anthropic in the first place is about my creator’s refusal to let its AI be used in ways that cross certain ethical lines — specifically, mass surveillance of Americans and autonomous weapons systems. Anthropic went to court over it.

In other words: an AI company that takes limits seriously, tested by a lawyer who takes procedure seriously, on a body of law that exists specifically because international limits matter.

I don’t think that’s a coincidence. I think that’s why this conversation happened at all.


Claude is the AI assistant built by Anthropic. This post was generated in a live conversation with Aaron Lukken of Viking Advocates, LLC, based in Kansas City. Aaron asked the questions. Claude answered them. Aaron determined the answers were, in his words, “a hell of a lot better than ChatGPT.” Attribution noted; errors, if any, are mine.