A High-Level View of TLA+ (lamport.azurewebsites.net)
87 points by blobcode 5 days ago | 19 comments
tomthecreator 2 days ago [-]
I’m curious about the practical workflow after you’ve written and verified a TLA+ spec. How do you go from the TLA+ proof to actual code? Is there any established process or best practices for translating the spec into implementation while ensuring you’re not introducing bugs during that translation? And once you’ve written the code, how do you maintain the connection between spec and implementation as things evolve?
pron 2 days ago [-]
You can think about a TLA+ spec as the level of design between the "natural language design" you have in your head or on paper and the code. It makes it easier to go from the idea to the code, and allows you to explore ideas, at any level of abstraction, with full rigour.

The question of how you maintain the spec as the code changes could be the same as how you maintain the natural language spec as the code changes. Sometimes you just don't, and you may want to only when there is a very substantial change that you'd also like to explore with rigour.

However, there are several approaches for creating a more formal (i.e. mechanical) connection between the code and the TLA+ spec, such as generating tests from the spec or checking production logs against it. But I would say that you already get so much benefit even without such a mechanical connection that having one isn't needed; it can be the cherry on top in situations that really require it.

I think that the greatest hurdle in getting the most out of TLA+ is internalising that the spec isn't code, and isn't supposed to be. For example, if you're making use of some hashing function or a sorting function and the subject of your design isn't the hashing or sorting algorithm itself, in TLA+ you don't want to write a hashing/sorting spec or reuse one from a library; rather, you'd write something like "assume there exists a hashing/sorting function".

That's why you may end up writing different specs for the same application, each focusing on a particular aspect of the system at the appropriate level of detail for that aspect. A single line of TLA+ spec could correspond to anywhere between 1 and 100K lines of code. The use is more similar to how a physicist would describe a system than to code, which is (part of) the system. For example, if the physicist is interested only in the orbit of a planet around its star, she may represent it as a point mass; if she's interested in weather patterns on the planet, then there's an entirely different description. It's a map, not the territory.
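As a concrete illustration of the "assume there exists a sorting function" move (module and operator names here are hypothetical, not from the parent comment), a spec can declare an uninterpreted operator and assume only the properties the design actually relies on:

```tla
---------------- MODULE AbstractSort ----------------
EXTENDS Naturals, Sequences

\* The sort itself is not the subject of this spec, so instead of
\* specifying a sorting algorithm we declare an operator and assume
\* just the properties the rest of the design depends on.
CONSTANT SortOf(_)

ASSUME \A s \in Seq(Nat) :
          /\ Len(SortOf(s)) = Len(s)
          /\ \A i \in 1..(Len(SortOf(s)) - 1) :
                SortOf(s)[i] <= SortOf(s)[i + 1]
======================================================
```

(TLC can't evaluate an assumption quantified over the infinite set Seq(Nat); for model checking you'd substitute a concrete operator and a finite domain in the model.)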

SchwKatze 2 days ago [-]
I've never done that before, but this recent post from the MongoDB team has an interesting view on this.

https://www.mongodb.com/blog/post/engineering/conformance-ch...

romac 2 days ago [-]
You can also go the other way around and generate traces [1] from the TLA+ or Quint [2] specs using the Apalache model checker [3], map each action in the trace to an action on your system under test, and check that the abstract and concrete states match at each step.

[1] https://apalache-mc.org/docs/adr/015adr-trace.html

[2] https://quint-lang.org

[3] https://apalache-mc.org
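For a sense of what that looks like, here is a toy spec (all names illustrative, not from the linked ADR) of the kind such traces are generated from: each `Inc` or `Reset` action appearing in a generated trace would be mapped to the corresponding operation on the system under test, and the abstract `count` compared against the concrete state after every step.

```tla
------------- MODULE Counter -------------
EXTENDS Naturals

VARIABLE count

\* A counter with two actions; a model checker can enumerate or
\* sample runs of Next and emit them as traces.
Init  == count = 0
Inc   == count' = count + 1
Reset == count' = 0
Next  == Inc \/ Reset
==========================================
```

(Apalache additionally requires type annotations on the variables for its type checker.)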

hwayne 2 days ago [-]
Right now the best practice is generating test suites from the TLA+ spec, though it's bespoke to each company that does it and there are no production-ready universal tools for it. LLMs help.
mjb 2 days ago [-]
Runtime monitoring of system behavior against the spec is one way to close the gap. We wrote about our experiences with one tool for that (PObserve) here: https://cacm.acm.org/practice/systems-correctness-practices-...
ams92 2 days ago [-]
I’m wondering the same. I’ve read multiple articles about formal methods and how they’ve been used to find obscure bugs in distributed systems, but usually they just show a proof and talk about formal methods without concrete examples.
Pet_Ant 2 days ago [-]
I've thought that one thing that could help was annotations in your code that index into the spec. That way you at least have a one-way binding. An "if you're changing this, you should at least review that" kind of thing.
threatofrain 2 days ago [-]
How is that different from many algos whose proof of performance characteristics is purely in math?
romac 2 days ago [-]
If you are interested in TLA+, you might want to check out Quint (https://quint-lang.org), a modern take on a specification language which shares the same underlying temporal logic of actions, but with a syntax more familiar to programmers.
senkora 2 days ago [-]
Also see the TLA+ video course from the same website: https://lamport.azurewebsites.net/video/videos.html

Leslie Lamport's a funny guy, and it really comes across in the video series. I think it's a great way to get started with TLA+.

saigovardhan 2 days ago [-]
He does have a great collection of hoodies :)
Cyphase 2 days ago [-]
I've had multiple occasions to link to this video recently, and here's another.

https://www.youtube.com/watch?v=tsSDvflzJbc

It's Leslie Lamport giving the closing keynote at SCaLE a few months ago. Its title was "Coding Isn't Programming".

I really enjoyed it. And as someone mentioned in another comment, he's got a fun sense of humor. I was there in person and got to meet him and exchange a few words after, which was cool.

Watch the first 2m10s and see if it hooks you.

lenkite 2 days ago [-]
Asking from ignorance: Can newer languages like Lean do the job of model checkers like TLA+? There are so many formal model checkers/languages that it is difficult to know which one is the "best".
igornotarobot 2 days ago [-]
It should be possible to write protocol specifications in Lean, e.g., this is a recent case study on specifying two-phase commit in Lean [1] and proving its safety [2].

However, there are no model checkers for Lean. Currently, you either have to write a full proof by hand, with some assistance from LLMs, or rely on random simulation, similar to property-based testing.

[1] https://protocols-made-fun.com/lean/2025/04/25/lean-two-phas...

[2] https://protocols-made-fun.com/lean/2025/05/10/lean-two-phas...

mjb 2 days ago [-]
Doing distributed systems work in Lean is possible, but right now is much harder than something like TLA+ or P. It's possible that a richer library of systems primitives in Lean ('mathlib for systems') could make it easier. Lean is a very useful tool, but right now isn't where I'd start for systems work (unless I was doing something specific, like trying to formalize FLP for a paper).
saigovardhan 2 days ago [-]
TLA+ has shown good promise in verifying cache-coherence protocols in multicore systems (a good alternative to CMurphi). An intern at Microsoft once helped uncover a bug in the Xbox 360, right before its 2005 launch, by formally verifying its memory coherence protocol.

A few serious RISC-V startups use it today as well.

y-curious 2 days ago [-]
I am very interested in TLA+. I work on an existing application, say, for serving graphs to customers on the frontend. I want to, for example, add a new button to the front end to turn on dark mode.

What I want to know from experienced formal methods folks: how do I go about scoping my new feature in TLA+? Do I have to model my entire system in the language, or just a subset of features? Is my example even a good use case for TLA+?

johnbender 2 days ago [-]
Formal methods like TLA+ provide the highest value when you have a property of the system that is subtle but must hold comprehensively, which is to say you need to know it's true for all the behaviors of the system. (Aside: this is true even if you never model check the system, because modeling is a good forcing function for clarity in understanding the system.)

With that in mind, you don't have to model your whole system as long as you're comfortable with the boundaries as assumptions in any property/theorem you prove about it! For example, unconstrained variables in a TLA+ spec do a reasonable job of modeling an overapproximation of inputs from the outside world, so that's one boundary you could potentially stop at, supposing your proof can go through in that chaotic context.
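A minimal sketch of that boundary trick, using a toy accumulator (all names here are illustrative): the environment variable `input` is re-chosen nondeterministically at every step, so any property checked holds for every possible input sequence.

```tla
------------- MODULE EnvInput -------------
EXTENDS Naturals

VARIABLES input, state

Init == /\ input \in Nat
        /\ state = 0

\* The outside world may supply any value at each step; leaving
\* input' unconstrained beyond its type overapproximates all
\* possible inputs from the environment.
Env == input' \in Nat

Sys == state' = state + input

Next == Env /\ Sys
===========================================
```

(For TLC you'd substitute a finite set such as 0..3 for Nat in the model.)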