How Open-Sourciness Prevents the Ledger Seed Issue

Disclaimer: this isn't intended for a Trezor-only audience, but users here have been asking about it

There have been plenty of threads this week about Ledger's recent “Ledger Recover” feature rollout. One of the many complaints was that the feature was first revealed in a Wired magazine article rather than an official announcement from Ledger themselves. Many of those affected by Ledger's new firmware are now searching for a HW wallet that is 100% leak-proof. That is a VERY hard goal to achieve, so I'll discuss how a device can approach leak-proof, even if it never fully gets there. Here's a basic outline of what a leak-proof HW device would look like.

  1. Fully Opensource and reproducible software stack
  2. Fully Opensource and reproducible hardware stack
  3. Fully disclosed microcontroller specifications
  4. Public, cryptographically signed warrant canary

We'll go through these one at a time.

Opensource and reproducible software stack

The original idea of opensource was the ability to modify software so you could fix it if needed. But in cryptography, opensource is valued for auditability more than anything else. The idea of auditability is that you, or someone like you, can read all the source code and find any code that says:

send_seed(to = "hackerman@hackerland.haha", seed = "all your coins belong to us")

One of the more famous examples of this was the theft of balances from Copay Bitcoin Wallets through the introduction of malicious code into one of the project's public repos. The theft only came to light when a college student saw the code and reported it. Note that the issue was not raised by a customer reporting theft, but rather by a volunteer auditor who was reviewing the code. For those who don't work in technology, the thought of volunteer programming may sound comical, but far more issues are identified by volunteer programmers reading code than are ever raised by users observing the application misbehave.

So, if you have a fully opensource software project that has received sufficient community review, you can build the software and know that you are running audited code. But in most cases, normal users would prefer downloadable binaries over buildable source. To address this, the concept of “reproducible builds” comes into play. A reproducible build is one where two random volunteers, building independently, produce binaries that are byte-for-byte identical to each other. With reproducible builds, auditors can clearly signal the community if their build doesn't match the released binary. If they differ, then the release is thrown into question.
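At its core, the check an auditing volunteer performs is simple. Here's a minimal sketch (file names are hypothetical) of comparing an independently built binary against a vendor release, byte for byte, via SHA-256:

```python
import hashlib

def sha256_file(path: str) -> str:
    """Hash a file in chunks so large binaries don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(build_a: str, build_b: str) -> bool:
    """True only if the two build artifacts are byte-for-byte identical."""
    return sha256_file(build_a) == sha256_file(build_b)
```

If `builds_match("my-build.bin", "vendor-release.bin")` is False, the auditor has a concrete, publishable discrepancy, and the burden shifts to the vendor to explain it.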

But even with fully reproducible builds, there is still the danger of a “trusting trust” attack. This would be, say, a nation-state intentionally corrupting the compilers and build tools so that spyware is injected during the build process itself. This gets pretty deep into tinfoil-hat land, but it is still something that should be discussed. The Bitcoin Core project took it seriously enough to move its release builds to Guix, a build system whose entire toolchain can be bootstrapped from a tiny 357-byte seed program. That little bootstrap program is small enough that its machine code can be reviewed by hand to ensure there is nothing fishy going on.

  • Trusting Trust attacks and reproducible builds
  • Bitcoin build reproducibility
  • Trezor build reproducibility
  • Coldcard build reproducibility
  • Bitbox build reproducibility

Opensource hardware

Just as opensource software lets volunteers audit the code, opensource hardware lets volunteers audit the hardware. But in a similar vein to the “trusting trust” attack mentioned earlier, a hardware build consists of many prefabricated parts (LEDs, microcontrollers, resistors). If any of those components ship with backdoors in them, even the HW wallet vendor wouldn't know.

  • How to build Trezor / Bitbox from specs

Microcontroller specifications

Even with fully opensource software and hardware, you still need to know what is going on inside the microcontroller. To have any hope of doing a deep dive, you would need not only the schematics and BOM, but also full, public specifications of the microcontroller's internals. Some secure elements are more “opensource” than others, while some require an NDA just to view the details. But even with all that info, there is no hope of building your own microcontroller. This is where we have to give up and trust the maker: there is very little ability to audit what is inside a microchip.

  • STM32 microcontroller full specifications

Warrant Canary

A warrant canary is a type of legal loophole against the US Patriot Act and similar laws. Under the Patriot Act, certain carefully crafted subpoenas can carry criminal penalties for anyone who declares that they are under investigation. Or to state it plainly: if you tell anyone that the government is snooping, you go to jail. The loophole comes from the difference between a “gag order” and “compelled speech”. Although it is (unfortunately) legal for a court to order someone not to speak (a “gag order”), courts cannot (yet) force anyone to publicly lie (“compelled speech”). So a canary is basically a file that says

No government snooping this week

Then, if there is government snooping, you simply stop publishing canaries. Everyone who is watching can then infer, from the absence, that government snooping is now taking place. Having these files cryptographically signed is useful for the issuer: if they destroy their canary keys, no government agent can forge a canary in their name.
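The watcher's side of this protocol can be sketched in a few lines. This assumes a hypothetical canary format with a `Date: YYYY-MM-DD` line and a publishing cadence of two weeks; a stale canary is treated exactly like a missing one (signature verification, typically done with gpg or signify against the issuer's key, is omitted here):

```python
from datetime import datetime, timezone, timedelta

MAX_AGE = timedelta(days=14)  # assumed publishing cadence

def canary_is_fresh(canary_text: str, now: datetime) -> bool:
    """Find the 'Date:' line and check the canary is recent enough.

    A canary with no date line, or one older than MAX_AGE, is treated
    as absent -- which is the signal that snooping may be under way.
    """
    for line in canary_text.splitlines():
        if line.startswith("Date:"):
            published = datetime.strptime(
                line.split(":", 1)[1].strip(), "%Y-%m-%d"
            ).replace(tzinfo=timezone.utc)
            return now - published <= MAX_AGE
    return False  # no date line at all: treat as expired

canary = "No government snooping this week\nDate: 2023-05-20\n"
print(canary_is_fresh(canary, datetime(2023, 5, 25, tzinfo=timezone.utc)))  # True
print(canary_is_fresh(canary, datetime(2023, 9, 1, tzinfo=timezone.utc)))   # False
```

The key design point is that the watcher never needs the issuer to say anything when snooping starts; silence alone trips the alarm.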

Even if you have a hardware / software stack that you are happy with, knowing when to abandon the product is always a good thing to learn.

  • Cloudflare canary
  • Trezor canary
  • QubesOS canary


Since the microcontroller can't be fully audited, there is some level of trust that we just have to live with. We are not going to get 100% auditable builds that can fully declare that seed extraction is impossible. The best we can hope for is an “I don't know” from most hardware wallets. Of course, some hardware wallets may be designed with fully write-only key sections in the secure element, but we may never know whether some secret, undocumented microcontroller pin manipulation would allow key exfiltration.

So hopefully we can all find a HW maker we are happy with, and one we have to trust as little as possible. But IMHO, having a fully disclosed and open stack is the very first step. Without that, nothing else can follow.

2 thoughts on “How Open-Sourciness Prevents the Ledger Seed Issue”

  1. To be fair to Ledger, the feature was publicly viewable as of Feb 15th in PR #2658. From then on there were 47 mentions of the feature in the last 90 days. I’m honestly surprised that Wired was the first one to catch it.

    Reminder to us all to monitor the github of our hardware / software wallets every few weeks to see if anything weird is headed your way.

  2. Thanks for the really well written post.
    I think it's very important to educate the community, since most people don't really understand the real problem behind this Ledger drama. There is a big crowd, and often people are not deep enough into tech to draw their own conclusions, so there is a lot of misunderstanding, with people just yelling “open source”.
    I'm glad I found this post.
