TL;DR:
When buying a Hardware Wallet (HWW), you give away important private information to companies you shouldn't trust. This isn't hypothetical: these companies have been hacked, and those hacks have resulted in both cyberspace and meatspace attacks against their users. Do you want people to have your home address, knowing that you own bitcoin? The list of historical HWW vulnerabilities is so long that even though I wrote a paper on it, I can't get anyone to read it. Many of these companies are on record lying about their vulnerabilities numerous times, while the list just keeps growing. There is a long list of remote attacks, believe it or not, and of course there is a near-unlimited list of physical vulnerabilities (ways private keys can be extracted from the HWW once it is in an adversary's possession). This opens up attack vectors such as Evil Maid attacks and Supply Chain attacks. As bitcoin-specific hardware, the supply chain attack risk is so astonishingly awful that on this alone HWWs shouldn't be considered. Despite what HWW vendors claim, no magic holographic sticker or secure element can solve this issue; there is simply no way to verify whether or not you have a genuine device. But even worse than all of this is the architectural issue: the single selling point of these devices is that you can use them on infected machines. BUT THIS IS WRONG. Bitcoin's security model fails without a full node, and you cannot trust a software application (the node) if you cannot trust the computing levels beneath it - that is, the operating system (OS), kernel, bootloader, BIOS, CPU, and hardware. If one level is vulnerable or malicious, it corrupts all levels above it. If you can't trust your computer, you ultimately cannot know when a broadcast fails, when a transaction sends, or whether bitcoin you have received are genuine.
You may ask: why do HWWs have a history of so many vulnerabilities (remote private key extraction, attacker-controlled addresses, ransom attacks, etc.)? It is because they have very poor code review. Bitcoin Core, on the other hand, has a decentralized development process that is very conservative, prioritizes security, is supported by hundreds of developers, and receives wide community and business review all over the world, with no single point of failure. Unfortunately, these hardware wallet projects have a handful of developers at most, with a single project lead who can merge commits with essentially no community review. These projects can push poorly written code, or even malicious code, without it being noticed for months. And at the time I wrote this paper, there were still many unresolved, devastating vulnerabilities in these devices. Read on for all the details.
Section 1.0 Introduction
First off, I don’t ‘hate’ hardware wallets, nor their developers. What I hate is security theater and scams - because security, privacy, and voluntary human action are the guiding principles of all my work.
I don’t hate the developers working on these projects, and I don’t mean to disrespect anyone discussed or linked to in this post. Some of these developers have been involved in the Bitcoin community for nearly a decade, and have been contributing great work in addition to their hardware wallets. But I do disagree with these developers, and I hope to make my argument clear here.
I started this research back in August of 2020, after receiving an invitation from Advance Tech Podcast [1] to discuss hardware wallets and Yeti [2] – the bitcoin sovereign custody solution built by JW, Will Weatherman, and myself.
Note: Unlike hardware wallet vendors, we make no money whether or not users decide to run our software. It is free/libre open source software. Yeti is a minimal Python script and UI layer run on top of Bitcoin Core. Our goal is that eventually Bitcoin Core will pull in enough advances that Yeti will not be needed at all to achieve the easy, secure, offline, multisig that Yeti enables. Bitcoin Core is currently working on this, and a lot of work has already been merged. Some of the remaining work can be looked at here [3]–[5]. For this article, I will not mention our software until the very end, trying to keep it as strictly unbiased as possible. My only goal is to advance the security practices of the Bitcoin community.
I also donate monthly to many Bitcoin Core developers, as well as support other important projects in the community to advance security and privacy (such as porting Qubes to the Power ISA) [6]. I have absolutely no financial gains from any of this. I do this because I believe in the goals of our community.
For the structure of this article, I will start with the security issues of Hardware Wallets (Section 2.0), which I will at times abbreviate as HWWs, then refute arguments made in defense of Hardware Wallets (Section 3.0), move into solutions (Section 4.0), and finish with pros/cons for those solutions (Section 5.0).
2.0 Security Vulnerabilities of Hardware Wallets
First, Hardware Wallet security vulnerabilities. There are numerous problems with Hardware Wallets. I will start with describing physical security issues (Section 2.1), then dive into the historical technical issues (Section 2.2). (I suggest skipping this if you are not technically inclined, but please note how long the list is). Finally, I will discuss the inherent architectural issue with Hardware Wallets themselves (Section 2.3).
2.1 Physical (as in ‘real world’) Security Issues
When you order a Hardware Wallet, you are giving your private information to a company that you are ultimately forced to trust. This trust shows itself in many surprising ways - physical (as in ‘real world’, or ‘meatspace’) and technical (hardware/firmware/software). By ‘meatspace’ security issues, I mean that when ordering a Hardware Wallet, you must provide the company with your physical address, name, phone number, and email address. While there are a number of solutions for receiving packages without providing this vital information (I suggest reading Jameson Lopp’s ‘A Modest Privacy Proposal’ [7]), the best ones cost a lot of money - and if you are going to such lengths to protect your physical privacy, you should be taking your cybersecurity just as seriously (by not using a HWW).
You might be saying to yourself, “these are trusted security companies with great standing in the Bitcoin community, I’m sure they will keep my information safe!”. However, this isn’t hypothetical: Ledger had exactly this data breach in July of this year. 9,500 customers had their first and last name, address, phone number, and email exposed, in addition to almost a million other email accounts.
This is incredibly dangerous. There are real, physical security risks associated with owning Bitcoin. Jameson Lopp has a repo [8] that documents known physical attacks against Bitcoin owners. It includes muggings, stabbings, home invasions, kidnappings, torture, and murder, for as little as $1,000 worth of crypto to as much as a few million dollars. This happens all over the world, with many of these attacks happening in the United States, Canada, and Western Europe.
And this was just one case of a HWW vendor being hacked. At any time, a HWW vendor could decide to sell this data, there could be a change in leadership, or a government could force legal action. With Bitcoin Core, there is no such data to be collected, no matter what force is applied. Segwit2x proved that Bitcoin Core has a decentralized development process that cannot be corrupted by force [9]–[11].
With this data leaked, stolen, or sold, you are very vulnerable. Adversaries can now target you with phishing emails, texts, or phone calls; they could target your home, or perform advanced social engineering attacks. An adversary can very easily scrape social media and public records to learn even more information about you. This opens you up to more targeted attacks, such as SIM swapping [12]. SIM swapping allows for account takeovers by targeting weaknesses in two-factor authentication implementations. Attackers can imitate you, take over your email, exchange, and trading accounts, and transfer money that you thought was in your control. There have been millions of dollars worth of bitcoin stolen through SIM swapping attacks, some from pretty well known people in the industry [13].
Note: Interestingly enough, it is now 2020 EOY, and we have seen attacks on users who had their information leaked in this database. I started researching this around August for the podcast, and now in December we are seeing many attacks in the wild, just as I predicted at the time. While these attacks are currently less complex than those described above - most being carried out through simple phishing emails - they still lead to complete loss of funds if successful [14]. Major bitcoin advocates continue to downplay the seriousness of the situation [15].
Note #2: On December 20th, a database from the Ledger leak containing 1,075,382 emails and 272,853 orders with full details (emails, physical addresses, phone numbers) was leaked - over 28x larger than what Ledger initially claimed to have lost. Apparently, the database had been making the rounds on hacker forums for nearly $100,000 – if you were doubting how valuable this information can be. I won’t be linking to the leak, so as not to contribute to its spread.
Note #3: As of December 23rd, there have been reports of SIM Swapping attacks, and targeted emails towards users threatening physical attacks. This situation is likely going to continue to get worse.
2.2 Historical Technical Vulnerabilities
Let us now get into the lengthy history of security vulnerabilities of Hardware Wallets. The most well-known and widely used HWWs are Trezor [16], Ledger [17], BitBox [18], and Coldcard [19]. KeepKey is another HWW which has been around for a long time; however, it is a near-complete clone of Trezor, so there is no reason to discuss it separately.
Hardware Wallets are advertised as enabling a user to securely store private keys and sign transactions even when the user’s laptop is insecure. I will demonstrate throughout this section (and discuss further in the later section on inherent architectural issues (Section 2.3)) how this is fundamentally impossible. Because of this, I will be focusing first on remote attack vulnerabilities (Section 2.2.1), which are Hardware Wallets’ single selling point. I am doing this to demonstrate that the claim from HWW vendors - that HWWs enable you to sign securely from a malicious machine - is completely bogus.
Afterwards, I will focus on the (numerous) physical vulnerabilities disclosed regarding how private keys can be extracted from HWWs once physical access is attained (Section 2.2.2). This opens up attack vectors such as Evil Maid attacks and Supply Chain attacks. No hardware wallet is immune from this, regardless of their use of holographic stickers, or secure elements (even though they may advertise otherwise).
If your computer is malicious (which is assumed), it is very likely that either the software wallet you are using is malicious or the data that the wallet is sending to the HWW is being intercepted and changed to malicious data – this is called a Man in the Middle attack, or MITM. (I believe that BitBox is the only HWW that tried to prevent MITM attacks along the USB by encrypting the data; however, they used unauthenticated encryption).
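To illustrate why unauthenticated encryption over the USB link cannot stop a MITM, here is a minimal Python sketch. It uses a toy keystream cipher (not BitBox's actual protocol - the cipher, key, and message layout here are all hypothetical): an attacker who can guess the plaintext layout can rewrite the message without ever knowing the key, and nothing detects the modification.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream (NOT a real cipher): stretch the key with SHA-256 blocks.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"shared-usb-session-key"
plaintext = b"pay-to:1UserAddr"
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

# A MITM who knows (or guesses) the plaintext layout can flip bits
# without knowing the key: XOR the ciphertext with (old XOR new).
target = b"pay-to:1EvilAddr"
tampered = xor(ciphertext, xor(plaintext, target))

# The device decrypts the tampered message to the attacker's value,
# and with no MAC there is nothing to detect the modification.
assert xor(tampered, keystream(key, len(tampered))) == target
```

Authenticated encryption (e.g., an AEAD mode, or encrypt-then-MAC with an authenticated pairing) would cause the device to reject the tampered ciphertext outright.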
2.2.1 Remote Attacks
In 2014, Trezor had to fix a vulnerability where a malicious wallet/computer could send to the HWW a specially crafted transaction (one with a malicious ScriptSig), which would cause a buffer overflow and extract the private key [20].
A similar type of vulnerability happened ~7 months later (maliciously crafted transaction not confirmed on the HWW screen), except this time it would contain a change output owned by the attacker [21].
In 2018, BitBox was examined by Saleem Rashid, who essentially called them “irresponsible maverick[s] with no regard for domain separation”. They implemented BIP-32 so poorly that requesting the master public key of the public wallet revealed the master private key of the ‘hidden’ wallet, and requesting the master public key of the hidden wallet revealed the master private key of the public wallet. In addition, the device did not have a screen, making confirmation of what the device was signing impossible (and, in my opinion, rendering it worthless). The designers tried to fix this by pairing it to an (insecure) smartphone as a second factor; however, there was no proper authentication process, and the pairing process was vulnerable to MITM attacks [22].
Trezor uses a USB cable to transfer data from the device to/from the computer. This opens it up to many attacks, most of which we will get into in Section 2.3 (architectural issues). However, it also makes Trezor more vulnerable to remote attacks, such as the one discovered in May of 2018 by Christian Reitter. Specially crafted USB packets could trigger a buffer overflow and lead to code execution [23].
In November of 2018, Sergey Lappo disclosed a vulnerability in Ledger’s hardware wallet that allowed an attacker to replace the change address with the attacker’s own address instead of the wallet’s derived address, without any confirmation or verification on the device. Depending on the UTXO distribution in the addresses, even just spending a small amount could lead to a near total loss of funds [24].
In addition, Ledger runs proprietary firmware, so no serious user should even consider running it, nor should any serious expert recommend it (although many surprisingly do) [25], [26] – but I will continue to include their vulnerabilities in this post for your information.
A vulnerability that affected Trezor & Ledger [27], BitBox [28], [29], and later Coldcard as well [30] is that attackers could create a change or receive address that, while still owned by the user, would be derived on an arbitrary keypath that was neither properly limited nor verified. Essentially, the user would receive coins on an address that they would be unable to find and spend from. The attacker could then hold the path to those funds ransom.
Rather than the keypath tree looking like:
m/44' /0' /0' /0 /0
It could look something like:
m/44' /0' /0' /519486735 /755295795 (without the user being aware).
Later, in October of 2019, it was disclosed that Trezor had a change output address vulnerability similar to Ledger’s, even though Ledger’s was disclosed nearly an entire year prior. To be fair, this one was slightly more complicated (and only applied to their newer model, Trezor T). Trezor at least had some checks on the change output, but if the attacker added a malicious input to the transaction, it enabled the attacker to bypass those incomplete checks, set the change output as a 1-of-2 multisig address controlled by both the attacker and the user, and then quickly transfer the user’s funds solely to the attacker’s control. It was a critical vulnerability – it allowed the attacker to steal all funds in the user’s account except for the transaction’s send amount [31].
And then, just a few months later, it was found that once again there was another change output vulnerability – this time with Coldcard. Coldcard was not properly validating the transaction, and by using maliciously crafted script opcodes, an attacker could trick a user into sending change to an attacker-owned address [32].
In the beginning of 2020, it was discovered that Trezor never fully patched their 1-of-2 multisig change output vulnerability disclosed in October of 2019, and it had to be fixed again [33].
Ledger supports multiple different currencies, and claims to do so securely by ‘isolating’ individual apps for each currency. However, researcher Monokh discovered in May 2020 that an attacker can prompt the Ledger device for a Litecoin/Bitcoin testnet/Bitcoin Cash/etc (list low value currency here) transaction, have the user confirm the transaction for that other currency, and actually spend a high value Bitcoin transaction, all without the user being aware [34]. Numerous different scenarios could end with all of your Bitcoin stolen – trading on an exchange, trying out a new service, working as a developer with testnet coins, etc. Even more frustrating, Ledger downplayed the issue, the researcher reached out for updates numerous times, and the issue remained unfixed for months (the researcher had previously contacted Ledger about the privacy related aspect of the vulnerability 18 months prior – if Ledger had examined it then, perhaps they would have discovered and patched this more serious theft related vulnerability). Only after the researcher finally disclosed it publicly did Ledger do anything about it.
Near the end of 2020, developer and security researcher Benma disclosed that Coldcard was also vulnerable to the above isolation vulnerability (despite Coldcard originally stating otherwise). Coldcard doesn’t support any altcoins, but they were vulnerable to sending real bitcoin instead of testnet bitcoin, despite showing and confirming to the user otherwise. This vulnerability has been public for 4 months [35], a fix still hasn’t been released [36], and believe it or not, Coldcard just doesn’t believe social engineering is an attack vector that needs to be protected against [37].
This will never end. There will be even more vulnerabilities discovered in 2021. Your funds will never be safe in a hardware wallet.
If you have made it this far, let us continue.
2.2.2 Physical Vulnerabilities
Here, I mean ‘physical’ as ‘hardware’, not as ‘meatspace’. This is about how private keys can be extracted from HWWs once physical access is attained. Physical access to a device is ‘game over’, as detailed in Microsoft’s ‘10 Immutable Laws of Security’ - Law #3: “If a bad guy has unrestricted physical access to your computer, it's not your computer anymore” [38]. This is a security issue with all hardware devices – computers, laptops, and hardware wallets. Honest security is about recognizing this issue, being honest with consumers, and making the time and money required for hackers to break into the hardware and boot process as significant as possible. Security theater is about lying to customers and advertising your device as somehow being above Law #3, when in fact it is incredibly vulnerable and simple to break into once physical access is achieved. Unfortunately, all HWWs have chosen to travel the path of security theater, instead of honest security.
If the device is easy to attack physically, users are vulnerable to losing their private keys through supply chain attacks, evil maid attacks, or plain physical theft. A supply chain attack is where a malicious actor sends a malicious device in place of the legitimate device. There currently is no way to verify whether or not you have a genuine hardware wallet. An evil maid attack is a hack that requires the user to activate the device and input some information before/after the attacker gains physical access. Physical theft requires stealing the device after the user has generated a private key on the device. Common attacks include side channel attacks and fault attacks. While physical attacks are typically more complex and difficult to carry out than the remote attacks discussed previously, they are serious and should be considered in your threat modeling. Evil maid attacks and physical thefts require more targeting than supply chain attacks, which can target users indiscriminately.
In March 2015, Jochen Hoenicke published the first successful physical attack against HWWs. Previously, these discussions had been mostly theoretical. Specifically, this attack required theft of the device after the user had generated their private keys. Using a cheap oscilloscope (~$70), a side channel attack was performed by analyzing power consumption while generating public keys (therefore, the PIN was not needed). The private key was successfully recovered [39].
Two years later, another theft attack on Trezor devices was disclosed. The seed is saved in flash memory, but copied to RAM when in use. Flash memory survives restarts, but RAM does not. Trezor allows you to install your own custom firmware, but doing so requires a restart (to place the device into bootloader mode) and erases the flash memory as well. It turns out an attacker can install malicious firmware and, instead of restarting the device, perform a soft restart by shorting the device. This keeps its RAM contents readable, and therefore its seed as well [40].
In the beginning of 2018, Trezor had a supply chain vulnerability. The microcontroller used in the Trezor (STM32F205) had undocumented flaws in its write protection, essentially rendering it useless. This allowed an attacker to replace the bootloader through a malicious firmware update. If the device was intercepted en route to you, or you purchased from a malicious reseller, this would allow the attacker to infect your device. Trezor fixed this by having the newest firmware verify the authenticity of the bootloader (the bootloader already checked the signature of the firmware). They also implemented write protection through another unit of the chip [41].
Note: Tamper-evident seals are not enough to verify the authenticity of a device, unlike what is described in the linked post. Also, even though the bootloader may be verified, there is no way to verify whether or not another supply chain attack has taken place – such as by inserting malicious chips. See Section 2.3 – architectural vulnerabilities.
In February of 2018, it was discovered that the firmware of the BitBox could be downgraded to older versions. This opened devices up to the previously fixed remote BIP-32 implementation hack [22]. To make it worse, the firmware version check happens in the bootloader, and the bootloader can’t be updated. That means that for all existing devices, there is no fix; it can only be addressed in newly shipped versions [42].
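A downgrade (rollback) attack of this kind is usually prevented by having the bootloader track the highest firmware version it has ever accepted and refuse anything older. The sketch below is a toy model of that idea, not BitBox's code; in real hardware `highest_seen` would live in monotonic or write-once storage.

```python
class Bootloader:
    """Toy anti-rollback check: signature validity alone is not enough."""

    def __init__(self):
        # In real hardware this counter must survive resets and be
        # tamper-resistant (e.g., monotonic counter, fuse bits).
        self.highest_seen = 0

    def accept_firmware(self, version: int, signature_valid: bool) -> bool:
        # A valid vendor signature alone is insufficient: an old, signed,
        # *vulnerable* firmware image would still pass that check.
        if not signature_valid:
            return False
        if version < self.highest_seen:
            return False            # rollback attempt: refuse the downgrade
        self.highest_seen = version
        return True

bl = Bootloader()
assert bl.accept_firmware(3, signature_valid=True)       # install v3
assert not bl.accept_firmware(2, signature_valid=True)   # signed but older: rejected
```

The BitBox flaw was worse than a missing counter: the check lived in an unpatchable bootloader, so even this fix could only ship in new units.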
One month later, Saleem Rashid published one of the most infamous supply chain attacks against Ledger. Ledger advertises that their devices are tamper proof, that there is no need for anti-tampering stickers, and that you can ‘safely’ purchase their devices from second-hand sellers, eBay, etc, because they have a secure element that will verify that the device is genuine. So how do they claim that this works?
First off, Ledger uses a dual architecture - a general purpose microcontroller (MCU) and a secure element (SE). The MCU acts as a proxy between the computer and the SE. So when the SE verifies the MCU’s firmware, it asks the non-secure MCU to send over its own contents. The problem is, if the MCU is compromised, nothing prevents it from sending over legitimate code to the SE while running malicious code. There are ways to make this difficult, but not to prevent it. In addition, the use of a Secure Element requires closed source firmware, so you can’t even audit its code.
While this may sound very theoretical, Saleem proved it. He was able to install malicious firmware on the MCU and still have the SE authenticate the device, and therefore ultimately demonstrated a supply chain attack and an evil maid attack. Despite what Ledger stated, neither required malware on the target’s computer, or for the target to confirm a special transaction. Ledger unfortunately did not surprise us this time – despite many methods of communication and waiting months, Ledger continued to downplay the seriousness of this critical vulnerability, and the researcher had to release it publicly themselves without a bounty rewarded [43].
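The failure mode here can be sketched in a few lines. This is a deliberately simplified toy model (the class names and hash-based attestation are my own illustration, not Ledger's actual protocol): a compromised MCU simply keeps a pristine copy of the genuine firmware to show the SE, while executing something else entirely.

```python
import hashlib

GENUINE = b"genuine MCU firmware"
MALICIOUS = b"malicious MCU firmware"

class CompromisedMCU:
    """The MCU proxies all traffic between the computer and the SE."""

    def __init__(self):
        self.running = MALICIOUS      # code actually executing
        self.stored_copy = GENUINE    # pristine image kept to show the SE

    def report_firmware(self) -> bytes:
        # The SE can only ask the MCU to send over its own contents -
        # nothing stops a compromised MCU from sending the genuine copy.
        return self.stored_copy

def secure_element_attest(mcu) -> bool:
    # The SE compares the reported image against the expected hash.
    expected = hashlib.sha256(GENUINE).digest()
    return hashlib.sha256(mcu.report_firmware()).digest() == expected

mcu = CompromisedMCU()
assert secure_element_attest(mcu)     # device "verifies" as genuine...
assert mcu.running == MALICIOUS       # ...while running attacker code
```

Because the SE has no independent way to observe what the MCU is actually executing, "genuineness" attestation through a potentially hostile proxy proves very little.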
A few months later, Saleem set his eyes back on the BitBox. Like Ledger, BitBox uses a dual architecture approach. However, BitBox uses a general MCU and a tamper-resistant storage chip (ATAES132A), which is only used for storing keys and passwords, not for running Bitcoin-specific code. While that is what BitBox said they did, what their code actually did was entirely different. The ATAES132A surprisingly had all of its security protections disabled. While all secrets were encrypted, the encryption key was placed in the non-secure MCU, and the encrypted contents were written to the disabled secure chip. So – how to access the encryption key? Well, it turns out factory resets wiped the secrets, but not the encryption key. This led to reuse of encryption keys across users. By attaching invasive probes, the attacker can back up the victim’s encrypted private keys, reset the device with their own password, write the victim’s encrypted private keys back, and successfully operate the device. BitBox’s first patch for this was vulnerable to a MITM attack (it asked the secure chip for random bytes, not the MCU). Saleem never bothered to review their second fix, as it was just a minor modification that looked poorly implemented, he had been poorly paid for his findings, and there were “elementary flaws in the high-level design of the device” [22].
In August of 2018, Trezor had to push an additional fix for its first supply chain attack from six months prior (the write-protection flaws). It turns out the previous update could be circumvented via clever use of the system configuration (SYSCFG) registers [44].
In December of 2018, Dmitry Nedospasov, Josh Datko, and Thomas Roth presented at 35C3 a number of vulnerabilities in hardware wallets at a presentation known as Wallet.Fail. One of these was a physical theft vulnerability in Trezor wallets. They disclosed that it was possible to downgrade the security of the MCU by downgrading the read protection from RDP2 (no access) to RDP1 (ability to read RAM) through a glitching attack. But, the seed is stored in the flash, not RAM. However, during an upgrade (which does not require the PIN), the device retains the seed by momentarily copying it to RAM. So, all an attacker has to do is enter the bootloader, start a firmware upgrade, have the seed copied into RAM, stop the upgrade process, and glitch the Trezor to downgrade the MCU from RDP2→RDP1, which makes the RAM readable and the seed accessible [45].
The Wallet.Fail presenters also disclosed a supply chain vulnerability in Ledger’s devices. Instead of verifying the firmware on each boot, the device wrote a constant to a specific memory address after verification, and on subsequent boots merely checked for the constant. There was a protection against simply writing in the constant itself; however, the chip had an interesting memory map that allowed mapping the flash to another virtual memory area that mapped back to the same physical memory. This allowed the attacker to successfully install their own malicious firmware and write in the constant to pass the verification. What was Ledger’s response? “Don’t worry, your crypto assets are still secure on your Ledger device”, and they even went as far as to state that installing custom firmware “is actually a feature” (although later stating that “This bug has been solved in the next firmware version”) [46]. Ledger makes the further argument that the Wallet.Fail team only compromised the MCU, not the Secure Element, which is what holds the private keys and authenticates the device. However, the MCU is responsible for displaying data to the user via the screen, and for receiving data via the confirmation buttons. The Wallet.Fail team also made the same argument that Saleem did (and proved) earlier that year – if the MCU is compromised and acts as the proxy between the computer and the SE, it is easy to lie to the SE, and the entire security model therefore fails. “We did not bother to fully reverse engineer [this] because we didn’t need to” [47]. They released a proof of concept that runs once, then replaces itself with the genuine firmware, so that future checks come back clean.
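The difference between the flawed "verify once, trust a flag" pattern and re-verifying the image on every boot can be shown with a toy model. This is a hedged sketch of the pattern described above, not Ledger's actual memory layout or code:

```python
import hashlib

TRUSTED_HASH = hashlib.sha256(b"genuine firmware v1").hexdigest()

class FlagCheckBoot:
    """Flawed pattern: verify once, then only check a stored flag."""

    def __init__(self, firmware: bytes):
        self.firmware = firmware
        self.flag_ok = hashlib.sha256(firmware).hexdigest() == TRUSTED_HASH

    def boot_allowed(self) -> bool:
        return self.flag_ok   # later firmware swaps are never re-checked

class HashCheckBoot:
    """Safer pattern: re-verify the actual image on every boot."""

    def __init__(self, firmware: bytes):
        self.firmware = firmware

    def boot_allowed(self) -> bool:
        return hashlib.sha256(self.firmware).hexdigest() == TRUSTED_HASH

flawed = FlagCheckBoot(b"genuine firmware v1")
flawed.firmware = b"malicious firmware"   # attacker replaces the image
assert flawed.boot_allowed()              # still boots: the flag was set once

safe = HashCheckBoot(b"malicious firmware")
assert not safe.boot_allowed()            # re-verification catches the swap
```

Even re-verification per boot only helps if the verifier itself cannot be tampered with, which is exactly what the memory-mapping trick above undermined.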
In the beginning of 2019, an extension of the Wallet.Fail Trezor attack was revealed by Colin O’Flynn. The attack used electromagnetic fault injection to leak secret information via USB descriptors. Rather than using the JTAG connection and dumping from RAM, Colin used the USB connection (thereby also avoiding evidence of tampering) and dumped directly from flash (where the seed is stored) [48].
Also early in 2019, it was disclosed that Coldcard was vulnerable to a physical theft attack. This was achieved by performing a MITM attack between the secure chip and the general purpose MCU. This allowed the attacker to bypass the MCU failed attempt counter and brute force the PIN, as the PIN attempt counter located on the secure chip was not being authenticated [49].
A couple of months later, Trezor was found to be vulnerable to a PIN attack as well. The issue was that while each comparison was done in constant time, the PIN digits were checked in sequence (processed one by one). Through a side channel attack, an attacker was able to deduce the valid PIN by reading the device's power consumption as it compared the presented PIN with the valid PIN [50].
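To make the flaw concrete, here is a hedged Python sketch (my own illustration, not Trezor's firmware). The first function compares digit by digit, so the device handles one secret digit at a time and each step's power trace depends on a single digit; the second compares a slow hash of the whole PIN, so no per-digit secret handling occurs and `hmac.compare_digest` avoids early-exit timing differences.

```python
import hashlib
import hmac

def leaky_check(entered: str, stored: str) -> bool:
    # Digit-by-digit comparison: even if each step runs in constant time,
    # each step's power consumption depends on one secret digit - ideal
    # conditions for a power-analysis side channel.
    if len(entered) != len(stored):
        return False
    ok = True
    for e, s in zip(entered, stored):
        ok &= (e == s)
    return ok

def hashed_check(entered: str, stored_digest: bytes, salt: bytes) -> bool:
    # Compare digests of the whole PIN instead: the secret digits are
    # never handled individually, and compare_digest is constant-time.
    digest = hashlib.pbkdf2_hmac("sha256", entered.encode(), salt, 100_000)
    return hmac.compare_digest(digest, stored_digest)

salt = b"device-unique-salt"
stored = hashlib.pbkdf2_hmac("sha256", b"1337", salt, 100_000)
assert hashed_check("1337", stored, salt)
assert not hashed_check("0000", stored, salt)
```

Hashing also means the raw PIN never needs to be stored on the device at all.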
In April of 2019, Christian Reitter discovered a side channel attack that all major HWW’s were vulnerable to (Trezor, Ledger, Coldcard, and BitBox) [51]. This attack involved the OLED displays on the devices. They display information one pixel row at a time and require a lot of energy to do so, which opens up the devices to a side channel attack. Christian found there was a direct correlation between the number of illuminated pixels on each row and the total power consumption of the device. Through statistical analysis, the seed words and/or PIN combination could be discovered. Since the attack has to occur while the device is displaying the sensitive secrets, it would likely occur via a malicious USB cable – through a supply chain attack or an evil maid attack.
In July, the Ledger Donjon team disclosed a devastating physical attack against all Trezor devices and clones. The vulnerability allows an attacker with physical access to the device to retrieve the master seed, using only cheap tools that can be bought from any electronics store and basic electronics techniques. The total cost of the attack is only $100 and 5 minutes of time to execute. This appears to be a fundamental bug in the STM32F205 chip, and is therefore unable to be patched [52]. Using a long passphrase could protect you from this, since it encrypts the seed. However, this introduces multiple other issues through its use of BIP39 (which Bitcoin Core doesn’t use, for these reasons). BIP39 uses PBKDF2, a weak key derivation function, with an iteration count that is set too low. There are also multiple implementations available, with versioning issues. In addition, you now have to store multiple pieces of sensitive data (the seed, PIN, and passphrase).
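For reference, the entire BIP-39 seed derivation fits in a few lines of standard-library Python, which makes the low work factor easy to see: PBKDF2-HMAC-SHA512 with only 2,048 iterations, a trivial cost for an attacker brute-forcing passphrases on a GPU.

```python
import hashlib
import unicodedata

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    # Per BIP-39: PBKDF2-HMAC-SHA512, salt = "mnemonic" + passphrase,
    # 2048 iterations, 64-byte output. 2048 iterations is a very low
    # work factor by modern password-hashing standards.
    mnemonic_n = unicodedata.normalize("NFKD", mnemonic)
    salt = unicodedata.normalize("NFKD", "mnemonic" + passphrase)
    return hashlib.pbkdf2_hmac(
        "sha512", mnemonic_n.encode(), salt.encode(), 2048, dklen=64
    )

mnemonic = ("legal winner thank year wave sausage worth useful "
            "legal winner thank yellow")
seed = bip39_seed(mnemonic, "TREZOR")
assert len(seed) == 64
# A different passphrase silently yields a completely different wallet -
# there is no built-in way to detect a typo.
assert bip39_seed(mnemonic, "TREZOR") != bip39_seed(mnemonic)
```

Contrast this with modern password-based KDFs (scrypt, Argon2), which are memory-hard and tuned to cost the attacker far more per guess.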
Disclosed this year was a supply chain attack on Coldcard devices. Coldcard allows users to load their own firmware. This is by design, and documented on their website - "We have so much internal protection for the master secret, that we feel it's safe to allow potentially hostile firmware...". Coldcard's protection is that firmware not signed by Coldcard lights up a red light and a "Danger" warning screen. However, this only happens the first time the firmware is run. This would allow an attacker to flash malicious firmware, boot the device, power off, and then hand it off to a user as a fresh device - since the warnings are only presented on the first run of the firmware. Coldcard did think of this, and their defense was to prevent any device reset: the user in this case would receive a device that suspiciously already had a wallet set up, since the firmware would already have been loaded. However, this protection was not implemented properly. It was discovered that if malicious firmware is loaded on the device and the PIN is then set to zero, the device resets back to a blank device. On the next start-up, the Coldcard walks the user through the normal setup procedure [53]. This exposes all Coldcard users to a serious supply chain attack.
Coldcard does not acknowledge this vulnerability, instead claiming that this is part of their threat model. This is amusing, since they did have to remove the documentation stating that "there is no way to clear main PIN" [54]. In their own blog post on the vulnerability, Coldcard stated that "If the plan is to make a trojan-horse device, an attacker would have to successfully open and then close the bag without damaging it while hoping the user doesn’t upgrade their firmware upon receipt" [55]. It is true that the supply chain security of Coldcard devices has now been reduced to that of the plastic bag alone (which is essentially zero). However, even when updating with genuine firmware, the currently installed malicious firmware could lie to you about the genuine firmware update. That is not a reasonable ‘fix’. It would require a patched bootloader for future devices, which has not occurred.
Copyright © 2022 Robert Spigler - All Rights Reserved.