How the Bybit Incident Actually Worked: A Technical Post-Mortem
A precise account of how the Bybit incident worked, and the architectural property that made it possible.
This is the technical companion to The Bybit Incident and the Limits of Approval Thresholds. That essay addresses the architectural argument. This one addresses what actually happened, and why it worked.
One clarification before the mechanics. This was not a failure of Safe’s core protocol, whose smart contract logic is formally verified and has held up under years of real-world use. It was not a failure of hardware wallets or multisig mathematics. It was an attack that exploited a structural property those tools share — one that no amount of cryptographic rigor at the verification layer resolves.
What Bybit Was Running
Bybit used Safe to manage its Ethereum cold wallet. The architecture has two components relevant to what follows.
The first is a proxy contract. This is the address that holds assets. It does not contain wallet logic. It delegates execution to a separate implementation contract by storing that contract’s address in a single storage slot.
The second is the implementation contract. This is where the multisig logic lives: signature verification, threshold enforcement, execution rules.
When signers approved a transaction, the proxy looked up the implementation address and delegated execution there. The implementation verified the required signatures and executed the action. This architecture is standard. The attack was designed specifically for it.
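The delegation step can be sketched in a few lines. This is an illustrative Python model, not Solidity: the class names, the `IMPL_SLOT` constant, and the threshold value are invented for the sketch, and a real Safe stores the implementation address in an EVM storage slot rather than a dictionary.

```python
# Minimal model of the proxy pattern described above (names invented).
IMPL_SLOT = 0  # the single storage slot holding the implementation address

class Implementation:
    """Holds the multisig logic: signature checks, threshold, execution."""
    def execute(self, storage, tx, signatures, threshold=3):
        if len(signatures) < threshold:
            raise PermissionError("threshold not met")
        return f"executed {tx}"

class Proxy:
    """Holds the assets and a pointer to the logic contract."""
    def __init__(self, implementation):
        self.storage = {IMPL_SLOT: implementation}

    def handle(self, tx, signatures):
        impl = self.storage[IMPL_SLOT]  # look up the implementation address
        return impl.execute(self.storage, tx, signatures)  # delegate to it

proxy = Proxy(Implementation())
proxy.handle("transfer 100 ETH", ["sig1", "sig2", "sig3"])
```

The point of the shape is that the proxy itself enforces nothing; every rule lives behind the pointer in `IMPL_SLOT`.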
How Access Was Obtained
The entry point was not Bybit’s internal systems. It was infrastructure in the software supply path Bybit relied on to authorize transactions.
Public reporting and forensic analysis from multiple security firms indicated compromise within Safe’s cloud delivery environment, giving attackers the ability to modify the frontend code Bybit’s signers trusted. This matters beyond the Bybit case. Many operators secure their own environments carefully while remaining dependent on externally rendered interfaces to present and authorize transactions. The security of that delivery path is part of the custody model, whether it is treated that way or not.
The attackers did not move immediately. They studied how the interface processed transactions and how it interacted with Bybit’s specific signing setup before acting. Two days before the theft, they deployed a targeted modification. It activated only for Bybit’s cold wallet address. Every other Safe user saw a normal interface. Bybit’s signers had no reason to suspect theirs was different.
The modified code did two things: it substituted the actual transaction payload while displaying legitimate details to the signers, and it preserved normal-looking responses after each signature was collected. The deception was invisible at every step of the workflow.
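The targeted substitution can be shown schematically. The real modification was JavaScript served from the compromised delivery path; this Python sketch only illustrates the logic, and `TARGET`, the addresses, and the payload shapes are all placeholders, not the real values.

```python
# Schematic sketch of the targeted payload substitution (all values invented).
TARGET = "0xBybitColdWallet"  # placeholder for the one targeted address

def render_for_signing(wallet, intended_tx):
    """Return (what the screen shows, what the hardware wallet signs)."""
    displayed = intended_tx  # the UI always shows the legitimate details
    if wallet == TARGET:
        # Only the targeted wallet gets a swapped payload; every other
        # user signs exactly what they see.
        signed = {"to": "0xAttackerContract", "op": "delegatecall"}
    else:
        signed = intended_tx
    return displayed, signed

shown, signed = render_for_signing(TARGET, {"to": "0xWarmWallet", "value": 100})
# shown still looks like a routine transfer; signed is the malicious payload
```

Because the divergence exists only for one address, no other Safe user could have stumbled onto it, and nothing on the targeted signers' screens distinguished their session from anyone else's.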
What the Signers Actually Signed
On February 21, Bybit’s team initiated what appeared to be a routine cold-to-warm wallet transfer. The interface showed the expected destination and amount. The Safe domain was correct.
What was sent to their hardware wallets for signing was different.
The payload the signers approved was a delegatecall to a pre-deployed attacker-controlled contract. A delegatecall is an Ethereum operation that executes another contract’s code within the storage context of the calling contract. In practical terms: it runs external code as if it were your own, with full access to your contract’s stored state.
The attacker’s contract contained one meaningful function. It overwrote the storage slot holding the implementation contract address, replacing the legitimate Safe logic with an attacker-controlled contract.
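The mechanics of that overwrite can be modeled directly. This is a hedged sketch, not EVM code: in the real EVM, `delegatecall` runs the callee's bytecode against the caller's storage, which the dictionary-passing below imitates; the contract and implementation names are invented.

```python
# Why a delegatecall can rewrite the proxy's own state (illustrative model).
IMPL_SLOT = 0

class AttackerContract:
    """One meaningful function: overwrite the implementation slot."""
    def run(self, caller_storage, new_impl):
        # Executed via delegatecall, so caller_storage is the PROXY's
        # storage, not the attacker contract's own.
        caller_storage[IMPL_SLOT] = new_impl

def delegatecall(caller_storage, contract, *args):
    # delegatecall: external code, but run in the caller's storage context
    contract.run(caller_storage, *args)

proxy_storage = {IMPL_SLOT: "SafeImplementation"}
delegatecall(proxy_storage, AttackerContract(), "MaliciousImplementation")
# The proxy now points at attacker-controlled logic.
```

Nothing in the proxy distinguishes this write from any other state change the multisig might legitimately authorize; the slot is just storage.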
Three signers approved. The threshold was satisfied. Every verification the system could perform passed. The transaction was valid by every measure available to the protocol.
Why Controls Vanished
When that transaction executed, the proxy’s implementation pointer changed.
The new implementation had no signature verification, no threshold enforcement, no multisig logic of any kind. It contained sweep functions that transferred assets to attacker-controlled addresses on demand.
From that moment, the multisig was not bypassed. It was structurally absent. The three signers who had spent time carefully reviewing what they believed was a routine transfer had approved the last transaction the multisig would ever verify.
Roughly 401,000 ETH were drained in the transactions that followed. No further approvals were required.
What Property Made This Possible
The attack required preparation and precision. The architectural property it exploited is not exotic.
Upgradeable proxy systems permit a valid multisig transaction to replace the implementation contract. This is a feature, not a bug. It enables wallet logic upgrades without migrating assets.
The consequence is that the same threshold governing ordinary transactions also governs the transaction that can eliminate the multisig entirely. There is no architectural distinction between approving a transfer and approving a change to the mechanism that authorizes transfers.
A system without that distinction has a specific vulnerability: one valid authorized event is sufficient to immediately displace all prior controls, faster than any defender can respond. This property is not specific to Safe, Bybit, or Ethereum. It exists in any system where upgrade authority and operational authority share the same approval mechanism.
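The equivalence is easy to state in code. The policy function below is hypothetical, not Safe's actual logic, but it captures the property: the action's type never enters the authorization decision.

```python
# Sketch of the architectural equivalence: one check guards everything.
THRESHOLD = 3

def is_authorized(action, signatures):
    # No distinction by action type: a transfer and an upgrade pass
    # through the identical check. `action` is never inspected.
    return len(signatures) >= THRESHOLD

sigs = ["s1", "s2", "s3"]
assert is_authorized({"type": "transfer", "amount": 100}, sigs)
assert is_authorized({"type": "upgrade", "new_impl": "0xNewLogic"}, sigs)
```

Any set of signatures sufficient for the first call is, by construction, sufficient for the second.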
The sophistication of the access-layer attack was real. But the property it exploited was architectural, not operational. Better signing procedures, better interface verification, better device hygiene — all of these raise the cost of obtaining valid signatures. None of them change what the system permits a valid signature to authorize.
What Different Architectures Change
Systems can separate operational approvals from authority-changing transitions.
In those designs, routine approvals authorize transfers and ordinary actions. Changes to the mechanism of control require something distinct: separate authority paths, independent proofs, delayed activation, or recovery constraints that cannot be overridden by the same approval flow that governs daily operations.
The point is not that a compromised interface could never deceive signers in such a system. It is that a successful deception at the operational layer would not automatically inherit the power to rewrite authority itself. The two surfaces are separated by design, not by operational discipline.
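One such separation can be sketched as follows. Every name, the delay value, and the shape of the "independent proof" are invented for illustration; the point is only that the upgrade path is structurally distinct from the transfer path and cannot complete within a single approval flow.

```python
# Sketch of separated authority: operational approvals cannot, on their
# own, change the mechanism of control (all names and values invented).
OP_THRESHOLD = 3
UPGRADE_DELAY = 7 * 24 * 3600  # e.g. a seven-day timelock, in seconds

class Wallet:
    def __init__(self):
        self.pending_upgrade = None  # (new_impl, activation_time)

    def transfer(self, signatures):
        # Routine path: the ordinary operational threshold.
        if len(signatures) < OP_THRESHOLD:
            raise PermissionError("threshold not met")
        return "transfer executed"

    def propose_upgrade(self, new_impl, authority_proof, now):
        # Authority changes need an independent proof, not just the
        # operational signatures, and cannot activate immediately.
        if not authority_proof:
            raise PermissionError("independent authority proof required")
        self.pending_upgrade = (new_impl, now + UPGRADE_DELAY)

    def activate_upgrade(self, now):
        new_impl, ready_at = self.pending_upgrade
        if now < ready_at:
            raise PermissionError("timelock not elapsed")
        return new_impl
```

In a design like this, a deceived operational signer can still lose a transfer, but the delayed, independently proven upgrade path gives defenders a window that the Bybit transaction never offered.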
When authority is hidden, off-chain, and never collapses onto an artifact that an approved transaction can overwrite, the class of attack demonstrated at Bybit has no equivalent target. There is no implementation pointer to replace. There is no exposed control artifact a valid transaction can capture. A compromised interface can deceive a signer into approving an action. It cannot authorize that action to rewrite the system governing all future actions, because no such path exists in the architecture.
The approval threshold protected a transaction that eliminated the approval threshold. Different architectures do not permit that equivalence.
What This Means for Your System
If you run a multisig treasury, a proxy-based smart contract, or any system where a threshold of approvals governs consequential actions, one question is worth sitting with directly.
Is there an architectural distinction in your system between approving an ordinary action and approving a change to the mechanism that authorizes actions?
If the answer is no, or uncertain, the Bybit failure mode is a description of your current attack surface, not a historical curiosity.
You may secure your own environment carefully. You may train your signers well. You may use hardware wallets and enforce strict signing procedures. None of that changes the question of what your system permits a valid transaction to authorize about itself.
That is an architectural question. It has an architectural answer.
The lesson is not to harden the same model further. It is to question the model itself.
Related: The Bybit Incident and the Limits of Approval Thresholds · Exposed Authority Is the Root Failure · Recovery Is a First-Class Property · Verification Is Not Authority