Vitalik: What improvements can be made to Ethereum PoS, and what are the possible approaches?
This article will focus on the Ethereum "Merge": What aspects of the technical design of proof-of-stake can still be improved, and what are the possible ways to achieve these improvements?
Written by: Vitalik Buterin
Translated by: Deng Tong, Jinse Finance
Special thanks to Justin Drake, Hsiao-wei Wang, @antonttc, and Francesco for their feedback and review.
Originally, the "Merge" referred to the most significant event in the history of the Ethereum protocol since its launch: the long-awaited and hard-won transition from proof-of-work to proof-of-stake. Today, Ethereum has been running as a stable proof-of-stake system for nearly two years, and proof-of-stake has performed exceptionally well in terms of stability, performance, and avoiding centralization risks. However, there are still some important areas in which proof-of-stake can improve.
My 2023 roadmap divides this into several parts: improving technical features such as stability, performance, and accessibility for smaller validators, as well as economic changes to address centralization risks. The former has taken over the "Merge" title, while the latter has become part of the "Scourge".
This article will focus on the "Merge" part: What aspects of the technical design of proof-of-stake can still be improved, and what are the possible ways to achieve these improvements?
This is not an exhaustive list of possible improvements to proof-of-stake; rather, it is a list of ideas that are actively being considered.
Single-Slot Finality and Staking Democratization
What problem are we solving?
Currently, it takes 2-3 epochs (about 15 minutes) to finalize a block, and 32 ETH is required to become a staker. This was originally a compromise to balance three goals:
- Maximize the number of validators that can participate in staking (which directly means minimizing the minimum ETH required for staking)
- Minimize finalization time
- Minimize the overhead of running a node
These three goals are in conflict: In order to achieve economic finality (i.e., an attacker must destroy a large amount of ETH to revert finalized blocks), each validator needs to sign two messages each time finalization occurs. Therefore, if you have many validators, you either need a long time to process all the signatures, or you need very powerful nodes to handle all the signatures simultaneously.
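A back-of-the-envelope sketch makes this tension concrete. The numbers below are illustrative assumptions (roughly 1M validators, two messages per validator per finalization, the current ~2-epoch finality window), not protocol constants:

```python
# Back-of-the-envelope model of the conflict above: every validator signs
# two messages per finalization, so the signature load per second depends on
# validator count and finality time. All numbers are illustrative assumptions.

def messages_per_second(n_validators: int, finality_seconds: float,
                        msgs_per_validator: int = 2) -> float:
    """Messages the network must process per second to reach finality."""
    return n_validators * msgs_per_validator / finality_seconds

# Status quo: ~1M validators, finality spread over ~2 epochs (~12.8 minutes)
status_quo = messages_per_second(1_000_000, 2 * 32 * 12)
# Single-slot finality: the same load compressed into one 12-second slot
ssf = messages_per_second(1_000_000, 12)

print(f"status quo: ~{status_quo:,.0f} msgs/s")     # ~2,604 msgs/s
print(f"single-slot finality: ~{ssf:,.0f} msgs/s")  # ~166,667 msgs/s
```

Compressing finality into one slot multiplies the per-second signature load by roughly the number of slots in the finality window, which is why either aggregation technology or smaller participant sets become necessary.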
Note that all of this depends on a key goal of Ethereum: ensuring that even a successful attack comes at a high cost to the attacker. This is what the term "economic finality" means. If we did not have this goal, we could solve the problem by finalizing each slot with a randomly selected committee (as Algorand does). But the issue with this approach is that if an attacker does control 51% of the validators, they can attack (by reverting finalized blocks, censoring, or delaying finality) at very low cost: only the portion of their nodes that sit in the committee can be detected as participating in the attack and punished, whether through slashing or a minority soft fork. This means the attacker could attack the chain over and over again. Therefore, if we want economic finality, the naive committee-based approach does not work, and at first glance it seems we do need the full validator set to participate.
Ideally, we want to retain economic finality while improving on two fronts:
- Finalize blocks within a single slot (ideally, keeping or even reducing the current 12-second length), instead of 15 minutes
- Allow validators to stake with 1 ETH (lowering the minimum from 32 ETH to 1 ETH)
There are two motivations for the first goal, both of which can be seen as "aligning Ethereum's properties with those of (more centralized) performance-focused L1 chains".
First, it ensures that all Ethereum users can benefit from the higher level of security provided by the finality mechanism. Today, most users cannot enjoy this guarantee because they are unwilling to wait 15 minutes; with single-slot finality, users can see transactions finalized almost immediately after confirmation. Second, if users and applications do not have to worry about the possibility of chain reorgs (except in the relatively rare case of inactivity leaks), it simplifies the protocol and the surrounding infrastructure.
The second goal is motivated by the desire to support solo stakers. Poll after poll has repeatedly shown that the main factor preventing more people from solo staking is the 32 ETH minimum. Lowering the minimum to 1 ETH would solve this issue to the extent that other problems would become the main limiting factors for solo staking.
There is a challenge: the goals of faster finality and more democratized staking are both in conflict with the goal of minimizing overhead. In fact, this is the entire reason we did not adopt single-slot finality from the start. However, recent research has proposed some possible ways to address this issue.
What is it and how does it work?
Single-slot finality involves using a consensus algorithm that finalizes blocks within a single slot. This is not in itself a difficult goal: many algorithms (such as Tendermint consensus) already achieve it with optimal properties. One ideal property unique to Ethereum that Tendermint does not support is the inactivity leak, which allows the chain to continue running and eventually recover even if more than 1/3 of validators go offline. Fortunately, this desire has already been addressed: there are proposals to modify Tendermint-style consensus to accommodate inactivity leaks.
Leading single-slot finality proposals
The hardest part of the problem is figuring out how to make single-slot finality work with a very high number of validators, without causing extremely high node operator overhead. There are several leading solutions for this:
Option 1: Brute force—strive for better signature aggregation protocols, possibly using ZK-SNARKs, which would actually allow us to process signatures from millions of validators per slot.
Horn, one of the designs proposed for better aggregation protocols.
Option 2: Orbit Committee—a new mechanism that allows a randomly selected medium-sized committee to be responsible for finalizing the chain, but in a way that preserves the attack cost properties we are looking for.
One way to think about Orbit SSF is that it opens up a compromise option space, ranging from x=0 (Algorand-style committee, no economic finality) to x=1 (Ethereum status quo), opening up points in the middle where Ethereum still has enough economic finality for extreme security, but at the same time we gain the efficiency advantage of only needing a moderate-sized random sample of validators to participate in each period.
Orbit leverages the pre-existing heterogeneity in validator deposit sizes to obtain as much economic finality as possible, while still giving small validators corresponding roles. In addition, Orbit uses slow committee rotation to ensure a high overlap between adjacent quorums, thus ensuring its economic finality still applies at committee rotation boundaries.
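To make the committee-size trade-off concrete, here is a hedged toy model (not the actual Orbit mechanism: validator counts, the balance distribution, and the 1/3-slashing assumption are all illustrative) that samples committees of different sizes and uses the sampled stake as a rough proxy for attack cost:

```python
import random

# Toy model of the committee-size / economic-finality trade-off: sampling
# more validators raises the stake an attacker must risk, at the cost of
# more signatures to process per slot. All parameters are illustrative.

def sample_committee(balances: list[int], k: int, seed: int = 0) -> list[int]:
    """Uniformly sample k distinct validator indices."""
    rng = random.Random(seed)
    return rng.sample(range(len(balances)), k)

def sampled_stake(balances: list[int], committee: list[int]) -> int:
    return sum(balances[i] for i in committee)

# 100k validators with heterogeneous balances (32..2048 ETH, as EIP-7251 allows)
rng = random.Random(42)
balances = [rng.choice([32, 256, 2048]) for _ in range(100_000)]

for k in (1_000, 8_000, 32_000):
    stake = sampled_stake(balances, sample_committee(balances, k))
    # Reverting finality requires roughly 1/3 of committee stake to be slashed
    print(f"committee {k:>6}: sampled stake ~{stake:,} ETH, "
          f"attack cost >= ~{stake // 3:,} ETH")
```

This is the "x between 0 and 1" space in the paragraph above: larger committees move toward the Ethereum status quo, smaller ones toward Algorand-style committees with weaker economic finality.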
Option 3: Two-tier staking—a mechanism in which stakers are divided into two categories, one with higher deposit requirements and the other with lower deposit requirements. Only the higher-deposit tier directly participates in providing economic finality. There are various proposals regarding the exact rights and responsibilities of the lower-deposit tier (for example, see the Rainbow Staking post). Common ideas include:
- The right to delegate stake to higher-tier stakers
- A requirement that randomly sampled lower-tier stakers attest to, and are needed to finalize, each block
- The right to generate inclusion lists
What are the links to existing research?
- Paths to single-slot finality (2022):
- Specific proposals for Ethereum single-slot finality protocol (2023):
- Orbit SSF:
- Further analysis of Orbit-style mechanisms:
- Horn, signature aggregation protocol (2022):
- Signature merging for large-scale consensus (2023):
- Signature aggregation protocol proposed by Khovratovich et al.:
- STARK-based signature aggregation (2022):
- Rainbow Staking:
What remains to be done? What trade-offs are needed?
There are four main feasible paths (we can also take hybrid paths):
- Maintain the status quo
- Orbit SSF
- Brute-force SSF
- SSF with two-tier staking
(1) Means doing nothing, keeping staking as is, but this would make Ethereum's security experience and staking centralization properties worse than they could be.
(2) Avoids "high tech" and solves the problem by cleverly rethinking protocol assumptions: we relax the "economic finality" requirement so that we still require attacks to be expensive, but accept that the attack cost may be 10 times lower than today (e.g., an attack cost of $2.5 billion instead of $25 billion). It is widely believed that Ethereum's economic finality today is far beyond what is needed, and its main security risks lie elsewhere, so this is arguably an acceptable sacrifice.
The main work is to verify that the Orbit mechanism is safe and has the properties we want, then fully formalize and implement it. In addition, EIP-7251 (increasing the maximum effective balance) allows voluntary validator balance merging, which immediately reduces chain validation overhead and serves as an effective initial stage for Orbit rollout.
(3) Avoids clever rethinking and instead solves the problem with high-tech brute force. To do this requires collecting a large number of signatures (over 1 million) in a very short time (5-10 seconds).
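A rough sketch of why this is hard: signatures would be collected through a tree of aggregators, and each tree level adds a network round trip. The fan-in and per-hop latency below are assumed, illustrative parameters:

```python
import math

# Rough latency model for collecting >1M signatures via a tree of
# aggregators. fan_in and hop_latency_s are assumptions; aggregation
# itself (e.g. BLS combination or SNARK proving) is treated as cheap here.

def aggregation_depth(n_sigs: int, fan_in: int) -> int:
    """Levels of an aggregation tree needed to combine n_sigs signatures."""
    return math.ceil(math.log(n_sigs, fan_in))

def collection_time(n_sigs: int, fan_in: int, hop_latency_s: float) -> float:
    """Each tree level costs roughly one network hop."""
    return aggregation_depth(n_sigs, fan_in) * hop_latency_s

# 1M signatures, each aggregator combining 1024 inputs, ~1s per hop
print(collection_time(1_000_000, 1024, 1.0))  # prints 2.0 (two tree levels)
```

Under these assumptions the network topology fits in the 5-10 second budget; the real difficulty is the aggregation work at each node, which is where SNARK-based schemes like Horn come in.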
(4) Avoids clever rethinking and high-tech, but it does create a two-tier staking system, which still has centralization risks. The risk largely depends on the specific rights granted to the lower staking tier. For example:
- If lower-tier stakers need to delegate their proving rights to higher-tier stakers, delegation may become centralized, and we end up with two highly concentrated staking tiers.
- If random sampling of the lower tier is required to approve each block, an attacker can spend a very small amount of ETH to prevent finality.
- If lower-tier stakers can only create inclusion lists, the proving layer may still be centralized, at which point a 51% attack on the proving layer can censor the inclusion lists themselves.
Multiple strategies can be combined, for example:
- (1 + 2): Add Orbit without implementing single-slot finality.
- (1 + 3): Use brute-force techniques to reduce the minimum deposit size without implementing single-slot finality. The required aggregation is 64 times less than in the pure (3) case, so the problem becomes easier.
- (2 + 3): Implement Orbit SSF with conservative parameters (e.g., a 128k validator committee instead of 8k or 32k), and use brute-force techniques to make it ultra-efficient.
- (1 + 4): Add Rainbow Staking without implementing single-slot finality.
How does it interact with other parts of the roadmap?
In addition to its other benefits, single-slot finality reduces the risk of certain types of multi-block MEV attacks. Moreover, in a single-slot finality world, proposer-builder separation designs and other in-protocol block production pipelines would need to be designed differently.
The weakness of brute-force strategies is that they make it more difficult to shorten slot times.
Single Secret Leader Election
What problem are we solving?
Today, which validator will propose the next block is known in advance. This creates a security vulnerability: attackers can monitor the network, identify which validators correspond to which IP addresses, and launch DoS attacks on validators just before they are about to propose a block.
What is it? How does it work?
The best way to solve the DoS problem is to hide which validator will generate the next block, at least until the block is actually generated. Note that if we remove the "single" requirement, this is easy: one solution is to allow anyone to create the next block, but require randao reveals to be less than 2^256 / N. On average, only one validator will meet this requirement—but sometimes there will be two or more, sometimes zero. Combining the "secret" requirement with the "single" requirement has always been a challenge.
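The threshold election described above can be sketched as a toy simulation. This is not the real mechanism (actual RANDAO reveals are BLS signatures; here a hash of an assumed per-validator secret stands in), but it shows the "on average one, sometimes zero or several" behavior:

```python
import hashlib

# Toy version of the non-single election described above: anyone whose
# per-slot hash value falls below 2**256 // N may propose. On average one
# validator qualifies per slot, but sometimes none and sometimes several do.
# The per-validator secrets and counts are illustrative assumptions.

N = 500                      # validator count (illustrative)
THRESHOLD = 2**256 // N

def qualifies(seed: bytes, validator_secret: bytes) -> bool:
    h = hashlib.sha256(seed + validator_secret).digest()
    return int.from_bytes(h, "big") < THRESHOLD

secrets = [i.to_bytes(8, "big") for i in range(N)]
counts = []
for slot in range(1000):
    seed = slot.to_bytes(8, "big")
    counts.append(sum(qualifies(seed, s) for s in secrets))

avg = sum(counts) / len(counts)
print(f"avg proposers per slot: {avg:.2f}; "
      f"slots with none: {counts.count(0)}; "
      f"slots with several: {sum(c > 1 for c in counts)}")
```

The zero-proposer and multi-proposer slots are exactly why "single" is the hard part of the requirement.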
Single secret leader election protocols solve this by using some cryptographic techniques to create a "blinded" validator ID for each validator, then allowing many proposers to shuffle and re-blind the pool of blinded IDs (similar to how mixnets work). In each slot, a random blinded ID is selected. Only the owner of that blinded ID can generate a valid proof to propose a block, but no one knows which validator the blinded ID corresponds to.
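The blinding-and-shuffling mechanics can be sketched as follows. This is emphatically not cryptographically sound: real Whisk uses an elliptic-curve group with zero-knowledge shuffle proofs, and here modular exponentiation over a small prime stands in purely to show the structure (all group parameters are toy assumptions):

```python
import random

# Toy sketch of Whisk-style blinded-ID trackers. A tracker (g^r, g^(r*k))
# binds to a validator's secret k without revealing it; anyone can
# re-randomize it, but only the holder of k can recognize it afterwards.

P = 2**127 - 1          # toy group modulus (assumption)
G = 3                   # toy generator (assumption)
rng = random.Random(0)

def make_tracker(k: int) -> tuple[int, int]:
    r = rng.randrange(2, P - 1)
    a = pow(G, r, P)
    return a, pow(a, k, P)

def reblind(tracker: tuple[int, int], z: int) -> tuple[int, int]:
    """Anyone can re-randomize a tracker by raising both parts to z."""
    a, b = tracker
    return pow(a, z, P), pow(b, z, P)

def owns(tracker: tuple[int, int], k: int) -> bool:
    """Only the holder of k can recognize their (re-blinded) tracker."""
    a, b = tracker
    return pow(a, k, P) == b

secrets = [rng.randrange(2, P - 1) for _ in range(8)]
pool = [make_tracker(k) for k in secrets]

# Shufflers repeatedly permute and re-blind the pool; ownership survives.
for _ in range(3):
    z = rng.randrange(2, P - 1)
    pool = [reblind(t, z) for t in pool]
    rng.shuffle(pool)

chosen = pool[0]
owner = [i for i, k in enumerate(secrets) if owns(chosen, k)]
print("validator", owner[0], "can prove ownership of the selected blinded ID")
```

After the shuffles, an observer cannot link the chosen tracker back to a validator, yet exactly one validator can still produce a proof of ownership when their slot arrives.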
Whisk SSLE protocol
What are the links to existing research?
- Dan Boneh's paper (2020):
- Whisk (Ethereum-specific proposal, 2022):
- Single secret leader election tag on ethresear.ch:
- Simplified SSLE using ring signatures:
What remains to be done? What trade-offs are needed?
In practice, what remains is to find and implement a protocol that is simple enough for us to easily deploy it on mainnet. We value Ethereum being a relatively simple protocol, and we do not want to further increase complexity. The SSLE implementations we have seen add hundreds of lines of spec code and introduce new assumptions in complex cryptography. Finding a sufficiently effective quantum-resistant SSLE implementation is also an open question.
It may ultimately turn out that only when we introduce general-purpose zero-knowledge proof mechanisms on Ethereum's L1 for other reasons (such as state trees, ZK-EVM), will the "marginal extra complexity" of SSLE drop low enough.
Another option is to ignore SSLE altogether and instead use out-of-protocol mitigations (such as at the p2p layer) to address the DoS problem.
How does it interact with other parts of the roadmap?
If we add an attester-proposer separation (APS) mechanism, such as execution tickets, then execution blocks (i.e., blocks containing Ethereum transactions) will not need SSLE, as we can rely on specialized block builders. However, for consensus blocks (i.e., blocks containing protocol messages such as attestations, possibly inclusion lists, etc.), we would still benefit from SSLE.
Faster Transaction Confirmation
What problem are we solving?
It is valuable to further reduce Ethereum's transaction confirmation time, from 12 seconds to 4 seconds. Doing so would significantly improve the user experience of both the L1 and based rollups, while making defi protocols more efficient. It would also make it easier for L2s to decentralize, as it would allow a large class of L2 applications to work on based rollups, reducing the need for L2s to build their own committee-based decentralized sequencing.
What is it? How does it work?
There are roughly two techniques here:
- Reduce slot time, for example to 8 seconds or 4 seconds. This does not necessarily mean 4-second finality: finality essentially requires three rounds of communication, so we can make each round of communication a separate block, and after 4 seconds at least get preliminary confirmation.
- Allow proposers to issue pre-confirmations during the slot. In the extreme case, proposers can include transactions in their block in real time and immediately issue pre-confirmation messages for each transaction ("my first transaction is 0x1234...", "my second transaction is 0x5678..."). Cases where proposers issue two conflicting confirmations can be handled in two ways: (i) by slashing the proposer, or (ii) by having attesters vote on which one was earlier.
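The equivocation-detection side of option (i) can be sketched as follows (the message format, names, and hex values are illustrative, not a real protocol):

```python
# Toy model of the pre-confirmation scheme above: a proposer commits to
# "my i-th transaction is X"; two commitments for the same index naming
# different transactions are a slashable conflict.

def find_conflicts(preconfs: list[tuple[int, str]]) -> list[int]:
    """Return transaction indices for which the proposer equivocated."""
    seen: dict[int, str] = {}
    conflicts = []
    for index, tx_hash in preconfs:
        if index in seen and seen[index] != tx_hash:
            conflicts.append(index)
        seen.setdefault(index, tx_hash)
    return conflicts

honest = [(1, "0x1234"), (2, "0x5678")]
equivocating = [(1, "0x1234"), (1, "0xdead"), (2, "0x5678")]

print(find_conflicts(honest))        # []
print(find_conflicts(equivocating))  # [1] -> slash the proposer
```

In a real design the proposer would sign each pre-confirmation, so that a conflicting pair is itself a self-contained slashing proof.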
What are the links to existing research?
- Pre-confirmation-based:
- Protocol-enforced proposer commitments (PEPC):
- Interleaved epochs on parallel chains (2018 idea for low latency):
What remains to be done, and what are the trade-offs?
It is not yet clear how practical it is to reduce slot times. Even today, stakers in many parts of the world have difficulty obtaining attestations quickly enough. Attempting 4-second slot times risks centralizing the validator set, making it impractical, due to latency, to run a validator outside a few privileged regions.
The weakness of the proposer pre-confirmation approach is that it can greatly improve inclusion time in the average case, but not in the worst case: if the current proposer is running well, your transaction will get a pre-confirmation within 0.5 seconds instead of being included in (on average) 6 seconds, but if the current proposer is offline or not running well, you still have to wait the full 12 seconds to start the next slot and get a new proposer.
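A toy expected-value model makes the average-case/worst-case distinction concrete (the 0.5-second pre-confirmation delay and the liveness figures are assumptions):

```python
# Expected inclusion time as a function of proposer liveness: a live
# proposer gives a fast pre-confirmation; an offline one forces a wait
# for the next slot. Delay and liveness numbers are illustrative.

SLOT_SECONDS = 12.0

def expected_inclusion(p_live: float, preconf_delay: float = 0.5,
                       fallback_wait: float = SLOT_SECONDS) -> float:
    """Average wait: fast path with probability p_live, else next slot."""
    return p_live * preconf_delay + (1 - p_live) * (fallback_wait + preconf_delay)

for p in (1.0, 0.99, 0.9):
    print(f"proposer liveness {p:.0%}: expected wait ~{expected_inclusion(p):.2f}s")
```

The average improves dramatically, but the worst case, an offline proposer, still costs a full slot, which is the weakness described above.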
In addition, there is an unresolved question of how to incentivize pre-confirmations. Proposers are incentivized to maximize their optionality for as long as possible. If attesters sign off on the timeliness of pre-confirmations, transaction senders could make part of the fee conditional on immediate pre-confirmation, but this would add extra burden to attesters and could make it harder for them to continue acting as neutral "dumb pipes".
On the other hand, if we do not attempt this and keep finality at 12 seconds (or longer), the ecosystem will place more value on L2-level pre-confirmation mechanisms, and cross-L2 interactions will take longer.
How does it interact with other parts of the roadmap?
Proposer-based pre-confirmations realistically depend on an attester-proposer separation (APS) mechanism, such as execution tickets. Otherwise, the pressure to provide real-time pre-confirmations may be too centralizing for regular validators.
Other Research Areas
51% Attack Recovery
It is commonly believed that if a 51% attack occurs (including attacks that cannot be cryptographically proven, such as censorship), the community will unite to implement a minority soft fork, ensuring the good guys win and the bad guys are leaked or slashed for inactivity. However, this degree of reliance on the social layer can be argued to be unhealthy. We can try to reduce reliance on the social layer and make the recovery process as automated as possible.
Full automation is impossible, because if it were possible, this would amount to a >50% fault-tolerant consensus algorithm, and we already know the (very strict) mathematically provable limitations of such algorithms. But we can achieve partial automation: for example, if a chain censors transactions that a client has seen for long enough, that client can automatically refuse to accept the chain as finalized, or even refuse to accept it as the head of the fork choice. A key goal is to ensure that the bad guys in an attack at least cannot win quickly.
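This partially automated response can be sketched as follows (the threshold, data structures, and method names are all assumptions for illustration, not a real client design):

```python
# Sketch of the partial automation described above: a client tracks how
# long it has seen each pending transaction; if a chain offered for
# finalization excludes transactions seen for longer than a threshold,
# the client withholds its acceptance of finality.

CENSORSHIP_THRESHOLD = 4 * 32 * 12   # e.g. ~4 epochs, in seconds (assumption)

class Client:
    def __init__(self) -> None:
        self.first_seen: dict[str, float] = {}

    def observe_tx(self, tx: str, now: float) -> None:
        self.first_seen.setdefault(tx, now)

    def accept_finalized(self, chain_txs: set[str], now: float) -> bool:
        """Refuse finality if a long-pending transaction was excluded."""
        for tx, t in self.first_seen.items():
            if tx not in chain_txs and now - t > CENSORSHIP_THRESHOLD:
                return False
        return True

c = Client()
c.observe_tx("0xaaaa", now=0.0)
print(c.accept_finalized({"0xbbbb"}, now=100.0))     # True: not pending long
print(c.accept_finalized({"0xbbbb"}, now=10_000.0))  # False: looks censored
```

Because clients see slightly different mempools, such a rule can only delay acceptance rather than coordinate a fork by itself, which is why the social layer remains the final backstop.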
Increasing Quorum Threshold
Today, a block is finalized if 67% of the stake backs it. Some believe this is too aggressive. In the entire history of Ethereum, there has been only one (very brief) finality failure. If this percentage were increased to 80%, the number of added non-finality periods would be relatively low, but Ethereum would gain security: in particular, many contentious situations would result in a temporary halt to finality. This seems healthier than the "wrong side" winning immediately, whether the wrong side is an attacker or a buggy client.
This also answers the question of "what is the point of solo stakers". Today, most stakers already stake through pools, and it seems unlikely that solo stakers will reach as high as 51% of staked ETH. However, if we try, it seems possible for solo stakers to reach a minority sufficient to block the majority, especially if the majority is at 80% (so the minority needed to block the majority is only 21%). As long as solo stakers do not participate in 51% attacks (whether finality reversion or censorship), such attacks will not achieve a "clean victory", and solo stakers will actively help organize minority soft forks.
Quantum Resistance
Metaculus currently believes, albeit with considerable uncertainty, that quantum computers are likely to start breaking cryptography sometime in the 2030s.
Quantum computing experts, such as Scott Aaronson, have also recently begun to take the possibility of quantum computers working in the medium term more seriously. This affects the entire Ethereum roadmap: it means that every part of the Ethereum protocol that currently relies on elliptic curves will need some kind of hash-based or other quantum-resistant alternative. In particular, this means we cannot assume we will always be able to rely on the excellent properties of BLS aggregation to handle signatures from large validator sets. This justifies being conservative in performance assumptions in proof-of-stake design, and is also a reason to more aggressively develop quantum-resistant alternatives.