Ethereum Evolved: Dencun Upgrade Part 5, EIP-4844

Date

December 18, 2023

Author

Andrew Breslin & TJ Keel


In the fifth and final part of the Dencun Upgrade series, we cover EIP-4844, also known as “proto-danksharding,” which will drastically reduce the cost of posting L2 rollup data to Ethereum mainnet via “blobs.”

EIP-4844 cover image

Reading time

5 mins

To conclude the Ethereum Evolved Dencun Upgrade series, part five explains EIP-4844, also known as proto-danksharding, and the closely related EIP-7516. Below are parts one through four of the series, which cover the remaining EIPs in this fork:

  • Part 1 of this series explored the opcode-focused EIPs 5656 and 6780.

  • Part 2 of this series focused on two staking and validator upgrades: EIP-7044 and EIP-7045.

  • Part 3 detailed the implications of EIP-4788, and how communication between Ethereum’s consensus and execution layers has evolved.

  • Part 4 outlined the precautionary EIP-7514, which caps the rate of validator set growth, as well as the introduction of transient storage in EIP-1153.

EIP-4844 represents a tangible and important step in Ethereum’s journey to scale in a decentralized way. More specifically, it helps scale Ethereum by introducing a new “blob-carrying” transaction type. Rollup sequencers (and potentially others) will use this new transaction type to post data to Ethereum mainnet more cheaply than is currently possible. Further, this EIP preserves decentralization by limiting the size and number of blobs included per block, so that the computational and storage requirements of Ethereum nodes don’t drastically increase. In future upgrades, these limits can be raised to scale Ethereum further.

Blob data can be less expensive than regular Ethereum calldata of similar size because the blob data itself is not actually made accessible to Ethereum’s execution layer (EL, aka the EVM). Rather, only a reference to the blob’s data will be accessible to the EL; the data within the blob itself will be downloaded and stored solely by Ethereum’s consensus layer (CL, aka beacon nodes), and only for a limited period of time (~18 days, typically).

Through its blob-carrying transaction format, EIP-4844 improves Ethereum’s scalability, preserves decentralization, and most importantly, sets the stage for more complex and impactful scalability upgrades to be implemented in the future; namely, full Danksharding.

What is a Blob?

Before diving into how the Ethereum community arrived at EIP-4844, or proto-danksharding, as the logical next step to scale Ethereum, we’ll first expand upon the main feature that this EIP introduces - blob-carrying transactions (and blobs themselves). 

In Ethereum today, blocks are filled with standard Ethereum transactions. After the Cancun-Deneb fork, blocks can be filled with a combination of these typical Ethereum transactions, and so-called blob-carrying transactions. 

Blobs can be imagined as ‘side-cars’ full of data that will ride alongside blocks. The transactions that fill Ethereum blocks don’t necessarily have to have blobs riding with them, but a blob cannot be included in the network without an associated blob-carrying transaction making its way into a block. While exactly how blobs will be used (and by whom) remains to be seen, it is assumed that the sequencers of Ethereum Layer 2 rollups will be the primary consumers of ‘blobspace’, and that blobs will primarily contain batched transactions that were executed on these rollups. Moreover, we can think of these blobs as ‘compressed’ data structures, like a .zip file.

EIP-4844 image 1

A blob-carrying transaction contains two new fields that a typical Ethereum transaction does not (sketched in code below):

  • a bid that defines how much the transaction submitter is willing to pay to have their blob-carrying transaction included in a block (max_fee_per_blob_gas), and

  • a list of references to the blobs included in the transaction (blob_versioned_hashes). 
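For illustration, here is a minimal sketch (in Python, loosely mirroring the transaction payload laid out in EIP-4844) of where these two new fields sit alongside the familiar EIP-1559-style fields; signature fields and encoding details are omitted, and the exact client-side representation will vary:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BlobTransaction:
    """Sketch of an EIP-4844 (type-3) 'blob-carrying' transaction payload."""
    # Familiar EIP-1559-style fields:
    chain_id: int
    nonce: int
    max_priority_fee_per_gas: int
    max_fee_per_gas: int
    gas_limit: int
    to: bytes        # 20-byte address; blob transactions cannot create contracts
    value: int
    data: bytes
    # New in EIP-4844:
    max_fee_per_blob_gas: int           # the submitter's bid for blob gas
    blob_versioned_hashes: List[bytes]  # 32-byte references, one per blob
```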

Notably, the blob-carrying transaction doesn’t actually include the blob data itself; it carries only a reference to it in the blob_versioned_hashes field (the second field above). Technically, this reference is a hash of a KZG commitment to the blob, but for our purposes, it’s sufficient to think of it as a fingerprint that is unique to each blob and that can be used to tie each blob to a blob-carrying transaction.
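As a rough sketch, that fingerprint is derived from the blob’s KZG commitment as in the kzg_to_versioned_hash helper defined in EIP-4844; the leading version byte (0x01) leaves room for future commitment schemes:

```python
import hashlib

VERSIONED_HASH_VERSION_KZG = 0x01  # version byte defined in EIP-4844

def kzg_to_versioned_hash(kzg_commitment: bytes) -> bytes:
    """Turn a 48-byte KZG commitment into the 32-byte reference carried
    in a blob transaction's blob_versioned_hashes list."""
    digest = hashlib.sha256(kzg_commitment).digest()
    # Replace the first byte of the hash with the version byte so the
    # commitment scheme can be swapped out in future upgrades.
    return bytes([VERSIONED_HASH_VERSION_KZG]) + digest[1:]
```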

Since only this reference to each blob exists within a given block, the L2 transactions contained in each blob are not, and cannot be, executed by Ethereum’s execution layer (aka, the EVM). This is why blob data of a given size (128KB per blob) can be posted to Ethereum by a rollup sequencer more cheaply than regular Ethereum calldata of similar size - blob data does not need to be executed by the Layer 1 (Ethereum, in this case). The actual data that makes up each blob is circulated and stored exclusively on Ethereum’s consensus layer (i.e. beacon nodes), and only for a limited period of time (4096 epochs, or ~18 days).

Technically, blobs are vectors of data made up of 4096 field elements, with each field element being 32 bytes in size. Blobs are constructed this way so that we can create the succinct cryptographic references to them found in blob-carrying transactions, and so that blobs can be represented as polynomials. Representing blobs as polynomials allows us to apply some clever mathematical tricks - namely, erasure coding and data availability sampling - that ultimately reduce the amount of work each Ethereum consensus node must perform to verify the data in a blob. A full explanation of the mathematics that enable all of this is out of scope for this post, but Domothy’s Blobspace 101 article provides an accessible starting point.
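In concrete numbers, using the constants from EIP-4844, a blob’s size works out to 128KB:

```python
FIELD_ELEMENTS_PER_BLOB = 4096   # field elements per blob (EIP-4844)
BYTES_PER_FIELD_ELEMENT = 32     # each element is a 32-byte value

blob_size_bytes = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT
print(blob_size_bytes)           # 131072 bytes, i.e. 128KB per blob
```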

Making Use of the Blob

How will rollup sequencers make use of blobs? 

Ultimately, rollup sequencers need to post some data to Ethereum mainnet. Today, they do this by posting batched transaction data as Ethereum calldata, which is arbitrary byte data attached to a transaction and can be quite expensive. After EIP-4844 is implemented, rather than submitting their data to mainnet as calldata, rollup sequencers can sign and broadcast the new “blob-carrying” transaction type and achieve the same goal: having their data posted to, and therefore secured by, Ethereum mainnet.

EIP-4844’s New Precompile

Proto-Danksharding also introduces a new precompile, the point evaluation precompile, which is used to verify that the data in a blob matches the reference to the blob included in the blob-carrying transaction (i.e. the versioned hash of the KZG commitment to the blob). Optimistic rollups and zk-rollups are each expected to make use of it, albeit in different ways.

To more specifically outline how each type of rollup will use this, we’ll quote EIP-4844 itself:

“Optimistic rollups only need to actually provide the underlying data when fraud proofs are being submitted. The fraud proof submission function would require the full contents of the fraudulent blob to be submitted as part of calldata. It would use the blob verification function to verify the data against the versioned hash that was submitted before, and then perform the fraud proof verification on that data as is done today.”

“ZK rollups would provide two commitments to their transaction or state delta data: the KZG in the blob and some commitment using whatever proof system the ZK rollup uses internally. They would use a commitment proof of equivalence protocol, using the point evaluation precompile, to prove that the KZG (which the protocol ensures points to available data) and the ZK rollup’s own commitment refer to the same data.”

Blob Market

While the number of blobs that may be attached to a block is dynamic (ranging from zero to six), three blobs will be targeted per block. This target is enforced via a pricing incentive mechanism not dissimilar from EIP-1559: blob pricing gets more expensive when more than three blobs are attached to a block, and cheaper when fewer than three blobs are set to be attached to a given block.

Specifically, the cost of blob gas from one block to the next can increase or decrease by up to roughly 12.5%. How closely each block’s price movement approaches that +/- 12.5% bound is determined by the total amount of blob gas used by the attached blobs, which scales directly with the number of blobs, since every blob consumes the same amount of blob gas whether or not it is completely filled (all blobs are 128KB in size).

This pricing calculation works via a running tally of ‘excess’ blob gas: if blocks consistently host more than three blobs, the tally grows and the price continues to increase; if they consistently host fewer, the tally drains and the price falls back toward its minimum (see the sketch below).
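A sketch of this mechanism, adapted from the pseudocode and constants in EIP-4844 itself (the running tally is called excess blob gas, and the exponential is approximated with integer math):

```python
# Constants from EIP-4844
GAS_PER_BLOB = 2**17                           # 131,072 blob gas per blob, full or not
TARGET_BLOB_GAS_PER_BLOCK = 3 * GAS_PER_BLOB   # i.e. a target of three blobs per block
MIN_BASE_FEE_PER_BLOB_GAS = 1                  # in wei
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477        # tunes the ~12.5% per-block ceiling

def calc_excess_blob_gas(parent_excess_blob_gas: int, parent_blob_gas_used: int) -> int:
    """The running tally: blob gas used above the target accumulates,
    and usage below the target drains the tally back toward zero."""
    if parent_excess_blob_gas + parent_blob_gas_used < TARGET_BLOB_GAS_PER_BLOCK:
        return 0
    return parent_excess_blob_gas + parent_blob_gas_used - TARGET_BLOB_GAS_PER_BLOCK

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator), as in the EIP."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def get_base_fee_per_blob_gas(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS, excess_blob_gas, BLOB_BASE_FEE_UPDATE_FRACTION)
```

Because every blob consumes the same fixed amount of blob gas, each blob above or below the three-blob target moves the tally by the same step, which is what bounds the per-block fee change at roughly 12.5%.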

As the blob market fluctuates via the dynamic pricing model described above, layer 2 contracts will need near real-time pricing information about the blob market to ensure proper accounting. Alongside EIP-4844, the corresponding EIP-7516 will ship to introduce the BLOBBASEFEE opcode, which rollups and layer 2s can use to query the current blob base fee from the block header. It is an inexpensive query, requiring only 2 gas.

Blob Expiry

A blob is transient: it is designed to remain available for 4096 epochs, which translates to roughly 18 days. After this expiry, the data within the blob will no longer be retrievable from the majority of consensus clients. However, evidence of its prior existence will remain on mainnet in the form of a KZG commitment (we’ll explain these later). You can think of this as a leftover fingerprint, or a fossil. These cryptographic commitments can be used to prove that specific blob data once existed, and was included on Ethereum mainnet.

Why was 18 days chosen? It’s a compromise between the cost of increased storage requirements for nodes and how long the data remains available, made with optimistic rollups in mind, which have a seven-day fault proof window. That window set the minimum amount of time blobs must remain accessible, but ultimately more time was allotted. A power of two was chosen for simplicity (4096 epochs is 2^12).
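The arithmetic behind the ~18-day figure, assuming 32 slots per epoch and 12-second slots:

```python
SLOTS_PER_EPOCH = 32
SECONDS_PER_SLOT = 12
BLOB_RETENTION_EPOCHS = 4096  # 2**12

retention_seconds = BLOB_RETENTION_EPOCHS * SLOTS_PER_EPOCH * SECONDS_PER_SLOT
print(retention_seconds / 86_400)  # ≈ 18.2 days
```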

Even though the protocol will not mandate blob storage beyond 18 days, it is highly likely that certain node operators and service providers will archive this data. That is to say, a robust blob archive market is likely to emerge off-chain, despite the protocol not mandating permanent inclusion on-chain.

Blob Size

Each blob attached to a block may hold 128KB of temporary data, although a given layer 2 may not completely fill a given blob. Importantly, even if a layer 2 does not completely fill a blob, the “size” of the blob attached to the block will always be 128KB (the unused space must still be accounted for). Therefore, given the range of potential blobs per block, EIP-4844 may increase the data associated with a block by up to 768KB (128KB per blob x 6 possible blobs).

Immediately after EIP-4844, layer 2s will not collectively fill blobs; the data one layer 2 batches into blobspace will have no relationship with the data a different layer 2 batches. It’s worth noting that protocol upgrades or future innovations, such as shared sequencers or blob sharing protocols, may eventually allow L2s to collectively fill a given blob.

Sharing is, of course, how blockspace works today, where transactions from protocols, rollups, dApps, and users are bundled together according to size and priority fee. But whereas this pooling from a public mempool typically produces consistently full, or near-full, blocks, individual rollups may only rarely bump up against the 128KB blob limit until some form of blob sharing is introduced.

Not only will there be a single sequencer or organizer per blob (for now, in a world of centralized sequencers), but the relative cheapness of blobspace may also mean less cramming and efficiency-maxing than we see in blockspace, at least in the immediate term. Blobspace may therefore not be used close to its maximum efficiency until the rollup market, and the rollups themselves, further mature.

Nor will layer 2s be forced to use blobspace: some may opt to continue using blockspace on occasion, or even other data availability platforms. One can imagine many creative posting strategies depending on the need for data retrieval and the market prices across each storage type.

EIP-4844 image 2

In any case, a block with six full blobs attached represents a meaningful increase in a block’s size. If a block today can be as large as ~1.875MB, and a full set of blobs can add as much as ~0.75MB, blocks could grow by as much as ~40%.

However, this increase only applies on a rolling ~18-day window at the head of the chain. A node on the network will therefore not need to meaningfully increase its long-term storage capacity; 18 days of blobs represents a small fraction of the history a node already stores.

The following calculation can be used to anticipate the targeted increase in storage space required per node (also sketched in code below):

  • Target 3 blobs at 128KB each: 384KB per block

  • 32 blocks per epoch x 4096 epochs for blob expiry: 131,072 blocks with blobs

  • 384KB x 131,072 blocks: ~48GB increase in storage
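The same estimate in code, treating 1KB as 1,024 bytes:

```python
BLOB_SIZE_BYTES = 128 * 1024      # 128KB per blob
TARGET_BLOBS_PER_BLOCK = 3
SLOTS_PER_EPOCH = 32
RETENTION_EPOCHS = 4096           # the blob expiry window

blocks_with_blobs = SLOTS_PER_EPOCH * RETENTION_EPOCHS                      # 131,072 blocks
extra_bytes = TARGET_BLOBS_PER_BLOCK * BLOB_SIZE_BYTES * blocks_with_blobs
print(extra_bytes / 2**30)        # ≈ 48GB of additional, rolling storage
```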

KZG Commitments and the KZG Ceremony

We previously compared blobs to ‘sidecars’ attached to blocks, or the ‘main car.’ Furthering this analogy, it’s appropriate to consider the hash of a KZG commitment as the rod that attaches the two cars; each blob requires its own KZG commitment to ride along with the block.

During the consensus process post EIP-4844, validators will need to download entire blobs to check that they are available and to verify that the KZG commitment is correct. After full Danksharding is implemented, they will only need to see a fraction of a blob to make the same verification.

However, in the event that blob data needs to be submitted to the EVM, both in Proto-Danksharding and under Danksharding, the KZG commitment itself can be used to prove that individual elements of that data are correct without having to provide the entire 128KB blob.

As a prerequisite to generating the KZG commitments associated with each blob, the community conducted a KZG Trusted Setup Ceremony to create the necessary cryptographic parameters, without which none of the future KZG commitments would be possible. This ceremony was open to all, and its chief purpose was to generate shared parameters from secret randomness that no single party could predict or recreate.

In fact, not only was the ceremony open to all, but each participant who opted to join generated a portion of the necessary secret randomness, which, only when combined, allows for the creation of the KZG commitments needed to attach blobs to blocks. To reveal the secrets behind these cryptographic parameters and therefore break the KZG commitment scheme, every single participant would need to collude and connect their individual pieces of information to reconstruct the whole.

As a consequence, only one participant needed to honestly discard their portion of the secret to make the whole impossible to recreate. Any individual who contributed to the ceremony could therefore consider the setup trustless, as they would only need to trust themselves.

The Evolution of Ethereum Scalability

The diagram below shows how the Ethereum community’s ideation process around scaling has progressed, from Full Sharding to Proto-Danksharding, over the past few years:

EIP-4844 image 3

Over time, consensus on the optimal implementation of sharding has gradually shifted towards a more pragmatic and rollup-centric approach. EIP-4844 represents a compromise between the ideal end goal and a realistic near-term implementation, while still leaving a technical runway for the ideal end state to be achieved.

Full Sharding

The concept of sharding was originally proposed as a horizontal scaling solution for Ethereum, and has been discussed and refined within the Ethereum community since the inception of the chain. In its original form, implementing Full Sharding (aka full execution sharding) would have meant splitting the Ethereum blockchain into multiple shards (or mini-blockchains) that run in parallel to the beacon chain. Each of these shards would operate similarly to the post-Merge Ethereum blockchain, with distinct blocks and block proposers per shard, and with each shard being secured by a randomly assigned, constantly changing subset of Ethereum’s active validator set. These validator subsets would be tasked with executing and validating transactions, attesting to blocks, and proposing blocks within their assigned shard. The beacon chain would act as the orchestrator of this sharded system, randomly assigning validators to shards. 

Crucially, Full Sharding would have seen both Ethereum’s execution layer (EL) and consensus layer (CL) be sharded to achieve the network's scalability goals. This was, and still is, a viable path for the Ethereum blockchain to achieve decentralized scaling. In practice, however, implementing Full Sharding requires answering a number of still unanswered research questions about communication between shards, the economic security of shards, and so on. The complexity and unknowns around implementing Full Sharding created an opportunity for alternative scaling solutions - those that could be implemented more quickly – to arise.

Rollups & Data Sharding

Those familiar with the Ethereum ecosystem are likely aware that since 2020, Layer 2 solutions (L2s), including rollups, have emerged as the predominant scaling solution for Ethereum. Rollups, whether optimistic or zk-rollups, are separate blockchains that run “on top” of Ethereum mainnet (aka Layer 1, or L1). Like L1 blockchains, rollups have their own internal state, and they are appealing to users and developers because transacting on them is much cheaper than transacting on Ethereum mainnet. This is because in rollups, user transactions are batched and only periodically posted in a single transaction to a smart contract on Ethereum mainnet by each rollup’s so-called sequencer.

Outsourcing transaction execution to these optimistic and zk-rollups effectively removes the need for Ethereum nodes to do this computationally expensive work of execution themselves, but still allows Ethereum nodes to secure and validate L2 transactions by way of enforcing constraints on the data that each rollup’s sequencer can post. In the case of optimistic rollups, all data posted by the sequencer is assumed to be correct. However, a delay is enforced before some data, like the bridging/releasing of funds from L2 to L1, can be acted upon. During this delay, anyone can (at least in theory) submit a fault proof to demonstrate that the sequencer posted malicious or incorrect data. It’s worth noting that in practice, today’s dominant optimistic rollups either do not have this fault proof mechanism enabled, or only allow whitelisted entities to submit fault proofs. In the case of zk-rollups, the only data that the L1 contract will accept from the sequencer is that which comes with a cryptographic validity proof, which provides a mathematical guarantee of the data’s correctness. 

With a thriving ecosystem of rollups, demand for transaction execution can increasingly be met by L2 solutions. In this way, the existence of rollups made sharding Ethereum’s execution layer less necessary. However, validating these L2 transactions still requires having on-demand access to the data necessary for verification. So, the focal point of Ethereum scalability shifted from Full Sharding of Ethereum’s EL and CL (i.e. scaling execution and data availability) to the comparatively simpler task of sharding Ethereum’s CL (i.e. scaling data availability). 

Danksharding

As we saw above, the introduction of L2 rollups meant that Ethereum could outsource execution to these out-of-protocol scaling solutions, and abandon Full Sharding. Danksharding – named after Ethereum researcher Dankrad Feist – represents an even further simplification that eliminates the need to shard Ethereum’s CL. 

Rather than splitting Ethereum’s validating nodes into shards that independently process and validate all Ethereum and L2 data, Danksharding proposed a new transaction type that contains a reference to large, additional “blobs” of data (note: these are the same “blob-carrying” transactions that we’ll get with EIP-4844). By constructing these blobs in a clever way, so that they can be represented as polynomials, Ethereum’s validating nodes can make use of what’s known as a polynomial commitment scheme to verify that blob data is available probabilistically, by way of data availability sampling, without having to download and verify the entirety of the blob. Ultimately, this allows the Ethereum protocol to publish blocks with references to very large amounts of blob data, while keeping validating nodes small, so that decentralization can be preserved. 

Notably, keeping Ethereum’s validating nodes small in this way requires proposer/builder separation (PBS), wherein a block builder processes and executes all block and blob data, the validator tasked with proposing a new block simply selects and proposes the block header submitted with the highest bid, and all other validators verify the block data via data availability sampling. Since Ethereum validators won’t include blocks on mainnet unless they can verify that the block’s data is available, this makes it virtually impossible for a rollup sequencer to post their data to mainnet while withholding some, or all, of that data.

As mentioned, Danksharding eliminates the need to segregate Ethereum validators into subgroups tasked with validating separate shards of the Ethereum blockchain. In fact, Danksharding is somewhat of a misnomer, and is perhaps better called Danksampling, since sharding is not technically happening as it was originally designed. Instead of validation being split horizontally across multiple groupings of Ethereum validating nodes, blob validation will be probabilistically sampled across a single pool of validators, while block validation remains unchanged.

Thus, the introduction of Danksharding meant that, not only could we avoid sharding Ethereum’s execution layer (thanks to the introduction of L2 rollups), we could technically avoid it on the consensus layer as well thanks to the introduction of cleverly constructed blobs, data availability sampling, and PBS. While the means are vastly different, Data Sharding and Danksharding are able to provide a similar result - scaling that preserves Ethereum’s decentralization. 

Proto-Danksharding

Still, we are not implementing Danksharding in the Cancun-Deneb upgrade. EIP-4844 is often referred to as 'Proto-Danksharding', named after Ethereum researchers Protolambda and Dankrad Feist. It serves as a preliminary version of Danksharding, and will establish most of the cryptographic foundations necessary for Danksharding’s full implementation.

Once EIP-4844 is in place, the remaining work to implement Danksharding will be confined to the consensus layer. Afterwards, there will be no further requirements for the execution layer teams or the rollups themselves to transition from Proto-Danksharding to Danksharding. Rollups will simply be provided with more and larger blobs to work with.

In summary, sharding and decentralized scaling may well be implemented in the reverse order from how the Ethereum community originally thought through the problem. The exception is Full Sharding, which has fallen out of favor and is unlikely to be implemented as demand for execution increasingly moves from mainnet to rollups. At the moment, even Data Sharding is often considered superfluous when compared to fully implemented Danksharding.

The diagram below shows the most likely implementation path for sharding on Ethereum:

EIP-4844 image 4

In practice, we can see the Ethereum community’s movement away from a monolithic design throughout Ethereum’s history, and its embrace of a modular scaling architecture. Ethereum’s rollup-centric roadmap does not require execution optimization on mainnet, but neither does it prohibit it. Importantly, Data Sharding, or even Full Sharding, remain possibilities in the future.

The Path to Danksharding

After EIP-4844 goes live, every node will need to download every blob. To move beyond Proto-danksharding and into Danksharding, nodes will need to download only a portion of these blobs, in a process referred to as data availability sampling. By doing so, we can increase the number of blobs attached to each block without increasing the load felt by each node operator; this is real sharding.

In the same way Proto-danksharding targets 3 blobs but allows for a maximum of 6, Danksharding aspires to target 8 blobs, with a maximum of 16.

But we’re not done! In addition to increasing the number of blobs that may be attached to a block, Danksharding will also increase the size of each of these blobs. And through erasure coding, which extends blob data with redundancy, each node will not have to verify an entire blob, only a portion of it.

If data availability sampling reduces the number of blobs a given node must verify, erasure coding reduces the amount of a blob a node must verify. Still, there are other, more nuanced technical problems that must be addressed, one of them being the creation of a more resilient network topology. Here is some additional reading for those interested. 

Charting the Future of Ethereum—One EIP at a Time 

Our five-part series covered the nine EIPs that will be included in the upcoming Cancun-Deneb hard fork. By most estimations, it is reasonable to expect this hard fork to launch towards the end of March, or possibly sometime in April. Before that happens, we’ll combine these five posts into a single comprehensive report that weaves the implications of each EIP together to more cohesively explain Ethereum’s trajectory. We’ll likely have additional details on the subsequent hard fork, Prague-Electra, by then, and we’ll speculate on its possible upgrades and areas of focus.


For those seeking more in-depth information than this blog post provides, the authors found the dedicated EIP-4844 website, and Domothy’s Blobspace 101 article particularly helpful. Additionally, check out this course on "Understanding Ethereum Network Upgrades," hosted by the Education DAO. It's free, with donations encouraged to support more course content like this.