Understanding Blockspace: The Foundation of Blockchain Networks 🧱
How Blockspace Powers Decentralized Networks and Web 3
Understanding blockspace is essential for making sense of crypto and Web 3, and it can also prepare us for a potential future resurgence of the industry. It's no surprise that blockspace is considered one of the industry's foundations.
What is Blockspace? 🌌
Unlike traditional computing, where software is subordinate to whoever controls the hardware it runs on, blockchains provide computation and storage that no single hardware owner controls, which is what makes them more trustworthy than centralized parties.
When a user initiates a transaction, it is broadcast peer-to-peer across the network, and the user attaches a fee. The fee is a bid for blockspace: it pays for the transaction to be processed and included in a block.
Due to the limited block size, only a limited number of transactions can be processed at any given time. This gives blockspace an implicit time value. A transaction that remains unconfirmed for an extended period may be subject to market volatility or be front-run by arbitrage bots.
Users' fees reflect their willingness to bid for this "spacetime." The blockspace market is where block producers and users meet.
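The fee market described above can be sketched as a simple auction: pending transactions bid with fees, and the block producer fills a size-limited block with the highest-paying bids first. This is a minimal illustrative model, not any specific chain's fee mechanism; all names and numbers are invented.

```python
# Toy blockspace fee market: transactions bid via fees, and the block
# producer greedily fills limited blockspace with the best bids per byte.
from dataclasses import dataclass

@dataclass
class Tx:
    sender: str
    size_bytes: int
    fee: int  # the bid for blockspace

BLOCK_SIZE_LIMIT = 1000  # bytes of blockspace per block (toy value)

def build_block(mempool: list[Tx]) -> list[Tx]:
    """Greedy fill: sort by fee per byte, include while space remains."""
    block, used = [], 0
    for tx in sorted(mempool, key=lambda t: t.fee / t.size_bytes, reverse=True):
        if used + tx.size_bytes <= BLOCK_SIZE_LIMIT:
            block.append(tx)
            used += tx.size_bytes
    return block

mempool = [
    Tx("alice", 400, 40),   # 0.10 fee/byte
    Tx("bob",   300, 60),   # 0.20 fee/byte
    Tx("carol", 500, 25),   # 0.05 fee/byte -> priced out when blocks fill
    Tx("dave",  300, 45),   # 0.15 fee/byte
]
block = build_block(mempool)
print([tx.sender for tx in block])
```

When demand exceeds the block size limit, the lowest fee-per-byte bids are priced out and must wait, which is exactly the congestion dynamic that gives blockspace its time value.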
If we're looking for a more precise definition, blockspace is a computing and storage unit that exists on blockchains and is independent of hardware owners.
Underneath that fundamental definition, we've seen many manifestations of blockspace and its surrounding mechanisms.
A blockchain's most important feature is its security properties; performance is another critical aspect. Both relate to the transaction fees you pay when using a blockchain: more performant systems supply more blockspace, which reduces congestion and therefore fees.
Congestion and fees 🚥
Ethereum users are familiar with the scaling problem, a long-contentious issue. One response is layer two (L2) networks built on top of Ethereum.
If properly designed, L2s inherit the security properties of the lower layer, so you still have the strong security guarantees of Ethereum, but they provide additional blockspace capacity on top where apps can run with lower gas fees.
Notable L2s include Optimism, Arbitrum, zkSync, Aztec, and Starkware. They have different approaches and are at various stages of development.
L2s are one way to increase supply, while another is through system design. Solana, for example, is attempting to achieve complete scaling on layer one (L1). Another way blockspace will expand is with more L1 blockchains.
Several credible L1s are currently in the works, so users have a growing menu of blockspace to choose from, depending on what they're willing to try.
What Is Data Availability? 🤔
“Data availability” and the “data availability problem” refer to a specific problem faced in blockchain scaling strategies.
Understanding data availability requires an understanding of current block verification processes in blockchains.
The problem is that if a block producer does not release all of the data in a block, no one can detect if a malicious transaction is hidden within that block.
To understand how blockchain nodes work, you should know that each block in a blockchain is made up of two parts:
- The block header - the block's metadata, which includes basic information about the block, such as the Merkle root of its transactions.
- The transaction data - which makes up the majority of the block and contains the actual transactions.
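The two-part structure above can be sketched in a few lines. This is a simplified model for illustration: real headers carry more fields (timestamp, nonce, and so on), and real Merkle tree rules vary by chain.

```python
# Simplified block: a small header committing to transactions via a
# Merkle root, plus the (much larger) body of transactions themselves.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(txs: list[bytes]) -> bytes:
    """Hash each transaction, then pairwise-hash up to a single root."""
    level = [h(tx) for tx in txs]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

transactions = [b"alice->bob:5", b"bob->carol:2", b"carol->dave:1"]
header = {
    "prev_block_hash": b"\x00" * 32,           # toy placeholder
    "merkle_root": merkle_root(transactions),  # commits to every transaction
}
block = {"header": header, "transactions": transactions}
# A light client stores only `header`; a full node stores the whole `block`.
print(header["merkle_root"].hex())
```

Because the header commits to every transaction through the Merkle root, changing even one transaction changes the root, which is what lets light clients later check proofs against the header alone.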
There are also generally two types of nodes in a blockchain network:
- Full nodes - which download and validate every transaction in the blockchain. This takes a lot of resources and hundreds of gigabytes of disk space, but these are the most secure nodes because opportunists can't trick them into accepting invalid transactions in blocks.
- Light clients - if your computer lacks the resources to run a full node, you can run a light client instead. A light client downloads only the block header and validates no transactions, assuming each block contains only valid ones. This makes light clients less secure than full nodes.
Fortunately, there is a way for light clients to verify that all transactions in blocks are valid indirectly. Instead of manually verifying the transactions, they can rely on full nodes to send them fraud-proofs if a block contains an invalid transaction.
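The fraud-proof flow can be sketched with a deliberately trivial validity rule (no account may overspend its balance). Everything here is a toy: the state model, the proof format, and the account names are all invented for illustration, and real fraud proofs also include Merkle branches tying the disputed transaction to the block header.

```python
# Toy fraud proofs: a full node re-executes a block; if a transaction is
# invalid, it emits a proof that a light client can check cheaply by
# re-running just that one transaction from the claimed pre-state.
balances = {"alice": 10, "bob": 5}

def apply_tx(state: dict, tx: tuple) -> bool:
    sender, receiver, amount = tx
    if state.get(sender, 0) < amount:
        return False                      # invalid: overspend
    state[sender] -= amount
    state[receiver] = state.get(receiver, 0) + amount
    return True

def full_node_check(block_txs: list):
    """Return a fraud proof (index, pre-state) for the first invalid tx."""
    state = dict(balances)
    for i, tx in enumerate(block_txs):
        pre_state = dict(state)
        if not apply_tx(state, tx):
            return {"invalid_index": i, "pre_state": pre_state}
    return None  # block is valid

block_txs = [("alice", "bob", 4), ("bob", "alice", 20)]  # 2nd tx overspends
proof = full_node_check(block_txs)

# The light client verifies the proof by re-executing one transaction.
if proof is not None:
    ok = apply_tx(dict(proof["pre_state"]), block_txs[proof["invalid_index"]])
    assert not ok, "proof was bogus"
    print("block rejected via fraud proof at index", proof["invalid_index"])
```

The key property is asymmetry: the full node does the expensive re-execution once, while the light client only checks a single disputed transaction.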
But there’s an issue that comes up with this -
For a full node to generate a fraud-proof for a block, it must first know its transaction data.
If a block producer only publishes the block header and not the transaction data, full nodes cannot validate the transactions and generate fraud proofs if they are invalid. Light clients must be able to verify that transaction data for a block was published to the network so that full nodes can verify it.
Here are some ways to address this -
- Increasing block size - blockchains like Bitcoin impose an artificial block size limit so the chain stays small enough that most standard laptops can run a full node and verify the entire chain. Raising that limit increases throughput, but makes running a full node more resource-intensive.
- Sharding - A method of increasing the throughput of a blockchain by splitting it into multiple chains known as shards. These shards each have block producers and can communicate with one another to transfer tokens.
- Optimistic rollups - a newer scaling strategy in which rollup chains, like shards, run alongside the main chain with their own dedicated block producers and can transfer assets between chains; unlike typical sidechains, rollups post their transaction data back to the parent chain.
What Are the Possible Solutions to the Data Availability Issue? ✅
- Downloading all data - As previously discussed, the most obvious solution to the data availability problem requires everyone (including light clients) to download all of the data. This does not scale well. Most blockchains, including Bitcoin and Ethereum, currently operate in this manner.
- Data availability proofs - Data availability proofs are a new technology that allows clients to verify that all of the data for a block has been published with a high degree of certainty by downloading only a small portion of that block.
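The intuition behind data availability proofs is statistical: if a producer withholds any meaningful fraction of an (erasure-coded) block, each random chunk a light client samples has some chance of hitting the missing data. The sketch below is a back-of-the-envelope probability calculation; the chunk counts and the 2x coding assumption are illustrative, not parameters of any real protocol.

```python
# Why sampling works: the chance of catching withheld data grows quickly
# with the number of random chunks a light client requests.
def detection_probability(total_chunks: int, withheld: int, samples: int) -> float:
    """Probability that at least one of `samples` uniform draws (without
    replacement) lands on a withheld chunk."""
    p_all_available = 1.0
    available = total_chunks - withheld
    for i in range(samples):
        p_all_available *= (available - i) / (total_chunks - i)
    return 1.0 - p_all_available

# With 2x erasure coding, a producer must withhold around half the chunks
# to make a block unrecoverable, so assume 50% of 256 chunks withheld:
for k in (5, 10, 20, 30):
    print(f"{k:>2} samples -> detect withholding with p = "
          f"{detection_probability(256, 128, k):.6f}")
```

Even a handful of samples per client makes large-scale withholding nearly impossible to hide, which is what lets light clients gain high confidence without downloading the whole block.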
The blockspace market structure is a fascinating research topic for good reason: data availability is crucial to keeping blockchains functional and secure, and data availability layers are what enable meaningful decentralization and security.
Ethereum's data capacity is also central to its future scalability plans. Rollups are limited by data throughput on the parent chain, which is why data sharding and related upgrades aim to improve Ethereum's performance as a data availability layer for Layer 2 solutions.