One of the most important questions that the crypto community faces today is how to design blockchains that can meet the diverse and evolving needs of users, developers, and validators. There are different approaches to this challenge, and they can be broadly categorized as “modular” or “monolithic”.
Modular blockchains are those that focus on a specific function or use case and rely on interoperability with other blockchains to provide a full range of services. For example, Celestia is a modular blockchain that aims to provide scalable data availability for any application, while delegating computation and state management to other chains. Modular blockchains can benefit from specialization, efficiency, and flexibility, but they also face trade-offs in terms of complexity, coordination, and security.
Monolithic blockchains are those that aim to provide a comprehensive platform for any kind of application, without depending on external networks. For example, Solana is a monolithic blockchain that claims to offer high performance, low cost, and rich functionality for a wide range of use cases. Monolithic blockchains can benefit from simplicity, convenience, and security, but they also face trade-offs in terms of scalability, adaptability, and diversity.
There is also a middle ground between modular and monolithic blockchains, which can be seen as “hybrid” or “general-purpose”. For example, Ethereum is a hybrid blockchain that offers a flexible and programmable platform for various applications, while also supporting interoperability with other chains through bridges and scaling through rollups and planned sharding. Hybrid blockchains can benefit from versatility, compatibility, and innovation, but they also face trade-offs in terms of performance, cost, and governance.
The debate between modular and monolithic blockchains is not a binary one, but rather a spectrum of design choices and trade-offs. There is no one-size-fits-all solution for blockchain architecture, as different applications may have different requirements and preferences.
The crypto community should embrace the diversity and experimentation that these approaches offer and collaborate to find the best solutions for the common goals of decentralization, security, and usability.
At the heart of this debate is the challenge of designing a system that is scalable, secure, and decentralized at the same time. Two main approaches have emerged. Modular blockchains, often built from shards or layer-2 chains, split the workload across multiple execution chains that communicate with each other through a common layer-1 chain providing consensus and data availability.
Monolithic blockchains, also known as single-chain or layer-1 blockchains, process all transactions and data on a single chain. Both approaches have their advantages and disadvantages, and there is no clear-cut answer as to which one is better.
The main trade-off between modular and monolithic blockchains is the balance between scalability and security. Scalability refers to the ability of a system to handle a large number of transactions and users without compromising performance or efficiency. Security refers to the ability of a system to resist attacks and ensure the validity and integrity of transactions and data.
Modular blockchains achieve higher scalability by dividing the workload among multiple chains, but this also introduces more complexity and potential points of failure. Monolithic blockchains achieve higher security by having a single source of truth and consensus, but this also limits the throughput and capacity of the system. Depending on the application and the requirements, different trade-offs may be acceptable or desirable.
“Data availability sampling” will allow blockchains to be verifiable on consumer hardware
One of the biggest challenges facing blockchain technology is scalability. How can we ensure that millions of transactions can be processed quickly and securely without compromising the core principles of decentralization and trustlessness?
Many solutions have been proposed, such as sharding, layer 2 protocols, and rollups, but they all come with trade-offs and limitations. I want to introduce you to a novel concept that could revolutionize the way we verify blockchain data: data availability sampling.
Data availability sampling is a technique that allows anyone to check that a large amount of data has actually been published using only a small sample of it. The idea is based on the observation that if enough random chunks of the data are available, then the whole data is available with high probability. In practice this relies on erasure coding: the block data is expanded with redundant chunks so that the original can be reconstructed from any half of the extended data. A malicious block producer would therefore have to withhold at least half of the chunks to hide anything at all, and withholding that much is caught almost immediately by random sampling.
This means that instead of downloading and validating the entire blockchain, which could take hours or days on consumer hardware, like smartphones, you only need to download and validate a few random chunks of it, which could take seconds or minutes.
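To get a feel for the numbers, here is a minimal Python sketch of the detection probability under the erasure-coded design described above, where hiding any data forces the producer to withhold at least half of the extended chunks. The parameters are illustrative, not drawn from any particular chain:

```python
# Probability that k random samples catch a block producer who is
# withholding a fraction f of the erasure-coded chunks.
# Each independent sample lands on a withheld chunk with probability f,
# so the chance that ALL k samples miss is (1 - f) ** k.

def detection_probability(f: float, k: int) -> float:
    """Chance that at least one of k random samples hits a missing chunk."""
    return 1.0 - (1.0 - f) ** k

# With erasure coding, hiding any data forces f >= 0.5, so confidence
# grows exponentially fast in the number of samples:
for k in (5, 10, 20, 30):
    print(f"{k:>2} samples -> {detection_probability(0.5, k):.10f}")
# 20 samples already give better than 99.9999% confidence.
```

Because each extra sample halves the chance of being fooled, a handful of random queries is enough for near-certainty, which is what makes the technique so cheap.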
How does this work in practice? Imagine that Alice wants to send some tokens to Bob using a blockchain that supports data availability sampling. Alice creates a transaction and broadcasts it to the network. The transaction is then included in a block by a validator, who also commits to a Merkle root of the block data.
The Merkle root is a cryptographic hash that summarizes the entire block data in a compact way. The validator also publishes a proof of custody, which is a way of proving that they have the full block data and are not hiding or tampering with it.
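As a rough illustration of that commitment step, here is a minimal Python sketch that builds a Merkle root over a block's chunks. The chunk contents and hashing scheme are simplified assumptions, not any particular chain's format:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks: list[bytes]) -> bytes:
    """Hash each chunk, then pairwise-hash up the tree to a single root."""
    level = [sha256(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# A toy block split into four chunks; the root is the compact commitment
# that the validator publishes alongside the block.
block_chunks = [b"chunk-0", b"chunk-1", b"chunk-2", b"chunk-3"]
print(merkle_root(block_chunks).hex())
```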
Now, anyone who wants to verify the block can use data availability sampling. They can request a few random chunks of the block data from the validator or other peers, and check that they match the Merkle root. If they do, then they can be confident that the block is valid, and that Alice’s transaction was executed correctly. If they don’t, then they can raise an alarm and reject the block.
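Continuing the sketch, a sampler requests one random chunk plus its Merkle branch and checks it against the published root. The helper names are hypothetical, and the hashing helpers from the previous sketch are repeated so this runs on its own:

```python
import hashlib
import random

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_levels(chunks: list[bytes]) -> list[list[bytes]]:
    """All levels of the tree, from the leaf hashes up to the root."""
    levels = [[sha256(c) for c in chunks]]
    while len(levels[-1]) > 1:
        level = list(levels[-1])
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd levels
        levels.append([sha256(level[i] + level[i + 1])
                       for i in range(0, len(level), 2)])
    return levels

def merkle_proof(levels: list[list[bytes]], index: int) -> list[tuple[bytes, bool]]:
    """Sibling hashes along the path from leaf `index` to the root."""
    proof = []
    for level in levels[:-1]:
        padded = level + [level[-1]] if len(level) % 2 == 1 else level
        sibling = index ^ 1  # the paired node at this level
        proof.append((padded[sibling], sibling < index))
        index //= 2
    return proof

def verify_chunk(chunk: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    """Recompute the path to the root; a match means the chunk is committed."""
    node = sha256(chunk)
    for sibling, sibling_is_left in proof:
        node = sha256(sibling + node) if sibling_is_left else sha256(node + sibling)
    return node == root

block_chunks = [b"chunk-0", b"chunk-1", b"chunk-2", b"chunk-3"]
levels = merkle_levels(block_chunks)
root = levels[-1][0]                     # the commitment from the block header
i = random.randrange(len(block_chunks))  # sample one chunk at random
assert verify_chunk(block_chunks[i], merkle_proof(levels, i), root)
print(f"chunk {i} verified against root {root.hex()}")
```

A real sampler would repeat this for several random chunks fetched from different peers, rejecting the block if any request cannot be answered or any branch fails to match the root.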
Data availability sampling has several advantages over existing solutions. First, it reduces the bandwidth and storage requirements for verifying blockchain data, making it possible to run verifying nodes with near-full-node security guarantees on consumer hardware, like smartphones. This means that we can have more users participating in the network and securing it, without sacrificing decentralization or performance.
Second, it improves the security and privacy of blockchain transactions, as users do not need to reveal which transactions they are interested in or rely on third parties to validate them. Third, it enables new applications and use cases that require fast and cheap verification of large amounts of data, such as decentralized file storage, streaming, gaming, and machine learning.
Data availability sampling is not just a theoretical idea. It is already being implemented and tested by several projects in the blockchain space, such as Celestia, Ethereum 2.0, Polkadot, Near Protocol, and Coda Protocol. These projects are using different variations and optimizations of data availability sampling to achieve their scalability goals and offer new features to their users.
Data availability sampling is a game-changing innovation that will allow blockchains to be verifiable on consumer hardware, like smartphones, which means we can have scale without sacrificing decentralization. It is one of the most exciting developments in blockchain technology, and I encourage you to learn more about it and try it out for yourself.