In this article, I would like to draw the blockchain community's attention to two important points that are usually overlooked in the often fierce debates about scalability. Neither point is original to me in any way, but I believe that people may overlook them even while engaging with the original articles in which they were made.
1. Scaling transaction confirmation does not scale computation
Most discussions about scaling revolve around the number of transactions that various platforms can process per second. Claims of thousands or even millions of tps are thrown around, frequently succeeding in wooing investors. However, as Medium blogger Eric Wall rightly noted in his recent critique of Hedera Hashgraph, when projects like Hedera cite their impressive tps figures (10,000 at launch), one should be aware that they are talking not about all kinds of transactions but only about first-layer account-to-account transfers of tokens. For most blockchains, this means only their native tokens, such as BTC, ETH, EOS, etc. Algorand, however, is an exception here in that it implements other tokens directly on the first layer.
Wall further notes that Hedera’s processing speed for virtual machine computations is actually lower than that of its supposedly outdated main competitor, Ethereum. The reason is that Hedera uses essentially the same VM, which was, however, deliberately throttled at launch.
Consider a simple example to better understand why this is the case. Suppose user A wants to transfer 10 tokens to another user, user B wants to verify that a document is the same as the one whose hash is stored on-chain, and user C wants to multiply two very large numbers together. Suppose A sends her transaction first, B sends hers 3 seconds later, and C sends hers 3 seconds after that.
If the consensus mechanism of the blockchain platform in question works flawlessly, it will validate and correctly order these transactions. But notice that even once validated and correctly ordered, these transactions remain mere instructions to be executed by the platform's virtual machine. More precisely, the computers functioning as full nodes must execute them using the virtual machine's software to achieve what the users actually wanted. And depending on the complexity of the computations, this may take far longer than reaching consensus on the transactions.
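A toy sketch may make the distinction concrete. The Python snippet below (purely illustrative; the function names and numbers are mine, not any platform's API) models the three transactions above after consensus has already ordered them: ordering is settled, yet every full node must still execute each instruction, and the execution costs differ wildly.

```python
import hashlib

# Consensus has already agreed on the order of these three transactions;
# what follows is the execution work every full node must still perform.
ledger = {"A": 100, "B": 50}
stored_hash = hashlib.sha256(b"important document").hexdigest()

def transfer(sender, recipient, amount):
    """A's transaction: a trivial balance update."""
    ledger[sender] -= amount
    ledger[recipient] = ledger.get(recipient, 0) + amount

def verify_document(doc):
    """B's transaction: a single hash comparison."""
    return hashlib.sha256(doc).hexdigest() == stored_hash

def multiply(x, y):
    """C's transaction: arithmetic that can be made arbitrarily heavy."""
    return x * y

# Execute in the consensus-agreed order:
transfer("A", "B", 10)
ok = verify_document(b"important document")
product = multiply(10**100_000, 10**100_000)  # big-number arithmetic dominates

print(ledger["B"], ok, len(str(product)))
```

The first two steps are cheap; the third forces every node to multiply two 100,000-digit numbers. No consensus improvement reduces that per-node execution cost.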
The importance of this issue goes well beyond the question of whether Hedera is really a project worthy of hype. Scaling VMs turns out to be essentially unrelated to consensus mechanisms or even to the bandwidth issues (e.g. the Bitcoin block size debate) that dominate scalability debates.
This is where claims that have so far received little attention may turn out to be very important. In my view, the best example comes from the RChain project's founder, mathematician Greg Meredith. He has long been trying to draw the blockchain community's attention to the possibility that distributed computing based on the sequential lambda-calculus approach is not scalable in principle, regardless of sophisticated new techniques such as WASM, state sharding, etc.
As an example, consider his recent discussion of Polkadot:
But if you go to the Polkadot website and their literature, they talk about an aggregation of state machines. The whole reason, the raison d’être, for the invention of CCS and the Pi Calculus and that whole area of work, is because state machines don’t compose. In particular, they don’t compose along the lines of machine and environment. Even Mealy and Moore machines don’t compose in that way. That was essentially why Milner invented CCS.
As I learned the hard way, when you try to compose state machines in the way that they do compose, then you get an exponential blow-up. It’s very, very, very fast and very bad. What you end up having to do is to model the constraints on the product space of the states. The complexity of that goes through the roof.
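To see where the blow-up Meredith describes comes from, note that the naive composition of state machines lives on the Cartesian product of their state spaces, so its size is the product of the component sizes. A minimal illustration (the numbers are hypothetical, mine rather than Meredith's):

```python
# Naive composition of state machines: the combined machine's state space
# is the Cartesian product of the component state spaces, so its size is
# the product of their sizes -- exponential in the number of machines.
def product_state_count(state_counts):
    """Number of states in the naive product of several state machines."""
    total = 1
    for k in state_counts:
        total *= k
    return total

# Ten small machines with just 8 states each already yield over a billion
# product states whose constraints must be modeled:
print(product_state_count([8] * 10))  # 8**10 = 1073741824
```

This is only a counting argument, of course; Meredith's point is precisely that modeling the constraints over that product space is where the complexity "goes through the roof."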
I am, of course, not even remotely qualified to evaluate Meredith’s claims; however, he seems to have a profound grasp of computer science, and other computing experts agree with him. If he is right, most of the ongoing scalability efforts for computation-focused blockchains may be blind alleys.
2. There is a middle ground between mass and small-circle validation
The second point I would like to highlight is that the dilemma between Bitcoin’s and EOS/Hedera’s approaches to ledger validation is a false one. Observers of debates on decentralization might think that one must either have a network where even average-quality consumer laptops can validate the distributed ledger in full, or one with a small circle of validators.
Professed opposition to the latter option is the stated motivation for the Bitcoin community’s aversion to increasing the maximum block size and to some other proposed modifications. However, as Tezos founder Arthur Breitman aptly notes, the same vision seems to lie behind the various engineering efforts aimed at implementing sharding. And indeed, Ethereum creator Vitalik Buterin has repeatedly stated that his preferred outcome is Ethereum running on consumer laptops.
Breitman rightly objects that there is a middle ground between the two extremes that does not necessarily sacrifice the core advantages of public blockchains:
My read is that people in this space have overly romanticized chain validation and consensus participation. It is often said that Bitcoin is a “permissionless” network. To me, permissionless means that I can set up a website and start accepting bitcoins immediately. In particular, I do not need a bank’s approval to start receiving or sending bitcoins. It is a powerful source of financial freedom.
[…] As we’ve said before, a low barrier to entry in validation is important in maintaining a healthy decentralized and censorship resistant network, but once this censorship resistance is achieved, the fact that I, as a user, can become a small time miner on the chain and marginally contribute is pragmatically irrelevant. At best I’ll be making a symbolic statement, at worst I’ll be making a gamble between electricity prices and bitcoin prices. Given the amounts that I typically transact, I hardly need to bother with anything more than a SPV level of security.
He also provides an estimate of the hardware investment that would be required to scale a non-PoW blockchain today, without sharding or other complex approaches:
Let’s look at some actual numbers. Visa’s peak transaction rate is 4,000 transactions per second. An ed25519 verify operation takes 273,364 cycles. On a modern 3Ghz computer this means more than 4,000 signatures can be verified in 0.36s. Of course there is more to validating a blockchain than merely verifying signatures, but this tends to dominate the computational cost.
I may be too conservative in using Visa’s peak transaction rate. After all, the promise of microtransactions and the machine payable web suggest an increase in the demand for transactions. Let us boldly increase that rate one hundred fold. A back of the napkin calculation suggests that, conservatively, a rate of 400,000 transactions per second could be sustained today with a gigabit connection (which can be obtained for only a couple hundred dollars a year in most OECD countries, the US sadly being a notable exception) and with a cluster of computers costing less than $20,000. Five years from now, this computing power will likely be available for under $2,000. These are the numbers for a sustained transaction rate one hundred times greater than Visa’s peak transaction rate.
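Breitman's arithmetic is easy to check. A quick sketch using the figures from his post (the cycle count and clock rate are his; the core count at the end is my own extrapolation from them):

```python
# Figures from Breitman's post: ed25519 verification cost and a 3 GHz clock.
CYCLES_PER_VERIFY = 273_364
CLOCK_HZ = 3 * 10**9
VISA_PEAK_TPS = 4_000

# Time for one core to verify one second's worth of Visa-peak signatures:
seconds = VISA_PEAK_TPS * CYCLES_PER_VERIFY / CLOCK_HZ
print(f"{seconds:.2f} s")  # 0.36 s, matching his estimate

# At 100x Visa's peak (400,000 tps), cores needed to keep up in real time:
cores = 100 * VISA_PEAK_TPS * CYCLES_PER_VERIFY / CLOCK_HZ
print(f"~{cores:.0f} cores")  # a few dozen cores, consistent with a modest cluster
```

Signature verification is, as he notes, only the dominant term, not the whole cost of validation, so the real requirement would be somewhat higher; but the order of magnitude supports his conclusion.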
It is very difficult to say whose approach to scalability will prove more viable in the end, and for which use cases. But it is better for everyone involved to realize that there are more options than there might appear to be.