88mph asked the following in the Factom Slack,
I was listening to an interview with one of the heads of LBRY a few days ago, and he made the point that while there’s a lot of discussion around scaling various blockchains, none has shown how to do so successfully “in the wild.” Meaning that all blockchains with significant usage are encountering scaling problems.
I’m wondering how this applies to Factom. Do you see Factom encountering similar growing pains as usage increases? Or do you think you’ll be able to side-step some of the problems encountered by blockchains such as Bitcoin and Ethereum because of X / Y / Z in the Factom protocol? Are you concerned / relatively unconcerned? Thanks in advance.
Paul Snow replied with,
Great question! The short answer is that distributed immutable data scalability is clearly achievable, whereas distributed immutable computational scalability is very hard. Factom is the only distributed immutable data solution today.
Factoids have the same scalability issues as every other cryptocurrency. And if that was the end of the story, we’d be in trouble.
But it isn’t. Factom doesn’t aim to be a currency solution. It is a data ledger solution. So the first step is to segregate the token from the data protocol. We do that by requiring the purchase of entry credits.
Entry Credits allow users to buy many “data protocol writes” at one time. Think of it as prepaying fees for many transactions all at once, into an account that is only decremented as it is used.
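The prepaid-balance model Paul describes can be sketched roughly as follows. This is an illustrative account abstraction, not the actual Factom data structures or API; the class and method names are invented for the example.

```python
# Illustrative sketch of an entry-credit account: topped up rarely,
# then only decremented per write. Not the real Factom implementation.

class EntryCreditAccount:
    def __init__(self, public_key: str):
        self.public_key = public_key
        self.balance = 0  # entry credits remaining

    def top_up(self, credits: int) -> None:
        """Rare operation: convert Factoids into entry credits."""
        self.balance += credits

    def pay_for_write(self, cost: int = 1) -> None:
        """Frequent operation: decrement the balance for one commit."""
        if self.balance < cost:
            raise ValueError("insufficient entry credits")
        self.balance -= cost

account = EntryCreditAccount("EC_example_key")
account.top_up(100)      # prepay for many writes at once
account.pay_for_write()  # each data write just decrements the counter
print(account.balance)   # 99
```

Note that the frequent operation touches nothing but a single balance, which is what makes it so much cheaper than a full transaction.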
I can decrement a counter in half the space it takes to send funds out of a transaction to another address. A transaction (as Bitcoin does it) specifies a source, a destination, tracking fees, and ties it to the data, all in a lump.
Writing data to the protocol in Factom first pays for the write with a commit, which decrements an entry credit address (account based balances) and ties this to a 32 byte hash.
Then when you reveal your data, the payment is allowed because it matches the hash of a commit. Again we segregate. The data is written to a chain with only 35 bytes of overhead. It is the data that goes into the Merkle roots and is anchored, from the application’s perspective. The commit is kept elsewhere. It can be pruned away when nobody cares about the accounting, and most applications won’t care.
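The commit/reveal flow described above can be sketched like this. SHA-256 is assumed here as the 32-byte commitment hash, and the in-memory dictionary stands in for the separate accounting layer; both are simplifications of what Factom actually does.

```python
# Illustrative commit/reveal flow: pay first against a 32-byte hash,
# then reveal the data that matches it. A sketch, not Factom's protocol.
import hashlib

commits = {}  # commitment hash -> paid flag (the segregated "accounting" layer)

def commit(data_hash: bytes) -> None:
    """Pay for a future write, identified only by its 32-byte hash."""
    assert len(data_hash) == 32
    commits[data_hash] = True

def reveal(data: bytes) -> bool:
    """A reveal is accepted only if its hash matches a prior commit."""
    return commits.pop(hashlib.sha256(data).digest(), False)

entry = b"some application data"
commit(hashlib.sha256(entry).digest())
print(reveal(entry))  # True: the payment matched the revealed data
```

Because the commit records only a hash and a decrement, the accounting side can be pruned later without touching the data chains themselves.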
So even the entry credits are segregated from the data. An application using Factom’s data doesn’t need to look at Factoids or entry credits.
Entry credits can’t be traded. So they only need to track 1) balance additions (rare) and 2) commits, which are balance decrements. They have no outputs or ties to any other entry credit or Factoid addresses. So decrements can be spread out over different servers and even different networks.
Factom on Factom is also easy: other Factom networks maintain independent entry credit accounting (bought with the same Factoids) and anchor into Factom.
What we are leveraging is the fact that to validate data with a hash only requires the data and the hash. The key to scale is to limit the data needed to validate. And it is only by segregated layers that a public blockchain can hope to scale. And the easiest way to scale a data solution is to get rid of any context (or history) for validation.
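The validation this paragraph describes reduces to a single hash comparison, with no history or context required. A minimal sketch, again assuming SHA-256 as the hash function:

```python
# Context-free validation: recompute the digest and compare it to the
# expected hash. Nothing else - no history, no other chains - is needed.
import hashlib

def validate(data: bytes, expected_hash: bytes) -> bool:
    """True iff `data` hashes to `expected_hash`."""
    return hashlib.sha256(data).digest() == expected_hash

doc = b"entry contents"
anchor = hashlib.sha256(doc).digest()  # the 32-byte value the chain records
print(validate(doc, anchor))  # True
```

This is the whole trick: anyone holding the data and the recorded hash can verify the entry without downloading or replaying anything else.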
Factom is the only blockchain doing this today.
Another short answer is that BitTorrent scales exactly the same way. A torrent works because validation only requires that the data matches the hash. And nobody needs anything but the torrents they care about. Factom scales because validation only requires the data to match the hashes, and nobody needs anything but the user chains that they care about.