Data from April 2020 Load Test

Unrestricted Public Thread


Nate Miller

Consensus Networks
All, here's some more in-depth data from the recent load test. Overall, it was pretty exciting to watch the amount of data and load put through the network. There were some audit servers that fell behind during the test, but overall the network didn't have any problems.

This is great data, imo, but let’s not hang our hat on EPS/TPS. There are blockchains with higher TPS, such as Solana and Ripple (if you can call it that), as well as slower ones like Bitcoin. TPS is not the only thing that matters, and scalability does not necessarily happen, or need to happen, on-chain.

So, I encourage us all to be excited about these results, not just because of the EPS, but because we’re doing it with an ever-decentralizing and growing group of nodes, which demonstrates an advancing, robust network capable of high throughput and fault tolerance. Boasting about TPS will alert the XRP army, as well as all the other TPS shills, and probably isn't worth the argument.

Thanks to @Tor Paulsen @Paul Bernier and @Who for the technical support and help with testing! We're working on expanding what testing we can do and will hopefully try to break the testnet in the near future.

Here's a screenshot from the load test. The highest average EPS for a block was 42.95. You can check out block 126588 here. Rough math: approximately 25,000 entries in that 10-minute block (about 600 seconds) works out to roughly 42 EPS. During the hour or so we were trying to hit 50 EPS we also had about a GB of data transfer, a healthy amount. The dip in EPS at block 126590 is when we forced an election.
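The rough math above can be sketched as a quick sanity check; this assumes the standard 10-minute Factom directory block time (the function name is illustrative):

```python
# Rough EPS sanity check: Factom directory blocks are 10 minutes long,
# so entries-per-block divided by 600 seconds approximates average EPS.
BLOCK_SECONDS = 10 * 60  # one directory block

def avg_eps(entries_in_block: int) -> float:
    """Average entries per second over a single block."""
    return entries_in_block / BLOCK_SECONDS

print(round(avg_eps(25_000), 2))  # ~41.67, in line with the observed 42.95
```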

[Screenshot: testnetload.png]


Below are screenshots from the leader TFA was testing. Of note, it was a 2-core, 8 GB server and had no trouble keeping up.
[Screenshot: TFA-Servers__Primary__-_Grafana.jpg]

[Screenshot: TFA-Factomd_-_Grafana.jpg]
[Screenshot: TFA-Factomd_-_Grafana (1).jpg]
 

Mike Buckingham

Cube3
Website Committee
Governance Working Group
Hi Nathan,

You should take credit for the way you have performed the role of Testnet Admin. You have fulfilled the very essence of what a Testnet Admin needs to do for the Factom protocol and community, very much in line with the vision and general strategic direction established by Doc007.

These results, coming from that leadership and the associated, more robust and representative Testnet, are a tribute to that. We need to cultivate this: the Testnet is our R&D proving ground (amongst other things), so we all need to run appropriate Testnet servers (which is why this is an important part of ANO requirements and standing).

As you say, we should not be complacent; there are plenty of blockchains comfortably exceeding this TPS result. We need to continue to develop and test in the way you have described.

Thank you.
 

Paul Bernier

LUCIAP
Core Committee
Core Developer
Hi @Frédéric Faye. Adding load through multiple nodes has multiple advantages:
- It's more realistic testing. In real life, load doesn't come from a single node; multiple parties push their entries via various nodes spread across the network.
- A single node may struggle at higher TPS to process all the API requests to insert entries: the API queue may start to see backpressure, or the node may not be able to send out all those entries fast enough. I'd need to measure more accurately whether either of those happens. By splitting the load across multiple nodes, each node only has to process the API calls for a fraction of the total load. The idea is basically to not put all the pressure on a single node but to spread it, and see if the network, as a whole, can take the sum of all the loads.
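The spreading idea above can be sketched roughly as follows; a minimal sketch only, where the endpoint URLs and the `distribute` helper are illustrative assumptions, not Chockablock's actual API:

```python
# Hypothetical sketch: instead of pushing every entry through one factomd
# API endpoint, assign submissions round-robin across several nodes so no
# single API queue absorbs the full load.
from itertools import cycle

NODES = [
    "http://node-a:8088/v2",
    "http://node-b:8088/v2",
    "http://node-c:8088/v2",
]

def distribute(entries, nodes=NODES):
    """Pair each entry with the next node endpoint, round-robin."""
    endpoints = cycle(nodes)
    return [(next(endpoints), entry) for entry in entries]

# With 6 entries, each of the 3 nodes handles exactly 2 API calls.
batches = distribute(list(range(6)))
```

Each node then sees only a fraction of the total API traffic, while the network as a whole still absorbs the combined load.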

A node with the same requirements as for the testnet authority set (4 cores, 8 GB memory) is suitable to host a Chockagent alongside a factomd node.
 

Mike Buckingham

Cube3
Website Committee
Governance Working Group
Quick question: Why couldn't we designate a number of testnet-authority-set nodes to be load injectors?
 

Paul Bernier

LUCIAP
Core Committee
Core Developer
We can; running an agent is as easy as launching a Docker container (https://github.com/PaulBernier/chockagent). In the past, CryptoLogic and TFA deployed an agent alongside their testnet node, for instance. I wouldn't want to start with many agents though, just 2 or 3, in case things don't go as planned, so we have sysadmins available (I am a Docker Swarm admin though, so technically I should be able to turn chockagents on/off myself from Portainer if necessary).
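Spinning up an agent could look something like this; a deployment sketch only, where the image tag and the host-network choice are illustrative assumptions, not instructions taken from the repository:

```shell
# Build an agent image from the repository (tag name is hypothetical).
git clone https://github.com/PaulBernier/chockagent
docker build -t chockagent ./chockagent

# Run it detached next to an existing factomd node on the same host.
docker run -d --name chockagent --network host chockagent
```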

Given the recent interest in testnet and load testing I have been working this week on polishing and improving a few aspects of Chockablock. I will update the community about it.
 