Branson Consulting

Thank you for the application. First question:

A. Based upon your stated Efficiency of 50% (with one server) and your estimated expenses, at what USD price does FCT need to be for your Authority Node operation to break even on a monthly basis with one server?
 
A. From a purely price-per-FCT point of view, at 50% efficiency on one server, FCT would need to be selling at $4.45 to cover bare costs. However, we have funded an account to prevent this issue from arising.
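The break-even figure above is simple arithmetic: the USD price at which the FCT the operator keeps exactly covers monthly expenses. The expense and FCT-output numbers below are hypothetical placeholders chosen to land on the stated $4.45 result; the application's actual cost figures are not restated here, and only the 50% efficiency and the $4.45 price come from the answer above.

```python
# Break-even sketch for an Authority Node. Assumes "Efficiency of 50%"
# means the operator retains half of the server's monthly FCT output.
# All input numbers are hypothetical placeholders, not the applicant's
# actual figures.

MONTHLY_EXPENSES_USD = 2500.0      # hypothetical operating cost per month
RETAINED_FCT_PER_MONTH = 561.6     # hypothetical: 50% of one server's output

def break_even_price(expenses_usd: float, retained_fct: float) -> float:
    """USD price per FCT at which retained FCT exactly covers expenses."""
    return expenses_usd / retained_fct

print(round(break_even_price(MONTHLY_EXPENSES_USD, RETAINED_FCT_PER_MONTH), 2))
```

With these placeholder inputs the sketch rounds to the $4.45 quoted above; plugging in the real expense and output numbers would give the team's actual break-even price.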
 
Can you expand on "funded an account" above?

Also you write:
the meantime while we develop a redundant failover system in order to keep instances healthy and online at all times.
What are your preliminary thoughts on how to design such a system? What do you have to take into account when developing such a system for servers hosting a distributed ledger?
 
We have funded a reserve account with $10,000.00 to cover all operational expenses in the event that Factoids trade below $4.45 per FCT.

My preliminary thoughts on setting up a redundant failover system are to use the load balancer in AWS to run its regular checks on instance health and operation. If the load balancer reported an unhealthy instance, it would switch the Elastic IP to a healthy backup instance.

The open question is how to keep an up-to-date factomd on the backup node. My thought so far is to use the weekly snapshots of the EBS volumes and attach those volumes to the backup instances, so that when the load balancer switched instances out after a failure, it would be working with a factomd that is already mostly synced up. AWS used to make it easy to clone instances, but now it is more difficult.

Another thought is to create an Auto Scaling group to manage servers in the event that more than one backup fails and no others are available. This would probably be overkill, since I don't envision a master instance failing without any of us attending to it, but it's a thought. Another option would be AWS EC2 auto-recovery: if the CloudWatch monitor checks on the instance and reports it unhealthy, it automatically switches the Elastic IP and EBS volume to a new healthy instance in a different server set.
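The check-health-then-switch idea above can be sketched as a small decision routine: prefer the primary while it is healthy, fall back to a warm standby otherwise, and leave the Elastic IP where it is if nothing reports healthy (to avoid thrashing). The instance IDs are hypothetical, and the actual EIP switch (e.g. an EC2 `associate_address` call via boto3) is deliberately left out of the sketch.

```python
# Sketch of the failover decision described above. Instance IDs are
# hypothetical placeholders; the real re-association of the Elastic IP
# (e.g. boto3's ec2.associate_address) is omitted.

PRIMARY = "i-0primary000000000"   # hypothetical primary node
STANDBY = "i-0standby000000000"   # hypothetical warm standby

def eip_target(primary_healthy: bool, standby_healthy: bool,
               current_holder: str) -> str:
    """Return the instance that should hold the Elastic IP."""
    if primary_healthy:
        return PRIMARY            # prefer the primary when it is healthy
    if standby_healthy:
        return STANDBY            # fail over to the warm standby
    return current_holder         # nothing healthy: do not thrash the EIP

def needs_switch(primary_healthy: bool, standby_healthy: bool,
                 current_holder: str) -> bool:
    """True when the watchdog should re-associate the Elastic IP."""
    target = eip_target(primary_healthy, standby_healthy, current_holder)
    return target != current_holder
```

A watchdog would run `needs_switch` on each health-check cycle and, only when it returns True, make the provider API call to move the address; keeping the EIP in place when both nodes look down avoids flapping between two broken instances.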
 
Thank you for your application

NK01)

Could you elaborate on the planned and prepared parts? How will you set this up?
We have planned to take "shifts," in the sense that ideally someone from our team would be able to perform server maintenance at any given time of the day or night. Since we are all located in the U.S., we eventually want to move toward partnering with either other Authority Set operators or a third party to cover the hours when we should be sleeping.

We are prepared to set up monitoring software such as Zabbix, Grafana, and Prometheus. Since the testnet has used Grafana and Prometheus, we will most likely lean toward those tools. We currently have CloudWatch active in AWS to cover server health, but we would like to set up more monitoring for factomd. Ideally the monitoring would notify the team through Discord if any server was acting up, similar to the Telegram bot. If I remember correctly from what Brody mentioned, we would install Prometheus and configure the metrics to monitor, then possibly use PagerDuty to push notifications. A newer option we have discovered is Datadog, due to its integration with both AWS and Azure; we don't have enough information to speak on it yet, but we are looking into it.
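One factomd-specific signal worth alerting on, beyond host health, is block height: if the height stops advancing, the node is likely stuck even while CloudWatch still sees a healthy instance. A minimal sketch of that check follows; the sample window is an arbitrary assumption, and the delivery mechanism (Discord webhook, PagerDuty, etc.) would hang off the True result and is not shown.

```python
# Sketch of a "factomd looks stuck" check: alert when the observed
# block height has not advanced across the last few polls. The window
# size of 3 is an arbitrary choice, not from the application; alert
# delivery (Discord/PagerDuty) is out of scope for this sketch.

def height_stalled(height_samples: list[int], window: int = 3) -> bool:
    """True if the last `window` height samples are all identical."""
    if len(height_samples) < window:
        return False              # not enough data to judge yet
    recent = height_samples[-window:]
    return len(set(recent)) == 1  # no progress across the window
```

A poller would append the latest height each minute and fire a notification whenever this returns True, which catches a wedged daemon that a plain process-is-running check would miss.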
 
NK02)
You state that 2 team members will perform the system administration (Brody and Cody). Could you break down their respective experience in years running production servers.
Cody has 5 years of experience running production servers. The bulk of his experience is somewhat entry level. He has managed cloud infrastructure in AWS, where he was responsible for spinning up basic Windows Server instances for enterprise-wide use supporting the sales team's quoting tool. Part of this involved allowing simultaneous multi-user logins, VPN connections, Elastic IPs tied to instances, and security group adjustments.

Brody has 10 years of server administration experience. He started out building basic WordPress websites and moved on to bare-metal servers and cloud infrastructure. He has worked for Elevation Church for 6 years, 3 of them as their lead system admin. In his time there he has been responsible for coding their current site, which supports thousands of online streaming users as well as hundreds of on-demand video plugins. He maintains the Apache web servers that run the site as well as the database inside the servers. He works with PHP, HTML, and Apache on a daily basis. He has used Docker to push images to multiple server sets and understands how to use Nginx to balance a load.
 
NK03)
What type of OS or network security measures would you be taking?
What we have discussed and planned is locking down the security groups in each provider's settings (AWS and Azure): only allowing certain ports to be open, and setting up special IP privileges tied into the instance security groups for incoming connections. We would also rotate RSA keys each month to add another layer of protection in the event they are stolen. CloudWatch monitoring is currently set up to email us if an unknown IP accesses an instance.
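The "only certain ports open" policy above lends itself to a simple audit: diff the live ingress rules against an allow-list and flag anything extra. The ports and CIDRs below are hypothetical placeholders, not the team's actual policy, and a real audit would pull the live rules from the AWS/Azure APIs rather than a hard-coded sample.

```python
# Sketch of a security-group ingress audit against an allow-list.
# Entries are hypothetical placeholders (an admin subnet for SSH and
# an open peer port), not the applicant's real policy.

ALLOWED_INGRESS = {
    (22, "203.0.113.0/24"),    # SSH, hypothetical admin subnet only
    (8108, "0.0.0.0/0"),       # hypothetical factomd peer-to-peer port
}

def ingress_violations(rules):
    """Return sorted (port, cidr) pairs that are open but not allowed."""
    return sorted(set(rules) - ALLOWED_INGRESS)
```

Run on a schedule, a non-empty result would trigger the same email/Discord alerting path as the CloudWatch checks, catching a security-group rule that drifted from the plan.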
 
NK04)

Could you elaborate on your thoughts about the redundant failover system?

I did a write-up above in response to Quintilian's question regarding this; I have pasted it below for quick reference.

My preliminary thoughts on setting up a redundant failover system are to use the load balancer in AWS to run its regular checks on instance health and operation. If the load balancer reported an unhealthy instance, it would switch the Elastic IP to a healthy backup instance.

The open question is how to keep an up-to-date factomd on the backup node. My thought so far is to use the weekly snapshots of the EBS volumes and attach those volumes to the backup instances, so that when the load balancer switched instances out after a failure, it would be working with a factomd that is already mostly synced up. AWS used to make it easy to clone instances, but now it is more difficult.

Another thought is to create an Auto Scaling group to manage servers in the event that more than one backup fails and no others are available. This would probably be overkill, since I don't envision a master instance failing without any of us attending to it, but it's a thought. Another option would be AWS EC2 auto-recovery: if the CloudWatch monitor checks on the instance and reports it unhealthy, it automatically switches the Elastic IP and EBS volume to a new healthy instance in a different server set.
 