At the time of writing, adding a new asset to PegNet is possible but it is a laborious manual process:
- Find at least two APIs that provide free and reliable* data for the market price
- Create a polling adapter that is able to read the API endpoint and transform the data to use in pegnetd
- Add a new OPR version that indicates the addition of a new asset, updating the OPR and Grading package
- Determine an activation height for this change
- Implement the activation height change in both pegnetd and the reference miner
- Deploy the new code and work with exchanges, mining pools, and discord to coordinate the hard fork
* Free means there is a way to make … Read the rest
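The polling adapter in step two is the bulk of the work: it has to turn an arbitrary third-party JSON response into the uniform price map the rest of the system expects. Here is a minimal sketch of that shape in Go. The `DataSource` interface, the `PegPrices` type, and the `exampleAPI` endpoint format are all illustrative assumptions, not pegnetd's actual API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PegPrices maps asset tickers (e.g. "EUR", "XAU") to USD prices.
type PegPrices map[string]float64

// DataSource is the shape a polling adapter would satisfy so the
// poller can treat every API uniformly. (Illustrative; pegnetd's
// real polling interface differs in detail.)
type DataSource interface {
	Name() string
	FetchPrices() (PegPrices, error)
}

// exampleAPI adapts a hypothetical endpoint returning
// {"rates": {"EUR": 0.91, "XAU": 1500.0}} into PegPrices.
type exampleAPI struct {
	raw []byte // in practice: the body of an HTTP GET
}

func (e exampleAPI) Name() string { return "ExampleAPI" }

func (e exampleAPI) FetchPrices() (PegPrices, error) {
	var resp struct {
		Rates map[string]float64 `json:"rates"`
	}
	if err := json.Unmarshal(e.raw, &resp); err != nil {
		return nil, err
	}
	return PegPrices(resp.Rates), nil
}

func main() {
	src := exampleAPI{raw: []byte(`{"rates":{"EUR":0.91,"XAU":1500.0}}`)}
	prices, err := src.FetchPrices()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s: EUR=%.2f XAU=%.1f\n", src.Name(), prices["EUR"], prices["XAU"])
}
```

Because every adapter hides its endpoint's quirks behind the same interface, adding an asset only requires writing the transform, not touching the poller.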
One of PegNet’s selling points is decentralization: officially, nobody is in charge of PegNet. It works as long as the people using it agree to play by the same rules, which at the moment is done by everyone running the same software: pegnetd. As a core developer for pegnetd, one of the first questions I ask myself before considering a new feature is: how will the community react to this?
That immediately raises another question: who exactly is the “PegNet community,” and can we determine who the majority is? The obvious answer is that it’s the combination of all the people using it: members on discord, core developers, exchanges, miners, and so on. However, it’s not feasible to … Read the rest
As of block 222270, PegNet switched from using the bootstrap formula for the value of PEG to using the market value, calculated from the three exchanges that list the token: CiteX, ViteX, and VineX. For miners to be able to oraclize this data, we needed to add APIs to the system. Every asset in PegNet has at least two separate APIs to pull data from and we did not want to make an exception for PEG. CoinGecko was the only existing API for the price of PEG, so two entities in the PegNet community stepped up: Factoshi and Factomize (with the goal of being phased out as other data aggregators come online). The APIs of Factoshi and … Read the rest
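With at least two APIs per asset, the miner has to reduce several independent quotes to one number. One robust way to do that is the median, since a single stalled or outlier feed cannot drag the result far. This is a sketch of that idea, not necessarily the aggregation pegnetd actually uses, and the quote values are made up.

```go
package main

import (
	"fmt"
	"sort"
)

// median combines quotes for the same asset from several independent
// APIs. With two sources it degenerates to their average; with three
// or more, one bad feed is simply ignored.
func median(quotes []float64) float64 {
	s := append([]float64(nil), quotes...) // don't mutate the caller's slice
	sort.Float64s(s)
	n := len(s)
	if n%2 == 1 {
		return s[n/2]
	}
	return (s[n/2-1] + s[n/2]) / 2
}

func main() {
	// Hypothetical PEG/USD quotes from three sources; the third feed
	// is an outlier and does not affect the result.
	fmt.Println(median([]float64{0.0031, 0.0033, 0.0090})) // prints 0.0033
}
```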
Update (Nov. 6th, 2019): A new version of the PegNet Daemon has been released that lets you burn FCT by typing pegnetd burn [source address] [amount].
It’s Monday, October 7th and today we are launching the most important aspect of PegNet: Transfers and Conversions or, in short, Transactions. They will be enabled starting with block 213237, with a planned official launch at 15:00 UTC.
What was once just an idea and a whitepaper is now reality. We are happy to release the PegNet Daemon, which will extend PegNet’s functionality to include these additional features:
- Transferring any pegged asset to a different address (e.g. sending 10 pUSD to another person)
- Converting pegged assets (excluding PEG) to a different
… Read the rest
I can’t believe it’s already been a year since I started working for Factomize. At the time, I had only a superficial knowledge of blockchains and no experience writing Golang. David’s charge to have me become a Factom Protocol core developer seemed like an almost insurmountable task.
At first, I just worked on the Factomize forum, which was more in line with my area of expertise, while getting familiar with the Factom community and node. A couple of months later it was time to learn Golang and familiarize myself with the core code. My first pull request was on January 7th, 2019, simply adding myself to the factomd CLA. That was followed by my first feature implementation: adding https support to the … Read the rest
Over the past couple of weeks, I’ve been involved in the PegNet project and I wanted to share my understanding of what it is and how it works with the rest of the world. I’m a developer and not an economist, so my perspective focuses more on the technical aspects than how to master the market. Due to the large scope of the project, this blog will be split into multiple pieces, with the first one focusing on the Oracle.
PegNet, short for Pegged Network, is a set of tokens pegged to existing currencies. It is built as a Factom Asset Token (“FAT”) standard on top of the Factom Protocol, meaning that the values and transactions sit inside … Read the rest
In my previous blog post on the gossip network, I detailed how the current network has a tendency to form a hub network and how that introduces both inefficiencies and scalability problems. A short recap: when booting up, all nodes connect to the seed nodes, leaving them with a disproportionately high connection count. This skews the fanout of messages, with the seed nodes receiving an outsized share of messages, the duplicates of which are dropped.
The ideal network structure is every node in the system connected to an equal number of other nodes. This is made difficult by the fact that nodes are not aware of the network topology.
So how do we go from a hub network to … Read the rest
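The hub effect described in the recap is easy to reproduce with a toy model: if every joining node must dial a seed, seed degree grows linearly with network size while ordinary nodes stay near the configured outgoing count. This simulation is my own illustrative assumption about the boot behavior, not factomd code; all parameters are made up.

```go
package main

import (
	"fmt"
	"math/rand"
)

// simulate grows a toy network where every joining node dials one
// random seed node plus a few random already-known peers, and returns
// the average degree of seed nodes vs. everyone else.
func simulate(nodes, seeds, outgoing int, seed int64) (seedDeg, peerDeg float64) {
	rng := rand.New(rand.NewSource(seed))
	degree := make([]int, nodes) // indices 0..seeds-1 are the seed nodes
	for n := seeds; n < nodes; n++ {
		// one mandatory boot connection to a random seed node
		degree[n]++
		degree[rng.Intn(seeds)]++
		// remaining outgoing slots go to random already-known peers
		for i := 1; i < outgoing; i++ {
			degree[n]++
			degree[rng.Intn(n)]++
		}
	}
	for i := 0; i < seeds; i++ {
		seedDeg += float64(degree[i])
	}
	for i := seeds; i < nodes; i++ {
		peerDeg += float64(degree[i])
	}
	return seedDeg / float64(seeds), peerDeg / float64(nodes-seeds)
}

func main() {
	s, p := simulate(1000, 3, 4, 1)
	fmt.Printf("avg seed degree: %.0f, avg non-seed degree: %.1f\n", s, p)
}
```

With 1000 nodes and 3 seeds, the seeds end up with roughly forty times the connections of an average node, which is exactly the hub shape the post describes.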
Living in a world where it’s impossible to tell whether a recorded video is real sounds like a nightmare, but with the advent of Deepfakes, many say that world has already arrived. The question of what to do about it is asked almost daily in the Factom Protocol community, but the answers, both in our community and elsewhere, have been sparse.
Tackling Deepfakes is an extraordinarily difficult problem and, unfortunately, I have no easy answers. I do, however, have some expertise and a lot of interest in the area. The goal of this blog is to present the full scope of the problem, of which Deepfakes is only the latest iteration, and explore the … Read the rest
Up until now, I have been relying on legacy values for configuring the P2P 2.0 package I have been working on. These values are:
- Outgoing: 32
- Incoming: 150
- Fanout: 16
- Rounds: 6
As far as I know, these values were selected arbitrarily, with the primary goal of ensuring that messages reach as many targets as possible. The drawback is that the more reliability you aim for, the more the network is flooded with duplicate messages. I wanted to find out whether these settings make sense for the network and whether it is possible to optimize them.
Since I am a programmer, not a mathematician, I opted to do this through an empirical process.
Note: All data I used … Read the rest
There is a lot of talk about scalability, sharding, and how to get factomd to the next level. What I want to do in this blog is skip past all the steps in between and start right at the end: a fully customizable, modularized, shardable factom node.
This is not meant to be a proposal of things we should implement in factomd right now; it is an idealistic vision of the future that doesn’t account for hardware limits or optimizations. The haute couture of programming: not something meant to be implemented, but rather to inspire goals and trends.
The foundation of extendable modularization is a unified message bus. All modules should be able to react … Read the rest
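In its smallest form, such a bus is just topic-based publish/subscribe: modules register interest in a topic and react to whatever is published there, without knowing about each other. The sketch below is an idealized, synchronous toy of my own; a real factomd bus would need concurrency, typed messages, and backpressure.

```go
package main

import "fmt"

// Bus is a minimal synchronous message bus: every message published
// to a topic is delivered to all handlers subscribed to that topic.
type Bus struct {
	subs map[string][]func(msg interface{})
}

func NewBus() *Bus {
	return &Bus{subs: make(map[string][]func(interface{}))}
}

func (b *Bus) Subscribe(topic string, handler func(interface{})) {
	b.subs[topic] = append(b.subs[topic], handler)
}

func (b *Bus) Publish(topic string, msg interface{}) {
	for _, h := range b.subs[topic] {
		h(msg)
	}
}

func main() {
	bus := NewBus()
	// Two independent "modules" react to the same event without
	// knowing about each other -- only about the bus.
	bus.Subscribe("block", func(m interface{}) { fmt.Println("indexer saw", m) })
	bus.Subscribe("block", func(m interface{}) { fmt.Println("api saw", m) })
	bus.Publish("block", "height 213237") // both handlers fire, in subscription order
}
```

Swapping a module out then means nothing more than changing which handlers are subscribed, which is the whole point of putting the bus at the foundation.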