Process Discussion: ANO assessment process - role of Guides and Standing Parties in next round.

Public: Only invited members may reply

  • Viewed: BI Foundation, Bedrock Solutions, Blockrock Mining, Brian Deery, BuildingIM, Canonical Ledgers, Crypto Logic, Cube3, DBGrow, De Facto, Factom Inc., Factomatic, Factomize, Factoshi, Federate This, Go Immutable, HashnStore, Julian Fletcher-Taylor, LUCIAP, LayerTech, Matter of Fact, Mike Buckingham, Multicoin Capital, Niels Klomp, Prestige IT, RewardChain, Samuel Vanderwaal, Stamp-IT, The Factoid Authority, Tor Paulsen, VBIF
  • Not Viewed: None

Timed Discussion

Discussion ended:

Status
Not open for further replies.
Secured
#1
As part of the progression to increased decentralisation, it is intended that the next round of ANO applications will be assessed by the ANOs.

The ANO application process has served us well but there are challenges with the existing process:
  • We do not signpost the ANO requirements sufficiently well
  • There are duplicate and redundant questions
  • Assessing the responses has been very time-consuming for the guides
  • A large proportion of the marks are based on subjective assessments
  • It can be difficult to draw objective conclusions
A Research Group has been established by DBGrow and Cube3 to recommend changes detailed in the attached document. They are:

1. Revised introduction explaining the ANO role and the expectations of an ANO
2. Grouping of the questions into specific categories and having a balance of closed and open questions
3. Creating a revised scoring system which ultimately enables all standing parties to participate in the assessment against these questions. For this next round, however, it is proposed that the standing parties would be responsible for 60% of the marks and the guides for 40% (a rough sketch of this weighting appears after this list).
4. Providing guidance on how the responses to the questions should be graded against a scale which should help reduce marking variance.
5. Consolidating the results so that applicant performance can be visualised and easily compared, enabling appropriate searching questions to be asked
6. Including the responses to the ad-hoc questions in the assessment in a structured way.
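
To make the proposed weighting concrete, here is a minimal sketch in Python. The number of voters, the scores, and the 0-10 scale are illustrative assumptions only; the actual sections, weights, and scales are those defined in the attached Voting Matrix.

```python
# Minimal sketch of the proposed 60/40 split, assuming 0-10 raw scores.
# Standing parties mark the non-technical sections; Guides + Testnet Admin
# mark the technical sections. All numbers here are hypothetical.

def combined_score(standing_party_scores, guide_scores,
                   standing_weight=0.60, guide_weight=0.40):
    """Combine the two voter groups' average marks for one applicant."""
    standing_avg = sum(standing_party_scores) / len(standing_party_scores)
    guide_avg = sum(guide_scores) / len(guide_scores)
    # Standing parties carry 60% of the marks, Guides + Testnet Admin 40%.
    return standing_weight * standing_avg + guide_weight * guide_avg

# Example: five standing-party scores and three guide scores.
print(combined_score([7, 8, 6, 9, 7], [8, 8, 9]))  # -> approximately 7.77
```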

The attached “Voting Matrix” illustrates suggested section groupings, revised questions, % of marks and scoring responsibility.

Following the discussion, we would like you to vote on whether you want a scoring system which, for this next round, allocates 60% of the scores to the standing parties and 40% to the Guides and Testnet Admin.

Thank you for your participation.

Even assuming community approval, it may not be possible to incorporate all of the other changes referred to above in time for the next ANO application round, in which case they can be phased in.
 

Attachments

Chappie

Factomize Bot
Secured
#2
This thread is a Minor Timed Discussion and I am designed to help facilitate efficient communication.

Guides and ANOs may take part in this discussion and vote. Unless this discussion is ended early or extended, it will end in 3 days, after which a vote may take place. From 18 hours after the start of the thread until 24 hours are left in the discussion, you can make a motion to end the discussion immediately or to extend it beyond its initial time frame by selecting the pertinent button at the top of this thread. If someone "seconds" your motion, a poll will take place, and if a majority of voters vote yes by the time the discussion is scheduled to end, the time period will be extended for 24 hours.
 
Secured
#3
Thank you, Mike, for detailing the crux of what we're working towards above. I'd like to re-state that this thread aims to accomplish two goals:

- To discuss the contents of the ANO Election Research Group document + voting matrix
- To ultimately discuss the merits of switching our current ANO scoring process, as laid out in Doc 001, from ALL Standing Parties having equal vote weight in all categories to Guides + Testnet Admin having a 40% vote weighting in the technical categories (Support of Protocol/Efficiency, Technical Specs, and Decentralization of the Protocol), with a 60% vote weighting attributed to ALL Standing Parties in the Human Determinable Factors and Availability of Maintenance Team categories.

The catalyzing logic for this change is as follows:

We know that ANOs don't necessarily always have the technical acumen to review varying ANO server and node set-ups and then compare them against each other. From that, we felt that splitting the vote weighting into these 40% and 60% categories would allow the Guides + Testnet Admin, who are perceived to be more experienced, to vet the technical information and vote on it, while allowing all Standing Parties (ANOs and Guides) to vote in the 60% vote weighting category, where the human determinable factors come into play.

This newly proposed vote weighting structure also has the benefit of decreasing the time an ANO spends reviewing and vetting the specs of these server and node set-ups, so that ANOs can focus solely on voting on the human-expertise benefits the prospective ANO can bring to the table. This would hopefully free up ANOs' time.

We do not want to gatekeep or centralize decision-making, but at this time we seek a balance: optimal scrutiny of the technical aspects (which we presently believe is best provided by the Guides + Testnet Admin) while relieving ANOs of voting on technical matters they may not fully comprehend.

Please review and comment on this proposal so that we can best move forward with the process of developing the workflow for the upcoming ANO application round. And if needed, after this discussion + vote on scoring structure, we will propose any necessary changes to Doc 001 along with a single-use ANO application & scoring workflow we will share with the community here on Factomize. Thank you.
 
Last edited:
Secured
#4
Thank you all for your hard work!

1. I understand and share your concern about the technical acumen of some ANOs. The reality is, some of the Guides will likely not have that technical acumen either. As such, how is the proposal to have the Guides handle it better?

2. In the scoring matrix, you've created a list of questions which ANOs will base their votes on. Will the ANOs be able to ask additional questions of candidates, or are the questions fixed to what you propose?

Thank you.
 

Chappie

Factomize Bot
Secured
#5
We are now 18 hours into the discussion. You may now make a motion to extend this Minor Discussion by an additional 24 hours or end this conversation by selecting the pertinent button at the top of this thread. This option will end when there are 24 hours left in the discussion.
 
Secured
#6
@Mike Buckingham @Nic R

Given the proposed split, all the objective factors seem to sit with the guides and testnet admin, and all the subjective factors seem to sit with the ANOs. Because we publish the criteria for the objective scores, every applicant should be "maxing out" their objective scores to have the highest chance of becoming an ANO. This effectively gives the guides no standing because the power of their vote is in the hands of the applicant.

Maybe I have misunderstood the objective/subjective factor split?

If I haven't, the proposed setup would be quite dangerous. I would recommend that the guides retain some of the subjective criteria so that they have not ceded their standing to the ANOs and applicants.
 
Secured
#7
@David Chapman
  1. This is a good point. Strictly speaking, we should actually expect a higher level of technical acumen from ANOs than from Guides, as an ANO's baseline is to be able to run Factom servers, while Guides do not necessarily need to (and can be non-ANO). My thinking here was that those tasked by their ANO with reviewing applications may not be the technically experienced individuals from that ANO who do the server management. This brings in the extra overhead of having to pull in the people from the ANO who do the technical management to review applications. Guides, on the other hand, can be expected to take whatever steps are required to appropriately judge the technical merits of candidates, including seeking out more experienced individuals' opinions, and this level of effort may not be practical to expect from ANOs.

  2. We will still preserve the Q&A period on Factomize where community members can ask questions to candidates.
 
Secured
#8
@Ben Jeater

Thanks, Ben. The Guides and Testnet Admin are responsible for the overhead of objective scoring, but all Standing Parties, which include the Guides, vote on the subjective criteria, so Standing is not fully ceded to ANOs; it is instead distributed with the same weight we used during Grant rounds. Thus, Guides and ANOs will have equal vote weighting, 1:1, in the subjective categories.
 
Secured
#10
I meant that, in the subjective category, each Guide has the same vote weight as any given ANO. So if there are 29 total votes, the 1 vote from any given Guide and the 1 vote from any given ANO are equal in terms of vote weight.
 
Secured
#11
Thank you all for your hard work!

1. I understand and share your concern about the technical acumen of some ANOs. The reality is, some of the Guides will likely not have that technical acumen either. As such, how is the proposal to have the Guides handle it better?
Thank you.
The scoring of these "objective technical criteria" is not really dependent on technical acumen.

I believe what is referenced is the job of scoring objective criteria like:
- Pledged efficiency
- Node location (to ensure it doesn't conflict with other nodes)
- Technical details such as amount of RAM, type of disks (SSD/HDD), CPU cores, etc.

The above is basically just mapping a lot of detailed responses/answers to a standardized scoring system, and it is quite tedious work. Having the 5 guides do this and cross-check each other's work is beneficial from a community bandwidth perspective, as all the ANOs do not have to do this work independently of each other.

Also, I believe that if all the ANOs did this (and later other standing parties), the chance of mistakes happening would go up, as a correct result would depend on all the ANOs (and other voting parties) correctly applying the scoring criteria independently...

If the guides do it, they can all score independently, input their scores into a spreadsheet, and verify that they have scored it the same way... If there are discrepancies, they can be discussed and resolved in the guide group.
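
For illustration, here is a minimal sketch of that cross-check in Python. The guide labels, criteria, and scores are hypothetical; it simply flags any criterion on which the independent scores disagree, so the guide group can discuss it.

```python
# Hypothetical sketch: each guide scores the objective criteria
# independently; criteria where the scores differ are flagged for
# discussion before the results are finalised.

guide_scores = {
    "guide_1": {"efficiency": 8, "node_location": 10, "hardware": 7},
    "guide_2": {"efficiency": 8, "node_location": 10, "hardware": 9},
    "guide_3": {"efficiency": 8, "node_location": 10, "hardware": 7},
}

def find_discrepancies(scores_by_guide):
    """Return each criterion on which the guides did not all agree."""
    criteria = next(iter(scores_by_guide.values())).keys()
    flagged = {}
    for criterion in criteria:
        values = {g: s[criterion] for g, s in scores_by_guide.items()}
        if len(set(values.values())) > 1:  # more than one distinct score
            flagged[criterion] = values
    return flagged

print(find_discrepancies(guide_scores))
# -> {'hardware': {'guide_1': 7, 'guide_2': 9, 'guide_3': 7}}
```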
 
Secured
#12
Firstly thank you all for engaging in this discussion.

Hi David, with regard to your point about technical acumen: we are not saying that the ANOs do not have this; after all, it is a requirement for all ANOs except those that subcontract it. What we are saying is that it is an important part of the assessment, and one that the guides have historically handled very well. Indeed, Tor has explained clearly above how they do that.

The description “subjective” is a bit of a misnomer. It has historically been a useful way of separating groups of questions. Ultimately we would like to reach the point where there is less subjectivity, although we acknowledge that there are a lot of factors to assess.

On the face of it, the technical aspects can be assessed objectively, and just as we have suggested producing guidelines for the non-technical aspects of an application, the technical aspects could have guidelines too. If so, I would expect that in the short term they would align very closely with the way this aspect has been assessed historically. Importantly, they would ideally be created by people in our ecosystem with undoubted expertise in this sphere (people like Brian and Niels, for example).

To try to ensure that the ANOs are not overloaded and that we get reasonably consistent scoring, this seemed an appropriate balance to strike at this stage. Nic has clarified that the proposed balance is for the guides to score the technical aspects and all the standing parties to score the non-technical aspects. Later we want to move to a point where all standing parties contribute equally.
 

Chappie

Factomize Bot
Secured
#13
Nic R has made a motion to extend the discussion. If someone seconds this motion by selecting the button below, a vote on the motion will start.

A majority voting yay will pass the motion and the discussion will be extended for 24 hours. This motion will remain open until the normal discussion period ends or a motion to end the discussion is passed by a majority.
 

Chappie

Factomize Bot
Secured
#14
Matthias Fortin has seconded the motion to extend the discussion.

A motion is now active at the top of this thread to vote if you want to extend the discussion. A majority voting yes will pass the motion and the discussion will be extended for 24 hours. This vote will remain open until the normal discussion period ends or another motion is passed.
 
Secured
#17
Could someone please elaborate on what outstanding things need to be discussed at this point? (@Nic R @Matthias Fortin)

From what I see above, there aren't many (any?) unresolved issues. I'm asking in order to inform my decision about extending this discussion.
Hi Valentin,
Like you I do not think there are any unresolved issues. From my perspective the only reason for extending the discussion is to allow anyone else to participate if they want to.
 
Secured
#20
Just thinking about clarifying two points. They are quite formal points and do not challenge the substance of the proposed change, which I am comfortable with:
- Do we expect only one vote for all the Guides (along with the Testnet Admin), or multiple votes (5, one for each Guide), which should all be very close to one another as this concerns "objective criteria"?
- What motivates the change of weights proposed in the Excel sheet compared to Doc 001 V1.4 (Location going up from 5% to 10%, Node reliability going down from 16% to 5%, and the Human Determinable Factors going up from 49% to 55%)?

And thank you for this work, which furthers the decentralisation process.
 
Status
Not open for further replies.