Rough draft of "Grant Success" process

Unrestricted Public Thread

#1
The following is a rough draft of a "Grant Success" determination process; the ratification process will begin on February 25th (at which point I will close this thread). It is suggested that this grant success determination happen in our new grant tracking subforum. I wanted to post this early draft for additional feedback and discussion so we can have a more advanced draft ready for ratification. Here is the document:

https://docs.google.com/document/d/1ARTDRrHzLvFbg13Ci-kDToKgQ63WBBEB2aY_AJWLzxs/edit?usp=sharing

Feel free to make suggestions directly in the document or post them here. Input from non-ANOs is very much welcomed.

Thank you to @Julian Fletcher-Taylor of DBGrow who made suggestions to help get the document to this point.
 
#2
If a grant has not been put up for determination by a grantee or sponsor and three months have passed since its final milestone or initially intended completion date, it will automatically go up for determination.
1. Why three months? Why not 1 month?

Some grants may only take a month to complete. Are we going to wait 3 months to vote on that grant?

2. I am not sure automating this is ideal due to potential extenuating circumstances

3. Should this be a two-step process?
STEP 1: Vote to have a vote (there could be good reasons a grant is delayed and a final vote should not be had yet)
STEP 2: The final vote
 
#3
1. Why three months? Why not 1 month?

Some grants may only take a month to complete. Are we going to wait 3 months to vote on that grant?

2. I am not sure automating this is ideal due to potential extenuating circumstances

3. Should this be a two-step process?
STEP 1: Vote to have a vote (there could be good reasons a grant is delayed and a final vote should not be had yet)
STEP 2: The final vote
1. I wanted to provide substantial leeway. There are always extenuating circumstances (lead dev got sick, 3rd party didn't do a code review on time, zombie apocalypse, etc) but I figure 3 months should reduce those to extreme statistical outliers. If the community wants less than 3 months, I'll of course change it.

2. This process will be manual for now. Whether it is automated in the future, I don't know. I think it could be in the future once kinks are worked out. If it was automated in the future, my suggestion would be to have a "revote in one month" voting option where a majority choosing that would roll the vote back a month.

3. We could do it that way, where sponsors or grantees can initiate it anytime, or anyone can request a "vote to vote". I can see pros and cons of both methods. I personally think we should keep it simple, and the 3 months drastically reduces the chance of outlier issues/extenuating circumstances.
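To make the timing concrete, here is a minimal sketch of the 3-month trigger and the suggested "revote in one month" rollback (the 90-day and 30-day approximations and all names are illustrative, not from the document):

```python
from datetime import date, timedelta

GRACE_PERIOD = timedelta(days=90)  # roughly the 3 months of leeway

def due_for_determination(final_milestone: date, today: date) -> bool:
    """A grant goes up for determination once ~3 months have passed
    since its final milestone or initially intended completion date."""
    return today - final_milestone >= GRACE_PERIOD

def next_vote_date(today: date, majority_chose_revote: bool) -> date:
    """If a majority picks 'revote in one month', roll the vote back
    a month; otherwise the determination proceeds now."""
    return today + timedelta(days=30) if majority_chose_revote else today
```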
 
#4
1. Why three months? Why not 1 month?

Some grants may only take a month to complete. Are we going to wait 3 months to vote on that grant?

2. I am not sure automating this is ideal due to potential extenuating circumstances

3. Should this be a two-step process?
STEP 1: Vote to have a vote (there could be good reasons a grant is delayed and a final vote should not be had yet)
STEP 2: The final vote
Would that be better accomplished by just having people abstain if they think there are extenuating circumstances? Not everyone is going to agree on what constitutes reasonable justification for delay. Just thinking out loud.
 
#5
I'm curious as to why we departed from the Yes/No structure in Grant Success review to an averaging system. Not against it, to be clear; just wondering what the logic is behind the change. Thank you in advance for any context on this one.

Also, what is the process if a Grant does fail? Is there any recourse? While this document does a great job of outlining how Grant Success can be determined, I'm not clear on the direction when a Grant fails.

I know that in Doc 001, 6.2.6, "Support Category: Grant Success" discusses the idea of Support provided over 24 months for successful completion of Grants (as it may eventually relate to Voting power in the Protocol).

I bring this up because I find that since this document does an excellent job of defining Grant success (and how it's measured), the corollary, in my mind, is "Failure of Grants", and I'm unclear as to how failed Grants are navigated after the period-of-performance. If this is a topic for a future thread, I have no problem with tabling this discussion.

My present understanding here is that it's as simple as: Grant fails -> Grant-receiving Party doesn't receive Support from Grant Success (when this is implemented in the future) -> social trust from the community may be weakened -> we all move on. Does this sound right?

Regardless of the answers to the above, I support the creation of this Grant Success process as it is currently proposed.
 
#6
I'm curious as to why we departed from the Yes/No structure in Grant Success review to an averaging system. Not against it, to be clear; just wondering what the logic is behind the change. Thank you in advance for any context on this one.
A suggestion in the doc wanted an actual score for greater granularity.

Also, what is the process if a Grant does fail? Is there any recourse? While this document does a great job of outlining how Grant Success can be determined, I'm not clear on the direction when a Grant fails.
I purposefully didn't try to outline what happens if a grant fails or is successful. I don't feel that needs to be defined in this document, at least not at this time. In the short term, having a formal determination of who has succeeded and who has failed will allow for greater transparency and social enforcement.
 
#8
Thanks for putting this together, guys :) Is the idea to use this process retroactively as well?

5 is an average but successful job, and 10 is a fantastic result exceeding the voter’s expectations.
At the end of the voting period, if the average of non-abstaining votes is 5 or higher, the grant is successful. If the average is below 5, the grant is determined to have failed.
I'm a bit concerned about the methodology here. By taking the granular number and turning it into a binary outcome we're saying that a 4.9 rated grant (achieved 98% of its goal) is the same as a 0 rated grant (took the money and ran).

The system also creates a situation where a 10.0 rated grant (changed the world) and a 5.0 rated grant (hit the target) are given an equal rating.

If we are going to create a granular scoring system then I believe we should also have a granular ranking system that reflects it. I'd recommend 5 tags:
  • Exceptional (9.0 - 10.0)
  • Overachieved (7.0 - 8.9)
  • Achieved (5.0 - 6.9)
  • Underachieved (3.0 - 4.9)
  • Failure (0.0 - 2.9)
I would like us to have a system where those who do achieve aren't grouped with those who barely achieved; this process should be a meritocracy, not a machine churning out binary outcomes.
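To illustrate, here is a minimal sketch combining the draft's averaging of non-abstaining votes with the five bands proposed above (thresholds as listed; the function names are mine):

```python
from typing import Optional

BANDS = [  # (minimum average score, tag), per the list above
    (9.0, "Exceptional"),
    (7.0, "Overachieved"),
    (5.0, "Achieved"),
    (3.0, "Underachieved"),
    (0.0, "Failure"),
]

def grade(votes: list[Optional[float]]) -> str:
    """Average the non-abstaining votes (None = abstain) and map the
    result to a band instead of collapsing it to pass/fail."""
    cast = [v for v in votes if v is not None]
    avg = sum(cast) / len(cast)
    return next(tag for floor, tag in BANDS if avg >= floor)

# A 4.9 average lands in "Underachieved" rather than in the same
# bucket as a 0.0 "took the money and ran" grant.
print(grade([5.0, 4.8, None, 4.9]))  # Underachieved
```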
 
#10
The governance doc describes "grant success" as a factor to be used in governance, so my view would be that we start with a binary option that can more easily be included in an on-chain system (maybe "success" triggers some kind of support with weight equal to the amount of FCT paid for the grant).

In theory the support could also be higher for grants that exceeded expectations, but that adds complexity - and to be honest I think we should work towards an MVP at this point in time.
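As a rough sketch of that MVP (hypothetical throughout; nothing below is taken from Doc 001 beyond the 24-month idea mentioned earlier), a successful grant could contribute support equal to the FCT paid, counted over a 24-month window:

```python
from datetime import date, timedelta

def support_weight(grants: list[tuple[date, float, bool]], today: date) -> float:
    """Sum the FCT paid for successful grants inside a ~24-month window.
    Each grant is (payout_date, fct_paid, success); all hypothetical."""
    window = timedelta(days=730)  # roughly 24 months
    return sum(fct for paid_on, fct, success in grants
               if success and today - paid_on <= window)
```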

I also provided some input directly to the document which I haven’t mentioned here.
 
#11
Just to clarify, would 5.0 or higher still be "successful" in your system, with 4.9 a fail? Or would only 2.9 and below be considered a failure?
I took 5.0 to be a successful grant (as per the document put together) which I renamed as "achieved" to allow the prefixes "over" and "under" to be applied. A score of 4.9 would be considered a failure to meet the requirements as set out in the grant.

I have nothing against where the line was drawn. What would be useful is guidance as to what specific scores mean to allow standing parties to give an accurate score based on their feeling about the grant. By assigning terms such as "underachieved", "achieved", and "overachieved" to the scoring, a form of guidance is provided.

If we keep it where the score is converted into a binary outcome, it will create binary voting (like in 2018 grant round 2 with the 0-100 scores).

An imagined conversation:
Person 1 said:
That grant was successful in my opinion so I'll give it a 10 so it is more likely to be ranked as successful
Person 2 said:
But it only barely achieved its goals; surely a score of 5 or 6 would be appropriate?
Person 1 said:
It was successful, I'm giving it a 10
 
#15
Just tossing out another idea:

Just as standing parties can vote to extend a conversation, we could have a running standing party tally of whether a grant should be voted on. Meaning, as soon as 6 standing parties vote, "It's time to vote on the success of this grant," the actual vote is triggered (after a brief discussion). One of the voting options could be "Delay another month."

Probably too much to implement right now, but something to maybe keep in the back of our minds for down the road when we want to fine-tune the process.
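In sketch form (the threshold of 6 comes from the idea above; everything else is illustrative):

```python
TRIGGER_THRESHOLD = 6  # standing parties needed to trigger the vote

def should_start_success_vote(requests_so_far: int) -> bool:
    """Once enough standing parties say 'time to vote', the actual vote
    (including a 'Delay another month' option) is triggered."""
    return requests_so_far >= TRIGGER_THRESHOLD
```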
 
#17
Why do you believe that a binary option is more easily included in an on-chain system?
Just because it is fewer factors to add. If it's a binary option, then you can set a determined "grant success factor" in the overall weighting, but if you score it on a scale then you'll need to calculate the actual weight for each grant. That adds more logic into the mix that needs to be taken into account in the code... I'm not a developer, however; maybe it is actually trivial, and if someone makes a good case for that I'm happy to have my opinion changed :)
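To illustrate the difference, a sketch under my own assumptions (not anyone's actual on-chain design): the binary option is a flat contribution, while a scaled score means deriving a per-grant weight.

```python
# Binary: a successful grant contributes its full FCT amount, a failed
# grant contributes nothing. One flag per grant is all the chain needs.
def support_binary(fct_paid: float, success: bool) -> float:
    return fct_paid if success else 0.0

# Scaled: each grant's weight is derived from its 0-10 score, which is
# the extra per-grant logic the on-chain code would have to carry.
def support_scaled(fct_paid: float, score: float) -> float:
    return fct_paid * (score / 10.0)
```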
 
#18
I'm personally in favor of the binary scoring system. I would like to see more defined milestones for our grant system, similar to how contracts have Contract Line Item Numbers (CLINs). You could put the grant CLINs/milestones up for review/vote and, if successful, aggregate the milestone achievements into a final aggregate score (if a score is desired). This would also allow for developing automated grant payouts in the future based upon milestone achievement, rather than a lump payment and simply quantifying whether or not a grant was successful.
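A rough sketch of the milestone idea (structure and names are my own, not part of the draft): each CLIN/milestone is reviewed individually, and the results aggregate into a final score.

```python
def aggregate_score(milestones_achieved: list[bool]) -> float:
    """Scale the fraction of achieved milestones to the 0-10 range."""
    return 10.0 * sum(milestones_achieved) / len(milestones_achieved)

# Example: 3 of 4 milestones achieved -> 7.5
print(aggregate_score([True, True, False, True]))
```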
 
#19
I like this and what's been put together so far. It's great that we're moving forward on this. I just want to say that now, before the rest of this sounds overly critical.

I feel we're putting a lot onto standing parties by asking them to take the time to review and determine the success of every grant we pay out. I would really like to see something in place, whether via Guides or a community-based group, where someone takes additional time beforehand to review and summarise the grants before putting them up for standing party review. Standing parties don't have to agree and can vote however they see fit, but this would at least inform them with an independent voice that isn't part of an update. I realise sponsors can fill this role, but not all grants have sponsors. I just don't know if enough standing parties are going to be knowledgeable enough to score accurately without guidance.

My other concern is: are we going to end up with people only creating proposals that have little to no risk of failure, to avoid receiving a black mark against their name? Is this good for the protocol long term?

Off-topic, but finally: what do we do with this information? Will past grant statuses need to be declared on future proposals? How would it work? For example, say I was part of a grant that failed badly and later became part of a different grant with different people. Would there need to be some declaration that I was part of a failed grant in the past?
 
#20
I feel we're putting a lot onto standing parties by asking them to take the time to review and determine the success of every grant we pay out. I would really like to see something in place, whether via Guides or a community-based group, where someone takes additional time beforehand to review and summarise the grants before putting them up for standing party review. Standing parties don't have to agree and can vote however they see fit, but this would at least inform them with an independent voice that isn't part of an update. I realise sponsors can fill this role, but not all grants have sponsors. I just don't know if enough standing parties are going to be knowledgeable enough to score accurately without guidance.
I added wording to the doc where the person initiating the determination is to provide a short summary.

My other concern is: are we going to end up with people only creating proposals that have little to no risk of failure, to avoid receiving a black mark against their name? Is this good for the protocol long term?
I don't see how this process will change anything in that regard other than make it easier to see who performed well and who did not. I think there will be no shortage of grant proposals of all types in the future.

Off-topic, but finally: what do we do with this information? Will past grant statuses need to be declared on future proposals? How would it work? For example, say I was part of a grant that failed badly and later became part of a different grant with different people. Would there need to be some declaration that I was part of a failed grant in the past?
That's up to the community and outside the scope of this process.
 
#24
One thing to consider: how are we going to handle the determination of grant success for grants in which multiple parties participate? There are two obvious approaches:
  • the same score is assigned to all parties involved in the grant
  • a separate score is assigned to each party involved in the grant
I see arguments for both and I'm not sure which is the better approach, to be honest, but I think it's something we need to think about & discuss.
 
#25
One thing to consider: how are we going to handle the determination of grant success for grants in which multiple parties participate? There are two obvious approaches:
  • the same score is assigned to all parties involved in the grant
  • a separate score is assigned to each party involved in the grant
I see arguments for both and I'm not sure which is the better approach, to be honest, but I think it's something we need to think about & discuss.
I lean towards scoring the project as a whole. I can see the argument for wanting to know who did what, but the opportunity for micromanagement comes into play. As a protocol, I don't care how it got done, just that it did. As a stakeholder, I don't necessarily know the entities involved and don't need to be privy to internal project drama.

This does not mean that we should not associate them in the future somehow, but if the goal is to eventually handle this with on-chain voting, all I care about is results (for success voting).

If it is a big dramatic mess that still delivers the code, you probably won't get a 10 rating.
 
#26
One thing to consider: how are we going to handle the determination of grant success for grants in which multiple parties participate? There are two obvious approaches:
  • the same score is assigned to all parties involved in the grant
  • a separate score is assigned to each party involved in the grant
I see arguments for both and I'm not sure which is the better approach, to be honest, but I think it's something we need to think about & discuss.
I believe that grant success should be determined for the project overall, with each participating team assigned the same score. This process should be as straightforward as possible, and the added complexity that comes with assessing the performance of individual contributors will likely cause more problems than it solves.