Ratified Doc 106 - Factom Grant Success Determination Process


Poll: Should the document be ratified or amended as specified by the thread type?

  • Have not voted: Federate This (Authority Node)
  • Total voters: 29
  • Poll closed.

Timed Discussion

Status: Not open for further replies.
#1
The purpose of this thread is to further discuss and then put up for ratification Doc 106 - Factom Grant Success Determination Process. The ratification of this document would provide a defined process for how approved grants are determined to be successful or not. "Grant success" is a term utilized in Doc 001 Factom Governance. How the ecosystem should subsequently utilize the information of whether a grant is successful or not is currently outside the scope of this document.

Submission of this document for ratification was scheduled with the Guides.

Please make suggestions for changes within this thread.

Previous discussion and revisions to the draft took place here.

Thank you.
 

Chappie (Timed Discussion Bot)
#2
This thread is a Document Ratification/Amendment Timed Discussion and I am designed to help facilitate efficient communication.

Guides and ANOs may take part in this discussion and vote. Unless this discussion is ended early or extended, it will end in 8 days, after which a vote will take place. From 18 hours after the start of the thread until 24 hours remain in the discussion, you can make a motion to end the discussion immediately or to extend it beyond its initial time frame by selecting the pertinent button at the top of this thread. If someone "seconds" your motion, a poll will take place which requires a majority of Standing Parties to vote one way or the other.

At the end of the discussion period, Guides will vote first, and at least 4 must vote yes or the process ends. If 4 do vote yes, ANOs then vote, and if 60% vote yes, the document is successfully ratified or amended.
 
#4
Thanks for putting this together, David. Just to be clear, what is the stated objective in having a formal vote on the success of a grant? Knowing this context will help in my reading of the proposed legislation.
Alex, I added the following to the first post in this thread:
"Grant success" is a term utilized in Doc 001 Factom Governance. How the ecosystem should subsequently utilize the information of whether a grant is successful or not is currently outside the scope of this document.
Knowing whether grants are successful or not is obviously necessary and we need to standardize that process. HOW we utilize that information is a much broader topic we can tackle over time. I suspect there will be social enforcement prior to there being defined metrics.
 
#5
Thanks for that clarification. Here is the relevant section in 001:

6.2.6. Support Category: Grant success
6.2.6.1. The successful handling of grants in the past provides support over 24 months. A grant is weighted by the factoids issued, on the dates issued, or milestone complete. Grant success will be combined from the following factors, to be tracked in the protocol. Voting power is not granted until the grant is closed by the Post Mortem.

6.2.6.1.1. 30% Milestones completed (10 points for each milestone, up to three milestones.)

6.2.6.1.2. 30% Project completed

6.2.6.1.3. 40% Post Mortem grading of the grant by standing parties
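As a rough illustration of how those weights could combine into a single score (the function below is my own sketch; Doc 001 does not prescribe an exact formula):

```python
# Unofficial sketch of the Doc 001 section 6.2.6.1 weighting. Assumes a
# 0-100 scale: 30% milestones, 30% project completion, 40% Post Mortem grade.

def grant_success_score(milestones_done: int, project_completed: bool,
                        post_mortem_grade: float) -> float:
    """Combine the three 6.2.6.1 factors into a single 0-100 score.

    milestones_done:    completed milestones (10 points each, capped at 3)
    project_completed:  whether the project as a whole was delivered
    post_mortem_grade:  standing parties' Post Mortem grade on a 0-10 scale
    """
    milestone_points = min(milestones_done, 3) * 10        # up to 30 points
    completion_points = 30 if project_completed else 0     # all-or-nothing 30 points
    post_mortem_points = post_mortem_grade / 10.0 * 40     # up to 40 points
    return milestone_points + completion_points + post_mortem_points

# Example: all three milestones hit, project delivered, Post Mortem grade 7.5/10
print(grant_success_score(3, True, 7.5))  # 90.0
```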
I think we need to more carefully define how these two documents are related to each other. Is this process the official "Post Mortem" outlined by Doc 001? Moreover, would the process outlined by Doc 106 account only for the 40% weighting granted by the Post Mortem described in 6.2.6.1.3, or would it also define whether the project was completed as outlined in 6.2.6.1.2? This is important, as one of the grades provided in Doc 106 is "Incomplete", which has direct implications for 6.2.6.1.2.

Furthermore, we might have a situation where the grant was completed, but the quality of work is so sloppy that standing parties do not give it a passing grade. Under those circumstances, the "Incomplete" scoring tier would be factually inaccurate and may lead to confusion when casting votes. We should perhaps rename that category "Underachieved".
 
#7
I think we need to more carefully define how these two documents are related to each other. Is this process the official "Post Mortem" outlined by Doc 001? Moreover, would the process outlined by Doc 106 account only for the 40% weighting granted by the Post Mortem described in 6.2.6.1.3, or would it also define whether the project was completed as outlined in 6.2.6.1.2? This is important, as one of the grades provided in Doc 106 is "Incomplete", which has direct implications for 6.2.6.1.2.
If this document is ratified, my plan was to submit changes for Doc 001 version 1.5 to reference this document and clear up any inconsistencies.
Furthermore, we might have a situation where the grant was completed, but the quality of work is so sloppy that standing parties do not give it a passing grade. Under those circumstances, the "Incomplete" scoring tier would be factually inaccurate and may lead to confusion when casting votes. We should perhaps rename that category "Underachieved".
Good call. Change made.
 
#8
I think that some registration of reviewers is necessary. As we go along, the commitment to review progress and results might be too great for parties if everyone has to review every grant, and latecomers might not have actually maintained an understanding of performance over time. And this can all be done on-chain (eventually).
The document currently states
The thread will be replied to by the initiator of the determination bringing attention to the poll, a link to this document, a summary of the grant performance, and provide the following scoring rubric:

Exceptional (9.0 - 10.0) - Successful
Overachieved (7.0 - 8.9) - Successful
Achieved (5.0 - 6.9) - Successful
Underachieved (2.0 - 4.9) - Failure
Total Failure (0.0 - 1.9) - Failure
You could rewrite that as:
The thread will be replied to by the Reviewer bringing attention to the poll, a link to this document, a summary of the grant performance, and provide the following scoring rubric:
As such, I believe this document allows for formalization of such a group of people but don't feel it needs to be defined within this document. If you would like "initiator of the determination" to be renamed at this juncture, I'm good with that.
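For what it's worth, the score-to-grade mapping in the rubric is mechanical enough to express as a simple lookup; a minimal sketch (the function name and band floors are my reading of the draft, not official Doc 106 text):

```python
def grade(score: float) -> tuple[str, str]:
    """Map a 0.0-10.0 determination score to (category, outcome) per the draft rubric."""
    bands = [
        (9.0, "Exceptional",   "Successful"),
        (7.0, "Overachieved",  "Successful"),
        (5.0, "Achieved",      "Successful"),
        (2.0, "Underachieved", "Failure"),
        (0.0, "Total Failure", "Failure"),
    ]
    for floor, category, outcome in bands:
        if score >= floor:
            return category, outcome
    raise ValueError("score must be in the 0.0-10.0 range")

print(grade(6.9))  # ('Achieved', 'Successful')
print(grade(1.5))  # ('Total Failure', 'Failure')
```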
 
#9
I am concerned that the need for this legislation is not sufficiently clear. "Because we said it in Doc 001" does not really seem like a good enough reason to do this to me if we are then able to go and simply edit Doc 001 in order to align the two.

We have 4 grant rounds per year. Assuming a conservative 10 grants per round, this legislation would result in at least 80 new votes for the standing parties per year, and probably a lot more as we decide to defer votes on delayed grants. If we're going to introduce this legislation, we need to be very sure what the benefits are as that is a pretty enormous cognitive overhead for the standing parties.

Is the core intention here to support grant recipients as a standing party? If so, should we not do the revision to Doc 001 first and then fill out the details here?

I really want to emphasise that this will be a lot of votes to plough through. As the protocol grows, we will presumably have an ever increasing number of grants per round. Between this and the grant rounds themselves, we are going to be consumed by grants.
 
#11
You would be able to ascertain their success on past grants without community consensus. You can look through their grant updates and judge for yourself, as you would need to do anyway for this process. Do I need an official view from everyone else in order to form my own opinions on any new grant application? I would likely use my own opinion when deciding on a new proposal anyway, not the official view of the standing parties.

Grant transparency and accountability arises from the requirement of regular grant updates from grant recipients, which does not require constant, ongoing attention from every standing party.

As I see this right now, the costs are outweighing the benefits. I don't think I can support this in its current form or without greater understanding of the benefits.
 
#12
There will come a point where we have had 200, 400, 1,000 grants executed. Each grant round, are you going to want to go back and review each grant? Do you expect others to, especially people new to the ecosystem? Or do you simply want to go to factomprotocol.org/grant-success and see a UI that displays each team, corresponding success metrics, and links to their grant and grant updates so you can do an additional personal review if wanted?

If I'm a newcomer to this ecosystem, I'd like to see a UI of the past 1,000 grants' data, trust that the community scored them relatively well, and use that data to help make my decision on future grants.

If we don't set this process in place now and we decide we want those metrics later when there's already been a hell of a lot of grants, it'll be a nightmare.
 
#13
I absolutely see Alex's point here though. We have maybe 10 grants approved 4 times a year, for 40 grants. Two votes per grant is 80. If we expand standing parties it would be natural for the others to also vote on grant success... How much participation will we see? How much of that participation would actually be meaningful? I hadn't really thought this through properly (I still might not have), but I start to see some more issues with the approach appear...

We should absolutely be cognisant about the issues with asking our standing parties to review and vote on 80 grants a year. Especially when it comes in addition to grant-votes, ANO-votes, guide-votes and other governance matters.

On the other hand, we do need a way to determine grant success if that is to play a part in our governance, and if it is to be taken into consideration "on chain" we also need a way to feed that input into the system...

So a process should be put into place, and we might want to do it like this (so we at least get started and don't create a big backlog of grants that need to be scored), but personally I don't believe a process that needs this much hands-on participation by all the standing parties is viable long term...
 
#14
IF the price of FCT appreciates substantially, we'll be seeing far more than 40 grants per year.

IF we don't set this process in place now, you're not going to have data from past grants helping you make decisions on future grants. Sooner or later, someone who bungled a grant previously is going to screw up a second grant, it'll be pointed out that it happened a second time, and people will be screaming for a process like this. So let's be proactive, please? Especially if we like the idea of a "Grant Success" Standing Party (I do).

Now, bandwidth will become an issue for some. That will either be handled socially, be largely ignored as it is now, and/or the process can be changed.

As such, let us please be proactive and get this process in place and improve it over time. Let us also acknowledge that some Standing Parties won't have the bandwidth. But let us also acknowledge that if "Reviewers" do a good job, it won't really take much effort to read what they say and click a button based upon that and we'll have extremely important data we can use to make informed decisions moving forward.
 
#16
Is it worth considering whether the same techniques discussed here in the Factom Grant Success Determination Process could be applicable to those ANOs that have lowered their efficiencies to pursue projects for the betterment of the Factom protocol?
I think there is some merit to that but it's outside the scope of this document. As such, if you feel strongly about it, will you please start a separate discussion on the topic?
 
#18
It sounds like everyone is acknowledging that this is going to generate human-centric overhead for an educated voter. Very human-centric is where we still are right now, so I don't have a problem with it, as I do see a demarcation where, in the future, we can automate the voting and history and ignore the rest. When a grant round starts and I have a week or two to read 1,000 pages of forum posts on top of my regular life, I am going to look at the ones I have a stake in and use past scoring on everything else. A new grantee isn't going to be there anyway.

Question: is there a tool to create the forums and polls? I don't mean in general; I mean, a month after the projected end date, is the 'should we vote?' vote going to magically appear, or does that fall on someone's plate? If that passes, will the 'Was it successful?' vote be auto-generated?


For the record, I am for the rapid automation of everything, so adding more human overhead to the process is not something I really want. The Grantee Reputation System that this creates is a nice segue to automating the grant system.
 
#20
1. At some point this entire process will be automated but you'll still need a human to decide what button to click to score a grant.

2. Those who are worrying about bandwidth may not be thinking about the fact that as our grant system scales in scope, so too will properly run companies. Our 1-5 person teams, if/when the price of FCT is higher because of utility demand for the token (which also causes more grants to come in), will scale to larger teams where someone's role will be, in part, to review the grants and score them accordingly.

But we need to proactively put processes in place so that properly run companies can scale with them.
 
#21
What about voting using a delegated system or proxy?

1. This could lower the overall amount of work that each individual standing party has to put into the determination process.

2. Not every standing party has the required skillset to judge the final results of a given grant. There will be plenty of different grant types (marketing/core/libs/products/etc.), and being able to delegate one's vote might be superior to simply voting for the sake of voting.
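Purely as a sketch of what that could look like (nothing below is in Doc 106; the delegation model and names are illustrative):

```python
# Illustrative delegated voting: a standing party either scores a grant
# directly or names a delegate whose score it inherits.

def resolve_score(party: str, scores: dict[str, float],
                  delegates: dict[str, str]) -> float | None:
    """Return the party's effective score, following the delegation chain."""
    seen = set()
    while party not in scores:
        if party in seen or party not in delegates:
            return None  # cycle or dead end: treat as an abstention
        seen.add(party)
        party = delegates[party]
    return scores[party]

scores = {"ANO-A": 8.0, "ANO-C": 4.5}
delegates = {"ANO-B": "ANO-A"}  # B defers to A on core-dev grants
print(resolve_score("ANO-B", scores, delegates))  # 8.0
```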
 

Chappie (Timed Discussion Bot)
#24
We are now 18 hours into the discussion. You may now make a motion to extend this Document Ratification/Amendment Discussion by an additional 72 hours or end this conversation by selecting the pertinent button at the top of this thread. This option will end when there are 24 hours left in the discussion.
 
#25
Just a couple of comments from me:

2.1.1 says:
If a grant has not been put up for determination by a grantee or sponsor and one month has passed since its final milestone or initially intended completion date, it will go up for a preliminary vote to determine if the grant should be put up for final determination.
This implies that all grants need a final milestone AND intended completion date. Otherwise, someone could create a grant that says "my final milestone is to deliver X within the grant period" which is woolly language and would allow the grantee to claim "I've still not reached my final milestone" indefinitely. I'm not saying this document is wrong, but that we probably need to add some language to the next grant process document to avoid a grant that can never go up for determination.
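To make the loophole concrete, here is a minimal sketch of the 2.1.1 trigger as I read it (field names are hypothetical, and "one month" is approximated as 30 days):

```python
from datetime import date, timedelta

def preliminary_vote_due(final_milestone: date | None,
                         intended_completion: date | None,
                         today: date) -> bool:
    """2.1.1 as I read it: one month after the final milestone date or the
    initially intended completion date, the grant goes up for a preliminary
    vote. A grant with neither date fixed can never come due."""
    anchor = final_milestone or intended_completion
    if anchor is None:
        return False  # the "woolly language" case: no date, no determination
    return today >= anchor + timedelta(days=30)
```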

2.5 says:
The grant update thread’s prefix will be changed to “Successful” or “Failed”.
My understanding was that the point of the rubric was to provide a more granular prefix for the thread as well as providing voting guidance. It comes back to my point that we're grouping terribly executed grants with "99% complete" grants, and amazingly executed grants with "barely made it" grants.

Is there any reason we are not using the rubric categories to prefix the threads?
 
#26
Hi David, thank you for the work on this important subject.
For me this definitely needs an answer to the "so what" question. In other words, we need to be more explicit about the use to which we put the information. Perhaps we can reference the grant application process so that evaluation of grants obliges the standing parties to scrutinise this past performance?
The other thing that strikes me is just how hard it may be for people not closely connected with a grant to assess it. Should there be an obligation on the grantee to submit something, say a report, which quantitatively and qualitatively describes the work done and the results achieved?
If we were to go down this latter route, grantees could self-certify, and all we may then need would be an audit process, thus significantly reducing the workload.
 
#27
but that we probably need to add some language to the next grant process document to avoid a grant that can never go up for determination.
Agreed.

My understanding was that the point of the rubric was to provide a more granular prefix for the thread as well as providing voting guidance. It comes back to my point that we're grouping terribly executed grants with "99% complete" grants, and amazingly executed grants with "barely made it" grants.

Is there any reason we are not using the rubric categories to prefix the threads?
My thought was to have the binary Success/Fail for the prefixes, with more detailed scoring inside the thread, but if the consensus from the community is to use the rubric for the prefixes as well, that's an easy change and can be done anytime.
 
#28
Perhaps we can reference the grant application process so that evaluation of grants obliges the standing parties to scrutinise this past performance?
I'm not quite sure I follow what you're suggesting. Will you please provide some example text?
The other thing that strikes me is just how hard it may be for people not closely connected with a grant to assess it. Should there be an obligation on the grantee to submit something, say a report, which quantitavely and qualitatively describesthe work done and the results achieved?
Yes, the Reviewer, whether that be the grantee, sponsor, or a 3rd party is expected to summarize the grant.
 
#29
I'm not quite sure I follow what you're suggesting. Will you please provide some example text?

Yes, the Reviewer, whether that be the grantee, sponsor, or a 3rd party is expected to summarize the grant.
Hi David,

My apologies if this was not sufficiently clear. You requested a sample text to illustrate the point, so can I suggest the following to possibly be inserted in the introduction after 1.1:

"The purpose of this is to inform the standing parties about the effectiveness of a particular grantee in delivering what was promised in the grant application.

The value the standing parties derive from this is in enabling better judgement of the likely success of future grant applications by a grantee. (This statement to be mirrored in Doc 153)

In this way the performance of grantees should improve over time."

I understand your point about the Reviewer being expected to summarise the grant. I was suggesting taking this further than that and getting not the Reviewer but the Grantee to both summarise the grant outcome AND score their own performance. This would reduce the amount of time the standing parties needed to apply to this subject. Given a strict scoring regime and the requirement for evidence this ought to work. It could then be policed by an audit process whereby certain grants are investigated more deeply and the lessons learned.
 
#30
Hi David,

My apologies if this was not sufficiently clear. You requested a sample text to illustrate the point, so can I suggest the following to possibly be inserted in the introduction after 1.1:

"The purpose of this is to inform the standing parties about the effectiveness of a particular grantee in delivering what was promised in the grant application.

The value the standing parties derive from this is in enabling better judgement of the likely success of future grant applications by a grantee. (This statement to be mirrored in Doc 153) In this way the performance of grantees should improve over time."
Thanks Mike. I've updated the intro to say:

Governance Document 001 references “Grant Success”. This document outlines the process by which a grant will be determined successful or not which will subsequently inform the Standing Parties about the effectiveness of a particular grantee in delivering what was promised in the grant application. This will allow better judgement of future grant applications by a grantee and provide the data necessary for a “Grant Success” Standing Party if so desired.
Let me know of further suggestions.

I understand your point about the Reviewer being expected to summarise the grant. I was suggesting taking this further than that and getting not the Reviewer but the Grantee to both summarise the grant outcome AND score their own performance. This would reduce the amount of time the standing parties needed to apply to this subject. Given a strict scoring regime and the requirement for evidence this ought to work. It could then be policed by an audit process whereby certain grants are investigated more deeply and the lessons learned.
That's an interesting idea. I'd like to hear from others to see if there's any consensus on this suggestion?
 