Fully agreed on the first account, and it’s why I (in discussions leading up to the formation of the proposal) did my best to steer this away from creating a “standard” and towards a system for formalizing relevant knowledge into frameworks that can be applied across a spectrum of operator profiles (from small to large operators, from small to large numbers of validators, from immature to more mature orgs, from less automated to more automated setups, from cloud-based to bare-metal infra, etc.).
The second point I would like to prod a little bit. I think open frameworks like this will be necessary to gauge the quality of node operators at a level that can scale. In my opinion, if this proposal and the overall effort succeed, it won’t really create one framework but many, so it should be relatively doable to match an NO to a sub-set of established practices that fits their “persona”, so to speak. In other words, it should be an exercise that leads to improvement in quality across a robust set of variations on how to run nodes well (at scale), rather than a tendency towards a singular “best” one. If the latter occurs, I agree it’s certainly undesirable and should be avoided.

In this vein, the assessment should be more of a “community accountability” type thing, where I imagine third parties (e.g. security auditors, process auditors, and other consulting-type orgs), but most importantly peers, could come in and review an NO against the relevant sub-sets of this framework. That review/assessment would not necessarily have a direct impact on things like stake allocation, but it could be something that is taken into account.