Surplus Management Framework: Discussion and Draft Proposal

In general, I think this is a really great piece of analytical work, and important in trying to drive a substantive discussion on how the DAO can support the robustness of the protocol. However, I strongly disagree with a few of the points made. In the interest of brevity, I won’t go through and highlight all the sections I think are great (most of it, basically) but will just focus on the areas where I think there’s going to be contention.

This is a very strong statement to make. In the case of catastrophic (or even just substantial) slashing events, there’s really no way to make all users “whole”, so as a headline this is at best misleading. The framing is also couched in traditional terms that make things very confusing (“solvency” implies an expectation that the protocol is somehow obligated to make people whole to begin with, and implies a lot of things of an almost custodial nature). I really wish we’d stop using loaded traditional financial terms to describe new system paradigms. This applies to the “working capital” description as well. If we want to show that staking protocols are a new form of common good / infrastructure, or even a utility, I think we’d do better to move away from this traditional terminology. I acknowledge the explanatory utility of the phrasing, but ultimately I think we can have these discussions without relying on it as a crutch.

I’m really against reservation of buffer for a few reasons, but the main ones are these:

  • depending on how “quickly” you try to keep the buffer topped up at all times, you may actually exacerbate stake cycling even more than it is currently (and with things like the proposed limits to churn rate basically being a done deal already, make it a lot worse)
  • having a “readily available” buffer ends up only benefitting people during fair weather, and even then it will always be utilized by arbitrageurs / large players / bots before anyone else (and especially during non-fair-weather conditions it gets insta-zapped by some bot)
  • a permanently set-aside buffer can cause a meaningful and difficult-to-calculate rewards drag due to compounding effects
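To make the compounding-drag point concrete, here is a rough sketch. The APR and buffer fraction below are purely hypothetical numbers for illustration, not protocol figures: if a fraction `b` of pooled ETH sits permanently idle, the effective yield drops from `r` to `r * (1 - b)`, and the shortfall compounds over time.

```python
# Hypothetical illustration of rewards drag from a permanently idle buffer.
r = 0.04   # assumed gross staking APR (illustrative)
b = 0.05   # assumed fraction of pooled ETH held idle as buffer (illustrative)
years = 5

full = (1 + r) ** years                # growth with everything staked
buffered = (1 + r * (1 - b)) ** years  # growth with buffer fraction idle
drag = full - buffered                 # compounded shortfall per unit of principal
print(f"compounded drag over {years}y: {drag:.4%}")
```

Even a small idle fraction produces a drag that grows faster than linearly with time, which is why it is hard to eyeball.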

Most importantly, though:

  • I don’t think these kinds of “economic mechanisms” belong at the protocol layer; they should go on top of it. The base layer will always be less nimble and less able to reason about the economic effects of things happening on top of it; attempting to codify things into the core protocol adds a) complexity and b) potential exploitability, and I don’t think the net benefit of doing it “in protocol” vs “atop protocol” is substantial. My opinion is that if there’s demand for “always available withdrawals” then it can be built atop the protocol and incentivized (if necessary) accordingly, but not in-protocol. In fact, if you manage to do this in an abstract way then you can create a market out of different approaches, where different actors can compete, as opposed to building an ultimately less efficient and agile mechanism at the root.
  • Making an explicit mechanism that calculates “how much should be staked and how much should not be” and allocates capital accordingly (or does other things, like reinvesting it or positioning it as an LP, as Frax does) almost turns the protocol into a capital management mechanism rather than a staking mechanism, which IMO is definitely the wrong direction. The simpler and purer the base mechanism, the better. ETH gets submitted and, unless there’s actual withdrawal need, it gets staked. I.e. it should do what it says on the tin, and the tin says “stake”.
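The “do what it says on the tin” rule above can be sketched in a few lines. The function and names are illustrative only (not from any real codebase): deposits cover concrete withdrawal demand first, and everything else is staked, with no in-protocol capital-management logic.

```python
# Illustrative sketch of a stake-first allocation rule (hypothetical names).
def allocate(new_deposits: int, pending_withdrawals: int) -> tuple[int, int]:
    """Return (amount used for withdrawals, amount staked)."""
    used_for_withdrawals = min(new_deposits, pending_withdrawals)
    staked = new_deposits - used_for_withdrawals
    return used_for_withdrawals, staked

print(allocate(100, 30))  # cover the real withdrawal need, stake the rest
```

The point of the sketch is what is absent: no target buffer ratio, no reinvestment branch, no LP positioning — anything beyond “cover real withdrawals, then stake” would live atop the protocol.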

I agree with this but I don’t think it’s smart to try to do it at a “whole-protocol” level; it should rather be done on a per-module basis. The risk profiles of the different modules will be too disparate, and the modules independent enough, that attempting to manage this in aggregate is going to cause a lot of inefficiencies. I think there should be a larger effort here to create a risk analysis framework on a per-module basis, to identify what (if any) additional risk mitigation measures can be taken (e.g. for the curated set, understanding which NOs explicitly insure the validators they run, including through using the Lido protocol, and to what extent), and to identify whether there are useful mechanisms for using these risk profiles (per operator, per module) when driving staking allocation decisions (i.e. in line with what we’re researching with Nethermind).

For the short term I agree an increase in a “risk reserve” is prudent, but just from a rough reading the numbers seem a bit off. If the protocol has a surplus of ~34K stETH as of Aug 1, and you want to shift the reserve to ~25,608 stETH (so roughly +20K stETH), does that even leave enough for a decent runway for the next 1-2 years? The risk/reward seems off here.
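To make the runway concern concrete, a back-of-envelope using the figures quoted above. The surplus and reserve numbers come from the proposal as quoted; the annual-spend figures are hypothetical placeholders, not actual protocol expenses:

```python
# Back-of-envelope runway check (annual-burn figures are hypothetical).
surplus = 34_000            # stETH surplus as of Aug 1 (quoted above, approx.)
proposed_reserve = 25_608   # proposed risk reserve, stETH (quoted above)

free_surplus = surplus - proposed_reserve
print(f"free surplus outside reserve: {free_surplus} stETH")

for annual_burn in (4_000, 8_000):  # hypothetical net annual spend, stETH
    print(f"at {annual_burn} stETH/yr: ~{free_surplus / annual_burn:.1f} years of runway")
```

Under these placeholder spend rates the remaining free surplus covers on the order of 1-2 years at best, which is what makes the risk/reward trade-off look tight.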