SIMD-110: Exponential fee for write lock accounts #110
Conversation
> - Identify write-locked accounts with *compute-unit utilization* > half of
>   account max CU limit. Add/update bank's account_write_lock_fee_cache.
> - Adding new account into LRU cache could push out eldest account;
> - LRU cache has capacity of 1024, which should be large enough for hot accounts
Is LRU the same as evicting lowest-cost accounts first?
> ### Other Considerations
>
> - Users may need new instruction to set a maximum write-lock fee for transaction
v0: a 1% increase means the worst case is a 4.4x increase over the 150 slots a blockhash is valid. That should be good enough for wallets to show users as an estimate.
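That 4.4x figure can be checked directly (my arithmetic, not the SIMD's): compound a 1% per-block increase over the 150-slot blockhash lifetime.

```python
# Worst case fee growth: +1% per block, compounded over the
# 150-slot window in which a blockhash remains valid.
worst_case = 1.01 ** 150
print(round(worst_case, 2))  # -> 4.45
```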
how would a maximum write-lock fee instruction work? at what point does the tx get rejected?
I would imagine it would be a max fee as part of the budget program.
I would rather put this complexity outside of the runtime. The runtime needs a cheap, fast priority mechanism; the non-priority fees can be estimated by wallets off-chain.
> - End of Block Processing:
>   - Identify write-locked accounts with *compute-unit utilization* > half of
>     account max CU limit. Add/update bank's account_write_lock_fee_cache.
>   - Adding new account into LRU cache could push out eldest account;
How is eldest determined? If we added 2 accounts in slot N, the oldest slot in the cache, and we now need to evict 1, how do we determine which one?
Potentially could also look at cost in block N, but what if they had the same cost?
I don't think it should be eldest. It should be cheapest account gets evicted.
Should be cheapest-gets-evicted with a (much) larger cache size, I believe; otherwise you can end up in a world where an account that starts at 0 fee (or whatever the min fee is) never gets off the ground because it never makes it into the cache.
New account > 6m CU in usage evicts the cheapest one from the cache at the end of the block.
Okay, cheapest not eldest, I think that makes sense. But that doesn't address that we need to propose a way to resolve ties; otherwise clients could implement that logic differently and we end up with consensus failures.
If 2 accounts have the same cost and were most recently accessed in the same block, how do we resolve that tie in a deterministic way? It's also possible we just evict all tied accounts.
Would implementing a (small) global CU base fee help resolve this issue? The local per-account fees would then start at this base fee and increase/decrease from there. This type of system would avoid the step change when an account is added to or removed from the cache.
> account max CU limit. Add/update bank's account_write_lock_fee_cache.
> - Adding new account into LRU cache could push out eldest account;
> - LRU cache has capacity of 1024, which should be large enough for hot accounts
>   in 150 slots.
What is the relevance of 150 slots here?
it's the # of slots until blockhash expiry, but also not sure why it's relevant here
Attacker can saturate 128 * 8 accounts per block
Max accounts in a tx is 128. Max txs with the accounts hitting the 6m CU limit is 8
@taozhu-chicago i don't think a 150-slot EMA makes sense. 1.01x or 1/1.01x on every block, depending on whether the account is > 6m or < 6m CUs, will track the average already.
Then the cache eviction policy is simple.
Cache size = 2x * worst-case number of accounts > 6m per block
Eviction = any new account > 6m CUs at the end of the block is added to the cache with a write-lock fee of K. Cheapest accounts are evicted.
Fee updates = if the account is in the cache and > 6m, fee is 1.01x; otherwise fee is 1/1.01x.
With this setup, an account that is saturated 100 / 150 blocks will have something like 1.01^100 * 1.01^-50 = 1.01^50, which is what we want.
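The policy above can be sketched roughly as follows. The 6m CU threshold, the starting fee K, the 1.01x step, and the 2048-entry cache come from this thread; the function shape, names, and tie handling are illustrative assumptions, not the SIMD's spec.

```python
# Sketch of the end-of-block cache policy discussed above.
# All constants are illustrative assumptions from this thread.
SATURATION_CU = 6_000_000  # per-account CU threshold ("> 6m")
INITIAL_FEE = 1_000        # "K": starting lamports/CU for a new entry
GROWTH = 1.01              # x1.01 if saturated, /1.01 otherwise
CACHE_CAPACITY = 2048      # 2x worst-case saturated accounts per block

def end_of_block_update(cache: dict, block_cu_used: dict) -> None:
    """Update per-account write-lock fees when the bank is frozen."""
    # 1. Fee updates for accounts already in the cache.
    for acct, fee in cache.items():
        used = block_cu_used.get(acct, 0)
        cache[acct] = fee * GROWTH if used > SATURATION_CU else fee / GROWTH

    # 2. Newly saturated accounts enter at the initial fee, evicting
    #    the cheapest entry if the cache is full. NB: as noted in the
    #    thread, tie-breaking among equally cheap entries must be
    #    deterministic to avoid consensus failures.
    for acct, used in block_cu_used.items():
        if used > SATURATION_CU and acct not in cache:
            if len(cache) >= CACHE_CAPACITY:
                cheapest = min(cache, key=cache.get)
                del cache[cheapest]
            cache[acct] = INITIAL_FEE
```

Under this rule, an account saturated in 100 of 150 blocks ends near 1.01^100 / 1.01^50 = 1.01^50, roughly 1.64x its starting rate.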
is the worry a malicious leader doing an attack to evict cache? if that's the concern, then we should be using a cache-size of 4096 since we allocate leader-slots in chunks of 4.
@apfitzge second set would evict the first, not the most expensive. 2048 is all we need
💯
> - Algorithm:
>   - Adjusts write-lock *cost rate* based on an account's EMA *compute-unit
>     utilization*. Initial write-lock cost rate is `1000 lamport/CU`.
>   - For each block, if an account's EMA *compute-unit utilization* is more than
why a discrete change at 50%? why not increase_pct = f(utilization)
where f(0%) = -max, f(100%) = +max (perhaps max = 1%) with a continuous function (e.g. linear)?
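As a sketch of what that continuous alternative could look like, here is a linear shape with an assumed ±1% cap; none of this is in the SIMD, it just instantiates the suggestion above.

```python
# Linear controller sketch: instead of a discrete +/-1% step at 50%
# utilization, scale the per-block fee change continuously with the
# EMA utilization. MAX_PCT is an assumed cap, not a spec'd value.
MAX_PCT = 0.01  # +/-1% per block at the extremes

def fee_multiplier(utilization: float) -> float:
    """Map utilization in [0, 1] linearly to a per-block multiplier.

    f(0.0) -> 1 - MAX_PCT (max decrease)
    f(0.5) -> 1.0         (no change)
    f(1.0) -> 1 + MAX_PCT (max increase)
    """
    return 1.0 + MAX_PCT * (2.0 * utilization - 1.0)
```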
and if it's gonna be discrete, why not e.g. 33%?
The shape of the controller deserves some study. Understood there is a desire for tx senders to know the max they might pay. Price as an increasing function of utilization (which all these proposals provide) is already much better than the status quo, but a significantly more aggressive increase would help with discrete opportunities like NFT mints and one-time arbs.
Yea, I agree. I think it's worth picking something reasonable and then adjusting it in the future. A 1% per-block increase puts the worst case for a tx at 4.4x.
Discrete makes it easier to manage a cache. Accounts that cross 6m evict the cheapest from the cache.
Related to @eugene-chen's comment: the choice of 50% implies some target ("utilization <= 50% is fine; over 50% is undesirable"). Since the EMA is tracked anyway, could you have `inc_pct = f(utilization)` be something like `constant * (utilization - target)`? Since the increase/decrease is in percentage, implicitly the calculation is being done in "logarithmic space", so `f(utilization) = exp(constant * (utilization - target))` is more appropriate. (We have some work that suggests that the current price should probably appear in the exponent as well: see appendix C of this paper.)
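A minimal sketch of that exponential form; `TARGET` and the gain `C` are assumed tuning constants, with `C` chosen so full utilization gives roughly the +1% per block discussed elsewhere in the thread.

```python
import math

TARGET = 0.5  # implied utilization target (from the 50% threshold)
C = 0.02      # assumed gain: exp(C * 0.5) ~= 1.01 at full utilization

def fee_multiplier(utilization: float) -> float:
    """Multiplicative update exp(C * (utilization - TARGET)):
    > 1 above the target, < 1 below it, exactly 1 at the target."""
    return math.exp(C * (utilization - TARGET))
```

Because the update is a multiplier, repeated application compounds: the fee after N blocks is the product of the per-block multipliers, which is exactly the "logarithmic space" behavior described above.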
discrete is a better devX for clients to figure out pricing
> current bank is frozen.
> - Provides the current *cost rate* when queried.
> - EMA of Compute-Unit Utilization:
>   - Uses 150 slots for EMA calculation.
assuming this constant is roughly a guess; what's the reasoning to make this the same as transaction expiry time?
I don't think EMA makes sense here. Since the fee change is 1.01x per block, it will average out already. No need to track an EMA.
> - Calculate write-lock fee for each account a transaction needs to write,
>   summing up to be its *write lock fee*. This, along with signature fee and
>   priority fee, constitutes the total fee for the transaction.
> - Leader checks fee payer's balance before scheduling the transaction.
The prio fee today and this proposed fee are priced per CU requested. If desired, one could also price per CU used, by charging `cost_rate * CU_requested` at the beginning of the tx and rebating `cost_rate * (CU_requested - CU_used)` at the end.
No rebates please. We need devs to correctly estimate what they are using, not request max.
exactly, a rebate will just make all transactions set the CU limit to max.
> - Accounts are associated with a *compute unit pricer*, and the *runtime*
>   maintains an LRU cache of actively contentious accounts' public keys and
>   their *compute unit pricers*.
Very nice way to avoid adding a new field to AccountInfo!
> - Acknowledge read lock contention, deferring EMA fee implementation for read locks.
> - In the future, a percentage of collected write-lock-fee could be deposited
>   to an account, allowing dApps to refund cranks and other service providers.
>   This decision should be done via a governance vote.
why governance? governance by whom? what changes belong to governance and what changes belong to e.g. SIMD process?
Stake-weighted median signaled by validators. Would need a SIMD in the future.
ok, I will argue in that future SIMD instead :)
i think some program account deposit is necessary to prevent write-lock spamming of oracles and things like that. But that can be turned on in v2.
> - Alternatively, each account can have its *compute unit pricer* stored
>   onchain, which would require modifying accounts.
...but if there are spare bytes, this can be done cheaply (and you can avoid the cache and have a price on literally every account) with two u64s: store `last_slot_touched` and an int representing `log(fee_rate) / log(1.01)`.
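A rough sketch of that two-u64 encoding; the names and the one-decay-step-per-slot rule are my assumptions for illustration, not part of the proposal.

```python
import math

# Two-u64 onchain encoding sketch: keep last_slot_touched plus an
# integer exponent e such that fee_rate = 1.01 ** e. Decaying the fee
# on read is then just subtracting elapsed slots from the exponent.
BASE = 1.01

def decode_fee_rate(exponent: int, last_slot_touched: int,
                    current_slot: int) -> float:
    """Apply one 1/1.01 decay step per elapsed slot, floored at 1.01**0."""
    decayed = max(0, exponent - (current_slot - last_slot_touched))
    return BASE ** decayed

def encode_fee_rate(fee_rate: float) -> int:
    """Inverse mapping: e = log(fee_rate) / log(1.01), rounded to an int."""
    return round(math.log(fee_rate) / math.log(BASE))
```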
That would require storage forever in the state. An LRU cache should be sufficient.
only if it is big enough!
I think it only needs to be 2*worst case evictions per block
> to an account, allowing dApps to refund cranks and other service providers.
> This decision should be done via a governance vote.
>
> ## Impact
one addition: if fee is paid per CU requested, this additionally incentivizes accurate CU estimation beyond what the prio fee already does
another addition (less obvious whether good or bad): this increases the incentive for app developers to fit more app state into a single account rather than a bunch of accounts,
and it incentivizes some type of account-sybiling, e.g. the "canonical account for some app state" rotates every N slots to allow the fee to cool down.
Won't it have the opposite effect? If that account is saturated then fees increase super-linearly.
(assuming this is referring to the first note in the second comment) Today a bunch of programs require always hitting accounts A, B, and C, so the user would have to pay for 3 account writes whose prices escalate together. So there's an incentive for the developer to compress the state into a single account A to decrease the write-lock fee by 2/3.
@taozhu-chicago might make sense to scale it by bytes, so there is little advantage to combining accounts.
So each account starts at a lamports-per-CU-per-byte rate (LCBR) that scales 1.01x each block.
One way to fit this into the existing fee model is to:
- lower the existing signature fee by 50%
- have validators earn 100% of the signature fee
- set the floor LCBR to match what votes currently use, so whatever votes write costs 50% of the existing signature fee. Burn 100%.
So basically the math works out to be the same for votes.
account bytes are converted into CUs by the cost model; it feels simpler to stay with `lamports_per_cu`. As for votes: a vote has a const CU cost, so it can also have a const fee, assuming it is a "simple vote transaction", and all "complex vote transactions" are dropped.
complex votes can pay the complex fee :) they can be dropped by leaders, but i don't think we need to make them invalid in the runtime for this change.
> - Calculate write-lock fee for each account a transaction needs to write,
>   summing up to be its *write lock fee*. This, along with signature fee and
>   priority fee, constitutes the total fee for the transaction.
> - Leader checks fee payer's balance before scheduling the transaction.
This isn't necessary for the proposal, even if it is how the labs client will do it; scheduling is outside consensus and doesn't affect the fee.
Leaders should drop invalid fee payers as early as possible though.
Yeah definitely, but that's not related to this proposal.
It is assumed the leader will do the fee-payer check upfront, but if it doesn't (for whatever reason), an invalid fee payer will be dropped during loading (after locking). That makes it less effective, but still an improvement.
I strongly disagree with this proposal; it seems like an engineer's solution. A few questions/observations:
Imo we are treating the symptom instead of the disease, and it requires much more rigorous analysis. I understand that the time to market of this solution is quicker and we have real issues right now in production, but this seems like the wrong solution.
Like physical real estate, block space will go to the highest & best use as determined by the highest bidder. We rely on free markets for price discovery. This proposal introduces an artificial pricing mechanism and will only distort the free market.
Priority Fees (PF), our current method of free-market price discovery, are starting to work -- we see applications adapting with dynamic PFs, and important TXs are landing. We'll do better to improve the effectiveness of the current market-based system. Distorting the free market is a step backward.
The motivation of this proposal is misguided -- the outcome of processing a TX is irrelevant when the sender pays a (higher) fee. Beauty is in the eye of the beholder, and transaction success is in the eye of the bidder. What you see as a failed defi transaction was a success for the trader because they didn't lose money. Traders are willing to pay TX fees to avoid losses -- that's a feature, not a bug! #OPOS
The current cNFT scams are much more troubling. With account compression, we made it incredibly cheap for scammers to send fake cNFTs trying to steal money. For example, someone is sending scam NFTs to Block Logic stakeholders and attempting to steal stake accounts. I chatted with one of the victims, and he is pissed at me and Solana for his losses. He may never come back, and I don't blame him! The defi bots will do us a favor if they outbid the scammers.
We are in the business of processing transactions for a fee. Let's avoid judgment calls about which use cases are good or bad -- let the market decide what's highest & best. I'll take defi traffic over cNFT scams any day.
The original vision for market-based PFs was correct. We should persevere to improve the current PF system. Pivoting here is the wrong move.
Is there any way to estimate how this will affect DeFi apps where users desire to lock an account every block? E.g. CLOBs, oracles.
> While the priority fee serves to mitigate low-cost spams by decreasing the
> likelihood of less prioritized transactions being included, it cannot entirely
> eliminate the inclusion of spam transactions in a block. As long as there
> remains a chance, no matter how small, to inexpensively include transactions,
This is a pretty big assumption. I'd like to see the tx scheduler improvements land in 1.18 before this gets seriously considered. Given how the tx scheduler is currently implemented, it's hard to say priority fees aren't sufficient to adjudicate access to blockspace.
Am confident the improved scheduler in 1.18 will respect priority fees much better, but there is still a chance, might be very small, that a lower-prio tx lands before a higher one.
If the probability of low-prio txs getting included is low, then it seems like the EMA raising fees will mainly affect the non-spam devs/users, who will now see raised fees. If non-spam is anywhere close to 6M CUs/block, then non-spam users pay, and low-priority spammers will only get hit by high fees very rarely?
Users will pay more when capacity is reduced; once this is reliably predictive, spammers will have to back off; and normal users pay for resources per demand.
@crispheaney steady-state ingress load on leaders is 50kpps+. The pipeline to dedup/sigverify/fee-check has to handle all that load before it gets to the scheduler. If it takes more than 400ms, the tx isn't getting prioritized in that block. The only way to get spammers to send fewer txs is to raise the base-layer fee.
So if txs take too long to get through the pipeline, prioritization is not effective. Does that make sense? Doesn't matter what the scheduler does if it can't see txs because they are still in the queues. We need to force senders to stop economically.
It seems reasonable we focus on improving the throughput of the scheduling and pre-scheduling stages so that it's more likely txs are able to be prioritized.
If the chance of inclusion for spam is low, then I'm not convinced they will back off.
The fee payer can be totally separate from the account funding the economic activity for arbitrage.
I can keep my fee-payer balance low and continue to spam. If the fees get too high for my account to fund, then oh well, my tx won't make it into the block because the leader won't include me since I can't pay fees.
To everyone saying "real priority fees have never been tried": can you link to a spec for what the behavior is supposed to be and how it improves the UX? Pardon my ignorance, but I don't think I've ever seen a description of the intended behavior of this system. Even a perfect first-price gas auction (which is not achievable with continuous block building) is worse than a protocol-enforced min fee in terms of UX.
The SIMD talks about deterring spam, not improving UX. Perhaps the SIMD can discuss UX improvements as well if that is a strong reason to adopt this.
This needs to offer some arguments for why it's bad for a high % of transactions to fail, or identify a more specific problem than the failure rate across all transactions.
Bringing such a pricing scheme into the protocol, imo, is a colossal mistake for two main reasons:
Jitter in the banking stage and quic server is ridiculous, so much so that the current priority fee system has yet to be properly tried. I'm hesitant to rule our current scheme insufficient so hastily.
Such a scheme could already be implemented at the scheduler level. Individual validators intentionally have agency to price transactions as they choose, which allows them to outcompete potato validators, create large blocks, and ultimately keep fees lower for users. Not only will this scheme make fees unpredictable for users, it will also make them higher. If this truly is the best scheme, validators that use it at the scheduler level will be able to outcompete validators that don't. Oracles, crankers, and market-makers are just 3 of the many categories of users who will be affected by this change. Raising fees for these players, of course, just ends up as worse UX / higher spreads for retail!
There's no spec on behavior because the leader is free to do whatever they want with them. Can only speak for myself, but I think priority fees have not really been given a chance for a few reasons:
Expanding on 2: the current (soon to be legacy 🙏) implementation has nearly independent threads, all of which race to take account locks. This leads to lower-priority transactions getting executed before higher-priority ones, even when they are competing. There are major changes coming in 1.18 which make the leader respect priority much better, which hopefully will discourage spammed low-priority arb transactions, since they will have a significantly decreased probability of success.
I'm not totally against this proposal; I'm happy we're seeing some alternative ideas proposed. But I do think we need to consider the context of the current implementation with respect to already-implemented economic models. I think it's prudent to hold off on committing to a total economic overhaul when there are massive improvements in the release pipeline which may already resolve the problems we see.
This is an elegant proposal that addresses many of the UX issues on Solana. If a leader observes a transaction that write-locks certain accounts, it is very hard for them to know whether they should include this transaction immediately or wait until they see another transaction write-locking similar accounts that will pay a higher fee. The best way to make this determination would be to look over the previous N blocks and see whether it is higher or lower than the average fee paid to write-lock the N accounts. The main issues with the current Solana fee markets are:
In-protocol fee market floors that target 50-ish% block usage are good because you are very clearly communicating to the user something like "if you pay this much we will basically try to include you immediately, otherwise we will drop you on the floor". At the end of the day, if you implement good surge pricing you can also drop fees for other, less contentious operations. The reason this helps with spam is that the point of spamming is to trick bad scheduler implementations that do not look back and accurately estimate the opportunity cost of including a dumb cheap transaction that write-locks every defi account. You are now pricing this opportunity cost for them and also telling the user how you are valuing it.
This promise is not any better than the current promise of inclusion. The validators cannot make such promises anyway, because everyone will have the same information (the floor price being the EMA), so everyone will start bribing from a higher floor. How does that improve the probability of inclusion if everyone just bids higher? Since everyone will start at a higher price, it will be equally contentious, so UX will not improve in any way. This proposal does nothing but raise the price from which spam happens; it will keep happening. There is really no gain in terms of deterring spam. @eugene-chen At most, this just makes things more expensive instead of letting the market decide that.
@y2kappa priority fees alone can't work. If a spammer has a 1/100 probability of inclusion with an ROI of $0.01, they will send 100 txs; 99 of them get dropped, 1 gets included and pays a fee. The leader has to deal with all 99 dropped txs. If the account write-lock fee > $0.01, the spammer won't send the tx, or it will get dropped really quickly because the fee payer can't afford the fee, which is a low-resource check. The 99 txs that are in the pipeline increase the work that all leaders need to do to land successful txs. This proposal addresses the steady-state spam that leaders see without increasing costs for all users.
This is wrong. Only contentious accounts will see a fee increase; globally there is no increase, so general users will not be impacted.
@aeyakovenko if there is such a low probability of inclusion, then it seems this proposal primarily affects "legitimate" users. It seems it just raises the floor price for arbitrage txs to be worth it.
@apfitzge the cost has to increase above the cost of spam, and if there are users below that cost they will be affected, no way around it. But those users are already affected, because spam increases the load, so txs take longer to get to the scheduler, priority fees are higher, etc... Economically it should be the same for legit users. A spammer is willing to pay X total fee; as the base fee rises, they are still willing to pay up to X total fee, therefore their priority fee drops. Legit users not accessing that account who pay X now get relatively higher priority than spammers.
It would only be relatively higher due to issues with the scheduler/quic in the first place. If we eliminate quic/scheduler jitter (which we will eventually anyway), the value of spam greatly diminishes. This would still raise the base fee for normal users -- never mind oracles, market makers, etc.
@tao-stones worth running some data on a 1.01% increase with a 1.015% or 1.02% decrease per block, i.e. a faster decay.
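A quick way to run those numbers is to simulate the multiplicative per-block update. This is a sketch with assumed parameter names; the increase/decrease rates are the ones under discussion, not finalized values:

```python
# Sketch of an exponential write-lock fee rate: it multiplies by (1 + up)
# for each block the account stays contended and by (1 - down) for each
# block in which the account is not write-locked.
def simulate_fee(base_fee, up, down, hot_blocks, cool_blocks):
    fee = base_fee
    for _ in range(hot_blocks):
        fee *= 1.0 + up      # contended block: fee grows
    for _ in range(cool_blocks):
        fee *= 1.0 - down    # idle block: fee decays
    return fee

# 1% growth over 150 slots (one blockhash lifetime) is roughly 4.45x
# the base fee, matching the worst-case figure mentioned in this thread.
peak = simulate_fee(1.0, 0.01, 0.0, 150, 0)
```

With a decay rate larger than the growth rate (e.g. 1.5% or 2% down vs. 1% up), the fee returns to the base faster than it rose, which is the faster-decay variant being proposed here.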
Re-calibrating my understanding...
Other thoughts to consider:
Edited: The fee does not guarantee inclusion. It merely guarantees that the TX is not dropped at the fee gate.
That cost exists for the network though, and normal users already pay it. Leaders ingress 50k-100k txs per second atm, but only 500-1000 land in the block. This translates into poor prioritization and confirmation delays for the user. Given that more than 50% of current CUs fail, it is safe to set the write lock base price to target 50% load. If that drops the ingress rate to 5k, users get much faster confirmations, and much faster inclusion, and therefore much better ROI per priority fee.
Nit: it could be confusing to call this a "Base Fee"; the Base Fee is static and 50% burnt, while the proposed write-lock fee is dynamic and 100% burnt.
In the current version, priority is solely ordered by the per-CU priority fee. If priority is instead based on (total_fee / total_cu), then hot accounts will push up a Transfer transaction's priority fee if it wants to land, even without touching any hot accounts. In both cases, a high enough priority fee will be able to outbid a tx accessing the lowest-fee hot account.
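The two orderings being compared can be written out explicitly. Function names here are mine, for illustration only:

```python
# Ordering 1: priority comes only from the per-CU priority fee, so the
# write-lock fee does not affect scheduling order.
def priority_current(priority_fee_per_cu):
    return priority_fee_per_cu

# Ordering 2: priority is total fee (priority fee + write-lock fee)
# divided by total compute units, so hot-account fees raise the bar
# that an unrelated Transfer must clear to rank equally.
def priority_total_fee(priority_fee, write_lock_fee, total_cu):
    return (priority_fee + write_lock_fee) / total_cu
```

Under the second ordering, a plain Transfer touching no hot accounts must raise its priority fee until its fee-per-CU matches what hot-account txs pay in total.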
Overall cost of inclusion should be lower for non-congested accounts. Validators get 100% of the priority fee, while the write-lock fee is 100% burned. So there isn't an incentive to include any hot-account txs unless they also outbid the priority fees. Total costs for hot accounts are therefore going to be higher.
Push changes that improve confirmation times for non-congested txs. They used to be 1s on the same hardware as today. What's changed is that there is 100kpps of spam filling up all the pipelines. If you think it's easy to fix, by all means, show us.
Happy to help outline proper fixes! Some things that will help
I've been meaning to push some analytics and writeups for a while -- but I'm one person with a full-time job doing this completely for free. I don't have any $100m grants, so forgive me if I don't have anything pushed tomorrow 😄
Andrew and Richard are already looking at scheduler and quic improvements. They shouldn't block experimenting with write lock fees. A year ago the implementation was worse but had 1.4s P90 confirmation times on the same hardware. The difference is that leaders had 5k pps of traffic to deal with. We can A/B test all the changes and see whether write lock fees work and reduce the load selectively.
I'm not saying this is a blocker at all. I'm saying the aforementioned problems are largely solved by scheduler and quic improvements. I'm not against experimenting with write lock fees either (I do think write locks are extremely mispriced, as it costs nothing extra to lock a +1 account). I do think however there's a balance between planning fees at the protocol level vs pricing transactions at the scheduler level. Centrally-planned EMA fees will hinder fast blockbuilders' ability to keep fees low and outcompete potato hardware/algos.
I remember, and I'm happy we can be talking about such a scale of throughput problems today! I think we just disagree on the solution.
As a validator I want to be able to price my own compute. If I can't price my own compute there's incentive to fork the chain and outcompete with a better blockpacking algo -- a better algo that can handle the throughput at line rate. I'm not saying this is an easy problem, but I am absolutely saying we can do it.
I argue that metrics dropping during times of high kpps is a symptom of a poor networking stack and a poor scheduler. Instead of attempting to artificially limit kpps with centrally planned fees (which also doesn't solve invalid spam), I propose we move to handle pps at line rate. If a single IP is spamming too many handshakes, we can easily cut them off. Confirmation times with the current scheduler/quic will absolutely degrade as pps rises. I don't disagree with you there!
Andrew and Pankaj are working on it. If the implementation proves to be sufficient, then exponential write lock fees can be turned off. Gotta fix the fires in front of us.
If we want to test the strategy, is the scheduler level not the best place to do it? IMO validators should be able to price their own compute. Rather than adding protocol complexity/constraints -- we could spin up some clients on mainnet/testnet with the new strategy implemented at the scheduler level. If we do it this way we also don't need a feature gate every time we want to make an adjustment.
Wouldn't this make UX worse? Instead of pricing based on cluster status, users have to price based on the leader that packs their transactions?
But that doesn't really change in this proposal; it just sets a floor on top of which users still need to compete in priority fee markets. Their total fee would just be (EMA fee(s) + priority fee to get into the block), and if they are unwilling to raise their total fee they are less likely to get into the block, because the leader has no incentive to pack these high-EMA-fee txs unless they also have high priority.
No, even a naive client could use the median fee and have phenomenal chances of getting through. The chance of getting dropped would fall on a geometric distribution over N leaders. During times of high volatility / high fees, a slightly less naive client could just narrow the sample size for the median or weight recent leaders higher. More sophisticated clients can use their own strategies and track individual validators. Free market.
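The geometric-distribution claim above works out as follows, assuming each leader drops the tx independently with the same probability:

```python
# If each leader independently drops the tx with probability p, the
# chance it is still unconfirmed after n leaders decays geometrically.
def still_unconfirmed_after(p_drop, n_leaders):
    return p_drop ** n_leaders

# Even a 50% per-leader drop chance leaves only ~6% after 4 leaders,
# which is the sense in which a naive median-fee client fares well.
```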
User has to sign the max they are willing to pay, and pay it -- vs. the user signs a tx that pays up to the max if the write lock fee scales all the way there, but only pays the current offer price. I'd be really surprised if your proposal doesn't result in either users overpaying above the floor price or more timeouts and re-signing.
## Security Considerations

none
Can potentially slow the leader by spamming it with transactions using 256 accounts in an ALT, all write-locked.
Good point, though this isn't unique to this proposal. The ALT needs to be loaded anyway; the proposal only adds an LRU cache lookup.
Max account locks is only 64. The scheduler should factor locks into the equation, however.
Feature activation pending right?
9LZdXeKGeBV6hRLdxS1rHbHoEUsKqesCC2ZAPTPKJAbK | inactive | NA | increase tx account lock limit to 128 #27241
According to the feature schedule, it is blocked atm.
This is just shifting the pricing burden from the leader to the protocol and stifling validator commission though -- which will stifle competition between validators and reduce a smart blockbuilder's ability to keep fees low.
Let validators price their own compute, or they will fork the chain and use a less naive pricing method.
## New Terminology

- *compute-unit utilization*: denominated in `cu`, it represents the total
  compute-units applied to a given resource.
Does the utilization count both read and write transactions, or just write transactions?
Just write locks in this proposal. It is possible to apply a similar fee to read locks, but we'd rather do that after the write-lock fee has proven successful.
At a high level, I think this proposal reasonably addresses the inevitable problem that all supply-constrained systems face. When blockspace demand drastically overpowers supply, the market price needs to adjust to a stable equilibrium. I agree with the need for a controller mechanism because it will positively impact downstream UX related to transaction inclusion (for both developers and application users).
A few notes:
- This is an implementation detail, but the shape of the controller is likely pretty important if this is to be a breaking change on the protocol level. It's not clear that exponential is the right way to go.
- IMO any protocol level change is riskier than a client specific implementation fix, so I think how changes get applied is a very practical thing to do before jumping the gun. Sequencing wise, I think it makes sense to enable this particular change after @apfitzge's introduces key client level infrastructure changes in 1.18.
This is why it'd be great to leave this up to the implementation of individual schedulers, and instead work on a better vehicle for communication between validators / clients on fees and better client fee estimation models (once priority fees actually work, that is!). This will still result in good client UX whilst allowing validators to compete on fees and quickly test different fee models.
Scheduler will help a ton, but we also need to eliminate quinn, or I feel people will still spam due to network jitter. Eliminating jitter is uncontroversial and will also abolish the need to spam for the most part. I'd have a hard time seeing the need for this proposal or anything like it if the scheduler were fixed and quinn eliminated. tldr and to stay on topic: I think resources are better allocated to solving these engineering problems instead of taking an ETH-centric "solve it with more fees" approach.
…espond account saturation events quick enough. Changed to 8
… decreasing un-write-locked account per block, and eventually evict from cache when cost rate is zero
The Firedancer team (largely myself) is spending a large amount of time trying to do a root-cause analysis of why the client cannot properly price inbound transactions and why users are unable to land transactions during times of congestion. Our preliminary findings seem to indicate that a pile-up of transactions at the RPC/TPU layer can considerably slow the validator down. We are also trying to substantiate the facts about Solana's fee market and the microstructure therein. We believe there are several issues with Solana's fee model, networking layer, and block production which require different solutions to address both spam and priority-fee-based inclusion. This is time consuming, so please be patient. This proposal aims to solve the issue of inclusion by imposing a tax (and I mean this in purely economical terms, this is not a dig) on end users for high, consistent resource consumption of a write-locked account. This excise is burned, reducing supply marginally (2-3 OoM lower than inflation). What is very clear and evident is that this proposal does not address a few key concerns:
Given these issues, it is unclear precisely how this proposal can solve spam if they remain unaddressed. This proposal does get a few things right:
The issue of spam cannot be solved entirely in-protocol, and we should endeavour to find the right solution, not one we will have to undo later. I would like to not have to think about this for another decade if I could, so we will find that solution. The more time we spend now arriving at the correct specification, the less time we spend on implementation or undoing things later. Again, this proposal is not faulty; the premise is good, but the economics most likely will not have the desired effect (the aforementioned issues show the proposal does not address the root of the matter directly). It will require another iteration and some analysis from our end. We will publish a report based on our findings and a counterproposal to address these issues after mtnDAO. I would encourage anyone who has feedback or questions about this to reach out to my various socials (by the same handle). We hope this proposal will remain simple and provide a comprehensive specification to improve inclusion (aka UX) on Solana.
- worst case per block: 128 * 8 = 1024;
- 2 times worst case: 2048;
- Fee Handling:
  - Collected write-lock fees are 100% burnt.
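A minimal sketch of the `account_write_lock_fee_cache` behavior discussed in this PR: an LRU cache sized at twice the per-block worst case, evicting the eldest account when full. Class and field names are assumptions, not the spec:

```python
from collections import OrderedDict

# Hypothetical LRU cache of per-account write-lock fee rates; inserting
# past capacity evicts the least-recently-used (eldest) account.
class WriteLockFeeCache:
    def __init__(self, capacity=2 * 128 * 8):  # 2x worst case = 2048
        self.capacity = capacity
        self.fees = OrderedDict()  # account pubkey -> current fee rate

    def update(self, account, fee_rate):
        if account in self.fees:
            self.fees.move_to_end(account)  # refresh recency
        self.fees[account] = fee_rate
        if len(self.fees) > self.capacity:
            self.fees.popitem(last=False)   # evict eldest entry

    def get(self, account):
        return self.fees.get(account, 0)    # unknown accounts pay no extra fee
```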
Why aren't read locks considered here?
The `Other Considerations` section acknowledges read-lock contention and the possibility of applying the same EMA mechanism. Prefer doing that after seeing success with write locks first.
Closing due to staleness.
to introduce economic back pressure to make spammers back off