BIP draft: Consensus ScriptPubKey Length Limit #2039
Conversation
The implementation is fairly straightforward. For example, if you want to avoid the confiscatory accusations, you can do something like this:

```cpp
if (DeploymentActiveAfter(pindexPrev, chainman, Consensus::DEPLOYMENT_260)) {
    int violations = 0;
    for (const auto& tx : block.vtx) {
        for (unsigned int i = 0; i < tx->vout.size(); i++) {
            if (tx->vout[i].scriptPubKey.size() > MAX_SCRIPT_SIZE_260) {
                if (++violations > 1) {
                    return state.Invalid(BlockValidationResult::BLOCK_CONSENSUS, "bad-txns-invalid-block-???", "invalid-output transaction");
                } else {
                    break; // skip the other outputs of the same transaction!
                }
            }
        }
    }
}
```

This would let a single transaction per block have the pre-fork limits applied to it.
There should be some mention of the effects of this change on OP_CHECKSIG / OP_CHECKMULTISIG DoS vulnerabilities. A limit this low (260 bytes) would likely reduce the worst case substantially.
Even x-of-3 bare multisig scriptPubKeys are smaller than 260 bytes.
A 260 byte limit allows 256 bytes of data in OP_RETURN outputs.
This is true, though this is only the limit for outputs excluded from the UTXO set (unspendable ones).
One could literally just use all 260 bytes for data if they didn't care about the execution of the resulting script.
Correct, but an ~11-byte overhead per chunk would remain (see my other comment).
Bitcoin is money. Transactions embedding arbitrary data compete with financial transactions for block space, and many node operators clearly do not want to process, relay or store arbitrary data.
The motivation of this BIP is to reduce the amount of arbitrary data that can be embedded in transactions and in the UTXO set.
This BIP only has a marginal effect on the total amount of data that could be stored, as it only forces segmentation of the data into .size()/260 + (.size()%260 ? 1 : 0) chunks.
There is a reduction in the total usable space for arbitrary data of 5.625%, with 9 bytes of overhead against 160 bytes of payload.
@portlandhodl I mostly agree - the effect is marginal. I think the ROI of this BIP is attractive because it can be implemented in very few LOC that are easy to reason about.
Nit: It might make more sense to split data into .size()/255 + (.size()%255 ? 1 : 0) chunks to avoid having to use OP_PUSHDATA2 (2 length bytes).
> There is a reduction in the total usable space for arbitrary data by 5.625% with 9 bytes of overhead against 160 bytes of payload.
Not following the numbers 9 and 160 here. I think the overhead is 14 bytes per chunk:
- value: 8 bytes
- scriptSize: 3 bytes
- in-script overhead: 3 bytes (OP_RETURN, OP_PUSHDATA1, 0xff)
@moonsettler: Edit: Exempting even one tx per block would defeat the purpose of this BIP almost entirely.
The additional state is just an integer in a local context. If you read the ML on the topic, a lot of people declared possibly "confiscatory" changes a non-starter. Since Bitcoin lacks proper covenants, pre-signed deleted-key transactions are the only way to do certain relative time-lock spends. I have no idea why they would also use an absolute time-lock in such a security solution. PS: It's also trivially possible to limit the total number of bytes that outputs violating the new rule can take up.
@moonsettler: If even a single long scriptPubKey per block remains allowed, it undermines the purpose of this BIP almost entirely. I'd love to know which concrete cases people are worried about, as this concern seems quite contrived. Of course, this soft fork would be deployed with many months of notice.
@moonsettler: I've added an exception for one in 256 blocks. This way there is no confiscation at all, just throttling. And 255/256 blocks will not contain any large scriptPubKeys.
It makes such transactions very unreliable to get included in the next block. Later on it could be restricted further if abused, but I'm not insisting on this; it's just an idea. Every nth block is also a possibility. There is always the potential for degens to place a huge premium on such rare space, though.
jonatack left a comment:
This PR appears to have been opened prematurely, to not be original work by the author, and appears to be a gamification or shortcut of the BIPs process:
- No dedicated ML post to present this specific draft proposal and have prior discussion of concept, technical merit, or soundness
- LLM-generated ("95% – Extremely likely LLM-generated")
- Author's GitHub was just created (https://github.com/billymcbip) with no previous proof of work in this space and its sole action has been to open this PR. It would be best to propose a branch of your original (not LLM-generated) work on your own fork of this repository via a dedicated ML post.
@jonatack I am disappointed that you closed my PR. I've not claimed that I am the first person to have this idea - I credited BOTH the RDTS and @portlandhodl's mailing list thread (AFAIK he never made a BIP PR, so this is not a duplicate). And allowing every n-th block to circumvent the limit to prevent the confiscation "FUD" (for lack of a better term) is my original idea! Again, Portland specifically asked for someone else to continue iterating on his idea!

The RDTS (#2017) combines a ScriptPubKey length limit with highly controversial changes to Taproot, and it has a deactivation block height. I really think it's valuable to consider a (higher) ScriptPubKey length limit separately from the rest of the RDTS.

The fact that my account is new is undeniable. Yes, this is my first contribution to Bitcoin. It's just a PR for a BIP - I don't see why we cannot openly discuss it.
Per my feedback: it would be best to propose a branch of your original (not LLM-generated) work on your own fork of this repository via a dedicated ML post.

This BIP adds a length limit for new scriptPubKeys in transactions at or above the activation height.
#2017 ("Reduced Data Temporary Softfork") includes a similar mechanism among many other, more complex, and, in my opinion, more risky changes. It would be valuable to consider a scriptPubKey length limit in isolation. This BIP would be a permanent change, and the same limit would apply to OP_RETURN and non-OP_RETURN outputs.
Thank you in advance for your feedback.
Edits: