The Embiggening (large OP_RETURN driven)


There are three main changes I am proposing here:

  1. Increase OP_RETURN allowed size
  2. Increase fee resolution (and thereby overall fee for large txns)
  3. Increase block size

OP_RETURN
Blockchain space on the peercoin network is currently paid for by the kilobyte. We currently have a very small blockchain and low inflation rates, but our secure network sees little usage, so a logical direction for the network is one that gives people additional use cases for taking up blockchain space. By promoting the null data field of a transaction from a minor add-on to a major network feature, we enter a whole new set of blockchain use cases that has not been fully explored by cryptocurrency to date. We can see that there is demand for blockchain use beyond basic transaction verification (i.e. moving fungible tokens around) in two of the top cryptocurrencies by market cap: STEEM and Ethereum. With our fee structure, we are in a prime position to explore using the current peercoin network as a backbone for more than just basic transactions.

Technically, this looks like putting large data into a single transaction. For illustrative purposes, I want the reader to imagine posting an entire image to the blockchain, say it’s ~2 MB. Then, the poster simply needs to tell someone the txid of that transaction and they can parse the image directly from the blockchain. As the basic transaction data for that txn is likely to be negligible in size, we can assume that txn is almost entirely just the image. This leads us to consider how the protocol handles a txn of that size.
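As a rough illustration of the retrieval side, here is a hedged sketch: it assumes a bitcoind-style getrawtransaction JSON-RPC call (which the peercoin client inherits); the host, port, and credentials are placeholders, and the helper names are mine:

```python
import base64
import binascii
import json
import urllib.request

# Sketch: pull an OP_RETURN payload back out of the chain via a
# bitcoind-style JSON-RPC node. Host, port, and credentials are
# assumptions for illustration only.
RPC_URL = "http://127.0.0.1:9902"
RPC_AUTH = base64.b64encode(b"rpcuser:rpcpassword").decode()

def rpc_call(method, params):
    body = json.dumps({"method": method, "params": params, "id": 1})
    req = urllib.request.Request(RPC_URL, body.encode(), {
        "Content-Type": "application/json",
        "Authorization": "Basic " + RPC_AUTH,
    })
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

def fetch_op_return(txid):
    tx = rpc_call("getrawtransaction", [txid, 1])  # 1 = decoded JSON
    for vout in tx["vout"]:
        asm = vout["scriptPubKey"]["asm"]
        if asm.startswith("OP_RETURN "):
            # the pushdata following OP_RETURN is the payload in hex
            return binascii.unhexlify(asm.split(" ", 1)[1])
    return None

# e.g. image_bytes = fetch_op_return("<txid of the posting txn>")
```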

Orphaned large txns


The protocol already ignores orphaned txns larger than 5 KB. A large txn’s parents must therefore already be confirmed before it is broadcast, or it must be rebroadcast once the parents confirm.
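In rough pseudocode terms, that relay policy amounts to something like the following (a minimal sketch of the rule as described, not the actual client code):

```python
MAX_ORPHAN_TX_SIZE = 5000  # bytes

def relay_orphan(tx_size_bytes, has_unconfirmed_parents):
    # An orphan is a txn whose parents are not yet known/confirmed.
    # Oversized orphans are dropped rather than held in the orphan
    # pool, so a large OP_RETURN txn must wait for confirmed parents.
    if has_unconfirmed_parents and tx_size_bytes > MAX_ORPHAN_TX_SIZE:
        return False  # ignored by the node
    return True
```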

nLockTime


A transaction can be locked for 14 days before it is processed. This creates a potential DDoS vector against node memory. Analyzing the economics of this requires understanding the minimum fee and the dust size requirements for a transaction. The best approach for an attacker wanting to fill a node’s memory would be to spam the txout table full of dust, then use those dust outputs as txins for a mass of 1 KB txns (a txin takes much more data than a txout) with nLockTime set on them. This can be done even more effectively with multisig.

So if we assume these 1 KB spam txns have 2 dust inputs and the minimum fee, that’s 0.03 PPC/KB (0.02 locked and 0.01 fee).
To clarify: filling 8 GB of node memory with spam (effectively DDoSing every node) would require locking or spending 240,000 PPC, or ~1% of the PPC supply, for 14 days.

Large OP_RETURN txns would effectively lift this dust-input mechanism. The result would be a straightforward 0.01 PPC/KB burned as the fee, with only a trivial amount locked. That comes to 80,000 PPC, or 0.33% of supply, for the same 14 days of 8 GB spam. To alleviate this concern, I propose increasing the fee for large txns while preserving the standard fee for standard txns.
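As a sanity check, here is that arithmetic spelled out (a plain Python sketch; the numbers are the assumptions stated above):

```python
# Back-of-the-envelope cost of filling 8 GB of node memory with
# 1 KB nLockTime'd spam txns, under both scenarios above.
SPAM_BYTES = 8 * 1000**3     # 8 GB target
N_TXNS = SPAM_BYTES // 1000  # 8,000,000 txns at ~1 KB each

MIN_FEE = 0.01  # PPC per KB, burned
DUST = 0.01     # PPC per dust input, locked for the duration

cost_with_dust_inputs = N_TXNS * (2 * DUST + MIN_FEE)  # 240,000 PPC
cost_op_return_only = N_TXNS * MIN_FEE                 # 80,000 PPC

print(cost_with_dust_inputs, cost_op_return_only)
```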

Fee Resolution
A standard txn has 1 single-sig input and 2 outputs; a txn like this is under 300 bytes. For this reason, I suggest we increase the fee resolution to 0.01 PPC per 300 bytes. This preserves the 0.01 PPC/txn fee for standard txns while increasing the fee for large txns more than 3-fold, putting us back in line with the current cost level of nLockTime spam (not to mention it would all be burned and none merely locked).
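A minimal sketch of the two fee rules side by side (the function names are mine; ceiling division reflects the per-increment charging described above):

```python
import math

MIN_FEE = 0.01  # PPC

def fee_current(size_bytes):
    # today's rule: 0.01 PPC per started kilobyte
    return MIN_FEE * math.ceil(size_bytes / 1000)

def fee_proposed(size_bytes):
    # proposed rule: 0.01 PPC per started 300 bytes
    return MIN_FEE * math.ceil(size_bytes / 300)

for size in (300, 1000, 2_000_000):
    print(size, fee_current(size), fee_proposed(size))
# 300     -> 0.01 vs 0.01   (standard txn unchanged)
# 1000    -> 0.01 vs 0.04   (>3x for a 1 KB spam txn)
# 2000000 -> 20.0 vs 66.67  (~2 MB image)
```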

I don’t have a whole lot more to say here. It certainly makes back-of-the-envelope calculations more complicated, but it doesn’t actually require much protocol change. I think it is sufficiently elegant: it retains the current expectation of low fees while imposing an appropriate fee structure in preparation for large txns.

Block Size
And finally we have the limiting factor of the block size. The current 1 MB limit simply will not allow a ~2 MB image to be uploaded in a single txn. What’s more, the fee structure gets even more complicated once a block exceeds half full.


The real limitation on block size is node relay time. Here we will calculate the average relay requirement when blocks are full, then work out a reasonable stochastic-peak figure.
Let x be the block size in MB, y the download/upload speed of the average node in KB/s, and z the number of relays.
block time = 600 seconds = (z * x * 1000) / y
z is something like the degrees of separation between nodes on the network. Here I’ll assume it’s ~10, which I believe is an overstatement.
Rearranging to solve for y with z = 10:
y = 16.67 * x KB/s
So if the average node has an upload/download speed of 133.33 KB/s, an 8 MB block size should be fine on average.
Stochastic variation in block times can also produce multiple blocks within one 10-minute interval. If we assume the average node has something like 1 MB/s upload/download speed, the network can easily keep up with even 7.5 completely full 8 MB blocks in a single 10-minute interval. Based on these calculations, 8 MB seems like a completely feasible block size limit.
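The same arithmetic in code form, using x, y, and z as defined above:

```python
BLOCK_TIME = 600  # seconds

def required_node_speed_kbs(block_mb, relays=10):
    # From block time = (z * x * 1000) / y, solved for y:
    return relays * block_mb * 1000 / BLOCK_TIME

print(required_node_speed_kbs(1))  # 16.67 KB/s for 1 MB blocks
print(required_node_speed_kbs(8))  # 133.33 KB/s for 8 MB blocks

# Peak case: a 1 MB/s (1000 KB/s) node relaying full 8 MB blocks
# through 10 hops can handle this many blocks per 10-minute interval:
print(BLOCK_TIME * 1000 / (10 * 8 * 1000))  # 7.5
```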

These considerations lead me to claim 3.9 MB as a feasible OP_RETURN size limit.

Hi Nagalim,

I’m minting with a modified ppc client that has unlimited OP_RETURN: https://github.com/hrobeers/ppcoin/tree/unlimited-opreturn
There is currently an OP_RETURN txn of ~700 bytes in the mempool, to be minted in soon.
I want to illustrate that these kinds of changes can be community driven and don’t need protocol changes.

I’m totally in favor of the community experimenting with non-standard transactions!
But we should not make premature protocol optimizations; let’s get a serious use case first.

The good news here is that the OP_RETURN size limitation is NOT baked into the protocol! (contrary to popular belief)

[quote=“Nagalim, post:1, topic:4000”]OP_RETURN

Technically, this looks like putting large data into a single transaction. For illustrative purposes, I want the reader to imagine posting an entire image to the blockchain, say it’s ~2 MB.[/quote]

I would discourage storing images and other data of that size on chain. There are other persistent p2p storage techniques that allow storing the data off chain while using the chain to store its meta-data. I agree that 80 bytes is not enough, but that is because it is insufficient for a practical amount of meta-data, not because it doesn’t allow storing the data in its entirety.

So do like me: remove the OP_RETURN size limit in your client and help mint the larger OP_RETURN transactions into blocks. Then build applications using this feature and consider merging it upstream.

[quote=“Nagalim, post:1, topic:4000”]Fee Resolution
A standard txn has 1 single-sig input and 2 outputs. A txn like this is <300 Bytes. For this reason, I suggest we increase the fee resolution to 0.01 PPC/300 Bytes.[/quote]

This is not correct. I have often had a regular P2PKH transaction need 0.02 PPC as a fee, because it groups the many change outputs from previous transactions.
Grouping small outputs shrinks the UTXO table and therefore lowers the memory requirements for all nodes! We shouldn’t discourage this.
Multiple inputs are very common.

[quote=“Nagalim, post:1, topic:4000”]Block Size

These considerations lead me to claim a 3.9 MB OP_RETURN size as a feasible limitation.[/quote]

I wouldn’t do this yet, like I said earlier, we should not do premature protocol optimizations.
Let’s first develop and promote our “unlimited” OP_RETURN feature and scale when needed.
And let us still encourage off-chain data storage, the chain is for cryptographic proof and meta-data timestamping.

Thank you for the response; you bring up very good points. I’d like to respond specifically to the statement that peercoin is not to be used for decentralized data storage. Essentially, my argument is that the only legitimate limitation on what should or should not go on the blockchain is the fee. If someone is willing to pay the fee to store data in a decentralized way, we should not stop them or put artificial limitations in their path. One could argue that the core txns, moving peercoin from address to address, are raw data rather than meta-data, as are PeerAssets. In any case, no one can stop someone from using the blockchain to store data, so that just leaves the question of size limitations. Since we agree on lifting the OP_RETURN cap, I’ll leave that and talk about the other two points.

For the fee size, think of this as compared to increasing the fee to 0.03 PPC/KB across the board. The higher-resolution fee structure does not discourage the collection of TXOs so much as it discourages creating additional TXOs that can later be merged for free using the ‘extra space’ a low-resolution fee grants. This change is an intentional fee increase (we all know the fee is very low) that leaves the experience of the basic peercoin user unaffected.

And finally, the block size. I’ll say it plainly: this is a marketing ploy, but one we can afford to make, especially if it comes with a fee increase. Take it as a “build it and they will come” kind of move. Simply speaking, why not? The question “why would we?” is answered by the attention and marketing it would attract to our chain. So what are the arguments against it?

Again, I appreciate the discussion. I realize this post turns an uncontentious protocol change (lifting the cap on null data) into a contentious block size and fee debate, but I think there is a real opportunity here to make a statement about what the peercoin blockchain can be used for and what it can do. Also, as you, [member=32827]hrobeers[/member], are showing, I think the lift on OP_RETURN is uncontentious and will occur without me posting about it. I’d be happy to open another thread on the subject, but it would probably be better coming from the person actually broadcasting large txns on testnet.

Ok, I guess we’ll get distracted by too many different topics in the thread.

So let’s sort it out a bit:

[ol][li]Unlimited OP_RETURN: this is not a protocol change, let’s discuss this later in a different thread. I should mint a block with one in the near future and will start a thread about it.[/li]
[li]Increased block size: let’s discuss this in another thread maybe somewhere in marketing?[/li]
[li]High resolution txn fee: I think we should discuss this here.[/li][/ol]

Increasing the resolution is interesting to me, as it would encourage packing your data as small as possible instead of working in 1 kB increments.
But I think we should set a minimum size of 1 kB to start from, so that, for example: 1 kB = 0.01 PPC, 1.3 kB = 0.02 PPC, …
Otherwise it will be very hard to explain to users why some transactions are more expensive in a way that seems random to them.
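In code, the floor-plus-increments rule I’m suggesting would look something like this (a hypothetical sketch; the names are mine):

```python
import math

MIN_FEE = 0.01      # PPC, flat up to the floor
FLOOR_BYTES = 1000  # everything up to 1 kB pays just the minimum fee
STEP_BYTES = 300    # each started 300 bytes beyond that adds 0.01 PPC

def fee(size_bytes):
    extra = max(0, size_bytes - FLOOR_BYTES)
    return MIN_FEE * (1 + math.ceil(extra / STEP_BYTES))

print(fee(300))   # 0.01 -- unchanged for today's typical txns
print(fee(1000))  # 0.01
print(fee(1300))  # 0.02, matching the example above
```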

peercoin embiggens us all :stuck_out_tongue:

Thanks to the PARS project:

RE: Fee Resolution

With large txns, the dust size and the total fee become decoupled to some degree. Currently, a txn’s maximum size is tied to the number of dust outputs or inputs involved. With large txns, the total fee alone is the restriction; with regular txns, there are dust restrictions as well. (I may not have a point here with regard to no-output txns, but it shouldn’t affect the following.)

Assume that we want to increase fees, for this or some other reason. The idea is to maintain the practical usability of that fee. So I would like to re-propose the fee resolution parameter on its own, as a means of raising the total fee paid by large txns without drastically altering the fee structure of smaller txns.

The basic thought process is this:
If we could charge 0.00001 PPC/byte, the fee would be txn-size agnostic. However, this could be confusing to users. An alternative is to fix the fee for the smallest txn size, <300 bytes, at 0.01 PPC and simply measure txns in units of 300 bytes, i.e. fee = 0.01 PPC × ⌈size / 300 bytes⌉. The linear simplification, rather than anything piecewise, would keep the coding and explanations light.