There are three main changes I am proposing here:
- Increase OP_RETURN allowed size
- Increase fee resolution (and thereby overall fee for large txns)
- Increase block size
Blockchain space on the Peercoin network is currently paid for by the kilobyte. We currently have a very small blockchain and low inflation rates, but we lack much usage of our secure network, so a logical direction for the network is one that gives people additional use cases for taking up blockchain space. By promoting the null data field (OP_RETURN) of a transaction from a minor add-on to a major network feature, we enter a whole new set of blockchain use cases that has not been fully explored by cryptocurrency to date. We can see demand for blockchain use beyond basic transaction verification (i.e. moving fungible tokens around) in two of the top cryptocurrencies by market cap: STEEM and Ethereum. With our fee structure, we are in a prime position to explore using the current Peercoin network as a backbone for more than just basic transactions.
Technically, this looks like putting large data into a single transaction. For illustrative purposes, I want the reader to imagine posting an entire image to the blockchain, say it’s ~2 MB. Then, the poster simply needs to tell someone the txid of that transaction and they can parse the image directly from the blockchain. As the basic transaction data for that txn is likely to be negligible in size, we can assume that txn is almost entirely just the image. This leads us to consider how the protocol handles a txn of that size.
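To make the mechanics concrete, here is a minimal sketch of how arbitrary data could be wrapped in (and recovered from) an OP_RETURN output script. The serialization follows standard Bitcoin-style pushdata rules; the function names are illustrative, not part of the Peercoin client, and an actual implementation would also need to build, fee, and broadcast the surrounding txn.

```python
# Minimal sketch: wrapping arbitrary bytes in an OP_RETURN output script
# and recovering them. Opcodes are the standard Bitcoin-style script
# constants; function names are illustrative only.
OP_RETURN = 0x6a
OP_PUSHDATA1 = 0x4c
OP_PUSHDATA2 = 0x4d
OP_PUSHDATA4 = 0x4e

def build_op_return_script(data: bytes) -> bytes:
    """Serialize data as a single OP_RETURN output script."""
    if len(data) <= 75:                  # direct push: length byte is the opcode
        push = bytes([len(data)])
    elif len(data) <= 0xff:
        push = bytes([OP_PUSHDATA1, len(data)])
    elif len(data) <= 0xffff:
        push = bytes([OP_PUSHDATA2]) + len(data).to_bytes(2, "little")
    else:
        push = bytes([OP_PUSHDATA4]) + len(data).to_bytes(4, "little")
    return bytes([OP_RETURN]) + push + data

def parse_op_return_script(script: bytes) -> bytes:
    """Extract the payload back out of an OP_RETURN script."""
    assert script[0] == OP_RETURN
    op = script[1]
    if op <= 75:
        return script[2:2 + op]
    if op == OP_PUSHDATA1:
        return script[3:3 + script[2]]
    if op == OP_PUSHDATA2:
        n = int.from_bytes(script[2:4], "little")
        return script[4:4 + n]
    n = int.from_bytes(script[2:6], "little")
    return script[6:6 + n]

payload = b"\x89PNG..." * 1000          # stand-in for a large payload
script = build_op_return_script(payload)
assert parse_op_return_script(script) == payload
```

Anyone who knows the txid can then fetch the raw transaction, locate the OP_RETURN output, and run the parse step to recover the data.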
Orphaned large txns
The protocol already ignores orphaned txns larger than 5 KB. A large txn must therefore have already-confirmed parents before it is broadcast, or be rebroadcast once the parents are confirmed.
A transaction can be locked for 14 days before it is processed. This creates a potential DDoS vector against node memory. Analyzing the economics of this attack requires understanding the minimum fee and the dust size requirements for a transaction. The best approach for an attacker to fill up a node's memory would be to spam the txout table full of dust, then use those dust outputs to form txins for a mass of 1 KB txns (a txin takes much more data than a txout) with nLockTime on them. This can be done even more effectively with multisig.
So if we assume these 1 KB spam txns have 2 dust inputs and the minimum fee, that works out to 0.03 PPC/KB (0.02 PPC locked in dust and 0.01 PPC paid as fee).
To clarify: filling 8 GB of node memory with this spam (effectively DDoSing every node) would require locking or spending 8,000,000 KB × 0.03 PPC/KB = 240,000 PPC, or ~1% of the PPC supply, for 14 days.
Now I want to lift the dust input mechanism, which large OP_RETURN txns would effectively do. The result would be straightforwardly 0.01 PPC/KB burned as fees, with only a trivial amount locked. That comes to 80,000 PPC, or ~0.33% of supply, for the same 14 days of 8 GB spam. To alleviate this concern, I look to increase the fee for large txns while preserving the standard fee for standard txns.
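The arithmetic behind both spam scenarios can be laid out explicitly, under the stated assumptions (1 KB txns, 0.01 PPC dust outputs, 0.01 PPC/KB minimum fee); integer units of 0.01 PPC avoid float rounding:

```python
# Cost of filling 8 GB of node memory with 1 KB nLockTime spam txns,
# in units of 0.01 PPC, under the assumptions stated in the text.
TARGET_KB = 8 * 1000 * 1000       # 8 GB of node memory, in KB (one txn per KB)
DUST_LOCKED_UNITS = 2             # two 0.01 PPC dust inputs locked per txn
FEE_UNITS = 1                     # 0.01 PPC minimum fee per 1 KB txn

# Current rules: dust locked + fee burned, per txn
cost_with_dust = TARGET_KB * (DUST_LOCKED_UNITS + FEE_UNITS) // 100   # in PPC
# With the dust-input mechanism lifted (large OP_RETURN txns): fee only
cost_fee_only = TARGET_KB * FEE_UNITS // 100                          # in PPC

print(cost_with_dust)   # 240000 PPC, ~1% of supply
print(cost_fee_only)    # 80000 PPC, ~0.33% of supply
```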
A standard txn has 1 single-sig input and 2 outputs; a txn like this is under 300 bytes. For this reason, I suggest we increase the fee resolution to 0.01 PPC per 300 bytes. This would preserve the 0.01 PPC/txn fee for standard txns while increasing the fee for large txns more than 3-fold (1000/300 ≈ 3.33). This would put us back in line with the current cost level of nLockTime spam (not to mention it would all be burned and none locked).
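A sketch of how the proposed resolution changes the fee, assuming fees are charged per started increment (again in integer units of 0.01 PPC to avoid float rounding; the rounding convention is an assumption of the sketch):

```python
# Fee comparison: current 0.01 PPC/KB resolution vs the proposed
# 0.01 PPC per 300 bytes, in integer units of 0.01 PPC.
# -(-a // b) is ceiling division: fees accrue per *started* increment.

def fee_units_current(size_bytes: int) -> int:
    """Current rule: one 0.01 PPC unit per started 1000 bytes."""
    return -(-size_bytes // 1000)

def fee_units_proposed(size_bytes: int) -> int:
    """Proposed rule: one 0.01 PPC unit per started 300 bytes."""
    return -(-size_bytes // 300)

# A standard <300 B txn pays 0.01 PPC under either rule:
assert fee_units_current(250) == fee_units_proposed(250) == 1
# A full 1 KB spam txn pays 0.04 PPC instead of 0.01 PPC:
assert fee_units_current(1000) == 1
assert fee_units_proposed(1000) == 4
```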
I don’t really have a whole lot more to say here. It certainly makes back of the envelope calculations more complicated, but it doesn’t actually require a whole lot of protocol change. I think it is sufficiently elegant to retain the current expectations of low fees while still imposing an appropriate fee structure in preparation for large txns.
And finally we have the limiting factor of the block size. The current 1 MB limit simply will not allow a ~2 MB image to be uploaded in a single txn. What's more, the fee structure gets even more complicated when the block exceeds half full.
The real limitation on block size is node relay time. Here we will calculate the average relay requirements when blocks are full, then try to develop a reasonable stochastic-peak amount.
Let x represent the block size in MB, y the download/upload speed of the average node in KB/s, and z the number of relays.
block time = 600 seconds = (z * x * 1000) / y
z is something like the degrees of separation between nodes on the network. Here I'll assume z ≈ 10, which I think is more likely an overstatement.
Rearranging to solve for y and assuming z = 10:
y = (z * x * 1000) / 600 = 16.67x KB/s
So if the average node has 133.33 KB/s upload/download speed (16.67 × 8), an 8 MB block size should be fine on average.
Peak stochastic variance may produce multiple blocks within one 10-minute interval. If we assume the average node has something like 1 MB/s upload/download speed, the network can easily keep up with even 7.5 completely full 8 MB blocks in one 10-minute interval (10 relays × 7.5 × 8000 KB / 1000 KB/s = 600 s). Based on these calculations, 8 MB seems like a completely feasible block size limit.
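The relay model above can be checked numerically; z = 10 and the 600-second block time are the assumptions stated in the text:

```python
# Relay-time model from above: block time (600 s) = z * x * 1000 / y,
# where x = block size in MB, y = node bandwidth in KB/s, z = relay hops.
Z = 10                 # assumed degrees of separation (likely an overstatement)
BLOCK_TIME_S = 600.0   # target block interval

def required_bandwidth_kbs(block_mb: float, blocks_per_interval: float = 1.0) -> float:
    """KB/s each node needs to relay the given load within one block interval."""
    return Z * block_mb * 1000 * blocks_per_interval / BLOCK_TIME_S

print(required_bandwidth_kbs(8))        # ~133.33 KB/s for one full 8 MB block
print(required_bandwidth_kbs(8, 7.5))   # 1000 KB/s (1 MB/s) for a 7.5-block burst
```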
These considerations lead me to claim a 3.9 MB OP_RETURN size as a feasible limitation, which keeps even a maximal OP_RETURN txn just under half of an 8 MB block.