Has a DHT already been considered as a method to process microtransactions?

Hello!

I understand the purpose of Peercoin as a “backbone currency”, and fully support the measures being taken to reduce the size of the blockchain by discouraging micro-transactions.

However, I do feel that a feature for micro-transactions is missing. The proposed solutions I often hear about include side-chains and (partially) centralised solutions.

The problem with these is: side-chains, being blockchains, will have slow confirmation times (for this purpose, I don’t want to wait 5 minutes to pay for my beer), relatively high storage use because every node stores every transaction ever made on that chain, and the need to keep it all in sync. If we started pruning such a chain, we would also risk losing old unspent transactions. Centralized solutions obviously have the single point of failure that cryptocurrencies were created to avoid in the first place.

I was looking at altcoins the other day, and Safecoin caught my eye. To my surprise, I found out it doesn’t use a blockchain!

Instead, it uses a distributed hash table to assign “transaction managers” to every coin (I believe 32 of them, but the number could be increased for added safety).

To reach consensus, the transaction managers vote on which of any conflicting transactions to approve.
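To make the idea concrete, here is a minimal Python sketch of how such a scheme could look, assuming the managers are simply the nodes XOR-closest to a coin’s ID and that a simple majority vote is enough. The 32-manager figure is the one mentioned above; everything else (SHA-256 IDs, the quorum rule, the helper names) is my own illustration, not Safecoin’s actual implementation:

```python
# Hypothetical sketch, not Safecoin's actual code: the nodes whose IDs are
# XOR-closest to a coin's ID act as its "transaction managers", and a
# transaction is accepted once a quorum of them approves it.
import hashlib

NUM_MANAGERS = 32                    # the "32 of them" figure; assumed tunable
QUORUM = NUM_MANAGERS // 2 + 1       # simple majority; the real rule may differ

def node_id(name: bytes) -> int:
    """Derive a 256-bit ID from an arbitrary identifier (assumption: SHA-256)."""
    return int.from_bytes(hashlib.sha256(name).digest(), "big")

def managers_for_coin(coin_id: int, nodes: list[int]) -> list[int]:
    """The NUM_MANAGERS nodes XOR-closest to the coin's ID manage that coin."""
    return sorted(nodes, key=lambda n: n ^ coin_id)[:NUM_MANAGERS]

def transaction_approved(votes: dict[int, bool]) -> bool:
    """A transaction is valid once a quorum of managers has voted for it."""
    return sum(votes.values()) >= QUORUM

# Usage: 1000 fake nodes, one coin, all of its managers vote "yes".
nodes = [node_id(f"node-{i}".encode()) for i in range(1000)]
coin = node_id(b"coin-42")
managers = managers_for_coin(coin, nodes)
print(transaction_approved({m: True for m in managers}))  # True
```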

It’s obviously not as safe as a blockchain since relatively few nodes are involved in the transaction, but it’s a very lightweight system with a very short confirmation time, which might be just what we need for small transactions. A DHT also doesn’t have the single point of failure that centralized systems have.

Could a similar DHT-based system be a useful complement to Peercoin for processing micro-transactions? Perhaps some kind of pegging system could be used, or even integration into the Peercoin protocol itself? I personally think the two systems could complement each other wonderfully.

[quote=“sGYuOQTLJM, post:1, topic:3787”]Could a similar DHT-based system be a useful complement to Peercoin for processing micro-transactions? Perhaps some kind of pegging system could be used, or even integration into the Peercoin protocol itself? I personally think the two systems could complement each other wonderfully.[/quote]

This is an excellent idea.

I will give it some thought.

Thanks for bringing this to my attention.

Given that there are multiple DHT algorithms, each with its own strengths and weaknesses, which would be best suited to processing micro-transactions?
While looking at Maid and their implementation of DHT routing, I came across their use of the Kademlia algorithm.

Just an overview of Kademlia:

When deciding where to store particular key/value pairs, Kademlia performs a bitwise XOR on the N-bit node IDs to determine the topological distance between them. Lookups are then forwarded to the nearest available node. The Kademlia protocol states that each node must know at least one node in each of its sub-trees; this way, every node can be located and reached from any other node using its ID.
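A tiny sketch of that metric with toy 8-bit IDs (illustrative only, not a full lookup implementation):

```python
# Kademlia's distance metric is the bitwise XOR of two node IDs; a lookup is
# forwarded to the known contacts closest to the target under that metric.
def xor_distance(a: int, b: int) -> int:
    return a ^ b

def closest_contacts(target: int, contacts: list[int], count: int = 3) -> list[int]:
    """Return the `count` known contacts nearest to `target` by XOR distance."""
    return sorted(contacts, key=lambda c: xor_distance(c, target))[:count]

# Example: with these contacts, a lookup for 0b10110010 is forwarded first to
# 0b10110100, which shares the longest common prefix (smallest XOR distance).
contacts = [0b10010001, 0b10110100, 0b01100011, 0b11110000]
print([bin(c) for c in closest_contacts(0b10110010, contacts)])
```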

This XOR metric is unidirectional, which is similar to how the Chord algorithm works. The unidirectionality ensures that all lookups for the same key converge along the same path, regardless of the originating node, so caching [key, value] pairs along the lookup path prevents load from building up around the nodes closest to a popular key.
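A rough sketch of that caching step, assuming the simplest possible rule (cache at the last queried node that missed); the paper also specifies expiry times, which I leave out:

```python
# Path caching sketch: because lookups for the same key converge along the
# same path, a successful lookup can cache the value one hop before the node
# that returned it, so later lookups for that key terminate earlier.
def lookup_with_caching(key, stores, path):
    """`path` is the ordered list of node IDs queried, farthest from the key
    first; `stores` maps node ID -> that node's local {key: value} dict."""
    for i, node in enumerate(path):
        value = stores[node].get(key)
        if value is not None:
            if i > 0:
                stores[path[i - 1]][key] = value  # cache one hop earlier
            return value
    return None

# Usage: node 0b111 holds the value; after one lookup, 0b101 caches it too.
stores = {0b001: {}, 0b101: {}, 0b111: {42: "value"}}
print(lookup_with_caching(42, stores, [0b001, 0b101, 0b111]))  # "value"
print(42 in stores[0b101])                                     # True
```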

Kademlia uses the XOR of two node IDs as a measure of their distance apart. The idea behind the routing table buckets is that a node has detailed knowledge of the network “close” to it and progressively less knowledge the further you get from its ID. For each i, a Kademlia node keeps a list of contacts at distances between 2^i and 2^(i+1) from itself, stored as [IP address, UDP port, Node ID] triples. For large values of i, the list can grow up to a given size k. k is a system-wide replication parameter and should be chosen such that any given k nodes are very unlikely to fail within a given length of time.
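A hedged sketch of how that bucket placement could be implemented; k = 20 is the value used in the Kademlia paper, and the eviction rule (ping the least-recently-seen contact before replacing it) is only noted in a comment:

```python
# A contact at XOR distance d from us goes into bucket i where
# 2**i <= d < 2**(i+1), i.e. i = d.bit_length() - 1. Each bucket holds at
# most K (ip, udp_port, node_id) triples, kept in least-recently-seen order.
from collections import namedtuple

K = 20  # replication parameter from the Kademlia paper
Contact = namedtuple("Contact", ["ip", "udp_port", "node_id"])

def bucket_index(own_id: int, other_id: int) -> int:
    distance = own_id ^ other_id
    return distance.bit_length() - 1  # undefined for distance == 0 (ourselves)

def add_contact(buckets: dict[int, list], own_id: int, contact: Contact) -> None:
    i = bucket_index(own_id, contact.node_id)
    bucket = buckets.setdefault(i, [])
    if contact in bucket:
        bucket.remove(contact)       # already known: move to the tail
        bucket.append(contact)
    elif len(bucket) < K:
        bucket.append(contact)
    # else: the paper pings the least-recently-seen contact and evicts it
    # only if it fails to respond; omitted here for brevity.
```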

In the original Kademlia paper the routing table is made up of 160 buckets. I believe the value 160 is derived from the bit size of a SHA-1 hash; normally a hash function is used to generate the key for a given value to be stored. A different hash algorithm could be used, e.g. SHA-256 would yield up to 256 buckets. In practice it is very unlikely for the number of buckets actually in use to get anywhere near 256, or 160 for that matter: for a network of a billion nodes, the average bucket count would be around 27.
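As a back-of-the-envelope sanity check of that ~27 figure (my own estimate, assuming uniformly distributed IDs, k = 20, and a routing table that only splits buckets down to roughly k nodes each):

```python
# Rough estimate: if buckets are only split until each covers about k nodes,
# a node needs roughly log2(n / k) buckets plus the shared closest bucket.
import math

def expected_bucket_count(n_nodes: int, k: int = 20) -> float:
    return math.log2(n_nodes / k) + 1

print(round(expected_bucket_count(1_000_000_000)))  # ~27
```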

It seems that this organization, combined with the XOR topology metric, creates tiered locality. It guarantees that querying a node relatively near your target will return information about even closer nodes, which yields exponential convergence towards the target.