Bitcoin Forum
June 24, 2018, 09:15:46 AM
News: Latest stable version of Bitcoin Core: 0.16.1  [Torrent]. (New!)
 
Author Topic: If you just SUM the inverse-hashes of valid blocks ..?  (Read 90 times)
spartacusrex
Hero Member
Offline
Activity: 648
Merit: 505
June 13, 2018, 12:05:39 PM
#1

..I'm working on something.

The inverse hash is the MAX_HASH_VALUE - the hash value. So the lower the hash of the block, the more difficult the block, the more you add to the total.

I'm wondering what the sum of the inverse-hashes of the blocks, as opposed to the sum of the difficulties of the blocks, would give you?

You would still need to find a nonce that satisfies the block difficulty, as usual, but after that the lower your hash, the more your block is worth.

The chain with the most hash-rate would on average still have the highest sum of inverse-hashes.

Any ideas what would happen?

( It just seems that some of the POW available is being left unused.. )
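The weighting the OP describes can be sketched as follows (illustrative Python, not consensus code; the hash values are made-up integers chosen for the example):

```python
# Sketch of the proposal: weight each block by MAX_HASH - hash instead of
# by its difficulty, so numerically lower hashes contribute more weight.
MAX_HASH = 2**256 - 1

def chain_weight(block_hashes):
    """Sum of inverse-hashes over a chain's blocks."""
    return sum(MAX_HASH - h for h in block_hashes)

target = 2**240  # every block must still satisfy the target, as usual

# Two single-block branches at the same difficulty: the lower hash wins.
print(chain_weight([target // 1000]) > chain_weight([target // 2]))  # True
```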

monsterer2
Member
Online
Activity: 252
Merit: 94
June 13, 2018, 12:26:46 PM
#2

For anyone wondering what he's asking, here's some clarification:

He's talking about what would happen if you changed the LCR (longest chain rule) from sorting branches by cumulative difficulty to sorting by lowest achieved hash vs target value. Sometimes miners mine blocks with a hash quite a lot lower than the target value, and on average these numerically lower hashes are harder to mine, so does it increase the security of the chain to sort by them?

tromp
Hero Member
Offline
Activity: 570
Merit: 512
June 13, 2018, 01:42:43 PM
Merited by suchmoon (5), HeRetiK (1)
#3

Sometimes miners mine blocks with a hash quite a lot lower than the target value

And that is exactly the problem with this proposal, since such unexpectedly low hashes will then cause multiple blocks to get orphaned. There will be a permanent state of uncertainty about finalization of recent transactions. 6 confirmations will no longer be particularly safe.

For the same reason, this will make selfish mining all the more effective.

So the Longest Chain Rule, where one next block is as good as any other, is quite essential in stabilizing the transaction history.
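tromp's objection can be made concrete if a block's "worth" is read as 2**256 // hash, i.e. the expected number of hash attempts that finding a hash that low represents (this exact formula is an assumption; the thread doesn't pin one down):

```python
# A single unexpectedly low hash can outweigh many ordinary confirmations
# when blocks are weighted by the work their hash represents in expectation.
WORK = 2**256

def block_worth(h):
    return WORK // h  # expected attempts needed to find a hash <= h

target = 2**240
honest = sum(block_worth(target // 2) for _ in range(6))  # six typical blocks
lucky = block_worth(target // 100)                        # one lucky block

# Six confirmations lose to one lucky competing block -> deep reorgs.
print(lucky > honest)  # True
```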
spartacusrex
Hero Member
Offline
Activity: 648
Merit: 505
June 13, 2018, 02:33:01 PM
#4

Sometimes miners mine blocks with a hash quite a lot lower than the target value

And that is exactly the problem, since such unexpectedly low hashes will then cause multiple blocks to get orphaned. There will be a permanent state of uncertainty about finalization of recent transactions.
6 confirmations will no longer be particularly safe.

For the same reason, this will make selfish mining all the more effective.

So the Longest Chain Rule, where one next block is as good as any other, is quite essential in stabilizing the transaction history.

Yes - this is the issue.. one lone hash can wipe out many normal / smaller hashes.

I have a system where you delay this. Initially a block is worth its normal difficulty, as per usual.

But after 10,000 blocks, once it is beyond any plausible re-org, you let it be worth what it's actually worth. This has some nice benefits (in my case, for pruning).

Carlton Banks
Legendary
Offline
Activity: 2044
Merit: 1197
June 13, 2018, 03:36:15 PM
#5

Sometimes miners mine blocks with a hash quite a lot lower than the target value

And that is exactly the problem, since such unexpectedly low hashes will then cause multiple blocks to get orphaned. There will be a permanent state of uncertainty about finalization of recent transactions.
6 confirmations will no longer be particularly safe.

This has no basis in fact.

All blocks must have a hash lower than the threshold, there is no logic in Bitcoin block validation that behaves in any way differently depending on how low the hash value is of a new block. Either a block is the lowest hash value for the next block solution, or it's not. "Unexpectedly low" is therefore completely meaningless. The only logic that exists in Bitcoin block validation is "lowest", not "how low".


Let's remove this "problem" you've identified lol, only the longest chain wins. We then have a "new" problem; how to resolve chain forks when 2 blocks are found before either block has 100% acceptance, and so different parts of the network accept either block. This problem was solved in Bitcoin back in 2009 or 2010, and now you're saying that the solution is causing the problem. Those who merited your post should ask for the merit points to be returned.


For the same reason, this will make selfish mining all the more effective.

But you were wrong, so it actually makes zero difference

Edit: I misinterpreted, sorry tromp

Vires in numeris
monsterer2
Member
Online
Activity: 252
Merit: 94
June 13, 2018, 04:12:22 PM
#6

Let's remove this "problem" you've identified lol, only the longest chain wins. We then have a "new" problem; how to resolve chain forks when 2 blocks are found before either block has 100% acceptance, and so different parts of the network accept either block. This problem was solved in Bitcoin back in 2009 or 2010, and now you're saying that the solution is causing the problem. Those who merited your post should ask for the merit points to be returned.

I don't think you've read this thread correctly. No one is suggesting we remove the cumulative difficulty rule; the OP was suggesting that hashes with a lower numerical value than the target are on average harder to mine than hashes just under the target, so there might be some merit in using this to order blocks.

@tromp rightly pointed out that this would cause wild reorgs when a miner gets lucky with an unusually low hash, thereby increasing double-spend risk.

HeRetiK
Hero Member
Offline
Activity: 784
Merit: 642
the forkings will continue until morale improves
June 13, 2018, 04:42:14 PM
#7

Sometimes miners mine blocks with a hash quite a lot lower than the target value

And that is exactly the problem, since such unexpectedly low hashes will then cause multiple blocks to get orphaned. There will be a permanent state of uncertainty about finalization of recent transactions.
6 confirmations will no longer be particularly safe.

This has no basis in fact.

All blocks must have a hash lower than the threshold, there is no logic in Bitcoin block validation that behaves in any way differently depending on how low the hash value is of a new block. Either a block is the lowest hash value for the next block solution, or it's not. "Unexpectedly low" is therefore completely meaningless. The only logic that exists in Bitcoin block validation is "lowest", not "how low".

[...]

I'm not sure you read tromp's post correctly (well, or maybe I didn't), but the way I understand it is this:

1) Bitcoin follows the chain with the most cumulative work, based on the difficulty at which each block was mined

2) spartacusrex suggests calculating the cumulative work based not on the difficulty (which is the same for each block within a given difficulty period) but on the amount of work actually put into a block beyond that difficulty (ie. how far the hash is below the difficulty threshold)

3) tromp then points out that this would make the logic for following the most cumulative work less stable, as a single "hard" block (ie. far below the difficulty threshold) would supersede multiple "easy" blocks (ie. within, but close to, the difficulty threshold). This would lead to both (a) more orphans and (b) some confirmations being worth more than others (ie. 6 confirmations by "easy" blocks could be nullified by a single "hard" block), making it hard to reliably assess transaction finality.

While I wouldn't go as far as claiming that this makes selfish mining easier, pointing out that block equality within a given difficulty period is important for a stable network and thus network security seems to me both correct and important. Feel free to correct me though.

Carlton Banks
Legendary
Offline
Activity: 2044
Merit: 1197
June 13, 2018, 05:42:17 PM
#8

Oh I get it now, tromp was saying "if that were true", explaining his edit

Vires in numeris
odolvlobo
Legendary
Offline
Activity: 2170
Merit: 1077
June 13, 2018, 05:55:39 PM
Merited by Foxpup (2)
#9

I think the idea is based on a misconception that rarer values are somehow worth more. I don't believe that is true because the amount of work expended to generate a hash is the same, regardless of the hash.

I believe that the result of such a system would simply be to make the actual difficulty higher than the stated difficulty, but by a varying amount. I can't think of any benefit of adding a random factor to the difficulty that would outweigh the problems.
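odolvlobo's expectation argument can be checked with a quick simulation (a sketch only; a valid block's hash is modeled as uniform below the target, which follows from hashing being uniform over attempts):

```python
import random

random.seed(1)
MAX_HASH = 2**256 - 1
TARGET = 2**240

def total_weight(n_blocks):
    """Total inverse-hash weight of n valid blocks (hash uniform on [0, TARGET))."""
    return sum(MAX_HASH - random.randrange(TARGET) for _ in range(n_blocks))

# A miner with twice the hashpower finds ~twice the blocks, and since each
# valid block's inverse-hash is ~MAX_HASH, total weight tracks block count.
ratio = total_weight(2000) / total_weight(1000)
print(1.99 < ratio < 2.01)  # True: rare low hashes add only noise
```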

Buy bitcoins with cash from somebody near you: LocalBitcoins
Buy stuff on Amazon at a discount with bitcoins: Purse.io
Join an anti-signature campaign: DannyHamilton's ignore list
spartacusrex
Hero Member
Offline
Activity: 648
Merit: 505
June 13, 2018, 06:42:34 PM
#10

P2Pool uses a system a little bit like this.

In P2Pool, you search for a block every 10 seconds. You publish at that difficulty level on the P2Pool network. BUT if you find a block that is 60x harder (one every 10 minutes), enough to be a valid Bitcoin block, you then publish that on the mainnet. The cumulative POW is preserved, in a smaller number of blocks.

In this scenario using multi-difficulty blocks has a great use case. 

I am trying to leverage this.

In Bitcoin, you search for a block every 10 minutes. You publish at that difficulty level on the mainnet. BUT if you find a block that is 60x harder (one every 10 hours), enough to be a valid Super Block (let's say), then you can publish that on a Super Block network, where you only find one block every 60 normal Bitcoin blocks. The cumulative POW is preserved, in a smaller number of blocks.

The point is that since this works for dual-difficulty blocks, what happens if you take this to the extreme, and simply use the inverse-hash as the block weight ?

It STILL seems to work (chain with most POW wins), but the fluctuations are far bigger..

I'm trying to see if those fluctuations can be better controlled.
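The dual-difficulty scheme described above can be sketched like this (illustrative targets, not P2Pool's real parameters):

```python
# One hash attempt is checked against two targets at once: an easy "share"
# target and the 60x harder full block target.
SHARE_TARGET = 2**236              # easier target: a share every ~10 seconds
BLOCK_TARGET = SHARE_TARGET // 60  # full target: a real block every ~10 minutes

def classify(h):
    """Classify a hash attempt; a block is automatically also a valid share."""
    if h < BLOCK_TARGET:
        return "block"  # valid at both levels: publish on mainnet
    if h < SHARE_TARGET:
        return "share"  # valid only on the share chain
    return "miss"

print(classify(BLOCK_TARGET // 2))  # block
print(classify(SHARE_TARGET // 2))  # share
```

The same work is counted either way: sixty shares or one full block represent roughly the same expected number of hash attempts.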

aliashraf
Sr. Member
Offline
Activity: 490
Merit: 273
June 13, 2018, 07:36:54 PM
#11

I think the idea is based on a misconception that rarer values are somehow worth more. I don't believe that is true because the amount of work expended to generate a hash is the same, regardless of the hash.

I believe that the result of such a system would simply make the actual difficulty higher than the stated difficulty, but by a varying amount. I can't think any benefit of adding a random factor to the difficulty that would outweigh the problems.
Agree. No matter what the outcome is, you are still mining against the target difficulty, and in the long run you find blocks in proportion to your hashpower; no hashpower is lost, contrary to what the OP suggests.