Re: Clif High's solution for Bitcoin to scale to infinity.
Very doubtful.
Where are all these extra lateral blocks going to be stored?
Reminds me of pseudoscience.
Geez.
Have you never used a spreadsheet before?
Block #52652 A | Block #52652 B | Block #52652 C
Block #52653 A
Block #52654 A
Block #52655 A | Block #52655 B
They would be stored in the blockchain like the rest of the blocks.
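As a rough sketch, the layout being described might be represented like this (purely hypothetical structure, since no code exists for this proposal):

```python
# Hypothetical sketch of the proposed layout: each height has one main
# "A" block plus zero or more lateral blocks, all stored in the chain.

chain = {
    52652: ["A", "B", "C"],  # busy period: main block plus two lateral blocks
    52653: ["A"],            # quiet periods: just the main block
    52654: ["A"],
    52655: ["A", "B"],
}

def blocks_at(height: int) -> list[str]:
    """List the block labels stored at a given height."""
    return [f"Block #{height} {label}" for label in chain.get(height, [])]

print(blocks_at(52652))  # ['Block #52652 A', 'Block #52652 B', 'Block #52652 C']
```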
Tell me, do blocks B, C, etc. also come within the same 10 minutes as the A-blocks? Do they also have block rewards? Do these rewards increase Bitcoin's inflation?
If yes -> Mwahaha.
If no -> Who mines them? Does block A have to certify blocks B, C, etc. (a Merkle tree of blocks?) and hence "wait for them"? Is there only one miner of blocks A, B and C, namely the one mining block A and getting its rewards? What is then the difference from the miner of block A simply including the transactions of B and C in block A?
Since no one has written the code yet, this is just my speculation: blocks B and C would be broadcast as quickly as possible, so in the same 10-minute window. Zero block rewards for the lateral blocks; however, the miner would receive the transaction fees for the additional blocks. One miner per block number, covering the A, B, C (or more) that follow it.
You are correct: one large block A could contain B and C.
But if it is not a miner, who will "broadcast" blocks B and C? Will this not be a cacophony of many people broadcasting many slightly different blocks B and C? Normally, only a miner can broadcast a valid block (anyone can claim to be a miner and broadcast a block with wrong proof of work, of course, but that is quickly discovered from the block header). So who will broadcast blocks B and C?
If it is the miner of block A, he's just cutting his own big block into a few pieces.
The difference is that you have to verify the block, so doing it Clif's way means you can validate a smaller block faster, and only use the lateral blocks when it is full of transactions. The additional blocks would matter more if they could run past the next main block that is found: someone tries to spam the network, a main block plus 20 lateral blocks eats up the entire spam attack, and then the next block is only one main block again.
But who is broadcasting these non-PoW, non-PoS blocks? Every node?
Single large blocks could also adapt, but BTC miners are not lowering the block size now, even when they include only one transaction.
At some point larger blocks could no longer be verified in the time before the next block arrives, which limits the maximum size.
How are you going to verify the gazillion of possible and broadcast small blocks, when the miner of block A has just picked out two of them?
Suppose the mempool contains, at a certain moment, 10,000 transactions. There are 5,000 different ways to put 6,000 of these 10,000 transactions into three blocks of about 2,000 transactions each, because there are 5,000 nodes doing so, each with a slightly different mempool (they are of course not in sync). So you receive 15,000 candidate blocks from 5,000 different nodes. The miner of block A has picked three of them, which he calls A, B and C. You will indeed find, among the 15,000 broadcast blocks, that one is block A, another is block B and yet another is block C. But if you don't have the time to verify one big block, do you think you have the time to verify those 15,000 blocks?
But who will broadcast blocks B and C? If it is the miner of block A, he's just cutting his own big block into a few pieces.
Nope: only the miner that mined the main block would be able to broadcast the lateral blocks for that main block. The lateral or adjacent blocks would have to match a checksum included in the main block, or fail inclusion in the chain.
But who is broadcasting these non-PoW, non-PoS blocks? Every node?
Only the miner of the main block would be able to broadcast the lateral blocks, and only for the main block he mined.
How are you going to verify the gazillion of possible and broadcast small blocks? If you don't have the time to verify one big block, do you think you have the time to verify those 15,000 blocks?
Only block A requires complete verification; the additional lateral blocks are only accepted if they match the checksums listed in the verified block.
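A minimal sketch of that acceptance rule (hypothetical, since no code exists for this proposal; SHA-256 stands in for whatever checksum the scheme would actually use):

```python
import hashlib

def checksum(data: bytes) -> str:
    # SHA-256 as a stand-in for whatever checksum the scheme would use
    return hashlib.sha256(data).hexdigest()

# The fully verified main block commits to its lateral blocks by listing
# their checksums (illustrative structure only).
lateral_b = b"serialized transactions of lateral block B"
lateral_c = b"serialized transactions of lateral block C"
main_block = {
    "txs": b"main block transactions",
    "lateral_checksums": [checksum(lateral_b), checksum(lateral_c)],
}

def accept_lateral(block: dict, candidate: bytes) -> bool:
    """Accept a lateral block only if its checksum is listed in the
    already-verified main block; otherwise it fails inclusion."""
    return checksum(candidate) in block["lateral_checksums"]

assert accept_lateral(main_block, lateral_b)
assert not accept_lateral(main_block, b"some other block")
```

Note that a matching checksum only ties a lateral block to the main block; it does not by itself establish that the transactions inside it are valid, which is the objection raised further down the thread.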
OK, this is how I see it. We already know that whatever is shown in purple (in the block diagram) gets hashed, and that the hash is handed to ASICs to make a more secure hash with a bunch of zeros at the start. There are 80 bytes of data that are unused in a block.
Now imagine that we used those 80 bytes for something as simple as a signature (a "sidehash"): more precisely, a signature hash of data signed by the pool's own chosen coinbase (reward) keypair, then hashed the same as before.
Now you may well be asking what "message" that signature could be part of.
The message could be a cluster of txs plus a signature belonging to a previous cluster of txs, and so on, and so on.
So how would it work? Well, imagine that every second, new txs are signed into a cluster (extended block) by the pool making the main block.
So, as a timeline, using just 3 cluster/extended blocks for example:
1. Previous block solved. The previous block hash is added to the new block, as well as 2,500 txs.
2. 2,500 txs are added to side block A and signed as A by that pool.
3. 2,500 txs and signature A are added to side block B's data and signed as B by that pool.
4. 2,500 txs and signature B are added to side block C's data and signed as C by that pool.
5. Side block C's signature is added to the main block, and the block is hashed and PoW'd as usual.
All within the same 10-minute window.
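The chaining in that timeline can be sketched as follows (illustrative only: an HMAC over a pool-held secret stands in for a real signature by the pool's coinbase keypair, and all names and sizes are made up):

```python
import hashlib
import hmac

# Stand-in for the pool's coinbase keypair: a shared-secret HMAC instead
# of a real asymmetric signature, purely for illustration.
POOL_KEY = b"pool coinbase secret (stand-in for a real keypair)"

def sign(message: bytes) -> bytes:
    return hmac.new(POOL_KEY, message, hashlib.sha256).digest()

def build_cluster_chain(clusters: list[bytes]):
    """Sign each cluster together with the previous cluster's signature,
    so the final signature commits to the whole chain of side blocks."""
    prev_sig = b""
    signed = []
    for txs in clusters:
        sig = sign(txs + prev_sig)
        signed.append((txs, sig))
        prev_sig = sig
    return signed, prev_sig

# Three side blocks A, B, C as in the timeline above; the final signature
# is what would go into the block's spare bytes before PoW starts.
clusters = [b"2500 txs for side block A",
            b"2500 txs for side block B",
            b"2500 txs for side block C"]
signed, final_sig = build_cluster_chain(clusters)
assert len(final_sig) == 32  # one 256-bit value, regardless of cluster count
```

Because each signature covers the previous one, changing any earlier cluster changes the final signature that ends up in the PoW'd block.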
Possibilities: because signatures are involved, clusters/extended blocks can be signed once a second, meaning a pool can make 600 cluster/extended blocks, allowing a tx to get semi-confirmed within a second of being seen by a pool. Yes, those 600 clusters instead of 1 extended block may add 42-48 kB of extra bloat (600 signatures), but you gain the feature of 1-second semi-confirmation instead of just extra txs per single block waiting 10 minutes for a confirm.
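That "42-48 kB" figure is consistent with 600 signatures of roughly 70-80 bytes each, which is the usual size range of a DER-encoded ECDSA signature (back-of-the-envelope only):

```python
# 600 clusters means 600 extra signatures per 10-minute window.
# A DER-encoded ECDSA signature is typically about 70-80 bytes.
signatures = 600
low, high = signatures * 70, signatures * 80
print(low, high)  # 42000 48000 -> roughly 42-48 kB of extra bloat
```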
Issues: there are 20+ pools, so your tx might be in cluster/extended block 350 of the 600 signed by Antpool, or 123 of the 600 signed by Bitfury, or 500 of the 600 signed by BTCC, because in the 600 seconds between main blocks each pool has its own 600 cluster/extended blocks and its own tx arrangement, solely and independently chosen. Resolution: the user receives at least 20 pool responses confirming that the user's tx is in a cluster/extended block somewhere in each pool's set.
Issue: this then causes a lot more data flying around the network, "soft confirming" 12,000 cluster/extended blocks (20 pools × 600 cluster/extended blocks each).
Issue: logical minds will think "screw it, let's make it 60 clusters/extended blocks with a 10-second semi-confirm". Practical minds then argue that although 10 seconds is better than 10 minutes, there are still 1,200 extended blocks flying through the network, and 10 seconds is now slower than Visa's "touchless" NFC swipe-and-go payment method.
My opinion: 1. Just using 1 extended block that can hold an infinite number of txs is a way of "going soft" with dynamics. But then the nodes should have a dynamic user-agent flag that pools derive a node consensus from, to know how many txs should be put in this extended block without causing issues for the majority of nodes.
2. Using it to also speed up the confirm by having MULTIPLE extended blocks signed every second/10 seconds/whatever is better than wrecking block rewards/difficulty/halving schedules like some other lame proposals that just reduce the 10-minute average to 2 minutes, 1 minute, or 30 seconds. But again, nodes will need rules of acceptability to keep pools in line, so that pools do not overdo it.
I could continue waffling on, but I'll stop here for now.
I DO NOT TRADE OR ACT AS ESCROW ON THIS FORUM EVER. Don’t take any information given on this forum on face value. Please do your own due diligence & respect what is written here as both opinion & information gleaned from experience. If you wish to seek legal FACTUAL advice, then seek the guidance of a LEGAL specialist.
Nope: only the miner that mined the main block would be able to broadcast the lateral blocks for that main block. The lateral or adjacent blocks would have to match a checksum included in the main block, or fail inclusion in the chain.
Well, then he could just as well make one big block, instead of splitting the one big block he alone decides about into several small ones.
But who is broadcasting these non-PoW, non-PoS blocks? Every node?
Only the miner of the main block would be able to broadcast the lateral blocks, and only for the main block he mined.
So he's just putting what he'd have put in one big block into several smaller ones.
Only block A requires complete verification; the additional lateral blocks are only accepted if they match the checksums listed in the verified block.
Of course not. There are two "levels" of verification: "header verification" and "full transaction consensus verification".
Let us not forget the PURPOSE of a blockchain: coming to a consensus on *transactions*. You want to know whether a given transaction is part of the past consensus or not, because the validity of a new transaction depends on it. That is the only reason a blockchain exists: knowing which set of past transactions there is consensus on.
You can check the validity of the block headers; that tells you that: 1) each header fits correctly into the chain, and 2) it has the right amount of PoW.
By verifying only the headers, you can verify the blockchain structure and the Merkle tree hashes included in them. But you don't know anything about the consensus on transactions this chain is supposed to bring you.
In order to know that, you have to know the transactions themselves. They need two verifications: 1) the validity of the transactions themselves, for which you need previous consensus knowledge, that is, which previous transactions we agreed upon: they provide the outputs that can be used as inputs in a valid transaction; 2) their inclusion in the consensus the miner decided upon: you combine their hashes into a Merkle tree and verify whether that Merkle tree hash corresponds to what is in the block header. If it fits, ALL of them are OK; if it doesn't fit, the block, including its block header, is false.
But to know this, you need to know ALL THE SIDE BLOCKS and verify all of them. It is sufficient that one single transaction in block C doesn't work for your block header to be, in the end, false.
==> There is no conceptual difference between verifying block A only and not verifying any block. It is A, B and C, or nothing.
Because a smart guy could include a transaction in block A, but "screw up" block C. In that case, block A is just as invalid as if, in a normal chain, the block itself were false. If you ONLY verify block A, think it is OK, and accept the payment, then if it turns out that block C was erroneous, your block A is JUST AS FALSE and your transaction will not be part of consensus. The block header will turn out to be false after all, just as if you had included a double spend or a wrong Merkle hash in today's chain.
Verifying ONLY block A doesn't verify anything: if block B or block C is false, that invalidates the block header, and hence also block A.
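The argument can be made concrete with a Bitcoin-style Merkle sketch: if the header commits to all transactions across A, B and C, then corrupting any single transaction, wherever it sits, changes the root and falsifies the header (double SHA-256 as in Bitcoin; the transaction bytes are of course made up):

```python
import hashlib

def dsha(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(tx_hashes: list[bytes]) -> bytes:
    """Fold transaction hashes pairwise up to a single root, duplicating
    the last hash on odd-sized levels, as Bitcoin does."""
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Imagine these txs spread over "block A" and "side blocks B and C".
txs = [b"txA1", b"txA2", b"txB1", b"txB2", b"txC1", b"txC2"]
root = merkle_root([dsha(tx) for tx in txs])

# Corrupt one transaction in "block C": the root no longer matches the
# header, so block A is invalid too, even though A itself was fine.
bad = txs[:-1] + [b"txC2-tampered"]
assert merkle_root([dsha(tx) for tx in bad]) != root
```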
There is really no difference between putting all these transactions in one big block and making all these "clusters". Note that as long as a pool hasn't built all its clusters, it CANNOT START HASHING the main block, because it doesn't know the final signature to include. In fact, the "list" of signatures is more wasteful than the Merkle tree hash used within a big block. So from the miner's point of view, there is no difference between making his list of linked clusters to obtain the final signature to include in the block he will start hashing on, and making one big block with a Merkle tree of hashes (his list is just slightly slower).
The "signatures" of the pools are absolutely no guarantee that the whole series will not be orphaned if another pool wins the main block. And, as you point out, not only do the individual transactions fly through the network, but all the *different versions of clusters of all pools* fly through the network with their signatures. That is a large multiple of the traffic of one big block; the factor is the number of competing pools (not only the big ones!). So if you see a transaction fly by and there are 30 mining pools active, you will see 30 different miner clusters containing this transaction fly by within a second or so. You will see this transaction 30 times, you will have to check the validity of 30 clusters, and finally only one of them will make it, as, say, the 25th cluster in a row from the pool that won the race for the main block. And maybe the winning main block doesn't even include that transaction in one of its side clusters, despite the 29 signatures you saw, because those 29 pools that signed it didn't win the main block.
Essentially, if there are about 30 active mining pools, the spent bandwidth is multiplied by about 30 compared to a single big block.
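A back-of-the-envelope sketch of that multiplication (all figures are illustrative assumptions, not measurements):

```python
# Every active pool broadcasts its own competing version of every cluster,
# so cluster traffic scales with the number of pools.
pools = 30                 # assumed number of active mining pools
clusters_per_window = 600  # one signed cluster per second for 10 minutes
cluster_kb = 100           # assumed average cluster size in kB

one_big_block_kb = clusters_per_window * cluster_kb  # one pool's tx data
all_clusters_kb = pools * one_big_block_kb           # what the network sees

print(all_clusters_kb // one_big_block_kb)  # 30: bandwidth factor = number of pools
```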
There is really no difference between putting all these transactions in one big block or making all these “clusters”.
Putting them into 600 clusters allows a "semi-confirm" of 1 second instead of waiting 10 minutes. The drawback is that the user needs to see 20 different signatures (from all the pools).
I explained this already…!
Note that as long as a pool hasn’t built all his clusters together, it CANNOT START HASHING on the main block, because it doesn’t know the final signature to include.
Yep, again, I explained this already. It would be signing clusters of fresh transactions. So by the time it has validated a previous main block, to know which txs already in the mempool to add to the new main block, it would separately have already grabbed a whole load of fresh txs to put into clusters. I've explained this already…!
In fact, the “list” of signatures is more wasteful than the “Merkle tree” hash or signature that is used within a big block. So from the miner’s PoV, there is no difference between making his list of linked clusters, to include the final signature into his block on which it will start hashing, or to make one big block with a Merkle tree of hashes (his list is just slightly slower).
Yep, again, I explained this already: 1 extended block vs 600 clusters = 42-48 kB of signatures "wasted", but to users it's a 1-second-confirm feature.
The “signatures” of the pools are absolutely no guarantee that the whole series will not be orphaned if another pool wins the main block. And, as you point out, not only do the individual transactions fly through the network, but all the *different versions of clusters of all pools* fly through the network with their signatures. That is a large multitude of traffic as compared to one big block.
Again, I explained this. Issues: "12,000 cluster/extended blocks (20 pools × 600 cluster/extended blocks each)".
The factor is the number of competing pools (not only the big ones!). So if you see a transaction fly by and there are 30 mining pools active, you will see 30 different miner clusters containing this transaction fly by within a second or so. You will see this transaction 30 times, you will have to check the validity of 30 clusters, and finally only one of them will make it, as, say, the 25th cluster in a row from the pool that won the race for the main block. And maybe the winning main block doesn't even include that transaction in one of its side clusters, despite the 29 signatures you saw, because those 29 pools that signed it didn't win the main block. Essentially, if there are about 30 active mining pools, the spent bandwidth is multiplied by about 30 compared to a single big block.
Wow, you waffled a paragraph to repeat what I said, and all you did was change the number 20 to 30. Yep, I explained it.
But it's good to see you have a critical hat on; you're just repeating what I already said.
Putting them into 600 clusters allows a "semi-confirm" of 1 second instead of waiting 10 minutes. The drawback is that the user needs to see 20 different signatures (from all the pools).
Then you are just doing a kind of "20-masternode scheme" like DASH, with the "pools" electing themselves as masternodes. This is nothing but the instant-pay mechanism of DASH, with informal masternodes instead of protocol-defined masternodes. In other words, Bitcoin then has 20 certificate authorities signing the validity of transactions.
Once you do that, why not simply use PoS? Why waste electricity on mining if you trust miner signatures? They could simply sign off on the main block too, in a PoS scheme, instead of wasting electricity on PoW!
yep i explained it..
OK, sorry, I misunderstood your post as being *in favour* of this scheme, while it is a total clusterfuck concerning bandwidth etc.
If 8 MB blocks are a "bandwidth issue", then 20 times the block size is most probably a "bandwidth issue" too!
However, what is positive in these discussions is that people are slowly, very slowly, discovering that:
1) PoW is a bad cryptographic protection
2) a blockchain is far too severe a form of consensus (we don't need the EXACT ORDER of transactions)
3) signatures of different entities can validate transactions much better/cheaper/faster than by putting them into a block that needs PoW protection.
In other words, that most of the principles of Bitcoin are, well, improvable. Which is no surprise, because it is the oldest, very first tech of its kind in use.
1) PoW is a bad cryptographic protection
2) a blockchain is far too severe a form of consensus (we don't need the EXACT ORDER of transactions)
3) signatures of different entities can validate transactions much better/cheaper/faster than by putting them into a block that needs PoW protection.
In other words, that most of the principles of Bitcoin are, well, improvable. Which is no surprise, because it is the oldest, very first tech of its kind in use.
1. PoW has nothing to do with blockchain size. A 1 kB block and a 1 GB block both result in a 256-bit hash. PoW only needs the 256-bit hash and doesn't care about block size (hint: you won't see a hard drive in an ASIC). It's about timing.
2. You do. You then have a checkable history just by knowing that the latest block contains data of the previous one, so there is no need to constantly re-check everything, because the previous is locked.
3. And as we both pointed out, signatures of different pools become troublesome, with users seeing 20+ pools all with different signatures. Plus, to offset things like propagation, the timing of signing becomes less instant, to give the network congestion room to breathe, which brings back the issue of "it's not fast enough if you're waiting up to 10 minutes in a grocery store checkout line hoping a confirm happens soon".
1. PoW has nothing to do with blockchain size. A 1 kB block and a 1 GB block both result in a 256-bit hash. PoW only needs the 256-bit hash and doesn't care about block size (hint: you won't see a hard drive in an ASIC). It's about timing.
I wasn't making that remark in direct relation to the current subject. PoW is bad cryptographic security because the "good guy" doesn't have any advantage over the "bad guy": it is simply the one who wastes more that wins. I only mentioned it because digital signatures do have the advantage that PoW lacks: the one with the secret key can easily sign, while it is practically impossible to imitate such a signature without the key.
2. You do. You then have a checkable history just by knowing that the latest block contains data of the previous one, so there is no need to constantly re-check everything, because the previous is locked.
I'm not saying that a blockchain is "not good enough". I'm saying it is far too severe. You don't NEED full blockchain ordering to verify transaction validity. It is much harder to come to "exact order consensus" than to come to "correct transaction set consensus". The order is not needed.
3. And as we both pointed out, signatures of different pools become troublesome, with users seeing 20+ pools all with different signatures. Plus, to offset things like propagation, the timing of signing becomes less instant, to give the network congestion room to breathe, which brings back the issue of "it's not fast enough if you're waiting up to 10 minutes in a grocery store checkout line hoping a confirm happens soon".
Indeed, the "solution" in this thread doesn't solve any serious problem, but adds new ones.