Scaling in a free market, and avoiding re-capture.

Bitcoin Cash is here because Bitcoin (BTC) was captured and refused to increase the blocksize.

We solved this in Bitcoin Cash by handing the limits over to the free market. The chain can, in fact, continue to grow without software developers giving permission. Today we have the freest solution there is. Some suggest limiting it again with some algorithmic growth, which will inevitably lead to capture. Don't fall for that!

Let's walk through a simple scenario of how Bitcoin Cash can grow, using only the rules that exist on the chain today. No extra machinery needed.

Scenario

Today we have a maximum blocksize that miners set manually. This property is used only by miners and mining pools, and it indicates the maximum size of the blocks they will produce.

We also have a blocksize-accept-limit. This is a technical property, indicating the technical capabilities of anyone parsing blocks in the Bitcoin Cash ecosystem.
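
To make the distinction concrete, here is a minimal sketch of the two properties. The names and numbers are invented for illustration, not any node's actual configuration: the first caps what a miner produces, the second caps what any node will accept.

```python
# Invented names/values; a sketch of the two separate limits.
MAX_GENERATED_BLOCK_MB = 8.0   # miner-only: max size of blocks produced
ACCEPT_LIMIT_MB = 32.0         # every node: max size of blocks accepted

def build_block(mempool_mb):
    # A miner never assembles a block larger than its own production cap.
    return min(mempool_mb, MAX_GENERATED_BLOCK_MB)

def accept_block(block_mb):
    # Any node (miner, exchange, indexer) rejects oversized blocks.
    return block_mb <= ACCEPT_LIMIT_MB

print(build_block(50.0))   # 8.0: even with a 50 MB mempool
print(accept_block(8.0))   # True: well under the accept-limit
```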

The accept-limit does not need coordination to be changed: if 20% of the full nodes change theirs to 40 MB tomorrow, nothing bad will happen.

Go another year forward and you might have a bunch of 32 MB nodes, some 40 MB ones, and some services at 64 MB. Lots of variation, because the people running that software will have tested it at different limits.
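
A toy illustration of why that heterogeneity is harmless (the limits are invented): as long as the blocks actually being mined stay below the smallest accept-limit in use, every node stays in sync.

```python
# Invented limits; a mixed ecosystem of accept-limits.
node_limits_mb = [32, 32, 32, 40, 40, 64]
actual_block_mb = 10           # what miners actually produce today

everyone_accepts = all(actual_block_mb <= lim for lim in node_limits_mb)
print(everyone_accepts)        # True: no node falls out of sync
print(min(node_limits_mb))     # 32: the practical ceiling for miners
```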

Look three years into the future: the actual blocks being mined are around 10 MB. The majority of services run software that supports 40 MB by default. The miners, wanting to protect themselves, have soft-forked to keep their accept-limits at 32 MB.

Add another year and 20 MB blocks are regular. Some miners mine 30 MB blocks when the mempool gets really full, so transactions still get included in the next block. Practically all of the ecosystem (exchanges, merchants, full nodes, Fulcrum-like indexers) runs software marked as supporting 64 MB blocks.

Miners may coordinate among themselves to remove the 32 MB accept-limit. They are 100 full nodes in an ecosystem of 40,000 full nodes. The price has risen to a level where mining a block for the lolz no longer happens: any attacker would lose $100K per block trying to disrupt the network.

It's time: the miners decide it is safe and change their own blocksize-accept-limits. Some time later a random miner ends up mining a 33 MB block.
Everyone accepts it.

Want to understand the fine details of how this can work? There is a much more verbose CHIP here: https://codeberg.org/bitcoincash/CHIP-Block-Growth

9 thoughts on “Scaling in a free market, and avoiding re-capture.”

  1. > grow without software developers giving permission. Today we have the freest solution there is. Some suggest limiting it again with some algorithmic growth, which will inevitably lead to capture.

    This is FUD. We already have an “algorithmic limiter”. It’s just a very simple algorithm: accept_limit = value_in_config_file.

    It is the presence of the config file which, in your argument, prevents “developer capture.” Anyone can change the config file without developer assistance.

    There is no reason an adjustable algorithmic limiter cannot also use values changeable in a config file. And in fact the one under discussion uses config file values. You can even configure it to not auto-adjust.
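
    A hypothetical sketch of that point (this is not the actual CHIP algorithm, just an illustration): every knob, including the off switch, lives in the operator's config.

    ```python
    # Hypothetical auto-adjusting limiter; not the actual CHIP algorithm.
    config = {
        "accept_limit_mb": 32.0,  # operator-set floor, as today
        "auto_adjust": True,      # set False for a plain fixed limit
        "headroom": 2.0,          # target = headroom * observed demand
        "max_step": 1.01,         # at most +1% adjustment per block
    }

    def next_accept_limit(current_mb, recent_avg_block_mb, cfg=config):
        """One adjustment step, driven entirely by config values."""
        if not cfg["auto_adjust"]:
            return cfg["accept_limit_mb"]
        target = cfg["headroom"] * recent_avg_block_mb
        stepped = min(current_mb * cfg["max_step"], target)
        return max(stepped, cfg["accept_limit_mb"])  # never below the floor

    print(next_accept_limit(32.0, 20.0))  # 32.32: creeping toward 40
    ```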

    >The accept-limit does not need coordination to be changed: if 20% of the full nodes change theirs to 40 MB tomorrow, nothing bad will happen.

    This misses the case where 20% of the full nodes start falling out of sync because Kraig Right and his billionaire buddy think it’s cool to build 40 MB blocks full of cat photos, and the only solution is for those 20% of nodes that are falling over to bring their limits back in sync with the rest of the network.

    If there were only honest miners mining organic transactions then we arguably wouldn’t need an accept-limiter at all.

    Dishonest miners include miners that perform resource-starving attacks to drive other nodes (including competing miners) off the chain, such as nChain. These miners are willing to mine at a loss, maybe for an extended period of time, to achieve capture of their chain by effectively DoSing other nodes off the network.

    BCH is resistant to such an attack because we didn’t remove or radically raise our accept-limiter. That’s what it does. It limits the effects of a bloat attack to something the network can handle without fragmenting or falling out of sync. We also didn’t remove or radically increase limits on transaction size, for the same reason.

  2. As much as I like that we prepare for different potential events, I believe we need to stop talking about scaling. BCH is not popular; there is less demand for it than for the abandoned LTC.

    We need to talk about how to overcome propaganda, about marketing and about getting crypto-folks to use crypto (which will make them end up pro-BCH eventually).

    Blocks won’t be 10 MB in 3 years, nor in 7 years, if we don’t solve the current problem: a bad image, censorship, and a coin that is unknown.

  3. Tom, you are making a very basic mistake, and thus your entire line of reasoning ends up wrong.

    You think that BTC was captured because of technological reasons, but no such thing actually happened.


    BTC was captured for purely social reasons: miners followed Core instead of thinking for themselves.

    As of 2023 miners do not make any decisions in this ecosystem. They just follow.

    So the current possible/probable capture point of BCH is nothing else but the BCHN project, because that is what every miner will install, regardless of whether it is infiltrated/captured or not. This is why I got so protective and made a drama when I detected a possible infiltrator, Melroy. Because I understand this.

    Humans are the weakest point, not the tech. The tech is bulletproof.

    Once BCHN gets captured, it doesn’t matter what the technological limits are. Technological limits are just one of the possible casus belli that the captors will use to keep BCH neutered.

    In other words, technology is the effect, not the cause.

    In order to prevent BCH from getting captured like BTC was, what you need to fight is the social layer, or – like I would like to do – remove the social layer from the equation entirely – if that is even possible. If it is not 100% possible, then let’s at least get to as close to 100% as we can. Perhaps 98% will be enough.

  4. I am not a developer but I can see why it is important not to go down any path that will lead to BCH being captured.

    For the other side of the debate, I have been reading through all the comments, but is there an actual algorithm being proposed? I did not find one when I went through the comments. I think if there is an actual algorithm and someone thinks it is perfect, there is no problem discussing it. Or is the point that “there should be someone to make some algorithms for this”?

  5. A block size of 32 MB is sufficient for now. The block reward decreases every 4 years, and we need more users and more transaction fees to compensate. 32 MB is not at all full. Focus on more business adoption and acceptance.


    Bitcoin Cash Transactions historical chart
    https://bitinfocharts.com/comparison/bitcoin%20cash-transactions.html#3y

  6. The discussion so far in this thread seems to agree that the accept-limit is just a parameter in the config file, freeing miners to change it as needed to follow the market and thereby protecting miners from potential developer capture. However, this is true only up to a certain point. There is a real limit to the existing code when running on existing hardware. Beyond this, miners start falling behind and suffering orphans, while exchanges, merchants and users begin to lose confidence in the coin.

    So far, so good. The miners can buy faster hardware and faster network service and overcome these limits. They don’t need code tweaks from (potentially captured) developers. They just change the accept-limit in their config file. Ditto for exchanges.

    However, this is not enough. Eventually, the miners and exchanges will run into hard technical limits in the (potentially captured) code base. This problem will not be solved by adding faster hardware, because there are real limits to existing hardware based on solid physical laws, specifically the speed of light and thermodynamics. Processing power can be increased by adding cores per chip, chips per backplane, or backplanes per cluster, but clock rates can no longer be significantly increased. Memory capacity and random I/O ops can be increased by adding more storage hardware, but latency can only increase as more hardware is added and node equipment gets larger. Network bandwidth can also be increased by adding parallel internet connections, but geographic latency cannot be avoided.
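
    A back-of-envelope illustration of that latency floor (the scenario is invented; the constant is just physics):

    ```python
    # Speed-of-light floor on global propagation; no hardware fixes this.
    C_KM_PER_S = 299_792          # speed of light in vacuum
    antipodal_km = 20_000         # half the Earth's circumference
    one_way_ms = antipodal_km / C_KM_PER_S * 1000
    print(f"one-way >= {one_way_ms:.0f} ms, round trip >= {2 * one_way_ms:.0f} ms")
    # ~67 ms one-way, ~133 ms round trip; real fiber is slower still
    ```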

    The bottom line is that there is a limit built into the existing node software that will make it impossible to scale beyond some point. The good news is that the existing node software can be changed: thanks to Satoshi’s UTXO design, it can be given the ability to scale across an unlimited number of threads, achieving as many transactions processed per second, and as low a block-propagation latency, as desired, limited only by the physical limits of internet round trips.
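
    A minimal sketch of why the UTXO design parallelizes (invented types, a toy script check, and the sequential double-spend bookkeeping omitted; this is not BCHN code): each transaction input spends exactly one UTXO, so every (transaction, input) pair can be verified on its own thread against a read-only UTXO snapshot.

    ```python
    from concurrent.futures import ThreadPoolExecutor
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TxInput:
        outpoint: str              # "txid:vout" of the UTXO it spends

    @dataclass(frozen=True)
    class Tx:
        inputs: tuple

    def verify_input(utxo, tx, index):
        # Stand-in for the real Script VM; independent per input,
        # which is what makes block validation embarrassingly parallel.
        return utxo is not None

    def validate_block_parallel(utxo_set, txs, threads=8):
        with ThreadPoolExecutor(max_workers=threads) as pool:
            jobs = [pool.submit(verify_input, utxo_set.get(tin.outpoint), tx, i)
                    for tx in txs
                    for i, tin in enumerate(tx.inputs)]
            return all(job.result() for job in jobs)

    # Toy usage: two transactions spending two existing UTXOs.
    utxos = {"aa:0": object(), "bb:1": object()}
    txs = [Tx(inputs=(TxInput("aa:0"),)), Tx(inputs=(TxInput("bb:1"),))]
    print(validate_block_parallel(utxos, txs))   # True
    ```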

    The bad news is that the needed development is not happening at any significant rate, at least not to my knowledge. If it had been happening, there would have been multi-gigabyte testnet demonstrations. Instead, for the past few years the development seems to have been focused on adding various features.

    In my opinion, focusing on clever algorithms for changing a number is a waste of time, unless there is a way for the node operator to configure his hardware so the node will actually run correctly at the required rate.

    There is also a problem with user-facing servers such as Fulcrum servers and exchange software, but this is basically database software and so is easier to scale, as there are many examples of scalable databases, e.g. any big-tech social media platform. However, to my knowledge there are no practical examples of proof-of-work scalable block chains. (By scalable, I mean that as much performance as desired can be achieved by adding more or better hardware.)

  7. Three numbers need to be set by the mining node operators. It is quite simple, as mentioned in http://np.reddit.com/r/btc/comments/qgwskf/who_here_is_ready_to_see_some_64mb_blocks_on_mainnet/hilqbud?context=3

    Software developers should try their best to minimize d and k; there is nothing else they can or should do. Operators can easily estimate d and k by linear regression with respect to block sizes. k is related to the connection speed the mining node operator purchases from the ISP, so k might not be the best the ISP offers (for political-safety reasons, a darknet operator would never purchase the best speed offered by an ISP, or a three-letter agency would soon knock on the door).
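
    A sketch of the regression described above, assuming (as the linear-regression remark implies) a propagation-time model t = d + k * size, where d is the fixed delay (intercept) and k the per-megabyte cost (slope); the measurements below are invented:

    ```python
    # Ordinary least squares fit of t = d + k * size (invented data).
    def fit_d_and_k(sizes_mb, times_s):
        n = len(sizes_mb)
        mean_s = sum(sizes_mb) / n
        mean_t = sum(times_s) / n
        k = (sum((s - mean_s) * (t - mean_t) for s, t in zip(sizes_mb, times_s))
             / sum((s - mean_s) ** 2 for s in sizes_mb))
        d = mean_t - k * mean_s
        return d, k

    # Invented measurements: block size (MB) vs observed propagation (s).
    sizes = [1, 2, 4, 8, 16, 32]
    times = [0.7, 1.1, 1.9, 3.5, 6.7, 13.1]
    d, k = fit_d_and_k(sizes, times)
    print(f"d = {d:.2f} s fixed delay, k = {k:.3f} s/MB")  # d = 0.30, k = 0.400
    ```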
