
Full overview of Eth 2.0 & 1.x roadmaps from Messari

Full section on Messari's Ethereum trends for 2020 here

ETH 2.0 Research/Governance/Roadmap at a glance

If history is any guide, we’re not going to see ETH 2.0 until 2022 at the earliest, even if the earliest phases of “Serenity” begin rolling out in mid-2020. ETH 2.0’s rollout breaks down into seven (7!!!) phases and brings with it the promise of staking, sharding, a new virtual machine, and more dancing badgers.
(One of our analysts, Wilson Withiam, put together an excellent overview of both the ETH 2.0 and ETH 1.x roadmaps for this report. They are critical to track and understand at a high-level given how much Ethereum’s performance will affect other competitive projects and most of the DeFi and Web 3 infrastructure. So these next two sections are longer and more technical.)
Here’s what you need to know about the current game plan for crypto’s largest platform.
Phase 0 marks the launch of the “beacon chain”, which will serve as the backbone for a new blockchain. The beacon chain will manage network validators (large early stakers like ConsenSys) and ultimately assign validators to individual shards (slicing the new blockchain into smaller chunks is a key, difficult, controversial scaling decision that’s been made). The new chain will support Ethereum’s new proof-of-stake consensus mechanism, and offer inflation rewards with new ETH2 for those that pony up and lock 32 ETH1 tokens into an irreversible contract. That one-way bridge into the new system is also contentious, but it means ETH1 supply will start getting “effectively burned” once token holders begin claiming beacon chain validator slots. Initial reports claimed Jan. 3 as a realistic launch date (lol). It will be amazing to see this launched by the end of June.
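To make the one-way mechanics concrete, here is a minimal toy model (all names and numbers are illustrative; the real deposit contract involves BLS keys, Merkle accumulators, and deposit receipts): ETH1 sent in is treated as burned on the old chain, and a corresponding validator balance appears on the beacon chain.

```python
# Toy model of the one-way ETH1 -> ETH2 validator deposit (illustrative only).
DEPOSIT_SIZE = 32  # ETH required per validator slot

class ToyBridge:
    def __init__(self):
        self.eth1_burned = 0       # ETH1 effectively removed from circulation
        self.beacon_balances = {}  # validator pubkey -> beacon-chain balance

    def deposit(self, pubkey: str, amount: int) -> None:
        if amount != DEPOSIT_SIZE:
            raise ValueError("exactly 32 ETH per validator deposit")
        self.eth1_burned += amount             # no withdrawal path back to ETH1
        self.beacon_balances[pubkey] = amount  # stake now lives on the beacon chain

bridge = ToyBridge()
bridge.deposit("validator_01", 32)
print(bridge.eth1_burned)  # 32 -- supply "effectively burned" on ETH1
```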
Phase 1 will introduce 64 individual shard chains (reduced from 1,024!!!) to the network, with the option to increase the total down the road as the design gets tested. The Ethereum elite see sharding as the “key to future scalability” because shards can parallelize transaction processing, which could improve network performance and reduce individual validators’ costs (good for decentralization). It comes with big risk: this is still theoretical. No network the size of Ethereum has successfully sharded its blockchain. In Phase 1, shard chains will only contain simple data sets (no smart contracts or transaction executions) to test the system’s structure. As with Phase 0, the beacon chain will continue to run in parallel with ETH 1.x throughout the phase. Don’t expect Phase 1 anytime before 2021.
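A toy illustration of why sharding parallelizes the work (the routing rule below is invented; the real design assigns validator committees to shards via a randomized shuffle on the beacon chain):

```python
import hashlib
from collections import defaultdict

SHARD_COUNT = 64  # the Phase 1 target, down from the original 1,024

def shard_for(sender: str) -> int:
    # Toy rule: hash the sender address into one of 64 buckets.
    digest = hashlib.sha256(sender.encode()).digest()
    return int.from_bytes(digest[:4], "big") % SHARD_COUNT

txs = [f"0xsender{i:02d}" for i in range(200)]
buckets = defaultdict(list)
for tx in txs:
    buckets[shard_for(tx)].append(tx)

# Each bucket can now be processed by a separate committee in parallel --
# per-validator load shrinks roughly with the number of shards.
print(f"{len(buckets)} shards busy, ~{200 / SHARD_COUNT:.1f} txs each")
```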
Phase 2 marks the full launch of the ETH2 chain, allowing for on-chain contract execution and introducing the new eWASM virtual machine (dubbed EVM 2.0). At this point, existing dApps can start migrating their contracts from ETH 1.x to a specific shard (one shard per contract) in the new network. Storage rent, charging contract owners for storing data on the network (more on this below), is in the cards as well, which would require mass contract rewrites. Even though Phase 2 intends to replace the original Ethereum blockchain entirely, ETH 1.x may still live on as a shard within ETH2. (How confused are you by now? See why bitcoin will still dominate the macro narrative for a while?) A late 2021 release for Phase 2 is optimistic. Before the end of 2022 would be a win.
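Storage rent is easy to reason about with toy numbers (every parameter below is invented for illustration; no ETH2 rent schedule was final at the time of writing):

```python
# Hypothetical storage-rent arithmetic -- rates are invented, not a spec.
RENT_PER_BYTE_PER_BLOCK = 1e-9       # ETH; illustrative only
BLOCKS_PER_YEAR = 4 * 60 * 24 * 365  # ~15-second blocks

def yearly_rent(contract_bytes: int) -> float:
    return contract_bytes * RENT_PER_BYTE_PER_BLOCK * BLOCKS_PER_YEAR

print(f"{yearly_rent(10_000):.2f} ETH/year for a 10 kB contract")  # ~21 ETH
# Whatever the final rates, contracts written assuming free eternal storage
# would need rewrites to pay (or evict) -- hence the "mass rewrites" above.
```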
The final four phases are less defined, and without an attached timeline:
Phase 3 implements state-minimized clients (because stateless clients are just too much). Phase 4 allows for cross-shard transactions. Phase 5 improves network security and the availability of data proofs. Phase 6 introduces meta-shards, as in “shards within shards within shards,” for near-infinite scaling. If you’re scratching your head and are masochistic enough to read more, the Sharding Wiki page does note, “this may be difficult.”
Scaling and compilation efficiencies aside, the most notable change in Ethereum’s metamorphosis is the transition from proof-of-work to proof-of-stake. PoW is the more battle tested security model for blockchain networks, while PoS may prove to be more efficient but with new and less obvious attack vectors. For the more technical, we recommend reading Bison Trails’ Viktor Bunin on the subject of PoS security threats.
Past research has also shown PoS requires an extra layer of “trust” vs. PoW, to help nodes sync to the network. Most models share specific characteristics to address this trust issue, such as allowing for a dynamic set of validators (rotate your security), promoting token holder participation in consensus, and assessing steep penalties (slashing) for any network participant that violates the protocol guidelines. ETH 2.0 will function similarly, but may be able to learn from other PoS networks (and their R&D) as those go live and hit real-world issues. As Vitalik points out, recent research in PoS has resulted in “great theoretical progress.” But...
Listen, we're talking about practice. Not a game. Not a game. Not a game. We're talking about practice. Not a game….Practice? We're talking about practice, man? We're talking about practice. We're talking about practice. We ain't talking about the game. We're talking about practice, man.
Vitalik was eight when this happened, so the clip might help, and it may prove metaphoric.
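Circling back to slashing for a moment: the mechanic is easy to sketch. A validator caught signing two conflicting blocks for the same slot has provably violated the protocol and loses part of its stake (a toy model; the real ETH 2.0 penalty formulas scale with how many validators are slashed in the same window, and the field names here are invented):

```python
SLASH_FRACTION = 1 / 32  # illustrative minimum penalty, not the final spec

def check_and_slash(validator: dict, signed_headers: list) -> bool:
    """Slash if the validator signed two different block roots for one slot."""
    seen = {}  # slot -> first block root observed
    for header in signed_headers:
        root = seen.setdefault(header["slot"], header["root"])
        if root != header["root"]:  # two conflicting signatures: equivocation
            validator["balance"] -= validator["balance"] * SLASH_FRACTION
            validator["exited"] = True  # ejected from the validator set
            return True
    return False

v = {"balance": 32.0, "exited": False}
check_and_slash(v, [{"slot": 7, "root": "0xaa"}, {"slot": 7, "root": "0xbb"}])
print(v)  # {'balance': 31.0, 'exited': True}
```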

ETH 1.x Research/Governance/Roadmap at a glance

Ok, one more. Bear with us. Let’s reiterate, ETH 2.0 is a brand new blockchain. It’s going to be a chaotic and high-risk transition. In the meantime, the existing network needs to run existing applications (particularly financial settlements for DeFi transactions). More critical upgrades are needed in the current system.
To that end, ETH 1.x devs have three goals to boost performance and reduce blockchain bloat: (1) introduce client optimizations that increase transaction capacity; (2) cap disk space requirements and prune old, memory-sucking data (so running a node is less expensive and more decentralized); and (3) upgrade the EVM to eWASM, a newer open standard for code compilers that simplifies debugging and is also used by many of the newer smart contract platforms. ETH 1.x developers have decided to split the major tasks amongst four working groups.

Core developers intend to introduce most of these implementations through a series of hard forks, the latest of which activated just over a week ago (Istanbul, Dec. 7). However, Istanbul’s second phase, tentatively scheduled for Q2 next year, has Ethereans at each other’s throats. The controversy boils down to the fork’s inclusion of ProgPoW, an ASIC-resistant hashing algorithm designed to replace Ethereum’s current algo. ProgPoW aims to even the playing field for GPU miners and ward off the entrance of potential ASIC competitors. GPU miners like that. But ASIC-invested miners and many investors see ProgPoW as a threat to their investments. For those miners, the change would shift the power dynamic away from mining farms and render expensive, specialized mining hardware useless. Ethereum (and ERC-20) investors intent on securing their assets might balk because ASIC miners typically prop up hash rates (overall chain security) and their costs “naturally create a price-floor for ask prices of miners’ sell-orders.”
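The intuition behind ProgPoW can be caricatured in a few lines: deterministically derive a fresh random sequence of GPU-friendly operations from the current block period, so every miner runs the same program but fixed-function ASICs can’t bake the inner loop into silicon. A toy sketch of that concept (not the actual ProgPoW algorithm, which operates over GPU caches and much larger mixing state):

```python
import hashlib
import random

MASK = 0xFFFFFFFF
OPS = [
    lambda a, b: (a + b) & MASK,                       # modular add
    lambda a, b: a ^ b,                                # xor
    lambda a, b: (a * 33 + b) & MASK,                  # multiply-accumulate
    lambda a, b: (((a << 7) | (a >> 25)) & MASK) ^ b,  # rotate-xor
]

def period_program(period: int, length: int = 16):
    # Re-randomize the mixing program each period, seeded deterministically,
    # so all miners agree on it but hardware can't amortize a fixed circuit.
    rng = random.Random(period)
    return [rng.choice(OPS) for _ in range(length)]

def toy_progpow(header: bytes, nonce: int, period: int) -> bytes:
    seed = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    state = int.from_bytes(seed[:4], "big")
    for op in period_program(period):
        state = op(state, 0x9E3779B9)  # arbitrary mixing constant
    return hashlib.sha256(state.to_bytes(4, "big")).digest()

print(toy_progpow(b"block-header", nonce=42, period=7).hex())
```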
This saga is far from over. The infighting will likely continue leading up to ProgPoW’s activation date mid-next year, and presents the strongest potential for a network split since “The DAO” fork that spawned Ethereum Classic. The looming transition to ETH 2.0 (and proof-of-stake) will likely deter investor pushback, because it’s a short-term battle in a war the miners are ultimately going to lose, anyway.
Unless the roadmap changes back to supporting a hybrid PoW/PoS system, of course, but... Oh my god, I’m just kidding. This section is mercifully over.
submitted by CryptigoVespucci to r/ethereum

Vitalik's response to Tuur

I interlaced everything between Vitalik and Tuur to make it easier to read.
1/ People often ask me why I’m so “against” Ethereum. Why do I go out of my way to point out flaws or make analogies that put it in a bad light?
Intro
2/ First, ETH’s architecture & culture is opposite that of Bitcoin, and yet claims to offer same solutions: decentralization, immutability, SoV, asset issuance, smart contracts, …
Second, ETH is considered a crypto ‘blue chip’, thus colors perception of uninformed newcomers.
Agree! I personally find Ethereum culture far saner, though I am a bit biased :)
3/ I've followed Ethereum since 2014 & feel a responsibility to share my concerns. IMO contrary to its marketing, ETH is at best a science experiment. It’s now valued at $13B, which I think is still too high.
Not an argument
4/ I agree with Ethereum developer Vlad Zamfir that it’s not money, not safe, and not scalable. https://twitter.com/VladZamfir/status/838006311598030848
@VladZamfir Eth isn't money, so there is no monetary policy. There is currently fixed block issuance with an exponential difficulty increase (the bomb).
I'm pretty sure Vlad would say the exact same thing about Bitcoin
5/ To me the first red flag came up when in our weekly hangout we asked the ETH founders how they were going to scale the network. (We’re now 4.5 years later, and sharding is still a pipe dream.)
Ethereum's Joe Lubin in June 2014: "anticipate blockchain bloat—working on various sharding ideas". https://www.youtube.com/watch?v=oJG9g0lCPU8&feature=youtu.be&t=36m41s
The core principles have been known for years, the core design for nearly a year, and details for months, with implementations on the way. So sharding is definitely not at the pipe dream stage at this point.
6/ Despite strong optimism that on-chain scaling of Ethereum was around the corner (just another engineering job), this promise hasn’t been delivered on to date.
Sure, sharding is not yet finished. Though more incremental stuff has been going well, eg. uncle rates are at near record lows despite very high chain usage.
7/ Recently, a team of reputable developers decided to peer review a widely anticipated Casper / sharding white paper, concluding that it does not live up to its own claims.
Unmerciful peer review of Vlad Zamfir & co's white paper to scale Ethereum: "the authors do NOT prove that the CBC Casper family of protocols is Byzantine fault tolerant in either practice or theory".
That review was off the mark in many ways, eg. see https://twitter.com/technocrypto/status/1071111404340604929, and by the way CBC is not even a prerequisite for Serenity
8/ On the 2nd layer front, devs are now trying to scale Ethereum via state channels (ETH’s version of Lightning), but it is unclear whether main-chain issued ERC20 type tokens will be portable to this environment.
Umm... you can definitely use Raiden with arbitrary ERC20s. That's why the interface currently uses WETH (the ERC20-fied version of ether) and not ETH
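(Side note for readers: WETH is just ether wrapped in a minimal ERC20 contract, so channel and exchange code only ever has to handle one token interface. A hedged web3.py sketch of the wrap step; the node endpoint, account, and contract address below are placeholders, and the two-function ABI fragment reflects the widely known WETH9 interface:)

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # placeholder node
WETH_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
WETH_ABI = [
    {"name": "deposit", "type": "function", "stateMutability": "payable",
     "inputs": [], "outputs": []},
    {"name": "withdraw", "type": "function", "stateMutability": "nonpayable",
     "inputs": [{"name": "wad", "type": "uint256"}], "outputs": []},
]
weth = w3.eth.contract(address=WETH_ADDRESS, abi=WETH_ABI)

# Sending ETH into deposit() mints an equal balance of WETH (a plain ERC20)
# back to the sender; Raiden channels can then shuttle it like any token.
tx_hash = weth.functions.deposit().transact(
    {"from": w3.eth.accounts[0], "value": w3.to_wei(1, "ether")}
)
```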
9/ Compare this to how the Bitcoin Lightning Network project evolved:
elizabeth stark @starkness: For lnd: First public code released: January 2016 Alpha: January 2017 Beta: March 2018…
Ok
10/ Bitcoin’s Lightning Network is now live, and is growing at rapid clip.
Jameson Lopp @lopp: Lightning Network: January 2018 vs December 2018
Sure, though as far as I understand there's still a low probability of finding routes for nontrivial amounts, and there's capital lockup griefing vectors, and privacy issues.... FWIW I personally never thought lightning is unworkable, it's just a design that inherently runs into ten thousand small issues that will likely take a very long time to get past.
11/ In 2017, more Ethereum scaling buzz was created, this time the panacea was “Plasma”.
@TuurDemeester Buterin & Poon just published a new scaling proposal for Ethereum, "strongly complementary to base-layer PoS and sharding": plasma.io https://twitter.com/VitalikButerin/status/895467347502182401
Yay, Plasma!
12/ However, upon closer examination it was the recycling of some stale ideas, and the project went nowhere:
Peter Todd @peterktodd These ideas were all considered in the Treechains design process, and ultimately rejected as insecure.
Just because Peter Todd rejected something as "insecure" doesn't mean that it is. In general, the ethereum research community is quite convinced that the fundamental Plasma design is fine, and as far as I understand there are formal proofs on the way. The only insecurity that can't be avoided is mass exit vulns, and channel-based systems have those too.
13/ The elephant in the room is the transition to proof-of-stake, an “environmentally friendly” way to secure the chain. (If this was the plan all along, why create a proof-of-work chain first?)
@TuurDemeester "Changing from proof of work to proof of stake changes the economics of the system, all the rules change and it will impact everything."
Umm... we created a proof of work chain first because we did not have a satisfactory proof of stake algo initially?
14/ For the uninitiated, here’s a good write-up that highlights some of the fundamental design problems of proof-of-stake. Like I said, this is science experiment territory.
And here's a set of long arguments from me on why proof of stake is just fine: https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQ. For a more philosophical piece, see https://medium.com/@VitalikButerin/a-proof-of-stake-design-philosophy-506585978d51
15/ Also check out this thread about how Proof of Stake blockchains require subjectivity (i.e. a trusted third party) to achieve consensus: https://forum.blockstack.org/t/pos-blockchains-require-subjectivity-to-reach-consensus/762?u=muneeb … and this thread on Bitcoin: https://www.reddit.com/Bitcoin/comments/59t48m/proofofstake_question/
Yes, we know about weak subjectivity, see https://blog.ethereum.org/2014/11/25/proof-stake-learned-love-weak-subjectivity/. It's really not that bad, especially given that users need to update their clients once in a while anyway, oh and by the way even if the weak subjectivity assumption is broken an attacker still needs to gather up that pile of old keys making up 51% of the stake. And also to defend against that there's Universal Hash Time.
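(Side note: the weak subjectivity rule is tiny when written out — a syncing node simply refuses any chain that forks off before its most recent trusted checkpoint, which is exactly the “update your client once in a while” assumption. A toy sketch, with invented field names and an invented period length:)

```python
WEAK_SUBJECTIVITY_PERIOD = 100_000  # blocks; illustrative, not the real value

def chain_is_acceptable(chain: dict, checkpoint: dict, current_height: int) -> bool:
    """Reject histories rewritten from before the trusted checkpoint.

    An attacker holding a pile of *old* validator keys can re-sign ancient
    history, but cannot forge anything after a checkpoint the client got
    out-of-band (e.g. bundled with a recent client release).
    """
    if current_height - checkpoint["height"] > WEAK_SUBJECTIVITY_PERIOD:
        raise RuntimeError("checkpoint too stale; fetch a fresh one first")
    return chain["block_hash_at"][checkpoint["height"]] == checkpoint["block_hash"]
```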
16/ Keep in mind that Proof of Stake (PoS) is not a new concept at all. Proof-of-Work actually was one of the big innovations that made Bitcoin possible, after PoS was deemed impractical because of censorship vulnerability.
@TuurDemeester TIL Proof-of-stake based private currency designs date at least back to 1998. https://medium.com/swlh/the-untold-history-of-bitcoin-enter-the-cypherpunks-f764dee962a1
Oh I definitely agree that proof of work was superior for bootstrap, and I liked it back then especially because it actually managed to be reasonably egalitarian around 2009-2012 before ASICs fully took over. But at the present time it doesn't really have that nice attribute.
17/ Over the years, this has become a pattern in Ethereum’s culture: recycling old ideas while not properly referring to past research and having poor peer review standards. This is not how science progresses. Tuur Demeester added:
.@VitalikButerin has been repeatedly accused of / criticised for not crediting prior art. Once again with plasma: https://twitter.com/DamelonBCWS/status/895643582278782976
I try to credit people whenever I can; half my blog and ethresear.ch posts have a "special thanks" section right at the top. Sometimes we end up re-inventing stuff, and sometimes we end up hearing about stuff, forgetting it, and later re-inventing it; that's life as an autodidact. And if you feel you've been unfairly not credited for something, always feel free to comment, people have done this and I've edited.
18/ One of my big concerns is that sophistry and marketing hype are a serious part of Ethereum’s success so far, and that overly inflated expectations have led to an inflated market cap.
Ok, go on.
19/ Let’s illustrate with an example.
...
20/ A few days ago, I shared a critical tweet that made the argument that Ethereum’s value proposition is in essence utopian.
@TuurDemeester Ethereum-ism sounds a bit like Marxism to me:
  • What works today (PoW) is 'just a phase', the ideal & unproven future is to come: Proof-of-Stake.…
...
21/ I was very serious about my criticism. In fact, each one of the three points addressed what Vitalik Buterin has described as “unique value propositions of Ethereum proper”. https://www.reddit.com/ethereum/comments/5jk3he/how_to_prevent_the_cannibalism_of_ethereum_into/dbgujr8/
...
22/ My first point, about Ethereum developers rejecting Proof-of-Work, has been illustrated many times over By Vitalik and others. (See earlier in this tweetstorm for more about how PoS is unproven.)
Vitalik Non-giver of Ether @VitalikButerin: I don't believe in proof of work!
See above for links as to why I think proof of stake is great.
23/ My second point addresses Ethereum’s romance with the vague and dangerous notion of ‘social consensus’, where disruptive hard-forks are used to ‘upgrade’ or ‘optimize’ the system, which inevitably leads to increased centralization. More here:
See my rebuttal to Tuur's rebuttal :)
24/ My third point addresses PoS’ promise of perpetual income to ETHizens. Vitalik is no stranger to embracing free lunch ideas, e.g. during his 2014 ETH announcement speech, where he described a coin with a 20% inflation tax as having “no cost” to users.
Yeah, I haven't really emphasized perpetual income to stakers as a selling point in years. I actually favor rewards being as low as possible while still being high enough for security.
25/ In his response to my tweet, Vitalik adopted my format to “play the same game” in criticizing Bitcoin. My criticisms weren't addressed, and his response was riddled with errors. Yet his followers gave it +1,000 upvotes!
Vitalik Non-giver of Ether @VitalikButerin: - What works today (L1) is just a phase, ideal and unproven future (usable L2) is to come - Utopian concept of progress: we're already so confident we're finished we ain't needin no hard forks…
Ok, let's hear about what the errors are...
26/ Rebuttal: - BTC layer 1 is not “just a phase”, it always will be its definitive bedrock for transaction settlement. - Soft forking digital protocols has been the norm for over 3 decades—hard-forks are the deviation! - Satoshi never suggested hyperbitcoinization as a goal.
Sure, but (i) the use of layer 1 for consumer payments is definitely, in bitcoin ideology, "just a phase", (ii) I don't think you can make analogies between consensus protocols and other kinds of protocols, and between soft forking consensus protocols and protocol changes in other protocols, that easily, (iii) plenty of people do believe in hyperbitcoinization as a goal. Oh by the way: https://twitter.com/tuurdemeester/status/545993119599460353
27/ This kind of sophistry is exhausting and completely counter-productive, but it can be very convincing for an uninformed retail public.
Ok, go on.
28/ Let me share a few more inconvenient truths.
...
29/ In order to “guarantee” the transition to PoS’ utopia of perpetual income (staking coins earns interest), a “difficulty bomb” was embedded in the protocol, which supposedly would force miners to accept the transition.
The intended goal of the difficulty bomb was to prevent the protocol from ossifying, by ensuring that it has to hard fork eventually to reset the difficulty bomb, at which point the status quo bias in favor of not changing other protocol rules at the same time would be weaker. Though forcing a switch to PoS was definitely a key goal.
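(A quick calculation makes this concrete. The commonly cited original “ice age” term adds roughly 2^(block_number // 100,000 − 2) to difficulty, doubling every 100,000 blocks until block times become unbearable — a sketch, assuming that formula:)

```python
def bomb_term(block_number: int) -> int:
    # Exponential component added to difficulty; doubles every 100k blocks.
    return 2 ** ((block_number // 100_000) - 2)

for n in (3_000_000, 4_000_000, 5_000_000):
    print(f"block {n:,}: bomb term {bomb_term(n):,}")
# Every 100,000 blocks the term doubles, eventually dwarfing the rest of the
# difficulty calculation -- forcing a hard fork, which can also reset/delay it.
```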
30/ Of course, nothing came of this, because anything in the ETH protocol can be hard-forked away. Another broken promise.
Tuur Demeester @TuurDemeester: Looks like another Ethereum hard-fork is going to remove the "Ice Age" (difficulty increase meant to incentivize transition to PoS). https://www.cryptocompare.com/coins/guides/what-is-the-ethereum-ice-age/
How is that a broken promise? There was no social contract to only replace the difficulty-bombed protocol with a PoS chain.
31/ Another idea that was marketed heavily early on, was that with ETH you could program smart contract as easily as javascript applications.
Tuur Demeester @TuurDemeester: I forgot, but in 2014 Ethereum was quite literally described as "Javascript-on-the-blockchain"
Agree that was over-optimistic, though the part of the metaphor that's problematic is the "be done with complex apps in a couple hours" part, NOT the "general-purpose languages are great" part.
32/ This was criticized by P2P & OS developers as a reckless notion, given that every smart contract is actually a “de novo cryptographic protocol”. In other words, it’s playing with fire. https://bitcointalk.org/index.php?topic=1427885.msg14601127#msg14601127
See above
33/ The modular approach to Bitcoin seems to be much better at compartmentalizing risk, and thus reducing attack surfaces. I’ve written about modular scaling here...
To be fair, risk is reduced because Bitcoin does less.
34/ Another huge issue that Ethereum has is with scaling. By putting “everything on the blockchain” (which stores everything forever) and dubbing it “the world computer”, you are going to end up with a very slow and clogged up system.
Christopher Allen @ChristopherA: AWS cost: $0.000000066 for calc, Ethereum: $26.55. This is about 400 million times as expensive. World computer? https://hackernoon.com/ether-purchase-power-df40a38c5a2f
We never advocated "putting everything on the blockchain". The phrase "world computer" was never meant to be interpreted as "everyone's personal desktop", but rather as a common platform specifically for the parts of applications that require consensus on shared state. As evidence of this, notice how Whisper and Swarm were part of the vision as complements to Ethereum right from the start.
35/ By now the Ethereum bloat is so bad that cheaply running an individual node is practically impossible for a lay person. ETH developers are also imploring people to not deploy more smart contract apps on its blockchain.
Tuur Demeester @TuurDemeester: But... deploying d-apps on the "Ethereum Virtual Machine" is exactly what everyone was encouraged to do for the past 4 years. Looks like on-chain scaling wasn't such a great idea after all.
Umm.... I just spun up a node from scratch last week. On a consumer laptop.
36/ As a result, and despite the claims that running a node in “warp” mode is easy and as good as a full node, Ethereum is becoming increasingly centralized.
@TuurDemeester Finally a media article touching on the elephant in the room: Ethereum has become highly centralized. #infura https://www.coindesk.com/the-race-is-on-to-replace-ethereums-most-centralized-layer/
See above
37/ Another hollow claim: in 2016, Ethereum was promoted as being censorship resistant…
Tuur Demeester @TuurDemeester: Pre TheDAO #Ethereum presentation: "uncensorable, code is law, bottom up". http://ow.ly/qW49302Pp92
Yes, the DAO fork did violate the notion of absolute immutability. However, the "forking the DAO will lead to doom and gloom" crowd was very wrong in one key way: it did NOT work as a precedent justifying all sorts of further state interventions. The community clearly drew a line in the sand by firmly rejecting EIP 867, and EIP 999 seems to now also be going nowhere. So it seems like there's some evidence that the social contract of "moderately but not infinitely strong immutability" actually can be stable.
38/ Yet later that year, after only 6% of ETH holders had cast a vote, ETH core devs decided to endorse a hard-fork that clawed back the funds from a smart contract that held 4.5% of all ETH in circulation. More here: ...
See above
39/ Other potential signs of centralization: Vitalik Buterin signing a deal with a Russian government institution, and ETH core developers experimenting with semi-closed meetings: https://twitter.com/coindesk/status/902892844955860993 …,
Hudson Jameson @hudsonjameson: The "semi-closed" Ethereum 1.x meeting from last Friday was an experiment. The All Core Dev meeting this Friday will be recorded as usual.
Suppose I were to tomorrow sign up to work directly for Kim Jong Un. What concretely would happen to the Ethereum protocol? I suspect very little; I am mostly involved in the Serenity work, and the other researchers have proven very capable of both pushing the spec forward even without me and catching any mistakes with my work. So I don't think any argument involving me applies. And we ended up deciding not to do more semi-closed meetings.
40/ Another red flag to me is the apparent lack of relevant expertise in the ETH development community. (Check the responses…)
Tuur Demeester @TuurDemeester: Often heard: "but Ethereum also has world class engineers working on the protocol". Please name names and relevant pedigree so I can follow and learn. https://twitter.com/TuurDemeester/status/963029019447955461
I personally am confident in the talents of our core researchers, and our community of academic partners. Most recently the latter group includes people from Starkware, Stanford CBR, IC3, and other groups.
41/ For a while, Microsoft veteran Lucius Meredith was mentioned as playing an important role in ETH scaling, but now he is likely distracted by the failure of his ETH scaling company RChain. https://blog.ethereum.org/2015/12/24/understanding-serenity-part-i-abstraction/
I have no idea who described Lucius Meredith's work as being important for the Serenity roadmap.... oh and by the way, RChain is NOT an "Ethereum scaling company"
42/ Perhaps the recently added Gandalf of Ethereum, with his “Fellowship of Ethereum Magicians” [sic] can save the day, but imo that seems unlikely...
Honestly, I don't see why Ethereum Gandalf needs to save the day, because I don't see what is in danger and needs to be saved...
43/ This is becoming a long tweetstorm, so let’s wrap up with a few closing comments.
Yay!
44/ Do I have a conflict of interest? ETH is a publicly available asset with no real barriers to entry, so I could easily get a stake. Also, having met Vitalik & other ETH founders several times in 2013-’14, it would have been doable for me to become part of the in-crowd.
Agree there. And BTW I generally think financial conflicts of interest are somewhat overrated; social conflicts/tribal biases are the bigger problem much of the time. Though those two kinds of misalignments do frequently overlap and reinforce each other so they're difficult to fully disentangle.
45/ Actually, I was initially excited about Ethereum’s smart contract work - this was before one of its many pivots.
Tuur Demeester @TuurDemeester: Ethereum is probably the first programming language I will teach myself - who wouldn't want the ability to program smart BTC contracts?
Ethereum was never about "smart BTC contracts"..... even "Ethereum as a Mastercoin-style meta-protocol" was intended to be built on top of Primecoin.
46/ Also, I have done my share of soul searching about whether I could be suffering from survivor’s bias.
@TuurDemeester I just published “I’m not worried about Bitcoin Unlimited, but I am losing sleep over Ethereum” https://medium.com/p/im-not-worried-about-bitcoin-unlimited-but-i-am-losing-sleep-over-ethereum-b5251c54e66d
Ok, good.
47/ Here’s why Ethereum is dubious to me: rather than creating an open source project & testnet to work on these interesting computer science problems, its founders instead did a securities offering, involving many thousands of clueless retail investors.
What do you mean "instead of"? We did create an open source project and testnet! Whether or not ETH is a security is a legal question; seems like SEC people agree it's not: https://www.cnbc.com/2018/06/14/bitcoin-and-ethereum-are-not-securities-but-some-cryptocurrencies-may-be-sec-official-says.html
48/ Investing in the Ethereum ICO was akin to buying shares in a startup that had “invent time travel” as part of its business plan. Imo it was a reckless security offering, and it set the tone for the terrible capital misallocation of the 2017 ICO boom.
Nothing in the ethereum roadmap requires time-travel-like technical advancements or anything remotely close to that. Proof: we basically have all the fundamental technical advancements we need at this point.
49/ In my view, Ethereum is the Yahoo of our day - an unscalable “blue chip” cryptocurrency:
Tuur Demeester @TuurDemeester: 1/ The DotCom bubble shows that the market isn't very good at valuing early stage technology. I'll use Google vs. Yahoo to illustrate.
Got it.
50/ I’ll close with a few words from Gregory Maxwell from 2016: https://bitcointalk.org/index.php?topic=1427885.msg14601127#msg14601127
See my rebuttal to Greg from 2 years ago: https://www.reddit.com/ethereum/comments/4g1bh6/greg_maxwells_critique_of_ethereum_blockchains/
submitted by shouldbdan to r/ethtrader

Greg Maxwell /u/nullc (CTO of Blockstream) has sent me two private messages in response to my other post today (where I said "Chinese miners can only win big by following the market - not by following Core/Blockstream."). In response to his private messages, I am publicly posting my reply, here:

Note:
Greg Maxwell nullc sent me 2 short private messages criticizing me today. For whatever reason, he seems to prefer messaging me privately these days, rather than responding publicly on these forums.
Without asking him for permission to publish his private messages, I do think it should be fine for me to respond to them publicly here - only quoting 3 phrases from them, namely: "340GB", "paid off", and "integrity" LOL.
There was nothing particularly new or revealing in his messages - just more of the same stuff we've all heard before. I have no idea why he prefers responding to me privately these days.
Everything below is written by me - I haven't tried to upload his 2 PMs to me, since he didn't give permission (and I didn't ask). The only stuff below from his 2 PMs is the 3 phrases already mentioned: "340GB", "paid off", and "integrity". The rest of this long wall of text is just my "open letter to Greg."
TL;DR: The code that maximally uses the available hardware and infrastructure will win - and there is nothing Core/Blockstream can do to stop that. Also, things like the Berlin Wall or the Soviet Union lasted for a lot longer than people expected - but, conversely, they also got swept away a lot faster than anyone expected. The "vote" for bigger blocks is an ongoing referendum - and Classic is running on 20-25% of the network (and can and will jump up to the needed 75% very fast, when investors demand it due to the inevitable "congestion crisis") - which must be a massive worry for Greg/Adam/Austin and their backers from the Bilderberg Group. The debate will inevitably be decided in favor of bigger blocks - simply because the market demands it, and the hardware / infrastructure supports it.
Hello Greg Maxwell nullc (CTO of Blockstream) -
Thank you for your private messages in response to my post.
I respect (most of) your work on Bitcoin, but I think you were wrong on several major points in your messages, and in your overall economic approach to Bitcoin - as I explain in greater detail below:
Correcting some inappropriate terminology you used
As everybody knows, Classic or Unlimited or Adaptive (all of which I did mention specifically in my post) do not support "340GB" blocks (which I did not mention in my post).
It is therefore a straw-man for you to claim that big-block supporters want "340GB" blocks. Craig Wright may want that - but nobody else supports his crazy posturing and ridiculous ideas.
You should know that what actual users / investors (and Satoshi) actually do want, is to let the market and the infrastructure decide on the size of actual blocks - which could be around 2 MB, or 4 MB, etc. - gradually growing in accordance with market needs and infrastructure capabilities (free from any arbitrary, artificial central planning and obstructionism on the part of Core/Blockstream, and its investors - many of whom have a vested interest in maintaining the current debt-backed fiat system).
You yourself (nullc) once said somewhere that bigger blocks would probably be fine - ie, they would not pose a decentralization risk. I found the link:
https://np.reddit.com/btc/comments/43mond/even_a_year_ago_i_said_i_though_we_could_probably/
I am also surprised that you now seem to be among those making unfounded insinuations that posters such as myself must somehow be "paid off" - as if intelligent observers and participants could not decide on their own, based on the empirical evidence, that bigger blocks are needed, when the network is obviously becoming congested and additional infrastructure is obviously available.
Random posters on Reddit might say and believe such conspiratorial nonsense - but I had always thought that you, given your intellectual abilities, would have been able to determine that people like me are able to arrive at supporting bigger blocks quite entirely on our own, based on two simple empirical facts, ie:
  • the infrastructure supports bigger blocks now;
  • the market needs bigger blocks now.
In the present case, I will simply assume that you might be having a bad day, for you to erroneously and groundlessly insinuate that I must be "paid off" in order to support bigger blocks.
Using Occam's Razor
The much simpler explanation is that bigger-block supporters believe they will get "paid off" in the form of bigger gains on their investment in Bitcoin.
Rational investors and users understand that bigger blocks are necessary, based on the apparent correlation (not necessarily causation!) between volume and price (as mentioned in my other post, and backed up with graphs).
And rational network capacity planners (a group which you should be in - but for some mysterious reason, you're not) also understand that bigger blocks are necessary, and quite feasible (and do not pose any undue "centralization risk".)
As I have been on the record for months publicly stating, I understand that bigger blocks are necessary based on the following two objective, rational reasons:
  • because I've seen the graphs; and
  • because I've seen the empirical research in the field (from guys like Gavin and Toomim) showing that the network infrastructure (primarily bandwidth and latency - but also RAM and CPU) would also support bigger blocks now (I believe they showed that 3-4MB blocks would definitely work fine on the network now - possibly even 8 MB - without causing undue centralization). A back-of-envelope version of that bandwidth arithmetic follows this list.
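Here is a rough sketch of that arithmetic (the round numbers and peer count are my own assumptions, not Gavin's or Toomim's measurements):

```python
# Back-of-envelope: average upload bandwidth to relay an 8 MB block.
BLOCK_MB = 8
BLOCK_INTERVAL_S = 600   # ~10-minute block interval
PEERS = 8                # typical default outbound connection count

avg_upload_mbps = BLOCK_MB * 8 * PEERS / BLOCK_INTERVAL_S
print(f"~{avg_upload_mbps:.2f} Mbps average upload")  # ~0.85 Mbps
# Even relaying a full 8 MB block to 8 peers averages under 1 Mbps -- well
# within ordinary consumer connections (the harder problem is the latency
# spike at block-propagation time, not the average throughput).
```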
Bigger-block supporters are being objective; smaller-block supporters are not
I am surprised that you no longer talk about this debate in those kind of objective terms:
  • bandwidth, latency (including Great Firewall of China), RAM, CPU;
  • centralization risk
Those are really the only considerations which we should be discussing in this debate - because those are the only rational considerations which might justify the argument for keeping 1 MB.
And yet you, and Adam Back adam3us, and your company Blockstream (financed by the Bilderberg Group, which has significant overlap with central banks and the legacy, debt-based, violence-backed fiat money system that has been running and slowly destroying our world) never make such objective, technical arguments anymore.
And when you make unfounded conspiratorial, insulting insinuations saying people who disagree with you on the facts must somehow be "paid off", then you are now talking like some "nobody" on Reddit - making wild baseless accusations that people must be "paid off" to support bigger blocks, something I had always thought was "beneath" you.
Instead, Occams's Razor suggests that people who support bigger blocks are merely doing so out of:
  • simple, rational investment policy; and
  • simple, rational capacity planning.
At this point, the burden is on guys like you (nullc) to explain why you support a so-called scaling "roadmap" which is not aligned with:
  • simple, rational investment policy; and
  • simple, rational capacity planning
The burden is also on guys like you to show that you do not have a conflict of interest, due to Blockstream's highly-publicized connections (via insurance giant AXA - whose CEO is also the Chairman of the Bilderberg Group; and companies such as the "Big 4" accounting firm PwC) to the global cartel of debt-based central banks with their infinite money-printing.
In a nutshell, the argument of big-block supporters is simple:
If the hardware / network infrastructure supports bigger blocks (and it does), and if the market demands it (and it does), then we certainly should use bigger blocks - now.
You have never provided a counter-argument to this simple, rational proposition - for the past few years.
If you have actual numbers or evidence or facts or even legitimate concerns (regarding "centralization risk" - presumably your only argument) then you should show such evidence.
But you never have. So we can only assume either incompetence or malfeasance on your part.
As I have also publicly and privately stated to you many times, with the utmost of sincerity: We do of course appreciate the wealth of stellar coding skills which you bring to Bitcoin's cryptographic and networking aspects.
But we do not appreciate the obstructionism and centralization which you also bring to Bitcoin's economic and scaling aspects.
Bitcoin is bigger than you.
The simple reality is this: If you can't / won't let Bitcoin grow naturally, then the market is going to eventually route around you, and billions (eventually trillions) of investor capital and user payments will naturally flow elsewhere.
So: You can either be the guy who wrote the software to provide simple and safe Bitcoin scaling (while maintaining "reasonable" decentralization) - or the guy who didn't.
The choice is yours.
The market, and history, don't really care about:
  • which "side" you (nullc) might be on, or
  • whether you yourself might have been "paid off" (or under a non-disclosure agreement written perhaps by some investors associated with the Bilderberg Group and the legacy debt-based fiat money system which they support), or
  • whether or not you might be clueless about economics.
Crypto and/or Bitcoin will move on - with or without you and your obstructionism.
Bigger-block supporters, including myself, are impartial
By the way, my two recent posts this past week on the Craig Wright extravaganza...
...should have given you some indication that I am being impartial and objective, and I do have "integrity" (and I am not "paid off" by anybody, as you so insultingly insinuated).
In other words, much like the market and investors, I don't care who provides bigger blocks - whether it would be Core/Blockstream, or Bitcoin Classic, or (the perhaps confusingly-named) "Bitcoin Unlimited" (which isn't necessarily about some kind of "unlimited" blocksize, but rather simply about liberating users and miners from being "limited" by controls imposed by any centralized group of developers, such as Core/Blockstream and the Bilderbergers who fund you).
So, it should be clear by now I don't care one way or the other about Gavin personally - or about you, or about any other coders.
I care about code, and arguments - regardless of who is providing such things - eg:
  • When Gavin didn't demand crypto proof from Craig, and you said you would have: I publicly criticized Gavin - and I supported you.
  • When you continue to impose needless obstacles to bigger blocks, then I continue to criticize you.
In other words, as we all know, it's not about the people.
It's about the code - and what the market wants, and what the infrastructure will bear.
You of all people should know that that's how these things should be decided.
Fortunately, we can take what we need, and throw away the rest.
Your crypto/networking expertise is appreciated; your dictating of economic parameters is not.
As I have also repeatedly stated in the past, I pretty much support everything coming from you, nullc:
  • your crypto and networking and game-theoretical expertise,
  • your extremely important work on Confidential Transactions / homomorphic encryption.
  • your desire to keep Bitcoin decentralized.
And I (and the network, and the market/investors) will always thank you profusely and quite sincerely for these massive contributions which you make.
But open-source code is (fortunately) à la carte. It's mix-and-match. We can use your crypto and networking code (which is great) - and we can reject your cripple-code (artificially small 1 MB blocks), throwing it where it belongs: in the garbage heap of history.
So I hope you see that I am being rational and objective about what I support (the code) - and that I am also always neutral and impartial regarding who may (or may not) provide it.
And by the way: Bitcoin is actually not as complicated as certain people make it out to be.
This is another point which might be lost on certain people, and that point is this:
The crypto code behind Bitcoin actually is very simple.
And the networking code behind Bitcoin is actually also fairly simple as well.
Right now you may be feeling rather important and special, because you're part of the first wave of development of cryptocurrencies.
But if the cryptocurrency which you're coding (Core/Blockstream's version of Bitcoin, as funded by the Bilderberg Group) fails to deliver what investors want, then investors will dump you so fast your head will spin.
Investors care about money, not code.
So bigger blocks will eventually, inevitably come - simply because the market demand is there, and the infrastructure capacity is there.
It might be nice if bigger blocks would come from Core/Blockstream.
But who knows - it might actually be nicer (in terms of anti-fragility and decentralization of development) if bigger blocks were to come from someone other than Core/Blockstream.
So I'm really not begging you - I'm warning you, for your own benefit (your reputation and place in history), that:
Either way, we are going to get bigger blocks.
Simply because the market wants them, and the hardware / infrastructre can provide them.
And there is nothing you can do to stop us.
So the market will inevitably adopt bigger blocks either with or without you guys - given that the crypto and networking tech behind Bitcoin is not all that complex, and it's open-source, and there is massive pent-up investor demand for cryptocurrency - to the tune of multiple billions (or eventually trillions) of dollars.
It ain't over till the fat lady sings.
Regarding the "success" which certain small-block supports are (prematurely) gloating about, during this time when a hard-fork has not happened yet: they should bear in mind that the market has only begun to speak.
And the first thing it did when it spoke was to dump about 20-25% of Core/Blockstream nodes in a matter of weeks. (And the next thing that happened was Gemini adding Ethereum trading.)
So a sizable percentage of nodes are already using Classic. Despite desperate, irrelevant attempts of certain posters on these forums to "spin" the current situation as a "win" for Core - it is actually a major "fail" for Core.
Because if Core/Blockstream were not "blocking" Bitcoin's natural, organic growth with that crappy little line of temporary anti-spam kludge-code which you and your minions have refused to delete despite Satoshi explicitly telling you to back in 2010 ("MAX_BLOCKSIZE = 1000000"), then there would be something close to 0% nodes running Classic - not 25% (and many more addable at the drop of a hat).
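For reference, the rule in dispute really is that small - one constant and one comparison at validation time. A minimal Python rendering of its shape (the real logic lives in Bitcoin's C++ serialization/validation code):

```python
MAX_BLOCK_SIZE = 1_000_000  # bytes -- Satoshi's 2010 anti-spam constant

def block_size_ok(serialized_block: bytes) -> bool:
    # The entire "1 MB limit" reduces to this comparison; Classic raised the
    # constant (to 2 MB), Unlimited makes it a user/miner-configurable setting.
    return len(serialized_block) <= MAX_BLOCK_SIZE
```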
This vote is ongoing.
This "voting" is not like a normal vote in a national election, which is over in one day.
Unfortunately for Core/Blockstream, the "voting" for Classic and against Core is actually a two-year-long referendum.
It is still ongoing, and it can rapidly swing in favor of Classic at any time between now and Classic's install-by date (around January 1, 2018 I believe) - at any point when the market decides that it needs and wants bigger blocks (ie, due to a congestion crisis).
You know this, Adam Back knows this, Austin Hill knows this, and some of your brainwashed supporters on censored forums probably know this too.
This is probably the main reason why you're all so freaked out and feel the need to even respond to us unwashed bigger-block supporters, instead of simply ignoring us.
This is probably the main reason why Adam Back feels the need to keep flying around the world, holding meetings with miners, making PowerPoint presentations in English and Chinese, and possibly also making secret deals behind the scenes.
This is also why Theymos feels the need to censor.
And this is perhaps also why your brainwashed supporters from censored forums feel the need to constantly make their juvenile, content-free, drive-by comments (and perhaps also why you evidently feel the need to privately message me your own comments now).
Because, once again, for the umpteenth time in years, you've seen that we are not going away.
Every day you get another worrisome, painful reminder from us that Classic is still running on 25% of "your" network.
And every day you get another worrisome, painful reminder that Classic could easily jump to 75% in a matter of days - as soon as investors see their $7 billion wealth starting to evaporate when the network goes into a congestion crisis due to your obstructionism and insistence on artificially small 1 MB blocks.
If your code were good enough to stand on its own, then none of Core's globetrotting and campaigning and censorship would be necessary.
But you know, and everyone else knows, that your cripple-code does not include simple and safe scaling - and the competing code (Classic, Unlimited) does.
So your code cannot stand on its own - and that's why you and your supporters feel that it's necessary to keep up the censorship and the lies and the snark. It's shameful that a smart coder like you would be involved with such tactics.
Oppressive regimes always last longer than everyone expects - but they also collapse faster than anyone expects.
We already have interesting historical precedents showing how grassroots resistance to centralized oppression and obstructionism tends to work out in the end. The phenomenon is two-fold:
  • The oppression usually drags on much longer than anyone expects; and
  • The liberation usually happens quite abruptly - much faster than anyone expects.
The Berlin Wall stayed up much longer than everyone expected - but it also came tumbling down much faster than everyone expected.
Examples of oppressive regimes that held on surprisingly long, and collapsed surprisingly fast, are rather common - eg, the collapse of the Berlin Wall, or the collapse of the Soviet Union.
(Both examples are actually quite germane to the case of Blockstream/Core/Theymos - as those despotic regimes were also held together by the fragile chewing gum and paper clips of denialism and censorship, and the brainwashed but ultimately complacent and fragile yes-men that inevitably arise in such an environment.)
The Berlin Wall did indeed seem like it would never come down. But the grassroots resistance against it was always there, in the wings, chipping away at the oppression, trying to break free.
And then when it did come down, it happened in a matter of days - much faster than anyone had expected.
That's generally how these things tend to go:
  • oppression and obstructionism drag on forever, and the people oppressing freedom and progress erroneously believe that Core/Blockstream is "winning" (in this case: Blockstream/Core and you and Adam and Austin - and the clueless yes-men on censored forums like r/bitcoin who mindlessly support you, and the obedient Chinese miners who, thus far, have apparently been too polite to oppose you);
  • then one fine day, the market (or society) mysteriously and abruptly decides that "enough is enough" - and the tsunami comes in and washes the oppressors away in the blink of an eye.
So all these non-entities with their drive-by comments on these threads and their premature gloating and triumphalism are irrelevant in the long term.
The only thing that really matters is investors and users - who are continually applying grassroots pressure on the network, demanding increased capacity to keep the transactions flowing (and the price rising).
And then one day: the Berlin Wall comes tumbling down - or in the case of Bitcoin: a bunch of mining pools have to switch to Classic, and they will switch so fast it will make your head spin.
Because there will be an emergency congestion crisis where the network is causing the price to crash and threatening to destroy $7 billion in investor wealth.
So it is understandable that your supporters might sometimes prematurely gloat, or you might feel the need to try to comment publicly or privately, or Adam might feel the need to jet around the world.
Because a large chunk of people have rejected your code.
And because many more can and will - and they'll do so in the blink of an eye.
Classic is still out there, "waiting in the wings", ready to be installed, whenever the investors tell the miners that it is needed.
Fortunately for big-block supporters, in this "election", the polls don't stay open for just one day, like in national elections.
The voting for Classic is on-going - it runs for two years. It is happening now, and it will continue to happen until around January 1, 2018 (which is when Classic-as-an-option has been set to officially "expire").
To make a weird comparison with American presidential politics: It's kinda like if either Hillary or Trump were already in office - but meanwhile there was also an ongoing election (where people could change their votes as often as they want), and the day people get fed up with the incompetent incumbent, they can throw them out (and install someone like Bernie instead) in the blink of an eye.
So while the inertia does favor the incumbent (because people are lazy: it takes them a while to become informed, or fed up, or panicked), this kind of long-running, basically never-ending election favors the insurgent (because once the incumbent visibly screws up, the insurgent gets adopted - permanently).
Everyone knows that Satoshi explicitly defined Bitcoin to be a voting system, in and of itself. Not only does the network vote on which valid block to append next to the chain - the network also votes on the very definition of what a "valid block" is.
Go ahead and re-read the anonymous PDF that was recently posted on the subject of how you are dangerously centralizing Bitcoin by trying to prevent any votes from taking place:
https://np.reddit.com/btc/comments/4hxlqu/hoh_a_warning_regarding_the_onset_of_centralised/
The insurgent (Classic, Unlimited) is right (they maximally use available bandwidth) - while the incumbent (Core) is wrong (it needlessly throws bandwidth out the window, choking the network, suppressing volume, and hurting the price).
And you, and Adam, and Austin Hill - and your funders from the Bilderberg Group - must be freaking out that there is no way you can get rid of Classic (due to the open-source nature of cryptocurrency and Bitcoin).
Cripple-code will always be rejected by the network.
Classic is already running on about 20%-25% of nodes, and there is nothing you can do to stop it - except commenting on these threads, or having guys like Adam flying around the world doing PowerPoints, etc.
Everything you do is irrelevant when compared against billions of dollars in current wealth (and possibly trillions more down the road) which needs and wants and will get bigger blocks.
You guys no longer even make technical arguments against bigger blocks - because there are none: Classic's codebase is 99% the same as Core, except with bigger blocks.
So when we do finally get bigger blocks, we will get them very, very fast: because it only takes a few hours to upgrade the software to keep all the good crypto and networking code that Core/Blockstream wrote - while tossing that single line of 1 MB "max blocksize" cripple-code from Core/Blockstream into the dustbin of history - just like people did with the Berlin Wall.
submitted by ydtm to r/btc

Preventing double-spends is an "embarrassingly parallel" massive search problem - like Google, SETI@home, Folding@home, or PrimeGrid. BUIP024 "address sharding" is similar to Google's MapReduce & Berkeley's BOINC grid computing - "divide-and-conquer" providing unlimited on-chain scaling for Bitcoin.

TL;DR: Like all other successful projects involving "embarrassingly parallel" search problems in massive search spaces, Bitcoin can and should - and inevitably will - move to a distributed computing paradigm based on successful "sharding" architectures such as Google Search (based on Google's MapReduce algorithm), or SETI@home, Folding@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture) - which use simple mathematical "decompose" and "recompose" operations to break big problems into tiny pieces, providing virtually unlimited scaling (plus fault tolerance) at the logical / software level, on top of possibly severely limited (and faulty) resources at the physical / hardware level.
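In outline, a decompose/recompose double-spend check really is that simple. A toy MapReduce-style sketch (the shard count, routing rule, and data layout are all invented for illustration):

```python
from collections import defaultdict
import hashlib

SHARDS = 16  # illustrative; a real deployment would size this to its hardware

def shard_of(outpoint: str) -> int:
    # "Decompose": deterministically route each spent output to one shard.
    return hashlib.sha256(outpoint.encode()).digest()[0] % SHARDS

def has_double_spend(spends: list) -> bool:
    # Map step: bucket spends per shard (each bucket could live on a
    # different machine, holding only its slice of the spend history).
    buckets = defaultdict(list)
    for outpoint in spends:
        buckets[shard_of(outpoint)].append(outpoint)
    # Reduce / "recompose" step: each shard checks duplicates over its own
    # small slice, then the tiny per-shard verdicts are combined.
    return any(len(b) != len(set(b)) for b in buckets.values())

print(has_double_spend(["txA:0", "txB:1", "txA:0"]))  # True -- same output spent twice
```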
The discredited "heavy" (and over-complicated) design philosophy of centralized "legacy" dev teams such as Core / Blockstream (requiring every single node to download, store and verify the massively growing blockchain, and pinning their hopes on non-existent off-chain vaporware such as the so-called "Lightning Network" which has no mathematical definition and is missing crucial components such as decentralized routing) is doomed to failure, and will be out-competed by simpler on-chain "lightweight" distributed approaches such as distributed trustless Merkle trees or BUIP024's "Address Sharding" emerging from independent devs such as u/thezerg1 (involved with Bitcoin Unlimited).
No one in their right mind would expect Google's vast search engine to fit entirely on a Raspberry Pi behind a crappy Internet connection - and no one in their right mind should expect Bitcoin's vast financial network to fit entirely on a Raspberry Pi behind a crappy Internet connection either.
Any "normal" (ie, competent) company with $76 million to spend could provide virtually unlimited on-chain scaling for Bitcoin in a matter of months - simply by working with devs who would just go ahead and apply the existing obvious mature successful tried-and-true "recipes" for solving "embarrassingly parallel" search problems in massive search spaces, based on standard DISTRIBUTED COMPUTING approaches like Google Search (based on Google's MapReduce algorithm), or [email protected], [email protected], or PrimeGrid (based on Berkeley's BOINC grid computing architecture). The fact that Blockstream / Core devs refuse to consider any standard DISTRIBUTED COMPUTING approaches just proves that they're "embarrassingly stupid" - and the only way Bitcoin will succeed is by routing around their damage.
Proven, mature sharding architectures like the ones powering Google Search, SETI@home, Folding@home, or PrimeGrid will allow Bitcoin to achieve virtually unlimited on-chain scaling, with minimal disruption to the existing Bitcoin network topology and mining and wallet software.
Longer Summary:
People who argue that "Bitcoin can't scale" - because it involves major physical / hardware requirements (lots of processing power, upload bandwidth, storage space) - are at best simply misinformed or incompetent - or at worst outright lying to you.
Bitcoin mainly involves searching the blockchain to prevent double-spends - and so it is similar to many other projects involving "embarrassingly parallel" searching in massive search spaces - like Google Search, SETI@home, Folding@home, or PrimeGrid.
But there's a big difference between those long-running wildly successful massively distributed infinitely scalable parallel computing projects, and Bitcoin.
Those other projects do their data storage and processing across a distributed network. But Bitcoin (under the misguided "leadership" of Core / Blockstream devs) insists on a fatally flawed design philosophy where every individual node must be able to download, store and verify the system's entire data structure. And it's even worse than that - they want to let the least powerful nodes in the system dictate the resource requirements for everyone else.
Meanwhile, those other projects are all based on some kind of "distributed computing" involving "sharding". They achieve massive scaling by adding a virtually unlimited (and fault-tolerant) logical / software layer on top of the underlying resource-constrained / limited physical / hardware layer - using approaches like Google's MapReduce algorithm or Berkeley's Open Infrastructure for Network Computing (BOINC) grid computing architecture.
This shows that it is a fundamental error to continue insisting on viewing an individual Bitcoin "node" as the fundamental "unit" of the Bitcoin network. Coordinated distributed pools already exist for mining the blockchain - and eventually coordinated distributed trustless architectures will also exist for verifying and querying it. Any architecture or design philosophy where a single "node" is expected to be forever responsible for storing or verifying the entire blockchain is the wrong approach, and is doomed to failure.
The most well-known example of this doomed approach is Blockstream / Core's "roadmap" - which is based on two disastrously erroneous design requirements:
  • Core / Blockstream erroneously insist that the entire blockchain must always be downloadable, storable and verifiable on a single node, as dictated by the least powerful nodes in the system (eg, u/bitusher in Costa Rica, or u/Luke-Jr in the underserved backwoods of Florida); and
  • Core / Blockstream support convoluted, incomplete off-chain scaling approaches such as the so-called "Lightning Network" - which lacks a mathematical foundation, and also has some serious gaps (eg, no solution for decentralized routing).
Instead, the future of Bitcoin will inevitably be based on unlimited on-chain scaling, where all of Bitcoin's existing algorithms and data structures and networking are essentially preserved unchanged / as-is - but they are distributed at the logical / software level using sharding approaches such as u/thezerg1's BUIP024 or distributed trustless Merkle trees.
These kinds of sharding architectures will allow individual nodes to use a minimum of physical resources to access a maximum of logical storage and processing resources across a distributed network with virtually unlimited on-chain scaling - where every node will be able to use and verify the entire blockchain without having to download and store the whole thing - just like Google Search, Folding@home, SETI@home, PrimeGrid and other successful distributed sharding-based projects have already been doing for years.
Details:
Sharding, which has been so successful in many other areas, is a topic that keeps resurfacing in various shapes and forms among independent Bitcoin developers.
The highly successful track record of sharding architectures on other projects involving "embarrassingly parallel" massive search problems provides convincing evidence that sharding architectures will also work for Bitcoin. Those projects harness resource-constrained machines at the physical level into a distributed network at the logical level, providing fault tolerance and virtually unlimited scaling while searching for web pages, interstellar radio signals, protein sequences, or prime numbers in massive search spaces up to hundreds of terabytes in size. Bitcoin has the same shape: it also requires virtually unlimited on-chain scaling, searching the ever-expanding blockchain for previous "spends" from an existing address before appending a new transaction from that address to the blockchain.
Below are some links involving proposals for sharding Bitcoin, plus more discussion and related examples.
BUIP024: Extension Blocks with Address Sharding
https://np.reddit.com/btc/comments/54afm7/buip024_extension_blocks_with_address_sharding/
Why aren't we as a community talking about Sharding as a scaling solution?
https://np.reddit.com/Bitcoin/comments/3u1m36/why_arent_we_as_a_community_talking_about/
(There are some detailed, partially encouraging comments from u/petertodd in that thread.)
[Brainstorming] Could Bitcoin ever scale like BitTorrent, using something like "mempool sharding"?
https://np.reddit.com/btc/comments/3v070a/brainstorming_could_bitcoin_ever_scale_like/
[Brainstorming] "Let's Fork Smarter, Not Harder"? Can we find some natural way(s) of making the scaling problem "embarrassingly parallel", perhaps introducing some hierarchical (tree) structures or some natural "sharding" at the level of the network and/or the mempool and/or the blockchain?
https://np.reddit.com/btc/comments/3wtwa7/brainstorming_lets_fork_smarter_not_harder_can_we/
"Braiding the Blockchain" (32 min + Q&A): We can't remove all sources of latency. We can redesign the "chain" to tolerate multiple simultaneous writers. Let miners mine and validate at the same time. Ideal block time / size / difficulty can become emergent per-node properties of the network topology
https://np.reddit.com/btc/comments/4su1gf/braiding_the_blockchain_32_min_qa_we_cant_remove/
Some kind of sharding - perhaps based on address sharding as in BUIP024, or based on distributed trustless Merkle trees as proposed earlier by u/thezerg1 - is very likely to turn out to be the simplest and safest approach towards massive on-chain scaling.
A thought experiment showing that we already have most of the ingredients for a kind of simplistic "instant sharding"
A simplistic thought experiment can be used to illustrate how easy it could be to do sharding - with almost no changes to the existing Bitcoin system.
Recall that Bitcoin addresses and keys are composed from an alphabet of 58 characters. So, in this simplified thought experiment, we will outline a way to add a kind of "instant sharding" within the existing system - by using the last character of each address in order to assign that address to one of 58 shards.
(Maybe you can already see where this is going...)
Similar to vanity address generation, a user who wants to receive Bitcoins would be required to generate 58 different receiving addresses (each ending with a different character) - and, similarly, miners could be required to pick one of the 58 shards to mine on.
Then, when a user wanted to send money, they would have to look at the last character of their "send from" address - and also select a "send to" address ending in the same character - and presto! we already have a kind of simplistic "instant sharding". (And note that this part of the thought experiment would require only the "softest" kind of soft fork: indeed, we haven't changed any of the code at all, but instead we simply adopted a new convention by agreement, while using the existing code.)
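To make the convention concrete, here is a minimal sketch in Python (purely illustrative - this code is not part of any Bitcoin client, and the function name is invented for this example) of the shard-assignment rule: the shard is simply the position of an address's final character in the Base58 alphabet.

    # Base58 alphabet used by Bitcoin addresses (no 0, O, I or l).
    BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def shard_for_address(address: str) -> int:
        """Assign an address to one of 58 shards, keyed on its final character."""
        return BASE58_ALPHABET.index(address[-1])

    # A transaction whose "send from" and "send to" addresses end in the same
    # character stays entirely within one shard under this convention.
    print(shard_for_address("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"))  # -> 33

Under this convention, a wallet would check shard_for_address on both ends of a payment before broadcasting it, and a miner mining shard 33 would simply ignore transactions whose addresses map elsewhere.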
Of course, this simplistic "instant sharding" example would still need a few more features in order to be complete - but they'd all be fairly straightforward to provide:
  • A transaction can actually send from multiple addresses, to multiple addresses - so the approach of simply looking at the final character of a single (receive) address would not be enough to instantly assign a transaction to a particular shard. But a slightly more sophisticated decision criterion could easily be developed - and computed in code - to assign every transaction to a particular shard, based on the "from" and "to" addresses in the transaction (a minimal sketch follows this list). The basic concept from the "simplistic" example would remain the same: sharding the network based on some characteristic of transactions.
  • If we had 58 shards, then the mining reward would have to be decreased to 1/58 of what it currently is - and the mining hash power on each of the shards would also end up being roughly 1/58 of what it is now. In general, many people might agree that spreading mining rewards among more people would actually be a good thing (instead of the current situation, where mining is done by about 8 entities). Also, network hashing power has been growing insanely for years, so we probably already have far more than enough to secure the network - after all, Bitcoin was secure back when network hash power was 1/58 of what it is now.
  • This simplistic example does not handle cases where you need to do "cross-shard" transactions. But it should be feasible to implement such a thing. The various proposals from u/thezerg1 such as BUIP024 do deal with "cross-shard" transactions.
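As promised above, here is one possible decision criterion, sketched in Python (an invented illustration - BUIP024 specifies its own rules, which differ from this): key the whole transaction on the lexicographically smallest address it touches, so that every node independently computes the same shard assignment for multi-input, multi-output transactions.

    BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def shard_for_tx(from_addresses, to_addresses, n_shards=58):
        """Deterministically assign a whole transaction to a single shard."""
        anchor = min(list(from_addresses) + list(to_addresses))  # same on every node
        return BASE58_ALPHABET.index(anchor[-1]) % n_shards

    # A multi-input, multi-output transaction still maps to exactly one shard:
    print(shard_for_tx(
        ["1BoatSLRHtKNngkdXEeobR76b53LETtpyT", "1dice8EMZmqKvrGE4Qc9bUFf9PX3xaYDp"],
        ["1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"],
    ))  # -> 33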
(Also, the fact that a simplified address-based sharding mechanism can be outlined in just a few paragraphs as shown here suggests that this might be "simple and understandable enough to actually work" - unlike something such as the so-called "Lightning Network", which is actually just a catchy-sounding name with no clearly defined mechanics or mathematics behind it.)
Addresses are plentiful, they can be generated locally, and you can generate addresses satisfying a certain pattern (eg, ending in a certain character) the same way people already generate vanity addresses. So imposing a "convention" where the "send" and "receive" addresses must end in the same character (and where a miner only mines transactions in that shard) would be easy to understand and follow.
Similarly, the earlier solution proposed by u/thezerg1, involving distributed trustless Merkle trees, is easy to understand: you'd just be distributing the Merkle tree across multiple nodes, while still preserving its immutability guarantees.
Such approaches don't really change much about the actual system itself. They preserve the existing system, and just split its data structures into multiple pieces, distributed across the network. As long as we have the appropriate operators for decomposing and recomposing the pieces, then everything should work the same - but more efficiently, with unlimited on-chain scaling, and much lower resource requirements.
The examples below show how these kinds of "sharding" approaches have already been implemented successfully in many other systems.
Massive search is already efficiently performed with virtually unlimited scaling using divide-and-conquer / decompose-and-recompose approaches such as MapReduce and BOINC.
Every time you do a Google search, you're using Google's MapReduce algorithm to solve an embarrassingly parallel problem.
And distributed computing grids using the Berkeley Open Infrastructure for Network Computing (BOINC) are constantly setting new records searching for protein combinations, prime numbers, or radio signals from possible intelligent life in the universe.
We all use Google to search hundreds of terabytes of data on the web and get results in a fraction of a second - using cheap "commodity boxes" on the server side, and possibly using limited bandwidth on the client side - with fault tolerance to handle crashing servers and dropped connections.
Other examples are Folding@home, SETI@home and PrimeGrid - involving searching massive search spaces for protein sequences, interstellar radio signals, or prime numbers hundreds of thousands of digits long. Each of these examples uses sharding to decompose a giant search space into smaller sub-spaces which are searched separately in parallel and then the resulting (sub-)solutions are recomposed to provide the overall search results.
It seems obvious to apply this tactic to Bitcoin - searching the blockchain for existing transactions involving a "send" from an address, before appending a new "send" transaction from that address to the blockchain.
Some people might object that those systems are different from Bitcoin.
But we should remember that preventing double-spends (the main thing that Bitcoin does) is, after all, an embarrassingly parallel massive search problem - and all of these other systems also involve embarrassingly parallel massive search problems.
The mathematics of Google's MapReduce and Berkeley's BOINC is simple, elegant, powerful - and provably correct.
Google's MapReduce and Berkeley's BOINC have demonstrated that in order to provide massive scaling for efficient searching of massive search spaces, all you need is...
  • an appropriate "decompose" operation,
  • an appropriate "recompose" operation,
  • the necessary coordination mechanisms
...in order to distribute a single problem across multiple, cheap, fault-tolerant processors.
This allows you to decompose the problem into tiny sub-problems, solving each sub-problem to provide a sub-solution, and then recompose the sub-solutions into the overall solution - gaining virtually unlimited scaling and massive efficiency.
The only "hard" part involves analyzing the search space in order to select the appropriate DECOMPOSE and RECOMPOSE operations which guarantee that recomposing the "sub-solutions" obtained by decomposing the original problem is equivalent to the solving the original problem. This essential property could be expressed in "pseudo-code" as follows:
  • (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE)
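Here is a toy demonstration of that property in Python (a sketch under obvious simplifying assumptions - the "search" is just a filter over a list of integers, standing in for a scan of the blockchain):

    def decompose(space, n_shards):
        """Split the search space into n_shards sub-spaces (round-robin)."""
        return [space[i::n_shards] for i in range(n_shards)]

    def sub_solve(sub_space, predicate):
        """Solve the search problem on a single shard."""
        return [x for x in sub_space if predicate(x)]

    def recompose(sub_solutions):
        """Merge the per-shard sub-solutions into the overall solution."""
        return sorted(x for sub in sub_solutions for x in sub)

    def solve(space, predicate):
        """Reference single-machine solution, for comparison."""
        return [x for x in space if predicate(x)]

    space = list(range(1_000_000))
    wanted = lambda x: x % 99_991 == 0   # the "pattern" we are searching for

    shards = decompose(space, 58)
    assert recompose(sub_solve(s, wanted) for s in shards) == solve(space, wanted)

The 58 sub-solves are independent, which is exactly what makes the problem "embarrassingly parallel": each shard could run on a different machine, with no communication until the final RECOMPOSE step.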
Selecting the appropriate DECOMPOSE and RECOMPOSE operations (and implementing the inter-machine communication coordination) can be somewhat challenging, but it's certainly doable.
In fact, as mentioned already, these things have already been done in many distributed computing systems. So there's hardly any "original" work to be done in this case. All we need to focus on now is translating the existing single-processor architecture of Bitcoin to a distributed architecture, adopting the mature, proven, efficient "recipes" provided by the many successful distributed systems already up and running, such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture).
That's what any "competent" company with $76 million to spend would have done already: simply work with some devs who know how to implement open-source distributed systems, and focus on adapting Bitcoin's particular data structures (Merkle trees, hashed chains) to a distributed environment. That's a realistic roadmap that any team of decent programmers with distributed computing experience could implement in a few months, and any decent managers could roll out on a pre-determined schedule - instead of all the broken promises, missed deadlines, non-existent vaporware and pathetic excuses we've been getting from the incompetent losers and frauds involved with Core / Blockstream.
ASIDE: MapReduce and BOINC are based on math - but the so-called "Lightning Network" is based on wishful thinking involving kludges on top of workarounds on top of hacks - which is how you can tell that LN will never work.
Once you have succeeded in selecting the appropriate mathematical DECOMPOSE and RECOMPOSE operations, you get simple massive scaling - and it's also simple for anyone to verify that these operations are correct - often in about a half-page of math and code.
An example of this kind of elegance and brevity (and provable correctness) involving compositionality can be seen in this YouTube clip by the accomplished mathematician Lucius Gregory Meredith, presenting some operators for scaling Ethereum - in just a half page of code:
https://youtu.be/uzahKc_ukfM?t=1101
Conversely, if you fail to select the appropriate mathematical DECOMPOSE and RECOMPOSE operations, then you end up with a convoluted mess of wishful thinking - like the "whitepaper" for the so-called "Lightning Network", which is just a cool-sounding name with no actual mathematics behind it.
The LN "whitepaper" is an amateurish, non-mathematical meandering mishmash of 60 pages of "Alice sends Bob" examples involving hacks on top of workarounds on top of kludges - also containing a fatal flaw (a lack of any proposed solution for doing decentralized routing).
The disaster of the so-called "Lightning Network" - involving adding never-ending kludges on top of hacks on top of workarounds (plus all kinds of "timing" dependencies) - is reminiscent of the "epicycles" which were desperately added in a last-ditch attempt to make Ptolemy's "geocentric" system work - based on the incorrect assumption that the Sun revolved around the Earth.
This is how you can tell that the approach of the so-called "Lightning Network" is simply wrong, and it would never work - because it fails to provide appropriate (and simple, and provably correct) mathematical DECOMPOSE and RECOMPOSE operations in less than a single page of math and code.
Meanwhile, sharding approaches based on a DECOMPOSE and RECOMPOSE operation are simple and elegant - and "functional" (ie, they don't involve "procedural" timing dependencies like keeping your node running all the time, or closing out your channel before a certain deadline).
Bitcoin only has 6,000 nodes - but the leading sharding-based projects have over 100,000 nodes, with no financial incentives.
Many of these sharding-based projects have many more nodes than the Bitcoin network.
The Bitcoin network currently has about 6,000 nodes - even though there are financial incentives for running a node (ie, verifying your own Bitcoin balance).
Folding@home and SETI@home each have over 100,000 active users - even though these projects don't provide any financial incentives. This higher number of users might be due in part to the low resource demands of these volunteer computing projects, which are all based on sharding the data set.
Folding@home
As part of the client-server network architecture, the volunteered machines each receive pieces of a simulation (work units), complete them, and return them to the project's database servers, where the units are compiled into an overall simulation.
In 2007, Guinness World Records recognized Folding@home as the most powerful distributed computing network. As of September 30, 2014, the project has 107,708 active CPU cores and 63,977 active GPUs for a total of 40.190 x86 petaFLOPS (19.282 native petaFLOPS). At the same time, the combined efforts of all distributed computing projects under BOINC totals 7.924 petaFLOPS.
SETI@home
Using distributed computing, SETI@home sends millions of chunks of data to be analyzed off-site by home computers, and then has those computers report the results. Thus what appears to be an onerous problem in data analysis is reduced to a reasonable one by aid from a large, Internet-based community of borrowed computer resources.
Observational data are recorded on 2-terabyte SATA hard disk drives at the Arecibo Observatory in Puerto Rico, each holding about 2.5 days of observations, which are then sent to Berkeley. Arecibo does not have a broadband Internet connection, so data must go by postal mail to Berkeley. Once there, it is divided, in both time and frequency domains, into work units of 107 seconds of data, or approximately 0.35 megabytes (350 kilobytes or 350,000 bytes), which overlap in time but not in frequency. These work units are then sent from the SETI@home server over the Internet to personal computers around the world to analyze.
Data is merged into a database using SETI@home computers in Berkeley.
The SETI@home distributed computing software runs either as a screensaver or continuously while a user works, making use of processor time that would otherwise be unused.
Active users: 121,780 (January 2015)
PrimeGrid
PrimeGrid is a distributed computing project for searching for prime numbers of world-record size. It makes use of the Berkeley Open Infrastructure for Network Computing (BOINC) platform.
Active users 8,382 (March 2016)
MapReduce
A MapReduce program is composed of a Map() procedure (method) that performs filtering and sorting (such as sorting students by first name into queues, one queue for each name) and a Reduce() method that performs a summary operation (such as counting the number of students in each queue, yielding name frequencies).
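In Python, that students-and-queues example looks like this (a self-contained single-process sketch - in a real MapReduce deployment, the map and reduce steps run on many machines, with the framework doing the grouping in between):

    from collections import defaultdict

    def map_phase(students):
        """Map(): emit a (first_name, 1) pair for each student."""
        for name in students:
            yield (name, 1)

    def shuffle(pairs):
        """Group mapped pairs by key - one "queue" per name."""
        queues = defaultdict(list)
        for name, count in pairs:
            queues[name].append(count)
        return queues

    def reduce_phase(queues):
        """Reduce(): summarize each queue - here, count its entries."""
        return {name: sum(counts) for name, counts in queues.items()}

    students = ["Alice", "Bob", "Alice", "Carol", "Bob", "Alice"]
    print(reduce_phase(shuffle(map_phase(students))))
    # {'Alice': 3, 'Bob': 2, 'Carol': 1}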
How can we go about developing sharding approaches for Bitcoin?
We have to identify a part of the problem which is in some sense "invariant" or "unchanged" under the operations of DECOMPOSE and RECOMPOSE - and we also have to develop a coordination mechanism which orchestrates the DECOMPOSE and RECOMPOSE operations among the machines.
The simplistic thought experiment above outlined an "instant sharding" approach where we would agree upon a convention where the "send" and "receive" address would have to end in the same character - instantly providing a starting point illustrating some of the mechanics of an actual sharding solution.
BUIP024 involves address sharding and deals with the additional features needed for a complete solution - such as cross-shard transactions.
And distributed trustless Merkle trees would involve storing Merkle trees across a distributed network - which would provide the same guarantees of immutability, while drastically reducing storage requirements.
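A minimal Python sketch shows why distributing a Merkle tree preserves trustlessness: a node holding only one leaf plus its log(n) sibling hashes can verify that leaf against the root, without storing the rest of the tree. (This is ordinary Merkle-branch verification - the same mechanism SPV wallets already use - offered here as intuition, not as a reconstruction of u/thezerg1's specific proposal.)

    import hashlib

    def h(data: bytes) -> bytes:
        """Bitcoin-style double SHA-256."""
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def merkle_root(leaves):
        """Compute the Merkle root of a list of leaf hashes."""
        level = list(leaves)
        while len(level) > 1:
            if len(level) % 2:            # Bitcoin duplicates the last hash
                level.append(level[-1])   # on levels with an odd count
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def verify_branch(leaf, branch, root):
        """Check one leaf against the root using only its sibling hashes.

        branch is a list of (sibling_hash, sibling_is_right) pairs - all that
        a node in one shard needs in order to trust data held in another."""
        acc = leaf
        for sibling, sibling_is_right in branch:
            acc = h(acc + sibling) if sibling_is_right else h(sibling + acc)
        return acc == root

    leaves = [h(bytes([i])) for i in range(4)]
    root = merkle_root(leaves)
    branch_for_leaf_0 = [(leaves[1], True), (h(leaves[2] + leaves[3]), True)]
    assert verify_branch(leaves[0], branch_for_leaf_0, root)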
So how can we apply ideas like MapReduce and BOINC to providing massive on-chain scaling for Bitcoin?
First we have to examine the structure of the problem that we're trying to solve - and we have to try to identify how the problem involves a massive search space which can be decomposed and recomposed.
In the case of Bitcoin, the problem involves:
  • sequentializing (serializing) APPEND operations to a blockchain data structure
  • in such a way as to avoid double-spends
Can we view "preventing Bitcoin double-spends" as a "massive search space problem"?
Yes we can!
Just like Google efficiently searches hundreds of terabytes of web pages for a particular phrase (and Folding@home, SETI@home, PrimeGrid etc. efficiently search massive search spaces for other patterns), in the case of "preventing Bitcoin double-spends", all we're actually doing is searching a massive search space (the blockchain) in order to detect a previous "spend" of the same coin(s).
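Framed that way, the core of a sharded double-spend check is almost trivial to sketch in Python (illustrative only - the names are invented, and a real design such as BUIP024 must also handle cross-shard transactions, re-orgs, and validation of the transactions themselves): each spend-check is routed to the single shard that "owns" the coin, instead of searching the whole chain on every machine.

    import hashlib

    N_SHARDS = 58

    def shard_of(outpoint: str) -> int:
        """Deterministically map a coin ("txid:output_index") to a shard."""
        digest = hashlib.sha256(outpoint.encode()).digest()
        return int.from_bytes(digest, "big") % N_SHARDS

    # Each shard stores only its own slice of the spent-coin index.
    spent_index = [set() for _ in range(N_SHARDS)]

    def try_spend(outpoint: str) -> bool:
        """DECOMPOSE: search only the owning shard. RECOMPOSE: its answer is
        the network's answer, since each coin lives in exactly one shard."""
        shard = spent_index[shard_of(outpoint)]
        if outpoint in shard:
            return False          # previous spend found - double-spend rejected
        shard.add(outpoint)       # first spend - record it
        return True

    assert try_spend("exampletxid:0") is True
    assert try_spend("exampletxid:0") is False   # second attempt is rejected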
So, let's imagine how a possible future sharding-based architecture of Bitcoin might look.
We can observe that, in all cases of successful sharding solutions involving searching massive search spaces, the entire data structure is never stored / searched on a single machine.
Instead, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) create a "virtual" layer or grid across multiple machines - allowing the data structure to be distributed across all of them, and allowing users to search across all of them.
This suggests that requiring everyone to store 80 Gigabytes (and growing) of blockchain on their own individual machine should no longer be a long-term design goal for Bitcoin.
Instead, in a sharding environment, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) should allow everyone to only store a portion of the blockchain on their machine - while also allowing anyone to search the entire blockchain across everyone's machines.
This might involve something like BUIP024's "address sharding" - or it could involve something like distributed trustless Merkle trees.
In either case, it's easy to see that the basic data structures of the system would remain conceptually unaltered - but in the sharding approaches, these structures would be logically distributed across multiple physical devices, in order to provide virtually unlimited scaling while dramatically reducing resource requirements.
This would be the most "conservative" approach to scaling Bitcoin: leaving the data structures of the system conceptually the same - and just spreading them out more, by adding the appropriately defined mathematical DECOMPOSE and RECOMPOSE operators (used in successful sharding approaches), which can be easily proven to preserve the same properties as the original system.
Conclusion
Bitcoin isn't the only project in the world which is permissionless and distributed.
Other projects - the permissionless, decentralized Folding@home, SETI@home, and PrimeGrid (the latter two BOINC-based), as well as Google's (permissioned, centralized) MapReduce-based search engine - have already achieved unlimited scaling by providing simple mathematical DECOMPOSE and RECOMPOSE operations (and coordination mechanisms) to break big problems into smaller pieces - without changing the properties of the problems or solutions. This provides massive scaling while dramatically reducing resource requirements - with several projects attracting over 100,000 nodes, far more than Bitcoin's mere 6,000 nodes - without even offering any of Bitcoin's financial incentives.
Certain "legacy" Bitcoin development teams such as Blockstream / Core have been neglecting sharding-based approaches to massive on-chain scaling - perhaps because their business models are based on misguided off-chain scaling approaches involving radical changes to Bitcoin's current successful network architecture, or perhaps because their owners, such as AXA and PwC, don't want a counterparty-free new asset class to succeed and destroy their debt-based fiat wealth. But emerging proposals from independent developers suggest that on-chain scaling for Bitcoin will be based on proven sharding architectures such as MapReduce and BOINC - so we should pay more attention to these innovative, independent developers pursuing this important and promising line of research into sharding solutions for virtually unlimited on-chain Bitcoin scaling.
submitted by ydtm to btc

The owners of Blockstream are spending $75 million to do a "controlled demolition" of Bitcoin by manipulating the Core devs & the Chinese miners. This is cheap compared to the trillions of dollars spent on the wars on Iraq & Libya - countries which also defied the Fed / PetroDollar / BIS private central banking cartel.

At this point, that's really the simplest "Occam's razor" explanation for Blockstream's "irrational" behavior.
Once you let go of your irrational belief that Blockstream's owners actually want to get a "return" on their $75 million investment from "innovations" such as sidechains and the Lightning Network (LN) - only then will you be able to see that Blockstream's apparently "irrational" behavior is actually perfectly rational.
They say their goal is to "get rich" from LN. And if you believe that, I have a Dogecoin I'd like to sell you.
What are the real goals of Blockstream's owners?
Blockstream's owners don't give a fuck about the Rube Goldberg vaporware which some focus group christened "the Lightning Network". That name is just there to placate the masses of noobs who congregate on /bitcoin.
The owners of Blockstream are laughing at Adam Back as he continues to labor in isolation, the stereotypical math PhD who is clueless about economics, toiling away creating a slow, overpriced, centralized "level 2" payment layer on top of Bitcoin - a complicated contraption which may never work. They have neutralized him - but meanwhile, he thinks he's a rock star now, as "CEO of Blockstream". Little does he know he is the worst "collaborator" of all.
Investors are risk-averse
If Blockstream's owners really wanted to get rich from LN, do you really think they would freeze the "max blocksize" at 1 MB for the next year, when this 1-year freeze obviously risks destroying Bitcoin itself (along with their investment)?
Investors are not stupid - and they are risk-averse. They know that if there's no Bitcoin, then there's no Lightning - so their $75 million investment would go out the window.
And all the "Core" devs have actually gone on the record stating (in their less-guarded moments, or before they signed their employment contracts with Blockstream) that 2 MB blocks would work fine - even 3-4 MB blocks. Empirical research by miners has shown that 3-4 MB blocks - or even bigger - would work fine right now.
So why aren't the Blockstream investors pressuring the Core devs to go to 2 MB now, to remove the risk of Bitcoin failing?
If Blockstream did the "rational" thing and agreed to 2 MB now, the price would shoot up, the community would heal, and innovation would start happening again. Bitcoin would prosper, and Blockstream's investors would have a good chance at making a "return" on their investment.
For some reason, Blockstream's investors are trying to stop all this from happening. So we have to look for a different explanation. If the owners of Blockstream don't want to get rich from the Lightning Network, then what do they really want?
The simplest explanation is that the real risk which Blockstream's investors are "averse" to is the possibility of trillions of dollars in legacy fiat suddenly plunging in relative value, if Bitcoin were to shoot to the moon. They're afraid they'll lose power if Bitcoin succeeds.
In order to provide some support for this radical but simple hypothesis, we have to dive into some pretty nasty and shadowy geopolitics.
What do the wars on Iraq and Syria, JPMorgan's naked short selling of silver, and the book "Confessions of an Economic Hit Man" all have in common?
Whenever a currency tries to compete with the Fed / Petrodollar / BIS [1] private central banking cartel, the legacy fiat power élite destroys that currency (if the currency has a central point of control - which Bitcoin does have: the Core devs, the Chinese miners, and Theymos).
[1] BIS = the Bank for International Settlements, often referred to as "the central bank of central banks"
Trillions of dollars were spent to take down the central banks of Iraq and Libya, because they defied the hegemony of the Fed / Petrodollar / BIS private central banking cartel.
https://duckduckgo.com/?q=ellen+brown+iraq+libya+bis
And while you're googling, you might want to look up whistleblower Andrew Maguire (who exposed how JPMorgan uses naked short selling to "dump" nonexistent silver in order to prevent the USDollar from collapsing).
https://duckduckgo.com/?q=andrew+maguire+jpmorgan
And you might also want to look up John Perkins, whose book "Confessions of an Economic Hit Man" is another major eye-opener about how "the Washington consensus" manages to rule the world by printing fiat backed by violence and justified by "experts" and propaganda.
https://duckduckgo.com/?q=john+perkins+confessions+economic+hit+man
That's just how the world works - although you have to do a bit of research to discover those unpleasant facts.
So for the legacy fiat power élite, $75 million to take down Bitcoin (and maintain their power) is chump change in comparison.
You all knew that "they" were going to try to destroy Bitcoin, didn't you?
Even Jamie Dimon practically admitted as much.
https://duckduckgo.com/?q=jamie+dimon+bitcoin
Did you really think they would be clumsy enough to try to ban it outright?
Private central bankers run this planet, and they have never hesitated to use their lethal combination of guns, debt and psyops to maintain their power. They pay for the wars, they keep people enslaved to debt, and they dumb down the population so nobody knows what's really going on.
Print up a trillion dollars here, kill a million people there, brainwash everyone with censorship and propaganda. That's their modus operandi.
So we shouldn't be surprised if they ruthlessly and covertly try to take down Bitcoin. They have the means and the motivation.
It was only a matter of time before they identified the three weakest centralized points in the Bitcoin system:
  • the Core devs,
  • the Chinese miners, and
  • Theymos (who controls the main online forums).
And so that's where they applied the pressure.
I'm sorry to be rude, but all three of those players listed above are idiot savants / sitting ducks up against the full spectrum of covert dirty tricks deployed by the legacy fiat power élite - whether it's money, ego-stroking, or pretending to go along with their crazy cypherpunk beliefs that Bitcoin will only prosper as long as it remains small enough to run a node on a dial-up Internet connection on a Raspberry Pi in Luke-Jr's basement.
So the simplest explanation is this: Blockstream is a "front company" which has been established for the purpose of performing a "controlled demolition" of Bitcoin.
So Satoshi messed up. He messed up by baking a 1 MB constant into the code at the last minute as a clumsy anti-spam kludge - which could unfortunately only be removed via a hard fork - and which the global legacy power élite have figured out how to retain via social engineering directed at clueless Core devs and clueless Chinese miners (and clueless forum moderators).
So why is the price still fairly stable?
Several observers have commented that the only way to liberate Bitcoin from the cartel of Chinese miners and Core/Blockstream devs is to crash the price.
And many other observers are puzzled that the price isn't crashing, now that Bitcoin is being strangled in its cradle by Blockstream.
Heck, I'm so paranoid, I wouldn't even put it past them to try to interfere with investors who might otherwise be trying to send a signal by "voting with their feet".
Well, this wouldn't be the first time that the Fed / PetroDollar / BIS private central banking cartel sent in the "plunge protection" team to artificially prop up their fragile, centralized, permissioned currency.
https://duckduckgo.com/?q=plunge+protection+team
Who knows, they could easily have printed up a few million dollars in phoney fiat and given it to players like Jamie Dimon or Blythe Masters who probably have access to the HFT (high frequency trading) tools to keep the price exactly where they want it, for as long as they want it. Manipulating an unregulated $6 billion market would be child's play for them.
The point is, we have no idea who is buying bitcoins at this price right now. Or what their motives are.
I know that if I were part of the legacy fiat power élite, this is exactly what I'd be doing now: buy off the devs, pressure the miners, encourage the censors, and play with the price - so nobody knows what the hell is going on. Prevent the price from crashing for the next year (so the community won't have a "smoking gun" to reject the Core devs and the Chinese miners)... and prevent it from going to the moon also (so the dollar won't look like it's crashing). Not too hard to do, especially if you have unlimited fiat at your disposal.
2016 is the perfect time to perform a "controlled demolition" on Bitcoin.
All the forces in the global economy are now aligned for a massive economic storm of epic proportions. Without Blockstream's interference, Bitcoin's price would be shooting to the moon right now, because it's the only digital asset class free of counterparty risk, compared to all the other garbage floating around in the system:
https://duckduckgo.com/?q=deutsche+bank+lehman
https://np.reddit.com/BitcoinMarkets/comments/45ogx7/daily_discussion_sunday_february_14_2016/d0015vf
https://duckduckgo.com/?q=china+capital+flight
https://duckduckgo.com/?q=NIRP+Negative+Interest+Rate+Policy
Bitcoin is one of the only safe harbors in this oncoming economic storm. So it should be skyrocketing right now - if there were no artificial constraints on its growth.
So if Blockstream were not doing a controlled demolition of Bitcoin right now by freezing the blocksize at 1 MB for the next year, then the Bitcoin price could easily go to 4,000 USD - instead of languishing around 400 USD.
In other words: the USDollar would be crashing 10-fold versus Bitcoin.
The only bulwark against Bitcoin rising 10x versus the USDollar is Blockstream's stranglehold on the Core devs and the Chinese miners.
Just like the only bulwark against precious metals rising 10x versus the USDollar right now is JPMorgan's naked short selling of phoney (paper) precious metals, mainly via the SLV ETF (exchange traded fund).
https://duckduckgo.com/?q=jpmorgan+naked+short+selling+slv
(Most informed estimates say that there is 100x more "fake" or "paper" gold and silver in existence, versus "physical" gold and silver. So it's easy for JPMorgan to suppress the silver price: just naked-short-sell "paper" silver. They do this as a service to the Fed, to prop up the dollar. And your tax dollars pay for this fraud.)
The silence of the devs
Isn't it strange how not a single Blockstream dev dares to "break ranks" on the 2 MB taboo?
This unanimous code of silence among Blockstream devs speaks volumes.
Devs on open-source projects like this (particularly ones which were founded on principles of "permissionless" "decentralization") would never maintain this kind of uniform code of developer silence - especially when their precious open-source project is on the verge of failing.
Most devs are rebels - especially Bitcoin devs - ready to break ranks at the drop of a hat, and propose their brilliant ideas to save the day.
But right now - utter silence.
This bizarre code of silence which we are now seeing from the "Core" devs must be the result of some major behind-the-scenes arm-twisting by the owners of Blockstream, who must have made it abundantly clear that any dev who attempts to provide a simple on-chain scaling solution will be severely punished - financially, legally and/or socially.
Blockstream has deliberately set Bitcoin on a suicide course right now - and all the devs there are silently complicit - and so are the Chinese miners who submissively bowed down to Blockstream's stalling "scaling" roadmap.
But I don't really blame the devs and the miners. I feel bad for them.
I'm not really "blaming" any Chinese miners for being used like this - nor am I really "blaming" devs such as Adam Back, Greg Maxwell, etc.
Nor do I really "blame" guys like Austin Hill.
And I even think guys like Theymos and Luke-Jr "mean well".
They're all just being played. They think they're doing the right thing. Their arguments are genuine and heart-felt. Wrong, but heart-felt. This is what makes them so dangerous - because they really sound sincere and convincing. This is why they are the perfect pawns for the owners of Blockstream to play like this.
Subtle coercion
We recently found out that they locked the Chinese miners in a room for 13 hours until 3 AM to force them to sign an "agreement" to never use any code from a competing Bitcoin implementation that would increase the blocksize.
https://np.reddit.com/btc/comments/46tv22/only_emperors_kings_and_dictators_demand_fealty/
Have you ever seen this kind of coercion in an open-source project - an open-source project founded on the principles of "permissionless" "decentralization" - where many of the founders were "cypherpunks"??
The miners and the devs - and Theymos - and guys like Austin Hill - all are passionate about Bitcoin, and they all believe they are doing "the right thing".
But they are being manipulated, without their knowledge, by the real power behind Blockstream.
Prisoners in a golden cage
Strange how we never get to hear what really goes on behind closed doors at Blockstream. We never get to see the PowerPoint decks, we never get to find out who said what. Blockstream's public messaging is tightly controlled.
If Bitcoin were to have a "core" dev team, it should have had something like the Mozilla Group, or the Tor Project - non-profits, who answer to the public, not to private investors. Instead we got Blockstream - a private company funded by some of the biggest players of the legacy fiat power élite. WTF?!?
If they wanted to develop sidechains and LN, then fine, they should be able to. But what they're really doing is radically changing Bitcoin itself - mainly by freezing growth at 1 MB blocks now, which is choking the system.
Despite all this, I still would not go so far as to say that the Core devs and the Chinese miners are really "traitors". At most, they are actually prisoners in a golden cage, who are not even really conscious of their own imprisonment. They're smart people - and in some ways, smart people are actually easier to fool, once you figure out what they believe in.
So this is what I really think the owners of Blockstream have done. They've figured out how to manipulate the Core devs and the Chinese miners - and they're happy that Theymos is playing along, censoring the main online forums - so they're able to move ahead with their plan to do a "controlled demolition" of Bitcoin, and it only cost them $75 million.
Centralization got us into this mess.
The only reason Bitcoin is vulnerable to this kind of "controlled demolition" being performed by the owners of Blockstream is because mining operations and dev teams are centralized - thus providing a single, vulnerable point where the legacy fiat power élite could easily deploy their full-spectrum attack.
We finally have a digital asset with no counterparty risk - and they want to take it away from us, so that we continue to depend on their debt-backed, violence-backed legacy fiat.
And they're able to do this because the Core devs and the Chinese miners and Theymos were such easy gullible centralized targets.
Decentralization will get us out.
If you are a miner or a dev, and if you want Bitcoin to survive, then you must go back to the principles of permissionless decentralization.
Go dark, release some code anonymously.
Release an internal Blockstream PowerPoint deck or some internal Blockstream emails to Wikileaks, exposing what the Blockstream investors are really up to.
Otherwise, Bitcoin is probably going to fail to realize its potential - and we'll have to wait a while for truly decentralized development (and mining, and forums) to possibly create a successor someday.
If you're a hodler, it would be great if such a phoenix rising from Bitcoin were a "spinoff" - ie, a coin bootstrapped off of the existing ledger (preserving existing wealth, while upgrading to a new protocol for appending new blocks).
https://bitcointalk.org/index.php?topic=563972.0
But who knows.
submitted by UndergroundNews to btc
