Presentation by Tim Beiko of the Ethereum Foundation, SmartCon 2022

[Applause] ...works in a post-merge world, so let me share my slides here.

Okay, so as all of you know, Ethereum just went through a pretty big transition to move from proof of work to proof of stake. So today I wanted to take some time to give some context behind that transition, walk you through exactly what happened, and give a picture of what Ethereum looks like today and how the network works.

To take a little step back and start from the beginning: Ethereum always had this desire to move to proof of stake. It was part of the initial roadmap even before the actual chain launch, and as soon as Ethereum was live, work shifted to trying to design the proof-of-stake algorithm that Ethereum could switch to. Around 2018 we had something which we almost went to production with, but ended up ditching for a few reasons. At a high level, the main reason to move away from it was that it was very closely coupled with the existing proof-of-work chain, and it brought in a bunch of technical constraints from that chain that made the system quite a bit worse. First, that proof-of-stake algorithm was designed as a special contract, a kind of enshrined smart contract on the network, which had different operational constraints than normal smart contracts, so it added all this extra logic in a semi-detached way from the network. The initial design also had quite a high minimum stake required to be a validator: 1,500 ETH. And at the same time as this research was happening, people were already thinking about how to scale Ethereum, and they were coming up with similar designs and hitting similar constraints for things like sharding. So around 2018 there was this realization that maybe this design is not great, and the thought was: what if we just designed a proof-of-stake, sharded Ethereum from scratch? What would that look
like if we had none of the constraints of the current proof-of-work chain? And this is where the Ethereum 2.0 roadmap came about. The idea was that you would redo everything from scratch and move all of Ethereum to this new system, which would be re-architected from the ground up and perform much better than the current proof-of-work chain. So when people talk about Ethereum 2.0 and the phase zero, phase one, phase two plan, this was the idea. In phase zero, you would launch the proof-of-stake chain separately from the existing system and first get that up and running, so you could make sure everything worked there. Then, instead of directly scaling computation on the network, we would launch shards which would exist in parallel and anchor back to the beacon chain, but they would only store data rather than execute transactions. And finally, once we were confident that the system was stable, we would turn on computation in those shards and get all the applications to migrate to the new Ethereum. This is the roadmap that was talked about a lot in 2018, 2019, and 2020.
And as you can see, we actually did do part of it: we did launch the beacon chain separately. But there were some problems with this; it still wasn't quite perfect. The biggest open question was: if we do phase zero and launch the beacon chain, and we do phase one and have these data shards, it was never quite clear how we would turn on computation in those shards and what that would look like. This also implied that we would need users to migrate from the current chain to another one: all the applications would have to be redeployed, people would have to move their funds over, and that would be quite complex. The other thing is it would require new software to execute all the transactions in those computation shards, and we know just from history how hard it is to write these Ethereum clients. We have some on proof of work, we have some for the beacon chain, and it's quite a lot of work to write and maintain these.

While these problems were showing up on the roadmap, there was another thing happening in the space, which was the appearance of rollups. The idea with rollups is that you can scale computation on Ethereum using layer twos rather than the protocol itself. With this emerging, there were a couple of refinements to the original Ethereum 2.0 roadmap that led us to the specification we ended up using for the merge. Early on, when there was this realization that it would take time to launch the beacon chain, deploy data shards, and turn on computation in those shards, Vitalik put out a proposal saying: what if we simply made the proof-of-work chain the first shard of Ethereum 2.0? This would get us the proof of stake, it would allow us to test the system, and we wouldn't have to wait for all these computation shards to be migrated. It might also mean that there's
actually no migration for users, because the proof-of-work chain just becomes shard zero of the new system. Then there was a second refinement of this idea almost a year later, which was saying: what if we actually ditched the part of the roadmap around executable shards and simply focused on rollups for scaling? The interesting thing there is that rollups generate a lot of data: either they post transaction data back on chain, or proofs of it in the case of ZK-rollups. So the first two bits of the roadmap, having the beacon chain and having the data shards, are still quite useful for rollups, because they give rollups somewhere to post data back to. But the most complicated bit, turning on sharded execution, that's basically what rollups already do. So this second, rollup-centric insight was saying: let's just ditch the most complicated part of our protocol changes, simply launch the beacon chain, add the data shards, have rollups store their data on those data shards, and we'll have scaling through rollups.

And then the third refinement that came to this idea was saying: what if we keep doing all this, but instead of having the proof-of-work chain become a shard of the Ethereum 2.0 design, we simply change what the current proof-of-work chain listens to to come to consensus? So, change from proof of work to proof of stake. The interesting thing there is that all of the software that ran the proof-of-work chain already had this concept that it could change consensus algorithms: on Ethereum mainnet it would run proof of work, but on testnets, for example, it would run proof of authority, and the same on private networks. So we could simply reuse all that software, get it to swap its consensus algorithm to something else, and then we wouldn't have to write any new software to execute the proof-of-work computation. We
could just keep things like Geth, Erigon, Nethermind, and Besu, have them swap over, and move to proof of stake that way. And this is more or less the final design we've gone with.

I'll take a second to show the contrast between what this initial Ethereum 2.0 roadmap looked like and what the rollup-centric approach we've taken looks like. Without even going into all of the details, just glancing at these diagrams, you can see that it pretty much ends up being the same thing, right? We get to a very similar end state. On the left side, we see that the original plan was actually to keep proof of work around for a while and have it tethered to the proof-of-stake chain, which is something we ditched; we moved to a world where we completely just use proof of stake, and that's what you can see on the right side. And then on the left side we have all of these shards that provide data, and then the VMs that link back to them and provide execution results. If you look at the right side of the diagram, you see basically the same thing: you have all of these blobs, which are basically the shard chains that provide data, and you have all of these L2s, which are now running the EVM. So instead of having one version, or versions, of the EVM dictated by the protocol, you can have all these different implementations done by many teams. And they're not only EVMs: there are obviously Optimism and Arbitrum, the EVM rollups, but you can see zkSync and StarkWare taking different approaches as well. So we end up in a state where we have this parallelized computation happening that allows us to scale Ethereum's throughput, that's using the data layer on L1, and we have
proof of stake securing that. So this was a much simpler design that ended up getting us to roughly the same spot.

Now, if we zoom in a little bit on the merge itself, this transition from proof of work to proof of stake, what would happen there? First, like I was saying, the idea is to go from proof of work to proof of stake as the consensus algorithm for the network. This means that after the transition, what was previously known as Eth1, or the proof-of-work chain, becomes the execution layer of Ethereum, where the transactions run and all of that; and the beacon chain, or what was known as the Eth2 part of the system, becomes the consensus layer. As we said already, the execution-layer clients already had this concept that they could use different consensus algorithms, so this is reusing some logic that already existed. Obviously there was a ton of custom work that had to be done for the merge, but generally this concept is something that clients were already used to dealing with. The way we would trigger the transition is by saying: once a specific amount of total difficulty is reached on the proof-of-work chain, the clients stop listening to proof of work as the way to determine the head of the network, and instead shift to proof of stake. In a post-merge setting, what this looks like is that the consensus-layer client, the beacon chain, is running the proof-of-stake algorithm: it comes to consensus on what the head of the chain is, it's what produces and gossips the blocks, and it obviously handles syncing to the right fork on the proof-of-stake side. The execution layer still executes all of the transactions in the EVM, makes sure they're correct and follow the protocol rules; it still manages all of the state, and basically the database associated with
that, and gossips transactions on its own peer-to-peer network as well.

If you zoom in even more on the actual transition process, what happened is this. First, we had an upgrade on the beacon chain to tell it to listen for this terminal total difficulty (TTD) value that we set, which is the highest amount of difficulty we want to see on the proof-of-work chain. Once we set this value and deployed it, which was during the Bellatrix hard fork, the beacon chain and the execution layer are listening, checking every block and asking: have we exceeded this TTD value? When we see a block that first exceeds the TTD, we consider it a candidate final proof-of-work block; this is the last block produced on that chain by proof of work. Producing the next block is then the role of the validator assigned to the next slot on the beacon chain. And because reorgs can happen, there can be multiple competing terminal blocks, so what we want to see, once this first proof-of-stake block has been produced, is for it to be finalized, so that we know we can't reorg past the transition. Once we had that, that's when we knew the transition was complete.

I have a graph here by Danny Ryan which shows this more visually. If you go all the way to the left, you see we have the proof-of-work and beacon chains both running in parallel. On the top you have this proof-of-work block, which has some proof of work, but within it there's all of the transactional content of the block and the data required to validate those transactions, like the hash of the block, the base fee, and whatnot. Right below that you have a beacon chain block, which has all of the data related to the proof-of-stake chain; you can think of it as consensus metadata: the attestations, all the new deposits, all the exits, the RANDAO reveals,
and whatnot, but it doesn't have any actual end-user transactions being executed in it. If you move over one block, you can imagine this second proof-of-work block is the one where we exceed the TTD, so we consider it the terminal proof-of-work block. While that's happening, the beacon chain is still operating as it previously did. But once that second proof-of-work block is hit, we see that the third beacon chain block now contains the transactional payload of the previous proof-of-work chain. This is basically the merge happening: we go from this last proof-of-work block, which has all the transactions in it, to the beacon chain still coming to consensus via proof of stake, but now including what we call an execution payload, which is the list of all these transactions as well as all the relevant data to execute them on the network. Then, if you move over to the last block on the right, you can see that this block has no link back to proof of work at all; it's a pure proof-of-stake block, with all of the proof-of-stake consensus information as well as this execution payload again. And this is what we wanted to see finalized: the first block on the proof-of-stake side, so that we know we can't reorg back to the proof-of-work world.

If you zoom in even more on one of these blocks, like I was saying, the content that used to make up all of the transaction-related data of an Eth1 block now becomes an execution payload which is part of a beacon chain block. The goal there was to make the transition as seamless as possible, so that a transaction running in the last proof-of-work block or the first proof-of-stake block would execute the same way, and smart contracts and users wouldn't have anything to do. That said, there are a couple of changes worth noting. The first is that the block times on
proof of work averaged about 13 seconds with really high variance, whereas under proof of stake they happen every 12 seconds, unless the validator is offline, which currently happens less than one percent of the time. The second thing is that there are several fields in the block that we set to zero after the transition because they no longer apply: anything related to proof of work, or to ommer (uncle) blocks, is basically what we zeroed out, with one exception. DIFFICULTY was a value accessible in the EVM, a biasable pseudo-random value that is used on chain for pseudo-randomness. If we just set it to zero, it would go from providing some randomness to providing no randomness at all. So instead we rewired this opcode to return the RANDAO reveal from the previous slot, which is another source of randomness that can now be used by contracts. This is really the only change in terms of contract execution on chain, and it's kind of neat because the size of those values is different: if you look at the size of the return value of this opcode, you can tell whether your contract is executing pre- or post-merge.

And if you look at what an Ethereum node looks like on the network, this is roughly the idea: an Ethereum node now requires you to run both a consensus-layer client and an execution-layer client. Without the consensus-layer client you can't tell what the state of the chain is, and without the execution-layer client you can't actually execute the blocks, so you need to combine both of them. But because we wanted things to also be seamless for the users of these clients, all of the nodes still maintain their APIs as before: if you are using the beacon APIs, or JSON-RPC or WebSockets on the
execution layer, none of that changes; you still get the same experience. Similarly, both clients that are part of the node, the consensus layer and the execution layer, maintain their own set of peers, so the gossip networks remain independent. The only thing that changes there is that blocks go from being gossiped on the execution layer to the consensus layer. Then, in order for the consensus and execution layers to communicate, we've introduced an API between them called the engine API. This is just an authenticated API where the consensus layer can send information to the execution layer and tell it, you know, what the valid head is, what the state of the network is, and it can also request information from it. For example, it'll send it a block and ask it to validate that block, or similarly, if it's a validator and has to propose a block, it will query its execution engine and ask it to return a block that it can broadcast on the network. So that's pretty much it for how the network works. We worked quite hard to have minimal changes to the architecture of clients, to the contents of blocks, and to the interface with nodes, and that's really the main thing we wanted: having this be a really smooth transition for users of Ethereum.

Now that the merge has actually happened, there are a few nice charts I wanted to highlight that I've seen since then. The first one is from Coin Metrics, and I actually couldn't believe this chart when I first saw it. It shows what the average block time is pre- and post-merge; this is taken in the 12-hour window around the merge, which happened at about 6:30 UTC, as you can see here on the graph. Before the merge, like I was saying, we had this target of 13-second block times with very high variance, and I think I
personally underestimated the amount of variance that we had. If you look at the left part of the graph, all of the red dots are short blocks that happened in less than 12 seconds, but this entire upper band of blocks that took more than 12 seconds is quite surprising. Even if you look after the merge, you can see there's this red line showing that most blocks come every 12 seconds, and there are a couple of blue dots showing that a validator was offline, so the next block didn't come in 12 seconds but in 24. So far there haven't been two validators offline in a row, so we haven't seen a 36-second block on proof-of-stake Ethereum. I think it just shows how much more predictable block times have become under proof of stake: you know you can expect something basically every 12 seconds, and if not, it might be every 24 seconds, whereas before, it's kind of wild to see, we had blocks that would take between 60 and 80 seconds from time to time, and those happened multiple times in a six-hour span.

Related to this, there are a couple of other neat things to notice. The first one, on the left here, is a chart looking at the daily gas usage, and it shows that if we went from 13-second to 12-second blocks, obviously there's going to be an increase in capacity on the network, and we can see that quite clearly in this chart: towards the last couple of days, when the merge happened, you see the gas used daily has gone up. And one thing that's nice is that we've not only managed to increase capacity a little bit, we've also managed to get better pricing on the network. If you look at the right-hand side, the chart in the tweet from Martin is basically the same thing as the chart
that I showed before, just in a different view. But Vitalik's reply there says that the share of full blocks we've seen has gone from 20 percent to 10 percent, and this is quite important because of the mechanism we use to price gas on the network, EIP-1559. You can think of it as basically surge pricing for Ethereum that goes both ways: when many people want to use Ethereum, the price goes up; when no one wants to use it, the price goes down. The way we're able to calibrate this is by trying to keep blocks 50 percent full all of the time. If blocks are 50 percent full, we assume demand is constant, so we don't change the price; if they're fuller than that, we start raising the price; and if they're emptier, we start lowering it. But this means that if many blocks are 100 percent full, that is, if there's more than twice as much demand as the current baseline, then our mechanism kind of stops working: it stops tracking the excess demand because it gets saturated, and it can take many blocks to recover from that. One of the reasons this can happen is not necessarily that there's more demand, but simply that there's more time between blocks. You can think of it this way: if blocks are supposed to come every 13 seconds and one of them comes after 26 seconds, which as you can see on this chart was quite common, it'll look to the network as if there's twice as much demand, but in practice, per unit of time, the demand was constant. Because we now have these much more constant block times, we actually have fewer blocks that are 100 percent full: we went from about 20 percent to 10 percent. And that's great, because it shows that our pricing mechanism actually works better in a post-merge world and gets distorted less often by surges of demand. So I wanted to share these couple of charts because, even though it hasn't been super long since the merge happened, we're already seeing
some interesting effects on the network, and it's clearly shown that things are generally more stable. And so that's all I had. Thank you so much for having me, and I wish you a great conference. Cheers.

Dive deep into Ethereum's past, present, and future, and the many roadblocks and conversations that took place along the way. In this SmartCon 2022 presentation, Tim Beiko, AllCoreDevs Coordinator at the Ethereum Foundation, walks through Ethereum's rollup-centric vision, post-merge architecture, and more.
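The terminal total difficulty trigger described earlier in the talk can be sketched as a small predicate. This is an illustrative sketch of the consensus-spec condition, not actual client code: the function name is made up, and the constant is mainnet's publicly announced TTD value.

```python
# Illustrative sketch (not real client code) of the terminal-block rule:
# the terminal proof-of-work block is the first block whose total
# difficulty reaches the terminal total difficulty (TTD), i.e. its own
# total difficulty meets the threshold while its parent's does not.

TERMINAL_TOTAL_DIFFICULTY = 58_750_000_000_000_000_000_000  # mainnet TTD

def is_terminal_pow_block(block_td: int, parent_td: int) -> bool:
    """True if a block with these total difficulties is a candidate
    terminal proof-of-work block."""
    return (block_td >= TERMINAL_TOTAL_DIFFICULTY
            and parent_td < TERMINAL_TOTAL_DIFFICULTY)
```

Note that this shape, parent below the threshold and block at or above it, is exactly why multiple competing terminal blocks can exist during a reorg, which is why the talk stresses waiting for the first finalized proof-of-stake block before declaring the transition complete.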
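The "size of the return value" trick mentioned in the talk for telling pre- from post-merge execution can be sketched like this. This is a heuristic sketch, not a library API: `looks_post_merge` is a hypothetical helper name, and the 2**64 bound reflects that proof-of-work difficulty always fit comfortably within 64 bits, while a RANDAO reveal is a full 256-bit value.

```python
# Heuristic sketch: pre-merge, the DIFFICULTY opcode returned the
# proof-of-work difficulty, which always fit well within 64 bits;
# post-merge, the same opcode (renamed PREVRANDAO) returns the previous
# slot's RANDAO reveal, a 256-bit value that is overwhelmingly likely
# to exceed 2**64.

def looks_post_merge(difficulty_opcode_value: int) -> bool:
    """Heuristic: a value wider than 64 bits must be a RANDAO reveal."""
    return difficulty_opcode_value > 2**64
```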
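The 50-percent-full targeting described in the talk is EIP-1559's base-fee update rule. The following is a simplified sketch of the published formula, ignoring edge cases; the parameter names follow the EIP, but the function name and example numbers are illustrative.

```python
# Sketch of the EIP-1559 base-fee update the talk describes as "surge
# pricing": blocks target 50% full; fuller blocks push the base fee up
# (by at most 1/8 per block), emptier blocks push it down.

BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # max +/- 12.5% change per block
ELASTICITY_MULTIPLIER = 2            # blocks may be up to 2x the gas target

def next_base_fee(base_fee: int, gas_used: int, gas_limit: int) -> int:
    """Compute the next block's base fee from the parent block's usage."""
    gas_target = gas_limit // ELASTICITY_MULTIPLIER  # the "50% full" point
    if gas_used == gas_target:
        return base_fee  # block exactly at target: demand looks constant
    delta = (base_fee * abs(gas_used - gas_target)
             // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR)
    if gas_used > gas_target:
        return base_fee + max(delta, 1)  # fuller than target: raise the fee
    return base_fee - delta              # emptier than target: lower the fee
```

With a 30M gas limit the target is 15M; a completely full block raises the base fee by the maximum 12.5 percent, and once blocks saturate at 100 percent full, the mechanism can no longer see how much excess demand there is, which is the saturation problem the talk describes.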