By Vinay Gupta, infrastructure theorist, software engineer, and inventor. https://re.silience.com, @leashless
Behold Skynet
Ethereum brings up strong emotions. Some have compared it to Skynet, the distributed artificial intelligence of the Terminator movies. Others once suggested the entire thing is a pipe dream. The network has been up for a few months now, and is showing no signs of hostile self-awareness — or total collapse.
But if you’re not terribly technical, or technical in a different field, it’s easy to stare at all this stuff and think “I’ll get around to this later” or decide to ignore it until the Guardian does a nice feature (e.g., its “Imogen Heap: saviour of the music industry?” article).
But, in truth, it’s not that difficult to understand Ethereum, blockchains, Bitcoin and all the rest — at least the implications for people just going about their daily business, living their lives. Even a programmer who wants a clear picture can get a good enough model of how it all fits together fairly easily. Blockchain explainers usually focus on some very clever low-level details like mining, but that stuff really doesn’t help people (other than implementers) understand what is going on. Rather, let’s look at how the blockchains fit into the more general story about how computers impact society.
As is so often the case, to understand the present, we have to start in the past: blockchains are the third act of the play, and we are just at the beginning of that third act. So we must recap.
SQL: Yesterday’s Best Idea
The actual blockchain story starts in the 1970s when the database as we currently know it was created: the relational model, SQL, big racks of spinning tape drives, all of that stuff. If you’re imagining big white rooms with expensive beige monoliths watched over by men in ties, you’re in the right corner of history. In the age of Big Iron, big organizations paid big bucks to IBM and the rest for big databases and put all their most precious data assets in these systems: their institutional memory and customer relationships. The SQL language which powers the vast majority of the content management systems which run the web was originally a command language for tape drives. Fixed field lengths — a bit like the 140 character limit on tweets — originally served to let impatient programs fast forward tapes a known distance at super high speed to put the tape head exactly where the next record would begin. This was all going on round about the time I was born — it’s history, but it’s not yet ancient history.
At a higher, more semantic level, a subtle distortion in how we perceive reality took hold: things that were hard to represent in databases became alternately devalued and fetishized. Years passed as people struggled to get the real world into databases using knowledge management, the semantic web, and many other abstractions. Not everything fit, but we ran society on these tools anyway. The things which did not fit cleanly in databases got marginalized, and life went on. Once in a while a technical counter-current would take hold and try to push back on the tyranny of the database, but the general trend held firm: if it does not fit in the database, it does not exist.
You may not think you know this world of databases, but you live in it. Every time you see a paper form with squares indicating one letter per box, you are interacting with a database. Every time you use a web site, there’s a database (or more likely an entire mess of them) lurking just under the surface. Amazon, Facebook, all of that — it’s all databases. Every time a customer service assistant shrugs and says “computer says no” or an organization acts in crazy, inflexible ways, odds are there’s a database underneath which has a limited, rigid view of reality and it’s simply too expensive to fix the software to make the organization more intelligent. We live in these boxes, as pervasive as oxygen, and as inflexible as punched cards.
Documents and the World Wide Web
The second act is started by the arrival of Tim Berners-Lee and the advent of the web. It actually starts just a hair before his arrival. In the late 1980s and early 1990s we get serious about computer networking. Protocols like Telnet, Gopher, Usenet and email itself provide a user interface to the spanning arcs of the early internet, but it’s not until the 1990s we get mass adoption of networked computers, leading incrementally to me typing this on Google Docs, and you reading it in a web browser. This process of joining the dots (“the network is the computer,” as Sun Microsystems used to say) was fast. In the early 1990s, vast numbers of machines already existed, but they were largely stand-alone devices, or connected to a few hundred machines on a university campus without much of a window into the outside world. The software and hardware to do networking everywhere — the network of networks, the internet — took a long time to create, and then spread like wildfire. The small pieces became loosely joined, then tightly coupled into the network we know today. We are still riding the technological wave as the network gets smarter, smaller and cheaper and starts showing up in things like our light-bulbs under names like “the Internet of Things.”
Bureaucracy and Machines
But the databases and the networks never really learned to get along. The Big Iron in the machine rooms and the myriads of tiny little personal computers scattered over the internet like dew on a cobweb could not find a common world-model which allowed them to interoperate smoothly. Interacting with a single database is easy enough: forms and web applications of the kinds you use every day. But the hard problem is getting databases working together, invisibly, for our benefit, or getting the databases to interact smoothly with processes running on our own laptops.
Those technical problems are usually masked by bureaucracy, but we experience their impact every single day of our lives. It’s the devil’s own job getting two large organizations working together on your behalf, and deep down, that’s a software issue. Perhaps you want your car insurance company to get access to a police report about your car getting broken into. In all probability you will have to get the data out of one database in the form of a handful of printouts, and then mail them to the company yourself: there’s no real connectivity in the systems. You can’t drive the process from your laptop, except by the dumb process of filling in forms. There’s no sense of using real computers to do things, only computers abused as expensive paper simulators. Although in theory information could just flow from one database to another with your permission, in practice the technical costs of connecting databases are huge, and your computer doesn’t store your data so it can do all this work for you. Instead it’s just something you fill in forms on. Why are we under-utilizing all this potential so badly?
The Philosophy of Data
The answer, as always, is in our own heads. The organizational assumptions about the world which are baked into computer systems are almost impossible to translate. The human factors — the mindsets which generate the software — don’t fit together. Each enterprise builds their computer system in their own image, and these images disagree about what is vital and what is incidental, and truth does not flow between them easily. When we need to translate from one world model to another, we put humans in the process, and we’re back to processes which mirror filling in paper forms rather than genuinely digital cooperation. The result is a world in which all of our institutions seem to be at sixes and sevens, never quite on the same page, and things that we need in our ordinary lives seem to keep falling between the cracks, and every process requires filling in the same damn name and address data, twenty times a day, and more often if you are moving house. How often do you shop from Amazon rather than some more specialized store just because they know where you live?
There are lots of other factors that maintain the gap between the theoretical potential of our computers and our everyday use of them — technological acceleration, constant change, the sheer expense of writing software. But it all boils down to mindset in the end. Although it looks like ones and zeros, software “architects” are swinging around budgets you could use to build a skyscraper, and changing something late into a project like that has similar costs to tearing down a half-made building. Rows upon rows upon rows of expensive engineers throwing away months (or years) of work: the software freezes in place, and the world moves on. Everything is always slightly broken.
Over and over again, we go back to paper and metaphors from the age of paper because we cannot get the software right, and the core to that problem is that we managed to network the computers in the 1990s, but we never did figure out how to really network the databases and get them all working together.
There are three classic models for how people try and get their networks and databases working together smoothly.
First Paradigm: The Diverse Peers Model
The first approach is just to directly connect machines together, and work out the lumps as you go. You take Machine A, connect it over a network to Machine B, and fire transactions down the wire. In theory, Machine B catches them, writes them into its own database, and the job is done. In practice, there are a few problems here.
The epistemological problem is quite severe. Databases, as commonly deployed in our organizations, store facts. If the database says the stock level is 31 units, that’s the truth for the whole of the organization, except perhaps for the guy who goes down to the shelf and counts them, finds the real count is 29, and puts that in the database as a correction. The database is institutional reality.
But when data leaves one database and flows into another, it crosses an organizational boundary. For Organization A, the contents of Database A are operational reality, true until proven otherwise. But for Organization B, the communique is a statement of opinion. Consider an order: the order is a request, but it does not become a confirmed fact until after the payment clears past the point of a chargeback. A company may believe an order has occurred, but this is a speculation about someone else’s intentions until cold hard cash (or bitcoin) clears all doubts. Up until that point, an “ordered in error” signal can reset the whole process. An order exists as a hypothesis until a cash payment clears it from the speculative buffer it lives in and places it firmly in the fixed past as a matter of factual record: this order existed, was shipped, was accepted, and we were paid for it.
But until then, the order is just a speculation.
The shifting significance of a simple request for new paint brushes flowing from one organization to another, a statement of intention clearing into a statement of fact, is not something we would normally think about closely. But when we start to consider how much of the world, of our lives, runs on systems that work much like this — food supply chains, electrical grids, tax, education, medical systems — it’s odd that these systems don’t come to our notice more often.
In fact, we only notice them when something goes wrong.
The second problem with peer connection is the sheer instability of each peer connection. A little change to the software on one end or the other, and bugs are introduced. Subtle bugs which may not become visible until the data transferred has wormed its way deep into Organization B’s internal records. A typical instance: an order was always placed in lots of 12, and processed as one box. But for some reason, one day an order is placed for 13, and somewhere far inside of Organization B, a stock handling spreadsheet crashes. There’s no way to ship 1.083 of a box, and The Machine Stops.
This instability is compounded by another factor: the need to translate the philosophical assumptions — in fact, the corporate epistemology — of one organization into another organization’s private internal language. Say we are discussing booking a hotel and car rental as a single action: the hotel wants to think of customers as credit card numbers, but the car rental office wants to think of customers as driving licenses. A small error results in customer misidentification, and comedy ensues as customers are mistakenly asked for their driving license numbers to confirm hotel room bookings — but all anybody knows of the error is “computer says no” when customers read back their credit card details with no idea that the computer now wants something else.
If you think this is a silly example, the Mars Climate Orbiter was lost by NASA in 1999 because one team was using imperial units, and the other, metric. These things go wrong all the time.
But over a wire, between two commercial organizations, one can’t simply look at the other guy’s source code to figure out the error. Every time two organizations meet and want to automate their back end connections, all these issues have to be hashed out by hand. It’s difficult, and expensive, and error prone enough that in practice companies would often rather use fax machines. This is absurd, but this is how the world really works today.
Of course, there are attempts to clarify this mess — to introduce standards and code reusability to help streamline these operations and make business interoperability a fact. You can choose from EDI, XMI-EDI, JSON, SOAP, XML-RPC, JSON-RPC, WSDL and half a dozen more standards to assist your integration processes.
Needless to say, the reason there are so many standards is because none of them work properly.
Finally, there is the problem of scaling collaboration. Say that two of us have paid the upfront costs of collaboration and have achieved seamless technical harmony, and now a third partner joins our union. And now a fourth, and a fifth. By five partners, we have 10 connections to debug. Six, seven… by ten the number is 45. The cost of collaboration keeps going up for each new partner as they join our network, and the result is small pools of collaboration which just will not grow.
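In general, fully connecting n partners requires a pairwise link between every two of them, so the debugging burden grows quadratically with the size of the network:

$$\binom{n}{2} = \frac{n(n-1)}{2}: \qquad n=5 \Rightarrow 10, \quad n=6 \Rightarrow 15, \quad n=7 \Rightarrow 21, \quad n=10 \Rightarrow 45$$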
Remember, this isn’t just an abstract problem — this is banking, this is finance, medicine, electrical grids, food supplies, and the government.
Our computers are a mess.
Hub and Spoke: Meet the New Boss
One common answer to this quandary is to cut through the exponential (well, quadratic) complexity of writing software to directly connect peers, and simply put somebody in charge. There are basically two approaches to this problem.
The first is that we pick an organization — VISA would be typical — and all agree that we will connect to VISA using their standard interface. Each organization has to get just a single connector right, and VISA takes 1% off the top, and makes sure that everything clears properly.
There are a few problems with this approach, but they can be summarized with the term “natural monopoly.” The business of being a hub or a platform for others is quite literally a license to print money for anybody that achieves incumbent status in such a position. Political power in the form of setting terms of service and negotiating with regulators can be exerted, and overall, an arrangement that might have started as an effort to create a neutral backbone rapidly turns its participants into clients of an all-powerful behemoth without which one simply cannot do business.
This pattern recurs again and again in different industries, at different levels of complexity and scale, from railroads and fibre optics and runway allocation in airports through to liquidity management in financial institutions.
In the database context, there is a subtle form of the problem: platform economics. If the “hub and spoke” model is that everybody runs Oracle or Windows Servers or some other such system, and then relies on these boxes to connect to each other flawlessly because, after all, they are clone-like peas in a pod, we have the same basic economic proposition as before: to be a member of the network, you rely on an intermediary who charges whatever they like for the privilege of your membership, with this tax disguised as a technical cost.
VISA gets 1% or more of a very sizeable fraction of the world’s transactions with this game. If you ever wonder what the economic upside of the blockchain business might be, just have a think about how big that number is.
Protocols — If You Can Find Them
The protocol is the ultimate “unicorn.” Not a company that is worth a billion dollars two years after it was founded, but an idea so good that it gets people to stop arguing about how to do things, and just get on with it and do them.
The internet runs on a handful of these things: Sir Tim Berners-Lee’s HTTP and HTML standards have worked like magic, although of course he simply lit the fire and endless numbers of technologists gave us the wondrous mess we know and love now. SMTP, POP and IMAP power our email. BGP sorts out our big routers. There are a few dozen more, increasingly esoteric, which run most of the open systems we have.
A common complaint about tools like Gchat or Slack is that they do jobs which have perfectly great open protocols in play (IRC or XMPP) but do not actually speak those protocols. The result is that there is no way to interoperate between Slack and IRC or Skype or anything else, without going through hacked together gateways that may or may not offer solid system performance. The result is a degradation of the technical ecosystem into a series of walled gardens, owned by different companies, and subject to the whims of the market.
Imagine how much Wikipedia would suck by now if it were a start-up pushing hard to monetize its user base and make its investors their money back.
But when the protocol gambit works, what’s created is huge genuine wealth — not money, but actual wealth — as the world is improved by things that just work together nicely. Of course, SOAP and JSON-RPC and all the rest aspire to support the formation of protocols, or even to be protocols, but the definitional semantics of each field of endeavor tend to create an inherent complexity which leads back towards hub and spoke or other models.
Blockchains — A Fourth Way?
You’ve heard people talking about bitcoin. Missionary chaps in pubs absolutely sure that something fundamental has changed, throwing around terms like “Central Bank of the Internet” and discussing the end of the nation state. Sharply dressed women on podcasts talking about the amazing future potential. But what’s actually underneath all this? What is the technology, separated from the politics and the future potential?
What’s underneath it is an alternative to getting databases synchronized by printing out wads of paper and walking it around. Let’s think about paper cash for a moment: I take a wad of paper from one bank to another, and the value moves from one bank account — one computer system — to another. Computer as paper simulator, once again. Bitcoin simply takes a paper-based process, the fundamental representation of cash, and replaces it with a digital system: digital cash. In this sense, you could see bitcoin as just another paper simulator, but it’s not.
Bitcoin took the paper out of that system, and replaced it with a stable agreement (“consensus”) between all the computers in the bitcoin network about the current value of all the accounts involved in a transaction. It did this with a genuinely protocol-style solution: there’s no middleman extracting rents, and no exponential system complexity from a myriad of different connectors. The blockchain architecture is essentially a protocol which works as well as hub-and-spoke for getting things done, but without the liability of a trusted third party in the center which might choose to extract economic rents. This is really a good, good thing. The system has some magic properties — same agreed data on all nodes, eventually — which go beyond paper and beyond databases. We call it “distributed consensus” but that’s just a fancy way of saying that everybody agrees, in the end, about what truth (in your bank balance, in your contracts) is.
This is kind of a big deal.
In fact, it breaks with 40 years of experience of connecting computers together to do things. As a fundamental technique, blockchains are new. And in this branch of technology, genuinely new ideas move billions of dollars and set the direction of industries for decades. They are rare.
Bitcoin lets you move value from one account to another without having to either move cash or go through the baroque wire transfer processes that banks use to shuffle numbers, because the underlying database technology is new, modern and better: better services through better technology. Just like cash, it is anonymous and decentralized, and bitcoin bakes in some monetary policy and issues the cash itself: a “decentralized bank.” A central bank of the internet, if you will.
Once you think of cash as a special kind of form, and cash transactions as paper shuffling to move stuff around in databases, it’s pretty easy to see bitcoin clearly.
It’s not an exaggeration to say that Bitcoin has started us on the way out of a 40 year deep hole created by the limits of our database technology. Whether it can effect real change at a fiscal level remains to be seen.
Ok, So What About Ethereum?
Ethereum takes this “beyond the paper metaphor” approach to getting databases to work together even further than bitcoin. Rather than replacing cash, Ethereum presents a new model, a fourth way. You push the data into Ethereum, and it’s bound permanently in public storage (the “blockchain”). All the organizations that need to access that information — from your cousin to your government — can see it. Ethereum seeks to replace all the other places where you have to fill in forms to get computers to work together. This might seem a little odd at first — after all, you don’t want your health records in such a system — and that’s right, you don’t. If you were going to store health records online, you’d need to protect them with an additional layer of encryption to ensure they couldn’t be read — and we should be doing this anyway. It’s not common practice to apply appropriate encryption to private data; that’s why you keep hearing about these enormous hacks and leaks.
So what kinds of things would you like as public data? Let’s start with some obvious things: your domain names. You own a domain name for your business, and people need to know that your business owns that domain name — not somebody else. That unique system of names is how we navigate the internet as a whole: that’s a clear example of something we want in a permanent public database. We’d also like it if governments didn’t keep editing those public records and taking domains offline based on their local laws: if the internet is a global public good, it’s annoying to have governments constantly poking holes in it by censoring things they don’t like.
Crowdfunding as a Test Bed
Another good example is crowdfunding for projects, as done by places like KickStarter, IndieGoGo and so on. In these systems, somebody puts a project online and gathers funds, and there’s a public record of how much funding has flowed in. If it’s over a certain number, the project goes live — and we’d like them to document what they did with the money. This is a very important step: we want them to be accountable for the funds they have taken in, and if the funds aren’t sufficient, we want them returned where they came from. We have a global public good: the ability for people to organize and fund projects together. Transparency really helps, so this is a natural place for a blockchain.
So let’s think about the crowdfunding example in more detail. In a sense, giving money to a crowdfunding project is a simple contract:
If the account balance is greater than $10000 then fund the project, and if I contributed more than $50, send me a t-shirt. Otherwise, return all the money.
Represent this simple agreement as actual, detailed code and you get a short program. This is a simple example of a Smart Contract, and smart contracts are one of the most powerful aspects of the Ethereum system.
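As an illustration only, here is a minimal sketch of that agreement in Solidity, one of Ethereum’s contract languages. The names, thresholds and deadline mechanics are invented for this example (amounts are in wei, the smallest unit of ether, rather than dollars), and the t-shirt is recorded as an on-chain flag for someone to fulfil off-chain:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// A minimal, illustrative crowdfunding contract (a sketch, not
// audited production code; names and thresholds are invented).
contract Crowdfund {
    address payable public beneficiary;          // the project being funded
    uint public goal;                            // the "$10,000" threshold, in wei
    uint public tshirtThreshold;                 // the "$50" threshold, in wei
    uint public deadline;                        // when the campaign closes
    uint public totalRaised;                     // running total of contributions
    mapping(address => uint) public contributions;
    mapping(address => bool) public getsTshirt;  // who earned a t-shirt

    constructor(address payable _beneficiary, uint _goal,
                uint _tshirtThreshold, uint _duration) {
        beneficiary = _beneficiary;
        goal = _goal;
        tshirtThreshold = _tshirtThreshold;
        deadline = block.timestamp + _duration;
    }

    // Contribute funds while the campaign is open.
    function contribute() external payable {
        require(block.timestamp < deadline, "campaign over");
        contributions[msg.sender] += msg.value;
        totalRaised += msg.value;
        if (contributions[msg.sender] >= tshirtThreshold) {
            getsTshirt[msg.sender] = true;
        }
    }

    // If the goal was met, anyone may release the funds to the project.
    function payout() external {
        require(block.timestamp >= deadline, "campaign still open");
        require(totalRaised >= goal, "goal not reached");
        beneficiary.transfer(address(this).balance);
    }

    // If the goal was missed, each contributor can take their money back.
    function refund() external {
        require(block.timestamp >= deadline, "campaign still open");
        require(totalRaised < goal, "goal was reached");
        uint amount = contributions[msg.sender];
        contributions[msg.sender] = 0;
        payable(msg.sender).transfer(amount);
    }
}
```

Notice that nobody holds the money in trust while the campaign runs: the contract itself holds it, and the rules for releasing or returning it are public and fixed from the moment it is created.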
Crowdfunding potentially gives us access to risk capital backed by deep technical intelligence, and invested to create real political change. If, say, Elon Musk could access the capital reserves of everybody who believes in what he is doing, painlessly selling (say) shares in a future Mars City, would that be good or bad for the future of humanity?
Building the mechanisms to enable this kind of mass collective action might be critical to our future. (See, e.g., the “Coase’s Blockchain” video on YouTube.)
Smart Contracts
The implementation layer of all these fancy dreams is pretty simple: a smart contract envisages taking certain kinds of simple paper agreements and representing them as software. You can’t easily imagine doing this for house painting — “is the house painted properly?” is not something a computer can do — yet. But for contracts which are mainly about digital things — think cell phone contracts or airline tickets or similar, which rely on computers to provide service or send you an e-ticket — software already represents these contracts pretty well in nearly all cases. Very occasionally something goes wrong and all the legalese in English gets activated, and a human judge gets involved in a lawsuit, but that’s a very rare exception indeed. Mostly we deal with web sites, and show the people in the system who help us (like airline gate staff) proof that we’ve completed the transaction with the computers, for example by showing them our boarding passes. We go about our business by filling in some forms and computers go out and sort it all out for us, no humans required except when something goes wrong.
To make that all possible today, companies offering those kinds of services maintain their own technical infrastructure — dotcom money pays for fleets of engineers and server farms and physical security around these assets. You can buy off-the-shelf services from people that will set you up an e-commerce website or some other simple case, but basically this kind of sophistication is the domain of the big companies because of all the overheads and technical skill you need before you can have a computer system take money and offer services.
It’s just hard and expensive. If you are starting a bank or a new airline, software is a very significant part of your budget, and hiring a technical team is a major part of your staffing challenge.
Smart Contracts and the World Computer
So what Ethereum offers is a “smart contract platform” which takes a lot of that expensive, difficult stuff and automates it. It’s early days yet, so we can’t do everything, but we are seeing a surprising amount of capability even from the first version of the world’s first generally available smart contract platform.
So how does a smart contract platform work? Just like bitcoin, lots and lots of people run the software, and get a few tokens (ether) for doing it. Those computers in the network all work together and share a common database, called the blockchain. Bitcoin’s blockchain stores financial transactions. Ethereum’s blockchain stores smart contracts. You don’t rent space in a data center and hire a bunch of system administrators. Rather, you use the shared global resource, the “world computer,” and the resources you put into the system go to the people whose computers make up this global resource. The system is fair and equitable.
Ethereum is open source software, and the Ethereum team maintain it (increasingly with help from lots of independent contributors and other companies too). Most of the web runs on open source software produced and maintained by similar teams: we know that open source software is a good way to produce and maintain global infrastructure. This makes sure that there’s no centralized body which can use its market power to do things like jack up the transaction fees to make big profits: open source software (and its slightly more puritan cousin, Free Software) helps keep these global public goods free and equitable for everybody.
The smart contracts themselves, which run on the Ethereum platform, are written in simple languages: not hard to learn for working programmers. There’s a learning curve, but it’s not different from things that working professionals do every few years as a matter of course. Smart contracts are typically short: 500 lines would be long. But because they leverage the huge power of cryptography and blockchains, because they operate across organizations and between individuals, there is enormous power in even relatively short programs.
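To give a feel for how short these programs can be, here is a minimal sketch of a first-come, first-served name registry of the kind discussed earlier. It is invented for this primer (real naming systems are far more elaborate), but it is a complete contract:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// A minimal first-come, first-served name registry (a sketch for
// illustration; real-world registries are far more elaborate).
contract NameRegistry {
    mapping(string => address) public owner;

    // Claim an unowned name. Once claimed, the record is public,
    // and no third party can quietly edit or remove it.
    function register(string calldata name) external {
        require(owner[name] == address(0), "name already taken");
        owner[name] = msg.sender;
    }

    // Only the current owner may transfer a name.
    function transfer(string calldata name, address to) external {
        require(owner[name] == msg.sender, "not the owner");
        owner[name] = to;
    }
}
```

Twenty-odd lines, and yet it implements a public, tamper-resistant registry that would otherwise require a trusted central operator.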
So what do we mean by world computer? In essence, Ethereum simulates a perfect machine — a thing which could never exist in nature because of the laws of physics, but which can be simulated by a large enough computer network. The network’s size isn’t there to produce the fastest possible computer (although that may come later with blockchain scaling) but to produce a universal computer which is accessible from anywhere by anybody, and (critically!) which always gives the same results to everybody. It’s a global resource which stores answers and cannot be subverted, denied or censored. (See the “From Cypherpunks to Blockchains” video on YouTube.)
We think this is kind of a big deal.
A smart contract can store records on who owns what. It can store promises to pay, and promises to deliver, without a middleman or exposing people to the risk of fraud. It can automatically move funds in accordance with instructions given long in the past, like a will or a futures contract. For pure digital assets there is no “counterparty risk” because the value to be transferred can be locked into the contract when it is created, and released automatically when the conditions and terms are met: if the contract is clear, then fraud is impossible, because the program actually has real control of the assets involved rather than requiring trustworthy middlemen like ATM machines or car rental agents.
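As one concrete illustration (again a sketch invented for this primer), a time-locked payment locks value in at the moment of creation and releases it automatically once the agreed condition is met, with nobody holding the funds in between:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// A minimal time-locked payment (an illustrative sketch). The sender
// locks funds in at creation; once the release time passes, the funds
// move to the recipient automatically, with no middleman in between.
contract TimeLockedPayment {
    address payable public recipient;
    uint public releaseTime;

    // Funds are locked into the contract the moment it is created.
    constructor(address payable _recipient, uint _releaseTime) payable {
        recipient = _recipient;
        releaseTime = _releaseTime;
    }

    // Anyone may trigger the release once the condition is met.
    function release() external {
        require(block.timestamp >= releaseTime, "too early");
        recipient.transfer(address(this).balance);
    }
}
```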
And this system runs globally, with tens and eventually hundreds of thousands of computers sharing the workload and, more importantly, backing up the cultural memory of who promised what to whom. Yes, fraud is still possible, at the edge of the digital, but many kinds of outright banditry are likely to simply die out: you can check the blockchain and find out if the house has been sold twice, for example. Who really owns this bridge in Brooklyn? What happens if this loan defaults? All there, as clear as crystal, in a single shared global blockchain. That’s the plan, anyway.
Democratized Access to the State of the Art
All of this potentially takes the full power of modern technology and puts it into the hands of programmers who are working in an environment not much more complex than coding web sites. These simple programs are running on enormously powerful shared global infrastructure that can move value around and represent the ownership of property. That creates markets, registries like domain names, and many other things that we do not currently understand because they have not been built yet. When the web was invented to make it easy to publish documents for other people to see, nobody would have guessed it would revolutionize every industry it touched, and change people’s personal lives through social networks, dating sites, and online education. Nobody would have guessed that Amazon could one day be bigger than Wal-Mart. It’s impossible to say for sure where smart contracts will go, but it’s hard not to look at the web, and dream.
Although an awful lot of esoteric computer science was required to create a programming environment that would let relatively ordinary web skills move around property inside of a secure global ecosystem, that work has been done. Although Ethereum is not yet a cakewalk to program, that’s largely an issue of documentation, training, and the gradual maturation of a technical ecosystem. The languages are written and are good: the debuggers take more time. But the heinous complexity of programming your own smart contract infrastructure is gone: smart contracts themselves are simpler than modern JavaScript, and nothing a web programmer will be scared of. The result is that we expect these tools to be everywhere fairly soon, as people start to want new services, and teams form to deliver them.
The Future?
I am excited precisely because we do not know what we have created, and more importantly, what you and your friends will create with it. My belief is that terms like “Bitcoin 2.0” and “Web 3.0” will be inadequate — it will be a new thing, with new ideas and new culture embedded in a new software platform. Each new medium changes the message: blogging brought long form writing back, and then twitter created an environment where brevity was not only the soul of wit, but by necessity its body also. Now we can represent simple agreements as free speech, as publication of an idea, and who knows where this leads.
Ethereum Frontier is a first step: it’s a platform for programmers to build services you might access through a web browser or a phone app. Later we’ll release Ethereum Metropolis, which will be a web-browser-like program, currently called Mist, which takes all the security and cryptography inherent in Ethereum and packages it nicely with a user interface that anybody can use. The recent releases of Mist showcase a secure wallet, and that’s just the start. The security offered by Mist is far stronger than what current e-commerce systems and phone apps have. In the medium term, contract production systems will be stand-alone, so nearly anybody can download a “distributed application builder,” load it up with their content and ideas, and upload it — for simple things, no code will be required, but the full underlying power of the network will be available. Think along the lines of an installation wizard, but instead of setting up your printer, you are configuring the terms of a smart contract for a loan: how much money, how long, what repayment rates. Click OK to approve!
If this sounds impossible, welcome to our challenge: the technology has gotten far, far ahead of our ability to explain or communicate the technology!
The World SUPER Computer?
We are not done innovating yet. In a little while — we’re talking a year or two — Ethereum Serenity will take the network to a whole new level. Right now, adding more computers to the Ethereum network makes it more secure, but not faster. We manage the limited speed of the network using ether, a token which gives priority on the network, among other things. In the Serenity system, adding more computers to the network makes it faster, and this will finally allow us to build systems which really are internet scale: hundreds of millions of computers working together to do jobs we collectively need done. Today we might guess at protein folding or genomics or AI, but who’s to say what uses will be found for such brilliant software.
I hope this non-technical primer on the Ethereum ecosystem has been useful.