
Deployment 3.0

Jun 06, 2021

The term Web 3.0 is often bandied about to mean the (r)evolution of the web to its next phase. Depending on what you read, it is set to mean the blockchain-based decentralization of apps, new interaction paradigms powered by AR/VR, 5G, IoT, etc., or the semantic web. In parallel to this there has been a similar shift in the deployment of web services and apps, from self-hosted, on-premise deployments - Deployment 1.0 - to cloud hosting - 2.0. And we have been on the cusp of 3.0 for a while now.

To sum up 3.0 in a nutshell:

Deploying to the cloud such that assets, compute and data are automatically replicated and scaled across regions in accordance with demand and data regulations.

At the level of assets and data, numerous approaches have sprung up over the last several years that effectively solve the problem of efficient global replication (and thus distribution), e.g. IPFS. But the holy grail has always been doing the same for executable code - aka "compute". Even cloud functions require region selection despite their ephemerality.
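
To make that concrete, here's a rough sketch using the ipfs-http-client JavaScript library (the endpoint and content below are placeholders, and a running IPFS node is assumed): content is addressed by a hash of itself rather than by the server or region that stores it, which is what makes replication and distribution come for free.

```ts
import { create } from "ipfs-http-client";

async function main() {
  // Assumes an IPFS node is reachable at its default local API endpoint.
  const ipfs = create({ url: "http://127.0.0.1:5001/api/v0" });

  // Adding content returns a CID - a hash of the content itself - which is
  // valid on any node or gateway holding a copy, with no "home" region.
  const { cid } = await ipfs.add("hello, deployment 3.0");
  console.log(cid.toString());
  // The same bytes can now be fetched from any public gateway, e.g.
  // https://ipfs.io/ipfs/<cid>
}

main();
```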

A point on data regulations: different jurisdictions and regions may have different requirements for how user data must be stored and handled (e.g. GDPR in the EU). If cross-region replication is to be automated then it must be configurable to ensure data regulations are also honoured.

Most apps and services currently employ a combination of region-specific deployments collectively fronted by a routing system to ensure an incoming request gets served by the datacentre closest to it. However, the choice of regions is still a manual affair. And if demand from a previously overlooked region suddenly spikes then such a setup cannot dynamically adjust itself to maximally serve that demand.
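
As a hedged illustration of how manual this tends to be (the region names and the continent mapping below are made up for the example), the routing layer ultimately rests on a hand-picked list of deployments:

```ts
// Regions the team has chosen to deploy to: a manual, static decision.
const DEPLOYED_REGIONS = ["us-east-1", "eu-west-1", "ap-southeast-1"];

// Hand-maintained mapping from the caller's continent to the "closest" deployment.
const CONTINENT_TO_REGION: Record<string, string> = {
  NA: "us-east-1",
  SA: "us-east-1",
  EU: "eu-west-1",
  AF: "eu-west-1",
  AS: "ap-southeast-1",
  OC: "ap-southeast-1",
};

// A demand spike from an overlooked continent still lands on one of the
// pre-chosen regions; nothing here spins up capacity closer to it.
function regionFor(continent: string): string {
  return CONTINENT_TO_REGION[continent] ?? DEPLOYED_REGIONS[0];
}
```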

A solution which abstracts away regions and yet achieves the same goals is desirable. And not just from a DevOps point of view, but also on the basis that abstracting away this complexity would lower the cost and effort required to deploy and maintain new apps and services.

The recently launched Internet Computer (aka Dfinity) attempts to enable this alongside its loftier goals of blockchain consensus, decentralization and censorship-resistance. Apps are written as canisters which are then deployed to the IC cloud, with global replication fully taken care of. Indeed, even a database isn't needed since the memory pages for storing the executable code as well as in-code variables all get replicated with the canister.
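
Canisters themselves are written in languages like Motoko or Rust; staying with TypeScript here, a rough sketch of the client side (the interface, host and canister ID below are placeholders) shows what "no database needed" looks like in practice - the web app just calls the canister's methods via the @dfinity/agent library:

```ts
import { Actor, HttpAgent } from "@dfinity/agent";

// Placeholder Candid interface for a toy counter canister; in practice this
// idlFactory is generated from the canister's .did file.
const idlFactory = ({ IDL }: any) =>
  IDL.Service({
    increment: IDL.Func([], [IDL.Nat], []),
    get: IDL.Func([], [IDL.Nat], ["query"]),
  });

async function main() {
  const agent = new HttpAgent({ host: "https://ic0.app" });

  // The canister ID below is a placeholder; substitute the one assigned at deploy time.
  const counter: any = Actor.createActor(idlFactory, {
    agent,
    canisterId: "ryjl3-tyaaa-aaaaa-aaaba-cai",
  });

  // The counter's state lives in the canister's own replicated memory -
  // there is no separate database, region, or connection string to configure.
  await counter.increment();
  console.log(await counter.get());
}

main();
```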

A given canister can store up to 4 GB worth of data. If more than that is required, the recommendation is to split your data across multiple canisters. I'm personally not yet convinced that the performance of Dfinity's cloud architecture will be able to match what is currently possible with the existing clouds, and in particular for real-time apps at scale.
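
Following that splitting recommendation usually boils down to sharding records across a fixed set of canisters by key - a minimal sketch, with placeholder canister IDs and no re-sharding logic:

```ts
// Placeholder IDs for the canisters a dataset has been split across.
const DATA_CANISTERS = ["canister-a", "canister-b", "canister-c"];

// Deterministically pick which canister holds a given key. With roughly 4 GB
// per canister, total capacity grows with the length of this list.
function canisterForKey(key: string): string {
  let hash = 0;
  for (const ch of key) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return DATA_CANISTERS[hash % DATA_CANISTERS.length];
}
```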

It's also not clear how Dfinity plans to honour region-specific data regulations given that replication is largely automated and abstracted away from the developer.

Another player in this space is Cloudflare, and specifically Cloudflare Workers and Durable Objects. Workers offer serverless compute and can be personalised by user location. Importantly, Durable Objects (the data for your app) can be restricted by jurisdiction. In a recent article, Ben Thompson reckons Cloudflare's solution gives it the edge (pun intended) when it comes to data regulations. As he explains:

... (Cloudflare Workers) are not, in-and-of-themselves, going to kill the public clouds; what they represent, though, is an entirely new way of building infrastructure — from the edge in, as opposed to the data center out — that is perfectly suited to a world where politics matters more than economics.
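
Concretely, every request a Worker handles carries the caller's location, and a Durable Object ID can be created with a jurisdiction restriction so that the object's data only ever lives in datacentres within that jurisdiction. A minimal sketch, assuming a Durable Object namespace bound as USER_DATA and an intentionally simplistic residency rule:

```ts
/// <reference types="@cloudflare/workers-types" />

interface Env {
  USER_DATA: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // The Workers runtime attaches the caller's location to every request.
    const country = request.cf?.country ?? "unknown";

    // Illustrative residency rule: EU callers get an object whose data is
    // pinned to EU datacentres. (A real app would persist and reuse the ID
    // rather than minting a fresh one per request.)
    const id = country === "DE" || country === "FR"
      ? env.USER_DATA.newUniqueId({ jurisdiction: "eu" })
      : env.USER_DATA.newUniqueId();

    // Forward the request to the Durable Object instance.
    return env.USER_DATA.get(id).fetch(request);
  },
};
```

Because the restriction is baked into the object's ID, it travels with the data regardless of which edge location the Worker itself happens to run in.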

In any case I expect that over the next 10 years, region-agnostic replication that honours jurisdictional data limits will become the norm for cloud deployments. In general terms it will become even easier for developers to publish and scale web apps and endpoints. This bodes well for software development as a whole.