“To the Moon” is too specific of a goal
Making a Case for Full-Stack Decentralized Programming
I keep saying this: decentralized applications should develop and launch their own networks from the ground up. Why? Because running on someone else’s network leaves an application unable to define important aspects of its operation, governance, and economy. Application developers absolutely need this freedom, and this post describes why.
First, consider a common problem faced by any open decentralized network: throughput constraints. To avoid DoS attacks, the network must restrict the amount of resources each connected user can consume. Platforms solve this problem generically because they don’t know which particular use cases will be built on top of them in the future.
Generic solutions are just that: generic. At the platform level you can’t distinguish between different classes of users, tasks, or transactions, so there aren’t many ways a platform can limit bandwidth. The most effective solution is economic, such as transaction fees or token staking, which restricts the amount of resources a single user can consume.
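To make the contrast concrete, here is a minimal sketch, with entirely hypothetical type and field names, of the only kind of gate a generic platform can realistically apply: a single economic check, the same for every user and every transaction, with no notion of user class or task class.

```rust
// A minimal sketch (hypothetical types and names) of generic, platform-level
// rate limiting: every transaction is treated identically, and admission
// depends only on an economic check, not on who sends it or why.

struct Transaction {
    sender: String,
    fee_paid: u64,      // fee attached to the transaction
    gas_requested: u64, // resources the transaction wants to consume
}

struct GenericPlatform {
    min_fee_per_gas: u64,                           // one global price for everyone
    stakes: std::collections::HashMap<String, u64>, // staked tokens per account
    min_stake: u64,
}

impl GenericPlatform {
    /// Admit a transaction if the sender either pays the global fee or has
    /// enough tokens staked. The platform cannot tell a maintenance task
    /// from a transfer: all it sees is gas and fees.
    fn admit(&self, tx: &Transaction) -> bool {
        let fee_ok = tx.fee_paid >= tx.gas_requested * self.min_fee_per_gas;
        let stake_ok = self
            .stakes
            .get(&tx.sender)
            .map_or(false, |s| *s >= self.min_stake);
        fee_ok || stake_ok
    }
}

fn main() {
    let mut stakes = std::collections::HashMap::new();
    stakes.insert("alice".to_string(), 1_000);
    let platform = GenericPlatform { min_fee_per_gas: 2, stakes, min_stake: 500 };

    // Bob pays too little and has no stake, so the platform rejects him,
    // no matter how important or how rare his task actually is.
    let tx = Transaction { sender: "bob".to_string(), fee_paid: 10, gas_requested: 100 };
    assert!(!platform.admit(&tx));
}
```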
Now consider the needs of an application. It knows its own use case. It can create a rate-limiting model that fits it best. The spectrum of choices becomes much broader. Let me give you some examples:
- A certain class of users is allowed to perform expensive maintenance transactions for free. Why? Because any database requires maintenance, and decentralized networks are no exception. Any task that requires batch processing (expensive calculations, searching for inconsistencies, and so on) falls into this category. Rate limiting such transactions is simple: you allow, say, a single transaction per day in this category. Can this be done on existing platforms? Yes, but it requires a lot of additional work to adapt the tasks to the generic constraints of the platform, and added complexity is always bad.
- Rate-limiting mechanisms should reduce friction. Say you want a funds transfer between users to be free X times per day, with a transaction fee required thereafter. You would only allow this for users who have passed a certain trust threshold, similar to Reddit’s karma, and you could allow a different number of free transactions depending on the user’s reputation. Can you do this today? Not really. Applications have to bend over backwards to pay transaction fees on behalf of users.
- Say you want your application to have two tokens, a staking token and a currency token, and you want both to participate in your rate-limiting scheme. The staking token is used by long-term users, whereas the currency token is used by guest users. Can you do this today? Only by developing a new platform that does it, because there is almost no way to do it on any existing one.
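To illustrate, here is a rough sketch in Rust of how these policies might be expressed inside an application-owned node. All names, thresholds, and token mechanics are assumptions made up for this example, not any existing API; the point is simply that an application controlling its own node software can encode this logic directly instead of working around a platform’s generic fee model.

```rust
// A rough sketch (hypothetical names and thresholds) of an application-specific
// rate limiter that encodes the policies above: a free daily maintenance
// transaction, reputation-gated free transfers, and a two-token admission path.

enum TxKind {
    Maintenance,                // batch processing, consistency checks, etc.
    Transfer { fee_paid: u64 },
}

struct Account {
    karma: u64,               // reputation, similar to Reddit karma
    staked: u64,              // long-term users hold the staking token
    currency: u64,            // guest users pay with the currency token
    maintenance_today: u32,   // daily counters, reset once per day elsewhere
    free_transfers_today: u32,
}

struct AppPolicy {
    karma_threshold: u64, // trust level required for free transfers
    transfer_fee: u64,    // fee charged once the free quota is exhausted
    min_stake: u64,       // stake that substitutes for per-transaction fees
}

impl AppPolicy {
    /// How many free transfers an account gets per day, scaled by reputation.
    fn free_transfer_quota(&self, acct: &Account) -> u32 {
        if acct.karma >= self.karma_threshold {
            (acct.karma / self.karma_threshold) as u32 // more karma, more free slots
        } else {
            0
        }
    }

    /// Decide whether to admit a transaction and update the daily counters.
    fn admit(&self, acct: &mut Account, tx: &TxKind) -> bool {
        match tx {
            // One free maintenance transaction per day, for anyone.
            TxKind::Maintenance => {
                if acct.maintenance_today == 0 {
                    acct.maintenance_today += 1;
                    true
                } else {
                    false
                }
            }
            TxKind::Transfer { fee_paid } => {
                // Path 1: reputation grants a free daily quota.
                let quota = self.free_transfer_quota(acct);
                if acct.free_transfers_today < quota {
                    acct.free_transfers_today += 1;
                    return true;
                }
                // Path 2: long-term users cover transactions with their stake.
                if acct.staked >= self.min_stake {
                    return true;
                }
                // Path 3: guest users pay the fee in the currency token.
                *fee_paid >= self.transfer_fee && acct.currency >= *fee_paid
            }
        }
    }
}

fn main() {
    let policy = AppPolicy { karma_threshold: 100, transfer_fee: 5, min_stake: 1_000 };
    let mut guest = Account {
        karma: 0, staked: 0, currency: 20,
        maintenance_today: 0, free_transfers_today: 0,
    };

    // A guest gets one free maintenance transaction per day...
    assert!(policy.admit(&mut guest, &TxKind::Maintenance));
    assert!(!policy.admit(&mut guest, &TxKind::Maintenance));
    // ...but has to pay the currency-token fee to transfer funds.
    assert!(policy.admit(&mut guest, &TxKind::Transfer { fee_paid: 5 }));
}
```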
These examples are just an illustration of design choices we might want to consider. Obviously, a platform could implement any one of them, but if the platform is to serve a broad spectrum of applications, it can’t implement every conceivable design. The platform has to remain generic. My goal is to point out the limitations of being generic, not to claim that any particular model is intrinsically wrong.
Second, consider a special class of network users: transaction validators (miners, stakers, witnesses, whatchamacallits). These users provide the generic service of transaction validation. If an application needs other types of services, such as batch processing or expensive infrequent computations, it must enlist other users to provide them.
But the validators are in the perfect position to provide whatever other computational services the application might need! All they do is run software they download and earn money for doing so. The problem is that currently the software they run is generic platform software. Consequently, an application can’t just delegate some tasks to them, because such tasks are too specific.
This would be very different if the application had its own network, its own node software, and its own group of validators. Many possibilities would open up then — oracle services, batch processing, order book building for decentralized exchanges, and so on. The validators would not be limited to just one service, but could carry out many important tasks for the network. They would become an automatic support structure, highly aligned with the system’s goals. No more front-running of DEX orders.
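As a sketch of what that could look like, imagine the application’s node exposing a small service interface that every validator runs alongside consensus. The trait, the example services, and the scheduling function below are illustrative assumptions, not an existing API; rewards, determinism rules, and slashing are all left out.

```rust
// A minimal sketch (hypothetical trait and service names) of validators
// running application-specific services in addition to validating blocks.

trait ValidatorService {
    fn name(&self) -> &'static str;
    /// Run one unit of work; in a real node this would be scheduled
    /// alongside block production and rewarded in-protocol.
    fn run(&mut self);
}

struct PriceOracle {
    last_price: Option<f64>,
}

struct OrderBookBuilder {
    open_orders: Vec<(u64, u64)>, // (price, amount)
}

impl ValidatorService for PriceOracle {
    fn name(&self) -> &'static str { "price-oracle" }
    fn run(&mut self) {
        // Fetch an external price and make it available on-chain
        // (stubbed here with a constant).
        self.last_price = Some(42.0);
    }
}

impl ValidatorService for OrderBookBuilder {
    fn name(&self) -> &'static str { "orderbook-builder" }
    fn run(&mut self) {
        // Sort resting orders so that matching is deterministic for every
        // validator, leaving no room for an operator to front-run orders.
        self.open_orders.sort();
    }
}

/// Each consensus round, every validator runs the application's extra
/// services in addition to validating transactions.
fn validator_tick(services: &mut [Box<dyn ValidatorService>]) {
    for svc in services.iter_mut() {
        svc.run();
    }
}

fn main() {
    let mut services: Vec<Box<dyn ValidatorService>> = vec![
        Box::new(PriceOracle { last_price: None }),
        Box::new(OrderBookBuilder { open_orders: vec![(101, 5), (99, 3)] }),
    ];
    validator_tick(&mut services);
    for svc in &services {
        println!("ran service: {}", svc.name());
    }
}
```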
All of this points to an error in how we think about decentralized architectures. Accustomed to proof-of-work networks, where security depends heavily on network size, we still think exclusively in terms of generic platforms. But we don’t have to anymore: proof-of-stake consensus (Casper, BFT variants, and others) offers a way to build small networks with comparable security.
The one thing missing to make full-stack decentralized programming available to developers is an appropriate software development toolkit. Building your own blockchain network is indeed a task of significant complexity, but only because we haven’t yet built blockchain-builder software. Remember how compilers turned software development into a commodity? Remember how parser generators did the same for compilers? That’s what I’m talking about.
Ready for the future of decentralized programming? Stay tuned.