Resources are limited. This is in many ways one of the defining realisations of our current age. We have limited oil, limited land, limited water, and so on.
System design in general is no different. Computationally backed systems have limited execution cores, limited volatile memory, and limited network bandwidth. The realisation for software engineering is that the systems being built cannot be built with a mindset of unlimited resources, but rather with the view that resources are bounded.
The question, then, is how do we go about introducing this notion into software in a pragmatic and useful manner?
This discussion approaches the problem from a top-down stance rather than a bottom-up one. That is, rather than expecting the programmer to allocate exact byte quantities at the lower levels, or to specify CPU cycles and the like, we should provide the mechanisms to push these bounds down from the top of the system as constraints.
In order to do this it is useful to think of any given system as covering and reserving a complete unit measure of some type of resource. That is, the system is allocated a “whole” that it can then use as it needs. Each type of resource is then provided to the system as a capability that enables it to allocate parts of the whole previously reserved for it.
In addition to explicitly allocating absolute quantities of resources via the capability provided, we also provide the mechanisms for describing relative allocations. That is, the means for a system to partition its resources into isolated chunks.
These partitions are guaranteed to enforce mutual isolation. That is, resource exhaustion in one partition will not directly result in resource exhaustion in another partition.
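As a concrete illustration, the reservation-and-partitioning scheme above might be sketched as follows. This is a minimal sketch under assumed names, not a definitive implementation: the `Partition` class, its `allocate`/`split` methods, and the quantities involved are all hypothetical choices for illustration.

```python
from dataclasses import dataclass, field

class PartitionExhausted(Exception):
    """Raised when an allocation would exceed a partition's reserved share."""

@dataclass
class Partition:
    """A capability over a share of some resource (here, abstract units of memory)."""
    capacity: int              # absolute quantity reserved for this partition
    used: int = 0
    children: list = field(default_factory=list)

    def allocate(self, amount: int) -> None:
        """Consume part of this partition's reserved share."""
        if self.used + amount > self.capacity:
            raise PartitionExhausted(f"requested {amount}, only {self.capacity - self.used} left")
        self.used += amount

    def split(self, *ratios: float) -> list["Partition"]:
        """Carve this partition into mutually isolated children by relative ratios."""
        assert abs(sum(ratios) - 1.0) < 1e-9, "ratios must cover the whole"
        self.children = [Partition(capacity=int(self.capacity * r)) for r in ratios]
        return self.children

# The system is handed one "whole" and subdivides it structurally:
whole = Partition(capacity=1024)                  # e.g. 1024 MiB reserved for the system
cache, queue, scratch = whole.split(0.5, 0.25, 0.25)

cache.allocate(512)                               # fills the cache partition completely
try:
    cache.allocate(1)                             # exhaustion in one partition...
except PartitionExhausted:
    pass
queue.allocate(100)                               # ...does not affect its siblings
```

Note that exhaustion surfaces as a failure local to one partition; the sibling partitions remain fully usable, which is exactly the mutual isolation guarantee.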
However, one might ask, how is this really more practical than the bottom-up approach? It is still difficult to know a priori that one should, for example, split the resources into three partitions with the ratios ½ + ¼ + ¼ = 1.
Instead, it is important to realise that the value of this approach lies not in knowing the exact partition sizes at the time of coding the system, but in knowing the structure of the partitions.
With this partition structure made explicit, choosing the sizes becomes a tuning exercise that can be performed after the fact. One could imagine many good ways of tuning these partitions:
- manually defining “good enough” partition ratios as defaults in the code.
- manually capturing telemetry from the running instances of the system and performing analysis in order to choose good values to apply to this and other system instances in the future.
- building machine learning mechanisms that can automatically tune the system partition ratios or absolute sizes relative to other constraints (e.g. system SLAs and resource costs).
- building a simulation framework to run many different scenarios in order to search for optimal solutions (again relative to general constraints or resource costs and SLAs) before moving to production.
- creating multiple isolated experiments in production in order to again traverse a search space.
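To make the tuning-after-the-fact idea concrete, one might separate the fixed partition structure from the tunable ratios, with the first option above (“good enough” defaults in code) as the fallback. This is only a sketch: the names `DEFAULT_RATIOS` and `load_ratios`, and the JSON configuration format, are invented for illustration.

```python
import json
from typing import Optional

# The *structure* of the partitions is fixed in code; only the ratios are tunable.
DEFAULT_RATIOS = {"cache": 0.5, "queue": 0.25, "scratch": 0.25}

def load_ratios(config_text: Optional[str]) -> dict:
    """Overlay externally tuned ratios onto the coded defaults."""
    ratios = dict(DEFAULT_RATIOS)
    if config_text:
        ratios.update(json.loads(config_text))
    assert abs(sum(ratios.values()) - 1.0) < 1e-9, "ratios must sum to 1"
    return ratios

# No tuning applied: fall back to the defaults baked into the code.
untuned = load_ratios(None)

# A profile derived from telemetry analysis rolls out without any code change:
tuned = load_ratios('{"cache": 0.7, "queue": 0.2, "scratch": 0.1}')
```

The point is that every tuning strategy in the list above, from manual analysis to machine learning, only ever touches the configuration input; the partition structure in the code never changes.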
In all cases, the logic of the system is not being adapted. Rather, we have explicitly acknowledged that this type of resource management is not a problem that can be meaningfully solved ahead of having a real system instance to observe. However, we rely on the architecture and design of the system to provide the structure relative to which the system can be tuned.
Now, in addition to tuning the resource capacities of different parts of the system, we can also begin to imagine having the system react usefully to the bounds imposed. More specifically, rather than simply aborting when memory is exhausted (as might happen in a C program) or entering a CPU-garbage-collection death spiral (as might happen in a Java program), we could introduce back-pressure. Here, the system can begin to internally process signals from the resource partition indicating that the system is approaching some hard limit and should start to take corrective action.
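A minimal sketch of such a back-pressure signal follows, assuming a hypothetical `MemoryPartition` that exposes its fill fraction and a high-watermark threshold; the names and the 80% threshold are illustrative, not prescribed.

```python
class MemoryPartition:
    """Hypothetical partition that signals pressure before its hard limit."""
    def __init__(self, capacity: int, high_watermark: float = 0.8):
        self.capacity = capacity
        self.used = 0
        self.high_watermark = high_watermark

    def pressure(self) -> float:
        """Fraction of the partition currently in use (0.0 to 1.0)."""
        return self.used / self.capacity

    def under_pressure(self) -> bool:
        return self.pressure() >= self.high_watermark

def handle_request(partition: MemoryPartition, cost: int) -> str:
    """Shed work early at the watermark rather than aborting at the hard limit."""
    if partition.under_pressure():
        return "rejected: retry later"     # corrective action, e.g. an HTTP 429 upstream
    partition.used += cost
    return "accepted"

part = MemoryPartition(capacity=100)
results = [handle_request(part, 10) for _ in range(10)]
# Requests below the 80% watermark are accepted; the remainder are shed.
```

The corrective action here is simply rejecting new work, but the same signal could equally drive cache eviction, batching, or load shifting to another partition.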
Again, with back-pressure, we would still need to tune the system. However, we could imagine that the system could be made more resilient to poor tuning, or more resilient under aberrant system load.
Another important benefit of this design approach to managing resources is that the same system can be deployed to very different execution environments depending on the expected load or available infrastructure. For example, one could imagine deploying a system to:
- a developer’s laptop with default “equal-sized” partitions and low system load for the purposes of correctness testing of logic.
- a constrained stress testing rig with well defined high load in which iterations of system tuning and induced failures are explored.
- a production environment with a global footprint and dynamic feedback loops for tuning.
- a production environment with a data-centre-local footprint and a precomputed tuning profile.
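These deployment targets could then be expressed as nothing more than profiles over the same partition structure. The profile names and figures below are invented for illustration; the point is only that the system's logic is identical across all of them.

```python
# Hypothetical deployment profiles: the same system, parameterised only by its
# resource envelope and partition tuning.
PROFILES = {
    "laptop":      {"memory_mib": 512,   "ratios": (1/3, 1/3, 1/3)},   # equal-sized defaults
    "stress-rig":  {"memory_mib": 2048,  "ratios": (0.5, 0.25, 0.25)}, # well-defined high load
    "prod-global": {"memory_mib": 65536, "ratios": None},              # tuned by dynamic feedback
    "prod-dc":     {"memory_mib": 32768, "ratios": (0.6, 0.3, 0.1)},   # precomputed profile
}

def resource_envelope(environment: str) -> dict:
    """Select the resource envelope for the target environment; logic is unchanged."""
    return PROFILES[environment]
```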
Ultimately, this approach should allow for making far more robust systems while additionally expanding the range of utility of a given system.