Autoscale
Autoscaling is load balancing where the number of instances used to handle your traffic automatically adapts to match the current traffic load. On Unikraft Cloud, scale-out (the process of adding instances to cope with increased load) happens in milliseconds. You can transparently and effortlessly handle load increases, including traffic peaks. No more headaches from slow autoscaling, such as keeping hot instances around to absorb peaks, devising complex predictive algorithms, or other painful workarounds. Turn autoscale on and let Unikraft Cloud handle your traffic increases and peaks.
The basics
As with load balancing, autoscaling in Unikraft Cloud operates on a service. Services let you load balance traffic for an Internet-facing workload, such as a web server, by creating multiple instances within the same service.
While you can manually add instances to or remove instances from a service to scale it, doing so makes it hard to react to changes in traffic load, and keeping many instances running to cope with intermittent bursts is wasteful and expensive. This is where autoscale comes into play.
With autoscale enabled, Unikraft Cloud takes care of the heavy lifting for you by continuously monitoring the load of your service and automatically creating or deleting instances as needed.
Limited Access
At the moment, autoscale is not enabled by default (you might get an "Autoscale not enabled for your account" error).
If you would like to enable it, please reach out to the Unikraft Cloud Discord or send an email to support@unikraft.com.
Autoscale, as well as load balancing in general, currently supports only Internet-facing services.
Setting up autoscale
First, create an instance, in this example using NGINX:
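The original command is not reproduced here; a minimal sketch using the standard `kraft cloud deploy` flow (the port mapping and image tag are assumptions, and the service name used throughout this guide, `small-leaf-rafirkw7`, is auto-generated by the platform):

```shell
# Deploy an NGINX instance; -p exposes port 443 externally and forwards
# to port 80 inside the instance, creating a service in the process.
kraft cloud deploy -p 443:80 nginx:latest
```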
This single deploy or run flow does 3 things:

- Creates an instance of NGINX which will serve as the autoscale master instance.
- Creates a service via the `-p` flag (named `small-leaf-rafirkw7`).
- Attaches the instance to the service (the `-p` flag also does this automatically).
All that's left to do now is to define an autoscale configuration policy and designate the instance as master. Unikraft Cloud then takes care of cloning this master instance whenever load increases. To achieve this, use the legacy CLI `scale` command:
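The original commands are not shown; a sketch of what the three invocations could look like, assuming the legacy CLI is invoked as `kraft cloud scale` and that `--master`, `--min-size`, `--max-size`, `--name`, `--metric`, and the `--step` syntax are as written below (these flag spellings are assumptions; only `--warmup-time` and `--cooldown-time`, in milliseconds, are named in this guide):

```shell
# 1. Initialize autoscale: set the master instance (name assumed to be
#    "nginx"), limit the service to 1..8 instances, and use 1 s
#    warm-up/cool-down windows.
kraft cloud scale init small-leaf-rafirkw7 \
  --master nginx \
  --min-size 1 --max-size 8 \
  --warmup-time 1000 --cooldown-time 1000

# 2. Scale-out policy on CPU utilization: +50% instances between 60%
#    and 80% utilization, +100% (double) from 80% onward.
kraft cloud scale add small-leaf-rafirkw7 \
  --name scale-out --metric cpu \
  --step "60:80/50" --step "80/100"

# 3. Scale-in policy: halve the instance count below 50% utilization
#    (note the minus sign for scale-in).
kraft cloud scale add small-leaf-rafirkw7 \
  --name scale-in --metric cpu \
  --step "0:50/-50"
```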
Note the following:
- The first command sets the master to the created instance and configures it to scale up to a maximum of 8 instances and a minimum of 1; the command also sets the warm-up and cool-down time to 1 second each, so the instance count doesn't constantly fluctuate up and down.
- The second command sets the scale-out policy based on CPU utilization: between 60% and 80% utilization, the system increases the number of instances by 50%. From 80% onward, the number of instances doubles.
- The third command sets the scale-in policy: below 50% utilization, the system reduces the number of instances by half (note the `-` sign for scale-in).
Autoscale makes its scale-out and scale-in decisions at intervals controlled by the `--warmup-time` and `--cooldown-time` parameters of the `scale init` command, both in milliseconds.
Refer to the API autoscale reference for more details.
Keep in mind that a few restrictions apply to how you define scale-in/scale-out steps; they are documented at the bottom of the autoscale section of the API reference.
Testing it
To check it's working, you can use the legacy CLI `scale get` command to list the autoscale properties of the service.
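Assuming the legacy CLI is exposed as `kraft cloud scale` and using the example service name from above, the call might look like:

```shell
kraft cloud scale get small-leaf-rafirkw7
```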
To list an individual policy, pass its name to the legacy CLI `scale get` command.
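For instance, assuming the policy name follows the service name on the command line (`scale-out` here is the hypothetical policy name used earlier in this guide):

```shell
kraft cloud scale get small-leaf-rafirkw7 scale-out
```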
You can further check that the master instance is on standby (scaled to zero), assuming your service hasn't received any traffic yet. Get the UUID of your master instance from the legacy CLI `scale get` output above, then inspect the instance and note the value of its `state` field.
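Assuming a `kraft cloud instance get` subcommand (the subcommand name is an assumption; the UUID placeholder must be replaced with your own):

```shell
kraft cloud instance get <master-instance-uuid>
```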
Now, to make sure the service is up, curl the service address:
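For example, assuming the platform's default `*.kraft.host` DNS pattern for the example service (your service's address will differ; check your deployment output for the real one):

```shell
curl -s https://small-leaf-rafirkw7.fra0.kraft.host
```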
You should get an immediate response, even though the instance was on standby.
You can use a watch command to see if you catch the instance changing state from standby to running:
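One way to do this, assuming the `kraft cloud instance get` subcommand from above (UUID placeholder and subcommand name are assumptions):

```shell
# Poll the instance state twice a second; curl the service from another
# terminal and watch the state flip from standby to running.
watch -n 0.5 kraft cloud instance get <master-instance-uuid>
```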
Policy types
Four autoscale policy types are available. A service can have more than one policy active at the same time.
Step policy
The step policy scales instances based on metric thresholds.
You define up to 4 steps, each specifying a lower bound, upper bound, and the scaling change to apply when the metric falls in that range.
You should order steps by lower bound with no gaps between them and no overlaps.
Metrics
The following metrics can drive a step policy:
| Metric | Description |
|---|---|
| `cpu` | CPU utilization in millicores |
| `inflight_reqs` | Number of requests the platform is processing across all instances |
| `reqs_per_sec` | Request throughput in requests per second |
Scale change types
Step policies support three scaling change types:
| Type | Description |
|---|---|
| `change` | Change the instance count by the specified value (positive to scale out, negative to scale in) |
| `exact` | Set the instance count to exactly the specified value |
| `percent` | Change the instance count by the specified percentage of the current count |
On-demand policy
The on-demand policy creates a new instance immediately when an incoming request finds no available instances.
This prevents request queuing but introduces cold start delays.
POST /services
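The request body is not reproduced here; a hedged sketch of what an on-demand policy entry could look like (field names are assumptions, not the exact API schema):

```json
{ "type": "on_demand", "name": "scale-from-zero" }
```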
Create policy
The create policy provisions a new VM when an instance exceeds the `num_requests` threshold.
Setting `replace` to `true` deletes the original VM after the new one starts.
POST /services
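A sketch of a create policy entry using the two parameters described above (`num_requests` and `replace` are named in this guide; the surrounding field names and the threshold value are assumptions):

```json
{
  "type": "create",
  "name": "rolling-create",
  "num_requests": 1000,
  "replace": true
}
```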
Idle policy
The idle policy scales in (removes instances) when the service has been idle, receiving no requests, for a configurable period.
POST /services
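A sketch of an idle policy entry (all field names, including the timeout field and its unit, are assumptions; see the API autoscale reference for the real schema):

```json
{ "type": "idle", "name": "scale-in-when-idle", "timeout_ms": 30000 }
```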
Learn more
- The CLI reference and the legacy CLI reference.
- Unikraft Cloud's REST API reference, and in particular the section on autoscale.