What's Unikraft?
Unikraft is the company behind Unikraft Cloud, a next-generation compute platform that leverages unikernel technology. It delivers millisecond cold boots and scale-to-zero autoscaling so you never pay for idle resources.
Can you do cloud-prem, or on-prem deployments?
Yes. Please get in touch to discuss dedicated hosting.
What's a unikernel?
A unikernel is a specialized virtual machine. Each target app gets a distro and kernel containing only the code it needs. Everything builds at compile time, before deployment: if an app doesn't need a line of code to run, that code doesn't get deployed. The result is packaged like any other virtual machine, with strong, hardware-level isolation. Unikraft packages unikernels as OCI images.
Aren't virtual machines heavyweight?
They need not be! Unikraft unikernels prove that the strong, hardware-level isolation that underpins public cloud deployments can be combined with the lightweight characteristics of containers and processes, such as millisecond cold boot times.
How much smaller are Unikraft images?
The answer depends on the app, but with Unikraft most (up to ~90%) of an image's size comes from the app itself. For example, an NGINX Unikraft image is under 2MB in size.
Why can't you just use a container?
When you deploy a container on the public cloud, it runs on top of a virtual machine in order to get strong, hardware-level isolation.
The container thus adds yet another layer of overhead between your app and the hardware.
With Unikraft, a Dockerfile creates the filesystem at build time.
Then, at deploy time, a lean unikernel runs that filesystem with maximum efficiency.
Do Unikraft unikernels come with security benefits?
Yes, especially stemming from the fact that they have a minimal Trusted Computing Base (TCB), and everything is off by default (services, ports, etc).
Can you run a service and a database together on Unikraft Cloud?
Definitely. Every instance on Unikraft Cloud has a private IP and DNS name. You can plug instances together. Follow this guide for instructions.
You have an access token and an app; what next?
You'll need a Dockerfile and Kraftfile. See any of the apps/langs guides here to see examples.
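As a sketch, the pair might look like the following, written out here as shell heredocs. The base image, field values, and command are illustrative, and the Kraftfile field layout (spec, runtime, rootfs, cmd) is assumed from the app guides, so check the guide for your language for the exact schema:

```shell
# Illustrative Dockerfile: it only assembles the root filesystem
# (app code, binaries, config) that the unikernel will run.
cat > Dockerfile <<'EOF'
FROM node:21-alpine AS build
WORKDIR /usr/src
COPY server.js .
EOF

# Illustrative Kraftfile: names the runtime to run the rootfs on and
# the command to start (field layout assumed from the app guides).
cat > Kraftfile <<'EOF'
spec: v0.6
runtime: node:21
rootfs: ./Dockerfile
cmd: ["/usr/src/server.js"]
EOF
```

From there, kraft builds the rootfs from the Dockerfile and pairs it with the runtime named in the Kraftfile at deploy time.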
What's kraft?
kraft is Unikraft Cloud's open source CLI tool written in Go.
You can control Unikraft Cloud with it as well as build unikernels and try things out locally.
How does Unikraft differ from Linux, and what about debugging?
Unikraft implements the Linux API so apps and languages run unmodified. A common concern is that unikernels are hard to debug. Unikraft includes a full GDB server, tracing, and other facilities, so debugging feels like Linux (see here). For observability, Unikraft ships a Prometheus library so a unikernel can export metrics to Grafana dashboards.
How's millisecond scale-to-zero achieved?
Magic 🪄. Kidding, it's the combination of using Unikraft unikernels to run workloads and a custom node ingress controller. This controller can react in milliseconds and scale to thousands of instances. Other optimizations to the underlying host also help. When traffic stops, your app stops, consumes no resources, and is charged $0. When the next request from your user arrives, the instance wakes in milliseconds and replies. Your user will be none the wiser.
Do you have cold boots on Unikraft Cloud?
If by cold boot you mean starting an instance up from zero, then yes, cold boots exist. What people usually mean, though, is a slow cold boot: seconds or minutes. On Unikraft Cloud, cold boots happen in milliseconds.
How do you achieve millisecond autoscale?
By leveraging fast cold boot times, and by coupling that with a reactive and scalable controller and proxy infrastructure. Autoscale like you've never seen it before.
Millisecond snapshots?
Yep. Instances on Unikraft Cloud are lean: your app, not the OS, determines their size, so snapshots are quick. Stateful scale-to-zero in milliseconds, anyone?
What do you need to get started?
Sign up to get an access token. If you're interested in an enterprise solution please write to us.
What about usability?
Unikraft puts great care and effort into providing seamless integration with major tools and frameworks like Docker, Kubernetes, Terraform, and Prometheus/Grafana.
Why Unikraft and what's wrong with current cloud offerings?
Current cloud stacks and offerings are over-bloated and over-priced. With Unikraft you can be sure that the resources you're consuming and paying for are going to your app, and your app only. When your app is idle, so is your bill. For Unikraft Cloud Enterprise, imagine running 1000s of instances with a couple of servers.
Does Unikraft provide cost savings?
Absolutely! Unikraft can provide cost savings in many ways:
- Scale-to-Zero: don't ever pay for idle again;
- Autoscale: don't ever pay for warm instances to cope with peaks;
- Fewer instances: higher, more efficient I/O means fewer instances can serve similar workloads; and,
- Server density: 1000s of instances on a single server mean fewer servers, and higher savings.
What's the relationship between the Unikraft open source project and Unikraft Cloud?
Unikraft OSS allows you to build and run unikernels locally via kraft run (and even hack the Unikraft OS itself if you like tinkering!).
When you're ready to deploy, switch to kraft cloud deploy.
Are you a replacement for Docker?
Definitely not!
Docker is great for dev environments and for building images.
In fact, Unikraft relies on Dockerfiles to specify the filesystem of images on Unikraft Cloud.
Having said that, when it's time to deploy, Unikraft places the resulting filesystem on a lean unikernel.
It's ready to run with hardware-level isolation and extreme efficiency.
Are you a competing technology to WASM?
No. You can read more about how Unikraft and WASM complement each other in our dedicated blog post on the matter. In short, WASM provides language-level isolation, whereas Unikraft provides hardware-level. When you deploy WASM to the public cloud, there will be a VM underneath for isolation. Maybe even a container runtime. The most efficient way to deploy WASM workloads on the public cloud is on Unikraft Cloud. The VM (the unikernel) itself has only the minimal code needed to run the WASM runtime.
Can you deploy within hyperscaler infra and connect to their services with Unikraft Cloud?
Yes. Unikraft supports metros (regions) within existing hyperscaler infra, and also connectivity to their services. For example, you could run an API server on Unikraft Cloud and connect to S3 as a storage back end. Please get in touch for more details.
Can you try Unikraft open source first?
Yes.
Head on over to unikraft.org, install the kraft tool with the one-liner there, and use kraft run to build, package and run your app locally.
When you're ready to deploy to cloud, set your Unikraft Cloud token and deploy via kraft cloud deploy.
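Put together, the flow might look like this. The install one-liner, the UKC_TOKEN/UKC_METRO variable names, the metro, and the port mapping are written from memory as a sketch; verify them on unikraft.org and in your dashboard:

```shell
# Install the kraft CLI (the one-liner from unikraft.org).
curl -sSfL https://get.kraftkit.sh | sh

# Build, package, and run the app locally first.
kraft run .

# Set your Unikraft Cloud token and a metro (region); the variable
# names and metro shown here are assumptions for this sketch.
export UKC_TOKEN=<your-access-token>
export UKC_METRO=fra0

# Deploy, mapping public port 443 to the app's port 8080.
kraft cloud deploy -p 443:8080 .
```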
Does Unikraft Cloud work on Mac/Windows/Linux?
Yes, the kraft tool runs on Mac, Windows, or Linux.
How can you track instances on Unikraft Cloud?
Unikraft provides full log support for all users. Paying customers get access to Prometheus metrics.
Are there other unikernel cloud platforms?
Other unikernel projects exist, but most are research efforts, unmaintained, or limited to a single app or language and aren't POSIX (Portable Operating System Interface) compliant.
Do you have a Kubernetes integration?
Yes. Unikraft Cloud integrates with Kubernetes; get in touch for details.
Do you have a Terraform integration?
Yes. Unikraft Cloud integrates with Terraform; get in touch for details.
When should you use Kraftkit stable and/or staging?
You should use stable versions when initially trying out the Unikraft Platform and its features. Any time you encounter a problem, it's good to first try the latest staging version before reporting the problem, as it might already have been fixed. There's also a chance that Kraftkit staging contains changes available only on non-stable nodes.
What keywords does Kraftkit ignore from a Dockerfile?
This is the current list of ignored keywords: EXPOSE, HEALTHCHECK, ONBUILD, SHELL, STOPSIGNAL, VOLUME.
kraft parses the ENV keyword, but the platform discards the result in the current stable version.
How does the image size and memory relate to boot times?
When using CPIO, there is a strong correlation between the image size and the boot time.
This is because the whole image needs to be loaded into memory first.
When using EROFS, there is no such correlation, as the operating system loads pages directly from disk, on demand.
At the same time, there is a slight, though not significant, correlation between large memory allocations and higher boot times, irrespective of the file system used.
What's the key difference between cold boots and warm boots?
In terms of end result, cold and warm boots are no different: the instance still works.
The server handles the request in both cases if the app is stateless.
The main difference appears in stateful apps and when considering latency, hence the --scale-to-zero-stateful flag.
A cold boot means starting the instance from scratch each time, and stopping it when no traffic has hit it for a while.
A warm boot resembles the suspend and resume operations of virtual machines: the platform saves the instance's memory to disk and restores it when the next request comes.
Warm boots offer clear advantages over cold boots: lower boot times and no setup time, with the trade-off of using space on disk. Thus, for most use cases, stateful scale-to-zero is the better alternative.
Cases exist where short-lived VMs are small enough to boot quickly but use a lot of memory. There, cold boots might use disk more efficiently. Experiment first, or discuss with a Unikraft engineer, to verify this.
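Using the --scale-to-zero-stateful flag mentioned above, opting into warm boots might look like this (the surrounding flags are illustrative):

```shell
# Stateful scale-to-zero (warm boots): the platform snapshots memory
# to disk on idle and restores it on the next request, trading disk
# space for faster, stateful wake-ups.
kraft cloud deploy -p 443:8080 --scale-to-zero-stateful .
```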
What can you push as an image?
An image is specifically a pair of a kernel and a rootfs bundled together as different layers of an OCI package. To keep things light, certain optimizations are in place to avoid re-pushing existing kernels and to reuse them from your account. If these aren't detected automatically, kraft pushes the kernel again.
At the same time, things like ROMs use the same packaging system, but push images with only a rootfs and no kernel. These carry specific metadata that identifies them as ROMs.
Finally, kernels can also run without a rootfs. They don't do anything, but support exists to run them.
How to replicate the ENV/ENTRYPOINT of a container?
To reproduce the starting environment without specifying extra arguments on the command line, you can replace your Kraftfile's cmd: with the path to a script.
In that script you can set all the ENVs from the Dockerfile and invoke the binary however you want.
You can also use the env: field in the Kraftfile.
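As a sketch of the env: variant (the exact layout of the field is assumed from the app guides, and the variables shown are examples, not requirements):

```shell
# Illustrative Kraftfile carrying the container's ENVs in the env:
# field instead of a wrapper script (field layout assumed).
cat > Kraftfile <<'EOF'
spec: v0.6
runtime: node:21
rootfs: ./Dockerfile
env:
  NODE_ENV: production
  PORT: "8080"
cmd: ["/usr/src/server.js"]
EOF
```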
When building a rootfs directly from a DockerHub image, or from a Dockerfile that starts from one, you should inspect the environment set by that image (for example, library/node).
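One way to do that inspection, sketched with standard Docker commands (the .Config.Env template path is Docker's inspect format; swap in the image and tag you actually build from):

```shell
# Pull the base image and print the environment variables it sets;
# these are the ones to replicate via --env or the Kraftfile.
docker pull library/node
docker image inspect library/node --format '{{json .Config.Env}}'
```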
If you are using the Unikraft OS, you need to specify those on the command line with --env.
Note: ENTRYPOINT is a recognised keyword in the Dockerfile so it should propagate up into the image.