WOPR demo:

```
docker run --rm -ti \
    --device /dev/kvm:/dev/kvm \
    --device /dev/net/tun:/dev/net/tun \
    --cap-add NET_ADMIN \
    mato/unikernel-wopr
```

Telnet to `<container-ip>`, port 4096.
Mathopd web server:

```
docker run --rm -ti \
    --device /dev/kvm:/dev/kvm \
    --device /dev/net/tun:/dev/net/tun \
    --cap-add NET_ADMIN \
    mato/unikernel-mathopd
```

Browse to `http://<container-ip>/`.
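To find `<container-ip>` for either demo, you can ask Docker for the container's address on the default bridge network. This is a sketch assuming a running Docker daemon; the container name `wopr` is hypothetical and would be set with `--name` when starting the demo:

```shell
# Hypothetical container name "wopr", set via: docker run --name wopr ...
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' wopr

# Then connect to the demo, e.g. for WOPR:
#   telnet <container-ip> 4096
```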
Unikernel-runner provides a base image for running rumprun unikernels as Docker containers using KVM, fully integrated with Docker networking.

To use this image as a base, a child image inherits `FROM mato/unikernel-runner` and must adhere to the following structure:

```
/unikernel/unikernel.bin
/unikernel/config.json
/unikernel/fs/<volume>.img
```
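As a sketch, a minimal child image Dockerfile following this structure might look like the following; the file names `app.bin` and `data.img` are hypothetical placeholders for your own unikernel binary and filesystem image:

```dockerfile
# Sketch of a child image; app.bin and data.img are hypothetical file names.
FROM mato/unikernel-runner
COPY app.bin     /unikernel/unikernel.bin
COPY config.json /unikernel/config.json
COPY data.img    /unikernel/fs/data.img
```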
`unikernel.bin` is the unikernel binary. Only the rumprun `hw/x86_64` platform is currently supported, and the image must be baked using the `hw_virtio` configuration.

`config.json` is an optional JSON configuration to be passed to the unikernel. The configuration must follow the work-in-progress "Rumprun unikernel configuration" specification (see NOTE below) and, in addition:

- must not include a `net` object; this will be generated by unikernel-runner.
- if it includes a `mount` object, must not define any mountpoints using `/dev/ld*` block devices; these are generated by unikernel-runner.

Each file under `/unikernel/fs` is assumed to be a filesystem image. Unikernel-runner will automatically generate configuration to mount `imagename.img` as `/imagename` in the unikernel.
Refer to `Dockerfile.wopr`, `Dockerfile.wopr-build`, `Dockerfile.mathopd` and `Dockerfile.mathopd-build` for examples of Dockerfiles which build unikernels using unikernel-runner as a base image.
NOTE: The work-in-progress rumprun configuration parser used by unikernel-runner has not yet been merged into rumprun master. When building unikernels for unikernel-runner, be sure to use a toolchain built off the `mato-wip-rumprun-config` branch, also available as the `mato/rumprun-toolchain-hw-x86_64:wip-rumprun-config` image on Docker Hub.
Start the container as you would any other Docker container, with the following additional options:

```
--device /dev/kvm:/dev/kvm
--device /dev/net/tun:/dev/net/tun
--cap-add NET_ADMIN
```
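Putting those options together, a run of a child image (hypothetical image name `my-unikernel`) would look like:

```shell
# Run a unikernel-runner-based image with KVM and Docker networking wired up.
# "my-unikernel" is a hypothetical child image name.
docker run --rm -ti \
    --device /dev/kvm:/dev/kvm \
    --device /dev/net/tun:/dev/net/tun \
    --cap-add NET_ADMIN \
    my-unikernel
```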
If you do not pass `/dev/kvm` into the container, the unikernel will be run using software emulation only. `CAP_NET_ADMIN` and access to `/dev/net/tun` are required for unikernel-runner to be able to wire L2 network connectivity from Docker to the unikernel guest. For additional security, unikernel-runner invokes the included QEMU binary as a non-root user.
The build process for unikernel-runner is containerized. However, due to the need to use intermediate containers to separate the different toolchains, it cannot be run as a single container. Thus, you will need both `docker` and `make` available in your development environment. To build everything from source, just run `make`.