Kansible

Kansible lets you orchestrate processes in the same way as you orchestrate your Docker containers with Kubernetes.

kansible logo

Kansible uses:

  • Ansible to install, configure and provision your software onto machines using playbooks
  • Kubernetes to run and manage the processes and perform service discovery, scaling and load balancing.

Kansible provides a single pane of glass, CLI and REST API to all your processes, whether they are inside Docker containers or running as vanilla processes on Windows, AIX, Solaris, HP-UX or old Linux distros that predate Docker.

Kansible lets you migrate gradually to a pure container-based Docker world while using Kubernetes to manage all your processes.

Features

  • All your processes appear as Pods inside Kubernetes namespaces so you can visualise, query and watch the status of your processes and containers in a canonical way
  • Each kind of process has its own Replication Controller to ensure processes keep running, so you can manually or automatically scale the number of processes up or down, up to the number of hosts in your Ansible inventory
  • Reuse Kubernetes liveness checks so that Kubernetes can monitor the state of your process and restart it if it goes bad
  • Reuse Kubernetes readiness checks so that Kubernetes knows when your process can be included in the internal or external service load balancer
  • You can view the logs of all your processes in the canonical Kubernetes way via the CLI, REST API or web console
  • You can open a shell into the remote process machine via the CLI, REST API or web console
  • Port forwarding works from the pods to the remote processes so that you can reuse Kubernetes Services to load balance across your processes automatically
  • Centralised logging, metrics and alerting work equally across your containers and processes

How it works

You use kansible as follows:

  • create an Ansible playbook to install and provision the software you wish to run on a number of machines defined by the Ansible inventory
  • run the Ansible playbook, either as part of a CI / CD build pipeline when there's a change to the git repo of the playbook, or using a command line tool, cron or Ansible Tower
  • define a Replication Controller YAML file at kubernetes/$HOSTS/rc.yml for running the command for your process, like the example in the Configuration section below
  • the RC YAML file contains the command you need to run remotely to execute your process via $KANSIBLE_COMMAND. You can use {{ foo_bar }} Ansible variable expressions to refer to variables from your global Ansible variables file
  • whenever the playbook git repo changes, run the kansible rc command inside a clone of the playbook git repository:
    kansible rc myhosts

where myhosts is the name of the host group in the Ansible inventory you wish to use.

Kansible will then create/update Secrets for any SSH private keys in your Ansible inventory and create a Replication Controller of kansible pods, which will start and supervise your processes.

So for each remote process on Windows, Linux, Solaris, AIX or HP-UX, kansible creates a kansible pod in Kubernetes which starts the command and tails the log to stdout/stderr. You can then use Replication Controller scaling to start/stop your remote processes!
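Putting it together, a typical end-to-end flow looks like this; this is only a sketch, where the repository URL and the myhosts group name are placeholders:

    git clone https://example.com/your-playbook-repo.git
    cd your-playbook-repo
    ansible-playbook -i inventory provisioning/site.yml
    kansible rc myhosts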

Using kansible

  • As processes start and stop, you'll see the pods appear or disappear inside Kubernetes via the CLI, REST API or web console.
  • You can scale the Replication Controller up and down via the CLI, REST API or web console.
  • You can view the logs of any process in the usual Kubernetes way via the command line, REST API or web console.
  • Centralised logging then works great across all your processes (provided the command you run writes its logs to stdout / stderr).
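For example, using the OpenShift CLI as in the examples below (the RC and pod names are placeholders):

    oc get pods
    oc scale rc --replicas=3 myapp
    oc logs -f mypodname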

Exposing ports

Any ports defined in the Replication Controller YAML file will be automatically forwarded to the remote process.

This means you can take advantage of things like centralised metrics and alerting, or Kubernetes Services with the built-in service discovery and load balancing inside Kubernetes!
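For example, a minimal sketch of the ports section in the rc.yml, assuming your process listens on port 8080:

    spec:
      template:
        spec:
          containers:
          - ports:
            - containerPort: 8080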

Opening a shell on the remote process

You can open a shell directly on the remote machine via the web console or by running

oc exec -p mypodname bash
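On a plain Kubernetes cluster the equivalent uses kubectl (mypodname is a placeholder):

    kubectl exec -it mypodname -- bash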

Examples

To try out one of the example Ansible-provisioned applications, do the following:

  • Download a release and add kansible to your $PATH
  • or build kansible, then add the $PWD/bin folder to your $PATH so that you can run kansible from the command line

These examples assume you have a working Kubernetes or OpenShift cluster running.

To get started quickly, try using the fabric8 Vagrant image, which includes OpenShift Origin as the Kubernetes cluster.

Type the following to set up the VMs and provision them with Ansible:

    git clone https://github.com/fabric8io/fabric8-ansible-spring-boot.git
    cd fabric8-ansible-spring-boot
    vagrant up
    ansible-playbook -i inventory provisioning/site.yml -vv

You should now have 2 sample VMs (app1 and app2) with a Spring Boot based Java application provisioned into the /opt folder on each machine, but with nothing actually running yet.

Now, to set up the kansible Replication Controller, run the following, where appservers is the name of the host group from the Ansible inventory:

    kansible rc appservers

This should create a Replication Controller called springboot-demo along with 2 pods, one for each host in the appservers group.

You should be able to look at the logs of those 2 pods in the usual Kubernetes / OpenShift way.

e.g.

    oc get pods 
    oc logs -f springboot-demo-81ryw 

where springboot-demo-81ryw is the name of the pod whose logs you wish to view.

You can now scale the number of pods up or down using the web console or the command line:

    oc scale rc --replicas=2 springboot-demo

Important files

The examples use the following files:

  • the Ansible inventory, which defines the hosts to provision and supervise
  • provisioning/site.yml, the Ansible playbook that installs and provisions the software
  • kubernetes/$HOSTS/rc.yml, the Replication Controller template that kansible rc uses to create the pods

There is a second example based on hawtapp. Type the following to set up its VMs and provision them with Ansible:

    git clone https://github.com/fabric8io/fabric8-ansible-hawtapp.git
    cd fabric8-ansible-hawtapp
    vagrant up
    ansible-playbook -i inventory provisioning/site.yml -vv

Now, to set up the Replication Controller for the supervisors, run the following, where appservers is the name of the host group from the inventory:

    kansible rc appservers

The pods should now start up for each host in the inventory!

Using Windows

To use Windows machines you first need to make sure you've installed pywinrm:

sudo pip install pywinrm

To try using Windows machines, replace appservers with winboxes in the above commands, assuming you have created the Windows Vagrant machine locally.

Or you can add the Windows machine to the appservers hosts section in the inventory file.

Configuration

To configure kansible you need to configure a Replication Controller in a file called kubernetes/$HOSTS/rc.yml.

Specify a name and optionally some labels for the replication controller inside the metadata object. There's no need to specify the spec.selector or spec.template.metadata.labels values, as those are inherited by default from metadata.labels.

You can specify the following environment variables in the spec.template.spec.containers[0].env array, as with KANSIBLE_COMMAND below.

These values can use Ansible variable expressions too.

KANSIBLE_COMMAND

Then you must specify a command to run via the $KANSIBLE_COMMAND environment variable:

---
apiVersion: "v1"
kind: "ReplicationController"
metadata:
  name: "myapp"
  labels:
    project: "myapp"
    version: "{{ app_version }}"
spec:
  template:
    spec:
      containers:
      - env:
        - name: "KANSIBLE_COMMAND"
          value: "/opt/foo-{{ app_version }}/bin/run.sh"
      serviceAccountName: "fabric8"

KANSIBLE_COMMAND_WINRM

This environment variable lets you provide a Windows-specific command. It works the same as the KANSIBLE_COMMAND environment variable above, but its value is only used for Ansible connections of the form winrm, i.e. to supply a Windows-only command to execute.

It's quite common to have a foo.sh script to run sh/bash scripts on Unix and a foo.bat or foo.cmd file for Windows.
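For example, a minimal sketch following the pattern of the rc.yml above, with placeholder paths:

    containers:
    - env:
      - name: "KANSIBLE_COMMAND"
        value: "/opt/foo-{{ app_version }}/bin/run.sh"
      - name: "KANSIBLE_COMMAND_WINRM"
        value: "C:\\opt\\foo-{{ app_version }}\\bin\\run.bat"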

KANSIBLE_EXPORT_ENV_VARS

Specify a space-separated list of environment variable names which should be exported into the remote shell when running the remote command.

Note that a typical sshd_config only accepts exported environment variables whose names start with LC_*, so you may need to configure sshd via /etc/ssh/sshd_config on the remote machines to accept the others.
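For example, a sketch assuming you want to export two hypothetical variables FOO and BAR:

    # in the rc.yml env array
    - name: "KANSIBLE_EXPORT_ENV_VARS"
      value: "FOO BAR"

    # in /etc/ssh/sshd_config on each remote machine
    AcceptEnv LC_* FOO BAR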

KANSIBLE_BASH

This defines the path where the bash script will be generated for running a remote bash shell. This allows the command bash inside the kansible pod to remotely execute /bin/bash (or cmd.exe on Windows machines) on the remote machine when you open a shell inside the Web Console or via:

    oc exec -p mypodname bash

KANSIBLE_PORT_FORWARD

Allows port forwarding to be disabled.

export KANSIBLE_PORT_FORWARD=false

This is mostly useful so that the bash command within a pod does not also try to port forward, as that would fail ;)

SSH or WinRM

The best way to configure whether kansible should connect via SSH (for Unix machines) or WinRM (for Windows machines) is via the Ansible inventory.

By default SSH is used on port 22 unless you specify ansible_port in the inventory or specify --port on the command line.

You can configure Windows machines using the ansible_connection=winrm property in the inventory:

[winboxes]
windows1 ansible_host=localhost ansible_port=5985 ansible_user=foo ansible_pass=somepasswd! ansible_connection=winrm

[unixes]
app1 ansible_host=10.10.3.20 ansible_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/app1/virtualbox/private_key
app2 ansible_host=10.10.3.21 ansible_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/app2/virtualbox/private_key

You can also enable WinRM via the --winrm command line flag:

kansible pod --winrm somehosts somecommand

or by setting the KANSIBLE_WINRM environment variable, which is a little easier to configure in the RC YAML:

export KANSIBLE_WINRM=true
kansible pod somehosts somecommand

Checking the runtime status of the supervisors

To see which pods own which hosts, run the following command:

    oc export rc hawtapp-demo | grep ansible.fabric8  | sort

Where hawtapp-demo is the name of the RC for the supervisors.

The output is of the format:

    pod.ansible.fabric8.io/app1: supervisor-znuj5
    pod.ansible.fabric8.io/app2: supervisor-1same

i.e. each line is of the form pod.ansible.fabric8.io/$HOSTNAME: $PODNAME.
