Auto Deploy is a feature of VMware vSphere that enables booting VMware ESXi hosts directly from the network instead of a local storage device.  The boot process starts via PXE, loading an agent that ultimately pulls ESXi images from the Auto Deploy HTTP service on your vCenter Server Appliance.  VMware customers who use this deployment model tend to be on the larger side, so performance and scale are a natural concern.

The latest release of vSphere – version 6.5 – included quite a few enhancements to Auto Deploy, as well as to Host Profiles, that make this stateless option much more approachable and easier to operate.  One nice improvement is the ability to easily configure reverse caching proxies to offload all of that HTTP traffic generated by booting hosts.  It’s an optional architecture, but nice to have available when planning for large numbers of concurrent boots.

I wrote about configuring this feature on the vSphere Blog, but here I will explain how I made the Docker container running Nginx so you can build your own, if you like, and feel more confident about the internals. Since the container is based on the official Nginx image, there is very little else that needs to be done.

Nginx Configuration File

This slim nginx.conf does nothing fancy, so there may be some room left to optimize, but it works on my machine™. Create a directory and save it as nginx.conf.template.simple wherever you’re going to run your Docker client.
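The template below is a sketch of such a configuration rather than a copy of the original. The ${VCSA_ADDRESS} and ${VCSA_PORT} placeholders are my own convention, filled in when the container starts, and 6501 is the port the Auto Deploy service typically listens on.

```nginx
# Sketch of a caching reverse proxy for Auto Deploy. The ${...} tokens
# are envsubst placeholders, not Nginx variables.
events {
    worker_connections 1024;
}

http {
    proxy_cache_path /var/cache/nginx keys_zone=autodeploy:10m max_size=1g;

    server {
        # Port the proxy itself listens on (arbitrary choice).
        listen 5100;

        location / {
            # Adjust the scheme and port if your endpoint differs.
            proxy_pass        https://${VCSA_ADDRESS}:${VCSA_PORT};
            proxy_cache       autodeploy;
            proxy_cache_valid 200 30m;
        }
    }
}
```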


The only other requirement to build your own image is to create a Dockerfile in the same directory.
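A Dockerfile along these lines should do, assuming envsubst is available in the official Nginx image (it ships with recent versions); the environment variable names are my own:

```dockerfile
# Start from the official Nginx image and render the template at runtime.
FROM nginx
COPY nginx.conf.template.simple /etc/nginx/nginx.conf.template.simple
CMD envsubst '$VCSA_ADDRESS $VCSA_PORT' \
      < /etc/nginx/nginx.conf.template.simple \
      > /etc/nginx/nginx.conf \
    && nginx -g 'daemon off;'
```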

Build and Run

Once the two files are in place, build it like any other Docker image, for example:
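Something like this, with an arbitrary tag name:

```shell
docker build -t autodeploy-proxy .
```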

Then, start up the container on a suitable Linux VM (Photon OS works perfectly) and try it out:
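A hypothetical invocation, with placeholder names and addresses. The /vmw/rbd/tramp path is the small iPXE script Auto Deploy serves, which makes a handy smoke test:

```shell
# Map the proxy port and point the container at the vCenter Server
# Appliance via the environment variables the template expects.
docker run -d --name autodeploy-proxy -p 5100:5100 \
    -e VCSA_ADDRESS=vcsa.example.com -e VCSA_PORT=6501 \
    autodeploy-proxy

# From another machine, confirm the proxy answers:
curl -I http://photon-host.example.com:5100/vmw/rbd/tramp
```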

The nice thing about that Nginx image is that the logs are configured to go to stdout and stderr, so you can view them without much effort through the docker logs command.
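Assuming the container was named autodeploy-proxy (an arbitrary choice):

```shell
docker logs -f autodeploy-proxy
```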


Please note that this proof of concept is intended to get up and running quickly and, as a result, does not incorporate SSL certificates, so access to the proxy is not secured over HTTPS.  For a production rollout, SSL may be an important consideration. Fortunately, this is easy to do: just create certificate and key files, copy them into the container during the Docker build, and add the appropriate lines to the Nginx config file.
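Hypothetically, the extra Nginx directives in the server block might look like this (the file paths are placeholders for certificates copied in during the build):

```nginx
listen 5100 ssl;
ssl_certificate     /etc/nginx/ssl/proxy.crt;
ssl_certificate_key /etc/nginx/ssl/proxy.key;
```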

Auto Deploy is an innovative approach to infrastructure management, and this new enhanced reverse proxy ability can help with performance and scalability.  If you try this out, let me know how it goes!



For my VMworld 2016 breakout sessions this year, I wanted to demonstrate new functionality that was added to Auto Deploy.  After exploring a few ideas, I settled on leveraging the new Script Bundle feature to send a post-boot tweet directly from stateless ESXi hosts.  I figured sending a tweet would be an effective way to highlight the ability to integrate with arbitrary REST-based services. Interestingly, I ran into someone recently who was part of a DevOps team that used a private Twitter account for posting various alerts for the group to see – so it’s not as far-fetched as it seems!

I heard you like tweeting about vSphere.

Below is a Python script that can send a tweet directly from the console of a VMware ESXi 6.0 host.  vCenter Server 6.5 with Auto Deploy supports multiple versions of ESXi, and I chose to use 6.0 here.  Note that ESXi 6.5 includes Python 3, so this script would need some modification to work with that release.  I got a head start on the functionality by taking some ideas from Chris Wood.

In order to authenticate with Twitter, it is first necessary to visit the Twitter App Management portal to generate a consumer key and access token.  Plug them into the script accordingly.
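The original script is not reproduced here, but a minimal sketch of the approach (OAuth 1.0a request signing with only the standard library) might look like this. All names, keys, and structure are placeholders of my own, not the original demo code; the import fallback lets it run on the Python 2.7 interpreter in ESXi 6.0 as well as Python 3.

```python
#!/usr/bin/env python
# Hypothetical sketch of a tweet-from-ESXi script; standard library only.
import base64
import hashlib
import hmac
import random
import sys
import time

try:                                        # Python 2 (ESXi 6.0)
    from urllib2 import Request, urlopen
except ImportError:                         # Python 3
    from urllib.request import Request, urlopen

# Fill these in from the Twitter App Management portal.
CONSUMER_KEY = "your-consumer-key"
CONSUMER_SECRET = "your-consumer-secret"
ACCESS_TOKEN = "your-access-token"
ACCESS_SECRET = "your-access-token-secret"

API_URL = "https://api.twitter.com/1.1/statuses/update.json"
UNRESERVED = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
              "abcdefghijklmnopqrstuvwxyz"
              "0123456789-._~")

def percent_encode(s):
    """RFC 3986 percent-encoding, as OAuth 1.0a requires."""
    out = []
    for byte in bytearray(s.encode("utf-8")):
        ch = chr(byte)
        out.append(ch if ch in UNRESERVED else "%%%02X" % byte)
    return "".join(out)

def oauth_header(status):
    """Build an OAuth 1.0a HMAC-SHA1 Authorization header for one tweet."""
    oauth = {
        "oauth_consumer_key": CONSUMER_KEY,
        "oauth_nonce": "%030x" % random.getrandbits(120),
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_token": ACCESS_TOKEN,
        "oauth_version": "1.0",
    }
    # The signature base string covers the request parameters, sorted.
    params = dict(oauth, status=status)
    param_str = "&".join("%s=%s" % (percent_encode(k), percent_encode(params[k]))
                         for k in sorted(params))
    base = "&".join(["POST", percent_encode(API_URL), percent_encode(param_str)])
    key = percent_encode(CONSUMER_SECRET) + "&" + percent_encode(ACCESS_SECRET)
    digest = hmac.new(key.encode("utf-8"), base.encode("utf-8"),
                      hashlib.sha1).digest()
    oauth["oauth_signature"] = base64.b64encode(digest).decode("ascii")
    return "OAuth " + ", ".join('%s="%s"' % (percent_encode(k),
                                             percent_encode(oauth[k]))
                                for k in sorted(oauth))

def tweet(status):
    """POST the status update to the Twitter REST API."""
    body = ("status=" + percent_encode(status)).encode("utf-8")
    req = Request(API_URL, data=body)
    req.add_header("Authorization", oauth_header(status))
    req.add_header("Content-Type", "application/x-www-form-urlencoded")
    return urlopen(req)

if __name__ == "__main__" and len(sys.argv) > 1:
    tweet(sys.argv[1])
```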

The script accepts one argument: a string, wrapped in quotes, that will be posted to your Twitter timeline.

./ "Sent from an ESXi host"

The demo I ran at the INF8920 breakout session was slightly different because scripts cannot accept arguments in that case. Hopefully the recording will eventually be posted on the VMworld site, but its fate is unclear at the moment.

For more information on the Script Bundle feature, check out William Lam’s recent post on the topic.


VMware just hit the next milestone of Project Photon: Photon OS Technology Preview 2 (TP2).  There are numerous enhancements, especially around deployment and management.  One welcome feature is support for guest OS customization in vSphere – now it is possible to deploy by cloning a VM template or from the new Content Library. DHCP as well as static IP addressing are supported, along with the expected guest naming capabilities.

In addition to that, Photon OS TP2 supports network booting via PXE, which can be scripted. Let’s take a look.

Network Installation

First, download the TP2 ISO from the link above and extract the contents in a convenient location.  Files will need to be copied to a few different destinations, depending on how you have your PXE boot server set up.

The boot files, as with other Linux distributions, are served up via TFTP.  Place initrd.img and vmlinuz in a suitable subdirectory of tftpboot.

The RPM package repository (the “RPMS” directory on the ISO) must be served through HTTP, typically in a location resembling /var/www/html/photontp2-RPMS/.
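As a sketch, with paths adjusted to taste (the boot files commonly sit under isolinux/ on the ISO):

```shell
# Mount the ISO, stage the boot files for TFTP, and publish the
# RPM repository over HTTP. All destination paths are examples.
mount -o loop photon-tp2.iso /mnt/iso
cp /mnt/iso/isolinux/vmlinuz /mnt/iso/isolinux/initrd.img /var/lib/tftpboot/photon/
cp -r /mnt/iso/RPMS /var/www/html/photontp2-RPMS
umount /mnt/iso
```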

After those files are in place, edit the PXE menu (e.g., pxelinux.cfg/default) to add an entry reflecting the locations in your environment:
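For example (labels, paths, and the server name are placeholders; check the kernel arguments against the samples shipped on the ISO):

```
label photon-tp2
    menu label Photon OS TP2 (manual install)
    kernel photon/vmlinuz
    append initrd=photon/initrd.img root=/dev/ram0 loglevel=3 repo=http://pxe.example.com/photontp2-RPMS
```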

From there you can install manually over the network after booting an empty VM.  The full installation should take less than a minute – it’s very small!

Scripted Installation

Once the manual PXE installation is working in your environment, it’s easy enough to automate the process.  Photon OS TP2 supports a simple scripted install, kind of like kickstart.  There are a few differences, though.  The most obvious is the format – instead of a plain text file, TP2 uses JSON.  This is easy enough to edit by hand, but would also facilitate automation in the future if necessary for your use case.

The scripted install file must also be served through HTTP, so place it on an accessible server in a location such as: /var/www/html/ks/photon_tp2_crypt.cfg.

There are sample configuration files included with the distribution, and below you can see the various elements that can be customized.
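The original sample is not reproduced here, but based on the elements listed below, a scripted install file might look roughly like this. Field names and values are illustrative, so check them against the samples shipped with the distribution:

```json
{
    "hostname": "photon-tp2",
    "password": {
        "crypted": true,
        "text": "$6$rAnDoMsAlT$replace-with-your-hash"
    },
    "type": "minimal",
    "additional_packages": ["vim"],
    "postinstall": [
        "#!/bin/sh",
        "systemctl enable docker"
    ],
    "public_key": "ssh-rsa AAAA-your-key root@example"
}
```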

Photon OS TP2 scripted installation file

The file above should be fairly self-explanatory, but let’s walk through the highlights:

  • The root password can be specified in plain text or via encryption hash
  • Install type can be minimal (includes Docker) or full
  • Additional packages can be specified by adding elements to that JSON array
  • The postinstall section allows running a simple script at the conclusion of the installation – add a comma and more elements as needed
    • Note that in this sample I am using the systemctl command that enables the Docker service on boot
  • Public key is for SSH root login

Create another entry on your PXE menu that points to the installation script, like so:
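The entry is the same as the manual one, with an extra kernel argument pointing at the scripted install file (names and paths are placeholders):

```
label photon-tp2-ks
    menu label Photon OS TP2 (scripted install)
    kernel photon/vmlinuz
    append initrd=photon/initrd.img root=/dev/ram0 loglevel=3 repo=http://pxe.example.com/photontp2-RPMS ks=http://pxe.example.com/ks/photon_tp2_crypt.cfg
```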

Generating a Password Hash

There are several ways to generate a password hash and multiple algorithms are supported. In my environment, SHA-512 with a random salt worked great.  Either copy an existing hash from another system or generate a new one.  One easy way to do this is to use the mkpasswd command, found in the whois package on Ubuntu systems.  If you want an easy way to try it, this Docker container should do the trick:
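One way to do it; the image and package names here are Ubuntu’s, and mkpasswd prompts for the password interactively:

```shell
docker run --rm -it ubuntu bash -c \
    "apt-get -qq update && apt-get -qq install -y whois >/dev/null && \
     mkpasswd -m sha-512"
```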


Photon OS is a small, fast, container runtime that is optimized for VMware vSphere infrastructure.  Paravirtualized drivers and VMware Tools are included and make setup a snap.  Enhancements in TP2, such as guest OS customization, make Photon OS even more attractive for your container needs.  Network installation and automation are other great additions for operationalizing this open source element of your cloud-native infrastructure.



Project Photon from VMware is a small-footprint Linux container runtime.  Technology Preview 1, released on April 20, shipped with Docker 1.5 – but with a few simple commands it is easy to update to Docker 1.6.  This is done with the Photon package manager, TDNF.  For those that were not aware, Yum is Dead and being replaced by DNF.  TDNF is a VMware innovation that offers DNF-compatible package management without a massive Python footprint.

All that is needed to move up to the latest Docker is to verify that the Photon repository is accessible, update the docker package, and restart appropriate components.

Prepare the Repository

Photon comes configured with several RPM repositories, one of which is the ISO image that can be handy when Internet connectivity is not available.  However, if your Photon instance does have access to the net, it is more convenient to use the online repositories than to mount an ISO.  Regardless, since the goal here is to get a package that has been updated since the ISO was created, Internet access is required.

Disable the ISO repository with the following command:
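On my instance the ISO repository is defined in /etc/yum.repos.d/photon-iso.repo, so flipping its enabled flag does the trick:

```shell
sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/photon-iso.repo
```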

After that, update the metadata cache:
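The DNF-style subcommand works here:

```shell
tdnf makecache
```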

Update Docker with TDNF

First, verify that an updated version of Docker is available:
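For example:

```shell
tdnf check-update docker
```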

Then, run the update command:
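Just the one package is needed:

```shell
tdnf update docker
```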

If everything goes according to plan, this should be the experience:

Update Docker with TDNF

Restart the Docker Daemon

Photon uses systemd, so use the following commands to restart the docker daemon and complete the update:
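For example:

```shell
systemctl daemon-reload
systemctl restart docker
```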

Now your Photon instance is on the current Docker release.  Use the hello-world container to verify:
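Both of these are quick checks:

```shell
docker version
docker run hello-world
```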


Docker 1.6 Hello World

Easy as that. The procedure described above should work for future releases, too.



Project Lightwave is an open source identity and access management platform from VMware. One of the many capabilities offered is authentication of SSH logins, eliminating the need to manage local user accounts on Photon container runtime instances.  This article walks through the basic steps required to enable this feature — please see the quick start guide for instructions on how to set up a Lightwave server and join a client to the domain.

Once configured, it is possible to ssh into Photon using Lightwave directory credentials and even use sudo to run privileged commands:

SSH into Photon with your Lightwave directory credentials

Photon Configuration

After the Lightwave components and dependencies are installed, run these commands:
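The exact commands depend on how the Lightwave client was installed, so treat this as a rough sketch: the essential step on a PAM-based directory client is letting sshd authenticate through PAM, which the Lightwave modules hook into (settings and paths may differ in your build):

```shell
# Allow sshd to use PAM for authentication, then restart the daemon.
sed -i 's/^UsePAM no/UsePAM yes/' /etc/ssh/sshd_config
systemctl restart sshd
```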

Enable SUDO for the Lightwave Account

This is an optional step.  If you would like the user logging in via Lightwave credentials to be able to run privileged commands, add the account to sudoers by doing the following:
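As a sketch, with a placeholder account in UPN (user@domain) form matching the ssh login name; a drop-in under /etc/sudoers.d avoids editing sudoers directly, assuming that directory is included on your build:

```shell
echo 'jdoe@lightwave.local ALL=(ALL) ALL' > /etc/sudoers.d/lightwave
chmod 0440 /etc/sudoers.d/lightwave
```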

Use SSH to log in from another system

In order to log into the Photon instance, the Lightwave account must be specified by using one of the following variations:
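Placeholders throughout; ssh splits user from host at the last @, and quoting keeps the shell from interpreting the backslash in the DOMAIN\user form:

```shell
ssh jdoe@lightwave.local@photon-host.example.com
ssh -l 'LIGHTWAVE\jdoe' photon-host.example.com
```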

Run your containers

After logging in, docker containers can be executed as needed:
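Once sudo is set up, Docker works the same as it does for a local account:

```shell
sudo docker run --rm hello-world
```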


Project Lightwave has much more to offer, so please stay tuned for more information on technical capabilities and feature demos.  Also be sure to check out the vSphere blog for an overview of Photon and Lightwave.


