How the ThreadFix Team Uses Docker for QA and Support

The ThreadFix team has often found itself face-to-face with a need that is fairly universal across software groups: quick access to running application instances. Developers, support engineers, and quality assurance personnel all share this need, whether it calls for the latest and greatest code in active development or the most recent stable release in the hands of customers. Through a container system we built around Docker, referred to internally simply as “ThreadFix + Docker”, fulfilling this need is easier than ever.


This article will provide an overview of the components that make up the ThreadFix + Docker system.

Docker and the Docker Daemon
The largest and most integral piece of this system is the Docker daemon that sits on a remote Ubuntu VM. This process and its associated files comprise the backend of ThreadFix + Docker.

As a brief rundown of our use case, Docker is a tool that allows us to dynamically generate “containers”, or lightweight independent spaces that can coexist on a single running machine. These allow us to host individual ThreadFix instances without the overhead of full-on virtual machines and their associated management burden.

The Docker daemon runs on the host VM and awaits commands for spinning up new application containers. When it receives the proper command, it uses one of the “images” it has access to in order to create a running container based on that configuration. A Docker image is a representation of the environment that a container will have once it is running, and since the image was first generated by walking step-by-step through a configuration file (called a “Dockerfile”) and then saving its last state, the system can spin up ready-to-use containers fairly quickly. We will cover how these images are generated in the “Jenkins Continuous Integration” section later in this article.

Here is a simple example of a Dockerfile.

FROM tomcat:7.0.65-jre7
ADD ./threadfix /usr/local/tomcat/webapps/threadfix
LABEL branch="Dev-QA"
LABEL version="Enterprise"
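
Turning a Dockerfile like this into an image is a single build step. A minimal sketch, assembled as a string here; the "enterprise" tag value is illustrative, though images in this system are tagged following a threadfix/&lt;version&gt; convention:

```shell
# Build an image from the Dockerfile in the current directory.
# "enterprise" is illustrative; images are tagged threadfix/<version>.
version="enterprise"
image_tag="threadfix/${version}"
echo "docker build -t ${image_tag} ."
```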

And here is how the available images are displayed in the ThreadFix + Docker UI, along with their creation dates:


The Docker API
Docker provides a robust API available via REST calls. Through a simple configuration change, we expose our Docker daemon’s API on a specific TCP port of our host VM. Two components of ThreadFix + Docker communicate with our running Docker process over this channel.
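
On an Ubuntu host using the stock init scripts, that configuration change can be as small as one line. The port number here is an assumption; 2375 is merely Docker's conventional unencrypted API port:

```shell
# /etc/default/docker -- sourced by Docker's init script on Ubuntu.
# Keep the local Unix socket, and also listen on TCP port 2375 so remote
# clients (the manager script and the web UI) can reach the REST API.
DOCKER_OPTS="-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375"
```

Note that this exposes the daemon without authentication, so it should only be done on a trusted internal network (or hardened with the daemon's TLS options).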

Management Shell Script
An intuitive, interactive shell script communicates with the host VM’s Docker process in order to create or kill containers. Because it uses the curl command-line tool to issue those REST calls, the script can be run from a user’s own machine and does not have to be executed on the host itself. It allows you to designate options such as:

The display name of the container (for the AngularJS UI).
The version and git branch of ThreadFix to use (Community or Enterprise, Development or Stable, etc.).
The port on the host VM on which to expose this application instance.
The specific database files that the ThreadFix instance should use.
The database action to call (“create” or “update”).
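
Put together, a call to the manager script might look something like the following. The script name and flag names are hypothetical, and the command is assembled as a string purely for illustration:

```shell
# Hypothetical manager-script invocation; the script name and flags are
# illustrative, assembled as a string rather than executed.
name="qa-triage"; version="enterprise"; branch="dev"
port="8085"; database="qa-triage-db"; db_action="update"
cmd="./threadfix-docker.sh --name ${name} --version ${version} --branch ${branch} --port ${port} --database ${database} --db-action ${db_action}"
```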

Thin AngularJS Client
The main component that most ThreadFix + Docker end users interact with is the web UI, which is constructed with AngularJS. This thin front-end client communicates with the host VM’s Docker process directly through GET calls in order to populate information about running containers and available images.
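
The GET calls in question are the Docker Remote API's standard listing endpoints, sketched here with an assumed host and port:

```shell
# The Docker Remote API listing endpoints the UI polls.
# Host and port are assumptions; per-container logs live under
# /containers/<id>/logs on the same API.
docker_api="http://dockerhost:2375"
containers_url="${docker_api}/containers/json"   # running containers, with labels
images_url="${docker_api}/images/json"           # available images and creation dates
```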

For each running container, there is a link showing the host VM port on which the ThreadFix instance is exposed; it takes the user directly to that instance’s ThreadFix homepage. There is also a link that opens the container’s application logs in a new tab, which comes in handy when Support or QA is trying to find or replicate an issue. Lastly, icons display the database currently in use by a container, and a warning icon appears when the container was not built from that version’s most recent image, cautioning the user that they are likely working with old code.

Funnily enough, the ThreadFix + Docker Web UI is itself hosted on a running Docker container.

Jenkins Continuous Integration
The last piece of the puzzle is integrating this system with our Jenkins continuous integration jobs. We take advantage of our existing CI jobs, specifically those which build ThreadFix artifacts after code changes and then run unit tests against them to verify the code’s quality. These jobs have been modified to copy the built artifact over to the VM hosting the Docker process, then execute a script to build a new Docker image for a particular version of ThreadFix. This way, when a user spins up a ThreadFix Docker instance, they can be sure they’re getting the latest approved code and that they’re getting it almost instantly.

Behind the Scenes
Now we’ll cover a bit of the process behind ThreadFix + Docker. When a ThreadFix container is spun up via the manager script, the REST call passes in several runtime parameters to configure the container and provide metadata for the UI.

The port number passed into the script maps the exposed ThreadFix application port (8080) from within the container to the specified port on the host VM. This is what allows users to access their instances on different host ports simultaneously.
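
For reference, the same mapping expressed with the docker CLI rather than the REST call (the image tag is illustrative):

```shell
# Map Tomcat's port 8080 inside the container to a per-instance host port.
port="8085"
run_cmd="docker run -d -p ${port}:8080 threadfix/enterprise"
```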

The version and branch of ThreadFix used (Community or Enterprise, Stable or Development) lets the Docker process know which Docker image to use when spinning up the container. As stated above, our Jenkins jobs ensure that these images are up-to-date.

The database name parameter looks for a similarly named directory in a dedicated database directory on the host VM; if the directory does not exist, it is created. The ThreadFix containers take advantage of these databases by attaching them as “volumes” to a specific directory within the container. In Docker vernacular, a “volume” is a file path on the host machine that is mounted into a specific path within a container, with changes reflected in real time. In this case, that container path is the location where the ThreadFix application reads and/or generates its HSQL database files. Now if, say, the power goes out, or you want to restart your container with the newest code, you can spin up a new container, attach the same database directory as a volume, and pick up right where you left off with all your data intact.
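
The CLI equivalent of that volume attachment looks like the following; both the host database directory and the in-container HSQL path are assumptions for illustration:

```shell
# Attach a per-user database directory to the container's HSQL location.
# Both paths below are illustrative, not the system's real layout.
db_dir="/opt/threadfix/databases/qa-triage-db"
hsql_path="/usr/local/tomcat/database"
run_cmd="docker run -d -v ${db_dir}:${hsql_path} threadfix/enterprise"
```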

The database action parameter also takes advantage of volumes. If you designate “create” as your database action, ThreadFix + Docker replaces the default files (which hold ThreadFix’s database configuration) with a file from a specific jdbc directory on the host VM called “”. Similarly, “update” uses a file called “” to let you work with established data if you’ve designated a database.

Finally, it is important to keep in mind that ThreadFix + Docker does not interface with an independent backend, but instead communicates with the Docker process directly. To store and retrieve container metadata, we rely on the use of Docker “labels”. These are key-value pairs that you can either designate at runtime when spinning up a container or in the Dockerfiles that configure your images. These labels are later queried and parsed by the web UI to show information like display name, version, branch, etc.
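
Labels set at runtime or in a Dockerfile can be read back with standard Docker tooling; a sketch of what the UI's queries amount to (the container name and label values are illustrative):

```shell
# Filter running containers by a label, and read one label back.
filter_cmd="docker ps --filter label=user=qa-triage"
inspect_cmd="docker inspect --format '{{ index .Config.Labels \"user\" }}' web-ui"
```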

Here is an excerpt from the manager script showing how the JSON for creating containers is crafted:

# Craft JSON Data for Create Call
json="{\"OpenStdin\": true, \"Image\": \"threadfix/${version}\", \"Tty\": true, \"Labels\": {\"user\": \"${name}\", \"db\": \"${database}\", \"dbMethod\": \"${dbMethod}\"}, \"HostConfig\": {${databaseJson} \"PortBindings\": {\"8080/tcp\": [{\"HostPort\": \"${port}\"}]}, \"DnsSearch\": [\"\"]}}"
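
Once crafted, that JSON is posted to the daemon's create endpoint, and the returned container Id is started with a second call; sketched here with an assumed host and port:

```shell
# Create, then start: the two REST calls the script issues next.
docker_api="http://dockerhost:2375"   # assumed host:port
name="qa-triage"                      # illustrative container name
create_url="${docker_api}/containers/create?name=${name}"
# curl -s -X POST -H "Content-Type: application/json" -d "${json}" "${create_url}"
# curl -s -X POST "${docker_api}/containers/<returned-id>/start"
```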


That about wraps up the overview of the ThreadFix + Docker system. There are several extra use cases we’ve run into (almost “easter eggs”) that we hope to streamline, such as connecting a remote ThreadFix container to a locally-hosted MySQL instance to query a database in realtime, or spinning up a background SQL Server build instance to prepare for database provider testing.

As it stands now though, ThreadFix + Docker has significantly decreased the time and effort it takes to access robust and up-to-date ThreadFix instances, from sometimes 10+ minutes for the uninitiated to now 30 seconds. Whether it’s developing third-party integrations, triaging user issues, tracking down bugs during quality assurance runs, or onboarding new team members, leveraging Docker and other connected technologies has helped us toward accomplishing a crucial goal: making it easier to make ThreadFix better.
