Automating a Development Infrastructure for vRealize Suite with Docker


vRealize Suite is an incredible stack that enables customers to manage and automate their hybrid cloud infrastructure and offer self-service capabilities to their clients. In the article Automating At Scale, my colleague Kiril Vuchkov explained how we, PS CoE (Professional Services Centre Of Excellence), developed a toolchain that combines a set of open-source tools with VMware products to deliver the best development experience for vRealize Orchestrator (vRO) and vRealize Automation (vRA) content. We use this toolchain on a daily basis to apply software development best practices for our customer solutions.

To fully utilize this toolchain, however, we need additional tooling – a development platform.

The development infrastructure that the PS CoE built for the vRealize Suite and the toolchain consists of the following basic components:

  • Source control management (SCM)
  • Continuous Integration (CI)
  • Maven repository

Deploying and maintaining these components is not a trivial task. In today’s post I’m going to talk about how we leverage container technology – Docker in particular – to automate and ease the deployment and maintenance of the infrastructure needed to fully benefit from the development toolchain and improve the VMware vRealize development process.

Why containers and Docker?

The traditional approach to setting up the development infrastructure would be to get a piece or pieces of hardware infrastructure (either physical or virtual), install an operating system and then start setting up each individual service. Even though this may sound quite straightforward, things sometimes get tricky (and very messy) right from the beginning. Very often, services rely on a set of shared libraries with different versions. Some of these shared libraries cannot co-exist, and/or having multiple versions of them would break other components. This makes maintenance a living hell, and updates take forever, frequently ending in service outages.

So, how to solve this problem?

This is where Docker containers come into play. A Docker container is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. This means that each individual service can be deployed as a separate container (illustrated after the list below) and later maintained and updated separately, without interfering with the other services. So, using Docker and containers not only simplifies the overall process but also brings some additional benefits:

  • Standardization: Docker containers are portable
  • Lightweight: Containers share the machine’s OS system kernel, so no OS per application is required
  • Secure: Docker provides process isolation capabilities
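
As a quick illustration, deploying and later updating a single service boils down to a couple of commands. This is just a sketch – the image and container name are examples, not our exact setup:

# start a GitLab container in the background, isolated from everything else on the host
docker run -d --name gitlab gitlab/gitlab-ce:latest

# update it independently of the other services
docker pull gitlab/gitlab-ce:latest
docker stop gitlab && docker rm gitlab
docker run -d --name gitlab gitlab/gitlab-ce:latest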

Where to run the containers?

Containers are nice, but we have to answer one more question before we get started – where to run them. If we are using local test environments, our good old PC will do the job, but in an enterprise setting things are not that simple.

This is where the container host kicks in. A container host is an operating system specially optimized for running containers and this is where the Docker client and Docker daemon will run.

Now, there are a lot of container hosts out there on the market: CoreOS, RancherOS, Ubuntu Core, Red Hat Atomic, etc. Choosing between them is not an easy job. We needed something with a minimal footprint, easily maintainable, and secure for our customers. Since most of our customers have long since migrated to virtual infrastructure, being optimized to run on one would be a plus for our container OS.

The answer to the question came from right beneath our noses – Project Photon OS™. Photon OS is an open source minimal Linux container host optimized for cloud-native applications, cloud platforms, and the VMware infrastructure (Great for our customers!).

Photon OS is everything we need. It has a lot of great features and benefits:

  • Optimized for VMware vSphere®: The Linux kernel is tuned for performance when Photon OS runs on vSphere.
  • Support for containers: Photon OS includes the Docker daemon by default and has support for container orchestration frameworks (see the commands after this list).
  • Efficient lifecycle management: Photon OS is easy to manage, patch, and update.
  • Security hardened: The kernel and other aspects of the operating system are built with an emphasis on security.
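
For example, on a fresh Photon OS machine the bundled Docker daemon only needs to be started and enabled – standard systemd commands:

systemctl start docker      # start the bundled Docker daemon
systemctl enable docker     # make it start on boot
docker run hello-world      # quick sanity check (assumes internet access)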

What other tools do we use?

For managing and kickstarting the whole Docker infrastructure we use Docker Compose. Docker Compose is a tool for defining and running multi-container Docker applications. The configuration is based on the YAML format, which specifies all of the different options for running each container (service) that makes up the whole application, and the tool performs the creation and start-up of all the containers with a single command.

The tool also provides a basic management CLI for starting, stopping and restarting the whole stack when needed.
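
In day-to-day use that amounts to a handful of commands, run from the directory containing the docker-compose.yml file:

docker-compose up -d       # create and start the whole stack in the background
docker-compose stop        # stop the stack without removing the containers
docker-compose restart     # restart all services
docker-compose logs -f     # follow the logs of all services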

The containers from above

The container infrastructure that we designed features a Git repository server (SCM), artifact and dependency management (a Maven repository), and build agents (CI). The code is pushed to the repo; from there the agents take it, build it, package it and push it to Artifactory, from where it can be consumed.

The high-level overview of the infrastructure is presented in the following diagram:

The automated deployment spins up the following container services:

  • NGINX – used both as a Web server and as a reverse proxy that exposes all of the needed services to the outside world. The NGINX reverse proxy simplifies service management configuration, adds security and central SSL management (SSL termination occurs on the proxy), and provides unified access logging and a simple user portal. Certain caching performance benefits may also apply, even with the default configuration.
  • GitLab – a web-based Git-repository manager providing wiki, issue-tracking and CI/CD pipeline features, using an open-source license, developed by GitLab Inc.
  • GitLab Runner – the open source project that is used to run the jobs and send the results back to GitLab. It is used in conjunction with GitLab CI, the open-source continuous integration service included with GitLab that coordinates the jobs.
  • JFrog Artifactory – a universal repository manager supporting all major packaging formats, build tools and CI servers.
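
To give you an idea of how this maps to a Docker Compose configuration, here is a minimal docker-compose.yml sketch. The image tags, volumes and ports are illustrative assumptions, not our exact production setup:

version: "3"

services:
  nginx:
    image: nginx:stable                     # web portal and reverse proxy
    ports:
      - "80:80"                             # only the proxy is exposed to the outside world
      - "443:443"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro # proxy rules and SSL configuration
    depends_on:
      - gitlab
      - artifactory

  gitlab:
    image: gitlab/gitlab-ce:latest          # SCM, wiki, issue tracking, CI coordination
    volumes:
      - gitlab-data:/var/opt/gitlab

  gitlab-runner:
    image: gitlab/gitlab-runner:latest      # executes the CI jobs
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets the runner spawn build containers

  artifactory:
    image: docker.bintray.io/jfrog/artifactory-oss:latest  # Maven repository manager
    volumes:
      - artifactory-data:/var/opt/jfrog/artifactory

volumes:
  gitlab-data:
  artifactory-data: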

Get started

Once the deployment of the infrastructure is complete, developers can directly access all services from a Web portal. Since the containers communicate over an internal container network, NGINX takes on the reverse proxy role and forwards requests back and forth.
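
A minimal sketch of what one such reverse proxy rule could look like in the NGINX configuration – the hostname, certificate paths and upstream are assumptions for illustration:

server {
    listen 443 ssl;
    server_name gitlab.corp.local;                  # assumed external hostname

    ssl_certificate     /etc/nginx/ssl/portal.crt;  # SSL terminates on the proxy
    ssl_certificate_key /etc/nginx/ssl/portal.key;

    location / {
        # "gitlab" resolves to the container on the internal Docker network
        proxy_pass http://gitlab:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}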

The image below shows what the web portal looks like:

The portal is based on the Clarity Design System and is a very simple, easy-to-use interface that provides access to all of the needed resources – GitLab, Artifactory, the VSCode extension and the vRO hinting plug-in – in one place. Also, all of the documentation on how to set up the entire infrastructure, as well as the development ecosystem end-to-end, is included in the bundle and accessible through the Help tab.

From here onwards, developers can start developing vRealize content following standard development practices and procedures. (We will show some practical examples in a later article.)

For more information on the actual development process, please refer to the section “Software Development for the vRealize Platform” in the article Automating At Scale by my colleague Kiril Vuchkov.

Creating the code repositories

When we first log in to GitLab, we have to create our code repositories.

GitLab offers a variety of ways to do that depending on the use case:

  • If the repository is new we can follow the wizard
  • If we already have an existing code base, it can be imported or pushed directly into a fresh repo (see the commands below)
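
For the second case, pushing an existing code base into a freshly created (empty) repository takes only a few commands – the repository URL here is an assumption:

cd my-existing-project
git init                          # only if it is not already a Git repository
git remote add origin https://gitlab.corp.local/coe/my-existing-project.git
git add .
git commit -m "Initial import"
git push -u origin master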

After we have created the repo and imported the code, we are presented with the following screen:

This is an example test project generated with our VSCode plugin. As you can see, the generator has created the folder structure and included the basis for building and releasing the project – the POM file and the release script. For the purposes of this article I will show you a very basic example of a project with only a handful of elements and no external dependencies, but in reality you can build complex projects with hundreds of elements and multiple external dependencies without problems.

Setting up the CI build pipeline

The vRealize Suite projects are built and managed with Maven. The next step towards building our project is to set up our Maven repos. (Do you still remember Artifactory?)

Our trusty Artifactory will come to the rescue for this one. Creating the Maven repos in the UI is fairly straightforward and automated by a wizard, so I won’t deep-dive into that but will only show you the end result:

The highlighted part of the screenshot shows our actual Maven repos. In order to set up our build, we need a special settings.xml file for Maven, which can be obtained very easily by just clicking on the repo that we want to use and then clicking Generate Settings:

Next, we have to create a folder called .m2 at the top level of the project source and put our settings.xml file there. Your XML file may vary, but it will look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<settings xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.1.0 http://maven.apache.org/xsd/settings-1.1.0.xsd" 
  xmlns="http://maven.apache.org/SETTINGS/1.1.0" 
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <servers>
    <server>
      <username>admin</username>
      <password>--Password--</password>
      <id>central</id>
    </server>
    <server>
      <username>admin</username>
      <password>--Password--</password>
      <id>snapshots</id>
    </server>
  </servers>
  <profiles>
    <profile>
      <repositories>
        <repository>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
          <id>central</id>
          <name>libs-release</name>
          <url>http://artifactory.corp.local:8081/artifactory/libs-release</url>
        </repository>
        <repository>
          <snapshots />
          <id>snapshots</id>
          <name>libs-snapshot</name>
          <url>http://artifactory.corp.local:8081/artifactory/libs-snapshot</url>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
          <id>central</id>
          <name>libs-release</name>
          <url>http://artifactory.corp.local:8081/artifactory/libs-release</url>
        </pluginRepository>
        <pluginRepository>
          <snapshots />
          <id>snapshots</id>
          <name>libs-snapshot</name>
          <url>http://artifactory.corp.local:8081/artifactory/libs-snapshot</url>
        </pluginRepository>
      </pluginRepositories>
      <id>artifactory</id>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>artifactory</activeProfile>
  </activeProfiles>
</settings>
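
Placing the generated file under version control could look like this (the download path is an assumption):

mkdir .m2                          # at the top level of the project source
mv ~/Downloads/settings.xml .m2/   # the file generated by Artifactory
git add .m2/settings.xml
git commit -m "Add Maven settings for the CI build"
git push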

The final step is to create the pipeline configuration that tells GitLab how to build our project. In GitLab this starts with a .gitlab-ci.yml file that serves as a descriptor for the whole process.

stages:
  - test
  - package
  - publish

variables:
  MAVEN_CLI_OPTS: "-s .m2/settings.xml"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository/
    - target/

test:
  stage: test
  script:
    - mvn -e $MAVEN_CLI_OPTS test -pl actions
  dependencies: []

package:
  stage: package
  script:
    - mvn $MAVEN_CLI_OPTS package -Pbundle -DskipTests
  artifacts:
    paths:
      - target/*.package
      - target/*-bundle.zip
    expire_in: 1 week
    
publish:
  except:
    - schedules
  stage: publish
  script:
    - mvn $MAVEN_CLI_OPTS deploy -Pbundle -DskipTests
  dependencies: []
  only:
    - master

After we have the configuration all set up, we can go ahead and create a pipeline, which will start the build process. When it finishes, you will see something similar to the screenshot below:

As we can see, all of the defined stages are successful, and we can go ahead and browse Artifactory to locate our packages. If there are any errors in a stage, it will be indicated and the build will not continue.

A nice feature of GitLab is that when you merge your changes into the master branch, it triggers the pipeline automatically for you.
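
In practice that means a simple merge is enough to produce and publish a new build – the branch name here is just an example:

git checkout master
git merge my-feature-branch       # merge the reviewed changes
git push origin master            # this push triggers the pipeline, including the publish stage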

Conclusion

With the help of containers, we managed to drastically reduce the time needed to set up the infrastructure. Containerization enabled us to quickly deploy and later update the services with minimal effort and downtime. Keeping the whole infrastructure as code helps us maintain it, upgrade it and find problems much faster and more easily.

In the articles to come, we will deep-dive into some of the technical implementations and challenges that we faced along the way and show you some practical examples of how the CI actually works. Stay tuned.
