A brief introduction to Windows containers: understanding Windows Server containers

In today's Ask the Admin, I'll show you how to deploy an image in a container on Windows Server 2016, create a new image, and upload it to Docker Hub.

One of the major new features in Windows Server 2016 is support for containers and Docker. Containers provide lightweight and flexible virtualization capabilities that developers can use to rapidly deploy and update applications without the overhead of virtual machines. And coupled with Docker, a container management solution, container technology has exploded over the past few years.

This is an update to information that was previously included in Deploying and Managing Windows Server Containers with Docker, which was current as of Windows Server 2016 Technical Preview 3. For more information about Docker, see What is Docker? and Are Docker Containers Better than Virtual Machines? on the Petri IT Knowledgebase.

To follow the instructions in this article, you'll need access to a physical or virtual server running Windows Server 2016. You can download an evaluation copy from the Microsoft website or set up a virtual machine on Microsoft Azure. You will also need a free Docker ID, which you can get by registering.

Install Docker Engine

The first step is to install Docker support on Windows Server 2016.

  • Sign in to Windows Server.
  • Click the Search icon on the taskbar and type PowerShell in the search box.
  • Right-click Windows PowerShell in the search results and select Run as administrator from the menu.
  • Enter administrator credentials when prompted.

To install Docker on Windows Server, run the following PowerShell cmdlet. You will be prompted to install NuGet, which is needed to download the Docker PowerShell module from a trusted online repository.

Install-Module -Name DockerMsftProvider -Force

Now use the Install-Package cmdlet to install the Docker engine on Windows Server. Note that a reboot is required at the end of the process.

Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force

After the server restarts, open an elevated PowerShell prompt again and verify that Docker is installed by running the following command:

docker version
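You can also confirm that the Docker Engine service is running; the DockerMsftProvider package registers it under the service name docker:

Get-Service docker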

Download an image from Docker Hub and start a container

Now that the Docker engine is installed, let's pull the default Windows Server Core image from Docker Hub:

docker pull microsoft/windowsservercore

Now that the image has been downloaded to the local server, start a container using docker run:

docker run microsoft/windowsservercore

Create a new image

We can now create a new image, using the previously downloaded Windows Server Core image as a starting point. You will need a Docker ID before continuing; if you don't already have one, sign up for a Docker account.

Docker images are usually built from Dockerfile recipes, but for the purposes of this demonstration, we'll run a command against the downloaded image, create a new image based on the change, and then push it to Docker Hub so it's available from the cloud.

Note that in the command below, the -t parameter sets the image tag, allowing you to easily identify the image. Also, pay special attention to the hyphen at the end of the command, after the tag name: it tells docker build to read the Dockerfile from standard input.

"FROM Microsoft /windowsservercore `n CMD echo Hello World!" | docker build -t mydockerid /windows-test-image -

After Docker has finished creating the new image, check the list of available images on the local server. You should see both microsoft/windowsservercore and mydockerid/windows-test-image in the list.

docker images

Now start a container from the new image, remembering to replace mydockerid with your Docker ID. You should see Hello World! appear in the output:

docker run mydockerid/windows-test-image

Upload an image to Docker Hub

Let's push the image we just created to Docker Hub so it's available from the cloud. Log in with your Docker ID and password:

docker login -u mydockerid -p mypassword

Then use docker push to upload the image created in the previous steps, again replacing mydockerid with your Docker ID:

docker push mydockerid/windows-test-image

How to package an application in a Docker container?

I have an application written in NodeJS. How can I package it into a Docker image to run as a container?

Docker is a container management system originally built for POSIX-compliant operating systems (at the time of writing, Linux is supported). One of Docker's key features is the ability to package an application together with everything its environment requires, so that it can run on another system without long and complicated procedures for installing dependencies or building from source. A packaged application, ready to be deployed, is called an "image". Docker images are based on "templates": preconfigured working environments. You can think of them as operating system distributions, although this is not entirely accurate. You can also create your own template; see the Docker documentation. The advantage of this approach is that your application's image contains only the application itself, while the environment it requires is downloaded automatically from the template repository. Docker is somewhat like chroot or a BSD jail, but it works differently.

It is important to distinguish between the concepts of "container" and "image". A container is a running copy of your application; an image is the file that stores the application and from which containers are created.

Let's say you have a Node.js application that you want to package in a container. Assume the file that starts your application is called server.js and that the application listens on port 8000. We will use node:carbon as the template. To containerize the application, create a file named Dockerfile in the directory containing your application files; it describes how the image is to be prepared:

$ touch Dockerfile

The contents of the file might look something like this:

# Specify the template to use
FROM node:carbon

# Create the application working directory inside the container
WORKDIR /usr/src/app

# Install application dependencies with npm
# (copy both package.json and package-lock.json if present)
COPY package*.json ./
RUN npm install

# Copy your application files into the image
COPY . .

# Open port 8000 so it's accessible outside the container
EXPOSE 8000

# Execute the command to run the application inside the container
CMD [ "npm", "start" ]
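The CMD [ "npm", "start" ] instruction assumes that package.json defines a start script. A minimal, hypothetical package.json for this example might look like this:

{
  "name": "node-web-app",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  }
}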

To exclude unnecessary files from the image, list their names in a .dockerignore file. Wildcards such as *.log can be used.
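For this project, a minimal .dockerignore might contain the following (an illustrative sketch; node_modules is excluded because npm install runs inside the image anyway):

node_modules
npm-debug.log
*.log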

The image is built with the following command:

$ docker build -t username/node-web-app .

$ docker images

# Example
REPOSITORY              TAG      IMAGE ID       CREATED
node                    carbon   1934b0b038d1   5 days ago
username/node-web-app   latest   d64d3505b0d2   1 minute ago

Starting a container from an image is done with the following command:

$ docker run -p 49160:8000 -d username/node-web-app

This example creates a container from the image "username/node-web-app" and starts it immediately. The application listens on port 8000 inside the container, and to make it reachable from "outside", that port is "forwarded" to port 49160 on the local machine (localhost). You can choose any free port; it is also possible to forward the application port "as is" by specifying the option "-p 8000:8000".
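To check that the forwarding works, call the application from the host; assuming the app answers HTTP requests on its port, you should see its response:

$ curl -i http://localhost:49160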

You can see that your container is running by issuing the command:

$ docker ps

# Example
CONTAINER ID   IMAGE                          COMMAND     ...   PORTS
ecce33b30ebf   username/node-web-app:latest   npm start   ...   49160->8000

The container can be managed with various commands that take the container's ID:

$ docker pause ecce33b30ebf - pause the container with ID ecce33b30ebf
$ docker unpause ecce33b30ebf - resume the container with ID ecce33b30ebf
$ docker stop ecce33b30ebf - stop the container with ID ecce33b30ebf
$ docker rm ecce33b30ebf - remove the container (this deletes all data created by the application inside the container)
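You can also inspect the application's console output with docker logs:

$ docker logs ecce33b30ebf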

In March 2013, Solomon Hykes announced the start of an open source project that later became known as Docker. It was strongly supported by the Linux community over the following months, and in the fall of 2014 Microsoft announced plans to implement containers in Windows Server 2016. WinDocks, which I co-founded, released an independent version of open source Docker for Windows in early 2016, with a focus on first-class container support for SQL Server. Containers are quickly becoming the focus of attention in the industry. In this article, we'll take a look at containers and their use by SQL Server developers and DBAs.

Container Organization Principles

Containers define a new method of application packaging, combined with user and process isolation, for multi-tenant applications. Various container implementations for Linux and Windows have been around for years, but with the release of Windows Server 2016 we have a de facto Docker standard. Today, the Docker container API and format are supported on the AWS, Azure, and Google Cloud public services and on all Linux and Windows distributions. Docker's elegant framework has important advantages.

  • Portability. Containers package an application's software dependencies and run unchanged on a developer's laptop, a shared test server, and any public service.
  • Container ecosystem. The Docker API is the focus of industry innovation, with solutions for monitoring, logging, data storage, cluster orchestration, and management.
  • Compatibility with public services. Containers are designed for microservices architectures, scale-out, and transient workloads, and are meant to be removed and replaced at will rather than repaired or upgraded.
  • Speed and economy. Containers are created in seconds and provide efficient multi-tenant support. For most users, the number of virtual machines is reduced by a factor of three to five (Figure 1).

SQL Server Containers

SQL Server has supported named instance multitenancy for ten years, so what's the value of SQL Server containers?

The point is that SQL Server containers are more practical thanks to their speed and automation. SQL Server containers are named instances, complete with data and settings, provisioned in seconds. The ability to create, delete, and replace SQL Server containers in seconds makes them more practical for development, QA, and the other use cases discussed below.

With their speed and automation, SQL Server containers are ideal for development and quality-control environments. Each member of the team works with isolated containers on a shared virtual machine, yielding a three- to five-fold reduction in the number of virtual machines. The result is significant savings on virtual machine maintenance and Microsoft licensing costs. Containers can also be easily integrated with storage area network (SAN) arrays using storage replicas and database clones (Figure 2).

A 1 TB attached database can be instantiated in a container in less than one minute. This is a significant improvement over servers with dedicated named instances, or over provisioning a virtual machine for each developer. One company uses an eight-core server to serve up to twenty 400 GB SQL Server containers. In the past, each VM took over an hour to provision; container instances are provisioned in two minutes. The company was thus able to reduce the number of virtual machines by a factor of 20 and the number of processor cores by a factor of 5, dramatically cutting Microsoft licensing costs, while gaining business flexibility and responsiveness.

Using SQL Server Containers

Containers are defined using Dockerfile scripts, which provide specific steps for building a container. The Dockerfile shown in Figure 1 specifies SQL Server 2012 with databases copied to the container and a SQL Server script to mask selected tables.

Each container can hold dozens of databases with their supporting files and log files. Databases can be copied into the container and run there, or mounted using the MOUNTDB command.
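Figure 1 itself is not reproduced here, but based on the directives this article describes (a SQL Server base image, copied databases, the MOUNTDB command, and a masking script), a hypothetical WinDocks-style Dockerfile sketch might look like the following; the exact directive syntax is illustrative, not authoritative:

# Hypothetical sketch: SQL Server 2012 base image
FROM mssql-2012
# Copy a database and a masking script into the container
COPY venture.mdf .
COPY maskTables.sql .
# Attach the copied database (directive syntax illustrative)
MOUNTDB venture venture.mdf
# Run the SQL script that masks selected tables
RUN maskTables.sql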

Each container contains a private file system isolated from host resources. In Figure 2, the container is built using MSSQL-2014 and venture.mdf. A unique ContainerID and container port are generated.


Figure 2: SQL Server 2014 container and venture.mdf

SQL Server containers provide a new level of speed and automation, but they behave exactly like regular named instances. Resource management can be implemented using SQL Server's own tools or through container resource limits (Figure 3).
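With standard Docker tooling, for example, CPU and memory caps can be set when a container is started (a generic Docker illustration with a hypothetical image name; WinDocks' own mechanism may differ):

# Limit the container to 2 CPUs and 4 GB of RAM
docker run --cpus 2 --memory 4g mydockerid/sql-test-image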

Other Applications

Containers are the most common way to organize development and QA environments, but other uses are emerging. Disaster recovery testing is a simple yet promising use case. Others include containerizing the SQL Server back end of legacy applications such as SAP or Microsoft Dynamics; the containerized back end is then used to provide a working environment for support and ongoing maintenance. Containers are also being evaluated to support production environments with persistent data stores. In a future article, I will cover persistent data in detail.

WinDocks is committed to making containers even easier to use through a web interface. Another project focuses on running SQL Server containers in DevOps or continuous-integration scenarios with CI/CD pipelines based on Jenkins or TeamCity. Today you can explore containers on all editions of Windows 8 and Windows 10, Windows Server 2012, or Windows Server 2016, with support for all editions starting with SQL Server 2008, using your copy of WinDocks Community Edition (https://www.windocks.com/community-docker-windows).

Exploring container technology in Windows Server 2016

One of the notable new features introduced in Windows Server 2016 is support for containers. Let's take a closer look at this technology.

Modern systems have long moved away from the principle of one OS per server. Virtualization technologies make more efficient use of server resources, allowing multiple operating systems to run side by side, isolating them from each other and simplifying administration. Then came microservices, which allow isolated applications to be deployed as separate, easily managed, and scalable components. Docker changed everything. The process of delivering an application along with its environment became so simple that it could not fail to interest end users. An application inside a container works as if it had a full-fledged OS to itself. But unlike virtual machines, containers do not load their own copies of the OS, libraries, system files, and so on. Instead, a container receives an isolated namespace in which all the resources the application needs are available, but which it cannot escape. If settings need to be changed, only the differences from the main OS are saved. As a result, a container, unlike a virtual machine, starts very quickly and puts less load on the system. Containers use server resources more efficiently.

Containers on Windows

In Windows Server 2016, alongside the existing virtualization technologies (Hyper-V and Server App-V virtual applications), support for Windows Server Containers has been added, implemented through the Container Management stack, an abstraction layer that provides all the necessary functions. The technology was announced back in Technical Preview 4, but much has changed since then in the direction of simplification, and instructions written earlier are no longer applicable. Two types of "native" containers are offered: Windows Server containers and Hyper-V containers. Perhaps the other headline capability is that containers can be managed with Docker tools in addition to PowerShell cmdlets.

Windows Server containers resemble FreeBSD jails or Linux OpenVZ in principle: they share one kernel with the OS, which, along with other resources (RAM, network), is shared among them. OS files and services are mapped into each container's namespace. This type of container uses resources efficiently, reducing overhead, and therefore allows applications to be packed more densely. Because a container's base image shares a kernel with the host (node), their versions must match; otherwise operation is not guaranteed.

Hyper-V containers use an additional isolation layer, and each container is allocated its own kernel and memory. Isolation is performed not by the OS kernel, as with the previous type, but by the Hyper-V hypervisor (the Hyper-V role is required). The result is lower overhead than virtual machines but stronger isolation than Windows Server containers. In this case, the container is not required to share the same OS kernel version as the host. These containers can also be deployed on Windows 10 Pro/Enterprise. It is especially worth noting that the container type is chosen not at creation time but at deployment time: any container can be run either as a Windows Server container or as a Hyper-V container.
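Because isolation is chosen at run time, the same image can be started either way using the --isolation flag of docker run:

docker run --isolation=process microsoft/windowsservercore cmd
docker run --isolation=hyperv microsoft/windowsservercore cmd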

The OS in the container is a trimmed-down Server Core or Nano Server. The former appeared in Windows Server 2008 and provides greater compatibility with existing applications. The latter is even more stripped down than Server Core and is designed to run headless, allowing the server to run in the smallest possible configuration for use with Hyper-V, a Scale-Out File Server (SOFS), and cloud services, requiring 93% less disk space. It contains only the most essential components (.NET with CoreCLR, Hyper-V, clustering, and so on).
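Both base images are published on Docker Hub; on Windows Server 2016 they could be pulled as follows:

docker pull microsoft/windowsservercore
docker pull microsoft/nanoserver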

The VHDX virtual hard disk image format is used for storage. Containers, as with Docker elsewhere, are saved as images in a repository. Each image does not store a complete set of data, only the differences between the created image and the base one; at launch time, all the necessary data is projected into memory. A Virtual Switch is used to manage network traffic between the container and the physical network.
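The layered differences that make up any local image can be inspected with docker history, which lists each layer and its size:

docker history microsoft/windowsservercore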