Getting Started with Container Based Deployment

In this article, Gigi Sayfan focuses on release management and deployment.



Application life cycle management (ALM) is the process of creating application software. It includes many different aspects, such as gathering requirements, architecture and detailed design, implementation, testing, change management, maintenance, integration, release management and deployment. There are different approaches, methodologies and schools of thought for all of these aspects. Today, I'm going to focus on release management and deployment, the most dynamic and active area, where much of the current innovation is happening.

Traditionally, large enterprise systems were released very infrequently (on a time scale of months or even years) by dedicated release management teams that often belonged to an operations group, which was organizationally separate from the development group.

Now, the landscape is different. Large systems are composed of many services that interact through APIs. These services are developed and deployed by semi-independent teams that follow DevOps practices (development and operations are mixed).



One of the key technologies that enables this approach is container-based deployment. At the core, containers are very similar to virtual machines in the sense that they allow you to isolate a piece of software, its dependencies and the resources it needs. Multiple isolated containers running on the same physical machine can share the same operating system, which makes them much more efficient and lean.
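You can see this sharing in action by comparing the kernel reported inside a container with the kernel of the Docker host. This is a sketch, assuming Docker is set up as described below and that the small 'alpine' image can be pulled:

```shell
# Each container is isolated, yet both report the same kernel
# version, because containers share the host operating system.
docker run --rm alpine uname -r
docker run --rm alpine uname -r

# Compare with the kernel of the Docker host VM itself.
docker-machine ssh default uname -r
```

All three commands should print the same kernel version, which is exactly what makes containers so much leaner than full virtual machines.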

Docker

Docker has a lot of momentum right now and provides an increasingly complete toolchain. Let's poke around and see what it takes to deploy some software using Docker.

The first step is to install Docker. Follow the instructions here: http://docs.docker.com/installation/

Here is what the Mac OS X Docker Toolbox installer looks like:

The toolbox installs quite a few tools as well as a virtual machine called 'default'. I'll go over some of the tools later. I work mostly from the terminal to get as close to the action as possible.

The 'default' VM is your new home. You can manage multiple VMs using the docker-machine tool, but 'default' is good enough for us.

Type: 'docker-machine ls'

You should see something like:

NAME      ACTIVE   DRIVER       STATE     URL                         SWARM
default            virtualbox   Running   tcp://192.168.99.100:2376

The next step is setting up the environment:

eval "$(docker-machine env default)"

That lets you use the 'docker' tool to actually work with containers inside the 'default' VM.
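If you're curious what that eval actually consumes, run the env command by itself. The exact values depend on your machine; output along these lines is typical (the user path is a placeholder):

```shell
$ docker-machine env default
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/<you>/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
```

The eval simply applies these exports to your current shell session, pointing the 'docker' client at the VM.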

Type: 'docker ps'

You should see:

CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES

That means no containers are running inside our machine. We will rectify that soon enough. Let's run a web server. The following command runs nginx in a container, mapping port 8080 on the VM to port 80 inside the container:

docker run -d -p 8080:80 nginx

Verify it worked by typing 'docker ps' again. You should see something like:

CONTAINER ID   IMAGE   COMMAND                  CREATED            STATUS             PORTS                           NAMES
3c91a93d4976   nginx   "nginx -g 'daemon off"   About an hour ago  Up About an hour   443/tcp, 0.0.0.0:8080->80/tcp   jolly_goldstine

We got nginx running on port 8080 of the VM.

To access it you need to use the VM IP address, which is available through:

docker-machine ip default

Here is how to get the default page from nginx:

curl http://$(docker-machine ip default):8080
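When you're done experimenting, it's good hygiene to stop and remove the container. A quick sketch, using the container ID from the 'docker ps' output above (yours will differ, and the name works just as well as the ID):

```shell
# Stop the running nginx container.
docker stop 3c91a93d4976

# Remove the stopped container so it no longer
# appears even in 'docker ps -a'.
docker rm 3c91a93d4976
```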

The Docker Ecosystem

This was a very rudimentary example of how to bring up a web server ready to go. A complete system is much more than that. Even a simple system will have at least some sort of database for persistent data. Then, you'll have your application. More sophisticated systems will have multiple services and backend processes running and possibly multiple types of databases and API servers. Each one of these components may have multiple instances. You can create Docker images for each component and then deploy them, but you'll still need to manage versions properly and hook up the different containers. Deploying such a system is not trivial. Docker provides several tools that can help.
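As a rough sketch of what building your own image involves, here is a minimal Dockerfile for a hypothetical Python application (the file names and base image are illustrative assumptions, not from any real project):

```dockerfile
# Dockerfile - minimal, hypothetical example
FROM python:2.7
COPY app.py /app/app.py
WORKDIR /app
CMD ["python", "app.py"]
```

You would then build and run it with 'docker build -t myapp .' followed by 'docker run -d myapp', and repeat the pattern for each component of your system. Managing the versions and the wiring between all those containers is where the tools below come in.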

Docker Swarm

Docker Swarm lets you create a cluster of Docker hosts. Each one of these hosts can host multiple Docker containers. This is very important because you get the benefit of isolation between different containers, but you don't have to dedicate an entire host for a component that may not use a lot of resources.

Docker Swarm supports different discovery back ends so the various Docker containers in your cluster can find and interact with each other. Supported back ends include etcd, Consul and ZooKeeper.
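As a rough sketch of how a small cluster comes together (flags may vary by version; the token shown is a placeholder you must replace with the one generated for you), docker-machine can provision Swarm hosts using the hosted token-based discovery service:

```shell
# Generate a cluster token using Docker's hosted discovery service.
docker run --rm swarm create

# Create a Swarm master and one agent node, substituting the token
# printed by the previous command for <token>.
docker-machine create -d virtualbox --swarm --swarm-master \
    --swarm-discovery token://<token> swarm-master
docker-machine create -d virtualbox --swarm \
    --swarm-discovery token://<token> swarm-node-01
```

Once the machines are up, pointing your docker client at the Swarm master lets you schedule containers across the whole cluster as if it were a single Docker host.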

Docker Compose

Docker Compose lets you define multiple services to run on the same host. Each service runs in its own container, and you can link the services together in the docker-compose.yml file. This makes multi-container applications very convenient to define, build and deploy.
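Here is a sketch of what a docker-compose.yml might look like for a web front end linked to a Redis back end. The service names and images are illustrative, and the syntax shown is the version-1 Compose format:

```yaml
# docker-compose.yml - illustrative two-service application
web:
  image: nginx
  ports:
    - "8080:80"
  links:
    - redis
redis:
  image: redis
```

Running 'docker-compose up' in the directory containing this file brings up both containers and wires them together, with the link making the redis container reachable from the web container by name.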

Conclusion

The container-based deployment scene is exploding these days, with lots of tools for managing it. Docker is a major player, and it is slowly expanding beyond containers to provide management and orchestration as well. Its current offerings are still in beta, but they have a lot of momentum. My recommendation is to keep an eye on these technologies and tools and experiment with them, but be very careful before you trust them with production systems.



   
Gigi Sayfan is the chief platform architect of VRVIU, a start-up developing cutting-edge hardware + software technology in the virtual reality space. Gigi has been developing software professionally for 21 years in domains as diverse as instant messaging, morphing, chip fabrication process control, embedded multi-media application for game consoles, brain-inspired machine learning, custom browser development, web services for 3D distributed game platform, IoT/sensors and most recently virtual reality. He has written production code every day in many programming languages such as C, C++, C#, Python, Java, Delphi, Javascript and even Cobol and PowerBuilder for operating systems such as Windows (3.11 through 7), Linux, Mac OSX, Lynx (embedded) and Sony Playstation. His technical expertise includes databases, low-level networking, distributed systems, unorthodox user interfaces and general software development life cycle.