Meenu Juneja

Docker Compose

What is Docker and how does it work?

Docker is a software platform for developing, shipping, and running applications using container technology.

A Docker container is a lightweight, standalone, executable package that includes everything needed to run an application: code, runtime, system tools, and libraries.

Docker enables developers to package applications and their dependencies into self-sufficient containers, eliminating compatibility issues and streamlining the deployment process.

One of the key challenges in software development is ensuring consistency across different environments, from development to production. Docker addresses this challenge by providing a consistent runtime environment encapsulated within each container. This ensures that applications behave consistently regardless of the underlying infrastructure, reducing the risk of "it works on my machine" issues.


What is Docker Compose?

Let’s assume you are working on a project that requires 10 different services to be up and running.

One way is to pull each image separately from Docker Hub using the CLI and run the containers one by one. One service may also depend on others, so you must keep track of the order in which they start.

What if some of the services need data persistence? Then you must also create volumes with separate commands.

This becomes a very tedious task, because you may have to repeat these steps many times. Docker Compose is the solution to this.

Docker Compose is a tool you can use to define and share multi-container applications. This means you can run a project with multiple containers from a single file.

For example, assume you are building an application that requires Spring Boot, Postgres, and Node.js. With Docker Compose, you can create a single file with all the instructions that takes care of pulling the images for the individual applications and running them in containers. We can easily define dependencies between the different applications, and we can define volumes for data persistence in the same file.


To achieve this, we need to define a docker-compose.yml file.


The first step to using Docker on your machine is to download Docker Desktop and make sure it is running.

The second most important thing that Docker requires is setting environment variables. For example, if I am trying to pull the Postgres image from Docker Hub, there are a few environment variables that need to be set. You can find all the information about the environment variables for a particular service on Docker Hub. We will discuss in detail how to specify these environment variables in the docker-compose.yml file.
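To see why these matter, the Postgres image documents variables such as POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB. Running it directly with docker run would look something like this (the values here are placeholders):

# Run Postgres with the environment variables its image expects
docker run -d \
  -e POSTGRES_USER=myuser \
  -e POSTGRES_PASSWORD=mypassword \
  -e POSTGRES_DB=mydb \
  -p 5432:5432 \
  postgres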

Let’s take an example of a Spring Boot application and the most important files that we need to create to Dockerize it.

  1. Generate a JAR file for your application. For Maven applications, we can use the mvn package command, as shown below.
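For example, from the project root of a standard Maven build:

mvn clean package

This produces the application JAR under the target/ directory, which the Dockerfile in the next step copies into the image.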

2. Create a Dockerfile
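A minimal Dockerfile sketch for a Spring Boot application (the base image tag and JAR name are assumptions; adjust them to match your build):

# Base image with a Java runtime
FROM eclipse-temurin:17-jdk
# Copy the packaged JAR from Maven's target/ directory into the image
COPY target/app.jar app.jar
# Start the application when the container launches
ENTRYPOINT ["java", "-jar", "app.jar"]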

The FROM instruction specifies the base image that the container image will be built on top of.

Use the COPY instruction to copy files from the build context on the host machine into the image.

Use the RUN instruction to execute commands during the image build process.


 

3. SQL file



Here I have used a dcdb.sql file. It is a backup file that I generated from my Postgres database, because I wanted the container to use my existing data.

If you want a fresh database, you can create a custom SQL file with your own CREATE and INSERT scripts instead, along the lines of the sketch below.
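A minimal hand-written script might look like this (the table and rows are hypothetical examples):

-- dcdb.sql: executed by the Postgres image on first initialization
CREATE TABLE employees (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

INSERT INTO employees (name) VALUES ('Alice'), ('Bob');

Note that the Postgres image only runs scripts from /docker-entrypoint-initdb.d/ when the data directory is empty, i.e. on the very first startup of the volume.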

  

4. .env file

As we mentioned earlier, we have to provide values for specific environment variables for the individual images.

One way is to skip creating a .env file and hardcode the environment variable values for your local environment, but eventually you will have to configure environment variables at deployment time anyway. A better idea is to create a .env file and specify all environment variables there, as in the sketch below.
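A .env file matching the variables referenced in the docker-compose.yml below might look like this (every value is a placeholder; choose your own):

# Postgres
POSTGRESDB_USER=postgres
POSTGRESDB_ROOT_PASSWORD=changeme
POSTGRESDB_DATABASE=dcdb
POSTGRESDB_LOCAL_PORT=5433
POSTGRESDB_DOCKER_PORT=5432
# pgAdmin
PGADMIN_DEFAULT_EMAIL=admin@example.com
PGADMIN_DEFAULT_PASSWORD=changeme
PGADMIN_LOCAL_PORT=8888
PGADMIN_DOCKER_PORT=80
# Spring Boot app
SPRING_LOCAL_PORT=8080
SPRING_DOCKER_PORT=8080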



Don’t check your .env file into GitHub, for security reasons. You should add it to your .gitignore file.



5. docker-compose.yml

services:
  postgresdb:
    image: postgres
    env_file: ./.env
    environment:
      - POSTGRES_USER=$POSTGRESDB_USER
      - POSTGRES_PASSWORD=$POSTGRESDB_ROOT_PASSWORD
      - POSTGRES_DB=$POSTGRESDB_DATABASE
    ports:
      - $POSTGRESDB_LOCAL_PORT:$POSTGRESDB_DOCKER_PORT
    volumes:
      - db:/var/lib/postgresql/data
      - ./dcdb.sql:/docker-entrypoint-initdb.d/dcdb.sql
  pgadmin:
    image: dpage/pgadmin4
    container_name: pgadmin4_container_dc
    restart: always
    environment:
      - PGADMIN_DEFAULT_EMAIL=$PGADMIN_DEFAULT_EMAIL
      - PGADMIN_DEFAULT_PASSWORD=$PGADMIN_DEFAULT_PASSWORD
    ports:
      - $PGADMIN_LOCAL_PORT:$PGADMIN_DOCKER_PORT
    volumes:
      - pgadmin-data:/var/lib/pgadmin

  app:
    depends_on:
      - postgresdb
    build: .
    env_file: ./.env
    restart: always
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://postgresdb:$POSTGRESDB_DOCKER_PORT/$POSTGRESDB_DATABASE
      - SPRING_DATASOURCE_USERNAME=$POSTGRESDB_USER
      - SPRING_DATASOURCE_PASSWORD=$POSTGRESDB_ROOT_PASSWORD
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
    ports:
      - $SPRING_LOCAL_PORT:$SPRING_DOCKER_PORT
volumes:
  db:
  pgadmin-data:

  • You can specify a version as the first line of docker-compose.yml. I have skipped it here so it uses the latest schema; recent versions of Docker Compose treat the version field as obsolete anyway.

  • services define the services that we need to run.

  • app is a custom name for one of your containers.

  • image is the image that Docker must pull.

  • container_name is the name for each container. It’s optional, but a good idea to use.

  • restart defines the restart policy for a service container; always restarts it automatically whenever it stops.

  • ports maps a custom port on the host machine to a port inside the container (HOST:CONTAINER).

  • environment defines the environment variables, such as DB credentials, and so on.

  • Docker volumes are a mechanism for persisting data generated by and used by Docker containers. They are the unsung heroes of data management, ensuring your data survives even if your containers don’t.

  • The data path inside the container differs between databases, e.g. MySQL, MongoDB, Postgres, etc.


  • For Postgres, the path is /var/lib/postgresql/data.



Bind Mounts vs. Docker Volumes

Bind mounts are another way to give containers access to files and folders on your host. They directly mount a host directory into your container. Any changes made to the directory will be reflected on both sides of the mount, whether the modification originates from the host or within the container.

Volumes are a better solution when you’re providing permanent storage to operational containers. Because they’re managed by Docker, you don’t need to manually maintain directories on your host. There’s less chance of data being accidentally modified and no dependency on a particular folder structure. Volume drivers also offer increased performance and the possibility of writing changes directly to remote locations.
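In docker-compose.yml terms, the difference is just the form of the source, as the Postgres service above already shows:

services:
  postgresdb:
    image: postgres
    volumes:
      # bind mount: a host path mapped directly into the container
      - ./dcdb.sql:/docker-entrypoint-initdb.d/dcdb.sql
      # named volume: storage created and managed by Docker
      - db:/var/lib/postgresql/data
volumes:
  db: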


Docker Compose Commands

docker-compose up

This command combines the work of the docker-compose build and docker-compose run commands: it builds the images if they are not found locally and then starts the containers. If the images are already built, it starts the containers directly.

docker-compose up -d

We can use the -d flag to run the services in detached mode, just as we do when running a new container with docker run.
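A typical session with our example project (all standard Compose subcommands):

# build (if needed) and start every service in the background
docker-compose up -d

# list the running service containers
docker-compose ps

# follow the logs of the Spring Boot service
docker-compose logs -f app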

As we can see, the containers are up and running, so we should be able to access our REST API.
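For example, assuming SPRING_LOCAL_PORT is 8080 and the app exposes a hypothetical /employees endpoint:

curl http://localhost:8080/employees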

How can I connect to pgAdmin and access my DB?

Log in to pgAdmin with the email and password from your environment variables, then register a new server. For the host, use the network IP address (or simply the service name postgresdb, since Compose places all services on a shared network), along with the Postgres username and password that you set in the environment variables.
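To find a container's network IP, one option is docker inspect (get the container name from docker ps first):

# list the running containers to find the Postgres container's name
docker ps

# print that container's IP address on the Compose network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>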

docker-compose stop

Stops containers associated with services defined in the docker-compose.yml file, without deleting other resources such as networks, volumes, etc.

 

docker-compose down

Stops the containers and removes the containers and networks created by docker-compose up. Add the -v flag to also remove the volumes, and --rmi to remove the images.
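For example:

# stop and remove the containers and the default network
docker-compose down

# additionally remove the named volumes (db and pgadmin-data); all data will be lost
docker-compose down -v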
