Docker Compose with React, Node, and PostgreSQL: a multi-container application with Docker

American Dreamer
8 min read · Mar 15, 2020

If you are a developer, or have spent any time in the tech industry, you have almost certainly heard about Docker.

So what is Docker? According to Wikipedia, Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels. To learn more, visit the official Docker website or Wikipedia.

Now that we know what Docker is, let’s get hands-on and build and run a multi-container application with Docker Compose locally on your machine. If you have enough experience with Docker, feel free to head over to the repo; otherwise, we are going to go step by step and build a multi-container application with a UI built with React, a service (backend) built with the Express framework for Node.js, and a PostgreSQL database.

Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. In other words, it makes it easy to start your application and connect its services over a network.
And by the way, if you have installed the Docker Desktop application, it already includes docker-compose.
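To make that concrete, here is a minimal sketch of what a docker-compose.yml looks like (the service name and image here are placeholders, not part of our project):

```yaml
version: '3'            # compose file format version
services:
  web:                  # each key under "services" becomes one container
    image: nginx:alpine # placeholder image pulled from Docker Hub
    ports:
      - "80:80"         # "host:container" port mapping
```

We will build the real file for our three containers step by step below.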

The root level of my project

Before we start configuring docker-compose, I am going to assume that you have already built your apps: the UI (front-end), the service (backend), and the database. Please ignore the package.json and package-lock.json for now; I will explain them later as a bonus to make your workflow efficient and well documented.
Without further ado, let’s get started.

First, we will need to create a docker-compose.yml file so we can run our multi-container application in one shot. For now, we are going to leave it empty and come back to it later.
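If you want to mirror my layout from an empty directory, here is a quick sketch (the folder names follow my project tree; adjust them to yours):

```shell
# Scaffold the root-level layout used in this post
mkdir -p ui service postgresql    # one folder per container
touch docker-compose.yml          # empty for now; we fill it in later
touch ui/Dockerfile.dev service/Dockerfile.dev
```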

The UI — React App

As usual, I used create-react-app with TypeScript. Whether you have installed the React app from scratch or have an existing one, we first need to create a Dockerfile inside our ui app (ui/Dockerfile.dev) that will be used to build the image. A Dockerfile is just a plain text file where you define your configuration steps. More specifically, it tells our container how to behave: how to install our dependencies and run the application.

When creating a Dockerfile, we need to define the base image and the commands that need to be run.

It’s good practice to keep separate container configurations: one for development (Dockerfile.dev) and one for production (Dockerfile). As you can see, I have created the Dockerfile.dev mentioned above in the UI directory. See image:

And now we will need to paste the following content inside of Dockerfile.dev.

Note: Ignore the COPY lines for anything that starts with ts if you DO NOT have a project set up with TypeScript.

FROM node:10.16.0-alpine
# By default create-react-app sets the port to 3000 when you start the React app,
# but I configured the ui to run on 8080 (just personal preference) instead of 3000.
EXPOSE 8080
RUN mkdir -p /app/public /app/src
WORKDIR /app
COPY tsconfig.json /app/tsconfig.json #ignore if don't have react with typescript
COPY tslint.json /app/tslint.json #ignore if don't have react with typescript
COPY package.json /app/package.json
COPY package-lock.json /app/package-lock.json
## install the dependencies defined in package.json (npm uses package-lock.json when it is present)
RUN npm install
# Run 'npm run dev' when the container starts.
CMD ["npm", "run", "dev"]
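One optional addition while we are here: a .dockerignore file next to the Dockerfile keeps heavy local folders out of the build context, which speeds up image builds. A minimal sketch (these entries are my suggestion, not part of the project tree above):

```
node_modules
build
npm-debug.log
```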

Basically, we install the base image (Node.js); then copy the config and package*.json files into the container; then run ‘npm install’ as usual; and finally start our app with ‘npm run dev’ (in your case it might be just ‘npm start’).
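Note that the CMD line assumes a dev script exists in the app’s package.json. A hypothetical example of what that script might look like for create-react-app (react-scripts start is what create-react-app generates; PORT=8080 matches the port exposed above):

```json
"scripts": {
  "dev": "PORT=8080 react-scripts start"
}
```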

Now that we have taken care of the UI portion of the configuration, let’s head over to our service (backend) and configure that part.

The Service (backend) — Express framework for Node.js

My service (backend) is built with the Express framework for Node.js, with TypeScript. The same steps apply here: we also need to create a Dockerfile.dev in the service directory (service/Dockerfile.dev).

Once you have created the Dockerfile.dev inside your service (backend) directory, paste in the following content.

FROM node:10.16.0-alpine
RUN mkdir -p /app/config /app/src
WORKDIR /app
COPY tsconfig.json /app/tsconfig.json
COPY tslint.json /app/tslint.json
COPY package.json /app/package.json
COPY package-lock.json /app/package-lock.json
RUN npm install
CMD ["npm", "run", "dev"]

Again, you can ignore the COPY lines for files that start with ts if you DO NOT have TypeScript in your project.

Now that we have the UI and service configured to run our application inside containers, it’s time to take care of the database.

The Database — PostgreSQL

If you look at the very beginning of this blog, the image of my root directory tree includes a postgresql folder. In theory, you do not need that folder (or a Dockerfile.dev for it), because we can just pull the Postgres image from Docker Hub. But there is one downside to doing it that way: every time you stop your containers, you lose all your data, and you have to set up your database again after each restart. We don’t want that, so we will create a postgresql folder and leave it empty for now.

Finally, Docker-Compose and Time for Magic

Now we are going to configure the docker-compose.yml file that we created at the start of this project. This file lets us define the configuration for each container, much like passing commands such as docker container create, docker volume create, docker network create, etc.

When we create our Postgres container we need to include the following:

postgres:
  image: postgres:12.1
  ports:
    - "5432:5432"
  environment:
    POSTGRES_PASSWORD: mypassword
  volumes:
    - ./postgresql/data:/var/lib/postgresql/data

As you notice, I have included my environment variables for Postgres in the docker-compose.yml file so that they are passed to the container. This is where the postgresql folder comes in handy: by defining volumes, we are telling Docker to map the container’s data directory to that local folder. As I mentioned earlier, the purpose of that folder is that you will not lose your data every time you restart your containers.
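On the service side, note that the backend should connect to the host postgres (the compose service name), not localhost, because Compose puts the containers on a shared network with internal DNS. A hypothetical sketch of the connection settings the backend might use (the dbConfig name and environment variables are my own; this shape matches what the node-postgres pg client accepts):

```javascript
// Hypothetical connection settings for the service (backend) container.
// "postgres" is the compose service name; Docker's internal DNS resolves
// it to the database container on the shared network.
const dbConfig = {
  host: process.env.DB_HOST || 'postgres',
  port: 5432,
  user: 'postgres',                                        // default user of the postgres image
  password: process.env.POSTGRES_PASSWORD || 'mypassword', // matches docker-compose.yml
  database: 'postgres',                                    // default database of the postgres image
};

module.exports = dbConfig;
```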

For our UI (front-end) and Service (backend), it should look like this:

service:
  build:
    context: ./service
    dockerfile: Dockerfile.dev
  volumes:
    - /app/node_modules
    - ./service/config:/app/config
    - ./service/src:/app/src
    - ./service/test:/app/test
  ports:
    - "3000:3000"
ui:
  build:
    context: ./ui
    dockerfile: Dockerfile.dev
  volumes:
    - /app/node_modules
    - ./ui:/app
  ports:
    - "8080:8080"

As you can see, we have also defined volumes for the service and the UI. Remember, when we create an image we take a snapshot of the source code in the directory. The bind mounts above map your local source folders into the containers, so your changes show up without rebuilding, and the bare /app/node_modules entry is an anonymous volume that stops the bind mount from hiding the dependencies installed inside the image.

The final docker-compose.yml should look like this:

version: '3'
services:
  postgres:
    image: postgres:12.1
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: mypassword
    volumes:
      - ./postgresql/data:/var/lib/postgresql/data
  service:
    build:
      context: ./service
      dockerfile: Dockerfile.dev
    volumes:
      - /app/node_modules
      - ./service/config:/app/config
      - ./service/src:/app/src
      - ./service/test:/app/test
    ports:
      - "3000:3000"
  ui:
    build:
      context: ./ui
      dockerfile: Dockerfile.dev
    volumes:
      - /app/node_modules
      - ./ui:/app
    ports:
      - "8080:8080"

Starting and Running containers

There are several ways of running containers.

One way of starting your containers is to run this command from your terminal: docker-compose up -d

-d, --detach               
Detached mode: Run containers in the background, print new container names. Incompatible with --abort-on-container-exit.

And here is the bonus I mentioned at the beginning, to make your workflow efficient and well documented for other people who are not familiar with docker-compose.

Inside your project, let’s go ahead and run the following command.

npm init -y

This command will generate a package.json. We are going to add three scripts: one to build our images, one to stop our running containers, and one to start our containers.

You will need to rebuild your images every time something changes, whether you have installed, removed, or updated a dependency.

"docker:build": "docker-compose -p {name} build",

After adding this script, we will be able to run the following command to rebuild our images:

npm run docker:build

To stop the running multi-container application, we can add the following script to our package.json file.

"docker:down": "docker-compose -p {name} down",

After adding this script, we should be able to run the following command to stop all running containers:

npm run docker:down

To start the multi-container application we can add the following script to our package.json file.

"docker:up": "docker-compose -p {name} up",

After adding this script, we should be able to run the following command to start our containers:

npm run docker:up

You may have noticed I have {name} in my scripts. You can replace it with a name of your choice for your multi-container application.

Once you are done, your package.json should look like this:

{
  "name": "Name of your app",
  "version": "0.1.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "docker:build": "docker-compose -p {name} build",
    "docker:down": "docker-compose -p {name} down",
    "docker:up": "docker-compose -p {name} up",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": ""
  },
  "keywords": [],
  "author": "",
  "license": "SEE LICENSE IN LICENSE.txt"
}

That’s it! There you have it: a multi-container application with React, Node.js, and PostgreSQL.

Please note that this is my very first blog post. I look forward to your feedback; let me know if you find it useful.
