Whether you are a student or a professional, as long as you are in the field of Web development, you’ll always have to manage multiple database-powered projects in your computer at the same time, which is not an easy task. Docker offers a viable solution for this.
Think about installing an instance of MySQL or Postgres system-wide and then using it for all of your projects. You have to create databases and manage permissions for each project, and keep track of the tables if you are sharing the default database among multiple hobby projects. You also have to perform manual clean-ups occasionally. Worse still, you may find yourself struggling to log in as the root user or to reset passwords (if you haven’t gone through this so far, just search ‘mysql root password reset’ and see how many complaints and totally diverse suggestions show up).
Is there a solution? Will containers help?
Why not use SQLite?
Sure, there’s SQLite, in which a database is just a file, with no concept of a server, users or permissions. That’s why many use SQLite for local development and MySQL or Postgres in production. But this is practical only if you are using a framework with sufficient abstraction. Every RDBMS is incompatible with the others in one way or another, with differences ranging from syntax to the way transactions are carried out. So use SQLite in your dev environment only if you are planning for SQLite in production too.
But if you’re planning for Postgres in production, you should use the same in development too.
How containers can help
Okay, you want to experiment with multiple MySQL or Postgres projects but still don’t want to have a difficult time managing them. Containerisation comes to your rescue.
The idea is to create a container whenever you need a new database for a new project. You can stop the container, or even delete it, when you are done with it. No installations, no administration. With DB servers, this is as close as you can get to the ease of using SQLite.
Do you need more explanation for the no-installations-no-administration part? For Docker, there are pre-built images for services like Postgres and MySQL. Once you’ve downloaded such an image, you can spawn as many instances as you want. Since each project gets its own DB server container, the amount of work required to set up the database, roles and permissions is trivial.
|Note: The only intention of this article is to serve as an introduction to Docker and Docker Compose while sharing a solution to some problems faced by database application developers. Docker is a great development tool. But there is a certain amount of risk when using it in production, especially in terms of security and performance. You should use Docker in production only if there is a real need and you have to be extra careful when you do so.|
Installing Docker used to be a bit difficult, but not anymore. All you have to do in Ubuntu 20.04 is run these commands:
sudo apt update
sudo apt install docker.io docker-compose
Don’t forget to add yourself to the Docker group:
sudo usermod -aG docker YOUR_USERNAME
You’ll have to log out and log in again to apply the group change, but that’s all it takes. Now you can use this Docker setup to run as many containers and projects as you want. However, remember to monitor the containers and clean up the junk frequently, or your system resources will be wasted.
Let’s start the container!
Now that you have Docker installed on your machine, you can run this command to start a Postgres instance:
docker run --name mypostgres1 -e POSTGRES_PASSWORD=mypgpass -d postgres
It’ll pull the Docker image from the Internet automatically and start an instance of it. Pulling is a one-time process, which means subsequent runs will be faster.
In the above command, mypostgres1 is a name of your choice, which you can use in commands like:
docker stop mypostgres1
The flag -e sets an environment variable, and here you can use it to set the POSTGRES_PASSWORD to mypgpass, the password of your choice. -d means detached, which will cause the container to run in the background, allowing you to use the shell for other purposes. Finally, postgres at the end of the command is the image name.
To enter the Postgres shell of this instance, run the following command:
docker exec -it mypostgres1 psql -U postgres
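If you just want to run a single statement without entering the interactive shell, you can pass psql’s -c flag through docker exec (the query here is only an example):

```
# Run a one-off statement inside the container; the SQL is just an example.
docker exec -it mypostgres1 psql -U postgres -c "SELECT version();"
```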
But this setup is only useful to try out some SQL commands. To run a real project, you have to manage ports, persist data, etc. It’s time to get introduced to Docker Compose.
Docker Compose is an abstraction over Docker that makes life easier when you need more control over your containers like setting environment variables, persisting data, coordinating multiple services (e.g., database server and Web server), etc. With Compose, you can define all your services in a human-readable YAML file and run the command docker-compose with minimal arguments to get them started. It’s still possible to manage containers started this way using the native Docker commands.
Postgres with Compose
Let’s first create a working directory with the following structure:
- myproject/
  - initdb.d/
  - pgdatadir/
  - docker-compose.yml
  - pg.env
initdb.d is a folder to keep the SQL files that are to be executed automatically while initialising the database. pgdatadir is an empty folder in which you ask your Postgres instance to store all the data. Both are optional.
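As an illustration, the official Postgres image runs any .sql (or .sh) files it finds in /docker-entrypoint-initdb.d/ when the database is first created. A hypothetical initdb.d/init.sql could contain:

```sql
-- Hypothetical initial schema; adjust to your project.
CREATE TABLE notes (
    id SERIAL PRIMARY KEY,
    body TEXT NOT NULL
);
```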
If there is no initial SQL, the database will start empty. If no pgdatadir is given, Docker will keep the data in an automatically managed volume. pg.env is where you keep all the environment variables. You could put them inside the Compose file itself, but that’s a bad idea (for example, someone might ask you to share your Compose file). Yes, all the file names are of your own choice; you map them in the Compose file, and only the targets (the paths inside the container) matter.
Now let’s define the contents of docker-compose.yml, the Compose file:
version: "3.3"
services:
  db:
    image: postgres
    container_name: mypostgres2
    volumes:
      - ./initdb.d:/docker-entrypoint-initdb.d/
      - ./pgdatadir:/var/lib/postgresql/data
    env_file:
      - pg.env
    ports:
      - "9000:5432"
It should already make sense to you. You are defining just one service (db, again a name of your choice) here. The lines under volumes: map local directories into the container’s file system. Similarly, there’s a mapping going on under ports: the container’s internal port 5432 (the default Postgres port) is mapped to the host machine’s port 9000 (once again, your choice). This mapping is not needed if the database is accessed only from the services defined in the same Compose file. But with the mapping, the service is exposed as part of the host itself, making it available to any application running on the host, the LAN, or even the Internet, as long as the routers and firewalls permit. Now you know one security pitfall. I recommend you do some research on exposing ports and container IPs (also, try the command docker inspect CONTAINER_NAME).
Okay, let’s see the contents of the environment file:
The env file has a syntax that expects one variable per line. Anything put after the equals symbol is part of the value, which means no quotes or escaping.
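For the Postgres setup above, a minimal pg.env needs only the password. (The official image also understands optional variables such as POSTGRES_USER and POSTGRES_DB, if you want a non-default role or database name.)

```
POSTGRES_PASSWORD=mypgpass
```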
Finally, how do you start the service? Make sure you are inside the directory myproject and run this:
docker-compose up -d
You can still use the basic Docker commands like docker exec and docker logs. However, to stop the services, it’s better to run:

docker-compose down
Connecting to the database
To interact with the database, you can use docker exec. But now that the port is open, you can also connect from a client outside Docker (say, psql installed system-wide with apt install postgresql-client), or from your application, just as you normally would. The host will be localhost and the port 9000, based on the above example. If your application is run by the same Compose file, you can use the container’s name (here, mypostgres2) as the host.
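Assuming the port mapping above, connecting from a system-wide psql client looks like this (it will prompt for the password you set in pg.env):

```
psql -h localhost -p 9000 -U postgres
```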
Things that change with MySQL include the environment variables, the path of the data directory and the internal port.
The useful variables are MYSQL_ROOT_PASSWORD, MYSQL_DATABASE, MYSQL_USER and MYSQL_PASSWORD.
The data directory is /var/lib/mysql and the default MySQL port is 3306.
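Putting those differences together, a Compose file for MySQL would look roughly like this; mymysql1, mysqldatadir, mysql.env and the host port 9001 are names and values of your choice, just like in the Postgres example:

```yaml
version: "3.3"
services:
  db:
    image: mysql
    container_name: mymysql1
    volumes:
      - ./initdb.d:/docker-entrypoint-initdb.d/
      - ./mysqldatadir:/var/lib/mysql
    env_file:
      - mysql.env
    ports:
      - "9001:3306"
```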
Monitoring and clean up
You can use the command docker ps to list the running containers. docker stop CONTAINER_NAME is how you stop a container, but it’s recommended to stop it using docker-compose down if you have started it using docker-compose up. Run docker system prune to clean up unused files and cache. Docker volumes can also consume disk space. Run the command docker volume to learn how you can manage them.
If you want to stop the whole Docker service for some reason, run sudo systemctl stop docker. You can also use systemctl to start or restart Docker.