
Setting up your dev environment using docker

I'm trying to learn Docker in an attempt to help me with development. I can imagine that using Docker images with software already preinstalled (like Kafka, Node.js, Redis, MySQL etc.) is much easier than manually installing everything!

I had a few questions on this use case. Again, these are about easily creating a dev environment for myself rather than deploying Docker in production!

  • Let's say we have an app that uses Kafka, Redis, Node.js and MySQL. If I wanted to set this up on my machine using Docker containers, is it correct to assume that the best setup is a) each of these running from its own individual Docker image, and b) each of them communicating over open ports on their respective containers?
  • My webapp code would be part of the container that hosts Node.js? So every time I need to update my webapp code, I would need to change and commit to the Node.js container.
  • When I save data using my webapp, I'm assuming it gets saved within the MySQL container. So if I want to save DB state, I will need to make sure I save and commit the (MySQL) container contents, right?

Also, please suggest any Docker reading material targeted at developers!

The answer to this question is opinionated by definition; it all depends on your project requirements.

Docker should not be viewed as a time saver for software installation (although that is what people often use it for now, resulting in a new generation of developers who have never installed MySQL). It is better seen as a way to manage the distribution of software among environments and team members. It helps ensure that all components are configured the same way and have the same data everywhere.

For instance, installing Redis or Node.js locally takes a single command in the terminal, so unless you have to juggle different configurations/versions and distribute them, Docker doesn't save much effort. Docker actually adds a layer of complexity, although it tries to keep it as minor as possible.

In general it's better to decouple deployment components into different images. Docker creates its own network interface, and all inter-component communication is hidden from your network unless you decide to map container ports to your host.

Regarding the code, I don't see why you would want to develop inside a container. Normally you keep your code in a VCS (like Git) and use it to produce distributable packages, such as Docker images. Again, installing/maintaining a local Node.js/npm has never been an issue.

When you save data, it's saved in the container. You could generate an image from your container, but that's bad practice. Instead, generate SQL scripts that populate the DB with data (you can do that by exporting from your container), then use those scripts to build a new image from scratch (see Dockerfile).
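As a minimal sketch of that approach (the file names here are hypothetical): the official mysql image executes any .sql files placed in /docker-entrypoint-initdb.d when the database is first initialized, so a short Dockerfile can bake your exported scripts into a reproducible image:

```dockerfile
# Hypothetical sketch: build a MySQL image seeded from exported SQL scripts.
# schema.sql / seed-data.sql are assumed files exported from your running
# container (e.g. with mysqldump).
FROM mysql:8.0
# Scripts in this directory run once, on first database initialization.
COPY schema.sql /docker-entrypoint-initdb.d/01-schema.sql
COPY seed-data.sql /docker-entrypoint-initdb.d/02-seed.sql
```

Rebuilding from this Dockerfile always produces a fresh database in a known state, instead of an opaque snapshot of a container.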

Let's say we have an app that uses Kafka, Redis, Node.js and MySQL. If I wanted to set this up on my machine using Docker containers, is it correct to assume that the best setup is a) each of these running from its own individual Docker image, and b) each of them communicating over open ports on their respective containers?

That's correct. With a microservices architecture, we divide an application into services and host each one in its own separate container. This provides various advantages:

  • able to easily switch technologies
  • fault containment
  • ease of upgrades
  • ease of scaling
  • ...

Using user-defined networks, you can easily connect your containers together; containers on the same user-defined network can reach each other by name via Docker's embedded DNS.

sudo docker network create mynet
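The same wiring can be sketched in a Compose file (all names here are hypothetical): every service joins one user-defined network, on which services resolve each other by service name:

```yaml
# Hypothetical sketch: docker-compose.yml joining two services onto one
# user-defined network. On "mynet", the app can reach the cache at the
# hostname "redis" (Docker's embedded DNS resolves service names).
services:
  redis:
    image: redis:7
    networks: [mynet]
  app:
    image: node:20
    networks: [mynet]
    environment:
      REDIS_HOST: redis   # resolved via the shared network
networks:
  mynet: {}
```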

My webapp code would be part of the container that hosts Node.js? So every time I need to update my webapp code, I would need to change and commit to the Node.js container.

That's possible, but I don't recommend it. You can bind-mount your code from the Docker host and commit to it on the host; the code in the container changes too, because of the bind mount. Depending on your app, you may need to restart services in the container for the changes to take effect.
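A minimal sketch of that bind-mount workflow (the paths, port, and the use of nodemon are assumptions): the host's source tree is mounted into the container, so edits on the host appear inside it immediately, and a file-watching runner restarts the app:

```yaml
# Hypothetical sketch: develop on the host, run in the container.
services:
  web:
    image: node:20
    working_dir: /app
    volumes:
      - ./webapp:/app              # bind mount: host code -> container
    command: npx nodemon server.js # nodemon restarts on file changes
    ports:
      - "3000:3000"
```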

When I save data using my webapp, I'm assuming it gets saved within the MySQL container. So if I want to save DB state, I will need to make sure I save and commit the (MySQL) container contents, right?

You shouldn't keep persistent data in your container. Use a bind mount or a volume for your persistent data. In the case of MySQL, bind-mount or volume-mount onto /var/lib/mysql to keep the persistent data outside the container. This has many advantages, such as easy service version upgrades.

Bind mount example:

sudo docker run -d --name mysql --network mynet -v /path/to/directory/on/host:/var/lib/mysql mysql

Volume example:

sudo docker run -d --name mysql --network mynet -v myvolumename:/var/lib/mysql mysql
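In Compose form (names hypothetical), the same named volume is declared once at the top level and survives container removal:

```yaml
# Hypothetical sketch: a named volume keeps MySQL data outside the
# container lifecycle, so the container can be dropped and recreated
# without losing the database.
services:
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # dev-only placeholder
    volumes:
      - myvolumename:/var/lib/mysql
volumes:
  myvolumename: {}
```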

I'll just answer your three questions literally:

"Let's say we have an app that uses Kafka, Redis, Node.js and MySQL. If I wanted to set this up on my machine using Docker containers, is it correct to assume that the best setup is a) each of these running from its own individual Docker image, and b) each of them communicating over open ports on their respective containers?"

Yes, totally.

"My webapp code would be part of the container that hosts Node.js? So every time I need to update my webapp code, I would need to change and commit to the Node.js container."

This would be cumbersome. You can simply map the webapp directory from your Node.js container to a directory on your host using "run -v hostdir:container-webapp-dir".

"When I save data using my webapp, I'm assuming it gets saved within the MySQL container. So if I want to save DB state, I will need to make sure I save and commit the (MySQL) container contents, right?"

Nothing can really be considered "saved" in a container; only volumes persist data. So with a MySQL container, your /var/lib/mysql container directory is saved on your host (it is declared as "VOLUME /var/lib/mysql" in the official image's Dockerfile). You can also add volumes in each container for anything you need to keep across a container drop (logs, conf, ...). A "container drop" means you completely remove the container, for example to upgrade to a new version; merely stopping and starting a container won't erase the logs inside it, even if they are not stored in volumes.

I would suggest that you use a separate container for each of your data stores (Redis, Kafka, Zookeeper, MySQL) and run your application on your host machine. That way, each time you modify your code you don't need to rebuild a Docker image, which saves time.

  • You can use Alpine Linux based images; they are very light and make a better base for containers than the Ubuntu or CentOS images.

  • You should use a separate container for each of your data stores (MySQL, Redis, Kafka, Zookeeper), as it will be easy for you to manage them, e.g. upgrading one data store's version without disturbing the others, and to create clusters.

  • You should mount the container's data directory onto your host machine so that the data is preserved.

Below are Docker images you can use for your dev environment:

  1. Docker for MySQL:

docker run -d -it -p 3306:3306 -v /var/lib/mysql:/var/lib/mysql yobasystems/alpine-mariadb

  2. Docker for Redis:

docker run -d -it -p 6379:6379 smebberson/alpine-redis

Similarly, you can find Docker images for Kafka and any other data stores you want to use. Using publicly available images will save you a lot of effort and time. For more details about Docker, see: https://docs.docker.com/engine/examples/
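Putting that advice together, a dev-only Compose file could run just the data stores with their standard ports published to the host, while the app itself runs on the host. The image tags and environment variables below are assumptions (the Bitnami Kafka/Zookeeper images and their dev-mode settings); treat this as a sketch, not a hardened setup:

```yaml
# Hypothetical sketch: data stores in containers, app on the host.
# Each service publishes its standard port so host code can connect to
# localhost:3306, localhost:6379 and localhost:9092.
services:
  mysql:
    image: mysql:8.0
    ports: ["3306:3306"]
    environment:
      MYSQL_ROOT_PASSWORD: example   # dev-only placeholder
    volumes:
      - mysql-data:/var/lib/mysql    # persist DB data across container drops
  redis:
    image: redis:7
    ports: ["6379:6379"]
  zookeeper:
    image: bitnami/zookeeper:latest
    ports: ["2181:2181"]
    environment:
      ALLOW_ANONYMOUS_LOGIN: "yes"   # dev-only setting
  kafka:
    image: bitnami/kafka:latest
    ports: ["9092:9092"]
    environment:
      KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
      ALLOW_PLAINTEXT_LISTENER: "yes"  # dev-only setting
    depends_on: [zookeeper]
volumes:
  mysql-data: {}
```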
