
系统学习Docker 践行DevOps理念

These notes are based on the imooc (慕课网) course 《系统学习 Docker 践行 DevOps 理念》.

The course was published in 2018, so parts of it may be outdated.

Chapter 3: Vagrant Deployment

# create a veth pair
sudo ip link add veth-test1 type veth peer name veth-test2

# create network namespace test1 and move one end of the pair into it
sudo ip netns add test1
sudo ip link set veth-test1 netns test1
sudo ip netns exec test1 ip link

# same for test2
sudo ip netns add test2
sudo ip link set veth-test2 netns test2

# assign an address to each end
sudo ip netns exec test1 ip addr add 192.168.1.1/24 dev veth-test1
sudo ip netns exec test2 ip addr add 192.168.1.2/24 dev veth-test2

# bring both interfaces up
sudo ip netns exec test1 ip link set dev veth-test1 up
sudo ip netns exec test2 ip link set dev veth-test2 up

# the two namespaces can now reach each other
sudo ip netns exec test1 ping 192.168.1.2
sudo ip netns exec test2 ping 192.168.1.1
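
To clean up, delete the namespaces; removing a namespace destroys the veth end inside it, which takes the peer with it:

sudo ip netns del test1
sudo ip netns del test2
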
# start two containers on the default bridge network
docker run -d --name test1 busybox /bin/sh -c "while true; do sleep 3600; done"
docker network inspect bridge
brctl show   # each container's veth peer is attached to the docker0 bridge
ip a

docker run -d --name test2 busybox /bin/sh -c "while true; do sleep 3600; done"
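
On the default bridge the containers reach each other by IP only (there is no name resolution there). A quick check, looking up test2's address with docker inspect:

docker exec test1 ping -c 3 $(docker inspect --format '{{.NetworkSettings.IPAddress}}' test2)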

Linking Docker containers with --link: skipped in the course; a brief sketch follows below.
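
For reference only (--link is deprecated in favor of user-defined networks; the container name test-link is illustrative):

docker run -d --name test-link --link test1 busybox /bin/sh -c "while true; do sleep 3600; done"
docker exec test-link ping -c 3 test1   # the linked name resolves via an /etc/hosts entry
docker rm -f test-link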

Chapter 4: Networking

bridge

# create a user-defined bridge network
docker network create -d bridge my-bridge
brctl show

# attach a new container to it, then connect the existing test2 as well
docker run -d --name test3 --network my-bridge busybox /bin/sh -c "while true; do sleep 3600; done"
docker network inspect my-bridge
docker network connect my-bridge test2
docker network inspect my-bridge
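
Unlike the default bridge, a user-defined bridge provides embedded DNS, so the attached containers resolve each other by name:

docker exec test3 ping -c 3 test2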

none

Containers created on the none network get no IP address (only the loopback interface).
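
A quick check (the container name test-none is illustrative):

docker run -d --name test-none --network none busybox /bin/sh -c "while true; do sleep 3600; done"
docker exec test-none ip a   # only lo, no eth0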

host

The container shares the host's network stack.
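
A quick check (the container name test-host is illustrative):

docker run -d --name test-host --network host busybox /bin/sh -c "while true; do sleep 3600; done"
docker exec test-host ip a   # same interface list as the host itself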

Multi-container deployment example

# start a redis container, build the flask image, and link the two
docker run -d --name redis redis
docker build -t tao/flask-redis .
docker run -d --link redis --name flask-redis -e REDIS_HOST=redis tao/flask-redis
docker exec -it flask-redis /bin/bash
env                    # REDIS_HOST=redis is set inside the container
curl 127.0.0.1:5000    # works inside the container; the port is not published to the host yet
exit
docker stop flask-redis
docker rm flask-redis

# recreate the container with the port published to the host
docker run -d --link redis --name flask-redis -p 5000:5000 -e REDIS_HOST=redis tao/flask-redis
curl 127.0.0.1:5000
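
For context, tao/flask-redis is a small Flask app that reads REDIS_HOST from its environment. A minimal sketch of the build context (assumed; the actual course files differ in detail):

cat > app.py <<'EOF'
import os
from flask import Flask
from redis import Redis

app = Flask(__name__)
redis = Redis(host=os.environ.get('REDIS_HOST', '127.0.0.1'), port=6379)

@app.route('/')
def hello():
    redis.incr('hits')
    return 'Hello! I have been seen %s times.\n' % redis.get('hits')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)
EOF

cat > Dockerfile <<'EOF'
FROM python:2.7
COPY . /app
WORKDIR /app
RUN pip install flask redis
EXPOSE 5000
CMD [ "python", "app.py" ]
EOF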

Multi-host networking

Key concepts: VXLAN, underlay vs. overlay networks; etcd serves as the distributed key-value store in the demo below.

sudo yum install -y etcd

# on docker-node1:
nohup etcd --name docker-node1 --initial-advertise-peer-urls http://192.168.205.10:2380 --listen-peer-urls http://192.168.205.10:2380 --listen-client-urls http://192.168.205.10:2379,http://127.0.0.1:2379 --advertise-client-urls http://192.168.205.10:2379 --initial-cluster-token etcd-cluster --initial-cluster docker-node1=http://192.168.205.10:2380,docker-node2=http://192.168.205.11:2380 --initial-cluster-state new &

# on docker-node2:
nohup etcd --name docker-node2 --initial-advertise-peer-urls http://192.168.205.11:2380 --listen-peer-urls http://192.168.205.11:2380 --listen-client-urls http://192.168.205.11:2379,http://127.0.0.1:2379 --advertise-client-urls http://192.168.205.11:2379 --initial-cluster-token etcd-cluster --initial-cluster docker-node1=http://192.168.205.10:2380,docker-node2=http://192.168.205.11:2380 --initial-cluster-state new &

# verify on either node
etcdctl cluster-health




# on docker-node1: restart dockerd with etcd as the cluster store
sudo service docker stop
sudo /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.205.10:2379 --cluster-advertise=192.168.205.10:2375 &

# on docker-node2:
sudo service docker stop
sudo /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.205.11:2379 --cluster-advertise=192.168.205.11:2375 &
# create an overlay network; its metadata lives in etcd, so it is visible from both nodes
docker network create -d overlay demo
docker network inspect demo
docker run -d --name test1 --net demo busybox sh -c "while true; do sleep 3600; done"
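
With the cluster store in place, the overlay network and its embedded DNS span both hosts. A sketch of the cross-host check, run on docker-node2:

docker run -d --name test2 --net demo busybox sh -c "while true; do sleep 3600; done"
docker exec test2 ping -c 3 test1   # reaches test1 on docker-node1 by name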

Chapter 5: Data Persistence

vagrant plugin install vagrant-scp
vagrant scp ./test docker-node1:/home/vagrant/xxx/

Docker volumes: skipped in the course; a quick reference sketch follows below.
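
A minimal reference sketch (volume and container names are illustrative):

docker volume create demo-data
docker run -d --name db -v demo-data:/var/lib/mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=true mysql
docker volume ls
docker volume inspect demo-data
docker rm -f db              # the volume and its data survive the container
docker volume rm demo-data   # until removed explicitly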

Chapter 6: Docker Compose

Scaling containers horizontally

docker-compose up --scale web=3 -d

HAProxy load balancing
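
Since scaled replicas cannot all bind the same host port, a load balancer sits in front of them. A sketch of how the pieces fit together, assuming the dockercloud/haproxy image commonly used in that era (file contents are illustrative, not the exact course file):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  redis:
    image: redis
  web:
    image: tao/flask-redis
    environment:
      REDIS_HOST: redis
  lb:
    image: dockercloud/haproxy
    links:
      - web
    ports:
      - "8080:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
EOF

docker-compose up -d --scale web=3
curl 127.0.0.1:8080   # requests are balanced across the three web replicas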

Chapter 7: Container Orchestration with Swarm

docker swarm --help
docker swarm init --help

# on the manager node:
docker swarm init --advertise-addr=192.168.205.10

# on each worker node (the token comes from the init output):
docker swarm join --token SWMTKN-1-44mgja8j469rh7i9mc81r6ug6szxkhutctx4c4hufq4rtyxtv2-00tid7ndxfqkzj6dyk1ucxb52 192.168.205.10:2377

# back on the manager:
docker node ls



docker service create --name demo busybox sh -c "while true;do sleep 3600;done"

docker service ls

docker service ps demo

docker service scale demo=5


docker service rm demo
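
Teardown, for completeness (the node name is illustrative):

docker swarm leave            # on a worker
docker node rm docker-node2   # on the manager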

Deployment example (WordPress)

docker network create -d overlay demo

# mysql service, with a named volume for the data directory
docker service create --name mysql --env MYSQL_ROOT_PASSWORD=root --env MYSQL_DATABASE=wordpress --network demo --mount type=volume,source=mysql-data,destination=/var/lib/mysql mysql

# wordpress service on the same overlay network, published on port 80 of every node
docker service create --name wordpress -p 80:80 --env WORDPRESS_DB_PASSWORD=root --env WORDPRESS_DB_HOST=mysql --network demo wordpress


docker service create --name whoami -p 8000:8000 --network demo -d jwilder/whoami

curl 127.0.0.1:8000

docker service create --name client -d --network demo busybox sh -c "while true;do sleep 3600;done"

docker service ps client


docker service ps whoami
docker service scale whoami=2

# run inside the client container; DNS round robin returns the IPs of all whoami tasks
nslookup tasks.whoami

Extension: LVS + keepalived

ingress network

Load balancing and forwarding: a port published by a service is exposed on every swarm node, and the ingress network forwards incoming requests (via IPVS) to one of the service's tasks, even on nodes running none.
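
With whoami scaled to 2 above, hitting the published port from either VM rotates across tasks; the returned hostname changes between requests:

sh -c "while true; do curl 192.168.205.10:8000 && sleep 1; done"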

docker stack

docker stack deploy wordpress --compose-file=docker-compose.yml
docker stack ls
docker stack ps wordpress
docker stack services wordpress
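
A sketch of the docker-compose.yml behind this stack, reconstructed from the `docker service create` commands earlier in this chapter (assumed, not the exact course file):

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: wordpress
    volumes:
      - mysql-data:/var/lib/mysql
    networks:
      - demo
  wordpress:
    image: wordpress
    ports:
      - "80:80"
    environment:
      WORDPRESS_DB_PASSWORD: root
      WORDPRESS_DB_HOST: mysql
    networks:
      - demo
    deploy:
      replicas: 2
volumes:
  mysql-data:
networks:
  demo:
    driver: overlay
EOF
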
docker stack deploy example --compose-file=docker-compose.yml
docker stack ls
docker stack services example
docker service scale example_vote=3

docker visualizer

dockersamples/visualizer
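
The usual way to run it, per the dockersamples/visualizer README (it must sit on a manager node and see the docker socket):

docker service create \
  --name=viz \
  --publish=8080:8080/tcp \
  --constraint=node.role==manager \
  --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  dockersamples/visualizer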

docker secret

echo "adminadmin" | docker secret create my-password -

Rolling updates with docker service

docker service create --name web --publish 8080:5000 --network demo xiaopeng163/python-flask-demo:1.0

docker service ps web
docker service scale web=2

curl 127.0.0.1:8080

# poll the service in a second terminal to watch for downtime during the update
sh -c "while true; do curl 127.0.0.1:8080 && sleep 1; done"

# update the image in place; tasks are replaced a batch at a time
docker service update --image xiaopeng163/python-flask-demo:2.0 web

# published ports can be swapped on the fly as well
docker service update --publish-rm 8080:5000 --publish-add 8088:5000 web
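
The rollout pace can be tuned with the standard update flags, e.g.:

docker service update --update-parallelism 1 --update-delay 10s --image xiaopeng163/python-flask-demo:2.0 web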

Chapter 8: Docker Enterprise Edition

Chapter 9: Kubernetes (k8s)

brew install minikube
kubectl version --client
minikube version
minikube start
# from mainland China, start with image mirrors instead:
minikube start --image-mirror-country=cn --registry-mirror=https://b3uey254.mirror.aliyuncs.com



kubectl config view

kubectl config get-contexts

kubectl cluster-info


kubectl create -f pod_nginx.yml

kubectl get pods
kubectl get pods -o wide


kubectl exec -it nginx -- sh

kubectl port-forward nginx 8080:80

kubectl delete -f pod_nginx.yml
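
For reference, a minimal pod_nginx.yml consistent with the commands above (assumed; the course file may differ slightly):

cat > pod_nginx.yml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF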

kubectl create -f rc_nginx.yml

kubectl get rc

kubectl delete pods nginx-5q46s

kubectl scale rc nginx --replicas=2
kubectl get pods
kubectl scale rc nginx --replicas=4
kubectl get pods

kubectl get pods -o wide

kubectl delete -f rc_nginx.yml
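
Likewise a minimal rc_nginx.yml (assumed):

cat > rc_nginx.yml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF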

kubectl create -f deployment_nginx.yml
kubectl get deployment
kubectl get rs
kubectl get pods

kubectl set image deployment nginx-deployment nginx=nginx:1.13

kubectl rollout history deployment nginx-deployment


kubectl rollout undo deployment nginx-deployment


kubectl get node
kubectl get node -o wide


kubectl expose deployment nginx-deployment --type=NodePort


kubectl get svc
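
And a minimal deployment_nginx.yml matching the rollout commands above (assumed; the image starts at a pre-1.13 tag so the `set image` step has something to update):

cat > deployment_nginx.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.12.2
        ports:
        - containerPort: 80
EOF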

Tectonic Sandbox

source <(kubectl completion zsh)

Flannel (a k8s network plugin): Flannel is an overlay-network tool designed by the CoreOS team for Kubernetes; its goal is to give every Kubernetes host a complete subnet of its own. Flannel provides a virtual network for containers by allocating each host a subnet; it is built on Linux TUN/TAP, encapsulates IP packets in UDP to create the overlay network, and relies on etcd to track how the network is allocated. As the project describes itself: "Flannel is a simple and easy way to configure a layer 3 network fabric designed for Kubernetes."

Services

kops

The dig command

dig +short imooc.com ns
dig +short imooc.com soa

Chapter 10: Container Operations and Monitoring

docker top <container-name>
docker stats

Weave Scope

GitHub - weaveworks/scope: Monitoring, visualisation & management for Docker & Kubernetes
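
Installation as documented in the project's README at the time (the git.io shortlink was the published path; the UI listens on port 4040):

sudo curl -L git.io/scope -o /usr/local/bin/scope
sudo chmod a+x /usr/local/bin/scope
scope launch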

k8s: Heapster + Grafana + InfluxDB

Heapster has since been deprecated (metrics-server is its successor).

Load testing with wrk (or ab)
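
A typical invocation (4 threads, 100 connections, 30 seconds; the URL is illustrative):

wrk -t4 -c100 -d30s http://192.168.205.10:8080/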

Horizontal scaling with an autoscaler
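
In k8s this is the Horizontal Pod Autoscaler; a standard example against the deployment from chapter 9:

kubectl autoscale deployment nginx-deployment --min=2 --max=10 --cpu-percent=80
kubectl get hpa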

log

  • docker logs
  • kubectl logs
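
Basic usage of both (the names are placeholders):

docker logs -f <container>             # follow a container's stdout/stderr
kubectl logs -f <pod> -c <container>   # follow one container of a pod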

In production:

  • ELK

  • a cloud provider's logging service

  • Fluentd (log forwarding)

  • Elasticsearch (log indexing)

  • Kibana (log visualization)

  • LogTrail (a Kibana plugin for a tail-style log UI)

Monitoring with Prometheus

Chapter 11: CI/CD