1. Deploying a cluster with kubeadm and setting up Rancher
Server environment and kubeadm installation
Docs: https://kubernetes.io/zh/docs/
There are several common ways to install Kubernetes.
Machines: three CentOS 7.9 servers

```bash
# Add a regular user; do not operate the server as root.
useradd cwz
passwd cwz        # enter a password when prompted

# Grant cwz sudo privileges:
# vi /etc/sudoers and, below the existing line
#   root ALL=(ALL) ALL
# add:
#   cwz  ALL=(ALL) ALL
# Note: save with :wq! (the file is read-only)

# Set the hostname:
hostnamectl

# Edit the hosts file:
sudo vi /etc/hosts
# Add a 127.0.0.1 entry for the new hostname; otherwise `ping node1` resolves to the LAN IP
```
Download the Docker offline installation package
1. Disable firewalld
```bash
systemctl stop firewalld && systemctl disable firewalld
```
2. Disable SELinux (on Huawei Cloud it is disabled by default, so this step can be skipped; `getenforce` shows the current state, and if it is enabled, disable it yourself)
https://download.docker.com/linux/centos/7/x86_64/stable/Packages/
I am currently using version 19.03: https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-19.03.3-3.el7.x86_64.rpm
After downloading, upload it to any location you like on the server (or download it directly on the server with wget, which is quick and painless).
Mine is at /home/cwz/tools/docker-ce-19.03.3-3.el7.x86_64.rpm
3. Install Docker
```bash
sudo yum install docker-ce-19.03.3-3.el7.x86_64.rpm -y
```
Be patient; you will most likely hit two errors:
- First error: Requires: containerd.io >= 1.2.2-3

```bash
# Download it from https://centos.pkgs.org/7/docker-ce-stable-x86_64/containerd.io-1.2.13-3.1.el7.x86_64.rpm.html
# (a higher version is fine):
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm
# Install it manually:
sudo yum install -y containerd.io-1.2.13-3.2.el7.x86_64.rpm
```
- Second error: Requires: docker-ce-cli

```bash
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-cli-19.03.3-3.el7.x86_64.rpm
# (Note: the CLI version must match the docker-ce version downloaded above)
# Install the CLI:
sudo yum install -y docker-ce-cli-19.03.3-3.el7.x86_64.rpm

# Once that is done, continue installing Docker, i.e. run again:
sudo yum install docker-ce-19.03.3-3.el7.x86_64.rpm -y
```
4. Set up the user group
Docker creates a docker group by default during installation; adding a regular user to it lets that user run docker without sudo:
sudo usermod -aG docker cwz
Note: joining the group is not enough by itself. Either log in again, or run newgrp - docker (which changes the current user's effective group).
5. Configure registry mirrors:
```json
{
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "https://docker.mirrors.ustc.edu.cn",
    "http://hub-mirror.c.163.com",
    "https://cr.console.aliyun.com/"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```
6. Start Docker

```bash
# Restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker
```
Kubernetes installation docs:
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
Use a domestic (China) mirror source:
```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum makecache
```
Run this on every node:
```bash
sudo yum -y install kubelet-1.18.6 kubeadm-1.18.6 kubectl-1.18.6
```
When that finishes, run rpm -aq kubelet kubectl kubeadm and check the output; if all three show up, the installation is fine.
Then enable kubelet at boot:
```bash
sudo systemctl enable kubelet
```
What kubeadm does: initialize the cluster
kubeadm
The main subcommands are:
- kubeadm init: quickly initializes the cluster and deploys the Master components
- kubeadm join: joins a node to a given cluster
- kubeadm token: manages the authentication tokens used when joining a cluster (e.g. list, create)
- kubeadm reset: resets the cluster, deleting generated files to return to the initial state
Use systemd as Docker's cgroup driver
sudo vi /etc/docker/daemon.json (create the file if it does not exist)
and add:
```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```
Restart Docker:
systemctl daemon-reload && systemctl restart docker
Make sure that docker info | grep Cgroup
now reports systemd.
Disable swap
1. Temporarily disable swap (it comes back after a reboot):
swapoff -a
2. Permanently disable swap:
sudo vi /etc/fstab (comment out the swap line)
Initialize the cluster:
Run this on the machine you pick as the master:
```bash
sudo kubeadm init --kubernetes-version=v1.18.6 --image-repository registry.aliyuncs.com/google_containers
```
About tokens

```bash
# List existing tokens
sudo kubeadm token list

# Regenerate a token (valid for 24 hours by default) and print the join command
sudo kubeadm token create --print-join-command

# Other nodes join with this (do not run it yet, just note it down):
sudo kubeadm join 192.168.0.53:6443 --token yd38ha.1u9xsqsleyw7gjuc \
    --discovery-token-ca-cert-hash sha256:2ef5f37cc530644ba1b6a5d99150af309e9c5d6a16933323ee18a7732811f3c1
```
Install the network add-on (flannel) and join worker nodes
CNI (Container Network Interface)
The container network interface makes it easier to configure container networking when containers are created or destroyed.
Commonly used add-ons:
- Flannel (the most basic network add-on)
- Calico (supports network policies)
- Canal (a combination of the previous two)
- Weave (also supports policies, and adds encryption)
We start with flannel.
GitHub: https://github.com/coreos/flannel
Following the official docs at https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/, run:

```bash
# For iptables on your Linux nodes to correctly see bridged traffic,
# make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config
sudo sysctl net.bridge.bridge-nf-call-iptables=1
```
On the master, run:

```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Verify
kubectl get pods --all-namespaces
```
You may hit this error:
failed to acquire lease: node "just1" pod cidr not assigned
It shows up when you run kubectl --namespace kube-system logs kube-flannel-ds-2hpnq
(the cluster CIDR needs to match flannel's network).
Fix:

```bash
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml

# Under the command section, add:
#   - --allocate-node-cidrs=true
#   - --cluster-cidr=10.244.0.0/16

# Then run:
systemctl restart kubelet
```
Note: a complete installation guide for reference: https://www.yuque.com/leifengyang/kubesphere/grw8se
Deploying the Kubernetes Dashboard
Concept:
The official web management UI. Through the dashboard you can conveniently view the cluster's resources, edit resource manifests, scale the cluster, view logs, and so on.
GitHub: https://github.com/kubernetes/dashboard
Install:

```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml

# Then verify
kubectl get pods --all-namespaces
```
Accessing the UI
kubectl proxy
kubectl proxy creates a reverse proxy in front of the Kubernetes apiserver's REST API, providing a single entry point for access control, monitoring and management; backends, authentication, etc. can all be handled in the proxy.

```bash
kubectl proxy --address=0.0.0.0 --port=9090 --accept-hosts='^*$'
```
You can then call the API directly:
http://ip:9090/api/v1/namespaces/kube-system/services
And open the dashboard UI at:
http://121.36.252.154:9090/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy
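Because kubectl proxy speaks plain HTTP, any HTTP client can query it. A minimal Go sketch (the address is an assumption; substitute your own node IP and proxy port):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// kubectl proxy handles authentication against the apiserver,
	// so no token or certificate is needed on this side.
	resp, err := http.Get("http://127.0.0.1:9090/api/v1/namespaces/kube-system/services")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // raw JSON list of services in kube-system
}
```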
Create the corresponding ServiceAccount, following the docs: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
Create a directory, e.g. /home/cwz/dashboard,
and two files in it. The first, db_account.yaml:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
```
The second file, db_role.yaml:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```
In /home/cwz/dashboard, run:

```bash
kubectl apply -f .

# Then run
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

# and you will see the token
```
At this point you still cannot log in, because by default only localhost/127.0.0.1 is allowed to.
The simple solution is to expose the dashboard directly via a NodePort.
Create a file db_svc.yaml:
```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30043
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
```
Finally run: kubectl apply -f .
Deploying Rancher as a management system (import mode)
```bash
sudo docker run -d --restart=unless-stopped -p 8080:80 -p 8443:443 -v /home/cwz/rancher:/var/lib/rancher/ rancher/rancher:v2.4.16
```
Don't forget to open ports 8080 and 8443 in the security group.
Importing the cluster
Setup:

```bash
# The user name must match the one in your kubeconfig
kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user kubernetes-admin
```
Then run the command Rancher gives you:

```bash
curl --insecure -sfL https://192.168.0.106:8443/v3/import/pclwnv429cf4fznp74vfrblsr9w9f4fdvv6gckmtzh6k4tslls66g5.yaml | kubectl apply -f -
```
Fixes, installing Helm and ingress-nginx
After importing the cluster into Rancher there may be a few small pitfalls.
kubectl get cs shows the health of the control-plane components.
Fix:

```bash
sudo vi /etc/kubernetes/manifests/kube-scheduler.yaml
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml
# Remove the --port=0 line in both files

systemctl restart kubelet
```
You may also need to remove the taint from the master node:

```bash
kubectl taint nodes k8s-master node-role.kubernetes.io/master:NoSchedule-
```
Install Helm
https://github.com/helm/helm/releases/tag/v3.4.0
Install ingress-nginx:
Use the older chart nginx3.8.tar.gz.
Do a quick install: helm install my-nginx ingress-nginx/ingress-nginx
It will almost certainly fail; then two commands help:
- kubectl logs <pod name>
- kubectl describe pod <your pod name> (this shows the concrete error)
You will find that the image k8s.gcr.io/ingress-nginx/controller:v0.41.0 cannot be pulled.
Run:
```bash
docker pull giantswarm/ingress-nginx-controller:v0.41.0

docker tag giantswarm/ingress-nginx-controller:v0.41.0 k8s.gcr.io/ingress-nginx/controller:v0.41.0

docker rmi giantswarm/ingress-nginx-controller:v0.41.0
```
2. Kubernetes authentication and authorization
Kubernetes user accounts: generating certificates
K8s has two account types.
UserAccount: user accounts
These are used when accessing the cluster from outside. The most common example is kubectl, which runs as the kubernetes-admin user; k8s itself does not store these accounts.
Authentication methods: we start with the default one, client certificates.
Creating a client certificate
First, assume openssl is installed (if not: sudo yum install openssl openssl-devel).
- Create a directory, e.g. ua/cwz, and run the following inside it.
```bash
openssl genrsa -out client.key 2048
openssl req -new -key client.key -out client.csr -subj "/CN=cwz"

sudo openssl x509 -req -in client.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out client.crt -days 365
```
User accounts (2): first API requests with the certificate, setting the context
The simplest way to make a request is still curl with the client certificate,
where --insecure can replace --cacert /etc/kubernetes/pki/ca.crt to skip server certificate verification.
Reverse-parsing the certificate:
If you forget which CN (Common Name) the certificate was issued with, the following command recovers it:
openssl x509 -noout -subject -in client.crt
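The same lookup can be done programmatically; a minimal Go sketch that reads the client.crt generated above and prints its CN:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read the PEM-encoded client certificate created earlier.
	data, err := os.ReadFile("client.crt")
	if err != nil {
		panic(err)
	}

	// Decode the PEM block and parse the X.509 certificate inside it.
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data found in client.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// The CN is what the API server uses as the user name.
	fmt.Println("CommonName:", cert.Subject.CommonName)
}
```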
What we do next:

```bash
# Add client.crt to ~/.kube/config and switch kubectl to our user
# (even though it has almost no permissions yet)

kubectl config --kubeconfig=/home/cwz/.kube/config set-credentials cwz --client-certificate=/home/cwz/ua/cwz/client.crt --client-key=/home/cwz/ua/cwz/client.key

kubectl config --kubeconfig=/home/cwz/.kube/config set-context user_context --cluster=kubernetes --user=cwz

kubectl config --kubeconfig=/home/cwz/.kube/config use-context user_context

kubectl config current-context
```
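The same kubeconfig can be consumed from Go with client-go. A minimal sketch, assuming client-go v0.18.x and the kubeconfig path above; listing pods will return a Forbidden error until the cwz user is bound to a role:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the kubeconfig; it uses whatever context is current,
	// so switch to user_context first (as done above) to act as the cwz user.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/cwz/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	pods, err := client.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		// Expect a Forbidden error here until cwz gets a Role/RoleBinding.
		log.Fatal(err)
	}
	fmt.Println("pods in default:", len(pods.Items))
}
```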
Getting started with Role and RoleBinding: creating a role
We added the cwz user (a UserAccount) to the kubectl config
and used kubectl config use-context user_context to switch contexts.
To switch back to the admin:

```bash
kubectl config use-context kubernetes-admin@kubernetes
```
Related concepts:
- Role: a role; it can contain a set of permissions and grants access to resources within a single namespace
- RoleBinding: as the name suggests, binds a user to a role
- kubectl get role --all-namespaces
What we do next:
we want the cwz user to be able to view pods.
Create a file called mypod_role.yaml:
```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: mypod
rules:
- apiGroups: ["*"]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
```
For apiVersion, see the docs: https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#role-v1-rbac-authorization-k8s-io
Resources can be listed with this command:

```bash
kubectl api-resources -o wide
```
For example, the role resource supports the following verbs:
create, delete, deletecollection, get, list, patch, update, watch
(create, delete, batch delete, get, list, apply a patch, update, watch)
```bash
# Running the following creates the role:
kubectl apply -f mypod_role.yaml

kubectl get role -n default    # check it

kubectl delete role mypod      # without -n, the default namespace is assumed
```
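The same Role can also be created programmatically with client-go (run under the admin context, since cwz cannot create RBAC objects); a minimal sketch assuming client-go v0.18.x:

```go
package main

import (
	"context"
	"log"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Use the admin kubeconfig; a plain user cannot create roles.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/cwz/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	role := &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "mypod", Namespace: "default"},
		Rules: []rbacv1.PolicyRule{
			{
				APIGroups: []string{"*"},
				Resources: []string{"pods"},
				Verbs:     []string{"get", "watch", "list"},
			},
		},
	}

	if _, err := client.RbacV1().Roles("default").Create(context.Background(), role, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("role mypod created")
}
```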
Binding the user to the role (RoleBinding)
Environment note:
copy the kubeconfig file from node1 to node2 so that node2 can also act as the admin account.
Now let's do the binding.
First way: the command line

```bash
kubectl create rolebinding mypodbinding -n default --role mypod --user cwz

kubectl config use-context user_context

kubectl config use-context kubernetes-admin@kubernetes
```
Second way: create a file, e.g. mypod_rolebinding.yaml

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: mypodrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: mypod
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: cwz
```
When kubectl sends a request to the apiserver, the apiserver identifies the user name, looks it up among the rolebindings, and if a role is bound, checks whether that role permits the requested operation.
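You can ask the API server this question yourself with a SelfSubjectAccessReview (the mechanism behind kubectl auth can-i); a minimal client-go sketch, run while the cwz context is active:

```go
package main

import (
	"context"
	"fmt"
	"log"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Uses whatever context is current; switch to user_context first to check the cwz user.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/cwz/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// "Am I allowed to list pods in the default namespace?"
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "default",
				Verb:      "list",
				Resource:  "pods",
			},
		},
	}

	result, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("allowed:", result.Status.Allowed)
}
```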
ClusterRole and RoleBinding
To manage multiple namespaces in a cluster, you need a ClusterRole.
The binding can be either a RoleBinding or a ClusterRoleBinding (the effects differ).
The Role we created earlier:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: mypod
rules:
- apiGroups: ["*"]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
```
The ClusterRole file:

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mypod-cluster
rules:
- apiGroups: ["*"]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
```
The binding created earlier:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mypodrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: mypod
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: cwz
```
To create the new binding we change it slightly (a RoleBinding in kube-system that references the ClusterRole):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mypodrolebinding-cluster
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mypod-cluster
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: cwz
```
```bash
kubectl get rolebinding -n kube-system

kubectl delete rolebinding mypodrolebinding-cluster -n kube-system
```
ClusterRole + ClusterRoleBinding
Delete the role and rolebindings created earlier:

```bash
kubectl delete rolebinding mypodrolebinding-cluster -n kube-system
kubectl delete rolebinding mypodrolebinding
kubectl delete role mypod
```
vim mypod-clusterrolebinding.yaml
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mypod-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mypod-cluster
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: cwz
```
Requesting the API with a token (UserAccount)
To call the API we can also use a token.
curl https://192.168.0.53:6443 is not accessible as-is.
Let's set up a token:

```bash
# generate a random token
head -c 16 /dev/urandom | od -An -t x | tr -d ' '

kubectl config set-credentials cwz --token=4e2f6f4250a43ce94426b6264dad2609
```
Try an API that fetches pods:
https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#role-v1-rbac-authorization-k8s-io
curl -H "Authorization: Bearer 4e2f6f4250a43ce94426b6264dad2609" https://192.168.0.53:6443/api/v1/namespaces/default/pods --insecure
By default this still fails, because the API server does not know this token yet, while the certificate-based request works:
curl --cert ./client.crt --key ./client.key --cacert /etc/kubernetes/pki/ca.crt -s https://192.168.0.53:6443/api/v1/namespaces/default/pods
Modify the api-server startup parameters:

```bash
sudo vi /etc/kubernetes/pki/token_auth
# with this content (token,user,uid):
4e2f6f4250a43ce94426b6264dad2609,cwz,1001

sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
# and add:
#   - --token-auth-file=/etc/kubernetes/pki/token_auth
```
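Once the token file is in place, the same bearer token can also be used from client-go instead of curl; a minimal sketch (host and token are the example values from this section, so substitute your own):

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	config := &rest.Config{
		Host:        "https://192.168.0.53:6443",
		BearerToken: "4e2f6f4250a43ce94426b6264dad2609",
		// Skip server certificate verification, like curl --insecure.
		TLSClientConfig: rest.TLSClientConfig{Insecure: true},
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Works only after the cwz user has been granted list permission on pods.
	pods, err := client.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pod count:", len(pods.Items))
}
```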
Getting started with ServiceAccount (1): creating the account
Create a ServiceAccount:

```bash
kubectl get sa -n xxxx        # without -n, the default namespace is used

kubectl create serviceaccount mysa
```
Every namespace has a default ServiceAccount, and a ServiceAccount is confined to its own namespace, whereas a UserAccount can span namespaces.
kubectl describe sa mysa
k8s stores a token for it in a secret:
kubectl describe secret mysa-token-bgh9b
You can also generate it as YAML:

```bash
kubectl create serviceaccount mysa -o yaml --dry-run=client
```
Going a step further, write it straight to a file:

```bash
kubectl create serviceaccount mysa -o yaml --dry-run=client > mysa.yaml
```
Getting started with ServiceAccount (2): granting permissions and accessing the API from outside
Use the mysa ServiceAccount created above to access the API.
Bind it to the mypod-cluster role:

```bash
kubectl create clusterrolebinding mysa-crb --clusterrole=mypod-cluster --serviceaccount=default:mysa
```
Install jq: a lightweight JSON processor that can slice, filter, map and transform JSON data; `jq .` pretty-prints JSON.
sudo yum install jq -y
The command, broken into steps:
```bash
kubectl get sa mysa

kubectl get sa mysa -o json

kubectl get sa mysa -o json | jq '.secrets[0].name'

kubectl get sa mysa -o json | jq -Mr '.secrets[0].name'

kubectl get secret mysa-token-vxwrn

kubectl get secret mysa-token-vxwrn -o json | jq -Mr '.data.token'

kubectl get secret mysa-token-vxwrn -o json | jq -Mr '.data.token' | base64 -d
```
Put together in one command:

```bash
kubectl get secret $(kubectl get sa mysa -o json | jq -Mr '.secrets[0].name') -o json | jq -Mr '.data.token' | base64 -d
```
```bash
mysatoken=$(kubectl get secret $(kubectl get sa mysa -o json | jq -Mr '.secrets[0].name') -o json | jq -Mr '.data.token' | base64 -d)

curl -H "Authorization: Bearer $mysatoken" --insecure https://172.17.16.9:6443/api/v1/namespaces/default/pods
```
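The same token extraction can be done with client-go instead of jq; a minimal sketch run under an admin kubeconfig. Note that client-go already base64-decodes Secret data for you:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/cwz/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()

	// Look up the secret name referenced by the mysa ServiceAccount ...
	sa, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "mysa", metav1.GetOptions{})
	if err != nil || len(sa.Secrets) == 0 {
		log.Fatal("serviceaccount mysa or its secret not found: ", err)
	}

	// ... then read the token out of that secret (Data values are already decoded bytes).
	secret, err := client.CoreV1().Secrets("default").Get(ctx, sa.Secrets[0].Name, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(secret.Data["token"]))
}
```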
Getting started with ServiceAccount (3): accessing the k8s API from inside a POD (token method)
vim myngx.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myngx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginxtest
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
```
kubectl apply -f .
Next:

```bash
kubectl get pod
# shows the pod list; then exec into the container:

kubectl exec -it myngx-58bddf9b8d-qmdq7 -- sh

echo $KUBERNETES_SERVICE_HOST
echo $KUBERNETES_PORT_443_TCP_PORT

echo $KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT
```
Save them into environment variables:

```bash
TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`

APISERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT"
```
Make a request:

```bash
curl --header "Authorization: Bearer $TOKEN" --insecure -s $APISERVER/api
```
The request above works, but the one below does not, because the default ServiceAccount has no permission to list pods:

```bash
curl --header "Authorization: Bearer $TOKEN" --insecure -s $APISERVER/api/v1/namespaces/default/pods
```
To fix that, edit myngx.yaml:

```yaml
    spec:
      serviceAccountName: mysa
      containers:
        - name: nginxtest
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
          ports:
```
i.e. add serviceAccountName: mysa,
then run kubectl apply -f . to update the deployment.
Getting started with ServiceAccount (4): accessing the k8s API from inside a POD (token + certificate)
The more common approach is to also use the CA certificate:

```bash
# the CA certificate is mounted at:
/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

cat /var/run/secrets/kubernetes.io/serviceaccount/token
```
```bash
curl --header "Authorization: Bearer $TOKEN" --cacert ./ca.crt -s $APISERVER/api
```
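When the client itself runs inside a pod, client-go can assemble exactly this configuration (service host, mounted token, mounted ca.crt) by itself via rest.InClusterConfig(); a minimal sketch:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// InClusterConfig reads KUBERNETES_SERVICE_HOST/PORT and the mounted
	// serviceaccount token + ca.crt, i.e. what we just wired up by hand with curl.
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Needs the pod's ServiceAccount (mysa) to have list permission on pods.
	pods, err := client.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pods:", len(pods.Items))
}
```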
3. Pod and Deployment
Getting started with Pods: reading the docs, creating a Pod
Pod
Benefits:
- easy to deploy, scale out and in, and schedule; convenient in all kinds of ways
- containers in a Pod share data and network namespaces and get unified resource management and allocation (recall the ServiceAccount discussion in chapter 2)
The role of the pause container:
- it acts as PID 1 and reaps zombie processes
- it is the basis for sharing Linux namespaces
Docs: https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#pod-v1-core
Create a pod
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myngx
spec:
  containers:
    - name: ngx
      image: "nginx:1.18-alpine"
```
```bash
kubectl describe pod xxxx     # show pod details

kubectl logs xxx              # view logs

kubectl exec -it xxx -- sh    # enter the pod
```
To run multiple containers, you might naively change the YAML to:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myngx
spec:
  containers:
    - name: ngx
      image: "nginx:1.18-alpine"
    - name: alpine
      image: "alpine:3.12"
```
But that does not work: the alpine container never stays up. The nginx image has a built-in entrypoint that keeps running, while alpine does not, so it exits immediately. Give it a command:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myngx
spec:
  containers:
    - name: ngx
      image: "nginx:1.18-alpine"
    - name: "alpine"
      command: ["sh","-c","echo this is second && sleep 3600"]
      image: "alpine:3.12"
```
Configuring volumes: mounting a host directory, avoiding pitfalls
Basic format:

```yaml
......
spec:
  containers:
    - name: ngx
      image: "nginx:1.18-alpine"
      volumeMounts:
        - name: mydata
          mountPath: /data
```
Docs: https://kubernetes.io/zh/docs/concepts/storage/volumes/
The simplest and most common type is hostPath.
Basic form:

```yaml
volumes:
  - name: mydata
    hostPath:
      path: /data
      type: Directory
```
The complete manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myngx
spec:
  containers:
    - name: ngx
      image: "nginx:1.18-alpine"
      volumeMounts:
        - name: mydata
          mountPath: /data
    - name: alpine
      command: ["sh","-c","echo this is second && sleep 360000"]
      image: "alpine:3.12"
  volumes:
    - name: mydata
      hostPath:
        path: /home/cwz/data
        type: Directory
```
Basic differences between Pod and Deployment, creating a Deployment
Pods:
run containers directly; a bare Pod is not recreated if it dies.
Deployment
- runs a set of identical Pods (replicas, horizontal scaling) and supports rolling updates
- suitable for production
In short: a Deployment manages and creates PODs through ReplicaSets.
The deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myngx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: ngx
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
```
Two containers sharing a directory
Docs: https://kubernetes.io/zh/docs/concepts/storage/volumes/
Each of the two containers declares the mount point:

```yaml
volumeMounts:
  - name: sharedata
    mountPath: /data
```

```yaml
volumes:
  - name: sharedata
    emptyDir: {}
```
All containers in the same pod can read and write files in an emptyDir. It is typically used as scratch space shared between containers, e.g. temporary directories for logs or tmp files.
Full version:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myngx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: ngx
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: sharedata
              mountPath: /data
        - name: alpine
          image: alpine:3.12
          imagePullPolicy: IfNotPresent
          command: ["sh","-c","echo this is alpine && sleep 36000"]
          volumeMounts:
            - name: sharedata
              mountPath: /data
      volumes:
        - name: sharedata
          emptyDir: {}
```
Basic use of init containers
Before the app starts, you may want the database to be up; the database may live in another pod, and if it is not running there is no point starting this pod.
Init containers are special containers that run before the application containers in a Pod start. They can contain utilities or setup scripts that are not present in the application image.
Init containers are very much like regular containers, except that:
- they always run to completion
- each one must finish successfully before the next one starts
If an init container fails, the kubelet restarts it repeatedly until it succeeds; however, if the Pod's restartPolicy is "Never", Kubernetes does not restart the Pod.
Docs: https://kubernetes.io/zh/docs/concepts/workloads/pods/init-containers
Basic configuration:

```yaml
initContainers:
  - name: init-mydb
    image: alpine:3.12
    command: ['sh', '-c', 'echo wait for db && sleep 35 && echo done']
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myngx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      initContainers:
        - name: init-mydb
          image: alpine:3.12
          command: ['sh', '-c', 'echo wait for db && sleep 35 && echo done']
      containers:
        - name: ngx
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: sharedata
              mountPath: /data
        - name: alpine
          image: alpine:3.12
          imagePullPolicy: IfNotPresent
          command: ["sh","-c","echo this is alpine && sleep 36000"]
          volumeMounts:
            - name: sharedata
              mountPath: /data
      volumes:
        - name: sharedata
          emptyDir: {}
```
4. Deployment and ConfigMap
ConfigMap (1): basic creation, referencing as environment variables
Basic concepts
- A ConfigMap is an API object used to store non-confidential data in key-value pairs. It can be consumed as environment variables, command-line arguments, or configuration files in a volume.
- A ConfigMap decouples environment-specific configuration from container images, so application configuration is easy to change. For confidential data, use a Secret instead.
Use cases:
- command-line arguments for a container entrypoint
- environment variables of a container
- mapped into files
- code running in a Pod that reads the ConfigMap through the Kubernetes API
Basic commands
Create a cm:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mycm
data:
  username: "cwz"
  userage: "19"
```
A single key can also hold a multi-line value:

```yaml
data:
  username: "cwz"
  userage: "19"
  user.info: |
    name=cwz
    age=19
```
Delete the previous deployment, then reference the ConfigMap as environment variables:

```yaml
env:
  - name: USER_NAME
    valueFrom:
      configMapKeyRef:
        name: mycm
        key: username
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myngx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: ngx
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
          env:
            - name: TEST
              value: testvalue
            - name: USERNAME
              valueFrom:
                configMapKeyRef:
                  name: mycm
                  key: username
```
ConfigMap (2): mapping it to a single file
Docs: https://kubernetes.io/zh/docs/concepts/storage/volumes
Mount:

```yaml
volumeMounts:
  - name: cmdata
    mountPath: /data
```
Declare the volume:

```yaml
volumes:
  - name: cmdata
    configMap:
      name: mycm
      items:
        - key: user.info
          path: user.txt
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myngx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: ngx
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: cmdata
              mountPath: /data
          env:
            - name: TEST
              value: testvalue
            - name: USERNAME
              valueFrom:
                configMapKeyRef:
                  name: mycm
                  key: username
      volumes:
        - name: cmdata
          configMap:
            name: mycm
            items:
              - key: user.info
                path: user.txt
```
ConfigMap (3): mapping all keys as files, and subPath
If we do not specify any keys, like this:

```yaml
volumes:
  - name: cmdata
    configMap:
      name: mycm
```
then every key is mapped into the directory, with the key name as the file name.
New requirement: do not specify keys in the volume, but still mount only one of them.
Reference: https://kubernetes.io/zh/docs/concepts/storage/volumes/#using-path
subPath specifies a sub-path inside the referenced volume instead of its root.
Change it to:

```yaml
volumeMounts:
  - mountPath: /data/user.txt
    name: cmdata
    subPath: user.info
.....
volumes:
  - name: cmdata
    configMap:
      name: mycm
```
Complete:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myngx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: ngx
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: cmdata
              mountPath: /data/user.txt
              subPath: user.info
          env:
            - name: TEST
              value: testvalue
            - name: USERNAME
              valueFrom:
                configMapKeyRef:
                  name: mycm
                  key: username
      volumes:
        - name: cmdata
          configMap:
            defaultMode: 0655
            name: mycm
```
ConfigMap: reading it from a program (outside the cluster)
"Outside" means the program runs outside the k8s cluster;
"inside" means the program runs in a pod within the cluster.
Start the API proxy:

```bash
kubectl proxy --address='0.0.0.0' --accept-hosts='^*$' --port=8009
```
Open port 8009 in your security group yourself.
Download the client library matching your k8s version:

```bash
go get k8s.io/client-go@v0.18.6
```
```go
package main

import (
	"context"
	"fmt"
	"log"

	"k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func getClient() *kubernetes.Clientset {
	config := &rest.Config{
		Host: "http://124.70.204.12:8009",
	}
	c, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	return c
}

func main() {
	cm, err := getClient().CoreV1().ConfigMaps("default").
		Get(context.Background(), "mycm", v1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for k, v := range cm.Data {
		fmt.Printf("key=%s,value=%s\n", k, v)
	}
}
```
See the API reference: https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/
ConfigMap: reading it from a program (inside the cluster)
From the RBAC chapter we already know:
- the token is at /var/run/secrets/kubernetes.io/serviceaccount/token
- the APISERVER address is built like this:

```
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT
```
The certificate is at:

```
/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
```
Now we need to create an account:
- inside the cluster it is a ServiceAccount (outside it was a UserAccount)
- this account needs permissions on ConfigMaps
cmuser.yaml:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cmuser
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cmrole
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cmclusterrolebinding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cmrole
subjects:
  - kind: ServiceAccount
    name: cmuser
    namespace: default
```
cmtest.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cdmtest
spec:
  selector:
    matchLabels:
      app: cmtest
  replicas: 1
  template:
    metadata:
      labels:
        app: cmtest
    spec:
      serviceAccount: cmuser
      nodeName: node2
      containers:
        - name: cmtest
          image: alpine:3.12
          imagePullPolicy: IfNotPresent
          command: ["/app/cmtest"]
          volumeMounts:
            - name: app
              mountPath: /app
      volumes:
        - name: app
          hostPath:
            path: /home/cwz/goapi
            type: Directory
```
```go
package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"log"
	"os"

	"k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

var api_server string
var token string

func init() {
	api_server = fmt.Sprintf("https://%s:%s",
		os.Getenv("KUBERNETES_SERVICE_HOST"), os.Getenv("KUBERNETES_PORT_443_TCP_PORT"))
	f, err := os.Open("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		log.Fatal(err)
	}
	b, _ := ioutil.ReadAll(f)
	token = string(b)
}

func getClient() *kubernetes.Clientset {
	config := &rest.Config{
		Host:            api_server,
		BearerToken:     token,
		TLSClientConfig: rest.TLSClientConfig{CAFile: "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"},
	}
	c, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	return c
}

func main() {
	cm, err := getClient().CoreV1().ConfigMaps("default").
		Get(context.Background(), "mycm", v1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for k, v := range cm.Data {
		fmt.Printf("key=%s,value=%s\n", k, v)
	}
	select {}
}
```
ConfigMap: watching cm changes through the API
Basic code:

```go
fact := informers.NewSharedInformerFactory(getClient(), 0)
cmInformer := fact.Core().V1().ConfigMaps()
cmInformer.Informer().AddEventHandler(&CmHandler{})

fact.Start(wait.NeverStop)
select {}
```
The callbacks:

```go
type CmHandler struct{}

func (this *CmHandler) OnAdd(obj interface{}) {}
func (this *CmHandler) OnUpdate(oldObj, newObj interface{}) {
}
func (this *CmHandler) OnDelete(obj interface{}) {}
```
Full code:

```go
package main

import (
	"log"

	"k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func getClient() *kubernetes.Clientset {
	config := &rest.Config{
		Host: "http://124.70.204.12:8009",
	}
	c, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	return c
}

type CmHandler struct{}

func (this *CmHandler) OnAdd(obj interface{}) {}
func (this *CmHandler) OnUpdate(oldObj, newObj interface{}) {
	if newObj.(*v1.ConfigMap).Name == "mycm" {
		log.Println("mycm changed")
	}
}
func (this *CmHandler) OnDelete(obj interface{}) {}

func main() {
	fact := informers.NewSharedInformerFactory(getClient(), 0)

	cmInformer := fact.Core().V1().ConfigMaps()
	cmInformer.Informer().AddEventHandler(&CmHandler{})

	fact.Start(wait.NeverStop)
	select {}
}
```
5. Deployment and Secret
Introduction and naive creation (Opaque)
Official definition:
- A Secret object is used to hold sensitive information such as passwords, OAuth tokens and SSH keys. Putting this information in a Secret is safer and more flexible than putting it in a Pod definition or a container image. See the Secret design document for details.
- A Secret is an object containing a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise end up in a Pod spec or an image. Users can create Secrets, and the system also creates some.
Types:
Let's try one first:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  user: "cwz"
  pass: "123"
```
This is guaranteed to fail:
the values under data must be base64-encoded first.

```bash
echo cwz | base64 && echo 123 | base64
```
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  user: "Y3d6Cg=="
  pass: "MTIzCg=="
```
Or, with stringData, plain values can be used directly:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  user: "zhangsan"
  pass: "123456"
```
Reading secret contents from the command line, and mounting them as files

```bash
kubectl get secret mysecret -o yaml

kubectl get secret mysecret -o json
```
You can also use jsonpath.
Docs: https://kubernetes.io/zh/docs/reference/kubectl/jsonpath/
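Reading the same secret from Go works too; a minimal client-go sketch reusing the kubectl-proxy trick from the ConfigMap chapter (the proxy address is a placeholder). Note that client-go returns Secret.Data already base64-decoded:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Talk to a local `kubectl proxy`; substitute your own address and port.
	config := &rest.Config{Host: "http://127.0.0.1:8009"}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	secret, err := client.CoreV1().Secrets("default").Get(context.Background(), "mysecret", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Data values are []byte and already decoded, unlike the base64 strings in `kubectl get -o yaml`.
	for k, v := range secret.Data {
		fmt.Printf("%s=%s\n", k, string(v))
	}
}
```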
Use as environment variables:

```yaml
- name: USER
  valueFrom:
    secretKeyRef:
      name: mysecret
      key: user
```
Mount:

```yaml
volumes:
  - name: users
    secret:
      secretName: mysecret
```
The complete myngx.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myngx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: ngx
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: cmdata
              mountPath: /data/user.txt
              subPath: user.info
            - name: users
              mountPath: /users
          env:
            - name: TEST
              value: testvalue
            - name: USERNAME
              valueFrom:
                configMapKeyRef:
                  name: mycm
                  key: username
            - name: USER
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: user
      volumes:
        - name: users
          secret:
            defaultMode: 0655
            secretName: mysecret
        - name: cmdata
          configMap:
            defaultMode: 0655
            name: mycm
```
Basic-auth with a secret (1): manual configuration
First, how nginx basic auth is configured by hand:

```nginx
location / {
    auth_basic "xxxxooooo";
    auth_basic_user_file conf/htpasswd;
}
```
The password file is usually generated:
- with Apache's htpasswd tool or with openssl passwd
- htpasswd uses a variant of the MD5 algorithm (apr1)
Install the tool:

```bash
sudo yum -y install httpd-tools
```

```bash
htpasswd -c auth cwz

htpasswd auth lisi
```
Put the resulting file into a ConfigMap named bauth, copying its contents in.
The image used is nginx:1.18-alpine; its default configuration file is /etc/nginx/nginx.conf, and that main config includes /etc/nginx/conf.d/default.conf.
Copy that file and modify it:

```nginx
server {
    listen       80;
    server_name  localhost;
    location / {
        auth_basic "test auth";
        auth_basic_user_file /etc/nginx/basicauth;
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
```
Put the nginx config into a ConfigMap as well (stored under a key called ngx), using the content copied above.
Mounts:

```yaml
volumeMounts:
  - name: nginxconf
    mountPath: /etc/nginx/conf.d/default.conf
    subPath: ngx
  - name: basicauth
    mountPath: /etc/nginx/basicauth
    subPath: auth
```
Volumes:

```yaml
volumes:
  - name: nginxconf
    configMap:
      defaultMode: 0655
      name: nginxconfig
  - name: basicauth
    configMap:
      defaultMode: 0655
      name: bauth
```
The complete myngx.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myngx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: ngx
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nginxconf
              mountPath: /etc/nginx/conf.d/default.conf
              subPath: ngx
            - name: basicauth
              mountPath: /etc/nginx/basicauth
              subPath: auth
      volumes:
        - name: nginxconf
          configMap:
            defaultMode: 0655
            name: nginxconf
        - name: basicauth
          configMap:
            defaultMode: 0655
            name: bauth
```
Test: curl --basic -u cwz:123456 http://10.244.2.85
Basic-auth with a secret (2): mounting via a secret
Import it from the file:

```bash
kubectl create secret generic secret-basic-auth --from-file=auth
```
Pulling private images: creating a Docker registry secret
First register and log in at https://hub.docker.com/signup

```bash
docker pull alpine:3.12

docker login      # log in

docker tag alpine:3.12 cwz/myalpine:3.12
```
Create a private image repository: https://hub.docker.com/repository/create
Push:

```bash
docker push cwz/myalpine:3.12
```
Next, remove the tag we just created locally:

```bash
docker rmi cwz/myalpine:3.12
```
Write a deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myalpine
spec:
  selector:
    matchLabels:
      app: myalpine
  replicas: 1
  template:
    metadata:
      labels:
        app: myalpine
    spec:
      containers:
        - name: alpine
          image: 2441767409/myalpine:3.12
          imagePullPolicy: IfNotPresent
          command: ["sh","-c","echo this is alpine && sleep 36000"]
```
Create the secret:

```bash
kubectl create secret docker-registry dockerreg \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=2441767409 \
  --docker-password=xxxxx \
  --docker-email=2441767409@qq.com
```
Decode its contents:

```bash
kubectl get secret dockerreg -o jsonpath={.data.*} | base64 -d
```
Add this to myalpine.yaml:

```yaml
imagePullSecrets:
  - name: dockerreg
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myalpine
spec:
  selector:
    matchLabels:
      app: myalpine
  replicas: 1
  template:
    metadata:
      labels:
        app: myalpine
    spec:
      containers:
        - name: alpine
          image: 2441767409/myalpine:3.12
          imagePullPolicy: IfNotPresent
          command: ["sh","-c","echo this is alpine && sleep 36000"]
      imagePullSecrets:
        - name: dockerreg
```
6. Deployment and Service
Creating a basic Service, ClusterIP
What a Service does: it gives a set of Pods a stable virtual IP and DNS name and load-balances traffic to them.
First, a ConfigMap:

```yaml
apiVersion: v1
data:
  h1: this is h1
  h2: this is h2
kind: ConfigMap
metadata:
  name: html
```
Then a deployment plus a service:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx1
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: ngx1
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: htmldata
              mountPath: /usr/share/nginx/html/index.html
              subPath: h1
          ports:
            - containerPort: 80
      volumes:
        - name: htmldata
          configMap:
            defaultMode: 0644
            name: html
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
```
Service type ClusterIP:
- ClusterIP exposes the service on an internal cluster IP; the service is only reachable from within the cluster. This is the default ServiceType.
A Service load-balancing multiple PODs
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx1
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: ngx1
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: htmldata
              mountPath: /usr/share/nginx/html/index.html
              subPath: h1
          ports:
            - containerPort: 80
      volumes:
        - name: htmldata
          configMap:
            defaultMode: 0644
            name: html
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx2
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: ngx2
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: htmldata
              mountPath: /usr/share/nginx/html/index.html
              subPath: h2
          ports:
            - containerPort: 80
      volumes:
        - name: htmldata
          configMap:
            defaultMode: 0644
            name: html
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
```
The service picks up every pod whose labels match its selector.
Accessing a k8s Service from the host
Install a tool:

```bash
sudo yum install bind-utils -y
```
Run:

```bash
kubectl get svc -n kube-system
```
```bash
# point the host at the cluster DNS (use the kube-dns service IP from the command above)
sudo vi /etc/resolv.conf

nslookup nginx-svc.default.svc.cluster.local
```
A first look at headless Services
A so-called headless service
- is created by setting ClusterIP to "None":

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  clusterIP: "None"
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
```
Purpose (see the Go sketch after this list):
- some programs need to decide for themselves which IP to use
- e.g. in Go you can call net.LookupIP("service-name") to get the IPs and then pick one yourself
- Pods accessing each other in a StatefulSet
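A minimal sketch of that lookup, meant to run inside the cluster (the service name assumes the nginx-svc above in the default namespace); for a headless service it returns one record per backing pod rather than a single cluster IP:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Inside the cluster, the service name resolves via the cluster DNS.
	// For a headless service this returns one A record per backing pod.
	ips, err := net.LookupIP("nginx-svc.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	for _, ip := range ips {
		fmt.Println(ip.String()) // pick whichever backend you want to talk to
	}
}
```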
kube-proxy, and switching it to ipvs mode
Services are reached from outside via NodePort, ClusterIP, and so on.
kube-proxy runs on every node; it acts as the Pod network proxy, maintaining network rules and performing layer-4 load balancing.
Check the ports it listens on:

```bash
sudo netstat -ntlp | grep kube-proxy
```
kube-proxy listens on ports 10249 and 10256, serving /metrics and /healthz.
Check its configuration:

```bash
kubectl describe cm kube-proxy -n kube-system
```
Two main modes:
userspace (deprecated), iptables, or IPVS
iptables:
- based on the Linux kernel; mature and stable, and it coexists well with other applications that use iptables
- when there are relatively few Services and Endpoints (tens, hundreds, up to around a thousand), iptables performs well
IPVS:
- also a Linux kernel feature (a load balancer). In IPVS mode, kube-proxy replaces iptables with IPVS, which uses optimized lookup algorithms instead of scanning a rule list, so it performs better than iptables at large scale.
How to switch:
- in kube-system, edit the kube-proxy ConfigMap and change mode
```bash
kubectl logs kube-proxy-l7hb9 -n kube-system | grep "Using ipvs Proxier"
```
7. PV and PVC
Creating a PV, the local type, basic settings
PV:
- short for PersistentVolume, an abstraction over underlying shared storage, created and configured by an administrator
- implemented by a volume plugin for a concrete backing technology such as local or NFS
Docs: https://kubernetes.io/zh/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes
Create one:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
```
capacity:
- the units are P, T, G, M, K or Pi, Ti, Gi, Mi, Ki; the i-suffixed units use 1024 as the conversion base
- e.g. 1Mi = 1024Ki = 1024 x 1024 bytes, while 1M = 1000 x 1000 bytes (see the sketch below)
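The difference is easy to verify with the apimachinery resource package that Kubernetes itself uses for these quantity fields; a small sketch:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	mi := resource.MustParse("1Mi") // binary unit: 1024 * 1024 bytes
	m := resource.MustParse("1M")   // decimal unit: 1000 * 1000 bytes

	fmt.Println("1Mi =", mi.Value(), "bytes") // 1048576
	fmt.Println("1M  =", m.Value(), "bytes")  // 1000000

	cpu := resource.MustParse("200m") // the same type is used for CPU requests
	fmt.Println("200m =", cpu.MilliValue(), "millicores")
}
```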
accessModes:
The access modes are:
- ReadWriteOnce: the volume can be mounted read-write by a single node
- ReadOnlyMany: the volume can be mounted read-only by many nodes
- ReadWriteMany: the volume can be mounted read-write by many nodes
Reclaim policies for a PersistentVolume object:
Retained, Recycled, or Deleted
Node affinity:
- preferredDuringSchedulingIgnoredDuringExecution: if no node satisfies the requirement, it is ignored and scheduling proceeds normally
- preferredDuringSchedulingRequiredDuringExecution: if node labels later change and a node satisfies the condition, the pod is rescheduled onto that node
- requiredDuringSchedulingIgnoredDuringExecution: keep retrying until a node satisfies the condition; once running, later label changes on the node do not matter
- requiredDuringSchedulingRequiredDuringExecution: if the node later stops satisfying the condition, a new node is selected
```bash
kubectl get node --show-labels=true

# add a label
kubectl label nodes <node-name> <label-key>=<label-value>
kubectl label nodes node2 pv=local

# remove a label
kubectl label nodes <node-name> <label-key>-
kubectl label nodes node2 pv-
```
matchExpressions operators:
- In: the label's value is in the given list
- NotIn: the label's value is not in the given list
- Exists: the label exists
- DoesNotExist: the label does not exist
- Gt: the label's value is greater than the given value (string comparison)
- Lt: the label's value is less than the given value (string comparison)
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  local:
    path: /home/cwz/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: pv
              operator: In
              values:
                - local
```
Creating a PVC, binding it to a PV, mounting it in a POD
PVC (PersistentVolumeClaim)
A PersistentVolume provides (and implements) the storage resource.
A PersistentVolumeClaim describes the required storage; it is matched against existing PVs or triggers dynamic provisioning of new ones, and the two are then bound.
An analogy:
- the PV is the provider: it offers a storage implementation and a capacity
- the PVC is the consumer: it consumes capacity and does not care which technology provides it; it just binds
Binding:
the relevant spec fields of the PV and the PVC must match, and storageClassName must be identical.
Reclaim policies for a PersistentVolume object: Retained, Recycled, or Deleted.
local-pv.yaml:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /home/cwz/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: pv
              operator: In
              values:
                - local
```
pvc.yaml:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ngx-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
```
ngx.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx1
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: ngx1
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: mydata
              mountPath: /data
          ports:
            - containerPort: 80
      volumes:
        - name: mydata
          persistentVolumeClaim:
            claimName: ngx-pvc
```
A quick intro to StorageClass, and creating one
Think of it as a template for creating PVs.
Docs: https://kubernetes.io/zh/docs/concepts/storage/storage-classes/
Looking at an example:
- provisioner: the volume plugin (e.g. NFS, local)
- reclaimPolicy: the reclaim policy
- volumeBindingMode: the binding mode
  - Immediate: bind as soon as the PVC is created
  - WaitForFirstConsumer: delay binding until a Pod using the PVC is created
- parameters: parameters (each storage backend has many of its own; see the docs)
local-pv.yaml:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /home/cwz/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: pv
              operator: In
              values:
                - local
```
pvc.yaml:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ngx-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
```
storageclass.yaml:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: Local
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
```
ngx.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx1
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: ngx1
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: mydata
              mountPath: /data
          ports:
            - containerPort: 80
      volumes:
        - name: mydata
          persistentVolumeClaim:
            claimName: ngx-pvc
```
8. Getting started with POD autoscaling (HPA)
Getting started with HPA, deploying metrics-server
HPA
The Horizontal Pod Autoscaler can:
- automatically scale the number of Pods in a ReplicationController, Deployment, ReplicaSet or StatefulSet based on CPU utilization
- also scale on other application-provided custom metrics, not just CPU utilization
- Pod autoscaling does not apply to objects that cannot be scaled, such as DaemonSets
metrics-server:
- in k8s, the Metrics Server collects usage of memory, disk, CPU and network for nodes and Pods
Notes:
- the Metrics API only serves current measurements; it does not store history
- the Metrics API URI is /apis/metrics.k8s.io/
- metrics-server must be deployed for this API to exist; it fetches data from the Kubelet Summary API
Deployment:
the official way is:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
components.yaml:
components.yaml:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188
| apiVersion: v1 kind: ServiceAccount metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: k8s-app: metrics-server rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" name: system:aggregated-metrics-reader rules: - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: k8s-app: metrics-server name: system:metrics-server rules: - apiGroups: - "" resources: - pods - nodes - nodes/stats - namespaces - configmaps verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: labels: k8s-app: metrics-server name: metrics-server-auth-reader namespace: kube-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount name: metrics-server namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: labels: k8s-app: metrics-server name: metrics-server:system:auth-delegator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegator subjects: - kind: ServiceAccount name: metrics-server namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: labels: k8s-app: metrics-server name: system:metrics-server roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-server subjects: - kind: ServiceAccount name: metrics-server namespace: kube-system --- apiVersion: v1 kind: Service metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system spec: ports: - name: https port: 443 protocol: TCP targetPort: https selector: k8s-app: metrics-server --- apiVersion: apps/v1 kind: Deployment metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system spec: selector: matchLabels: k8s-app: metrics-server strategy: rollingUpdate: maxUnavailable: 0 template: metadata: labels: k8s-app: metrics-server spec: containers: - args: - --cert-dir=/tmp - --secure-port=4443 - --kubelet-insecure-tls - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname - --kubelet-use-node-status-port image: bitnami/metrics-server:0.4.1 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /livez port: https scheme: HTTPS periodSeconds: 10 name: metrics-server ports: - containerPort: 4443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: https scheme: HTTPS periodSeconds: 10 securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 volumeMounts: - mountPath: /tmp name: tmp-dir nodeSelector: kubernetes.io/os: linux priorityClassName: system-cluster-critical serviceAccountName: metrics-server volumes: - emptyDir: {} name: tmp-dir --- apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: labels: k8s-app: metrics-server name: v1beta1.metrics.k8s.io spec: group: metrics.k8s.io groupPriorityMinimum: 100 insecureSkipTLSVerify: true service: name: metrics-server namespace: kube-system version: v1beta1 versionPriority: 100
|
```bash
kubectl top pod

kubectl autoscale deployment ngx1 --min=2 --max=5 --cpu-percent=20
```
Limiting POD resources, creating an HPA
Look at this snippet:

```yaml
resources:
  requests:
    cpu: "200m"
    memory: "256Mi"
  limits:
    cpu: "400m"
    memory: "512Mi"
```
requests sets the minimum resources each container needs;
limits caps the resources a running container may use.
1 physical core = 1000 millicores, i.e. 1000m = 1 CPU.
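For reference, the scaling decision the HPA controller makes is essentially desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric); a small sketch of that arithmetic (the numbers are made up for illustration):

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas implements the core HPA rule:
// ceil(currentReplicas * currentMetricValue / targetMetricValue).
func desiredReplicas(current int32, currentMetric, targetMetric float64) int32 {
	return int32(math.Ceil(float64(current) * currentMetric / targetMetric))
}

func main() {
	// Example: 2 replicas at 50% average CPU with a 20% target -> scale to 5.
	fmt.Println(desiredReplicas(2, 50, 20))
}
```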
Deploy a deployment.
ngx.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web1
spec:
  selector:
    matchLabels:
      app: myweb
  replicas: 1
  template:
    metadata:
      labels:
        app: myweb
    spec:
      nodeName: just2
      containers:
        - name: web1test
          image: alpine:3.12
          imagePullPolicy: IfNotPresent
          command: ["/app/stress"]
          volumeMounts:
            - name: app
              mountPath: /app
          resources:
            requests:
              cpu: "200m"
              memory: "256Mi"
            limits:
              cpu: "400m"
              memory: "512Mi"
          ports:
            - containerPort: 8080
      volumes:
        - name: app
          hostPath:
            path: /home/cwz/goapi
---
apiVersion: v1
kind: Service
metadata:
  name: web1
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: myweb
```
The app code:

```go
package main

import (
	"encoding/json"

	"github.com/gin-gonic/gin"
)

func main() {
	test := map[string]string{
		"str": "requests来设置各容器需要的最小资源",
	}
	r := gin.New()
	r.GET("/", func(context *gin.Context) {
		ret := 0
		// burn some CPU by marshalling/unmarshalling in a loop
		for i := 0; i <= 1000000; i++ {
			t := map[string]string{}
			b, _ := json.Marshal(test)
			_ = json.Unmarshal(b, &t) // Unmarshal needs a pointer
			ret++
		}
		context.JSON(200, gin.H{"message": ret})
	})
	r.Run(":8080")
}
```
Install a load-testing tool first:

```bash
sudo yum -y install httpd-tools

ab -n 10000 -c 10 http://web1/
```
Create an HPA:

```bash
kubectl autoscale deployment web1 --min=1 --max=5 --cpu-percent=20
```
Creating an HPA with YAML

```bash
kubectl api-versions | grep autoscaling

autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
```
The YAML form:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web1hpa
  namespace: default
spec:
  minReplicas: 1
  maxReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web1
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 50
```
Scaling on absolute usage:

```yaml
metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: AverageValue
        averageValue: 230m
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 400m
```
About custom metrics:
- the mainstream way to use custom metrics is Prometheus
Roughly, the following components are needed:
- node-exporter: a Prometheus exporter that collects node-level metrics
- prometheus: the monitoring server; it scrapes node-exporter and stores the data as time series
- kube-state-metrics: converts metrics queryable with PromQL into data k8s understands
- k8s-prometheus-adapter: aggregated into the apiserver, i.e. the custom-metrics-apiserver implementation (a custom CRD-based implementation also works)
9. CRD and Controllers
What a CRD is, creating your own resource
CRD:
- a CRD (CustomResourceDefinition) is itself a Kubernetes resource that lets us define new resource types on top of it
- note that everything in k8s is a resource
- a CRD is our definition of a custom resource
crd.yaml:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: proxies.extensions.just.com
spec:
  group: extensions.just.com
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                name:
                  type: string
                age:
                  type: integer
  scope: Namespaced
  names:
    plural: proxies
    singular: proxy
    kind: MyRoute
    shortNames:
      - mr
```
crddep.yaml:
```yaml
apiVersion: extensions.just.com/v1
kind: MyRoute
metadata:
  name: mygw
spec:
  name: "cwz"
  age: 19
```
kubectl get mr
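Custom resources have no typed client in client-go, but they can be read with the dynamic client; a minimal sketch, assuming the proxies.extensions.just.com CRD above and an out-of-cluster kubeconfig:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/cwz/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	dyn, err := dynamic.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// group/version/resource of the CRD defined above (resource = plural name).
	gvr := schema.GroupVersionResource{
		Group:    "extensions.just.com",
		Version:  "v1",
		Resource: "proxies",
	}

	obj, err := dyn.Resource(gvr).Namespace("default").Get(context.Background(), "mygw", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// The object comes back as an *unstructured.Unstructured; spec is a plain map.
	spec := obj.Object["spec"].(map[string]interface{})
	fmt.Println("name:", spec["name"], "age:", spec["age"])
}
```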
Basic controller concepts
Concept:
- the kubernetes controller manager is a daemon that embeds the core control loops shipped with kubernetes
- a controller watches the cluster state through the API server and tries to move the current state towards the desired state. The controllers that ship with kubernetes include the replica controller, node controller, namespace controller, service account controller, and so on
What a controller does concretely:
- it watches create, update and delete events on resources and responds by triggering a Reconcile function. This adjustment process is called the "Reconcile Loop" or "Sync Loop"
- for example, when a ReplicaSet receives an event about itself or about pods it created, it triggers Reconcile, which adjusts the state until it matches the desired state
10. Getting started with the kube-scheduler
Scheduler concept, the basic flow, its plugins
Concept:
kube-scheduler is a key Kubernetes component; it plays the role of a housekeeper, following a set of mechanisms to provide scheduling for pods.
A crude sketch of the POD scheduling flow:
- you publish a pod somehow (a pile of configuration)
- the ControllerManager puts the pod into the scheduling queue
- the scheduler gets to work: it decides which node to schedule to and writes the result to etcd
- the kubelet on the chosen node gets the message and does the work (pulls images and actually starts the pod)
How the scheduler does its job:
a set of plugins implement the steps, which boil down to two phases:
- filtering
  - e.g. setting nodeName directly, or label matching; that is filtering
- scoring
  - the scheduler scores every schedulable node and schedules the pod onto the node with the highest score; if several nodes tie for the highest score, one of them is picked at random
NodeAffinity: the node selector
NodeAffinity: node affinity (introductory)
There are currently two types of node affinity:
- requiredDuringSchedulingIgnoredDuringExecution (must be satisfied)
- preferredDuringSchedulingIgnoredDuringExecution (nice to have, not required)
- think of them as "hard" and "soft" requirements
Example:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: app
              operator: In
              values:
                - ngx
```
Docs: https://kubernetes.io/zh/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
ngx.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx1
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - ngx
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                  - key: age
                    operator: In
                    values:
                      - "19"
      containers:
        - name: ngx1
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
```
Scheduling: node taints and tolerations
Node affinity attracts Pods to a class of nodes.
A Taint does the opposite: it lets a node repel a class of Pods.

```bash
kubectl describe node just1 | grep Taints
```
Three effects:
- NoSchedule: must not be scheduled here
- PreferNoSchedule: try not to schedule here
- NoExecute: no new scheduling, and existing Pods on the node are evicted
Built-in taints:
- node.kubernetes.io/not-ready: the node is not ready, i.e. its Ready condition is "False"
- node.kubernetes.io/unreachable: the node controller cannot reach the node, i.e. Ready is "Unknown"
- node.kubernetes.io/out-of-disk: the node is out of disk
- node.kubernetes.io/memory-pressure: the node has memory pressure
- node.kubernetes.io/disk-pressure: the node has disk pressure
- node.kubernetes.io/network-unavailable: the node's network is unavailable
- node.kubernetes.io/unschedulable: the node is unschedulable
- node.cloudprovider.kubernetes.io/uninitialized: when the kubelet starts with an "external" cloud provider, this taint marks the node as unusable; after a cloud-controller-manager controller initializes the node, the kubelet removes it
```bash
# add a taint
kubectl taint node just1 name=cwz:NoSchedule

# remove that taint
kubectl taint node just1 name:NoSchedule-
```
Now let's try scheduling.
The environment is as follows:
First, I have two nodes:
1. just1 has the label app=ngx and the taint name=cwz
2. just2 has neither
We publish a deployment whose node affinity requires app in (ngx).
Add a toleration:

```yaml
tolerations:
  - key: "name"
    operator: "Equal"
    value: "cwz"
    effect: "NoSchedule"
```
Basic POD affinity settings
Concept:
- inter-pod affinity and anti-affinity let you constrain which nodes a Pod can be scheduled to based on the labels of Pods already running on those nodes, rather than on the node labels themselves
Official note:
- inter-pod affinity and anti-affinity require a lot of processing and can noticeably slow down scheduling in large clusters; they are not recommended in clusters with more than several hundred nodes
ngx.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx1
spec:
  selector:
    matchLabels:
      app: ngx1
  replicas: 1
  template:
    metadata:
      labels:
        app: ngx1
    spec:
      nodeName: just1
      containers:
        - name: ngx1
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
```
ngx2.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx2
spec:
  selector:
    matchLabels:
      app: ngx2
  replicas: 1
  template:
    metadata:
      labels:
        app: ngx2
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - ngx1
              topologyKey: name
      containers:
        - name: ngx2
          image: nginx:1.18-alpine
          imagePullPolicy: IfNotPresent
```
Appendix
Initialize the cluster:

```bash
kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version=1.20.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
```
Remove the master taint:

```bash
kubectl taint nodes --all node-role.kubernetes.io/master-
```
Label the worker node:

```bash
kubectl label node just2 node-role.kubernetes.io/node=node
```
Reinstalling k8s

```bash
kubectl drain justnode2 --delete-local-data --force
kubectl delete nodes justnode2

sudo kubeadm reset
sudo rm -rf /etc/cni/net.d
sudo iptables -F

sudo yum -y remove kubelet-1.22.3 kubeadm-1.22.3 kubectl-1.22.3

sudo yum -y remove docker-ce docker-ce-cli containerd
rm -rf /etc/systemd/system/docker.service.d
rm -rf /etc/systemd/system/docker.service
rm -rf /var/lib/docker
rm -rf /var/run/docker
rm -rf /usr/local/docker
rm -rf /etc/docker
rm -rf /usr/bin/docker* /usr/bin/containerd* /usr/bin/runc /usr/bin/ctr

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install containerd -y

containerd config default > /etc/containerd/config.toml

# in /etc/containerd/config.toml, set:
#   sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
#
#   [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
#     [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
#       endpoint = ["https://dockerproxy.com"]
#
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#     SystemdCgroup = true

systemctl daemon-reload && systemctl restart containerd && systemctl enable containerd

wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF

cat <<EOF>> /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum -y install kubelet-1.24.2 kubeadm-1.24.2 kubectl-1.24.2

# vim /etc/sysctl.d/k8s.conf and add or modify these 4 lines:
#   net.ipv4.ip_forward = 1
#   net.bridge.bridge-nf-call-ip6tables = 1
#   net.bridge.bridge-nf-call-iptables = 1
#   net.bridge.bridge-nf-call-arptables = 1

modprobe br_netfilter
sysctl --system

systemctl enable kubelet

kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version=1.28.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --cri-socket=unix:///run/containerd/containerd.sock

kubeadm join 172.17.16.9:6443 --token 6f1uk3.mlwq38qizo60eev1 \
    --discovery-token-ca-cert-hash sha256:110c723f99def7107ad1ce8cf2e063ea551064d945ec6558200879f386e7d0ff \
    --cri-socket=unix:///run/containerd/containerd.sock

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
ctr and crictl
Compared with docker, containerd adds the concept of namespaces; every image and container is only visible inside its own namespace.
So with ctr you have to query like this:

```bash
ctr -n k8s.io images list
```
Remove the taint:

```bash
kubectl taint nodes --all node-role.kubernetes.io/master-
```
Add the label:

```bash
kubectl label node just2 node-role.kubernetes.io/node=node
```