I. Resource Limits Based on Environments and Projects
Just as with tenants and teams, a single cluster may host multiple environments and projects at the same time, so resource limits can also be applied per environment and per project.
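Conceptually, a ResourceQuota is bookkeeping done at admission time: for each tracked resource the API server keeps a running `used` total per namespace and rejects any new object that would push `used` past `hard`. A toy sketch of that check (plain Python for illustration only, not real Kubernetes code):

```python
# Toy model of ResourceQuota admission: a pod is rejected if its
# requests would push the namespace's running total past the hard limit.
def admit(pod_request_cpu, used_cpu, hard_cpu):
    """Return (admitted, new_used). Illustrative only."""
    if used_cpu + pod_request_cpu > hard_cpu:
        return False, used_cpu           # rejected, usage unchanged
    return True, used_cpu + pod_request_cpu

# With a hard limit of requests.cpu = 2 (like dev-quota below):
used = 0.0
ok1, used = admit(2.0, used, hard_cpu=2.0)   # first pod requests 2 CPUs
ok2, used = admit(0.5, used, hard_cpu=2.0)   # no room left for a second
print(ok1, ok2, used)                        # True False 2.0
```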
1. Create two namespaces

(The shell shortcuts used throughout this section are the usual kubectl aliases: k = kubectl, kaf = kubectl apply -f, kg = kubectl get, kgp = kubectl get pods.)
[root@k8s-master01 ~]# k create ns project-dev
[root@k8s-master01 ~]# k create ns project-test
2. Define the ResourceQuota YAML file
[root@k8s-master01 ~]# vim dev-resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: project-dev
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "50"
    requests.storage: 40Gi
    persistentvolumeclaims: "10"
    services: "40"
    count/replicasets.apps: 1k    # "1k" is the SI suffix for 1000
Apply the YAML file
[root@k8s-master01 ~]# kaf dev-resourcequota.yaml
Check the ResourceQuota
[root@k8s-master01 ~]# kg resourcequota -n project-dev
NAME AGE REQUEST LIMIT
dev-quota 6m23s count/replicasets.apps: 1/1k, persistentvolumeclaims: 0/10, pods: 0/50, requests.cpu: 0/2, requests.memory: 0/4Gi, requests.storage: 0/40Gi, services: 0/40 limits.cpu: 0/8, limits.memory: 0/16Gi
3. Configure resource limits for the second project (project-test)
Define the ResourceQuota YAML file
[root@k8s-master01 ~]# vim test-resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
  namespace: project-test
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"
    count/replicasets.apps: 1k
Apply the YAML file
[root@k8s-master01 ~]# kaf test-resourcequota.yaml
Check the ResourceQuota
[root@k8s-master01 ~]# kg resourcequota -n project-test
NAME AGE REQUEST LIMIT
test-quota 6m48s count/replicasets.apps: 0/1k, pods: 0/50, requests.cpu: 0/4, requests.memory: 0/8Gi limits.cpu: 0/16, limits.memory: 0/32Gi
4. Create a Deployment to test the quota
[root@k8s-master01 ~]# k create deploy test-quota --image=registry.cn-hangzhou.aliyuncs.com/abroad_images/redis:7.2.5 -n project-dev
After the Deployment is created, resource requests must be added: once a namespace enforces a quota on compute resources, the quota system rejects new pods that do not specify requests and limits for those resources.
# requests.cpu is set to 2 here
[root@k8s-master01 ~]# k edit deploy test-quota -n project-dev
...
...
        resources:
          limits:
            cpu: "4"
            memory: 1Gi
          requests:
            cpu: "2"
            memory: 10Mi
...
...
The Pod was created successfully
[root@k8s-master01 ~]# kgp -n project-dev
NAME READY STATUS RESTARTS AGE
test-quota-558557cc7b-pz545 1/1 Running 0 17s
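The pod was admitted because every tracked value fits under dev-quota: requests.cpu 2 ≤ 2, requests.memory 10Mi ≤ 4Gi, limits.cpu 4 ≤ 8, limits.memory 1Gi ≤ 16Gi. A quick sketch of how those quantity strings compare once parsed (Python; it handles only the plain, k, Mi, and Gi suffixes seen in this section, not the full Kubernetes quantity grammar):

```python
# Parse the quantity suffixes used in this section and confirm the
# pod's requests/limits fit under dev-quota's hard limits.
UNITS = {"": 1, "k": 10**3, "Mi": 2**20, "Gi": 2**30}

def parse_quantity(q: str) -> float:
    # Try longer suffixes first so "Gi" is not misread.
    for suffix, factor in sorted(UNITS.items(), key=lambda kv: -len(kv[0])):
        if suffix and q.endswith(suffix):
            return float(q[: -len(suffix)]) * factor
    return float(q)

checks = {                      # (pod value, dev-quota hard limit)
    "requests.cpu":    ("2",    "2"),
    "requests.memory": ("10Mi", "4Gi"),
    "limits.cpu":      ("4",    "8"),
    "limits.memory":   ("1Gi",  "16Gi"),
}
for name, (pod, hard) in checks.items():
    assert parse_quantity(pod) <= parse_quantity(hard), name
print("pod fits under dev-quota")
```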
5. Redefine the ResourceQuota in the project-dev namespace
# Lower requests.cpu from 2 to 1
[root@k8s-master01 ~]# vim dev-resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: project-dev
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 4Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "50"
    requests.storage: 40Gi
    persistentvolumeclaims: "10"
    services: "40"
    count/replicasets.apps: 1k
# Re-apply
[root@k8s-master01 ~]# kaf dev-resourcequota.yaml
# Although requests.cpu was lowered to 1 while the existing pod had already requested 2,
# the pod keeps running: changing a quota has no effect on resources that already exist.
# The status below simply reports usage over the limit (requests.cpu: 2/1).
[root@k8s-master01 ~]# kg resourcequota -n project-dev
NAME AGE REQUEST LIMIT
dev-quota 13m count/replicasets.apps: 2/1k, persistentvolumeclaims: 0/10, pods: 1/50, requests.cpu: 2/1, requests.memory: 10Mi/4Gi, requests.storage: 0/40Gi, services: 0/40 limits.cpu: 4/8, limits.memory: 1Gi/16Gi
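Extending the earlier toy model: shrinking `hard` below the current `used` total leaves the running pod alone (the status just reports 2/1), but any new pod, however small its CPU request, is rejected until usage falls back under the limit. A sketch of that behavior (plain Python, illustrative only):

```python
# Toy model: lowering the quota below current usage does not evict
# existing pods; it only blocks new ones until usage drops again.
def admit(request, used, hard):
    return used + request <= hard

used_cpu = 2.0   # the existing pod's requests.cpu, already counted
hard_cpu = 1.0   # the new, lower dev-quota hard limit

over_quota = used_cpu > hard_cpu              # True: status shows 2/1
new_pod_ok = admit(0.1, used_cpu, hard_cpu)   # False: nothing fits now
print(over_quota, new_pod_ok)                 # True False
```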