Table of Contents
Creating a private registry
Tagging the modified nginx image for upload to the private registry
Pushing the image to the private registry
Pulling the image from the private registry
CPU utilization
CPU share weighting
CPU period limits
A combined example of CPU quota control parameters
Memory limits
Block IO limits
Limiting bps and iops
Creating a private registry
A repository (Repository) is a centralized place for storing images.
A registry server (Registry) is the server that actually hosts repositories: each server can host multiple repositories, each repository can hold multiple images, each image can back multiple containers, and each container runs one application or application group.
After installing Docker, you can deploy a local private registry environment using the officially provided registry image:
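An image reference destined for a private registry takes the form registry-host:port/repository:tag, which is why the images below get tagged with the registry's address. A minimal shell sketch of how such a reference decomposes (the address matches the one used throughout this section):

```shell
# Split a full image reference into registry, repository and tag.
ref="192.168.50.59:5000/nginx-awd:latest"
registry="${ref%%/*}"   # everything before the first "/"
rest="${ref#*/}"        # repository:tag
repo="${rest%%:*}"
tag="${rest##*:}"
echo "registry=$registry repo=$repo tag=$tag"
# prints: registry=192.168.50.59:5000 repo=nginx-awd tag=latest
```

Because the registry part is taken from the image name itself, `docker push` knows to send the image to 192.168.50.59:5000 rather than Docker Hub.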
[root@localhost ~]# mkdir -p /opt/data/registry
[root@localhost ~]# docker run -d --restart=always -p 5000:5000 -v /opt/data/registry:/var/lib/registry registry    //registry v2 stores its data under /var/lib/registry
Unable to find image 'registry:latest' locally
Trying to pull repository docker.io/library/registry ...
latest: Pulling from docker.io/library/registry
79e9f2f55bf5: Pull complete
0d96da54f60b: Pull complete
5b27040df4a2: Pull complete
e2ead8259a04: Pull complete
3790aef225b9: Pull complete
Digest: sha256:169211e20e2f2d5d115674681eb79d21a217b296b43374b8e39f97fcf866b375
Status: Downloaded newer image for docker.io/registry:latest
a0edf5ac6cdda7464855c98db855c60f32f54bf8f078647dc2b8357aa8581151
[root@localhost ~]# docker ps -l
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
a0edf5ac6cdd        registry            "/entrypoint.sh /e..."   31 seconds ago      Up 29 seconds       0.0.0.0:5000->5000/tcp   thirsty_ptolemy
Prepare a test image
[root@localhost ~]# docker run -d -p 8000:80 nginx     //map host port 8000 to the container's service port
ea26add1a77cd25a90041acfd3b0994630cecc098de2ed15f088be9b4fa8335a
[root@localhost ~]# docker ps -l
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                  NAMES
ea26add1a77c        nginx               "/docker-entrypoin..."   9 seconds ago       Up 8 seconds        0.0.0.0:8000->80/tcp   nifty_knuth
Access port 8000 from the host to test:
[root@localhost ~]# docker logs ea26add1a77c
Tagging the modified nginx image for upload to the private registry
[root@localhost ~]# docker tag nginx 192.168.50.59:5000/nginx-awd
[root@localhost ~]# cat /etc/docker/daemon.json
{
        "registry-mirrors": ["https://nyakyfun.mirror.aliyuncs.com"]
}
[root@localhost ~]# vim /etc/docker/daemon.json
{
        "registry-mirrors": ["https://nyakyfun.mirror.aliyuncs.com"],
        "insecure-registries": ["192.168.50.59:5000"]
}
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker
Pushing the image to the private registry
[root@localhost ~]# docker push 192.168.50.59:5000/nginx-awd
The push refers to a repository [192.168.50.59:5000/nginx-awd]
d874fd2bc83b: Pushed
32ce5f6a5106: Pushed
f1db227348d0: Pushed
b8d6e692a25e: Pushed
e379e8aedd4d: Pushed
2edcec3590a4: Pushed
latest: digest: sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3 size: 1570
Check the result:
[root@localhost ~]# docker images
REPOSITORY                     TAG                 IMAGE ID            CREATED             SIZE
centos                         exp                 c85e59d0ca2f        23 hours ago        231 MB
192.168.50.59:5000/nginx-awd   latest              605c77e624dd        19 months ago       141 MB
Remove the local tag to test pulling:
[root@localhost ~]# docker rmi 192.168.50.59:5000/nginx-awd
Untagged: 192.168.50.59:5000/nginx-awd:latest
Untagged: 192.168.50.59:5000/nginx-awd@sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3
[root@localhost ~]# docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
centos               exp                 c85e59d0ca2f        23 hours ago        231 MB
docker.io/nginx      latest              605c77e624dd        19 months ago       141 MB
docker.io/registry   latest              b8604a3fe854        20 months ago       26.2 MB
docker.io/centos     latest              5d0da3dc9764        22 months ago       231 MB
Pulling the image from the private registry
[root@localhost ~]# docker pull 192.168.50.59:5000/nginx-awd
Using default tag: latest
Trying to pull repository 192.168.50.59:5000/nginx-awd ...
latest: Pulling from 192.168.50.59:5000/nginx-awd
Digest: sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3
Status: Downloaded newer image for 192.168.50.59:5000/nginx-awd:latest
[root@localhost ~]# docker images
REPOSITORY                     TAG                 IMAGE ID            CREATED             SIZE
centos                         exp                 c85e59d0ca2f        23 hours ago        231 MB
192.168.50.59:5000/nginx-awd   latest              605c77e624dd        19 months ago       141 MB
Docker resource limits
Docker container technology relies on cgroups (Control Groups) under the hood to limit a container's use of physical resources across three areas: CPU, memory, and disk. This covers the common resource quotas and usage controls.
A cgroup is a Linux kernel mechanism for limiting, accounting for, and isolating the physical resources used by a group of processes; it is used by LXC, Docker, and many other projects to implement resource control for processes.
Cgroups provide the basic infrastructure and interfaces for managing processes in groups. Docker's concrete resource-management features, such as I/O and memory allocation control, are implemented through cgroups; the individual resource controllers are known as cgroup subsystems.
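The mapping from a container to its cgroup files (cgroup v1 layout, as used throughout this document) can be sketched in shell. The container ID below is just an illustrative value; in practice it would come from `docker inspect -f '{{.Id}}' <name>`:

```shell
# Illustrative full container ID (a real one is 64 hex characters).
cid="a0edf5ac6cdda7464855c98db855c60f32f54bf8f078647dc2b8357aa8581151"

# Per-subsystem cgroup directories Docker creates for this container:
for subsys in cpu memory blkio; do
    echo "/sys/fs/cgroup/$subsys/docker/$cid/"
done
```

The files inspected later in this document (cpu.cfs_quota_us, blkio.weight, and so on) live inside these per-container directories.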
Use the following Dockerfile to build a CentOS-based image containing the stress tool.
[root@localhost ~]# cat centos-7-x86_64.tar.gz | docker import - centos:7
sha256:6e593ec2c4f80e5d44cd15d978c59c701f02b72b1c7458778854a6dc24d492b8
[root@localhost ~]# mkdir stress
[root@localhost ~]# vim stress/Dockerfile
FROM centos:7
MAINTAINER crushlinux "crushlinux@163.com"
RUN yum -y install wget
RUN wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
RUN yum -y install stress
[root@localhost ~]# cd stress/
[root@localhost stress]# docker build -t centos:stress .
CPU utilization
[root@localhost stress]# docker run -itd centos:stress /bin/bash
9d9428089027bf70bf2b4e6a441cab0d465c2f5dd3988420b05c7149d4a9ff3d
[root@localhost stress]# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
9d9428089027        centos:stress       "/bin/bash"              39 seconds ago      Up 38 seconds                           silly_mestorf
[root@localhost stress]# cat /sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us
-1
CPU share weighting
When multiple container workloads run at once, it is hard to predict each one's CPU utilization. To make containers share the CPU sensibly, the --cpu-shares option assigns each container a proportional share of CPU resources; this approach also lets utilization adjust dynamically.
The --cpu-shares value does not guarantee one vCPU or any specific amount of CPU in GHz; it is merely an elastic weight.
[root@localhost ~]# docker run --name aa -itd --cpu-shares 1024 centos:stress /bin/bash
1c9d6552e940da713e8ce89c9b10f045aff0b1fcbfdef45f2f9bf1d2189c4604
[root@localhost ~]# docker run --name bb -itd --cpu-shares 1024 centos:stress /bin/bash
4a7ea87192d16d4b8b086c125a467357f28aed00157726367cefea7fada12a21
[root@localhost ~]# docker run --name cc -itd --cpu-shares 2048 centos:stress /bin/bash
7b52275fa1b3c00b1bcbbb5ef6ef165f4b33bc5ccab51908b66e19b7f50fe772
[root@localhost ~]# docker run --name dd -itd --cpu-shares 4096 centos:stress /bin/bash
b56f55536170be6ac5bf25b6e1350e126c05fc44cf54d954a7c9d679ef73c110
By default, every Docker container has a CPU share of 1024. A single container's share is meaningless on its own; the CPU weighting only takes effect when multiple containers run at the same time. For example, if containers A and B have CPU shares of 1000 and 500, then when the CPU hands out time slices, container A has twice container B's chance of receiving one. But the actual allocation depends on the state of the host and the other containers at that moment, and there is no guarantee container A will get CPU time at all: if container A's processes stay idle, container B can receive more CPU time than container A. In the extreme case, if only one container runs on the host, it can monopolize the host's entire CPU even with a share of just 50.
In other words, cpu shares set a container's CPU priority. For example, start two containers under load and watch their CPU usage percentages.
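Under full contention, a container's expected slice is its weight divided by the sum of all weights. A quick sketch for the 1024 vs 512 weights used below:

```shell
# Expected CPU split between two fully loaded containers (integer percent).
a=1024   # --cpu-shares of container cpu1024
b=512    # --cpu-shares of container cpu512
total=$((a + b))
echo "cpu1024 ~ $((100 * a / total))%, cpu512 ~ $((100 * b / total))%"
# prints: cpu1024 ~ 66%, cpu512 ~ 33%
```

This matches the top output below: cpu1024's ten stress workers sit around 6.5% each (about 66% in total), cpu512's around 3.3% each (about 33%).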
[root@localhost ~]# docker run -tid --name cpu1024 --cpu-shares 1024 centos:stress stress -c 10
3aa4978e3257d760b66b8d85a9c78257e982fcff593f4211880a914a59978603
[root@localhost ~]# docker run -tid --name cpu512 --cpu-shares 512 centos:stress stress -c 10
[root@localhost ~]# docker ps -a
CONTAINER ID ???????IMAGE ??????????????COMMAND ????????????CREATED ?????????????STATUS ?????????????PORTS ??????????????NAMES
3aa4978e3257 ???????centos:stress ??????"stress -c 10" ?????About a minute ago ??Up About a minute ??????????????????????cpu1024
8a0048522072 ???????centos:stress ??????"stress -c 10" ?????2 minutes ago ???????Up 2 minutes ???????????????????????????cpu512
[root@localhost ~]# docker exec -it 3a /bin/bash
[root@3aa4978e3257 /]# top
top - 02:46:44 up  5:45,  0 users,  load average: 19.34, 10.07, 4.21
Tasks:  13 total,  11 running,   2 sleeping,   0 stopped,   0 zombie
%Cpu(s): 98.6 us,  1.4 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  3861056 total,   846756 free,   292556 used,  2721744 buff/cache
KiB Swap:  2097148 total,  2097148 free,        0 used.  2961824 avail Mem
   PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
    11 root      20   0    7260     92      0 R  7.0  0.0   0:11.83 stress
    14 root      20   0    7260     92      0 R  7.0  0.0   0:11.84 stress
     6 root      20   0    7260     92      0 R  6.7  0.0   0:11.83 stress
     7 root      20   0    7260     92      0 R  6.7  0.0   0:11.83 stress
     8 root      20   0    7260     92      0 R  6.7  0.0   0:11.84 stress
     9 root      20   0    7260     92      0 R  6.7  0.0   0:11.84 stress
    10 root      20   0    7260     92      0 R  6.3  0.0   0:11.83 stress
    12 root      20   0    7260     92      0 R  6.3  0.0   0:11.82 stress
    13 root      20   0    7260     92      0 R  6.3  0.0   0:11.83 stress
    15 root      20   0    7260     92      0 R  6.3  0.0   0:11.83 stress
     1 root      20   0    7260    640    548 S  0.0  0.0   0:00.01 stress
    16 root      20   0   11748   1796   1444 S  0.0  0.0   0:00.01 bash
    31 root      20   0   51872   1940   1408 R  0.0  0.1   0:00.00 top
Ten stress processes are started to deliberately make system resources scarce; only under such contention does the configured ratio show up. If only one process ran, it would simply be scheduled onto an idle CPU and no ratio would be visible. Because test environments differ, the CPU percentages in the two outputs may vary, but judged by cpu shares the overall split between the two containers will always be 2:1 in favor of cpu1024.
[root@localhost ~]# docker exec -it 8a /bin/bash
[root@8a0048522072 /]# top
top - 02:47:53 up  5:47,  0 users,  load average: 20.04, 12.21, 5.38
Tasks:  13 total,  11 running,   2 sleeping,   0 stopped,   0 zombie
%Cpu(s): 98.2 us,  1.8 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  3861056 total,   846756 free,   292548 used,  2721752 buff/cache
KiB Swap:  2097148 total,  2097148 free,        0 used.  2961828 avail Mem
   PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
     9 root      20   0    7260     92      0 R  3.7  0.0   0:15.21 stress
    12 root      20   0    7260     92      0 R  3.7  0.0   0:15.21 stress
    14 root      20   0    7260     92      0 R  3.7  0.0   0:15.20 stress
     6 root      20   0    7260     92      0 R  3.3  0.0   0:15.20 stress
     7 root      20   0    7260     92      0 R  3.3  0.0   0:15.19 stress
    10 root      20   0    7260     92      0 R  3.3  0.0   0:15.20 stress
    13 root      20   0    7260     92      0 R  3.3  0.0   0:15.19 stress
    15 root      20   0    7260     92      0 R  3.3  0.0   0:15.21 stress
     8 root      20   0    7260     92      0 R  3.0  0.0   0:15.19 stress
    11 root      20   0    7260     92      0 R  3.0  0.0   0:15.19 stress
     1 root      20   0    7260    640    548 S  0.0  0.0   0:00.01 stress
    16 root      20   0   11772   1896   1500 S  0.0  0.0   0:00.01 bash
    31 root      20   0   51872   1944   1408 R  0.0  0.1   0:00.00 top
CPU period limits
cpu-period and cpu-quota are both measured in microseconds (μs). cpu-period ranges from a minimum of 1000 μs to a maximum of 1 second (10^6 μs), with a default of 0.1 second (100000 μs). cpu-quota defaults to -1, meaning no limit. The two parameters are generally used together.
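Together, the two flags bound CPU time: the container may use at most cpu-quota microseconds of CPU per cpu-period, i.e. quota/period CPUs' worth. For the values used in the example below:

```shell
# --cpu-period 100000 --cpu-quota 200000 allows quota/period CPUs' worth
# of time per scheduling period.
period=100000
quota=200000
echo "CPUs allowed: $((quota / period))"   # prints: CPUs allowed: 2
```

A quota larger than the period (as here) only makes sense on a multi-core host, since a single core can supply at most one period's worth of time per period.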
[root@localhost ~]# docker run -it --cpu-period 100000 --cpu-quota 200000 centos:stress /bin/bash
[root@67f52c4e2d20 /]# cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
100000
[root@67f52c4e2d20 /]# cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
200000
A combined example of CPU quota control parameters
Using --cpuset-cpus, container A is pinned to CPU core 0 and container B to CPU core 1. If these are the only containers using their respective cores on the host, each owns its core's full resources and --cpu-shares has no visible effect.
--cpuset-cpus and --cpuset-mems only take effect on servers with multiple CPU cores or multiple memory nodes, and must match the actual physical configuration; otherwise resource control cannot be achieved.
On a system with multiple CPU cores, pinning containers to specific cores with --cpuset-cpus makes testing considerably easier.
[root@localhost ~]# docker run -itd --name cpu0 --cpuset-cpus 0 --cpu-shares 512 centos:stress stress -c 1
d3734959ddb9a118d857a5d7d35b426b697b3d6a09670e6041096a74c17b6c4f
[root@localhost ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
d3734959ddb9        centos:stress       "stress -c 1"       About a minute ago   Up 1 second                             cpu0
[root@localhost ~]# docker exec -it d37 /bin/bash
[root@d3734959ddb9 /]# top
top - 06:25:23 up  3:32,  0 users,  load average: 0.42, 0.20, 0.48
Tasks:   4 total,   2 running,   2 sleeping,   0 stopped,   0 zombie
%Cpu(s):  6.3 us, 14.6 sy,  0.0 ni, 79.1 id,  0.0 wa,  0.0 hi,  0.1 si,  0.0 st
KiB Mem :  3861048 total,   415924 free,   251640 used,  3193484 buff/cache
KiB Swap:  2097148 total,  2096884 free,      264 used.  2961148 avail Mem
   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
     6 root      20   0    7260     92      0 R 100.0  0.0   0:08.09 stress
     1 root      20   0    7260    432    352 S   0.0  0.0   0:00.00 stress
     7 root      20   0   11772   1804   1444 S   0.0  0.0   0:00.01 bash
    21 root      20   0   51868   1896   1388 R   0.0  0.0   0:00.00 top
Memory limits
Like an operating system, a container has two kinds of memory available: physical memory and swap. Docker controls a container's memory usage with the following two parameters.
- -m or --memory: sets the memory limit, e.g. 100M or 1024M.
- --memory-swap: sets the combined limit for memory plus swap.
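Since --memory-swap is the combined ceiling rather than a separate swap limit, the actual swap allowance is memory-swap minus memory. A quick sketch for the values used in this section (200M memory, 300M memory+swap):

```shell
# -m 200M --memory-swap=300M: the swap allowance is the difference.
mem=200
memswap=300
echo "swap allowance: $((memswap - mem))M (total usable: ${memswap}M)"
# prints: swap allowance: 100M (total usable: 300M)
```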
Executing the command below allows the container to use at most 200M of memory and 100M of swap: --memory-swap=300M caps memory plus swap at 300M in total, rather than granting 300M of swap on its own.
[root@localhost ~]# docker run -it -m 200M --memory-swap=300M progrium/stress --vm 1 --vm-bytes 280M
Unable to find image 'progrium/stress:latest' locally
Trying to pull repository docker.io/progrium/stress ...
latest: Pulling from docker.io/progrium/stress
a3ed95caeb02: Pull complete
871c32dbbb53: Pull complete
dbe7819a64dd: Pull complete
d14088925c6e: Pull complete
58026d51efe4: Pull complete
7d04a4fe1405: Pull complete
1775fca35fb6: Pull complete
5c319e267908: Pull complete
Digest: sha256:e34d56d60f5caae79333cee395aae93b74791d50e3841986420d23c2ee4697bf
Status: Downloaded newer image for docker.io/progrium/stress:latest
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] --> hogvm worker 1 [6] forked
stress: dbug: [6] allocating 293601280 bytes ...
stress: dbug: [6] touching bytes in strides of 4096 bytes ...
stress: dbug: [6] freed 293601280 bytes
stress: dbug: [6] allocating 293601280 bytes ...
stress: dbug: [6] touching bytes in strides of 4096 bytes ...
stress: dbug: [6] freed 293601280 bytes
stress: dbug: [6] allocating 293601280 bytes ...
stress: dbug: [6] touching bytes in strides of 4096 bytes ...
stress: dbug: [6] freed 293601280 bytes
stress: dbug: [6] allocating 293601280 bytes ...
stress: dbug: [6] touching bytes in strides of 4096 bytes ...
stress: dbug: [6] freed 293601280 bytes
stress: dbug: [6] allocating 293601280 bytes ...
stress: dbug: [6] touching bytes in strides of 4096 bytes ...
stress: dbug: [6] freed 293601280 bytes
stress: dbug: [6] allocating 293601280 bytes ...
stress: dbug: [6] touching bytes in strides of 4096 bytes ...
- --vm 1: start 1 memory-hogging worker.
- --vm-bytes 280M: each worker allocates 280M of memory.
By default, a container may use all of the host's free memory. As with the CPU cgroup settings, Docker automatically creates the corresponding cgroup configuration files for each container under /sys/fs/cgroup/memory/docker/<full container ID>.
Because 280M is within the allocatable range (300M), the worker operates normally, cycling through:
- allocate 280M of memory;
- free the 280M of memory;
- allocate 280M again;
- free it again;
- and so on in a loop.
If the worker is made to allocate more than the 300M total, the allocation exceeds the limit, the stress worker fails with an error, and the container exits.
[root@localhost ~]# docker run -it -m 200M --memory-swap=300M progrium/stress --vm 1 --vm-bytes 380M
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] --> hogvm worker 1 [6] forked
stress: dbug: [6] allocating 398458880 bytes ...
stress: dbug: [6] touching bytes in strides of 4096 bytes ...
stress: FAIL: [1] (416) <-- worker 6 got signal 9
stress: WARN: [1] (418) now reaping child worker processes
stress: FAIL: [1] (422) kill error: No such process
stress: FAIL: [1] (452) failed run completed in 1s
Block IO limits
By default, all containers read and write the disk with equal priority; the --blkio-weight option changes a container's block IO priority.
Like --cpu-shares, --blkio-weight sets a relative weight, with a default of 500. In the example below, container A gets twice the disk read/write bandwidth of container B.
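The weights are purely relative; only the ratio matters, and only when both containers saturate the disk. For the 600 and 300 used below:

```shell
# blkio weights 600 vs 300 yield a 2:1 bandwidth split under contention.
wa=600   # --blkio-weight of container_A
wb=300   # --blkio-weight of container_B
echo "container_A : container_B = $((wa / wb)) : 1"
# prints: container_A : container_B = 2 : 1
```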
[root@localhost ~]# docker run -it --name container_A --blkio-weight 600 centos:stress /bin/bash
[root@ee06408457d8 /]# cat /sys/fs/cgroup/blkio/blkio.weight
600
[root@localhost ~]# docker run -it --name container_B --blkio-weight 300 centos:stress /bin/bash
[root@0ad7376f831b /]# cat /sys/fs/cgroup/blkio/blkio.weight
300
Limiting bps and iops
When containers are co-located on a single server, several programs may write to the disk at the same time. The --device-write-iops option caps write IO operations per second to throttle writes to a given device, and --device-read-iops likewise caps reads. Note that these options limit whole devices only, not partitions. The corresponding cgroup configuration file is /sys/fs/cgroup/blkio/<container ID>/blkio.throttle.write_iops_device.
- bps: bytes per second, the volume of data read or written per second.
- iops: IO operations per second, the number of IO requests per second.
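With a bps cap in place, the expected duration of a write is simply the data size divided by the cap. For the 5 MB/s limit and 100 MB dd test that follows, that predicts about 20 seconds, matching the measured 20.0237 s:

```shell
# Writing 100 MB through a 5 MB/s --device-write-bps cap:
size_mb=100
cap_mb_per_s=5
echo "expected duration: $((size_mb / cap_mb_per_s)) seconds"
# prints: expected duration: 20 seconds
```

The oflag=direct in the dd command matters: the throttle applies to direct IO, so writes that merely land in the page cache would not show the cap.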
[root@localhost ~]# docker run -it --device-write-bps /dev/sda:5MB centos:stress /bin/bash
[root@bef94b99dc95 /]# dd if=/dev/zero of=test bs=1M count=100 oflag=direct
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 20.0237 s, 5.2 MB/s