
Overview

ClickHouse containerization already has a very mature ecosystem and has seen large-scale use at major Internet companies.

As a database, the main difficulty in containerizing ClickHouse is that it is a stateful service, so we need to configure PVCs.

There are currently two popular deployment approaches:

  • Native deployment with plain kubectl manifests
    • The deployment process is complex and there are many resources to manage; it is easy to get something wrong
    • Maintenance is tedious; operations such as scaling and rebalancing the cluster become very complicated
    • This approach is strongly discouraged
  • kubectl + operator deployment
    • Resources are managed centrally and deployment is convenient
    • Easy to maintain
    • There are mature solutions in the industry, such as clickhouse-operator and RadonDB ClickHouse.

This article uses clickhouse-operator as an example to walk through the steps and caveats of containerizing ClickHouse.

ClickHouse containerized deployment

Deploying clickhouse-operator

We can download the clickhouse-operator YAML file directly and use it for deployment.

The file is located at:

https://raw.githubusercontent.com/Altinity/clickhouse-operator/master/deploy/operator/clickhouse-operator-install-bundle.yaml

Apply the YAML file with kubectl:

su01:~/chenyc/ck # kubectl apply -f clickhouse-operator-install-bundle.yaml
customresourcedefinition.apiextensions.k8s.io/clickhouseinstallations.clickhouse.altinity.com created
customresourcedefinition.apiextensions.k8s.io/clickhouseinstallationtemplates.clickhouse.altinity.com created
customresourcedefinition.apiextensions.k8s.io/clickhouseoperatorconfigurations.clickhouse.altinity.com created
serviceaccount/clickhouse-operator created
clusterrole.rbac.authorization.k8s.io/clickhouse-operator-kube-system created
clusterrolebinding.rbac.authorization.k8s.io/clickhouse-operator-kube-system created
configmap/etc-clickhouse-operator-files created
configmap/etc-clickhouse-operator-confd-files created
configmap/etc-clickhouse-operator-configd-files created
configmap/etc-clickhouse-operator-templatesd-files created
configmap/etc-clickhouse-operator-usersd-files created
secret/clickhouse-operator created
deployment.apps/clickhouse-operator created
service/clickhouse-operator-metrics created

Once it completes successfully, you can see the following:

su01:~/chenyc/ck # kubectl get crd |grep clickhouse.altinity.com
clickhouseinstallations.clickhouse.altinity.com            2023-11-16T09:41:08Z
clickhouseinstallationtemplates.clickhouse.altinity.com    2023-11-16T09:41:08Z
clickhouseoperatorconfigurations.clickhouse.altinity.com   2023-11-16T09:41:08Z
su01:~/chenyc/ck # kubectl get pod -n kube-system |grep clickhouse
clickhouse-operator-7ff755d4df-9bcbd   2/2     Running   0              10d

If you see output like the above, the operator has been deployed successfully.

Quick verification

Let's take the simplest single-node example from the official repo:

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "simple-01"
spec:
  configuration:
    clusters:
      - name: "simple"

Apply it:

kubectl apply -f ck-sample.yaml

Check the status:

chenyc@su01:~/chenyc/ch> kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
chi-simple-01-simple-0-0-0   1/1     Running   0          63s

Try logging in to the node:

chenyc@su01:~/chenyc/ch> kubectl exec -it chi-simple-01-simple-0-0-0 -- clickhouse-client
ClickHouse client version 23.10.5.20 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 23.10.5 revision 54466.

Warnings:
 * Linux transparent hugepages are set to "always". Check /sys/kernel/mm/transparent_hugepage/enabled
 * Delay accounting is not enabled, OSIOWaitMicroseconds will not be gathered. Check /proc/sys/kernel/task_delayacct

chi-simple-01-simple-0-0-0.chi-simple-01-simple-0-0.default.svc.cluster.local :)

The login succeeds, so the deployment is working.

However, this kind of deployment has one problem: the instance cannot be reached from outside the cluster.

Exposing an external IP and port via a Service

Let's look at its Service:

chenyc@su01:~/chenyc/ch> kubectl get svc
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
chi-simple-01-simple-0-0   ClusterIP      None            <none>        9000/TCP,8123/TCP,9009/TCP      16m
clickhouse-simple-01       LoadBalancer   10.103.187.96   <pending>     8123:30260/TCP,9000:32685/TCP   16m
kubernetes                 ClusterIP      10.96.0.1       <none>        443/TCP                         14d

As you can see, it uses a LoadBalancer Service by default, and its EXTERNAL-IP is stuck in the pending state. This is because clickhouse-operator assumes it is running in a cloud environment; in an on-premises deployment we have not provided a load balancer, so no IP can be allocated and the service cannot be reached.

There are two solutions:

  • Install a load balancer, such as metallb
  • Switch the Service to NodePort

Let's look at the first option, installing metallb. You can find the installation steps online; once it is installed you should see something like this:

su01:~/chenyc/ck # kubectl get pod -n metallb-system 
NAME                          READY   STATUS    RESTARTS      AGE
controller-595f88d88f-mmmfq   1/1     Running   2             146d
speaker-5n4qh                 1/1     Running   5 (14d ago)   146d
speaker-f4pgr                 1/1     Running   9 (14d ago)   146d
speaker-qcfl2                 1/1     Running   5 (14d ago)   146d
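
metallb also needs to be told which addresses it may hand out. A minimal sketch, assuming a recent metallb release that is configured through the IPAddressPool/L2Advertisement CRDs; the pool name and the address range below are only examples and must be unused addresses on your LAN:

kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ck-pool            # example name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.110.190-192.168.110.210   # example range of free LAN addresses
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: ck-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - ck-pool
EOF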

Now we deploy again. As you can see, metallb has assigned the IP 192.168.110.198. Note that this IP really exists on the network; in other words, there must be assignable IPs available for this to succeed.

su01:~/chenyc/ck # kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
chi-simple-01-simple-0-0-0   1/1     Running   0          71s
su01:~/chenyc/ck/ch # kubectl get svc
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                         AGE
chi-simple-01-simple-0-0   ClusterIP      None            <none>            9000/TCP,8123/TCP,9009/TCP      11s
ckman                      NodePort       10.105.69.159   <none>            38808:38808/TCP                 4d23h
clickhouse-simple-01       LoadBalancer   10.100.12.57    192.168.110.198   8123:61626/TCP,9000:20461/TCP   8s
kubernetes                 ClusterIP      10.96.0.1       <none>            443/TCP                         14d

Verify that the IP works:

su01:~/chenyc/ck # curl http://192.168.110.198:8123
Ok.

However, we still cannot access this ClickHouse node directly from outside the cluster. We will come back to the reason later.

First, let's look at the other solution: exposing the ports via NodePort. The chi resource provides a Service template that we can modify as follows:

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "svc"
spec:
  defaults:
    templates:
      serviceTemplate: service-template
  templates:
    serviceTemplates:
      - name: service-template
        generateName: chendpoint-{chi}
        spec:
          ports:
            - name: http
              port: 8123
              nodePort: 38123
              targetPort: 8123
            - name: tcp
              port: 9000
              nodePort: 39000
              targetPort: 9000
          type: NodePort

Now the Service looks like this:

su01:~/chenyc/ck/ch # kubectl get svc
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
chendpoint-svc        NodePort    10.111.132.229   <none>        8123:38123/TCP,9000:39000/TCP   21s
chi-svc-cluster-0-0   ClusterIP   None             <none>        9000/TCP,8123/TCP,9009/TCP      23s
ckman                 NodePort    10.105.69.159    <none>        38808:38808/TCP                 5d3h
kubernetes            ClusterIP   10.96.0.1        <none>        443/TCP                         14d

Again, let's verify:

su01:~/chenyc/ck/ch # curl http://10.111.132.229:8123
Ok.
su01:~/chenyc/ck/ch # curl http://192.168.110.186:38123
Ok.
su01:~/chenyc/ck/ch # curl http://192.168.110.187:38123
Ok.
su01:~/chenyc/ck/ch # curl http://192.168.110.188:38123
Ok.

In the example above, 10.111.132.229 is the ClusterIP provided by the cluster and can be used with the in-cluster ports, while 192.168.110.186–192.168.110.188 are the Kubernetes node IPs; any of these nodes can be used for access.

Logging in from outside

Although the ports are now exposed, we still cannot log in over TCP. Besides the default user, clickhouse-operator also provides a clickhouse_operator user whose default password is clickhouse_operator_password, but at the moment neither user can be used from outside.

su01:~/chenyc/ck/ch # clickhouse-client -m -h 192.168.110.186 --port 39000
ClickHouse client version 23.3.1.2823 (official build).
Connecting to 192.168.110.186:39000 as user default.
Password for user (default): 
Connecting to 192.168.110.186:39000 as user default.
Code: 516. DB::Exception: Received from 192.168.110.186:39000. DB::Exception: default: Authentication failed: password is incorrect, or there is no user with such name.

If you have installed ClickHouse and forgot password you can reset it in the configuration file.
The password for default user is typically located at /etc/clickhouse-server/users.d/default-password.xml
and deleting this file will reset the password.
See also /etc/clickhouse-server/users.xml on the server where ClickHouse is installed.. (AUTHENTICATION_FAILED)

su01:~/chenyc/ck/ch # clickhouse-client -m -h 192.168.110.186 --port 39000 -u clickhouse_operator --password clickhouse_operator_password
ClickHouse client version 23.3.1.2823 (official build).
Connecting to 192.168.110.186:39000 as user clickhouse_operator.
Code: 516. DB::Exception: Received from 192.168.110.186:39000. DB::Exception: clickhouse_operator: Authentication failed: password is incorrect, or there is no user with such name.. (AUTHENTICATION_FAILED)

Both users are definitely valid; we can log in from inside the pod:

su01:~/chenyc/ck/ch # kubectl exec -it chi-svc-cluster-0-0-0 -- /bin/bash
root@chi-svc-cluster-0-0-0:/# clickhouse-client -m 
ClickHouse client version 23.10.5.20 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 23.10.5 revision 54466.

Warnings:
 * Linux transparent hugepages are set to "always". Check /sys/kernel/mm/transparent_hugepage/enabled
 * Delay accounting is not enabled, OSIOWaitMicroseconds will not be gathered. Check /proc/sys/kernel/task_delayacct

chi-svc-cluster-0-0-0.chi-svc-cluster-0-0.default.svc.cluster.local :)

Why is that? Let's open the user configuration file:

root@chi-svc-cluster-0-0-0:/# cat /etc/clickhouse-server/users.d/chop-generated-users.xml
<yandex>
    <users>
        <clickhouse_operator>
            <networks>
                <ip>10.0.2.45</ip>
            </networks>
            <password_sha256_hex>716b36073a90c6fe1d445ac1af85f4777c5b7a155cea359961826a030513e448</password_sha256_hex>
            <profile>clickhouse_operator</profile>
        </clickhouse_operator>
        <default>
            <networks>
                <host_regexp>(chi-svc-[^.]+\d+-\d+|clickhouse\-svc)\.default\.svc\.cluster\.local$</host_regexp>
                <ip>::1</ip>
                <ip>127.0.0.1</ip>
            </networks>
            <profile>default</profile>
            <quota>default</quota>
        </default>
    </users>
</yandex>

As you can see, the clickhouse_operator user is only open to the operator pod's internal IP, while the default user is only open to the loopback addresses and to hostnames matching the internal regular expression, so logging in from outside is impossible. It seems we are still stuck.

Again there are two solutions:

  • Add an ordinary user that can be used for external access
  • Remove the access restrictions from the default and clickhouse_operator users

Let's look at the first approach.

We can add user information under configuration; for example, add a user named chenyc:

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "user"
spec:
  configuration:
    users:
      chenyc/network/ip: "::/0"
      chenyc/password: qwerty
      chenyc/profile: default
  defaults:
    templates:
      serviceTemplate: service-template
  templates:
    serviceTemplates:
      - name: service-template
        generateName: chendpoint-{chi}
        spec:
          ports:
            - name: http
              port: 8123
              nodePort: 38123
              targetPort: 8123
            - name: tcp
              port: 9000
              nodePort: 39000
              targetPort: 9000
          type: NodePort

Check the generated configuration file:

su01:~/chenyc/ck/ch # kubectl exec -it chi-user-cluster-0-0-0 -- cat /etc/clickhouse-server/users.d/chop-generated-users.xml
<yandex>
    <users>
        <chenyc>
            <network>
                <ip>::/0</ip>
            </network>
            <networks>
                <host_regexp>(chi-user-[^.]+\d+-\d+|clickhouse\-user)\.default\.svc\.cluster\.local$</host_regexp>
                <ip>::1</ip>
                <ip>127.0.0.1</ip>
            </networks>
            <password_sha256_hex>65e84be33532fb784c48129675f9eff3a682b27168c0ea744b2cf58ee02337c5</password_sha256_hex>
            <profile>default</profile>
            <quota>default</quota>
        </chenyc>
        <clickhouse_operator>
            <networks>
                <ip>10.0.2.45</ip>
            </networks>
            <password_sha256_hex>716b36073a90c6fe1d445ac1af85f4777c5b7a155cea359961826a030513e448</password_sha256_hex>
            <profile>clickhouse_operator</profile>
        </clickhouse_operator>
        <default>
            <networks>
                <host_regexp>(chi-user-[^.]+\d+-\d+|clickhouse\-user)\.default\.svc\.cluster\.local$</host_regexp>
                <ip>::1</ip>
                <ip>127.0.0.1</ip>
            </networks>
            <profile>default</profile>
            <quota>default</quota>
        </default>
    </users>
</yandex>

But we find that we still cannot connect, because the following restriction is still present:

<networks>
    <host_regexp>(chi-user-[^.]+\d+-\d+|clickhouse\-user)\.default\.svc\.cluster\.local$</host_regexp>
    <ip>::1</ip>
    <ip>127.0.0.1</ip>
</networks>

The first approach seems to have hit a dead end, so let's try the second one.

First, let's look at how the clickhouse-operator source code handles user access restrictions.

func (n *Normalizer) normalizeConfigurationUserEnsureMandatorySections(users *chiV1.Settings, username string) {
	chopUsername := chop.Config().ClickHouse.Access.Username
	//
	// Ensure each user has mandatory sections:
	//
	// 1. user/profile
	// 2. user/quota
	// 3. user/networks/ip
	// 4. user/networks/host_regexp
	profile := chop.Config().ClickHouse.Config.User.Default.Profile
	quota := chop.Config().ClickHouse.Config.User.Default.Quota
	ips := append([]string{}, chop.Config().ClickHouse.Config.User.Default.NetworksIP...)
	regexp := CreatePodHostnameRegexp(n.ctx.chi, chop.Config().ClickHouse.Config.Network.HostRegexpTemplate)

	// Some users may have special options
	switch username {
	case defaultUsername:
		ips = append(ips, n.ctx.options.DefaultUserAdditionalIPs...)
		if !n.ctx.options.DefaultUserInsertHostRegex {
			regexp = ""
		}
	case chopUsername:
		ip, _ := chop.Get().ConfigManager.GetRuntimeParam(chiV1.OPERATOR_POD_IP)
		profile = chopProfile
		quota = ""
		ips = []string{ip}
		regexp = ""
	}

	// Ensure required values are in place and apply non-empty values in case no own value(s) provided
	if profile != "" {
		users.SetIfNotExists(username+"/profile", chiV1.NewSettingScalar(profile))
	}
	if quota != "" {
		users.SetIfNotExists(username+"/quota", chiV1.NewSettingScalar(quota))
	}
	if len(ips) > 0 {
		users.Set(username+"/networks/ip", chiV1.NewSettingVector(ips).MergeFrom(users.Get(username+"/networks/ip")))
	}
	if regexp != "" {
		users.SetIfNotExists(username+"/networks/host_regexp", chiV1.NewSettingScalar(regexp))
	}
}
  • The default user

Whether host_regexp is configured is determined by the DefaultUserInsertHostRegex variable, which defaults to true:

// NewNormalizerOptions creates new NormalizerOptions
func NewNormalizerOptions() *NormalizerOptions {
	return &NormalizerOptions{
		DefaultUserInsertHostRegex: true,
	}
}

The IP restriction, in turn, depends on the configured list of additional IPs.

Therefore, we only need to remove the additional-IP and host_regexp rules from the clickhouse-operator YAML.

  • The chop user

chop is the clickhouse_operator user. Its IP restriction is determined mainly by the OPERATOR_POD_IP environment variable, which can be passed in via the clickhouse-operator YAML. By default it takes the pod's IP; if we set it to ::/0, anyone can connect.

ip, _ := chop.Get().ConfigManager.GetRuntimeParam(chiV1.OPERATOR_POD_IP)
  • Other ordinary users
ips := append([]string{}, chop.Config().ClickHouse.Config.User.Default.NetworksIP...)
regexp := CreatePodHostnameRegexp(n.ctx.chi, chop.Config().ClickHouse.Config.Network.HostRegexpTemplate)

These, too, are changed by editing the YAML.

With the above understanding, let's modify the YAML directly.

Original YAML:

    user:
      # Default settings for user accounts, created by the operator.
      # IMPORTANT. These are not access credentials or settings for 'default' user account,
      # it is a template for filling out missing fields for all user accounts to be created by the operator,
      # with the following EXCEPTIONS:
      # 1. 'default' user account DOES NOT use provided password, but uses all the rest of the fields.
      #    Password for 'default' user account has to be provided explicitly, if to be used.
      # 2. CHOP user account DOES NOT use:
      #    - profile setting. It uses predefined profile called 'clickhouse_operator'
      #    - quota setting. It uses empty quota name.
      #    - networks IP setting. Operator specifies 'networks/ip' user setting to match operators' pod IP only.
      #    - password setting. Password for CHOP account is used from 'clickhouse.access.*' section
      default:
        # Default values for ClickHouse user account(s) created by the operator
        #   1. user/profile - string
        #   2. user/quota - string
        #   3. user/networks/ip - multiple strings
        #   4. user/password - string
        # These values can be overwritten on per-user basis.
        profile: "default"
        quota: "default"
        networksIP:
          - "::1"
          - "127.0.0.1"
        password: "default"

    ##################################################
    ## Configuration Network Section
    ##################################################
    network:
      # Default host_regexp to limit network connectivity from outside
      hostRegexpTemplate: "(chi-{chi}-[^.]+\\d+-\\d+|clickhouse\\-{chi})\\.{namespace}\\.svc\\.cluster\\.local$"

Modified YAML:

    user:
      # Default settings for user accounts, created by the operator.
      # IMPORTANT. These are not access credentials or settings for 'default' user account,
      # it is a template for filling out missing fields for all user accounts to be created by the operator,
      # with the following EXCEPTIONS:
      # 1. 'default' user account DOES NOT use provided password, but uses all the rest of the fields.
      #    Password for 'default' user account has to be provided explicitly, if to be used.
      # 2. CHOP user account DOES NOT use:
      #    - profile setting. It uses predefined profile called 'clickhouse_operator'
      #    - quota setting. It uses empty quota name.
      #    - networks IP setting. Operator specifies 'networks/ip' user setting to match operators' pod IP only.
      #    - password setting. Password for CHOP account is used from 'clickhouse.access.*' section
      default:
        # Default values for ClickHouse user account(s) created by the operator
        #   1. user/profile - string
        #   2. user/quota - string
        #   3. user/networks/ip - multiple strings
        #   4. user/password - string
        # These values can be overwritten on per-user basis.
        profile: "default"
        quota: "default"
        password: "default"

    ##################################################
    ## Configuration Network Section
    ##################################################

YAML before the change:

          env:
            # Pod-specific
            # spec.nodeName: ip-172-20-52-62.ec2.internal
            - name: OPERATOR_POD_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # metadata.name: clickhouse-operator-6f87589dbb-ftcsf
            - name: OPERATOR_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            # metadata.namespace: kube-system
            - name: OPERATOR_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # status.podIP: 100.96.3.2
            - name: OPERATOR_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            # spec.serviceAccount: clickhouse-operator
            # spec.serviceAccountName: clickhouse-operator
            - name: OPERATOR_POD_SERVICE_ACCOUNT
              valueFrom:
                fieldRef:
                  fieldPath: spec.serviceAccountName

YAML after the change:

          env:
            # Pod-specific
            # spec.nodeName: ip-172-20-52-62.ec2.internal
            - name: OPERATOR_POD_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # metadata.name: clickhouse-operator-6f87589dbb-ftcsf
            - name: OPERATOR_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            # metadata.namespace: kube-system
            - name: OPERATOR_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # status.podIP: 100.96.3.2
            - name: OPERATOR_POD_IP
              value: "::/0"
              #valueFrom:
              #  fieldRef:
              #    fieldPath: status.podIP
            # spec.serviceAccount: clickhouse-operator
            # spec.serviceAccountName: clickhouse-operator
            - name: OPERATOR_POD_SERVICE_ACCOUNT
              valueFrom:
                fieldRef:
                  fieldPath: spec.serviceAccountName

After making these changes, re-apply the clickhouse-operator manifest. In fact, the chop account itself can also be changed via YAML. For example, to change the username to eoi and the password to 123456:

apiVersion: v1       
kind: Secret          
metadata:
  name: clickhouse-operator
  namespace: kube-system
  labels:
    clickhouse.altinity.com/chop: 0.21.3
    app: clickhouse-operator
type: Opaque  
stringData:
  username: eoi
  password: "123456"
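
The operator reads these credentials at startup, so after changing the Secret you will most likely also need to restart the operator Deployment for the new chop account to take effect. A hedged one-liner, assuming the operator still runs in kube-system as installed above:

kubectl -n kube-system rollout restart deployment/clickhouse-operator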

Deploy ClickHouse again and check the generated configuration file:

<yandex>
    <users>
        <chenyc>
            <networks>
                <ip>::/0</ip>
            </networks>
            <password_sha256_hex>65e84be33532fb784c48129675f9eff3a682b27168c0ea744b2cf58ee02337c5</password_sha256_hex>
            <profile>default</profile>
            <quota>default</quota>
        </chenyc>
        <default>
            <networks>
                <ip>::/0</ip>
            </networks>
            <profile>default</profile>
            <quota>default</quota>
        </default>
        <eoi>
            <networks>
                <ip>::/0</ip>
            </networks>
            <password_sha256_hex>8d969eef6ecad3c29a3a629280e686cf0c3f5d5a86aff3ca12020c923adc6c92</password_sha256_hex>
            <profile>clickhouse_operator</profile>
        </eoi>
    </users>
</yandex>

From the configuration file everything looks as expected; let's try logging in.

The default user can log in normally:

su01:~/chenyc/ck/ch # clickhouse-client -h 192.168.110.186 --port 39000
ClickHouse client version 23.3.1.2823 (official build).
Connecting to 192.168.110.186:39000 as user default.
Connected to ClickHouse server version 23.10.5 revision 54466.

ClickHouse client version is older than ClickHouse server. It may lack support for new features.

Warnings:
 * Linux transparent hugepages are set to "always". Check /sys/kernel/mm/transparent_hugepage/enabled
 * Delay accounting is not enabled, OSIOWaitMicroseconds will not be gathered. Check /proc/sys/kernel/task_delayacct

chi-user-cluster-0-0-0.chi-user-cluster-0-0.default.svc.cluster.local :) exit
Bye.

The chop user:

su01:~/chenyc/ck/ch # clickhouse-client -h 192.168.110.186 --port 39000 -u eoi --password 123456
ClickHouse client version 23.3.1.2823 (official build).
Connecting to 192.168.110.186:39000 as user eoi.
Connected to ClickHouse server version 23.10.5 revision 54466.

ClickHouse client version is older than ClickHouse server. It may lack support for new features.

Warnings:
 * Linux transparent hugepages are set to "always". Check /sys/kernel/mm/transparent_hugepage/enabled
 * Delay accounting is not enabled, OSIOWaitMicroseconds will not be gathered. Check /proc/sys/kernel/task_delayacct

chi-user-cluster-0-0-0.chi-user-cluster-0-0.default.svc.cluster.local :) exit
Bye.

The chenyc user (a custom ordinary user):

su01:~/chenyc/ck/ch # clickhouse-client -h 192.168.110.186 --port 39000 -u chenyc --password qwerty
ClickHouse client version 23.3.1.2823 (official build).
Connecting to 192.168.110.186:39000 as user chenyc.
Connected to ClickHouse server version 23.10.5 revision 54466.

ClickHouse client version is older than ClickHouse server. It may lack support for new features.

Warnings:
 * Linux transparent hugepages are set to "always". Check /sys/kernel/mm/transparent_hugepage/enabled
 * Delay accounting is not enabled, OSIOWaitMicroseconds will not be gathered. Check /proc/sys/kernel/task_delayacct

chi-user-cluster-0-0-0.chi-user-cluster-0-0.default.svc.cluster.local :) exit
Bye.

That is how to enable external login by modifying the operator's YAML. Of course, users with special meaning, such as default and the chop user, should not really be exposed to the outside in practice; it is better to just create an ordinary user for external access.

StorageClass deployment

As a database, the most critical issue for ClickHouse is data persistence, which means we must use PVCs to specify where the data lives. Below is an example that specifies persistent directories:

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "aimeter"
spec:
  defaults:
    templates:
      serviceTemplate: service-template
      podTemplate: pod-template
      dataVolumeClaimTemplate: volume-claim
      logVolumeClaimTemplate: volume-claim
  templates:
    serviceTemplates:
      - name: service-template
        generateName: chendpoint-{chi}
        spec:
          ports:
            - name: http
              port: 8123
              nodePort: 38123
              targetPort: 8123
            - name: tcp
              port: 9000
              nodePort: 39000
              targetPort: 9000
          type: NodePort
    podTemplates:
      - name: pod-template
        spec:
          containers:
            - name: clickhouse
              imagePullPolicy: Always
              image: yandex/clickhouse-server:latest
              volumeMounts:
                - name: volume-claim
                  mountPath: /var/lib/clickhouse
                - name: volume-claim
                  mountPath: /var/log/clickhouse-server
    volumeClaimTemplates:
      - name: volume-claim
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 100Gi

As mentioned earlier, clickhouse-operator assumes a cloud environment by default. The YAML above runs fine in the cloud, but in a private Kubernetes environment it fails: as you can see, the pod stays in the Pending state.

su01:~/chenyc/ck/ch # kubectl get pod
NAME                        READY   STATUS    RESTARTS   AGE
chi-aimeter-cluster-0-0-0   0/2     Pending   0          6s
ckman-6d8cd8fbdc-mlsmb      1/1     Running   0          5d4h

Using describe to inspect the pod, we get the following error:

Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  68s (x2 over 70s)  default-scheduler  0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..

The error means that no persistent volume is available for the claim. We can provide a PV to mount by setting up our own NFS server.

First, create the NFS service on one of the servers.

# Install the nfs and rpc services
yum  install  nfs-utils rpcbind  -y
# Edit /etc/exports
/data01/nfs   192.168.110.0/24(insecure,rw,sync,no_subtree_check,no_root_squash)
exportfs -r   # make the export configuration take effect
# Restart the nfs services:
service rpcbind restart ; service nfs restart
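
Before wiring this into Kubernetes, it is worth checking from one of the k8s nodes that the export is actually visible. A quick sketch, using the NFS server address from this example:

showmount -e 192.168.110.10   # should list /data01/nfs for 192.168.110.0/24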

創(chuàng)建provisioner、storageclass:

#### RBAC configuration
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-common-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-common-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-common-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-common-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-common-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-common-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-common-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-common-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-common-provisioner
  apiGroup: rbac.authorization.k8s.io

#### StorageClass
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-common # referenced by the PVCs
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # provisioner name; must match PROVISIONER_NAME below
# or choose another name, but it must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false" # whether to keep an archived copy of the data when the PV is deleted

#### Provisioner deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-common-provisioner
  labels:
    app: nfs-common-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-common-provisioner
  template:
    metadata:
      labels:
        app: nfs-common-provisioner
    spec:
      serviceAccountName: nfs-common-provisioner
      containers:
        - name: nfs-common-provisioner
          image: ccr.ccs.tencentyun.com/gcr-containers/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-common-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.110.10 # address of the NFS server
            - name: NFS_PATH
              value: /data01/nfs # shared directory on the NFS server
      volumes:
        - name: nfs-common-root
          nfs:
            server: 192.168.110.10
            path: /data01/nfs

Apply it:

su01:~/chenyc/ck/ch # kubectl apply -f storageclass.yaml 
serviceaccount/nfs-common-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-common-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-common-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-common-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-common-provisioner created
storageclass.storage.k8s.io/nfs-common created
deployment.apps/nfs-common-provisioner created

If it succeeds, you should be able to see the following:

su01:~/chenyc/ck/ch # kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
chi-aimeter-cluster-0-0-0                 0/2     Pending   0          16m
ckman-6d8cd8fbdc-mlsmb                    1/1     Running   0          5d4h
nfs-common-provisioner-594bc9d55d-clhvl   1/1     Running   0          5m51s
su01:~/chenyc/ck/ch # kubectl get sc 
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-common   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  5m57s

Now bind the PVC in the earlier YAML to this StorageClass:

    volumeClaimTemplates:
      - name: volume-claim
        spec:
          accessModes:
            - ReadWriteOnce
          storageClassName: "nfs-common"
          resources:
            requests:
              storage: 100Gi

Checking again, the deployment now succeeds:

su01:~/chenyc/ck/ch # kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
chi-aimeter-cluster-0-0-0                 2/2     Running   0          117s
ckman-6d8cd8fbdc-mlsmb                    1/1     Running   0          5d4h
nfs-common-provisioner-594bc9d55d-clhvl   1/1     Running   0          10m
su01:~/chenyc/ck/ch # kubectl get sts
NAME                      READY   AGE
chi-aimeter-cluster-0-0   1/1     2m1s
su01:~/chenyc/ck/ch # kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                            STORAGECLASS   REASON   AGE
pvc-e23ae072-5e95-415c-9628-3ea7fb1ed4d6   100Gi      RWO            Delete           Bound    default/volume-claim-chi-aimeter-cluster-0-0-0   nfs-common              2m4s
su01:~/chenyc/ck/ch # kubectl get pvc
NAME                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
volume-claim-chi-aimeter-cluster-0-0-0   Bound    pvc-e23ae072-5e95-415c-9628-3ea7fb1ed4d6   100Gi      RWO            nfs-common     2m8s

Looking at the underlying NFS directory, we can see that a directory has been created and mounted for the volume:
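
With nfs-subdir-external-provisioner, the directory on the NFS server follows the ${namespace}-${pvcName}-${pvName} naming pattern, so the listing should look roughly like this (a sketch rather than captured output):

ls /data01/nfs
# default-volume-claim-chi-aimeter-cluster-0-0-0-pvc-e23ae072-5e95-415c-9628-3ea7fb1ed4d6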

ZooKeeper deployment

A ClickHouse cluster depends on ZooKeeper. The clickhouse-operator project helpfully provides ZooKeeper deployment manifests backed either by PVs or by emptyDir, each in single-node and three-node variants.

Here we take the single-node persistent setup as an example:

# Setup Service to provide access to Zookeeper for clients
apiVersion: v1
kind: Service
metadata:
  # DNS would be like zookeeper.zoons
  name: zookeeper
  labels:
    app: zookeeper
spec:
  type: NodePort
  ports:
    - port: 2181
      name: client
    - port: 7000
      name: prometheus
  selector:
    app: zookeeper
    what: node
---
# Setup Headless Service for StatefulSet
apiVersion: v1
kind: Service
metadata:
  # DNS would be like zookeeper-0.zookeepers.etc
  name: zookeepers
  labels:
    app: zookeeper
spec:
  ports:
    - port: 2888
      name: server
    - port: 3888
      name: leader-election
  clusterIP: None
  selector:
    app: zookeeper
    what: node
---
# Setup max number of unavailable pods in StatefulSet
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zookeeper-pod-disruption-budget
spec:
  selector:
    matchLabels:
      app: zookeeper
  maxUnavailable: 1
---
# Setup Zookeeper StatefulSet
# Possible params:
# 1. replicas
# 2. memory
# 3. cpu
# 4. storage
# 5. storageClassName
# 6. user to run app
apiVersion: apps/v1
kind: StatefulSet
metadata:
  # nodes would be named as zookeeper-0, zookeeper-1, zookeeper-2
  name: zookeeper
  labels:
    app: zookeeper
spec:
  selector:
    matchLabels:
      app: zookeeper
  serviceName: zookeepers
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zookeeper
        what: node
      annotations:
        prometheus.io/port: '7000'
        prometheus.io/scrape: 'true'
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - zookeeper
              # TODO think about multi-AZ EKS
              # topologyKey: topology.kubernetes.io/zone
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: kubernetes-zookeeper
          imagePullPolicy: IfNotPresent
          image: "docker.io/zookeeper:3.8.1"
          resources:
            requests:
              memory: "512M"
              cpu: "1"
            limits:
              memory: "4Gi"
              cpu: "2"
          ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
            - containerPort: 7000
              name: prometheus
          env:
            - name: SERVERS
              value: "1"
          # See those links for proper startup settings:
          # https://github.com/kow3ns/kubernetes-zookeeper/blob/master/docker/scripts/start-zookeeper
          # https://clickhouse.yandex/docs/en/operations/tips/#zookeeper
          # https://github.com/ClickHouse/ClickHouse/issues/11781
          command:
            - bash
            - -x
            - -c
            - |
              HOST=`hostname -s` &&
              DOMAIN=`hostname -d` &&
              CLIENT_PORT=2181 &&
              SERVER_PORT=2888 &&
              ELECTION_PORT=3888 &&
              PROMETHEUS_PORT=7000 &&
              ZOO_DATA_DIR=/var/lib/zookeeper/data &&
              ZOO_DATA_LOG_DIR=/var/lib/zookeeper/datalog &&
              {
                echo "clientPort=${CLIENT_PORT}"
                echo 'tickTime=2000'
                echo 'initLimit=300'
                echo 'syncLimit=10'
                echo 'maxClientCnxns=2000'
                echo 'maxTimeToWaitForEpoch=2000'
                echo 'maxSessionTimeout=60000000'
                echo "dataDir=${ZOO_DATA_DIR}"
                echo "dataLogDir=${ZOO_DATA_LOG_DIR}"
                echo 'autopurge.snapRetainCount=10'
                echo 'autopurge.purgeInterval=1'
                echo 'preAllocSize=131072'
                echo 'snapCount=3000000'
                echo 'leaderServes=yes'
                echo 'standaloneEnabled=false'
                echo '4lw.commands.whitelist=*'
                echo 'metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider'
                echo "metricsProvider.httpPort=${PROMETHEUS_PORT}"
                echo "skipACL=true"
                echo "fastleader.maxNotificationInterval=10000"
              } > /conf/zoo.cfg &&
              {
                echo "zookeeper.root.logger=CONSOLE"
                echo "zookeeper.console.threshold=INFO"
                echo "log4j.rootLogger=\${zookeeper.root.logger}"
                echo "log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender"
                echo "log4j.appender.CONSOLE.Threshold=\${zookeeper.console.threshold}"
                echo "log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout"
                echo "log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n"
              } > /conf/log4j.properties &&
              echo 'JVMFLAGS="-Xms128M -Xmx4G -XX:ActiveProcessorCount=8 -XX:+AlwaysPreTouch -Djute.maxbuffer=8388608 -XX:MaxGCPauseMillis=50"' > /conf/java.env &&
              if [[ $HOST =~ (.*)-([0-9]+)$ ]]; then
                  NAME=${BASH_REMATCH[1]} &&
                  ORD=${BASH_REMATCH[2]};
              else
                  echo "Failed to parse name and ordinal of Pod" &&
                  exit 1;
              fi &&
              mkdir -pv ${ZOO_DATA_DIR} &&
              mkdir -pv ${ZOO_DATA_LOG_DIR} &&
              whoami &&
              chown -Rv zookeeper "$ZOO_DATA_DIR" "$ZOO_DATA_LOG_DIR" &&
              export MY_ID=$((ORD+1)) &&
              echo $MY_ID > $ZOO_DATA_DIR/myid &&
              for (( i=1; i<=$SERVERS; i++ )); do
                  echo "server.$i=$NAME-$((i-1)).$DOMAIN:$SERVER_PORT:$ELECTION_PORT" >> /conf/zoo.cfg;
              done &&
              if [[ $SERVERS -eq 1 ]]; then
                  echo "group.1=1" >> /conf/zoo.cfg;
              else
                  echo "group.1=1:2:3" >> /conf/zoo.cfg;
              fi &&
              for (( i=1; i<=$SERVERS; i++ )); do
                  WEIGHT=1
                  if [[ $i == 1 ]]; then
                      WEIGHT=10
                  fi
                  echo "weight.$i=$WEIGHT" >> /conf/zoo.cfg;
              done &&
              zkServer.sh start-foreground
          readinessProbe:
            exec:
              command:
                - bash
                - -c
                - 'IFS=; MNTR=$(exec 3<>/dev/tcp/127.0.0.1/2181 ; printf "mntr" >&3 ; tee <&3; exec 3<&- ;);
                  while [[ "$MNTR" == "This ZooKeeper instance is not currently serving requests" ]];
                  do
                    echo "wait mntr works";
                    sleep 1;
                    MNTR=$(exec 3<>/dev/tcp/127.0.0.1/2181 ; printf "mntr" >&3 ; tee <&3; exec 3<&- ;);
                  done;
                  STATE=$(echo -e $MNTR | grep zk_server_state | cut -d " " -f 2);
                  if [[ "$STATE" =~ "leader" ]]; then
                    echo "check leader state";
                    SYNCED_FOLLOWERS=$(echo -e $MNTR | grep zk_synced_followers | awk -F"[[:space:]]+" "{print \$2}" | cut -d "." -f 1);
                    if [[ "$SYNCED_FOLLOWERS" != "0" ]]; then
                      ./bin/zkCli.sh ls /;
                      exit $?;
                    else
                      exit 0;
                    fi;
                  elif [[ "$STATE" =~ "follower" ]]; then
                    echo "check follower state";
                    PEER_STATE=$(echo -e $MNTR | grep zk_peer_state);
                    if [[ "$PEER_STATE" =~ "following - broadcast" ]]; then
                      ./bin/zkCli.sh ls /;
                      exit $?;
                    else
                      exit 1;
                    fi;
                  else
                    exit 1;
                  fi'
            initialDelaySeconds: 10
            periodSeconds: 60
            timeoutSeconds: 60
          livenessProbe:
            exec:
              command:
                - bash
                - -xc
                - 'date && OK=$(exec 3<>/dev/tcp/127.0.0.1/2181 ; printf "ruok" >&3 ; IFS=; tee <&3; exec 3<&- ;); if [[ "$OK" == "imok" ]]; then exit 0; else exit 1; fi'
            initialDelaySeconds: 10
            periodSeconds: 30
            timeoutSeconds: 5
          volumeMounts:
            - name: datadir-volume
              mountPath: /var/lib/zookeeper
      # Run as a non-privileged user
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
    - metadata:
        name: datadir-volume
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: "nfs-common"
        resources:
          requests:
            storage: 25Gi

When the status shows Running, the deployment has succeeded.

su01:~/chenyc/ck/ch # kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
chi-aimeter-cluster-0-0-0                 2/2     Running   0          8m9s
ckman-6d8cd8fbdc-mlsmb                    1/1     Running   0          5d5h
nfs-common-provisioner-594bc9d55d-clhvl   1/1     Running   0          16m
zookeeper-0                               1/1     Running   0          69s
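
Optionally, confirm that ZooKeeper is actually serving before creating the ClickHouse cluster. A minimal check, mirroring the ruok probe used in the manifest above:

kubectl exec -it zookeeper-0 -- bash -c 'exec 3<>/dev/tcp/127.0.0.1/2181; printf ruok >&3; cat <&3'
# expected output: imok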

ClickHouse cluster deployment

With ZooKeeper in place, we can start deploying the ClickHouse cluster.

The cluster configuration is much the same as the single-node one; two things need attention:

  • The ZooKeeper configuration
  • The cluster topology (shards and replicas)

Next we create a cluster with 2 shards of 2 replicas each (4 nodes in total):

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "ck-cluster"
spec:
  defaults:
    templates:
      serviceTemplate: service-template
      podTemplate: pod-template
      dataVolumeClaimTemplate: volume-claim
      logVolumeClaimTemplate: volume-claim
  configuration:
    zookeeper:
      nodes:
        - host: zookeeper-0.zookeepers.default.svc.cluster.local
          port: 2181
    clusters:
      - name: "cktest"
        layout:
          shardsCount: 2
          replicasCount: 2
  templates:
    serviceTemplates:
      - name: service-template
        generateName: chendpoint-{chi}
        spec:
          ports:
            - name: http
              port: 8123
              nodePort: 38123
              targetPort: 8123
            - name: tcp
              port: 9000
              nodePort: 39000
              targetPort: 9000
          type: NodePort
    podTemplates:
      - name: pod-template
        spec:
          containers:
            - name: clickhouse
              imagePullPolicy: Always
              image: yandex/clickhouse-server:latest
              volumeMounts:
                - name: volume-claim
                  mountPath: /var/lib/clickhouse
                - name: volume-claim
                  mountPath: /var/log/clickhouse-server
              resources:
                limits:
                  memory: "1Gi"
                  cpu: "1"
                requests:
                  memory: "1Gi"
                  cpu: "1"
    volumeClaimTemplates:
      - name: volume-claim
        spec:
          accessModes:
            - ReadWriteOnce
          storageClassName: "nfs-common"
          resources:
            requests:
              storage: 100Gi

Cluster creation takes a while; wait until all the nodes have been created.

su01:~/chenyc/ck/ch # kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
chi-ck-cluster-cktest-0-0-0               2/2     Running   0          2m29s
chi-ck-cluster-cktest-0-1-0               2/2     Running   0          109s
chi-ck-cluster-cktest-1-0-0               2/2     Running   0          68s
chi-ck-cluster-cktest-1-1-0               2/2     Running   0          27s
ckman-6d8cd8fbdc-mlsmb                    1/1     Running   0          5d5h
nfs-common-provisioner-594bc9d55d-clhvl   1/1     Running   0          25m
zookeeper-0                               1/1     Running   0          10m

Check the generated configuration file:

su01:~/chenyc/ck/ch # kubectl exec -it chi-ck-cluster-cktest-0-0-0 -- cat /etc/clickhouse-server/config.d/chop-generated-remote_servers.xml
Defaulted container "clickhouse" out of: clickhouse, clickhouse-log
<yandex>
    <remote_servers>
        <!-- User-specified clusters -->
        <cktest>
            <shard>
                <internal_replication>True</internal_replication>
                <replica>
                    <host>chi-ck-cluster-cktest-0-0</host>
                    <port>9000</port>
                    <secure>0</secure>
                </replica>
                <replica>
                    <host>chi-ck-cluster-cktest-0-1</host>
                    <port>9000</port>
                    <secure>0</secure>
                </replica>
            </shard>
            <shard>
                <internal_replication>True</internal_replication>
                <replica>
                    <host>chi-ck-cluster-cktest-1-0</host>
                    <port>9000</port>
                    <secure>0</secure>
                </replica>
                <replica>
                    <host>chi-ck-cluster-cktest-1-1</host>
                    <port>9000</port>
                    <secure>0</secure>
                </replica>
            </shard>
        </cktest>
        <!-- Autogenerated clusters -->
        <all-replicated>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>chi-ck-cluster-cktest-0-0</host>
                    <port>9000</port>
                    <secure>0</secure>
                </replica>
                <replica>
                    <host>chi-ck-cluster-cktest-0-1</host>
                    <port>9000</port>
                    <secure>0</secure>
                </replica>
                <replica>
                    <host>chi-ck-cluster-cktest-1-0</host>
                    <port>9000</port>
                    <secure>0</secure>
                </replica>
                <replica>
                    <host>chi-ck-cluster-cktest-1-1</host>
                    <port>9000</port>
                    <secure>0</secure>
                </replica>
            </shard>
        </all-replicated>
        <all-sharded>
            <shard>
                <internal_replication>false</internal_replication>
                <replica>
                    <host>chi-ck-cluster-cktest-0-0</host>
                    <port>9000</port>
                    <secure>0</secure>
                </replica>
            </shard>
            <shard>
                <internal_replication>false</internal_replication>
                <replica>
                    <host>chi-ck-cluster-cktest-0-1</host>
                    <port>9000</port>
                    <secure>0</secure>
                </replica>
            </shard>
            <shard>
                <internal_replication>false</internal_replication>
                <replica>
                    <host>chi-ck-cluster-cktest-1-0</host>
                    <port>9000</port>
                    <secure>0</secure>
                </replica>
            </shard>
            <shard>
                <internal_replication>false</internal_replication>
                <replica>
                    <host>chi-ck-cluster-cktest-1-1</host>
                    <port>9000</port>
                    <secure>0</secure>
                </replica>
            </shard>
        </all-sharded>
    </remote_servers>
</yandex>

To verify the cluster, check the cluster information in system.clusters, create a replicated local table and a distributed table on top of it, insert some data, and then query both the local table and the distributed table, as sketched below.
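
A minimal sketch of these verification steps from the command line; the table names, the use of the default database, and the ZooKeeper path are made up for illustration, while {shard} and {replica} are the macros generated by clickhouse-operator:

# check the cluster topology
kubectl exec -it chi-ck-cluster-cktest-0-0-0 -c clickhouse -- clickhouse-client \
  --query "SELECT cluster, shard_num, replica_num, host_name FROM system.clusters WHERE cluster = 'cktest'"

# create a replicated local table on every node and a Distributed table on top of it, then insert some rows
kubectl exec -it chi-ck-cluster-cktest-0-0-0 -c clickhouse -- clickhouse-client --multiquery --query "
CREATE TABLE test_local ON CLUSTER cktest (id UInt32, name String)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/test_local', '{replica}')
ORDER BY id;
CREATE TABLE test_dist ON CLUSTER cktest AS test_local
ENGINE = Distributed(cktest, default, test_local, rand());
INSERT INTO test_dist VALUES (1, 'a'), (2, 'b'), (3, 'c');"

# query the local table on this replica, then the distributed table across all shards
kubectl exec -it chi-ck-cluster-cktest-0-0-0 -c clickhouse -- clickhouse-client --query "SELECT count() FROM test_local"
kubectl exec -it chi-ck-cluster-cktest-0-0-0 -c clickhouse -- clickhouse-client --query "SELECT count() FROM test_dist"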

Caveats

clickhouse-operator actually uses the nodeName approach when placing the ClickHouse cluster pods, so if a node does not have enough resources, the pod will simply fail to be created rather than being scheduled onto another node.

Working with ckman

Thoughts on ClickHouse containerization

Supporting hostname access

One reason we manage clusters with ckman is that ckman can export the cluster configuration directly to components such as clickhouse_sinker, so we need to know each node's exact IP rather than the 127.0.0.1 reported by system.clusters (see the query sketch below).
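
For reference, what ClickHouse itself reports can be inspected from inside a pod; a small sketch against the cktest cluster created above:

kubectl exec -it chi-ck-cluster-cktest-0-0-0 -c clickhouse -- clickhouse-client \
  --query "SELECT cluster, host_name, host_address FROM system.clusters WHERE cluster = 'cktest'"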

The problem with using IPs, however, is that when a node restarts its IP can drift; in the worst case the whole cluster restarts, every IP changes, and the cluster becomes unreachable. That is clearly not what we want.

So it is better to access nodes by hostname. The problem with hostname access is DNS resolution: inside the Kubernetes cluster it is not an issue, but from outside the cluster the names cannot be resolved.

Of course, it is also reasonable for services inside the cloud environment to be accessible only from inside it. (This is something ckman needs to support first.)

Exposing the cluster service externally

Following the principle that in-cloud services are accessed only from inside the cloud, the ClickHouse service exposes only a NodePort to the outside, which lands on an arbitrary cluster node; all operations go through ckman.

In-cloud cluster operations

This refers to cluster scale-out/scale-in, upgrades, and so on; in theory these can all be done simply by modifying the YAML, as sketched below.
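
For example, scaling out should in principle just be a change to the layout section of the CHI manifest followed by a re-apply. A hedged sketch against the cktest example above; the file name ck-cluster.yaml is assumed here:

# edit ck-cluster.yaml: change shardsCount from 2 to 3, then
kubectl apply -f ck-cluster.yaml
# the operator then creates the chi-ck-cluster-cktest-2-* pods and regenerates remote_servers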

Integrating ClickHouse containerization into ckman

ckman is positioned as an operations tool for managing and monitoring ClickHouse clusters, and it is already quite complete outside of Kubernetes, so supporting in-cloud deployment is naturally its responsibility as well.

So in ckman's upcoming roadmap, integrating containerized ClickHouse deployment into ckman is a top priority.


