Learning ZooKeeper (2): ZooKeeper on Kubernetes
March 12, 2015
Updated for ZooKeeper 3.5.6
Resource Definitions
Configuration
The configuration consists of the logging file log4j.properties, the ZooKeeper configuration template zoo.cfg.jinja2, and the ZooKeeper ID template myid.jinja2. The templates use Jinja syntax.
Here is an example zoo.cfg.jinja2:
```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
electionPortBindRetry=0
{% for index in range(0, ENSEMBLE_SIZE|int) %}
server.{{ index + 1 }}={{ APP_NAME }}-{{ index }}.{{ SERVICE_NAME }}:2888:3888
{% endfor %}
```
The electionPortBindRetry option is set to 0, which makes ZooKeeper retry binding the election port indefinitely; the default is 3.
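The Jinja loop in the template expands into one server line per ensemble member, addressing each pod through the headless Service. A quick Python equivalent (using the same values the StatefulSet below passes in via environment variables) shows what the rendered lines look like:

```python
# Values matching the env vars set on the init container below.
APP_NAME, SERVICE_NAME, ENSEMBLE_SIZE = "zk-sts", "zk", 3

# Same expansion as the Jinja for-loop: server ids start at 1,
# pod ordinals start at 0.
lines = [
    f"server.{i + 1}={APP_NAME}-{i}.{SERVICE_NAME}:2888:3888"
    for i in range(ENSEMBLE_SIZE)
]
print("\n".join(lines))
# → server.1=zk-sts-0.zk:2888:3888
#   server.2=zk-sts-1.zk:2888:3888
#   server.3=zk-sts-2.zk:2888:3888
```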
Here is an example myid.jinja2:
```
{% set id = (POD_NAME.split("-")[-1])|int + 1 %}{{ id }}
```
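The template derives each server's myid from the StatefulSet pod name: the ordinal suffix plus one, since ZooKeeper server ids must be positive. The same logic in plain Python:

```python
def myid_for_pod(pod_name: str) -> int:
    """Derive the ZooKeeper myid from a StatefulSet pod name.

    Pods are named <statefulset>-0, <statefulset>-1, ...; the template
    adds 1 so that ids line up with the server.N entries in zoo.cfg.
    """
    return int(pod_name.split("-")[-1]) + 1

print(myid_for_pod("zk-sts-0"))  # → 1
print(myid_for_pod("zk-sts-2"))  # → 3
```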
Create the ConfigMap:
```shell
kubectl create configmap zk-cm \
  --from-file=conf/log4j.properties \
  --from-file=conf/zoo.cfg.jinja2 \
  --from-file=data/myid.jinja2
```
ZooKeeper StatefulSet
zk-sts.yaml
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk-sts
spec:
  serviceName: zk
  replicas: 3
  selector:
    matchLabels:
      app: zk
  template:
    metadata:
      labels:
        app: zk   # must match spec.selector, otherwise the StatefulSet is rejected
    spec:
      initContainers:
      - name: jinja2
        image: dyingbleed/jinja2
        command:
        - /bin/sh
        - -c
        args:
        - >
          cp /opt/config/* /opt/output &&
          cat /opt/jinja2/zoo.cfg.jinja2 | python /opt/render.py > /opt/output/zoo.cfg &&
          cat /opt/jinja2/myid.jinja2 | python /opt/render.py > /opt/data/myid
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: APP_NAME
          value: zk-sts
        - name: SERVICE_NAME
          value: zk
        - name: ENSEMBLE_SIZE
          value: "3"
        volumeMounts:
        - name: share
          mountPath: /opt/output
        - name: zk-pvc
          mountPath: /opt/data
        - name: config
          mountPath: /opt/config
        - name: jinja2
          mountPath: /opt/jinja2
      containers:
      - name: zk
        image: zookeeper:3.5.6
        env:
        - name: JVMFLAGS
          value: "-Xmx1G -Xms1G"
        resources:
          limits:
            cpu: 200m
            memory: 1Gi
        ports:
        - name: client
          containerPort: 2181
        - name: server
          containerPort: 2888
        - name: election
          containerPort: 3888
        - name: web
          containerPort: 8080
        volumeMounts:
        - name: share
          mountPath: /conf
        - name: zk-pvc
          mountPath: /data/zookeeper
      volumes:
      - name: config
        configMap:
          name: zk-cm
          items:
          - key: log4j.properties
            path: log4j.properties
      - name: jinja2
        configMap:
          name: zk-cm
          items:
          - key: myid.jinja2
            path: myid.jinja2
          - key: zoo.cfg.jinja2
            path: zoo.cfg.jinja2
      - name: share
        emptyDir: {}
  volumeClaimTemplates:
  - metadata:
      name: zk-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
      storageClassName: zk-sc
```
① The init container copies log4j.properties into the configuration directory and renders zoo.cfg from its template into the same directory; the configuration directory is /conf.
② The init container renders myid from its template into the data directory; the data directory is /data/zookeeper.
③ A PersistentVolumeClaim template requests persistent storage for the ZooKeeper data directory /data/zookeeper.
ZooKeeper Service
zk-svc.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: zk
spec:
  selector:
    app: zk
  ports:
  - name: client
    port: 2181
    targetPort: client
  - name: server
    port: 2888
    targetPort: server
  - name: election
    port: 3888
    targetPort: election
  - name: web
    port: 8080
    targetPort: web
  clusterIP: None
```
Setting clusterIP: None creates a headless Service for the ZooKeeper StatefulSet, giving each pod a stable DNS name of the form zk-sts-0.zk, zk-sts-1.zk, ... — exactly the addresses the zoo.cfg template generates for the server entries.
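Clients connect through the same per-pod DNS names. As a sketch (the helper name and defaults are illustrative, not from the original post), the client connect string for this ensemble can be built the same way:

```python
def connect_string(app: str, svc: str, size: int, port: int = 2181) -> str:
    """Build a ZooKeeper client connect string from headless-Service DNS.

    Each StatefulSet pod is reachable at <app>-<ordinal>.<svc>;
    clients list all members separated by commas.
    """
    return ",".join(f"{app}-{i}.{svc}:{port}" for i in range(size))

print(connect_string("zk-sts", "zk", 3))
# → zk-sts-0.zk:2181,zk-sts-1.zk:2181,zk-sts-2.zk:2181
```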