Contents
  1. etcd
  2. SSL certificates
  3. Master node
    3.1 The kube-apiserver service
    3.2 The kube-controller-manager service
    3.3 The kube-scheduler service
  4. Node side
    4.1 kubelet
    4.2 kube-proxy
  5. Testing
  6. Next steps

Kubernetes, or K8s for short, is Google's container cluster system. Compared with Docker's native Swarm, K8s feels more complete, and of course more complex. Plenty of introductions to Kubernetes can be found online, so I won't repeat them here; below are my notes on manually building a simple cluster.
First, your hosts need a Linux kernel of 3.10 or later; Ubuntu 16.04 is recommended.

1. etcd

etcd is a service-discovery and registration component similar to ZooKeeper, reportedly with a simpler algorithm than ZK. K8s uses etcd for service registration, and from K8s 1.6 onward etcd3 is required.
A prebuilt binary package can be downloaded from the releases page: etcd-v3.2.9-linux-amd64.tar.gz
After unpacking, copy the etcd and etcdctl binaries into /usr/bin.
Edit the systemd unit /usr/lib/systemd/system/etcd.service:

[Unit]
Description=etcd - highly-available key value store
Documentation=https://github.com/coreos/etcd
Documentation=man:etcd
After=network.target
Wants=network-online.target
[Service]
Environment=DAEMON_ARGS=
Environment=ETCD_NAME=%H
Environment=ETCD_DATA_DIR=/data/etcd
EnvironmentFile=-/etc/default/%p
Type=notify
User=etcd
PermissionsStartOnly=true
#ExecStart=/bin/sh -c "GOMAXPROCS=$(nproc) /usr/bin/etcd $DAEMON_ARGS"
ExecStart=/usr/bin/etcd $DAEMON_ARGS
Restart=on-abnormal
#RestartSec=10s
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

The etcd configuration file lives at /etc/default/etcd. This is a single-node setup, so the following entries are enough:

ETCD_DATA_DIR="/data/etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

Once configured, start it with systemd:

sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd

Verify it:

ubuntu@ubuntu1:/etc/default$ etcdctl cluster-health
member 4550dff037975af3 is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
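Note that `etcdctl cluster-health` talks to the legacy v2 API, while K8s 1.6+ stores its state through the v3 API. It's worth confirming the v3 endpoint also responds; a quick sketch, assuming etcdctl v3.2 on the same host:

```shell
# Switch etcdctl to the v3 API and probe the client endpoint
export ETCDCTL_API=3
etcdctl --endpoints=http://127.0.0.1:2379 endpoint health

# Write and read back a key to confirm v3 storage works end to end
etcdctl --endpoints=http://127.0.0.1:2379 put /sanity ok
etcdctl --endpoints=http://127.0.0.1:2379 get /sanity
```

If the v3 `endpoint health` check fails while the v2 one succeeds, kube-apiserver with `--storage-backend=etcd3` will not be able to start.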

2. SSL certificates

Secure communication between the K8s components and the master is recommended to use CA-signed SSL certificates to encrypt traffic; the kube-apiserver, kube-controller-manager and kube-scheduler processes on the master, and the kubelet and kube-proxy processes on the nodes, all need CA-signed certificates configured.
easyrsa makes generating the certificates fairly painless; download easy-rsa.tar.gz first.
After unpacking, enter the easy-rsa-master/easyrsa3 directory and edit the vars file:

set_var EASYRSA_REQ_COUNTRY    "CN"
set_var EASYRSA_REQ_PROVINCE "Guangdong"
set_var EASYRSA_REQ_CITY "Zhongshan"
set_var EASYRSA_REQ_ORG "myself"
set_var EASYRSA_REQ_EMAIL "me@example.net"
set_var EASYRSA_REQ_OU "k8s-test"

Initialize the easyrsa environment:

./easyrsa init-pki

Generate the CA certificate:

./easyrsa --batch "--req-cn=${MASTER_IP}@`date +%s`" build-ca nopass

Generate the server certificate and key:

./easyrsa --subject-alt-name="IP:${MASTER_IP},"\
"IP:${MASTER_CLUSTER_IP},"\
"DNS:kubernetes,"\
"DNS:kubernetes.default,"\
"DNS:kubernetes.default.svc,"\
"DNS:kubernetes.default.svc.cluster,"\
"DNS:kubernetes.default.svc.cluster.local" \
--days=10000 \
build-server-full server nopass

MASTER_CLUSTER_IP is the first IP of the apiserver's --service-cluster-ip-range.
Then copy pki/ca.crt, pki/issued/server.crt and pki/private/server.key to /etc/kubernetes/pki on the master node for later use.
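Before copying the files, it is worth confirming the SANs actually made it into the certificate, since a quoting mistake in --subject-alt-name fails silently. A quick check with openssl, assuming you are still in the easyrsa3 directory:

```shell
# Print the subjectAltName extension of the freshly issued server certificate;
# all the IP: and DNS: entries passed to easyrsa should appear here
openssl x509 -in pki/issued/server.crt -noout -text \
  | grep -A1 "Subject Alternative Name"

# Verify the server certificate chains back to the CA
openssl verify -CAfile pki/ca.crt pki/issued/server.crt
```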

Generate the client certificate:

./easyrsa build-client-full kubelet nopass

Copy pki/ca.crt, pki/issued/kubelet.crt and pki/private/kubelet.key to /etc/kubernetes/pki on the node for later use.

3. Master node

Download the K8s server binary release, currently v1.8.3: kubernetes-server-linux-amd64.tar.gz
Copy the kube-apiserver, kube-controller-manager and kube-scheduler binaries to /usr/bin.

3.1 The kube-apiserver service

Edit the systemd unit /usr/lib/systemd/system/kube-apiserver.service:

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Configuration file /etc/kubernetes/apiserver:

KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://192.168.21.66:2379 --client-ca-file=/etc/kubernetes/pki/ca.crt --tls-cert-file=/etc/kubernetes/pki/server.crt --tls-private-key-file=/etc/kubernetes/pki/server.key --service-cluster-ip-range=169.169.0.0/16 --service-node-port-range=1-65535 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds --logtostderr=false --log-dir=/var/log/kubernetes --v=2"

--etcd-servers: the etcd service address
--client-ca-file: the CA certificate
--tls-cert-file: the server certificate
--tls-private-key-file: the server private key
--service-cluster-ip-range: the virtual IP range for K8s cluster services; it must not conflict with any physical IPs
--admission-control: admission-control settings; the listed control modules take effect as plugins, in order

Create the log directory /var/log/kubernetes beforehand.

3.2 The kube-controller-manager service

kube-controller-manager depends on kube-apiserver.
Create the systemd unit /usr/lib/systemd/system/kube-controller-manager.service:

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Configuration file /etc/kubernetes/controller-manager:

KUBE_CONTROLLER_MANAGER_ARGS="--master=https://192.168.21.66:6443 --kubeconfig=/etc/kubernetes/kubeconfig --service-account-private-key-file=/etc/kubernetes/pki/server.key --root-ca-file=/etc/kubernetes/pki/ca.crt --logtostderr=false --log-dir=/var/log/kubernetes --v=2"

As of 1.8, cluster connection details are configured through a kubeconfig file:
/etc/kubernetes/kubeconfig

apiVersion: v1
kind: Config
users:
- name: controllermanager
  user:
    client-certificate: /etc/kubernetes/pki/kube-cs-client.crt
    client-key: /etc/kubernetes/pki/kube-cs-client.key
clusters:
- name: k8s-test
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
contexts:
- context:
    cluster: k8s-test
    user: controllermanager
  name: my-context
current-context: my-context

--master: the apiserver address; 6443 is the encrypted HTTPS port
--kubeconfig: the central config file, including certificate paths
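Note that the kubeconfig references kube-cs-client.crt and kube-cs-client.key, which section 2 never generated; presumably a dedicated client certificate was issued for the controller-manager. With easyrsa that would look roughly like the following (the name kube-cs-client is simply what this kubeconfig expects):

```shell
# Back in easy-rsa-master/easyrsa3: issue a client cert for controller-manager
./easyrsa build-client-full kube-cs-client nopass

# Copy the pieces to where the kubeconfig above expects them
sudo cp pki/issued/kube-cs-client.crt /etc/kubernetes/pki/
sudo cp pki/private/kube-cs-client.key /etc/kubernetes/pki/
```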

3.3 The kube-scheduler service

kube-scheduler depends on kube-apiserver.
Create the systemd unit /usr/lib/systemd/system/kube-scheduler.service:

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Configuration file /etc/kubernetes/scheduler:

KUBE_SCHEDULER_ARGS="--master=https://192.168.21.66:6443 --kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=2"

The kubeconfig is the same as for kube-controller-manager.

Once configured, start all three services with systemd.
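Concretely, starting the three master services mirrors the etcd case above:

```shell
# Pick up the new unit files, then enable and start each master component
sudo systemctl daemon-reload
for svc in kube-apiserver kube-controller-manager kube-scheduler; do
    sudo systemctl enable "$svc"
    sudo systemctl start "$svc"
done

# Confirm everything came up
systemctl status kube-apiserver kube-controller-manager kube-scheduler --no-pager
```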

4. Node side

Worker nodes are the physical servers that actually run containers; Docker must be installed and started on them first (see my earlier post on installing Docker).
Download the K8s binary release [kubernetes-server-linux-amd64.tar.gz] and copy the kubelet and kube-proxy binaries to /usr/bin.

4.1 kubelet

Create the systemd unit /usr/lib/systemd/system/kubelet.service:

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target

Configuration file /etc/kubernetes/kubelet:

KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 --fail-swap-on=false --logtostderr=false --log-dir=/var/log/kubernetes --v=2"

WorkingDirectory is where kubelet keeps its data; create it beforehand.

--kubeconfig: the central config file; the old --api-servers flag is deprecated
--pod-infra-container-image: the pause image used by K8s pods, pointed here at Alibaba's mirror
--fail-swap-on: defaults to true, i.e. the host must have swap disabled; set to false here so this test environment can ignore swap
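Putting the node-side prerequisites together before starting kubelet (the directory names match the unit file and flags above; disabling swap is the alternative to --fail-swap-on=false):

```shell
# Data directory referenced by WorkingDirectory in kubelet.service
sudo mkdir -p /var/lib/kubelet

# Log directory used by --log-dir
sudo mkdir -p /var/log/kubernetes

# Alternative to --fail-swap-on=false: disable swap entirely
# sudo swapoff -a
```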

The kubeconfig file:

apiVersion: v1
clusters:
- cluster:
    server: https://192.168.21.66:6443
    certificate-authority: /etc/kubernetes/pki/ca.crt
  name: k8s-test
contexts:
- context:
    cluster: k8s-test
    user: kubelet
  name: my-context
current-context: my-context
kind: Config
preferences: {}
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/pki/kubelet.crt
    client-key: /etc/kubernetes/pki/kubelet.key

4.2 kube-proxy

Create the systemd unit /usr/lib/systemd/system/kube-proxy.service:

[Unit]
Description=Kubernetes Kube-proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
Requires=networking.service
[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Configuration file /etc/kubernetes/proxy:

KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=2"

The kubeconfig is the same as for the kubelet service.

Once configured, start kubelet and kube-proxy with systemd.
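On the node, the startup sequence mirrors the master side:

```shell
# Pick up the new unit files, then enable and start both node components
sudo systemctl daemon-reload
sudo systemctl enable kubelet kube-proxy
sudo systemctl start kubelet kube-proxy

# Confirm both services are running
systemctl status kubelet kube-proxy --no-pager
```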

5. Testing

On the master node, run:

ubuntu@ubuntu1:/usr/local$ sudo kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
wx        Ready     <none>    13d       v1.8.3

The node shows as Ready, so RCs, pods, services and other resources can now be created; of course, with a single node, everything will be scheduled onto it.
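As a quick smoke test, a minimal pod manifest can be applied from the master; the image and names below are arbitrary examples, and the aliyun pause image configured on the kubelet handles the pod's infra container:

```yaml
# nginx-test.yaml - a single-pod smoke test
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
```

Apply it with `sudo kubectl create -f nginx-test.yaml`, then watch `kubectl get pods -o wide` until the pod reaches Running on node wx.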

6. Next steps

This post only demonstrated wiring up the master and a node; internal DNS, cluster networking and more still need to be completed later.
