multus-cni Source Code Analysis




Contents: Source Code Analysis · cmdAdd Analysis · cmdDel Analysis · An ipvlan Example · Summary
Source repository: https://github.com/k8snetworkplumbingwg/multus-cni.git

Source Code Analysis

Before diving into the source, take a look at the executables under /opt/cni/bin on the node; any of them may be invoked by a CNI plugin:

[root@zhounanjun-test01 ~]# ls /opt/cni/bin/
bandwidth  calico       dhcp      flannel      host-local  kube-ovn  macvlan  portmap  sbr    static  vlan
bridge     calico-ipam  firewall  host-device  ipvlan      loopback  multus   ptp      sriov  tuning  whereabouts

These base CNI executables fall into three functional categories:

1. Main plugins: binaries that create concrete network devices, e.g. bridge (bridge device), ipvlan, loopback (lo device), macvlan, ptp (veth pair device), and vlan.
2. IPAM (IP Address Management) plugins: binaries that allocate IP addresses. For example, dhcp sends requests to a DHCP server; host-local allocates from a pre-configured IP range; calico-ipam is the Calico CNI's own centralized IP allocator; whereabouts is another centralized IP allocator, less commonly used (I only have it on this node because I used SR-IOV devices), open-sourced by the k8snetworkplumbingwg community.
3. Built-in CNI plugins maintained by the CNI community, plus project-specific ones: flannel is the CNI plugin built for the Flannel project; tuning adjusts network device parameters via sysctl; portmap configures port mappings via iptables; bandwidth does rate limiting with a Token Bucket Filter (TBF); calico is the CNI plugin built for the Calico project.

A hypothetical conflist chaining these categories together is sketched below.
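
To make the chain concrete, here is a hypothetical conflist (every name and the subnet are made up for illustration) that combines a main plugin (bridge) with an IPAM plugin (host-local) and two meta plugins (tuning and portmap):

{
    "cniVersion":"0.3.1",
    "name":"demo-network",
    "plugins":[
        {
            "type":"bridge",
            "bridge":"cni0",
            "ipam":{
                "type":"host-local",
                "subnet":"10.10.0.0/16"
            }
        },
        {
            "type":"tuning",
            "sysctl":{
                "net.core.somaxconn":"512"
            }
        },
        {
            "type":"portmap",
            "capabilities":{
                "portMappings":true
            }
        }
    ]
}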

cmdAdd Analysis

Walking through the source is, in effect, walking through how a CNI plugin is implemented; after reading this analysis you should be able to write a CNI plugin of your own.
The startup code of the multus-cni plugin lives in cmd/main.go.

The main function is as follows:

func main() {

    // Init command line flags to clear vendored packages' one, especially in init()
    flag.CommandLine = flag.NewFlagSet(os.Args[0], flag.ExitOnError)

    // add version flag
    versionOpt := false
    flag.BoolVar(&versionOpt, "version", false, "Show application version")
    flag.BoolVar(&versionOpt, "v", false, "Show application version")
    flag.Parse()
    if versionOpt == true {
        fmt.Printf("%s\n", multus.PrintVersionString())
        return
    }

    skel.PluginMain( // the three functions that implement a CNI plugin: cmdAdd, cmdCheck, cmdDel
        func(args *skel.CmdArgs) error {
            // builds the network environment when a container is created
            result, err := multus.CmdAdd(args, nil, nil)
            if err != nil {
                return err
            }
            return result.Print()
        },
        func(args *skel.CmdArgs) error {
            return multus.CmdCheck(args, nil, nil)
        },
        // tears down the network environment when a container is deleted
        func(args *skel.CmdArgs) error { return multus.CmdDel(args, nil, nil) },
        cniversion.All, "meta-plugin that delegates to other CNI plugins")
}
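
Before dissecting multus' three callbacks, here is a minimal sketch of a standalone CNI plugin built the same way. This is my own illustration, not multus code, assuming the containernetworking/cni packages that multus also uses:

package main

import (
	"encoding/json"
	"fmt"

	"github.com/containernetworking/cni/pkg/skel"
	"github.com/containernetworking/cni/pkg/types"
	current "github.com/containernetworking/cni/pkg/types/100"
	"github.com/containernetworking/cni/pkg/version"
)

// netConf mirrors the JSON config this plugin receives on stdin
// (the file under /etc/cni/net.d).
type netConf struct {
	types.NetConf
}

func cmdAdd(args *skel.CmdArgs) error {
	conf := netConf{}
	if err := json.Unmarshal(args.StdinData, &conf); err != nil {
		return fmt.Errorf("failed to parse netconf: %v", err)
	}
	// A real plugin would create interfaces inside args.Netns here.
	result := &current.Result{CNIVersion: conf.CNIVersion}
	return types.PrintResult(result, conf.CNIVersion)
}

func cmdCheck(args *skel.CmdArgs) error { return nil }
func cmdDel(args *skel.CmdArgs) error   { return nil }

func main() {
	skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, "a do-nothing demo CNI plugin")
}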

The three function values passed to skel.PluginMain are the cmdAdd, cmdCheck, and cmdDel functions. Start with cmdAdd, which is this closure:

func(args *skel.CmdArgs) error {
            // builds the network environment when a container is created
            result, err := multus.CmdAdd(args, nil, nil)
            if err != nil {
                return err
            }
            return result.Print()
        }

It delegates to multus.CmdAdd, whose body, slightly trimmed, looks like this:

func CmdAdd(args *skel.CmdArgs, exec invoke.Exec, kubeClient *k8s.ClientInfo) (cnitypes.Result, error) {
    // the configuration from the /etc/cni/net.d/00-multus.conf file
    n, err := types.LoadNetConf(args.StdinData) // deserialize the config from StdinData
    logging.Debugf("CmdAdd: %v, %v, %v", args, exec, kubeClient)
    if err != nil {
        return nil, cmdErr(nil, "error loading netconf: %v", err)
    }
    // build the Kubernetes client
    kubeClient, err = k8s.GetK8sClient(n.Kubeconfig, kubeClient)
    if err != nil {
        return nil, cmdErr(nil, "error getting k8s client: %v", err)
    }

    k8sArgs, err := k8s.GetK8sArgs(args) // pod info passed in by kubelet, e.g. pod name and namespace
    if err != nil {
        return nil, cmdErr(nil, "error getting k8s args: %v", err)
    }
    ...

    pod, err := getPod(kubeClient, k8sArgs, false) // fetch the Pod object from the API server
    if err != nil {
        return nil, err
    }
    ...
    // Load the delegates field from /etc/cni/net.d/00-multus.conf; it holds the default
    // CNI plugin config (calico's, for example). Dig into this function: it gathers
    // every network the pod wants to attach to, and there may be more than one.
    _, kc, err := k8s.TryLoadPodDelegates(pod, n, kubeClient, resourceMap) // n.Delegates is fully populated after this call
    if err != nil {
        return nil, cmdErr(k8sArgs, "error loading k8s delegates k8s args: %v", err)
    }

    // cache the multus config: save the Delegates array under CNIDir,
    // which defaults to /var/lib/cni/multus
    if err := saveDelegates(args.ContainerID, n.CNIDir, n.Delegates); err != nil {
        return nil, cmdErr(k8sArgs, "error saving the delegates: %v", err)
    }

    var result, tmpResult cnitypes.Result
    var netStatus []nettypes.NetworkStatus
    cniArgs := os.Getenv("CNI_ARGS")
    for idx, delegate := range n.Delegates { // iterate over the delegates
        ifName := getIfname(delegate, args.IfName, idx) // pick the interface name
        // build a RuntimeConf for this delegate
        rt, cniDeviceInfoPath := types.CreateCNIRuntimeConf(args, k8sArgs, ifName, n.RuntimeConfig, delegate)
        if cniDeviceInfoPath != "" && delegate.ResourceName != "" && delegate.DeviceID != "" {
            err = nadutils.CopyDeviceInfoForCNIFromDP(cniDeviceInfoPath, delegate.ResourceName, delegate.DeviceID)
            // Even if the filename is set, file may not be present. Ignore error,
            // but log and in the future may need to filter on specific errors.
            if err != nil {
                logging.Debugf("cmdAdd: CopyDeviceInfoForCNIFromDP returned an error - err=%v", err)
            }
        }

        netName := ""
        // invoke the delegated CNI plugin
        tmpResult, err = delegateAdd(exec, kubeClient, pod, ifName, delegate, rt, n, cniArgs)
        if err != nil {
            // If the add failed, tear down all networks we already added
            netName = delegate.Conf.Name
            if netName == "" {
                netName = delegate.ConfList.Name
            }
            // Ignore errors; DEL must be idempotent anyway
            _ = delPlugins(exec, nil, args, k8sArgs, n.Delegates, idx, n.RuntimeConfig, n)
            return nil, cmdPluginErr(k8sArgs, netName, "error adding container to network %q: %v", netName, err)
        }
        // ... code that sets the default gateway omitted ...
        
        // create the network status, only in case Multus as kubeconfig
        if n.Kubeconfig != "" && kc != nil {
            if !types.CheckSystemNamespaces(string(k8sArgs.K8S_POD_NAME), n.SystemNamespaces) {
                // build delegateNetStatus from tmpResult
                delegateNetStatus, err := nadutils.CreateNetworkStatus(tmpResult, delegate.Name, delegate.MasterPlugin, devinfo)
                if err != nil {
                    return nil, cmdErr(k8sArgs, "error setting network status: %v", err)
                }

                netStatus = append(netStatus, *delegateNetStatus)
            }
        } else if devinfo != nil {
            // Warn that devinfo exists but could not add it to downwards API
            logging.Errorf("devinfo available, but no kubeConfig so NetworkStatus not modified.")
        }
    }

    // set the network status annotation in apiserver, only in case Multus as kubeconfig
    if n.Kubeconfig != "" && kc != nil {
        if !types.CheckSystemNamespaces(string(k8sArgs.K8S_POD_NAME), n.SystemNamespaces) {
            // write netStatus into the pod's annotations, i.e. record the pod's IP addresses there
            err = k8s.SetNetworkStatus(kubeClient, k8sArgs, netStatus, n)
            if err != nil {
                if strings.Contains(err.Error(), "failed to query the pod") {
                    return nil, cmdErr(k8sArgs, "error setting the networks status, pod was already deleted: %v", err)
                }
                return nil, cmdErr(k8sArgs, "error setting the networks status: %v", err)
            }
        }
    }

    return result, nil // return the CNI result
}
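
For reference, the netStatus written by k8s.SetNetworkStatus ends up as the pod's k8s.v1.cni.cncf.io/network-status annotation (older multus releases spell it networks-status). For the ipvlan example later in this post it would look roughly like this (illustrative, with values borrowed from that example; exact fields vary by version):

[
    {
        "name":"kube-ovn",
        "interface":"eth0",
        "ips":["10.222.90.214"],
        "default":true
    },
    {
        "name":"default/ipvlan-net2",
        "interface":"net1",
        "ips":["192.168.133.190"],
        "mac":"14:18:77:46:1a:5a"
    }
]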

The CmdAdd logic, step by step:

1. Parse the input and build a NetConf object.
2. Build a Kubernetes client from the kubeconfig path in the NetConf.
3. k8s.GetK8sArgs extracts the pod parameters passed in, such as the container ID and network namespace.
4. Fetch the pod from the cluster.
5. The most important step, k8s.TryLoadPodDelegates: it gathers the networks the pod wants to use, and there may be more than one. If you have seen the contents of /etc/cni/net.d/00-multus.conf (an example appears later in this post), you know its delegates field stores the default CNI plugin configuration. This function loads those delegates from 00-multus.conf into the NetConf's Delegates array, then looks up the NetworkAttachmentDefinition objects named by the pod's k8s.v1.cni.cncf.io/networks annotation and appends them to the same array (a simplified sketch of how that annotation is parsed follows this list).
6. Save the Delegates array to the local CNIDir, /var/lib/cni/multus by default.
7. Iterate over the NetConf's Delegates array and run the Add command of the plugin carried by each Delegate; in other words, let each plugin build its part of the container's network environment.
8. Write the results, such as the pod's IP addresses, into the pod's annotations.
9. Set the container's default gateway.
10. Return the result.
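
To illustrate step 5, here is a simplified, self-contained sketch (my own code, not multus') of how a k8s.v1.cni.cncf.io/networks annotation value is interpreted: a comma-separated list of [namespace/]name[@interface] references to NetworkAttachmentDefinition objects:

package main

import (
	"fmt"
	"strings"
)

// netSelection is a simplified view of one entry in the
// k8s.v1.cni.cncf.io/networks annotation.
type netSelection struct {
	Namespace string // defaults to the pod's own namespace
	Name      string // NetworkAttachmentDefinition name
	IfName    string // optional explicit interface name inside the pod
}

// parseNetworksAnnotation handles the comma-separated
// "[namespace/]name[@interface]" short form of the annotation.
// (The annotation may also hold a JSON list; multus handles both.)
func parseNetworksAnnotation(value, podNamespace string) []netSelection {
	var out []netSelection
	for _, item := range strings.Split(value, ",") {
		item = strings.TrimSpace(item)
		sel := netSelection{Namespace: podNamespace}
		if parts := strings.SplitN(item, "@", 2); len(parts) == 2 {
			item, sel.IfName = parts[0], parts[1]
		}
		if parts := strings.SplitN(item, "/", 2); len(parts) == 2 {
			sel.Namespace, sel.Name = parts[0], parts[1]
		} else {
			sel.Name = item
		}
		out = append(out, sel)
	}
	return out
}

func main() {
	fmt.Println(parseNetworksAnnotation("ipvlan-net2, other-ns/sriov-net@net5", "default"))
	// Output: [{default ipvlan-net2 } {other-ns sriov-net net5}]
}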

Note: "delegate" means that this CNI plugin does not do the work itself; it calls the built-in plugin named by the Delegate entry to do it.
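
Under the hood this delegation uses the standard libcni invoke helpers; multus' delegateAdd wraps extra logic around a call of roughly this shape (a sketch, with a hypothetical netconf):

package main

import (
	"context"
	"fmt"

	"github.com/containernetworking/cni/pkg/invoke"
)

func main() {
	// Raw config for one delegate, as it would sit in the delegates array.
	netconf := []byte(`{"cniVersion":"0.3.1","name":"demo","type":"ipvlan",` +
		`"master":"bond0","ipam":{"type":"host-local","subnet":"192.168.133.0/24"}}`)

	// invoke.DelegateAdd looks the "ipvlan" binary up on CNI_PATH
	// (e.g. /opt/cni/bin) and executes it with CNI_COMMAND=ADD. It assumes the
	// usual CNI environment variables (CNI_NETNS, CNI_IFNAME, CNI_CONTAINERID,
	// CNI_PATH) are already set, i.e. that we were ourselves invoked as a plugin.
	result, err := invoke.DelegateAdd(context.TODO(), "ipvlan", netconf, nil)
	if err != nil {
		fmt.Println("delegate ADD failed:", err)
		return
	}
	fmt.Println(result) // the delegate's interfaces, IPs, routes, etc.
}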

cmdDel Analysis

The multus.CmdDel function:

func CmdDel(args *skel.CmdArgs, exec invoke.Exec, kubeClient *k8s.ClientInfo) error {
    // load the config
    in, err := types.LoadNetConf(args.StdinData)
    logging.Debugf("CmdDel: %v, %v, %v", args, exec, kubeClient)
    if err != nil {
        return err
    }
    ...

    k8sArgs, err := k8s.GetK8sArgs(args) // pod info passed in by kubelet
    if err != nil {
        return cmdErr(nil, "error getting k8s args: %v", err)
    }
    ...
    kubeClient, err = k8s.GetK8sClient(in.Kubeconfig, kubeClient) // build the client
    if err != nil {
        return cmdErr(nil, "error getting k8s client: %v", err)
    }

    pod, err := getPod(kubeClient, k8sArgs, true) // fetch the pod
    if err != nil {
        return err
    }

    // Read the cache to get delegates json for the pod
    netconfBytes, path, err := consumeScratchNetConf(args.ContainerID, in.CNIDir) // read the delegates file cached on disk
    if err != nil {
        // Fetch delegates again if cache is not exist and pod info can be read
        // fall back to the 00-multus.conf file
        if os.IsNotExist(err) && pod != nil {
            if in.ClusterNetwork != "" {
                _, err = k8s.GetDefaultNetworks(pod, in, kubeClient, nil)
                if err != nil {
                    return cmdErr(k8sArgs, "failed to get clusterNetwork/defaultNetworks: %v", err)
                }
                // First delegate is always the master plugin
                in.Delegates[0].MasterPlugin = true
            }

            // if the local cache is missing, fall back to the pod annotations
            _, _, err := k8s.TryLoadPodDelegates(pod, in, kubeClient, nil)
            if err != nil {
                if len(in.Delegates) == 0 {
                    // No delegate available so send error
                    return cmdErr(k8sArgs, "failed to get delegates: %v", err)
                }
                // Get clusterNetwork before, so continue to delete
                logging.Errorf("Multus: failed to get delegates: %v, but continue to delete clusterNetwork", err)
            }
        } else {
            // The options to continue with a delete have been exhausted (cachefile + API query didn't work)
            // We cannot exit with an error as this may cause a sandbox to never get deleted.
            logging.Errorf("Multus: failed to get the cached delegates file: %v, cannot properly delete", err)
            return nil
        }
    } else {
        defer os.Remove(path) 
        if err := json.Unmarshal(netconfBytes, &in.Delegates); err != nil {
            return cmdErr(k8sArgs, "failed to load netconf: %v", err)
        }
        // check plugins field and enable ConfListPlugin if there is
        for _, v := range in.Delegates {
            if len(v.ConfList.Plugins) != 0 {
                v.ConfListPlugin = true
            }
        }
        // First delegate is always the master plugin
        in.Delegates[0].MasterPlugin = true
    }

    // set CNIVersion in delegate CNI config if there is no CNIVersion and multus conf have CNIVersion.
    for _, v := range in.Delegates { // iterate over the delegates
        if v.ConfListPlugin == true && v.ConfList.CNIVersion == "" && in.CNIVersion != "" {
            v.ConfList.CNIVersion = in.CNIVersion
            v.Bytes, err = json.Marshal(v.ConfList)
            if err != nil {
                // error happen but continue to delete
                logging.Errorf("Multus: failed to marshal delegate %q config: %v", v.Name, err)
            }
        }
    }
    // walk the Delegates array and tear each network down
    return delPlugins(exec, pod, args, k8sArgs, in.Delegates, len(in.Delegates)-1, in.RuntimeConfig, in)
}
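
Note the last argument passed to delPlugins: len(in.Delegates)-1. Teardown walks the Delegates array from the last index back to the first and keeps going even when one delegate's DEL fails, because DEL must be idempotent. A simplified sketch of that pattern (my own code, not multus', using the containernetworking/cni invoke package; names and configs are illustrative):

package main

import (
	"context"
	"log"

	"github.com/containernetworking/cni/pkg/invoke"
)

// teardownDelegates is a simplified stand-in for multus' delPlugins:
// pluginTypes[i] names the delegate binary, netconfs[i] is its raw config.
func teardownDelegates(pluginTypes []string, netconfs [][]byte, lastIdx int) {
	ctx := context.TODO()
	for idx := lastIdx; idx >= 0; idx-- {
		// DEL must be idempotent: log the failure and keep tearing down.
		if err := invoke.DelegateDel(ctx, pluginTypes[idx], netconfs[idx], nil); err != nil {
			log.Printf("delegate %d (%s) DEL failed: %v", idx, pluginTypes[idx], err)
		}
	}
}

func main() {
	// Hypothetical two-delegate teardown, mirroring the kube-ovn + ipvlan example below.
	pluginTypes := []string{"kube-ovn", "ipvlan"}
	netconfs := [][]byte{
		[]byte(`{"cniVersion":"0.3.1","name":"kube-ovn","type":"kube-ovn"}`),
		[]byte(`{"cniVersion":"0.3.1","name":"ipvlan-network","type":"ipvlan","master":"bond0"}`),
	}
	teardownDelegates(pluginTypes, netconfs, len(netconfs)-1)
}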

The cmdDel logic:

1. kubelet invokes the multus-cni plugin to tear down the container's network environment.
2. cmdDel parses the input and builds a NetConf object.
3. Build the client and fetch the pod details.
4. Load the delegates file cached on disk (if it cannot be found, fall back to loading 00-multus.conf and the NetworkAttachmentDefinition objects from the pod annotations).
5. Iterate over the delegates array and run each CNI plugin's cmdDel.
6. Return the result.

An ipvlan Example

After deploying multus-cni:

[root@zhounanjun-test01 ~]# cat /etc/cni/net.d/00-multus.conf 
{ "cniVersion": "0.3.1", "name": "multus-cni-network", "type": "multus", "capabilities": {"portMappings": true}, "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig", "delegates": [ { "name":"kube-ovn", "cniVersion":"0.3.1", "plugins":[ { "type":"kube-ovn", "server_socket":"/run/openvswitch/kube-ovn-daemon.sock" }, { "type":"portmap", "capabilities":{ "portMappings":true } } ] } ] }

Formatted, the JSON reads:

{
    "cniVersion":"0.3.1",
    "name":"multus-cni-network",
    "type":"multus",
    "capabilities":{
        "portMappings":true
    },
    "kubeconfig":"/etc/cni/net.d/multus.d/multus.kubeconfig",
    "delegates":[
        {
            "name":"kube-ovn",
            "cniVersion":"0.3.1",
            "plugins":[
                {
                    "type":"kube-ovn",
                    "server_socket":"/run/openvswitch/kube-ovn-daemon.sock"
                },
                {
                    "type":"portmap",
                    "capabilities":{
                        "portMappings":true
                    }
                }
            ]
        }
    ]
}

Focus on delegates: the delegate plugin multus invokes here is kube-ovn, because the cluster's default network is kube-ovn.
Now create a NetworkAttachmentDefinition object that uses ipvlan as the main plugin and host-local as the IPAM plugin, with subnet CIDR 192.168.133.0/24. It also chains in the sbr plugin. We are giving the pod two NICs, and both may sit on the same subnet; egress traffic would then not know which NIC to leave from, and packets that enter through NIC 1 but leave through NIC 2 get dropped. The sbr (source-based routing) plugin solves this: traffic leaves through the same NIC it arrived on:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ipvlan-net2
spec:
  config: '{
  "cniVersion": "0.3.1",
  "name": "ipvlan-network",
  "plugins": [
        {
            "type": "ipvlan",
            "master": "bond0",
            "ipam": {
                "type": "host-local",
                "subnet": "192.168.133.0/24",
                "rangeStart": "192.168.133.190",
                "rangeEnd": "192.168.133.250"
            }
        },
        {
            "type": "sbr"
        }
   ]
}'

Create a test pod:

apiVersion: v1
kind: Pod
metadata:
  name: ipvlanpod22
  labels:
    environment: production
    app: MyApp
  annotations:
    k8s.v1.cni.cncf.io/networks: ipvlan-net2 # the NetworkAttachmentDefinition (NAD) to use
spec:
  containers:
  - name: appcntr1
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - tail
    - -f
    - /dev/null

Check that the pod is running:

[root@zhounanjun-test01 ipvlan-tests]# kubectl get po 
NAME                                            READY   STATUS             RESTARTS   AGE
ipvlanpod22                                     1/1     Running            0          10s
[root@zhounanjun-test01 ~]# kubectl exec -ti ipvlanpod22 -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: net1@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
    link/ether 14:18:77:46:1a:5a brd ff:ff:ff:ff:ff:ff
    inet 192.168.133.190/24 brd 192.168.133.255 scope global net1
       valid_lft forever preferred_lft forever
    inet6 fe80::1418:7700:146:1a5a/64 scope link 
       valid_lft forever preferred_lft forever
4399: eth0@if4400: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue
    link/ether 00:00:00:88:33:40 brd ff:ff:ff:ff:ff:ff
    inet 10.222.90.214/16 brd 10.222.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd00:10:16::108/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::200:ff:fe88:3340/64 scope link 
       valid_lft forever preferred_lft forever
[root@zhounanjun-test01 ipvlan-tests]# docker ps | grep ipvlan
c4aa9fe87ed3   1a80408de790                                    "tail -f /dev/null"      About a minute ago   Up About a minute             k8s_appcntr1_ipvlapod22_default_d0d41d2f-1240-422d-ba48-459c8b718b98_0
e9532b2b15d4   cis-hub-huabei-3.cmecloud.cn/ecloud/pause:3.2   "/pause"                 About a minute ago   Up About a minute             k8s_POD_ipvlanpod22_default_d0d41d2f-1240-422d-ba48-459c8b718b98_0

ipvlanpod22 is Running. Entering the pod and running the ip a command shows three interfaces: lo, net1, and eth0. lo is the local loopback device, net1 is the ipvlan interface, and eth0 is the cluster's default interface.

To locate the delegates array file that the CmdAdd function cached locally, search the CNIDir directory for this pod's entry:

[root@zhounanjun-test01 ipvlan-tests]# cd /var/lib/cni/
[root@zhounanjun-test01 cni]# ls
cache  multus  networks
[root@zhounanjun-test01 cni]# cd multus/
[root@zhounanjun-test01 multus]# pwd
/var/lib/cni/multus
[root@zhounanjun-test01 multus]# ls
002e834da428059256b942909c80b73abbb4b575e1e300fa02e8e4e61306f237  94a6fcf24bbbea87131fb16cb99d6cd782e811f49d7f6ca2af0c45b9477ce479
09e7407d75f796e83516def4416f0c85545ebbdafb953b2e182ed099562e811d  a08e23b9bc6339e50ac71a4262e4962191ad636f0628c3bcfe2dd61820cf1511
0a023efc856995fa806539f582578680323753c290e61f61abf891bf24cce09e  bd8a986b9c7e270cd49db0f88ca0cad6447ba6685f24fd01ec47d8769af79e0d
10e56ada914f234e32c03e0f07a634f9e24ebc7ffb0a4d90eaaa28645a18cc87  c38c190069cbe35bba9c9153ef624e2ae82180cc8f10d4aa1e09907c89178bb8
2548fb21a62fa48f41b74d75c5d950e3d5690fc50d14b54d46dd8ad76a2c49f1  cfdbbe7c1ad89851b4bbdbf42f3ccec8ef14f849685b5307e7122bcdea495ea9
325cab7d927103e8f6b30dab6046f2e05daed9497f158e33f8fd960912e58d4a  e20b6d7d0bcc635c93820fce418ed628c29c45765d8f949440fde419273b924a
362134fe2218518006a754af6d0509ba76e8095ad43818a67755b51cbd3ac596  e9532b2b15d4d52a14db5326d87348211ba943bc1f61e92d94f04ee167efae72
45ee60bd57ba999f4f419a74136f861fd4b0779d6e0eede1c5bfbb67e0fe0d43  ef92c2c63d52f67a9f561f0f4d8a60eb0f5645bf690030535dac324d668804b4
4d01e982e4413e927b0c85de677b04a70973ca60ed91afca0d1b0f6e27995246  f9832765b5565afa934f0f01f074ba68f4ac8ad63769cc32a4ea40d14c718e0d
8e4c186dc21b455943abf96d49cea0a3caef90c941cfda9167475a37990ddbf2  results
[root@zhounanjun-test01 multus]# cat e9532b2b15d4d52a14db5326d87348211ba943bc1f61e92d94f04ee167efae72 
[{"Conf":{"cniVersion":"0.3.1","name":"kube-ovn","ipam":{},"dns":{}},"ConfList":{"cniVersion":"0.3.1","name":"kube-ovn","plugins":[{"type":"kube-ovn","ipam":{},"dns":{}},{"type":"portmap","capabilities":{"portMappings":true},"ipam":{},"dns":{}}]},"Name":"kube-ovn","IsFilterGateway":false,"Bytes":"eyJjbmlWZXJzaW9uIjoiMC4zLjEiLCJuYW1lIjoia3ViZS1vdm4iLCJwbHVnaW5zIjpbeyJzZXJ2ZXJfc29ja2V0IjoiL3J1bi9vcGVudnN3aXRjaC9rdWJlLW92bi1kYWVtb24uc29jayIsInR5cGUiOiJrdWJlLW92biJ9LHsiY2FwYWJpbGl0aWVzIjp7InBvcnRNYXBwaW5ncyI6dHJ1ZX0sInR5cGUiOiJwb3J0bWFwIn1dfQ=="},{"Conf":{"cniVersion":"0.3.1","name":"ipvlan-network","ipam":{},"dns":{}},"ConfList":{"cniVersion":"0.3.1","name":"ipvlan-network","plugins":[{"type":"ipvlan","ipam":{"type":"host-local"},"dns":{}},{"type":"sbr","ipam":{},"dns":{}}]},"Name":"default/ipvlan-net2","IsFilterGateway":false,"Bytes":"eyAiY25pVmVyc2lvbiI6ICIwLjMuMSIsICJuYW1lIjogImlwdmxhbi1uZXR3b3JrIiwgInBsdWdpbnMiOiBbIHsgInR5cGUiOiAiaXB2bGFuIiwgIm1hc3RlciI6ICJib25kMCIsICJpcGFtIjogeyAidHlwZSI6ICJob3N0LWxvY2FsIiwgInN1Ym5ldCI6ICIxOTIuMTY4LjEzMy4wLzI0IiwgInJhbmdlU3RhcnQiOiAiMTkyLjE2OC4xMzMuMTkwIiwgInJhbmdlRW5kIjogIjE5Mi4xNjguMTMzLjI1MCIgfSB9LCB7ICJ0eXBlIjogInNiciIgfSBdIH0="}]

Formatted, the cached JSON reads:

[
    {
        "Conf":{
            "cniVersion":"0.3.1",
            "name":"kube-ovn",
            "ipam":{

            },
            "dns":{

            }
        },
        "ConfList":{
            "cniVersion":"0.3.1",
            "name":"kube-ovn",
            "plugins":[
                {
                    "type":"kube-ovn",
                    "ipam":{

                    },
                    "dns":{

                    }
                },
                {
                    "type":"portmap",
                    "capabilities":{
                        "portMappings":true
                    },
                    "ipam":{

                    },
                    "dns":{

                    }
                }
            ]
        },
        "Name":"kube-ovn",
        "IsFilterGateway":false,
        "Bytes":"eyJjbmlWZXJzaW9uIjoiMC4zLjEiLCJuYW1lIjoia3ViZS1vdm4iLCJwbHVnaW5zIjpbeyJzZXJ2ZXJfc29ja2V0IjoiL3J1bi9vcGVudnN3aXRjaC9rdWJlLW92bi1kYWVtb24uc29jayIsInR5cGUiOiJrdWJlLW92biJ9LHsiY2FwYWJpbGl0aWVzIjp7InBvcnRNYXBwaW5ncyI6dHJ1ZX0sInR5cGUiOiJwb3J0bWFwIn1dfQ=="
    },
    {
        "Conf":{
            "cniVersion":"0.3.1",
            "name":"ipvlan-network",
            "ipam":{

            },
            "dns":{

            }
        },
        "ConfList":{
            "cniVersion":"0.3.1",
            "name":"ipvlan-network",
            "plugins":[
                {
                    "type":"ipvlan",
                    "ipam":{
                        "type":"host-local"
                    },
                    "dns":{

                    }
                },
                {
                    "type":"sbr",
                    "ipam":{

                    },
                    "dns":{

                    }
                }
            ]
        },
        "Name":"default/ipvlan-net2",
        "IsFilterGateway":false,
        "Bytes":"eyAiY25pVmVyc2lvbiI6ICIwLjMuMSIsICJuYW1lIjogImlwdmxhbi1uZXR3b3JrIiwgInBsdWdpbnMiOiBbIHsgInR5cGUiOiAiaXB2bGFuIiwgIm1hc3RlciI6ICJib25kMCIsICJpcGFtIjogeyAidHlwZSI6ICJob3N0LWxvY2FsIiwgInN1Ym5ldCI6ICIxOTIuMTY4LjEzMy4wLzI0IiwgInJhbmdlU3RhcnQiOiAiMTkyLjE2OC4xMzMuMTkwIiwgInJhbmdlRW5kIjogIjE5Mi4xNjguMTMzLjI1MCIgfSB9LCB7ICJ0eXBlIjogInNiciIgfSBdIH0="
    }
]
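
The Bytes field is the base64-encoded raw CNI config that multus hands to the delegate on stdin. Decoding the second delegate's Bytes (a quick sketch) recovers exactly the config from the ipvlan-net2 NetworkAttachmentDefinition:

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// The "Bytes" field of the second delegate, copied from the cached file above.
	b64 := "eyAiY25pVmVyc2lvbiI6ICIwLjMuMSIsICJuYW1lIjogImlwdmxhbi1uZXR3b3JrIiwgInBsdWdpbnMiOiBbIHsgInR5cGUiOiAiaXB2bGFuIiwgIm1hc3RlciI6ICJib25kMCIsICJpcGFtIjogeyAidHlwZSI6ICJob3N0LWxvY2FsIiwgInN1Ym5ldCI6ICIxOTIuMTY4LjEzMy4wLzI0IiwgInJhbmdlU3RhcnQiOiAiMTkyLjE2OC4xMzMuMTkwIiwgInJhbmdlRW5kIjogIjE5Mi4xNjguMTMzLjI1MCIgfSB9LCB7ICJ0eXBlIjogInNiciIgfSBdIH0="
	raw, err := base64.StdEncoding.DecodeString(b64)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw)) // prints the ipvlan-network CNI config JSON
}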

As described in the cmdAdd analysis, the container's delegates array is saved to a local file. Here it contains two networks: the first is kube-ovn, the second is ipvlan-network. This matches the configuration we set up above.

Summary

This post walked through the multus-cni source and verified the analysis live with an ipvlan example, introducing several key files along the way. Implementing a CNI plugin means implementing the cmdAdd, cmdDel, and cmdCheck methods, then having cmdAdd use existing main and IPAM plugins as needed; of course, you can also write your own main and IPAM plugins rather than reuse existing ones. Strictly speaking, although multus-cni satisfies the Kubernetes container network interface specification, it is not a real CNI implementation: in the end it invokes the CNI plugins listed in its delegates array, so it is better described as a manager of CNI plugins. To study a real CNI plugin, a project like calico will teach you much more.

