Deploy IPCL FATE with KubeFATE

Intel Paillier Cryptosystem Library is an open-source library that accelerates the Paillier cryptosystem, a partially homomorphic encryption (HE) scheme, by utilizing Intel® IPP-Crypto technologies on Intel CPUs that support the AVX512IFMA instruction set. The library is written in modern standard C++ and provides the essential API for the Paillier scheme. Intel Paillier Cryptosystem Library - Python is a Python extension package intended for Python-based privacy-preserving machine learning solutions that utilize the partial HE scheme for increased data and model protection.

FATE can use the Intel Paillier Cryptosystem Library to accelerate the encryption and decryption computations of its Paillier scheme.

1. Prerequisites

  • A CPU that supports the AVX512IFMA instruction set
  • Docker 18+
  • A Linux environment for docker build
  • Kubernetes 1.19+
  • An ingress controller (e.g. NGINX Ingress Controller) deployed in the cluster

1.1 Check the environment

Check AVX512IFMA

$ cat /proc/cpuinfo | grep avx512ifma
flags  : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
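If you prefer a clear verdict over scanning the flags line, the same check can be wrapped in a one-liner (plain grep, no extra tooling):

$ if grep -q avx512ifma /proc/cpuinfo; then echo "AVX512IFMA: supported"; else echo "AVX512IFMA: NOT supported"; fi
AVX512IFMA: supported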

Check Docker

$ docker version
Client: Docker Engine - Community
 Version:           20.10.17
 API version:       1.41
 Go version:        go1.17.11
 Git commit:        100c701
 Built:             Mon Jun  6 23:02:57 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.17
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.11
  Git commit:       a89b842
  Built:            Mon Jun  6 23:01:03 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.6
  GitCommit:        10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
 runc:
  Version:          1.1.2
  GitCommit:        v1.1.2-0-ga916309
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
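The 18+ requirement concerns the server engine; to print only that version, Docker's built-in --format flag can be used:

$ docker version --format '{{.Server.Version}}'
20.10.17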

Check the Linux system

$ uname -a
Linux sdp 5.4.0-122-generic #138-Ubuntu SMP Wed Jun 22 15:00:31 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Check Kubernetes

$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:30:46Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:23:26Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
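To pull out just the server version for the Kubernetes 1.19+ requirement, the JSON output can be filtered (this assumes jq is installed):

$ kubectl version --output=json | jq -r '.serverVersion.gitVersion'
v1.24.3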

Check Ingress-controller

$ POD_NAMESPACE=ingress-nginx
$ POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name)
$ kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.2.0
  Build:         a2514768cd282c41f39ab06bda17efefc4bd233a
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.10

-------------------------------------------------------------------------------
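If the exec above fails, first confirm that a controller pod exists and note its namespace, using the same label selector:

$ kubectl get pods -A -l app.kubernetes.io/name=ingress-nginx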

2. Build IPCL FATE images (Optional)

2.1 Get code

  • FATE
# Please select a FATE branch that supports IPCL
git clone -b <ipcl_fate_branch> https://github.com/FederatedAI/FATE.git

You can get the code for IPCL FATE from the PR "Add IPCL as Paillier Scheme to FATE v1.9.0" (#4183).

  • IPCL
git clone -b v1.1.3 https://github.com/intel/pailliercryptolib_python.git
  • FATE-Builder
git clone https://github.com/FederatedAI/FATE-Builder.git
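Before building, a quick sanity check of the checked-out refs can save a long rebuild (a sketch, assuming the three repositories were cloned side by side into the current directory):

git -C FATE branch --show-current                 # the IPCL-enabled branch selected above
git -C pailliercryptolib_python describe --tags   # should print v1.1.3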

2.2 Build

Build the IPCL-enabled FATE images with FATE-Builder. Note that FATE image builds must be run as the root user.

1. Set configuration information

export FATE_DIR=/root/FATE
export IPCL_PKG_DIR=/root/pailliercryptolib_python
export Build_IPCL=1

2. Run build command

bash docker-build/build.sh -t 1.9.0-release all
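For reference, steps 1 and 2 can be combined into one invocation (a sketch; it assumes the repositories were cloned under /root and that the command is run from the FATE-Builder directory as root):

FATE_DIR=/root/FATE IPCL_PKG_DIR=/root/pailliercryptolib_python Build_IPCL=1 \
  bash docker-build/build.sh -t 1.9.0-release all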

3. Check the built images

$ docker images
REPOSITORY                                       TAG                       IMAGE ID       CREATED          SIZE
federatedai/spark-worker-ipcl                    1.9.0-release             f69a09a0b846   7 minutes ago    3.61GB
federatedai/spark-base-ipcl                      1.9.0-release             0f75a7e0d91c   7 minutes ago    3.61GB
federatedai/fateflow-spark-ipcl                  1.9.0-release             70e51a72eaf1   9 minutes ago    4.4GB
federatedai/eggroll-ipcl                         1.9.0-release             ded62a6444ac   12 minutes ago   3.19GB
federatedai/fateflow-ipcl                        1.9.0-release             7bce8f2da3fd   13 minutes ago   3.11GB
federatedai/base-image-ipcl                      1.9.0-release             bf0f5fcae429   14 minutes ago   2.73GB
federatedai/fateflow-spark-nn                    1.9.0-release             dd0b70e75f52   16 minutes ago   4.46GB
federatedai/fateflow-nn                          1.9.0-release             dd0b70e75f52   16 minutes ago   4.46GB
federatedai/spark-worker                         1.9.0-release             350b33d683a0   18 minutes ago   2.41GB
federatedai/spark-master                         1.9.0-release             e4f973b61c70   18 minutes ago   2.41GB
federatedai/spark-base                           1.9.0-release             7b932b1248ea   18 minutes ago   2.41GB
federatedai/fateflow-spark                       1.9.0-release             a5eca0fd7df3   19 minutes ago   3.2GB
federatedai/eggroll                              1.9.0-release             a14f708b518f   24 minutes ago   1.99GB
federatedai/fateboard                            1.9.0-release             c11804c9e2ae   25 minutes ago   193MB
federatedai/fateflow                             1.9.0-release             2a60e4002595   25 minutes ago   1.92GB
federatedai/fate-test                            1.9.0-release             fa846e2691ae   5 days ago       3.91GB
federatedai/client                               1.9.0-release             8a9ad2d2c5a6   5 days ago       1.54GB
federatedai/nginx                                1.9.0-release             9b2799563aa3   5 days ago       528MB
federatedai/base-image                           1.9.0-release             2fd8d8fc9820   2 weeks ago      1.53GB
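To list only the IPCL variants among them:

$ docker images --format '{{.Repository}}:{{.Tag}}' | grep -e '-ipcl:'
federatedai/spark-worker-ipcl:1.9.0-release
federatedai/spark-base-ipcl:1.9.0-release
federatedai/fateflow-spark-ipcl:1.9.0-release
federatedai/eggroll-ipcl:1.9.0-release
federatedai/fateflow-ipcl:1.9.0-release
federatedai/base-image-ipcl:1.9.0-release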

3. Deploy IPCL FATE

KubeFATE can deploy FATE clusters with IPCL support through either docker-compose or Kubernetes; this page demonstrates the Kubernetes method.

Deploying FATE on Kubernetes requires deploying the KubeFATE service first.

3.1 Deploy KubeFATE

KubeFATE has supported IPCL FATE since v1.9.0, so make sure that ${fate_version} below is v1.9.0 or later.

curl -LO https://github.com/FederatedAI/KubeFATE/releases/download/${fate_version}/kubefate-k8s-${fate_version}.tar.gz && tar -xzf ./kubefate-k8s-${fate_version}.tar.gz
cd kubefate
chmod +x ./kubefate && sudo mv ./kubefate /usr/bin
kubectl apply -f rbac-config.yaml
kubectl apply -f kubefate.yaml

sudo -- sh -c "echo \"192.168.100.123 example.com\"  >> /etc/hosts"

192.168.100.123 is a placeholder IP; replace it with the address of your own nginx-ingress-controller, which can be obtained as follows:

kubectl get ingress -n kube-fate
NAME       CLASS   HOSTS         ADDRESS           PORTS   AGE
kubefate   nginx   example.com   192.168.100.123   80      17d

If the nginx-ingress-controller is exposed via a NodePort Service, you also need to edit config.yaml and append the NodePort to the service URL, e.g. serviceurl: example.com:32422.
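The NodePort itself can be looked up on the controller Service; the example below assumes a standard ingress-nginx installation (Service ingress-nginx-controller in namespace ingress-nginx):

kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'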

3.2 Verify that KubeFATE was deployed successfully

~/kubefate$ kubefate version
* kubefate commandLine version=v1.4.4
* kubefate service version=v1.4.4
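If the CLI cannot reach the service, check that the pods created by kubefate.yaml are actually running:

$ kubectl get pods -n kube-fate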


3.3 Change the settings

Modify the cluster.yaml file and change device: CPU to device: IPCL.

A docker-compose deployment can be switched to IPCL in the same way by changing the corresponding option in its configuration.
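The Kubernetes-side edit can also be made in place (a minimal sketch; it assumes device: CPU sits at the top level of cluster.yaml, as in the shipped example files):

sed -i 's/^device: CPU/device: IPCL/' cluster.yaml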

3.4 Deploy IPCL FATE

~/kubefate$ kubefate cluster install -f cluster.yaml

Check for successful deployment

~/kubefate$ kubefate cluster ls
UUID                                 NAME      NAMESPACE  REVISION  STATUS  CHART  ChartVERSION  AGE
c0d45d22-d2c8-471e-bbde-c82fafa1d3fa fate-9999 fate-9999  0         Running fate   v1.9.0        5m       
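Details of the deployed cluster can be inspected with the UUID from the listing above:

~/kubefate$ kubefate cluster describe c0d45d22-d2c8-471e-bbde-c82fafa1d3fa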

4. Run an IPCL FATE test task (Optional)

Prepare the conf and dsl files for a hetero_sbt job, and set the encryption method in the job conf to ipcl, as shown below:

                "encrypt_param": {
                    "method": "ipcl"
                },

Submit the FATE job

$ kubectl exec -it svc/notebook -n fate-9999 -- bash
(app-root) bash-4.2# flow job submit -c ipcl/hetero_sbt/hetero_sbt_job_conf.json -d ipcl/hetero_sbt/hetero_sbt_job_dsl.json
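The submit command prints a job id; progress can then be followed from the same shell (the <job_id> placeholder comes from that output):

(app-root) bash-4.2# flow job query -j <job_id>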