# Week 15
1. Open Cloud Shell.
2. Create two files, Dockerfile and index.html, using the touch command:
`touch Dockerfile index.html`
3. Dockerfile
```Dockerfile
FROM ubuntu/apache2:latest
ADD index.html /var/www/html
```
4. index.html
```html
hi test
```
5. `docker build -t mywww:1.0 .` -> `docker images` -> `docker run -d -p 8080:80 mywww:1.0`
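Before opening Web Preview, it can help to confirm the container is actually serving (a quick check from the same Cloud Shell session, assuming the `docker run` above succeeded):
```bash
# the mywww:1.0 container should be listed with 0.0.0.0:8080->80/tcp
docker ps
# Apache should return the index.html content ("hi test")
curl http://localhost:8080
```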
6. Click on Web Preview. Make sure the port numbers are the same (8080, the port published in step 5).
7. Go to Artifact Registry -> Repositories -> Create new -> Name it "mydocker" -> Format "Docker" -> Mode "Standard" -> Location type "Region (asia-east1)" -> Artifact Analysis "Enabled" -> Create
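Equivalently, the same repository can be created from Cloud Shell instead of the console (a sketch using the gcloud CLI with the same name, format, and region as above):
```bash
# create a Docker-format repository named mydocker in asia-east1
gcloud artifacts repositories create mydocker \
    --repository-format=docker \
    --location=asia-east1
```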
8. Back to cloud shell -> `docker build -t asia-east1-docker.pkg.dev/YOUR_PROJECT_ID/mydocker/mywww:1.0 .`
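Before pushing in the next step, Docker needs credentials for the Artifact Registry host. Cloud Shell is usually already set up for this, but if the push is rejected with an authentication error, registering the credential helper should fix it:
```bash
# let gcloud supply credentials for the asia-east1 Artifact Registry host
gcloud auth configure-docker asia-east1-docker.pkg.dev
```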
9. Use the push command to upload the image to the cloud: `docker push asia-east1-docker.pkg.dev/YOUR_PROJECT_ID/mydocker/mywww:1.0` -> In Artifact Registry, click the three-dot menu next to the pushed image and choose "Deploy to Cloud Run" -> Change the region to asia-east1 -> Authentication "Allow unauthenticated invocations" -> Create. Make sure the container port is 80.
10. Click the provided URL to see the result.
11. Back to Cloud Shell -> `touch client2.py client.py Dockerfile main.py requirements.txt train_model.py` to create the empty files -> Open Editor
12. train_model.py
```python
import pickle

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn import tree

iris = datasets.load_iris()
x = iris.data
y = iris.target

# labels for iris dataset
labels = {
    0: "setosa",
    1: "versicolor",
    2: "virginica"
}

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=.25)
classifier = tree.DecisionTreeClassifier()
classifier.fit(x_train, y_train)
predictions = classifier.predict(x_test)

# export the model
model_name = 'model.pkl'
print("finished training and dump the model as {0}".format(model_name))
pickle.dump(classifier, open(model_name, 'wb'))
```
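Once `python train_model.py` has been run (step 17), the dumped model can be sanity-checked before wiring it into Flask. A minimal check, assuming model.pkl is in the current directory; note the classifier expects a 2D array (a list of feature vectors):
```bash
# load model.pkl and classify one sample; class 0 corresponds to setosa
python3 -c "import pickle; m = pickle.load(open('model.pkl', 'rb')); print(m.predict([[5.8, 4.0, 1.2, 0.2]]))"
```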
13. requirements.txt
```
scikit-learn
flask
```
14. main.py
```python
import pickle

from flask import Flask, request, jsonify

app = Flask(__name__)

# load the model produced by train_model.py
model = pickle.load(open('model.pkl', 'rb'))
labels = {
    0: "setosa",
    1: "versicolor",
    2: "virginica"
}

@app.route("/", methods=["GET"])
def index():
    """Basic HTML response."""
    body = (
        "<html>"
        "<body>"
        "<h1>Iris prediction API</h1>"
        "</body>"
        "</html>"
    )
    return body

@app.route('/api', methods=['POST'])
def predict():
    # Get the data from the POST request.
    data = request.get_json(force=True)
    # the classifier expects a 2D array, so wrap the single feature vector in a list
    predict = model.predict([data['feature']])
    return jsonify(predict[0].tolist())

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=8080)
```
15. Dockerfile
```Dockerfile
FROM python:3.9
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
EXPOSE 8080
CMD ["python", "main.py"]
```
16. client2.py
```python
import requests

# local test URL; replace with the Cloud Run URL after deployment (step 20)
url = 'http://127.0.0.1:8080/api'
feature = [5.8, 4.0, 1.2, 0.2]
labels = {
    0: "setosa",
    1: "versicolor",
    2: "virginica"
}

# POST the feature vector and map the returned class index to its label
r = requests.post(url, json={'feature': feature})
print(labels[r.json()])
```
17. Back to terminal -> `python train_model.py` -> `python main.py` -> Open another terminal to test the result -> `cd test-iris-docker/` -> `python client.py` -> The output should be setosa
<br>When executing main.py, if you get an error saying the port is already in use by another program, find the running container with `docker ps` and remove it with `docker rm -f 40a`, where `40a` is the ID of the container you want to delete.
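As an alternative to client.py, the running API can also be exercised directly from the second terminal with curl (assuming `python main.py` from step 17 is still listening on port 8080):
```bash
# POST one iris feature vector; the response is the numeric class index (0 = setosa)
curl -X POST -H "Content-Type: application/json" \
     -d '{"feature": [5.8, 4.0, 1.2, 0.2]}' \
     http://127.0.0.1:8080/api
```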
18. Back to the first terminal -> `docker build -t myiris:1.0 .` -> `pip install -r requirements.txt` -> `docker run -d -p 8080:8080 myiris:1.0` -> Back to the second terminal and run `python client.py` -> Output: setosa
19. Go to Artifact Registry -> Repositories -> Create new -> Name it "test-iris" -> Format "Docker" -> Mode "Standard" -> Location type "Region (asia-east1)" -> Artifact Analysis "Enabled" -> Create
20. `docker build -t asia-east1-docker.pkg.dev/mygcp-436602/test-iris/myiris:1.0 .` to build the image again -> `docker push asia-east1-docker.pkg.dev/mygcp-436602/test-iris/myiris:1.0` to push the image to GCP -> In Artifact Registry, click the three-dot menu next to the image and choose "Deploy to Cloud Run" -> Change the region to asia-east1 -> Authentication "Allow unauthenticated invocations" -> Create -> Copy the URL -> `vim client2.py` and replace the url in client2.py with the URL you just copied -> `python client2.py` -> Output: setosa
# Terraform
[Reference](https://devops-with-alex.com/day-4-terraform-install/)
1. Install terraform -> Open Cloud shell
```bash
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
```
2. Create service account -> Give a name "sa-terraform" -> Role "Basic -> Owner" -> Done
3. Click the service account you just created -> Keys -> Add key -> Create new key -> Key type "JSON" -> Create, then upload the downloaded JSON key to the test-terraform directory and name it mySA.json (the filename referenced below).
4. main.tf (put this main.tf in the test1 directory)
provider "google" { credentials = "${file("mySA.json")}" project = "YOUR_PROJECT_ID" region = "asia-east1" }
resource "google_storage_bucket" "quick-start-gcs" { name = "YOUR_BUCKET_NAME" location = "asia-east1" force_destroy = true }
5. `terraform init` -> `terraform fmt` -> `terraform validate` -> `terraform plan` -> `terraform apply` (use `terraform destroy` to remove what was created) -> `vim main.tf`
provider "google" { credentials = "${file("mySA.json")}" project = "YOUR_PROJECT_ID" region = "asia-east1" }
resource "google_storage_bucket" "quick-start-gcs" { name = "YOUR_BUCKET_NAME" location = "asia-east1" force_destroy = true }
resource "google_storage_bucket" "quick-start-gcs2" { name = "YOUR_BUCKET_NAME2" location = "asia-east1" force_destroy = true
6. `terraform validate` -> `sudo terraform plan` -> `sudo terraform apply` -> Enter a value `yes`
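To confirm that both buckets were created, list them from Cloud Shell (gsutil is preinstalled there):
```bash
# YOUR_BUCKET_NAME and YOUR_BUCKET_NAME2 should both appear
gsutil ls
```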
7. `cd test2` -> provider.tf (prepare 3 files: mySA.json, provider.tf, main.tf)
```hcl
##################################################################################
##################################################################################
terraform {
  required_version = ">=1.0"

  required_providers {
    # minimum provider version
    google = {
      source  = "hashicorp/google"
      version = ">= 4.40.0"
    }
  }
}

##################################################################################
##################################################################################
provider "google" {
  credentials = file("mySA.json")
  project     = "mygcp-436602"
}
```
8. main.tf
resource "google_compute_instance" "example" { name = "example-instance" machine_type = "e2-micro" zone = "asia-east1-b"
boot_disk { initialize_params { image = "projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20240726" } }
network_interface { network = "default"
access_config {
// Ephemeral IP
}
} }
9. `sudo terraform init` -> `sudo terraform fmt` -> `sudo terraform validate` -> `sudo terraform plan` -> `sudo terraform apply`
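After the apply finishes, the new VM can be verified from Cloud Shell:
```bash
# example-instance should be listed as RUNNING in zone asia-east1-b
gcloud compute instances list
```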
10. Prepare 3 files: mySA.json, provider.tf, main.tf. Only main.tf is different.
main.tf
resource "google_compute_instance" "example" { name = "example-instance" machine_type = "e2-micro" zone = "asia-east1-b"
boot_disk { initialize_params { image = "projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20240726" } }
network_interface { network = "default"
access_config {
// Ephemeral IP
}
}
provisioner "local-exec" { command = "echo ${google_compute_instance.example.network_interface[0].network_ip} > ./ip_address_local_exec.txt" }
"echo ${google_compute_instance.example.network_interface[0].network_ip} > /tmp/ip_address_remote_exec.txt"
}
11. `sudo terraform init` -> `sudo terraform fmt` -> `sudo terraform validate` -> `sudo terraform plan` -> `sudo terraform apply` -> `cat ip_address_local_exec.txt` -> Go to compute engine to see if the vm is being created with the ip address that you got from running the previous command
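To cross-check the IP written by the local-exec provisioner against what GCP reports, something like the following can be used (a sketch; the field path follows the Compute API naming):
```bash
# print the internal IP of example-instance as recorded by GCP
gcloud compute instances describe example-instance \
    --zone=asia-east1-b \
    --format='get(networkInterfaces[0].networkIP)'
```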
**PS: Remember to delete the resources you created when you no longer need them.**
17/12/2024