55_Install Application in Application Server (Private Network) and Validate the connectivity & Configuring Load balancer - Nirvan-Pandey/OCI_DOC GitHub Wiki

55_1: Overview

In this lab, we will install and configure an HTTP web server (HTTPD) on an application server located in a private network within Oracle Cloud Infrastructure (OCI). Since the server resides in a private subnet, it cannot be accessed directly from the internet. To enable external access, we will configure a private Load Balancer that will route traffic to the application server securely.

55_2: Architecture

55_3: Key Objectives

  • Provision an Application Server in a Private Subnet

  • Install and Configure HTTPD on the Application Server

  • Set Up Security Rules to Allow Internal Communication

  • Create and Configure a Private Load Balancer

  • Validate Connectivity and Test Access via Load Balancer

By the end of this lab, we will have a working setup where external users can access the web application through the Load Balancer, ensuring security and controlled traffic flow in the private network.

55_4: Install HTTPD on the Application Server (Private Network)

  1. Log in to the Bastion Server (public)

image image

  2. Use SSH to access the Application Server (Private Network)
ssh -i /root/keys/privkey.pem [email protected]

image

  3. Switch to the root user.
sudo su - root

image

  4. Check the status of the httpd application; if it is not installed, install it.
yum install httpd
systemctl status httpd

image

Start the httpd application.

systemctl start httpd

image
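Before involving the Load Balancer, it helps to confirm that httpd answers locally on the server itself. A minimal sketch, assuming systemd and curl are available (the `status` variable is just a label introduced for this check):

```shell
# Make httpd survive reboots (safe to re-run; silently skipped if systemd is absent).
sudo systemctl enable --now httpd 2>/dev/null || true

# Ask the server for its own page before testing through the Load Balancer.
if curl -fsS --connect-timeout 2 --max-time 5 http://localhost/ >/dev/null 2>&1; then
  status="up"
else
  status="down"
fi
echo "local httpd check: ${status}"
```

If the local check reports "down", fix httpd (or the host firewall) before debugging the Load Balancer, since no backend rule can route to a service that is not listening.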

  5. Navigate to the web root directory
cd /var/www/html/

image

  6. Create and edit an index.html file
vi index.html

Add the following content inside the file:

<html>
<head>
    <title>Application Server</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            text-align: center;
            margin-top: 50px;
        }
        h1 {
            color: #2c3e50;
        }
        p {
            font-size: 18px;
            color: #34495e;
        }
    </style>
</head>
<body>
    <h1>Welcome to the Application Server!</h1>
    <p><strong>VCN:</strong> NP_Test_VCN (Spoke VCN)</p>
    <p><strong>Private IP:</strong> 10.0.1.235</p>
    <p>Server is up and running. Enjoy secure access!</p>
</body>
</html>

image

  7. Verify the content of the index.html file:
cat index.html

image

55_5: Understanding the Theory

Having installed HTTPD on a server in a private network, you need two key components in OCI to access it from a browser.

1️⃣ Web Subnet (Private Subnet for Application Server)

  • This subnet hosts the Application Server where HTTPD is installed.
  • Security lists or Network Security Groups (NSGs) must allow inbound HTTP (port 80) traffic from the Load Balancer.
  • No public IP is assigned to this server (it remains private).

2️⃣ Private Load Balancer

  • Placed in the same VCN as the Application Server but in a different subnet (Web Subnet).
  • Routes external requests to the backend application server in the private subnet.
  • Health checks ensure backend availability.
  • Requires proper routing and security rules to allow traffic flow.

How It Works

  • User requests access to the website using the Load Balancer’s private IP (accessed from within the VCN).
  • Load Balancer forwards traffic to the Application Server in the private subnet.
  • The Application Server responds with the web page (index.html)

55_6: Create Web Subnet/ Web Layer

  1. Navigate to the Private VCN

Here the private VCN is NP_Test_VCN (Spoke VCN).

image

  2. Create the Web Subnet

image image image

We choose Public Subnet and the default route table while creating this subnet; both are replaced with dedicated ones in the next steps.

image
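The same subnet can also be provisioned from the OCI CLI. This is a sketch, not the exact lab click-path: the OCIDs and the 10.0.2.0/24 CIDR are placeholders to replace with your own values.

```shell
# Sketch: create the Web Subnet in NP_Test_VCN via the OCI CLI (placeholder values).
oci network subnet create \
  --compartment-id <compartment-ocid> \
  --vcn-id <vcn-ocid> \
  --cidr-block 10.0.2.0/24 \
  --display-name Web_Subnet
```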

  3. Create a new Route Table for this Web Subnet

image image image

  4. Create a Security List for this Web Subnet

image

In the ingress rule, we permit traffic from the Internet (via the Internet Gateway) to the Web Subnet by specifying 0.0.0.0/0 as the source CIDR and the TCP protocol on destination port 80.

image image
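The rule above corresponds to a security-list ingress entry of roughly the following shape (a sketch of the JSON structure OCI uses; protocol "6" is TCP):

```json
[
  {
    "protocol": "6",
    "source": "0.0.0.0/0",
    "isStateless": false,
    "tcpOptions": { "destinationPortRange": { "min": 80, "max": 80 } }
  }
]
```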

  5. Replace the default Route Table with the new Web Route Table for this Web Subnet

image image image image

  6. Similarly, replace the default Security List with the new Web Security List for this Web Subnet

image image image image

Remove the default Security List from the Web Subnet.

image

Finally, the Web Subnet has its own Web Route Table and Web Security List.

image

55_7: Create Load Balancer

  1. Navigate to Networking > Load Balancer

image

Create it.

image

  2. Create the Load Balancer

A. Add Details

image image image

B. Choose Backends

image image image image image

C. Configure Listener

image

D. Manage Logging

image image image

E. Review and Create

image image

The Load Balancer is created, but its overall health is Critical.

image

55_8: Resolving health issues of Load Balancer

In a 3-tier architecture, the Web Layer is the top layer, followed by the Application Layer, and the Database Layer at the bottom.

Traffic Flow Explanation:

Ingress to Web Layer:

The end user accesses the application via a web browser, sending requests to the Web Layer. To allow this traffic, an ingress rule should be configured on the Web Layer’s security list or security group, allowing HTTP (port 80) traffic from anywhere (0.0.0.0/0).

image

Egress from Web Layer to Application Layer:

The Web Layer forwards requests to the Application Layer, which resides in a private subnet. To ensure security, an egress rule on the Web Layer should only allow traffic to the Application Layer subnet (10.0.1.0/24) on the necessary ports (e.g., port 8080 for web applications or other relevant application ports).

image

No egress rule is found here, so we need to create one that allows the Web Subnet to reach the application server.

image image
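The egress entry created here has roughly this shape (a sketch of the JSON structure OCI uses; this lab serves HTTP on port 80 to the application subnet 10.0.1.0/24):

```json
[
  {
    "protocol": "6",
    "destination": "10.0.1.0/24",
    "isStateless": false,
    "tcpOptions": { "destinationPortRange": { "min": 80, "max": 80 } }
  }
]
```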

After doing this, we are still not able to open the URL.

55_9: Necessity of an Internet Gateway

We have successfully installed the HTTPD application on our Application Layer (private subnet) and configured the Web Subnet and Web Security List. Additionally, we have set up a Load Balancer to route traffic to the private application server.

Current Traffic Flow:

1️⃣ The end-user sends a request to the Load Balancer.

2️⃣ The Load Balancer attempts to forward the request to the Application Server (Private Subnet).

3️⃣ The Application Server is expected to serve the index.html page through HTTPD.

However, we are still unable to access the web application.

Why is it Stuck?

The Web Layer (Load Balancer and Web Subnet) does not have a route to the Internet. The Application Server is in a private subnet, so it cannot reach external repositories for software updates or respond to external requests directly.

Solution: Create an Internet Gateway. To resolve this, we need to:

✅ Create an Internet Gateway in the Virtual Cloud Network (VCN).

✅ Attach the Internet Gateway to the Web Subnet via the Route Table.

✅ Ensure egress rules allow outbound traffic from the Web Layer to the Internet.

Once the Internet Gateway is in place, the Web Layer can communicate with external users, allowing proper request flow and enabling access to the web application through the Load Balancer.
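As a sketch, the same two steps can be done via the OCI CLI. The OCIDs are placeholders; the `networkEntityId` in the route rule must be the OCID of the Internet Gateway created in the first step.

```shell
# 1. Create the Internet Gateway in the VCN (placeholder OCIDs).
oci network internet-gateway create \
  --compartment-id <compartment-ocid> \
  --vcn-id <vcn-ocid> \
  --is-enabled true \
  --display-name Web_IG

# 2. Point the Web Route Table's default route (0.0.0.0/0) at the gateway.
oci network route-table update \
  --rt-id <web-route-table-ocid> \
  --route-rules '[{"destination": "0.0.0.0/0", "networkEntityId": "<internet-gateway-ocid>"}]'
```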

Creating Internet Gateway

image image image

Attaching the IG to Web Route Table

image image image

Trying again by pasting the Load Balancer's public IP into the browser:

image

image

55_10: Horizontal Scaling

A web application running on one server struggles with traffic spikes. Instead of upgrading it, additional servers are added behind a load balancer to evenly distribute the load. Horizontal scaling is used to increase system capacity and performance by adding more machines (servers) to a system instead of upgrading a single machine. It distributes traffic across multiple servers, reducing the load on any single machine.

To further enhance scalability and reliability, I will install one more application server in another private subnet and attach it to the backend set of the Load Balancer. This setup will ensure that traffic is evenly distributed between Application Server and Application Server 1, improving fault tolerance, availability, and overall system performance.

  1. Install another httpd application on Application Server 1.

image

image

  2. Edit and verify the index.html file
<html>
<head>
    <title>Application Server</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            text-align: center;
            margin-top: 50px;
        }
        h1 {
            color: #2c3e50;
        }
        p {
            font-size: 18px;
            color: #34495e;
        }
    </style>
</head>
<body>
    <h1>Welcome to the Application Server 1!</h1>
    <p><strong>VCN:</strong> NP_Test_VCN (Spoke VCN)</p>
    <p><strong>Private IP:</strong> 10.0.1.27</p>
    <p>Server is up and running. Enjoy secure access!</p>
</body>
</html>

image
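When scaling out, each backend needs a page that identifies it so you can see which server answered. A small, hypothetical helper script for that; it writes to a temp file here instead of /var/www/html/index.html, and SERVER_NAME/PRIVATE_IP hold the per-server values (Application Server 1 and 10.0.1.27 in this lab):

```shell
# Per-backend landing-page generator (sketch). On a real backend, OUT
# would be /var/www/html/index.html.
SERVER_NAME="Application Server 1"
PRIVATE_IP="10.0.1.27"
OUT="$(mktemp)"

cat > "$OUT" <<EOF
<html>
<head><title>${SERVER_NAME}</title></head>
<body>
  <h1>Welcome to ${SERVER_NAME}!</h1>
  <p><strong>Private IP:</strong> ${PRIVATE_IP}</p>
  <p>Server is up and running. Enjoy secure access!</p>
</body>
</html>
EOF
echo "wrote ${OUT}"
```

Running the same script on each new backend with its own name and IP keeps the pages consistent while still making the servers distinguishable behind the Load Balancer.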

  3. Add Application Server 1 to the backend set of the existing Load Balancer

3_A: Navigating to Load Balancer

image

3_B: Go to Backend Set

image

3_C: Navigate to Backends.

image

3_D: Add backends

image

3_E: Add the New Instance Application Server 1 image image image

3_F: Backend is updated with New Application Server1 but health is critical.

image

  4. Resolve the backend set's critical health.

4_A: Check the status of the httpd application. It is running.

systemctl status httpd

image

4_B: If the application is dead (inactive), run the command below.

systemctl start httpd

4_C: Check the status of the firewall. It is active.

systemctl status firewalld

image

4_D: Check the open firewall ports. No port is added yet.

firewall-cmd --list-ports

image

4_E: Add port 80 to the list and confirm the ports.

firewall-cmd --permanent --zone=public --add-port=80/tcp
systemctl restart firewalld
firewall-cmd --list-ports

image

4_F: The Load Balancer backend's health is OK now.

image

  5. Validate via the browser URL.

image

Application Server

image

Application Server 1

image
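To confirm that both backends actually receive traffic, one can poll the Load Balancer a few times and count successful responses; with round-robin distribution, refreshes should alternate between the two pages. A sketch, where LB_IP is a placeholder (a TEST-NET address here) to replace with your Load Balancer's IP:

```shell
# Poll the Load Balancer a few times; each page contains the string
# "Application Server", so any successful hit is counted.
LB_IP="203.0.113.10"   # placeholder, replace with your Load Balancer IP
hits=0
for i in 1 2 3 4; do
  body="$(curl -fsS --connect-timeout 1 --max-time 2 "http://${LB_IP}/" 2>/dev/null || true)"
  case "$body" in
    *"Application Server"*) hits=$((hits + 1)) ;;
  esac
done
echo "successful responses: ${hits}/4"
```

With both backends healthy, all four requests should succeed; if only one backend is healthy, the requests still succeed but every response shows the same server name.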

55_11: Conclusion

In this lab, we successfully installed an application on an Application Server within a Private Network and validated its accessibility from the internet using a Load Balancer. Additionally, we attached another instance to the load balancer, enabling traffic distribution across multiple servers. This setup enhances scalability, fault tolerance, and high availability, ensuring that incoming requests are efficiently handled while maintaining system reliability and performance.
