Core Azure Services
1. Which Azure service is the most fundamental IaaS (Infrastructure-as-a-Service) offering, providing on-demand virtual machines? a) Azure App Service b) Azure Functions c) Azure Virtual Machines d) Azure Kubernetes Service (AKS)
Answer: c) Azure Virtual Machines Explanation: Azure Virtual Machines (VMs) are the cornerstone of Azure's IaaS offerings. They allow you to provision and manage virtualized servers in the cloud, giving you full control over the operating system and installed software, just like a physical server. App Service and Functions are PaaS and FaaS (serverless) respectively, and AKS is a container orchestration service.
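As a quick illustration, here is a minimal sketch using the azure-mgmt-compute management SDK to enumerate the VMs in a subscription; the subscription ID is a placeholder:

```python
# pip install azure-identity azure-mgmt-compute
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder subscription ID; substitute your own.
compute_client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Full IaaS control: you are responsible for the OS and software on every VM listed here.
for vm in compute_client.virtual_machines.list_all():
    print(vm.name, vm.location, vm.hardware_profile.vm_size)
```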
2. An organization wants to migrate its existing web applications to a PaaS (Platform-as-a-Service) environment that manages the underlying infrastructure, patching, and scaling. Which Azure service is the best fit? a) Azure Virtual Machines b) Azure App Service c) Azure Container Instances d) Azure Batch
Answer: b) Azure App Service Explanation: Azure App Service is a fully managed PaaS for building, deploying, and scaling web apps and APIs. It abstracts away the underlying infrastructure (servers, OS, etc.), allowing developers to focus on their application code. Azure handles patching, scaling, and security.
3. What is the primary purpose of Azure Blob Storage? a) To provide a file share service for legacy applications. b) To store large amounts of unstructured data, such as text or binary data. c) To offer a relational database service. d) To provide a low-latency NoSQL database.
Answer: b) To store large amounts of unstructured data, such as text or binary data. Explanation: Blob (Binary Large Object) Storage is optimized for storing massive amounts of unstructured data, such as images, videos, documents, backups, and log files. It's highly scalable and cost-effective for this purpose.
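For example, a minimal sketch with the azure-storage-blob SDK, assuming a hypothetical storage account, container, and local file:

```python
# pip install azure-identity azure-storage-blob
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Hypothetical account URL and container/blob names.
service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
blob = service.get_blob_client(container="logs", blob="app-2024-01-01.log")

# Upload unstructured binary data; Blob Storage scales to massive volumes of this.
with open("app.log", "rb") as data:
    blob.upload_blob(data, overwrite=True)
```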
4. An application requires a shared file system that can be accessed using the standard SMB (Server Message Block) protocol. Which Azure Storage service should be used? a) Azure Blob Storage b) Azure File Storage c) Azure Table Storage d) Azure Queue Storage
Answer: b) Azure File Storage Explanation: Azure File Storage provides fully managed file shares in the cloud that are accessible via the industry-standard Server Message Block (SMB) protocol. This makes it ideal for "lift-and-shift" migrations of applications that rely on traditional file shares.
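Besides mounting the share over SMB, the same share is reachable through the REST-based SDK. A minimal sketch with the azure-storage-file-share package, assuming a hypothetical connection string and paths:

```python
# pip install azure-storage-file-share
from azure.storage.fileshare import ShareFileClient

# Hypothetical connection string, share, and file path.
file_client = ShareFileClient.from_connection_string(
    conn_str="<storage-connection-string>",
    share_name="legacy-share",
    file_path="reports/q1.csv",
)

# The same share can simultaneously be mounted over SMB by on-premises or in-VM clients.
with open("q1.csv", "rb") as data:
    file_client.upload_file(data)
```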
5. What is the core function of an Azure Virtual Network (VNet)? a) To provide a public-facing IP address for Azure resources. b) To create an isolated network environment for Azure resources to communicate with each other and the internet. c) To distribute traffic across multiple virtual machines. d) To filter network traffic to and from Azure resources.
Answer: b) To create an isolated network environment for Azure resources to communicate with each other and the internet. Explanation: A VNet is a fundamental building block for your private network in Azure. It enables Azure resources like VMs to securely communicate with each other, the internet, and your on-premises networks.
6. Which Azure networking component is used to filter network traffic between Azure resources in a Virtual Network? a) Azure Load Balancer b) Azure Application Gateway c) Network Security Group (NSG) d) Azure Firewall
Answer: c) Network Security Group (NSG) Explanation: NSGs act as a basic, stateful firewall. They contain a list of security rules that allow or deny network traffic to resources connected to Azure VNets. You can associate an NSG with a subnet or a specific network interface.
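As an illustration, a minimal sketch that adds an allow-HTTPS inbound rule with the azure-mgmt-network SDK; the resource names are placeholders, and the rule is passed as a plain dict, which track-2 management clients accept in place of model objects:

```python
# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# NSG rules are evaluated in priority order (lower number wins).
poller = network_client.security_rules.begin_create_or_update(
    resource_group_name="rg-demo",            # hypothetical names throughout
    network_security_group_name="nsg-web",
    security_rule_name="allow-https-in",
    security_rule_parameters={
        "protocol": "Tcp",
        "direction": "Inbound",
        "access": "Allow",
        "priority": 100,
        "source_address_prefix": "*",
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": "443",
    },
)
rule = poller.result()  # long-running operation; wait for completion
```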
7. What is the primary purpose of Azure SQL Database? a) To provide a fully managed relational database-as-a-service (DBaaS). b) To host a NoSQL database for high-performance applications. c) To run a SQL Server instance on a virtual machine. d) To provide a data warehousing solution.
Answer: a) To provide a fully managed relational database-as-a-service (DBaaS). Explanation: Azure SQL Database is a PaaS offering that provides a fully managed, intelligent, and scalable relational database engine. It automates tasks like patching, backups, and monitoring, allowing you to focus on application development.
8. For a globally distributed application requiring a multi-model NoSQL database with low-latency access, which Azure service is the most appropriate choice? a) Azure SQL Database b) Azure Database for MySQL c) Azure Cosmos DB d) Azure Cache for Redis
Answer: c) Azure Cosmos DB Explanation: Azure Cosmos DB is Microsoft's globally distributed, multi-model NoSQL database service. It's designed for applications that need fast, low-latency data access from anywhere in the world, offering multiple consistency levels and APIs (SQL, MongoDB, Cassandra, etc.).
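A minimal sketch with the azure-cosmos SDK, assuming a hypothetical account, database, and container; preferred_locations steers requests to the nearest replica:

```python
# pip install azure-cosmos
from azure.cosmos import CosmosClient

client = CosmosClient(
    url="https://<account>.documents.azure.com:443/",  # hypothetical endpoint
    credential="<primary-key>",
    preferred_locations=["Southeast Asia", "East US"],
)

container = client.get_database_client("shop").get_container_client("orders")

# Writes and point reads via the SQL (Core) API; "pk" is assumed to be
# the container's partition key path.
container.upsert_item({"id": "order-1", "pk": "user-42", "total": 99.5})
item = container.read_item(item="order-1", partition_key="user-42")
```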
9. What is the primary role of Azure Active Directory (Azure AD)? a) To manage virtual machines and other infrastructure. b) To provide identity and access management services for applications and resources. c) To monitor the performance of Azure services. d) To store application secrets and keys.
Answer: b) To provide identity and access management services for applications and resources. Explanation: Azure AD is a cloud-based identity and access management service. It helps your employees sign in and access external resources (like Microsoft 365, the Azure portal, and thousands of other SaaS applications) as well as internal resources (like apps on your corporate network).
10. A developer needs to run a small piece of code in response to an HTTP request without managing the underlying server infrastructure. Which Azure compute service is the most cost-effective and efficient option? a) Azure Virtual Machines b) Azure App Service c) Azure Functions d) Azure Kubernetes Service (AKS)
Answer: c) Azure Functions Explanation: Azure Functions is a serverless compute service (FaaS - Function-as-a-Service). It allows you to run event-triggered code without having to explicitly provision or manage infrastructure. You are billed only for the time your code runs, making it very cost-effective for short-lived, event-driven tasks.
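For example, a minimal HTTP-triggered function using the Python v2 programming model:

```python
# function_app.py  (Python v2 programming model)
import azure.functions as func

app = func.FunctionApp()

@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
def hello(req: func.HttpRequest) -> func.HttpResponse:
    # Runs only when a request arrives; on the Consumption plan you are
    # billed for execution time, not for an idle server.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!")
```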
11. Which Azure service is designed for orchestrating and managing containerized applications using Kubernetes? a) Azure Container Instances (ACI) b) Azure App Service c) Azure Kubernetes Service (AKS) d) Azure Service Fabric
Answer: c) Azure Kubernetes Service (AKS) Explanation: AKS simplifies deploying a managed Kubernetes cluster in Azure. It offloads the operational overhead of managing the Kubernetes control plane to Azure, allowing you to focus on managing and deploying your containerized applications.
12. An organization needs to distribute incoming internet traffic across a set of virtual machines to ensure high availability and reliability. Which Azure networking service should be used? a) Azure DNS b) Azure VPN Gateway c) Azure Load Balancer d) Azure ExpressRoute
Answer: c) Azure Load Balancer Explanation: Azure Load Balancer operates at Layer 4 (transport layer) of the OSI model. It distributes incoming traffic among healthy virtual machines in a backend pool, which is a key strategy for building highly available and scalable applications.
13. What is the main advantage of using a PaaS database service like Azure SQL Database over an IaaS solution like SQL Server on an Azure VM? a) Complete control over the operating system and database software. b) Automated patching, backups, and high availability managed by Azure. c) Lower cost for small, infrequently used databases. d) The ability to install custom software on the database server.
Answer: b) Automated patching, backups, and high availability managed by Azure. Explanation: The primary benefit of PaaS is the reduction in administrative overhead. With Azure SQL Database, Microsoft manages the underlying infrastructure and handles routine maintenance like patching, backups, and ensuring high availability, freeing up DBAs to focus on higher-value tasks.
14. Which Azure service provides a secure and centralized way to store and manage application secrets, keys, and certificates? a) Azure Storage b) Azure Key Vault c) Azure Active Directory d) Azure Security Center
Answer: b) Azure Key Vault Explanation: Azure Key Vault is a cloud service for securely storing and accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, or cryptographic keys.
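A minimal sketch with the azure-keyvault-secrets SDK, assuming a hypothetical vault and secret name:

```python
# pip install azure-identity azure-keyvault-secrets
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Access is governed by Key Vault RBAC or access policies, not by the code.
client = SecretClient(
    vault_url="https://<vault-name>.vault.azure.net",  # hypothetical
    credential=DefaultAzureCredential(),
)

secret = client.get_secret("payment-api-key")  # hypothetical secret name
print(secret.name)  # avoid logging secret.value in real code
```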
15. What is the purpose of a Managed Identity in Azure? a) To create a user account for a person to log in to the Azure portal. b) To provide an Azure resource with an automatically managed identity in Azure AD for authenticating to other Azure services. c) To manage the roles and permissions of Azure users. d) To create a service principal for an application.
Answer: b) To provide an Azure resource with an automatically managed identity in Azure AD for authenticating to other Azure services. Explanation: Managed Identities eliminate the need for developers to manage credentials. By assigning an identity to an Azure resource (like a VM or App Service), that resource can use its Azure AD identity to authenticate to other services that support Azure AD authentication (like Key Vault or Storage) without any credentials stored in code.
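To make the credential flow concrete, a minimal sketch that runs on an Azure resource with a system-assigned identity (the account URL is a placeholder); note that no connection string or key appears anywhere:

```python
# pip install azure-identity azure-storage-blob
from azure.identity import ManagedIdentityCredential
from azure.storage.blob import BlobServiceClient

# On a VM or App Service with a system-assigned identity, the platform
# supplies the Azure AD token; there is no secret to store or rotate.
service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",
    credential=ManagedIdentityCredential(),
)

for container in service.list_containers():
    print(container.name)
```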
16. An application needs to send messages between different components in a decoupled manner. Which Azure service provides a reliable messaging queue? a) Azure Event Hubs b) Azure Service Bus c) Azure Notification Hubs d) Azure Event Grid
Answer: b) Azure Service Bus Explanation: Azure Service Bus is a fully managed enterprise integration message broker. It's designed for high-value, transactional messaging and is often used to decouple applications and services from each other, providing reliable asynchronous communication with features like queues and topics (pub/sub).
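A minimal send/receive sketch with the azure-servicebus SDK, assuming a hypothetical connection string and queue:

```python
# pip install azure-servicebus
from azure.servicebus import ServiceBusClient, ServiceBusMessage

conn_str = "<service-bus-connection-string>"  # hypothetical
queue = "orders"

with ServiceBusClient.from_connection_string(conn_str) as client:
    # Producer: enqueue and return immediately; the consumer is decoupled.
    with client.get_queue_sender(queue) as sender:
        sender.send_messages(ServiceBusMessage('{"orderId": 42}'))

    # Consumer: receive, process, then settle the message.
    with client.get_queue_receiver(queue, max_wait_time=5) as receiver:
        for msg in receiver:
            print(str(msg))
            receiver.complete_message(msg)
```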
17. Which of the following is NOT a primary category of Azure services? a) Compute b) Storage c) Networking d) Virtualization
Answer: d) Virtualization Explanation: Virtualization is the underlying technology that enables cloud computing, but it is not considered a primary service category itself. The main categories are Compute (e.g., VMs, Functions), Storage (e.g., Blob, Files), Networking (e.g., VNet, Load Balancer), and Databases (e.g., SQL DB, Cosmos DB), among others like Identity and AI/ML.
18. A company wants to establish a dedicated, private connection between their on-premises network and Azure. Which Azure networking service should they use? a) Azure VPN Gateway b) Azure ExpressRoute c) Azure Virtual WAN d) Azure Bastion
Answer: b) Azure ExpressRoute Explanation: Azure ExpressRoute provides a private, dedicated, high-throughput connection between your on-premises datacenter and the Microsoft cloud. Unlike a VPN which goes over the public internet, ExpressRoute offers higher reliability, faster speeds, and lower latencies.
19. Which Azure service is best suited for storing and analyzing large volumes of data for business intelligence and machine learning? a) Azure SQL Database b) Azure Cosmos DB c) Azure Data Lake Storage d) Azure File Storage
Answer: c) Azure Data Lake Storage Explanation: Azure Data Lake Storage is a massively scalable and secure data lake for high-performance analytics workloads. It's built on top of Azure Blob Storage but adds a hierarchical namespace, making it highly optimized for big data analytics frameworks like Azure Databricks and Azure Synapse Analytics.
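A minimal sketch with the azure-storage-file-datalake SDK, assuming a hypothetical account and file system; note the dfs endpoint and the real directory semantics of the hierarchical namespace:

```python
# pip install azure-identity azure-storage-file-datalake
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<account>.dfs.core.windows.net",  # hypothetical
    credential=DefaultAzureCredential(),
)

fs = service.get_file_system_client("raw")

# Hierarchical namespace: these are real directories, not just name prefixes,
# which analytics engines rely on for efficient renames and ACLs.
directory = fs.create_directory("telemetry/2024/01")
file_client = directory.create_file("readings.json")
file_client.upload_data(b'{"sensor": 1, "temp": 21.5}', overwrite=True)
```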
20. What is the primary difference between Azure Container Instances (ACI) and Azure Kubernetes Service (AKS)? a) ACI is for running single containers, while AKS is for orchestrating multi-container applications. b) ACI is a PaaS offering, while AKS is an IaaS offering. c) ACI is more expensive than AKS for large-scale applications. d) ACI does not support Windows containers, while AKS does.
Answer: a) ACI is for running single containers, while AKS is for orchestrating multi-container applications. Explanation: ACI provides a simple and fast way to run a container in Azure without managing any virtual machines or adopting a higher-level service. It's perfect for simple tasks, automation scripts, or build jobs. AKS, on the other hand, is a full-blown container orchestrator designed to manage the lifecycle of complex, multi-container applications with features like scaling, service discovery, and load balancing.
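To show how lightweight ACI is, a hedged sketch with the azure-mgmt-containerinstance SDK that runs a single container with no cluster to manage; resource names and the sample image are placeholders:

```python
# pip install azure-identity azure-mgmt-containerinstance
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, ResourceRequests, ResourceRequirements,
)

aci = ContainerInstanceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# One container, no nodes, no orchestrator; ideal for one-off jobs.
group = ContainerGroup(
    location="eastus",
    os_type="Linux",
    restart_policy="Never",
    containers=[
        Container(
            name="one-off-job",
            image="mcr.microsoft.com/azuredocs/aci-helloworld",
            resources=ResourceRequirements(
                requests=ResourceRequests(cpu=1.0, memory_in_gb=1.5)
            ),
        )
    ],
)
aci.container_groups.begin_create_or_update("rg-demo", "one-off-job", group).result()
```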
1. The Global E-Commerce Platform Latency Challenge
Correct Answer: c) Re-architect the database layer to use Azure Cosmos DB with multi-region write capabilities, configuring the application to use the local Cosmos DB endpoint for both reads and writes.
Explanation: This is the only option that fundamentally solves the core problem: cross-region write latency while maintaining a zero Recovery Point Objective (RPO).
- Why (c) is correct: Azure Cosmos DB is designed for this exact scenario. With multi-region writes (also known as multi-master), every region has a writable replica: the application in Southeast Asia writes to the Southeast Asia Cosmos DB endpoint, and the application in the US writes to the US endpoint. Writes are committed locally with very low latency, and Cosmos DB replicates and resolves conflicts between regions asynchronously in the background. This directly addresses the write bottleneck and provides the best possible user experience globally, and it meets the RPO of zero for transactional data within a region under session consistency (a client-configuration sketch follows this explanation).
- Why the others are flawed:
- a) Implementing read-replicas only solves half the problem. Page loads for catalog browsing would be faster, but the critical checkout process, which involves *writes*, would still have to travel to the primary US region, incurring the same latency.
- b) Migrating to AKS and using a Service Bus is a significant architectural change that adds complexity. While asynchronous processing is good, it doesn't solve the synchronous database write latency required to confirm an order to the user. The database remains the bottleneck.
- d) Upgrading and sharding Azure SQL is a valid but far more complex and operationally intensive solution than using a natively geo-distributed database. Sharding logic must be built into the application, which is difficult to get right. While it can improve performance, it doesn't inherently solve the problem of write latency to a single primary region as elegantly as Cosmos DB's multi-master feature.
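As referenced above, a minimal client-configuration sketch with the azure-cosmos SDK (account and key are hypothetical): each regional deployment points preferred_locations at its own region, and multiple_write_locations opts the client into the account's multi-region write capability:

```python
# pip install azure-cosmos
from azure.cosmos import CosmosClient

# Deployed in Southeast Asia; the US deployment would list its own region.
client = CosmosClient(
    url="https://<account>.documents.azure.com:443/",  # hypothetical
    credential="<primary-key>",
    preferred_locations=["Southeast Asia"],
    multiple_write_locations=True,  # requires multi-region writes on the account
)

orders = client.get_database_client("shop").get_container_client("orders")

# The write commits against the local region's replica; Cosmos DB
# replicates it to the other regions asynchronously.
orders.create_item({"id": "order-9", "pk": "user-7", "status": "placed"})
```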
---
2. The Financial Services Security & Governance Mandate
Correct Answer: c) Implement a hub-spoke network topology with a third-party Network Virtual Appliance (NVA) cluster in the hub for traffic inspection. Use UDRs to force-tunnel all traffic through the NVA. Use Azure Private Link for all PaaS services and implement Azure Policy with "Deny" effects to proactively block non-compliant deployments.
Explanation: This option provides the most comprehensive and secure solution that meets all the stringent requirements.
- Why (c) is correct:
- Hub-Spoke & NVA: This is the standard, robust pattern for centralized control. Using a best-in-class third-party NVA (e.g., from Palo Alto, Fortinet) often provides more advanced threat intelligence and inspection capabilities than the native Azure Firewall, which is a common requirement for large financial institutions.
- Force-Tunneling with UDRs: This is the correct mechanism to ensure *all* traffic (spoke-spoke and outbound) is forced through the central inspection point.
- Azure Private Link: This is the critical component for PaaS security. Private Link exposes PaaS services via a private IP address *inside your VNet*, removing the service from the public internet entirely; Service Endpoints, by contrast, still target the service's public endpoint and merely keep the traffic on the Azure backbone. For a high-security environment, Private Link is the superior choice.
- Azure Policy with "Deny": This is the only way to *proactively prevent* non-compliant resources from being created. An "Audit" effect only reports on non-compliance after the fact, which is too late for a secure environment (a sketch of such a policy definition follows this explanation).
- Why the others are flawed:
- a) This is a good solution, but less secure than (c). Azure Firewall is capable, but a third-party NVA is often required for advanced features. More importantly, it suggests using Service Endpoints, which are less secure than Private Link.
- b) A mesh topology is unmanageable at scale and doesn't provide centralized inspection. It's the opposite of what's required.
- d) Virtual WAN is a great service for managing global connectivity, but it's not the core answer to the detailed inspection and PaaS security requirements. Again, it mentions Service Endpoints, and RBAC is not the right tool to prevent the *creation* of specific resource configurations; that is the job of Azure Policy.
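As referenced above, a hedged sketch of a custom "Deny" policy definition created with the azure-mgmt-resource SDK; the policy alias and names are illustrative:

```python
# pip install azure-identity azure-mgmt-resource
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient
from azure.mgmt.resource.policy.models import PolicyDefinition

policy_client = PolicyClient(DefaultAzureCredential(), "<subscription-id>")

# Deny any storage account configured to allow public blob access.
# A "deny" effect blocks the deployment at request time; "audit" would
# only report it afterwards.
definition = PolicyDefinition(
    policy_type="Custom",
    mode="Indexed",
    display_name="Deny public blob access",
    policy_rule={
        "if": {
            "allOf": [
                {"field": "type", "equals": "Microsoft.Storage/storageAccounts"},
                {
                    "field": "Microsoft.Storage/storageAccounts/allowBlobPublicAccess",
                    "equals": "true",
                },
            ]
        },
        "then": {"effect": "deny"},
    },
)
policy_client.policy_definitions.create_or_update("deny-public-blob", definition)
```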
---
3. The IoT Manufacturing Data Ingestion Dilemma
Correct Answer: c) Azure IoT Hub for ingestion, an Azure Kubernetes Service (AKS) cluster running custom processing logic, Azure Cosmos DB for the hot path, and Azure Data Lake Storage Gen2 for the cold path.
Explanation: This architecture provides the best balance of purpose-built services for extreme scale, flexibility, and performance.
- Why (c) is correct:
- IoT Hub: This is the purpose-built service for IoT scenarios. It provides per-device identity, security, and bidirectional communication, which is superior to a generic ingestion service like Event Hubs for a true IoT project (a device-side telemetry sketch follows this explanation).
- AKS: For complex, near real-time processing, especially if it involves custom machine learning models or complex logic, AKS provides the most flexibility and control over the execution environment compared to the more constrained environments of Functions or Stream Analytics.
- Cosmos DB (Hot Path): It's designed for massive-scale, low-latency reads and writes, making it perfect for serving real-time dashboards and alerts on the latest telemetry data.
- Data Lake Storage Gen2 (Cold Path): This is the de facto standard for storing massive volumes of structured and semi-structured data for big data analytics and ML model training at the lowest possible cost.
- Why the others are flawed:
- a) Azure SQL Database is a relational database and is not well-suited for the schema-on-read, high-volume, high-velocity nature of IoT telemetry data. It would quickly become a performance and cost bottleneck.
- b) This is a very strong and viable architecture. However, Stream Analytics, while powerful, can be less flexible for highly complex, custom code or specific ML model libraries compared to running your own code on AKS or Databricks. Synapse is excellent, but using it for the hot path might be overkill and less performant for point-reads than a dedicated NoSQL store like Cosmos DB.
- d) Service Bus is a messaging broker, not a high-throughput data streaming ingestion service; it's the wrong tool for the job. Azure Archive Storage is not suitable for the cold path as data retrieval is slow and expensive; it's for data that is rarely, if ever, accessed.
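As referenced above, a minimal device-side sketch with the azure-iot-device SDK; the per-device connection string comes from IoT Hub's device registry and is a placeholder here:

```python
# pip install azure-iot-device
from azure.iot.device import IoTHubDeviceClient, Message

# Each device authenticates with its own identity, which is the key
# difference from a generic ingestion endpoint like Event Hubs.
device = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")
device.connect()

device.send_message(Message('{"machineId": "press-01", "temperature": 71.3}'))
device.shutdown()
```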
---
4. The Hybrid Identity & CI/CD Security Overhaul
Correct Answer: d) Configure a Workload Identity Federation between the Azure AD App Registration and the Azure DevOps service connection, allowing the pipeline to exchange its own token for a short-lived Azure AD access token.
Explanation: This is the most modern, secure, and recommended approach, completely eliminating the "secret zero" problem.
- Why (d) is correct: Workload Identity Federation (using OpenID Connect) is a feature designed specifically for this use case. It allows an external system (like Azure DevOps, GitHub Actions, etc.) to be trusted by Azure AD. The pipeline presents its own internal token to Azure AD and, in exchange, Azure AD issues a short-lived, scoped access token. No secrets, keys, or certificates ever need to be stored in the CI/CD system. This is the gold standard (a credential sketch follows this explanation).
- Why the others are flawed:
- a) This is just "less bad" security. It reduces the blast radius of a compromised secret but doesn't eliminate the fundamental problem of having a long-lived credential stored in a third-party system. Secret rotation is also an operational burden.
- b) This only works if your build agents are Azure VMs that you manage. With Microsoft-hosted agents or other build systems, this isn't an option. It couples your CI/CD system tightly to a specific infrastructure.
- c) This is a valid and secure method that predates Workload Identity Federation. However, it's more complex to manage. You have to create, secure, deploy, and rotate the certificate, which adds operational overhead compared to the purely configuration-based setup of Workload Identity Federation.
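As referenced above, a sketch of what consuming a federated identity looks like in code. It uses azure-identity's WorkloadIdentityCredential, which implements the same OIDC token-exchange flow for Kubernetes-style environments (the platform injects AZURE_TENANT_ID, AZURE_CLIENT_ID, and AZURE_FEDERATED_TOKEN_FILE); an Azure DevOps service connection performs the equivalent exchange internally, so treat this as an analogous illustration rather than the DevOps mechanism itself:

```python
# pip install azure-identity azure-keyvault-secrets
from azure.identity import WorkloadIdentityCredential
from azure.keyvault.secrets import SecretClient

# The credential reads the injected OIDC token from disk and exchanges it
# for a short-lived Azure AD access token; no stored secret anywhere.
credential = WorkloadIdentityCredential()

client = SecretClient(
    vault_url="https://<vault-name>.vault.azure.net",  # hypothetical
    credential=credential,
)
print(client.get_secret("deploy-signing-key").name)  # hypothetical secret
```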
---
5. The Cost Optimization vs. Performance Conundrum
Correct Answer: d) Implement a combination of User Node Pools and Spot Node Pools in AKS, using node taints and tolerations to ensure critical workloads run on user nodes. Implement the Horizontal Pod Autoscaler (HPA) and cluster autoscaler. For Cosmos DB, use the Autoscale provisioned throughput feature and also leverage Azure Reservations for the baseline RU/s.
Explanation: This answer demonstrates a sophisticated, multi-layered approach to cost optimization without sacrificing performance.
- Why (d) is correct:
- Spot Node Pools: This is the single biggest cost-saver for stateless, interruptible workloads on AKS, offering up to 90% savings. The key is using them in combination with standard user node pools and using taints/tolerations so that critical, stateful, or non-interruptible parts of the app land on reliable nodes (a toleration sketch follows this explanation).
- HPA & Cluster Autoscaler: This is a fundamental best practice. HPA scales the pods based on demand (CPU/memory), and the cluster autoscaler adds/removes nodes (the expensive part) based on pod scheduling needs. This combination ensures the cluster scales up for peak load and down for off-peak.
- Cosmos DB Autoscale & Reservations: This is the most effective cost strategy for Cosmos DB. Autoscale automatically scales the RUs within a predefined range, handling the spiky 9-to-5 workload perfectly. For the absolute minimum RUs you know you'll always need, purchasing an Azure Reservation provides a significant discount over pay-as-you-go pricing for that baseline capacity.
- Why the others are flawed:
- a) This is a massive, likely unnecessary re-architecture. While App Service can be cheaper for simple apps, it may not be suitable for the complex microservices running on the original AKS cluster.
- b) Switching Cosmos DB to serverless is a poor choice for a workload with predictable, high-traffic peak hours. Serverless is best for intermittent and unpredictable traffic; it would be more expensive than provisioned throughput during the 9-to-5 peak.
- c) Using *only* Spot pools is too risky. An eviction of all nodes at once could cause a total outage. Manually scripting Cosmos DB scaling is brittle and inferior to the native Autoscale feature.
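As referenced above, a minimal sketch of the taint/toleration mechanics using the kubernetes Python client. AKS taints spot nodes with kubernetes.azure.com/scalesetpriority=spot:NoSchedule, so only pods that explicitly tolerate that taint can land there; the container name and image are placeholders:

```python
# pip install kubernetes
from kubernetes import client

# Only interruptible workloads carry this toleration; critical pods
# without it can never be scheduled onto spot nodes.
spot_toleration = client.V1Toleration(
    key="kubernetes.azure.com/scalesetpriority",
    operator="Equal",
    value="spot",
    effect="NoSchedule",
)

pod_spec = client.V1PodSpec(
    containers=[client.V1Container(name="batch-worker", image="<registry>/worker:1")],
    tolerations=[spot_toleration],
    # Pin the pod to spot capacity explicitly via the matching node label.
    node_selector={"kubernetes.azure.com/scalesetpriority": "spot"},
)
```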