Modern Azure VM Monitoring (2025): AMA, DCR, DCE, Log Analytics & Alerts — Complete Hands-On Lab (AZ-104)

By Joshua McNair — jmcnairtech.com
Monitoring is one of the most critical responsibilities of an Azure Administrator — and one of the most heavily tested subjects on the AZ-104 exam. Azure’s monitoring ecosystem has evolved dramatically, and today it revolves around a modern, modular pipeline that gives you full control over what data is collected and where it flows.
In this hands-on lab, you’ll build the full 2025 Azure monitoring pipeline manually, including:
Azure Monitor Agent (AMA)
Data Collection Endpoint (DCE)
Data Collection Rules (DCR)
Log Analytics Workspace (LAW)
Metric alerts + Action Groups
KQL log queries
Azure Monitor Workbooks
This approach is the standard used in many enterprise environments — and it is the version of monitoring Microsoft expects you to understand for the AZ-104 exam as of November 2025.
Unlike the automatic “Enable Insights” button, the manual pipeline ensures:
You control exactly which counters and event logs are collected
You choose the workspace
You understand the DCE + DCR architecture
Your KQL tables load correctly
No hidden, auto-generated resources
Monitoring is predictable and consistent across environments
This is the cleanest, most exam-accurate, and most professional way to configure Azure VM monitoring.
What You’ll Build (Architecture Overview)
You will deploy the full monitoring chain:
Log Analytics Workspace (LAW)
Stores performance logs, events, and KQL data.
Data Collection Endpoint (DCE)
The secure endpoint the AMA sends data to.
Data Collection Rule (DCR)
Defines which performance counters and logs are collected and where they go.
Azure Monitor Agent (AMA)
Installed on the VM and connected to the DCE, DCR, and workspace.
Metric Alerts
Lightweight alerts based on platform metrics.
KQL Queries
Validate the Perf, Heartbeat, and InsightsMetrics tables.
Monitor Workbook
A dashboard visualizing the collected data.
This architecture is used in real-world monitoring scenarios for Azure VMs, Azure Arc servers, AKS nodes, and hybrid resources.

Lab Requirements
Azure subscription (free trial works)
Region: East US
One VM (Windows Server 2022 recommended)
60–90 minutes
Very small ingestion charges may apply
PART 1 — Create the Virtual Machine
Step 1.1 — Create a Resource Group
Azure Portal → Resource groups → Create
Name: rg-az104-monitoring-lab
Region: same region for all resources
Click Review + create → Create
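If you prefer scripting, the same resource group can be created with the Azure CLI. This is a sketch of the equivalent command, assuming you are already signed in with `az login`; the name and region match the lab values above:

```shell
# Create the lab resource group (names match the portal steps above)
az group create \
  --name rg-az104-monitoring-lab \
  --location eastus
```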

Step 1.2 — Create the VM
Azure Portal → Virtual Machines → Create → Azure virtual machine
| Setting | Value |
| VM Name | LabVM01 |
| Region | East US |
| Image | Windows Server 2022 Datacenter |
| Size | Standard_B2s |
| Username | labadmin |
| Password | secure password |
| Inbound ports | RDP (3389) |
Click Next: Disks → accept defaults.
Click Next: Networking → accept defaults.
Click Next: Management → disable all optional management features for this lab.
Click Next: Monitoring → Enable Boot Diagnostics: Enable with managed storage account (recommended)
Click Next → Advanced
Do NOT add extensions
Do NOT add VM applications
Click Review + create → Create
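The same VM can be deployed from the Azure CLI. A minimal sketch, assuming the resource group from Step 1.1 already exists; the password is a placeholder you must replace, and `Win2022Datacenter` is the standard image alias for Windows Server 2022 Datacenter:

```shell
# CLI equivalent of the VM creation above (requires a live subscription)
az vm create \
  --resource-group rg-az104-monitoring-lab \
  --name LabVM01 \
  --image Win2022Datacenter \
  --size Standard_B2s \
  --admin-username labadmin \
  --admin-password '<your-secure-password>' \
  --nsg-rule RDP
```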

PART 2 — Connect to the VM
Step 2.1 — Navigate to the VM
Azure Portal → Virtual Machines → LabVM01
Ensure:
Status: Running
Public IP assigned

Step 2.2 — Connect using RDP
Left menu → Connect → RDP → Download RDP File
Login:
Username: labadmin
Password: your password
You are now inside the VM.

PART 3 — Check the Pre-Monitoring State
Before enabling advanced monitoring:
Azure Portal → LabVM01 → Insights
You should see:
Basic CPU graph (host metrics)
Availability
Basic platform data
You will NOT see full performance charts yet.
This confirms that the AMA, DCR, and workspace are not yet active.

PART 4 — Create the Log Analytics Workspace
Azure Portal → Log Analytics workspaces → Create
| Field | Value |
| Resource group | rg-az104-monitoring-lab |
| Name | lab6-monitoring |
| Region | East US |
Click Review + create → Create
This workspace will store your VM logs.
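The CLI equivalent, for reference, is a single command; the workspace name and region match the table above:

```shell
# Create the Log Analytics Workspace that will store the VM's logs
az monitor log-analytics workspace create \
  --resource-group rg-az104-monitoring-lab \
  --workspace-name lab6-monitoring \
  --location eastus
```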
PART 5 — Create the Data Collection Endpoint (DCE)
Azure Portal → Monitor → Data Collection endpoints → Create
| Field | Value |
| Name | dce-az104-monitoring |
| Resource group | rg-az104-monitoring-lab |
| Region | East US |
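Scripted, the DCE looks like the sketch below. Note this is an assumption-laden example: the data-collection commands live in the `monitor-control-service` CLI extension, which must be installed first, and the endpoint here is left open to public network access for lab simplicity:

```shell
# Data-collection commands require the monitor-control-service extension
az extension add --name monitor-control-service

az monitor data-collection endpoint create \
  --name dce-az104-monitoring \
  --resource-group rg-az104-monitoring-lab \
  --location eastus \
  --public-network-access Enabled
```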
You now have:
A VM
A Log Analytics Workspace
A Data Collection Endpoint
Next, you will create the Data Collection Rule (DCR) that defines what data the VM will send to the workspace.
PART 6 — Create the Data Collection Rule (DCR)
Azure Portal → Monitor → Data Collection Rules → Create
Fill out the Basics tab:
Name: dcr-az104-monitoring
Subscription: your subscription
Resource group: rg-az104-monitoring-lab
Region: same as your VM (East US)
Platform Type: Windows
Data Collection Endpoint: dce-az104-monitoring
Click Next: Resources
Step 6.2 — Assign DCR to the VM
On the Resources tab:
Click Add resources → select: LabVM01
Click Apply
Click Next: Collect and deliver
Step 6.3 — Add a Data Source
You should now be on the Collect and deliver tab.
Click:
+ Add data source
A panel opens with two tabs: Data source and Destination.
Stay on Data source.
Step 6.4 — Configure Performance Counters
Under Data source type, choose: Performance Counters
Below that, select:
Basic (recommended)
This automatically enables the 4 standard counters:
| Performance Counter | Sample Rate |
| CPU | 60 seconds |
| Memory | 60 seconds |
| Disk | 60 seconds |
| Network | 60 seconds |

Leave all four selected
Leave sample rates at 60 seconds
Click Next (Destination)
Step 6.5 — Select Destinations (Mandatory)
This is where many people get stuck — but the choices are simple:
Destination type: Azure Monitor Logs
Subscription: Your Subscription
Destination details: lab6-monitoring (the workspace you created in Part 4)
This sends the performance logs into the Perf and InsightsMetrics tables.

Click Next: Tags
(You can skip tags.)
Click Review + Create → Create
Your DCR is now live.
Important - AMA is automatically installed on the VM as soon as you attach the VM to a DCR.
You do NOT manually install AMA under Extensions.
You do NOT click “Enable Insights” inside the VM.
Assigning the VM to a DCR triggers a background deployment of AMA.
Azure handles it for you.
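When scripting instead of using the portal, the Resources-tab step corresponds to creating a DCR-to-VM association. A hedged sketch, assuming the `monitor-control-service` extension is installed and the resources above exist; unlike the portal flow, a CLI script typically also installs the AMA extension explicitly:

```shell
# Look up the DCR and VM resource IDs
dcrId=$(az monitor data-collection rule show \
  --name dcr-az104-monitoring \
  --resource-group rg-az104-monitoring-lab \
  --query id --output tsv)
vmId=$(az vm show \
  --resource-group rg-az104-monitoring-lab \
  --name LabVM01 \
  --query id --output tsv)

# Associate the DCR with the VM (what the portal's Resources tab does)
az monitor data-collection rule association create \
  --name dcr-to-labvm01 \
  --rule-id "$dcrId" \
  --resource "$vmId"

# In scripted deployments, install the Azure Monitor Agent explicitly
az vm extension set \
  --resource-group rg-az104-monitoring-lab \
  --vm-name LabVM01 \
  --name AzureMonitorWindowsAgent \
  --publisher Microsoft.Azure.Monitor
```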
RESULT AFTER PART 6
You now have the full monitoring ingestion pipeline configured:
AMA (auto-installed when DCR is applied)
DCE created
DCR created and assigned
Performance collection configured
Data flowing to LAW (incoming within ~10 minutes)
This is exactly how Azure Monitor works in production.
PART 7 — Create a High-CPU Metric Alert
Azure Portal → Virtual Machines → LabVM01 → Alerts
Step 7.1 — Create New Alert Rule
Click Create alert rule.
Under Condition → Add condition
Search for metric: Percentage CPU
Configure:
| Setting | Value |
| Threshold type | Static |
| Value is | Greater than |
| Threshold | 75 |
| Aggregation type | Average |
| Check every | 5 minutes |
| Lookback period | 5 minutes |

Step 7.2 — Create an Action Group
Under Action Group → Create action group
Step 7.3 — Basics Tab
Project details:
Subscription: your subscription
Resource group: rg-az104-monitoring-lab
Action group name: ag-az104-notify
Region: Global
(Action Groups are always Global — this is correct)
Click Next: Notifications
Notifications Tab
Click + Add notification
Configuration:
Notification type: Email/SMS message/Push/Voice
Name: emailNotify
Channel: Email
Email address: your email address
Click OK, then Next: Actions
Actions Tab
Leave blank → Click Next: Tags
Skip tags → Click Review + create → Create
Your Action Group is now created.
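The whole Action Group wizard collapses to one CLI command. A sketch with the lab's names; the email address is a placeholder:

```shell
# Action group with a single email receiver (address is a placeholder)
az monitor action-group create \
  --name ag-az104-notify \
  --resource-group rg-az104-monitoring-lab \
  --action email emailNotify you@example.com
```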
Step 7.4 — Attach Action Group to the Alert Rule
Back in the alert rule wizard:
- Under Actions, select the new Action Group:
ag-az104-notify
Click Next: Details
Step 7.5 — Complete the Alert Rule
Set:
Alert rule name: ar-az104-highcpu
Severity: 2 (Warning)
Enable upon creation: Yes
Click Review + create → Create
Your CPU alert is now active.
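For reference, the same static-threshold metric alert can be sketched in the CLI; the condition string mirrors the table in Step 7.1, and `--action` references the action group created above:

```shell
# Static threshold: average Percentage CPU > 75, evaluated every 5 minutes
vmId=$(az vm show \
  --resource-group rg-az104-monitoring-lab \
  --name LabVM01 \
  --query id --output tsv)

az monitor metrics alert create \
  --name ar-az104-highcpu \
  --resource-group rg-az104-monitoring-lab \
  --scopes "$vmId" \
  --condition "avg Percentage CPU > 75" \
  --window-size 5m \
  --evaluation-frequency 5m \
  --severity 2 \
  --action ag-az104-notify
```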
PART 8 — Validate Log Ingestion with KQL
Now that AMA + DCR are attached to the VM, we verify that logs are flowing into your Log Analytics Workspace.
Step 8.1 — Open the KQL Query Interface
Azure Portal → Monitor
Left menu → Logs
If prompted for a scope:
Resource type: Log Analytics Workspace
Workspace: lab6-monitoring
Click Apply.
You are now inside the KQL query editor.
Step 8.2 — Validate VM Connectivity (Heartbeat)
Run:
Heartbeat
| where Computer == "LabVM01"
| sort by TimeGenerated desc

If you see rows → AMA is connected and reporting.
Step 8.3 — Validate Performance Counters (Perf Table)
Run:
Perf
| where Computer == "LabVM01"
| summarize Count = count() by ObjectName, CounterName
| sort by ObjectName asc
You should see:
Processor
Memory
LogicalDisk
Network Interface
This confirms the DCR is collecting CPU, memory, disk, and network counters.
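You can also run these KQL checks from the command line instead of the portal. A hedged sketch: `az monitor log-analytics query` takes the workspace's customer ID (not its name), and on older CLI versions it may require the `log-analytics` extension:

```shell
# Query the workspace by its customer (workspace) ID
wsId=$(az monitor log-analytics workspace show \
  --resource-group rg-az104-monitoring-lab \
  --workspace-name lab6-monitoring \
  --query customerId --output tsv)

az monitor log-analytics query \
  --workspace "$wsId" \
  --analytics-query "Perf | where Computer == 'LabVM01' | summarize count() by ObjectName" \
  --output table
```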

Step 8.4 — Generate a CPU Spike (Inside the VM)
Inside LabVM01, open PowerShell and temporarily stress the CPU:
while ($true) { 1..50000 | ForEach-Object { [Math]::Sqrt($_) } }
Let this run for 60–90 seconds.
It will push CPU to 80–100%.
Close the PowerShell window to stop the loop.
This generates real performance data for the Perf table and will also trigger the High-CPU alert we created earlier.
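If you would rather not RDP in, the spike can also be triggered remotely with Run Command. This sketch uses a self-terminating variant of the loop so it stops after roughly 90 seconds without manual intervention:

```shell
# Run the stress loop remotely; it stops itself after ~90 seconds
az vm run-command invoke \
  --resource-group rg-az104-monitoring-lab \
  --name LabVM01 \
  --command-id RunPowerShellScript \
  --scripts '$end=(Get-Date).AddSeconds(90); while((Get-Date) -lt $end){ [Math]::Sqrt(12345) | Out-Null }'
```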
Step 8.5 — Validate the CPU Spike in Log Analytics
Run the following KQL in your Log Analytics Workspace:
Perf
| where Computer == "LabVM01"
| where CounterName == "% Processor Time"
| sort by TimeGenerated desc
You should see:
A sharp jump in CPU percentage
Several Perf entries around the timestamp when the spike occurred
This confirms:
AMA is functioning
DCR is collecting Perf
The VM is sending data to LAW
Alerts will fire correctly
Monitoring pipeline is working

RESULT AFTER PART 8
You have verified:
AMA is installed
Heartbeat data is flowing
Perf counters are flowing
CPU spike is visible in KQL
Alert rule will trigger
Log Analytics Workspace is receiving data
Your VM is successfully sending monitoring data into Azure Monitor.
PART 9 — Build an Azure Monitor Workbook (Visual Dashboard)
Workbooks let you visualize performance and insights from your VM.
Step 9.1 — Create a New Workbook
Azure Portal → Monitor
Left menu → Workbooks
Click + New
A blank workbook opens.

Step 9.2 — Add a CPU Chart
Click Add → Add query
Ensure your workspace is set to lab6-monitoring
Run:
Perf
| where Computer == "LabVM01"
| where CounterName == "% Processor Time"
| summarize AvgCPU = avg(CounterValue) by bin(TimeGenerated, 5m)
Visualization → Line chart
Title: CPU Usage (5m Average)
Click Save inside the query pane.

Step 9.3 — Add a Memory Chart
Perf
| where Computer == "LabVM01"
| where CounterName == "Available MBytes"
| summarize AvgMem = avg(CounterValue) by bin(TimeGenerated, 5m)
Visualization → Line chart
Title: Available Memory (MB)
Step 9.4 — Add a Disk Activity Chart
Perf
| where Computer == "LabVM01"
| where ObjectName == "LogicalDisk"
| summarize AvgIO = avg(CounterValue) by bin(TimeGenerated, 5m)
Visualization → Line chart
Title: Disk Activity
Step 9.5 — Add Heartbeat Table
Heartbeat
| where Computer == "LabVM01"
| sort by TimeGenerated desc
Visualization → Table
Title → Heartbeat Logs
Step 9.6 — Save the Workbook
Top: Save
Name:
Azure VM Monitoring Dashboard
Resource Group:
rg-az104-monitoring-lab
Click Save
You now have a complete, custom dashboard.

PART 10 — Final Validation Checklist (2025 Azure Monitor Pipeline)
Before closing the lab, verify that each component in your monitoring pipeline is working as expected. This ensures your configuration matches real-world Azure monitoring architecture and AZ-104 exam requirements.
Log Analytics Workspace (LAW)
Workspace exists in East US
Data tables (Perf, Heartbeat) are receiving entries
KQL queries return results without errors
Data Collection Endpoint (DCE)
Provisioning state = Succeeded
Region matches the VM, LAW, and DCR
Connected to the DCR
Data Collection Rule (DCR)
Assigned to LabVM01
Platform Type = Windows
Collects Basic Performance Counters (CPU, Memory, Disk, Network)
Sends data to lab6-monitoring
Azure Monitor Agent (AMA)
Automatically installed when the DCR was applied
No need to manually install or enable VM Insights
Heartbeat entries appear every 1–5 minutes
KQL Log Validation
Heartbeat table shows connectivity
Perf table shows performance metrics
CPU spike appears in recent Perf logs
Workspace ingestion pipeline is healthy
Azure Monitor Workbook
Custom dashboard created
CPU, Memory, Disk, Heartbeat visualizations display correctly
Shows real metric data collected via DCR
Alerts + Action Group
High-CPU alert rule created
Action Group configured with Email/SMS/Push
Alerts will fire when CPU spike occurs
Demonstrates end-to-end monitoring and notification flow
FINAL SUMMARY — Azure Monitoring Lab (2025 Pipeline)
In this lab, you built the complete modern Azure Monitor ingestion pipeline exactly as used in production environments and expected on the AZ-104 exam. Instead of relying on auto-generated resources from VM Insights, you deployed everything manually:
A Log Analytics Workspace for storing telemetry
A Data Collection Endpoint for secure ingestion
A Data Collection Rule defining what data to collect
The Azure Monitor Agent, automatically deployed
Performance data flowing into Perf and Heartbeat
A High-CPU Alert Rule with email notifications
A Custom Azure Monitor Workbook for visualization
You validated the entire workflow using a real CPU spike, confirming that the VM is sending logs to your workspace and that Azure Monitor is capturing, storing, analyzing, and alerting on activity.
This hands-on approach gives you a deep understanding of how Azure monitoring really works, helps you stand out in job interviews, and builds exactly the skills needed to pass the AZ-104 exam.
Your monitoring environment is now fully operational.
You're ready for advanced monitoring labs or for integrating this pipeline into real enterprise deployments.




