Building and Hardening an Azure Honeypot: A Journey with Microsoft Sentinel and Live Internet Threats (Part 3)

Robert Onyango
21 min read · Jan 18, 2025


Using NIST SP 800–61 to handle security incidents in our Honeypot

Introduction

Welcome back to the blog series, ‘Building and Hardening an Azure Honeypot: A Journey with Microsoft Sentinel and Live Internet Threats’. In part two of the lab, we deployed the honeypot; click the link to catch up. In this part, we are going to build and deploy our monitoring infrastructure, i.e. the SOC, in accordance with phase one of the NIST SP 800–61 guidelines.

The National Institute of Standards and Technology (NIST) is a federal U.S. agency that is responsible for developing standards and guidelines, including minimum requirements, for providing adequate information security for organizations and U.S. federal operations.

NIST SP 800–61 is a publication that assists organizations in establishing computer security incident response capabilities and handling incidents efficiently, effectively and systematically. This publication provides guidelines for incident handling, particularly for analyzing incident-related data and determining the appropriate response to each incident i.e. detecting, analyzing, prioritizing, and handling incidents.

A computer security incident or simply ‘incident’ is a violation or imminent threat of violation of computer security policies, acceptable use policies, or standard security practices that causes adverse events to an organization’s computing infrastructure.

It is important to remember that preventative activities based on the results of risk assessments can lower the number of incidents an organization experiences, but not all incidents can be prevented as newer, more innovative attacks are developed every single day. It is therefore important to have a comprehensive incident response capability for rapidly detecting incidents, minimizing loss and destruction, mitigating the weaknesses that were exploited, and restoring IT services.

NIST SP 800–61 provides guidelines that, when followed, help organizations achieve these goals. By understanding threats and identifying modern attack patterns, we can then prevent subsequent compromises and, finally, share the findings with internal and external stakeholders.

Handling an Incident

The major phases of the incident response process as proposed by NIST 800–61 are:

  1. Preparation: This phase involves establishing an incident response capability that allows the organization to respond to incidents that occur, and also preventing incidents by ensuring that systems, networks, and applications are sufficiently secure. One key example of preparation is ensuring your first responders are properly trained and have adequate communication and coordination mechanisms.
  2. Detection and Analysis: This phase involves accurately detecting and assessing possible incidents, i.e. determining whether an incident has occurred and, if so, the type, extent, and magnitude of the problem. It involves paying attention to precursors (signs that an incident may occur in the future) and indicators (signs that an incident may have occurred or may be occurring right now). Computer security software alerts (from tools such as IDPSs, SIEMs, and XDRs), logs (OS, application, system, and network), and publicly available information are the backbone of this phase.
  3. Containment, Eradication, and Recovery: Containment is important before an incident overwhelms resources or increases damage, and it buys time to develop a tailored remediation strategy. After containment, eradication may be necessary to eliminate components of the incident, e.g. deleting malware and disabling breached user accounts, from all identified affected hosts. Finally, recovery means restoring systems to normal operations and confirming that the systems are indeed functioning normally. It is during recovery that steps are taken to remediate the exploited vulnerabilities and prevent similar incidents in the future.
  4. Post-Incident Activity: This phase involves learning and improving from the incident response process as an organization and security team to further improve the resilience of the security measures and incident handling process. The team reviews the reports created and used during the incident process, creates a follow-up report for each incident and provides a mechanism for information sharing.

Phase One: Preparation — Building the Azure Cloud SOC

Following the NIST SP 800–61 guidelines, we must first prepare to handle security events with tools, teams, and procedures. For our lab, we’ll prepare by building our cloud SOC in Azure. This will allow us to monitor all aspects of our lab infrastructure so we know what is happening and when it is happening.

Building our SOC will consist of the following steps:

  1. Deploy a Log Analytics Workspace and connect it to Microsoft Sentinel
  2. Upload GeoIP Watchlist to Microsoft Sentinel
  3. Enable Microsoft Defender for Cloud
  4. Enable Logging for VMs and NSGs
  5. Enable Tenant, Subscription, and Resource-Level Logging
  6. Finalize Microsoft Sentinel Setup

Step 1: Deploy Log Analytics Workspace (LAW) and connect it to Microsoft Sentinel

The LAW is the log aggregator in the Azure cloud: a central repository that collects the Activity, Resource, and Entra ID logs for our deployed honeypot environment. From here, we can proceed to analyze the logs using the Kusto Query Language (KQL) to find the specific data we need to identify security incidents. As discussed earlier, Microsoft Sentinel is the Security Information and Event Management (SIEM) system in our lab and will be added on top of the LAW. This will allow Microsoft Sentinel to offer near real-time analysis of our lab’s security posture, strengthening our ability to respond to incidents effectively.

Create a Log Analytics Workspace

  1. In the Azure Portal Search Bar, enter Log Analytics and select Log Analytics workspaces from the list of results.
  2. On the Log Analytics workspaces page, choose Create.
  3. On the Basics page of the Create Log Analytics workspace wizard, provide the following information and choose Review + Create.

4. Review the information and choose Create.

Connect Microsoft Sentinel to the Log Analytics Workspace

  1. In the Azure Portal Search Bar, enter Microsoft Sentinel and select Microsoft Sentinel from the list of results.
  2. On the Microsoft Sentinel page, choose Create.
  3. On the Add Microsoft Sentinel to a workspace page, select the Honeypot-LAW. Choose Add.

Step 2: Upload a GeoIP Watchlist to Microsoft Sentinel

Microsoft Sentinel watchlists enable collecting data from external data sources for correlation with the events in your Microsoft Sentinel environment. Once created, you can leverage watchlists in your search, detection rules, threat hunting, workbooks and response playbooks. Watchlists are stored in your Microsoft Sentinel workspace as name-value pairs and are cached for optimal query performance and low latency.

A GeoIP watchlist supplies Microsoft Sentinel with the data required to map out where attacks are coming from, providing a great way to visualize attacker locations in a global view. The watchlist we will use in our lab is a list of IP addresses and their geographical locations. In this lab, I used a CSV export of MaxMind’s GeoIP2-City data collected on 29th November 2024. I simplified and condensed the CSV file so that Microsoft Sentinel can digest it as quickly as possible. You can find the csv file here <GitHub Link>.

Let’s first store the csv file in an Azure Storage account. You can find out more about Azure Storage from my previous lab Microsoft Azure Storage: Understand and Implement cloud storage solutions in real-world scenarios. Follow the steps below to achieve this:

  1. In the Azure portal, search for and select Storage accounts. There may be an option Storage accounts (classic); make sure not to select it.
  2. Select + Create.
  3. On the Basics tab, select the Honeypot-RG resource group.
  4. Provide a Storage account name. The storage account name must be unique in Azure.
  5. Set the Performance to Standard and the Redundancy to Locally-redundant storage (LRS).
  6. Select Review + create, and then Create.
  7. Wait for the storage account to deploy and then Go to resource.
  8. Under Data Storage, click on Containers.

9. Click on + Container and name the container ipgeodata. Click on Create.

10. Click on our ipgeodata container.

11. Click on Upload and navigate to the downloaded csv file.

NOTE: The file is too large for GitHub, so I had to split it into repo1 and repo2. You should merge the two files to get the best results. Once selected, click on Upload.

12. Once uploaded, click on the csv file and select Generate SAS. The SAS token will provide secure access to the stored file from the Microsoft Sentinel watchlist.

13. Change the expiry year to 2025, click on Generate SAS token and URL, then copy the newly generated Blob SAS URL and paste it into a text document; we’ll need it in the next few steps.

Let’s proceed to add the GeoIP data to our watchlist by following the steps below:

  1. In the Microsoft Sentinel page, click on Watchlist under Configuration and select New.

2. Enter the name geoip as the Name and Alias. Click Next.

3. Select Azure Storage (Preview) as the Source type, paste the Blob SAS URL we copied earlier into the Blob SAS URL (Preview) field, and enter network as the SearchKey.

4. Select Next: Review + create and select Create. We may need to wait for a few hours before our watchlist is fully loaded.

5. Once loaded, we can go to our Log Analytics Workspace and use the following KQL command to count how many rows of data we have uploaded for our GeoIP watchlist. I can confirm I have over 3 million rows of data.
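A minimal version of that query, using Sentinel’s built-in _GetWatchlist() function with the geoip alias we set above:

    // Retrieve the geoip watchlist and count its rows
    _GetWatchlist('geoip')
    | count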

Step 3: Enable Microsoft Defender for Cloud

Microsoft Defender for Cloud provides Azure and hybrid cloud workload protection and security posture management. Its features cover the two broad pillars of cloud security:

  1. Cloud Security Posture Management (CSPM): This provides continuous monitoring to help you understand your current security position and hardening recommendations to help you efficiently and effectively improve your security position. It performs this function by using Secure Score — an aggregate value of assessments of your resources, subscriptions, and organization for security issues. The higher the secure score value, the lower the identified risk level.
  2. Cloud Workload Protection (CWP): This offers security alerts powered by Microsoft Threat Intelligence and a range of advanced, intelligent protections for your workloads. These workload protections are Azure resource-specific, providing security for resources such as virtual machines, SQL databases, containers, web applications, your network, key vaults, resource manager, etc.

NB: Microsoft Defender for Cloud also supports hybrid and multi-cloud environments by deploying Azure Arc and enabling Defender for Cloud on the connected resources.

For purposes of our lab, the important point is that Defender for Cloud analyzes the logs read from our resources and can detect threats in any area of our environment. Defender for Cloud applies security analytics to these logs and generates security alerts for potential security incidents. These alerts describe details of the affected resources, suggest remediation steps and, in some cases, offer an option to trigger a logic app in response. The alerts can be exported to Microsoft Sentinel or any third-party SIEM.

The second key point for our lab is that we are enabling Microsoft Defender for Cloud to be compliant with NIST SP 800–53. NIST SP 800–53 is a publication that provides a catalog of security and privacy controls for information systems and organizations to protect operations, assets, individuals, and other organizations from a diverse set of threats and risks, including hostile attacks, human errors, natural disasters, structural failures, foreign intelligence entities, and privacy risks. Defender for Cloud will use NIST SP 800–53 to recommend ways to strengthen our resources’ security posture.

In Azure, NIST SP 800–53 is automatically integrated and the Azure Policy regulatory compliance built-in initiative definition maps to compliance domains and controls in NIST SP 800–53. This allows Microsoft Defender for Cloud to automate compliance by providing actionable security control recommendations that align with our Azure environment.

Now, let’s proceed to enable Microsoft Defender for our lab. We are going to enable it for our Log Analytics Workspace and for the subscription resources as well. Begin by following the steps below:

  1. In the Azure Portal Search Bar, enter Microsoft Defender for Cloud and select Microsoft Defender for Cloud from the list of results.
  2. Under Management, click Environment settings.
  3. Scroll down to locate our workspace, Honeypot-LAW, click on the 3 dots to reveal the Edit settings button, and click on it.

4. Click on Enable All Plans to allow Defender for Cloud to access our resources’ traffic flow. Click on Save.

5. Click on Data collection and turn on All events. This ensures Defender for Cloud can collect all the Windows security events we saw earlier in the Event Viewer. Click on Save.

Next, we’re going to enable Microsoft Defender for Cloud to oversee certain aspects of our subscription. These settings influence how security policies, compliance frameworks, and automated remediation actions are consistently applied across our subscription. This provides a broader, more unified view of our entire subscription’s security posture.

  1. Go back to Environment settings. Scroll down to locate your subscription, click on the 3 dots to reveal the Edit settings button, and click on it.

2. In Defender Plans, make sure to turn the Status ON for servers, databases, storage, and key vault. Click on Save.

3. Ensure that Defender for Servers pricing is on Plan 2 since this is the only plan that supports Log Analytics data ingestion (500MB free).

NB: Note that Azure deprecated the Azure Monitor Agent for Defender for Servers in November 2024. This agent was tasked with collecting security-related configuration and event logs from the virtual machines and storing them in the selected Log Analytics Workspace for analysis. Now we have a simpler onboarding process: enable Microsoft Defender for the specific workspace as we have done above, then enable Microsoft Defender for Servers, ensuring the pricing is on Plan 2.

Next, we need to enable the NIST SP 800–53 standard so we are compliant with it and can enjoy the benefits discussed above, i.e. get recommendations to strengthen our security controls.

  1. Click on Security Policies.
  2. Toggle the NIST SP 800–53 status button to ON.

Finally, we’re going to import all of the Microsoft Defender for Cloud alerts to Log Analytics Workspace.

  1. Click on Continuous export, select Log Analytics workspace then make sure all of the boxes are checked. This will automatically and continuously export security information to our LAW allowing real-time visibility of our deployment.

2. Scroll down to select the correct Resource Group and Log Analytics Workspace.

3. Click on Save.

Step 4: Enable logging for VMs and NSGs using Flow Logs

We will now proceed to configure our VMs and NSGs to forward logs to our Log Analytics Workspace. In order to forward logs from our NSGs to the Log Analytics Workspace, we need to create NSG flow logs. NSG flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an NSG (both inbound and outbound). It captures the source and destination IP address, port, protocol, and whether traffic was allowed or denied by an NSG. This gives us the ability to monitor the security of our network. Proceed with the steps below:

  1. Go to the Honeypot-RG and select the WindowsServer-VM-nsg.

2. Under Monitoring, select NSG flow logs. Click on Create Flow log.

3. Select the target resources, choosing both the Linux and Windows VM NSGs. By doing this, we’ve configured flow logs for both servers.

4. Once confirmed, ensure that the storage account we created earlier, geoip20241129, is selected. Enter a retention time of 30 days so the data is stored for only 30 days before being purged. Click on Next: Analytics.

5. Enable traffic analytics with a processing interval of every 10 minutes. Traffic Analytics provides us a detailed overview of the data passing through our NSGs. It’s like a high-powered surveillance system that alerts you to anything out of the ordinary. When it’s enabled, Traffic Analytics observes every data packet, every byte, and turns it into actionable insights.

6. Select Review + Create. Then select Create. When you go to the Flow Log resource, you should see the following.

Next, we’re going to configure Data Collection Rules (DCRs) to transfer logs from our VMs (/var/log on Linux, and the Application and Security logs from Event Viewer on Windows) to our Log Analytics Workspace. You can find out more about Data Collection Rules from my article Deploying Azure Monitor to Track and Respond to Security Events Across Windows, Linux VMs, and Cloud Apps. Ensure that your VMs are running, then proceed with the following steps:

  1. Navigate to our Log Analytics Workspace, Honeypot-LAW.
  2. Under Settings, select Agents. Select Data Collection Rules.

3. Click Create. This will open the Create Data Collection Rule interface. Ensure that the Resource Group and Region are consistent with our lab so far. Select All for the platform type. Click Next : Resources.

4. Under the Resources tab, click on Add Resources and select the WindowsServer-VM and LinuxServer-VM. Click Apply.

5. Click on Next: Collect and Deliver. Here, we specify the logs we want to collect from each of our VMs. We are going to add two data sources for our DCR: Linux Syslog and Windows Event Logs.

6. Click on Add data source. Under Data Source, select Syslog from the drop-down. Here, we only want to see the failed and successful logons (authorization logs) from the auth.log file, as demonstrated earlier. So, we will select the LOG_AUTH facility and choose the LOG_DEBUG minimum log level.

7. Click on Next: Destination. Click on the Add destination button and ensure that the Honeypot-LAW is selected. Click on Add data source.

8. You should see that syslog is enabled as below.

9. Now we need to add Windows Event Logs, specifically Information, Audit success, and Audit failure logs. The Information log will forward events regarding SQL Server authentication. The security Audit success and failure logs will forward events regarding RDP authentication attempts. Click on Add data source.

10. We can also add some custom rules that will forward logs when the WindowsServer-VM detects malware or when someone’s tampering with its firewall.

11. Click on Next: Destination. Click on the Add destination button and ensure that the Honeypot-LAW is selected. Click on Add data source.

12. Both our data collection rules should be created as follows:

13. Click on Review + Create.

Now, it is prudent to confirm that indeed our VMs are sending logs to our Log Analytics Workspace. Navigate to Honeypot-LAW. We should see that we have two agents, a Windows Server and a Linux Server connected.

We can also confirm that our logs are being forwarded to the LAW by using the Kusto Query Language (KQL). KQL is a read-only query language developed by Microsoft to support near real-time analysis of large volumes of data streamed in from multiple sources.

We will attempt a few failed RDP and SSH logons from our Attack Machine, as we did when we were generating logs. This time, we will use the username testing for easier querying.

The data table for Linux is Syslog. On the Honeypot-LAW, go to Logs and input the query below. You can see that the authorization logs are being forwarded successfully.
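A query along these lines confirms the forwarding (the exact message filter is an assumption; sshd log strings vary by distribution):

    // Failed SSH logons from the auth facility for our test user
    Syslog
    | where Facility == "auth"
    | where SyslogMessage contains "Failed password" and SyslogMessage contains "testing"
    | project TimeGenerated, Computer, SyslogMessage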

The data table for Windows is Event. The Event ID for Logon Failures is 4625. You can see that the security logs are being forwarded successfully.
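A minimal version of that check, assuming the DCR writes Windows events to the Event table as described:

    // Failed Windows logons (Event ID 4625) from the Security log
    Event
    | where EventLog == "Security" and EventID == 4625
    | project TimeGenerated, Computer, RenderedDescription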

Finally, we’ll make sure the Network Security Group logs are being forwarded to the LAW. The data table for NSG logs is AzureNetworkAnalytics_CL. Again, we can confirm below that everything is working as expected.
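One way to run that check, assuming Traffic Analytics’ usual column names:

    // Malicious flows that our NSGs allowed into the network
    AzureNetworkAnalytics_CL
    | where SubType_s == "FlowLog" and FlowType_s == "MaliciousFlow"
    | project TimeGenerated, SrcIP_s, DestIP_s, DestPort_d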

Step 5: Enable Tenant, Subscription, and Resource-Level Logging

The next step is to enable log forwarding from the resources we have deployed to the Log Analytics Workspace so we can see how to capture Tenant-Level, Subscription-Level and Azure PaaS Resource-Level logs in a LAW. We will work with the following resources:

  • Microsoft Entra ID (Tenant-Level)
  • Activity Log (Subscription-Level)
  • Storage Account (Resource-Level)
  • Key Vault (Resource-Level)

Step 5.1: Export Entra ID Logs to Log Analytics Workspace

We want to make sure that we’re keeping track of anyone trying to log into our tenant account or making identity management changes.

  1. Navigate to Microsoft Entra ID on the Azure Portal. Under Monitoring, select Diagnostic settings.
  2. Click on Add diagnostic setting.
  3. In the diagnostic setting page, configure the entries as below. This will ensure we have a diagnostic setting to ingest Entra ID logs. Click on Save.

You can test for AuditLogs and SigninLogs from the LAW using KQL. Feel free to use the KQL Cheat Sheet in my GitHub repo.
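For example, a couple of quick sanity checks (sketches, not exhaustive queries):

    // Recent sign-in attempts against the tenant (ResultType 0 = success)
    SigninLogs
    | project TimeGenerated, UserPrincipalName, ResultType, IPAddress
    | take 10

    // Recent identity-management changes
    AuditLogs
    | project TimeGenerated, OperationName, Result
    | take 10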

Step 5.2: Export Azure Activity Logs to Log Analytics Workspace

We’ll need to add diagnostic settings for forwarding Activity Logs. These logs track everything that happens within the Azure portal on the management plane: resource creation, deletion, and configuration changes.

  1. In the Azure Portal Search Bar, enter Monitor and select Monitor from the list of results.
  2. Select Activity Log, then Export Activity Logs.

3. Add the following diagnostic setting that sends all activity logs to our Log Analytics Workspace.

4. You can create a few test resources and then query the LAW using the AzureActivity table.
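For instance, a quick check might look like this sketch:

    // Recent management-plane operations in the subscription
    AzureActivity
    | project TimeGenerated, OperationNameValue, ActivityStatusValue, Caller
    | take 10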

Step 5.3: Export Blob Storage and Key Vault Logs to Log Analytics Workspace

Forwarding these logs to your Log Analytics Workspace is crucial. It helps you detect suspicious activity: for the storage account, someone trying to download large amounts of data, unusual access from a foreign IP, or a known IP accessing files at odd hours; for the key vault, someone trying to steal your secrets, e.g. keys, certificates, and credentials.

  1. Navigate to the geoip20241129 storage account. Click on Diagnostic settings. Note that our logs of interest cover blob storage, so click on blob.

2. Ensure all logs are forwarded to the Honeypot-LAW.

3. You can verify that storage blob logs are being forwarded by using the StorageBlobLogs table in our Honeypot-LAW. You can add and then delete a few blob files to see this in action.
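A sketch of that verification query:

    // Recent blob operations against the storage account
    StorageBlobLogs
    | project TimeGenerated, OperationName, StatusText, CallerIpAddress, Uri
    | take 10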

4. Now, let’s proceed to Key Vault. We want to see the logs of people trying to access our secret stored in key vault. Navigate to the honeypot-kv we deployed earlier.

5. Under Monitoring, click on Diagnostic settings. We want to add Audit Logs of the key vault to our LAW. Click on Add diagnostic setting and configure as below.

6. We can check our LAW by using the AzureDiagnostics table and specifying where ResourceProvider == “MICROSOFT.KEYVAULT”.
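For example:

    // Key Vault operations, e.g. attempts to read our stored secret
    AzureDiagnostics
    | where ResourceProvider == "MICROSOFT.KEYVAULT"
    | project TimeGenerated, OperationName, CallerIPAddress, ResultSignature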

Step 6: Microsoft Sentinel — Build Attack Maps and Security incidents

After successfully configuring logging for our Cloud SOC, the last step is finalizing the Microsoft Sentinel Setup. The intention here is to create malicious attack maps and generate security incidents.

We will build attack maps for each of the following endpoints by creating 4 different workbooks in Sentinel for each:

  • Linux VM: Failed SSH authentication attempts.
  • Windows VM: Failed RDP/SMB/General authentication attempts.
  • Microsoft SQL Server inside Windows VM: Failed authentication attempts.
  • Network Security Groups: Malicious flows allowed into our network.

Proceed to follow the steps below:

  1. Navigate to Microsoft Sentinel. Under Threat Management, select Workbooks. Click on Add Workbook.

2. In the New Workbook, click on Edit, then click the Edit button on the far right of the default element, then click on Remove.

3. Ensure you have an empty workbook as below.

4. Click on Add, then Add Query.

5. One approach to creating the map is to write a KQL query that pulls the necessary data from our LAW, then manually configure the map settings to visualize what we want. However, to keep it simple, paste the JSON from the link into the Advanced Editor. Then click Done Editing.

Inside the JSON data, there’s a section called query. This is the KQL query that pulls data from our Log Analytics Workspace — in this case, failed logon attempts and the IP addresses trying to break into our Linux machine. Another part of the JSON joins data from our LAW with the GeoIP watchlist we created earlier, giving us the geographical location of these attackers. There’s also a “visualization” section that tells the workbook how to visualize the data on a map. You can find and review the JSON code here: linux-ssh-auth-fail.
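For reference, the heart of that JSON is a query of roughly this shape (a simplified sketch; the IP extraction and the watchlist column names such as latitude, longitude, cityname, and countryname depend on the CSV we uploaded):

    // Join failed SSH logons with the geoip watchlist to plot attacker locations
    let GeoIPDB_FULL = _GetWatchlist("geoip");
    Syslog
    | where Facility == "auth" and SyslogMessage contains "Failed password"
    | extend AttackerIP = extract(@"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})", 1, SyslogMessage)
    | where isnotempty(AttackerIP)
    | evaluate ipv4_lookup(GeoIPDB_FULL, AttackerIP, network)
    | summarize FailureCount = count() by AttackerIP, latitude, longitude, cityname, countryname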

The result is a malicious attack map where threat actors are located (there’s limited data in the image below since my VM hasn’t been running long).

The following map is generated from the JSON script.

You should save the map as linux-ssh-auth-fail.

Now we’ll repeat the above process for the three other workbooks, naming them mssql-auth-fail, nsg-malicious-allowed-in, and windows-rdp-auth-fail. The mssql-auth-fail workbook will show a map of failed SQL Server authentication attempts on our Windows VM. The nsg-malicious-allowed-in workbook will show a map of all external public IPs passing through our cloud firewalls (remember, we exposed both of our NSGs to the public internet). And the windows-rdp-auth-fail workbook will show a map of all failed Windows RDP or SMB logon attempts.

windows-rdp-auth-fail

mssql-auth-fail

You should have the following workbooks:

Next, we’re going to create Analytics Rules in Microsoft Sentinel. These will be used to create alerts and, as a result, spin up incidents for different events happening in our environment. Analytics rules are KQL queries that define the conditions for triggering an alert. Sentinel will pull data from our Log Analytics Workspace that matches these queries and surface the results in the Incidents section of our Sentinel environment.

We have the ability to create an analytics rule manually, but for the sake of time, we will import 10+ different analytics rules at once from the <link>. For context, a hand-written scheduled rule’s query might look like the sketch below.
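This is an illustrative SSH brute-force detection (the 10-failures-per-hour threshold is an arbitrary example, not one of the imported rules):

    // Flag source IPs with 10+ failed SSH logons within a one-hour window
    Syslog
    | where Facility == "auth" and SyslogMessage contains "Failed password"
    | extend AttackerIP = extract(@"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})", 1, SyslogMessage)
    | summarize FailureCount = count() by AttackerIP, bin(TimeGenerated, 1h)
    | where FailureCount >= 10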

  1. While in Microsoft Sentinel, click on Analytics under Configuration. Click on Import.

2. Copy the contents of the link into Notepad, save the file as JSON, then import the Sentinel rules. The rules should be uploaded as follows.

Our lab is now ready for the first iteration of testing. We’ve successfully built a cloud SOC for our honeypot deployment. We’re going to let it run live for 24 hours. It is currently 9th January 2025, 8:00 PM EAT. After this period, we will proceed to part four, where we will see the detections and analyze what incidents get spun up in Microsoft Sentinel. Based on this analysis, we will harden the security posture of our deployment and observe the results after another 24 hours. We will then compare the data and draw the necessary conclusions.

Happy Learning!
