Azure Sentinel News

A new detection model for Azure Sentinel

by Azure Sentinel News Editor
November 12, 2020
in IR, KQL

By: Christophe Parisel

Picking up where we left off in part 1, we know that time series decomposition is not entirely suited for detecting cyberattacks from the Azure Activity logs produced by the plentiful SPNs operating in our subscriptions. Let’s figure out what their limits are and how we could get around them in Azure Sentinel.

Current limitations

In the context of detecting suspicious operations, I think the three main grievances one might have against anomaly decomposition are:

  1. Non-distributivity. As we discovered previously, anomalies(op1+op2) != anomalies(op1) + anomalies(op2). Likewise, anomalies(spn1+spn2) != anomalies(spn1) + anomalies(spn2). To perform detection at scale, with so many ops and SPNs to manage, it would be highly desirable for anomaly detection to be at least roughly distributive.
  2. No learning capability. An anomaly that triggers once will always trigger, even if it is a false positive (or a benign true positive). This approach is not sustainable in a context of automated devSecOps.
  3. No time-orientation. While analyzing things in the right order might not be crucial for failure prediction and health monitoring, it is of key importance for cybersecurity: patching an image before publishing it in a registry is better than publishing first and patching afterwards. Time-orientation eliminates many false positives (but it could also ignore some true positives). We could take time-orientation for granted, because one can’t imagine anything more chronological than time-series. But in fact, the process of decomposition destroys chronology: the only component that retains a flavor of time-orientation is the seasonality. Unfortunately, as we have seen previously, even automated tasks, when complex, can be unseasonal.

In our search for a successful replacement for time-series decomposition, we must strive to obtain those three properties: distributivity, memorization and chronology.

But above all, we must find the right balance between perfect and functional anomaly detection. This is really important if we want to go anywhere. In support of this argument, let me quote Mahmoud ElAssir, VP of Customer Experience at Google Cloud:

Complexity needs to be managed because it’s too complex to solve. What you want to do is manage complexity with better measurements, better prediction, and better accountability

Achieving better detection with Markov models

I propose to follow a classical approach in anomaly detection: evaluate the ebb and flow of SPN activity against a first-order hidden Markov model.

Such models are made of two parts: a “hidden state” and “observable outcomes”. Here, the hidden state (also named the ‘emission matrix’) holds all acceptable transitions between two subsequent operations of the {OperationNameValue} set. It is a square matrix of rank c, where c is the cardinality of {OperationNameValue}.

Observable outcomes are long sequences of legitimate operations taken from Azure Activity logs.

The construction of the emission matrix is straightforward: each time operation A is followed by operation B in a given time-series, we increase a counter at coordinates (A,B). This counter simply tracks the number of A->B transitions in the series. When we have ingested the whole data set, we normalize each row so that every cell represents a probability and the row sums to 1.0.
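The counting-then-normalizing step above can be sketched in a few lines of Python. This is a minimal illustration, not the author's implementation; the operation names are made up for the example.

```python
from collections import defaultdict

def build_emission_matrix(ops):
    """Count each A->B transition in an ordered sequence of operation
    names, then normalize every row into transition probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(ops, ops[1:]):
        counts[a][b] += 1
    matrix = {}
    for a, row in counts.items():
        total = sum(row.values())
        matrix[a] = {b: n / total for b, n in row.items()}
    return matrix

# Toy series of operations (names are illustrative, not real log values)
ops = ["Create VM", "Start VM", "Create VM", "Start VM", "Create VM", "Delete VM"]
m = build_emission_matrix(ops)
```

Here "Create VM" is followed by "Start VM" twice and by "Delete VM" once, so its row normalizes to probabilities 2/3 and 1/3.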

Optimizations

To keep the matrix rank small, we may hash operation names with a modulus (at the expense of precision). Kusto’s built-in hash(object,modulus) is good for that, but beware that the algorithm is subject to change by Microsoft without notice.
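The bucketing idea can be mimicked outside Kusto like this. The function below is a hypothetical stand-in, not Kusto's actual algorithm: it uses a stable stdlib hash, whereas Kusto's internal hash may change without notice. Either way, two distinct operations may collide, which is the precision cost mentioned above.

```python
import hashlib

def op_bucket(op_name: str, modulus: int = 256) -> int:
    """Map an operation name to a small bucket index, analogous to
    hashing with a modulus in Kusto. SHA-1 is used here only because
    it is deterministic across runs; collisions still cost precision."""
    digest = hashlib.sha1(op_name.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % modulus
```

With modulus 256, the emission matrix shrinks to rank 256 regardless of how many distinct operation names appear in the logs.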

To make the process less CPU intensive, we may replace the emission matrix with a simpler object, without loss of precision: a logical matrix. That’s not a problem, because we do not want to know the likelihood of a given transition between two ops; we just want to know whether the transition is legitimate (probability > 0.0) or not (probability == 0.0).

In the logical matrix, the “ones” represent legitimate transitions, and the “zeroes” represent unexpected transitions. Hitting a zero during a routine evaluation is like setting off a canary or detonating a URL: we have found an anomaly which needs to be investigated.
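Collapsing the counts into a logical matrix and scanning a sequence for zeroes could look like the following sketch (a toy 3x3 example with operations already encoded as indices; none of this is the author's code):

```python
import numpy as np

def to_logical(counts: np.ndarray) -> np.ndarray:
    """Collapse a transition-count matrix into a logical matrix:
    True where the transition was ever observed, False otherwise."""
    return counts > 0

def find_anomalies(logical: np.ndarray, sequence: list) -> list:
    """Return every A->B transition in `sequence` that hits a zero."""
    return [(a, b) for a, b in zip(sequence, sequence[1:]) if not logical[a, b]]

# Training counts: only 0->1, 1->0 and 1->2 were ever observed
counts = np.array([[0, 3, 0],
                   [1, 0, 2],
                   [0, 0, 0]])
logical = to_logical(counts)
# Transitions 2->0 and 0->2 were never seen in training, so both fire
anomalies = find_anomalies(logical, [0, 1, 2, 0, 2])
```

Evaluation is then a cheap boolean lookup per transition, with no floating-point work at all.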

Model assessment

Distributivity

Distributivity should be “good enough” if we take care to group SPNs into families with similar semantics so as to reduce:

a) false negatives caused by artefacts[*]

b) false positives in the symmetrical difference[**]

This grouping is very business-dependent; it’s not guaranteed to scale well with the number of SPNs, but when it does, it’s not difficult to identify and set up.

Without grouping we have:

markov(spn1 OR spn2) = markov(spn1) OR markov(spn2) OR artefacts(spn1,spn2) OR delta(spn1,spn2)

With proper grouping, we hope to have: markov(spn1 OR spn2) ~= markov(spn1) OR markov(spn2)
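Merging the models of two grouped SPNs is just an element-wise OR of their logical matrices, as the approximation above states. A small sketch with made-up 3x3 matrices:

```python
import numpy as np

# Logical matrices for two SPNs in the same family (toy values)
spn1 = np.array([[1, 1, 0],
                 [0, 1, 0],
                 [0, 0, 1]], dtype=bool)
spn2 = np.array([[1, 0, 0],
                 [0, 1, 1],
                 [0, 0, 1]], dtype=bool)

def merge_models(m1: np.ndarray, m2: np.ndarray) -> np.ndarray:
    """Element-wise OR: the approximation
    markov(spn1 OR spn2) ~= markov(spn1) OR markov(spn2)."""
    return np.logical_or(m1, m2)

family = merge_models(spn1, spn2)
```

The artefact and delta terms from the ungrouped formula are exactly what this OR ignores; proper grouping is what makes that omission acceptable.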

Memorization and chronology

The learning capability is straightforward: acknowledging a false positive and ignoring it in future evaluations just means OR-ing the false positive into the existing matrix.
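That whitelisting step amounts to flipping one cell of the logical matrix to True, sketched here with illustrative indices:

```python
import numpy as np

def learn_false_positive(logical: np.ndarray, a: int, b: int) -> np.ndarray:
    """After an analyst confirms transition a->b was a false positive,
    OR it into the matrix so future evaluations stay silent on it."""
    updated = logical.copy()
    updated[a, b] = True
    return updated

model = np.zeros((3, 3), dtype=bool)
model = learn_false_positive(model, 0, 2)  # whitelist the 0->2 transition
```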

Time-orientation is ensured by design: the higher the order of the model, the more time-oriented it will be. In practice, however, memory constraints limit us to orders 1 and 2.
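To make the memory trade-off concrete: at order 2, the state is the pair of the last two operations, so the table grows from c rows to up to c^2 rows. A hypothetical order-2 sketch (tracking only observed successors, in the logical-matrix spirit):

```python
from collections import defaultdict

def build_order2(ops):
    """Order-2 variant: key on the last *two* operations. The table can
    have up to c^2 keys, which is the memory pressure noted above."""
    seen = defaultdict(set)
    for a, b, c in zip(ops, ops[1:], ops[2:]):
        seen[(a, b)].add(c)
    return dict(seen)

table = build_order2(["A", "B", "C", "A", "B", "D"])
```

Here the pair ("A", "B") has been followed by both "C" and "D", so the model reasons over triples of operations rather than pairs.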

Conclusion

A simplified Markov model looks like a good substitute for anomaly decomposition when tackling the seemingly intractable problem of spotting outlying Azure activities for a given SPN. On one hand, three properties work in sympathy to drastically limit false positives: this is an important criterion for performing sustainable devSecOps. On the other hand, keeping a record of transitions offers assurance that most true positives won’t be missed. This is an equally important criterion, this time for cyberdefense.

The main current grey area is whether the model scales as the number of SPNs grows. If not, its use could be limited to business-critical SPNs.

In part 3 (the next instalment), I will describe a case study to support the conclusions we’ve reached so far, and show how we can stitch this together with the native and superb Azure Sentinel incident management workflow.

In part 4, I will describe a pen-testing tool (yes, you read that right…) I use to probe this model against fraud.

Finally, let me quote the second part of Mahmoud ElAssir’s point on complexity:

What you want to do is manage complexity with better measurements, better prediction, and better accountability. In other words, better data management and analytics.

Notes

[*]: artefacts are caused by artificial transitions across two SPNs: an operation triggered by SPN1 is followed incidentally by an operation triggered by SPN2.

[**]: the more similar two SPNs are, the smaller the symmetrical difference of their logical matrices.


Reference: https://www.linkedin.com/pulse/improve-detection-scale-azure-sentinel-christophe-parisel/

Copyright © 2020 - Azure Sentinel News
