How Microsoft Manages Its Own Insider Risk Using Artificial Intelligence & Machine Learning

Pete Boden, General Manager, Digital Security & Risk Engineering at Microsoft

It’s unfortunate that you can’t seem to scroll through your news feed without hearing about another security breach, another data leak, or an intellectual property (IP) theft case in which a company has lost millions of dollars or had its trade secrets stolen. According to one study, 90% of organizations feel vulnerable to insider attacks, and about 53% have had confirmed insider attacks against their organization in the previous 12 months. Quite sobering.

Bret Arsenault, Chief Information Security Officer for Microsoft, shares a similar sentiment: insider threat is one of the things that keeps him up at night when it comes to protecting Microsoft’s assets, our employees, and our customers’ data.

Today, employees are empowered to use technology to create, store, and share information across devices, but this has resulted in a complex digital environment that is tough for organizations to manage. Popular technologies used to deter threats include Data Loss Prevention (DLP), encryption, and identity and access management solutions. While those security services are necessary, they’re not enough. Focusing on insider threat is different from simply protecting the perimeter: it requires knowledge of the organization at a global level and a formula that strikes the right balance between the activity that has been detected and the digital artifacts surrounding it. In other words, it takes a collaborative partnership across company functions, paired with solutions that leverage innovations in artificial intelligence to help us not only automate and add efficiencies, but also stay a step ahead.

At Microsoft, we have an Insider Risk Program that is made up of digital security, HR, legal, and privacy teams to help prevent and mitigate insider threats without negatively impacting employee productivity and privacy or hindering our learning culture. The program team actively collaborates with key internal partners and industry peers to identify risk-based insider threat scenarios. This allows us to prioritize our investment of resources on the highest-risk activity.

One of these high-risk scenarios is the intentional export of intellectual property. As such, the Insider Threat Program worked with the Data Intelligence team in Core Services Engineering at Microsoft to develop a machine learning (ML) model. It uses Azure Data Lake Store and Azure Data Factory to detect and provide alerts for unusual SharePoint Online activity, which could potentially indicate theft of intellectual property. The ML model automates the toil of finding “needles in a haystack”. It also optimizes the efficiency of data analysis and reduces false alerts, which further minimizes disruption to daily business activity. We then review concerning alerts with HR and business leaders to determine whether the activity is expected or unexpected, as well as the potential business impact, so we can decide if further response is required. The data intelligence we get from the model helps it continually learn and become smarter over time.
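
To make this concrete, here is a minimal sketch of how unusual SharePoint Online activity could be flagged with an unsupervised anomaly detector. The input file, feature columns, and contamination rate are illustrative assumptions for this post, not the actual features or model behind our internal tool.

    # Minimal sketch: score per-user daily SharePoint Online activity and surface
    # outliers for human review. Column names and the contamination rate are
    # illustrative assumptions, not Microsoft's internal model.
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    # One row per user per day, aggregated upstream from SharePoint Online audit
    # logs (hypothetical export, e.g. prepared with Azure Data Factory).
    activity = pd.read_csv("sharepoint_daily_activity.csv")
    features = activity[["files_downloaded", "files_synced",
                         "distinct_sites_accessed", "off_hours_events"]]

    # Fit on recent history, assuming the vast majority of rows reflect
    # normal business activity.
    model = IsolationForest(contamination=0.01, random_state=42)
    activity["is_outlier"] = model.fit_predict(features) == -1

    # Only the outliers are routed to analysts, which keeps false alerts down.
    alerts = activity[activity["is_outlier"]]
    print(alerts[["user_id", "date", "files_downloaded", "distinct_sites_accessed"]])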

After testing and using the internally built tool for over a year, we shared our internal solution with the Microsoft 365 product team to see if we could help our enterprise customers who face similar challenges. As a result, the Microsoft 365 compliance engineering team just announced general availability of the new Microsoft Insider Risk Management solution, which helps organizations quickly identify, detect, and take action on insider threats. For example, you could see if a user submitted their resignation and subsequently downloaded sensitive files and copied them to a USB device.
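
To illustrate the resignation example above, a simplified correlation might look like the sketch below. The event types, field names, and 30-day window are hypothetical stand-ins, not the product’s actual policy schema.

    # Minimal sketch: flag sensitive downloads and USB copies that occur after a
    # user submits a resignation. Event and field names are hypothetical.
    from datetime import datetime, timedelta

    def flag_departing_user_activity(hr_events, device_events, window_days=30):
        # Map each user to the time HR recorded their resignation.
        resignations = {e["user"]: e["time"] for e in hr_events
                        if e["type"] == "resignation_submitted"}
        alerts = []
        for ev in device_events:
            resigned_at = resignations.get(ev["user"])
            if resigned_at is None:
                continue
            # Only consider activity after the resignation, within the window.
            in_window = resigned_at <= ev["time"] <= resigned_at + timedelta(days=window_days)
            if in_window and ev["type"] in ("sensitive_file_download", "usb_file_copy"):
                alerts.append(ev)
        return alerts

    # Example: a USB copy four days after a resignation is flagged for review.
    hr = [{"user": "u1", "type": "resignation_submitted", "time": datetime(2020, 2, 1)}]
    dev = [{"user": "u1", "type": "usb_file_copy", "time": datetime(2020, 2, 5)}]
    print(flag_departing_user_activity(hr, dev))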

The solution uses the Microsoft Graph and other services to look for irregular signals across Windows, Azure, and Office products such as SharePoint, OneDrive, Teams, and Outlook. Additional third-party signals from human resources (HR) systems such as SAP SuccessFactors and Workday can also be integrated via connectors. A comprehensive view then provides a curated summary of individual risks within your organization, including a historical timeline of relevant activities and trends associated with each identified user. The Microsoft 365 team also accounted for privacy: display names are anonymized by default to maintain confidentiality and prevent conflicts of interest.
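
As a rough illustration of that privacy default, display names could be replaced with stable pseudonyms before they ever reach an analyst’s queue. The keyed-hash scheme and pseudonym format below are assumptions for illustration, not the product’s implementation.

    # Minimal sketch: show a stable pseudonym instead of a real display name.
    # The HMAC-based scheme and "User-XXXXXXXX" format are illustrative only.
    import hashlib
    import hmac

    def pseudonymize(display_name: str, tenant_secret: bytes) -> str:
        # Keyed hash: stable within one organization, but not reversible and not
        # recomputable by anyone who lacks the tenant secret.
        digest = hmac.new(tenant_secret, display_name.encode("utf-8"), hashlib.sha256)
        return "User-" + digest.hexdigest()[:8].upper()

    # Analysts see only the pseudonym; the real name is revealed to authorized
    # investigators if a case is escalated.
    print(pseudonymize("Contoso Employee", b"per-tenant secret"))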

It’s gratifying that our internal digital security team could be a part of creating a solution that not only helps keep Microsoft’s assets and our employees safe, but can also help our enterprise customers who lie awake at night struggling with the same challenges. Bret Arsenault, our CISO, also shared his thoughts on our journey in this post.

For additional resources and information, check out the Insider Threat session we co-led with the Microsoft 365 team at Microsoft Ignite 2019, and visit Microsoft’s Tech Community blog to learn more about how Microsoft is developing Insider Risk Management and Communication Compliance solutions to help companies address insider risks and code-of-conduct policy violations.
