
Microsoft launches Security Copilot, its GPT-4 assistant for cybersecurity

The new AI-powered assistant will help identify vulnerabilities and make recommendations. Here’s everything you need to know!

An assistant for cybersecurity professionals

Microsoft announced the arrival of Security Copilot, a new AI-powered assistant for cybersecurity professionals, during its online event Microsoft Secure. The tool combines OpenAI’s GPT-4 technology with a security model developed by Microsoft, runs on Azure infrastructure, and can analyze more than 65 billion signals a day.

According to the company, the new assistant is meant to address a major problem in cybersecurity: the imbalance between constant threats and the shortage of qualified personnel to respond to them.

“Cybersecurity professionals are engaged in an asymmetrical battle against prolific, relentless, and sophisticated attackers. […] This challenge is compounded by the global shortage of qualified security professionals, which translates to an estimated 3.4 million unfilled cybersecurity jobs worldwide.”

The program is not intended to replace human expertise, but rather to assist professionals so they can “respond to security incidents in minutes rather than hours or days.”

Related: Impressive GPT-4’s Ability to Hack and Use Various Tools to Answer Questions and Solve Complex Tasks

What can Microsoft Security Copilot do?

Like Microsoft Copilot, announced a few weeks ago, the tool takes the form of a simple prompt box. Cybersecurity professionals can use it for various requests, such as:

  • Asking about vulnerabilities and incidents: the AI can even rank them by order of importance.
  • Requesting advice: Security Copilot can lay out the steps to follow when responding to a detected vulnerability.
  • Integrating documents: files, URLs, and code snippets can be fed into the tool, and the assistant can be asked questions about them.
  • Working in teams: a built-in pinboard lets teams pin responses that could be useful to colleagues.
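
To make the “prompt box” workflow above more concrete, here is a minimal sketch of what a prompt-driven triage request could look like in code. It does not use Security Copilot itself (Microsoft has not published an API for it in this announcement); it calls the public OpenAI GPT-4 chat API instead, and the incident data, host names, and account names are hypothetical.

```python
# Illustrative sketch only: Security Copilot's own interface is not publicly
# documented in this announcement, so this uses OpenAI's public GPT-4 chat API
# to show the general "prompt box" pattern described above.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical incident data an analyst might paste into the prompt box.
incident_summary = """
- CVE-2023-23397: Outlook elevation-of-privilege, exploited in the wild
- Suspicious PowerShell execution on host FIN-SRV-02 at 03:14 UTC
- Failed MFA prompts for admin account 'svc-backup' (27 attempts)
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a security assistant. Rank the findings by "
                    "severity and propose concrete response steps for each."},
        {"role": "user", "content": incident_summary},
    ],
)

# The model's answer: a severity-ranked list with suggested next steps.
print(response.choices[0].message.content)
```

The point of the sketch is the interaction pattern, not the plumbing: the analyst supplies raw findings in natural language, and the model returns a prioritized assessment, which mirrors the “ask about vulnerabilities” and “request advice” use cases listed above.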

Microsoft, however, clarifies that its assistant, like any AI-powered program, can make mistakes. It is equipped with a learning loop that allows it to continuously improve using information both internal and external to the organization. Security Copilot can also surface information about items detected by other cybersecurity tools, and the chatbot cites the sources it used to generate its responses.

Strictly confidential data

Regarding the use of data, Microsoft assures that the AI will be used in a “safe, secure, and responsible” manner, and that customer data will not be used to feed the AI beyond the organization’s own boundaries.

“Your data is your data. You own and control it, and you choose how you want to leverage and monetize it. Microsoft ensures that your data is not used to train or enhance foundation AI models used by other entities. No one outside your organization benefits from AI trained on your data or business processes.”

For now, Microsoft has not announced a release date for Security Copilot.
