Microsoft’s disclosure of three Microsoft 365 Copilot vulnerabilities has sharpened scrutiny of how artificial intelligence tools handle sensitive workplace data across cloud productivity platforms.
The flaws, published on May 7, affect Microsoft 365 Copilot, including Business Chat, and Copilot Chat in Microsoft Edge. They are tracked as CVE-2026-26129, CVE-2026-26164 and CVE-2026-33111, all involving information disclosure risks that could allow unauthorised attackers to access data over a network. Microsoft has deployed mitigations at the cloud-service layer, meaning organisations are not required to install a separate patch or take direct administrator action to close the specific weaknesses.
The disclosure is significant because Copilot is being embedded across Microsoft’s enterprise software estate, including Outlook, Teams, Word, Excel, PowerPoint, SharePoint, OneDrive and Edge. Its value to companies rests on its ability to summarise documents, search across work content, draft messages and answer questions using business context. That same access raises the potential impact of any flaw that allows data to cross boundaries intended to restrict visibility.
CVE-2026-26129 concerns improper neutralisation of special elements in Microsoft 365 Copilot. CVE-2026-26164 involves improper neutralisation of special elements in output used by a downstream component, placing it in an injection-related weakness category. CVE-2026-33111 affects Copilot Chat in Microsoft Edge and is linked to command-injection-style handling of special elements. Each carries a CVSS 3.1 base score of 7.5: network attack vector, low attack complexity, no privileges required, no user interaction, unchanged scope and high confidentiality impact, with no rated effect on integrity or availability (vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N).
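Readers who want to check that those metrics really yield 7.5 can apply the standard base-score arithmetic from the FIRST.org CVSS 3.1 specification. The short sketch below uses the specification's published weights; the vector is reconstructed from the metrics reported above rather than quoted from Microsoft's advisory:

```python
import math

# CVSS 3.1 weights from the FIRST.org specification for the vector
# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
AV_N, AC_L, PR_N, UI_N = 0.85, 0.77, 0.85, 0.85
C_H, I_N, A_N = 0.56, 0.0, 0.0

def roundup(x: float) -> float:
    """CVSS 'Roundup': the smallest one-decimal value >= x."""
    return math.ceil(x * 10) / 10

iss = 1 - (1 - C_H) * (1 - I_N) * (1 - A_N)        # impact sub-score = 0.56
impact = 6.42 * iss                                # scope unchanged
exploitability = 8.22 * AV_N * AC_L * PR_N * UI_N  # ~3.89

base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 7.5
```

The exercise also makes the limits of the score visible: integrity and availability contribute nothing here, so the 7.5 reflects confidentiality exposure alone.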
The technical details released publicly do not indicate active exploitation, and exploit code maturity has been described as unproven. Even so, the vulnerabilities have drawn attention because they sit within AI systems that can process emails, files, prompts, browser context and organisational records depending on licensing, configuration and user permissions. For security teams, the central concern is not only whether an attacker can trigger a flaw, but what data an AI assistant can reach once a weakness is present.
Microsoft 365 Copilot is designed to respect existing permissions, meaning users should only receive answers based on content they are authorised to access. That model depends heavily on clean identity controls, accurate file permissions and disciplined data governance. Enterprises with broadly shared SharePoint libraries, legacy access groups or weak classification practices may find that AI assistants expose governance gaps faster than traditional search tools.
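The permission-trimming model is easiest to see in miniature. In the deliberately simplified sketch below, which uses illustrative data structures rather than anything from Microsoft's implementation, retrieval filters every matching document against the requesting user's group memberships, so a stale or over-broad access control entry flows straight through into what the assistant can quote:

```python
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str
    allowed_groups: set[str]   # ACL: groups permitted to read this document

@dataclass
class User:
    name: str
    groups: set[str]           # memberships resolved from the directory

def retrieve(query: str, corpus: list[Document], user: User) -> list[Document]:
    """Permission-trimmed retrieval: match first, then drop anything the
    requesting user is not entitled to read."""
    matches = [d for d in corpus if query.lower() in d.text.lower()]
    return [d for d in matches if d.allowed_groups & user.groups]

corpus = [
    Document("Q3 board pack", "acquisition strategy and targets", {"executives"}),
    Document("Staff handbook", "leave policy and wellbeing strategy", {"all-staff"}),
]
analyst = User("analyst", {"all-staff"})

# Only the handbook survives trimming. Add "all-staff" to the board pack's
# ACL (a classic legacy-group mistake) and it becomes quotable to everyone.
print([d.title for d in retrieve("strategy", corpus, analyst)])  # ['Staff handbook']
```

The point of the toy example is the failure mode: nothing in the retrieval step distinguishes a deliberate grant from a legacy one, which is why oversharing audits matter more once an AI assistant is searching on users' behalf.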
Copilot Chat in Edge adds another layer of risk because it can interact with web pages, uploaded files and, in certain configurations, workplace content open in the browser. Microsoft’s own privacy documentation says prompts and responses are processed within the Microsoft 365 service boundary and are not used to train foundation models, while some generated web queries may be sent to Bing without tenant identifiers. The vulnerabilities underline how complex those boundaries become when browser context, cloud productivity data and AI orchestration meet.
The disclosures also arrive as companies accelerate adoption of generative AI assistants to reduce administrative work and improve knowledge retrieval. Financial institutions, law firms, consultancies, government contractors and healthcare organisations are among the sectors most sensitive to information leakage because internal communications can include client records, commercial negotiations, regulatory material, intellectual property and board-level strategy. A vulnerability that affects confidentiality rather than system availability may still carry serious business consequences.
Microsoft’s cloud-side remediation limits the immediate operational burden, but it does not remove the need for defensive checks. Security teams are expected to review Copilot access policies, audit overshared repositories, monitor unusual prompt activity and ensure that data loss prevention rules apply consistently across Microsoft 365 services. Organisations using Edge-based Copilot features may also need to examine browser policies, web content access settings and whether staff can allow Copilot to interact with internal pages or PDFs.
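As a concrete starting point for that audit, sharing links can be enumerated through Microsoft Graph. The sketch below is a rough outline rather than a production tool: it assumes an app registration with the Files.Read.All or Sites.Read.All application permission and a pre-acquired bearer token, uses placeholder identifiers, and checks only the top level of a single document library:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<bearer token from an app registration>"  # placeholder, not a real secret
DRIVE_ID = "<drive id of the library to audit>"    # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def list_children(item_id: str | None = None) -> list[dict]:
    """Top-level listing of a drive (or of one folder if item_id is given)."""
    path = f"items/{item_id}" if item_id else "root"
    resp = requests.get(f"{GRAPH}/drives/{DRIVE_ID}/{path}/children",
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json().get("value", [])

def broad_links(item_id: str) -> list[dict]:
    """Permissions on an item that are sharing links scoped beyond named people."""
    resp = requests.get(f"{GRAPH}/drives/{DRIVE_ID}/items/{item_id}/permissions",
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return [p for p in resp.json().get("value", [])
            if p.get("link", {}).get("scope") in ("organization", "anonymous")]

# Flag items carrying organisation-wide or anonymous sharing links. A real
# audit would recurse into folders and page through large result sets.
for item in list_children():
    for perm in broad_links(item["id"]):
        print(f'{item["name"]}: {perm["link"]["scope"]} {perm["link"].get("type", "")} link')
```

Items that come back with organisation-wide or anonymous links are a natural first target when tightening access ahead of a broad Copilot rollout.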