Microsoft’s AI researchers have made a huge mistake.
According to a new report from cloud security company Wiz, Microsoft’s AI research team accidentally leaked 38 TB of the company’s private data.
38 terabytes. That is a lot of data.
The exposed data included full backups of two employees’ computers. These backups contained sensitive personal data, including passwords to Microsoft services, secret keys, and more than 30,000 internal Microsoft Teams messages from more than 350 Microsoft employees.
So, how did it happen? The report explains that Microsoft’s AI team uploaded a bucket of training data containing open-source code and AI models for image recognition. Users who came across the GitHub repository were given a link to Azure Storage, Microsoft’s cloud storage service, to download the models.
One problem: the link provided by Microsoft’s AI team gave visitors full access to the entire Azure storage account. Visitors could not only view everything in the account, they could also upload, overwrite, or delete files.
Wiz says this is due to an Azure feature called Shared Access Signature (SAS) tokens, which produce “a signed URL that grants access to Azure Storage data.” A SAS token can be scoped to restrict access to a single file or set of files. This particular link, however, was configured to grant full access to the entire account.
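For context, here is a minimal sketch, using the azure-storage-blob Python SDK, of the difference between a narrowly scoped SAS token and an account-level one. The account name, key, container, and blob names below are placeholders for illustration, not details from the Microsoft incident.

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import (
    AccountSasPermissions,
    BlobSasPermissions,
    ResourceTypes,
    generate_account_sas,
    generate_blob_sas,
)

ACCOUNT_NAME = "examplestorageacct"   # placeholder storage account
ACCOUNT_KEY = "<account-key>"         # placeholder account key
expiry = datetime.now(timezone.utc) + timedelta(hours=1)

# Narrow scope: read-only access to a single blob, expiring in one hour.
scoped_token = generate_blob_sas(
    account_name=ACCOUNT_NAME,
    container_name="ai-models",
    blob_name="image-recognition-model.ckpt",
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),
    expiry=expiry,
)

# Broad scope: an account-level SAS allowing read, list, write, and delete
# across the whole storage account -- the kind of over-permissive token
# Wiz describes.
broad_token = generate_account_sas(
    account_name=ACCOUNT_NAME,
    account_key=ACCOUNT_KEY,
    resource_types=ResourceTypes(service=True, container=True, object=True),
    permission=AccountSasPermissions(read=True, list=True, write=True, delete=True),
    expiry=expiry,
)

# Either token is appended to a storage URL as a query string, e.g.
# https://examplestorageacct.blob.core.windows.net/ai-models/model.ckpt?<token>
```

The key point is that the scope and permissions live inside the signed URL itself, so anyone who holds the link holds whatever access it encodes.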
Adding to the potential problems, according to Wiz, is that the data appears to have been exposed since 2020.
Wiz contacted Microsoft earlier this year, on June 22, to warn the company of its discovery. Two days later, Microsoft invalidated the SAS token, closing off the exposure. Microsoft completed an investigation into the potential impact in August.
Microsoft provided TechCrunch with a statement saying that “no customer data was exposed and no other internal services were compromised due to this issue.”