Once again, Big Tech faces intense scrutiny regarding its involvement in Israel’s military actions against Palestinians. Recently, Microsoft drew attention for selling artificial intelligence models and cloud services to the Israeli military, sparking protests from concerned employees. In response, Microsoft asserts there’s no evidence its products have harmed people in Gaza—though its ability to investigate is limited.
On Thursday, Microsoft announced it had conducted reviews of the Israeli Ministry of Defense’s use of its technology. The company stated, “We take these concerns seriously,” adding, “We found no evidence to date that Microsoft’s Azure and AI technologies have been used to target or harm people in the conflict in Gaza.”
However, Microsoft didn’t specify the external firm that conducted the review, nor did it elaborate on its methodology. It merely mentioned that the process involved interviewing employees and examining documents. Microsoft also acknowledged its reviews have limitations, specifically citing a lack of visibility into how its software is employed on private servers outside its cloud infrastructure.
An increasing sense of urgency has emerged within Microsoft following a February report that revealed its $133 million contract with Israel. According to AP News, the Israeli military’s use of Microsoft and OpenAI technology surged nearly 200-fold after the Hamas-led attack on Israel on October 7, 2023. Notably, Microsoft’s Azure platform is reportedly used to manage data acquired through mass surveillance, storing over 13.6 petabytes of information—roughly 350 times the data needed to store every book in the Library of Congress.
Last year, Microsoft terminated two employees who organized an “unauthorized” vigil for Palestinians killed in Gaza. In February, the company removed five employees from a town hall for voicing objections to its contracts with Israel. Most recently, software engineer Ibtihal Aboussad made headlines at Microsoft’s 50th-anniversary AI celebration by confronting company leadership with, “Shame on you. You are a war profiteer. Stop using AI for genocide.”
The Verge reported that Aboussad emailed staff across the company, stating, “Microsoft cloud and AI enabled the Israeli military to be more lethal and destructive in Gaza than they otherwise could,” while urging employees to sign the No Azure for Apartheid petition and expressing, “We will not write code that kills.”
The timing of Microsoft’s announcement coincides with an upcoming Seattle conference where No Azure for Apartheid plans to demonstrate. Although Microsoft noted that the Israeli military must adhere to its terms of use, which mandate responsible AI practices, such reassurances ring hollow given Israel’s record of violating international law. A group of independent human rights experts found that Israel has systematically inflicted suffering on civilians in the occupied territories through acts of murder, torture, and the bombing of essential services.
Additionally, charges of genocide have been levied against Israel, a serious allegation under international law. The Genocide Convention defines genocide as actions taken “with intent to destroy, in whole or in part, a national, ethnical, racial, or religious group.” The Gaza Health Ministry reported that fatalities have surpassed 50,000, with some analyses indicating that Israel has entirely wiped out over 1,200 families. The violence, which escalated dramatically after the attacks of October 7, has raised questions about whether those actions meet the criteria for intent to destroy.
Big Tech has long been implicated in supporting Israel’s military operations, as seen with Google and Amazon’s involvement in Project Nimbus. While Microsoft may strive to minimize its liability by stating its technologies weren’t directly responsible for harm, the reality remains that these tools empower Israel’s military capacity to inflict further harm on Palestine and its residents.
What does Microsoft’s involvement mean for the future of technology and ethics? As conversations around corporate responsibility deepen, it’s essential to hold these companies accountable.
Is Microsoft using its technology to support military efforts? Microsoft claims there is no evidence its products are being used to target or harm civilians in Gaza, despite significant contracts with the Israeli military. However, employee protests reveal deep concern about the relationship.
Have protests against Microsoft’s contracts with Israel gained traction? Yes, employee-led protests have intensified, particularly as awareness of the humanitarian situation in Gaza has increased. Microsoft employees are vocalizing their opposition to the company’s contracts.
What commitments has Microsoft made regarding the use of its technology? Microsoft asserts that its technology must adhere to responsible AI practices. Yet, critics question the enforceability of these commitments, especially considering the historical context of Israel’s actions.
As we navigate complex issues at the intersection of technology and human rights, it’s crucial to remain informed. There’s much more to explore on these interconnected topics.