In its summary, Meta said the law firm had “noted the potential for Meta’s platforms to be connected to salient human rights risks caused by third parties,” including “advocacy of hatred that incites hostility, discrimination, or violence.”
The assessment, it added, did not cover “accusations of bias in content moderation.”
Rights groups for years have raised alarms about anti-Muslim hate speech stoking tensions in India, Meta’s largest market globally by number of users.
Its top public policy executive in India stepped down in 2020 following Wall Street Journal reporting that she opposed applying the company’s rules to Hindu nationalist figures flagged internally for promoting violence.
In its report, Meta said it was studying the India recommendations but did not commit to implementing them as it did with other rights assessments.
Asked about the difference, Meta Human Rights Director Miranda Sissons pointed to United Nations guidelines cautioning against risks to “affected stakeholders, personnel or to legitimate requirements of commercial confidentiality.”
“The format of the reporting can be influenced by a variety of factors, including security reasons,” Sissons told Reuters.
Sissons, who joined the company in 2019, said her team now comprises eight people, while about 100 others work on human rights within related teams.
In addition to country-level assessments, the report outlined her team’s work on Meta’s COVID-19 response and Ray-Ban Stories smart glasses, which involved flagging possible privacy risks and effects on vulnerable groups.
Sissons said analysis of augmented and virtual reality technologies, which Meta has prioritized with its bet on the “metaverse,” is largely taking place this year and would be discussed in subsequent reports.
© Thomson Reuters 2022