From the JPost article:
Two articles published in the last few days were part of the IDF investigation: one from the Jewish Chronicle and one from the German tabloid Bild. Both claimed to reveal internal, top-secret Hamas documents, supposedly straight from Yahya Sinwar's computer.
Not the NYT. I'm not sure how the Times gets painted with that brush for declining to publish based on those documents. Damned if you do, damned if you don't, I guess.
Your claim about the sexual violence article is also not true. The NYT spent a month re-reporting that story, which didn't result in a single correction. Their reporting is also supported by a UN investigation that concluded there "are reasonable grounds to believe that conflict-related sexual violence — including rape and gang-rape — occurred across multiple locations of Israel and the Gaza periphery during the attacks on 7 October 2023." I don't think you can accuse the UN of collaborating with the IDF.
From this article:
The Times assessed the documents’ authenticity by sharing some of their contents with members of and experts close to Hamas. Salah al-Din al-Awawdeh, a Hamas member and a former fighter in its military wing who is now an analyst based in Istanbul, said that he was familiar with some of the details described in the documents and that keeping organized notes was consistent with the group’s general practices. A Palestinian analyst with knowledge of Hamas’s inner workings, who spoke on the condition of anonymity to discuss sensitive topics, also confirmed certain details as well as general structural operations of Hamas that aligned with the documents.
The Israeli military, in a separate internal report obtained by The Times, concluded the documents were real and represented another failure by intelligence officials to prevent the Oct. 7 attack. The Times also researched details mentioned in the meeting records to check that they corresponded with actual events.
No. If there were serious, pervasive bias impacting scores, it would lower the correlation, and MBFC would be an outlier in the group because it would agree with the others less often. If something is happening at such a low level that it doesn't affect the correlation, it's just noise. Multiple researchers conclude that the differences between monitors are too small to impact downstream analysis, which is hard to square with your claim. And each entry represents about 0.01% of their content, so what fraction of that data is being used to draw sweeping conclusions about the whole?
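The statistical point here is easy to demonstrate with a toy simulation (entirely hypothetical numbers, not real ratings): if one monitor systematically shifts the scores of a subset of outlets, its pairwise correlation with honest monitors drops, making it stand out as an outlier.

```python
import random

random.seed(0)

# Hypothetical setup: 200 outlets with a latent "quality" score.
# Two honest monitors rate with small independent noise; a third
# systematically inflates scores for half the outlets (pervasive
# directional bias of the kind alleged above).
n = 200
truth = [random.gauss(0, 1) for _ in range(n)]

def noisy(scores):
    # An honest monitor: truth plus small random measurement error.
    return [x + random.gauss(0, 0.3) for x in scores]

honest_a = noisy(truth)
honest_b = noisy(truth)
# Biased monitor: same noise, plus a +1.5 bump on the first half.
biased = [x + random.gauss(0, 0.3) + (1.5 if i < n // 2 else 0.0)
          for i, x in enumerate(truth)]

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print("honest vs honest:", round(pearson(honest_a, honest_b), 3))
print("honest vs biased:", round(pearson(honest_a, biased), 3))
```

Under these assumptions the honest pair correlates far more strongly than either does with the biased rater, which is exactly why near-perfect observed agreement among monitors is evidence against large systematic bias in any one of them.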
There is simply high agreement about what constitutes a high- or low-quality news site. The notion that MBFC is somehow inferior to other bias monitors, or extremely biased, is not supported by evidence. If one of those organizations is better than the others, it isn't much better. As this study concludes, because the level of agreement between them is so high, it doesn't really matter which one you use. They're all fine. Even they think so: not only do MBFC's ratings correlate nearly perfectly with Newsguard's, Newsguard gives MBFC a perfect score. They respect each other's work.
And really, how could researchers who've dedicated their careers to understanding this stuff have gotten it so wrong? Academia definitely isn't a hotbed of conservatism. Using awful tools could destroy their careers, yet MBFC is regularly used in research. Why? How are these studies getting through peer review? How are they getting published? Your theory simply requires too many simultaneous failure points.