
A caveat on Bernie’s (likely?) win in Massachusetts

From Jonathan Simon (whose Code Red is the best book out there on computerized election theft):

M – Not that anyone ever does anything about such things, but this would be a very sketchy snippet of evidence to go to the mat over. It is NOTHING like E2016 evidence or E2004 evidence or Coakley-Brown evidence or Ossoff evidence…  I’m not questioning Ted’s numbers, just the conclusions (“Bernie won MA”) that you among others are drawing from them. Ted can be pretty stubborn and will probably never do business with me again, but screaming fraud from the rooftops on these analyses (so far) is a really bad idea. I’m not sure whether you’re on the list I sent this to but, assuming not, here’s why:

Agreeing with you [Brad F.] on exit polls (and DBIs), at least when used to verify/challenge results in primaries. As you know, I pretty much started this whole thing on November 2, 2004 and I’ve put a lot of weight behind exit poll-based forensics. But there are subtleties and limits to what we can conclude. Exit poll analysis is stronger when it is pattern analysis (that is, a pattern discernible over many individual contests) and much stronger when there is a baseline allowing a second-order comparison to be made (a perfect example is the 2016 general election, where the national exit poll was accurate while the swing state exit polls were massively red-shifted – extremely hard to explain as mere exit poll inaccuracy). We don’t have that here. We have a few exit polls showing pro-Biden shifts in primaries in a very volatile political moment. Here’s why we should slow down:

EPs (exit polls) are more problematic in primaries. Quite a few reasons for this, but mainly it’s that gauging the turnout and composition of the electorate (which is what exit pollsters are obliged to do) is a lot trickier than in the general, and the pollsters also can’t stratify by party ID as they do in the general. MOE (margin of error) looks like a really solid measure, but all it tells you is the variance of a perfectly random sample of a given size. EPs are NOT perfectly random samples, so there is another measure called TSE (Total Survey Error) that gives a much better idea of accuracy.
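As a concrete illustration of what MOE does and doesn’t measure, here is a minimal Python sketch of the textbook margin-of-error formula for a simple random sample, MOE = z * sqrt(p * (1 - p) / n). The 1,400-respondent sample and the 50/50 split are hypothetical numbers chosen for illustration, not figures from any actual exit poll:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n (the textbook formula)."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative assumptions only: a hypothetical exit poll of 1,400
# respondents with a candidate at 50% (the worst case for variance).
moe = margin_of_error(0.5, 1400)
print(f"MOE: +/- {moe * 100:.1f} points")  # roughly +/- 2.6 points
```

Which is Simon’s point: the formula assumes a perfectly random sample, and exit polls are not that, so the raw MOE understates the real uncertainty.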

In the general election, TSE is usually about MOE x 1.4. In primary elections the multiplier is harder to pin down and may vary a great deal based on a number of factors – but it may be closer to MOE x 2.0. So jumping on any single primary’s EP/VC (exit poll vs. vote count) disparity is dangerous. Exit polls can be useful for analysis (especially, as noted above, where there is a baseline, as in 2016, where the national EP and the swing-state EPs varied so dramatically in accuracy), but you have to recognize that they really come down to turnout guesswork (informed but not always correct), which is much tougher in primaries, especially when events and race dynamics are volatile.
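To put rough numbers on those multipliers, the sketch below extends the example above using the approximate 1.4x and 2.0x figures from the text. The 4-point exit-poll/vote-count disparity is a hypothetical value chosen for illustration, not a figure from any actual contest:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a simple random sample (as in the sketch above)."""
    return z * math.sqrt(p * (1 - p) / n)

# Same hypothetical 1,400-respondent, 50/50 exit poll as before.
moe_pts = margin_of_error(0.5, 1400) * 100

# Rule-of-thumb multipliers from the text: ~1.4x MOE in a general
# election, perhaps closer to 2.0x in a primary.
tse_general_pts = moe_pts * 1.4
tse_primary_pts = moe_pts * 2.0

print(f"MOE:                  +/- {moe_pts:.1f} pts")          # ~2.6
print(f"TSE (general, ~1.4x): +/- {tse_general_pts:.1f} pts")  # ~3.7
print(f"TSE (primary, ~2.0x): +/- {tse_primary_pts:.1f} pts")  # ~5.2

# A hypothetical 4-point exit-poll/vote-count disparity would exceed the
# raw MOE but sit inside the primary TSE band, which is the argument:
# on its own, a single primary disparity of that size proves very little.
disparity_pts = 4.0
print(f"Exceeds raw MOE?      {disparity_pts > moe_pts}")          # True
print(f"Exceeds primary TSE?  {disparity_pts > tse_primary_pts}")  # False
```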

In a case like South Carolina, we also should be looking at it from a potential rigger’s standpoint, asking what was to be gained. Riggers are presumptively rational – it doesn’t make much sense to rig a whole state to give Biden a slightly bigger win that hardly changes the delegate count at all. Exit poll disparities, especially in primaries, are not such strong evidence in themselves that we can just ignore context and factors such as motive and reward/risk ratios. We don’t do ourselves any favors to scream fraud from the rooftops on such quarter-baked forensic evidence. – Jon

Jonathan D. Simon
Executive Director, Election Defense Alliance
Author:

CODE RED: Computerized Elections and the War on American Democracy
www.CodeRed2020.com
@JonathanSimon14
617-538-6012
