Snyder’s Take: So, About That FL 13 Poll From St. Pete Polls… #Shenanigans

Hi. I'm back. More on my hiatus later. But for now, let's talk about this Alex Sink – David Jolly matchup in the 13th Congressional District special election.

For those who are not familiar: this election is not in November, it's in 56 days, the result of the death of longtime Republican Congressman Bill Young on October 18th of last year. Yesterday, St. Pete Polls released a poll that the folks over at Daily Kos Elections had a field day with. It's comically skewed toward Republicans (naturally), and prominent blogger Peter Schorsch (who, by the way, has written scripts for St. Pete Polls in the past and currently has his own fights in the ethics department, as reported in the Tampa Bay Times) covered the release with a soft piece that (naturally) DID NOT mention the glaring disparity in the sample: the number of Republicans and Democrats who were actually polled.

The voter registration numbers for Florida's 13th Congressional District, per the Florida Division of Elections, tell us that the district is in fact 37 percent Republican, 35 percent Democratic, and 24 percent independent. However, St. Pete Polls' OWN WEBSITE tells us that the poll was conducted with a 47-35 split in favor of Republicans. Yeah, let that sink in. I'm calling shenanigans.

But, wait! There's more! St. Pete Polls has a history of ridiculously inaccurate polling in Florida congressional races. Daily Kos examined eight races, comparing St. Pete Polls' projection for each race (linked to the actual poll) against the final results of each race (linked to third-party confirmation of results), and showed just how poor SPP's accuracy really was:

FL-02: St. Pete: Southerland (R) 47-47; actual: Southerland (R) 53-47; error: +6 D
FL-09: St. Pete: Grayson (D) 45-40; actual: Grayson (D) 63-37; error: +21 R
FL-10: St. Pete: Webster (R) 50-42; actual: Webster (R) 52-48; error: +4 R
FL-13: St. Pete: Young (R) 54-37; actual: Young (R) 58-42; error: +1 R
FL-16: St. Pete: Buchanan (R) 55-38; actual: Buchanan (R) 54-46; error: +9 R
FL-18: St. Pete: West (R) 51-42; actual: Murphy (D) 50.3-49.7; error: +9.6 R
FL-22: St. Pete: Frankel (D) 48-45; actual: Frankel (D) 55-45; error: +7 R
FL-26: St. Pete: Rivera (R) 46-43; actual: Garcia (D) 54-43; error: +14 R

On top of all of this, Peter Schorsch has worked for and with St. Pete Polls in the past and, as the Tampa Bay Times examined, may be acting unethically by taking money from candidates in the form of ad-space fees in exchange for fluff pieces and positive coverage. (Naturally.)

While I am not alleging any wrongdoing by Peter Schorsch, St. Pete Polls, or any of the campaigns mentioned here, the facts I'm posting for public consumption raise some interesting questions. Do with them as you please. A more in-depth post on the actual FL 13 election is forthcoming, but I couldn't write it without first clearing the air of this shady pollution.

Until Next Time,
JS

@JustinSnyderFL

7 Comments

  1. Hello, Matt Florell from St. Pete Polls here.

    First, I would like to point out that some of those results you show were not from the most recent polls. Take a look at this November 2nd poll of three of the US congressional races:
    http://stpetepolls.org/surveys/election_2012_november02_congress_general.html

    Second, I would like to point out that you took the numbers from the ACTIVE VOTER column, and not the REGISTERED VOTER column, which ended up being more accurate for some of those races.

    Also, I would like to mention that our polling is not influenced by our clients, aside from which races to poll and what questions to ask. Peter Schorsch has no sway over the results and has never asked us to reevaluate or re-factor poll results in any way, ever. In many ways he has been an ideal client.

    As for the results of the most recent CD-13 special election poll we conducted this week, as stated in the poll summary, the sample was selected from all registered voters, but the sample demographics were based upon the active voter population from 2010 and 2012. This population skews in favor of Republicans by about +10 and against independents about -10 points. This kind of skew was also evident in the final HD-36 special election late last year. To state it simply, in off-year and special elections Republicans are much more reliable voters. We also accounted for this kind of skew in St. Petersburg’s city election 2 months ago, but in that race the Democrats had a very effective GOTV effort and turned out Democrats in greater numbers than usual to give Rick Kriseman a much bigger win than we had forecast using the turnout model of previous elections.

    For this upcoming CD-13 special election, if Democrats and Independents vote at the same rate as in the 2012 general election, it is likely that Sink will win. If Democrats and Independents vote at the rate they did in 2010, it's much more likely that Jolly will win.

    I ran an analysis of this week's poll results using the "Registered Voter" demographics, and Sink won by 3%. The problem is that, historically, registered-voter demographics have not proven to be a reliable turnout model for special elections in Florida.
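
    To make the two turnout scenarios concrete, here is a minimal sketch of how a turnout model changes a topline. The candidate splits within each party group are made-up placeholders for illustration, not actual crosstabs from the poll; only the party shares echo the figures discussed above.

    ```python
    # Illustrative sketch only: the per-party candidate splits are invented
    # placeholders, not real crosstabs from the CD-13 poll.
    support = {
        "REP": {"Jolly": 0.82, "Sink": 0.18},
        "DEM": {"Jolly": 0.12, "Sink": 0.88},
        "IND": {"Jolly": 0.48, "Sink": 0.52},
    }

    # Party shares of the electorate under each turnout model; the
    # active-voter model skews roughly R+10 / I-10, as described above.
    turnout_models = {
        "registered":   {"REP": 0.37, "DEM": 0.35, "IND": 0.28},
        "active_voter": {"REP": 0.47, "DEM": 0.35, "IND": 0.18},
    }

    for model, shares in turnout_models.items():
        jolly = sum(shares[p] * support[p]["Jolly"] for p in shares)
        sink = sum(shares[p] * support[p]["Sink"] for p in shares)
        print(f"{model:>12}: Jolly {jolly:.1%}, Sink {sink:.1%}")
    #   registered: Jolly 48.0%, Sink 52.0%
    # active_voter: Jolly 51.4%, Sink 48.6%
    ```

    With identical respondents, the registration model produces a Sink lead and the active-voter model produces a Jolly lead, which is exactly the sensitivity described above.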

    Also, it's important to mention that poll results always change over time. We'll see what things look like in a few weeks, the next time we run this poll.

    If you have any other questions, please feel free to contact me.

    • ROMNEY WINS………LOL

  2. Fred Freeman

    I think the poll should include the third-party candidate, the Libertarian, Lucas Overby. I think he will change the numbers dramatically.

    Someone called my phone and asked if I would vote for Sen. Jack Latvala or "someone else" for the District 20 seat in the Florida Senate. Whose poll was that?
    Fred

  3. The most important thing here is the methodology, and I already see some flaws in it. Some might be simple errors, while others seem more systemic.

    First, are these RDD surveys? The polling firm states that they are "shuffled," but what exactly is shuffled? Is a predetermined list shuffled? Is this quota sampling or matching, or are these purely random surveys? The fact that SPP is very vague about how they do the sampling makes me worry.

    First, some basics: when doing a poll, you take a random sample of the electorate, typically via random digit dialing (RDD) from a list, usually one that doesn't involve a quota (a quota sets limits on party, race, and gender using voter registration stats or the census, and this is very important). Your margin of error is determined by your sample size. In this case, their margin of error is +/-2.8% with 1,252 respondents, which is spot on for a 95% confidence interval; the quick check below confirms it.
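
    A minimal sanity check of that margin of error, using the standard worst-case (p = 0.5) formula for a proportion at a 95% confidence level:

    ```python
    import math

    n = 1252    # respondents in the SPP poll
    z = 1.96    # z-score for a 95% confidence level
    p = 0.5     # worst-case proportion, the convention for a reported MoE
    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"+/-{moe:.1%}")   # -> +/-2.8%, matching the poll's stated figure
    ```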

    When doing RDD, your sample should land close to the district's actual registration numbers, especially when you want a +/-2.8% MoE. So, if you have a district where 35% of the voters are registered Democrats (which CD 13 has), then your poll shouldn't be too far off. In the case of this poll, the number of Democrats is 34.3%, which is bang on! But if you look at the Republicans, it is a different story. Republicans are 38% of the district but represent 46.5% of respondents, which is quite a large gap.

    SPP claims that they fix this problem by weighting. But there is a problem with that. Weighting should only be used when a small population in a universe is underrepresented in the sample. For example, if you have a district that is 8% black but only 1%-2% of your survey is black, then you would weight that group up. You should not have to weight by political party affiliation, as the number of respondents in each party should be quite large. The arithmetic sketched below makes the point.
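
    Here is a minimal sketch of the standard post-stratification arithmetic (weight = population share / sample share), plugging in the district and sample figures cited above:

    ```python
    # weight = population share / sample share, computed per party group;
    # the figures are the registration and respondent shares cited above.
    population = {"REP": 0.38, "DEM": 0.35}    # district registration shares
    sample     = {"REP": 0.465, "DEM": 0.343}  # shares among poll respondents

    for party in population:
        weight = population[party] / sample[party]
        print(f"{party}: weight {weight:.2f}")
    # REP: weight 0.82 -- nearly half the sample gets scaled down sharply
    # DEM: weight 1.02 -- essentially untouched
    ```

    Down-weighting almost half of your respondents is a repair job on a bad sample, not the small-group adjustment that weighting is meant for.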

    In this case, it seems as if SPP weights by party affiliation. Why would that be? There are two ways to look at it. First, they might just be bad at sampling: their universe and their sample numbers are way off, especially on things that should closely match, such as party affiliation. The second and more sinister option is that they oversample certain portions of the population on purpose. If that is the case, it would be highly unethical. Either way, you can still have a high confidence level and statistically significant results, but if the sample is shit, then it doesn't matter. Weighting shouldn't be used to fix a bad sample, which is what is being done here; it should only be used to account for a segment of the population that is small and yields only a few responses.
    Even with all this being the case, I do disagree somewhat with this article. Comparing a poll from mid-October to the final results is like declaring, before they play Auburn, that Alabama is the best team in college football and will win the championship. Since this is a cross-sectional poll, it could have been correct at that particular point in time.
    The biggest issue I have is that SPP never uses the term RDD, or random digit dialing. This makes me question the validity of the survey.
    As we already know, Peter Schorsch had a "pay for play" scheme going on with his blog, according to the Times. And favorable polling numbers are another way for candidates to get good publicity: "We give you a favorable poll, and you can put it in a fundraising letter." Again, I cannot say whether that is the case with SPP, and I am not going to claim anything regarding Schorsch's role in SPP (I do not know it). But if the history of St. Petersblog is any example, it is quite disturbing.

    • Based upon your comments, I don't think you read the reply I left above yesterday, but I think it might answer some of your questions.

      We don't do RDD. We only dial based upon voter lists and positively matched white-pages listings. I know this isn't the standard, but it is what we have done for over two years, and it has proven to be an accurate methodology in the races we have polled.

      As for our sampling, in this poll we were not attempting to sample based upon registered voters, because "registered voters" is not a historically accurate representation of the people who actually vote in special elections and non-Presidential election years.

      • Dave Trotter

        But why are you trying to create a "historically accurate representation" of the race anyway? A poll is not a prediction, and it is not the job of a polling firm to make predictions! A poll is cross-sectional: it is supposed to tell you what would happen on the day (or days) it was taken, not project what will happen during a future event. And in this case you are trying to predict a future event, namely voter turnout on Election Day.

        In fact, you are contradicting yourself! You admit here that you are fudging the data to create this historical representation of the actual vote. That means you are trying to predict turnout on Election Day. In effect, your poll says, "from a poll done today, this is what we predict will happen on Election Day." But that is not how polling works, which is why people say "if the election were held today" when discussing a poll.

        But then you say that the author of this article is incorrect in pointing out your errors. Why would that be? If you are trying to make a prediction, then what Justin says here is absolutely valid. If you are saying, in CD 9 for example, "from a poll conducted today, we predict Alan Grayson will get 45% of the vote on Election Day," then Justin has every right to question the validity, since your numbers are way off.

        So which one is it? For the 2012 polls, you are hinting that they were cross-sectional polls that could not project a future outcome (Election Day), but for the CD 13 special election you are saying that you are trying to replicate a "historically accurate representation," which is predicting a future outcome. What exactly is the methodology?

        Basically, you are not doing survey research! You are not polling correctly! You are trying to predict how an election will turn out on Election Day by using surveys from today combined with a prediction (accurate or not) of voter turnout on another day!

        If you are trying to do a "historically accurate representation" of Election Day, then you aren't doing a random sample. Yes, you might be dialing "randomly," but you have set the parameters of the sample in advance. If you think Republicans will be 50% of the total vote, then who cares whether those Republicans are interviewed randomly when they automatically make up a predetermined portion of the survey? All this poll tells me is what you think turnout will be; that is it.

        If you want to make "predictions," that is fine; I am a pretty damn good predictor myself. But if you want to do polling (which I am now thinking about doing in Florida), you need to do it within the parameters of correct survey research and methodology. Your "polls" (really predictions) fall well outside the lines of survey research!
