Item response theory (IRT) models estimate a latent quantity for individual units (legislators, judges, students) based on a series of binary indicator variables for each unit (votes, decisions, test answers). The basic two-parameter IRT model is given by:
\[ \text{Pr}(vote_{ij} = 1) = \text{logit}^{-1}(\beta_j\theta_i - \alpha_j) \]
where \(\beta_j\) is the discrimination parameter for vote \(j\), which determines how much a yea or nay on this vote tells us about legislator \(i\)'s ideal point \(\theta_i\), and \(\alpha_j\) is a baseline difficulty parameter for vote \(j\) that \(\beta_j\theta_i\) must overcome for us to observe a 1.
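To make the formula concrete, here is a quick numeric sketch (the parameter values are made up purely for illustration) using base R's plogis(), which is the inverse logit:
theta <- 1.2 # hypothetical ideal point for legislator i
beta <- 2.0 # hypothetical discrimination for vote j
alpha <- 0.5 # hypothetical difficulty for vote j
plogis(beta * theta - alpha) # Pr(vote_ij = 1), roughly 0.87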
The function ideal() in the package pscl estimates this model in a Bayesian framework. Download voting data for the 110th Senate from https://voteview.polisci.ucla.edu/static/data/ord/senate_110.csv. Remove President Bush from the data, and then create an object of just the votes themselves. Recode any vote greater than 6 and less than 9 as NA, and any vote that is a 6 as 0.
senate <- read.csv("https://voteview.polisci.ucla.edu/static/data/ord/senate_110.csv")
senate <- senate[-1, ] # remove President Bush
votes <- senate[, 10:ncol(senate)] # keep only the vote columns
votes[votes > 6 & votes < 9] <- NA # recode 7 and 8 as missing
votes[votes == 6] <- 0 # recode 6 (nay) to 0
Before we can estimate our ideal points, we need to use rollcall() to convert our data into a rollcall object. To make assessing the validity of our results easier, be sure to use the legis.names argument.
library(pscl)
vote_rc <- rollcall(votes, legis.names = senate$name)
Examine the structure of the rollcall object. It tells us the number of legislators, the number of votes, and the codes for yea, nay, missing votes, and not being in the legislature.
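To take that quick look, a minimal sketch (str() shows the raw list structure; pscl also provides a summary() method for a tidier description):
str(vote_rc) # the underlying list: vote matrix, codes, legislator and vote counts
summary(vote_rc) # tidier description of the roll call object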
Use the ideal() function to estimate the ideal points of each senator.
irt <- ideal(vote_rc)
This model is actually unidentified. We have a few options for identifying it. We can set the argument normalize = T in the call to ideal(). This is actually a wrapper for postProcess(), which maps the ideal point estimates via an affine transformation, so even if you forget to set it, you don't have to re-run your model to obtain valid ideal point estimates.
irt_id <- postProcess(irt, constraints = 'normalize')
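Equivalently (a sketch, not run here), the same normalization can be requested when fitting the model:
irt_norm <- ideal(vote_rc, normalize = TRUE) # applies the constraint during estimation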
Examine the structure of our identified IRT object. We can directly access the MCMC samples using $x, and the posterior means with $xbar. Notice that we only have samples for every 100th iteration. This is because ideal() has an argument called thin which defaults to 100.
str(irt_id)
## List of 9
## $ n : int 102
## $ m : int 655
## $ d : num 1
## $ codes :List of 4
## ..$ yea : num 1
## ..$ nay : num 0
## ..$ notInLegis: num 9
## ..$ missing : logi NA
## $ x : num [1:51, 1:102, 1] -0.609 -0.597 -0.575 -0.581 -0.574 ...
## ..- attr(*, "dimnames")=List of 3
## .. ..$ : chr [1:51] "5000" "5100" "5200" "5300" ...
## .. ..$ : chr [1:102] "STEVENS" "MURKOWSKI" "SESSIONS" "SHELBY" ...
## .. ..$ : chr "D1"
## $ beta : NULL
## $ xbar : num [1:102, 1] -0.567 -0.59 -1.082 -0.932 0.462 ...
## ..- attr(*, "dimnames")=List of 2
## .. ..$ : chr [1:102] "STEVENS" "MURKOWSKI" "SESSIONS" "SHELBY" ...
## .. ..$ : chr "D1"
## $ betabar: NULL
## $ call : language ideal(object = vote_rc, d = 1, codes = list(yea = 1, nay = 0, notInLegis = 9, missing = NA), dropList = list| __truncated__ ...
## - attr(*, "class")= chr "ideal"
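If you want more retained draws, the thinning and chain length can be changed in the fitting call. A sketch with arbitrary values (maxiter, burnin, and thin are arguments to ideal()):
irt_long <- ideal(vote_rc, normalize = TRUE, maxiter = 50000,
                  burnin = 5000, thin = 10) # keep every 10th post-burnin draw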
Another way to handle this identification problem is to include legislator-level information. In the context of US legislative politics, the simplest way to do this is to use party to break the reflection invariance problem. Use the legis.data argument in rollcall() to give Republicans a value of -1, and everyone else (the Democrats and Sanders) a value of 1.
vote_rc_leg <- rollcall(votes, legis.names = senate$name,
legis.data = matrix(ifelse(senate$party == 200, -1, 1)))
irt_leg <- ideal(vote_rc_leg)
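As a sanity check on the recode (Voteview's ICPSR party codes use 100 for Democrats, 200 for Republicans, and 328 for independents), a quick tabulation is enough:
table(senate$party) # party codes present in the 110th Senate
table(ifelse(senate$party == 200, -1, 1)) # how many senators get each constraint value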
Another option is to use the same Republican/other coding scheme to generate starting values for \(\theta\) at which the Markov chain will begin, using the startvals argument.
irt_sv <- ideal(vote_rc, startvals = list(x = ifelse(senate$party == 200, -1, 1)))
Compare your estimates for all three identification strategies side-by-side.
# posterior means under each strategy, sorted by the normalized estimates
data.frame(name = senate$name[order(irt_id$xbar)],
           ideal_norm = irt_id$xbar[order(irt_id$xbar)],
           ideal_leg = irt_leg$xbar[order(irt_id$xbar)],
           ideal_start = irt_sv$xbar[order(irt_id$xbar)])
Each of these identification strategies produces estimates which cover a slightly different range of ideal point values. At the extremes each strategy produces consistent results, but in the middle the exact order of ideal points varies slightly. However, the substantive differences between these legislators are small. It’s also important to note that we can’t compare the absolute range of each identification strategy since each one is determined by the identification constraint used. This isn’t problematic if we want to use these ideal points in separate analyses, because the estimates only matter relative to others under the same identification strategy.
# compare the rank ordering of senators under each strategy
data.frame(ideal_norm = senate$name[order(irt_id$xbar)],
           ideal_leg = senate$name[order(irt_leg$xbar)],
           ideal_start = senate$name[order(irt_sv$xbar)])
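The orderings above line up closely across strategies. As a quick numeric check (a sketch), a rank correlation of the posterior means ignores each strategy's constraint-dependent scale:
cor(irt_id$xbar, irt_leg$xbar, method = 'spearman')
cor(irt_id$xbar, irt_sv$xbar, method = 'spearman')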
Graphical presentation of ideal points can greatly help readers understand where specific senators fall relative to one another. This approach also has the advantage of clearly conveying uncertainty around the estimates, allowing readers to see which estimates are statistically significantly different from one another.
library(plyr) # array to dataframe
library(ggplot2) # ggplots
library(plotly) # interactive plots
# combine mean of ideal point estimates w/ 95% CI
irt_df <- data.frame(irt_id$xbar,
t(apply(adply(irt_id$x, 1)[, -1], 2, quantile, c(.025, .975))))
# rename columns
names(irt_df) <- c('mean', '2.5%', '97.5%')
# sort dataframe by ideal point mean
irt_df <- irt_df[order(irt_id$xbar), ]
# add senator variable for plotly
irt_df$senator <- rownames(irt_df)
# create ordering variable to use for the x (flipped to y) axis
irt_df$order <- seq(1, length(irt_df$mean))
# plot ideal points and uncertainty
irt_gg <- ggplot(irt_df, aes(y = mean, x = order, ymin = `2.5%`,
ymax = `97.5%`, label = senator)) +
geom_point(size = .25) +
geom_linerange(size = .15, col = 'grey') +
coord_flip() +
theme_bw() +
theme(plot.background = element_blank(),
panel.grid.minor = element_blank(),
panel.grid.major = element_blank(),
panel.border = element_blank(),
axis.title.y = element_blank(),
axis.ticks.y = element_blank(),
axis.text.y = element_blank(),
axis.title.x = element_blank())
# interactive plot; ggplotly needs colour, not color
ggplotly(irt_gg, tooltip = c('label', 'mean'))
Since votes are dichotomous data, one of the best measures of fit we can get is predictive accuracy. Using the predict() function on an ideal object created with store.item = T will show us our overall correct prediction rate, our correct yea prediction rate, and our correct nay prediction rate. The last two can be especially useful if we're working with data where the consequences of a false positive and a false negative differ (see Ward, Greenhill, and Bakke 2010 for an in-depth discussion). In addition, we can get the predicted probabilities for each legislator-vote pair \(i,j\), as well as the actual prediction, which is not simply whether \(\text{Pr}(vote_{ij} = yea) > .5\) because of the difficulty parameter \(\alpha_j\).
irt_store <- ideal(vote_rc, normalize = T, store.item = T)
preds <- predict(irt_store)
str(preds)
## List of 10
## $ pred.probs : num [1:102, 1:655] 0.902 0.901 0.816 0.845 0.983 ...
## ..- attr(*, "dimnames")=List of 2
## .. ..$ : chr [1:102] "STEVENS" "MURKOWSKI" "SESSIONS" "SHELBY" ...
## .. ..$ : chr [1:655] "Vote 1" "Vote 2" "Vote 3" "Vote 4" ...
## $ prediction : logi [1:102, 1:655] TRUE TRUE TRUE TRUE TRUE TRUE ...
## ..- attr(*, "dimnames")=List of 2
## .. ..$ : chr [1:102] "STEVENS" "MURKOWSKI" "SESSIONS" "SHELBY" ...
## .. ..$ : chr [1:655] "Vote 1" "Vote 2" "Vote 3" "Vote 4" ...
## $ correct : logi [1:102, 1:655] TRUE TRUE TRUE TRUE TRUE TRUE ...
## ..- attr(*, "dimnames")=List of 2
## .. ..$ : chr [1:102] "STEVENS" "MURKOWSKI" "SESSIONS" "SHELBY" ...
## .. ..$ : chr [1:655] "Vote 1" "Vote 2" "Vote 3" "Vote 4" ...
## $ legis.percent : Named num [1:102] 68.7 68.4 84.9 80 62.2 ...
## ..- attr(*, "names")= chr [1:102] "STEVENS" "MURKOWSKI" "SESSIONS" "SHELBY" ...
## $ vote.percent : Named num [1:655] 97.8 94.8 76.5 61.9 66.7 ...
## ..- attr(*, "names")= chr [1:655] "Vote 1" "Vote 2" "Vote 3" "Vote 4" ...
## $ yea.percent : num 79.2
## $ nay.percent : num 74.5
## $ party.percent : NULL
## $ overall.percent: num 77.5
## $ ideal : symbol irt_store
## - attr(*, "class")= chr "predict.ideal"
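The summary percentages can be pulled straight from this object; for example:
preds$overall.percent # overall correct classification rate
preds$yea.percent # correct prediction rate among yea votes
preds$nay.percent # correct prediction rate among nay votes
head(sort(preds$legis.percent)) # senators whose votes we predict worst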
Unfortunately, these identification strategies are insufficient if we want to estimate ideal points in a space of dimension \(d > 1\). To do so, we must constrain \(d + 1\) legislators' ideal points. We can do this using the constrain.legis() function. Let's pick the two most extreme senators, as identified in our one-dimensional analyses, as the ends of our latent scale, and a middle-of-the-road senator to be our mid-point. We need to provide fixed values. If we were working with a limited sample of votes where we already know what a yea or nay says about a legislator's ideal point (e.g., access to birth control, religious freedom laws, tax increases), we could instead constrain certain item parameters to be positive or negative for identification (a sketch of that approach follows the code below).
# constrain legislative ideal points via priors
vote_2d <- constrain.legis(vote_rc, x = list('DEMINT' = c(-1, -1),
'SNOWE' = c(0, 0),
'LAUTENBERG' = c(1, 1)), d = 2)
# estimate two dimensional ideal point model
irt_2d <- ideal(vote_rc, d = 2, priors = vote_2d, startvals = vote_2d)
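For the item-constraint route mentioned above, pscl also provides constrain.items(), which fixes the discrimination parameters of chosen votes rather than legislators' ideal points. This is a sketch of the pattern only; the vote names and fixed values here are hypothetical, so check ?constrain.items and your rollcall object's vote names before running:
item_2d <- constrain.items(vote_rc, x = list('Vote 1' = c(1, 0), # hypothetical vote names
                                             'Vote 2' = c(0, 1)), # and fixed values
                           d = 2)
irt_2d_items <- ideal(vote_rc, d = 2, priors = item_2d, startvals = item_2d,
                      store.item = TRUE)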
The package emIRT allows you to fit ordinal response IRT models, as well as hierarchical or dynamic IRT models for binary responses, using EM instead of MCMC. However, this package also demonstrates the limits of canned code. The UN voting data below are analyzed by Bailey, Strezhnev, and Voeten (2017) using a dynamic ordinal response model estimated with a custom-written hybrid Gibbs/MH sampler. You could further extend the model by adding a hierarchical component where the mean of a state's ideal point in a given year is a function of state-level variables such as the size of the economy. However, doing so would require you to either write a sampler from scratch, or to employ a flexible modeling language that lets you specify whatever model you want…
Fit a one-dimensional IRT model on Voeten et al.'s UN voting data, using countries as the unit of analysis. Treat abstentions as no votes, and absences as missing data. Report your results for the estimated ideal points, along with measures of uncertainty, in table and graphical form.
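A partial sketch of the recoding step only, not a solution: it assumes the UN data are in long format with columns ccode, rcid, and vote coded 1 = yes, 2 = abstain, 3 = no, 8 = absent, and 9 = not a member; the file name, column names, and codes are assumptions here, so check the codebook that accompanies the data.
un <- read.csv("UNVotes.csv") # hypothetical file name
un$y <- NA
un$y[un$vote == 1] <- 1 # yes -> yea
un$y[un$vote %in% c(2, 3)] <- 0 # abstain and no -> nay
un$y[un$vote %in% c(8, 9)] <- NA # absent / not a member -> missing
# reshape to a country-by-vote matrix and build the rollcall object
un_wide <- reshape(un[, c("ccode", "rcid", "y")], idvar = "ccode",
                   timevar = "rcid", direction = "wide")
un_rc <- rollcall(un_wide[, -1], legis.names = un_wide$ccode)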
Bailey, Michael A., Anton Strezhnev, and Erik Voeten. 2017. “Estimating Dynamic State Preferences from United Nations Voting Data.” Journal of Conflict Resolution 61 (2): 430–56.
Ward, Michael D., Brian D. Greenhill, and Kristin M. Bakke. 2010. “The Perils of Policy by P-Value: Predicting Civil Conflicts.” Journal of Peace Research 47 (4): 363–75.