Methodological elitism and quantitative ego

EDIT: This is a re-posted entry from my previous blog.

I was talking with the sagacious Ben Skinner the other day about our methodological training, and as I am prone to do, I have found myself reflecting on that conversation probably more than is healthy. I am not, in my estimation, a master of research methods, but I think Ben and I have both had unique experiences in this regard. Both of us have been exposed to the very applied side of methods training within the econometric tradition in education policy, while also having spent time in the world of more statistician-dominated methods training.

The foundation for the discussion we were having was that, from both of our perspectives, many people in education policy (and, at least in my opinion, in social science in general) have placed a certain premium on being able to do rigorous quantitative research. Whether or not that is positive for our field, what it also produces, in many ways, is a certain type of egoism that creates and perpetuates a gap between how good people think they are at quantitative and computational methods and how good they actually are. This in turn produces three negative outcomes (among others) that I think are particularly problematic:

  1. People can often be uncritical of their own methods when faced with those they feel are not as well trained,

  2. A certain sense of already having achieved mastery leads people to feel that what they have is “good enough” to count as thorough, and

  3. A social hierarchy is produced where those with this mathematical ego see themselves as better than others.

(I should note that the community to which I am referring includes myself. I will also note that Ben, judging from our conversation, is reflective about this as well; I am just avoiding speaking for him.)

On the ego side, it is easier to see how this is, if not unjust, at least very annoying for the rest of the field, and it doesn’t require much discussion to justify why it’s a bad thing. But I think the methodological deficiency might take a bit of explaining, which I may do more of over time; for now, here are the CliffsNotes…

On the math side, I think many social scientists learn the basic mechanics and assumptions of regression analysis and, in some places, econometric methods of causal inference. For most of what we do and the work we produce, this passes the sniff test: it really is good enough. But often it’s not. In particular, I’ve found that many of my colleagues (and I say this with the understanding that I am also mathematically deficient in many ways) lack a certain nuance about how data exist (or don’t) in time and space. This has implications for many modeling shortcuts I’ve seen, where “good enough for discussion” is prioritized over distributions, statistics, and mathematical perspectives that sit outside our basic econometric paradigm, even when those other perspectives may be more technically correct and may more faithfully describe the world we are trying to explain.
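To make that concrete, here is a toy sketch of the kind of shortcut I mean. Everything in it is made up (hypothetical data, hypothetical variable names), and it is just one instance of the broader point: count outcomes, which can never be negative, are often run through plain OLS when a Poisson model more faithfully describes the process that generated them.

```python
# A hypothetical illustration: count data (say, number of courses failed)
# modeled with OLS ("good enough") versus a Poisson GLM that actually
# respects the distribution of the outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)              # one made-up covariate
mu = np.exp(0.2 + 0.5 * x)          # true mean, defined on the log scale
y = rng.poisson(mu)                 # outcome: non-negative integer counts

X = sm.add_constant(x)              # add an intercept column

ols = sm.OLS(y, X).fit()                                 # the common shortcut
pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()  # the faithful model

print(ols.params)   # linear-scale coefficients; fitted values can go negative
print(pois.params)  # log-scale coefficients; fitted means stay positive
```

The OLS fit will often be “good enough for discussion,” but it can predict negative counts and misstate uncertainty, while the Poisson model matches the process that generated the data, and the extra effort is small.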

On the computational side, while I think we acknowledge that the flow of research through (especially recent) history has often been determined by our computational capacity as a field, I do think, as Ben pointed out to me (as I understood him), that we forget we play an active role in creating that capacity. Not everyone needs to be out there creating new computational methods, sure, but I do think research is better when people are willing to look outside their comfort zones to find better pathways for answering their questions. Moreover, the questions we ask are constrained by the methods we know, so even if we can’t commit the time to fully and deeply understanding other computational options, refusing to engage with them functionally limits the creativity and scope of the work we can do. Quantitative research will always be limited by the availability of data, but we only exacerbate that problem when we limit ourselves to answering questions whose required methods fall within our current skill sets.

I don’t think this is exclusively a social science research problem. Math and computational methods hold a privileged position in our society, one that, at least for my part, I don’t think they deserve. But I also don’t think it’s wrong to say that a general lack of faithful rigor in methodological training beyond “good enough” is real and has important implications for the construction of that same society. (And in defense of quantitative methods, I think rejections of quantitative paradigms from those in other communities often suffer from similar types of egoism.)

In the end, I think the elitism and the deficiencies are essentially tied to one another, and they can’t be addressed separately. I think quantitative researchers, as a field, could benefit from getting off our methodological high horse every now and again, not only to reflect on what we don’t know, but also on our need to fix it.
