I love SPSP, the conference of the Society for Personality and Social Psychology, the flagship professional organization for social psychology.
It always messes me up, in a good way. Conferences and colloquia are pleasantly unpleasant for me. They're extremely generative. Whatever a researcher is talking about sparks a cascade of ideas for me. This might be partly driven by the fact that when you're sitting in a talk, there's nothing else to do but think, or listen and think. It's a very focusing setting.
Jon Krosnick and Lee Jussim organized a symposium on scientific integrity. Hart Blanton is one of the speakers and I'm eager to see how he unpacks this: "(1) misidentified measurement and causal models, (2) treatment of arbitrary psychological metrics as if they are meaningful..."
I'm especially curious about what he means by "arbitrary" metrics. It could be something I haven't thought of, and it would be thrilling if Hart is miles ahead of me on methodological considerations. The model-identification issues will be interesting, too, though there I'm guessing I already know what he means.
In any case, it's great to see people take a deep interest in methodological issues.
Most of the talks will be on empirical research, which is as it should be. I've focused on methodological issues lately because 1) I think they're extremely interesting, and 2) I think I can have a bigger impact right now on the methodological side than the empirical side, given the methods I have access to. I'm on record as saying that data is sometimes overrated in our field, that data flowing from invalid methods is not going to give us much insight into human nature or behavior.
I don't yet have access to the kinds of empirical methods I want to use. I like MTurk, and I use it, but there are hard constraints on what I can do with it. I've got piles of data to write up, but in general I think my effects are only about 70% likely to be true. That's not terrible, but I want to supplement it with other methods, mostly field work and organizational samples. I stopped using student samples a couple of years ago, because I think the probability of an effect being true, valid, or interesting falls sharply if it's based on student samples. Others will disagree, and it's unlikely to be a simple matter, but in general I think the burden is on us to show that student samples are valid, not on others to show that they're not. For anything beyond sensory-perception or basic cognitive processes, I'd want broader samples.
There might be a dance battle at SPSP if certain people are amenable.
José L. Duarte
Social Psychology, Scientific Validity, and Research Methods.