The next session I attended was a Focus session on Social Sciences. Two presenters were scheduled, and unfortunately for my beloved discipline, the session was held in one of the smaller rooms.
The first presentation addressed a problem initially thought to be simple: what sample size do we need to answer our research question(s), and how do we keep costs low? Every basic introductory statistics book teaches you how to do this, given the expected sample variances and correlations. However, it appears that these methods can be greatly improved upon by a stratified sampling design. This, in turn, makes the calculation of the required sample sizes considerably more complex, since it must be carried out for each stratum while the required sample sizes of the strata simultaneously influence one another.
In a case study on Italian farms across 103 provinces, the presenters were able to reduce the required total sample size by approximately 45%, which evidently leads to an enormous reduction in the associated costs. While the project is only in its starting phase, I think these authors have made an interesting package, and I will surely be investigating it.
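The presenters' actual method isn't detailed in my notes, but the interdependence between strata can be illustrated with classic textbook Neyman allocation. This is purely my own sketch, not their package: the required total sample size depends on every stratum's size and variability, and the per-stratum sizes in turn follow from that total. All numbers, stratum values, and function names below are hypothetical.

```python
import math

# Illustrative sketch only -- not the presenters' package.
# Neyman allocation for stratified sampling: strata with more units (N_h)
# or more within-stratum variability (S_h) get a larger share of the sample.

def required_total(N, S, target_se):
    """Total sample size needed under Neyman allocation to reach a target
    standard error of the estimated population mean.
    N: population sizes per stratum; S: per-stratum standard deviations."""
    weighted = sum(n * s for n, s in zip(N, S))
    num = weighted ** 2
    den = (sum(N) ** 2) * target_se ** 2 + sum(n * s ** 2 for n, s in zip(N, S))
    return math.ceil(num / den)

def neyman_allocation(N, S, total_n):
    """Split total_n across strata proportionally to N_h * S_h."""
    weights = [n * s for n, s in zip(N, S)]
    total = sum(weights)
    return [round(total_n * w / total) for w in weights]

# Two hypothetical strata: 1000 farms with sd 10, 500 farms with sd 5.
N, S = [1000, 500], [10, 5]
n = required_total(N, S, target_se=0.5)
print(n, neyman_allocation(N, S, n))  # -> 232 [186, 46]
```

Note the circularity the speakers alluded to: each stratum's sample size follows from the total, which itself is a function of all strata at once, so the allocations cannot be computed independently.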
The second presentation focused on the problem of small groups and questionnaires. In educational evaluation research, the problem had arisen that standard evaluation methods, using questionnaires with Likert-scale items, were not applicable to faculties with (very) small numbers of students. Having developed a probability model, the presenters' new package should allow for more precise measures of student satisfaction. I found this presentation a little too focused on sampling theory and probability functions, leaving too little time for the more interesting part: the automated scanning of all the questionnaires using OCR software, and the analysis of the data resulting from that process. I think a presentation with a stronger focus on that part of the project would have been much more interesting to a broad useR! audience.