I’ve had some success in grant proposals with research designs for human-centered software engineering that follow a common pattern. It is a three-step sequence of
Structured literature review (to create an initial theory),
Action research (to build out the quickly evolving theory), and
Case study research (to conclude by evaluating the theory)
The output of research is theory, initially just proposed, later validated (or invalidated). A theory is a description or model of phenomena of interest, and its main value is to make predictions about these phenomena. No predictions, no theory. Predictions are turned into hypotheses to put a theory to the test.
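To make that last step concrete, here is a minimal, hypothetical sketch of a prediction derived from a theory, phrased as a hypothesis, and put to a statistical test. The example theory (pair programming lowers defect density), the data, and the choice of test are my own illustrative assumptions, not taken from any actual study.

```python
# A minimal, hypothetical sketch of turning a prediction into a testable
# hypothesis. Theory, data, and test choice are all made up for illustration;
# a real study would need a proper design and sample.
from scipy import stats

# Suppose a theory of pair programming predicts lower defect density.
# Prediction -> hypothesis H1: defect density (defects/KLOC) is lower
# for pair-programmed modules than for solo-programmed ones.
solo_defects = [4.1, 3.8, 5.0, 4.6, 3.9, 4.4, 5.2, 4.0]   # fabricated sample
pair_defects = [3.2, 2.9, 3.7, 3.5, 3.1, 3.6, 2.8, 3.3]   # fabricated sample

# One-sided Welch's t-test: is the mean defect density of the
# pair-programmed sample lower than that of the solo sample?
t_stat, p_value = stats.ttest_ind(pair_defects, solo_defects,
                                  equal_var=False, alternative="less")

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value supports this one hypothesis; it does not "validate"
# the theory as a whole -- the theory merely survives one more test.
```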
The key output of research is a theory, or something that supports the building or validation of a theory. A theory, in turn, is knowledge, for example in the form of a model, that lets us predict the future or create reliable output in some form. Scientists usually publish theories in journals for other scientists to review.
But what about the practitioners for whom we are creating the theories in the first place?
In this video I present a way of codifying (presenting) theories as practical pattern handbooks so that practitioners can use these handbooks and apply your theory. This way, we connect science to practice and hopefully help make the world a better place. It also helps to get industry engaged in your work.
An associated technical report (a preprint, soon to be a journal paper) is available with more information. The slides for the video above are also available.
Many computer science degree programs do a lousy job of teaching science. A high school student entering university often has a good idea of what science is about, based on their physics and chemistry classes. At least, it involves controlled experiments. At university, this is rarely picked up, and computer science students are given the idea that programming something novel constitutes science. With that idea, they are often bewildered when I teach them rigorous research methods, in particular those that originated in the social sciences (like qualitative interviews or hypothesis-testing surveys).
On a whim, I asked my Twitterverse (which includes a fair number of computer scientists) what they think about the following question:
When peer-reviewing somebody else’s work submitted for publication, what should you do if you find that the authors have a different belief than you about what can be known?
Research should be presented to the world with an appropriate choice of words. So it bugs me if researchers, maybe unknowingly, overreach and call the evaluation of a theory a validation thereof. I don’t think you can ever fully validate a theory; you can only validate individual hypotheses.
The following figure shows how I think key terms should be used.
Last weekend, I ventured into uncharted territory (for me) and attended the Berliner Methodentreffen, a research conference mostly frequented by social scientists. I participated in a workshop on mixed methods, where the presenter discussed different models of mixing methods with each other (“Methodenpluralität”, or plurality of methods, in German).
She omitted one model that I think is often used: first, perform exploratory data analysis to detect some interesting phenomenon, and then conduct explanatory qualitative research to formulate hypotheses as to why this phenomenon is, was, or keeps happening.
When I asked about this, the temperature in the room dropped noticeably, and I was informed that such exploratory data analysis is unscientific and frowned upon. Confused, I inquired further why this is so, but did not get a clear answer.
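For readers less familiar with the term, here is a minimal sketch of the kind of exploratory first step I had in mind. The data file, its columns, and the “finding” are purely hypothetical; the point is only that one pokes around in the data without a hypothesis and lets an observed pattern trigger the qualitative follow-up.

```python
# A minimal, hypothetical sketch of exploratory data analysis as the first
# step of the exploratory-then-explanatory model described above.
import pandas as pd

# Hypothetical dataset with columns: author, hour, files_changed, reverted (0/1).
commits = pd.read_csv("commits.csv")

# Exploratory slicing: no hypothesis yet, just looking for patterns,
# e.g. the revert rate of commits by hour of day.
revert_rate_by_hour = commits.groupby("hour")["reverted"].mean()
print(revert_rate_by_hour.sort_values(ascending=False).head())

# Suppose this shows that late-night commits get reverted far more often.
# That observed phenomenon is the trigger for the second, qualitative step:
# interviewing developers to formulate hypotheses about *why* it happens.
```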
From my excursion into qualitative research land (the aforementioned Berliner Methodentreffen) I took away some rather confusing impressions about the variety of what people consider science. I’m well aware of different philosophies of science (from positivism to radical constructivism) and their impact on research methodology (from controlled experiments to action research, ethnographies, etc.). I did not expect, however, that people would be so divided about fundamental assumptions about what constitutes good science.
One of the initial surprises for me was to learn that it is acceptable for a dissertation to apply only one method and for that method to deliver only descriptive results (and thereby not really make a contribution to theory). In computer science, it is difficult to publish pure theory-development research (let alone purely descriptive results) without any theory validation attempt, even if only a selective one. The limits of what can be done in 3-5 Ph.D. student years are clear, but this shouldn’t lead anyone to lower expectations.