In a recent grant proposal, we proposed to develop a new method and to evaluate it using case study research. Case studies are well suited to evaluating theories in settings where you cannot control all parameters, as is the case with new methods in an open and complex real-world domain like software engineering.
Interestingly, both reviewers of the proposal asked for hypotheses to test. Why weren't there any? We had posed open-ended research questions, but no hypotheses leading to a simple yes/no answer (well, usually the rejection of a null hypothesis or not).
In my book, there is a difference between a (mostly qualitative) evaluation of a theory, such as a proposed method, and testing hypotheses that the theory generates. (I have blogged about this before as the basic process of science in a traditional world-view.)
It is best to first perform a few rounds of theory building, in which you evaluate the new theory and possibly reorient its development, before you add hypothesis testing.
Hypothesis testing makes sense once the theory has reached some stability, because hypotheses that are confirmed are usually worth more (both in generated insight and in scientific currency), and picking hypotheses too early makes it easy to pick invalid ones. Also, hypothesis testing is comparatively expensive, yet it generates only one specific yes/no result. So it is best to delay hypothesis testing to the later stages of the research.
This is why not all research needs to test hypotheses right away. Testing too early is wasteful and carries the opportunity cost that the money could have been spent more wisely.