On Research Objectives - and Research Questions - and Research Hypotheses

The purpose of research is to find something out, so research objectives cannot be "to make a model", "to build a demonstrator", or "to develop a methodology". You build a model, or run a demonstration, in order to find something out. What is it? A model is part of the 'how' - not the 'what'.

So referees of research proposals, faced with stated objectives like "to develop a methodology", typically ask "What are the research questions?"

Unfortunately, it seems that, for some researchers, even this is not a sufficiently clear hint. Instead of "to develop a methodology" we get the question "Can we develop a methodology?" But that is still not right. Where is the key technical idea? Essentially, the research should be built around the 'big idea' - "Will my big idea work?" And the 'research questions' are the more detailed questions entailed by that big question: what you need to find out to determine whether your idea works.

Now, without wanting to be overly formal, I find it helps if one at least attempts to formulate a research hypothesis. This should be along the lines of "if one does A then one will perceive B". (Cf. 'contribution to knowledge')

Even then, I find, this is not clear enough. So we get "If we devise a methodology that simplifies the design-build process, we will save money". No! The implicit 'minor premise' ('and this methodology does simplify the design-build process') is at too high a level and simply raises the question "How will you simplify the design-build process?" In other words, it doesn't include your 'angle'. 'Simplifying the design-build process' and 'saving money' are too close in some sense. One might as well say, tautologically, "If we devise a methodology which saves money, it will save money".

Surely 'development of a methodology which simplifies the design-build process' is not your 'big idea'? Surely the big idea lies in the concepts upon which said methodology is to be based.

What we seek is something like "If we devise a methodology, based upon the [fill in the blanks from your 'angle', amplified and argued elsewhere] concepts, then when used in a [fill in the blank] environment it will allow/enhance/support/give/... [fill in the blanks for the desired beneficial outcome]".

And now we should know what we intend to find out. In short, is the hypothesis right? If we do A, will we observe B? Or, "We intend to find out whether, if we devise a methodology based on my big idea, the design-build process [measured how?] will be simplified". (And if so, we think it will probably save money, but that is not the subject of the technical research.)

And now we should be able to identify the 'research questions' - what we need to find out in order to be able to 'test the hypothesis'.

Now let's relax a little
Going back to that first formulation of a hypothesis, we might not know whether we 'can do A' (for instance whether "we can devise a methodology based on my big idea"). In general, that is not a problem. We might have, as a research objective, "to determine whether we can devise a methodology based on my big idea, and, if so, whether it will simplify the design-build process". From this will flow the 'research questions' entailed by that objective.

However, you might want to investigate whether the 'doing of A' (the devising of a methodology based on your big idea) can be achieved at all, irrespective of any possible benefits it might yield. But I doubt whether a proposal would be successful if it were to explore whether something could be done without at least some idea of what benefits it might yield, and whether it actually does so.

Can we relax a little more?
Maybe you have a clear understanding of the problem, but don't yet have any idea how to resolve it. I am not counting the trivially tautological 'resolution', like "We propose to address the problem of expensive re-work by development of an architecture and associated methodology which will allow incremental modification without incurring massively non-linear costs of change." This just obscures the fact that the 'big idea' about how this might be achieved is missing.

However, there is a whole class of research - perhaps just 'search' would be a more appropriate term - which is a search for ideas. "How might we do A in the hope that we will observe B?" For instance "What sort of architecture might enable more linear change ramification?"

In the more formal language of the philosophy of science, this is known as 'hypothesis generation', since, having found one or more appropriate ideas upon which to build one's approach, one will then be able to formulate a hypothesis and then proceed to test it, as discussed previously. Note that one cannot - at least not usefully - formulate the objectives for a search for ideas as a test of a hypothesis. (I say 'not usefully' because one could formulate a testable hypothesis along the lines of "If you give me the money I will find something interesting which I can work up into a full proposal" - but that is not quite the kind of hypothesis we had in mind.)

Too often, research proposals lump a search for ideas into the same project as the actual development of those ideas and the test of their efficacy. This is asking for rejection. The proposers are asking for all the funds for the expensive development and testing of the ideas in advance of finding the ideas. 'Trust us'! The initial search is much better tackled on its own, first, as a smaller-scale project. Though it is not strictly appropriate terminology, the label 'feasibility study' is gaining currency for such projects.

Even then, the proposers are still saying, in effect, 'Trust us'. So, to give a little reassurance to referees and funding agencies, even if they are only casting about for ideas, proposers would be well-advised to give some hint as to what they are looking for and where they will look. In other words, can you give some indication of the desirable characteristics of the 'ideas' sought, or even criteria for the selection of 'ideas' for further investigation? And could you give some indication of which avenues you might explore? What likely prospects are there 'out there'? What lines of other researchers' work offer promise? What work in other disciplines gives us a glimmer of an idea, even if only by metaphor?

What of falsification?
In engineering we are often concerned to find something which works - typically some new conceptualisation of a problem domain which gives us some framework within which we can reason, make decisions, and so on. But too often, this leads to sloppy science. Too often, if researchers find some instance in which their 'methodology based on ... etc' works, they claim that the methodology is therefore 'proven'. This is nonsense. A particular corroboration of a belief (or hypothesis) is not proof of the general validity of the approach. If a conceptualisation is intended to be 'generic' - to have value beyond helping us out of our immediate mental hole - then we should try to establish that genericity.

To counter sloppy science, Popper pressed for 'falsifiability'. Put very simply, the idea is that one should not seek evidence that a new theory is 'correct', but try as hard as one can to prove the theory wrong. If one fails, then one's confidence in the theory - in its general applicability - increases. (That is the simple story: the more interesting version is that if the theory is invalidated in the circumstances of the 'test' then the theory can be refined to become 'more correct'. There is nothing very deep here - in everyday life we gradually form more sophisticated world-models to encompass ever richer experiences.)

However, we do not usually have the luxury of extended effort to try to falsify a theory, and in reality we are unlikely to feel inclined to disprove our own pet ideas. But what we can do is to conceive of stringent tests of those ideas. One example which is often quoted is the bending of starlight by the sun, a prediction of the general theory of relativity.

Most of us are unable to make quite such startling predictions, let alone test them and find them valid. But we should, at least, try to test our ideas rigorously. To do this we should express our hypothesis in a form which is, at least in principle, falsifiable.

To be falsifiable, it should be possible to imagine a counter-example which would disprove the hypothesis. Consider: "The proposed approach will be useful." This is well and truly falsifiable. Any situation in which the approach does not prove useful will disprove the hypothesis. But do you really mean 'useful in all circumstances'?

No, you don't. But if you back away from this bold assertion, be careful that you don't weaken the hypothesis into an untestable form. So "The proposed approach could be useful" is universally true, but universally useless. It is universally true since, even if the approach has not yet proved useful, there might be a future situation in which it would be. It is useless since we don't know whether it will be useful in a given situation in which we might find ourselves.

One must define the circumstances in which the approach will be useful. And then one must define what one means by 'useful'. In what way would it be useful? And then you have, "The proposed approach will, in the defined circumstances, offer the following benefits ..." And here we are again - back to the testable hypothesis.

Note that a tame 'toy' case study is not a very tough test. We should seek realistic complexity and scale in our test cases, and we should seek argumentation - attempted refutation - from as wide a base as possible - industrial reference groups, etc.

Dissemination should also be seen in this light - not just publication to a like-minded coterie, but seeking to encourage potential objectors to object - to test our results - as much as seeking to persuade them of the rightness of our results. In many scientific disciplines, replicability (the test of what I have called 'transferability') is a shibboleth.

Indeed, if you accept that knowledge is 'socially constructed', then getting buy-in from others is not just about getting acceptance of your contribution to knowledge: it is part of the process of turning ideas into knowledge.

What of statistics?
I have tried here to encourage researchers to formulate their ideas crisply and 'tightly', both to clarify what is being proposed and to facilitate real testing of the ideas. Popper's thoughts on falsifiability were prompted by a concern to avoid mere 'corroboration'. But in many areas of science, life is not so simple. We cannot devise black-and-white tests of whether our hypothesis (or theory) is true or false, nor even of whether it is true or false in a well-defined set of circumstances.

Often, instead, we are looking for correlations. "This compound will have [some specified] effect on human physiology in [some specified circumstances]." But given a whole range of unspecified variables, we do not expect it always to have an effect, and not always the same effect. We are looking for some statistical indication of efficacy, and many areas of the natural sciences are intensely concerned with statistics. The experts on statistical validation of hypotheses are (I submit) to be found in biology and in psychology.
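
To make the notion of 'a statistical indication of efficacy' concrete, here is a minimal, purely illustrative sketch in Python (using numpy and scipy; the group sizes and measurements are invented for the purpose). The claimed effect is recast as a null hypothesis of 'no effect', and we ask whether the sample data let us reject it.

    # Hypothetical illustration: does a 'treated' group differ from a 'control' group?
    # All numbers are simulated; they stand in for real measurements.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    control = rng.normal(loc=100.0, scale=15.0, size=30)  # assumed: no effect
    treated = rng.normal(loc=110.0, scale=15.0, size=30)  # assumed: the specified effect

    # Welch's two-sample t-test: can we reject the null hypothesis of 'no effect'?
    t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Reject 'no effect' at the 5% level - a statistical indication of efficacy.")
    else:
        print("Insufficient evidence of an effect at the 5% level.")

Note that such a test only has something to get hold of when there is a sample of reasonable size - which is precisely the difficulty for engineering research discussed below.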

Unfortunately, in engineering research we often do not have the luxury of sampling a significant population. Hence the concern to avoid mere 'corroboration' from a small sample - often a sample of one: that tame case study again. And hence, again, one should do one's best to extend the sample by stringent 'surrogate' testing of the hypothesis - by exposing and exploring the ideas as widely as possible.

For those who are interested in the statistics of hypothesis testing (and clear thinking in general) I strongly recommend "Stop Working & Start Thinking: A guide to becoming a scientist" by Jack Cohen and Graham Medley (with an introduction by Ian Stewart), published by Stanley Thornes Ltd, ISBN 0 7487 4334 0.

Note ... In the above I often used 'development of a methodology' as a focus for discussion. It has not been my intention to indicate either that this is a preferred type of project, or that 'methodologists' are particularly bad at formulating their research intentions: it is just one source of examples.

