There seems to be no consensus on this, as it can vary greatly with project parameters and the unique needs and constraints of the moment.
I think it is better to approach it by looking at the factors (or context) that affect the amount of time required, which is probably easier to document than trying to find a "magic" number. In general, you can break it down by the type of method, the type of end user (or demographics), and the nature of the information you want to collect. Those should be reasonably consistent between different researchers.
Of course, sometimes the chosen method is dictated by the constraints of the project (for example, time, budget, available resources), so it may not even reflect the real amount of time needed, since all you are measuring is the amount of time allotted.
I know that with qualitative research, time is very difficult to estimate, since "information saturation" (the inflection point beyond which no new insights are discovered) is an unknown.
Again, you can probably frame this in terms of the significance (how valid the finding is) and the confidence (how applicable it is to the rest of the users) that the researcher wants to achieve, which is more objective than trying to estimate the point at which information saturation is reached. When you combine this with the type of information you want to discover (for example, critical usability problems versus minor ones), you can arrive at a rough estimate.
For example, it is often cited that you do not need to test a large number of users to discover a critical usability problem, since anything very significant is likely to affect almost any user who takes part in the tests, while something more subtle will take many more users before the researcher can pin down the exact cause of the behavior.
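As a rough illustration of that claim, the often-cited problem-discovery model from Nielsen and Landauer estimates the share of problems found after n users as 1 - (1 - p)^n, where p is the probability that a single user encounters a given problem. A minimal sketch (the specific values of p are hypothetical, chosen only to contrast a critical problem with a subtle one):

```python
def proportion_found(p: float, n: int) -> float:
    """Expected share of problems uncovered after n test users,
    where p is the chance that a single user hits a given problem."""
    return 1 - (1 - p) ** n

def users_needed(p: float, target: float = 0.85) -> int:
    """Smallest number of users expected to uncover `target` of the problems."""
    n = 1
    while proportion_found(p, n) < target:
        n += 1
    return n

# Hypothetical detection probabilities, for illustration only:
print(users_needed(p=0.50))  # critical problem most users hit -> 3 users
print(users_needed(p=0.10))  # subtle problem few users hit    -> 19 users
```

The point is not the exact numbers but how quickly the required sample size grows as problems become subtler.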
Methods versus phases
I think that if you can get an approximate figure per method, it then becomes a question of applying a factor per phase, since the phases will depend on the chosen method. For example, if you do most of the research manually (that is, pen and paper, in person), the analysis will take more time. Card sorting with physical cards is fine as long as you are not handling hundreds of cards across many different walls, because the processing and analysis time does not scale well.
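To make that concrete, here is a minimal sketch of the method-then-phase idea (all hours and factors are hypothetical, purely for illustration): a base estimate per method, scaled by a per-phase multiplier that reflects how manual the method is.

```python
# Hypothetical base estimates (hours) per method, and per-phase multipliers.
# The manual method pays a heavier analysis factor than the tool-based one,
# reflecting the extra processing time noted above.
base_hours = {"card_sort_paper": 8, "card_sort_online": 8}

phase_factors = {
    "card_sort_paper":  {"prep": 1.0, "sessions": 1.0, "analysis": 2.5},
    "card_sort_online": {"prep": 1.2, "sessions": 0.5, "analysis": 0.8},
}

def estimate(method: str) -> dict:
    """Per-phase hour estimate: base figure for the method times a phase factor."""
    return {phase: base_hours[method] * factor
            for phase, factor in phase_factors[method].items()}

for method in base_hours:
    phases = estimate(method)
    print(method, phases, "total:", sum(phases.values()))
```

The multipliers are exactly where your own compiled data would eventually go.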
What is the purpose of the estimates?
I think that if the ultimate goal is to have a resource that gives you a good estimate of the effort required for research activities, it is better to compile your own data, since at least many of the variations caused by factors that affect the accuracy and precision of the estimates will be reduced.
But I think that if you ask developers the same question, they will also have difficulty coming up with an accurate answer. Basically, an estimate will only be as accurate as the parameters used to produce it, so estimating the time to fix a bug versus the time required to implement a feature using a given API on top of an existing codebase will give different degrees of accuracy (and this can also vary between individuals).
It's great that you're researching this area, and this is exactly the kind of problem that agile methodologies are supposed to help with when it comes to managing projects that can change over the life of the product or service you want to develop. So my advice is to be as accurate as you can reasonably expect anyone to be, and then adjust those baselines as you go.
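One way to "adjust those baselines as you go" (again, just a sketch, with a hypothetical smoothing factor) is to blend each observed actual into the running estimate, as in simple exponential smoothing:

```python
def adjust_baseline(baseline: float, actual: float, alpha: float = 0.3) -> float:
    """Blend the latest actual into the baseline estimate.
    alpha controls how quickly the baseline tracks new data."""
    return (1 - alpha) * baseline + alpha * actual

# Hypothetical actuals (hours) for successive rounds of the same activity.
baseline = 20.0
for actual in [26.0, 24.0, 30.0]:
    baseline = adjust_baseline(baseline, actual)
    print(f"updated baseline: {baseline:.1f} h")
```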