Research Methodology



It has often been said that for any given topic, there are as many opinions as there are people. Some people form their opinions based on personal experience, some on hearsay. Some look at isolated cases and generalize, some take general opinions and apply them to specific cases. Most of the time, though, opinions are held because they “sound” right, or “seem” to be true.

But old wives’ tales and superstitions can also “sound” right, or “seem” to be true, without having any basis in fact. In mankind’s efforts to move away from superstition, we have developed a different method of building our body of knowledge … of determining what the facts are. Research methodology is the practice of entrusting to independent, verifiable methods the role once played by intuition. And with every false belief that we discover, or replace with truth, we take one step closer to understanding our world and those around us.

In order to develop authoritative knowledge, we must use what is known as the scientific method. Using the scientific method, all ideas, opinions, and propositions can be subjected to an empirical test, which serves to discriminate between those that are true and those that are not. In so doing, the scientific method must observe several standardized rules and procedures. These rules and procedures can then be followed by anyone who hopes to gain knowledge, and once knowledge is acquired in this manner, it can be shared with others with the confidence that it will advance mankind’s understanding rather than confuse it.

The scientific method is used by virtually every discipline of science and research. However, for the purposes of this paper, the scientific method will be discussed predominantly as it applies to psychology.


There are four main reasons why we perform research. We may wish to describe behavior, predict behavior, determine cause and effect, or explain behavior.

Describing behavior

Researchers will often perform studies that help them form a more accurate picture of the current state of affairs. The results of these studies can then be used as the jumping-off point for further studies, or they can be used directly. Demographics is one obvious example of descriptive research, the results of which might be used either to choose subgroups of people for further study, or to target potential customers for products or services.

Predicting behavior

What type of man will make the best leader? What class of people would benefit most from a social program? What behavior might we expect from prison overcrowding? These are all questions that we can attempt to answer using research to look into the future, based on data currently available to us.

Determining cause and effect

Which came first, the chicken or the egg? This seeming paradox was answered by Darwin’s research into the origin of species. It is, in fact, easy to arrive at the correct answer to this question, compared with the more subtle and complex questions social scientists ask themselves. Are nonwhites inherently less intelligent than whites? What has been the impact of the War on Poverty? What precipitated World War II? The key to the proper determination of cause and effect lies in taking into account all the factors involved. The problem is, of course, that oftentimes there are thousands of factors, not all of which are necessarily obvious in terms of the role they play.

Explaining behavior

Seeing may be believing, but it is certainly no explanation. Just as the description of the sequence of events in a card trick doesn’t give the listener enough information to duplicate the feat, watching an alcoholic drink doesn’t explain what it is he hopes to gain from such behavior. When we can accurately explain behavior, often we can deduce the steps necessary to change it for the better.

Types of research

There are three types of research: basic, applied, and program evaluation. The nature of the question we are seeking to answer determines to a large extent the type of research that will be necessary to discover the answer.

Basic research

Research into fundamental issues is called theoretical or basic research. The value of basic research is usually not readily apparent, except that it increases our knowledge base.

Applied research

Research that concerns itself with practical applications or solutions to specific problems is called applied research. Although not as broad-based as basic research, applied research can be valuable in showing us the day-to-day ramifications of what we learn from basic research.

Program evaluation research

With the proliferation of social programs comes the need to evaluate their efficacy. Program planning identifies the problem, defines the treatment approach, and projects the cost of the program. Program monitoring ensures that the right people are being helped, and that no variables are being overlooked. Impact assessment evaluates whether or not the program is attaining its stated goals. Efficiency assessment evaluates whether or not the program is worth the time, effort, and expenditure. For example, a program that costs a million dollars for each person it helps stop drinking would be judged too expensive, even though it accomplishes its goal.


Variables

In algebra, equations are solved to find the values of the variables in the equation. In research, variables can be either a value you wish to find or a value you wish to control. For example, if you were conducting a study to determine if alcoholism is caused by absentee parents, the sought-after variable would be the percentage of alcoholics who come from homes with an absentee parent. All other variables, such as alcoholic parents, abusive parents, etc., would have to be rigidly controlled. Obviously, the number of variables that need to be controlled can be quite large, which makes research into social issues exceedingly difficult.

Although I just mentioned two types of variables, there are actually three. They are called the independent, the dependent, and the extraneous variables. The independent variables are those variables that are manipulated by the researcher during the study. The dependent variables are those variables that reflect the reaction of the subject to the independent variables. The extraneous variables are those factors that the researcher does not wish to change or measure, but which must be kept constant to maintain the validity of the study. Lack of control or consideration of the extraneous variables is the number one downfall of most research projects.

Methods of research

There are two basic ways in which research can be conducted: experimentally or nonexperimentally. In the example of the study of the causes of alcoholism in the section above, a nonexperimental method was used, in which there was no manipulation of the variables. Studies in which the independent variable is manipulated are called experimental studies.


Nonexperimental research

Nonexperimental research is largely confined to observation, or to searches of published material. Of the many types of nonexperimental research, the type most likely to be encountered by the neophyte student researcher is the library study. This involves looking for articles, books, and other published material that deals with the subject at hand. Although this has traditionally been done with a card catalog and an understanding librarian, more recently computers have entered the field. Virtually any personal computer or interactive terminal can be used to search on-line, computerized databases containing thousands of book and article abstracts. Typically, the researcher will tell the computer what key words or subjects he is interested in, and the computer will scan the database, pulling up abstracts that make reference to the key word or subject. With today’s computers, a typical search of a database with nearly half a million abstracts takes less than five seconds, in which time hundreds of abstracts can be pulled up for the researcher.

One type of library research is archival research. The researcher uses existing information to conduct experiments in the past, so to speak. For example, there might be historical information on alcoholism, and historical information on dietary habits, but there might not be historical information on the dietary habits of alcoholics. The archival researcher would attempt to establish the relationship between these two factors. One pitfall of archival research is that not all desired data is necessarily available, and the information that is available might not be accurate. However, in the case of historical searches, it is impossible to gather information any other way.

Case studies are another type of nonexperimental research. Case studies are not often made part of the public record unless the case is unique, or at least unusual. Case studies are valuable in that they show us situations that at one time did exist, and therefore must be accounted for by any general theory of behavior.

Field observation involves observing subjects in the natural setting, usually over a long period of time. When performing field observation, the researcher can be concealed or nonconcealed. If nonconcealed, the research can also be participatory or nonparticipatory. Each case has its strengths and weaknesses. For example, a concealed observer may be invading the privacy of the subject, while a participatory observer runs the risk of losing his objectivity, or of affecting the outcome of his own study.

Correlational studies are similar to field studies, except that the researcher selects subjects based on one variable of interest, and then measures a second variable of interest. However, both variables are observed as they occur naturally.

Surveys are a very popular form of research. The Census Bureau, polling organizations, sales and marketing organizations, and others all conduct surveys in an effort to form a more accurate description of the populace. Survey questions must be stated in such a way that no prejudice is indicated by the researcher, as studies have shown that the same survey topic will elicit different responses depending on how the question is worded. If the sample for the survey is not to include everybody in the populace, then it becomes very important that those who were chosen to participate in the survey be representative of the target populace. Finally, if not everyone responds to the survey, the researcher must determine if the characteristics of those who responded are different from those who did not respond.


Experimental research

In experimental research the researcher manipulates one variable (the independent variable) while observing the subject’s reactions to obtain the value of a second variable (the dependent variable). Because the researcher needs to guarantee that the value of the dependent variable is truly due to the change in the independent variable, it is very important that the extraneous variables be strictly controlled. This need for tight control of the extraneous variables makes field studies difficult. However, subjects will often respond to certain situations only when those situations occur in their natural setting. Laboratory studies, on the other hand, offer a much better opportunity for the researcher to control the extraneous variables, but the subjects are then aware that they are being studied, which can affect the outcome of the research. In either setting, the researcher can also randomize the extraneous variables, which gives every subject a statistically equal chance of being helped (or hindered) by them.

Another way researchers can eliminate bias and the effect of extraneous variables from the study is through the use of a control group. The control group typically has the same qualities as the experimental group, but they are not subjected to the experiment. When a control group is used, differences in behavior or test results between the two groups are examined to determine the validity of the hypothesis. The more difference there is between the control group and the experimental group, the better the chances are that the hypothesis is true.

Conducting research

When the value of the dependent variable can be shown to be caused by the value of the independent variable, the study is said to have internal validity. One way of double-checking one’s results is through the use of the null hypothesis. The null hypothesis contradicts the research hypothesis by stating that the results of the research are due to random behavior. By accepting or rejecting the null hypothesis, the researcher is able to better gauge the internal validity of his research hypothesis. Thus, using a null hypothesis provides the researcher with a sort of benchmark against which to measure the validity of his study.

Because the null hypothesis is either true or untrue, and because the research hypothesis is either true or untrue, the researcher is faced with a decision that has only four possible conclusions. If he rejects the null hypothesis, and it turns out that the research hypothesis is correct, he has made the right decision. If he accepts the null hypothesis, and the research hypothesis turns out to be flawed, he has again made the correct decision. If, however, the researcher rejects the null hypothesis when it is actually true, he has made what is called a Type I error. If the null hypothesis is accepted when it is actually false, he has made a Type II error.

While it may seem that there is an equal chance of making a Type I error as a Type II error, in fact the researcher can weight the errors as needed. When a Type I error would be much worse than a Type II error, the researcher can reduce the significance level (the acceptable probability of a Type I error), which increases the probability of a Type II error, and vice versa. For example, our legal system is built on the premise that it is better to let a few guilty parties go free than to convict innocent people. Because the Type I error (convicting innocent people) is considered a more serious problem than the Type II error (freeing guilty parties), the law states that a person is innocent until proven guilty beyond a reasonable doubt.
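The four possible conclusions described above can be sketched in a few lines of code. This is an illustration only; the function name and values are invented for the example.

```python
def classify_decision(reject_null, null_is_true):
    """Classify the outcome of a decision about the null hypothesis."""
    if reject_null and null_is_true:
        return "Type I error"        # rejected a true null hypothesis
    if not reject_null and not null_is_true:
        return "Type II error"       # accepted a false null hypothesis
    return "correct decision"

# Walk through all four possible conclusions.
for reject in (True, False):
    for null_true in (True, False):
        print(reject, null_true, "->", classify_decision(reject, null_true))
```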

One threat to internal validity is the bias of the experimenter. When the experimenter knows the research hypothesis and believes in it, it is possible that he can give subjects verbal and non-verbal clues as to which answers are the “right” answers. One solution to this problem is double-blind testing, in which neither the experimenter nor the subject knows which answers are the “right” answers. A good example of the double-blind method can be seen in high-fidelity component listening tests, which can serve to skirt the prejudice of the know-it-all who “knows a better-sounding piece of equipment when he sees it.” Another solution lies in using machines, such as computers, to interact with the subjects.

Experimenter bias and the experimenter expectancy effect are most often a problem when the subjects of the experiment provide data through self-reporting. Self-reporting simply means that subjects respond to interviews or questionnaires. Unfortunately, self-reporting is also prone to “faking” and reactivity. “Faking” can occur either in a “bad” direction or a “good” direction. Either way, the results of the experiment are invalidated. Reactivity occurs when the very act of testing the subject changes the subject’s response. When reactivity is detected, the method of research must be changed to eliminate the conscious knowledge of the test from the mind of the subject.

One way of doing this is by using a behavioral measure. This differs from the self-report in that the researcher observes the subject’s behavior, rather than asking the subject about his behavior. For example, instead of asking the subject what he normally eats during the day, the researcher would follow the subject and see for himself. Another way of gathering data is through the use of physiological measures. The electromyograph, electroencephalogram, and the polygraph ("lie detector") are all forms of physiological measurement. Finally, the researcher can use indirect techniques, such as the projective measures. These include word association, the Draw-A-Person test, and the Rorschach Inkblot test.

This all points up how important the subject selection process is. In most cases, the best way to select subjects is to use probability sampling. The simplest form of probability sampling is simple random sampling. With this, the researcher draws his subjects from among the entire population of otherwise eligible subjects (assuming, of course, that the population is larger than the researcher wants to deal with, making it necessary to select a sub-group). With simple random sampling, each subject in the population has an equal chance of being asked to take part in the research. A slight variation on this theme is stratified random sampling, which takes into account the presence of subgroups in the population. For example, the researcher might want to choose from among the entire population, but he might also want to have his selections reflect the demographics of the entire population. Thus, if 12 percent of the population were 65 years of age or older, subjects 65 or older would make up 12 percent of the subjects selected for the research.
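The difference between the two sampling schemes can be sketched as follows. The population, the 12 percent figure carried over from the example above, and the helper function are all hypothetical.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical population: 1,000 people, 12% of whom are 65 or older.
population = [{"id": i, "age_group": "65+" if i < 120 else "under 65"}
              for i in range(1000)]

# Simple random sampling: every member has an equal chance of selection.
simple_sample = random.sample(population, 100)

def stratified_sample(pop, key, size):
    """Sample each subgroup in proportion to its share of the population."""
    strata = {}
    for person in pop:
        strata.setdefault(person[key], []).append(person)
    sample = []
    for members in strata.values():
        share = round(size * len(members) / len(pop))
        sample.extend(random.sample(members, share))
    return sample

strat = stratified_sample(population, "age_group", 100)
older = sum(1 for p in strat if p["age_group"] == "65+")
print(older)  # 12 of the 100 subjects, matching the 12% in the population
```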

Another way to choose subjects is with nonprobability sampling. Using this method, the researcher who needed 100 subjects would round up the first 100 eligible members of the population he could find. Nonprobability sampling (also called haphazard sampling) is less precise than probability sampling, but it does have its place. Researchers attempting to identify migratory habits of animals, for example, can’t be as particular about their sampling, and for them the haphazard sample is the only realistic alternative.

Inherent in the scientific approach is the ability to reproduce the results of a research project. This might be needed to clarify some of the aspects of the original research, or as an aid to generalizing the results of the study. When the results are reproduced by following the same steps and methodology, it is called exact replication. When subsequent researchers attempt to establish the same connection, but using different methods, it is called conceptual replication.

If a given study has internal validity, no experimenter bias, proper sampling, and is replicable, it still might not apply to any cases except the ones studied. In other words, the researcher would not be able to generalize based on the results of his study. In cases in which it is possible to generalize based on the results, the study is said to have external validity.

Understanding research results

A big part of research is making sense of all the numbers you get in the course of your study. The numbers themselves (the so-called “raw data”) don’t always present a clear picture, and for this reason the researcher must examine them closely to discover the true significance of the research.

For some tests, simple calculations of central tendency and variability are enough. Central tendency calculations give a single number that describes the score of the group as a whole. Among these calculations are: the mode, or the most frequently occurring score; the median, the middle score of a list of scores; and the mean, or the sum of the scores divided by the number of scores. Variability calculations include the range (the highest score minus the lowest score) and the standard deviation (the square root of the quantity: the sum of the squared scores, minus the number of scores times the squared mean, all divided by the number of scores minus one).
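These calculations are simple enough to sketch directly, here using Python's standard statistics module and an invented set of test scores:

```python
import statistics

scores = [70, 85, 85, 90, 100]

mode   = statistics.mode(scores)    # most frequently occurring score: 85
median = statistics.median(scores)  # middle score of the sorted list: 85
mean   = statistics.mean(scores)    # sum divided by count: 86
rng    = max(scores) - min(scores)  # highest minus lowest: 30
stdev  = statistics.stdev(scores)   # sample standard deviation (n - 1 divisor)

print(mode, median, mean, rng, round(stdev, 2))
```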

These calculations merely describe the test results, however. Often, the researcher is not as interested in describing the results as in drawing inferences from them. When this is the case, inferential calculations such as the t-test, the F test, and the chi-square test must be used. The t-test shows the researcher whether two groups are significantly different from each other. The F test analyzes variances among groups to see if the variance is significant. The chi-square test helps the researcher determine whether or not the results of the research are due to random error. One additional component of these calculations is the statistical table, which can be found in most texts on statistics. The calculated value is compared with the critical value listed in the table, which represents what purely random variation would be expected to produce. Only when the calculated value exceeds the critical value is the result considered statistically significant, and thus support for the research hypothesis.
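As a rough sketch of how the t-test works, the pooled-variance t statistic for two small invented groups (say, a control group and an experimental group) can be computed by hand with nothing but the standard library:

```python
import math
import statistics

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic (equal variances assumed)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        pooled_var * (1 / na + 1 / nb))

control      = [12, 14, 11, 13, 12]   # invented scores
experimental = [16, 18, 15, 17, 17]

t = two_sample_t(experimental, control)
# Compare |t| against the critical value for na + nb - 2 = 8 degrees of
# freedom in a statistical table; if it is larger, the difference between
# the groups is unlikely to be due to chance alone.
print(round(t, 2))  # 5.82
```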

With these and other tests, the mathematics involved can become quite complicated and lengthy, when done by hand. Fortunately, there are currently dozens of computer programs available for most brands of computer to help relieve the researcher of the burden of repetitious computations.

Formulas, numbers, and tables are not the only way test results can be examined, however. Data can also be displayed pictorially in a line graph, scatter plot, histogram, or frequency polygon. With all of these means, it is standard practice to display the independent variable on the horizontal axis, and the dependent variable on the vertical axis.

Graphs and scatter plots can show any of four different relationships. In the case of a line graph, if the dependent variable increases as the independent variable increases, it is referred to as a positive linear relationship. If the dependent variable decreases as the independent variable increases, it is called a negative linear relationship. If the dependent values describe a curve as the independent value increases, there exists a curvilinear relationship. Most students have run into this one in a class in which the teacher graded “on the curve.” The “curve” is a bell-shaped line that reflects the tendency of most scores to cluster near the middle of the range, with fewer at the extremes. Finally, if the dependent variable shows no distinct pattern in relation to the independent variable, there is said to be no relationship.
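As an illustration only, the sign of the Pearson correlation coefficient gives a crude way to tell a positive linear relationship from a negative one (it cannot detect a curvilinear pattern, which requires inspecting the plot itself). The data and the 0.5 threshold here are invented for the example.

```python
import math

def relationship(xs, ys):
    """Classify the linear relationship between two variables by the sign
    of the Pearson correlation coefficient (a rough heuristic only)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    if sx == 0 or sy == 0:
        return "no clear linear relationship"
    r = cov / (sx * sy)
    if r > 0.5:
        return "positive linear"
    if r < -0.5:
        return "negative linear"
    return "no clear linear relationship"

hours  = [1, 2, 3, 4, 5]        # independent variable (horizontal axis)
scores = [55, 60, 68, 74, 80]   # dependent variable rises with hours

print(relationship(hours, scores))  # positive linear
```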

Histograms are essentially bar charts, with the length of each bar indicating the value of the dependent variable. Frequency polygons are similar to line graphs, and are especially useful when the values from two or more groups are displayed on the same graph.

Ethical and legal concerns

Both of the research methods (experimental and nonexperimental), when dealing with people, must also deal with the issue of consent. Thus the researcher must either inform the subject of the true nature of the test, or assume that the subject would participate if he knew the true purpose. Each of these types of consent has its values and its drawbacks.

Informed consent is preferred by some because it is more “honest” to tell subjects exactly what the study is about before asking for their participation. The drawback is, of course, that once the subject knows of the study, it can become more difficult to control the extraneous factors.

With assumed consent, the researcher can exercise control over a great many variables, even using deception, if desired. The drawbacks are that most people do not enjoy being deceived, and, depending on the nature of the study, there can be legal ramifications to deal with if someone takes offense or is hurt. For example, if a researcher wanted to observe bystander reactions during a staged robbery without informing the subjects that the robbery was not real (informing them would destroy the study’s internal validity), the researcher could not be certain that there would be no armed, off-duty police officers in the crowd, who might react with deadly force.

There is a compromise, however, that is being tested by researchers. The researcher starts by presenting the true nature of the experiment to a sample group of subjects. If these subjects consent to participate in the experiment, then the researcher assumes that the experiment would be acceptable to others of similar background.

Most often, issues of consent arise from experiments that provoke stress in the subjects. In these cases it is always important, whether consent was informed or assumed, to debrief the subject after the experiment. Debriefing is also warranted after any experiment that involves deception, and many researchers include a debriefing session after all experiments.

While deception is a problem that arises between researchers and subjects, fraud is a problem that arises among researchers. Fraud may include falsified, fabricated, or wrongly excluded data. In simple experiments, fraud is bad enough, although others can at least easily duplicate the procedures and expose the fraudulent research. In more complex studies, however, it might take years before the truth comes out. The publication of fraudulent results can cause others to waste time either following erroneous leads or duplicating experiments. But the worst effect of fraudulent research is that it weakens the fabric of knowledge woven by the scientific method. Without the assurance that the results we see are due to ethically conducted research, we would all be reduced to checking and double-checking each other’s work.

With the many moral and ethical gray areas that confront researchers, we may never develop definitive procedures that spell out the handling of all cases. However, all researchers should be guided by the principles and guidelines set forth in the “Ethical Principles of Psychologists” (1981) and the “Ethical Principles in the Conduct of Research with Human Participants” (1982). As society grows and evolves, so too will these principles and guidelines.


Conclusion

Great strides have been made in understanding the human condition by following the scientific method. Still, social scientists and researchers seem to be battling against tremendous odds in their search for an ever better understanding. With the staggering complexity of the human mind, social scientists and their “soft” data will never be able to conduct experiments with the self-assurance of the physical scientist, or even of scientists in the plant and animal worlds, who deal with “hard” data.

Still, the scientific method has served us well and, barring a fundamental change in our perception of human thought and behavior, it will no doubt continue to do so for many more years.


Bibliography

American Psychological Association. Ethical Principles of Psychologists. Washington, D.C. APA. 1981.

American Psychological Association. Ethical Principles in the Conduct of Research with Human Participants. Washington, D.C. APA. 1982.

Brown, C. and Ghiselli, E. Scientific Method in Psychology. New York. McGraw-Hill. 1955.

Campbell, D. and Stanley, J. Experimental and Quasi-Experimental Designs for Research. Chicago, Illinois. Rand McNally. 1966.

Chassan, J. Research Design in Clinical Psychology and Psychiatry. New York. Irvington Company. 1979.

Cozby, P. Methods in Behavioral Research. Palo Alto, California. Mayfield Publishing. 1985.

Einstein, A. Ideas and Opinions. New York. Crown Books. 1954.

Hays, W. Statistics For Psychologists. New York. Wiley and Sons. 1973.

Hoffman, L. Foundation of Family Therapy: A Conceptual Framework for System Change. New York. Basic Books. 1981.

Hyman, R. The Nature of Psychological Inquiry. Englewood Cliffs, New Jersey. Prentice-Hall. 1964.

Isaac, S. and Michael, W. Handbook in Research and Evaluation. San Diego, California. EdITS. 1971.

Keeney, B. Aesthetics of Change. New York. Guilford Press. 1983.

Kidder, L. Research Methods in Social Relations. New York. Holt, Rinehart, and Winston. 1981.

Lewin, M. Understanding Psychological Research. New York. Wiley and Sons. 1979.

Longabaugh, R. The Systematic Observation of Behavior in Naturalistic Settings. In The Handbook of Cross-Cultural Psychology. Volume 2. Boston. Houghton Mifflin. 1969.

Lyman, H. Test Scores and What They Mean. Englewood Cliffs, New Jersey. Prentice-Hall. 1978.

Maslow, A. Psychology of Science. Chicago, Illinois. Henry Regnery Company. 1966.

McCall, R. Fundamental Statistics for Psychology. New York. Harcourt Brace Jovanovich. 1980.

Polanyi, M. Study of Man. Chicago, Illinois. University of Chicago Press. 1958.

Schoeninger, D. and Insko, C. Introductory Statistics for the Behavioral Sciences. New York. McGraw-Hill. 1953.

Selitiz, C., Wrightsman, L., and Cook, S. Research Methods in Social Relations. New York. Holt, Rinehart, and Winston. 1976.

Skinner, B. Walden Two. New York. Macmillan. 1948.

Stanley, J. and Hopkins, K. Educational and Psychological Measurement and Evaluation. Englewood Cliffs, New Jersey. Prentice-Hall. 1972.

Townsend, J. Introduction to Experimental Method for Psychology and the Social Sciences. New York. McGraw-Hill. 1953.