{"id":30,"date":"2020-08-26T15:24:32","date_gmt":"2020-08-26T15:24:32","guid":{"rendered":"https:\/\/digitaleditions.library.dal.ca\/researchmethodspsychneuro\/?post_type=chapter&#038;p=30"},"modified":"2020-08-26T15:48:56","modified_gmt":"2020-08-26T15:48:56","slug":"chapter-6-survey-research","status":"publish","type":"chapter","link":"https:\/\/digitaleditions.library.dal.ca\/researchmethodspsychneuro\/chapter\/chapter-6-survey-research\/","title":{"raw":"Chapter 6: Survey Research","rendered":"Chapter 6: Survey Research"},"content":{"raw":"<h1 id=\"h1\">Survey Research<\/h1>\r\n<h2 id=\"h2\">6.1\u00a0 Overview of Survey Research<\/h2>\r\n<strong>What Is Survey Research?<\/strong>\r\n\r\nSurvey research is a quantitative approach that has two important characteristics. First, the variables of interest are measured using self-reports. In essence, survey researchers ask their participants (who are often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviors. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. In fact, survey research may be the only approach in psychology in which random sampling is routinely used. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers.\r\n\r\n&nbsp;\r\n\r\nMost survey research is non-experimental. 
It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be experimental. The study by Lerner and her colleagues is a good example. Their use of self-report measures and a large national sample identifies their work as survey research. But their manipulation of an independent variable (anger vs. fear) to assess its effect on a dependent variable (risk judgments) also identifies their work as experimental.\r\n\r\n&nbsp;\r\n\r\n<strong>History and Uses of Survey Research<\/strong>\r\n\r\nSurvey research may have its roots in English and American \u201csocial surveys\u201d conducted around the turn of the 20th century by researchers and reformers who wanted to document the extent of social problems such as poverty (Converse, 1987). By the 1930s, the US government was conducting surveys to document economic and social conditions in the country. The need to draw conclusions about the entire population helped spur advances in sampling procedures. At about the same time, several researchers who had already made a name for themselves in market research, studying consumer preferences for American businesses, turned their attention to election polling. A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt. A magazine called Literary Digest conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this \u201cstraw poll,\u201d the editors predicted that Landon would win in a landslide. At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite\u2014that Roosevelt would win in a landslide. 
In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest before the election and all but guaranteed that his prediction would be correct. And of course it was. (We will consider the reasons that Gallup was right later in this chapter.)\r\n\r\n&nbsp;\r\n\r\nFrom market research and election polling, survey research made its way into several academic fields, including political science, sociology, and public health\u2014where it continues to be one of the primary approaches to collecting new data. Beginning in the 1930s, psychologists made important advances in questionnaire design, including techniques that are still used today, such as the Likert scale. Survey research has a strong historical association with the social psychological study of attitudes, stereotypes, and prejudice. Early attitude researchers were also among the first psychologists to seek larger and more diverse samples than the convenience samples of college students that were routinely used in psychology (and still are).\r\n\r\n&nbsp;\r\n\r\nSurvey research continues to be important in psychology today. For example, survey data have been instrumental in estimating the prevalence of various mental disorders and identifying statistical relationships among those disorders and with various other factors. The National Comorbidity Survey is a large-scale mental health survey conducted in the United States (see\u00a0<a href=\"http:\/\/www.hcp.med.harvard.edu\/ncs\" target=\"_blank\" rel=\"noopener\">http:\/\/www.hcp.med.harvard.edu\/ncs<\/a>). In just one part of this survey, nearly 10,000 adults were given a structured mental health interview in their homes in 2002 and 2003. 
Obviously, this kind of information can be of great use both to basic researchers seeking to understand the causes and correlates of mental disorders and to clinicians and policymakers who need to understand exactly how common these disorders are.\r\n\r\n&nbsp;\r\n\r\nAnd as the opening example makes clear, survey research can even be used to conduct experiments to test specific hypotheses about causal relationships between variables. Such studies, when conducted on large and diverse samples, can be a useful supplement to laboratory studies conducted on college students. Although this is not a typical use of survey research, it certainly illustrates the flexibility of this approach.\r\n\r\n&nbsp;\r\n\r\n<strong>Key Takeaways<\/strong>\r\n\r\n\u00b7\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Survey research is a quantitative approach that features the use of self-report measures on carefully selected samples. It is a flexible approach that can be used to study a wide variety of basic and applied research questions.\r\n\r\n\u00b7\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Survey research has its roots in applied social research, market research, and election polling. It has since become an important approach in many academic disciplines, including political science, sociology, public health, and, of course, psychology.\r\n<h2 id=\"h3\">6.2\u00a0 Constructing Survey Questionnaires<\/h2>\r\nThe heart of any survey research project is the survey questionnaire itself. Although it is easy to think of interesting questions to ask people, constructing a good survey questionnaire is not easy at all. The problem is that the answers people give can be influenced in unintended ways by the wording of the items, the order of the items, the response options provided, and many other factors. At best, these influences add noise to the data. At worst, they result in systematic biases and misleading results. 
In this section, therefore, we consider some principles for constructing survey questionnaires to minimize these unintended effects and thereby maximize the reliability and validity of respondents\u2019 answers.\r\n\r\n&nbsp;\r\n\r\n<strong>Survey Responding as a Psychological Process<\/strong>\r\n\r\nConsider the following questionnaire item:\r\n\r\nHow many alcoholic drinks do you consume in a typical day?\r\n\r\n_____ a lot more than average\r\n\r\n_____ somewhat more than average\r\n\r\n_____ average\r\n\r\n_____ somewhat fewer than average\r\n\r\n_____ a lot fewer than average\r\n\r\nAlthough this item at first seems straightforward, it poses several difficulties for respondents. First, they must interpret the question. For example, they must decide whether \u201calcoholic drinks\u201d include beer and wine (as opposed to just hard liquor) and whether a \u201ctypical day\u201d is a typical weekday, typical weekend day, or both. Once they have interpreted the question, they must retrieve relevant information from memory to answer it. But what information should they retrieve, and how should they go about retrieving it? They might think vaguely about some recent occasions on which they drank alcohol, they might carefully try to recall and count the number of alcoholic drinks they consumed last week, or they might retrieve some existing beliefs that they have about themselves (e.g., \u201cI am not much of a drinker\u201d). Then they must use this information to arrive at a tentative judgment about how many alcoholic drinks they consume in a typical day. For example, this might mean dividing the number of alcoholic drinks they consumed last week by seven to come up with an average number per day. Then they must format this tentative answer in terms of the response options actually provided. In this case, the options pose additional problems of interpretation. 
For example, what does \u201caverage\u201d mean, and what would count as \u201csomewhat more\u201d than average? Finally, they must decide whether they want to report the response they have come up with or whether they want to edit it in some way. For example, if they believe that they drink much more than average, they might not want to report this for fear of looking bad in the eyes of the researcher.\r\n\r\n&nbsp;\r\n\r\nFrom this perspective, what at first appears to be a simple matter of asking people how much they drink (and receiving a straightforward answer from them) turns out to be much more complex.\r\n\r\n&nbsp;\r\n\r\n<strong>Context Effects on Questionnaire Responses<\/strong>\r\n\r\nAgain, this complexity can lead to unintended influences on respondents\u2019 answers. These are often referred to as context effects because they are not related to the content of the item but to the context in which the item appears (Schwarz &amp; Strack, 1990). For example, there is an item-order effect when the order in which the items are presented affects people\u2019s responses. One item can change how participants interpret a later item or change the information that they retrieve to respond to later items. For example, researcher Fritz Strack and his colleagues asked college students about both their general life satisfaction and their dating frequency (Strack, Martin, &amp; Schwarz, 1988). When the life satisfaction item came first, the correlation between the two was only \u2212.12, suggesting that the two variables are only weakly related. But when the dating frequency item came first, the correlation between the two was +.66, suggesting that those who date more have a strong tendency to be more satisfied with their lives. 
Reporting the dating frequency first made that information more accessible in memory so that they were more likely to base their life satisfaction rating on it.\r\n\r\n&nbsp;\r\n\r\nThe response options provided can also have unintended effects on people\u2019s responses (Schwarz, 1999). For example, when people are asked how often they are \u201creally irritated\u201d and given response options ranging from \u201cless than once a year\u201d to \u201cmore than once a month,\u201d they tend to think of major irritations and report being irritated infrequently. But when they are given response options ranging from \u201cless than once a day\u201d to \u201cseveral times a month,\u201d they tend to think of minor irritations and report being irritated frequently. People also tend to assume that middle response options represent what is normal or typical. So if they think of themselves as normal or typical, they tend to choose middle response options. For example, people are likely to report watching more television when the response options are centered on a middle option of 4 hours than when centered on a middle option of 2 hours.\r\n\r\n&nbsp;\r\n\r\n<strong>Writing Survey Questionnaire Items<\/strong>\r\n\r\n<em><strong>Types of Items<\/strong><\/em>\r\n\r\nQuestionnaire items can be either open-ended or closed-ended. Open-ended items simply ask a question and allow participants to answer in whatever way they choose. The following are examples of open-ended questionnaire items.\r\n\r\n&nbsp;\r\n\r\n\u201cWhat is the most important thing to teach children to prepare them for life?\u201d\r\n\r\n\u201cPlease describe a time when you were discriminated against because of your age.\u201d\r\n\r\n\u201cIs there anything else you would like to tell us about?\u201d\r\n\r\n&nbsp;\r\n\r\nOpen-ended items are useful when researchers do not know how participants might respond or want to avoid influencing their responses. 
They tend to be used when researchers have more vaguely defined research questions\u2014often in the early stages of a research project. Open-ended items are relatively easy to write because there are no response options to worry about. However, they take more time and effort on the part of participants, and they are more difficult for the researcher to analyze because the answers must be transcribed, coded, and submitted to some form of content analysis.\r\n\r\n&nbsp;\r\n\r\nClosed-ended items ask a question and provide a set of response options for participants to choose from. The alcohol item just mentioned is an example, as are the following:\r\n\r\nHow old are you?\r\n\r\n_____ Under 18\r\n\r\n_____ 18 to 34\r\n\r\n_____ 35 to 49\r\n\r\n_____ 50 to 70\r\n\r\n_____ Over 70\r\n\r\n&nbsp;\r\n\r\nOn a scale of 0 (no pain at all) to 10 (worst pain ever experienced), how much pain are you in right now?\r\n\r\n&nbsp;\r\n\r\nHave you ever in your adult life been depressed for a period of 2 weeks or more?\r\n\r\n&nbsp;\r\n\r\nClosed-ended items are used when researchers have a good idea of the different responses that participants might make. They are also used when researchers are interested in a well-defined variable or construct such as participants\u2019 level of agreement with some statement, perceptions of risk, or frequency of a particular behavior. Closed-ended items are more difficult to write because they must include an appropriate set of response options. However, they are relatively quick and easy for participants to complete. They are also much easier for researchers to analyze because the responses can be easily converted to numbers and entered into a spreadsheet. For these reasons, closed-ended items are much more common.\r\n\r\n&nbsp;\r\n\r\nAll closed-ended items include a set of response options from which a participant must choose. 
For categorical variables like sex, race, or political party preference, the categories are usually listed and participants choose the one (or ones) that they belong to. For quantitative variables, a rating scale is typically provided. A rating scale is an ordered set of responses that participants must choose from.\r\n\r\n&nbsp;\r\n\r\n<strong>What Is a Likert Scale?<\/strong>\r\n\r\nIn reading about psychological research, you are likely to encounter the term Likert scale. Although this term is sometimes used to refer to almost any rating scale (e.g., a 0-to-10 life satisfaction scale), it has a much more precise meaning.\r\n\r\n&nbsp;\r\n\r\nIn the 1930s, researcher Rensis Likert (pronounced LICK-ert) created a new approach for measuring people\u2019s attitudes (Likert, 1932). It involves presenting people with several statements\u2014including both favorable and unfavorable statements\u2014about some person, group, or idea. Respondents then express their agreement or disagreement with each statement on a 5-point scale: Strongly Agree, Agree, Neither Agree nor Disagree, Disagree, Strongly Disagree. Numbers are assigned to each response (with reverse coding as necessary) and then summed across all items to produce a score representing the attitude toward the person, group, or idea. The entire set of items came to be called a Likert scale.\r\n\r\n&nbsp;\r\n\r\nThus unless you are measuring people\u2019s attitude toward something by assessing their level of agreement with several statements about it, it is best to avoid calling it a Likert scale. You are probably just using a \u201crating scale.\u201d\r\n\r\n&nbsp;\r\n\r\n<strong>Writing Effective Items<\/strong>\r\n\r\nWe can now consider some principles of writing questionnaire items that minimize unintended context effects and maximize the reliability and validity of participants\u2019 responses. A rough guideline for writing questionnaire items is provided by the BRUSO model (Peterson, 2000). 
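The Likert scoring procedure described above (assign a number to each 5-point agreement response, reverse-code unfavorable statements, then sum across items) can be sketched in a few lines of code. This is a minimal illustration; the statements, responses, and function name are hypothetical, not part of any standard library.

```python
# Minimal sketch of Likert-scale scoring: each 5-point agreement
# response is converted to a number, unfavorable (reverse-keyed)
# statements are reverse-coded, and item scores are summed into
# one attitude score. Items and data are hypothetical.

SCALE = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neither Agree nor Disagree": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

def score_likert(responses, reverse_keyed):
    """responses: one label per statement; reverse_keyed: indices of
    statements worded unfavorably, which must be reverse-coded."""
    total = 0
    for i, label in enumerate(responses):
        value = SCALE[label]
        if i in reverse_keyed:
            value = 6 - value  # flips 1<->5 and 2<->4 on a 1-5 scale
        total += value
    return total

# Four statements about some attitude object; statements 1 and 3 are
# worded unfavorably, so disagreement with them indicates a positive
# attitude and they are reverse-keyed.
answers = ["Agree", "Disagree", "Strongly Agree", "Strongly Disagree"]
print(score_likert(answers, reverse_keyed={1, 3}))  # 4 + 4 + 5 + 5 = 18
```

Note how reverse coding makes all four item scores point in the same direction before summing, which is what allows the total to represent a single attitude.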
An acronym, BRUSO stands for \u201cbrief,\u201d \u201crelevant,\u201d \u201cunambiguous,\u201d \u201cspecific,\u201d and \u201cobjective.\u201d Effective questionnaire items are brief and to the point. They avoid long, overly technical, or unnecessary words. This makes them easier for respondents to understand and faster for them to complete. Effective questionnaire items are also relevant to the research question. If a respondent\u2019s sexual orientation, marital status, or income is not relevant, then items on them should probably not be included. Again, this makes the questionnaire faster to complete, but it also avoids annoying respondents with what they will rightly perceive as irrelevant or even \u201cnosy\u201d questions. Effective questionnaire items are also unambiguous; they can be interpreted in only one way. Part of the problem with the alcohol item presented earlier in this section is that different respondents might have different ideas about what constitutes \u201can alcoholic drink\u201d or \u201ca typical day.\u201d Effective questionnaire items are also specific, so that it is clear to respondents what their response should be about and clear to researchers what it is about. A common problem here is closed-ended items that are \u201cdouble barreled.\u201d They ask about two conceptually separate issues but allow only one response. For example, \u201cPlease rate the extent to which you have been feeling anxious and depressed.\u201d This item should probably be split into two separate items\u2014one about anxiety and one about depression. Finally, effective questionnaire items are objective in the sense that they do not reveal the researcher\u2019s own opinions or lead participants to answer in a particular way. 
Table 6.2 \u201cBRUSO Model of Writing Effective Questionnaire Items, Plus Examples\u201d shows some examples of poor and effective questionnaire items based on the BRUSO criteria.\r\n\r\n&nbsp;\r\n\r\n<strong>Table 6.2 BRUSO Model of Writing Effective Questionnaire Items, Plus Examples<\/strong>\r\n\r\n&nbsp;\r\n<figure class=\"table\">\r\n<table>\r\n<tbody>\r\n<tr>\r\n<td valign=\"top\">Criterion<\/td>\r\n<td valign=\"top\">Poor<\/td>\r\n<td valign=\"top\">Effective<\/td>\r\n<\/tr>\r\n<tr>\r\n<td valign=\"top\">B\u2014Brief<\/td>\r\n<td valign=\"top\">\u201cAre you now or have you ever been the possessor of a firearm?\u201d<\/td>\r\n<td valign=\"top\">\u201cHave you ever owned a gun?\u201d<\/td>\r\n<\/tr>\r\n<tr>\r\n<td valign=\"top\">R\u2014Relevant<\/td>\r\n<td valign=\"top\">\u201cWhat is your sexual orientation?\u201d<\/td>\r\n<td valign=\"top\">Do not include this item unless it is clearly relevant to the research.<\/td>\r\n<\/tr>\r\n<tr>\r\n<td valign=\"top\">U\u2014Unambiguous<\/td>\r\n<td valign=\"top\">\u201cAre you a gun person?\u201d<\/td>\r\n<td valign=\"top\">\u201cDo you currently own a gun?\u201d<\/td>\r\n<\/tr>\r\n<tr>\r\n<td valign=\"top\">S\u2014Specific<\/td>\r\n<td valign=\"top\">\u201cHow much have you read about the new gun control measure and sales tax?\u201d<\/td>\r\n<td valign=\"top\">\u201cHow much have you read about the new gun control measure?\u201d (ask about the sales tax in a separate item)<\/td>\r\n<\/tr>\r\n<tr>\r\n<td valign=\"top\">O\u2014Objective<\/td>\r\n<td valign=\"top\">\u201cHow much do you support the new gun control measure?\u201d<\/td>\r\n<td valign=\"top\">\u201cWhat is your view of the new gun control measure?\u201d<\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<\/figure>\r\n&nbsp;\r\n\r\nFor closed-ended items, it is also important to create an appropriate response scale. For categorical variables, the categories presented should generally be mutually exclusive and exhaustive. Mutually exclusive categories do not overlap. 
For a religion item, for example, the categories of Christian and Catholic are not mutually exclusive but Protestant and Catholic are. Exhaustive categories cover all possible responses. Although Protestant and Catholic are mutually exclusive, they are not exhaustive because there are many other religious categories that a respondent might select: Jewish, Hindu, Buddhist, and so on. In many cases, it is not feasible to include every possible category, in which case an Other category, with a space for the respondent to fill in a more specific response, is a good solution. If respondents could belong to more than one category (e.g., race), they should be instructed to choose all categories that apply.\r\n\r\n&nbsp;\r\n\r\nFor rating scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be \u201cbalanced\u201d around a neutral or modal midpoint. An example of an unbalanced rating scale measuring perceived likelihood might look like this:\r\n\r\nUnlikely | Somewhat Likely | Likely | Very Likely | Extremely Likely\r\n\r\n&nbsp;\r\n\r\nA balanced version might look like this:\r\n\r\nExtremely Unlikely | Somewhat Unlikely | As Likely as Not | Somewhat Likely | Extremely Likely\r\n\r\n&nbsp;\r\n\r\nNote, however, that a middle or neutral response option does not have to be included. Researchers sometimes choose to leave it out because they want to encourage respondents to think more deeply about their response and not simply choose the middle option by default.\r\n\r\n&nbsp;\r\n\r\nNumerical rating scales often begin at 1 and go up to 5 or 7. 
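For analysis, the labels on a rating scale like the balanced one above are typically converted to numbers (this is what makes closed-ended items easy to enter into a spreadsheet, as noted earlier). A minimal sketch, assuming the common convention of coding a 5-point scale as 1 through 5:

```python
# Sketch of converting the balanced likelihood scale shown above
# into numeric codes for analysis. The 1-5 coding is a common
# convention for a 5-point scale, not a fixed rule.

LIKELIHOOD_CODES = {
    "Extremely Unlikely": 1,
    "Somewhat Unlikely": 2,
    "As Likely as Not": 3,   # neutral midpoint of the balanced scale
    "Somewhat Likely": 4,
    "Extremely Likely": 5,
}

# Three respondents' answers to a single likelihood item.
responses = ["Somewhat Likely", "Extremely Unlikely", "As Likely as Not"]
coded = [LIKELIHOOD_CODES[r] for r in responses]
print(coded)  # [4, 1, 3]
```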
However, they can also begin at 0 if the lowest response option means the complete absence of something (e.g., no pain). They can also have 0 as their midpoint, but it is important to think about how this might change people\u2019s interpretation of the response options. For example, when asked to rate how successful in life they have been on a 0-to-10 scale, many people use numbers in the lower half of the scale because they interpret this to mean that they have been only somewhat successful in life. But when asked to rate how successful they have been in life on a \u22125 to +5 scale, very few people use numbers in the lower half of the scale because they interpret this to mean they have actually been unsuccessful in life (Schwarz, 1999).\r\n\r\n&nbsp;\r\n\r\n<strong>Formatting the Questionnaire<\/strong>\r\n\r\nWriting effective items is only one part of constructing a survey questionnaire. For one thing, every survey questionnaire should have a written or spoken introduction that serves two basic functions (Peterson, 2000). One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary either because participants do not know they are in a study (as in naturalistic observation) or because they are part of a subject pool and have already shown their willingness to participate by signing up and showing up for the study. Survey research usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail\u2014and the researcher must make a good case for why they should agree to participate. 
Thus the introduction should briefly explain the purpose of the survey and its importance, provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates), acknowledge the importance of the respondent\u2019s participation, and describe any incentives for participating.\r\n\r\n&nbsp;\r\n\r\nThe second function of the introduction is to establish informed consent. Remember that this means describing to respondents everything that might affect their decision to participate. This includes the topics covered by the survey, the amount of time it is likely to take, the respondent\u2019s option to withdraw at any time, confidentiality issues, and so on. Written consent forms are not typically used in survey research, so it is important that this part of the introduction be well documented and presented clearly and in its entirety to every respondent.\r\n\r\n&nbsp;\r\n\r\nThe introduction should be followed by the substantive questionnaire items. But first, it is important to present clear instructions for completing the questionnaire, including examples of how to use any unusual response scales. Remember that this is the point at which respondents are usually most interested and least fatigued, so it is good practice to start with the most important items for purposes of the research and proceed to less important items. Items should also be grouped by topic or by type. For example, items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together if possible to make things faster and easier for respondents. Demographic items are often presented last because they are least interesting to participants but also easy to answer in the event respondents have become tired or bored. 
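The ordering guidelines just described (introduction, then instructions, then the most important items grouped by topic or scale, with demographics last) can be sketched as a simple assembly routine. The function and section names here are hypothetical placeholders, not part of any survey software.

```python
# Sketch of a questionnaire layout following the ordering guidelines
# described above. Section contents are hypothetical placeholders.

def assemble_questionnaire(intro, instructions, item_groups, demographics):
    """item_groups: list of (topic, items) tuples, ordered from most
    to least important, with items grouped by shared rating scale."""
    parts = [intro, instructions]
    for topic, items in item_groups:
        parts.append(topic)
        parts.extend(items)
    parts.extend(demographics)  # least interesting; easy when fatigued
    return parts

layout = assemble_questionnaire(
    intro="Purpose, sponsor, consent information",
    instructions="How to use the 5-point agreement scale",
    item_groups=[("Core attitude items", ["item 1", "item 2"]),
                 ("Secondary items", ["item 3"])],
    demographics=["age", "gender"],
)
print(layout[0])   # the introduction comes first
print(layout[-1])  # demographic items come last
```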
Of course, any survey should end with an expression of appreciation to the respondent.\r\n<h2 id=\"h4\">6.3\u00a0 Conducting Surveys<\/h2>\r\nThe four main ways to conduct surveys are through in-person interviews, by telephone, through the mail, and over the Internet. As with other aspects of survey design, the choice depends on both the researcher\u2019s goals and the budget. In-person interviews have the highest response rates and provide the closest personal contact with respondents. Personal contact can be important, for example, when the interviewer must see and make judgments about respondents, as is the case with some mental health interviews. But in-person interviewing is by far the most costly approach. Telephone surveys have lower response rates and still provide some personal contact with respondents. They can also be costly but are generally less so than in-person interviews. Traditionally, telephone directories have provided fairly comprehensive sampling frames. Mail surveys are less costly still but generally have even lower response rates\u2014making them most susceptible to non-response bias. Internet surveys are generally the least costly of all, but they too tend to have low response rates, and their samples are limited to people with Internet access.\r\n\r\n&nbsp;\r\n\r\n<strong>Key Takeaways<\/strong>\r\n\r\n\u00b7\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Responding to a survey item is itself a complex cognitive process that involves interpreting the question, retrieving information, making a tentative judgment, putting that judgment into the required response format, and editing the response.\r\n\r\n\u00b7\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Survey questionnaire responses are subject to numerous context effects due to question wording, item order, response options, and other factors. Researchers should be sensitive to such effects when constructing surveys and interpreting survey results.\r\n\r\n\u00b7\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Survey questionnaire items are either open-ended or closed-ended. 
Open-ended items simply ask a question and allow respondents to answer in whatever way they want. Closed-ended items ask a question and provide several response options that respondents must choose from.\r\n\r\n\u00b7\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 According to the BRUSO model, questionnaire items should be brief, relevant, unambiguous, specific, and objective.\r\n<h2 id=\"h5\">References from Chapter 6<\/h2>\r\nConverse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890\u20131960. Berkeley, CA: University of California Press.\r\n\r\n&nbsp;\r\n\r\nLikert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 140, 1\u201355.\r\n\r\n&nbsp;\r\n\r\nPeterson, R. A. (2000). Constructing effective questionnaires. Thousand Oaks, CA: Sage.\r\n\r\n&nbsp;\r\n\r\nSchwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54, 93\u2013105.\r\n\r\n&nbsp;\r\n\r\nSchwarz, N., &amp; Strack, F. (1990). Context effects in attitude surveys: Applying cognitive theory to social research. In W. Stroebe &amp; M. Hewstone (Eds.), European review of social psychology (Vol. 2, pp. 31\u201350). Chichester, UK: Wiley.\r\n\r\n&nbsp;\r\n\r\nStrack, F., Martin, L. L., &amp; Schwarz, N. (1988). Priming and communication: The social determinants of information use in judgments of life satisfaction. European Journal of Social Psychology, 18, 429\u2013442.\r\n\r\n&nbsp;\r\n\r\nSudman, S., Bradburn, N. M., &amp; Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass.","rendered":""}}
Once they have interpreted the question, they must retrieve relevant information from memory to answer it. But what information should they retrieve, and how should they go about retrieving it? They might think vaguely about some recent occasions on which they drank alcohol, they might carefully try to recall and count the number of alcoholic drinks they consumed last week, or they might retrieve some existing beliefs that they have about themselves (e.g., \u201cI am not much of a drinker\u201d). Then they must use this information to arrive at a tentative judgment about how many alcoholic drinks they consume in a typical day. For example, this might mean dividing the number of alcoholic drinks they consumed last week by seven to come up with an average number per day. Then they must format this tentative answer in terms of the response options actually provided. In this case, the options pose additional problems of interpretation. For example, what does \u201caverage\u201d mean, and what would count as \u201csomewhat more\u201d than average? Finally, they must decide whether they want to report the response they have come up with or whether they want to edit it in some way. For example, if they believe that they drink much more than average, they might not want to report this for fear of looking bad in the eyes of the researcher.<\/p>\n<p>&nbsp;<\/p>\n<p>From this perspective, what at first appears to be a simple matter of asking people how much they drink (and receiving a straightforward answer from them) turns out to be much more complex.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Context Effects on Questionnaire Responses<\/strong><\/p>\n<p>Again, this complexity can lead to unintended influences on respondents\u2019 answers. These are often referred to as context effects because they are not related to the content of the item but to the context in which the item appears (Schwarz &amp; Strack, 1990). 
For example, there is an item-order effect when the order in which the items are presented affects people\u2019s responses. One item can change how participants interpret a later item or change the information that they retrieve to respond to later items. In one study, researcher Fritz Strack and his colleagues asked college students about both their general life satisfaction and their dating frequency (Strack, Martin, &amp; Schwarz, 1988). When the life satisfaction item came first, the correlation between the two was only \u2212.12, suggesting that the two variables are only weakly related. But when the dating frequency item came first, the correlation between the two was +.66, suggesting that those who date more have a strong tendency to be more satisfied with their lives. Reporting their dating frequency first made that information more accessible in memory, so respondents were more likely to base their life satisfaction ratings on it.<\/p>\n<p>&nbsp;<\/p>\n<p>The response options provided can also have unintended effects on people\u2019s responses (Schwarz, 1999). For example, when people are asked how often they are \u201creally irritated\u201d and given response options ranging from \u201cless than once a year\u201d to \u201cmore than once a month,\u201d they tend to think of major irritations and report being irritated infrequently. But when they are given response options ranging from \u201cless than once a day\u201d to \u201cseveral times a month,\u201d they tend to think of minor irritations and report being irritated frequently. People also tend to assume that middle response options represent what is normal or typical. So if they think of themselves as normal or typical, they tend to choose middle response options. 
For example, people are likely to report watching more television when the response options are centered on a middle option of 4 hours than when centered on a middle option of 2 hours.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Writing Survey Questionnaire Items<\/strong><\/p>\n<p><em><strong>Types of Items<\/strong><\/em><\/p>\n<p>Questionnaire items can be either open-ended or closed-ended. Open-ended items simply ask a question and allow participants to answer in whatever way they choose. The following are examples of open-ended questionnaire items.<\/p>\n<p>&nbsp;<\/p>\n<p>\u201cWhat is the most important thing to teach children to prepare them for life?\u201d<\/p>\n<p>\u201cPlease describe a time when you were discriminated against because of your age.\u201d<\/p>\n<p>\u201cIs there anything else you would like to tell us about?\u201d<\/p>\n<p>&nbsp;<\/p>\n<p>Open-ended items are useful when researchers do not know how participants might respond or want to avoid influencing their responses. They tend to be used when researchers have more vaguely defined research questions\u2014often in the early stages of a research project. Open-ended items are relatively easy to write because there are no response options to worry about. However, they take more time and effort on the part of participants, and they are more difficult for the researcher to analyze because the answers must be transcribed, coded, and submitted to some form of content analysis.<\/p>\n<p>&nbsp;<\/p>\n<p>Closed-ended items ask a question and provide a set of response options for participants to choose from. 
The alcohol item just mentioned is an example, as are the following:<\/p>\n<p>How old are you?<\/p>\n<p>_____ Under 18<\/p>\n<p>_____ 18 to 34<\/p>\n<p>_____ 35 to 49<\/p>\n<p>_____ 50 to 70<\/p>\n<p>_____ Over 70<\/p>\n<p>&nbsp;<\/p>\n<p>On a scale of 0 (no pain at all) to 10 (worst pain ever experienced), how much pain are you in right now?<\/p>\n<p>&nbsp;<\/p>\n<p>Have you ever in your adult life been depressed for a period of 2 weeks or more?<\/p>\n<p>&nbsp;<\/p>\n<p>Closed-ended items are used when researchers have a good idea of the different responses that participants might make. They are also used when researchers are interested in a well-defined variable or construct such as participants\u2019 level of agreement with some statement, perceptions of risk, or frequency of a particular behavior. Closed-ended items are more difficult to write because they must include an appropriate set of response options. However, they are relatively quick and easy for participants to complete. They are also much easier for researchers to analyze because the responses can be easily converted to numbers and entered into a spreadsheet. For these reasons, closed-ended items are much more common.<\/p>\n<p>&nbsp;<\/p>\n<p>All closed-ended items include a set of response options from which a participant must choose. For categorical variables like sex, race, or political party preference, the categories are usually listed and participants choose the one (or ones) that they belong to. For quantitative variables, a rating scale is typically provided. A rating scale is an ordered set of responses that participants must choose from.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>What Is a Likert Scale?<\/strong><\/p>\n<p>In reading about psychological research, you are likely to encounter the term Likert scale. 
Although this term is sometimes used to refer to almost any rating scale (e.g., a 0-to-10 life satisfaction scale), it has a much more precise meaning.<\/p>\n<p>&nbsp;<\/p>\n<p>In the 1930s, researcher Rensis Likert (pronounced LICK-ert) created a new approach for measuring people\u2019s attitudes (Likert, 1932). It involves presenting people with several statements\u2014including both favorable and unfavorable statements\u2014about some person, group, or idea. Respondents then express their agreement or disagreement with each statement on a 5-point scale: Strongly Agree, Agree, Neither Agree nor Disagree, Disagree, Strongly Disagree. Numbers are assigned to each response (with reverse coding as necessary) and then summed across all items to produce a score representing the attitude toward the person, group, or idea. The entire set of items came to be called a Likert scale.<\/p>\n<p>&nbsp;<\/p>\n<p>Thus unless you are measuring people\u2019s attitude toward something by assessing their level of agreement with several statements about it, it is best to avoid calling it a Likert scale. You are probably just using a \u201crating scale.\u201d<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Writing Effective Items<\/strong><\/p>\n<p>We can now consider some principles of writing questionnaire items that minimize unintended context effects and maximize the reliability and validity of participants\u2019 responses. A rough guideline for writing questionnaire items is provided by the BRUSO model (Peterson, 2000). An acronym, BRUSO stands for \u201cbrief,\u201d \u201crelevant,\u201d \u201cunambiguous,\u201d \u201cspecific,\u201d and \u201cobjective.\u201d Effective questionnaire items are brief and to the point. They avoid long, overly technical, or unnecessary words. This makes them easier for respondents to understand and faster for them to complete. Effective questionnaire items are also relevant to the research question. 
If a respondent\u2019s sexual orientation, marital status, or income is not relevant, then items on them should probably not be included. Again, this makes the questionnaire faster to complete, but it also avoids annoying respondents with what they will rightly perceive as irrelevant or even \u201cnosy\u201d questions. Effective questionnaire items are also unambiguous; they can be interpreted in only one way. Part of the problem with the alcohol item presented earlier in this section is that different respondents might have different ideas about what constitutes \u201can alcoholic drink\u201d or \u201ca typical day.\u201d Effective questionnaire items are also specific, so that it is clear to respondents what their response should be about and clear to researchers what it is about. A common problem here is closed-ended items that are \u201cdouble barreled.\u201d They ask about two conceptually separate issues but allow only one response. For example, \u201cPlease rate the extent to which you have been feeling anxious and depressed.\u201d This item should probably be split into two separate items\u2014one about anxiety and one about depression. Finally, effective questionnaire items are objective in the sense that they do not reveal the researcher\u2019s own opinions or lead participants to answer in a particular way. 
Table 6.2 \u201cBRUSO Model of Writing Effective Questionnaire Items, Plus Examples\u201d shows some examples of poor and effective questionnaire items based on the BRUSO criteria.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Table 6.2 BRUSO Model of Writing Effective Questionnaire Items, Plus Examples<\/strong><\/p>\n<p>&nbsp;<\/p>\n<figure class=\"table\">\n<table>\n<tbody>\n<tr>\n<td valign=\"top\">Criterion<\/td>\n<td valign=\"top\">Poor<\/td>\n<td valign=\"top\">Effective<\/td>\n<\/tr>\n<tr>\n<td valign=\"top\">B\u2014Brief<\/td>\n<td valign=\"top\">\u201cAre you now or have you ever been the possessor of a firearm?\u201d<\/td>\n<td valign=\"top\">\u201cHave you ever owned a gun?\u201d<\/td>\n<\/tr>\n<tr>\n<td valign=\"top\">R\u2014Relevant<\/td>\n<td valign=\"top\">\u201cWhat is your sexual orientation?\u201d<\/td>\n<td valign=\"top\">Do not include this item unless it is clearly relevant to the research.<\/td>\n<\/tr>\n<tr>\n<td valign=\"top\">U\u2014Unambiguous<\/td>\n<td valign=\"top\">\u201cAre you a gun person?\u201d<\/td>\n<td valign=\"top\">\u201cDo you currently own a gun?\u201d<\/td>\n<\/tr>\n<tr>\n<td valign=\"top\">S\u2014Specific<\/td>\n<td valign=\"top\">\u201cHow much have you read about the new gun control measure and sales tax?\u201d<\/td>\n<td valign=\"top\">Split into two items: \u201cHow much have you read about the new gun control measure?\u201d and \u201cHow much have you read about the new sales tax?\u201d<\/td>\n<\/tr>\n<tr>\n<td valign=\"top\">O\u2014Objective<\/td>\n<td valign=\"top\">\u201cHow much do you support the new gun control measure?\u201d<\/td>\n<td valign=\"top\">\u201cWhat is your view of the new gun control measure?\u201d<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/figure>\n<p>&nbsp;<\/p>\n<p>For closed-ended items, it is also important to create an appropriate response scale. For categorical variables, the categories presented should generally be mutually exclusive and exhaustive. Mutually exclusive categories do not overlap. For a religion item, for example, the categories of Christian and Catholic are not mutually exclusive but Protestant and Catholic are. 
Exhaustive categories cover all possible responses. Although Protestant and Catholic are mutually exclusive, they are not exhaustive because there are many other religious categories that a respondent might select: Jewish, Hindu, Buddhist, and so on. In many cases, it is not feasible to include every possible category, in which case an Other category, with a space for the respondent to fill in a more specific response, is a good solution. If respondents could belong to more than one category (e.g., race), they should be instructed to choose all categories that apply.<\/p>\n<p>&nbsp;<\/p>\n<p>For rating scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be \u201cbalanced\u201d around a neutral or modal midpoint. An example of an unbalanced rating scale measuring perceived likelihood might look like this:<\/p>\n<p>Unlikely | Somewhat Likely | Likely | Very Likely | Extremely Likely<\/p>\n<p>&nbsp;<\/p>\n<p>A balanced version might look like this:<\/p>\n<p>Extremely Unlikely | Somewhat Unlikely | As Likely as Not | Somewhat Likely | Extremely Likely<\/p>\n<p>&nbsp;<\/p>\n<p>Note, however, that a middle or neutral response option does not have to be included. Researchers sometimes choose to leave it out because they want to encourage respondents to think more deeply about their response and not simply choose the middle option by default.<\/p>\n<p>&nbsp;<\/p>\n<p>Numerical rating scales often begin at 1 and go up to 5 or 7. However, they can also begin at 0 if the lowest response option means the complete absence of something (e.g., no pain). 
They can also have 0 as their midpoint, but it is important to think about how this might change people\u2019s interpretation of the response options. For example, when asked to rate how successful in life they have been on a 0-to-10 scale, many people use numbers in the lower half of the scale because they interpret this to mean that they have been only somewhat successful in life. But when asked to rate how successful they have been in life on a \u22125 to +5 scale, very few people use numbers in the lower half of the scale because they interpret this to mean they have actually been unsuccessful in life (Schwarz, 1999).<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Formatting the Questionnaire<\/strong><\/p>\n<p>Writing effective items is only one part of constructing a survey questionnaire. For one thing, every survey questionnaire should have a written or spoken introduction that serves two basic functions (Peterson, 2000). One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary either because participants do not know they are in a study (as in naturalistic observation) or because they are part of a subject pool and have already shown their willingness to participate by signing up and showing up for the study. Survey research usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail\u2014and the researcher must make a good case for why they should agree to participate. Thus the introduction should briefly explain the purpose of the survey and its importance, provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates), acknowledge the importance of the respondent\u2019s participation, and describe any incentives for participating.<\/p>\n<p>&nbsp;<\/p>\n<p>The second function of the introduction is to establish informed consent. 
Remember that this means describing to respondents everything that might affect their decision to participate. This includes the topics covered by the survey, the amount of time it is likely to take, the respondent\u2019s option to withdraw at any time, confidentiality issues, and so on. Written consent forms are not typically used in survey research, so it is important that this part of the introduction be well documented and presented clearly and in its entirety to every respondent.<\/p>\n<p>&nbsp;<\/p>\n<p>The introduction should be followed by the substantive questionnaire items. But first, it is important to present clear instructions for completing the questionnaire, including examples of how to use any unusual response scales. Remember that this is the point at which respondents are usually most interested and least fatigued, so it is good practice to start with the most important items for purposes of the research and proceed to less important items. Items should also be grouped by topic or by type. For example, items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together if possible to make things faster and easier for respondents. Demographic items are often presented last because they are least interesting to participants but also easy to answer in the event respondents have become tired or bored. Of course, any survey should end with an expression of appreciation to the respondent.<\/p>\n<h2 id=\"h4\">6.3\u00a0 Conducting Surveys<\/h2>\n<p>The four main ways to conduct surveys are through in-person interviews, by telephone, through the mail, and over the Internet. As with other aspects of survey design, the choice depends on both the researcher\u2019s goals and the budget. In-person interviews have the highest response rates and provide the closest personal contact with respondents. 
Personal contact can be important, for example, when the interviewer must see and make judgments about respondents, as is the case with some mental health interviews. But in-person interviewing is by far the most costly approach. Telephone surveys have lower response rates and still provide some personal contact with respondents. They can also be costly but are generally less so than in-person interviews. Traditionally, telephone directories have provided fairly comprehensive sampling frames. Mail surveys are less costly still but generally have even lower response rates\u2014making them most susceptible to non-response bias.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Key Takeaways<\/strong><\/p>\n<p>\u00b7\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Responding to a survey item is itself a complex cognitive process that involves interpreting the question, retrieving information, making a tentative judgment, putting that judgment into the required response format, and editing the response.<\/p>\n<p>\u00b7\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Survey questionnaire responses are subject to numerous context effects due to question wording, item order, response options, and other factors. Researchers should be sensitive to such effects when constructing surveys and interpreting survey results.<\/p>\n<p>\u00b7\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Survey questionnaire items are either open-ended or closed-ended. Open-ended items simply ask a question and allow respondents to answer in whatever way they want. Closed-ended items ask a question and provide several response options that respondents must choose from.<\/p>\n<p>\u00b7\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 According to the BRUSO model, questionnaire items should be brief, relevant, unambiguous, specific, and objective.<b><\/b><\/p>\n<h2 id=\"h5\">References from Chapter 6<\/h2>\n<p>Converse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890\u20131960. 
Berkeley, CA: University of California Press.<\/p>\n<p>&nbsp;<\/p>\n<p>Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 140, 1\u201355.<\/p>\n<p>&nbsp;<\/p>\n<p>Peterson, R. A. (2000). Constructing effective questionnaires. Thousand Oaks, CA: Sage.<\/p>\n<p>&nbsp;<\/p>\n<p>Schwarz, N., &amp; Strack, F. (1990). Context effects in attitude surveys: Applying cognitive theory to social research. In W. Stroebe &amp; M. Hewstone (Eds.), European review of social psychology (Vol. 2, pp. 31\u201350). Chichester, UK: Wiley.<\/p>\n<p>&nbsp;<\/p>\n<p>Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54, 93\u2013105.<\/p>\n<p>&nbsp;<\/p>\n<p>Strack, F., Martin, L. L., &amp; Schwarz, N. (1988). Priming and communication: The social determinants of information use in judgments of life satisfaction. European Journal of Social Psychology, 18, 429\u2013442.<\/p>\n<p>&nbsp;<\/p>\n<p>Sudman, S., Bradburn, N. M., &amp; Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. 
San Francisco, CA: Jossey-Bass.<\/p>\n","protected":false},"author":13,"menu_order":6,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":[],"pb_section_license":""},"chapter-type":[],"contributor":[],"license":[],"part":3,"_links":{"self":[{"href":"https:\/\/digitaleditions.library.dal.ca\/researchmethodspsychneuro\/wp-json\/pressbooks\/v2\/chapters\/30"}],"collection":[{"href":"https:\/\/digitaleditions.library.dal.ca\/researchmethodspsychneuro\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/digitaleditions.library.dal.ca\/researchmethodspsychneuro\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/digitaleditions.library.dal.ca\/researchmethodspsychneuro\/wp-json\/wp\/v2\/users\/13"}],"version-history":[{"count":3,"href":"https:\/\/digitaleditions.library.dal.ca\/researchmethodspsychneuro\/wp-json\/pressbooks\/v2\/chapters\/30\/revisions"}],"predecessor-version":[{"id":54,"href":"https:\/\/digitaleditions.library.dal.ca\/researchmethodspsychneuro\/wp-json\/pressbooks\/v2\/chapters\/30\/revisions\/54"}],"part":[{"href":"https:\/\/digitaleditions.library.dal.ca\/researchmethodspsychneuro\/wp-json\/pressbooks\/v2\/parts\/3"}],"metadata":[{"href":"https:\/\/digitaleditions.library.dal.ca\/researchmethodspsychneuro\/wp-json\/pressbooks\/v2\/chapters\/30\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/digitaleditions.library.dal.ca\/researchmethodspsychneuro\/wp-json\/wp\/v2\/media?parent=30"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/digitaleditions.library.dal.ca\/researchmethodspsychneuro\/wp-json\/pressbooks\/v2\/chapter-type?post=30"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/digitaleditions.library.dal.ca\/researchmethodspsychneuro\/wp-json\/wp\/v2\/contributor?post=30"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/digitaleditions.library.dal.ca\/researchmethodspsychneuro\/wp-json\/wp\/v2\/license?post=30"
}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}