Introduction
The purpose of this article is to identify three quantitative data collection approaches, to provide a summary of the findings, and to determine an appropriate quantitative data collection approach for a study on the role of Information Technology (IT) during national emergency responses. It is equally important to ensure that these findings include justification regarding sampling, bias, validity, and reliability. The article will reveal in what ways researchers considered sampling, bias, validity, and reliability for the study.
The article summarizes three quantitative data collection techniques and how they have influenced the quantitative study of integrating Information Technology (IT) into national emergency response tools such as the Emergency Management Information System (EMIS). The article will also address how the experience and training of emergency responders can affect EMIS operations, and how such a study can influence national emergency responses during emergencies like the attacks of September 11, 2001.
Quantitative methodology
Conducting research studies requires that researchers examine the best research approach for each individual study. No single research approach fits every study, which is why researchers must plan their research strategy, employing analytical and numerical approaches when considering a quantitative methodology as the best course of action for their research. Before beginning a study, scholars face a dilemma about which research methodology to use. Many choose to adopt a qualitative approach, while others prefer a quantitative methodology. The reality is that there is no right or wrong methodology. Each approach contains unique characteristics that can be useful depending on the purpose of the research (Holton & Burnett, 2005). Quantitative and qualitative research methodologies present distinct views of the social world, of the role of science in knowing and understanding this world, and of data collection methods. According to Cohen and Manion (1980), when choosing a quantitative methodology researchers employ empirical methods and empirical statements, demonstrating that those methods and statements are a descriptive representation of what the "real world" is rather than what it "ought" to be. Typically, researchers communicate empirical statements in numerical terms, which become part of empirical evaluations. These empirical evaluations translate into ideas that try to determine to what degree a specific process or methodology can empirically fulfill certain standards or norms.
Quantitative research commonly uses experimental, quasi-experimental, correlational, or descriptive instruments (Holton & Burnett, 2005). As stated earlier, the article communicates three different quantitative data collection approaches, but before addressing these, it is essential to define quantitative methodology as scholars see it. Researchers choose a quantitative approach instead of its counterpart, the qualitative research methodology, to illustrate a measurable causal effect from numerical changes that emerge from the characteristics of a studied population (Kraska, 2010). Any measurable form of data resulting from the study can also help explain predictability, as well as describe causal relationships (2010). Those who believe that a quantitative paradigm distorts the phenomena studied must be confronted with the idea that the quantitative paradigm is more appropriate when establishing cause-and-effect relationships, and when professional values are not compromised by intrusive, controlled techniques (Siegel, 1984). The next section presents the data collection methodologies.
Data Collection
Data collection methods used in quantitative research studies generate objective, observable, reliable, numerical facts about particular, operationally defined components of social reality (Allen-Meares & Lane, 1990). It is important to take certain assumptions into consideration before discussing any data collection methods. The first assumption is that the quantitative concept hypothesizes the social world as causal expressions of the reality that influences and predicts social actions (Cohen & Manion, 1980). Another assumption is that the quantitative methodology is a paradigm that uses processes designed to verify and confirm relationships described by theory, in part through experimental or correlational research strategies that are verifiable and logically deductive (Reichardt & Rallis, 1994). The first of the three data collection methods discussed in this article is structured observation.
Structured Observation
Structured observation works with a predetermined plan of what exactly to look for during the observation sessions, whereas naturalistic observation looks for anything of importance without any prearranged strategy (Wragg, 1999). Structured observations measure the frequency, duration, or magnitude of certain behaviors exhibited by the observed object or objects. Moreover, structured observations often involve rubrics or checklists that allow for standardized data collection. Quantitative approaches tend to be highly structured, with very detailed, pre-developed observation schedules. If this approach is chosen, the researcher must decide whether to use an existing observational schedule or to develop one specifically for the study. Compilation of data by way of observation can consider scenarios that include facts, real-life events, or behaviors (verbal or non-verbal). Structured observations focus on social phenomena and are employed to test research hypotheses, using a coding system to register the participants' conduct (Sukamolson, 2007). The coding system classifies behavior, which is coded as observed in terms of how frequently it appears. As stated earlier, one difference between structured observation and other observation methods is that the focus of the observation has been determined beforehand (2007). Structured observation satisfies the ideologies and expectations of quantitative research because the focus of the observation shifts into manageable and more adaptable forms of data that translate into variables, hence into quantitative data.
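To make the coding-system idea concrete, the short Python sketch below tallies observer-logged behavior codes against a predetermined checklist; the codes and session log are hypothetical, not drawn from any cited study.

    from collections import Counter

    # Hypothetical predetermined coding scheme (the checklist is
    # illustrative, not taken from the study).
    CODES = {"V": "verbal request", "G": "gesture", "W": "withdrawal"}

    # Codes logged by an observer during one session, in order of occurrence.
    session_log = ["V", "V", "G", "W", "V", "G", "G", "V"]

    # The frequency of each coded behavior becomes a numerical variable.
    counts = Counter(session_log)
    total = len(session_log)
    for code, label in CODES.items():
        freq = counts.get(code, 0)
        print(f"{label}: {freq} occurrences ({freq / total:.0%} of events)")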
Interviews
Although interviews are often associated with qualitative research, they can play a large part in quantitative research as well (Sukamolson, 2007). In quantitative research studies, interviews are highly structured, with the interviewee able to choose a response only from a pre-set series of options on the interview form. Often the reply can be a simple yes or no, or it may just be a number. Otherwise, the interviewee may have to choose one item from a list. After completing the interview, the answers are coded and entered into a computer database for statistical analysis.
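A minimal Python sketch of that coding step follows; the questions, response options, and numeric codes are hypothetical stand-ins for whatever instrument a given study uses.

    # Hypothetical coding scheme for a structured interview: each pre-set
    # response maps to a numeric code suitable for statistical analysis.
    RESPONSE_CODES = {"yes": 1, "no": 0}
    SATISFACTION_SCALE = {"low": 1, "medium": 2, "high": 3}

    raw_interviews = [
        {"id": 1, "uses_emis": "yes", "satisfaction": "high"},
        {"id": 2, "uses_emis": "no", "satisfaction": "low"},
    ]

    # Translate verbal answers into numeric records for a database.
    coded = [
        {
            "id": r["id"],
            "uses_emis": RESPONSE_CODES[r["uses_emis"]],
            "satisfaction": SATISFACTION_SCALE[r["satisfaction"]],
        }
        for r in raw_interviews
    ]
    print(coded)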
Researchers conduct interviews in different ways, and several types of interview exist in the literature of research methodology (Morgan & Harmon, 2001). The purpose that researchers want the study to achieve determines the choice of interview. In the face-to-face interview, researchers attempt to discover peculiarities about the observed objects and, as a result, arrive at effective answers in a short time because of the personal interaction that this setting provides (2001). The first type is the structured interview, which takes the form of a questionnaire. In this method, respondents are asked the same questions in the same sequence in order to minimize any bias (Morgan & Harmon, 2001). The second type is the semi-structured interview. In this technique, the interviewer has a list of general subjects, but some questions can be omitted depending upon the circumstances. The third type is the unstructured interview, also referred to as informal. This type of interview is an effective way of collecting in-depth data. There is no predetermined list of questions, so the respondent can talk freely as long as the content and topic are related (Morgan & Harmon, 2001).
Surveys
Surveys employ data collection instruments to collect data from a sample of a relevant population, or from an entire population (Patton, 2002). Surveys are used extensively in experiments because of their flexibility in collecting data on almost any issue (2002). Surveys are inexpensive and precise methods of collecting data, but they are at times difficult to produce and may return a low response percentage, negatively affecting the reliability and validity of the collected data (2002).
Information Systems in Emergency Management
Information technologies form an essential part of emergency preparedness and planning. Emergency preparedness outlines the steps a community takes to respond efficiently to external or internal threats and to reduce the effect of these threats on the wellbeing and security of the community (Perry & Lindell, 2003). National crises and disasters can turn into serious events that demand the best decision-making efforts from leaders in both government and the private sector (Mendonca, 2007). Decision Support Systems strengthen operational, tactical, and strategic decisions in organizations (French & Turoff, 2007). These systems assist in the decision-making process during incident management. Emergency response communication and information requirements vary in scope and proportion, and information and communication technologies help make these requirements a reality (Turoff, 2002). To provide adequate decisional support for managing crises, researchers and practitioners in information systems and disaster management have urged attention to the development or enhancement of Emergency Management Information Systems (EMIS) (Carver & Turoff, 2007).
Emergency Management Information Systems Research Study
During their study, Shen, Carswell, Santhanam, and Bailey (2012) conducted two experiments and found how Emergency Management Information Systems (EMIS) helped those responsible for making hard decisions with information gathered and shared at every level of the incident response team. As a decision-support system, an EMIS provides members of the emergency management team the ability to collect partial but sometimes important data during highly stressful circumstances with a high volume of information to be processed (2012). Originally, EMIS were designed to provide geospatial data only as two-dimensional (2D) displays that show orientations and relative positions by laying objects in a plane and by using colors or contour lines to represent elevations (Kwan & Lee, 2005). As performance factors for these types of tools improved, interest shifted to three-dimensional (3D) visualization, especially after the events of September 11. With the extra dimension, 3D displays incorporated orientations, relative positions, elevations, and shapes of objects, all in a single view. Even when 2D or 3D EMIS are utilized during emergency events, there is still always the possibility that a decision made late might cause hesitation, harm, or injuries. The study showed that before recommending any kind of flexible solution for EMIS, it is crucial to consider whether EMIS users and decision makers with less experience with these systems can learn and operate the system without assistance while performing the decision-making job in a dependable fashion, and whether they can demonstrate the capacity to make hard decisions without delays (2012). Like any technical tool or IT application, these systems become impractical without some type of human interaction. For the study, a laboratory-based experimental method was selected to test two hypotheses:
H10. Without guidance, novice EMIS users will fail to choose task-appropriate display formats at a rate greater than what would be expected by chance.
H2A. Decision makers given prospective decisional guidance about display format selection will have better decisional performance than those who do not get such guidance.
The use of a controlled environment allowed for the implementation of various tests, as well as the addition of a decisional guidance treatment, while allowing continuous scrutiny of participants' answers (Shen, Carswell, Santhanam, & Bailey, 2012). The researchers followed Silver's three-step approach, analyzing the effect of decisional guidance with two independent but closely related experiments (Silver, 1991).
Experiment one gave researchers the opportunity to investigate the extent to which inexperienced EMIS users, without supervision, instinctively selected the display designs that best matched their role as incident decision makers. The results for experiment one demonstrated that when the task required only horizontal information about the target object, 84% of all participants correctly specified that the plan view was the best visual representation for the task (Shen, Carswell, Santhanam, & Bailey, 2012). On the other hand, only 34% and 49% of participants correctly selected displays for tasks that required elevation views or 3D displays, respectively. Researchers replicated and validated the results with actual first responders from local fire and police departments. H10 examined whether inexperienced EMIS users demonstrated any proof of switching their display selections to fit a predictable and more comfortable working environment by selecting the displays that met their individual preferences and not necessarily the standard (2012). To determine whether participants' correct display choices reflected a statistical trend, researchers compared the actual percentage of correct choices to the percentage that would be expected to occur by chance alone (2012).
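One conventional way to formalize a comparison against chance is an exact binomial test. The sketch below assumes the 40% chance rate reported for random display selection; the counts themselves are hypothetical.

    from scipy.stats import binomtest

    # Hypothetical counts: 27 correct task-display choices out of 48
    # participants (illustrative, not the study's data).
    correct, n = 27, 48

    # Chance rate for random display selection, per the 40% baseline.
    result = binomtest(correct, n, p=0.40, alternative="greater")
    print(f"Observed accuracy: {correct / n:.0%}, "
          f"p-value vs. chance: {result.pvalue:.4f}")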
For the second experiment, Shen et al. established two groups: an experimental group whose participants used a decisional guidance example with basic information about display choices, and a control group whose participants received a guidance example that included only general information about the job of an incident commander but no additional information about display choices (Shen, Carswell, Santhanam, & Bailey, 2012). Participants were given questionnaires to document demographic data, graphical preference, and 3D mental rotation ability. To ensure the successful implementation of the treatment, researchers conducted manipulation checks after the treatment or control scripts. Participants were asked to answer ten true-or-false manipulation-check questions, and only those who properly answered all ten were included in the statistical analyses. The next step was to project the horizontal plan view and the 3D view onto a screen on the wall. Participants then completed the task of choosing which display format was more appropriate for improving their decision-making time and effectiveness. In summary, providing effective decisional guidance to inexperienced EMIS emergency response operators can improve the fluidity and promptness of their decisions. These results underlined that participants could understand, and appropriately apply, the correct doctrines for selecting a display format for each particular decision task. Provided that one purpose of EMIS is to assist users in making efficient and effective decisions in crisis situations, results like these demonstrate the positive value of giving incident commanders an EMIS that uses both 2D and 3D imaging processes and, more importantly, of providing them direction on when and how to use each system (2012).
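The inclusion rule described above is simple to express in code. The sketch below filters a hypothetical response table so that only participants who answered all ten manipulation-check questions correctly enter the analysis; column names and data are illustrative.

    import pandas as pd

    # Hypothetical responses: one row per participant, one column per
    # manipulation-check question (True = answered correctly).
    checks = pd.DataFrame(
        {f"q{i}": [True, True, False] for i in range(1, 11)},
        index=["p1", "p2", "p3"],
    )

    # Keep only participants who answered all ten questions correctly.
    passed = checks[checks.all(axis=1)]
    print(f"{len(passed)} of {len(checks)} participants enter the analysis")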
Sampling
Choosing the appropriate type and size of sample from a population plays a significant role in the planning of research (Passmore & Parker, 2005). Sampling methodologies known to researchers include convenience sampling, purposive sampling, simple random sampling, cluster sampling, stratified sampling, and a 100% census of all members of a population (2005). For the first experiment, Shen et al. selected 48 students from a basic psychology class at a southern state university. An additional 13 participants were recruited from local fire and police departments. This last group of participants was composed of four women and nine men, with ages ranging from 22 to 68 years old and an average job experience in emergency response of 16 years. Selecting participants from two different groups makes the sampling a stratified type due to the heterogeneous population (2005). For their study, Shen et al. used a random selection of display formats, resulting in an average accuracy rate for task-display matches of 40% (Shen, Carswell, Santhanam, & Bailey, 2012).
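As a general sketch of stratified sampling (not the study's actual recruitment procedure), the snippet below draws a simple random sample from each stratum of a hypothetical population frame containing students and first responders.

    import pandas as pd

    # Hypothetical population frame with two strata, mirroring the idea
    # of sampling students and first responders separately.
    frame = pd.DataFrame(
        {
            "person_id": range(1, 201),
            "stratum": ["student"] * 150 + ["first_responder"] * 50,
        }
    )

    # Draw a simple random sample within each stratum.
    sample = frame.groupby("stratum").sample(n=10, random_state=42)
    print(sample["stratum"].value_counts())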
For the second experiment, sixty-one participants were recruited from a basic psychology class at a southern state university (2012). They were randomly assigned to the experimental or control group. As discussed earlier, manipulation checks were conducted to verify the correct distribution of treatment conditions, and statistical analyses were conducted only on participants who had correctly answered all manipulation-check questions (2012).
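Random assignment of this kind usually amounts to a single shuffle and split; a minimal sketch with hypothetical participant IDs follows.

    import numpy as np

    rng = np.random.default_rng(seed=7)

    # Hypothetical participant IDs; 61 recruits as in the second experiment.
    participants = np.arange(1, 62)
    shuffled = rng.permutation(participants)

    # Split the shuffled list into experimental and control groups.
    experimental, control = shuffled[:31], shuffled[31:]
    print(len(experimental), "experimental,", len(control), "control")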
Study Bias
Allowing flexibility in adopting a specific type of display for EMIS assumes that inexperienced users can instinctively make task-display compatibility decisions, as described earlier (Shen, Carswell, Santhanam, & Bailey, 2012). Nevertheless, empirical studies demonstrated that users' decisions degraded. Users who displayed knowledge bias or disinterest may favor a specific type of display that they are comfortable with or have used more often (Baddoo & Hall, 2003). Another bias identified by the researchers that influenced the choice of displays is called naïve realism, the predisposition of users to prefer displays with photorealistic definition simply because of their own belief that displays that look more realistic must be more accurate (Smallman & John, 2005).
Validity
Shen et al. used a one-sample t-test against .40, suggesting that participants successfully beat the odds (Shen, Carswell, Santhanam, & Bailey, 2012). The same test was performed for one task that used only vertical information and one that used a combination of vertical and horizontal information (2012). The total accuracy of the participants across all tasks was 56%. There was indication that participants leaned toward choosing the correct display for two of the three tasks, therefore rejecting the null hypothesis.
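A sketch of such a one-sample t-test appears below. The per-participant accuracy scores are hypothetical; only the .40 chance benchmark comes from the study.

    import numpy as np
    from scipy.stats import ttest_1samp

    # Hypothetical per-participant accuracy scores (proportion of correct
    # display choices); the study's actual data are not reproduced here.
    accuracy = np.array([0.55, 0.62, 0.48, 0.70, 0.51, 0.66, 0.58, 0.44])

    # One-sample t-test against the 40% chance benchmark.
    t_stat, p_value = ttest_1samp(accuracy, popmean=0.40)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")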
The two-sample t-tests did not reveal differences in performance by gender or expertise (2012). The differences between actual first responders and participants without experience may imply that the most fitting selection of displays is a talent or skill that does not develop from experience (2012). First responders performed well on tasks that used horizontal information (90% accuracy) and vertical information (40%), outperforming the college students on both tasks (82% and 33%, respectively). First responders' degree of accuracy on tasks that used the 3D view was 40%, lower than the student group's 52% (2012).
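An independent two-sample t-test of that kind can be sketched as follows; the group scores are hypothetical stand-ins for the first-responder and student samples.

    import numpy as np
    from scipy.stats import ttest_ind

    # Hypothetical accuracy scores for the two groups; illustrative only.
    first_responders = np.array([0.90, 0.85, 0.80, 0.95, 0.88])
    students = np.array([0.82, 0.78, 0.85, 0.75, 0.80, 0.83])

    # Welch's t-test avoids assuming equal variances across groups.
    t_stat, p_value = ttest_ind(first_responders, students, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")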
Reliability
Researchers have developed precise measures of reliability, some related to certain statistical tests, based on the expectation that any research tool should provide the same information if used by different people (Roberts, Priest, & Traynor, 2006). To assess whether the tendency of participants to make the correct choices was statistically reliable, Shen et al. compared the actual percentage of right selections to the percentage that would be expected to occur by chance (Shen, Carswell, Santhanam, & Bailey, 2012). Even though surpassing chance performance is a forgiving condition, it offers an impartial reference that allows researchers to compensate for the lack of other reliable sources (2012). The researchers perhaps overlooked the assessment of the internal consistency of the questions by not computing Cronbach's alpha coefficient or applying the split-half test (Cronbach, 1951).
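For reference, both internal-consistency checks mentioned above are straightforward to compute. The sketch below implements Cronbach's alpha and a Spearman-Brown-corrected split-half estimate over a hypothetical item-response matrix.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (observations x items) score matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    def split_half(items: np.ndarray) -> float:
        """Split-half reliability with the Spearman-Brown correction."""
        odd = items[:, 0::2].sum(axis=1)
        even = items[:, 1::2].sum(axis=1)
        r = np.corrcoef(odd, even)[0, 1]
        return 2 * r / (1 + r)

    # Hypothetical responses: 6 participants x 4 questionnaire items.
    scores = np.array([
        [3, 4, 3, 4],
        [2, 2, 3, 2],
        [4, 5, 4, 5],
        [1, 2, 1, 2],
        [3, 3, 4, 3],
        [5, 4, 5, 4],
    ])
    print(f"alpha = {cronbach_alpha(scores):.2f}, "
          f"split-half = {split_half(scores):.2f}")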
Conclusion
The article began by summarizing the quantitative research methodology. Researchers elect a quantitative approach instead of its counterpart, the qualitative research methodology, so they can illustrate a quantifiable causal effect from numerical changes that emerge from the characteristics of a studied population. When employing a quantitative approach, the researcher will normally use experimental, quasi-experimental, correlational, or descriptive instruments for the study.
The article identified three quantitative data collection approaches: structured observation, interviews, and surveys. The article synthesized the findings and determined that surveys and interviews combined served as an appropriate quantitative data collection approach for a study on Emergency Management Information Systems (EMIS) and how such a study may influence national emergency responses like those during the attacks of September 11, 2001. Finally, the article provided justification regarding sampling, bias, validity, and reliability for the study.