Predicting Task Difficulty for Different Task Types

Jingjing Liu, Jacek Gwizdka, Chang Liu, Nicholas J. Belkin
School of Communication and Information, Rutgers University
4 Huntington Street, New Brunswick, NJ
{jingjing, changl}@eden.rutgers.edu, asist2010@gwizdka.com, belkin@rutgers.edu

ABSTRACT
This paper reports our investigation of differences in users' behavior between difficult and easy search tasks, as well as how these differences vary with different types of tasks. We also report how behavioral predictors of task difficulty vary across task types. In addition, we explored how whole-task-session level and within-task-session level user behaviors differ in task difficulty prediction. Data were collected in a controlled lab experiment with 48 participants, each completing 6 search tasks of three types: single-fact finding, multiple-fact finding, and multiple-piece information gathering. Results show that task type affects the relationships between task difficulty and user behaviors, and that prediction of task difficulty should take account of task type. Results also show that both whole-session level and within-session level user behaviors can serve as task difficulty predictors. Whole-session level variables show higher prediction accuracy, but within-session level factors have the advantage of enabling real-time prediction. These findings can help search systems predict task difficulty and adapt to users.

Keywords
Task difficulty, difficulty prediction, task type, user behavior, behavioral measures, dwell time, first dwell time

INTRODUCTION
Search engines do a good job with easy tasks, for example, "when and where will the ASIST 2010 conference be held?", but not as good a job with somewhat difficult tasks, for instance, "collect information about good rental apartments in Pittsburgh". Better systems are needed that can help people easily find desired information in more difficult tasks.
It is also useful for systems to be able to detect when users are working on difficult tasks. Task difficulty has attracted much research attention and has been found to be a significant factor influencing users' search behaviors and performance. In difficult tasks, users are more likely to visit more web pages (Kim, 2006; Gwizdka & Spence, 2006), issue more queries (Kim, 2006; Aula, Khan, & Guan, 2010), and spend more time on search engine result pages (SERPs) (Aula, Khan, & Guan, 2010). Factors that are good for predicting task difficulty have also been examined with prediction models (e.g., Gwizdka, 2008). However, little effort has been spent researching whether task difficulty predictors vary with context. As many studies (e.g., Kelly & Belkin, 2004; White & Kelly, 2006) indicate, user behaviors are affected by tasks and/or task types; it is thus reasonable to ask whether task difficulty predictors vary by task type.

Copyright is held by the author/owner(s). ASIST 2010, October 22-27, 2010, Pittsburgh, PA, USA.

In addition, the behavioral and performance aspects that previous work addressed have focused mainly on the overall task session level, referred to as whole-session variables in this paper. These include the time to complete the task, the number of documents read, the number of queries issued, etc. Since these variables cannot be obtained until the end of a whole task, it is not practical for systems to detect task difficulty in real time based on these behavioral measures. More work is needed to explore behavioral measures that are available during the search episode, so that systems can learn dynamically whether users are dealing with difficult tasks in order to adapt search toward users' specific situations. One behavioral measure that is suitable for real-time detection is dwell time on a document, i.e., the time spent by a user on a retrieved document.
Dwell time (White & Kelly, 2006) and first dwell time (Liu & Belkin, 2010) have been studied as potential factors for predicting document usefulness, but not yet in conjunction with task difficulty. Another behavioral measure suitable for real-time tracking is the number of viewed documents per query. We call both measures within-session level factors, since they describe task sections and can be obtained in real time within a session. These are different from whole-session level factors, which describe the whole task and cannot be obtained until the end of a task.

In sum, this study aimed to answer the following questions: 1) Do behavioral variables that can be used in task difficulty prediction vary between different types of tasks? 2) What are the differences between whole-session and within-session level behaviors in task difficulty prediction?

LITERATURE REVIEW

Task Difficulty
Task difficulty, as well as the closely related task feature of complexity, has attracted considerable research attention. In their comprehensive task classification scheme, Li & Belkin (2008) differentiate these two concepts clearly. According to them, task complexity can be both objective and subjective, and its values in both types can be low, moderate, or high. Objective task complexity is defined by the number of activities involved in a work task (Ingwersen & Järvelin, 2005) or the number of information sources involved in a search task. Subjective task complexity is assessed by task doers. Task difficulty, as they noted, can only be subjective, assessed by task doers.

With a similar conceptualization of task difficulty as Li & Belkin (2008), Kim (2006) suggests that difficulty is the task doer's perception of task complexity, that it could be both a pre- and post-task perception, and that task type is a variable in task difficulty. In a study examining the effect of task difficulty on user behaviors, she used three types of tasks: factual, interpretive, and exploratory. Through a correlation examination of task difficulty and some whole-session behaviors, it was found that in factual tasks, post-task difficulty was significantly associated with task completion time and the numbers of queries and documents viewed; in exploratory tasks, user behaviors were significantly correlated with pre-task difficulty; but in interpretive tasks, most correlations between behaviors and task difficulty were not significant. The findings of this study cannot be used in real-time task difficulty prediction because the examined behavioral variables are measured after the task is completed.

Using a similar definition of task difficulty as Li & Belkin (2008), Gwizdka & Spence (2006) examined how users' behaviors could indicate the difficulty of a factual information-seeking task. Task difficulty was assessed by users after each task. Their results, tested by regression models, indicated that higher search effort, lower navigational speed, and lower search efficiency were good predictors of task difficulty. One limitation of this study is that only one type of task was used, and it is therefore uncertain what the predictors would be in other types of tasks.
In their recent work, Aula, Khan, & Guan (2010) also took the approach of post-task difficulty assessment, but difficulty was determined by users' success or failure in finding the answers to their tasks, which were closed information tasks with a single, unambiguous answer. The study found that in difficult tasks, users formulated more diverse queries, used advanced operators more, and spent longer on the SERPs. The study used only one type of task, hence it is uncertain how the findings generalize to other types of tasks.

The Role of Task Type as a Contextual Factor
Task type as a contextual factor has been studied quite extensively with respect to how it affects users' interactions with search systems. One stream of research examines the effect of task type on user behaviors in searching and has found significant effects. For example, information gathering tasks are the most complex of several types and require longer task completion time and viewing more pages (Kellar, Watters, & Shepherd, 2007); mixed-product tasks require longer task completion time, more pages, and more search sources than factual tasks (Liu et al., 2010a). Another stream of research explores whether task type is a helpful contextual factor in document usefulness prediction. For example, task type information helps predict document usefulness from first dwell time and task stage (Liu & Belkin, 2010). The fruitfulness of studies related to contextual factors sheds light on considering task types in difficulty prediction.

Whole-session vs. Within-session Level Factors
Whole-session user behaviors have been extensively examined in interactive information retrieval (IIR) research with respect to how they are affected by other factors, such as task type, task features, and user characteristics, to name a few. Much has been found about how they are affected by task difficulty in the related studies described above.
To better help people in their search process, systems need to perform real-time prediction of task difficulty, which is not possible based on the findings in the current literature. Within-session level factors, however, are suitable for dynamic prediction. Perhaps the most frequently used measure of a within-session level behavior is document dwell/reading time. Dwell time has been found to be helpful in document usefulness prediction when considering task information in the IIR setting (White & Kelly, 2006). Another within-session level factor, the number of content pages viewed per query, has been found to help predict whether a task is closed or open-ended (Liu et al., 2010b). These findings allow us to hypothesize that such measures could also be good indicators of task difficulty.

METHOD
Data analysis presented in this paper uses the same dataset as Gwizdka (2008). In contrast to the 2008 paper, the current paper uses more behavioral measures (such as measures related to query efficiency) and focuses on the differences between behavioral measures across task types. The data was collected in a controlled lab experiment, in which 48 university students (17 females) conducted question-driven, web-based information search in individual sessions. Participants' mean age was 27 years. Among them, 65% were undergraduates, 6% were master's students, 23% were doctoral students, and the other 6% had just graduated. Most of them were very frequent Web searchers: 35% used the Web almost constantly, 46% several times a day, 17% once a day, and only one person (2%) searched the Web relatively infrequently: once or twice a week.

Procedure
Participants were invited to the lab for individual experiment sessions. Each experiment session lasted about an hour and a half to two hours. The lab was equipped with a desktop computer running the Microsoft Windows XP operating system.
Each session consisted of the following steps: an introduction to the study, consent form, search task practice, background questionnaire, six search tasks, and a post-session questionnaire. The searchers were asked to bookmark (as a saving method) and tag the web pages that they considered the best answer(s) to the task questions. User interaction with the computer (visited and bookmarked URLs, mouse and keyboard events, and video from a screen cam) was recorded using Morae software.

Tasks
The study search tasks were designed as questions that described what information needed to be found and provided a context for the search. In total, there were 12 tasks, 8 of which were created by Toms and her colleagues (Toms et al., 2007) and 4 by us. The 8 Toms et al. (2007) questions included 4 Fact Finding (FF) and 4 Information Gathering (IG) (Kellar, Watters & Shepherd, 2007) tasks. FF tasks asked users to locate short, specific information, while IG tasks involved the collection of information for which there is no one specific answer. These 8 tasks had multiple concepts, which required the users to find multiple pieces of information or web pages to answer them. The 4 tasks created by us are also FF tasks, but they asked the users to locate only a single piece of information. These 12 tasks have two salient features: Level (Liu et al., 2010a) and Number of concepts. The feature Level has two values: Segment Level tasks require locating specific information within a page, while Document Level tasks only require users to judge if a page is useful or relevant in general and do not necessarily require locating specific information. Table 1 shows the features of the 12 tasks. The tasks were constructed by using task scenarios that provided participants with the search context and the basis for relevance judgments. Sample tasks of each type:

FF-S: Everybody talks these days about global warming. By how many degrees (Celsius or Fahrenheit) is the temperature predicted to rise by the end of the XXI century?

FF-M: A friend has just sent an e-mail from an Internet café in the southern USA where she is on a hiking trip. She tells you that she has just stepped into an anthill of small red ants and has a large number of painful bites on her leg.
She wants to know what species of ants they are likely to be, how dangerous they are, and what she can do about the bites. What will you tell her?

IG-M: You recently heard about the book "Fast Food Nation," and it has really influenced the way you think about your diet. You note in particular the amount and types of food additives contained in the things that you eat every day. Now you want to understand which food additives pose a risk to your physical health and are likely to be listed on grocery store labels.

Task features        FF-S     FF-M      IG-M
Level                Segment  Mixed     Document
Number of concepts   Single   Multiple  Multiple
Table 1. Tasks and task features

During the course of an individual study session, each participant performed 6 tasks of differing types and structures. The 48 participants performed a total of 288 person-tasks (called tasks below). In each task, the participant was able to choose between two questions of the same type but on different topics. We offered the choice of topics to increase the likelihood of a participant's interest in the question topic. Search tasks were performed on the English version of Wikipedia using two different search engines: Google Wikipedia search and ALVIS Wikipedia search. Both task order and system order were rotated to avoid possible learning and system order effects. This paper does not focus on the interface effect on task difficulty, thus we will not discuss this factor further.

Variables
The following lists the whole-session and within-session level variables considered in this study as indicators of task difficulty.

Whole-session level variables:
Number of all content pages
Number of unique content pages
Number of SERPs
Number of unique SERPs
Number of queries
Number of queries leading to saving pages: the number of queries that were followed by page saving before the next query was entered.
Number of queries not leading to saving pages: the number of queries that were not followed by page saving before the next query was entered.
Ratio of queries leading to saving pages: the ratio of the number of queries leading to saving pages to the number of all queries in a task.
Ratio of queries not leading to saving pages: the ratio of the number of queries not leading to saving pages to the number of all queries in a task.
Task completion time: the time users spent on each task.
Total dwell time on each unique content page: the total time users spent on each unique content page in a task.
Average total dwell time on unique content pages: the average of the total dwell times of all unique content pages in a task.
Total dwell time on each unique SERP: the total time users spent on each unique SERP in a task.
Average total dwell time on unique SERPs: the average of the total dwell times of all unique SERPs in a task.

Within-session level variables:
Number of pages per query: the number of all content pages viewed in a task divided by the number of queries.
Number of unique pages per query: the number of unique content pages viewed in a task divided by the number of queries.

First dwell time on content pages: the duration between when a user opened a content page and when the user first left the page.
Mean dwell time of all content pages: total dwell time on all content pages divided by the number of all content pages.
First dwell time on SERPs: the duration between when a user opened a SERP and when the user first left this SERP.
Mean dwell time of all SERPs: total dwell time on all SERPs divided by the number of all SERPs.

Although in the current analysis we calculated values of the above six variables for the whole task, these variables can be calculated during the search process for task sections ending, respectively, in entering a query, visiting a content page, or examining a SERP. Therefore, these variables are treated as within-session level variables.

RESULTS
Among the 288 experiment sessions, there were 96 of each type: FF-S, FF-M, and IG-M. Table 2 shows the counts of each type of task. Out of the 288 tasks, there were 100 difficult ones, 17 of which were FF-S, 37 were FF-M, and 46 were IG-M tasks, respectively.

In this study, we adopted Li & Belkin's (2008) definition of task difficulty, i.e., difficulty is defined as the subjective assessment of the task by the task doer. Task difficulty was rated by each user immediately after task completion on a 5-point scale. This scale contains distinctions too fine for a future system to differentiate. We therefore adopted a prior research practice (cf., White & Kelly, 2006; Liu & Belkin, 2010) and collapsed the rating scores into fewer groups. Specifically, in our analysis, the difficulty scores were collapsed into 2 groups based on the distribution (scores 1-3 into a difficult group, and scores 4-5 into an easy group; see Table 2).

            12 tasks     FF-S        FF-M        IG-M
Difficult   100 (34.7%)  17 (17.7%)  37 (38.5%)  46 (47.9%)
Easy        188 (65.3%)  79 (82.3%)  59 (61.5%)  50 (52.1%)
Total       288 (100%)   96 (100%)   96 (100%)   96 (100%)
Table 2. Task difficulty and task type distribution

Examination of the distribution of behavioral variables showed that none of them was normally distributed for difficult or easy tasks. In such a situation, non-parametric statistical tests are appropriate. The Mann-Whitney test was generally used in our analysis except when otherwise specified.

Whole-session Level Behavioral Variables in Difficult vs. Easy Tasks

Results in 12 Tasks Combined
This section reports the results when all 12 tasks were considered together. As can be seen from Table 3, except for mean total dwell time of unique content pages, all other variables showed significant differences between difficult and easy tasks. Specifically, users spent more time completing difficult tasks than easy tasks (U(286)=4811.5, z=6.82, p<.001). They also spent more total dwell time on unique SERPs (U(286)=5894.5, z=5.21, p<.001). However, the total dwell time they spent on unique content pages in difficult tasks and in easy tasks was not significantly different (U(286)=8667.5, z=1.09, ns).

Variables                                             12 tasks  FF-S    FF-M    IG-M
Task completion time                                  p=.000    p=.000  p=.004  p=.001
Total dwell time on unique content pages              p=.276    p=.046  p=.299  p=.092
Total dwell time on unique SERPs                      p=.000    p=.034  p=.016  p=.001
Number of content pages                               p=.000    p=.004  p=.390  p=.001
Number of unique content pages                        p=.000    p=.006  p=.125  p=.002
Number of SERPs                                       p=.000    p=.000  p=.000  p=.000
Number of unique SERPs                                p=.000    p=.000  p=.000  p=.000
Number of queries                                     p=.000    p=.000  p=.000  p=.001
Number of queries with no saved pages                 p=.000    p=.000  p=.000  p=.000
Number of queries with saved pages                    p=.000    p=.515  p=.018  p=.918
Ratio of queries with no saving pages to all queries  p=.000    p=.000  p=.001  p=.000
Ratio of queries with saving pages to all queries     p=.000    p=.000  p=.001  p=.000
Table 3. Whole-session level factors for different tasks (those in bold showed significant differences)
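The analysis described above (collapsing 5-point ratings into a difficult and an easy group, then comparing a behavioral variable between the groups with the Mann-Whitney test) can be sketched as follows. This is a minimal illustration: the ratings and completion times are invented, not data from the study, and in practice a library routine such as scipy.stats.mannwhitneyu would also supply the p-value.

```python
def mann_whitney_u(x, y):
    # U statistic for sample x: the number of pairs (xi, yj) with xi > yj,
    # counting ties as 0.5 -- the usual convention.
    return sum(1.0 if xi > yj else (0.5 if xi == yj else 0.0)
               for xi in x for yj in y)

# Invented post-task ratings on a 5-point scale, paired with one behavioral
# variable (task completion time, in seconds) for eight hypothetical sessions.
ratings = [2, 5, 4, 1, 3, 5, 4, 2]
completion_time = [610, 190, 240, 820, 530, 160, 300, 700]

# Collapse as in the paper: scores 1-3 -> difficult group, 4-5 -> easy group.
difficult = [t for r, t in zip(ratings, completion_time) if r <= 3]
easy = [t for r, t in zip(ratings, completion_time) if r >= 4]

u = mann_whitney_u(difficult, easy)
print(u)
```

With these invented data every "difficult" time exceeds every "easy" time, so U equals the number of cross-group pairs; the test would then be run once per behavioral variable and per task-type subset, as in Table 3.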

Compared with easy tasks, in difficult tasks users viewed more content pages (U(286)=5624, z=5.63, p<.001), more unique content pages (U(286)=5518, z=5.80, p<.001), more SERPs (U(286)=4066, z=8.00, p<.001), and more unique SERPs (U(286)=4194, z=7.89, p<.001). Users also issued more queries in difficult tasks than in easy tasks (U(286)=4370.5, z=7.66, p<.001). These results are generally consistent with those found in previous studies (e.g., Gwizdka, 2008).

[Figure 1. Number of queries in difficult and easy tasks]

A closer examination of the queries found that in difficult tasks, users issued both more queries leading to saving pages (U(286)=7220, z=3.51, p<.001) and more queries not leading to saving pages (U(286)=4587.5, z=7.57, p<.001) than in easy tasks (Figure 1). The ratio of queries with no saving pages to all queries in difficult tasks was also greater than in easy tasks (U(286)=5200, z=6.59, p<.001). We also noticed that in difficult tasks, users entered more queries that did not lead to saving pages than queries that did (Wilcoxon test z=3.18, df=98, p<.001). However, in easy tasks, this pattern was reversed: users entered fewer queries that did not lead to saving pages than queries that did (Wilcoxon test z=6.55, df=186, p<.001) (Figure 1). Entering more ineffective queries than effective queries is a reasonable observation and explanation of why a task is difficult.

FF-S Tasks
This section reports the results in FF-S tasks (those requiring one fact) only. The difficult vs. easy task patterns of most variables were the same as those when all 12 tasks were considered. Users spent more time completing difficult tasks than easy tasks (U(94)=302, z=3.55, p<.001). They also spent more total dwell time on unique SERPs (U(94)=450, z=2.13, p<.05).
Compared with easy tasks, in difficult tasks users viewed more content pages (U(94)=377.5, z=2.87, p<.005), more unique content pages (U(94)=394, z=2.73, p<.01), more SERPs (U(94)=224, z=4.46, p<.001), and more unique SERPs (U(94)=268, z=4.18, p<.001). Users issued more queries in difficult tasks than in easy tasks (U(94)=236.5, z=4.57, p<.001), and had more ineffective queries (those not followed by saving pages) (U(94)=227, z=4.71, p<.001). The ratio of queries followed by saving pages and the ratio of queries not followed by saving pages in difficult tasks were also greater than those in easy tasks (U(94)=278, z=4.15, p<.001).

However, there were two variables whose difficult vs. easy patterns in FF-S tasks differed from those in all 12 tasks as a whole. In difficult FF-S tasks, users spent more total time on unique content pages than in easy FF-S tasks (U(94)=464, z=2.00, p<.05), whereas in the 12 tasks combined, total dwell time on unique content pages did not differ between difficult and easy tasks (U(286)=8667.5, z=1.09, ns). The difference here may be explained by the task feature of Level. FF-S tasks asked users to locate specific pieces of information, meaning that they had to spend more time on content pages than in Document Level tasks, and difficult FF-S tasks could make their dwelling on the content pages even longer. Another variable with a different difficult vs. easy task pattern in FF-S tasks than in all tasks combined was the number of queries leading to saving pages. In the 12 tasks combined, this number was larger in difficult tasks than in easy tasks, but in difficult and easy FF-S tasks it was roughly the same (U(94)=623.5, z=0.65, ns). This could be explained by the task feature Number of concepts.
FF-S tasks only required the users to find one piece of information, meaning that users only needed to bookmark one content page to complete the task; hence one effective query would have helped them find a useful page and finish the task. This would not have been affected by the task being difficult or easy, although in difficult tasks users may have issued more ineffective queries that did not help them find the useful page, as reported in the previous section (Table 3 and Figure 1).

FF-M Tasks
In FF-M tasks, most of the variables showed the same difficult vs. easy task patterns as in the 12 tasks combined. Users spent more time completing difficult tasks than easy tasks (U(94)=710, z=2.87, p<.005). They also spent more total dwell time on unique SERPs (U(94)=770.5, z=2.42, p<.05) in difficult tasks than in easy tasks. Their total time spent on unique content pages did not differ (U(94)=953.5, z=1.04, ns). Compared with easy tasks, in difficult tasks users viewed more SERPs (U(94)=612, z=3.63, p<.001) and more unique SERPs (U(94)=559.5, z=4.06, p<.001). Users issued more queries in difficult tasks than in easy tasks (U(94)=552.5, z=4.12, p<.001). They had both more ineffective queries (those not followed by saving pages) (U(94)=557.5, z=4.16, p<.001) and more effective queries (those followed by saving pages) (U(94)=791, z=2.36, p<.05) in difficult tasks than in easy tasks. The ratio of queries leading to saving pages and the ratio of queries not leading to saving pages in difficult tasks were also greater than those in easy tasks (U(94)=648.5, z=3.45, p=.001).

There were two variables in FF-M tasks whose difficult vs. easy task patterns were different from those in the 12 tasks combined: the number of content pages (U(94)=977.5, z=0.86, ns) and the number of unique content pages (U(94)=889, z=1.53, ns). These numbers did not differ significantly between difficult and easy FF-M tasks. In easy FF-M tasks, users still viewed about 10 content pages and 6 unique content pages, which were not significantly fewer than in difficult FF-M tasks: 12 content pages and 8 unique content pages. One possible explanation for the greater number of viewed pages in FF-M tasks than in FF-S tasks could be the task feature of Number of concepts. There were 3 concepts in FF-M tasks, and users needed to look for useful pages to answer all of them, which would have led them to more content pages, especially compared with FF-S tasks. FF-M tasks being difficult may be attributed to other factors, such as difficulty in finishing the tasks in general (as indicated by longer task completion time). Task difficulty did not correlate with the number of content pages viewed in FF-M tasks, meaning that the number of viewed pages is not a good indicator of task difficulty in FF-M tasks.

IG-M Tasks
The general patterns of the variables were similar to those when all 12 tasks were considered. Users spent more time completing difficult tasks than easy tasks (U(94)=679, z=3.45, p=.001). They also spent more total dwell time on unique SERPs (U(94)=811, z=2.49, p<.05). The total dwell time they spent on unique content pages in difficult tasks and in easy tasks was not significantly different (U(94)=920, z=1.69, ns). Compared with easy tasks, in difficult tasks users viewed more content pages (U(94)=702, z=3.29, p=.001), more unique content pages (U(94)=735.5, z=3.05, p<.01), more SERPs (U(94)=595.5, z=4.08, p<.001), and more unique SERPs (U(94)=620.5, z=3.91, p<.001). They also issued more queries in difficult tasks than in easy tasks (U(94)=699.5, z=3.35, p=.001), and had more ineffective queries (those not followed by saving pages) (U(94)=650.5, z=3.85, p<.001).
The ratio of queries followed by saving pages to all queries and the ratio of queries not followed by saving pages to all queries in difficult tasks were also greater than those in easy tasks (U(94)=646, z=3.88, p<.001).

There was only one variable whose difficult vs. easy task pattern in IG-M tasks was different from that in the 12 tasks combined: the number of queries with saved pages. In the 12 tasks combined, this number was larger in difficult tasks (2.06) than in easy tasks (1.64), but in difficult and easy IG-M tasks it was roughly the same (2.02 in difficult tasks; U(94)=1136.5, z=0.10, ns). The feature of Number of concepts may at least partially explain the results: even in easy IG-M tasks, users still needed to search for multiple pieces of information by issuing different queries that led to useful pages. In difficult IG-M tasks, they did not issue more queries leading to useful pages than in easy IG-M tasks, but they did issue more queries that did not lead to useful pages, i.e., ineffective queries. This increased number of ineffective queries in difficult tasks plausibly means that it is harder to construct good queries for difficult tasks. In sum, task difficulty did not seem to be a good function of the number of queries followed by saving pages, meaning that this number would not be a good indicator of task difficulty in IG-M tasks.

Within-session Level Behavioral Variables in Difficult vs. Easy Tasks

In 12 Tasks Combined
Results (Table 4) show that two within-session level variables had significant differences between difficult and easy tasks. Specifically, compared with easy tasks, in difficult tasks users had longer first dwell time on unique SERPs (U(286)=6836.5, z=3.81, p<.001). This indicates that in difficult tasks, users spent a longer time on SERPs before they first clicked to open a document or left the page than in easy tasks.
Users also had longer average dwell time on all SERPs (U(286)=5894.5, z=5.21, p<.001), indicating that in difficult tasks, users spent more time on SERPs in general than in easy tasks.

Variables                                 12 tasks  FF-S    FF-M    IG-M
Number of content pages per query         p=.778    p=.599  p=.004  p=.284
Number of unique content pages per query  p=.386    p=.452  p=.005  p=.467
First dwell time on unique content pages  p=.217    p=.035  p=.943  p=.117
Mean dwell time of all content pages      p=.660    p=.316  p=.973  p=.271
First dwell time on unique SERPs          p=.000    p=.126  p=.020  p=.013
Mean dwell time of all SERPs              p=.000    p=.217  p=.013  p=.004
Table 4. Within-session level factors in different tasks (those in bold showed significant differences)
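Because the within-session variables are derived from page-visit events, a system can update them as the session unfolds rather than waiting for task completion. The bookkeeping can be sketched as below; the log format, field names, and URLs are our own assumptions for illustration, not the study's instrumentation.

```python
# One record per page view: (url, kind, enter_time_s, leave_time_s).
visits = [
    ("serp?q=red+ants", "serp", 0.0, 12.0),
    ("wiki/Fire_ant", "content", 12.0, 55.0),
    ("serp?q=red+ants", "serp", 55.0, 61.0),   # user returned to the same SERP
    ("wiki/Ant_bite", "content", 61.0, 90.0),
]
queries_so_far = 1  # queries issued so far in the session

content = [v for v in visits if v[1] == "content"]
serps = [v for v in visits if v[1] == "serp"]

# First dwell time: duration of the *first* visit to each unique page,
# so revisits do not overwrite the stored value.
first_dwell = {}
for url, _, enter, leave in content + serps:
    first_dwell.setdefault(url, leave - enter)

# Mean dwell time over all (not unique) content page views.
mean_dwell_content = sum(leave - enter for *_, enter, leave in content) / len(content)

# Pages-per-query variables, using all and unique content pages.
pages_per_query = len(content) / queries_so_far
unique_pages_per_query = len({v[0] for v in content}) / queries_so_far
```

Running the same loop after each new event yields the real-time versions of the six variables; computed once at task end, they reduce to the whole-task values used in the analysis above.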

Number of all content pages per query (U(286)=9210.5, z=0.28, ns) and number of unique content pages per query (U(286)=8820.5, z=0.87, ns) did not show differences between difficult and easy tasks. This means that although users issued more queries in difficult tasks, they did not necessarily view more content pages per query. Users viewed roughly the same number of documents per query in tasks of different difficulty levels; in difficult tasks, they just reformulated queries more often.

First dwell time on unique content pages (U(286)=8569, z=1.24, ns) and mean dwell time on all content pages (U(286)=9104, z=0.44, ns) did not show differences between difficult and easy tasks either. This suggests that, in general, difficult tasks did not necessarily cost users more time on content pages than easy tasks. It should be noted that while mean dwell time on content pages did not differ significantly between difficult and easy tasks, mean dwell time on SERPs in difficult tasks was longer than in easy tasks, as reported earlier in this section (Figure 2).

[Figure 2. Mean dwell time on content pages and SERPs]

FF-S Tasks
In FF-S tasks, only one variable showed significant differences between difficult and easy tasks. Specifically, compared with easy tasks, in difficult tasks users had longer first dwell time on unique content pages (U(94)=451, z=2.11, p<.05). This means that in difficult FF-S tasks, users tended to spend a longer time on content pages before they left the page than in easy FF-S tasks. This is reasonable considering that FF-S tasks required users to look for a specific piece of information in a document and to judge its relevance according to its correctness with respect to the task question. In difficult tasks, users were not able to locate the specific information quickly, so their time spent on the page was extended.
Different from what was found when all 12 tasks were considered, in FF-S tasks there were no significant differences in users' first dwell time on SERPs (U(94)=512, z=1.53, ns) or mean dwell time on all SERPs (U(94)=543, z=1.23, ns) between difficult and easy tasks. This means that in difficult FF-S tasks, although users spent more time looking for the useful piece of information in the document, they did not have more difficulty than in easy FF-S tasks in determining which document on the SERPs to open. Number of all content pages per query (U(94)=618, z=0.53, ns) and number of unique content pages per query (U(94)=596, z=0.75, ns) did not show differences between difficult and easy tasks, either. These results are consistent with the findings in all tasks combined.

FF-M tasks
In FF-M tasks, more variables showed differences between difficult and easy tasks. First, users viewed fewer pages per query (U(94)=710.5, z=2.87, p<.01) and fewer unique pages per query (U(94)=718, z=2.82, p<.01) in difficult tasks than in easy tasks. These results differ from those found in all 12 tasks combined as well as in FF-S tasks. A closer examination of other variables found that in FF-M tasks, users viewed roughly the same number of documents in both easy and difficult tasks, but issued more queries in difficult tasks (Table 4); hence, on average, they viewed fewer pages per query in difficult tasks. This suggests that when users worked on difficult FF-M tasks, they adopted the strategy of reformulating queries more frequently, rather than opening more pages, in order to find useful pages. In addition, users' first dwell time on SERPs (U(94)=770.5, z=2.42, p<.05) and their mean dwell time on SERPs (U(94)=762.5, z=2.48, p<.05) differed between difficult and easy tasks. This means that in difficult FF-M tasks, users spent more time determining which document on the SERPs to open.
However, users spent roughly the same first dwell time on unique content pages (U(94)=1082, z=0.07, ns) and average dwell time on all content pages (U(94)=1087, z=0.03, ns) in both easy and difficult FF-M tasks. This pattern was different from FF-S tasks. The reason could be that FF-M tasks were a mix of document and segment level, meaning that, on average, in difficult FF-M tasks users did not need to spend more time on each content page than in easy tasks.

IG-M tasks
The patterns in IG-M tasks were roughly the same as those in the 12 tasks combined. Specifically, two variables showed significant differences between difficult and easy tasks. In difficult tasks, users had longer first dwell time on unique SERPs (U(94)=811, z=2.49, p<.05) and longer dwell time on all SERPs (U(94)=754, z=2.90, p<.01) than in easy tasks. They needed more time to decide which documents to open in difficult tasks than in easy tasks. Nevertheless, first dwell time on unique content pages (U(94)=936, z=1.57, ns) and mean dwell time on all content pages (U(94)=1000, z=1.10, ns) did not show significant differences between difficult and easy tasks. This suggests that when users worked on difficult IG-M tasks, they did not have to spend more time on content pages than when working on easy IG-M tasks. That users spent roughly the same first dwell time in difficult IG-M tasks (31 seconds) as in easy FF-S tasks (32 seconds) showed that document-level tasks did not require longer time on content pages. Number of all content pages per query (U(94)=1004, z=1.07, ns) and number of unique content pages per query (U(94)=650.5, z=3.85, ns) did not show differences between difficult and easy tasks. This means that although users issued more queries in difficult tasks, they did not necessarily view more content pages per query. Users viewed roughly the same numbers of documents per query in tasks of different difficulty levels; in difficult tasks, they just reformulated queries more often.

Predicting Task Difficulty
The above results showed which variables may be good candidates for predicting task difficulty in different types of tasks. In this last part of the results section, we report the results of logistic regression tests that further explored which factors are good predictors of difficulty and what the prediction model's accuracy is. In the analysis, the Forward Conditional method was used, which automatically selects variables for the predictive model. The regression was conducted for all 12 tasks combined as well as for each type of task, to explore whether there were differences in the variables and in prediction accuracy. We also compared different sets of variables: all variables together, whole-session variables only, and within-session level variables only. As can be seen from Table 5, the variables included in the regression model were quite different across task types. In all 12 tasks combined, three variables were significant in predicting task difficulty: number of unique SERPs, number of queries with saving pages, and total dwell time on unique SERPs. In FF-S tasks, only one variable, different from the three in the 12-task model, contributed significantly to the prediction model: number of queries with no saving pages. In FF-M tasks, number of unique SERPs was the single significant predictor included in the model.
In IG-M tasks, ratio of queries with no saving pages and total dwell time on unique SERPs were the two factors contributing significantly to task difficulty prediction. Prediction accuracy also varied across tasks. In all tasks combined, prediction accuracy was 77.1%. Among the specific task types, prediction accuracy was highest in FF-S tasks (88.5%), followed by FF-M tasks (77.1%), and lowest in IG-M tasks (65.6%). In sum, these results clearly showed how the variables in the prediction model differ across types of tasks, and how the prediction accuracy varied. One can notice that all of the above variables included in the prediction model are at the whole-session level, and that the prediction results using all variables together and using only whole-session variables are exactly the same (Table 5). This means that within-session level variables did not contribute to these models. An approach using within-session level variables only to test the prediction model (Table 6) showed results that were not as promising as using whole-session level factors. The accuracy of the model when all 12 tasks combined were considered was 62.8%, with one significant variable: first dwell time on unique SERPs. The accuracy for FF-S tasks was as high as 82.3%, but actually no variables contributed to the model and it simply classified all tasks as easy. Results were similar for the IG-M tasks: no variables contributed to the model and the accuracy was 52.1%. FF-M tasks performed better, with two variables contributing to the model: number of unique content pages per query and average time on SERPs. The prediction accuracy was 69.8%.

                         12 tasks   FF-S tasks   FF-M tasks   IG-M tasks
Prediction accuracy      77.1%      88.5%        77.1%        65.6%
Variables in the model   a, b, c    d            a            c, e
a. Number of unique SERPs
b. Number of queries with saving pages
c. Total dwell time on unique SERPs
d. Number of queries with no saving pages
e. Ratio of queries with no saving pages
Table 5. Prediction accuracy and variables in the prediction model (all and whole-session variables considered)

                         12 tasks   FF-S tasks   FF-M tasks   IG-M tasks
Prediction accuracy      62.8%      82.3%        69.8%        52.1%
Variables in the model   f          n/a          g, h         n/a
f. First dwell time on unique SERPs
g. Number of unique content pages per query
h. Average time on all SERPs
Table 6. Prediction accuracy and variables in the prediction model (within-session variables considered)
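The models above were built with stepwise (Forward Conditional) logistic regression. As a hedged sketch of the underlying idea only — greedy forward selection of behavioral features, here simplified to scoring by training accuracy with a hand-rolled gradient-descent logistic regression, whereas SPSS's Forward Conditional method uses score statistics for entry and conditional likelihood-ratio tests for removal — the procedure might look like this (all data and names are illustrative):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression with an intercept."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))        # predicted P(difficult)
        w -= lr * Xb.T @ (p - y) / len(y)    # average log-loss gradient
    return w

def accuracy(X, y, w):
    """Fraction of tasks whose difficulty label is predicted correctly."""
    Xb = np.column_stack([np.ones(len(X)), X])
    return float(np.mean((Xb @ w > 0) == (y == 1)))

def forward_select(X, y):
    """Greedy forward selection: repeatedly add the feature that most
    improves training accuracy, stopping when no feature helps."""
    remaining, chosen, best = list(range(X.shape[1])), [], 0.0
    while remaining:
        acc, f = max((accuracy(X[:, chosen + [c]], y,
                               fit_logistic(X[:, chosen + [c]], y)), c)
                     for c in remaining)
        if acc <= best:
            break
        chosen.append(f)
        remaining.remove(f)
        best = acc
    return chosen, best
```

Here `X` would hold one row per task session and one column per behavioral variable (e.g. number of unique SERPs), and `y` the difficult/easy label.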

DISCUSSION
Task Type Affects the Relations between Task Difficulty and User Behaviors
Our results demonstrate that the behavioral variables showing significant differences between difficult and easy tasks differed across task types. In particular, in FF-S tasks, users' total dwell time and first dwell time on unique content pages were significantly longer in difficult tasks than in easy tasks. This relationship did not hold in FF-M tasks, IG-M tasks, or all tasks combined. This result could be due to the task feature Level. FF-S tasks required users to actually find a specific piece of information, and difficult FF-S tasks therefore increased the likelihood that users dwelled longer on content pages. Also, in FF-S tasks only, users' total dwell time and first dwell time on unique SERPs did not differ between difficult and easy tasks; in the other task types, difficult tasks tended to be associated with longer dwell time on SERPs. This could also be explained by the task feature Level: users judged FF-S tasks to be difficult probably because it was not easy to locate the required information in content pages, not necessarily because it was hard to determine which result to open on the SERPs. More variables showed special patterns in FF-M tasks compared with the other tasks. The numbers of all and unique content pages that users viewed did not differ between difficult and easy tasks in the FF-M type, but they did in other types of tasks. This suggests that these factors are probably not good indicators of task difficulty in FF-M tasks. Users viewed fewer content pages, both all and unique, per query in difficult tasks than in easy tasks, which means that these measures could possibly be used to infer task difficulty in FF-M tasks.
In addition, that users had more effective queries in difficult than in easy FF-M tasks, but not in other task types, indicates that the number of queries followed by saving pages could possibly be an indicator of task difficulty, but only in FF-M tasks. Since FF-M tasks are Mixed-level tasks, it is hard to speculate about what made users judge the difficult FF-M tasks as difficult. These findings correspond with Kim (2006), who found that for factual and exploratory tasks, task difficulty correlated significantly with behaviors, while for interpretive tasks it did not. Taken together, these findings indicate that special attention is needed to explore task difficulty factors in the middle, or mixed, type of search tasks, especially since these tasks may be the most frequently encountered in everyday life.

The Role of Task Type in Task Difficulty Prediction
Logistic regression analysis showed that the behavioral variables included in the task difficulty prediction models differed across task types, and that prediction accuracy differed as well. Specifically, for FF-S tasks, the best predictor of task difficulty was the number of ineffective queries, i.e., queries that did not lead to saving pages. In FF-M tasks, the best predictor of task difficulty was the number of unique SERPs, and the best within-session level predictors were the number of unique content pages per query and the average dwell time on SERPs. In IG-M tasks, total dwell time on unique SERPs and the ratio of ineffective queries to all queries appeared to be the best predictors of task difficulty. In all 12 tasks combined, the predictors in the model were the number of unique SERPs, the number of queries with saving pages, and total dwell time on unique SERPs. Some of these were also significant predictors of task difficulty in FF-M and IG-M tasks, but not in FF-S tasks.
Without differentiating task types, difficulty prediction for tasks of the FF-S type would have under-performed. All these findings clearly demonstrate that task difficulty prediction should take account of task type. The practical implication of these findings for system design is obvious.

Whole- vs. Within-session Level Indicators of Task Difficulty
Whole-session level factors that showed significant differences between difficult and easy tasks included: task completion time, total dwell time on unique SERPs, number of SERPs, number of unique SERPs, number of queries, number of queries not leading to saving pages, ratio of queries with no saving pages, and ratio of queries with saving pages (all of the above in all task types); numbers of all and unique content pages (12 tasks combined, FF-S, and IG-M); number of queries leading to saving pages (12 tasks combined and FF-M); and total dwell time on unique content pages (FF-S). Some of these variables were also significant in predicting task difficulty, as demonstrated by the logistic regression models: the number of unique SERPs, the number of queries with saving pages, and total dwell time on unique SERPs. These findings are consistent with previous research. The prediction accuracy of task difficulty using these variables was promising: 77% for all tasks, with the best accuracy of 88.5% for FF-S tasks. However, it should be noted that these whole-session level factors cannot be captured until the end of a task session. Therefore, they cannot be used for predicting difficulty in an ongoing search, and their usefulness is accordingly limited. On the other hand, within-session level factors can be captured during a search session. Thus, they have a more practical application in real-time prediction of task difficulty.
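To make the real-time angle concrete, within-session SERP dwell features of the kind used in this study can be computed incrementally from a chronological click log. The sketch below assumes a hypothetical event format of (timestamp in seconds, page kind, URL) tuples, where a page view's dwell time is the gap until the next event; this is an illustration, not the study's actual logging instrument:

```python
def serp_dwell_features(events):
    """Compute two within-session features from a chronological click log:
    mean first dwell time on unique SERPs, and mean dwell time on all
    SERP views. `events` is a list of (timestamp_sec, kind, url) tuples;
    the last event has no dwell time and is ignored.
    """
    serp_dwells, first_dwells, seen = [], [], set()
    for (t, kind, url), (t_next, _, _) in zip(events, events[1:]):
        if kind != "serp":
            continue
        dwell = t_next - t
        serp_dwells.append(dwell)          # every SERP view counts here
        if url not in seen:                # only the first view per SERP
            seen.add(url)
            first_dwells.append(dwell)
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(first_dwells), mean(serp_dwells)
```

Because the function needs only the events seen so far, a search system could recompute these features after every page transition and feed them to a difficulty model while the session is still in progress.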
Within-session level factors considered in the current analysis also showed significant differences between difficult and easy tasks: the first dwell time on unique SERPs and the mean dwell time on all SERPs (for all 12 tasks combined, FF-M tasks, and IG-M tasks), the first dwell time on unique content pages (for FF-S tasks), and the numbers of all and unique content pages per query (for FF-M tasks). The accuracy of using these variables for predicting task difficulty was not as good as for the

whole-session level factors, though, both for the 12 tasks combined and for each task type. Nevertheless, the prediction accuracy for the 12 tasks combined (63%) and for FF-M tasks (70%) was still acceptable. In FF-S tasks and IG-M tasks, these variables did not contribute much to the prediction model. Compared with the accuracy of whole-session level factors, this is reasonable: whole-session level factors describe and reflect what happened during the whole task, and therefore contain more information and can be better predictors of whole-task features, while within-session level factors only reflect a part of the search session. It is possible to incorporate more variables in a model to increase prediction accuracy. Within-session level variables other than those considered in the current study will be explored in order to make real-time task difficulty prediction more accurate.

CONCLUSIONS
In this study, we analyzed differences in users' behavior between difficult and easy tasks, and examined how these differences vary across task types. We also used logistic regression tests to explore how behavioral variables contribute to the prediction of task difficulty. These analyses were performed for all tasks combined as well as for each type of task individually: single-piece fact-finding, multiple-piece fact-finding, and multiple-piece information gathering. The contribution of this paper is two-fold: 1) it demonstrates that predicting task difficulty should take account of task type, and 2) it shows that both whole-session and within-session level user behaviors can be good predictors of task difficulty. While whole-session behaviors yield higher prediction accuracy and may be used in subsequent sessions to adapt search to the user, they cannot be used for real-time prediction in the current session.
Within-session level behaviors are good for real-time prediction in the ongoing session, but the prediction accuracy achieved with the limited number of variables considered in this study is not as good as with whole-session predictors. Future studies will continue to explore other within-session level variables for task difficulty prediction, and will eventually contribute to a prediction model that can be used in search systems.

ACKNOWLEDGMENTS
This research was supported by IMLS grant LG

REFERENCES
Aula, A., Khan, R., & Guan, Z. (2010). How does search behavior change as search becomes more difficult? Proceedings of CHI '10.
Byström, K. (2002). Information and information sources in tasks of varying complexity. Journal of the American Society for Information Science & Technology, 53(7).
Byström, K., & Järvelin, K. (1995). Task complexity affects information seeking and use. Information Processing & Management, 31.
Gwizdka, J., & Spence, I. (2006). What can searching behavior tell us about the difficulty of information tasks? A study of Web navigation. Proceedings of ASIST '06.
Gwizdka, J. (2008). Revisiting search task difficulty: Behavioral and individual difference measures. Proceedings of ASIST '08.
Ingwersen, P., & Järvelin, K. (2005). The turn: Integration of information seeking and retrieval in context. Springer-Verlag, Secaucus, NJ, USA.
Kelly, D., & Belkin, N.J. (2004). Display time as implicit feedback: Understanding task effects. Proceedings of SIGIR '04, Sheffield, UK.
Kellar, M., Watters, C., & Shepherd, M. (2007). A field study characterizing Web-based information-seeking tasks. Journal of the American Society for Information Science & Technology, 58(7).
Kim, J. (2006). Task difficulty as a predictor and indicator of web searching interaction. Proceedings of CHI '06.
Li, Y., & Belkin, N.J. (2008). A faceted approach to conceptualizing tasks in information seeking. Information Processing & Management, 44.
Liu, J., & Belkin, N.J. (2010).
Personalizing information retrieval for multi-session tasks: The roles of task stage and task type. Proceedings of SIGIR '10.
Liu, J., Cole, M., Liu, C., Bierig, R., Gwizdka, J., Belkin, N.J., Zhang, J., & Zhang, X. (2010). Search behaviors in different task types. Proceedings of JCDL '10.
Liu, J., Liu, C., Gwizdka, J., & Belkin, N. (2010). Can search systems detect users' task difficulty? Some behavioral signals. Proceedings of SIGIR '10.
Toms, E., MacKenzie, T., Jordan, C., O'Brien, H., Freund, L., Toze, S., et al. (2007). How task affects information search. Workshop Pre-proceedings of the Initiative for the Evaluation of XML Retrieval (INEX).
White, R., & Kelly, D. (2006). A study of the effects of personalization and task information on implicit feedback performance. Proceedings of CIKM '06.


More information

What You Should Know Before You Hire a Chiropractor by Dr. Paul R. Piccione, D.C.

What You Should Know Before You Hire a Chiropractor by Dr. Paul R. Piccione, D.C. What You Should Know Before You Hire a Chiropractor by Dr. Paul R. Piccione, D.C. www.woodsidewellnesscenter.com Woodside Wellness Center 959 Woodside Road Redwood City, Ca 94061 (650) 367-1948 Disclaimers

More information

ORIENTATION SAN FRANCISCO STOP SMOKING PROGRAM

ORIENTATION SAN FRANCISCO STOP SMOKING PROGRAM ORIENTATION SAN FRANCISCO STOP SMOKING PROGRAM PURPOSE To introduce the program, tell the participants what to expect, and set an overall positive tone for the series. AGENDA Item Time 0.1 Acknowledgement

More information

HEALING SPICES: HOW TO USE 50 EVERYDAY AND EXOTIC SPICES TO BOOST HEALTH AND BEAT DISEASE BY BHARAT B. AGGARWAL PHD, DEBORA YOST

HEALING SPICES: HOW TO USE 50 EVERYDAY AND EXOTIC SPICES TO BOOST HEALTH AND BEAT DISEASE BY BHARAT B. AGGARWAL PHD, DEBORA YOST Read Online and Download Ebook HEALING SPICES: HOW TO USE 50 EVERYDAY AND EXOTIC SPICES TO BOOST HEALTH AND BEAT DISEASE BY BHARAT B. AGGARWAL PHD, DEBORA YOST DOWNLOAD EBOOK : HEALING SPICES: HOW TO USE

More information

Who? What? What do you want to know? What scope of the product will you evaluate?

Who? What? What do you want to know? What scope of the product will you evaluate? Usability Evaluation Why? Organizational perspective: To make a better product Is it usable and useful? Does it improve productivity? Reduce development and support costs Designer & developer perspective:

More information

Anxiety Studies Division Annual Newsletter

Anxiety Studies Division Annual Newsletter Anxiety Studies Division Annual Newsletter Winter 2017 Members (L-R) Top: Dr. C. Purdon, K. Barber, B. Chiang, M. Xu, T. Hudd, N. Zabara, Dr. D. Moscovitch (L-R) Bottom: O. Merritt, J. Taylor, J. Dupasquier,

More information

Introduction to Research Methods

Introduction to Research Methods Introduction to Research Methods Updated August 08, 2016 1 The Three Types of Psychology Research Psychology research can usually be classified as one of three major types: 1. Causal Research When most

More information

Book Review of Witness Testimony in Sexual Cases by Radcliffe et al by Catarina Sjölin

Book Review of Witness Testimony in Sexual Cases by Radcliffe et al by Catarina Sjölin Book Review of Witness Testimony in Sexual Cases by Radcliffe et al by Catarina Sjölin A lot of tired old clichés get dusted off for sexual cases: it s just one person s word against another s; a truthful

More information

11. NATIONAL DAFNE CLINICAL AND RESEARCH DATABASE

11. NATIONAL DAFNE CLINICAL AND RESEARCH DATABASE 11. NATIONAL DAFNE CLINICAL AND RESEARCH DATABASE The National DAFNE Clinical and Research database was set up as part of the DAFNE QA programme (refer to section 12) to facilitate Audit and was updated

More information

Being an Effective Coachee :

Being an Effective Coachee : : Things to Act on When Working with a Personal Coach By Roelf Woldring WCI Press / Transformation Partners 21 st Century Staffing Innovators www.21cstaffing.com Elora, Ontario, Canada N0B 1S0 Copyright,

More information

Telford and Wrekin Council. Local Offer. Annual Report 2016

Telford and Wrekin Council. Local Offer. Annual Report 2016 Telford and Wrekin Council Local Offer Annual Report 2016 Welcome to the Local Offer Annual report 2015 2016. In the report we hope to give you a snap shot of what we have achieved during the Local Offers

More information

Examining differences between two sets of scores

Examining differences between two sets of scores 6 Examining differences between two sets of scores In this chapter you will learn about tests which tell us if there is a statistically significant difference between two sets of scores. In so doing you

More information

Eye Movements, Perceptions, and Performance

Eye Movements, Perceptions, and Performance Eye Movements, Perceptions, and Performance Soussan Djamasbi UXDM Research Laboratory Worcester Polytechnic Institute djamasbi@wpi.edu Dhiren Mehta UXDM Research Laboratory Worcester Polytechnic Institute

More information

Audio: In this lecture we are going to address psychology as a science. Slide #2

Audio: In this lecture we are going to address psychology as a science. Slide #2 Psychology 312: Lecture 2 Psychology as a Science Slide #1 Psychology As A Science In this lecture we are going to address psychology as a science. Slide #2 Outline Psychology is an empirical science.

More information

Priming Effects by Visual Image Information in On-Line Shopping Malls

Priming Effects by Visual Image Information in On-Line Shopping Malls Priming Effects by Visual Image Information in On-Line Shopping Malls Eun-young Kim*, Si-cheon You**, Jin-ryeol Lee*** *Chosun University Division of Design 375 Seosukdong Dong-gu Gwangju Korea, key1018@hanmail.net

More information

Anamnesis via the Internet - Prospects and Pilot Results

Anamnesis via the Internet - Prospects and Pilot Results MEDINFO 2001 V. Patel et al. (Eds) Amsterdam: IOS Press 2001 IMIA. All rights reserved Anamnesis via the Internet - Prospects and Pilot Results Athanasios Emmanouil and Gunnar O. Klein Centre for Health

More information

Prediction, Causation, and Interpretation in Social Science. Duncan Watts Microsoft Research

Prediction, Causation, and Interpretation in Social Science. Duncan Watts Microsoft Research Prediction, Causation, and Interpretation in Social Science Duncan Watts Microsoft Research Explanation in Social Science: Causation or Interpretation? When social scientists talk about explanation they

More information

Health Consciousness of Siena Students

Health Consciousness of Siena Students Health Consciousness of Siena Students Corey Austin, Siena College Kevin Flood, Siena College Allison O Keefe, Siena College Kim Reuter, Siena College EXECUTIVE SUMMARY We decided to research the health

More information

Education. Patient. Century. in the21 st. By Robert Braile, DC, FICA

Education. Patient. Century. in the21 st. By Robert Braile, DC, FICA Patient Education 21 st in the21 st Century By Robert Braile, DC, FICA Thealthcare marketplace. We also here are a few things we need to recognize relative to how chiropractic is perceived in the need

More information

Hearing, Deaf, and Hard-of-Hearing Students Satisfaction with On-Line Learning

Hearing, Deaf, and Hard-of-Hearing Students Satisfaction with On-Line Learning Hearing, Deaf, and Hard-of-Hearing Students Satisfaction with On-Line Learning By James R. Mallory, M.S. Professor Applied Computer Technology Department jrmnet@rit.edu Gary L. Long, PhD, Associate Professor

More information

Selected Proceedings of ALDAcon SORENSON IP RELAY Presenter: MICHAEL JORDAN

Selected Proceedings of ALDAcon SORENSON IP RELAY Presenter: MICHAEL JORDAN Selected Proceedings of ALDAcon 2005 SORENSON IP RELAY Presenter: MICHAEL JORDAN MICHAEL JORDAN: Okay. I m excited to be here. I feel that the communication that Sorenson has and will continue to provide

More information

Item Analysis Explanation

Item Analysis Explanation Item Analysis Explanation The item difficulty is the percentage of candidates who answered the question correctly. The recommended range for item difficulty set forth by CASTLE Worldwide, Inc., is between

More information

Moodscope: Mood management through self-tracking and peer support

Moodscope: Mood management through self-tracking and peer support Moodscope: Mood management through self-tracking and peer support Introduction Moodscope is a novel online mood-tracking system which enables individuals to accurately measure and record daily mood scores

More information

Can Multimodal Real Time Information Systems Induce a More Sustainable Mobility?

Can Multimodal Real Time Information Systems Induce a More Sustainable Mobility? 1 Can Multimodal Real Time Information Systems Induce a More Sustainable Mobility? José Pedro Ramalho Veiga Simão University of Applied Sciences and Arts of Southern Switzerland SUPSI Institute for Applied

More information

FUNCTIONAL CONSISTENCY IN THE FACE OF TOPOGRAPHICAL CHANGE IN ARTICULATED THOUGHTS Kennon Kashima

FUNCTIONAL CONSISTENCY IN THE FACE OF TOPOGRAPHICAL CHANGE IN ARTICULATED THOUGHTS Kennon Kashima Journal of Rational-Emotive & Cognitive-Behavior Therapy Volume 7, Number 3, Fall 1989 FUNCTIONAL CONSISTENCY IN THE FACE OF TOPOGRAPHICAL CHANGE IN ARTICULATED THOUGHTS Kennon Kashima Goddard College

More information

Reliability, validity, and all that jazz

Reliability, validity, and all that jazz Reliability, validity, and all that jazz Dylan Wiliam King s College London Published in Education 3-13, 29 (3) pp. 17-21 (2001) Introduction No measuring instrument is perfect. If we use a thermometer

More information

What Solution-Focused Coaches Do: An Empirical Test of an Operationalization of Solution-Focused Coach Behaviors

What Solution-Focused Coaches Do: An Empirical Test of an Operationalization of Solution-Focused Coach Behaviors www.solutionfocusedchange.com February, 2012 What Solution-Focused Coaches Do: An Empirical Test of an Operationalization of Solution-Focused Coach Behaviors Coert F. Visser In an attempt to operationalize

More information

Part 8 Logistic Regression

Part 8 Logistic Regression 1 Quantitative Methods for Health Research A Practical Interactive Guide to Epidemiology and Statistics Practical Course in Quantitative Data Handling SPSS (Statistical Package for the Social Sciences)

More information

(happiness, input, feedback, improvement)

(happiness, input, feedback, improvement) Introduction in hifi-process (happiness, input, feedback, improvement) Strategy Meeting January, 8 th 2016 Why this session? Why this session? Lot of talks, feedback and input from lots of people drafted

More information

Empirical Research Methods for Human-Computer Interaction. I. Scott MacKenzie Steven J. Castellucci

Empirical Research Methods for Human-Computer Interaction. I. Scott MacKenzie Steven J. Castellucci Empirical Research Methods for Human-Computer Interaction I. Scott MacKenzie Steven J. Castellucci 1 Topics The what, why, and how of empirical research Group participation in a real experiment Observations

More information

MSc Software Testing MSc Prófun hugbúnaðar

MSc Software Testing MSc Prófun hugbúnaðar MSc Software Testing MSc Prófun hugbúnaðar Fyrirlestrar 43 & 44 Evaluating Test Driven Development 15/11/2007 Dr Andy Brooks 1 Case Study Dæmisaga Reference Evaluating Advantages of Test Driven Development:

More information

The Clock Ticking Changes Our Performance

The Clock Ticking Changes Our Performance The Clock Ticking Changes Our Performance Shoko Yamane, Naohiro Matsumura Faculty of Economics, Kinki University; Graduate School of Economics, Osaka University syamane@kindai.ac.jp Abstract We examined

More information

A Case Study for Reaching Web Accessibility Guidelines for the Hearing-Impaired

A Case Study for Reaching Web Accessibility Guidelines for the Hearing-Impaired PsychNology Journal, 2003 Volume 1, Number 4, 400-409 A Case Study for Reaching Web Accessibility Guidelines for the Hearing-Impaired *Miki Namatame, Makoto Kobayashi, Akira Harada Department of Design

More information

CHAPTER 1 Understanding Social Behavior

CHAPTER 1 Understanding Social Behavior CHAPTER 1 Understanding Social Behavior CHAPTER OVERVIEW Chapter 1 introduces you to the field of social psychology. The Chapter begins with a definition of social psychology and a discussion of how social

More information

Anti-smoking vaccine developed

Anti-smoking vaccine developed www.breaking News English.com Ready-to-use ESL/EFL Lessons Anti-smoking vaccine developed URL: http://www.breakingnewsenglish.com/0505/050516-nicotine.html Today s contents The Article 2 Warm-ups 3 Before

More information

Executive Functions and ADHD

Executive Functions and ADHD Image by Photographer s Name (Credit in black type) or Image by Photographer s Name (Credit in white type) Executive Functions and ADHD: Theory Underlying the New Brown Executive Functions/Attention Scales

More information

Good Enough But I ll Just Check: Web-page Search as Attentional Refocusing

Good Enough But I ll Just Check: Web-page Search as Attentional Refocusing Good Enough But I ll Just Check: Web-page Search as Attentional Refocusing Duncan P. Brumby (BrumbyDP@Cardiff.ac.uk) Andrew Howes (HowesA@Cardiff.ac.uk) School of Psychology, Cardiff University, Cardiff

More information

Sightech Vision Systems, Inc. Real World Objects

Sightech Vision Systems, Inc. Real World Objects Sightech Vision Systems, Inc. Real World Objects Animals See the World in Terms of Objects not Pixels Animals, especially the more compact ones, must make good use of the neural matter that they have been

More information

Engagement Newsletter

Engagement Newsletter Engagement Newsletter July 2018 Edition Engagement Newsletter This Month Engagement Spotlight: Social Media EASS teams up with Disability Wales Planning new advice aids New advisers Isle of Man visit Success

More information

User Guide Seeing and Managing Patients with AASM SleepTM

User Guide Seeing and Managing Patients with AASM SleepTM User Guide Seeing and Managing Patients with AASM SleepTM Once you have activated your account with AASM SleepTM, your next step is to begin interacting with and seeing patients. This guide is designed

More information

Student Minds Turl Street, Oxford, OX1 3DH

Student Minds Turl Street, Oxford, OX1 3DH Who are we? Student Minds is a national charity working to encourage peer support for student mental health. We encourage students to have the confidence to talk and to listen. We aim to bring people together

More information

UNIT. Experiments and the Common Cold. Biology. Unit Description. Unit Requirements

UNIT. Experiments and the Common Cold. Biology. Unit Description. Unit Requirements UNIT Biology Experiments and the Common Cold Unit Description Content: This course is designed to familiarize the student with concepts in biology and biological research. Skills: Main Ideas and Supporting

More information

The Relationship between YouTube Interaction, Depression, and Social Anxiety. By Meredith Johnson

The Relationship between YouTube Interaction, Depression, and Social Anxiety. By Meredith Johnson The Relationship between YouTube Interaction, Depression, and Social Anxiety By Meredith Johnson Introduction The media I would like to research is YouTube with the effects of social anxiety and depression.

More information

This report summarizes the stakeholder feedback that was received through the online survey.

This report summarizes the stakeholder feedback that was received through the online survey. vember 15, 2016 Test Result Management Preliminary Consultation Online Survey Report and Analysis Introduction: The College s current Test Results Management policy is under review. This review is being

More information

Color naming and color matching: A reply to Kuehni and Hardin

Color naming and color matching: A reply to Kuehni and Hardin 1 Color naming and color matching: A reply to Kuehni and Hardin Pendaran Roberts & Kelly Schmidtke Forthcoming in Review of Philosophy and Psychology. The final publication is available at Springer via

More information

This is a guide for volunteers in UTS HELPS Buddy Program. UTS.EDU.AU/CURRENT-STUDENTS/SUPPORT/HELPS/

This is a guide for volunteers in UTS HELPS Buddy Program. UTS.EDU.AU/CURRENT-STUDENTS/SUPPORT/HELPS/ VOLUNTEER GUIDE This is a guide for volunteers in UTS HELPS Buddy Program. UTS.EDU.AU/CURRENT-STUDENTS/SUPPORT/HELPS/ CONTENTS 1 2 3 4 5 Introduction: Your role as a Buddy Getting started Helping with

More information

VO2 Max Booster Program VO2 Max Test

VO2 Max Booster Program VO2 Max Test VO2 Max Booster Program VO2 Max Test by Jesper Bondo Medhus on May 1, 2009 Welcome to my series: VO2 Max Booster Program This training program will dramatically boost your race performance in only 14 days.

More information

Optimal Flow Experience in Web Navigation

Optimal Flow Experience in Web Navigation Optimal Flow Experience in Web Navigation Hsiang Chen, Rolf T. Wigand and Michael Nilan School of Information Studies, Syracuse University Syracuse, NY 13244 Email: [ hchen04, rwigand, mnilan]@mailbox.syr.edu

More information

Anti-smoking vaccine developed

Anti-smoking vaccine developed www.breaking News English.com Ready-to-use ESL/EFL Lessons Anti-smoking vaccine developed URL: http://www.breakingnewsenglish.com/0505/050516-nicotine-e.html Today s contents The Article 2 Warm-ups 3 Before

More information

Investigating the Reliability of Classroom Observation Protocols: The Case of PLATO. M. Ken Cor Stanford University School of Education.

Investigating the Reliability of Classroom Observation Protocols: The Case of PLATO. M. Ken Cor Stanford University School of Education. The Reliability of PLATO Running Head: THE RELIABILTY OF PLATO Investigating the Reliability of Classroom Observation Protocols: The Case of PLATO M. Ken Cor Stanford University School of Education April,

More information

Assistive Technology Theories

Assistive Technology Theories Assistive Technology Theories CHINTAN PATEL CSE 7390 u Nicole Sliwa u 10 November 2014 And now, the discussion topic we've all been waiting for... The Papers Predictors of assistive technology abandonment

More information

Testing Means. Related-Samples t Test With Confidence Intervals. 6. Compute a related-samples t test and interpret the results.

Testing Means. Related-Samples t Test With Confidence Intervals. 6. Compute a related-samples t test and interpret the results. 10 Learning Objectives Testing Means After reading this chapter, you should be able to: Related-Samples t Test With Confidence Intervals 1. Describe two types of research designs used when we select related

More information

THEORY OF CHANGE FOR FUNDERS

THEORY OF CHANGE FOR FUNDERS THEORY OF CHANGE FOR FUNDERS Planning to make a difference Dawn Plimmer and Angela Kail December 2014 CONTENTS Contents... 2 Introduction... 3 What is a theory of change for funders?... 3 This report...

More information

Pathway Project Team

Pathway Project Team Semantic Components: A model for enhancing retrieval of domain-specific information Lecture 22 CS 410/510 Information Retrieval on the Internet Pathway Project Team Susan Price, MD Portland State University

More information

Political Science 15, Winter 2014 Final Review

Political Science 15, Winter 2014 Final Review Political Science 15, Winter 2014 Final Review The major topics covered in class are listed below. You should also take a look at the readings listed on the class website. Studying Politics Scientifically

More information

BELL WORK. Do you have any fitness goals and if so what are they? What are you currently doing to achieve those goals?

BELL WORK. Do you have any fitness goals and if so what are they? What are you currently doing to achieve those goals? BELL WORK Do you have any fitness goals and if so what are they? What are you currently doing to achieve those goals? REVIEW What are the four measures of fitness? HEART AND LUNG ENDURANCE MUSCLE STRENGTH

More information

Operation S.A.V.E Campus Edition

Operation S.A.V.E Campus Edition Operation S.A.V.E Campus Edition 1 Suicide Prevention Introduction Objectives: By participating in this training you will learn: The scope and importance of suicide prevention The negative impact of myths

More information