Reviewer Rubric
Does the submission address a significant problem in clinical/health informatics, including one or more of the following conference objectives?
- Apply evidence- or experience-based models of informatics practices to improve health care
- Leverage informatics tools to better engage in team-based care
- Connect emerging trends in health and health IT policy with regulatory efforts that impact care delivery
- Identify relevant public and population health informatics strategies for prevalent health issues
- Develop core competencies and leadership skills unique to health informatics professionals
- Assess and improve the application of health IT infrastructure best practices
Does the submission describe an innovative application and/or an innovation in clinical/health informatics?
Impact: The submission has had an actual impact to date in one or more of the following major topic areas:
- Usability, Efficiency, and Experience
- Clinical Decision Support and Analytics
- Learning Health System
- Interoperability and Informatics Infrastructure
- Leadership, Advocacy, and Policy
- Does the order of sections and description follow the Call for Participation?
- Is the quality of the English acceptable?
- Is the submission formatted according to the submission guidelines?
- Does the work refer to and relate the findings to relevant previous literature? (all)
- In describing findings with implications for health or health care, is the thesis clearly presented and appropriately evaluated? (regular paper, student paper, podium, poster)
- Do the described methods convey the rigor of techniques used in the study? (regular paper, student paper)
- Do the methods provide enough details to support the reproducibility of the study? (regular paper, student paper, podium, poster)
- Is the approach clearly presented for how the activities will be organized? (demonstration, interactive panel, didactic panel)
"Accept" Example Review
This paper reports on a content analysis of posts from xxx forum. The main contribution this paper makes to the AMIA community is insights on characteristics of posts by people with xxx that predict recovery. The paper also links those insights to recommendations for designers to develop or improve tools to support people with xxx in their recovery process. This contribution is significant because of the large scale of the analysis that builds substantially on prior work.
This was an excellent paper on many levels. The paper is extremely well written, with key points made easy to find through appropriately bolded headers, such as “Summary of Findings.” The work is a great fit for the AMIA audience, particularly for attendees interested in machine learning and consumer health.
The work is clearly original. I have seen no other work tackle this problem using similar methods to predict recovery in any health domain. The authors used a large corpus of yyy posts by people with xxx to identify characteristics in what these people post that predict their recovery. Although other published work reports on aspects of online posts, those studies have been exploratory and much smaller in scale. The authors identified several key predictive characteristics that also fit well with the xxx literature, which they thoroughly reviewed to motivate the work.
The methods are clear, thoroughly described, and well-matched to answer the stated research questions. It was also nice to see close attention paid to human subjects and ethical issues, including the extra step of using modified quotes and explicitly discussing ethical implications.
Although the analysis involved complex steps, readers without a technical background should find this paper quite accessible because of the authors’ clarity. Strengths of this paper are the use of both qualitative and quantitative methods and the steps taken to assess the validity of findings. The paper also includes appropriate and well-justified implications for design that specifically connect findings from the predictive model to potential technological tools that could aid in people’s recovery. The authors were very careful to couch their results with appropriate limitations and cautions.
Suggestions for improvement:
- Make it clearer at the start of the paper and in the abstract that you include insights for design. The current framing implies that the only result is identifying people who will recover.
- Consider adding a cautionary note to the design implications about nudges potentially backfiring. In general, I believe nudges have powerful potential for positive change, but sometimes these seemingly small pushes make things worse. Designers should watch for such effects. For example, what if the nudges push people away from social media and isolate them further?
"Reject" Example Review
This paper reports on a content analysis of posts from xxx forum. The paper’s contribution to the AMIA community is helping to answer the question of how features in an online health community affect participation, using the case of zzz, a popular online health resource. Although I agree wholeheartedly that researchers and designers of online communities need to take a socio-technical approach to understanding community participation, this paper does not provide specific advice to guide designers. It also suffers from methodological issues, acknowledged by the authors, that make it difficult to determine any real causal effects and that limit the paper’s contributions.
Answering the question of how design features of an online health community influence participation is interesting and important. Such insights could help make online resources even better. However, the paper lacks the level of detail the AMIA audience expects, including a clearly framed approach and specific design insights.
Many researchers have looked at the impact of online communities on participation. The authors could have included more prior work and better explained what is missing from that work and how it relates to their study. For example, much work from the field of human-computer interaction speaks to factors that affect participation in online communities. In particular, authors of paper x describe the same community and how moderators’ posts influenced participation: <paper x reference>.
The authors provide few details about how they collected data for this study. Sampling posts at only certain times of year introduces methodological limitations that could have influenced the findings. As a result, the findings are difficult for a reader to evaluate and justify. The paper would be substantially strengthened by clear, thoroughly described methods that match the stated research questions. The authors should also describe the methodological limitations in the discussion.
Findings support prior work showing that participation drops after interface changes are made. Yet the amount and type of moderation also changed at the same time, which could influence participation substantially. It helped that the authors included a qualitative analysis to examine why people leave, but the quotes are not effectively embedded to explain points or themes. Rather, quotes are simply listed at the end of the results without explanation or categorization. There are many potential reasons for abandonment, including the simple fact that things changed and people often struggle with change at first before adjusting. It was insightful to see both the categories that indicated someone would leave the forum and the types of complaints posted. Yet from this type of retrospective analysis, it is very difficult to demonstrate causal evidence for abandonment.
Unfortunately, the paper does not provide any advice or insights on how a forum should be better designed. For example, the category “availability of better platforms” doesn’t help us understand what in particular is better about other platforms. I applaud the authors for taking multiple approaches to examine this issue, but I did not see how they could ever deliver the promised design insights from this type of content analysis.