Authors

  1. Fisher, MaryDee DNP, RN, CPN
  2. Robb, Meigan PhD, RN
  3. Wolf, Debra M. PhD, RN
  4. Slade, Julie DNP, RN

Article Content

Within online learning environments,1 the discussion forum (DF) is used to facilitate in-depth student interactions, reflections on practice, and knowledge base formation.2,3 A rubric is an evaluation tool used to assess student performance levels in online DF postings.4,5 This article describes how nursing faculty at a private university created a standardized rubric to be used across online nursing programs (RN-BSN, MSN, and DNP) to objectively assess evidence of learning within DF postings.

 

Approach

Anecdotal evidence suggested that faculty were inconsistently using the existing holistic rubric to evaluate student performance. Students with dual-track enrollments (RN-MSN and BSN-DNP) reported being confused due to multiple DF expectations across programs. Therefore, an ad hoc committee with faculty from all programs was formed to address the concerns.

 

Rubric Development

The committee first reaffirmed that the purpose of the DF was to provide evidence of a student's ability to understand, apply, and evaluate core concepts. A review of the literature suggested that using an analytic rubric to evaluate online discussions decreased student frustration and evaluator subjectivity by defining various levels of performance, describing performance behaviors, and associating point values with specific behaviors.2-4,6,7 The committee next identified similarities and differences among DF requirements across program levels. Requirements for the number and timing of posts were consistent. However, the activities that constituted a DF posting (eg, typed response, video comments, creation of a concept map, feedback on drafted assignments) varied.

 

To develop the rubric, the committee worked through the processes of (1) defining criteria to promote engagement throughout the week, (2) selecting a point scale that summarized behaviors without contributing to grade inflation, and (3) crafting descriptors for each performance level that clarified expectations for faculty and students. The final rubric design (see Table, Supplemental Digital Content 1, http://links.lww.com/NE/A601) reflected 5 main categories tied to independent learning (main response due by Wednesday), collaborative learning (peer responses 1 and 2 due on 2 separate days by Sunday), and netiquette (timeliness and academic writing mechanics).

 

The rubric was piloted across the programs by the ad hoc committee members. Anecdotal reports were positive regarding interpretation, usability, ease of grading, and appropriateness of point allotments. Faculty further indicated the rubric was an applicable DF assessment tool for multiple learning activities across programs. This feedback supported undertaking a full program rollout.

 

Rollout

The first step was establishing a line of communication between the ad hoc committee and full-time and adjunct faculty. A narrated video highlighting differences between the previous and revised rubrics was created, and a link to the video was sent via email. The resources and supplemental materials were also posted on the Faculty Support Site in the learning management platform so that faculty could retrieve them for later reference.

 

A checklist and tip sheet were shared with each course liaison (a full-time faculty member who oversees course design and maintenance) outlining steps to integrate the revised rubric. Liaisons were asked to (1) post a scripted announcement in the course about the change, (2) send a private email to students to reinforce the change, (3) upload the rubric, (4) revise the course grade book to reflect new point values, and (5) edit the syllabus to reflect use of the standardized rubric.

 

Conclusion

Well-designed rubrics enhance students' understanding of required performance behaviors and increase consistency in faculty grading practices. Incorporating a standardized analytic rubric across multiple nursing programs is one method of promoting consistent evaluation of the student learning that occurs in online DFs. Involving faculty in each step of the rollout of a pedagogical change is key to its adoption.

 

References

 

1. Fact sheet: nursing shortage. American Association of Colleges of Nursing website. Available at: https://www.aacnnursing.org/Portals/42/News/Factsheets/Nursing-Shortage-Factshee. Updated May 18, 2017. Accessed November 5, 2018. [Context Link]

 

2. Craig GP. Evaluating discussion forums for undergraduate and graduate students. Faculty Focus. Available at: https://www.facultyfocus.com/articles/online-education/evaluating-discussion-for. Published 2015. Accessed August 17, 2018. [Context Link]

 

3. Wyss V, Freedman D, Siebert CJ. The development of a discussion rubric for online courses: standardizing expectations of graduate students in online scholarly discussions. TechTrends. 2014;58(2):99-107. [Context Link]

 

4. McKinney BK. The impact of program-wide discussion board grading rubrics on students and faculty satisfaction. Online Learn J. 2018;22(2):289-299. [Context Link]

 

5. Phillippi JC, Schorn MN, Moore-Davis T. The APGAR rubric for scoring online discussion boards. Nurse Educ Pract. 2015;15(3):239-242. [Context Link]

 

6. Renjith V, George A, Renu G, D'Souza P. Rubrics in nursing education. Int J Adv Res. 2015;3(5):423-428. [Context Link]

 

7. Minnich M, Kirkpatrick AJ, Goodman JT, et al. Writing across the curriculum: reliability testing of a standardized rubric. J Nurs Educ. 2018;57(6):366-370. [Context Link]