Every UND program is expected to have an assessment plan. Plans include:
- Learning outcomes statement
- Department mission statement
- Map of outcomes and courses
- Student learning outcomes and their assessment methods
- Student learning outcome goals
Department Assessment Expectations
Every program at UND is expected to have an assessment plan. This means every major and every certificate program at both the graduate and the undergraduate levels should have a plan. If a department offers multiple degrees, it would make sense that each degree program has some difference in intended learning outcomes, however similar the programs may be.
It makes sense that there will be a lot of overlap in learning outcomes because the curriculum for one major likely has a great deal in common with the curriculum for the other. An engineering department, for example, should have a plan for the Master of Science degree and a plan for the Master of Engineering degree. But there may be only a single goal that distinguishes one program from the other, with the MS providing a greater focus on research skills and the ME a focus on practice. That small difference will be reflected in the two similar, yet not identical, assessment plans.
Every program at UND is expected to have faculty collecting data about learning every year. That doesn't mean every faculty member, every learning outcome, and every method in every year. Make sure that some assessment activity occurs annually, as described in your program's assessment plan.
Every program should offer opportunities for faculty to talk about and share assessment results and findings (i.e., "what do you think this bit of information means?") every year. There is great merit in having this discussion during a retreat – where there's time for meaningful discussion.
Results Sharing Model
- Collect information all year.
- At an annual retreat, begin by reviewing all assessment information collected during the year, possibly including a joint look at the actual student work products that were the basis of some of the assessment results and findings you'll be discussing. Talk about what the collected documents show and what the information means.
- Review notes from last year's retreat. What information did you see there? What actions, if any, did you agree would be taken this year? How have those things worked out? If plans weren't followed, why was that?
- Spend the last part of the retreat planning for next year. Based on the assessment findings and on your discussion of actions taken over the past year (and drawing in whatever additional information might be relevant), what should be on your department's "to-do" list for the next year? You might want to talk about individual courses ("can some faculty include a bit of introduction to presentations in 200-level classes so the expectations in the capstone are not so unfamiliar?"), curriculum ("should course offerings, requirements, electives, content be tweaked?"), and/or the overall assessment process ("are we finding that assessment activities provide useful information, or should we be looking for better ways to answer our questions about learning?").
Every program should have someone documenting the assessment activities completed each year. One convenient place to create a (permanent) paper trail of documentation is in the annual reports. If you have your own program accreditor, you may want to document in another way that meets your accreditor's expectations. If, even without a program accreditor, you have developed an effective and well-honed practice of documenting internally, you can continue that practice. In any case, someone should file an assessment report annually for every program. That report should include:
- A review of your posted assessment plan
- The assessment methods used during the past year
- A sample of assessment results and the conclusions drawn from them
- A description of any loop-closing activities that occurred during the past year, either in response to new assessment findings or as a result of previous assessment work
Writing Assessment Plans
Program assessment plans should answer the following questions for departments and students:
- What should students be able to do by the time they complete your program? In other words, what learning outcomes should be achieved by the time they complete the major or certificate?
- What methods will you use to find out whether they can do the things you've named (i.e., the learning outcomes you've identified)?
- How will you ensure that the necessary information gets collected, analyzed, and discussed? Who will remind faculty? What will be the timetable? Who will ensure that analysis occurs (in a whole-department meeting or within a departmental committee)? Who will make sure that results get discussed by the faculty as a whole?
- How will all of this work get documented so that what's done in one year remains available for review and discussion two or three years down the road, when there might be new findings that should be compared?
Assessment Plan Submission
Assessment Plans are included in the Taskstream assessment template and should be updated as necessary.
Assessment Review Procedure
As of October 15, 2020, copies of program assessment plans are housed in UND's Taskstream Assessment Management System.
Some programs have a program accreditor that mandates language for the intended learning outcomes. Perhaps your accreditor says that your statements of what students learn during college will be learning outcomes but what they should be able to do on the job will be goals. Perhaps your accreditor expects each goal (defined as what students should be able to do by the time of graduation) to be lofty and broad, but also to be unpacked via detailed objectives that describe exactly what you'll measure and what the standard for success will be.
If your accreditor uses a specific set of words for descriptions of assessment expectations, please develop a plan that uses your accreditor's terminology. It doesn't make sense to write using one set of words for your program accreditors and another set of words for UND.
If you do not have a program accreditor or if your accreditor does not prescribe terminology, then use language that makes sense to faculty in the field. In some programs, faculty write learning outcomes (whatever you choose to call them) that are specific enough that there is no need to pin down meaning more precisely. If so, no sub-categories (most commonly called objectives) may be necessary. In other cases, your department may want to start with an overarching set of program outcomes (which faculty might call "goals") and a supporting list of more specific learning outcomes. If your goals are broad, this kind of supporting list can provide needed clarity. "Objectives" of this sort are usually both specific and measurable. In fact, the objective itself may contain information that points to an assessment method or the "bar" you hope to see met. For example, a broad goal like "Students will communicate well" might be immediately followed by a first objective which specifies "90% of program seniors will be able to write a paper that is scored at 3, 4, or 5 on the department's rubric for effective communication."
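A measurable objective like the example above can be checked with simple arithmetic once scores are in hand. Here is a minimal sketch in Python; the rubric scale, the 3-or-above bar, and the 90% target come from the example, while the individual scores are invented for illustration:

```python
# Hypothetical rubric scores (1-5) for ten program seniors on the
# department's effective-communication rubric.  These values are
# invented; real scores would come from faculty rating actual papers.
scores = [5, 4, 3, 4, 2, 5, 3, 4, 4, 3]

TARGET = 0.90  # the "bar" named in the example objective

# Count seniors scoring 3, 4, or 5, then compare against the target.
meeting_bar = sum(1 for s in scores if s >= 3)
proportion = meeting_bar / len(scores)

print(f"{meeting_bar} of {len(scores)} seniors ({proportion:.0%}) met the bar")
print("Objective met" if proportion >= TARGET else "Objective not met")
```

The point of the sketch is only that a well-written objective names its own success criterion, so checking it is a one-line comparison rather than a judgment call.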
Common Assessment Pitfalls
Failing to use both direct and indirect measures: Direct assessments are those which involve looking at student work that actually demonstrates the learning identified by your goal or outcome. Each student's work is then rated or scored (with numbers or via narrative) specifically in terms of that learning outcome. Finally, ratings or scores earned by many different students are combined so that conclusions can be drawn about overall student achievement of that specific learning outcome.
So imagine, for example, that you want to find out how well students are doing on their presentation skills. You directly assess that by observing student presentations and scoring them on the aspects of presentation that you have identified as important. You might use a rubric to score each criterion, or perhaps you write a narrative of each student's strengths and weaknesses (related to the criteria you've identified) as you observe the presentations. Then you compile the information. If you've used scores, you'll probably count how many students scored at each point on the scale for each criterion of interest. If you wrote brief narratives, you'll look back through them for themes that describe patterns, in relation to criteria of interest, observed across all the students. In either case, you'll see the patterns of strengths and weaknesses, and consider that information in relation to what you had intended (and hoped) to see demonstrated. That's direct assessment. And every goal or learning outcome should be directly assessed.
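The compiling step just described (counting how many students scored at each point on the scale for each criterion) can be sketched in a few lines of Python. The criteria names and scores below are purely hypothetical:

```python
from collections import Counter

# Hypothetical rubric scores (1-5) for six observed presentations,
# recorded separately for each criterion of interest.
ratings = {
    "organization": [4, 3, 5, 4, 2, 4],
    "delivery": [3, 3, 4, 2, 3, 4],
    "use of evidence": [5, 4, 4, 3, 4, 5],
}

# Compile the information: tally how many students earned each score
# on each criterion, so patterns of strengths and weaknesses stand out.
for criterion, scores in ratings.items():
    tally = Counter(scores)
    summary = ", ".join(f"{n} scored {s}" for s, n in sorted(tally.items()))
    print(f"{criterion}: {summary}")
```

Even a tally this simple makes the pattern visible: a criterion where most scores cluster at the low end of the scale flags a weakness worth discussing at the annual retreat.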
Direct assessment contrasts with indirect assessment, which involves eliciting perspectives about student learning. Indirect assessment is most often done by asking students, usually via a survey or informal writing assignment, to describe their own sense of confidence in their ability to do whatever is specified as an intended learning outcome. For example, students might score themselves on their perceived ability to do a high quality presentation, perhaps using the same rubric you're scoring with or perhaps by writing paragraphs about what they see as their strengths and weaknesses. Asking all program faculty to summarize their impressions of student presentation skills (without observing and rating actual presentations) would also be an indirect assessment.
Perception information (indirect assessment) is an easy-to-collect and worthwhile assessment strategy. Student perceptions of their learning are particularly important, and information from a compilation of student perceptions is especially useful when paired with direct assessment findings about the same learning outcome.
Too much assessment: Just as it can seem logical to have goals for each course, it may seem intuitive to require every teacher to collect work products and analyze them for assessment information in every course – or at least once every semester. While regular participation in assessment is important, there is no value in becoming buried in data. A better strategy is as follows:
- Identify two or three different ways of looking at each learning outcome, ideally starting with methods or tools which help you see learning near the time of program completion (if students could do what's expected at the end of the fifth semester but have lost a competency or two by the time of graduation, that's not particularly satisfactory; what really matters is what they can do when they leave the university).
- To the degree possible, look for opportunities to make those methods overlap, so that a single method can help you look at multiple learning outcomes.
- Establish a rotation of assessment so that every method or "tool" is used every two or three years. Key methods may be used more frequently, if deemed reasonable and appropriate.
- If you find (once information begins rolling in) that your findings are generating more questions than answers, develop additional strategies to dig more deeply into areas where you need to know more.
Too many factoids, no analysis: It may seem counterintuitive to say this on an assessment website, but burying yourself in data may not be a good thing for assessing student learning. You want to collect enough information to gain a systematic (researched) understanding of student learning, but you don't want so many pieces of information that it's impossible to manage the paper flow or find time to analyze what's been collected. The aim isn't to have the most data. It's to have information that reveals patterns and trends in learning that will help faculty in your program make good decisions about any changes that might be contemplated or needed.
No one wants to talk about the data: If you aren't generating information that's interesting, the truth probably is that no one does want to talk about it. It feels like a waste of time. It may be time to regroup and focus on learning outcomes that everyone agrees are critical, methods for collecting information likely to yield intrinsically interesting findings, and questions about learning that program faculty are genuinely curious about. It can be hard to see how to do this, so one approach is to get an outside perspective. Or get in touch with Dr. Tim Burrows, Director of Assessment.
Faculty in my department see assessment as busywork: This problem becomes self-perpetuating. If colleagues assume assessment is busywork, they may do whatever is easiest rather than what is most likely to be useful. The best way to change that attitude is with a bit of success. Try creating an assessment project that will yield genuinely interesting findings. Engage your colleagues in discussion of your results. And then build on that first small success.
Difficulties with organization: The more information you collect, the greater your organizational challenge. And there is no single strategy for success.
The best advice is what you already know. It makes sense to keep all materials in two places, one for all hard copies and one for all electronic documents. Make sure that minutes are kept at every meeting where assessment is discussed, because a lot of the results, findings, and loop-closing information will be elicited from faculty discussions. If it's not written down, you will not remember when it's time to report. Add those minutes to the assessment file as soon as they're written up. Assign one individual in the department (ideally someone with good organizational skills) to be in charge of maintaining the records, even though you will want to be sharing responsibility for other aspects of the work. At the beginning of the semester, set up a schedule for assessment reminders, and choose someone to be in charge of them. Find a colleague in another department where they've been doing assessment well for a long time and ask how they do it. Beg, borrow, or steal the best methods you find.