Student Outcomes Assessment

Welcome

This website makes available general information on student outcomes assessment and specific information related to assessment at NTID, one of the nine colleges of Rochester Institute of Technology (RIT). Resource information, in the form of links to other websites, a bibliography of printed materials, and relevant internal and external workshops and conferences, is included.

Development of Student Outcomes Assessment at NTID
A brief history of outcomes assessment at NTID and the operational and best practice principles used to develop the initial college assessment plan.

Program Level Assessment at RIT, including NTID
A description of RIT’s Academic Program Assessment efforts, with links to RIT’s Educational Effectiveness, Student Learning Outcomes Assessment Committee, and RIT’s Annual Outcomes Assessment Progress Reports.

Unit/Program Assessment Plans
A listing of degree programs, arts and sciences areas, and academic support areas in the college, with links to their completed outcomes assessment plans.

Assessment Resources
A collection of print and electronic assessment resources, including assessment plans from other institutions, information on how to write course and program objectives, how to develop rubrics, examples of student surveys and detail on specific discipline-based assessment strategies.

Middle States Association
RIT's Periodic Review Report submitted to Middle States Association in June 2002; NTID's College Assessment Plan (June 2002); Middle States Review - Student Outcomes Assessment - NTID - Executive Summary, February 24, 2006; and Academic Outcomes Assessment Task Force Report, March 15, 2006. Also included are links to the RIT Middle States Accreditation webpages and Self-Study 2017.

Assessment Workshops and Conferences
Information about upcoming assessment convenings at the local and national level.

Development of Student Outcomes Assessment at NTID

NTID was involved in outcomes assessment well before this process received national attention. As a college, NTID has been actively engaged in the assessment of institutional outcomes since its establishment in 1965. NTID’s unique mission, federal funding and strategic plan have helped shape what our college calls performance indicators and what has traditionally comprised our college’s approach to outcomes assessment.

In 1996, college-level performance indicators were established in response to the congressionally mandated Government Performance and Results Act (GPRA). These included a series of institutional effectiveness objectives and benchmarks for: (1) enrollment; (2) student diversity; (3) first-year retention; (4) graduation; (5) placement; (6) student satisfaction; and (7) alumni satisfaction. These performance indicators have been consistently reported for the college as a whole, but were not broken down by program.

So, while assessment of institutional outcomes has long been part of our collegiate culture, assessment of student learning outcomes at the program level is more recent. Fortunately, NTID’s Strategic Objectives (1999-2004) focused attention on student outcomes assessment and established a college steering committee, the Student Outcomes Assessment Steering Committee, to help formulate a grassroots approach. The work of the Steering Committee has provided a good beginning for our college planning.

At about the same time the Strategic Objectives were drafted, the Middle States Association of Colleges and Schools (MSA) began to place special emphasis on the assessment of student academic achievement as part of its accreditation process. In response to that emphasis, the Dean charged the already established Steering Committee with oversight of the student outcomes assessment efforts to ready the college for its next scheduled MSA review.

The NTID system of assessment was uniquely unit-based and faculty-driven. The June 2001 Report of the Steering Committee on Outcomes Assessment explained this approach in the following way:

  • NTID’s Strategic Plan (1992) established six domains of graduation exit competencies. These domains and their associated sub-competencies provided the framework for student outcomes assessment.
  • Our accrediting body pointed out during its 1997 site visit that NTID needed to develop a coherent approach to outcomes assessment. In its report, the Middle States Association (MSA) indicated that NTID should devote further attention to student outcomes:
    • "Outcomes-based measures should be more fully developed, with the inclusion of students in process-oriented self-evaluation based upon clearly communicated competencies and indicators." (Middle States Association Accreditation Site Visit Report to RIT, 1997)
  • As a result, NTID’s Leadership Team requested that outcomes assessment be included as one of the targeted initiatives within NTID’s Strategic Initiatives Blueprint (1999-2004). The Blueprint specified that the central purpose of the outcomes assessment plan would be to:
    • Improve teaching and learning, contribute to the personal development of students, ensure institutional improvement, and facilitate accountability.

In the fall of 1998, the Steering Committee on Outcomes Assessment was established and charged with developing a campus plan for outcomes assessment. The Steering Committee emphasized the importance of assessment of student outcomes in improving instruction, and began to investigate how other institutions of higher education were addressing assessment of student academic achievement.

The following questions were addressed by the Steering Committee:

  • What is student outcomes assessment and why is it important?
  • What are the characteristics of an effective system to assess student outcomes?
  • What are the proposed elements of a system of outcomes assessment and program improvement at NTID?

Since completing the pilot with one program, the Steering Committee has worked with all other academic units to develop assessment plans for their degree programs. In addition to the development of plans for each academic program, the Steering Committee was involved with the development of assessment measures for broader, overarching, campus-wide learning goals that are expected of all NTID students.

Steering Committee Members

Academic program faculty invited to join the steering committee were:

  • Stephen Aldersley
  • Laurie Brewer
  • Paula Brown
  • John Cox
  • Vince Daniele
  • John Madison
  • Barbara McKee
  • Tom Raco
  • Larry Scott
  • Michael Steve
  • Lee Twyman
  • Kathy Voelkl
  • Christine Licata (as convener and facilitator)

The Pilot

During 1998-99, the Steering Committee began to explore reasonable approaches to outcomes assessment and to become familiar with national trends, issues and best practices. Our college sent a team to the American Association for Higher Education (AAHE) National Assessment Conference in June 1998 and again in June 1999, June 2000 and June 2001.

Following these initial orientation activities, we began our work on a very small scale by collaborating with a program in the Center for Technical Studies that was interested in developing outcome measures on a pilot basis. The Art and Computer Design (ACD) program at the AOS level became the pilot.

As a result of the pilot project, our college gained considerable experience in how to identify key program outcomes; how to use existing classroom assessment as a springboard to assessment of a capstone nature; and how to develop assessment rubrics in the technical and general education domains. The pilot also provided us with a nucleus of faculty experienced in the process who were then able to help other programs develop outcomes assessment plans.

Definitions and Parameters

The Steering Committee established the following outcomes assessment design parameters:

  1. The pilot was an iterative process, with the emphasis on what we could learn from establishing an outcomes assessment design. The point of the assessment pilot was to better understand how a program might approach establishing a system of outcomes assessment. The intent was not to develop one model that would be applied to all programs.
  2. The pilot focused only on the AOS level program.
  3. The outcome competencies included in the pilot focused on five (5) specific areas—all of which were derived from the Strategic Plan:
    • Technical Skills
    • English Literacy
    • Communication Skills
    • Mathematics
    • Critical Thinking
  4. To give the pilot a reasonable chance of success, the underlying objective was to identify a small number of key student learning outcomes within each of these five (5) domains that the Art and Computer Design faculty considered compelling and that fit the nature of the AOS program and the mission of NTID.

Based on this experience, the subcommittees recommended that faculty and department chairs work through the following steps in arriving at an initial assessment design:


[Figure: a box labeled "Student Learning Outcomes" at the top branches to two boxes below: "Continuous Program Improvement" (Curriculum, Pedagogy) and "Program Compliance" (Middle States Accreditation, Accountability to Federal Government).]

Growing out of the work of the College’s Outcomes Assessment Steering Committee and our pilot project, a set of organizing principles was generated to help program faculty get started in thinking about the various components of assessing student outcomes. These principles included the following operational advice:

  1. Set the goal for outcomes assessment on continuous improvement: assess for learning. The focus is not on a particular student or faculty member.
  2. Tie assessment to the major learning outcomes of the curriculum.
  3. Be sure performance measured is sufficient to represent the outcome assessed.
  4. Be sure outcomes assessed are sufficient to capture what is critical for graduates to do and know.
  5. Embed assessment, as much as possible, into existing occasions for assessment that occur in courses and program experiences; do not make this an add-on for faculty or students.
  6. Make assessment as authentic as possible. Consider using:
    1. Capstone seminar
    2. Portfolios
    3. Synthesizing class project
    4. Certifying skill test
  7. In addition, make assessment practices simple, manageable and easy to record.
  8. Maximize the use of existing data and information.
  9. Utilize quantitative and qualitative data.
  10. Make sure the results of what you measure can lead to program (course) improvement.
  11. Be realistic!

Program Level Assessment at RIT, including NTID

NTID fully participates in RIT’s program level assessment processes for both ongoing and newly proposed academic programs, following the requirements, templates and timeline established for all of the colleges of RIT. These processes are under the direction of the Office of Educational Effectiveness Assessment (EAA) in RIT’s Academic Affairs unit.

Academic Program Assessment is well supported through staff in the EAA, through the Student Learning Outcomes Assessment Committee, and through information, templates, and resources on the EAA website.

RIT’s Assessment Management System (Taskstream) provides a web-based hub for all assessment initiatives, including the annual Student Learning Outcomes Assessment Progress Report. These annual reports are aggregated, analyzed, and shared with the campus community by the EAA. NTID’s Program Level Outcomes Assessment findings and use of results are also summarized and published in NTID’s Annual Report to the federal Secretary of Education and shared with the NTID community.

Unit/Program Assessment Plans and Reports

Resource Information

Numerous assessment resources were used, and continue to be used, by departments and college units as part of the assessment development and implementation process. For ease of navigation and viewing, these resources are organized into the following categories: RIT Educational Effectiveness Assessment resources; printed materials; assessment frameworks; and assessment plans and related materials. To assist you in locating a desired resource, brief descriptions of each category are provided below.

  1. RIT Educational Effectiveness Assessment (EAA) - RIT’s EAA website has a robust section for Academic Program Assessment. It includes:
    • Academic program assessment overview, template and sample plans
    • Rationale for program level assessment
    • Resources to support the assessment of student learning
    • RIT’s Student Learning Outcomes Assessment Progress Report procedures and annual reports
    • New program and course level assessment guidance
    • FAQs for academic program assessment
    • Information about the RIT Student Learning Outcomes Assessment Committee

  2. Printed Materials - Itemizes a collection of texts and articles selected for their quality and applicability to the outcomes assessment development process.

    See the Resource Information > Bibliography of Useful Text Resources section below.


  3. Assessment Framework - Illustrates the general structure(s) recommended by assessment experts for development of assessment plans.

    Alternative Assessment and Electronic Portfolios, good links to sites describing ways to do alternative assessment, including portfolios.

    Field-tested Learning Assessment Guide (FLAG) for Science, Math, Engineering, and Technology Instructors, great resources for learning how to select assessment techniques to match course goals; each tool has been field-tested in college and university classrooms.

    National Center for Postsecondary Improvement at Stanford and University of Michigan, two websites that provide extensive information about what approaches colleges and universities are using in outcomes assessment.


  4. Assessment Plans and Other Related Materials - Describes, and frequently links to, documents and materials available via the Internet or from other institutions. Some excellent examples of other university outcomes assessment plans are included, as well as information on specific assessment components (e.g., how to write objectives, develop rubrics, etc.).

    Assessment Plans - Other Institutions

    North Carolina State University - Planning and Analysis, terrific website containing general resources, assessment handbooks, and assessment of specific skills or content. All resources are directly linked to other sites.

    Worcester Polytechnic Institute

    Writing Objectives (Course and Program)

    Writing Measurable Learning Outcomes was written at Georgia Tech to assist faculty in writing objectives. It is a terrific website: faculty can use it to assess their current course objectives, design new objectives, and determine ways to assess whether each objective is met.

    Developing Rubrics

    Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback and Promote Student Learning.

    Surveys

    University of Idaho Electrical Engineering Program: Senior Survey and Exit Interview Form

    University of Idaho Electrical Engineering Program: Alumni Survey Form

    Holistic Critical Thinking Scoring Rubric

(Copy available through the NTID Associate Dean for Curriculum and Special Projects, LBJ-2845.)

Bibliography of Useful Text Resources

  • AAHE Assessment Forum. (1997). Learning Through Assessment: A Resource Guide for Higher Education. Edited by Lion F. Gardiner, Caitlin Anderson, and Barbara L. Cambridge. Washington, DC: American Association for Higher Education (ISBN 1-56377-0004-0)
    Lists hundreds of resources on outcomes assessment—great tool.

  • Angelo, Thomas A. & Cross, K. Patricia. (1993). Classroom Assessment Techniques: A Handbook for College Teachers, Second Edition. San Francisco: Jossey-Bass (ISBN 1-55542-500-3) RIT Wallace 4th Floor: LB2822.75.A5 1993
    Lots of suggestions for how to do continual classroom assessment.

  • Assessment Update. This newsletter, published six times a year, “is dedicated to covering the latest developments in the rapidly evolving area of higher education assessment. Assessment Update offers all academic leaders up-to-date information and practical advice on conducting assessments in a range of areas, including student learning and outcomes, faculty instruction, academic programs and curricula, student services, and overall institutional functioning.”

  • Astin, Alexander W. (1996). Assessment for Excellence: The Philosophy and Practice of Assessment and Evaluation in Higher Education. Portland, OR: Oryx Press and American Council on Education (ISBN 0-89774-805-0)

  • Banta, T. W., Lund, J. P., Black, K. E., & Oblander, F. W. (1996). Assessment in practice: Putting principles to work on college campuses. San Francisco: Jossey-Bass.
    Includes over 80 campus examples of effective assessment practices.

  • Banta, T. W., Jones, E. A. and Black, K. E. (2009). Designing Effective Assessment: Principles and Profiles of Good Practice. San Francisco: Jossey-Bass (ISBN 978-0-470-39334-5).
    Contains profiles of good practice in several categories: general education, undergraduate academic majors, faculty/staff development, use of technology, program review, first-year experiences, student affairs, community colleges, and graduate programs.

  • Banta, T. W., Editor (2007). Assessing Student Achievement in General Education. Assessment Update Collections. San Francisco: Jossey-Bass.
    Good information about various strategies and measures for assessing student achievement in general education. Includes published articles on the use of both standardized tests and locally designed instruments. Approaches are described for assessment of individual generic skills (e.g., writing and critical thinking) as well as assessment methods that are applicable across knowledge and skill areas (e.g., capstone courses and course-embedded measures).

  • Bresciani, M. J. (2006). Outcomes-Based Academic and Co-Curricular Program Review. Sterling, VA: Stylus Publishing, LLC.
    A summary of best practices at universities that have had substantial experience and success linking outcomes assessment and program review. It addresses the steps institutions need to undertake for effective outcomes-based program review and the elements within each step that should be considered. It is a resource that can facilitate program review for both internal improvement and external accountability.

  • Erwin, T. Dary (1991). Assessing Student Learning and Development: A Guide to the Principles, Goals, and Methods of Determining College Outcomes. San Francisco: Jossey-Bass (ISBN 1-55542-325-6).
    Good overview of assessment process.

  • Gronlund, Norman E. (2000). How To Write and Use Instructional Objectives, Sixth Edition. Upper Saddle River, NJ: Prentice Hall (ISBN 0-13-886533-7). RIT Wallace 4th Floor: LB1027.4.G76 2000
    Excellent guide to writing instructional objectives in all learning domains.

  • Kuh, George D., and Others (1994). Student Learning Outside the Classroom: Transcending Artificial Boundaries. ERIC Digest. Washington, DC: ERIC Clearinghouse on Higher Education. (ED394443)

  • Kuh, George D., and Others (2015). Using Evidence of Student Learning to Improve Higher Education. National Institute for Learning Outcomes Assessment. San Francisco: Jossey-Bass. (ISBN 9781118903381)

  • Mager, Robert F. (1997) 3rd Ed. Measuring Instructional Results Or Got a Match? How to find out if your instructional objectives have been achieved. Atlanta, GA: The Center for Effective Performance, Inc. (ISBN 1-879618-16-8)
    Terrific blueprint for how to measure results against objectives.

  • Middle States Commission on Higher Education. (2007, Second Edition). Student Learning Assessment: Options and Resources. Philadelphia: Author.

  • Nichols, James O. (1989, 1991, 1995). A Practitioner’s Handbook for Institutional Effectiveness and Student Outcomes Assessment Implementation, Third Edition. New York: Agathon Press (ISBN 0-87586-113)
    Useful and practical ideas for how to develop outcomes assessment on department/unit level.

  • Nichols, James O. (1995). Assessment Case Studies: Common Issues in Implementation with Various Campus Approaches to Resolution. New York: Agathon Press (ISBN 0-87586-112-1)

  • Palomba, Catherine A. & Banta, Trudy W. (1999). Assessment essentials: Planning, implementing, improving. San Francisco: Jossey-Bass. (ISBN 0-7879-4180-8)
    Good overview of assessment process with chapters on assessing general education and on reporting and using results.

  • Schuh, John H. & Upcraft, M. Lee (2001). Assessment Practice in Student Affairs: An Applications Manual. San Francisco: Jossey-Bass (ISBN 0-7879-5053-X).

  • Suskie, Linda (2009, second edition). Assessing Student Learning: A Common Sense Guide. San Francisco, CA: Jossey-Bass (ISBN 978-0-470-28964-8)
    Excellent resource for planning and implementing program-wide or institutional assessment of student learning in higher education. Includes references and recommended readings for each chapter.

  • Suskie, Linda (Ed.) (2001). Assessment to Promote Deep Learning: Insights from AAHE’s 2000 and 1999 Assessment Conferences. Washington, DC: American Association for Higher Education (ISBN 1-56377-048-2)
    Summary of plenary session presentations from two national assessment conferences.

  • Suskie, Linda (2015). Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability. San Francisco, CA: Jossey-Bass (ISBN 978-1-118-76140-3)

  • Walvoord, B. E. (2004). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco: Jossey-Bass. RIT Wallace 4th Floor: LB2822.75.W35 2004
    A short, step-by-step guide of about 100 pages useful to anyone who might be involved in assessment. This book is intended to make assessment simple, cost-efficient, and useful to the institution, while meeting the requirements of accreditation agencies and review boards.

Middle States Association

The Commission on Higher Education of the Middle States Association of Colleges and Schools (MSA) places special emphasis on the assessment of student learning. Beginning with its Framework for Outcomes Assessment (1996), MSA has provided helpful guidance to institutions, and student learning outcomes assessment has become an increasingly important requirement for accreditation.

RIT responded early to this imperative by developing a university outcomes assessment framework. The framework included components of assessment required of all units. Each college developed its own plan, taking special care to ensure that the components of the university framework were included.

RIT's outcomes assessment planning and implementation efforts and NTID's initial plans and reports are described in the documents below:

The RIT Academic Program and Curriculum Management website contains an explanation and link to the MSCHE 2012 Periodic Review Report, which is focused primarily on assessment, and the subsequent Reviewer’s Report.

Middle States Standards for Accreditation and Requirements of Affiliation changed in 2015 and RIT became one of the pilot schools participating in a Collaborative Implementation Project during 2015-2016 followed by a Site Visit in 2017. An RIT website describes the process, timelines and Self-Study Design. Assessment is an integral part of several of the new standards.

Assessment Workshops and Conferences

Workshops and conferences on outcomes assessment were useful, and continue to be, as a way for faculty to learn about and share information on good assessment practice. Local and national workshops and conferences are listed below, many with links to the conference or workshop website.

  • Assessment Institute, Indiana University-Purdue University Indianapolis, 317-274-4111.
    "The Assessment Institute in Indianapolis is the nation's oldest and largest event focused exclusively on Outcomes Assessment in Higher Education and is designed to provide opportunities for: individuals and campus teams new to outcomes assessment to acquire foundation knowledge about the field; individuals who have worked as leaders in outcomes assessment to extend their knowledge and skills; those interested in outcomes assessment at any level to establish networks that serve as sources of support and expertise beyond the dates of the Institute."
  • Assessment Network of New York
    “ANNY was formed in the summer of 2010 by higher education professionals representing the State University of New York (SUNY), the City University of New York (CUNY), and New York’s private colleges and universities. The group’s primary objective was to create a state-wide network to stimulate dialogue and the exchange of ideas related to assessment across institutions and to provide support and resources to those committed to meaningful assessment of student learning and institutional effectiveness.”

    “The mission of ANNY is to advance the quality assessment of institutional effectiveness and to enhance the success of institutions of higher education and their students in New York State. ANNY works toward achieving this mission by:

    • Providing its members with exposure to best practices and emerging trends in assessment through conferences, newsletters, workshops, and other means
    • Creating networking opportunities for its members
    • Facilitating cost-effective professional development and consultation opportunities”
  • New England Educational Assessment Network (NEEAN)
    The mission of NEEAN is "to promote quality assessment of student learning and development and thus to enhance the effectiveness of institutions of higher education." This group shares information on assessment methods, problems and successes in higher education. It also plans conferences, makes available a referral directory and manages an on-line discussion group.
  • Association of American Colleges and Universities (AAC&U)
    AAC&U sponsors a variety of continuing programs (meetings, workshops, and summer institutes for campus teams) that bring together college educators from across institutional types, disciplines, and departments. AAC&U activities nurture the talents and creativity of higher education's current and future leaders. Attendees of recent meetings have described them as powerful and transformative, providing participants with innovative ideas and practices and shaping the direction of their educational reform efforts. At least one annual meeting is devoted to assessment.
    • 2017 General Education and Assessment: Design Thinking for Student Learning
      This conference “will focus on how educators throughout all sectors of higher education can address these and other issues by designing, implementing, and evaluating high-quality general education pathways that are effective for all students—especially those from traditionally underserved groups. Conference participants will examine how faculty members and all campus educators are being supported and rewarded for innovative work and leadership in designing new approaches to teaching and learning that prepare today’s students to be active participants in our nation’s democracy and to succeed in an ever-changing global society.”
  • ABET sponsored meetings and workshops
    • Program Assessment Workshops are one-day workshops, led by “highly experienced professionals with wide-ranging experience in assessment and evaluation,” which provide essential tools and guidance for implementing a successful assessment process.
    • Institute for the Development of Excellence in Assessment Leadership (IDEAL): "IDEAL provides a professional development opportunity for those responsible for leading their faculty in the development and implementation of a program assessment plan to improve student learning and document program effectiveness."
  • NASPA Assessment & Retention Conference, sponsored by NASPA (Student Affairs Administrators in Higher Education)
  • New Mexico Higher Education Assessment & Retention Conference, "Assessment in a Time of Tight Budgets".

    "The NMHEAR Conference provides an opportunity for faculty, staff and administrators to share ideas about assessment and retention initiatives that promote student success. This is a single conference with tracks on both assessment and retention, as well as tracks of combined interest. Pre- and post-conference workshops are offered."

  • Middle States Commission on Higher Education

Acknowledgements

Outcomes Assessment initiated and led by:
Christine M. Licata
Senior Associate Provost
Rochester Institute of Technology

Ongoing leadership and website maintenance:
Marianne S. Gustafson
Associate Dean for Curriculum and Special Projects
Office of Academic Affairs
National Technical Institute for the Deaf
585-475-7901
msgncs@rit.edu