University of Bristol: Regulations and Code of Practice for Taught Programmes

15. Marking Criteria and Scales

15.1   Marking criteria are designed to help students know what is expected of them. Marking criteria differ from model answers and from more prescriptive marking schemes, which assign a fixed proportion of the assessment mark to particular knowledge, understanding and/or skills. The glossary provides definitions of marking criteria, marking scheme and model answer.

15.2   Where there is more than one marker for a particular assessment task, schools should take steps to ensure consistency of marking. Programme-specific assessment criteria must be precise enough to ensure consistency of marking across candidates and markers, compatible with a proper exercise of academic judgment on the part of individual markers.

15.3   Markers are encouraged to use pro formas in order to show how they have arrived at their decision. Comments provided on pro formas should help candidates, internal markers and moderators, and external examiners to understand why a particular mark has been awarded. Schools should agree, in advance of the assessment, whether internal moderators have access to the pro formas / mark sheets completed by the first marker before, or only after, they mark a candidate’s work.

15.4   Detailed marking criteria for assessed group work, the assessment of class presentations, and self/peer (student) assessment must be established and made available to students and examiners.

15.5   In respect of group work, it is often desirable to award both a group and an individual mark, to ensure individuals’ contributions to the task are acknowledged. The weighting of the group and individual marks and how the marks are combined should be set out in the unit specification.

University generic marking criteria

15.6   The common University generic marking criteria, set out in Table 1, represent levels of attainment covering levels 4-7 of study. Establishing and applying criteria for assessment at level 8 should be managed by the school that owns the associated programme, in liaison with the faculty. A new set of level-specific University generic marking criteria (UoB only) has been agreed for introduction from 2024/25.

15.7   The common marking criteria are designed to be used for an individual piece of assessed student work. The descriptors give broad comparability of standards by level of study across all programmes as well as level of performance across the University. They reflect the QAA Framework for Higher Education Qualifications but need to be benchmarked against subject specific criteria at the programme level.

15.8   Faculties, with their constituent schools, must establish appropriately specific and detailed marking criteria which are congruent with the University-level criteria and, if appropriate, the level of study. All forms of programme-specific marking criteria must be approved by the Faculty.

Marking scales

15.9   Assessment must be marked and returned as an integer using one of the sanctioned marking scales, as follows:

  • 0-100 marking scale
  • 0-20 marking scale

or using a pass/fail marking scheme (see 10.33).

Any mark on the chosen marking scale can be used.

A five-point A-E marking scale is only available for programmes in the School of Education.

Standard setting in marking is permitted in programmes where it is a professional accreditation requirement.

15.10   Schools should utilise the marking scale that is best suited to the form of assessment. This and the marking criteria for the assessment should be established prior to its commencement.

15.11   Where the averaging of different component marks within an assessment, or of the outcomes of two markers, creates an assessment mark with a decimal point, markers should reconcile any significant difference in marks and make a deliberate academic decision as to the exact mark on the scale that should be awarded. Otherwise the mark will be rounded to the nearest integer and returned (if on the 0-20 marking scale, this should take place before converting to a mark on the 0-100 scale).
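Where no deliberate academic decision is taken, the fallback is arithmetic rounding. A minimal sketch of that rule in Python, assuming round-half-up behaviour for ties (the regulations say only "rounded to the nearest integer", so the treatment of ties is an assumption):

```python
from decimal import Decimal, ROUND_HALF_UP

def fallback_mark(marks):
    """Average component or marker scores and round to an integer mark.

    Assumes ties round up (eg, 63.5 -> 64); the regulations do not
    specify tie behaviour. On the 0-20 scale this rounding happens
    before conversion to the 0-100 scale.
    """
    average = sum(marks) / len(marks)
    return int(Decimal(str(average)).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

print(fallback_mark([62, 65]))  # two markers' marks; 63.5 rounds to 64
```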

Exceptions to the sanctioned marking scales

15.12   Highly structured assessments that are scored out of a total number less than 100 may be utilised where each mark can be justified in relation to those marks neighbouring it. In these cases, the mark must be translated onto the 0-100 point scale, mapped against the relevant marking criteria, and students informed of the use of this method in advance of the assessment in the appropriate medium (e.g. on Blackboard).

Reaching the ‘Unit Mark’ (see also Sections 29 and 37)

15.13   Marks gauged on the 0-20 scale should be translated to a point on the 0-100 scale before entry into the VLE to calculate the overall unit mark for the purposes of progression and classification (see Table 2).

15.14   The 0-20 point scale is a non-linear ordinal scale; for example, a mark on the 0-20 point scale IS NOT equivalent to a percentage arrived at by multiplying the mark by 5. Table 2 provides an equivalence relationship between the scales to enable the aggregation of marks from different assessment events into the overall unit mark, which will be a percentage. This is illustrated below for a notional unit.

In this example, the MCQ uses all points on the 0-100 scale whereas all the other assessments use the 0-20 point scale.

To achieve the final unit mark, each component mark is translated to the 0-100 scale and then weighted according to the unit specification, as sketched below.
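A sketch of that aggregation in Python. Table 2 is not reproduced on this page, so the 0-20 to 0-100 equivalences below are hypothetical placeholders rather than the University's actual table, and the component weights are purely illustrative:

```python
# Hypothetical 0-20 -> 0-100 equivalences standing in for Table 2;
# the real mapping is non-linear and defined by the University.
SCALE_20_TO_100 = {12: 58, 14: 65, 16: 72}

def overall_unit_mark(components):
    """components: (mark, scale, weight) tuples; weights sum to 1.0.

    Marks on the 0-20 scale are translated to the 0-100 scale first,
    then the weighted marks are summed to give a percentage unit mark.
    """
    total = 0.0
    for mark, scale, weight in components:
        if scale == "0-20":
            mark = SCALE_20_TO_100[mark]
        total += mark * weight
    return round(total)

# Notional unit: an MCQ on the 0-100 scale (40%) plus an essay and a
# presentation on the 0-20 scale (30% each).
print(overall_unit_mark([(68, "0-100", 0.4), (14, "0-20", 0.3), (12, "0-20", 0.3)]))  # 64
```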

15.15   The overall unit mark must be expressed as a percentage, as the University’s degree classification methodology is based on the percentage scale.

15.16   The final programme or taught component mark will be calculated by applying the agreed algorithm to the unit marks (see Sections 32 and 39).

TABLE 1: Generic Marking Criteria mapped against the three marking scales

TABLE 2: Relationship between the three marking scales

Academic English UK


Academic Presentations

Academic presentations are an integral part of university study and assessment. They may be given individually or as a group activity, but both require the key skills of planning and structuring key information. The key difference between an academic presentation and a general presentation is that an academic presentation is usually quite formal and includes academic research to evidence the ideas presented. The presentation will include references to credible sources and should demonstrate clearly your knowledge of, and familiarity with, the topic.


Giving a good academic presentation

  • Think about the aim of your presentation and what you want to achieve.
  • Concentrate on your audience: who they are and what they (want to) know.
  • Choose a topic that interests you: involvement and motivation are key to confidence.
  • Give your presentation a clear and logical organisation so that everyone can follow.
  • Present information visually: this adds interest to your talk and makes it easier to follow.
  • Practise giving your presentation until you are familiar with the key points; this way you may discover any potential problems and check the timing. Practice will also make you feel more confident.

Basic outline / structure

  • Introduction: introduce the topic, some basic background, thesis (your stance or argument).
  • Outline: provide basic bullet points on the key parts of the presentation.
  • Main body: divide the main body into sections.
  • Evaluation: always include evaluation. This can be a separate section or part of the main body.
  • Conclusion: summarise key points, restate the thesis and make a recommendation / suggestion / prediction.
  • Reference list: create one slide with all your sources.
  • Questions: be prepared to answer questions.

Delivery tips

  • Cope with nerves: breathe deeply; it calms you down and stops you from talking too quickly.
  • Control your voice: speak clearly and try to sound interesting by changing intonation and rhythm.
  • Watch your body language: try to give the impression that you are relaxed and confident.
  • Maintain eye contact with your audience: it keeps them interested in what you are saying. For this reason, you should not read.
  • Provide visual information, but do not give too many facts at a time. Give your audience enough time to take them in.
  • Keep attention by asking rhetorical questions.


Advanced Signposting Language – key language phrases for presentations

Presentation Speaking Criteria

This is a basic set of criteria to assess presentation speaking skills. It has three key criteria: language accuracy and range, fluency and pronunciation, and presentation and engagement. Level: B1/B2/C1.

An Introduction to Academic Presentations

This lesson is designed to introduce students to academic presentations. It contains information on how to plan, structure, and deliver an academic presentation. It includes a listening worksheet, presentation signposting phrases and a mini-presentation activity. Level: B1/B2/C1.

Presentation Phrases (Signposting Language)

A presentation phrases sheet: a range of standard English phrases suitable for greeting, structuring, giving examples, transitions, summarising and concluding. Free download.

What is an Academic Presentation?

This lecture discusses the key ideas of giving an academic presentation, including referencing, signposting, delivery and rehearsal. It comes with a 2-page listening worksheet with answers. A great introduction to giving a presentation. Video [7:00]. Level: B1/B2/C1.

Improve your Presentation PowerPoint Slides

These are the PPT slides from the video above. They explain how to present effective slides by using the correct fonts, focusing on key points and using animation to help audience engagement. The slides can be adapted to suit your style and method of teaching. Video [12:00]. Level: B1/B2/C1.


Create PPT slides people will remember – Duarte Inc [CEO]

Harvard Business Review: how to plan an informed presentation and what is needed to create really effective slides that keep an audience engaged. More HBR listening worksheets are available. Video [03:08]. Level: B2/C1.

A Basic PPT Presentation

This is a video example of a ‘basic’ presentation on domestic violence, using signposting language and a basic structure.


Academic Presentation Marking Criteria

A basic set of criteria that can be used to assess and grade a student’s presentation.


Am J Pharm Educ. 2010 Nov 10;74(9).

A Standardized Rubric to Evaluate Student Presentations

Michael J. Peeters, a Eric G. Sahloff, and Gregory E. Stone b

a University of Toledo College of Pharmacy
b University of Toledo College of Education

Objective. To design, implement, and assess a rubric to evaluate student presentations in a capstone doctor of pharmacy (PharmD) course.

Design. A 20-item rubric was designed and used to evaluate student presentations in a capstone fourth-year course in 2007-2008, and then revised and expanded to 25 items and used to evaluate student presentations for the same course in 2008-2009. Two faculty members evaluated each presentation.

Assessment. The Many-Facets Rasch Model (MFRM) was used to determine the rubric's reliability, quantify the contribution of evaluator harshness/leniency in scoring, and assess grading validity by comparing the current grading method with a criterion-referenced grading scheme. In 2007-2008, rubric reliability was 0.98, with a separation of 7.1 and 4 rating scale categories. In 2008-2009, MFRM analysis suggested 2 of 98 grades be adjusted to eliminate evaluator leniency, while a further criterion-referenced MFRM analysis suggested 10 of 98 grades should be adjusted.

Conclusion. The evaluation rubric was reliable and evaluator leniency appeared minimal. However, a criterion-referenced re-analysis suggested a need for further revisions to the rubric and evaluation process.

INTRODUCTION

Evaluations are important in the process of teaching and learning. In health professions education, performance-based evaluations are identified as having “an emphasis on testing complex, ‘higher-order’ knowledge and skills in the real-world context in which they are actually used.” 1 Objective structured clinical examinations (OSCEs) are a common, notable example. 2 On Miller's pyramid, a framework used in medical education for measuring learner outcomes, “knows” is placed at the base of the pyramid, followed by “knows how,” then “shows how,” and finally, “does” is placed at the top. 3 Based on Miller's pyramid, evaluation formats that use multiple-choice testing focus on “knows” while an OSCE focuses on “shows how.” Just as performance evaluations remain highly valued in medical education, 4 authentic task evaluations in pharmacy education may be better indicators of future pharmacist performance. 5 Much attention in medical education has been focused on reducing the unreliability of high-stakes evaluations. 6 Regardless of educational discipline, high-stakes performance-based evaluations should meet educational standards for reliability and validity. 7

PharmD students at University of Toledo College of Pharmacy (UTCP) were required to complete a course on presentations during their final year of pharmacy school and then give a presentation that served as both a capstone experience and a performance-based evaluation for the course. Pharmacists attending the presentations were given Accreditation Council for Pharmacy Education (ACPE)-approved continuing education credits. An evaluation rubric for grading the presentations was designed to allow multiple faculty evaluators to objectively score student performances in the domains of presentation delivery and content. Given the pass/fail grading procedure used in advanced pharmacy practice experiences, passing this presentation-based course and subsequently graduating from pharmacy school were contingent upon this high-stakes evaluation. As a result, the reliability and validity of the rubric used and the evaluation process needed to be closely scrutinized.

Each year, about 100 students completed presentations and at least 40 faculty members served as evaluators. With the use of multiple evaluators, a question of evaluator leniency often arose (ie, whether evaluators used the same criteria for evaluating performances or whether some evaluators graded easier or more harshly than others). At UTCP, opinions among some faculty evaluators and many PharmD students implied that evaluator leniency in judging the students' presentations significantly affected specific students' grades and ultimately their graduation from pharmacy school. While it was plausible that evaluator leniency was occurring, the magnitude of the effect was unknown. Thus, this study was initiated partly to address this concern over grading consistency and scoring variability among evaluators.

Because both students' presentation style and content were deemed important, each item of the rubric was weighted the same across delivery and content. However, because there were more categories related to delivery than content, an additional faculty concern was that students feasibly could present poor content but have an effective presentation delivery and pass the course.

The objectives for this investigation were: (1) to describe and optimize the reliability of the evaluation rubric used in this high-stakes evaluation; (2) to identify the contribution and significance of evaluator leniency to evaluation reliability; and (3) to assess the validity of this evaluation rubric within a criterion-referenced grading paradigm focused on both presentation delivery and content.

The University of Toledo's Institutional Review Board approved this investigation. This study investigated performance evaluation data for an oral presentation course for final-year PharmD students from 2 consecutive academic years (2007-2008 and 2008-2009). The course was taken during the fourth year (P4) of the PharmD program and was a high-stakes, performance-based evaluation. The goal of the course was to serve as a capstone experience, enabling students to demonstrate advanced drug literature evaluation and verbal presentations skills through the development and delivery of a 1-hour presentation. These presentations were to be on a current pharmacy practice topic and of sufficient quality for ACPE-approved continuing education. This experience allowed students to demonstrate their competencies in literature searching, literature evaluation, and application of evidence-based medicine, as well as their oral presentation skills. Students worked closely with a faculty advisor to develop their presentation. Each class (2007-2008 and 2008-2009) was randomly divided, with half of the students taking the course and completing their presentation and evaluation in the fall semester and the other half in the spring semester. To accommodate such a large number of students presenting for 1 hour each, it was necessary to use multiple rooms with presentations taking place concurrently over 2.5 days for both the fall and spring sessions of the course. Two faculty members independently evaluated each student presentation using the provided evaluation rubric. The 2007-2008 presentations involved 104 PharmD students and 40 faculty evaluators, while the 2008-2009 presentations involved 98 students and 46 faculty evaluators.

After vetting through the pharmacy practice faculty, the initial rubric used in 2007-2008 focused on describing explicit, specific evaluation criteria such as amounts of eye contact, voice pitch/volume, and descriptions of study methods. The evaluation rubric used in 2008-2009 was similar to the initial rubric, but with 5 items added (Figure 1). The evaluators rated each item (eg, eye contact) based on their perception of the student’s performance. The 25 rubric items had equal weight (ie, 4 points each), but each item received a rating from the evaluator of 1 to 4 points. Thus, only 4 rating categories were included, as has been recommended in the literature. 8 However, some evaluators created an additional 3 rating categories by marking lines in between the 4 ratings to signify half points, ie, 1.5, 2.5, and 3.5. For example, for the “notecards/notes” item in Figure 1, a student looked at her notes sporadically during her presentation, but not distractingly nor enough to warrant a score of 3 in the faculty evaluator’s opinion, so a 3.5 was given. Thus, a 7-category rating scale (1, 1.5, 2, 2.5, 3, 3.5, and 4) was analyzed. Each independent evaluator’s ratings for the 25 items were summed to form a score (0-100%). The 2 evaluators’ scores then were averaged and a letter grade was assigned based on the following scale: >90% = A, 80%-89% = B, 70%-79% = C, <70% = F.
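To make the scoring arithmetic concrete: 25 items at 4 points each give a maximum of 100, so an evaluator's summed ratings read directly as a percentage. A minimal sketch with invented ratings follows; the letter thresholds are applied as printed above, under which a score of exactly 90% reads as a B:

```python
def evaluator_score(ratings):
    """Sum one evaluator's 25 item ratings (1-4 points each).

    25 items x 4 points = 100 possible points, so the summed ratings
    read directly as a percentage score.
    """
    assert len(ratings) == 25
    return sum(ratings)

def course_grade(ratings_a, ratings_b):
    """Average the two evaluators' scores and assign the letter grade."""
    score = (evaluator_score(ratings_a) + evaluator_score(ratings_b)) / 2
    if score > 90:
        return score, "A"
    elif score >= 80:
        return score, "B"
    elif score >= 70:
        return score, "C"
    return score, "F"

# Invented ratings: one evaluator slightly more lenient than the other.
marks_a = [4] * 15 + [3] * 10   # sums to 90
marks_b = [4] * 10 + [3] * 15   # sums to 85
print(course_grade(marks_a, marks_b))  # (87.5, 'B')
```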

Figure 1. Rubric used to evaluate student presentations given in a 2008-2009 capstone PharmD course.

EVALUATION AND ASSESSMENT

Rubric Reliability

To measure rubric reliability, iterative analyses were performed on the evaluations using the Many-Facets Rasch Model (MFRM) following the 2007-2008 data collection period. While Cronbach’s alpha is the most commonly reported coefficient of reliability, its single-number reporting without supplementary information can provide incomplete information about reliability. 9-11 Due to its formula, Cronbach’s alpha can be increased by simply adding more repetitive rubric items or having more rating scale categories, even when no further useful information has been added. The MFRM reports separation, which is calculated differently from Cronbach’s alpha and is another source of reliability information. Unlike Cronbach’s alpha, separation does not appear enhanced by adding further redundant items. From a measurement perspective, a higher separation value is better than a lower one because students are being divided into meaningful groups after measurement error has been accounted for. Separation can be thought of as the number of units on a ruler: the more units the ruler has, the larger the range of performance levels that can be measured among students. For example, a separation of 4.0 suggests 4 graduations such that a grade of A is distinctly different from a grade of B, which in turn is different from a grade of C or of F. In measuring performances, a separation of 9.0 is better than 5.5, just as a separation of 7.0 is better than a 6.5; a higher separation coefficient suggests that student performance potentially could be divided into a larger number of meaningfully separate groups.
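Footnote b of Table 1 below defines separation as the standard deviation of the measures divided by the average measurement error. A rough sketch of that ratio, following the common Rasch convention of estimating the "true" spread by removing error variance from the observed variance (the numbers are invented):

```python
import math

def separation(measures, standard_errors):
    """Separation = 'true' spread of the measures / average measurement error.

    Follows the common Rasch convention: the observed variance of the
    measures is reduced by the mean error variance to estimate the
    true standard deviation before taking the ratio.
    """
    n = len(measures)
    mean = sum(measures) / n
    observed_var = sum((m - mean) ** 2 for m in measures) / n
    error_var = sum(se ** 2 for se in standard_errors) / n
    true_sd = math.sqrt(max(observed_var - error_var, 0.0))
    return true_sd / math.sqrt(error_var)

# Invented numbers: well-spread measures with small errors give a high
# separation, ie, several distinct performance strata.
print(round(separation([-2.0, -1.0, 0.0, 1.0, 2.0], [0.2] * 5), 1))  # 7.0
```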

The rating scale can have substantial effects on reliability, 8 while description of how a rating scale functions is a unique aspect of the MFRM. With analysis iterations of the 2007-2008 data, the number of rating scale categories was collapsed consecutively until improvements in reliability and/or separation were no longer found. The last iteration that led to improvements in reliability or separation was deemed to give an optimal rating scale for this evaluation rubric.

In the 2007-2008 analysis, iterations of the data were run through the MFRM. While only 4 rating scale categories had been included on the rubric, because some faculty members inserted 3 in-between categories, 7 categories had to be included in the analysis. This initial analysis based on a 7-category rubric provided a reliability coefficient (similar to Cronbach’s alpha) of 0.98, while the separation coefficient was 6.31. The separation coefficient denoted 6 distinctly separate groups of students based on the items. Rating scale categories were collapsed, with “in-between” categories included in adjacent full-point categories. Table 1 shows the reliability and separation for the iterations as the rating scale was collapsed. As shown, the optimal evaluation rubric maintained a reliability of 0.98, but separation improved to 7.10, or 7 distinctly separate groups of students based on the items. Another distinctly separate group was added through a reduction in the rating scale while no change was seen in Cronbach’s alpha, even though the number of rating scale categories was reduced. Table 1 describes the stepwise, sequential pattern across the final 4 rating scale categories analyzed. Informed by the 2007-2008 results, the 2008-2009 evaluation rubric (Figure 1) used 4 rating scale categories and reliability remained high.

Table 1. Evaluation Rubric Reliability and Separation with Iterations While Collapsing Rating Scale Categories

a Reliability coefficient of variance in rater response that is reproducible (ie, Cronbach’s alpha).

b Separation is a coefficient of item standard deviation divided by average measurement error and is an additional reliability coefficient.

c Optimal number of rating scale categories based on the highest reliability (0.98) and separation (7.1) values.

Evaluator Leniency

Described by Fleming and colleagues over half a century ago, 6 harsh raters (ie, hawks) and lenient raters (ie, doves) have also been demonstrated to be an issue in more recent studies. 12-14 Shortly after the 2008-2009 data were collected, the evaluations by multiple faculty evaluators were collated and analyzed in the MFRM to identify possible inconsistent scoring. While traditional interrater reliability does not deal with this issue, the MFRM had been used previously to illustrate evaluator leniency on licensing examinations for medical students and medical residents in the United Kingdom. 13 Thus, accounting for evaluator leniency may prove important to grading consistency (and reliability) in a course using multiple evaluators. Along with identifying evaluator leniency, the MFRM also corrected for this variability. For comparison, course grades were calculated by summing the evaluators’ actual ratings (as discussed in the Design section) and compared with the MFRM-adjusted grades to quantify the degree of evaluator leniency occurring in this evaluation.

Measures created from the data analysis in the MFRM were converted to percentages using a common linear test-equating procedure involving the mean and standard deviation of the dataset. 15 To these percentages, student letter grades were assigned using the same traditional method used in 2007-2008 (ie, 90% = A, 80%-89% = B, 70%-79% = C, <70% = F). Letter grades calculated using the revised rubric and the MFRM were then compared to letter grades calculated using the previous rubric and course grading method.
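A plausible sketch of such a linear equating step: each Rasch measure keeps its z-score and is rescaled to a target mean and standard deviation. The paper does not report the constants the authors used, so the target values here are assumptions:

```python
def linear_equate(measures, target_mean, target_sd):
    """Linearly rescale Rasch measures (logits) onto a percentage metric.

    Each measure keeps its z-score; only the mean and standard deviation
    change. target_mean and target_sd are assumed values for illustration.
    """
    n = len(measures)
    mean = sum(measures) / n
    sd = (sum((m - mean) ** 2 for m in measures) / n) ** 0.5
    return [target_mean + target_sd * (m - mean) / sd for m in measures]

def letter(pct):
    """Traditional letter-grade scale used for the course."""
    if pct >= 90:
        return "A"
    if pct >= 80:
        return "B"
    if pct >= 70:
        return "C"
    return "F"

# Invented logit measures rescaled to an assumed mean of 85% and SD of 5.
for pct in linear_equate([-1.2, 0.3, 1.8], target_mean=85, target_sd=5):
    print(round(pct, 1), letter(pct))  # 78.9 C / 85.0 B / 91.1 A
```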

In the analysis of the 2008-2009 data, the interrater reliability for the letter grades when comparing the 2 independent faculty evaluations for each presentation was 0.98 by Cohen’s kappa. However, using the 3-facet MFRM revealed significant variation in grading. The interaction of evaluator leniency with student ability and item difficulty was significant, with a chi-square test giving p < 0.01. As well, the MFRM showed a reliability of 0.77, with a separation of 1.85 (ie, almost 2 distinct groups of evaluators). The MFRM student ability measures were scaled to letter grades and compared with course letter grades. As a result, 2 B’s became A’s, so evaluator leniency accounted for a 2% change in letter grades (ie, 2 of 98 grades).

Validity and Grading

Explicit criterion-referenced standards for grading are recommended for higher evaluation validity. 3,16-18 The course coordinator completed 3 additional evaluations of a hypothetical student presentation, rating the minimal criteria expected to describe each of an A, B, or C letter grade performance. These evaluations were placed with the other 196 evaluations (2 evaluators × 98 students) from 2008-2009 into the MFRM, with the resulting analysis report giving specific cutoff percentage scores for each letter grade. Unlike the traditional scoring method of assigning all items an equal weight, the MFRM ordered evaluation items from those more difficult for students (given more weight) to those less difficult for students (given less weight). These criterion-referenced letter grades were compared with the grades generated using the traditional grading process.

When the MFRM data were rerun with the criterion-referenced evaluations added into the dataset, a 10% change was seen with letter grades (ie, 10 of 98 grades). When the 10 letter grades were lowered, 1 was below a C, the minimum standard, and suggested a failing performance. Qualitative feedback from faculty evaluators agreed with this suggested criterion-referenced performance failure.

Measurement Model

Within modern test theory, the Rasch Measurement Model maps examinee ability with evaluation item difficulty. Items are not arbitrarily given the same value (ie, 1 point) but vary based on how difficult or easy the items were for examinees. The Rasch measurement model has been used frequently in educational research, 19 by numerous high-stakes testing professional bodies such as the National Board of Medical Examiners, 20 and also by various state-level departments of education for standardized secondary education examinations. 21 The Rasch measurement model itself has rigorous construct validity and reliability. 22 A 3-facet MFRM model allows an evaluator variable to be added to the student ability and item difficulty variables that are routine in other Rasch measurement analyses. Just as multiple regression accounts for additional variables in analysis compared to a simple bivariate regression, the MFRM is a multiple variable variant of the Rasch measurement model and was applied in this study using the Facets software (Linacre, Chicago, IL). The MFRM is ideal for performance-based evaluations with the addition of independent evaluator/judges. 8 , 23 From both yearly cohorts in this investigation, evaluation rubric data were collated and placed into the MFRM for separate though subsequent analyses. Within the MFRM output report, a chi-square for a difference in evaluator leniency was reported with an alpha of 0.05.

The presentation rubric was reliable. Results from the 2007-2008 analysis illustrated that the number of rating scale categories impacted the reliability of this rubric and that use of only 4 rating scale categories appeared best for measurement. While a 10-point Likert-like scale may commonly be used in patient care settings, such as in quantifying pain, most people cannot process more than 7 points or categories reliably. 24 Presumably, when more than 7 categories are used, the categories beyond 7 either are not used or are collapsed by respondents into fewer than 7 categories. Five-point scales commonly are encountered, but use of an odd number of categories can be problematic to interpretation and is not recommended. 25 Responses using the middle category could denote a true perceived average or neutral response, or responder indecisiveness, or even confusion over the question. Therefore, removing the middle category appears advantageous and is supported by our results.

With 2008-2009 data, the MFRM identified evaluator leniency with some evaluators grading more harshly while others were lenient. Evaluator leniency was indeed found in the dataset but only a couple of changes were suggested based on the MFRM-corrected evaluator leniency and did not appear to play a substantial role in the evaluation of this course at this time.

Performance evaluation instruments are either holistic or analytic rubrics. 26 The evaluation instrument used in this investigation exemplified an analytic rubric, which elicits specific observations and often demonstrates high reliability. However, Norman and colleagues point out a conundrum where drastically increasing the number of evaluation rubric items (creating something similar to a checklist) could augment a reliability coefficient though it appears to dissociate from that evaluation rubric’s validity. 27 Validity may be more than the sum of behaviors on evaluation rubric items. 28 Having numerous, highly specific evaluation items appears to undermine the rubric’s function. With this investigation’s evaluation rubric and its numerous items for both presentation style and presentation content, equal numeric weighting of items can in fact allow student presentations to receive a passing score while falling short of the course objectives, as was shown in the present investigation. As opposed to analytic rubrics, holistic rubrics often demonstrate lower yet acceptable reliability, while offering a higher degree of explicit connection to course objectives. A summative, holistic evaluation of presentations may improve validity by allowing expert evaluators to provide their “gut feeling” as experts on whether a performance is “outstanding,” “sufficient,” “borderline,” or “subpar” for dimensions of presentation delivery and content. A holistic rubric that integrates with criteria of the analytic rubric (Figure 1) for evaluators to reflect on, but maintains a summary, overall evaluation for each dimension (delivery/content) of the performance, may allow the benefits of each type of rubric to be used advantageously. This finding has been demonstrated with OSCEs in medical education, where checklists for completed items (ie, yes/no) at an OSCE station have been successfully replaced with a few reliable global impression rating scales. 29-31

Alternatively, and because the MFRM model was used in the current study, an item-weighting approach could be used with the analytic rubric. That is, item weighting based on the difficulty of each rubric item could suggest how many points should be given for each rubric item, eg, some items would be worth 0.25 points, while others would be worth 0.5 points or 1 point (Table 2). As could be expected, the more complex the rubric scoring becomes, the less feasible the rubric is to use. This was the main reason why this revision approach was not chosen by the course coordinator following this study. As well, it does not address the conundrum that the performance may be more than the summation of behavior items in the Figure 1 rubric. This current study cannot suggest which approach would be better as each would have its merits and pitfalls.

Table 2. Rubric Item Weightings Suggested in the 2008-2009 Data Many-Facet Rasch Measurement Analysis

Regardless of which approach is used, alignment of the evaluation rubric with the course objectives is imperative. Objectivity has been described as a general striving for value-free measurement (ie, free of the evaluator’s interests, opinions, preferences, sentiments). 27 This is a laudable goal pursued through educational research. Strategies to reduce measurement error, termed objectification, may not necessarily lead to increased objectivity. 27 The current investigation suggested that a rubric could become too explicit if all the possible areas of an oral presentation that could be assessed (ie, objectification) were included. This appeared to dilute the effect of important items and lose validity. A holistic rubric that is more straightforward and easier to score quickly may be less likely to lose validity (ie, “lose the forest for the trees”), though operationalizing a revised rubric would need to be investigated further. Similarly, weighting items in an analytic rubric based on their importance and difficulty for students may alleviate this issue; however, adding up individual items might prove arduous. While the rubric in Figure 1, which has evolved over the years, is the subject of ongoing revisions, it appears a reliable rubric on which to build.

The major limitation of this study involves the observational method that was employed. Although the 2 cohorts were from a single institution, investigators did use a completely separate class of PharmD students to verify initial instrument revisions. Optimizing the rubric’s rating scale involved collapsing data from misuse of a 4-category rating scale (expanded by a few of the evaluators to 7 categories) into 4 independent categories without middle ratings. As a result of the study findings, no actual grading adjustments were made for students in the 2008-2009 presentation course; however, adjustments using the MFRM have been suggested by Roberts and colleagues. 13 Since 2008-2009, the course coordinator has made further small revisions to the rubric based on feedback from evaluators, but these have not yet been re-analyzed with the MFRM.

The evaluation rubric used in this study for student performance evaluations showed high reliability and the data analysis agreed with using 4 rating scale categories to optimize the rubric's reliability. While lenient and harsh faculty evaluators were found, variability in evaluator scoring affected grading in this course only minimally. Aside from reliability, issues of validity were raised using criterion-referenced grading. Future revisions to this evaluation rubric should reflect these criterion-referenced concerns. The rubric analyzed herein appears a suitable starting point for reliable evaluation of PharmD oral presentations, though it has limitations that could be addressed with further attention and revisions.

ACKNOWLEDGEMENT

Author contributions— MJP and EGS conceptualized the study, while MJP and GES designed it. MJP, EGS, and GES gave educational content foci for the rubric. As the study statistician, MJP analyzed and interpreted the study data. MJP reviewed the literature and drafted a manuscript. EGS and GES critically reviewed this manuscript and approved the final version for submission. MJP accepts overall responsibility for the accuracy of the data, its analysis, and this report.

Academic Development Centre

Oral presentations

Using oral presentations to assess learning

Introduction

Oral presentations are a form of assessment that calls on students to use the spoken word to express their knowledge and understanding of a topic. They allow capture not only of the research that students have done but also of a range of cognitive and transferable skills.

Different types of oral presentations

A common format is in-class presentations on a prepared topic, often supported by visual aids in the form of PowerPoint slides or a Prezi, with a standard length that varies between 10 and 20 minutes. In-class presentations can be performed individually or in a small group and are generally followed by a brief question and answer session.

Oral presentations are often combined with other modes of assessment; for example oral presentation of a project report, oral presentation of a poster, commentary on a practical exercise, etc.

Also common is the use of PechaKucha, a fast-paced presentation format consisting of a fixed number of slides that are set to move on every twenty seconds (Hirst, 2016). The original version was 20 slides, resulting in a presentation of 6 minutes and 40 seconds; however, you can reduce this to 10 or 15 slides to suit group size or topic complexity and coverage. One of the advantages of this format is that you can fit a large number of presentations into a short period of time and everyone follows the same rules. It is also a format that enables students to express their creativity through the appropriate use of images on their slides to support their narrative.

When deciding which format of oral presentation best allows your students to demonstrate the learning outcomes, it is also useful to consider which format closely relates to real world practice in your subject area.

What can oral presentations assess?

The key questions to consider include:

  • what will be assessed?
  • who will be assessing?

This form of assessment places the emphasis on students’ capacity to arrange and present information in a clear, coherent and effective way, rather than on their capacity to find relevant information and sources. However, as noted above, it could be used to assess both.

Oral presentations, depending on the task set, can be particularly useful in assessing:

  • knowledge skills and critical analysis
  • applied problem-solving abilities
  • ability to research and prepare persuasive arguments
  • ability to generate and synthesise ideas
  • ability to communicate effectively
  • ability to present information clearly and concisely
  • ability to present information to an audience with appropriate use of visual and technical aids
  • time management
  • interpersonal and group skills.

When using this method you are likely to aim to assess a combination of the above to the extent specified by the learning outcomes. It is also important that all aspects being assessed are reflected in the marking criteria.

In the case of group presentation you might also assess:

  • level of contribution to the group
  • ability to contribute without dominating
  • ability to maintain a clear role within the group.

See also the ‘Assessing group work’ section for further guidance.

As with all of the methods described in this resource, it is important to ensure that students are clear about what they are expected to do and understand the criteria that will be used to assess them. (See Ginkel et al, 2017 for a useful case study.)

Although the use of oral presentations is increasingly common in higher education, some students might not be familiar with this form of assessment. It is therefore important to provide opportunities to discuss expectations and to practise in a safe environment, for example by building short presentation activities with discussion and feedback into class time.

Individual or group

It is not uncommon to assess group presentations. If you are opting for this format:

  • will you assess outcome or process, or both?
  • how will you distribute tasks and allocate marks?
  • will group members contribute to the assessment by reporting group process?

Assessed oral presentations are often performed before a peer audience - either in-person or online. It is important to consider what role the peers will play and to ensure they are fully aware of expectations, ground rules and etiquette whether presentations take place online or on campus:

  • will the presentation be peer assessed? If so how will you ensure everyone has a deep understanding of the criteria?
  • will peers be required to interact during the presentation?
  • will peers be required to ask questions after the presentation?
  • what preparation will peers need to be able to perform their role?
  • how will the presence and behaviour of peers impact on the assessment?
  • how will you ensure equality of opportunities for students who are asked fewer/more/easier/harder questions by peers?

Hounsell and McCune (2001) note the importance of the physical setting and layout as one of the conditions which can impact on students’ performance; it is therefore advisable to offer students the opportunity to familiarise themselves with the space in which the presentations will take place and to agree layout of the space in advance.

Good practice

As a summary to the ideas above, Pickford and Brown (2006, p.65) list good practice, based on a number of case studies integrated in their text, which includes:

  • make explicit the purpose and assessment criteria
  • use the audience to contribute to the assessment process
  • record [audio / video] presentations for self-assessment and reflection (you may have to do this for QA purposes anyway)
  • keep presentations short
  • consider bringing in externals from commerce / industry (to add authenticity)
  • consider banning notes / audio visual aids (this may help if AI-generated/enhanced scripts run counter to intended learning outcomes)
  • encourage students to engage in formative practice with peers (including formative practice of giving feedback)
  • use a single presentation to assess synoptically; linking several parts / modules of the course
  • give immediate oral feedback
  • link back to the learning outcomes that the presentation is assessing; process or product.

Neumann, in Havemann and Sherman (eds., 2017), provides a useful case study in chapter 19, Student Presentations at a Distance; Grange & Enriquez provide another in chapter 22, Moving from an Assessed Presentation during Class Time to a Video-based Assessment in a Spanish Culture Module.

Diversity & inclusion

Some students might feel more comfortable or be better able to express themselves orally than in writing, and vice versa. Others might have particular difficulties expressing themselves verbally, due for example to hearing or speech impediments, anxiety, personality, or language abilities. As with any other form of assessment, it is important to be aware of elements that potentially put some students at a disadvantage and to consider solutions that benefit all students.

Academic integrity

Oral presentations present relatively low risk of academic misconduct if they are presented synchronously and in class. Avoiding the use of a script can ensure that students are not simply reading out someone else’s text or an AI-generated script, whilst the questions posed at the end allow assessors to gauge the depth of understanding of the topic and structure presented. (See the further guidance on academic integrity.)

Recorded presentations (asynchronous) may be produced with help, so additional mechanisms to ensure that the work presented is the student’s own may be beneficial, such as a reflective account or a live Q&A session. AI can create scripts, slides and presentations, copy real voices relatively convincingly, and create video avatars; these tools can enable students to create professional video content and may make this sort of assessment more accessible. The desirability of such tools will depend upon what you are aiming to assess and how you will evaluate student performance.

Student and staff experience

Oral presentations provide a useful opportunity for students to practise skills which are required in the world of work. Through the process of preparing for an oral presentation, students can develop their ability to synthesise information and present to an audience. To improve authenticity, the assessment might involve an actual audience, realistic timeframes for preparation and collaboration between students, and might be situated in realistic contexts, which could include the use of AI tools.

As mentioned above, it is important to remember that the stress of presenting information to a public audience might put some students at a disadvantage. Similarly, non-native speakers might perceive language as an additional barrier. AI may reduce some of these challenges, but it will be important to ensure equal access to these tools to avoid disadvantaging students. Discussing criteria and expectations with your students, providing a clear structure, and ensuring opportunities to practise and receive feedback will benefit all students.

Some disadvantages of oral presentations include:

  • anxiety - students might feel anxious about this type of assessment and this might impact on their performance
  • time - oral assessment can be time consuming both in terms of student preparation and performance
  • time - to develop skill in designing slides if they are required; we cannot assume knowledge of PowerPoint etc.
  • lack of anonymity and potential bias on the part of markers.

From a student perspective, preparing for an oral presentation can be time-consuming, especially if the presentation is supported by slides or a poster, which also require careful design.

From a teacher’s point of view, presentations are generally assessed on the spot and feedback is immediate, which reduces marking time. It is therefore essential to have clearly defined marking criteria which help assessors to focus on the intended learning outcomes rather than simply on presentation style.

Useful resources

Joughin, G. (2010). A Short Guide to Oral Assessment. Leeds Metropolitan University / University of Wollongong. http://eprints.leedsbeckett.ac.uk/2804/

Race, P. and Brown, S. (2007). The Lecturer’s Toolkit: A Practical Guide to Teaching, Learning and Assessment. 2nd edition. London: Routledge.


USC Center for Excellence in Teaching

Group presentation rubric

This is a grading rubric an instructor uses to assess students’ work on this type of assignment. It is a sample rubric that needs to be edited to reflect the specifics of a particular assignment. Students can self-assess using the rubric as a checklist before submitting their assignment.


University of Leeds

Presentations: posters

Effective poster presentations

An effective poster presentation and a good oral presentation share many qualities: it is important to know your audience and their needs, to be confident of your purpose, and to convey your key message with impact. Poster presentations challenge you to communicate your research in a different way from oral presentations or written assignments.

Before you start, make sure you read the marking and assessment guidelines and follow them.

Here are some key things that make an effective poster:

  • Attractive visual impact to entice people to read it
  • A compelling title, interesting and intriguing enough to capture your audience’s attention
  • A clear message that differentiates your research poster from others
  • Good use of images and diagrams – a picture paints a thousand words in a restricted space
  • An obvious reading order
  • Audience interaction – is there something you want your audience to do, or think about, as a result of reading your poster?

This guide will cover planning and designing your poster presentation. We will also consider how poster presentations are assessed.

The University of Edinburgh

Institute for Academic Development

Presentations and posters

Guidance and tips for effective oral and visual presentations.

Academic presentations

Presenting your work allows you to demonstrate your knowledge of and familiarity with your subject. Presentations can vary from being formal, like a mini lecture, to more informal, such as summarising a paper in a tutorial. You may have a specialist audience made up of your peers, lecturers or research practitioners, or a wider audience at a conference or event. Sometimes you will be asked questions. Academic presentations may be a talk with slides or a poster presentation, and they may be assessed. Presentations may be individual or collaborative group work.

A good presentation will communicate your main points to an audience clearly, concisely and logically. Your audience doesn’t know what it is you are trying to say, so you need to guide them through your argument.

There are a few key points that you should consider with any sort of presenting:

  • What is the format? Is it a poster, a talk with visual material or a video?
  • What is the purpose? Is it to summarise a topic; report the results of an experiment; justify your research approach?
  • Who is your audience? Are they from your tutorial group, course or is it a wider audience?
  • What content needs to be included? Do you need to cover everything, just one topic or a particular aspect? How much detail is expected?
  • How should it be organised? This is often the trickiest part of designing a presentation and can take a few attempts.

Planning a presentation

Different people take different approaches to presentations. Some may start by doing some reading and research; others prefer to draft an outline structure first.

To make an effective start, check your course materials for the format you need to use (e.g. handbooks and Learn pages for style guidelines). If it is an oral presentation, how long do you have?  If it will be assessed, have a look at the marking criteria so you know how you will be marked. (If you do not use the required formatting you may be penalised.) Do you need to allow time for questions?

One way to think about the content and draft a rough structure of your presentation is to divide it into a beginning, middle and end.

  • The beginning: How are you going to set the scene for your audience and set out what they can expect to gain from your presentation? This section should highlight the key topic(s) and give any necessary background. How much background depends on your audience, for example your peers might need less of an introduction to a topic than other audiences. Is there a central question and is it clear? If using slides, can it be added as a header on subsequent slides so that it is always clear what you are discussing?
  • The middle: How are you going to tell the story of your work? This section should guide your audience through your argument, leading them to your key point(s). Remember to include any necessary evidence in support. You might also want to include or refer to relevant methods and materials.
  • The end: What is your conclusion or summary? This section should briefly recap what has been covered in the presentation and give the audience the final take-home message(s). Think about the one thing you want someone to remember from your talk or poster. It is usually also good practice to include a reference or bibliography slide listing your sources.

Alternatively, you could start at the end and think about the one point you want your audience to take away from your presentation. Then you can work backwards to decide what needs to go in the other sections to build your argument.

Presentation planner worksheet (pdf)

Presentation planner worksheet (Word docx)

Presentation planner worksheet (Word rtf)

Using the right language can really help your audience follow your argument and also helps to manage their expectations.

Guiding your audience (pdf)  

Guiding your audience (Word rtf)

Oral presentations – practise, practise, practise!

Giving a talk can be daunting. If you have a spoken presentation to give, with or without slides, make sure you have time to rehearse it several times.

Firstly, this is really good at helping you overcome any nerves as you’ll know exactly what you are going to say. It will build your confidence.

Secondly, saying something aloud is an effective way to check for sense, structure and flow. If it is difficult to say, or doesn’t sound right, then the audience may find it difficult to follow what you are trying to say.

Finally, practising helps you know how long your presentation will take. If your presentation is being assessed, you may be penalised for going over time as that would be unfair to other presenters (it is like going over your word count).  

If you can, find out what resources and equipment you will have when you present. It is usually expected that presenters will wear or use a microphone so that everyone can hear. But you will still need to remember to project your voice and speak clearly. Also think about how you are going to use your visual material.

IS Creating accessible materials - PowerPoint presentations

IS LinkedIn Learning - online skills development

Making a video

There is no need to use expensive specialist equipment to make a recorded presentation. The Media Hopper Create platform allows film makers to create, store, share and publish their media content easily. You can create presentations using the Desktop Recorder on a PC or Mac.

All University of Edinburgh students are provided with an account on the Media Hopper service allowing you to record and upload media to your personal space and publish to channels. 

You can also use your mobile phone or tablet to make a video presentation. The DIY Film School is an online course covering the basics of shooting video on a mobile device, filming outdoors and indoors and how to get the best audio. Some materials from LinkedIn Learning are relevant to the DIY Film School and include editing advice.

IS Media Hopper Create

IS DIY Film School online course

IS LinkedIn Learning and the DIY Film School

Poster presentations

A poster is a way of visually conveying information about your work. It is meant to be a taster or overview highlighting your key points or findings, not an in-depth explanation and discussion. Your poster should communicate your point(s) effectively without you being there to explain it.

The trickiest thing with poster presentations can be the limited space and word count you have. You will need to think critically about what is most important to present.

If the poster is assessed, or is for an event such as a conference, there may be a size and format which you need to follow (e.g. A1 portrait or A0 landscape). Your title should be clear.  Aim to make your poster as accessible as possible by considering the type size and font, colours and layout. It is usually good practice to include your name and email address so people know who you are and how to contact you.
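If you are setting up your poster file before printing, it helps to work out the pixel dimensions that match the required paper size. The sketch below is purely illustrative: the millimetre dimensions are the standard ISO 216 paper sizes, and 300 DPI is simply a common print resolution, not a requirement of any marking scheme or printing service.

```python
# Illustrative only: convert an ISO 216 paper size and orientation
# into pixel dimensions for print. 300 DPI is a common print
# resolution, not a requirement of any particular assessment.

MM_PER_INCH = 25.4

# Standard ISO 216 sizes in millimetres (width, height) in portrait.
PAPER_SIZES_MM = {
    "A0": (841, 1189),
    "A1": (594, 841),
    "A2": (420, 594),
}

def poster_pixels(size, orientation="portrait", dpi=300):
    """Return (width_px, height_px) for the given paper size."""
    w_mm, h_mm = PAPER_SIZES_MM[size]
    if orientation == "landscape":
        w_mm, h_mm = h_mm, w_mm
    return (round(w_mm / MM_PER_INCH * dpi),
            round(h_mm / MM_PER_INCH * dpi))

print(poster_pixels("A1"))               # (7016, 9933)
print(poster_pixels("A0", "landscape"))  # (14043, 9933)
```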

Information Services (IS) have a range of resources including help on using software such as PowerPoint to make a poster and guides to printing one.

IS uCreate user guides and advice on poster printing

Standing up and talking can be intimidating; so can being filmed. Anxiety and stress can get in the way of performing effectively. 

The Student Counselling Service offer advice and workshops on a variety of topics. They have produced a helpful e-booklet about stress, why we need it and how to manage our stress levels to strike the right balance. 

Student Counselling Service

Self-help online courses and workbooks on anxiety, stress and mental wellbeing

Stress: A short guide for students (pdf booklet)

Information Services (IS) provides access to a range of support and training for software provided by the University. This includes training and advice on LinkedIn Learning.

IS Digital skills and training

IS LinkedIn Learning

IS Microsoft Office 365 suite

Prezi is a popular alternative to PowerPoint but is often inaccessible to disabled people. Therefore, it is recommended that Prezi is not used for academic presentations. However, if you have to use Prezi, there are some steps you can take to improve your presentation.

IS PREZI and accessibility issues

If you are presenting at an external event, it may be appropriate to use University branding.

University brand guidelines and logos (Communications and Marketing)

Durham University – Common Awards

Assessment Types – Guidance and Marking Criteria

This policy should be read in conjunction with:

  • The University's Learning and Teaching Handbook, and in particular the Principles of Assessment
  • Our glossary, which provides an explanation of many of the key terms below.

For a more detailed overview of our assessment types, see: Assessment Types - Overview

There are many different ways of assessing student learning. We encourage TEIs to use a wide range of forms of assessment, and to be creative in thinking about the activities that might allow students to develop and display their achievement of the Learning Outcomes for any given module. TEIs should make sure that there is a good fit between assessment types and learning outcomes: some assessment types will not work well for some kinds of learning outcome.

There is a widespread myth that Durham requires primarily essay-based assessment, or that each module has to include a substantial written assignment – but this is not the case. Many other approaches are possible. If TEIs can think of an activity that will allow students to develop and display the intended learning, and that will allow markers to assess it fairly, it is very likely already to be possible within the Common Awards Framework. If you want to implement a form of assessment that is not covered below, please contact the Common Awards Team. It may well still be possible.

The guidance below sets out the forms of assessment that we currently recognise under the Common Awards Framework, and how to use them – but we encourage TEIs to interpret this list creatively. For instance, the variety of activities that can be included under the ‘project’, ‘practical skill’, ‘portfolio’ or ‘written assignment’ types is almost endless.

A moderation record sheet (produced by the Continuing Implementation Group) is available in the templates and forms section for recording the results of moderating a whole batch of work. Its use is optional.

Notes to the assessment types table (the table itself is not reproduced here):

* Use the 'Essays and other written assignments' criteria.

** Use the 'Oral presentations' guidance.

*** Most short tests generate a straightforward numerical score. TEIs should consult our page on the principles for Numerical Marking for information on how to convert scores into marks.
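The Numerical Marking principles themselves are not reproduced on this page, so any conversion shown here is an assumption. The sketch below illustrates only the simplest common approach: linearly scaling a raw test score to a percentage mark.

```python
# Hypothetical sketch only: the Common Awards Numerical Marking
# principles are not reproduced here, so this assumes the simplest
# common approach - linear scaling of a raw score to a percentage.

def score_to_mark(raw_score, max_score):
    """Convert a raw test score into a percentage mark by linear scaling."""
    if not 0 <= raw_score <= max_score:
        raise ValueError("raw score must lie between 0 and the maximum")
    return round(raw_score / max_score * 100)

print(score_to_mark(17, 25))  # 68
```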

Using assessment criteria

The assessment criteria do not make sense on their own: they need to be used in conjunction with the Intended Learning Outcomes on the relevant module outline. Used together, they allow markers to judge what mark to give based on how fully, and with what clarity and insight, students have met those learning outcomes.

The assessment criteria themselves are, to a certain extent, differentiated by level; module learning outcomes are much more thoroughly differentiated by level. It is the combination of the two which should help markers and students understand how marking differs from level to level.

Within the common parameters and standards set across the Common Awards scheme, it is possible for TEIs to set very different assignments for students. The guidance provided here is therefore necessarily generic. TEIs are encouraged to give students more detailed guidance about what is expected for particular assignments. See our Assessment Design page for more information on the guidelines for developing and setting assessments for programmes.

Please note that all the guidance documents and assessment criteria presented here have been developed by the Continuing Implementation Group (CIG) as a way of interpreting the University's generic assessment criteria for a Common Awards context.


Federation University Australia

Marking criteria

How you set marking criteria depends on the intended learning outcomes of the course and the type of assessment task. There are two main ways to provide marking criteria – marking guides and rubrics – each of which comes in a range of formats. The choice between a marking guide and a rubric will depend on the type of assessment task designed, the intended learning outcomes being demonstrated and the learning technologies used. In its simplest form, a marking guide provides broad outlines for success and allocates a range of marks to each component, while a simple rubric provides specific outlines and examples of what is expected for success and allocates specific marks. There is no preference for either method (and you may sometimes use a combination of both): each can be done well, or poorly.

Regardless of which method you use, the purpose of marking criteria is to tell students exactly what you are asking them to demonstrate. Teaching staff who mark the assessment therefore need a clear understanding of what students have been asked to demonstrate in order to judge success. The language used within the criteria needs to be clear, concise and pitched at the expected level of learning.

Marking guides

A marking guide is a means of communicating broad expectations for an assignment, providing feedback on works in progress, and grading final products. This marking scheme articulates the expectations for an assignment by listing the criteria or elements and describing the various levels of quality from excellent to poor. Students receive a list of expectations required for each component of the task, within a range. A marking guide differs from a rubric in that each criterion is given a range, not a specific point value. For example: Excellent 8-10, Good 5-7, Poor 2-4, Unsatisfactory 0-1.

It is worth noting that depending upon the learning technology used for assessment submission and/or marking, the structure of the marking guide may differ. It is important that the technology tool chosen matches the purpose of the assessment task. Please visit the technologies to enhance assessment webpage to explore what Federation University supports.

Rubrics

A rubric is a means of communicating specific expectations for an assignment, providing focused feedback on works in progress, and grading final products. This marking scheme articulates the expectations for an assignment by listing the specific criteria or elements and describing the various levels of quality from excellent to poor. Students receive a comprehensive list of expectations required for each component of the task. A rubric differs from a marking guide in that each criterion is usually given a specific point value, not a range. For example: Excellent 5, Substantial 4, Moderate 3, Minimal 2, Poor 1, Unsatisfactory 0.

Rubrics are often used to grade student work but, more importantly, they can teach as well as evaluate. When used as part of a formative, student-centred approach to assessment, rubrics have the potential to help students develop understanding and skill, and to make dependable judgements about the quality of their own work. Students should be able to use rubrics the same way that teachers use them: to clarify the standards for a quality performance, and to guide ongoing feedback about progress toward those standards.
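To make the difference between the two methods concrete, here is a minimal sketch of how each produces a total mark. The criterion names and judgements are invented for illustration; only the bands and point values come from the examples above.

```python
# Minimal sketch of the two approaches described above. The criterion
# names and judgements are invented; the bands and point values are
# the examples given in the text.

# Marking guide: each criterion is marked within a band (a range).
GUIDE_BANDS = {
    "Excellent": (8, 10),
    "Good": (5, 7),
    "Poor": (2, 4),
    "Unsatisfactory": (0, 1),
}

# Rubric: each level carries a fixed point value.
RUBRIC_LEVELS = {
    "Excellent": 5,
    "Substantial": 4,
    "Moderate": 3,
    "Minimal": 2,
    "Poor": 1,
    "Unsatisfactory": 0,
}

def guide_mark(judgements):
    """judgements maps criterion -> (band, mark chosen within that band)."""
    total = 0
    for criterion, (band, mark) in judgements.items():
        low, high = GUIDE_BANDS[band]
        if not low <= mark <= high:
            raise ValueError(f"{criterion}: {mark} is outside the {band} band")
        total += mark
    return total

def rubric_mark(judgements):
    """judgements maps criterion -> level awarded."""
    return sum(RUBRIC_LEVELS[level] for level in judgements.values())

# Invented example: three criteria for a presentation task.
print(guide_mark({"Structure": ("Good", 6),
                  "Delivery": ("Excellent", 9),
                  "Visuals": ("Good", 5)}))   # 20

print(rubric_mark({"Structure": "Substantial",
                   "Delivery": "Excellent",
                   "Visuals": "Moderate"}))   # 12
```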

Creating rubrics

Whilst the advantages of using rubrics are evident, they can be quite time-consuming to develop initially. Before you get started, view the Rubistar website developed by the University of Kansas to assist you in creating quality rubrics. It provides templates for many common assessment tasks, giving you a foundation on which to build the marking criteria for your specific assessment task.

As with marking guides, the structure of the rubric may differ depending upon the learning technology used for assessment submission and/or marking. It is important that the technology tool chosen matches the purpose of the assessment task. Please visit the technologies to enhance assessment web page to explore what Federation University supports.

Federation University Learning and Teaching website

  • Teaching Practice - Technologies to enhance assessment
  • University of Kansas – Rubistar website

Professional Learning Modules (online, self-paced):

  • Introduction to assessment principles (30 min)
  • Importance of effective marking criteria (15 min)
  • Introduction to simple rubrics (15 min)
  • Introduction to simple marking guides (15 min)

Further support:

  • Contact your Learning Designer via the CAD job portal for help matching the right type of marking criteria to your assessment task and exploring the technology tools that may enhance the assessment.
  • Contact your Learning Skills Advisor for help improving the clarity and expression of your marking criteria.


College Marking Framework

The College Marking Framework includes:

  • Marking Models
  • College Marking Schemes
  • College Marking Criteria

The framework is an important reference point for setting and maintaining academic standards across the College. It provides guidance for all assessment practices and promotes consistency across taught programmes with the aim of enhancing the student experience of assessment. This College Marking Framework was endorsed by the Academic Standards Subcommittee (ASSC) and approved by College Education Committee (CEC) in November 2021. The framework was noted for information by Academic Board in December 2021. It was piloted in some faculties in 2022-23 and is the College Marking Framework for all faculties from September 2023.

The College Marking Criteria also provides a frame for the setting of learning outcomes and supports faculties and assessment sub-boards in refining their faculty, discipline or assessment-specific marking criteria.

Step-Marking Guidance for Faculties

  • Step-Marking Guidance for Faculties, 2023-24
  • Step-Marking Guidance for Students

Previous Framework

The previous College Marking Framework and the Undergraduate and Taught Postgraduate Marking Criteria are available here:

  • College Marking Framework
  • Undergraduate Marking Criteria
  • Taught Postgraduate Marking Criteria
