The eLearning Coach

For designing effective learning experiences

Connie Malamed

Six Ways to Use Examples And Nonexamples To Teach Concepts

by Connie Malamed

Examples and Nonexamples for Teaching Concepts

When you first saw a comic strip, for example, you most likely didn’t know the concept of “comics.” But over time, you learned that certain styles of line drawings formatted in a sequence were referred to as comics. Once you formed this concept, it became easier to classify other line drawings as comics or non-comics.


At a young age, we learn to form generalizations, such as the concept for “comics.”

Examples, or instances, seem to be crucial for helping a learner form accurate concepts. Otherwise, according to educational researchers, the learner can overgeneralize, under-generalize, or form misconceptions. In overgeneralization, a learner’s concept is too broad: they apply a rule or classify an example under a concept when it does not apply. In under-generalization, a learner’s concept is too narrow: they may not recognize that an example belongs to the concept when it does. Misconceptions involve a failure to comprehend a concept accurately. Here are six ways to help learners acquire concepts.

Rule #1: Use examples in which the irrelevant attributes vary widely.

As learning designers, we can help by providing the right types of examples. Just any old examples won’t do. The attributes of the examples should vary widely, particularly on irrelevant characteristics, so learners get the right idea.

In a course on sexual harassment, if your examples only demonstrate a person harassing an individual of the opposite sex, a learner might erroneously generalize that sexual harassment cannot occur between people of the same gender. This, of course, is not true.

By varying the less relevant attributes of your examples, learners get a more refined understanding of a concept. If the examples don’t vary widely on irrelevant attributes, learners may form a concept that is too narrow (Merrill & Tennyson, 1977).

Rule #2: Progress from simple to difficult examples.

Start with simple examples, then progress to more complex ones. Research shows that if a learner sees only simple instances of a concept, they will be less likely to correctly classify the more difficult instances. This causes under-generalization (Merrill & Tennyson, 1978).

Imagine a radiologist learning about bone fractures. If the radiologist only learns how to identify the most obvious instances, in which the fragments separate completely, they will undergeneralize what a fracture is. Upon seeing a fracture in which the bone fragments are still partially joined, the radiologist could make a false diagnosis.

Rule #3: Present instances of a concept in rapid sequence or allow all instances to be viewed simultaneously.

Display examples close together in time. This enables them all to be active in working memory at once, facilitating the process of generalization. If there is a time gap between the presentation of examples, the learner might not generalize from them. So present examples in rapid sequence, or leave several on the screen at one time (Gagne, 1985).

Rule #4: Use matched examples and nonexamples for concepts with related attributes.

Your brain also likes to discriminate. Discrimination restricts the range of examples we use to form a concept. To help learners form an accurate concept and to avoid overgeneralization, promote discrimination by presenting nonexamples. A nonexample is an instance that is specifically not an example of the concept being learned. Each nonexample should vary in only one attribute from the example with which it is paired.

Some research suggests that concepts with clearly distinguishable attributes (like texture versus color) are better taught with examples only; the learner needs only to generalize in order to acquire them. But concepts that share common attributes, such as the statistical concepts of mean, median, and mode, require discrimination skills and thus need matched examples and nonexamples (Tennyson & Cocchiarella, 1986). Matched examples and nonexamples should be present in working memory simultaneously.
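Because the mean, median, and mode apply different rules to the same values, a single dataset can serve as a matched set: each measure is an example of itself and a nonexample of the other two. A minimal sketch using Python’s statistics module (the dataset is our own illustration, not from the article):

```python
from statistics import mean, median, mode

# One dataset, three rules: the three measures of center share the same
# data but differ in the rule applied, so they discriminate one another.
data = [2, 2, 3, 4, 9]

print(mean(data))    # arithmetic average -> 4.0
print(median(data))  # middle value       -> 3
print(mode(data))    # most frequent      -> 2
```

Seeing all three outputs side by side on the same data is one way to keep the matched examples and nonexamples in working memory simultaneously.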

Returning to the comics theme mentioned earlier, it’s difficult to fully define the concept of comics. So a paired example and nonexample can help, because discrimination is required. The pair below demonstrates that the intent of the artist is one attribute for defining comics. The example on the left was intended to be read as a comic book. Roy Lichtenstein’s comic-based painting on the right was intended as an artwork that provided commentary on the mass media.


Present matched examples and nonexamples when concepts could be confused with each other.

Rule #5: Provide opportunities for learners to generate their own examples of a concept.

If you’re working in a setting that allows learners to communicate with you and each other, ask them to generate their own examples of a concept. Promoting reflection and response in this way will reinforce well-formed concepts and correct misunderstandings. It may also clarify fuzzy areas, as not all concepts are neatly defined.

In the sexual harassment course mentioned earlier, a learner might propose that asking a co-worker for a date is an example of harassment. Others in the discussion would probably respond that this is not harassment, but that repeatedly asking the same co-worker for a date after multiple refusals might be. This feedback corrects the misconception.

Rule #6: Expose learners to a wide range of examples and nonexamples and allow them to discover the concept.

We have been discovering concepts on our own since childhood. It’s a natural occurrence for a pattern-seeking brain. It’s not surprising then that an effective way to learn concepts is to discover them from interacting with a set of examples and nonexamples.

Using this inquiry strategy in eLearning isn’t quite as easy as in instructor-led training, but it can be done. It involves presenting multiple examples to the learner and asking about the shared characteristics of those examples. Through exploration and discovery (interactive activities), learners can acquire the concept. Then use questions and context-sensitive feedback to correct any misconceptions.

In a synchronous setting, concept formation can emerge from structured online discussions. Research suggests that discovery learning is most effective when learners already have a developed knowledge base.

References:

  • Gagne, E. (1985). The Cognitive Psychology of School Learning. Little, Brown.
  • Merrill, M. D., & Tennyson, R. D. (1977). Concept Teaching: An Instructional Design Guide. Educational Technology Publications.
  • Merrill, M. D., & Tennyson, R. D. (1978). Concept classification and classification errors as a function of relationships between examples and non-examples. Improving Human Performance, 7, 351-364.
  • Tennyson, R. D., & Cocchiarella, M. J. (1986). An empirically based instructional design theory for teaching concepts. Review of Educational Research, 56(1).



January 1, 2023 at 1:41 pm

As a trainer, I believe it is crucial to understand how our learners learn and to implement effective strategies to help them solidly acquire key concepts. Another argument I would like to add is the importance of repetition and review to reinforce concept learning. By repeating and revising concepts at different times during the training, we can help learners retain them better and use them effectively in their daily work.


November 13, 2022 at 9:23 am

I appreciate your suggestions to allow the concepts to be discovered by the learners themselves. There are so many perspectives and an array of beliefs, viewpoints, etc. As instructors, it would be impossible to provide lessons designed for each individual. However, developing lessons that allow the different personalities to explore and discover concepts themselves is truly a strategy to ensure that all can learn.


November 1, 2022 at 4:21 am

All progress is made from easier towards more difficult things. Examples shouldn’t be excluded 🙂


Evidence-informed English Teaching

Blogs about the teaching of English (mainly) and the odd view on the wider education system

The Frayer Model examined: the power of “non-examples” in English

The Frayer model, perhaps due to the excellent work from Alex Quigley on the instruction of vocabulary, is an integral part of many teachers’ toolkits. If you are unfamiliar with it, it is a graphic organiser (first conceived by Dorothy Frayer at the University of Wisconsin) that can be used to introduce, and ultimately better understand, a new academic term or sophisticated vocabulary choice.

It looks like this:

[Image: the Frayer model – a four-quadrant organiser with sections for definition, characteristics, examples and non-examples]

In English, I tend to use this to introduce linguistic or structural methods (where appropriate), or word choices containing figurative ideas where the meaning needs unpacking.

For example, I am sure that many students in English have encountered an oxymoron, whether this is Romeo’s famous ‘O loving hate’, Byron’s ‘melancholy merriment’ in Don Juan, or even Auden’s ‘juicy bone’ referred to in Funeral Blues.

Very often, though, I find that students develop misconceptions of what an oxymoron is, and therefore of its authorial intent, especially if they have been grappling with juxtaposition or paradox. So, how can the Frayer model aid us in the teaching of oxymoron?

The power of the ‘non-example’

Whilst all of the Frayer model is useful for vocabulary instruction, the power of the ‘non-example’ is too often overlooked. Worse still, many teachers and students see the non-example as anything that’s not the example, which diminishes its pedagogical purpose.

Most things will be a non-example of an oxymoron, so how is this used effectively?

Pretty ugly

When I use the Frayer model with my year 7s, I often use the example below, beginning with the ‘non-example’:

[Image: the Frayer model with only the non-example quadrant completed: ‘pretty ugly’]

The lesson begins with a discussion of ‘pretty ugly’ and what the phrase means. This starts at a literal level, where students zoom in on ‘ugly’, then later evolves into a discussion of being ‘quite’ ugly. Eventually I guide them to think about syntax and the contrasting nature of the two words (though students often get there first!). I then reveal the ‘example’:

[Image: the example quadrant revealed: ‘loving hate’]

Students then consider why ‘loving hate’ is an oxymoron and ‘pretty ugly’ is not. Some struggle, thinking that both are opposite words side by side, before remembering, or being told, that ‘pretty’ is not the opposite of ‘ugly’ in this context. This opens up a whole discussion about the context of language choices, which develops a greater appreciation of vocabulary.

Following this, I ask the students to write their own ‘characteristics’ section to solidify their understanding of how the oxymoron works (and to check for any issues in that understanding).

My year 7s came up with:

“having two different ideas near each other” (close, but this could be juxtaposition)

“having two opposites next to each other” (closer, but could fall into the ‘pretty ugly’ trap)

“sort two things, maybe like opposites, beside each other” (better, but perhaps not firmly understood).

The next reveal:

[Image: the characteristics quadrant revealed]

Further explicit instruction is then required on ‘contradiction’ and ‘incongruous’ – but this is now adding to their knowledge and vocabulary, which is essential. Students need to understand the concept behind the term to have any chance of understanding authorial intent, rather than simply feature-spotting the term in texts.

Finally, I reveal our working definition, which I expect all students to use in its exact form due to all the work we have done so far to get to this point.

[Image: the working definition revealed]

Translation to analysis

The students then need to analyse the method in a text, and for this I choose a short extract from Don Juan and explore Byron’s ‘melancholy merriment’ in relation to this stanza:

It is an awful topic—but ’t is not
My cue for any time to be terrific:
For checker’d as is seen our human lot
With good, and bad, and worse, alike prolific
Of melancholy merriment, to quote
Too much of one sort would be soporific;—
Without, or with, offence to friends or foes,
I sketch your world exactly as it goes.

Whilst challenging, and obviously requiring further instruction on the poem’s central ideas, context and so on, the students deftly evaluated the use of oxymoron, with one student beginning their analytical paragraph as follows:

The speaker presents a rather incongruous idea, by presenting the seemingly contradictory ‘melancholy’ with ‘merriment’ in relation to their view on the world…

The student began to analyse language choices of ‘melancholy’ and ‘merriment’ in a way I think most could. But what struck me was their conceptual understanding of the definition we crafted in a previous lesson and how this was used to inform their language analysis.

Not bad, I think, for Y7s who missed the latter half of their Year 6…


Analyzing the Structure of the Non-examples in the Instructional Example Space for Function in Abstract Algebra

  • Published: 07 April 2022


  • Rosaura Uscanga
  • John Paul Cook (ORCID: orcid.org/0000-0003-3434-3514)


The concept of function is critical in mathematics in general and abstract algebra in particular. We observe, however, that much of the research on functions in abstract algebra (1) reports widespread student difficulties, and (2) focuses on specific types of functions, including binary operation, homomorphism, and isomorphism. Direct, detailed examinations of the function concept itself–and such fundamental properties as well-definedness and everywhere-definedness–are scarce. To this end, in this paper we examine non-examples of function in abstract algebra by conducting a textbook analysis and semi-structured interviews with abstract algebra instructors. In doing so, we propose four key categories based upon the definitive function properties of well-definedness and everywhere-definedness. These categories identify specific characteristics of the kinds of non-examples of function that abstract algebra instruction should emphasize, enabling us to hypothesize how students might be able to develop a robust view of function and explain in greater detail the nature of the reported difficulties that students experience.


Introduction

The function concept is critical in mathematics and is a core topic in the secondary and undergraduate mathematics curriculum (Bagley et al., 2015 ; Dubinsky & Wilson, 2013 ; Even & Tirosh, 1995 ; Hitt, 1998 ; Oehrtman et al., 2008 ). In abstract algebra, a nationally representative sample of abstract algebra experts recently concluded that topics like homomorphism, isomorphism, and binary operation–all of which are specific types of functions–are some of the most important concepts in the course (Melhuish, 2019 ). Indeed, nearly all of the research on functions in abstract algebra has examined key aspects of these various types of functions (e.g., Brown et al., 1997 ; Hausberger, 2017 ; Larsen, 2009 ; Leron et al., 1995 ; Melhuish et al., 2020b ; Rupnow, 2019 ). One theme that emerges from this literature is that students experience considerable challenges reasoning about these types of functions. The function concept itself, however, has received considerably less attention in these advanced settings. We also note that much of the functions literature has focused on examples of functions and has largely overlooked non-examples. To this end, in this paper we investigate the contents and structure of the non-examples in the instructional example space (Watson & Mason, 2005 ; Zazkis & Leikin, 2008 ) for function in abstract algebra. Our research question is: what non-examples of function do students encounter in introductory abstract algebra, and what are the key characteristics by which these non-examples might be productively classified?

Literature Review

Characterizations of the Function Concept

Much of the functions literature focuses on a covariational (e.g., Carlson, 1998 ; Carlson et al., 2002 ; Oehrtman et al., 2008 ) approach to functions, in which a function is viewed primarily as a relationship between two quantities that are changing in tandem. Although a covariational perspective is a very useful way to conceive of functions in courses like algebra and calculus, it is not useful in an abstract algebra setting because it “superimposes an ordinal system on function, which does not underlie many of the discrete structures in abstract algebra” (Melhuish & Fagan, 2018 , p. 22). Thus, a significant portion of the research on functions in the mathematics education literature is not able to account for the ways in which students must reason about functions in abstract algebra. Instead, we take a relational (Slavit, 1997 ) view of function in order to focus on “relationships between input–output pairs” (p. 262). This includes relationships between “individual inputs and outputs” (Slavit, 1997 , p. 262) as well as relationships between sets of inputs and sets of outputs.

Our relational focus highlights a need to specify in greater detail how we define the relationship between the inputs and outputs of a function. Weber and colleagues (2020) pointed out that there are two common ways to do so. The first defines a function in terms of “a domain, a codomain, and a correspondence between the domain and the codomain such that each member of the domain is assigned exactly one element of the codomain” (Weber et al., 2020, p. 2). The correspondence mentioned here is often referred to in the literature as the rule. From this perspective, the phrase ‘exactly one element’ is conventionally interpreted in terms of two conditions: (a) each element of the domain maps to at most one element of the codomain (i.e., the proposed mapping must be well-defined), and (b) each element of the domain maps to at least one element of the codomain (i.e., the proposed mapping must be everywhere-defined). The second characterization involves viewing a function \(f\) as a set of ordered pairs such that, if \(({x}_{1}, {y}_{1})\) and \(({x}_{2}, {y}_{2})\) are in \(f\) and \({x}_{1}={x}_{2}\), then \({y}_{1}={y}_{2}\). Here, the domain of \(f\) is defined as the set of all of the first coordinates of these ordered pairs and the codomain (equivalent in this case to the range) is the set of all of the second coordinates. A subtle but critical difference between these two characterizations is that correspondences defined using the second characterization are automatically everywhere-defined, and thus the only condition that must be satisfied for a proposed correspondence to be a function is well-definedness.
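For correspondences on small finite sets, these two conditions can be checked mechanically. The sketch below is our own illustration, not drawn from the paper: it assumes a proposed correspondence is encoded as a set of (input, output) pairs, and the function names are hypothetical.

```python
def is_well_defined(pairs):
    """Condition (a): each input maps to at most one output."""
    outputs = {}
    for x, y in pairs:
        if x in outputs and outputs[x] != y:
            return False  # x is assigned two different outputs
        outputs[x] = y
    return True

def is_everywhere_defined(pairs, domain):
    """Condition (b): every element of the domain has at least one output."""
    return domain <= {x for x, _ in pairs}

def is_function(pairs, domain):
    """A proposed correspondence is a function iff both conditions hold."""
    return is_well_defined(pairs) and is_everywhere_defined(pairs, domain)

# A function on Z_4: x -> x^2 mod 4
squares = {(x, (x * x) % 4) for x in range(4)}
print(is_function(squares, {0, 1, 2, 3}))  # True

# Non-example failing well-definedness: 0 is sent to both 0 and 1
print(is_well_defined({(0, 0), (0, 1)}))  # False

# Non-example failing everywhere-definedness: 2 has no image
print(is_everywhere_defined({(0, 0), (1, 1)}, {0, 1, 2}))  # False
```

Note that the sketch does not verify that outputs land in an intended codomain; it isolates only the two conditions discussed above.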

We adopt the first characterization of function because it is commonly used in abstract algebra. For example, functions are often used to define a relationship between a familiar, well-understood algebraic structure and one that is unfamiliar in order to familiarize oneself with the latter (this is one of the many uses of the First Isomorphism Theorem). This choice shaped the study in important ways, particularly the way we operationalized the study’s central notion of a non-example of function (see “ The Importance of Non-examples ” section).

Literature on Everywhere-definedness and Well-definedness

Research on functions generally emphasizes the importance of attending to well- and everywhere-definedness. In the abstract algebra literature, this includes studies that examine students’ reasoning about binary operation (e.g., Brown et al., 1997 ; Melhuish & Fagan, 2018 ; Melhuish et al., 2020a ), homomorphism (e.g., Hausberger, 2017 ; Rupnow, 2019 ), and isomorphism (e.g., Larsen, 2009 ; Leron et al., 1995 ; Nardi, 2000 ). Collectively, these studies point out that developing a robust understanding of function is a key precursor to reasoning about these specific types of functions. For example, closure–one of the definitive characteristics of a binary operation–can be framed as a specific manifestation of everywhere-definedness. Additionally, with respect to reasoning coherently about homomorphisms, Melhuish and colleagues ( 2020b ) noted that “a fractured or rich understanding of function may serve as a hindrance or support, respectively” (p. 14). In short, the abstract algebra literature very clearly illustrates the implications of well-definedness and everywhere-definedness for reasoning with subsequent function-related ideas. Studies that involve direct, detailed examinations of these notions in their own right, however, are scarce.

The core function concept has also received attention in the broader literature base on functions. We note three themes from these studies. First, well-definedness has received considerably more attention than everywhere-definedness, which remains a critical but oft-overlooked concept. Second, the nuance of well-definedness creates some difficulties for students, who find it difficult to articulate what it means and why it is important (e.g., Even, 1993; Even & Tirosh, 1995). Students might also associate it primarily with procedural conceptions of the vertical line test (e.g., Clement, 2001; Kabael, 2011; Thomas, 2003). Third, students have difficulties adapting well-definedness (and the vertical line test) to functions whose domains are not the real numbers (e.g., Dorko, 2017; Even & Tirosh, 1995). We note that the vertical line test is of limited use in abstract algebra as (1) many functions do not lend themselves to a useful graphical illustration (which is required for the vertical line test), and (2) as it is typically stated, the vertical line test does not address everywhere-definedness. Thus, though it has been more than two decades since this observation was originally made by Even and Bruckheimer (1998), we believe it is still very much true that well-definedness “deserves more careful attention than it receives” (p. 30).

The difficulties students experience with well- and everywhere-definedness can be explained in part by an overreliance on the proposed rule used to define the correspondence between the domain and codomain (e.g., Bailey et al., 2019 ; Breidenbach et al., 1992 ; Carlson, 1998 ; Clement, 2001 ). Thompson ( 1994 ), for example, noted that the “predominant image evoked in students by the word ‘function’ is of two written expressions separated by an equal sign” (p. 68). Indeed, a rule-only view of function can “mask definitional properties such as well-definedness” (Melhuish et al.,  2020b , p. 4) and, we propose, everywhere-definedness. To help students overcome these difficulties, researchers have suggested that it is critical for students to consider the rule in conjunction with the domain and codomain when determining when a proposed correspondence is or is not a function (e.g., Dorko, 2017 ; Kabael, 2011 ; Oehrtman et al., 2008 ; Zandieh & Knapp, 2006 ). How to emphasize the importance of the domain and codomain, however – such as the characteristics of non-examples that an instructor might use to encourage students to attend to these features – has not been explored.

Theoretical Perspective

The Importance of Non-examples

Examples and non-examples are critical in mathematical reasoning because they can provide concrete illustrations of abstract ideas (e.g., Goldenberg & Mason, 2008 ; Tsamir et al., 2008 ; Zaslavsky, 2019 ). In particular, non-examples of a concept can illuminate insights that are not always apparent when considering examples of that same concept. As noted by Watson and Mason ( 2005 ), non-examples have the potential to “demonstrate the boundaries or necessary conditions of a concept” (p. 65) and, in turn, showcase the essential aspects and features of definitions (such as the features of everywhere- and well-definedness in the definition of function). In particular, non-examples can make these key conceptual features more apparent by illustrating what happens when they are not satisfied (e.g., Tsamir et al., 2008 ).

Our characterization of a non-example of function is shaped by the characterization of function we adopted in the “ Characterizations of the Function Concept ” section: a function is a proposed mapping \(f:A\to B\) that is both well-defined and everywhere-defined. We view a non-example of function, therefore, as a proposed correspondence that fails to satisfy either the well-definedness condition or the everywhere-definedness condition (or both). This is a key distinction: with the other characterization, functions that are defined in terms of sets of ordered pairs are automatically everywhere-defined (and thus a non-example would simply be a relation that is not well-defined). Our choice here reflects the literature’s emphasis on the importance of (yet notable lack of attention afforded to) everywhere-definedness. Another related consequence of this choice is that changing the domain or codomain changes the nature of the proposed correspondence, even if the rule remains the same. Weber and colleagues ( 2020 ) offered the example of the squaring function from \({\mathbb{R}}\) to \({\mathbb{R}}\) and note that it is a different function than the squaring function from \({\mathbb{R}}\) to \([0, \infty )\) . Extrapolating this point, we note that changing the domain or codomain of a proposed correspondence could, for instance, change a non-example of function into a function (a process that we call repairing a non-example).
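A classic illustration of a correspondence that fails well-definedness, and of repairing it, is the proposed rule on \({\mathbb{Q}}\) sending \(a/b\) to \(a+b\): a rational number has many representations, so the rule yields conflicting outputs. This particular example is our own addition, not one drawn from the study; the sketch shows the conflict and one repair.

```python
from fractions import Fraction

def proposed(numerator, denominator):
    # Proposed rule on Q: send a/b to a + b. The output depends on the
    # chosen representation of the rational, not on the rational itself,
    # so the correspondence is not well-defined.
    return numerator + denominator

assert Fraction(1, 2) == Fraction(2, 4)             # same element of Q...
assert proposed(1, 2) == 3 and proposed(2, 4) == 6  # ...conflicting outputs

def repaired(q):
    # Repair: apply the rule to the unique lowest-terms representation
    # (Fraction normalizes automatically), so the output depends only on
    # the element of Q.
    return q.numerator + q.denominator

assert repaired(Fraction(1, 2)) == repaired(Fraction(2, 4)) == 3
```

Here the repair changes the rule rather than the domain or codomain; restricting the domain or enlarging the codomain, as in the squaring example above, are analogous moves.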

Generally, we note that, while example-based research has received a fair amount of attention in abstract algebra (e.g., Cook & Fukawa-Connelly, 2015 ; Fukawa-Connelly & Newton, 2014 ), research that leverages non-examples in this domain is scarce. Thus, non-examples are a potentially valuable but currently underutilized tool. Indeed, the literature suggests that such an analysis could be particularly productive at the advanced undergraduate level. Melhuish and colleagues ( 2020b ), for instance, noted that “a lack of unification between the general function [concept] and specific AA functions was pervasive” (p. 15, emphasis added). We infer, then, that examining specific non-examples of function could yield similar insights into the nature of the general function concept. Similarly, Even ( 1993 ) and Even and Tirosh ( 1995 ) called attention to the importance of being able to distinguish between functions and non-functions and illustrated that having students consider well-chosen non-examples of function could be particularly beneficial in helping them develop a clearer image of this distinction. What it means for a collection of non-examples of function to be ‘well-chosen,’ however, is currently unclear. To address this issue, in this paper we examine the non-examples contained in the instructional example space for function in abstract algebra.

The Instructional Example Space

We employ Watson and Mason’s ( 2005 ) notion of example space –that is, the collections of examples that are associated with a particular concept. We interpret the term ‘example’ in a holistic way to refer to any specific, concrete manifestation of an abstract mathematical principle, concept, or idea. This can include exercises, illustrations, or, importantly for this study, non-examples. Watson and Mason ( 2005 ) distinguished between different kinds of example spaces, two of which are relevant for our objectives here. A personal example space is the collection and organization of examples and non-examples that an individual associates with a particular mathematical topic. The conventional example space is the collection of examples “as generally understood by mathematicians and as displayed in textbooks, into which the teacher hopes to induct his or her students” (Watson & Mason, 2005 , p. 76). Zazkis and Leikin ( 2008 ) proposed a useful refinement of the conventional example space, distinguishing between expert example spaces and instructional example spaces . Expert example spaces are the personal example spaces of experts and display the “rich variety of expert knowledge” whereas instructional example spaces involve what examples are “displayed in textbooks” and are used in instruction (Zazkis & Leikin, 2008 , p. 132). In this paper, in order to investigate what it means for a collection of non-examples of function to be ‘well-chosen,’ we examine the non-examples contained in the instructional example space.

Example spaces are not only characterized by lists of examples and non-examples; they also include the means by which these examples and non-examples might be organized and structured (Sinclair et al., 2011 ). We therefore distinguish between the contents and the structure of the non-examples in the instructional example space. For our purposes, the contents of the instructional example space are the union of the instructional non-examples that specific, individual experts consider to be useful in their instruction. Thus, to say that an example is in the instructional example space for function in abstract algebra is to say that there is a specific individual (in this case, an abstract algebra instructor or abstract algebra textbook author) who (1) views the proposed correspondence as a non-example of function, and (2) considers it to be useful in their instruction. We consider the structure to be the characteristics that we infer (from instructors’ descriptions and explanations) about what certain non-examples illustrate and why they are important. Inferences about the structuring of the non-examples in the instructional example space might involve, for instance, (1) researchers’ own perceptions of what conceptual aspect a non-example can (or is intended to) illustrate, (2) researchers’ interpretations of why an expert believes a particular characteristic to be important, or (3) researchers’ conjectures about the key distinctions between non-examples in a given collection.

We employed two methodologies: an analysis of introductory abstract algebra textbooks and semi-structured interviews with algebraists. First, we conducted a textbook analysis because (1) the instructional example space, by definition, contains the examples in textbooks, and (2) textbook analyses can provide insight into “how experts in a field […] define and frame foundational concepts” (Lockwood et al., 2017 , p. 389). Accordingly, while the primary purpose of the textbook analysis was to identify the non-examples in the instructional example space (the contents ), we were also attentive to insights in the textbooks regarding how experts might organize these non-examples (the structure ). Second, upon completion of the textbook analysis, we conducted a series of semi-structured interviews (Fylan, 2005 ) with abstract algebra instructors. Semi-structured interviews were important for our objectives because they allowed us to “address aspects that are important to individual participants” (Fylan, 2005 , p. 66) and thus provided a means by which to flexibly pursue emerging themes we inferred related to the structure of the instructional example space. Indeed, the primary purpose of these interviews was to gain insight into the structure of the instructional example space (though we remained open to identifying additional contents as well).

Two considerations guided our selection of textbooks: relevance (to select textbooks currently in use in undergraduate abstract algebra courses in the United States) and depth (to select a sample large enough to saturate any categories that emerge in our analysis). In total, we collected data from 13 textbooks (see Footnote 2), 9 of which we had verified were in ubiquitous use (Melhuish, 2019) or in use at prominent universities (National University Rankings, n.d.) (to ensure relevance), and 4 of which we introduced ourselves (following Lockwood et al., 2017) (to ensure depth) – see Table 1. This sample was certainly relevant: according to Melhuish (2019), the 4 textbooks in row 1 of Table 1 alone were in use at a combined 60% of the 1244 institutions surveyed (nearly 750 institutions); supplementing with the 5 textbooks in use at top Research-1 institutions further increases the market share (and therefore the relevance) of our sample. Had we determined this initial sample to be insufficient for achieving saturation, we planned to incorporate more textbooks as needed using similar criteria. Post-analysis, however, we concluded that, even though the 4 textbooks included for depth certainly helped us to illustrate the categories of our framework, the 9 textbooks selected for relevance would have been sufficient on their own for achieving saturation; as such, additional selection measures were not necessary.

We created a list of terms (informed by the literature and our own knowledge of abstract algebra) related to functions in abstract algebra: function, relation, map, correspondence, well-definedness, everywhere-definedness, domain, codomain, rule, binary operation, homomorphism, and isomorphism. The first author then identified the sections in each textbook that corresponded to these terms by using the table of contents and the index. Next, the first author collected the sections in each textbook related to these terms by obtaining a digital PDF file of the textbook (when available) or by scanning the desired sections from a hard copy of the textbook (we included section-ending exercises as part of each section). Then the first author read the relevant sections of each textbook, highlighting any excerpts that contained (1) non-examples of function (to identify the contents ), and (2) any associated descriptions and explanations related to a given non-example (to infer the structure ). The textbooks with the greatest market share (row 1 of Table 1 ) were analyzed first; the first author then used theoretical sampling techniques (Creswell, 2012 ) to select the textbooks that, based upon her initial examination of the textbooks in data collection, she believed would enable her to elaborate and refine codes and emerging themes most effectively. We identified a total of 71 non-examples of function (this number includes non-examples that emerged in the semi-structured interviews, described below). To analyze the data, we followed Creswell’s ( 2012 ) method for thematic analysis. This analysis was exploratory but did involve the use of some a priori codes: once a non-example of function was identified, we initially coded it (and any associated descriptions or explanations) as either (a) a well-definedness issue, (b) an everywhere-definedness issue, or (c) both. 
To enable us to focus more clearly on specific well- or everywhere-definedness issues, in this paper we focus only on those non-examples that satisfy either (a) or (b) but not both. Once the first author had assigned one of these three codes to each excerpt and trimmed those coded as both (a) and (b), she re-read each remaining excerpt, creating and assigning secondary codes based upon her interpretations of what the textbook authors were identifying as key characteristics of these non-examples. All codes were continually refined, revised, and reorganized as coding progressed. During this process, the second author reviewed all coded excerpts and proposed different ways by which they might be plausibly interpreted and organized; each code was then discussed and negotiated until agreement was reached.

Five mathematicians (whom we refer to as Professors A, B, C, D, and E) participated in the semi-structured interviews. All were tenured or tenure-track faculty members at a midwestern Research 1 university who had taught an undergraduate abstract algebra course in the last five years. All interviews were conducted and recorded on Zoom (on account of the COVID-19 pandemic). We began data collection with a semi-structured group interview with all 5 abstract algebra instructors because we hypothesized that a group setting would be more conducive to generating non-examples than an individual interview–that is, we anticipated that in such a setting the group would “become more than the sum of its parts [and] exhibit a synergy that individuals alone don’t possess” (Krueger & Casey, 2009 , p. 19). A central question of the group interview was, “what are 3–4 non-examples that you like to use to illustrate the function concept, and why?” Individual semi-structured interviews followed. The prompts for these interviews were informed by the results of the textbook analysis and group interview; though the nature of semi-structured interviews prevents us from providing a comprehensive listing of all questions asked, a representative sample is included in Fig. 1 .

figure 1

Typical questions asked of the abstract algebra instructors in the semi-structured interviews

All mathematicians participated in at least one individual interview in addition to the group interview; each mathematician was invited to participate in multiple individual interview sessions, but some were unable to do so due to varying availability. Professors A and E each participated in three individual interviews, Professor B in two, and Professors C and D in one each. Each individual interview session lasted approximately 1 to 1.5 hours. To analyze the data from the group and individual interviews, we transcribed each session in its entirety and employed the same procedures for thematic analysis (Creswell, 2012) that we used for the textbook analysis (the one distinction being that we began this phase of the analysis with the codes that resulted from the textbook analysis). This iterative process resulted in four key themes: well-definedness – domain choice, well-definedness – codomain choice, everywhere-definedness – domain restriction, and everywhere-definedness – codomain expansion. These codes correspond to the four categories by which we structure the instructional example space; the characteristics by which we assigned these codes are included as part of the results.

We now characterize and illustrate four categories that, we propose, can be used to productively organize the non-examples in the instructional example space for function in abstract algebra. We wish to call attention to three points before proceeding. First, as mentioned in the “Methods” section, to enhance the clarity of our analysis, we restrict ourselves here to non-examples that have either (a) a well-definedness issue or (b) an everywhere-definedness issue (but not both). Second, and accordingly, we do not claim that these categories partition the entire space of non-examples (that is, we do not consider these categories to be exhaustive or disjoint). Finally, we focus our analysis on a small number of what we considered to be vivid, prototypical non-examples in each category.

Well-definedness

We classify a non-example in the well-definedness category if there exists at least one element of the proposed domain with at least two corresponding images contained in the proposed codomain. For example, consider \(\phi :{\mathbb{Q}}\to {\mathbb{Z}}\) given by \(\phi \left(\frac{a}{b}\right)=a+b\) (non-example 2.1 in Fig. 2). Gallian (2017) explained that \(\phi\) “does not define a function since \(1/2=2/4\) but \(\phi (1/2)\ne \phi (2/4)\)” (p. 21). That is, as noted by Professor D, for “one half and two fourths, you get different answers. So if you get different answers for the same input, it’s not a function.” Consider also \(f:{\mathbb{R}}^{+}\to {\mathbb{R}}\) given by \(f\left(a\right)=\pm \sqrt{a}\) (non-example 2.4). Here, as noted by Rotman (2006), “there are two candidates for \(\sqrt{9}\), namely 3 and -3” (p. 83) and, thus, “\(f\left(a\right)=\pm \sqrt{a}\) is not single-valued, and hence it is not a function” (Rotman, 2006, p. 88). Similarly, Professor C noted that the input 9 maps to “plus or minus 3 […]. But plus or minus three isn’t a number, it’s two numbers.” Other non-examples that we classified in this category are displayed in Fig. 2 (see Footnote 3).

figure 2

Non-examples with well-definedness issues
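The well-definedness failures just illustrated can be checked mechanically. The following Python sketch is our own illustration (not drawn from the textbooks or interviews): it evaluates the rule of non-example 2.1 on two equivalent representations of the same rational number, and collects the two candidate images that the rule of non-example 2.4 assigns to a single input.

```python
from fractions import Fraction

# Non-example 2.1: the proposed rule phi(a/b) = a + b depends on the chosen
# representation of a rational number, so phi: Q -> Z is not well-defined.
def phi(a, b):
    """Evaluate the proposed rule on the representation a/b."""
    return a + b

# 1/2 and 2/4 represent the same element of Q...
assert Fraction(1, 2) == Fraction(2, 4)
# ...but the rule assigns the two representations different outputs.
print(phi(1, 2), phi(2, 4))  # 3 6

# Non-example 2.4: f(a) = ±sqrt(a) assigns two candidate images to one input.
def candidate_images(a):
    return {a ** 0.5, -(a ** 0.5)}

print(candidate_images(9))  # two candidates: 3.0 and -3.0
```

In both cases a single element of the proposed domain ends up with at least two corresponding images, which is precisely the defining condition of this category.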

We further refine these non-examples into subcategories based upon a distinction we inferred from the way the experts in our study discussed them. A key element of this distinction involved the nature of the choices one makes when evaluating a proposed correspondence at a particular input value. Professor A, for example, proposed that certain non-examples with well-definedness issues “demand a different treatment.” He then proposed that this ‘different treatment’ could be framed in terms of the following question: “Where is the choice taking place? Is it in your input? Or is it, uh, in the execution of the rule?” Professor B similarly framed this distinction – which he described as “two different types of problems” – in terms of the same choice:

Your function could be, um, not well-defined because the value in the domain is not well-defined, or that you have to make a choice in the value of the domain. Or they could be not well-defined because the value of the output is not well-defined and you have to make a choice of that value of the output.

Broadly, then, we infer that this distinction centers primarily on whether one is making a choice in the domain or the codomain. We account for this distinction by introducing two subcategories, which we elaborate below.

Well-definedness – Domain Choice

The aforementioned choice in the domain refers to the choice of different yet equivalent representations for a given domain element. This issue was explicitly attended to in both the textbooks and interviews. For example:

“Problems arise when the element \(x\) can be described in more than one way, and the rule or formula for \(f(x)\) depends on how \(x\) is written” (Beachy & Blair, 2019 , p. 56).

If “there are multiple ways to represent elements in the domain (like in \({\mathbb{Z}}_{n}\) or \({\mathbb{Q}}\) ), then we need to know whether our mapping is well-defined before we worry about any other properties the mapping might possess” (Hodge et al., 2014 , p. 129).

“If the defining rule for a possible binary operation is stated in terms of a certain type of representation of the elements, then the rule does not define a binary operation unless the result is independent of the representation for the elements” (Gilbert & Gilbert, 2015 , p. 305).

“The function is deliberately taking […] a particular presentation of the rationals … That’s the issue … that’s a problem. Like if you’re going to, if you’re gonna use a representative … then you have to be extra careful.” (Professor E)

We identify two elements common to these excerpts: each mentions the importance of attending to multiple representations of a domain element as well as the image of these representations under the rule. These features correspond to the two definitive characteristics of the non-examples in the well-definedness – domain choice category:

elements in the domain can be represented in different yet equivalent ways, and

these equivalent representations are mapped to different outputs by the rule.

In light of these characteristics, we propose that the previously discussed non-example 2.1 belongs in this category. Notice that (1) each rational number (such as ½) admits different yet equivalent representations, and (2) the rule maps each representation to a different element of the codomain. Consider also non-example 2.3. Beachy and Blair ( 2019 ) pointed out that, “in defining functions on \({\mathbb{Z}}_{n}\) it is necessary to be very careful that the given formula is independent of the numbers chosen to represent each congruence class” (p. 53). Referring to the same non-example, Professor B’s comment illustrates why such caution is indeed necessary: “if I take \(x\) equal to the equivalence class of 1 mod 4, well that’s equivalent to 5. And, if I were to choose 5, it would map to 5 and if I were to choose 1, it would map to 1. And 1 and 5 are not equivalent in the codomain.” Regarding the characteristics of this category, this comment illustrates that (1) the elements of the domain \({\mathbb{Z}}_{4}\) can be represented in multiple, equivalent ways, highlighting the aforementioned notion of ‘choice’ in the domain, and (2) the rule maps these equivalent representations to outputs that are not equivalent in the codomain. See Fig. 3 for an illustration of non-examples 2.1 and 2.3; for other non-examples in this category, see Fig. 4 .

figure 3

Illustrations of non-examples 2.1 and 2.3 ( well-definedness – domain choice )

figure 4

Additional non-examples in the well-definedness – domain choice category
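The two defining characteristics of this category lend themselves to a mechanical check: enumerate equivalent representations of each domain element and compare their images under the rule. The Python sketch below is our own illustration; the rule `g` is a hypothetical representative-dependent correspondence from Z_4 into Z_8 (the precise rules of non-examples 2.1 and 2.3 appear in Figs. 2 and 3), chosen to echo Professor B's observation that the equivalent representatives 1 and 5 can map to inequivalent outputs.

```python
# Hypothetical representative-dependent "rule": propose g: Z_4 -> Z_8 by
# sending the class of x to the class of x. The output then depends on
# which representative of the input class we happen to pick.
def g(representative):
    return representative % 8  # class of the chosen representative in Z_8

# 1 and 5 represent the same element of Z_4...
assert 1 % 4 == 5 % 4
# ...but the rule maps them to inequivalent elements of the codomain Z_8.
print(g(1), g(5))  # 1 5

# Generic check: every pair of equivalent representatives must be mapped
# to equivalent outputs; otherwise the proposed map is not well-defined.
def is_well_defined(rule, domain_mod, codomain_mod, reps=range(32)):
    images = {}
    for x in reps:
        key = x % domain_mod              # the input's equivalence class
        out = rule(x) % codomain_mod      # the image of this representative
        if images.setdefault(key, out) != out:
            return False  # same input class, two different output classes
    return True

print(is_well_defined(g, 4, 8))               # False
print(is_well_defined(lambda x: 2 * x, 4, 8)) # True: x ~ x+4 gives 2x ~ 2x+8
```

The contrast with the rule x ↦ 2x (which is well-defined from Z_4 to Z_8) highlights Characteristic 2: it is not the existence of multiple representations alone, but the rule mapping them to inequivalent outputs, that breaks well-definedness.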

Well-definedness – Codomain Choice

The choice in the codomain to which we refer above involves choosing amongst multiple outputs that are associated with a single, unambiguously represented input. This issue was explicitly attended to in the interviews. For example:

Professor A: “We’ve got to make a choice. […] There is not a choice in the domain, […] there is a choice of things that satisfy the statement in your rule.”

Professor B: “I wouldn’t say equivalence is at the heart of [it]. […] The definition has two possible values. […] You have to clarify which value you’re going to choose. That’s a problem with multiple values.”

Professor E: “You’re not taking advantage of any strange representation. The problem is just, like, with the function itself.”

We identify two features common to these excerpts. First, each mentions that the well-definedness issue is not attributed to equivalent representations in the domain. Second, we infer that the well-definedness issue is instead attributed to a choice in the codomain caused by multiple values of the rule. These two features correspond to the two definitive characteristics of the non-examples in the well-definedness – codomain choice category:

the proposed correspondence does not invoke different yet equivalent representations of elements in the domain, and

despite the lack of equivalent representations of elements in the domain, the rule still forces a choice to be made amongst outputs in the codomain.

The aforementioned non-example 2.4 exemplifies these characteristics: (1) the domain ( \({\mathbb{R}}^{+}\) ) causes no issues with respect to representation, yet (2) there is still an input that the rule maps to two outputs. Non-example 2.2 can also, we propose, be classified in the codomain choice category. Dummit and Foote ( 2004 ) explained that “this unambiguously defines \(f\) unless \({A}_{1}\) and \({A}_{2}\) have elements in common (in which case it is not clear whether these elements should map to \(0\) or to \(1\) )” (pp. 1–2). For example, if \({A}_{1}=2{\mathbb{Z}}\) and \({A}_{2}=3{\mathbb{Z}}\) , then it is not clear whether the domain element 6 maps to 0 or 1; to use the language of the algebraists, a choice must be made in the codomain. Additionally, Professor E pointed out that, in this non-example, “there’s no issue of representative, you know … you’re not invoking a presentation of elements of \([{A}_{1}]\) or \([{A}_{2}]\) .” Through the lens of the characteristics of this category, these comments collectively call attention to the fact that (1) the elements of the domain do not admit multiple representations, and (2) the rule is ambiguous and possibly maps at least one input to two outputs. See Fig. 5 for an illustration of non-examples 2.2 and 2.4; other non-examples that we classified in this category appear in Fig. 6 .

figure 5

Illustrations of non-examples 2.2 and 2.4 ( well-definedness – codomain choice )

figure 6

Non-examples included in the codomain choice category

Everywhere-definedness

We classify a non-example in the everywhere-definedness category if there exists at least one element of the proposed domain for which there is no corresponding image in the proposed codomain. For instance, consider \(f:{\mathbb{R}}\times {\mathbb{R}}\to {\mathbb{R}}\) given by \(f\left(a,b\right)=a/b\) (Pinter, 2010; non-example 7.2 in Fig. 7). Professor D pointed out that \(f\) is not a function because, generally, “you have to know that you can’t divide by zero.” Pinter (2010) specifically pointed out that “there are ordered pairs such as (3,0) whose quotient 3/0 is undefined” (p. 19). Fraleigh (2002), commenting on a similar non-example, noted that there is no element of the codomain that “is assigned by this rule to the pair (2,0)” (p. 25). Additionally, consider the proposed correspondence \(g:{\mathbb{Z}}\to {\mathbb{N}}\) given by \(g\left(x\right)={x}^{3}\) (non-example 7.5). Professor C concluded that \(g\) is not a function, rhetorically asking “where does -1 go?” Professor E similarly specified that “-1 cubed is not a natural number, so it doesn’t go anywhere in its codomain.” Figure 7 displays other non-examples with everywhere-definedness issues.

figure 7

Non-examples with everywhere-definedness issues

We now introduce a distinction that the mathematicians attended to regarding the specific ways in which the input–output correspondence fails. Professor B, for instance, mentioned that “there’s no division by zero, ever.” In contrast, when examining non-example 7.5, he pointed out that \(-1\) (the output associated with the input \(x=-1\)) exists but is “not contained in the codomain.” We therefore infer that he is distinguishing between instances in which the output does not exist at all and those in which it does exist but not in the specified codomain. Other algebraists made this distinction as well. Professor A, for example, suggested that, amongst non-examples with everywhere-definedness issues, “there are situations where you just can’t execute the instructions and there are situations where you could execute the instructions but [miss the] target only by a margin.” We therefore introduce two subcategories, which are characterized and elaborated below.
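This distinction can be made concrete. In the Python sketch below (our own illustration), the rule of non-example 7.2 produces no output at all for an input like (3, 0), whereas the rule of non-example 7.5 produces an output for x = -1 that merely falls outside the proposed codomain N.

```python
# Failure mode 1 (non-example 7.2): for some inputs the rule cannot be
# executed at all -- the quotient 3/0 does not exist anywhere.
def image_under_f(a, b):
    """Return the image of (a, b) under a/b, or None if no image exists."""
    if b == 0:
        return None  # "there's no division by zero, ever"
    return a / b

# Failure mode 2 (non-example 7.5): the rule can be executed, but for
# x = -1 the output (-1) is not contained in the proposed codomain N.
def image_under_g(x, in_codomain=lambda y: y >= 0):
    """Return x**3 if it lies in the proposed codomain N, else None."""
    y = x ** 3  # this output always exists...
    return y if in_codomain(y) else None  # ...but may miss the codomain

print(image_under_f(3, 0))  # None -- no candidate output exists at all
print(image_under_g(-1))    # None -- the cube exists but lies outside N
print(image_under_g(2))     # 8
```

The two `None` results arise for different reasons, mirroring Professor A's contrast between inputs where "you just can't execute the instructions" and inputs where the executed instructions merely miss the proposed codomain.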

Everywhere-definedness – Codomain Expansion

The everywhere-definedness – codomain expansion subcategory includes the everywhere-definedness non-examples for which the output exists in some natural, accessible superset of the proposed codomain. We observed that these kinds of non-examples were often discussed by the mathematicians in the context of expanding the proposed codomain (hence the category name):

Professor A: “The codomain should generally be some space that’s large enough.”

Professor D: “It’s not a function because that formula is not defined on every element of the domain. So [we’re] having to adjust the codomain.”

Professor E: These kinds of non-examples “are basically not functions in the same way. There is a way to extend the codomain to make them functions.”

For example, Hungerford (2014) considered the rule \(f\left(x\right)=\frac{x}{2}\) in which the proposed domain and codomain are both \({\mathbb{Z}}\), pointing out that “the rule of \(f\) makes sense for odd integers” (p. 513). We interpret this to mean that the rule can, in fact, be evaluated for odd integers (such as 9) to obtain some number (9/2). However, they go on to note that “\(f\left(9\right)=9/2\), which is not in \({\mathbb{Z}}\)” (p. 513), the codomain. In light of our comments above, we observe that replacing the proposed codomain \({\mathbb{Z}}\) with, say, \({\mathbb{Q}}\), repairs this non-example and resolves the issue. We note that these kinds of non-examples can also be repaired by restricting the domain – for instance, restricting the domain of \(f\) to \(2{\mathbb{Z}}\) also resolves the issue – but our focus here is on the fact that they can be repaired by broadening the codomain. This serves to distinguish this subcategory from the everywhere-definedness – domain restriction subcategory (which, as we will discuss in the “Everywhere-definedness – Domain Restriction” section, must be repaired by restricting the domain because it cannot be easily or naturally repaired by broadening the codomain). Thus, we propose the following characteristics of everywhere-definedness – codomain expansion:

There exists at least one input for which the corresponding output is not an element of the proposed codomain.

The proposed correspondence can be repaired by broadening the proposed codomain in a natural way.

Consider, for instance, the aforementioned non-example 7.5 in Fig. 7. Notice that (1) the cube of each negative integer exists, but many of these outputs are not contained in \({\mathbb{N}}\), the proposed codomain, and (2) this non-example can be repaired by broadening the codomain from \({\mathbb{N}}\) to \({\mathbb{Z}}\). We also classify non-example 7.3 in the everywhere-definedness – codomain expansion category. Beachy and Blair (2019), for instance, called attention to the fact that “we immediately run into a problem: the square root of a negative number cannot exist in the set of real numbers” (p. 52). The mathematicians identified the same issue. Professor B, for example, noted that the proposed correspondence “is not defined for negative real numbers, so therefore it’s not a function.” Professor A specified that, with the proposed codomain, “there is no square root negative one or something, so this process is no good.” Professor A later clarified, however, that \(\sqrt{-1}\) does exist, noting that “we’re going to have to deal with complex roots, which means we need to modify the codomain.” Along these lines, Beachy and Blair (2019) pointed out that “we can enlarge the codomain to the set \({\mathbb{C}}\) of all complex numbers, in which case the formula \(f(x)=\sqrt{x}\) yields a function \(f:{\mathbb{R}}\to {\mathbb{C}}\)” (p. 52). Non-example 7.3 belongs to the everywhere-definedness – codomain expansion category, then, because (1) there is indeed at least one input for which the corresponding output is not an element of the codomain, and (2) the non-example can be repaired by broadening the proposed codomain from \({\mathbb{R}}\) to \({\mathbb{C}}\). See Fig. 8 for an illustration of non-examples 7.3 and 7.5; other non-examples that we classified in this category appear in Fig. 9.

figure 8

Illustrations of non-examples 7.3 and 7.5 ( everywhere-definedness – codomain expansion )

figure 9

Additional non-examples in the everywhere-definedness – codomain expansion category
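Hungerford's x/2 example can be replayed concretely. The Python sketch below is our own illustration: the rule always produces a rational number, so broadening the codomain from Z to Q repairs the non-example, exactly as characterized above.

```python
from fractions import Fraction

# Non-example (Hungerford): f(x) = x/2 with proposed domain and codomain
# both Z. The rule "makes sense" for odd integers -- f(9) = 9/2 exists --
# but 9/2 is not an element of Z.
def f(x):
    return Fraction(x, 2)  # the output always exists as a rational number

def in_Z(q):
    return q.denominator == 1  # integers are the rationals with denominator 1

def in_Q(q):
    return True  # every output of f is a rational number by construction

print(f(9), in_Z(f(9)))  # 9/2 False -- not everywhere defined into Z
# Broadening the codomain from Z to Q repairs the non-example:
print(all(in_Q(f(x)) for x in range(-10, 11)))  # True
```

As the surrounding text notes, restricting the domain to 2Z would also work here; the point of this subcategory is that a natural codomain expansion (Z to Q) is available.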

Everywhere-definedness – Domain Restriction

The everywhere-definedness – domain restriction category refers to those everywhere-definedness non-examples for which the image of a given input does not exist in any set that is accessible (see Footnote 4) to an introductory abstract algebra student. This idea was often discussed in terms of restricting the domain to repair the non-example in question:

Professor B: “You just have to change the domain. […] If there’s no sensible definition at some point in your domain, then you have to change the domain.”

Professor C: “You restrict the domain to make that a function.”

Professor E: “It’s a domain problem, a domain error. […] Modifying the domain, you know, you can always make it smaller”.

The key theme we observe from these excerpts is that the mathematicians viewed the issue as domain-related. In particular, we note Professor B’s use of the phrase “have to,” which highlights the fact that repairing such non-examples by replacing the proposed codomain is perhaps not sensible (we illustrate this point using non-examples 7.2 and 7.4 below). This theme forms the basis for the characteristics of this category of non-examples:

There exists at least one input in the proposed domain for which the corresponding output is not an element of the proposed codomain.

The proposed correspondence can only be repaired by restricting the domain.

Consider again non-example 7.2. Characteristic 1 is satisfied because, as previously noted, there exist inputs in the domain (such as (2,0)) for which there is no corresponding image in the codomain. Regarding Characteristic 2, the mathematicians generally framed ‘divide by 0’ as an issue that could be resolved by repairing the domain . Professor E, for instance, noted that he “would fix this by changing the domain.” Illustrating one possible way to do this, Fraleigh ( 2002 ) restricted the domain to pairs of positive rational numbers (i.e., the set \({\mathbb{Q}}^{+}\times {\mathbb{Q}}^{+}\) ) and noted that, as a result of this modification, the conditions for function “are satisfied” (p. 25). Professor B made an even stronger statement, noting that “you just have to fix the domain by saying that \(b\) has to not be zero. And then it makes sense.” Put another way, there is no (accessible) superset containing the proposed codomain that can be used to repair this non-example in a natural way.

We also note that non-example 7.4 can be classified in the everywhere-definedness – domain restriction category. Regarding Characteristic 1, Fraleigh (2002) explained that “the usual matrix addition is not a binary operation on \({M}_{m\times n}({\mathbb{R}})\) since \(A+B\) is not defined for an ordered pair (A,B) of matrices having different numbers of rows or of columns” (p. 21). So, for example, the input (ordered pair) consisting of, say, a \(2\times 3\) matrix \(A\) and a \(2\times 2\) matrix \(B\) does not have a sum in the proposed codomain because \(A\) and \(B\) have a different number of columns (3 vs. 2, respectively). For the related case of matrix multiplication, Professor B pointed out that “matrix multiplication is only defined if the number of columns in A equals the number of rows in B.” We note that the ordered pair consisting of the aforementioned \(2\times 3\) matrix \(A\) and the \(2\times 2\) matrix \(B\) also has no image in the proposed codomain with respect to matrix multiplication. Regarding Characteristic 2, the mathematicians suggested that these non-examples could be repaired by restricting the domain. For instance, Professor E proposed restricting the domain to \({M}_{n\times n}\left({\mathbb{R}}\right)\times {M}_{n\times n}({\mathbb{R}})\): “you could fix this by fixing the size, you know? You could say, like, square matrices of n by n ... n by n matrices would be fine.” Professor B also commented that “in this case, you have to restrict that set [the domain] to the pairs of matrices that are consistent.” We again interpreted the use of the phrase ‘have to’ to refer to the fact that repairing this non-example by expanding the codomain is neither natural nor sensible – put another way, there is no accessible superset of \({M}_{m\times n}({\mathbb{R}})\) that contains the matrices \(A+B\) and \(AB\) (as stipulated above). In such cases, Professor B noted, “we just say it’s undefined.” See Fig. 10 for an illustration of non-examples 7.2 and 7.4; for other non-examples that we classified in this category, see Fig. 11.

figure 10

Illustrations of non-examples 7.2 and 7.4 ( everywhere-definedness – domain restriction )

figure 11

Additional non-examples in the everywhere-definedness – domain restriction category
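The matrix case can likewise be made concrete. The following Python sketch (our own illustration, using nested lists rather than any particular matrix library) shows that a 2×3 and a 2×2 matrix have no sum at all, and that restricting the domain to same-shape pairs repairs the non-example.

```python
# Non-example 7.4 (sketch): entrywise matrix addition, attempted on all
# pairs of real matrices. A 2x3 matrix and a 2x2 matrix have no sum in any
# accessible set, so the only natural repair is to restrict the domain.
def matrix_sum(A, B):
    """Return A + B if the shapes match, else None (no image exists)."""
    if len(A) != len(B) or any(len(a) != len(b) for a, b in zip(A, B)):
        return None  # mismatched shapes: the sum is simply undefined
    return [[x + y for x, y in zip(a, b)] for a, b in zip(A, B)]

A = [[1, 2, 3], [4, 5, 6]]  # a 2x3 matrix
B = [[1, 0], [0, 1]]        # a 2x2 matrix
print(matrix_sum(A, B))     # None -- the pair (A, B) has no image

# Restricting the domain to pairs of matrices of matching shape (for
# instance, pairs of 2x2 matrices) repairs the non-example.
C = [[2, 1], [1, 2]]
print(matrix_sum(B, C))     # [[3, 1], [1, 3]]
```

There is no "larger codomain" into which A + B could land, mirroring Professor B's remark that in such cases "we just say it's undefined."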

Using a textbook analysis and semi-structured interviews with mathematicians, we have identified categories that highlight key characteristics of the non-examples in the instructional example space. These categories, we propose, offer a clearer image of what it means for a set of non-examples of function in introductory abstract algebra to be ‘well chosen’ (see Fig. 12 for a summary of these categories and their characteristics; see also Footnote 5).

figure 12

Summary of categories of non-examples in the instructional example space

In this section, we outline our conjectures pertaining to the importance and implications of these categories, identify the contributions of this work, and discuss limitations and future research.

Importance and Implications of These Categories

We examined the instructional example space to identify the characteristics of non-examples that would be advantageous for students to have in their personal example spaces. Here we discuss the four categories in light of this objective.

Recall the theme in the literature that students experience considerable challenges with functions–even in advanced mathematics–in part because they have a view of function that focuses predominantly on the rule (to the exclusion of the domain and codomain). We propose that these categories provide a meaningful way to parse the instructional example space because they can each be viewed and characterized in terms of meaningful relationships with the domain or codomain and therefore hold potential for supporting students in developing a more comprehensive view of function as a coordination of the rule, domain, and codomain. The two subcategories in the well-definedness category, for instance, can be characterized by either a choice in the domain (amongst equivalent representations of the same element) or the codomain (amongst multiple output values assigned by the rule). The two subcategories in the everywhere-definedness category can be characterized by determining whether the non-example can be repaired by expanding the codomain (if the targeted output exists in some accessible superset) or must be repaired by restricting the domain (if the targeted output does not exist in an accessible superset). We therefore hypothesize that non-examples chosen according to these four categories do indeed offer potential opportunities for students to move beyond rule-only reasoning to develop a more comprehensive view of function that explicitly attends to the domain and codomain.

We consider the well-definedness – domain choice and everywhere-definedness – codomain expansion categories to be particularly important to include in introductory abstract algebra instruction (and, accordingly, for students to incorporate into their own example spaces) because these categories include non-examples not included in a typical introductory student’s personal example space. Put another way, we suspect that many introductory abstract algebra students at the beginning of the course are more familiar with non-examples in the well-definedness – codomain choice and everywhere-definedness – domain restriction categories. This is notable for two reasons. First, experience with well-definedness – domain choice and everywhere-definedness – codomain expansion is critical for subsequent reasoning with functions in abstract algebra. Well-definedness – domain choice is critical for reasoning with functions defined on sets of equivalence classes or quotient structures (as in the First Isomorphism Theorem or results related to the formal construction of the rational numbers). In particular, students need to check well-definedness issues related to equivalence every time they define a function whose domain involves a quotient structure, a common task. Everywhere-definedness – codomain expansion is important in abstract algebra when reasoning, for example, about the closure of a proposed binary operation or when attempting to define a mapping between two algebraic structures. While students likely have some prior exposure to domain restriction (e.g., inverse functions), the notion of expanding the codomain is likely to be less familiar.

Second, we suspect that rule-only reasoning is insufficient to determine that many of the non-examples in well-definedness – domain choice and everywhere-definedness – codomain expansion are, in fact, non-examples. Consider, for instance, non-examples 2.1 and 2.3 from the well-definedness – domain choice category. While the issue with these non-examples resides in the representation of elements in the domain, they both feature simple, familiar formulas (addition and the identity) that are commonly associated in previous courses with functions on the real numbers. There are no obvious pitfalls (such as the prototypical division by zero; see non-examples 7.2 and 7.6) or multiple outputs (detectable via the vertical line test; see non-example 2.4) that are observable simply by examining the rule. That is, we suspect that rule-only reasoners would likely overlook the existence of equivalent representations (well-definedness – domain choice Characteristic 1) and thus would have no means of perceiving that the rule maps the same input to different outputs (Characteristic 2). We observe the same feature in the everywhere-definedness – codomain expansion category. Non-examples 7.1 and 7.5, for instance, also have familiar rules that have typically been associated with functions in students’ experiences. A student attending only to the rule is therefore unlikely to notice that there is an input for which the corresponding output is not an element of the proposed codomain (everywhere-definedness – codomain expansion Characteristic 1) and thus, in their view, there is no need to repair anything (Characteristic 2). Non-examples in well-definedness – domain choice and everywhere-definedness – codomain expansion are thus exactly the kinds of non-examples for which rule-only reasoning is least well-suited.
This underscores the need to deliberately incorporate these categories into instruction to provide students with opportunities to incorporate the domain and codomain into their views of function.
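An everywhere-definedness failure of the kind discussed above can likewise be exposed computationally: rather than inspecting the rule, one checks whether every input's output actually lands in the proposed codomain. A sketch, where the specific rule (halving, proposed as a map from Z to Z) is our own illustration rather than one of the paper's numbered non-examples:

```python
from fractions import Fraction

def escapes_codomain(rule, domain, in_codomain):
    """Return the inputs whose outputs fall outside the proposed codomain."""
    return [x for x in domain if not in_codomain(rule(x))]

# Proposed "function" h: Z -> Z with h(x) = x/2. The rule looks familiar
# and harmless, but odd inputs produce non-integer outputs, so h is not
# everywhere-defined into Z. It IS a function into Q (codomain expansion).
domain = range(-5, 6)
bad = escapes_codomain(lambda x: Fraction(x, 2),
                       domain,
                       lambda y: y.denominator == 1)  # y is an integer?
print(bad)  # the odd integers in the range
```

The check operationalizes characteristic 1 of the category: it searches for an input whose output is not an element of the proposed codomain.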

Contributions

In addition to providing conjectures about how we might support students’ learning about function in abstract algebra, this paper makes two primary contributions. First, it contributes to the literature on examples and example spaces. We consider our methodology–a textbook analysis paired with semi-structured interviews with experts–to be a particularly helpful way to gain insight into the instructional example space (and the conceptual structure of mathematical ideas more generally). This paper is also one of only a few analyses of non-examples, which are a key element of example spaces that have not received much attention in the literature. We believe our analysis emphasizes the potential that the non-examples in the instructional example space hold for affording insight into the key aspects of a concept, a tool that is currently underutilized.

Second, as previously noted, the literature on functions in abstract algebra is substantial but largely focused on binary operation, homomorphism, and isomorphism. This paper provides one of the few direct, detailed analyses of well- and everywhere-definedness. Our analysis illustrates (1) the various ways in which well- and everywhere-definedness can manifest in various non-examples, and (2) how these manifestations relate to the key notions of the domain and codomain. We note that our structuring of the instructional example space, in addition to providing hypotheses about supporting students’ learning about function in advanced contexts, extends findings from this body of literature. For instance, the framework provides a frame of reference for why students might see functions in advanced mathematics as different from functions in secondary mathematics (e.g., Zandieh et al., 2017). The framework we propose in this study enables us to build upon this idea by proposing a refined conjecture: introductory abstract algebra students experience considerable challenges with function in abstract algebra because their personal example spaces are likely to involve only the well-definedness – codomain choice and everywhere-definedness – domain restriction categories, but much of abstract algebra involves the unfamiliar well-definedness – domain choice and everywhere-definedness – codomain expansion categories. Equivalently, we hypothesize that instructors can support students in overcoming these difficulties by providing them with myriad opportunities to reason about non-examples from each category in the framework.

Future Research

In this paper, we have outlined our investigation of the ways in which experts view the function concept. The implications for students’ learning that we set forth above are therefore empirically-based hypotheses that provide clear direction for testing and refinement in future research. Relatedly, while the literature has generally identified that a function should be understood as a coordination of the rule, domain, and codomain, such a view has not been explicated or directly examined in any detail. We believe that the structuring of the instructional example space reported here can inform such efforts. Though students’ conceptual structures cannot be adequately captured by descriptions of behaviors, we note that doing so can be a useful first step because it then enables the researcher to ask, “how might the student be thinking about this idea that might explain their behaviors?” To this end, we propose that it is a useful first (though by no means last) step to initially characterize a productive view of function as one that enables students to reason successfully about non-examples from all four categories outlined in this paper. This hypothesis could be pursued in future research via task-based clinical interviews (Goldin, 2000) or teaching experiments (Steffe & Thompson, 2000).

Our structuring of non-examples in this study also motivates us to consider whether there might be a similar structuring for examples. This could be pursued directly using a similar design in which examples (instead of non-examples) are the primary focus. Alternatively, the notion of ‘repairing’ a non-example – that is, modifying a given non-example in some way so that it becomes an example – might provide some insight into this issue. For instance, we propose that repairing could be used to extend these categories of non-examples so that they also include the associated repaired examples (see Fig. 13). Given that the instances of repairing that we observed in this study involved explicit attention to the domain or codomain, we further suspect that instructional tasks that prompt students to repair non-examples could support students in moving beyond a rule-only view of function toward a robust view of function as a coordination of the rule, domain, and codomain. We therefore hypothesize that repairing could have potential advantages for researchers (as a productive way to bridge between the non-examples and examples in an example space) and students (as a productive way to develop a robust view of function).
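The two repair moves described here can be sketched concretely. In the following illustration (the rule and sets are our own, not the paper's), the proposed correspondence x → x/2 from Z to Z fails everywhere-definedness, and either repair, restricting the domain or expanding the codomain, turns it into a function:

```python
from fractions import Fraction

# Non-example: x -> x/2 proposed as a function Z -> Z.
# Odd inputs have no output in Z, so it is not everywhere-defined.
rule = lambda x: Fraction(x, 2)
proposed_domain = range(-4, 5)
is_integer = lambda y: y.denominator == 1

# Repair 1 (domain restriction): keep the rule and codomain Z, but
# shrink the domain to the even integers, where every output lands in Z.
evens = [x for x in proposed_domain if x % 2 == 0]
assert all(is_integer(rule(x)) for x in evens)

# Repair 2 (codomain expansion): keep the rule and domain Z, but
# enlarge the codomain from Z to Q, which contains every output.
is_rational = lambda y: isinstance(y, Fraction)
assert all(is_rational(rule(x)) for x in proposed_domain)
```

Each repair succeeds precisely because it attends to the domain or codomain rather than the rule, which is unchanged in both cases.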

Fig. 13 Repairing non-examples to obtain examples

Another fruitful path for future research could involve examining specific types of functions – such as binary operations, homomorphisms, and isomorphisms – through this new lens. One potential option in this vein would be to use the framework to further explore the aforementioned connection between everywhere-definedness and the closure of a binary operation. Another relates to the notions of injectivity and surjectivity, as the injectivity and surjectivity of a function \(f:A\to B\) are equivalent to the well-definedness and everywhere-definedness (respectively) of its inverse \({f}^{-1}:B\to A\). In particular, future research could explore how students whose personal example spaces are structured according to our framework might reason with these subsequent topics.
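For finite functions, the equivalence just mentioned can be verified directly: invert the function as a set of pairs and run the well-definedness and everywhere-definedness checks on the resulting relation. A sketch (all names and the particular function are our own illustration):

```python
def is_injective(f, A):
    return len({f(a) for a in A}) == len(set(A))

def is_surjective(f, A, B):
    return {f(a) for a in A} == set(B)

def inverse_relation(f, A):
    # f^{-1} as a set of (output, input) pairs
    return [(f(a), a) for a in A]

def relation_is_well_defined(R):
    # univalent: no first coordinate paired with two different second coordinates
    seen = {}
    for x, y in R:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

def relation_is_everywhere_defined(R, domain):
    return {x for x, _ in R} == set(domain)

A, B = [0, 1, 2], ['a', 'b', 'c', 'd']
f = {0: 'a', 1: 'b', 2: 'c'}.get  # injective, but not surjective onto B

R = inverse_relation(f, A)
# f injective  <=>  f^{-1} well-defined        (both True here)
assert relation_is_well_defined(R) == is_injective(f, A)
# f surjective <=>  f^{-1} everywhere-defined  (both False here: 'd' unmapped)
assert relation_is_everywhere_defined(R, B) == is_surjective(f, A, B)
```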

Well-definedness is also referred to as univalence in the literature.

For each textbook in the sample, we obtained and analyzed the most recent edition available to us. The two citations for Herstein (1975; 1996) correspond to two different books (not two editions of the same book). For more information, see the Textbooks in Our Sample list.

When needed for clarity, we occasionally modified or reformulated some of the non-examples throughout this paper – usually by inferring reasonable domains and codomains or introducing clear notation – without altering the underlying structure of the proposed correspondence. Additionally, many non-examples appeared in multiple textbooks; throughout this paper we typically note only one.

We have chosen to describe this subcategory in this way because some of its non-examples can indeed be repaired by broadening the codomain, but it is usually simpler (and more coherent in the given context) to restrict the domain. For example, Professor A pointed out that, when repairing non-example 7.6, “you absolutely can pass to the Riemann sphere or something and have it make sense” by defining 1/0 to be the point at infinity. Our use of the term “accessible” is intended to acknowledge that, while instructors and experienced abstract algebra students might be able to repair such non-examples by broadening the codomain in this way, it is arguably simpler and more accessible in the early stages of an introductory abstract algebra course to address such issues by restricting the domain.

Recall that we do not intend for these categories to partition the non-examples in the instructional example space, but rather to simply point out the key characteristics of these non-examples.

Bagley, S., Rasmussen, C., & Zandieh, M. (2015). Inverse composition and identity: The case of function and linear transformation. Journal of Mathematical Behavior, 37 , 36–47.


Bailey, N., Quinn, C., Reed, S. D., Wanner, C. A., McCulloch, A. W., Lovett, J. N., & Sherman, M. F. (2019). Calculus II students’ understanding of the univalence requirement of function. In A. Weinberg, D. Moore-Russo, H. Soto, & M. Wawro (Eds.), Proceedings of the 22nd annual conference on Research in Undergraduate Mathematics Education (pp. 18–26).

Breidenbach, D., Dubinsky, E., Hawks, J., & Nichols, D. (1992). Development of the process conception of function. Educational Studies in Mathematics, 23 , 247–285.

Brown, A., DeVries, D. J., Dubinsky, E., & Thomas, K. (1997). Learning binary operations groups and subgroups. Journal of Mathematical Behavior, 16 (3), 187–239.

Carlson, M. P. (1998). A cross-sectional investigation of the development of the function concept. In E. Dubinsky, A. H. Schoenfeld, & J. J. Kaput (Eds.), CBMS Issues in mathematics education: Research in collegiate mathematics education III (Vol. 7, pp. 115–162). American Mathematical Society.


Carlson, M., Jacobs, S., Coe, E., Larsen, S., & Hsu, E. (2002). Applying covariational reasoning while modeling dynamic events: A framework and a study. Journal for Research in Mathematics Education, 33 (5), 352–378.

Clement, L. L. (2001). What do students really know about functions? Mathematics Teacher, 94 (9), 745–748.

Cook, J. P., & Fukawa-Connelly, T. (2015). The pedagogical examples of groups and rings that algebraists think are most important in an introductory course. Canadian Journal of Science Mathematics and Technology Education, 15 (2), 171–185.

Creswell, J. W. (2012). Educational research: Planning conducting and evaluating quantitative and qualitative research (4th ed.). Pearson.

Dorko, A. (2017). Generalising univalence from single to multivariable settings: The case of Kyle. In A. Weinberg, C. Rasmussen, J. Rabin, M. Wawro & S. Brown (Eds.), Proceedings of the 20th annual conference on Research in Undergraduate Mathematics Education (pp. 562–569).

Dubinsky, E., & Wilson, R. T. (2013). High school students’ understanding of the function concept. The Journal of Mathematical Behavior, 32 (1), 83–101.

Even, R. (1993). Subject-matter knowledge and pedagogical content knowledge: Prospective secondary teachers and the function concept. Journal for Research in Mathematics Education, 24 (2), 94–116.

Even, R., & Tirosh, D. (1995). Subject-matter knowledge and knowledge about students as sources of teacher presentations of the subject-matter. Educational Studies in Mathematics, 29 , 1–20.

Even, R., & Bruckheimer, M. (1998). Univalence: A critical or non-critical characteristic of functions? For the Learning of Mathematics, 18 (3), 30–32.

Fukawa-Connelly, T. P., & Newton, C. (2014). Analyzing the teaching of advanced mathematics courses via the enacted example space. Educational Studies in Mathematics, 87 (3), 323–349.

Fylan, F. (2005). Semi-structured interviewing. In J. Miles & P. Gilbert (Eds.), A handbook of research methods for clinical and health psychology (pp. 65–77). Oxford University Press.

Goldenberg, P., & Mason, J. (2008). Shedding light on and with example spaces. Educational Studies in Mathematics, 69 (2), 183–194.

Goldin, G. A. (2000). A scientific perspective on task-based interviews in mathematics education research. In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathematics and science education (pp. 517–546). Lawrence Erlbaum Associates.

Hausberger, T. (2017). The (homo)morphism concept: Didactic transposition meta-discourse and thematisation. International Journal of Research in Undergraduate Mathematics Education, 3 , 417–443.

Hitt, F. (1998). Difficulties in the articulation of different representations linked to the concept of function. The Journal of Mathematical Behavior, 17 (1), 123–134.

Kabael, T. U. (2011). Generalizing single variable functions to two-variable functions function machine and APOS. Educational Sciences: Theory & Practice, 11 (1), 484–499.

Krueger, R. A., & Casey, M. A. (2009). Focus groups: A practical guide for applied research (4th ed.). Sage Publications.

Larsen, S. (2009). Reinventing the concepts of group and isomorphism: The case of Jessica and Sandra. Journal of Mathematical Behavior, 28 , 119–137.

Leron, U., Hazzan, O., & Zazkis, R. (1995). Learning group isomorphism: A crossroads of many concepts. Educational Studies in Mathematics, 29 , 153–174.

Lockwood, E., Reed, Z., & Caughman, J. S. (2017). An analysis of statements of the multiplication principle in combinatorics discrete and finite mathematics textbooks. International Journal of Research in Undergraduate Mathematics Education, 3 , 381–416.

Melhuish, K. (2019). The group theory concept assessment: A tool for measuring conceptual understanding in introductory group theory. International Journal of Research in Undergraduate Mathematics Education, 5 (3), 359–393.

Melhuish, K., Ellis, B., & Hicks, M. D. (2020a). Group theory students’ perceptions of binary operation. Educational Studies in Mathematics, 103 , 63–81.

Melhuish, K., Lew, K., Hicks, M. D., & Kandasamy, S. S. (2020b). Abstract algebra students’ evoked concept images for functions and homomorphisms. Journal of Mathematical Behavior, 60 , 1–16. https://doi.org/10.1016/j.jmathb.2020.100806

Melhuish, K., & Fagan, J. (2018). Connecting the group theory concept assessment to core concepts at the secondary level. In N. H. Wasserman (Ed.), Connecting abstract algebra to secondary mathematics, for secondary mathematics teachers (pp. 19–45). Springer.


Nardi, E. (2000). Mathematics undergraduates’ responses to semantic abbreviations, ‘geometric’ images and multi-level abstractions in group theory. Educational Studies in Mathematics, 43 , 169–189.

National University Rankings (n.d.). Retrieved April 2, 2020, from https://www.usnews.com/best-colleges/rankings/national-universities

Oehrtman, M., Carlson, M., & Thompson, P. W. (2008). Foundational reasoning abilities that promote coherence in students’ understanding of function. In M. P. Carlson & C. Rasmussen (Eds.), Making the connection: Research and teaching in undergraduate mathematics education (pp. 27–42). Mathematical Association of America.

Rupnow, R. (2019). Instructors’ and students’ images of isomorphism and homomorphism. In A. Weinberg, D. Moore-Russo, H. Soto, & M. Wawro (Eds.), Proceedings of the 22nd Annual Conference on Research in Undergraduate Mathematics Education (pp. 518–525).

Sinclair, N., Watson, A., Zazkis, R., & Mason, J. (2011). The structuring of personal example spaces. The Journal of Mathematical Behavior, 30 (4), 291–303.

Slavit, D. (1997). An alternate route to the reification of function. Educational Studies in Mathematics, 33 (3), 259–281.

Steffe, L. P., & Thompson, P. W. (2000). Teaching experiment methodology: Underlying principles and essential elements. In R. Lesh & A. E. Kelly (Eds.), Handbook of research design in mathematics and science education (pp. 267–307). Lawrence Erlbaum Associates.

Thomas, M. (2003). The role of representation in teacher understanding of function. In N. A. Pateman, B. J. Dougherty, & J. T. Zilliox (Eds.), Proceedings of the 2003 joint meeting of PME and PMENA (Vol. 4, pp. 291–298). University of Hawaii: Center for Research and Development Group.

Thompson, P. W. (1994). Students, functions, and the undergraduate curriculum. In E. Dubinsky, A. H. Schoenfeld, & J. J. Kaput (Eds.), CBMS Issues in mathematics education: Research in collegiate mathematics education I (Vol. 4, pp. 21–44). American Mathematical Society.

Tsamir, P., Tirosh, D., & Levenson, E. (2008). Intuitive nonexamples: The case of triangles. Educational Studies in Mathematics, 69 (2), 81–95.

Watson, A., & Mason, J. (2005). Mathematics as a constructive activity: Learners generating examples . Lawrence Erlbaum Associates.

Weber, K., Mejía-Ramos, J. P., Fukawa-Connelly, T., & Wasserman, N. (2020). Connecting the learning of advanced mathematics with the teaching of secondary mathematics: Inverse functions domain restrictions and the arcsine function. Journal of Mathematical Behavior, 57 , 1–21.

Zandieh, M., Ellis, J., & Rasmussen, C. (2017). A characterization of a unified notion of mathematical function: The case of high school function and linear transformation. Educational Studies in Mathematics, 95 , 21–38.

Zandieh, M. J., & Knapp, J. (2006). Exploring the role of metonymy in mathematical understanding and reasoning: The concept of derivative as an example. Journal of Mathematical Behavior, 25 , 1–17.

Zaslavsky, O. (2019). There is more to examples than meets the eye: Thinking with and through mathematical examples in different settings. The Journal of Mathematical Behavior, 53 , 245–255.

Zazkis, R., & Leikin, R. (2008). Exemplifying definitions: A case of a square. Educational Studies in Mathematics, 69 , 131–148.

Textbooks in Our Sample

Artin, M. (2011). Algebra (2nd ed.). Prentice Hall.

Beachy, J. A., & Blair, W. D. (2019). Abstract algebra (4th ed.). Waveland Press.

Davidson, N., & Gulick, F. (1976). Abstract algebra: An active learning approach. Houghton Mifflin.

Dummit, D. S., & Foote, R. M. (2004). Abstract algebra (3rd ed.). John Wiley & Sons.

Fraleigh, J. B. (2002). A first course in abstract algebra (7th ed.). Pearson.

Gallian, J. A. (2017). Contemporary abstract algebra (9th ed.). Cengage Learning.

Gilbert, L., & Gilbert, J. (2015). Elements of modern algebra (8th ed.). Cengage Learning.

Herstein, I. N. (1975). Topics in Algebra (2nd ed.). John Wiley & Sons.

Herstein, I. N. (1996). Abstract algebra (3rd ed.). Prentice-Hall.

Hodge, J. K., Schlicker, S., & Sundstrom, T. (2014). Abstract algebra: An inquiry-based approach. CRC Press.

Hungerford, T. W. (2014). Abstract algebra (3rd ed.). Brooks/Cole, Cengage Learning.

Pinter, C. C. (2010). A book of abstract algebra (2nd ed.). Dover Publications.

Rotman, J. J. (2006). A first course in abstract algebra with applications (3rd ed.). Pearson Prentice Hall.


Author information

Authors and Affiliations

Mercy College, 555 Broadway, Dobbs Ferry, NY, 10522, USA

Rosaura Uscanga

Oklahoma State University, 406 Mathematical Sciences Building, Stillwater, OK, 74078, USA

John Paul Cook


Corresponding author

Correspondence to John Paul Cook .

Ethics declarations

Conflict of Interests

On behalf of all authors, the corresponding author states that there are no conflicts of interest or competing interests. Additionally, the work described in this paper has not been published before and is not under consideration for publication anywhere else.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Uscanga, R., Cook, J.P. Analyzing the Structure of the Non-examples in the Instructional Example Space for Function in Abstract Algebra. Int. J. Res. Undergrad. Math. Ed. (2022). https://doi.org/10.1007/s40753-022-00166-z


Accepted : 09 February 2022

Published : 07 April 2022

DOI : https://doi.org/10.1007/s40753-022-00166-z


  • Abstract algebra
  • Example space
  • Non-examples
  • Everywhere-definedness

6.1 Overview of Non-Experimental Research

Learning Objectives

  • Define non-experimental research, distinguish it clearly from experimental research, and give several examples.
  • Explain when a researcher might choose to conduct non-experimental research as opposed to experimental research.

What Is Non-Experimental Research?

Non-experimental research  is research that lacks the manipulation of an independent variable. Rather than manipulating an independent variable, researchers conducting non-experimental research simply measure variables as they naturally occur (in the lab or real world).

Most researchers in psychology consider the distinction between experimental and non-experimental research to be an extremely important one. This is because although experimental research can provide strong evidence that changes in an independent variable cause differences in a dependent variable, non-experimental research generally cannot. As we will see, however, this inability to make causal conclusions does not mean that non-experimental research is less important than experimental research.

When to Use Non-Experimental Research

As we saw in the last chapter , experimental research is appropriate when the researcher has a specific research question or hypothesis about a causal relationship between two variables—and it is possible, feasible, and ethical to manipulate the independent variable. It stands to reason, therefore, that non-experimental research is appropriate—even necessary—when these conditions are not met. There are many times in which non-experimental research is preferred, including when:

  • the research question or hypothesis relates to a single variable rather than a statistical relationship between two variables (e.g., How accurate are people’s first impressions?).
  • the research question pertains to a non-causal statistical relationship between variables (e.g., is there a correlation between verbal intelligence and mathematical intelligence?).
  • the research question is about a causal relationship, but the independent variable cannot be manipulated or participants cannot be randomly assigned to conditions or orders of conditions for practical or ethical reasons (e.g., does damage to a person’s hippocampus impair the formation of long-term memory traces?).
  • the research question is broad and exploratory, or is about what it is like to have a particular experience (e.g., what is it like to be a working mother diagnosed with depression?).

Again, the choice between the experimental and non-experimental approaches is generally dictated by the nature of the research question. Recall that the three goals of science are to describe, to predict, and to explain. If the goal is to explain and the research question pertains to causal relationships, then the experimental approach is typically preferred. If the goal is to describe or to predict, a non-experimental approach will suffice. But the two approaches can also be used to address the same research question in complementary ways. For example, after his original (non-experimental) obedience study, Milgram conducted experiments to explore the factors that affect obedience. He manipulated several independent variables, such as the distance between the experimenter and the participant, the distance between the participant and the confederate, and the location of the study (Milgram, 1974) [1] .

Types of Non-Experimental Research

Non-experimental research falls into three broad categories: cross-sectional research, correlational research, and observational research. 

First, cross-sectional research  involves comparing two or more pre-existing groups of people. What makes this approach non-experimental is that there is no manipulation of an independent variable and no random assignment of participants to groups. Imagine, for example, that a researcher administers the Rosenberg Self-Esteem Scale to 50 American college students and 50 Japanese college students. Although this “feels” like a between-subjects experiment, it is a cross-sectional study because the researcher did not manipulate the students’ nationalities. As another example, if we wanted to compare the memory test performance of a group of cannabis users with a group of non-users, this would be considered a cross-sectional study because for ethical and practical reasons we would not be able to randomly assign participants to the cannabis user and non-user groups. Rather we would need to compare these pre-existing groups which could introduce a selection bias (the groups may differ in other ways that affect their responses on the dependent variable). For instance, cannabis users are more likely to use more alcohol and other drugs and these differences may account for differences in the dependent variable across groups, rather than cannabis use per se.

Cross-sectional designs are commonly used by developmental psychologists who study aging and by researchers interested in sex differences. Using this design, developmental psychologists compare groups of people of different ages (e.g., young adults aged 18-25 versus older adults aged 60-75) on various dependent variables (e.g., memory, depression, life satisfaction). Of course, the primary limitation of using this design to study the effects of aging is that differences between the groups other than age may account for differences in the dependent variable. For instance, differences between the groups may reflect the generation that people come from (a cohort effect) rather than a direct effect of age. For this reason, longitudinal studies in which one group of people is followed as they age offer a superior means of studying the effects of aging. Cross-sectional designs are also commonly used to study sex differences. Since researchers cannot practically or ethically manipulate the sex of their participants, they must rely on cross-sectional designs to compare groups of men and women on different outcomes (e.g., verbal ability, substance use, depression). Using these designs, researchers have discovered that men are more likely than women to suffer from substance abuse problems, while women are more likely than men to suffer from depression. But using this design alone, it is unclear what is causing these differences: are they due to environmental factors like socialization or to biological factors like hormones?
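Analytically, a cross-sectional comparison like those described above amounts to comparing pre-existing groups on a dependent variable. A minimal sketch with invented memory scores (the data are fabricated for illustration only):

```python
from statistics import mean, stdev

# Hypothetical memory-test scores for two pre-existing age groups.
# No random assignment: participants arrive already belonging to a group.
young = [14, 16, 15, 17, 13, 16]
older = [12, 13, 11, 14, 12, 13]

print(f"young: M = {mean(young):.2f}, SD = {stdev(young):.2f}")
print(f"older: M = {mean(older):.2f}, SD = {stdev(older):.2f}")
# A between-groups difference here could reflect age -- or a cohort
# effect, since group membership was selected, not manipulated.
```

The computation is identical to what one would run on a between-subjects experiment; what differs, as the text emphasizes, is the causal conclusion the design licenses.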

When researchers use a participant characteristic to create groups (nationality, cannabis use, age, sex), the independent variable is usually referred to as an experimenter-selected independent variable (as opposed to the experimenter-manipulated independent variables used in experimental research). Figure 6.1 shows data from a hypothetical study on the relationship between whether people make a daily list of things to do (a “to-do list”) and stress. Notice that it is unclear whether this is an experiment or a cross-sectional study because it is unclear whether the independent variable was manipulated by the researcher or simply selected by the researcher. If the researcher randomly assigned some participants to make daily to-do lists and others not to, then the independent variable was experimenter-manipulated and it is a true experiment. If the researcher simply asked participants whether they made daily to-do lists or not, then the independent variable is experimenter-selected and the study is cross-sectional. The distinction is important because if the study was an experiment, then it could be concluded that making the daily to-do lists reduced participants’ stress. But if it was a cross-sectional study, it could only be concluded that these variables are statistically related. Perhaps being stressed has a negative effect on people’s ability to plan ahead. Or perhaps people who are more conscientious are more likely to make to-do lists and less likely to be stressed. The crucial point is that what defines a study as experimental or cross-sectional is not the variables being studied, nor whether the variables are quantitative or categorical, nor the type of graph or statistics used to analyze the data. It is how the study is conducted.

Figure 6.1  Results of a Hypothetical Study on Whether People Who Make Daily To-Do Lists Experience Less Stress Than People Who Do Not Make Such Lists

Second, the most common type of non-experimental research conducted in Psychology is correlational research. Correlational research is considered non-experimental because it focuses on the statistical relationship between two variables but does not include the manipulation of an independent variable.  More specifically, in correlational research , the researcher measures two continuous variables with little or no attempt to control extraneous variables and then assesses the relationship between them. As an example, a researcher interested in the relationship between self-esteem and school achievement could collect data on students’ self-esteem and their GPAs to see if the two variables are statistically related. Correlational research is very similar to cross-sectional research, and sometimes these terms are used interchangeably. The distinction that will be made in this book is that, rather than comparing two or more pre-existing groups of people as is done with cross-sectional research, correlational research involves correlating two continuous variables (groups are not formed and compared).
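The statistical relationship at the heart of correlational research is typically summarized with Pearson's r. A self-contained sketch using invented self-esteem and GPA scores (fabricated data, for illustration only):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Both variables are measured as they occur; neither is manipulated.
self_esteem = [22, 30, 28, 35, 26, 31]
gpa = [2.8, 3.4, 3.1, 3.8, 3.0, 3.3]
print(round(pearson_r(self_esteem, gpa), 2))  # 0.97
```

Note that a strong r on its own says nothing about direction of causation; that limitation belongs to the design, not the statistic.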

Third, observational research is non-experimental because it focuses on making observations of behavior in a natural or laboratory setting without manipulating anything. Milgram’s original obedience study was non-experimental in this way. He was primarily interested in the extent to which participants obeyed the researcher when he told them to shock the confederate, and he observed all participants performing the same task under the same conditions. The study by Loftus and Pickrell described at the beginning of this chapter is also a good example of observational research. The variable was whether participants “remembered” having experienced mildly traumatic childhood events (e.g., getting lost in a shopping mall) that they had not actually experienced but that the researchers asked them about repeatedly. In this particular study, nearly a third of the participants “remembered” at least one event. (As with Milgram’s original study, this study inspired several later experiments on the factors that affect false memories.)

The types of research we have discussed so far are all quantitative, referring to the fact that the data consist of numbers that are analyzed using statistical techniques. But as you will learn in this chapter, many observational research studies are more qualitative in nature. In  qualitative research , the data are usually nonnumerical and therefore cannot be analyzed using statistical techniques. Rosenhan’s observational study of the experience of people in a psychiatric ward was primarily qualitative. The data were the notes taken by the “pseudopatients”—the people pretending to have heard voices—along with their hospital records. Rosenhan’s analysis consists mainly of a written description of the experiences of the pseudopatients, supported by several concrete examples. To illustrate the hospital staff’s tendency to “depersonalize” their patients, he noted, “Upon being admitted, I and other pseudopatients took the initial physical examinations in a semi-public room, where staff members went about their own business as if we were not there” (Rosenhan, 1973, p. 256) [2] . Qualitative data has a separate set of analysis tools depending on the research question. For example, thematic analysis would focus on themes that emerge in the data or conversation analysis would focus on the way the words were said in an interview or focus group.

Internal Validity Revisited

Recall that internal validity is the extent to which the design of a study supports the conclusion that changes in the independent variable caused any observed differences in the dependent variable.  Figure 6.2  shows how experimental, quasi-experimental, and non-experimental (correlational) research vary in terms of internal validity. Experimental research tends to be highest in internal validity because the use of manipulation (of the independent variable) and control (of extraneous variables) help to rule out alternative explanations for the observed relationships. If the average score on the dependent variable in an experiment differs across conditions, it is quite likely that the independent variable is responsible for that difference. Non-experimental (correlational) research is lowest in internal validity because these designs fail to use manipulation or control. Quasi-experimental research (which will be described in more detail in a subsequent chapter) is in the middle because it contains some, but not all, of the features of a true experiment. For instance, it may fail to use random assignment to assign participants to groups or fail to use counterbalancing to control for potential order effects. Imagine, for example, that a researcher finds two similar schools, starts an anti-bullying program in one, and then finds fewer bullying incidents in that “treatment school” than in the “control school.” While a comparison is being made with a control condition, the lack of random assignment of children to schools could still mean that students in the treatment school differed from students in the control school in some other way that could explain the difference in bullying (e.g., there may be a selection effect).

Figure 6.2 Internal Validity of Correlational, Quasi-Experimental, and Experimental Studies. Experiments are generally high in internal validity, quasi-experiments lower, and correlational studies lower still.

Notice also in  Figure 6.2  that there is some overlap in the internal validity of experiments, quasi-experiments, and correlational studies. For example, a poorly designed experiment that includes many confounding variables can be lower in internal validity than a well-designed quasi-experiment with no obvious confounding variables. Internal validity is also only one of several validities that one might consider, as noted in Chapter 5.

Key Takeaways

  • Non-experimental research is research that lacks the manipulation of an independent variable.
  • There are two broad types of non-experimental research: correlational research, which focuses on statistical relationships between variables that are measured but not manipulated, and observational research, in which participants are observed and their behavior is recorded without the researcher interfering with or manipulating any variables.
  • In general, experimental research is high in internal validity, correlational research is low in internal validity, and quasi-experimental research is in between.
The following studies illustrate this range of designs:

  • A researcher conducts detailed interviews with unmarried teenage fathers to learn about how they feel and what they think about their role as fathers and summarizes their feelings in a written narrative.
  • A researcher measures the impulsivity of a large sample of drivers and looks at the statistical relationship between this variable and the number of traffic tickets the drivers have received.
  • A researcher randomly assigns patients with low back pain either to a treatment involving hypnosis or to a treatment involving exercise. She then measures their level of low back pain after 3 months.
  • A college instructor gives weekly quizzes to students in one section of his course but no weekly quizzes to students in another section to see whether this has an effect on their test performance.
References

  • Milgram, S. (1974). Obedience to authority: An experimental view. New York, NY: Harper & Row.
  • Rosenhan, D. L. (1973). On being sane in insane places. Science, 179, 250–258.


Non-Parametric Statistics: Types, Tests, and Examples

  • Pragya Soni
  • May 12, 2022


Statistics, an essential element of data management and predictive analysis, is classified into two types: parametric and non-parametric.

Parametric tests rest on assumptions about the population or data source, while non-parametric tests make no such assumptions and work directly from the observed data. Here is a detailed blog about non-parametric statistics.

What is the Meaning of Non-Parametric Statistics?

Unlike parametric statistics, non-parametric statistics is the branch of statistics that does not rely on parametrized families of probability distributions. Non-parametric methods are either distribution-free or work with a distribution whose parameters are left unspecified.

Non-parametric statistics is built around non-parametric tests: procedures that require no assumptions about the sampled population. For this reason, non-parametric tests are also known as distribution-free tests, as they do not tie the data to any particular parametric family of probability distributions.

In other words, non-parametric statistics is a statistical approach in which the data are not required to fit a normal distribution. Non-parametric methods typically use ordinal data, which relies on ranking or order rather than on the numbers themselves; this applies across statistical tests, inferences, statistical models, and descriptive statistics.

Non-parametric statistics can thus be defined as a statistical method in which the data do not come from a prescribed model determined by a small number of parameters; it stands apart from the normal distribution model, factorial designs, and regression modeling.

Unlike parametric models, non-parametric methods are quite easy to use, but they do not offer the same precision as other statistical models. Non-parametric statistics are therefore preferred for studies where a change in the input has little or no effect on the output: even if the numerical data change, the results are likely to stay the same.
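That last point, results that survive changes in the raw numbers, follows from the fact that rank-based methods look only at the ordering of the data. A minimal pure-Python sketch (the `ranks` helper and the toy numbers are illustrative, not from any library):

```python
# Rank-based statistics ignore raw magnitudes, so any strictly
# increasing transform of the data leaves the ranks unchanged.

def ranks(values):
    """Return the rank (1 = smallest) of each value, averaging ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # group tied values together
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

data = [3.1, 9.8, 5.5, 5.5, 1.2]
transformed = [x ** 3 for x in data]  # strictly increasing transform

print(ranks(data))         # → [2.0, 5.0, 3.5, 3.5, 1.0]
print(ranks(transformed))  # identical ranks, so a rank test gives the same answer
```

Because both lists produce the same ranks, any test built on ranks (Wilcoxon, Mann-Whitney, Spearman, and so on) returns the same result for either version of the data.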


How does Non-Parametric Statistics Work ?

Parametric statistics work with parameters such as the mean, standard deviation, and variance, using the observed data to estimate the parameters of a distribution. The data are often assumed to come from a normal distribution with unknown parameters.

Non-parametric statistics, by contrast, do not assume that the data come from a normal (or any other common) distribution. Instead, non-parametric methods treat the data on a different scale of measurement, such as ranks, since the actual data-generating process may be far from normal.

Types of Non-Parametric Statistics

Non-parametric statistics are further classified into two major categories. Here is a brief introduction to both:

1. Descriptive Statistics

Descriptive statistics is one type of non-parametric statistics. It summarizes an entire population or a sample of a population, breaking the data down into measures of central tendency and measures of variability.

2. Statistical Inference

Statistical inference is the process of drawing conclusions about a population from statistics calculated on a sample drawn from that population.

Some Examples of Non-Parametric Tests

In recent years, non-parametric methods have gained appreciation for their ease of use. Non-parametric statistics are applicable to a huge variety of data regardless of the mean, sample size, or other characteristics. Because they make fewer assumptions, they have a wider scope than parametric statistics.

Here are some common examples of non-parametric statistics:

Consider the case of a financial analyst who wants to estimate the value at risk of an investment. Rather than assuming that earnings follow a normal distribution, the analyst uses a histogram to estimate the distribution non-parametrically.
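A minimal sketch of that idea: estimate a 5% value-at-risk cutoff directly from the empirical distribution of returns instead of from a fitted normal. The return series and the `empirical_quantile` helper are invented for illustration:

```python
# Nearest-rank empirical quantile: no distributional assumption,
# just the sorted observed data (toy returns, purely illustrative).

def empirical_quantile(values, q):
    """Nearest-rank empirical quantile for 0 < q <= 1."""
    s = sorted(values)
    k = max(0, int(q * len(s)) - 1)  # nearest-rank index
    return s[k]

returns = [-0.08, -0.03, 0.01, 0.02, -0.01, 0.04, 0.00, -0.05, 0.03, 0.02,
           0.01, -0.02, 0.05, -0.04, 0.02, 0.00, 0.01, -0.06, 0.03, 0.02]
var_5 = empirical_quantile(returns, 0.05)
print(var_5)  # → -0.08, the worst-5% cutoff of the observed returns
```

The histogram the analyst draws is just a binned view of this same empirical distribution; no parameter of a normal curve is ever estimated.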

Consider another case of a researcher studying the relationship between the sleep cycle and health in human beings. Parametric statistics would make the process quite complicated, so instead of using a method that assumes a normal distribution for illness frequency, the researcher opts for a non-parametric method such as quantile regression analysis.

Similarly, a health researcher who wants to estimate the number of babies born underweight in India will also employ non-parametric measurements for data testing.

A marketer interested in measuring market growth or the success of a company will likewise employ a non-parametric approach.

A researcher testing consumer preferences for a product will also use a non-parametric test, since responses on a scale such as strongly agree, agree, slightly agree, and disagree make a parametric treatment hard to justify.

Any science or social science research involving nominal variables such as gender, marital status, employment, or educational qualification also calls for non-parametric statistics, which play an important role when the source data lack a clear numerical interpretation.


What are Non-Parametric Tests?


Types of Non-Parametric Tests

Here is a list of common non-parametric tests:

Wilcoxon Signed Rank Test

The Wilcoxon signed-rank test (the rank-sum test is its counterpart for independent samples) is a non-parametric test that works on two paired groups. Its main focus is the comparison between the paired groups: the test calculates the difference within each pair and analyzes those differences.

The Wilcoxon test is classified as a statistical hypothesis test and is used to compare two related samples, matched samples, or repeated measurements on a single sample to assess whether their population mean ranks differ.
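The statistic behind the test is simple rank arithmetic. A hand-rolled sketch of the W+ statistic for paired data (toy numbers, invented for illustration; it assumes no zero differences and no ties in |difference|):

```python
# W+ = sum of the ranks of the positive pairwise differences.
# A real analysis would compare W+ to a reference distribution;
# scipy.stats.wilcoxon does that and returns a p-value.

def signed_rank_statistic(before, after):
    """Sum of ranks (by |difference|) of the positive differences."""
    diffs = [b - a for b, a in zip(before, after)]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    return sum(rank + 1 for rank, i in enumerate(order) if diffs[i] > 0)

before = [125, 115, 130, 140, 140]
after  = [110, 122, 125, 120, 141]
print(signed_rank_statistic(before, after))  # → 11
```

In practice you would call `scipy.stats.wilcoxon(before, after)`, which computes the same kind of statistic and also handles ties, zeros, and the p-value.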

Mann-Whitney U Test

The Mann-Whitney U test is also known as the Mann-Whitney-Wilcoxon test, the Wilcoxon rank-sum test, and the Wilcoxon-Mann-Whitney test. It is a non-parametric test of the null hypothesis that a randomly selected value from one sample is equally likely to be greater or less than a randomly selected value from the other sample.

The Mann-Whitney test is usually used to compare characteristics between two independent groups when the dependent variable is either ordinal or continuous but not normally distributed. Four requirements must be met for a Mann-Whitney test: the first three relate to the study design, and the fourth reflects the nature of the data.
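The U statistic itself just counts, over all cross-sample pairs, how often a value from one group beats a value from the other. A sketch with tiny toy samples (invented numbers; `scipy.stats.mannwhitneyu` is the practical route):

```python
# U for sample x: number of (x_i, y_j) pairs with x_i > y_j,
# counting exact ties as one half.

def mann_whitney_u(x, y):
    u = 0.0
    for a in x:
        for b in y:
            if a > b:
                u += 1
            elif a == b:
                u += 0.5
    return u

x = [7, 9, 12]
y = [4, 8, 10, 11]
print(mann_whitney_u(x, y))  # → 7.0 out of a possible 3 * 4 = 12 pairs
```

Under the null hypothesis described above, U should sit near half the number of pairs (here 6); values far from that in either direction are evidence that one group tends to produce larger values.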

Kruskal Wallis Test

Sometimes referred to as a one-way ANOVA on ranks, the Kruskal-Wallis H test is a non-parametric test used to determine whether there are statistically significant differences between two or more groups of an independent variable. (ANOVA stands for analysis of variance.)

The test is named after the statisticians who developed it, William Kruskal and W. Allen Wallis. Its major purpose is to check whether the samples being compared are drawn from the same population.
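A minimal sketch of the H statistic the test is built on, using pooled ranks (toy data with no tied values; in practice `scipy.stats.kruskal` computes this and a p-value):

```python
# H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1),
# where R_i is the rank sum of group i in the pooled ranking.

def kruskal_h(groups):
    """Kruskal-Wallis H for a list of samples (assumes no tied values)."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # rank 1 = smallest
    n = len(pooled)
    total = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 * total / (n * (n + 1)) - 3 * (n + 1)

groups = [[6.4, 6.8, 7.2], [8.3, 8.7], [4.2, 4.9, 5.1]]
print(kruskal_h(groups))  # → 6.25
```

Larger H means the groups' rank sums are more spread out than chance would suggest; H is compared against a chi-square distribution with (number of groups - 1) degrees of freedom to get a p-value.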

Friedman Test

The Friedman test is similar to the Kruskal-Wallis test and is an alternative to the ANOVA test; the difference is that the Friedman test works on repeated measures. It is used to detect differences between groups when the dependent variable is measured on an ordinal scale.

The test was developed by the economist Milton Friedman and is named after him. It applies to complete block designs and is therefore a special case of the Durbin test.

Distribution Free Tests

Distribution-free tests are statistical procedures widely used for testing hypotheses that make no assumption about the probability distribution of the variables. An important list of distribution-free tests follows:

  • Anderson-Darling test: checks whether a sample is drawn from a given distribution.
  • Statistical bootstrap methods: non-parametric resampling techniques used to estimate the accuracy and sampling distribution of a statistic.
  • Cochran’s Q: checks whether treatments have identical effects in block designs with 0/1 outcomes.
  • Cohen’s kappa: measures inter-rater agreement for categorical items.
  • Kaplan-Meier estimator: estimates the survival function from lifetime data, allowing for censoring.
  • Friedman two-way analysis of variance by ranks: used for randomized complete block designs.
  • Kendall’s tau: measures the statistical dependence between two variables.
  • Kolmogorov-Smirnov test: tests whether a sample is drawn from a given distribution, or whether two samples are drawn from the same distribution.
  • Kendall’s W: measures inter-rater agreement on a scale from 0 (no agreement) to 1 (complete agreement).
  • Kuiper’s test: determines whether a sample is drawn from a given distribution; it is sensitive to cyclic variations.
  • Log-rank test: compares the survival distributions of two right-censored samples.
  • McNemar’s test: tests, for paired observations in a contingency table, whether the row and column marginal frequencies are equal.
  • Median test: checks whether two samples are drawn from populations with the same median.
  • Pitman’s permutation test: yields exact p-values by examining all possible rearrangements of labels.
  • Rank products: used to detect differentially expressed genes in replicated microarray experiments.
  • Siegel-Tukey test: tests for differences in scale between two groups.
  • Sign test: tests whether matched-pair samples are drawn from distributions with equal medians.
  • Spearman’s rank correlation: measures the statistical dependence between two variables using a monotonic function.
  • Squared ranks test: tests the equality of variances between two or more variables.
  • Wald-Wolfowitz runs test: checks whether the elements of a sequence are mutually independent.
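Several of the rank-based entries above come down to simple arithmetic on ranks. As one concrete case, here is a sketch of Spearman's rank correlation via the well-known d² shortcut (toy data and variable names invented for illustration; assumes no tied values):

```python
# rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
# where d_i is the difference between the two ranks of observation i.

def spearman_rho(x, y):
    """Spearman's rank correlation (assumes no tied values)."""
    def to_ranks(v):
        s = sorted(v)
        return [s.index(a) + 1 for a in v]
    rx, ry = to_ranks(x), to_ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

hours_studied = [2, 4, 6, 8]
exam_scores = [55, 70, 62, 90]
print(spearman_rho(hours_studied, exam_scores))  # → 0.8
```

Because only the ranks enter the formula, the answer measures how close the relationship is to monotonic, not how linear it is; `scipy.stats.spearmanr` gives the same coefficient plus a p-value and handles ties.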


Advantages and Disadvantages of Non-Parametric Tests

The benefits of non-parametric tests are as follows:

  • They are easy to understand and apply.
  • They involve short calculations.
  • No assumption about the population is required.
  • They are applicable to all kinds of data.

The limitations of non-parametric tests are:

  • They are less efficient than parametric tests.
  • Sometimes their results are insufficient to provide an accurate answer.

Applications of Non-Parametric Tests

Non-parametric tests are quite helpful in cases:

  • Where parametric tests do not give sufficient results.
  • Where the hypothesis being tested makes no distributional assumptions about the sample.
  • Where a quicker analysis of the sample is needed.
  • Where the data are unscaled.

Research today is built on fluctuating inputs, so non-parametric statistics and tests have become essential for in-depth research and data analysis.


Guide: Non-Value Add Analysis

Learn Lean Sigma

Non-value-added analysis, a cornerstone of lean management, is an essential strategy for enhancing process efficiency and operational effectiveness in various industries. Originating from the Toyota Production System and later evolving into Lean Management, its primary goal is to eliminate waste or ‘muda’ activities that do not add value from the customer’s perspective.

This meticulous process involves reviewing and understanding each activity within a business process to ascertain its value contribution. Activities that fail to transform resources, aren’t done right the first time, or aren’t desired by the customer are deemed non-value-adding. This approach is fundamental for businesses striving to remain competitive, optimize operations, reduce costs, and enhance the customer experience.


What is Non-Value Add Analysis?

Non-value-added analysis is a strategic method used as part of lean management across industries to enhance process efficiency and operational effectiveness. Lean Management, originally developed as the Toyota Production System (TPS) and renamed as it evolved, has as a core principle the removal of waste, or non-value-added activities, from the process. Non-value-added activities, also referred to by the Japanese term “muda,” are activities within a business that do not add value from the perspective of the customer.

Conducting non-value-add analysis is the process of reviewing and understanding each activity in the process to identify if it adds value in the eyes of the customer. Non-value-adding activities usually include unnecessary steps, redundancies, delays, or anything that leads to inefficiency without enhancing the product or service.

In general, if an activity does not meet all three of the following criteria, it is a waste:

  • It transforms  people, information, or materials.
  • It is done right the  first time.
  • The customer wants it and is willing to pay for it.

If you cannot say yes to all three of the above, the activity is non-value-adding.
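The three-question test above can be encoded as a simple checklist. The activity names and flags below are invented for illustration:

```python
# An activity is value-adding only if all three criteria hold:
# it transforms something, it is done right the first time,
# and the customer wants it and is willing to pay for it.

def is_value_adding(transforms, right_first_time, customer_pays_for_it):
    """All three criteria must be true for the activity to add value."""
    return transforms and right_first_time and customer_pays_for_it

activities = {
    "Machine the part":            (True,  True,  True),
    "Rework defective welds":      (True,  False, True),   # fails "right first time"
    "Move pallets between stores": (False, True,  False),  # fails two criteria
}
for name, flags in activities.items():
    label = "value-adding" if is_value_adding(*flags) else "non-value-adding"
    print(f"{name}: {label}")
```

Only "Machine the part" passes all three questions; the other two activities would be flagged as waste to eliminate or reduce.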


The Importance of Non-Value Add Analysis

For a business to remain competitive and maintain or improve margins, it is key to focus on optimizing operations, reducing cost, and enhancing the customer experience. Non-value-added analysis serves as a critical tool for achieving these objectives. By systematically identifying and addressing activities that do not add value, organizations can streamline their processes, reducing unnecessary costs and improving overall efficiency. This allows businesses to remain price-competitive in the market while also maintaining profit margins. Otherwise, this waste remains part of the business process and is passed on to customers in the form of additional product or service costs.

Consider the theoretical example of the process below. The process that has reduced non-value-adding steps will take less time to produce and thus result in a cheaper product or service cost for the business, which can be passed on to the customer or maintained as the company’s margin.


Eliminating or reducing non-value-adding activities often results in improved process quality. When processes are leaner and more focused on value-adding steps, the chance of errors or defects decreases, leading to higher-quality outputs. This directly impacts customer satisfaction, as customers receive products or services that meet or exceed their expectations in terms of quality and delivery time.

Methodologies of Non-Value Add Analysis

Non-value-add analysis is a key approach to finding opportunities for improvement and making the organization leaner. Several methodologies can be deployed to identify and eliminate non-value-adding activities; each has a unique approach and analyzes processes from a different perspective.

Process Mapping

Process mapping is a foundational tool in Lean Six Sigma and a tool most practitioners learn early on at the Yellow Belt level. Process mapping is an ideal tool for non-value-added analysis as it involves creating a visual diagram of the workflow within a process. This diagram, or process map, displays each step in the sequence of activities from start to finish of the process.


Using a process map to visualize the process, stakeholders can follow it through and ask of each step, “Does this add value to the customer?” If the answer is “no,” the step can be marked as wasteful and become a focus for improvement, ideally by removing it from the process. If the step cannot be eliminated, it should be analyzed to understand how its use of resources can be reduced.

Another use for the process map is to review other inefficiencies in the process, such as bottlenecks, redundant steps, or unnecessary complexities that do not add value to the end goal.

Time and Motion Studies

Time and Motion Studies are a more analytical and quantitative approach to understanding a process than process mapping. This method involves a detailed analysis of the time and effort required to perform each task in a process.


Time and motion studies involve measuring the time taken to complete each task and the motions involved in those tasks. By analyzing these aspects, organizations can pinpoint areas where time and resources are being wasted on non-value-adding activities. This could include excessive movements, waiting times, or any action that does not contribute directly to completing the task.

The analysis from Time and Motion Studies can lead to process redesign, where tasks are streamlined, and non-essential activities are eliminated or reduced.

Value Stream Mapping


Value Stream Mapping (VSM) is a high-level overview that looks at the flow of information and materials through an entire end-to-end process, from suppliers to customers. Unlike process mapping, which focuses on individual processes, VSM covers the entire value stream, providing a broader view of how inputs are transformed into outputs.

The VSM helps to identify non-value-adding steps at both micro (individual process) and macro (overall value stream) levels. This includes identifying delays, inventory pile-ups, and any form of waste. The VSM also provides a strategic overview, allowing organizations to see the big picture and make more informed decisions about where to focus improvement efforts for the greatest impact.

Implementing Non-Value Add Analysis

Applying non-value-add analysis to a business is a structured and strategic process that involves several critical steps for a thorough analysis.

Step 1: Identifying Non-Value-Adding Activities

The first step is to review current processes using one or more of the methods above. This involves breaking the process down into individual steps and identifying activities that do not contribute value for the customer. These can include redundant steps, unnecessary movements within the process, prolonged waiting times, overproduction beyond demand, and defects.

Step 2: Assessing the Impact

Once you have identified non-value-adding activities within a process, the next step is to assess their impact. The impact of each step can be measured by how much time, resources, and other costs are attributed to it.

This assessment will help in understanding the level of inefficiency these activities bring to the process, including their impact on the process flow, employee productivity, and overall operational costs. This will give you a measure of the overall benefit of removing the process step if there is a cost to doing so, which will allow you to conduct a cost-benefit analysis.
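The assessment above can be quantified very simply as the share of total cycle time consumed by non-value-adding steps. The step names and times below are made up for illustration:

```python
# Each step: (name, minutes, value-adding?). Summing the minutes of the
# non-value-adding steps against the total gives the waste share.

steps = [
    ("Assemble unit",     12.0, True),
    ("Wait for parts",    30.0, False),
    ("Double inspection",  6.0, False),
    ("Pack and label",     4.0, True),
]
total_time = sum(t for _, t, _ in steps)
nva_time = sum(t for _, t, va in steps if not va)
print(f"Non-value-added share of cycle time: {nva_time / total_time:.0%}")
# → Non-value-added share of cycle time: 69%
```

A figure like this (here 36 of 52 minutes) is the input to the cost-benefit analysis: it shows how much cycle time could be recovered if the flagged steps were eliminated or reduced.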

Step 3: Developing Improvement Strategies

Based on the assessment of the non-value-adding steps identified in the process, you should now develop strategies to eliminate or reduce the non-value-adding activities, with a focus on eliminating first and reducing if you are unable to eliminate them. 

This could involve redesigning the process for greater efficiency, adopting new tools or technology that will streamline operations, or changing the workflow to bypass unnecessary steps.

For this process step, it is important to involve all relevant stakeholders to ensure that the improvements are practical and consider all aspects of the process.

Step 4: Implementing Changes

Once improvement strategies have been developed, the next step is to implement these changes in the organization. This will include making any physical and tangible changes to the process, which could include reorganization of equipment and machines, changing tooling and equipment, updating process documentation, and training operators on the new process method. Close monitoring of the implementation is essential to ensure that the changes are effective and to make adjustments as necessary.

Step 5: Continuous Monitoring and Improvement

Non-value-added analysis is not a one-off event but a continuous process. Regular monitoring of the processes is essential to ensure they remain efficient and free from non-value-adding activities. Continuous feedback mechanisms should be in place to detect any inefficiencies. This enables regular reviews and adjustments to the process, ensuring sustained efficiency and improvement over time.

This step is also about instilling a culture of continuous improvement within the organization, where employees are always looking for ways to enhance processes and add value.

Implementing non-value-add analysis is a strategic and structured process vital for streamlining business operations and maintaining efficiency. It begins with identifying non-value-adding activities using various methodologies like Process Mapping, Time and Motion Studies, and Value Stream Mapping. Assessing the impact of these activities, developing improvement strategies, and implementing changes are crucial steps in this process. However, the journey doesn’t end with implementation; continuous monitoring and improvement are essential to ensure long-term efficiency and to foster a culture of continuous improvement. This ongoing process is key to an organization’s ability to stay competitive, reduce costs, and increase customer satisfaction, thereby ensuring sustained business success.


Frequently Asked Questions

Q: What is Non-Value Add analysis?

A: Non-Value Add analysis is a systematic approach used to identify and eliminate activities within a process that do not contribute to meeting customer requirements or add value to the final product or service. The goal is to reduce waste, improve efficiency, and enhance customer satisfaction by focusing on value-added activities.

Q: Why is Non-Value Add analysis important?

A: Non-Value Add analysis is important because it helps organizations identify and eliminate wasteful activities that do not contribute to customer satisfaction or the desired outcome. By streamlining processes and reducing non-value add activities, organizations can improve efficiency, reduce costs, and create more value for their customers.

Q: How do I identify non-value add activities?

A: To identify non-value add activities, review the process flow and look for steps that do not directly contribute to meeting customer requirements or adding value. Examples of non-value add activities include unnecessary approvals, redundant paperwork, excessive waiting times, and unnecessary transportation. Analyzing the purpose and impact of each step will help you identify non-value add activities.

Q: How should I prioritize non-value add activities?

A: Prioritize non-value add activities based on their impact on the process and customer satisfaction. Focus on high-impact activities that consume significant resources or cause delays. Consider factors such as frequency, time wasted, and potential for improvement. By targeting high-impact activities, you can maximize the benefits of elimination or reduction with the least effort or resources.

Q: How do I measure the success of Non-Value Add analysis?

A: Measure the success of Non-Value Add analysis by tracking key performance indicators (KPIs) related to process efficiency, customer satisfaction, and cost reduction. Metrics such as cycle time, throughput, customer feedback, and cost savings can provide insights into the impact of eliminating non-value add activities. Regularly monitor and review these metrics to assess the effectiveness of your improvements.

Q: Is Non-Value Add analysis a one-time process?

A: No, Non-Value Add analysis is not a one-time process. It should be part of an ongoing commitment to continuous improvement. Organizations should regularly review and analyze processes to identify and eliminate new instances of non-value add activities. By fostering a culture of continuous improvement, organizations can continuously optimize processes and drive sustainable efficiency gains.

Q: Can Non-Value Add analysis be applied to any type of process?

A: Yes, Non-Value Add analysis can be applied to any type of process, whether it’s in manufacturing, service delivery, administrative tasks, or any other business function. The principles of identifying and eliminating non-value add activities can be applied universally to improve efficiency and reduce waste in various processes across different industries and sectors.

Daniel Croft

Daniel Croft is a seasoned continuous improvement manager with a Black Belt in Lean Six Sigma. With over 10 years of real-world application experience across diverse sectors, Daniel has a passion for optimizing processes and fostering a culture of efficiency. He's not just a practitioner but also an avid learner, constantly seeking to expand his knowledge. Outside of his professional life, Daniel has a keen interest in investing, statistics, and knowledge-sharing, which led him to create the website learnleansigma.com, a platform dedicated to Lean Six Sigma and process improvement insights.



How to Do a SWOT Analysis That's Actually Useful to Your Nonprofit

Emily Friedrichs


Recent history has shown us how quickly norms can be upended and waves of change can hit. Nonprofits, like other social and physical structures, need to build resiliency for the future. When conducted correctly, a SWOT analysis can be a great tool for creating a nonprofit's contingency plan and growing its adaptability.

Many organizations fail to benefit from a SWOT analysis because it has been widely mistaught, and they only partially complete the exercise. Here we share how to conduct a SWOT analysis so that it generates real results for your organization.

What is SWOT Analysis?

If you're unfamiliar with it, SWOT stands for strengths, weaknesses, opportunities, and threats. It is an important tool for strategic management in the nonprofit and public, as well as private, spheres. By embracing strengths, acknowledging weaknesses, and appreciating external threats and opportunities, SWOT analysis helps organizations to protect their core values and plan for the future.

Differences Between SWOT Analysis for Businesses and Nonprofits

Unlike the strengths of businesses, which usually focus on what a company does better than its competitors, nonprofits should focus on their relative strengths compared to similar organizations or to an ideal.

Despite these differences, many SWOT lists for nonprofits will look similar to lists for businesses. Both sectors may also be targeting the same audience, which means you should consider comparing yourself to businesses as well when you conduct your analysis.

Applications of SWOT Analysis for Nonprofits

A SWOT analysis is one of the basic planning and evaluation tools for nonprofit organizations. Nonprofits can use this tool for marketing, strategic planning, needs analysis, program development, and much more. However, nonprofit organizations must focus their analysis on specific functions to best benefit from the process.

Here is a breakdown of ways to use SWOT analysis within your organization.   

Marketing Strategy

Nonprofits require marketing to attract new donors, showcase success, gain media attention and boost their reach. Therefore, it's important to craft a strategy that utilizes existing resources for maximum return.   

While developing a marketing strategy, SWOT analysis can discover important information about staffing, budget, and potential partners or donors. If qualified and competent staff is one of the organization's strengths, the directors can build a strategy around that and make the team a featured component in marketing campaigns.    

Improving Programs

Nonprofits can use SWOT analysis to address many program issues, including lack of best practices, funding, publicity, or even social acceptance for its programming. For clear examples of the many ways in which SWOT can be applied for strategic decision-making at your nonprofit, as well as examples of how and how not to list specific SWOTs, take a look at this webinar from Clare Axelrod and Boomerang.

The webinar also covers how to use SWOT analysis to develop your board, plan annual fundraising, improve organizational culture, and more. 

Website Redesign

Conducting a SWOT analysis before a website project can uncover issues like your organization's lack of visibility or technology skill gaps in your staff.

It can also help you to assign budget priorities: you should emphasize an SEO audit over additional custom templates if your organization is struggling to attract new supporters.

How to Do A SWOT Analysis for Your Nonprofit

SWOT analysis involves a dual approach: an internal focus on strengths and weaknesses and an external focus on opportunities and threats.

Strengths are anything that your organization does well. As mentioned earlier, your nonprofit may also be comparing itself to businesses that seek to engage the same audience. Examples of organizational strengths can include:

  • The staff is experienced
  • The organization has a clear mission and passionate volunteers
  • The government takes care of a large percentage of the operating costs
  • There are two (or more) major donors
  • The leadership has an excellent reputation and strong links in the community
  • The organization has a well-maintained website that attracts new visitors/donors

Most nonprofits are also exempt from taxes, which is another strength. Listing even the mundane or obvious strengths can make it easier to identify new opportunities as you continue through the exercise.

The weakness quadrant allows you to list things you consider deficient at your nonprofit. Here you can include anything that causes internal problems for your organization. Examples of common weaknesses include:

  • Lack of financial stability, which is most common among nonprofits that are still in their early years
  • Small staff who have skill gaps
  • Poor or little reputation in the government
  • Lack of equipment/facilities
  • Reliance on a small pool of donors
  • Poor intake process

Weaknesses should only come from your internal issues; external factors will be addressed under threats.

Opportunities

Opportunities are external factors which have potential to benefit the organization. Consider everything that could be helpful in the future. Here are some examples of opportunities:

  • The government is increasing funding or considering legislation related to your nonprofit
  • Other NGOs in your area would like to partner
  • Shrinking or closure of nearby organizations could grow your client and supporter base

It's a good idea to spend some time researching legislation and general trends in order to complete a thorough and thoughtful list of opportunities.

Threats

Threats are the external factors that the organization cannot control but that may cause adverse effects now or in the future. Examples of threats that you can list for your nonprofit include:

  • Your primary source of funds reports a possible budget deficit in the next three years
  • The program costs have been increasing for the past few years due to factors beyond your control
  • Political factors are a threat to your organization
  • There is an effort to discredit your organization or research from within your sector as a whole
  • New compliance requirements are passed, such as recent laws affecting donor data privacy

Note that every organization faces unique threats. Take your time to analyze the critical issues facing your organization and to complete all the SWOT quadrants.

Turn Your Analysis into Action

Your SWOT analysis is not finished until you've created actionable follow-up. Together with your team, you should plan how to:

  • Increase your opportunities using your nonprofit's strengths
  • Use strengths to reduce your threats
  • Eliminate weaknesses by taking advantage of opportunities
  • Reduce weaknesses to mitigate potential threats

[Figure: a 3x3 SWOT matrix. The internal factors, strengths and weaknesses, run along the top, with instructions to list 5-10 of each. The external factors, opportunities and threats, run down the left, with the same instructions. The four quadrants in the lower right prompt you to write "ways your strengths can capitalize on opportunities," "ways to capitalize on opportunities by overcoming weaknesses," "ways your strengths can mitigate threats," and "ways to minimize weaknesses and threats."]

An effective SWOT analysis matches strengths and weaknesses to corresponding opportunities and threats. The goal is to leverage the positives for growth and mitigate negatives to minimize potential risk.
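The matching step can also be sketched in code. Below is a minimal Python sketch with purely illustrative entries (the article prescribes no tooling, so the structure and data here are assumptions) that enumerates candidate strength-opportunity and strength-threat pairings:

```python
# Hypothetical SWOT quadrants for a small nonprofit (illustrative only).
from itertools import product

swot = {
    "strengths": ["experienced grant writer", "passionate volunteers"],
    "weaknesses": ["reliance on a small donor pool"],
    "opportunities": ["new federal grant program", "nearby NGO seeks partner"],
    "threats": ["funder projects a budget deficit"],
}

# Strength-opportunity pairs: candidates for leveraging the positives.
so_pairs = list(product(swot["strengths"], swot["opportunities"]))

# Strength-threat pairs: candidates for mitigating the negatives.
st_pairs = list(product(swot["strengths"], swot["threats"]))

for strength, opportunity in so_pairs:
    print(f"Use '{strength}' to pursue '{opportunity}'")
```

Each generated pair is only a candidate; the team still decides which pairings are worth operationalizing.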

Strategic Pairing

Ideally, an organization utilizes strengths to take advantage of available opportunities. After completing all four quadrants, begin pairing strengths to opportunities.

As an example, if one of your strengths is that you have an experienced grant writer in your team, you could pair that with looking for more federal grant opportunities.

Operationalize The Opportunity

After identifying your strategic pairings, hold a meeting to operationalize them. If there are many to address, list them in order of priority. It's advisable to prioritize items that bring financial growth through funding, more donors, and partnerships. You can subsequently prioritize organizational growth, operation efficiency, or achieving cultural change.

The last step involves creating a roadmap to achieve the objective, including allocating resources and assigning one person to be responsible for shepherding the project to fruition.

Remember, a true SWOT analysis pairs each positive attribute to a negative attribute that it can address. Don't stop at merely identifying strengths, weaknesses, opportunities and threats. Complete all the steps outlined here to walk away with actionable outcomes for your nonprofit.



Choosing the Right Statistical Test | Types & Examples

Published on January 28, 2020 by Rebecca Bevans. Revised on June 22, 2023.

Statistical tests are used in hypothesis testing . They can be used to:

  • determine whether a predictor variable has a statistically significant relationship with an outcome variable.
  • estimate the difference between two or more groups.

Statistical tests assume a null hypothesis of no relationship or no difference between groups. Then they determine whether the observed data fall outside of the range of values predicted by the null hypothesis.

If you already know what types of variables you’re dealing with, you can use the flowchart to choose the right statistical test for your data.

[Figure: flowchart for choosing a statistical test by variable type]

Table of contents

  • What does a statistical test do?
  • When to perform a statistical test
  • Choosing a parametric test: regression, comparison, or correlation
  • Choosing a nonparametric test
  • Flowchart: choosing a statistical test
  • Other interesting articles
  • Frequently asked questions about statistical tests

Statistical tests work by calculating a test statistic – a number that describes how much the relationship between variables in your test differs from the null hypothesis of no relationship.

It then calculates a p value (probability value). The p -value estimates how likely it is that you would see the difference described by the test statistic if the null hypothesis of no relationship were true.

If the value of the test statistic is more extreme than the statistic calculated from the null hypothesis, then you can infer a statistically significant relationship between the predictor and outcome variables.

If the value of the test statistic is less extreme than the one calculated from the null hypothesis, then you can infer no statistically significant relationship between the predictor and outcome variables.
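As a concrete sketch of this process, here is how a test statistic and p value might be computed in Python with scipy.stats; the library choice and the sample data are assumptions, since the article itself is tool-agnostic:

```python
# One-sample t-test: does the sample mean differ from a hypothesized mean?
from scipy import stats

# Illustrative sample; null hypothesis: the population mean is 5.0
sample = [5.1, 4.9, 6.2, 5.8, 5.5, 4.7, 6.0, 5.3]

t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)

# A large |t_stat| means the observed data sit far from the null
# hypothesis; p_value estimates how likely such a difference would be
# if the null hypothesis were true.
significant = p_value < 0.05  # compare against a chosen alpha level
```

The 0.05 threshold here is the conventional alpha level; the researcher chooses it before running the test.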


You can perform statistical tests on data that have been collected in a statistically valid manner – either through an experiment , or through observations made using probability sampling methods .

For a statistical test to be valid, your sample size needs to be large enough to approximate the true distribution of the population being studied.

To determine which statistical test to use, you need to know:

  • whether your data meets certain assumptions.
  • the types of variables that you’re dealing with.

Statistical assumptions

Statistical tests make some common assumptions about the data they are testing:

  • Independence of observations (a.k.a. no autocorrelation): The observations/variables you include in your test are not related (for example, multiple measurements of a single test subject are not independent, while measurements of multiple different test subjects are independent).
  • Homogeneity of variance : the variance within each group being compared is similar among all groups. If one group has much more variation than others, it will limit the test’s effectiveness.
  • Normality of data : the data follows a normal distribution (a.k.a. a bell curve). This assumption applies only to quantitative data .

If your data do not meet the assumptions of normality or homogeneity of variance, you may be able to perform a nonparametric statistical test , which allows you to make comparisons without any assumptions about the data distribution.

If your data do not meet the assumption of independence of observations, you may be able to use a test that accounts for structure in your data (repeated-measures tests or tests that include blocking variables).
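A minimal sketch of checking two of these assumptions before committing to a parametric test, using scipy.stats (the library and the made-up data are assumptions on our part):

```python
from scipy import stats

# Two illustrative groups to be compared.
group_a = [4.8, 5.1, 5.3, 4.9, 5.2, 5.0, 5.4, 4.7]
group_b = [6.0, 6.3, 5.8, 6.1, 6.4, 5.9, 6.2, 6.0]

# Normality: Shapiro-Wilk tests the null hypothesis that the data
# come from a normal distribution.
_, p_norm_a = stats.shapiro(group_a)
_, p_norm_b = stats.shapiro(group_b)

# Homogeneity of variance: Levene's test (null: variances are equal).
_, p_var = stats.levene(group_a, group_b)

# If any of these p-values is small, the corresponding assumption is
# questionable and a nonparametric alternative may be safer.
```

Note that small samples like these give assumption tests little power, so they should complement, not replace, visual checks.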

Types of variables

The types of variables you have usually determine what type of statistical test you can use.

Quantitative variables represent amounts of things (e.g. the number of trees in a forest). Types of quantitative variables include:

  • Continuous (aka ratio variables): represent measures and can usually be divided into units smaller than one (e.g. 0.75 grams).
  • Discrete (aka integer variables): represent counts and usually can’t be divided into units smaller than one (e.g. 1 tree).

Categorical variables represent groupings of things (e.g. the different tree species in a forest). Types of categorical variables include:

  • Ordinal : represent data with an order (e.g. rankings).
  • Nominal : represent group names (e.g. brands or species names).
  • Binary : represent data with a yes/no or 1/0 outcome (e.g. win or lose).

Choose the test that fits the types of predictor and outcome variables you have collected (if you are doing an experiment , these are the independent and dependent variables ). Consult the tables below to see which test best matches your variables.

Parametric tests usually have stricter requirements than nonparametric tests, and are able to make stronger inferences from the data. They can only be conducted with data that adheres to the common assumptions of statistical tests.

The most common types of parametric test include regression tests, comparison tests, and correlation tests.

Regression tests

Regression tests look for cause-and-effect relationships . They can be used to estimate the effect of one or more continuous variables on another variable.
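For instance, a simple linear regression can be sketched as below; scipy.stats.linregress and the example data are illustrative assumptions, not part of the article:

```python
from scipy import stats

# Illustrative data: does hours of sunlight predict plant growth (cm)?
sunlight = [2, 3, 4, 5, 6, 7, 8, 9]
growth = [1.1, 1.9, 3.2, 3.8, 5.1, 5.9, 7.2, 7.8]

result = stats.linregress(sunlight, growth)

# result.slope estimates the effect of one unit of the predictor on the
# outcome; result.pvalue tests the null hypothesis that the slope is zero.
```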

Comparison tests

Comparison tests look for differences among group means . They can be used to test the effect of a categorical variable on the mean value of some other characteristic.

T-tests are used when comparing the means of precisely two groups (e.g., the average heights of men and women). ANOVA and MANOVA tests are used when comparing the means of more than two groups (e.g., the average heights of children, teenagers, and adults).
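The height example above might be sketched as follows; the scipy.stats calls and the numbers are our own illustrative assumptions:

```python
from scipy import stats

# Illustrative heights in cm for three groups.
children = [120, 125, 118, 130, 122]
teenagers = [160, 165, 158, 170, 162]
adults = [175, 180, 172, 178, 176]

# Exactly two groups: independent-samples t-test.
t_stat, p_two = stats.ttest_ind(children, adults)

# More than two groups: one-way ANOVA.
f_stat, p_three = stats.f_oneway(children, teenagers, adults)
```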

Correlation tests

Correlation tests check whether variables are related without hypothesizing a cause-and-effect relationship.

These can be used to test whether two variables you want to use in (for example) a multiple regression test are autocorrelated.
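A correlation check can be sketched like this (scipy.stats and the toy data are assumptions; Pearson assumes a linear relationship, while Spearman is rank-based):

```python
from scipy import stats

# Illustrative paired measurements.
temperature = [10, 12, 15, 18, 21, 24, 27, 30]
ice_cream_sales = [20, 26, 33, 40, 51, 57, 66, 74]

r, p = stats.pearsonr(temperature, ice_cream_sales)      # linear correlation
rho, p_rank = stats.spearmanr(temperature, ice_cream_sales)  # rank correlation
```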

Non-parametric tests don’t make as many assumptions about the data, and are useful when one or more of the common statistical assumptions are violated. However, the inferences they make aren’t as strong as with parametric tests.
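Two common nonparametric substitutes can be sketched as below; the scipy.stats functions and the skewed sample data are illustrative assumptions:

```python
from scipy import stats

# Illustrative ordinal ratings that are clearly not normally distributed.
group_a = [1, 2, 2, 3, 3, 3, 4, 9]
group_b = [5, 6, 6, 7, 7, 8, 8, 9]

# Mann-Whitney U: nonparametric alternative to the two-sample t-test.
u_stat, p_u = stats.mannwhitneyu(group_a, group_b)

# Kruskal-Wallis H: nonparametric alternative to one-way ANOVA
# (shown here with two groups; it accepts more).
h_stat, p_h = stats.kruskal(group_a, group_b)
```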


This flowchart helps you choose among parametric tests. For nonparametric alternatives, see the section above.

[Figure: flowchart for choosing the right statistical test]


Statistical tests commonly assume that:

  • the data are normally distributed
  • the groups that are being compared have similar variance
  • the data are independent

If your data do not meet these assumptions, you might still be able to use a nonparametric statistical test, which has fewer requirements but also makes weaker inferences.

A test statistic is a number calculated by a  statistical test . It describes how far your observed data is from the  null hypothesis  of no relationship between  variables or no difference among sample groups.

The test statistic tells you how different two or more groups are from the overall population mean , or how different a linear slope is from the slope predicted by a null hypothesis . Different test statistics are used in different statistical tests.

Statistical significance is a term used by researchers to state that it is unlikely their observations could have occurred under the null hypothesis of a statistical test . Significance is usually denoted by a p -value , or probability value.

Statistical significance is arbitrary – it depends on the threshold, or alpha value, chosen by the researcher. The most common threshold is p < 0.05, which means that the data is likely to occur less than 5% of the time under the null hypothesis .

When the p -value falls below the chosen alpha value, then we say the result of the test is statistically significant.

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g. the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g. water volume or weight).


Non-literary Analysis: Textual Analysis (Paper One)

How do you approach analyzing non-literary texts such as articles, editorials, speeches, blogs, and advertising?

RHETORICAL OR TEXTUAL ANALYSIS

How does the author’s language shape the meaning? How do the purpose, audience, medium, disposition, appeals, and style impact the reception of the message? How does the author use language to persuade?

1) Purpose

Why did the author write this text? And why did the author write it in this particular way? What is the occasion for the text, e.g. a specific incident or event? What is the intent of the piece: TO INFORM, TO NARRATE, TO PERSUADE, TO DESCRIBE? See https://www.mrsmacfarland.com/dp-curriculum/text-types for more information on different text types and their purposes.

Consider the following:

– what the author said and the diction used

– what the author did not say

– how the author said it and the alternative ways it could have been said

– what the intended effect is, e.g. to reflect, to call to action

2) Audience

Who is the target audience? How does the text’s language and rhetoric suit the audience? Are there groups excluded from the intended audience? Is there more than one intended audience? Audience factors to consider include:

– Socioeconomic status

– Beliefs, values, attitudes (special interest groups)

3) Nature of the Medium

What are the characteristics that define the text? Consider the differences among the variety of texts such as newspaper articles, magazine ads, editorials, blogs, etc. What modes of writing are included: expository, narrative, descriptive, argumentative? Does the author adhere to the conventions of the genre or stray from them? What is the impact of the medium on how the message is received? Consider the text type.

4) Disposition

How does the author present his or her disposition or inherent mindset on the topic(s)? Is there an inherent bias in the author? Does the bias distort the truth in some way? What historical, political, social, or economic influences may have impacted the delivery of the message? Is there a clear tone? What tone shifts are seen through the text?

Bias in the media can occur through:

Selection & Omission: choosing to tell only parts of the story

Placement: where the story appears in the newspaper, during the news hour, or on a website

Headlines: often crafted to catch attention and sell papers rather than report facts

Word Choice and Tone: using sensational and emotional words to dramatize the events

Photos/Captions/Camera Angles: making one person look good and another bad, for example

Names & Titles: calling a person a “bad guy” instead of by his name, for example

Statistics & Crowd Counts: dramatizing numbers for effect

Source Control: using information or sources that only show or support one side of a story

You also want to consider the source: is it a more liberal (left-leaning) source, a conservative (right-leaning) source, or is it more in the center? Check out https://www.allsides.com/media-bias/media-bias-ratings for a chart.

Does the rhetorical piece use Logos, Ethos or Pathos?

How does the author use strong, connotative language that incites a reaction, making an emotional appeal (pathos)?

How does the author use a logical appeal (logos) through facts, statistics, examples, organizational strategies, etc.?

How does the author create an ethical appeal (ethos) through his or her experience and credibility in order to gain the trust of the audience?

How is the piece ordered e.g. compare/contrast, cause/effect, problem/solution, analogies, narrative, description, etc? What rhetorical tropes and schemes are used such as extended metaphor, hyperbole, anecdotes, examples, antithesis, anaphora, litotes, analogy, symbolism, irony, paradox, rhetorical questions, etc? How would you describe the word choice and its effect to convey the message?

Types of Evidence

Analogical Evidence

How does the author compare two things that are similar in order to show the reader parallels and make a point to support his/her argument? What is persuasive or enlightening about using analogies to support an argument?

How can the use of an analogy draw an insightful connection between a well-known phenomenon and a less-known one?

Anecdotal Evidence

How does the author use anecdotes to tell a story in order to prove a point?

How does the author’s storytelling of anecdotes, coupled with statistical or testimonial evidence, help build an argument?

Observations

How does the author use his or her own observations to form conclusions and support his/her argument?

Statistics

How does the author use numbers and percentages from verified sources to support his or her claim using reasoning? How do these statistics lend credibility to the argument?

Are the statistics being dramatized or manipulated for a specific effect?

How valid are the statistics in supporting the argument?

Quotes or Testimonials

How does the author use quotes from leading experts and authorities in order to support his/her position?

Facts

Are there facts that can’t be disputed and can be accepted as true? How do these facts help support the argument?

Organizational Strategies

When analyzing an author’s style for a non-literary text such as an editorial, determine what organizational patterns he or she uses:

Exemplification: specific examples, brief

Illustration: examples in more detail

Description: concrete, sensory diction

Narration: use of stories, e.g. anecdotes

Cause/effect: clear reason/result

Compare/contrast: similarities/differences

Process: how to do something

Problem/solution: describes a problem and its implications and then provides a solution

Classification: how something is classified, e.g. science

Extended definition: how to define an abstract concept, e.g. patriotism, democracy, love, faith, etc.

Rhetorical tropes

How do rhetorical tropes and schemes affect how the text is read?

RHETORICAL TROPES

Allusion (historical, literary, pop cultural metaphorical reference)

Analogy (comparison)

Rhetorical question (asking a question for effect)

Epithet (adjectives or nouns used to describe another noun; accentuates a dominant characteristic for effect)

Euphemism (softer word instead of a harsh one)

Understatement (a form of irony)

Hyperbole (exaggeration, a form of irony)

Irony (a situation that is not expected; verbal irony occurs when someone says something that is exaggerated or understated for an effect)

Juxtaposition (contrasting ideas next to each other)

Metaphor (direct/implied comparison between two things)

Pastiche (a pastiche imitates the author’s style in a respectful way by changing an aspect of the story: point of view, ending, changing the protagonist from male to female, setting, etc. You also could imitate the author’s style and language with a new topic.)

Parody (an imitation of the style of a writer or artist with deliberate exaggeration for comic effect or ridicule)

Personification (metaphor giving human qualities to a nonhuman entity)

Litotes (using negative constructions to emphasize a point)

Motif (recurring element which contributes to theme/purpose)

Allegory (story in which people, events, or things often have symbolic meanings)

Paradox (something that seems contradictory but is actually true)

SYNTACTICAL FORMS

Parenthetical asides (authorial intrusion): the author interjects with her/his opinions to add humor or ridicule, set off with dashes or parentheses

Repetition (words, sounds, or ideas used more than once to enhance the rhythm or create emphasis)

Parallelism (similar constructions help the audience to compare/contrast parallel subjects or to emphasize a point; writers will use similar phrases and clauses to balance a sentence)

Antithesis (two opposing ideas presented in a parallel manner; the juxtaposition of contrasting ideas through syntax, e.g. “She is my happiness!—she is my torture, none the less!”)

Anaphora (the regular repetition of the same word or phrase at the beginning of successive phrases or clauses, e.g. “We shall fight on the beaches. We shall fight on the landing grounds….”)

Word Choice

LEVEL OF FORMALITY

FORMAL: elevated, learned, pretentious, ornate, flowery, archaic, scholarly, pedantic, elegant, dignified, impersonal, elaborate, sophisticated, formal, cultured, poetic, abstract, esoteric (hard to understand), colorful, eloquent, euphonious

INFORMAL: candid, detached, plain, simple, straightforward, informal, conversational, concrete

COLLOQUIAL: abrupt, terse, laconic, simple, rustic, vulgar, slang, jargon, dialect, simple

CONNOTATIVE vs DENOTATIVE LANGUAGE

Denotative language: authentic, actual, apparent, literal, journalistic, straightforward, concrete, precise

Connotative language: poetic, lyrical, symbolic, metaphoric, sensuous, grotesque, picturesque, abstract, whimsical, euphemistic, figurative, obscure, allegorical, suggestive, idyllic, emotive

POSITIVE TONES: cheerful, eager, lighthearted, hopeful, exuberant, enthusiastic, complimentary, confident, cheery, trusting, optimistic, loving, passionate, amused, elated, sympathetic, compassionate, proud, wistful, longing, romantic, humorous

NEGATIVE TONES: bitter , angry, outraged, accusing, incensed, turbulent, furious, wrathful, inflammatory, irritated, disgusted, indignant, irate, caustic, condescending, cynical, pompous, satiric, critical, grotesque, melancholic, mournful, apprehensive

NEUTRAL TONES: objective, nostalgic, candid, restrained, detached, instructive, learned, factual, informative, authoritative, disinterested, judicial, impartial, frank, aloof, calm, imploring

TYPES OF IMAGERY

Visual Imagery: something seen in the mind’s eye

Auditory Imagery: language that represents a sound or sounds

Olfactory Imagery: language representing the sense of smell

Gustatory Imagery: a taste

Tactile Imagery: touch, for example hardness, softness, wetness, heat, cold

Organic Imagery: internal sensation, such as hunger, thirst, fatigue, fear

Kinesthetic Imagery: movement or tension

Persuasive Techniques

The first 18 examples are taken from the following source: Media Literacy Project at medialiteracyproject.org

1. Association . This persuasion technique tries to link a product, service, or idea with something already liked or desired by the target audience, such as fun, pleasure, beauty, security, intimacy, success, wealth, etc. The media message doesn’t make explicit claims that you’ll get these things; the association is implied. Association can be a very powerful technique. A good ad can create a strong emotional response and then associate that feeling with a brand (family = Coke, victory = Nike). This process is known as emotional transfer. Several of the persuasion techniques below, like Beautiful people, warm & fuzzy, Symbols and Nostalgia, are specific types of association.

2. Bandwagon . Many ads show lots of people using the product, implying that "everyone is doing it" (or at least, "all the cool people are doing it"). No one likes to be left out or left behind, and these ads urge us to "jump on the bandwagon.” Politicians use the same technique when they say, "The American people want..." How do they know?

3. Fear . This is the opposite of the Association technique. It uses something disliked or feared by the intended audience (like bad breath, failure, high taxes or terrorism) to promote a "solution.” Ads use fear to sell us products that claim to prevent or fix the problem. Politicians and advocacy groups stoke our fears to get elected or to gain support.

4. Humor. Many ads use humor because it grabs our attention and it’s a powerful persuasion technique. When we laugh, we feel good. Advertisers make us laugh and then show us their product or logo because they’re trying to connect that good feeling to their product. They hope that when we see their product in a store, we’ll subtly re-experience that good feeling and select their product. Advocacy messages (and news) rarely use humor because it can undermine their credibility; an exception is political satire.

5. Plain folks. (A type of Testimonial – the opposite of Celebrities.) This technique works because we may believe a "regular person" more than an intellectual or a highly-paid celebrity. It’s often used to sell everyday products like laundry detergent because we can more easily see ourselves using the product, too. The plain folks technique strengthens the down-home, "authentic" image of products like pickup trucks and politicians. Unfortunately, most of the "plain folks" in ads are actually paid actors carefully selected because they look like "regular people.”

6. Testimonials . Media messages often show people testifying about the value or quality of a product, or endorsing an idea. They can be experts, celebrities, or plain folks. We tend to believe them because they appear to be a neutral third party (a pop star, for example, not the lipstick maker, or a community member instead of the politician running for office.) This technique works best when it seems like the person “testifying” is doing so because they genuinely like the product or agree with the idea. Some testimonials may be less effective when we recognize that the person is getting paid to endorse the product.

7. Intensity. The language of ads is full of intensifiers, including superlatives (greatest, best, most, fastest, lowest prices), comparatives (more, better than, improved, increased, fewer calories), hyperbole (amazing, incredible, forever), exaggeration, and many other ways to hype the product.

8. Euphemism. While the Glittering generalities and Name-calling techniques arouse audiences with vivid, emotionally suggestive words, Euphemism tries to pacify audiences in order to make an unpleasant reality more palatable. Bland or abstract terms are used instead of clearer, more graphic words. Thus, we hear about corporate "downsizing" instead of "layoffs," or "intensive interrogation techniques" instead of "torture."

9. Glittering generalities. This is the use of so-called "virtue words" such as civilization, democracy, freedom, patriotism, motherhood, fatherhood, science, health, beauty, and love. Persuaders use these words in the hope that we will approve and accept their statements without examining the evidence. They hope that few people will ask whether it’s appropriate to invoke these concepts, while even fewer will ask what these concepts really mean.

10. Name-calling. This technique links a person or idea to a negative symbol (liar, creep, gossip, etc.). It’s the opposite of Glittering generalities. Persuaders use Name-calling to make us reject the person or the idea on the basis of the negative symbol, instead of looking at the available evidence. A subtler version of this technique is to use adjectives with negative connotations (extreme, passive, lazy, pushy, etc.) Ask yourself: Leaving out the name-calling, what are the merits of the idea itself?

11. Rhetorical questions. These are questions designed to get us to agree with the speaker. They are set up so that the “correct” answer is obvious. ("Do you want to get out of debt?" "Do you want quick relief from headache pain?" and "Should we leave our nation vulnerable to terrorist attacks?" are all rhetorical questions.) Rhetorical questions are used to build trust and alignment before the sales pitch.

12. Slippery slope. This technique combines Extrapolation and Fear. Instead of predicting a positive future, it warns against a negative outcome. It argues against an idea by claiming it’s just the first step down a “slippery slope” toward something the target audience opposes. ("If we let them ban smoking in restaurants because it’s unhealthy, eventually they’ll ban fast food, too." This argument ignores the merits of banning smoking in restaurants.) The Slippery slope technique is commonly used in political debate, because it’s easy to claim that a small step will lead to a result most people won’t like, even though small steps can lead in many directions.

13. Ad hominem. Latin for "against the man," the ad hominem technique responds to an argument by attacking the opponent instead of addressing the argument itself. It’s also called "attacking the messenger.” It works on the belief that if there’s something wrong or objectionable about the messenger, the message must also be wrong.

14. Analogy. An analogy compares one situation with another. A good analogy, where the situations are reasonably similar, can aid decision-making. A weak analogy may not be persuasive, unless it uses emotionally-charged images that obscure the illogical or unfair comparison.

15. Cause vs. Correlation. While understanding true causes and true effects is important, persuaders can fool us by intentionally confusing correlation with cause. For example: Babies drink milk. Babies cry. Therefore, drinking milk makes babies cry.

16. Group dynamics. We are greatly influenced by what other people think and do. We can get carried away by the potent atmosphere of live audiences, rallies, or other gatherings.

17. Scapegoating. Extremely powerful and very common in political speech, Scapegoating blames a problem on one person, group, race, religion, etc. Some people, for example, claim that undocumented ("illegal") immigrants are the main cause of unemployment in the United States, even though unemployment is a complex problem with many causes. Scapegoating is a particularly dangerous form of the Simple solution technique.

18. Straw man. This technique builds up an illogical or deliberately damaged idea and presents it as something that one's opponent supports or represents. Knocking down the "straw man" is easier than confronting the opponent directly.

19. Card Stacking: Only the good aspects of the product are emphasized; negative aspects appear in the fine print.

20. Stereotyping: Broad generalizations are made about people based on their gender, ethnicity, race, or political and social affiliations.

21. Begging the Question or Circular Reasoning: The argument goes around in circles, with the evidence merely restating the claim rather than providing independent logical support.

22. Non Sequitur: A conclusion that does not follow from the premises; there is a disconnect between the two statements.

23. False Dilemma (also called the Either/Or Fallacy): A statement that presents two alternatives and falsely suggests that if one is rejected, the other must be accepted: either this or that.

24. False Analogy: Making a misleading comparison between logically unconnected ideas.

25. Red Herring: An irrelevant topic introduced into an argument to divert the attention of listeners or readers from the original issue.

Visual Analysis

Rhetorical situation: Who is the audience, and what is the purpose behind the visual and its creator? Is the audience left-leaning (liberal) or right-leaning (conservative)? How does the media type influence the viewer's interpretation? Does the specific media creator influence the way the viewer sees the image? Stems: The XXX, an online and print newspaper, featured…. The XXX, an online and print newspaper, geared toward a ____ audience….

Emphasis: What is emphasized? Is there a focal point that draws your eyes in? Is there a balance or harmonious relationship, or a clear juxtaposition of contrasting images for an effect? How is the composition of the image structured? Where are the images organized, such as in the foreground, in the background, to the left, to the center, to the right, etc.? Is there an interesting perspective or camera angle? Are some images blurred while others are in sharper focus? Stems: The image of _____ is emphasized as a central focal point…. The creator illustrates harmony and balance through….. The creator shows a contrasting image of _____ in order to _____. The perspective suggests that _______.

Lighting & Color: What colors and lighting do you see? Do the colors or lighting suggest a specific connotation or even a symbolic idea? Do the colors suggest an attitude or tone of the creator? Do warm colors connote warmth, for instance?

Stems: The ___ colors of the ___ signify ____. The viewer can infer that the colors connote a feeling of _____. The colors symbolize _____.

Appeals: Does the creator evoke positive or negative feelings in the viewer (pathos)? Does the creator appeal to the viewer's logic by making a comparison or showing a cause-and-effect relationship (logos)? Does the creator establish credibility about the topic through the images (ethos)?

The creator evokes a feeling of _____ through _____. The creator appeals to the viewer’s sense of logic through _____. The creator establishes credibility and trust through ______.

Theme: Is there a specific message that is being conveyed about humanity, society, or culture? What details construct an argument for the viewer? Are certain people represented in a certain way to convey a message? Are certain objects portrayed in a certain way to convey a message? How does this image relate to the global issue that you selected? Stems: The photograph portrays the message that ______. The visual suggests that _______.

Effect: Overall, what was effective about the image in conveying the argument? End your analysis with your conclusions.

The creator effectively conveys a message of _____ through ____, _____, and ______.

Analyzing the Structure of the Non-examples in the Instructional Example Space for Function in Abstract Algebra

Rosaura Uscanga

1 Mercy College, 555 Broadway, Dobbs Ferry, NY 10522 USA

John Paul Cook

2 Oklahoma State University, 406 Mathematical Sciences Building, Stillwater, OK 74078 USA

The concept of function is critical in mathematics in general and abstract algebra in particular. We observe, however, that much of the research on functions in abstract algebra (1) reports widespread student difficulties, and (2) focuses on specific types of functions, including binary operation, homomorphism, and isomorphism. Direct, detailed examinations of the function concept itself–and such fundamental properties as well-definedness and everywhere-definedness–are scarce. To this end, in this paper we examine non-examples of function in abstract algebra by conducting a textbook analysis and semi-structured interviews with abstract algebra instructors. In doing so, we propose four key categories based upon the definitive function properties of well-definedness and everywhere-definedness. These categories identify specific characteristics of the kinds of non-examples of function that abstract algebra instruction should emphasize, enabling us to hypothesize how students might be able to develop a robust view of function and explain in greater detail the nature of the reported difficulties that students experience.

Introduction

The function concept is critical in mathematics and is a core topic in the secondary and undergraduate mathematics curriculum (Bagley et al., 2015 ; Dubinsky & Wilson, 2013 ; Even & Tirosh, 1995 ; Hitt, 1998 ; Oehrtman et al., 2008 ). In abstract algebra, a nationally representative sample of abstract algebra experts recently concluded that topics like homomorphism, isomorphism, and binary operation–all of which are specific types of functions–are some of the most important concepts in the course (Melhuish, 2019 ). Indeed, nearly all of the research on functions in abstract algebra has examined key aspects of these various types of functions (e.g., Brown et al., 1997 ; Hausberger, 2017 ; Larsen, 2009 ; Leron et al., 1995 ; Melhuish et al., 2020b ; Rupnow, 2019 ). One theme that emerges from this literature is that students experience considerable challenges reasoning about these types of functions. The function concept itself, however, has received considerably less attention in these advanced settings. We also note that much of the functions literature has focused on examples of functions and has largely overlooked non-examples. To this end, in this paper we investigate the contents and structure of the non-examples in the instructional example space (Watson & Mason, 2005 ; Zazkis & Leikin, 2008 ) for function in abstract algebra. Our research question is: what non-examples of function do students encounter in introductory abstract algebra, and what are the key characteristics by which these non-examples might be productively classified?

Literature Review

Characterizations of the Function Concept

Much of the functions literature focuses on a covariational (e.g., Carlson, 1998 ; Carlson et al., 2002 ; Oehrtman et al., 2008 ) approach to functions, in which a function is viewed primarily as a relationship between two quantities that are changing in tandem. Although a covariational perspective is a very useful way to conceive of functions in courses like algebra and calculus, it is not useful in an abstract algebra setting because it “superimposes an ordinal system on function, which does not underlie many of the discrete structures in abstract algebra” (Melhuish & Fagan, 2018 , p. 22). Thus, a significant portion of the research on functions in the mathematics education literature is not able to account for the ways in which students must reason about functions in abstract algebra. Instead, we take a relational (Slavit, 1997 ) view of function in order to focus on “relationships between input–output pairs” (p. 262). This includes relationships between “individual inputs and outputs” (Slavit, 1997 , p. 262) as well as relationships between sets of inputs and sets of outputs.

Our relational focus highlights a need to specify in greater detail how we define the relationship between the inputs and outputs of a function. Weber and colleagues (2020) pointed out that there are two common ways to do so. The first defines a function in terms of "a domain, a codomain, and a correspondence between the domain and the codomain such that each member of the domain is assigned exactly one element of the codomain" (Weber et al., 2020, p. 2). The correspondence mentioned here is often referred to in the literature as the rule. From this perspective, the phrase 'exactly one element' is conventionally interpreted in terms of two conditions: (a) each element of the domain maps to at most one element of the codomain (i.e., the proposed mapping must be well-defined), and (b) each element of the domain maps to at least one element of the codomain (i.e., the proposed mapping must be everywhere-defined). The second characterization involves viewing a function f as a set of ordered pairs such that, if (x₁, y₁) and (x₂, y₂) are in f and x₁ = x₂, then y₁ = y₂. Here, the domain of f is defined as the set of all of the first coordinates of these ordered pairs and the codomain (equivalent in this case to the range) is the set of all of the second coordinates. A subtle but critical difference between these two characterizations is that correspondences defined using this second characterization are automatically everywhere-defined, and thus the only condition that must be satisfied for a proposed correspondence to be a function is well-definedness.
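
The two conditions above can be illustrated with a minimal, hypothetical sketch in Python (the finite correspondence, domain, and function names here are invented for illustration and are not from the paper): a proposed correspondence given as input-output pairs is a function exactly when it passes both checks.

```python
def is_well_defined(pairs):
    # Condition (a): each input maps to at most one output.
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

def is_everywhere_defined(pairs, domain):
    # Condition (b): each element of the domain has at least one image.
    return domain <= {x for x, _ in pairs}

# Hypothetical proposed correspondence on domain {0, 1, 2}:
domain = {0, 1, 2}
pairs = {(0, "a"), (0, "b"), (1, "a")}  # 0 has two images; 2 has none

print(is_well_defined(pairs))                # False
print(is_everywhere_defined(pairs, domain))  # False
```

Note that under the second (ordered-pairs) characterization only the first check would be needed, since the domain would be read off from the pairs themselves.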

We adopt the first characterization of function because it is commonly used in abstract algebra. For example, functions are often used to define a relationship between a familiar, well-understood algebraic structure and one that is unfamiliar in order to familiarize oneself with the latter (this is one of the many uses of the First Isomorphism Theorem). This choice shaped the study in important ways, particularly the way we operationalized the study’s central notion of a non-example of function (see “ The Importance of Non-examples ” section).

Literature on Everywhere-definedness and Well-definedness

Research on functions generally emphasizes the importance of attending to well- and everywhere-definedness. In the abstract algebra literature, this includes studies that examine students’ reasoning about binary operation (e.g., Brown et al., 1997 ; Melhuish & Fagan, 2018 ; Melhuish et al., 2020a ), homomorphism (e.g., Hausberger, 2017 ; Rupnow, 2019 ), and isomorphism (e.g., Larsen, 2009 ; Leron et al., 1995 ; Nardi, 2000 ). Collectively, these studies point out that developing a robust understanding of function is a key precursor to reasoning about these specific types of functions. For example, closure–one of the definitive characteristics of a binary operation–can be framed as a specific manifestation of everywhere-definedness. Additionally, with respect to reasoning coherently about homomorphisms, Melhuish and colleagues ( 2020b ) noted that “a fractured or rich understanding of function may serve as a hindrance or support, respectively” (p. 14). In short, the abstract algebra literature very clearly illustrates the implications of well-definedness and everywhere-definedness for reasoning with subsequent function-related ideas. Studies that involve direct, detailed examinations of these notions in their own right, however, are scarce.
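
The framing of closure as a manifestation of everywhere-definedness can be made concrete with a short, hypothetical sketch (the carrier set and operation are invented for illustration): subtraction on {0, …, 4} fails closure, which is precisely the proposed map A × A → A failing to be everywhere-defined into A.

```python
from itertools import product

A = set(range(5))  # a small carrier set, {0, 1, 2, 3, 4}

def op(a, b):
    return a - b   # proposed binary operation: ordinary subtraction

# A binary operation on A is a function A × A → A, so a closure failure
# (an output landing outside A) is exactly an input pair on which the
# proposed map into A is undefined.
failures = {(a, b) for a, b in product(A, A) if op(a, b) not in A}

print((0, 1) in failures)  # True: 0 - 1 = -1 is not in A
print((1, 0) in failures)  # False: 1 - 0 = 1 is in A
```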

The core function concept has also received attention in the broader literature base on functions. We note three themes from these studies. First, well-definedness has received considerably more attention than everywhere-definedness, which remains a critical but oft-overlooked concept. Second, the nuance of well-definedness creates difficulties for students, who find it hard to articulate what it means and why it is important (e.g., Even, 1993; Even & Tirosh, 1995). Students might also associate it primarily with procedural conceptions of the vertical line test (e.g., Clement, 2001; Kabael, 2011; Thomas, 2003). Third, students have difficulties adapting well-definedness (and the vertical line test) to functions whose domains are not the real numbers (e.g., Dorko, 2017; Even & Tirosh, 1995). We note that the vertical line test is of limited use in abstract algebra as (1) many functions do not lend themselves to a useful graphical illustration (which is required for the vertical line test), and (2) as it is typically stated, the vertical line test does not address everywhere-definedness. Thus, though it has been more than two decades since this observation was originally made by Even and Bruckheimer (1998), we believe it is still very much true that well-definedness "deserves more careful attention than it receives" (p. 30).

The difficulties students experience with well- and everywhere-definedness can be explained in part by an overreliance on the proposed rule used to define the correspondence between the domain and codomain (e.g., Bailey et al., 2019 ; Breidenbach et al., 1992 ; Carlson, 1998 ; Clement, 2001 ). Thompson ( 1994 ), for example, noted that the “predominant image evoked in students by the word ‘function’ is of two written expressions separated by an equal sign” (p. 68). Indeed, a rule-only view of function can “mask definitional properties such as well-definedness” (Melhuish et al.,  2020b , p. 4) and, we propose, everywhere-definedness. To help students overcome these difficulties, researchers have suggested that it is critical for students to consider the rule in conjunction with the domain and codomain when determining when a proposed correspondence is or is not a function (e.g., Dorko, 2017 ; Kabael, 2011 ; Oehrtman et al., 2008 ; Zandieh & Knapp, 2006 ). How to emphasize the importance of the domain and codomain, however – such as the characteristics of non-examples that an instructor might use to encourage students to attend to these features – has not been explored.

Theoretical Perspective

The Importance of Non-examples

Examples and non-examples are critical in mathematical reasoning because they can provide concrete illustrations of abstract ideas (e.g., Goldenberg & Mason, 2008 ; Tsamir et al., 2008 ; Zaslavsky, 2019 ). In particular, non-examples of a concept can illuminate insights that are not always apparent when considering examples of that same concept. As noted by Watson and Mason ( 2005 ), non-examples have the potential to “demonstrate the boundaries or necessary conditions of a concept” (p. 65) and, in turn, showcase the essential aspects and features of definitions (such as the features of everywhere- and well-definedness in the definition of function). In particular, non-examples can make these key conceptual features more apparent by illustrating what happens when they are not satisfied (e.g., Tsamir et al., 2008 ).

Our characterization of a non-example of function is shaped by the characterization of function we adopted in the "Characterizations of the Function Concept" section: a function is a proposed mapping f : A → B that is both well-defined and everywhere-defined. We view a non-example of function, therefore, as a proposed correspondence that fails to satisfy either the well-definedness condition or the everywhere-definedness condition (or both). This is a key distinction: with the other characterization, functions that are defined in terms of sets of ordered pairs are automatically everywhere-defined (and thus a non-example would simply be a relation that is not well-defined). Our choice here reflects the literature's emphasis on the importance of (yet notable lack of attention afforded to) everywhere-definedness. Another related consequence of this choice is that changing the domain or codomain changes the nature of the proposed correspondence, even if the rule remains the same. Weber and colleagues (2020) offered the example of the squaring function from ℝ to ℝ and noted that it is a different function than the squaring function from ℝ to [0, ∞). Extrapolating this point, we note that changing the domain or codomain of a proposed correspondence could, for instance, change a non-example of function into a function (a process that we call repairing a non-example).
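
The idea of repairing a non-example can be sketched with a small, hypothetical Python example (the rule and proposed domain are invented for illustration): a rule that is not everywhere-defined on its proposed domain becomes a function once the domain is restricted to the inputs where the rule applies.

```python
# Hypothetical rule: halve the input, staying inside the integers.
def rule(x):
    return x // 2 if x % 2 == 0 else None  # odd inputs have no integer image

proposed_domain = {1, 2, 3, 4}

# The proposed correspondence is not everywhere-defined: some inputs lack an image.
undefined = {x for x in proposed_domain if rule(x) is None}
print(sorted(undefined))  # [1, 3] -> not a function on {1, 2, 3, 4}

# "Repair" the non-example by restricting the domain to where the rule applies.
repaired_domain = proposed_domain - undefined
print(all(rule(x) is not None for x in repaired_domain))  # True
```

The rule never changed; only the domain did, which is exactly the sense in which the nature of the proposed correspondence depends on the domain and codomain and not on the rule alone.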

Generally, we note that, while example-based research has received a fair amount of attention in abstract algebra (e.g., Cook & Fukawa-Connelly, 2015 ; Fukawa-Connelly & Newton, 2014 ), research that leverages non-examples in this domain is scarce. Thus, non-examples are a potentially valuable but currently underutilized tool. Indeed, the literature suggests that such an analysis could be particularly productive at the advanced undergraduate level. Melhuish and colleagues ( 2020b ), for instance, noted that “a lack of unification between the general function [concept] and specific AA functions was pervasive” (p. 15, emphasis added). We infer, then, that examining specific non-examples of function could yield similar insights into the nature of the general function concept. Similarly, Even ( 1993 ) and Even and Tirosh ( 1995 ) called attention to the importance of being able to distinguish between functions and non-functions and illustrated that having students consider well-chosen non-examples of function could be particularly beneficial in helping them develop a clearer image of this distinction. What it means for a collection of non-examples of function to be ‘well-chosen,’ however, is currently unclear. To address this issue, in this paper we examine the non-examples contained in the instructional example space for function in abstract algebra.

The Instructional Example Space

We employ Watson and Mason's (2005) notion of example space – that is, the collection of examples that are associated with a particular concept. We interpret the term 'example' in a holistic way to refer to any specific, concrete manifestation of an abstract mathematical principle, concept, or idea. This can include exercises, illustrations, or, importantly for this study, non-examples. Watson and Mason (2005) distinguished between different kinds of example spaces, two of which are relevant for our objectives here. A personal example space is the collection and organization of examples and non-examples that an individual associates with a particular mathematical topic. The conventional example space is the collection of examples "as generally understood by mathematicians and as displayed in textbooks, into which the teacher hopes to induct his or her students" (Watson & Mason, 2005, p. 76). Zazkis and Leikin (2008) proposed a useful refinement of the conventional example space, distinguishing between expert example spaces and instructional example spaces. Expert example spaces are the personal example spaces of experts and display the "rich variety of expert knowledge" whereas instructional example spaces involve what examples are "displayed in textbooks" and are used in instruction (Zazkis & Leikin, 2008, p. 132). In this paper, in order to investigate what it means for a collection of non-examples of function to be 'well-chosen,' we examine the non-examples contained in the instructional example space.

Example spaces are not only characterized by lists of examples and non-examples; they also include the means by which these examples and non-examples might be organized and structured (Sinclair et al., 2011 ). We therefore distinguish between the contents and the structure of the non-examples in the instructional example space. For our purposes, the contents of the instructional example space are the union of the instructional non-examples that specific, individual experts consider to be useful in their instruction. Thus, to say that an example is in the instructional example space for function in abstract algebra is to say that there is a specific individual (in this case, an abstract algebra instructor or abstract algebra textbook author) who (1) views the proposed correspondence as a non-example of function, and (2) considers it to be useful in their instruction. We consider the structure to be the characteristics that we infer (from instructors’ descriptions and explanations) about what certain non-examples illustrate and why they are important. Inferences about the structuring of the non-examples in the instructional example space might involve, for instance, (1) researchers’ own perceptions of what conceptual aspect a non-example can (or is intended to) illustrate, (2) researchers’ interpretations of why an expert believes a particular characteristic to be important, or (3) researchers’ conjectures about the key distinctions between non-examples in a given collection.

Methods

We employed two methodologies: an analysis of introductory abstract algebra textbooks and semi-structured interviews with algebraists. First, we conducted a textbook analysis because (1) the instructional example space, by definition, contains the examples in textbooks, and (2) textbook analyses can provide insight into "how experts in a field […] define and frame foundational concepts" (Lockwood et al., 2017, p. 389). Accordingly, while the primary purpose of the textbook analysis was to identify the non-examples in the instructional example space (the contents), we were also attentive to insights in the textbooks regarding how experts might organize these non-examples (the structure). Second, upon completion of the textbook analysis, we conducted a series of semi-structured interviews (Fylan, 2005) with abstract algebra instructors. Semi-structured interviews were important for our objectives because they allowed us to "address aspects that are important to individual participants" (Fylan, 2005, p. 66) and thus provided a means by which to flexibly pursue emerging themes we inferred related to the structure of the instructional example space. Indeed, the primary purpose of these interviews was to gain insight into the structure of the instructional example space (though we remained open to identifying additional contents as well).

Two considerations guided our selection of textbooks: relevance (to select textbooks that are currently in use in undergraduate abstract algebra courses in the United States) and depth (to select a sample that is large enough to saturate any categories that emerge in our analysis). In total, we collected data from 13 textbooks, 9 of which we had verified were being used ubiquitously (Melhuish, 2019) or at prominent universities (National University Rankings, n.d.) (to ensure relevance), and 4 of which we introduced ourselves (following Lockwood et al., 2017) (to ensure depth) – see Table 1. Certainly this sample was relevant: according to Melhuish (2019), the 4 textbooks in row 1 of Table 1 alone were in use at a combined total of 60% of the 1244 institutions surveyed (nearly 750 institutions); supplementing with 5 textbooks in use at top Research-1 institutions increases the percentage of market share (and therefore relevance) of our sample. In the event that we determined this initial sample to be insufficient for achieving saturation, we had planned to incorporate more textbooks as needed using similar criteria. Post-analysis, however, we concluded that, even though the 4 textbooks included for depth certainly helped us to illustrate the categories of our framework, the 9 textbooks selected for relevance would have been sufficient on their own for achieving saturation; as such, additional selection measures were not necessary.

Description of the textbook sample

* We obtained this information by either (a) consulting the information provided by the relevant campus bookstores, or (b) finding websites from instructors who had taught the introductory abstract algebra course in the 2019–2020 academic year. We were able to include the textbooks in use at 22 of the Top 25 Research-1 universities in the United States. The five textbooks listed in the second row of Table 1 are not a complete listing of the textbooks used in these institutions. In particular, several institutions used Gallian's (2017), Fraleigh's (2002), and Hungerford's (2014) textbooks, all of which were already included in the first row of Table 1 on account of Melhuish's (2019) study. Additionally, we were unable to obtain copies of the textbooks from three of these institutions, which either used proprietary course materials or textbooks that at the time were otherwise difficult to obtain.

We created a list of terms (informed by the literature and our own knowledge of abstract algebra) related to functions in abstract algebra: function, relation, map, correspondence, well-definedness, everywhere-definedness, domain, codomain, rule, binary operation, homomorphism, and isomorphism. The first author then identified the sections in each textbook that corresponded to these terms by using the table of contents and the index. Next, the first author collected the sections in each textbook related to these terms by obtaining a digital PDF file of the textbook (when available) or by scanning the desired sections from a hard copy of the textbook (we included section-ending exercises as part of each section). Then the first author read the relevant sections of each textbook, highlighting any excerpts that contained (1) non-examples of function (to identify the contents), and (2) any associated descriptions and explanations related to a given non-example (to infer the structure). The textbooks with the greatest market share (row 1 of Table 1) were analyzed first; the first author then used theoretical sampling techniques (Creswell, 2012) to select the textbooks that, based upon her initial examination of the textbooks in data collection, she believed would enable her to elaborate and refine codes and emerging themes most effectively. We identified a total of 71 non-examples of function (this number includes non-examples that emerged in the semi-structured interviews, described below). To analyze the data, we followed Creswell's (2012) method for thematic analysis. This analysis was exploratory but did involve the use of some a priori codes: once a non-example of function was identified, we initially coded it (and any associated descriptions or explanations) as either (a) a well-definedness issue, (b) an everywhere-definedness issue, or (c) both.
To enable us to focus more clearly on specific well- or everywhere-definedness issues, in this paper we focus only on those non-examples that satisfy either (a) or (b) but not both. Once the first author had assigned one of these three codes to each excerpt and trimmed those coded as both (a) and (b), she re-read each remaining excerpt, creating and assigning secondary codes based upon her interpretations of what the textbook authors were identifying as key characteristics of these non-examples. All codes were continually refined, revised, and reorganized as coding progressed. During this process, the second author reviewed all coded excerpts and proposed different ways by which they might be plausibly interpreted and organized; each code was then discussed and negotiated until agreement was reached.

Five mathematicians (whom we refer to as Professors A, B, C, D, and E) participated in the semi-structured interviews. All were tenured or tenure-track faculty members at a midwestern Research-1 university who had taught an undergraduate abstract algebra course in the last five years. All interviews were conducted and recorded on Zoom (on account of the COVID-19 pandemic). We began data collection with a semi-structured group interview with all 5 abstract algebra instructors because we hypothesized that a group setting would be more conducive to generating non-examples than an individual interview – that is, we anticipated that in such a setting the group would "become more than the sum of its parts [and] exhibit a synergy that individuals alone don't possess" (Krueger & Casey, 2009, p. 19). A central question of the group interview was, "what are 3–4 non-examples that you like to use to illustrate the function concept, and why?" Individual semi-structured interviews followed. The prompts for these interviews were informed by the results of the textbook analysis and group interview; though the nature of semi-structured interviews prevents us from providing a comprehensive listing of all questions asked, a representative sample is included in Fig. 1.

Fig. 1. Typical questions asked of the abstract algebra instructors in the semi-structured interviews

All mathematicians participated in at least one individual interview in addition to the group interview; each mathematician was invited to participate in multiple individual interview sessions, but some were unable to do so due to varying availability. Professors A and E participated in a total of three individual interviews, Professor B participated in two, and Professors C and D participated in one. Each individual interview session lasted approximately 1 to 1.5 hours. To analyze the data from the (group and individual) interviews, we transcribed each session in its entirety and employed the same procedures for thematic analysis (Creswell, 2012) that we employed for the textbook analysis (the one distinction being that we began this phase of the analysis with the codes that resulted from the textbook analysis). This iterative process resulted in four key themes: well-definedness – domain choice, well-definedness – codomain choice, everywhere-definedness – domain restriction, and everywhere-definedness – codomain expansion. These codes correspond to the four categories by which we structure the instructional example space; the characteristics by which we assigned these codes are included as part of the results.

We now characterize and illustrate four categories that, we propose, can be used to productively organize the non-examples in the instructional example space for function in abstract algebra. We wish to call attention to three points before proceeding. First, as mentioned in the “Methods” section, to enhance the clarity of our analysis, we restrict ourselves here to non-examples that have either (a) a well-definedness issue or (b) an everywhere-definedness issue (but not both). Second, we therefore do not claim that these categories partition the entire space of non-examples (that is, we do not consider these categories to be exhaustive or disjoint). Finally, we focus our analysis on a small number of what we considered to be vivid, prototypical non-examples in each category.

Well-definedness

We classify a non-example in the well-definedness category if there exists at least one element of the proposed domain with at least two corresponding images contained in the proposed codomain. For example, consider ϕ : Q → Z given by ϕ(a/b) = a + b (non-example 2.1 in Fig. 2). Gallian (2017) explained that ϕ “does not define a function since 1/2 = 2/4 but ϕ(1/2) ≠ ϕ(2/4)” (p. 21). That is, as noted by Professor D, for “one half and two fourths, you get different answers. So if you get different answers for the same input, it’s not a function.” Consider also f : R⁺ → R given by f(a) = ±√a (non-example 2.4). Here, as noted by Rotman (2006), “there are two candidates for √9, namely 3 and -3” (p. 83) and, thus, “f(a) = ±√a is not single-valued, and hence it is not a function” (Rotman, 2006, p. 88). Similarly, Professor C noted that the input 9 maps to “plus or minus 3 […]. But plus or minus three isn’t a number, it’s two numbers.” Other non-examples that we classified in this category are displayed in Fig. 2.

Fig. 2. Non-examples with well-definedness issues
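The failure in non-example 2.1 can be made concrete in a few lines of Python (our own sketch, not from the paper): encoding a *representation* of a rational as a numerator–denominator pair shows that the proposed rule’s output depends on which representation is chosen, even though the underlying domain element is the same.

```python
from fractions import Fraction

def phi(numerator, denominator):
    """Apply the proposed rule phi(a/b) = a + b to one *representation* a/b."""
    return numerator + denominator

# 1/2 and 2/4 are the same element of Q...
assert Fraction(1, 2) == Fraction(2, 4)
# ...but the rule assigns them different images (3 vs. 6),
# so phi does not define a function Q -> Z.
assert phi(1, 2) != phi(2, 4)
```

The function name `phi` and the pair encoding are ours; the point is only that a single domain element acquires two images, which is exactly the well-definedness failure described above.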

We further refine these non-examples into categories based upon a distinction we inferred from the way the experts in our study discussed them. A key element of this distinction involved the nature of the choices one makes when evaluating a proposed correspondence at a particular input value. Professor A, for example, proposed that certain non-examples with well-definedness issues “demand a different treatment.” He then proposed that this ‘different treatment’ could be framed in terms of the following question: “Where is the choice taking place? Is it in your input? Or is it, uh, in the execution of the rule?” Professor B similarly framed this distinction – which he described as “two different types of problems” – in terms of the same choice:

Your function could be, um, not well-defined because the value in the domain is not well-defined, or that you have to make a choice in the value of the domain. Or they could be not well-defined because the value of the output is not well-defined and you have to make a choice of that value of the output.

Broadly, then, we infer that this distinction centers primarily on whether one is making a choice in the domain or the codomain. We account for this distinction by introducing two subcategories, which we elaborate below.

Well-definedness – Domain Choice

The aforementioned choice in the domain refers to the choice of different yet equivalent representations for a given domain element. This issue was explicitly attended to in both the textbooks and interviews. For example:

  • “Problems arise when the element x can be described in more than one way, and the rule or formula for f(x) depends on how x is written” (Beachy & Blair, 2019, p. 56).
  • If “there are multiple ways to represent elements in the domain (like in Z_n or Q), then we need to know whether our mapping is well-defined before we worry about any other properties the mapping might possess” (Hodge et al., 2014, p. 129).
  • “If the defining rule for a possible binary operation is stated in terms of a certain type of representation of the elements, then the rule does not define a binary operation unless the result is independent of the representation for the elements” (Gilbert & Gilbert, 2015, p. 305).
  • “The function is deliberately taking […] a particular presentation of the rationals … That’s the issue … that’s a problem. Like if you’re going to, if you’re gonna use a representative … then you have to be extra careful.” (Professor E)

We identify two elements common to these excerpts: each mentions the importance of attending to multiple representations of a domain element as well as the image of these representations under the rule. These features correspond to the two definitive characteristics of the non-examples in the well-definedness – domain choice category:

  • elements in the domain can be represented in different yet equivalent ways, and
  • these equivalent representations are mapped to different outputs by the rule.

In light of these characteristics, we propose that the previously discussed non-example 2.1 belongs in this category. Notice that (1) each rational number (such as ½) admits different yet equivalent representations, and (2) the rule maps each representation to a different element of the codomain. Consider also non-example 2.3. Beachy and Blair (2019) pointed out that, “in defining functions on Z_n it is necessary to be very careful that the given formula is independent of the numbers chosen to represent each congruence class” (p. 53). Referring to the same non-example, Professor B’s comment illustrates why such caution is indeed necessary: “if I take x equal to the equivalence class of 1 mod 4, well that’s equivalent to 5. And, if I were to choose 5, it would map to 5 and if I were to choose 1, it would map to 1. And 1 and 5 are not equivalent in the codomain.” Regarding the characteristics of this category, this comment illustrates that (1) the elements of the domain Z_4 can be represented in multiple, equivalent ways, highlighting the aforementioned notion of ‘choice’ in the domain, and (2) the rule maps these equivalent representations to outputs that are not equivalent in the codomain. See Fig. 3 for an illustration of non-examples 2.1 and 2.3; for other non-examples in this category, see Fig. 4.

Fig. 3. Illustrations of non-examples 2.1 and 2.3 (well-definedness – domain choice)

Fig. 4. Additional non-examples in the well-definedness – domain choice category
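The domain-choice check can be mechanized. The sketch below (ours; the paper does not give the exact rule of non-example 2.3, so the target Z_10 is a hypothetical reconstruction of Professor B’s quoted behavior, in which the representatives 1 and 5 of the same class receive inequivalent images) tests whether a rule on integer representatives induces a map Z_n → Z_m.

```python
def induces_map_on_Zn(n, m, rule, reps_to_try=5):
    """Check whether `rule`, applied to integer representatives, induces a
    well-defined map Z_n -> Z_m: equivalent inputs must get equivalent outputs."""
    for x in range(n):
        # try several equivalent representatives x, x+n, x+2n, ... of [x]_n
        images = {rule(x + k * n) % m for k in range(reps_to_try)}
        if len(images) > 1:
            return False  # some class [x]_n has two distinct images in Z_m
    return True

# The identity rule does NOT induce a map Z_4 -> Z_10: representatives
# 1 and 5 of the same class map to 1 and 5, and 1 != 5 (mod 10).
assert not induces_map_on_Zn(4, 10, lambda x: x)
# By contrast, x -> 5x does induce a map Z_4 -> Z_10, since 5*(x + 4k) = 5x (mod 10).
assert induces_map_on_Zn(4, 10, lambda x: 5 * x)
```

This directly operationalizes the two defining characteristics above: it enumerates equivalent representations of each domain element and checks whether the rule sends them to equivalent outputs.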

Well-definedness – Codomain Choice

The choice in the codomain to which we refer above involves choosing amongst multiple outputs that are associated with a single, unambiguously represented input. This issue was explicitly attended to in the interviews. For example:

  • Professor A: “We’ve got to make a choice. […] There is not a choice in the domain, […] there is a choice of things that satisfy the statement in your rule.”
  • Professor B: “I wouldn’t say equivalence is at the heart of [it]. […] The definition has two possible values. […] You have to clarify which value you’re going to choose. That’s a problem with multiple values.”
  • Professor E: “You’re not taking advantage of any strange representation. The problem is just, like, with the function itself.”

We identify two features common to these excerpts. First, each mentions that the well-definedness issue is not attributed to equivalent representations in the domain. Second, we infer that the well-definedness issue is instead attributed to a choice in the codomain caused by multiple values of the rule. These two features correspond to the two definitive characteristics of the non-examples in the well-definedness – codomain choice category:

  • the proposed correspondence does not invoke different yet equivalent representations of elements in the domain, and
  • despite the lack of equivalent representations of elements in the domain, the rule still forces a choice to be made amongst outputs in the codomain.

The aforementioned non-example 2.4 exemplifies these characteristics: (1) the domain (R⁺) causes no issues with respect to representation, yet (2) there is still an input that the rule maps to two outputs. Non-example 2.2 can also, we propose, be classified in the codomain choice category. Dummit and Foote (2004) explained that “this unambiguously defines f unless A_1 and A_2 have elements in common (in which case it is not clear whether these elements should map to 0 or to 1)” (pp. 1–2). For example, if A_1 = 2Z and A_2 = 3Z, then it is not clear whether the domain element 6 maps to 0 or 1; to use the language of the algebraists, a choice must be made in the codomain. Additionally, Professor E pointed out that, in this non-example, “there’s no issue of representative, you know … you’re not invoking a presentation of elements of [A_1] or [A_2].” Through the lens of the characteristics of this category, these comments collectively call attention to the fact that (1) the elements of the domain do not admit multiple representations, and (2) the rule is ambiguous and possibly maps at least one input to two outputs. See Fig. 5 for an illustration of non-examples 2.2 and 2.4; other non-examples that we classified in this category appear in Fig. 6.

Fig. 5. Illustrations of 2.2 and 2.4 (well-definedness – codomain choice)

Fig. 6. Non-examples included in the codomain choice category
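Non-example 2.2 with the text’s instantiation A_1 = 2Z, A_2 = 3Z can be sketched in Python (the helper name `candidate_outputs` is ours). Note that no equivalent-representation issue is involved; the ambiguity lives entirely in the codomain, at the elements common to both sets.

```python
def candidate_outputs(x):
    """All outputs the rule 'A1 -> 0, A2 -> 1' could assign to the integer x,
    with A1 = 2Z and A2 = 3Z."""
    outs = set()
    if x % 2 == 0:   # x is in A1 = 2Z
        outs.add(0)
    if x % 3 == 0:   # x is in A2 = 3Z
        outs.add(1)
    return outs

assert candidate_outputs(4) == {0}      # only in 2Z: unambiguous
assert candidate_outputs(9) == {1}      # only in 3Z: unambiguous
assert candidate_outputs(6) == {0, 1}   # in both: a choice must be made in the codomain
```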

Everywhere-definedness

We classify a non-example in the everywhere-definedness category if there exists at least one element of the proposed domain for which there is no corresponding image in the proposed codomain. For instance, consider f : R × R → R given by f(a, b) = a/b (Pinter, 2010). Professor D pointed out that f is not a function because, generally, “you have to know that you can’t divide by zero.” Pinter (2010) specifically pointed out that “there are ordered pairs such as (3,0) whose quotient 3/0 is undefined” (p. 19). Fraleigh (2002), commenting on a similar non-example, noted that there is no element of the codomain that “is assigned by this rule to the pair (2,0)” (p. 25). Additionally, consider the proposed correspondence g : Z → N given by g(x) = x³ (non-example 7.5). Professor C concluded that g is not a function, rhetorically asking “where does -1 go?” Professor E similarly specified that “-1 cubed is not a natural number, so it doesn’t go anywhere in its codomain.” Figure 7 displays other non-examples with everywhere-definedness issues.

Fig. 7. Non-examples with everywhere-definedness issues
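Both everywhere-definedness failures just discussed can be sketched in Python (function names are ours). The first rule cannot be executed at all at (3, 0); the second executes, but its output falls outside the proposed codomain N.

```python
def f(a, b):
    """Proposed rule f(a, b) = a/b on R x R."""
    return a / b

# (1) No output exists at all for (3, 0): the rule cannot be executed.
try:
    f(3, 0)
    executed = True
except ZeroDivisionError:
    executed = False
assert not executed

def g(x):
    """Proposed rule g(x) = x**3 for a map Z -> N."""
    return x ** 3

# (2) g(-1) = -1 exists as a number, but it is not an element of N.
assert g(-1) == -1 and g(-1) < 0
```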

We now introduce a distinction the mathematicians attended to related to the specific ways in which the input–output correspondence fails. Professor B, for instance, mentioned that “there’s no division by zero, ever.” In contrast, when examining non-example 7.5, he pointed out that -1 (the output associated with the input x = -1) exists but is “not contained in the codomain.” We therefore infer that he is distinguishing between instances in which the output does not exist at all and those in which it does exist but not in the specified codomain. Other algebraists made this distinction as well. Professor A, for example, suggested that, amongst non-examples with everywhere-definedness issues, “there are situations where you just can’t execute the instructions and there are situations where you could execute the instructions but [miss] the target only by a margin.” We therefore introduce two subcategories, which are characterized and elaborated below.

Everywhere-definedness – Codomain Expansion

The everywhere-definedness – codomain expansion subcategory includes the everywhere-definedness non-examples for which the output exists in some natural, accessible superset of the proposed codomain. We observed that these kinds of non-examples were often discussed by the mathematicians in the context of expanding the proposed codomain (hence the category name):

  • Professor A: “The codomain should generally be some space that’s large enough.”
  • Professor D: “It’s not a function because that formula is not defined on every element of the domain. So [we’re] having to adjust the codomain.”
  • Professor E: These kinds of non-examples “are basically not functions in the same way. There is a way to extend the codomain to make them functions.”

For example, Hungerford (2014) considered the rule f(x) = x/2 in which the proposed domain and codomain are both Z, pointing out that “the rule of f makes sense for odd integers” (p. 513). We interpret this to mean that the rule can, in fact, be evaluated for odd integers (such as 9) to obtain some number (9/2). However, they go on to note that “f(9) = 9/2, which is not in Z” (p. 513), the codomain. In light of our comments above, we observe that replacing the proposed codomain Z with, say, Q, repairs this non-example and resolves the issue. We note that these kinds of non-examples can also be repaired by restricting the domain – for instance, restricting the domain of f to 2Z also resolves the issue – but our focus here is on the fact that they can be repaired by broadening the codomain. This serves to distinguish this subcategory from the everywhere-definedness – domain restriction subcategory (which, as we will discuss in the “Everywhere-definedness – Domain Restriction” section, must be repaired by restricting the domain because it cannot be easily or naturally repaired by broadening the codomain). Thus, we propose the following characteristics of everywhere-definedness – codomain expansion:

  • There exists at least one input for which the corresponding output is not an element of the proposed codomain.
  • The proposed correspondence can be repaired by broadening the proposed codomain in a natural way.

Consider, for instance, the aforementioned non-example 7.5 in Fig. 7. Notice that (1) the cube of each negative integer exists, but many of these outputs are not contained in N, the proposed codomain, and (2) this non-example can be repaired by broadening the codomain from N to Z. We also classify non-example 7.3 in the everywhere-definedness – codomain expansion category. Beachy and Blair (2019), for instance, called attention to the fact that “we immediately run into a problem: the square root of a negative number cannot exist in the set of real numbers” (p. 52). The mathematicians identified the same issue. Professor B, for example, noted that the proposed correspondence “is not defined for negative real numbers, so therefore it’s not a function.” Professor A specified that, with the proposed codomain, “there is no square root negative one or something, so this process is no good.” Professor A later clarified, however, that √-1 does exist, noting that “we’re going to have to deal with complex roots, which means we need to modify the codomain.” Along these lines, Beachy and Blair (2019) pointed out that “we can enlarge the codomain to the set C of all complex numbers, in which case the formula f(x) = √x yields a function f : R → C” (p. 52). Non-example 7.3 belongs to the everywhere-definedness – codomain expansion category, then, because (1) there is indeed at least one input for which the corresponding output is not an element of the codomain, and (2) the non-example can be repaired by broadening the proposed codomain from R to C. See Fig. 8 for an illustration of non-examples 7.3 and 7.5; other non-examples that we classified in this category appear in Fig. 9.

Fig. 8. Illustrations of non-examples 7.3 and 7.5 (everywhere-definedness – codomain expansion)

Fig. 9. Additional non-examples in the everywhere-definedness – codomain expansion category
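Codomain expansion has a natural computational analogue: Python’s `Fraction` type can stand in for the broadened codomain Q, and `cmath` for C (this sketch and the name `f_repaired` are ours).

```python
from fractions import Fraction
import cmath

# Non-example: f(x) = x/2 proposed as Z -> Z; f(9) = 9/2 is not in Z,
# but it does live in the accessible superset Q.
def f_repaired(x):
    return Fraction(x, 2)   # codomain broadened from Z to Q

assert f_repaired(9) == Fraction(9, 2)   # the once-missing output now has a home

# Non-example 7.3: sqrt proposed as R -> R; sqrt(-1) is not in R,
# but broadening the codomain from R to C repairs the correspondence.
assert cmath.sqrt(-1) == 1j
```

In both cases the rule is untouched; only the proposed codomain changes, which is the defining feature of this subcategory.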

Everywhere-definedness – Domain Restriction

The everywhere-definedness – domain restriction category refers to those everywhere-definedness non-examples for which the image of a given input does not exist in any set that is accessible to an introductory abstract algebra student. This idea was often discussed in terms of restricting the domain to repair the non-example in question:

  • Professor B: “You just have to change the domain. […] If there’s no sensible definition at some point in your domain, then you have to change the domain.”
  • Professor C: “You restrict the domain to make that a function.”
  • Professor E: “It’s a domain problem, a domain error. […] Modifying the domain, you know, you can always make it smaller.”

The key theme we observe from these excerpts is that the mathematicians viewed the issue as domain-related. In particular, we note Professor B’s use of the phrase “have to,” which highlights the fact that repairing such non-examples by replacing the proposed codomain is perhaps not sensible (we illustrate this point using non-examples 7.2 and 7.4 below). This theme forms the basis for the characteristics of this category of non-examples:

  • There exists at least one input in the proposed domain for which the corresponding output is not an element of the proposed codomain.
  • The proposed correspondence can only be repaired by restricting the domain.

Consider again non-example 7.2. Characteristic 1 is satisfied because, as previously noted, there exist inputs in the domain (such as (2,0)) for which there is no corresponding image in the codomain. Regarding Characteristic 2, the mathematicians generally framed ‘divide by 0’ as an issue that could be resolved by repairing the domain. Professor E, for instance, noted that he “would fix this by changing the domain.” Illustrating one possible way to do this, Fraleigh (2002) restricted the domain to pairs of positive rational numbers (i.e., the set Q⁺ × Q⁺) and noted that, as a result of this modification, the conditions for function “are satisfied” (p. 25). Professor B made an even stronger statement, noting that “you just have to fix the domain by saying that b has to not be zero. And then it makes sense.” Put another way, there is no (accessible) superset containing the proposed codomain that can be used to repair this non-example in a natural way.

We also note that non-example 7.4 can be classified in the everywhere-definedness – domain restriction category. Regarding Characteristic 1, Fraleigh (2002) explained that “the usual matrix addition is not a binary operation on M_{m×n}(R) since A + B is not defined for an ordered pair (A,B) of matrices having different numbers of rows or of columns” (p. 21). So, for example, the input (ordered pair) consisting of, say, a 2 × 3 matrix A and a 2 × 2 matrix B does not have a sum in the proposed codomain because A and B have a different number of columns (3 vs. 2, respectively). For the related case of matrix multiplication, Professor B pointed out that “matrix multiplication is only defined if the number of columns in A equals the number of rows in B.” We note that the ordered pair consisting of the aforementioned 2 × 3 matrix A and the 2 × 2 matrix B also has no image in the proposed codomain with respect to matrix multiplication. Regarding Characteristic 2, the mathematicians suggested that these non-examples could be repaired by restricting the domain. For instance, Professor E proposed restricting the domain to M_{n×n}(R) × M_{n×n}(R): “you could fix this by fixing the size, you know? You could say, like, square matrices of n by n ... n by n matrices would be fine.” Professor B also commented that “in this case, you have to restrict that set [the domain] to the pairs of matrices that are consistent.” We again interpreted the use of the phrase ‘have to’ to refer to the fact that repairing this non-example by expanding the codomain is neither natural nor sensible – put another way, there is no accessible superset of M_{m×n}(R) that contains the matrices A + B and AB (as stipulated above). In such cases, Professor B noted, “we just say it’s undefined.” See Fig. 10 for an illustration of non-examples 7.2 and 7.4; for other non-examples that we classified in this category, see Fig. 11.

Fig. 10. Illustrations of 7.2 and 7.4 (everywhere-definedness – domain restriction)

Fig. 11. Additional non-examples in the everywhere-definedness – domain restriction category
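Repair by domain restriction also has a direct computational analogue: rather than enlarging the codomain (no accessible superset contains the missing outputs), the offending inputs are excluded. The sketch below (ours; `safe_div` and `mat_add` are hypothetical names, and lists of lists stand in for matrices) restricts division to pairs with b ≠ 0, in the spirit of Fraleigh’s Q⁺ × Q⁺, and matrix addition to same-shape pairs.

```python
def safe_div(a, b):
    """Division with the domain restricted to pairs (a, b) with b != 0."""
    if b == 0:
        raise ValueError("(a, 0) has been removed from the domain")
    return a / b

def mat_add(A, B):
    """Matrix addition with the domain restricted to same-shape pairs."""
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        raise ValueError("shape mismatch: this pair is not in the restricted domain")
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

assert safe_div(6, 3) == 2.0
assert mat_add([[1, 2]], [[3, 4]]) == [[4, 6]]
```

On the restricted domains every input now has an image in the original codomain, which is exactly the repair the mathematicians describe.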

Using a textbook analysis and semi-structured interviews with mathematicians, we have identified categories that highlight key characteristics of the non-examples in the instructional example space. These categories, we propose, offer a clearer image of what it means for a set of non-examples of function in introductory abstract algebra to be ‘well chosen’ (see Fig. 12 for a summary of these categories and their characteristics).

Fig. 12. Summary of categories of non-examples in the instructional example space

In this section, we outline our conjectures pertaining to the importance and implications of these categories, identify the contributions of this work, and discuss limitations and future research.

Importance and Implications of These Categories

We examined the instructional example space to identify the characteristics of non-examples that would be advantageous for students to have in their personal example spaces. Here we discuss the four categories in light of this objective.

Recall the theme in the literature that students experience considerable challenges with functions – even in advanced mathematics – in part because they have a view of function that focuses predominantly on the rule (to the exclusion of the domain and codomain). We propose that these categories provide a meaningful way to parse the instructional example space because they can each be viewed and characterized in terms of meaningful relationships with the domain or codomain and therefore hold potential for supporting students in developing a more comprehensive view of function as a coordination of the rule, domain, and codomain. The two subcategories in the well-definedness category, for instance, can be characterized by either a choice in the domain (amongst equivalent representations of the same element) or the codomain (amongst multiple output values assigned by the rule). The two subcategories in the everywhere-definedness category can be characterized by determining whether the non-example can be repaired by expanding the codomain (if the targeted output exists in some accessible superset) or must be repaired by restricting the domain (if the targeted output does not exist in an accessible superset). We therefore hypothesize that non-examples chosen according to these four categories do indeed offer potential opportunities for students to move beyond rule-only reasoning to develop a more comprehensive view of function that explicitly attends to the domain and codomain.

We consider the well-definedness – domain choice and everywhere-definedness – codomain expansion categories to be particularly important to include in introductory abstract algebra instruction (and, accordingly, for students to incorporate into their own example spaces) because these categories include non-examples not included in a typical introductory student’s personal example space. Put another way, we suspect that many introductory abstract algebra students at the beginning of the course are more familiar with non-examples in the well-definedness – codomain choice and everywhere-definedness – domain restriction categories. This is notable for two reasons. First, experience with well-definedness – domain choice and everywhere-definedness – codomain expansion is critical for subsequent reasoning with functions in abstract algebra. Well-definedness – domain choice is critical for reasoning with functions defined on sets of equivalence classes or quotient structures (as in the First Isomorphism Theorem or results related to the formal construction of the rational numbers). In particular, students need to check well-definedness issues related to equivalence every time they define a function whose domain involves a quotient structure, a common task. Everywhere-definedness – codomain expansion is important in abstract algebra when reasoning, for example, about the closure of a proposed binary operation or when attempting to define a mapping between two algebraic structures. While students likely have some prior exposure to domain restriction (e.g., inverse functions), the notion of expanding the codomain is likely to be less familiar.

Second, we suspect that rule-only reasoning is insufficient to determine that many of the non-examples in well-definedness – domain choice and everywhere-definedness – codomain expansion are, in fact, non-examples. Consider, for instance, non-examples 2.1 and 2.3 from the well-definedness – domain choice category. While the issue with these non-examples resides in the representation of elements in the domain, they both feature simple, familiar formulas (addition and the identity) that are commonly associated in previous courses with functions on the real numbers. There are no obvious pitfalls (such as the prototypical division by zero; see non-examples 7.2 and 7.6) or multiple outputs (of the kind flagged by the vertical line test; see non-example 2.4) that are observable simply by examining the rule. That is, we suspect that rule-only reasoners would likely overlook the existence of equivalent representations (well-definedness – domain choice characteristic 1) and thus there would be no means of perceiving that the rule maps the same input to different outputs (characteristic 2). We also observe the same feature in the everywhere-definedness – codomain expansion category. Non-examples 7.1 and 7.5, for instance, also have familiar rules that typically have been associated with functions in students’ experiences. A student only attending to the rule is therefore unlikely to notice that there is an input for which the corresponding output is not an element of the proposed codomain (everywhere-definedness – codomain expansion characteristic 1) and thus, in their view, there is no need to repair anything (characteristic 2). Thus, non-examples in well-definedness – domain choice and everywhere-definedness – codomain expansion are exactly the kinds of non-examples for which rule-only reasoning is the least well-suited. This underscores the need to deliberately incorporate these categories into instruction to provide students with opportunities to incorporate the domain and codomain into their views of function.

Contributions

In addition to providing conjectures about how we might support students’ learning about function in abstract algebra, this paper makes two primary contributions. First, it contributes to the literature on examples and example spaces. We consider our methodology – a textbook analysis paired with semi-structured interviews with experts – to be a particularly helpful way to gain insight into the instructional example space (and the conceptual structure of mathematical ideas more generally). This paper is also one of only a few analyses of non-examples, which are a key element of example spaces that has not received much attention in the literature. We believe our analysis emphasizes the potential that the non-examples in the instructional example space hold for affording insight into the key aspects of a concept – a resource that is currently underutilized.

Second, as previously noted, the literature on functions in abstract algebra is substantial but largely focused on binary operation, homomorphism, and isomorphism. This paper provides one of the few direct, detailed analyses of well- and everywhere-definedness. Our analysis illustrates (1) the various ways in which well- and everywhere-definedness can manifest in various non-examples, and (2) how these manifestations relate to the key notions of the domain and codomain. We note that our structuring of the instructional example space, in addition to providing hypotheses about supporting students’ learning about function in advanced contexts, extends findings from this body of literature. For instance, the framework provides a frame of reference for why students might see functions in advanced mathematics as different from functions in secondary mathematics (e.g., Zandieh et al., 2017 ). The framework we propose in this study enables us to build upon this idea by proposing a refined conjecture: introductory abstract algebra students experience considerable challenges with function in abstract algebra because their personal example spaces are likely to involve only the well-definedness – codomain choice and everywhere-definedness – domain restriction categories, but much of abstract algebra involves the unfamiliar well-definedness – domain choice and everywhere-definedness – codomain expansion categories. Equivalently, we hypothesize that instructors can support students in overcoming these difficulties by providing them with myriad opportunities to reason about non-examples from each category in the framework.

Future Research

In this paper, we have outlined our investigation of the ways in which experts view the function concept. The implications for students’ learning that we set forth above are therefore empirically based hypotheses that provide clear direction for testing and refinement in future research. Relatedly, while the literature has generally identified that a function should be understood as a coordination of the rule, domain, and codomain, such a view has not been explicated or directly examined in any detail. We believe that the structuring of the instructional example space reported here can inform such efforts. Though students’ conceptual structures cannot be adequately captured by descriptions of behaviors, we note that doing so can be a useful first step because it then enables the researcher to ask, “how might the student be thinking about this idea that might explain their behaviors?” To this end, we propose that it is a useful first (though by no means last) step to initially characterize a productive view of function as one that enables students to reason successfully about non-examples from all four categories outlined in this paper. This hypothesis could be pursued in future research via task-based clinical interviews (Goldin, 2000) or teaching experiments (Steffe & Thompson, 2000).

Our structuring of non-examples in this study also motivates us to consider whether there might be a similar structuring for examples. This could be pursued directly using a similar design in which examples (instead of non-examples) are the primary focus. Alternatively, the notion of ‘repairing’ a non-example – that is, modifying a given non-example in some way so that it becomes an example – might provide some insight into this issue. For instance, we propose that repairing could be used to extend these categories of non-examples so that they also include the associated repaired examples (see Fig. 13). Given that the instances of repairing that we observed in this study involved explicit attention to the domain or codomain, we further suspect that instructional tasks that prompt students to repair non-examples could support students in moving beyond a rule-only view of function to develop a robust view of function as a coordination of the rule, domain, and codomain. We therefore hypothesize that repairing could have potential advantages for researchers (as a productive way to bridge between the non-examples and examples in an example space) and students (as a productive way to develop a robust view of function).

Fig. 13: Repairing non-examples to obtain examples

Another fruitful path for future research could involve examining specific types of functions – such as binary operations, homomorphisms, and isomorphisms – through this new lens. One potential option in this vein would be to use the framework to further explore the aforementioned connection between everywhere-definedness and the closure of a binary operation. Another relates to the notions of injectivity and surjectivity, as the injectivity and surjectivity of a function f : A → B are equivalent to the well-definedness and everywhere-definedness (respectively) of its inverse f⁻¹ : B → A. In particular, future research could explore how students whose personal example spaces are structured according to our framework might reason with these subsequent topics.
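The equivalence just noted can be checked concretely for finite relations. The following Python sketch is our illustration (not part of the paper): it treats a function as a set of input–output pairs and tests well-definedness (univalence) and everywhere-definedness of a relation and of its inverse.

```python
def is_well_defined(rel):
    """Univalent: no input is paired with two different outputs."""
    seen = {}
    for a, b in rel:
        if a in seen and seen[a] != b:
            return False
        seen[a] = b
    return True

def is_everywhere_defined(rel, domain):
    """Every element of the domain has at least one output."""
    return set(domain) <= {a for a, _ in rel}

def inverse(rel):
    return {(b, a) for a, b in rel}

# f : {1, 2} -> {'x', 'y'} with f(1) = f(2) = 'x'
A, B = {1, 2}, {'x', 'y'}
f = {(1, 'x'), (2, 'x')}

# f is a function: well- and everywhere-defined on A
assert is_well_defined(f) and is_everywhere_defined(f, A)

# f is not injective, so its inverse is not well-defined;
# f is not surjective, so its inverse is not everywhere-defined on B
assert not is_well_defined(inverse(f))
assert not is_everywhere_defined(inverse(f), B)
```

Here the failure of injectivity of f shows up exactly as a well-definedness failure of f⁻¹, and the failure of surjectivity as an everywhere-definedness failure of f⁻¹, matching the equivalence stated above.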

Declarations

On behalf of all authors, the corresponding author states that there are no conflicts of interest or competing interests. Additionally, the work described in this paper has not been published before and is not under consideration for publication anywhere else.

1 Well-definedness is also referred to as univalence in the literature.

2 For each textbook in the sample, we obtained and analyzed the most recent edition available to us. The two citations for Herstein (1975; 1996) correspond to two different books (and not two editions of the same book). For more information, see the Textbooks in Our Sample section.

3 When needed, for clarity we occasionally modified or reformulated some of the non-examples throughout this paper – usually by inferring reasonable domains and codomains or introducing clear notation – without altering the underlying structure of the proposed correspondence. Additionally, many non-examples appeared in multiple textbooks; throughout this paper we typically note only one.

4 We have chosen to describe this subcategory in this way because some of its non-examples can indeed be repaired by broadening the codomain, but it is usually simpler (and more coherent in the given context) to restrict the domain. For example, Professor A pointed out that, when repairing non-example 7.6, “you absolutely can pass to the Riemann sphere or something and have it make sense” by defining 1/0 to be the point at infinity. Our use of the term “accessible” is intended to acknowledge that, while instructors and experienced abstract algebra students might be able to repair such non-examples by broadening the codomain in this way, it is arguably simpler and more accessible in the early stages of an introductory abstract algebra course to address such issues by restricting the domain.

5 Recall that we do not intend for these categories to partition the non-examples in the instructional example space, but rather to simply point out the key characteristics of these non-examples.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Rosaura Uscanga, Email: ruscanga@mercy.edu

John Paul Cook, Email: cookjp@okstate.edu

  • Bagley S, Rasmussen C, Zandieh M. Inverse composition and identity: The case of function and linear transformation. Journal of Mathematical Behavior. 2015; 37 :36–47. doi: 10.1016/j.jmathb.2014.11.003. [ CrossRef ] [ Google Scholar ]
  • Bailey, N., Quinn, C., Reed, S. D., Wanner, C. A., McCulloch, A. W., Lovett, J. N., & Sherman, M. F. (2019). Calculus II students’ understanding of the univalence requirement of function. In A. Weinberg, D. Moore-Russo, H. Soto, & M. Wawro (Eds.), Proceedings of the 22nd annual conference on Research in Undergraduate Mathematics Education (pp. 18–26).
  • Breidenbach D, Dubinsky E, Hawks J, Nichols D. Development of the process conception of function. Educational Studies in Mathematics. 1992; 23 :247–285. doi: 10.1007/BF02309532. [ CrossRef ] [ Google Scholar ]
  • Brown A, DeVries DJ, Dubinsky E, Thomas K. Learning binary operations groups and subgroups. Journal of Mathematical Behavior. 1997; 16 (3):187–239. doi: 10.1016/S0732-3123(97)90028-6. [ CrossRef ] [ Google Scholar ]
  • Carlson MP. A cross-sectional investigation of the development of the function concept. In: Dubinsky E, Schoenfeld AH, Kaput JJ, editors. CBMS Issues in mathematics education: Research in collegiate mathematics education III. American Mathematical Society; 1998. pp. 115–162. [ Google Scholar ]
  • Carlson M, Jacobs S, Coe E, Larsen S, Hsu E. Applying covariational reasoning while modeling dynamic events: A framework and a study. Journal for Research in Mathematics Education. 2002; 33 (5):352–378. doi: 10.2307/4149958. [ CrossRef ] [ Google Scholar ]
  • Clement LL. What do students really know about functions? Mathematics Teacher. 2001; 94 (9):745–748. doi: 10.5951/MT.94.9.0745. [ CrossRef ] [ Google Scholar ]
  • Cook JP, Fukawa-Connelly T. The pedagogical examples of groups and rings that algebraists think are most important in an introductory course. Canadian Journal of Science Mathematics and Technology Education. 2015; 15 (2):171–185. doi: 10.1080/14926156.2015.1035463. [ CrossRef ] [ Google Scholar ]
  • Creswell JW. Educational research: Planning conducting and evaluating quantitative and qualitative research. 4. Pearson; 2012. [ Google Scholar ]
  • Dorko, A. (2017). Generalising univalence from single to multivariable settings: The case of Kyle. In A. Weinberg, C. Rasmussen, J. Rabin, M. Wawro & S. Brown (Eds.), Proceedings of the 20th annual conference on Research in Undergraduate Mathematics Education (pp. 562–569).
  • Dubinsky E, Wilson RT. High school students’ understanding of the function concept. The Journal of Mathematical Behavior. 2013; 32 (1):83–101. doi: 10.1016/j.jmathb.2012.12.001. [ CrossRef ] [ Google Scholar ]
  • Even R. Subject-matter knowledge and pedagogical content knowledge: Prospective secondary teachers and the function concept. Journal for Research in Mathematics Education. 1993; 24 (2):94–116. doi: 10.2307/749215. [ CrossRef ] [ Google Scholar ]
  • Even R, Tirosh D. Subject-matter knowledge and knowledge about students as sources of teacher presentations of the subject-matter. Educational Studies in Mathematics. 1995; 29 :1–20. doi: 10.1007/BF01273897. [ CrossRef ] [ Google Scholar ]
  • Even R, Bruckheimer M. Univalence: A critical or non-critical characteristic of functions? For the Learning of Mathematics. 1998; 18 (3):30–32. [ Google Scholar ]
  • Fukawa-Connelly TP, Newton C. Analyzing the teaching of advanced mathematics courses via the enacted example space. Educational Studies in Mathematics. 2014; 87 (3):323–349. doi: 10.1007/s10649-014-9554-2. [ CrossRef ] [ Google Scholar ]
  • Fylan F. Semi-structured interviewing. In: Miles J, Gilbert P, editors. A handbook of research methods for clinical and health psychology. Oxford University Press; 2005. pp. 65–77. [ Google Scholar ]
  • Goldenberg P, Mason J. Shedding light on and with example spaces. Educational Studies in Mathematics. 2008; 69 (2):183–194. doi: 10.1007/s10649-008-9143-3. [ CrossRef ] [ Google Scholar ]
  • Goldin GA. A scientific perspective on task-based interviews in mathematics education research. In: Kelly AE, Lesh RA, editors. Handbook of research design in mathematics and science education. Lawrence Erlbaum Associates; 2000. pp. 517–546. [ Google Scholar ]
  • Hausberger T. The (homo)morphism concept: Didactic transposition meta-discourse and thematisation. International Journal of Research in Undergraduate Mathematics Education. 2017; 3 :417–443. doi: 10.1007/s40753-017-0052-7. [ CrossRef ] [ Google Scholar ]
  • Hitt F. Difficulties in the articulation of different representations linked to the concept of function. The Journal of Mathematical Behavior. 1998; 17 (1):123–134. doi: 10.1016/S0732-3123(99)80064-9. [ CrossRef ] [ Google Scholar ]
  • Kabael TU. Generalizing single variable functions to two-variable functions function machine and APOS. Educational Sciences: Theory & Practice. 2011; 11 (1):484–499. [ Google Scholar ]
  • Krueger RA, Casey MA. Focus groups: A practical guide for applied research. 4. Sage Publications; 2009. [ Google Scholar ]
  • Larsen S. Reinventing the concepts of group and isomorphism: The case of Jessica and Sandra. Journal of Mathematical Behavior. 2009; 28 :119–137. doi: 10.1016/j.jmathb.2009.06.001. [ CrossRef ] [ Google Scholar ]
  • Leron U, Hazzan O, Zazkis R. Learning group isomorphism: A crossroads of many concepts. Educational Studies in Mathematics. 1995; 29 :153–174. doi: 10.1007/BF01274211. [ CrossRef ] [ Google Scholar ]
  • Lockwood E, Reed Z, Caughman JS. An analysis of statements of the multiplication principle in combinatorics discrete and finite mathematics textbooks. International Journal of Research in Undergraduate Mathematics Education. 2017; 3 :381–416. doi: 10.1007/s40753-016-0045-y. [ CrossRef ] [ Google Scholar ]
  • Melhuish K. The Group Theory Concept Assessment: A Tool for Measuring Conceptual Understanding in Introductory Group Theory. International Journal of Research in Undergraduate Mathematics Education. 2019; 5 (3):359–393. doi: 10.1007/s40753-019-00093-6. [ CrossRef ] [ Google Scholar ]
  • Melhuish K, Ellis B, Hicks MD. Group theory students’ perceptions of binary operation. Educational Studies in Mathematics. 2020; 103 :63–81. doi: 10.1007/s10649-019-09925-3. [ CrossRef ] [ Google Scholar ]
  • Melhuish K, Lew K, Hicks MD, Kandasamy SS. Abstract algebra students’ evoked concept images for functions and homomorphisms. Journal of Mathematical Behavior. 2020; 60 :1–16. doi: 10.1016/j.jmathb.2020.100806. [ CrossRef ] [ Google Scholar ]
  • Melhuish K, Fagan J. Connecting the group theory concept assessment to core concepts at the secondary level. In: Wasserman NH, editor. Connecting abstract algebra to secondary mathematics, for secondary mathematics teachers. Springer; 2018. pp. 19–45. [ Google Scholar ]
  • Nardi E. Mathematics undergraduates’ responses to semantic abbreviations, ‘geometric’ images and multi-level abstractions in group theory. Educational Studies in Mathematics. 2000; 43 :169–189. doi: 10.1023/A:1012223826388. [ CrossRef ] [ Google Scholar ]
  • National University Rankings (n.d.). Retrieved April 2, 2020, from  https://www.usnews.com/best-colleges/rankings/national-universities .
  • Oehrtman M, Carlson M, Thompson PW. Foundational reasoning abilities that promote coherence in students’ understanding of function. In: Carlson MP, Rasmussen C, editors. Making the connection: Research and teaching in undergraduate mathematics education. Mathematical Association of America; 2008. pp. 27–42. [ Google Scholar ]
  • Rupnow, R. (2019). Instructors’ and students’ images of isomorphism and homomorphism. In A. Weinberg, D. Moore-Russo, H. Soto, & M. Wawro (Eds.), Proceedings of the 22nd Annual Conference on Research in Undergraduate Mathematics Education (pp. 518–525).
  • Sinclair N, Watson A, Zazkis R, Mason J. The structuring of personal example spaces. The Journal of Mathematical Behavior. 2011; 30 (4):291–303. doi: 10.1016/j.jmathb.2011.04.001. [ CrossRef ] [ Google Scholar ]
  • Slavit D. An alternate route to the reification of function. Educational Studies in Mathematics. 1997; 33 (3):259–281. doi: 10.1023/A:1002937032215. [ CrossRef ] [ Google Scholar ]
  • Steffe LP, Thompson PW. Teaching experiment methodology: Underlying principles and essential elements. In: Lesh R, Kelly AE, editors. Handbook of research design in mathematics and science education. Lawrence Erlbaum Associates; 2000. pp. 267–307. [ Google Scholar ]
  • Thomas, M. (2003). The role of representation in teacher understanding of function. In N. A. Pateman, B. J. Dougherty, & J. T. Zilliox (Eds.), Proceedings of the 2003 joint meeting of PME and PMENA (Vol. 4, pp. 291–298). University of Hawaii: Center for Research and Development Group.
  • Thompson PW. Students, functions, and the undergraduate curriculum. In: Dubinsky E, Schoenfeld AH, Kaput JJ, editors. CBMS Issues in mathematics education: Research in collegiate mathematics education I. American Mathematical Society; 1994. pp. 21–44. [ Google Scholar ]
  • Tsamir P, Tirosh D, Levenson E. Intuitive nonexamples: The case of triangles. Educational Studies in Mathematics. 2008; 69 (2):81–95. doi: 10.1007/s10649-008-9133-5. [ CrossRef ] [ Google Scholar ]
  • Watson A, Mason J. Mathematics as a constructive activity: Learners generating examples. Lawrence Erlbaum Associates; 2005. [ Google Scholar ]
  • Weber K, Mejía-Ramos JP, Fukawa-Connelly T, Wasserman N. Connecting the learning of advanced mathematics with the teaching of secondary mathematics: Inverse functions domain restrictions and the arcsine function. Journal of Mathematical Behavior. 2020; 57 :1–21. doi: 10.1016/j.jmathb.2019.100752. [ CrossRef ] [ Google Scholar ]
  • Zandieh M, Ellis J, Rasmussen C. A characterization of a unified notion of mathematical function: The case of high school function and linear transformation. Educational Studies in Mathematics. 2017; 95 :21–38. doi: 10.1007/s10649-016-9737-0. [ CrossRef ] [ Google Scholar ]
  • Zandieh MJ, Knapp J. Exploring the role of metonymy in mathematical understanding and reasoning: The concept of derivative as an example. Journal of Mathematical Behavior. 2006; 25 :1–17. doi: 10.1016/j.jmathb.2005.11.002. [ CrossRef ] [ Google Scholar ]
  • Zaslavsky O. There is more to examples than meets the eye: Thinking with and through mathematical examples in different settings. The Journal of Mathematical Behavior. 2019; 53 :245–255. doi: 10.1016/j.jmathb.2017.10.001. [ CrossRef ] [ Google Scholar ]
  • Zazkis R, Leikin R. Exemplifying definitions: A case of a square. Educational Studies in Mathematics. 2008; 69 :131–148. doi: 10.1007/s10649-008-9131-7. [ CrossRef ] [ Google Scholar ]

Textbooks in Our Sample

  • Artin, M. (2011). Algebra (2nd ed.). Prentice Hall.
  • Beachy, J. A., & Blair, W. D. (2019). Abstract algebra (4th ed.). Waveland Press.
  • Davidson, N., & Gulick, F. (1976). Abstract algebra: An active learning approach. Houghton Mifflin.
  • Dummit, D. S., & Foote, R. M. (2004). Abstract algebra (3rd ed.). John Wiley & Sons.
  • Fraleigh, J. B. (2002). A first course in abstract algebra (7th ed.). Pearson.
  • Gallian, J. A. (2017). Contemporary abstract algebra (9th ed.). Cengage Learning.
  • Gilbert, L., & Gilbert, J. (2015). Elements of modern algebra (8th ed.). Cengage Learning.
  • Herstein, I. N. (1975). Topics in Algebra (2nd ed.). John Wiley & Sons.
  • Herstein, I. N. (1996). Abstract algebra (3rd ed.). Prentice-Hall.
  • Hodge, J. K., Schlicker, S., & Sundstrom, T. (2014). Abstract algebra: An inquiry-based approach. CRC Press.
  • Hungerford, T. W. (2014). Abstract algebra (3rd ed.). Brooks/Cole, Cengage Learning.
  • Pinter, C. C. (2010). A book of abstract algebra (2nd ed.). Dover Publications.
  • Rotman, J. J. (2006). A first course in abstract algebra with applications (3rd ed.). Pearson Prentice Hall.


Excel Tutorial: How To Analyze Non Numeric Data In Excel

Introduction

Understanding how to analyze non-numeric data in Excel is essential for making informed business decisions. While Excel is widely used for numerical analysis, it also offers powerful tools for processing and interpreting non-numeric data such as text, dates, and symbols. In this tutorial, we will explore the various methods and functions available in Excel for analyzing non-numeric data, empowering you to extract valuable insights from diverse data sets.

Key Takeaways

  • Understanding how to analyze non-numeric data in Excel is crucial for informed business decision-making.
  • Excel offers powerful tools for processing and interpreting non-numeric data such as text, dates, and symbols.
  • Techniques for analyzing non-numeric data include using text functions, date and time functions, and conditional formatting.
  • Converting non-numeric data to numeric data is important for effective analysis, and Excel provides functions for this purpose.
  • Best practices for analyzing non-numeric data in Excel include ensuring data cleanliness, proper documentation, and regular updates to analysis techniques.

Understanding non-numeric data

Non-numeric data refers to any data that is not expressed in numerical form. In the context of Excel, non-numeric data can include text, dates, times, and other non-numeric formats.

Non-numeric data in Excel refers to any data that cannot be used in mathematical calculations. This type of data is commonly used for labels, descriptions, and other textual information.

Examples of non-numeric data types in Excel include:

  • Text: This includes any alphabetic or special character-based data, such as names, addresses, and descriptions.
  • Dates: This includes calendar dates, such as 01/01/2022 or January 1, 2022.
  • Times: This includes specific times of day, such as 12:00 PM or 3:30 AM.

Understanding non-numeric data is essential for effectively analyzing and manipulating data in Excel. By knowing the different types of non-numeric data and how they are used, you can improve your data analysis skills and make better use of Excel's features.

Techniques for analyzing non-numeric data

Non-numeric data in Excel can be effectively analyzed using a variety of techniques, including the use of text functions, date and time functions, and conditional formatting.

Text data in Excel can be manipulated and analyzed using a variety of text functions. These functions allow you to extract specific characters from a string, combine different text strings, convert text to uppercase or lowercase, and much more.

Some commonly used text functions include:

  • LEFT and RIGHT: to extract a specific number of characters from the left or right side of a text string.
  • LEN: to calculate the number of characters in a text string.
  • CONCATENATE: to combine multiple text strings into one.
  • UPPER and LOWER: to convert text to uppercase or lowercase.
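For readers who also work in Python, these text functions map directly onto basic string operations. The following is a side-by-side sketch of the same logic (an illustration for comparison, not part of Excel itself):

```python
s = "Excel"

left3 = s[:3]         # LEFT(A1, 3)   -> first 3 characters
right2 = s[-2:]       # RIGHT(A1, 2)  -> last 2 characters
length = len(s)       # LEN(A1)       -> number of characters
combined = "Excel" + " " + "Tutorial"   # CONCATENATE("Excel", " ", "Tutorial")
upper = s.upper()     # UPPER(A1)
lower = s.lower()     # LOWER(A1)
```

As in Excel, these operations leave the original text untouched and produce new values.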

Date and time data in Excel can be analyzed using a range of date and time functions. These functions allow you to extract specific components of a date or time (such as the month or hour), calculate the difference between two dates, determine the day of the week, and much more.

Some commonly used date and time functions include:

  • YEAR, MONTH, DAY: to extract the year, month, or day from a date.
  • DATEDIF: to calculate the difference between two dates in days, months, or years.
  • WEEKDAY: to determine the day of the week for a given date.
  • TIME: to create a time value from a given hour, minute, and second.
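The same date arithmetic can be sketched in Python's standard `datetime` module for comparison (our illustration; note that the weekday numbering conventions differ between the two tools):

```python
from datetime import date

d = date(2022, 1, 1)

year, month, day = d.year, d.month, d.day   # YEAR, MONTH, DAY

# Excel's WEEKDAY returns 1 (Sunday) through 7 by default;
# Python's weekday() returns 0 (Monday) through 6.
weekday = d.weekday()

# DATEDIF(start, end, "d"): difference between two dates in whole days
later = date(2022, 3, 1)
diff_days = (later - d).days
```

Subtracting two `date` objects yields a `timedelta`, whose `.days` attribute corresponds to DATEDIF's "d" unit.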

Conditional formatting is a powerful tool for visualizing patterns and trends in non-numeric data. It allows you to apply formatting (such as colors, icons, or data bars) to cells based on specific criteria or rules.

Some ways to use conditional formatting for non-numeric data analysis include:

  • Highlighting cells that contain specific text or dates.
  • Creating color scales to visualize the distribution of non-numeric values.
  • Using icons to indicate different categories or levels within non-numeric data.

Converting non-numeric data to numeric data

When it comes to analyzing data in Excel, the ability to convert non-numeric data to numeric is crucial. This process allows for more accurate and comprehensive analysis, as numeric data is easier to manipulate and perform calculations on.

Non-numeric data, such as text or categorical information, cannot be directly used for mathematical operations or statistical analysis. By converting this type of data to numeric values, it becomes possible to perform various analytical tasks, such as creating charts, calculating averages, and conducting regression analysis.

Excel provides several functions that can help convert non-numeric data to numeric, such as the VALUE function, which converts a text representation of a number to an actual numeric value.

i. Using the VALUE function

The VALUE function is a simple and effective way to convert non-numeric data to numeric data in Excel. To use this function, simply enter =VALUE(cell) in a new cell, where "cell" is the reference to the non-numeric data you want to convert. This will return the numeric equivalent of the non-numeric data.
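The behavior of VALUE can be mirrored in Python with a small helper (a rough analogue for illustration; where Excel would return a #VALUE! error, this sketch returns None):

```python
def to_number(text):
    """Rough analogue of Excel's VALUE function: parse a numeric
    text string into a number, or return None if it is not numeric."""
    try:
        return float(text)
    except ValueError:
        return None
```

For example, `to_number("42.5")` returns 42.5, while `to_number("abc")` returns None.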

ii. Using Text to Columns feature

Another method for converting non-numeric data to numeric in Excel is by using the Text to Columns feature. This feature allows you to split a single column of text data into multiple columns, and then convert those columns to numeric values using the appropriate format.

  • Select the column containing the non-numeric data
  • Go to the Data tab, and click on Text to Columns
  • Follow the prompts in the Text to Columns Wizard to specify the delimiters and data format

By following these steps, you can easily convert non-numeric data to numeric in Excel, making it ready for comprehensive analysis and reporting.
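The steps above can be sketched in Python as a split on a delimiter followed by a numeric conversion (our illustration of the same idea, not the Excel feature itself):

```python
row = "Alice,42,2022-01-01"

# Like Text to Columns with a comma delimiter:
name, score_text, joined_date = row.split(",")

# ...then convert the numeric column to an actual number:
score = float(score_text)
```

Each delimited field becomes its own value, after which the numeric fields can be used in calculations.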

Advanced analysis of non-numeric data

When working with non-numeric data in Excel, it's important to have the tools and techniques to analyze and make sense of the information. Here are some advanced methods to analyze non-numeric data in Excel.

Pivot tables are a powerful tool for analyzing non-numeric data in Excel. They allow you to summarize and aggregate data in a customizable way, making it easier to identify patterns and trends.

1. Grouping non-numeric data

One of the key features of pivot tables is the ability to group non-numeric data. This can be useful for categorizing and summarizing information such as text values or dates.

2. Adding calculated fields

Another useful feature of pivot tables is the ability to add calculated fields. This allows you to perform custom calculations on non-numeric data within the pivot table, giving you more flexibility in your analysis.

Excel's formula capabilities are not limited to numeric data. You can create custom formulas to analyze and manipulate non-numeric data to suit your specific analysis needs.

1. Text functions

Excel has a range of text functions that can be used to manipulate and analyze non-numeric data. Functions such as CONCATENATE, LEFT, RIGHT, and MID can be used to extract and manipulate text values.

2. Logical functions

The logical function IF, often combined with text functions such as SEARCH, can be used to perform conditional analysis on non-numeric data. This can be useful for categorizing and organizing non-numeric data based on specific criteria.
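As a sketch of this pattern, the Excel formula `=IF(ISNUMBER(SEARCH("urgent", A1)), "High", "Normal")` corresponds to a case-insensitive substring check followed by a conditional result. The Python equivalent (our illustration, with hypothetical category labels) looks like this:

```python
def categorize(label):
    """Mirror of =IF(ISNUMBER(SEARCH("urgent", A1)), "High", "Normal"):
    SEARCH is case-insensitive, so we lowercase before checking."""
    return "High" if "urgent" in label.lower() else "Normal"
```

For example, `categorize("URGENT: server down")` returns "High", while `categorize("weekly report")` returns "Normal".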

Data validation is an important step in ensuring the accuracy of non-numeric data analysis. By setting up validation rules, you can control the type and format of non-numeric data entered into your Excel worksheets.

1. Setting data validation rules

Excel allows you to set up data validation rules to control what type of non-numeric data can be entered into cells. This can help to prevent errors and ensure consistency in your analysis.

2. Using drop-down lists

One way to incorporate data validation for non-numeric data is by using drop-down lists. This can be useful for ensuring that data is entered in a standardized format, making it easier to analyze and interpret.

Best practices for analyzing non-numeric data in Excel

When dealing with non-numeric data in Excel, it is important to follow best practices to ensure accurate analysis and interpretation. Here are some key points to consider:

Non-numeric data can often be prone to errors and inconsistencies, so it is crucial to ensure that the data is clean and consistent before attempting any analysis. This can include removing duplicates, correcting misspellings, and standardizing formats.

1. Removing duplicates

Before analyzing non-numeric data, it is important to remove any duplicate entries to avoid skewing the results and obtaining inaccurate insights.

2. Correcting misspellings

Misspellings in non-numeric data can lead to discrepancies in analysis. It is essential to correct any misspelled entries to maintain data accuracy.

3. Standardizing formats

Standardizing formats such as dates, addresses, and names can help ensure consistency and make analysis easier and more accurate.

Documenting and labeling non-numeric data is crucial for easy analysis and interpretation. This includes adding clear and descriptive labels to the data, as well as documenting any changes or transformations made to the data.

1. Clear and descriptive labels

Using clear and descriptive labels for non-numeric data can help users understand the data and its context, making it easier to analyze and interpret.

2. Documenting changes and transformations

It is important to document any changes or transformations made to the non-numeric data, as this can impact the analysis results and provide important context for future analysis.

Analysis techniques for non-numeric data may need to be updated and revised over time to account for changes in the data or new analysis methods. It is important to regularly review and update these techniques to ensure accurate and relevant analysis.

1. Reviewing analysis methods

Regularly reviewing and updating analysis methods for non-numeric data can help ensure that the techniques used are still relevant and accurate.

2. Adapting to changes in the data

As non-numeric data evolves and changes, it is important to adapt analysis techniques to account for these changes and maintain the accuracy of the analysis.

In conclusion, this tutorial has provided valuable insights into how to analyze non-numeric data in Excel. We discussed the importance of using text functions, pivot tables, and data validation to effectively interpret and manipulate non-numeric data. I encourage all readers to apply the techniques learned in this tutorial to their own Excel analysis of non-numeric data, as it will undoubtedly enhance their data analysis capabilities and contribute to making more informed decisions.


EU AI Act: first regulation on artificial intelligence

The use of artificial intelligence in the EU will be regulated by the AI Act, the world’s first comprehensive AI law. Find out how it will protect you.


As part of its digital strategy , the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits , such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. Under the proposed framework, AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation. Once approved, these will be the world’s first rules on AI.


What Parliament wants in AI legislation

Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.

Parliament also wants to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.


AI Act: different rules for different risk levels

The new rules establish obligations for providers and users depending on the level of risk posed by the artificial intelligence. While many AI systems pose only minimal risk, they still need to be assessed.

Unacceptable risk

Unacceptable risk AI systems are systems considered a threat to people and will be banned. They include:

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
  • Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
  • Biometric identification and categorisation of people
  • Real-time and remote biometric identification systems, such as facial recognition

Some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes and only after court approval.

High risk

AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:

1) AI systems that are used in products falling under the EU’s product safety legislation . This includes toys, aviation, cars, medical devices and lifts.

2) AI systems falling into specific areas that will have to be registered in an EU database:

  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance in legal interpretation and application of the law.

All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.

General purpose and generative AI

Generative AI, like ChatGPT, would have to comply with transparency requirements:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training

High-impact general-purpose AI models that might pose systemic risk, such as the more advanced AI model GPT-4, would have to undergo thorough evaluations and any serious incidents would have to be reported to the European Commission.

Limited risk

Limited-risk AI systems should comply with minimal transparency requirements that allow users to make informed decisions. After interacting with an application, the user can then decide whether to continue using it. Users should be made aware when they are interacting with AI. This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.
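The risk tiers described above can be sketched as a toy classifier. This is an illustrative sketch only: the keyword lists and the `classify` function are invented for this example, and real classification under the Act requires legal analysis, not string matching.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "assessed before market entry and throughout lifecycle"
    LIMITED = "minimal transparency obligations"
    MINIMAL = "no additional obligations"

# Hypothetical keyword triage loosely based on the categories named in the text.
UNACCEPTABLE_USES = {"social scoring", "cognitive manipulation",
                     "real-time biometric identification"}
HIGH_RISK_AREAS = {"critical infrastructure", "education", "employment",
                   "law enforcement", "migration"}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier suggested by keywords in a use-case description."""
    text = use_case.lower()
    if any(k in text for k in UNACCEPTABLE_USES):
        return RiskTier.UNACCEPTABLE
    if any(k in text for k in HIGH_RISK_AREAS):
        return RiskTier.HIGH
    # Chatbots and deepfake generators fall under the transparency tier.
    if "chatbot" in text or "deepfake" in text:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("AI-assisted hiring and employment screening"))  # RiskTier.HIGH
```

The point of the sketch is the ordering: the tiers are checked from most to least restrictive, mirroring how the Act's obligations escalate with risk.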

On December 9, 2023, Parliament reached a provisional agreement with the Council on the AI Act. The agreed text will now have to be formally adopted by both Parliament and Council to become EU law. Before all MEPs have their say on the agreement, Parliament’s internal market and civil liberties committees will vote on it.


The Civil Fraud Ruling on Donald Trump, Annotated

By Kate Christobek

Former President Donald J. Trump was penalized $355 million, plus millions more in interest, and banned for three years from serving in any top roles at a New York company, including his own, in a ruling on Friday by Justice Arthur F. Engoron. The decision comes after the state’s attorney general, Letitia James, sued Mr. Trump, members of his family and his company in 2022.

The ruling expands on Justice Engoron’s decision last fall, which found that Mr. Trump’s financial statements were filled with fraudulent claims. Mr. Trump will appeal the financial penalty and is likely to appeal other restrictions; he has already appealed last fall’s ruling.

The New York Times annotated the document.

Download the original PDF.

Page 1 of the PDF document.

New York Times Analysis

This ruling by Justice Arthur F. Engoron is a result of a 2022 lawsuit filed by New York’s attorney general, Letitia James, against Donald J. Trump and the Trump Organization; his adult sons, Donald Trump Jr. and Eric Trump; the company’s former chief financial officer Allen Weisselberg and former controller Jeffrey McConney; and several of their related entities. Mr. Trump’s daughter, Ivanka Trump, was also initially a defendant until an appeals court dismissed the case against her.

Page 2 of the PDF document.

The law under which Ms. James sued, known by its shorthand 63(12), requires the plaintiff to show a defendant’s conduct was deceptive. If that standard is met, a judge can impose severe punishment, including forfeiting the money obtained through fraud. Ms. James has also used this law against the oil company ExxonMobil, the tobacco brand Juul and the pharma executive Martin Shkreli.

Page 4 of the PDF document.

Justice Engoron is now providing a background of this case. This ruling comes after a three-year investigation by the attorney general’s office and the conclusion of a trial that ended last month. But this likely won’t be Mr. Trump’s last word on the matter — he will appeal the financial penalty and is likely to appeal other restrictions, as he has already appealed other rulings.

In late 2022, Justice Engoron assigned a former federal judge, Barbara Jones, to serve as a monitor at the Trump Organization and tasked her with keeping an eye on the company and its lending relationships. Last month, she issued a report citing inconsistencies in its financial reporting, which “may reflect a lack of adequate internal controls.”

Page 5 of the PDF document.

Here, Justice Engoron is laying out the laws he considered in his ruling beyond 63(12). The attorney general’s lawsuit included allegations of falsifying business records, issuing false financial statements, insurance fraud and related conspiracy offenses.

Justice Engoron is explaining the decision, issued a week before the trial, in which he found that Mr. Trump’s financial statements were filled with fraud, fundamentally shaping the rest of the trial.

Page 6 of the PDF document.

For nearly 50 pages, Justice Engoron describes his conclusions about the testimony of the non-defendant witnesses who spoke during the trial.

Page 8 of the PDF document.

Justice Engoron discusses Mr. McConney’s important role in preparing Mr. Trump’s financial statements. The judge points out that Mr. McConney prepared all the valuations on the statements in consultation with Mr. Weisselberg.

Page 24 of the PDF document.

In his discussion of Mr. Weisselberg, Justice Engoron calls his testimony in the trial “intentionally evasive.” Justice Engoron then brings up Mr. Weisselberg’s separation agreement from the Trump Organization, which prohibited him from voluntarily cooperating with any entities “adverse” to the organization. Justice Engoron says that this renders Mr. Weisselberg’s testimony highly unreliable.

Page 27 of the PDF document.

When Donald Trump Jr. testified in court, he disavowed responsibility for his father’s financial statements despite serving as a trustee of the Donald J. Trump Revocable Trust while his father was president. But Justice Engoron specifically cites here that Donald Trump Jr. certified that he was responsible for the financial statements, and testified that he intended for the banks to rely on them and that the statements were “materially accurate.”

Page 30 of the PDF document.

During his testimony, Eric Trump, the Trump Organization’s de facto chief executive, initially denied knowing about his father’s financial statements until this case. As Justice Engoron points out here, Eric Trump eventually conceded to knowing about them as early as 2013. As a result, Justice Engoron calls Eric Trump’s credibility “severely damaged.”

Page 33 of the PDF document.

Justice Engoron points to Mr. Trump’s November testimony on the witness stand, in which Mr. Trump acknowledged that he helped put together his annual financial statements. Mr. Trump said he would see them and occasionally have suggestions.

Page 35 of the PDF document.

After four pages of describing Mr. Trump’s testimony, Justice Engoron says Mr. Trump rarely responded to the questions asked and frequently interjected long, irrelevant speeches, which all “severely compromised his credibility.”

Page 38 of the PDF document.

For several pages, Justice Engoron provides background on specific assets that Mr. Trump included in his annual financial statements.

Page 61 of the PDF document.

The judge is clarifying that Ms. James had to prove her claims by a “preponderance of the evidence,” meaning she had to demonstrate it was more likely than not that Mr. Trump and the co-defendants should be held liable. This is a lower standard than that of a criminal trial, which requires that evidence be proven “beyond a reasonable doubt.”

Page 76 of the PDF document.

During the trial, Mr. Trump and his legal team tried to shift the blame for any inaccuracies in his financial statements onto his outside accountants. But Justice Engoron criticizes that argument here.

Page 77 of the PDF document.

During the monthslong trial, Mr. Trump, his legal team and several witnesses stressed that real estate appraisals are an art, not a science. But here it’s clear Justice Engoron, while agreeing with that sentiment, also believes it’s deceptive when different appraisals rely on different assumptions.

Page 78 of the PDF document.

Justice Engoron is now going through the defendants one by one and articulating the evidence that shows each of their “intent to defraud,” which is required by the statute against falsifying business records. Notably, his first paragraph describing the former president’s intent provides examples including Mr. Trump’s awareness that his triplex apartment was not 30,000 square feet and his valuation of Mar-a-Lago as a single-family residence even though it was deeded as a social club.

Page 79 of the PDF document.

Among the defendants, Justice Engoron finds only Allen Weisselberg and Jeffrey McConney liable for insurance fraud. He does not explain why the other defendants, including Mr. Trump and his adult sons, were not found liable, but he says that both Mr. Weisselberg and Mr. McConney made false representations to insurance companies about Mr. Trump’s financial statements.

While Mr. Trump and his adult sons were not found liable for insurance fraud, here Justice Engoron finds them liable for conspiracy to commit insurance fraud, explaining that they all “aided and abetted” the conspiracy to commit insurance fraud by falsifying business records.

Page 82 of the PDF document.

Justice Engoron here adopts the approximations of Michiel McCarty, the attorney general’s expert witness. Justice Engoron says Mr. McCarty testified “reliably and convincingly,” and finds that the defendants’ fraud saved them over $168 million in interest.

Page 83 of the PDF document.

In finding that the defendants were able to purchase the Old Post Office in Washington, D.C., through their use of the fraudulent financial statements, Justice Engoron rules that the defendants’ proceeds from the sale of the post office in 2022 should be considered “ill-gotten gains.” He penalizes Donald Trump and his companies over $126 million, and Donald Trump Jr. and Eric Trump $4 million each, for this one property.

Page 84 of the PDF document.

Justice Engoron blasts the defendants for failing to admit that they were wrong in their valuations — adding that “their complete lack of contrition and remorse borders on pathological.” He says that this inability to admit error makes him believe they will continue their fraudulent activities unless “judicially restrained.”

Page 88 of the PDF document.

The judge cites other examples of Mr. Trump’s “ongoing propensity to engage in fraud,” bringing up lawsuits against Trump University and the Donald J. Trump Foundation. He also notably raises two criminal cases brought by the Manhattan district attorney’s office: one against Mr. Weisselberg, who pleaded guilty to tax fraud and falsifying business records, and another against the Trump Organization, which was convicted of 17 criminal counts including tax fraud.

Justice Engoron states that Judge Barbara Jones, who has been serving as an independent monitor at the Trump Organization since 2022, will continue in that role for at least three years. He clarifies that going forward, her role will be enhanced and she will review Trump Organization financial disclosures before they are submitted to any third party, to ensure that there are no material misstatements.

Page 89 of the PDF document.

In addition to extending the monitor’s tenure and strengthening her powers, Justice Engoron also took the unusual step of ordering that an independent compliance director be installed inside the Trump Organization, and that they report directly to the monitor.

— William K. Rashbaum

In his pre-trial order, Justice Engoron ordered the cancellation of some of Mr. Trump’s business licenses. But here, he pulls back on that decision and instead says that any “restructuring and potential dissolution” would be up to Ms. Jones, the independent monitor.

Page 90 of the PDF document.

Justice Engoron lays out his bans against the defendants, ruling that Mr. Trump, Mr. Weisselberg and Mr. McConney cannot serve as officers or directors of any corporation or legal entity in New York for the next three years, and bans his sons Donald Trump Jr. and Eric Trump for two years from the same. He also prohibits Mr. Trump from applying for any loans from any New York banks for the next three years. The ruling goes further in the cases of Mr. Weisselberg and Mr. McConney, permanently barring them from serving in the financial control function of any New York business.

Page 91 of the PDF document.

Justice Engoron also ordered that Mr. Trump and his sons pay the interest, pushing the penalty to $450 million, according to Ms. James.

Page 92 of the PDF document.

An earlier version of this article misstated how long the adult sons of former President Donald J. Trump — Donald Trump Jr. and Eric Trump — were barred by Justice Arthur F. Engoron from serving as officers or directors of any corporation or legal entity in New York. It was two years, not three. The article also misstated the number of pages in which Justice Engoron describes his conclusions about the testimony of all of the non-defendant witnesses. It was under 50 pages, not over 50 pages. The article also misstated the number of pages in the section in which Justice Engoron provides background on specific assets that Mr. Trump included in his annual financial statements. It was several pages, not more than a dozen pages.
