In 2019, our research team submitted a qualitative proposal in the postmodern paradigm to the National Science Foundation and, while it was not ultimately funded, it was reviewed positively. Even so, we were caught off guard when we received the reviews. One reviewer said that the research team did not have “statistical power to draw conclusions that can be extended beyond their sample” and that it was unclear “how they [the researchers] plan to replicate their initial findings in separate samples.” Another reviewer stated, “It is unclear how the researchers propose to generalize the findings.” In 2020, we conducted a major revision of the proposal to address the concerns raised by the reviewers and to update the proposal in light of a pilot study that we began conducting while the proposal was under review.
This story is provided to illustrate a possibly common occurrence in qualitative engineering education research (EER), where there is a difference in epistemologies of researchers, funding agencies, and reviewers. These epistemological differences may shape not only the type of research that we take on in engineering education, but also how we describe that research within our subsequent publications. These tensions between researchers in our community who are pulled more towards positivist epistemologies and those pulled more towards postmodern and critical epistemologies may result in funded research projects and refereed journal articles that reflect an imbalance of representation regarding various epistemologies along the paradigmatic continuum.
Despite these tensions that are experienced by researchers and reviewers in qualitative EER, our community has made strides in both our advanced qualitative research methodologies and the acceptance of qualitative research within the broader field. Evidence of this includes articles with small numbers that have been both accepted within our community (Pawley, 2019) and have earned best paper awards (Benson, 2019; Burt, 2020; Secules et al., 2018). These strides perhaps mark a shift in our epistemologies and a shift in our values as a scholarly community.
The theoretical perspective and methodology—along with the epistemology of a researcher—influence the research process from research design to data collection and analysis and, finally, to dissemination (Staller, 2013). As an example, we will consider two hypothetical researchers with differing epistemologies and their resulting qualitative research projects around LGBTQIA+ engineering students. A post-positivist may use random sampling to identify participants and then conduct structured interviews with 100 participants. Their grounded theory data analysis may use a constant comparative method and result in a theoretical model that describes patterns that appeared across participants’ responses to questions. At the conclusion of the research, the post-positivist may present their results in a tabulated format in a peer-reviewed journal article or engineering education conference presentation. Conversely, a researcher who aligns with more critical and interpretive epistemologies may conduct unstructured interviews with three participants on multiple occasions. The subsequent data analysis may involve a narrative analysis that includes counter-narratives. Their dissemination efforts may include traditional avenues such as journal articles and conferences but may also branch out towards more creative approaches such as podcasts or the development of course curriculum. While these are both examples of educational research projects focused on LGBTQIA+ engineering students’ experiences that involve data, analysis, and dissemination, the resulting research projects, results, dissemination efforts, and impacts are starkly different. The epistemologies of researchers, reviewers, journals, and funding agencies thus influence our community’s research, from initial design and conceptualization through to published journal articles.
In this paper, we take a closer look at recently published manuscripts in the EER community to better understand what our publications reveal about us as qualitative researchers, our epistemologies, and our values. Through an analysis of the way we communicate in our publications, we can begin to consider taken-for-granted assumptions and inferences that are implied but not explicitly stated in our publications (Gee, 2014) to better understand the engineering education community’s current values and epistemologies. Moreover, this analysis will also help us better understand the state of the broader field of qualitative EER, our culture, and the values of our community. Through this analysis, we hope to reveal epistemological and paradigmatic tensions within our community while demonstrating the value that various epistemological approaches bring to our community and to our understandings of engineering education.
Engineering education researchers have historically borrowed methods and theory from a wide array of disciplines, resulting in a heterogeneous collection of epistemologies that are demonstrated throughout the corpus of engineering education literature (Beddoes, 2014b). Epistemologically, EER can vary from positivist to postmodern in nature (Waller, 2006), although there has historically been a marked preference for large-scale, generalizable positivist or post-positivist studies, derived from EER’s embeddedness within the positivist engineering paradigm (Beddoes, 2014a; Pawley et al., 2016; Riley, 2017; Yu & Strobel, 2011).
However, adopting a critical historical perspective reveals that the preference for positivist or post-positivist studies can be directly linked to the development of the American academy within the context of settler colonialism and the institution of chattel slavery (i.e., the practice of having total ownership over another; Wilder, 2013; Winfield, 2007). The most prominent example of positivism’s role in the formation and perpetuation of racism, white supremacy, and eugenics is American biological and psychological scholars’ body of work produced at the height of the moral dilemma surrounding chattel slavery and westward expansion during and following the Enlightenment period (Curran, 2011; Lombardo, 1987; Tyson & Oldroyd, 2019). These researchers weaponized positivist epistemological assumptions of objectivity and generalizability through the scientific method to fabricate evidence that situated African slaves and Native Americans as biologically, cognitively, psychologically, and morally inferior to whites, instantiating the pseudoscience of eugenics as a popular and economically profitable area of study (Council of National Psychological Associations for the Advancement of Ethnic Minority Interests, 2016; Farrall, 1979; Neejer, 2015). Thus, positivism is the epistemological underpinning that informed the institutionalization of scientific research in the American academy (Wilder, 2013).
The scholarship of critical (and particularly Black and Indigenous) historians reveals the roots of today’s positivist epistemological unconsciousness within the legacy of American white supremacy, imperialism, settler colonialism, and nationalism (Wilder, 2013; Winfield, 2007). The American academic enterprise cannot be separated from the contexts within which it was developed, and it continues to reproduce itself through the enculturation of a positivist epistemological unconsciousness that manifests individually in researchers as a desire to produce objective, generalizable research (Staller, 2013; Steinmetz, 2005). This positivist epistemological unconsciousness in academia, and particularly in the social sciences, can lead qualitative engineering education researchers, especially novices, towards valuing data that is measurable, quantifiable, and generalizable over other forms of data. This can lead to a tendency towards quantifying qualitative research and unconsciously valuing qualitative data and analysis less, as it is inherently (and intentionally) not generalizable to the broader population. In opposition to positivist underpinnings within academia, qualitative methodologies that are derived from critical paradigms acknowledge positivism’s roots in white supremacy and colonialism by contextualizing the role of white supremacy in social science research, as well as problematizing objectivity and generalizability. These critical, qualitative methodologies tend to be more liberatory and focus on dismantling oppressive institutions through the elevation of marginalized voices (Crenshaw et al., 1995; Delgado et al., 2012; Freire, 1973).
Because positivist and post-positivist approaches are favored, institutionalized, and privileged in the academy (and American ideology more broadly), it is not surprising that researchers working from a more constructivist or postmodern epistemological stance will often face illegitimate questions when the utility and validity of their qualitative work are reviewed or presented, as illustrated by our vignette at the beginning of the introduction. Guba and Lincoln (2005) termed these illegitimate questions as questions without meaning because they are being asked and answered by people with differing epistemologies. Questions such as, “How can we trust this research if you only have three subjects?” or “How can you claim this work to be generalizable if you only focused on one case?” are illegitimate questions from the perspective of a critical qualitative researcher, and they exemplify a difference in epistemology between the researcher and reviewer. Despite the positivist epistemological unconsciousness for quantifiability and generalizability, qualitative and/or interpretive research has become more commonplace in EER, accounting for nearly half of the body of work published in 2018, and it has contributed to major breakthroughs in the field (Liu, 2019).
Illegitimate questions about qualitative methodologies happen when we, as a community, apply quantitative quality criteria to judge the contribution or quality of qualitative research (Tracy, 2010). In applying these quantitative quality criteria, we tend to both “misunderstand and misevaluate qualitative work” (Tracy, 2010, p. 839). Widely accepted criteria are needed that better align with qualitative research and with constructivist, critical, or postmodern epistemologies. Tracy (2010) developed criteria that better align with these epistemological perspectives: eight big tent criteria with which to judge the quality of qualitative work. These include having a worthy topic, rich rigor, sincerity, credibility, resonance, significant contribution, ethics, and meaningful coherence.
To better understand EER’s alignment with positivist epistemological unconsciousness, we can look at our history and the somewhat recent development of EER as a discipline. In early efforts to establish EER as a discipline, there was a push towards rigorous EER (Streveler & Smith, 2006). This initial effort was aligned with positivist epistemologies, tying quality criteria to traditional concepts of rigor (Beddoes, 2014b). This initial effort to establish rigorous EER was heavily influenced through National Science Foundation (NSF) funding, which explicitly sought to construct the EER field as a “well-established” and “rigorous” one (Beddoes, 2014b, p. 302). Due to NSF and EER’s roots in the positivist tradition, conceptual standards of rigor and validity bled into how we as a community conduct qualitative research, which has seemingly created tensions among qualitative methodological researchers in the field (Riley, 2017). As we continued to develop our field, Walther and colleagues proposed a framework for quality in EER that was better aligned with more constructivist epistemologies and borrowed a framework from engineering focused on total quality management (Walther et al., 2013, 2015, 2017b). Walther and colleagues (2013, 2015, 2017b) provided a helpful structure for a collaborative inquiry approach, which moves away from positivist concepts of quality. This process is both empirical and interpretive in nature and advocates for researchers to work collaboratively to create meaning from the data, while paying attention to how they are relating to the data. This process includes the critical analysis of one’s subjectivities, along with understanding and respecting how these subjectivities influence data collection and analysis (Hampton et al., 2021).
However, some have incorrectly applied Walther’s quality framework in a way that aligns more closely with positivist traditions, misusing the framework as a simple checklist (Kellam & Cirell, 2018). Criticism of rigor in EER gave way to “methodological diversity,” which called for a movement away from rigorous, positivist work (Beddoes, 2014b, p. 298). This tension perhaps marks an era of evolution within the field of EER, suggesting that EER may be shifting, albeit slightly, from valuing generalizability in research to valuing rich, nuanced understandings of experiences.
In engineering education, it is common to find language such as evidence-based, empirical, systematic, and rigorous when describing research projects (Borrego & Henderson, 2014; Brown et al., 2016; Jamieson & Lohman, 2009; Streveler & Smith, 2006). This may be due in part to researchers feeling as if they need to head off illegitimate questions that they may receive from reviewers due to the positivist epistemological unconsciousness present in our field. This epistemological unconsciousness likely runs deep within our field, with a common objective of research that is generalizable, repeatable, and rigorous. Many of these desired attributes of our research derive from positivist epistemological perspectives and can likely be uncovered through studying discourses within our publications.
Despite EER’s marked preference for empirically driven and generalizable research, significant breakthroughs have been made using small numbers research, answering the call for methodological diversity within EER (Beddoes, 2014b; Case & Light, 2011). In 2007, Foor and colleagues wrote a seminal article in which they presented an “ethnography of the particular,” focused on the story of one participant, Inez (p. 103). More recently, Pawley (2013) advanced this small numbers focus, encouraging us to “learn from small numbers” through work that pushes us to better understand marginalized and underrepresented communities in engineering. Pawley has been critical and transparent in her methodological choices (Pawley, 2013; Pawley & Phillips, 2014; Slaton & Pawley, 2018; Walther et al., 2015). She uses narratives to preserve the richness of each participant’s story (2013) and to better understand structural inequalities in engineering education (2019). We interpret the increasing prevalence of small numbers research in EER as the beginnings of a shift in the landscape of our community’s epistemological unconsciousness. Thus, we are interested in how this turn towards small numbers research has manifested now that more than a decade has passed since that seminal paper about Inez (Foor et al., 2007).
Other works in EER highlight this changing landscape and suggest a possible pull towards more critical and postmodern research. Evidence of this changing landscape includes a chapter from the Cambridge Handbook of Engineering Education Research that argues for a more critical approach to engineering education research that is situated around power relations (Riley et al., 2014), recent journal articles that emerged from more interpretive and critical research projects (Burt, 2020; Pawley, 2017, 2019; Secules et al., 2018), articles arguing for the inclusion of positionality statements (Hampton et al., 2021; Secules et al., 2021), and recent efforts and calls to overhaul the review process and make it more transparent and unbiased (Coley et al., 2021; Edstrom et al., 2020). In this project, we examine publications from 2019 in EER to see whether the broader community has begun to internalize a wider range of epistemological perspectives, embracing critical and postmodern epistemologies alongside positivist and post-positivist ones in qualitative EER scholarship.
The purpose of this research project was to develop an understanding of our qualitative EER community’s epistemologies and values through an analysis of recent publications. The research question guiding this study was: Through an analysis of qualitative, engineering education manuscripts published in 2019, what voices of researchers and participants appear in our work and what do they unveil about our EER community’s epistemologies and values? We asked this question to begin to understand the EER community’s most recent qualitative research trends, as well as to determine how the calls for methodological diversity have been answered and if postmodern and critical approaches are part of this diversity.
The intention of this article was to interrogate the positivist and post-positivist biases and epistemological leanings of the EER community through a critical perspective while simultaneously advocating for an increase in the prominence of postmodern, critical, and liberatory epistemologies in EER. As such, we intentionally adopted a postmodern, interpretive approach in our methods, inspired by the book Reconceptualizing Qualitative Research (Koro-Ljungberg (Koro), 2016), discourse analysis (Foucault, 1972; Gee, 2014), and The Listening Guide (a feminist approach to qualitative research methodologies, again derived from critical theory, which centers the voices of the participants over the voices of the researchers; Gilligan et al., 2003). Our interpretations reflect this epistemological preference, which is also discussed in the Reflexive Interlude section.
The databases Engineering Village and Google Scholar were employed to search for journal articles that are qualitative, focused on engineering education, and published in 2019. We searched using the terms qualitative and engineering education and excluded the terms quantitative, mixed, and educational technology. We also constrained the search to include only journal articles that were published in 2019. We conducted this analysis in 2020 and decided to constrain our search to the most recent complete year (2019) as these publications were the latest research published in the field at that time. The initial search resulted in 60 journal articles. These journal articles were filtered by removing the ones that were not focused on engineering education, those that involved mixed methods, and one that was not an authentic journal article (the authors misrepresented that the article was published in the International Journal of Engineering Education). This resulted in 27 journal articles from 11 journals that met our criteria.
With such a large data set of 27 articles, we were concerned that the analysis and subsequent interpretations would become somewhat superficial and not generate much insight. To address this concern, we borrowed critical and postmodern ways of engaging with the data inspired by the book Reconceptualizing Qualitative Research (Koro-Ljungberg (Koro), 2016) and the Listening Guide method (Gilligan et al., 2003). In Koro’s book, she describes a process of moving beyond coding and categorizing the data to a process where she gives space for the data to speak to her (2016). The Listening Guide method involves a series of listenings in which the researcher listens for different aspects of a person’s multilayered voice within a transcript (Gilligan et al., 2003). We borrowed from both approaches and listened for the voices of the researchers and the participants in published journal articles. This focus on the voices enabled us to capture the essence or mood of the articles that would not necessarily be captured in other types of analyses. We first read each article to become familiar with its content. In subsequent readings (or listenings) of each article, we listened for the voice(s) of the researchers, as well as the participants. We also borrowed from discourse analysis methods (Gee, 2014) to consider the underlying and sometimes unspoken assumptions of the researchers. During each listening, we took notes, wrote memos, and annotated digital copies of the articles. This documentation of our process helped us stay close to the data while also giving space for interpretation. We found it important to complete multiple readings of each manuscript so that we could identify multiple, sometimes conflicting voices within a single manuscript.
After we completed multiple listenings of each article, we began to compose the analysis (Gilligan et al., 2003), which involved attending to the research question and exploring how our analysis helped answer this question. This involved bringing together evidence from the prior listenings, memos written during the analysis, and engaging in more interpretation. In the analysis, we loosely organized our interpretations around the voices of the researchers and participants. In early listenings of the articles, we identified and described many researcher and participant voices. Through multiple readings, these voices were refined and grouped together. For example, we initially identified collaborative and alongside researcher voices. During our iterative memoing and reading we found that these two voices were similar conceptually and should be combined into a single voice. For this example, we chose to call this conceptual voice the alongside voice, as we felt that this label better captured the essence of this voice. Another example of this combination of voices is the absent or abstracted participant voice. In earlier listenings of the articles, tabulated and visual voices were identified. These participant voices appeared as tables or figures in the findings section. Upon further memoing and reflecting on the voices, we felt that these tabulated and visual voices made sense being included as an absent or abstracted participant voice. While these are not included as a separate voice, they are included in our interpretations and subsequent discussion. In our analysis, we identified voices that hopefully provide a more critical and insightful understanding of the diverse epistemologies within these manuscripts. 
We loosely organized our interpretations by presenting the voices that are aligned with more positivist epistemological perspectives first and then moved along the paradigmatic continuum towards voices that are aligned more closely with critical and postmodern epistemological perspectives.
Kellam is a white, queer engineering and engineering education faculty member at Arizona State University. They find themselves drawn to stories as ways of demonstrating the complex and nuanced experiences of individuals but also value larger studies that show patterns and differences across different contexts. In their own work they have been making a turn towards more postmodern and critical ways of engaging in scholarship. They hope that this paper is a call to action for engineering education researchers to begin to examine their own epistemologies, biases, perspectives, and values and how these influence their work as researchers, reviewers, and engineering educators. Kellam led the efforts of this paper from conceptualization to data analysis and write-up of this manuscript.
Jennings is a white, queer, disabled graduate student at Arizona State University. Their background is in manufacturing engineering and ferrous metallurgy, and they are now pursuing an MS in human systems engineering and a Ph.D. in engineering education. They are interested in restructuring the engineering institution to become less hostile to marginalized groups using tenets from queer theory, feminism, Marxism, and critical theory. They believe that part of the way to accomplish this goal is to examine and critique research trends as a community in order to hold the community accountable. Jennings prefers qualitative methodologies when working with small numbers (Slaton & Pawley, 2018), and believes that quantitative research on underrepresented groups such as the LGBTQIA+ community may serve to marginalize these communities further. Jennings was involved in building the literature review for this study, as well as providing critical feedback and editing for Kellam’s data analysis.
As described above in the data analysis section, this section is divided into the following subsections: Voices of the Researchers and Voices of the Participants. Within each of these broader categories, we begin with more positivist leanings and progress towards more critical voices. As we describe each voice, we include examples from the data so that you, the reader, can gain a sense of the data alongside our interpretations.
The following section addresses the voices that we identified in our analysis that appeared to be derived from the researcher.
Separated voices represented a positivist pull and appeared where there was a separation between the authors and the data as presented. In the Valentine et al. article (2019), this voice manifested in the narrative through referring to the people who conducted the research as assessors. The assessors were first introduced in the data analysis section of the paper:
The instructions to be used by assessors to evaluate the course outlines were first drafted based upon the two research questions… They were then refined through a pilot application by engaging all assessors in reading ten specified course outlines (from the authors’ institution) and using the instructions to evaluate the courses (Valentine et al., 2019, pp. 4–5).
Because the authors use the passive voice in this quote, we are unsure who created and later refined these instructions, thus hinting towards the researchers attempting to be seen as objective. This use of language sets up the researchers, assessors, and data as being separate and having little influence on one another. Moreover, there was no discussed connection between the authors and the data apart from the mention that the first set of course outlines came from the authors’ institution. This separation between the data and the authors expressed in the manuscript may have been an artifact of the gold standard of generalizable research valued within a positivist or post-positivist paradigm or may have been included to try to minimize the perception of bias or influence of the researchers on the data.
The apologetic voice emerged when authors described the research as being biased or not being generalizable. Researchers actively or passively described the study as being weak because of these limitations inherent in qualitative research. In the limitations, Valentine and colleagues discussed that the “method of analysis is subjective and subject to potential evaluator bias. Therefore, different evaluators may reach different results” (2019, p. 7). The authors were concerned about the subjectivity of this qualitative research project and the possible influences of the evaluator or assessor on the analysis of the data. This concern may be a reason for the way the authors discussed how the research was conducted (with assessors) and how the findings were presented (very brief and descriptive). We interpret this voice to potentially be representative of studies where researchers lean towards a more positivist epistemology but have designed and implemented a qualitative study.
In other papers, the apologetic voice appeared only subtly, typically when the authors discussed limitations, potential biases, and strategies to overcome those biases. This discussion of biases ranged from rebutting the claim that qualitative research is not as trustworthy as quantitative research to apologizing for methods that do not meet the assumptions of quantitative research. We include these examples to show some of the ways that authors discussed biases in more and less apologetic ways. An example of this appeared in the McCord and Matusovich article, where the authors described the limitations of the study including the context of the study (“limited to problem-solving engineering courses for undergraduate engineering students” [p. 498]), the participants being sophomore students (“observations focused on a sophomore engineering course and do not include upper-class students” [p. 498]), and the homogeneity of the student population (and participants) of their study (“the sample of participants for this study lacks diversity” [p. 498]). The authors went on to explain that “this lack of diversity limits the generalizability of findings and the use of the current observational coding strategy” (p. 498). They further explained that “the outcomes of the research could be strengthened through the use of multiple methodological approaches for the purpose of data triangulation” (p. 499). This came across as somewhat apologetic and could be perceived by the community as encouraging future researchers to use multiple methodological approaches to reduce the potential biases identified in the study. Using multiple methodological approaches could impact both the types of research conducted and the nature of the findings that emerged from that research.
An example of a more subtle apologetic voice was in the Mobley et al. article where the authors explained, “While we recognize the limitations of our sample size of four for the in-depth narratives, the participants’ narratives offer detailed insights into their experiences and perceptions of identity” (2019, p. 46). In the next paragraph, Mobley and colleagues explained, “our results do not represent the experiences of all student veterans in engineering” (2019, p. 46). These examples are somewhat apologetic, but counter that bias or concern with strengths that their qualitative methods offer. This is still considered apologetic in that the methods of the paper will never meet the assumptions of quantitative research with larger data sets and more generalizable results. In the Boklage et al. article, the authors discussed overcoming bias during the interviews by “having the participants share their story in its entirety in the interview and only after that story was finished, did we follow up with probing questions” (2019, p. 4). While this was a concern about the author biasing the data, it also demonstrated that the authors were aware of their role in co-constructing the data through their active role during the interview process. These examples demonstrate ways to discuss biases while taking on a more or less apologetic voice in the narrative. Providing the strengths of a study alongside the possible biases is one way to lessen the apologetic tone that easily arises when discussing limitations and biases of a study.
In the Eastman et al. article (2019), the authors were not apologetic, as they directly discussed bias and generalizability and explained that these concerns were illegitimate concerns for readers of their article. These researchers described the intention of their article and the value that this article brought to the EER community, “It is our hope that readers recognize and reflect on parallels between Roger’s struggle to understand the social context of the engineering education environment and their own experiences as educators” (2019, p. 464). This example helps demonstrate the different ways in which authors avert potential concerns (in a non-apologetic way) around biases and shows a more critical epistemological approach to qualitative research.
The generalizable voice appeared in articles when authors discussed how their research was generalizable, reasons their research was not generalizable (even though they seemed to wish it was), or how their research was never intended to be generalizable at all. In the Holland et al. article (2019), the generalizable voice appeared when the authors expressed concern that their research was not generalizable because it was qualitative. They explained, “Qualitative data is unsuited to producing universal, ‘objective’ rules related to design education” (2019, p. 5). This statement implies that the researchers would prefer to conduct research that is more generalizable. The authors’ choice of words in this excerpt was particularly interesting: qualitative data was the subject of the sentence, and the chosen language could be interpreted to mean that qualitative data would be superior if it could indeed produce “universal, ‘objective’ rules.” This example suggests that these authors lean towards a more positivist epistemology and would value studies that could produce such “universal, ‘objective’ rules.”
In the Fourati-Jamoussi et al. article (2019), the authors discussed a desire to compare their case with other institutions but could not because of the differences between the institutions. They explained,
To compare the evolution of UniLaSalle with roughly comparable structures (Lima et al., 2016) raises many problems, as the chosen indicators can experience biases that are difficult to assess: different financial resources, specific student populations, diverse academic programmes, specific and non-comparable business activities of graduate students (2019, p. 575).
This quote demonstrates a concern that their work was not generalizable and that their findings could not be used to compare or contrast with other comparable organizations. It also suggests that the authors recognized that many of the indicators were not easily quantifiable or directly accessible. The quote may also reveal the values of the researchers, who appear to value more quantifiable studies that lead to more generalizable understandings. The authors also used the qualifier “non-numerical” when describing the data elsewhere in the paper, which further demonstrates a pull towards a more positivist epistemological leaning (Fourati-Jamoussi et al., 2019, p. 576).
In the Rulifson and Bielefeldt article (2019), the authors’ language suggested a tension around the generalizability of their results, either within the author team or between the authors and reviewers. These researchers addressed a potential critique of their own work: “One could question whether the students who participated in the interviews are broadly representative of engineering students” (2019, p. 580). Later in that paragraph, however, they somewhat contradicted that statement by explaining, “Besides these issues there is little reason to believe that the students are drastically different from the larger pool of engineering students overall” (2019, p. 580). While we cannot know the motivation behind these contradictory statements in the final paper, we do wonder whether these somewhat conflicting statements were a reaction to reviews that the authors received on earlier versions of the paper (Beddoes, 2014a), or whether this concern about generalizability was something that they themselves were internally or epistemologically grappling with.
In the Jordan et al. article (2019), the generalizable voice emerged in the limitations section. The authors explained,
Our participant sample consisted almost entirely of Navajo engineers who currently live and work outside of the Navajo Nation based on the available sample. We, therefore, do not make claims about Navajo engineers who live and work on the reservation. We also do not make claims as to whether our sample is representative of the larger population of Navajo engineers as there is a lack of detailed data to confirm this. We present the need for an expanded sample as an opportunity for future research. Also, while our methodology of phenomenography does not produce generalizable results, it does provide rich, in-depth description of experiences that are transferrable [sic] (2019, p. 363).
This discussion of generalizability contained tensions both within itself and with other aspects of the paper. The authors explained that they “do not make claims” of generalizability “as there is a lack of detailed data to confirm this.” The idea that there could be detailed data to confirm such claims suggests a more positivist epistemology, as does their argument that an expanded sample in future research could help create a more generalizable study. They then shifted to explaining that “phenomenography does not produce generalizable results,” which suggests that the authors were aware that, even with more participants, they could not reach generalizable results, at least within the confines of this methodology. It would be interesting to learn whether this limitations paragraph was present in the initial draft submitted for publication or arose in response to reviewers during the review process. The discussion also seemed to be in tension with other aspects of the paper, as the authors provided positionality statements and rich participant quotes in the findings, which suggest a more critical and postmodern epistemology.
In some articles without a generalizable voice, the authors directly addressed potential concerns around generalizability. For example, in the Eastman et al. article (2019), the authors explained, “It is not our intention that the findings from this research are generalizable for all engineering educators” (2019, p. 464). Such statements may be included to counter perceived concerns of readers or may be responses to reviewers’ comments. In Pawley’s article (2019), she conducted interviews and, through her analysis of them, focused on the structure of ruling relations. She explained, “While the study participants serve as the focus of the research, the subject of the research is the structure of ruling relations and institutions themselves, accessed via participants’ stories. As a result, I do not claim conventional generalizability directly for the outcomes but instead expect the reader to assess their pragmatic validity (Walther et al., 2013)” (2019, p. 19). Pawley invited the reader to consider ways for the research to inform other contexts. This is another form of co-constructing meaning, not only with the participants but also with the readers of the article.
The alongside voice was one in which the researcher(s) co-constructed meaning together with the participant(s). Statements around the co-construction of meaning suggested that the authors believed research was not simply a matter of learning the truth from participants; rather, the researcher influenced the participant and the data that was subsequently collected, and this researcher involvement also brought value to the research. Eastman explained, “The researcher is an instrument in the processes of data collection and analysis, the critical analysis tool responsible for influencing the co-construction of meaning during the conversational interaction between interviewer and interviewee (Howe & Eisenhart, 1990; Mishler, 1991)” (2019, p. 464). This idea of researchers and participants influencing one another showed that the researchers understood their active roles in the research project and the complex, multifaceted ways in which their presence influenced the conceptualization of the project, the interview process, the data resulting from the interviews, and the analysis and eventual write-up of the article. Eastman and colleagues continued to explain this co-construction process and shared their positionalities so that readers could better understand the specific context of the research project and bring that understanding to their interpretation of the findings: “Because this co-construction process is embedded in the data collection and analysis, we believe that the first author’s status as an engineering insider lends a greater credibility to the argument we present as he was able to recognize, interpret, and capture the nuanced nature of an insider’s account of the culture of engineering education” (Eastman et al., 2019, p. 464). In this paper, the authors discussed the importance of having both insider and outsider perspectives on the research team.
The first author related well to the participant, as they were both white men with a shared experience of learning about privilege and diversity. The author team viewed this insider perspective as important to developing the argument of the paper. This infusion of the alongside voice in their article suggests a more postmodern or critical epistemology.
This alongside voice was also very salient in Minichiello et al.’s article (2019): “Within the constructivist perspective, researchers are viewed as ‘instruments’ of data collection and interpretation and, thus, play an important role in constructing meaning along with the participants” (Glesne, 2016, p. 9). Minichiello and colleagues framed researchers as working alongside participants. This type of discussion tended to appear in positionality sections, which helped readers better understand how the researchers perceived themselves and their role in the research process. This voice was very different from the voices in articles that focused only on concerns about biases and ways of minimizing bias during data collection and interpretation. Researchers who adopted this perspective seemed to understand their role in the research process and how their specific perspectives and backgrounds could contribute to, influence, and strengthen the overall research project. The alongside voice appeared more often in articles aligned with postmodern and critical epistemologies.
While it was not common, there was one article in which the authors, in the positionality section, discussed how the data influenced the researcher as well as how the researcher influenced the data (Boklage et al., 2019). In the researcher positionality section of that paper, Coley explained how she was influenced by the participants and data and how she, in turn, influenced them: “Interviewing these participants served as a form of mentorship, guidance and encouragement” (Boklage et al., 2019, p. 7). She then went on to explain how she also influenced the data collection and analysis aspects of the project. This symbiotic relationship between the participants, researcher, and data suggests a more critical epistemology.
The vulnerable voice represented a willingness of the authors to interrogate themselves, their roles in the research, their own biases, and their missteps. This voice sometimes appeared as a confessional tale (Van Maanen, 2011), in which the researcher shared their accounts of engaging in the research process, what they learned, and how they influenced the research project (e.g., in selecting participants, in deciding on the context, in selecting a research question). In other examples, the vulnerable voice ran throughout the paper and highlighted the positionality of the researcher. We found the presence of this vulnerable voice interesting, as it showed potential growth and learning among qualitative researchers in our field and highlighted how our positionality and perspectives influence, and are influenced by, the research that we engage in.
While the vulnerable voice appeared most often in papers whose authors aligned with a more critical epistemology, it did appear in one paper more aligned with a post-positivist epistemology. Carstensen and Bernhard explained that EER was an emerging field when they began their work and that, without much guidance, they decided to collect “a huge amount of data” (250 hours of video recordings), later concluding that “analysing this huge video dataset was a major challenge” (2019, p. 86). Because they were unsure how to proceed and their attempts to borrow other methods proved not to work with such a large dataset, they developed their own method, the “‘learning of a complex concept’ (LCC) model” (p. 86). In describing this story and their uncertainty about how to proceed, given the common knowledge in the field at the time, they put forth a more vulnerable voice, explaining that they were unsure how best to design a study and ended up with a large amount of data that was difficult to analyze. This led to a more abstracted voice, as the results were presented as abstracted models rather than with the richness and nuance that were likely present in the 250 hours of video-recorded data. The presence of the vulnerable voice in this manuscript may suggest a positivist pull: the researchers could have struggled with this analysis because they were applying quality criteria that they typically use in quantitative projects to this qualitative EER project.
In manuscripts aligned with a more critical or postmodern epistemology, a vulnerable voice commonly emerged, showing how the authors were open to being influenced by the participants and data while simultaneously influencing them (Eastman et al., 2019; Pawley, 2019). The presence of this voice sometimes led to changes in how the research project played out, as in a few instances in the Pawley article (2019). One instance arose when Pawley described her recruitment process, in which she listed belonging to specific racial/ethnic categories as a criterion for participating in the study. She later expressed regret for using this categorization strategy at the recruitment stage. Pawley explained, “I came to regret my classification of racial/ethnic categories” (p. 17) and later wrote that, “Informed by gender and race theory, we did not presume participants’ racial (or later, gender) identities” (p. 18) during data collection. In this case, Pawley described regrets that she had about the research design and situated herself as a learner. This vulnerability also appeared later in the article, when Pawley reflected on her role in perpetuating structural inequalities. She wrote,
As an engineering instructor, I think about the many course policies I implement in my syllabi without thinking, borrowed from course precedents—about late work or attendance or participation—and have come to notice myself as an actor in a system potentially serving the needs of the institution of higher education over the needs of my individual students. How does this complicity ‘come to happen’? (Pawley, 2019, p. 28).
This example of the researcher’s vulnerability demonstrated that she spent time reflecting deeply on the contexts and participants of her research. She did not simply position what she was learning as something that others “should” do; she engaged in deep reflection and considered her own role and perspective.
Another example of a paper with a vulnerable voice was the Eastman and colleagues’ ethnographic article (2019). Their study focused specifically on issues of privilege and equity in the experience of a white, male engineering professor. The first author of the paper, Eastman, was a white male working on his PhD in a STEM education research program during this project and had more than 20 years of experience as an engineering faculty member. In taking a critical approach to the research project, the authors explained, “Like Roger, during their shared experience in the doctoral cohort, the first author struggled with conceptions of privilege and the understanding of diversity, and considers these to remain as personal, ongoing struggles of continued learning” (p. 464). By sharing this vulnerable voice, Eastman showed how he learned and struggled through the project, demonstrating that he was vulnerable and learning alongside his participant throughout a research project that extended over four years of data collection. The research team also shared aspects of their positionality and how they planned to interrogate their biases, which is another way that they enabled a more vulnerable voice to come forward.
Our research team consists of two white men and one Black woman who have experienced success in engineering education, chemistry education, and geology education. As collaborative authors, we hope to interrogate our own biases as we study issues of equity and diversity in engineering. By offering our critical view of race, gender, and culture in this study, we wish to offer a needed transparency for others regarding the culture of engineering and help readers interpret their place in it (p. 6).
This willingness of the authors to interrogate themselves and their own biases placed them in a more vulnerable position. We did note, however, that this vulnerable voice did not reappear later in the paper, specifically in the findings and subsequent discussion (this quote was found near the beginning of the 22-page article). Nevertheless, the inclusion of a vulnerable voice in this paper helped demonstrate how being vulnerable and sharing positionalities and biases can make our findings more accessible and relatable to the readers of our work.
In the Mukhtar et al. article (2019), participants’ voices were entirely missing from the manuscript. Mukhtar et al. conducted a document analysis followed by interviews with 10 participants. The methods section provided ample detail about data collection (recruitment and interview questions) and data analysis (transcription, coding, and member checking). In the results section, however, the themes and sub-categories were described in just two paragraphs, accompanied by a table with a column for each participant and check marks indicating the competencies that each participant mentioned. No participant quotes were provided in the results section.
While this absent participant voice was uncommon in this set of manuscripts, other articles included minimal quotes from participants and presented the findings in an abstracted way (Carstensen & Bernhard, 2019; Fourati-Jamoussi et al., 2019; Sadikin et al., 2019; ten Caten et al., 2019; Valentine et al., 2019). In these articles, short quotes were often relegated to a table in the results or findings section. This methodological choice to remove or minimize the participant voice seemed to occur when authors appeared more aligned with positivist or post-positivist methodologies and valued the models and relationships between categories that emerged in their research projects over rich descriptions of participants’ experiences.
An example of the abstracted voice was in the Carstensen and Bernhard paper, where the authors discussed the difficulties associated with “analyzing the huge dataset” and explained that “transcribing complete labs would be too time-consuming” (2019, p. 86). They then abstracted this huge dataset into figures and models, reducing six thousand lines of transcription to two simple figures. In another article, Fourati-Jamoussi and colleagues (2019) used an array of bar charts, radar charts, and a model to describe their data. Fourati-Jamoussi and colleagues included two quotes, but most of the analysis involved figures and discussions of those figures. These reductions of very complex and large data sets into a few simple figures had significant implications for the results of the papers and aligned closely with more positivist and post-positivist epistemologies.
In the Pembridge and Paretti article (2019), the participants’ voices were present but not accentuated in the findings. This may be due in part to the large data set, but it was also likely due to the nature of the research project. The authors developed a taxonomy of nine functions comprising twenty-eight practices. Because they described each of these functions and practices individually, the authors chose to present participant quotes in tabular form. The authors may have preferred integrating the quotes into the text but included them in a table “for brevity” (p. 203). In this case, the researchers may have made intentional choices to minimize the participants’ voices because their purpose was to develop theories and taxonomies, and they likely had to consider the manuscript length requirements of the journal venue. This example shows how the research project and research design, along with the epistemologies and values of the researchers, influence the voices present in a manuscript.
The last examples in this section were two articles that had small data sets but in which abstracted participant voices still emerged, with little interpretation from the researchers. The Seiradakis and Spantidakis article (2019) was a very short article that used a case study approach, basing its analysis on three interviews with three participants. The findings were not particularly interpretive in nature and consisted mostly of participant quotes. Similarly, ten Caten and colleagues (2019) analyzed a single focus group with seven participants. The only mention of analysis was that the focus group was “transcribed, indexed, and analyzed” (p. 143). The findings were then presented in three sections aligned with the questions asked during the focus group. These examples suggest a positivist epistemology on the part of the researchers.
The dehumanized participant voice appeared in manuscripts in which authors referred to participants by numerical or alphabetical codes (Holland et al., 2019; Main et al., 2019; Mobley et al., 2019; Pembridge & Paretti, 2019) and in those in which the participants’ voices were tabulated. In one case, the authors used participants’ identities (First Generation Student Veterans in Engineering [FGSVEs]) as the subjects of sentences rather than referring to them as students or participants (Mobley et al., 2019), which seemed to prioritize the participants’ identities over the participants themselves. In another case, the authors referred to participants by a five-digit participant ID and paragraph number (e.g., 10169/50; Pembridge & Paretti, 2019). This dehumanized voice was also accentuated in the Main et al. article (2019) through the choice of pronouns used to describe participants: Main and colleagues referred to all participants as masculine, regardless of gender identity, to protect the anonymity of the single woman participant. While their reason for doing this is understandable, referring to all students with masculine pronouns may have further dehumanized the participants’ voices. This methodological choice influenced both the way the participants were represented in the manuscript and the way the reader interacted with the data and the participant quotes, dehumanizing the participants, positioning masculine identities as the norm in engineering, and potentially placing a barrier between the reader and the participant.
There were notable differences in the ways that researchers named or pseudonymized their participants. Ten Caten et al. (2019) referred to participants as “P#” (e.g., P3), which felt to us like an attempt to anonymize the participants and possibly reduce the perception of bias in the study. Conversely, Pawley (2019) wanted to give participants power in deciding how they would be referred to in publications: some participants selected a pseudonym, while others preferred to use their own name. Pawley explained that she did not indicate which names were pseudonyms and which were real, in an attempt to create a “‘community immunity’ effect” in which readers would not be able to easily identify participants (p. 18). Pawley also encouraged participants to read the original and pseudonymized transcripts so that they could check for inaccuracies in their own stories and remove anything they did not want included in the analysis.
When authors used the storied voice, they focused on a small number of participants and shared stories from those participants (Boklage et al., 2019; Eastman et al., 2019; Minichiello et al., 2019; Pawley, 2019; Rulifson & Bielefeldt, 2019). This took the form of case studies (e.g., Rulifson & Bielefeldt, 2019) or vignettes (e.g., Boklage et al., 2019). In the Rulifson and Bielefeldt article (2019), the researchers collected longitudinal interview data over four years. They analyzed all the data using a framework and categorization and then presented “four students’ evolution of ideas” (p. 577). The authors wanted to provide more context for the findings by presenting these four student descriptions, explaining, “These deeper explorations of students’ ideas show the changes in the students’ own words to provide deeper context to the rough type classifications.” This paper demonstrated authors (or panelists, reviewers, or editors) who valued large sample sizes but simultaneously valued the complexities and nuances that can be conveyed by considering a smaller subset of the data.
This storied voice also appeared in the Mobley et al. (2019) article. Mobley and co-authors explained that their data were drawn from a larger study in which interviews were conducted with 60 student veterans from four institutions. The author team was interested in specific intersections of participants’ identities, namely veterans who were also first-generation college students. They presented case studies of four participants that involved identity circles and narratives. The purpose was to “illustrate the dynamic and overlapping nature of first-generation, SES, engineering, and military identities” (p. 39). This research article shows how the author team dealt with the relatively large data set of 60 interviews by considering only a subset of the data. This small numbers study “couched” within a larger study may also suggest an epistemological tension between the positivist leanings of funding agencies or journals and the more postmodern epistemologies of the researchers. This tension may be embodied within individual researchers, among researchers on the larger team, between grant review panels and researchers, or between reviewers/editors and researchers.
Articles with a storied participant voice tended to include more researcher interpretation in the analysis and findings (Eastman et al., 2019; Meyer & Fang, 2019; Minichiello et al., 2019; Pawley, 2019). These interpretive studies led to insights that could not have been drawn had the studies been more descriptive in nature or included more participants. Minichiello and colleagues’ (2019) article was one example of a more interpretive article. The study included six participants who each participated in two interviews; in addition, instructors and teaching assistants were interviewed and course artifacts were analyzed. The authors explicitly stated that they had an “open mind-set to the variety of perspectives and issues that might arise” (p. 5), suggesting that they were open to uncovering (or perhaps co-creating) insights in the analysis. This approach provided multiple perspectives and vantage points on the stories represented and suggests a more critical and postmodern epistemology.
Many of the articles with a storied voice either included a subset of a larger number of participants (Main et al., 2019; Mobley et al., 2019), focused on a few participants (Meyer & Fang, 2019; Minichiello et al., 2019), or focused on a single participant (Eastman et al., 2019). Another article included long quotations from participants as the author developed her arguments in the findings section (Pawley, 2019, pp. 19–20). This storied voice suggests at least some pull from researchers towards more critical and postmodern epistemologies.
In this study, we set out to develop an understanding of our qualitative EER community’s epistemologies and values through an analysis of journal articles published in 2019. Through this analysis we uncovered both a pull towards positivist epistemologies and a pull towards more critical and postmodern epistemologies. The post-positivist pull that emerged in many of the voices in our analysis demonstrates that our community (or at least part of it) values generalizable research with large data sets that minimizes or controls for researcher bias and/or subjectivities. This desire for generalizable research could be one reason that some people in our community value qualitative data less than quantitative data (Beddoes, 2014a), and it could indicate that some qualitative researchers and reviewers are operating from a positivist or post-positivist epistemology, which positions them to value research that is generalizable and has minimal bias. On the other hand, many of the voices that emerged in our analysis show that many researchers are moving towards more critical and postmodern epistemologies. This trend towards postmodern research could have large implications for the type of research that we as a community conduct, the potentially diverse participants of whom we develop more in-depth understandings, and the types of learning that can happen as researchers and participants create alongside each other and then share with our broader engineering education community.
Our analysis of voices suggested an epistemological unconsciousness threaded throughout our community. When discussing the generalizable voice in the findings, we shared the example of the Jordan et al. article (2019), in which the authors appeared to take an apologetic or defensive tone when discussing the lack of generalizability of their study, suggesting a more positivist-leaning epistemology. This, however, is in tension with other aspects of the paper (e.g., rich participant quotes, a positionality statement) that suggest a more critical or postmodern epistemology. The perceived epistemological tension within this article could be due to an epistemological unconsciousness present in our community, wherein the author team may have been responding to journal requirements, to reviewers of their paper, or to reviewers on NSF panels when securing research funding for the project. It could also be due to an assumption or perception that others in the community would not find the number of participants in their study sufficient to warrant publication of the research. Finally, it could be due to an epistemological unconsciousness within a single researcher or the research team.
Epistemological unconsciousness also appeared in the couched studies that were somewhat common among the articles we analyzed (Main et al., 2019; McCord & Matusovich, 2019; Mobley et al., 2019). These couched studies analyzed a small number of participants drawn from larger datasets. They may suggest a tension among engineering education researchers who themselves lean towards more critical and postmodern epistemologies but feel they must couch their work within studies with much larger datasets to gain credibility in our broader community. Examples include McCord and Matusovich (2019), who had over forty hours of observation but focused their analysis on three observations, and Main et al. (2019), who conducted sixty interviews at four institutions but randomly selected three transcripts from each university for analysis in their paper. These examples point to a potential tension within our community between feeling the need to collect large amounts of data and wanting to conduct more in-depth analysis of smaller amounts of data. We interpret this tension as evidence of an epistemological unconsciousness present in our community.
The presence of the abstracted voice in our analysis suggests potential consequences of an epistemological unconsciousness in our community. In many of the studies with larger datasets, researchers tended to present their findings as abstractions. This abstracted voice often appeared as models or tables with few, if any, quotes from the participants (Carstensen & Bernhard, 2019; Fourati-Jamoussi et al., 2019; Mukhtar et al., 2019). This pseudo-quantifiable way of presenting qualitative findings could mislead our community towards over-generalizing our understandings of certain types or groups of people (e.g., the typical engineering student). This over-generalization is a critical concern for our community, as empirical observations drawn from this type of data presentation and interpretation could lead us to focus our research efforts on groups of people who are more prevalent in engineering (e.g., white male students), thus developing and prioritizing understandings of these majority groups and perhaps ignoring voices that are marginalized in our communities (e.g., Black male engineering students).
Our positivist epistemological unconsciousness also has implications for the way we consider quality within qualitative EER. As more engineering education researchers begin to embrace postmodern and critical epistemologies while others continue to operate unknowingly with a positivist or post-positivist epistemology, the tension over quality work in EER may continue to be exacerbated, resulting in more frequent (and perhaps more dramatic) instances of the types of illegitimate questions that we presented at the beginning of this article. More critically, our community’s tendencies towards a positivist epistemological unconsciousness have direct and historical implications for embedding racial, sexual, gender, and ableist discrimination within our qualitative methodologies. As discussed in the background section of this article, positivist-leaning epistemologies have been critiqued by critical scholars as tools of exploitation and violence against Black and Brown bodies, the queer community, disabled people, and women. Thus, we feel that the qualitative EER community (which is often privileged, white, cisgender, straight, and/or non-disabled) has a responsibility to reflect upon how our unconscious epistemological preferences are embedded within our research and how they impact marginalized and vulnerable communities, both inside and outside of the engineering institution.
While this analysis was limited, as we only saw the accepted and published versions of manuscripts, there were some indicators that authors were answering reviewers’ illegitimate questions (or heading off expected, yet illegitimate, questions; Guba & Lincoln, 2005). Many of the articles we analyzed discussed the inherent non-generalizability of their work, either confidently (e.g., stating that their study was not intended to be generalizable; Pawley, 2019) or apologetically (e.g., stating that their study was not generalizable even though the authors appeared to want it to be; Valentine et al., 2019). Other authors focused their discussion on providing contextual findings so that other educators could assess the validity of the results for their specific contexts (Minichiello et al., 2019).
Throughout the research process, there are myriad opportunities for illegitimate questions to influence the trajectory of one’s work. Illegitimate questions and concerns that appear in journal requirements and author guidelines, funding agencies’ expectations, and the review process may push qualitative researchers to build quantifiability or generalizability into their studies. This becomes apparent when we interrogate the bias towards discussing the generalizability of our findings, even though qualitative research is not meant to be generalizable. The different ways the articles we analyzed discussed generalizability may have been influenced by illegitimate questions from our broader EER community and indicate the substantial influence that positivism has on the way we design our research projects, write our articles, and interact with participants. Thus, validating illegitimate questions in our qualitative research may have led to the separated, apologetic, abstracted, and dehumanized voices that emerged in our analysis, even within articles that were more critical or postmodern in nature.
When considering the apologetic, generalizable, and dehumanized voices that appeared in our analysis, it is important to reiterate that the authors of the papers we analyzed are part of a broader EER community. Our articles are influenced not only by our own epistemologies as researchers but also by the broader EER community’s epistemologies. Beddoes (2014a) describes her experience publishing a journal article in which she took a critical feminist approach to her work. In this case study, she shares how her paper evolved as she received reviews from the journal’s editorial team. She explains that “articles are often the result of multiple and competing deliberations and negotiations” (p. 273). These deliberations and negotiations can be especially difficult when a field is as interdisciplinary as EER and when reviewers approach your work from diverse epistemological perspectives. It is important to remember that competing epistemological preferences likely influenced the voices that emerged in our analysis, and that in this study we sought to interrogate and challenge our community’s epistemological unconsciousness, not the authors of the papers we discussed.
As more engineering education researchers begin to embrace postmodern and critical epistemologies, the tension over quality work in EER continues to be exacerbated. In her critique, Beddoes (2014b) highlights that the tensions between positivist and postmodern thinkers in EER are exacerbated by a lack of agreement regarding what counts as quality work, as well as by the field’s deeply interdisciplinary nature. Her findings align with the patterns that we uncovered in this work. Beddoes makes the point (which we extrapolate here) that our relationship to data as qualitative engineering education researchers is shaped in part by the paradigmatic tensions we have outlined. For example, postmodern thought leaders in EER submitting an article for publication may be pressured by a positivist reviewer to add, “these results are not generalizable,” when the authors never intended for their research to be generalizable. Both assertions are correct when viewed through the lenses of their respective epistemologies. However, these tensions result from a lack of knowledge surrounding the community’s interdisciplinary and inter-paradigmatic nature, and they often manifest as disagreements or misunderstandings regarding what quality qualitative research ought to include.
More explicitly, some of the voices uncovered in this research (e.g., separated, apologetic, generalizable) displayed an embedded and inherent preference for large-scale datasets, which were leveraged to justify the research as methodologically rigorous and trustworthy by implying that there were generalizable implications. Embedded in this methodological decision is the idea that qualitative research projects with small numbers of participants are easy, untrustworthy, and of low quality. Koro and Douglas (2008) describe this perception by explaining, “qualitative research can appear at first glance as if it simply involves interviewing a few people and then writing up a summary” (p. 172). This perception of small number studies as less trustworthy and valid may be reinforced by qualitative research that takes a large-scale, generalizable approach. For example, ten Caten et al. (2019) conducted one focus group with seven participants, described their analysis as “transcribed, indexed, and analyzed,” and presented three themes that emerged from the analysis, an analytic approach commonly used in studies with larger datasets. Koro and Douglas explain further, “In fact, qualitative research can be just as difficult to conceptualize, and be as methodologically and theoretically challenging, if not more challenging, than quantitative research” (2008, p. 172). In another example, Eastman and colleagues (2019) had one participant, and the authors thoughtfully and thoroughly triangulated the data included in the findings section, reporting only themes that were corroborated by three data sources. This shows that the authors went to great lengths both to collect their data and to conduct their analysis. It also shows that the authors wanted to convince readers that this study with only one participant was as rigorous as other, larger studies.
However, one may wonder whether this article would have been published and as well received had the authors taken a more openly postmodern approach to their research, trusting that their reviewers understood this epistemology to be a valid, trustworthy, and insightful way to conduct research.
There has been much effort to introduce rigor into EER in general and qualitative research in particular (Borrego, 2007; Streveler & Smith, 2006). While many are moving away from discussions of rigor (refer to Riley (2017) for a problematization of the term) towards discussions of quality, there still seems to be an effort to articulate the community’s expectations of quality for qualitative research (Kellam & Cirell, 2018; Walther et al., 2017a). However, some of these concerns may not be solved by considering only the quality and trustworthiness of the research; they may instead have arisen because of the polarity of epistemologies that exist within different EER sub-communities. This polarity is likely a major contributor to the voices we heard and to the underlying tensions in many of the papers (e.g., the apologetic, generalizable, and alongside voices). One possible way to address these disparities in epistemology and quality criteria is to adopt criteria that align more closely with critical and postmodern epistemologies, as proposed by Tracy (2010). Here we are not arguing against quality; rather, we caution authors and reviewers against applying quality criteria and approaches that align with positivist epistemologies to projects that are more critical and postmodern in nature.
Beddoes (2014b) traces this movement focused on quality historically within the EER field, from early discussions centered on rigorous research to a more recent focus on methodological diversity. In thinking about how the field of EER has evolved over the years, it is not surprising that we continue to notice remnants of these earlier discourses in more recent articles. It is promising that, through our analysis, we heard voices that aligned with both positivist and critical epistemologies. It will be interesting to observe how qualitative EER continues to evolve in the coming years. We are hopeful that engineering education researchers have begun to recognize the value, importance, and contribution of researchers with more critical and postmodern epistemologies, and that we can continue the trend of valuing qualitative research more for what it is (complex and nuanced understandings) than for what it is not (generalizable; Alasuutari, 2010).
Our findings may indicate that these tensions in our relationships with qualitative data derive from tensions among researchers and within our culture and broader community. We are not advocating that this diversity in epistemologies is something that needs to be fixed, nor do we believe that we, as a community, should have a collective paradigm. This epistemological diversity brings a richness to our community, to the types of research we engage in, and to the understandings we uncover. We do, however, feel that it would be helpful for qualitative researchers to consider their epistemological stance and how it influences their research decisions. This consideration of epistemological perspectives also has implications for reviewers of articles and grants, as we strive to understand the epistemology of the research team and how it influenced the decisions made about research design and analysis. In addition, it may be helpful to consider the differences between the epistemologies of the researchers and those of the readers of these manuscripts. For example, Beddoes (2014b) describes the tensions that arose in the field of EER when researchers tried to appeal to an audience of engineering educators who perceived social science research as inherently inferior to engineering research and science, because social science was not perceived as objective and generalizable.
This desire to remain unbiased in our research, with a focus on large number studies, has implications for the types of research we can conduct and the populations we can impact (Slaton & Pawley, 2018). Large number studies enact power differentials: majority groups hold more power in this type of study, while people from marginalized groups (e.g., due to race, gender, sexuality, or disability) are further marginalized, losing their voices and their analytic power because of their small numbers (Slaton & Pawley, 2018). Slaton and Pawley argue that the choices we make throughout our research design, such as who we recruit as participants, our data collection processes, and our research methodology, have “profound social consequences,” especially when promoting equity-focused EER (p. 133).
Through the process of reviewing qualitative EER articles for this paper, we developed a more comprehensive understanding of where qualitative research currently stands within our field and of its potential futures. As a community, we experience tensions during the writing and review process that are likely due to our varying epistemological perspectives. Building an initial awareness of our community’s mosaic of epistemological perspectives, and of how they influence the type of research we engage in, is an important first step towards identifying reasons for some of the underlying tensions in our field. If we as researchers and reviewers begin to consider the epistemological perspectives of others in our communities more deeply, we can begin to provide more constructive and legitimate reviews that help move our research forward. Without this deeper understanding of epistemological differences, we may continue to notice apologetic, generalizable, or dehumanized voices appearing in our manuscripts.
To stimulate further thought and/or discussion surrounding diverse qualitative epistemologies and paradigms, we present questions below, which are intended for consideration while engaging in the design, implementation, and dissemination of qualitative research. If you are involved as a reviewer, on an editorial board for a journal, or a program chair for a conference, you may consider the following reflective questions as you begin to review others’ work:
As our community begins to recognize the value of more diverse epistemological perspectives, more researchers may take on work aligned with a postmodern epistemology, which could expand the potential and possibilities of our research. We are not advocating for a monolithic epistemology for all qualitative researchers. Instead, we advocate for recognizing the value of our diverse epistemologies and the potential for more impactful research that could influence the way we perceive and imagine the future of engineering education. If researchers recognize that they lean towards more positivist epistemologies, they may, for example, consider what this means for their research design and develop a project that aligns more closely with their epistemology. Such a researcher may be well-suited to conduct a grounded theory study with thirty participants. Conversely, a researcher aligned with more postmodern epistemologies may be well-suited to conduct a study exploring the complex and nuanced experiences of a unique, individual student as they navigate an engineering undergraduate program. If either of these researchers conducts research that is misaligned with their epistemology, there could be a notable impact on the quality of the research, or they may experience more tensions as they engage in the research from inception to completion.
If you are a qualitative engineering education researcher, we encourage you to reflect on the following questions as you begin to uncover your own epistemological perspective and what that perspective means for you and the type of work that you want to engage in:
These questions are meant to help you begin to reflect on your personal values and epistemology so that you can design qualitative research projects that have the potential for the greatest impact within engineering education and that are consistent with your epistemological preferences. We encourage you to take some time to reflect on these questions through discussions with your research teams and peers or through writing memos in response to our questions. As you engage in this reflective practice, you may develop more questions to ponder and a deeper understanding of yourself, so that you can create research projects that are more authentic and meaningful to you personally while also making more impactful contributions to our broader community. In addition, building awareness about your own and others’ epistemologies can help you review others’ research in more constructive ways.
Through an analysis of discourses in qualitative engineering education articles published in 2019, we developed an understanding of the epistemologies and values of qualitative engineering education researchers. This analysis helped us better understand some of the underlying tensions within the qualitative EER community and, in some cases, within ourselves and our research teams. Some of these tensions led to an apologetic researcher voice, in which researchers seemed apologetic that their qualitative research projects were not generalizable. Researchers who seemed pulled towards a more positivist epistemology tended to have separated, apologetic, and generalizable researcher voices with absent, abstracted, and/or dehumanized participant voices. Researchers pulled more towards critical and postmodern epistemologies had a more alongside and vulnerable researcher voice, and the participants’ voices were more storied. We intended for this analysis to shed light on the role that researchers’ and reviewers’ epistemologies play in the types of qualitative research that we conduct as a community and how we write about that research in our dissemination efforts. We hope that readers will join us as we begin to interrogate our own epistemologies and consider how those epistemologies influence us as researchers, reviewers, and readers of qualitative engineering education research.
We hope this article brings to light the difficulties our broader community faces in taking on more critical and postmodern epistemologies. These difficulties may be unconsciously embedded within us as individual researchers, as we constantly engage with institutions that are historically rooted in a positivist epistemological paradigm. Unconsciously or compulsively striving for positivist notions of generalizability in our research stems from the academy’s role in American history. Additionally, the broader EER community has historically valued more positivist and post-positivist ways of doing qualitative research (albeit unconsciously) due to its proximity to the field of engineering, which is explicitly derived from the positivist tradition. Interrogating our own epistemologies as qualitative engineering education researchers could begin to open us up to other ways of knowing as we design research studies that not only influence and have implications for our participants but also enable us to grow and change as a community as we ourselves are influenced by our participants.
1The 7th edition of the APA Style Guide and the author guidelines for SEE recommend capitalizing racial and ethnic terms (American Psychological Association, 2019). We follow this guideline when discussing works by, or topics affecting, people of color. However, we do not capitalize white throughout this manuscript for two reasons. First, we aim to decenter whiteness as an institution in our scholarly work. Second, we recognize that people of color have suffered oppression by white people throughout history, which has necessitated the creation of counter spaces and collective cultures that are unique to each community of color that whiteness has harmed. Whiteness, on the other hand, shares no collective culture as a result of oppression. Thus, we do not capitalize white, as we disagree with the implicit assertion that there is a white culture worth acknowledging in this particular manuscript.
We would like to thank the reviewers of this paper, our research team, our colleagues, and our mentors for keeping us thinking critically and deeply about the ways we engage in research. We look forward to continuing to grow and learn with you all. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Nadia Kellam is a member of the advisory board for Studies in Engineering Education, which is on a voluntary basis. Madeleine Jennings has no competing interests.
Alasuutari, P. (2010). The rise and relevance of qualitative research. International Journal of Social Research Methodology, 13(2), 139–155. DOI: https://doi.org/10.1080/13645570902966056
American Psychological Association. (2019). Publication manual of the American Psychological Association (7th ed.). American Psychological Association.
Beddoes, K. (2014a). Using peer reviews to examine micropolitics and disciplinary development of engineering education: A case study. Discourse: Studies in the Cultural Politics of Education, 35(2), 266–277. DOI: https://doi.org/10.1080/01596306.2012.745735
Beddoes, K. (2014b). Methodology discourses as boundary work in the construction of engineering education. Social Studies of Science, 44(2), 293–312. DOI: https://doi.org/10.1177/0306312713510431
Benson, L. (2019). Editor’s page. Journal of Engineering Education, 108(2), 143–144. DOI: https://doi.org/10.1002/jee.20257
Boklage, A., Coley, B., & Kellam, N. (2019). Understanding engineering educators’ pedagogical transformations through the Hero’s Journey. European Journal of Engineering Education, 44(6), 923–938. DOI: https://doi.org/10.1080/03043797.2018.1500999
Borrego, M. (2007). Development of Engineering Education as a Rigorous Discipline: A Study of the Publication Patterns of Four Coalitions. Journal of Engineering Education, 96(1), 5–18. DOI: https://doi.org/10.1002/j.2168-9830.2007.tb00911.x
Borrego, M., & Henderson, C. (2014). Increasing the use of evidence-based teaching in STEM higher education: A comparison of eight change strategies. Journal of Engineering Education, 103(2), 220–252. DOI: https://doi.org/10.1002/jee.20040
Brown, B. A., Henderson, J. B., Gray, S., Donovan, B., Sullivan, S., Patterson, A., & Waggstaff, W. (2016). From description to explanation: An empirical exploration of the African-American pipeline problem in STEM. Journal of Research in Science Teaching, 53(1), 146–177. DOI: https://doi.org/10.1002/tea.21249
Burt, B. A. (2020). Broadening participation in the engineering professoriate: Influences on Allen’s journey in developing professorial intentions. Journal of Engineering Education, 109(4), 821–842. DOI: https://doi.org/10.1002/jee.20353
Carstensen, A. K., & Bernhard, J. (2019). Design science research – a powerful tool for improving methods in engineering education research. European Journal of Engineering Education, 44(1–2), 85–102. DOI: https://doi.org/10.1080/03043797.2018.1498459
Case, J. M., & Light, G. (2011). Emerging Methodologies in Engineering Education Research. Journal of Engineering Education, 100(1), 186–210. DOI: https://doi.org/10.1002/j.2168-9830.2011.tb00008.x
Coley, B. C., Simmons, D. R., & Lord, S. M. (2021). Dissolving the margins: LEANING INto an antiracist review process. Journal of Engineering Education, 110(1), 8–14. DOI: https://doi.org/10.1002/jee.20375
Council of National Psychological Associations for the Advancement of Ethnic Minority Interests. (2016). Testing and Assessment with Persons and Communities of Color. American Psychological Association. https://www.apa.org/pi/oema
Eastman, M. G., Miles, M. L., & Yerrick, R. (2019). Exploring the White and male culture: Investigating individual perspectives of equity and privilege in engineering education. Journal of Engineering Education, 108(4), 459–480. DOI: https://doi.org/10.1002/jee.20290
Edstrom, K., Benson, L., Mitchell, J., Bernhard, J., van den Bogaard, M., Finelli, C., Kellam, N., Lee, M., Lord, S., Rover, D., Saliah-Hassane, H., & Zappe, S. (2020). Review unto others as you would have others review unto you. IEEE Frontiers in Education Conference, 1–2. DOI: https://doi.org/10.1109/FIE44824.2020.9274132
Farrall, L. A. (1979). The history of eugenics: A bibliographical review. Annals of Science, 36(2), 111–123. DOI: https://doi.org/10.1080/00033797900200431
Foor, C. E., Walden, S. E., & Trytten, D. A. (2007). “I wish that I belonged more in this whole engineering group”: Achieving individual diversity. Journal of Engineering Education, 96(2), 103–115. DOI: https://doi.org/10.1002/j.2168-9830.2007.tb00921.x
Foucault, M. (1972). The archaeology of knowledge & the discourse on language. DOI: https://doi.org/10.1002/9780470776407.ch20
Fourati-Jamoussi, F., Dubois, M. J. F., Agnès, M., Leroux, V., & Sauvée, L. (2019). Sustainable development as a driver for educational innovation in engineering school: The case of UniLaSalle. European Journal of Engineering Education, 44(4), 570–588. DOI: https://doi.org/10.1080/03043797.2018.1501348
Gee, J. P. (2014). How to do discourse analysis: A toolkit (2nd ed.). Routledge. DOI: https://doi.org/10.4324/9781315819662
Gilligan, C., Spencer, R., Weinberg, M. K., & Bertsch, T. (2003). On the listening guide: A voice-centered relational method. In P. M. Camic, J. E. Rhodes, & L. Yardley (Eds.), Qualitative research in psychology: Expanding perspectives in methodology and design (pp. 157–172). American Psychological Association. DOI: https://doi.org/10.1037/10595-009
Guba, E. G., & Lincoln, Y. S. (2005). Paradigmatic controversies, contradictions, and emerging confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (3rd ed., pp. 191–216). Sage.
Hampton, C., Reeping, D., & Ozkan, D. S. (2021). Positionality statements in engineering education research: A look at the hand that guides the methodological tools. Studies in Engineering Education, 1(2), 126. DOI: https://doi.org/10.21061/see.13
Holland, D. P., Walsh, C. J., & Bennett, G. J. (2019). A qualitative investigation of design knowledge reuse in project-based mechanical design courses. European Journal of Engineering Education (UK), 44(1–2), 137–152. DOI: https://doi.org/10.1080/03043797.2018.1463196
Howe, K., & Eisenhart, M. (1990). Standards for qualitative (and quantitative) research: A prolegomenon. Educational Researcher, 19(4), 2–9. DOI: https://doi.org/10.3102/0013189X019004002
Jamieson, L. H., & Lohman, J. R. (2009). Creating a Culture for Scholarly and Systematic Innovation in Engineering Education: Ensuring U.S. engineering has the right people with the right talent for a global society. American Society for Engineering Education Report.
Jordan, S. S., Foster, C. H., Anderson, I. K., Betoney, C. A., & Pangan, T. J. D. (2019). Learning from the experiences of Navajo engineers: Looking toward the development of a culturally responsive engineering curriculum. Journal of Engineering Education, 108(3), 355–376. DOI: https://doi.org/10.1002/jee.20287
Kellam, N., & Cirell, A. (2018). Quality considerations in qualitative inquiry: Expanding our understanding for the broader dissemination of qualitative research. Journal of Engineering Education, 107(3), 355–361. DOI: https://doi.org/10.1002/jee.20227
Koro-Ljungberg (Koro), M., & Douglas, E. P. (2008). State of qualitative research in engineering education: Meta-analysis of JEE articles, 2005–2006. Journal of Engineering Education, 97(2), 163–175. DOI: https://doi.org/10.1002/j.2168-9830.2008.tb00965.x
Lima, R. G. de, Lins, H. N., Pfitscher, E. D., Garcia, J., Suni, A., Guerra, J. B. S. O. de A., & Delle, F. C. R. (2016). A sustainability evaluation framework for Science and Technology Institutes: An international comparative analysis. Journal of Cleaner Production, 125, 145–158. DOI: https://doi.org/10.1016/j.jclepro.2016.03.028
Liu, Q. (2019). A snapshot methodological review of journal articles in engineering education research. Proceedings of the Canadian Engineering Education Association (CEEA). DOI: https://doi.org/10.24908/pceea.vi0.13795
Main, J. B., Camacho, M. M., Mobley, C., Brawner, C. E., Lord, S. M., & Kesim, H. (2019). Technically and tactically proficient: How military leadership training and experiences are enacted in engineering education. International Journal of Engineering Education, 35(2), 446–457.
McCord, R. E., & Matusovich, H. M. (2019). Naturalistic observations of metacognition in engineering: Using observational methods to study metacognitive engagement in engineering. Journal of Engineering Education, 108(4), 481–502. DOI: https://doi.org/10.1002/jee.20291
Minichiello, A., Marx, S., McNeill, L., & Hailey, C. (2019). Exploring student study behaviours in engineering: How undergraduates prepared textbook problems for online submission. European Journal of Engineering Education, 44(1–2), 253–270. DOI: https://doi.org/10.1080/03043797.2018.1474342
Mishler, E. (1991). Representing discourse: The rhetoric of transcription. Journal of Narrative and Life History/Narrative Inquiry, 1(4), 255–280. DOI: https://doi.org/10.1075/jnlh.1.4.01rep
Mobley, C., Main, J. B., Brawner, C. B., Lord, S. M., & Camacho, M. M. (2019). Pride and promise: The enactment and salience of identity among first-generation student veterans in engineering. International Journal of Engineering Education, 35(1(A)), 35–49.
Mukhtar, N., Saud, M. S., Kamin, Y., Al-Rahmi, W. M., Mohd Kosnin, A., Yahaya, N., Abd Hamid, M. Z., Abd Latib, A., & Nordin, M. S. (2019). Environmental sustainability competency framework for polytechnics engineering programmes. IEEE Access (USA), 7, 125991–126004. DOI: https://doi.org/10.1109/ACCESS.2019.2936632
Neejer, C. (2015). The American eugenics movement. In A Companion to the History of American Science (pp. 345–360). John Wiley & Sons, Ltd. DOI: https://doi.org/10.1002/9781119072218.ch27
Pawley, A. L. (2013). “Learning from small numbers” of underrepresented students’ stories: Discussing a method to learn about institutional structure through narrative. American Society for Engineering Education Annual Conference & Exposition. DOI: https://doi.org/10.18260/1-2--19030
Pawley, A. L. (2017). Shifting the “default”: The case for making diversity the expected condition for engineering education and making whiteness and maleness visible. Journal of Engineering Education, 106(4), 531–533. DOI: https://doi.org/10.1002/jee.20181
Pawley, A. L. (2019). Learning from small numbers: Studying ruling relations that gender and race the structure of U.S. engineering education. Journal of Engineering Education, 108(1), 13–31. DOI: https://doi.org/10.1002/jee.20247
Pawley, A. L., & Phillips, C. M. (2014). From the mouths of students: Two illustrations of narrative analysis to understand engineering education’s ruling relations as gendered and raced. Proceedings of the 2014 American Society for Engineering Education Annual Conference and Exhibition. DOI: https://doi.org/10.18260/1-2--20524
Pawley, A. L., Schimpf, C., & Nelson, L. (2016). Gender in engineering education research: A content analysis of research in JEE, 1998–2012. Journal of Engineering Education, 105(3), 508–528. DOI: https://doi.org/10.1002/jee.20128
Pembridge, J. J., & Paretti, M. C. (2019). Characterizing capstone design teaching: A functional taxonomy. Journal of Engineering Education, 108(2), 197–219. DOI: https://doi.org/10.1002/jee.20259
Riley, D. (2017). Rigor/Us: Building boundaries and disciplining diversity with standards of merit. Engineering Studies, 9(3), 249–265. DOI: https://doi.org/10.1080/19378629.2017.1408631
Riley, D., Slaton, A. E., & Pawley, A. L. (2014). Social justice and inclusion: Women and minorities in engineering. In A. Johri & B. M. Olds (Eds.), Cambridge Handbook of Engineering Education Research (pp. 335–356). Cambridge University Press. DOI: https://doi.org/10.1017/CBO9781139013451.022
Rulifson, G., & Bielefeldt, A. R. (2019). Learning social responsibility: Evolutions of undergraduate students’ predicted engineering futures. The International Journal of Engineering Education, 35(2), 572–584.
Sadikin, A. N., Mohd-Yusof, K., Aliah Phang, F., & Abdul Aziz, A. (2019). The introduction to engineering course: A case study from Universiti Teknologi Malaysia. Education for Chemical Engineers, 28, 45–53. DOI: https://doi.org/10.1016/j.ece.2019.04.001
Secules, S., Gupta, A., Elby, A., & Turpen, C. (2018). Zooming out from the struggling individual student: An account of the cultural construction of engineering ability in an undergraduate programming class. Journal of Engineering Education, 107(1), 56–86. DOI: https://doi.org/10.1002/jee.20191
Secules, S., McCall, C., Mejia, J. A., Beebe, C., Masters, A. S. L., Sánchez-Peña, M., & Svyantek, M. (2021). Positionality practices and dimensions of impact on equity research: A collaborative inquiry and call to the community. Journal of Engineering Education, 110(1), 19–43. DOI: https://doi.org/10.1002/jee.20377
Seiradakis, E., & Spantidakis, I. (2019). EFL engineering students’ research article genre knowledge development through concept mapping tasks: A qualitative interview-based study. Journal for the Study of English Linguistics, 7(1), 45. DOI: https://doi.org/10.5296/jsel.v7i1.14870
Slaton, A., & Pawley, A. (2018). The power and politics of engineering education research design: Saving the ‘small N.’ Engineering Studies, 10(2–3), 133–157. DOI: https://doi.org/10.1080/19378629.2018.1550785
Staller, K. M. (2013). Epistemological boot camp: The politics of science and what every qualitative researcher needs to know to survive in the academy. Qualitative Social Work: Research and Practice, 12(4), 395–413. DOI: https://doi.org/10.1177/1473325012450483
Steinmetz, G. (2005). The epistemological unconscious of U.S. sociology and the transition to post-Fordism: The case of historical sociology. In J. Adams, E. Clemens, & A. S. Orloff (Eds.), Remaking Modernity. Duke University Press. DOI: https://doi.org/10.1215/9780822385882-005
Streveler, R. A., & Smith, K. A. (2006). Conducting rigorous research in engineering education. Journal of Engineering Education, 95(2), 103–105. DOI: https://doi.org/10.1002/j.2168-9830.2006.tb00882.x
ten Caten, C. S., Silva, D. S., Aguiar, R. B., Silva Filho, L. C. P., & Huerta, J. M. P. (2019). Reshaping engineering learning to promote innovative entrepreneurial behavior. Brazilian Journal of Operations & Production Management, 16(1), 141–148. DOI: https://doi.org/10.14488/BJOPM.2019.v16.n1.a13
Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16(10), 837–851. DOI: https://doi.org/10.1177/1077800410383121
Tyson, T. N., & Oldroyd, D. (2019). Accounting for slavery during the Enlightenment: Contradictions and interpretations. Accounting History, 24(2), 212–235. DOI: https://doi.org/10.1177/1032373218759971
Valentine, A., Belski, I., Hamilton, M., & Adams, S. (2019). Creativity in electrical engineering degree programs: Where is the content? IEEE Transactions on Education, 62(4), 288–296. DOI: https://doi.org/10.1109/TE.2019.2912834
Van Maanen, J. (2011). Tales of the field: On writing ethnography (2nd ed.). University of Chicago Press. DOI: https://doi.org/10.7208/chicago/9780226849638.001.0001
Waller, A. A. (2006). Special session—Fish is fish: Learning to see the sea we swim in: Theoretical frameworks for education research. IEEE Frontiers in Education Conference, 1–2. DOI: https://doi.org/10.1109/FIE.2006.322605
Walther, J., Pawley, A., & Sochacka, N. (2015). Exploring ethical validation as a key consideration in interpretive research quality. ASEE Annual Conference & Exposition. DOI: https://doi.org/10.18260/p.24063
Walther, J., Sochacka, N. W., Benson, L., Bumbaco, A., Kellam, N. N., Pawley, A. L., & Phillips, C. M. (2017). Qualitative research quality: A collaborative inquiry across multiple methodological perspectives. Journal of Engineering Education, 106(3), 398–430. DOI: https://doi.org/10.1002/jee.20170
Walther, J., Sochacka, N., & Kellam, N. (2013). Quality in interpretive engineering education research: Reflections on an example study. Journal of Engineering Education, 102(4), 626–659. DOI: https://doi.org/10.1002/jee.20029