
Empirical Research

A Methodological Roadmap for Phenomenologically Based Interviewing in Engineering Education: Identifying Types of Learning in Makerspaces

Authors:

Megan Tomko, Georgia Institute of Technology, US
Melissa Aleman, James Madison University, US
Wendy Newstetter, Georgia Institute of Technology, US
Julie Linsey, Georgia Institute of Technology, US
Robert Nagel, James Madison University, US

Abstract

Background: The complex nature of engineering education requires methodologies that enable insight into dynamic and multifaceted phenomena.

Purpose: This paper describes a roadmap for applying phenomenologically based interviewing as an approach for understanding engineering students’ lived academic experiences.

Scope: To demonstrate the methodology in practice, the paper offers an exemplar of its use in a study of women’s experiences learning through making in academic makerspaces. Specifically, the example showcases the power of this methodology for generating cognitive and behavioral typologies in engineering education research.

Discussion/Conclusions: Phenomenologically based interviewing holds potential to generate rich datasets for discovery across a wide range of areas of interest in engineering research. The lived experiences of women students illuminate both the breadth and depth of the learning in which they engage as participants in academic makerspaces. A wide variety of complex phenomena and understudied populations can be examined in engineering, such as the influence of past experiences, the development of typologies, and changes in team processes over time. For the example presented, the interviews resulted in a typology of learning through making.

How to Cite: Tomko, M., Aleman, M., Newstetter, W., Linsey, J., & Nagel, R. (2022). A Methodological Roadmap for Phenomenologically Based Interviewing in Engineering Education: Identifying Types of Learning in Makerspaces. Studies in Engineering Education, 2(1), 100–118. DOI: http://doi.org/10.21061/see.32
Submitted on 16 Jan 2020; Accepted on 29 Nov 2021; Published on 25 Feb 2022

Introduction

Qualitative interviewing processes are important for engineering education as they highlight context as a critical feature of the phenomena studied, and they enable transferability with findings explained through thick descriptions of behaviors and contextually driven explanations (Borrego et al., 2009; Geertz, 1973; Hoepfl, 1997; Van Note Chism et al., 2008). In engineering education, in-depth interviews have been used to provide insights into conceptual change associated with students’ understanding of mechanics of materials (Brown et al., 2018), students’ perceptions of their experiences of engineering problem solving (Kirn & Benson, 2018) and problem-based learning (Dahlgren, 2003), and the learning experiences of Taiwanese women studying engineering (Chou & Chen, 2015).

Although many qualitative researchers productively use single interviews following semi-structured protocols, the depth and complexity of participant experiences may call for alternative approaches. To that end, in this paper we describe a qualitative interviewing process based on Seidman’s (2006) phenomenological, in-depth interview process, which relies on a sequence of three guided interviews. To illustrate this approach, we describe a case study that uses Phenomenologically Based Interviewing (PBI), coupled with more traditional qualitative coding methods, to understand interviewees’ lived experiences as they relate to learning and practicing engineering. Specifically, the methodology is applied to explore women’s experiences of learning through making in academic makerspaces to show the methodology’s broader potential for qualitative research designs in engineering education. This paper uses a layered format in which the case example of women’s learning in makerspaces immediately follows the description of each step in the method so that readers can view the application of PBI at each stage. Passages specific to this case example are italicized for clarity, beginning with the rationale for the use of PBI for studying women’s learning through making.

We found this PBI methodology to be particularly valuable in engineering education as it mirrors how engineering students contextualize their work, providing key insights into students’ complex and abstract lived experiences, enabling understanding of learning processes as the students perceived them unfolding, and allowing students to share the meanings they connect with those processes. We originally employed PBI to understand the types of learning (Tomko, 2019) that occur through making and in makerspaces and the differences in learning across different engineering-centered makerspaces (Saracino, 2021; Saracino et al., 2021). We chose PBI because we believed that the rapport-building aspects of in-depth, three-part interviews would be necessary for answering our research questions. Our rich, contextualized data gave insights into significantly more than expected: the gendered expectations serving as barriers in women’s life narratives related to making (Tomko et al., 2021); the importance of engaging and immersive catalysts for creating making opportunities, especially for women (Tomko et al., 2021; Tomko et al., 2020); the role of verbal and nonverbal affirmations in fostering a supportive making community (Tomko et al., 2020); the importance of structured and unstructured opportunities to explore creative pursuits pre-college; and the role of community, both during college and before (Tomko, 2019; Tomko et al., 2021; Tomko et al., 2020). It is through this lens of unexpected discovery that we share PBI and illustrate its use in engineering education herein.

Case Example: Using PBI to Study Learning through Making

Phenomenologically Based Interviewing (PBI) is particularly useful for exploring research questions seeking to understand phenomena through individuals’ lived experiences in complex, and at times, unknown environments. For the following reasons, this methodology enabled the authors to identify the types of learning women students experience through making and in makerspaces:

  • As informal, interactive, collaborative, self-paced, and problem-based learning environments (Halverson & Sheridan, 2014; Lande & Jordan, 2014; Litts, 2015), makerspaces are complex environments to study.
  • While billed as democratized learning spaces (Dougherty, 2012; Hatch, 2014), there remain powerful societal constructions that assign gender to making and designing (Meyer, 2018). Men make up 81% of the maker movement, and women makers are an underrepresented and understudied population.
  • Little empirical evidence exists (Weiner et al., 2018) that shows the value of making experiences and makerspaces for the professional development of STEM students in higher education, thus more targeted quantitative approaches are difficult to deploy effectively.

Consequently, the authors sought a methodology that matched the complexity of these adaptive, dynamic, and interactive learning environments and arrived at phenomenologically based interviewing.

The overarching goal of this paper is to provide engineering education researchers with a methodological roadmap for data collection grounded in phenomenologically based interviewing as well as to champion the value of qualitative research for research endeavors in engineering education. Toward this goal, the methodological roadmap is interwoven with the authors’ experiences using phenomenologically based interviewing to demonstrate its valuable use and application in engineering education research.

Background

Qualitative Research

There are numerous characteristics of qualitative research that both showcase its value for building knowledge and highlight how it differs in important ways from the assumptions and goals that underlie quantitative studies. First, qualitative research seeks to provide rich descriptions and understanding of phenomena as a means to create transferable knowledge that can be applied to other research sites (Hoepfl, 1997), in contrast to the primary goal of generalizability that guides quantitative studies. Because of the need for both complex and resonant descriptions, in-depth interviews, focus groups, observations, and textual analysis of open-ended questions in surveys are commonly employed (Borrego et al., 2009; Golafshani, 2003; Leydens et al., 2004; Patton, 2002; Van Note Chism et al., 2008). Second, qualitative research values creating knowledge in situ, highlighting the context as a critical feature of the methodology (Borrego et al., 2009; Van Note Chism et al., 2008). The contextually driven nature of qualitative methodologies requires a detailed understanding of the setting in which the phenomenon is occurring. As a result, systematic qualitative data collection and analysis are time-consuming, specifically due to the iterative processes involved in generating and refining context-appropriate research questions, ensuring procedural and relational ethics, and acquiring sound and complete findings from the data. Third, findings in qualitative research, and most specifically ethnographic work, center the voices of the participants over the researcher’s interpretations to enhance the readers’ deep understanding of the phenomenon being examined from the community members’ perspectives. Such rich descriptions enable in-depth insights into people’s lived experiences, which is particularly important when exploring new environments and understudied populations whose voices may be lost or marginalized.

Stories and Sensemaking through Interviews

Interviewing is a particularly effective method for gaining rich insights into people’s lived experiences by eliciting stories of a person’s “lived world” (Kvale, 1996, p. 4) through a “guided conversation” in which one person asks questions and actively listens, and another responds (Rubin & Rubin, 2012). Stories elicited in the context of interviews help researchers to understand the ways that people make sense of phenomena and organize their experiences. Indeed, a story is a way of knowing, and the act of telling a story prompts meaning-making (Seidman, 2006). Interviews also situate the voices of the participants and their stories within a context. The participants’ experiences placed in the context of their personal narratives inform the meaning and reasons behind their engagement in a particular phenomenon. Then, “by putting together descriptions from separate interviewees, researchers create portraits of complicated processes” (Rubin & Rubin, 2012, p. 3).

Phenomenologically Based Interviewing (PBI)

Phenomenologically Based Interviewing (PBI) is a particularly useful methodology for examining complex issues in engineering education, such as learning. PBI is an iterative interviewing process that uses open-ended questions aimed at inviting participants to reconstruct experiences that pertain to a specified topic of interest and reflect upon the meanings of those experiences. The interviewing methodology described in this paper is adapted from Seidman’s (2006) in-depth phenomenologically based interviewing approach, a process which includes a series of three 90-minute interviews with each participant in a study. Seidman’s approach has been widely used to study educational contexts, including the experiences of first-year English teachers (Cook, 2009), the experiences of both ESL teachers and ESL students (Gabriel, 1997; Young, 1990), and the experiences of student teachers (Compagnone, 1995; O’Donnell, 1989), among others.

This form of interviewing is particularly useful for generating rich, in-depth accounts of the lived experiences of understudied and marginalized populations (Seidman, 2006). For example, it has been used to illuminate the unique experiences of understudied or underrepresented groups such as African American performance artists and Black jazz musicians who teach at the collegiate or university level (Hardin, 1987; Jenoure, 1995), and gendered issues in student teaching (Miller, 1997). As previous research demonstrates, PBI proves useful for exploring topics in educational contexts, highlighting the experiences of understudied or underrepresented populations, and unpacking complex phenomena. Our research questions about learning in makerspaces matched well with three-part phenomenological interviews. Specifically, we asked: How are academic makerspaces supporting learning for women students? From this primary question, the following three questions were also posed.

  1. How are women students navigating pathways into and through makerspaces?
  2. What different types of learning are reported by women users in academic makerspaces?
  3. How are women students’ making competencies developing?

In-Depth Phenomenologically Based Interviewing (PBI)

In-depth Phenomenologically Based Interviewing (PBI) as articulated in Irving Seidman’s (2006) Interviewing as Qualitative Research is a specific process of open-ended reflexive interviews designed to “[explore] complex issues in the subject area by examining the concrete experience of people … and the meaning their experience had for them” (Seidman, 2013, p. 15). This reflexive interviewing method couples the frameworks of life history interviewing (Bertaux, 1981) with in-depth interviewing based in Alfred Schutz’s phenomenology. It is through eliciting participants’ life stories about a particular phenomenon of interest that researchers can garner an understanding of the complex “stocks of knowledge” (Schutz & Luckmann, 1973, p. 7) and the tacit, taken-for-granted meanings associated with a particular setting. In short, PBI illuminates the processes and meanings through which individuals come to have particular forms of knowledge in a given context and how that knowledge enables them to participate successfully with others within that cultural context. In the following, we briefly describe the contributions of life history interviewing and Schutz’s phenomenology to PBI to better understand the assumptions that underlie processes of data collection and analysis.

Life history interviews, oral histories, and life story interviews hold one thing in common: they focus on and honor the subjective experiences of the storyteller. Quite simply, “life history interviewing is a research method that is designed to record an individual’s biography in his or her own words … Life histories provide a means of accessing people’s narrative accounts of their lives and of the changes that have occurred within living memory” (Jackson & Russell, 2010, p. 172). Interviews that focus on the storytellers’ accounts of their lives are particularly helpful toward understanding how one’s identities are shaped over time (Atkinson, 2007; McAdams, Josselson, et al., 2006) and have been used in educational research to understand identity development (McAdams & Guo, 2014), as well as to understand the gendered experiences of engineering students (Walker, 2001). Life story interviews are often structured around major turning points in a person’s journey, creating opportunities for interviewees to explore each chapter of their life reflectively (McAdams & Guo, 2014)—what McAdams et al. (2006) call “the guided autobiography.”

Life story interviewing complements the meaning-centered roots of phenomenology. Alfred Schutz argued that our social realities are multiple and contextually driven. Eberle (2014) describes Alfred Schutz’s ontology that underlies how social scientists come to describe meaning-making experiences:

And every (“normal”) actor on earth has a subjective, biographically determined stock of knowledge at hand; uses (linguistic and pre-linguistic) typifications and is guided by systems of relevances; orients in time and space; and relies on systems of appresentation in order to understand others or relate to multiple realities (Eberle, 2014, p. 187).

Schutz (1970) argues that the processes of reflection are what allow humans to make sense of and give meaning to their past experiences. “Phenomenal experience is, therefore, never of oneself behaving, only of having behaved” (Schutz, 1970, p. 67). Such retrospection allows humans to define their experiences as separate from other experiences and important to their understandings of self, other, and their life world.

The in-depth PBI method uses three consecutive 90-minute interviews designed to evoke a person’s lived experiences retrospectively as first-hand narratives through an open-ended, semi-structured protocol. Interviews should be recorded, and the resultant data are the verbatim interview transcriptions. To this methodology, we put forward three adaptations: an artifact to elicit discussion and provide contextual support, a timeline for reaffirming and reframing discussion of prior lived experiences, and the use of shortened, targeted interviews following the in-depth, three-part, 90-minute interviews.

The Interviews

Each of the three 90-minute interviews focuses on different aspects of an individual’s lived experience as they pertain to a specific topic of interest. The individual is prompted to reconstruct their lived experiences retrospectively to develop an in-depth portrait of their personal narrative, following which directive questions are posed: the interviewer points to different moments within the participant’s reconstructed narrative and inquires about their specific relevance to the topic of study. Seidman argues that the three interviews should be conducted in relatively close proximity to one another, over the course of two to three weeks, to provide both the opportunity for a sense of continuity and time for reflection between each interview. In each of the following paragraphs, the processes of PBI as outlined by Seidman (2006) are described and presented as we have adapted them for the study of engineering education.

Case Example: The in-depth PBI process was selected in order to cultivate a relationally centered interview experience that enabled both the interviewer and the interviewee to develop rapport over an extended period of time thus creating a data collection process that was characterized by mutuality, vulnerability, and deep reflection on experiences. This relationship-centered experience was particularly important in creating the space for often unheard stories and experiences of women makers, an added value for studying understudied populations. The in-depth PBI process examining women’s learning through making occurred over a two-month period. To minimize distractions, interviews were conducted on campus in a private room with a simple layout. Each was recorded with a handheld device following participant consent. Interviews were organized around each of our three pre-determined themes: (1) Interview One – life history; (2) Interview Two – details of making experiences; (3) Interview Three – meanings of making experiences. The interview prompts were focused on helping participants craft their retrospective narratives around making and learning experiences, particularly in makerspaces, and were formatted to be open-ended and generative. Five women participated in the study, with each participant’s total set of interviews occupying approximately 4.5 hours.

First Interview

The first interview in the PBI process investigates an individual’s life history in relation to the phenomenon of interest with participants being asked to reconstruct the experiences that have led to their current situation in regard to the phenomenon of interest. In order to gain insight into lived experiences from a life history standpoint, the interview is centered on how rather than why; a focus on how allows for the participant to openly describe their experiences, whereas a focus on why would confine the scope of the interview, suggest a preconceived objective, and prevent recollection and reflection on experiences. Phenomenological interviews are guided by a series of prompts or “jottings” (Seidman, 2013), rather than questions. The “jottings” serve as topics for discussion.

Case Example: The first interviews with women makers were thus designed to be guided by the experiences of the participant and set the groundwork for the interviewer-interviewee relationship. During the interviews, the researcher continued to bring the participant back to the meaning of their making experiences. The following prompts were among the types used to direct this first interview:

  • How did you get involved?
  • Growing up, what was it like for you creating or making things?
  • What attracts you to this space or type of space?
  • What inspires you to use the space?
  • Who are some people who have influenced you?

In the interviews, one participant was asked how early childhood experiences in making and crafting contributed to her experiences at the university.

Response: It gave me more confidence, I would say. I like being in the know, as most people do, but I just remember the tools training we did freshman year with the trebuchets – I was sort of helping everyone out because they’re trying to use the jigsaw, and they’re like, “Ah!” and I’m like, “I’ve been doing that for years.” So, I would say it gave me like, confidence, definitely. I could go in there, and I was like, “Oh, I already know how you did that.” And it also made me feel kind of powerful helping out guys in the woodshop.

Second Interview

The second interview explores an individual’s most recent lived experiences and encourages the reconstruction of these experiences through story. In the context of engineering education, an interviewer should strongly consider asking a participant to bring an artifact to the interview. While artifact reflections are not part of the standard protocol as described by Seidman’s (2006) original PBI methodology, the artifact is a particularly valuable talking point for launching a discussion about engineering education related topics and allows for greater contextual support for the participants’ descriptions. This modification is particularly useful for engineering design education, where we have often found an emphasis on processes and prototyping – a finding supported by Douglas et al. with artifact elicitation interviews of self-declared makers recruited during the San Francisco Bay Area Maker Faire (Douglas et al., 2015). We found with PBI that artifacts work well for informing follow-up questions, thereby inviting the participant to engage in a more thorough, reflective account of the meaning of the phenomenon of interest.

Case Example: Participants brought a variety of artifacts to the second interview demonstrating previous experiences and personal meaning through making; a few examples are provided in Figure 1. These artifacts allowed the interviewer to ask generative elicitation questions (Tracy, 2019) that invited storytelling about participants’ making and deepened our understanding of the contexts, processes, and products of their making experiences.

Figure 1. Example artifacts brought to the second interview.

Third Interview

The third interview highlights an individual’s reflection on their lived experiences. Because talking about an experience elicits meanings (Vygotsky, 1987), meaning-making inherently occurs within both the first and second interviews as the participant describes their past and more recent experiences. The narrative that develops in the first and second interviews establishes a foundation for the participant to reflect on their lived experiences. In the context of engineering education, the interviewer may find it particularly helpful to open the interview by asking the participant to draw a timeline on paper of their experiences around the phenomenon of interest. Like the artifact in the second interview, the timeline is an addition to Seidman’s protocol specific for engineering education research. Kolar et al. (2015) outline a process for employing timelines during in-depth interviews, suggesting that one should first describe a timeline to the participant and then ask participants to draw their own timeline depicting their experiences. Similar to Kolar et al. (2015), we have found that a timeline creates a starting discussion point that helps reiterate and reaffirm lived experiences as well as fill in gaps missed in the verbal narrative of the first and second interviews.

Case Example: For interviewees, the timeline used in the third interview provided a framework to guide their reflection and leaned into the chronological strategies used in life story interviews described earlier in the paper. For the interviewer, the concrete timelines provided a means to prompt discussion of previous experiences and history of making, beginning with childhood and progressing through the present. This enabled an interviewing process that circled back to many of the stories told in the first two interviews and invited reflection on the participants’ meanings of those concrete experiences. Figure 2 is one such example from an interview participant.

Figure 2. Recreated timeline developed during a third interview.

Single-targeted Interview

The in-depth PBI process is time-consuming and costly. A research team would need 90+ interview hours to obtain 20 complete three-part interviews (20 participants × three 90-minute interviews = 90 hours)! To mitigate this challenge, the process may be adjusted to include shortened, single-targeted interviews. These focused 60- to 90-minute interview protocols can be used after an initial data set is collected using the three-series interviews described above. These single-targeted interviews can then inform and bolster the interpretations and themes that emerge from the three-series PBIs conducted with a smaller sample size and provide an important opportunity to refine and fill gaps in the interpretations. This protocol adapts the original to draw a concise, yet thorough narrative from the participants: it begins by asking participants to draw a timeline on paper of their experiences around a phenomenon of interest, moves toward clarifying their engagement in the phenomenon, and finally invites them to share experiences involving artifacts through pictures on their phones. While these targeted interviews limit the time the interviewer and interviewee have together to cultivate a relationship, the design of the interviews still centers the rapport building and reflection that are a central feature of PBI and at the heart of our goal to create space for women to share their experiences.

Case Example: Following initial analyses of five sets of three-series PBIs, an additional 15 women participated in single-targeted interviews. In these more focused interviews, participants constructed a timeline of their making experiences, shared photos of artifacts they had made, and reflected on their experiences learning in makerspaces. The targeted, one-hour interviews sought to confirm the interpretations (a typology) generated through initial data analyses of the first more focused data set and to identify any gaps in the existing interpretations. The one-hour interviews provided a succinct approach for obtaining saturation of themes in the typology once the research became less exploratory and experiences of the key phenomena of learning through making became clear.

Transcription of all 20 interviews was outsourced. Upon receiving the transcriptions, the interviewer checked the accuracy of each by listening to the corresponding recording, correcting errors, and filling in un-transcribed utterances or jargon.

The Interviewer

One of the goals of phenomenological interviewing is to create rapport and build trust between the interviewer and the interviewee. Reflexive interviews are enriched when the interviewer-participant relationship is delicate and deep, and the participant feels safe and brave enough to reconstruct their experiences vulnerably. Through this process, the interviewer and participant work together to co-create the participant’s story. This relationship is maintained during the entire three-part interview process and is ended respectfully upon completion of the interview(s) (Dexter, 1970; Mischler, 1986). The deeply personal relationship that ideally develops between the interviewer and the participant necessitates that the interviewer be clearly described in the presentation of the research as a critical component of the research methodology.

Developing this shared safety and vulnerability is critical for establishing trust. Thus, it is important for the interviewer to take the time to inform the participant of the research and to allow the participant to ask questions before consenting. Typically, in these three-part interviews, the interviewer assumes the role of the main researcher: establishing the interview protocol, interviewing the participants, and analyzing the data (Seidman, 2006). In this way, the interviewer is fully immersed in the data and understands its underlying nuances. If, however, the research team cannot maintain one researcher as the interviewer for the full research design (as is the case for multi-university studies), the research team should ensure that only one researcher collects the three 90-minute interviews in a set and should keep the number of interviewers to a minimum. The interviewers should have similar training and communicate with one another to ensure consistency of practices across the interviews.

Case Example: Interviews were conducted by a woman graduate student in her mid-twenties who, at the time of the interviews, was studying mechanical engineering at a large public university in the South. She holds a BS in mechanical engineering from a different public university in the Northeast. Her training in qualitative research methods spanned three years and included studying and implementing various qualitative and ethnographic methods, taking one course on survey methodology and two courses on qualitative research methods, and working with four different qualitative researchers. The interviewer’s youthful appearance (often being confused for a first-year student), her coy personality, and the gender identity she shared with the participants enabled her to relate to participants and develop rapport based on shared social locations. Her curiosity about women and learning via making in makerspaces emerges from her own lack of hands-on making experiences in engineering, which she vulnerably and readily revealed in initial conversations with participants as she overviewed the background and goals of the study. Rapport is an ongoing interpersonal process created over time in each interview and deepened by vulnerability, mutuality, and deep listening. The interviewer demonstrated active listening by asking follow-up questions designed to elicit greater depth, paraphrasing statements to ensure understanding, and referencing specific aspects of the participants’ narratives. She finds great inspiration in the narratives and lived experiences of women engineering students who are making in makerspaces.

The Participants

PBI requires the selection of participants who are poised to offer the greatest insight into the meaning of the phenomenon under investigation. Purposeful maximum variation sampling is one useful technique that enables the interviewer to identify a heterogeneous group of participants who (or sites which) meet the criteria for the inquiry of interest. In purposeful sampling, cases are selected based on their potential to provide rich information regarding a certain topic, as per the available resources (Palinkas et al., 2015; Patton, 2002). Paired with purposive sampling, maximum variation sampling pertains to selecting sites and/or people (Tagg, 1985) that are more fully representative of the heterogeneity of the larger population and will provide relatability to a wide audience (Seidman, 2006). In applying maximum variation sampling, one must first define what is meant by “maximum range” in terms of population and/or sites, followed by defining who is in the population and which sites are of interest, with the goal of identifying a wide range of variation or difference across a single population (Seidman, 2013, p. 56). Seidman (2013) warns that if, through this process, the population or sites become unmanageable, then further clarification is required.

Often snowball sampling will be necessary to recruit populations that are difficult to reach, such as underrepresented student populations in engineering or students highly involved in a particular activity. Snowball sampling is the process of initial informants referring the researcher to other individuals who would meet the criteria of eligibility for a study (Morgan, 2008). With participant selection, the goal is to excavate the range of possible experiences of a certain phenomenon until there are enough cases to reach theoretical saturation. Theoretical saturation is reached when analysis of data no longer yields new meanings (Douglas, 1976; Glaser & Strauss, 1967; Tracy, 2013).

Case Example: The interviewer recruited women who were highly involved in making at the different university makerspaces through purposeful maximum variation sampling. Following maximum variation sampling, the sites were defined broadly as all formally recognized makerspaces on campus, and women were chosen who had different sets of interactions with these varied campus makerspaces. Women were selected who identified as being of different social locations (e.g., nationality, ethnicity) as well as who represented different majors and academic cohorts on campus. Snowball sampling further allowed the interviewer to identify new potential interviewees through the referrals of previous participants. These approaches were chosen due to factors that made it challenging to identify the true size of the population: 1) women students may hold different perceptions of the label “highly involved,” 2) students tend to have different labels for what they deem as a makerspace, and 3) the population of women highly involved in the university’s makerspaces is visibly small.

The Data Analysis

Although PBI is most often thought of in the context of phenomenological analysis, there is great value in pairing PBI as the data collection process with methods of data analysis most commonly used in qualitative social science, particularly when working with research teams in which multiple team members may be analyzing the data. This departs from standards of practice in PBI approaches to analysis, which focus primarily on meanings interpreted by the interviewer. Such a combination retains the benefits of the open and relational spaces created in the PBIs, which empower participants to reflect deeply on their life stories in relation to a given topic of interest, while also enabling researchers to conduct micro-level analyses of the language that participants use to talk about a subject, the narrative features of their stories, and the manner in which their reflections are connected to larger discourses in a given field (such as engineering or making), among others. When using this combined approach, data analysis begins as soon as the first interview is transcribed and follows an iterative process that results in the construction of a typology composed of an arrangement of categories or “types” as they pertain to a certain phenomenon of interest. The following steps reveal the processes for constructing a typology from PBI data. It is important to note that while the steps appear in a linear fashion, the procedures themselves are iterative and continue as each new interview is collected.

Data Immersion

Qualitative analysis begins as soon as the first set of interviews is completed and transcribed, starting with the interviewer reviewing the full set of transcripts several times to familiarize themselves with the data and gain a holistic perspective. The purpose of the data immersion phase is for the researcher(s) to become familiar with the data and understand its nuances. During initial data immersion, the researcher makes analytic memos noting points of interest; these aid later with data interpretation. Data immersion is necessary following each set of transcribed interviews. When working in a research team, each member of that team should also engage in the data immersion phase.

Coding

Data are analyzed through multiple cycles of coding (Saldaña, 2016), where coding is the process of interpreting meanings in the data by assigning a label or category to “chunks” of data. A code is “most often a word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data” (Saldaña, 2016, p. 4). Saldaña’s (2016) The Coding Manual for Qualitative Researchers is particularly useful for researchers combining PBI with qualitative coding, as he combines and highlights multiple frameworks for qualitative coding, creating names for coding practices for “clarity and flexibility’s sake” (p. 2). As Saldaña writes:

Coding is just one way of analyzing qualitative data, not the way. Be cautious of those who demonize the method outright. And be equally cautious of those who swear unyielding affinity to codes or what has been colloquially labeled “coding fetishism.” I prefer that you yourself, rather than some presumptive theorist or hardcore methodologist, determine whether coding is appropriate for your research project. (2016, p. 3, emphasis in original)

As such, researchers combining PBI with qualitative coding are encouraged to identify the analytical processes and types of coding that best suit their research questions (see Saldaña, 2016). In this paper, we highlight two types of coding that he offers as a means of moving back and forth between examining data at a micro-level and looking at how those data fit into broader categories or types: primary cycle coding and secondary cycle coding. The process of coding toward the development of a typology follows multiple iterative phases or steps of primary and secondary cycle coding.

  • Primary cycle coding is an exploratory process that breaks down the data into distinct parts while examining these parts for similarities and differences (Saldaña, 2016, p. 4). It is a process of extracting and investigating attributes within the data, and one should remain open to the possibility that their interpretations of the data may lead to a number of potential theoretical directions in often unanticipated ways (Charmaz, 2014).
  • Secondary cycle coding expands on the primary cycle coding, collects the codes, and reorganizes the codes to eliminate, converge, and compare the attributes in the data so as to build categories.

This process of identifying codes, and eventually themes, in both cycles is completed through the iterative process of constant comparison, and “lumping” and “fracturing” data (Tracy, 2013). Typology construction often moves back and forth between the micro-level categories identified in primary cycle coding and the broader categories – or types – identified through secondary cycle coding.

Case Example: In order to gain a sense of the data and to identify the emergent categories of learning, the interviewer immersed herself in the data by reading the datasets several times. Second, she began the process of primary cycle coding, seeking to answer the question “what is this participant learning?” Primary cycle coding produced an extensive list of categories of learning, as the process is meant to “open up” an understanding of the data. First-level codes reflected characteristics of learning such as “hands-on,” processes of learning such as “problem-solving” and “prototyping,” and contents of learning such as “safety” and “machines,” among others.

After coding initial datasets, the interviewer grouped relationally similar dimensions and attributes together through the processes of secondary coding, seeking to group the first level codes into qualitatively similar types of learning. For example, attributes such as “hands-on,” “making mistakes,” “tinkering,” and “experimenting,” along with others, were grouped together in a relational set and labeled with the second-level code, or type “Learning by Doing.” Following this initial coding of the data, the interviewer and two members of the team then independently reviewed a sample of the data (approximately 10 percent of dataset one, as suggested by Campbell et al., 2013). The three researchers discussed the emergent codes (categories of learning) through a series of collaborative sessions to determine whether the emergent codes were fully addressing the research questions.
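For research teams that track their codes computationally (for example, outside of dedicated qualitative analysis software), the movement from primary to secondary cycle coding can be mirrored in a simple data structure. The following Python sketch is illustrative only: the excerpt identifiers and the mapping of first-level codes to types are hypothetical assumptions, with labels borrowed from the case example, and the interpretive work of grouping codes remains with the researchers.

from collections import defaultdict

# Hypothetical first-level codes assigned to transcript excerpts
# during primary cycle coding (labels borrowed from the case example).
primary_codes = {
    "excerpt_01": ["hands-on", "machines"],
    "excerpt_02": ["making mistakes", "tinkering"],
    "excerpt_03": ["experimenting", "safety"],
}

# Secondary cycle coding: relationally similar first-level codes are
# grouped under broader types (this mapping is an illustrative assumption).
type_of = {
    "hands-on": "Learning by Doing",
    "making mistakes": "Learning by Doing",
    "tinkering": "Learning by Doing",
    "experimenting": "Learning by Doing",
    "safety": "Cultural Knowledge and Skills",
    "machines": "Cultural Knowledge and Skills",
}

# Regroup excerpts by type to compare attributes within each category.
excerpts_by_type = defaultdict(list)
for excerpt, codes in primary_codes.items():
    for code in codes:
        excerpts_by_type[type_of[code]].append((excerpt, code))

for learning_type, items in sorted(excerpts_by_type.items()):
    print(learning_type, "->", items)

A structure like this makes it straightforward to ask, for each emerging type, which excerpts and first-level codes support it, which is the comparison at the heart of secondary cycle coding.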

Triangulation

While PBI data analyses as outlined by Seidman are typically conducted by the interviewer, who is fully immersed in the relationship from the data collection to the analysis of meanings (see Seidman, 2013), when combining PBI data collection with qualitative coding on a research team, project interpretations can benefit from triangulation. Specifically, while each set of the 90-minute interviews should be collected by a single researcher, having multiple researchers analyze data “allow[s] different facets of problems to be explored, increases scope, deepens understanding, and encourages consistent (re)interpretation” (Tracy, 2010, p. 843). Given the volume and complexity of the data, this collaboration benefits from the careful processes described in the following steps:

  1. The complete set of transcribed interviews should be read by the interviewer to build a holistic framework for data analysis, write analytic memos, and begin the first cycle of open coding.
  2. As broader themes or categories begin to emerge during the initial coding stage, the interviewer should meet with co-researchers to consult with them on the emerging themes. The co-researchers should also read the full set of transcripts and create analytic memos in the data immersion phase.
  3. The research team should discuss the initial categories of the emerging typology and ask critical clarifying questions of each category in order to refine, distinguish, and test examples from the data against types.
  4. The interviewer returns to the next set of transcripts and repeats this process of coding and collaborative discussion with co-researchers until no new types emerge in the data, reaching theoretical saturation.

Case Example: The interviewer and a member of the research team refined the first level codes by revisiting the same data set. The researchers discussed newly emergent codes and conflicts, and the interviewer consolidated and organized the first level codes through another round of coding, which resulted in an initial typology (a coding scheme that could be applied to additional data collected).

To further refine the coding scheme, the interviewer and a member of the research team independently coded a second interview set using the previously refined coding scheme and developed new codes as necessary. The researchers’ subsequent discussion about their coding helped to collapse and refine categories in the coding scheme to create clearer qualitative distinctions between categories. Having multiple coders apply the coding scheme enabled further refinement of the definitions of the codes, particularly as a result of disagreements in coding.

Intercoder Metrics

Intercoder reliability is particularly valuable when creating a typology that may be used by other researchers to analyze new sets of data, enhancing the applicability and transferability of the constructed typology. Intercoder reliability refers to the ability of two or more coders to select the same code for the same sample of text, given that the coders work in isolation from one another and are considered to be equally capable (Campbell et al., 2013). Intercoder agreement is the ability of the same two or more coders to reconcile the discrepancies in their codes through discussion (Campbell et al., 2013). Intercoder metrics may assist the research team in reaching systematic agreement on the categories in the coding scheme and help to create clear definitions of codes (or types) that can be applied to other data sets by other researchers. It is a particular challenge for multiple coders to develop similar perceptions of such complex and unstructured data, whose richness is often recognized as a strength of qualitative research; intersubjectivity – agreed or shared meanings between persons – can nevertheless be achieved and evidenced (e.g., Marques & McCall, 2005). To reach agreement, first, it is critical that any researcher coding the data read a qualitative data set in its entirety to understand the context and gain a holistic perspective.

Second, the data should be unitized by the primary interviewer for training new coders in how to use the coding scheme (Campbell et al., 2013; Miles & Huberman, 1984). Unitizing is a process in which the data are broken down into “units of analysis”; in unitizing interview data, the researcher may indicate that a full answer, or part of an answer, is a unit of analysis to be coded. Third, training coders to code data using an inductively derived typology involves multiple iterations of coding parts of the unitized data, discussing points of disagreement, reconciling disagreement, refining rules for coding, and coding another set of data. This training repeats until the coders have a shared understanding of the codes and can reliably (e.g., 80% agreement) code the same data. McAlister et al. (2017) may be referenced for a detailed description of this methodological process.
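To make the notion of a unit concrete, one simple convention (a hypothetical one; unitizing decisions are ultimately the researcher’s) is to treat each interviewer question together with the answer that follows as one unit. A minimal Python sketch under that assumption:

def unitize(transcript):
    # Hypothetical unitizing convention: each interviewer question
    # ("Q:") plus the participant talk that follows it forms one unit.
    units, current = [], []
    for line in transcript.splitlines():
        if line.startswith("Q:") and current:
            units.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        units.append("\n".join(current))
    return units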

While intercoder metrics may satisfy some engineering education researchers’ calls for ensuring reliability of the data analysis process, they must be used with caution. When working from an interpretive lens, intercoder metrics are not necessary to demonstrate the credibility of the research, and in some cases may even threaten the potential benefit of the multiplicity of interpretations on a research team. Instead of reliability, other standards of trustworthiness are far more important in assessing qualitative research methods such as PBI. Tracy (2010) offers a comprehensive framework for evaluating the quality of qualitative research, taking into consideration Lincoln and Guba’s (1985) foundational quality criteria while advancing a vocabulary that qualitative researchers can adopt regardless of ontological perspective. Tracy (2010) argues that the quality of qualitative work can be assessed through the extent to which a work evidences eight criteria: worthy topic, rich rigor, sincerity, credibility, resonance, significant contribution, ethical, and meaningful coherence. These categories align with Lincoln and Guba’s (1985) notion of trustworthiness and affirm the notions of credibility, transferability, dependability, and confirmability in the findings (pp. 289–332). Seidman (2013) provides some guidance toward assessing quality: ensuring that the context of statements is maintained, noting that participants were allowed to express themselves, recognizing consistency across each of the three interviews, and exploring connections and alignment across participants’ experiences (p. 27).

Case Example: Due to the volume of data collected, two coders participated in training and independently coded a sample of data that had been unitized by the interviewer. Following the coding process, the coders met to discuss and reconcile disagreements in coding. Through an iterative training process of coding and discussion, the intercoder reliability of 0.47 assessed in the first round of coding increased to an intercoder agreement of 0.96. Intercoder reliability was further improved later in the process, as described below. Intercoder reliability was calculated using the process described by Miles and Huberman (1984, p. 63), which involves dividing “the number of coding agreements by the number of agreements and disagreements combined” (Campbell et al., 2013, p. 309) for 95% of the coded references in the data. This process resulted in the first complete version of the coding scheme (typology of learning).
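As a point of reference, the Miles and Huberman calculation quoted above reduces to a few lines of code. The following Python sketch assumes each coder has assigned one code to the same sequence of units; the code labels shown are hypothetical.

def percent_agreement(codes_a, codes_b):
    # Miles and Huberman (1984): number of coding agreements divided by
    # the total number of agreements and disagreements combined.
    if len(codes_a) != len(codes_b):
        raise ValueError("Both coders must code the same units.")
    agreements = sum(a == b for a, b in zip(codes_a, codes_b))
    return agreements / len(codes_a)

# Hypothetical codes assigned by two coders to the same ten units.
coder_1 = ["1.1", "1.3", "1.5", "2.1", "1.3", "1.1", "2.4", "1.4", "1.5", "2.2"]
coder_2 = ["1.1", "1.5", "1.5", "2.1", "1.3", "1.1", "2.4", "1.4", "1.3", "2.2"]
print(round(percent_agreement(coder_1, coder_2), 2))  # 0.8, i.e., 8 of 10 units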

The iterative process of qualitative data analysis proceeded to refine the coding scheme (typology of learning) through coding additional interviews and developing clear coding instructions and rules for analysis of future datasets. Two researchers read and analyzed three more interviews. Specifically, one researcher used the coding scheme to code the remaining datasets, while the other identified gaps in the coding scheme. Through discussing the gaps, it was evident the definitions of the codes needed further clarification.

The interviewer and a team member worked together to refine the coding scheme and coding instructions. They systematically reviewed all previously coded interviews and discussed areas of contention, instances of confusion, and the repetition of codes across all five datasets. The team member asked pointed questions of the interviewer that invited distinction between categories of learning. For example, she asked questions such as, “What is the difference between coding an excerpt as ‘practicing’ and coding as ‘exploring’?” This led to the creation of a series of coding rules for each category. For example, the following coding rule for “practicing” resulted from that discussion:

Code when participant indicates that they have followed the same process over and over again. Words like “perfecting” and “getting better at it” might appear in this code. Also, “play around” is a cue for both code 1.3 (practicing) and code 1.5 (exploring) – the difference is that in code 1.3 (practicing), the person is trying to gain proficiency, and in code 1.5 (exploring), the person is trying to figure out a solution.

Table 1 is an example from the codebook that includes coding rules and an example for the code “Failing,” a code subsumed under the broader category “Learning by Doing.”

Table 1

Example from Codebook.


Code: Failing

Description: Discussion of failing, making mistakes, falling short in achieving a goal, or erring in one’s action or judgment.

Coding rules: Code when participant points to specific mistakes or failures they made that required them to rethink how they were making. Mistakes might be related to the choice of machine, the speed, the steps, or the materials.

Example: And so, I went in and I’m like, “Okay, so let me just take this wood and cut it down.” And I cracked a piece of wood. And I’m like, “Shoot, okay, I can’t do it this fast.”

Through this process, a comprehensive codebook was established that included for each code: a number, a name, a description, a set of coding rules, and an example. During this time, additional insights were provided by other uninvested colleagues so as to ensure that the coding scheme made sense. This commitment to continuous peer debriefing led to a coherent refined coding scheme and clear coding instructions for potential future datasets. The development of a robust codebook is critical to coding large volumes of data, particularly when multiple researchers are collaborating, and enables the continued refinement and testing of codes through each iteration of data analysis.
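Because each code carried the same five fields, a codebook of this shape translates naturally into a small record type, which can help keep entries consistent as the codebook grows. The following Python sketch is a hypothetical rendering: the class and field names are ours, and the example entry paraphrases Table 1.

from dataclasses import dataclass

@dataclass
class CodebookEntry:
    # The five fields recorded for each code in the codebook.
    number: str
    name: str
    description: str
    coding_rules: str
    example: str

failing = CodebookEntry(
    number="1.1",
    name="Failing",
    description="Discussion of failing, making mistakes, falling short "
                "of a goal, or erring in one's action or judgment.",
    coding_rules="Code when the participant points to specific mistakes "
                 "or failures that required them to rethink how they "
                 "were making.",
    example='"And so, I went in ... And I cracked a piece of wood."',
)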

As a result of earlier trials, the team refined the codebook and developed a robust training process. Specifically, the team learned that when coding PBIs, all coders must have a thorough contextual understanding of the data and read the entire data set they are working with prior to starting. Without such contextual understanding, coders are unlikely to reliably code the data.

To ensure the typology aligned with an engineering design audience, the research team invited an expert in engineering design to provide unbiased feedback on the coding scheme. In the peer debriefing session, the expert discussed the ways in which the coding scheme did or did not align with the engineering design audience and its vocabularies. This discussion pointed to ways that participants’ everyday talk about making did not match up with the vocabularies of the discipline. For example, participants used “problem solving” to describe their way of thinking through the design process. Using students’ language of “problem solving” in the typology, however, does not align with the conceptualizations of problem solving in the literature.

The peer debriefing led us to ensure that the uses of vocabularies in the typology aligned with relevant literatures, particularly for the engineering design process. The interviewer examined the literature on problem solving, the design process, design thinking (Brown, 2008), Bloom’s taxonomy and its revision (Bloom et al., 1956; Anderson & Krathwohl, 2001), and 21st century skills (cognitive, interpersonal, intrapersonal) (NRC, 2012), among other learning frameworks (Adams et al., 2011; Greeno et al., 1996; Kolb, 1984; Leonard, 2002). This effort led to grouping the taxonomy of learning into three broad categories: cognitive, interpersonal, and intrapersonal proficiencies. In this process, the interviewer tightened the language, definitions, rules, and labels for each of the codes to both reflect the meanings of the participants and contribute to broader understandings of learning in engineering contexts.

Typology Construction

While a code captures the essence of a segment of data, a typology embodies the ecosystem of the data, illuminating the broad categories or groupings of codes. Typologies are constructed through iterative, analytical, and interpretive processes of moving back and forth between the first level codes and the second level codes. Second level codes are types, and the interview data may yield a variety of types: types of learning, types of experiences, types of obstacles, among others, depending on the research question. According to Kluge (2000), “types are constructed in order to comprehend, understand, and explain complex social realities” (p. 1).

In creating a typology, the researcher discerns what kind of categories are of interest based upon the research question. For example, if the researcher is interested in the types of learning experiences participants have in an engineering course, the researcher will organize and group together the open codes to create broader categories that reflect “types of learning experiences.” The first level codes, then, ultimately characterize the attributes of each of the categories. For example, the type “learning by doing” may be composed of attributes in the data such as “active,” “hands-on,” and “making.” An initial typology may begin to emerge after the analysis of two to three sets of interviews. Thereafter, the researcher codes the remaining interviews using the typology itself, seeking to refine categories and the attributes therein with each new interview analyzed. The data collection is considered complete once analysis reveals there to be no continued refinement of the typology.
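The stopping rule described above can be summarized schematically. In the following Python sketch, code_interview is a hypothetical stand-in for the human interpretive step of coding one interview against the current typology; this is not a claim that the analysis can be automated, and the saturation check is simplified to an interview yielding no new types.

def refine_typology(interviews, current_types, code_interview):
    # current_types: the set of type names in the emerging typology.
    # code_interview: stand-in for the human analysis of one interview,
    # returning the set of types interpreted in that interview.
    for interview in interviews:
        found = code_interview(interview, current_types)
        new_types = found - current_types
        if not new_types:
            break  # theoretical saturation: no new types emerged
        current_types |= new_types  # refine the typology and continue
    return current_types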

Case Example: Data analysis of the interviews yielded a typology of learning for women using makerspaces at a large public institution in the southern United States. The typology showcases the processes and types of learning associated with women students in academic makerspaces. This includes the modes of learning and the products of learning (the cognitive skills, the interpersonal skills, and the intrapersonal skills). Table 2 shows a subset of the overarching categories and subcategories of the typology. By using the PBIs, we were able to generate a detailed and robust typology of learning.

Table 2

Subset of the Typology at a glance – the primary and secondary categorization.


1 LEARNING BY DOING

1.1 Failing

1.2 Struggling

1.3 Practicing

1.4 Iterating

1.5 Exploring

2 CULTURAL KNOWLEDGE AND SKILLS

2.1 Access conventions and protocols

2.2 Roles and structure of participation

2.3 Rules of the community

2.4 Gender associations

Implications of Case Example: Design and Learning Pathways

The PBI approach described in this paper offers researchers the opportunity to examine much more than types of learning. Building on the work of generating the typology and coupling the narratives and timelines from the interviews, we also examined how women students’ design and learning pathways developed over time. The women students’ narratives and timelines can be analyzed following similar processes of data analysis, where each woman’s narrative and timeline are compared in order to identify emerging themes and patterns, allowing for the development of models. From these emerging themes and patterns, we can begin to understand how women enter makerspaces, the impacts of their backgrounds on their involvement in the space, the barriers to entry, and the ways that gendered experiences impact a woman’s involvement. These types of topics are difficult to address using quantitative methodologies, as they require us to expand upon rather than reduce the information collected in the data. The methodology presented in this paper is particularly useful for generating insights for engaging women in design.

Discussion

Qualitative inquiry offers an opportunity to gain deep insights into the complex phenomena associated with engineering education. As engineering education continues to evolve, it becomes essential to develop means of appropriately evaluating and studying phenomena of interest. In this work, we describe a PBI process and how this process has been, and can be, fruitfully adapted for the engineering education audience. Suggested adaptations to Seidman’s original methodology for engineering education purposes are discussed and demonstrated; these include having individuals create timelines and bring previous prototypes to add tactile and tangible points of reference that deepen discussion in the interview. The verbal timeline that participants provide in the first interview is further validated and corroborated by the timeline that they draw in the third interview.

Importantly, the participants’ willingness to openly share their stories is rooted in the mutual trust developed between the interviewer and the participant. It is important for other researchers who are considering qualitative methods to seriously evaluate and articulate how mutual respect and trust will be attained. Another important consideration is that developing the appropriate research questions and interviewing protocol requires a great deal of time: in this research, two years were spent simply in exploration, and an additional year was spent developing the appropriate protocol. We do not suggest the in-depth interviewing process for a study that merely aims to explore a field, because the presented methodology is targeted at delving deeply into phenomena toward building complex explanations, typologies, and models.

Through describing the methodology in a specific context, we demonstrate how a PBI process and methods of qualitative data analysis can capture lived experiences and the meanings of those experiences. Through PBI, we have been able to follow the ethical guidance of Sochacka et al. (2018), doing justice to our participants by empowering them to tell their own lived stories of making; the result is data illustrating the rich accounts of young women’s pathways as they navigate their own academic journeys and become makers in academic makerspaces. Strikingly, investigating these lived experiences of women students through qualitative inquiry illuminates both the breadth and the depth of the forms of learning through which they engage as participants in academic makerspaces. This breadth and depth would not have been attainable through even a small number of controlled design studies, surveys, or quasi-experimental designs. Qualitative inquiry produces a rich dataset for examining uncontrolled and unstructured environments; such environments are otherwise difficult to study using approaches that demand predictable patterns of learning and replicable sets of interactions.

Researchers exploring engineering education, and in particular the context of makerspaces, can benefit from PBI methodologies. Using the presented roadmap, researchers may engage a small number of participants in in-depth phenomenologically based interviews as a means to gain initial insights into their question of interest while simultaneously mastering their interviewing skills and developing emergent findings. Through this type of methodology, a wide variety of complex phenomena and understudied populations can be examined in engineering. For example, the interviewing process could assist in understanding how a team’s processes (e.g., management, design, engineering) have evolved over time, along with gaining insights into why the processes have changed and why individuals believe them to be effective. Further, because an engineering student’s past experiences in teams impact their current decisions, this methodology could excavate the impact of past experiences on current teaming decisions.

Moreover, the inductive approach to generating a typology such as the one described in our exemplar is particularly useful when there is limited existing knowledge, research, or theory on an engineering design phenomenon. Ultimately, this methodology is particularly well suited to constructing typologies and thus can be used to identify a wide variety of them: types of design projects, types of prototypes, types of problem interpretation strategies, types of analogies, types of student engagement, types of function decompositions, types of empathy, types of creativity, or types of barriers to engineering design. Since engineering is informed by experiences and builds upon previous learning, this methodology allows for extracting insights from all types of experiences, whether those of capstone students, students in a first-year design class, K–12 students, industry professionals, faculty, or administrators. Engineering education researchers should determine what experiences will help them dive deeper into understanding a particular phenomenon. For many current research endeavors, the phenomenologically based interviewing approach would yield rich datasets and insights into the phenomenon of interest.

Conclusions

This paper emphasizes the importance of qualitative techniques and shows how the methodology of phenomenologically based interviewing can be applied in engineering as a means to obtain deeper insights into experiences of learning. In describing the phenomenologically based interviewing methodology, we highlight the critical aspects, including the interviewer–participant relationship and recruiting strategies, that impact the quality of the interview data. Through both the in-depth three-interview series and the single targeted interview, participants share a narrative of their experiences pertaining to a certain phenomenon. To demonstrate, this paper presents the “methodology in practice” in order to illustrate the processes for implementing the interviewing methodology and analyzing the data for how academic makerspaces support the learning of women students. This work delineates the approach for systematically developing a coding scheme that leads to a complete typology. Similarly, this approach could be used in other research enterprises to develop typologies for prototyping, design methods, design projects, engineering trajectories, product development, and so on. Overall, this paper puts forward a qualitative methodology that we have found useful in our research on engineering makerspaces – phenomenologically based interviewing – and presents it as a roadmap for other engineering education researchers.

Acknowledgements

This work is supported by the National Science Foundation through grants EEC-1733708 and EEC-1733678. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Competing Interests

The authors have no competing interests to declare.

References

  1. Adams, R. S., Daly, S. R., Mann, L. M., & Dall’Alba, G. (2011). Being a professional: Three lenses into design thinking, acting, and being. Design Studies, 32(6), 588–607. DOI: https://doi.org/10.1016/j.destud.2011.07.004 

  2. Atkinson, R. (2007). The life story interview as a bridge in narrative inquiry. In D. J. Clandinin (Ed.), The Handbook of Narrative Inquiry (pp. 224–245). Sage. DOI: https://doi.org/10.4135/9781452226552.n9 

  3. Bertaux, D. (Ed.) (1981). Biography and society: The life history approach in the social sciences. Sage. 

  4. Bloom, B. S., Englehart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. David McKay Company, Inc. 

  5. Borrego, M., Douglas, E., & Amelink, C. (2009). Quantitative, qualitative, and mixed research methods in engineering education. Journal of Engineering Education, 98(1), 53–63. DOI: https://doi.org/10.1002/j.2168-9830.2009.tb01005.x 

  6. Brown, S., Montfort, D., Perova-Mello, N., Lutz, B., Berger, A., & Streveler, R. (2018). Framework theory of conceptual change to interpret undergraduate engineering students’ explanations about mechanics of materials concepts. Journal of Engineering Education, 107(1), 113–139. DOI: https://doi.org/10.1002/jee.20186 

  7. Brown, T. (2008). Design thinking. Harvard Business Review, 86(6), 84–92. 

  8. Campbell, J. L., Quincy, C., Osserman, J., & Pedersen, O. K. (2013). Coding in-depth semi-structured interviews: Problems of unitization and intercoder reliability and agreement. Sociological Methods and Research, 42(3), 294–320. DOI: https://doi.org/10.1177/0049124113500475 

  9. Charmaz, K. (2014). Constructing grounded theory (2nd ed.). Sage Publications. 

  10. Chou, P.-N., & Chen, W.-F. (2015). Female engineering students’ perceptions of college learning experiences: A qualitative case study in Taiwan. International Journal of Engineering Education, 31(2–11). 

  11. Compagnone, W. (1995). Student teachers in urban high schools: An interview study of neophytes in neverland. University of Massachusetts at Amherst. 

  12. Cook, J. S. (2009). “Coming into my own as a teacher”: Identity, disequilibrium, and the first year of teaching. The New Educator, 5, 274–292. DOI: https://doi.org/10.1080/1547688X.2009.10399580 

  13. Dahlgren, M. A. (2003). PBL through the looking glass: Comparing applications in computer engineering, psychology and physiotherapy. International Journal of Engineering Education, 19(5), 672–681. 

  14. Dexter, L. A. (1970). Elite and specialized interviewing. Northwestern University Press. https://books.google.com/books?id=spGyXLNREukC&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false 

  15. Dougherty, D. (2012). MAKE/Intel Maker Market Study: Makers at the Forefront, An In-depth Profile of Hardware Innovation. http://www.nyu.edu/reynolds/speaker_series/pdf/Maker%20Market%20Study%20FINAL.pdf 

  16. Douglas, E. P., Jordan, S. S., Lande, M., & Bumbaco, A. E. (2015). Artifact Elicitation as a Method of Qualitative Inquiry in Engineering Education. 2015 ASEE Annual Conference & Exposition, Seattle, Washington. 

  17. Douglas, J. (1976). Investigative social research: Individual and team field research. Sage Publications. 

  18. Eberle, T. S. (2014). Phenomenology as a research method. In U. Flick (Ed.), The SAGE handbook of qualitative data analysis (pp. 184–202). Sage. 

  19. Gabriel, J. (1997). The experiences of language minority students in mainstream English classes in United States public high schools: A study through in-depth interviewing. University of Massachusetts Amherst. 

  20. Geertz, C. (1973). Thick description: Toward an interpretive theory of culture. In The Interpretation of Cultures: Selected Essays (pp. 3–30). Basic Books, Inc. https://chairoflogicphiloscult.files.wordpress.com/2013/02/clifford-geertz-the-interpretation-of-cultures.pdf 

  21. Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Aldine Transaction. DOI: https://doi.org/10.1097/00006199-196807000-00014 

  22. Golafshani, N. (2003). Understanding reliability and validity in qualitative research. The Qualitative Report, 8(4), 597–607. 

  23. Greeno, J. G., Collins, A. M., & Resnick, L. B. (1996). Cognition and learning. In D. C. Berliner & R. C. Calfee (Eds.), Handbook of educational psychology (pp. 15–46). Prentice-Hall. 

  24. Halverson, E. R., & Sheridan, K. M. (2014). The maker movement in education. Harvard Educational Review, 84(4), 495–504. DOI: https://doi.org/10.17763/haer.84.4.34j1g68140382063 

  25. Hardin, C. (1987). Black professional musicians in higher education: A dissertation based on in-depth interviews. (Publication Number DEU97-10458) Comprehensive Dissertation Index 1983–1987. 

  26. Hatch, M. (2014). The Maker Movement Manifesto. McGraw-Hill. 

  27. Hoepfl, M. C. (1997). Choosing qualitative research: A primer for technology education researchers. Journal of Technology Education, 9(1), 47–63. DOI: https://doi.org/10.21061/jte.v9i1.a.4 

  28. Jackson, P., & Russell, P. (2010). Life history interviewing. In D. Delyser, S. Herbert, S. Aitken, M. Crang, & L. McDowell (Eds.), The Sage Handbook of Qualitative Geography (pp. 172–192). Sage. DOI: https://doi.org/10.4135/9780857021090.n12 

  29. Jenoure, T. (1995). Navigators, challengers, dreamers: African American musicians, dancers, and visual artists who teach at traditionally white colleges and universities. University of Massachusetts Amherst. Unpublished doctoral dissertation. https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=6528&context=dissertations_1 

  30. Kirn, A., & Benson, L. (2018). Engineering students’ perceptions of problem solving and their future. Journal of Engineering Education, 107(1), 87–112. DOI: https://doi.org/10.1002/jee.20190 

  31. Kluge, S. (2000). Empirically grounded construction of types and typologies in qualitative social research. Forum: Qualitative Social Research, 1(1), Article 14. DOI: https://doi.org/10.17169/fqs-1.1.1124 

  32. Kolar, K., Ahmad, F., Chan, L., & Erickson, P. G. (2015). Timeline Mapping in Qualitative Interviews: A Study of Resilience with Marginalized Groups. International Journal of Qualitative Methods, 14(3), 13–32. DOI: https://doi.org/10.1177/160940691501400302 

  33. Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development (2nd ed.). Pearson Education, Inc. 

  34. Kvale, S. (1996). Interviews: An introduction to qualitative research interviewing. Thousand Oaks, California: Sage Publications. 

  35. Lande, M., & Jordan, S. (2014, October 22–25). Making it together, locally: A making community learning ecology in the Southwest USA. 2014 IEEE Frontiers in Education Conference, Madrid, Spain. DOI: https://doi.org/10.1109/FIE.2014.7044394 

  36. Leonard, D. C. (2002). Learning theories, A to Z. Greenwood Publishing Group, Inc. https://books.google.com/books?id=nNcoAO5Za9YC&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false 

  37. Leydens, J., Moskal, B., & Pavelich, M. (2004). Qualitative methods used in the assessment of engineering education. Journal of Engineering Education, 94(1), 13–25. DOI: https://doi.org/10.1002/j.2168-9830.2004.tb00789.x 

  38. Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Sage. DOI: https://doi.org/10.1016/0147-1767(85)90062-8 

  39. Litts, B. K. (2015). Making learning: Makerspaces as learning environments. University of Wisconsin-Madison. http://search.proquest.com/docview/1651611969?accountid=11648 

  40. Marques, J. F., & McCall, C. (2005). The application of interrater reliability as a solidification instrument in a phenomenological study. The Qualitative Report, 10(3), 439–462. DOI: https://doi.org/10.46743/2160-3715/2005.1837 

  41. McAdams, D. P., Bauer, J. J., Sakaeda, A. R., Anyidoho, N. A., Machado, M. A., Magrino-Failla, K., White, K. W., & Pals, J. L. (2006). Continuity and change in the life story: A longitudinal study of autobiographical memories in emerging adulthood. Journal of Personality, 74(5), 1371–1400. DOI: https://doi.org/10.1111/j.1467-6494.2006.00412.x 

  42. McAdams, D. P., & Guo, J. (2014). How shall I live? Constructing a life story in the college years. New directions for higher education, 2014(166), 15–23. DOI: https://doi.org/10.1002/he.20091 

  43. McAdams, D. P., Josselson, R., & Lieblich, A. (2006). Identity and story: Creating self in narrative. American Psychological Association. DOI: https://doi.org/10.1037/11414-000 

  44. McAlister, A. M., Lee, D. M., Ehlert, K. M., Kajfex, R. L., Faber, C. J., & Kennedy, M. S. (2017). Qualitative coding: An approach to assess inter-rater reliability. 2017 ASEE Annual Conference, Columbus, Ohio. 

  45. Meyer, A. (2018, February 14). Feminist makerspaces: Making room for women to create. The Riveter. https://www.therivetermagazine.com/feminist-makerspaces-making-room-for-women-to-create/ 

  46. Miles, M. B., & Huberman, M. A. (1984). Qualitative data analysis: A sourcebook of new methods. Sage Publications. 

  47. Miller, J. H. (1997). Gender issues embedded in the experiences of student teaching: Being treated like a sex object. Harvard Educational Review, 49(1), 1–19. DOI: https://doi.org/10.1177/0022487197048001004 

  48. Mishler, E. G. (1986). Research interviewing: Context and narrative. Harvard University Press. http://www.hup.harvard.edu/catalog.php?isbn=9780674764613 

  49. Morgan, D. L. (2008). Snowball sampling. In L. M. Given (Ed.), The SAGE encyclopedia of qualitative research method. Sage. DOI: https://doi.org/10.4135/9781412963909.n425 

  50. NRC. (2012). Education for life and work: Developing transferable knowledge and skills in the 21st century. The National Academies Press. 

  51. O’Donnell, J. F. (1989). Tracking: Its socializing impact on student teachers: A qualitative study using in-depth phenomenological interviewing. University of Massachusetts Amherst. Unpublished doctoral dissertation. https://core.ac.uk/download/pdf/220131704.pdf 

  52. Palinkas, L. A., Horwitz, S. M., Green, C. A., Wisdom, J. P., Duan, N., & Hoagwood, K. (2015). Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration and Policy in Mental Health, 42(5), 533–544. DOI: https://doi.org/10.1007/s10488-013-0528-y 

  53. Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Sage Publications, Inc. 

  54. Rubin, H. J., & Rubin, I. S. (2012). Qualitative interviewing: The art of hearing data (3rd ed.). Sage. 

  55. Saldaña, J. (2016). The coding manual for qualitative researchers (3rd ed.). Sage Publications. 

  56. Saracino, D. (2021). Comparison of Makerspace Learning Outcomes Between Genders, Universities, and Online Communities. Georgia Institute of Technology. 

  57. Saracino, D. M., Sadel, K., Alemán, M. W., Nagel, R., & Linsey, J. (2021). Comparison of Student Learning in Two Makerspace Communities. 2021 ASEE Annual Conference & Exposition, virtual. 

  58. Schutz, A. (1970). On phenomenology and social relations (H. R. Wagner, Ed.). University of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/A/bo3624278.html 

  59. Schutz, A., & Luckmann, T. (1973). The structures of the life-world (Vol. 1). Northwestern University Press. 

  60. Seidman, I. (2006). Interviewing as qualitative research: A guide for researchers in education and the social sciences (3rd ed.). Teachers College Press. 

  61. Seidman, I. (2013). Interviewing as qualitative research: A guide for researchers in education and the social sciences (4th ed.). Teachers College Press. 

  62. Sochacka, N. W., Walther, J., & Pawley, A. L. (2018). Ethical validation: Reframing research ethics in engineering education research to improve research quality. Journal of Engineering Education, 107(3), 362–379. DOI: https://doi.org/10.1002/jee.20222 

  63. Tagg, S. K. (1985). Life story interviews and their interpretations. In M. Brenner, J. Brown, & D. Canter (Eds.), The Research Interview: Uses and Approaches (pp. 163–199). Academic Press. 

  64. Anderson, L. W., & Krathwohl, D. R. (Eds.) (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. Longman. 

  65. Tomko, M. (2019). Developing One’s “Toolbox of Design” through the Lived Experiences of Women Students: Academic Makerspaces as Sites for Learning. Dissertation, Georgia Institute of Technology. Atlanta, GA. 

  66. Tomko, M., Alemán, M., Nagel, R., Newstetter, W., & Linsey, J. (2021). Changing the Narrative Around Making: Understanding Women’s Pathways into University Makerspaces. Journal of Engineering Education, 110(3), 18. DOI: https://doi.org/10.1002/jee.20402 

  67. Tomko, M., Aleman, M., Newstetter, W., Nagel, R., & Linsey, J. (2020). “Academic makerspaces as a “design journey”: Developing a learning model for how women students tap into their “toolbox of design””. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 34(3), 363–373. DOI: https://doi.org/10.1017/S089006042000030X 

  68. Tracy, S. J. (2010). Qualitative quality: Eight “Big-Tent” criteria for excellent qualitative research. Qualitative Inquiry, 16(10), 837–851. DOI: https://doi.org/10.1177/1077800410383121 

  69. Tracy, S. J. (2013). Qualitative research methods: Collecting evidence, crafting analysis, communicating impact. Wiley-Blackwell. 

  70. Van Note Chism, N., Douglas, E., & Hilson, W. J., Jr. (2008). Qualitative research basics: A guide for engineering educators. https://crlte.engin.umich.edu/wp-content/uploads/sites/7/2013/06/Chism-Douglas-Hilson-Qualitative-Research-Basics-A-Guide-for-Engineering-Educators.pdf 

  71. Vygotsky, L. (1987). Thought and language. MIT Press. 

  72. Walker, M. (2001). Engineering identities. British Journal of Sociology of Education, 22(1), 75–89. DOI: https://doi.org/10.1080/01425690020030792 

  73. Weiner, S., Lande, M., & Jordan, S. S. (2018). What have we “learned” from maker education research? A learning sciences-base review of ASEE literature on the maker movement. 2018 ASEE Annual Conference & Exposition, Salt Lake City, UT. https://peer.asee.org/what-have-we-learned-from-maker-education-research-a-learning-sciences-base-review-of-asee-literature-on-the-maker-movement 

  74. Young, S. (1990). ESL teachers and their work – A study based on interviews conducted with teachers of English as a second language. University of Massachusetts Amherst. Unpublished doctoral dissertation. https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=5712&context=dissertations_1 
