What’s the underlying theory? An interview with Kathleen Eisenhardt

In the early days of this blog, I had prepared an initial list of scholars whom I wanted to interview, and as many can well imagine, Kathleen Eisenhardt was high on that list. However, our paths never crossed, and I never dared write to her directly, held back by a fear of rejection or by unexpected shyness – can’t say which for sure – and so this interview stayed where it was: on my wish list. It took the pandemic, and my own need to “walk the talk” with students, to make it happen. Indeed, to maintain community among our local PhD students during the pandemic, I had begun a series of student-led, online talks with well-known scholars on various aspects of qualitative research. To those who offered to help me with organizing, I had said: “Invite whomever you like.” They answered: “Anyone?” And I said, “Yes, anyone.” In my qualitative methods class, I encourage students never to be shy about reaching out to potential informants, regardless of whether they know them or not, whether they are famous or not, whether they are the CEO, a VP or someone working on an assembly line. It doesn’t matter. I tell them, “Try reaching out. What is the worst that can happen?” So, when the students said they wanted to invite Kathleen Eisenhardt, what could I say? We wrote to her, and she wrote back almost immediately! How amazing was that? We had a brilliant and wonderfully informal group discussion about case study method, and some time after that, Kathleen graciously took the time to let me interview her for this blog. And here you have it: an informal discussion with Kathleen about the ups, downs and practicalities of conducting, writing and publishing case study research. Enjoy!
You’ve conducted multiple case studies in a wide variety of contexts. Where do your initial ideas for undertaking those studies come from?
I try to stay aware of what’s happening and what’s happening next – novel phenomena and theories. Going to conferences, reviewing a lot, and so on to know where the cutting edge is. My most original idea was probably my PhD thesis – a quantitative study on agency theory, a “hot” theory of the time. After that, I mostly worked with other people, and they often were the ones who came up with an idea. Over the past 30 years, my process has mostly been that my students come up with an idea. They say “Let’s study this” and then I say, “Oh! That’s pretty good. But if we studied this (related idea), it would be even better.” If you hand me a blank piece of paper, I’m not very effective. If you hand me something – even a fragment – I can often change it into something good that tackles a new and significant issue.
So, a student comes up to you and says, “I’d like to study this”. Then what?
It depends. So, for example, Pinar Ozcan was interested in alliances. At the time, I knew that the literature had a pretty good handle on how to form a single alliance, but hadn’t really looked at how to build a portfolio of alliances and, broadly, ecosystems. So alliance portfolios were what was new about her topic. We then looked around for the right industry to study. Pinar looked at several different industries, and realized that mobile gaming was a great fit. Now, Pinar is not a gamer, I’m not a gamer. It’s not that gaming was interesting in and of itself. Rather, it was an industry that was very amenable to what we wanted to study – alliance portfolios. Another student, Eric Volmar, came to Stanford with an interest in higher education and how competition happens in higher ed. I saw an opportunity to join institutional theory with strategy in nascent markets. Together we settled on studying venture strategies in the emerging online higher education industry, a highly institutionalized setting. So, some people arrive with a topic that they want to study, and some people have an industry that they want to study. Some people are more open. Together we figure out how to turn those interests into a fruitful research project.
What about picking the specific cases to study, how does that happen?
The way it works with most of my students is that there is a general area of interest – alliances, institutional theory, whatever – that we’re going to study. The students then do a literature review on that topic. I’m a big believer in literature reviews because you can’t tell whether what you’re doing has already been done unless you do a literature review. So I am very big on doing thorough literature reviews so that you totally know the prior literature. You know where the holes are, and you know when you’re just seeing what other people have already seen, which is fine. But what you’re really looking for is something new. So, we’ll start with the literature review, figure out the research gaps, and hunt around for a setting or industry that is suitable for studying that area of interest.
What about access, has it ever been an issue? How have you dealt with that?
There are always issues with access. Choosing sites is always about finding a match between the research question and getting in. But most of the time, once we get into a site, we complete the study. We’ve occasionally run into situations where the organization is falling apart and nobody wants to talk. So they disappear from the sample. Generally, I also try not to be too greedy about the time that I want from informants. This helps with access. In contrast, ethnographers, for example, have to take a lot of time which is an important reason why they usually only have one or maybe two sites. Multi-case researchers, especially strategy ones, can often require less time commitment. They can also often give organizations value – there is payoff to participants. For example, I have often said: “We’re going to study you and we’re going to tell you how you compare with other companies.” We offer this comparative feedback as part of the pitch. But many times, we are never asked to provide that feedback. Sometimes the organizations or informants want to approve the quotes before we put them into a paper. This is fine, I don’t mind doing that and usually they approve.
Let’s talk about a paper that was a memorable struggle to get published.
The paper on simple rules with Chris Bingham, which came out in SMJ (Strategic Management Journal) in 2011, was hard to get published. Actually, Chris and Pinar Ozcan graduated in the same year. Pinar’s paper (Academy of Management Journal in 2009) was super easy to get published. First try. The reviews were great. The editor loved it. It was published! Meanwhile, Chris’s paper was struggling to get accepted. It took time to find the right audience for that paper – a strategy paper with a lot of psychological insights. The paper itself never really changed. It wasn’t like we suddenly realized the data had some amazingly new insights. It was more that we hadn’t found the audience that wanted to hear our insights in the way that we wanted to tell them. So that was a paper that took time to publish. In contrast, with Pinar we just hit an editor who loved our paper – illustrating how publishing is assisted by the luck of getting an editor who really loves what you’re doing.
Did that process involve a lot of re-writing? You said the main ideas didn’t change – what changed then?
The framing of Chris’s paper and the discussion changed a bit. Yet the middle of the paper – the core findings – didn’t change much. It was more the front and the back that kept changing. In my experience, it’s hardly ever the case that the middle changes a lot because the findings are what they are. What really changed for us was the audience – SMJ had the readers and reviewers who were attracted to our paper.
In a publications workshop, I recently heard a former AE (associate editor) say that authors should brace themselves to change the framing of their study – possibly several times – before making it through the review process. And the students in the session were like, “Really, is it ok for editors and reviewers to hijack the process like that?”
I think that sometimes it’s useful for reviewers to suggest re-framing because the reviewers are a subset of the audience for the paper. Since you’re trying to appeal to a wide audience, reviewers can sometimes be a proxy for them. But, sometimes they’re just a pain. And sometimes you might look at the frame they’re suggesting, and say, “I don’t think so”. I’d say two out of three times, reviewers are useful, and one out of three times, they’re just a pain in the neck. And sometimes, after the review process is through, we may tweak some of the framing back to what we wanted to say.
You mean after the paper had been accepted?
Yes. Lightly tweak to include some of the things you actually wanted to say, and of course, the editor sees these changes. It’s not hidden. I think – at the end of the day, it’s important to remember that it’s your paper. For example, in the paper that I wrote with Jeff Martin on cross-business synergies, we added a finding after the paper was accepted. We went to the editor and said, “You know, we’ve got this finding that we didn’t put in because we didn’t want the paper to be too complicated. But we can add it in over the weekend. Can we do it?” And the editor said, “Yeah, do it, and I’ll look at it and see if I like it.”
I didn’t know you could do that. When I started doing these interviews, I didn’t even know that you could speak to an editor about their decision on a paper. It was a big eye-opener for me.
I often do. That is, we get the reviews, read them over, and then may check in with the editor and say, “Ok, we’re trying to understand whether we understood the reviewers and you correctly.” I also often try to find out where the reviewers are coming from. Like is this a strategy person? Is this an organization theory person? Is this a quant person? Just an idea of the demographics of the reviewers because this information helps to understand why reviewers are making the comments they’re making. In my experience, when reviewers complain about something, it’s usually legitimate, but their interpretation of what’s wrong or their solution for fixing it isn’t always useful. Sometimes knowing where somebody is coming from helps you understand why they said what they said.
And are editors open to those kinds of questions about, well, who are these reviewers?
Most of the time. If they think you’re stepping over a line, they don’t answer. Also, I never “fish” for who the specific reviewers are – just background that helps interpret the reviews. And if you catch editors right after they’ve written the decision letter, they remember the paper. If you wait too long, they may not remember the paper. Editors also don’t like you to bring up a paper, say at a conference, out of the blue. They don’t like that. And I don’t blame them. But if you actually ask about a particular issue or have a question – “We can’t interpret this” or “Do you think this will satisfy?” – they might say, “Well, I don’t know whether it will satisfy, but that sounds good to me,” or “These are the more important comments, and these are the less important ones,” or “R2 really nailed your paper.”
What is a paper that you’re particularly proud of, and what makes you proud of it?
Well, my papers are like my children – I’m proud of them all! For example, I like the cross-business collaboration paper (with Jeff Martin in AMJ 2010) because I think the research design was particularly clever and the problem was significant. We were looking at cross-business synergies, and studied a good and a bad case in each of the firms. That is, in each company, we studied a high-performing and a low-performing collaboration, but it’s the same people in both. So, the people are staying the same. They have a really good outcome working together and they have a really bad outcome working together. This design is really powerful because it takes away the argument that, “Oh, well maybe some people are just more cooperative than other people.” I also like it because cross-business collaboration is a true business problem – not just a theoretical one. Real-life executives often don’t know how to make cross-business collaboration work. I don’t know how well you remember the paper, but the punch line is: “Collaborations that come from the CEO are less effective.” And here the person who let us into the company, the person who picked the good and bad collaborations, is the CEO. So, the CEOs end up realizing, “Oh, it’s me, I’m the problem.”
How did you ever manage to find those perfectly matched cases?
We thought about what kinds of companies have lots of opportunities for internal collaboration across businesses. We realized that software companies do. So, we picked an industry where we knew that there would be lots of collaborations, and that collaborations would be important. Then we contacted multi-business software companies – we happened to have connections to some of them, which helps. We sometimes did random cold calls. I don’t remember how many companies we contacted. Not a huge number. I don’t remember how many people said “no”. But this goes back to their interest in our problem. Executives were genuinely interested in cross-business synergies. Like, “How do we do better collaborations?” After all, synergy is a rationale for product-market diversification. It’s the rationale for many mergers and acquisitions. Yet, if you look at the track record of companies in achieving synergies and effective acquisitions, it’s actually really marginal. So, our access was helped because executives saw our topic as a real practical problem for them. And we saw it as a really fabulous theoretical problem. That is, when we reviewed the prior literature, it was clear that many people who had written those articles had little experience inside companies.
I see it now. If you’re looking at a company that does lots of these (internal collaborations), then it would be kind of normal that some are more successful than others.
Exactly. There were enough collaborations in each of the companies that you could ask about particularly good and bad examples. So we could get matched pairs.
Let’s talk about intuition. Do you have a process for moving from intuition about your findings to theoretical insight?
We almost always start with a research question. We don’t always stick to it, but we always have one. So that anchors what we’re doing. And then we try to understand each case as its own standalone answer to that research question. OK, case A – what does it say about the research question? What does this next case B say? So first try to understand each case on its own regarding the focal question. Then do cross-case analysis – that is, put cases together and ask, “How is A like B? How is A like C, or how is A not like B?” And then, different pairs. Or sometimes we’ll look at variables: “Oh, this case has high conflict – let’s look at conflict differences. Let’s look at… whatever.” So, it’s trying to look at the cases in both the ways that immediately jump out and the ways that don’t seem like they fit, but could have surprises. It’s trying to look at the data in as many ways as possible, and asking, “OK, what’s the pattern that’s emerging?” Doing this sometimes means changing abstraction levels, because that’s part of what you have to do to recognize conceptually relevant similarities and differences across cases.
Practically speaking, how do you document this process? Are you coding, writing stuff down?
Other people do elaborate coding. I more try to put together data tables that use case data as measures of more abstract concepts. It’s like first and second order themes. So, if we notice, “Oh! here’s a pattern” then it becomes “Let’s now see if we can make a table that shows that for each case. Is that pattern really there?” We’ll usually build the tables before we do any writing: do the related measures and constructs make sense? Does the pattern replicate across each case? Then we think about the underlying theoretical mechanisms. Why something is leading to something else. I often look at performance. Not always, but often. But broadly, why does doing X lead to Y?
One of my biggest gripes about the field is the movement towards long method sections where authors outline everything they did on their journey, to the point that it takes up way too much space in a paper. Yet at the same time, they often ignore underlying theoretical mechanisms. In other words, they give the reader a complicated boxes-and-arrows diagram that has little underlying theoretical argument explaining what is going on. This is what I think the field is missing in a major way. Theory is not about boxes and arrows. It’s about the mechanisms underlying those arrows. In my view, lately there’s been too much “Let’s be really transparent about our methods and tell readers about everything that we did,” at the expense of clarity about theoretical mechanisms, theory, and grounding of the theory in the data.
It’s great that you bring this up because it is something I wanted to ask you about. In their 2019 methods article in SMJ (Strategic Management Journal) Aguinis and Solarino appear to equate transparency with rigour. So, the more transparent your methods, the more rigorous your study because it will be easier to replicate. But how to do that without the whole thing turning into “Kabuki theater” as you’ve called it?
My view is that method sections are more faddish than other parts of a paper. So, the method sections that we are writing now are not the method sections that we were writing ten or fifteen years ago. This doesn’t mean that the method sections today are better than the past. They’re just different. What I do is write the best method section that I can, and then wait for the reviewers to complain. If they write, “I want to know this, or I want to know that”, then I put it in. So, I don’t think there’s a right answer to method sections per se because they’re really reviewer dependent.
In my view, the Aguinis and Solarino article, and even the Pratt, Kaplan and Whittington response that followed, were both very philosophical. Yet they missed the central point that theory building is about building theory. It’s not meant to be replicated exactly. It’s meant to be tested. You don’t take a formal mathematical model – another method to build theory – and say, “Well, you know, replicate that.” Instead, you test it.
So, the whole question of whether you can replicate the findings (or more accurately the emergent theory) of a theory-building study doesn’t matter much because the outcome is a theory that should be tested. Or, aspects of it should be tested. Also, this theory-building aim is why I think that there need to be explicit theoretical mechanisms – that is, arguments. In other words, replication isn’t the point. What Aguinis and Solarino did was take an argument that is relevant for deductive research, and try to make it relevant for theory-building – inductive – research. This doesn’t make sense. To say that rigorous inductive research is all about transparency, and to never mention that a theory should actually be there, is a stunning omission to me. At the end of the day, multi-case theory building is fundamentally about the emergent theory: how strong is the theory and how logical is the theory? How well grounded in the data is the theory? That’s also why a theory that says brown-eyed CEOs do better doesn’t make sense unless there’s a logical, theoretical reason why brown eyes are relevant. Otherwise, it’s just a random association.
Not all inductive theories are testable though.
I don’t claim to speak for interpretivists like Denny Gioia. He does his thing and I think his work is great. But, it’s not me. And I don’t know what his stance would be, and I don’t know what ethnographers’ stance would be. Rather, my point is that describing the methodological steps thoroughly is important, but the balance between describing the methodological journey and the rigor of the emergent theory is off in the current debate. If it’s all about transparency regarding whom I interview and how I talk to them etc., and it’s never about theory and theoretical logic, then there’s something wrong with that research if its aim is theory building. So to go back to your question. Yes, an inductive theory from a study with the aim of theory building should be testable.
And what of reviewer comments, how have they evolved over time?
Well, comments about framing have evolved. It used to be that… like my fast decisions paper (AMJ 1989), framing was easy. Just state that fast decisions are really important in certain contexts, but we don’t know anything about them. Isn’t this a cool problem? In other words, you didn’t have to really frame much. It was more, “Hey this is cool, let’s study it!” Then there was a time when you had to really frame a lot – put in a big background section and pull in multiple theories. This forced you to write a much more elaborate front end. And now, the pendulum is kind of going the other way back to simplicity. So framing comments – that’s changed. The use of certain terms has evolved. For example, there was a time in the mid-2000s where you couldn’t use the term “grounded theory” unless you were super into Strauss and Corbin. There was also a time when reviewers wanted you to put in propositions. Now that’s not true. Then data structures became popular for a while and now they’re being criticized.
What hasn’t changed – at least for me anyway – is the idea that you ultimately have to present the data for each case that underlie the emergent theory. And so you need to have some sort of illustration of each of the findings for each case. I do that in the text with more elaborate illustrations from a few cases, and with data tables for each construct, for each case. And you also need a theory that explains why proposed relationships are likely to be true. These features have stayed the same.
Let’s talk a bit about knowledge transfer. With several articles published in HBR (Harvard Business Review), two books, you’ve managed this exceptionally well. How do you do it?
Inductive – that is, theory building – research using cases lends itself readily to writing for a popular audience. In this type of research, we tend to deal with problems that are hard for executives to understand and solve. So, the research offers well-argued insights about a relevant topic, and uses the cases that were studied as examples of that theory or its insights. It’s just much easier for multi-case, theory-building researchers to translate their work to a popular audience than it is for a quantitative researcher. It’s often simply a matter of pulling out the citations and writing the findings section in lay prose.
Given your experience, can you share some insights about the process of publishing in practitioner-oriented journals like HBR, MIT Sloan or California Management Review?
Both MIT Sloan and California Management Review accept submissions. So, you can just send in something. HBR is a bit more about knowing the editors and getting on their radar. The first article that I published in HBR, in 1997, was “How Management Teams Can Have a Good Fight”. It came about because the HBR editors were at one of our big conferences, perhaps AOM (Academy of Management) – I don’t remember. Anyway, they were just going to sessions looking for papers that they could publish. And the editors saw my paper, liked it, and basically took over writing the HBR article. That was the entrée. So, now I knew the editor. She then passed me to another editor, and I got to do more articles. The other path is through people who have published in these outlets. The editors of these publications often publish a lot of consultant stuff, but they don’t want to. They want to publish academics.
What advice do you have for those who’d like to emulate your career?
Everyone is different! When you’re not tenured, it’s all about getting tenure. Once you have tenure, you can do a lot more of what you want. So, just keep in mind that it’s about getting tenure and that you don’t have to love everything that you do. You don’t have to love your dissertation. You just have to finish it and publish it! The other piece of advice I give is actually from Bob Sutton, who said, “three papers make a stream”. I don’t follow this advice much anymore, but I did when I started out. The idea is that you write three papers about something, and only then move on. So, you don’t just write one paper on topic A, one paper on topic B and so on. Rather, with three papers on a topic, you become associated with a set of ideas. Then, when you go up for tenure, people can write about you as the person who does X. There’s also an efficiency to it, like repeating a similar background section in every paper. Yet, not every study lends itself to three papers. Some studies can be carved into multiple papers, but not all. The data and ideas need to fall in the right way. Also, the more cases you have and the more data you collect, the greater the likelihood that you’ll be able to write more papers from a given set of cases.
Interviewed and edited by Charlotte Cloutier.
References:
Aguinis, H., & Solarino, A. M. (2019). Transparency and replicability in qualitative research: The case of interviews with elite informants. Strategic Management Journal, 40(8), 1291–1315.
Bingham, C. B., & Eisenhardt, K. M. (2011). Rational heuristics: The “simple rules” that strategists learn from process experience. Strategic Management Journal, 32(13), 1437–1464.
Eisenhardt, K. M. (1989). Building theories from case study research. Academy of Management Review, 14(4), 532–550.
Eisenhardt, K. M. (1989). Making fast strategic decisions in high-velocity environments. Academy of Management Journal, 32(3), 543–576.
Eisenhardt, K. M., Kahwajy, J. L., & Bourgeois, L. J., III (1997). How management teams can have a good fight. Harvard Business Review, July–August, 75–86.
Eisenhardt, K. M. (2021). What is the Eisenhardt Method, really? Strategic Organization, 19(1), 147–160.
Martin, J. A., & Eisenhardt, K. M. (2010). Rewiring: Cross-business-unit collaborations in multibusiness organizations. Academy of Management Journal, 53(2), 265–301.
Ozcan, P., & Eisenhardt, K. M. (2009). Origin of alliance portfolios: Entrepreneurs, network strategies, and firm performance. Academy of Management Journal, 52(2), 246–279.
Pratt, M. G., Kaplan, S., & Whittington, R. (2020). Editorial essay: The tumult over transparency: Decoupling transparency from replication in establishing trustworthy qualitative research. Administrative Science Quarterly, 65(1), 1–19.