Our Evaluative Approach at Lumina Foundation
Lumina uses a leadership model of philanthropy to guide our work. The model is characterized by three main attributes that are critical for organizational effectiveness: focus, flexibility, and fortitude. Because we are intently focused on one goal, and because we consistently measure progress toward that goal, we often reassess our strategies and tactics. This means we regularly work to refine our approach and redirect our work based on the evidence at hand. It is in this spirit that we engage in evaluation at the foundation. We use evaluation to inform grantmaking in ways that increase the impact of our investments and ultimately advance our goal of ensuring that more adults earn quality post-high school credentials.
When done well, evaluation is a powerful tool to inform the decisions that the foundation and our partners make about how best to use limited resources. Evaluation differs from other forms of measurement in that it focuses not only on observing whether change has occurred, but also on why or how it occurred.
Evaluation is best used to answer questions about what actions work best to achieve outcomes, how and why those outcomes are or are not achieved, what the unintended consequences have been, and what needs to be adjusted to improve execution.
Principles of our evaluative practice
We cannot evaluate every grant or every aspect of a strategy. As part of our practice, we set the goals or outcomes we hope to achieve and work backward from there to determine what we need to know and what type of evaluative effort can best provide those lessons. One criterion for choosing what to evaluate, in partnership with our colleagues, is openness to change and readiness to challenge strongly held beliefs. Several other criteria guide our decisions about where to put evaluation resources. The most common considerations include:
- Learning from emergent models/approaches: When a department or team needs to assess the progress of new operational models or approaches, we may choose to gather evaluative data about them. We then use that data to inform our internal efforts and the field.
- Learning for continuous improvement and feedback loops: Evaluation is also in order when we and our partners need a better understanding of how a cluster of important investments, or a specific program or body of work is performing. For instance, an implementation or outcomes evaluation of multi-site initiatives focused on common end-goals might be useful in informing our work and future investments.
- Learning to fill knowledge gaps or inform significant policy decisions: Evaluation can help resolve uncertainty and determine the relative cost-effectiveness of different interventions, models, or approaches.
- Learning from our “big bets.” Evaluations of some of our well-resourced investments—either single investments or those made over time—can provide important third-party information on the progress of implementation, impact, and potential for sustainability.
Evaluation design and methods
Each of our funding strategies is guided by a racial equity lens and complemented with a theory of change and strategic learning questions. The theory of change is grounded in research and evidence, helping us understand what we know and what we need to know. By filling these gaps in our knowledge, we can achieve our end goal of having key stakeholders implement evidence-based practices and policies across each of our concentration areas. As noted, we work with our colleagues to determine which internal efforts should include an evaluation and to ensure that strategic learning is embedded in the work. Through our investments, we also seek to understand how to push not just our own learning, but that of our grantees. For instance, lessons from their recent responses to COVID-19 and the racial justice movement can help them develop a broader perspective on how their work will need to continually adapt and change. We work to mutually support our grantees, the foundation, and the field to learn how our partners are adapting, and we share these learnings with others.
We use a racial equity lens to guide all our research and evaluation efforts. What’s more, we work continually to improve our equity practices and those of our evaluation partners. We pay close attention to the culturally responsive and equitable evaluation (CREE) capacity of the evaluators with whom we partner.
We employ a mixed-methods approach to nearly every evaluation, balancing numeric data (which focuses on the “what”) with data that focuses on the reasoning behind the actions of individuals, societies, and cultures: the “why” and the “how.”
Communication and dissemination of actionable findings are critical components of the work, and we use a mix of methods to amplify the findings. Importantly, the findings are focused on key stakeholders and what they need to do to advance the work.
Three main uses for our evaluations:
- To inform foundation practices and decisions.
- To inform grantees’ practices and decisions.
- To inform the field.
The purpose of an evaluation—which includes its planned audience, use, and timeline for when evaluation findings will be most useful—is central. For an evaluation to have the most positive impact, its questions, methods, and timing all must flow from a clear understanding of how the findings will be used, when, and by whom.
CREE incorporates cultural, structural, and contextual factors (e.g., historical, social, economic, racial, ethnic, gender) using a participatory process that shifts power to the individuals most affected. CREE is not just one evaluation method; it is an approach that should be infused into all evaluation methodologies. CREE advances equity by informing strategy, program improvement, decision-making, policy formation, and change.