Conversation Map Deep Dive
“How do accelerators measure success?”

We all have a love/hate relationship with metrics.  One participant shared “proper measurement of outcomes is usually incredibly resource intensive and diverts focus from our core business activities.” For accelerators, the measurement issue is further complicated by the variety of stakeholders involved and the extended timeline of actual venture performance.

However, this inability to speak cogently about measurable effectiveness is damaging our credibility and keeping capital on the sidelines.

Who are we measuring for?

Accelerators feel pressure to report metrics from both internal and external stakeholders. Because definitions "of success vary for different stakeholders in the accelerator ecosystem," it can be hard to know which metrics are worth tracking. As one participant put it: "Do we aggregate key metrics by industry, sector, region, etc.? Or do we continue to pick and choose and pile on more metrics?"

One suggestion is to use an independent third party (or a collaboration like the “partnership between Village Capital, Unreasonable Institute, and Emory University”) to design a small set of fixed metrics that would create a layer of comparable information across all accelerators–primarily for the benefit of the accelerators and the entrepreneurs they serve.

Are we gathering information or just collecting data?

The following is a list of specific metrics highlighted in the conversation map:

    • Grants Raised*
    • Investment Capital Raised*
    • Revenues*
    • # of Employees*
    • # of Customers*
    • Reach to rural customers
    • 1-3 outputs specific to their business
    • % diversity in leadership, employees and suppliers
    • Examples of community involvement
    • # of people served*
    • # of graduates still operating*
    • % women and minority-led businesses
    • Acres of farmland and # of farmers added to value chain
    • Solving for water and waste per GIIRS

These metrics raise an important issue: Data points without the context to create meaning cannot yield actionable information. The metrics followed by an asterisk (50%) are data metrics that lack the context to become information.

For example:

“# of Employees hired each year” is a common accelerator metric. If Company A grows to 12 employees in its 1st year and Company B only grows to 3 employees, which does the data indicate is more successful?

Company A.

But adding the context that both Company A and Company B have revenue of $500K changes the equation. The context of revenue reveals that Company A is generating roughly $41.7K/employee while Company B is generating roughly $166.7K/employee.

Which does this information say is more successful?

Company B.

It is only by adding context that we can derive meaning from data points to transform them into information. Such contextualized metrics can be normalized and comparable across organizations.
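The arithmetic in the Company A/B example can be sketched in a few lines of Python. The function name and figures below are illustrative only, not part of any actual accelerator reporting tool:

```python
# Illustrative sketch: turning a raw data point (# of employees) into
# information by adding the context of revenue.

def revenue_per_employee(revenue: float, employees: int) -> float:
    """Normalize revenue by headcount so ventures become comparable."""
    return revenue / employees

# Company A: 12 employees; Company B: 3 employees; both at $500K revenue.
company_a = revenue_per_employee(500_000, 12)  # ~ $41.7K per employee
company_b = revenue_per_employee(500_000, 3)   # ~ $166.7K per employee

# The raw headcount favored Company A; the contextualized metric favors B.
assert company_b > company_a
```

The same normalization step is what makes a metric comparable across organizations of different sizes.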

Many accelerators are drowning in data, but gathering little to no actual information for themselves or their stakeholders.

Similarly, the ratio of output metrics to input metrics helps to assess the efficiency of a venture. But to understand effectiveness (i.e., impact), one must track measurable outcomes.

Are we asking the right questions?

It’s easy to see how standardized performance metrics introduce a temptation to rank and make claims about “who is the best or worst accelerator.” One participant noted: “The question shouldn’t be which is the best accelerator, but which is the best accelerator for this particular venture?”

Re-framing success in this way gets ego out of the way and opens the door to collaboration and aligned incentives.  Ian Fisk from Mentor Capital Network and Andy Lieberman of the Global Social Benefit Incubator have teamed up on the Accelerator Selection Tool Collective Impact Project to begin defining the information and metrics we need to maximize fit between accelerator and venture.

One key finding of the Measuring Value Created by Impact Incubators and Accelerators report from ANDE, iDEV and Agora Partnerships, for example, was the need to better distinguish between Incubators serving early-stage (pre-revenue, pre-customer, pre-product) ventures and Accelerators serving growth-stage (post-revenue) ventures. Homing in on this distinction (and the gap in the ecosystem it represents) opens concrete opportunities for growth-stage accelerators and growth-stage investment funds/firms to “align their models and focus to ensure graduates of an accelerator meet the requirements of what funds consider ‘investment ready.’”

Understanding The Impact Ecosystem

“Ventures do not operate in a vacuum.” A venture’s effectiveness in a complex system, as one participant pointed out, “usually requires multiple initiatives, movements, and policies working together.” The goal, one person responded, is “to create a network effect.”

Leveraging the power of these network effects is how to magnify the power of impact accelerators.

In ecology, “volunteer plants” are the first to inhabit a new territory, one that would be hostile to other species. Over time, those volunteer plants actually change the environment around them, making it more inviting to other plants that have lower risk tolerances or different resource requirements.

In our impact ecosystem, many of the organizations we support are like those “volunteer plants”. They are hardy and resilient, and their success changes the environment around them, paving the way for others to follow. But this process takes time. “We…track our alumni over many years.  We know it could take 5 years in some cases, so we must structure our [data] system well today to yield the benefits.”

We need to ensure we’re asking the right questions, gathering information (not just data) and building on a shared definition of success in our measurement efforts.