After the excitement and adrenaline of deciding to convene and carrying out a convening, conveners and participants may be left with a lingering "so what?" feeling.  Measuring the outcomes of a convening can be tricky: some outcomes take time to emerge, and others, such as changes in attitude, are hard to see.  The art and science of measuring convening outcomes is difficult, but not impossible.

Recently, Conveners.org, the Skoll Foundation, and TCC Group engaged leading conveners in the impact ecosystem to discuss the evaluation of convenings.  Participants included Concordia, the Gates Foundation, Intentional Media, the Obama Foundation, Opportunity Collaboration, the Rockefeller Foundation, Social Venture Circle, and Synergos.  The event surfaced many ideas and insights, which we are capturing in this blog series.  Our last two posts in this series laid the foundation for what to evaluate.  So now that we know what we think convening can achieve, how do we measure it?  That is the focus of this part 3.

It may be no surprise that nearly every convener uses both registration forms and after-event surveys to collect information about the participant experience.  These cost-effective and relatively well-understood tools are the most common way to assess whether a convening achieved the desired result.  We found that the data collected in surveys were being used in communications and reports for a wide range of stakeholders:

  1. The public
  2. The field/industry
  3. Participants
  4. Sponsors
  5. Organization leaders (board, trustees, executives)
  6. Internal team

Survey questions were intended to provide information to support conveners in creating better events, achieving greater impact, and unlocking resources.  Much of the information collected is used to report on the convening right away.  As Hanne Dalmut highlighted, for Concordia these snapshots ("80 percent of participants felt X" or "believe Y") are helpful for bolstering the narrative in the report.  Sponsors need to know that their investment of money will achieve their desired goals, and organization leaders need to understand that a convening is going to be a more effective way to advance their mission than some other intervention.

However, registration forms and surveys often ask questions that return relatively unhelpful information (e.g., "the room was too cold").  One of the simplest and most important pieces of guidance around surveys is to ask who will use the information and how they will be able to use it.  If the answer is no one, then the information is probably not worth collecting.

Another discussion around surveys concerned timing.  There is a lot of pressure on conveners to get a report out quickly, and there is a strong belief that participants are unlikely to fill out more than one survey.  As a result, almost all convening surveys are conducted in the immediate aftermath of the convening, when response bias is likely to be highest and the emergence of outcomes at its lowest.  Conveners who administered surveys several months after the event tended to report very low response rates.

It may be no surprise that the first reactions called simply for better survey tools that improve on the experience SurveyMonkey is able to deliver.  Tools like PlayVerto, Poll Everywhere, or Sli.do were all shared as resources to improve survey completion rates.  Another way to increase response rates is to be deliberate about when you ask participants to complete the survey.  Efrain Gutierrez of the Obama Foundation shared, "I've been able to get 10-15 minutes of an event, going on stage, saying this is important, this is why we need your feedback, do the survey now.  But getting the time from the folks organizing the event is not easy."  We at Conveners.org have definitely seen this to be true: making time for participants to provide feedback during the event can lead to 70-80% completion rates, versus the industry average of 20% for surveys.

However, Valerie Redhorse-Mohl of Social Venture Circle captured the group sentiment, noting that "we would love to find a better tool other than surveys – how it is delivered and how the information is gathered."  Only a few conveners shared methods for collecting participant perspectives that went beyond the survey.  Gurpreet of the Skoll Foundation shared that they collect the experiences and insights of the 40 foundation employees who attend the Skoll World Forum as a proxy for understanding the impact of the experience.  Topher Wilkins of Opportunity Collaboration shared, "we try to talk to everyone who comes to OC in advance – why they are showing up, who they want to talk to – I do that personally in the lead up to the event and again on the back end."  And Jared Raynor of TCC Group offered additional evaluation methods, including "intense period debriefs, outcome tracing, secondary data analysis – but many of these require a defined end place that you are looking to go."  Jared also raised the possibility of asking people who were not at the convening (e.g., influencers, key informants) what they heard, or asking non-attendees questions about outcomes without referencing the convening at all.

There is a lot more work to be done on methods for measuring.  However, as part of our commitment to fostering shared learning and engagement, we are pleased to announce a shareable question bank as a starting place.  We are gathering and organizing the different questions that people are using related to convenings.  We encourage you to take a look, see if something strikes you as useful, and contribute your own good questions.  We are also considering a minimum common data set for convenings that would allow for some cross-convening analysis; look out for a call to action on that in the future.