What is the research methodology being used?

The research methodology is a quantitative longitudinal tracking study. Instead of asking the same people to take the same survey at regular intervals, which would be a true panel methodology, we are deploying the survey to a different sample of ticket buyers with each wave of surveying, so that no one is asked to take the survey more than once. The protocol design uses a core module of key questions about audience attitudes about going out, then rotates through other modules of questions over the various deployments to explore specific issues that do not require time series results. We don’t know what questions we’ll need to be asking in July and August. What we are doing is creating the apparatus to collect whatever data is needed down the line, as well as time series data on key indicators. Finally, we will also develop qualitative protocols (e.g., protocols for virtual focus groups on specific topics), which participating organizations will be able to implement on their own, with a bit of coaching from AMS/WolfBrown.

How will the study treat regionality?

We expect to see regional differences in attitudes. It may or may not make sense to aggregate PACs by region or to group them in some other fashion. The dashboard allows us to create custom aggregates, so we can build whatever regional or other groupings make the most sense for everyone.

How do I create a “random” selection of records? For example, Broadway and Opera records skew older than pop events.

We are proposing that participating PACs deploy the survey to a pool of ticket buyers that represents the core of their audience. This might be Broadway only, or it might be a blend of Broadway and other product lines. We expect to see important variations by age, but we do not expect to see variations according to disciplinary preferences.

In the survey, we’ll be asking respondents to self-identify by their subscriber status, and we will also capture indicators of loyalty and donor status. Those fields will be available in the dashboard as filters (for cross-tabulation purposes). It’s important to know that some of the questions in the survey are general in nature, while others will be asked in reference to your organization.
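If you are drawing the sample yourself rather than exporting a pre-built list, a simple random sample of patron records is straightforward to produce. The sketch below assumes records have already been exported from your ticketing system; the field names (`email`, `product_line`) are hypothetical placeholders, not fields the study prescribes:

```python
import random

def draw_random_sample(records, sample_size, seed=None):
    """Return a simple random sample of patron records.

    Passing a seed makes the draw reproducible; if sample_size
    exceeds the pool, the whole pool is returned.
    """
    rng = random.Random(seed)
    if sample_size >= len(records):
        return list(records)
    return rng.sample(records, sample_size)

# Hypothetical patron records; a real export would come from your database.
patrons = [
    {"email": f"patron{i}@example.com", "product_line": line}
    for i in range(100)
    for line in ("Broadway", "Opera", "Pop")
]

sample = draw_random_sample(patrons, 50, seed=42)
print(len(sample))  # 50
```

Because the draw is random across the whole pool, product lines that skew older (e.g., Broadway or Opera) are represented roughly in proportion to their share of the pool, rather than hand-picked.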

What does it mean when you ask for ticket buyers “since Jan 2018?”

The contact information you select from your database should represent ticket buyers from Jan 2018 forward. Obviously, those same people may have attended events in your venue for many years prior to 2018, but the critical element is to consider them relatively “recent” ticket buyers.

How long will each survey take my patrons to complete?

The survey protocol should take somewhere between 12 and 15 minutes to complete.

Why does the study not poll the same people every single time?  Wouldn’t that be a better way to track changing sentiment over time?

A true panel study approach would involve recruiting a panel of respondents to take the survey multiple times.  The WolfBrown research team concluded that such an approach was not optimal for several reasons. Instead, we are using a modified panel approach in which the survey is deployed to randomized sub-groups of patrons within an organization’s patron database.

A number of factors contributed to this decision:

  • A good deal of time and effort is involved in recruiting individual patrons into a panel, and both are scarce resources in the present situation
  • We would expect some level of panel attrition over the long duration of the study, and therefore would need to over-recruit by a large margin in order to end with a large enough sample 8 months down the road
  • Recruitment would require coordinating respondent incentives across many organizations, adding a layer of complexity to the process
  • We were concerned about the “rehearsal effect” – a form of response bias associated with panel studies wherein respondents “game” the survey because they remember what they answered last time, and answer in reference to that, instead of their true feelings
  • Using a randomized sub-list approach will allow for comparison of results over time, subject to sampling error margins
  • Given the very large sample sizes being generated through multiple cohorts of arts organizations across cities and countries, results for one organization can be compared against results for 20 or more organizations in the same cohort, and also across cohorts, reducing the risk of drawing incorrect conclusions from anomalous results.
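As a rough illustration of the sampling-error margins mentioned above, the conventional 95% confidence margin for a survey proportion can be computed from the number of completed responses. This is the standard textbook formula, not a WolfBrown-specific calculation, and the sample size of 400 is an illustrative assumption:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% confidence margin of error for a sample proportion.

    p -- observed proportion (e.g., 0.5 for 50% answering "yes")
    n -- number of completed responses
    z -- z-score for the confidence level (1.96 for 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# e.g., 50% answering "yes" in a wave with 400 completed surveys
moe = margin_of_error(0.5, 400)
print(f"±{moe * 100:.1f} percentage points")  # ±4.9 percentage points
```

This is why comparing one organization’s wave against the pooled results of 20+ organizations is useful: the much larger pooled sample shrinks the margin of error and flags anomalous single-organization results.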

What other research have you or WolfBrown performed?

WolfBrown has conducted several large-scale cohort studies of a similar nature, including:

  • A study of the audience impact of 136 different choral music concerts presented by 23 choruses, commissioned by Chorus America
  • A study of attitudes about new theatrical works among single-ticket buyers to 32 theatre companies in the National New Play Network
  • A study of the digital media habits of performing arts ticket buyers, in partnership with Capacity Interactive, involving 58 organizations and yielding 27,000 survey responses

WolfBrown’s dashboard software, developed over the past 10 years, has been used by hundreds of arts organizations to view and interrogate data from audience surveys of all types, both individually and in cohorts.

AMS Analytics has been collecting arts sector data for more than 3 decades. We annually collect data from hundreds of organizations and tens of thousands of consumers. AMS has collected multi-community data for major national studies since the late 1990s for individual organizations, national service organizations and foundations. We deploy a variety of online technologies and operate the Analytics Suite.