Market Research Topics - Displayr

Displayr and Qualtrics Integrations

Qualtrics has been an unqualified success in recent years, democratizing survey scripting and data collection; it is used by organizations globally to drive customer experience programs and many other types of survey research. At the same time, Displayr has built an unparalleled product for survey analysis and reporting. From simple crosstabs to advanced analytics, from PowerPoint automation to beautiful dashboards, Displayr makes everything easy.

This is why integrating Qualtrics with Displayr makes perfect sense. The new Qualtrics integration means you can automate everything, instantly, from data collection through to insight delivery. And even if you are building a new report from scratch, once your Qualtrics data is connected you'll benefit from Displayr's exploratory analysis tools to help you find and build the story in your data.

Integrating Qualtrics

The key to connecting Qualtrics and Displayr is direct integration. Via the Qualtrics API, you can connect your data to Displayr's vast array of analysis and reporting features and control how often your report, and its underlying analysis, is updated. Displayr automatically cleans and formats your Qualtrics data so everything is ready to go in an instant. This means:

  • Crosstabs and visualizations are updated
  • All 'Rules' and conditions are updated
  • Table structures and formats are automated
  • Dashboard reports and infographics can be made available in real time
  • Any PowerPoint reports created using the Qualtrics API are automatically updated.

In addition, Displayr makes it easy to dive deeper into your analysis. This means that any analysis technique is now available to you including regression, PCA, clustering, latent class analysis, machine learning, MaxDiff, conjoint, TURF, and so much more. In fact, there's no multivariate analysis you cannot perform in Displayr.

Is the Qualtrics API free?

Qualtrics offers integration capabilities with a variety of software, including Displayr. To access the Qualtrics API, you need Administrator access in Qualtrics. You can then retrieve the Qualtrics API key and paste it into the integrations pop-up box that appears when you click the 'connect data' button in Displayr. The Qualtrics integration works with the Displayr free, Displayr trial, Displayr Professional, and Displayr Enterprise licenses. The Displayr free license is limited to a data set with no more than 1,000 rows and 100 columns of data.
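For the technically curious, here is a minimal sketch of what a raw call to the Qualtrics REST API looks like from R, using the httr package (within Displayr itself you never need to do this; you just paste the API token into the connect data dialog). The datacenter subdomain and token are placeholders, and the endpoint shown is from Qualtrics' v3 API:

```r
# Minimal sketch of a raw Qualtrics v3 API call from R (illustrative only;
# "yourdatacenterid" and the token are placeholders).
library(httr)

base_url <- "https://yourdatacenterid.qualtrics.com/API/v3"
token    <- Sys.getenv("QUALTRICS_API_TOKEN")  # keep keys out of scripts

# List the surveys this account can access
resp <- GET(paste0(base_url, "/surveys"), add_headers(`X-API-TOKEN` = token))
stop_for_status(resp)
surveys <- content(resp)$result$elements
sapply(surveys, function(s) s$name)  # survey names
```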

Are there disadvantages of using Qualtrics?

Qualtrics is a great survey collection platform and has solid analysis capabilities. However, professional market researchers and consumer insights teams often require more functionality than the Qualtrics platform offers, and the Qualtrics-Displayr integration provides all the tools professional researchers need. Professional researchers are sometimes limited by the following Qualtrics disadvantages:

  1. Limited/rigid crosstabs features. Researchers often need to churn out and quickly sort through hundreds of crosstabs. They also need flexibility in merging columns, fusing different tables and questions, creating custom calculations within or across tables, and setting tables up to meet individual specifications.
  2. Limited PowerPoint reporting functionality. Most researchers report in PowerPoint, and require software to connect their data to their PowerPoint reports so they can be automatically updated with new data or when the data changes.
  3. Limited advanced analysis techniques and no ability to work in code. Researchers need to use a wide range of statistical analysis techniques for different types of data. They also occasionally prefer the flexibility of using R code for calculations or dashboards.
  4. Limited dashboard design capabilities. One of the reasons most researchers use PowerPoint is its ability to add narratives and images to the data stories. Researchers need to make insights easy for their audience to understand and to have live, updated dashboards. So having online, interactive PowerPoint-style reports that are connected to their data gives them the best of both worlds.

Setting up the Qualtrics API

The following video shows how to easily connect your Qualtrics data using the Qualtrics API.

You can also find more information here: How to Import Qualtrics Data in Displayr

 

Take the Next Step

If you want to know more about data integration or Displayr generally, book a demo or take a free trial.

Creating a composite or “mash-up” summary table in Displayr

 

Displayr can create bespoke calculations, making it easy to go beyond the observed data to help tell your story. This saves you from using multiple applications, for example, having some of your workings in Excel.

Indeed, you can replicate much of what you might want to do in Excel using the Calculation Grid. It can be used to create interim calculations and, importantly, to combine different types of data in one table.

Beginning with the end in mind ...

The end game here is to create a table like this, which summarizes data from several questions in a single, easy-to-format, matrix-style table.

 

Key Inputs

This table is built using these four inputs ...

  1. Ranked (disguised) data on the Main Cell Phone provider. Given the structure of the market, we only want to focus on the Top 3
  2. Net Promoter Score results (click for more information on Net Promoter Scores or NPS)
  3. Satisfaction with three critical elements of the service offer. The table shows z-statistics to tease out relative strengths and weaknesses.
  4. A filter control (click for more information).

All inputs 1-3 are linked to the filter control.

 

Earlier, I'd also prepared cross-tabs for Main Phone Company by all demographics and sorted them in order of significance to help zero in on the main differences.

Cross tab inputs

Creating our composite or summary table.

Take a look at the process in action in this short video.

The key steps are:

  1. Insert a Calculation Grid of the required dimensions
  2. Double-click into cells to edit labels and enter simple text as we go (text information needs to be contained within "quotation marks")
  3. Copy selected cells from the "Main Phone Company" table and paste them into the Grid. Formulae with references are created automatically, e.g., table.Main.phone.company.4[1]
  4. Enter a formula for the first Net Promoter Score calculation (Promoters minus Detractors) by clicking into the required cells and adding mathematical operations as we go: table.Net.Promoter.Score.2[1, 1]-table.Net.Promoter.Score.2[3, 1]
  5. Using a bit of Displayr magic, enter code to sort the satisfaction scores in memory and read off the label of the highest-ranked row for the first column: names(sort(table.satisfaction.2[, 1], decreasing=TRUE))[1]. This gives us "Key Strength".
  6. Repeat step 5 for "Key Weakness," changing decreasing=TRUE to decreasing=FALSE in the formula.
  7. Select the formulae created in steps 4, 5, and 6 and drag to autofill the same formulae for the other columns.
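If you are curious how the grid formulas work outside of Displayr, here is a minimal standalone R sketch of the same logic, with hypothetical matrices standing in for table.Net.Promoter.Score.2 and table.satisfaction.2 (the numbers are made up):

```r
# Hypothetical stand-ins for the Displayr tables referenced above
nps <- matrix(c(40, 35, 50,   # Promoters (%), one column per provider
                35, 40, 30,   # Passives (%)
                25, 25, 20),  # Detractors (%)
              nrow = 3, byrow = TRUE,
              dimnames = list(c("Promoters", "Passives", "Detractors"),
                              c("Provider A", "Provider B", "Provider C")))

satisfaction <- matrix(c( 0.8, -1.2,  2.1,   # z-statistics per service element
                         -2.3,  0.4, -0.5,
                          1.5,  1.0, -1.8),
                       nrow = 3, byrow = TRUE,
                       dimnames = list(c("Network", "Price", "Service"),
                                       colnames(nps)))

# Step 4: NPS = Promoters - Detractors, per provider
nps["Promoters", ] - nps["Detractors", ]

# Steps 5 and 6: label of the highest- and lowest-ranked satisfaction row
sapply(colnames(satisfaction), function(co)
  names(sort(satisfaction[, co], decreasing = TRUE))[1])   # Key Strength
sapply(colnames(satisfaction), function(co)
  names(sort(satisfaction[, co], decreasing = FALSE))[1])  # Key Weakness
```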

The only limit is your imagination.

As you can see, Displayr's Calculation Grids allow you to customize your analysis, making it easy to go beyond the observed data without having to do your workings elsewhere. You can easily change and manipulate calculations once they are set up; everything is connected. This saves you time, which you can then spend on other ways of adding value to your data.

If you want to know more about Calculation Grids or Displayr generally, book a demo or take a free trial.

 

Much faster visualizations of single numbers

Recap - what's a single number visualization?

Many dashboards and infographics are collages of images and visualizations. For example, the dashboard below consists of 32 visualizations on top of a background image. Each visualization below shows a single number, either as a bar or a circle. When a visualization contains a single number, we call it a Number Visualization. (Most traditional visualizations show multiple numbers; e.g., a pie chart has a number for each pie segment.)

They're now faster

The underlying technology we initially used to create the single number visualizations was relatively slow. The more you had on a page, the slower things got. They have now been rebuilt with faster technology.

The user interface has changed

The old user interface was a bit clunky. The new interface is more straightforward and more flexible. For more information see How to Create a Number in a Shape Visualization.

Migrating existing visualizations

Any dashboards you've already created will still work, and there is no need to do anything. However, they will be using older technology. If you want to switch to the newer visualizations:

  1. Select each visualization
  2. Press the Visualization button in the object inspector.
  3. Choose your desired visualization in the Number group.

 

Fast track categorizing and coding text data

Overview

Displayr's text coding functionality is designed with the needs of the survey researcher front and center. For many years, the text categorization functions in Displayr have supported what we might call a manual workflow: we make it easy to view, sort, and filter text responses, create and structure categories, and assign (or code) responses to those categories. More recently, we've added semi-automated functions to the interface and extensively upgraded the algorithms that drive them. We believe our tools in this space are state-of-the-art ...

  • In selecting "Semi-Automatic" Text Categorization, users are immediately presented with a draft set of categories, with the bulk of the data already coded. So in a matter of minutes you are off to a great start
  • The algorithms that create this output are based on analyzing context and meaning (not word similarity, like many other tools). Your draft code frames are intuitive from the get-go
  • We've made this work effectively for multiple-response categorizations (where responses can be assigned to more than one code), which are historically more challenging to automate
  • For tracking and related research, we have specific algorithms that recognize and categorize unaided brand awareness questions
  • Once you have your draft categories, the user interface makes it easy to edit them, with tools to combine, rename, and split categories

So the workflow now becomes:

  1. Let Displayr do the hard work and get you most of the way there (via a draft categorization) in a fraction of the time it would take manually
  2. You then fine-tune and edit the categories via the intuitive user interface.

Accessing the automated functions

The quickest way to do this is to select a text variable in the Data Set tree, hover above or below it to '+' insert a new variable, and follow the prompts via the Semi-Automatic menu path:

 

 

We know some users might want to start the process manually. This could involve reading through some responses and creating some pre-planned categories. Even if you follow the Manual menu path, you can still access the Automatic categorization function, and at any time you can speed up the coding of the remaining uncategorized data. In the categorization interface, set "Sort by:" to Fuzzy match (as matching is a key building block of the algorithm), and the "Auto" button appears:

 

 

The functionality and workflow in action

Take a look at the process in action in this short video. It uses an open-ended question on how people feel about Tom Cruise as input.*

You can get a broader overview of text analysis methods and solutions in this webinar recording: How to quickly analyze text data.

Streamline your text data analysis.

The process of turning open text responses into usable data is traditionally time-consuming and expensive (being often outsourced). Displayr's text categorization tools are state of the art: you can create a draft categorization in minutes automatically and then quickly fine-tune it into a polished code frame. If you use a lot of text data and want to know more, book a demo or take a free trial.

 

* Discretion is advised - the data used in the video is from a real survey containing unvarnished attitudes to Tom Cruise. Some respondents have written unkind, distasteful, and potentially offensive things. Displayr does not condone or endorse any of the comments that have been made.

Save time translating and coding text data

While there are several ways to translate text data in Displayr, our text categorization function is made even more powerful by having Automatic Text Translation built into the interface. When you insert a new text categorization variable (+ > Text Categorization > .... > New - more on this in the video below), you are given the option to Translate the text:

 

 

Use any Source and Output language.

You are then prompted to select the Source language:

  • Automatically detect language
  • Specify with variable (use this option if the source language is identified by a variable in your data set. This option is particularly useful if your file contains multiple languages)
  • A specific language - the default language is English.

You can set the Output language here as well.

 

Text Translation in Action

Consider a simple scenario where data has been collected on hotel reviews and includes a "comment" option - guests of course need to complete the survey in a language they are comfortable with:

  • The data file will have multiple languages, and (typically) an additional variable classifying the language selected.
  • The person responsible for categorizing (coding) the data will want to do so in their language.
  • The outputs, being the categories (or code-frame), will also need to be in their language.

Displayr makes all this very easy, including dealing with multiple language inputs simultaneously. And once translated, you can create an initial categorization (code frame) automatically.

Take a look at the process in action in this short video, covering both single and multiple language translation ...

 

Streamline your text data translation and analysis.

The process of translating text data into the analyst's language is traditionally time-consuming and expensive (being typically outsourced). Displayr's translation tools are now available directly in the text categorization interface - you can even create a draft code frame in your preferred language automatically. If you use non-native language text data and want to know more, book a demo or take a free trial.

Automatically highlight key results on bar charts

Finding the balance between detailed data and charting.

One of the dilemmas a researcher faces in building a report is working out just how much detail to show. Consider a typical crosstab (in this case using a banner) that has lots of interesting significant differences, as indicated by the blue and red number formatting:

 

 

Of course, the key finding here is the Total or Net result, so we have to show it. Rather than asking the audience to study the crosstab in detail, a common way of visualizing this type of result is a bar chart with call-outs or labels that highlight the key findings:

 

 

This takes a while to set up in PowerPoint. And in the case of a typical tracking study, where the results change wave to wave, it can be very tedious to update. Not anymore: Displayr's visualization suite now contains an option to automate this type of chart, in seconds! See it in action in this short video:

 

 

Try Bar Charts with Skews now

Existing customers will quickly see how much time they can save using this new visualization. Anyone else can book a demo or take a free trial.

Learn More about Weighting in Displayr

Weights in Displayr - General Resources

These are the best places to start to learn about weighting in Displayr.

From here, the resources below dig into more specific aspects and uses of weights.

Effective Sample Size

 

Unique Uses for Weight Variables

 

Shapley Regression and Johnson's Relative Weights

Johnson's Relative Weights isn't about weighting survey data, but the technique comes up in results when searching for information about weighting on our blog or in our technical documentation. The collected resources on this topic are below.

The Correct Treatment of Sampling Weights in Statistical Tests

Case study

A simple case study is used to illustrate the issue. It contains two categorical variables, one measuring favorite cola brand (Pepsi Max vs Other), and the other measuring gender. The data set contains five weights, each of which is used to illustrate a different aspect of the problem. The data is below. All the calculations illustrated in this post are in this Displayr document.

Consider first an analysis that doesn't use any of the weight variables (an unweighted analysis). The output below shows a crosstab created in SPSS. A few key points to note:

  • For the purposes of this post, it is easiest to think of this test as comparing the proportion of Females that prefer Pepsi Max (10.4%) with the proportion of Males (19.4%).
  • The sample sizes are 67 for Female and 299 for Male.
  • The standard chi-square test results are shown in the Pearson Chi-Square row of the Chi-Square Tests table, and its p-value is .083. So, if using the 0.05 cutoff, we would conclude there is no relationship between preferring Pepsi Max and gender.
  • Four other tests are shown on the table. Pearson's test is probably the best for this table (for reasons beyond the scope of this post). However, the other tests, which make slightly different assumptions, give broadly similar results. This point is important: different tests with different assumptions will give different, but similar, results, provided the assumptions aren't inappropriate.

If you studied statistics, there's a good chance you were taught to do a z-test to compare the two proportions. The calculations below perform this test. They are written in the R language, but if you take the time, you will find it easy to follow the calculations. The computation of the statistic is different from the chi-square test statistic, but the tests are equivalent in this unweighted case, with identical p-values.
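The original code listing hasn't survived in this version of the post, so here is a hedged reconstruction of a standard two-proportion z-test in R. The counts are back-calculated from the reported percentages (7/67 = 10.4% for Females, 58/299 = 19.4% for Males), and the code reproduces the .083 p-value:

```r
# Hedged reconstruction of the z-test described above
x <- c(7, 58)    # numbers preferring Pepsi Max (Female, Male)
n <- c(67, 299)  # sample sizes (Female, Male)
p <- x / n                                     # 10.4% and 19.4%
p.pooled <- sum(x) / sum(n)                    # pooled proportion under H0
se <- sqrt(p.pooled * (1 - p.pooled) * (1 / n[1] + 1 / n[2]))  # standard error
z <- (p[1] - p[2]) / se                        # z-statistic
2 * pnorm(-abs(z))                             # two-sided p-value: about .083
```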

Getting the wrong answer using weights in SPSS

The output below shows the SPSS results with a weight (wgt1) applied. These results are incorrect for two reasons. The first is that, by default, SPSS has rounded the cell counts and used these rounded counts in the chi-square test and the percentages, meaning that every single number in the analysis is, to a small extent, wrong. (In fairness to SPSS, this routine was presumably written in the mid-1960s, and this type of hack may have been necessary for computational reasons back then.)

The output below uses the un-rounded results. The first thing to note is that the p-value is much lower than with the unweighted result (.016 versus .083). If testing at the .05 level, you would conclude the difference is significant. So, you may end up with a completely different conclusion. Which, if either, is correct? In this case, the unweighted result is correct. Let me explain:

  • If you look at the table below, you will see that it reports the same percentages as in the earlier example: 10.4% for Female and 19.4% for Male. So, the weights have not changed the results at all. Why? We weight data to correct for over- and under-representation in a sample. In this example, the weight is correcting for an under-representation of women. Consequently, the weight does not change the percentages in any way, as the percentages are within the gender groups.
  • Note that the sample size for the Female group is shown in the table as 183, and the same sample size is shown for the Male group. This is how SPSS has computed the test: it has used the weighted sample sizes. This is obviously wrong. In the sample we only have 67 females, and the weight doesn't change this. Whether we weight or not, we still only have data for 67 females, so performing a statistical test that assumes we have data from 183 females is not appropriate.
  • This isn't really a flaw of SPSS. If you read its documentation, it explains that this routine is not meant to be used with sampling weights. SPSS even has a special module, called Complex Samples, which will perform the test correctly and is what they recommend you use.

Getting the wrong answer using regression in R

The example above can't be reproduced in R, because its standard tools for comparing proportions don't allow you to use weights of any kind. However, we can illustrate that the problem exists in R routines that do support weights, such as regression.

Unweighted logistic regression

It's possible to redo the statistical test above using a logistic regression. This is illustrated below. The yellow-highlighted number of 0.0887 is the p-value of interest. It's basically the same as we got with the standard chi-square test used at the beginning of the post (which had a p-value of .083). The statistical test in the logistic regression is assessing something technically slightly different, and it's typical that tests asking slightly different technical questions get slightly different results.

Weighted logistic regression

R's logistic regression does allow us to provide a weight. The output is shown below. Note that once again the p-value has become very small. The reason is that the weighted regression is, in its internals, making exactly the same mistake we saw with SPSS's chi-square test: it's assuming that the weighted sample size is the same thing as the actual sample size. Note that it gets basically the same result as the weighted chi-square test as well. So, we get the wrong answer by supplying a weight.
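As the output isn't reproduced in this version of the post, here is a minimal sketch of the mistake, using a hypothetical cola data frame built to match the case study's structure (67 Females, 299 Males, with wgt1 weighting each gender to 183):

```r
# Hypothetical stand-in for the case study's data set
set.seed(6)
cola <- data.frame(
  pepsi.max = factor(ifelse(
    rbinom(366, 1, rep(c(0.104, 0.194), c(67, 299))) == 1,
    "Pepsi Max", "Other")),
  gender = rep(c("Female", "Male"), c(67, 299)),
  wgt1   = rep(c(183 / 67, 183 / 299), c(67, 299)))  # averages to 1

# Naive weighted logistic regression: glm() behaves as if the weighted
# sample size were the real sample size (R even warns about non-integer
# counts - a hint that this is the wrong tool for sampling weights).
summary(glm(pepsi.max ~ gender, family = binomial, data = cola,
            weights = wgt1))$coefficients
```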

Doing the analysis properly

In the world of statistical theory, this is a solved problem. All of the techniques above correctly compute the percentages with the weights. More generally, all the parameters calculated using traditional statistical techniques, be they percentages, regression coefficients, cluster means, or pretty much anything else, are typically computed correctly with weights. What goes wrong relates to the calculation of the standard errors.

Look back at the proportions test code shown earlier in the post. When the data is weighted, the part of that code that is wrong is the calculation of the standard error (se). You can see from first principles that it must be wrong, as it has no place to insert the weights. A correct solution is to rewrite that calculation using a technique known as Taylor Series Linearization, which computes the standard error in a way that addresses the weights appropriately. This is, sadly, not a trivial thing to do. I will disappoint you and not provide the math describing how it works, as it would take a page and, unless you have good math skills, it's not a page that will help you at all. Last time I checked, there were no good web pages about this topic. If you want to dig into how the math works, as well as explore alternatives to Taylor Series Linearization, please read Chapter 9 of Sharon Lohr's (2019) Sampling: Design and Analysis. But, warning, it's hard going.

So, how do we do the math properly? The secret is to use software designed for the problem. This means:

  • Use our products: Q and Displayr. They get the right answer for problems like this by default.
  • Use Stata. It provides excellent support for sampling weights (which it calls pweights).
  • Use IBM SPSS Complex Samples. SPSS has a special module designed for weighted data. It will give you the correct results as well.
  • Use the survey package in R. A minimal sketch follows below.
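Here is a minimal sketch with the survey package, reusing the hypothetical cola data frame from the earlier sketch; svychisq() performs the Rao-Scott corrected chi-square test via Taylor Series Linearization, and svyglm() is the weight-aware replacement for glm():

```r
library(survey)

# Declare the sampling design: simple random sample with sampling weights
design <- svydesign(ids = ~1, weights = ~wgt1, data = cola)

# Rao-Scott corrected chi-square test (Taylor Series Linearization)
svychisq(~pepsi.max + gender, design)

# Logistic regression designed for sampling weights
summary(svyglm(pepsi.max ~ gender, design, family = quasibinomial()))
```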

For example, the table below on the left shows the data in a Displayr crosstab that is unweighted. This gives the same result as we saw for the unweighted chi-square test in SPSS. The table on the right uses Taylor Series Linearization as part of a test called the Second-Order Rao-Scott Chi-Square Test. It gets basically the same answer. That is, the Taylor Series Linearization gets the correct answer. You don't need to do anything special in Displayr or Q to have Taylor Series Linearization applied; when you apply the weight, this is done automatically.

Similarly, here's the same calculation done using R:

And, if we use the appropriate logistic regression, designed for survey weights, it also gives the same conclusion (again, it is using slightly different assumptions, so the result is slightly different):

The hacks (which don't work)

Most experienced survey researchers are familiar with these problems, and they typically employ one of four hacks to try to fix them. These hacks are often better than ignoring the problem, but they are all inaccurate. In each case, they are attempts to limit the size of the mistakes that are made when using software not designed for sampling weights. None of the hacks solves the real problem, which is to validly compute the standard error(s).

Using the unweighted sample size

In the example above, we saw that the result was wrong because the weighted sample size was used in the analysis. Further, in this example, we can compute the correct result using the weighted proportions and the unweighted sample size. However, this doesn't usually work. It only works in the example above because we were comparing percentages between the genders, and gender was the variable used in the weighting. It is easy to construct examples where using the unweighted sample size fails.

To appreciate the problem, I re-run the same example using a different weight, wgt2, which weights based on the row variable in the table. You will recall that before we were comparing 10.4% with 19.4%. With the new weight, the percentages have changed dramatically, and we now have preferences for Pepsi Max of 44.8% and 62.6%. Note that the p-value in this example remains at .086.

However, when we compare the weighted proportions, but use the unweighted sample size, we get a very different result. Again, this incorrect analysis suggests a highly significant relationship, which is wrong.

Performing statistical testing without weights

If you have been carefully reading through the above analyses, you will have realized that in each of the examples, the correct answer was obtained by performing the test on the unweighted data. However, this isn't generally a smart thing to do. The only reason it works in the examples above is that we are testing two proportions and the weights are perfectly correlated with one of those variables in each of the tests. It is easy to construct an example where using the unweighted data for testing gives the wrong answer. In the table below, after the weight (wgt3) has been applied, there is no difference between the genders. As you would expect, the p-value, computed using Taylor Series Linearization, is 1. However, the unweighted data would still show a p-value of .086, which is not sensible given there is no difference between the proportions.

Scaling weights to have an average of 1

When using statistical routines not designed for sampling weights, it is easy to get obviously wrong results in situations where the weights are large or small. In academic and government statistics, it is common to create a weight that sums to 1. When this is done, if using software that treats the weighted sample size as the actual sample size, nothing is ever shown as statistically significant. It is also common to create weights that sum to the population size, as this means that all analyses are extrapolated to the population; this has the effect of making all analyses statistically significant. A common hack for both of these problems is to scale the weight to have an average value of 1. This is certainly better than using the unscaled weights, but it doesn't fix the problem. This has already been shown above: all the calculations above were performed using a weight with an average value of 1, and, as shown, the p-value was not computed correctly.

In addition to correctly computing the p-values, the useful thing about using routines designed for sampling weights is that you don't need to rescale the weights at all. This means that you can use weights that gross up to the population size. In the Displayr table below, for example, the weighted sample size of each gender is 100,000,000, and the p-value is unchanged.

Using the effective sample size in calculations

The last of the hacks is to use Kish's effective sample size approximation when computing statistical significance. This is typically implemented in one of three ways:

  • The weight is rescaled so that it sums to the effective sample size.
  • The effective sample size is used instead of the actual sample size in the statistical test formulas.
  • The effective sample size for sub-groups is used in place of the actual sample size for sub-groups in the statistical test formulas.
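Kish's approximation itself is simple: the effective sample size is the square of the sum of the weights, divided by the sum of the squared weights. A minimal illustration:

```r
# Kish's effective sample size: n_eff = (sum w)^2 / sum(w^2)
effective.sample.size <- function(w) sum(w)^2 / sum(w^2)

w <- c(rep(0.5, 100), rep(1.5, 100))  # hypothetical weights for 200 people
effective.sample.size(w)              # 160: less than the 200 respondents
```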

This hack is widespread in software. It is used in all stat tests in most of the older survey software packages (e.g., Quantum, Survey Reporter). We use it in some places in Q and Displayr where the math for Taylor Series Linearization is intractable, inappropriate, or yet to be derived. For example, there is no nice solution for dealing with weights when computing correlations, so we rescale the weights to the effective sample size.

This approach is typically better than any of the other hacks. But it is still a hack, and it will almost always get a different (and thus wrong) answer to that obtained using Taylor Series Linearization. For example, below I've used the effective sample size for the first weight and obtained the same (incorrect) result as when using the unweighted sample size (they will not always coincide).


To summarize: if you have sampling weights you are best advised to use statistical routines specifically designed for sampling weights. Using software not designed for sampling weights leads to the wrong result, as do all the hacks.

All the non-SPSS calculations illustrated in this post are in this Displayr document.

A Change in Displayr's Default Stat Tests for Numeric Data

We've released some new options in Displayr for customizing control over stat testing and, at the same time, we've changed the default used in Means statistical tests for any documents created from May 12th, 2020 onward.

Additional options

If you go into Appearance > Highlight Results Advanced there are now three new options:

  • Date controls whether testing is conducted against the previous time period or not. See Make Stat Tests Compare to Previous Time Periods.
  • Proportions test and Means test govern which rules are used when determining which statistical tests to use. These rules choose the tests contingent upon the structure of the data. Choices available include non-parametric, z-tests, t-tests, and Quantum and Survey Reporter tests. See Statistical tests for categorical and numeric data on the Q wiki for additional information.

Means test for numeric data now defaults to t-test

Previously, our default Means tests for numeric data have been non-parametric. In particular, for independent samples, Displayr has been testing using the Kruskal-Wallis test, and, for correlated samples, Friedman's test.

Now that users can change the options for statistical testing, we've changed the default Means test to the more powerful t-tests, for independent and paired samples respectively.
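In base-R terms, the change amounts to something like the following, shown here on hypothetical data for the independent-samples case (the paired case swaps in the paired versions of the tests):

```r
# Hypothetical numeric outcome measured in two independent groups
set.seed(1)
score <- rnorm(200, mean = rep(c(5, 5.4), each = 100))
group <- factor(rep(c("A", "B"), each = 100))

kruskal.test(score ~ group)  # the old default: non-parametric
t.test(score ~ group)        # the new default: the more powerful t-test
```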

Using Random Assignment in Pricing Research Studies

A basic problem with both Stated Willingness-to-Pay and the Price Sensitivity Meter is that they ask people to nominate prices. People find these questions difficult to answer because we typically react to prices while shopping; prices are not things we nominate in surveys.

So, the basic method used in pricing research is to ask questions that closely mirror the real world. If your questions are more consistent with how prices are presented in real life, then you have a better chance of getting good data, a property which, in economic jargon, is called ecological validity.

So, the approach used in our fifth technique, Random Assignment, is called Buy or Not Buy. With this method, you provide a description of the product and ask whether the respondent would or would not buy it. The more realistic the question the better, so in markets with many competitors, like the consumer goods market, you can make your questions more realistic by adding specific competitors into the question.

However, most market researchers don't like that style of question; they instead use a different approach called Purchase Intention. With this approach, the question asks someone, on a scale, whether they would buy a product at a certain price. See the image below.

With questions such as these, different groups of people are asked the same question, but the price point varies from group to group (hence the name Random Assignment). From these questions, we can create a demand curve to find the preferred price, just as in our third approach, Stated Willingness-to-Pay.

For example, below is a table and chart from a cable TV study where we were trying to work out how much more to charge for a bundle of new TV channels. In this study, we tested five different price points among 2,622 people. The advantage of this approach is that the data is more reliable because the questions are realistic, but the disadvantage is how expensive interviews are for a sample size that large. And even with such a big sample, there can still be problems with the data, such as sampling error between the different price points.

For example, this analysis shows that people were indifferent between paying $7.50 and $10 for the new TV channel bundle. So, if you used the demand curve for profit optimization, it would likely end up recommending a price of $10, which can be an issue because this conclusion could well be based on sampling error. And because sampling error can arise with these types of questions, most market researchers will often use the Choice-Based Conjoint technique instead.

 

For more examples of the other pricing research techniques, see: Price Salience, Price Knowledge/Awareness, Stated Willingness-To-Pay, Price Sensitivity Meter, Choice-Based Conjoint.

Using Price Knowledge/Awareness for Pricing Research Studies

Price Knowledge/Awareness is our second technique used for pricing research studies. You can find our first technique, Price Salience, here.

The basic idea of Price Knowledge/Awareness research is to compare and contrast what people think they paid versus what they actually paid. This technique is relatively easy; the hard bit is working out what people actually pay.

The chart below is a scatterplot example of price knowledge/awareness for a confectionery brand. The way we worked out the actual price of the brand's product in this study was by asking customers where they had made their last purchase. We then compared prices by looking through the various shops' sales record data. Another way to find this information is to ask for receipts from people who have just left a shop.

As you gather the guessed and actual price information, you'll want to plot each price point on a scatterplot. If the people surveyed have a good idea of what they spent, then all of the data will sit close to the red line, which represents the actual prices. However, in this category, you can see that the people surveyed had very little idea of what they were paying. On average, people thought they were paying more than they actually paid, which shows on the scatterplot as more shading above the red line. Overall, this is great news for the seller, as it means they can raise their prices without many negative consequences, which they did.
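A minimal sketch of this kind of plot, on hypothetical data where guesses skew high, as in the study described above:

```r
# Hypothetical guessed vs. actual prices; the red 45-degree line marks
# perfect price knowledge, so points above it are over-estimates.
set.seed(2)
actual  <- runif(200, 1, 4)                      # actual price paid ($)
guessed <- actual * exp(rnorm(200, 0.15, 0.35))  # guesses skewed high
plot(actual, guessed, xlab = "Actual price ($)", ylab = "Guessed price ($)")
abline(a = 0, b = 1, col = "red")
```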

For more examples on the other pricing research techniques see: Price Salience, Stated Willingness-To-Pay, Price Sensitivity Meter, Random Assignment, Choice-Based Conjoint.

Using Choice-Based Conjoint in Pricing Research Studies

This one is a bit more complicated than the first five techniques we've talked about. The idea is to find people's preferences by presenting them with tradeoffs between a series of products, each described in terms of certain attributes.

For example, below is a question that compares several different cell phone providers with different prices and features. Respondents are asked to pick a provider based on the offered package's prices and features. Then they are asked another, similar question, but with different prices and features. The magic of this approach is that once these questions are answered, we can estimate, with some complicated maths, each person's willingness-to-pay for individual product features.

When working in pricing research, one of the key outputs from a conjoint approach is the Median Willingness-to-Pay by Attribute Level. In the example below, we have sample data on product features from the U.S. cellphone market. To start, we have to set a baseline: within each attribute, the lowest level of performance is assigned a willingness-to-pay of $0, so everything else is relative to that number. So, we can see that 50% of people would be willing to pay $2.19 or more for an increase in hotspot data from 10GB to 20GB, and 50% of people would pay $9.71 or more for unlimited hotspot data relative to 10GB. But beware.

While this analysis says that 50% of people would be willing to pay an extra $9.71 to get an upgrade from 10GB to unlimited hotspot data, they will only pay this if there is no competition. If a competitor is offering unlimited hotspot data at a much lower rate, then that's what the market will bear, and you won't be able to charge the maximum willingness-to-pay.

The next key output that people love to get from conjoint studies is a Simulator, which predicts preference share. Or, if you spend a lot of time calibrating it, it can sometimes be used to predict market share. Looking at the example simulator below, you can see that AT&T has a market share of 51%. But what happens when we increase their price from $30 to $40? The preference share drops to about 37%. From that, we can then construct a demand curve to work out the profit-maximizing price, the same way we did in the previous techniques.

However, in practice, these optimizations are not always as useful as people envision. That's because people typically assume the models are completely accurate predictors of market share, and that's rarely the case, because so many key factors are ignored.

But in my history of consulting work, I have found the concept of Value Equivalence Line (VEL) to be much more useful. The idea of VEL is that a company should have a portfolio of products at different price points that match different benefits. With our cellphone company example, the idea is that the price points of the various phone plans should match the value of benefits included. So, the plans with successively higher price points will deliver more benefits.

We can apply the VEL approach with the Simulator to find the right price for each cellphone plan. In the simulator example below, we have just AT&T and four different price points, with everything else the same. The model suggests that the majority of people will prefer the cheapest option, as one would expect. But when trying to optimize a portfolio, our goal is to come up with four products that have similar preference shares. So, to achieve that, you edit the options other than price until the plans are broadly similar in preference share and roughly equal in value, as shown in the second image below.

And while the first five approaches are still important in pricing research, I find that this approach is the most useful because you are looking at the many different factors that affect price in relation to preference share.

 

For more examples of the other pricing research techniques, see: Price Salience, Price Knowledge/Awareness, Stated Willingness-To-Pay, Price Sensitivity Meter, Random Assignment.

Using Price Sensitivity Meter in Pricing Research Studies

An alternative approach (to Stated Willingness-to-Pay) for asking people what they will pay for something is known as the Price Sensitivity Meter.

This approach asks people four questions to determine their price preferences. The four questions are:

  1. At what price would you consider the product to be so expensive that you would not consider buying it? [too expensive]
  2. At what price would you consider the product to be priced so low that you would feel the quality couldn’t be very good? [too cheap]
  3. At what price would you consider the product to be a bargain—a great buy for the money? [cheap]
  4. At what price would you consider the product starting to get expensive, so that it is not out of the question, but you would have to give some thought to buying it? [expensive]

These questions are designed to work out the fairest price: a goldilocks price, if you will, not too expensive but not too cheap—just right.

You can figure out the right price using this approach in both Displayr and Q, which have built-in tools that quickly compute the Price Sensitivity Meter. For example, we'll use the same iLock data we used in our third approach, Stated Willingness-to-Pay.

We'll start with the first question by looking at the too expensive data. As you can see in the chart below, the thin dotted green line shows, for each price, the percentage of people who say that price is too expensive for the iLock.

For example, if you look at the $50 mark, you will see that only about 25% of people say that price is too expensive. So, if we sold the iLock at $50, we could expect about 75% of the market to consider buying it.

But when you jump up to $100, that percentage climbs to 50% of people saying the price would be too expensive, and at $400, about 87% of people say it's too expensive.

Now, for comparison, we'll overlay the too expensive information with the too cheap price data. In the image below, the dotted red line shows, for each price, the percentage of people who find that price too cheap—a price at which they would doubt the quality of the product.

As you can see, at the $24 mark, about 60% of people think that price would be too cheap for the iLock. At the price where the two lines intersect, $50.08, about 25% of the market believes the price is either too expensive or too cheap. Flipping those numbers around, this would be the just right price for about 75% of the market.

Now let's continue and overlay a little more information from the last two questions asked in the Price Sensitivity Meter. The first asks what price people consider to be a bargain, which is shown in the dark red line below. The intersection of this line with the too expensive (dotted green) line is very informative, because it shows that at $99, more people regard the price as being too expensive than as a bargain. So, if we are focused on maximizing appeal, we will most likely need to price the iLock more cheaply.

Now we want to see at what price people start finding the product expensive, so that the cost is not out of the question, but the buyer would have to give some thought to it; this is represented by the solid green line in the chart below. Where this line crosses the dotted red line shows what the minimum price should be: $50.04. At prices below that, people will find the product too cheap. Together, the intersections of these lines suggest that we need a price between $50 and $99.

Some people use the price sensitivity meter as an input into the calculation of demand curves. But I find it's a technique that is most useful if you are trying to get a basic idea of how people value what you’re trying to sell, as it only asks them about perceptions and never directly asks about purchasing.
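For the curious, here is a hedged sketch of how these crossing points can be computed from the raw answers. The stated thresholds are simulated, with parameters chosen so the too cheap and too expensive curves cross near $50, echoing the $50.08 above:

```r
# Hypothetical stated price thresholds from 500 respondents
set.seed(3)
too.expensive <- rlnorm(500, log(100), 0.8)  # "too expensive" thresholds ($)
too.cheap     <- rlnorm(500, log(25), 0.8)   # "too cheap" thresholds ($)

prices <- seq(5, 400, by = 0.5)
pct.too.expensive <- sapply(prices, function(p) mean(too.expensive <= p))
pct.too.cheap     <- sapply(prices, function(p) mean(too.cheap >= p))

# Price at which the two curves cross (about $50 with these inputs)
prices[which.min(abs(pct.too.expensive - pct.too.cheap))]
```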

For more examples of the other pricing research techniques, see: Price Salience, Price Knowledge/Awareness, Stated Willingness-To-Pay, Random Assignment, Choice-Based Conjoint.

Using Stated Willingness-To-Pay in Pricing Research Studies

Economic theory says that the way to figure out how to price something is to know the most that people will pay for a product. In economic jargon, this is referred to as the willingness-to-pay of each person in the market. The simple approach to this technique is to ask them directly—for their stated willingness-to-pay.

Below is an example of this simple approach by asking people how much they’d be willing to pay for an Apple iLock.

As you can see in the chart below, the very useful thing about Stated Willingness-to-Pay is the way you can show how many people in a market will pay for a product at different price points. For example, the chart shows that roughly 77% of the market stated a willingness-to-pay of $50 or more for the Apple iLock, provided the data is good.

Note the little cliffs in the chart; these are known as price shelving, and they reflect the people who gave rounded answers.

When we multiply the curve in the chart by the size of the market, which in this case is the number of households, we have a good old-fashioned demand curve. If you've studied economics, you've probably seen the axes on the chart flipped, but they are interpreted the same way.

Once you have your demand curve, you can use basic economics to find the profit-maximizing price. In this example, say the product has a fixed cost of $18 million and costs $200 per unit to make. This means we would want to set the price of the product at $399.50, to make a profit of about $2.5 billion.
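A minimal sketch of this profit arithmetic, using a hypothetical willingness-to-pay curve and market size chosen to echo the figures above (the $18 million fixed cost and $200 unit cost are from the example):

```r
# Hypothetical demand curve: share of market willing to pay at least each price
price      <- c(100, 200, 300, 399.5, 500)
share      <- c(0.90, 0.77, 0.45, 0.25, 0.10)
households <- 50e6                           # hypothetical market size

units  <- share * households
profit <- (price - 200) * units - 18e6       # $200 unit cost, $18m fixed cost
price[which.max(profit)]                     # 399.5, with profit of about $2.5bn
```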

One of the key points about Displayr is that you can demonstrate these things interactively: clients can input cost assumptions while you present, or later on if they don't want to share that information with you.

The Stated Willingness-to-Pay technique hinges entirely on the ability of people to give us high-quality data. An important question to ask before taking this approach is: can they actually and honestly answer with the highest price that they will pay for something?

 

For more examples of the other pricing research techniques, see: Price Salience, Price Knowledge/Awareness, Price Sensitivity Meter, Random Assignment, Choice-Based Conjoint.

Using Price Salience for Pricing Research Studies

The first pricing research technique is Price Salience, which includes two approaches for working out whether people are conscious of prices in a particular category. It answers the question: “Is the price of a product or service something they even think about?”

The first and simplest approach is to ask about reasons for preferring something and to see how many times price is mentioned. In the example below, I have asked people why they like their current cell phone provider. As you can see, price is the second most important factor, so price is clearly very important within this market.

The second approach to Price Salience is to measure preference by asking people to rate the importance of price across different categories of products. In the chart below, I've used a four-point scale where people were asked whether they'd stick with their preferred brands only if they were on sale, stick with their preferred brands even if they weren't on sale, look only for the most heavily discounted brands, or simply buy whatever is cheapest.

As you can see, the last dark blue bar, in the row for paper towels, shows that only 25% of people say they ignore the price if it's their preferred brand. By contrast, the blue bar in the top row, for teabags, shows that about 68% of people pay no attention to the price of their preferred brand.

So, to use economic jargon, the demand for tea bags is relatively inelastic. This approach to using price salience has shown us that people tend to stick with their brands when purchasing tea bags, regardless of price, at least more so than when purchasing paper towels.

 

For more examples on the other pricing research techniques see: Price Knowledge/Awareness, Stated Willingness-To-Pay, Price Sensitivity Meter, Random Assignment, Choice-Based Conjoint.

How to Get your Data Sparkling Clean – Fast!

We're here to help - here is your guide to data cleaning! We're going to cover:

  • identifying problems and bad data by checking sample sizes, screening criteria, routing and filtering instructions, and duplicates
  • cleaning your data, from recoding and rebasing values to fixing metadata to deleting dodgy respondents, and more!

So, let’s channel the queen of keeping things tidy, Marie Kondo, and get started. We might even spark some joy along the way.

 

It all starts with a clean data file

What do we mean when we say we want a clean data file? It sounds obvious, right?! Well, you might be surprised by the number of researchers working with unstructured files or, more frequently, metadata-poor data files (shudder). We want the data to be high quality.

Some common culprits of metadata-poor data are fixed-column text files, comma-delimited text files, Excel files, CSV files, SQL databases, XML files, JSON text files, and HTML. Did you catch yourself using any of these files?

Then you'll definitely want to consider switching to metadata-rich data files. Rewind the clock and gain back your time by having variable metadata in your data files. After all, imagine all the time you'll save by not having to look up and cross-reference information. The best metadata-rich files to use are SPSS .sav files and Q packs.

 

Then we need to inspect the data

Survey data cleaning involves identifying and removing responses from individuals who either don't match your target audience criteria or didn't answer your questions thoughtfully. It's time to don our detective caps. Like all good detectives, we need to check a lot of information and sources.

Sample sizes

Step one is to check our sample size. If your sample size is bigger than expected, then you probably have respondents with incomplete data polluting your survey.

Screening criteria

The next step is to check that your screening criteria worked as intended, and the fastest way to do so is with a Sankey Diagram.

This Sankey Diagram lets us quickly distinguish between respondents who have been flagged as excluded from the quota and those that are complete and in quota – meaning they were aged over 18, had a known gender, and consumed cola at least once a month. You could do the same thing with code or by manually looking through crosstabs, but the Sankey diagram will save you a heap of time.

Data quality

What about the data quality for each question and variable? Here are some things to check for: poor metadata, unusual sample sizes, outliers and other funky values, categories that are too small, incorrect variable types, and incorrect variable sets. Check out the eBook for more detail.
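As a rough illustration, a first sweep over these checks in R could look like the following (the smallness threshold of 30 is an arbitrary assumption):

    summary(dat)      # ranges and counts reveal outliers and funky values
    str(dat)          # confirm each variable has the intended type

    # Tabulate every categorical variable and print any with rare levels
    for (v in names(dat)[sapply(dat, is.factor)]) {
      counts <- table(dat[[v]])
      if (any(counts < 30))          # flag categories that look too small
        print(counts)
    }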

Routing and filtering instructions

You’ll also want to check your routing and filtering instructions. You could scan through your raw data, but it’s time-consuming and easy to miss exceptions – especially if you have a large dataset. Here’s where the Sankey diagram comes in handy again.


Our table shows that those with a full-time work status were asked for their occupation. However, our Sankey diagram immediately gives us clues that some students and part-time workers were also asked for their occupation – indicating there is a problem with our survey routing and the data will need to be cleaned.
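For those who prefer a quick cross-tab over a chart, here is an illustrative check in R, with work_status and occupation as hypothetical column names:

    # Only full-time workers should have been routed to the occupation
    # question; any other row in the 'TRUE' column is a routing error.
    table(dat$work_status, answered = !is.na(dat$occupation))

    # Pull out the misrouted respondents for cleaning
    misrouted <- dat[dat$work_status != "Full-time" & !is.na(dat$occupation), ]
    nrow(misrouted)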

Missing data patterns

All right, are we finally done with checking things? Not quite, but don't worry, the bulk of our data cleaning has already been done. Now we've got to check for missing data patterns. The best way to do this is, surprise surprise, visually. We're going to create a heatmap or line chart that shows the missing values for each observation, with each 'column' representing a different variable and lines representing missing values. If we're lucky, it's not going to resemble a Jackson Pollock painting.
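A bare-bones version of that heatmap can be drawn in base R; this is just a sketch, with dat again standing in for your data file:

    # Variables run along the x-axis, respondents down the y-axis,
    # and dark cells mark missing values.
    miss <- 1 * is.na(dat)                  # numeric 0/1 missingness matrix
    image(t(miss[nrow(miss):1, ]),          # flip so respondent 1 sits on top
          col = c("white", "black"),
          xlab = "Variables", ylab = "Respondents", axes = FALSE)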


Different lines or clusters of lines indicate different problems with missing data. For example, long horizontal lines (highlighted on the left) can indicate observations with severe missing data issues.  See the eBook for more details.

Duplicates

Duplicates can be a serious problem. Sometimes you can find them just by analyzing the ID variables. Other times you’ll need to jointly analyze sets of variables.
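Both checks are a couple of lines in R; the id and key_vars names below are hypothetical:

    # Duplicates on the ID variable alone
    dat[duplicated(dat$id), ]

    # Duplicates across a set of variables jointly, which catches
    # respondents who re-entered the survey under a fresh ID
    key_vars <- c("age", "gender", "postcode", "q1", "q2")
    dat[duplicated(dat[, key_vars]), ]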

Unit tests

When cleaning data, it is super useful to set up some unit tests to automate the process of checking for errors. This is an especially good investment when it comes to longitudinal or tracking projects as unit tests can automatically check key things whenever new data is uploaded. Some common things to test for in unit tests include: out of range errors, flow errors, variable non-response, variable consistency errors, lie tests, and sum constraint errors.
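As a sketch of the idea in R, stopifnot() makes a serviceable lightweight test harness; the column names and the 0-10 scale are assumptions for illustration:

    stopifnot(
      # Out-of-range errors: ratings should sit on a 0-10 scale
      all(dat$satisfaction >= 0 & dat$satisfaction <= 10, na.rm = TRUE),

      # Flow errors: only full-time workers have an occupation recorded
      all(is.na(dat$occupation[dat$work_status != "Full-time"])),

      # Sum-constraint errors: allocation questions must total 100
      all(abs(rowSums(dat[, c("spend_a", "spend_b", "spend_c")]) - 100) < 1e-6))

Rerunning a script like this on every new wave of a tracker turns the error-checking into a one-keystroke job.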


And finally - clean the data!

We’ve now spent a bit of time looking for problems in the data. Now it’s time to get our Marie Kondo on and finally fix them up. On an individual level, we can edit problematic values. By changing all occurrences of specific values to other values, we can recode or rebase our data.

Recoding and rebasing

Often you may want to recode an inconsequentially small "don't know" category as missing values. This tells your analysis software to automatically filter the table and recompute the values with the "don't knows" excluded, which is also known as rebasing. Other common ways of recoding values include capping values and re-aligning values with your labels.
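In R terms, rebasing and capping are simple assignments; brand_rating and hours_watched are made-up names:

    # Rebase: recode a tiny "Don't know" category to missing so tables
    # are recomputed with those respondents excluded
    dat$brand_rating[dat$brand_rating == "Don't know"] <- NA
    dat$brand_rating <- droplevels(dat$brand_rating)   # drop the empty level

    # Cap: pull implausibly large values back to a sensible maximum
    dat$hours_watched[dat$hours_watched > 24] <- 24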

Merging categories

You can also merge small categories. In some software, this is regarded as another example of recoding. In Q or Displayr, merging is considered a separate process, since merging does not affect the underlying values of data.
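In R, for instance, assigning two factor levels the same label merges them while leaving the underlying rows untouched (region and its levels are hypothetical):

    # Fold two small regions into an "Other" category
    levels(dat$region)[levels(dat$region) %in% c("ACT", "NT")] <- "Other"
    table(dat$region)    # confirm the merged counts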

Fixing metadata

Typically, you can fix metadata by adding or correcting labels.

Deleting respondents

A respondent's data is excluded from the dataset when, for example, they have completed the survey too quickly (speeders), given the same answer to every question (straightliners), or given inconsistent responses throughout. The most useful way to do so is via filters.
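A sketch of those filters in R, with all column names and the speed threshold assumed for illustration:

    # Speeders: finished implausibly fast (the cut-off is a judgment call)
    speeders <- dat$duration_seconds < 180

    # Straightliners: gave the identical answer across a whole rating grid
    grid_cols <- c("q5_a", "q5_b", "q5_c", "q5_d")
    straightliners <- apply(dat[, grid_cols], 1,
                            function(x) length(unique(x)) == 1)

    # Keep everyone who isn't flagged
    clean <- dat[!(speeders | straightliners), ]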

Nice work, your data should be in tip-top, sparkling clean shape now! Download our free eBook for loads more detailed information and specific steps for how to do this in Q, Displayr, R and SPSS.

]]>
https://www.displayr.com/get-your-data-clean-fast/feed/ 0
4 Ways to Improve Customer Feedback Metrics https://www.displayr.com/4-ways-to-improve-customer-feedback-metrics/?utm_medium=Feed&utm_source=Syndication https://www.displayr.com/4-ways-to-improve-customer-feedback-metrics/#respond Tue, 30 Apr 2019 03:57:48 +0000 https://www.displayr.com/?p=16723 ...]]> Focus on key drivers

Driver analysis is the process of identifying the key factors -- or drivers -- influencing an outcome. In the context of customer feedback surveys, driver analysis is used to determine the product attributes that most influence satisfaction. This is usually done through a regression model, with the overall satisfaction rating as the outcome variable and the product attribute ratings as the predictor variables. By identifying the key drivers of customer satisfaction, businesses can better focus their efforts on the product attributes that matter most.


The above output shows the results of a Relative Importance Analysis regression from a bank satisfaction survey. The survey polled customers on their overall level of satisfaction, along with their level of satisfaction with specific aspects of the bank. With this data, we can see which bank attribute correlates most strongly with overall satisfaction. The results show that branch service and bank fees hold the most relative importance. In other words, the quality of branch service and the level of bank fees are the strongest drivers of bank satisfaction.

From this, the bank can conclude that improving branch service and lowering bank fees are the most effective ways to increase customer satisfaction.
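If you want to reproduce this style of analysis in R rather than Displayr, here is a minimal sketch; the data frame bank and its column names are hypothetical, and the relaimpo package's lmg metric is one common way to estimate relative importance:

    library(relaimpo)   # install.packages("relaimpo") if needed

    # Overall satisfaction as the outcome, attribute ratings as predictors
    fit <- lm(overall ~ branch_service + fees + online_banking + atm_network,
              data = bank)

    # 'lmg' decomposes R-squared into a relative-importance share per driver
    calc.relimp(fit, type = "lmg", rela = TRUE)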


Pay attention to open-ended responses

Reading through open-ended feedback responses can be a tiring and time-consuming process. Not all feedback is useful and some can be downright indecipherable. However, it's in the open-ended responses where customers will tell you how they really feel, often in painful detail. It's likely that specific themes and issues will keep popping up, which will help you understand what really matters to your customers.

Looking through the feedback responses above, a couple of themes keep emerging. One is "ease of use," which appears to be an extremely important attribute to customers. Another popular word is "innovative," which is clearly a desirable brand attribute. With this information, a company can improve customer satisfaction by better positioning itself in the market and improving its product.
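Even a crude word count can surface these themes before you commit to a full text-analysis setup. A sketch in R, with open_feedback as a hypothetical column of verbatims:

    # Split the verbatims into lowercase words and count them
    words <- unlist(strsplit(tolower(dat$open_feedback), "[^a-z']+"))
    words <- words[nchar(words) > 3]              # drop short filler words
    head(sort(table(words), decreasing = TRUE), 20)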


Identify product issues and user pain points

A small issue can have a huge impact on user experience, and unless you are tracking user journeys or prompting customers to report issues, it's easy for these things to slip under the radar. Bugs and difficulty using a product are two of the most common reasons for low customer satisfaction. By staying on top of product issues, you can ensure a seamless and problem-free customer experience.

The Sankey diagram above shows the breakdown of reported problems from a technology product. It's clear that the bulk of issues have come from mobile users trying to open a sidebar, which implies that there is a bug within the app. Something as simple as a broken sidebar will drive down customer satisfaction and can even lead to significant customer churn. By identifying these problems early on, you can nip the issue in the bud.


Consider demographic and geographic factors

Customer satisfaction rates can vary wildly across different demographics and cultures. Younger users may find your product intuitive and easy to use, while older users struggle. English-speaking users may be very satisfied with your product, but non-English speaking users could feel neglected. Understanding how different market segments respond to your product is crucial to maintaining and improving customer satisfaction.

The above output is an example of how the Net Promoter Score can vary by region. If your product is only geared towards an English-speaking market, then there's a chance that non-English speaking users will have difficulty using your product, reading your documentation, and communicating with support staff. If there is a segment of your customer base with a significantly lower-than-average satisfaction rate, look into whether there may be issues specific to that group.
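For reference, the Net Promoter Score is the percentage of promoters (9-10 on the 0-10 likelihood-to-recommend scale) minus the percentage of detractors (0-6), and computing it by segment is straightforward. A sketch in R, with hypothetical column names:

    # NPS = % promoters (9-10) minus % detractors (0-6)
    nps <- function(x) {
      x <- x[!is.na(x)]
      100 * (mean(x >= 9) - mean(x <= 6))
    }

    tapply(dat$likelihood_to_recommend, dat$region, nps)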

]]>
https://www.displayr.com/4-ways-to-improve-customer-feedback-metrics/feed/ 0
What to do with Customer Feedback Survey Insights https://www.displayr.com/what-to-do-with-customer-feedback-survey-insights/?utm_medium=Feed&utm_source=Syndication https://www.displayr.com/what-to-do-with-customer-feedback-survey-insights/#respond Thu, 21 Mar 2019 04:40:44 +0000 https://www.displayr.com/?p=16678 ...]]> Consider the customer feedback survey pipeline: designing a survey, issuing the survey, analyzing the results for insights, and taking action based on those insights. We're going to focus on that very last step. Taking action based on customer feedback can be a daunting task; here are four ways survey insights can inform your decision making.

Identify what's working and what's not working

Customer feedback is as important to your business as it is to your customers. It's easy to make assumptions about what is and isn't working, but without feedback from your customers, it's impossible to know for sure. A well-designed feedback survey should gather customer sentiment on particular attributes of your product, allowing you to better understand your successes and shortcomings. There are a number of useful questions that can identify what is and isn't working:

  • Customer Effort Score: The Customer Effort Score measures how much difficulty a user experienced when using your product. A high score implies that your product is difficult to use. Expanding a customer effort question to drill down on the specific elements of your product is a great way to identify where work needs to be done.
  • Driver analysis with customer satisfaction: Driver analysis will help you understand the key drivers of customer satisfaction. It will identify the product attributes that most influence how satisfied your customers are.
  • Direct open-ended questions: You can target this issue head-on by directly asking your customers a question like, "What do you dislike about our product?" This allows the customer to answer in their own words and in greater detail than a closed-ended question.

Tailor your approach to different segments

Customer feedback survey results can uncover important segments in your customer base. These segments can be based on a customer's demographics, geography, or behavioral characteristics. For example, older customers might have different requirements to younger customers and you may find urban customers use your product differently to rural customers. By understanding the varied wants and needs of your market, you can better tailor your approach to your customers.

  • Step up your marketing strategy: By understanding how a particular demographic uses your product, you can create a more effective marketing strategy. If you are marketing to a particular age or socioeconomic group, it's important to know where those customers can be found and how they respond to different styles of advertising.
  • Customize your products: Different customers have different needs. By understanding how customers use your product, you can tailor your product to suit their requirements. That may involve creating a standard and premium version of your product, or adding different language settings.
  • Target segments at risk of churn: Your survey results may show that a particular segment is particularly at risk of churn. With this knowledge, you can focus your efforts on retaining this group of customers.

Use customer feedback to uncover potential customers

Feedback surveys aren't just for customer retention; they're also great for customer acquisition. Survey results will inform you of the kinds of potential customers to target, and of the existing customers who are willing to help promote your brand.

  • Target lucrative demographics: Your survey results may reveal that your product is particularly appealing to customers of a particular demographic. If this is the case, then you already know where to pursue quality sales leads.
  • Encourage recommendations: The Net Promoter Score measures how likely a customer is to recommend your product to a friend or colleague. Encouraging users who gave a high score to spread the word about your product is a great way to grow your customer base.

Focus on the key drivers of positive customer feedback

Driver analysis is used to identify the key attributes that influence positive feedback. For example, do factors like speed, user-friendliness, and price play a role in driving satisfaction? They do if you are a tech company.

  • Focus on key brand perception attributes: Brand perception attributes are traits that are commonly associated with a brand or product. For example, Apple is perceived to be stylish and Tesla is perceived to be innovative. Identifying which brand perception attributes drive positive feedback can provide brands with ideas of how to market themselves.
  • Focus on key product attributes: Product attributes refer to features like speed, durability, and weight. Driver analysis can determine which product features are most valued by customers. Businesses can then choose to focus on improving those particular features.
]]>
https://www.displayr.com/what-to-do-with-customer-feedback-survey-insights/feed/ 0