Customer Feedback Surveys - Displayr
https://www.displayr.com/category/market-research/customer-feedback-surveys/

4 Ways to Improve Customer Feedback Metrics
https://www.displayr.com/4-ways-to-improve-customer-feedback-metrics/

Focus on key drivers

Driver analysis is the process of identifying the key factors -- or drivers -- influencing an outcome. In the context of customer feedback surveys, driver analysis is used to determine the product attributes that most influence satisfaction rates. This is usually done through a regression model, with the overall satisfaction rate as the outcome variable and the product attribute ratings as the predictor variables. By identifying the key drivers of customer satisfaction, businesses can better focus their efforts on the product attributes that matter most.


The above output shows the results of a Relative Importance Analysis regression from a bank satisfaction survey. The survey polled customers on their overall level of satisfaction, along with their level of satisfaction with specific aspects of the bank. With this data, we can see which bank attribute correlates most strongly with overall satisfaction. The results show that branch service and bank fees hold the most relative importance. In other words, the quality of branch service and the level of bank fees are the strongest drivers of bank satisfaction.

From this, the bank can conclude that improving branch service and lowering bank fees are the most effective ways to increase customer satisfaction.
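
As a rough sketch of what such a model looks like in R, here is a minimal example with invented ratings; the column names simply echo the bank example and are not the survey's real variables.

# Invented example data: each row is one respondent's 0-10 ratings.
survey <- data.frame(
  overall        = c(7, 9, 4, 8, 6, 10, 5, 7, 8, 3),
  branch_service = c(6, 9, 3, 8, 5, 10, 4, 7, 8, 2),
  fees           = c(5, 8, 4, 7, 6, 9, 3, 6, 7, 3),
  online_banking = c(8, 7, 6, 9, 7, 8, 6, 7, 8, 5)
)

# Overall satisfaction is the outcome; the attribute ratings are the predictors.
fit <- lm(overall ~ branch_service + fees + online_banking, data = survey)
summary(fit)  # larger, significant coefficients suggest stronger drivers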

 

Pay attention to open-ended responses

Reading through open-ended feedback responses can be a tiring and time-consuming process. Not all feedback is useful and some can be downright indecipherable. However, it's in the open-ended responses where customers will tell you how they really feel, often in painful detail. It's likely that specific themes and issues will keep popping up, which will help you understand what really matters to your customers.

Looking through the feedback responses above, there are a couple of popular themes that keep emerging. One is "ease of use," which appears to be an extremely important attribute to customers. Another popular word is "innovative," which is clearly a desirable brand attribute. With this information, a company can improve customer satisfaction by better positioning itself in the market and improving its product.

 

Identify product issues and user pain points

A small issue can have a huge impact on user experience, and unless you are tracking user journeys or prompting customers to report issues, it's easy for these things to slip under the radar. Bugs and difficulty using a product are two of the most common reasons for low customer satisfaction. By staying on top of product issues, you can ensure a seamless and problem-free customer experience.

The Sankey diagram above shows the breakdown of reported problems from a technology product. It's clear that the bulk of issues have come from mobile users trying to open a sidebar, which implies that there is a bug within the app. Something as simple as a broken sidebar will drive down customer satisfaction and can even lead to significant customer churn. By identifying these problems early on, you can nip the issue in the bud.

 

Consider demographic and geographic factors

Customer satisfaction rates can vary wildly across different demographics and cultures. Younger users may find your product intuitive and easy to use, while older users struggle. English-speaking users may be very satisfied with your product, but non-English speaking users could feel neglected. Understanding how different market segments respond to your product is crucial to maintaining and improving customer satisfaction.

The above output is an example of how the Net Promoter Score can vary by region. If your product is only geared towards an English-speaking market, then there's a chance that non-English speaking users will have difficulty using your product, reading your documentation, and communicating with support staff. If there is a segment of your customer base with a significantly lower-than-average satisfaction rate, look into whether there may be issues specific to that group.

What to do with Customer Feedback Survey Insights
https://www.displayr.com/what-to-do-with-customer-feedback-survey-insights/

Consider the customer feedback survey pipeline: designing a survey, issuing the survey, analyzing the results for insights, and taking action based on those insights. We're going to focus on that very last step. Taking action based on customer feedback can be a daunting task; here are four ways survey insights can inform your decision making.

Identify what's working and what's not working

It's easy to make assumptions about what is and isn't working for your customers, but without their feedback, it's impossible to know for sure. A well-designed feedback survey should gather customer sentiment on particular attributes of your product, allowing you to better understand your successes and shortcomings. A number of useful question types can identify what is and isn't working:

  • Customer Effort Score (CES): The Customer Effort Score measures how much difficulty a customer experienced when using your product. On the usual scale (1 = Very Difficult, 7 = Very Easy), a low score implies that your product is difficult to use. Expanding a customer effort question to drill down on the specific elements of your product is a great way to identify where work needs to be done.
  • Driver analysis with customer satisfaction: Driver analysis will help you understand the key drivers of customer satisfaction. It will identify the product attributes that most influence how satisfied your customers are.
  • Direct open-ended questions: You can target this issue head-on by directly asking your customers a question like, "What do you dislike about our product?" This allows the customer to answer in their own words and in greater detail than a closed-ended question.

Tailor your approach to different segments

Customer feedback survey results can uncover important segments in your customer base. These segments can be based on a customer's demographics, geography, or behavioral characteristics. For example, older customers might have different requirements from younger customers, and you may find urban customers use your product differently from rural customers. By understanding the varied wants and needs of your market, you can better tailor your approach to your customers.

  • Step up your marketing strategy: By understanding how a particular demographic uses your product, you can create a more effective marketing strategy. If you are marketing to a particular age or socioeconomic group, it's important to know where those customers can be found and how they respond to different styles of advertising.
  • Customize your products: Different customers have different needs. By understanding how customers use your product, you can tailor your product to suit their requirements. That may involve creating a standard and premium version of your product, or adding different language settings.
  • Target segments at risk of churn: Your survey results may show that a particular segment is particularly at risk of churn. With this knowledge, you can focus your efforts on retaining this group of customers.

Use customer feedback to uncover potential customers

Feedback surveys aren't just for customer retention; they're also great for customer acquisition. Survey results will tell you which kinds of potential customers to target, and which existing customers are willing to help promote your brand.

  • Target lucrative demographics: Your survey results may reveal that your product is particularly appealing to customers of a particular demographic. If this is the case, then you already know where to pursue quality sales leads.
  • Encourage recommendations: The Net Promoter Score measures how likely a customer is to recommend your product to a friend or colleague. Encouraging users who gave a high score to spread the word about your product is a great way to grow your customer base.

Focus on the key drivers of positive customer feedback

Driver analysis is used to identify the key attributes that influence positive feedback. For example, do factors like speed, user-friendliness, and price play a role in driving satisfaction? They do if you are a tech company.

  • Focus on key brand perception attributes: Brand perception attributes are traits that are commonly associated with a brand or product. For example, Apple is perceived to be stylish and Tesla is perceived to be innovative. Identifying which brand perception attributes drive positive feedback can provide brands with ideas of how to market themselves.
  • Focus on key product attributes: Product attributes refer to features like speed, durability, and weight. Driver analysis can determine which product features are most valued by customers. Businesses can then choose to focus on improving those particular features.
Analyzing Sentiment in Customer Feedback Responses
https://www.displayr.com/analyzing-sentiment-in-customer-feedback-responses/

An open-ended customer feedback survey question prompts customers to respond in their own words. This often results in detailed but messy data. Sentiment analysis algorithms help you make sense of your feedback by coding each response based on the number of positive and negative words used.

What is sentiment analysis?

Sentiment analysis is the process of using an algorithm to categorize content based on how positive, neutral, or negative it is perceived to be. (You can perform sentiment analysis manually on a small dataset, but it's time-consuming.) The algorithm uses a dictionary of words tagged as positive or negative, looks at each response, and assigns a sentiment score: the more positive the comment is perceived to be, the higher its sentiment score.
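
As an illustration of the mechanics, here is a minimal dictionary-based scorer in R; the word lists are invented and far smaller than a real sentiment dictionary.

# Tiny illustrative sentiment dictionaries.
positive <- c("great", "easy", "love", "innovative", "helpful")
negative <- c("slow", "bug", "terrible", "confusing", "difficult")

# Score a response: positive word count minus negative word count.
sentiment_score <- function(text) {
  words <- unlist(strsplit(tolower(text), "[^a-z']+"))
  sum(words %in% positive) - sum(words %in% negative)
}

sentiment_score("Great product, easy to use, but the app is slow")  # returns 1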

However, sentiment algorithms will misinterpret idioms and fail to detect sarcasm. They can also be tripped up when a respondent uses positive or negative words while simply stating a fact. For example, "Microsoft is a very well known brand that is well liked" is given a positive score by the algorithm even though it doesn't express a positive sentiment felt by the respondent.

A practical example of customer sentiment analysis

Once you have calculated sentiment scores from your feedback survey, there are a few ways to analyze and present your data with Displayr.

Average sentiment

Calculating the average sentiment score is always a great starting point. A high score indicates that the feedback has been generally positive and that words with a positive sentiment are over-represented in your responses. On the other hand, a score below zero indicates that the feedback has been largely negative.

Sentiment histogram

The average sentiment score tells you nothing about the distribution of your sentiment scores. A histogram is a great way to visualize the shape of your data.

From the histogram, we can see that the sentiment scores are clustered around a mean of 0.7. There are very few responses with either abnormally high or abnormally low sentiment scores, which suggests that the majority of respondents did not have an extreme reaction to your customer feedback question. A cluster of sentiment scores on either extreme is something that should be noted and further investigated.
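
In base R, the average and the distribution each take a line or two; the scores below are invented stand-ins for real per-response sentiment scores.

scores <- c(0.9, 0.7, 0.8, -0.2, 0.6, 0.7, 1.1, 0.5, 0.7, 0.2)
mean(scores)  # average sentiment
hist(scores, breaks = 10, main = "Sentiment score distribution", xlab = "Sentiment score")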

Donut chart and column chart

The donut chart and column chart are two ways to visualize negative, neutral, and positive responses. The donut chart is a great way to illustrate the proportion of each category, while the column chart is a better way to visualize the counts.

From the two charts, we can see that 182 respondents gave feedback with a positive sentiment score, which makes up over half of all respondents (61%). On the other hand, only 31 respondents left feedback that is perceived to be negative, which makes up only 10% of all respondents.

Word cloud

We now know that the majority of customers responded with positive feedback, with an average sentiment score of 0.7. But we don't know much about the actual responses. A word cloud will help you better understand the themes and topics addressed in your responses.

The words "innovative", "easy", "leader", and "trustworthy" are commonly found in the survey responses. This suggests that the positive sentiments expressed within the responses are due to these attributes. The word cloud provides us with a deeper understanding of the responses and the sentiments expressed within them.

Try it yourself

Want to recreate the visualizations you just saw? Click the button below for a step-by-step guide on how to analyze customer feedback data with Displayr!

Analyze Social Media Sentiment with Displayr

 

How to Check Your Customer Feedback Analysis for Statistical Significance
https://www.displayr.com/how-to-check-your-customer-feedback-analysis-for-statistical-significance/

Analyzing customer feedback survey data requires the use of statistical significance testing. Here are three instances where Displayr automatically tests for statistical significance.

Built-in significance testing with Displayr

Cross-tabulations

Every time you create a cross-tab with Displayr, statistical significance is calculated behind the scenes and the results are displayed in the output. Statistically significant results are color-coded and accompanied by an upward or downward arrow, which indicates the direction of significance.

The above table compares Net Promoter Score data across four regions: Australia, USA, UK, and Other. The cells show the percentage of respondents for each category. Without statistical significance testing, we may be tempted to focus on the fact that Australia has the highest NPS (80.8) and the highest percentage of promoters (83%). After all, that seems like a valid conclusion and a useful insight.

However, those would be misguided conclusions. While Australia has a higher NPS than the other countries, the difference is not statistically significant. This means that there isn't enough evidence to be confident that the average NPS of Australian respondents is different from that of respondents in other countries.

Instead, we should focus on the results that are statistically significant: Australian detractors and American detractors. Only 3% of Australian respondents are labeled as detractors, while 9% of Americans are labeled as detractors. These are the only two statistically significant results in the cross-tabulation. From this, we can conclude that Australia has significantly fewer detractors than other countries, and the US has significantly more.
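
Outside Displayr, this kind of check can be approximated with a classical two-proportion test in R; the counts below are invented and only mirror the shape of the comparison.

# Detractor counts and sample sizes for two groups (invented numbers).
detractors <- c(3, 18)     # e.g., Australia vs. USA
n          <- c(100, 200)  # respondents in each group
prop.test(detractors, n)   # a small p-value suggests the detractor rates differ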

Charts

Many charts in Displayr are created with significance tests already computed. Much like the cross-tabulation output, these figures are straightforward to interpret: the arrows denote whether an outcome is statistically significant and the direction in which it is significant.

The charts above show the distribution of Net Promoter Scores. The results show that the high percentage of detractors and the low rate of promoters are statistically significant. There is also a disproportionately high number of respondents who gave a score of 5 (22%), 7 (17%), and 8 (13%). On the other end of the spectrum, responses of 1 (2%), 2 (1%), and 10 (5%) were significantly low.

Statistical models

Let's end with a slightly more advanced example. Every time you run a regression analysis with Displayr, the p-values are computed and presented in a way that is easy to interpret. Variables that are significant at the 5% level (p < 0.05) are listed in bold, and their estimated coefficients are colored either red or blue, depending on their value.

The regression output above is from a driver analysis of a tech company's Net Promoter Scores. The aim of the regression model is to identify which brand perception attributes -- fun, innovative, stylish, etc. -- influence NPS responses from customers. We can see that there are only four brand attributes that can be considered statistically significant drivers of NPS. By making use of Displayr's in-built statistical significance tests, we can make some advanced inferences about the behavior and attitudes of our customers.

Try it yourself

Testing for statistical significance is a breeze with Displayr. Click the button below for a step-by-step tutorial.

Learn how to statistically test Net Promoter Score in Displayr

4 Visualizations For Your Customer Satisfaction Data
https://www.displayr.com/visualize-your-customer-satisfaction-data-with-displayr/

Measuring customer satisfaction only requires a single customer feedback survey question. Something as simple as, "On a scale of 1-10, how satisfied are you with this product?" can open up a world of information about your customers and how they interact with your product.

Customer satisfaction data is especially useful when analyzed alongside geographic, market, or time series data. You can analyze how your numbers have developed over time, compare your scores with industry competitors, and measure how satisfaction varies across the world. But in order to do this, you need to know the correct graph or chart to use. Using survey data from the tech industry, we'll show you the top four visualizations for customer satisfaction data.

Pictograph bar chart

A customer satisfaction score doesn’t tell you much about the distribution of responses. An average of 3.5 on a 7-point scale usually means that the majority of responses are scattered around the average, but it could also be that the responses are equally scattered around the two extremes. It's important to find out what is actually going on. A pictograph bar chart is a great way to visualize the distribution of your customer satisfaction numbers.

The chart above shows the distribution of customer satisfaction responses for a tech company. We can see from the visualization that the responses are distributed around the mean, which is generally what we expect to see. There are no unusual clusters or oddities within the data. If the responses contained clusters around the two extremes, then we would conclude that there are distinct groups within the customer base that are unlike the rest.

The distribution of your responses should inform your decision making. If the responses are clustered around the average, then incremental improvements to your product or service should increase future satisfaction scores. However, if there are clusters around the lower tail of the distribution, then you should try to pinpoint exactly why a subset of your customer base is so dissatisfied.

 

Time series line chart

If you are working with data from multiple surveys collected over a period of time, then a simple pictograph bar chart will not suit your needs. Instead, you will want to use a visualization that can track how customer satisfaction has varied over time. The results from a single survey could be an anomaly and therefore unreliable, but results over an extended period tend to be far more trustworthy.


A time series line chart is a great way to visualize your customer satisfaction results over time. It's always a good idea to send out customer feedback surveys at regular intervals, and including an unchanged customer satisfaction question is a great way to track how satisfaction rates are trending. From the chart, we can see that the average scores fluctuate between 6.5 and 7.5, but there isn't a clear upward or downward trend.

 

Geographic map

It's often useful to see how your customer satisfaction numbers vary across geographic regions. A geographic map that color-codes countries based on their average scores can describe how language and cultural factors are influencing customer satisfaction rates.

The chart above shows the average customer satisfaction of respondents for each country.  Most countries do not have enough users to make meaningful inferences from the data, so we'll have to limit our analysis to only a handful of countries. Despite the limited data set, there are still a few useful insights to be found. One interesting observation is that English-speaking countries generally have higher rates of customer satisfaction than non-English speaking countries. This is a particularly useful insight if the company is looking to expand into foreign markets.

 

Stacked bar chart

Customer satisfaction data is always more useful when placed in the context of the overall market. Average satisfaction rates can vary wildly across industries, and so an individual statistic can be misleading if interpreted within a vacuum. For this reason, it's important to establish industry benchmarks and compare your results with your competitors.

The stacked bar chart above shows the distribution of customer satisfaction scores for the wider tech industry. We can see that there is enormous variation within the industry. Companies like Google have a satisfaction rate of over 75% among survey respondents, while Yahoo barely breaks 20%.

 

Try it yourself

Want to start creating insightful customer satisfaction visualizations? Click the links within this blog post for simple step-by-step guides on how to recreate the data visualizations!

How to Identify the Key Drivers of Your Net Promoter Score
https://www.displayr.com/nps-driver-analysis-with-displayr/

What is driver analysis?

A customer feedback survey should aim to answer two questions when it comes to the Net Promoter Score (NPS):

  1. How likely are your customers to recommend your product or service?
  2. What are the key factors influencing your customers’ likelihood to recommend your product or service?

The first question is answered simply by calculating the Net Promoter Score. The second question is a lot harder to answer and involves what is commonly known as ‘driver analysis.’ The underlying goal of driver analysis is to determine the key attributes of your product or service that determine your Net Promoter Score. These attributes are referred to as ‘drivers.’

Driver analysis requires that you ask some follow-up questions about how the respondent would rate different attributes of your brand. For example, a tech company could poll customers on a range of brand perception attributes – fun, value, innovative, stylish, ease of use, etc. – to determine the key Net Promoter Score drivers.

Driver analysis often requires the use of statistical methods like linear regression modeling and relative weights analysis, which is more advanced than most forms of survey data analysis. However, it is well worth the effort.

Why is NPS driver analysis important?

Computing your Net Promoter Score is a great first step, but the simple statistic doesn’t tell you anything about why your customers are likely (or unlikely) to recommend your product or service. Driver analysis allows you to pinpoint the key factors driving their responses.

This information can influence how to tailor your product and where you focus your efforts. If a tech company finds that being perceived as ‘fun’ is a larger driver of NPS than being perceived as ‘innovative,’ then they may alter their marketing strategy to adopt a more ‘fun’ approach.

A practical example of NPS driver analysis

To better understand NPS driver analysis, let’s dive into a real-world example. Using Displayr, we analyzed NPS data from 14 large technology companies to determine which brand perception attributes played the largest role in influencing Net Promoter Scores. Survey respondents were asked how likely they were to recommend the given brands, as well as whether they associated the brands with specific perception attributes.

Regression modeling

To perform the driver analysis, we used two regression models to determine the effect of each brand perception attribute on a respondent's NPS response.

The first model is an ordered logit model, otherwise known as an ordered logistic regression. The model estimates the effect and significance each brand attribute has on overall Net Promoter Scores.

 

The ‘Estimate’ column measures the effect each brand attribute has on Net Promoter Scores. The larger the number, the larger the effect. The ‘p’ column measures the statistical significance of the brand attribute. If a brand attribute has a p-value below 0.05, we can conclude that it plays a significant role in determining NPS.
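
For readers who want to reproduce this kind of model outside Displayr, here is a sketch using MASS::polr on simulated data; the attribute names and effect sizes are invented, and the p-values come from the usual normal approximation.

library(MASS)

# Simulate binary brand-attribute data and an ordered 0-10 response.
set.seed(1)
n <- 500
d <- data.frame(fun        = rbinom(n, 1, 0.5),
                innovative = rbinom(n, 1, 0.5),
                stylish    = rbinom(n, 1, 0.5))
latent <- 1.2 * d$fun + 0.8 * d$innovative + rlogis(n)
d$likelihood <- factor(findInterval(latent, quantile(latent, seq(0.1, 0.9, 0.1))),
                       ordered = TRUE)

# Ordered logistic regression of likelihood-to-recommend on brand attributes.
fit <- polr(likelihood ~ fun + innovative + stylish, data = d, Hess = TRUE)
ctable <- coef(summary(fit))
p <- 2 * pnorm(abs(ctable[, "t value"]), lower.tail = FALSE)
cbind(ctable, p = p)  # estimates alongside approximate p-values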

The second model is similar to the first, but there is one important distinction. Instead of estimating the overall effect each brand attribute has on NPS, it estimates the ‘relative importance.’ This means that it estimates the importance of each brand attribute in relation to the others.

The relative importance of each brand attribute can be interpreted as a percentage. For example, our model suggests that ‘fun’ accounts for almost 25% of the variation in NPS.
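
In R, one common way to estimate relative importance for a linear driver model is the relaimpo package; this is an assumption of roughly comparable intent, not a claim that it matches Displayr's method exactly. The data below is simulated.

library(relaimpo)  # install.packages("relaimpo") if needed

set.seed(2)
d <- data.frame(fun = rnorm(100), innovative = rnorm(100), stylish = rnorm(100))
d$nps <- 2 * d$fun + 1 * d$innovative + 0.5 * d$stylish + rnorm(100)

fit <- lm(nps ~ fun + innovative + stylish, data = d)
calc.relimp(fit, type = "lmg", rela = TRUE)  # relative shares sum to 1 (100%)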

Data visualization

The two regression models have unpacked a lot of useful information and insights from the data set. Now it’s time to communicate our findings. To do this, we will create a data visualization that is both informative and easy to interpret.

The bar chart ranks the relative importance of each brand attribute, allowing us to compare their effects. It is easy for anyone to see that ‘fun’ is the most important attribute without having to interpret regression output data.

Try it yourself

Want to try analyzing NPS drivers for yourself? Click the button below for a simple step-by-step guide to recreate the data models and visualizations you just saw!

Learn NPS Driver Analysis in Displayr

Which Survey Questions Should You Include?
https://www.displayr.com/choosing-the-perfect-survey-questions/

Common customer feedback metrics
  • Customer Satisfaction Score (CSAT): How satisfied are you with our service or product?
  • Net Promoter Score (NPS): How likely are you to recommend our product or service to a friend or colleague?
  • Customer Effort Score (CES): How easy was it to use our service or product?

When issuing a customer feedback survey, these three commonly used metrics should immediately spring to mind. The Net Promoter Score (NPS), Customer Effort Score (CES), and Customer Satisfaction Score (CSAT) are popular feedback benchmarks that have been shown to be useful survey questions. The results can be used to track customer attitudes over time and are also useful benchmarks to compare your results with the industry average.

Laddering questions

  • What did you like about our product?
  • Which features did you find difficult to use?

Laddering questions are questions that allow respondents to elaborate on an answer they previously gave. These questions enable you to gather more information on a particular issue. For example, after prompting a customer for a Customer Effort Score, you can follow up with a laddering question, like "Which features did you find difficult to use?"

Laddering questions are particularly useful when a closed-ended question is followed by a related open-ended question. The combination of responses can provide rich and detailed insights.

Demographic Questions

  • How old are you?
  • What is your gender?
  • What is your employment status?

Demographic questions do not directly relate to your product, but that doesn't mean you should dismiss them. By gathering personal information from your respondents, you can better understand how different kinds of customers use, feel, and interact with your product. This is especially useful when trying to identify segments in your market. Collecting demographic information will provide you with a more detailed picture of your customer base.

Customer journey questions

  • How did you find out about us?
  • Where did you first hear about us?

Customer journey questions are particularly useful when surveying new customers, making them perfect for many transactional surveys. Users and customers can be prompted with a journey question after a sale, sign-up, or interaction. This is vital information for growing your customer base and reaching more potential users. You can tailor your marketing and outreach strategy based on how your customers are finding you.

Customer Suggestions

  • What changes would you make to our product?
  • How can our product be improved?

Asking customers for changes and improvements they would make to your product or service can yield interesting results. Even if the changes can't be implemented, it's still a way to discover what the pressing issues are for your customers. Now and then, a great suggestion will come from a customer, and that alone makes the question worth asking!

Wide-open Questions

  • What else would you like us to know?
  • Are there any other thoughts you would like to share with us?

A wide-open question is a great way to conclude a survey. Perhaps there is a miscellaneous thought or an unknown issue that the customer would like to raise. Ending a survey by handing a blank slate to the respondent is the perfect way to receive useful feedback you never anticipated.

Common Mistakes of Survey Design
https://www.displayr.com/common-mistakes-of-survey-design/

Loaded and leading questions

When writing questions for your customer feedback survey, you want respondents to be able to answer as freely and honestly as possible. This means avoiding loaded and leading questions. A loaded question is one with an in-built assumption, and a leading question is one that nudges the respondent to answer in a particular way. Here are some examples:

  • Loaded question: Where is your favorite place to drink alcohol?
  • Leading question: How would you rate our exceptional customer service?

The loaded question assumes that the respondent drinks alcohol, and the leading question assumes that the respondent agrees that your customer service is "exceptional." A well-written question would strip away any unnecessary assumptions.

Misplaced questions

Related questions should always be grouped and asked in sequence, using a technique called "laddering." A respondent can be asked how satisfied they are with a particular feature or product, and in the next question be asked to elaborate. This allows respondents to focus on one thing at a time. If these questions were scattered throughout the survey, the respondent might feel overwhelmed and forget how they answered the previous questions.

Incomplete and mutually non-exclusive response categories

Design multiple-choice questions so that the response categories are complete and mutually exclusive. To be complete, the response categories must cover all possible answers. This often involves including a "None of the above" or "All of the above" category. For the response categories to be mutually exclusive, the categories must not overlap: only one should apply to any given answer.

Unintentionally vague questions

Sometimes you want to include a question that can be a little vague. Something like, "What else would you like us to know?" is a popular way to conclude a survey. However, unintentionally vague questions can be difficult to analyze. There is a world of difference between "What do you think about this product?" and "What do you like about this product?" Be sure of what you're actually asking!

Double-barreled questions

Each question should ask one thing and one thing only. A question like, "What did you think about the price and quality of this product?" is what is known as a double-barreled question. It asks for the customer's opinion on two different things: price and quality. The customer's response could be in regards to the price, the quality, or both. You won't necessarily know which. Instead, ask two separate questions or pick the attribute you most want to focus on.

Too many questions

We're going to keep coming back to this one. If a question is not essential, omit it! Consider the time commitment you are asking from your customers, and give some serious thought to how many questions your customers are willing to answer. As a general rule, you should avoid designing a survey with over ten questions. That's usually a surefire way to lower your response rates.

Why is Customer Satisfaction Feedback Important?
https://www.displayr.com/why-is-customer-feedback-important/

Better understand your users

You won’t know if you don’t ask. Analyzing customer satisfaction feedback is the most reliable way to measure customer satisfaction and the only way to truly know your users. While there is an art to designing an effective customer satisfaction survey, the questions you ask should cover the customer's experience with your product as well as demographic information. That way you can see which kinds of users are most satisfied (and dissatisfied) with the product.

It’s impossible to make good decisions without first understanding the wants and needs of your customers. There’s also no better way to gain a read on the overall market than to survey your own users.

Customer satisfaction feedback can result in increased customer retention

Customer retention increases your customers' lifetime value and boosts your revenue. Knowing what is important to your customers helps build trust and strong relationships; it is as simple as that. Customer feedback surveys can provide you with a complete picture of how your user base feels about your product, so you can make the appropriate changes based on that information. By listening to your customers, you are also showing them that they are valuable to you and your business. This alone can go a long way.

Improve customer growth

The Net Promoter Score (NPS) measures customer experience and predicts business growth. You can calculate your NPS from the answer to a key question on a 0-10 scale: How likely are you to recommend this product, service, or company to a friend or colleague? Naturally, a high NPS is often key to growing your customer base. Analyzing your customer feedback surveys and investigating the drivers of your NPS is a great way to improve customer growth.

Also, if you have freemium or trial users, it is crucial to find out what their needs are. They are already using your product, so the next step is to convince them to be paying users.

Track customer satisfaction over time

Customer feedback surveys are particularly useful when issued regularly over time. A question like “On a scale of 0-10, how satisfied are you with this product?” can help you measure whether satisfaction is improving or worsening over time. Repeated surveys also yield a larger sample size, which gives your results more statistical power. A single survey can contain anomalies and irregularities, but a large sample over a prolonged period is usually very reliable.

Identify the impact of specific changes in your product

Product modifications can be made with the best of intentions; however, they are not always well received, because customers do not always view them as improvements. Any time your product undergoes a major change, you should survey your customers on how they feel about the updates. You can ask specific questions about the changes or simply ask about their general satisfaction.

Interested in tracking customer satisfaction? Here are some things to consider when constructing your survey.

How to Evaluate Customer Survey Responses
https://www.displayr.com/how-to-measure-customer-feedback-from-surveys/

Closed-ended question methods

Top 2 Box score

The Top 2 Box score is a common way to compute metrics like the Customer Satisfaction Score and Customer Effort Score. It is calculated by taking the percentage of respondents who gave a score within the top two available choices.

For example, taking the percentage of respondents who answered “9” or “10” on a 0-10 scale question would give you the Top 2 Box score for a traditional question like, “On a scale of 0-10, how satisfied are you with this product?”
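
In R, once the responses are in a numeric vector, the calculation is a one-liner; the scores below are invented.

scores <- c(9, 10, 7, 8, 10, 6, 9, 5, 10, 8)
top2box <- mean(scores >= 9) * 100  # % answering 9 or 10 on a 0-10 scale
top2box  # 50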

Average and Median score

The easiest way to measure customer feedback is to simply take the average or median score. The process is simple and interpreting the results is straightforward.

However, using only the average or median in your analysis can produce some misleading results. The simple mean or median doesn’t provide any information on the distribution of responses. An average result of 5 could mean that most respondents gave a rating of around 4-6, but it could also mean that there are two equally-sized clusters around the top (9-10) and bottom (1-2) of the scale.

Mode Score

The mode is simply the most-picked answer. The modal outcome is particularly useful when analyzing responses from multiple-choice questions. When you only need to know the most popular response, the mode will be your go-to. However, we recommend also considering the distribution of your responses along with the mode; you may find several answers with unusually high counts.

We have created a customer satisfaction tutorial that shows how easy it is to create a Top 2 Box Score and calculate the average and median.

Tutorial: Measure Customer Satisfaction in Displayr

Net Promoter Score

The Net Promoter Score (NPS) has become a staple of many customer feedback surveys. It's a great way to measure customer loyalty and predict customer retention rates.

Respondents answer the standard 0-10 recommendation question, and you then calculate NPS with the following formula (a short code sketch follows the category definitions below):

NPS = Percentage of Promoters – Percentage of Detractors

Where:

  • Detractors: respondents who gave a score between 0-6
  • Passives: respondents who gave a score between 7-8
  • Promoters: respondents who gave a score between 9-10
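
A sketch of the same calculation in R, using invented responses:

responses <- c(10, 9, 9, 8, 7, 6, 5, 10, 3, 9)
promoters  <- mean(responses >= 9) * 100  # scores of 9-10
detractors <- mean(responses <= 6) * 100  # scores of 0-6
nps <- promoters - detractors
nps  # 50 - 30 = 20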

Tutorial: Calculate Net Promoter Score in Displayr

Open-ended question methods

Human judgment

When it comes to open-ended responses, there’s no substitute for human judgment. Data algorithms are prone to misinterpreting and mischaracterizing customer feedback when written in plain-spoken language. No matter how advanced an algorithm is, it can't understand your customers as well as you can.

Even if you plan to use a wide array of machine learning and text processing algorithms, it’s always beneficial to also have a person analyze the responses.

Sentiment analysis

Sentiment analysis algorithms can take comments from your respondents and measure how positive or negative the feedback is. It assigns a score to individual words and phrases based on their definitions and the context in which they appear. For example, the word “terrible” has a negative score while a word like “wonderful” has a positive one.

This technique can be useful, but you should be aware of its many shortcomings. Sentiment analysis algorithms are notoriously bad at detecting sarcasm and can mislabel colloquialisms, idioms, and unusual phrases.

Tutorial: Sentiment Analysis in Displayr

Topic modeling

Topic modeling algorithms detect common keywords and phrases in your customer feedback data. For example, the words “lag”, “slow”, and “loading time” could be grouped together to identify respondents who complained about the speed of your program. People who used positive words ("good", "great", and "helpful") alongside words like "support" could be considered customers who are satisfied with your customer service.
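
As a toy illustration of the grouping idea: real topic modeling typically uses algorithms such as LDA, so the simple keyword matching below is a much cruder stand-in, with invented responses and keyword lists.

responses <- c("The app is slow and the loading time is awful",
               "Great support, very helpful team",
               "Lag whenever I open the sidebar")

speed_pattern   <- "lag|slow|loading time"
support_pattern <- "support|helpful"

data.frame(response = responses,
           speed    = grepl(speed_pattern, responses, ignore.case = TRUE),
           support  = grepl(support_pattern, responses, ignore.case = TRUE))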

As with sentiment analysis, topic modeling algorithms can mischaracterize your feedback and skim over issues that a person would definitely pick up on.

Open-ended vs Closed-ended Survey Questions
https://www.displayr.com/open-ended-vs-closed-ended-survey-questions/

What are open-ended questions?

Open-ended questions are those that allow users to respond in their own words. Rather than prompting customers to select from a list of responses, an open-ended question gives customers the chance to respond in an original and unique way.

What are closed-ended questions?

Closed-ended questions have a list of set responses. Respondents are asked to select from either a multiple-choice answer, a numeric scale, or a simple Yes/No. There is no opportunity to clarify or elaborate on their answer.

Common examples

Open-ended questions:
  • What do you like about this product?
  • How can this product be improved?
  • Why did you choose this product?

Closed-ended questions:
  • Would you recommend this product? Yes/No
  • On a scale of 0-10, how would you rate this product?
  • How often do you use this product (daily, weekly, monthly, annually)?

Advantages and Disadvantages

Both open-ended and closed-ended questions come with advantages and disadvantages.

Open-ended questions grant a lot of freedom to the respondent. They can construct their own responses and elaborate where they see fit. This can lead to valuable and insightful feedback; on the other hand, the responses can be difficult to interpret and analyze. Closed-ended responses are far more limited, but they are extremely consistent. This makes for easy analysis and interpretation; however, the information gained from each answer is quite limited.

Analyzing open-ended and closed-ended questions

The analysis methods for closed-ended questions are quite conventional. Customer Satisfaction is usually measured using a scale of 0-10, and a simple mean or median is a perfectly adequate way to compute the score. A Top 2 Box score, which calculates the share of respondents who gave the top two responses, is another method which is commonly used. More advanced statistical techniques like regression modeling and cluster analysis can be used to identify drivers and segments based on closed-ended question responses.

Analyzing open-ended responses usually requires more human input. Sentiment analysis algorithms can measure the tone of a response, and topic modeling techniques can identify commonly used keywords and phrases. However, when analyzing open-ended responses, algorithms and statistical techniques cannot replace human judgment.

Tutorial: Sentiment Analysis in Displayr

Laddering questions

Should you use open-ended or closed-ended questions in your customer feedback survey?

The answer is both. Using a combination of open-ended and closed-ended questions is the best way to structure a survey. A survey can begin with a closed-ended question, like “On a scale of 0-10, how difficult did you find this process?”, and then follow-up with an open-ended question, like “What did you find difficult about this process?”

This method is called "laddering," where you begin with a broad question and drill down into the specifics. Using both open-ended and closed-ended questions gives you the best of both worlds: closed responses that are easy to quantify, and open responses that are far more detailed.

Now you know how to ask open-ended and closed-ended questions. Here's what else to ask in a customer satisfaction survey.

What is the Customer Effort Score?
https://www.displayr.com/what-is-the-customer-effort-score/

The Customer Effort Score (CES) is a metric used to gauge how easy it is for customers to use your product or service. It is traditionally scored on a seven-point scale, from Very Difficult (1) to Very Easy (7). Customers are asked a simple question like, "On a scale of 1 (Very Difficult) to 7 (Very Easy), how difficult was it to use our product?"

The Customer Effort Score is different from other common customer feedback metrics, like Customer Satisfaction and the Net Promoter Score, in that it seeks to measure a specific component of a product or service, rather than a customer's overall sentiment.

However, CES results are often found to be better predictors of customer loyalty and retention than the Customer Satisfaction score and Net Promoter Score. In many cases, the Customer Effort Score is a better metric for a customer's overall sentiment, even though it asks an extremely specific question. This is why so many companies love including the CES in their customer feedback surveys.

Calculating the Customer Effort Score

You can calculate the score by simply taking the mean or median of your responses. The distribution of effort scores can be used to identify clusters among the respondents. If the median score is high but you had a cluster of customers respond with “Very Difficult,” then it is worth finding out why some customers are having such a hard time. They could potentially be experiencing a bug or there has been some form of misunderstanding.
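
A quick sketch in R, with invented 1-7 responses, showing the summary statistics plus the distribution check described above:

ces <- c(6, 7, 5, 6, 1, 7, 6, 2, 6, 7)
mean(ces)    # average effort score
median(ces)  # median effort score
table(ces)   # the full distribution exposes a small "Very Difficult" cluster (1-2)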

When is the best time to survey?

The perfect time to prompt your customers for a Customer Effort Score is directly after they’ve used your product or service. The experience will still be fresh in their minds and they will be able to answer most accurately. It is also useful to prompt a Customer Effort Score after they seek customer support. That way you can measure how effective the support was. Collect Customer Effort Scores often and consistently to track how the scores change over time.

Benefits and Shortcomings

Benefits of the Customer Effort Score:
  • Often a better predictor of future customer behavior than NPS and CSAT
  • An accurate measure of an important aspect of your product or service
  • A widely used and commonly accepted customer feedback metric

Shortcomings of CES:
  • It does not explain why customers are finding the experience easy or difficult
  • It does not compare the ease of use of competitor products and services
  • It does not measure the complete customer relationship with your company or product/service

How to Improve your Customer Effort Score

Once you’ve started tracking your score, it’s time to start looking for ways to improve it. Here are some things to consider:

  • Focus on negative feedback: Customers that rated their experience “very difficult” are most at risk of churn. Their problems may have a very simple solution that could quickly change their response to “very easy.”
  • Ask open-ended questions in your surveys: Start asking “why” questions and find out the specific parts of the user journey that they are finding difficult.
  • Make improving Customer Effort Score a key priority: By placing the score at the center of your customer-relationship strategy, you have a reliable metric to track progress.

Find out why customer feedback is so important.

What is Customer Feedback?
https://www.displayr.com/what-is-customer-feedback/

Different forms of customer feedback

Customer feedback can come in many different forms and from many different sources. It can be extremely well-structured, like a multiple-choice survey. Or it can be messy and difficult to interpret, like feedback from a website comment section.

Here are the most common ways you can collect customer feedback:

  • Customer feedback survey: a structured questionnaire designed to collect feedback from customers
  • Social media: comments, ratings, suggestions, and general sentiments left on social media sites
  • Website prompts: a website pop-up that prompts users to quickly rate a feature, product, or experience
  • In-product prompt: users can be called upon to quickly rate their experience while they are using a product.
  • Third-party reviews and ratings: you can also find feedback on user-generated review sites and rating aggregators.

Open-ended and Closed-ended feedback

You can divide customer feedback broadly, into two categories: open-ended and closed-ended responses.

An open-ended response is one where the customer is able to respond in their own words. Users are free to describe specific issues they are having and suggest ways to improve a product. Closed-ended responses have a set list of options for users to choose from, like multiple-choice questions and ratings on a numeric scale. Although this form of feedback is significantly less detailed than open-ended answers, the results are a lot easier to interpret and analyze.

Closed-ended questions:
  • On a scale of 0-10, how satisfied are you with this product?
  • On a scale of 0-10, how likely are you to recommend this product?
  • On a five-point scale, how difficult was your experience?

Open-ended questions:
  • What do you like most about this product?
  • Who are you most likely to recommend this product to?
  • What did you find difficult about your experience?

Common customer feedback metrics

Customer feedback metrics are a great way for you to gauge how satisfied, interested, and loyal your customers are. The common metrics – Customer Satisfaction, Net Promoter Score, and Customer Effort Score – are particularly useful when you can compare them over a period of time.

Customer Satisfaction

Customer Satisfaction is the most general of the customer feedback metrics. You can gauge customer satisfaction by asking a simple question like, “On a scale of 0-10, how would you rate this product/company/brand?”

The most common way to measure customer satisfaction is with a Top 2 Box score. This is calculated by taking the share of the two top options (for example, 9-10 on a 0-10 scale) as a percentage of all responses. The average, median, or mode score are also adequate ways of measuring satisfaction.

Tutorial: Measure Customer Satisfaction in Displayr

Net Promoter Score (NPS)

The Net Promoter Score measures how likely a respondent is to recommend a product, company, or brand. It is based on responses to a single question: “On a scale of 0-10, how likely are you to recommend this product/company/brand to a colleague or friend?”

Then you can calculate your NPS with the following formula:

NPS = Percentage of Promoters – Percentage of Detractors

Where:

  • Detractors: respondents who gave a score between 0-6
  • Passives: respondents who gave a score between 7-8
  • Promoters: respondents who gave a score between 9-10

The NPS is a measure of customer loyalty and a predictor of revenue growth.

Tutorial: Calculate Net Promoter Score in Displayr

Customer Effort Score (CES)

The Customer Effort Score measures ease of use by simply asking customers to rank their experience on a scale of “Very Difficult” to “Very Easy”. Respondents who found a product difficult to use are vulnerable to churn, so it’s important to identify the pain points in the user journey.

How to Analyze a JotForm Customer Survey in Displayr
https://www.displayr.com/how-to-analyze-a-jotform-customer-survey-in-displayr/

Let's start by creating a customer feedback survey. If you don't have a JotForm account, click here to sign up for free. Building a form should be quite intuitive, but if you run into any problems there is plenty of help available.

When you're happy with your form, click PUBLISH on the JotForm site. Once you've done that, there are various options for how respondents can fill out the form. You can send them a link like this, or embed it in a blog post or website.

Below are two embedding options. You could also use scripts, pop-ups, buttons or third-party platforms such as content management systems. Please fill out the form for yourself!

JotForm embedded with an iframe



JotForm embedded in a Lightbox

Open a lightbox containing the survey

Importing the responses to Displayr

To get started, you only need two things: an API key and a Report ID Number.

An API key is required to automatically extract data from JotForm, and this link describes how to do that in a few simple steps. To find the Report ID Number, follow JotForm's guide on "How to create a report" and make a note of the resulting number.

Once you have your API key and ID number, you're ready to analyze your data in Displayr. Add a new data set by clicking on the blue cloud New Data Set button or the plus symbol (+) in the data tree on the bottom left. Choose R as the source of data and paste in the following code, replacing YOUR_API_KEY and YOUR_REPORT_ID with your values.

message("R output expires in 600 seconds")
library(jsonlite)

jotform.api.key = "YOUR_API_KEY"
report.id = "YOUR_REPORT_ID"

report = read.csv(paste0("https://www.jotform.com/csv/", report.id, "?apiKey=", jotform.api.key))

By clicking this link, you can access a Displayr template document and insert your own API key and Report ID into the data set code. The template document contains static demo data, but feel free to connect it up to your own survey responses.

Visualizing the Responses

The data consists of IP addresses and timestamps, as well as responses to the scale and text survey questions. You can experiment with the best ways to present your data.

The first line of the R code above updates the Displayr document with the latest responses every 10 minutes. To see the latest document, click here. Because the charts embedded in this post are cached, they will be updated daily.

Below is a histogram of the rating responses.


The following chart displays the average rating by submission date.


Maps and Word Clouds

The IP addresses collected can be used to identify the location of the respondents. With that information, we can easily plot a map showing which countries our responses came from. To read more about geocoding IPs, see this post.


Finally, we can collate the free-form text data into a word cloud to highlight popular terms.
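One way to build such a word cloud in R is sketched below, using the wordcloud package. The Feedback column name is a hypothetical stand-in, and the tokenization is deliberately crude:

# Split the free-text responses into lowercase words and count them
words = unlist(strsplit(tolower(as.character(report$Feedback)), "[^a-z]+"))
word.counts = table(words[nchar(words) > 3])  # drop very short words

library(wordcloud)
wordcloud(names(word.counts), as.numeric(word.counts), min.freq = 2)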


Try it yourself

This post has demonstrated the "full circle" of importing and exporting JotForm survey data to and from Displayr. To create your own document and chart the results of a survey using our template, click here.

Relational versus Transactional Net Promoter Score
https://www.displayr.com/relational-versus-transactional-nps/
Wed, 16 Jan 2019 03:09:25 +0000

Relational vs. Transactional NPS

The relational approach involves surveying customers to gauge their overall perception of the organization, and is intended to show overall satisfaction and loyalty. Relational NPS engages the customer independently of any single transaction experience. This provides a deeper understanding of the customer’s overall sentiment towards the brand, encompassing all of their product experiences combined.

Transactional Net Promoter Score instead measures a customer’s satisfaction after a specific event or at a specific stage of engagement: for example, immediately after a product order, after product delivery or installation, or after a customer service interaction. The transactional survey focuses only on that specific event and the customer’s NPS rating at the time of the event. Like relational NPS, transactional NPS should be measured on an ongoing basis.

When to use Relational and Transactional NPS

With relational NPS, the primary goal is to understand the overall perception of the company and customer loyalty. Relational NPS is also useful for benchmarking against competitors' NPS scores and for targeting low-purchasing, non-purchasing, or non-returning customers. You should measure relational NPS continuously for a more accurate reading of overall brand health.

Transactional NPS is better suited to identifying the specific strengths and weaknesses of individual customer interactions. Improving those interactions strengthens the overall customer experience, which in turn lifts the overall relational NPS.

Survey Design Considerations

At a minimum, a standard relational NPS question asks how likely the customer is to recommend the product or service on a scale of 0 to 10, with wording such as: “How likely are you to recommend [your product] to your friends and family?” You should then include an open-ended follow-up question so that the customer can explain the score they gave.

The NPS question in a transactional survey should be modified to include some sort of reference to the specific event which triggered the NPS survey offer: “After completing the ordering process, how likely are you to recommend [your product] to your friends and family?”

Relational surveys are sent to all potential respondents at the same time, independent of any specific customer interaction. Transactional surveys can be sent out at any interval following the specific interaction (immediately, the next day, etc.). Either type can be distributed through a variety of channels, including email, text message, and website pop-ups.
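Whichever variant you field, the score itself is computed the same way: the percentage of promoters (ratings of 9-10) minus the percentage of detractors (ratings of 0-6). A minimal R sketch, using made-up ratings for illustration:

# Hypothetical 0-10 likelihood-to-recommend ratings
nps.ratings = c(10, 9, 9, 8, 7, 6, 10, 3, 9, 8)

promoters = mean(nps.ratings >= 9) * 100   # ratings of 9 or 10
detractors = mean(nps.ratings <= 6) * 100  # ratings of 0 to 6

nps = promoters - detractors  # ranges from -100 to +100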

Tutorial: Calculate Net Promoter Score in Displayr

Analyzing Customer Satisfaction Scores
https://www.displayr.com/analysing-customer-satisfaction-scores/
Wed, 09 Jan 2019 03:51:07 +0000

Having collected your customer satisfaction data, it’s time to do some analysis. Here I've outlined four common and simple methods of analyzing your customer satisfaction scores.

Depending on the structure of your customer feedback survey question, the data will usually come through on a numeric scale. The endpoints may vary (e.g. a 7-point versus a 10-point scale), but the basic methods of analysis are the same regardless.

Frequencies

The easiest way to look at your customer satisfaction data is to look at the frequencies for each scale point. In the example below, I have created a table using customer satisfaction data on a scale of 1 to 7. The first table shows the number of respondents who selected each scale point. It’s easy to see that a majority of people selected scale points of 4 or above. However, turning these counts into percentages gives a much clearer picture of the differences between the groups.
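For readers who want to replicate this outside Displayr, a minimal R sketch is below; the satisfaction vector is made-up stand-in data on the same 1-to-7 scale, not the article's actual data:

# Hypothetical 1-7 satisfaction ratings
satisfaction = c(7, 5, 6, 4, 6, 2, 5, 7, 3, 6)

counts = table(satisfaction)            # respondents per scale point
percentages = prop.table(counts) * 100  # the same table as percentages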

Top and bottom boxes

A common method of simplifying the scale is to look at the frequencies of respondents who selected one of the top two or bottom two scale points. In our example, that would be those who selected 1 or 2 for the bottom “box”, and 6 or 7 for the top “box”. At its simplest, you sum up the number of cases in each of the two sets of scale points and present them together:

This method can help rid your analysis of bias introduced by respondents who never select the top scale point because “…there’s always room for improvement”.

Top 2 box score

This method is similar to the "top and bottom boxes" approach we just discussed, but instead of examining both ends of the response spectrum, we consider only the top two boxes. The satisfaction score is expressed as a percentage; the Top 2 Box score for our example data set is 45%.
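A minimal R sketch of both box calculations, reusing the made-up satisfaction vector from the frequencies sketch above (so the percentages will differ from the article's 45%):

satisfaction = c(7, 5, 6, 4, 6, 2, 5, 7, 3, 6)  # hypothetical 1-7 ratings

bottom.box = mean(satisfaction <= 2) * 100  # selected 1 or 2
top.box    = mean(satisfaction >= 6) * 100  # selected 6 or 7 (the Top 2 Box score)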

Average

You are, of course, not limited to frequencies. An average can give you an idea of the overall satisfaction your respondents are experiencing. It comes out as a single number and should be assessed against the maximum scale point.

In the example data here, the average comes out at 4.9 out of 7. That’s an OK result, but there is clearly room to improve satisfaction among these respondents.

Tutorial: Measure Customer Satisfaction in Displayr

Other methods

It may also be interesting to look at the median and mode of the data. The median is the value that falls in the exact middle of your sample once the responses to the customer satisfaction question are arranged in ascending order. The example data here has an even number of cases (n = 896), so the middle falls between two cases; to get the median, we simply take the mean of those two cases. Here, both are 5, so the mean, and therefore the median, is also 5.

The mode is simply the scale point with the highest frequency, i.e. the scale point selected most often. We can already tell from the frequency tables we made earlier that it’s 6.
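All three summary statistics are one-liners in R. The sketch below again uses the made-up ratings from the earlier sketches rather than the article's actual data:

satisfaction = c(7, 5, 6, 4, 6, 2, 5, 7, 3, 6)  # hypothetical 1-7 ratings

mean(satisfaction)    # the average
median(satisfaction)  # the middle value once sorted

# R has no built-in mode function; take the most frequently selected scale point
as.numeric(names(which.max(table(satisfaction))))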

How to Analyze Trends in Customer Satisfaction
https://www.displayr.com/analyzing-customer-satisfaction/
Thu, 27 Dec 2018 13:25:48 +0000

Customer satisfaction is an especially useful metric when tracked over time. By regularly sending out customer feedback surveys, you can measure how satisfaction rates are trending. Here are a few data models and visualizations to help you analyze your customer satisfaction time series data.

Tutorial: Measure Customer Satisfaction in Displayr

Average satisfaction over time

Creating a crosstab of Date by Overall Satisfaction automatically shows the average satisfaction per time period. We can also include the row sample size in the table. For our example, we chose an aggregation period of one month, but if your row sample size is small you can use a larger aggregation period.

Significant values are indicated by blue or red arrows. In this table, the only significant value is the higher satisfaction in the last month. However, that month's row sample size is also much smaller than the other months', which suggests that the data collected over May is incomplete (or at least not comparable to the other months). We will therefore omit this time point from any analysis that uses monthly aggregated data.


Trends in average satisfaction over time

To look for patterns of change over time, we use the table to create a chart. We added a trend line, which can make patterns more visible, and excluded the last data point from the trend to avoid using incomplete data. From the column chart, it is clear that there is no strong trend. In fact, if you hover over the trend line, you can see that average satisfaction is actually decreasing slightly.

Learn how to analyze customer satisfaction trends in Displayr

Changes in the distribution over time

To look at not only the averages but the entire distribution of responses, we use a stacked column chart showing cumulative percentages. The upper edge of the orange bar shows the percentage of respondents who gave a score of 1 (extremely dissatisfied) or 2 (dissatisfied). From the stacked chart, we can see that the proportion of dissatisfied or extremely dissatisfied respondents increased from January 2018 to April 2018.


To confirm that this trend is significant, we can perform a linear regression using only the data from January 2018 onward.



The pink highlighting in the table above shows that the coefficient for Date is significantly different from zero. In fact, the results suggest that overall satisfaction is decreasing by an average of 0.04 points per month. In contrast, when we perform the same analysis on the whole data set (below), we see no significant trend.
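A minimal R sketch of the mechanics of this kind of trend test is below. The monthly averages are made up, with a slope chosen to mirror the scale of the decline described above; note that the article's regression runs on respondent-level data, which gives far more statistical power than four aggregated points:

# Hypothetical data: months since January 2018 and average satisfaction
months = 0:3                              # Jan, Feb, Mar, Apr 2018
satisfaction = c(3.60, 3.55, 3.52, 3.48)  # made-up monthly averages

model = lm(satisfaction ~ months)
summary(model)  # the "months" coefficient estimates the change per month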



Optimizing the Mobile Delivery of Your Net Promoter Score Survey
https://www.displayr.com/optimizing-the-mobile-delivery-of-your-net-promoter-score-nps-survey/
Wed, 19 Dec 2018 01:23:42 +0000

Most online customer feedback survey platforms, such as Qualtrics, SurveyMonkey, and Alchemer (formerly SurveyGizmo), typically report that between 25% and 50% of surveys are currently viewed on a smartphone or tablet. As device ownership continues to grow, it won’t be long until a majority of all surveys are taken on a mobile device.

Conducting NPS surveys on mobile devices

Given this expected continued growth in device ownership, and therefore in mobile survey engagement, it is more important than ever to optimize your Net Promoter Score (NPS) survey for mobile delivery. This will ensure you get the most accurate results possible. Deploying an NPS survey on mobile devices also allows for real-time, continuous NPS measurement, as opposed to sending out bulk waves of surveys for periodic measurement. This continuous approach aims to reach customers at a consistent, specific point in time or after a specific interaction with your product or service.

Mobile design optimization

Most of the major survey platforms have mobile optimization built into their survey design toolbox. When designing your survey, these platforms automatically provide an option for designing it in a standard desktop format or in a mobile device format. This allows you to deploy both a desktop version and a mobile version of your survey simultaneously.

When fielding the survey, the platform can detect whether the respondent is using a desktop device or a mobile device and will adjust the layout and inputs accordingly. Thus, respondents should be able to access surveys from any mobile device, and the platform will select the appropriate layout for it.

When optimized for mobile devices, the NPS question will look similar to a desktop version, but will have larger buttons to select a response since the answer will be selected with a finger instead of a mouse pointer.

General best practices for mobile surveys

Here is a list of general best practices to follow when developing surveys for mobile devices:

  1. Survey length – keep the survey short, as break-offs are more likely on mobile surveys than on desktop ones.
  2. Pages/scrolling – show one question at a time and break large questions into smaller ones, reducing or eliminating the need to scroll, which is more difficult on a mobile device.
  3. Limit text – avoid large bodies of text, as reading them on a mobile device can be cumbersome.
  4. Response options – a single vertical column is best for both single-response and multiple-choice questions.
  5. Drop-down lists – avoid drop-down lists, as they make it harder to read the options and select a response.
  6. Grid/matrix questions – avoid grid/matrix questions altogether, as they are very difficult to display correctly on mobile devices because horizontal response options can force horizontal scrolling.
  7. Open-ends – limit the number of open-ended questions, since typing text responses on a mobile device is more difficult.
  8. Images – limit the number of images in the survey, especially larger ones, since mobile devices may not render them correctly.
  9. JavaScript – avoid using JavaScript, as it is not supported by all mobile devices.

Now that you've collected your Net Promoter Score data, learn how to analyze it by first calculating the score itself.

Tutorial: Calculate Net Promoter Score in Displayr
