Let’s be honest, text feedback is a bit of a minefield to read through and draw themes out of, let alone turn into a metric. For that reason alone, many businesses resort to asking lots of rating and multi-choice questions, because those responses are easy to turn into a metric. The problem is that the richest insights typically come from customers sharing their feelings in written responses. We believe it’s also the best user experience: keep a survey short and sweet by asking for just a rating and text feedback.
Taking a holistic view of text feedback is incredibly insightful. For example, a holiday park heard some customers complaining about the quality of the Wifi. This sort of intermittent problem is notoriously difficult to test: was it just a one-off, or a genuine long-term problem affecting many customers? How long has it been going on? Did spending thousands of dollars to fix it actually resolve the issue?
You could wade through hundreds or thousands of text responses to find out, but who has time for that? Unless you hire a summer student once a year.
Using modern text analysis methods makes it possible to make sense of your data quickly — quantify it, discover themes, understand the sentiment (positive or negative), and trend it.
This area of computation has changed rapidly with the evolution of natural language processing, a branch of Artificial Intelligence. It goes far beyond a word cloud, which misses groups of similar words like “price”, “prices” and “pricing”. Natural language processing strips each word down to its root and aggregates words that share the same root. Key phrases are also useful, e.g. “marine park” or “customer service”, rather than treating each word independently.
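To make that concrete, here is a minimal sketch of the idea, using NLTK’s Porter stemmer for the word-stripping step and simple adjacent-word pairs for the phrases. The example comments are invented, and a production pipeline would be more sophisticated.

```python
# A minimal sketch of word normalisation and phrase extraction.
from collections import Counter
from nltk.stem import PorterStemmer

comments = [
    "The price was fair but the prices at the cafe were high",
    "Great customer service, though pricing for the marine park felt steep",
]

stemmer = PorterStemmer()
stem_counts = Counter()
phrase_counts = Counter()

for comment in comments:
    words = [w.lower().strip(",.") for w in comment.split()]
    # Strip each word down to its root so "price", "prices" and
    # "pricing" are counted as one group rather than three.
    stem_counts.update(stemmer.stem(w) for w in words)
    # Count adjacent word pairs so phrases like "customer service"
    # or "marine park" are kept together instead of split apart.
    phrase_counts.update(zip(words, words[1:]))

print(stem_counts.most_common(5))
print(phrase_counts.most_common(5))
```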
Finally, AI models are trained to detect sentiment, such as positive, negative or neutral. This helps surface negative comments that might otherwise be hidden inside an overall positive rating.
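As an illustration, here is one off-the-shelf way to score sentiment, using NLTK’s VADER model. This is just a sketch of the technique, not Yonder’s own pipeline, and the example comment is invented.

```python
# A sketch of rule-based sentiment scoring with NLTK's VADER model.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-off lexicon download
sia = SentimentIntensityAnalyzer()

# The rating on this response might be 4/5, but the text is negative.
comment = "The Wifi was terrible all weekend and kept dropping out."
scores = sia.polarity_scores(comment)  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}

# A common convention: compound <= -0.05 is negative, >= 0.05 is positive.
if scores["compound"] <= -0.05:
    label = "negative"
elif scores["compound"] >= 0.05:
    label = "positive"
else:
    label = "neutral"
print(label, scores)
```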
The next step is grouping similar words into ‘text topics’. This can be done automatically with unsupervised AI algorithms, although they need a large data set (10,000+ words), and they don’t work particularly well when a short piece of feedback touches on multiple topics, which happens a lot in reality.
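One common unsupervised algorithm for this kind of automatic grouping is LDA (latent Dirichlet allocation); the text above doesn’t name a specific method, so treat this as an illustrative sketch on a toy data set.

```python
# A sketch of unsupervised topic discovery with LDA.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "Wifi kept dropping out in our cabin",
    "Wifi was too slow to stream anything",
    "The pool was clean and the kids loved it",
    "Great pool area, staff were friendly",
]

# Bag-of-words counts; a real data set would need far more responses.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(comments)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Show the top words the model has grouped into each topic.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-3:][::-1]]
    print(f"topic {i}: {top}")
```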
The best text feedback methods combine your input, grouping the words and phrases you care about, with algorithms that automatically sort and categorize the rest. With this, you can make sensible counts of mentions and monitor trends in those mentions.
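Here is a rough sketch of that semi-supervised approach: you supply the keyword groups, and the code tags and counts mentions per month so they can be trended. The topic groups, comments and dates are invented for illustration.

```python
# A sketch of keyword-assisted tagging: you define the word groups you
# care about, then every response is sorted and counted automatically.
from collections import Counter

topics = {
    "wifi": {"wifi", "internet", "connection"},
    "pricing": {"price", "prices", "pricing", "expensive"},
    "service": {"staff", "service", "friendly", "rude"},
}

feedback = [
    ("2024-01", "Wifi dropped out constantly in our cabin"),
    ("2024-01", "Staff were friendly but prices felt expensive"),
    ("2024-02", "Internet connection was unusable at night"),
]

# Count topic mentions per month so they can be trended over time.
mentions = Counter()
for month, comment in feedback:
    words = {w.lower().strip(",.") for w in comment.split()}
    for topic, keywords in topics.items():
        if words & keywords:
            mentions[(month, topic)] += 1

for (month, topic), n in sorted(mentions.items()):
    print(month, topic, n)
```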
Suddenly you’ve got a metric you can report on and set alerts against, such as a rise in negative mentions relating to Wifi. Another example is analysing feedback to find data that supports a business decision. In fact, a tourism customer of ours justified a significant investment in a new product based largely on validation from their customers’ own words.
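An alert on that metric can be as simple as comparing this month’s negative mentions against a recent baseline. The counts and the 50% threshold below are invented for illustration.

```python
# A sketch of the alerting idea: flag a topic when this month's negative
# mention count jumps well above its recent average.
from statistics import mean

negative_wifi_mentions = {  # negative mentions tagged "wifi", per month
    "2023-11": 3,
    "2023-12": 4,
    "2024-01": 2,
    "2024-02": 9,
}

months = sorted(negative_wifi_mentions)
baseline = mean(negative_wifi_mentions[m] for m in months[:-1])
latest = negative_wifi_mentions[months[-1]]

if latest > baseline * 1.5:
    print(f"Alert: negative Wifi mentions up to {latest} (baseline ~{baseline:.1f})")
```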
So, ready to ditch the spreadsheets and the summer student, and start using your text feedback in real time to make data-driven decisions, automatically? Give Yonder a go.