Automated Analysis of Participant Feedback Using ML

 

Every week, hundreds of participants share their experiences as they complete our program. Capturing this feedback is crucial to continuously improving our services. Having already automated part of the analysis in R, we explored unsupervised classification as a next step. This project involved (1) researching the technology, (2) choosing a method, and (3) testing the algorithm to evaluate how it could be integrated into our research workflow.


 

Testing an emerging approach

AI offers many possibilities for speeding up our work. As with any new application of technology, however, we must test and validate new methods before adopting them into the workflow. We therefore ran a series of tests to assess whether ML algorithms were ready for adoption.

 

Project Details

Text-mining to uncover themes

Reviewing large data sets to uncover common themes is time-consuming, so we wanted to leverage computer-assisted methods to identify those themes for us. Text mining emerged as the favored approach.
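As a rough illustration of what a first text-mining pass looks like (a minimal sketch in Python rather than the project's actual R pipeline; the sample comments and stopword list below are hypothetical):

```python
from collections import Counter
import re

# Hypothetical participant comments (illustrative only, not real feedback data)
comments = [
    "The coaching sessions were helpful and the coach was supportive",
    "More flexible scheduling of sessions would be helpful",
    "The program materials were clear but the scheduling was difficult",
]

# A small stopword list; real pipelines use a much larger one
STOPWORDS = {"the", "and", "was", "were", "of", "but", "would", "be", "more"}

def top_terms(texts, n=5):
    """Tokenize, drop stopwords, and count the most frequent terms."""
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return counts.most_common(n)

print(top_terms(comments))
```

Even this simple frequency count surfaces candidate themes ("sessions", "helpful", "scheduling") that an analyst would otherwise find by reading every comment.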

 


Evaluating Topic Modeling & Latent Dirichlet Allocation

LDA rests on a different theoretical construct than traditional dictionary-based approaches. Both start from word distributions, but they use them statistically in different ways: a dictionary-based method counts matches against a predefined word list, while LDA infers latent topics probabilistically from patterns of word co-occurrence. Understanding these differences was key to choosing the right methodology for our project.
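To make the contrast concrete, here is a toy sketch of both ideas side by side: a dictionary-based tally against a fixed word list, and a minimal collapsed Gibbs sampler for LDA. This is written in Python rather than the R tooling the project used, and all document text, topic counts, and parameter values are illustrative assumptions, not the project's actual configuration.

```python
import random

def dictionary_score(doc, lexicon):
    """Dictionary-based approach: count tokens matching a fixed word list."""
    return sum(1 for w in doc if w in lexicon)

def lda_gibbs(docs, n_topics=2, alpha=0.1, beta=0.01, iters=300, seed=0):
    """Toy collapsed Gibbs sampler for LDA: infers topics from co-occurrence."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    widx = {w: i for i, w in enumerate(vocab)}
    ndk = [[0] * n_topics for _ in docs]       # doc-topic counts
    nkw = [[0] * V for _ in range(n_topics)]   # topic-word counts
    nk = [0] * n_topics                        # total tokens per topic
    z = []                                     # topic assignment per token
    for d, doc in enumerate(docs):
        zd = []
        for w in doc:                          # random initial assignments
            t = rng.randrange(n_topics)
            zd.append(t)
            ndk[d][t] += 1
            nkw[t][widx[w]] += 1
            nk[t] += 1
        z.append(zd)
    for _ in range(iters):                     # resample each token's topic
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t, wi = z[d][i], widx[w]
                ndk[d][t] -= 1; nkw[t][wi] -= 1; nk[t] -= 1
                weights = [(ndk[d][k] + alpha) * (nkw[k][wi] + beta) / (nk[k] + V * beta)
                           for k in range(n_topics)]
                t = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][wi] += 1; nk[t] += 1
    # Return the top three words for each inferred topic
    return [[vocab[i] for i in sorted(range(V), key=lambda i: nkw[t][i], reverse=True)[:3]]
            for t in range(n_topics)]

# Hypothetical tokenized feedback hinting at two themes (coaching vs. scheduling)
docs = [
    ["coach", "session", "helpful", "coach", "supportive"],
    ["schedule", "time", "conflict", "schedule", "evening"],
    ["coach", "session", "supportive", "helpful"],
    ["schedule", "evening", "time", "conflict"],
]
print(dictionary_score(docs[0], {"coach", "helpful"}))  # → 3
print(lda_gibbs(docs, n_topics=2))
```

The key difference is visible in the code: the dictionary approach only ever counts against the analyst's predefined lexicon, while the LDA sampler discovers the word groupings itself from how words co-occur across documents.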

 
