Ask too many questions - Yes, I mentioned this in my first post, but it's worth repeating, this time on behalf of the researcher. Consider the number of questions you ask as well as the number of tasks participants perform; it's up to you to sift through all of those data points and report your findings. For my diary study focused on the companion app, I had numerous stakeholders, each with numerous research questions, and all were of equal importance, according to the client. I quickly learned that I had been a little overzealous in trying to accommodate all of their needs. While I was able to deliver on their core questions, I know there was still unexamined data holding more insights, but time constraints kept me from analyzing everything I was collecting. So, take time to prioritize what's most important and leave the less critical questions for a second or third round of research.
Wait to report findings until the end - Even after refining our study protocol to focus on the most critical questions for the second round of our device study, we still had an enormous amount of data coming in because we had approximately 75 participants spread across three US cities. To tackle this volume of data and keep the client informed of what we were learning, I delivered weekly topline reports focused on key steps in the overall customer experience (remember, our study began with the initial education about the device, continued through the purchase, delivery, and setup processes, and concluded after one month of using the device). This proved to be a great way to divide the data into logical, manageable chunks, and it allowed us to report issues and findings quickly and directly to the key stakeholders responsible for specific steps in the customer journey. Because we delivered on this weekly routine, the client could quickly begin implementing fixes and changes aimed at improving the overall user experience. This weekly rhythm also gave us great insight into how various metrics were tracking week over week; in fact, we saw steady improvement in nearly all key metrics, including participant satisfaction, during the second round of our device diary study.
Much like the previous ‘Do’s and Don’ts’ I shared, these came from my desire to conduct effective, impactful research studies. Hopefully you can use them in your next diary study. And since every study is different, you’ll likely discover other ways to run more effective and insightful research.
⬡ ⬡ ⬡
Do you have experience running diary studies? Please share your ‘Do’s and Don’ts’ by leaving a comment below. Thanks!