Evaluation: DLNET Chat May 2021
Earlier this month, we had an interesting chat on the topic of evaluation. Like it or loathe it, being able to evaluate your content helps ensure you stay current and that you are meeting your audience's needs. However, this is easier said than done!
So to get us thinking about this week's #DLNETChat, what is the most important factor in evaluating digital learning?
If you have different thoughts, let us know in the replies!
Don't forget to join us this Friday 7 May at 1.00pm!
— DLNET (@DLNET) May 5, 2021
With time constraints polling as the most important factor in the evaluation process, lots of people got involved to share their tips and tricks for getting the most out of your evaluation.
Data Processing
With no time to waste, we'll start by looking at some time-saving techniques for processing the data you're getting in.
Next question for today's #DLNETChat…
Q3. Processing data can often be the most time-consuming element of evaluation, what are the headlines that you look for to see ‘at a glance’ whether your work is doing what you intended?
— DLNET (@DLNET) May 7, 2021
We had a few interesting responses to this, with a discussion about Net Promoter Scores (NPS) coming out of the question. An NPS is a metric used to measure visitor satisfaction with a single number between -100 and +100. It usually comes from a question like "On a scale of 0 to 10, how likely are you to recommend this session?": the percentage of detractors (those scoring 0 to 6) is subtracted from the percentage of promoters (those scoring 9 or 10) to give the score. This gives you data that you can look at quickly to determine overall satisfaction. You can read a quick summary of using an NPS in your museum here. And see below for some samples you can use yourself!
It's a question that's normally along lines of 'how likely are you to recommend X'? (sample Q bank: https://t.co/XWpniOAc6o) then a little calculation gives you a score from -100 to +100 #DLNETChat
— Helen Adams (@HelAdams) May 7, 2021
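If you fancy running the numbers yourself, here's a minimal sketch of that calculation in Python; the function name and the sample ratings are just for illustration, not taken from any particular toolkit.

```python
# Minimal NPS sketch: responses are 0-10 answers to a
# "how likely are you to recommend X?" style question.

def net_promoter_score(responses):
    """Return an NPS between -100 and +100.

    Promoters score 9-10, detractors 0-6; passives (7-8)
    count towards the total but not towards either group.
    """
    total = len(responses)
    if total == 0:
        raise ValueError("no responses to score")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return round(100 * (promoters - detractors) / total)

# A made-up session's worth of ratings:
print(net_promoter_score([10, 9, 8, 7, 6, 10, 3, 9]))  # 25
```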
Recently, there's been a trend towards using tools to measure wellbeing and mood before and after a session. One of our chatters suggested this one, and it looks like a really useful toolkit.
A3: Wellbeing measures can be useful to see whether an experience/session/workshop has resulted in a change in mood or perception. @UCL_Culture have developed a toolkit for this and we've used the Younger Adult umbrellas a bit: https://t.co/UDJBamj65i #DLNETChat
— Helen Adams (@HelAdams) May 7, 2021
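As a rough illustration of the before-and-after idea, here's what comparing pre- and post-session mood scores could look like; the 1-5 scale and the numbers are assumptions for the example, not the actual format of the UCL toolkit.

```python
# Hedged sketch: self-reported mood before and after a session,
# on an assumed 1-5 scale (not the UCL toolkit's actual format).
from statistics import mean

before = [2, 3, 3, 4, 2]  # each participant's pre-session score
after = [4, 4, 3, 5, 3]   # the same participants afterwards

changes = [a - b for b, a in zip(before, after)]
print(f"Mean change in mood: {mean(changes):+.1f}")  # +1.0
print(f"Improved: {sum(c > 0 for c in changes)}/{len(changes)}")  # 4/5
```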
Tools
Speaking of toolkits, we asked people for their opinions on the most important technique for gathering this information in the first place! There are survey tools readily available with high levels of customisation to suit your needs. Google Forms has lots of customisation available, as well as a handful of pre-built example surveys that can be edited, and Google Analytics has useful tools to help track website visits, downloads and demographics. Helen might have hit the nail on the head with this one, however:
A2: For me in the case of surveys & forms, it's only asking Qs I can do something useful with the answer, asking the most crucial things first (in case of fatigue), & being creative in the use of multiple choice / Likert scales to limit free text and aid analysis. #DLNETChat
— Helen Adams (@HelAdams) May 7, 2021
Keeping your questions succinct and asking only for information you actually have a use for is key to getting the most out of your evaluation.
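To show what limiting free text in favour of Likert scales can do for analysis, here's a rough sketch that summarises one question from a Google Forms CSV export; the file name, column header and scale labels are all assumptions for illustration.

```python
# Sketch: counting answers to a Likert question from a
# Google Forms CSV export. File and column names are made up.
import pandas as pd

responses = pd.read_csv("session_feedback.csv")

# Keep the scale in its natural order rather than
# sorting the answers by frequency.
scale = ["Strongly disagree", "Disagree", "Neutral",
         "Agree", "Strongly agree"]
counts = (responses["The session met my needs"]
          .value_counts()
          .reindex(scale, fill_value=0))
print(counts)
```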
Issues
We've dedicated a fair bit of this post to time, but there are plenty of other issues we face when trying to evaluate our output effectively. The discussion focussed on a few important factors to be aware of during the evaluation process. Evaluation is about getting answers that allow you to grow, and to get those answers you need to ask the right, meaningful questions. If you have clear objectives to begin with, you will find this much easier. We should also take a second to admire Sian's excellent GIF game during this chat.
Why are we doing this evaluation should be at the crux of every evaluation plan – to hit targets, to communicate success to the boss, to improve our offer, to really listen to our audience's needs and so many more why's! #DLNETChat pic.twitter.com/Qnfq8DRAqj
— Sian Shaw (@SianLShaw) May 7, 2021
With most of the world moving into an online space, the chat also covered the difficulties of gathering data online compared with in person. We can't hand out surveys during a session and collect them; we have to rely on folk filling things out in their own time, and once the session is over they might forget to complete an evaluation form. We can get quick quantitative data by asking for "thumbs ups" or running simple polls on Zoom, but turning that into meaningful, measurable data is tricky. It seems like openness might be key.
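One small way to make those quick polls a bit more measurable is to normalise the counts, so sessions of different sizes can be compared like for like. A rough sketch, with made-up session names and tallies:

```python
# Sketch: normalising quick poll counts to percentages so
# differently sized sessions are comparable. Data is made up.
polls = {
    "Session A": {"thumbs_up": 18, "attendees": 25},
    "Session B": {"thumbs_up": 30, "attendees": 60},
}

for name, p in polls.items():
    pct = 100 * p["thumbs_up"] / p["attendees"]
    print(f"{name}: {pct:.0f}% positive")  # A: 72%, B: 50%
```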
In a project we are doing, getting the qualitative stuff has been achieved in some cases by getting buy-in at a deeper level from participants: "we can offer this to you because we are funded, help us continue to get funding by honestly answering these questions"… #DLNETChat
— Stuart D. Berry (@stuartdberry) May 7, 2021
So rather than monetarily incentivising your audience to complete evaluation after a session (Love2shop vouchers, anyone?), emphasising the importance of their feedback and adding value that way could help you get the meaningful feedback you need.
Thanks to everyone who joined in on the chat and shared their experiences! If you want to read it all in full, you can click here to go to a Twitter Moment!