Karen Dietz's insight:
I discovered this from fellow curator Jose and I thought you would find it both interesting and helpful.
The bane of storytellers and biz story professionals is the lack of decent evaluation tools. We have scant few. I'd say we don't have any at all, but I'm not aware of everything in the universe :)
How do you know a story is good? If you hear a less than compelling story, how do you know what's wrong?
The same is true for digital stories. And believe me, I view lots of digital stories and pass on most. Now I have some rubrics to help me tell you why.
Standard evaluation measures are essential -- they build consistency and take evaluations out of the land of whitewashing and personality contests.
These rubrics were developed for teachers, but any business can use them! I hope they help you as you craft your stories, and help you understand why a story (digital or otherwise) falls flat.
Until we have our own Siskel & Ebert (so sad they are both gone now), we'll have to find rubrics where we can, eventually develop our own, and keep testing and refining them.
Summary: On the average Web page or blog post, users read only about 20 percent of the text, and the first 10 seconds of a visit are critical. This score supports your efforts to keep readers on the page longer.
Here is some very interesting work by my colleague Urs Gattiker in Switzerland, who is developing algorithms to help businesses measure engagement on their websites.
This is tough work, but I think Urs is on to something here. While we don't yet have measures of the quality of stories on a website/blog, these algorithms will indicate whether the stories you share on your site are captivating (longer site visits). If you end up with a low score, you probably need to revisit your content and visuals.
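To make the idea concrete, here is a minimal, purely illustrative sketch of how an engagement score might be computed from visit dwell times. This is my own toy example, not Urs' actual algorithm: it assumes (per the summary above) that visits under about 10 seconds count as bounces and that longer stays signal more captivating content. The function name and the 5-minute cap are my inventions.

```python
# Illustrative only: a naive engagement score, NOT Urs Gattiker's algorithm.
# Assumptions: visits shorter than ~10 seconds are bounces; longer dwell
# times indicate more captivating stories; stays are capped at 5 minutes
# so one open browser tab can't dominate the average.

def engagement_score(dwell_seconds, bounce_threshold=10.0, cap=300.0):
    """Return a 0-100 score: the share of visits that outlast the bounce
    threshold, weighted by how long engaged visitors actually stayed."""
    if not dwell_seconds:
        return 0.0
    engaged = [min(t, cap) for t in dwell_seconds if t >= bounce_threshold]
    stay_rate = len(engaged) / len(dwell_seconds)            # fraction not bouncing
    depth = (sum(engaged) / len(engaged) / cap) if engaged else 0.0  # avg stay vs cap
    return round(100.0 * stay_rate * depth, 1)

# A page where most visitors leave within 10 seconds scores low:
print(engagement_score([3, 5, 8, 12, 200]))
# A page that holds visitors for minutes scores much higher:
print(engagement_score([120, 240, 300, 45]))
```

In this toy version, a low score can come from either problem the post describes: too many visitors bouncing in the first 10 seconds, or engaged visitors leaving quickly -- either way, a signal to revisit content and visuals.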
I look forward to hearing more as Urs refines these algorithms and shares his results with us.