Node Status: SKETCH
Concerns and Biases [Temporary Section]
- [Major] I have operationalized "interactive narrative experience" down to well-formed story structure, story-level agency, and reported experience. And then I have measures for each one.
My concern is a potential disconnect between story structure and agency versus user experience. That is, by my Marlinspike-based measures I may find a significant difference, but the user may not notice it or agree. For example, we might find that with threading, the user's actions connect to all kinds of later events (which doesn't happen without threading), so by our definition the user has significant agency. But the user could say, "No, I found my agency was the same in either case." Is this a problem we can avoid? (A sketch of what such a structural measure might look like follows below.)
I suppose such a result would serve to refute our definitions, implying that we're missing something else that significantly shapes the user's experience. And so we would have learned something from it all. The pilot test should give us some warning signs on this sort of thing.
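For concreteness, here is a minimal, hypothetical sketch of the kind of structural agency measure I have in mind. The event-id format, the "player:" prefix, and the causes log are all assumptions for illustration; this is not the actual Marlinspike measure. It scores a play trace by the fraction of player actions that some later event connects back to.

    # Hypothetical sketch only -- NOT the real Marlinspike measure.
    def structural_agency(trace, causes):
        """trace: ordered list of event ids (strings).
        causes: dict mapping each event id to the set of earlier event ids
        it draws on (an assumed log format)."""
        player_actions = [e for e in trace if e.startswith("player:")]  # naming convention assumed
        if not player_actions:
            return 0.0
        reincorporated = set()
        for event in trace:
            reincorporated |= set(causes.get(event, ()))
        connected = [a for a in player_actions if a in reincorporated]
        return len(connected) / len(player_actions)

The worry in this item is exactly that this number could go up under threading while the participant's self-reported sense of agency stays flat.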
- Presumably there will be a significant difference with threading on versus off. There won't be if (see the sketch after this list):
- there are insufficient scenes to pick from; that is, if there is only one scene at a time that has its preconditions met.
- there is no thread difference between scenes; that is, if all the scenes that can be played also extend the same threads.
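To make those two failure conditions concrete, here is a minimal sketch of the selection step under assumed data structures (the Scene fields, the thread-overlap preference, and the threading-off fallback are all guesses for illustration, not the real Marlinspike code):

    from dataclasses import dataclass, field

    # Illustrative data model only; field names are assumptions.
    @dataclass
    class Scene:
        name: str
        preconditions: set = field(default_factory=set)    # facts that must already hold
        extends_threads: set = field(default_factory=set)  # threads this scene would continue

    def can_play(scenes, world_state):
        # The canPlay phase: keep only scenes whose preconditions are satisfied.
        return [s for s in scenes if s.preconditions <= world_state]

    def select_scene(scenes, world_state, open_threads, threading=True):
        playable = can_play(scenes, world_state)
        if not playable:
            return None
        if not threading:
            return playable[0]  # stand-in for whatever the threading-off policy actually does
        # Threading on: prefer the playable scene that extends the most open threads.
        return max(playable, key=lambda s: len(s.extends_threads & open_threads))

Both degenerate cases fall straight out of this: if playable has only one scene, the two policies pick the same thing; and if every playable scene overlaps the open threads equally, the choice comes down to tie-breaking and threading again changes nothing.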
- Participants will probably be recruited by posting signs or otherwise advertising the study, which means they will be self-selected. I don't think this is a problem if I'm willing to limit my claims to "educated people who would choose to try an interactive drama". (I'm probably willing to accept that in exchange for the ease of getting subjects.)
- I could broaden my scope (beyond academia, perhaps) as well as possibly get more subjects if I run the study online. This means no chance for observation of users though. Could I run a second "dirty"/"unknown" web data set? Some confounds of doing this:
- Different/uncontrolled testing environment (not being watched; potential technical issues; possibly a different medium/interpreter used than in the main study; etc)
- There will probably be a lot of incompletes (didn't play twice, etc.) to be dropped (more of a bummer than a confound, though)
- Do I need to mention in the proposal exactly what data analysis I'm going to do? Or can I leave that out for now (you know, "implied")?
- Loaded deck: all scenes fit into the same story, so they are already mostly relevant, especially after the canPlay phase. Any selection effect is therefore subtle: possibly too subtle for users to detect, and probably too subtle to measure accurately without a detailed, side-by-side comparison (one such check is sketched below).
Remember Chatman: readers will often infer causality between events even when no such relationship is explicitly stated in the narrative itself.
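One hypothetical way the pilot could check for this (building on the selection sketch above, so again all names are assumptions): replay logged decision points with threading on and off and count how often the two policies actually diverge. A low divergence rate would be a warning sign that the selection effect is too subtle for participants to notice.

    def divergence_rate(decision_points):
        # decision_points: iterable of (scenes, world_state, open_threads) tuples
        # captured from pilot runs (a hypothetical logging format).
        differing = total = 0
        for scenes, world_state, open_threads in decision_points:
            with_threading = select_scene(scenes, world_state, open_threads, threading=True)
            without_threading = select_scene(scenes, world_state, open_threads, threading=False)
            total += 1
            if with_threading != without_threading:
                differing += 1
        return differing / total if total else 0.0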
ToDo