Test participants will be presented with a series of screenshots from various Twitter accounts. For each screenshot, the participant will make a single yes/no decision about whether they believe the account is a genuine representation of its owner, with an optional opportunity to explain why they think the account is genuine or fake. This information will help confirm whether the factors I mentioned in my most recent blog post are relevant. Each participant will receive the same set of screenshots, but the order in which they are presented will be randomised to help prevent order bias.
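The per-participant randomisation is simple to express in code. As a minimal sketch (the screenshot filenames here are placeholders, not the actual study materials):

```python
import random

def presentation_order(screenshots, seed=None):
    """Return a shuffled copy of the screenshot list for one participant.

    Every participant sees the same screenshots, but each gets an
    independently randomised ordering, which mitigates order bias.
    """
    rng = random.Random(seed)       # seedable for reproducibility
    order = list(screenshots)       # copy so the master list is untouched
    rng.shuffle(order)
    return order

screenshots = ["account_01.png", "account_02.png", "account_03.png"]
# Each participant would receive their own ordering of the same set:
print(presentation_order(screenshots, seed=1))
```

The same set of items always comes back, just reordered, so every participant answers an identical questionnaire apart from the sequence.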
The reason I refer to the “genuine representation” of an account as opposed to the person who operates the account is to allow for users who delegate management of their account to another person. This is normally the case for high-ranking politicians such as Barack Obama and David Cameron, the latter of whom has previously given reasons for his lack of enthusiasm towards Twitter.
I previously mentioned the factors I believe influence a Twitter user’s decision on whether a given account is genuine. Based on these beliefs, each screenshot shown to a participant will contain the Twitter account’s:
- “Verified” icon (if appropriate)
- Personal website information
- Three sample tweets by the account
So, a participant may be shown this image as an example:
I also intend to collect some profile information from each participant (e.g. age group, gender, level of computer experience, whether they are a Twitter user) to help determine whether different groups of people look for different pieces of information when authenticating a Twitter account.
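Once the raw responses are exported, the kind of per-group breakdown this profile data enables can be sketched as follows (the field names and sample rows are assumptions for illustration, not the survey's actual export format):

```python
from collections import defaultdict

# Hypothetical exported rows: participant profile plus one yes/no judgement
responses = [
    {"age_group": "18-24", "twitter_user": True,  "judged_genuine": True},
    {"age_group": "18-24", "twitter_user": False, "judged_genuine": False},
    {"age_group": "25-34", "twitter_user": True,  "judged_genuine": True},
]

# Tally how often each age group judged an account to be genuine
by_group = defaultdict(lambda: {"yes": 0, "total": 0})
for row in responses:
    counts = by_group[row["age_group"]]
    counts["total"] += 1
    if row["judged_genuine"]:
        counts["yes"] += 1

for group, counts in sorted(by_group.items()):
    print(f'{group}: {counts["yes"]}/{counts["total"]} judged genuine')
```

The same tally could be keyed on any of the profile fields (gender, Twitter use, and so on) to compare how each group responded.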
As this experiment is essentially a series of questions answered by humans, any existing survey or questionnaire software should be sufficient for collecting the raw data. Some survey creation services also provide automatic analysis of the data, which will save time when I perform my own analysis and draw conclusions.
The two main services I am considering are SurveyMonkey and Web Survey Creator. Both offer free accounts for creating surveys, each with a maximum of 100 respondents per survey. While SurveyMonkey limits free accounts to 10 questions per survey, Web Survey Creator allows an unlimited number of questions. However, neither allows questions to be presented in a random order, which is something I require.
The “roll your own” alternative is LimeSurvey, an open-source survey tool. It offers many of the same features as the online services, but because it will be deployed on my own web hosting there will be no limits on the number of respondents or questions per survey. The documentation does not mention presenting questions in a random order, so I will need to investigate the software further before deciding whether to use it for this experiment.