Development and validation of the Student Attitudes and Beliefs about Authorship Scale: a psychometrically robust measure of authorial identity. A version for the educated layperson

After what feels like a very long time, the first paper from my PhD studies, titled ‘Development and validation of the Student Attitudes and Beliefs about Authorship Scale: a psychometrically robust measure of authorial identity’, has been published. There have been a couple of delays, partly down to me dragging my feet and partly because I have moved house twice and changed jobs twice since submitting my thesis. It feels like a long time since I completed the research for this paper, and for the other two papers that are still in draft. It is over a year since I was awarded my doctorate and the days of writing my thesis feel like a lifetime ago, but more about that in another post.

The paper presents a new psychometric measure of authorial identity. Authorial identity is a concept that we are exploring as a counter to plagiarism. If you are interested in the latent variable model or in the scale that was developed, the paper is available from http://www.tandfonline.com/10.1080/03075079.2015.1034673. Please take a look if you are an academic researcher or teacher with an interest in the field. However, it has been pointed out to me that the paper is incomprehensible to the educated layperson, and parts of it are too technical for those unfamiliar with psychometrics, which includes many members of the scholarly community. Therefore, I am including a non-technical summary of the paper here as a blog post and hope it will encourage people to look at the research. The original article was co-authored with Professor James Elander and Dr Edward Stupple from the University of Derby.

Note: If you are a reader from the scholarly community, I am aware that you will spend a lot of time screaming “who said that?” at the screen. Please understand that you are not the primary intended audience for this piece – please refer to the original manuscript for citations and references that informed the studies.

Introduction

Most of the ways that we try to reduce plagiarism in universities focus on detecting and deterring copying. Some researchers and teachers have argued that this law-enforcement style of approach needs to be supported by other methods. This has led to the development of specialist classes, many of which deliver information about plagiarism and referencing to improve students’ understanding of the area. One recent approach aims to go beyond this and improve students’ authorial identity as well. Previous researchers defined authorial identity as ‘the sense a writer has of themselves as an author and the textual identity they construct in their writing’. Authorial identity approaches therefore aim to develop students as authors, because previous research showed that students often feel more like editors and find it difficult to write with originality.

In addition to defining authorial identity, researchers designed teaching to improve it. They also developed a questionnaire measure, the Student Authorship Questionnaire (SAQ), to assess a student’s authorial identity; this was used to evaluate the sessions that were delivered. Although the teaching was evaluated fairly positively, the SAQ had some limitations as a measurement tool. Following its initial development, other researchers found that the relationships between different parts of the questionnaire were not consistent.

To address the issues with the SAQ, we conducted studies to develop a new questionnaire for assessing authorial identity in students. A systematic process was used to propose candidate questions, assess their relevance, and then trial them with groups of students. Different aspects of the new measure’s psychometric performance were tested in two trials, and established good practice from the field of psychometrics was used to guide the research.

Methods

Generating questions and expert ratings of their relevance

Questions were proposed by examining previous research on authorial identity. We then conducted interviews with 27 academics and group discussions with 17 students on the area. We identified 106 prospective items, which were evaluated by 15 experts, all professional academics familiar with authorial identity. They were asked to score each question’s relevance to measuring ‘the sense a writer has of themselves as an author and the textual identity they construct in their writing’. Statistical analysis of the expert scores identified 47 items as suitable for further testing.
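The exact statistics used for the expert ratings are described in the paper; purely as an illustration, here is a sketch of one common way to quantify expert agreement on an item’s relevance, Lawshe’s content validity ratio (CVR). The numbers below are hypothetical, not taken from the study.

```python
def content_validity_ratio(n_relevant, n_experts):
    """Lawshe's CVR: +1 when every expert rates the item relevant,
    0 when exactly half do, -1 when none do."""
    return (n_relevant - n_experts / 2) / (n_experts / 2)

# Hypothetical panel of 15 experts, echoing the size of the
# study's expert panel (the ratings themselves are invented).
print(content_validity_ratio(13, 15))  # strong agreement, well above 0
print(content_validity_ratio(8, 15))   # barely above half, near 0
```

Items with low values on an index like this would be candidates for removal before any trial with students.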

Participants in the trials

Two studies were used to test questions for the measure. The first included 439 students and the second included 307; however, one response from the second trial was unusual and was therefore excluded from analysis. We recruited students from a range of subjects. The average age of students was 24 in the first trial and 23 in the second. A table of relevant demographic information is available as Table 1 in the paper.

Materials

We used a brief demographic questionnaire and trial versions of our new authorial identity measure. The new questionnaire asked ‘To what extent do you agree with each of the following statements?’, followed by the proposed questions, which had been rephrased as statements in a suitable format. Six response options were available: ‘Strongly disagree’ (1), ‘Disagree’ (2), ‘Slightly disagree’ (3), ‘Slightly agree’ (4), ‘Agree’ (5) and ‘Strongly agree’ (6).

In the second trial, a reduced pool of items from the first trial was administered – this new questionnaire was titled the Student Attitudes and Beliefs about Authorship Scale (SABAS). Alongside the SABAS, we also included measures of critical thinking and writing self-efficacy, because these were predicted to have relationships with authorial identity. The SAQ was also included for comparison with SABAS responses.

Procedure

Both studies used a combination of paper surveys and online questionnaires to collect responses. This allowed us to recruit students from a range of universities. The paper surveys were given out at the end of lecture sessions at one university, and students were asked to complete them at the time. The link to the online version was posted on student forums to allow participation from online students and those at other universities. The second study excluded those who had taken part in the first. For classes approached to take part in the second study, another round of data collection was conducted after four weeks to provide data for a retest comparison. Online participants in the second study were contacted by email (with permission) to provide responses for the retest.

Analysis

A range of statistical techniques was used to investigate the scores. These analyses come from the field of psychometrics, which focuses on developing and evaluating psychological measures. For study one, a technique called exploratory factor analysis (EFA) was used as the primary form of analysis. EFA examines the relationships between scores for the questions and is used to identify a structure underlying the responses. Groups of questions with closely related responses are conceptualised as representing one aspect of what is being measured.
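For readers who would like a feel for how this works, here is a small Python sketch, illustrative only and not the analysis from the paper. It simulates responses to six questions driven by two underlying aspects, then examines the eigenvalues of the correlation matrix, which is the core idea behind factor extraction (the paper’s actual EFA procedure differs in its details).

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_items = 400, 6

# Two latent "aspects", each driving three of the six questions.
latent = rng.normal(size=(n_students, 2))
true_loadings = np.array([
    [0.8, 0.0], [0.7, 0.0], [0.9, 0.0],  # questions tied to aspect 1
    [0.0, 0.8], [0.0, 0.7], [0.0, 0.9],  # questions tied to aspect 2
])
responses = latent @ true_loadings.T + rng.normal(scale=0.5,
                                                  size=(n_students, n_items))

# Eigendecomposition of the correlation matrix: large eigenvalues
# signal groups of questions with closely related responses.
corr = np.corrcoef(responses, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # sorted largest first
print(eigvals.round(2))  # two eigenvalues stand well above the rest
```

Because two eigenvalues dominate, an analyst would conclude that two aspects underlie the six questions, then inspect which questions load on which aspect.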

For the second study, a technique called confirmatory factor analysis (CFA) was used to test the structure from the first study on the new group of respondents. Another structure was also tested for comparison. CFA indicates how well the tested structures fit the data, and it also gives clues as to where the responses do not match what the proposed structure would predict. Both factor analysis techniques were supported by a range of other methods that dealt with issues in the data or informed additional decisions made during analysis.
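To make ‘how well a structure fits’ concrete, here is one widely used CFA fit index, the root mean square error of approximation (RMSEA). The input values below are hypothetical, not the figures reported in the paper.

```python
import math

def rmsea(chi_square, df, n):
    """Root mean square error of approximation, a common CFA fit
    index: roughly, misfit per degree of freedom adjusted for
    sample size. Below about .06 is conventionally 'good' fit."""
    return math.sqrt(max((chi_square - df) / (df * (n - 1)), 0.0))

# Hypothetical values: a 17-item, three-factor model (df = 116)
# tested on 306 respondents.
print(round(rmsea(chi_square=290.0, df=116, n=306), 3))
```

Indices like this are why a structure can show ‘acceptable fit on some indicators but poor fit on others’: each index penalises different kinds of mismatch between model and data.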

Results

First study

Responses from the first study were used to reduce the 47 proposed questions down to 17. EFA was used repeatedly to examine the structure among question responses. Each time, questions that did not link with any of the identified aspects of authorial identity were removed. A structure with three aspects of authorial identity was derived. Each question and its relationship with an authorial identity aspect is shown in Table 2 of the paper. Note that each question relates strongly (a loading above .45) to only one aspect (referred to as Factors in the Table). This is known as simple structure, and it is desirable in this kind of research because it makes it easier to interpret each set of questions as measuring something distinct. After examining the questions relating to each aspect, the aspects were labelled ‘Authorial confidence’, ‘Valuing writing’ and ‘Identification with author’.
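To see the simple structure idea in miniature, here is a sketch that flags questions loading strongly on exactly one factor, using the .45 cut-off mentioned above. The loading matrix is made up for illustration and is not the one in Table 2.

```python
import numpy as np

# Hypothetical loadings for five questions (rows) on three
# factors (columns); invented numbers, not the paper's results.
loadings = np.array([
    [0.72, 0.10, 0.05],
    [0.61, 0.18, 0.12],
    [0.08, 0.66, 0.20],
    [0.40, 0.42, 0.15],   # loads on no factor strongly enough
    [0.05, 0.11, 0.58],
])

threshold = 0.45  # the cut-off used in the study
strong = np.abs(loadings) > threshold

# Simple structure: each retained question should load strongly
# on exactly one factor.
keeps_simple_structure = strong.sum(axis=1) == 1
print(keeps_simple_structure)  # the fourth question fails
```

In an iterative EFA, questions like the fourth row, with no clear home on a single factor, are the ones removed between rounds.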

Second study

The second study tested the structure from study one against a structure in which all of the questions came from a single aspect of authorial identity. The one-factor structure showed poor fit with the data collected. The structure from the first study showed acceptable to good fit on some indicators, but poor fit on others; however, it was clearly better than the one-factor structure. Further analysis using the three-factor structure showed that the relationship between two particular questions was a potential source of the problems with model fit.

Using data collected from the same participants after four weeks, we analysed the stability of SABAS scores (and scores for each of the three aspects). SABAS scores were moderately stable, which is what we would expect, because authorial identity is something we aim to change through teaching.
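Stability of this kind is usually assessed as the correlation between scores at the two time points. A minimal sketch with simulated scores (not the study’s data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated average scores for 80 students at two time points,
# four weeks apart; purely illustrative.
time1 = rng.normal(loc=4.0, scale=0.6, size=80)
time2 = 0.7 * time1 + rng.normal(loc=1.2, scale=0.6, size=80)

# Test-retest stability: the correlation between the two waves.
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))  # a moderate positive correlation
```

A correlation near 1 would suggest a fixed trait; a moderate value is consistent with a quality that is fairly stable over a month but open to change.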

We also looked at the relationship of SABAS scores with measures of critical thinking (CT) and writing self-efficacy. As predicted, there were positive relationships between these scores (i.e., those with higher SABAS scores also had higher CT and self-efficacy scores). This shows that SABAS scores relate to other scores in the way we would predict, suggesting that the scale measures what we intended it to assess.

Discussion

The structure identified using the SABAS shares some features with the structure from previous research, but also has some important differences. The first SAQ structure included six aspects: three key attributes of authorial identity and three approaches to writing associated with it. The SABAS structure links with the first set, but differs in that it does not include a focus on writing approaches. This could reflect the influence of experts who saw writing approaches as related to authorial identity, but not necessarily a component part of it. In this way, the research contributes to our understanding of authorial identity and theories about what it includes.

For practical purposes, the research has produced a new measure of authorial identity that is suitable for use in research and the classroom. It also allows us to target, develop and test new methods of improving authorial identity. Further research should investigate the areas where the structure did not fit perfectly. However, the SABAS has a more stable structure than previous measurement tools. Other research could examine SABAS structures in different groups and how the scores relate to other aspects of student writing.
