
Boot, Peter, Huygens Institute of Netherlands History, The Netherlands, peter.boot@huygens.knaw.nl

Introduction

Some literary works and authors acquire enduring reputations, whereas others do not. What causes the difference is a contested issue. Fishelov (2010) distinguishes between a ‘beauty party’ and a ‘power party’: the beauty party argues for the existence of aesthetic qualities inherent in the work, while the power party argues that social power structures determine the outcome of canonisation processes. As argued by Miall (2006), the question should be decided empirically. However, because there is no objective measure of literary quality (if there were one, the discussion would be over), empirical investigation of the issue has been fraught with difficulty. Both parties can lay some claim to empirical evidence. For the power party, Rosengren (1987: 295-325) and Van Rees and Vermunt (1996: 317-333), among others, have shown that reputations are to some extent made by newspaper reviewers. For the other side, some of the evidence that Miall adduces suggests that foregrounded linguistic features (deviating from normal, non-literary usage) may be responsible for some of the (emotional) response that literary reading evokes. It is a long way, however, from that finding to showing that such textual properties can actually explain a work’s longer-term reputation.

As a way forward in this debate, I propose to look at online writing communities, i.e. websites where communities of amateur writers publish and discuss their poems and stories. These sites offer the potential for empirical and quantitative research into literary evaluation because they contain large quantities of works and of responses to these works. Text analysis can be combined with statistical analysis of the numbers of responses and ratings.

In this paper I will look at Verhalensite, a Dutch-language online writing community, unfortunately no longer active. The paper is based on a site download containing ca. 60,000 stories and poems written by 2,500 authors. The texts received 350,000 comments, and the comments drew another 450,000 reactions; 150,000 comments were accompanied by a rating of 1 to 5 stars. I reported on Verhalensite and its research potential in Boot (2011a). In Boot (2011b) I discuss the available data and use them to analyse whether long-term activity on the site can be predicted from activity and response in the first four weeks of site membership.

Context factors

I focus here on the role of context factors in determining commenters’ responses to the works published on the site. I look at pairs of one author and one commenter, where the commenter has responded to at least one of the author’s works. I select only those pairs where both author and commenter have published at least ten works. This makes it possible to compute the linguistic similarity between authors’ and commenters’ texts and the similarity of their genre preferences. There are 49,437 author-commenter pairs fulfilling these requirements, and we can compute statistical relationships between the contextual factors and the response variables. As an example, Table 1 gives partial correlations between some of the context variables and the number of comments the commenter has given the author, which presumably reflects the commenter’s opinion of the author’s works:

Table 1: Partial correlations with number of comments from commenter to author, given author productivity, commenter’s average number of comments per week and overlap in the commenter’s and author’s active periods on the site
Variable                                     Partial correlation   Variable grouping
Similarity creative texts                     0.06                 Author-commenter similarity
Similarity commentary texts                   0.11
Difference in numbers of poems               -0.11
Same preferred genre                          0.09
Replies by author to comments (fraction)      0.07                 Author networking activity
Comments by author on others                  0.15
Comments by author on commenter               0.57
Ratings by author                             0.13
Prose writers                                 0.08                 Text properties
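
One way to obtain partial correlations of this kind is to regress both variables on the control variables and correlate the residuals. The sketch below illustrates this, assuming a pandas DataFrame with one row per author-commenter pair; all column names are hypothetical placeholders rather than the variable names actually used in the analysis.

```python
import numpy as np
import pandas as pd

def partial_corr(df: pd.DataFrame, x: str, y: str, controls: list) -> float:
    """Partial correlation of columns x and y, controlling for the given columns.

    Both x and y are regressed (ordinary least squares) on the controls plus an
    intercept; the Pearson correlation of the two residual series is returned.
    """
    Z = np.column_stack([np.ones(len(df))] + [df[c].to_numpy(float) for c in controls])

    def residuals(col: str) -> np.ndarray:
        v = df[col].to_numpy(float)
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return v - Z @ beta

    return float(np.corrcoef(residuals(x), residuals(y))[0, 1])

# Hypothetical usage: `pairs` holds one row per author-commenter pair.
# controls = ["author_productivity", "commenter_comments_per_week", "overlap_weeks"]
# r = partial_corr(pairs, "similarity_creative_texts", "n_comments", controls)
```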

The first four variables are measures of similarity between the author’s and commenter’s texts. The correlations show that the closer commenter and author are in terms of language, in the amount of poetry they write, and in genre, the more likely the commenter is to respond to the author’s works. I use two measures of textual similarity. The similarity between authors’ and commenters’ creative texts (poems and stories) is computed using Linguistic Inquiry and Word Count (LIWC), argued by Pennebaker (e.g. Pennebaker & Ireland 2011: 34-48) to reflect important psychological properties. LIWC counts words in grammatical, psychological and semantic categories (e.g. pronouns, positive emotions, or work-related words). I use the cosine distance over all LIWC categories as a measure of textual similarity. For the commentary texts, I created a number of site-specific vocabulary lists corresponding to certain aspects of the response: compliments (good, nice, cool), greetings (hi, welcome, grtz), critical (not necessarily negative) discourse (dialogue, stanza, suspense), negative response (disappointing, boring, missing), and some others. I computed frequencies for each of these categories. Textual similarity between author and commenter is then computed from the weighted differences between their frequencies in these site-specific categories.
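
To make the two similarity measures concrete, the sketch below computes a cosine measure over LIWC-style category frequencies and a weighted difference over the site-specific comment categories. The category names, frequencies and weights are invented for illustration only; they are not the actual LIWC output or the weights used in the analysis.

```python
import numpy as np

def cosine_similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two category-frequency profiles (e.g. LIWC categories)."""
    cats = sorted(set(a) | set(b))
    va = np.array([a.get(c, 0.0) for c in cats])
    vb = np.array([b.get(c, 0.0) for c in cats])
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(va @ vb / denom) if denom else 0.0

def weighted_difference(a: dict, b: dict, weights: dict) -> float:
    """Weighted absolute difference between frequencies in the site-specific
    comment categories (compliments, greetings, critical discourse, ...);
    smaller values indicate more similar commenting vocabularies."""
    return sum(w * abs(a.get(c, 0.0) - b.get(c, 0.0)) for c, w in weights.items())

# Hypothetical category profiles (relative frequencies per 100 words):
author_liwc = {"pronoun": 9.5, "posemo": 3.1, "work": 1.2}
commenter_liwc = {"pronoun": 8.7, "posemo": 4.0, "work": 0.8}
print(cosine_similarity(author_liwc, commenter_liwc))

# Hypothetical site-specific category frequencies and weights:
weights = {"compliment": 1.0, "greeting": 0.5, "critical": 1.5, "negative": 1.5}
author_cats = {"compliment": 2.1, "greeting": 0.4, "critical": 1.0, "negative": 0.2}
commenter_cats = {"compliment": 1.5, "greeting": 0.9, "critical": 1.8, "negative": 0.3}
print(weighted_difference(author_cats, commenter_cats, weights))
```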

The next four variables are measures of the author’s networking activity, which was shown by Janssen (1998: 265-280) to be an important factor in determining the amount of critical attention that an author receives. They represent, respectively, the fraction of received comments that the author replies to (usually to say thank you, sometimes with a longer reaction), the number of times the author comments on others’ works, the number of comments the author has given to the specific commenter (the quid-pro-quo effect), and the number of ratings that the author has given. All four have a positive influence on the number of comments. The last variable shows that when the shared genre is prose (rather than poetry), the commenter is more likely to comment on the author’s works. At present I have no hypothesis as to why this should be the case.

In terms of the discussion about literary evaluation, these numbers are interesting because none of the context variables reflect properties of the texts (except for the prose/poetry distinction). They show that to some extent literary evaluation depends on linguistic agreement between author and evaluator and on an author’s networking activities. While at a general level this is perhaps hardly surprising, it is interesting to obtain a measure of the (relative) strengths of these influences, including the quid-pro-quo effect. It is also interesting to note that LIWC is to some extent able to capture linguistic distance between authors.

As a next step in this analysis, I hope to look more closely at alternative measures of textual similarity and the extent to which they can predict the number of comments exchanged in an author-commenter pair. One interesting candidate would be Latent Semantic Analysis, possibly using only words from certain semantic fields; a sketch of such a measure is given below. It would also be interesting to investigate differences between commenters in their sensitivity to the variables discussed here. The numbers given above reflect average behaviour, and it seems very likely that some commenters care more than others about, for example, linguistic agreement between their own works and the works that they comment on. A follow-up question would then be whether commenters can be clustered on the basis of such preferences.
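
For the Latent Semantic Analysis option, one possible operationalisation would be to project TF-IDF vectors of the texts into a low-dimensional latent space and compare authors and commenters there. The sketch below uses scikit-learn for this; the documents and the number of components are placeholders, and this is a possible implementation rather than a description of work already done.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpora: one concatenated document per author and per commenter.
author_docs = ["first author story text ...", "second author poem text ..."]
commenter_docs = ["first commenter story text ...", "second commenter poem text ..."]

docs = author_docs + commenter_docs
tfidf = TfidfVectorizer(min_df=1).fit_transform(docs)

# Project into a low-dimensional 'latent semantic' space (LSA); real data
# would use far more components than this toy example.
lsa = TruncatedSVD(n_components=2, random_state=0)
vectors = lsa.fit_transform(tfidf)

# Similarity between the first author and the first commenter in LSA space.
sim = cosine_similarity(vectors[:1], vectors[len(author_docs):len(author_docs) + 1])
print(sim[0, 0])
```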

Discussion

To some extent a paper such as this one is unusual in the Digital Humanities context. Work in Digital Humanities has tended to focus on texts, their stylistics, their authors, their sources, and the vehicles (editions) that bring texts to their modern students. Reader response has been largely ignored in our field. The discussion above shows some of the influence of contextual factors on literary evaluation, but many other sorts of analysis could and should be undertaken on the basis of this material. For one thing, I have not yet investigated the effects of ‘power’ (i.e. the influence of third parties on commenters’ behaviour). Similarly, many analyses at the level of the individual work have yet to be performed. The existence of digital environments such as online writing communities has made these analyses feasible and can thus contribute to broadening the scope of Digital Humanities.

References

Boot, P. (2011a). Literary evaluation in online communities of writers and readers. Scholarly and Research Communication, to appear.

Boot, P. (2011b). Predicting long-term activity in online writing communities: A quantitative analysis of amateur writing. Paper presented at Supporting Digital Humanities 2011, Copenhagen. http://crdo.up.univ-aix.fr/SLDRdata/doc/show/copenhagen/SDH-2011/submissions/sdh2011_submission_1.pdf (accessed 23 March 2012).

Fishelov, D. (2010). Dialogues with/and great books: the dynamics of canon formation. Eastbourne: Sussex Academic Press.

Janssen, S. (1998). Side-roads to success: The effect of sideline activities on the status of writers. Poetics 25(5): 265-280.

Miall, D. S. (2006). Literary reading: empirical & theoretical studies. New York: Peter Lang.

Pennebaker, J. W., and M. E. Ireland (2011). Using literature to understand authors: The case for computerized text analysis. Scientific Study of Literature 1(1): 34-48.

Rosengren, K. E. (1987). Literary criticism: Future invented. Poetics 16(3-4): 295-325.

Van Rees, K., and J. Vermunt (1996). Event history analysis of authors’ reputation: Effects of critics’ attention on debutants’ careers. Poetics 23(5): 317-333.