Modelling the Effect of Document Context on Sentence Acceptability
Professor Shalom Lappin FBA
Joint work with Jean-Philippe Bernardy, University of Gothenburg, and Jey Han Lau, IBM Research, Melbourne
We investigate the influence that document context exerts on human acceptability judgements for English sentences through two experiments. The first compares ratings for Wikipedia sentences presented in isolation with ratings for the same sentences presented in their document contexts. The second measures the accuracy with which two types of deep neural network (DNN) models predict these judgements. One type of DNN incorporates context into training, while the other does not. We also study the effect of integrating context input into the testing of both types of model. Our results indicate that (1) while context improves acceptability ratings for ill-formed sentences, it reduces them for well-formed sentences, and (2) contextual information, in both the training and the testing processes, increases the accuracy of unsupervised systems in modelling acceptability.
Shalom Lappin received his BA in Philosophy at York University, Toronto, Canada (1970), and his MA (1973) and PhD (1976) in Philosophy at Brandeis University. He taught philosophy at Ben Gurion University of the Negev (1974-80), linguistics at the University of Ottawa (1980-84), where he was Chair of the Linguistics Department (1981-84), and linguistics at the University of Haifa (1984-88) and Tel Aviv University (1988-89).
He was a Research Staff Member in the Natural Language Group of the Computer Science Department at the IBM T.J. Watson Research Center (1989-93). He then took up a position in the Linguistics Department at SOAS, University of London (1993-99).
He was a Professor of Computational Linguistics at King's College London (1999-2015). In 2010 he was elected a Fellow of the British Academy (FBA).
Since 2015 he has been a Professor of Computational Linguistics at the University of Gothenburg, Sweden, where he is a Director of the Centre for Linguistic Theory and Studies in Probability.