
LREC 2016 Workshop & Shared Task on

Quality Assessment for Text Simplification (QATS)

28th May 2016

Portorož, Slovenia


Introduction

This workshop aims to bring together researchers interested in all aspects of the evaluation of automatic text simplification (ATS) systems, including automatic, human, and user-focused evaluation. It addresses a current problem in the text simplification community: the lack of common standards and evaluation methodologies that would enable fair comparison of different ATS systems.

Given the close relatedness of automatic evaluation of ATS systems to the well-studied problems of automatic evaluation and quality estimation in machine translation (MT), the workshop also features a shared task on automatic evaluation (quality assessment) of ATS systems. We hope that this shared task will bring together researchers from the MT and TS communities. We will provide training and test datasets for English (obtained using three different ATS approaches) and several baselines.

We solicit papers describing resources, models and techniques for the evaluation of text simplification systems, proposing automatic evaluation metrics or discussing problems in user-focused evaluation, as well as papers describing systems that participated in the shared task.

The workshop will feature an invited speaker, oral and poster presentations, and a closing panel discussion.

Motivation and topics of interest

In recent years, there has been an increasing interest in automatic text simplification (ATS) and text adaptation to various target populations. However, studies concerning the evaluation of ATS systems are still scarce, and no methods have been proposed for directly comparing the performance of different systems. Previous studies (Štajner et al. 2013) showed that machine translation evaluation metrics such as BLEU, METEOR and TINE provide a good baseline for this task.
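As a minimal sketch of what such a baseline could look like in practice (not part of the workshop materials), one can compute sentence-level BLEU between an ATS system's output and a human reference simplification, for example with NLTK. The sentences below are hypothetical and purely illustrative.

```python
# Illustrative sketch: sentence-level BLEU as a simple baseline quality metric
# for text simplification, comparing a system's simplified output against a
# human reference simplification.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical example sentences (not taken from the shared task data).
reference = "The cat sat on the mat .".split()             # human reference simplification
system_output = "The cat is sitting on the mat .".split()  # ATS system output

# Smoothing avoids zero scores when higher-order n-grams do not match,
# which is common for single short sentences.
smoother = SmoothingFunction().method1
score = sentence_bleu([reference], system_output, smoothing_function=smoother)

print(f"Baseline BLEU score: {score:.3f}")
```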

By organising a shared task, we wish to bring together researchers working on automatic evaluation and quality estimation of machine translation output and encourage them to adapt their metrics to this closely related task. This would provide an opportunity to establish metrics for the automatic evaluation of ATS systems, enable direct comparison of systems in terms of the quality of the generated output, and make the assessment of each ATS system less time-consuming.

We solicit papers on all aspects of the evaluation of text simplification systems (automatic, human, and user-focused), hoping to provide an opportunity for researchers to exchange their experiences and discuss common problems in ATS evaluation.

Topics of interest include, but are not limited to:

  • Resources for evaluation of text simplification systems and evaluation schemes
  • Automatic evaluation metrics and complexity metrics
  • Quality assessment for text simplification
  • Problems in human evaluation of text simplification systems
  • Problems in user-focused evaluation of text simplification systems
  • Similarities and dissimilarities of automatic evaluation of text simplification systems with automatic evaluation of machine translation and/or summarisation systems
  • NLP tools for enhancing evaluation of text simplification systems
  • User studies

Participants in the shared task are invited to submit a short paper (4 to 6 pages) describing their evaluation method(s). Submitting a paper is not mandatory; participants who choose not to submit one are asked to provide an appropriate reference describing their method, which we can cite in the overview paper.

Submission information

We encourage contributions in the form of full papers (up to 8 pages + 2 pages of references) and short papers (up to 4 pages + 2 pages of references). Papers should follow the LREC format, which is available on the LREC 2016 website (http://lrec2016.lrec-conf.org/en/). Submissions must be made online via the START conference system: https://www.softconf.com/lrec2016/QATS/

All accepted papers will be presented orally or as posters and published in the workshop proceedings.

Identify, describe and share your LRs!

Describing your LRs in the LRE Map is now a normal practice in the submission procedure of LREC (introduced in 2010 and adopted by other conferences).

To continue the efforts initiated at LREC 2014 about “Sharing LRs” (data, tools, web-services, etc.), authors will have the possibility, when submitting a paper, to upload LRs in a special LREC repository. This effort of sharing LRs, linked to the LRE Map for their description, may become a new “regular” feature for conferences in our field, thus contributing to creating a common repository where everyone can deposit and share data.

As scientific work requires accurate citations of referenced work so as to allow the community to understand the whole context and also replicate the experiments conducted by other researchers, LREC 2016 endorses the need to uniquely identify LRs through the use of the International Standard Language Resource Number (ISLRN, www.islrn.org), a Persistent Unique Identifier to be assigned to each Language Resource. The assignment of ISLRNs to LRs cited in LREC papers will be offered at submission time.

Important dates

Shared task description and release of training data: 9th December 2015
Shared task release of test set: 20th January 2016
Submission of shared task results: 3rd February 2016 (extended)
Paper submission deadline: 15th February 2016 (extended)
Submission of shared-task description papers: 15th February 2016
Notification of acceptance: 8th March 2016
Camera-ready version: 25th March 2016
Workshop: 28th May 2016

Organisation committee

Sanja Štajner (University of Mannheim, Germany)
Maja Popović (Humboldt University of Berlin, Germany)
Horacio Saggion (Universitat Pompeu Fabra, Spain)
Lucia Specia (University of Sheffield, UK)
Mark Fishel (University of Tartu, Estonia)

Program committee

Sandra Aluisio (University of São Paulo)
Eleftherios Avramidis (DFKI Berlin)
Susana Bautista (Federal University of Rio Grande do Sul)
Stefan Bott (University of Stuttgart)
Richard Evans (University of Wolverhampton)
Mark Fishel (University of Tartu)
Sujay Kumar Jauhar (Carnegie Mellon University)
David Kauchak (Pomona College)
Elena Lloret (Universidad de Alicante)
Ruslan Mitkov (University of Wolverhampton)
Gustavo Paetzold (University of Sheffield)
Maja Popović (Humboldt University of Berlin)
Miguel Rios (University of Leeds)
Horacio Saggion (Universitat Pompeu Fabra)
Carolina Scarton (University of Sheffield)
Matthew Shardlow (University of Manchester)
Advaith Siddharthan (University of Aberdeen)
Lucia Specia (University of Sheffield)
Miloš Stanojević (University of Amsterdam)
Sanja Štajner (University of Mannheim)
Irina Temnikova (Qatar Computing Research Institute)
Sowmya Vajjala (Iowa State University)
Victoria Yaneva (University of Wolverhampton)

Contact

For further information please contact us at:
stajner.sanja@gmail.com or maja.popovic@hu-berlin.de