Speeches play a role in many social and societal processes and functions. A practical question, however, is what makes a speech "good" and how a speech can be improved. Leaving aside external factors such as a speech's impact and consequences, speeches can first be examined in their own right. At least since Aristotle and his rhetoric, researchers have asked which rules govern good speeches and how speeches can be improved. Relevant aspects include word choice, syntactic structure and complexity, the organization of arguments, and stylistic devices, as well as prosodic features, i.e. emphasis, rhythm and tone. Beyond the creative task, writing a speech also means choosing the wording, syntactic complexity or simplicity, etc. that seem appropriate for a specific occasion. The motivation behind this user story is whether speeches can be analyzed and classified automatically on the basis of suitable criteria, whether such an evaluation corresponds to the subjective judgements of listeners, and whether these findings can be used specifically to improve speeches.
This user story is based on data from Lexical Resources and Collections; for historical speech manuscripts and theoretical treatises, the Editions are also relevant.
Can existing speech manuscripts, transcripts and recordings be used to evaluate speeches with automated procedures, and can such an evaluation already be applied to a manuscript, with suggestions on how to improve the speech? Are lexical inventory, syntactic structures, prosodic patterns, and the number of topics and arguments sufficient criteria for a qualitative and stylistic assessment of a speech, independent of the speaker, that is consistent with evaluations by listeners? On this basis, can those who write a speech be given advice on how to improve their manuscript, including prosodic indications?
For this question the following data sources and tools are needed:
- The largest possible collection of different speech transcripts or manuscripts whose context is clearly classified (Bundestag, courtroom, private context, public ceremony, …). The collection of debates in Text+ could be used to develop a scoring. In addition, speech manuscripts are available in the DTA, and further manuscripts can be found in DeReKo.
- Processing the syntactic structures requires analysis with language technology tools. The Text+ service WebLicht, with its syntactic parsers, named-entity recognition, etc., can be used to evaluate average syntactic complexity, the variance of the structures, and references to persons, places and organizations, and to suggest changes where appropriate.
- The spoken-language tools of the BAS are required for the joint analysis of the prosodic structures.
- GermaNet can be used to measure the range of variation in word choice and to suggest improvements.
- References to other speeches can be established by drawing on existing collections, provided they are accessible by topic.
- Tools for automatic prosody annotation are still missing.
- The debate transcripts are already syntactically cleaned: hesitations, false starts, etc. are not included, so aligning text and signal remains problematic. However, ASR tools could be used, and their output could in turn be aligned with the minutes or the manuscript.
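The lexical and syntactic criteria listed above can be illustrated with a minimal sketch. The function below is hypothetical and uses only crude, stdlib-based proxies: average sentence length stands in for syntactic complexity and the type-token ratio for lexical variation. The actual user story would rely on proper parsers via WebLicht and on GermaNet instead.

```python
import re
from statistics import mean

def speech_metrics(text):
    """Crude lexical/syntactic proxies for a speech manuscript.

    Average sentence length is a rough stand-in for syntactic
    complexity, and the type-token ratio for lexical variation; a
    real analysis would use a syntactic parser (e.g. via WebLicht)
    and a lexical resource such as GermaNet instead.
    """
    # Naive sentence split on terminal punctuation.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Lowercased word tokens.
    tokens = re.findall(r"\w+", text.lower())
    return {
        "sentences": len(sentences),
        "tokens": len(tokens),
        "avg_sentence_length": mean(
            len(re.findall(r"\w+", s)) for s in sentences
        ),
        "type_token_ratio": len(set(tokens)) / len(tokens),
    }

metrics = speech_metrics(
    "Ladies and gentlemen, we gather here today. "
    "Today we decide. We decide together, and we decide freely."
)
print(metrics)
```

Even such simple figures make the variance across speech genres visible and could serve as baseline features when developing a scoring.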
- The development of a scoring for the individual speech also requires a selection of source data.
It must be checked whether a developed scoring also corresponds to the judgements of human experts.
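Checking the agreement between an automatic scoring and human judgements amounts to a rank correlation. The sketch below computes Spearman's rho from scratch (ranks with tie averaging, then Pearson correlation of the ranks); the score and rating values are invented for illustration.

```python
def ranks(values):
    # 1-based ranks with average ranks for ties, as in the standard
    # Spearman computation.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Pearson correlation of the rank vectors.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical automatic scores and mean expert ratings for five speeches.
auto = [0.62, 0.81, 0.44, 0.90, 0.55]
human = [3.1, 4.0, 2.5, 4.6, 3.0]
print(round(spearman(auto, human), 3))
```

A rho close to 1 would indicate that the scoring ranks speeches much as the experts do; values near 0 would suggest the chosen criteria miss what listeners actually value.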
Developing the necessary data and tools independently would cost more than their expected value. Reusing the existing offerings of Text+ ensures a realistic chance of addressing this research question with reasonable effort.