Services and Tools

Text+ and the participating institutions provide a range of services for language and text data. Alongside the research data themselves, these services and tools are an essential part of what Text+ offers its users.

To provide and maintain this overview of services, Text+ uses the SSH Open Marketplace, a platform from the Social Sciences and Humanities Cluster in the EOSC. There, researchers can view the Text+ offerings in a broader context alongside resources relevant to them, discover further information, and thus support their research activities. If you have additions or corrections to service descriptions, you can make them yourself directly in the SSH Open Marketplace on the page of the resource concerned.

CLARIAH-DE Tutorial Finder

The Tutorial Finder allows users to browse freely available and reusable teaching and training materials on procedures, tools, research methods, and topics in the humanities and related disciplines.

This resource is supported by Text+.

Finding Discovering
CLARIND-UdS Repository (Saarbrücken)

The CLARIND-UdS data center is part of the Text+ infrastructure and operated by the Department of Language Science and Technology at Saarland University.

Storing Publishing
DARIAH-DE and OPERAS-GER academic blogging with Hypotheses

Hypotheses is a non-commercial blog portal for the humanities and social sciences. The portal provides a free service that facilitates scientific blogging and ensures greater visibility as well as archiving of content.

Blogging Communicating Publishing
DARIAH-DE Data Federation Architecture (DFA)

The DARIAH-DE data federation architecture is the umbrella term for the services and tools that make research data and collection descriptions from various sources (such as cultural institutions, libraries, archives, research facilities, and data centers) findable and usable for analysis.

Collecting Data Mapping
DARIAH-DE Data Modeling Environment

Environment for modeling data and their relationships.

The Data Modeling Environment (DME) is a tool for modeling and associating data. With the DME, data models and mappings between them can be defined and made available via interfaces (REST API).
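To illustrate what such a mapping does, here is a plain-Python sketch (not the actual DME data model or REST API) that translates a record from one field vocabulary into another:

```python
# Illustrative sketch only: a field mapping between two simple data models,
# the kind of association the DME manages. Field names are invented.
source_record = {"titel": "Faust", "autor": "Goethe", "jahr": 1808}

# Mapping from source field names to target field names (hypothetical models).
mapping = {"titel": "title", "autor": "creator", "jahr": "date"}

def apply_mapping(record, mapping):
    """Translate a record from the source model into the target model."""
    return {mapping[field]: value for field, value in record.items() if field in mapping}

target_record = apply_mapping(source_record, mapping)
print(target_record)  # {'title': 'Faust', 'creator': 'Goethe', 'date': 1808}
```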

Modeling Data Mapping
DARIAH-DE Generic Search

Search engine that allows users to search the metadata records of the Collection Registry.

The Generic Search creates a comprehensive search facility in DARIAH-DE.

DARIAH-DE Geo-Browser

The DARIAH-DE Geo-Browser (or GeoBrowser) allows the comparative visualisation of several queries and supports the display of data in relation to geographic space at corresponding points in time and in time sequences.
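Data for such a visualisation typically pairs place coordinates with dates. The sketch below writes a small georeferenced, time-stamped table as CSV; the column names are illustrative only, so check the Geo-Browser documentation for the exact input format it expects:

```python
import csv
import io

# Sketch of georeferenced, time-stamped data of the kind the Geo-Browser
# visualises. Column names are illustrative, not the official input schema.
rows = [
    {"Name": "Weimar", "Latitude": 50.9795, "Longitude": 11.3235, "TimeStamp": "1808"},
    {"Name": "Frankfurt", "Latitude": 50.1109, "Longitude": 8.6821, "TimeStamp": "1749"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["Name", "Latitude", "Longitude", "TimeStamp"])
writer.writeheader()
writer.writerows(rows)
csv_text = buffer.getvalue()
print(csv_text)
```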

Georeferencing Visual Analysis

Helpdesk

The helpdesk is a good starting point for support with DH-related questions and with the tools and resources provided by CLARIN-D, DARIAH-EU, DARIAH-DE, CLARIAH-DE, and Text+.

DARIAH-DE Monitoring of research infrastructures and services using Icinga

Monitoring is an important factor in the operation of a digital research infrastructure. The data centers monitor the hardware and the state of the basic software, so that faults and failures can be corrected as quickly as possible.

DARIAH-DE Publikator

For whom? Researchers who want to deposit their research data safely, persistently, and referenceably in a research data repository.

The DARIAH-DE Publikator offers the possibility to prepare and manage research data for import into the DARIAH-DE Repository.

Digital Object Identifier Archiving Publishing
DARIAH-DE Repository dhrep

The DARIAH-DE Repository is a central component of the DARIAH-DE research data federation architecture. The DFA aggregates various services and thus ensures convenient use.

Storing Publishing

entityXML

entityXML is (so far) a concept study in version 0.5.2 (ALPHA), which aims to define a standardised XML-based data format for the GND Agency Text+. This resource is supported by Text+. In case of questions, you may get in touch with the Text+ helpdesk at textplus-support@gwdg.de.
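For a flavour of what an XML-based entity record can look like, here is a minimal Python sketch. The element names are invented for illustration and do not follow the actual entityXML schema, which is still a concept study:

```python
import xml.etree.ElementTree as ET

# Illustrative only: element names are invented and do NOT follow the actual
# entityXML schema (v0.5.2 ALPHA concept study).
entity = ET.Element("entity", attrib={"type": "person"})
ET.SubElement(entity, "name").text = "Johann Wolfgang von Goethe"
ET.SubElement(entity, "gnd").text = "118540238"  # Goethe's GND identifier

xml_string = ET.tostring(entity, encoding="unicode")
print(xml_string)
```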

FAIR Check

Findable, accessible, interoperable and reusable (FAIR) - that's how research data should be. Sounds good, but not so easy? Take our online FAIR check and evaluate your research data. Our questionnaire will help you assess which criteria you meet.
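As a rough illustration of how such a self-assessment works, the following sketch scores a record against a handful of criteria. The criteria and weighting here are invented for this example and are not the actual FAIR Check questionnaire:

```python
# Hypothetical FAIR self-assessment sketch; criteria are invented examples,
# one per FAIR dimension, not the real FAIR Check questions.
criteria = {
    "has_persistent_identifier": True,   # Findable
    "openly_accessible": True,           # Accessible
    "uses_standard_format": False,       # Interoperable
    "has_licence": True,                 # Reusable
}

def fair_score(criteria):
    """Return the fraction of FAIR criteria that are met."""
    return sum(criteria.values()) / len(criteria)

print(f"{fair_score(criteria):.0%} of the checked criteria are met")
```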

GND Agency Text+

The GND Agency Text+ is a service that is being set up at the Göttingen State and University Library (SUB Göttingen) as part of the NFDI consortium Text+.


HedgeDoc

Online tool for collaborative text editing to work together on the same texts at the same time.

The HedgeDoc-pad is an open-source-based web editor that allows multiple users to work on a single text simultaneously from different locations.

Collaborating Creating
Indico Event Management

The open-source software Indico, developed at CERN, is a web application for event management. Three different event types (lecture, meeting, and conference) can be created in Indico.

Communicating Teaching

MONAPipe

MONAPipe stands for "Modes of Narration and Attribution Pipeline". It provides natural-language-processing tools for German, implemented in Python/spaCy.

Publishing Annotating

DARIAH-DE OpenProject

The Project Management Service is a collaborative self-service that allows you to manage and track your projects and source code repositories. Using DARIAH-DE OpenProject, users can independently coordinate their projects, keep track of their issues, and document their results.

Managing Communicating
Persistent Identifier Service

In all areas of research, the amount of digitally stored data is growing continuously. As a result, its management becomes ever more complex, so that the sustainable referencing of data, and thus its permanent citability, is a challenge.
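Persistent identifiers such as Handles address this by decoupling the reference from the storage location: a PID resolves to the current location via a resolver. The sketch below builds such a resolution URL for the public Handle proxy; the PID shown is a made-up placeholder, not a real handle:

```python
from urllib.parse import quote

# The global Handle proxy. The PID used below is a made-up placeholder.
RESOLVER = "https://hdl.handle.net/"

def resolution_url(pid):
    """Return the URL at which a Handle-style PID resolves."""
    return RESOLVER + quote(pid, safe="/")

print(resolution_url("21.T11998/0000-0000-0000-0000"))
```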


Rocket.Chat

Rocket.Chat is a web-based, persistent messaging service focusing on group communication. It is a simple and intuitive communication platform for all users with a GWDG account/AcademicID.

Communicating Instant Messaging
Text+ Federated Content Search (FCS)

The Federated Content Search (FCS) is a specification and technical infrastructure for querying and aggregating distributed research data.
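FCS endpoints build on the SRU protocol, so a query is an ordinary HTTP request with a few standard parameters. The sketch below constructs an SRU 1.2 searchRetrieve URL in Python; the endpoint address is a made-up placeholder, not a real Text+ endpoint:

```python
from urllib.parse import urlencode

# Hypothetical FCS/SRU endpoint -- real endpoints are provided by the
# participating Text+ centres.
ENDPOINT = "https://example.org/fcs"

def build_search_url(query, maximum_records=10):
    """Build an SRU 1.2 searchRetrieve request URL."""
    params = {
        "operation": "searchRetrieve",
        "version": "1.2",
        "query": query,
        "maximumRecords": maximum_records,
    }
    return ENDPOINT + "?" + urlencode(params)

url = build_search_url("Goethe")
print(url)
```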

Discovering Finding Searching
Text+ GitLab

Web-based source code management with a wide range of features to support development processes: per-project configuration of continuous integration, support for merge request workflows, consultancy and support in setting up your projects, and connection to the GWDG user administration.
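As an illustration of per-project continuous integration, a minimal `.gitlab-ci.yml` might look like this (the image and scripts are examples to adapt to your own project):

```yaml
# Minimal .gitlab-ci.yml sketch: one test job in one stage.
stages:
  - test

test:
  image: python:3.12        # example image; choose one matching your project
  script:
    - pip install -r requirements.txt
    - pytest
```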

Versioning Managing
Text+ Web Portal

Web portal for Text+ based on HugoCMS and a CI/CD deployment pipeline. This resource is supported by Text+. In case of questions, you may get in touch with the Text+ helpdesk at textplus-support@gwdg.de.

TextGrid Import UI

A Jupyter Notebook-based user interface for TextGrid Import Modelling, which is a command line tool that facilitates the creation of the metadata files required for importing data into TextGrid Rep.

Data Ingestion
TextGrid Repository & Laboratory

TextGrid is a virtual research environment for text-based humanities scholarship. It offers a variety of tools and services for collaboratively creating, analyzing, editing, and publishing texts.

Migration Searching Enriching Collaborating Lemmatizing Discovering Disseminating Editing Publishing
tg-model - TextGrid Import Modeller

What's the aim? This project aims at a simple import of text corpora (encoded in XML/TEI) into the TextGrid Repository by modeling the required metadata file structure.

Data Ingestion
tgadmin - TextGrid repository administration cli tool, based on tgclients

What is the aim? A command line tool for managing your projects in the TextGrid repository without TextGridLab.

The actual data import is finally carried out by the Python tools tgadmin and tgclients, which in turn communicate with the TextGrid backend via the various TextGridRep APIs.

Data Ingestion
tgclients - TextGrid Python clients

What is the aim? The actual data import is finally carried out by the Python tools tgadmin and tgclients, which in turn communicate with the TextGrid backend via the various TextGridRep APIs.

Data Ingestion
WebLicht Const Parsing EN

WebLicht Easy Chain for Constituency Parsing (English). The pipeline makes use of WebLicht's TCF converter, the Stanford tokenizer, and the statistical BLLIP/Charniak parser.