The epistemic web requires us to rethink fundamental concepts of how to encode data and how to contextualize it in a standardized way. The focus is on describing data and making it available for both human and machine interpretation, where the two modes of interpretation are complementary and will not replace each other. The key aspect of data modeling here is to be as precise as possible, which includes documenting the limits of these methods. An overarching conceptual framework is needed that enables the description of sources, the process of working with sources, the publication of research results, and commentary on sources.
The framework currently being developed in cooperation with the Department and the Digital Innovation Group at Arizona State University, led by Manfred Laubichler and Julia Damerow, allows the interaction of people and institutions to be described according to project-specific interests while also showing connections to datasets collected in other contexts. Such a description can range from the very general level of the bare fact that people interacted down to a close account of how the interaction took place. An ontology is therefore required that spans the general level of “interaction” down to very specific forms of interaction that are of interest only to the specialist. Starting with an ontology-based approach allows objects to be classified in a non-exclusive way while still ensuring that an object remains retrievable in a different context. For example, an art historian might classify a picture of a sphere as an artistic object, whereas a historian of science will classify the same object as a scientific one. Identifying the existence of this object across different databases is possible only if one can agree on at least one shared conceptual level. In the worst case, we have to go through all “things,” but even this is possible in principle and requires no data integration process.
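The mechanism described above can be sketched in code. The following is a minimal, hypothetical illustration (the concept names, record identifiers, and hierarchy are invented for this example, not taken from the actual framework): the same picture of a sphere is classified non-exclusively in two project-specific databases, and both classifications remain retrievable through a shared ancestor concept such as the general “Thing”.

```python
# Hypothetical concept hierarchy: each concept maps to its parent
# (None marks the root). Both specialist concepts share the ancestor "Thing".
ONTOLOGY = {
    "Thing": None,
    "ArtisticObject": "Thing",
    "ScientificObject": "Thing",
}

def ancestors(concept):
    """Return the concept together with all of its ancestors up to the root."""
    chain = []
    while concept is not None:
        chain.append(concept)
        concept = ONTOLOGY[concept]
    return chain

# Two project-specific databases classify the same picture differently
# (record ids are invented for illustration).
records = [
    {"id": "art:sphere-001", "concepts": {"ArtisticObject"}},   # art-historical DB
    {"id": "sci:sphere-001", "concepts": {"ScientificObject"}}, # history-of-science DB
]

def retrieve(concept):
    """Return ids of all records classified under `concept` or any subconcept."""
    return {
        r["id"]
        for r in records
        if any(concept in ancestors(c) for c in r["concepts"])
    }

# Agreement on the shared level "Thing" finds both records;
# a specialist concept finds only the matching one.
print(retrieve("Thing"))           # {'art:sphere-001', 'sci:sphere-001'}
print(retrieve("ArtisticObject"))  # {'art:sphere-001'}
```

The design point is that classification is additive rather than exclusive: an object may carry concepts from several perspectives, and retrieval only requires that the querying party and the classifying party agree somewhere along the hierarchy.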