Contributed 01
Wednesday, December 7
 

09:00 CST

TDWG Then and Now
The Taxonomic Databases Working Group (TDWG, now Biodiversity Information Standards) started out in Geneva in 1985, and this meeting marks its 25th anniversary. TDWG has evolved from a relatively close-knit group within the plant sciences into the current encompassing standards organization underpinning biodiversity publication efforts across the entire Linnean tree, around the whole globe, and at the cutting edge of information technology.
What drove TDWG through the turn-of-the-century changes in biodiversity science? How has TDWG's scope adapted or evolved in response to changing challenges? What has its turnout rate been? How have interests shifted over time? Can it be considered a stable, healthy network ready to continue its work, boldly going where no taxonomists have gone before?
Through a network analysis, together with representation and visualization of samples of TDWG's themes and interests across its annual meetings, we will picture TDWG's role, its changes, and its adaptation to the flow of biodiversity research, and we will explore what new frontiers may lie ahead to be trodden by TDWG participants between now and BIS/TDWG's 26th anniversary.


Wednesday December 7, 2016 09:00 - 09:15 CST
Auditorium CTEC

09:15 CST

Nanopublications for biodiversity: concept, formats and implementation
The concept of the “nanopublication” was developed by the Concept Web Alliance (http://www.nanopub.org) and is defined as “the smallest unit of publishable information: an assertion about anything that can be uniquely identified and attributed to its author.” A nanopublication includes three key components, or named graphs: (1) the assertion, a statement linking two concepts (subject and object) via a third concept (predicate); (2) the provenance, metadata that provides context for the assertion; and (3) publication/citation metadata about the nanopublication itself. A similar form of machine-readable formalization of knowledge is the “micropublication”, which may also include the evidence underlying claims and arguments supporting the assertions.
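A minimal sketch of this three-graph structure, assuming the rdflib Python library; the base URIs and the example assertion below are placeholders for illustration, not a prescribed nanopublication template.

# Sketch of a nanopublication's named graphs using rdflib.
# All example.org URIs and the sample assertion are hypothetical placeholders.
from rdflib import Dataset, Literal, Namespace, RDF
from rdflib.namespace import DCTERMS, XSD

NP   = Namespace("http://www.nanopub.org/nschema#")
PROV = Namespace("http://www.w3.org/ns/prov#")
EX   = Namespace("http://example.org/np/")          # placeholder base URI

ds = Dataset()
ds.bind("np", NP)
ds.bind("prov", PROV)
ds.bind("dcterms", DCTERMS)

head       = ds.graph(EX["head"])
assertion  = ds.graph(EX["assertion"])
provenance = ds.graph(EX["provenance"])
pubinfo    = ds.graph(EX["pubinfo"])

# Head graph: declares the nanopublication and points to its three parts.
head.add((EX["np1"], RDF.type, NP.Nanopublication))
head.add((EX["np1"], NP.hasAssertion, EX["assertion"]))
head.add((EX["np1"], NP.hasProvenance, EX["provenance"]))
head.add((EX["np1"], NP.hasPublicationInfo, EX["pubinfo"]))

# (1) Assertion: a single subject-predicate-object statement (invented example).
assertion.add((EX["Aus_bus"], EX["occursIn"], EX["Costa_Rica"]))

# (2) Provenance: context for the assertion.
provenance.add((EX["assertion"], PROV.wasDerivedFrom, EX["specimen_123"]))

# (3) Publication info: metadata about the nanopublication itself.
pubinfo.add((EX["np1"], DCTERMS.creator, Literal("Author Name")))
pubinfo.add((EX["np1"], DCTERMS.created, Literal("2016-12-07", datatype=XSD.date)))

print(ds.serialize(format="trig"))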
Nanopublications are proposed as a complement to traditional scholarly articles. They allow the underlying data to be attributed and cited, giving researchers an incentive to make their data available in machine-readable formats, thereby supporting large-scale integration and interoperability while the provenance of every contribution remains trackable.
Nanopublications can be derived from research or data papers or from the supplementary materials associated with them, or they can be composed de novo as independent publications used to disseminate various kinds of data that may not warrant publication as a paper. For example, one possibility is to allow export of a nanopublication in the form of a “nanoabstract” by developing a mapping from article XML to nanopublication RDF. This could be facilitated either by mapping tools or via a specially designed user interface in which authors express the most important findings of their articles as assertions in nanopublications.
Nanopublications may also play a highly useful role in the challenging process of community curation of biodiversity databases, such as GBIF (see the iPhylo post “Annotating GBIF: from datasets to nanopublications”, http://iphylo.blogspot.ie/2015/01/annotating-gbif-from-datasets-to.html), the Catalogue of Life, or taxon name registries. The credit and recognition provided by nanopublications may serve as an incentive for experts and citizen scientists to annotate and amend data for community use.
As machine-readable, RDF-based formalizations of knowledge, nanopublications can be consumed into the Biodiversity Knowledge Graph and will be an essential component of the RDF-based Open Biodiversity Knowledge Management System (OBKMS) developed by Pensoft and Plazi. Nanopublications for some classes of biodiversity data will be implemented first in the Biodiversity Data Journal and TreatmentBank. Nanopublication formats currently under development are: (1) descriptions of new taxa; (2) renaming of taxa and synonymies (nomenclatural acts); (3) new occurrence records; (4) new traits or other biological information about a taxon.
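As an illustration of class (3), the assertion graph of a new-occurrence-record nanopublication could be built from Darwin Core terms roughly as follows; since the concrete formats are still under development, the predicates and values chosen here are only an assumed sketch.

# Hypothetical assertion graph for a "new occurrence record" nanopublication,
# expressed with Darwin Core terms; placeholder identifiers and values throughout.
from rdflib import Graph, Literal, Namespace, RDF

DWC = Namespace("http://rs.tdwg.org/dwc/terms/")
EX  = Namespace("http://example.org/occurrence/")   # placeholder identifiers

g = Graph()
g.bind("dwc", DWC)

occ = EX["occ-001"]
g.add((occ, RDF.type, DWC.Occurrence))
g.add((occ, DWC.scientificName, Literal("Aus bus")))
g.add((occ, DWC.country, Literal("Costa Rica")))
g.add((occ, DWC.eventDate, Literal("2016-12-07")))
g.add((occ, DWC.basisOfRecord, Literal("HumanObservation")))

print(g.serialize(format="turtle"))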




Wednesday December 7, 2016 09:15 - 09:30 CST
Auditorium CTEC

09:30 CST

COPIS: A Computer Operated Photogrammetric Imaging System
Technological advancements over the past two decades have made information about types and other specimens housed in natural history collections available online in digital form, primarily for research purposes. In the past few years, more emphasis has been placed on digital imaging of specimens, in effect bringing the specimens out of their cabinets and increasingly into public view globally via the World Wide Web. This presentation will introduce full-color, three-dimensional imaging of external anatomy using photogrammetry and will describe an architecture known as COPIS (Computer Operated Photogrammetric Imaging System) developed for rapid multi-camera, multi-view image acquisition. In addition to 3D imaging, the outputs of COPIS may also be used in traditional two-dimensional image analysis.
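A rough sketch of the kind of multi-camera, multi-view acquisition loop such a system automates; the trigger_capture function, camera array, and turntable parameters below are placeholders standing in for the actual COPIS control interface, which is not described here.

# Hypothetical multi-camera, multi-view acquisition loop (not the real COPIS code).
import csv
import time

CAMERA_IDS = [0, 1, 2, 3, 4]          # assumed camera array
TURNTABLE_STEPS = 36                   # e.g., one capture every 10 degrees

def trigger_capture(camera_id: int, angle_deg: float) -> str:
    """Placeholder for the real camera/turntable control call."""
    filename = f"cam{camera_id:02d}_ang{int(angle_deg):03d}.jpg"
    # ... hardware trigger would go here ...
    return filename

with open("capture_manifest.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["camera_id", "angle_deg", "filename", "timestamp"])
    for step in range(TURNTABLE_STEPS):
        angle = step * 360.0 / TURNTABLE_STEPS
        for cam in CAMERA_IDS:
            name = trigger_capture(cam, angle)
            writer.writerow([cam, angle, name, time.time()])

# The resulting manifest of images and poses can be handed to photogrammetry
# software for 3D reconstruction, or the images used directly for 2D analysis.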


Wednesday December 7, 2016 09:30 - 09:45 CST
Auditorium CTEC

09:45 CST

The Digital Object Lifecycle of Paleo Data: Concepts of Digital Curation in a Natural History Context
Paleontological data presents many challenges. It can often be difficult to maintain best practices, follow established standards and methodologies, and ensure data quality over time. At the Smithsonian National Museum of Natural History (NMNH) Department of Paleobiology, we are developing a comprehensive program for understanding and managing the full digital object lifecycle of our collections and research data. Using the tools and resources developed by the digital curation field, we are able to complete a comprehensive analysis of our digital data and all of its characteristics. This analysis follows a digital object, in this case a paleontological collections record, from the point of creation, whether through transformation from analog or as born digital, through submission to repository and preservation systems, and then as an output of interoperable information disseminated for consumption by a variety of audiences through many access points. An added complexity is the inherently cyclical nature of biodiversity data, which requires additional consideration of continuous distribution, analysis and enhancement, resubmission, and redistribution over time. By defining the actions, roles, characteristics, and standards needed at each step in the lifecycle, we build the capacity to fully comprehend our data and thereby increase our ability to enhance standards, workflows, policies, and ultimately long-term data quality and data management. This comprehensive definition of paleontological data also enables more in-depth discussion at the global biodiversity informatics level, contributing to conversations about current standards, data needs, and data usage, and to needed discussions about ways of improving the comprehensiveness and interoperability of paleontological data across institutions and data sources. This talk will cover the efforts and progress made thus far by the NMNH Department of Paleobiology and invite discussion from others undertaking similar studies or interested in collaborating on the global applications of these concepts.
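A toy sketch of how the lifecycle stages named above might be tracked for a single record; the stage names follow the abstract, but the Python structures are purely illustrative and not the NMNH implementation.

# Illustrative model of digital object lifecycle stages for one collections record.
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    CREATION = auto()        # digitized from analog, or born digital
    SUBMISSION = auto()      # ingest into repository/preservation systems
    PRESERVATION = auto()
    DISSEMINATION = auto()   # published through aggregators and access points
    ENHANCEMENT = auto()     # community/expert improvement, then resubmission

@dataclass
class DigitalObjectRecord:
    identifier: str
    stage: Stage = Stage.CREATION
    history: list = field(default_factory=list)

    def advance(self, new_stage: Stage, actor: str, note: str = "") -> None:
        """Record who moved the object to which stage, preserving provenance."""
        self.history.append((self.stage, new_stage, actor, note))
        self.stage = new_stage

record = DigitalObjectRecord("PAL-0001")                       # placeholder identifier
record.advance(Stage.SUBMISSION, "collections staff", "ingest into CMS")
record.advance(Stage.DISSEMINATION, "data manager", "published to aggregators")
record.advance(Stage.ENHANCEMENT, "community", "georeference added; cycle repeats")
print(record.stage, len(record.history))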


Wednesday December 7, 2016 09:45 - 10:00 CST
Auditorium CTEC

10:00 CST

Building Linked Open Data for Zooarchaeological Specimens and Their Context
Zooarchaeological collections data present special challenges for mobilization into global biodiversity networks, given the critical importance of human site context in their interpretation. At the same time, faunal remains are biological samples that can be represented using existing standards. Here we present a means of using a linked open data framework to connect cultural context and specimen data in order to support integrated global change research. We demonstrate this approach using a subset of the zooarchaeological holdings of the Florida Museum of Natural History as a case study. We show how these datasets can be expressed using Darwin Core, especially information relating to excavation, chronology, and cultural provenience. We have also developed a means of sharing context information with Open Context, an archaeoinformatics project that is well established in the community. We discuss the importance of linked open data frameworks for representing zooarchaeological data, and the importance of future development of Darwin Core extensions that capture this content appropriately rather than relegating it to container fields such as dynamicProperties in Darwin Core. Many of the concepts required to share zooarchaeological data overlap conceptually with paleontological data, and we argue that it is timely and necessary to fully connect biological, paleontological, and archaeological data for broad and efficient scientific use.
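A hedged illustration of the container-field workaround described above: a single zooarchaeological specimen expressed as a Darwin Core occurrence row, with cultural context packed into dynamicProperties and a link out to an Open Context resource. All field values and the Open Context URI are invented placeholders.

# One zooarchaeological specimen as a Darwin Core row with cultural context
# serialized into dynamicProperties (placeholder values throughout).
import csv
import json

record = {
    "occurrenceID": "urn:catalog:EXAMPLE:Zooarch:0001",
    "basisOfRecord": "MaterialSample",
    "scientificName": "Odocoileus virginianus",
    "locality": "Example shell midden site",
    "eventDate": "1150/1300",                     # site chronology as a date range
    "dynamicProperties": json.dumps({
        "culturalPeriod": "Example cultural period",   # cultural provenience
        "excavationUnit": "Unit 4, Level 3",
        "relatedResource": "https://opencontext.org/subjects/EXAMPLE",
    }),
}

with open("zooarch_dwc.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=record.keys())
    writer.writeheader()
    writer.writerow(record)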


Wednesday December 7, 2016 10:00 - 10:15 CST
Auditorium CTEC

10:15 CST

Demonstrating the Prototype of the Open Biodiversity Knowledge Management System
The Open Biodiversity Knowledge Management System (OBKMS) is a suite of semantic applications and services running on top of a graph database that stores biodiversity and biodiversity-related information, known as a biodiversity knowledge graph (http://rio.pensoft.net/articles.php?id=8767). A biodiversity knowledge graph is a data structure of interconnected nodes (e.g., specimens, taxa, sequences) that is compatible with Linked Open Data standards and can be merged with other, similar systems to ultimately form a grand Biodiversity Knowledge Graph of all biodiversity and biodiversity-related information.
The main purpose of OBKMS is to provide a unified system for interlinking and integrating diverse biodiversity data, e.g., taxon names, taxon concepts, taxonomic treatments, specimens, occurrences, gene sequences, and bibliographic information. At this stage, the graph is serialized as Resource Description Framework (RDF) quads, extracted primarily from biodiversity publications; database interlinks will follow. Options for expressing Darwin Core-encoded data as RDF for insertion into the graph are being explored.
You are encouraged to listen to the talk in Symposium S01, “The Open Biodiversity Knowledge Management System: A Semantic Suite Running on top of the Biodiversity Knowledge Graph,” to get a grasp of the theoretical underpinnings of OBKMS.
In this computer demo we will show the prototype of OBKMS that we already have running at the Bulgarian Academy of Sciences. We will mostly run SPARQL (SPARQL Protocol and RDF Query Language) queries and showcase some insights learned from the data (sources: Pensoft, Plazi).
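An example of the kind of SPARQL query such a demo might run, assuming rdflib; the ex: predicates, the class names, and the local data dump are invented placeholders rather than OBKMS's actual schema.

# A query of the sort the demo could run, over an assumed local dump of the graph.
from rdflib import Dataset

ds = Dataset()
ds.parse("obkms_sample.trig", format="trig")   # placeholder local dump

QUERY = """
PREFIX dwc: <http://rs.tdwg.org/dwc/terms/>
PREFIX ex:  <http://example.org/obkms/>        # hypothetical OBKMS namespace

SELECT ?treatment ?scientificName
WHERE {
  GRAPH ?g {
    ?treatment a ex:TaxonomicTreatment ;
               ex:treatsTaxonName ?name .
    ?name dwc:scientificName ?scientificName .
  }
}
LIMIT 10
"""

for row in ds.query(QUERY):
    print(row.treatment, row.scientificName)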
We also want the session to be very interactive and to gather users' perspectives on what OBKMS should become after its prototype stage. In particular, beyond the database stage we want to explore the different services and applications that OBKMS could offer.
 


Wednesday December 7, 2016 10:15 - 10:30 CST
Auditorium CTEC
 

