Digitization and the State-of-the-Art(world) [VIDEO]

The audience during the first session of our Digitization conference, March 5, 2020.
Photo: Huffa Frobes-Cross

In March of 2020, the Wildenstein Plattner Institute presented Digitization and the State-of-the-Art(world), a conference in collaboration with the Hasso Plattner Institute and the Professional Advisors to the International Art Market (PAIAM). SAP’s Next-Gen space and Ann Rosenberg kindly hosted our event at 10 Hudson Yards, providing an exceptional view as a backdrop for our speakers and panels.

The program brought together a variety of professionals — from catalogue raisonné researchers, museum registrars, and art librarians to tech developers and data scientists — all focused on methods of describing, classifying, capturing, storing, and propagating information on works of art. “Art data collection” may be the catch-all term for this pursuit; at the very least, it encapsulates who attended our conference. Yet there remains a significant cultural divide between art world practitioners and tech industry data scientists, and it is a divide that is becoming increasingly necessary to bridge. Digitization and the State-of-the-Art(world) sought to explore some of the key issues that arise — and the disruptions that can occur — when we consider the varying and opposing perspectives of those involved in art data management. The conference explored “best practice” approaches for improving the accuracy and accessibility of art-related data, and through three distinct sessions established some pathways for moving toward the industry’s common goals.

All fifteen of our speakers addressed issues that are at the heart of what we do at the Wildenstein Plattner Institute. The WPI was founded in 2016 to carry forward the Wildenstein Institute’s scholarly pursuits. Our approach, however, has been geared towards the needs of 21st-century users of art-specific information. Though at the forefront of this effort, the WPI did not at first have a team in place. What it did have was a huge, uninventoried treasure trove of paper archives and sales catalogues, as well as a legacy staff in Paris who were hard at work on multiple catalogue raisonné projects. Up until then, our staff had written the majority of these catalogues raisonnés in Microsoft Word, and in the cases of older projects, photo-index cards, maintained in loose paper files, provided our primary information storage system. A few of the projects, though, were dutifully being logged into a bespoke cataloguing system.

Presentation by Prof. Dr. Lynn Rother from Session One: “The Currency of Information”
Photo: Huffa Frobes-Cross

Jerry-rigged from a pre-existing collection management system, our relational database housed data that was backed up on giant physical servers and maintained by a single, off-site IT specialist. The system had broadly designated fields to capture free-form text of all varieties. Information was added inconsistently for each object description, and bibliographic citations were noted as specialized numeric codes. The greatest problem was that the system was constantly being reformatted to accommodate the requirements of individual users, depending on their projects. This resulted in a data structure so finicky that running and exporting meaningful reports was impossible. Even though the staff had been working in the same database, each project was essentially siloed, resulting in a great deal of duplication and inaccuracy. If the Wildenstein Plattner Institute wanted to produce high-quality catalogues raisonnés for the digital age, the institution needed a much more sophisticated tech platform. This is where the Plattner brain trust came into play, and soon HPI’s Martin Lorenz came on board as the leader of our new tech effort.
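To make the contrast concrete, here is a minimal sketch of the kind of typed, structured cataloguing that a free-form system cannot support. The schema is entirely hypothetical, but it illustrates the principle: with explicit columns and a separate citations table, a meaningful report becomes a simple query rather than an impossibility.

```python
# A minimal sketch with a hypothetical schema: typed fields and a separate
# citations table, in contrast to one broadly designated free-form text field.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE artwork (
    id     INTEGER PRIMARY KEY,
    title  TEXT NOT NULL,
    year   INTEGER,             -- typed, so date ranges can be queried
    medium TEXT
);
CREATE TABLE citation (
    artwork_id INTEGER REFERENCES artwork(id),
    reference  TEXT NOT NULL    -- a readable citation, not a numeric code
);
""")
con.execute("INSERT INTO artwork VALUES (1, 'Untitled', 1892, 'oil on canvas')")
con.execute("INSERT INTO citation VALUES (1, 'Smith 1990, cat. no. 12')")

# With a consistent structure, a meaningful report is one query:
for title, year, ref in con.execute("""
    SELECT a.title, a.year, c.reference
    FROM artwork a JOIN citation c ON c.artwork_id = a.id
    WHERE a.year BETWEEN 1880 AND 1900
"""):
    print(title, year, ref)
```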

In the art world, however, this surprisingly antiquated set-up is not unusual. Some of the most prestigious art institutions in the world struggle with issues of data inconsistency, one-person IT support, retrofitted databases, and short-sighted solutions that don’t look towards the interconnectivity of multiple projects at other institutions. Perpetuating this individualistic approach results in a massive headache for future art researchers. So, do we usher in the solutions that tech can provide and compromise our specialized methodologies for the greater good? And if so, what do we lose when we forgo, say, the time-tested approach of writing notes on an index card?

In art research, one needs to know what to look for and where to look, to be cognizant of the limitations or omissions inherent in what one is finding, and to consider how one can piece all the information together to create a reliable and clear narrative. This pursuit is, and will remain, fundamental to the role of the art historian. The world will always need art researchers to make sense of art data.

Even with the ideal system, questions remain — some of our own approaches to art data management in the digital age might need to be overhauled. Should we forgo keeping our own paper research files and log everything into a searchable database? In the age of the internet, shouldn’t we all promote a free exchange of information and oppose overly restrictive applications of copyright and intellectual property? How about scanning an entire archive and putting it online? Why don’t we all use the same style guides and vocabularies for describing works of art? And why don’t we link to one another’s work, cite widely, and connect to foundational resources? It seems we would all benefit from these more democratic practices.

Panel for Session Three: “Next Generation Solutions”, with Christian Bartz (HPI), Martin Lorenz (Wildenstein Plattner Institute), and Andrew Lih (Wikidata)

Our first session on Thursday, “The Currency of Information”, acknowledged the value of the data that we collect and our incentive to share it. As many in the art world know, we are as valuable as the information we keep. For art advisors, curators, catalogue raisonné authors, independent academics, and auction houses alike, the professional compilation of this information is fundamental to our livelihood. And given this common objective, we all stand to benefit from an efficient mode of data collection that reduces the redundancy of this effort and the propagation of error. The technology that could make this a possibility exists. But as Caitlin Sweeney, the session’s moderator, asked, “What’s the hold-up?”

The challenges and opportunities that co-exist in compiling and safeguarding data complicate the endeavor. The conference then looked to some case studies for possible solutions, both in practice and in development. The second session, “Interim Solutions”, presented some of the applications and protocols currently in use to corral and standardize data, and examined how those efforts are being received by the key users of this information. David Newbury, from the Getty Research Institute, led our second session participants in a discussion following their presentations.
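One such protocol is the IIIF Image API, represented in this session by the IIIF Consortium’s Josh Hadro. As a minimal sketch, the snippet below shows the standardized request pattern that lets any compliant image server deliver regions and sizes of an image on demand; the server URL and identifier are hypothetical placeholders.

```python
# A minimal sketch of the IIIF Image API request pattern (version 3).
# The base URL and identifier below are hypothetical; substitute any
# IIIF-compliant image server to try it.

def iiif_image_url(base: str, identifier: str, region: str = "full",
                   size: str = "max", rotation: int = 0,
                   quality: str = "default", fmt: str = "jpg") -> str:
    """Build a URL of the form {base}/{id}/{region}/{size}/{rotation}/{quality}.{format}."""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# Request a 512-pixel-wide rendering of the full image:
print(iiif_image_url("https://example.org/iiif", "painting-001", size="512,"))
# -> https://example.org/iiif/painting-001/full/512,/0/default.jpg
```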

Our final session, “Next Generation Solutions”, featured three data scientists who addressed the identification mechanisms employed for works of art and for art-specific information. Introduced by Joann Halpern, the director of the Hasso Plattner Institute here in New York, the presentations focused on the next generation of possible approaches to sharing, collecting, and identifying art-specific data through AI, unique identifiers, and crowd-sourcing platforms like Wikidata. Some of the questions discussed were: What identification schemes exist to share artwork-related information? How accepted are these existing identifiers, and who uses them? How would a system need to be designed to support a federated approach to the issuance and management of identifiers? What concepts or technologies are missing to increase the wide adoption of an identification scheme suitable for automatic information exchange?
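As a small illustration of what a shared identifier enables, the sketch below resolves a work of art through Wikidata’s public wbgetentities API. Q12418 is Wikidata’s identifier for the Mona Lisa and P170 is its “creator” property; the code is an illustrative example, not a system any speaker proposed.

```python
# A minimal sketch: resolving a work of art by its Wikidata identifier (QID)
# through the public wbgetentities API. Q12418 = Mona Lisa; P170 = creator.
import json
import urllib.parse
import urllib.request

def fetch_entity(qid: str) -> dict:
    """Fetch one Wikidata entity record as parsed JSON."""
    params = urllib.parse.urlencode({
        "action": "wbgetentities",
        "ids": qid,
        "format": "json",
        "languages": "en",
    })
    url = "https://www.wikidata.org/w/api.php?" + params
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["entities"][qid]

entity = fetch_entity("Q12418")
print(entity["labels"]["en"]["value"])  # "Mona Lisa"

# Follow the "creator" claim to another QID, itself globally resolvable:
creator = entity["claims"]["P170"][0]["mainsnak"]["datavalue"]["value"]["id"]
print(creator)  # "Q762", Wikidata's identifier for Leonardo da Vinci
```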

The tech and art worlds have a lot to learn from each other, including the ways we query and seek information. The conference’s task was to raise awareness about the types of questions that need to be asked and to consider the steps we can take collectively to improve the utility of art data collection. We hope our conference made some headway on that front, and that what we have learned from our own work, which began with an ad hoc cataloguing system, can begin to integrate and simplify art research for all.

To read more about the conference, please click here for our full breakdown of the day. Otherwise, please find complete footage of the event, filmed by Martin Loper, below.

Program – Digitization and the State-of-the-Art(world)

Welcome 

Ann Rosenberg, Senior VP & Global Head of SAP Next-Gen

Opening Remarks 

Elizabeth Gorayeb, Executive Director, Wildenstein Plattner Institute

Session 1: The Currency of Information

Presentations

Katie Reilly, Director of Publishing, Philadelphia Museum of Art

Cristina Linclau, Manager of Exhibitions and Collections Information, Guggenheim Museum

Lynn Rother, Lichtenberg-Professor for Provenance Studies, Leuphana University Lüneburg

Meghan Noh, Art Law Group, Pryor Cashman LLP

Moderated Discussion and Q&A with our Director of Digital Publications, Caitlin Sweeney

Session 2: Interim Solutions

Presentations

Josh Hadro, Managing Director, IIIF Consortium

Rachael Kotarski, Head of Research Infrastructure Services, British Library

Jennie Choi, General Manager of Collection Information, The Metropolitan Museum of Art

Moderated Discussion and Q&A with David Newbury

Session 3: Next Generation Solutions 

Introduction

Joann Halpern, Director, Hasso Plattner Institute

Presentations

Martin Lorenz, Director of Technology, Wildenstein Plattner Institute

Christian Bartz, Multimedia Analysis and Deep Learning, Hasso Plattner Institute

Andrew Lih, Wikidata and Digital Media Strategist

Moderated Discussion and Q&A
