ALA Midwinter 2009

From ALCTS Committee on Catalog Form and Function

Synchronizing Catalog Externalities: Strains & Stresses Between the Catalog & Parallel Search Tools<br> ALCTS Catalog Form and Function Interest Group<br> Saturday, 24 January, 10:30 a.m.-12:00 p.m.<br> Grand Hyatt Denver, Maroon Peak Room<br> ALA Midwinter Meeting, Denver, CO<br> Attendance: ca. 75 <br>

The meeting at Midwinter focused on the catalog as both a consumer of external data in multiple formats and a supplier to external discovery tools. Three presentations examined the exchange of MARC and XML metadata between disparate systems and the tools needed to make these exchanges work.

<br> <b>Introduction</b><br> Charley Pennell, North Carolina State University<br>

<b>Optimized Metadata Repurposing in a Library Using MarcEdit</b><br> Sai Deng, Wichita State University Libraries<br>

<b>Abstract:</b> At WSU Libraries, Special Collections and Technical Services had been creating in-house inventories in Excel and MARC records in OCLC separately for their projects, including a collection of 1,800 poems by American poet Albert Goldbarth. This presentation will address how we repurposed those brief Special Collections Excel records, enriching and manipulating the metadata before transforming it into MARC for loading into OCLC and our local Voyager catalog. In a similar workflow, metadata describing ETDs had been captured separately for our institutional repository and for OCLC. By harvesting DC metadata from DSpace/SOAR (Shocker Open Access Repository) and transforming it with XSLT in MarcEdit, we were able to automate the creation of MARC records for OCLC. Both cases of metadata repurposing promote the sharing and reuse of metadata across systems and have greatly improved our cataloging workflow.<br>
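The harvest-and-transform workflow described above (DC out of DSpace, MARC into OCLC) can be sketched in miniature. The following Python illustration uses a hypothetical element-to-tag crosswalk, not WSU's actual XSLT mapping:

```python
# Minimal sketch: map one harvested oai_dc record to MARC-style fields.
# The DC_TO_MARC crosswalk below is illustrative, not WSU's actual mapping.
import xml.etree.ElementTree as ET

# Hypothetical crosswalk from DC elements to MARC tag/subfield pairs.
DC_TO_MARC = {
    "title":      ("245", "a"),
    "creator":    ("100", "a"),
    "date":       ("260", "c"),
    "identifier": ("856", "u"),
}

def dc_to_marc_fields(dc_xml: str):
    """Return a list of (tag, subfield, value) tuples for one DC record."""
    root = ET.fromstring(dc_xml)
    fields = []
    for elem in root:
        name = elem.tag.split("}")[-1]  # strip the namespace prefix
        if name in DC_TO_MARC and elem.text:
            tag, sub = DC_TO_MARC[name]
            fields.append((tag, sub, elem.text.strip()))
    return fields

sample = """<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                       xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>A Sample Thesis</dc:title>
  <dc:creator>Doe, Jane</dc:creator>
  <dc:date>2008</dc:date>
</oai_dc:dc>"""

print(dc_to_marc_fields(sample))
# → [('245', 'a', 'A Sample Thesis'), ('100', 'a', 'Doe, Jane'), ('260', 'c', '2008')]
```

In the production workflow this mapping work is done by an XSLT stylesheet inside MarcEdit rather than by hand-written code; the sketch only shows the shape of the crosswalk decision.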

This presentation will analyze the tools used to perform these transformations: MarcEdit's Delimited Text Translator and Metadata Harvester, developed by Terry Reese. It will discuss some common challenges in metadata mapping and transformation and how we resolved them in our cases. It will also address data integrity, data loss, and data maintenance issues across the resulting metadata repositories.<br> <br>
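The spreadsheet-to-MARC step works, in the spirit of MarcEdit's Delimited Text Translator, as a column-to-tag mapping over delimited rows. A small Python sketch under assumed column names and hypothetical tag assignments:

```python
# Sketch of a delimited-text-to-MARC step: each spreadsheet column is mapped
# to a MARC tag and emitted in mnemonic (.mrk-style) form, one record per row.
# The column names and tag/indicator choices below are hypothetical.
import csv
import io

COLUMN_MAP = [
    ("Title",  "=245  10$a"),
    ("Author", "=100  1\\$a"),   # backslash = blank indicator in .mrk notation
    ("Year",   "=260  \\\\$c"),
]

def rows_to_mrk(tsv_text: str) -> str:
    """Turn tab-delimited rows into mnemonic MARC records."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    records = []
    for row in reader:
        lines = ["=LDR  00000nam a2200000 a 4500"]
        for col, prefix in COLUMN_MAP:
            if row.get(col):
                lines.append(prefix + row[col].strip())
        records.append("\n".join(lines))
    return "\n\n".join(records)

data = "Title\tAuthor\tYear\nPoem One\tGoldbarth, Albert\t1998\n"
print(rows_to_mrk(data))
```

The real tool adds much more (constant data, indicator rules, field ordering, character-set handling); the sketch only shows why a clean, consistent spreadsheet is the precondition for a usable batch of MARC records.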

<b>Multiple sources, multiple paths: Migrating metadata between systems</b><br> Lucas Wing Kau Mak, Michigan State University<br>

<b>Abstract:</b> Migrating metadata from one system to another requires a thorough understanding of the differences between source and target metadata, as well as of the availability and limitations of transmission and transformation technologies. Michigan State University Libraries has been trying out a new digital asset management system that uses Qualified Dublin Core. Instead of reinventing the wheel, we decided to migrate existing metadata to the new system. Since our digital collections are accessible through both a homegrown MySQL database and the library OPAC, we have two sets of source data to choose from: Dublin Core from MySQL and MARC from the catalog. Besides multiple data sources, we also have multiple data extraction mechanisms available, namely XML export from MySQL and the OPAC, OAI harvesting, and the "create list" functionality within Innovative Interfaces' Millennium cataloging module. This presentation will discuss how and why we decided to adopt a workflow that extracts MARC records from the OPAC and then transforms them into QDC. The presenter will also discuss unexpected difficulties faced when transforming MARC records into QDC.<br>
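The chosen direction, MARC out of the OPAC and into QDC, can be sketched as follows. This Python illustration uses a simplified record structure (a dict of tag to subfield values) as a stand-in for real MARC parsing, and the tag choices are hypothetical rather than MSU's actual crosswalk:

```python
# Minimal sketch of a MARC-to-Qualified-Dublin-Core transformation:
# pull a few fields from a simplified record and emit QDC elements.
# Tag choices and the dict-based record structure are illustrative only.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
DCTERMS = "http://purl.org/dc/terms/"

def marc_to_qdc(record: dict) -> str:
    ET.register_namespace("dc", DC)
    ET.register_namespace("dcterms", DCTERMS)
    root = ET.Element("qualifieddc")
    if "245" in record:
        ET.SubElement(root, f"{{{DC}}}title").text = record["245"].get("a", "")
    if "260" in record:
        # MARC dates carry ISBD punctuation that QDC should not, e.g. "c1998."
        raw = record["260"].get("c", "")
        ET.SubElement(root, f"{{{DCTERMS}}}issued").text = raw.strip("c.[] ")
    return ET.tostring(root, encoding="unicode")

rec = {"245": {"a": "Michigan Flora"}, "260": {"c": "c1998."}}
print(marc_to_qdc(rec))
```

Even this toy version surfaces one of the real difficulties the abstract alludes to: MARC values embed cataloging punctuation and conventions that have no place in Dublin Core and must be stripped or normalized during transformation.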

<br> <b>Making it work in 2 places: Maneuvering between Emory’s EUCLID catalog and "DiscoverE" Primo system </b><br> Laura Akerman, Emory University<br>

<b>Abstract:</b> In the process of porting records from our ILS to provide our users with a new "DiscoverE" interface (Ex Libris’s Primo), we have encountered some interesting choices. Two examples involve e-journals, print journals, and new "deduping" features. Changes in both the catalog data and the Primo settings were needed to make records display in Primo with the fullest information (in the case of titles) and just the right information (in the case of call numbers). Handling these and other issues has taught us a lot (for example, fun with regular expressions) but has also led us to wonder what changes in the underlying metadata structures and tools could do to support multiple discovery interfaces in a more "seamless" way.<br>
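The "fun with regular expressions" mentioned above typically involves normalizing catalog strings so a discovery layer can sort, match, or dedupe on them. A small illustrative example (not Emory's actual rules): splitting an LC-style call number into its class and cutter parts with a regex.

```python
# Illustrative only: parse an LC-style call number into sortable parts.
# Real call-number normalization rules are considerably more involved.
import re

CALL_NO = re.compile(r"^([A-Z]{1,3})\s*(\d+(?:\.\d+)?)\s*(\..+)?$")

def parse_call_number(raw: str):
    """Split 'PS3557 .O354' into class letters, class number, and cutter."""
    m = CALL_NO.match(raw.strip())
    if not m:
        return None  # string does not look like an LC call number
    letters, number, cutter = m.groups()
    return {"class": letters, "number": float(number), "cutter": (cutter or "").strip()}

print(parse_call_number("PS3557 .O354"))
# → {'class': 'PS', 'number': 3557.0, 'cutter': '.O354'}
```

Once the parts are separated like this, a discovery system can sort numerically on the class number instead of lexically on the raw string, which is one reason such expressions end up in migration workflows.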
