The first thing that struck me when I arrived at Charleston was how big it was. Over 1400 people had made the trip to talk libraries, making it the biggest the conference has ever been and a sign that it's still growing year on year.
What makes Charleston special is that it's a one-of-a-kind event: it attracts a very wide range of information professionals, from technical services staff to academic librarians, while remaining tightly focused on that market. Its nearest competitors would be ALA, SLA or ACRL, all of which are much larger and more general events. So it was no surprise that an increasing number of international publishers are coming to the conference each year. This time around Intellect and Thieme caught my eye as the latest internationally minded publishers to catch the Charleston bug.
Beyond the buzz and general growth of the conference, however, two themes dominated the discussions people were having, both in the sessions themselves and in private meetings: discoverability and return on investment (ROI).
As you'll have seen in the Storify of Charleston that we posted on Wednesday, librarians are thinking hard about what they can do to help patrons discover what they need from their university's collection. They're conscious that the process for identifying specific content in many libraries is still too complex, and that the general queries most users still rely on return too many results to be truly useful. This is a particularly pressing issue for librarians of specialised collections, whose content may be under-used because it is difficult to access. In such circumstances they can find it increasingly difficult to justify the cost that maintaining these collections represents to their organisation.
A proposed solution to this problem formed the basis of the conference's opening session, "The Semantic Web for Publishers and Libraries", delivered by Stanford University Librarian Michael Keller. In it he argued that the way to improve discoverability is for librarians to publish the bibliographical information on which discoverability depends using open standards (RDF, with URIs as identifiers) – effectively to open source it.
His vision of open source metadata was a big and exciting one, and if it does get off the ground it has the potential to transform the discovery experience. Like all big visions, however, it will be complex and challenging to implement. What it does highlight is that, as librarians and publishers become increasingly aware of the need to develop really great metadata for their collections, demand for semantic tagging and data mining will grow.
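To make the idea concrete, here is a minimal sketch – not from Keller's talk – of what one openly published catalogue record might look like as RDF triples, serialised as N-Triples using only plain Python. The catalogue URI, record number and choice of Dublin Core terms are all illustrative assumptions; real records would draw on richer vocabularies.

```python
# Sketch of open bibliographic metadata as RDF triples (illustrative only).
# The subject is a URI identifying the record, so any software can link to it.

DCTERMS = "http://purl.org/dc/terms/"  # Dublin Core metadata terms
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

book = "http://catalog.example.edu/record/12345"  # hypothetical record URI
triples = [
    (book, RDF_TYPE, "http://purl.org/dc/dcmitype/Text"),
    (book, DCTERMS + "title", '"Linked Data for Libraries"'),
    (book, DCTERMS + "creator", '"A. Librarian"'),
    (book, DCTERMS + "issued", '"2012"'),
]

def to_ntriples(triples):
    """Serialise (subject, predicate, object) triples as N-Triples,
    one statement per line; URIs in angle brackets, literals quoted."""
    lines = []
    for s, p, o in triples:
        obj = o if o.startswith('"') else f"<{o}>"
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

print(to_ntriples(triples))
```

Because the output is plain text built from shared vocabularies and stable URIs, any discovery service – not just the originating library's – can harvest, merge and index it, which is the crux of the open metadata argument.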
Which brings us to the return-on-investment question. Improved discovery will have a positive effect on usage statistics, and working on this part of the process can therefore justify some of the costs that library collections represent to an organisation. After all, if users can't discover the content in libraries, there can't be any ROI. And trends such as Patron Driven Acquisition, which tie the use and purchase of items in a library's collection much more closely together, are all efforts to provide users with what they really want, rather than what librarians think they want.
What discovery can’t answer, however, is what value the same organisation derives from making this research material available. And in my experience it’s definitely this question that’s keeping librarians awake at night.
The librarians I spoke to said they're under a lot of pressure from their institutions to provide tangible evidence that their collections deliver value for the costs they represent. They're therefore crying out for value-analysis tools and packages that will allow them to peg the research material they make available to real outcomes. As with most ROI models, connecting the availability of research to the commercial benefit of the published papers, innovations or new products and services developed in response to it will be fiendishly difficult, but it will also be necessary in a world where every penny counts.
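As a rough illustration of the simplest end of this tooling, the sketch below computes cost per use – a crude but widely used first proxy for collection value, well short of the outcome-level ROI librarians are really asking for. Every title, figure and threshold is invented for the example.

```python
# Toy cost-per-use report for a handful of (invented) subscriptions.
subscriptions = {
    # title: (annual cost in GBP, downloads this year)
    "Journal of Examples": (4800, 1200),
    "Annals of Hypothetica": (6000, 150),
    "Specialised Review": (2500, 25),
}

def cost_per_use(cost, uses):
    """Annual cost divided by recorded uses; None if never used."""
    return round(cost / uses, 2) if uses else None

report = {title: cost_per_use(c, u) for title, (c, u) in subscriptions.items()}

# Flag titles whose cost per download exceeds a (hypothetical) threshold
# at which buying articles individually might be cheaper than subscribing.
THRESHOLD = 30.0
flagged = sorted(t for t, cpu in report.items() if cpu and cpu > THRESHOLD)

print(report)
print(flagged)  # ['Annals of Hypothetica', 'Specialised Review']
```

Even this toy version shows why usage data alone isn't enough: a specialised title with a high cost per download may still underpin a grant or a patent, which is exactly the outcome-level link the tools librarians want would need to capture.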