Every year, we come up with a different way to finish Science Boot Camp. The idea is to focus on something pertinent and useful for science librarians. We also try to make this session more interactive.
This year, our Capstone session on Friday morning will focus on evaluating the quality of journals and datasets. We hope this will be informative for attendees themselves, offer ideas on how to instruct patrons on these issues, and include tips for creating their own workshops.
Open Access, Authors Rights, & Sciences Librarian, University of Connecticut
With the rise of open access journals – new, unknown, and with the potential for deception – how do we evaluate journals for quality and trustworthiness? We have all heard stories about journals with official-sounding titles but predatory intent, as well as the numerous email solicitations asking researchers for papers, reviewers, and editors. How do we help our faculty, students, and staff learn which journals are true outlets of scholarship and which are not?
In this session we will learn how to identify quality publishing and editing practices among journals. We will try out a variety of metrics, standards, and directories for verifying the credibility of journals. We will also test a series of questions for determining both positive and negative journal qualities. These techniques, though used most often with open access journals, can be applied to any unknown journal.
NOTE: For an interactive experience, please bring a laptop or tablet to the session.
Data Analysis Specialist, Brandeis University
Academic Outreach Librarian for Government Information and the Social Sciences, Brandeis University
These days, we see so much emphasis on data. Fancy graphs appear in our news feeds, our employers are increasingly trying to make data-driven decisions, researchers are creating their own libraries of data files, and students are looking for data to complete their assignments, even when the data they find isn't particularly relevant! Beyond information literacy, data literacy is becoming a necessary part of the librarian's toolkit.
In this session, we will go over the basics of data literacy. How can we tell whether data is good? What do we mean by "good"? Where can we find the good stuff? What happens when it's not good? Participants will have an opportunity to assess a dataset in real time in order to answer these questions and any others that come up. Expect a robust conversation and knowledge share about best practices and cautionary tales in the ever-expanding world of data.