Electronic Lab Notebooks (ELNs) are digital tools that help address growing concerns around research data. Researchers are concerned about reproducibility, and granting agencies are concerned about data management and availability. Labs are also collaborating with people around the world. Paper just doesn’t cut it anymore. With ELNs, researchers can capture notes about their experiments and attach the generated data files directly to them. Then they can give collaborators access to the data by sharing the notebook. Plus, many notebooks support metadata generation and advanced search capabilities, something paper notebooks cannot do.
Join us to hear about the general concept of ELNs, some of the popular products on the market today, what supporting them looks like, and a bit about how to obtain one for your university.
An Institution-wide Examination of Data Needs: NSF EPOC Deep Dive at the University of Cincinnati
The Engagement and Performance Operations Center: Overview and Opportunities. The Engagement and Performance Operations Center (EPOC), funded by the US National Science Foundation, is a collaborative focal point for operational expertise and analysis jointly led by Indiana University (IU) and the Energy Sciences Network (ESnet). The Center enables researchers to routinely, reliably, and robustly transfer data through a holistic approach to understanding the full pipeline of data movement, in order to better support collaborative science. Through its measurement and monitoring work, as well as associated service advice and training, it brings together multiple knowledgeable and experienced science engagement teams.
A University’s Point of View. The University of Cincinnati was fortunate to be one of the first EPOC Deep Dive locations. We will 1) share the process we used to collect the data/case studies from researchers before EPOC’s onsite visit, 2) describe the two-day visit with the EPOC team, and 3) discuss the outcomes, challenges, and opportunities for implementing the recommendations.
Thanks to all the attendees and participants in the Capabilities Model activities at PEARC this year!
Following the CaRCC Town Hall, the Caps Model paper presentation, and the full-day Capabilities Model workshop, we had an overwhelming number of new downloads for the tool. Because of this and the feedback from those institutions already working through the Model, we are extending the data submission deadline to September 27, 2020. We hope the extra time will allow everyone to complete the Model and meet the 2020 community data submission deadline.
For August, we’ll do something a little different, with two opportunities for cross-track conversation. (These are in lieu of track-specific calls, which are otherwise canceled for August.)
PEARC20 After-Party: August 4th @ 1:00pm-2:00pm ET
(at the usual Data-Facing track meeting time)
Be sure to join at the top of the hour to finalize topic-based breakout rooms. We’ll use a Blackboard Collaborate session, which has a feature for breakout rooms that participants can move between freely. Potential breakout topics suggested by our track coordinators are listed below:
New Applications Tech
New Systems/Services Tech
Communicating about RCD Resources
Review of Sessions about CaRCC Activities
Happy Hour (whatever that means to you)
The Blackboard Collaborate session and a Google Doc for notes have been shared via email with the entire People Network, along with a calendar invite. If you did not receive these and would like to join the call, please contact firstname.lastname@example.org
Service Models for Researcher-Purchased Computing and Storage: August 20th @ 1:00pm-2:30pm ET
(at the usual Systems-Facing track meeting time)
Description: The term “condo” is an umbrella term for an increasingly common family of service models for research computing and data storage in higher education. However, the way this design pattern manifests can vary greatly from one institution to another, and there’s no single right way to implement for-purchase computing and data capacity. The purpose of this call is to discuss approaches to researcher-purchased capacity from a variety of perspectives, including systems professionals, support and facilitation professionals, researchers, and other stakeholders. Discussion areas will include ownership models, funding and purchase strategies, user experience/policy considerations, hosting and operational support, and more. The discussion will follow a panel format, with a series of short site introductions by representatives of diverse service models, followed by a longer Q&A. To accommodate the multiple perspectives and facets, this call will be 90 minutes long.
The usual Zoom coordinates have been distributed via email to the People Network email list, or can be requested via email@example.com.
Version 1.0 of the Research Computing and Data Capabilities Model (RCDCM) was released in Spring of 2020. We will aggregate contributed assessment data from the community, and make this available in the Fall. If you complete an institutional assessment and contribute your results to the 2020 Community Data collection by August 30 (deadline extended to September 27) you will get access to the detailed version of the data, allowing you to benchmark your institution’s program relative to peer institutions.
We know there is broad interest in having a community dataset: 88% of the institutions that have requested to use the Assessment tool listed “Benchmarking of current service offerings” as an intended use of the model. Users of the model are fairly diverse: as of early summer, over 70 institutions representing 32 states are participating, both public and private, including a mix of R1s, R2s, and institutions with emerging research programs. The more institutions that participate, the more useful the Community Data collection becomes!
What data to keep? — Making decisions about confocal microscopy data
Presenters: Huajin Wang, Librarian/ Program Director for Open Science & Data Collaborations, Carnegie Mellon University Libraries and Susan Ivey, Research Data & Infrastructure Librarian, NC State University Libraries
As the volume of data produced by research increases exponentially, it has become increasingly challenging to preserve and reproduce those data. Traditionally, researchers have created their own workflows and their own data storage solutions, but this is no longer sustainable and makes collaboration and data sharing challenging. Data librarians, in turn, are tasked with helping researchers share and preserve their data, but understanding specific types of data and how to maximize reuse can be difficult. Large and complex data exist in a variety of disciplinary areas; one example is confocal microscopy data. In April 2019, the Data Curation Network held its 2nd Data Curators Workshop at Johns Hopkins University. Susan Ivey, Amy Koshoffer, Gretchen Sneff, and Huajin Wang formed a group to address many of the issues associated with confocal microscopy data. During this July’s Data-Facing Call, we’ll go into detail about the common workflows and challenges that researchers face when working with confocal microscopy data and give an overview of our “Confocal Microscopy Data: A Primer for Curators,” which we created to help those tasked with curating this type of data. We’ll also present some of the use cases that informed this work and invite the audience to think about how best to preserve and share these data.
Have questions about how to get started with the Research Computing and Data Capabilities Model? Or are you already working with it and just want to discuss the process, or a particular aspect of the assessment tool? Join working group members at one of our upcoming Office Hours to get help, ask your questions, and share your experiences! Office Hours for Summer and into Fall are scheduled for:
Topic: How are we doing? A discussion on philosophies/culture, approaches, and tools for understanding creation of knowledge, metrics, and impact
Our recent calls, from handling support requests via various tools/modes, to remote work for support/consultations and training, to working with your (remote) team in this unprecedented time, have been an unexpected but fruitful journey. This month we close the loop: considering your team’s overall efforts, both internal- and external-facing, and the processes and tools involved, what is your philosophy on measuring your activities and impact, and how do you measure them? What information do you gather around internal- or researcher-facing activities? Are you using approaches or tools that harness NLP, ML, or predictive analytics? And do you have specific goals that you strive toward? Join us and share.
Wednesday, May 20, 12pm ET / 11am CT / 10am MT / 9am PT / 7am HT
Note: The Data-Facing and Systems-Facing calls will NOT happen at their normal times in May; please join the joint call above instead.
Join us for a community panel on effectively incorporating student workers into research computing and data groups. Topics will include: hiring, development, structuring student positions, work assignments, challenges, managing remote work, and training. Bring your questions for the panelists.
Panelists: Amy Neeser (University of California Berkeley); Tony Elam (University of Kentucky); Colby Witherup Wood and Alper Kinaci (Northwestern University); Amy Work and Stephanie Labou (University of California San Diego); Betsy Hillery (Purdue University); Joanne Luciano (University of the Virgin Islands); Brian Haymore (University of Utah); Troy Baer (OSC)
Description: Let’s continue the discussion around remote support and how you make it work. This month, we’ll focus on how you work with your team: how have your collaboration practices with your team shifted given our work-from-home reality? How effective have they been? What would you do differently? And what has surprised you? Add your comments and questions to the call document in advance of the meeting, or anytime throughout the call!