The Consulting Edge: UCLA’s Human-Centered Approach to Research Computing

Located in the sunny foothills of the Santa Monica Mountains, the University of California, Los Angeles (UCLA) is a leading R1 research institution with more than $1 billion in annual research funding and a long history of supporting computational research. CaRCC spoke with Lisa M. Snyder, Director of the Computational Research Technology Groups at UCLA’s Office of Advanced Research Computing (OARC), to learn how her team supports researchers across this diverse campus. The following Q&A has been edited for brevity and clarity.

How did the Office of Advanced Research Computing get started at UCLA?

It actually has an incredibly long history and has changed names several times over the years. The organization began in the 1990s as the Office of Academic Computing (OAC), transitioned through Academic Technology Services (ATS) and the Office of Information Technology (OIT), and became the Office of Advanced Research Computing around 2020. OARC now sits under the Office of the Vice Chancellor for Research & Creative Activities.

What kinds of services and support does OARC offer?

I lead the Computational Research Technology Groups, which include several teams: a systems team that runs the campus high-performance computing cluster (Hoffman2); three consulting teams specializing in statistical methods and data analytics, computational science, and GIS and visualization; and an infrastructure support team that handles desktop support and networking.

My team is complemented by another set of research technology groups under OARC’s Mobile Research, Cloud and Data team. These groups build websites for campus entities and research projects, develop the campus mobile app, and include a disabilities and accessibility team. OARC also houses community engagement efforts like Innovate at UCLA and CESMII, a smart manufacturing institute.

Training is another significant offering. Each quarter, we partner with entities around campus, such as the Library, to develop workshops and targeted training courses. These workshops are open to the campus community and to anyone who registers. During the pandemic, they shifted to recorded sessions, and more than 140 training videos and presentations are now available on YouTube. Some are incredibly popular; one on structural equation modeling has received over 54,000 views.

Who are your clients? Are there particular groups of researchers you especially work with?

The Hoffman2 cluster, established around 2008–2009 and named after a former employee, was originally designed to address the needs of physics and other traditional high-performance computing units. While we’ve been growing and building the cluster to serve broader disciplines, we still face challenges in engaging humanities and arts programs.

The people who need HPC know they need HPC, and they come find us. In any given quarter these days, we’ll have around a thousand solid users who consume anywhere from 24 to 26 million CPU hours.

We’re working to engage what we call our “North Campus” – Humanities; World Arts and Cultures/Dance; Theater, Film, and Television – with programs like “HPC for Humanities” that focus on non-traditional use cases, such as a linguistics professor using HPC to study the syntax and semantics of archaic Indo-European languages. While HPC adoption there has been limited, these researchers have other needs, and we consult with them on things like unstructured data sets, image archives, OCR, translation, whatever. Different campus entities have different computing needs and different research support needs, so we try as best we can to serve them all.

What makes your team’s approach unique?

I would say the consulting is something that we do very well. This includes one-on-one conversations with researchers to optimize code, troubleshoot issues, explore website building options, or incorporate 3D technologies in classrooms.

Many team members are occasionally “bought out” by PIs submitting grants who purchase a percentage of staff time to work on specific projects. This embedded support across statistics, computation, and web development has been very successful.

The cluster program has also been successful for its target users, though we’ve faced challenges in building GPU capacity due to cost, power requirements, and supply constraints. In some cases, like with our Engineering school, departments are building their own GPU clusters to meet their specific needs.

How is your program funded?

The bulk of my team is core funded, but we’re open to buyouts, teaching arrangements, and grant work. The Mobile Research, Cloud and Data teams do significant sales-for-service work, building websites and developing apps.

For consulting, we’ve set boundaries: we can commit up to 10 hours or so per engagement. We’ll talk to you all you want. We’ll help you as much as we can. But if you start needing project work, we’ll set up a sales and service agreement with a defined scope of work, defined cost, defined deliverables, and a defined delivery date.

How do you measure the impact of your organization?

For programs, particularly the cluster, we track metrics like number of users, CPU hours, and jobs. We send monthly reports to faculty sponsors and quarterly administrative reports to units and divisions detailing resource usage. We maintain documentation of publications referencing the cluster and produce an annual report featuring usage statistics and stories highlighting collaborations with researchers.

Advocacy represents another important impact: We are on the ground listening and talking to researchers about what they need. OARC co-sponsors the faculty-led Institute for Digital Research and Education, which is a good platform for pushing on investments in research data services and infrastructure.

How large is your team, and what are their backgrounds?

My team includes approximately 25 people: seven supporting the cluster, four computational science consultants, four and a half positions in statistical methods and data analytics, two in GIS and visualization, and several others in computational support roles. Most team members hold PhDs, in fields ranging from Mathematics to Classics.

The team has experimented with student workers and interns, though the overhead related to training and supervision can be challenging given our workload. We currently have one work-study student and occasionally hire students for grant fulfillment or through partnerships with programs like Digital Humanities.

How does your team organize its work?

Each team has regular meetings, and I meet with my five managers collectively and individually. Different teams use different communication methods – the HPC systems team relies heavily on Slack, while the consultants are less formal, focusing on emerging researcher needs and responding to consulting requests.

In that way, we’re perhaps a little more reactive than proactive, and I’d like to see more proactive engagement with the campus.

What are your near or medium-term priorities?

Continually building up the cluster is a priority, though space, power, and cooling present ongoing challenges. Meeting researchers’ storage requirements is another massive priority, along with ensuring consultants stay current with the latest methods and technologies.

I also emphasize the need to build centralized research infrastructure. Researchers across the campus have consistent baseline needs, whether for networking, storage, or software access. There are so many of these baseline needs that it no longer makes sense to meet them at a divisional level; they need to be centralized.

What’s your elevator pitch for your team?

OARC’s mission is to intensify and broaden data-driven research and technical capabilities at UCLA. As such, we provide the central research support for the campus. My team operates the high-performance computing cluster, which is available to everyone on campus (with the caveat that we can’t support sensitive, restricted, or otherwise encumbered data), and our consultants are available to work with faculty and help enable their research.