
The DEAC HPC Cluster team: Sean Anderson, Adam Carlson, and Cody Stevens.
From Systems to Service: How Wake Forest’s Three-Person HPC Team Puts People at the Center of Research Computing
Located in Winston-Salem, North Carolina, Wake Forest University (WFU) is a premier collegiate university that balances a deep commitment to undergraduate education with a rapidly growing research portfolio. As an R2: High Research Activity institution, Wake Forest operates with a distinct mission: providing high-level resources while maintaining the personalized mentorship of a smaller campus. At the center of this mission is the Distributed Environment for Academic Computing (DEAC) Cluster.
CaRCC spoke with the High Performance Computing (HPC) team—Adam Carlson (Assistant Director of Research Computing), Cody Stevens (Senior HPC UNIX Systems Administrator), and Dr. Sean Anderson (Senior HPC UNIX Systems Administrator)—to discuss how a lean, centrally funded team of three supports eighteen departments and prepares students for the modern STEM workforce.
The following Q&A has been edited for brevity and clarity.
How did research computing and data (RCD) support get started at Wake Forest? What was the catalyst?
Cody Stevens: The DEAC Cluster, as we know it today, was born about 20 years ago through several National Science Foundation (NSF) grants primarily supporting the Department of Physics. Rick Matthews, who was the chair of Physics at the time, saw the need for a more sustainable model. Information Systems (IS), WFU’s Central IT department, eventually assumed ownership, providing the recurring funding needed for hardware refreshes and full-time professional staff.
How has the program evolved from those “grassroots” days?
Adam Carlson: I think the main evolution has been accessibility. It has shifted from a grant-funded effort into a centrally funded, enterprise-level service accessible to all departments. The cluster has become less tied to its Linux command-line roots and has been made more accessible through our user-friendly web login portal. A large factor in our success has been making our team more physically accessible to users; so much so that we are moving into a dedicated HPC Center on the Reynolda Campus this summer!

Adam Carlson giving a classroom tour of the datacenter at Wake Forest.
A Personalized Approach
If a new researcher has a project, how do you help them navigate the available resources?
Sean Anderson: We never want to discourage research; our goal is to enable it. Whether a user finds us through the IT website or a help desk ticket, the process usually starts with a 30-to-45-minute consultation. We use a questionnaire and direct conversation to brainstorm their workflow: Do they need GPUs? High-speed storage? Specialized software?
Do you ever recommend resources outside of your local cluster?
Cody Stevens: All the time. We act as the “last line of defense” for research support. If a project doesn’t quite make sense on our local cluster—perhaps because it needs public-facing dashboards or specific cloud architectures—we help them spin up instances in Amazon Web Services (AWS) or Jetstream2. We help guide them to where the research makes the most sense.
Does the university’s teaching mission change how you interact with students?
Sean Anderson: Absolutely. Wake Forest isn’t a school of 300-person lectures; our average class size is fewer than 12 students. When we visit a class to give a 20-minute presentation, we can actually talk to every student individually. Because we adopted Open OnDemand, the barrier to entry is much lower; students are up and running in minutes compared to the stressful trainings of the past.
The New HPC Center
Tell us about the evolution of Research Computing at Wake Forest.
Cody Stevens: For years, the HPC Team operated mostly behind the scenes, often physically removed from the researchers we served. Over time, as the DEAC Cluster has evolved into an Enterprise-level service, it has allowed us more time to interact directly with faculty and students. This includes teaching an Introduction to HPC class within the Computer Science Department, attending new faculty orientations, giving data center tours, and training new researchers.
Adam Carlson: The next evolution of this support will be a new HPC Center opening this summer in partnership with the Computer Science department. This is part of a building renovation that will provide new homes for four academic departments: CS, Education, Philosophy, and Entrepreneurship. We envision the space as a collaborative ‘hub’ where researchers from any department can sit and interact with us and other researchers, troubleshoot a workflow in real-time, or tell us about their project ideas. The space will also include dedicated team offices, workrooms, and a garage door feature that opens up the space to make it more inviting for special events and walk-ins!
Sean Anderson: By embedding the HPC Team in an interdisciplinary academic building, Wake Forest is making a bold statement, saying that compute power is a foundational utility for all researchers, regardless of their discipline.
Strategic Infrastructure: On-Premises vs. Cloud
How do you balance local hardware with the cloud?
Adam Carlson: Information Systems utilizes a “cloud-first” strategy for its Enterprise services, and we collaborate with those Enterprise teams where it makes sense to complement our on-prem cluster.
Cody Stevens: For example, there are some long-form projects where a researcher might do the heavy calculations on our cluster, but then need to use AWS to host a public-facing app to share their results globally.
Is there a move toward “cloud bursting” given the current market?
(Cloud bursting: when demand exceeds local computing capacity, overflow workloads are redirected to a public cloud so that applications keep running without interruption.)
Adam Carlson: We are exploring “intelligent cloud bursting.” With the current AI boom causing memory shortages and cost increases for physical servers, we want to push small, parallelized jobs to AWS during peak periods. This keeps our local resources free for massive jobs that would be very costly to run in the cloud, while ensuring our researchers never hit a bottleneck.
Regional Impact & Partnerships
Can you tell us about the NC Share project?
Adam Carlson: NC Share is an NSF grant-funded, statewide collaboration (including Duke, NC A&T, and Davidson) designed to provide HPC resources to institutions that don’t have their own. While we have our own cluster, we’ve been active in the service working group to help the program grow. It’s been a great way to connect with admins across North Carolina, and the program supports faculty statewide!
The “Human-Focused” Evolution: From Systems to Services
How has the HPC program evolved in what it offers researchers and students over time?
Adam Carlson: When I was brought on, my primary goal was technical: to transform a grassroots cluster into a stable, enterprise-level service. In the early days, success was measured in “uptime” and “hardware reliability.” As outages became less frequent, we realized that the true “bottleneck” to HPC usage was the human barrier to entry. That was more of a philosophical shift. We began to think of ourselves not just as system administrators, but as service providers and intellectual partners in research.
Sean Anderson: We are a lean, mean team of three. Because we are small, we can sit down one-on-one with students and researchers to get a sense of what they are trying to achieve and provide deeper insight into what is possible and what HPC can enable. We can help researchers determine whether their project belongs on the local DEAC Cluster, in the AWS cloud, or on a regional resource like Jetstream2.

The adoption of Open OnDemand was a pivotal moment in our evolution. By providing a web-based portal, it lets students and researchers access the cluster with ease. What used to be a laborious, time-consuming onboarding process for students is now a 20-minute classroom visit. We have been working to make ourselves the “approachable” face of IT: if a researcher has a problem that doesn’t fit into a standard support ticket, they know they can find a human being in the HPC Center who will help them brainstorm a solution, even if that solution exists outside the cluster.

Cody on the scene in the Peruvian Amazon as part of the CINCIA collaboration.
The Human Element: Success Stories
Are there stories that demonstrate the human impact of your work?
Cody Stevens: Two years ago, I traveled to Peru with a research group collaborating with an NGO called the Center for Amazonian Scientific Innovation (CINCIA). We went deep into the Amazon rainforest to study the effects of gold mining on the environment and human-assisted forest regeneration. I got to meet the researchers on the ground, help them install software, and see how they use drone imaging and AI to monitor the health of the jungle. It was an incredible way to see our “Pro Humanitate” mission in action.

Sean receiving a “Red Jacket” award from Open OnDemand in recognition of his expansive contributions to the Open OnDemand community.
Sean Anderson: On a personal note, I was recently honored as one of the first three people in the world to receive a “Red Jacket” from the Open OnDemand community. It was a humbling recognition of the work we’ve put into making HPC more accessible.
Adam Carlson: By lowering the barrier of entry with Open OnDemand, we’re seeing increased adoption of HPC not only in research, but the classroom as well. Students who have never used Linux before can have an application up and running on the DEAC Cluster in five minutes!
Is there a story that truly sums up the impact on a student’s life?
Sean Anderson: There is one student we worked with extensively who serves as the true “North Star” of our mission. He didn’t start as an expert, and like many students, he found High Performance Computing daunting at first. He became a regular fixture in the HPC offices: he took our class, engaged in the workshops, and eventually became a “power user,” leveraging the DEAC Cluster for his undergraduate research. We watched him grow from a curious student into a brilliant computational mind. He went on to a graduate program at the University of Michigan, and he later returned to Wake Forest to present his research on LLMs for genomics to the Statistics Department. I went to hear him speak. He said, “I really want to thank the HPC team. They mentored me and taught me many of the things that got me to this stage.” He wasn’t just a ‘user’ who consumed core hours or GPU credits; he was a researcher whose career and whole way of doing research had changed because of the partnership we formed, working with him individually to learn those methods and access those resources.
As the team moves into their new HPC Center this summer, this story serves as the blueprint for their future: a future where the most powerful component of the cluster isn’t just the silicon in the server room, but the human connection in the office upstairs. Go DEACs!
