Responding to Covid-19

All CiS staff are currently working remotely and can still be contacted as usual, with the exception of our office phone lines. Please get in touch with the team using the group email address: Alternatively, raise a ticket in our support portal: (VPN connectivity required). We are also happy to work with you via instant messaging, voice or video conf ...[Read More]

We are recruiting for a System Administration Assistant

Salary: £24,815 – £30,270 | Hours per week: 37 | Duration: 12 months | Application deadline: 19 Jan 2020. The post-holder will work within a professional team of HPC, virtualisation and storage experts in Computing infrastructure for Science (CiS), administering a Service Desk ticketing system, including triaging and distributing user requests, documenting planned and completed works, managing the system, and mainta ...[Read More]

CIMMYT HPC visit to CiS

August 2017 – CiS hosted a knowledge exchange visit from Mr Jaime Campos Serna.  Mr Serna is developing a new HPC cluster service for his Institute (CIMMYT) and visited Norwich on a fact-finding mission.

The three ‘R’s of dependability

By Paul Fretter. We need to feel that we can depend on computer services or systems, to a greater or lesser degree, according to their importance, exposure and development status. Almost by definition, critical systems need to be highly dependable, whereas faults or outages can be tolerated in non-essential and development systems. In this short article, I attempt t ...[Read More]

Earlham Institute and CiS win Bio-IT best practices award for IT Infrastructure/HPC

Bio-IT World – Best Practices Award 25th May 2017 – Boston, MA Improving Global Food Security and Sustainability By Applying High-Performance Computing To Unlock The Complex Bread Wheat Genome Using the SGI UV systems specified, procured and supported by CiS, and the integration of Edic ...[Read More]

CiS submits fix for SLURM memory allocation >4TB

April 2016. In early 2016, CiS specified, procured and installed two SGI UV300™ systems for one of its customers (the Earlham Institute, previously known as TGAC), and implemented the open-source SLURM job scheduler instead of the closed-source commercial software that is normally supported on UV systems. Each of these systems has 256 cores and 12TB of RAM, and they were purchased for running multi- ...[Read More]
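The truncated excerpt does not show the fix itself, but a large-memory job on one of these 12TB UV300 nodes might be requested along these lines (a minimal sketch; the partition name, CPU count and application name are assumptions, not taken from the post):

```shell
#!/bin/bash
# Hypothetical batch script for a big-memory node.
#SBATCH --partition=largemem    # name of the UV300 partition (assumed)
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=64
#SBATCH --mem=6T                # request 6 TB of the node's 12 TB RAM

srun ./assembly_job             # placeholder application
```

Requests above 4TB like this one are exactly the regime where the memory-allocation issue described in the post would surface.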

CiS submits enhancements for SLURM memory accounting

The SLURM scheduler has a number of mechanisms it can use to manage jobs. In our current configuration at CiS we encapsulate processes and tasks using cgroups. This protects the nodes and other jobs from runaway applications. Although jobs are encapsulated within cgroups, the statistics are monitored via the jobacct_gather/linux plugin. While SLURM does offer the jobacct_gather/cgroup plugin whi ...[Read More]
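The combination described above, cgroup encapsulation with the jobacct_gather/linux accounting plugin, corresponds roughly to a slurm.conf fragment like the following (a sketch of the relevant parameters only; the actual CiS configuration is not shown in the excerpt):

```
# slurm.conf (excerpt, illustrative)
ProctrackType=proctrack/cgroup           # track job processes via cgroups
TaskPlugin=task/cgroup                   # confine tasks to their cgroup
JobAcctGatherType=jobacct_gather/linux   # gather usage statistics from /proc

# cgroup.conf (excerpt, illustrative)
ConstrainCores=yes                       # pin jobs to their allocated cores
ConstrainRAMSpace=yes                    # enforce the job's memory limit
```

The design point the post highlights is that enforcement (cgroups) and accounting (jobacct_gather/linux) are configured independently, which is why the statistics plugin can differ from the containment mechanism.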

TGAC re-brands as the Earlham Institute

As of Monday 27th June 2016, the Genome Analysis Centre has been renamed The Earlham Institute. Please view the new website here:

New UV300 big-memory systems for TGAC

We (CiS) have recently completed the procurement of two new UV300 systems for The Genome Analysis Centre (TGAC). Each system comprises 256 CPU cores (16× 16-core E7-8867 v3 Haswell), 12TB of RAM, 16× 2TB Intel NVMe flash and a Fibre Channel-connected 100TB InfiniteStorage IS5100 disk array. That’s a combined capacity of 512 cores, 24TB of RAM, 64TB of NVMe flash and 200TB of scratch disk. Right now, thi ...[Read More]

Thinking of Exascale?

Trying to achieve Exascale performance using general-purpose scalar CPUs would be a very tall order. The world’s fastest machine is currently Tianhe-2, with a theoretical peak performance of 33 petaflops (Pflop/s), and it already consumes a whopping 24MW of power! An Exascale system will need to be at least 30 times faster, so what does that say about the power requirements? (I’ll let ...[Read More]
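The back-of-envelope arithmetic implied by the figures above can be made explicit. This is a naive linear extrapolation, deliberately ignoring the efficiency gains each hardware generation brings, which is the author's rhetorical point:

```python
# Naive power scaling from Tianhe-2 to Exascale, using the figures in the post.
EXAFLOP_PFLOPS = 1000        # 1 exaflop/s expressed in Pflop/s
TIANHE2_PFLOPS = 33          # Tianhe-2 theoretical peak (Pflop/s)
TIANHE2_POWER_MW = 24        # Tianhe-2 power draw (MW)

speedup = EXAFLOP_PFLOPS / TIANHE2_PFLOPS        # ~30x faster needed
naive_power_mw = TIANHE2_POWER_MW * speedup      # power at the same flops/watt

print(f"speedup needed: {speedup:.1f}x")
print(f"naive power requirement: {naive_power_mw:.0f} MW")
```

At constant flops-per-watt this lands around 700MW, the output of a sizeable power station, which is why Exascale designs cannot simply scale up today's general-purpose CPUs.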