Dell Launches Open Source Omnia Suite to Manage AI and Analytics Workloads



Dell today announced the release of Omnia, an open source software stack aimed at simplifying the deployment and management of AI and compute-intensive workloads. Developed in Dell’s High Performance Computing (HPC) and AI Innovation Lab in collaboration with Intel and Arizona State University (ASU), Omnia automates the provisioning and management of HPC, AI, and data analytics workloads to create a flexible pool of hardware resources.

Omnia’s release comes as companies turn to AI during the health crisis to drive innovation. According to one recent survey, 41.2% of companies say they compete on data and analytics, while 24% say they have built data-driven organizations. Meanwhile, 451 Research reports that 95% of companies polled for its recent study see AI technology as important to their digital transformation efforts.

Dell describes Omnia as a set of Ansible playbooks that speed the deployment of converged workloads with containers and Slurm, along with the associated frameworks, services, libraries, and applications. Ansible, an automation tool now maintained by Red Hat, handles configuration management and application deployment, while Slurm is an open source job scheduler used by many of the world’s supercomputers and Linux compute clusters.
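To illustrate the kind of workload Slurm schedules on such clusters, a minimal batch script might look like the following sketch (the job name, resource values, and `train.py` script are hypothetical, not part of Omnia):

```shell
#!/bin/bash
#SBATCH --job-name=train-model    # hypothetical job name
#SBATCH --nodes=2                 # request two compute nodes
#SBATCH --ntasks-per-node=4       # four tasks per node
#SBATCH --time=01:00:00           # one-hour wall-clock limit

# srun launches the tasks across the allocated nodes
srun python train.py
```

Submitted with `sbatch`, a script like this is queued by Slurm and dispatched to the cluster nodes Omnia has provisioned.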


Omnia automatically deploys a software stack onto servers (specifically, networked Linux servers) tailored to the use case at hand, whether that is HPC simulation, neural network training for AI, or in-memory processing for data analytics. Dell claims this can cut deployment times from weeks to minutes.

“As AI, HPC, and data analytics converge, storage and networking configurations have remained siloed, making it difficult for IT teams to deliver the necessary resources for changing demands,” said Peter Manca, senior vice president at Dell Technologies, in a press release. “With Omnia’s open source software, teams can dramatically simplify the management of advanced computing workloads, helping them accelerate research and innovation.”

Above: a flowchart describing how Omnia works.

Image Credit: Omnia

Omnia can create clusters that use Slurm or Kubernetes for workload management, and it aims to build on existing projects rather than reinvent the wheel. The software automates the cluster deployment process, starting with operating system provisioning on the servers, and can then install Kubernetes, Slurm, or both, along with additional drivers, services, libraries, and applications.
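The automation pattern described above is standard Ansible. The snippet below is not Omnia’s actual code but a generic sketch of that pattern; the inventory group name and package name are hypothetical and vary by distribution:

```yaml
# Hypothetical playbook illustrating the Ansible pattern Omnia builds on:
# target a group of provisioned Linux servers, then install scheduler software.
- name: Prepare compute nodes
  hosts: compute              # hypothetical inventory group
  become: true                # escalate privileges for package installation
  tasks:
    - name: Install Slurm packages
      ansible.builtin.package:
        name: slurm           # package name differs across distributions
        state: present
```

Run against an inventory of networked servers with `ansible-playbook`, a set of playbooks like this can take a cluster from bare operating systems to a working scheduler without manual per-node configuration.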

“Engineers from ASU and Dell Technologies worked together on the creation of Omnia,” said Douglas Jennewein, senior director of research computing at Arizona State University, in a statement. “It has been a rewarding effort working on code that will simplify the deployment and management of these complex mixed workloads, at ASU and for the entire high-tech industry.”

In a related announcement today, Dell said it is expanding its on-demand HPC offerings to support additional VMware environments, including VMware Cloud Foundation, VMware Cloud Director, and VMware vRealize Operations. Beyond that, the company now offers optional Nvidia A30 and A10 Tensor Core GPUs for its Dell EMC PowerEdge R750, R750xa, and R7525 servers.



