The company made several tech announcements, all aimed at accelerating and scaling scientific discovery.
IBM Research on Wednesday introduced new technology and partnerships aimed at enabling companies to dynamically run massive AI workloads in hybrid clouds. The company made a series of announcements at the IEEE CAS/EDS AI Compute Symposium.
IBM Research is collaborating with Red Hat, one of the newest members of the IBM Research AI Hardware Center, to make IBM Digital AI Cores compatible with Red Hat OpenShift and its ecosystem.
The company said it is also making its toolkit for Analog AI Cores open source.
The AI Hardware Center will collaborate with Synopsys to address the challenges of developing new AI chip architectures. IBM said it is investing in infrastructure to accelerate new chip packaging development to eliminate memory bandwidth bottlenecks.
“The next generation of computing will define how we respond to crises, by accelerating the analysis of problems and the synthesis of solutions to them,” IBM said in a statement.
“The ability to make advances in AI … will empower collaborative scientific communities and bring automation and virtually unlimited computing resources to every facet of the scientific process. With the help of AI and the computers that power it, scientists will be able to accelerate and scale scientific discovery at a pace never before seen.”
To meet AI’s unprecedented demand for data, power, and system resources, IBM said it is developing a new class of energy-efficient AI hardware accelerators. The goal for the accelerators is to increase compute power by orders of magnitude in hybrid cloud environments without a corresponding increase in energy consumption.
Through its involvement in the AI Hardware Center, Red Hat is collaborating with IBM to build compatibility between IBM Digital AI Cores and Red Hat OpenShift.
Red Hat is collaborating with IBM’s AI hardware development stream and working to enable AI hardware accelerator deployment across hybrid cloud infrastructure—multicloud, private cloud, on-premise, and edge.
In traditional hardware architecture, computation and memory are segregated in different locations, IBM said. Information is moved back and forth between computation and memory units every time an operation is performed, creating a limitation called the von Neumann bottleneck.
To alleviate the bottleneck, IBM said it is developing analog AI that could provide significant performance improvements and energy efficiency by combining compute and memory in a single device.
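The contrast can be sketched in a few lines of Python. This is an illustrative model, not IBM's implementation: it counts the per-weight memory transfers a conventional architecture incurs during a matrix-vector multiply, versus an analog crossbar that computes where the weights are stored.

```python
def von_neumann_mac(weights, x):
    """Conventional multiply-accumulate: every weight must be fetched
    from memory into the compute unit before it can be used."""
    fetches = 0
    out = []
    for row in weights:
        acc = 0.0
        for w, xi in zip(row, x):
            fetches += 1          # one memory-to-ALU transfer per weight
            acc += w * xi
        out.append(acc)
    return out, fetches

def in_memory_mac(weights, x):
    """Analog-style multiply-accumulate: the stored conductances perform
    the whole product in place, so no per-weight fetch occurs."""
    out = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    return out, 0                 # the weights never leave the array

weights = [[0.5, -1.0], [2.0, 0.25]]
x = [1.0, 2.0]
digital, moved = von_neumann_mac(weights, x)
analog, _ = in_memory_mac(weights, x)
print(digital, moved)   # [-1.5, 2.5] after 4 weight fetches
print(analog)           # same result, zero weight movement
```

The fetch counter is the toy version of the von Neumann bottleneck: for a real model with billions of weights, that per-operation traffic, not the arithmetic itself, dominates time and energy.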
IBM Research is releasing an Analog Hardware Acceleration Kit as an open source Python toolkit that enables a larger community of developers to test the possibilities of using in-memory computing devices in the context of AI.
The kit has two main components: PyTorch integration and an analog devices simulator. PyTorch is an open source machine learning library based on the Torch library, a scientific computing framework with wide support for machine learning algorithms. PyTorch is used for developing AI applications such as computer vision and natural language processing.
AI practitioners can use the kit to evaluate analog AI technologies, customize a wide range of analog device configurations, and modulate device material parameters, IBM said.
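A configurable device simulator of this kind might look something like the following. To be clear, the class and parameter names here are illustrative assumptions, not the kit's actual API: the sketch only shows the general idea of a device model whose material parameters (programming noise, conductance bounds) can be dialed in before evaluating an analog multiply-accumulate.

```python
import random

class AnalogDeviceSim:
    """Hypothetical analog device model (names are assumptions, not the
    real toolkit API). Stored weights are perturbed by programming noise
    and clipped to the conductance range the device material allows."""

    def __init__(self, noise_std=0.05, w_min=-1.0, w_max=1.0, seed=None):
        self.noise_std = noise_std              # programming-noise magnitude
        self.w_min, self.w_max = w_min, w_max   # device conductance limits
        self.rng = random.Random(seed)

    def program(self, weights):
        """Write ideal weights onto noisy, bounded analog devices."""
        return [[min(self.w_max, max(self.w_min,
                     w + self.rng.gauss(0.0, self.noise_std)))
                 for w in row] for row in weights]

    def matvec(self, programmed, x):
        """Multiply-accumulate using the programmed conductances."""
        return [sum(w * xi for w, xi in zip(row, x)) for row in programmed]

sim = AnalogDeviceSim(noise_std=0.02, seed=0)
g = sim.program([[0.5, -0.5], [1.0, 0.0]])
print(sim.matvec(g, [1.0, 1.0]))  # close to the ideal [0.0, 1.0], but not exact
```

Sweeping parameters like `noise_std` is the point of such a simulator: it lets practitioners see how much accuracy a trained network loses on imperfect analog hardware before any chip is fabricated.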
The IBM Research AI Hardware Center now has 14 members, the company said.
This includes efforts with Synopsys, a provider of electronic design automation software and emulation and prototyping solutions. Synopsys also develops IP blocks used in high-performance silicon chips and secure software applications.
Going forward, Synopsys will serve as lead electronic design automation (EDA) partner for IBM’s AI Hardware Center, IBM said.
AI requires substantial interconnect bandwidth to take advantage of increases in computing power. IBM and NY CREATES are investing in a new cleanroom facility on the campus of AI Hardware Center member SUNY Poly in Albany, NY, that will focus on advanced packaging, also called “heterogeneous integration,” to improve memory proximity and interconnect capabilities.
“As AI empowers society to extend scientific exploration, we will increasingly be confronted with large data processing workloads that demand breakthroughs in processing power, memory and bandwidth,” IBM said. “Working with Red Hat, Synopsys and other partners, our advancements in AI hardware and hybrid cloud management software integration will enable models and methods that will forever change the way we solve problems.”