At IBM’s Think 2018 user/partner conference, IBM’s then president and CEO, Ginni Rometty, introduced a new inflection point – a convergence that, according to her, occurs about once every thirty years in the computing industry. These inflection points take place when a fundamental convergence of technologies creates a situation that will have a profound effect on technology use and technological innovation for decades to come. She called this moment “the Data + AI inflection point” – and she sent her whole company on a mission to embed AI and machine learning into any IBM product that could be improved using AI. (For more details on that conference, see our report here.)
The company then went on a tear – incorporating AI into many of its software products (its analytics products, operations management products, security products, and more), wherever it made sense. As a result, IBM now has a huge portfolio of AI-enabled software offerings.
At the same time, IBM embarked on a mission to teach customers how to clean their data and make it ready for advanced analytics. Many of those customers have now AI-enabled their own application and database environments.
IBM also reached out to its ecosystem, recruiting independent software vendors and helping them prepare for a world that will make heavier and heavier use of AI for applications, databases and operations management. Dozens of third-party vendors have now AI-enabled their product offerings.
And now, IBM has just announced a new version of its enterprise server – the IBM z16 – with on-chip AI functionality. Get ready, world: some amazing things are about to happen!
IBM Z now has a new personality
First things first: IBM’s new Telum processor provides on-chip acceleration for artificial intelligence (AI) inferencing. (“Inferencing” means using pre-trained models to “infer” conclusions from new data – the very nature of AI applications.) Telum offers more than 6 TFLOPs of compute capacity per chip – meaning that Telum can do a lot of inference processing in real time. And because Telum runs in an IBM Z, it can do this at scale.
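To make “inferencing” concrete, here is a minimal sketch: a pre-trained model is just a set of fixed parameters, and inference is applying them to new data. The weights below are made-up stand-ins for illustration, not any real IBM or customer model.

```python
# Hypothetical sketch of inferencing: apply a pre-trained model's fixed
# parameters to a new data point. The weights here are invented stand-ins.
import math

WEIGHTS = [0.8, -1.5, 2.0]   # assumed, "pre-trained" coefficients
BIAS = -1.0

def infer(features):
    """Score one data point with a pre-trained logistic model."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))  # probability-like score in (0, 1)

score = infer([0.2, 0.1, 0.9])
print(f"score: {score:.3f}")
```

The point of on-chip acceleration is that this kind of scoring – normally millions of multiply-accumulate operations for a real model – can run inside the transaction path instead of on a separate system.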
The IBM Z organization has a long history of embedding purpose-built accelerators alongside its processor cores. (For instance, see this Clabby Analytics report on how IBM’s embedded designs improve the performance of common tasks like cryptography, compression and sorting.) Telum adds a new integrated AI accelerator with more than 6 TFLOPs of compute capacity per chip; every core has access to this accelerator and can dynamically leverage its entire compute capacity to minimize inference latency. Every time IBM announces a new mainframe, the first thing I look for is “what did they do to the CPU?” My reason (as I wrote in this 2019 report) is as follows:
A key goal in the design of central processing units (CPUs) is to offload non-essential work to other processors or specialized adapters. By doing so, the CPU can be focused on its primary task of performing mathematical calculations – churning thread after thread executing real work rather than handling system functions. On the other hand – by architecting and placing certain system functions in logic close to a CPU where those functions can be processed more expeditiously – stupendous improvements in the performance of those tasks can be achieved; Quality of Service (QoS) can be dramatically improved; and the cost of associated hardware (such as storage or peripherals) can be significantly lowered. So, when IBM announces that it will move a system function onto its CPU and co-locate that function with its microprocessor cores – IT IS BIG NEWS. By co-locating a system function on the same processor die, IBM is saying: “It is extremely important to use CPU cycles to execute this particular system function. The cost in CPU cycles is vastly outweighed by the benefits that will be achieved by handling this system function at the CPU level.”
Back in 2019, IBM moved data compression to the CPU, compressing data at the processor chip level (see this report on IBM’s On-Chip Integrated Accelerator for zEnterprise Data Compression [zEDC]). Now, IBM has moved AI to the chip level. Why is this important? Because it means the new IBM z16 can – in real time – use on-chip AI to accelerate the processing of various functions, processes and workloads. IBM has spent years AI-enabling its product portfolio; the new z16 can now significantly accelerate the performance of that portfolio. And since IBM customers have spent years AI-enabling their own production environments, the new IBM z16 can likewise greatly accelerate the performance of customer applications.
Use cases for the new AI accelerator
What use cases are likely to exploit the IBM Z integrated AI functions?
- At the transaction level, think of real-time, accelerated fraud protection that takes advantage of on-chip AI to deliver instantaneous recognition of unauthorized or illegal activities and transactions.
- Think of customers in the banking, financial, trading and insurance sectors who have written applications that use predictive analytics (solutions that use data to predict future outcomes – using patterns in that data to identify risks and opportunities). These customers will be able to greatly accelerate the performance of their queries – even performing AI functions in real-time.
- Think about accelerated operations management – activities such as monitoring application and system behavior, gaining visibility into system/application behavior, and using predictive analysis to identify and address problems before they occur – all done in real time. This would ensure that system, application and database problems can be quickly isolated and corrected.
- Think about using Telum-accelerated AI to support IBM’s various security offerings, such as its Secure Execution environment – making it possible to protect systems from aggressive attacks before or as they are happening. IBM z16 also offers transparent encryption of main memory.
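The fraud-protection use case above can be sketched as follows – a hypothetical, simplified flow in which the risk score is computed inside the transaction path itself, so the decision is made before the transaction commits. The threshold and toy scorer are assumptions for illustration, not IBM’s implementation.

```python
# Hypothetical sketch of in-transaction fraud screening: with on-chip
# inference, scoring happens in the transaction path, so the decision
# is made before the transaction completes. Threshold is assumed.
APPROVE_BELOW = 0.70   # assumed risk threshold

def screen_transaction(txn, score_fn):
    """Run inference in the transaction path and decide immediately."""
    risk = score_fn(txn)
    return "approve" if risk < APPROVE_BELOW else "hold_for_review"

def toy_scorer(txn):
    """Stand-in scorer: flags large transfers to new payees."""
    risk = 0.1
    if txn["amount"] > 10_000:
        risk += 0.5
    if txn["new_payee"]:
        risk += 0.3
    return risk

print(screen_transaction({"amount": 15_000, "new_payee": True}, toy_scorer))
print(screen_transaction({"amount": 50, "new_payee": False}, toy_scorer))
```

The alternative – shipping the transaction off-platform for scoring – adds latency that often forces banks to score only a sample of transactions; in-path inference is what makes screening every transaction plausible.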
Next topic: post-quantum cybersecurity
Switching gears: IBM’s z16, with its AI acceleration capabilities, is also being positioned at the vanguard of defense in “post-quantum cybersecurity.” Here’s the way we see it…
“Quantum Volume” is a measure of the overall power of a circuit-based quantum computing system. It involves not just the number of qubits but many other factors that influence just how “good” those qubits are. Generally, the higher the quantum volume, the more suited a quantum computer will be for solving complex problems such as undermining traditional methods of encryption.
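For the record, IBM defines Quantum Volume as 2^m, where m is the width (and equal depth) of the largest “square” circuit the machine can execute successfully – which is why it captures qubit quality, not just qubit count. A toy sketch of that definition (the pass/fail check below is a stand-in, not a real benchmark harness):

```python
# Quantum Volume per IBM's definition: QV = 2**m, where m is the largest
# width (= depth) of a "square" circuit the machine runs successfully.
# The passes_square_circuit callable is a stand-in for the real benchmark.
def quantum_volume(passes_square_circuit, max_m=64):
    best = 0
    for m in range(1, max_m + 1):
        if passes_square_circuit(m):
            best = m
    return 2 ** best if best else 0

# Stand-in: a machine whose qubit quality supports square circuits up to width 7.
print(quantum_volume(lambda m: m <= 7))  # 2**7 = 128
```

Note how the metric grows exponentially: each additional usable qubit-of-depth doubles Quantum Volume, which is why the field’s progress curve matters so much for encryption timelines.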
As Quantum Volume increases, expect advances in chemical simulation, scenario simulation and optimization, and artificial intelligence/machine learning (AI/ML). After this, the next phase of use cases in chemicals and petroleum will include advances in oil shipping/trucking, refining processes, feedstock to product and drilling locations. Advances in distribution and logistics will include freight forecasting, irregular behaviors (Ops), disruption management and distribution supply chain improvements. In financial services, quantum computers will bring advances in derivatives pricing, irregular behavior analysis (fraud analysis) and investment risk analysis. In healthcare and life sciences, advances can be expected through accelerated diagnosis, in clinical trial enhancements, in genomic analysis, and in medical/drug supply chain activities. Finally, in manufacturing, quantum computing will aid in quality control, process planning, manufacturing supply chain activities, and in fabrication optimization.
As quantum volume increases further, a third phase – driven by increases in system power and the introduction of fault tolerance, when thousands of qubits can be reliably used – may bring advances in seismic imaging, consumer offer recommendations, financial offer recommendations, disease risk predictions, and structural design and fluid dynamics.
It is this phase of quantum computing that causes us some concern, because in this phase quantum computers will be powerful enough to crack the encryption schemes that protect conventional systems – doing so in a fraction of the time it would take a conventional system. When this happens (a moment also referred to as the “Quantum Apocalypse”), conventional systems will be highly exposed to invasion by bad actors.
IBM is a leader in quantum computing – and, as such, understands how quantum systems work and how they can be used to find and attack vulnerabilities in conventional systems. IBM Z, meanwhile, is the most securable system platform in the commercial marketplace. So IBM has a vested interest in promoting the use of quantum computers while ensuring that its customers’ data remains safe. To do this, IBM is focusing on its own “Quantum Safe” program.
All current conventional systems using 128-bit encryption will someday be exposed to being cracked by quantum computers. It might take a quantum computer six months to break the 128-bit ciphers of an intended victim – but eventually that security will be cracked. One way around this, in the short term, is to implement AES-256 – encryption with a 256-bit key. Because a quantum attacker’s brute-force advantage (via Grover’s algorithm) only halves the effective key length, a 256-bit key retains roughly 128 bits of security even against quantum search – a far better margin, though migrating to truly quantum-safe algorithms remains the long-term answer.
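For context, the arithmetic behind key-length choices: Grover’s algorithm gives a quadratic (square-root) speedup on brute-force key search, so doubling the key length squares the quantum attacker’s work rather than merely doubling it. A quick sketch:

```python
# Grover's algorithm speeds up brute-force key search quadratically:
# a k-bit key costing ~2**k classical trials costs ~2**(k/2) quantum
# queries. Doubling key length (AES-128 -> AES-256) therefore restores
# the original ~128-bit security margin against a quantum attacker.
def classical_work(bits):
    return 2 ** bits

def grover_work(bits):
    return 2 ** (bits // 2)

for bits in (128, 256):
    print(f"AES-{bits}: classical ~2^{bits}, quantum ~2^{bits // 2}")

# AES-256 under Grover costs as much as AES-128 under classical search.
assert grover_work(256) == classical_work(128)
```

This is also why symmetric ciphers are considered manageable in a post-quantum world (double the key), while public-key schemes – broken outright by Shor’s algorithm rather than merely sped up – need wholly new algorithms such as the lattice-based ones discussed below.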
To make its IBM Z (and other IBM products) “quantum safe,” IBM researchers have been involved in the development of three quantum-safe cryptographic algorithms based on lattice cryptography that are in the final round of consideration in the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography Standardization process (the chosen algorithms should be announced soon). In the meantime, IBM z16 will be the industry’s first quantum-safe system, offering:
- Protection by multiple quantum-safe layers of firmware;
- Built-in dual signature scheme with no changes required; and,
- A new Crypto Express card that offers quantum-safe application programming interfaces to modernize existing and new applications (using both quantum-safe and classical cryptography).
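The dual-signature idea above can be illustrated with a toy sketch: sign with both a classical scheme and a quantum-safe scheme, and accept only if both signatures verify – so a break of either scheme alone is not enough. The keyed-hash “signatures” below are stand-ins for real schemes (say, ECDSA plus a lattice-based signature); this is a concept sketch, not IBM’s implementation.

```python
# Toy illustration of a dual-signature scheme: accept an artifact only if
# BOTH a classical and a quantum-safe signature verify. HMAC tags stand in
# for the real signature schemes here; this is a concept sketch only.
import hashlib
import hmac

CLASSICAL_KEY = b"classical-demo-key"   # stand-in for a classical key pair
PQ_KEY = b"lattice-demo-key"            # stand-in for a lattice-based key pair

def dual_sign(message: bytes):
    return (hmac.new(CLASSICAL_KEY, message, hashlib.sha256).digest(),
            hmac.new(PQ_KEY, message, hashlib.sha256).digest())

def dual_verify(message: bytes, sigs) -> bool:
    classical_sig, pq_sig = sigs
    ok_classical = hmac.compare_digest(
        classical_sig, hmac.new(CLASSICAL_KEY, message, hashlib.sha256).digest())
    ok_pq = hmac.compare_digest(
        pq_sig, hmac.new(PQ_KEY, message, hashlib.sha256).digest())
    return ok_classical and ok_pq  # reject unless BOTH schemes agree

sigs = dual_sign(b"firmware image v1")
print(dual_verify(b"firmware image v1", sigs))   # True
print(dual_verify(b"tampered image", sigs))      # False
```

The design appeal of dual signing is hedging: it stays secure as long as at least one of the two schemes holds up, which matters while the new lattice-based algorithms accumulate cryptanalytic scrutiny.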
Additionally, IBM offers discovery tools that can identify where and what cryptography is used in various applications (these aid in developing a composite picture of an enterprise’s cryptography for planning purposes). IBM also offers a new crypto discovery feature in its IBM Application Discovery and Delivery Intelligence (ADDI) product.
Bottom line: IBM is showing the proper foresight with its IBM Z line when it comes to future protection against quantum attacks.
IBM has just announced its next-generation IBM Z: the z16. With this announcement, IBM Z remains a cloud-enabled, highly secure, highly resilient transaction and analytics server – but it also becomes a cloud-enabled, highly secure, highly resilient AI server. IBM Z has a new personality.
With new, on-chip AI acceleration, Clabby Analytics expects IBM Z to further penetrate markets, grabbing new workloads while enhancing and speeding existing workload functionality and performance. In banking, expect inroads to be made in compliance testing (protection against account ID takeover and ID theft); prevention of gaming-the-system schemes; interest rate forecast improvements, and speedier loan processing and approval. In finance, expect point-of-sale payment processing with fraud detection in real time; detection of financial crimes (such as money laundering); and improved wealth management with predictive models. In trading, expect high-frequency trading analytics; algorithmic trading; and the ability to more quickly clear settlements. And, in insurance, expect real-time fraud detection for claims and images; speedier claims adjudication; and pricing and actuarial analysis for better risk assessment.
As for security, IBM already has strong cybersecurity offerings – including quantum-safe offerings. With better algorithms on the way, expect this story to be further strengthened.