IBM’s THINK 2021: Processors & Code?

From the outset of IBM THINK 2021, a multi-session virtual conference, the company made it clear: “IBM is ‘100% in’ on augmented intelligence (AI) and the hybrid cloud.” Session after session and product discussion after product discussion reinforced this message.  But hearing and seeing are two different senses.  What I heard was: “AI, hybrid cloud, AI, hybrid cloud, AI, hybrid cloud…” But what I saw was processors and code.

I have always been a firm believer that owning microprocessor development is crucial to advanced system design.  Companies that make their own microprocessors can tune and hone them to perform specific functions.  And, by so doing, they can create distinct competitive advantages, open new markets, and deliver significant processing speed and security advantages to their customers.  Processors are crucial to market differentiation, as well as to workload execution.

I have also long believed that application development is too complex.  Developers have to figure out how best to design a program (the program workflow); they have to learn specific programming languages; then they have to write their programs, debug them, tune them, and test them before deploying and maintaining them.  In an ideal world, it would be much simpler if an application owner could simply describe a process flow to a system and have the system write the code needed to execute that process flow.  What I heard at THINK is that IBM is starting to head in that direction by enabling AI to speak “the language of machines.”

This report focuses on the critical messages that IBM delivered at THINK 2021 around processors and coding.

First things first: Comments on the THINK Video Conference

In years past, IBM THINK (and its predecessor events under other names) was a large in-person gathering (10,000+ attendees and scores of sessions) where users, business partners and IBM executives convened to share messaging, learn new skills, learn about new technologies and network with other practitioners.  But, given the COVID risks, the event has gone the videoconferencing route for the past two years.  And, as much as I liked the in-person events – primarily because they gave me a chance to meet with users (to better understand their challenges) and with vendors (who tell me what is really going on in the marketplace) – I must admit I like this electronic format a lot better from an “IBM messaging” and “logistics” perspective.

Regarding messaging, at in-person events the sessions I wanted to attend often overlapped with one another.  With the recorded videoconferencing approach, I can better tailor THINK sessions to my needs – without missing any that interest me.  It’s like a buffet – you can choose what you want and consume as much as you want on the topics of genuine interest to you.

The other thing I like about the recorded format is that I can stop a presentation, rewind it, and replay statements that may not have been clear – or that I need to note.  With recorded sessions, you don’t miss a thing.  And most of the videos include closed captions at the bottom of the screen – so if anything is unclear, you can always refer to the written subtitles.

As for logistics – have you ever gone to a presentation and realized it was not what you expected?  With the videoconferencing format, if a session does not meet your needs, you simply stop the video and move on to the next one (no embarrassing need to stand up and leave a conference room while a speaker is presenting…).

As much as I liked the in-person format, I like the videoconference format even more.

Back to the THINK messaging

Arvind Krishna, IBM’s Chairman and CEO, opened the conference with descriptions of IBM’s hybrid cloud and AI missions.  In hybrid cloud, IBM’s mission is to modernize mission-critical workloads with cloud services across public and private clouds.  In AI (formerly “artificial intelligence,” now referred to by IBM as “augmented intelligence”), IBM sees its mission as “making sense of large amounts of data” and using AI to automate processes.  Both technologies, IBM believes, are vital to accelerating business transformation (rebuilding entire business models into flexible core processes that speed services to enterprises and their customers).

But here’s the fun part (at least for me…).  Mr. Krishna also spoke about IBM’s CodeNet initiative and IBM’s foray into 2 nanometer (nm) chip design.  (I love this stuff).

Project CodeNet

Back to my idea that humans should determine a business need, articulate a process to achieve this need, and then relay that process verbally to a computer that can generate the code and algorithms to implement that process.  It will be a long time until this scenario becomes a reality – but someday, it will happen.

In the meantime, steps are being taken to reach that goal – and one such step is a new discipline known as “AI for Code.” The idea is to simplify code writing and deployment for software developers.  Using AI-based technologies (such as natural language processing), combined with intelligent code analysis and compilation techniques, AI tools can automatically streamline code development – performing tasks such as code search, code summarization, and translation from one programming language to another (which helps modernize legacy software by restructuring older monolithic applications into microservices), and much more.
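To make one of those tasks concrete, here is a minimal, hypothetical sketch of code search.  It is not an IBM or CodeNet tool – it simply ranks snippets in a tiny corpus by token overlap with a natural-language query, a crude stand-in for the learned code representations a real AI for Code system would use.

```python
# Toy illustration of one "AI for Code" task: code search.
# NOT an IBM or CodeNet tool -- it ranks snippets by token overlap with a
# query, a crude stand-in for the learned code representations a real
# system would use.
import re
from collections import Counter

def tokens(text):
    """Lower-cased word/identifier tokens from code or prose."""
    return Counter(re.findall(r"[a-z_]+", text.lower()))

def search(query, corpus):
    """Rank corpus entries by the number of tokens they share with the query."""
    q = tokens(query)
    scores = {name: sum((tokens(code) & q).values()) for name, code in corpus.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical two-snippet corpus, purely for illustration.
corpus = {
    "binary_search.py": "def binary_search(items, target): ...",
    "bubble_sort.py": "def bubble_sort(items): ...",
}
print(search("search a sorted list of items for a target value", corpus))
```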

In the area of computer vision, there is an extensive, curated image dataset known as ImageNet.  This repository provides millions of labeled images for comparison and experimentation, along with benchmarks for testing and refining image-recognition models.  Having such a common dataset to simplify the development of powerful algorithms has enabled developers to deliver image-understanding capabilities to market more quickly – acting as a catalyst that speeds advancements.

With its “Project CodeNet,” IBM has now done the same for code.  Its stated goal is “to provide the community [with] a large, high-quality, curated, open-source dataset that can be used to advance AI techniques for source code.” It intends to do for the field of AI for Code what ImageNet has done for computer vision – to act as a catalyst for AI for Code development, benchmarking and experimentation.

Today, AI techniques are already being used to generate computer code.  But the task of programming does not end with code generation.  Developers need to manipulate code to modernize it, fix bugs, speed performance, make code more secure and ensure compliance with regulations.  Project CodeNet gives developers a large body of code examples and benchmarks against which AI-based tools for these tasks can be built and measured.  The dataset can also be used for instructional purposes – helping developers frame AI for Code problems and learn how to benchmark and deploy AI for Code solutions more rapidly and efficiently.

There are other initiatives in AI for Code, including Codata, Ponicode, TransCoder, and OpenAI’s GPT-3 model.  But IBM’s effort eclipses these in scale: CodeNet covers 4,000 coding problems with 14 million code examples and over 500 million lines of code, and it supports 55 programming languages – including C++, Java, Python, Go, COBOL, Pascal, and FORTRAN – though most of the examples are written in C++, C, Python, and Java.
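For readers who want to poke at the dataset themselves, here is a hedged sketch of how one might tally CodeNet submissions by language.  The directory layout (a “metadata” folder with one CSV per problem) and the “language” column name are my assumptions about the published release – verify them against the actual download before relying on this.

```python
# Hedged sketch: tally Project CodeNet submissions by language.
# The "Project_CodeNet/metadata" layout, "p*.csv" naming, and the "language"
# column are assumptions about the released dataset -- check the actual files.
import csv
from collections import Counter
from pathlib import Path

def count_languages(metadata_dir):
    """Count submissions per programming language across all metadata CSVs."""
    counts = Counter()
    for csv_path in Path(metadata_dir).glob("p*.csv"):
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                counts[row["language"]] += 1
    return counts

# Illustrative usage (path is an assumption):
# print(count_languages("Project_CodeNet/metadata").most_common(10))
```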

IBM’s 2 Nanometer announcement triggered thoughts

Twenty years ago, I published a book called “Visualize This” that described an evolving vision of the “sensory, virtual Internet” where various technologies would contribute to sensory experiences over the Internet (the book was sort of whimsical, and the vision is still evolving…).   But, in the first draft of that book, I wrote that I couldn’t see microprocessor chip technology being shrunk to less than seven nanometers – I just didn’t see how that could be physically possible (even my reviewers didn’t believe me – so I [regretfully] adjusted that number upwards to 20 nanometers).

So, imagine my surprise when I learned that IBM had designed a chip using “nanosheet technology” that can fit up to 50 billion transistors, using a 2-nanometer design, into an area smaller than a postage stamp!  Impressive.  (What this means is that such a chip can deliver 45% higher performance, or use 75% less energy, than today’s 7nm designs.)
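Just for fun, here is some back-of-the-envelope arithmetic on those claims.  The ~150 mm² die area is my assumption (roughly “postage stamp” sized), not an IBM figure, and the performance/energy lines simply restate IBM’s either/or framing.

```python
# Back-of-the-envelope numbers for IBM's 2 nm claims.  The ~150 mm^2 die area
# is an assumption (roughly postage-stamp sized), not an IBM figure.
transistors = 50e9        # transistors on the chip (per IBM's announcement)
die_area_mm2 = 150.0      # assumed die area in square millimeters

density = transistors / die_area_mm2
print(f"~{density / 1e6:.0f} million transistors per square millimeter")

# IBM frames the gain versus 7 nm as 45% more performance OR 75% less energy:
perf_gain = 1.45          # relative performance at the same power
energy_use = 0.25         # relative energy at the same performance
print(f"{perf_gain:.2f}x performance at equal power, "
      f"or {100 * (1 - energy_use):.0f}% less energy at equal performance")
```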

But this IBM advancement sent my mind wandering in another direction.  I haven’t been paying much attention to IBM’s other microprocessors lately, so I listened to a couple of THINK video broadcasts on IBM’s POWER processors, and to a few mainframe presentations.

Twenty years ago, IBM’s POWER processors were a volume chipset (found in IBM’s Power Systems and predecessor lines – and also in Microsoft’s Xbox game system).  But, over time, Microsoft moved elsewhere – and POWER evolved into a specialty processor line for accelerated computing tasks.

In a blog last year, I wrote: “Power Systems offer a microprocessor that is distinctly different from z & x86 processors.  POWER processors offer extremely fast I/O; they can be closely integrated with graphical processing units (GPUs) to achieve extremely high performance levels; and they offer very tight security. They have access to massive amounts of main memory.  These differences mean that there are workloads that Power Systems can process most assuredly better than x86-based servers.” But, when I wrote this, I was unaware of IBM’s forthcoming POWER10 microprocessor.

On August 17, 2020, IBM announced its new POWER10 design aimed at enterprise hybrid cloud computing and accelerated applications, including AI applications.  The new chip will roll out in the second half of 2021 using a 7nm process, with up to 3x greater processor energy efficiency, workload capacity and container density than its POWER9 predecessor.

I also learned that the new chip has support for Multi-Petabyte Memory Clusters (MPMCs) using a new technology called Memory Inception.  This technology will significantly improve cloud capacity (thus supporting IBM’s Hybrid Cloud strategy) while reducing the cost to process memory-intensive workloads (last year, I wrote about the cost advantages of this technology and approach).

But, under the covers, a chip of this design will also be excellent at processing large AI inference models.  And that observation brings me back to a point I made in the opening paragraphs: being able to design your own microprocessor gives a computer maker a distinct competitive advantage over those forced to use commodity chip designs.  POWER10 – with its massive memory management and MPMC clusters for cloud capacity, as well as its suitability for AI workloads – shows how microprocessors can be used to support major corporate initiatives (in IBM’s case: hybrid cloud and AI).

In another THINK session, IBM’s general manager for System z, Ross Mauri, reported that IBM has seen great success with its z microprocessor being used for traditional transaction processing workloads and on-chip analytics.  By running analytics on a System z, IBM customers do not need to extract, transform and load (ETL) their data to other machines for analytics processing – they can keep their data in one protected location.  Again, this is another example of how owning microprocessor development creates a competitive advantage for IBM: the z chip was modified a few years back so that it could better process analytics workloads.

Summary observations

Even though IBM’s THINK conference has had to be delivered virtually for the past two years, it is still a highly effective way to get caught up on the company’s latest strategies and product offerings.  I would argue that it may even be more effective than the in-person conferences, given the ability to play back presentations and to attend more sessions without missing any due to scheduling conflicts.

As for the highlights of THINK 2021, there were plenty of hybrid cloud and AI-focused video presentations that provided insights into IBM’s various strategies and product offerings.  But, for me, the conference pointed in slightly different directions: toward microprocessors and coding.  IBM’s announcement of “CodeNet” is highly significant because it represents a means to better educate programmers on the use of AI to speed program development and deployment.  That gets us one step closer to programming nirvana: the day when we’ll be able to talk to computer systems and have them program themselves.

As for IBM’s latest microprocessor developments, they underscore the importance of owning your own designs.  IBM’s 2nm processor is a huge step forward in performance and power efficiency.  IT executives who want really fast in-memory database processing and super-fast hybrid clusters should look very closely at IBM’s POWER10 design.  And for large-volume transaction processing, as well as ETL-less analytics processing, there’s nothing better than a System z.

As much as I like the new videoconference THINK format, I hope that it returns to an in-person event next year. There’s something to be said for the camaraderie element of rubbing elbows with like-minded people with different insights on IBM and the industry.
