A dozen years ago, back in the old “Big Data” days, I was asked by an IBM development tools specialist to describe what I thought the future of application development looked like.  Being the fantastic prognosticator that I am, I said: “someday, programming computers will be done by voice, in native languages – you’ll just describe the desired process and the computer will handle the rest.”  Pretty good pipe dream, even if I didn’t know how that would be accomplished.  That is, until now…

I haven’t been tracking the development tools/systems management marketplace for a few years now (I have been focused on other technologies, including quantum), but when I saw an invitation to attend an IBM briefing on its “Project Wisdom,” billed as a first-of-its-kind capability that automatically generates code for developers, I was intrigued.  Project Wisdom is a joint Red Hat/IBM Research project designed to generate code that helps systems programmers and cloud managers configure, deploy, and manage systems across hybrid-cloud environments.  With an artificial intelligence (AI) overlay, cloud programmers can generate complex cloud configurations using English-language commands (not by voice, per my pipe dream, but rather via English-language keyboard input [for now…]).  Once the commands are entered, AI on the back end takes over to generate the code needed to build and manage the new configuration.

After watching IBM’s demo of Project Wisdom, my key takeaway is this: I now know how my prognostication about systems that generate their own code will eventually come into being.  Someday a voice overlay will be put in front of an AI engine that can generate complex programs.  With a little guidance and oversight, systems will (as I had speculated) program themselves.

Why is this important?

Two reasons: skills and “foundation models.”

The biggest reason that Project Wisdom, and similar efforts such as GitHub’s Copilot and OpenAI’s Codex, are important has to do with skill-set availability.  Programming interactions across hybrid cloud environments can be complex, exacting, and time-consuming.  It is difficult to find programmers with the skill sets necessary to build complex, secure, cross-platform, hybrid cloud environments.  And, if they can be found, their skills are costly.

Project Wisdom greatly simplifies hybrid cloud programming efforts by enabling cloud programmers to:

  1. Enter simple commands that, using AI, generate the code needed to execute those commands;
  2. Discover similar coding, playbooks or roles (playbooks and roles are described in a footnote below) – such that a cloud programmer can capitalize on flexible, reusable previous efforts rather than reinventing the wheel;
  3. Automatically optimize content; and,
  4. Better understand what the code is doing (content explanation/impact).
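To make capability #1 concrete, here is a hypothetical illustration (the prompt wording and the generated output below are my own, not taken from IBM’s demo): a cloud programmer types a plain-English request, and a tool like Project Wisdom returns Ansible tasks of roughly this shape.

```yaml
# Hypothetical English prompt: "Install nginx and make sure it starts at boot"
# Illustrative Ansible tasks of the kind such a tool might generate:
- name: Install nginx
  ansible.builtin.package:
    name: nginx
    state: present

- name: Ensure nginx is running and enabled at boot
  ansible.builtin.service:
    name: nginx
    state: started
    enabled: true
```

The programmer reviews and runs the generated tasks rather than writing them by hand – the AI handles the module names, parameters, and YAML structure.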


A playbook is, according to Red Hat Ansible documentation, “the basis for a really simple configuration management and multi-machine deployment system, unlike any that already exist, and one that is very well suited to deploying complex applications.”  Roles are, again according to Red Hat Ansible documentation, “ways of automatically loading certain vars_files, tasks, and handlers based on a known file structure” – and grouping content by roles also allows easy sharing of roles with other users.
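For readers unfamiliar with these terms, a minimal sketch may help (the file names and role name here are illustrative, not from the Project Wisdom demo): a playbook maps a group of hosts to one or more roles, and a role groups its tasks, handlers, and variables in a directory layout that Ansible loads automatically.

```yaml
# site.yml -- a minimal playbook that applies a "webserver" role
- name: Configure the web tier
  hosts: webservers
  roles:
    - webserver

# Conventional role layout that Ansible loads automatically:
#   roles/webserver/tasks/main.yml     (tasks)
#   roles/webserver/handlers/main.yml  (handlers)
#   roles/webserver/vars/main.yml      (variables)
```

It is reusable, shareable units like these that capability #2 above discovers, sparing a cloud programmer from reinventing the wheel.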

The second reason this announcement is important has to do with “foundation models.”  To overlay AI onto a cloud deployment and management environment, an AI model needs to be created (a model is trained on a labeled dataset for the task at hand; an AI system can then learn how all of the elements in that dataset interrelate and use what it has learned to generate the code needed to execute a given task).  Project Wisdom is a “pre-trained” foundation model that generates code germane to hybrid cloud deployment and management – thus saving IT managers and data scientists from having to spend hundreds (or potentially thousands) of hours building their own AI models.

How is this implemented?

Project Wisdom is implemented as a software-stack solution that initially mixes Red Hat infrastructure with AI modeling.  Using high-quality data sources such as Red Hat’s Ansible Galaxy, combined with IBM Research’s foundation-model cluster, an intelligent Red Hat cloud environment can be created.

At the bottom of the stack is IBM’s AI value-add.  Using IBM “AI for IT” technology, a model is generated that serves as the basis for the Red Hat stack.  Code is generated in YAML (the language Ansible uses) by a broad language-understanding model that spans many IT use cases.

The middle layer is a community layer wherein various communities (with their various use cases) are given access to the underlying foundation model.  The plans are to support the Ansible Community Model; the Kubernetes Community Model (cloud native); and a general Operations Community Model (fine-tuned for specific customer cases).

The hardware side

One of the challenges in overlaying AI across a hybrid cloud environment is training the AI system.  AI models can be huge (with millions or billions of parameters) and cannot be housed in the memory of a single small GPU environment.  To train AI for Red Hat infrastructure, a parallelized, distributed software environment needs to be created – and highly efficient hardware is needed not only to generate an AI model, but also to ensure high performance of the resulting AI/infrastructure on an ongoing basis.  To further improve performance, algorithms need to be optimized to be efficient and scalable in both memory and computation.

To this end, IBM’s AI foundation model and the Red Hat software stack can be configured in a distributed cluster that can run on thousands of latest-generation graphical processing units (GPUs).

And this is where IBM starts to differentiate itself from other AI suppliers in the marketplace.  Clabby Analytics has already written about IBM’s Telum processor (see here), an IBM-designed microprocessor with an AI accelerator on-board.  But now, add IBM’s AIU (AI Unit) to the specialized hardware mix.  The AIU is not a CPU or a GPU; it is an application-specific integrated circuit (ASIC) that has been designed to accelerate and virtualize AI workloads.  The AIU fits into the category known as “SoC” (system on chip), in which the CPU, graphics and memory interfaces, and other elements used to build a system are integrated into a single chip design.  The AIU has now been enabled for Wisdom and future foundation models; it is enabled in the Red Hat software stack; and integration into the Watson software stack is underway.

As I am running out of space in this blog, it is my intention to contact IBM for a more detailed briefing on its AIU.  More to follow…

Summary observation

The future of AI at IBM is to “create models that are trained on a broad set of unlabeled data that can be used for different tasks with minimal fine-tuning.”  Project Wisdom is one of the first instantiations of this future AI plan.  Expect “foundations” for other parts of its business – as well as overlays for IBM business partners.  If this approach works as billed, expect a major shift across the broad spectrum of the IT industry as artificial intelligence starts to relieve major skill shortages worldwide as systems learn to program themselves.