It’s really hard to go anywhere that doesn’t have something about AI attached to it. The local home improvement store had a combination washer/dryer that made use of AI. Machines for fabricating, machining, and many other tasks feature AI touches as well.
We can better understand AI by thinking of ways to classify it, but first we must know what isn’t AI. It’s interesting to hear some of the conspiracy theorists make claims about things we know happened—going to the moon is a good example. They seem to be saying, “I cannot fathom how that happened, therefore I don’t believe that it did.” They cannot fathom it because it took 400,000 people to make it happen. And they did it with comparatively primitive equipment in the form of toggle switches and dials.
And yet if it were happening today, it might be claimed that only AI could manage such a task. In fact, let’s take a step downward from space and into the low altitudes. Even on a small, general aviation airplane like a Beechcraft Bonanza, you can find a fairly sophisticated autopilot. It borders on the magical: it can maintain an altitude, keep the airplane traveling in a straight line toward its destination, and run a flight profile right up to the descent and landing pattern. It seems too good to be human intelligence, but there is absolutely no AI involved. The commands are reactive and unintelligent: we are drifting to the right, so head left until we’re back on course.
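That reactive, unintelligent correction can be sketched in a few lines. This is a minimal illustration, not any real autopilot's logic: the function name, the gain value, and the bank command are all hypothetical. The point is that the output is a fixed function of the current error, with no learning or inference involved.

```python
def heading_correction(desired_heading: float, actual_heading: float,
                       gain: float = 0.5) -> float:
    """Return a bank command proportional to heading error, in degrees.

    Negative means turn left, positive means turn right. Purely reactive:
    the same error always produces the same command.
    """
    # Wrap the error to [-180, 180] so the shorter turn is always chosen.
    error = (desired_heading - actual_heading + 180) % 360 - 180
    return gain * error

# Drifting right of a 090 course: the rule simply commands a left turn.
print(heading_correction(90.0, 95.0))   # -2.5 (turn left)
print(heading_correction(350.0, 10.0))  # -10.0 (shorter turn is leftward)
```

Every behavior the system will ever exhibit is already written down; nothing here ever changes with experience, which is exactly why it doesn't qualify as AI.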
The basic requirements for a system to be called AI generally include the following:
- Receive inputs, or “percepts”
- Use the percept information to inform the decision process
- Decide upon an action
- Perform the action
- Learn from the process
- Apply that learning to unknown environments or situations
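The loop described by these bullets can be sketched as a toy agent. Everything here is hypothetical: a made-up two-action environment where "right" happens to pay off, a simple running-average value update, and occasional exploration so new options get tried. It is a sketch of the percept → decide → act → learn cycle, not any particular AI framework.

```python
import random

random.seed(0)  # reproducible run for this sketch

class SimpleAgent:
    """Toy agent following the percept -> decide -> act -> learn loop."""

    def __init__(self):
        # Learned estimate of how rewarding each action is, starting neutral.
        self.values = {"left": 0.0, "right": 0.0}

    def decide(self, percept: float) -> str:
        # Decide upon an action: usually the best-looking one, but
        # occasionally explore so undervalued options can be discovered.
        if random.random() < 0.2:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action: str, reward: float) -> None:
        # Learn from the process: nudge the estimate toward the reward.
        self.values[action] += 0.1 * (reward - self.values[action])

agent = SimpleAgent()
for _ in range(200):
    percept = random.random()                    # receive a percept
    action = agent.decide(percept)               # decide and perform
    reward = 1.0 if action == "right" else 0.0   # hypothetical feedback
    agent.learn(action, reward)

# After the run, the agent prefers the action that paid off.
print(agent.values["right"] > agent.values["left"])
```

The contrast with the autopilot is the last bullet: the agent's behavior at the end of the run differs from its behavior at the start, because experience changed its internal estimates.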
Depending on the application for AI, data (real or synthetic) can be loaded to inform a process. We see a lot of emphasis on palletizing and truck loading. Each process has the goal of wasting the least amount of space while building a structure that is sturdy, easy to move, and easy to break apart again. That’s a lot to ask, but many companies are asking AI to do exactly that.
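The "waste the least space" part of that goal is a bin-packing problem. As a rough illustration, here is the classic first-fit decreasing heuristic in one dimension; this is a textbook sketch, not any vendor's palletizing method, and real palletizing is three-dimensional with stability constraints on top.

```python
def pack(sizes: list[float], capacity: float) -> list[list[float]]:
    """First-fit decreasing: place each item in the first pallet
    ("bin") that still has room, trying the biggest items first."""
    bins: list[list[float]] = []
    for size in sorted(sizes, reverse=True):  # biggest boxes first
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])  # no pallet had room; start a new one
    return bins

# Hypothetical box sizes as fractions of one pallet's capacity.
boxes = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5]
print(len(pack(boxes, 1.0)))  # 4 pallets used
```

A learning-based system would go further than this fixed heuristic, taking sturdiness and unloading order into account, but the underlying objective it is being asked to optimize looks much like this.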
The last thing that can happen is autonomy. Autonomy depends on the final bullet point above: the system using what it has learned to excel in new surroundings. Think of a robot with machine vision that is trained to pick up a part. As it learns other parts, it comes closer to operating on its own, perhaps finding a flat surface to grab on a part it has never seen.
We’ll see what happens in the next few years. Right now we have fairly simple systems—that describes the “open” systems with which we are becoming more familiar by the day, like ChatGPT and the various branded assistants that are proffered by search companies and makers of browsers and operating systems. While these systems are “open” in that we interact with them, they do not act unless we prompt them, and they are limited to the facts and figures that were already fed to them. They can make inferences based on that diet of information, but the lineage from search engine to chat-based assistant is very clear.
Right now there are tiny islands of AI, much like there were tiny islands of computing two generations ago. Back then, local area networks (LANs) were the answer to link the islands, and things got connected in a hurry and in de facto fashion. I expect the same to happen now with the tiny bits of AI strewn about the machines and gadgetry around us. There will be umbrellas of AI management that cover a chipset, slightly larger ones that cover a device, much larger ones that cover an Industry 4.0 or Fifth Wave corporate/industrial network, and so forth. Do you think this will spawn industry groups and standards committees? I can’t make up my mind on this question.
There was never a Year of the LAN; by the time anyone could have declared one, it was too late—LANs were everywhere and had become the status quo. I see that exact scenario playing out for AI by 2030.