Everyone could use more help in the workplace.
By my increasing odometer reading, it seems like only last week that AI in the workplace was taking shape and we got to interact with a few different iterations of what is called generative AI, meaning that our interactions with it (or commands to it) produced something: words, pictures, code, and so on. (While I have used ChatGPT in my work, I have not run any ChatGPT-authored articles!)
One of the problems with generative AI is that it knows what it knows, and that’s all. It really doesn’t make leaps of logic or share new, creative ideas with anyone. I asked ChatGPT to come up with a portrait of a friend of mine who is a very well-known jazz musician. Two problems came up. The first was that it is not allowed to create artwork depicting real, living people. The second really made me scratch my head: when it fell back on a “generic” saxophone player holding an alto, it did not know the difference between an alto saxophone and a tenor saxophone.
I tried to help it along carefully. It didn’t really understand what I was saying, which made me guess that musical instruments were not part of its primary education. In contrast, I had a conversation with ChatGPT on a subject I continue to learn about after 35 to 40 years of exploration. It was a fine conversation, and it prompted more questions from me (but none from it). So while it was fun to explore that topic, ChatGPT does not really hold up its end of the conversation.
Now we introduce a term you may have seen lately: agentic AI. As you might have guessed, the root of this term is AI acting as an agent. Be prepared to feel a little creeped out: Whereas generative AI can summarize, make predictions, and characterize and organize topics, agentic AI can act on them. From a computing point of view, an agent could take a piece of information and make entries in, or read from, a database, or even invoke APIs and make a program do something. You can see where this would get very interesting very quickly. In the fabricating world, that means an agent could start and stop machines, combine unrelated nests, change a process; lots of things. Many actions would be helpful, and some would not.
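To make that distinction concrete, here is a minimal sketch in Python. Every name in it is made up for illustration; the shop-floor function is a stand-in for whatever machine interface a real fabricator’s systems might expose, not any actual vendor API.

```python
# Hypothetical illustration of generative vs. agentic AI.
# None of these functions are real vendor APIs; they are stand-ins.

def generative_step(report: str) -> str:
    """Generative AI: produces text about the data, but takes no action."""
    return f"Summary: the laser cell appears underused ({len(report)} characters reviewed)."

def start_machine(machine_id: str) -> None:
    """Stand-in for a shop-floor API call an agent could invoke."""
    print(f"[action] start request sent to {machine_id}")

def agentic_step(report: str) -> None:
    """Agentic AI: reads the same data, then acts on it by calling an API."""
    if "laser cell idle" in report.lower():
        start_machine("laser-cell-2")  # it does something, not just says something

if __name__ == "__main__":
    shift_report = "Laser cell idle since 6 a.m.; press brake queue is full."
    print(generative_step(shift_report))  # generative: a sentence about the report
    agentic_step(shift_report)            # agentic: an action triggered by the report
```

The point of the sketch is the last two lines: the first call only describes the situation, while the second reaches out and changes it. That is the step that makes guardrails matter.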
This is only the first escalation toward a broader-based AI. There are different models of the AI ascending staircase, many of which have a top step labeled “self-aware, sentient AI.” That’s the one that wants to wipe out carbon units like you and me and move toward a mathematically precise world, creating enough of a dystopian future to fuel the sci-fi film industry for decades.
I’m really not worried about that, though. What I am worried about is the lack of guardrails, not just on AI but on robotics and automation. As luck would have it, there are whole companies devoted to this. One, based in Tel Aviv-Jaffa (with an office in New York City), is called Noma Security. They have a team of smart people working on problems just like this. In fact, if you would like a very good article on the advent of agentic AI, there is one on the Noma site at https://noma.security/blog/understanding-agentic-ai-the-shift-to-ai-that-acts/. It’s not too technical, it is very clearly written, and it covers both the advantages and the concerns. It is the latter that has Noma Security calling for security and governance steps that move as quickly as, or more quickly than, the AI deployments themselves.
If you think it’s too arcane or advanced to be worth your time, I beg to differ. This will be a big deal, and although you probably will want to delegate it to your IT people, it is still important to develop some knowledge of the topic yourself. I will revisit it throughout the rest of the year and during 2026.
Meanwhile, if there are other topics you’d like to see covered, please contact me at dave@fifthwavemfg.com.