Already in 2022, machine vision is making a huge impact on manufacturing. Much of the automation in machine tools is controlled or monitored by machine vision, and the same holds true for machine status, consumables, and a multitude of other applications for the machine tool or cell. As cobots make their entry into areas like machine tending, sorting, and packaging, machine vision does not stop at the machine; it becomes pervasive in the shop, and in a fifth wave world it forms a sort of vision “fabric” throughout the manufacturing floor(s).
Machine vision is an area ripe to explore, invest in, and reap commensurate benefits from. We spoke with Gabriele Jansen about the burgeoning machine vision field and its relationship to manufacturing. Jansen is currently Managing Director of Vision Ventures, a machine vision M&A firm, and as such she sees the cutting edge of this industry. She founded Vision Ventures a decade ago and, in parallel, served from 2002 to 2021 as a member of the Executive Board of the European Machine Vision Association (EMVA), an organization she helped form.
She began her career as an engineer, then became an expert in business disciplines and, finally, in mergers & acquisitions law. She takes education very seriously, as you might sense from her academic background:
- Cologne University of Applied Sciences, Dipl.Ing. (M.Sc.), Optics and Machine Vision
- Pforzheim University, Dipl.Wirtsch.Ing. (MBA Eng.), Business Administration & Engineering
- St. Gallen Business School, International Management
- University of Münster, School of Tax and Business Law, Master of Laws (LL.M.), Mergers & Acquisitions
There is more to her impressive background, but let’s hear what she has to offer in this Q&A about the current machine vision market.
Fifth Wave Manufacturing (FWM): It’s great to speak with you again. Let’s jump right into the first question. What has been the biggest change in the machine vision market in the last 5-10 years?
Gabriele Jansen (GJ): I like that question. I’m sure you expect me to say something about AI or embedded vision—and that would not be wrong. However, let me approach that question from a different angle. From my point of view, the biggest impact on the vision tech landscape in the last, say, 10 years, has actually been money.
FWM: Money!?
GJ: Money. For the first time since machine vision took its baby steps in the ‘80s and ‘90s, real money is available to finance the development of vision tech and the companies that do such development. That makes a huge difference in the market. All kinds of things are coming out of this changed financial situation, among them the use of AI for vision, the development of dedicated vision processors, and products that use new spectral bands. Today, vision technology is integral to the manufacturing world and to automation technology in all kinds of industries.
FWM: And frankly, that tells us that there is not only a lot of money available but a lot of money to be made.
GJ: Very true. As I see daily in our business, valuations of vision companies are sky-high.
FWM: I remember a conversation we had at least 15 years ago, in which you talked about a glass plate a truck tire would roll over so that its inflation could be checked at a weigh station along the highway. That application’s entire setup was highly specific, yet today things seem to be moving in the opposite direction, from specific to general, especially in the software.
GJ: Yes, absolutely. And machine vision today is instrumental in many more areas than 10 or 20 years ago.
FWM: Things are changing quickly. Where is innovation coming from now? Universities? Incubators? Small startups? Large companies?
GJ: It’s as always: innovation comes from small and agile teams. Very often it starts in universities, as you mentioned. There are also groups of ambitious people in larger companies who recognize that they cannot fulfill their ideas fast enough or well enough, and who then move on to spin off a startup. In the last couple of years we have also seen startups founded by people with a consulting background, such as business consultants, strategy consultants, and financing consultants, who have all seen so much interesting stuff. So quite a few companies have founders who come from the business side and partner with a technical person with a bright idea, and they jointly bring it to market.
Larger companies are very keen to acquire the technology and the markets developed by these innovators. They are also keen to acquire some of the startup spirit, which they can use to fuel their own innovative power.
FWM: Yes. For the small, innovative company of 20 years ago, everyone’s dream sounded like this: “Oh God, please let me be acquired by Microsoft.” What’s the equivalent of that in machine vision? Being bought by a bigger company still seems like a reasonable strategy.
GJ: It’s still a reasonable strategy. And today’s startups, even more than 20 years ago, have their minds set on the exit from the outset, either through an acquisition or through an IPO.
In the early days, a lot of founders were more focused on, and more interested in, developing the next cool thing, the next technical revolution, with the idea of staying independent as long as possible and doing their own thing. Young founders today have their minds set on how to finance, how to structure the investment rounds by creating more and more company value in each round, and what the end game is.
FWM: Yes. They’re much more aware than yesterday’s entrepreneurs.
GJ: Very much more aware. And there are many more possibilities also. Maybe it’s no longer Microsoft necessarily making an acquisition, maybe today it’s also Google or Amazon, but it’s the same idea.
FWM: Yes, exactly. Related to this question, are there certain “gravitational centers” for machine vision, especially in Europe? Kind of like there is an epicenter of robotics in Odense?
GJ: Oh yes, there are. Actually, there are several hubs, which on one hand is good. On the other hand, it also tells you that the hubs are not as huge as Silicon Valley, for example. But there are a number of hubs, and one big hub is in Israel—all over Israel. Israel is not a big place, but it has a very high concentration of A) vision startups; B) AI startups; and C) AI and vision startups.
Other hubs here in Germany include Berlin and the southern part of the country, I would say the area around Stuttgart and Munich. You also have hubs in Paris, London, and Grenoble for the deep optical part of vision tech. So there are several hubs for machine vision, which makes each of them less dense than Odense is for robots. One reason is that while Odense’s robotics scene is oriented toward certain applications, vision tech is extremely broad. It goes into all kinds of industries, all kinds of applications, all kinds of hardware implementations.
To get an idea of how broad vision is, think about embedded vision with super small, super low-cost equipment. Then, on the other side, think of huge machines for complex surface inspection tasks that cost $1 million apiece. Vision covers both and everything in between; the range is unbelievably broad. To be successful in a hub, you need a certain ecosystem around you: an ecosystem of like-minded companies, and an ecosystem of customers and interested parties to acquire the products, use them, and finance them. In the case of vision tech, it is not possible to have all that versatility in only one place.
FWM: And maybe the separation of hardware and software in many cases makes it a little less monolithic and a little more distributed.
GJ: Yes. Somehow the image has to be generated. The hardware used in vision systems does different jobs depending on the application, it affects product design, and it spans a very broad spectrum.
FWM: As you say, vision implies a camera, or at least a sensor of some kind. It seems that embedded cameras are becoming the norm in manufacturing machines. A laser cutting machine I know of features three of them. My car has four, plus the software to stitch the views together and give a 360-degree rendering, as if I’d launched a drone. Still a good time to be in cameras, I assume?
GJ: It is always a good time to be in cameras! The camera is the center of everything in vision, every vision task. You are also correct in your observation that everything is becoming smarter, like the cameras in your car. Today cameras are everywhere, including all kinds of places where you don’t want them to be—like constant surveillance resulting in traffic tickets!
I think it’s also important to understand that often when you compare two cameras, it’s like comparing apples and oranges. There is a world of difference between the cameras that you have in your car and a camera used for industrial inspection, for example. Or a camera for surveillance, with infrared imaging.
FWM: Like a FLIR Systems type of camera with visible and infrared imaging.
GJ: Exactly. The camera arena itself is absolutely huge. You have to compare like to like. There is obviously also a significant price war on cameras for high-volume applications, such as those in your car. It is clear that these cameras are sold for “pennies,” and it takes a certain type and size of company to do well in that camera business. That is a different playing field from that of a company like Teledyne FLIR, with its specific flavor of the camera business. (Camera manufacturer Teledyne FLIR, based in Wilsonville, OR, is a $2 billion company with more than 4,000 employees and serves mainly security applications—Ed.)
FWM: While machine vision software is becoming more flexible and general than ever, cameras are becoming more specific than ever. Many applications, like security, machine vision, embedded systems, and visualization, demand a specific camera. It has to be well matched to the task, yet it must retain some flexibility to be useful in related applications.
GJ: A lot of that flexibility certainly comes from software. It’s not exactly the camera in the application; it’s the vision module, let’s say. Also, when you look only at the camera, it is clear that it is task-oriented. There are different sensor sizes, interface types, spectral sensitivities, 2D, 3D, color or no color. It’s a huge variety of specifications that enables meeting each specific task. And there is a never-ending hunger for more variety.
FWM: I suppose if there could be more, there will be more. What are we learning from important and data-intensive areas such as medical imaging that we can bring to manufacturing?
GJ: Let me answer this by looking at manufacturing first. Manufacturing has certainly been the early adopter of vision technology and for many years has been at the forefront of its usage. This has been instrumental for the development of the technology and has also provided a major source of funding for it.
However, as sometimes happens, using something for a long time creates habit. In manufacturing there is often a certain expectation of how things are supposed to be, how things are supposed to run, how things are supposed to fit. This creates a danger of missing out on new technologies and new approaches. What manufacturing can learn from other areas is to be more open to new approaches like the cloud. Up until now the cloud has been pretty much a no-go; only very slowly is it turning into something worth discussing. That is a good start.
Another learning could be—or maybe will be—that it is good to have a person in the loop in automation. This probably sounds a little counterintuitive. For the longest time, manufacturing’s mantra has been to automate, automate, automate: whatever you can automate, automate. However, you still see humans working in manufacturing, and that will likely continue forever. There are tasks that, for various reasons, will not be automated.
In the past, when the work was manual, so were the quality control, yield improvement, and any performance management. What manufacturing is learning, when we speak about the use of vision, is that certain aspects of the workflow can be automated with a person in the loop.
Think, for example, of monitoring a worker, automatically analyzing the quality and efficiency of their work, and feeding this information back into the process. For the first time you are getting data from the manual process and making it available for automated analytics and automated yield improvement. This is a huge step for many manufacturers. Part of it is learned from other application areas, and part of it is embracing new technology that makes new things possible, like deep learning.
FWM: That’s a very good and strong point. Automation is kind of like the famous picture with an ape on the left and figures walking more and more upright toward the right, where the last one is a guy with a briefcase. We’re at an exciting time in automation. Automation was all about the doing, and now it’s getting into some of the thinking as well, or at least providing food for thought in the form of data and even information.
GJ: That is correct.
FWM: We are asking so much more from machine vision in terms of data density. As we process more data, the infrastructure of a manufacturing cell or even an entire shop floor will be stretched. Where are we seeing a need for more bandwidth, throughput, processing power?
GJ: Asking where we see a need for more bandwidth and data throughput, to me, is the same as asking where we see a need for more money.
FWM: A wonderful, surprising answer.
GJ: I think you also know the saying: data is the new gold. Every application is data-hungry. This is also true for novel approaches like embedded vision. The goal in such a vision system is to make it small, affordable, and minimally power-hungry. At the same time, you want this device to process huge amounts of data.
The requirement for more clever, or more dedicated, approaches to data handling has increased many-fold. When you look at 3D applications or hyperspectral applications, or at monitoring human beings and capturing their workflow and process, there are huge amounts of data involved. The data have to be processed and stored. They have to be transmitted either between the data-capturing unit—like a camera—and the processor, or between the edge device and the cloud.
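(To make these data volumes concrete, here is a back-of-the-envelope sketch in Python; the resolution and frame rate are illustrative assumptions, not the specifications of any particular camera—Ed.)

```python
# Back-of-the-envelope raw (uncompressed) data rates for a vision stream.
# All figures are illustrative assumptions, not the specs of a real product.

def raw_rate_mb_s(width: int, height: int, fps: int, bytes_per_pixel: int) -> float:
    """Raw stream rate in megabytes per second."""
    return width * height * bytes_per_pixel * fps / 1e6

# A hypothetical 5-megapixel monochrome camera, 8 bits per pixel, at 60 fps:
mono = raw_rate_mb_s(2448, 2048, 60, 1)

# The same sensor streaming RGB color uses three bytes per pixel,
# so three times the data volume:
color = raw_rate_mb_s(2448, 2048, 60, 3)

print(f"mono:  {mono:6.0f} MB/s")   # ~300 MB/s
print(f"color: {color:6.0f} MB/s")  # ~900 MB/s, far beyond a 1 GigE link (~125 MB/s)
```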
FWM: And the cloud brings up security concerns as well.
GJ: Very true. And the cloud is not the only place where we have to be much more sensitive to and focused on security.
FWM: I totally agree.
GJ: Additionally, there is a strong tendency to go wireless, to physically decouple the front end of a vision system, which acquires the data, from the processing back end, because cabling is such a headache and it inhibits flexibility. Even without talking about the cloud, you then have a wireless manufacturing environment. While that is not a completely new situation, people still have to deal with the security requirements for these devices.
FWM: Let’s talk about acquiring the image, and what kind of frequencies are being used. Have we significantly expanded/changed the spectra that we employ in machine vision?
GJ: We are working on this. There are teams working on terahertz vision for industrial inspection, for example, even though it’s at a very early stage. There is already a visible increase in industrial CT, computed tomography. I see exciting new approaches to shortwave infrared imaging, exciting in the sense that new technical approaches could bring costs down significantly, by an order of magnitude. That would allow widespread use in industrial applications. None of this scales substantially yet, but the ecosystem is working on all of it.
FWM: Are they still in the R&D phase with these things?
GJ: I would say no, it is beyond R&D; I would say it’s the early market phase. Early products are available. There are proofs of concept in the market. There are very early installations in many fields, but it has not exploded yet.
FWM: I can see applications where you have to do some metallurgy. By law, you have to inspect certain items. If you could reliably automate that, having the item pass by the imaging device or running the device over, say, a weld seam automatically, it would save a lot of time and trouble.
GJ: Yes, and those things are coming.
FWM: Are we moving toward more dynamic applications, like motion within the factory, or color and texture differentiation?
GJ: I would say we’re already there with color; it’s standard, nothing new. Also, if color is not strictly needed, you forgo it. A lot of applications use grayscale, because color increases the data volume by a factor of three at a minimum.
Motion analysis is definitely an area of focus, covering both the motion of processes and the motion of workers. You can also use motion analysis for something like vibration analysis, as a means to monitor machine health, or as a means to do predictive maintenance. This can be done acoustically too, but it is also done very cleverly with optical methods, as in the sketch below.
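(A minimal sketch of such an optical method, assuming a hypothetical recorded clip of a machine in operation; production systems use far more robust techniques, such as optical flow or phase-based motion amplification—Ed.)

```python
# Crude optical vibration monitor: track frame-to-frame pixel change.
# A rising motion-energy trend on a machine that should run smoothly
# can flag vibration worth investigating.
import cv2
import numpy as np

cap = cv2.VideoCapture("machine.mp4")  # hypothetical clip of a machine running
ok, prev = cap.read()
assert ok, "could not read video"
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

energies = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Mean absolute pixel difference between consecutive frames:
    # a simple proxy for how much the scene is moving or vibrating.
    energies.append(float(np.mean(cv2.absdiff(gray, prev))))
    prev = gray
cap.release()

print(f"mean motion energy: {np.mean(energies):.2f}")
print(f"peak motion energy: {np.max(energies):.2f}")
```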
In terms of texture, there has been a lot going on since the advent of deep learning. Take a difficult surface, a natural surface like wood. It is a challenge to classify it reliably with traditional tools, to do a defect inspection, and then a quality analysis. We are expanding the borders of machine vision significantly in this area by employing deep learning technologies.
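(A minimal sketch of this deep-learning approach, assuming a hypothetical folder of labeled wood-surface images; fine-tuning a pretrained network, as here, is one common route when defect examples are scarce, though certainly not the only one—Ed.)

```python
# Fine-tune a pretrained CNN to classify wood surfaces, e.g., "good" vs. "defect".
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    # Normalization constants matching the ImageNet pretraining data.
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: wood_surfaces/train/<class_name>/*.png
train_ds = datasets.ImageFolder("wood_surfaces/train", transform=tfm)
loader = DataLoader(train_ds, batch_size=32, shuffle=True)

# Start from ImageNet weights and replace the final layer with our classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```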
FWM: Which leads to this question: Will AI play a role soon in industrial vision systems?
GJ: I would say AI plays a significant role already. Based on AI, completely new application fields for vision technology have opened up. The impact of these products will be absolutely huge.
However, thus far this has not translated into mass products. Yet the new applications are so exciting. For example, the quality assessment of cars can make use of AI vision. Think of inspecting cars at a rental station. Today when you rent a car, you walk around the car, note any damage, and mark it on a piece of paper. When you return the car, a person at the rental agency does the same thing and compares the two sheets. If there is a defect on the latest sheet that wasn’t there before, you have to pay for it. The process is labor-intensive, and it is so annoying for the customer too. Using AI, this can be automated today, and the machine will pay for itself.
There are different approaches today. The images of the car might be taken with a smartphone, or they could originate from embedded cameras in a tunnel. No matter what the image acquisition looks like, the system will depend on deep learning to make it possible to deal with clean cars, dirty cars, pristine cars or dinged cars, any make or model. Mark my words: You will see these types of systems in the next five years at every airport, every major rental station and in a lot of auto repair shops.
FWM: It seems like robotics is a perfect match for machine vision, because a vision system allows the cobot to more or less mimic a worker—but do a lot more. What’s your take on matching cobots with vision?
GJ: I see combining a robot and vision as the natural development for both technologies. Looked at from either side, vision substantially increases the versatility of the cobot, and the cobot brings vision into the long tail of the industrial markets. The cobot is the door-opener at manufacturers that today still shy away from vision because it’s too complex. The more integration between vision and cobot, the better for the user.
FWM: My final question for you is this: What does machine vision need that it doesn’t have right now?
GJ: Once again, I’m probably not going to answer the way you intended. Right now, vision companies are in dire need of a range of things: processors, image sensors, cables, connectors; sometimes it’s something as mundane as a piece of metal. Supply chain issues are a huge problem for the mostly small and medium-sized companies that make up the vision ecosystem. It is important to reassess some of the globalization dependencies we have gotten ourselves into, and to make sure that a minimal level of independence is regained.