Chris Yates is Managing Partner of Vision Ventures, a mergers and acquisitions (M&A) advisory firm specializing in machine vision companies and their technologies. Yates has more than 20 years’ experience in the industry. Prior to Vision Ventures, he was Director of Advanced Technology in Rockwell Automation’s Safety and Sensing business. He joined Rockwell through its acquisition of 3D imaging specialist Odos Imaging, which he founded and led as CEO. He is also president of the European Machine Vision Association.
Our interview covers a lot of ground—so much, in fact, that we split the conversation into two parts (our next will appear on the site soon and will be included in our July newsletter). In this first part we discuss centers of machine vision technology; seeking technology that fits vs. having a fit and finding the right space; the role of patents in this market; the most interesting of today’s vision technologies; AI; and managing massive amounts of data.
Fifth Wave Manufacturing: Today we have, as our guest, Chris Yates. Chris is involved in Vision Ventures as a Managing Partner, and is very much involved in the European Machine Vision Association, or EMVA.
Chris Yates: That’s right, thanks; you’ve identified exactly the two roles. I’m very lucky to work in the machine vision business in these two roles, and to engage with so many different and exciting companies across the spectrum.
At Vision Ventures I’m a Managing Partner. We are a specialized mergers and acquisitions advisory firm with our sole focus on vision technology. We mostly handle sell-side transactions, typically with owner/managers, where companies are looking for an acquirer or for a sale of their business. We also work for companies that are looking to strategically enter the vision market, or increase their exposure to it, by investing in or acquiring vision companies.
On the other side, the European Machine Vision Association: Vision Ventures has always been a member of the EMVA, which is the leading European industry trade association for machine vision. Any member can put themselves forward to be elected to the board, which I did in 2017, and I have been on the board ever since. For the last four years I have also held the role of president, which is elected by the board. It is a very interesting role, helping to guide the executive team, to produce additional benefits for the members, and to promote machine vision, particularly in Europe.
FWM: I was just wondering how you compare your air miles between Germany and France or Italy? I would think that Germany probably is quite a prominent landing spot for you in your business?
Yates: Absolutely. Germany has a very strong footprint in machine vision, historically tied to the very strong automotive manufacturers there, who were among the earliest adopters of machine vision in a consolidated sense across their processing lines. There is certainly a strong footprint there, but do not think that machine vision in Europe is only in Germany. It’s extremely strong in France. Italy has a tremendous amount of high-end manufacturing, particularly machine builders that are using a lot of vision. The Netherlands also has many companies. Spain (I sound like I’m listing every country in Europe) and Poland are centers, and we are seeing more countries come into vision, including in central Europe. It’s one of the great things that we have a continent that’s close together, with many different cultures all working on the same interesting topic of vision.
FWM: It’s true. It seems even within Europe that there are certain regions that are a little more immersed in that business. It’s a bit of a shame you can’t drive to the Netherlands, it’s so very close.
Yates: It’s the first flight out from Scotland, and I’m there quite regularly.
FWM: It’s been a handful of years now where you’ve taken on a new role with Vision Ventures, to really help as a peer with Gabriele Jansen, who’s been doing that for a long time. (see also our interview from 2022 with Gabriele Jansen here–Ed.) Please talk about that a little bit.
Yates: I joined Vision Ventures some five years ago. I’ve known Gabriele Jansen, who’s the founding partner of Vision Ventures, for almost 12 years now. I entered the machine vision business in 2010, when I was asked to lead a spinoff of 3D imaging work from Siemens Corporate Technology. I was recruited to take that forward, and it was a company that we built up and ultimately sold to Rockwell Automation in the U.S. That transaction was handled by Vision Ventures, so I got to work very closely with Gabriele throughout it. Corporate transactions, and particularly strategic innovation, have always been topics I’ve been very interested in. My background is in startup companies, and this is a natural extension of that, coupled with a strong interest in technology and all aspects of business. It works extremely well.
FWM: You are in an enviable position, and for the people who think about entering the machine vision industry and inventing some things that use it, there are many things to talk about. One thing is, congratulations for being in such an interesting and vibrant industry. Secondly, there is a lot of interesting and sometimes unique technology being used. Also, many technology efforts are driven by cash, in a funding sense. You are one of the few people who has experience in each milieu, which makes you a very rare and effective person.
Yates: One point about Vision Ventures is that we are specialists in vision technology, and we were in the industry and built up our own companies there before moving into pure M&A advisory. Machine vision is still a complex topic. It touches many areas, from low-level electronics and image sensors up through different types of cameras, how systems are assembled, and how system solutions work together. That sector-specific knowledge is certainly important in our business and is certainly something that’s valued by our clients.
FWM: Indeed. Now, to put this into simple terms, do you find more engagements where you have a puzzle piece in your hand looking for the right space? Or do you find more where there’s a space and you need to go find the puzzle piece?
Yates: It’s a good question. This part of the role means being continually engaged in the sector at all levels. So I wouldn’t put it as finding the puzzle piece that then needs to be placed. There is a continual process of engaging with companies at different stages in their life cycle, understanding what they’re looking to achieve, looking at what technologies are coming through, where people are being successful, and what business models are working. At the same time, more towards the larger company side, we need to understand what their strategic desires are, so that when we take on a particular mandate, we have the understanding of where potential opportunities may best be found.
FWM: I oversimplified that! However, I can see how your business can come from either side of the equation.
Yates: Absolutely. A typical example on the sell side might be a well-established company that was founded in the late ‘90s or early 2000s and has been successfully grown for 20 years. Then the owners or the majority shareholders are looking to put a plan in place that allows the continued growth of the business while they exit from the day-to-day operation. In that case, we would act across the entire process to find the best possible partner and acquirer to take the company forward.
FWM: Right.
Yates: As for the larger companies, global multinationals, and increasingly private equity groups looking to secure strategic assets within the vision sector, we can help by bringing our expertise, knowledge, and connections to facilitate the right match for their strategic plans.
FWM: Yes. We seldom think about that part, how different it is for larger concerns. Before we leave this business question, I’d like to know if patents play a big role in this?
Yates: Patents? I mean, as a general question, actually you’re touching on one of my favorite subjects. I like patents! <laugh>.
FWM: Oh, good, I can’t wait to hear how they play in vision markets.
Yates: How patents play out depends on the company. We have engaged a lot with early-stage companies, less in the transaction sense and more in terms of seeing the directions of innovation in the industry. Patents can be extremely valuable, and they can be extremely valuable for startup companies at the point at which a company might be acquired. But they always need a strategy around them: what is to be filed, where to file it, in which countries, what is to be protected, and what to disclose, because a patent becomes public. The information that you choose to put in there ultimately goes into the public domain. So there’s a choice between what to patent, gaining legal protection but also disclosing, and what to keep internally as company know-how. It’s a very interesting area, but a well-thought-out IP strategy is, I would say, important for any company, with IP being more than patents: patents, internal know-how, copyrights, design rights, et cetera.
FWM: You can still get protection, even with our 20-year protection here in the States. However, the 20 years really isn’t 20 years, it’s 18 ½ years, because you’ll probably have to wait 18 months for approval, and that time is taken from the patent clock. Doing the preparation is so important, and things can go more quickly if you do a good job on the forms and defining the scope of your patent. And it’s possible that you won’t need the patent after eight years or so anyway, because technology marches on.
Yates: It’s all very individual to which technologies you are developing. Another aspect to think about is that the patent gives you an exclusive right to the invention, but someone else may be inventing or doing the same thing at the same time in a different product. At that point, you could make a case that there was an infringement and that the other company should stop. But you would almost certainly have to prove that your particular patent was being used within the product, and that is quite an area to think about, because if it is, say, a software AI algorithm sitting on a chip, it can be almost impossible to identify whether or not it is actually being used in there. Cost is also a consideration in patent IP: the longer the lifetime of a patent and the more countries you file it in, the bigger the bill. You must think carefully about what your core inventions are and how you want to protect your competitive, internally developed IP.
FWM: Yes. Well, thank you. That’s a great topic.
Yates: It’s a very creative area. I think it’s not often thought of that way, but it is creative, and it’s certainly an important commercial tool.
FWM: It is. Let’s move on to another question. There are always interesting technologies, like turning a black and white image into a color image with some accuracy. Stitching capabilities are also advancing quickly. There are so many things happening today, technologically, in this market. What are you finding most interesting today?
Yates: You’re right, there’s a tremendous amount of innovation across the industry in general. There’s probably not enough time to pick up on all of it, but I can highlight a couple of things that I see as interesting directions in the industry. The first is the increase in non-visible imaging. What I mean by that is that typically, when people think of machine vision, they’re thinking about a black and white image in the visible spectrum, the spectrum we see as humans. Maybe it’s a color image, but it’s in the visible spectrum. What you’re seeing now is more and more companies developing image sensors, cameras, and imaging techniques that are outside the visible.
For example, into the infrared, exactly as you mentioned, where infrared light can see through silicon wafers and can be used to see defects within silicon in production. It then extends further into mid-wave and long-wave infrared. We are also seeing an increase in the number of ultraviolet solutions, moving to the other side of the electromagnetic spectrum, and more and more use of x-ray imaging in industry and in inspection devices. So the broad feeling is that we have this fantastic technology and machinery that has been developed primarily in the visible spectrum but is now extending to cover more parts of the electromagnetic spectrum. And that is really quite powerful.
One thing that is interesting is that as you start to use different wavelengths, you can also pick up material properties. Hyperspectral imaging is a good example of this. Hyperspectral is like a super red, green, blue, a super color camera: instead of just picking up three colors, it’s maybe picking up 1,000 or 5,000 individual narrow bands of color. And by seeing the individual signature at each wavelength, you can pick up things like what a plastic is made of. Is this polyethylene or is this polycarbonate? That’s very important when you’re thinking about recycling sortation, where you can also use it.
FWM: It seems like spectroscopy or chromatography.
Yates: Exactly. It is exactly that, but across an entire image. So you can think of very nice applications: Is this fruit ripe? Can I measure the sugar content in an orange with a camera? This is still imaging, but imaging to pick up material properties. It is a fascinating area, and I think you’ll see more applications and more companies entering this sector.
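(For readers who want to see the idea in code, here is a minimal sketch of per-pixel material classification on a hyperspectral cube: each pixel’s spectrum is compared against reference signatures using the spectral angle, and the closest match wins. The band count and the “polyethylene”/“polycarbonate” signatures are synthetic placeholders, not measured spectra, and this is an illustration rather than any specific product’s method.–Ed.)

```python
# Minimal sketch: per-pixel material classification on a hyperspectral cube
# using the spectral angle mapper. Reference signatures are synthetic stand-ins.
import numpy as np

def spectral_angle(cube: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Angle (radians) between each pixel spectrum and a reference spectrum.

    cube:      (H, W, B) hyperspectral image with B spectral bands
    reference: (B,) reference signature for one material
    """
    dot = np.tensordot(cube, reference, axes=([2], [0]))
    norms = np.linalg.norm(cube, axis=2) * np.linalg.norm(reference)
    cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    return np.arccos(cos)

def classify(cube: np.ndarray, signatures: dict) -> np.ndarray:
    """Label each pixel with the material whose signature is spectrally closest."""
    names = list(signatures)
    angles = np.stack([spectral_angle(cube, signatures[n]) for n in names], axis=-1)
    return np.array(names, dtype=object)[np.argmin(angles, axis=-1)]

if __name__ == "__main__":
    bands = 128                                  # illustrative band count
    rng = np.random.default_rng(0)
    # Fake signatures standing in for measured spectra of two plastics.
    sigs = {
        "polyethylene":  np.abs(np.sin(np.linspace(0, 3, bands))) + 0.1,
        "polycarbonate": np.abs(np.cos(np.linspace(0, 3, bands))) + 0.1,
    }
    # Synthetic 4x4 scene: noisy copies of the polyethylene signature.
    cube = sigs["polyethylene"] + 0.05 * rng.standard_normal((4, 4, bands))
    print(classify(cube, sigs))                  # expect mostly "polyethylene"
```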
FWM: That is amazing. Anything else that you see out there?
Yates: Well, we couldn’t have this interview without mentioning AI. AI has been the trend of the last, I would say, 10 years, very strongly within vision. I think the vision sector, whether it’s machine vision or computer vision, was actually very early in using and developing AI techniques. Some of the original developments that showed the power of AI were done on image processing tasks back in 2010 or 2012, and over the last two years we’ve had ChatGPT and AI being everywhere. I think we saw the first companies in vision using AI to analyze images really showing up strongly on the radar in 2014 or so. It’s been tremendously powerful, particularly when you think about situations that are not constrained.
A lot of machine vision is working in factory automation, and we want to pick up defects, for example. We want to pick up scratches on the bodies of cars, et cetera. This can actually be quite a complicated task if you are trying to define rules to say what a scratch looks like. Whereas AI has been extremely powerful in being able to say, “Here are some examples of what a scratch could look like. Here are examples of what good parts look like. You work it out,” and then we run the inspection. That’s where you’ve seen really powerful and fast uptake. I would say almost every company in the vision industry is now addressing AI to some extent.
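(As a rough illustration of that learn-from-examples workflow, and not a description of any particular vendor’s system, the sketch below trains a scikit-learn classifier on synthetic labeled patches, “scratch” versus “good,” instead of hand-written rules. The patch generator and patch size are invented for the example.–Ed.)

```python
# Rough sketch: learning "scratch vs. good" from labeled example patches
# rather than hand-written rules. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
PATCH = 32  # illustrative patch size in pixels

def make_patch(scratched: bool) -> np.ndarray:
    """Synthetic stand-in for a camera patch: uniform surface, optional dark streak."""
    img = 0.8 + 0.02 * rng.standard_normal((PATCH, PATCH))
    if scratched:
        row = rng.integers(4, PATCH - 4)
        img[row, :] -= 0.4          # a dark horizontal line plays the scratch
    return img

# "Here are examples of scratches, here are good parts": build a labeled set.
X = np.stack([make_patch(s) for s in ([True] * 200 + [False] * 200)])
y = np.array([1] * 200 + [0] * 200)            # 1 = scratch, 0 = good

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X.reshape(len(X), -1), y)              # flatten patches to feature vectors

# Run the "inspection" on a new patch.
test = make_patch(scratched=True)
print("scratch" if clf.predict(test.reshape(1, -1))[0] else "good")
```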
FWM: I think that’s right. Some of that in our little sector of metal fabricating and machining is color-driven, as you mentioned before. I’m cutting this one-and-a-half-inch metal plate, and it seems like it’s not cutting well, but the machine will use the color of the cut to decide what to do. The device is trained like any other AI application. It’s really coming to the mainstream.
Yates: I would say AI is certainly in the mainstream and being addressed, but it is not a complete replacement for what went before. This is something that’s sometimes not recognized: the thought that it removes all of the development and work that was done beforehand is not the case. It’s also worth remembering that, particularly in machine vision, inspection systems were very high-performing even before AI. If you think of a normal pharmaceutical vial inspection, where the correctness of the vial is very important, the inspection performance there was above 99.5% anyway. So if you are looking at putting AI on top of that, you have to offer a benefit in speed, cost, or detection capability over and above that existing baseline. But the reality is that AI is another tool; it builds on top of what was there and gives us greater capability and greater scope for the industry.
FWM: Your last point there was so well taken, because people are running scared of AI in some cases, and the fact is you do need those other benefits, because otherwise you end up limiting your returns. In your example, closing that last 0.5% on a simple yes-or-no inspection doesn’t really buy you all that much considering the investment you will have to make.
Yates: Absolutely. You raise a good point in terms of the concerns around wider AI. There is an open question in some AI implementations of understanding why the algorithm made the choice that it did, and that can be quite important. If an AI inspection system suddenly starts rejecting parts that were in fact good, you would obviously like to know why it made that decision in order to correct it. That’s an area that is general across AI, and certainly a lot of people are looking into it.
FWM: Yes. And, if I may, one more sub-question on that topic. The more complex the applications, the more AI comes into the market like Swiss cheese, you know <laugh>: applied not to a hamburger but to whatever the application is, with holes everywhere in it. There are gaps where no one has figured out a way to apply AI, or even decided whether AI needs to be applied at all.
Yates: Absolutely. One point there is that the consumer space, where you might try a new AI algorithm on your smartphone to do something with your posts, is very different from a production line. The steps that things go through in order to be deployed and validated in practice on a production line by necessity have to go slower and use a more conservative process. It is always the case that industry will move a little bit slower and more incrementally and take value where it can, but not, as we would say, throw out the baby with the bathwater.
FWM: That’s a great point, and especially for our readers, there’s only so much speed you can use in bending a piece of metal because your metal bending machine becomes a shearing machine without you knowing.
Yates: Yes, absolutely.
FWM: Let’s look at overall speed too, and maybe what can be captured with a vision system. It seems like we are reaching a practical limit in some cases. I’ve seen videos of someone capturing a photon, a blob that we know from quantum physics, as it goes across the room. If we go any faster than that, it’s meaningless, because nothing goes that fast. There is no practical reason to do it.
Yates: I’ve also seen those videos, and I think they’re fascinating. To see the speed of light, to trace a small laser pulse as it moves around, is phenomenal in terms of the technology. But on the general question about speed, it has certainly been a trend for the last decade or so. The image sensors that sit within the cameras have become faster and higher resolution, and the two aspects need to be taken together. Now it’s quite usual to have 12-megapixel, 25-megapixel, even 127-megapixel cameras with very large image sensors, also spitting out data at high frame rates. This is a tremendous benefit for an application, because you can look at larger areas or at finer detail. You’ve got more raw data from which to extract information, but it does have a cost. That cost is the tremendous amount of data that has to flow over the interfaces and then be handled by the machine vision system: how to transfer it to a computer or processing unit, and how to process it. It is a trend, and obviously of benefit, but not without some engineering overhead.
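(A back-of-the-envelope calculation makes the data-rate point concrete; the camera numbers below are illustrative, not figures from the interview.–Ed.)

```python
# Back-of-the-envelope data rate for a high-resolution, high-frame-rate camera.
MEGAPIXELS = 25        # sensor resolution in megapixels (illustrative)
BITS_PER_PIXEL = 8     # 8-bit monochrome output
FPS = 80               # frames per second

bits_per_second = MEGAPIXELS * 1e6 * BITS_PER_PIXEL * FPS
gbit_per_s = bits_per_second / 1e9
print(f"Raw data rate: {gbit_per_s:.0f} Gbit/s")   # 16 Gbit/s for these numbers

# A common 10 Gbit/s link (e.g. 10 Gigabit Ethernet) cannot carry that raw stream,
# so the system needs compression, a faster interface, or processing at the camera.
if gbit_per_s > 10:
    print("Exceeds a 10 Gbit/s link: compress, use a faster interface, or process on-camera")
else:
    print("Fits on a 10 Gbit/s link")
```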
FWM: I agree, and I see that new standards continue to evolve for getting those millions of bits of data through that pipeline as quickly as possible. As amazing as it is, we have to start to wonder where the intelligence lies. Is it at the camera, or is it more central to a distributed system, or somewhere else, maybe a parallel processor? In a way the camera is a good choice, but that’s a hard one too, because now you have to manage a program that runs at the camera versus a program that runs wherever the camera sends the data.
Yates: Yes, this is an age-old question of where to put the intelligence in a system, and there have been suggestions that this is all going to move towards smart cameras, with everything done on the camera completely. At the other end, you’ll see implementations that go in the opposite direction. There was a study in China, quite a big actual implementation, where they were setting up an entire manufacturing plant, including its machine vision, with all of the data transferred over 5G and all of the processing and intelligence done in a centralized fashion. One of the great things about machine vision is that it can be applied to so many different solutions, and the considerations of the solution determine exactly where you choose to put your computing power within it.
For instance, if you need your answer very fast, you need low latency, which means the computing needs to be much closer. If that is not a priority and you can wait seconds, maybe minutes, then maybe that processing can happen in the cloud on a different continent, or anywhere in between. It’s a very interesting topic, and the good thing is that there will always be a need for the image data. We should always separate the raw image data, as provided by the image sensor, from information, which is what the machines, the tools, and the humans want from it. The translation from data to information is done by an algorithm running on a computer somewhere.
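(A toy sketch of that latency argument, with invented thresholds rather than anything from the interview: the latency budget alone picks a plausible processing tier.–Ed.)

```python
# Toy sketch: choosing where the data-to-information step runs, based only on
# how long the application can wait for an answer. Thresholds are illustrative.
def placement(latency_budget_s: float) -> str:
    """Return a plausible processing tier for a given latency budget in seconds."""
    if latency_budget_s < 0.01:          # sub-10 ms: the machine needs the answer now
        return "on-camera / edge device"
    if latency_budget_s < 1.0:           # sub-second: a local industrial PC will do
        return "local processing unit"
    return "remote server or cloud"      # seconds to minutes: distance barely matters

for budget in (0.002, 0.2, 30.0):
    print(f"{budget:>7.3f} s budget -> {placement(budget)}")
```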
FWM: It’s a bit like the movie networks, Netflix, Amazon Prime, and all of those. It’s a lot of data to move, and they’re doing both of the things that we talked about: gigantic centralized processing, even distributed centralized processing, but also edge processing, through things like Akamai that accelerate the information and its acquisition. I fully expect that will happen in a smaller way with machine vision.
Yates: Yes, exactly. One of the big enablers anyway is that the image sensors have got cheaper, no doubt, but the compute power has got dramatically cheaper and dramatically more powerful over the last, well, the last 70 years. This has been driven by mobile devices to a large extent, and later by Nvidia. We have a lot of compute capability, and I’m sure a lot of people are trying to push it further <laugh>.
FWM: Seemingly, Moore’s Law of doubling capacity and halving cost applies just as well to machine vision as it does to computing.
Yates: Exactly, it’s a tremendous enabler having great compute power available at lower cost. Fantastic! It enables the entire industry.
Part II coming soon
More information: www.visionventures.eu