Episode Transcript
[00:00:00] Speaker A: Good day, everyone. Welcome to Lubrication Experts, and today we have a very exciting guest. We have Evan Zabowski from Eurofins TestOil. He's the senior technical advisor there. Now, I've had the opportunity of seeing Evan speak a couple of times. I think it was at STLE a couple of years ago, and he's one of the few people that can make oil analysis seem entertaining. So that's one of the reasons I wanted to bring him on, you know, any number of stories and a huge amount of experience that he can bring to the table. So, Evan, thanks so much for joining us.
[00:00:31] Speaker B: Well, thank you.
[00:00:32] Speaker A: Glad to be here. Yeah. Now, this is going to be good. It's kind of long awaited. I've been trying to get Evan on, but the, you know, schedules and whatever have never sort of lined up. So let's just kind of start maybe with the very, very beginnings of oil analysis. And maybe we want to say the degree of maturity to which it's being used, because, you know, you and I know that there are many versions of oil analysis and we see many different, let's say, program maturities out there.
So I think maybe where we want to start is your assessment of the current state of oil analysis. So what is the kind of, quote unquote, average maturity level that you see among not even just your TestOil customers, but even beyond that as well? You know, is it just a compliance exercise? Are we just ticking the box because, you know, corporate says that we need to have an oil analysis program?
Is it that we're doing kind of pretty menial, like, traffic light analysis, you know, red light means change the oil kind of thing?
Or do you see it, you know, being used to its fullest extent where, you know, we have different levels of oil analysis which are being applied based on the criticality of the asset. Maybe you're doing some exception testing and that sort of thing. And there's probably, just to preface this, there's probably not one answer. I'm sure there are different industries that are doing this better than others, maybe.
So what's your kind of feel, general vibe on the, on the state of oil analysis right now?
[00:02:12] Speaker B: Well, the way I always, you know, address this kind of topic is to actually talk about the history of oil analysis briefly. And that is that, you know, about, say, 60 years ago, we had commercial labs, but not everyone recognizes that for about 20 years before that, we had the earliest incarnations of oil analysis occurring at trucking fleets, railways, military, and the thing they all benefited from was large fleets of singular make model combinations.
And once commercial oil analysis labs came about, that's who they targeted, again similarly large fleets of few make-model combinations. But where oil analysis as a technology, a strategy, whatever you want to call it, has matured is, as I joke, now we have n little make-model combinations, with, you know, thousands upon thousands of one-offs, where there's an end user using a piece of equipment that does not exist anywhere else in the world, utilized the same way, under the same circumstances, etc., etc.
And so people who have a knowledge of the history of oil analysis, people who've experienced oil analysis, they grew up in a world where oil analysis worked, and so they like it. And how it worked was, back to your terminology there, the traffic light system: where it was really red or green was kind of the way we went. And I find that there's a lot of industries out there that have not evolved past that.
They are stuck where we were, say, 50, 60 years ago, and they are not really utilizing it to its fullest extent. I do find, to your comment, yes, a lot of people are ticking a box. They are aware of oil analysis, and they may not have experience with it, but they know it's something they, you know, quote unquote, should do, so they do it.
One of our colleagues in industry who retired last year, he used to joke that when he first started into industry he was selling filing cabinets because he said people did oil analysis. People received reports, they filed them away and therefore, you know, the compliance was very, very low.
Back to your question. I do think there are a few industries, mining would be the one with the most experience in my experience, that actually utilize it to its fullest extent, that try to truly make it predictive, actually read a report and anticipate that by the next report something will be off, and either request different analysis or take the corrective action before it gets to the, you know, traffic light being red and saying, oh my goodness, you know, what do we do? And then frequently what they do, like you said, is, well, change the oil, which makes the next report green, even if it didn't fix anything.
So I would say that yes, it's all over the map.
There are a few that are just checking a box. There are a few that are, you know, reaching full potential and actually leveraging it in a way it was intended to be used. But I would say the dominant answer is yes, it's the traffic light. People think oil analysis is simple. They treat it as if it's simple and they use it in a simple way. So they're not really getting as much out of it as they could.
[00:05:36] Speaker A: Yeah, and maybe a little side discussion on there because, you know, if it is just a compliance exercise, what is the business getting out of that? Like, is there, you know, are we doing it for insurance reasons predominantly? Do you think we're doing it just because we've always done it and it's like culturally built in? Like, what do you think is the reason that we're just kind of taking samples and sending them off?
[00:06:01] Speaker B: There's a small amount that is regulatory or insurance related. Like I think of steel mills and utilities in particular, they fall under such regulations. But they are also people who tend to take it a bit more seriously, probably because they're being forced to in some way.
I would say that most that are just ticking the box are doing it, yes, out of habit.
Right. It's just what they've always been doing.
And mostly I would say though, it's because they think it works.
[00:06:32] Speaker A: Right.
[00:06:32] Speaker B: They think that using the traffic light system, changing the oil when it goes red, making the next report green, is actually doing something because the machine ultimately didn't fail when they got a red report.
Now, when the machine does fail, if it fails, you know, let's be honest, not all will fail due to a lubricant-related issue. But when they fail, they don't always blame oil analysis. They often find something else to blame and they don't really fault the program, they just continue on. So that is also habit based, what I like to call lube, fail, repeat, because it's repetition of that same activity, usually an oil change, and it doesn't really resolve anything. So yeah, it's going to repeat itself. You'll get another red report down the road, you will do the same sequence of activities, you know, you'll change the filters, change the oil, check the breather, whatever it was, you get some green reports and you'll just assume that the program works. And like I said, until such a time that the machine fails. And when that happens, you don't tend to think it was the program's fault, you think it was the machine's fault somehow.
[00:07:37] Speaker A: Yeah, I like that. Lube, fail, repeat. Kind of reminds me of, what was it? Edge of Tomorrow.
[00:07:45] Speaker B: That's what I modeled it on. Yeah, yeah.
[00:07:46] Speaker A: Which was, I think that was Live Die Repeat, is how that was marketed. Right. Taken from the graphic novel All You Need Is Kill.
[00:07:53] Speaker B: Yes, great.
[00:07:54] Speaker A: Graphic novel.
All right. So that's kind of like the why of why some businesses might be trapped into this kind of, it's almost like the hamster wheel of oil analysis, where we're kind of doing it, but it's kind of a pointless exercise, right?
What was the Greek one? Sisyphus, you know, pushing the rock up the hill.
[00:08:20] Speaker B: Yes, exactly.
[00:08:22] Speaker A: Okay. For those people who are just doing traffic light analysis, and for people who are kind of taking that one step further, I mean, obviously the traffic lights, they have to be triggered by something. And I think this is probably most people's intersection with the, let's say, generous, quote unquote, interpretation of oil analysis results.
So, and I think people are often doing this blindly, taking actions with no understanding of how those red lights, or the yellow lights, are generated.
I have seen a couple of labs who have moved beyond traffic lights and they've gone to the emoji system.
Sad, sad, sad face with thumbs down.
Okay, maybe it would be helpful to explain, like, where did those limits even come from? You know, because most people, I would say, I mean, most commercial labs these days give you the capacity to define your own limits, but let's be honest, the vast majority of people are not defining their own limits. They're just using the prescribed ones from the lab. So where do those numbers come from?
[00:09:30] Speaker B: Yeah, so there are essentially a couple of good sources of where they used to come from. One was OEMs and another one would be the lubricant companies themselves.
Then we started seeing a migration towards commercial-type industries creating, like, user groups or whatever, so there'd be, like, an industry set of limits. So, you know, like, I see the model behind you, there'd be a wind turbine user group, and they would say these are the limits for all wind turbines, and they would treat them all generically equal.
Where that evolved to was commercial labs eventually locking in their own. Because, as I always try and be honest, when it comes to choosing a commercial lab, we're all held to the same ISO standard for quality of our results. We all tend to use the same ASTM and other methods for the testing. So at the end of the day, I don't try and distinguish one lab from another based on the testing, the quality of the data, the numbers, for, you know, a simple way of putting it. So how people try and differentiate themselves is on the interpretation. And as you say, we put loose air quotes around the word interpretation, because it's a cursory analysis of some data. And generally speaking, most commercial labs use their own in-house, if you will, set of limits that are generated by combining the OEM and the lube supplier limits. Because the OEMs will typically supply limits for wear metals and contamination, the lube suppliers will typically supply limits for viscosity and other parameters such as, you know, additive levels perhaps.
But the industry ones, if they exist, those ones are kind of a combination of mostly the wear and contamination related ones and to a lesser extent the product specific limits. They might have a viscosity limit as an example, because it's the recommended grade for the application. But they won't get into the nitty gritty of how much calcium, phosphorus, zinc should be there because that may vary from one formulation to the other. So labs typically started with this.
Then what I saw was an evolution whereby labs would take a cross cut of a chunk of their data. Usually they would look at the last, you know, 100,000 or a million samples, whatever it was, and they'd try and make sure it spanned at least a calendar year so it includes any seasonal variations or variances that might exist. And they would basically do distribution curves. And there's even an ASTM standard for this: ASTM D7720 recommends using a bell curve distribution. Right? You know, two standard deviations is where the first caution comes on, like you say, kind of a yellow light. Three standard deviations is the red light. And, you know, you can typically assume anything within one standard deviation is your green light.
What's happened then is, again, like I said, if you accept that all labs are roughly the same when it comes to the numbers, then they've got to be different on the interpretation. Well, if everybody has similar customers, everybody's going to end up with similar limits. So where they really try and differentiate themselves, and this is a good thing, is they actually will cross cut the data: instead of just their entire database over a whole year or whatever it might be, they take an entire singular customer's database and just say, okay, from all your machines, here's, again, probably the same thing, 1, 2, 3 standard deviation sections, and create customized, user-specific limits. But at the end of the day, they are based off of past behavior and the anticipation that future behavior is somewhat similar.
So in a nutshell, where all these limits come from is a little bit empirical, right? It's based off of old data, old experience, and, to a lesser extent, because it only applies to some of the numbers, the standards to which everything is held. Like, as an example, viscosity.
Viscosity frequently has a 10% limit to it, plus or minus 10%, because that's what the ISO grading system says, plus or minus 10%. So nobody's going to, you know, make that any broader than that. They might choose to tighten it if they can see empirically it's able to stay tighter than that.
But on the outside, worst case scenario, they're just going to plug in plus or minus 10% to the viscosity limits and let those ride.
So that's kind of where I've seen the evolution.
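The bell-curve limit-setting described here can be sketched in a few lines. This is a minimal illustration of the two- and three-standard-deviation scheme, not ASTM D7720 itself (the guide covers more robust techniques); the function names and the iron history are hypothetical.

```python
import statistics

def limits_from_history(results):
    """Derive caution/critical alarm limits from historical results
    using the simple bell-curve approach: roughly two standard
    deviations above the mean = caution, three = critical."""
    mean = statistics.mean(results)
    sd = statistics.stdev(results)
    return {"caution": mean + 2 * sd, "critical": mean + 3 * sd}

def classify(value, limits):
    """Map a single reading onto the traffic-light scheme."""
    if value >= limits["critical"]:
        return "red"
    if value >= limits["caution"]:
        return "yellow"
    return "green"

# Hypothetical iron (ppm) history spanning a year of samples
history = [12, 15, 11, 14, 18, 13, 16, 12, 17, 14]
lims = limits_from_history(history)
print(lims, classify(30, lims))
```

Note that the limits are only as representative as the sample population behind them, which is exactly the weakness discussed next: a cross cut of all customers yields broad, middle-of-the-road numbers.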
And some people choose to adopt only the OEM limits, only the lube supplier limits, maybe both. Some people choose to create their own, like you say, or have the lab create them for them, because the lab, of course, has the same access to the data and usually the tools to do it a bit quicker.
But if you just go with the generic ones, then, yes, they're just a cross cut of all the lab's customers, which, you know, depending on which lab you deal with, some have more experience in one industry than another, some have more experience with one type of machine than another.
And you may think that that makes them better, but what I find is it just tends to make them broader, so more middle of the road and less specific. And that's kind of why I always encourage people. I said, no matter how a limit is created, you have to appreciate it's a static number.
And being a static number, you have to ask yourself, what conditions did it take to create that number, you know, under how much usage, what speed, what temperature, what environment, and all these kinds of things that went into creating that number. And then say to yourself, is that how I'm operating? And of course, the short answer probably is no. There's at least one of those variables.
So the common example I like to give to people is this.
Just presume three pieces of equipment. So I go with gearboxes, because that's kind of universal. Most people kind of picture what one of those looks like and say they're three different sizes.
One's only 20 liters sump capacity, the other one's 200 and the last one's 2000.
So multiples of 10 increase, easy to keep track of in your head.
Then I suggest to them, the first gearbox, it only has 20 liters of oil in it, and we have around one and a half grams of metal content. What I'm saying is, if we filtered out all the wear metals from the oil and accumulated them into a tiny little pile, it's about one and a half grams, about the size of a fingernail.
Now in the 200 liter gearbox, we do the same thing, filter out all the wear metals, collect them into a pile. And I play with the units here, so it's not so easy to see the math behind it real quick. I say it's about half an ounce.
And then in the 2000 liter gearbox, same thing, collect all the wear metals and it works out to about a third of a pound.
So usually when I do this to a big group, an audience, I say, okay, there's your three gearboxes.
20, 200, 2000 liters; we have one and a half grams, we have half an ounce, we have a third of a pound.
Which one is in the worst shape?
And most people seem to favor the 20 liter gearbox, and it doesn't matter which one they pick. I'm not there to judge. But I say, well, what's interesting about the math and all that is that all three of those work out to be about 100 parts per million.
So I said on an oil analysis report, they would have all looked the same.
So then I say, well, what if we add something to it? What if I tell you though, that the first gearbox, the smallest one, had 1500 hours of service, the middle one had a thousand hours of service, and the last one only had 500?
Well, then people quickly change their answer and say, no, no, no, it was the bigger gearbox, that's the worst one. Shortest runtime got to 100 ppm, that's the worst. I said, well, no, no, let me throw one more thing at you, though. What if we just make all the runtimes the same, make them all a thousand hours, similar runtime in all three. But now I tell you what the previous result was: in the first gearbox it was 90 parts per million, in the second gearbox it was 30, and in the last one it was 80.
Now which one's the worst? Well, then everyone quickly picks the middle gearbox and says, that's the worst one. I said, well, every time I change one variable, you change your answer. I said, do you see why a static number will not work? You cannot accurately, for all samples, rely on a static limit, wherever it came from. Like I said, any of the four common sources, OEMs, lube suppliers, industry, or commercial labs' experience, all of them are a point in time under certain conditions that may not hold true today.
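The three-gearbox arithmetic can be checked directly. A small sketch, assuming an oil density of roughly 0.87 kg/L (an assumption, typical for gear oil); the unit games land all three sumps in the same ballpark concentration, which the talk rounds to 100 ppm.

```python
OIL_DENSITY_KG_PER_L = 0.87  # assumed typical gear-oil density

def ppm(metal_grams, sump_liters):
    """Concentration in parts per million by mass:
    grams of metal per gram of oil, times one million."""
    oil_grams = sump_liters * OIL_DENSITY_KG_PER_L * 1000
    return metal_grams / oil_grams * 1e6

# The three sumps: 1.5 g, half an ounce, and a third of a pound of metal
OUNCE_G, POUND_G = 28.35, 453.6
for grams, liters in [(1.5, 20), (0.5 * OUNCE_G, 200), (POUND_G / 3, 2000)]:
    print(f"{liters:>5} L sump: {ppm(grams, liters):.0f} ppm")
```

All three come out within a few ppm of each other, so on a report listing only concentration they would indeed look the same, regardless of how different the absolute quantities of wear debris are.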
And so what I tell people is, and you know, it's a bad, you know, pun-joke mantra to have, but trend is your friend, right? You have to trend results to really see what's actually happening. Is it changing in a way that's good or bad? And what I always try and be clear on is this can come right down to two sister units, two units near identical at the same facility or in the same plant, similar runtime, similar operating conditions, you know, switch up the operators, whatever you want, try and make them as equal as you can. And I say one may sit at 80 parts per million consistently all day, the other one will sit at 30 parts per million consistently all day, of the same measured wear metal. I said, at that point, that's just what's normal for each of those.
And so, kind of back to your first question there about the history of oil analysis and complacency, one thing I always try and say to people is, look, if you think about the four basic condition monitoring technologies that exist, we have vibration, we have ultrasonics, we have thermography, and of course oil analysis. The first three all have known signatures, known patterns, known sort of frequencies, whatever you want to call it, such that a singular reading can tell you something very telling.
And what I always try and be honest about oil analysis is it lacks that.
You show me one report, one moment in time on a system and say, how's this report look? I'm like, I have no idea. First question I'll say is, what was it last month? You know, what would it look like the time before?
What's the new oil look like? You know, I need all these points of reference to be able to accurately, you know, adapt my recommendation.
And every time you add a new piece of information, just like my example to you, it may change the outcome.
So, to digress down a different path here for a second, there's an example report I like to show people, and basically, in the trend over time, what you notice is simply that this is a bearing, it's just an anti-friction bearing. You don't know how big it is, but you know it's an anti-friction bearing. So you're just picturing, okay, bearing in a housing, could be on a pump, a motor, whatever.
The oil is specified as an R&O 68. You read the report and all you notice is that the new oil does not seem like it contains any additives, because that's common for R&O products: most of the additives that are used don't show up. They're not calcium, magnesium, phosphorus and zinc based, typically.
You notice, though, that although that is the oil that is supposed to be there, you are seeing increasing levels of calcium and phosphorus. So there's your first clue something's wrong.
What you then see is the viscosity is unchanged. It is holding steady at 68. So you're like, all right, if this is the wrong product getting in there, it's obviously not affecting the viscosity.
But the big thing that you notice on this report is the wear metals are terrible.
In fact, the particular example I go through has a comment on it and it says, post failure: the sample was taken after the failure occurred. So the easy part is we don't have to figure out what went wrong, we just have to figure out why it went wrong.
When we keep reading the report, the next thing we notice is the particle count is high.
You see a high particle count, you see some silicon to back that up, and it even had an analytical ferrography done on it, and that showed high levels of dust and dirt amongst all the wear metals.
Now, the way I teach people to read reports is ignore the effects, focus on the causes. So wear metals, those are all effects that something drove it to wear. That's not what we're here for. What's the cause? Well, the easy answer is we have calcium and phosphorus, signs of another product getting in there. And we have signs of silicon and a high particle count.
So I ask people, I said, can you think of one event, one action, that would make both those two sides of the equation happen simultaneously? And, this is the important part, to such a point as to drive a bearing into a failure mode. And so usually what people come up with as an answer is, well, somebody added the wrong product, but it was the same viscosity. I'm like, all right, you're ticking off how the calcium got there, how the phosphorus got there, why the viscosity didn't change. They say, but it was added using a dirty container and/or a dirty funnel.
I say, all right, that would tick off the box for how the silicon got there and why the dust and dirt is showing up on the analytical ferrography and the particle count, etc. But do you really think a one-time use of a dirty funnel or a dirty container would throw a bearing into a failure mode in a matter of weeks?
And of course, most people start to go, no, probably not. And I say, well, welcome to the club. I said, you've just done data interpretation with the lab's level of knowledge.
So now let me become the end user and tell you two things that the end user would know. The lab doesn't know. The end user knows that the particular bearing housing operates in a very dry and dusty environment and it has seals that are frequently greased.
So once I reveal that to people, you can see the little wheels start to turn there. And then I show them a slide and say, here's the grease they're using and it's a calcium sulfonate EP grease.
So now you have a source of calcium, it's the thickener; you have a source of phosphorus, it's the EP additive. It's a little bit of grease getting into the oil, so it's not going to affect the viscosity. So you've explained half of it again. But you go, why would a little bit of over-greasing getting into the oil cause the bearing to fail? And you go, well, it's not just over-greasing allowing grease to migrate into the oil. It's that over-greasing a bearing will blow out the seal. And since it operates in a dry, dusty environment, now the dust has gotten in, because the seal is no longer functioning. And that's what led it to failure. And of course, you know, everyone has this big look of revelation, like, oh my goodness, yes, it was staring me right in the face. I said, yeah, but until you knew it was in a dry, dusty environment with seals on it that required greasing, you would never have come to that conclusion.
So I say to people, when it comes to oil analysis, what do you think you're paying for? Are you paying for the numbers or are you paying for the interpretation?
And some people say, well, you know, that's how labs differentiate themselves: they really are saying our interpretation is better than the other guy's, so that's what I'm paying for. And I'll say, well, my opinion, you don't have to agree with me, but my opinion is you're paying for the numbers only. The analysis, the interpretation, back to your comment, it's free and it's worth every penny, right? It's a limited interpretation, and oftentimes, back to your original question, it relies on static numbers to determine what does high iron look like, what does high silicon look like, and has no idea about the actual circumstances of how that particular asset operates in its particular environment.
And therefore, when you read it only from that lab's perspective, with static limits applied to it, you're barely seeing what you could. And that's why, you know, back to your very original question, the traffic light system is, you know, deeply flawed by today's standards: we just don't have that same gross amount of same-fleet, same make-model combinations where we can trust that historical empirical data will represent what the next future failure is going to look like. Now we have to, you know, accept the fact there are so many variables that in the end the trend is your friend, right? You look at every report individually, you don't care how it compares to its sister or its brother. You just look at each one and say, okay, what is this doing in comparison to the last few samples, and expect that, yes, certain values will increase, certain values may decrease, and that's the normal trend. The question is, are they increasing or decreasing faster than normal, or is there a sudden change in direction that's inexplicable? That is, not because you topped up with fluid or changed the filters or did something else that, you know, you can account for. Again, information the end user has that the lab would never know.
[00:26:02] Speaker A: So with the trend is your friend, I think that's probably getting to the idea of us looking at, like, rate of change limits, right? So the idea that instead of, let's say wear metals, let's, for ease's sake, say iron, instead of it flagging at 50 parts per million, we're going to flag it when the increase has been more than, let's say, 2 parts per million per 50 hours or whatever arbitrary number we're going to come up with here, right?
[00:26:30] Speaker B: Yeah.
[00:26:31] Speaker A: So I think I've only ever seen maybe one or two OEMs prescribe rate of change limits. MWM gas engines comes to mind.
There are a couple of others.
Why are those not more common? Like, you would have thought, for example, you know, if you look in the Jenbacher, just because I used MWM as an example, if you look in the Jenbacher handbook, for example, most of the wear metals as well as contaminants, you know, silicon, is just set to a hard limit for landfill gas engines of 200 parts per million.
From memory, was it copper? I think it was like 15 parts per million. Aluminum is about 15 as well. Iron, something like 20.
These are all kind of absolute numbers which, like you said, are not necessarily reflective of the wear rate.
So why aren't rate of change limits more common? Especially when the OEMs presumably have a ton of data that's available to them.
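A rate-of-change limit of the kind described here would amount to something like the following sketch. The 2 ppm per 50 hours threshold is the arbitrary example from the question, and the function names are hypothetical; the next answer explains why this tends not to work in practice.

```python
def rate_of_change(curr_ppm, prev_ppm, hours_between):
    """ppm increase per operating hour between two consecutive samples."""
    return (curr_ppm - prev_ppm) / hours_between

# Hypothetical limit in the style described above:
# flag if iron climbs faster than 2 ppm per 50 hours (0.04 ppm/h).
LIMIT_PPM_PER_HOUR = 2 / 50

def flag(curr_ppm, prev_ppm, hours_between):
    """True when the trend exceeds the rate limit, regardless of
    whether the absolute level has hit any static alarm."""
    return rate_of_change(curr_ppm, prev_ppm, hours_between) > LIMIT_PPM_PER_HOUR

print(flag(30, 20, 100))  # 0.1 ppm/h, over the limit
print(flag(22, 20, 100))  # 0.02 ppm/h, under the limit
```

Note that this needs the hours between samples, which, as discussed below, is the first piece of information labs frequently don't have.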
[00:27:36] Speaker B: I can give you a two-part answer on that one. The first part, the easy answer, is why most labs don't use rate of change: because they don't always have the hour information.
So they're lacking the actual number that they would need to determine the rate.
Second reason, though, bigger reason, is they don't work.
So a good example of this: there was an interesting, call it a study if you will, that was published and focused on Caterpillar 3406 engines, a fairly common engine out there in industry.
And it focused, yes, specifically on iron and was trying to explain why rate of change didn't work.
The graph plotted the iron compared to the hours of the reported samples, and it had a distribution curve on it to show how many samples drove each of the numbers. There was a very high peak of samples taken at 250 hours and a smaller peak at 500 hours, because those would be fairly common sampling intervals.
You can see that you would have greater confidence in the values at those two key points. Now, what was interesting is that at 250 hours, the average iron content in the engines surveyed by this lab was 12 parts per million.
At 500, what we're all expecting me to say at the end of the sentence is that it was somewhere in the neighborhood of 24, maybe even 25. But it wasn't. It was 21.
And so they go, well, why at double the usage does it not have double or even slightly higher than double the amount of iron?
So when you look at the graph, if you look at the Y intercept, the Y intercept does not start at zero.
It doesn't start at zero hours having zero iron. It starts at zero hours having about 7 parts per million iron.
And this is, you know, a fairly common phenomenon we're all aware of, usually referred to as oil hang-up, in that when you drain the oil out of the system, you never get all the oil. So you always have residual old oil mixed in with the new oil. And this number has been quoted at up to about 25% of the volume, etc., whatever you want to believe.
But this data showed that a more accurate representation of that number is closer to 40%.
So what I'm saying is whenever you drain your oil, whatever the values were at drain, take 40% of that and consider that your zero starting point, your zero hour starting point.
So when you do rate of change, all you do is take the number that you get, divide by the hours reported, and see what that comes out to be. Well, because of this non-zero intercept, what you actually see is a slightly weird curve, where the rate of change seems to be extraordinarily high in the first few hours, drops like a stone, and then levels off. But when it does in fact level off, it levels off with a slight downward slope. So as much as we think the rate of change is always equal, and an increase would be a sign of danger, the fact is it's not equal, it's actually slightly downward. So you try and correct for all this nonsense and say, okay, let's make it work, let's force it to work.
So you take that 7 ppm at 0 hours and do a baseline correction and just subtract 7 from every result and force it through a zero intercept.
Well, then what you see is you no longer get that weird high starting point, which is good, but the trend still has a slight downward slope.
So the reality is, if we rely on rate of change, by the time the rate actually increases, it's probably gone into failure mode. So it's too late.
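The distortion falls straight out of the three numbers quoted from that study. A quick sketch using the values from the talk, roughly 7 ppm at zero hours from residual oil, 12 ppm at 250 hours, and 21 ppm at 500 hours:

```python
# Iron levels from the Caterpillar 3406 study discussed above:
# ~7 ppm at 0 hours (residual "hang-up" oil), 12 ppm at 250 h, 21 ppm at 500 h.
samples = [(0, 7), (250, 12), (500, 21)]

def naive_rate(hours, iron_ppm):
    """Rate of change as usually computed: total ppm divided by hours."""
    return iron_ppm / hours

rates = {h: naive_rate(h, fe) for h, fe in samples if h > 0}
print(rates)
# Because the intercept is ~7 ppm rather than 0, the apparent rate
# (0.048 ppm/h at 250 h versus 0.042 ppm/h at 500 h) drifts downward
# even though iron is steadily accumulating in the sump.
```

So a fixed ppm-per-hour alarm compared against this naive rate would read the engine as wearing more slowly the longer it runs, which is the false assumption being described.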
And, you know, some people have tried to say, well, this is because as things wear, they have less to wear. And I'm like, look, that's like using the word break-in, which is an old, old term. Not a lot of equipment actually has to break in anymore. Plus, this data wasn't collected on new units only. It was collected on all units in service at a bunch of different, similar customers in similar applications. It was about five different large mines, I think, that were used to compile the data.
And it just showed that the fallacy of rate of change is that we assume, presume actually, that the rate should be equal throughout its life, throughout a service life, not necessarily the asset's life, but throughout the service life.
And in fact, it is not. So like I said, even if we had the hours, which we don't always have, we're basing it off a false assumption. So it doesn't work. And that's why most labs stay far away from rate of change, and why I think OEMs don't adopt it and all that. And so as an industry, we've recognized the limits of rate of change.
It may work at an individual site, you know, if they do it themselves and they do, you know, force baseline corrections every time they do an oil change.
But the reality is that again, there's so many other variables kicking in that the rate of change for one unit will be different than the rate of change for a sister unit. So unless you're willing to do it individually for Every one of them, you're not likely to come up with a number that says, look, we expect like you say, 2 ppm per hour of use or something like that as the ballpark benchmark type number. We're like, no, you know what, some units it could be 0.2 ppm for every hour and in other one could be three. And both of them are normal.
By our, you know, every other metric we have to go by, these machines appear to be operating normally. So yeah, in short, it just doesn't work.
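That per-unit idea, judging each machine against its own history rather than a fleet-wide benchmark, could be sketched like this. The units, rates, and three-sigma band are all invented for illustration:

```python
from statistics import mean, stdev

# Sketch: flag a unit only when it departs from ITS OWN historical
# wear rate, not from a fleet-wide ppm/hour benchmark.

def is_abnormal(history_rates, new_rate, sigmas=3.0):
    """True if the new ppm/hour reading departs from this unit's own
    historical rates by more than `sigmas` standard deviations."""
    mu, sd = mean(history_rates), stdev(history_rates)
    return abs(new_rate - mu) > sigmas * sd

# Unit A normally wears at ~0.2 ppm/hr, unit B at ~3 ppm/hr: both "normal".
unit_a = [0.19, 0.21, 0.20, 0.18, 0.22]
unit_b = [2.9, 3.1, 3.0, 2.8, 3.2]

print(is_abnormal(unit_a, 0.21))  # False: within unit A's own band
print(is_abnormal(unit_a, 3.0))   # True: 3 ppm/hr is wildly high *for A*
print(is_abnormal(unit_b, 3.0))   # False: perfectly normal *for B*
```

The point of the sketch is that 0.2 ppm/hr and 3 ppm/hr can both be "normal" depending on which sister unit you are looking at, which is why a single fleet-wide threshold doesn't hold up.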
[00:33:14] Speaker A: Yeah, interesting, interesting. Okay, so if we are a business that is kind of trapped in the hamster wheel of traffic light analysis and we're looking to go beyond, right, to the type of "trend is your friend" analysis that you've been discussing.
You know, you've talked a little bit of the mechanics of like how to do that in terms of looking back at, you know, in incorporating operational data as well as some of the context around the asset and doing it on an individual basis. So that's kind of the how.
Maybe another question to ask is in the businesses that you've seen successfully transition and kind of, you know, walk up the maturity ladder, who's driving the change?
As in, where is that coming from?
Because, you know, people need to be incentivized to do it, and there needs to be some level of direction. There might also be a component of knowledge, whether that comes from training or simply in-field experience. But who are you finding in the organization is driving, let's say, the maintenance and reliability teams to a deeper level of oil analysis interpretation?
[00:34:33] Speaker B: Yeah, usually it's the middle levels of management or higher.
It's not the boots-on-the-ground person putting the oil in the machine, it's not the person who pulls the oil sample, nor is it necessarily even the person who reads the report, but who they report to, whether you want to call them the reliability leader or manager or engineer, whatever title that may be. It generally starts there, or possibly even higher up. Sometimes it's a plant manager who, and I'm guessing here, has their roots in reliability at some point in their career and knows it works. So it is kind of a top-down motivation to do this, and it's kind of where it has to be. I've always argued that if the upper levels of management aren't supporting it, the program's generally doomed to fail.
And back to our comments earlier, you know, about complacency. Some of the most complacent people, most habit based people, you know, don't want to use the word complacent as meaning like lazy or uncaring, but just habit driven people are the ones who have to do more tasks frequently.
So it is the boots on the ground people because they do the same kinds of things day in, day out, week in, week out. Whereas the higher up the ladder you go, the less frequently they repeat some of the exact same tasks. So they tend to be a bit more willing to assess a situation without going, well, let's just do it the way we did it last time, right? They tend to be a bit more fresher about the whole thing and say, well, how can we approach it this time?
They're also in the position, the authority position, to effect a change, right? People who are at the bottom of that ladder, as well intentioned, as trained, and as motivated as they may be, can be very ineffectual if they don't have that support from somewhere higher up the ladder. So where I've seen it successful, it has definitely been at least the bosses of the people who are on the floor, or higher, who have to say, we're doing this. Once they say we're doing this, generally they do have to convince the people lower down on that ladder that this is a good idea and we're going to do it. And they sometimes have to justify, or convince, whichever way you want to say it, the people higher up the ladder that this is how we're going to be spending our money, or why I'm hiring more people, or whatever it is that we're doing. Because usually there is an incurred cost to doing the program better: one, you're taking more samples more frequently, and two, you're spending more time interpreting, reading, and disseminating the information somehow. So that means more work orders are created, with more things to do than simply the loop-fail-repeat, where they just go, well, I'll just change the oil and we'll be done with it, right? The next report will be fine. Like, no, let's go inspect the breathers, come back and tell me what you saw. So somebody inspects the breathers. They come back and say, oh yeah, the breather was damaged. They go, well, we should change that breather. They change the breather and they say, now let's resample it, right? We don't resample it the next day. That's not enough time. We need to resample at the normal interval. But when they resample at the normal interval, they're remembering. They go, oh yes, we changed that breather. It seemed like it was a bad breather; it was damaged.
So they check the next report a month later, you know, whenever it is, and say, well, if that was the only thing wrong in the system, then we expect the results will be more or less the same. And they go, they see the results are more or less the same, they go, ah, we fixed it, that's what was wrong. We can carry forward knowing that there's no other issues.
Whereas if it continued to increase even though they made one change, they go, ah, that wasn't the only thing wrong. We need to go back and look for something, right? And they close that, that feedback loop of here's what the report says and here's what the person who actually walked past that piece of equipment had to say.
Because oftentimes so many facilities, those are not the same people. The people who read reports, do the interpretation are not the same as the people who have to, you know, either do the inspections or do the repairs or the oil changes, whatever, whatever it comes down to.
So once, once you have that mid level management driving this, they tend to close that feedback loop and those people tend to talk to each other and that's when you start to see some success. And some success tends to drive more success because once people see it kind of works a little bit, they're more invested to see it work a lot and they will take better samples, they will read the reports with a bit more diligence, they will listen to the interpretation with a bit more respect is the word I usually choose for it. You know, they'll actually care what the person has to say and go, well, yeah, if you're telling me the report says this, I will go check that. Because those two things might have something to do with each other. And like I said, it just starts to build from there. But that's usually where I see the, the success. It's got to be mid level management working with the team to convince them that it's working right? So you got to be part salesman here. You gotta, you know, you gotta sell it to, to people on up both sides of the ladder, higher up the ladder, lower down the ladder. And usually you can sell it by, you know, what we call an industry, you know, the war, you know, sharing the little successes we had and say, well, this one time we caught this issue and people go, okay, well, if it can catch that, maybe it'll catch something else and it builds, right? It's, it's not fast, it's not immediate, it's not a switch we can just turn on and say, ah, here we're doing it right. It takes time to do it. But that's, that's where it usually, like I said, my experience has had the best success.
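The resample check inside that feedback loop can be sketched roughly like this. The 20% tolerance band and the ppm figures are assumptions for illustration, not industry thresholds:

```python
# Sketch of the close-the-loop check described above: after a repair
# (e.g. a damaged breather replaced), resample at the normal interval and
# compare against the healthy baseline.
# Assumption: a 20% band around baseline counts as "more or less the same".

def other_issues_suspected(baseline_ppm: float, resample_ppm: float,
                           tolerance: float = 0.20) -> bool:
    """True if the resample still trends well above the healthy baseline,
    meaning the repaired part was probably not the only thing wrong."""
    return resample_ppm > baseline_ppm * (1 + tolerance)

print(other_issues_suspected(50, 52))   # False: "more or less the same", fixed
print(other_issues_suspected(50, 80))   # True: keep looking for another cause
```

If the check comes back False, you carry forward knowing the repair addressed the issue; if True, you go back and look for the next cause, which is exactly the loop being described.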
[00:39:50] Speaker A: So that actually kind of brings me to a later question that I wanted to ask, which was: we're seeing this increasingly, especially among the big mining companies who, as you pointed out, maybe have a deeper level of maturity when it comes to the interpretation of oil analysis results and using oil analysis to its full extent. Obviously mining is huge over here in Australia. It's kind of the industry, for want of a better word. There is maybe a trend towards having centralized reliability as well as hydrocarbon teams. So I'll just pick one of them. Rio Tinto, as an example, has a kind of centrally located hydrocarbon management team as well as a reliability team that will be based out of Perth. And it's what you imagine a control center looks like, you know, lots of big screens.
They can see the data feeds coming in from every single mine site that they have, let's say for example, in the state of Western Australia.
And you know, you've got, to use the term, boffins in chairs doing a certain degree of interpretation. Now, personally, I can see benefits as well as downsides to that. Namely, the benefit would be that the sheer amount of data being collected in a centralized location gives you the capacity to see trends from, you know, a 30,000-foot view.
My concern over doing that is that you lose the operational context that you were talking about, which is that to really utilize oil analysis, you need information about the individual assets.
So are you able to speak to the success or otherwise that you've seen out of adopting this kind of model?
[00:41:46] Speaker B: Yeah, yeah, definitely. I've seen what you've described, you know, centralizing a team somewhere, usually off site, to all locations, not even at one of their locations.
And like you say, there are benefits to the big data side of it where they can take those huge swaths and say, okay, here's an overall trend we're seeing.
But like, if you go back to the example I gave, of being in the end-user position compared to being in the lab position.
The lab position is, yes, the remote person doing the interpretation without knowing any contextual piece of information regarding how that asset is operated.
The end-user one is, yeah, somebody who's actually got boots on the ground and can walk up, see, smell, touch that piece of equipment and go, okay, I can see there's all this grease pouring out of the seal. I can see all this dirt stuck all over that seal. No wonder that bearing failed. It just looks nasty, right? Whereas from the lab's perspective, what are they picturing? Well, they might be picturing that, but they also might be picturing a perfectly clean looking bearing operating the way it's intended to operate. So they make their interpretation based off of either a wrong assumption or no assumption, right? They just can't assume anything. So they don't factor in the environment, because they're like, well, it is whatever it might be.
So I definitely have always tried to defend those boots-on-the-ground people and say, look, no one will ever do a better job than somebody who walks the equipment down. Somebody who is there. It could be the operator, could be a maintenance individual, sometimes it could be an electrician, right? Somebody who's got nothing to do with the lubrication side of it, but because they walk around frequently and they see it, they smell it, they touch it, they know what normal is and they know when it's not normal, right? They can walk up to you and say, hey, you know that unit over there? It burped out its dipstick; it's laying on the ground, right? Oh, that's not right. Whereas a thousand miles away on the other side of the country, you've got somebody looking at a bunch of data going, huh, this one unit is showing higher signs of silicon than the other ones. I wonder what that's about. Really, the simple answer is, well, if you had actually laid eyes on it, you would have known what it was before the analysis even told you. So I do think there's a certain loss if you only rely on the remote group. I think there's power, or some synergy maybe even, to be gained by centralizing it, because I have seen the downfall.
I have a client who is five coal mines all operated geographically in a similar region, very close to each other.
And when they took a view from, like I say, the 30,000-foot view, they recognized that their entire fleet of brand new bulldozers showed high signs of nickel, every last one of them. So it wasn't just one of the five sites, it was all five of them. And they were quickly able to narrow it down to only the ones they'd recently purchased, none of the previous ones. So long story short, that one turned out to be that Caterpillar had chosen a different bearing vendor, and the new bearings were being supplied with nickel flashing on them.
So, you know, the thing is, if each individual plant had to address "why is nickel showing up in only a few of my units," they may or may not have recognized it was only the newest units that were doing it. Even if they had, they might have thought, geez, we must be treating the new ones a little rougher, or whatever it is; they might have discounted it. Maybe not all five sites, but each one of those five, having been responsible for their own interpretation of that high nickel on only select units, wouldn't have seen the bigger picture: that, no, realistically, there was a common factor here. It was that they were all new.
Like I said, that's something that you can gain from the 30,000 foot view side of it.
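The fleet-wide cohort check that caught the nickel issue could be sketched roughly like this. The site names, units, and readings are all invented; only the pattern, new units reading high across every site, mirrors the story:

```python
from collections import defaultdict

# Sketch of the 30,000-foot view: pool samples from all five sites and
# group by a candidate common factor (here, whether the dozer is a new
# purchase). All site names, units, and ppm values are illustrative.

samples = [
    # (site, unit, is_new_purchase, nickel_ppm)
    ("Mine1", "D1", True, 14), ("Mine1", "D2", False, 2),
    ("Mine2", "D3", True, 16), ("Mine2", "D4", False, 1),
    ("Mine3", "D5", True, 13), ("Mine4", "D6", False, 3),
    ("Mine5", "D7", True, 15), ("Mine5", "D8", False, 2),
]

def mean_by_cohort(samples):
    """Average nickel ppm per cohort (new vs. existing units), across all sites."""
    readings = defaultdict(list)
    for _site, _unit, is_new, ni in samples:
        readings[is_new].append(ni)
    return {cohort: sum(v) / len(v) for cohort, v in readings.items()}

cohorts = mean_by_cohort(samples)
print(cohorts[True], cohorts[False])  # new units: 14.5 ppm, existing: 2.0 ppm
```

Any single site seeing one or two high units might discount it; pooled across all five, the new-purchase cohort stands out immediately, which is the value of the centralized view.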
Now, if you only do that, you will lose the boots-on-the-ground aspect of, hey, it doesn't smell right, it's running hotter than normal, I can see oil seeping out of this, or any comments like that, where you go, you know what, those are often the best comments, the ones that are truly predictive, that catch things at an earlier stage than the oil analysis, which is waiting to be triggered by a high contaminant count or wear count or an oil that's oxidizing or whatever it might be.
So, you know, I don't think you can say one has to or should replace the other.
I'm not saying we should only go to boots-on-the-ground people. Nor am I saying we should definitely outsource everything to a team of very professional, highly educated people who sit in armchairs far, far away.
Each has their benefits.
But back to my comment I made on the previous question. It's kind of like it's a feedback loop.
It's really successful when they talk to each other, because when they talk to each other, then the boots on the ground people believe that they actually are bringing some experience even though they're sitting far, far away. And the people who are sitting far, far away start to recognize there are other factors, other variables they should be aware of when they're doing their interpretation. And they start to do a better interpretation than simply, you know, like you say, looking at the wall of screens and going well, that, that line's a little bit higher than all the other ones. So let's, let's, you know, send somebody out to that one, right? They start to go, there might be a reason for it that's simpler than that, right. And get to the root cause faster.
[00:46:59] Speaker A: So maybe one last question, which would be, you know, I think condition monitoring, let's say, as a discipline has been pretty broadly adopted by a lot of industries. Now obviously there are some that are kind of lagging behind, maybe they don't have the resources, but there's plenty of industries that are using a combination of, you know, VA, thermography, ultrasound, oil analysis, probably even some of the motion amplification tools, electrical signal analysis, all the rest of it.
Of the main four, sort of the core four disciplines, I think there's often a tendency to think that one has supremacy over the others. Right. So here in Australia, VA is kind of king. You can throw a rock and hit about 10 VA Cat 1s and Cat 2s.
And I squarely blame a man called Clyde. I'm going to call him out, Clyde, because he was such a bloody good trainer. He was the Mobius trainer here for probably 10, 15 years or something, and he was just extraordinary; universally good feedback from people attending his courses. And so they were extremely well attended. And he has created an army of VA devotees all around Australia, which is a fantastic thing, and a real tip of the cap to him. He's kind of recently retired now. Clyde's done an amazing job at elevating the standard of VAs across Australia. Now, it does have the knock-on effect where VA is kind of king, and all of the other disciplines are kind of, you know, a bit poo-pooed.
So how do you see all of them being complementary, and maybe specifically, what are the kinds of failure modes that you think oil analysis is really good at picking up that some of the others may not be able to see?
[00:49:10] Speaker B: Yeah, well, I definitely know what you mean about why some gain popularity. Like you say, I have a way of making oil analysis entertaining, and so yes, I have my own army of converts. Usually, as a result of people attending my training, if they're already our customer, we see an uptick in their oil analysis. Because, back to our earlier discussion, if you believe in something, you're more willing to do it, more willing to try it, more willing to stay with it even if it doesn't seem to be working. Whereas if you don't really have a whole lot of faith, you're pretty much going to cause it to fail, because you're going to take bad readings or bad samples, and it's never going to work. So yes, which of the four is the best does bounce around a little bit, depending on who you ask. And I certainly wouldn't offer my opinion as to which one I think is the best, because no one would believe somebody who's got nearly 30 years in oil analysis if they said oil analysis was the best. But what I will say is oil analysis is truly the best at wear and contamination.
If it's a wear related issue or if it's a contaminant related issue, oil analysis will pick that up on the first sample and guide you along the way to say, hey, this isn't right.
In defense, I would say vibration and ultrasonics will pick up imbalances and alignment issues better than oil analysis ever will.
Now, as a catch-all, I would say that all three of those technologies would catch every one of those problems I just listed; it's just a question of where each stands out.
Oil analysis, like I said, will catch wear first. Vibration and ultrasonics will only tell you it's wearing once the wear has progressed to a point where the signature starts to look a bit funny.
But oil analysis will say, hey, on the first sample: this got higher than normal, this wear isn't normal. Similar with contamination.
But oil analysis, eventually, if you let it run long enough, could tell you that the machine's out of balance or out of alignment, because it would probably have a high amount of wear showing up. But by the time you could be confident that that's what's causing the wear, your vibe guy or your ultrasonics guy would have told you six months ago that that was the issue, because it only took one reading to know that.
So each of the four technologies has a certain type of failure mode that it is better at than the other ones. But most of the four, even thermography, can detect many of the same problems. So that's where I kind of group them together and say, the word you chose is complementary: some of them have strengths, and of course weaknesses, that are complementary to the others' weaknesses or strengths, and conversely. And so the two that I find the most complementary are actually vibration analysis and oil analysis. I think they cover the widest gamut of potential failure modes: over-lubrication, under-lubrication, contamination, wear, misalignments, all the common failure modes. And I'm not going to say that ultrasonics won't do it as well, but vibration and oil analysis have a long history together. There's lots of software out there that merges the two data sets in a reasonable way, so it's got the best support in the industry. Because, back to my earlier comment, people know it works; they have faith in it. Not as many people have developed the same kind of tools for ultrasonics, which is quickly becoming one of my favorite technologies because of its simplicity of use.
Right, back to your comment: you had to be trained to do vibration, right? You can't hand somebody a vibration analyzer and say, hey, here you go. They'd barely even know how to operate it, let alone how to interpret it. But ultrasonics, by today's standards of acoustic emission ultrasonics, is so simple that just about anybody could pick one up and be able to detect a few things. They're not going to leverage all the benefits, but you could point it at a bearing, collect a reading, point it at a different bearing, collect a reading, and go, okay, that bearing is worse than this bearing. Or when they come back in a month, go, hey, this one bearing's gotten worse since I last saw it, right? It's very simple to use. And so I am starting to lean towards ultrasonics as being the better complementary one, simply because it's a lower cost to get into, less training is required to do it, and it's a bit easier to see some immediate payback, because ultrasonics can detect things that vibration can't detect that are a bit unrelated. Like, they can detect air leaks, right? Sometimes that right there justifies the purchase price of an ultrasonic detector: just solving a bunch of steam leaks or air leaks within the plant. Acoustic emission ultrasonics is also pretty good at detecting electrical faults.
So, you know, in the old days you would have had to buy a FLIR camera for that. Nowadays you're like, oh, I'm getting some of the benefits of a FLIR in my acoustic emission ultrasonics, which is mimicking a lot of what I can do in vibration. Now you've almost merged three into one. Not quite, but you're getting a very introductory course to all three in one simple handheld device. So I'm like, that's quickly becoming, in a lot of industries I'm dealing with now, a go-to tool, just because it's so cheap by comparison and it's so little training by comparison that people are liking it. And oil analysis is that old standard; like we said in the first question, they're checking a box. So they're like, well, we've got oil analysis and we've got either vibration, if you're lucky enough to have a Cat 1 or Cat 2 on staff, or, back to your previous question, you send it out to a centralized place that'll do the interpretations for you, but you at least know how to collect the readings.
You leverage that together with your oil analysis program and you've got, I think, a pretty solid gamut. You definitely don't need all four technologies at all sites. Of course, each one of the four technologies, like I said, has certain areas where it reigns supreme, and sometimes it would be nice if you had access to a FLIR camera instead of having to send in an oil sample and ask a vibe guy to come do an interpretation for you. But if you had to choose, kind of thing, I would say that the two to choose would be oil analysis, of course, but that's my bias.
And either ultrasonics, if you're not willing to invest more time and money into it, or vibration. But vibration, yes, is very complementary to oil analysis. So it's kind of the top two: oil analysis and vibration. They're complementary in so much that each one has a strength and each one has a weakness, but it's the other one's weakness and the other one's strength.
[00:55:46] Speaker A: So yeah, that's really interesting. I mean, over the last maybe six to 12 months, where I've increasingly seen maybe almost a divergence or a split is: for oil lubricated assets, it'd be oil analysis plus VA; for grease lubricated assets, it's grease analysis plus ultrasonics.
Yeah, because often, well, in a lot of cases, with grease lubricated assets, if you're re-greasing with an ultrasonic unit, then you're getting that information already. So yeah, it's interesting to see how these are all kind of merging together and, as people get better understandings of the strengths and weaknesses, where everything is finding its niche.
[00:56:29] Speaker B: Somebody once said to me, hey, Evan, you've seen PF curves, and on the PF curve, different curves depending on who produces them, you will see the points near P labeled with oil analysis, vibration, and so on and so forth, but they appear in different orders. And so I was asked, what's the correct order?
And I said, there is no correct order. I said, whoever produced it is basing it off a particular type of failure mode. So I said, if they're showing you an eventual wear out failure mode, the PF curve will list oil analysis as its earliest detector, followed by vibration, then ultrasonics, and usually terminating with thermography. I said, but if the failure mode was based off of a misalignment, I said vibration and ultrasonics would be first, thermography would be third, and oil analysis would be fourth. I said, there is no correct order to this because every type of failure is caught by a different technology at different points of its life.
And so, you know, that's the thing about the PF curve. There is no, there is no perfect PF curve that explains everything in life. It's like, no, it's just a concept to help you understand. There's a point P where the potential failure exists. There's a point F where it's going to fail. F happens. You can't change that. F is going to happen.
Question is, can you find P sooner? And that's by employing the right technology for that particular failure mode, not for that particular asset, for how it's going to fail. And if it fails differently, yeah, the technology you needed to catch it is going to be different each time. And so that comes back to why you should be as complementary as you can and not rely on any one technology.
Try and rely on at least two, and if you can, three. And of course, if you really, you know, won the lottery and you could get all four, then you would have the best opportunity to catch as many of those Ps as you could before they progress all the way to the F of the failure curve. So that's just it. There's no one answer.
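Evan's point that detection order on the P-F curve depends on the failure mode, not the asset, can be captured in a tiny lookup. The two orderings below paraphrase the examples he gives; they're illustrative, not a standard:

```python
# Sketch: which technology finds point P first depends on HOW the asset
# fails. Orderings paraphrase the two examples in the discussion.

DETECTION_ORDER = {
    "wear-out":     ["oil analysis", "vibration", "ultrasonics", "thermography"],
    "misalignment": ["vibration", "ultrasonics", "thermography", "oil analysis"],
}

def earliest_detector(failure_mode: str) -> str:
    """The technology that would catch point P soonest for this failure mode."""
    return DETECTION_ORDER[failure_mode][0]

print(earliest_detector("wear-out"))       # oil analysis
print(earliest_detector("misalignment"))   # vibration
```

The same asset appears in both rows with different orderings, which is why no single "correct" P-F curve exists and why complementary technologies matter.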
[00:58:26] Speaker A: That's awesome.
[00:58:27] Speaker B: And that, and that feels like a.
[00:58:28] Speaker A: Really good place to end it. I mean, Evan, hey. Really appreciate you coming in to give us a bit of, bit of a, bit of a history lesson, a bit of a technical lesson.
Obviously some, some stories from, from your experience as well, from your time in the oil analysis field.
So, yeah, really appreciate the time and thanks for coming on.
[00:58:48] Speaker B: Yes, well, thank you for having me.
[00:58:50] Speaker A: Awesome. Easy, easy.
[00:58:52] Speaker B: Thanks.
[00:58:53] Speaker A: Thanks so much for that.
That was, that was great. That was great. Hey, I was actually thinking, I had a question for you. Which I thought about, about 15 minutes in when we were speaking about the context.
So I've got a customer who asked me the other day. So it's an iron ore plant, right? So there's a lot of hematite and magnetite that'll show up. And, hang on, magnetite is more magnetic than hematite is.
I think that's correct, yeah. So they were curious: if they had equal levels of hematite and magnetite,
so the iron content is the same but the magnetic effect is different, would they end up with a different PQ index?
[00:59:50] Speaker B: Well, the PQ index. I mean, we at TestOil do ferrous wear concentration, so it's the same instrument as a PQ index. The difference being ours is calibrated actually to ppm, whereas PQ index is some made-up number. But the premise has...
[01:00:07] Speaker A: Become the standard for some reason I prefer ferrous density as well. But.
[01:00:11] Speaker B: Yeah, yeah. But, you know, when somebody like ALS decides to do PQ, it becomes dominant. Yeah. So anyways.
But the premise for either machine is the same. It's just a magnetic flux coil that you're interrupting.
And so anything that's magnetic, ferromagnetic, so that is really any iron. This is where I'm now not knowing if I'm correct in saying this, but any iron, whether it be hematite or magnetite, should affect it similarly.
And of course anything else that's ferromagnetic, because that's the other qualifying statement. Cobalt, chromium, they're magnetic as well.
So the question being, if you knowingly spiked a sample with, you know, PQ 50 or 100 of one and 100 of the other, would you get a sample that equaled 200, or would it cause a greater reaction? And I'd like to think that the answer is still no.
And how I'm basing that answer, because I don't 100% know this is true: the reason I mention to people that chromium is magnetic is because stainless steel kind of falls into the same category as your question.
Stainless steel: there's 300 series and there's 400 series, right? There's basically two kinds of stainless. Well, one's ferromagnetic and the other one isn't. Both are made with high doses of chromium, though, right? Both are made from iron, both are made with chromium, 10 to 25%. And so the point is that you're only testing for anything that can be magnetized, not necessarily what is magnetic.
I think this is where, like I said, I'm basing my guess of an answer: if it can be magnetized, then it will show up on the PQ index.
So whether it's magnetic to begin with or not, I don't think is relevant; as long as it can be magnetized, it can cause an effect on that magnetic flux. Whereas the one type of stainless steel, the 400 series, being non-magnetic, doesn't. You might as well throw aluminum in there; it won't do anything, right? It can't interrupt the magnetic flux coil. So I'd like to say the answer to your question is no, one won't read higher than the other. They'll read, you know, however they're supposed to read.
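The measurement premise being described, that a flux-coil instrument registers only what can be magnetized regardless of iron content, can be sketched like this. The material list and ppm values are illustrative assumptions, not calibration data:

```python
# Sketch of the premise above: a PQ-style instrument only "sees" debris
# that disturbs its magnetic flux coil, i.e. ferromagnetic particles.
# Non-magnetic metals (lead, tin, aluminium) pass through without effect.
# Materials and readings here are illustrative only.

FERROMAGNETIC = {"low-alloy steel", "cast iron", "magnetite", "cobalt", "nickel"}

def pq_style_response(debris_ppm: dict) -> float:
    """Sum only the ferromagnetic fraction of the debris; everything
    else passes the coil without disturbing it."""
    return sum(ppm for material, ppm in debris_ppm.items()
               if material in FERROMAGNETIC)

sample = {"cast iron": 120, "magnetite": 40, "lead": 25, "tin": 10, "aluminium": 15}
print(pq_style_response(sample))  # 160: only the 120 + 40 of ferromagnetic debris registers
```

This is also why, as discussed below, an elemental technique like ICP or XRF can report iron that a flux-coil reading never sees: the coil cares about magnetizability, not elemental composition.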
[01:02:33] Speaker A: And a 400 series stainless will also show up as nothing on a ferrous density meter too, right?
[01:02:41] Speaker B: Yeah, because it's not attracted to it. You know, if you think of direct-read ferrography and stuff like that, those used a series of magnets that the sample would go through, and if it disturbed the magnet, they would say, okay, this much disturbance equals whatever. These new meters that we have, the PQ and the ferrous wear concentration, just use coils instead. But it's the same question: will it upset the magnetic field?
And if it doesn't like if it's lead, tin, etc, they just pass on through without causing any, any changes. So that's the, the, the, the quick Google answer to you here. If I just.
[01:03:19] Speaker A: Okay, just interesting.
[01:03:21] Speaker B: Can hematite be magnetic? Because if it can, then there you go.
It should show up in the PQ similarly.
[01:03:29] Speaker A: So.
Oh, interesting.
[01:03:34] Speaker B: And this is why I think it's always so, so good to understand how a test is performed.
Because if you don't know how the measurement's being made, then you don't know how it may be affected. And.
[01:03:48] Speaker A: Right.
[01:03:48] Speaker B: This is why this is such a valid question. So Google tells me: while pure hematite is not strongly magnetic, it can exhibit weak magnetism under certain conditions, especially at low temperatures.
So it says that it's essentially non magnetic.
So that would lead me to say, then, if Google's correct on this one, that no, hematite wouldn't show up, whereas magnetite would.
[01:04:16] Speaker A: So, so you know, it's not that.
[01:04:17] Speaker B: Not that magnetite would cause a higher reading than what it is, but you would see it for what it is, whereas the hematite, you actually wouldn't see for as much as it is.
[01:04:28] Speaker A: So that's so interesting. So depending on the particle size, you might see it in ICP.
Right. You would see it in XRF, but you wouldn't see it in PQ or ferrous density.
[01:04:43] Speaker B: Yeah.
[01:04:44] Speaker A: So you get potentially four completely different readings.
[01:04:50] Speaker B: Well, and this is the interesting thing. Have you ever, in all your videos, done an explanation for why ICP is limited on particle size?
[01:04:59] Speaker A: No, I haven't, actually. That was a video idea that I thought, oh, I haven't actually done that. I was thinking about that the other day.
[01:05:05] Speaker B: Yeah, because the way I try and explain it in simple terms, I give an analogy, and it's a weird analogy, but here we go. I said, imagine it's August, and I ask you to be my research assistant for the Perseid meteor shower.
So you agree to help me. And so we meet the night of the first night of the meteor shower, and I give you a pad of paper, pen, and a stopwatch.
What I ask you to do is every time you see a, you know, streak of light across the sky, you start your stopwatch, and when it finishes burning up, you stop your stopwatch. You record the times. So you do this all night. The first night. Now, when we meet the next morning for coffee, you go to hand me your data, and I said, well, before you do that, sort them between large and small.
And you say, well, I wasn't keeping track of that. How do I know? But then it dawns on you. You go, well, the larger the meteor was, the longer it should have taken to burn up across the sky. So you look at all your data, you find some sort of median point, say it's two seconds, and you decide any meteor that burnt up in less than 2 seconds is small, and anything that took longer than 2 seconds is large. So I thank you for your data and I say, would you like to help me tomorrow night? And you say no, because you did not enjoy that experience.
So I try and cajole you and I say, look, I'll make it easier for you the next night, because of course the Perseid meteor shower takes place over multiple nights. I say, the second night, you just have to count them. You don't have to sort them for sizes. I'll just give you a pad of paper and a pen, no stopwatch involved. So you meet me later that night, and I say, oh, by the way, here's a porta potty, right? The portable bathrooms that they have at construction sites. I say, here's a porta potty. You're going to do all your measurements from inside of there. And you're like, I wish I'd read the terms and conditions of this agreement before agreeing to do this. But being a person of your word, you say, I will do it. So my question to you is, now you're standing inside this small box, and you're looking up through this tiny, tiny skylight at the top there: how many large meteors do you see? And the answer is, you don't know that you see them.
Every meteor that you see streak past that window, to you, is small, because you only saw it for less than two seconds. You don't know if it started streaking somewhere over here on the horizon and finished somewhere over there on the horizon. You just know that you saw it go past your window. Well, in an ICP, the plasma that's being created is the night sky. The sample that's being introduced into the plasma is creating meteors, things that give off light, right? As the electrons are ionizing, they give off their light. That's what we're measuring. But if you're familiar with an ICP, beside the torch, just above it, there's a tiny little slit pointed at the optic system. So we're only seeing a small, slightly vertical but very narrow slit of the plasma. We're looking at the night sky through a skylight.
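[Editor's note] The analogy lends itself to a small sketch (the numbers are invented, purely to illustrate the point): sort streaks by burn time against a cutoff, then show that a narrow viewing window caps every observed duration, so everything reads "small".

```python
import statistics

def sort_meteors(burn_times, cutoff=None):
    """Classify streaks as small/large by burn time (the night-one job)."""
    if cutoff is None:
        cutoff = statistics.median(burn_times)
    return {
        "small": [t for t in burn_times if t <= cutoff],
        "large": [t for t in burn_times if t > cutoff],
    }

def seen_through_skylight(true_times, window_cap=2.0):
    """Through the tiny window you only see each streak for at most
    window_cap seconds, whatever its true duration (the ICP slit)."""
    return [min(t, window_cap) for t in true_times]

true_times = [0.8, 1.5, 2.6, 3.4, 4.1]      # hypothetical seconds
print(sort_meteors(true_times, cutoff=2.0))  # both buckets populated
print(sort_meteors(seen_through_skylight(true_times), cutoff=2.0))
# after the window cap, every streak classifies as "small"
```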
So part of the reason that we say ICP is good from 0 to about 5 microns, loses a bit of accuracy between 5 and 10, and then ultimately doesn't really see anything larger than 10.
Part of that statement is true because if the particles are between 0 and 5 microns, they completely ionize in that portion of the plasma in front of the window, and we see everything for what it should be. If the particles are on the larger side of that spectrum, getting closer to 10 microns, they tend to stretch a bit. They start to ionize down here and continue to ionize up here, instead of being in the narrow window.
But the real reason, a big part of why is the cutoff somewhere around 10, is, as you may know, with ICP analysis, we dilute the sample and then we turn it into an aerosol, and then we spray the aerosol into the plasma. Well, those aerosol droplets are about 13 to 15 microns in size. So short answer to this question is, what's the largest particle you can fit inside a 15 micron droplet? And the answer is about 10 microns.
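[Editor's note] The droplet argument can be put as back-of-envelope arithmetic. The ~13 to 15 micron droplet figure comes from the discussion; the "fits inside with some margin" fraction is an assumption for illustration.

```python
# A particle carried into the plasma can't be bigger than the aerosol
# droplet holding it. Assume it can occupy at most ~70% of the droplet
# diameter, leaving some liquid around it (illustrative margin).
def max_particle_um(droplet_um, margin=0.7):
    return droplet_um * margin

for d in (13, 15):
    print(f"{d} um droplet -> ~{max_particle_um(d):.0f} um particle")
# lands right around the ~10 micron ICP cutoff described above
```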
[01:08:54] Speaker A: Yeah.
[01:08:55] Speaker B: Because the particle can't be any larger than the droplets.
[01:08:58] Speaker A: Right.
[01:08:58] Speaker B: So there are certain particles that aren't even introduced into that plasma. So back to your comment about different technologies giving different answers: I always give that analogy to try and say, look, your ICP is never going to see the same things your particle count will see.
[01:09:13] Speaker A: Yeah.
[01:09:14] Speaker B: Right. Because your particle count, that's a completely different technology, and a different size range it's even aimed at. And it's got its own internal flaws, because with spherical equivalency the way it estimates particle sizes is all over the map. But long story short, the two ranges that they're measuring overlap only ever so slightly. So unless it's right in this range here, your particle count and your ICP results will never 100% agree with each other as to: should they be going up, should they be going down, should they be stable? But then you throw in something like PQ, and you say, well, PQ is insensitive to particle size. It doesn't matter how big or small the particles are, it sees everything. So, like, oh, is it taking all of this and then creating a range like this?
Sure, but only if it's ferromagnetic.
[01:09:55] Speaker A: Yeah.
[01:09:55] Speaker B: Not if it's iron or made of chrome, you know, so again, it's a different measurement. So no, it's not, you know, measuring exactly the same thing. And I always try and, you know, educate people that way. And I go, look, I am warts and all kind of presenter. I want you to know everything that's wrong with this because then you don't expect it to be what it can't be.
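[Editor's note] A rough way to summarize which instrument "sees" a given particle, assembled from the ranges mentioned in the conversation. The particle-counter lower bound of ~4 microns and the exact numbers are assumptions for illustration, not instrument specifications.

```python
# (approx. min size um, approx. max size um, needs magnetizable debris)
TECHNIQUES = {
    "ICP":            (0.0, 10.0, False),         # ~10 um droplet cutoff
    "particle_count": (4.0, 100.0, False),        # assumed ISO-style range
    "PQ":             (0.0, float("inf"), True),  # size-insensitive, magnetic only
}

def detected_by(size_um, magnetizable):
    """Return which techniques would register a particle of this size."""
    return [name for name, (lo, hi, needs_mag) in TECHNIQUES.items()
            if lo <= size_um <= hi and (magnetizable or not needs_mag)]

print(detected_by(3, magnetizable=False))  # fine hematite-like: ['ICP']
print(detected_by(20, magnetizable=True))  # large magnetite: ['particle_count', 'PQ']
```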
[01:10:15] Speaker A: Yeah.
[01:10:15] Speaker B: And that's the thing: everyone assumes, because they watch too much CSI or similar programs on TV, that a sample this big can yield this encyclopedia set's worth of information. And you're like, that's just not possible, especially not in the time they gave us.
[01:10:30] Speaker A: Yeah.
[01:10:32] Speaker B: It's like, look, give me enough time and enough instruments, I'll spend millions of dollars on this. And sure, I could tell you, based on the isotopes, where that iron was cast and at what facility.
[01:10:43] Speaker A: You.
[01:10:44] Speaker B: Know, I'm like, you don't need to know that data. You need to know, do I have more iron than last time? Yeah, yeah, that's all you're asking then.
[01:10:50] Speaker A: RDE, right? So you don't dilute the sample, you're not spraying it, you're not atomizing it, but it still has a detection limit, right? And I'm assuming that's, you know, once you get above a certain particle size, you don't get full atomization of that particle.
[01:11:05] Speaker B: Right, exactly. Yeah. And atomic absorption is kind of the same thing, right? Those were the three main technologies: RDE, AAS, and ICP.
And from my perspective, having started in the industry when RDE and AAS were pretty much already on their way out: ICP is fairly simple to operate, even though it does require a lab-based setup. It's not portable, but its speed is unparalleled. You get your 20 elements in about 45 seconds, 45 seconds of rinsing, and you're on to the next sample. You can load 100 up at a time and walk away. It's what every commercial lab is asking for.
So the last vestiges of RDE tended to be, in North America anyway, the military, because they would only analyze a handful of samples every day and they wanted a field-portable instrument that was fairly robust, which RDE fits. But as soon as you ask a commercial lab to do RDE, they go, you know how many of those we'd have to buy? As it is at TestOil, we have six ICPs.
We run around 2,000 or so samples every day, and we need to keep all six of them running, because our results are same-day. We have to get the results out by end of day, and the only way for us to get that volume through in that time frame is to use the fastest technology that can do it accurately. And it's just one of the more widely accepted technologies. ASTM D5185 has kind of been, for decades, the standard to which all wear metals are compared. It's a fairly common instrument. It's not the cheapest, it's not the easiest to set up; you need a cooling water supply and argon and nitrogen to get you going. But once it's there, on a day-by-day, sample-by-sample cost basis, it's a fairly cheap instrument, fairly reliable, easy to calibrate, easy to keep in calibration, etc.
It just gets the throughput. But the limitation is, if the particle is much larger than 10 microns, you're not going to hear about it. And so I always try and say to people, look, what's the most damaging size of particle? And the answer is clearance-size. There's no numerical answer to that question; it's a clearance-size particle. So what are typical clearances? We can quickly agree, mostly under 5 microns. Then if a clearance-sized particle gets into the machine parts, what size of wear metal is going to get created? Well, you typically can't create a particle larger than the clearance, so again it'll be less than clearance size. And then what's the third thing we're measuring? We're measuring wear metals, we're measuring contaminants, but we're also measuring additive elements, and those are sub-micron. So even if it only measured in the 0 to 5 range, ICP would tell you everything that you expect to see. What it doesn't tell you is the unexpected.
[01:13:50] Speaker A: Yeah, yeah.
[01:13:52] Speaker B: That's where PQ and analytical ferrography and other large-particle analysis, right down to microscopy if you had to, tell you the story about the other stuff, the unexpected. So that's... that's an interesting one.
[01:14:04] Speaker A: Right, because you mentioned direct-read ferrography, and I haven't seen a single one of those instruments in Australia.
It seems kind of to have died, I think.
[01:14:19] Speaker B: Yeah, I would say a good 15 years ago it was really popular, and I think that was because a few of the larger labs were doing it as a faster way to do ferrography, or, you know, whatever. And then I think people quickly saw, one, that there was an easier way to do it; PQ-analyzer-type instruments are a bit faster about doing the whole thing.
And two, I think its name killed it. Calling it direct-read ferrography.
You know, like, you talk to our friend Ray D., right? He's like, that's not ferrography.
[01:14:50] Speaker A: Yeah, yeah.
[01:14:51] Speaker B: Nothing ferrographic about it, right? You're not doing any morphology on this whatsoever. It's just a magnetic reading, and small versus large is, you know, a bit arbitrary. It's more honest, even if it's the wonky scale of a PQ, to just group it all together and say it's this much. Can we really separate the large from the small? Can we really come up with a decent algorithm or some optical way? You know what? No.
There's no point trying to be small versus large. It's just, let's just call it all the same thing. So I think that's why we saw it sort of slip away. And I don't, honestly, off the top of my head, I don't know anybody who still runs it. I know I, I still get asked for.
We steer them somewhere else. We're like, hey, we could do that (which, actually, we can't), but we'd rather do this. And we try and explain the benefits, and they're like, oh, that sounds better. And you're like, yeah.
[01:15:41] Speaker A: So on the ferrography thing: I'm looking at opening up a small diagnostics lab over here. Yep. Because, as far as I can tell, there's only one lab in Australia really doing that sort of ferrography-type work.
And he is in his mid-70s, and I assume will want to retire at some point.
Analytical, like... so a ferrogram, sorry.
Versus RPD.
Oh, it's used in marine. Swansea Tribology developed it. It's the ferrogram, but it's circular.
You seen those?
[01:16:28] Speaker B: No.
[01:16:30] Speaker A: So they kind of look neat.
So the appeal for me is that the machine to create the slide doesn't, there's no pump or anything like that. So in terms of maintenance and that it'll be potentially substantially less.
It's literally just hand-pumping oil onto the slide, and it sorts the particles into three distinct magnetic rings.
And it also kind of spins the slide so that you get, that's how you get the deposition.
So instead of using gravity and a magnet, it's using magnets and, and a centrifuge.
[01:17:14] Speaker B: Centrifugal force.
[01:17:15] Speaker A: Yeah.
I was just curious if anyone has any preferences. I quite like it because, you know, you can put it on a microscope and have a rotating... what do you call the thing? I've lost the name for it.
But the table, the rotating table: you could actually just kind of spin the particles past you.
Yeah, and locate them by the markings on the rotating table.
So that, that's the appeal to me. I was just curious if, if you.
[01:17:49] Speaker B: No, that's interesting, because at TestOil we have two ways of doing ferrography. One is the typical ferrogram slide, and I always describe that as, you know, the inclined slide: based on flow, particles will settle out from largest to smallest because of fluid dynamics. But the picture I'm showing has all these straight little lines all over it, right? And I always say to people, you're probably thinking that may be true, but they're not going to align in cute straight little lines. Well, yes they will, because there's magnets. The other way of doing it is just to do a patch test, right? Throw it on a membrane filter and then run that under a microscope. And the downside is there is no sorting large versus small, and there is no sorting magnetic versus non-magnetic.
The other downside, if we get into the details is you can't easily, which I'm going to see a presentation in three weeks about this.
You can't easily heat treat.
[01:18:42] Speaker A: Yes.
[01:18:43] Speaker B: Membrane filters, except for Ray's got his.
[01:18:45] Speaker A: His new heat treatable patches. Right?
Yeah.
[01:18:49] Speaker B: And I know this was something I had played with about 15, 20 years ago.
I worked at a filter company and we were looking at more resilient patches. And so nylon was one of the ones we looked at, because with the typical cellulose ones, if you want to use a solvent like acetone, they're just gone, right? It eats them in seconds.
And nylon, you can also bake in an oven at a higher temperature before it starts to really ripple and do weird things. But what you're describing hits kind of some high notes here because like you're still sorting based on particle size, you're still sorting based on magnetic reaction.
But I would expect that this could do it faster, that that would actually be a quicker way. Because what really slows down ferrography is not the person spending time on the microscope; I mean, a patch is arguably a bigger surface area to keep scanning than a ferrogram might be. It's the creation of that ferrogram slide that takes so much time. We have multiple instruments, like a row of them, and they're all just sitting there trickling the oil, and you've got to wait and all this kind of stuff to properly create that ferrogram.
So we reserve those kinds of ferrograms for the samples where we really need to see a clear distinction of what metals are there, because those are the ones we're going to heat treat. Versus, if we just need a quick answer, we just do a patch and we're like, let's just see it for ourselves. It won't group it, but we can still hunt and peck.
But back to your comment, that's what really got me thinking. Like if you put it on a rotating table and read it like a record.
[01:20:24] Speaker A: Yeah.
[01:20:25] Speaker B: And had, had it just sort of slowly migrate towards center. Right. But keep track of its, you know, coordinates kind of thing, then yeah, you could build an algorithm for that based off of known morphology and say yeah, you know, just log it based on its, you know, because I mean, let's face it, cameras nowadays we can, you can train it to size the particles as it goes along.
It can look at the colors and you know, faster than a human could, it could flick the lights from red to green to white. Yes. And log everything while it's doing all that, backlight it, top light it. The only thing it couldn't do simultaneously easily, but somebody could spend time, is do the heat treating sequences along the way. But you could, you could technically automate that.
At least the way you described it would be a faster way of reading a ferrogram than just going the lawnmower path back and forth on the slide. You could just spin it, a slow spin of course, at that patch size, you know. Because I'm assuming they're still doing this on glass?
[01:21:28] Speaker A: Yeah, yeah.
[01:21:29] Speaker B: So it can still be heat treated and everything. I'm like, no, that actually sounds, that sounds brilliant. I hadn't heard.
[01:21:33] Speaker A: Yes. Yeah. So Swansea Tribology came up with it a number of years ago as an alternative to the ferrogram. And it seems to have taken off in marine, but nowhere else. And I think it's because Swansea sold the technology to Parker.
So it's called the Parker Analex RPD something something.
I don't know what the name was. Insatech Marine was selling it for a long time, and I think it's potentially because that unit is more compact.
So that's why the Marine guys really liked it.
But they discontinued it.
So Parker discontinued it literally six months ago. Because I called them about buying a unit, they said, oh, we don't sell that anymore because Parker discontinued the item.
So I kind of went hunting around to try and find a used one because I think it would be really neat.
[01:22:27] Speaker B: Yes.
So I guess the question now is, why did they discontinue it? Because, you know, I hear this in industry a lot about pore blockage particle counters: some labs just don't do it anymore, and it's because they find the parts for the instruments hard to get.
Well, TestOil, we own the rights to.
[01:22:46] Speaker A: It.
[01:22:49] Speaker B: So we have easy access to the parts.
And I know I've said that to people, said we need to be cautious. We can't make the parts hard to get.
If other people want them, we should sell them. Because if enough people stop using their units and we're the only ones using it, then it actually doesn't seem like a good idea anymore.
Everyone's dubious about why are you the only lab that still does it that way?
Um, and so that would be kind of my two-pronged question back to Swansea: why did you discontinue it? Because I'm assuming it could be something as simple as certain parts being hard to make, or getting expensive, or prohibitively hard to come by.
And then the second question is, who now holds the rights to it?
Because it could just be that they sold it off to somebody else, and one day you'll see it, or you'll never see it, because nobody knows there's a demand for it. They just bought a portion of the business. Because Pacific Instruments, I mean, that's what they did with the PODS, if you remember: the portable oil diagnostic sampler. It's that great big suitcase black box that you hook an air supply or an air cartridge to, and it does typical optical laser particle counting. It uses air pressure to push the sample through the detector, but it's just a suitcase-sized one. And we use them in our lab. That's what we use for optical particle counting, because we do most of ours with our pore blockage units, but if somebody asks for optical, we just have a...
It was made by HIAC Royco, and then they sold to Pacific Instruments, and it's been sold so many times that I don't know if they even make them anymore. But it was a rock-solid particle analyzer, because it was portable. All you needed was an air supply, which you could just have with canisters of carbon dioxide.
[01:24:43] Speaker A: Yeah.
[01:24:45] Speaker B: So it's like, sometimes there are these good instruments out there that serve a purpose, but the company that's overseeing them, the bean counters there go, nah, it's not profitable enough for me.
[01:24:53] Speaker A: Yeah. Or even they just get buried in, in, in other industries and stuff like that. Right. Like I've come across this guy.
Have you seen Peter, Peter Boozer? I don't know if.
[01:25:05] Speaker B: No, I know the name Boozer, but not Peter Boozer. So no.
[01:25:07] Speaker A: So I just came across him recently. You know the Atten2 guys, who are doing the optical, as in image-based, particle count?
Yeah, they're over in Spain. They sell the sensor that goes into the Filtertechnik unit. It's basically LaserNet Fines, except instead of a shadow, it's an image, right?
So they've got like a briefcase-style version, right? So you can do field testing, and it'll show you... it kind of gives you high-level ferrography, in the sense that because it's taking an image of the particles, it can use an algorithm to classify the size and the shape of the particles as well.
And those units have started to become really popular.
Well, there's other guy, Peter, who as far as I can tell has identical technology, but he's been selling it in a completely different industry.
Right. So he hasn't been using it for lubricants, he's been using it for people that produce like dry powders so that they can do particle classification and size and distributions and all that sort of stuff. And as far as I can tell, his unit's actually better because he can do dry powders in addition to liquids and, and stuff like that.
And the level of data that he gives you on the back end seems to be in some ways better.
[01:26:36] Speaker B: Well, that reminds me of like back when I first started with optical particle counting, there was a Coulter unit. And Coulter units could go down to like 0.4 microns.
[01:26:46] Speaker A: Wow.
[01:26:47] Speaker B: Like, they were way better than anything else. And you're like, well, this is the better particle counter. It's kind of like a Cannon viscometer, right? Third-decimal-place accuracy kind of thing. You're like, why is this not adopted? It's because it's too good.
[01:26:59] Speaker A: Yeah, right.
[01:27:01] Speaker B: It analyzes in detail that's not required. Back to one of your earliest questions about the traffic-light approach: people are looking for fast and easy answers. They're like, look, tell me when it's above 50 parts per million. Don't tell me when it's 49.9. I don't even care about 49.9. They just want these broad, quick cuts. And so sometimes, when these better technologies exist, you're like, why aren't these being used? Well, either they don't scale well, which is sometimes the issue, or it's just so good it's not necessary.
[01:27:28] Speaker A: Yeah.
[01:27:28] Speaker B: And so people are like, I don't see the need to. And usually it's a more expensive unit, so I don't see the need to invest in that quality. So I'll just stick with this lesser version because it works well enough.
And that's an answer I often give people when they ask questions about why are some things the way they are. I'm like, because it's.
[01:27:46] Speaker A: It's adequate. Yes.
[01:27:47] Speaker B: I said, I think adequate is the most insulting word in the English language because it's the only word that's still positive but means almost nothing. Whereas I said if something was. Was bad or, you know, used any negative word, you're like, oh, I have somewhere to go from there. But if somebody says it's adequate, you're like, well, but I don't really need to improve from there. It's. It's adequate.
[01:28:08] Speaker A: It hits.
[01:28:09] Speaker B: It hits the right notes, you know. So somebody says, oh, that was adequate. You're like, oh. Yeah, yeah. You know, I'd rather it was bad, because then I'm like, well, I can make it better. But if it's adequate, then, well, why make it better? It's adequate. Yeah.
[01:28:22] Speaker A: One last question, because you brought up. You brought up doing analysis on just like a patch.
Yeah. Can you do the same analysis on an MPC patch?
[01:28:35] Speaker B: Very simple.
[01:28:37] Speaker A: So it should catch all of the same debris that a usual filtergram will.
[01:28:43] Speaker B: Yeah. And you're not. You're not treating the patch in any way that doesn't allow it to go under a microscope in the same conventional ways. You know, like, you're not.
You're still rinsing away the oil. So there's no oil residue that's there.
You know, it's just. It's a. It's a. Yeah. A different patch and a different micron rating.
And that's the thing. I did some really cool research where I took oil, filtered it through a 5 micron, collected that in a clean vessel, ran that residue through a 1.2, ran that through a 0.45, ran that through a 0.22, ran that through a 0.1. Or, I skipped over; I think I did a 0.8 somewhere in there. But it successfully went down smaller. Once I ran it through a 0.1 or 0.22, whichever was the smallest I could get to, I would then switch out the solvent to acetone, allow anything to agglomerate on the second pass, and then weigh them to show the distribution of weight.
And I remember I did this shortly before Greg Livingstone created the MPC test with Brian Thompson. But one of our colleagues, Dr. Akira Sasaki in Japan, he did similar research to what I was doing.
And both of us were within 1% of each other. Where I was like 95, he was like 96, or one of us was the other way, I don't know. But we said based on mass alone, the weight of material you can extract from an oil sample. These are all hydraulic systems and power plants. 95, 96% by weight was below 1 micron.
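[Editor's note] The arithmetic behind that finding is simple to sketch. The stage weights below are invented placeholders chosen to mirror the ~95% figure mentioned, not the actual study data.

```python
# Sequential filtration: each finer patch traps what the previous one
# passed. Tally trapped mass per stage and compute the sub-micron share.
# (pore size in microns, hypothetical trapped mass in mg)
stages = [(5.0, 0.4), (1.2, 0.8), (0.45, 6.0), (0.22, 9.0), (0.1, 8.8)]

total_mg = sum(mass for _, mass in stages)
sub_micron_mg = sum(mass for pore, mass in stages if pore < 1.0)
share = 100 * sub_micron_mg / total_mg
print(f"{share:.0f}% of recovered mass was below 1 micron")  # 95%
```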
So we said, you know, when you're doing these patches... and again, they had to come up with a reasonable cutoff for an MPC, because the finer you go, the more information there is, but the harder it is to make the patch. I was pulling like 27 inches of mercury of vacuum to pull it through a 0.1, and I was letting it sit there for many minutes, and all this kind of stuff. It was not a fast patch to create.
You know, running through a 1.2 or 0.8 micron patch is down in seconds. It's quick to do. So again, it comes down to that whole throughput question for commercial labs. They don't really get into this finer-detail stuff, because one, it doesn't matter to the end user, and two, it just takes so much more time, effort, or money to get there. They're like, no, this is enough data for us. But yeah, back to your question: yeah, you could totally run an MPC patch through any sort of analytical microscopy that you'd want. You can do weight analysis, you could do ferrographic analysis to it, and you'd get some of that same information. I mean, it's kind of like Rich Wurzbach with the way he does grease analysis, going, okay, I'm going to extrude it, and what can I get from all these extrusions? Well, I can measure the extrusion while I'm doing it. I might as well double up on that: while I'm prepping the sample I get one of my measurements, but how many other measurements can I do simultaneously?
[01:31:25] Speaker A: Yeah.
[01:31:25] Speaker B: And it's just compounding that to make it as, as dense of a report from as little sample as possible.
And it's like, yeah, you could do the same with mpc. You could, you could double up on.
[01:31:34] Speaker A: The information. Because I'm trying to run a little project here where I've got a thousand-odd MPC patches, to see, well, you know, if you're going to go to the trouble of making an MPC patch, especially because of the fact that it has to sit there for so much time and all the rest of it, and you've put so much effort into this thing: can we get more than just an MPC number out of it? It feels almost redundant to be like, at the end of this, oh yeah, 30.
That's it. Like, you know, most labs aren't even reporting the L*a*b* values. Yeah.
[01:32:09] Speaker B: You know, like, so you might as well weigh it before and after and get that number, so you've got something there. You could do some ferrographic analysis to it, and then, when it's all said and done, you could burn it and do a TGA on it.
[01:32:25] Speaker A: Yeah, true.
[01:32:27] Speaker B: Since you've got it. And that's pretty much how a TGA is done: by running it through filter paper, collecting it, and burning it.
[01:32:32] Speaker A: Yeah.
[01:32:33] Speaker B: Like there's ways, you know, because again, at the end of the day, you don't want the patch when you're done.
[01:32:38] Speaker A: Yeah, yeah, well, exactly. Well, that was.
[01:32:40] Speaker B: Throw it away.
[01:32:41] Speaker A: That was the other thing. It was like, could you run it through XRF, as an example, and get some, you know. Probably the quantities of wear metals you're going to get are low enough that maybe an XRF is not going to pick them up.
[01:32:56] Speaker B: I think the detection would be there. Where I'd be coming from is that the kind of systems you do MPC on are not the kind of systems that typically generate wear metals.
[01:33:03] Speaker A: Yeah.
[01:33:04] Speaker B: So I'm thinking that you actually wouldn't see a whole lot because there wouldn't be a lot.
[01:33:07] Speaker A: Yeah. Although increasingly it's been done on hydraulic systems. And that's kind of where I'm trying to go with it.
If we start to do it on more hydraulics, maybe there's a reason to start looking at the patch for more information.
Yeah.
[01:33:21] Speaker B: What you'd have to do is a quick study of, you know, a reasonable number of samples, by ICP only and by MPC with XRF done on it, and say: did you catch anything that ICP missed? Because again, ICP does get down to the 0-to-1 micron range very, very well, so it would see what's being trapped on the patch quite easily. You're not expecting anything larger than about 10 microns to be on that patch. So that's where I'd be like, I don't know if you would see anything that ICP wouldn't have already seen. What you're doing is concentrating it, right? That's all you're really accomplishing. And at the end of the day, XRF is a bit more semi-quantitative as it is, because of the way elements excite.
So it's like, probably you would find that, no, it's not a good substitute.
[01:34:02] Speaker A: Yeah.
[01:34:02] Speaker B: As long as you can get enough sample to run ICP on, then ICP would be the preferred way of doing wear metal analysis.
And, you know, the other thing it comes back to is that ICP is faster.
[01:34:15] Speaker A: Right? Yeah.
[01:34:15] Speaker B: You're not waiting the three days to prep your MPC just to create that patch for, you know, one minute under the spectrophotometer and, bang, you have a number three days later. Yeah. With ICP, you had your answer in two minutes, I think.
[01:34:26] Speaker A: Yeah, I think it's more like, if you've already prepared your MPC, then what can you get out of it?
Now, the XRF units are interesting. I'm trying to play around. Like, I've been talking with Bruker, because, I mean, obviously none of these things are cheap, right? And trying to send them over a few samples to be like, well, do you get anything?
Because the types of units that I'm looking at are not at the big end of XRF, right? They're not a handheld unit, but also not one of these massive ones. So it's like, you know, with limited power, how much information can you actually get out of it? Right. Yeah, yeah. Interesting.
All right, cool. Hey, well, thanks for that.
We should do this again.
[01:35:17] Speaker B: Sure.
[01:35:18] Speaker A: It's fun.