Hemant Banavar, Chief Product Officer at Motive, and Ryan Paulson, CIO at Fusion Site Services, will discuss how AI-powered cameras and telematics are transforming safety, productivity, and profitability across the physical economy, from trucking and construction to field services.

 
 
 

HEMANT: There are so many situations where you have a safety-critical environment and need to know precisely what's going on. In these situations, good enough just doesn't cut it. The difference between good enough and being really precise is the difference between reacting to an accident and preventing it, and that's what we really obsess over in the solution we've built.

RYAN: In 2023, we had about 470 vehicles, and now we're over 1,300. In 2023, I think we had 280,000 safety events across 14 different behaviors, things like touching your cell phone or following too close. Now, with 1,300-plus vehicles, we're in our third year and we're at about 25,000 events, so about a 98% reduction while increasing our fleet in that short time.

HEMANT: Hi Craig, really good to meet you and glad to be here today. I'm Hemant, chief product officer at Motive. I spent over 15 years in the tech industry before Motive, in product leadership roles across Microsoft, Yelp, Uber, and most recently Stripe. At Motive, our mission is to help our customers who operate in the physical economy make their work safer, more profitable, and more productive. We serve 100,000 customers and about 1.3 million drivers and help them run their operations across multiple industries. And of course, Ryan is one of our customers, and I'm looking forward to learning from him as well today. Our products form one AI-first integrated operations platform that solves problems across fleet management, driver safety, workforce management, spend management, and AI vision.

RYAN: I'm very thankful to be here. I'm Ryan, CIO at Fusion Site Services. I've been in transportation for about 20 years. I worked for the state of Tennessee, then worked at Amazon on compliance for all of its transportation groups, and now I'm with Fusion Site Services. The responsibility of operating in people's communities is very high, and I really appreciated the vision and mission of Motive that was just described: making sure the roads are safe. That's ultimately my job, making sure we're a good partner on the road.

CRAIG: Hemant, can you talk a little bit about, although it seems obvious enough, why good-enough AI isn't acceptable in this kind of application?

HEMANT: Especially when it comes to safety-critical environments like the ones Ryan and Fusion Site operate in, and many of our other customers operate in, whether that's driving on the road, at a construction site, or maybe on an oil rig. There are so many situations where you have a safety-critical environment and need to know precisely what's going on, and in those situations good enough just doesn't cut it. The difference between good enough and being really precise is the difference between reacting to an accident and preventing it, to be honest. And that's what we obsess over in the solution we offer. We have trained our system over billions of miles of driving, and we have built AI models that deliver highly precise alerts in real time, so our customers can not only prevent accidents but also change behavior over time. That's very important, because delivering an alert in real time, when somebody is just starting to get distracted, is the difference between them potentially having an accident and them correcting that behavior and staying safe in their environment.

CRAIG: Yeah. I'm just curious, how does the alert come through? Presumably it goes to the driver. Or is it to the fleet manager?

HEMANT: That's a really good question. Just to take a step back, the way our solution works is it's a combination of hardware, software, and AI all working together. Imagine a driver driving their vehicle down the road. We have our camera installed in the vehicle, looking out at the road and also looking at the driver. At the same time, our AI models, trained on billions of data points, are able to understand the context the driver is in. For example, the camera can see that the driver is hard braking and add the context: maybe they're hard braking because somebody cut in front of them, in which case they did the right thing, and that's a positive driving behavior our AI running on the camera can detect. Versus somebody braking really hard because they were looking at their cell phone, which is a distracted-driving event the camera can quickly pick up. At that point, the system alerts the driver so they know in real time: hey, you're distracted, you probably want to put your cell phone down, which is a good coaching reminder. But if they keep doing it on a repeated basis, we have thresholds our customers can configure to say: if this is a repeated behavior, notify my fleet manager, who can now take action. We do that by taking that AI-generated event and sending a notification over the cloud to the dashboard the fleet managers are looking at, so they get alerted and can take corrective action.
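The escalation flow Hemant describes, a real-time alert to the driver plus a configurable repeat threshold that escalates to the fleet manager's dashboard, can be sketched roughly as follows. This is an illustrative sketch only: the function names, event labels, and threshold value are invented for the example and are not Motive's actual API.

```python
from collections import defaultdict

REPEAT_THRESHOLD = 3  # hypothetical per-customer setting

event_counts = defaultdict(int)   # (driver, behavior) -> repeat count
notifications = []                # what the fleet manager's dashboard sees

def handle_event(driver_id, label, positive):
    """Route one AI-detected driving event from the edge model."""
    if positive:
        return "log_positive"  # e.g. a defensive hard brake: no alert
    # Unsafe behavior: alert the driver in the cab immediately.
    event_counts[(driver_id, label)] += 1
    if event_counts[(driver_id, label)] >= REPEAT_THRESHOLD:
        # Repeated behavior: escalate over the cloud to the dashboard.
        notifications.append((driver_id, label))
    return f"alert:{label}"

handle_event("d1", "distracted_driving", positive=False)
handle_event("d1", "distracted_driving", positive=False)
handle_event("d1", "hard_brake", positive=True)
handle_event("d1", "distracted_driving", positive=False)
print(notifications)  # [('d1', 'distracted_driving')]
```

The key design point from the conversation is that the in-cab alert fires on every unsafe event, while the fleet manager is only notified once a configurable repeat threshold is crossed.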

CRAIG: Yeah. And the alert that comes through to the driver, is that audio? Is it a voice, or a beep?

HEMANT: It's all of them, really. It's a beep followed by an audio alert, and there are also LEDs whose color helps the driver understand what's going on when they have an alert. On the fleet manager's side, they can get an email alert or be notified in the dashboard. Again, this is highly customizable on the back-office side, where the fleet manager can decide how they want to receive an alert for something they need to know about and take corrective action on.

CRAIG: And as I understand it, you have a group of Motive employees who check every alert, or do they sample-check? What are they doing there? Sorry, they check events, when the system identifies an event. And are they involved before the alert goes to the driver?

HEMANT: Yeah, let me talk about that a bit. Again, we obsess over accuracy, and that is why we built our system the way we did: fully integrated hardware, software, and human-in-the-loop validation happening all the time. The reason is that when we start on a new problem, let's say close following, the model may start with only good-enough accuracy, and it improves from there. We need the human in the loop to help the model get better very quickly. Typically, when we launch a new model, our humans in the loop validate every single event coming out of it to say, yes, this is good, or this needs to be corrected. So what our customers see is the filtered, highly accurate version, while that human review provides a constant feedback loop to our model development, which lets us increase the accuracy of our models very rapidly. Over time, as our models mature, they are already highly accurate on the edge and don't need much human intervention. But as we're starting out, the human validation gives the model the constant feedback loop it needs to become highly accurate.
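The validation ramp Hemant outlines, review every event while a new model's precision is still climbing, then drop to light-touch auditing once it clears a bar, can be sketched as a toy simulation. All constants here (the 0.98 target, the 5% audit sample, the weekly improvement rate) are made up for illustration and are not Motive's actual numbers.

```python
TARGET_PRECISION = 0.98   # hypothetical bar for "mature" models
AUDIT_SAMPLE = 0.05       # fraction still reviewed after maturity

def review_fraction(precision):
    """Share of the event stream humans review at a given precision."""
    return 1.0 if precision < TARGET_PRECISION else AUDIT_SAMPLE

precision = 0.80  # a brand-new model starts merely "good enough"
ramp = []
for week in range(10):
    ramp.append(review_fraction(precision))
    # Human corrections feed retraining; assume error halves each week.
    precision += (1.0 - precision) * 0.5

print(ramp)
# -> [1.0, 1.0, 1.0, 1.0, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05]
```

The shape matters more than the numbers: total review effort is front-loaded into the first weeks of each new model, which is why headcount doesn't need to scale linearly with the number of deployed models.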

CRAIG: Yeah, and just to dive into the weeds a little bit, is that through a reinforcement learning loop?

HEMANT: That is correct, yeah. It's a recursive learning loop we've built that constantly provides feedback to the model, so the model keeps getting better. That's how we can build models and make them highly accurate: we take away all the guessing and the noise from the system, and what our customers like Ryan and their teams get is a highly accurate view of the world. They're not spending hours and hours sifting through events generated by an AI, trying to figure out which ones to react to and which ones to ignore. They can trust that whatever is delivered is something they need to react to.

CRAIG: Yeah, and that's an issue with any safety system, AI or not: false alerts and alert fatigue. So by having the human in the loop, balancing detection accuracy against alert fatigue, you reduce that gap, right?

HEMANT: We reduce the alert fatigue, and honestly, more than the fatigue, false alerts erode trust in the AI if you aren't doing this right. So for us, we strongly believe that AI used in these environments needs to be highly transparent, with clear accountability around it. That is why we built our system the way we did, and we make it really easy for our customers to evaluate our AI in real-world environments. We support and encourage our customers to run our systems side by side in real environments so they can see the value of the AI we deliver for themselves.

CRAIG: Yeah. And Ryan, why did you pick Motive, and who were you working with before? You don't have to mention the company's name, but were you working with a camera-based AI safety system?

RYAN: Yeah. With my experience, I've tested out every single camera system in the industry, and when I came to Fusion, they had a specific one I'd prefer not to mention. But yes, we looked at that technology, and we're talking about AI as a tool that saves countless hours of review and lets the business scale and address risk before those risky behaviors turn into accidents. There's also the root of the technology, which is even knowing where your vehicles are and whether they're doing the basic functions expected day to day. So there was an evaluation that ran through about three to five months of side-by-side testing, evaluating the AI, evaluating the GPS location capability of those specific tool sets. Some organizations, Craig, don't have the ability to manage a diverse fleet just from the tool that connects to your vehicle's computer. So yes, Fusion chose Motive based on that review process.

CRAIG: Yeah. And did you do that side-by-side test with another solution?

RYAN: A hundred percent, against about six different solutions. Yes.

CRAIG: That's fascinating. And how do you account for the big performance gaps among dashcam providers? Are you benchmarking primarily on the human-in-the-loop feedback, or how do you benchmark?

HEMANT: One of the studies we did was with the Virginia Tech Transportation Institute. And we have done this not only with third-party benchmarking, but we have also encouraged our customers to run these side-by-side evals for themselves. What's going on in the industry is that AI development is moving so fast that benchmarking hasn't kept up with the pace. What that leads to is a lot of promises and no clear criteria for companies like Fusion Site to decide what the best solution is for them. That is where we believe making it transparent and easy for customers to benchmark goes a long way: it gives them proof points instead of promises, so they know for sure what they're getting out of the system. We believe the innovation in this industry necessitates very easy ways to benchmark, and we want to make it super easy for our customers to do whatever kind of evaluation they need. We'll partner with anybody who wants to evaluate our technology in whatever shape or form; we can help them set up the evaluation and run any side-by-side tests they want.

CRAIG: And what's the idea of zero harm and how do you get there?

HEMANT: Yeah, look, it's very related to this idea of having transparency at the end of the day. The idea of zero harm is essentially working towards a goal of zero preventable accidents on the road. There are still so many accidents happening; just in 2024, I think there were around 40,000 roadway fatalities. To reduce that number, we think technologies like ours play a major role by identifying unsafe driving on the road and helping change that behavior in real time. But to do that in a meaningful way, we have to have very clear benchmarking the industry can adopt, and all of us can benefit from that transparency: not just us and other players, but also our customers, because they know what they're going to get out of the solutions they're purchasing.

CRAIG: Yeah, that's fascinating. And Ryan, what sort of impact did you see when you switched to Motive?

RYAN: So again, we went from a system without AI capabilities to a system with accurate AI capabilities, and you go through phases. But from 2023 to year to date: in 2023, we had about 470 vehicles, and now we're over 1,300. In 2023, I think we had 280,000 safety events across 14 different behaviors, things like touching your cell phone or following too close. Now, with 1,300-plus vehicles, we're in our third year and we're at about 25,000 events, so about a 98% reduction while increasing our fleet in that short time. Year over year, we've seen internally probably right around an 80 to 90% reduction. And I think the other important piece is that it's not just about behaviors, it's also about results. We went from 76 claims in 2023 to one in 2025, and that's with the data and accuracy Motive provides our fleet. We have probably one of the most diverse fleets in America, which means you have a very diverse driver population and a very diverse training program built for these individuals. We were able to build ancillary driver recognition programs based on Motive's safety program, which is part of the total Motive package. So we've seen great success.

CRAIG: Yeah. And presumably insurance premiums came down if you went from 76 claims to one in a year.

RYAN: Year over year, we've seen about $2 to $2.5 million in savings on our insurance premium. The way insurance works, it's about a five-year rolling period, so next year you'll be saving a significant amount more if you're trending two years in, three years in. So it's more than paid for itself. We could not be happier with the results, which ultimately correlate with the fact that we operate in communities; we're very thankful and appreciative of working in those communities and being good, safe partners where we operate. Motive allows Fusion to act responsibly before those harsh events occur and result in loss runs, and I think that's why this AI technology is so important to trucking organizations: you get to see the trends and the behaviors before they result in those loss runs. I'll go back to one thing that was said earlier: those alerts, in that Virginia Tech study against competitive devices, go off within less than a second of, say, a driver touching their cell phone or not buckling up. If you think about that, it literally stops the behavior from continuing, which puts the driver back in a safe situation, so you're reducing the opportunity for that accident to even occur. Hopefully that's added value, Craig.

CRAIG: Yeah. And is there pushback from the drivers? I've read that drivers don't like these systems.

RYAN: Yeah. You know, I've installed this type of device for very, very large fleets, and I would simply say this: it's about how you as an organization roll it out, how you treat your drivers with respect, and ultimately about setting expectations, because it's not just the company that's liable for those accidents, it's also the driver. So you keep the driver in a safe position where you're allowing them to succeed and giving them a promotion path. Like I mentioned earlier, Fusion Site Services has paid our driver population over $700,000 this year to drivers who exceed our safety expectations. So our drivers get to make a significant amount of money just from going a little bit slower and doing things a little bit safer. We have about 200 CDL drivers, so in a majority non-regulated fleet, we have about 99% adoption. And the most unique thing is that when we encounter new drivers, like the 100 drivers we're bringing into our fold, within about two weeks those drivers have adopted Fusion Site Services' approach via Motive's tools. That means they're able to log in, get coached, get educated, and then, guess what, get paid more money. So in a very short amount of time, you're reducing that driver's risky behaviors by about 90% in the areas where they may need coaching.

CRAIG: Yeah, that's interesting: in effect, you share some of the savings you get from Motive with the drivers to incentivize them.

HEMANT: Maybe I'll add one thing to what Ryan just said. At the end of the day, this technology is mainly about getting the driver home safe, and that's what Ryan and his team are deploying these solutions for. When drivers see the impact the technology is making as they drive every single day, that it's not throwing alerts when it shouldn't and it's actually helping them change behavior to be a better driver and get home safe, they immediately see the value it delivers for them. This is about helping them, and the more accurate the technology is, the easier it is for them to see that value at the end of the day.

CRAIG: Yeah. And as you were saying, this can apply across industries. How broad, horizontally, are the industries you're serving?

HEMANT: Again, think about this as everybody in the physical economy: anybody who has vehicles, assets, and workers operating those assets. That's who we serve, and our technology can help them, because at the end of the day, when somebody is running physical operations, they all have the same problems. They want their operations to be safe. Safety can mean driving a vehicle on the road, but it can also mean keeping workers safe at the job site. They want their operations to be productive: they want to make sure that when their driver or worker is doing the job, they have the best tools to do it effectively. Our technology enables that, because we can give them insight into everything happening within the vehicle or asset, so they can increase the utilization of those vehicles. And lastly, it's about making operations profitable. A lot of the AI we develop helps not only with safety but also with efficiency: our AI can find wasteful spending, underutilization, or, say, a vehicle idling for long durations when it wasn't necessary. All of these insights help increase profitability as well. So what we have built applies to a broad set of companies operating in the physical economy.

CRAIG: And who would set those benchmarks? Would it be an industry association? The departments of transportation in each state, or the federal government?

HEMANT: Really good question. I would say it's all of them. I would go back to other industries where this has happened: there are a lot of third parties who are willing and eager to provide their expertise, even to come in and say, what are the right metrics, what are the right benchmarks? Again, VTTI is a really good example of an independent third party who can play a role in establishing these benchmarks. We see real value in having these third parties play an active role in creating transparency in this industry, so customers can benefit from it.

CRAIG: And then how do you balance innovation speed with the need for more rigorous testing and validation? Because you don't want to slow down innovation.

HEMANT: You don't want to slow down innovation, but you also want to make sure your innovation is having the right impact. In this physical economy, a lot of your innovation is safety-critical, and when it is safety-critical, you cannot move fast just for the sake of moving fast. You want real data and real benchmarks behind it before you deploy it. So we believe you can actually innovate faster if you have the right framework for benchmarking, because it lets everyone know what you're optimizing towards and race towards that faster than each of us coming up with our own way of measuring success.

CRAIG: Would independent validation, either from some official body or some industry body, using benchmarking that's recognized and agreed upon, would that make a difference for companies?

RYAN: Yeah. I think right now, a lot of people in my role, or my previous role, so safety-specific roles in trucking, really look to the SMS methodology and the seven BASICs from FMCSA as the core infrastructure for managing your fleet and your drivers. But to your point, one of the challenges organizations have is having an internal asset, or, as you say, an external asset, that can review the capabilities of these systems and identify a top-quadrant or top-tier provider for different pieces of their technology. We're often called on by Motive to assist with an organization that has the mistrust you're speaking about, Craig, maybe from some bad experiences. When you sit down and talk to them about how we use the tool, how our drivers feel, how our general managers and operations feel, you really start to see the buy-in. And again, I think it comes back down to the data, where a third party could really do that validation. The most challenging pieces of implementing this tool reside in the company supporting the technology. We always talk about a technology supporting a company and letting it achieve its objectives, but it's about building those ancillary items around it: what do you do when a driver has this behavior three or four or five times?

RYAN: And I think that's where NHTSA, the seven BASICs, or the SMS methodology could really lend a hand to a trucking company, via an insurance company or otherwise, saying: look, if you have a driver falling into these three or four categories, maybe they're not insurable, or you have to pull them off the road for a period of time and put them through a training program. So I have always wanted more partnership with regulators like FMCSA. It's always very interesting to me when the regulators take a step back and either don't provide guidance or the guidance is loose and open to interpretation. In this safety-sensitive situation, a lot of the measurements of these systems, for example when close following should trigger an alert, come down to friction ratios, how much the truck weighs, and the weather and elements in that area. It's not easy, but I think it's a topic that could be tackled by a joint partnership among technology partners, fleets, and our regulatory groups. Coming together on something as serious as driving very heavy vehicles over the road every single day is a topic that could be solved. We could see this solved.

CRAIG: Yeah. And this is not prohibitively expensive, is that right, Hemant?

HEMANT: There has been a lot of innovation in technology on the hardware front, in terms of the amount of compute you can pack into a device sitting in a vehicle and running within the power constraints a vehicle makes available, and there's been so much advancement that costs have come down. So you can have a pretty capable device running AI models inside a vehicle. Same thing with AI everywhere: the cost of building and deploying models is rapidly going down. So overall, the ROI for a solution like what Motive offers is very significant. And I would actually call on Ryan to speak more about the ROI as you see it in your case. We believe the ROI here is significant, but Ryan, please.

RYAN: I mean, there are tiers. We talked about the insurance premium a second ago: about $2.5 million in savings year over year, and the cost of Motive is significantly below that. Then you have the additional attributes Motive provides, such as utilization, as was mentioned: the drive time from one service site to the next, where you'd be collecting revenue, down to idle time. For example, we've used 1.6 million gallons of fuel, and we've idled away 330,000 gallons. There's an opportunity percentage in there that you can rake back from periods that weren't revenue-generating. And just to be clear, I think one of the main distinguishing attributes of Motive is that it's not simply a camera judging a driver on 18 behavioral attributes. It's measuring the distance, the speed, the trips, the idling, and the hours of service for that driver, which are regulated in some areas and non-regulated in others, as well as the actual revenue-generating trips. So there are probably six to seven different ways to use Motive, and I would say even the lowest one could pay for the actual cost of the Motive device. The last piece, I would say, is the agility of Motive.

RYAN: I used to have to pay someone to install these devices in our trucks, at about $250 per installation. I'm not mechanically inclined, but it would take me maybe 10 to 15 minutes to install Motive on any truck, at any time, in any environment. So I'm able to save a significant amount of money because the hardware is very intuitive to install in your vehicle. Another way to say this, since we talked about AI: probably about seven years ago, the industry used to have a bunch of people, probably hundreds of people at very large fleets, who would watch video of a driver driving the whole day, clip an image, send it to operations, and say, we caught your driver on a cell phone, and then you'd have this back and forth about what to do with the driver. If we had a thousand trucks, and then tomorrow we had 400,000 trucks, I would still have the exact same number of people managing Motive, no more. So if you really sit back and talk about not only affordable technology but scalable technology, that's what we're talking about. So yeah, hopefully that helps.

CRAIG: Yeah, on scalability, I was going to ask: I understand they're not all in the same geographic location, but you have these people monitoring events in real time. Is that scalable? Is there some ratio, like for every thousand trucks on the road you need three people monitoring events, or something like that?

HEMANT: Yeah. The way to think about that goes back to how we build models in the first place, Craig. As we add more models, we don't linearly need more people to validate them, because it's in the initial phases of model development that we need more validation, more of a feedback loop, to increase the model's accuracy. It's honestly a matter of weeks for a new model to get to that high-precision, high-recall state with the human feedback loop. After that, you don't need much feedback, because the model is already super accurate. So initially you need more validation, but as the model matures, the need for validation very rapidly goes down, and we don't have to linearly scale the humans in the loop. What they help us with is building more models. At this time, we can detect more than 15 unsafe behaviors, and we keep adding to that every single day. So from a scalability standpoint, this whole loop is highly scalable, because we can focus on building new models, making them accurate, and then letting them run on their own. It does not require a lot of human intervention.

CRAIG: Yeah. So where is this going in the future? One of the obvious questions: what happens if all these vehicles are replaced by autonomous vehicles?

HEMANT: At the end of the day, the core problems our customers face in physical operations are operational problems. Yes, a lot of the problems we help them solve today are about changing driver behavior, but ultimately we're helping them manage their operations in a safer, more profitable, and more productive way. If you look at it through that lens, we still have a major role to play in giving them visibility into their operations.

CRAIG: Yeah. And Ryan, for the people with fleets on the ground: first of all, what's the penetration in your industry? Do most fleets have something like this? And as penetration grows (I mean, you said 72 claims to one, that's pretty dramatic), do you think overall safety in the industry will increase?

RYAN: Yeah, I think the majority of large trucking fleets are required by their insurance companies or investors to have cameras, have telematics, and have their drivers on electronic logging devices. The technology is penetrating the trucking industry completely, whether that's, say, landscapers with smaller vehicles or over-the-road drivers. Going back to your other point: trucking companies do get a 5% decrease on insurance premiums if they have cameras, and I think insurance companies are going to take that further and further. So I think it's a very accurate statement that the adoption currently in place will greatly increase, for numerous reasons. It's a very protective tool when your trucking company is going down the path of ensuring the safety of its drivers and the communities it operates in.

CRAIG: This issue of benchmarking: I've talked to a lot of people about evaluating large language models, which is important for a lot of applications, but you're talking about benchmarking in the real physical world, where AI is having a direct impact. And as you said, it's across industries; you focus on fleets. Do you think there are some general principles of benchmarking you've learned that would apply to any other industrial AI system?

HEMANT: Look, what I would say is we've taken the approach of being very transparent about the impact of our technology and making it easy for our customers to see the end impact. What we've learned in that process is that in the physical world, you cannot just run a simulation and share that data with your customer. They need to actually see, feel, and breathe the technology to get a clear sense of how it's going to affect their drivers day to day. One of our learnings has been that, yes, there's a lot of simulation and offline data analysis you can do, but there's real value in letting customers see how the technology actually works in their own environment, and making it easy for them to set that up is really important. It lets them understand: is this technology noisy? Is it actually going to help me change behavior?

HEMANT: Is this even going to work if, say, I have a fleet in Arizona operating in really harsh weather? Is this technology going to survive? There are so many aspects of the physical environments our customers operate in that are not easy to replicate or generalize, so you need a way to benchmark in that actual situation. Our learning has been: how do we make that easy? If we make it easy, customers get a clear understanding of the impact, so we go to them and install our hardware and our technology where they're actually going to use it. The other thing we've learned is to help them compare, in a meaningful way, having this technology versus not having it, and over time we've learned how to do that well. We can install our technology but just collect data for a certain amount of time to establish a baseline, before we turn on our alerts and all the coaching.

HEMANT: And before we start surfacing all of that data in the dashboard, we run that baseline, and then we can help them see the difference for themselves between the baseline phase and the phase when all of these features were turned on. They can see: when the technology was just silently collecting data but doing nothing, how was driving across my fleet, unsafe driving, positive driving, all of that? And how did that change once we turned it all on? Those are some of the learnings we've taken, and whenever a customer wants to evaluate, we bring those learnings, the way Ryan was describing, from other customers who have tried the same thing, to the next customer. There's a lot of value in doing that. Ryan, anything you would add from your side that you've seen work well in the evaluations you've done?
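The silent-baseline methodology Hemant describes, collect data with alerts off, then compare against the period after coaching is enabled, reduces to comparing normalized event rates across the two windows. A minimal sketch, with made-up numbers and a per-1,000-miles normalization chosen for illustration:

```python
def events_per_1000_miles(events, miles):
    """Normalize safety-event counts so windows with different
    mileage are comparable."""
    return 1000.0 * events / miles

def baseline_impact(baseline_events, baseline_miles,
                    active_events, active_miles):
    """Compare a silent baseline window (devices installed, alerts
    and coaching off) against the window after coaching is enabled.

    Returns (baseline_rate, active_rate, percent_reduction).
    """
    base = events_per_1000_miles(baseline_events, baseline_miles)
    act = events_per_1000_miles(active_events, active_miles)
    return base, act, 100.0 * (base - act) / base
```

For example, a fleet logging 300 events over 50,000 baseline miles and 90 events over 60,000 active miles drops from 6.0 to 1.5 events per 1,000 miles, a 75% reduction, the kind of before/after figure a customer can verify against their own operations rather than a vendor simulation.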

RYAN: I think usually when a company is adopting a technology or tool like this, it's because they need to: they've had maybe some historical safety issues or some concerns with their population of drivers. And it's about trust. I keep going back to that: it's about the accuracy of the system and the ability to actually, I call it, walk the dog. You're taking an event and you're doing something with it. So there's that trust from the driver to, usually, an operator, not even a technology person but an operator, and there's consistency, accuracy, and understanding. One of the really great things Motive does is coach the coacher: it gives the coach an example of what to say to the driver when showing them the event. So there's this earned-trust moment. Whether from, say, a CEO's perspective or a driver's perspective, Motive does a really good job of harnessing AI technology and making it practically applicable to those day-to-day interactions between a driver, a manager, and the most senior leadership.

RYAN: And you know, it's very easy to see in Motive's analytics how the tool is impacting your driver population. And you get to fix one of the things that historically has been a complaint of a lot of employees: you're not burning a lot of time on 80% of the population. You're focusing on the 2 or 3 drivers who are not listening to the voice coach and not listening to the leader coach. Very quickly you're able to address those 2 or 3 drivers while respecting the rest of your population, whether that's financially taking care of them or just respecting their time in day-to-day operations. So the only other thing I would mention is that there are tools out there that do not have the ability to capture an event and harness it for a company to action on the way Motive does.
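The prioritization Ryan describes, leave the bulk of the fleet alone and coach only the few persistent outliers, amounts to ranking drivers by event count and taking the top few. A minimal sketch; the data shape (a driver-to-event-count mapping) and the `top_n` cutoff are assumptions for illustration:

```python
def coaching_queue(driver_events, top_n=3):
    """Return the handful of drivers who still need one-on-one
    coaching, ranked by safety-event count, so the rest of the
    fleet isn't dragged through reviews they don't need.

    driver_events: hypothetical {driver_name: event_count} mapping.
    """
    ranked = sorted(driver_events.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:top_n]]
```

In practice the ranking input could be any per-driver safety score; the point is that the queue stays short regardless of fleet size, which is what makes the coaching workload scale.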


 Eye On AI features a podcast with senior researchers and entrepreneurs in the deep learning space. We also offer a weekly newsletter tracking deep-learning academic papers.

