How to choose the right AI solution: getting the best bang for your buck


A podcast conversation with Kevin Franklin, CEO of EiQ
AI is now a top priority for supply chain and sustainability leaders, shaping investment decisions over the next 12–18 months. At the same time, it brings complexity, uncertainty, and fast-moving expectations around governance, data, and real-world application.
In this podcast episode, EiQ Chief Customer Officer Andy Gibbard (AG) speaks with Kevin Franklin (KF), CEO of EiQ, about how organisations should approach AI adoption in the context of supply chain risk and sustainability. They unpack the gap between AI hype and practical implementation, and explore why governance, data quality and human oversight remain critical even as AI capabilities accelerate.
Duration: 19:12
Full Transcript
AG: Hello and welcome to our session on choosing the right AI solution and navigating the AI hype. I'm here with Kevin Franklin, CEO of EiQ, and pretty much the earliest adopter I know of all the AI tools that are out there, so we're in good hands today. When we surveyed the industry recently, we found that AI was cited by supply chain and sustainability professionals as the number one business priority and the biggest investment area over the next 12 to 18 months. But AI is riddled with risks, pitfalls and misconceptions. So today we're going to be busting some myths, providing some human-powered clarity, and looking at how to find the right solutions, embed them into your workflow and, with that, achieve total supply chain confidence. So, Kevin, to kick us off: what are some of the broadest misconceptions out there in terms of AI and what it can do for businesses?
KF: Well, one would be that AI is going to solve every problem and that, as a result, no more humans in the loop will be required. That's definitely a misconception. Certainly in the space of responsible sourcing, the role of the human will always be critical. We'll need probably the same number of humans, maybe even more, as they really lean into things like decision making, calibration, verification and stakeholder engagement. AI is something that can help massively on access to intelligence and on improved efficiencies relating to workflows, but we're still going to need humans in the loop, making decisions about how to use it.
AG: Okay, great. That sets the scene really well. And now let's think about when a business is looking at adopting AI. What should they be thinking about at that evaluation step and what should some of the criteria be that they should use for their AI decision making processes?
KF: Perhaps let me answer that with respect to responsible sourcing tools and systems, because so many platforms out there purport to use AI, and there are a lot of questions that need to be asked about how it's going to be used and whether it's designed correctly. There's a lot of AI-related product development, but not enough of it leverages the right tools, systems and certifications to show that products are being designed, developed and rolled out effectively. Standards like ISO 42001, the AI management system standard, and increasingly standards around AI agents specifically, provide structured frameworks for how to use and deploy AI in your product development and the ongoing management of that AI. So it's about having things like AI policies, having confidence that your data security is strong and efficient, having good governance around the deployment of AI, and engaging stakeholders in the design and development of AI products. These are the sorts of things required by ISO 42001. Linked to that, there are requirements around the continuous improvement of AI tools: making sure that they aren't hallucinating, and that the outcomes are as expected and not going in the wrong direction. Having the right mechanics and systems around ensuring that your AI is being rolled out effectively is a critical part of adopting any AI tools, working with any businesses that use AI tools, and ensuring that the governance of those tools aligns with your expectations.
AG: And then looking at the negative side of that, if you like, what are some of the red flags that companies should look out for when they are being pitched to by vendors on AI tools and AI technology?
KF: I'm sure there are many red flags, and I think they come back to questions around data security, data use and data governance, and what kind of controls are in place relating to AI. I say this, but it's very much an emerging space. So in most cases, apart from the biggest AI platforms out there like OpenAI and Anthropic, a lot of those in the responsible sourcing and sustainability worlds that are integrating AI may not yet have all of those things in place. Neither do the clients rolling those tools out, so how they ask AI-related questions or frame AI clauses in contracts is also still evolving. A lot of this world is still evolving, and I think we're evolving in it together, getting better and better at handling these things. But the more proactive a company can be in showing you its AI policies, evidencing its security controls and evidencing its product development processes around AI, the more confidence that should give you. Deploying AI is not just about rebranding your marketing materials to say you're now an AI business. It's about making sure you've built it into your systems, product development processes, governance, and how you work as a company, top to bottom.
AG: Okay. So then what happens once you've bought the tech? What comes next? How can you ensure that you're not just buying AI, but you're really embedding it into your workflow? And thinking within our context of responsible sourcing.
KF: It's really about use cases. The point here is you need to know what you're buying it for. Are you purchasing it for risk assessment? Then you need really good underlying data intelligence that's real time, adapting and evolving, and feeding into your own internal systems. Are you purchasing it to automate an audit process, or to run a program, or to do report review? Just there I've given three different potential use cases for AI. Today, we tend to think of AI as one big homogenous thing, but there are lots of different variants of it, all the way from the machine learning we've already been using for things like equivalency, to the generative AI we're deploying in our AI chatbot, and then obviously the agent orientation that connects different parts of a workflow together. It's not just about buying a flashy word; it's about really understanding how you're going to use it, making sure the teams in your organisation are part of that decision making process, and ensuring they know how it's going to affect their day-to-day work.
AG: Is AI going to fix data quality?
KF: No and yes would be my answer. I think one of the sentiments that will come across in that survey, Andy, is that today when people receive audit reports in particular, there's a strong and legitimate hesitancy about how good that audit actually was, despite a number of controls and industry schemes that have come into place. The market has become quite good at completing audits, and potentially at passing audits. As a result, there's a lot of concern about whether one audit done at a given factory tells the same story as other audits done at that same factory. Our analysis of this data shows the same thing: there's a vast difference in the quality of one audit versus another. Now, will AI fix that? No, it's not going to fix audit quality, and it's not going to fix what's happening on the ground directly. But what it will do is a number of things. Firstly, it will enable us to understand more systematically where there's bad data versus good data, and that will occur in the context of a much bigger data environment. In the world of AI, we're connecting all of the data together. We're looking at the different audits that come through from different schemes or client protocols, evaluating the number of non-compliances at scale, looking at the number of zero tolerances at scale, maybe even looking at the auditors who completed those jobs and the audit firms that completed them, and correlating all of that into a big, rich data set to help us understand which of those combinations is most likely to deliver the most reliable, trustworthy data. So point one, it's about much better understanding of the data, being able to look at it at scale, and therefore ensuring that where audit data is coming in, we can rely more on the good data and less on the not-so-good data.
And then the next part of this is about connecting, or triangulating, that data with other data sources: linking in worker surveys, self-assessment questionnaires or grievance mechanism data much more systematically. Again, this enables us to get a bigger, richer picture of the data and, hopefully, to correct for data quality issues that may already exist in our industry.
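The triangulation idea above can be illustrated with a small sketch: weight each audit by how well it agrees with independent signals such as worker surveys and grievance data. All field names, thresholds and weights below are hypothetical, purely to show the shape of the approach, not EiQ's actual model:

```python
# Hypothetical sketch: score an audit's reliability by checking it
# against other data sources. All fields and weights are illustrative.

def reliability_score(audit, worker_survey, grievances):
    """Return a 0..1 score: how well an audit agrees with other signals."""
    score = 1.0
    # An audit reporting zero non-compliances while workers report
    # problems is suspect, so confidence drops.
    if audit["non_compliances"] == 0 and worker_survey["issues_reported"] > 0:
        score -= 0.4
    # Open grievance cases the audit never surfaced also reduce confidence.
    if grievances["open_cases"] > audit["non_compliances"]:
        score -= 0.3
    return round(max(score, 0.0), 2)

audit = {"factory": "F-001", "non_compliances": 0}
survey = {"factory": "F-001", "issues_reported": 5}
grievance = {"factory": "F-001", "open_cases": 2}

print(reliability_score(audit, survey, grievance))  # prints 0.3
```

In a real system the inputs would be far richer (auditor identity, audit firm, scheme, history at the site), but the principle is the same: triangulation lets you lean more on audits that independent signals corroborate.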
AG: Okay, let's stay with data quality, because it's quite a thorny topic. When we surveyed the industry, we found that the number one data-related issue for supply chain and sustainability professionals was the quality of the data they were inputting and working with. But when we asked the same companies about their outlook and priorities, only 1 in 5 said they were going to prioritise addressing this, which suggests quite directly that people expect AI to fix it as a silver bullet. So how much has that landscape actually shifted so far, and what do we need to do to make sure the data we're using and inputting is up to scratch?
KF: Looking at the landscape of clients we're working with, I think it's shifted only slightly at the moment. There are not many that have really adopted AI at any scale. So the way it's shifted today is really towards needing to be more data savvy, needing to know how to use data more effectively. Moving forward, though, it's definitely going to change a lot more. I think it will mean we have fewer humans involved in running processes, because we can lean on AI to run them, and more humans involved instead in decision making, collaboration, and engagement with suppliers, buyers and other key stakeholders in supply chains, as well as in the analysis of the data being collected. We're still at the very beginning of that process, and there's a lot more to come over the next 6, 12 and 18 months.
AG: To shift the topic slightly: within the EiQ team, we talk quite a lot about the difference between compliance and box-ticking versus real, purposeful action. So how can AI help here? How can it take us beyond compliance and into purposeful action? How can it really drive sustainability and resilience within supply chains?
KF: I think the goal for AI and autonomy in the world of responsible sourcing is to achieve an end state where suppliers are perpetually excellent; where suppliers are constantly engaged in small, incremental interactions to proactively manage issues or risks, maybe before they've even occurred; or where they're being incentivised because they've got ROI: return-on-investment intelligence that helps them make decisions about investing in technology or systems, or maybe even in changing their workforce hiring patterns, how they onboard workers, and who they work with. So for me, AI is a really critical tool in changing the conversation from a compliance one, the box-ticking, audit-driven approach we have today, to one that is incentivising and continuously learning, and that supports an environment where suppliers are motivated to operate constantly at a level of excellence, rather than the compliance landscape where performance against your criteria may be very up and down, depending on whether or not you're doing an audit, or whether or not you have fixed corrective actions.
AG: So going back into the detail then, are there any examples that you can share of how AI has helped to uncover risks or even opportunities that traditional methods may have missed altogether?
KF: We've actually been using AI for a few years already. Within EiQ, we've been using it for equivalency: ingesting multiple different audit types across different industry schemes or standards and normalising that information around a common underlying data set. That's part one. Part two: we've been using AI for Sentinel, our adverse media scanning tool, which we now run for well over half a million suppliers every two weeks to support better risk detection and to inform and improve the risk assessment process. These are already live and running today for all of our EiQ clients, and they're already enabling them to better detect risks, whether through equivalency, standardising and normalising audits and then applying their own ratings on top of the individual outcomes at scale, where previously they may have been doing that manually and we can now batch process thousands at a time, or through the much more real-time Sentinel adverse media intelligence. These are two applications of AI, machine learning in particular, that today feel quite simple, though they're not easy to implement and roll out. I think this is just the first step. Most of us are waiting for the next steps, which are coming with superintelligence and the true automation of the process. But we'll talk more about that later, I'm sure.
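At its core, the equivalency idea is a normalisation problem: findings expressed in different audit schemes' vocabularies get mapped onto one common taxonomy so they can be compared and aggregated. A minimal sketch follows; the scheme names, finding codes and categories are all invented for illustration and are not EiQ's actual taxonomy:

```python
# Illustrative sketch of audit "equivalency": mapping scheme-specific
# finding codes onto one common taxonomy. All codes are invented.

COMMON_TAXONOMY = {
    # (scheme, scheme-specific finding code) -> common category
    ("SchemeA", "WH-01"): "working_hours",
    ("SchemeA", "HS-04"): "health_safety",
    ("SchemeB", "3.2"): "working_hours",
    ("SchemeB", "7.1"): "health_safety",
}

def normalise(findings):
    """Convert mixed-scheme findings into counts per common category."""
    counts = {}
    for scheme, code in findings:
        category = COMMON_TAXONOMY.get((scheme, code), "unmapped")
        counts[category] = counts.get(category, 0) + 1
    return counts

# Findings from two different schemes, now directly comparable.
findings = [("SchemeA", "WH-01"), ("SchemeB", "3.2"), ("SchemeB", "7.1")]
print(normalise(findings))  # {'working_hours': 2, 'health_safety': 1}
```

In practice the mapping is the hard part, since audit reports are free-form documents rather than clean codes, which is where machine learning earns its keep; but once findings land in a common taxonomy, batch processing thousands of audits becomes straightforward.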
AG: Well, I was actually about to ask you to give us your crystal ball moment. Really. So over the next five years, what is going to change? And by 2030, 2031, what is the AI world going to look like? And what's the broader world going to look like in terms of how it's been influenced by the progression of AI?
KF: I think it's clear AI is going to play a part in pretty much every aspect of our business lives, and in a large number of aspects of our personal lives. From a business perspective, it's going to be in finance, in people management systems, in buying decisions. It's going to be in producing and preparing our P&Ls, and in supporting our ability to better understand client and consumer needs, demands and interests, assimilating and assembling data sets at scale and in ways we can apply to our business to drive more effectiveness, more value and more transformation. The scale of it is overwhelming, and it's a huge opportunity. In the world of responsible sourcing specifically, we're looking at building out an AI-oriented superintelligence environment, combining our existing data with external data from clients relating to day-to-day buying decisions and day-to-day preferences about which are the best or worst suppliers, as well as external data on things like GDP and other metrics, which could be broader national metrics around wages, and so on. And then obviously there's the autonomous workflow, which for me is the most exciting piece, though it's probably going to take three years to get properly set up. That's being able to essentially drag and drop your supply chain, or have a data feed of your supply chain, into our platform and have it automatically calibrate a good, robust, risk-based program design, with people in the loop to make decisions about whether to move forward or change the calibration, and then to roll out the whole end-to-end process: from the risk assessment to the segmentation, to the scoring, to the monitoring, to the corrective action plans, to the reporting and disclosure as well.
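The autonomous workflow described above, automated stages punctuated by a human approval gate before anything rolls out, can be sketched as a simple staged pipeline. The stage names and toy rules here are hypothetical, just to show where the human sits in the loop:

```python
# Hypothetical human-in-the-loop pipeline sketch. The stage logic is
# purely illustrative; a real system would draw on many data sources.

def risk_assess(suppliers):
    # Toy rule: flag suppliers tagged as high risk.
    return [s for s in suppliers if s["risk"] == "high"]

def design_program(flagged):
    # Calibrate a minimal risk-based program from the flagged set.
    return {"audits": len(flagged), "targets": [s["name"] for s in flagged]}

def run_workflow(suppliers, approve):
    flagged = risk_assess(suppliers)
    program = design_program(flagged)
    # Human in the loop: a person confirms or rejects the calibration
    # before any rollout happens.
    if not approve(program):
        return {"status": "rejected", "program": program}
    return {"status": "rolled_out", "program": program}

suppliers = [{"name": "A", "risk": "high"}, {"name": "B", "risk": "low"}]
result = run_workflow(suppliers, approve=lambda p: p["audits"] <= 10)
print(result["status"])  # prints rolled_out
```

The design point is that automation handles the repeatable calibration steps while the approval callback keeps a person responsible for the go/no-go decision, which matches the "people in the loop" framing above.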
I think on that last part, reporting and disclosure, today we really have just annual reporting around sustainability. With AI, you have the option of doing that potentially in real time. And certainly for consumers, where we know that sustainability and responsible sourcing issues are increasingly informing buying decisions, that'll be a very useful way to differentiate your brand in the market. So I think the opportunities for AI, both generally and in responsible sourcing, are limitless and very exciting.
AG: I'd like to come back to the present and just to get really practical again and thinking about our audience today. What advice would you have for leaders navigating the AI hype? What's top of the list of things that they should be thinking about?
KF: Trust, I think, is always top of the list. There are a lot of AI tools out there that are very new; the space of AI generally is very new, and it's evolving very quickly. So whilst many of us may have plans to put the right AI governance systems and security in place, not everyone necessarily does. It's really important to make sure you've done your checks, that you've got the right level of due diligence in place around those tools, and that you're working with a brand that people know. In the case of EiQ, we're backed by LRQA, which has hundreds of years of history as a trusted assurance provider. That's a huge differentiator when it comes to evidence and trust, including in the space of AI.
AG: Kevin, thank you very much for sharing all of your professional and personal perspectives with us. Really fascinating for me, and I hope for everyone else watching today.
AG: Thank you all for watching, thanks for taking the time. Please do check us out on eiq.com. Please follow us on LinkedIn to keep track of where we are, what we're talking about, and what's coming next. And we look forward to having one of these sessions with you again soon. Thank you very much.
Related Podcasts
Discover more insights from our podcasts.
Cracking Scope 3 - why resolving data confidence is crucial
Our EiQ experts dig into why Scope 3 continues to challenge businesses, how to resolve the visibility gaps and why it’s more critical than ever to do so.

EU omnibus proposal: cutting through the noise and refocusing on the fundamentals of sustainability best practice
Amid diverse opinions on the EU Omnibus Proposal, this podcast episode offers valuable insights from industry experts on how businesses can stay ahead in an evolving regulatory landscape.