
Podcast: Harnessing AI for responsible sourcing - What you need to know

27 minutes

Our EiQ experts provide in-depth perspective on how to effectively integrate AI into your responsible sourcing strategy - including the risks and opportunities that come with it. EiQ CEO Kevin Franklin unpacks how to strike the right balance between AI and human expertise.

EiQ by LRQA

Full transcript:

Andy Gibbard, EiQ Chief Customer Officer: Welcome everybody to this session on AI and total supply chain confidence. I'm here today with Kevin Franklin, CEO of EiQ, and also Chief Product Officer of LRQA, which is the parent company of EiQ. Kevin's a responsible sourcing and data science expert with more than 25 years of experience in this field, and he's also the founder, actually, of EiQ.  

AI is a hot topic in almost every aspect of life at the moment, but supply chain risk management is already a highly data-driven discipline, and AI is actually behind a lot of the existing tools, including EiQ. But as AI continues to evolve, it will really play a transformative role for everyone working in this area, thinking both about how we work day to day and also more strategically into the longer term.  

So, Kevin, to kick off then, how do you envisage AI transforming supply chain management, and also, in what ways have you already seen it in action? 

Kevin Franklin, EiQ CEO: That's a great question. Thank you, Andy, and thanks for inviting me to this session. Maybe I can start with the second part, in terms of how we've already been using it. So EiQ, in fact, as you'll remember, was originally called eiq.ai, and we have had AI integrated into EiQ for probably about five years now, initially more on the machine learning side, and increasingly we're integrating generative AI layers.

So, on the machine learning side, we've been using AI for our adverse media scanning tool. Every week, we scan roughly a million news artifacts that we need to triage, segment and risk assess for the hundreds of thousands of suppliers that are in the EiQ platform. We couldn't do that with human intelligence alone. We have to use AI, and it's been incredibly effective in that process. Secondly, we also use AI for what we call equivalency, which is where we take into EiQ multiple different audit standards and harmonise the outcomes, the individual findings from all of those audits, into a common framework. Many of these audit reports are PDFs or Word documents or data files, across multiple different standard types, and the diversity there would again make it impossible to do this with human effort alone.

So, we've been using AI to support that process as well for probably the last three or four years, and there are many other applications moving forward. I think we expect AI to basically be ubiquitous across the responsible sourcing due diligence process, which means all the way through: from the upfront diagnostic and evaluation of a responsible sourcing programme, to risk assessment of your supplier base, to the design and deployment of training tools, even in the audit itself, in report generation, in corrective action plan development and design, and then, of course, through to benchmarking, trend analysis and continuous improvement. We expect AI to really be everywhere in that process, completely ubiquitous.

We also expect to use it in a generative AI format, and hopefully we will have a beta version of this going live in EiQ in Q2 or Q3, where you'll be able to go into the platform and essentially ask it a direct question. So: talk to me about, or show me, the risk profile for this commodity or that commodity over the last five years for the following five issues, and generate a report. It's very similar to how a lot of us are now using some of the new AI tools, which I think have become a native part of our day-to-day workflows.

AG: Coming back to the present day. And I'm sure there are a lot of people watching today who feel like they're maybe just starting out on this journey, and who want some advice. So, for those who are integrating AI into their responsible sourcing programmes and also into their day-to-day jobs, what are the key considerations?  

KF: That's a really good question, because I actually asked this of one of our leadership series audiences around a year ago, not quite a year ago, and I think only one person in a room of 150 was actively using AI at the time. But clearly we're all going to be using it. I mean, since then, we've all started using it in our personal lives, and we all recognize that AI is not just a tool; it's shaping the reality around us, where we listen to its responses, trust its responses, and start to integrate it into our actions. It is shaping our reality just as much as this conversation between you and me. With that in mind, for people who are actually bringing it into a business environment, clearly there are a lot of things to take into consideration.

The first is your data, right? You can't just put AI on top of your data without there being a risk of it getting out into the wild inadvertently. I think we've all read stories about that in the media as well. So, it took us about a year longer to bring generative AI into EiQ than we'd expected, because we spent a lot of time on data structure and data security upfront. We've done that now, and it puts us in a position where we can start to layer generative AI into the tool in a way where it can be used by clients. So point one: data structure, data labeling and data security. That's super critical. Point two: AI is just one tool. You have a lot of other tools, and you also have a lot of people in your team. AI may replace some roles, but in reality it should be an opportunity to elevate your existing team and how they're working. It should be used in partnership with people in your team and layered alongside the deep, technical, human experience that you have in your organisation.

If you're looking for really simple, easy, low-risk ways to bring in AI, then risk assessment, insights and data analysis are really key parts where I think it can shortcut the work and probably provide more accurate and more real-time insight into what's going on in your supplier base. I think linking responsible sourcing data to sourcing data is a really key part of where AI will add extra value. But ultimately, bringing AI into responsible sourcing is essential, is a no-regrets move, and is something we should all be embracing.

AG: Okay, let's talk about the human factor and about balancing human intelligence and artificial intelligence. So how do you get that balance right when looking at analysing risk and driving sourcing decisions?  

KF: I think it's important to realise that AI is a partner in the process, and in that respect, we need to work with it like we would with any other partner. We need to have degrees of transparency with AI, sharing information with it, bouncing ideas backwards and forwards with it, but ultimately making the decisions and taking the actions together. So what this might mean, for example, is that if we're looking at alternative sourcing locations in response to tariffs, AI might support us to identify three or four alternative geographies based on an underlying set of information that we share, possibly linked to geographic risk levels, issues, adverse media scanning, or maybe even different products or services or commodities. So it can give us an initial set of information around the risk landscape, and allow us to benchmark where we've been before, and what our risk appetite is, against where we might go in the future.

But what AI is not going to know is: what are the suppliers like? What's the relationship going to be like with those suppliers? It won't necessarily know and understand union dynamics on the ground, for example. It would know wage information and GDP data, with the potential to draw out working hour information, but actually implementing a change in your supply chain requires human-to-human, real-world behaviors. There's a lot in that space, like the examples I just gave, that AI won't know, and that's where you're going to need to draw on your on-the-ground teams, or our on-the-ground teams, or on deep human experience.

AG: Let's talk about ethics and transparency. They're really key points. Obviously, when it comes to AI governance, where should we be drawing the line in terms of AI automation to respect those and to maintain the integrity of decision making?  

KF: I think ultimately AI and automation stop when human rights begin. What this means in practice is really making sure, firstly, that when we're using AI, we've designed the algorithms, the recommendations and the process with a governance that acknowledges human rights issues, so it's a fair, balanced algorithm. It's a process that is well governed, with the right level of oversight, and it's not free-ranging, right? It's working within the constraints and processes that have been set up within the EiQ space and actually the broader LRQA organisation. We lean heavily on the structure of ISO 42001, which is actually the first international management systems framework set up specifically for AI. We have people on the ground doing ISO 42001 certifications, and the EiQ tool leverages those same principles in the design of our AI infrastructure.

So point one is making sure that your algorithms and so on are set up correctly. Point two is then, as in the examples I gave before, that when you do take AI intelligence, you also combine it with a deep understanding of human impacts and behaviors. So, for example, if AI recommends three or four different types of interventions with a supplier to ensure that you're monitoring risks effectively (maybe three or four different audit tools, or a self-assessment questionnaire framework), you layer human insight on top of that to help pick the right tool for that supplier, based not just on risk, which AI can inform, but also on your knowledge of the supplier, the legacy of your relationships, and the kind of deep, trust-based insight that you might have within your team.

AG: So, what types of data are most critical do you think for AI-driven supply chain risk analysis, and how does quality and accuracy come into play? 

KF: This is a great question for EiQ, Andy, because, as you know, we have some incredibly unique datasets. For an AI tool to work well, it needs a lot of data, it needs a history of data, and it needs to be able to combine data. Now, what we have built within EiQ and the wider LRQA organisation over the last 10 to 15 years is a truly unrivalled dataset relating to responsible sourcing. With 25,000+ responsible sourcing audits every year, and probably on average 300 to 400 data points collected for each, we have a repository now of around 85+ million unique responsible sourcing data points. That's a dataset that doesn't exist anywhere else, literally only within EiQ. It's not in the wild; it's only within the EiQ ecosystem, so no other AI tools can give you insights like we can, because they don't have access to that information. Within there we have things like wages, working hours and transparency rates by country and sub-nationally, within country, by state, but then also a lot of other data around carbon emissions, for example. What AI allows us to do is combine that data and look at the tradeoffs against other data. It could be client-specific data around their supply chains, again with the right security controls and protections. It could also be public domain data, for example around GDP or living wage. But ultimately, it's the ability to combine datasets and to leverage unique insights with AI's intelligence, accuracy and predictive capability, and ideally also its generative component, that really puts you in a unique position.

AG: Okay, excellent. And I think actually, we're on more than 30,000 audits per year now. So, thinking then about contrasting AI with more traditional methods of data analysis, how does AI help in identifying emerging risks and trends that might not be apparent through those more traditional methods?

KF: Yeah, this is also a very good question, and I think immediately of topics like unauthorised subcontracting, which is something we're expecting to see a lot more of, given the current economic climate and the geopolitics that so many suppliers are navigating. I think where AI is great is in spotting those hidden issues that, as a human being, you'd have to go and look for, right? You'd have to build an algorithm, you'd have to build a design and analysis framework to specifically identify that. For example, you might want to look at order times and connect that to insight into what sort of machinery is in place in a factory, and maybe the number of workers, to be able to judge whether the orders that were promised were genuinely viable, or whether a supplier might need to go somewhere else. So as a human being, you'd really have to properly design your analytical framework for that. Whereas AI is going to start doing some of that itself, right, because it's looking out for anomalies. If you prompt it with the right questions, if you ask it to analyse multiple different variables or multiple different potential outcomes, it will look at all of those things at once, and it will start to tease out things that, as human beings, we may not even have thought about before. So I think this is one of the real benefits of AI: it can look at a lot more things much more quickly, and tease out insights we've never even thought about, just because it's designed to evaluate all of the different causalities.

AG: Turning back now to people who are looking after a responsible sourcing programme and implementing AI, what are the common challenges that they might face when trying to do that, and how could they overcome some of those obstacles? 

KF: Definitely one of the key challenges is around data protection and data security: making sure that whatever systems they're using have the right controls in place. If they haven't, then it's possible that data may be accessible by other organisations that are using the same systems, and potentially also get into the public domain. So, data security is definitely a key point. Another key point is really human readiness, and the team being ready and open to accepting AI. What I've definitely noticed over the last year or two is that this actually depends even on geography. You know, in Europe and the US, people are quite familiar with AI. A lot of the new generative tools were released there before they were released in Asia Pacific, for example, or in Greater China, so there's maybe a couple of years, or even just six or nine months, of additional time needed to get used to these tools. So there's definitely a maturity curve that one needs to work through with one's internal team. And with that, you also need to give them the confidence that their jobs are not necessarily at stake. The goal with AI is to bring it into your team as an additional part of your process, not, ideally, to replace those teams. If you do it correctly, you can help upskill the teams that you have and enhance their capabilities. So, I think data security is definitely one. Data structure is definitely one. Then there's maturity of knowledge and awareness on how to use AI, because there are big gaps there, as well as, point four, the confidence within a team to accept AI without that impending fear they may have around job security.

AG: Change management is a big aspect here, and when organisations implement AI into these deeply embedded processes, there's a huge amount of change that needs to be managed, both technologically, but also very much culturally as well. So along this journey, what steps would you recommend that organisations focus on to really manage their journey along that change curve? 

KF: Yeah, great question, very pertinent. Let's touch on the cultural side first. Clearly, a lot of teams will have a degree of fear around adopting AI, and a good way to handle that is probably to start with pilot projects, projects that operate in parallel to core business processes. What I mean by that is, if you have a fear of bringing AI in because you're not sure whether it's going to generate the same type of recommendations or outputs that your human team would, a good way is to bring it in and have it work alongside whatever your existing internal team might be doing. That way you can work together to fine-tune the AI algorithms and the AI process. Particularly around things like risk assessment, how you might weight different topics like forced labour or child labour can be something that you test and refine together. So point one, operate something in parallel; point two, do pilot projects.

AI will also necessitate, depending on how you're setting it up, probably quite a lot of internal controls and work with other teams across your business. A lot of our key stakeholders are on the supply chain or sustainability teams, but if you're going to be using AI, you might find yourself working much more closely with IT, and probably also with legal, who may be looking at things like the terms and conditions associated with whatever tools and platforms you might be using to deploy your AI infrastructure. So it's definitely going to require planning in terms of how you work with those teams, and it may mean it takes longer to get some of these initial things set up. You'll probably also have to work through processes around data security, data governance and data structure, and you may also need to work more closely with communications and legal if you're going to use AI for any external reporting, making sure that you've implemented the appropriate quality controls to ensure that there's no hallucination in whatever is being put together for those reports. So it definitely means piloting, it definitely means collaborating across multiple teams, and it definitely means more planning upfront. And certainly in the short term, because we're not 100% confident in what we're getting out of AI, it does require a lot of cross-checking.

AG: Let's turn now to the future. So looking ahead, what AI innovations do you think will have the most profound impact on supply chain intelligence and supply chain resilience? 

KF: At some point, Andy, I would expect that a lot of the responsible sourcing part of supply chain resilience and intelligence will run almost automatically with AI. I would envisage a world where you have an online tool, it could be EiQ, where it conducts the risk assessment for you, where it makes recommendations for you, where it places orders for audit reports or deploys a self-assessment questionnaire for you. You will need your data to be very well structured for this, right? But it could definitely do that, especially if it knows that you audit a factory once a year or once every two years, and if it knows that, in your design, it is going to deploy a deep-dive investigation in response to news alerts. All of these things can start to happen automatically. All you need to do is set those rules up at the beginning of the process. But AI can even support you with that, because it can help you design your risk algorithms, your programme, your code of conduct, your framework. It can help you do benchmarking.

I would also envisage that once the audits are completed and the data is coming in again, AI can bring in multiple different data types. So industry, scheme, data companies, proprietary audit tools, or even our enhanced responsible sourcing assessment, our tool, or any of those frameworks, bring all of those in, standardise that data. All of this is happening automatically right now. If you remember what we said in the beginning, you haven't had to do anything. You just set up the rules. This is all happening automatically. You just log in every day to see what is the status of that all of the results from those audits are coming in through AI as well. Ideally, they're also getting matched to things like CSRD, the corporate sustainability reporting directive, or any other reporting requirements. And report insight is being generated automatically. It gets you all the way to the point of potentially publishing this, when actually you need some kind of human intervention to maybe evaluate whether you're comfortable putting that into the public domain. You can implement multiple different human intervention checkpoints across that process. But I do see a future, probably in the next three years where all of this can be fully automated with AI, 

AG: What a vision. Thank you, Kevin. All right, back one more time to the present day. So at EiQ, we're lucky to work with some fantastic clients, really some of the companies that are at the forefront of AI in the responsible sourcing domain. So I was wondering if you could share any success stories where AI has significantly reduced supply chain risk or really transformed a strategy. Whether or not you can share who's behind it, is there something you can give us that might inspire people to go and see what they can implement in their own programmes?

KF: Well, there are many examples, Andy. I think I'll keep them anonymous for now, but one of the best is, of course, with our adverse media scanning tool, where, as I said, we're scanning millions of news items every week and raising alerts, red flags, against 100 or so different key issues in the Labour Standards, Health and Safety, Environment and Business Ethics spaces. Those alerts actually help our clients to implement deeper investigations, to roll out audits, or to proactively identify where a shipment may be held at the border due to links to forced labour or child labour. So, we're doing this actively today, and I know many cases where we've actually helped get in front of these risks before they've impacted our clients' supply chains. So that's definitely working exceptionally well.

What I would also say is very relevant today, in a time of tariffs, is looking at alternative sourcing locations. So we're definitely working with a few clients today who are using EiQ and its associated AI tools to evaluate multiple different sourcing geographies. Many of you listening who use EiQ will know that we have geographic and product risk data, but we also have around 2,000 other indicators, like wage data, working hour data and transparency data, that are a little bit more difficult to access in EiQ, but we have them for every geography and for multiple different supplier types globally. So we can use that data to help identify not just lower-risk or similar-risk profiles for new sourcing environments; we can also understand the cost impacts, because we've got wage data. We can also understand the subcontracting risks around those sites, because we've got subcontracting data. And we can understand the tradeoffs around environmental metrics: if you're transitioning from a country with a low emissions factor for grid electricity, because there's a high use of hydropower, for example, to a country with a much browner grid, you're going to have more emissions in your supply chain as a result of changing your sourcing patterns. These are things that we can do with AI in bulk, leveraging EiQ's insights.

AG: Okay, Kevin, thank you very much. This has been fantastic. You've provided loads of great insights, and it has been really valuable for me and, I'm sure, for many other people watching. Are there any final thoughts that you'd like to share?

KF: I think the final thoughts I would share really relate to the inevitability of AI. As we were discussing before, this is something we can't avoid, so we need to learn how to embrace it. We know that AI is shaping the reality around us. That's a reality we all live in, and it's a reality that we can help shape together with AI if we figure out how to interact with it proactively. Responsible sourcing programmes have a lot of opportunity for improvement: they can be faster, better, more effective, and generate better insights, and they can do this more effectively when they're leveraging AI. So, I would say, embrace it and do it quickly.

AG: Okay, fantastic. Kevin, thank you very much. Thank you to everyone who has watched this session, please do follow us on LinkedIn and keep an eye out for announcements for future sessions. Please also send us your questions and your thoughts on the topics that you'd like us to address in these sessions. Please do look out on eiq.com as well to hear about everything else that we've got coming up. Thank you very much.