Brainstorm AI London 2024: Navigating Roadblocks And Catalysts To Enterprise-Wide Adoption
Amy Challen, General Manager, Artificial Intelligence, Shell; Sachin Dev Duggal, Founder and Chief Wizard, Builder.ai; Jenny Griffiths, Vice President, Data Science, Oracle; Moderator: May Habib
Transcript
00:00 - Hey, hi again, everybody.
00:02 Really excited to talk about getting those POCs
00:05 out of purgatory and into scale with a fantastic panel.
00:10 First to kick us off, how many folks have got
00:14 scaled use cases in production for generative AI
00:17 in the audience?
00:18 Hands up.
00:19 Okay, just what we thought.
00:22 How many folks, if you know, are spending
00:26 more than a million dollars a year on generative AI?
00:29 (audience mumbling)
00:32 More than three million?
00:33 More than five million?
00:36 More than 10?
00:39 Okay, well, you guys know what my answer is.
00:44 But that's great insight and data.
00:48 Folks are early.
00:49 And the stat, this is Accenture, BCG,
00:54 lots of folks have polled executives.
00:56 Fewer than 10% of the generative AI explorations
01:00 are in production.
01:01 There is a graveyard of unused chat applications
01:07 when those use cases do go to production.
01:11 So I do think we are in a bit of a trough
01:16 from an adoption and scaling perspective.
01:21 And let us dive into this conversation
01:25 with that in mind.
01:27 You all have scaled use cases in production.
01:31 And I would love for us to actually just maybe
01:33 rapid fire share those before getting into
01:37 what you had to overcome to get to scale.
01:39 And then we definitely wanna take questions in this session.
01:43 Sachin, I'd love to start with you.
01:45 At Builder.ai, what are you guys doing?
01:47 How are you using AI internally?
01:50 We are proud partners, mutual customers.
01:53 So tell us a little bit about how you guys
01:54 are using AI internally.
01:56 - Sure, so at Builder.ai, we use the entire length and breadth.
01:59 So right the way from what you'd call classical AI,
02:03 so machine learning, both supervised and unsupervised.
02:06 We actually, the core foundations of our platform
02:09 are built on a knowledge graph.
02:10 So graph theory is really important
02:12 because it creates the reasoning engine
02:15 to give generative AI guardrails
02:17 for all intents and purposes.
02:18 But also, it's a really strong method
02:22 for reinforcement learning.
02:24 And then we use computer vision.
02:27 So when we produce applications,
02:31 the old way of testing was you had a human
02:32 that looked at an application, compared it to design,
02:35 and it takes a really long time.
02:36 We can test an application across thousands of devices
02:40 in 30 seconds using computer vision.
02:42 And then, obviously, which everyone's looking forward to,
02:45 is we use a lot of generative technology.
02:47 Now, I would say we have our own work,
02:51 especially on the small language model side.
02:54 But the use of commoditized LLMs,
02:58 and I use 'commoditized' loosely
02:59 'cause I know it's a new technology, but it's commoditized,
03:01 is actually quite important
03:02 because we work with just about everyone,
03:06 both from an image, video, and the tech side.
03:10 And that allows us to do everything
03:11 from user story generation
03:13 based on RAG with a knowledge graph
03:15 through to what we just launched last week, code generation,
03:19 where our error rate is single-digit percentage
03:24 in code produced because it plugs
03:27 into the rest of our plumbing.
03:28 So really, across the spectrum.
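To make the "RAG with a knowledge graph" idea concrete, here is a minimal sketch, not Builder.ai's actual system: facts connected to a feature node are pulled from a small graph and used to constrain a user-story generation prompt. The triples, the retrieve_context walk, and build_prompt are all illustrative assumptions.

```python
# Hypothetical sketch of knowledge-graph-grounded generation (RAG over a graph).
from collections import defaultdict

# Tiny adjacency-list knowledge graph built from (subject, relation, object) triples.
TRIPLES = [
    ("checkout", "requires", "payment gateway"),
    ("checkout", "requires", "cart service"),
    ("checkout", "used_by", "shopper"),
    ("payment gateway", "constraint", "PCI compliance"),
]

graph = defaultdict(list)
for subj, rel, obj in TRIPLES:
    graph[subj].append((rel, obj))

def retrieve_context(feature: str, depth: int = 2) -> list[str]:
    """Walk the graph outward from the feature node and collect facts as text."""
    facts, frontier, seen = [], [feature], {feature}
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for rel, obj in graph.get(node, []):
                facts.append(f"{node} {rel} {obj}")
                if obj not in seen:
                    seen.add(obj)
                    next_frontier.append(obj)
        frontier = next_frontier
    return facts

def build_prompt(feature: str) -> str:
    """Assemble a generation prompt constrained to the retrieved graph facts."""
    context = "\n".join(f"- {fact}" for fact in retrieve_context(feature))
    return (
        f"Write a user story for the '{feature}' feature.\n"
        f"Only rely on these known facts:\n{context}\n"
        "If a detail is not listed above, do not invent it."
    )

# The resulting prompt would be sent to whichever LLM the platform uses.
print(build_prompt("checkout"))
```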
03:30 - Awesome, thanks for sharing.
03:31 Amy, what about you at Shell?
03:34 - What are we doing?
03:35 We're doing a lot
03:36 because we have the remit of the whole of Shell,
03:37 which is a very large,
03:39 integrated value chain energy company.
03:42 But some of our best-established use cases,
03:44 and when I say best-established, I mean 10 years,
03:46 it's things like predictive maintenance,
03:48 which you can use in traditional oil and gas space,
03:51 but also wind farms, also the Holland Hydrogen One,
03:55 which we're building to be operational in a year or two,
04:00 which will create green hydrogen
04:02 from the Holland Coast North wind farm.
04:05 That is being designed to have these integrated as well.
04:07 So that's a use case where we've got
04:10 tens of thousands of pieces of equipment
04:13 under predictive maintenance models.
04:15 And other examples are our EV charging network.
04:19 We've got optimized charging across that
04:21 to ensure that it will charge as efficiently as possible
04:25 from where you're at.
04:26 And then you've got other applications
04:28 such as our marketing and loyalty pieces.
04:31 Again, a really obvious use case.
04:33 People have been doing it for years,
04:34 but these have been established for years.
04:35 So we've got a long tradition of making all the mistakes,
04:39 putting it into production, scaling it.
04:41 And that has put us in a much better place
04:43 for the newer use cases
04:44 when things like generative AI come out,
04:46 everyone gets really excited.
04:47 It's the same problem.
04:49 It's not the technology that's the problem.
04:51 It's the scaling, it's the implementation to processes,
04:54 it's the change management.
04:55 And so we have got that framework to do this.
04:57 We have got these practices and we're doing the same again.
05:00 So it's been a really good couple of years
05:03 in terms of the improvements in the models
05:05 and improvements in the capabilities,
05:07 but we are just putting it into our existing frameworks
05:10 and existing ways of working
05:11 in order to scale it and get the value.
05:13 - Awesome, we're gonna come back to that, that's great.
05:16 And Jenny, what about you?
05:17 You are both building AI into your products
05:20 and using AI internally.
05:21 So maybe tell us about both those sides of scaled use cases.
05:26 - That's a really good point.
05:27 We do have a kind of two-pronged approach.
05:29 So my group, I'm in Fusion AI,
05:32 and basically we develop embedded AI applications
05:36 for Oracle's Fusion Suite.
05:37 And that covers pillars like human capital management,
05:41 supply chain management, customer experience, finance,
05:45 so loads of kind of different applications,
05:47 but we try and use the same core underlying technology
05:49 for loads of different customers
05:51 and loads of different applications.
05:52 So it makes us think about how you can design AI at scale.
05:56 And yeah, when it comes to
05:58 what we've been doing with generative AI,
05:59 we had 25 use cases go live just last quarter alone.
06:03 - Awesome.
06:04 - A lot of others as well.
06:05 So we're beginning to see this kind of customer adoption.
06:07 And from my point of view,
06:09 what's been really lovely to see
06:10 is kind of people leaning in at scale
06:12 for kind of business knowledge and business application,
06:15 but also on a more academic level
06:17 of we now have customers really engaging with us
06:19 about how AI models are put together,
06:21 what kind of data we're feeding systems with.
06:23 So we're seeing this kind of business awareness
06:25 and kind of academic awareness rise hand in hand,
06:28 which has been really great to see.
06:30 - Awesome.
06:31 The end user adoption that you are seeing,
06:34 let's talk now about just your internal use cases
06:38 for generative AI.
06:40 We have heard over a couple of panels
06:42 about the reskilling challenge and the training challenge.
06:46 Training almost feels like an understatement
06:48 for the kind of skill that you need in a workforce
06:53 to be able to really use AI as leverage, right?
06:55 It's such a different type of technology.
06:58 Can anybody speak to the types of things they did
07:01 to really get scaled end user adoption
07:04 inside of their companies that would be good advice?
07:07 - So I think I'll sort of start off.
07:09 We've looked at the implementation in two different lenses.
07:12 One is where are you using this to remove human variance?
07:17 And where are you using this
07:18 to remove an entire human workload,
07:20 tasks that someone was doing? And they're very different.
07:23 And so where you're using it to remove variance,
07:26 there's a co-piloting or a companion experience, right?
07:30 Because you're usually arming the human being.
07:33 So for example, Natasha is our AI
07:35 and will speak to customers,
07:36 but at the same time will speak
07:37 to one of our customer success team
07:39 and tell them when a customer says this,
07:40 you actually need to answer with this.
07:42 So there's a benefit of collective insight.
07:44 But that is very jarring for someone
07:46 who's not used to having someone listening in their ear
07:48 and then telling them what to ask.
07:51 And so I think there is an upward training process in that
07:54 and you need to get people comfortable
07:56 with the fact that there's an AI on the call.
07:58 I think the removing work is a little bit easier
08:01 'cause you're basically saying,
08:02 hey, listen, as a good example,
08:05 you used to manage your day.
08:07 We have customer success teams.
08:08 They used to manage their day.
08:09 You're not gonna manage your day anymore.
08:10 Instead, you're gonna have this assistant
08:12 that's gonna move your calendar around
08:13 and sort things out so that we can use the right time slots
08:16 to do the right things 'cause they're all connected.
08:18 There it's more of an education
08:20 to get people emotionally comfortable with the idea
08:22 that they're sort of one step removed
08:24 from how they were running a particular task.
08:27 - Very helpful.
08:28 - Yeah, and the way that we've been approaching it,
08:30 at least in my team,
08:31 is that we've kind of segmented it
08:32 into the different user bases.
08:34 And depending on who it is
08:35 who will be consuming the technology,
08:36 we apply different ways of educating the end customer.
08:39 So for data scientists with my team,
08:41 this time last year, we introduced
08:43 what we called LLM Friday,
08:44 which was terribly branded from my point of view.
08:47 But it was basically just my team, every Friday,
08:49 just took the time off tasks just to learn about LLMs
08:53 because it's this huge new technology that's come in
08:57 and we needed to be experts quickly.
08:58 So we've gone to kind of that extreme around education
09:01 to the kind of day-to-day education
09:03 around how are our customers actually engaging with this.
09:05 So it could be hints and tips around prompt engineering.
09:09 We have a center of excellence that we've formed
09:10 to kind of lead people on that journey,
09:12 help them engineer those prompts,
09:14 but then we're leaving that knowledge with them
09:16 as time goes on.
09:17 And then you've got the actual outreach to customers,
09:19 doing events, videos, podcasts,
09:23 basically any data that we can feed people around
09:25 how to get the most out of these systems.
09:27 That's what we're seeing is really important.
09:29 - Yeah, absolutely.
09:30 I was at a conference recently with about 50 AI CEOs
09:34 and there was one of those Slido-style
09:38 respond-to-the-survey questions,
09:40 and the number of CEOs that had built AI features
09:45 into their products versus the number of CEOs
09:51 where there was meaningful P&L impact
09:53 from using AI internally, was like 10 to one, right?
09:56 And so the onus for actual adoption
09:59 is being pushed onto the customer's customer
10:03 by the AI vendors.
10:05 And I do think the scale of change
10:09 is really one of those blockers to wide enterprise adoption.
10:14 So I love that you're doing that.
10:15 I do think that inside of even the AI companies,
10:20 the steepness of that learning curve, right,
10:25 is really being felt.
10:26 It's only getting steeper.
10:28 Back to you, Amy at Shell.
10:30 So you all have got a deep history
10:33 of integrating AI into your processes
10:35 and getting a workforce comfortable
10:38 with relying on AI to make super critical
10:40 like life and death types of decisions.
10:43 Tell us a little bit about the framework
10:44 that you're now adopting in the generative AI world
10:47 to scale these use cases.
10:49 What can you share with the audience?
10:51 - I mean, boring answer, it's not that different.
10:53 - Yeah.
10:53 - It's like, because again, it's like any change program,
10:55 you just have to go through the steps
10:57 and it will really depend on the individual use cases
10:59 as the other panelists have mentioned
11:01 about exactly what will be the blocker to change.
11:04 So for example, particularly with what you might call
11:05 safety use cases,
11:09 parallel running is a really good one.
11:10 So we have this technology
11:12 called Operator Rounds by Exception,
11:14 where you basically complement an operator's round
11:17 on an asset with computer vision technology
11:20 to sort of detect what's going on.
11:22 Now you run it, everything in parallel,
11:25 the old system, the new system for a long time
11:28 and identify the variances before you take anything out.
11:32 That can just, once you've seen something working
11:36 for however long you need, days, weeks, months,
11:39 people start building trust,
11:41 they start realizing what's going on,
11:43 also it's a big part of like how you know
11:44 you've got the model right.
11:46 This is the feedback loop.
11:47 And I think that's really the key for me.
11:50 This is agile scaling where you always have a joint team.
11:55 It's always a technical team, but also a business team.
11:57 You never do data science without the business.
11:59 You're always integrating the feedback loop at every stage
12:03 and you're always listening to what people are saying
12:05 and understanding if it's not being used, why?
12:07 It's your problem, it's not the user's problem.
12:09 And then only scaling up and only taking it to the next level
12:12 when you're actually comfortable with that.
12:13 And I think there's just no way out of that.
12:16 It's never the case that you get something
12:17 and it's perfect 'cause it just doesn't work like that.
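As a rough illustration of the parallel-running gate Amy describes, here is a minimal sketch with assumed field names and thresholds rather than Shell's actual criteria: the new model's outputs are compared against the existing process on the same events, disagreements are surfaced for the joint business/technical team to review, and promotion only happens after sustained agreement.

```python
# Hypothetical parallel-running gate: legacy process vs. new model on the same events.
from dataclasses import dataclass

@dataclass
class Observation:
    event_id: str
    legacy_flag: bool   # what the existing operator round / system flagged
    model_flag: bool    # what the new computer-vision model flagged

def agreement_rate(window: list[Observation]) -> float:
    """Fraction of events where the new model matches the legacy outcome."""
    if not window:
        return 0.0
    matches = sum(1 for o in window if o.legacy_flag == o.model_flag)
    return matches / len(window)

def variances(window: list[Observation]) -> list[Observation]:
    """Disagreements: the cases the joint team reviews during parallel running."""
    return [o for o in window if o.legacy_flag != o.model_flag]

def ready_to_promote(window: list[Observation],
                     min_events: int = 1000,
                     min_agreement: float = 0.98) -> bool:
    """Gate: enough parallel-run history and high enough agreement before cutover."""
    return len(window) >= min_events and agreement_rate(window) >= min_agreement
```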
12:19 - So iterative, thank you so much.
12:21 Okay, questions.
12:22 We've got time for a couple.
12:23 That is a fast hand, you in the middle, yes.
12:26 Please tell everybody where you're from, who you are,
12:30 before you ask your question.
12:32 - Sam, from a business school in Barcelona.
12:36 So we've talked a lot about employee training,
12:39 user adoption and so on.
12:40 What about the executives?
12:41 How do we get the executives to a point
12:43 that they can redesign organizational processes,
12:45 structures, and even business models?
12:48 - Oh, I love that question.
12:49 There are so many execs who have to fill in airtime
12:53 at board meetings right now
12:54 and those 10 minutes last for a very long time.
12:56 So who would like to take executive training?
13:00 - So one of the things that we've said
13:02 to our executive team, and we have a weekly WAR meeting,
13:05 which is sort of a weekly action review: last week, next week.
13:08 And I sort of put a mandate to everyone,
13:09 I was like, in every second meeting,
13:12 I wanna know a problem you wanna solve.
13:15 'Cause I think less about education, less about process,
13:19 is actually first agreeing
13:20 what is the problem we're trying to solve?
13:22 And is it really a problem that needs to be solved?
13:25 And then the question is,
13:27 how will you use the entire landscape of AI,
13:30 not necessarily just generative,
13:31 how will you use the landscape of AI to solve that problem?
13:34 Right, and where do you need it?
13:35 That's just our experience.
13:36 - Yeah, I love that.
13:37 - Yeah, I'd be inclined to agree, actually.
13:39 If an executive comes to me with a problem,
13:40 I always try and get to where they're trying to be
13:42 and what KPI it is that they want to impact,
13:45 rather than how they think they wanna get there.
13:47 And that's not because they might know the answer,
13:49 but it's kind of good for the data scientists
13:51 behind the scenes to verify the approach that you're taking.
13:54 And that's really come to fruition
13:57 kind of this year with generative AI,
13:59 'cause a lot of people think
14:02 they want a generative AI use case,
14:02 but actually it turns out they want a deterministic output,
14:04 they want the same thing again and again.
14:06 And then it's up to me to go,
14:07 well, in that case, you should be looking at classic AI.
14:10 And it's not worse or it's not better,
14:11 it's about that kind of education
14:13 around which technologies are appropriate at which stages.
14:16 And the only way you can get there
14:17 is actually talk about where you wanna get to
14:19 right at the very end.
14:20 - Yeah, I love that.
14:21 I think it's really easy for people to come to you
14:23 and say they want blockchain and generative AI in an app.
14:26 And you're like,
14:26 I think you need some business process automation.
14:28 And it's not 'cause people are stupid,
14:29 it's because they're curious, which is great.
14:31 And then you can have a conversation.
14:33 And we've had no problem at all
14:34 getting executives interested in all of this.
14:36 And the curiosity is great,
14:37 but ultimately there's no substitute for expertise.
14:39 So it's always a joint conversation
14:41 with people who really know their business
14:42 about what can be done and what's sensible.
14:45 - Helpful, thank you.
14:46 Other questions?
14:46 Yes, back there.
14:49 - Hi, I'm Mark Riley from Mathison AI.
14:54 My clients have got clear vision and roadmaps and plans,
15:01 but the stumbling block is always identifying ROI,
15:06 getting it past the CFO.
15:08 How do we make ROI a meaningful prediction?
15:12 - Yeah, I love this question.
15:14 I can take that too.
15:17 So it is so imperative to get the CFO on the same page.
15:22 And I actually think not getting them
15:24 on the same page as executives
15:26 really can result in some twisted incentives,
15:29 like execs not sharing the actual ROI
15:32 because they don't want the budget taken away from them.
15:35 And so, especially in the early days
15:38 where you wanna be able to reinvest savings
15:41 into more explorations and not disincentivize execs
15:45 from really taking costs out of the business,
15:47 the ROI conversation should not be politicized.
15:51 We track ROI with our customers weekly.
15:54 Not everybody is doing that,
15:55 but it certainly requires being able to partner
15:58 with procurement and finance and have this be
16:01 kind of board level priority to make that happen.
16:05 But it's absolutely essential.
16:07 I think a big part of why POCs that seem to be successful
16:12 at a small scale aren't scaling
16:14 is the CFO doesn't actually know
16:16 how much this is going to cost at scale.
16:17 Everything is priced by tokens
16:19 and you have no idea what it's gonna cost the organization
16:21 to run this at scale.
16:23 So including them is just incredibly important,
16:26 and making sure that ROI is part of the actual POC,
16:30 'cause that builds the scaffolding
16:32 for what you're gonna take to production,
16:33 from people to process, is so important.
16:36 - Yeah, and look, I'd just say,
16:38 whenever we've done an implementation,
16:40 we do this entire thesis beforehand,
16:43 which is we think it's gonna cut costs by this,
16:46 here is the cohort of things we need to test it across.
16:49 Here's the period we need it for.
16:51 And depending on how close we get to the result,
16:54 that becomes the gatekeeper
16:55 for whether we move to production or not.
16:58 If the cost to achieve that ROI obviously exceeds the ROI,
17:02 then it's not worth doing.
17:04 And I think that's built in a really good discipline
17:07 to test very early versus where we've seen in the past
17:11 is it's tested too late.
17:13 And at that point, it's like, well, why has it not shipped,
17:16 versus you might actually make the call,
17:17 and we've done this a few times,
17:18 that it's just not worth shipping.
17:20 Or the approach we've taken is not the right approach
17:23 because the more you generate, the more cost you have.
17:25 But you could use classical AI or knowledge graphs,
17:27 for example, to reuse something you previously generated
17:31 and that actually makes it much more economical.
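As a rough sketch of the "thesis as gatekeeper" idea, the arithmetic below compares the savings measured in a pilot cohort against the projected token-based run cost at full scale. The function names, prices, and the 2x ROI threshold are illustrative assumptions, not anyone's actual numbers.

```python
# Hypothetical ROI gate: does the measured saving still clear the cost at scale?
def projected_annual_cost(requests_per_year: int,
                          tokens_per_request: int,
                          price_per_1k_tokens: float) -> float:
    """Token-based run cost at scale, the piece a CFO often never sees in a POC."""
    return requests_per_year * tokens_per_request / 1000 * price_per_1k_tokens

def ship_decision(measured_annual_savings: float,
                  run_cost: float,
                  min_roi: float = 2.0) -> bool:
    """Only move to production if savings clear the run cost by a set margin."""
    return run_cost > 0 and measured_annual_savings / run_cost >= min_roi

# Example with made-up numbers: 2M requests/year at 1,500 tokens each.
cost = projected_annual_cost(requests_per_year=2_000_000,
                             tokens_per_request=1_500,
                             price_per_1k_tokens=0.01)
print(cost, ship_decision(measured_annual_savings=120_000, run_cost=cost))
```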
17:34 - Wonderful, thank you folks so much.
17:36 Thanks to the audience.
17:37 - Awesome. - That's it.
17:38 (audience applauding)
17:40 (audience cheering)