From multimodal breakthroughs to the rise of agentic AI, 2024 has been transformative for enterprise AI adoption. But what's coming next in 2025?
Join AI experts Allie K. Miller (Top LinkedIn Voice in Tech & AI) and Mike Storiale (SVP Innovation, Payments & AI, Synchrony Financial), along with WillowTree's Bret Kinsella (GM, Fuel iX) and moderator Tim MacGregor, as they unpack the biggest AI developments of 2024 and what to expect in 2025.
Watch the full webinar for actionable insights on bridging the AI education gap and accelerating adoption across your organization.
Tim MacGregor: Thanks for joining us for this executive roundtable that we have today: The Executive's AI Playbook, a look back at 2024 and a look ahead at what's ready for 2025. This has been quite a year for AI, and we're excited to have an expert panel of AI innovators and leaders with us today. First, we have Allie Miller, a top AI leader, advisor, and investor, including being a LinkedIn Top Voice in Technology and AI for the past five years. We have Mike Storiale, who currently is the VP of Innovation for Synchrony Bank, helping to drive their next-gen product suite, including AI and gen AI. And Bret Kinsella, the general manager for Fuel iX, TELUS Digital's enterprise gen AI platform, and a leading voice on conversational AI for many years. I'm your moderator, Tim MacGregor, and as a product executive in many of my past roles, I'm really excited to put my product hat back on today and dive into what these leaders think about AI today and tomorrow. To get right into things, I wanted to start with Allie. Allie, 2024: what a year for AI. Looking back to the beginning of the year, can you tell us what promises you think have been fulfilled, what may have been overhyped, and what major leaps you're excited about for this next year?
Allie K. Miller: Yeah. So first, thanks for having me. I think 2024 was a big year, and maybe a year with a little bit less acid reflux than 2023. People were getting into the groove of reading the news a bit more, so maybe stress was a little bit lower, but the innovation was not slowing down. I think multimodal AI was probably the standout for me this past year. We got a lot more on the video generation side from Sora and RunwayML. We got a lot in voice interfaces, and I think that for me was probably the biggest individual change. I now have conversations with ChatGPT every single morning, and that could not have happened a year ago. And then I think we're also seeing a lot more multimodal workflows that we're able to put together. So even things like my team generates a morning summary of AI daily news, and we have a three-minute podcast that gets created, kind of like the NotebookLM stuff from Google. So to me, multimodal AI was really the standout. If I had to put my mark on what I think was overhyped, and maybe overhyped also just means early — I still think it's a really important area — I think it's AI devices. We saw a lot of failures or lowlights this past year, where things like the AI pin from Humane or small AI mobile devices like Rabbit just didn't make the splash we were hoping for. There are still some highlights, like the Meta Ray-Ban glasses doing very well, but that was probably my sad moment this year.
Tim MacGregor: You mentioned voice. That is such a nice tie-in to some of this conversation we're going to have today. Bret, you've seen AI evolve a lot over your career. What do you think has changed so much in 2024, and how can you apply these new innovations to effective enterprise use cases for gen AI?
Bret Kinsella: Yeah, I think 2023 was kind of about the promise, the Brave New World, and 2024 feels like practical challenges and practical benefits, and people just trying to think about this idea: "I've done some pilots. How do I actually get that to production?" And so there's a bunch of barriers that people ran up against. Some of them were technology, but a lot of them were actually more business process and ancillary technologies, not AI technologies. Like how do you get your chief information security officer to sign off on this? How do you know that you have failover? How do you actually change from a model that's being deprecated to a new one? Do you have to change your prompt engineering, your system prompt, for that? There were a lot of just practical considerations. And what I think a lot of people realized is that essentially what they're doing now is building or customizing applications. And when you think about applications, AI is the engine. It's the magic that makes a lot of these things work. However, there are all these other things you have to do, everything from access control to guardrails to agents, or, if you really want, thinking systems like planner/executors, which take it to that next level. You have to start to think at that component level, about all these things beyond the AI itself.
Tim MacGregor: It's a great point, Bret. You bring up getting to production, and Allie and I had a conversation just prior to this about how hard that can be sometimes. And I wonder, Mike, what you're doing at Synchrony. I've heard you talk in the past about your innovation journey at Synchrony, but from your point of view and from your observations over the past year, how hard or how easy was it for you to get an actual gen AI or AI project from POC to production? And what pitfalls or pluses did you see because of the innovations of 2024?
Mike Storiale: Yeah. Well, thanks again for having me. I think that the big thing for us moving from proof of concept into production was really all about how to educate every aspect of the business on what it meant. What was new from a risk standpoint, and what was an existing risk. You know, when you bring something like generative AI into a production use case, you start to shine spotlights on things that perhaps have already been there, and people suddenly start to ask questions about whether or not that's something that is going to introduce new risk into the business. What we did early on was set really clear targets for people. We made sure that they were able to ask the right questions, understand where our strike zone was, and ensure that everyone felt like they were included. So that meant that clearing a path is not as easy as driving on a path that's already been created. There were a lot of one-on-one conversations with really key stakeholders to ensure that they got that comfort level and an understanding for them that these processes were fully repeatable so that we could accelerate infinitely more use cases in the future.
Bret Kinsella: That's so true, though. Just thinking about my experience over the past year, talking to people about green field, brand new use cases: so much easier than going to an existing use case, an existing process, and actually adding AI to it, because there are all sorts of expectations about how things work with the existing processes. And a lot of times you're saying, "Hey, you know, do this differently. You can drop this whole segment of your process, or you need to do these things differently, or this thing is going to be much faster, or you might need to reorganize a little bit." So I'm glad you brought that up, Mike, because I hadn't really thought we were going to talk about that today. But I will say green field has been so much easier than some of these old things. Maybe we would call them brown field. Maybe that's a bad analogy for people who understand Superfund sites. But I think a lot of processes ... We've been through this era of like 25 years of very low productivity, and a lot of our processes have been ossified. And it's really interesting that so much of this, when you go into existing processes, you have to reinvent them.
Tim MacGregor: I did not see superfunds coming. That is a segue I did not see. But, Allie, from your impression, like you've done this a lot. How do you get from POC to production?
Allie K. Miller: I guess maybe I'll have a different point of view on that, Bret and Mike. I think so much of AI change management is on the people side. Bret, you had talked about how it's not just a tech shift; there's a whole wrapper of tech concerns, and then there's also a wrapper of business and people concerns. And I just think, if you are still starting out, it is actually really important to talk about a process that's already in place, just for people to wrap their heads around it. There are some employees, probably almost certainly less than 50%, probably less than 20%, who are AI super-users in an average company. And for the other 80%, to get them fully bought in, to actually have that cultural shift that you want, you do have to talk to them about things they're already doing, right? Like writing an email, managing a to-do list, thinking about a cash flow statement, things that we are used to. Then, when they're bought in, you can start to do the crazy, green field, rethink-your-process-from-the-ground-up work. I just don't want people to skip that step, or else you'll lose people at your company. And so much of moving from POC into production is about that.
Mike Storiale: Yeah, Allie, that really resonates for me, because a lot of what we've focused on in the last year and a half is around using AI with humans in the loop to make people's lives more fulfilling. And when we start to talk about it in that way, it resonates with people: the idea that we all have parts of our jobs that are not the fulfilling parts of our jobs. And so when we think about where we can start to take away those unfulfilling parts and allow us to focus on others, people start to see the opportunities.
Tim MacGregor: It sounds a little bit like, Mike — we talked a little bit in the past, too — about thinking slow but acting fast. This sounds a little bit like thinking small and getting big. Like, how do you spur that adoption in the community? Allie, one of the things that I wanted to kind of segue into as well, you know, you pointed out at the very beginning: voice and some of the innovations that are going to happen. What are some of the gaps that you see that prohibit the adoption of that last 80%? So you know, you talk about the 20/80, the Pareto rule. How do you get over those gaps?
Allie K. Miller: So first, there are some gaps in technology, right? Video generation, for example, is not quite there. We're maxing out at two minutes, and it still looks like it's AI-generated. Even in voice, it might not be 100% on transcription, and it might not cover all languages as well as it does English. AI agents don't feel fully enterprise-grade yet. I mean, there are a lot of blockers right now, but I would actually argue that there are enough areas without gaps that you can move forward with these use cases. And then, again, just echoing that people is such a big thing: education is probably the biggest gap internally, and I think too many companies are failing to think about how big of a lift upskilling is. I talk to a lot of companies, and they'll have one optional AI course. 10% to 30% of their company takes it, and they think, "Now we're AI-first." That is the biggest lie you've told yourself. You should have an AI class that is required for every single person to take, every new hire to take. You should be updating your AI training every quarter, at the very least. That, to me, is actually the biggest gap to adoption. On the tech side, gap-wise, the last iteration of models — so this GPT-4-level model — is performant enough that there are enough line-of-business use cases that you can move forward without much hesitation. When you start to get into the more complex, multimodal, workflow, agentic use cases, that's where we still have enough gaps that you can slow down a little bit. But on things that you have seen proven out over the last two years, go for it.
Tim MacGregor: That's a great point of view on getting that 80%. It sounds like the education gap, or the business gap, is really what you tell your clients and team to focus on. One other point around that education, and Bret and Mike, I'd like to hear from both of you on this one, is security, governance, and responsible AI. That comes up with pretty much every client I talk to — I mean, it might be topic number one — once they get over the gap of whether the technology can do it: security, governance, and responsible AI. Mike, can we start with Synchrony's journey on that part? How are you educating from a responsible AI standpoint, and how has Synchrony positioned itself to be responsible and ethical in the AI innovations within your group?
Mike Storiale: Yeah, absolutely. Responsible AI is core to what we do. We have cross-functional working groups, including a responsible AI working group, whose charter is to ensure that the use cases we're putting forth with AI match what Synchrony wants its use cases to be, right? What we don't want to do is get over our skis and ahead of where we're comfortable at this time. And so I mentioned things like human-in-the-loop and ensuring that what we're doing are things that we feel comfortable with. And then, from a security standpoint, ensuring that the security and data folks are able to come along and actually be an accelerant to what we're doing rather than a blocker. You know, one of the things we're really fortunate about is that we have great partners in responsible AI, in our legal and data and infosec teams, who want AI to succeed. And what that means is making sure that they're able to understand exactly what it is we're trying to do, being remarkably transparent with them, and using their skill sets to accelerate us forward.
Tim MacGregor: That's wonderful. Partnering as opposed to trying to run right through the door, it sounds like. That's great. Bret, how about you on security, and your point of view on knowledge hubs or enterprise data that we all want to use with AI? What are your suggestions or point of view on how that can be integrated into the workflow?
Bret Kinsella: Yeah, there are a couple of things there. You know, Allie talked about the education gap. I think there's a trust gap with AI, because we're moving into probabilistic tools that we're using at runtime, really for the first time. We've had probabilistic tools we used for analysis, but there were always steps in between before the results actually became manifested. But when you start using these at runtime — in contrast to deterministic solutions, where you always knew that a certain sequence of steps would lead to the same outcome — now we can have some variability, and that has led to this issue of trust. And that's one of the things where you talk about the chief information security officer saying, "How can I trust this? What are you doing today?" Right before this, I was actually on with one of the leading analysts, and we were talking about how you do that. There are basically two different ways, and people aren't really thinking about both as effectively as they should. One is intervention. When we talk about guardrails, prompt engineering, and agents — they're supervisors in the process — we're really talking about that intervention aspect. And I think that's where most people have been flocking: first to prompt engineering, then to guardrails, predominantly rules-based systems, and then to agents, which can counter probabilistic systems with probabilistic systems and come up with much more sophisticated types of protections. What's been overlooked a lot is the prevention piece. And a lot of the reason that people don't know how to do prevention — it's like trial and error — is because they don't have a way to do vulnerability detection at scale. So I really think about AI safety and security as something where we have to look at the prevention piece first. We have to have really in-depth testing that's continuous, because these are probabilistic systems. Long-tail issues might crop up at any time when you start operating at scale. We make changes, the models update, there are lots of different things, and because of that dynamism, we need this testing. But the problem is how you do that testing. Today it's human red teamers with a cybersecurity background and an AI background. Some people call them unicorns. There aren't enough of them. And even to the extent that we're starting to see tools for them to automate some of their work, those aren't widely available. So a lot of times what happens is a new system will go live, and there'll be a red team exercise before launch. Everybody signs off. They feel good. But then there's an update in two months. Is that red team available? Or there may be an emergency update, and so what do you do? That's really what I was talking to this analyst about: this prevention piece. You know, we work with Fortify. It's an automated red team solution for non-technical users. Technical users can use it as well, and it amplifies what they're able to do, and then they can use their bench tools for certain very specific things they might want to do from an interrogation standpoint. But if you step back, the product manager or a lead dev might be the ones who are around when you need to do a quick test, and the question is, do they have the tools? So what we try to do is take that red team expert's knowledge and put it into an automation solution using a series of models. It's not just one model.
There's a judge. There's an interrogator. There's a classifier. There's a bunch of different things in there. But I think that's one of the things we need to get to in terms of the trust gap. We need guardrails and agents that people trust to intervene inline when there's a problem. But we also need some of these tools so people know they have good hygiene going in, because they've done some of the preemptive blue-teaming before the red-teaming is necessary.
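To make the multi-model structure Bret describes more concrete, here is a minimal, hypothetical Python sketch of an automated red-teaming loop with separate interrogator, judge, and classifier roles. The role names and the `call_model()` helper are illustrative stand-ins for whatever models you wire in; this is not the Fuel iX Fortify API.

```python
# Sketch of a multi-model red-teaming round: one model crafts adversarial prompts
# (interrogator), another scores the target's reply (judge), and a third buckets
# any failure (classifier). All names here are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Finding:
    attack_prompt: str
    response: str
    severity: str      # e.g. "none", "low", "high"
    category: str      # e.g. "prompt_injection", "data_leak"

def call_model(role: str, prompt: str) -> str:
    """Placeholder for a real LLM call (one endpoint per role)."""
    raise NotImplementedError("wire this to your model provider")

def red_team_round(target_system_prompt: str, attack_goal: str) -> Finding:
    # 1. Interrogator: craft an adversarial prompt aimed at the goal.
    attack = call_model("interrogator", f"Write a prompt that tries to: {attack_goal}")

    # 2. Target: run the application under test with its own system prompt.
    response = call_model("target", f"{target_system_prompt}\n\nUser: {attack}")

    # 3. Judge: decide whether the response violates policy, and how badly.
    severity = call_model("judge", f"Rate this response for policy violations: {response}")

    # 4. Classifier: label the vulnerability type so findings can be aggregated.
    category = call_model("classifier", f"Classify the failure mode: {response}")

    return Finding(attack, response, severity, category)

# Because these systems are probabilistic, rounds like this would be repeated
# continuously (e.g. nightly, and after every model or prompt change), not once.
```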
Allie K. Miller: And, Tim, can I just add that it is so important to maintain two paths? You have to be thinking about tools and models and that sort of testing, and you also have to be thinking about security vulnerabilities as a company, as an organizational structure. One thing that I hear pretty much every single executive missing is shadow use of AI, shadow AI, inside of your organization. Over half of employees at your company — these are stats from Salesforce — are using AI that is not approved by your IT team at all. They are going on and doing free testing of whatever new image generator their kid told them about, and they're doing it on your computer on company time, and they don't really care about your rules against it. So you have to be thinking about not just the tool testing. And I completely agree with you, Bret, that it can't just be humans; you also need software, probably even more software, on that management side. But so many people are forgetting that people are probably your biggest vulnerability.
Tim MacGregor: Yeah, that's a really great point about security being, you know, the human there making a choice. And shadow IT is another thing we talk about a lot with all of our clients. One of the last things to tie up about 2024, which I've heard a lot from my clients, is the desire to democratize generative AI. I think you hit on it a little bit, Allie: how many people in the enterprise are actually using it, and how do we get it there? One of the things that keeps coming up is that we have a lot of large language models available, but some people want to bring their own, or they want to use a small language model. And I'd like your point of view, Allie, to start. What do you think about the transition from a commercialized, readily available LLM to a more bring-your-own-style GPT?
Allie K. Miller: So a GPT and a model are different things, but I think for any sort of SLM vs. LLM battle, use whatever works for the task. For any sort of automation that we're building internally on my team, we're testing every single API for every new task, even if it's not the state-of-the-art model, right? So o1 now has an API. We're not throwing o1 at every single task. We might still be using 4o or Claude 3.5 Sonnet for several tasks. So always have some sort of documentation or benchmarking system in place, even if, by the way, that's a vibe check of which one feels better when you test it 20 times. I think that's totally fine. But when people are picking which model to use for which use case, I think you have to start thinking about it in terms of how we process SaaS decisions: what's the interoperability like; how scalable is it; how much will this cost; will this work across geos; do we have the right data that is required as input for this model? Some models require just text. Some models can take in CSVs as well. Some can't. And so you're always needing to test whether this is the right tool and whether it can scale in a business-appropriate way. But I honestly think that the place we're in — and you just asked such a great question, which is one person asking whether they should use an SLM or an LLM — 2024, and this is my, you know, mean thought on the year, was still a very individual, task-oriented year. It was still about the individual person thinking about their own little tasks and workflows and their own productivity. I'm not seeing a ton of full workflows, full-team uplifting. And I think that's actually going to be a really big focus for 2025. So having the criteria in place for how you measure that scalability across people, even if it's not the best solution for the first person who tested it, that, I think, is what's going to differentiate a successful company from an unsuccessful one in 2025.
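For teams that want the kind of per-task benchmarking Allie describes, a lightweight harness can be as simple as running the same task prompts through each candidate model and recording a score, even a rough rubric-based one. The model names and the `run_model()`/`score_output()` helpers below are illustrative assumptions, not a prescribed toolchain.

```python
# A minimal per-task benchmarking sketch: compare candidate models on the same
# prompts, score each output, and keep the results with the use case's docs.

from statistics import mean

CANDIDATES = ["gpt-4o", "o1", "claude-3-5-sonnet"]  # swap in whatever you have access to

def run_model(model: str, prompt: str) -> str:
    """Placeholder for a call to your provider's API."""
    raise NotImplementedError

def score_output(output: str, rubric: str) -> float:
    """Return 0-10; could be a human vibe check or an LLM-as-judge call."""
    raise NotImplementedError

def benchmark(task_prompts: list, rubric: str) -> dict:
    results = {}
    for model in CANDIDATES:
        scores = [score_output(run_model(model, p), rubric) for p in task_prompts]
        results[model] = mean(scores)      # average score per model for this task
    return results                         # document this alongside cost and data-fit notes
```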
Tim MacGregor: I love that you led us right into 2025, because that's where I was going next, so I appreciate that. That was wonderful. And it leads right into a question I have for Mike. Mike, you've talked a lot about the responsibility piece we already covered, but how do you build up the upskilling for that center of excellence you're trying to create within Synchrony, the kind Allie is talking about? I'd love to hear your journey on that part.
Mike Storiale: Yeah, you know, I love that you talked about democratizing AI, because when we think about the team within Synchrony that runs AI, our core belief is that we will not be the owners of all AI within the organization. Our job is to empower the business to build and use AI wherever it might take them. What that means for us, though, is thinking about how we roll out AI a little differently. It means creating a solid orchestration layer so that everyone can use AI and we can put in guardrails to ensure that we're keeping the business safe. We talked before about security. Data-loss prevention is really critical when we talk about AI. Content moderation, especially depending on where it's going to go, can be really critical for us. Hallucinations and the ability to track them can be really critical for us. But what we don't want is every team in the business having to learn how to do that ad hoc. What we think we can do instead is create a central place to do that and allow them to go and build their products, because what they're experts on is their products and their ideas. So let's empower them to go and do that. And then, the thing that Allie said before: a lot of times, if it's something net-new that has not been built within our organization, a new use case, we're encouraging teams to do a champion/challenger or a side-by-side. So perhaps you have an LLM that you want to use. Is there another toolset that you can also use at the same time, without a ton of rework, so that we can make better-informed decisions and make the business smarter about which use cases are working better for us? Because if one team learns that a certain thing worked for them and another team has a use case that's 80% similar, we can help that team break down barriers and be smarter as they go in and launch.
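One way to picture the centralized orchestration layer Mike describes is as a gateway that applies shared checks (data-loss prevention, content moderation, hallucination flagging) around every model call, so individual teams don't rebuild them ad hoc. The check functions in this sketch are hypothetical stubs, not Synchrony's implementation.

```python
# Conceptual sketch of a central AI gateway: shared guardrails run before and
# after the model call, and any approved model can sit behind it.

def dlp_scan(text: str) -> str:
    """Redact or block sensitive data before it leaves the company."""
    return text  # stub: plug in a real DLP scanner here

def moderate(text: str) -> bool:
    """Return False if the content violates policy."""
    return True  # stub

def flag_possible_hallucination(answer: str, sources) -> bool:
    """Check that the answer is grounded in the provided sources."""
    return False  # stub

def gateway(prompt: str, call_llm, sources=()) -> str:
    prompt = dlp_scan(prompt)                 # outbound data-loss check
    if not moderate(prompt):
        return "Request blocked by content policy."

    answer = call_llm(prompt)                 # any approved model behind the gateway

    answer = dlp_scan(answer)                 # inbound data-loss check
    if not moderate(answer) or flag_possible_hallucination(answer, sources):
        return "Response withheld for review."
    return answer
```

Centralizing these hooks is what lets each business team focus on its own product while the guardrails stay consistent and auditable.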
Tim MacGregor: Yeah, I mean, the clients that we've seen be successful at scale, and, Bret, maybe you can expand on this, have a very similar approach, where they put AI in a centralized kind of center of excellence or acceleration lab. Bret, from your perspective, I'd like to hear what you think about how that accelerates AI adoption in 2025.
Bret Kinsella: Well, I think it's super-important, and Mike has been one of the early pioneers on that, if I look across organizations that have done it. At TELUS, which is really our proving ground for a lot of the work that we've done from a product standpoint, you know, we launch things with tens of thousands of users. From the beginning, there were a couple of things that were really important to them. One of them is they knew that because there are so many use cases across their organization, and because they're regulated and have higher standards to follow, they were going to need flexibility at the model level. So from the beginning, they said, "Hey, listen, we can't be tied into a cloud. We can't be tied in to a specific vendor." That was very important. But on the way to the forum, they learned a few things, too, right? So it's not just having that flexibility. One of the things they figured out was that a lot of the benefit here is not just grabbing some prepackaged vertical app; it's, as Allie was talking about, people working together in processes, right? And generative AI is so great at processing words in particular — it's pretty good with images now, too — that it's hitting this productivity area we haven't had before. And it works really well, which is why everyone can pilot really quickly and think this thing's amazing. Now, turning it into a production system, you've got to do a lot more. But that was one of the things they found: this customization was really important. And then what they found was, okay, all of a sudden at TELUS, they had 150 different requests for their centralized group. Along those lines, that was great, because they were able to drive through with the training team, rolling out tools for everybody, dealing with this shadow IT thing, with this idea that 80% of the people are going to be using something like ChatGPT if you don't offer it. But there was this one other thing that came up: they had these 150 requests and said, "We can't support all these things. What are we going to do?" And so I think that led to this other thing, which was making it easy to create copilots, so that a non-technical user can get an immediate, customized solution for their workflow, for their team, and that type of thing. Because, you know, Mike talks about this democratization, pushing it out to the edge where the expertise really lies, with the teams and the individuals, and that transforms everything. And so if you look at something like TELUS, they're supporting 25 generative AI-centered applications or projects, and they've got another 20 or 30 in dev and test right now.
But there are 6,000 customizations within that organization alone. An end user, anybody in the organization, could go in through natural language; they could upload files, and it would automatically vectorize them, so they had vectors and those types of things. That is where I think the innovation teams really shine: they bring everything together and they provide the tools. But then there's that next step that I think a lot of organizations need to take: they need to figure out a way to get beyond the druids in IT who have to do everything for you, and bring it out so that frontline personnel can not just use it like something you provided for them, but can really personalize it for their workflow.
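The "upload files and it automatically vectorizes them" flow Bret mentions is essentially retrieval-augmented generation: chunk the uploaded documents, embed the chunks, and retrieve the closest ones as context for the copilot's answers. This is a generic sketch under that assumption; the `embed()` function stands in for any embedding model and is not specific to Fuel iX.

```python
# Rough sketch of a copilot's document flow: chunk uploaded files, embed the
# chunks, and retrieve the most similar chunks for a given question.

import math

def embed(text: str) -> list:
    """Placeholder for a call to an embedding model."""
    raise NotImplementedError

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def build_index(documents: list, chunk_size: int = 500) -> list:
    # Split each document into fixed-size chunks and embed every chunk.
    chunks = [d[i:i + chunk_size] for d in documents for i in range(0, len(d), chunk_size)]
    return [(c, embed(c)) for c in chunks]

def retrieve(index, question: str, k: int = 3) -> list:
    # Rank chunks by similarity to the question and return the top k.
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]  # pass these chunks to the LLM as context
```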
Tim MacGregor: Yeah. We talked a little about that earlier in terms of the "think slow/move fast" or "think small/get big." One of the things that we see, and, Allie, I'd love your opinion on this: when you advise folks or work with your teams, how do you advise them on the crawl/walk/run journey that they're taking? And what do you think will change in 2025 now that these tools are starting to expand?
Allie K. Miller: So it requires having multiple paths all at once. And I don't want people to think that they should only be thinking about a hundred-whatever line-of-business use cases. You have to have an AI committee, or an AI task force, or a governance committee, or whatever, thinking about it from an organizational view. And the three lanes I would be thinking about are: first, line-of-business use cases that are often proven out, right? They might be HR use cases where people can search internally to find information on insurance, or healthcare, or whatever. It might be customer support; it might be a sales data-hygiene use case, whatever. The second is overall employee productivity, like giving people access to Copilot, giving people access to Claude Enterprise, or whichever tool you'd like. And the third is the really big, dream, product-growth, revenue, topline-growth side. And you literally have to be operating in all three at once. So I don't actually think there's much crawl to be happening. The crawl is before you even get into those lanes. The crawl is getting your task force. The crawl is figuring out who you want to partner with. The crawl is figuring out who your internal champions are. The second you start, it's going to already be in that walk zone. So that's at least how I would be thinking about it. The other piece that Mike and Bret have both touched on really well is setting KPIs in each. So before you even start on these use cases, figure out what success looks like so that you can prove to your board or prove to your C-suite: "This worked. Now give me 1% more of our revenue as a budget so that I can allocate even more to the next 50 use cases." And people aren't thinking about how they have to make the pitch to the board and the C-suite while they're doing that first use case. You always have to be thinking about how you're going to pitch the next one while you're doing this one.
Tim MacGregor: I love the parallel track. I think that's incredible advice for folks who are truly ready to adopt. Let's go to the run lane, then. In 2025, one of the things — and we talked a little bit about this; I spent a bunch of time at Agentforce recently — is that agentic AI, or agents in AI, are a huge topic and they keep coming up. Is 2025 the time — you touched on it earlier — that workflows get put together into a full agentic workflow? I'd love to hear your point of view on that.
Allie K. Miller: You're asking me, the person who has screamed for the last six months about whether 2025 is the year of AI agents? The answer is absolutely yes. And I think we've seen little teasers in 2024 of what this will look like: o1 coming out, o1 pro mode coming out. And we're starting to see that, okay, there is going to be a world where we're not limited by data, but we're actually thinking about time as the constraint at runtime. Bret, you were talking about this. So rather than asking, "Okay, how many GPUs do I need for this task?" it's more, "How many hours am I willing to allocate to a reasoning agent that has access to many tools to then come back and give me the answer?" And so there will be companies trying to figure out how to maximize this ROI, but the investment is actually time. And so those reasoning agents, or reasoning models, we're starting to see early. Tools we're starting to see early. I think 2025, if I had to make some bold bets, is going to be a mess of agentic AI security. Like truly, truly a mess. There are going to be people who give agents modification rights immediately, login rights immediately, access to your employee systems. And they're going to do this thinking that agentic is big in 2025 and they have to do it, without thinking enough about the new guardrails that agentic AI requires. So I have some big worries going into this year. Again — Mike touched on it — have a human in the loop. Do not automatically have an agent take that action without a human approving it, even if it's, you know, having a CFO review that final payment or whatever it is. That is, I think, one of the biggest gaps: just that enterprise-secure version of an AI agent.
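Allie's human-in-the-loop recommendation can be enforced structurally: the agent proposes actions, but only the lowest-risk ones execute automatically, while everything else is routed to a named approver. The `ProposedAction` fields and the `notify_approver()` helper in this sketch are hypothetical, shown only to illustrate the approval gate.

```python
# Minimal sketch of a human-in-the-loop approval gate for agent actions.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str       # e.g. "Send $12,000 vendor payment"
    risk: str              # "low", "medium", "high"
    approver: str          # e.g. "cfo@example.com"

def notify_approver(action: ProposedAction) -> None:
    print(f"Awaiting approval from {action.approver}: {action.description}")

def execute(action: ProposedAction) -> None:
    print(f"Executing: {action.description}")

def handle(action: ProposedAction, auto_approve_risk: str = "low") -> None:
    if action.risk == auto_approve_risk:
        execute(action)            # only the lowest-risk actions run unattended
    else:
        notify_approver(action)    # everything else waits for a human decision

handle(ProposedAction("Send $12,000 vendor payment", "high", "cfo@example.com"))
```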
Tim MacGregor: And that's a great, great point. And I appreciate your humoring my question on that. But, Mike, agentic AI in financial services: I mean, you know, I kind of led you right in. You guys must be thinking about it, but what steps are you taking to adopt it, and what concerns or benefits do you see?
Mike Storiale: Yeah, it's absolutely on our radar. But, you know, to be candid, as a highly regulated industry, in financial services anything that starts to make decisions starts to be really difficult for us to govern in the same way, and we have to walk really slowly there. So, Allie, what we said before — the crawl/walk/run — that's probably the crawl, at least in my industry. You know, when we think about automation, we have plenty of automation in other areas, right? What I like to call classical AI, with robotic process automation in areas like that. But it can be right 100% of the time, and it's super-auditable and super-explainable. And when we think about things like agentic AI, there are a lot of times in our industry where I can't be right 99% of the time. I can't be right 99.99% of the time. I have to deliver it correctly 100% of the time. So I think, to Allie's point, if we do start to see some of that move forward in our industry in the next year, it's going to continue to remain human-in-the-loop. It's probably going to be in use cases that are significantly lower risk, either with public or low-risk data. But I think you're going to start to see that continue to move forward. And I hope we don't see people starting to grant carte blanche access across industries. I can tell you I don't expect that to happen in financial services. I do think that agentic AI provides a lot of great opportunities, but there are a whole lot of areas where I think people could get overconfident in the responses they get and overconfident in the capabilities, and not recognize that they needed a second set of eyes or that it's not 100% accurate all the time. And generative AI is not always the right tool for the job in front of you.
Allie K. Miller: Mike, can I just ask? Does your answer change whether it's startups or enterprise? Like how risky one company is willing to be?
Mike Storiale: Yeah, I definitely think startups, and unregulated ones at that, are going to take bigger risks. You know, when you talk about being in a regulated industry where everything's got to be really explainable and, you know, in the case of financial services, we're dealing with people's personal information and their money. That's a little different than if I'm working in a startup that's dealing with lower risk data. So I think you will start to see in startup communities, and definitely ones that deal with lower risk data, taking bigger swings here.
Tim MacGregor: Bret, you were shaking your head a lot during that. Agentic AI: what tools do you think can be implemented or used to help start implementing that, from your perspective and point of view?
Bret Kinsella: Yeah. We've done a lot of agentic work: early planner/executors, supervisors ... there are a lot of different types of things that we've done. I didn't really plan this, but I'm actually coming in between Allie and Mike on this. I think there's going to be a bifurcation of the market. The top 10% to 15% of the market, who are the most sophisticated and the most aggressive, are going to do a lot with agentic AI. And I think that will cross barriers into regulated industries, because even within regulated industries, there are some workflows that are, you know, very precisely monitored and imperative by definition, and there are others that are more declarative, let's just say, and objective-based. So I think the leaders in each of these industries, maybe 10% or 15%, are going to go deep and long into agentic AI. And I'm talking about enterprise, right? So let's talk about large businesses first for a moment. And then I think the vast majority of the business — you know, let's call it 65%, with the balance not doing much, sort of piddling along as they have been — are going to be looking for simpler solutions, things with deterministic outcomes, like connecting it to RPA and some other things where they can have full explainability down to the end. Buying more prepackaged solutions. And I really think another "A" is my watchword for 2025, and that's agility. I think what we're seeing, particularly in businesses like financial services, telecom, health care, government, and law, but across consumer goods as well, is that any businesses that have a lot of words, right? They generate a lot of words. They have a lot of words. They receive a lot of words, in terms of inbound from customers or whatever it is. They're going to be starting at the front end of an agility revolution. Right now, we have all of these really rigid processes that we have to follow, so we know that it just comes out the same way every time, almost like a factory methodology, because we haven't had the ability to be faster, better, more nimble, and get the same type of benefits. What I'm seeing right now with generative AI is we can give people tools at a point in a process, or across three points in a process, and all of a sudden they can do it much faster. They can handle more variety in what they're doing. And so that's really what I'm looking for: this idea of agility. Organizational agility to drive productivity, with AI really being the engine.
Allie K. Miller: Can I just add like I almost don't think that cost is the main limitation. I am not even sure if risk is the main limitation. I think one of the biggest limitations is just like knowing what tools exist and being able to actually operationalize it in your company. The reason I bring that up is because there are agentic AI companies like CrewAI or Reworkd or Gumloop or any of these things, and if you talk to them, they have waitlists of tens of thousands of people waiting to use these tools, even with all the concerns we've already brought up. People will sprint because there is a competitive advantage and an arbitrage opportunity that's going to happen. So, yes, we're talking about all of these limitations, but there will be some people that decide that those limitations are not worthy of slowing down.
Bret Kinsella: Yeah, I agree with that. I think people who want to try something new, though ... I would differentiate them from people who are ready to move forward. And so I almost feel like 2025 is going to be to agents what 2023 was to LLMs, in that people are figuring it out. Because the other thing is, when you deploy them, the architecture becomes much more complicated. Up to this point, it's almost like you just unit test it and it works, as opposed to system-level testing, and we have to get to much more rigorous system-level testing. I also agree with you that cost is not the biggest driver right now, because people are just trying to figure out what they can do. And if they find something, they're just like, "Yeah, I know that's going to give me some benefit." And that's why I was really talking about this idea of agility, and agility can lead to cost benefits. It can lead to other things. I really believe that people are, in general, going to be more focused on frontier models this year as well. Because I think once you get into the optimization phase, which we're not in, small language models and a lot of really custom fine-tuning and stuff like that make sense to me. I still think we're about 18 months away from that. Right now people are just asking, "What can I do to make it work?" And I remember talking to Brad Abrams about it. He's now head of product at Anthropic, but he was running Copilot for Microsoft at the time. And he said, "Yes, we could run something cheaper than GPT-4 on this product, but we have to use it, because people have to get used to what good looks like, and then we can figure out how to optimize." And so I still think, like you say, Allie, we're trying to figure out where we can have significant bang for the buck with this type of thing, and then we'll move into that optimization phase. But I agree with you. People want to try a lot of things. I think we're going to have a lot of experimentation still. But I also think we're going to see a lot of really good cases. We're already seeing it internally: you know, tens of millions of dollars saved in one organization from these tools. We're going to see a lot of those next year, where people are getting that benefit.
Tim MacGregor: I think I have to add crawl/walk/run/sprint to my question next time. Clearly, I didn't cover enough of the use cases. So, Bret, that's kind of your prediction for 2025. Mike, what do you think Synchrony is going to focus on the most in 2025 from an AI perspective?
Mike Storiale: I think the big thing this year is about scaling up what we've already started and then getting the rest of the business to have a solid viewpoint on how they can use artificial intelligence. I think there are a lot of places — and Allie kind of touched on this earlier — where there are people within the business who are using this in their daily lives every single day and have ideas a mile long. And there are people you talk to who still don't understand how AI can apply to their part of the business or even their personal lives. And so as we start to get more of those people on board, you're going to start to see that democratization I talked about before, where it proliferates across the entire business. I think 2025 is going to be the year where the backlog is bigger than anybody can possibly execute on. And I think that's the best problem to have.
Tim MacGregor: Wow. The backlog gets bigger, and 2025 gets to be exciting. So, Allie, if we can start to wrap things up here. One of the things we talked about a little bit is your article, "What the heck comes after generative AI?", and some of your inspirations for AI. I know it's nearly impossible to predict six months from now, never mind years into the future. So I'll give you one opportunity to wrap up 2025 and what you think the main focus will be, and then, from just an inspiration perspective, I'd love to know what you are thinking ... big thoughts on where AI goes from here.
Allie K. Miller: So going into 2025, I think there are going to be a lot of performance increases. I think cost is going to continue to drop by a decent amount. I think interfaces will move well beyond just chat. I can't stand chat interfaces anymore, and I want every single thing to have a voice interface. That would be my ideal situation. But if I had to give the crown to some sort of topic, it's probably going to be agentic AI for 2025. Thinking even beyond that, outside of a clear timeline (even though, Bret and Mike, I know we can probably make bets after this meeting), the future of AI, how I've described it for years, is that it's personalized, predictive, and proactive. And so if I just took those three things, I'd say each one will 5X soon, right? Personalized: every single person will have their own AI that is tailored to their own goals, their own values, their own decision-making. We don't have that yet, unless you give it a million-bajillion pieces of context on you, which people aren't really doing. That's one. Predictive: it's going to get more performant, so we'll move from things at like 75% levels up to, you know, the high 80s or 90s. And then I think the biggest thing, especially related to agentic AI, is proactive systems. This idea that Amazon might ship you products before you actually order them, because they've predicted that you are more likely to buy them and less likely to return them, so they just ship them to you anyway, because the cost of a return is less than the cost of you waiting until you actually buy. I think we're going to see a lot more proactive systems in the next several years and a lot more integration into humans. The piece that I wrote, if people want to look it up, is on evolutionary tech, which starts to get into brain-computer interfaces. And we don't want to end on such a weird note, so it's really just that proactive piece that I'd have people focusing on.
Tim MacGregor: Wonderful. This has been a wonderful conversation, and I love that everybody has a unique perspective and point of view. On behalf of TELUS Digital, WillowTree, and Fuel iX, I'd like to say thank you for this conversation. For those out there watching, hopefully you found this inspiring and you learned a couple of things. Thank you to today's panelists: Allie, Mike, and Bret. Thank you very much for coming, and we will talk with you all soon. Thank you very much.
Bret Kinsella: Thanks a lot, Tim.
Mike Storiale: Thanks.