About the Episode
As 2024 comes to a close, Marc LeBlanc continues the conversation about enterprise AI. This time, he welcomes Todd Wilson from Red Hat to explore the evolution of artificial intelligence as a tool to accelerate and scale initiatives in operations and development within enterprises. Discussing the potential of intelligent platforms and development tools to empower ops and dev teams, Todd reminds listeners of the importance of starting small, starting simple, and proving out value before scaling initiatives that leverage emerging technology. With this approach, technology teams can manage their investments and lower the barriers to entry.
Transcript
[00:00:00] Todd Wilson: I mean, I think there's been a lot of hype about AI in the marketplace. Everybody was sort of super excited about ChatGPT over a year ago now. And then, you know, the hype's kind of dying down and people are getting their feet a little more firmly planted on the ground about how they want to use these tools, how they want to adopt them.
The people are your key resource, right? AI is a tool that your people can leverage to great effect. And I think if you're looking to replace people with a tool, you're looking in the wrong corners. That's the mindset that folks in leadership roles really need to bring to their organization.
It's not about optimizing costs by eliminating people. It's about optimizing costs by enabling our people to do so much more.
I mean, I think right now, a lot of the barrier to getting started is, you know, that folks assume they need to start at scale, right? And I think that is limiting them in a lot of ways.
So I think a lot of folks overcomplicate their journey a little bit, and they forget about the principles of starting small and starting simple, proving out value, and then scaling.
[00:01:02] Marc LeBlanc: This is Solving for Change, the podcast where you'll hear stories from business leaders and technology industry experts about how they executed bold business transformations in response to shifts in the market or advances in technology.
In every episode, we'll explore real-world strategies and technologies that fuel successful evolution. I'm your host this month, Marc LeBlanc.
In this episode, we're going to talk about AI and AI platforming, how companies are adopting that new technology, and what it means for them. We're very fortunate to have Todd Wilson here to talk to us about this.
Todd is a Corporate Solutions Architect at Red Hat with over 25 years of experience in IT. Todd has a wide breadth of experience spanning public and private sectors. He's passionate about using leading edge technology to drive enterprise technology transformations. Thank you for joining us on Solving for Change today, Todd.
[00:01:53] Todd Wilson: Thanks for having me, Marc, it's a pleasure.
[00:01:54] Marc LeBlanc: So like I mentioned, I'd like to dig into AI, AI platforming, what that means for customers that are out there, and how a company should get started. What's Red Hat's stance on that today?
[00:02:07] Todd Wilson: Sure.
I mean, I think there's been a lot of hype about AI in the marketplace.
Everybody was super excited about ChatGPT over a year ago now and then the hype's kind of dying down and people are getting their feet a little more firmly planted on the ground about how they want to use these tools, how they want to adopt them. And Red Hat is included in that.
I think now, if you take a look at our strategy, it's sort of resolved itself into two streams: there is the intelligent platforms stream, where AI is being brought to bear in our Ansible and RHEL and OpenShift platforms to make them easier to use, to lower the barriers to adoption, and to have some smart, intelligent components available to the operators of those platforms, just to scale it better, make it more secure, all of the good things.
The other stream is more focused on folks building the intelligent applications themselves. So what do developers who are creating AI-enabled applications need? How are they going to manage their models and train them, and what tooling is necessary for those things?
So really, Red Hat's focus right now is keeping those two streams going and making sure that we're meeting the needs on the ops side with the intelligent platforms, and on the dev side with the enhanced AI capabilities of the apps that people are building.
[00:03:31] Marc LeBlanc: I really like the two streams. Having spent a lot of time in the DevOps field, that resonates with me. Having that glue between the two but still having a separation of duties, I think that's very core. Tell me a little bit more about what that means for an operations team when they have access to a proper smart platform.
[00:03:55] Todd Wilson: Maybe I'll use Ansible Automation Platform as the example, right?
So a lot of organizations, when they bring automation into their shop for the first time, or maybe they were using different tooling before, face a pretty steep learning curve at the beginning. The AI components in Ansible Automation Platform are called Lightspeed. So Ansible Lightspeed is a partnership with IBM watsonx, and it provides code assist and code coaching for folks building their automation scripts.
What that does at the early stages is really reduce a lot of that early friction and that high barrier to entry when folks are first getting going on their adoption journey. It accelerates that and makes those early steps much easier, much more accessible for people. Then what we find over time is that the use case changes a little bit for something like Lightspeed. Instead of teaching the basics and getting people going quickly, it's actually assisting with the much more complex workloads and automations that people are trying to accomplish across multiple environments and multiple application layers.
And so there's kind of two layers to it: the "get started quick" and the "let's do something hard."
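To make the code-assist idea concrete, here is a toy sketch in Python of that suggestion flow: a natural-language intent goes in, a known-good automation task comes out. This is only an illustration; Ansible Lightspeed itself is backed by a watsonx model rather than canned templates, and the function and templates below are hypothetical.

# Toy stand-in for a code-assist flow: natural-language intent in,
# suggested automation task out. Lightspeed uses a trained model;
# here we just match keywords against a few known-good templates.
TEMPLATES = {
    "install nginx": (
        "- name: Install nginx\n"
        "  ansible.builtin.package:\n"
        "    name: nginx\n"
        "    state: present\n"
    ),
    "start firewalld": (
        "- name: Start and enable firewalld\n"
        "  ansible.builtin.service:\n"
        "    name: firewalld\n"
        "    state: started\n"
        "    enabled: true\n"
    ),
}

def suggest_task(intent: str) -> str:
    """Return the first template whose key appears in the intent."""
    normalized = intent.lower()
    for key, task in TEMPLATES.items():
        if key in normalized:
            return task
    return "# No suggestion found; try rephrasing the intent.\n"

print(suggest_task("Please install nginx on the web servers"))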
[00:05:09] Marc LeBlanc: I love the "get started quick" part. I think we see in the industry there's a lot of need to rapidly get people's skills up. The problems we're solving are so complex today compared to even just a few short years ago, and I think maybe that speaks to the type of applications we're scaling.
Tell me about the other side. We talked about smart platforms; on the developer side, what does it mean for a developer to have access to deploy against a smart platform? And what are the benefits of the Red Hat stream for them?
[00:05:45] Todd Wilson: So we'll use OpenShift AI as the platform example here.
OpenShift AI is an opinionated platform distribution of all of the tooling that a data science team or an AI development team would need to create a model or use an existing model, train it, deploy applications that leverage services on top of that model, and have that be a sustainable pattern that keeps pace with an agile development team, so that the team can work the same way they're already working on this platform.
And so those tools are made available, they're supported by Red Hat, they're the community tools that folks are already using, and the bonus feature is that they're easy to operate, because it's all bundled together with a single operator. The lifecycle maintenance of these tools becomes much simpler, so the teams just don't have to worry about version mismatches and things like that as they're working on their applications.
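As a minimal sketch of that developer-side experience, the snippet below posts a prompt to a model that has already been served behind an HTTP endpoint. The endpoint URL and the request and response shapes are placeholder assumptions, not OpenShift AI specifics; each serving runtime defines its own schema.

import json
import urllib.request

# Hypothetical endpoint for a model served on the cluster; the path
# and payload schema depend entirely on the serving runtime deployed.
ENDPOINT = "http://my-model.example.internal/v1/completions"

def query_model(prompt: str) -> str:
    payload = json.dumps({"prompt": prompt, "max_tokens": 128}).encode()
    request = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    # Assumed response shape; adjust to the runtime's actual schema.
    return body["choices"][0]["text"]

print(query_model("Summarize today's error logs in two sentences."))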
[00:06:47] Marc LeBlanc: You know, I'm thinking back again about that marrying of the operations and the developer teams. Another thing Red Hat has always been known for is being an open source proponent.
How are we bridging the open source culture, that spirit, over to AI?
[00:07:05] Todd Wilson: And you know what? The open source culture is in our bones, right? And we live and breathe it.
Red Hat started a very interesting initiative, again in partnership with IBM and their Granite model, called InstructLab. The idea is that there needs to be a more open, transparent, and community-driven way to train models, so that anybody can contribute to them the same way anybody can contribute to an open source project. InstructLab was created to provide that avenue, and also that workflow, for people to feed information into a model that is fully open source and licensed under the Apache 2.0 license. Anybody can use it, you can commercialize it, you can build a business on top of it, but it's also fully trainable and fully transparent to whoever's using it.
[00:08:01] Marc LeBlanc: Thinking of those people that are running these tools and developing these applications, what does that mean to them?
[00:08:06] Todd Wilson: I think what it does is really address one of the problems with AI right now, which is trust. A lot of folks are a little spooked by some of the hallucinations that come out of an AI model. A lot of people are a little spooked by that uncanny valley that happens. When you are doing things in the open, in a very transparent workflow, where folks can see what's going in and what's coming out, it really goes a long way to earning the trust to use these tools in everyday use cases and everyday scenarios.
[00:08:38] Marc LeBlanc: Right, so being able to see the input, see how the model works and where that hallucination came from, and understand it and play it back. Am I kind of capturing that right?
[00:08:48] Todd Wilson: Exactly right. It demystifies it, it takes a little bit of the black magic out of it, so that folks can start to rely on how it got built and therefore understand what they can use it for.
[00:09:01] Marc LeBlanc: On the culture aspect as well, there does seem to be some trepidation. You hear different things from different folks. One of the common ones is: is AI going to replace my job? There are dueling trends that I see when I do research on this. One is the idea that if companies don't adopt AI, they'll be left behind. The other is that if companies adopt AI and replace their people with it, they'll also be left behind. What are your thoughts on the impact of that?
[00:09:36] Todd Wilson: Well, I definitely agree, right? The people are your key resource. AI is a tool that your people can leverage to great effect.
And I think if you're looking to replace people with a tool, you're looking in the wrong corners. I'm going to use an example, one of the use cases that I think is actually going to become really important in the near future: a lot of Ops steps, especially when you're developing an application at scale or a very complex enterprise application, are extremely difficult to accomplish manually. Things like performance tuning, things like log management and troubleshooting through reams and reams and gigs and gigs of logs. A lot of these things are very well suited for AI models, taking advantage of their speed and their comprehension of those large volumes.
And so those AIOps models could take something that would be days or weeks' worth of work for a human team, a sysadmin team, to accomplish, and shrink that down into minutes and hours, right? Now, all of a sudden, you've got a team that can scale so much more.
Maybe they could only support one app before. Now, with the AIOps tooling, they can support 30, 40 apps. All of a sudden, your job is a lot easier. You haven't eliminated people, you've actually empowered them to accomplish so much more. And I think that's the mindset that folks in leadership roles really need to bring to their organization.
It's not about optimizing costs by eliminating people, it's about optimizing costs by enabling our people to do so much more.
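To make that concrete, here is a minimal sketch of the kind of log triage such AIOps tooling automates (not Red Hat's implementation): it counts ERROR lines per time window and flags the windows that spike. A model-driven system would layer pattern recognition and root-cause suggestions on top of the same loop.

from collections import Counter
from datetime import datetime

def error_spikes(lines, window_minutes=5, threshold=50):
    """Flag time windows whose ERROR count exceeds the threshold.

    Assumes lines shaped like: '2024-11-30T14:03:22 ERROR disk full'.
    """
    errors_per_window = Counter()
    for line in lines:
        timestamp, level, *_ = line.split(maxsplit=2)
        if level != "ERROR":
            continue
        moment = datetime.fromisoformat(timestamp)
        # Truncate the timestamp to the start of its window.
        window = moment.replace(
            minute=moment.minute - moment.minute % window_minutes,
            second=0,
        )
        errors_per_window[window] += 1
    return [w for w, n in sorted(errors_per_window.items()) if n > threshold]

sample = ["2024-11-30T14:03:22 ERROR disk full"] * 60
print(error_spikes(sample))  # one five-minute window over the threshold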
[00:11:16] Marc LeBlanc: That's a really interesting nuanced viewpoint. I really like that. I often talk about AI being an accelerator to higher velocity for your workforce, but you're talking about actually leveraging this from a scale perspective.
And I think that AIOps conversation is probably where this is going. Yes, today we're talking about a lot of chatbots, but AIOps is a natural progression. How do we take that large volume of telemetry data from our infrastructure and funnel it into something like an AI instead of a person, so the AI can very quickly pull out trends or anomalies and give you some sort of predictive feedback? That's really powerful.
What is the barrier for some of these companies? We were talking about platforms. What's the barrier to getting started? What are the things you've got to put in place first?
[00:12:10] Todd Wilson: I mean, I think right now, a lot of the barrier to getting started is that folks assume they need to start at scale, right?
And I think that is limiting them in a lot of ways. We've actually had some challenges lately with some of our existing OpenShift customers. You know, they want to start kicking the tires on OpenShift AI. They're excited about it, but then they initially think, "Oh my, we're going to have to provision a whole bunch more compute. We're going to have to buy some NVIDIA machines. We're going to have to get all of this stuff." And to get started, that's just not true.
So I think a lot of folks overcomplicate their journey a little bit. And they forget about the principles of starting small and starting simple, proving out value, and then scaling.
And so I've been spending a lot of time coaching folks and saying, "No, you don't need the NVIDIA GPUs on day one. You know what, as you get more mature, you might need them in the future, and you'll have to put that on the roadmap to budget for it. But that's not a day one need. Let's walk through the workflow. Let's take a look at the type of data you're going to want to be using. Let's take a look at the kind of application use cases you have. And start real small. Let's start with what you have."
And so to me, I think the key message right now for folks, especially here in Canada, is: start small and then grow.
[00:13:27] Marc LeBlanc: Start small and then grow. I think that makes a lot of sense; we like that approach. Especially in DevOps, we always talk about MVPs and iterating. Talking about the data a little bit, how does someone identify a useful data source? Is that a challenge? We hear a lot of talk about data science. Do people need to go out and hire that role, or is there a better way?
[00:13:50] Todd Wilson: I think folks have a bit of a blind spot with data quality, frankly. I think there is definitely a need for professional data scientists who understand the nuances of bias and the nuances of data quality, in order to bring the robustness and reliability that we're expecting out of these AI models.
Without that, again, we're getting a lot of these hallucinations, because garbage in, garbage out is still true, whether it's a database or an AI model. It's just a fact of life. And so, yes, I do think that data science roles are key.
Do you need to have one on staff? Maybe, maybe not. I think, again, if you take a look at how InstructLab works with the open source model and how that open contribution model works, there are data scientists behind the scenes helping with that. So they don't necessarily need to be your data scientists, but you can join a community that has data scientists.
And I think that is the more interesting way of working, right? Rather than trying to do it all yourself, let's open up the doors. Let's accept the fact that there is a community here and leverage it.
[00:15:04] Marc LeBlanc: That sounds and really feels like open source, the way you describe it like that with the community behind it. I really like that approach.
What's the significance of what's actually running and powering these AI workloads? I think some people know, maybe some don't. What's under the covers?
[00:15:23] Todd Wilson: Okay. I mean, at the end of the day, it's still just the regular old infrastructure, right?
So, from a Red Hat point of view, we've got RHEL as the foundation of everything. And so, under the covers, we've got RHEL servers that are optimized for AI workloads. And by that we mean they're optimized for things like GPU utilization, making sure they're getting the fastest processing that they can.
On top of that, we then have a container platform in OpenShift. Again, that is optimized to scale those workloads quickly and efficiently, so that you're only leveraging that expensive hardware when you need it.
So under the covers, it's the same tools we've always been using. It's really not magic. But there are some very important customizations that are necessary in order to make these things work at speed and at scale.
[00:16:17] Marc LeBlanc: And when we talk about InstructLab and that community, is that something someone can get started with on much smaller infrastructure, or do they need to go right to something like OpenShift AI?
[00:16:25] Todd Wilson: No, not at all. You can run it on your laptop. There is literally an image that you can download, and you can get started very locally.
So that's, again, part of the beauty of the InstructLab approach: the barriers to entry are extremely low. If you have access to a computer, you can get started.
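For anyone who wants to try that laptop-scale start, here is a minimal sketch that scripts a local InstructLab session from Python. It assumes the ilab CLI is installed (for example via pip install instructlab); subcommand names have shifted between releases, so verify them against ilab --help before relying on this.

import subprocess

# Assumes InstructLab's `ilab` CLI is on the PATH. Subcommand names
# have changed across releases; check `ilab --help` for your version.
STEPS = [
    ["ilab", "config", "init"],     # one-time local configuration
    ["ilab", "model", "download"],  # fetch a small quantized model
    ["ilab", "model", "chat"],      # chat with the model locally
]

for step in STEPS:
    subprocess.run(step, check=True)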
[00:16:45] Marc LeBlanc: That'd be really great for companies with an R&D or some sort of research department or capability that they can just enable. Very low cost, very low risk: go pull it down, do some exploration, do some dabbling.
[00:16:58] Todd Wilson: Exactly.
[00:16:58] Marc LeBlanc: And see what that MVP looks like.
[00:17:00] Todd Wilson: Yep. Start small, right?
[00:17:02] Marc LeBlanc: Start small.
I think just to recap some of the things we've chatted about today, Todd, it's been a great conversation. There's a lot in AI that we're still exploring. Like you mentioned, we are at just the inception of this as an industry. Some of it's been around for a while, but it's becoming more crystallized.
Definitely, we've talked about starting small, not thinking too big out of the gate. Think MVP, think small, get started, just do it.
The other thing we talked about was how Red Hat has broken this into two streams: one around smart platforms for your operations teams, and the other around supporting the developers building those smart applications.
I think that's fantastic. It keeps everything close, but gives clear separation of duties.
And then the third thing was that, under the covers, there's not a lot changing, but if we want to scale this, we do have to think about what type of platforms and infrastructure we need underneath.
Todd, thanks so much for joining the podcast today. It's been a pleasure to have you here.
[00:18:03] Todd Wilson: It's been great, Marc. Thanks very much.
[00:18:05] Marc LeBlanc: Thank you for listening to Solving for Change. If you enjoyed this episode, leave us a rating and review on your favorite podcast service and join us for our next episode.
About our hosts
Marc LeBlanc is Director of the Office of the CTO at MOBIA. An experienced technologist who has worked in large enterprises, start-ups, and as an independent consultant, he brings a well-rounded perspective to the challenges and opportunities businesses face in the age of digital acceleration. A thoughtful and engaging speaker, Marc enjoys exploring how technology and culture intersect to drive growth for today’s enterprises. His enthusiasm for these topics made him instrumental in creating and launching this podcast.