Season 1 / Episode 5

How AI Reshaped Network Operations for a Telecom Provider

with:
Marc LeBlanc
Host
Hart Ripley
Guest
"There have been some events and outages on this company's network recently and they've discovered that, with their new AI tools, they actually know where the issues are. They can trace them back much faster and reduce their mean time to resolution."
Hart Ripley
National Automation Lead & Solutions Architect for the Office of the CTO at MOBIA

About the Episode

Telecommunications is a competitive industry where success is defined by network performance. In this episode, we take a close look at how a Canadian telecommunications provider transformed its network operations with artificial intelligence (AI). Welcoming Hart Ripley, National Automation Lead & Solutions Architect for the Office of the CTO at MOBIA, Marc LeBlanc uncovers how this organization implemented AI and ML tools to take its network operations centre (NOC) from reactive to proactive, anticipating issues that could arise on the network and planning their resolution in advance.

Like many large-scale transformations, this one wasn’t straightforward. For this telecom, modernizing infrastructure was a necessary first step to implementing AI. Find out how they tackled this challenge and what lessons they learned along the way on the journey to improving network operations.

Transcript

[00:00:00] Hart Ripley: A lot of the older network management solutions and insights were more focused on historical data and relied heavily on human intervention. So, let's look at a problem, let's get some folks involved, let's do some troubleshooting, some triage, and then time to resolution is much longer. So, with some of these AI/ML models and approaches, they can turn that kind of mean time to resolution way down.

The point of the AI piece and the ML piece that's built into AIOps is, now we have all the data in one place. That's kind of point number one. Before, we had data all over the place; we needed to get data into one common platform from all our sources and senders. And now we have the ability to say, "Okay, now we have built-in monitoring, right? Now we have a bunch of correlation. Now we have visibility. Now we have trending. Now we have suggestive analysis to understand why we're seeing these issues or these types of trends on these types of devices, right?" We're able to build a better roadmap and a better understanding of, what does our entire network footprint look like?

[00:01:13] Marc LeBlanc: This is Solving for Change, the podcast where you'll hear stories from business leaders and technology industry experts about how they executed bold business transformations in response to shifts in the market or advances in technology. In every episode, we'll explore real-world strategies and technologies that fuel successful evolution.

I'm your host this month, Marc LeBlanc.  

In this episode, we'll take a closer look at how modernizing infrastructure for a Canadian telecommunications company had a direct impact on the organization's ability to stay competitive in an industry that thrives on innovation. We're very fortunate to have Hart Ripley here to talk to us today about what this looked like.

Hart is the national automation lead and solutions architect for the office of the CTO here at MOBIA. With over 20 years of experience in IT, Hart has a wide breadth of experience. While he's worked in many areas of IT, his passion and expertise is in building innovative custom solutions to tackle real business challenges. Always up to speed with the latest trends in the industry, he quickly spots the ones that have the potential to be the most impactful.  

Thanks for joining us today on Solving for Change, Hart.  

[00:02:28] Hart Ripley: Thanks Marc, happy to be here.  

[00:02:30] Marc LeBlanc: So tell me a little bit about the problem that this particular telecommunications company was having. What was it that got us involved to begin with?  

[00:02:40] Hart Ripley: It was mostly centered around network operations. They had a lot of network operations tools for subscribers and outages. And there's a lot of hardware in the field to really keep tabs on, to make sure everything's working. And they were longtime users of Netcool Network Management, Operations Insight, those types of things. As well as SevOne.

And some of these tools are deprecated. Some of these tools were being rolled into more of IBM's AI ops tooling, Watson AIOps. And really it was about consolidating that, but also giving them the ability to have more actionable insights, some better observability, some better visibility. But really understanding what's happening in the network, how to respond to it in more of a real time fashion and less reactive.  

[00:03:29] Marc LeBlanc: And what was fueling that? So, I kind of take away that there's a bit of a disparate tooling ecosystem that was shifting away from maybe what they needed. But where were they trying to get? What was the goal they needed to realize?  

[00:03:47] Hart Ripley: They're trying to get to more of a consolidated way to action any outages, but also look at trends in the network. So, really seeing what the busy endpoints are, when things are getting overloaded, where they're coming from, and being able to correlate that over time.

A lot of the older network management solutions and insights were more focused on historical data and relied heavily on human intervention. So, let's look at a problem, let's get some folks involved, let's do some troubleshooting, some triage, and then time to resolution is much longer.

So with some of these AI/ML models and approaches, they can turn that mean time to resolution way down. They can get an idea where the source of the issue is coming from. They can drill down faster.  

The technology and the solutions give them the ability to do more automated response: kick off workflows, run some automation jobs, those types of things, if they want to. But the main thing is really just rolling that up and having the real insights in real time.

[00:04:57] Marc LeBlanc: Right. And what was the approach that was taken to get them closer to that goal? What were some of the processes or some of the tools that maybe were introduced as part of the solutioning?

[00:05:07] Hart Ripley: Yeah, so the big one was the adoption of IBM Cloud Pak for Watson AIOps. This is a containerized tool that really will take inputs from SNMP, inputs from other IBM tooling, including SevOne, which was the real main source of ingestion here.

You have things like SNMP traps. You have other data sources that can send data to AIOps. And the big driver for them was this is a containerized application. It needs a container platform. It's very much a cloud native tool and solution. And they already had the tools to feed in the network information. The piece of the puzzle they were missing was the overall observability, the insights piece, the AI operations platform to really give them a centralized view. But also give them the ability to do something with that information.  

Instead of having a bunch of disparate tools, let's take all these tools and now they can send all their data into one area, aggregate that data and start seeing what are the trends we're seeing? What are the areas where we see some issues, when are issues starting to materialize? And get more of a forward looking proactive approach into this tooling.  
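The consolidation Hart describes, many disparate senders feeding one platform so events can be correlated in a single place, can be sketched in miniature. This is an illustrative Python sketch only; the field names and event shapes below are invented and are not the actual SNMP, SevOne, or Cloud Pak schemas:

```python
from collections import Counter

# Invented event shapes for illustration -- not real SevOne or Cloud Pak data.
snmp_traps = [
    {"agent": "rtr-01", "oid": "1.3.6.1.6.3.1.1.5.3", "name": "linkDown"},
    {"agent": "rtr-02", "oid": "1.3.6.1.6.3.1.1.5.4", "name": "linkUp"},
]
sevone_alerts = [
    {"device": "rtr-01", "metric": "ifErrors", "severity": 4},
]

def normalize(event):
    """Map either source's fields onto one common schema."""
    if "agent" in event:  # SNMP-trap-shaped event
        return {"device": event["agent"], "type": event["name"], "source": "snmp"}
    return {"device": event["device"], "type": event["metric"], "source": "sevone"}

# Once everything lands in one normalized stream, cross-source correlation
# becomes a simple query instead of a manual hunt across tools.
events = [normalize(e) for e in snmp_traps + sevone_alerts]
by_device = Counter(e["device"] for e in events)
print(by_device.most_common(1))  # rtr-01 shows up in both feeds
```

The point is not the code itself but the shape of the work: ingestion, normalization to one schema, then aggregation over the combined stream.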

So, overall it was a containerized platform, it's a Cloud Pak, and it really comes out of the box with all of these kinds of offerings. But where this telecom wasn't really familiar was more of the containerized journey. They were focused a lot on virtualized workloads, they didn't have a container platform of choice, and that's where MOBIA came in to help them with that journey.

[00:06:59] Marc LeBlanc: It's interesting. I think that's something we see quite a bit in our industry, that jump from virtualization up to a containerized platform. It's quite a jump. It's a different paradigm, a different way of thinking. What was the approach to gain comfort for both the leadership and the people that were gonna operate that? What were some of the steps that were taken to get everyone to be okay and comfortable?

[00:07:24] Hart Ripley: That's one of the areas where MOBIA really adds value. We're not just looking to come in and pitch something. We're looking to come in and work with the customer and help them understand: yes, it's a journey. Yes, it's a new piece of technology. But you have an operations team already, you have an infrastructure team. Largely, you're already doing a lot of these things.

It may seem like a black box initially, but it's really for us to help and say, "This is a journey. There's some new tool language, some new graphical interfaces. There's some new command line tools, potentially. There's probably a new user experience, is the best way to put it. At the end of the day, we will help you with those processes. We will help you understand what's required to manage it, what the lifecycle looks like, and really try to make that as comfortable of a transition as possible."

And the containerized journey in this case was for a specific use case, but we're looking at the bigger picture. It's not about how do we run this Cloud Pak today. It's not, how do we get this working for the single use case. It's how can MOBIA enable you to be able to scale down the road, or be able to expand out your containerization requirements down the road.

So, it's very much a collaborative engagement. That's really how we work with these clients. Yes, we want to help you meet your goals right now. We want to help you have some success on your immediate requirements. But also not losing sight of, you're going to need to expand more on the container side down the road. Other off-the-shelf tooling, other open source tooling, other in-house development is going to drive the requirements for microservices, other containerized outcomes, those types of things. So, it's important not just to say, "Okay, we're going to build a container platform and we're going to run this today." It's how do we integrate this into our operational lifecycle, our operations team, our infrastructure team, our response management, our SRE practices. All these types of things to make sure that it isn't a black box to the organization, and obviously that the team that needs to support it and run it is as enabled as possible, at least day one, and then understanding what they can do with the platform and their investment.

[00:09:53] Marc LeBlanc: And as far as that operations team, there's a journey for them, as well. You touched on a number of the roles or functions that might be required to support that team, or that platform. What's the path to get there? Are these typically new hires or is it upskilling? Is it a blend? Is it looking for partners that can maybe take on the operations? What was the approach that this particular customer went through?

[00:10:18] Hart Ripley: Yeah, we looked at it pretty holistically. The customer didn't have any experience with containerization or running a containerized platform.

So, the last thing we wanted to do was drop in something that we documented, we did for them, and we kind of walked away at the end of the day. We had multiple parties here. We talked about a vendor being IBM. Red Hat OpenShift being the container platform of choice, very much aligned with IBM and Red Hat's strategy. And obviously the fact that they're under one umbrella, that OpenShift is gonna be the platform for any Cloud Paks that come from IBM. Everything's certified in OpenShift, everything is versioned, and you have a very clear picture of what the lifecycle is for that Cloud Pak.

Not to mention entitlements, subscriptions, those types of things are part of the Cloud Pak and really one of the driving forces to adopt OpenShift to run these Cloud Paks. There's multiple advantages: you've got both vendors supporting it cohesively, and that's one thing.

So, I think the big thing is you have internal folks that, a lot of times, they're systems administrators, Linux admins, infrastructure folks. They've done a lot of virtualization management, hypervisor management, those types of things. So, it really comes down to that individual approach where there's some upskilling required. And, it's also important to make it a very personal thing. It's a learning experience for some of these folks. It's an opportunity to learn new skills and new things on the job, and add things to your overall experience that can benefit the organization, and can benefit you down the road as an individual.

And you're really looking to find these stakeholders and find these champions, because we don't want to help enable technology in these organizations if the individuals really aren't aligned with the goals. So, the main thing for us is to make sure that they understand and they're on board with us. We're here to help you. We're here to work together. We're not here to do it for you. We will enable you to do as much as you want hands on yourself. We will do collaborative engagements, collaborative delivery.

Part of it, to your point, is there's operational management: day two, day three. And the big thing there is, if you don't have a team that's run container platforms today or managed the lifecycle of a container platform, you have your managed service providers, in which case MOBIA, we are doing managed Kubernetes today, managed OpenShift. So, there's that option.

It was really from day one trying to understand, was this client looking towards outsourcing some of this type of stuff? We were looking to provide the help, the enablement, the operational continuity where we could. And in this case, we were very much aligned with, okay, you have some individuals that are going to get upskilled in some areas. But overall operational management, as you scale out the number of clusters, other app use cases, those types of things, it's just not going to be a core competency or core function day one. So, let MOBIA take the managed services, the ongoing versioning, the break fix, the proactive alerting and response, those types of things. And, let's focus on allowing you to consume the Cloud Paks, the AIOps intelligence, the automated remediation, the other intel that's available to you by going down this road, and we'll learn together. We'll go together down this journey. And, we'll continue to enable you and we'll understand that, at some point, your use cases are going to expand, your footprints could expand, your team's going to expand, and we'll just stay very aligned with you and understand how can we help. What do you need? Let's continue to do roadmap stuff for the future to understand where can we help, where can we plug in. And as much as we want to help with enablement, really understanding what that team's going to look like down the road and right now.

[00:14:54] Marc LeBlanc: Right. And you touched on something there. We have this platform in for a very specific use case, but we know that when we bring in a platform like Kubernetes, it does expand. We do bring in other applications and workloads. And yes, you need help up-front, maybe from a partner for the operations, but by coming up with a holistic approach, you're future proofing your team and your platform to grow with you.

But I want to switch gears a little bit. We talked about the platform, but really we want to dig into what role that AI tooling is playing in this. So, this platform was to get ready for an AI tool. Maybe talk a little bit more specifically about what that tool is providing and how it's going to enable NOCs to better prioritize those alerts and get better insight.

[00:15:39] Hart Ripley: Yeah, for sure. So, I touched on this a little bit earlier, but there was a lot of disparate tools initially, where they had to go look at SNMP collectors for certain issues. They had to look at Netcool operations management for certain things, SevOne for certain other things. The point of AIOps is to collapse some of these older tools into more of a modern framework where we're taking in all this data, but what are we doing with the data? How are we correlating it? How are we aggregating it and how are we analyzing it? And the NOC that we have today is very much not used to that automated remediation, not really used to being able to see too many trends ahead of time and be able to drive actual outcomes from them.

So, the point of the AI piece and the ML piece that's built into the AIOps is: now we have all the data in one place. That's kind of point number one. Before, we had data all over the place; we needed to get data into one common platform from all our sources and senders. And now we have the ability to say, "Okay, now we have built-in monitoring."

Now we have a bunch of correlation. Now we have visibility. Now we have trending. Now we have suggestive analysis to understand why we're seeing these issues or these types of trends on these types of devices. We're able to build a better roadmap and a better understanding of, what does our entire network footprint look like? Where are we seeing issues or congestion? Where are we seeing even physical issues, port issues, subscriber issues, hand-off issues, as well as user experience issues, all rolled up into that one spot.
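The trending idea, flagging when a device's metric climbs well above its recent baseline, can be sketched with a toy rolling-average check. The window size, threshold factor, and sample values below are arbitrary illustrations, not anything specific to the AIOps models:

```python
# Toy trend check: flag any sample sitting well above the recent average.
def flag_anomalies(samples, window=3, factor=2.0):
    """Return True/False per sample (after the warm-up window)."""
    flags = []
    for i in range(window, len(samples)):
        baseline = sum(samples[i - window:i]) / window
        flags.append(samples[i] > factor * baseline)
    return flags

utilization = [10, 12, 11, 13, 45, 12]  # e.g. port utilization % over time
print(flag_anomalies(utilization))  # the spike at 45 gets flagged
```

Production systems use far richer models, but the workflow is the same: learn a baseline from history, then surface deviations before they become outages.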

But before, it was about having to go look at these tools and trying to normalize what they're telling us. Now, it's very much rolled up into the AI models to say, "Well, here's what we're seeing." It's very visual. The dashboards are very customizable and they're producing, obviously, customized things if that's what the organization decides to do.

But there's also a lot of built-in actionable outputs, workflows, dashboards, visualizations to say, "Okay, I want to show my executives what our network health looks like, or what our trends are over a certain period of time." So, not only can this benefit the NOC and the SRE teams, but we can say to some of our executives who've supported these investments, helped us get some of this newer tooling in the door, "What are we able to see now? How far ahead can we look? What's my network look like at any given time? How healthy is it? Where am I starting to see some issues? When we see a lot of very heavy workloads on the internet with some of these events and these sporting events, how's my network responding?" Those types of things.  

So, it allows us to not only roll it up to the operations team, but the tool itself will allow the different consumers a different type of user experience, and really be able to say: we're not just going to look at this, we want to put it out there for different team members to see, in real time, what's happening.

[00:19:05] Marc LeBlanc: You touched on a number of things there, a number of small wins and bigger wins. I'm wondering if maybe you could highlight: beyond some of the challenges we talked about, having to bring in new technology, getting your operations team aligned, were there any other challenges that we had to overcome as we worked through this solution with this company?

[00:19:23] Hart Ripley: Yeah, I think one of the big things is, we're talking about integrating a container platform and running a cloud pack on top of it. So, there's a lot to unpack there. But, container platforms have a lot of requirements. We're talking there's still infrastructure, there's networking, there's storage, there's security, there's traffic management, there's user access control, user experience, onboarding.

So, there's a lot of stuff to consider and that's where MOBIA really adds the value to say, "Okay, you're going down this road for the first time, right? We're building out a production containerized platform. It is going to run an out-of-the-box, off-the-shelf operator that's natively integrated with OpenShift, but we still need to make sure the platform is going to be healthy. Make sure the platform is going to have stability. The storage is always available." I think that's one of the things that really stands out to me is that the more stuff you're running on a platform, especially these multi-faceted tools, there's a lot more requirements that need to be looked at day one. And we spend a lot of time in the architecture design phase because that's really key. We're not going to be able to successfully deploy, operate, manage a platform if we miss something in terms of the architecture and design, or those specific integrations that are going to support the platform's ability to run, to run well, to scale, things like that.

So, I think it's important that we focus on that holistically, but also understanding that we do have some operators on top. We do have other requirements within the infrastructure that need to support it. And really, it's about looking at all that stuff in context and making it easy to disseminate for all the individuals. But also for the organization to say, "Okay, this is new but, we're familiar with storage, we're familiar with networking, we're familiar with infrastructure. It's just now we have some different architecture, different design, and possibly a different consumption model." And really that's kind of the way we tee it up. We really try to make it less of an unknown or a big learning curve. There's definitely some pieces that will require a little bit more than others in terms of learning and adoption.

But at the end of the day, a lot of the skill sets are transferable. And I think that's one of the big things that may get lost, or not talked about enough, when we're modernizing platforms: you're a systems administrator. You understand networking, you understand permissions, you understand operating systems, you understand infrastructure to a point. Well, all those skills are very transferable in the modern platform containerized world. So, really trying to wrap that together and making it a little bit easier to move from one to the other, to understand how some of that stuff stacks up.

[00:22:30] Marc LeBlanc: Right, I'm curious as we've rolled this in, you were working with the teams planning, doing the design, doing the architecture, doing the upskilling. Has the organization seen any impact on the culture within those teams as part of this journey?

[00:22:46] Hart Ripley: A bit, yeah. For sure. The way that we approach a lot of these is that we want to follow as much of a DevOps methodology approach as we can. We're talking about modern platforms. We're talking about containerization, microservices, code images, those types of things. So, we are going to deliver in a way that not only enables the team on the platform and the operations side, but gives them the keys and the training and the enablement to really understand: how do we keep this working? How do we do this all as code? We're not copying scripts around, we're not copying files around every time we deploy a cluster. We're going to do it in a very declarative fashion.

And, just by the nature of how we talk about this, how we deliver it, how we go through workshops with these teams, it really starts to settle in to say: Okay, we're using source code management. Everything we do is in a repository. We have a set of code. Here's our starting point. Here's our blueprint, if you want to call it that. And, we really obviously take a step back and look at the Git pieces. And, really how to use Git. How to use branching. How do different repositories work, how do permissions work, how do you do promotion of code, those types of things. Just the fundamental pieces to say, We're not going to be doing this in SCP and copying documentation around. We're not going to be documenting this in Word. We're going to be documenting this as code and just, everything goes in a repository. We make a change to a document, or an artifact, it happens in the code repository. And the team that we're working with alongside our delivery team is going to have access to that code repository. We're going to do things consistently.  
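The declarative, everything-as-code idea can be illustrated with a toy reconciler: the desired state lives in the repository, and a diff against the observed state tells you exactly what to change, who changed what being left to Git history. The state layout and field names here are invented for illustration and don't correspond to any real cluster configuration format:

```python
# Desired state as it would sit in a Git repo, versus what's observed live.
# Keys and values are invented examples, not a real OpenShift config schema.
desired = {"clusters": {"prod": {"version": "4.14", "nodes": 6}}}
actual  = {"clusters": {"prod": {"version": "4.13", "nodes": 6}}}

def diff_state(desired, actual, path=""):
    """Walk the desired state and report every field that drifts from actual."""
    changes = []
    for key, want in desired.items():
        have = actual.get(key) if isinstance(actual, dict) else None
        if isinstance(want, dict) and isinstance(have, dict):
            changes += diff_state(want, have, f"{path}/{key}")
        elif want != have:
            changes.append((f"{path}/{key}", have, want))
    return changes

print(diff_state(desired, actual))  # only the prod version has drifted
```

In a real GitOps setup a controller runs this loop continuously and applies the changes; the sketch just shows why a declarative source of truth makes drift visible and reversible.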

It's going to take a little bit of time to adopt and a little bit of change up-front, but once you get the team on board, people start to see the light and really understand that, "Oh, I can look back and see when this used to work. What was the code base then? Can I revert to that code base? What were the changes made? Who made the change? When did they make the change?" Those types of things.

So, there's a lot of advantages to going down that approach and really adopting that agile methodology, and really those DevOps focuses. And that's what we're really trying to teach is just more of that overall workflow, that culture. And I think by leading with that, to your point, is we're trying to build a culture that we have at MOBIA but we're trying to also enable that in our customer environments and especially individuals and teams.

[00:25:32] Marc LeBlanc: Very true. I want to shift gears a little bit and start talking about the value back to the business and what that future roadmap looks like. It's early on in the containerization strategy for this customer but have they realized any business impact so far?  

[00:25:51] Hart Ripley: They've definitely measured a few things, and while I don't have all the numbers right now, there's been some occurrences recently where they've had some outages or some issues within the network, and the initial feeling and understanding and experience for them, and for some of the executive team and the SRE teams, has been that, well, we now actually know where the issues are. Now we can trace them back much faster, reducing that mean time to resolution that we touched on a little bit earlier.

They're able to kind of just dive in and say, "Well, the issue is here." Or "This is the device producing the problems." Because a lot of the times when you're dealing with network operations, you're getting a lot of noise and a lot of alarms from different devices, different points in the chain. And the key part with this AI ops is you want to understand: is this router that has ten, twelve, or more routers attached to it the problem, or is it something downstream? How can I hone in and look at the actual root cause, right? Now, they have an intelligent network operational management system that says, "Well, we can give you pretty accurate ideas, visualizations, outputs of, yeah, there's lots of alarms firing, but here's the actual problem. Here's some data to support that and even suggestive ways to remediate those things."  
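That root-cause isolation, separating the one failing upstream device from the flood of downstream alarms it triggers, can be sketched with a simple topology-based suppression rule. The topology, device names, and rule below are invented for illustration; the actual AIOps correlation is far more sophisticated:

```python
# Toy root-cause isolation: given a "downstream of" topology, an alarm on a
# device whose upstream device is also alarming is treated as a symptom.
upstream = {"rtr-11": "core-1", "rtr-12": "core-1", "sw-31": "rtr-11"}

# Every device in this set is currently firing alarms.
alarming = {"core-1", "rtr-11", "rtr-12", "sw-31"}

# Keep only devices with no alarming upstream: the probable root causes.
root_causes = {d for d in alarming if upstream.get(d) not in alarming}
print(sorted(root_causes))  # the downstream noise collapses to one device
```

Four alarms reduce to one probable cause, which is exactly the "lots of alarms firing, but here's the actual problem" experience described above.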

And I think we're starting to see that a lot with some of these other tools in the market now. It's the event-driven and the suggestive remediation where it's like, "Well, here's a bunch of things you can try." And I think that can be helpful if you understand the context, if you have a plan in place to say, "How do we remediate?" A lot of organizations start off down the automation journey with, "Well, I just need to know where the issues are and then I can address them."

We still see a lot of large organizations not doing automated remediation because there's still risks there. The big thing for them is to determine where the problem is. We have a very capable team in some cases that can go and resolve that issue very easily. We have ways to do the resolution, but the biggest thing is actually finding out where the issue is without having too much downtime, having too many user experience issues, those types of things.

So, that's kind of the biggest feedback we've seen so far is: Okay, now we have our SevOnes reporting into our AIOps platform. So anytime there's an issue in the network, we know where it is. We can go to one spot and find it. We have one way, one place to do notifications and alerts and the remediation is much simpler, much quicker now. So that's kind of the biggest win today. There's some other integrations that I mentioned, SNMP and stuff of that sort, that can also plug into AI ops. Some of that stuff is still in progress. So, it's not all there today in terms of this client and their capabilities, but they have a platform that will support all these different types of logs and metadata getting into the platform.

So, the fact that they're already seeing some wins in terms of resolving issues and visibility, obviously great feedback to the business.  

[00:29:10] Marc LeBlanc: And what would you say is next? You kind of laid out a bit of a path, but what's the next win or two wins they might be looking for?

[00:29:18] Hart Ripley: Yeah. So, more integration into AIOps. But also building out what their SRE and disaster recovery strategy looks like. We've got Watson AIOps in production, but there's a lot more to it: if we have an event where our production site is down, where's our data, how has it been replicated to another site? How can we point all the sources and the feeds into this other site, or into the storage it's pulling from, and ensure that our NOC is still able to handle real-time data issues, real-time traffic, and basically flip back and forth between production and DR? So, we've been focusing on how do we enable that, and it's not an easy task, right?

I think a lot of organizations can do data replication pretty easily, do back-ups pretty easily, but when you're talking about real-time data and, especially, events, you're getting thousands of events a second. You can't really lose any data there, or else you're not acting on the most accurate data. Your deltas are off, those types of things.

So, it's about how do we ensure that, if there's an issue, we know what the failover, the business continuity plan looks like. What's the actual process from an operational perspective to say, "Okay, we're going to failover. Here's the procedure. Here's how the data gets moved over. Here's the hot copies of the data. All the feeds that are coming into AIOps, where's that going now? Is that handled by DNS failover? Is that round robin?" So, as we touched on with building a platform, there's a lot of moving pieces and there's a lot of prerequisites and integrations.
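The failover decision itself reduces to a small sketch: prefer the production site while it is healthy, otherwise promote a healthy DR site. Real continuity plans also involve replication lag, DNS TTLs, and documented runbooks; the function and site names here are invented, and this is only the skeleton of the decision:

```python
# Health-check-driven failover decision, greatly simplified.
def pick_active(site_health, preferred="prod"):
    """Return the preferred site when healthy, else the first healthy DR site."""
    if site_health.get(preferred):
        return preferred
    for site, healthy in site_health.items():
        if healthy:
            return site  # promote this DR site
    raise RuntimeError("no healthy site available")

print(pick_active({"prod": False, "dr": True}))  # fails over to the DR site
```

Everything around this decision, repointing the event feeds, keeping hot copies of the data, testing the procedure, is where the real operational work described above lives.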

These are all things that we're making sure that we help them think about, but also go along with them on their journey to say, "These things are going to take some time. We need to test this stuff. We need to make sure that we have these documented procedures in place." But at the same time, getting them somewhere where they can have some business continuity when they need it. And that's kind of the big thing from the platform perspective.  

For features and functionality, they're going to continue to integrate more stuff into AIOps. There's some other features that they're not using today, like some of the automated remediation pieces that I was talking about earlier. The dashboarding stuff is still evolving. They have a lot of different network equipment and SNMP traps, and some of the MIBs still need to be updated, or there's some older gear that may not support everything in terms of gathering as much information as needed.

But, I think anytime a telecom goes down this path, you have tons of different generations of hardware, some of them very old, some of them very new, some of them with different capabilities than others. You could have a great, cohesive aggregating tool, but it's really dependent on how much data, and what data, can I get out of these endpoints to start with.

So, continuing to expand out from a platform perspective, from a business continuity perspective: how do we manage multiple Kubernetes clusters? Supporting the application is a big thing. How do we do that centrally, to your point earlier? How do we do that in a GitOps, DevOps fashion, a declarative fashion? So, as this organization continues to deploy clusters, how are we handling consistency? We have standards. We understand the hardware footprint in a lot of cases, but how are we deploying a cluster? How are we operationalizing the cluster?

We're continuing to layer in automation. We're continuing to layer in GitOps and declarative outcomes in the way we do day-two ops, right? As well as ongoing management, continuing to ensure those types of things. But, at the same time, we have to make sure that the team understands how this works, that they can do it themselves.
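The declarative, GitOps-style approach described above boils down to one loop: desired state is recorded in Git, and a controller diffs it against live state to decide what to change. Here is a minimal sketch of that reconciliation idea; the resource names and fields are hypothetical, and real tooling (Argo CD, Flux, and similar) does far more.

```python
# A minimal sketch of the declarative idea behind GitOps: desired
# cluster state lives in Git, and a controller computes the actions
# needed to make the live state match it. Names are hypothetical.
def reconcile(desired, actual):
    """Diff desired vs actual state into create/update/delete actions."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return sorted(actions)

desired = {"monitoring": {"replicas": 3}, "ingress": {"replicas": 2}}
actual = {"monitoring": {"replicas": 1}, "legacy-agent": {"replicas": 1}}
```

The consistency benefit discussed in the transcript comes from running this same diff for every cluster: each new cluster converges to the same declared standard instead of drifting with manual changes.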

Obviously there is some reliance on us as a managed service provider, but we're also making sure that they understand what we're doing. We don't want to be kind of that black box where we just make everything work. We see this as a relationship and a good opportunity for us to work with the client and make sure that they're enabled, that their individuals understand, "Okay, we've made an investment in OpenShift, we've made an investment in IBM Cloud Pak for AIOps and in other Cloud Paks. What are we actually getting in terms of outcomes? We're adding more clusters, we're looking at other application use cases. Our infrastructure and other administrative teams now are understanding the benefit of containerization because we can move quickly. Our DR process looks a little bit different. If we want to add other tenants and what have you, we have the ability to do so."

Another advantage of OpenShift, obviously, in light of some of the VMware stuff lately, is we can now do virtualization on the same platform, same user experience. We have this client that's a VMware client and there's a lot of virtualization there. This gives them the ability to say, "Okay, what is our strategy around VMware? Maybe we want to virtualize some stuff in OpenShift Virtualization. Maybe we want to keep some stuff in VMware. We're very comfortable managing VMware, as are a lot of organizations."

So they've already got a platform that supports it. They're already getting familiar with the user experience of OpenShift, the automation capabilities, the consistency, the common user experience. And they can take some of that risk off of "all our eggs are in the VMware basket."

Let's look at some workloads or some lower environments, start getting familiar with Red Hat virtualization, and let's do a proof of concept. Let's start using our clusters for other use cases. Let's look at multi-tenancy in Kubernetes.

These are all things that we've started talking about: both how to operate and how to go down that path and educate these teams. And it's not just an investment to support this use case. It's an investment to support the business longer term and some of those critical systems, and to give them, really, ultimate flexibility.

[00:35:55] Marc LeBlanc: We always like to say that we touch on the people, process, and technology in this podcast. And we definitely did that today.  

I think I got three main takeaways. The first one: by modernizing infrastructure with things like containerization and AIOps, you're really giving the network operations centers and IT teams a lot more context around the problems that are happening in the network. They can determine exactly what's going on and where the issue is, and they can reduce that time to respond a lot compared to if they weren't using tools and platforms like these.

The second one would be that the AI and ML tooling built into AIOps is really powerful when it comes to understanding the trends and issues on a network beyond just those KPIs alone, so it's giving the context as well.

And the last one is really that organizations shouldn't be afraid to use partners as they're rolling out these new projects. It is a journey, as you touched on a number of times, not just the implementation, but making sure that the right processes are there, making sure the right design is there, making sure that your teams have the right path to build up the right capabilities. And leaning on strong partners that have expertise in those areas can really accelerate that journey for different organizations.

Thank you so much to our guest today, Hart Ripley, for joining us and lending some valuable insights into the conversation.  

[00:37:18] Hart Ripley: Thanks for having me, it's a pleasure to be here. Thanks, Marc.  

[00:37:21] Marc LeBlanc: Thank you for listening to Solving for Change. If you enjoyed this episode, leave us a rating and review on your favorite podcast service, and join us for our next episode.


About our guest

Hart Ripley
Guest

Hart is the National Automation Lead & Solutions Architect for the Office of the CTO at MOBIA. With over twenty years in IT, Hart has a wide breadth of experience. While he’s worked in many areas of IT, his passion and expertise are in building innovative custom solutions to tackle real business challenges. Always up to speed on the latest trends in the industry, he quickly spots the ones that have the potential to be most impactful.

About our host

Marc LeBlanc
Host

Marc LeBlanc is Director of the Office of the CTO at MOBIA. An experienced technologist who has worked in large enterprises, start-ups, and as an independent consultant, he brings a well-rounded perspective to the challenges and opportunities businesses face in the age of digital acceleration. A thoughtful and engaging speaker, Marc enjoys exploring how technology and culture intersect to drive growth for today’s enterprises. His enthusiasm for these topics made him instrumental in creating and launching this podcast.
