
IoT For The Energy Industry: How To Modernize With Machine Learning and DevOps

On-demand Webinars

Our experts discuss and show how an enterprise organization (specifically a utility company) can modernize its business using machine learning, IoT devices, and DevOps. In today’s world, much of the valuable data collected from devices, machines, and applications goes unused. By integrating devices, business applications, and operational data into a connected ecosystem, you can begin to make better strategic decisions faster. Modernization can reduce customer attrition and increase customer engagement by providing predictive analysis, which averts failures that could cause both customer and financial damage. Watch to learn more.

Modernizing Your Business with Machine Learning, IoT, and DevOps

45-Minute Presentation Covers

Energy Industry Example – How to successfully utilize machine learning, artificial intelligence, and DevOps

  • Utility company use overview

  • Artificial Intelligence & Machine Learning

Internet of Things

See how to get the best of both worlds with devices that can act locally on the data they generate, while also taking advantage of the cloud to configure, deploy, and manage them securely at scale using a hybrid Azure IoT Edge cloud solution.

  • IoT in Action

  • IoT Edge

Operationalizing the Solution – DevOps & Automated Testing

By applying real-time analytics and the end-to-end traceability enabled by DevOps, IT organizations not only make software delivery more efficient but also enhance the business value delivered throughout the digital journey. By automating the tedious, repetitive processes and tasks associated with ingesting, replicating, and synchronizing data across the enterprise, data integration lets you improve operational efficiency while reducing IT and development costs.

  • Build Team(s)

  • DevOps

Transcript of presentation:

[00:00:01] What I’m going to do is frame up a lot of the conversation today using an energy client, or really energy as a vehicle. But this really applies to all kinds of different verticals; it’s just the vehicle for the dialogue, so I don’t want to get too hung up on that. What I’ll be spending a lot of time on is the machine learning side of things: how that integrates with IoT devices, some of the challenges we have there, really focusing on the AI and the machine learning. Art is going to spend a lot of time unpacking IoT and also bringing up some new technologies that can make our lives easier as machine learning and all of this comes together. And then Eric’s going to bring it all home for us: how do I get this out there in the field? How do I deploy this? How do I keep making sure that these machine learning models stay fresh and keep adding business value? So again, just to frame up our dialogue, we’ve got our hypothetical energy company right here that is producing electricity. They’ve got the generation side of things, we’ve got the distribution side of things, and then we’ve got our consumers that are actually consuming all of that energy and using the business. That’s great. And as we’re pulling all this information together, we want to ask some really interesting questions that can make our business different and add some business value. What can we do?

[00:01:26] Once you start pulling line-of-business ERP and IoT devices into this, IoT devices change some of the questions we can ask. On a capability maturity model, they go from simpler to more sophisticated. The first thing we have here is really just great analytics, which was business intelligence if we were talking five years ago: amazing visibility into your data, where you can ask some rich questions. What if I’m making a giant capital investment? How does that get all the way back down to me or my customers? How does the revenue we’re collecting come all the way back to the capital investment, so we know when we’ve got that return, and once we’ve met it, what the profitability is ultimately going to be? That’s some pretty interesting visibility all the way through. Then we get into the predictive side of things. We have these great IoT devices operating on the generation side and the transmission side, and that brings up some interesting scenarios. If the weather changes and it gets warmer, maybe we want to change how much we’re generating, how much power we’re putting out. If there’s a transmission problem, do I want to redirect to different clients, different customers, homeowners, things like that? All of this is going to help us manage what ultimately our product is and that whole pipeline we’re working through. And then, of course, what-if scenarios.

[00:02:55] You know, this is where we can take those predictive models that we’ve generated and just plug in different scenarios for investments and so forth, and see how that might shake out. What if there’s a change in the market? What if we have more natural weather events like we’ve had this year so far? What if next year is like that? How is that going to play out? So there are some interesting things we can do, and it’s not just us acting on a gut feeling; we’ve got data to back it up. Now when we make requests for new capital investments, or to strengthen our supply chain or our transmission side of things, we’ve got some numbers that will help us get there. Now I really want to focus on the IoT side of things. When we’re dealing with IoT, we may have all these individual sensors in various areas, and they’re going to be generating all kinds of data for us; these IoT devices can be pretty chatty. On the generation side, we’ve got our turbines generating our power, and we’ll take a look at those in a little bit. Is there vibration on those things? We want to record all of that. Then there’s the transmission side of things.

[00:04:18] Again, we want to understand what our substations look like, and as all of these things are functioning, we’ve got to have a funnel to capture this data. We’ll get into the data volumes and how we can deal with them, but essentially this is just a data consumption platform. We’re dealing with big data: streaming analytics, and batch, the two sides of that. Now, we don’t just want information coming in from the IoT devices; we want to bring it together with our line-of-business systems as well. That’s our ERP, so we’ve got real financial models that tie back to it. Also, for work management, PowerPlan is a popular package inside energy companies, so we happen to have that here; depending on your industry, it could be any of those. Essentially we’re bringing this data together in an aggregated fashion so we can plug in an analytics module. Now, when we do our analytics, oftentimes it’s up in the cloud, but due to certain client needs we have to bring it down on-premise, and we may also need to deploy it to different territories. We’ve mostly worked with Google TensorFlow, the Microsoft Cognitive Toolkit, and Azure ML. All of these different technologies work great together, which is pretty interesting from a Google and Microsoft perspective: we use a lot of the Google visualization and data science tools alongside the Microsoft tools. They’re all based heavily on Python, and those two ecosystems work well together. So depending on what you’ve standardized on, you can go with one versus the other, and it’s fairly easy to move from one model to another. That’s a little bit about the back-end technology there.

[00:06:01] The question was, if one of these sensors gets knocked offline, how is that going to be handled? What’s going to happen right there is we’re going to get an alert. And I think that brings up some context I probably skipped over that’s great to bring up: immediate things that are happening. Something happened right now; either a sensor just turned off, so maybe it’s not working, or a sensor detected that all of a sudden vibration is through the roof. That’s something we need to take immediate action on, and it’s a great example of where the IoT technologies Art will cover plug in. If something goes completely offline, we’ll see that there’s all of a sudden no data; that’s probably a rules-based thing that says it’s just not reporting anymore, so let’s distribute that load across the rest of our inputs. That’s actually where you get into generalization, which we’ll talk about in a little bit, and which will be helpful on the predictive side. We’re going to talk about a predictive maintenance example. These turbines are multimillion-dollar investments, and we want to know, if we’ve got a maintenance cycle, when we have to turn these things off or shut certain things down; maybe that’s once every three months or once every six months, depending on the type of thing we’re maintaining. That’s where our predictive analytics is going to apply. So one of these is real time, as it’s happening; the other one says maybe you should do this maintenance, you should update, because of all the load the system has been under; it doesn’t follow the traditional pattern.

[00:07:25] So I hope that helps clear that up a little bit. Yup. Gotcha. Two questions. The first question was around security and how we handle that, and the second was how we scale these machine learning platforms. The first one is security. As I mentioned, a lot of this platform was originally cloud based. Some of it has had to come down on-prem because clients have that specific concern, so that it can be controlled completely within their data centers. I would advocate that whether you’re dealing with AWS, Microsoft Azure, or the Google cloud, they really are quite secure, and there’s quite a bit of tooling you can leverage to make sure that data is very well secured: things like Key Vault, plus good practices like using encryption at rest and encryption in motion. Certain industries, though, are going to dictate, hey look, this has to live here, for whatever reason. That’s not necessarily an IT problem; that’s more of a compliance or business concern that we just have to honor and respect. Same thing with where we’re deploying to certain territories: where can I store my data? It can’t all be in one centralized cloud spot; maybe I’ve got to deploy to a data center in Asia Pacific, one in the U.S., and one in Germany. It all depends. So security is an interesting one, and I think it really goes back to the specific vertical you’re working in and the client you’re working with.

[00:09:01] But there are tools like Key Vault that we can leverage, plus data encryption at rest and encryption in motion, and I think those mitigate it to a large degree. That answers the first one; the second one was around scalability. With TensorFlow, Google has a package called TensorFlow Serving, which is essentially designed for serving up models built with TensorFlow, and that’s a container-based approach; you’ll see containers for a lot of these things. If you’re dealing with Microsoft CNTK, the Cognitive Toolkit (Microsoft is known for rebranding things quite often), and you’ve built your own model with it, same thing: you can use containers there as well. You can also use Azure web roles to serve those up and scale out in Azure if you’re so inclined. And then with Azure ML, the scale is pretty much built for you by Microsoft, but your algorithms and your training sets are a bit more limited. That’s changing in future versions, but you can get started very quickly with Azure Machine Learning. And for things that already exist, there are services out there from IBM, Microsoft, and Amazon for standard cognitive tasks like speech synthesis. If you can use those and don’t have to build a model, why not? Your time to market can be months, as opposed to a much longer cycle for a custom agent. Great question.
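
As a quick illustration of serving a model this way, here is a minimal sketch of calling a TensorFlow Serving container over its REST API. The host, model name, and feature values are hypothetical; only the URL shape and JSON body follow TensorFlow Serving’s documented interface.

```python
# A minimal sketch of scoring against a TensorFlow Serving container's
# REST API. The host, model name, and features are hypothetical.
import requests

features = [[0.12, 0.48, 270.0, 0.003]]   # illustrative feature vector

resp = requests.post(
    "http://tf-serving.example.com:8501/v1/models/turbine_maintenance:predict",
    json={"instances": features},
)
print(resp.json()["predictions"])
```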

[00:10:42] So the question is, how do we have more knowledge than GE? I would certainly say that GE has more knowledge on these individual devices, but it’s orchestrating the system that we’re seeking to do right here. It’s not just one device that we need to know a lot about; we need to know a lot about how all these influences on that system may result in a predictive event. And it’s not just one turbine from GE that we’re dealing with; it’s other vendors that provide the distribution network, and then we’ve got all of the endpoint systems. If we’re in retail or financial services, maybe, as opposed to houses, we’re taking the data the other way: paying with my Apple phone, or whatever point of sale, back up through. So I wouldn’t say it’s about each individual device; each one of those vendors is going to be the expert there. What we’re advocating is bringing all of this together in an orchestrated fashion. The value we might bring to the table: we bring IP, but I wouldn’t say we have a product you could get in a box. We bring IP to the table, and we also have a good-sized team of data scientists on staff who have worked with these various technologies. As Christos mentioned, we’ve got a framework for how we deliver software and team members who are experts in it. We do want to align to our clients, especially when we’re dealing with enterprise.

[00:12:14] There are certain compliance requirements and behaviors they follow, and we want to be respectful of that. So we’ll take the IP we have, bring it to the table, and essentially customize and modify it based on the specific needs of the client. The question was whether we have expertise in things like Jupyter. For everyone: what is Jupyter? It’s essentially a notebook; think of it like Word, except you can embed code within the document. This is a great segue, because I’m going to provide you with the Jupyter notebook for the data science experiments I’ve got here as a takeaway. It’ll train a couple of different models so you can kick the tires on it. I’m not going to unpack it here, because it would probably get a little boring for some of the non-tech folks, but you’ll have it as a download that you can run, and it is a Jupyter notebook. It’ll take you literally 15 minutes to set up and run. I tested it on my wife and she was able to get it running. She’s a teacher, so I feel that was a pretty good litmus test right there. Jupyter and tools like it get back to the question of what tools data scientists use to figure out these models. Our data scientists are free to use, for the most part, what they want, but we do want to align to what our clients have invested in, because we want their data scientists and their teams to understand it too.

[00:13:39] But once it goes through data science, we still have to operationalize this at scale, and that’s where data science and engineering must go hand in hand. The data scientists are going to be great at creating that model within a scope, but we need to scale it up and out, and that’s where the engineering team must also be involved. We have a role we call data engineers. These are developers with probably a bit more math background than a traditional developer, and they are essentially the arbiters between our data science team and the traditional build and development team. And of course they’re working with an overall team with DevOps and everything else that rolls into it. So we talked a bit about our sensors right here. Really the takeaway is the giant data volume. I’ll move quickly through some of these just to make sure we’re on track. These are daily data volumes that we might see. How the heck did I get to that, or am I making it up? These are just rough assumptions on how we might get there. These IoT devices generate very small amounts of data, but we’re getting so much of it that the volume adds up really quickly. In this instance, we’re looking at about 2 gigabytes of data a day, just with the somewhat arbitrary numbers I plugged in. The takeaway is that if you’re going to go the IoT path, you can get some amazing insights out of it, but be prepared for that data volume.
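
As a back-of-envelope check on that figure, here is a short Python calculation. The presentation’s exact per-message size, rate, and device count aren’t given, so these inputs are illustrative stand-ins that happen to land near 2 GB a day.

```python
# Back-of-envelope daily data volume. All three inputs are assumptions,
# not the webinar's actual slide numbers.
MESSAGE_BYTES = 200          # one small JSON telemetry message
MESSAGES_PER_SECOND = 1      # per device
DEVICES = 120

bytes_per_day = MESSAGE_BYTES * MESSAGES_PER_SECOND * 86_400 * DEVICES
print(f"{bytes_per_day / 1e9:.2f} GB/day")   # ~2.07 GB/day
```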

[00:15:01] There’s a lot of data you have to deal with, and you want to make sure you’ve got an appropriate pipeline. We could use some big data tools to MapReduce this down into a much smaller, perhaps more consumable, data set. But the other option we have, and this is where Art is going to really unpack it for us, is that we can do a lot of this work at the edge, on the edge device, as opposed to pushing it all up into the cloud. Let’s do it at the edge; that’s where we can distribute that workload across all these various systems and actually reduce our overall infrastructure. That’s pretty much a win-win. So that’s a bit about the data volumes we have right there. All right, now I’m going to jump into machine learning specifically and give a few definitions around it. I like to use this slide; it’s an old slide, but I love it, and I give these guys credit for it. There are a lot of different ways people describe machine learning, artificial intelligence, and deep learning, and I don’t know that they’re all right or wrong, but for the sake of our dialogue, this is essentially what I’m using, and I’m always changing my thoughts on it. The bottom line is, when people think of artificial intelligence, that’s the superset of all of this; they think of general AI. That is the stuff you see in science fiction, with robots and people talking to things, and it’s like human intelligence. We’re nowhere near there yet.

[00:16:25] We’re getting narrower and narrower. What we’re speaking about today is machine learning, and that’s all algorithmic approaches: it’s math, it’s statistics, and some calculus. It’s equations we’re using to solve these problems. Deep learning is a subset of that; that’s where we’re using artificial neural networks. The bottom line is that a lot of the inputs we pass into these are going to be the same, and we’ll see them in a second as I unpack this. The algorithms behave differently, but their job is to suss out all these inputs into a valuable output, and they do it in very different ways. It’s the computational power we need to solve this that has really made deep learning available to us today, and that’s where TensorFlow and CNTK come in, and where we can do things like look at a picture and determine whether it’s a cat or a dog with AI. We never could do that before. The algorithms and the thinking around them are probably 15 years old in some areas; we just never had the computational power to do it. We do now. The other thing is the reason: why would we do this, why would we go to all this work? The main takeaway I have is generalization; I think that’s one of the biggest things. Not everything needs ML. If you’ve got something you can solve with rules, solve it with rules. If you’re trying to detect a problem in your accounting system and you see someone opens an account and they’re deceased, well, that’s a rule.

[00:17:47] You don’t need ML for that; you know that a deceased person shouldn’t open an account. That was an arbitrary example, probably not the best one, but generalization is where I think the real value comes in. In traditional software, every input has to be programmed for. So if a new input comes in, a new decision to be made, someone has to engineer that decision. When we have a few decisions, that’s not too bad; when we have many decisions, hundreds or hundreds of thousands of decisions, that’s not something that’s tractable by a person. That’s where these algorithms are great: we can pass in all the inputs that led to a particular answer and let the math figure it out; the math will say these inputs seemed to influence this decision at this time. A big part of data science is that generalization, so that when brand-new things come up that we never programmed for, we can still get valuable business information out of the machine learning. I’d say that’s one of the biggest things it brings to the table. And then, of course, there are problems we just can’t solve with traditional software, like speech recognition and computer vision, because they’ve taken generalization such a far step that they’re sussing out things that are just too far beyond any person to program explicitly, so to speak. Generalization I’m going to unpack a little bit more.

[00:19:19] Everyone knows these food delivery apps, right? They’re great. You say, I want something from this restaurant, but I’m at a hotel; I punch a few buttons, and someone delivers the food to me. So the example we’ve got here is: imagine that we want to add machine learning to the app to predict what I want. We’ll bring it back down to energy in a second with a much more sophisticated example, but this is essentially how we add machine learning and data science into traditional software. We’re doing regular Scrum development; that’s great. Then the CEO says, hey, listen, I want to do predictive ordering for our customers. Wonderful. Now there’s another methodology, CRISP-DM, which is what we use: a bit of a research methodology. But for you guys who are Scrum people, you won’t feel any pain or know any difference, because with that data engineer I talked about, that’s the interface layer, and the tooling is exactly the same. It’s more about how the data scientists operate, and it gives the business visibility into what we’re doing. For a non-ML approach, I might make a list: maybe we attack this with what was ordered most recently, what was recommended, the price, the reviews. These are a few things I’ve arbitrarily picked. If we have a few of them, I think we can engineer that; we can program it with no ML involved whatsoever. When we start getting more items, that’s when this gets really tricky.

[00:20:43] That’s when ML comes into play, and that’s when we can use generalization to solve this for us and get better and better. And if we’re dealing with edge devices, you can start learning how individuals behave, what they’re eating and requesting, and now it’s tailored, if you will, to what you have ordered in the past. So how do we program this? Here’s what we’ve got. Everything that we pass in here is what we call features; those are the inputs, and they’re how we train these models, or these algorithms I should say. Forget about which algorithm it is, because we can pick different ones. In this case we’re looking at a multiclass prediction: was the satisfaction high, low, or medium? I’m going to take some data that I’ve got: the day of the week, the time of day, whether it was ordered recently, the distance, whether they have a vegan option, who knows; we can expand this out to as many things as you can think of. And I’m going to say, here’s the answer, the label. I may take some third-party data too: Yelp, weather, who knows. Once I have all of this, I can pass it into many different algorithms, and that’s what the data scientist is going to do. In this case we’ve just got four simple algorithms we may be evaluating; in truth we’ll use tools to evaluate hundreds or thousands of potential algorithms and see how they perform. What you’ll end up doing is saying, all right, which of these algorithms performed better than all the others? We’ll take that and generate a model from it.

[00:22:14] Now that gets deployed up into the cloud or on-prem, wherever you want to operationalize it. Question? Yup. So with all of these models right here, with the different types of models, these are multiclass predictions, so what we’ll look at is something called the confusion matrix. Essentially this will say, here was the predicted class, and here is the actual class. Let me back that up. We’ll take this data right here and split it out with a partitioning scheme of something like 80/20. With 80 percent of the data we’re going to train our models, and we’re going to hold back the other 20 percent so that the models are never trained on it; they’ve never seen it, but we know the right answer. Then we’ll run the data through all these algorithms to generate the models, and then we’ll throw in the data the models have never seen before. How did each model do against it? Question. The question was, if we know the right answer, doesn’t that introduce bias? Well, this is training data, so it’s labeled data that has all the right answers; we know there is a right one. Bias is a great question, though: is that data influenced in some way that’s subjective and not quantified? That’s really the job of our data scientist, to look at that data and determine if there’s bias in it, but we hope there’s not. And that’s a great point: if there is, we need to take a look at that.

[00:23:42] That’s the job, I’d say, of the data scientist, using some pragmatic reasoning, to determine. We’ve got a corpus of data that we have the answers to; this is our historical data that we’re training on. We’re training on past experiences. But I’m not going to train the model on all the past experiences; I’m going to hold back a set that none of the models have ever seen, and I want to see how they do on that data. If I ask them how they perform on data they’ve already seen, well, they should do well, but that’s not really going to tell me the model is adding business value or that it generalizes well to new things. The outputs here are all going to be classifications; this is all multiclass. We could use a neural network, or we could use logistic regression, to do all of that; the inputs into the model are going to be the same, but how we work with those algorithms is very different: the hyperparameters, all of those things, and the data volumes. If I’ve got small data volumes, a neural network might not be the right answer, because it won’t have enough information to suss out the appropriate weights, so you’d want to use something else. That’s the job of data science, though; I am not a data scientist. But we have gotten to a point where we have a model in this hypothetical example. Now it’s time to deploy it.
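
Pulling together the features, the 80/20 hold-back, and the confusion matrix described above, here is a minimal, self-contained scikit-learn sketch. The columns, the synthetic data, and the label rule are all invented for illustration; only the split-train-evaluate pattern follows the talk.

```python
# A minimal sketch of the train/hold-back/evaluate flow described in the
# talk. All feature names, data, and the label rule are synthetic stand-ins.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "day_of_week":      rng.integers(0, 7, n),
    "hour_of_day":      rng.integers(0, 24, n),
    "distance_km":      rng.uniform(0.2, 8.0, n),
    "has_vegan_option": rng.integers(0, 2, n),
})
# Hypothetical label (the known answer): high / medium / low satisfaction.
y = np.where(X["distance_km"] < 2, "high",
             np.where(X["hour_of_day"] > 18, "medium", "low"))

# Hold back 20 percent that the models never see during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Evaluate a couple of candidate algorithms on the held-back data.
for model in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
    model.fit(X_train, y_train)
    print(type(model).__name__)
    print(confusion_matrix(y_test, model.predict(X_test)))  # predicted vs. actual
```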

[00:25:01] This is where we’ll push it up into whatever deployment platform we want to operationalize it on. That could be Azure ML, it could be CNTK, it could be TensorFlow with TensorFlow Serving and containers; it makes no difference. The bottom line is, once this is up and running in the field, when someone wants to predict where they want to eat, notice that we don’t know the right answer. We just pass in the same kind of input data we trained with; there is no answer we’re giving it. It will come back and say, all right, based on this observation, it’s “high” that you should eat at this particular location. So we’re taking all these locations and asking, which one of these should I eat at? And it’s saying, let’s see: this observation is 95, but it’s got a “low,” so we’ll ignore it. We’ll go to our first “high,” and it’s number one, which would be this particular spot, wherever that is. It’s arbitrary, but that’s how we would score it and tell the user where they should eat. Now, when this model is in the cloud, it’s static; it’s not being updated. We need DevOps to take all that information from the field, continue to evaluate the model, and push it back up. That’s what we have to do there, and that’s where Eric will speak to some of it.
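
Here is a minimal sketch of that scoring step: the deployed model receives the same feature columns it was trained on, with no label attached, and returns a predicted class per observation. The model, columns, and data below are stand-ins trained inline purely for illustration.

```python
# A minimal sketch of scoring new, unlabeled observations. The model,
# columns, and data are illustrative stand-ins, trained inline.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
train = pd.DataFrame({"distance_km": rng.uniform(0.2, 8.0, 300),
                      "hour_of_day": rng.integers(0, 24, 300)})
labels = np.where(train["distance_km"] < 2, "high",
                  np.where(train["hour_of_day"] > 18, "medium", "low"))
model = RandomForestClassifier().fit(train, labels)

# New observations from the field: same inputs as training, no answer given.
observations = pd.DataFrame({"distance_km": [0.9, 5.5, 1.4],
                             "hour_of_day": [12, 20, 19]})
for i, cls in enumerate(model.predict(observations)):
    print(f"observation {i}: predicted satisfaction = {cls}")
```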

[00:26:18] We’ve taken this whole flow we just went through, and I’ve now got it in the context of turbine maintenance events. This is the takeaway data science experiment for you guys, which looks at some data that’s been anonymized, that you can run through the Jupyter notebook and see how it all comes out. Essentially what we’re looking at here is IoT data coming in, and whether we had a particular failure, and we’re doing the exact same thing as before. But here we’ve got some more work to do: the data is not normalized right, and the data is not distributed properly. The data scientist is going to make sure the data set is structured appropriately for those algorithms to do their work. So I did simplify things; it’s a lot more work to make sure we have appropriate distributions, and you’ll see that inside the little experiment we provide. We get our models out of this; in this case I have two examples, we pick the one that has the better ROC curve, and then we operationalize just as we did before. And essentially that’s it; that’s data science right there. All right, we’ll move into the IoT deep dive, and I’ll be around. Thank you. Okay. So I’m going to try, in 15 minutes or less, to get you up to speed on IoT: what it is and what it’s about. What is IoT? It’s those devices. It’s your refrigerator telling you the milk is spoiled, so that deceased person won’t try to open up an account, because it looked and saw that he had an IoT device; he would not have gone and opened up an account.
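
Before the IoT deep dive, here is a minimal sketch of the ROC-curve comparison just mentioned: two candidate models, and we keep the one with the better area under the curve. The failure labels and sensor features below are synthetic stand-ins, not the webinar’s turbine data set.

```python
# A minimal sketch of picking between two candidate models by ROC AUC.
# The failure labels and sensor features are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 4))                    # stand-in sensor features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 1000) > 1).astype(int)  # failure?

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2)
for model in (LogisticRegression(), GradientBoostingClassifier()):
    probs = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{type(model).__name__}: AUC = {roc_auc_score(y_te, probs):.3f}")
# Operationalize whichever model shows the better curve.
```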

[00:28:12] Exactly. When you break it down, the Internet of Things is really the data and the things. The things are the devices, and the devices talk to each other using languages. I speak to my mother in English, and sometimes I speak to her in Spanish; those are languages everybody is familiar with. HTTP is one; MQTT is another, and what those do is let things go back and forth. HTTP is one-way: I listen to what Mom says, but I don’t do anything about it. With MQTT, I’m moving data back and forth; I’m listening and I’m sending data back. So if I’ve got my turbine, I’m listening to data coming from the turbine and I’m sending data back. Then, how do we transport that data? How do we get things going? We use Wi-Fi. I’m actually going to pass around a few devices. This one is a temperature, pressure, and humidity sensor; I’ve got four of them, and using the HTTP protocol I’m sending data to the cloud and starting to process it. Here’s a third one, a Microsoft device that I’m going to send data back and forth with. That one is just a battery. And the fourth device is a distance sensor; it’ll tell you the centimeters. So how do these things talk? How does this all work? How does this flow? We start with the devices.
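
To make the HTTP-versus-MQTT distinction concrete before following the data further, here is a minimal two-way messaging sketch using the paho-mqtt client (1.x API). The broker host, topic names, threshold, and shutdown command are all hypothetical.

```python
# A minimal two-way messaging sketch with paho-mqtt (1.x API). Unlike a
# one-way HTTP post, the same connection both listens and talks back.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    reading = float(msg.payload.decode())
    print(f"{msg.topic}: {reading}")
    if reading > 270.0:                      # hypothetical turbine limit
        # Talk back down to the device over the same broker.
        client.publish("turbine/1/commands", "SHUTDOWN")

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)   # hypothetical broker
client.subscribe("turbine/1/temperature")
client.loop_forever()                        # listen and respond indefinitely
```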

[00:30:18] We’ve also got a couple of temperature and pressure sensors. They all go into the hub, and the hub is nothing more than a huge funnel that I pull the data into. Then I’m going to take that data and use streaming analytics to consolidate it, to group it. I don’t need to know the temperature every second from that device; I need to know it every minute, or every hour, and I need to know vibration every 20 minutes, so I can use analytics to group the data. From there I can send it to a data lake, I can send it to Taylor so he can start doing some work on it, or I can send it to Power BI and throw some dashboards out there. These devices are just basic temperature sensors. So let’s bring this down into an energy discussion. We have a turbine, we’re getting temperature data on that turbine, and we know that if the temperature goes above 270 degrees Fahrenheit we have to shut it down, because that 50-million-dollar turbine will turn into a pack of bricks. How do we do that? Well, we use the IoT device, we send the data up to a hub that funnels it, we send it to stream analytics and consolidate it, we send it to functions and do something with it, and then we talk back down to the device. That’s great, but let’s say we lost Wi-Fi, or our network is bogged down. Now, instead of that transaction taking a few milliseconds, it took three seconds, and in those three seconds our 50-million-dollar turbine turned into a pack of bricks.
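
Going back to that grouping step for a moment: here is a small pandas sketch of downsampling chatty per-second readings into per-minute averages. In Azure this would typically be a Stream Analytics window; the data below is synthetic.

```python
# A minimal sketch of grouping per-second sensor readings into per-minute
# averages. Synthetic data; in Azure this is typically a streaming window.
import numpy as np
import pandas as pd

seconds = pd.date_range("2018-01-01", periods=3600, freq="s")   # one hour
readings = pd.DataFrame(
    {"temperature_f": 70 + np.random.default_rng(1).normal(0, 0.5, len(seconds))},
    index=seconds,
)

per_minute = readings.resample("1min").mean()   # 3,600 rows down to 60
print(per_minute.head())
```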

[00:32:01] So how do we fix that three-second problem? Well, that’s what IoT Edge is. It takes all that learning, that hub, that function, puts it into a container, and puts it, as they say, on the edge: not over the cliff, but right on the edge, right there with the devices. So I’ll have all my devices connected to a central hub, and that hub will have my filters in it; it will have streaming analytics, it will have Azure Functions, and it’ll determine that, hey, we need to shut that machine down, so I’m going to send a command down to it: shut it down. Then when I get Wi-Fi, I’ll send that data back up to Azure; and if I have Wi-Fi, I’ll still act quickly on the device and then send the data up. So now we’ve gone from a one- or two-second time lapse to 20 milliseconds, or wherever that number lands; it just greatly reduces the time. Now, what I want to actually show, if I can do this: one of those sensors has T1 under the name, and it should be telling you that it’s 73 degrees. If you blow on the T1 sensor, you should see the humidity go up and the temperature go up; put your finger on it and you’ll see the temperature go up. Yeah, the blood alcohol level will go up as well. Seventy-four point five. What that is doing is taking data from that device, sending it through the hub, sending it to analytics, and now I’m reading it. A very simple example, but extrapolate that out to 500 machines.
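
The edge-side rule just described might look like this minimal sketch: decide locally in milliseconds, and buffer telemetry for the cloud until connectivity returns. The 270-degree threshold comes from the turbine example above; the device object and buffering pieces are illustrative, not a specific Azure IoT Edge API.

```python
# A minimal sketch of an edge-side rule: act locally, sync to the cloud later.
# The Device class and buffer are illustrative, not an Azure IoT Edge API.
SHUTDOWN_TEMP_F = 270.0
cloud_buffer = []   # telemetry held until Wi-Fi returns

class Device:
    def send_command(self, command):
        print(f"-> device: {command}")

def on_reading(device, temp_f):
    # Local decision at the edge: milliseconds, no round trip to the cloud.
    if temp_f > SHUTDOWN_TEMP_F:
        device.send_command("SHUTDOWN")
    cloud_buffer.append(temp_f)   # flush upstream when connectivity allows

on_reading(Device(), 275.3)   # would trigger the shutdown command
```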

[00:33:54] Now I’m taking that sensor data and sending it to Taylor to do some machine learning on it, some predictive maintenance. I’m finding out that, okay, as the temperature rises, when it gets to a certain point, I need to go replace those parts before that machine turns into a pack of bricks. Yes. Why is the gateway secure? There are secure keys that you send back and forth, so the devices can’t talk to each other unless they have the right security. And I don’t want to actually show the security key, because this is going to be on Facebook and then everybody will have my device. One of the devices I’ve got, I’m going to send data back and forth to it. So if you look at the device right now, the MXChip, it should have said AgileThought on it; turn it over, and it should have said AgileThought again. And then I can make it say hello as well. What we’ve done there is use MQTT to send data back and forth. For instance, there’s a device company called Atlas Scientific that makes sensors, and I’ve been talking to them about putting pH sensors in the Hudson River and figuring out the pH of the Hudson River. We all know it’s acid, but they actually want to try to do it.

[00:35:25] So what we’re going to do with that is use LoRa, which is radio frequencies, to a hub; that hub will actually have Wi-Fi, the hub will take care of the connection issues, and between the device and the hub it will just be radio. So here is the IoT suite, which does this at scale. Now I’ve got 25 different machines, devices, hooked up, going through a hub, going through analytics, and showing up in a dashboard. I’ve got temperature data, I’ve got humidity, and I’ve got alarms set so that when the temperature hits a certain value, I can shut it down. All of this uses the IoT suite and the things we’ve talked about. We can use provisioning, and because we’re a gold partner, we can help get that provisioning done and do this at scale. That kind of leads into Eric talking about: okay, we’ve got the data, we’ve got the devices, we’re moving data back and forth. How do we deploy this? How do we get it out there? How do we update the firmware on these devices? That’s where Eric comes in, so I will push it over to Eric. That’s some awesome stuff. So we talked about some great machine learning and IoT, some great technology; how do we get that out there and operationalize it? DevOps. I’m going to quickly go through DevOps and why we would operationalize these types of things; we wouldn’t use DevOps just for that. DevOps is really operationalizing software delivery, and what we’re going to talk about real quickly is using continuous delivery for DevOps. That’s going to get your value to your customer quickly, consistently, and reliably.

[00:37:26] And we’re going to be able to take the software that’s running on the IoT devices and push it out to, say, a larger region or a new territory. As long as we can scale that and get those applications out quickly, with lots of automated testing, we’ve got a good process. So, tying this all together, we want to have a team, or a team member, who can help you deploy all these applications out to your different environments. Real quickly: how many DevOps engineers does it take to screw in a light bulb? Zero, they automated the process. Come on, come on. They wrote Puppet scripts for that. Are you kidding me? That’s right. Anyway, I had another joke about the deceased person, but I was told not to tell it, so sorry about that. Moving on. With DevOps we’re enabling consistent, reliable, high-quality deployments. Your typical process has had engineering and ops in separate areas. Obviously, with Agile we’ve moved everything together, so when we talk about DevOps, we’re talking about an agile team that has DevOps as part of it; DevOps is a cultural thing, part of that agile movement. We’re doing all these different tasks within short feedback cycles: we’re planning, prioritizing, deploying, building, operating, and releasing, all within a sprint, really. If you’re a Scrum team, that’s the essence of being able to deploy releasable software at the end of a sprint. Those are the kinds of feedback cycles we’re talking about.

[00:39:34] And we’re also talking about feedback coming from automation, monitoring, and telemetry, those kinds of things. This is a nice graphic showing how we prioritize things and how we get them out to production with automation and tooling. Some key concepts there. Continuous delivery: that concept really originated with Jez Humble. If you haven’t read the book Continuous Delivery, it’s a great book, and there’s also a great talk from Jez from the Agile Alliance in 2017; it’s out on the web, and I highly recommend watching it. The concepts here are, first, build automation; you need a team member who can help out with build automation, and that’s part of DevOps. Then continuous testing: we’re talking about continuous integration and continuous testing. If you’re not automating your testing, that’s not going to give you reliable, high-quality deployments, so testing has to be integrated into your delivery. And of course your deployments need to be automated, so when we’re talking about deploying new software upgrades for your IoT devices out to the cloud, that has to be automated, and that’s done through release automation; we use tools like VSTS (Release Manager is one tool), Puppet, and Ansible, so there’s a lot of tooling to help you automate that. Finally, intelligent environments: we’re talking about configuration automation and infrastructure as code. You certainly want your DevOps role helping you version your infrastructure the same way you version your code; those can be intelligently hooked together, so you can roll back to the right versions of both the environments and the code.
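
As one concrete example of a continuous-testing gate, here is a minimal pytest-style smoke test a release pipeline might run right after deploying a scoring endpoint. The URL, payload fields, and expected classes are hypothetical.

```python
# A minimal post-deployment smoke test a pipeline could run automatically.
# Endpoint URL, payload fields, and expected classes are hypothetical.
import requests

def test_scoring_endpoint_is_healthy():
    resp = requests.post(
        "https://scoring.example.com/predict",           # hypothetical endpoint
        json={"temperature_f": 250.0, "vibration": 0.02},
        timeout=5,
    )
    assert resp.status_code == 200
    assert resp.json()["prediction"] in {"ok", "maintenance_due"}
```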

[00:41:37] Data seeding is something you need too, to make sure your data works with your automation. And finally, the feedback loop. When we talk about monitoring, that’s more about your environments: is our software up? Nowadays we really want to know about errors before our customers are aware of them, so we want to automate that and make sure we’re using tools that help with it; we use things like New Relic and Application Insights. Then you have telemetry, which gives you information about what your customers are actually using, so there’s a lot of actionable data around that. So that’s a real quick walkthrough of how DevOps brings this all together. Thanks a lot. Thank you, Eric.

Watch the Full Session
