
Wednesday, 23 October 2024

Blazor Tips and Tricks

>> On today's Visual Studio Toolbox, Ed Charbeneau is going to show us how to set our Blazors to run. [MUSIC] >> Hi, welcome to Visual Studio Toolbox. I'm your host, Robert Green, and joining me today is Ed Charbeneau. Hey, Ed. >> Thanks for having me, Robert. >> Welcome back. >> It's great to be back. >> Ed's a Developer Evangelist with Progress Software, and a regular guest on the show, but it's been about a year since you were last on, right? >> It has been. I've been working hard on Blazor for that entire time. >> So a year ago, you and Sam Basu did an episode on Blazor, I believe it was one of our long ones. >> It was. >> Well-received. >> An hour show cut into bite-size pieces. >> Back when we used to do hour-long shows. So it was an introduction to Blazor. Today, we're going to take a look a year later, and you're going to catch us up to speed on some of the most frequently asked questions that you hear, some tips and tricks that people doing Blazor would need to know. >> Yeah. >> But before we dive into that, let's do a brief refresher of what Blazor is. >> Yeah. So like you said, we took a look at Blazor a year ago; it was a very early beta back then. It's now rolled into the official ASP.NET Core pipeline. So Blazor is a brand new front-end development framework, a SPA framework that uses C#, full stack. So we can actually write client-side application code with C# and not have to rely on JavaScript like we normally do with jQuery, React, or Angular. We now have Blazor to take the place of all of that, and we have .NET end-to-end. >> Cool. >> So we saw it early last year, a lot has happened since then, and every time I present on Blazor, I get asked these questions that are really great questions, but just at a glance, you don't quite catch them when you're talking about Blazor, so I thought I'd bring some of those questions up with you today. >> Yeah, cool. Let's do it. >> All right. So I'll hit the most popular question I get, and that is, does identity work with Blazor? Now, people usually frame that question as, does authentication work with Blazor? Authentication is actually a browser behavior, so that's usually happening on the client, but what they usually mean is, does Microsoft Identity work with Blazor? And now it does. So with server-side Blazor shipping in ASP.NET Core 3.0, Microsoft Identity will be available with the application. If you click "File New Project", just like you do with an ASP.NET MVC application or Razor Pages application, you get the dialog in File New Project where you can click to change your authentication type. You get that same experience with Blazor. What you get out of the box is actually what we have running here. Just like in an MVC application, you have your login and your register links at the top of the page. So we can click "Log In" and we get prompted with the Identity login experience, and I don't have an account, so we can click "Register" and create a brand new account. Just like with MVC or Razor Pages, we can create an account here, and what I want to show, if I can get a correct password through this system here, I don't remember what the requirements are. Now, we get an error screen, or what looks like an error screen, but it's actually something very helpful, and MVC does this as well. It says a database operation failed while processing the request; that's because we haven't created a database yet. >> Okay.
>> This is a brand new File New Project experience, but what's nice is we get a button on the error page that says "Apply Migrations". We click it, and it will actually create the database for us, the Microsoft Identity database. It will apply the migration to our database, create all the tables we need, and then it will bring us back into the application. It says try refreshing the page, so we can come back, hit "Continue", and now we're actually logged in. It says, "Hello Ed Charbeneau@progress.com." So we have identity working, it's backed by SQL and the .NET Framework, and this comes out of the box with a few button clicks during the File New Project experience. >> So this is a generic identity provider. Can you hook it up to Active Directory? >> You can hook it up. >> Or Azure Active Directory? >> You can hook it up to Azure, and that steps you through as well in the File New Project experience. >> Okay. >> One other notable thing that's really interesting about the way identity works in Blazor right now is another question that I get asked a lot, and that is, do Blazor components work with MVC, and can you mix MVC, Razor Pages, and Blazor in the same project? So if we go under "Areas" and "Identity", and we look at our "Pages" in here, our logout or login pages, these are actually Razor Pages. So we have Razor Pages that exist in our Blazor application, and they're able to work side-by-side. Now, another thing people ask is, can I use Razor components, or Blazor, the component framework, inside of an MVC or Razor Pages application? That ties into the same idea. If we go into our _Host.cshtml file, so .cshtml is a Razor Pages or Razor MVC view, inside of here is the bootstrapper for a server-side Blazor project, but this is a Razor Page. If we look right here, this is where the application starts up, and we have inside of our app tag an HTML helper that says, "Html.RenderComponentAsync." This RenderComponentAsync method is something that we can call to bring Razor or Blazor components into MVC and Razor Pages views. >> Cool. >> So that's another frequently asked question. The two things tie together, in my opinion; people talk about using all of ASP.NET in a Blazor application, and that is possible. >> Cool. >> So the next thing that I wanted to show comes up because people see "Blazor component". You can refer to these as Razor components. Razor components are the component model of Blazor. So Blazor components, Razor components, interchangeable. >> Okay. >> But when they look at one of these, it's all written in C#, and your markup and your code are all in the same file. So one of the questions I get right away is, can I separate those two things, because I don't want my markup and my C# code intermingled? So I've got an example here where I've already started taking these two things and decoupling them. So we'll finish this up here. I have a fetch data component, and I have the code section down here. What I want to do is remove this code section. So the first thing I want to do is create a C# file, and I want to name this file the same as my component, but with the extension .razor.cs. >> Okay. >> So what that does is it matches up with our component filename. So our component is FetchData.razor, our code behind is going to be FetchData.razor.cs, and that allows them to nest in Visual Studio. So when we name them like that, that convention allows that nesting to happen.
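For reference, the Html.RenderComponentAsync call described above lives in the _Host.cshtml Razor Page that bootstraps a server-side Blazor app. Below is a minimal sketch; the exact markup in the default template differs (layout, error UI, and in later previews a RenderMode argument), and the namespace is a placeholder.

```cshtml
@* _Host.cshtml -- a Razor Page hosting the Blazor <App> component. Sketch only. *@
@page "/"
@namespace BlazorServerApp.Pages   @* hypothetical project namespace *@

<!DOCTYPE html>
<html>
<head>
    <title>Server-side Blazor host</title>
    <base href="~/" />
</head>
<body>
    <app>
        @* The HTML helper that renders a Razor/Blazor component inside an MVC or Razor Pages view. *@
        @(await Html.RenderComponentAsync<App>())
    </app>

    <script src="_framework/blazor.server.js"></script>
</body>
</html>
```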
So we get this nice tree effect where we can collapse that code behind down if we want to. So I'm going to click on "FetchData.razor.cs", and this is the same code that is inside of that code block. I went ahead and moved it out to keep this nice and short. >> Okay. >> So in FetchData.razor, we can now take out this code block because we have that code behind file. >> Okay. >> So to marry those two things back up together, we simply come back up to the top here. We add an inherits directive, and then we point it to that code behind file. So this is called FetchDataBase. The component name is FetchData, so this is the base of that component. So we'll save that. I can also remove this dependency injection from it because that now happens in the code behind. So now I have tied those two things together, all of our little red squiggles have gone away, and we've separated those two pieces. So our markup is in one file, and the logic is in the other. >> So you named that FetchDataBase? >> Yeah. So the filename itself is FetchData.razor.cs. >> Okay. >> To provide that nesting. But we can't call the class FetchData, because it will collide with the component name in that namespace. >> Right. >> So we can't have two FetchData classes. >> Okay. >> So we've got to give it something else. >> Is it a partial class, or is it an entirely separate class? >> Excellent question, or observation. Partial classes aren't supported yet. >> Okay. >> But it is something that may come in the next release of the preview. >> Okay. So FetchDataBase was just a naming convention that you used. >> It's a naming convention that I use, and it's pretty common in the community right now, to name that as the base of the component itself. >> Okay. >> Partials may be coming soon. >> It's okay. >> So this may change a little bit over time, but this is one way that you can separate those out today. Also, we need to inherit from ComponentBase in that class. That gives us all the lifecycle methods and everything that is part of the component. >> Right. >> So another notable piece is the Inject attribute here. We're injecting through property injection when we do the code behind, versus when we're on the markup side, we use an inject directive instead. >> Okay. >> So that's one piece that people look for when they start refactoring this out; they don't know how to inject that properly. So it's the Inject attribute on the back end side, and the inject directive on the front end side. >> Okay. >> So that's one of the pieces that people ask a lot about. >> Then you would expect, ultimately, that that would just be an available refactoring inside the Visual Studio editor so you don't have to do it yourself. That might be a nice feature request. >> It would be a good feature request. There are a lot of tooling changes that will come with all of this new Blazor stuff. >> Right. >> I've talked with Daniel Roth and his team. They're very optimistic about what they can do with the Razor engine itself. There's a lot of tooling that can come in the future. >> Cool. All right. >> All right. So we can look at some more frequently asked questions. Another one: we get this application out of the box. We need to rerun that, we've done some code changes here, so we'll pick this back up and run it. We have this counter component that comes out of the box. The count starts at zero and goes up if we click on the counter. This is all the File New Project experience, so anybody that has run Blazor has probably seen this counter component before.
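To recap the code-behind pattern just described, here is a minimal sketch assuming the default server-side Blazor template's FetchData page and WeatherForecastService; attribute and lifecycle names moved around a bit during the 3.0 previews, so treat the details as illustrative.

```csharp
// FetchData.razor.cs -- the code-behind, named FetchDataBase by convention
// because a class named FetchData would collide with the generated component class.
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components;

public class FetchDataBase : ComponentBase   // ComponentBase supplies the lifecycle methods
{
    // Property injection via [Inject] replaces the @inject directive used in markup.
    [Inject]
    protected WeatherForecastService ForecastService { get; set; }

    protected WeatherForecast[] forecasts;

    protected override async Task OnInitializedAsync()
    {
        forecasts = await ForecastService.GetForecastAsync(DateTime.Now);
    }
}
```

The markup file then ties back to it with a single directive near the top of FetchData.razor, @inherits FetchDataBase, and the @inject line can be removed.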
If we go back home, and then we visit the counter page again, our state is gone. >> Right. >> So it doesn't share state. If I created more counters, they wouldn't share that count. So we can have multiple counters on the page, and they would all have their own independent counts. So how do we manage state if sharing it is something that we want to do? There are actually multiple ways to handle state in Blazor; I'm going to show you two of them. There are some components that are designed for this, so we'll look at a component approach, and then we'll look at using dependency injection to do some application state. So one thing I've added to this project, under my Shared folder, is a component that I wrote, a very simple one, called CounterStateProvider. This is a provider component that I can plug into my application, and anything that is a child of it will receive the values that I assign to this component. It uses a special Blazor component, something built into the framework, called the CascadingValue component. Inside of that CascadingValue component, I can provide child content. That's essentially what this component is for anyway. On Value, I'm going to keep this as simple as possible, and I'm just going to assign this, the component itself. So this component will be assigned to the cascading value, and any child component can subscribe to this value now. So what we'll do is we'll take this, and we'll put our current count for a counter component in that object. To implement this, I'll go into my main layout, and now I can surround any piece of HTML in my application that I want to be able to receive the data. So we'll go right around the main component of our application, and we'll call in our CounterStateProvider. We'll wrap that around the body of the application. >> So this is now at the application level? >> This is at the root of our pages. >> Okay. >> So any of the pages that we load will be inside of the body. >> Okay. Got it. >> So now anything that's in the body, which includes our counter component, will be able to receive that. So how do we receive that inside of our component? There is a special parameter that we set up. I'm going to use a shortcut here, and we're going to use the CascadingParameter attribute on a property. We'll set this up and call it CurrentCount, or actually, we need to bring in the component's namespace, because this is going to resolve to a CounterStateProvider, and then we can name it. Let's keep this simple and just call it State. So this State will be assigned the value of "this" that it brings in from the CounterStateProvider. So now we can consume it, and we can get rid of this currentCount equals zero. When we increment our count, we'll use the State instead. So we'll say State.CurrentCount; that's coming from the state provider. >> Okay. >> Up in this section, we'll do the same thing: State.CurrentCount. So that should tie it all together. Now, when we run our application, we won't lose that state as we go from page to page. We can also take advantage of that. So now we have our current count. We can go to our "Home" page, go back, and still have a four. Let's make this a little bit cooler. On our Index page, we can consume that same state provider. So then we have shared state across the application. We'll do the same CascadingParameter. It's going to be a CounterStateProvider. We'll call it State.
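The two pieces Ed builds here look roughly like the following sketch; the CounterStateProvider name matches the demo, but the code is a reconstruction using the syntax that shipped with .NET Core 3.0.

```razor
@* Shared/CounterStateProvider.razor -- wraps its children in a CascadingValue
   so any descendant component can receive this instance. *@
<CascadingValue Value="this">
    @ChildContent
</CascadingValue>

@code {
    [Parameter]
    public RenderFragment ChildContent { get; set; }

    public int CurrentCount { get; set; }
}
```

```razor
@* Pages/Counter.razor -- receives the provider through a cascading parameter. *@
@page "/counter"

<p>Current count: @State.CurrentCount</p>
<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    [CascadingParameter]
    public CounterStateProvider State { get; set; }

    private void IncrementCount() => State.CurrentCount++;
}
```

In MainLayout.razor, a CounterStateProvider element then wraps @Body so every page loads inside it and sees the same count.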
Then in my Hello world section here, I can just write out Current count: @State.CurrentCount, just like that. Now, this will appear on our Home page. So now we're getting the count off the counter even though they're not on the same page together. So our current count is zero. We click this up. We go back, and our "Home" page changes. >> Very cool. >> So that's one way that we can share state across the application. This is using a piece of markup; the state provider uses the CascadingValue component, so this is markup-driven. We can also do this through code, so we don't have to have this markup piece that drives the state management. In my application, I've added a very simple POCO, a plain old CLR object, called CounterState. I have a CurrentCount on that as well. This is something we can just inject into our application through dependency injection. So if I go into "Startup.cs", I'm going to scroll down to where all of my dependencies are registered. In services, I'll just uncomment this line that I added earlier: services.AddScoped of type CounterState. So now I have this object, this CounterState, that is available to my application through dependency injection. So if we go back to our "Counter" component, instead of using a cascading parameter, we'll come up to the top of the page and we'll use the inject directive. We can inject that CounterState through dependency injection. I think I'm missing a namespace here; this is under Data, so Data.CounterState. We'll simply call that State again. Since we named it the same, we actually don't need any code changes here. I can remove this. Now our data is resolved through dependency injection instead. So it's the same idea, but it's at the application level. Then in our "Startup.cs", we'll go back here for a moment. We used AddScoped to add that to the application. We have several ways to add things to our container. We can use AddSingleton, AddScoped. The reason I chose AddScoped versus AddSingleton is that on server-side Blazor, you're sharing that application state with the entire user base if you use AddSingleton. That will be a single instance per application, not per user. So it's a very important thing to know; you don't want to share certain things with everyone. Whereas our weather forecast service, something that's pulling weather data, we can share with all users because it's going to be the same. So those are some of the most frequently asked questions that I get when I present on Blazor. People ask about state management, they ask about identity, and they ask about code separation. Those are the real heavy hitters that I get all the time. So hopefully that clears things up for you. >> Yeah. That's awesome. So for someone who is starting out with Blazor, what are, briefly, some tips and tricks on how to get up to speed on it, how to get used to it, how to understand what it is and how to use it? >> So my biggest advice would be, first of all, go to blazor.net and walk through the Getting Started materials. The .NET docs team has done a fantastic job there. Second of all, Blazor is a lot different from MVC, where in MVC applications, we're rendering strings. Blazor has something called a render tree that it builds. It's not a string, it's a representation of the DOM, the Document Object Model, that is held in memory, and it does diffs against it to see what's changed.
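For reference, the dependency-injection variant of the counter state discussed above looks roughly like this; a sketch that assumes the default project layout (a Data folder and the standard Startup.ConfigureServices), with the component-side change shown in comments.

```csharp
using Microsoft.Extensions.DependencyInjection;

// Data/CounterState.cs -- the POCO that holds the shared count.
public class CounterState
{
    public int CurrentCount { get; set; }
}

// Startup.cs -- register it with the container in ConfigureServices.
// On server-side Blazor, AddScoped gives each user's circuit its own instance;
// AddSingleton would share one instance across every connected user.
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddScoped<CounterState>();
    }
}

// Counter.razor then swaps the cascading parameter for an inject directive:
//   @inject CounterState State
// and the existing State.CurrentCount code keeps working unchanged.
```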
So it's a big difference from HTML Helpers and Razor views that just render out strings all the time. The reason for that is that with those string rendering mechanisms, the client-side code is done in JavaScript. So it goes back and it finds pieces of the DOM, manipulates it, edits it, deletes it, those sorts of things. In Blazor, we work with the render tree. The developers do. Then the framework takes the render tree and applies the changes for us. That's what allows C# to be able to work on the client side. >> Cool. What's your impression of how far along this is? Is it in preview? Can you put stuff into production with it? >> In September, we will see a GA release. >> Okay. >> So it's coming soon. For server-side Blazor, we'll have general availability soon. Client-side Blazor will come sometime next year. We haven't got a solid date just quite yet. I've heard quarter one, quarter two. So that is coming soon. >> It requires .NET Core? >> Yes. >> Is that true? >> Blazor requires .NET Core. >> Okay. >> It's currently set for .NET Core 3.0. Client-side will likely come with .NET 5. >> Okay. Cool. Thanks for that. >> Thank you very much. >> All right. >> Hopefully, everybody found something useful out of this. Getting started with this stuff is a little tricky. It's a brand new preview. So I figured that most people are new to the concepts because the entire framework is new. It's something I've been working on very closely for the last year because of my job with Progress; we build UI components for everything. So we jumped on the bandwagon early and created a suite of Telerik UI components for Blazor. It's been my main focus. >> We will have links to that in the show notes along with the places people can go to learn more. So I hope you found that useful and we will see you next time on Visual Studio Toolbox.

Azure Pipelines

>> Joining me today on Visual Studio Toolbox is Tupelo, Mississippi's second-most famous resident. We're going to talk about DevOps and Azure Pipelines. Hi, welcome to Visual Studio Toolbox. I'm your host Robert Green and joining me today is Mickey Gousset. Mickey, how are you? >> Hi Robert, I'm excited. I'm very excited to be here. >> I'm excited too. I've been trying to get Mickey on this show for probably years. Now that you are a Microsoft employee, and you're in town, today is the day. >> I am. I'm a DevOps Architect on the Microsoft DevOps Customer Advisory team and I'm very excited. >> You are from Tupelo, Mississippi. >> I am. Now, Robert, do you know what Tupelo, Mississippi is world famous for? We are a small 30,000-person town in Northeast Mississippi. We get about 30,000 people a year that visit my town, half of which probably come from Europe. >> Yeah, it's the home of one of my favorite blues singers, Mr. Paul Thorn. >> That is true, but that is not why we are world famous. >> That would be the birthplace of the iconic Mr. Elvis Presley. >> That is correct. >> I didn't know that. >> Elvis was born in a two-room, not a two-bedroom but a two-room, house in Northeast Mississippi. So they come, they tour his house. They tour his church. They go to the hardware store downtown where he bought his first guitar at age nine, and it's still a hardware store, and then they drive up to Memphis, Tennessee to see Graceland where he got famous. >> Cool. But that's not what we're here to talk about today. >> No, that's not what we're here to talk about. >> We're going to do some DevOps stuff. We're here to talk about Azure Pipelines. >> We are indeed. Do you know what Azure Pipelines are, Robert? >> I'm going to. I do, but I'm going to pretend I don't for the purpose of the show. So, I do know about DevOps. I know about Continuous Integration, Continuous Deployment, your CI/CD pipeline. Are Azure Pipelines the same as those? >> It fits into that story. Yes. So, we have Azure DevOps, which used to be called Visual Studio Team Services, which is the cloud version of what used to be called Team Foundation Server. So, one of the components of Azure DevOps is Azure Pipelines, and it's been referred to by a lot of different names. They call it the Build System. It's got the ability to do Build Management, Release Management, all of those things, but we encapsulate that into what we just call Azure Pipelines in general. >> Okay. >> What's nice about Azure Pipelines is it works with a multitude of different languages. So, we're talking Ruby, C#, Python. You can use it to deploy both on-prem or into the cloud, and you can target PaaS, you can target infrastructure as a service. It's got multiple different options for being able to work, and the goal is to be able to automate that whole Continuous Integration, Continuous Delivery process to, honestly, build better software. >> Okay. Cool. >> Because, I mean, the whole point of having a build system is, if you have multiple developers working on the same code, if you're not doing a daily build or a nightly build, running your smoke tests, running your other tests, your integration tests, then you can run into a lot of problems farther down the road. You can have a lot of bugs that creep into the system, and once that happens, that's technical debt and other things you have to deal with at a later point in time.
What's interesting though, is you also need to look at your releases, not just your builds. Look at the Continuous Delivery aspect of it. Because with Continuous Delivery, the point is you want to be able to release into, say, your Dev, your QA, and your Prod the same way, with the same code every time. >> Right. >> If you're doing it manually, you might miss that quotation mark that you forgot to copy when you were copying over the line of code to push things out. >> Okay. Cool. >> Now, one of the things I like about Azure Pipelines is that we have the ability to build out these pipelines in a nice GUI interface, which I'll show you in a moment, but we also have the ability to do it with code. So we can write what's called a YAML file and actually write some code that stays with the rest of our code to be able to make our builds. >> Yeah. I definitely want to talk about that when we get to that point. >> Now, the other thing I really want to point out real quick about Pipelines is it doesn't just work with Azure DevOps. It works with Azure Repos, which are the repos in Azure DevOps, but it also works with GitHub, it can work with Bitbucket, it can work with a variety of different systems. So, it's very nice from that standpoint. But what's really nice about Azure Pipelines is that if I have an open source project, and it's a public open source project, then I can get 10 free pipelines on the service to be able to do my builds for my open source project, which can really speed up the build time for some of these projects that are out there. >> Cool. Let's see how it works. >> Okay. So, where I got started is that I wanted to create a quick little test project to work with. So, I used the Azure DevOps Demo Generator, which is a publicly available demo generator out there, which has a lot of different templates that you can work with. They've got ASP.NET templates. It's got, you know, Node.js. A lot of different projects. >> Some of our famous demos from keynotes and launches, like Parts Unlimited. >> Exactly. >> All right. >> So, I work with Parts Unlimited, and honestly this is open for anybody to use, and all you have to do is come out here and give it the organization that you want it to deploy to, your Azure DevOps organization. Give it a name and punch the button, and Bob's your uncle. There you go. Now what that created for me out here was a Parts Unlimited team project in my Azure DevOps organization. If you've not used Azure DevOps before, which I'm sure you have, you have the Boards area, the Boards hub, which is where you deal with work items. One of the nice things about the Demo Generator is it generates a ton of work items for you, so you can actually see how the work item system works. It also generates code for you, so you can see how Repos works. But what we're going to work with is Pipelines. So in the Azure Pipelines hub, I've got my Builds section for building my code. That's the CI portion. I've got my Releases section for doing the releases, which is my Continuous Delivery portion. We have a bunch of other features: libraries, and ways to group the servers you might want to deploy to into groups, things like that. But if we just go look at the Builds, I've got one build here. I just want to show you what this build looks like. I've got all green. If I go to edit, this is the build system that we have in Azure Pipelines. So, for example, right now I'm building on a hosted build agent.
Meaning that I'm using a bunch of build servers that Azure DevOps has up in the cloud for me to use. >> Okay. >> I can use those hosted servers or I can use my own private servers. Those are called private build servers, and they could be in the cloud or on-prem as well. Now, the hosted servers are really nice though, because the things you need to do your builds are already installed on them, the prerequisite software that you might need to make things work. So, what you do is you select your pipeline, you specify where you want to get your source from. In this case, I'm using Repos, but as you can see, I could be using Team Foundation Version Control or GitHub or any of these options. >> Okay. >> Then you specify the agents that you want to use. These are the different jobs; I could specify multiple jobs. Then you have the specific tasks that you want to do in this build, and this build is just doing a NuGet restore, then a regular build on the solution, running some tests, and then publishing my artifacts out to a drop location where I can then use them as part of the release. >> The demo generator created that build for you. If you were creating a build from scratch, would you have to know how to select each of these things and what order they go in? >> You would have to understand the basics of how your build would work. What you do is just come here and add different tasks. There are different tasks for building, for doing tests, for maybe helping with deployment, and these are a lot of tasks that Microsoft provides, as well as there being a marketplace out there for other tasks. So, yes, you do have to understand how what you're building needs to be built. Then, you can add the things in here that you need. So you can add the different tasks. You can also have variables in your build process. These are different things where you might want to be able to specify certain values: you set the variable in the task itself, and then anytime you need to change it, you can modify the variable here. >> Okay. >> In releases that comes in very handy as well. You can set triggers, which determine when you want to trigger a build. Do you want to do Continuous Integration or not? Continuous Integration being every time I check in, or maybe every time I do a pull request, depending on how you want to set things up. You can run certain tests to verify that everything looks good before it moves on to the next step in your process. >> Okay. >> Or you can even schedule nightly builds, whatever you want to do. Then of course, you've got different things where you can set your build number format, how long you want to retain your builds, things like that. I'm not going to kick this off because it could take a few minutes to run, but what I will do is come out here, and if you look at the results from one of these builds, you can see that for each task, it breaks down what happened. You can actually drill in to each task and see the details of what occurred on that particular task. So if you've got errors in your build, this is where you would come to try to find the errors that were related to why your build failed. You've also got a good summary overview where you can flip through here to see what's happening. It shows you the test information, so when your tests ran, it shows you which ones passed, which ones failed. >> Assuming you had a test. >> Everyone has tests. >> Yeah. >> Right? You have to have tests. Then if you're doing code coverage, it has code coverage information as well.
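For reference, the four build steps described above (NuGet restore, build the solution, run the tests, publish artifacts) map onto standard Azure Pipelines tasks. The demo uses the classic designer, but expressed in YAML, which the episode comes back to later, the pipeline would look roughly like the sketch below; the agent image, task versions, and file patterns here are assumptions, not the actual PartsUnlimited definition.

```yaml
# Rough YAML equivalent of the classic build described above.
pool:
  vmImage: 'windows-2019'          # a hosted Windows agent with Visual Studio tooling preinstalled

steps:
- task: NuGetToolInstaller@1       # make sure a recent NuGet is available

- task: NuGetCommand@2             # restore packages for the solution
  inputs:
    restoreSolution: '**/*.sln'

- task: VSBuild@1                  # regular build of the solution
  inputs:
    solution: '**/*.sln'
    configuration: 'Release'

- task: VSTest@2                   # run the unit tests
  inputs:
    testAssemblyVer2: '**\*Tests*.dll'

- task: PublishBuildArtifacts@1    # publish the output to the "drop" location
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
```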
So that gets my code built, and then I store my code in a drop location. Since I'm building in Azure DevOps, it stores it in a drop location up in Azure DevOps. If I was building locally, I could have maybe a file share somewhere that I might want to put those at. So that handles the CI portion of things, but then we have the CD portion of it. >> You can set it up so that every time you check in, the build occurs, or you can schedule builds, or of course, you could do them manually. But the beautiful thing about the DevOps mindset is you have things happen automatically. So you can decide whether you want to turn on Continuous Integration, which is turned off by default, and have a build run every time you check in, or you just do a nightly build based on the code that's been checked in. >> Correct. Then also, you can just kick them off manually whenever you need to. The second half of this whole process is releases; we need to release our code. So again, one of the things that the Demo Generator did for me is create a default release pipeline. So in this release pipeline, I'm specifying the build that I want to pull from, and I could turn on the continuous deployment trigger so that as soon as the build passes, it automatically starts releasing if I want to. What I have is different stages, different environments that I can release to. >> Is that typically done? >> It depends on the organization. >> You would release it to test, I presume. >> We might release it to test and then have tests that are run. >> Just because the build succeeded doesn't mean the app works. >> That is correct. So you want to have gates in place to make sure that you're not just pushing out something all the way to prod just because. So what we've got is different environments. Now, these are logical environments, say, like a dev and a QA. You'll notice I've also got a production, but I don't have it actually linked up. This is because in this scenario, I manually push to production. So what happens is this build runs, and if I had the continuous deployment trigger turned on, it would immediately push out to the dev environment. If we look at the tasks that make up the dev environment, since this is just a website, it's just doing an Azure App Service deploy, and then it sets some variables specific to the dev environment. >> So this is just a web app? >> This is just a web app, yeah. PartsUnlimited is just a website app. By the same token, if we go back to here, wait, let me go back to my releases, see, this is the fun part, you can see that I've also got a QA stage where I've got tasks that would deploy to the QA portion of wherever things are. Now, I've got variables that I set; I set the website name based off of whether I'm in the dev environment, or the QA environment, or the production environment. So I've used these variables, and I've set these variables in the different tasks in the pipeline, to where then it goes, "Oh, you're deploying to dev; let's set these variables, let's replace this stuff in app.config" or wherever we need to make some changes. What we've actually got here with this pipeline, if we go back to my actual release, is that right now, we've deployed to dev. It deployed perfectly, but we're waiting on QA. That's because I can actually specify gates. So I can specify post-deployment conditions and pre-deployment conditions.
So here in this pre-deployment condition, I've specified that someone has to approve it before it can go to QA. I've got a lot of different options: I can gate off of a webhook into something else, or gate off of tests completing; I have multiple different options for how I want to gate. Then it just becomes a matter of, "Okay, well, if it's gated, then let's go look at that release." I would have gotten an e-mail, which I did, and I can just come in here and say that I approve. Toolbox forever. Once I approve that, it will then start deployment into QA. So you have the ability to create these multiple pipelines that can release into multiple different environments. If you enable all of this, we now have continuous integration. So when we're checking in, we're building our code, and then when we're building our code, we can have it automatically start going down whatever our pipeline process is to do delivery, hopefully getting code out there faster, but also ensuring quality. That's one thing I think a lot of times people don't realize, the quality aspect that doing this Continuous Integration and Continuous Delivery can give you. >> It really puts you in a world where you've got a work item, whether it's add a feature or fix a bug, you make the fix, you check the code in, and you're like, "I'm done", and you walk away. But you're not really done; you're done with your code, which you tested locally, and it works locally with your current version of the code. But you check it in, and it might've broken somebody else's code, or somebody else's code might've broken yours. So the build itself may or may not work. If it does work, then all the tests run, then it gets deployed, but all of that's automated for you. So you just code all day long, and whatever time you leave, you check in, you leave, and then all this work happens automatically. The dev guys or the QA guys come in the next morning, and there may or may not be a new version. If everything went well, that new version's already there and they can proceed to testing. Nobody has to wait around for somebody to do the release. Nobody has to go, "Oh, it's Wednesday? Oh, it's my turn to do the release? Oh, I forgot." It automates a lot of this stuff for you and checks along the way to make sure that things worked. So if QA gets in in the morning, or you get in in the morning, and there's no new version for them, it's because something went wrong between you checking in your code and the new version being pushed out to them. It's all automated, it's all logged for you, and you may or may not get an e-mail or text message at midnight telling you, depending on whether you want to, but you certainly could, right? >> That is all correct. >> Cool, and it's pretty easy to use. >> It's pretty easy to use. As long as you understand the general process, getting everything set up is pretty easy to do. >> You don't have to be part of a massive team; you don't even have to be part of a multi-developer team. I think even if you're writing code just for yourself, just an app you use yourself, you should run it through the full DevOps process to get better at building software yourself, so that even if it's just for you, it makes you better at delivering software, and now you have the skills that you can bring to work or to multi-developer applications as well. >> I agree 100 percent. So let me take it one step further. I've shown you the GUI way of building out these pipelines.
But one of the things that people ask for, especially with build systems, is a way to track the versions of their build pipeline with their actual code. So the way we do that is with what's called a YAML file. >> YAML. >> YAML. >> Which stands for? >> It depends on who you ask. But according to my research, it stands for YAML Ain't Markup Language. >> So it doesn't stand for Yet Another Markup Language? >> That's what I've been told. >> If you go to yaml.org, what do they say? >> Let's see, what do they say? They do say YAML Ain't Markup Language. >> There you go, myth busted. >> But yaml.org is a great place to go to get some basic information on YAML in general. With YAML, you can define how you want your YAML to work. Everybody can define it differently, but essentially what you have is structure that's enforced through indentation with spaces, not tabs. >> Right. >> Key-value pairs. >> Which is one of the first things you learn when you're working with YAML, that that matters. >> Yes, yes it does. >> I found that out the hard way. >> That's why it's often very difficult to just copy and paste YAML code in, because you copy it in and it winds up putting tabs in, and then it doesn't work and you have no idea why. >> Exactly, and it's basically a human-readable data serialization language. It's based off of JSON, obviously, as you can tell. If we go and look, and I love our documentation by the way, if you go and look at the documentation for pipelines, they actually have a "Create your first pipeline" documentation section with a lot of different selections for you. So we've provided a bunch of the code you need to get started with creating your first pipelines using YAML. >> Okay. >> So what I did is, I took the pipelines JavaScript one and I forked it. It's on GitHub under MicrosoftDocs, pipelines-javascript, and I forked it so I just have my own version of the project, and it gives you all of this out of the box, including a starting pipelines file. So I'm now working with GitHub as my repo. I'm able to take this repo, clone it down, and start working with it, and my tool of choice for working with pipelines or YAML files is Visual Studio Code. Love Visual Studio Code. One of the nice aspects of Visual Studio Code, if you're working with Azure Pipelines, is the Azure Pipelines extension. Now, what we've got here, if we look at this very simple pipeline, is we're basically saying, this is the pool, this is the image that we want to use. So this is the Linux image we want to use when we do our build, and we're saying use the Node tool task, and just for grins, we'll change this to be 8 instead of 10, just to make a change. You know what, I really want this to show up with a better name. I figure there's a way to do a display name. You'll notice I've spaced in twice, but then if I do Control-Space, I get IntelliSense, and I can say, well, I want to do displayName, and this is just going to be the Node tool. Now, this is a task that I'm going to run, one of those tasks, remember, from when I clicked the plus sign in the GUI. >> Yeah. >> This is using one of those tasks. This is actually running a script. So it's going to say just run npm install and then npm test, and I'm using some other tasks to publish out and publish the code. >> If you build a pipeline using the visual designer, is there a way to convert it over to YAML automatically? >> Yes there is, well, not to convert it over automatically. >> To export it.
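The azure-pipelines.yml file being edited here comes from the pipelines-javascript sample, and its contents are roughly the sketch below; the hosted image name and task version are assumptions about that sample's vintage, and the displayName and the Node 8 change are the edits described in the transcript.

```yaml
# azure-pipelines.yml -- approximate contents of the starter JavaScript pipeline.
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'     # the hosted Linux image; older samples pin a specific Ubuntu version

steps:
- task: NodeTool@0
  displayName: 'Node tool'     # the displayName added via IntelliSense in the demo
  inputs:
    versionSpec: '8.x'         # changed from 10.x to 8.x "just for grins"

- script: |
    npm install
    npm test
  displayName: 'npm install and test'
```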
>> But what you can do is, if I go to this build and I go to "Edit", I can view the YAML for this entire build. >> Okay, well there you go. >> Sometimes the build may be doing things like setting certain variables for me. >> Yeah. >> Then I have to make sure I add that to my YAML file. >> Okay. >> But one of the ways I've taught myself YAML, and how to build out some of these pipelines, is to select a particular task you want, and you can view the YAML just for that task. >> Okay, cool. >> I build it first in the GUI to understand it. >> Yeah. >> Then I can transpose the code over. Then of course, once we've made these changes in Visual Studio Code, we save those changes. Then we can just come back over here and commit that locally, type a commit message, and then once it finishes doing that, we'll just push it back out to GitHub. >> Okay. >> Now, what I've done in my repo in GitHub is I've gone to the marketplace and I've grabbed Azure Pipelines, and I've gone through a configuration where I've set up my team project in Azure DevOps to be connected to my GitHub repo. >> Okay. >> So now, since there was a check-in to my GitHub repo, it will have kicked off a build. If we come over here to Builds, come on, you can do it. You can see there's my change, "node to eight". >> Yeah. >> It's kicked off my build process. >> How does it know to read that YAML file, and how does it know that that YAML file contains the information? >> Because I came into my Azure DevOps team project and I created a new pipeline and I pointed it to. >> That YAML. >> That YAML file. >> Got it. >> By pointing it to that YAML file, you can see there, my Node tool task. Now, what's really cool about this is that one of the things you have to remember is that you can come in here, that's not what I meant to click, but you can come in here and you can modify the triggers. You can use variables. So, you have the ability, just like we had variables over in the build GUI and in the pipelines. >> Yeah. >> I can have different variables that I can access. I can set the trigger. I can point to a different YAML file if I need to, so if I ever needed to, say, change the YAML file I was working with, I can point that to a different area. >> Would you agree that if you're just getting started with these, you should use the visual designer, and then once you understand what's going on, then you can go over to YAML, or would you just start with the YAML? >> The answer is, good consultant answer, it depends. For me it was easier to get started with the GUI. >> Okay. >> For a lot of people that maybe are used to using other build systems, like maybe Travis or CircleCI, they use YAML files as well. >> They might be familiar with YAML files already. That's what Helm charts are written in, if I'm not mistaken, which are used in deploying containers? >> But think about it. Now I've got my build, or the description of my build, being version-controlled with my code. It's following me around when I branch. >> Okay. >> I'm able to track the changes to my build in addition to the changes in my code. Now this leads to the question, "Can I do this with my releases?" The answer is not yet. >> Okay. >> The product team is aware of it and they're actually working on it. If you go to Microsoft/azure-pipelines-yaml on GitHub, you can actually see the design docs that we have out there. >> Cool. >> Kind of what the plan is.
They're aware that people want that feature. >> Very nice. >> So they're working on it. I'm working closely with some of the open source projects on GitHub to help them with pipelines, and it's just a very, very solid, robust product. So I really hope people will check it out. >> Very nice. So that was a great overview of the pipelines, and I think we saw how easy they are to use. We'll put a number of resources in the show notes; you pointed to the docs and the Demo Generator and a couple of other things. People should just absolutely get started using this stuff and learning how to apply the DevOps practices in their development. >> Absolutely. >> All right. So thanks a lot. Thanks for being on the show. >> Thank you. This was so much fun. >> We will see you next time on Visual Studio Toolbox.

Azure Pipelines - Release

>> On today's Visual Studio Toolbox, part two of our two-parter, we're going to look at release pipelines and continue to see how you can deliver more value to your customers, more frequently and more efficiently. Hi. Welcome to Visual Studio Toolbox. I'm your host Robert Green. In this episode, we're going to continue our two-part look at how you can start adopting basic DevOps practices, and we're going to focus on pipelines. We created a build pipeline last time. Today, I want to show one more thing about build pipelines, and then we're going to look at release pipelines. So again, the best place to get more information on this is the Azure DevOps Services documentation, aka.ms/vst/azuredevopsdocs. Great place to look at that. So in the previous episode, what we did was create a build pipeline, and that build pipeline waits for a machine to become available, downloads the latest version of NuGet, restores the packages, builds the solution, runs the unit tests, and then publishes the build artifacts, which is the thing that you built so that it can later on be deployed. We saw that we ran one of these. It took about a minute-and-a-half. Then I went into Visual Studio and made a change that broke a unit test. I checked that code in and pushed it to the repo, which triggered the build, which then failed because I broke a test. Then in the meantime, I went back to that code and changed the code back, so the test worked again, checked it in, and that triggered a build which succeeded; I did this part off-air. What I want to do is show one more thing about build pipelines, and I want to talk about branching, because so far, we've just been working with the master branch. So I'm going to come into Visual Studio, and in the master branch, I'm going to create a new branch, and I'm going to call it AboutPageV1.1. It's off of master, and I'll create that branch. Then what I want to do in this branch is make a change to the AboutPage and, let's say, provide some additional information. Perfect. Now, I'm going to check this in to that branch because this is V1.1 of the AboutPage. Updated AboutPage text, Commit All and Push. This pushes it up to the branch. In the last episode, when we pushed things to master, that triggered the build; we've now pushed things to a branch, which does not trigger a build. The reason it doesn't trigger a build is because, if we come in here and edit this, there we go, this build gets triggered when we check into master. Now, you can, if you want, have this build triggered when things are checked into branches, but the default is that it's only triggered when it's checked into master. Now, that means that if we come over here to our repo, we create a pull request. So let's do a new pull request, and this is AboutPageV1.1 into master. You can do this from inside Visual Studio. You can do this in Azure DevOps Services, either way. I'm going to call this AboutPage V1.1, etc. I can assign reviewers. I'm not going to do that. I'm just going to create the pull request. Then I can approve it, I can reject it, etc. I'm just going to approve it, and then I'm going to complete the PR. I can delete the branch if I want. Complete the merge. I have approved the PR. That code gets checked into master. Now, you might expect a build to be kicked off, and indeed it is. At this point, again, we could sit here and wait for a minute-and-a-half. We can look at some additional information. We can view reports on pipelines.
We can see how often this is building and failing. It's worked two out of three times. There's a whole bunch of information that you can get here. But I think what we'll do at this point is we'll just let this go. Then in the meantime, we're going to build a release pipeline. There are a couple of ways you can build release pipelines. You can come into "Releases" and create the release pipeline from here. What you can also do is go into "Repos", go into your repo, and you can select to set up a release from here. Either way works. I'm going to go back to doing it from here. So we'll go to "Releases", and I'm going to create a new release pipeline. The release pipeline asks two questions: what are you releasing, and where is it going? So the first question you get asked is, where is it going? I could do an Azure App Service deployment. I can do a bunch of things to Azure. I can do an IIS website with SQL Server. If I keep scrolling down, there's IIS without SQL Server. There are a whole bunch of options here, and you can get more in the marketplace. I'm going to deploy to an Azure App Service because I have created a couple of Azure App Services. So I've got vstoolboxtest.azurewebsites.net, that is a place to test the application. Then vstoolbox.azurewebsites.net is production. At the moment, I haven't put any code up in here. I just created the App Services. I'm going to call this stage Test because what I'm going to do is create a release pipeline that releases to Test first, and then to Production. Now, the question is, what are you releasing? So if I click "one job, one task", I said that I was going to deploy to Azure. So the question is, what Azure? So I will select one of my Azure subscriptions. I'll choose this one. Now, I need to authorize. I need to say that from Azure DevOps Services, I can go and talk to Azure, to my Azure, username and password, and of course, now tokens come down. Things are configured. It takes a little bit of time. You'll have to do this once per project, so the next time I create a pipeline on this subscription, everything will already be set up to enable me to talk to Azure. I said it was a Web App on Windows. Now, which one? It's VS Toolbox Test. Now, then I come back to the pipeline, and the first couple of times I tried this out, I forgot to specify something in the Artifact. So I said, let's publish to Azure, but I never said what we are publishing; don't make that mistake. Let's click "Add an Artifact". You can base it off a build. You can go directly to the source code. I'm just going to do it off a build. I'm going to select the YAML build that we built last time, Toolbox DevOps Show, and click "Add". Now, look at a couple of options. If I look at the trigger, the continuous deployment trigger: just as when we built the build pipeline using the classic build process, we had to enable Continuous Integration, so too here, we need to enable Continuous Deployment, meaning that after the build succeeds, we will take what got built and send it up to Azure. You could also do it on pull request; I'm going to leave that disabled for now. Then what I also want to do is create another stage. I could create a new stage and specify all the things, or I can clone this stage because it's almost the same. Instead of saying "Copy of Test", I'm going to call this Production. Then the only thing here that's different, same subscription, but I want to deploy to VS Toolbox, which is the production site. There are other options; I can pass in variables.
I can say how long to retain a release. There are all kinds of things I can do here. But I also want to look at the pre-deployment conditions and the post-deployment conditions. I don't want any conditions on Test. The thing built, send it to Test, and Test can start working on it. But before we go to Production, I want to require approval. So let's set pre-deployment approvals to Enabled. Let's say me. There's an option here to say that if you requested the release or deployment, you can approve it, so you can approve your own stuff. Here, because I'm a glorious team of one, I'll make me the approver. The default timeout, interestingly, is 30 days. That seems a bit long to me. I would probably make it shorter, but that's okay. You also have the ability to define gates if you want more than just one person's approval being required: you want to see that certain load tests passed, or certain statistics are met, certain conditions that you want to be met. You can learn more about that here, but it's really that you can set a particular bar; only if the application meets that performance bar or security check or whatever it is can it be deployed. So I set a pre-deployment approval on the way in, and you can also set deployment conditions on the way out. All right. Now let's save this. Now we have a release pipeline. Now, if we pop back to our previous example, where we merged the PR, that kicked off a build. Okay, perfect. All right. Now, what I want to do is I want to create, well, I have a pipeline. We have a pipeline that's created, that's automatically going to run if we do another build, so let's go do one last build. Then "Okay". Let's make yet another change here. Actually, let's not do that, because I'm still on the About page. I want to bring down master. Stash that. I want to bring down master, because we did the pull request, so now I want to make sure that I have the current version in master. Now, what I'm going to do, I'm not going to work off a branch, I'm going to work off master. I'm going to check this in: updated About page text. Not going to worry about my typos. Yes, I am, because it's in perpetuity. All right, let's push this. Now I'm checking this change into master, which will kick off a build, and we're not going to sit here and watch that build go, so let's fast forward until this build completes. Fantastic. The build worked, which should kick off a pipeline. There it is. Release one has just been kicked off because the build succeeded. What happens is, we're now going to deploy that to Azure. Again, you're starting to see why this is beneficial, and how this helps you deliver value more consistently, because all of this is automated: the idea of building, running the tests, taking what got built, and deploying it to Azure. In our case here, we're just taking a simple app and putting it into an Azure Web App. You can imagine that for testing, you might have an ARM template that not only puts the app up there, but creates a new copy of the database, maybe populates it with some sample data. There's a lot going on. Do you want to have to manually do that over and over again? No. What you want to do is work, work, work. Then when you're done with your code, you check it in, you go home. The build works, the deployment works. Whether it takes 15 minutes or three hours, who cares. The next morning, Test comes in. That application has been deployed. It's all ready for Test to go. That's repeatable, and it happens the same way over and over again.
That's the benefit of these pipelines: you get repeatable, reliable things happening automatically, so that you make a change, they test; you make a change, they test. You do that over and over, then you release, and then you do that again in the next sprint. In the meantime, this has succeeded. It has deployed this to Test. Let's see if that's true. Let's refresh here. We should see the website. There's the website with, of course, the latest version. Now, in the meantime, there is an approval pending to send this to Production. Again, if we go into mail, I see that the build worked, and there is a pre-deployment approval. Me being the approver, I get an e-mail telling me that we will go to Production when I approve. Now, what is the process for me to approve? It's probably more than just that I got an e-mail. I want to see what the test results are. I want to talk to the testers. Whatever process there is, let's say that's all happened. Again, I can do this from the mail. I can view the approval from the mail. I can come in here, click "Pending Approval", and we'll take a look at the approval. I can add comments. I can defer it for later, but because it's all good, I'm going to approve this. What that does is, it kicks off the process of sending this to Production. This may take more time or less time than Test did. It may take more time because for Test you're creating a new version of the database; you want a clean environment. In Production, you're probably not wiping out all the data, so it may actually take less time. Although you may then have a series of tests that occur as we literally roll this out. There's canary deployment, blue-green deployment; there are techniques you can use to roll out the changes bit by bit rather than just have everybody use the new version, only to then discover that there's a mistake, a bug, or something went wrong. There are ways to roll that out that are beyond our scope here, but worth discussing. Now we're just waiting for this to get sent to Production. That worked. Now, we come over to the Production site like that. There we go, and head over to the About page, and of course it's correct. Again, the docs: here they are, the Azure DevOps documentation, aka.MS/vst/AzureDevOpsdocs. Great place to learn how to do this. Start slowly, build some simple build pipelines, some simple release pipelines. Learn at the cadence that makes sense for you, and start adopting the DevOps practices to be able to shorten your release cycles: better value for the customers, a better world for you as the developer. Hope you enjoyed this. We will see you next time on Visual Studio Toolbox.

Azure Pipelines - Build

>> On today's Visual Studio Toolbox, part 1 of a two-parter on how you can deliver more value to your end users more efficiently. Hi, welcome to Visual Studio Toolbox. I'm your host Robert Green. Today we're going to look at how you can get started using some of the basic DevOps practices. Now, DevOps is a hot topic. A lot has been written about it. Lots of people are talking about it. There are entire Channel 9 shows devoted to DevOps. But it's really important, and I thought we would take an opportunity on Toolbox to look at how you as a Visual Studio developer can get started down the path. Now, the question is, why? What does DevOps offer you? One of the key things about DevOps is that it gives you the ability to continuously deliver value to your customers. The key point there is the idea of continuously delivering. Now, think about apps on your phone. How often do the apps on your phone update? The answer: often. Why is that a good thing? It's a good thing because you're more likely to get bug fixes sooner, and you get new features sooner, which all in all is a better experience for you as the customer. Now, you as a developer should also want to take advantage of this. So consider an application you've built. You and your team just spent a year building V1 of a very important internal application. You released it, and your users are using it. Everybody's happy. Users being users, they do two things. One, they find bugs that they want fixed. Two, they have feature requests, sometimes large, sometimes small. So if someone comes to you with a bug or a minor feature and says, "I would like this updated," what do you want your answer to be? Do you want your answer to be, "Yeah, we'll fix that bug in the next service pack, which ships quarterly"? Or, "Yes, that's an interesting feature; we'll consider it for the next version of the application, which will be six months to a year from now"? Consider flipping that to a world where you could say, "Well, yeah, we do weekly bug updates, so we can fix that bug this week. We do twice-monthly minor feature updates, so we'll take a bunch of features and you can have those in a couple of weeks." All right, the latter is a better world for the customers, who at the end of the day are why you're building software. So how do you do that? How does DevOps enable you to do that? There are a couple of key pieces. One is the concept of sprints, which is a fixed period of time in which you're going to do work. So maybe you're on two-week sprints or three-week sprints, and at the end of that sprint, you've got some shipping software. It's a very interesting topic but not what we're going to discuss here. The other part of it is DevOps pipelines, particularly build and release. What we're going to do in this video and the next one, because I think this will be a two-parter, is look at build pipelines in this episode and release pipelines in the next episode, also known as Continuous Integration and Continuous Deployment, and therefore CI/CD pipelines. It's basically a way of having machines in the Cloud do a lot of the work for you of building an application and then releasing it to test, to QA, to production, etc. All right. So what have we got? The best place to learn about this of course is the docs, aka.MS/VST/AzureDevOpsdocs. The Azure DevOps docs are excellent, and you can learn a whole bunch of stuff in there. All right. So I have an application. It's a very simple MVC application, built with the .NET Framework; it's an ASP.NET application. I could've used Core. I didn't.
Doesn't really matter. I have the web app. It's an MVC app. I've got tests. Of course, I have a grand total of three tests. They all pass, the app builds, everything's good. So if I'm going to build this, then I need the source code to be somewhere that the Azure DevOps service can get to. So I will add this to source control. What I'm going to do is pick a place to store the code. I could use GitHub. I could use Azure DevOps Repos. Both are excellent. To keep things really simple here, I'm going to use Azure DevOps. So I go to my Azure DevOps site, which is rogreen.VisualStudio.com. I'm pretty sure if you sign up now, there's a different URL naming scheme, but because I've had this for quite some time, it still uses the old one, rogreen.VisualStudio.com. I'm going to create a new project. This project will be Toolbox DevOps Show. Okay. I can make it private or public. I will keep it private. Then I can choose to use Git or I could use TFVC, and I can choose from the Agile, Scrum, and Basic templates here. I'm just going to leave the defaults, and I'm going to create a project. Now, this is my central location for doing DevOps, if you will. Okay. There are a bunch of pieces in here. There are Boards; this is where you maintain work items, your sprints, and your backlogs. Again, that'll be a good subject for a future Toolbox. Your Repos; this is where we'll have our code. And then Pipelines is what we're going to focus on. We're going to work on a build pipeline and a release pipeline. You can also do Test Plans and Artifacts, which is a way of managing your NuGet packages. But again, we're going to focus on Pipelines. Okay. We don't have code in here yet because we haven't pushed it in here yet. So we were in Visual Studio, we said add to source control, and we're going to publish this to Azure DevOps. So what Visual Studio will now do is, based on who I am, go out and find which Azure DevOps organizations I belong to. So I obviously belong to rogreen.VisualStudio.com; there are actually a couple of others I belong to. If you notice, this line here keeps moving, so it's going out and searching. It has found a number of organizations. I'm going to pick this Ro Green one. Then, after it connects and brings down information on that, notice that it suggests a repository name of Toolbox webapp, which is actually the name of this application. But in reality, my repo's called Toolbox DevOps Show. So what I really want to do is come in here, choose this Toolbox DevOps Show project, and that gives me the right repository name. I don't want to create a second repository. I want to put it in the existing repository because I created the project first. Now, if I come in here and do a refresh, here's my code in an Azure DevOps repo. Okay. Perfect. Now I'll be checking code into that. What I want to happen is, I want to set up a build. So the first pipeline I'm going to build is a build pipeline, also known as a Continuous Integration pipeline. There are a couple of places I can do it. I can do it from here; I can click "Set up Build". What I'm going to do is duplicate this page because I'm going to create two of these. So from the code, I can set up a build. I'm asked what type of application this is. Here are some of the more common ones: ASP.NET, ASP.NET Core, Xamarin, etc. There's a whole bunch more, Docker, Node, etc., and you can get additional ones in the marketplace, but we have an ASP.NET application here, and now what happens is a build pipeline is created using YAML.
Now, YAML. Some people think YAML stands for Yet Another Markup Language; YAML.org says it's YAML Ain't Markup Language. Either way, like XML, it is a configuration language, and what we've configured here is a build. So this says trigger on master. Anytime I make changes to master, if I check code into the master branch in my repo, trigger this build. Then grab a machine, a machine sitting in a pool of machines. So up in Azure, there's a pool of VMs ready to do my build. This one, windows-latest, says there's a machine running Windows Server 2019, also running Visual Studio 2019, so there's a pool of those. Pick one of those machines, then set some variables, then get the latest version of NuGet, do a restore, and then build using these variables. What's the solution? It's *.sln; the buildPlatform and configuration are Any CPU and Release. Then also run any tests I have. Remember, this application has three tests. So this says run those next, all on a server up in the Cloud. Now, before we actually take a look at this, I want to show you another way of building a pipeline. So if I come down to Pipelines here and create a pipeline, again, it asks me where my code is. It also gives me the opportunity to use the classic editor. There are two ways you can do builds: one is in YAML, one is in the classic editor. If you've done build pipelines before, you're more used to the classic editor. YAML is the new way of doing it, but let's go look and see what happens if we use the classic editor. First question is, where's your code? It could be in GitHub, it could be anywhere else. Here, it's in an Azure repo called Toolbox Show. I click "Continue," and now I can create an empty job and build from scratch, I can just go do YAML, or I can choose what type of application this is. It is ASP, where is it? There it is, ASP.NET. I click "Apply," and I'm going to rename this Classic Build Pipeline. It says grab a pool agent. The agent pool is the set of machines that are available. This one defaults to a machine running Windows Server 2016 with Visual Studio 2017. I'm going to choose Windows 2019 running Visual Studio 2019, which is equivalent to the windows-latest in the YAML. Then we're going to get the sources; we already said that. We're going to get the latest version of NuGet and do a restore, which again is equivalent to this YAML here: it installs NuGet and runs the NuGet restore command. Then this will build the solution based on some variables, Release and Any CPU, so far the same. Then this will also run the unit tests. So we're still doing the build, still doing the tests. So far, these are pretty much the same. One big difference, though, is that the classic build pipeline also publishes a symbols path and publishes the artifacts. Interestingly, the YAML build at the moment does not publish the artifacts, which means that when it's time to release and deploy this application somewhere, like to testing, there's nothing to deploy, because you built it but you didn't take the results and put them anywhere. Now, why is this? It needs to be said that YAML at this point is still in preview and will improve over time. So eventually, hopefully sooner rather than later, it will include that. So what do we do about it? We need to add some YAML to this. Well, there are a couple of ways we can do that. One is that we can say we want to publish the artifact, and we can view the YAML for that. So for any of these tasks that constitute the build, we can look at the YAML. So here's the YAML that will do that.
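If you're following along, the YAML generated for the Publish Build Artifacts task is roughly this; the artifact name 'drop' and the staging directory are the template defaults, so treat the exact values as assumptions:

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
    publishLocation: 'Container'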
So I'm just going to take this and copy it, bring it over here, and put it in the YAML for the build I want to use. Another way to do it: if I click "Show assistant", here's a number of tasks including, if we scroll down, the Publish build artifacts task. So if I double-click on this and then "Add," I can add that YAML in here. They're not 100 percent the same, so I'm going to use the one from the classic build. Then the last thing I want to compare between the YAML and the classic build is that the YAML one is automatically triggered whenever any code is written to master, whereas in the classic editor, you have to come to Triggers and turn that on by enabling continuous integration. Now, at this point, these two build definitions do the exact same thing. I don't actually need two of them, so I'm only going to run the YAML one. I'm going to save this and run it, and we'll see what happens. It triggers a build like this. So if I click on this, I can actually watch it. Now, granted, watching builds go is not anybody's idea of a good time, but when you're learning and you want to know what's happening, it's actually quite instructive. Also, this build takes about a minute and a half, so it's not too terribly bad. So what happened is we waited for an agent to be available, a machine somewhere in this pool running in the Cloud. Once that's available, the job initializes, tasks get downloaded, code gets checked out. Then we update NuGet to the latest version, then we do a restore of all our packages. Okay, that's happening. Then we do a build. Now remember, this build agent is a VM running Visual Studio 2019, so it's using MSBuild, obviously. So all of this is going to look very familiar because it's essentially the same thing that happens on your machine when you build a solution. So all of that should look familiar, and again this takes about 90 seconds. Your build might take longer; it might be a five-minute build, it might take 10 minutes to run all your tests. You're certainly not going to sit here staring at this. One of the benefits is that you're not expected to: you've handed this off to a machine sitting somewhere, a VM from a pool of available VMs, and that machine is going to do all the work for you. So all the tests are run, and in one minute and 19 seconds this works. Okay, so now if we go back here we can see the job succeeded in one minute 19 seconds. We look at the tests, we can see that all of the tests passed, and life is good. Okay, so now, we said that every time we check code into master, we're going to do a build. So let's see this happen again, or rather let's see it be triggered by checking in code. So if I pop over to Visual Studio, let's make a change to some code here. I'm going to come into the HomeController and change the ViewBag message, save that, and then I'm going to check this code in. Check this code in by clicking here. Put in a comment: updated About ViewBag message. All right, let's "Commit" it. Let's "Push". Now what this does is push the current branch, and this failed because, since I created a build pipeline in the repo, I need to do a "Pull" first. Usually when doing this demo I remember to do that; occasionally, I don't. So let's try this again: updated About ViewBag message. Now, let's "Push", and now this gets pushed into master in my Azure DevOps repo, which, because I've set up continuous integration, kicks off another build. Now we could sit here and watch this for a while, but while that goes, let's look at a couple of other things.
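For reference, after adding the publish step, the whole azure-pipelines.yml described in this episode ends up looking roughly like this. It's a sketch based on the stock ASP.NET template, so the task versions and MSBuild arguments may differ slightly from what the wizard generates for you:

trigger:
- master

pool:
  vmImage: 'windows-latest'

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'

steps:
- task: NuGetToolInstaller@1        # get the latest version of NuGet

- task: NuGetCommand@2              # restore packages for the solution
  inputs:
    restoreSolution: '$(solution)'

- task: VSBuild@1                   # build with MSBuild on the hosted agent
  inputs:
    solution: '$(solution)'
    msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(Build.ArtifactStagingDirectory)"'
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

- task: VSTest@2                    # run the unit tests
  inputs:
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

- task: PublishBuildArtifacts@1     # the step copied over from the classic editor
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
    publishLocation: 'Container'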
An additional thing that happens when a build is kicked off is that when it succeeds, or potentially fails, you get e-mailed. So here's the scenario. You write a bunch of code, you test it locally, you check it in, and then that builds the entire application. Now, think about this: on your machine you've got the code you're working on, you've got the tests you're working on. You've got the latest version of everything as of the last time you pulled everything down, but you're focused on your code. You don't necessarily want the entire application sitting locally. You're really interested in the stuff you're working on, or maybe you're doing a PR: you've got a branch and you're going to do a pull request. So everything works on your machine, and then you check code in, and what's built is the entire application. Now, it's entirely possible that you've written some code that breaks somebody else's code, or somebody wrote a test that you didn't get updated, so the code you wrote, which worked against the old version of the test, fails the new version. Anything can happen to break the build. So the idea is that you check in code and then you run a build of the entire application, not just your piece of it but the entire application as it stands with everybody's code. And this is still queued, it's not running. Okay, so what we'll do at this point is speed this up a little bit so we don't have to actually sit here and watch. We see that the job failed. If we had been watching carefully, we would have seen somewhere in here that VSTest, that task, failed; there's a hint in here I saw as it was scrolling by, some red, a test failure. Now, again, I'm not going to look at that, but if I come back here, I see that the run failed, and if I go look at the tests, it is because a test failed. Then, in addition, I get an e-mail that tells me that the build failed and that the reason the build failed is that a test run failed. Okay. Again, I can come back here, view the results, which takes me back to here. I can click here on "About" and see exactly why this failed. So again, the beautiful thing about a build pipeline is that every time anybody checks in code, we can do a build of the entire application and make sure that the things one person has created or worked on don't interfere with another's, and that the entire application builds. Now, if you're the only one working on this, it's still useful to do if it's a complicated application. Here, of course, it's a simple MVC app, and you might be sitting there saying to yourself, "Well, yeah, no kidding, Green. If you would just run all of your tests before you check that code in, you would have discovered this." So it is not the case that the beautiful thing about Azure DevOps is that it means you don't have to run your unit tests. I am just using this as a simple example. The beautiful thing is that once I check in code, the entire application gets built, and then we can see if it still works. Okay, I think this is a good place to stop. We'll pick this up again in our next episode, where I think I might show you one more thing about build pipelines, but then we'll move on to release pipelines. So I hope you're enjoying this so far. We'll see you next time on Visual Studio Toolbox.

Monday, 21 October 2024

App Center for WPF and WinForms

>> On today's "Toolbox", Matt and Wendy are going to show us the latest goodness being sent the way of WPF and WinForms apps. [MUSIC] >> Hi, welcome to Visual Studio Toolbox. I'm your host Robert Green, and joining me today are Matt Cornwell and Wendy Lee. Hey guys. >> Hey, how's it going? >> Good. Welcome to the show. >> Thanks for having us. >> Happy to be here. >> Matt and Wendy are in App Center land, and we're going to talk about App Center today. Now, App Center has been around for a while; we've done an episode or two on it on this show, and of course James has covered it on the Xamarin Show, and you think about App Center as DevOps for mobile apps, right? But it's expanding, and you guys are going to talk about App Center for WPF and WinForms. What? >> We are. But really, we're hoping to be App Center for distributed apps, right? Things that are not co-located where you are, and maybe have multiple instances around the world. >> Cool. So I guess that immediately brings to mind two questions. One is, how does that work? Which you'll show us. But two, why? Because we already have DevOps for WPF and WinForms; it's called Azure DevOps Services, right? So you don't have to answer right now, but as we talk about it, those are the two questions I want to cover: how does it work, and why? And then, which one would I choose as a WPF or WinForms developer? >> Yeah, absolutely. So App Center is actually very different, and it's a great supplement for folks already using Azure DevOps. We've realized there's not a great tool in this space that allows Windows developers to easily manage their releases, look at their crash reports, and just better understand who's using their application, and get the analytics they need to really deliver the best user experience. >> Okay. >> So if you're already using Azure DevOps, fantastic, continue using that; build your app in Azure DevOps, and App Center will help you release those apps that you build in Azure DevOps. So we really think about those two products as a complement to one another. >> So use one, and then use the other; use them side by side, basically. >> Yeah, absolutely. >> There really is a complement; when we say they fill different gaps, that really speaks to the why. As a Windows developer, there are lots of tools at my disposal, but I really believe there isn't a set of tools that brings this piece of the puzzle together in a comprehensive way, right? You can build things with Visual Studio, you can do your DevOps stuff in Azure DevOps, you can run it on Azure, but how do you manage your app when it's on a million devices in the wild? How do you see who's using it? How do you track logs? All these pieces are things that there really is not a turnkey solution for, and we hope that's what we're bringing to folks. >> Okay. Cool. >> Yeah. So for those of you who don't know what App Center is, we are an end-to-end solution for developers. Like you mentioned, James has talked about it on the Xamarin Show; previously it was iOS and Android, and most recently we expanded our platforms to support desktop apps as well, so WPF and WinForms applications targeting the .NET Framework as of today. So today we can go over the top three most requested services. I'll do a quick overview of what these services actually look like in App Center, and then Matt will show how to get this up and running in under five to 10 minutes. >> Cool. >> Yeah. >> All right. >> Cool. So you can see my screen here; this is a sample app that we have.
The screen you're looking at is just an overview of how to get started, with some SDK instructions that Matt will actually be going over in a bit. First off, we have Distribute. This is a service that allows you to easily manage how you release your apps to your end users. So over here you can see the releases I've made to the Beta Testers and to myself in the past day. I'm able to see how many downloads I have, how many unique downloads I have, and I can also sort this table by what's most relevant to me. Under "Groups", I can create a new distribution group. These can be your beta testers, your end users, really anyone you want to download your app. All I have to do is add their e-mail address here, click "Create Group", and then I'll see a group appear on this screen. >> In the "why" column, to keep sticking with that, this is I think one of the fundamental pieces that is very different from what Azure DevOps would offer. Getting that app to either testers or end users: there is not a compelling story for that that I'm aware of elsewhere, and so this starts to become super valuable. >> Compared to putting setup.exe on a jump drive, walking it over to another computer, and having somebody run it? You don't consider that compelling? >> I mean, it is totally compelling if you're co-located. But we found it really interesting; our team actually spans 15 or so time zones around the world, right? People working on these things. And so this kind of centralized push model, where I can control who sees what and when, and let things go out to customers, really is a game changer, to be honest. While we don't yet support it for the Windows story, on other platforms we also have in-app updates where you can basically auto-update a user from within App Center. >> Okay. >> Yeah. So looking at distribution, I can click here for a new release. Like Matt was saying, if you're using Azure DevOps, that's great: build your app in Azure DevOps, and then upload your app package here. We support .zip, MSI, and a lot of different package formats that can meet your needs. So once you upload your package here, I just click "Next", I specify some release notes and who I want to distribute my app to, and then I'm done. >> Can I do the build in App Center as well? >> Not yet. >> Okay. >> So right now you can build those apps in Azure DevOps. We definitely encourage users to use Azure DevOps for that, and if we decide there's a need to support that in App Center as well, that is something we can look into. >> Over time, eventually these tools will probably coalesce into a single location, I mean. >> You'd have to ask someone with more knowledge of that than I, but I don't think it's unreasonable to picture that happening. >> Okay. >> Yeah. So that's our distribution service. The next one we have is Diagnostics. This is a service that allows you to see when your apps are crashing, and it gives you a good overview of the different types of crashes and errors. I can click into a crash group and get a little bit more detail. I can look at my stack traces, look at the individual reports, see which devices are crashing, and get some other data that will help me understand what's actually going on once my app is being used by my end users. >> Right. Then compare that to the world where you've walked a drive around to a hundred people, and you're traditionally working through e-mails, asking people to send you their logs, right?
Even if you've got your logs going to a central location, looking at logs as raw data is very different from looking at rich data that has been grouped and augmented and presented in a way that helps you as a developer figure out what's wrong, right? There are some other tools in this space that do this; I don't think any of them hit the mark with this kind of complete end-to-end picture. So really, at this point you've got the ability to send your app out across the world, and you've got the ability to get that data brought back to you and presented in, I hope, a really useful way. >> Yeah. With features like events or attachments, we really allow you to customize what you need in a crash reporting tool to really understand what's going on in your app. So that's what we have for Diagnostics. Last but not least, we have Analytics. All the data you see right here is out of the box. All you need to do is integrate our SDK, and you'll start seeing these metrics flow in. You see how many users you have, the daily sessions, where your users are, what devices they're using: a lot of really great stuff for you to understand who your user base is, what features they're using, and how you can really deliver the best experience for them. So all this nice data is here, and then you can go into our "Events" tab and actually set specific events that you want to track. So if you care about a very specific set of features, or you care what buttons users are clicking on, those are things you can track using our SDK as well. >> Cool. >> Yeah. Those are the three services we decided to introduce for WPF and WinForms support first, and Matt will go ahead and actually show us how to get started. >> Yeah. If we want to flip over to my machine for a minute. >> So if all you did was use the distribution, it seems like that's pretty cool for starters. That's all you'd do to get the app onto other people's machines easily when they're ready for it, and you just push up the latest build. So if you then make a change to the app, anybody can go get it; they'll get notified if there are updates, right? So you just have a central location that's not sitting on a drive somewhere. >> Yeah. >> Right. >> That's pretty cool. >> I really do think it changes the way Windows developers can start thinking about shipping this stuff. >> Yeah. >> Right. >> I know we're talking about WinForms and WPF, so we're not necessarily talking about the Store. There are other, broader options out there, but I don't think anything targets this with this kind of focus from a really developer-first mindset, right? As developers, we want to make other developers' lives easier, and this is, I think, a really good first step for the Windows space. >> Cool. People can go to aka.ms/AppCenterWindows to learn more about this. >> Yeah. We'll have links to the documentation, and I'm actually going to just walk us through the initial documentation. Just so people can see it; I know as a developer, seeing it actually played out is a lot easier than reading the docs sometimes. So hopefully this will work and make sense. >> Yeah. Cool. >> Cool. So the aka.ms link will take us through the Getting Started page, which essentially says create an app in App Center, follow a couple of steps to integrate it, start making your app crash, and go from there. So here I am over in App Center. I have a few apps already. I'm going to create a brand new app.
Let's call this VS Toolbox, and I'm going to say it's a Windows app, WPF, and add a new app. >> So UWP is already supported, I see. >> UWP is partially supported. >> Partially. >> We'll actually talk about that at the end. We have plans to get UWP up and running all the way, but right now WPF and WinForms are the full story. >> Okay. >> Yeah. So I'm adding a new app; assuming the network and everything is working, there we go. So we're dropped into the same page we saw Wendy looking at a few minutes earlier, and here on my machine I've got Visual Studio open, and I'm just going to go "File", "New". Let's just close this and start fresh here. All right. I'm going to "Create a new project". I want to do a WPF app targeting .NET Framework; as Wendy said, right now we target .NET Framework. We do have changes coming. So as of today, you can use WPF with .NET Framework. Our next SDK release, which I believe is mid to late August, will target .NET Core 3 for WPF and WinForms apps. That won't be cross-platform; it's still Windows, because that's where the UI framework runs, but we are definitely moving toward those OS-agnostic platform versions. All right, so I'm going to get this app up and running. It's Toolbox; put it in there. Then to get this up and running, I think our main steps are just following the guide, which is adding the NuGet packages as it says right here, and putting this code into the start of my app to initialize the SDK with the information about my app. >> Including the app secret. >> Yes, that's your actual value. >> So App Center knows which app the data's coming from. >> Correct, where to route it on the backend. >> Is it actually a secret, or is it just an identifier? >> It's a really good question. It is just an identifier that is named a secret. I don't think we publish it in a lot of places in the UI; I don't think it's a common reference point. But it's a good question without a great answer. >> But if you don't treat it like a secret and people use your ID in their apps, or you use the same ID in multiple apps, then you lose the ability to really know which app caused the problem, right? >> Certainly. >> It's a unique identifier for the app. >> It is. Our intention would be that every time you create a new app here in App Center, you would get a new one of these and use it there. So I think as you get in and use it, it becomes pretty clear that an app is a sandbox for all the data: the distribution, the diagnostics, the analytics. So it's really up to you. I mean, theoretically, I suppose you could have the same app secret in several different executables, but I'm not sure that pattern would make sense for the use cases we've thought out; maybe there are other use cases out there. So I want- >> You just want to know how many users you have total and you're too lazy to do the math. >> One way to go about it. So I've included pre-release on our SDKs. Right now the SDKs are in a pre-release version; that will be changing. So we're going to install these. Also, I want to check, because I don't remember if I already picked it, and change the .NET Framework version to the one we found out this morning was on your machine, so we can make sure this runs as we do it. So I'm installing the Crashes package. >> You'll see that Matt is installing two separate packages because the services are actually modular.
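Putting those pieces together, the two NuGet packages are Microsoft.AppCenter.Analytics and Microsoft.AppCenter.Crashes, and the startup code the Getting Started page has you paste looks roughly like this sketch; the namespace and the zeroed-out app secret are placeholders for your own values:

using System.Windows;
using Microsoft.AppCenter;
using Microsoft.AppCenter.Analytics;
using Microsoft.AppCenter.Crashes;

namespace VSToolboxApp // placeholder project name
{
    public partial class App : Application
    {
        protected override void OnStartup(StartupEventArgs e)
        {
            // Replace with the app secret App Center shows for your app.
            AppCenter.Start("00000000-0000-0000-0000-000000000000",
                            typeof(Analytics), typeof(Crashes));
            base.OnStartup(e);
        }
    }
}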
The services are modular, meaning if you only want crash reporting and not analytics, that's fine; you can just import the Crashes NuGet package and everything will work just as well. But we do want everything in one spot, so it's easier if you do want distribution, diagnostics, and analytics all in one. >> It would also seem like this would be a handy place to create an extension that just says, "Set up my app for App Center." >> Yeah, something like that. >> It adds all the packages, adds that line of code, and then you just copy in the secret, the unique identifier, and you're ready to go. >> Yeah, that's a great idea. We would love to hear more ideas like that. Definitely drop that as a feature request, along with any feedback you have. >> We do keep our roadmap and our iteration plans on our public [inaudible] repo, Microsoft/AppCenter on GitHub, I believe it is. We'll put the official link in the details, but all the feature requests, the roadmap of where we're going, what's coming when: it's all on there; we're trying to keep that living in transparency. So we've got our two packages installed. So I'm going to go over to my main window; actually, I'm going to go to my App.xaml.cs, because that's where the start of the app is. So as a developer here, I'm just going to find the right keystrokes on this machine. That's not it. Let's see. There we go, generate overrides. I don't need all of them, I just need OnStartup. So I've overridden OnStartup so I can put the App Center-specific code in there. I'll copy in the usings, even though it would do it for me; I don't trust my fingers on these keys. Paste, and back one more time. Copied in; like you said, it's convenient that this is my actual value, and everything is good there. Paste it. I'm going to go ahead and build real quick before I do anything else, to make sure I haven't messed anything up so far that'll hamstring us later. The build worked. So I've got that up and running. If I run my app, we should see that it starts, and it's just a blank window. So far, nothing particularly noticeable. >> Oh, that's beautiful. >> I know, this looks like every UI I've ever designed in my life. >> Sharp white. >> Right here. All right. So for the next step, I'm going to cheat just a little bit rather than trying to type this out from memory, because I'd lose my mind. I'll just go ahead and put this in. Let me go to the MainWindow XAML and bring in a couple of buttons. Now, I don't normally build apps with buttons that say "crash them"; no judgment if that's your thing, but it makes this easy right here. So I've got the buttons, I've got the code-behind, and there we go, slip those in there. All right. So we've got all those in and running. As you can see, these three buttons actually exhibit a couple of different things in the App Center world. We have a concept of crashes and errors: when your application crashes and has to restart, like a divide-by-zero error or an unhandled exception. There is also a concept of what we call handled errors. Just think of it as a robust error log. Rather than the user's app crashing, you might identify that there was no network connection, or you couldn't find the right version of a file on a server: something that is manageable but that you want to know about. We allow you to track those as handled errors and see them in that same Diagnostics UI. They will actually give you the full stack trace and the other in-memory data at the same time, so you can go a little bit deeper than a raw log. So these are up and running.
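The code-behind for those demo buttons boils down to something like this sketch; the button and handler names are hypothetical, Crashes.TrackError is the handled-error call being described, and Analytics.TrackEvent is the kind of custom event Wendy mentioned earlier:

using System;
using System.Collections.Generic;
using System.Windows;
using Microsoft.AppCenter.Analytics;
using Microsoft.AppCenter.Crashes;

public partial class MainWindow : Window
{
    public MainWindow() => InitializeComponent();

    // Handled error: the app keeps running, but the exception and any
    // attached properties show up under Diagnostics in App Center.
    private void ThrowHandledError_Click(object sender, RoutedEventArgs e)
    {
        try
        {
            throw new InvalidOperationException("Could not find the right version of a file on the server.");
        }
        catch (Exception ex)
        {
            Crashes.TrackError(ex, new Dictionary<string, string>
            {
                { "Scenario", "Toolbox demo" }
            });
        }
    }

    // Unhandled crash: the report is written locally and sent on the next launch.
    private void Crash_Click(object sender, RoutedEventArgs e)
    {
        throw new DivideByZeroException("Demo crash");
    }

    // Custom analytics event, visible under the Events tab in Analytics.
    private void TrackEvent_Click(object sender, RoutedEventArgs e)
    {
        Analytics.TrackEvent("Clicked the demo button");
    }
}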
>> I think I'm going to need to add a using. >> You need the Crashes one, right? >> Yeah, and I'm just going to copy it from over here. Copy. Come on, you can do it. Put that in there. All right. So let's go ahead and run this app. This works from my machine in debug; it will work in release. And as we'll see in a little bit, we'll send it over to Wendy's machine and it should work over there. >> Okay. >> So if I come in here and I say "Throw Handled Error", I didn't put any feedback in the UI, but we can validate that it did its thing. When I go back to my app under Diagnostics, give it just a minute to work its way through the pipes of the internet. >> Obviously, if you don't have connectivity, they're all cached and then get sent up when you're connected. >> Yes. I believe that's the case for handled errors; it's definitely the case for crashes. As we'll see in a minute, when an app crashes, we actually send it on the next restart. >> Okay. >> What we've learned through our time in the mobile space is that trying to do any processing when something's gone horribly wrong in an app is a good way to make things worse. So we capture a text file, actually, of what happened. Then when the app next starts, it sees if there's a thing there, and we send that data back once you're in a healthy state again. So I'll click it a couple more times. It should show up relatively soon. You can do it. There we go. All right. So we see that today we had one error. If I come in here and look at that error I just got, like you said, this is all the stack trace that was there, because that's what we put in there. I see the reports. I can go into this one. I can see that with this one, we know it was running on a virtual machine. See the main thread. I didn't really give it much data here, but this is the general idea. Also, under Analytics, I'm going to see that I had one unique user show up today. >> Yeah. >> I haven't gone in and done much with the other data, right? I'm just in English, right? There's not a lot showing here, but this is the basic feedback. So the other thing to show is a crash. In this case, it's going to stop the app. When I restart it, it'll get sent off to App Center. >> Cool. >> All right. So that's the Diagnostics piece. What I think takes this to another level is being able to ship it over to somebody else. >> Right. Yeah. >> So with, I hope, just a few clicks here, we can do the same thing on Wendy's machine, as a real-life end user, and see the diagnostics and analytics come in. >> Let's see that. >> So let's do this. The easiest way is to just go ahead and publish this app. I'm just publishing it to my desktop for now. Let's get it on the "Desktop", make a new folder, put it in there, "Open" that, "Finish", let it do its thing. All right. There I am. I'm going to "Add" this folder to a zip file, clicking it a couple of times. So this is the virtual thumb drive that we're using in this instance. >> Right. >> We do support MSI, and we do support app packages from other platforms; for the sake of this demo, a zip is the easiest way to go. So I'm going to go over to "Distribute". >> If you build it in Azure DevOps, can you just point App Center to that build? >> You can't as of yet; you have to download it and bring it over. >> Okay. All right. >> But that is definitely the flow we want. In fact, for a lot of the other platforms that are a little bit further down the road, it just happens automatically. >> All right.
>> If you build in either Azure DevOps or App Center, there's nothing else to do. >> Right, okay. >> So we'd like to get to that point. >> Yeah, cool. >> So, say I haven't sent my app to anybody yet. I'm going to "Upload" the zip. Then over here, let's just give it "1.0.0". Then for the release notes: "This is my app, it doesn't crash." But it does. Works on my box. >> Yet. >> There we go. All right. So it knows who it's going to in App Center; Wendy, as a tester, we'll send it over. All right. So you can see it's there. So if we can flip back over to Wendy's machine, she should get an e-mail in the next minute or two. >> Right here. >> It's already there. >> So I'm very excited to install this app that does not crash. You'll see it takes me to the install screen. See Matt's very truthful comment here. >> It worked on my box. >> I click "Download", I'm just going to "Open" it, and then you'll see the app right here. So if I click here, let me just open up the executable. So it gives me some warning. >> Yup. Just to address this, yahoo. >> Yeah. >> Yeah, it's yahoo. >> I'm going to install that, and now I have the app on my machine. >> Cool. Nice. >> So why don't you give us a handled error and then a crash, maybe. >> All right. Let's see. So we hit "Handled Error". Let's do a "Stack Overflow" crash. >> It takes a second to [inaudible]. >> Now, open the app back up. >> Let's see. >> We didn't give you access to the app as a collaborator, so as a tester, I don't believe you're able to see all the diagnostics information, because that's my business; I just asked you to test it. >> Yeah. >> So if we flip back to my machine one last time and I go back into Diagnostics, we should see in a minute, once they process through the pipeline, that we have a number of crashes and a number of handled errors coming from Wendy's machine. There's the issue you did, the stack overflow exception; I did the other one there. Under "Analytics", we should now see that I have two users coming in and letting me know what's going on. So take that and extrapolate it out across tens, hundreds, thousands; we have apps with millions of users, tens of millions of users. This starts to really deliver that distributed debugging, or call it historical debugging. Clearly, we're not actually debugging, but we are providing information that will allow you to understand how to prioritize what to fix, and hopefully have enough information to know how to fix it. >> Right. That is so cool. >> Yeah. >> Awesome. >> So this is in preview, available for anyone to use. >> It's out there in the wild today. >> All right, aka.ms/AppCenterWindows for more information. You guys have got to check this out. Give it a shot, and let these guys know how you like it and what new things should go into it. This is really cool. >> Awesome. Yeah. I'm pretty excited about it as someone who has spent a lot of years in this space before I got to App Center. We really are doing things that are not really brought together anywhere else, which is awesome. With the coming support for .NET Core 3, that's another step in the right direction. We've got some future plans there to take that, I think, a little bit further, even outside of the UI frameworks. But we're not quite officially there yet, so we'll leave that for another day. >> All right. Cool. Thanks so much for coming on the show. >> Awesome. >> Thank you very much for having us. >> Thanks for having us. >> Hope you guys enjoyed this, and we will see you next time on Visual Studio Toolbox. [MUSIC]

An Overview of the Power Platform

>> On today's Visual Studio Toolbox, April joins us to give us an overview of the Power Platform, and we start to explore how Visual Studio developers can join the fun. [MUSIC] Hi, welcome to Visual Studio Toolbox. I'm your host, Robert Green. On today's episode, we're going to start our look at the Power Platform, and April Dunnam is going to join us. Hi, April. >> Hey, Robert. >> How are you? >> Pretty good, happy to be here. >> Happy to have you. This is the first in a three-part mini-series on the Power Platform, which I think a lot of people have heard about, certainly seen, maybe watched a keynote demo or so. But I keep hearing these phrases like no code and citizen developer and easy, and as a .NET developer, a Visual Studio guy, my first thought is, well, this is no code; what role do I play? The answer is that there is a role for Visual Studio and .NET developers in the Power Platform, and I think it's a very important one. What we want to do in this series is direct it at .NET developers and explain to them what the Power Platform is and where they fit in. We're going to do that in these three episodes. Episode 1, today, will just be an overview of the Power Platform with April. Next week, Greg Hurlman will be on to do more of a deep dive and show how to build apps that include Web APIs written by .NET developers. Then in our third and final episode, we're going to do a one-hour live show where we actually build a Web API that is then consumed in Power Platform apps. I think if you, the people watching this, just watch these, you'll really understand it. I don't think we're necessarily breaking any new ground, but we're approaching this from the .NET developer side instead of the business side building apps. Same content, just a different spin on it. Hopefully at the end of the three episodes, you'll say, "Yeah, that's something I definitely want to do, and now I know how to get started." That's the goal, and with that, you're up. >> Awesome. Thanks for having me. This is something I'm super passionate about, as a .NET dev and SharePoint dev myself who got into the Power Platform. I'll be sharing the story of how, like you said, we often hear the Power Platform talked about in terms of being no code, low code; so where do we as pro devs and .NET devs fit into this? Actually, I'm going to share a screen here with the slide deck I have, which shows a really good visual of this whole story, the holistic story of the Power Platform and what it has to offer. Just setting the stage here: when we talk about the Power Platform, if you haven't really dug into it a lot, there are four main products that are part of it. There's Power BI, which you might have heard of, for your dashboards and reporting. We have Power Automate, which is the glue, where we can do workflow automation and even robotic process automation. Power Virtual Agents is one of the newest pieces of the Power Platform, for building low-code chatbots. And then Power Apps is probably the one you've heard talked about the most in some capacity among traditional developers, because it allows us to build low-code applications. Those are the components of the Power Platform, but what really makes it powerful is that, yes, it can tackle those no-code and low-code scenarios for citizen developers, but there's also a great code-first, pro-dev integration story with extensibility points that we can leverage.
It's all about really increasing your efficiency as a developer and letting you focus on what you actually want to focus on. You don't have to worry about the UI, for example. You can focus on building that .NET API or whatever you're trying to build in the backend, let citizen developers handle some of that frontend stuff, and tackle these things together. >> You get to own the business logic and the data layer, and not have to do those boxes and grids anymore? >> Exactly. It's all about making your life better and easier and letting you focus on what you really care about. What this really ties into is the whole concept of fusion development. You might have heard us talk about that a lot lately. It's nothing that's really new; fusion development teams have been around a while. I think, what's the stat, about 84 percent of companies have some kind of fusion development team. It's really all about IT pros, devs, and low-code developers, citizen developers, whatever you want to call them, working together to create software faster. There's truly something in this for everyone. You can have those low-code devs, who know the business process and the problem they're trying to solve really well, get in there, determine the requirements and what's actually needed in the software, and even prototype the UI and how they want it to look. All you have to do as a developer is help bridge the gap in some of the planning and security, pull in data, for example from internal systems and APIs that you might have, and facilitate that extension and integration. It really makes the process very user-friendly and collaborative, and helps everyone focus on what they actually care about. >> Cool. >> Yeah. That's the metric: about 84 percent of companies do have these fusion teams, like I said, and there are so many benefits to embracing this fusion development approach. As an example of how this could work in action, I really want to focus here on Power Apps, because that's the application development piece of the Power Platform. You have the ability for an end user to come forward with the requirements they want to solve and have a citizen developer build that frontend in a Power App. Then if you need, like I said, to integrate data from one of your legacy or internal systems, you as a pro dev can plug in, add that functionality, and have this seamless process. How we do that plug-in is with something the Power Platform offers called custom connectors. A connector is something the Power Platform has which is just a wrapper around an API that lets us communicate between services in the Power Platform. It's one of the things that really makes building applications on the Power Platform so easy, because we have about 500 different built-in connectors that we can just plug and play right now. These are Microsoft services, as you might expect, SharePoint and all that, but also third-party SaaS services like Box and Salesforce. We even have the ability with the connector model to plug into on-prem data with an on-premises gateway, and to develop and register your own custom connectors to build building blocks for citizen developers.
If you have an API, maybe you want to extend a legacy application that has sat in an infinite backlog forever, to add some additional functionality or screens to it. What if you could just expose that API as a custom connector and let your citizen devs consume that information and build out a simple three-screen Power App to connect to that data? It saves you a bunch of time; all you have to do is make an API that you've probably already built available, and you can do so much more with it. >> If I have a simple Web API that talks to SQL Azure, for example, would I connect that to the Power App, or would I just let the Power App talk directly to SQL Azure? >> You can technically do both. There's one of those 500 connectors that the Power Platform offers for Azure SQL. You could do that, but if it's any other RESTful API that you have, maybe it doesn't talk to SQL, maybe the data is stored somewhere else, whatever it might be, and you want to interface with that API, you can just register it as a custom connector to your service to have that building block. The cool thing is you build this connector once, and you can use that same connector in a Logic App (which is what Power Automate is actually built on top of), in Power Apps, and in Power Automate. You can use it for workflow scenarios, Logic App scenarios, and application scenarios in Power Apps. >> Can I hook it into Power BI? Is Power BI considered a Power App? >> Yeah. You can have Power Apps embedded in Power BI, and by the way that's designed, we can integrate that all in there. What's going on behind the scenes with the connector model, just to level set: you have the connector itself, which is really what knows your Web API, its operations, the host details, and all that. But when a user goes to use your connector, say inside of a workflow in Power Automate or an application in Power Apps, they create a connection to your connector. That's what holds the actual credentials and the reference information to the connector, and it facilitates the communication to your Web API. >> Okay. >> With custom connectors, there are a few different ways we can create these, and I know you'll be getting into this in some of the future sessions in more detail, but we have a built-in wizard-like experience, and we can also import directly from an OpenAPI definition, either a file or directly from a URL. It's really pretty easy and seamless to create one of these custom connectors if you're already using OpenAPI. >> Then if I already have my Web API sitting up in an Azure service, is there a way to connect directly to that service? >> Yes. That goes into the integration with Azure API Management. There's a native connector and integration with that. If you're using it, you can connect directly, and it will export a connector for you from your Azure API Management-hosted APIs. >> Cool. >> I thought I'd show briefly just how easily this works. You have an API. I'm here in the Power Apps portal. All I did is go into Custom Connectors as the developer of the API and register a new connector. I went to New Custom Connector and created it from an OpenAPI file that I'd already exported (a trimmed-down sketch of that kind of file is shown below). It takes you through this process here. It pulls all that information in, and you can specify a name for your connector, a description, the scheme, and all of that. Then here on the definition, these are all of the actions for the API.
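To make that concrete, a trimmed-down OpenAPI (Swagger 2.0) file for an API like the plant-inventory one used in this demo might look like the sketch below. The host, path, and operation names are invented for illustration; the real file is whatever you export for your own Web API.

swagger: '2.0'
info:
  title: Farm Plant Inventory API
  version: '1.0'
host: plantinventory.example.com
basePath: /api
schemes:
  - https
paths:
  /plants:
    get:
      operationId: SearchPlants
      summary: Search plants by name
      parameters:
        - name: search
          in: query
          type: string
      responses:
        '200':
          description: Matching plants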
It automatically imports that for you. If you need to override anything, we can do that here from this portal. We even have built-in testing before we try to use this inside of an application, to make sure it works. We can do a simple test of the operation, and it will return something; since it'll have an accept header, it's probably going to come back, for example, with maybe an error message here. It lets me know what happened and whether it worked or not, so I can do that built-in validation. Now, that's all I need to do. As a pro dev or a citizen dev who wants to come in and use this in an app, I have this app here. I'm just going to open it in edit mode. You'll see how easy it is to integrate the connector as an end user inside of an application you're trying to build or extend. This is an application for farm plant inventory management; that's really its purpose. The API we have interfaces with our database to retrieve information about different plants. If I want to use this, I just have to go up to the data sources and do a search for that API, that custom connector. Add that in with one click, and as you can see, we now have that API in there. Then if I want to do a search, all I do is, on this button, have it call my API and pass in what I want to search for. That's going to return the results for me. I just had to write this very simple Excel-like formula as an end user. Now it's returning data while respecting all the security and everything that we put in place with the API, so we don't have to worry about that. Now I've just unlocked all these possibilities to be able to search for something there. It's really pretty seamless. >> Wow. Now, you could obviously build this app as a .NET app, whether it's a Web App, or WinForms, or WPF, or WinUI, or Xamarin, or Blazor, or blah, blah, blah. You've got the same Web API that you would use for each of those. Here you just register that API, create a connector for it, and then your low-code/citizen developers come in and make this app themselves; or, if you wanted to, you could build it yourself the more traditional way.
The other thing that you need to point out about Power Apps is, this is a Canvas application. The intention of this being to be used within the firewall within your Microsoft 365 Tenants. Decision point where you might just go, the custom route, the traditional way would be if it needs to be anonymous or external or something like that. But for those enterprise or just a specific application needs, Power Apps is pretty hard to be in a sweet spot because it's just so fast to build and to get something up and running. >> If I wanted to build one that my wife and I would use, I could do that and we've got 365 family. Is that the equivalent of a tenant that we can use Power Apps in? >> The family doesn't but what you can do is use the Power Apps developer plan. That's a new plan there, and we can test and build all the Power Apps, that we want their most in 365 plans like the Business Premium and all the eight skews and all that, include a license for Power Apps. We actually have that myself, my husband and I, and I built an application of Power Apps to do manage when he needs to do the lawn and when he needs to water and all that stuff. We have actually did that with Power Apps. >> Cool. >> There are other integration points to that, I thought I'd point out. That's a big one, especially for .NET developers is being able to consume your APIs in there and saving time. But there's a lot of other extension points that I don't want to point out. Let me just pull something up here real quick, because this is a great slide here. Sorry for all the back-and-forth. There you go. This shows the whole scope. Besides Custom Connectors, what else can we do from a product standpoint with a Power Platform, specifically, Power Apps. With Power Apps we can even integrates with things like IoT Edge. It has built in mixed reality, which is pretty cool. This is one of the things I did at Hackathon. It has a 3D viewer mixed reality control where you can place objects in mixed reality and even measure and do things like that. We can have geospatial mapping, even integrate with the HoloLens, the applications that we build in Power Apps, again without having to worry about a lot of heavy code just very quick use. Artificial intelligence. This is another good integration point from a product standpoint. Now, the Power Platform has some built-in lightweight AI capability with some pre-built models. There's going to be some extension points where that might not cut it, you might need to do something even more customs to being able to integrate with Azure Cognitive Services easily, and robotic process automation, in even GitHub and Power Apps Portals API Management. There's just so many possibilities to be able to extend what we can natively do with the no code, low code by integrating some professional development skills. The other great thing that I always like to bring up and we're talking about this, "Man, this is just like another thing I have to learn that if I want to start integrating or working with this." Well, it's really not because the way that the framework works or how everything works here is, is you can leverage the existing tools that you already know and just plug and play into the Power Platform. That stuff I was showing with the custom connectors. I really didn't need to do anything. I had to go, export it. It's even easier if you're exporting from Azure API Management. I didn't have to do much there. I just used the existing tools I have to create and host my API. 
But another extension point is something called Power Apps components. When we were looking at that screen, you noticed that we had a few built-in controls. Well, we have the ability to inject pro-code, like TypeScript, HTML and CSS, JavaScript, all of that, to create custom controls. If we need, say, a custom map widget or whatever it might be that's not natively there in Power Apps, we can use TypeScript like we're used to and build that and integrate it within our apps. There are built-in integrations with Visual Studio Code; we just released a VS Code extension for the Power Platform to help you manage your source code for those Canvas apps and handle things from the administration side and all that. So there are tools you're already familiar with that you can use to integrate with it, and the command-line tooling, and all of that's built in. One thing I did want to point out is the underlying data structure and how this works, because the Power Platform is a platform and it sits on something called Dataverse. That's more than just a database we can use to store data in the Power Platform. It really facilitates all kinds of things: we're able to do standard data operations and transactional work. That's also where the assets that you create are stored. The Power Apps that you build, whether you use Dataverse as the database to hold their data or not, are stored in there. We have built-in APIs to consume information and do automated deployments and things like that with Dataverse. It's even more powerful when you're integrating with it. Dataverse itself, the thing I always like to point out for the pro-dev audience, is that it actually runs on Azure and extends with Azure. It has built-in integrations. You're able to extend it with Azure Functions, Event Hubs, Service Bus, Kubernetes, and it has support, with dataflows, for all different types of data to be pulled into Dataverse. We can handle relational data like SQL, non-relational data like Cosmos, and all that. It really is pretty robust and supports a lot of extensions. >> Right on. This is probably a good place to stop for Episode 1. It's been a great intro, and I think people should at this point hopefully be itching to learn more. Again, we'll come back next week and see how you actually build some apps using the things that you just showed us. In the meantime, where can people go to learn more and get involved with this? >> We recently, a team here on the Cloud Advocacy side, did a fusion development learning path on Microsoft Learn. If you go to aka.ms/learn-fusiondev, that's a really great path walking you through the fusion development approach, how you can integrate custom connectors, and all that. >> Right on. Thanks so much for this. I know we could talk for hours on this, but I think 20 minutes was a really good overview. Thanks so much. Next week, we will start diving in. Then a couple of weeks after that, we'll get really hardcore. Thanks so much, April. >> Yeah, I think it was fun. >> Hope you guys enjoyed that, and we will see you next time on Visual Studio Toolbox. [MUSIC]

Building Bots Part 1

It's about time we did a Toolbox episode on bots. Hi, welcome to Visual Studio Toolbox. I'm your host Robert Green and jo...