
Wednesday, 23 October 2024

British Airways – Visual Studio 2017 and Microsoft Azure Customer Story

British Airways is one of the best-known brands in aviation. We have about 40,000 employees, and they can be quite a difficult bunch of people to reach. We had our new CEO come in, Alex Cruz, and he wanted to make sure he could get his message across to everyone, wherever they were. All the information was spread across multiple different sources, and they had to use a number of logins. A number of senior employees do have British Airways-issued equipment, but the vast majority of them don't, so we wanted to create a level playing field in that sense, and creating the app is the platform to do that. Many of our apps in the recent past have been built on the iOS platform; it's a default platform, but we've had trouble moving to other platforms. We used Microsoft Visual Studio 2017 to develop a Xamarin.Forms app that gives us 95% code reuse across all of the platforms we were targeting, which were iOS, Android, and Windows Phone. Visual Studio Team Services and Xamarin Test Cloud in particular allowed us to iterate quickly through several releases of the application, because we were able to check as we went along that our tests were still passing and the product was still robust enough to deploy, and we were able to add extra features quickly without worrying about lengthy regression tests. For our back-end services we use Azure App Services and also Azure SQL Database to provide the data resources to the application and to provide integration with our other back-end systems. To achieve single sign-on for the application, we have federated across from Azure Active Directory to our Oracle Access Management, and that was fantastic; that was the equivalent of putting a very old car on the moon. It feels like a great achievement to know that they're finding a better experience. It's working really well for us, and we hope that this is the beginning of a very fruitful relationship.

Brad Anderson's Lunch Break s7 e15 Outtakes

- So a lot of corporate leadership coaches ask this question. I'd like to get your feedback on it. - Oh, yeah? - If you had to pick, you know, one Golden Girl, which one would you be? - Rue McClanahan, she was the sassy one. - Uh huh, that would be you. - That'd be me. - It's lunchtime, and this is Brad Anderson's Lunch Break. Two of my favorite things about Microsoft are the smart people that constantly visit campus and the great fleet of shuttles. Whenever I can, I try to take advantage of both of these things, and I grab lunch with some of the tech industry's best and brightest. (upbeat peppy music) Okay, so let's have a little fun. You and I can play this game called This or That. - [Nicole] Okay, fantastic. - [Brad] Epic. - I said fantastic. This could go off the rails. - Oh, this is fine. So I'm gonna give you, like, two topics, and then I'll give you some kind of word or description. You're gonna tell me: is it one, two, or both. - Okay. - Okay? So your two terms are transatlantic flights. - Okay. - Or World of Warcraft. Okay, do you know much about World of Warcraft? - A little bit. - [Brad] Alright. - I had to stop playing so that I could finish writing my dissertation, yes. (laughing) - Okay, alright, here we go. You would do something unspeakable in exchange for an upgrade. - Transatlantic flight. - Okay, I think my boys would say both. You have too much personal baggage to be able to sit comfortably. (laughing) - Um, World of Warcraft. - Walking the floor at a tech conference. - Yes? - Or joining a fight club. People get more aggressive the longer they're there. - [Tim] I guess the fight club. - Yeah, I don't know. That could be both. I've seen people at the tech conference before. - It could be both. Yeah, it could be both. - There ya go, okay. You go back home and brag about how many sessions you were in. - [Tim] That could be both. - Yeah, so a big part of the work that you do is really deepening and broadening the partnerships that Microsoft has.
- Right. - And so, you know, there will be a lot of CIOs and senior leaders who will be watching, you know, what we're talking about here. How does the work that you do impact their lives and make them better, and their organizations better? - [Peggy] I think we've had more of a focus over the last several years on the partnership aspect, and kind of moving from this transactional relationship to more of a strategic partnership relationship, and that's opened up a lot of doors for us and our partners. And rather than, you know, approaching it from a transaction, we can say, you know, what do our joint customers need? What problems are they trying to solve? And then that helps us look at areas of collaboration that we can do with partners, and I think the opportunities for both sides have increased. - [Brad] Totally. - Yeah, we take a lot of the friction out of it for them, so if they have to do something manually that sort of integrates our two products, our partner's and our own, we can put them together in a seamless fashion. Um, that's sort of a win for all three of us. - So over the years you've interviewed a ton of, like, the tech industry's heavy hitters. - Yeah. - Right? - Yes. - Any one of them leave you totally like, wow, that was amazing? - [Tim] I did interview Steve Ballmer. - Yeah. - Um, so that was quite fun, and his character was not quite what I expected when I first met him. He was uh-- - Lot more low key in an interview? - Yeah, I guess so, yes. Yes, and I liked him actually. I think he was full of life and full of enthusiasm. I really appreciated that. - Yeah, well, people know Steve for, kind of, like, the stage presence, but when you're in a small group, you know, and you just have this conversation, he really is a great-- - Yeah, and very sharp. - Exactly, just a great guy. I always loved those times I got to spend with Steve like that. We talk a lot about this transition to the cloud and the culture change that it is for our customers.
We've had to go through that internally. You know, we've got hundreds and hundreds of engineers who have built on-prem products for, you know, for some of them, for decades. Now some of that skill set is applicable in the cloud, but we've had to fundamentally relearn different aspects of what it means to build cloud services and to operate them; you know, the architecture is different than a client-server architecture. So we've had to go through that same cultural transition internally that many customers right now are facing as we're helping them and pushing them to move to the cloud, but we've had to go through it first. - [Jeff] I have started, and I think people would be surprised about this from the SharePoint guy: I start, when I'm sitting down with a CIO, I'm on my phone, let me show you how. Look, here's, you know, see, all my email, my calendar, but wait, it gets better. Here's OneDrive, bringing up all my docs. Yeah, that's pretty good. Here's how I share things. You know, let's keep going. Let me show you the new SharePoint app, where I can show you my intranet, and I can show you my collab sites and the publishing sites, and search on people and documents, and that blows people's minds. They say, wow, if I could deliver all that just to our users, they'd be great. And they say, on top of that, it is secure. - [Ben] This stuff isn't really a technology barrier. This isn't really a technology change. It's a cultural change. It's a huge change. - Massive culture change. - And you know, if we talk about tools or if we talk about technology first without actually addressing the fact that this is about people, because we hear every company has their Uber moment coming. It's actually about how do you think about your business in terms of what your customers want and will want into the future, and what your people can fundamentally deliver. So it's really about that human element, and just spending time kind of thinking about the future, and your future. (peppy whimsical music)

Bloom Open Space (Preview)

Bloom came out of a little experiment that we had, where it was just creating a shape on a screen, having it expand and disappear. Really, it's just a very simple tech demo. What I've liked about this project and this space is that you're creating things in a virtual world, but the people outside are actually in the real world watching it as well. It's not quite so divisive. It really is truly mixed reality. It's blurring the lines as to where one starts and one ends. I think it's a bit of a sense of magic, actually. You can also start to interact with other people. It's quite magical, actually, where two of you start creating things in the space around you. I don't think anyone's experienced anything like that before.

Azure DevOps & Azure Pipelines Launch Keynote

Welcome to the launch of Azure DevOps. My name is Jamie Cool; I'm the director of program management for the Azure DevOps team. We've got a ton of exciting things to talk about today. Here with me I have Mr. DevOps himself, Donovan Brown. Thanks for having me, Jamie. I'm really excited to get started, because some of the demos we're gonna show today are really gonna highlight the power that we have with these tools. What I want to make sure people realize is that you can also be a part of this conversation; it's not just the two of us. If you're out there on social media, make sure that you use the hashtag #AzureDevOps, and if you want to learn more about what we think about DevOps here at Microsoft, make sure that you go to azure.com/devops. Right now we're gonna be talking about DevOps, you know, for this next hour, so I think it'd be important for us to level set on what we actually mean when we say DevOps, and I've been led to believe that you might have a few opinions on this particular topic. A few. And it's really important to level set, because if you ask several different people what DevOps means, you're gonna get several different answers, so it's really good to make sure that we level set on what we believe DevOps is here at Microsoft. We believe that it's the union of people, process, and products to enable continuous delivery of value to our end users. The most important word of that definition is value. Too many people focus on just moving files or automation, but it's the value that you're trying to deliver to your end users. What I've found is that it takes the products to support the process that your people have chosen; that's what empowers your people to deliver that continuous value. What I think is even more important than that is that when you start that digital transformation, when you're on this journey, the gains that you get are staggering and sometimes almost unbelievable. As I go and I visit our customers all over the world, those that have started this transformation are deploying sometimes 47 times more frequently than they were before, and historically there used to be "fail fast," but they're not failing fast, they're succeeding fast, because they have a seven times lower failure rate. Yeah, you know, when I hear some of those numbers, on the surface they just seem kind of staggering and crazy, but when I step back and think about it, I realize, you know what, I've experienced that firsthand. It wasn't that long ago that at Microsoft the average ship cycle was two years, sometimes longer. Now I look at just last week: last week we shipped hundreds of changes just in Azure DevOps itself. You do the math between a couple of years and hundreds of changes a week, and you get these big numbers. And it's not like the folks today are just that much better than the folks ten years ago; it's that we've gotten smarter, we've learned, we have better processes, we have better products, and that's a lot of what we're going to talk about today. What I'm excited about is that we're now providing the products that we use, that we've invested in, and bringing them to you, so you're going to be able to take advantage of them. And that's what Azure DevOps is. Azure DevOps is a set of services that span the entire DevOps lifecycle. You can use them all together for a full solution, or if you just need one for a particular problem, you can use just that, or you can put it together with other tools that you're already using. Regardless of how you use it, it's going to provide a lot of value for you, and I know that because inside of Microsoft we get that value; we see it regularly. Just in the last month, over 80,000 folks here at Microsoft have used Azure DevOps to deliver our products to you, from the smallest all the way up to some of the largest. So let's start by drilling into what we actually mean by Azure DevOps, and let's start with Azure Pipelines. Azure Pipelines is
really the heart of the DevOps process. It's a CI/CD system, a continuous integration and deployment system, and you use it to keep the quality of your application up, to make sure every change that you make is taking you forward instead of backwards. That's really the key thing if you want to be able to ship whenever you want while keeping your code quality high. You can also use it as a launchpad to get your code up into the cloud, whether it's our cloud, Google, AWS, or any other, because Azure Pipelines is a system that works for any language, any platform, and any cloud. We have hosted pools and machines of Linux, Windows, and Mac that we manage for you, so that you don't have to, because everything we're trying to do with this is to make your life easier as a developer. And Azure Pipelines doesn't stop with what we've shipped; it's highly extensible. We have an ecosystem of over 500 extensions that have been contributed both from the community and from our partners, from Slack to SonarCloud. You know, one of the things that always excites me the most is when I see a new extension showing up and seeing what someone's been able to do to extend our product. Now, you can use it for any type of application and any type of deployment mechanism, but containers are increasingly becoming the unit of application deployment, so Azure Pipelines works great with containers. You can use it to build your containers, to test and validate your containers, to publish them to whatever registry you want, and deploy them to whatever service you want, including Kubernetes. Now, it's a lot more interesting to actually look at it than to talk about it, so Donovan, can you give us a walkthrough of Azure Pipelines? So Azure Pipelines is the CI/CD system that we have that can build any language targeting any platform, and that's what excites me about it. As Jamie said earlier, we give you access to Macs, to Windows machines, and to Linux machines; there's nothing to install, you just give us code and we'll build it for you. Here I am on my dashboard, and what I'm going to do is click on this icon for this particular Spring MVC application; this is a Java app. Over here I happen to have a Node.js application; again, any language on any platform. Clicking on this icon is then going to take me to my build results for some of the builds that I've been running previously. If we look over here, I can see exactly what branch I was building, and I can see whether I was actually working on a pull request or not. Clicking on one of these builds is now gonna take me to a summary page and a log that I can quickly review. As we can see here, I had an error on this particular build. If I go ahead and drill in here, I can see this log file, I can see this great map on the right-hand side, and if I scroll down through this huge POM file build, you'll see that we actually had some errors down here, so it's very easy for me to diagnose. But this isn't the only way that I can see my test results, and I'll show you some other cool ways of doing that in just a moment. Another thing that we can do is use this histogram at the top and start to see other builds that have already run, and luckily we've had some that have succeeded. As you can see here, I have the same log, but better than that, I also have this great summary. The summary here allows me to see my test results, it allows me to see any associated work items, and I can come down here and also see any deployments that may be running. As a matter of fact, we've already successfully deployed this build into our dev environment, and it's pending my approval to go all the way into our QA environment. You get to see the real power of our pipeline system when you go back in and start to look at editing one of these; it's a really nice graphical user interface that allows you to simply drag and drop. Again, I want to highlight over here: if you look at this dropdown, you can see all the different hosted pools that we give you.
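The hosted pools picked from that dropdown can also be selected in the YAML definition that gets exported later in the demo. Here is a minimal sketch, assuming the Node.js app shown on the dashboard; the vmImage names reflect the hosted images available around this launch and are assumptions to verify against the current hosted agent list:

```yaml
# Minimal azure-pipelines.yml: choose a Microsoft-hosted pool and build.
# The vmImage names are assumptions; check the hosted agent documentation.
trigger:
- master

pool:
  vmImage: 'ubuntu-16.04'   # swap for a macOS or Windows image to change platform

steps:
- script: |
    npm install
    npm test
  displayName: 'Install dependencies and run tests'
```

Changing only the `vmImage` value moves the same build onto a Mac or Windows hosted machine, which is the "nothing to install" point being made here.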
What I mean by hosted is there's nothing for you to install; all these resources are provided for you, and they come in a variety of different platforms. You need to build your iOS application? We have a Mac sitting in the cloud ready to build that for you. You want to build your containers on Linux? We have multiple Linux images and containers running for you, so you can go and build those images, and once they're built, you can deploy them to Docker Hub or to ACR, wherever your images need to be stored. In addition to that, where can you get your code? You can get your code from the most popular source control systems. You're doing open source work and you want to put your code in GitHub? No problem. You want to have private repositories because you want to secure your code and you don't want anyone else to see it? You can use our Git support as well. You already have your code in Subversion or some other form of source control? Don't worry about it; you can still use Azure Pipelines to get your code wherever it exists today and be able to start running CI and CD against it. Again, adding new tasks is very simple: you simply click on this plus here, and you can drag and drop from hundreds of tasks that we have available for you right out of the box. What I really like about the out-of-the-box tasks is that they're all open source; you can go and see exactly how we wrote all of these hundreds of tasks and use this as a way to learn how to write your own. If you already know Node.js or PowerShell, you know how to write these particular tasks. But before you go off and write your own, I'd encourage you to go and look at our marketplace. As Jamie said earlier, hundreds of our partners have gone off and written extensions that add a lot of value, and all you have to do is simply click on one; you get them for free and add them to your pipeline, and you have new value, not only in your build and your release, but new hubs and widgets and all the other cool places that you can extend our system. Now, I love our graphical user interface, but I know a lot of people prefer to use YAML; they want everything in source control. What's really nice about this is you can just simply click on this link here and we will export the YAML for you, so you don't even have to write it. What I really like about this is I actually keep both of these up, because I want the best of both worlds: I love the visual representation, and I love the ease of editing. I can come in, for example, make a quick change to a particular task, and then export just the YAML for that task. I can copy this to my clipboard, run over here to GitHub where my code is currently sitting, find that YAML file that I created earlier, and edit it right here inside of GitHub. So let's just go ahead and make a quick edit; I'm gonna paste the code. I just want you to see that we can have a really cool pull request, so let's make some changes. I'm gonna come down here and save this into a different branch, because I want to show you some cool stuff. So I'm creating a patch branch for the change to my YAML file. Now, I've wired up Azure DevOps to my GitHub repository so that every pull request that is submitted will actually have to run that build and succeed before I'm even notified. So when I create this pull request, as we can see down here, we now have a build that is queued; it's currently in progress. If I go ahead and click on the details, I will be able to jump right back inside of Azure DevOps, see the pipeline that it actually started running for me, and in a moment here, once it connects to the agent, I'll be able to see a full live log of everything running against my particular build. That way I can get quick verification that the pull request is good, that it passed my tests and doesn't need the attention of the moderators and the contributors to go back in and review that particular pull request. Another thing that I wanted to talk to you about was the testing that I
mentioned earlier. So let's go back to that build definition; let's pick this one here for some fun, and I'm gonna click on Analytics. What Analytics does for me is it actually watches the test results over the history of this particular build, and it gives me a report letting me know how successful our testing has been over a period of time. If I click on this, I get to drill down into these analytics, and I can actually identify the tests that we need to go back in and verify. There are different ways that I can slice and dice this data: find out which of my tests are taking too long, so that we can focus on getting our build times down, or figure out which tests are flaky. For example, the About test was the one that was broken, so we can spend some time going in and investigating to make sure that it's stable. Once I know I have a high-quality output from my build, the next thing we have to do is release that code, and that's where our release product, or what we call Pipelines, as you'll see in the navigation here, comes in. This is where we take the output of the build and run it through a pipeline, deploying it into multiple environments and even allowing you to do approvals between those environments to make sure that your code has safely landed in the target environment. If I were to go back in, for example, and look at one of these releases, we'll be able to see that I'm actually deploying this application into Kubernetes. If you wanted to use Helm, you could; I'm a newbie when it comes to using containers and Kubernetes, so I just went ahead and used some kubectl commands to get my code into the cluster. As you can see here, I did a kubectl apply and another kubectl set, and then I was able to deploy my code. If I go back in here really quickly and edit it, what you'll be able to see is that if I go and look at my tasks, I can see exactly what it was doing. Before, I was playing with infrastructure as code, so I was actually able to take an ARM template and deploy my entire Kubernetes cluster into Azure before it even existed, which is really nice as well. I could come in here and make some really quick edits; the tasks are so well written that even your pull secret is automatically handled for you, again allowing you to take your code from the fingertips of your developers and put it into the hands of your users using Azure Pipelines. So that just shows you some of the power that we have inside of Azure Pipelines, for any language on any platform. All right, Jamie, show us some more stuff. Thanks, Donovan. So over the last six years or so, we've been on a journey here at Microsoft that we often talk about internally as the new Microsoft, and it really starts at the top, but at this point it's percolated throughout the entire company, and as someone who's been here for 20 years, this has been a really exciting time to be part of Microsoft. You know, a big part of what it means to be the new Microsoft has been the embrace of open source, and if you look over the last six years, the amount of things that we've been doing in this space continues to go up year after year after year. It starts with things like simply making sure that we're embracing the projects that the community has chosen to embrace; a great example of this would be Kubernetes on Azure. Another example that I was directly involved with seven years ago was Git. It seems obvious now, but back then it wasn't such an obvious question whether we should embrace Git or whether we should try to compete with it. We chose to embrace it, and now, seven years later, almost all of the development that happens at Microsoft happens in Git, including Windows. Just step back and think about that for a minute: the Windows team uses the source control system that Linus built to build Windows. It really is a very new Microsoft. Another example is open sourcing more and more of the products that we deliver; VS Code and TypeScript are great examples. Azure Pipelines
itself has core parts of its infrastructure open sourced. So I'm excited today, because we're going to announce something big, and that is free CI/CD with Azure Pipelines for any open source project that wants it. This means any open source project can use Azure Pipelines: you get unlimited minutes, up to 10 concurrent jobs running at the same time, and access to our Linux, Mac, and Windows pools. We use the exact same infrastructure for open source that we use internally for our own builds and that we use for our customers; this means open source gets the same quality of service that we give to everyone. We also want it to be really easy for projects to get started, and that includes open source, really all of them. Since most open source projects live on GitHub, Azure Pipelines is now part of the GitHub Marketplace. This means that you can discover, configure, and even pay for Azure Pipelines through the GitHub Marketplace, so if you already have a billing relationship with GitHub, you don't have to set up a new one with us. And again, the key theme of all of this is just about making the lives of developers easier; if you've got one less payment vehicle to manage, your life just got a little bit easier. I'd like to show you that in action, so let's switch over here, and I'm going to show you Azure Pipelines as part of the GitHub Marketplace. So I'm looking at Azure Pipelines; I can configure my plans, or I can set up a new plan. If we scroll down, we can see that you can configure a free plan, and like I said, it's just free for open source. We also have a free plan for private projects: you get up to 1,800 free minutes. If you want to use it for private projects, you can add more parallel jobs, so you can run more of them at the same time; you can do that for $40 each, and again, you can configure this and pay for it right through GitHub itself. Let's go ahead and choose the free offer and set this up. We're going to configure this in the Raleigh Labs GitHub organization, and now what's going to happen is we're gonna consent to grant Azure Pipelines access to the repositories. It could be all the repositories, or it could be just individual repositories; in this case, we're gonna grant access to some repositories, and then we'll go ahead and create a new Azure DevOps organization. What this is going to do is set me up with everything I need to use Azure Pipelines and Azure DevOps: it's going to create our organization, then it's going to set up our first project and land me in an experience where I can configure my first pipeline for whatever repo I want in that GitHub organization. So let's use the node container repository. Once I select that, we'll go and analyze that repository to see what's inside of it, and we see that there's a Node app in it, so we're suggesting a whole variety of different Node templates that we have out of the box. But we also saw that there was a Dockerfile, so our default recommendation is to use our Docker template. Now, Donovan mentioned config as code; we're gonna use that in this situation, and this is the definition of my build process. It's simply saying it's going to use our Ubuntu pool and then it's going to go build our Docker image. I could go and modify this process as part of it, but I'm just gonna use the default, and now I'm going to save it, and we're gonna check it into a new branch. What this is gonna do is create a pull request in the GitHub repository to add this YAML file to that repository, which will then have configured Pipelines and ensure that all changes that go into that repository from now on are validated. Then it's going to kick off the first build to do the validation of the actual configuration of Pipelines being added to this repository. So as that kicks off, we can jump back over to the GitHub repo, and we can see that there's now a pull request. We can go into that pull request and see that the file we're adding is the actual pipeline YAML file. Going back to the conversation, we can see that Pipelines is in the process of actually validating this change itself. We also support the GitHub Checks API, which is a rich experience for showing the status of all of the checks that are hooked up to this repository, so here you can see Azure Pipelines publishing that status into it. So this just gives you a taste of how easy it is to get started and the type of integration that we have with GitHub as part of Azure Pipelines. Now, as part of the run-up to the launch of Azure Pipelines, we've been working with a number of open source partners to do some early onboarding, and we've been really enthused with both the feedback and the reception that we've gotten, so I thought it'd be interesting if we invited some of them to come join us. We have folks from a number of them; the first is GitHub Desktop, so we have Phil Haack from GitHub, and he's here with Donovan to share his learnings. Donovan? Thanks, Jamie. So again, we have Phil here from GitHub. Tell me, what do you do at GitHub? Hi, I'm Phil Haack, a director of engineering at GitHub, and I'm in charge of the client applications team. We're the team that builds all the software you use outside of GitHub, such as GitHub Desktop; Atom, which is a text editor; and Electron, which is a framework for building cross-platform apps using web tech. And then we also have a team that builds extensions to third-party editors such as Visual Studio, Visual Studio Code, and Unity. Awesome. So I would imagine, managing that many products, CI has to be important. So what does CI really mean to you, and what does it bring to your development? I think one of the best ways to understand CI, especially when it comes to open source projects, is to imagine a world without it and how people would collaborate on open source. So imagine, you know, someone named John comes along and, you know, approaches GitHub, sees the repo, and says, I want
to contribute to this, fixes a bug, and pushes the code up. And now Ygritte, who's one of the project maintainers, comes along, and she notices that, oh, this person submitted this fix, let me try it out. So she pulls the code down, hits build, and it doesn't build, and she's like, oh darn it, you know, so she writes a comment: you know, nice fix, but fix the build. And then the next day, you know, John's back, sees that, okay, fixes the build, pushes it. Ygritte takes it down the next day, runs the tests, and realizes, oh, the tests fail, because, you know, maybe John only ran them in the dev configuration, not the prod configuration. What continuous integration brings is it really tightens that feedback loop, so that rather than, you know, John having to wait for some human to look at it, he pushes up the code, all the tests run in all the proper configurations, and it gives immediate feedback. It can run your tests, your static analysis, your linters, and that way, like, you know, you take the grunt work out, really tighten the feedback loop, and enforce your project standards and all of that. So that's one of the beauties of CI for open source projects. Yeah, I remember the days when I used to come into work, do a get-latest, and the build was broken, because someone left it broken, right? And then that person had to bring donuts the next day, and when we had CI it was really cool, because it would point the finger at the person who owed us donuts. But you knew not to go do a get-latest, because you had that signal saying the build is broken, which also protected us that way as well. Yeah, a lot of people would try to do the whole traffic-light thing. Exactly. In the office, right now they're doing Raspberry Pis with LEDs, and, like, you know, it shows if it's okay or not. So there are a lot of CI systems out there; we've been doing CI for a while. What is exciting you about Azure Pipelines in particular? I mean, the thing that most excited me about Azure Pipelines when I first heard about it is the cross-platform nature of it. With Electron apps, like I mentioned before, you're targeting Windows, Mac, and Linux, and that means that you often will have three different CI providers, each of them with a different YAML file. Sure. And it becomes a bit of a maintenance headache, and with Azure Pipelines, you know, we could have one provider with one YAML file and have it build on all three targets. Absolutely. I always get on stage and I'm saying any language, any platform, and people think I'm bluffing. I'm like, no, look at our queues right there; any platform you need is in there: for Mac, for Linux, if you're doing mobile, if you're doing containers. That's amazing. And it was interesting: I was going around GitHub the other day, and every time I'd go into a repository I'd see three and four YAML files, like, I don't understand, why are people doing this? Don't they know that they can get one YAML to rule them all and get all the platforms? Well, I mean, that wasn't an option not too long ago, right? That's true. And it's particularly nice, the other thing about Azure Pipelines is, you know, through your generosity, you all are offering it to open source projects for free, and we have millions of open source projects on GitHub, so I think that's a great option for them, especially for Electron projects who really want to target all three platforms. Cool. So, show me what you did. Yeah, so here we have GitHub Desktop, and this is our Git and GitHub GUI client. Okay. And it's an open source project; we develop it in the open, and if I scroll down here, you can see that we have badges for our builds, and this is the Azure Pipelines build. And then, you know, let's take a look at pull requests. These are submissions by the core team as well as people outside, and so let's say, you know, I'm looking through here and I see, oh, here's a submission from an external contributor and the build failed. Let's investigate. So I click on that, you know, hey thanks
Damon for contributing I'm going to scroll all the way down to the checks API and we can see that we can see that for there's for failing checks and when I expand that we can see you know each of the individual checks and I see that you know the azure pipeline's build is failing so I can click details and takes me right to logs for this build you can see that you know it's it's building on Windows Linux and Mac and it failed for all three platforms if I scroll down here on the right I can see each of the steps of the build and how long those steps take and here I can see oh the linting had an error and if I click on the error I get the full log output so it helps me see that I needed to leave some strange characters from somewhere and then you know I now know how to fix my bill and if I want to go right back to the pull request I click there and I'm back where we started awesome and this allows like I said I don't have to as a I maintain open source projects too and this just saves me a ton of time as you mentioned earlier just having to allowing them they get the feedback from the system and not having me stop what I'm doing clone their repo try to build their code find out it's broken like oh my god I was like five minutes of my life I'm never getting back again please someone help me do this and now or sec the build fails I don't even look at it all right great that's someone you get the same notification that I do go fix that and then when you're ready I'll finally go ahead and take a look at it so yeah saved me a ton of time as well that's exactly right awesome Phil thank you so much for coming and showing us how you at github or actually using Azure pipelines all right Jamie go ahead and take it back you know what really excited and resonated with me from that conversation was the notion of being able with one product with one pipeline with one llamó file to be able to validate across all three operating systems and not have to deal with managing any of those 
machines itself it's a pain point I hear again and again and again you know so the next project we're gonna look at is in the Python space so Steve dower is here with Donovan Donovan so I want to go ahead and address the elephant in the room I just heard Microsoft and Python what in the world that you do here at Microsoft that has to do with Python I so I've been here I've been here at Microsoft for about six or seven years now and the entire time being working on Python stuff Wow so a whole lot of the visual studio integration Visual Studio code support for Python a whole lot of the Azure services support for Python has kind of come through my team or out of my team so I've been working on that few years and I'm also a core contributor to Python itself so I'm one of the one of the team of volunteers distributed around the world that work on C Python the reference implementation awesome design and build a language so how did you start to use Azure pipelines when it comes to Python so I've been using agile pipelines internally for years now say all of our products are built on it as we've already heard earlier Visual Studio has been building on it for long time now so I had a lot of experience you know getting pipelines up and going with that and then I saw the cross-platform support was cured because early on that wasn't there and now and now it's here and I start looking at that I'm like oh I could be doing a Python builds on this across all the platforms with a single setup right and then the open source offer comes along it's like oh yeah let's let's get pipe and running on this so I just kind of went out and started doing it cool and just just to put a bit of context on that I've got the the Python home page up here we can see we've got three badges going for the Linux Mac OS and Windows build but Python runs on so many more platforms than that if I run over to this is our our existing build bot site this runs tests against all of the configurations that Python 
supports, on every single commit, and you'll see this is a really, really long list. So the value of being able to move all of these manually configured, manually managed configurations onto cross-platform continuous integration in a single service is really appealing. I did that for the main platforms, so we have all of these pull-request builds and commit builds running now. That was really just a matter of going to our dev guide — which has all the instructions for every platform on how to build, and all the instructions for the existing continuous-integration systems — pulling them into that visual designer you showed earlier, getting them running, then viewing the YAML, exporting it, and checking it in. Now all these builds run out of the Python repository, configured as code.

Donovan: Now, we offer a lot of hosted agents on a lot of different platforms, but we don't offer all the platforms you just showed. How did you tackle needing to run on even more platforms than that?

Steve: So far this isn't the hundred different configurations — we're not there yet — but the potential is there, because Azure Pipelines supports private agents. I can set up any machine anywhere I want: a virtual machine in the cloud, a physical machine on my desk, an old Raspberry Pi sitting under the chair, wherever it happens to be. I put the Pipelines agent on it — it's .NET Core, so anywhere .NET Core runs, I can run that agent — and if it's connected to the internet, I can start running builds on that machine from the Azure Pipelines service, all still through the one place. One place I'm already doing that is the Windows release build. One of my jobs for CPython is doing the official releases — the python.org downloads for Windows are built by me, code-signed, published; all of that is one of my volunteer jobs. That's been a manual process: I log into a VM, type all these commands, and sit there waiting for it to finish so I can publish. I wanted to automate that, so I put all of it into a pipeline build, which you can see here — basically my commands. This is not running on one of the hosted agents, because it has some special requirements. We do profile-guided optimization on every release build, so we want a much more powerful machine with faster CPUs to get through the training that much quicker. There's also code signing: every single binary in the Python package is signed with an Authenticode certificate under the Python Software Foundation name, valid on basically every Windows machine in the world. That's a high-value certificate — I don't want it bouncing over the internet every time I do a build. It's locked down on a private virtual machine that I have running on Azure. Here's my queue — my pool, you could say. It's offline at the moment; I don't even turn the machine on if I'm not building anything. It's encrypted at rest and encrypted while it's running — BitLocker the whole way through. I just log into the Azure portal, start the machine up, queue my builds, let them run, and shut the machine down. So we have a custom, secured machine that no one else has access to — no one's making one of those pull requests that just downloads all your certificates and sends them off to their own site.

Donovan: Absolutely not — because you never even run pull requests against that machine. So having that private agent as an option lets you expand beyond not only the platforms we provide but also into very unique, secure scenarios like the one you just described: there's no way anyone's getting to that file, because it only exists in one place, and that one place is encrypted to the nth degree — but you can still use that machine to run your build, which is incredible.

Steve: Yeah. And as far as getting CPython building, there are more integration steps to do — a whole lot of cool features we're not using on Pipelines yet — but the flexibility and the potential are there, and I'm really excited to keep building on it. I'll be building on more of this during the week, so if people look at the end of the week they'll probably already see changes.

Donovan: But this, from what I understand, is the engine — most of this code is actually in C, and it drives Python. What if you want to build a Python app — can you do that just as easily?

Steve: Yeah. One of the newer things showing up in Pipelines, with this release, is a whole lot more Python support. We reached out and got a couple of projects on board, and they were very excited to do it: tox, pip, and Pipenv. Python developers will be familiar with these because they're very well-known projects. I want to look at tox, because it's one of the more thorough and complete integrations, and there are some really cool aspects to it. tox is a tool used in CI systems: it lets you specify a configuration file with all the build steps and run that across your full matrix of target platforms. For a Python project you normally care about Windows, Mac, and Linux, and about Python 2.7, 3.3, 3.4, 3.5, and 3.6 — a big matrix to test against — and tox helps by letting you write one set of instructions. If I pull up their file, you can see there are a lot of environments they care about, with all of these steps, and this file already existed — people already have this file. So when they came on board with Pipelines and started using it, they set up a build that reuses that file; they didn't want to rewrite everything in a completely new form. If I pop open their most recent build — in fact, let me jump to their YAML file, because they made a YAML file too; they want everything in code — it's not even as long as the other one. There are a few steps in here, but essentially all it does, if I look at one of these examples, is: they pick a version of Python (we let you choose the version you want at the start), they install themselves, they install tox, and then they use tox to test. But what they have done is actually inverted it. Normally you'd say "run tox, run everything," and it does all the platforms and all the configurations in one go. They've inverted this: they're using our multi-configuration — our matrix — support. They have the matrix of all the versions here, and then they use tox to run each one, letting us do it in parallel. If I jump to the build and look at the logs, you can see all of these jobs down the side — every configuration they care about, all the operating systems, a different queue for each: this one ran on Mac, this one on Windows, different versions, all in parallel, each one running tox with just a single environment inside it. So they get the same result they'd have running tox on a single machine, but it all comes back in parallel. Some of the cool stuff they've done here: these folks are really good at using the standard tools that exist in Python, so they use the standard code-coverage tool that most Python projects use. That generates a Cobertura file, which we can upload to give you the summary right here,
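As an editor's sketch of the inverted matrix just described — the job names, tox environments, and report paths below are illustrative assumptions, not taken from the tox project's actual pipeline — the approach might look roughly like this in an azure-pipelines.yml:

```yaml
# Hypothetical sketch: fan tox environments out across parallel jobs,
# then publish test results and coverage so Pipelines can summarize them.
strategy:
  matrix:
    linux_py36:
      imageName: 'ubuntu-16.04'    # hosted pool names of the era; adjust as needed
      toxenv: 'py36'
    windows_py37:
      imageName: 'vs2017-win2016'
      toxenv: 'py37'
pool:
  vmImage: $(imageName)
steps:
- task: UsePythonVersion@0         # pick the interpreter for this job
  inputs:
    versionSpec: '3.x'
- script: pip install tox
  displayName: Install tox
- script: tox -e $(toxenv)         # run a single tox environment per job
  displayName: Run tox
- task: PublishTestResults@2       # pytest can emit JUnit-format XML
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/junit-*.xml'
- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: 'Cobertura'
    summaryFileLocation: '**/coverage.xml'
```

Each matrix entry becomes its own parallel job, so the whole tox matrix finishes in roughly the time of its slowest environment rather than the sum of all of them.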
and that integrates nicely with Pipelines because we understand that file format. Their tests run in pytest, which can output JUnit-format XML; they push that up and we get the summary here. If I jump over to the Tests section we get all the results — they have good results right now, so let me clear that filter — and there are all of the test results for their runs. We can see all of the tests that have been run, and if any of these had failed we'd get the information from that. So all they've really done is take their existing build and test tools and switch them over — inverting the matrix a little bit to run on Pipelines — and they get all this really nice integration. They haven't had to rewrite their entire system; they keep using the standard Python tools already out there, and they're really just bringing over "what shell command should we run at each step."

Donovan: And it's really nice to not have to learn something new, yet still get this really rich, first-class experience — like the code coverage and the test results integrated into the summary. It's not as if we know how to run your stuff but don't know how to display the results — we do. Which is really amazing: again, any language, any platform, but you don't sacrifice anything when you use it.

Steve: Right — we don't care; whatever you want to bring us, we're going to build it for you and help you deploy it.

Donovan: I really enjoyed this, and I'm really glad you came and shared how Python is able to leverage Azure Pipelines as well — thanks again for coming. And Jamie, back to you.

Jamie: Thanks, Donovan. The last project we're going to look at is Visual Studio Code, and I have Amanda Silver here with me from the Visual Studio Code team. Amanda, everywhere I go I see people using Visual Studio Code — developers of all types have embraced it in an amazing way. You must be thrilled.

Amanda: Yeah — I mean, I
joined the Visual Studio Code team about two years ago, and since its release in 2016 it's become one of the most beloved code editors on the planet. Part of the reason is that we have a really rapid cadence of updates and releases with new features, and the way we can support that is that we have a lot of community contributors. We have over 4 million monthly users of VS Code, and it's one of the most popular open-source projects on GitHub in terms of contributions — we've had about 15,000 non-Microsoft contributors to the VS Code project.

Donovan: Wow. That must take a lot of work, taking in that many contributions.

Amanda: It is a lot to manage, and our development team definitely says that one of the most challenging things to deal with is code reviews. In fact, when we interview other developers who use our tools, they also say code reviews can be really painful: senior developers are bombarded with review requests; when they go in to do a review it's hard to figure out where to focus; and, even further, you're often using a web editor or something that's not your usual tool set. So we've been working with the GitHub team over the last couple of months to address some of these problems. We've just released a new version of VS Code with some new APIs in it, and the GitHub team just released a new extension for PRs that gives us a PR experience directly in VS Code. That should make things a lot easier.

Donovan: Awesome — want to show us?

Amanda: Sure. What you see here is my VS Code editor, with all the colors and theming I expect. I have the GitLens extension here, which is a super-popular community-contributed extension. But I might have a pull request to review, so if I go to the Source Control view, in addition to the usual Changes viewlet I now have a new viewlet called GitHub Pull Requests. I can expand that and see there are new pull requests waiting for my review. I can expand one and look at its description right in the context of VS Code. What I probably want to do next is look at the code changes, so I come in here and see a diff view right in VS Code, which is pretty awesome — that lets me do the first-glance review I might want, looking at the two diffs, and I get the colors and theming I'm used to for the TypeScript code I'm looking at in this case. If I want to do a deeper review, though, I go back to the pull request and check it out, and that switches VS Code into a code-review mode and brings all of those changes locally. Now I can look at the changes in that pull request, go to the files, and look at that same file — I'll expand this so that in the full view I get all of those same extensions completely running in this editor. And what you can see is that I actually get a squiggle. The reason is that, because all of the extensions are running on my local copy of the changes, the TypeScript compiler is statically analyzing the code. If I mouse over it, we see that this variable is declared but its value is never read. So I just go into the gutter here, hit plus, and make a comment — "looks like this isn't used" — and add it. The other thing that looks really weird to me is these hard-coded values, so I add another comment — "hard-coded values." Then I go back to the description page for this pull request and add a top-level comment — "left a few notes." Now I can look at this change directly on GitHub, and you can see the comments I added are all there: "hard-coded values," "looks like this isn't used" — what I just did is immediately in GitHub. Even further, back in VS Code I can exit review mode, since I'm done with this review, and go back to my normal coding, my normal view. But now I want to look at that pull request one more time, just to make sure the build passed — and that's really where Azure Pipelines comes in. If I look down here, I can go directly to the VS Code build, which brings me into the Azure DevOps Pipelines experience. For the VS Code project we use Azure Pipelines because it allows us to build for Windows, Linux, and macOS all simultaneously, using all the same infrastructure and all the same scripts. We can see all of those builds — and all, unfortunately, failed — so let's look at one of these compile steps. What you see down here is the same thing: the zoom-level default is declared but its value is never read. That error would have been picked up in CI, which Azure Pipelines really helps with as well — but now, because I could run the analysis as part of my review, it might not even get to that level. I might be able to say, "hey, before you merge this, you should probably fix this error."

Donovan: This looks really cool. I can see how a lot of folks are going to get a lot of benefit out of it — it's going to make their review process, and a lot of what they do every day, a lot simpler. And I'm happy to hear that Azure Pipelines makes the development of VS Code itself simpler too.

Amanda: Yeah, we really appreciate it, and our development team is definitely going to feel superpowered with Azure Pipelines behind them. If other folks want to try out the GitHub pull-request review extension, it's available with the latest VS Code in our extension gallery — just search for "GitHub Pull Request."

Donovan: Awesome — thanks a lot, Amanda.

Jamie: Thanks. So, we've talked a lot about open-source projects, and the thing about Azure DevOps is that it works great for open source but really for all types of applications, and for organizations from the smallest to the largest — we have organizations of all shapes and sizes using it today. Shell is an example of a large organization: they have over 2,800 developers using Azure DevOps all over the world. Hawaiian Airlines, when they moved to Azure DevOps, improved their build time by over 400%. And because Azure DevOps is a cloud-hosted service, getting up and running is really simple — Accenture is able to spin up new projects in no time. Now, really the largest organization using Azure DevOps is Microsoft itself. We have over 80,000 people using it every month to ship our software, and the numbers are staggering: over 4 million builds each month; 500 million tests executed every single day; over half a million work items updated; and — the number I find most satisfying — 78,000 deployments done with Azure Pipelines every single day, which means there's a
much smaller window from when we write code to when it gets to you — which is better for you, and frankly makes our developers happy too, because they have a much shorter wait before what they've created gets into your hands. So I thought it would be interesting to show you how we use Azure DevOps on a regular basis — of course, we use Azure DevOps to build Azure DevOps — so Donovan is going to give us a walkthrough of our actual engineering system, the system our engineers are using right this minute. Donovan, take us through it.

Donovan: Thanks, Jamie. I love that inception aspect, where you use the product to build the product — and we're not the only ones inside Microsoft who use it: the Windows team, some of the Xbox teams, and everyone else are moving to what we call our One Engineering System. What I thought would be cool is to take you through a day in the life of an engineer on the Azure DevOps team, and what they experience to get their work done using the product they build, to build the product they use.

Here we have a dashboard. Imagine you're coming into work and you see this on your plasma screen or your Surface: you can see exactly what your team is working on, what sprint you're in, how many days are left; the work currently assigned to you; how many bugs you have; the health of your builds; your team members; what features you're working on — and you can make sure you're focusing on the most important things first. This dashboard is completely customizable — yours can look completely different, and we have a great library of widgets you can use to build it — and what the team can do here, very quickly, is see exactly what they're supposed to be focusing on so they can deliver the highest level of value.

Not only do we have great dashboards, we also have product backlogs. A backlog is a prioritized list of all the things you want your software to do. You can simply drag and drop items from here to assign them to a particular sprint, so you can do sprint planning as well — I won't drag anything now because, as Jamie pointed out, this is literally where we're building the product, and I don't want to assign work to the wrong sprint. I love that I can expand an item and get a really nice view of all the work necessary to turn one of these ideas into working software.

We also have Kanban boards. These let you visualize the movement of an idea from creation all the way to done and running in production — and "done" for us means it's being monitored in production, so we can learn from the telemetry and decide whether our priorities are in the right order; we call it "monitor and learn." Again, this is a rich interface where we can simply drag and drop, and these tiles are already assigned to people. One of my favorite things here is that we can create a branch right from the board for the work we want to do. I'm using Git, we're using feature branches; I have a feature I want to implement; I don't have to create the branch separately — I create it right here from the board. What ends up happening is that the work item becomes associated with that branch, and that traceability lives throughout the entire course of the work: not only is the branch tied to the work item, but every commit is associated, every CI build that's triggered, every release that's deployed — all traced back to this particular work item. I like being able to see every line of code that I changed, so I can do test impact analysis and don't have to go running around to find it — the system gives that to me, simply because I created the branch from the board.

Once that branch is created, I go off and do my work. Here's our Git repository, and on the left-hand side you can see how all these branches come back together. This lets me visualize the branches that were created and the pull requests that merged them back into master, so we know we're delivering on our goals. We don't just merge back into master willy-nilly — that would be a recipe for chaos. Instead we use a pull request: an opportunity for our peers to review our code before it merges into master, because master is the law — it's our golden master, and we protect it through the PR process. If I come over to the Pull Requests tab, it shows me all the pull requests currently in flight. A pull request is a chance for your engineers to review each other's code, and it can also run tests and builds against the code. If I drill into one — I'll just pick a random one — we can see everyone involved, everyone who has made changes and what lines they touched; I can leave comments on each individual line, discussing with the engineer what's good and bad about the changes; I can see whether there are any conflicts, and see the commits. We also have policies applied to branches, and the policy on our master branch is that you have to survive a build — and in that build we run a lot of tests. If I remember correctly, we run somewhere near 83,000 unit tests every single time you do a pull request, and if a single one fails, your entire pull request is stopped. We want a high level of quality — and because these are unit tests, we can run all 83,000 of them in less than 20 minutes, so we get that signal back to our developers very quickly. And because they're supposed to pass — we've all had flaky tests in the past, and people start to ignore those — we worked really hard to make sure these tests are solid, fast, and reliable, such that if a single one fails, no one ignores it: we go immediately and figure out what failed, fix the code, and start the process over, to ensure we only ship the highest-quality code to our customers and to ourselves, because we are our first and biggest customer here at Microsoft.

Running all these unit tests lets us ship the highest-quality code to our end users, and it doesn't stop there: we continuously run tests against build after build, release after release. And it behooves us, because we're the first people to get the code we ship — we don't put it onto our customers unless it has survived time with us here at Microsoft. When you use the tool that you build to build the tool that you use, you cannot risk shipping bad software, because it could stop everything you're doing. As a matter of fact, here is our deployment plan. What's really cool is that we use safe deployment here at Microsoft, which means we deploy to one production environment and let the code sit there for 24 to 48 hours, where it has to survive our telemetry, our monitoring, and our daily use before we allow it to deploy to the next environment. This first environment, identified here as ring zero, is actually where the Azure DevOps team itself works — so when we push out new features, we're the first to feel them, and if we make a mistake, we're the only ones to feel the pain, and we go back in and correct it. After we let it sit there, we monitor the telemetry, check for any new work items logged for bugs, and make sure it's healthy and can sustain our traffic; then we promote it to the next ring of our deployment, and we rinse and repeat until it
gets all the way out into production for all of our users. So that's how we use Azure DevOps to produce Azure DevOps for our end users.

Jamie: Thanks, Donovan. As you saw, Azure DevOps is really a full end-to-end solution across the entire DevOps chain, with great traceability from start to finish. It's highly scalable, from the smallest team up to the largest. It's enterprise-ready: you can run in whatever geo you want, so you can keep the data close to home; you can run in our cloud, where we take care of everything for you; or you can run on-premises with Azure DevOps Server, which lets you install and manage the Azure DevOps services however you want. You really have a choice between a public and a private cloud. Now, Azure DevOps is the evolution of Visual Studio Team Services, which is why we're able to capture so many of the learnings and innovations we've had over the years, and that evolution is based on feedback from customers — and that feedback is about choice. Lots of folks want the full end-to-end solution the way Donovan just showed you, but others want the choice of using just a particular part of it: just Pipelines, or just Artifacts, or just Boards, assembled however you like — and you can do that. You can choose which parts of Azure DevOps you want to use and combine them with other solutions. For example, if you want to use Azure Boards for your planning, GitHub to store your source and do your pull requests, Azure Pipelines for your CI/CD, and then take the produced artifacts, put them in Artifactory, and use Ansible to deploy them to AWS or Google — great, you can do that. Mix and match however you want: if you'd rather use Azure Artifacts instead of Artifactory, or Jenkins instead of Azure Pipelines, it's up to you.
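The "one pipeline, one YAML file, any platform" theme that runs through these conversations can be sketched minimally as follows — this is a hypothetical azure-pipelines.yml, with image names reflecting the hosted pools of the time and a placeholder build step:

```yaml
# One file, three operating systems: the same steps run on each hosted image.
trigger:
- master
strategy:
  matrix:
    linux:
      imageName: 'ubuntu-16.04'
    mac:
      imageName: 'macOS-10.13'
    windows:
      imageName: 'vs2017-win2016'
pool:
  vmImage: $(imageName)
steps:
- script: echo "Building on $(Agent.OS)"   # placeholder for the real build script
  displayName: Build
```

Checking this one file into the repository gives a commit-triggered build on all three operating systems, with each job's status reported back through the Checks API shown earlier.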
You can assemble these solutions in whatever form makes sense to you. Now, this is really an evolution and a broadening of the Azure ecosystem. Azure DevOps provides Azure with a new set of services to help developers and make their lives better; we're really making Azure a developer-first cloud. Azure already has very broad support: there are hundreds of tools and technologies, many of them open source, that are part of the Azure ecosystem, from Terraform to Jenkins to Chef to Puppet, you name it. And Azure itself has a whole set of first-class services that are really important to the DevOps lifecycle. One category of those is focused on telemetry, analytics, and insights. When Donovan talked about what DevOps means to him and the definition of it, it was really clear that DevOps does not stop with deployment. You have to get telemetry on your application; you have to know whether that application is performing well and how folks are using it. In this day and age, data is becoming a core part of how we build software, and Azure has a whole set of services that help you do that, from Azure Monitor to Application Insights and Log Analytics. They provide predefined defaults so that you know what thresholds you should be expecting from a high-performing application. They help you visualize all of this data as it comes in, on customizable dashboards, and they have infrastructure to help you separate the signal from the noise, because as we move into this more data-driven world, there's so much data coming in that it can often be hard to tell what the important signals are amid all of the noise. And of course, all of these services are highly extensible and work with existing processes and tools like ServiceNow, and that includes, of course, Azure DevOps. So one of the really powerful things you can do is take all of this telemetry and insight and tie it into your actual workflow processes, in terms of how you work, and Donovan is here to
give us a walkthrough of how you can do that. Donovan? Thanks, Jamie. Like you said, going back to the definition of DevOps, we want to continuously deliver value. You can't just randomly copy files to a server and assume that you delivered value; if no one uses them, you didn't. The only way you know is if you monitor the application, and here inside of Azure we have an amazing offering that allows you to monitor not only the application itself but the infrastructure upon which it is actually running. I can make sure that my infrastructure is secure, see if there are any patches that need to be run, see my application health, and configure all of that using Application Insights. In addition, Application Insights ties in really well with Azure DevOps and Azure Pipelines. Here I can go to a dashboard that I've created; this is the actual Application Insights container where all the data from a Node.js application that I've written is being pumped in. I'm able to see if I've had any failed requests; luckily, I have not. I'm able to see what my server response time is; if that starts to go up or down, I can go back in and make adjustments to my code, ship out a new version, and come back and check these numbers to make sure they're looking good. I also have availability tests running. It's been rock-solid since I deployed this; I've had a hundred percent success rate. If for any reason one of my applications is inaccessible, I will actually get a notification saying, "I can no longer reach your particular application." And I can test from several different locations, so it'll test from all the different regions that we have available for you from Azure, to make sure that your app is accessible from everywhere, and if not, I'm able to go back in and make a fix for that. What I really like about this is that I can incorporate this information back into Azure Pipelines. What I mean by that is, earlier I talked about how we use safe deployment, how
we deploy our application into a production environment and monitor it there. Well, that monitoring historically has been manual: people would go and run queries against our work items to see if any new bugs had been logged, or someone would go and look at a dashboard like the one I just showed you to see if there were any spikes in our traffic or an increase in the number of errors we have. But we want to automate everything that we possibly can, and thanks to Release Management, I can do just that. I can come here and enable something called a release gate. A release gate is an automated way for me to take those tasks that I used to do manually and verify them as I move through my pipeline. To add a new gate here, you'll see there are several different types. I can run an arbitrary function; this is an Azure function that can go off and do whatever it needs to do, and then send back either a positive or negative response, letting me know whether things are good or not. I can call a REST API: maybe I'm deploying an API myself, and I want to call a few of those APIs beyond just normal testing, specific APIs that tell me the health of my particular system. And this one I like a lot: I can actually query the Application Insights data I just showed you, directly from Azure Pipelines, and make sure that we have a really healthy deployment and that everything is going as it's supposed to go, and if, and only if, it is, move on to the next environment. I'll show you how to configure one of those in just a moment. We also have the ability to query work items: this lets us go back in, run queries, and see whether any new bugs or issues have been logged against our software while it's being deployed, and if so, you probably want to stop the release and go look at it, right? I'll go ahead and click over here on Azure Monitor. I just have to configure a few things. It's going to ask me for my Azure subscription, because this is
where the telemetry is being collected, and it allows me to then find the exact application that I want to monitor. So I'll pick my Azure account. Inside of my Azure account, I obviously have resource groups; the resource group I'm looking at has my Kubernetes cluster in it, and also my Application Insights instance. Now, I want to look at Application Insights information, so one more time I'm going to tell it the exact resource that I'm looking at, because I can actually have a different Application Insights container for every environment. I don't want all my telemetry going into the same place; I want to know how my production environment is doing, which might be different from my QA environment or my development environment. As I go between one gate and another, I can review the previous environment's gates right here. Using Application Insights, I'll then choose a particular alert, and if I have any failure anomalies during the deployment of my code, Release Management will be able to protect me and stop my release, giving me a signal that I need to go back in and investigate: this is a potential bug that would have escaped into the next environment had I not had an automated way to monitor that information. So we're taking the monitoring, and we're automating the review of that monitoring for you. The final thing is having that single pane of glass. It's great to have those really rich dashboards in Azure, but I want to bring the data that's being collected by Application Insights directly inside of Azure DevOps, and I'm doing that here with a dashboard that I've been able to create. As you can see, I have tiles here from my dev environment, my QA environment, and my production environment. I can see how many events have been triggered, I can see what my response time is, and down below I can see my team members, the health of my build, and the health of my release. This, again, is taking all
the power and the data that we need to make a good decision and have a really high-functioning piece of software, and we're surfacing it right here in Azure DevOps, on a single pane of glass that we can use. So again, I hope you realize: any language, any platform, and everything that you need to turn an idea into a working piece of software. Thanks, Jamie. Thanks, Donovan. So, to pull all of this back together: Azure now has five new services that are going to help you as developers to be more successful, to collaborate, and to ship faster with higher quality. They work for any platform, any cloud, any operating system. You can use all of them together or you can use them individually; it's all up to you. They embrace open source; Azure Pipelines is free for open source, for any project that wants it. They work for projects of all shapes and sizes, from the smallest up to the largest. And it's really easy to get started: you can go to azure.com/devops. And Donovan, I think that's about it. Did I miss anything? Actually, I have a few questions, because I'm gonna go on tour here soon and I'm gonna be bombarded with questions. Now, you said it's free for small teams, but what if I'm not a small team? How do I pay for this? Yeah, so for Azure Pipelines, if you're an individual doing private builds, you get 1,800 free minutes. If you'd like to grow up from that, you buy units of parallelism, so you can run more jobs at a time; each one costs $40. Okay, so if you buy 10 jobs, you can run 10 jobs at the same time, at $40 each. Gosh, yeah. Most of the other services you buy per user, for about $6. You can go to the pricing page on our website and get more details. Great. So obviously we've been using VSTS for a long time, and it looks like the URLs are changing. What is my experience gonna be, as a current VSTS user, moving over to Azure DevOps? Yeah, this is gonna be a good thing for existing users. The transition is going to be seamless and automatic, and existing customers will get to
decide when they make that transition, so they can decide when the right time is for them. Okay. If you're using all of the capabilities of VSTS, all of these Azure DevOps services, like you showed, work great together; they're not going to lose anything, they're just going to get more choice. So if they want to set up a project and only use it for planning, and only have Azure Boards on it, for example, they'll have a much cleaner and better experience doing that. Sure, yeah, because I've noticed that. I maintain a couple of open source projects, and all I needed were Pipelines, and it was really nice to be able to streamline my experience. As the team gets bigger and as we grow, I think I'm gonna start turning back on some of those other things, but for now it's really neat to streamline it and just focus on what it is that we want to work on, which is great. Now, we talked about VSTS, but I always say that there's a twin of VSTS called TFS. How does that transition work? Yeah, so TFS is now Azure DevOps Server. Okay. It's still the vehicle that we use to deliver all the value we provide in the cloud, on premises. The way that we ship it and update it, and the way you use it, will continue to be the same. We'll continue to update it on the same cadence that we've been doing forever, so it'll again be a nice seamless transition for TFS users as well. Right, so that next quarterly update I get, it's just gonna magically change its name from one to the other? The next major update of TFS will be Azure DevOps Server. Gotcha, awesome. Okay, so the last thing I want to talk about is just, how do we get started? If I'm not an existing customer, I'm brand new, I've seen this, I'm excited, I see the value of cross-platform, any language, any platform, in one tool chain, how would I go and get started? There are a couple of ways. One, if you want to use Azure Pipelines and you're a GitHub user, you can go to the GitHub Marketplace,
like we looked at earlier. Sure. Otherwise, I suggest you go to azure.com/devops, pick the service that you want to get started with, and go from there. Awesome. Well, I thank everyone for joining us, but be sure to join us also on September 17th: we're gonna have a live stream and a Q&A where you can ask some more questions and interact with those of us who helped develop this amazing product, and we can answer your questions and make you productive on your DevOps transformation. Thank you so much for joining us. Thank you.
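The release-gate pattern Donovan demonstrated, promoting a release only when telemetry looks healthy, can be sketched as the kind of check an Azure Function gate might run. This is a minimal illustration under assumed metric names and thresholds, not the actual gate implementation:

```python
def evaluate_gate(metrics, max_failed_requests=0, max_avg_response_ms=500.0):
    """Return True if the deployment looks healthy enough to promote.

    `metrics` is a dict of telemetry values, e.g. the result of an
    Application Insights query (field names here are illustrative).
    """
    if metrics.get("failed_requests", 0) > max_failed_requests:
        return False  # new failures appeared during the bake period
    if metrics.get("avg_response_ms", 0.0) > max_avg_response_ms:
        return False  # latency regressed past the allowed threshold
    return True

# A gate sends back a positive or negative response to the pipeline:
print(evaluate_gate({"failed_requests": 0, "avg_response_ms": 220.0}))  # → True
print(evaluate_gate({"failed_requests": 3, "avg_response_ms": 220.0}))  # → False
```

In a real setup the pipeline would invoke a check like this (or query Azure Monitor alerts directly) on a timer, and move the release to the next environment only after a positive response.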

Monday, 21 October 2024

Andrew Ng – Influential leader in artificial intelligence

>> With the rise of technology often comes greater concentration of power in smaller numbers of people's hands, and I think that this creates greater risk of ever-growing wealth inequality as well. To be really candid, I think that with the rise of the last few waves of technology, we actually did a great job creating wealth on the East and the West Coast, but we actually did leave large parts of the country behind, and I would love for this next one to bring everyone along with us. >> Hi everyone. Welcome to Behind the Tech. I'm your host, Kevin Scott, Chief Technology Officer for Microsoft. In this podcast, we're going to get behind the tech. We'll talk with some of the people who made our modern tech world possible and understand what motivated them to create what they did. So, join me to maybe learn a little bit about the history of computing and get a few behind-the-scenes insights into what's happening today. Stick around. Today I'm joined by my colleague Christina Warren. Christina is a Senior Cloud Developer Advocate at Microsoft. Welcome back, Christina. >> Happy to be here, Kevin, and super excited about who you're going to be talking to today. >> Yeah. Today's guest is Andrew Ng. >> Andrew is, I don't think this is too much to say, one of the preeminent minds in artificial intelligence and machine learning. I've been following his work since the Google Brain project, and he co-founded Coursera, and he's done so many important things and so much important research on AI, and that's a topic that I'm really obsessed with right now. So, I can't wait to hear what you guys talk about. >> Yeah. In addition to his track record as an entrepreneur, so Landing.AI, Coursera, being one of the co-leads of the Google Brain project in its very earliest days, he also has this incredible track record as an academic researcher.
He has a hundred-plus really fantastically good papers on a whole variety of topics in artificial intelligence, which I'm guessing are on many a PhD student's reading list for the folks who are trying to get degrees in this area now. >> I can't wait. I'm really looking forward to the conversation. >> Great. Christina, we'll check back with you after the interview. Coming up next, Andrew Ng. Andrew is founder and CEO of Landing.AI, founding lead of the Google Brain project, and co-founder of Coursera. Andrew is one of the most influential leaders in AI and deep learning. He's also a Stanford University computer science adjunct professor. Andrew, thanks for being here. >> Thanks a lot for having me, Kevin. >> So, let's go all the way back to the beginning. So, you grew up in Asia? And I'm just sort of curious, when was it that you realized you were really interested in math and computer science? >> I was born in London, but grew up mostly in Hong Kong and Singapore. I think I started coding when I was six years old. My father had a few very old computers. The one I used the most was some old Atari, where I remember there were these books where you would read the code in a book and just type it into the computer, and then you had these computer games you could play that you had just implemented yourself. So, I thought that was wonderful. >> Yeah, and so that was probably the Atari 400 or 800? >> Yeah. Atari 800 sounds right. It was definitely some Atari. >> That's awesome. And what sorts of games were you most interested in? >> You know, the one that fascinated me the most was a number-guessing game, where you, the human, would think of a number from 1 to 100, and then the computer would basically do binary search, asking: Is it higher or lower than 50? Is it higher or lower than 75? And so on, until it guesses the right number. >> Well, in a weird way, that's like early statistical machine learning, right?
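The guessing game Andrew describes is a binary search over 1 to 100: each "higher or lower" answer halves the remaining range. A small sketch of how a program like that might work:

```python
def guess_number(secret, low=1, high=100):
    """Binary-search for `secret`, returning the guesses the computer makes."""
    guesses = []
    while low <= high:
        mid = (low + high) // 2
        guesses.append(mid)
        if mid == secret:
            return guesses
        elif mid < secret:   # player answers "higher"
            low = mid + 1
        else:                # player answers "lower"
            high = mid - 1
    return guesses

# Finds any number from 1 to 100 in at most 7 guesses, since 2^7 = 128 > 100.
print(guess_number(63))  # → [50, 75, 62, 68, 65, 63]
```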
>> Yeah, and then, so at six years old, it was just fascinating that the computer could guess. >> Yeah. So, from six years... did you go to a science and technology high school? Did you take computer science classes when you were a kid, or...? >> I went to good schools: St. Paul's in Hong Kong and then ACPS and Raffles in Singapore. I was lucky to go to good schools. I was fortunate to have grown up in countries with great educational systems. Great teachers; they made us work really hard but also gave us lots of opportunities to explore. And I feel like computer science is not magic. You and I do this; we know this. While I'm very excited about the work I get to do in computer science and AI, I actually feel like anyone could do what I do if they put in a bit of time to learn to do these things as well. Having good teachers helps a lot. >> We chatted in our last episode with Alice Steinglass, who's the president of Code.org, and they are spending the sum total of their energy trying to get K-12 students interested in computer science and pursuing careers in STEM. You're also an educator. You are a tenured professor at Stanford and spent a good chunk of your life in academia. What things would you encourage students to think about if they are considering a career in computing? >> I'm a huge admirer of Code.org. I think what they're doing is great. Once upon a time, society used to wonder if everyone needed to be literate. Maybe all we needed was for a few monks to read the Bible to us, and we didn't need to learn to read and write ourselves, because we'd just go and listen to the priest or the monks. But we found that when a lot of us learned to read and write, that really improved human-to-human communication. I think that in the future, every person needs to be computer literate at the level of being able to write these simple programs, because computers are becoming so important in our world, and coding is the deepest way for people and machines to communicate.
There's such a scarcity of computer programmers today that most computer programmers end up writing software for thousands or millions of people. But in the future, if everyone knows how to code, I would love for the proprietors of a small mom-and-pop store on a corner to go program an LCD display to better advertise their weekly sales. So, just as with literacy, we found that having everyone able to read and write improved human-to-human communication. I actually think everyone in the future should learn to code, because that's how we get people and computers to communicate at the deepest levels. >> I think that's a really great segue into the main topic that I wanted to chat about today, AI, because I think even you have used this anecdote that AI is going to be like electricity. >> I think I came up with that. >> Yeah. I know this is your brilliant quote, and it's spot on. The push to literacy in many ways is a byproduct of the second and third industrial revolutions. We had this transformed society where you actually had to be literate in order to function in this quickly industrializing world. So, I wonder how many analogues you see between the last industrial revolution and what's happening with AI right now. >> Yeah. The last industrial revolution changed so much human labor. I think one of the biggest differences between the last one and this one is that this one will happen faster, because the world is so much more connected today. So, wherever you are in the world, listening to this, there's a good chance that there's an AI algorithm that has not yet even been invented as of today, but that will probably affect your life five years from now. A research university in Singapore could come up with something next week, and then it will make its way to the United States in a month, and another year after that, it'll be in products that affect our lives. So, the world is connected in a way that just wasn't true at the last industrial revolution.
And I think the pace and speed will bring challenges to individuals and companies and corporations. But our ability to drive tremendous value from AI, from the new ideas, as a tremendous driver of global GDP growth, I think is also maybe even faster and greater than before. >> Yeah. So, let's dig into that a little bit more. So, you've been doing AI and machine learning for a really long time now. When did you decide that that's the thing you were going to specialize in as a computer scientist? >> So, when I was in high school in Singapore, my father, who is a doctor, was trying to implement AI systems. Back then, he was actually using expert systems, which turned out not to be that good a technology. He was implementing the AI systems of his day to try to diagnose, I think, lymphoma. >> This is in the late '80s. >> I think I was 15 years old at that time. So, yeah, late '80s. So, I was very fortunate to learn from my father about expert systems and also about neural networks, because they had their day in the sun back then. That later became an internship at the National University of Singapore, where I wrote my first research paper, actually, and I found a copy of it recently. When I read it back now, I think it was a very embarrassing research paper. But we didn't know any better back then. And I've been doing computer science and AI pretty much ever since. >> Well, I look at your CV and the papers that you've written over the course of your career. It's like you really had your hands in a little bit of everything. There was this inverse reinforcement learning work that you did and published the first paper on in 2000. Then, you were doing some work on what looks like information retrieval, document representations, and whatnot. By 2007, you were doing this interesting stuff on self-taught learning, so, transfer learning from unlabeled data. Then, you wrote the paper in 2009 on large-scale unsupervised learning using graphics processors.
So, just in this 10-year period, in your own research, you covered so many things. In 2009, we hadn't even really hit the curve yet on deep learning; the ImageNet result from Hinton hadn't happened yet. As one of the principals, you helped create the field. What does the rate of progress feel like to you? Because I think this is one of the things that people get perhaps a little bit over-excited about sometimes. >> One of the things I've learned in my career is that you have to do things before they're obvious to everyone, if you want to make a difference and get the best results. So, I think I was fortunate, back in maybe 2007 or so, to see the early signs that deep learning was going to take off. So, with that conviction, I decided to go on and do it, and that turned out to work well. Even when I went to Google to start the Google Brain project, at that time neural networks were a bad word to many people, and there was a lot of initial skepticism. But, fortunately, Larry Page was supportive, and we started Google Brain. And I think when we started Coursera, online education was not an obvious thing to do. There were other previous efforts, massive efforts, that failed. But we saw signs that we could make it work, and had the conviction to go in. When I took on the role at Baidu, at that time a lot of people in the US were asking me, "Hey, Andrew, why on earth would you want to do AI in China? What AI is there in China?" I think, again, I was fortunate that I was part of something big. Even today, with Landing.AI, where I'm spending a lot of my time, people initially ask me, "AI for manufacturing? Or AI for agriculture? That's a weird thing to do." But I do find people actually catch on faster. So, I find that as I get older, the window in which people go from being really skeptical about what I do to saying, "Oh, maybe that's a good idea," is becoming much shorter.
>> Is that because the community is maturing, or because you've got such an incredible track record that... >> I don't know. I think everyone's getting smarter all around the world. So, yeah. >> As you look at how machine learning has changed over the past 20 years, what's the most remarkable thing from your perspective? >> I think a lot of recent progress was driven by computational scale and the scale of data, and then also by algorithmic innovation. But I think it's really interesting: when something grows exponentially, the insiders say, every year, "Oh yeah, it works 50 percent better than the year before." And every year it's like, "Hey, another 50 percent year-on-year progress." So, to a lot of machine learning insiders, it doesn't feel that magical. It's, "Yeah, you just get up and you work on it, and it works better." To people that didn't grow up in machine learning, exponential growth often feels like it came out of nowhere. So, I've seen this in multiple industries with the rise of machine learning and deep learning. I feel like a lot of the insiders feel like, "Yeah, we're 50 percent or some percent better than last year," but it's really the people that weren't insiders that feel like, "Wow, this came out of nowhere. Where did this come from?" So, that's been interesting to observe. But one thing you and I have chatted about before: there's a lot of hype about AI. And I think that what happened with the earlier AI winters is that there was a lot of hype about AI that turned out not to be that useful or valuable. But one thing that's really different today is that large companies like Microsoft, Baidu, Google, Facebook, and so on are driving tremendous amounts of revenue, as well as user value, through modern machine learning tools. And that very strong economic support... I think machine learning is making a difference to GDP, and that strong economic support means we're not in for another AI winter.
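That "steady on the inside, out of nowhere on the outside" effect is just compounding: a decade of 50 percent year-on-year improvement multiplies out to nearly sixty-fold. A tiny calculation makes the point:

```python
# Ten years of "just 50 percent better than last year" compounds to ~58x.
rate = 1.5      # 50% year-on-year improvement
years = 10
improvement = rate ** years
print(f"{improvement:.1f}x")  # → 57.7x
```

Each individual year looks unremarkable to an insider, but the cumulative factor is what an outsider suddenly notices.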
Having said that, there is a lot of hype about AGI, artificial general intelligence: this really over-hyped fear of evil killer robots, of AI that can do everything a human can do. I would actually welcome a reset of expectations around that. Hopefully we can reset expectations around AGI to be more realistic, without throwing out the baby with the bathwater. If you look at today's world, there are a lot more people working on valuable deep learning projects today than six months ago, and six months ago there were a lot more people doing this than six months before that. So, if you look at it in terms of the number of people, the number of projects, and the amount of value being created, it's all going up. It's just that some of the hype and unrealistic expectations, about, "Hey, maybe we'll have evil killer robots in two years or 10 years, and we should defend against it," I think those expectations should be reset. >> Yeah. I think you're spot on about the inside-versus-outside perspective. The first machine learning stuff that I did was 15 years-ish ago, when I was building classifiers for content for Google's ad systems. Eventually, my teams worked on some of the CTR prediction stuff for the ads auction. It was always amazing to me how simple an algorithm you could get by with if you had lots of compute and lots of data. You had these trends that were driving things. So, Moore's Law and things that we were doing in cloud computing were making exponentially more compute available for solving machine learning problems, like the stuff that you did, leveraging the embarrassing parallelism in some of these problems and solving them on GPUs, which are really great at that idiosyncratic type of compute. So, that compute is one exponential trend, and then the amount of available data for training is this other thing, where it's just coming in at this crushing rate.
You were at the Microsoft CEO Summit this year, and you gave this beautiful explanation where you said, "Supervised machine learning is basically learning, from data, a black box that takes one set of inputs and produces another set of outputs. The inputs might be an image, and the outputs might be text labels for the objects in the image. It might be a waveform coming in that has human speech in it, and the output might be the text of the speech." But really, that's sort of at the core of this gigantic explosion of work and energy that we've got right now, and AGI is a little bit different from that. >> Yes. In fact, to give credit where it's due: many years ago, I did an internship at Microsoft Research, back when I was still in school. Even back then, I think it was Eric Brill and Michele Banko at Microsoft who had already published a paper using simple algorithms showing that, basically, it wasn't whoever has the best algorithm that wins; it was whoever has the most data for the application they were looking at, in NLP. And so I think the continuation of that trend, which people like Eric and Michele had spotted a long time ago, is driving a lot of the progress in modern machine learning still. >> Yeah. Sometimes, with AI research, you get these really unexpected results. One of those, I remember, was the famous Google cat result from the Google Brain team. >> Yes. Actually, those were interesting projects. While I was still full-time at Stanford, my students at the time, Adam Coates and others, started to spot the trend that, basically, the bigger you build your neural networks, the better they work. That was the rough conclusion. So I started to look around Silicon Valley to see where I could get a lot of computers to train really, really big neural networks.
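That "black box from inputs to outputs, learned from data" framing can be made concrete with just about the smallest possible supervised learner, a one-nearest-neighbor classifier. This is only a toy sketch of the idea; the two-dimensional points and labels below are made up for illustration:

```python
def nearest_neighbor_predict(train_x, train_y, query):
    """Predict a label for `query` by copying the label of the
    closest training input (squared Euclidean distance)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best = min(range(len(train_x)), key=lambda i: dist(train_x[i], query))
    return train_y[best]

# The "black box": inputs go in, labels learned from data come out.
train_x = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.1), (4.9, 5.3)]
train_y = ["cat", "cat", "dog", "dog"]
print(nearest_neighbor_predict(train_x, train_y, (0.2, 0.1)))  # → cat
print(nearest_neighbor_predict(train_x, train_y, (5.2, 5.0)))  # → dog
```

Modern systems replace the lookup with a deep network and the toy points with images or waveforms, but the input-to-output shape of the problem is the same.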
And I think, in hindsight, back then a lot of us leaders of deep learning had a much stronger emphasis on unsupervised learning, so learning without labeled data, such as getting computers to look at a lot of pictures, or watch a lot of YouTube videos, without telling it what's in every frame or what every object is. I had friends at Google, so I wound up pitching to Google to start a project, which we later called the Google Brain project, to really scale up neural networks. We started off using Google's cloud, the CPUs, and in hindsight I wish we had tried to build up GPU capabilities at Google sooner, but for complicated reasons that took a long time to do, which is why I wound up doing that at Stanford rather than at Google first. And I was really fortunate to have recruited a great team to work with me on the Google Brain project. I think one of the best things I did was convince Jeff Dean to come and work on it. In fact, I remember in the early days, we were actually nervous about whether Jeff Dean would remain interested in the project, so a bunch of us actually had conversations to strategize: "Boy, can we make sure to keep Jeff Dean engaged so that he doesn't lose interest and go do something else?" Thankfully, he stayed. The Google cat result was led by my PhD student at the time, Quoc Le, together with Jiquan Ngiam; they were the first two machine learning interns that I brought into the Google Brain team. And I still remember when Quoc had trained this unsupervised learning algorithm. It was almost a joke, you know; I was like, "Hey, there are a lot of cats on YouTube, let's see if this learns a cat detector." And I still remember when Quoc asked me to walk over and said, "Hey Andrew, look at this." And I said, "Oh wow! You had an unsupervised learning algorithm watch YouTube videos and learn the concept of 'cat.' That's amazing."
So that winds up being an influential piece of work, because it was unsupervised learning: learning from tons of data, for an algorithm to discover concepts by itself. I think a lot of us actually overestimated the early impact of unsupervised learning. But again, when I was leading the Google Brain team, one of our first partners was the speech team, working with Vincent Vanhoucke, a great guy. It was really working with Vincent and his team, and seeing some of the other things happening at Google and outside, that caused a lot of us to realize that there was much greater short-term impact to be had with supervised learning. And then, for better or worse, when the deep learning community saw this, so many of us shifted so much of our effort to supervised learning that maybe we're under-resourcing the basic research we still need in unsupervised learning these days. I think unsupervised learning is super important, but there's so much value to be made with supervised learning that so much of the attention is there right now. And I think what really happened with the Google Brain project was the first couple of successes, one being the speech project that we worked on with the speech team. What happened was, other teams saw the great results that the speech team was getting with deep learning with our help, and so more and more of the speech team's peers, ranging from Google Maps to other teams, started to become friends and allies of the Google Brain team, and we started doing more and more projects. And then the other story is, after the team had tons of momentum, thank god we managed to convince Jeff Dean to stick with the project, because one of the things that gave me a lot of comfort when I wanted to step away from a day-to-day role, to spend more time on Coursera, was that I was able to hand over leadership of the team to Jeff Dean. And that gave me a lot of comfort that I was leaving the team in great hands.
>> I sort of wonder if there's a message or a takeaway for AI researchers, in both academia and industry, in the Jeff Dean example. So for those who don't know, Jeff Dean might be the best engineer in the world. >> It might be true. Yes. >> But I've certainly never worked with anyone quite as good as him. I mean, I remember there was this- >> He's in a league of his own. Jeff Dean is definitely- >> I remember, long, long ago at Google, this must have been 2004 or 2005, right after we'd gone public, Alan Eustace, who was running all of the engineering team at the time, would once a year send a note out to everyone in engineering at performance review time to get your Google resume polished up so that you could nominate yourself for a promotion. The first thing that you were supposed to do was update your Google resume, which is sort of this internal version of a resume that showed all of your Google-specific work. And the example resume that he would send out was Jeff's, and even in 2004 he'd been there long enough that he'd just done everything. You know, I was an engineer at the time, and I would look at this and think, "Oh my god, my resume looks nothing like this." And so I remember sending a note to Alan Eustace saying, "You have got to find someone else's resume. You're depressing a thousand engineers every time you send this out." Because Jeff is so great. >> We're just huge fans of Jeff, really. Not just a great scientist, but also an incredibly nice guy. >> Yeah. But this whole notion of coupling world-class engineering and world-class systems engineering with AI problem solving, I think that is something that we don't really fully understand enough. 
You can be the smartest AI person in the world and have this incredible theoretical breakthrough, but if you can't get that idea implemented, it's not that it has no impact, it just diminishes the potential impact that the idea can have. That partnership I think you had with Jeff is something really special. >> I think I was really fortunate. Even when I started the Google Brain Team, I feel I brought a lot of machine learning expertise, as did early team members like Rajat Monga and Greg Corrado. For Jeff it was just a 20 percent project at the time, but he and other Google engineers, really first and foremost Jeff, brought a lot of systems abilities to the team. And the other convenient thing was that we were able to get a thousand computers to run this. Having Larry Page's backing and Jeff's ability to marshal those kinds of computational resources turned out to be really helpful. >> Well, let's switch gears just a little bit. I think it was really apt that you pointed out that AI, and machine learning in particular, are starting to have GDP-scale impact on the world. Certainly, if you look at the products that we're all using every day, there are many layers of machine learning involved in everything from search to social networks; I mean, basically everything you use has got just a little kiss of machine learning in it. So, with that impact, and given how pervasive these technologies are, there's a huge amount of responsibility that comes along with it. I know that you've been thinking a lot about ethical development of AI and what our responsibilities are as scientists and engineers as we build these technologies. I'd love to chat about that for a few minutes. >> Yeah. There's potential to promulgate things like discrimination and bias. I think that with the rise of technology often comes greater concentration of power in a smaller number of people's hands. 
And I think that this creates greater risk of ever-growing wealth inequality as well. So, we're recording this here in California, and to be really candid, I think that with the rise of the last few waves of technology, we actually did a great job creating wealth on the East and West Coasts, but we did leave large parts of the country behind, and I would love for this next wave to bring everyone along with us. >> Yeah. One of the things that I've spent a bunch of time thinking about is, from Microsoft's perspective, when we think about how we build our AI technology, we're thinking about platforms that we can put in the hands of developers. It's just sort of our wiring as a company. So, in the example you gave earlier in the talk, where you want someone in a mom-and-pop shop to be able to program their own LCD sign to do whatever, and everybody becomes a programmer, we actually think that AI can play a big role in delivering that future. And we want everybody to be an AI developer. I've been spending much of my time lately talking with folks in agriculture and in healthcare, which again are the problems that society has to solve. In the United States, the cost of healthcare is growing faster than GDP, which is not sustainable over long periods of time. Basically, the only way that I see to break that curve is with technology. Now, it might not be AI. I think it is. But something is going to have to intercede that pulls cost out of the system while still giving people very high quality healthcare outcomes. And I just see it at a lot of companies; almost every week there's some new result where AI can read an EKG chart with cardiologist-level accuracy, which isn't about taking all of the cardiology jobs away. It's about making this diagnostic capability available to everyone, because the cost is nearly free, and then letting the cardiologists do the difficult and unique things that humans should be doing. 
I don't know if you see that pattern in other domains as well. >> I think there will be a lot of partnerships between AI teams and doctors that will be very valuable. You know, one thing that excites me these days, with the theme of things like healthcare, agriculture, and manufacturing, is helping great companies become great AI companies. I was really fortunate to have led the Google Brain team, which became, I would say, probably the leading force in turning Google from what was already a great company into a great AI company. Then, at Baidu, I was responsible for the company's AI technology, strategy, and team, and I think that helped transform Baidu from what was already a great company into a great AI company. I think Satya has really done a great job also transforming Microsoft from a great company into a great AI company. But for AI to reach its full potential, we can't just transform tech companies; we need to pull other industries along, for it to create this GDP growth, for it to help people in healthcare, to deliver safer and more accessible food to people. So, one thing I'm excited about, building on my experience helping with Google's and Baidu's transformations, is to look at other industries as well, to see whether, either by providing AI solutions or by engaging deeply in AI transformation programs, my team at Landing.AI can help other industries also become great at AI. >> Well, talk a little bit more about what Landing.AI's mission is. >> We want to empower businesses with AI. There is so much need for AI to enter industries other than technology, everything ranging from manufacturing to agriculture to healthcare, and so many more. For example, in manufacturing, there are today in factories sometimes hundreds of thousands of people using their eyes to inspect parts as they come off the assembly line, to check for scratches and so on. 
We find that we can, for the most part, automate that with deep learning, and often do it at a level of reliability and consistency greater than people can sustain. Squinting at something 20 centimeters from your face all day is actually not great for your eyesight, it turns out, and I would love for computers, rather than these often young employees, to do it. So, Landing.AI is working with a few different industries to provide solutions like that. We also engage companies in broader transformation programs. For both Google and Baidu, it was not one thing; it's not that you implement neural networks for ads and then it's a great AI company. For a company to become a great AI company takes much more than that. And having helped two great companies do that, we are trying to help other companies as well, especially ones outside tech, become leading AI entities in their industry verticals. So, I find that work very meaningful and very exciting. A few days ago, I tweeted that on Monday I literally woke up at 5:00 AM so excited about one of the Landing.AI projects that I couldn't go back to sleep, and I started getting up and scribbling in my notebook. So, I find this really, really meaningful. >> That's awesome. One thing I want to press on a little bit is this manufacturing quality control example that you just gave. I think the thing that a lot of folks don't understand is that it's not necessarily about the jobs going away; it's about these companies being able to do more. So, I worked in a small manufacturing company while I was in college, and we had exactly the same thing. We operated an infrared reflow soldering machine there, which sort of melts surface-mount components onto circuit boards. So, you have to visually inspect the board before it goes in, to make sure the components are seated, the solder has been screened, and all the right parts are in place. 
When it comes out, you have to visually inspect it to make sure that none of the parts have tombstoned. There are a variety of little things that can happen in the process. So, we had people doing that. If there were some way for them not to do it, they would go do something else that was more valuable, or we could run more boards. So actually, in a way, you could create more jobs, because the more work this company could do economically, the more jobs in general it could create. And I'm seeing AI in several different places, like manufacturing automation, helping to bring back jobs from overseas that were lost because it was just cheaper to do them with low-cost labor in some other part of the world. They're coming back now because automation has gotten so good that you can start doing them with fewer, more expert people, but here in the United States, locally in the communities where whatever is being manufactured is needed. It's a really interesting phenomenon. >> There was one part of your career I did not know about. I followed a lot of your work at Google and Microsoft, and even today people still speak glowingly of the privacy practices you put in place when you were at Google. I did not know you were in the soldering business way back. >> Yeah, I had to put myself through college one way or another. It was interesting, though. I remember in one of my first jobs, I had to put brass rivets into 5,000 circuit boards. The circuit boards were controllers for commercial washing machines, and there were six little brass tabs that you would put electrical connectors onto, and each one of them had to be riveted. So it was 30,000 rivets that had to be done, and we had a manual rivet press, and my job at this company, in its first three months of existence right after I graduated high school, was to press that rivet press 30,000 times. And that's awful. Automation is not a bad thing. 
>> In a lot of the countries we work with, we're seeing, for example in Japan, that the situation is actually very different from the United States, because Japan has an aging population. >> Yeah. >> And there just aren't enough people to do the work. >> Correct. >> So, they welcome automation, because the options are either automate or just shut down the whole plant, because it is impossible to hire with the aging population. >> Yeah. In Japan, it actually is going to become a crucial social issue sometime in the next 100 years or so, because their fertility rates are such that they're in major population decline. So, you should hope for really good AI there, because we're going to need incredibly sophisticated things to take care of the aging population, especially in healthcare and elder care and whatnot. You know, I think about when we automated elevators. Right? Elevators once had to have a person operating them, and a lot of elevator operators did lose their jobs when we switched to automatic elevators. I think one challenge that AI poses is that, with the world as connected as it is today, this change will happen very quickly; the potential for jobs to disappear is faster this time around. So, when we work with customers, we actually have a stance of wanting to make sure that everyone is treated well, and to the extent we're able to, we will step in and try to encourage, or even assist directly with, retraining to help them find better options. That actually hasn't been needed so far for us, because we're not displacing any jobs. But if it ever happens, that is our stance. And I think this actually speaks to the important role of government with the rise of AI. 
So, I think the world is not about to run out of jobs anytime soon, but as LinkedIn has seen through its data, as many organizations have seen, and as Coursera has seen in its data as well, our population, in the United States and globally, is not well matched to the jobs that are being created. We can't find enough nurses, we can't find enough wind turbine technicians; in a lot of cities the highest-paid person might be the auto mechanic, and we can't find enough of those. So, I think a lot of the challenge, and also the responsibility for governments and for society, is to provide a safety net so that everyone has a shot at learning the new skills they need in order to enter these trades where we just can't find enough people to work right now. >> I could not agree more. I think this is one of the most important balances that we're going to have to strike as a society, and it's not just the United States, it's a worldwide thing. We don't want to under-invest in AI and this technology because we're frightened about the negative consequences it's going to have on jobs that might be disrupted. On the other hand, we don't want to be inhumane, uncompassionate, or unethical about how we provide support for the folks who are potentially going to be disrupted. >> Yeah. >> I think Coursera plays an incredibly important role in managing this sea change, in that we have to make reskilling and education much cheaper and much more accessible to folks. Because we're entering a new world where the work of the mind is going to be far, far more valuable, even more than it already is, relative to the work of the body. So, that's the muscle that has to get worked out, and we've just got to get people into that habit and make it cheap and accessible. >> Yeah. It is actually really interesting. When you look at the careers of athletes, you can't just get them in great shape at age 21 and then stop working out. 
The human body doesn't work like that, and the human mind is the same. You can't just work on your brain until you're 21 and then stop working it out. Your brain goes flabby if you do that. >> Yes. >> So, I think one of the ways I want the world to be different is that I want us to build a lifelong learning society. We need this because the pace of change is faster. There's going to be technology invented next year that will affect your job five years after that. So, all of us had better keep on learning new things. I think this is a cultural sea change that needs to happen across society, because for us all to contribute meaningfully to the world and make other people's lives better, the skills you need five years from now may be very different from the skills you have today. If you are no longer in college, well, we still need you to go and acquire those skills. So, I think we also need to acknowledge that learning and studying is hard work. Sometimes your life circumstances prevent you from working in certain ways, and everyone deserves a lot of support throughout all phases of life. But if someone has the capacity to spend more time studying rather than spending that same amount of time watching TV, I would rather they spend that time studying, so that they can better contribute to their own lives and to the broader society. >> Yeah, and speaking again about the role of government, one of the things that I think government could do to help with this transition relates to AI's enormous potential to lower the cost of subsistence: through precision agriculture, through artificial intelligence in healthcare, and through things we can probably do to affect housing costs with AI and automation. 
So, looking at Maslow's hierarchy of needs, at the bottom two levels you've got food, clothing, shelter, and your personal safety and security. I think the more we can invest in those sorts of things, in technologies that address those needs and address them across the board for everyone, the more it does nothing but lift all boats. I wish I had a magic wand that I could wave over more young entrepreneurs to encourage them to create startups that take this really interesting, increasingly valuable AI toolbox they have and apply it to these problems. They really could change the world in this incredible way. >> You make such a good point. >> So, the last tech thing that I wanted to ask you is this: there is just an incredible rate of innovation right now in AI in general, and some of the stuff is what I call "stunt AI," not in the sense that it's not valuable, but it's- >> No, go ahead. Name names. I want to hear. >> No, so I'll name our own name. We at Microsoft did this really interesting AI stunt where we had a hierarchical reinforcement learning system that beat Ms. Pac-Man. So, that's the flavor of what I would call "stunt AI." I think these are useful in a way, because a lot of what we do is very difficult for layfolks to understand. So, the value of the stunt is, "Holy crap, you can actually have a piece of AI do this?" I'm a big classical piano fan, and one of the things I've always lamented about being a computer scientist is that there's no performance of computer science in general that a normal person can appreciate. If you're talking about an athlete like Steph Curry, who has done an incredible amount of technical preparation to become as good as he is at basketball, there's a performance at the end where you can appreciate his skill and ability. And these "stunt AI" things are, in a way, how folks can appreciate what's happening. Those are the exciting AI things for the layfolks. 
What are the exciting things that you, as a specialist, see on the horizon? Like new things coming in reinforcement learning; people are doing some interesting stuff with transfer learning now, where I'm starting to see some promise that not every machine learning problem is something you solve in isolation. What's interesting to you? >> So, in the short term, one thing I'm excited about is turning machine learning from a bit of a black art into more of a systematic engineering discipline. Today, too much of machine learning happens among a few wise people who say, "Oh, change the activation function in layer five," and for some reason it works. If that can turn into a systematic engineering process, that would demystify a lot of it and help a lot more people access these tools. >> Do you think that's going to come from there becoming a real engineering practice of deep neural network architects, or is it going to get solved with this learning-to-learn stuff, or auto-ML stuff, that folks are working on, or maybe both? >> I think auto-ML is a very nice piece of work, and it is a small piece of the puzzle, maybe around optimizing [inaudible] preferences, things like that. But I think there are even bigger questions, like: when should you collect more data, or is this data set good enough, or should you synthesize more data, or should you switch from this type of algorithm to that type of algorithm, and do you have two neural networks or one neural network in your pipeline? I think those bigger architectural questions go beyond what current automatic algorithms are able to do. I've been working on this book, "Machine Learning Yearning" (mlyearning.org), that I've been emailing out to people on the mailing list for free, trying to conceptualize my own ideas, I guess, to turn machine learning into more of an engineering discipline and make it more systematic. 
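(Editorial aside, not part of the conversation: the "systematic engineering" Andrew describes often starts with something as simple as automated hyperparameter search, one of the building blocks auto-ML tools use to replace ad hoc tweaking. Below is a minimal, hypothetical sketch in Python; the `validation_error` function is an invented stand-in for actually training a model and measuring held-out error.)

```python
import random

def validation_error(lr, num_layers):
    # Hypothetical stand-in for "train a model, measure held-out error";
    # a real workflow would train a network here and evaluate it.
    return (lr - 0.01) ** 2 + 0.001 * abs(num_layers - 4)

def random_search(trials=50, seed=0):
    """Try random hyperparameter settings, keep the best one found."""
    rng = random.Random(seed)  # fixed seed makes the search reproducible
    best = None
    for _ in range(trials):
        lr = 10 ** rng.uniform(-4, -1)   # sample learning rate log-uniformly
        layers = rng.randint(1, 8)       # sample network depth uniformly
        err = validation_error(lr, layers)
        if best is None or err < best[0]:
            best = (err, lr, layers)
    return best

best_err, best_lr, best_layers = random_search()
print(f"best lr={best_lr:.4f}, layers={best_layers}, error={best_err:.6f}")
```

The point is not the toy objective but the shape of the process: every trial is logged and repeatable, so "change the activation function in layer five" guesswork becomes a procedure a team can inspect and rerun.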
But I think there's a lot more that the community needs to do beyond what I, as one individual, can do. It will be really exciting when we can take the powerful tools of supervised learning and help a lot more people use them systematically. With the rise of software engineering came the rise of ideas like, "Oh, maybe we should have a PM." I think those are a Microsoft invention, right? The PM: the product manager, and then program manager and project manager types of roles, way back. Then eventually came ideas like the waterfall planning model or the scrum and agile models. I think we need new software engineering practices for how you get people to work together in a machine learning world. We're sorting that out at Landing.AI; we ask our product managers to do things differently than I see any other company tell their product managers to do. So we're still figuring out these workflows and practices. Beyond that, on the more pure technology side, [inaudible] again, as AI transforms entertainment and art, it will be interesting to see how it goes beyond that. I think the value of reinforcement learning in games is very overhyped, but I'm seeing some real traction in using reinforcement learning to control robots. These are early signs from my friends working on projects that are not yet public for the most part, but there are signs of meaningful progress in reinforcement learning applied to robotics. Then, I think transfer learning is vastly underrated. There was a paper out of Facebook where they trained on an unprecedented 3.5 billion images, which is very, very big; 3.5 billion images is very large even by today's standards. They found that training on 3.5 billion, in their case Instagram, images is actually better than training on only one billion images. So this is a good sign for the microprocessor companies, I think, because it means, "Hey, keep building these faster processors. 
We'll find a way to suck up their processing power." But the ability to train on really, really massive data sets, to do transfer learning or pre-training or some set of ideas around there, I think is still very underrated today. And then, super long term, we use the term unsupervised learning to describe a really, really complicated set of ideas that we don't even fully understand. But I think that also will be very important in the longer term. >> So tell us something that people wouldn't know about you. >> Sometimes I just walk into a bookstore and deliberately buy a magazine in some totally strange area that I would otherwise never have bought a magazine in. So for whatever, five dollars, you end up with a magazine in some area that you previously knew absolutely nothing about. >> I think that's awesome. >> One thing that not many people know about me is that I actually really love stationery. My wife knows that when we travel to foreign countries, sometimes I'll spend way too long looking at pens and pencils and paper. I think part of me feels like, "Boy, if only I had the perfect pen and the perfect paper, I could come up with better ideas." It has not worked out so far, but that dream lives on and on. >> That's awesome. All right. Well, thank you so much, Andrew, for coming in today. >> Thanks a lot for having me here, Kevin. >> That was a really terrific conversation. >> Yes, it was a ton of fun. Like all of my best conversations, I felt like it wasn't long at all, and then I was glancing at my phone and thinking, "Oh my god, we've just spent 48 minutes." >> One of the questions that you asked Andrew was what technology he is most impressed by and excited about that's coming down the pike with AI. I wanted to turn that back on you, because you've been working with AI for a really long time, at Google, and at LinkedIn, and now at Microsoft. So what have you seen that really excites you? >> Several things. 
I'm excited that this trend that started a whole bunch of years ago, more data plus more compute equals more practical AI and machine learning solutions, continues. It's been surprising to me that that trend still has legs. So, when I look into the future and see more data coming online, particularly with IoT and the intelligent edge, as we get more things connected to the cloud that are sensing, whether through cameras or far-field microphone arrays or temperature sensors or whatever, we will increasingly be digitizing the world. Honestly, my prediction is that the volumes of data that we're gathering now will seem trivial by comparison to the volumes that will be produced sometime in the next 5-10 years. Take that together with all of the super exciting stuff happening with AI silicon right now, and the number of startups that are working on brand new architectures for training machine learning models, and it really is an exciting time. I think that combo of more compute and more data is going to continue to surprise and delight us with interesting new results, and also deliver the real-world, GDP-impacting value that folks are seeing. So that's super cool. But I tell you, the things that really move me, that I have been seeing lately, are the applications into which people are putting this technology in precision agriculture and healthcare. Just recently, we went out to one of the farm partners that Microsoft Research has been working with, and the things that they're doing with AI, machine learning, and edge computing on this small organic farm in rural Washington state are absolutely incredible. They're doing all of this with a mind towards: how do you take a small independent farmer and help them optimize yields, reduce the amount of chemicals they have to use on their crops and how much water they have to use, so that you're minimizing environmental impact while raising more food, and doing it in this local way? 
In the developing world, that means that more people are going to get fed. In the developed world, it means that we all get to be a little healthier, because the quality of the food that we're eating is going to increase. There's just this trend right now where people are starting to apply this technology to the things that are part of human subsistence: the food, clothing, and shelter that all of us need in order to live a good quality life. The more I see these things, and the more I see the potential that AI has to help everyone have access to a high quality of life, the more excited I get. I think in some cases it may be the only way to deliver these things at scale to all of society, because some of them are just really expensive right now. No matter how you redistribute the world's wealth, you're not going to be able to tend to the needs of a growing population without some sort of technological intervention. >> See, I thought you were going to say something like, "Oh, we're going to be able to live in the world of Tron: Legacy or The Matrix or whatever." Instead, you get all serious on me and talk about all the great, world-changing, awesome things that are going to happen. I'm going to live in my fantasy, but I like that there are very cool things happening. >> I did, over my vacation, read "Ready Player One," despite its mild dystopian overtones. >> It's a great book. I like the book. >> That's a damn good book. I was like, "I want some of this." >> I'm with you. I'm with you. I was a little disappointed in the movie, but I loved the book. Yeah. We can talk about this offline, but we'll end this now. >> Yeah. >> Well, awesome, Christina. I look forward to chatting with you again on the next episode. >> Me too. I can't wait. >> Next time on Behind the Tech, we're going to talk with Judy Estrin, who is a former CTO of Cisco, a serial entrepreneur, and, as a Ph.D. 
student, a member of the lab that created the Internet protocols. Hope you will join us. Be sure to tell your friends about our new podcast, Behind the Tech, and to subscribe. See you next time.
