AWS Summit Series 2019 – Santa Clara: Keynote featuring Werner Vogels

Please welcome Vice President and Chief Technology Officer Dr. Werner Vogels.

Good morning, Santa Clara. We did "come shake to the left, to the right, you know, meet the people next to you" — no, I won't do that to you. Good morning. As always, we're very proud that so many of you are willing to come out here today to listen to where we are with Amazon Web Services. As pioneers in the whole cloud world, we've developed so many services over time so that each of you has the right tools to build the applications that you want to build. Something we've always said is that we couldn't have come here without you — meaning your feedback, and working very closely with you, our customers, to build our services and develop our roadmap, is unique. About 95% of the features and services that we've delivered until now have come from direct feedback from you. And that's most important, because if we had been building the tools for development the way old companies have done it, we would have built the tools that you were using five years ago. That's not what we want: we want to build, together with you, the tools that you need to be ready for 2025. We go in lockstep, because development is changing radically — the ways in which we architect, the operational models, the security postures, the way we use data. All these things are changing radically, and as such we really rely on working closely together with you to build a roadmap for modern application development. So after spending a little bit of time on the business update, I'll go deeper into the patterns that we've seen arriving as most of our customers adopt modern development, and the kinds of things that we've built to help you build truly modern applications.

So, six and a half thousand of you are out here — thank you very much. I know you all have busy days, and it's pretty humbling that so many of you come out here to listen to us. As always, I consider these events to be educational events and not sales events — that's not why we're here. The well over 50 deep-dive technical sessions are really the meat of this gathering, where you can get the low-down and all the details on whatever pieces of AWS you're most interested in. Of course, our partner network is continuing to grow, and if you go out into the expo hall you'll find many of our partners having their stands there. Go talk to them, go hang out with them, go listen to what kinds of things they've developed for AWS. I've always said that AWS is so much more than just AWS: all our partners that are building things — whether it's operational tools, or ISVs, or software — extend AWS in a manner that makes it extremely rich. Whether you're integrating Twilio into your application, or Stripe, or any of the other parties that we have, they really make AWS as a platform so much richer than all the services that we've built ourselves.

So let's take a quick look at where we are with the business. At this moment, based on the fourth-quarter results from last year, we are at close to a 30-billion-dollar run rate, and on that base, forty-five percent growth year over year is pretty astonishing. If I look at the past 13 years of AWS, I think it is the speed at which we've grown that has really been the biggest challenge — a continuously changing IT landscape, and working together with you to grow really fast and build the tools that you need. I think 45 percent year-over-year growth also shows that we are doing the right thing by working closely together with you, building a set of tools that you really can use to build your next-generation applications. We're very fortunate that this has resulted in literally millions of businesses running on AWS — and an active customer is something
we consider to be a non-Amazon entity that has been active in the past 30 days. Literally millions of businesses running on AWS. Maybe that is startups — and actually I find "startups" to be a bit of a misnomer these days, because many of the names on this slide are household names. Whether it's Lyft, or Uber, or Dropbox, or Airbnb, or Slack, or Pinterest — maybe these companies were at one moment a startup, but in my eyes I would rather call them internet-scale companies that are really focused on being internet-first. Robinhood, from here in the neighborhood, is building its mobile application for trading on financial markets around here. They built a highly scalable analytics platform on AWS that allowed them to go from zero to a hundred million in revenue in just over 14 months. And of course it's not just the startups. Maybe in the earlier days this was the idea behind AWS, to really serve these companies that wanted to reach internet scale, but I think enterprises have figured out that this is way too good for them as well, and we're working very closely with them, building a whole new set of capabilities. Expedia is moving all in on AWS. Capital One built a digitally leading banking platform on AWS. And today there was a great announcement: Standard Bank, one of the largest banks in Africa, is moving all of its infrastructure over to AWS. In another announcement today, Volkswagen is collaborating with AWS to build an industrial platform for managing the efficiency of all of their plants. As you can see, a wide variety of enterprises is making use of AWS; there's almost no vertical where there are not companies that have decided to go all in on AWS. And maybe that's enterprises, but of course also in the public sector, thousands of agencies around the world are making use of AWS — because for most governments, every dollar or every euro you can save is money that you can put towards programs that really matter for your citizens. Whether that's, for example, the UK Ministry of Justice, who have built a whole pipeline of services that help law enforcement and prisons and all sorts of other activities with very high sensitivity in terms of privacy and security, or the City of Los Angeles, which built a whole security system in and around all of the city's departments, gathering their data and analyzing it for security risks — across the board, from nonprofit organizations to government agencies, all are making use of AWS. And we couldn't do that without our partners. Especially in the enterprise world, many of these organizations have existed for a long time — global system integrators like Accenture and Capgemini and others, but also new born-in-the-cloud system integrators like 2nd Watch — and many of them are the ones really helping our customers move onto AWS, especially those that have challenging environments, for example based on SAP and things like that. Most of these partners have great competencies in actually helping you get there. If you're an ISV or a software-as-a-service vendor, you're on AWS — why? Because your customers are there; your customers will demand that you're there. Most ISVs are moving to a software-as-a-service model in terms of delivery, and whether it's Adobe, or Informatica, or Salesforce, or Workday, or Splunk, all of them have moved to a software-as-a-service model to deliver their functionality on top of AWS.

Now, I'm always fortunate in these events to have great guest speakers, and our first speaker has actually been transforming his company into a cloud-first business. f5, which is a provider of application security services, began working with AWS about six years ago, using the Marketplace and becoming a networking and security
APN partner with competencies. Since joining f5 as President and CEO in 2017, François Locoh-Donou has focused on accelerating this effort, moving from the traditional company that they were into a solutions and services company delivered through the cloud. Now, to tell you more about this, I would like to welcome François to the stage.

I would like to begin my story today with what most will assume is the end: success. I believe that success can kill a company. The classic definition of success assumes that we have reached the highest of heights, and that therefore what we have accomplished must be protected. That is a mindset that leads us to a very dangerous place that I call the status quo. The status quo is in fact the biggest threat to our companies, to our cultures, to our personal growth, because it is a familiar friend who is hard to resist and even harder to say goodbye to. So I want to share with you some of the symptoms that you see in an organization that is under the spell of the status quo. A comfortable bureaucracy settles in: you accept that it takes a long time to do anything. Your customers warn you that things are changing, but they continue to buy for the time being. A form of institutional arrogance sets in, where "how we do things" takes over and becomes more important than curiosity and invention. And while these are signs of the status quo in an organization, I believe that there is one fundamental quality that holds any of us apart from succumbing to the status quo. It's a quality that no money can buy — you either have it or you don't — and that quality is the drive to reimagine. It's the belief that your first success cannot be your last. It's the courage to challenge our own formula for success, because we know it can be done better. And so I am here today to tell you the story of f5's rejection of the status quo, a feat we have undertaken more than once in our history, and what it took for us to reimagine again.

Now, for f5 over the last 10 years, the status quo looked like this. It was a company famous for its load balancers, but also obsessed with a hardware business model. It was offering application security and delivery services to the top mission-critical workloads in a data center, but leaving tens of millions of other workloads unattended. It was a loyal base of NetOps users inside 25,000 enterprise customers, but with very little to offer to the growing DevOps communities inside those same organizations. Now, as they say, what got us here won't get us there. At f5 we have a mission: we want to provide enterprise-grade application services for every app, anywhere, and the only way to get there is through the cloud. Now, as it does for you, the cloud requires a continuous transformation of our business. For f5 that meant significant, important, but painful decisions. We had to completely redefine our customer personas — who we aim to serve and how. We had to make significant shifts in where and how we invest our resources, and relook at the behaviors we promote in our own organization. And we also had to create startups — carve out startups from within f5 — with a very clear new charter to disrupt the status quo. The result of this is an f5 that is now offering easy-to-consume, friction-free application services; consistent application security for every workload across every environment; and a company that is finally bridging the divide between NetOps and DevOps. By joining forces with NGINX, the leading open source application delivery platform, we can now offer enough effective controls to satisfy the CIO, but also enough freedom for application developers. And it should be no surprise that, for a company committed to disrupting its own status quo, f5 chose AWS. We built our cloud services platform leveraging the breadth and depth of AWS infrastructure services — storage and compute of course, but also caching, identity, databases, and even serverless. We worked with the AWS SaaS Factory team to transform our own development process and build and deliver new services 50% faster. We also leveraged the AWS Marketplace: it allowed us to build digital procurement on a global basis — for companies ranging from startups to the Fortune 500 to our own channel partners — twelve months faster. I'll say that again: twelve months faster than we would have on our own. And leveraging the built-in metering features and digital commerce enabled by the AWS Marketplace, millions of f5 customers can now try and subscribe to f5 services in minutes. The result of this work is the next critical step in our reinvention: f5 Cloud Services. I am pleased to announce that we are launching today, on AWS, f5 Cloud Services — a family of cloud-native solutions designed for enhanced application delivery, security, and insight — immediately available for our customers, and through our channel partners, on the AWS Marketplace. It starts with our DNS cloud service and a preview of our global server load balancing service, available for use in AWS or in hybrid cloud environments, and later in the spring we will be delivering even more f5 enterprise-grade SaaS capabilities, including security services designed to protect applications from both existing and emerging threats. Thank you — thank you, there must be somebody from f5 in the room here. The best part about all this, though, is that we are just getting started. I don't believe we find an endpoint in success. In the spirit of this Summit, each of us is asked to consider what we can do differently, and I know this introspection can be painful, but I can also share with you what it feels like when you have rejected the status quo: you are restless for more, the risk-taking feels less risky, and new ideas are courageously surfaced every day. I know that is what it feels like at f5 now, and the opportunity for all of us here today is to invent, to grow, and to break away from the status quo. Thank you.

Thank you, François. That's why we built AWS: to help everybody break the status quo, because in the status quo your vendors were in charge, not you. And one of the biggest things
that we tried to do when we built AWS is to take on the motto of Amazon the retailer — to be the Earth's most customer-centric company. So how do you do that as an IT provider? How do you become the world's most customer-centric IT provider? By putting your customers in charge. You're in charge of our roadmap, but also of the economic models that we put in place; we're really there to put you in control, instead of us as a provider. And if we want to continue to be the Earth's most customer-centric IT provider, we need to move away from the models that we had in the past, where we as a technology provider would give you everything and the kitchen sink and tell you, "this is how you shall use it." Now think of the new world: everybody knows that you need different tools for different jobs, and as such we've been really focusing on making sure that you have choice, building the broadest and deepest platform for you as builders today, so you can pick exactly the tools you want. Maybe in the past, if you were building a house, it was sort of a prefab thing: it sat there, and you couldn't do anything about it. Maybe there were two or three of these houses that you could choose from. But if you really want to build unique houses, you need unique tools, to build exactly the house that you want to have. And I think that's really where we are today: well over, I think, 165 different services in AWS right now, and that continues growing. Whether that's in analytics, or IoT, or machine learning, or mobile services, or blockchain technology, or DevOps — the days when AWS was just infrastructure-as-a-service — compute, storage, databases, security — those days are long gone, mostly because you've been asking us, once we solved the heavy lifting in the infrastructure, to start solving the other pieces of heavy lifting that you still have. And that's why we continue to roll out these new services, based on your feedback
on what you need. Let me pick on a few of these. If you look at databases: we now have 14 different database services. Of course relational still plays quite an important role, because many of you actually have a real need for relational databases, and sometimes you're using standard off-the-shelf packages that only run if you have a relational database backing them up. But what we see more and more, especially in the move to microservices, where the components become much smaller, is teams making use of purpose-built databases to meet exactly the needs of their application — whether it's key-value, or graphs, or a ledger. All of them have unique capabilities that we're using today. Instead of using the relational database as a hammer that you can use for everything, we are moving to really specific, high-performance, highly reliable, managed, purpose-built databases.

The same goes for security. A hundred and sixteen of our services have encryption enabled in them; in 52 of them you can bring your own keys — and I'll be talking more about that later with respect to security — but encryption is becoming the most important tool you have to make sure that you're the only one who has access to your data, and nobody else. Whether that's in combination with all the compliances and all the certifications that we've achieved, or all the innovation that we're doing under the covers to build new automation tools for you, where you can actually protect yourself — I think automation in security plays a very important role, and we'll get more into that in a bit.

The same goes for storage. It's not enough to just have a volume service for block storage; you need different variations in there, because all of you have different types of workloads, so you can really tweak your volumes to exactly meet the requirements that you have. And the same goes for the different types of object storage as well. Important in all of this is that you can pick exactly the tool that you need. The same goes, for example, for instance types: in the past maybe you would be stuck with a particular type of server and you had to develop your software for it; these days, you develop your software and then you go look for the instance type that best matches what you need to do in your application.

Coming back to storage: I think still sort of the ninth world wonder, in a digital sense, is Amazon S3. It's the first service that we launched in AWS, it's now 13 years old, and customers are routinely processing exabytes of data when it comes to S3. Whether it's all the mechanisms that we developed — the way you can automatically move between different storage classes, or the security capabilities that we put in there, or all the object-level controls that you have — it's been an amazing feat to see how S3 has evolved over time, even to the point that you can use Redshift Spectrum to point your data warehouse at your exabyte dataset that lives in S3 and just run data warehouse queries over it. It's pretty spectacular. Now, one of the storage classes in S3, of course, is Glacier, which is there to do long-term archiving. We announced Glacier Deep Archive at re:Invent, and I'm very happy to announce that it is now generally available for everyone to use. The important part of Glacier Deep Archive is that it comes at a cost point of not even a tenth of a cent per gigabyte per month. No more tapes. And it gets the same level of durability that you see in S3. This opens up a whole set of use cases that is pretty spectacular. One of them that I like, because of my own history in health care, is the fact that in the past, MRI and CAT scan images would be archived on film. Hospitals have a requirement that these data should be kept around for at least 30 years.
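Moving data into a storage class like this is typically just a lifecycle rule on the bucket: after N days, transition the object to a cheaper class. The following is a minimal, illustrative model of that idea in plain Python — the names, thresholds, and the function itself are hypothetical stand-ins, not the real S3 lifecycle machinery:

```python
# A minimal, illustrative model of an S3-style lifecycle rule:
# after a given number of days, an object transitions to a cheaper
# storage class. The real mechanism is a lifecycle configuration
# attached to the bucket; names and thresholds here are made up.

LIFECYCLE_RULES = [
    # (minimum age in days, storage class to transition to)
    (90, "GLACIER"),
    (365, "DEEP_ARCHIVE"),
]


def storage_class_for_age(age_days: int) -> str:
    """Return the storage class an object of this age would be in."""
    chosen = "STANDARD"  # objects start in the default class
    for min_age, storage_class in LIFECYCLE_RULES:
        if age_days >= min_age:
            chosen = storage_class  # later rules apply at greater ages
    return chosen


print(storage_class_for_age(10))    # fresh data stays in STANDARD
print(storage_class_for_age(120))   # past 90 days: GLACIER
print(storage_class_for_age(400))   # past a year: DEEP_ARCHIVE
```

The point of the sketch is that archival is declarative: you state the age thresholds once, and the storage system moves the data for you.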
So instead of keeping the digital formats, they basically printed them on film, and then had radiologists later compare film instead of the digital images — mostly because they couldn't afford it, and there were no storage systems available that would have this level of durability over such a long period of time. At these cost points, hospitals and others can now start storing huge amounts of data at a very low cost, with the guarantee that the durability will be there for decades to come.

With all of these capabilities, we definitely see a move to newer types of applications. If I look at what most of our customers are doing, it's interesting to see that Amazon the retailer actually went through this phase five to ten years earlier — we needed to reach the scale, reliability, performance, and security that most of our customers are now starting to get confronted with. If you look at how development transformed, it really went from what we know as a monolithic application over to what's now a whole deep microservices environment. I'll talk a bit more about microservices in a minute, but in a digital business like Amazon, experimentation — fast, continuous experimentation — is crucial, and monoliths are not good for that. With one big piece of software that many different teams have to work on together, being a really fast-moving innovator and experimenter doesn't really work. So next to the scaling issues and all the technology issues that we had running a monolith, we really wanted to break all of that up in a manner such that we could move faster. The joking term that we used was "two-pizza teams": every service has a team associated with it that is really responsible for that service — full and total ownership, also over its roadmap. These teams live by what I used to call "you build it, you run it," and that was sort of the first days of what we now know as DevOps. It was important for us, because now that we had built this decentralized environment — not only from an architectural point of view, but also from an organizational point of view — we could move really fast: we could start making new versions of some of these microservices and experiment with them in a much easier way than we ever could have done with a monolith. A good example, coming back to S3: when we launched S3 13 years ago, we had eight separate microservices that made up S3 — some that did the put and the get, some that did the scanning, storage, and maintaining the index. Only eight. But what we knew on day one when we were building S3 is that that would not be the architecture we would see four or five years later; with every order of magnitude of growth, you have to revisit your architecture. If this had been a massive monolith, it would have been a nightmare: at some moment you would get an email from Amazon saying, "we're taking S3 offline on Friday night from 10:00 to midnight to deploy a new version." That would not be a good plan. As such, you need to be able to evolve your software while your customers are still running on it. Now S3 is well over 235 different distributed microservices — all the new capabilities that we've been building in over time, and also lessons that we learned. We learned that hardware, no matter how high-end it is, fails at times and does really weird things, like incorrect bit flips in RAM that suddenly happen — so you build a microservice that handles that one particular job really well. And the thing with microservices is that many of these decomposed building blocks that come out of your monolith have very different scaling and reliability requirements, and we'll get back to that. So if I look at when Amazon,
as well as our customers, goes through this move from monolith to microservices, what's the impact on the way that we develop our software and the way we operate it? Let me go through some of the different phases. First of all, looking at the architectural patterns: the move from monolith to microservices is probably one of the biggest architectural changes that I've seen in the past years, and the most important one. So let me tell a story about Amazon the retailer. When we broke up the first monolith, we had three very large data sets — customers, items (the catalog), and orders — and basically we took the business logic, moved it away, put it next to the databases, and we had these three very large services left. One of them was the customer master service: basically all the code that operated on the customer master database. Now, we learned pretty quickly that that was a mistake. We had done a data-driven decomposition of our system, and we should have done a functional decomposition. Because in that customer master service, you would have one component that would basically be the "recognize customer" service — a login service, let's call it — and in that same piece of software would also sit the address book service, which is only needed when you do a checkout. Yet login is hit on almost every page. So now the whole component needs to scale at the scale of the hottest component that sits in there. Worse, this whole software component — this whole blob — has access to both the credential store as well as the address book store, which is almost a violation of security principles. So you really want to be able to decompose into the smallest building blocks that you can imagine, and then have each of those scale along the dimensions that they need to scale. The login service, just by itself, can scale immensely without impacting the address book service.
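A minimal sketch of that functional split might look like the following — plain Python with hypothetical names, standing in for two independently deployed services. The point is that login and address book no longer share code, data access, or scaling behavior:

```python
# Illustrative sketch of a functional decomposition: two separate
# services, each owning only its own data store, instead of one
# "customer master" blob with access to both. Names are hypothetical.

class LoginService:
    """Recognizes a customer; hit on almost every page."""

    def __init__(self):
        self._credentials = {}  # only this service sees credentials

    def register(self, user: str, password: str) -> None:
        self._credentials[user] = password

    def login(self, user: str, password: str) -> bool:
        return self._credentials.get(user) == password


class AddressBookService:
    """Stores addresses; only needed at checkout."""

    def __init__(self):
        self._addresses = {}  # only this service sees addresses

    def add_address(self, user: str, address: str) -> None:
        self._addresses.setdefault(user, []).append(address)

    def addresses_for(self, user: str) -> list:
        return self._addresses.get(user, [])


# Each service scales (and fails) on its own dimensions: you can run
# many replicas of LoginService without touching AddressBookService.
login = LoginService()
book = AddressBookService()
login.register("alice", "s3cret")
print(login.login("alice", "s3cret"))  # True
book.add_address("alice", "1 Main St")
print(book.addresses_for("alice"))     # ['1 Main St']
```

In the data-driven decomposition, both classes would have been one service over one database; the functional split is what lets the hot path scale alone and keeps the credential store isolated.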
Ten years ago — or was it twenty, when we started going through this process and learning about it — I wish we had containers. We didn't. If you look at the different types of compute available for you to support all of this — instances (virtual machines), containers, and Lambda — all of them play an important role, and there's a clear shift happening over time away from instances into more serverless development, but instances will be around for a very long time. At this moment we have a hundred and eighty instance types for you — whether that's burst capability, or general purpose, or memory-intensive, or disk-intensive, or huge-memory instances if you want to run your SAP HANA systems. All of those capabilities need to be available because of the many different workloads out there, so you can pick whatever instance type you really need to support the application that you're running. Now, one thing we always focus on is whether we can also reduce cost further, and so I'm happy to announce today that the AMD-based instances are generally available, in both the M and the R categories — that is, general purpose as well as memory-intensive workloads. They're based on the AMD EPYC 7000 processor, and they have exactly the same numbering, the same family evolution, as the general-purpose and memory-intensive Intel-based ones, so you can immediately start switching between one and the other. The advantage of the AMD ones is that they're about 10 percent lower cost than the previous instances we had. If we look at containers, that clearly is the point where I see most of our customers that are moving to microservices actually operating at this moment, and there's a really rapid evolution in the use of containers happening, because it makes it so easy to build this microservices environment on top of them. We have lots of customers that are really experimenting or building real production
systems using containers. Think about McDonald's: by any measure the world's largest restaurant chain, with 37,000 locations around the world, serving 64 million people a day. They built a home delivery system, and they did this in four months using ECS — the Amazon Elastic Container Service — and they serve 20,000 orders a second out of that microservices environment using containers, with typical latencies of 100 milliseconds. So this is an amazingly scalable environment that really has all the components built in to reach this massive scale, also for the whole API system around it, such that they can integrate with partners such as Uber Eats. A pretty impressive development of back-end services. Now, if you look at the capabilities available on AWS, there are different places where you need to make decisions. Are you going to use the Elastic Container Service, or are you going to use the Elastic Container Service for Kubernetes? ECS or EKS — those are the two choices you have at the orchestration level. At the compute level underneath the containers, you have a choice whether you want to manage those clusters yourself, or whether you want to make use of Fargate, which turns your container service into a serverless container service, where you only have to worry about building the software that has to run in your containers, and not worry about the infrastructure anymore. And of course, with all of that, you need a container registry service that needs to be highly scalable — we have some customers that pull the same image 4,000 times into different tasks — so security and scale and reliability of the container service are crucial in all of this. How to choose between the different container services is more or less a choice between highly opinionated systems and ones that give you way more flexibility — whether you value simplicity over flexibility. If you look at ECS, that's clearly where I think simplicity rules: it's a highly opinionated service about how to build container-based applications, with very deep integration with each and every one of the other AWS services — whether that's ALB, or CloudWatch, or GuardDuty, all the integrations there are crucial, especially when it comes to auto scaling and scaling over multiple availability zones. Many of our customers are making use of ECS because of this deep integration into AWS, and most of these are customers that really start building their first container systems on AWS itself. EKS, however, is much more flexibility-oriented. We're deeply engaged with the Kubernetes open source community, and we are a hundred percent upstream — that means we push all of our changes, all of the integrations that we do with AWS, into the general repositories first and get them accepted there before we start launching them in EKS itself. Again, we're working on getting deeper integration into the AWS platform, but it also allows you to start developing on your laptop, or maybe on-premises, and then move your container service over to AWS. I see Kubernetes mostly happening with customers who are looking to migrate into the cloud, where they really start building things in their own environment with the idea that they will be able to move it over to the cloud whenever they get to that particular point. In all of this — definitely in the early days of container-based systems — I was kind of surprised about the willingness of everyone to manage, again, resources at the lowest level. Because if you think about containers, you really want to think about the applications you want to build; you don't want to manage the servers or instances or clusters underneath. So we built AWS Fargate to take away all of the heavy lifting that comes with running container systems.
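The Fargate idea comes down to declaring what to run rather than where to run it: a task definition names the image, CPU, and memory, and the service handles placement. Here is a rough sketch of that shape in plain Python — the values, the helper function, and the abridged CPU/memory table are all hypothetical stand-ins, not the real ECS API:

```python
# Sketch of the shape of an ECS/Fargate-style task definition: you
# declare what to run (image, CPU, memory), not where to run it.
# Names and values are hypothetical; the table is abridged.

FARGATE_CPU_MEMORY = {
    # CPU units -> allowed memory settings (MiB), abridged for illustration
    256: [512, 1024, 2048],
    512: [1024, 2048, 3072, 4096],
}


def make_task_definition(image: str, cpu: int, memory: int) -> dict:
    """Build a minimal Fargate-style task definition, validating
    that the CPU/memory combination is one of the allowed pairs."""
    if memory not in FARGATE_CPU_MEMORY.get(cpu, []):
        raise ValueError(f"unsupported cpu/memory combination: {cpu}/{memory}")
    return {
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",  # each task gets its own network interface
        "cpu": str(cpu),
        "memory": str(memory),
        "containerDefinitions": [{"name": "app", "image": image}],
    }


task = make_task_definition("example/orders-service:1.4", cpu=256, memory=512)
print(task["cpu"], task["memory"])  # 256 512
```

Nothing in the definition names a server or a cluster node — that absence is exactly the "no heavy lifting" point.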
need to manage those clusters there's no value in that if youreally want to finish focus just purely on building business logic and sowhether the business logic runs in a container or whether you actually makeuse of truly service environment like lunda that's available for each andevery one of you so if you think about sort of the continuing from frominstances to containers and to lambda it is clear that there is a massive drivehappening there and many of our customers especially those at a cloudfirst and thinking about building new applications all start off with servicetoday and why because the productivity is much higher and you don't have tothink all these other pieces that you have todo around sort of provisioning infrastructure winning things that weremultiple disease managing your security posture things like that many of theseare all taken care of by lambda itself and of course we continue to innovatelooking at how you are building these service applications and it's veryinteresting because this is a continuing and we continue to work with you becausethis is such a new world surface that we need to make sure we building the righttools for you and so layers has become one of the important components on onehand to make sure that you don't have to upload redundant pieces of code you canshare this piece of code between different applications or version themand things like that but also it's an ability for you to actually really getone of your own application runtime so your language runtimes and it's anintegrating there as in a lambda layer so you can learn any programminglanguage that you want to read now every C custom is building pretty extensivethings if you look at this this is home away it's a company by Expedia and sowhat they have tell you about six million images are being uploaded therethis is say for clarification home brokerage service people upload aboutsix million images each month with all these images need to be transformed intowhat into 
standard sized images and you know firm nails and also tended need tobe pushed from machine learning to see whether these images are appropriate andall these kind of things and as you can see in the whole architecture no serversyeah everything is a combination between lambda and other service components likedynamodb an SV and Kinesis it's a pretty this is a pretty common architecturetoday where there are no servers in this picture anymore you literally havehundreds of thousands of our customers that are all using lumber and the mostamazing thing happens in all of this we think about solid building newtechnologies it's often the young tech not a technology startups that are sortof adopting technology first but what we see withwith Linda is that actually enterprises jumping on board immediately and whybecause it is makes it so easy to only have to pay for those resources thatyou've really used very effective management mechanism but also in createso much greater productivity with your developers that that's really somethingthat as an enterprise if you want to move fast are really concerned about sowhether that is for example a company like Capital One they migrated billionsof mainframe transactions into a system with DynamoDB lambda and other AWSservices basically completely eliminating the mainframe instead goingover to another container or another image based system now they moved overcompletely did jumped all of those steps and actually started using a DBS lambdato replace their mainframe with well if all of these different components comingtogether are you might have different languages might have different types ofapplications and all of them are sort of running in this distributed environmentthen suddenly a whole other challenge comes up how do these different microservices find each other how did they discover what do you do when you how doyou communicate with each other how do you get visibility in which servicestalking to which service with what particular 
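To make the HomeAway-style pipeline above concrete, here is a minimal sketch of the fan-out step an S3 upload event might trigger inside a Lambda handler. The event follows S3's notification shape, but the bucket name, key, and thumbnail sizes are invented for illustration; a real pipeline would hand these jobs to downstream Lambdas or Step Functions rather than print them.

```python
def thumbnail_jobs(s3_event):
    """For each uploaded image, emit the resize jobs a downstream
    worker would pick up. Sizes are illustrative, not HomeAway's
    real configuration."""
    sizes = [(100, 100), (640, 480), (1280, 960)]
    jobs = []
    for record in s3_event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        for w, h in sizes:
            jobs.append({
                "source": f"s3://{bucket}/{key}",
                "target": f"s3://{bucket}/thumbs/{w}x{h}/{key}",
                "width": w,
                "height": h,
            })
    return jobs

# A hypothetical S3 notification event, invoked locally:
event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                             "object": {"key": "img/house.jpg"}}}]}
for job in thumbnail_jobs(event):
    print(job["target"])
```

Note that the function itself is pure business logic: there is no server, queue, or scaling code anywhere in it, which is exactly the point of the architecture being described.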
And also, what do you do when failures happen? How can you route traffic away? If things are starting to degrade from a performance point of view, how can you throttle your clients? There are all these kinds of steps you need to take if you suddenly live in this completely distributed environment.

So for that we built AWS App Mesh, which makes use of the Envoy sidecar capabilities and gives you a complete view of the network. You get one consistent mechanism for the communication between all of the different components that now live in your distributed system. It takes care of the reliability of the communication and of failure isolation, it gives you insight into how communication is happening, what particular loads there are and what particular paths are being created, and it lets you configure all of this. And all these capabilities in App Mesh, I'm happy to tell you, are today generally available for everyone to use.

This whole move to microservices is very important, and we see it happening not only in young businesses but definitely also in more established enterprises. Ellie Mae is a financial services technology company that is moving all in on AWS, and they're really embracing serverless. To hear more about their move to the cloud, please welcome Satish, the SVP of Cloud Engineering at Ellie Mae, on stage.

Thank you very much. Good morning everyone. I'm really excited to be here to share Ellie Mae's journey to the cloud. Who is Ellie Mae? Sorry, the slides are a little bit late. Ellie Mae is a technology company that powers the American dream. Ellie Mae's mission is to automate everything that is automatable in the mortgage industry, to make the home-buying process easier for lenders and home buyers, because at the end of the day home buyers don't dream about a mortgage; they dream about a home. Today, 40 percent of all US mortgages are processed using Ellie Mae's technology.

Today's reality is not easy or efficient for lenders or home buyers: it's a complicated and disjointed process with a dizzying number of steps. So there's a better way. Ellie Mae has built a platform that helps solve this problem for our lenders, to originate loans more efficiently and make better decisions based on the data. Let's take a closer look. Ellie Mae has a robust developer community. Ellie Mae's lending platform is a two-sided platform, with lenders on one side and consumers and borrowers on the other. Consumers, lenders, and partners use this platform to process mortgages every day, and we have a community of 5,000 developers innovating on our platform every single day.

Our journey began in the 1990s with a client-server architecture built for on-premises. Then we transitioned to SaaS in 2009, and then we transformed ourselves into a platform company in 2016, built on AWS. Ellie Mae is moving all in on AWS; our goal is to move 100% by the end of 2020. There are many benefits of moving to the cloud. Given the seasonal nature of our business (you probably know most homebuyers buy homes during springtime or summertime), elasticity is key for our business. In addition to that, developer productivity and speed of innovation are key as well.

Let me give you an example of one of the new products that we launched. We built an end-to-end data pipeline and data products that take every data transaction and store it in a data lake built on AWS, providing analytics and insights on loan activity for our customers. We built this product from idea to go-live within six months; it could have taken 2x longer if we had to build it on-premises.

Let's take a look at some of the AWS services Ellie Mae is using. Like most of you, we leverage a wide variety of services. Let me touch on one: we love Lambda. It not only saves money, it increases developer productivity significantly. As a matter of fact, we processed 1 billion transactions in the month of January alone, out of that trillion transactions that Werner was talking about. Speaking of saving money, we did some cost analysis: in phase one we are projecting twenty percent cost savings when we move all in on AWS, and as we continue our transformation we are anticipating much deeper savings.

In order to ensure success, the key for us is to enable a cloud culture within the organization. Moving to the cloud in a regulated industry like ours, there are a few things we need to consider: we need to ensure that our compliance and security requirements are met, while ensuring all the internal stakeholders are aligned. To help kick-start and accelerate our transformation, we engaged in a number of cloud-centric activities, and I'd like to share some of them with you. Some of the key programs we have implemented included boot camps, cloud GameDays, hackathons, and technology summits, and AWS has been a key partner in these. As you all know, great people make great products, and happy developers build amazing products. I'm proud to say that's what our team does at Ellie Mae every day. Thank you.

So with all of these new patterns that we see arriving, it's not just the architectural patterns that you have to keep in mind; you also have to think about the operational model: how am I going to operate my services? Whether you choose containers or instances or Lambda to build your applications matters, because there is a clear increase in complexity when you move to such a pervasive microservices environment. Some companies have moved from a monolith into well over a few hundred different microservices, and if you look at some of our customers, they're easily running thousands of different microservices as part of their overall system. So was it easier in the days when everything was in the monolith? For some parts, definitely: things were just a function call or a procedure call, and now you have to use App Mesh to stitch all these pieces together and manage the reliability and the fault isolation. There are all these different choices you have to make, and so what's the best operational model around it, such that you minimize the effort you have to spend stitching these different services together?

You know, whether you pick serverful (and I consider instances, as well as container services that are not running on top of Fargate, to be serverful, because you still have to manage the underlying infrastructure), or containers over Fargate as well as Lambda on the compute side, I think serverless is the first choice today, and we see most of our customers really embracing serverless as a cloud-first strategy. Except for maybe when you have pre-built software that came from a vendor and you still need to run it in an instance, we see most of our customers starting to build things around it using Lambda and serverless capabilities. But serverless is so much more, by the way, than just Lambda; Lambda was just the last piece that was needed to stitch things together such that you never had to think about servers anymore. The general model for serverless is really that you have no infrastructure to provision, it scales automatically, you only pay for what you've used, and the service itself manages high availability and security for you. And that's not just Lambda; it's all the different capabilities that we've built at AWS over the years. S3 matches this description perfectly. DynamoDB does now, or Aurora Serverless, or all the other integration capabilities that we have to stitch your applications together, whether that's Step Functions or SNS and SQS and API Gateway and AppSync, and then, as computational models, Lambda and Fargate. Serverless is a whole stack, and just focusing on Functions as a
Service is not serverless; it's the whole stack. For none of these pieces do you have to worry about proper provisioning, and you don't have to worry about multi-AZ deployments; it's all taken care of for you under the covers. That's truly what serverless is, and it really helps many of our large customers move significant pieces of their infrastructure over to serverless. Financial Engines saved 95% in deployment and operational costs. Coca-Cola cut processing time from 36 hours to 10 seconds. And FINRA, the organization that monitors the stock exchanges for fraudulent and anomalous operations, literally validates about 500 billion stock market transactions a day using a serverless environment. So it really is the cloud-first strategy to look at serverless first, if you can build it there, because you no longer have to worry about infrastructure that you need to manage. And I've said this before: serverless really pushes things out to the limit, where in the future you will only write business logic. Nobody will be managing infrastructure anymore; what you operate is the higher-level constructs of microservices.

Now, with all of that, the way that we build software and deliver software needs to change as well. If you look at the questions that we get asked when we think about microservices development: how does the release process work? How do you push code out? How do you debug it? All of this is such a new environment that all the tools we have need to adapt to it as well. In the old life cycle things were clear: you had one pipeline that delivered into production maybe every three months, or maybe every six months, depending a bit on what kind of development strategy you were using, and most companies that are running a monolith are really running a sort of waterfall model. Of course, the great thing with all these microservices is that the development life cycle is very different: each of these teams is fully independent, and that means every team can deliver at their own pace and immediately react to requests from customers. In the old days it was much harder, if you had a monolith, to be really agile and fast-moving and immediately react to your customers, because basically all your teams were working on the same piece of software, and that is a very heavyweight development process. So the best practice around all of this is to not only decompose your architecture into smaller building blocks, but your organization as well, so they can actually move fast: each of these teams has total ownership over the software that they have, and can really move fast based on the feedback they're getting from their customers. And in all of that, infrastructure as code and automation and all these kinds of things play crucial roles.

Now, these are all best practices that we've seen arrive over time, and many of us have our favorite development tools, and many of our partners are delivering great technologies there. Of course, at AWS we needed to make sure that we have AWS cloud-native tools as well: the whole CodePipeline, CodeCommit, CodeDeploy, and all the different testing tools that are available around it, and also integration with X-Ray and CloudWatch. We needed to make sure that we have at least a whole set of very mature development tools for you, so you can automate these pipelines, and especially now, with the rise of serverless and the rise of Lambda, we needed to make sure that all of our development tools really support containers and Lambda as well.

Most important of course, in all of this: if you build your systems, you need to be able to debug them, or at a minimum you need to get good insight into them. AWS X-Ray allows you to get a visualization of all the different components of your microservices environment, whether they are running in containers or in Lambda, and whether or not you use App Mesh; it's integrated in all of that. With it you can get a visualization of any challenges or any problems that are happening in your completely distributed environment. And definitely when you think about resource provisioning: you may have a case where you set certain read and write capacities on your DynamoDB table on one hand, and five microservices down the path something is actually affected by how you have provisioned that. In a distributed environment, that's pretty hard to figure out; X-Ray gives you detailed insight into, and a view of, how your distributed application works.

Now, with all of that, debugging is a very important part of the development process, so with the rise of serverless we need to make sure that you can use your most popular tools to build serverless applications on AWS. We had already announced Cloud9, of course, which is the AWS IDE environment that is truly cloud-native, but also PyCharm, and actually today the IntelliJ toolkit is generally available, so you can build your Java and Python there. VS Code is still in Developer Preview, but we expect that to go generally available soon as well. And we know you're pretty opinionated about what the best development tools are; we just need to make sure that all of them work really well on AWS, and that you can develop in the way that really is your style of development, from programming languages all the way over to what kind of IDE you want to use. Although I don't see any integration in vi and Emacs happening anytime soon. But we do look at all the different kinds of models, and so for example SAM is a different way of describing your serverless application, in a declarative format instead of an imperative one.
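As a sketch of what that declarative style looks like, here is a minimal, hypothetical SAM template. The function name, handler, and API path are made up for illustration; the Transform header and the AWS::Serverless::Function resource type are SAM's actual constructs.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:                      # hypothetical function name
    Type: AWS::Serverless::Function   # SAM expands this to Lambda + IAM + API resources
    Properties:
      Runtime: python3.9
      Handler: app.handler            # hypothetical module.function
      Events:
        HelloApi:
          Type: Api
          Properties:
            Path: /hello
            Method: get
```

You declare what you want (a function, triggered by an HTTP GET) rather than scripting how to create it, and SAM expands it into the underlying resources.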
So check out SAM if you really want to do local development as well; it really helps you think differently about how to compose your serverless application.

Now, in all of that, I think there is one really crucial point that we all need to start thinking about as technologists, and that is that in this whole world of continuous integration and continuous deployment, security becomes very different. I think that we as technologists really need to take responsibility for making sure that we keep our customers and our businesses secure, even as we change operational and architectural models as fast as we're doing now. If I look at the past, in the monolith days, you would build software, the security team would come in, they'd sprinkle some magic dust over it, and suddenly your application was secure. That may have worked in the past, but today that's definitely no longer the case. Most of these old-style security approaches rely on building firewalls around everything. Now, if firewalls were the right security solution, we would still have moats around our cities. We don't: we protect our individual houses, we protect the individual rooms within our houses, and we should do that in our digital systems as well. Remember, if you look at most of the threat data, it shows that brute-force front-door attacks almost never happen anymore. It's all about social engineering, where someone in your organization gets an email that says: this is your new retirement package, click this link to sign it. There is always someone who clicks that link, because if there weren't, the attackers wouldn't keep doing it. And so there's always some evil JavaScript that gets downloaded and established, and if the individual pieces in your organization are not individually protected, everything is toast.

And I think what we've seen in the past years, with the number of data breaches that have happened, is that most of those, almost all of them, are related to old digital systems that have been brought online, or old operational practices that were appropriate maybe five or ten years ago, when you were building according to a waterfall model and things like that; those are no longer applicable. The security team looks very different today than 10 or 15 years ago. Now we have all these different components that we have to take care of, and it's no longer a separate security team: it is us as builders who are responsible for this. Security needs to become everyone's job. With all these data breaches that we've seen in the past years, we need to make sure that we protect our customers and our business, and it's our responsibility as technologists. Now that we are moving to more and more digital systems, and most of these digital systems are developed in a very different, much faster-moving way, we must not forget that security needs to change with this. It's both the security of the pipeline itself, as well as the software that you develop inside the pipeline. Make sure that your pipeline is built from hardened development services that you have total control over, and then make sure that for all the components that you're building, in each step you check whether you are introducing new vulnerabilities or not. And what kind of alarms should go off? If you do a hundred deployments a day, security looks different.

So if this is a traditional setup for your continuous integration and deployment, we need to make sure that in each of these steps, security becomes embedded. Whether you do continuous scans, or whether, if changes happen in configuration, alarm bells go off: sometimes automated and sometimes manual checks need to happen. If someone adds a new library to your application, an alarm needs to go off, because someone needs to see whether this is actually a library that has been approved, whether (if it's an open source library) there are vulnerabilities that come with it, and why this library is being added. And for all of these kinds of steps, you need to make sure that you automate as much as possible. If you look at the different components, infrastructure as code definitely plays a crucial role in all of that, because it means you can actually see the changes that are happening in your infrastructure configuration between one deployment and the next.

So if we look at when, in this whole development process, these operations need to happen: when we push new code, it needs to go through code scanners, and you need to look at new libraries and dependencies being introduced. The triggers can be whenever a change happens, or maybe you need to do it on a daily basis, or whenever you change frameworks. And then afterwards you need to continue to validate whether the application is still meeting your security, and maybe compliance, requirements. In all of that, automation plays a crucial role. And of course, in the world that we live in, there's the shared responsibility model: AWS takes care of a large part of the operational environment for you, and we also build all these new tools for you to use. There's a whole collection of AWS automation tools around security that you all should be using, because if you really want to move to a world that is secure, you need to automate as much of the security process as possible.

Now let me just pick a few of these that I really like. Amazon Inspector basically inspects the code that you're running to see whether you have introduced new vulnerabilities. On one hand, that might be just scanning against well-known vulnerabilities.
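The "alarm when someone adds a new library" gate described above is easy to automate as a pipeline step. This is a simplified sketch: the approved list is hypothetical, and a real gate would also pin versions and consult a vulnerability feed rather than just a name allowlist.

```python
APPROVED = {"requests", "boto3", "flask"}  # hypothetical allowlist

def unapproved_dependencies(requirements_text, approved=APPROVED):
    """Flag any library in a requirements file that is not on the
    approved list: the kind of automated check that should fire on
    every one of your hundred deployments a day."""
    flagged = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Strip version pins to get the bare package name.
        name = line.split("==")[0].split(">=")[0].strip().lower()
        if name not in approved:
            flagged.append(name)
    return flagged

reqs = """
requests==2.31.0
boto3>=1.26
left-pad==1.0
"""
print(unapproved_dependencies(reqs))
```

In a CI pipeline, a non-empty result would fail the build (or page a reviewer), so a newly added dependency never slips through silently.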
But it might also be the case that you are subject to particular compliance regulations, and Inspector can check whether you are still in compliance. This is important: remember, if you need to process credit card transactions, you need to be PCI compliant, and if you now make a whole range of changes to your code in a day, are you still in compliance with the regulations? Inspector can help you with that by really diving deep on some of the changes that you've made, in a completely automated fashion. So make use of it.

CloudTrail: if you have not enabled CloudTrail, then you're really missing out on getting detailed information about how your systems are being used. CloudTrail really logs every possible operation on every object, on every resource. You can continuously record all these API calls and store them into S3, over which you can then run QuickSight and other analytics tools, and take a look at whether there are anomalies happening in all of this.

If you think about security, as I said before, encryption is what it's all about. Dance like no one's watching, and encrypt like everyone is. The important thing in all of this is that encryption is the tool we have to make sure that nobody else has access to our data. For example, Amazon the retailer needs to be PCI compliant; that means that about 15% of its calls and storage operations need to be encrypted. They just decided to encrypt everything. That means that none of their engineers can any longer make a mistake about whether they should encrypt this or not encrypt that, and the PCI audit becomes really simple in that manner. So, given that we've built encryption into almost all of the AWS services, make use of it. You know, five or ten years ago we may have had this discussion about whether HTTPS was too expensive; now every consumer service runs over HTTPS. The same goes for encryption. For a long time you said these tools are way too hard to use and they cost too much, but it turns out we're now building tools that don't make it that difficult for you, whether that's the integration of encryption into all the different services, or whether you can just bring your own keys with KMS, which means that you have total control over who has access to your data. Look at Redshift: Redshift encrypts every data block with a random key, always, and then the set of random keys is encrypted with a master key. Now, you can bring your own master key, or we can generate it for you. Most importantly, if you generate your own master key, you're the only one who can decide who has access to your data. Encryption is the most important tool you have to protect your customers.

With all of this, I think as technologists we need to take responsibility here. We need to make sure that the next generation of systems that we're building have security as a first-class citizen, where you start thinking about protecting your customers on day one. I know if you're a young business here, starting to innovate, thinking about all these new things that you want to build, security might not be at the forefront of your mind, but it should be, and definitely we as technologists need to take responsibility for making sure that the next generation of systems are as secure as they can be, using the automated tools that we give you.

Well, in all of this there are also changes to data and data management happening, of course, and we at AWS needed to make sure that you have the right tools there too. Many of our customers were running databases themselves, enterprise-grade databases, on-premises, and many of you have asked us to help you move to open source, whether that is MySQL or Postgres. Mostly not because of the capabilities of these database engines, necessarily, but because the licensing terms that the old guard is using are truly restrictive; it almost goes back to blackmail.
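The Redshift-style envelope pattern described above (random per-block data keys, wrapped by a master key you control) can be sketched in a few lines. This is a toy illustration only: the XOR "cipher" below is NOT real cryptography, and in practice the data key would drive AES-GCM while the master key would live in KMS and never leave it.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR cipher for illustration only -- NOT real cryptography."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def envelope_encrypt(master_key: bytes, plaintext: bytes):
    data_key = secrets.token_bytes(32)                 # fresh random key per object
    ciphertext = keystream_xor(data_key, plaintext)    # data encrypted with data key
    wrapped_key = keystream_xor(master_key, data_key)  # data key encrypted with master key
    return wrapped_key, ciphertext                     # both are stored; master key is not

def envelope_decrypt(master_key: bytes, wrapped_key: bytes, ciphertext: bytes):
    data_key = keystream_xor(master_key, wrapped_key)  # unwrap the data key first
    return keystream_xor(data_key, ciphertext)

master = secrets.token_bytes(32)
wrapped, ct = envelope_encrypt(master, b"loan application #1234")
assert envelope_decrypt(master, wrapped, ct) == b"loan application #1234"
```

The point of the pattern is the one Werner makes: only the holder of the master key can unwrap the data keys, so whoever controls the master key decides who has access to the data.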
The only way for you to drive your costs down is to make very long-term commitments, and then buy many more licenses than you will ever need. Well, I've been on the receiving end of that: if I had to buy more databases, the only way to drive costs down was to make a five- or ten-year commitment. I don't know how many databases Amazon will need ten years from now, but that was something you needed to decide at that moment. So many of our customers really want to move away from that restrictive environment. They want to move over to the cloud, preferably using standard interfaces like MySQL or Postgres, but they would really like an enterprise-grade database in the background.

Now, none of these relational databases have actually been designed for the cloud. The only way that we can really scale them out, instead of scaling up, is to make use of sharding, and whether you do that at the application level, or at the database level, or in some weird storage-mapping mechanism, those are the ways that you can make these databases scale out. But remember, this is technology developed in the nineties; it's not modern development. It all requires a local disk, and even if you cluster these databases, they still require a shared disk, and each of those instances has a whole stack in it, very much duplicating everything. So with all of that, we built Amazon Aurora, where we basically took the whole database engine apart. Aurora has two interfaces, a MySQL interface and a Postgres interface, but behind the covers we've ripped everything apart, more or less in the middle at the caching layer, and moved to a shared storage service based on SSDs that is actually database-aware. This has allowed us to build a much faster, much higher-reliability system than we could ever build using the standard off-the-shelf databases.

Now, we actually make use of six-way replication. If you build these distributed storage engines, you use quorum technology to make sure that you can always read the last write. The typical scenario there is a quorum of three nodes, where you need to have at least two nodes available to write, and two nodes available to read, such that there's an overlap and you can always read the last write. In our scenario, we believe there are failure scenarios out there that are much more dangerous to the reliability of such a database. In our case, we would really like to survive at least the failure of one complete Availability Zone, and when a complete AZ has failed, there's a likelihood that in that particular timeframe one of your other nodes may fail as well; you know, when it rains, it pours. So we really want to handle an AZ+1 failure scenario, where we can lose a whole AZ plus one node in a different AZ, and for that we go to a quorum system of six: we do six-way replication to make sure that we have a continuous overlap in these scenarios. That means that if you lose an AZ plus a node, you may no longer be able to write, but you can still read, and then we need to make sure that the repair of write availability is really fast. We do that by making sure that the individual blocks being stored in the storage service are really small, just ten gigabytes, meaning that you can very quickly repair a failed node by re-replicating the data underneath.

Now, all of this is the reliability side of things. It is also very important to make major improvements to the performance. In these traditional databases, performance is restricted because their whole thinking is about a local disk. If you look at a typical MySQL setup, any write to the database will result in multiple writes to the storage engine, which are then also shipped to your replica, which then also does all of those storage operations. This is hugely expensive.
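The AZ+1 quorum arithmetic described above can be checked in a few lines. The write quorum of four and read quorum of three out of six copies are Aurora's published numbers; the helper names here are mine.

```python
# Aurora-style quorum: 6 copies spread over 3 AZs (2 per AZ),
# write quorum 4, read quorum 3.
V, V_WRITE, V_READ = 6, 4, 3

# The two classic quorum conditions:
assert V_READ + V_WRITE > V   # every read set overlaps the last write set
assert V_WRITE > V / 2        # two conflicting writes cannot both succeed

def can_write(available):
    return available >= V_WRITE

def can_read(available):
    return available >= V_READ

# AZ+1 failure: lose a whole AZ (2 copies) plus one more node -> 3 copies left.
assert not can_write(3)   # writes are blocked until repair
assert can_read(3)        # but reads still succeed, exactly as described
```

This is why the 10 GB segment size matters: the faster a lost copy is re-replicated, the shorter the window in which writes are unavailable.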
And so: you move the data pages, you do double writes to avoid corruption happening, you move the logs, you move metadata; all these different pieces are being written. Now, it turns out that that's hugely wasteful. It turns out that it's only the log that you actually need to write, because in the log you'll find the before and the after image of your database, so you don't need to move the data pages; you can just move the log. So in Aurora we only move the log: we write the log to the storage service, and the storage engines are not just storage engines, they are database-aware, because you can actually recreate the database by purely looking at the log. The only reason why you would ever need to move a data page from your storage engine into the database is if there's a cache miss, and it's most likely that the most recent transaction you completed will still be hot in your own cache, so you can recreate these data pages in a very lazy fashion. What you see here is that the primary instance writes the log; once the log gets persisted, at that moment you can acknowledge the write. The storage node then gossips with the other storage nodes, your six storage nodes, to transport the data there, and then in a lazy manner you can start recreating your data pages. All of this has given us a foundation for innovation in databases that was never there before. No other database system can do this kind of innovation, because they're still stuck in the old architecture. Decompose it into smaller building blocks, apply standard distributed systems techniques to keep it reliable and performant, and it gives you a basis for database innovation that is pretty spectacular. If you've ever programmed in a language that has object-relational mapping available, Ruby on Rails for example, any change to your data structures will immediately result in a change to your schema, and for that, older databases need to do a complete table copy.
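The log-only write path above can be caricatured in a few lines: the storage layer rebuilds a data page on demand by replaying the log records for it, so data pages never need to be shipped on the write path. The record shape is invented for illustration.

```python
def replay(log, page_cache=None):
    """Rebuild data pages purely from the log: each record carries the
    page id and the page's new ('after') image. A cache miss is served
    by replaying the log for that page -- the storage node is
    database-aware, not a dumb block store."""
    pages = dict(page_cache or {})
    for record in log:                 # log records in commit order
        pages[record["page"]] = record["after"]
    return pages

# Hypothetical log: only log records were ever written to storage.
log = [
    {"lsn": 1, "page": "p1", "after": "row v1"},
    {"lsn": 2, "page": "p2", "after": "row v2"},
    {"lsn": 3, "page": "p1", "after": "row v3"},  # later write wins
]
print(replay(log))
```

Because the current page state is always derivable from the log, pages can be materialized lazily, only when a cache miss actually asks for them.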
copy. However, in Aurora, creating a new database or creating a new table based on an old table is a matter of microseconds, because we can create the new table in a lazy manner, since the storage is database log aware. So it's been a great success: the fastest-growing service in the history of AWS, and it's still growing very fast, mostly because we were able to keep pushing all these new innovations further. And it's not just relational. We talked about this earlier: the move to microservices has made everybody aware that, hey, wait, maybe I can pick the right tool for the job. This particular microservice just needs a graph database. Or maybe you're operating in a world where you've been considering blockchain-style interactions, where you're looking for an immutable ledger; then you make use of QLDB. Each and every one of these services serves a really particular pattern. DynamoDB has its roots in a deep dive that we did at Amazon, the retailer itself, in 2004. When we did a deep dive on how we were using relational databases, it turned out that 70% of the uses of these relational databases were key-value: there would only be a single key in the query, and you would get a single value back. 70%! We knew that we could build a very different type of database that would be uniquely positioned to serve a key-value world, and we could get a whole different level of performance and reliability, all of those things. Dynamo became that, and DynamoDB later became the service version of that that we have in AWS. And it turns out that DynamoDB is a powerhouse now for everyone that wants to do truly scalable operations. Think about Supercell, the company from Finland that made games like Clash of Clans and others: on day one of a game they will literally have millions of players checking out the game. That means that the data stores behind it need to be extremely scalable, because a bad experience on day one will not have gamers come back. And so DynamoDB is
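The reason a single-key workload scales so well is that the key alone determines which partition holds the item, so every lookup touches exactly one node. A minimal sketch of that idea, a toy hash-partitioned store rather than DynamoDB itself:

```python
# Toy hash-partitioned key-value store: the partition key alone
# routes every get/put to exactly one node, so nodes can be added
# to scale throughput (a sketch of the idea, not DynamoDB itself).
import hashlib

NODES = ["node-0", "node-1", "node-2"]

def owner(key: str) -> str:
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return NODES[h % len(NODES)]

store = {n: {} for n in NODES}

def put(key, value):
    store[owner(key)][key] = value   # one node, one write

def get(key):
    return store[owner(key)].get(key)   # one node, one lookup

put("player#42", {"trophies": 3100})
print(get("player#42"))   # {'trophies': 3100}
```

Because no query ever needs to consult more than one partition, adding nodes adds throughput almost linearly, which is what makes day-one game launches survivable.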
the powerhouse that sits behind all of that. Now, everyone is looking to get more value out of their data. One of the things that the cloud has done is make the whole IT landscape egalitarian: everybody has access to the same storage, the same compute, the same databases, the same analytics tools, the same IoT tools, the same ingestion tools. Everyone has access to that now, so IT capabilities are no longer a competitive differentiator. So what is the differentiator? It's the kind of data that you have, and how smartly you make use of that data. And so we need to make sure that we can actually help you pick exactly the analytics tools you need to operate on your data, whether that is in the analytics space, or how to create data lakes, or how to move data in and out of your data lake. All of this is crucial for you to pick exactly the right tool. You want to do ad hoc queries, you make use of Athena; you want to make use of Hadoop, you use EMR; you want to do very complex, traditional data-warehouse-style queries, you make use of Redshift. So pick exactly the right tool for the job. A radical shift is the data warehouse that you can just fire up on demand. Where in the past data warehouses were something very expensive and centralized that you all needed to queue for, what we now see is that many business units are just firing up a data warehouse for two hours on a Thursday afternoon. That is a radical shift in how databases and data warehouses are being used, including at Amazon. I've said before that, of the past year, November 1st was one of my happiest days of the year, when we shut down one of the world's largest, if not the largest, Oracle data warehouses, and we replaced it with Redshift at Amazon. With all of this we have truly moved to an environment that moves so much faster and is so much more agile, because indeed, with the old-style data warehouses, it's such an expensive piece of software and hardware that
they are loaded up to the max. You always need to wait for it, especially if you want to run some ad hoc queries: fire and forget, you go back into the queue, where maybe your queries get executed tomorrow. Redshift is really, absolutely becoming the most popular cloud data warehouse out there, because it's so easy to instantiate. Intuit, for example, moved all of their mission-critical analytics workloads over to Redshift instead of their on-premises environment, and they are moving so much faster. The cool thing with Redshift is that we've enabled deep metrics instrumentation, application and database metrics, in the system, and as such we're able to really observe how our customers are using our software, and then work with our customers to understand how we can actually speed things up for them. In the past two years, with all these improvements, we've been able to make Redshift ten times faster, mostly because of this really close interaction with our customers, really trying to understand what are the kinds of things we can do: short query acceleration, elastic resizing, speeding up interactive queries. All of these things come from working together with our customers, understanding the patterns you use in a modern data warehouse, and that has delivered enormous speed improvements over time, based on the feedback of our customers. Now, a lot of what I just talked about is about waiting for your queries. It turns out that 87 percent of our customers never wait for their queries. But what about the other 13 percent? What are the kinds of things that we can do, in terms of innovation in our data warehouse, to make sure that you never have to wait? For that we've launched Redshift concurrency scaling, which is, by the way, generally available today. What does concurrency scaling do? We basically make burst clusters available for you, so that if we see the queues of queries rising to the point where you would actually have to wait, we can fire up
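The behavior just described can be sketched as a simple queueing rule. This is my own toy model of the idea, not how Redshift actually schedules work: when the main cluster's concurrency slots are full, the overflow queries are routed to a burst cluster instead of waiting in a queue.

```python
# Toy model of concurrency scaling: when the main cluster's slots
# are full, overflow queries run on a burst cluster instead of
# waiting in the queue (an illustration, not Redshift's scheduler).

MAIN_SLOTS = 5  # concurrent queries the main cluster can run

def route(queries):
    placement = {}
    for i, q in enumerate(queries):
        placement[q] = "main" if i < MAIN_SLOTS else "burst"
    return placement

queries = [f"q{i}" for i in range(8)]
placement = route(queries)
print(sum(1 for v in placement.values() if v == "burst"))  # 3 overflow queries burst
```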
additional clusters for you, such that your queries never have to wait. And much of this comes at no cost at all to our customers, because we will actually run these queries for you in the burst cluster without charging you extra for that. Now, analytics plays a very crucial role, and we often think about analytics as, oh yeah, that's the data warehousing, that's the old-style world. But if you look at it, every modern young business, every modern young application, is being built with data generation and analytics integrated into it. We all know about Fortnite. Nobody here plays Fortnite? Yeah, liars. Oh, by the way, I survived for more than five minutes, so I can claim I've done that. More importantly, next to all the effort that the Epic guys have put into building Fortnite as a game, they put an enormous amount of effort into data generation around it. The game clients and the servers are all different pieces that generate data for them, and there's a massive analytics environment sitting underneath that, serving the pieces of the business that are more real time, like service health and tournaments, but on the other hand also just business capabilities, like measuring your KPIs, or analyzing how the game is being used, such that the next generation of the game that you're building actually meets the ways that your customers are playing it. I've always looked at these things as analytics having three different pillars. One of them is looking backwards: looking backwards really means the Redshift and EMR type of operations, where you are basically generating reports. Then there is the real-time pillar, where you use Kinesis and Elasticsearch and EMR to, for example, look at: what is my inventory level right now? I'm not interested in my inventory level yesterday; I want to know what it is now. That is real-time operations.
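The real-time pillar, "what is my inventory level right now", boils down to folding a stream of events into a current value as they arrive. A minimal sketch, with plain Python standing in for a Kinesis consumer loop:

```python
# Minimal stream-fold sketch of the real-time pillar: keep the
# *current* inventory level up to date as events arrive (plain
# Python standing in for a Kinesis consumer loop).

events = [("sku-1", +100), ("sku-1", -3), ("sku-2", +40), ("sku-1", -7)]

inventory = {}
for sku, delta in events:              # in production: an endless stream
    inventory[sku] = inventory.get(sku, 0) + delta

print(inventory)   # {'sku-1': 90, 'sku-2': 40}
```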
And then there is a different one: the third pillar is how to predict the future. So: looking backwards, what's happening now, and what's the future. Now, we're really bad, I think, at predicting the future, so the next best thing that we can do is make use of the data that we already have and be smarter with it, using AI and machine learning. With that, I'd like to invite Dr. Matt Wood, general manager of deep learning and artificial intelligence, to talk to you more about that. Matt?

Good morning, everybody, and thank you, Werner. As I'm sure many of you are aware, we're entering a new golden age for machine learning, where many of the constraints which have held back the application of artificial intelligence and machine learning to real-world problems start to melt away in the cloud. As a result of that, we're starting to see tens of thousands of companies, in virtually every industry and of virtually every size and shape, start to apply machine learning to their central, core challenges: whether it is change in health care through Change Healthcare, whether it is advancement in life sciences with folks like Bristol-Myers Squibb and Celgene, folks progressing manufacturing, allowing you to operate more efficiently, telephony, contact centers, you name it. Machine learning has arrived in virtually every industry, and it's incredibly exciting to be part of the team at AWS which is helping customers drive this forward. A big part of why we're seeing this stratospheric movement and advancement in machine learning is that on AWS there are a number of really key tailwinds. These are forces and services and capabilities which are available to developers just like you, and which drive significant acceleration in your use of machine learning. What I'd like to do today is just run through the four key tailwinds that we're seeing in the trenches at AWS, and run through what I think are the key challenges and the key solutions which are available to customers today on AWS. The first tailwind, which is driving developers to do more with
their data, is a broad and deep set of capabilities which aim to put machine learning in the hands of every developer. We joke internally that we just want to make machine learning boring: we want it to be just another tool in the tool chest, which is available whenever and however you need it. To do this, we make three main areas of investment. The first is an investment in the fundamental machine learning frameworks and the infrastructure related to machine learning. This is typically where the advanced applied scientists live, as they're building advanced models, researching new ones, or even iterating on the key frameworks themselves. These frameworks are how you define your neural networks and your workflows to train your models, and they're where you run the inference to make predictions against your models. They're almost all open source, and they have some strange names, such as TensorFlow, MXNet, and PyTorch; there are also higher-level interfaces such as Gluon and Keras. Our approach here is maybe a little bit different from others: our approach is that we want to support all of these incredibly well and make sure that they run as well as possible on AWS. The reason for this is that, as the science of machine learning is advancing, new techniques and models and approaches and architectures are being made available virtually every single week, and those architectures exist in all of these different frameworks; they're published with reference architectures in all of these different frameworks. So just picking one, or trying to standardize on one, is not the right approach, because you lose access to all the other innovation which is happening in all the other frameworks. Our approach is to invest in all of these areas, and we actually have separate teams at AWS which focus on TensorFlow and MXNet and PyTorch and so on, and we'll keep doing that as more and more of these frameworks start to appear. So part of our approach is that we want this to be as easy as possible
for developers to use. We take all of these frameworks and we run them on world-class infrastructure that a lot of you are familiar with, on EC2, and we make it available in different ways. We make it available in a fully managed service which we call SageMaker, which I'll talk a little bit more about in a second, but we also make it available in an AMI, where we take and optimize all of these frameworks and make it a single click to deploy them on EC2. This DIY approach is really popular with scientists and applied machine learning developers who want to get in and tinker at a very, very low level, and potentially even build more frameworks going forwards. But as Werner has been talking about, we see a definite trend of more and more developers turning to containers, and so we want to apply the same approach, where we're packaging, optimizing, configuring, and installing all of these frameworks and making them available not just in an AMI but as a container. And so today I'm very proud to announce AWS Deep Learning Containers. These deep learning containers allow you to quickly set up deep learning environments on EC2 using Docker containers; they run on Kubernetes, on ECS, and on EKS. We've done all the hard work of building, compiling, generating, configuring, and optimizing all of these frameworks so you don't have to, and that just means that you do less of the undifferentiated heavy lifting of installing these very complicated frameworks and then maintaining them, because they all move very quickly. We'll be releasing new containers as new major versions are made available for TensorFlow and MXNet, and we'll be adding PyTorch very soon. They're available in the AWS Marketplace and through the Amazon Elastic Container Registry. Moving up a tier, the second major area where we're making investments is the machine learning services, and our big investment here is a service called SageMaker. What SageMaker attempts to do is bring machine learning and put it in the
hands of any developer, irrespective of the skill level that they have as it relates to machine learning. It's sometimes easy to forget just how challenging machine learning used to be before the introduction of SageMaker: virtually every step of the machine learning workflow presented a hurdle or a wall for most developers who didn't have deep skills in machine learning or deep learning, and combined, these walls were effectively infinitely wide and infinitely high; they were just impossible for most developers to climb over or dig around. But with SageMaker we systematically approached each of these key challenges and started to remove them behind a managed service which is very easy to use. For developers who need to collect and prepare training data, and this is everybody, by the way, that wants to do machine learning, pretty much, we provide pre-built notebooks for common problems and a managed notebook service which, with a single click, gives you a notebook environment where you can start to experiment and slice and dice your data. Instead of having to choose and optimize your own machine learning algorithms, we built in a set of over a dozen high-performance algorithms. These are optimized for AWS, and we use some clever techniques to allow them to stream data from S3 and train in a single pass, which dramatically increases the accuracy you can obtain and reduces the cost of running them. We allow one-click training, so with a single click we can spin up a fully managed distributed cluster under the hood for you to run your training against. And then we added optimization. A dirty secret of successful machine learning is that you don't just train one model; you train a thousand and just pick the best one, and this has traditionally been kind of a trial-and-error approach. Instead of that, in SageMaker we have a capability which provides hyperparameter optimization, and with a single click it will drive and actually guide the search for the best possible model, using machine
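The "train a thousand models and pick the best one" step can be sketched as a search over candidates. This toy version uses plain random search as a stand-in for SageMaker's guided tuning; the objective function is invented purely for illustration:

```python
# Toy hyperparameter search: "train" many candidate models and keep
# the best (random search standing in for SageMaker's guided tuning).
import random

random.seed(0)

def train_and_score(learning_rate: float) -> float:
    # Invented stand-in objective: pretend accuracy peaks at lr = 0.1.
    return 1.0 - abs(learning_rate - 0.1)

candidates = [random.uniform(0.001, 0.5) for _ in range(1000)]
best_lr = max(candidates, key=train_and_score)
print(round(best_lr, 3))   # close to 0.1
```

A guided (Bayesian) tuner does the same loop but uses earlier results to propose the next candidates, so it finds a good model in far fewer trials.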
learning under the hood. When you've got a model that you love, you can, with a single click, deploy it in a fully managed environment, and then scale that environment for production use using auto scaling, so you can scale up and scale down. The result of this is that more than 10,000 developers today are using Amazon SageMaker to drive their machine learning workloads, and many are standardizing on the platform as their central machine learning repository of data and analytics related to ML. The third main area: we want to provide these sorts of capabilities to application developers who don't necessarily have any machine learning experience, and so here we provide a set of AI services which mimic, in many cases, some level of human cognition. We have a set of services for vision, so computer vision: Rekognition to do image and video analysis, and Textract to automatically extract data from scanned documents. We do a lot of work around speech: both the generation of speech, using a service called Polly, which is the same service that we use to generate the voice of Alexa, and transcription, where we take speech and turn it into text. Then there are investments in language models, where without any machine learning expertise you can start to apply natural language processing and translation to the text that you potentially captured through speech. We build conversational interfaces using Lex; that's the same natural language understanding system that we use under the hood with Alexa for building conversational interfaces. And just in December last year we announced two new services, for forecasting and for recommendations, and these allow you to build very accurate, deep-learning-driven forecasts and deep-learning-driven recommendations, based on the same technology that we use on the retail side of the house at Amazon. What's interesting about these last two services, Forecast and Personalize, is that, unlike some of these other deep learning systems, unfortunately there is no master
algorithm for driving the very best forecast; there is no master algorithm for driving the very best personalization experience, whether it is ordering or predicting news articles, or ordering search results. As a result, what you need to do is take the data that you already have and then train your own models, which are specific to your data and to your customers. That's by far the best way of approaching it, but the challenge is that this is incredibly complicated. So what we do here is apply a technique that some people call AutoML, where we take in a lot of input data: a real-time activity stream of what's going on on the platform, in the case of Personalize the inventory, so the articles or the products that you have, along with, optionally, any demographic information that you want to provide to drive the personalization engine. And then with a single click, just three API calls, you can build a customized version, a customized version just for you, for personalization and recommendation, which we host behind an API on your behalf. Now, we don't use any of the data; it's customized for you; we don't share it in any way; this is just a specific, private model for your use. But under the hood we're doing a world of things to make this possible, and one of the great things that keeps me skipping into work every morning is the opportunity to invent and simplify relating to machine learning on behalf of our customers. I think this is an excellent example: Personalize, under the hood, is using machine learning to make all of these decisions, and we train those machine learning models based on the knowledge that we've gained building several dozen personalization systems at Amazon. We then drive the workflow, from loading the data, inspecting the data, selecting the right algorithms, training the models, optimizing them, all the way through to hosting them and building the feature stores and the caches on your behalf, so that you don't have to worry about it. So this is a step-function
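To give a feel for what a recommender does with an activity stream, here is a deliberately tiny item-to-item co-occurrence sketch: items that appear together in user histories get recommended to each other. This is a classic technique and far simpler than what Personalize trains under the hood; the data is invented.

```python
# Tiny item-to-item co-occurrence recommender: items that appear
# together in user histories get recommended to each other (a
# classic technique, far simpler than Personalize's models).
from collections import Counter
from itertools import combinations

histories = [
    ["shoes", "socks", "laces"],
    ["shoes", "socks"],
    ["shoes", "hat"],
]

cooc = Counter()
for h in histories:
    for a, b in combinations(sorted(set(h)), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def recommend(item, k=1):
    scores = Counter({b: n for (a, b), n in cooc.items() if a == item})
    return [it for it, _ in scores.most_common(k)]

print(recommend("shoes"))   # ['socks']
```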
change in the speed at which you can start to introduce deep learning and machine learning into your organizations. The second tailwind is that customers are able to take advantage of AWS to increase the performance of their machine learning applications whilst also lowering costs. Normally you have to choose between the two, but we think that's a false choice, and so it's never been cheaper or easier to run your machine learning workloads on AWS. In machine learning, you take the data that you have, usually stored in S3, you run it through a training system, and then you use inference to make predictions, and you usually do that, as I say, inside these frameworks. I'll just use TensorFlow as an example. TensorFlow: very popular, great tool; about 85% of all TensorFlow workloads out there today run on AWS, and we see this across virtually every industry, whether it is Intuit or Siemens or startups like Hudl: they're all using TensorFlow on AWS. Now, the challenge with TensorFlow is that whilst it's a great tool with a lot of opportunity for developer productivity, once you start to get to production and train on very large amounts of data, you start to take a scaling hit: it's not particularly efficient when it comes to scaling across dozens or even hundreds of GPUs. So what we did with our TensorFlow team is we went super deep into the central engine of TensorFlow, and we optimized the networking to be less chatty and more efficient across the AWS network. What we saw is that using our AWS-optimized version of TensorFlow, which is available in the containers and in the AMI as well as in SageMaker, you can train at nearly twice the speed. With stock TensorFlow you're operating at about 65 percent scaling efficiency across 256 GPUs; that means for every dollar that you spend, only 65 cents of that dollar is used for anything actually useful, and the rest is just overhead. Moving to the AWS-optimized version, we see a 90 percent scaling efficiency. Now, you'll never get to 100%; it's just not
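The 65-versus-90-percent numbers translate directly into effective GPUs and wasted spend. A quick back-of-the-envelope, using only the figures quoted in the talk:

```python
# Back-of-the-envelope for the scaling-efficiency figures quoted
# above: effective GPUs and wasted spend at 65% vs 90% efficiency.

GPUS = 256

def effective(gpus, efficiency):
    return gpus * efficiency   # GPUs doing useful work

stock = effective(GPUS, 0.65)       # ~166 "useful" GPUs
optimized = effective(GPUS, 0.90)   # ~230 "useful" GPUs

print(round(optimized / stock, 2))   # ~1.38x more useful work per dollar
print(round((1 - 0.65) * 100))       # 35 cents per dollar wasted (stock)
```

Note that the efficiency ratio alone gives about 1.4x; the "nearly twice the speed" figure quoted in the talk also reflects other optimizations beyond scaling efficiency.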
possible today. But 90 percent is a significant increase in speed, and what that means is you can train your models with more data, you can train your models faster, and you make better use of your most expensive resource, which is your data scientists and your developers who are using these machine learning techniques. But another dirty secret of machine learning that I'll let you guys in on is that whilst training is incredibly important, and there's a lot of focus there, it's actually only a fraction of the total cost of a machine learning workload. Running inference in production is the overwhelming majority of the cost: when you start to break it down, about 90% of the cost of a significant machine learning system is running predictions against your trained models. So whilst we'll continue to optimize training, we're also going to focus on optimizing this big chunk of work, on improving inference costs. We're doing that today with a service called Elastic Inference, which allows you, as a service, to add a slice of GPU acceleration for your smaller models, and then dial up the GPU acceleration behind an API when you need to increase the throughput or you start to work with larger models. Just this service alone can decrease your inference costs, which are the majority, by up to 75%. You can scale from a single trillion operations per second, which sounds like a lot but actually isn't in terms of machine learning, all the way up to very big, beefy configurations of around 32 trillion operations per second. We've already built this into TensorFlow and MXNet, and we will support any model which conforms to the ONNX standard. Coming up towards the end of the year, you're going to see AWS Inferentia start to be introduced. This is our AWS-designed, custom machine learning inference chip, and it is designed for more sophisticated models, those that can take advantage of an entire chip, with high throughput and low latency, with a single chip operating at hundreds of TOPS, but they can
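Combining the two figures quoted here, 90% of total cost in inference and up to 75% off that slice, gives a rough upper bound on the overall saving:

```python
# Rough upper bound on overall savings from the figures quoted:
# inference is ~90% of total ML cost, and Elastic Inference can
# cut that slice by up to 75%.

total = 100.0                 # arbitrary cost units
inference = 0.90 * total      # 90.0
training = total - inference  # 10.0

new_total = training + inference * (1 - 0.75)   # 10 + 22.5 = 32.5
print(new_total)                                 # 32.5
print(round((1 - new_total / total) * 100, 1))   # 67.5% overall reduction
```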
be combined together to operate at thousands of TOPS. We're going to make these available through EC2 instances, through SageMaker, and also under the hood of Elastic Inference, so if you start using that service today, when we introduce Inferentia later this year you'll just see an automatic improvement. The third tailwind is that it's never been easier, faster, or cheaper to get data ready for machine learning. This is an area where a lot of organizations spend a remarkable amount of time, and honing and producing accurate training sets is one of the most important ways to build successful machine learning models. These models require very large amounts of data, tens of millions of images. If you're building, say, an autonomous driving system, what you need to do is take every photo, every frame, from the cars that are driving around collecting this data, and you need to annotate it in some way: you need to tell the model, through training, what is important and what is not important. The way that's done today is primarily through humans: you show all of those images to humans, several at a time, and you get them to say, this is sidewalk, this is a car, this is a stop sign, and these annotations, these labels, are what allow the machine learning systems to learn. However, it's extremely costly and incredibly complex to do this at any sort of scale, because not only are you managing the data, you're also managing the humans that have to go off and actually provide the annotations. So we provide a service, uniquely on AWS, which is built into SageMaker, which we call Ground Truth, and Ground Truth allows you to build highly accurate training datasets which reduce data set preparation costs by up to 70%. We do that under the hood by using a technique called active learning: we take the data and, as it's being annotated by the humans, we capture all of that cognitive investment and we train a machine learning model as we go, and it progressively gets better
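The active-labeling loop described here can be sketched as: let humans label what the model is unsure about, and auto-label whatever the model is confident about, with the model improving as human labels accumulate. This is my own toy simulation of the idea, not Ground Truth's actual algorithm:

```python
# Toy active-labeling loop: humans label only what the model is
# unsure about; confident predictions are auto-labeled (a
# simplification of Ground Truth's active learning, with invented
# confidence dynamics).
import random

random.seed(1)

items = list(range(100))
human_labeled, auto_labeled = 0, 0
confidence = 0.3   # chance the model can auto-label; grows as it learns

for _ in items:
    if random.random() < confidence:
        auto_labeled += 1              # model is confident: no human needed
    else:
        human_labeled += 1
        confidence = min(0.95, confidence + 0.02)   # model learns as we go

print(human_labeled, auto_labeled)   # humans label only part of the dataset
```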
and better and more and more accurate; it learns more features as the training is taking place. This means that as you label more data, you can offload, with confidence, more and more of the annotations to the system that you're training as you go. With no additional overhead, you start to dramatically reduce, as you go, the number of images which need to be shown to humans. In addition to that, we have world-class workflows and tooling to allow humans to provide those annotations, but the key to driving down cost is to learn as you go, capturing that cognitive investment. The final area is that it's never been easier to learn about machine learning. One of the things I love about the AWS community is this insatiable desire to broaden skills and expand knowledge, and on AWS it's never been easier to make this investment in yourselves in machine learning. We've taken the Machine Learning University content that we use to train our own engineers at Amazon, and we've made it available in a self-service way through our training portal. This is one of our most successful training programs to date. We also make our own engineers available, and these are folks that have built things like Personalize and Forecast, the engineering teams that are involved in the personalization platform over on the retail side of the store, and we'll make that team available to you, to get hands on keyboards to build initial POCs. Our goal here isn't to build a big professional services organization; we just want to help spread the knowledge as much as possible, through a program we have called the Machine Learning Solutions Lab. If you're more of a do-it-yourself person, as I am, then we make some products available to help you learn. One of them is called DeepLens; it's the world's first deep-learning-enabled video camera for developers, and it allows you to capture data, train against that data, build models in SageMaker, and, with a click of a button, deploy them directly onto the device. These models are actually running
on the camera, and then pretty much everybody has things on their desk, and people around them, that they can use for object recognition, and with this fast feedback loop you can start to learn and experiment, which is a fantastic way of broadening your machine learning knowledge. At re:Invent, our developer conference in Las Vegas, last year we also introduced AWS DeepRacer. This is a fully autonomous 1/18th-scale race car which is driven by a type of machine learning called reinforcement learning. You build your models in a simulator up in the cloud, you specify a scoring function, which is very easy to do, without any machine learning knowledge required, and then you use that scoring function in the simulator to train a racing model which you can deploy down onto a car and race around a track. When we started doing this at Amazon, we saw very quickly, and we should have seen this coming, that our engineers started to race these devices. And so we're also announcing the AWS DeepRacer League. This is a global racing league that anyone can participate in: you can build your reinforcement learning models up in the cloud, and we're starting a series of DeepRacer League races at the AWS Summits across the world. I encourage you to attend them all; there is credit for doing more than one. The winner from every single race at every single Summit, the person that has the fastest time around our test track, will win an all-expenses-paid trip to re:Invent to participate in our Championship Cup in 2019. And if you can't get to a Summit, or you don't have a car, we're also running a series of virtual tournaments, running every month through the year. So I'm very pleased to announce that this is starting today: you can head down to the expo, you can take some models, and you can start racing them around the track. We have a real professional commentator from motor racing calling the races, it's a lot of fun, and we have a leaderboard that you can all look up on your phones to track how you're doing. So across all of these services, the capabilities
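A DeepRacer scoring (reward) function is just a small Python function over the simulator's state. Below is a deliberately simple sketch in the shape of that interface; the parameter names `track_width` and `distance_from_center` follow the documented DeepRacer inputs, and the banding thresholds are my own choices:

```python
# A simple DeepRacer-style reward function: reward the agent for
# staying near the track's center line. Parameter names follow the
# documented DeepRacer interface; the banding is an illustrative choice.

def reward_function(params):
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Three bands around the center line, with decreasing reward.
    if distance_from_center <= 0.1 * track_width:
        reward = 1.0
    elif distance_from_center <= 0.25 * track_width:
        reward = 0.5
    elif distance_from_center <= 0.5 * track_width:
        reward = 0.1
    else:
        reward = 1e-3   # likely off track

    return float(reward)

print(reward_function({"track_width": 1.0, "distance_from_center": 0.05}))  # 1.0
```

The simulator calls this function on every step, and reinforcement learning does the rest: the model learns whatever driving behavior maximizes the cumulative reward you defined.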
made available on AWS are remarkable. They are broader and deeper than anywhere else, and all of these tailwinds, across price-performance, across data preparation, and of course these learning capabilities, are only available on AWS, and they're specifically designed to help developers and builders like you get up and running with machine learning. To tell us a little bit more about what they've done at Workday, I'm very proud to introduce Ellen, who is the head of data science and architecture. Thanks very much.

Thanks, Matt. Good morning, everyone. Would you believe that we spend close to 2,000 hours at work every year? Sometimes it feels like more than that. What I love about Workday is that we touch 37 million lives, and we can make their 2,000 hours at work better and brighter. Workday is the leading provider of enterprise cloud applications: we deliver applications for financial management, human resource management, analytics, and planning. Workday delivers an incredibly trusted system of record for some of the largest companies in the world; we serve much of the Fortune 500. My background is in machine learning and in building machine learning and data products, and I'm passionate about using machine learning to solve some of the hardest problems in enterprise software. On top of the incredibly trusted system of record we have, we have a layer of engagement that delivers reporting, analytics, and planning. My team at Workday is focused on delivering a system of insight, using machine learning, that helps our customers do their best work. We have identified a few areas where machine learning makes a big difference for our customers. We all know how hard and how important it is to hire and retain the best talent; we are on a mission to transform how you identify, hire, and retain your best talent. In the world of financials, having the right insight at the right time and in the right context is everything; we are transforming financial systems with streamlined workflows and powerful predictions. Today,
we all expect personalized experiences in every aspect of our lives, and we are focused on delivering personalized recommendations that make you better informed, more productive, maybe even a little bit smarter. To do all these things, we needed a solid set of tools and the right partner to get us there, and to get us there fast. At Workday, the privacy and security of our customers comes first, so naturally, first we needed a solid foundation that ensures the privacy and security of our customers, and a system that enabled us to track data lineage at a fine granularity, so that we can implement privacy by design. Once this foundation was in place, we gave our data scientists the best machine learning and data tools, and of course the fastest compute for them to train their machine learning models. We selected AWS as our partner in this journey and built our machine learning environment on AWS using a variety of services. This diagram illustrates our ML workflow and the services that we are using. Let me use an example to illustrate how this works. One of the financials products we have is mobile expenses. Imagine you are on business travel and you have a ton of receipts that you are collecting as you expense a variety of stuff. Mobile expenses allows you to take a picture of your receipt and file your expenses on the go. Under the hood we use sophisticated deep learning models to extract the details from your receipts and populate an expense report for you. The receipts themselves are stored in the data lake, and our data scientists use SageMaker and MXNet to train deep learning models on GPUs. Once the models are trained, we deploy them as RESTful web services in our data centers. We are excited about the potential of using Ground Truth to label these receipts without having to leave our secure ML environment on top of AWS. Our data scientists and engineers love the AWS tools, and, as you can imagine, when you have data scientists and engineers
being happy about the tools they are using, this has resulted in increased productivity and, more importantly, fast experimentation. By leveraging SageMaker algorithms on GPUs, we have reduced ML development time from months to weeks. My team is really excited and moving really fast. We are looking at a few other AWS services, particularly the machine learning services, that are very interesting. As I mentioned earlier, we are evaluating Ground Truth to label data for our machine learning. There are other AWS services and SageMaker features that are of interest to us, like Elastic Inference, and for some of the future use cases we are looking at higher-level AWS AI services, like Amazon Rekognition. When I look back to where we were about a year ago, we had a small team of data scientists and engineers with big ideas to transform enterprise software. Using AWS and its full suite of services, we built a secure and robust platform, and on top of that we are building and delivering machine learning features to our customers. It has been an incredible journey, and we have only just begun. Thank you.

Great customer stories today. I want to do a little bit of a recap here. If I think about modern application development, there are a number of areas that you really need to pay attention to. First of all, really think about the new application architectural patterns, like serverless first. Really modern applications are truly serverless, and we see companies making the jump all the way from mainframe, immediately leapfrogging all the way over to serverless. Really pay attention to what kind of data is generated, to help both your operational performance and your business performance, and what kind of information can be retrieved from it to build your next generation of products. And with all of that, I really want to emphasize that security is everyone's job now, because it is us as technologists that
will need to be responsible for protecting our customers and our business. Now, we've been very fortunate over the past years to meet many extremely exciting customers, whether those are young businesses or established enterprises that are going in completely new directions. One of the things that we've decided to do is to make a TV series out of it. We have this long-form video content called Now Go Build, where basically I visit young businesses around the world and do a deep dive on how these companies are truly changing the world around them. The first one that we launched during the event was about a company from Jakarta called HARA, which is using blockchain technologies to build identities for the poorest farmers in Indonesia, such that they no longer need to go to loan sharks, who will charge them twenty to sixty percent on their small loans, but can actually go to a bank, because now they do have an identity. And not only an identity: they have information about the plot of land that they have, their yield, their growth, and things like that, really opening up the world of government assistance and similar programs for these farmers. It's a great story; if you haven't seen it yet, please go see it, because these guys are really changing the world for the poorest farmers in the world. Today we're actually releasing the second episode, where we go to Singapore, to a company called Zimplistic. They make something called the Rotimatic, really changing the world that young Indian women live in, by not having to continuously make food for their families, who would otherwise spend an hour and a half a day making roti. They've sold 40,000 of these machines, with AWS IoT integrated into them, and machine learning; basically, they have a machine-learning-driven roti maker. And new
episodes from Norway, Germany, South Africa, and Brazil will be released over the course of the year. Now, for the next one that's coming up, let's take a look at the trailer for Singapore. "Our planet and our civilization are changing faster than ever. Join me as I travel the globe, talking to startup founders using technology to make our world more interesting, accessible, and livable. These are the entrepreneurs that are creating the future." So, catch it on our YouTube channel. I think these stories are amazing and really fun; this particular one, Zimplistic, is about what the kitchen of the future, a data-driven kitchen of the future using machine learning, looks like. So with all of that, thank you all for being here. I hope that the technical sessions this afternoon really pique your interest, and that you go home knowing more about AWS than you did when you walked in the door this morning. So thank you all, and go build.
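The receipt pipeline described in the Workday segment (deep learning models extract details from a receipt image, then populate an expense report) can be sketched in miniature. This is purely illustrative: the `extract_receipt_fields` function and `ExpenseLine` type are assumptions for this sketch, and a couple of regexes stand in for the trained model; the real system described in the talk calls a SageMaker-trained MXNet model instead.

```python
import re
from dataclasses import dataclass


@dataclass
class ExpenseLine:
    """One populated row of an expense report (illustrative schema)."""
    vendor: str
    date: str
    amount: float
    currency: str


def extract_receipt_fields(ocr_text: str) -> ExpenseLine:
    """Stand-in for the deep learning extractor from the talk.

    Takes raw OCR text from a receipt photo and pulls out the vendor
    (first non-empty line), an ISO-style date, and the total amount.
    """
    lines = [ln.strip() for ln in ocr_text.splitlines() if ln.strip()]
    vendor = lines[0] if lines else "unknown"
    date_m = re.search(r"\d{4}-\d{2}-\d{2}", ocr_text)
    total_m = re.search(r"TOTAL\s+\$?(\d+\.\d{2})", ocr_text, re.IGNORECASE)
    return ExpenseLine(
        vendor=vendor,
        date=date_m.group(0) if date_m else "",
        amount=float(total_m.group(1)) if total_m else 0.0,
        currency="USD",
    )


# Example: OCR output from a photographed receipt.
receipt = """CLOUD CAFE
2019-03-27
latte        4.50
sandwich     8.25
TOTAL       $12.75
"""
line = extract_receipt_fields(receipt)
print(line)  # ExpenseLine(vendor='CLOUD CAFE', date='2019-03-27', amount=12.75, currency='USD')
```

In the production setup described in the talk, the extraction step would be an HTTP call to one of the RESTful web services hosting the trained model, and the resulting `ExpenseLine` would be written into the expense report on the user's behalf.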
