AWS Certified Developer – Associate 2020 (PASS THE EXAM!)

Hey, this is Andrew Brown from ExamPro, and I'm bringing you another AWS certification course. This one happens to be the hardest of the associate track: the Developer Associate. If you're looking to gain hands-on knowledge building, securing, and deploying web applications to AWS, this is the course for you. And remember, if you really want to pass, make sure you do all the follow-alongs we provide here in your own AWS account. We definitely welcome any type of feedback, so reach out on any social media and we will find you. And if you do pass, definitely tell me on Twitter or LinkedIn; I love to hear that people are passing. Good luck on your exam.

I'm sure you have a lot of questions about the Developer Associate, and I'm hoping I can answer some of them before you get started on your journey, to make sure you're making the right choice about pursuing it. The first question is: who is this certification for? I would say that if you're a web developer and you're looking to add cloud computing skills to your developer toolkit, this is the certification for you. If you want to know how to build web applications while thinking cloud-first, it allows you to push a lot of your web application's complexity into managed services, and then you have easier, more modular web applications; that's another thing you'll achieve by getting the Developer Associate. You're also going to learn how to deploy web applications across a variety of different cloud architectures: serverless, microservices, traditional applications. There are a lot of different ways to deploy to the cloud. And the last thing is that if you're a web developer looking to transition into a cloud engineering role, this is the certification for you.

Now, what value does the Developer Associate hold? What's it going to do for you? I would say first that this is the hardest AWS associate certification. You have the Solutions Architect and the SysOps, but this one is absolutely, brutally hard, and the reason is that you have to have practical knowledge of AWS to pass this exam; it's very hands-on. The great advantage of that is that it's going to directly help you get a job as a cloud engineer, because it really is about doing the actual work. The last thing I want to point out is that it will help you stand out on resumes. It's not likely to increase your salary unless your company really values cloud engineers, but you'll definitely get more job opportunities. Eventually this is going to be the de facto expectation for a web developer to have cloud skills, so it's important to get it now, because you're going to get ahead of everybody else.

Another question people ask me is how long to study to pass the Developer Associate, and you just heard me say it's the hardest associate certification, so this is a little bit longer than standard.
If you're a developer, I'm going to say it's going to take 1.5 months of study. I generally say one to two months for the SysOps or the Solutions Architect Associate, but this one is definitely 1.5 months, pushing toward two months, even if you are a developer. If you're a bootcamp grad, you don't know anything about AWS, and you're doing this from scratch, you're looking at at least two months of study, pushing toward three. If you're already a cloud engineer and you're just trying to add the certification to round out your resume, you're looking at about 30 hours of study, so if you really sat down and took the time, you could get this done in a week. That's the range I would give.

The last thing here is to figure out the cost, the duration, and how many questions there are. The cost of this certification, just like all the other associates, is $150 USD. It's 130 minutes, so you get ample time, and there are 65 questions on this exam. To pass, you need around 72%. It's not an exact number; that number can float around. One more point: this certification is valid for three years, so $150 might seem like a lot, but it's going to last you quite a while. Hopefully that answers your questions, and we'll move on to the exam guide.

Alright, so we're taking a look here at the exam guide breakdown. The first thing I want you to know is the course code, which is DVA-C01. If this is the future and you're booking your exam, you might be presented with two versions of the exam. This happens when a new version of the exam is out and there's some overlap between the old one and the new one. For example, the Solutions Architect Associate is SAA-C02 for the new one, and the old one is SAA-C01. They have yet to release a C02 version of the Developer exam. I can tell you right now that there isn't a huge change between C01 and C02, at least for the Solutions Architect Associate; all they've really done is rebalance the domain percentages and revise the questions, but the bulk of the course material is the same. So if this is the future and you're looking at a C02, you're going to be totally fine using this course content.

We'll move on to the next part, which is the passing grade: 720 points out of 1000. I don't know how those points are distributed, and it's not that important to know. All you really need to know is that you need roughly 72% to pass, and that's not necessarily exactly 72%; AWS can adjust that value based on how many people are passing and failing, so you could go into an exam, get 73%, and still fail. You do have to consider that. There are 65 questions on this exam, which means you can afford to get around 18 questions wrong, so you have a decent margin of error for this particular exam.

The duration is 130 minutes, which means you get two minutes per question. For the Developer Associate it's very unlikely that you'll run out of time; you'll end up with surplus time. At the pro level you're always running against the clock, so you have to know your pacing per question, but for the Developer Associate that's not the case. If you have surplus time, you're definitely going to want to go back, review all your questions, and utilize 100% of that time.

Now let's talk about the types of questions you'll encounter. You'll see multiple choice, which is where you choose one out of four.
And then you have multiple response, which is where you choose two or more out of five. Those are the two typical formats you're going to see on the exam.

Now, getting to the actual breakdown: the exam is broken down into domains, and it also has sub-domains, which we'll look at in greater detail when we pull up the exam guide. The first domain is Deployment, worth 22%. Deployment is extremely important to know for the Developer Associate, and you're going to end up with around 14 to 15 questions on it. The next domain is Security, and security is becoming super important across all the certifications; we're seeing that across the board. Here it's worth 26%, so you're going to see between 16 and 17 questions. Then we have Development with AWS Services at 30%, so you're going to see between 19 and 20 questions. Then you have Refactoring, worth 10%, which is between 6 and 7 questions. And the last domain is Monitoring and Troubleshooting at 12%, so between 7 and 8 questions. You can see that Development with AWS Services has the highest percentage, and that makes sense, because it's the Developer Associate and you should learn how to develop with AWS services.

The exam guide also recommends white papers that you should read. If you're not familiar with white papers, they are PDFs published by AWS, 100% free to download. They're kind of like AWS documentation, but with a sales perspective to help you adopt AWS. When AWS creates the exams, they base a lot of the content on these white papers, so it's important to read some of them. When you're studying for the professional exams it becomes absolutely essential to know these white papers inside and out; at the associate level, not so much. But if there are white papers you should absolutely read, it's the ones in red; you absolutely need to read those. The ones in black are ones I suggest you read, and the ones in gray are ones I'd say it would not matter whatsoever if you read. That's my recommendation for white papers. If you're looking for them, they're free to download; you just have to go to aws.amazon.com/whitepapers, and I believe the hyperlinks are also in the actual exam guide, which you can download from AWS as well. Now that we've gone through that, let's actually open things up.

Alright, so what I've done here is pulled up aws.amazon.com/certification/certified-developer-associate, so we can look at the exam guide, maybe the sample questions, and a little bit more about this portal, because there is really great information here for you. Before you go and take an exam, or even start studying, make sure you check whether any changes are happening: go to the certification site's "Coming Soon" section. I check this all the time; I have to watch it like a hawk, and it tells you when things are changing. Here you can see that the Solutions Architect Associate exam has changed, and they have a bunch of information about it. You can also take beta exams, I think at half price; it says beta exams are offered at a 50% discount off standard exam pricing. I don't ever bother to sit beta exams. I guess you get certified if you pass the beta, but I'm not really sure about that.
But it's definitely not something I ever do; I always just wait for the official exams. The reason I want you to check this page is just to see whether there are any changes coming to the exam that you're interested in, which might affect when you want to take it. Would you rather wait for the new one, or take the old one? Or even, in the case here, the Big Data certification has been split into two certifications. Is it still valuable to get the Big Data one if there are newer Analytics and Database certifications? Will the Big Data one make your certification look dated? Just peek around there and consider that.

Coming back to this page, you can see that we have recommended knowledge and experience, and then on the right-hand side we can download the exam guide and download the sample questions. So here I have the exam guide open, and I'm just going to zoom in a little bit. The first thing I want to check is the actual course code, to make sure it's the one you want to study. It's interesting here, because in one place it says C01 and over here it says DVA-001. I don't know if that's a mistake or not, but whatever. Down below we have the recommended AWS knowledge, and you'll notice this is exactly the same as the list on the certification page; they've just copied and pasted it.

Then for exam preparation they recommend AWS training. There's Developing on AWS, an instructor-led live or virtual three-day course; I believe that's around $1,000 to $2,000 for registration. I don't know anybody who uses this, and really it's for enterprise companies that have a lot of money. There are a lot of government programs where, if you send someone for training and the training is priced at $2,000, the government will reimburse something like 75% of it, and that's why you have these really expensive training packages that otherwise make no sense, when you'd think they'd charge $100. It's just that kind of scheme. I haven't heard good things about these whatsoever, but if you work for a very large company and they're willing to pay for it, maybe you want to take it. Then you have the AWS digital training on their training platform; those are actually okay, so I would definitely consider checking them out as supplemental content to this course.

Then you have the white paper recommendations, and as you can see they are all hyperlinked. I could open up this exam guide month over month and the list would slightly change, because they're always revising or updating these white papers. It will be the same white paper, but the date will change, maybe the title will change, and some of the content will be revised. Again, for the Developer Associate, reading white papers is not a big deal, and you saw the recommendations I made in my list.

I also like how they've now listed the documentation here; they never did before. When I build this course, I mean, I've taken the exam and I have a lot of practical experience, but I'm also going through the documentation and highly, highly condensing it. If you were to study by just reading the AWS documentation alone, you could definitely do it, but you're looking at three to four times longer study, and you might be over-studying things you don't need. Definitely check out the AWS documentation; it becomes extremely important at the professional level, but at the associate level it can be a bit of a time sink.
Then, down to the actual exam content, you can see they're talking about those multiple choice and multiple response questions, which we covered. Then there's unscored content. This happens sometimes: you'll end up with additional questions that will never be scored, and the reason is that AWS is always testing out new questions. So if you take an exam and get a question that is so far out there that you get really stressed and feel like you didn't study well enough, just consider that maybe it's a test question and you're not supposed to know the answer. There are always two or three of those in there, so don't get stressed out about that. Then we have the exam results, where they talk about the point system, and there it is: 720.

Now we'll go down below and look at the domains and sub-domains. We see them again: Deployment, Security, Development with AWS Services, Refactoring, and Monitoring and Troubleshooting. Then here are our sub-domains. Under Deployment we have: deploy written code in AWS using existing CI/CD pipelines, processes, and patterns. That's why in this course we're going to be talking about CodePipeline, CodeBuild, CodeDeploy, and CodeCommit. Then there's deploying applications using Elastic Beanstalk; this is something we heavily cover in this course, and there's a really good follow-along that you 100% need to do. Then there's prepare the application deployment package to be deployed to AWS. This is kind of talking about Elastic Beanstalk in terms of preparing packages; it could be preparing containers for deployment, or just preparing artifacts that need to be deployed via CodeDeploy. Then you have deploy serverless applications. Here they're talking about SAM, the Serverless Application Model, or using CloudFormation templates.

Moving on to Security. Make authenticated calls to AWS services: I'd say that's just using the CLI, API, or SDK, and it could also be Amazon Cognito or STS tokens. Implementing encryption using AWS services: that is absolutely 100% going to be KMS, and we're probably looking at both encryption in transit and encryption at rest, so also ACM, the certificate manager. Implement application authentication and authorization: that is specifically going to be Cognito. I said it above, but this is definitely Cognito, plus knowing about web identity federation and maybe IAM.

Then we have Development with AWS Services. Write code for serverless applications: so Step Functions, Lambda, X-Ray, API Gateway. Translate functional requirements into application design: I take that as being given scenarios and then picking out which technologies you should use, which isn't too complicated. Implement application design into application code: they give you a design and you have to translate that into code. I don't know exactly what kind of questions that would produce on the exam, and I've written a lot of exam questions, so that one's a little vague to me. Write code that interacts with AWS services: so API, SDK, CLI. You will definitely see CLI commands on the exam; they'll show you CLI commands. So in this course I try to expose you to as many CLI commands as possible, and even when we could use the console, I'll make us use the CLI just so you get those commands etched in your brain, and you'll also get some experience with the SDK. For a rough idea of the kind of calls I mean, see the sketch below.
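To make that a bit more concrete, here are a couple of ordinary authenticated CLI calls of the kind the exam likes to show. This is just an illustrative sketch; the key alias, bucket name, and file names are made up, and exact output shapes can vary between CLI versions.

    # Who am I authenticated as? (handy for checking credentials / STS)
    aws sts get-caller-identity

    # Encrypt a small payload with a KMS key ("implementing encryption using AWS services")
    aws kms encrypt \
        --key-id alias/my-app-key \
        --plaintext fileb://secret.txt \
        --query CiphertextBlob \
        --output text

    # Upload an object to S3 with server-side encryption at rest via KMS
    aws s3api put-object \
        --bucket my-example-bucket \
        --key reports/report.csv \
        --body report.csv \
        --server-side-encryption aws:kms

The point isn't to memorize these exact flags; it's to be comfortable reading commands like this when they show up in a question.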
Then you have Refactoring. Optimize applications to best use AWS services and features: that's just making the best choice based on the scenario. Migrate existing application code to run on AWS: I guess that's like deploying existing code; it's a bit odd because it says migration, so I'm not entirely sure what they mean by it. I've written practice exam questions for all of this, so I have a good sense of it, but some of these descriptions are vague by design.

Then we're talking about Monitoring and Troubleshooting. Write code that can be monitored: so CloudWatch and X-Ray. There are newer features in CloudWatch, like CloudWatch Logs Insights and CloudWatch Synthetics, there's a newer one for serverless applications, and maybe CloudTrail. Perform root cause analysis on faults found in testing or production code: this would be things like knowing, when a CodeDeploy deployment fails and shows you errors, how to read them and understand what's going on; what happens when CloudFormation rolls back and you have to investigate and fix it; logging things out to CloudWatch, so if you're using a Lambda and something goes wrong, you know to open up CloudWatch and root through the logs. Things like that. So that's the general breakdown of the exam guide, and hopefully it gives you a bit of perspective on what's in front of us. Let's get to it.

Just before we jump into the certification content, I want to remind you about the AWS Certified Challenge, which was officially started by freeCodeCamp. The idea is that it allows you to get support from other people who are also on the same journey as you, so you don't have to do it alone. All you have to do, if you have a Twitter account (and you can do this on LinkedIn as well), is tweet a photo of yourself, thumbs up, announcing that you've begun the AWS Certified Challenge; tweet your daily progress of what you learned; encourage other people taking the challenge; and when you earn that certification, print it out and pose with it. As an added bonus, there's also an official Discord group you can join. I think we're at almost 1,000 members there. I sit in it all day long, I'm pretty good about answering questions in real time, and there's a lot of support and other resources being shared. So if you want to maximize your ability to learn and get that support, join the Discord, and also join the AWS Certified Challenge on Twitter or LinkedIn.

Hey, this is Andrew Brown from ExamPro, and we are looking at Elastic Beanstalk, which you can use to quickly deploy and manage web applications on AWS without worrying about the underlying infrastructure. This is a platform as a service, which we'll talk about here shortly. To understand Elastic Beanstalk, we need to know what a platform as a service, or PaaS, is: a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app. That's exactly what Elastic Beanstalk does: you upload your code, and everything just runs. If you want a similar service to compare Elastic Beanstalk to, I always say think of Elastic Beanstalk as the Heroku of AWS. The way we abbreviate Elastic Beanstalk is EB (not to be confused with EBS, which is Elastic Block Store). So the idea is you choose a platform, you upload your code, and it runs with little knowledge of the underlying infrastructure.
If you read the documentation on AWS, they'll say it's not recommended for production applications. But what do they mean by that? Because I know a lot of startups and sizable companies that run their production workloads on Elastic Beanstalk. They're talking to enterprises and large companies. You have to understand that AWS is a very large company, and their clients can be mega-corporations or governments who might think, okay, I'm going to run my infrastructure on Beanstalk, and that's just not the use case. So take that with a grain of salt: you can run production applications on it; AWS is just warning large companies not to rely on it.

Elastic Beanstalk is powered by CloudFormation templates. When you spin up an Elastic Beanstalk environment, that's what it's doing: it's just a very fancy CloudFormation template with a really fancy UI. If you go over to the CloudFormation console, you can actually see what it has provisioned, and you can try to read through that CloudFormation template. It's very complicated, but it's interesting to see what it's doing. Elastic Beanstalk can set up things such as an elastic load balancer, auto scaling groups, an RDS database, and it comes with pre-configured EC2 instances. This is the big list of platforms: you can do Docker, multi-container Docker, Go, Java, Ruby, and so on. Generally, these pre-configured platforms come with the common technology you need to run certain frameworks; if you're using Ruby, it's going to be able to run Rails, and so on. Then you have monitoring: it has CloudWatch and SNS integrated into its dashboard, which is really nice. It has in-place and blue/green deployment methodologies built in, so you don't have to go out and build a complex CodePipeline, which you could spend weeks doing; it already has it for you. It can rotate out your passwords for RDS, so it keeps things very secure, and it can run dockerized environments. If you are employing microservices, you can definitely use Elastic Beanstalk as your gateway to Elastic Container Service, which is what it uses under the hood to run Docker containers.

Let's take a quick look at the supported languages for Elastic Beanstalk: we've got Go, Node.js, Java, Python, Ruby, PHP, .NET, and Docker. I said this previously, but I'm going to say it again: these pre-configured platforms generally have the tools you need to run the corresponding frameworks as well. So if you're going to be on Ruby, you should be comfortable running Rails; if you're on the Python platform, you can run Django; if it's PHP, Laravel; on Java, Tomcat and Spring; on Node.js, Express. Just keep that in mind.

When you first create an Elastic Beanstalk application, you have to choose an environment, and you are choosing between web versus worker. If you need to build a web application, you'll choose a web environment, but if you need to run background jobs, you'll choose a worker environment. In a lot of cases when you're building web apps, you're going to make two environments, one web and one worker, and they'll be interconnected. Let's talk about the components involved here so we have a bit of an idea of how they're different. On the left-hand side we have our web environment, and this comes in two variants, which we'll talk about in another slide.
But the idea is that you have these EC2 instances, maybe one, maybe multiple, and they're running in an auto scaling group. It also creates an elastic load balancer for you, which is optional; if you want to save money, you just don't have one there. That goes out to the internet, so it's a very simple setup on the left-hand side. Then on the right-hand side we have our worker environment, and this is again for background jobs. You'd have your EC2 instances in an auto scaling group, and it would also create an SQS queue. If you didn't have a queue, it creates one for you, and it also installs the SQS daemon on all those EC2 instances so they can seamlessly communicate with the SQS queue. It also has another piece where CloudWatch watches your capacity, so that if you're under capacity, it will spin up more instances and adjust the auto scaling group. That's really nice. So there you go, those are your web and worker environments.

In the prior slide I said there were two types, or two variants, of web environments, so let's look at them. The first one, which we have already seen, is a load balanced environment. The idea with this one is that you have EC2 instances running in an auto scaling group, and that auto scaling group is set to scale: if you get a lot of traffic coming in, it's going to spin up more instances, and when the traffic declines, it's going to remove instances. That means there could be a variable cost based on traffic. You have an elastic load balancer there, and that's where traffic is coming in, through the ELB.

The other case is a single instance environment, and this one is extremely cost-effective, because you're only running a single server. You still are using an auto scaling group, because auto scaling groups are great not just for scaling out to add servers but also for keeping a single server running, so the desired capacity is always set to one. There is no elastic load balancer, and that's just to save on costs. But with no ELB, there's going to be a public IP address that is used, so Route 53 is going to point to that IP address, whereas in the load balanced environment it's going to point to the load balancer. So there you go.

I was saying earlier that Elastic Beanstalk comes with deployment options built in, and this is definitely going to save you a lot of time, since you don't have to set up your own CodePipeline. Let's talk about some of the deployment policies available with Elastic Beanstalk: what we have is all at once, rolling, rolling with additional batch, and immutable, and we're going to walk through every single one of these. Depending on which deployment policy you choose, they're only available for specific web environments. You can see that rolling and rolling with additional batch are not available in single instance environments, and the reason is that you need a load balancer in order to do them, because the load balancer is going to attach and detach instances in batches from the ELB. What you're also going to notice is that there is no mention of in-place or blue/green on this list, and the reason is that basically this entire list is in-place. But we're going to explain in-place and blue/green in context, so that it makes a lot more sense.
Because even for myself, when I first started learning this, I thought, okay, we have this list of terms, in-place and blue/green; hopefully I can clear that up for you.

So let's take a look first at all at once deployment. The first thing is we deploy the new app version to all the instances at the same time, then we take all the instances out of service while the deployment is processing, and then the servers become available again. This is the fastest but also the most dangerous deployment method. It's fast because it's doing everything at once; it's dangerous because you're taking those instances out of service, meaning your services are going to become unavailable, and you're applying updates to all your instances at the exact same time. So if you have a major failure here, you might have to roll back your changes, but if that rollback fails, you'll have all these instances in a broken state, you'll have to deploy the original version again, and it can get kind of messy. So that is all at once deployment.

Now we're going to look at a rolling deploy. It looks really complicated, but we just need a big graphic in order to explain it. The first thing is that we deploy the new app version to a batch of instances at a time. Whereas all at once was all of them, this one does, say, two at a time, so if we have four servers, we do two. We take those instances out of service in batches, and once they're good, we reattach the updated instances and move on to the next batch, and so forth until we've gone through all the servers. You can see that this mitigates some of the problems with an all at once deploy. It's definitely going to be a bit slower, and if we need to roll back, we might need to perform additional rolling updates to revert the changes, so rollback can still be pretty complicated.

Let's take a look at rolling with additional batch. The idea here is that when you start to deploy, you spin up new servers. If you're doing it in batches of two or whatever size, instead of taking a batch out of service, we just add new servers and apply our new app version there, and once those are good, we terminate a batch of old instances, and so on. The idea is that by doing this, we never reduce our capacity, and this is important for applications where a reduction in capacity could cause availability issues for users. We saw with rolling that we'd have reduced capacity for a short period of time; in this case, we never have reduced capacity. But we still have the same issue: if you want to roll back, you're going to have to perform an additional rolling update, so rollbacks are still quite painful and slow in this case.

So let's take a look at immutable deploys. This one really relies on the auto scaling group. Over here, what you can see is that we already have an elastic load balancer that points to an EC2 instance inside an auto scaling group. What we're going to do is make a new auto scaling group with a single EC2 instance in it, or however many servers we need to replace.
Then the next thing we do is deploy the updated version of our app onto the new EC2 instances in that new auto scaling group. Then we point the elastic load balancer to the new ASG and delete the old ASG, which terminates all the old instances. The reason you'd want to do this is that it's the safest way to deploy critical applications. When you want to roll back, the idea is that you don't have to destroy the old auto scaling group immediately; you can wait until the new production auto scaling group is running smoothly, for days or weeks, however long you want, and then destroy the old one. Or, if you had to roll back, you could instantly move back to that old auto scaling group, because all the infrastructure still exists. So rollbacks are really easy, it's super safe, and there aren't a lot of downsides to it. This is the one I would choose.

So now we're going to take a look at deployment methodologies for Elastic Beanstalk, and you're going to notice that down below I have blue/green, which we haven't covered yet. Everything so far, from all at once through immutable, has been a deployment policy, a deployment method built into Elastic Beanstalk. But we also have blue/green, and we'll explain the difference between in-place and blue/green in the next slide. Let's just compare these methodologies and understand what the trade-offs are, because this is definitely important on the exam.

The first one is all at once, and it has the fastest deploy time because it updates all the servers at the exact same time. If we have four servers, it takes them out of service, applies the updates, and puts them back into service. But while they're out of service, we're going to experience downtime, and downtime could be a bad thing if your users notice; it could impact their experience, or if they're doing serious or critical transactions, that could be a problem. You have to decide whether that's a trade-off you want to take, but most people do not want to use all at once. Also, if you encounter an error, let's say your deploy fails and you have four servers, now you have to roll them back manually, and that's kind of a pain. But also imagine you encounter an error during rollback, so the rollback fails and now you have four servers all stuck in a broken state; that could be extremely detrimental to your business, losing hours upon hours of time, so you have to weigh those trade-offs.

Next we have rolling, and rolling mitigates this downtime problem where we have instances out of service. If you have four servers, like the previous case, it's going to update them in batches: it takes the first two out of service, updates them, puts them back into service, and then moves on to the next batch. But there is still a downside, which is that we're going to have reduced capacity during the deploy. If you always need four servers to run your critical workload or to just handle the current usage, this is not going to be ideal for you. In that case, what you're going to want to do is use rolling with additional batch. Rolling with additional batch is very similar to rolling: it works in batches, but instead of taking a batch out of service, it adds a new batch.
Once the new batch is good and running, it just kills an old batch, and this way you always have at least the minimum number of servers you need running to meet your capacity needs. But rolling back with these methodologies can be difficult; it's still a manual process, and you can imagine having to roll each batch back being painful. Imagine you're rolling back and partway through a rollback fails, so one batch is stuck in a bad state and you have a weird mix of servers; that can be extremely difficult to sort out.

The last one here is immutable. With immutable, the idea is that you replicate your entire set of servers. Where all at once would take your four servers out of service, immutable just creates four new servers, and once they're all good, you move over to those four new servers. If you had to roll back, you just point back to the old ones, because they still exist; they haven't been deleted until you decide to delete them. So immutable gives you the best flexibility in terms of the rollback process. It can be more expensive, depending on how long you keep the old servers around, and the provisioning time takes a while. The actual switch takes very little time, because it's very fast to roll back or switch to the new version, but provisioning those servers takes a while because you're replicating everything at once, and you can't start routing traffic until they're all ready. With rolling, by contrast, you can start routing traffic to new instances gradually, whereas with immutable you have to wait until all the servers are ready. But I think immutable is an extremely safe and good deployment methodology to use.

And last we have blue/green, which is very similar to immutable in that it replicates all the servers; if you have four servers, it's going to make four new servers, but it can also spin up other infrastructure, like elastic load balancers. These two are super similar, and that's why we have another slide to talk about when something is in-place versus blue/green. But just before we move on: for Elastic Beanstalk, the blue/green methodology uses DNS to do the swap. That means that when you want to move over to the new servers, Route 53 points to a new load balancer with the new instances behind it, so the change is happening at the DNS level. The reason this can be a negative is that DNS changes have to propagate to DNS servers around the world, and the effect is that you could have the new production environment ready while some people are still being pointed at the old one, or at nothing. So even though the servers aren't down, some users could experience unavailability just because they're being pointed to the wrong thing. For Elastic Beanstalk it's generally not that bad, but it does happen, so we have to consider it as a negative.
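As an aside, the way you actually select one of these policies for an environment is through configuration (or the console). Here's a hedged sketch of an option-settings config that would switch an environment to immutable deploys; the namespace and option names below are the aws:elasticbeanstalk:command ones as I remember them from the docs, so treat it as illustrative and verify against the current documentation before relying on it:

    # .ebextensions/deploy-policy.config  (illustrative sketch)
    option_settings:
      aws:elasticbeanstalk:command:
        DeploymentPolicy: Immutable     # or AllAtOnce / Rolling / RollingWithAdditionalBatch
        BatchSizeType: Percentage       # only meaningful for the rolling policies
        BatchSize: 50

We'll cover what these .ebextensions configuration files are and where they live a little later in this section.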
Now that we know the deployment methodologies, let's move on to the comparison between in-place and blue/green deployment. These terms are confusing because they don't have one definitive definition, and the context can change the scope of what they mean. So it's important to learn them not just for Elastic Beanstalk but for DevOps in general, and we're going to spend a little time here making sure you really know this stuff inside and out.

Elastic Beanstalk, by default, performs in-place updates, and that covers all the deployment policies we've been looking at; they've all been considered in-place. But let's change the context to see how that affects the scope, which changes what is considered in-place. The first scope, which we're already familiar with, is the Elastic Beanstalk environment. When that is our scope, all the policies we saw, all at once, rolling, rolling with additional batch, and immutable, are considered in-place deployment methodologies.

Now let's say we change the scope to something outside of Elastic Beanstalk, and it's just servers, with the important condition that these servers are never replaced; they always have to be the same, existing servers. With that scope, the only deployment methodologies available to us are all at once and rolling, which are considered in-place because they never replace a server; they take a server out of service, make changes, and put it back into service. That doesn't mean that rolling with additional batch or immutable would be considered blue/green; they just wouldn't be considered in-place within the scope of that scenario.

Let's set up another one, where the scope is a server that can never be interrupted. That means we can't replace the server with a new server, but it also means we can't take it out of service; traffic should always be pointed at it. To solve this, we use zero-downtime deploys. This is where blue/green occurs on the actual server itself, in a virtual sense: you have your code base, you deploy the second version of the code base onto the server, and you facilitate the switch within the server virtually. You can't do this on Elastic Beanstalk. I used to do this for years with Capistrano, Ruby on Rails, and Unicorn, and it allows for deploys that happen within minutes; in 30 seconds or a minute I'd have the latest version updated, and it was amazing. When we have to consider all these cloud components, that kind of agility is lost, but we get other trade-offs in return. When the exam talks about in-place, though, it's generally going to be in the context of the Elastic Beanstalk environment. I just want you to know the different scopes, but for the exam, that's the one to focus on.

Okay, so this is the slide where everything falls into place, and you're going to really understand the difference between in-place and blue/green deployment. This is in the context of Elastic Beanstalk, and we're first going to look at an immutable deployment. We know that with immutable, it replicates the auto scaling group with another EC2 instance, and then it facilitates the transition to the new production servers by switching over to the new auto scaling group and destroying the old one. Now, this is funny, because on the last slide we just saw a blue/green methodology that did essentially the same thing.
So why isn't this considered blue/green; why is it called in-place? That has to do with the boundaries of the environment. The environment here is defined as the Elastic Beanstalk environment itself, and because the mechanism of our deploy is inside the boundaries of that environment, it's considered in-place.

Now, looking at a blue/green deploy for Elastic Beanstalk, this can only occur at the DNS level, so it's Route 53 that's facilitating it. Notice that the switch happens outside of the environment: it's going from a blue environment to a green environment, and that is why it's called a blue/green deploy. If it's all within the same environment, it can't be considered blue/green. Earlier, talking about having the DNS level facilitate the switch of servers versus having the load balancer do it, I said the load balancer was a lot better, because with DNS we could have an interruption in service, not because the servers aren't ready, but because the DNS servers have to propagate the changes, so there could be some unavailability for users. So why would you still use blue/green if the other approach is better? It has to do with where some of your external resources live. In the context of Elastic Beanstalk, it really comes down to your database. A lot of in-place deployments are very destructive: if your database were inside the environment, generally running on an EC2 instance, it would get terminated along with the environment and you'd lose your data, which would not be good whatsoever. So your database has to sit outside of your environment, say on RDS, and in that case the better approach is generally a blue/green deploy with your database outside of it. That is generally the reason we use blue/green. It doesn't mean you can't use RDS with in-place, but it's more naturally suited to blue/green. Hopefully that clears up a lot, and you can visualize those boundaries to see when something is considered in-place and when it is not.

So, if we want to change the way our Elastic Beanstalk environment works programmatically, the way we do it is through configuration files. These configuration files sit in a hidden folder called .ebextensions at the root of your project, and they have a .config extension. There can be a variety of different ones, but that's what Elastic Beanstalk expects. With these configuration files we can change the option settings for our environment, we can do things very specific to Linux and Windows, and we can also set up custom resources, so if we need other services to integrate, that's where we can do it. The motivation for having these files is that if we hand this project to somebody else, they can provision everything they need with the exact configuration they need. This is something you'll definitely run into if you're working with Elastic Beanstalk; you'll have to do a little bit of configuring.

I also said you can configure some option settings for the Elastic Beanstalk environment itself. That's called the environment manifest, which lives in a file called env.yaml that you store at the root of your project. A rough sketch of what it can look like follows below.
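Here's roughly what that env.yaml can look like, covering just the attributes we're about to walk through. Treat it as a hedged sketch: the environment names and the exact solution stack string are made up, and the attribute names follow the env.yaml format as I recall it, so check the current docs for the precise values your platform expects.

    # env.yaml  (illustrative sketch, stored at the root of the project)
    EnvironmentName: study-sync-web+                       # trailing + appends a group name to keep names unique
    SolutionStack: 64bit Amazon Linux running Node.js      # decides which platform/AMI is used; exact string varies
    EnvironmentLinks:
      WORKERQUEUE: study-sync-worker+                      # link this web environment to a worker environment
    OptionSettings:
      aws:elb:loadbalancer:
        CrossZone: true                                    # example default option setting for the load balancer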
So this file is important because when you first create your Elastic Beanstalk environment, it's going to look for this file and set up a bunch of defaults from it. This is the way you share that configuration with other people, or just save that configuration for yourself. What I've done is pull out a little example, and we're just going to look at some of these attributes. The first one is the environment name, which is whatever you want the name to be; we'll talk about the plus on the end in a moment. Then you have your solution stack, which could be Ruby, Python, Java, whatever; that's going to determine which AMI is chosen. Then you have environment links, and this associates the environment with other environments. We saw that you could have a web environment and a worker environment, and this is a way to connect them together. Then you have some default configurations for specific AWS services; here we're setting a load balancer to be cross-zone. There are a lot more options, but we don't need to go through all of them; you just have to generally know that you can do this. As for that plus on the end: it's used to append the environment group name, to give these more unique names. That's all it's for.

So now we're going to take a look at Linux server configuration. There is one for Windows server configuration as well, but I think we only need to learn one here, because you can pretty much apply the concepts to both, and that will be good enough for our studies. The first thing you can configure is downloading packages; here I have Ruby and memcached. The package manager will generally be yum, because that's what Amazon Linux uses; if you use some other OS, that might change. Then you can set Linux/Unix groups, something I don't do very often, but something you can configure, and you can also configure users and assign them to Linux groups. You can also create files, or download files from the internet using a URL; that URL is for a publicly accessible file, so I don't think there's a way to download private files. For inline content, you just specify the content and provide what you want; here I'm providing a YAML file. Then you have commands: these are commands you want to run before your application is installed, so Elastic Beanstalk pulls your code base, but these happen before that code is actually in the environment. Then you have services: maybe you've installed nginx and you want to ensure it's running when the instance starts up and that it keeps running, so if for whatever reason it shuts down, it will try to start it up again. And the last thing is container commands: these are commands specific to your application, so after your application source code has been downloaded to the environment, these are what you want to run. It says "container commands", which makes you think they're for Docker containers; that's not the case, so just be aware of that. For Windows you're going to have similar sections, container commands, commands, probably something for packages and files, so it's more or less the same; just make sure you conceptually understand the things you can set at the server level. To make that a bit more concrete, there's a small sketch of one of these config files below.
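Here's roughly what one of those Linux .ebextensions files looks like, pulling together the sections we just talked about. This is a hedged sketch: the package, file path, and commands are placeholders, but the section names (packages, files, commands, services, container_commands) are the ones Elastic Beanstalk looks for.

    # .ebextensions/01-setup.config  (illustrative sketch)
    packages:
      yum:
        memcached: []                      # install a package with yum before the app is set up

    files:
      "/etc/myapp/settings.yaml":          # create a file with inline content
        mode: "000644"
        owner: root
        group: root
        content: |
          cache_host: localhost

    commands:
      01_before_app:
        command: echo "runs before the application version is extracted"

    services:
      sysvinit:
        nginx:
          enabled: true                    # start on boot
          ensureRunning: true              # restart it if it ever stops

    container_commands:
      01_after_app:
        command: echo "runs after the application source is in place, e.g. a migration"
        leader_only: true                  # only run on one instance in the group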
So Elastic Beanstalk also has a CLI which you can install, and that's going to give you more of a Heroku-like experience; some of these commands are very similar to Heroku commands. To get the CLI, you go to GitHub and install it from there. It's basically as simple as cloning the repo and running the setup command, and I believe it's using Python. If you're on a Mac, this should just work; for other systems you're going to have to read a little bit more on the GitHub page itself, which is the official Elastic Beanstalk CLI setup repository.

Let's just run through some of the commands available to us, starting with eb init. This configures your project directory for the Elastic Beanstalk CLI, and it's the first thing you're going to want to run because it sets up a bunch of defaults; if you don't want to keep a project, you can just delete it afterwards, but you still want to run this. When you want to create an environment, you're going to do eb create. When you want to check the status of the environment, you do eb status. When you want to check health, eb health gives you the health of the particular instances and the overall health of the environment, and if you want that in near real time, there's a refresh option that updates every 10 seconds. If you need to see the events being output by the Elastic Beanstalk environment, you can run eb events. If you want to see the logs from the actual instances, you can run eb logs. If you want to open up the application in your browser, there's eb open, though it's really not that hard to just go to your browser yourself. When you're ready to deploy your current version of code, you do eb deploy. If you want to see what kind of configuration the environment has, say whether it's running Ruby or other options, you can check that with eb config. And if you want to terminate the environment because you want to save money, you can do eb terminate. So go ahead and download the CLI and give it a go; a typical session looks something like the sketch below.
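Just to put those commands in order, here's what a typical session might look like. This is only a hedged sketch: the application and environment names are made up, and most of these commands take extra flags I'm leaving out.

    # from the root of your project
    eb init                      # pick a region and platform, set up CLI defaults
    eb create study-sync-dev     # provision a new environment
    eb status                    # environment status
    eb health --refresh          # instance and environment health, refreshing every 10 seconds
    eb events                    # recent environment events
    eb logs                      # pull logs from the instances
    eb open                      # open the running app in your browser
    eb deploy                    # deploy the current version of your code
    eb config                    # view or edit the environment's configuration
    eb terminate study-sync-dev  # tear the environment down when you're done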
So now let's look at how you can use your own custom image in Elastic Beanstalk. This is where you provide your own AMI instead of the standard Elastic Beanstalk AMI. The reason you'd want to do this is that it can improve provisioning time: if you have a lot of software packages you need to install to run your application, it takes a long time to pull all of those, and if you bake them into an AMI, things speed up because they're already there.

Let's go through the process of actually getting a custom image, because it's not hard, but there are a lot of steps. The first thing you do is go to the AWS docs, to a page called something like "supported platforms", where you get a list of all the standard Elastic Beanstalk AMIs, and what you're doing is getting the platform information there. You can then use the CLI's describe platform version command; because it has the platform name in it, it will return an image ID, and that image ID is the AMI ID. Using that AMI ID, you go to the EC2 console, probably into the community AMIs section, and paste it in so you can find the AMI you're looking for. Then you launch a new EC2 server from it. Once that EC2 server is launched, you log into it using either SSH or Session Manager; I would suggest Session Manager, and for that you need the correct IAM role attached, so just be aware of that. Once you're in the machine, you configure it however you want, so you install those packages manually, and then you bake a new AMI from it. Once you have the new AMI, you go into your configuration files, or you can even do it through the console, and set the new AMI ID, and when you create new environments, that's what they're going to use. So that is the whole process for setting up a custom image.

Now we need to talk about configuring your RDS database with Elastic Beanstalk, because you actually have two options: you can add a database inside or outside your Elastic Beanstalk environment, and you might not even be aware of which one you're doing when you set it up, so it's important to know the difference. Let's talk about inside the Elastic Beanstalk environment first. When you go to create an environment through the console, you'll have the option to create an RDS database, and if you do it there, it's going to be within the Elastic Beanstalk environment. The thing is, if you do this, then whenever the environment is terminated for any reason, it will take the database out with it. That means this setup is generally for development environments. It doesn't mean you can't use it for production, because as long as you're using in-place deployment mechanisms, like immutable, which replace the EC2 servers, it's never going to remove the RDS database. But if for whatever reason you delete the entire environment, your database is gone with it. On the other side you have outside the Elastic Beanstalk environment, and the way you know you're doing this is that you create your database first in RDS and then configure it with the EC2 instances inside your Elastic Beanstalk environment. Now, when the Elastic Beanstalk environment is terminated, the database remains, because it wasn't created as part of the EB environment. This is generally suited for production environments, and with this setup you're generally using blue/green deployment; you don't have to, but you totally can. I just want you to know that distinction between inside and outside the environment.

Hey, this is Andrew Brown from ExamPro, and we made it to the end of the Elastic Beanstalk overview, which means it's time for the cheat sheet. So let's review. Elastic Beanstalk handles deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring, so it really sets up a lot of infrastructure for you. It's a good time to use EB when you want to run a web app but don't want to have to think about the underlying infrastructure, and we just saw that big list of infrastructure above. It costs nothing to use EB; you only pay for the resources it provisions, so if it spins up RDS, an ELB, and EC2, you're going to be paying for those, but EB itself costs nothing. It's recommended for test or development apps and not recommended for production use, but remember, when AWS says "not for production use", they're talking about super large enterprises who think they can use Elastic Beanstalk for their production environments. If you're a small to medium business, Elastic Beanstalk is A-okay.
You can choose from the following pre-configured platforms: Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. You can run containers on EB in either single container or multi-container mode, and these containers are running on ECS instead of plain EC2. You can launch either a web environment or a worker environment. Web environments come in two types, single instance or load balanced. For single instance environments, it launches a single EC2 instance and assigns an elastic IP address to that instance; for a load balanced environment, it launches EC2 instances behind an ELB managed by an auto scaling group. I didn't mention it earlier, but the single instance environment uses an auto scaling group as well, just with the desired capacity set to one; I don't think that's important for the exam, so it's not a big deal. Then you have your worker environments: this creates an SQS queue, installs the SQS daemon on all the EC2 instances, and has an auto scaling policy which will add or remove instances based on the queue size.

EB has the following deployment policies. All at once: this takes all the servers out of service, applies the changes, and puts the servers back into service; it's super fast but has downtime, so that is one condition you have to think about. Rolling: updates servers in batches, with reduced capacity based on the batch size. Rolling with additional batch: adds new servers in batches to replace the old ones, and never reduces capacity. Immutable: creates the same number of new servers and switches over to them all at once, removing the old servers. You really, really need to know these deployment policies inside and out, so make sure you know the differences, and if you don't, go back to the lecture content, look at those diagrams, and make sure it clicks.

Then we're on to the last page here. Rolling deployment policies require an ELB (the slide says "rollback", but I meant to write "rolling"; for the video, it's just going to say rollback). So the rolling and rolling with additional batch policies require an ELB and cannot be used with single instance web environments; just consider that. In-place deployment is when deployment occurs within the environment, and all of EB's deployment policies are in-place. Blue/green is when deployment swaps environments, outside an environment. When you have external resources such as RDS, which cannot be casually destroyed, that's suited to blue/green deployment. .ebextensions is a folder which contains configuration files. With EB you can provide a custom image, which can improve provisioning times. If you let Elastic Beanstalk create the RDS instance, that means when you delete your environment, it will delete the database; that setup is intended for development and test environments, so really do consider that. And the last thing here is Dockerrun.aws.json, which is similar to an ECS task definition file and defines the multi-container configuration; if you've looked at a task definition, you'll understand it, and there's a rough example below. We don't have to go through the guts of it here, but this is generally what you need to know for the exam, and really know those deployment models, okay.
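Since we just mentioned it, here's a hedged sketch of what a multi-container Dockerrun.aws.json (version 2) roughly looks like. The container name, image, and ports are placeholders, and real files often add volumes, links, and environment settings as well:

    {
      "AWSEBDockerrunVersion": 2,
      "containerDefinitions": [
        {
          "name": "web",
          "image": "nginx:latest",
          "essential": true,
          "memory": 128,
          "portMappings": [
            { "hostPort": 80, "containerPort": 80 }
          ]
        }
      ]
    }

You can see why people say it's basically an ECS task definition with an extra version field on top.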
And we are going to start the Elastic Beanstalk follow-along, where we're going to look at how to deploy to Elastic Beanstalk in a variety of different ways, so we know it inside and out. Before we get started, make sure you are in the correct region. We always do everything in us-east-1, because that's where the most AWS services are available and it just makes things a lot easier, so go up to the region selector and make sure you're in us-east-1. And be careful, because AWS likes to switch that region on you sometimes — if you feel like things aren't going the way they should, double-check your region.

Now that we have that out of the way, let's make our way over to Cloud9, because we're going to need a developer environment to run and test our application, and then deploy it to Elastic Beanstalk. I don't have any environments created here, so we'll go ahead and create one. I'm going to name it "dev-env", for developer environment. It's warning me not to use the root account — I'm definitely not logged in as root, so I'm not sure why I'm getting that message — but we'll hit next. We're going to make sure this is a t2.micro, since that's part of the free tier. Scrolling down, we have the choice between Amazon Linux and Ubuntu. Amazon Linux 1 is supposed to be unsupported at some point, because AWS wants to move everyone to Amazon Linux 2 — so if you're watching this in the future, maybe Amazon Linux 2 will be an option, or you'll have to use Ubuntu — but if Amazon Linux 1 is here, absolutely use it, because it is amazing. We'll leave the default cost-saving setting at 30 minutes, so if there's no activity or we don't have the browser open, it will shut down the server and save us money. It looks like it wants to create an IAM role — we'll let it do that — and we'll hit next. Down below it has some best practices and a confirmation of what we're creating. That's all great, so just hit Create environment, wait a little bit, and I'll see you here in a moment.

Alright, our Cloud9 environment is ready. Before I get started, I like to use the dark theme, so I'm going to switch to the classic dark theme down here. I also like to use vim — I'd recommend you just use the default, but vim is what I use because it rebinds all the keys for efficiency, and I've been doing it for years. Anyway, now that we have our Cloud9 environment, let's get an application going. Since this course is very developer focused, I think we should use the terminal as much as we can to get as much experience as possible. The first thing I want you to do is type `npm i c9 -g` — c9 is short for Cloud9, and it's a Node.js utility that makes it easy to open files directly from the terminal. So we have this README file, and also just notice that the environment on the left actually maps to this dev-env directory — if I hover over it, you can see it autocompletes to that. I don't know why Cloud9 does that, but that's how they name it. Anyway, I just want to show you how c9 works, and we have a README in here.
And if I just wanted to open it up — it's actually already open right there — but if I type `ls` and then `c9 README.md`, it opens up that README file. So that's going to give us a little bit of help along the way. Now that we have c9 installed, let's set up the actual application. The first thing we're going to do is type `mkdir`, which makes a new directory, and we want to make it in our environment folder — I'm using the tilde to make sure I'm always starting from home — so `~/environment/study-sync`; study-sync is the name of the application we're creating today, and you can see up on the left that it created the folder. Then we'll create some additional files. I'm going to `cd` into that folder to save myself some trouble, and the first thing we need to do is initialize an empty Node project with `npm init -y`. What that did is create a package.json for us, which we'll adjust momentarily. We want to run a web app, so we need some kind of web framework, and we're going to use Express — so we'll install that, which adds it as a dependency, and now we can use Express. Next we need some initial files to work with, so I'm going to type `touch index.js index.html app.js style.css` — I was going to call the first one main.js, but I normally call it index — and that creates all the initial files we need.

Now we just need to populate those files. First I want a way of actually running our application with Node, so we're going to add a new script in package.json called "start", which will run `node index.js`. The next thing is to start populating the files. If you make your way over to GitHub, to ExamPro's repository for the free AWS Developer Associate course, there's a folder called study-sync-000, and these are the files we're going to copy over. The first is index.js — hit Raw, copy the contents, double-click the file here and paste it in. Then we'll go back and grab the styling the same way. It's not that important for you to know how to program all of this — we just need to get as comfortable as we can — so don't worry about learning everything in these files, just copy and paste them through. And then we need the app.js file, the JavaScript file, and we'll grab all of that too. So those are the three files we need.

Just to give you a very quick tour of what's going on: we have this index.html file — oh, I guess we didn't populate that one, give me two seconds; sometimes you think you did something and you didn't. Anyway, the index.html file loads the style.css file, which is located there; we're using a CDN to pull in Mithril, which is a JavaScript front-end framework; and we use app.js to load our JavaScript. Going over to our JavaScript, we're using the Mithril framework, and it's very simple: we have this app, and the idea is that we're going to show a question with multiple choices, and we can submit the answer somewhere. And then there's just some plain styling in the CSS.
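To recap the bootstrap so far in one place, the terminal side of it looks roughly like this — a sketch, with the file names matching the follow-along. Note that `npm pkg set` only exists on newer npm versions, so on the Cloud9 machine you'd just edit package.json by hand, as in the video:

```bash
mkdir -p ~/environment/study-sync && cd ~/environment/study-sync
npm init -y                  # creates package.json
npm install express          # web framework dependency
touch index.js index.html app.js style.css
# Add a start script so `npm start` runs the server.
# On newer npm: npm pkg set scripts.start="node index.js"
# Otherwise add  "start": "node index.js"  to the "scripts" block in package.json.
```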
So now that we have all of that going, the next thing we need to do is preview this application, because before we can package and deploy it, we need to make sure it's working. I'm going to close these tabs, and there are just a couple of things we need to do next. We can preview the app in Cloud9, but Cloud9 by default doesn't open its ports to the internet, so we have to go and do that ourselves. This is no different from setting up a web app on an EC2 instance: you'd still have to open up ports. Generally, the ports Cloud9 allows out to the internet are 8080, 8081 and 8082.

I just want to show you how you'd normally do this: you go to EC2, go to Instances on the left-hand side, find the instance that's running — that's our Cloud9 environment — find its security group, expand it, check the inbound rules, hit Edit, and add port 8080, restricted to your IP, since this is a development environment. But we're not going to do it this way, because I want you to get as much programmatic experience as possible — this is the Developer Associate — so we're going to figure out how to do it completely from the terminal. I know the console is much faster, but trust me, this is going to help you in the long run for studying.

So let's get to it. What we're going to do — I'll clear my screen so it's nice and clean — is figure out what the MAC address is for this EC2 instance, then use that MAC address to get the security group IDs, and then use the CLI to create our own inbound rule. Whenever you want to get information about an EC2 instance, that's where the metadata service comes into play, and it's very easy to access on the server — whether you've SSH'd into an EC2 instance or you're here in Cloud9. You just type `curl -s http://169.254.169.254/latest/meta-data/`. You should know this IP address — it should be etched into your brain, because it's the standard when working with EC2, and as a developer you need to know it. The listing shows there's a lot of data you can get here, and we need the MAC address — there's an entry that says mac, so we'll request that, and now we have it. The next thing is to use this MAC address to find the security group IDs for the network interface that uses it. So we'll hit up, back out a bit, and build the path piece by piece — if you type the whole link in one go and something's wrong, it's a pain to hunt down the problem — so network, interfaces, macs, and oh, it even shows the MAC address for us, which is convenient, then hit Enter, and then we want the security-group-ids.
And there it is — we only have one security group; if there were multiple attached to the EC2 instance we'd probably see more, but we just have this single one. So now that we have this security group ID, we'll use the CLI. The AWS CLI is already installed on this Cloud9 environment, because Amazon Linux 1 comes with it pre-installed, and Cloud9 also loads credentials from your user account, so we don't have to play with the credentials file — if you were doing this on your local computer, you absolutely would have to set that up. We'll type `aws ec2 authorize-security-group-ingress` — there's a newer CLI with autocomplete, so you could hit Tab and it would complete that for you, but I don't believe I have the latest one installed here, so I'll do things manually. We paste in that security group ID, say what port we want to open up — port 8080 — specify the protocol, which is TCP, and then we need to supply the CIDR, which is the IP address range we want to allow in.

Before we hit Enter, we need to go get our IP address, because that's what goes in there. We'll use one of AWS's services, check IP — it's a very useful service. I'll open a new tab and go to checkip.amazonaws.com, and this tells me my local computer's public IP address. There are other websites like "what's my IP", but let's use the AWS one, since they took the time to make it for us. We go back, paste it in, and add /32 on the end. The /32 is very, very important, because it means only a single IP address — a CIDR block is a range of IP addresses, something we definitely cover in this course, and you definitely need to know what CIDR blocks are for the associate exams. For the time being, just understand that you put your IP address in there followed by /32. We'll hit Enter, and it didn't show us anything, so I believe it created the rule successfully. We could go over to the console and check that it made it — and there it is — but let's say we didn't want to go to the console, and we wanted to confirm the security group programmatically through the CLI.

So we'll run `aws ec2 describe-security-groups`, put in that group ID (the flag can take a bunch of them, but we only need one), and use `--output text` — it defaults to JSON, which is just hard to read in this case. Then we add `--filters`, with the name `ip-permission.to-port` and the value 8080. What I'm saying here is: describe the security groups, but only select this one, display it as text, and filter it so we only see inbound rules on port 8080. We hit Enter and get an invalid filter error, so I'm going to double-check — I might have typed something wrong; "permissions" doesn't look spelled correctly to me — and I'm still having a bit of trouble here.
ip-permissions.to-port... oh, you know what, it's singular — it's ip-permission, not with an s. There we go. The output is a little hard to read, but the idea is it returns our security group and shows that port 8080 has been set with our IP address as the inbound rule. If the rule hadn't been created and we ran this command, it would just show nothing — so the fact that something shows up means the inbound rule exists. Of course, in practice you'd probably just use the console.

I'll type clear to tidy up. Now that we've opened that port, the next thing is actually getting the application running — but before we can do that, we need to know the public IP address of this Cloud9 environment. I'm pretty sure if we looked at our EC2 instance in the console we'd see its public IP address there, but again, let's do it programmatically. We'll type `curl -s http://169.254.169.254/latest/meta-data/`, hit Enter, and you can see right there it lists public-ipv4 — so we append public-ipv4 and it returns the IP. Going back to the console — yep, it's the same one — so that's what we'll use to access the web application.

Let's go ahead and start the application. To start it up, we just have to make sure we're in that study-sync directory, and we want to start it on port 8080. If you're wondering why we're specifying port 8080: if you open index.js, it reads process.env.PORT and passes that port number to Express, so the server knows which port to start on. So we'll set the port to 8080 and run `npm start`, and see if it starts up. And we have a little error — that's totally okay — it failed to parse the package.json data. You know what happened: we forgot a comma, something I do all the time. Remember the start line we wrote in there? We just have to make sure there's a comma at the end of it, or it's not valid JSON. I'll hit Ctrl+C, which kills the process, hit up, and see if we're in better shape. Now it says it's launched on port 8080.

So now we take that IP address from earlier — it's somewhere in my scrollback, but I can't see it, so I'll just copy the curl command, open a new terminal tab, and run it again. Of course we could just look at the instance in the console, but why do that when we can do it the proper way. Then we take that IP, add :8080 on the end, copy it into the browser, and see if it works — and there's our application. The application doesn't really do much: you can select something and submit, but it doesn't really submit anywhere; maybe later we'll hook it up and do more with it. I call this the study sync application — it's supposed to help you study, I guess — but it's just a superficial application. So now we have our application running, we can preview it from Cloud9, and we have some CLI experience. The next thing is to get a Git repository set up.
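Before we move on to Git, here's the port-opening and preview sequence from this part condensed into one sketch. The security group ID comes straight from the metadata lookup, checkip.amazonaws.com is the service used in the video, and the /32 restricts the rule to your single IP:

```bash
# Find the instance's MAC, then the security group attached to its network interface.
MAC=$(curl -s http://169.254.169.254/latest/meta-data/mac)
SG=$(curl -s "http://169.254.169.254/latest/meta-data/network/interfaces/macs/${MAC}/security-group-ids")

# Open port 8080 to just your own IP (/32 = a single address).
MYIP=$(curl -s https://checkip.amazonaws.com)
aws ec2 authorize-security-group-ingress \
  --group-id "$SG" --protocol tcp --port 8080 --cidr "${MYIP}/32"

# Verify the rule; note the filter name is singular: ip-permission.to-port
aws ec2 describe-security-groups --group-ids "$SG" --output text \
  --filters Name=ip-permission.to-port,Values=8080

# Grab the public IP, then start the app on 8080.
curl -s http://169.254.169.254/latest/meta-data/public-ipv4
cd ~/environment/study-sync && PORT=8080 npm start
```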
Okay, so let's get going. I'm going to close this bash tab, go back, and stop the server — I'm doing that by pressing Ctrl+C; you can see it prints ^C, and that little caret represents the Control key. We're going to need a .gitignore file, because there are files we just do not want to include. Up here we have node_modules — this is how Node works, it puts all the libraries right in your project — and we do not want that in our Git repo, it's just too much stuff. With Git there's a file called .gitignore, so let's create it in our study-sync directory. Make sure you're in ~/environment/study-sync — if you're not sure where you are, check — and then `touch .gitignore`. That file now exists; I just don't see it on the left-hand side, because the tree might be hiding hidden files, or we might have to hit refresh. I definitely know it's there, because if we do an `ls -la` we can see the hidden file — there's probably a setting to show it in the tree, I'm just not sure at the moment. That's fine, because this is all about learning to use the terminal and the CLI as developers. To open that .gitignore we use our handy-dandy c9 command — and if you don't want to type the whole name, hit Tab, autocompletion saves a lot of time. All we're going to put in it is node_modules. What's that going to do? It makes Git ignore that folder completely, because we don't need to include it.

Now that we have that set up, you may also want to set your global Git config — your username and email. We're not going to worry about it right now, but we'll probably be prompted for it, and it'll just be an annoying message every time we commit. Now that we have a .gitignore, let's actually set up the Git repo. We type `git init`, and it initializes an empty repo by creating a new folder called .git — if we do an `ls -la`, you can see the .git directory now. We're not going to get into the details of Git here, I'm just showing you what's going on. The next thing is to add all the stuff we've worked on so far to the repo. If we type `git status`, it shows these untracked files, meaning they aren't going to be committed — so let's add them. We do `git add .`, which adds them all, hit Enter, type `git status` again, and now they've gone from untracked to "changes to be committed". Then we write our commit: `git commit -m "initial commit"`, and they're committed. And there's that thing I was talking about — the git config username and email warning. Generally you want to set these to your real name and email; since I'm just doing this for practice, I'm not going to, and it'll probably keep popping up.

So we've created a Git repository, but it only lives on this Cloud9 environment, and we really want it hosted somewhere in the cloud. You could use GitHub, but for this project we're going to use CodeCommit so we get some hands-on experience with it — I think a lot of developers already have experience with GitHub. We'll get to that next.
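The Git housekeeping from this step, condensed into a sketch (the identity values are placeholders):

```bash
cd ~/environment/study-sync
echo "node_modules" > .gitignore      # keep installed packages out of the repo
git init
git add .
git commit -m "initial commit"
# Optional: set your identity so Git stops warning on every commit.
git config --global user.name  "Your Name"
git config --global user.email "you@example.com"
```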
So, as I said, the next thing we want to do is get this local repo — this folder here, which we can't see on the left-hand side because it's hidden — all of its contents, including the .git folder, into a hosted repo, and we're going to use CodeCommit. Now, when you use the Elastic Beanstalk CLI and you're setting up a project for the first time, it will set up a CodeCommit repo for you, so I figured that's the way we should do it. The AWS CLI is pre-installed on this Cloud9 instance, but the Elastic Beanstalk CLI is not, so let's open a new tab and search for the Elastic Beanstalk CLI setup project on GitHub, because it has the install instructions. Scrolling down, depending on your environment you might have to install additional prerequisites — you can see there's a bunch of them — but since we're working in Cloud9, there's nothing too difficult here. All we have to do to install it is run the git clone command, so let's give that a go.

Back in our Cloud9 environment, I'll type clear so we can see what we're doing, and I'm going to go back one directory, because this is going to clone the repo — download that folder — and I just don't want it inside study-sync. So `cd ..`, and then git clone... and it's already complaining about too many arguments. Oh, you know what — when we copied the command from GitHub, it already included "git clone", so it was trying to save me some trouble and I typed it in manually again; silly me. I'll paste it properly, hit Enter, and it clones — it just downloads the repo locally. Now to run the installer: if we go back to the instructions, it should be this command — yep, that's it — so we'll go back and hit Enter, and it's going to install a bunch of stuff. This Cloud9 environment is probably not on the version of Python it wants: if I open a new tab and type `python --version`, it says 3.6.10, and the installer wants 3.7.2 — that's just the state of Amazon Linux 1 right now. So we're going to have to wait; as it says, this takes a few minutes, so please be patient — maybe three or four minutes — and I'll see you back here in a moment. Okay, coming back to our first tab after waiting a bit, we can see it has completed — it just took a little time to install Python.
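Condensed, the EB CLI install from this step looks roughly like this. The repository is AWS's aws-elastic-beanstalk-cli-setup project, and the installer path matches that repo at the time of writing — check its README if it has moved:

```bash
cd ~/environment
git clone https://github.com/aws/aws-elastic-beanstalk-cli-setup.git
# Builds a compatible Python and installs the eb command into a virtualenv; takes a few minutes.
./aws-elastic-beanstalk-cli-setup/scripts/bundled_installer
```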
And there's one more thing we need to do: add Elastic Beanstalk to our PATH. If we typed `eb` right now, the shell wouldn't be able to find it, because it doesn't know where that binary is stored. So we take the installer's recommendation — we're using bash, it says bash up there — and echo the line it gives us into our profile. It also seems to suggest another step; I don't remember having to do that last time, but I think it's safe to do, and I'm pretty sure it won't mess up this follow-along. Now if we type `eb`, Elastic Beanstalk pops up, which is great. What I need to do next is just delete that setup folder we cloned — we don't need it anymore and it's just creating clutter. So `ls -la`, make sure you're in the environment directory (if not, `cd ~/environment`), and `rm -rf aws-e...` — let autocomplete finish it — hit Enter, and it's gone. Just a bit of housecleaning.

So now that we have the Elastic Beanstalk CLI installed, let's see what we can do with it, which is actually setting up an application — I'll close this other tab and see you in a moment. With the CLI installed, we're ready to initialize a new Elastic Beanstalk project. I want to point out that we're currently in ~/environment, the IDE's home directory, and it's very important that we run this command from the study-sync directory, because it needs to find that .git directory in order to upload our code to CodeCommit. So `cd ~/environment/study-sync`, then `ls -la` and make sure you see the .git directory. Before we get going, I'll open a couple of tabs in AWS — one on Elastic Beanstalk, and one on CodeCommit — so we can see what's happening in the background.

What I want you to do is type `eb`, which gives you the full list of commands. We probably won't end up using all of them, but the main ones are eb init, eb create and eb open. We don't actually get to use eb open here — it opens the application's URL in the browser, which is very convenient if you're running the EB CLI on your local computer, but in Cloud9 we won't be able to use it. Let's go ahead and run `eb init`. The first thing it asks is the region — we definitely want us-east-1, always us-east-1, because it makes things easier. It asks us to select the application — it's going to be study-sync, and it knows that because it's picking it up from the project directory — then asks if we're using Node.js, which we absolutely are, so yes. Do we want to use CodeCommit? We'll say yes. Then enter the repository name — we'll call it study-sync — and then it asks what we want our default branch to be: we want master, so type master or just hit Enter.
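As a sketch, the PATH step and the init answers from this part look roughly like this. The export line is the one the installer prints for your machine — the exact path can differ — so copy yours rather than mine:

```bash
# Put eb on PATH for bash (use the exact line the installer printed for you).
echo 'export PATH="$HOME/.ebcli-virtual-env/executables:$PATH"' >> ~/.bash_profile
source ~/.bash_profile

# Initialise from inside the app directory so eb finds the .git folder.
cd ~/environment/study-sync
eb init   # region: us-east-1, app: study-sync, platform: Node.js,
          # CodeCommit: yes, repo: study-sync, branch: master
```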
What it's going to do is create that repo in CodeCommit — and here it is, it now shows up there. Over in Elastic Beanstalk we're not going to see anything yet — this is just an old one of mine; if I refresh, well, you probably won't see anything, but I had some older terminated instances, so you might not see anything here as of yet. Then down below it asks if we want to set up SSH access — I'll say no, we don't need that — and now we've initialized our project. Notice it created a new folder called .elasticbeanstalk — the leading period means it's hidden — and if we open the config.yml file inside, these are the options we chose during eb init. So now that we've initialized the project and our code is on CodeCommit, we need to configure the application, which means configuring the Elastic Beanstalk environment.

You generally have to do this for every Elastic Beanstalk project, depending on what platform you're using, and the configuration lives in a .ebextensions directory, which we have to create ourselves. I'll type clear, make sure I'm in study-sync, and then `mkdir .ebextensions` to make the new folder — I'll double-check the spelling, and that's all good. We need a couple of files in there, so I'll cd into .ebextensions to save some time, touch a file called 001_envvar.config, and touch another one for the node command, then go back a directory. If we open up the env-var file, this is where we set some default environment variables. By default you don't have to specify the namespace — the variables would just go to the Elastic Beanstalk environment — but we're going to be explicit here. So we type option_settings, then aws:elasticbeanstalk:application:environment — I always type the word "environment" wrong, e-n-v-i-r-o-n-m-e-n-t — and under it I set PORT to 8081 and NODE_ENV to production. It's indented four spaces, which is fine, I'll leave it alone. So that's one file configured.

Then we go over to the other one — I'll double-check the option_settings line, yep, that's good — and the next thing we need to do is tell Elastic Beanstalk how to actually start our application, because it has no idea. So we use the aws:elasticbeanstalk:container:nodejs namespace to give it some Node.js-specific configuration: first NodeCommand, which tells it what command to run on start — we want `npm start` — and then NodeVersion, which is going to be 10.18.1. I did have to use double quotes around the command — I think single quotes work too, but I'm sticking with what I wrote earlier, because I don't want anything to go wrong in my follow-along. If you run `node --version` on this Cloud9 box, you can see it's on 10.19.0, and it's generally better to use the latest version — but that doesn't necessarily mean Elastic Beanstalk can run it. I know 10.19 isn't available on Elastic Beanstalk yet, and 10.18.1 is, so we'll stick with 10.18.1.
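Here are those two configuration files as a sketch. The first file name follows the video's naming; the second is my guess at a name (the video just calls it the "node command" file), and these option names apply to the older, pre-Amazon-Linux-2 Node.js platform used in the follow-along:

```bash
mkdir -p .ebextensions
cat > .ebextensions/001_envvar.config <<'EOF'
option_settings:
  aws:elasticbeanstalk:application:environment:
    PORT: 8081
    NODE_ENV: production
EOF
cat > .ebextensions/002_nodecommand.config <<'EOF'
option_settings:
  aws:elasticbeanstalk:container:nodejs:
    NodeCommand: "npm start"
    NodeVersion: 10.18.1
EOF
```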
So that's the configuration done, and we'll go ahead and commit it: `git add .`, then `git commit -m "configuration for Elastic Beanstalk"`, and push. It's very important that we commit and push this, because if these files aren't in the repo and we try to create an environment, it's just going to error out — eb always deploys what's in the repo. If we go into CodeCommit and look at .ebextensions, we can see those files are there. So that's the configuration phase. I'll close a few things to clean up, and next we'll actually create a new Elastic Beanstalk environment.

Alright, we're ready to create our Elastic Beanstalk environment, and to do that we use the eb create command. But we're not just going to write `eb create` — we're going to write `eb create --single`. That's a command flag, and it tells EB to create an environment running in single-instance mode. If you don't provide this flag, it spins up an Elastic Load Balancer, and load balancers cost money. Technically, if you're on the free tier — if you made this account and you're still in your first year — you get one load balancer free, but we just want to avoid that kind of problem, and we're not going to be doing anything with a load balancer in this walkthrough anyway. So: eb create --single. It prompts us with some options. I'm going to name this study-sync-prod — even though we're in the developer environment of my AWS account, I'm going to pretend this is a production application — hit Enter to keep the defaults, and say no to spot instances, which are otherwise a great way to save money. Then it says "insufficient IAM privileges: unable to determine if this role exists; assuming it exists". I don't know if that's going to cause us a problem, but it goes ahead and starts spinning things up, and this is going to take a little time.

While we wait, I'll open another terminal, cd into study-sync, and type `eb status`, which shows the current status of the application: right now the health is grey and the status is launching. If we go over to the console you can see the new environment pending — I was trying to launch some stuff earlier, so those are terminated instances — and it's a nice dark grey, which mirrors exactly what we see in the terminal. This is going to take about five to ten minutes to launch, so I'll see you back here momentarily. Okay, after waiting a while, it says it successfully launched the EC2 instance, so it looks to me like it's all in working order; we go over and type `eb status`, and it says ready.
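For reference, the create step plus a few eb commands we'll lean on over the next steps, as a sketch (environment name matches the follow-along):

```bash
git add . && git commit -m "configuration for Elastic Beanstalk" && git push
eb create study-sync-prod --single   # --single = no load balancer
eb status                            # health colour and state
eb logs                              # instance logs (press q to quit the pager)
eb events                            # recent environment event history
```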
But the health shows yellow, and yellow is not a great status to have. So if we come back over to the Elastic Beanstalk console — I can't tell if it's finished yet, but I'll go back to study-sync and click into the yellow environment — it's giving us a warning saying it was unable to assume the aws-elasticbeanstalk-service-role. It's supposed to create that role for us, but for whatever reason it just did not; when I first wrote this follow-along it definitely created it, so I'm not sure why it isn't now. If we open the environment's link, the application clearly is working. So the warning isn't great, but maybe it'll go away on its own — I'll hit refresh and wait a little bit to see whether it does.

Okay, so our yellow eventually became a red, and it really does come down to this aws-elasticbeanstalk-service-role. This is confusing, because when you run eb create it's supposed to create two IAM roles for you, and if I go over to IAM I don't see them there at all. You could go and create them manually — I've tried, and I haven't had much success — but there's another way to get these roles created without a lot of manual labour. Again, you might not have to do this at all; those IAM roles may already exist in your account. In my case I'm just having a hard time with Elastic Beanstalk today. To get them created, I'm going to start another Elastic Beanstalk application from the console — create a new app, call it "test", it doesn't matter — because launching one from the console will absolutely create those roles. So: web server environment, leave the name as test, choose Ruby, scroll down and launch the sample application, type "test" for the domain, check availability — all good — and hit Create environment. What that should do is trigger the console to create those IAM roles, and if I refresh IAM, you can now see they exist: aws-elasticbeanstalk-ec2-role and aws-elasticbeanstalk-service-role. I have no idea why they weren't appearing before, but now they do. The trick is that I now need to delete this test environment — I can't stop it while it's launching, so we'll have to wait for it to go through the motions, and once it's done we'll terminate it.

Okay, the test environment spun up, and now we just have to delete it. I know this is really silly, but it's the only way I could get these roles created — you definitely should be able to make them manually, and it definitely should happen automatically. I'll go back to all applications, go in on the right-hand side, and delete it — type "test" to confirm — and that will start deleting the environment. If I click in, it's not terminated yet, so that's that. But our original environment is no good either, so I guess what we'll do is terminate that one as well — I'll go over here and type its name in to confirm.
And I'm just going to wait for that one to delete, and then we'll try eb create again, and hopefully we won't have any issues this time around. Okay, a refresh shows it's terminated, so I'll go back to Cloud9 and try again: back in our first tab, hit up to re-run eb create --single, give it the name — study-sync-prod — hit Enter again, and say no to spot instances. It's saying it can't find the EC2 role and is assuming it exists — that's okay; when I first did this I had that warning and it wasn't an issue — but let's just make sure the role really is there, because as long as it exists, that's all that matters. Okay, the role is there, so we shouldn't have any problems this time; we'll wait and see the result. I'll go over to Elastic Beanstalk, refresh, click in, and wait a few minutes.

Okay, great — this Elastic Beanstalk environment has gone green, so creating that temporary application, silly as it was, fixed the issue. Hopefully you don't have to do that and those roles just exist for you. Over in Cloud9, if you do `eb status` it shows green and ready, the same as the console. If we want to view the web application, the status shows the CNAME — if I copy that out, we have a link — and if we scroll up, we can also see it assigned an Elastic IP address, so that's another way of accessing the web application. If we type `eb logs`, it shows us what was logged on the EC2 instance — I showed you this before, but quickly again: you can see the application started up; I'm not sure what's going on further down, but I don't think it matters, this is what we really want to see — and you hit q to exit. If we type `eb events`, it shows the event history, the same information as the Events page in the console — a really great way to debug things.

I also want to point out that the deployment policy we're using right now is all at once. We haven't actually deployed to an existing environment yet — well, technically we have deployed once, but not re-deployed — I'm just showing you that it's currently all at once. The next thing we're going to do is switch this over to immutable and see the difference. We could do it right in the console — click immutable, hit apply, and then deploy — but I want to do everything from the terminal, so that's what we'll do next. To switch over to immutable deploys, again, we could modify the setting in the console, but we want to do it programmatically, and we're going to do that through the configuration files. So go into .ebextensions — and if Cloud9 stops letting you type, which happens to me, just close bash and open a new terminal; I already have one open — and make a new file called 000_deploy.config. I want it to sort ahead of the other ones; I don't think the order really matters, it's just what I want to do. Oops — not mkdir, we want touch — so we'll touch the new file there.
What we're going to do in there is set option_settings, use the aws:elasticbeanstalk:command namespace, and set the deployment policy to Immutable. We're also going to set HealthCheckSuccessThreshold to Warning, IgnoreHealthCheck to true, and Timeout to 600. I'll read it over quickly to make sure I haven't made mistakes: aws:elasticbeanstalk:command — that's correct; DeploymentPolicy: Immutable — that looks right; HealthCheckSuccessThreshold — good; IgnoreHealthCheck — that's like ticking the "ignore health check" box in the console. We're essentially setting the same settings you'd see on the console's deployment page, except the health threshold is Warning, and that should make the deploy really fast.

While we're at it, let's make a superficial change so that when we deploy, we can actually see the effect. I'll go into the application and change the heading to "Study Sync version 1" — if you checked the application before this deploy, it said "Hello world", so we're changing that. Back in bash, go up a directory, type clear, make sure you're in ~/environment/study-sync, and add the changes: we modified the app file and added a new configuration file, so I'll add them both, commit with a message about immutable deploys, push the changes, and then deploy with `eb deploy`. Right away it should switch over to immutable deploys, because it pulls those configuration files, looks at them, and decides what to do. We wait a moment and... I actually have an error, so I'm going to abort with `eb abort`. It says the file contains an invalid key under option_settings, so I probably just made a typo — yep, there it is. We'll try `eb deploy` again... oh, we have to commit and push those changes first — commit, push, eb deploy again. What we're looking for is whether it says it's doing an immutable deploy — and there it is: "immutable deployment policy enabled". Now we wait for that deploy to happen. I'll open a new tab, because immutable deploys are a lot slower than all at once; the advantage is that it won't take our server out of service — it creates a new server, and only when the new server is healthy does it switch over, so our users never have an interruption in service. I'll see you back here shortly.

Alright, great — our immutable deploy has completed. It's actually been quite a while since I was last here, because I'm recording this the next day, but I can tell you the immutable deploy didn't take too long. It definitely takes longer than all at once — all at once is extremely fast, whereas an immutable deploy has to go through health checks, and multiple checks, before EB decides the new server is good and switches over. So that was the immutable deploy.
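The deploy-policy file from this step, as a sketch (file name per the follow-along, values as described):

```bash
cat > .ebextensions/000_deploy.config <<'EOF'
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Immutable
    HealthCheckSuccessThreshold: Warning
    IgnoreHealthCheck: true
    Timeout: 600
EOF
git add . && git commit -m "immutable deploys" && git push
eb deploy
```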
But what I want you to do now is go back to Cloud9, because we're going to undo those changes — the next thing we're going to learn is blue/green deployment, and I don't want these immutable deploys slowing down our development. To get rid of the immutable deploy configuration, all we do is remove that file: `rm ~/environment/study-sync/.ebextensions/000_deploy.config`. Then add the changes — make sure you're in the study-sync directory; it looks like I was in the wrong place — and do `git add .`, `git commit -m "revert to all at once deploys"`, and `git push`. Before we do anything else, I just want to go back to the environment's configuration and show you that it really had switched: here you can see it's immutable and the health checks are disabled. Anyway, now that I've made these changes and pushed, I'll just do a cap deploy — or, not cap deploy, `eb deploy`; I'm thinking of Capistrano, which is for Ruby on Rails, not what we're doing here — and that will revert this back to all at once. It's not going to take long, so I'll go back to my dashboard, see that it's updating, and I'll see you when it's done. Then we'll move on to blue/green deployments, which should be super exciting.

Alright, after a short wait our immutable deploys are back to all-at-once deploys, and we can double-check under the configuration — go down and it now says all at once. So let's move on to blue/green deployment. Blue/green is when you switch environments: right now we have this environment, which we can consider our blue environment, and the idea is that we spin up an identical environment — our green environment — with our latest changes, and once that environment is in good shape, we swap the URLs of the two environments. That swap option isn't available to us right now, because there's nothing to swap to, but once we have the other environment, that's how we'll make the switch. To do that, we go back to Cloud9 and clone this environment — make a copy of it. We could go into the console, click Actions and clone the environment there, but let's do it through the CLI, because, again, this is the Developer Associate and it's the best way to learn. So we type `eb clone`, and it prompts us for the new environment's name — I think the default with "clone" on the end is fine for our case, so I'll hit Enter — and we'll keep the CNAME the same. What it does is start up a new environment. We'll go back to study-sync in the console, give it a refresh, and we can see the clone spinning up — so we'll have to wait a little, and I'll see you back momentarily.

Alright, after a short wait — because it's using all-at-once deployment — our prod clone environment is up. I know it doesn't look like it's running here, but if we go back a second and refresh, we can see it's now green. I don't always trust the console: always refresh and look around, because sometimes things are ready and you're just waiting around for nothing.
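The clone step from the CLI, as a sketch — you can also just run `eb clone` with no arguments and answer the prompts, as in the video; the `--clone_name` flag and the environment names here are my reading of the follow-along's naming:

```bash
# Spin up the "green" copy of the running environment.
eb clone study-sync-prod --clone_name study-sync-prod-clone
eb status study-sync-prod-clone   # wait for it to go Green / Ready
```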
So if we take a look at this clone environment and follow its CNAME — the URL here — we can see it's running. But we want to make sure we get a new version onto it: this one is version 1, so let's make another superficial change to version 2, see how we deploy to the new environment, and then facilitate the switch. I'll go back to Cloud9 — it's making some complaints, so I'll close these tabs — open the app.js file, and change it to version 2. Now that I've made the change, I'm going to commit it to the repo: `git status` first, which we should always do, and we can see the one file to add; then `git add .`, `git commit -m "change to version 2"`, `git push`, and another `git status`. Great — version 2 is in.

So how do we deploy to the green environment, now that we have two? If we ran `eb deploy` on its own, I think it would deploy to the original environment by default, but if we want to target a specific environment, we just provide its name. It's called study-sync-prod-clone — I'm pretty sure that's the name — yep, study-sync-prod-clone. So `eb deploy study-sync-prod-clone` should push the latest changes to that environment. We'll run that, give it a second, flip back to the console, go to study-sync and refresh — something should be changing, probably the clone — yep, it's updating. We'll let that deploy, give it a little time, and I'll see you back when it's done; then we'll double-check it's version 2, and if that's all good, we can do the swap.

Alright, pushing our version 2 changes to the clone is done. If we go to its page and refresh, you'll notice there hasn't been any visible change — but it actually has worked; this is just an issue with Chrome. If you open another browser and refresh — in Firefox it says version 2 for the same URL — so this is a Chrome caching issue. I spent hours upon hours trying to solve this problem without realising it was Chrome. So just be aware: any time you're doing deployments and checking things, always rule out your browser first — sometimes it's not even AWS. If you want to see the latest version in Chrome, open the inspector, go to the Network tab, tick "Disable cache", and do a refresh — now it says version 2; it won't work unless you have the inspector open with that checkbox ticked when you refresh. Now that we've deployed our second version, what we can do for the blue/green deployment is swap the environment URLs. We could go to Actions, choose Swap environment URLs, pick the other environment and swap there in the console — however, since this is the Developer Associate, I really do want you to get as much CLI experience as possible (I'm going to keep saying that), so we'll use Cloud9 and the EB CLI to do it.
The command here is `eb swap`. If we just hit Enter on it, it would prompt us for the source and the destination, but I want to be very explicit: I'm going to type the source myself. I want to swap prod with the clone, so `eb swap study-sync-prod` and the clone's name — this does exactly the same thing as swapping the URLs in the console. I hit Enter and get an unrecognised argument for the clone name; let me make sure I spelled it correctly — study-sync-prod-clone, study-sync-prod-clone — oh, sorry, you know what, I have to provide it as a flag: `--destination_name study-sync-prod-clone`. Hit Enter, and that triggers the swap action, so we wait a little... and it says it's completed the swap — wow, that was really fast. Back in the console, if we click on prod now, notice it has the clone's URL, and we're effectively in the clone environment: the one that used to say clone now carries the prod CNAME — it's taken the original CNAME from the other environment — and the first environment now has the clone CNAME. That's the swap that occurred, and that's how we know it worked.

Now that the swap has happened, we want to get rid of our old environment, because the new one is running with no problems — if we go to the prod URL, it's running version 2, so we're all good. We could terminate it in the console, but let's do it from the CLI: back in Cloud9, type `eb terminate study-sync-prod`, confirm it, and it starts terminating. With that, we're pretty much done with blue/green deployment, and we can move on to learning how to deploy a single Docker container next. So we need to tear everything down, because we don't even need the clone anymore — I'm going to terminate that one through the console: type its environment name (I'll copy and paste it to make sure it's right), and then we wait for both of these to shut down. When they're both terminated, we'll move on to the next step, which is deploying a single-container Docker environment to Elastic Beanstalk.

Alright, so we're back — it went through the whole build process and it's saying it's running on port 8080. We need to open a new tab, because we're going to need the IP address of this Cloud9 environment again. In the new terminal, type `curl -s http://169.254.169.254/latest/meta-data/`, hit Enter to make sure it works, then append public-ipv4, and that's the IP address. Copy it, add :8080, and see if it works — there it is. So this is running in a single Docker container. The reason you'd want to Dockerize your environment is that it lets you ship your configuration along with your code base: you saw before that we were restricted to Node 10.18 or whatever Elastic Beanstalk supported, but now we're only restricted to whatever we provide in the container, so there's a lot more flexibility around that.
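The Dockerfile itself isn't shown in this part of the transcript, so the following is only an assumption of what a minimal one for this app could look like — the node:10-alpine base image and the PORT value are guesses, not the course's actual file:

```bash
# Hypothetical sketch of a Dockerfile for the study-sync app, plus a local build and run.
cat > Dockerfile <<'EOF'
FROM node:10-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
ENV PORT=8080
EXPOSE 8080
CMD ["npm", "start"]
EOF
docker build -t study-sync .
docker run -p 8080:8080 study-sync
```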
In order to prepare this for deployment, we aren't going to need that Node.js container config anymore — we're using a Docker container now, so that setting would just do nothing — so we'll remove it. Close this tab, Ctrl+C to stop the Docker container, `cd ~` — oh, we're already in the right place, but that's where we need to be — and then `rm` the .ebextensions 002 file to remove it. With that gone, we need to make one adjustment to the env-var file — hit "keep open" — this needs to be port 8080, because that's what we're setting in our application. Then let's commit our changes, something like "configure EB for Docker". Now that the changes are pushed, we can do an `eb create --single` again, so we don't launch a load balancer. We'll name this one differently so we can identify it — study-sync-docker — hit Enter, no to spot instances, and make our way over to the console, refresh, give it a second to start up — there it goes, you can see it's using the Docker platform — and we'll refresh and wait until it's done.

After a little while our environment is running, so let's see if it's working — and there you go, we're running on Docker, it was that easy. The thing with Elastic Beanstalk is that it did all the work for us: we just had the Dockerfile in the repo, and when we uploaded it, EB built the image for us. Normally, though, you'd build the image yourself and push it to a Docker repository — that could be Docker Hub, or in the case of AWS, Elastic Container Registry, ECR. And that's what we're going to do next, because it's a more complex setup and the more common one people actually use — most people outgrow this simple "Dockerfile in the repo" setup. To do that, we're going to create a new file called Dockerrun.aws.json, and we'll have to build an image and push it to ECR. But before that, let's make one more revision to the code base: go to app.js and call this version 3.

Next we'll build our Docker image. I'm going to type `docker build -t study-sync .` — this builds a Docker image and tags it study-sync — and we'll wait a little bit... and it's done, that was fast. The next thing is to authenticate to ECR, and this is a very long command: it's `aws ecr get-login-password`, piped into `docker login --username AWS --password-stdin`, and then we have to provide our account ID. I don't know what my account ID is for this account, so we need to poke around and find it — it's generally under My Account settings, but I don't want to show all of my billing information here. An easier way is to head over to IAM, because that's always a place where you see your account number — it's on pretty much any user in there; here's one, the account number is everywhere. I just need it, so I'll copy it over, extract the number out, and then we provide it as part of the command.
And then we need to type in this URL: we need dkr.ecr, and then the region we're operating in, so us-east-1.amazonaws.com. If you're wondering how I got this whole link, it's in the AWS documentation for ECR. What this is going to do is log us in and generate a token so we can authenticate. It says there's an unknown flag named "user", so I'm just going to double check that — it's actually supposed to be username. So we'll go ahead and type this in, and here it says it's created that credentials helper file. So notice that it's created a hidden file here called .docker/config.json, and that's what's storing the token which is going to help us authenticate. Let's take a look at the actual images that are here — we can see we have our image built, and we're going to grab this image ID next. What we need to do is tag this Docker image, so I'm going to put in that image ID, and we need our account ID again here — it's actually the same link, so it'd probably be easier if I just copy that out like that — and then we need to specify the name. Now that it's been tagged, what we can do is a docker push, and I believe it's the same URL here, so let's copy this. And it says here that the repository does not exist with this ID. So maybe what we should do is make our way over to ECR — maybe we need to make the repo beforehand. I always forget this, so I guess we'll find out. I thought it just created it for us — nope, I guess not. So we'll just type in study-sync here, hit Create repository, then make our way back to Cloud9 and just hit up. And there we go, it's uploading our Docker image, kind of like GitHub would. It's an incredibly small Docker image, so it's not taking too long, which is really nice — one advantage of using Node.js over other languages and frameworks. So I'll just wait here a little bit, and I'll see you back in a moment. So our Docker image is now built and pushed to our ECR repo here; if we go in here, we can see that we have it. The next thing is to prepare our next environment, and instead of working with this one — because it'd be a lot of work — we're just going to make a new folder. So go to cd ~/environment, and we're going to make a new directory called study-sync-external. What I want you to do is make a new file in here, so we'll just cd into it. We're going to call the file Dockerrun.aws.json. If you've ever seen a task definition file, it's extremely similar — and in this developer associate course we definitely cover how to deploy with ECS and Fargate, so this will become extremely familiar to you shortly. But what we need to do is open up that file. I made it as a directory — that was an accident, I should have made it a file — so I'm just going to remove that, and instead of doing mkdir, I'm going to type touch. Okay, and then we can just open up that file. What we're going to do is write some JSON. The first thing we need to do is define the Dockerrun version for Elastic Beanstalk here — AWSEBDockerrunVersion — and I'll double check to make sure that's correct. Yep, that is right. We're going to specify version one; version one is for single containers, and when you do multi-container, you use version two. Then we need to specify the image, and that's going to be the URL we were seeing there earlier — I feel like we could grab that from ECR.
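To recap, that image URI is the same repository we just pushed to. End to end, the build-and-push sequence we just went through looks roughly like this — the account ID is a placeholder, and I'm tagging by name rather than image ID, which amounts to the same thing:

```bash
# Build the image from the Dockerfile in the current directory.
docker build -t study-sync .

# The repository has to exist before you can push to it.
aws ecr create-repository --repository-name study-sync

# Authenticate Docker to ECR (account ID and region are placeholders).
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 111111111111.dkr.ecr.us-east-1.amazonaws.com

# Tag the local image with the ECR repository URI and push it.
docker tag study-sync:latest 111111111111.dkr.ecr.us-east-1.amazonaws.com/study-sync:latest
docker push 111111111111.dkr.ecr.us-east-1.amazonaws.com/study-sync:latest
```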
Yep, it's pretty much the same thing here — I just want this part of it. I don't know if I need to put latest in there; we'll put it in anyway. Then we have to specify the ports, so we'll go ahead and do that so it knows what to map to, and then we'll do a little bit of cleanup here. I'm just going to double check that everything is right — sometimes it's easy to miss these commas — and it looks all correct to me, so we're in good shape. The next thing we need to do is initialize a Git repo, so we're going to do git init, and we're going to copy over a couple of files: we want to bring over our .gitignore, our .ebextensions file, and our envvar config file, I think. I'm just trying to think of the easiest way to do this — probably just make the files again — so I'm going to type touch .gitignore, then make a new directory called .ebextensions, and then touch the 001 envvar config inside .ebextensions. I think those are the only two files we need to move over. So we'll go to our old one here — it has some Elastic Beanstalk stuff in it, and we'll just take all of it, that's totally fine — and we'll go to our new one and paste that in. And we said we need to set this as well, so we'll go to our old one, copy that, and paste it in there. Now that we have those files in there, what I want you to do is go ahead and do a git status — we have three files, that's great — then git add, git commit -m "Dockerrun". And we need this — we need a Git repo — because it's going to create a new one when we run eb init here in a moment. So I'll just do git status to make sure that all worked fine. Great, and we'll do eb init. We're going to choose us-east-1, so number one. We're going to create a new application, so press 2. We're going to stick with the name we're given, so hit enter. We are definitely using Docker, so we'll hit yes. We want to use CodeCommit, sure, so we'll hit y. We need to select a repo — we're making a new one, so press 2 — and the repo's name is going to be study-sync-external; make sure you type it right. We want it to be the master branch, so hit enter. We don't need to SSH in, that's okay. And so there we go. Now that that's created, I'm just going to double check it, so we'll make our way over to CodeCommit — and here we have our external repo, so it's all in good shape. Now that that's all set up, we should be able to create a new environment. We'll do eb create --single, and we'll name it pretty much the same — I'm just going to take the -dev part off the end; it doesn't matter too much as long as you can remember what you set it to. We'll say no for spot instances. And what we're going to do here is just wait a little bit — it's nice to read the messages and make sure it's creating what we want to create. This is a Docker image, so hopefully this works first try. I'm going to make my way back over to the other tab here, and while that's going, we can go ahead and terminate the other environment — we don't need it anymore.
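For reference, the finished Dockerrun.aws.json and the setup commands for this new folder look roughly like this — the account ID is a placeholder, and the repo and environment names are the ones used in this follow-along:

```bash
cd ~/environment
mkdir study-sync-external && cd study-sync-external

# Version "1" of Dockerrun.aws.json is for single-container Docker environments.
cat > Dockerrun.aws.json <<'EOF'
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "111111111111.dkr.ecr.us-east-1.amazonaws.com/study-sync:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": 8080 }
  ]
}
EOF

git init
git add .
git commit -m "Dockerrun"

eb init                                  # us-east-1, new application, Docker platform, CodeCommit repo
eb create study-sync-external --single   # single instance, no load balancer, no spot
```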
And just to point out — look at this: this new setup doesn't contain any of our code. So where is our code? It's part of the actual Docker container; when we built the image, it copied the code into the container. Whereas in the other setup, the Dockerfile sits right alongside the source, so you can work with your source code and have it all in one place. You just have to decide what workflow works best for you — and if you can get away with just having a Dockerfile like that, that's definitely simpler. And this is creating; I'm just going to go back here and do a refresh. I don't see this new environment yet — it should be called study-sync-external, right... oh, here it is. Yeah, because it's a completely new application, that's totally fine. So I'll see you back here in a moment once this environment is done creating. Alright, so our deploy is done, but the health is degraded, and it looks like we have an error here. It turns out Elastic Beanstalk can't authenticate to ECR, because we didn't give it permissions to do so. Whereas in Cloud9, we had pulled all the credentials and stored them there so that we could read from ECR. So what we need to do is update the instance profile of the actual EC2 instance that runs here. We'll make our way over to IAM — just type in IAM here and open it in a new tab. On the left-hand side we'll go to Roles, and we'll go to the EC2 role for Elastic Beanstalk. We're going to attach permissions: we'll type in and attach the AmazonEC2ContainerRegistryReadOnly policy. This should allow us to gain access to ECR. Then what we'll do is go back to Cloud9 and simply do eb deploy. What that will do is just deploy again, but now the instance profile has been updated, so it should have permissions this time around. We'll give it a second to get started, make our way back over here, do a refresh, and we can see it's in progress — I'll see you back here momentarily. And our deploy is done. So I'm just going to close these additional tabs and open up the site in a new tab — and we are now seeing version three. If you don't, it could be Chrome caching it, but there you go. So we went through a lot of different variations here with Elastic Beanstalk, and it's a lot of stuff, but it's necessary to go through all these things. Let's go ahead and clean up what we have. I'm going to go back all the way to Applications, and we can go ahead and delete these applications — that should terminate all the environments. So hit delete, and we're also going to delete this other application; we'll say delete, click into this, and it should be terminating. Our Cloud9 environment isn't a big issue — it shuts down after 30 minutes when not in use. As for Elastic Container Registry and CodeCommit, I don't think having those around is really an issue, so I don't have much motivation to delete them, and we might be using them for the ECS and Fargate follow-alongs. So we're going to leave the CodeCommit repos and ECR alone. But yeah, that's it for the Elastic Beanstalk walkthrough.
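One side note on that ECR permissions fix: you could also do it from the CLI instead of clicking through IAM. A rough sketch, assuming the default Elastic Beanstalk EC2 instance profile name (adjust the role name if yours differs):

```bash
# Allow the Elastic Beanstalk EC2 instances to pull images from ECR.
aws iam attach-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly

# Redeploy so the environment pulls the image with its new permissions.
eb deploy
```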
Hey, this is Andrew Brown from ExamPro, and we are looking at Elastic Container Service, which is a fully managed container orchestration service — a highly secure, reliable and scalable way to run containers. So let's take a look at the components of ECS. Over here we have a technical architecture diagram, and we'll talk about all the components involved. The first is the cluster itself. An ECS cluster is just a grouping of EC2 instances — they call them EC2 container instances, which is a bit confusing, because these instances have containers running inside them. They have Docker installed on them, so they can launch Docker containers. Another thing that's really important is task definition files — we don't have a representation of those in the architecture diagram here, but a task definition is just a JSON file that defines the configuration of up to 10 containers that you want to run. Then you have a task, which uses a task definition to launch containers; a task runs only for the duration of the workload. So the idea is, say you have a background job you want to run — as soon as it's done, the task stops or deletes itself, so tasks are really good for one-off jobs. Then you have a service, which is exactly like a task except that it's long running. It's intended for web applications — Ruby on Rails, Django, Express.js — things you don't intend to shut down. And the last thing we want to talk about here is the container agent. It's not represented in the diagram either, but it's a binary installed on the EC2 instances that monitors the tasks as well as starts and stops them. Let's talk about the options we have to choose when configuring a cluster. The first thing we're going to do is go ahead and create that cluster, and you'll have to choose between Fargate or ECS clusters, and whether you want some networking components involved. Once you've done that, you go through and choose a bunch of options. You have to choose whether you want spot or on-demand — with ECS, you can save money with spot, because if you're running background tasks, maybe it's not a big deal for them to get interrupted. Then you're going to choose the EC2 instance type, the number of instances, and the EBS storage volume, and then you can choose whether you want to use Amazon Linux 2 or Amazon Linux 1, which both have Docker installed on them — so there are some of those options right there for you, as you can see. Then you'll have to choose your VPC or create a new VPC, then you need to decide on an IAM role, then you have the option to turn on CloudWatch Container Insights, which gives you richer metrics about the operation of your containers. And then you can choose a key pair, which is unusual because you don't necessarily need to log into your instances — AWS generally does not recommend you SSH into those container instances, but you totally can. So those are all the options for ECS, and we'll see this again when we go through the follow-along. Let's take a look at that task definition file we talked about, which is used to launch our tasks or services. What you'd do is hit Create new task definition, and there's actually a wizard to help you get set up — but if you had to write this by hand, this is what the actual file would look like. In this file you can define multiple containers within a task, which is actually what we're doing here on the right-hand side, and the Docker images can be provided either by ECR — Elastic Container Registry, which we'll talk about in the next slide — or an official Docker repository such as Docker Hub. So here, you can see that we are specifying an image, and that image is WordPress.
Another important thing is that you must have at least one essential container — if this container fails or stops, all the other containers in the task will be stopped. So this is just to make sure you have at least one dependent resource there. And if you aren't sure how this all works, it's okay, because AWS has that wizard: when you click the Create new task definition button at the top, it has all the fields you fill out to create this. But if you wanted to write it by hand, you could totally do so. So I want to take a quick look here at Elastic Container Registry, which is ECR, and this is a fully managed Docker container registry that makes it easy for developers to store, manage and deploy Docker container images. Just to give a representation here: you have a Docker image, and you can push that to ECR, which is like a repo for Docker images. Once you have it there, you can access it from Elastic Container Service, Fargate, Kubernetes, or even on-premise. So it's just an alternative way of storing a Docker image, as opposed to Docker Hub or somewhere else or hosting your own — and it's highly secure. Hey, this is Andrew Brown from ExamPro, and welcome to the ECS follow-along. In order to do this follow-along, you have to do the Elastic Beanstalk one first, because in that follow-along we build out a web application, turn it into a Docker image, and host it on ECR — and we're going to need that ECR Docker image in order to complete this follow-along. So please go do that one first, and once you have it done, come back here and we'll proceed forward. Before we can create our ECS cluster, we're going to need to create an IAM role for our EC2 instances. AWS has documentation on this — the Amazon ECS instance role. The instructions aren't very clear, but I know what to do, so let's follow along. The first thing is that we're going to name it ecsInstanceRole — well, you could name it anything you want, but let's be consistent here, because this is what everybody else names it. We'll make our way over to IAM, and on the left-hand side you want to go to Roles. We're going to create a new role, leave the trusted entity type as it is, choose EC2, and go next. Then what I want you to do is type in "EC2 container" — we're looking for the Container Service role for EC2. This one here — I'm going to double check it, just to make sure it's what I'm expecting; I can usually tell what this stuff is by looking at the services it covers. Yep, this is the one. We'll go next, hit next, name that role, and create that role. So now we have this role, and we're ready to go create our cluster. Going back to our first tab, I want you to make your way over to ECS. We'll click on Clusters on the left-hand side, we'll create a cluster, and we'll be presented with multiple options. It defaults to Fargate, which we'll be doing in the Fargate follow-along, but right now we're doing ECS — and the way you know it's ECS is that it's not powered by Fargate and you do not create an empty cluster. So we're going to use the EC2 Linux option and hit next here. And if we checkboxed this here, that would make it Fargate — but that's not what we're doing.
I'm going to call this my-ECS-cluster, and we'll leave it as on-demand — spot is a really great way to save money, but I just don't want anything to go wrong in this follow-along, so we'll leave it as on-demand. I want you to go look for t2.micro here, because that's part of the free tier — there, I found it. We're only going to have one instance; we want to keep this very inexpensive. Amazon Linux 2 seems fine to me. We do not need to set a key pair, and we do not need to set any of the VPC settings here, but we do need to make sure our ecsInstanceRole is there — it automatically selected it — and then we can go ahead and hit Create. What's that going to do? It's going to create an ECS cluster, and it's going to make sure we have that IAM policy. And now we're just waiting for this CloudFormation stack to complete, so this won't take too long. It's still pending right here — we're just waiting for the auto scaling group and the internet gateway. I think what I'll do is just pause the video here... oh no, it looks like it's almost done, maybe we'll just give it a second. Okay, it's proceeding forward; now we're waiting for the security group and the auto scaling group. Okay, great, now we're just waiting on these last two, the gateway attachment and the auto scaling group. Alright, now we're waiting for the route. You can see that it sets up a lot of stuff to make this cluster. And that looks like it's done — it's still spinning though; I think this is the pending one here, the route table subnet association. So we're just waiting for the route tables to set up... and there we go. Let's go ahead and view our cluster. What we want to do is create a service, but before we can do that, we're going to need to create a task definition to actually launch. So go to Task Definitions and create a new task definition. We have the option between Fargate and EC2 — we obviously want EC2, because that's for ECS — and hit next. I'm going to name this study-sync. We need to choose a task role — an optional IAM role that the task can use to make API requests to authorized AWS services. We could create one here; we might need one for using ECR, I'm not sure, I guess we'll find out as we go. We'll go down here, and we have to specify some memory. In order to do this, we need to know how much memory comes with a t2.micro — we'll go with 500 megabytes as the maximum here. And with a t2.micro you get one vCPU, so I'm going to place that in there. Here you can set the CPU units — I'm not totally sure what one vCPU maps to in CPU units (it's 1024), but I definitely know there's only one vCPU on a t2.micro. So now that we're there, we'll go ahead and add our container. I'm going to name it study-sync-container, and then we need to specify our image repo — we'll go over to ECR, copy that, and paste it in. I think we want the latest, so I think we can put :latest in here. We'll set it to 256 megabytes — it's a Node.js app, it doesn't require a lot of memory. We want to map the port: 80 is the host port that goes to the internet, which is what we want, and 8080 is our container port — that's what we start our web application on, and you'll have seen that in the Elastic Beanstalk follow-along. We're not going to set up the health check.
We do need to set up some environment variables. We don't need to set an entrypoint or command — those are set in the built image; if we wanted to override them, we could place them here. Same thing with the working directory. We're going to set the PORT here, which will be 8080, and then NODE_ENV, which will be production. Then we'll go down — there are quite a few options here, not super important, so we'll keep on going — and that should be everything we need, so we'll hit Add. I still don't know if we're going to run into an issue, because I know ECR requires authentication, and I don't know if it will pull the image without that task role, but I guess we'll find out. We have a lot of interesting options here for service integration — App Mesh proxy configuration, log routing with FireLens; I don't even know what FireLens is, it sounds like a new service to me. We can add volumes — not necessary. I do want to point out one thing, maybe I can click it here, I just didn't show it to you: when you set environment variables, you can use "value from", and that allows you to provide the ARN of a Systems Manager Parameter Store key, which lets you pull in secret parameters. So if you had secrets you wanted in there, you could definitely do that — it might show up on the exam. We'll go ahead and hit Create... unable to create this task definition. It doesn't like something I've done. I'm looking through it here — looks okay to me. The only thing I can think of is that it doesn't have a task role, but it says the IAM role is optional. Alright, so I tried going ahead and creating this task definition, and I got this error here, and it's exactly what I thought it was: we just don't have permissions. I did a bit of googling, and I believe we're missing the ECS task execution role. Here they have the instructions, so we'll go ahead and create that. Luckily, we still have IAM open here, so I'm going to create a new role. For the trusted entity, choose Elastic Container Service. What's interesting is that this would actually have been created for us if we had gone through the Amazon ECS console first-run experience — the first time you do it, it automatically creates the role — but I don't have anything to launch without a task definition; it's like a catch-22, one of those things I wish AWS would improve on. I'm sure if I complain about it on Twitter, they'll definitely do that. But anyway, we'll just go ahead and do it manually, because it's all about figuring stuff out on your own here. They might actually already have a pre-made one, but we'll just go down to Select your use case and choose Elastic Container Service Task — oh, there it is, okay, we'll hit next. And it should automatically add the policy — I guess not, so we'll attach the permissions policy ourselves. Sometimes when you select those pre-made ones it automatically fills it in for you, but in this case we pick the one we need. We'll hit next, hit next, and name it what it suggests we name it — whoops, there we go. Now we have a role; we'll scroll up, hit refresh, and there it is. And then down here, it's about granting ECS permission to create and use the role — so it's suggesting it would have created it on our behalf, but it never did. Alright, that's fine, we'll hit Create. Anyway.
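By the way, if you'd rather create that execution role from the CLI than click through the console, a rough sketch looks like this — the trust policy lets ECS tasks assume the role, and the attached managed policy is the standard one for pulling from ECR and writing logs:

```bash
# Trust policy: allow ECS tasks to assume this role.
cat > ecs-tasks-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name ecsTaskExecutionRole \
  --assume-role-policy-document file://ecs-tasks-trust.json

# Grants pull access to ECR and write access to CloudWatch Logs.
aws iam attach-role-policy \
  --role-name ecsTaskExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
```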
Hi, alright, I'm back here again — sorry for all the hard cuts, I'm just having a really hard time with ECS. I made this follow-along in my other AWS account and it worked perfectly the first time, and now I'm just getting beat up at every single corner. And trust me, I go on Twitter and literally complain to AWS about these things, just because they're really painful. But I want you to know that even myself, being an AWS expert, I have a hard time getting through these follow-alongs sometimes — so just stick with it if you have any issues. Anyway, I was able to create this task definition, and I'll create a revision here to show you what was going on. So I'm just going to hit Create revision, and it has all the same information filled in. The only problem I had — we probably did have to create that task execution role, and we should have made it anyway — was that we needed to click on here, and I had :latest on the end of the image, which I thought you could do, but I guess it doesn't allow it. So just remove :latest from the end here, and then you'll be able to create this task definition. Also, in this box here — I wasn't reading it carefully — it was actually saying it would have created the execution role for us anyway. So if we had left this blank, removed that :latest, and hit create, we would have had that role. But anyway, we made it through. So just go ahead and hit Create, and you should be exactly where I am with the task definition. Now that we have our task definition, I'm definitely going to close some of these things, because we have a lot of tabs open. I'm just going to keep on going here, all the way over here — we're ready to launch this task. We'll go over to Clusters — oops, I clicked EKS, definitely not doing Kubernetes today — we'll go to Clusters here and click on my-ECS-cluster. We have Services and Tasks: services continuously run tasks, whereas a task stops as soon as its work is done. This is a web application, so we definitely want to make it a service. We'll hit Create here; we want EC2, because that's for ECS — we're not making a Fargate type. We'll leave the task definition selection alone, that's totally fine, and we'll name the service — I'm just going to call it my-service, very unoriginal. We're going to leave it as a replica, we want one task running here, we'll leave it as a rolling update — that seems fine to me — and the AZ balanced spread placement seems fine too; I don't play around with those too much. We'll go ahead and hit next, and it's going to ask us what load balancer we want to use. We don't want one — we want to save money. Obviously it's recommended to use a load balancer and have things in an auto scaling group, but we just want to be able to launch a service; that's all we really need to learn here. We'll scroll down — there are a lot of options we don't need to read — and hit next. Then we have service auto scaling; we do not need it, so we'll hit next, and then we'll create the service. This should actually be very fast: when you launch an EC2 instance it takes forever, right, because you have to wait for it to be created — but there's already one running, so all it has to do is put the task on it. So we just have to wait a little bit; it takes a little bit of time the first time around, but I'm pretty sure that when we launch tasks after this initial setup, it's really, really fast. So we'll just wait here a little bit, and I'll see you back in a moment.
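While that's starting, here's roughly what the task definition we just built would look like expressed as JSON — the values are the ones we picked in the wizard, and the account ID and role ARN are placeholders:

```json
{
  "family": "study-sync",
  "requiresCompatibilities": ["EC2"],
  "executionRoleArn": "arn:aws:iam::111111111111:role/ecsTaskExecutionRole",
  "memory": "500",
  "cpu": "1024",
  "containerDefinitions": [
    {
      "name": "study-sync-container",
      "image": "111111111111.dkr.ecr.us-east-1.amazonaws.com/study-sync",
      "memory": 256,
      "essential": true,
      "portMappings": [
        { "hostPort": 80, "containerPort": 8080, "protocol": "tcp" }
      ],
      "environment": [
        { "name": "PORT", "value": "8080" },
        { "name": "NODE_ENV", "value": "production" }
      ]
    }
  ]
}
```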
And that was actually really fast — I didn't even have to stop the video, but I did anyway. So the service has started; we can go check out my-service — click the big blue button down below — and here we can see our task is running, and it's running study-sync:1, so revision one. Yep, that version is fine; we'll click into the task, because we only actually have one. In here, we can see the container instance, and if we click into that, it gives us information such as the public IP address, and so on. If we click into the task here and drop down — I know we just went backwards — it actually shows us ports 80 and 8080, and we have this convenient link to get to it; there are tons of ways to get this link. And we can see our application is running — version three was the one we had last in Elastic Beanstalk. So that's all it takes to run an ECS task. Now, there are obviously a lot of options in ECS that aren't important for this follow-along, but definitely read through all the lecture content — if you're doing the DevOps cert, you definitely need to know all those options, and there's a way longer ECS follow-along for that. For the developer associate, all we need to know is how to launch that service. Now, this cluster costs money, because it's an EC2 instance that is constantly running, so I'm going to go ahead and delete it — we'll just type in "delete me", and it takes a little bit of time. Once that's deleted, we'll be in good shape. I'm not going to wait around on video to see the delete finish; it's deleting, I know it will happen. But once this is done, we can move on to the Fargate follow-along, which is very similar except there's no EC2 instance running — which is super exciting, because it's a serverless container. So we'll see you then. Hey, this is Andrew Brown from ExamPro, and we are looking at Fargate, which is serverless containers: don't worry about the servers, run the containers, and pay based on duration and consumption. Fargate is sometimes branded ECS Fargate, or just Fargate by itself, but it lives under the ECS console. The difference is that when you want to launch Fargate, you go ahead and create an ECS cluster, but you actually make an empty cluster — there's no provisioned EC2 — and then you launch the tasks as Fargate; you can also launch services there as well. So you no longer have to provision, configure and scale clusters of EC2 instances to run those containers. You're charged for at least one minute, and after that it's per second — you pay based on duration and consumption, which we'll look at a bit more. But to really understand the difference, I just want to give you a visual comparison. This is ECS, which we saw prior, and you can see there are EC2 container instances; for Fargate, it's extremely similar except there are no EC2 instances. So hopefully that makes it clear. Now we're going to look at how to configure a Fargate task. The first thing you do, using the task definition wizard in the Fargate console, is choose how much memory and CPU the task will use in total. Then you add containers and allocate memory and CPU to each, based on their requirements, within the allocation you defined above. So here I have a Ruby and a Python container, and you can see that I've split the memory half and half between them.
When you run the task, you can choose which VPC and subnet it'll run in — a lot of people think there are no VPCs or subnets with Fargate because it's serverless containers, but that's not true, so you do have flexibility around that. And you can apply security groups to a task. This is a great way to have security around your tasks, and it could actually show up as an exam question: would you apply the security group to the EC2 container instance that's running the actual server, or would you apply it to the task? This goes for both ECS and Fargate — it's always at the task level. You can also apply an IAM role to the task, so for every individual task you can delegate different policies. And just to reiterate: you can apply a security group and an IAM role for both ECS and Fargate, for both tasks and services — and again, that might show up on the exam. So I want to do a quick comparison between Fargate and Lambda, because they're both serverless compute and they seem like they solve the same problem — but there are a few key differences, so we'll quickly walk through them. The first is that they both have cold starts, and it can be argued that Fargate cold starts are slightly shorter — I can't really remember why, but I think the documentation says so. A bigger differentiating factor is duration: with a Lambda, the maximum time you can run is 15 minutes, whereas a Fargate task can run as long as you want, because you can just make it a service and it runs indefinitely. In terms of memory, Lambda has an upper limit of 3 gigabytes, whereas Fargate has an upper limit of 30 gigabytes — so if you need a lot of memory, go with Fargate. For containers, you provide your own containers with Fargate, so you have a lot more flexibility in terms of configuration; with Lambda, setting up containers is extremely limited — you use the standardized containers and build on top of them. So if you really need something highly configurable, you're going to need to go with Fargate. For integration, Lambda has seamless integration with a lot of serverless services, and it makes it really easy to chain things together — recently Lambda even got output destinations, and it keeps getting easier to integrate with things. With Fargate, you can orchestrate things together — with Step Functions, for example — but it's just not as seamless as Lambda, and you do have to do a fair amount of configuration to get your cluster and Fargate tasks set up, though definitely less than an ECS cluster without Fargate. And the last thing is pricing: with Lambda you pay per 100 milliseconds, whereas with Fargate you pay for at least one minute and then every additional second — and obviously the amount of memory and CPU you use also factors in. But the takeaway I want you to remember is that Lambda is 100 milliseconds and Fargate is one minute plus every additional second. So hopefully that makes sense, and you have an idea of the use cases for each. Hey, this is Andrew Brown from ExamPro, and we are looking at the Fargate
follow-along. If you have yet to do the ECS and Elastic Beanstalk follow-alongs, you have to do those ones first, because this one depends on a Docker image that we created in the Elastic Beanstalk follow-along — and it's good to do the ECS one first so you see the difference between ECS and Fargate. So go ahead and do that, and once you have that Docker image, you can proceed forward here. For this one, what we're going to do is make our way over to ECS, because that's where Fargate is — sometimes AWS calls it ECS Fargate, sometimes it's just called Fargate, but either way, Fargate is under ECS. We'll go over to Clusters and create a new cluster, and we're going to use "Networking only", because AWS is going to manage the EC2 instances for us — so we don't have the same kind of options as we would with the other cluster types. If you were to go here and choose an empty cluster, that's basically a Fargate cluster. I'm going to go back and choose Networking only, which is the default, and we're going to name this cluster — I'll call mine my-fargate-cluster. We're not going to create a VPC; we'll use the default one. We hit Create, and it creates right away — because it's serverless, it's super fast; there's no server running as of yet. Now we need to create a task definition. We created one before, study-sync, for ECS, and this time we're going to make one for Fargate, so we're going to name it study-sync with an F on the end — that's just my convention for distinguishing the two. We're going to choose the ecsTaskExecutionRole — if we didn't have it here, I feel like it would make it for us. We have the network mode, which is awsvpc. I was on with AWS support and wasn't aware of this until I talked to the support engineer, but I guess it's the default — and the way it works is that whatever container port you set is the host mapping you're going to get; I'll explain that in a moment when we get to adding our container. We're going to choose the task memory size and our vCPU — I notice I can't go up here, so there are some restrictions on what combinations you can pick. A quarter of a vCPU is well enough for this example; Node.js doesn't require a lot of memory or CPU power, at least for our use case. We'll go ahead and add a container. We need the ECR image URI here, so I'll hit copy — and look here, it says we can put a :tag on the end. In the ECS follow-along I had :latest and wasn't really sure we could do that, and for whatever reason it wasn't working for me — maybe I had a spelling mistake, maybe something was off. It definitely seems like I could go in here and use it with :latest, but I just don't want any problems, so I'm going to leave it out. We're going to name this container — I'll call it my-fargate-container, okay, not really original — and we'll just select 128 megabytes; it's less than we used in the ECS one, but it's totally fine. And notice we don't have a host port mapping — this is what I was talking about with awsvpc: our host port is going to be 8080, and we don't have a choice in the matter, that's all we get. We'll scroll down and make our way over to the environment variables. I'm going to set one for PORT, which will be 8080, and then we'll set NODE_ENV
to production. Now, I guess if we wanted to run this on port 80, I could change this to 80 — but the thing is, if you've ever set up a server, any time you run port 80 on Amazon Linux 2 or whatever, it throws errors; you've got to make sure it's sudo, it's a big pain, so we're just not going to do it. For a production environment you'd want port 80, or we'd have to find another way around this, like a proxy container, which we'll talk about later in this follow-along. We don't need to fill out anything else in here — there are a lot of options not even worth discussing — so we'll hit Add. You can see it's only utilizing this amount of the task's allocation. I'll scroll down all the way to the bottom, hit Create, and it creates super fast. We'll view that task definition — we don't really need to do anything here — and we'll make our way over to Clusters. We have my-fargate-cluster, and it should say one service running... it's probably just getting going, so we'll click into it. We should see a service — oh no, we won't, because we just created a task definition; we haven't actually launched the service yet. So we'll hit Create, we'll choose the -f task definition, and we'll choose Fargate as the launch type — we don't have anything else. We have my-fargate-cluster, and the name is my-fargate-service. I ran into an error where I named it my-service — which is what we named the ECS one — and it errored out on me, which was weird because that one doesn't exist anymore; so you may have to fiddle with the name here. I'm just changing it to avoid that issue. We only want one task running, we'll leave it as rolling deploys, and we'll go next. Then we have to choose the cluster VPC and subnet — when you think of serverless containers, you wouldn't think you'd have to choose anything — so we'll go ahead and choose the VPC and the first subnet. We'll scroll all the way down; we're not going to use a load balancer, we just want to save money and avoid complications, but for a production environment you'd definitely want to set one up. All the way down at the bottom, we'll hit next. We're not going to use service auto scaling either, but for production I absolutely would — there's no additional cost there, so it's easy. Then we'll scroll all the way down, get a nice review, and hit Create. And this is super fast, way faster than ECS — well, actually, I shouldn't say that, because ECS doesn't have to start anything new once the instance is there. This looks fast, but when you go here, you'll notice it's provisioning, so it's not actually running yet. ECS is way, way faster at getting your tasks running, so that's one trade-off you have to consider. Now, while this is getting started, we can see we have this public IP. It's in pending or provisioning, so if we go to it, we're not going to get anything — and it's also not going to work because it's running on port 8080, so we're going to need to expose that port. There's a security group attached to this... cluster, or this task, or service, or something — security groups are around tasks, so we should click into the task. Yeah, there it is. We'll click into this, go into inbound rules, edit, and add a new rule: 8080 for everybody on the internet. Let's save, and then we'll go back and check.
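For reference, opening that port from the CLI would look roughly like this — the security group ID is a placeholder for whichever group the task's network interface is using:

```bash
# Allow inbound TCP 8080 from anywhere to the task's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8080 \
  --cidr 0.0.0.0/0
```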
Check on our service here to see if it's warmed up. It's active, so we'll click into it, go to Tasks, click into the task, copy the public IP address, paste it in here, and add :8080 on the end. Does it work? Nothing as of yet. Oh, it still says pending here. Okay, we don't have any errors — let me go back here — pending. I'm just making sure it's not stopping and starting; if there were a configuration issue, it would stop and start, but it would also tell us the error somewhere in here, maybe in here or in the logs. We're not seeing issues yet, so I think we just have to wait a little while; it's still warming up. So what I'm going to do is just stop the video here, and when it's running I'll come back — or if there's an error, we'll talk about it. Okay, we're back; it's moved to the activating state and now says it's running. I was looking up what can happen when a task is stuck in pending: the Docker daemon could be unresponsive, the Docker image could be large, the ECS container agent could have lost connectivity, or the container agent could be taking a long time to stop an existing task. But we actually weren't having any problems; it was just me thinking there was more of a problem than there actually was — because if we go here, copy this IP address, and add :8080, it works. So what if we wanted this to just be on port 80, with nothing on the end of the URL? We'd either need to set our environment variable so the app runs on port 80 — but then we might have to deal with sudo, and it's a big pain to get port 80 running for web apps, which is why I didn't want to go through that hassle — or, another thing you could do is run multiple containers, where one container is nginx, and nginx is used as a proxy that maps port 8080 to port 80. That's something you could use an nginx container for, and it's a great strategy if you have a bunch of containers and want more flexibility in the mapping. But I just want to show you that the port mapping is hard-coded here: if we go back to that task definition — study-sync-f, revision one — and go to the JSON, you can see it says 8080 for both the host and container ports. We didn't set that; it's hard-coded in there. But we're done with this service, so we'll go ahead and delete it and type "delete me". We don't have to do the extra step, because we never set up any service discovery with it — that would be for something like App Mesh, which I'm not really familiar with; maybe that shows up in the Advanced Networking certification — and we'll hit delete. And that is Fargate. Now we can go ahead and delete this cluster too. It's not going to save anything, because there's no EC2 instance running that costs money, but let's delete it because it's a good habit. And there you go — that should be deleted now. It's still showing up... deleting... okay, sometimes you've got to do it twice. Anyway, that's the Fargate follow-along, and we're all done. Okay, so we're on to the ECS and Fargate cheat sheet. I group these together because Fargate is under the ECS console, and there's not a lot you need to know about these two, so let's get to it. Elastic Container Service is a fully managed container orchestration service — a highly secure, reliable and scalable way to run containers.
The components of ECS are as follows. A cluster is multiple EC2 instances that will host the Docker containers. A task definition file is a JSON file that defines the configuration of up to 10 containers that you want to run. A task launches the containers defined in the task definition; tasks do not remain running once the workload is complete. A service ensures tasks remain running — think of a web app — and that last one you 100% need to know. The container agent is a binary on each EC2 instance which monitors, starts and stops tasks. And we'll talk about ECR for a second here, since I guess it's part of this cheat sheet as well: it's a fully managed Docker container registry that makes it easy for developers to store, manage and deploy Docker container images — think of it like a GitHub for Docker images. Then we'll move on to Fargate, which is the last thing here. Fargate is, sort of, serverless containers: you don't worry about servers, you run containers and pay based on duration and consumption. When you create a Fargate cluster, it's just like making an ECS cluster except it's empty — there are no servers, so it's really easy to set up. Fargate has cold starts, so if that's an issue, you'll want to consider using ECS, which does not have cold starts. The duration is as long as you want, so you can run this forever — that's important to note versus Lambdas, which have a hard limit on how long they can run. Memory can go up to 30 gigabytes, which is pretty great. And for the price, you pay for at least one minute and then every additional second from then on. I didn't mention vCPUs — I don't think it really matters, but there is a limit to how many vCPUs you can set on a task; it's definitely a lot more compute than Lambda will give you. But yeah, there you go, that is ECS, Fargate, and I guess ECR as well. Hey, this is Andrew Brown from ExamPro, and we are looking at AWS X-Ray, which helps developers analyze and debug applications utilizing microservice architecture. Another way to think of X-Ray is as a distributed tracing system or a performance monitoring system. To understand X-Ray, we have to understand what microservices are. Microservices are an architectural and organizational approach to software development where software is composed of small, independent services that communicate over well-defined APIs. These services are owned by small, self-contained teams, and microservice architecture makes apps easier to scale and faster to develop, enabling innovation and accelerating time to market for new features. To really put this in perspective: if you're using AWS and you're using a host of different services, you might already be using microservice architecture. The idea is that you have all these isolated services: you have your storage; instead of using large EC2 instances to handle all the functionality of your application, you break it up into containers or serverless functions; and then you have your databases, notifications, queuing and streaming. The combination of all these services being utilized together is microservice architecture. But the question is, how do we keep track of, or debug, the communication between all these services? Because if you're using a lot of them, it can get a bit confusing. And that's what X-Ray solves. So what exactly is X-Ray? Well, it is a distributed tracing system — so let's talk about what that means.
Distributed tracing, also known as distributed request tracing, is a method used to profile and monitor apps, especially those built using a microservice architecture. Distributed tracing helps pinpoint where failures occur and what causes poor performance. Notice those two things at the end — where failures occur, and what causes poor performance — because that's the major thing X-Ray is doing. Now, performance monitoring, which is its own thing, is also kind of in the scope of X-Ray. Traditionally we had application performance monitors — I mean, we still have them, but they were built for traditional architecture where you had a single EC2 instance and all of your business logic, all the tasks your application handled, lived in one specific application. Let's talk about what performance monitoring is, because X-Ray basically falls under that category too: APM — the A is for application — is the monitoring and management of performance and availability of software apps, and it strives to detect and diagnose complex application performance problems to maintain an expected level of service. So you could say that X-Ray is both of these, but I'd lean towards calling it a distributed tracing system. To help you understand X-Ray, let's talk about some third-party services that are kind of similar, whether it's cloud monitoring or application performance monitoring — and I do feel these services are starting to tack distributed tracing onto their offerings, so some might not exactly match X-Ray today, but I'm sure they'll catch up in time. The number one service I can think of is Datadog, which does a lot of stuff — APM, log monitoring, and other things like that. Then you have New Relic, which was traditionally just application monitoring for large applications but can now do considerably more. SignalFx is supposed to be like Datadog but more real-time. And then you have Lumigo — I don't know if I'm saying it right — which is focused on serverless monitoring, so it looks a lot like X-Ray but with a huge emphasis on serverless services. Hopefully, if you go look those up and take a peek around, it'll give you a better perspective on how X-Ray fits into the market of monitoring services. So, X-Ray is a distributed tracing system, and it collects data about the requests that your application serves. You can view and filter the collected data to identify issues and avenues for optimization. For any traced request to your application, you can see detailed information not only about the request and response, but also about the calls your app makes to downstream AWS resources, microservices, databases and HTTP web APIs — and I'm certain there are more things in there. Just to visualize that: here we can see a trace with DynamoDB, and it shows you all the steps it's taking against the table, so you can get some very detailed information out of this. But let's move on and start breaking down the components of X-Ray. Now we're going to look at the anatomy of X-Ray — basically the overall landscape of how it works. The first thing is the X-Ray console, and this is where we're going to be able to visualize, monitor and drill down into our tracing data. That data comes from the X-Ray API.
Now, you'd think you would just send your requests directly to this API, but that's not the case — you send them to the X-Ray daemon, which is used as a buffer, because you're going to be sending a lot of requests. The data comes from the X-Ray SDK — here we have Ruby, Python and Node, and there are a lot more than just those; we'll talk about that later. Then we have the AWS SDK and CLI, and those could also be sending segment data to the X-Ray daemon, or interacting directly with the X-Ray API. Let's talk about the X-Ray SDK, because that's where all our segment data comes from, and that's what we're visualizing in the X-Ray console. The X-Ray SDK provides a way to intercept incoming HTTP requests — so that means capturing request information and that kind of thing. Then it has something called client handlers, which are really just the SDK clients specific to each language — so when it says SDK client, we're just talking about the Ruby client, the Python client, and so on — and with those we're able to set up instrumenting; I'll talk about that on the next slide, because "instrument" is kind of a vague term and it'll make more sense soon enough. It also has an HTTP client, so we can use it to instrument calls to internal or external HTTP web services, and also to deliver our information to the X-Ray daemon. And I want to point out that the SDK supports instrumenting calls to SQL databases and other features — the one worth highlighting is SQL databases, because it really lets you drill down into what's happening with your database calls. Anyway, let's move on to what instrumenting is. I mentioned the term instrumenting, or instrument, in the last slide, and I figured it was worth exploring more, because even I wasn't sure what the term meant. Instrumenting is the ability to monitor or measure the level of a product's performance, to diagnose errors, and to write trace information. To me, that just sounds like logging information — and that's exactly what it is. To get an idea of what it looks like to instrument something with X-Ray, we have this piece of code here, which is for a Node.js Express.js application. What we're doing is including the X-Ray SDK, and then we open up a segment — segments and subsegments are the pieces of information that we want to capture and send to X-Ray, and that is instrumenting. So we have a part where we open and a part where we close, and everything in between gets captured, like the duration and additional information, and that's what gets passed along. Generally, in your code, you don't necessarily have to set the segments — the segments get captured for you — so in your code you're usually setting up subsegments. But yeah, just to give you a visualization: when you hear "instrument", think of it as logging information and sending it to X-Ray. Let's take a closer look at the X-Ray daemon. As I said earlier when we looked at the anatomy, we do not send our segment data directly to the X-Ray API — we send it to the daemon, the daemon buffers it, and then it sends it off to the X-Ray API. Let's look at that in more detail. Instead of sending your traces directly to X-Ray, the SDK sends JSON segment documents — that's what segments are made of, JSON — to the daemon process, which is listening for UDP traffic.
So here are our SDKs and other clients, and here they are sending that JSON document to the X-Ray daemon. The X-Ray daemon buffers segments into a queue and uploads them to X-Ray in batches. The idea is that it's creating a buffer, which makes sense, because if you're sending logging or instrumentation data, you're going to have a lot of requests, and you don't want the API to be flooded — so this acts as a buffer, and then it sends the batched information to the X-Ray API so it's not so burdensome. You can install the daemon on Windows, Linux and Mac, and it's already included on the Elastic Beanstalk and Lambda platforms. So I think when you're setting up a serverless application and you turn on X-Ray, there's already a daemon running there — I don't know where it is, but I know it's working. You can also just set one up locally for development. X-Ray uses trace data from the AWS resources that power your cloud applications to generate a detailed service graph — so let's just say all this segment data turns into that graph. Hopefully that makes the utility of the X-Ray data clear. Now we're going to cover all the X-Ray concepts. The first thing we really need to understand is segments, because that's the core of X-Ray — that's the data X-Ray receives. X-Ray can group segments that share a common request, which are called traces, and X-Ray processes those traces to generate a service graph that provides a visual representation of your app. That's the general idea, but there are a lot of components to it, so let's go through the list quickly: we have segments, subsegments, the service graph, traces, sampling, the tracing header, filter expressions, groups, annotations and metadata, and then errors, faults and exceptions. We're going to walk through every single one of these components, and then we'll really understand how X-Ray works. Let's first look at the service graph — this is something that's visible when you click into a trace. The service graph shows the client, your frontend services, and your backend services, and it might not be very clear looking at it which is which, so I'm going to divide it up so we can make sense of it. The first part is the client — just the little person icon, nothing super exciting there. Then we have your frontend services, and we can divide those out here: what we see running here is compute and application integration, so that could be SNS, Lambdas, ECS, EC2, SQS — anything like that. And the last part is backend services — generally your databases; here, all these calls relate to DynamoDB. So hopefully that makes it all clear. The whole purpose of having the service graph is to help you identify bottlenecks, latency spikes and other issues — the idea is that you can click into any of these nodes, drill down, and really figure out how to optimize your app.
So let's take a look at segments, which is the most important thing to know about X-Ray. If you have a bunch of compute resources running your application logic, you're going to want to send data about that work to X-Ray, and that data is sent as segments. Here is an actual segment we can drill down into, and there's a lot of different information here. We have things like the host information, so that would be the hostname, alias or IP address. I invoked this Lambda function from the console (you can see the origin is Lambda), and I didn't call it through API Gateway, so it doesn't have any of that data here. Then you'd have the request information, so the method, client address, path and user agent; again, I didn't use API Gateway, so we're not seeing that information, but if I had, we'd see a lot more. Then you have the response, so the status and content; here we have a 200, which means everything's good. Then the work done, so the start and end times, the duration and the subsegments. And the last thing would be the issues that occurred, so errors, faults and exceptions, and there's a dedicated tab for that. So that's what it looks like to drill down into a segment. Now let's take a look at subsegments, which allow us to get more granular timing information or details about the downstream calls that your app makes. These are basically operations within your app that you want to capture. A subsegment can contain calls to AWS services, to external HTTP APIs, or SQL calls to a database, and the example we're going to look at is DynamoDB. Here we have an Elastic Beanstalk environment with a PUT request going to it, and underneath, that application is making calls to DynamoDB, so we can see GetItem, UpdateItem, etc. Those are the subsegments. Now, you can actually define your own arbitrary subsegments to instrument specific functions or lines of code, and that's what I'm going to show you next. A lot of the time, if you're using specific AWS services, the SDK might already log these subsegments for you, but if you need something more specific, you can write your own. So here we have a trace, and within this trace we have the segment, and then there's a subsegment; can you tell where the custom subsegment is? It's right there, that's the one I defined, and I want to show you how you do that in code. Here is the actual code example. The first thing you do is include the SDK, so there's the X-Ray SDK core (I'm writing this in Node.js). It's very similar to when we defined a segment earlier on, but in this case we first have to get the current segment, and then we add a new subsegment; I called this one "mining". Then you have some code that runs, and then you close the subsegment, so everything in between the start and the close will be captured and attached to that segment. I'm just doing a console.log, so it's 0.0 milliseconds, nothing super exciting. You generally don't create segments in code yourself.
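If you'd rather see that in Python, here's a minimal sketch using the aws_xray_sdk package; the subsegment name and the do_some_work function are just placeholders, and it assumes there's already an active segment (for example, inside a Lambda function or behind the SDK's web framework middleware):

from aws_xray_sdk.core import xray_recorder

# Open a custom subsegment; everything until end_subsegment() is timed
subsegment = xray_recorder.begin_subsegment('mining')
try:
    do_some_work()  # placeholder for the code whose duration you want captured
    subsegment.put_annotation('status', 'ok')  # optional extra detail on the subsegment
finally:
    # Close the subsegment so the timing and any annotations get recorded
    xray_recorder.end_subsegment()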
In the earlier Express example we did see a segment being opened in code, but generally the only thing you're defining in code for instrumentation is subsegments; that was just an example to show you how to set them up. Now let's take a look at traces. Here I've drilled into a trace, and the way you can think of a trace is that it's a way of grouping all your segments together, where they all started from a single request. That first request gets triggered, and then the idea is that we need to keep track of its path through all these segments and services, and that's why we have a trace ID. The trace ID is generated by the first service that the request interacts with, and then it gets passed along through everything else, and that's what helps build out the graph down below that shows us all that information. We're going to talk about trace IDs a bit more, but first let's talk about sampling, because not every trace that happens here is actually recorded. That confused me a lot when I first started using X-Ray: I was triggering things and expecting to see every single request recorded, and that's not the case. So, as I was saying, not every single request is actually captured, and the reason is that the X-Ray SDK uses a sampling algorithm to determine which requests get traced. By default, the SDK records the first request each second, and 5% of any additional requests. Why would we have sampling at all? The reason is to avoid incurring service charges. When you're first getting started, you want the default sampling rate to be low, because you may just be getting a feel for X-Ray and you don't realize how many traces or requests you're sending, and you might not even be able to make use of all of them. You can see the options that are set up for this particular sampling algorithm, and you can modify the sampling rule or add additional rules. So over here you can hit Create sampling rule and set a bunch of options: you can match on the service name, the service type, and a bunch of other things, and down below you can see it says limit to one request per second and a 5% fixed rate. The whole purpose of sampling is just to help you save money, because it reduces the number of traces for high-volume and unimportant requests. For example, there could be a component within your application that does polling, so it's constantly making requests, but you don't need to capture that information because it doesn't give you anything valuable. Now let's take a look at the trace header itself. All requests are traced up to a configurable minimum, and after reaching that minimum, only a percentage of requests are traced, to avoid unnecessary costs. The sampling decision and the trace ID are added to the HTTP request in a tracing header named X-Amzn-Trace-Id. We were talking about how the trace ID gets set; the header gets added by the first X-Ray-integrated service that the request hits. It has a Root value and a Sampled value.
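Just to make that concrete, here's roughly what the header looks like; the IDs below are made-up values in the documented format:

X-Amzn-Trace-Id: Root=1-5759e988-bd862e3fe1be46a994272793;Sampled=1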
And then the next thing is, if the trace header is originating from an instrumented application, it can also have a Parent field in there as well. You don't really need to memorize this for the exam, I'm just showing it to you; just understand that the trace header determines whether the request will show up in a trace or not, based on the Sampled value assigned to it. Now, even with sampling, a complex application still generates a lot of data, so sampling alone isn't going to cut things down enough for you to easily make sense of it. That's where filter expressions come in, and they help you narrow down to specific paths or users. Here's an example where we're looking at all traces, so there's a list of traces, and in that filter expression box you can type something in, and that's going to filter which traces you're looking at. If you're wondering what the actual syntax for filter expressions is, you just click that question mark there, and there's a lot of information under it; there are a lot of attributes you can use. Here you can see we're filtering based on the request URL, but there's a lot of other stuff. You can also group these results together; there are a bunch of predefined group-by options to make sense of things more easily, and you can see in the graphic that by default it groups by URL. So let's talk about groups. We just saw group-by, but you can also assign a filter expression to a group that you create. The way it works is you name that group, and it can be just a name, or you can use an ARN. The advantage of making your own groups is that X-Ray will generate a service graph, summaries, and CloudWatch metrics for them, and that's going to save you a lot of time when investigating very common scenarios. The idea is that right beside where the filter expression is, you can drop down and hit Create group, and from there you name the group and provide your filter expression. Note that you can only create up to 25 groups by default, though I believe you can request a service limit increase to go beyond that; if you're going beyond 25, you must have a very complex application. But I do want to warn you about these groups, and this is just the nature of it: when you create a group and set its filter expression, and then down the road, say a week later, you adjust that filter expression, the change is not applied retroactively to all the previous data; it only applies the new filter expression to future data. So you end up in a situation where the data doesn't exactly represent what you expect, because part of it was collected under the old expression and part under the new one. If it really matters to you that the group only represents data matching the current filter expression, you'll have to delete the group and make a new one. Next, in X-Ray you can add annotations and metadata to your segment documents, those JSON files that get sent to X-Ray. This is extra information you add yourself, it is aggregated at the trace level, and it can be added to both segments and subsegments; it's not just limited to top-level segments.
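Before we go through the details of annotations versus metadata, here's a minimal sketch of what adding them looks like, again assuming the Python aws_xray_sdk; the keys and values are just placeholders:

from aws_xray_sdk.core import xray_recorder

# Grab the entity we are currently recording into (could also be current_segment())
document = xray_recorder.current_subsegment()

# Annotation: a simple key-value pair that gets indexed for filter expressions
document.put_annotation('user_tier', 'premium')

# Metadata: not indexed, but can hold richer values like dicts and lists
document.put_metadata('cart', {'items': 3, 'total': 42})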
So let's talk about annotations. An annotation is just a key-value pair that is indexed for use with filter expressions, and you can have up to 50 annotations per trace. For metadata, you have key-value pairs that are not indexed, and the values can be of any type, including objects and lists. They sound really similar, they're both key-value pairs, and the only difference is that annotations are indexed and metadata is not. To better understand the use case: you want to use annotations to record data that you want to use to group traces in the console, or when you're calling the GetTraceSummaries API, so it's really for searching and grouping traces. You want to use metadata to record data that you want stored in the trace but don't need for searching traces. So annotations are there to help you find data, and metadata is just additional data that's there when you need it. You can view the annotations and metadata in the segment or subsegment details in the X-Ray console: just click into a segment and you'll see the annotations and metadata there. Now let's point out errors, faults and exceptions. When an exception occurs while your app is serving instrumented requests, the X-Ray SDK records the exception details and the stack trace if it's available. These are the types of errors you could encounter: Error, Fault and Throttle. Errors are client errors, so those are 400s; Fault is server errors, so 500s; Throttle is throttling errors, which is just too many requests. If you're looking for this data, it shows up under the Exceptions tab in the segment details, and it also shows up in the actual trace graph: when you view a trace and see all the segments and their durations, sometimes one will have a little exclamation mark or be colored differently, so you know there's an error at some point along the path of the trace. Now let's take a look at which services integrate with X-Ray. The most important ones are highlighted in yellow because those are the most common use cases: we have Lambda, API Gateway, App Mesh, CloudTrail, CloudWatch, AWS Config and Elastic Beanstalk; I'm not exactly sure how that last one works, but Elastic Beanstalk seems to integrate with everything nowadays. Then you have ELB, SNS, SQS, EC2, Elastic Container Service and Fargate, and I wouldn't be surprised if it also supports Elastic Kubernetes Service. That's the run of it, but the highlighted ones are the ones you most likely need to know, and probably ECS as well, even though I didn't highlight it. For the exam, we also need to know which languages X-Ray supports, and it supports Go, Java, Node.js, Python, Ruby, .NET (ASP.NET) and PHP. So it covers all the usual suspects; about the only thing it doesn't support is PowerShell, and there's no real reason to support that. This is generally what AWS supports for most services in terms of languages, so it has the whole gamut. Hey, this is Andrew Brown from ExamPro, and we've made it to the X-Ray cheat sheet. This one is a little bit long, but X-Ray is very important to know, so we're going to be very thorough in our review. Let's get to it. X-Ray helps developers analyze and debug applications that use a microservices architecture; remember that it's really well suited to microservice architectures using Lambdas or containers. X-Ray is a distributed tracing system.
It is a method used to profile and monitor apps, especially those built using a microservices architecture, to pinpoint where failures occur and what causes poor performance. The X-Ray daemon is a software application that listens for traffic on UDP port 2000, gathers raw segment data and relays it to the X-Ray API. Data is generally not sent directly to the X-Ray API; it passes through the X-Ray daemon, which uploads it in bulk, and that's just to create a buffer between the X-Ray API and the actual data being sent. Will the daemon show up on the actual exam? Probably not, but we should really know all the components of X-Ray. Segments provide the resource's name, details about the request, and details about the work done. Then you have subsegments, which provide more granular timing information and details about the downstream calls your app made to fulfill the original request. Then you have the service graph, that flow-chart visualization showing the average response of your microservices, which lets you individually pinpoint failures. Then you have traces: a trace collects all the segments generated by a single request so you can track the path of that request through multiple services. Then you have sampling, which is the algorithm that decides which requests should be traced; by default the X-Ray SDK records the first request each second and 5% of additional requests. So if you see a question on the exam asking why you don't see certain information, think about sampling. The tracing header is named X-Amzn-Trace-Id, and it identifies a trace and is passed along to downstream services. What's important to remember here is X-Amzn-Trace-Id; you might see a question showing multiple similar-looking names and asking you to pick the right one, so remembering that is key. Filter expressions allow you to narrow down to specific paths or users. Groups allow you to save filter expressions so you can quickly filter traces. And on to the second page, we're almost done here. Annotations and metadata allow you to capture additional information as key-value pairs. Annotations are indexed for use with filter expressions, with a limit of 50. Metadata is not indexed; use metadata to record data you want to store in the trace but don't need to use for searching traces. So if you need to search for traces using that additional information, you're going to be using annotations. Errors are 400s, faults are 500s, and throttle is 429, too many requests; you probably just want to know that last one. X-Ray supports the following languages; I don't feel like they'd ask you a question about it the way they used to, and I think the exam is getting a little harder, so they're not just asking you to choose the language that doesn't apply anymore. X-Ray supports basically all of the popular languages, so as long as they're not putting something like Perl on the list, it should be pretty easy to figure out. X-Ray supports a ton of service integrations, including Lambda, API Gateway, App Mesh, CloudTrail, CloudWatch, AWS Config, Elastic Beanstalk, ELB, SNS, SQS, EC2, ECS and Fargate. So there you go, that is X-Ray, and we are done here. Hey, this is Andrew Brown from ExamPro, and we are looking at AWS Certificate Manager, also known as ACM, which is used to provision, manage and deploy public and private SSL certificates for use with your AWS services. So let's look at ACM in a little more detail.
ACM handles the complexity of creating and managing public SSL certificates for your AWS-based websites and applications. It handles two types of certificates. There are public certificates, which are provided by AWS and are free. And then there are private certificates, which are issued through ACM Private Certificate Authority, and that private CA costs $400 per month. You generally just want to use a public certificate; if you've ever used Let's Encrypt, those are all public certificates, so if you're comfortable with that, you'll be comfortable with these. Just make sure that when you're creating the certificate you don't make the wrong choice. I've chosen incorrectly myself, but luckily I reached out to support and they fixed the issue before I was charged; that is a tricky one to get your money back on if you make that mistake. ACM can handle multiple subdomains and wildcard domains, so you can see here I'm entering exampro.co as the naked domain, and then I have a wildcard entry as well. That is the setup I always recommend, because otherwise you'd have to create a bunch of additional certificates later, and that's kind of a pain. ACM is attached to very specific AWS services or resources: you can attach a certificate to an Elastic Load Balancer, to CloudFront, or to API Gateway, and you can apparently use it with Elastic Beanstalk as well, but I'm imagining that is through the ELB. Those are the three services you need to know that ACM attaches to, so remember them. Now we're going to look at a couple of ACM examples, and the point of this is to understand SSL termination. In the first one, we're using ACM and attaching the certificate to our load balancer, which is an Application Load Balancer. The red line represents traffic that is encrypted, and once it hits the ALB, the certificate is used to decrypt that traffic, so everything between the ALB and the EC2 instances is unencrypted. That's generally fine, because it's within your own network, so it's still secure; for someone to take advantage of that, they'd have to break into your AWS account and be able to intercept that traffic, so it's a very low risk. And since ACM can really only be attached to load balancers, CloudFront, or API Gateway, it's not easy to protect that internal leg of traffic with ACM anyway. The advantage of attaching your certificate at the load balancer level is that you can add as many EC2 instances as you want without having to configure each one of them to handle a certificate, which makes certificates a lot easier to manage. The other case is terminating SSL end to end, which is where the traffic is encrypted from start to finish, so even within your network it's encrypted. As for how you do that: I don't know how to do it with ACM, and I don't even think you can, because I've only ever noted ACM being able to attach to the resources over here. But you could use something like Let's Encrypt, set that up on every single server, and rotate the certificates out yourself; that's a bit of a hassle to maintain, but a lot of people are used to doing it. Whether you need end-to-end encryption is going to depend on your compliance requirements; if you're a large corporation, maybe you have a rule that says you have to encrypt end to end, but for 99% of other use cases, terminating SSL at the load balancer is more ideal. So there you go.
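As a quick aside, if you wanted to request that naked-domain-plus-wildcard public certificate programmatically instead of through the console, it looks roughly like this with the Python SDK (boto3); exampro.co is just the example domain from the slide, and pinning the region to us-east-1 only matters if the certificate will be used with CloudFront:

import boto3

acm = boto3.client('acm', region_name='us-east-1')

response = acm.request_certificate(
    DomainName='exampro.co',
    SubjectAlternativeNames=['*.exampro.co'],
    ValidationMethod='DNS',  # DNS validation is the easiest to automate
)
print(response['CertificateArn'])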
Hey, this is Andrew Brown from ExamPro, and we are looking at Route 53, which is a highly available and scalable Domain Name Service. Whenever you think about Route 53, the easiest way to remember what it does is to think of GoDaddy or Namecheap, which are both DNS providers; the difference is that Route 53 has more synergies with AWS services, so you get a lot more rich functionality on AWS than you would with one of those other DNS providers. So what can you do with Route 53? You can register and manage domains, you can create various record sets on a domain, you can implement complex traffic flows such as blue/green deploys or failover, you can continuously monitor records via health checks, and you can resolve DNS between your VPCs and networks outside of AWS. Here I have a use case, and this is actually how we use it at ExamPro: we have our domain name, which you can purchase through Route 53 or have Route 53 manage the name servers for, which then lets you set your record sets within Route 53. Here we have a bunch of different record sets for subdomains, and we want those subdomains to point to different resources on AWS. Our app runs behind an Elastic Load Balancer. If we need to work on an AMI image, we could launch a single EC2 instance and point a subdomain there. For our API, if it were powered by API Gateway, we'd use a subdomain for that. For our static website hosting, we would probably want to point to CloudFront, so www. points to a CloudFront distribution. And for fun and learning, we might run a Minecraft server on a very specific IP, probably an Elastic IP because we wouldn't want it to change, and that could be minecraft.exampro.co. So that's a basic example, but we're going to jump into all the different complex rules we can create in Route 53. In the previous use case we saw a bunch of subdomains pointing to AWS resources; how do we create that link so Route 53 points to those resources? That's by creating record sets. Here I just have the form for record sets, so you can see the kinds of records you can create, but it's very simple: you fill in your subdomain (or leave the naked domain), then you choose the type. In the case of an A record, that allows you to point the subdomain to a specific IP address; you just fill it in, and that's all there is to it. Now, I do need to make note of this alias option, which is a special option created by AWS. In the next slide we've set alias to true, and what it allows us to do is directly select specific AWS resources, so we could select CloudFront, Elastic Beanstalk, ELB, S3, a VPC endpoint, or API Gateway. Why would you want to do this over a traditional record type? The idea is that the alias has the ability to detect changes of IP addresses, so it continuously keeps pointing that endpoint at the correct resource. So if and whenever you can use an alias, always use an alias, because it just makes it easier to manage the connections between resources via Route 53 record sets; the limitations are listed here as well. The major advantage of Route 53 is its seven types of routing policies, and we're going to go through every single one so we understand the use case for each.
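Before we do, here's a rough sketch of what creating a plain A record for one of those subdomains looks like via the Python SDK (boto3); the hosted zone ID and the IP address are placeholders:

import boto3

route53 = boto3.client('route53')

route53.change_resource_record_sets(
    HostedZoneId='Z0000000000000',  # placeholder hosted zone ID for exampro.co
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'minecraft.exampro.co',
                'Type': 'A',
                'TTL': 300,
                # the Elastic IP from the use case (documentation address here)
                'ResourceRecords': [{'Value': '203.0.113.10'}],
            },
        }]
    },
)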
A really good way to visualize how to work with these different routing policies is through Traffic Flow. Traffic Flow is a visual editor that lets you create sophisticated routing configurations within Route 53. Another advantage of Traffic Flow is that we can version these policy records, so if you created a complex routing policy and wanted to change it tomorrow, you could save it as version one, version two, and roll one out or roll back to another. Just to note, Traffic Flow does cost $2 per policy record, and this whole thing is one policy record, but they don't charge you until you create it. So if you want to play around with it, just create a new traffic flow, name it, and you'll get to this visual editor; it's not until you save it that you're charged. You can play around with this to get an idea of all the different routing rules and how you can come up with creative solutions. Now that we've covered Traffic Flow, and we know there are seven routing policies, let's go deep and look at what we can do. Our first routing policy is the simple routing policy, which is also the default. When you create a record set, and here I have one called "random" with the A type, down below you'll see the routing policy box, which is always set to simple by default. So what can we do with simple? The idea is that you have one record, and you can provide either a single IP address or multiple IP addresses. If it's a single address, "random" is going to resolve to that IP address every single time. But if you have multiple, it's going to pick one at random, so it's a good way to do a kind of rough A/B test. It's as simple as that. Now we're looking at the weighted routing policy. What a weighted routing policy lets you do is split up traffic based on the different weights assigned. Down below we have app.example.co, and we would create two record sets in Route 53 with the exact same name, both set to weighted, but with two different weights. For this one we'd name it "stable" and give it 85%, and then we'd make a new record set with the exact same subdomain, set it to 15%, and call it "experiment". The idea is that whenever traffic hits app.example.co, Route 53 looks at the two weight values: 85% of the traffic goes to the stable one, and 15% goes to the experimental one. A good use case for that is sending a small amount of traffic to a new version to minimize the impact when you're testing out experimental features; that's a very good use case for weighted routing. Now we're going to take a look at latency-based routing. Latency-based routing allows you to direct traffic based on the lowest possible network latency for your end user, based on region. So the idea is, let's say people want to hit app.exampro.co and they're coming from Toronto. We've created two latency records with this subdomain: one is set to us-west, so that's on the west coast, and then we have one in ca-central, which I believe is located in Montreal.
And so the idea is that Route 53 is going to look at these and ask which one produces the least amount of latency. It doesn't necessarily have to be the closest one geographically; whichever returns the lowest number of milliseconds is the one it's going to route traffic to, and in this case that's 12 milliseconds. Logically, things that are closer by should be faster, so it's going to route to this ALB as opposed to that one. That's how latency-based routing works. Now we're looking at another routing policy, this one for failover. Failover allows you to create an active-passive setup in situations where you want a primary site in one location and a secondary disaster recovery site in another. Another thing to note is that Route 53 automatically monitors your primary site via health checks to determine whether that endpoint is healthy; if it determines it's in a failed state, all the traffic is automatically redirected to the secondary location. Down below we have an example: app.exampro.co, with a primary location and a secondary one. The idea is that Route 53 is going to check, and if it determines the primary is unhealthy based on a health check, it will then reroute the traffic to our secondary location. You'd create two record sets with the exact same domain, set which one is the primary and which is the secondary, and it's that simple. Next we're looking at the geolocation routing policy, which allows you to direct traffic based on the geographical location where the request is originating from. Down below we have a request from the US hitting app.exampro.co, and we have a record set with geolocation set to North America; since the US is in North America, the request is going to go to this record set. It's as simple as that. Now let's look at the geoproximity routing policy, which is probably the most complex routing policy. It's a bit confusing because it sounds a lot like geolocation, but it's not, and we'll see why shortly. You cannot create this using record sets; you have to use Traffic Flow, because it is a lot more complicated and you need to see visually what you're doing. So here's how it works: you choose a region, one of the existing AWS regions, or you can give your own set of coordinates, and then you give it a bias around that location, and Route 53 draws boundaries. If we created geoproximity routing for these regions, this is what it would look like; if we gave one region 25% more bias, you'd see that where its boundary was a bit smaller before, it's now a bit larger, and if we subtract bias, the boundary shrinks. That's the idea behind geoproximity: you have these boundaries. Looking at it in more detail, you can set as many regions or points as you want. Here I just have two as an example: I have a region chosen over in China, and it looks like we have Dublin chosen as well, just to show you a simple example. And here's a really complicated one where I chose every single region, just so you get an idea of the splits. So the idea is you can choose as few or as many as you want, and you can also give it custom coordinates.
So here I chose Hawaii: I looked up the coordinates for Hawaii, plugged them in, and then turned the bias down to 80%, so that the boundary would sit exactly around there, and I could have honed it in more. That should give you a really clear picture of how geoproximity works: it really is boundary-based, and you have to use Traffic Flow for it. The last routing policy we're going to look at is multivalue. Multivalue is exactly like the simple routing policy; the only difference is that it uses a health check. The idea is that if it picks one endpoint at random, it's going to check whether it's healthy, and if it's not, it just picks another one at random. That is the only difference between multivalue and simple, so there you go. Another really powerful feature of Route 53 is the ability to do health checks. The idea is that you can create a health check for, say, app.exampro.co, and it will check on a regular basis whether it is healthy or not; that's a good way to see at the DNS level if something's wrong with your instance, or to trigger a failover. Let's get into the details: it checks health every 30 seconds by default, and that can be reduced down to 10 seconds; a health check can initiate a failover if the status returned is unhealthy; a CloudWatch alarm can be created to alert you when the status is unhealthy; a health check can monitor other health checks to create a chain of reactions; you can have up to 50 health checks in a single AWS account; and the pricing is pretty affordable, $0.50 per endpoint on AWS, with additional features at $1 per feature. Now, if you're using Route 53, you might wonder how you route traffic to your on-premise environment, and that's where Route 53 Resolver comes into play, formerly known as the .2 resolver. Resolver is a regional service that lets you route DNS queries between your VPC and your network, so it is a tool for hybrid environments, on-premises and cloud. There are options for inbound and outbound, inbound only, or outbound only. That's all you really need to know about it, and that's how you do hybrid networks. So now we're taking a look at the Route 53 cheat sheet, where we summarize everything we've learned about Route 53. Route 53 is a DNS provider used to register and manage domains and create record sets; think GoDaddy or Namecheap. There are seven different types of routing policies, starting with the simple routing policy, which allows you to input a single IP address or multiple IP addresses and picks an endpoint at random. Then you have weighted routing, which splits up traffic between different endpoints based on assigned percentage weights. Latency-based routing routes traffic to the region with the lowest possible latency for the user, so it's not necessarily the closest geographic location, just the lowest latency. We have failover routing, which uses a health check; you set a primary and a secondary, and it fails over to the secondary if the primary's health check fails. You have geolocation, which routes traffic based on the geographical location of the requester, so something like North America or Asia.
Then you have geoproximity routing, which can only be done in Traffic Flow and allows you to set biases, so you basically get a map of boundaries based on the different regions or coordinates you've chosen. You have multivalue answer, which is identical to simple routing, the only difference being that it uses a health check. We looked at Traffic Flow, which is a visual editor for creating and changing routing policies, and you can version those policy records for easy rollback. We have the alias record, which is an AWS smart DNS record that detects IP changes for AWS resources and adjusts to them automatically; you always want to use an alias record when you have the opportunity to do so. You have Route 53 Resolver, which is a hybrid solution, so you can connect your on-premise and cloud environments and route DNS between them. And then you have health checks, which can be created to monitor endpoints and automatically fail over to another endpoint, and health checks can monitor other health checks to create a chain of reactions for detecting issues with endpoints. Hey, this is Andrew Brown from ExamPro, and we are going to take a look at the AWS Command Line Interface, also known as the CLI, which lets you control multiple AWS services from the command line and automate them through scripts. The CLI lets you interact with AWS from anywhere by simply using a command line. Down below I have a terminal, and I'm using the AWS CLI, where every command starts with aws. To get it installed on your computer, AWS provides an installer (the CLI itself is a Python package), and once it's installed you'll have the ability to type aws in your terminal followed by a bunch of different commands. The things you can perform from the CLI include listing buckets, uploading data to S3, launching, stopping, starting and terminating EC2 instances, updating security groups, creating subnets; there's an endless number of things you can do. I also want to point out a couple of very important flags. Flags are these things where we have two hyphens followed by a name, and they change the behavior of the CLI commands. We have --output, which controls what gets returned to us, with the option of JSON, table, or plain text. And for --profile, if you are switching between multiple AWS accounts, you can specify a profile, which references the credentials file, to quickly let you perform CLI actions under different accounts. So there you go. Now we're going to take a look at the AWS Software Development Kit, known as the SDK, which allows you to control multiple AWS services using popular programming languages. To understand what an SDK is, let's define it: an SDK is a set of tools and libraries that you use to create applications for a specific software package. In the case of the AWS SDK, it's a set of API libraries that let you integrate AWS services into your applications. The SDK is available for the following languages: C++, Go, Java, JavaScript, .NET, Node.js, PHP, Python and Ruby. I have an example here of a couple of scripts I wrote with the SDK, one in Node.js and one in Ruby; it's the exact same script, using Amazon Rekognition to detect labels, just to show you how similar it is among different languages. More or less, the syntax is going to be the same.
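The slide shows the Node.js and Ruby versions side by side; as a rough Python (boto3) equivalent of that detect-labels script, where the bucket and object names are just placeholders, it would look something like this:

import boto3

rekognition = boto3.client('rekognition')

response = rekognition.detect_labels(
    Image={'S3Object': {'Bucket': 'my-example-bucket', 'Name': 'data.jpg'}},
    MaxLabels=10,
)

# Print each detected label with its confidence score
for label in response['Labels']:
    print(label['Name'], round(label['Confidence'], 1))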
So, in order to use the CLI or SDK, we have to do a little bit of work beforehand and enable programmatic access for the user we want to use these development tools with. When you turn on programmatic access for a user, you get an access key and a secret, and with those you can utilize these services; down below you can see I have an access key and secret generated. Once you have them, you need to store them somewhere, and you're going to want to store them in your user's home directory, inside a hidden directory called .aws, in a file called credentials. Down below I have an example of a credentials file, and you'll see we have default credentials: if we use the CLI or SDK without specifying anything, it's going to use those by default. But if you're working with multiple AWS accounts, you'll end up with multiple sets of credentials, and you can organize them into something called profiles; here I have one for enterprise-d and one for deepspace9. So now that we understand programmatic access, let's move on to the CLI. Hey, this is Andrew Brown from ExamPro, and we are going to do the CLI and SDK follow-along. Let's go over to IAM and create ourselves a new user so we can generate some AWS credentials. So now we're going to go ahead and create a new user, and we're going to give them programmatic access so we get a key and a secret. I'm going to name this user Spock, we'll go next, and we'll give them developer permissions, which is power user access here; you can do the same. Now our Cloud9 environment is ready, and we have a terminal here connected to an EC2 instance. The first thing I'm going to do is change the theme, because I can't stand the light theme: go to Themes, UI Themes, and choose Classic Dark, which is a lot easier on my eyes. The next thing we want to do is plug in our credentials so we can start using the CLI. The CLI is already pre-installed on this instance, so if I type aws, we already have access to it, but let's walk through how you would install it anyway. I've pulled up a couple of docs here just to talk through the installation process; since the CLI is already installed, it would be awkward to uninstall it just to reinstall it, so I'll just walk you through the docs to give you an idea of how it's done. The CLI requires either Python 2 or Python 3, and on Amazon Linux I believe both are available; if I type python --version here, it shows version 3.6.8. When you go ahead and install it, you're going to be using pip, which is how you install things in Python. It could be pip or pip3, depending on your system; there used to be Python 2, and when Python 3 came out they needed a way to distinguish them, so they called it pip3, but Python 2 is no longer supported, so pip3 is effectively just pip now. You just have to play around based on your system, but generally it's just pip install awscli.
And that's all there is to it. Getting Python installed in the first place will vary by system: Amazon Linux is a CentOS or Red Hat flavor of Linux, so it's going to be yum install python, and for most other Linux distributions it's going to be apt-get. So now that we know how to install the CLI, I'm just going to type clear here, and we are going to set up our credentials. They're probably already set up, because Cloud9 is very good at setting you up with everything you need, but we're going to go through the motions anyway. Just before we do that, we're going to install one thing in Cloud9: via the node package manager (npm), I'm going to install c9, which allows us to open files from the terminal directly in Cloud9. The first thing I want you to do is go to your home directory, which you do by typing cd followed by the tilde (~), which stands for home. Now do ls -la, which lists everything in the directory, and we're looking for a directory called .aws. If you don't have it, you just type mkdir .aws to create it, but it already exists for us, because again, Cloud9 is very good at setting things up. In there we're expecting a credentials file, so typing c9 credentials (the program we just installed) opens it up above, and you can see there's already a set of credentials in there. I'm going to flip over and compare: we have some credentials, I don't know whose, but we have them, and I'm going to go ahead and add a new one down here called spock. What I'm really doing is creating a profile, so I can switch between credentials, and I'm just going to copy and paste the new access key and secret in there and save it. So now I have a second set of credentials within the credentials file, and it's saved. I'm going to go down to my terminal and do clear. Now I'm going to type aws s3 ls and then --profile spock, which tells it to use that set of credentials, and now, using Spock's credentials, we get a list of our buckets. Now, if we wanted to copy something down from S3, we're going to use aws s3 cp, and we are going to point at that bucket, so it's exam-pro-000/enterprise-d (I have this from memory), and then data.jpg. What that's going to do is download a file, but before I actually run it, I'm just going to cd .. back to my home directory, copy the command again and paste it; I should be able to download it, but again I have to add --profile spock because I don't want to use the default profile. It complains because I'm missing the g on the end of jpg, and then it's still complaining; maybe I need the s3:// prefix? No, that's right. Oh, you know why: it's because when you use cp, you have to actually specify the output file.
So you need your source and your destination. I'm just going to add data.jpg as the destination, and that's going to download the file. I already knew I had something in S3 there, so let me just go over to S3 to show you: if you want to do the same thing I did, you definitely need to go set up a bucket in S3 first. Over here we have the exam-pro-000 bucket with an enterprise-d folder, and we have some images in there; that's where I'm grabbing that image from. I can also move this file into my environment directory so I actually have access to it there: I'm just going to do mv data.jpg and move it one directory up. Now we have data.jpg over here, and that's how you go about using the CLI with credentials. And yes, we can just open that file there if we want to preview it. So now let's move on to the SDK and use our credentials to do something programmatically. Now that we know how to use the CLI and where to store credentials, let's actually do something with the SDK. I recently contributed to the AWS docs for Rekognition, so I figured we could pull some of that code and have some fun. What you do is go to Google and type in "AWS docs Rekognition", click through to Amazon Rekognition, and go to the developer guide in HTML. Apparently they have a new look, so let's give it a go; there's always something new here, and I'm not sure if I like it, but this is the new look for the docs. We need to find that code, and I think it's under Detecting Faces, probably under Detecting Faces in an Image. The code I added was the Ruby and the Node.js examples, so we can choose which one we want; I'm going to do the Ruby one, because I think that's more fun and it's my language of choice. I'm going to copy this code, go back to our Cloud9 environment, create a new file, and call it detect_faces.rb. I'll double-click into it and paste that code in. What we're going to have to do is supply our credentials; generally you do want to pass them in as environment variables, which is a fairly safe way to provide them, so we can give that a go. But to get this working, we're also going to have to create a Gemfile, because we need some dependencies. So I'm going to create a new file, name it Gemfile, and within this Gemfile we have to provide the Rekognition gem, so I'm going to go over here and supply that. There are a few other lines we need as well, so I'm going to go off-screen and grab them for you. Okay, I just went off-screen and grabbed that extra code; it's pretty boilerplate stuff that you have to include in a Gemfile. What this is going to do is install the AWS SDK for Ruby, but specifically just the Rekognition part. I also have open here the AWS SDK docs for Ruby (Node.js, Python, etc. all have one too), and they tell you how to install the gems.
For dealing with Rekognition, I'm just going to do a quick search for Rekognition; sometimes it's easier to navigate on the left-hand side, so I'm just looking for Rekognition there. If you want to learn how to use this thing, a lot of the time the docs will tell you which gem you need to install, and this is the one we are installing. Then we click through to Client, and we can get an idea of all the kinds of operations we can perform. When I needed to figure out how to write this script, I went to that client API reference, read through it, and pieced it together by looking at the output; nothing too complicated there. Anyway, we have all the pieces we need, so let's make sure we're in our environment directory, which is that Spock dev directory, by typing cd ~/environment to go to the environment folder under home. Then do an ls -la and make sure we can see the detect_faces.rb file and the Gemfile, and then we can go ahead and do a bundle install. What that's going to do is install the dependency; here you can see it installed the AWS SDK core and also the Rekognition gem, so now we have all our dependencies to run the script. The only thing left is to provide it an input, so we supply a specific bucket and a file. There is a way to provide a local file, and we did download the file, but I figured we'd provide the bucket instead. So what's the bucket called? exam-pro-000. And the next thing we need is the key, so I'm going to do enterprise-d and then supply data.jpg. We can pass the credentials via environment variables; we could just hard-code them and paste them in here, but that's a bit sloppy, so we'll go through the full motions of providing them through the environment. To do that, I just paste in the access key, copy that in as the first one, and then do the same for the secret. Hopefully this works the first time; then we'll do bundle exec with the detect-faces script, and that's how the values get passed in, and assuming my key and bucket are correct, hopefully we'll get some output back. Hmm, it's saying it couldn't detect faces; I just have to hit the up arrow, and I think I just need to put the word ruby in front of the script, so bundle exec ruby detect_faces.rb — my bad. And is it working? No, it looks like we don't have the correct permissions. We are a power user, so maybe we just don't have enough permissions; I'm going to go off-screen and see what permissions we need to do this. So, after playing around a little bit and also reading the documentation for the Ruby SDK, I figured out what the problem was, and it's just that we don't need this forward slash in the key. We take that out, run what we ran last time, and then we get some output back, and it shows us that it detected a face. We have the coordinates of the face, and if we used some additional tooling we could actually draw a bounding box over the image to show where the face was detected. There's some interesting information in there.
So it detected that the person in the image was male, and that they were happy; if you think that expression is happy, then that's what Rekognition thinks. It also detected the face as being between ages 32 and 48. To be fair, Data is an android, so he has a very unusual skin color, which makes it hard to determine age, but I'd say that's an acceptable age range for the actor at the time, so it totally makes sense. So there you go, that is the programmatic way of doing it. Now, you don't ever really want to store your credentials on your server, because you can always use IAM roles: attach them to EC2 instances, and that will safely provide credentials to those instances so they have the privileges they need. But it's important to know how to use the SDK, and whenever you're developing on your local machine, or maybe in a Cloud9 environment, you are going to have to supply those credentials. So there you go. Now that we're done with our AWS CLI and SDK follow-along, we're on to the AWS CLI and SDK cheat sheet, so let's jump into it. CLI stands for Command Line Interface, and SDK stands for Software Development Kit. The CLI lets you interact with AWS from anywhere by simply using a command line. The SDK is a set of API libraries that let you integrate AWS services into your applications. Programmatic access must be enabled per user via the IAM console in order to use the CLI or SDK. The aws configure command is used to set up your AWS credentials for the CLI. The CLI is installed via Python (pip). Credentials get stored in a plain-text file, and whenever possible you should use roles instead of AWS credentials; I do have to put that in there. The SDK is available for the following programming languages: C++, Go, Java, JavaScript, .NET, Node.js, PHP, Python and Ruby. For the Solutions Architect Associate they're probably not going to ask you questions about the SDK, but for the Developer Associate they definitely will, so just keep that in mind. Hey, this is Andrew Brown from ExamPro, and we are looking at Key Management Service, which is used for creating and managing encryption keys for a variety of AWS services or within your own applications. The way I like to think of it is that whenever you see a checkbox in AWS to encrypt something, it's very likely using KMS. KMS makes it easy for you to create, control and rotate the encryption keys used to encrypt your data on AWS, and KMS is a multi-tenant hardware security module, which we're going to talk about on the next slide. The main takeaway I want you to remember about KMS is that whenever you're using an AWS service and you have the option to checkbox on encryption, and here we have an example with EBS, you check the box and then choose a master key, and that's all you have to do; it varies a bit per service, but that's pretty much the routine. KMS can also be used alongside CloudTrail to audit key access history, so if you have to investigate who used what key, that's how you'll do it. And KMS integrates with a lot of different AWS services; here I've highlighted the ones that are most important to remember for the associate exams, so you've got EC2, SQS, S3, DynamoDB, ElastiCache, RDS and more. So, KMS is a multi-tenant HSM, but what does that mean? HSM stands for hardware security module, which is hardware specialized for storing your encryption keys. It's designed to be tamper-proof, and it stores those keys in memory.
So the keys are never written to disk, which means if the power went out, those keys would be gone, and that is actually a security feature. Here is an example of a piece of HSM hardware, and these are really, really expensive. That's where KMS comes into play, because it is multi-tenant, meaning there are multiple customers utilizing the same piece of hardware, so you're sharing the cost with a bunch of other customers, and those customers are isolated from each other virtually; there is software that protects your data from other people's. If you had one customer utilizing the entire piece of hardware, we would call that dedicated, or single-tenant, because there's only one person using that device, and AWS actually has a single-tenant HSM offering called CloudHSM, which gives you a lot more control. The reason people would use CloudHSM over KMS is that CloudHSM is FIPS 140-2 Level 3, whereas KMS is FIPS 140-2 Level 2. The takeaway is just to understand that CloudHSM is more for enterprises that need to meet those regulations, but KMS is a really great service to utilize. To really understand KMS, we need to understand what a customer master key is, because that's the primary resource KMS manages, and to get there, let's start with what encryption is. Encryption is the process of encoding a message or information in such a way that only authorized parties can access it and those who are not authorized cannot; pretty basic. That leads us to cryptographic keys, or data keys: a data key is just a string of data that is used to lock or unlock cryptographic functions, and a cryptographic function could be authentication, authorization or encryption. And that leads us to the master key. A master key is stored in security hardware, so an HSM, and master keys are used to encrypt all the other keys on the system, those other keys being data keys. So why would we want to use a key to encrypt another key, which is called envelope encryption? Well, the reason, and here's a diagram of envelope encryption, is this: how do you know that the data keys you use to unlock the data in your database are themselves secure? That's where master keys come into play; the idea is that they create a layer of security around those data keys. To learn a little more about customer master keys: a customer master key is the primary resource that AWS KMS is managing, and a customer master key, abbreviated CMK, is a logical representation of a master key. You're not directly accessing the master key, but with that logical representation we get to attach a lot of metadata that helps us understand things about our master key: the key ID, the creation date, a description, and what state the key is in. The CMK also refers to the key material used to encrypt and decrypt data. KMS supports both symmetric and asymmetric CMKs, and if you've never heard of symmetric and asymmetric, I'll give you a couple of examples. A symmetric key is generally a 256-bit key that is used for both encryption and decryption, so you have a single key. An example of this on AWS is when you encrypt an object in S3: it uses something called AES-256, which, as the name suggests, is 256-bit encryption.
And so that is one method. The other method is an asymmetric key, and this would be where you have an RSA key pair that is used for encryption and decryption, or for signing and verification, but not both. The idea is that you have two keys. A great example of this is EC2 key pairs: you have a public key and a private key. Now, when you're downloading EC2 key pairs, I don't think they're using KMS, or if they are, it's probably managed by AWS and transparent to you, but the idea of having these two kinds of keys is based on the use case. From a security perspective, if you have two keys and one key has to match the other, that is technically more secure, whereas if you have one key and that key is lost, it is less secure. Okay, so that's customer master keys.

Now let's do a quick review of some CLI commands we can use with KMS. These are actually very common, and if you're studying for the Developer Associate you should absolutely commit them to memory, especially for your day-to-day work. On the exam you might see them appear, so it's good to know them so you can eliminate options that just do not exist. The first one is the create-key command, and as the name implies, it creates a customer managed key, very straightforward. Then you have encrypt, which encrypts plaintext into ciphertext, and decrypt, which decrypts ciphertext that was encrypted by KMS. Then you have re-encrypt, which can be used in three scenarios: manual rotation of CMKs, changing the CMK that protects a ciphertext, or changing the encryption context of a ciphertext. And the last one is enable-key-rotation: if you want to rotate out those keys once a year, you can turn this on and it will just happen automatically. The thing to note is that this only works for symmetric customer master keys, and you cannot perform this operation on a CMK in a different AWS account; it's within the existing account. So there you go.

We're at the end of the KMS section, so on to the KMS cheat sheet. Key Management Service (KMS) creates and manages encryption keys for a variety of AWS services or for your apps. KMS can be used with CloudTrail to audit a key's access history. KMS has the ability to automatically rotate your keys every year with no need to re-encrypt. Customer master keys are the primary resources in KMS. KMS is a multi-tenant HSM; multi-tenant means you're sharing the hardware with multiple customers. A hardware security module (HSM) is specialized hardware for storing your keys and is tamper-proof. KMS is FIPS 140-2 Level 2 compliant; if you need Level 3, that's CloudHSM. KMS stores master keys, not data keys; master keys are used to encrypt data keys, which is called envelope encryption. KMS supports two types of keys, symmetric and asymmetric: symmetric is a single key using 256-bit encryption (I always like to think of S3 buckets and AES-256), and asymmetric uses two keys (think of a key pair, public and private). The important KMS API calls to remember, because you might see them as exam questions, are create-key, encrypt, decrypt, re-encrypt and enable-key-rotation. So there we go, that is the end of the KMS cheat sheet. If this was an AWS security certification, this would be like a seven-page cheat sheet, so this is very light, but you definitely need to know KMS; it's very important as a developer or in SysOps.
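Here's a quick sketch of those KMS API calls using boto3. This is my own illustration rather than something from the course, so the description, plaintext and the commented-out destination key alias are just placeholder values.

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# create-key: creates a new customer managed CMK (symmetric by default)
key = kms.create_key(Description="demo key for the developer associate course")
key_id = key["KeyMetadata"]["KeyId"]

# enable-key-rotation: rotate the key material automatically once a year
# (only supported for symmetric CMKs within the same account)
kms.enable_key_rotation(KeyId=key_id)

# encrypt: plaintext in, ciphertext blob out
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"super secret value")["CiphertextBlob"]

# decrypt: KMS works out which CMK was used from the ciphertext itself
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
print(plaintext)  # b'super secret value'

# re-encrypt: decrypts server-side and re-encrypts under a different CMK,
# e.g. for a manual rotation (the destination key alias here is a placeholder)
# kms.re_encrypt(CiphertextBlob=ciphertext, DestinationKeyId="alias/other-key")
```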
Hey, this is Andrew Brown from ExamPro, and we are looking at Amazon Cognito, which is a decentralized way of managing authentication: think sign-up and sign-in integration for your apps, and social identity providers like connecting with Facebook or Google. Amazon Cognito actually does multiple different things, and we're going to look at three of them specifically. We're going to look at Cognito user pools, which is a user directory to authenticate against identity providers; Cognito identity pools, which provide temporary credentials for your users to access AWS services; and Cognito Sync, which syncs user data and preferences across all devices. So let's get to it.

To fully understand Amazon Cognito, we have to understand the concepts of web identity federation and identity providers, so let's go through the definitions. Web identity federation is the exchange of identity and security information between an identity provider and an application. An identity provider is a trusted provider of your user identity that lets you authenticate in order to access other services. An identity provider could be Facebook, Amazon, Google, Twitter, GitHub or LinkedIn; you commonly see this on websites that let you log in with a Twitter or GitHub account, and the identity provider there is Twitter or GitHub. They're generally powered by different protocols: whenever you're doing this with social accounts it's going to be OAuth, powered by OpenID Connect, which is pretty much the standard now. There are other identity providers too; if you need a single sign-on solution, SAML is the most common one.

Alright, so the first thing we're looking at is Cognito user pools, which is the most common use case for Cognito, and that is just a directory of your users, decentralized here. It handles actions such as sign-up, sign-in, account recovery (like resetting a password) and account confirmation (like confirming your email after sign-up). And it has the ability to connect to identity providers: it does have its own email and password form it can use, but it can also leverage Facebook, Login with Amazon and so on if you want. The way it persists a session after a user is authenticated is by generating a JWT, so that's how you're going to persist that connection. Let's look at more of the options so we can really bake in the utility of user pools. On the left-hand side we have a bunch of different settings. For attributes, we can determine whether the primary attribute on sign-up should be a username or an email and phone number, and whether users can sign in if the email address hasn't been verified and what the conditions around that are. We can set restrictions on the password, such as the length and whether it requires special characters, and we can choose which attributes are required on sign-up, like birthday or email.
It also has the capability of turning on MFA, so if you want multi-factor authentication, it's a very easy way to integrate that. If you want to run user campaigns (if you're used to sending out campaigns via MailChimp, for example), you can easily integrate Cognito with Pinpoint, which is AWS's user campaign service. And you can override a lot of functionality using Lambda: any time a sign-up, sign-in or password recovery is triggered, there is a hook so that you can have Lambda do something with it. So that's just some of what you can do with Cognito user pools, but the most important thing to remember is that it's a way of decentralizing authentication. That's user pools.

All right, so now it's time to look at Cognito identity pools. Identity pools provide temporary AWS credentials to access services such as DynamoDB or S3, so identity pools can be thought of as the actual mechanism authorizing access to AWS resources. The idea is that you have an identity pool, you say who's allowed to generate those AWS credentials, and then you use the SDK to generate the credentials, and that application can then access those AWS services. Just to really hit that home, I have screenshots to give you an idea of what that looks like. First we choose our providers: the provider can be authenticated, so we can choose Cognito or a variety of other ones, or it can be unauthenticated, which is also an option. Then after you create the identity pool, there's an easy way to use the SDK: you just pick your platform from a drop-down and you have the code ready to go to fetch those credentials. If you're wondering whether I actually put my real identity pool ID in there, it's not; I go in and replace all of these, so if you're ever watching these videos and seeing these values, I always replace them.

We're going to touch on just one more, which is Cognito Sync. Sync lets you synchronize user data and preferences across all devices with one line of code. Cognito uses push notifications to push those updates and synchronize the data, and under the hood it's using Simple Notification Service to push the data to devices. The data, which is user data and preferences, is key-value data, and it's actually stored with the identity pool; that's what's being pushed back and forth. But the only thing you need to know is what it does, and what it does is sync user data and preferences across all devices with one line of code.

Hey, this is Andrew Brown from ExamPro, and we're going to do the Cognito follow-along and set up a login screen. So I'm going to go to Cognito, and we're presented with two options: user pools and identity pools. Identity pools are not what we want; we want user pools so that people can log in and authenticate, whereas identity pools are for when you want to give access to existing resources on AWS. So we'll go to user pools, and I'm going to create a new user pool. I'm going to call it StudySync, since that's the project we've been working with in this Developer Associate course, and go to review defaults. All these defaults are fine; you can see you can set MFA and all this stuff, and you can change any of it if you want. I'll create the pool, and now that the pool has been created I'm going to need an app client, so I'll hit Add app client, name it StudySync, leave all of the defaults here, and go ahead and create the app client.
Now the next thing to do is configure this app client. In the app client settings, we're going to enable the Cognito user pool as the identity provider, and I'm going to put an example URL here for our callback, which is where somebody would go after logging in successfully, and another URL for where they'd go when they log out. We want the authorization code grant and implicit grant, and we'll take all of the scopes if possible, because why not, and we'll go ahead and save those. In order to use the hosted UI (this is a UI that AWS gives you by default; of course you can make your own, which is a lot of work), we need to set a domain name. I'm going to call it studysync, check for availability, and it's available; you might have to change yours in your case. We'll hit save. Then, to view our hosted login screen, we go back to app client settings and at the bottom hit Launch Hosted UI. I'm just going to do a hard refresh here because it's giving us some trouble; let me go back to the domain here. It looks like we can also customize it, which is kind of nice. Maybe it just needed some time. There you go, I was just way too fast.

Now we'll go ahead and sign up, and the goal is that if we sign up successfully, we should be redirected to that URL we provided. I'm going to need a temporary email, so I'll use 10 Minute Mail. Oh, they changed the whole UI, I guess they have a new one in beta, so I'm an early user. If you've never used this platform, it's for getting temporary emails very quickly. We'll go back, I'll paste the email in, make the password Testing123!, and sign up. Now we have a verification code; if I go back to the temporary inbox, there it is. We copy the code, confirm it, and it redirects us, so that means we successfully signed in. If we go over to our users and refresh, there I am, there's Andrew Brown. So that's all there is to setting it up. Clearly there's a lot more work involved in a real application, but for the Developer Associate, that's all you need to be comfortable with; I just wanted to make sure you got some hands-on with Cognito.

Generally, if you use Amplify, it does a lot of the work of setting Cognito up and integrating it with your application; integrating Cognito into your web apps without Amplify is extremely painful, but it's worth it because it's so inexpensive. The only thing that's unfortunate about Cognito user pools is the limitation in terms of identity providers: we have Facebook, Google and Amazon, but we don't have LinkedIn. Apparently you can do Twitter, I think, with OpenID Connect. For me, LinkedIn is a deal breaker. It's not AWS's fault either; Cognito follows a security standard, and for whatever reason LinkedIn does not conform to it, which is why it's not in the list. There is a way to get LinkedIn to work, but it's a lot of effort, and that's why we see a lot of people using Auth0, because it just does everything, but that one's really expensive. There are a lot of options in here and I recommend that you poke around, but we're done for this case, and I want you to go ahead and delete this pool.
To delete the pool, we first have to delete the domain name, because we're borrowing that from AWS. Once that's deleted, we can go to general settings, hit delete, type in "delete", and there we go, our pool is gone. So that's it.

Now we're on to the Amazon Cognito cheat sheet, so let's jump into it. Cognito is a decentralized, managed authentication system; when you need to easily add authentication to your mobile or web apps, think Cognito. A user pool is a user directory that allows users to authenticate using OAuth 2.0 identity providers such as Facebook, Google and Amazon to connect to your web applications, and a Cognito user pool is in itself an identity provider, so it can be on that list as well. User pools use JWTs to persist authentication. Identity pools provide temporary AWS credentials to access services such as S3 or DynamoDB. Cognito Sync can sync user data and preferences across devices with one line of code, powered by SNS. For web identity federation (they're not going to ask you these questions directly, but you need to know what they are): it's the exchange of identity and security information between an identity provider and an application. An identity provider is a trusted provider of your user identity that you can use to authenticate in order to access other services. OIDC (OpenID Connect) is a type of identity provider which uses OAuth, and SAML is a type of identity provider used for single sign-on. So there you go, we're done with Cognito.
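If you'd rather drive a user pool from the SDK than from the hosted UI, here's a rough boto3 sketch of the same sign-up and sign-in flow. This is my own example rather than part of the follow-along; the client ID, the email-as-username convention, and the USER_PASSWORD_AUTH flow (which has to be enabled on the app client) are all assumptions.

```python
import boto3

idp = boto3.client("cognito-idp", region_name="us-east-1")
CLIENT_ID = "your-app-client-id"  # placeholder app client ID

# Sign up a new user in the pool (mirrors the hosted UI sign-up form)
idp.sign_up(
    ClientId=CLIENT_ID,
    Username="andrew@example.com",
    Password="Testing123!",
    UserAttributes=[{"Name": "email", "Value": "andrew@example.com"}],
)

# Confirm the account with the emailed verification code
idp.confirm_sign_up(
    ClientId=CLIENT_ID,
    Username="andrew@example.com",
    ConfirmationCode="123456",
)

# Sign in; the response contains the JWTs that persist the session
tokens = idp.initiate_auth(
    ClientId=CLIENT_ID,
    AuthFlow="USER_PASSWORD_AUTH",  # this flow must be enabled on the app client
    AuthParameters={"USERNAME": "andrew@example.com", "PASSWORD": "Testing123!"},
)["AuthenticationResult"]
print(tokens["IdToken"][:40], "...")  # the JWT Cognito issues
```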
Hey, this is Andrew Brown from ExamPro, and we are looking at Simple Notification Service, also known as SNS, which lets you subscribe to and send notifications via text message, email, webhooks, Lambdas, SQS and mobile push notifications. To fully understand SNS, we need to understand the concept of pub/sub. Pub/sub is the publish-subscribe pattern commonly implemented in messaging systems. In a pub/sub system, the sender of messages, known as the publisher, doesn't send the message directly to the receiver. Instead, it sends the messages to an event bus, and the event bus categorizes the messages into groups. The receivers of messages, known as subscribers, subscribe to these groups, and whenever a new message appears within their subscription, the message is delivered to them. It's not unlike subscribing to a magazine. Down below we have that representation: the publishers publish to the event bus, which has groups in it, and the event bus pushes the messages out to the subscribers. Publishers have no knowledge of who their subscribers are. Subscribers do not poll for messages; messages are automatically and immediately pushed to them. And "messages" and "events" are interchangeable terms in pub/sub, so if you see me saying messages or events, it's the same darn thing.

So now looking at SNS: SNS is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems and serverless applications. Whenever we're talking about decoupling, we're talking about application integration, which is a family of AWS services that connect one service to another; SQS is also application integration, and so is SNS. Down below we can see our pub/sub system: publishers on the left side, subscribers on the right side, and our event bus is SNS. For the publisher we have a few options, basically anything that can programmatically use the AWS API (the SDK and CLI use the AWS API underneath), and that's how publishers publish their messages or events onto an SNS topic. There are also other AWS services that can publish to SNS topics; CloudWatch definitely can, because you'd be using it for billing alarms. On the right-hand side you have your subscribers, with a bunch of different outputs which we'll go through; here you can see Lambda, SQS, email and the HTTP/S protocol. So publishers push events to an SNS topic, which is how they get into the topic, and subscribers subscribe to the SNS topic to have events pushed to them. Down below I have a very boring description of an SNS topic: it's a logical access point and communication channel. So let's move on.

Let's take a deeper look at SNS topics. Topics allow you to group multiple subscriptions together, and a topic is able to deliver to multiple protocols at once: it could be sending out email, text messages and HTTPS, all the protocols we saw earlier. Publishers don't care about the subscribers' protocols; the publisher sends a message to the topic and says, you figure it out, this is the message I want to send, and the topic knows what subscribers it has, so when it delivers the message it automatically formats it according to each subscriber's chosen protocol. The last thing I want you to know is that you can encrypt your topics via KMS, and it's just as easy as turning it on and picking your key.

Now we're taking a look at subscriptions, which are something you create on a topic. Here I have an email subscription, so the endpoint is obviously going to be an email address; I provided my email there, and if you want to say hello, send me an email. It's as simple as clicking that button and filling in those options. You have to choose your protocol, and we have the full list on the right-hand side, so let's go through it. We have HTTP and HTTPS, which you're going to want to use for webhooks; the idea is that this is usually an API endpoint in your web application that listens for incoming messages from SNS. Then you can send out emails. There's another service called SES which specializes in sending out emails; SNS is really good for internal email notifications, because you don't get a custom domain name, the emails have to be plain text only, and there are some other limitations, so it's really good for internal notifications like billing alarms or knowing when someone signed up on your platform. Then there's email-JSON, which just sends you JSON via email. Then you have SQS, so you can send an SNS message into an SQS queue. You can also have SNS trigger Lambda functions, which is a very useful feature, and you can also send text messages using the SMS protocol.
And the last one here is platform application endpoints, and that's for mobile push. A bunch of different devices, laptops and phones, have notification systems in them, and this integrates with those. Let's talk a bit more about platform application endpoints, which are for doing mobile push. Here you can see a big list: we have ADM, which is Amazon Device Messaging, we have Apple, we have Baidu, we have Firebase, which is Google, and then two for Microsoft, Microsoft push and Windows push. With this protocol you can push out to all of that, and the advantage is that when you push notification messages to these mobile endpoints, they can appear in the mobile app as message alerts, badge updates or even sound alerts. So that's pretty cool; I just want you to be aware of it.

Alright, so on to the SNS cheat sheet. Simple Notification Service, also known as SNS, is a fully managed pub/sub messaging service. SNS is for application integration: it allows decoupled services and apps to communicate with each other. A topic is a logical access point and communication channel, and a topic is able to deliver to multiple protocols. You can encrypt topics via KMS. Publishers use the AWS API via the CLI or the SDK to push messages to a topic, and many AWS services integrate with SNS and act as publishers, for example CloudWatch. Then you have subscriptions: subscribers subscribe to topics, and when a topic receives a message it automatically and immediately pushes the message to subscribers. All messages published to SNS are stored redundantly across multiple AZs, which isn't something we covered in the core content, but it's good to know. And we have the following protocols: HTTP and HTTPS, which are great for webhooks into your web application; email, which is good for internal email notifications (remember it's plain text only; if you need rich text or custom domains you're going to use SES for that); email-JSON, which is very similar but sends JSON; SQS, so you can send your SNS messages into an SQS queue; Lambda, so you can trigger Lambdas; SMS, so you can send text messages; and the last one is platform application endpoints, which is mobile push for systems like Apple, Google, Microsoft and Baidu. Alright.
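As a quick illustration of the publisher side, here's a small boto3 sketch (my own example, not from the slides) that creates a topic, adds an email subscription, and publishes a message that SNS then fans out to every subscriber in whatever protocol they chose. The topic name and email address are placeholders.

```python
import boto3

sns = boto3.client("sns", region_name="us-east-1")

# A topic is the logical access point that publishers send to
topic_arn = sns.create_topic(Name="billing-alarms")["TopicArn"]

# Subscriptions attach a protocol plus an endpoint to the topic
# (an email endpoint must confirm the subscription before it receives messages)
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")

# Publishing once delivers to every subscriber, formatted per protocol
sns.publish(
    TopicArn=topic_arn,
    Subject="Billing alarm",
    Message="Estimated charges went over the threshold.",
)
```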
Hey, this is Andrew Brown from ExamPro, and we are looking at Simple Queue Service, also known as SQS, which is a fully managed queuing service that enables you to decouple and scale microservices, distributed systems and serverless applications. To fully understand SQS, we need to understand what a queueing system is. A queueing system is a type of messaging system which provides asynchronous communication and decouples processes via messages (also known as events) between a sender and a receiver, or, in the case of a streaming system, a producer and a consumer. In a queueing system, messages coming in are usually deleted on the way out, so as soon as they're consumed, they're deleted. It's for simple communication, it's not really for real time, and to interact with the queue and its messages, both the sender and the receiver have to poll to see what to do, so it's not reactive.

Some examples of queueing systems are Sidekiq, SQS and RabbitMQ, though RabbitMQ is debatable because it could be considered a streaming service. Now let's look at the streaming side to see how it compares against a queueing system. A streaming system can react to events from multiple consumers: if you have multiple consumers that want to do something with an event, they all can, because the event doesn't get immediately deleted; it lives in the event stream for a long period of time. The advantage of having a message hang around in the event stream is that it allows you to apply complex operations to it. So that's the huge difference: one is reactive and one is not, one lets multiple consumers do things with the messages and retains them in the stream, and the other deletes them and doesn't think too hard about what it's doing. So there's your comparison between queueing and streaming, and we'll continue on with SQS, which is a queueing system.

The number one thing I want you to think of when you think of SQS is application integration. It's for connecting isolated applications together, acting as a bridge of communication, and SQS happens to use messages and queues for that; you can see SQS appears in the AWS console under Application Integration alongside the other services that do application integration. As we said, it uses a queue, and a queue is a temporary repository for messages that are waiting to be processed; think of going to the bank, where everyone waiting in that line is the queue. The way you interact with that queue is through the AWS SDK: you have to write code to publish messages to the queue, and when you want to read them, you use the SDK to pull messages. SQS is pull-based; you have to poll it, it is not push-based. To make this crystal clear, here's an SQS use case: we have a mobile app and a web app, and they want to talk to each other. Using the AWS SDK, the mobile app sends a message to the queue. The web app then uses the AWS SDK to poll the queue whenever it wants; it's up to that app to decide how frequently it checks. If there is a message, it pulls it down, does something with it, and reports back to the queue that it has consumed it, meaning it tells the queue to go ahead and delete that message. And for the mobile app on the left-hand side to know whether the message has been consumed, it has to periodically poll, on its own schedule, to see if the message is still in the queue; if it no longer is, that's how it knows. So that is the process of using SQS between two applications.

Let's look at some SQS limits, starting with message size. The message size can be between 1 byte and 256 kilobytes. If you want to go beyond that, you can use the Amazon SQS Extended Client Library (Java only, nothing else) to extend message sizes up to 2 gigabytes. The way that works is that the message is stored in S3 and the library references that S3 object, so you're not actually pushing 2 gigabytes to SQS; it's just loosely linking to something in an S3 bucket.
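To make that two-app use case concrete, here's a minimal boto3 sketch (my own illustration, not from the course) of one side sending a message and the other side polling, processing, and then deleting it. The queue name and message body are placeholders.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="study-sync-jobs")["QueueUrl"]

# Producer side (e.g. the mobile app): publish a message to the queue
sqs.send_message(QueueUrl=queue_url, MessageBody='{"action": "resize-avatar", "user": 42}')

# Consumer side (e.g. the web app): poll the queue on its own schedule
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    print("processing:", msg["Body"])
    # Report back that we consumed it, i.e. delete it from the queue;
    # otherwise it reappears once the visibility timeout expires
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```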
Message retention is how long SQS will hold onto a message before dropping it from the queue. The default message retention is four days, and it can be adjusted from a minimum of 60 seconds to a maximum of 14 days.

SQS is a queueing system, so let's talk about the two different types of queues. We have the standard queue, which allows for a nearly unlimited number of transactions per second (a transaction here is basically a message) and guarantees that a message will be delivered at least once. The trade-off is that more than one copy of a message could potentially be delivered, and that could cause things to happen out of order, so if ordering really matters to you, consider that caveat with standard queues; in exchange you get nearly unlimited transactions. It does make a best effort to keep messages generally in the order they were delivered, but there's no guarantee. If you need a guarantee of ordering, that's where you use FIFO, first in, first out, which is what it stands for: a message comes into the queue and leaves the queue in order. The trade-off is the number of transactions per second; instead of nearly unlimited, there's a cap of 300.

So how do we prevent one app from reading a message while another app is busy with that message? The idea is that we want to avoid someone doing the same work that's already being done by somebody else, and that's where visibility timeout comes into play. Visibility timeout is the period of time that messages are invisible in the SQS queue. When a reader picks up a message, a visibility timeout is applied, which can be between 0 seconds and 12 hours, with a default of 30 seconds, and during that time no one else can touch that message. Whoever picked up the message works on it, reports back to the queue when they've finished, and the message is deleted from the queue. But if they don't complete it within the visibility timeout, the message becomes visible again and anyone can pick up that job. There is one consideration here: when you build your web apps, bake in the timing so that if, say, the 30 seconds have expired, you kill the job; otherwise you can end up with the same message being processed twice, and that could be an issue. So that's a consideration for visibility timeout.

In SQS we have two different ways of doing polling: short versus long. Polling is the method by which we retrieve messages from the queue, and by default SQS uses short polling. Short polling returns messages immediately, even if the queue being polled is empty, so short polling can be a bit wasteful: if there's nothing to receive, you're just making calls for no particular reason. But there could be a use case where you need a message right away, and then short polling is what you want.
For the majority of use cases, though, you should be using long polling, which is a bit bizarre given that it's not the default, but that's how it is. Long polling waits until a message arrives in the queue or the long poll timeout expires. Long polling makes it inexpensive to retrieve messages from the queue as soon as they're available: using long polling reduces cost because you reduce the number of empty receives; if there's nothing to receive, you're not wasting calls. If you want to enable long polling, you have to do it within the SDK: you set a wait time on the receive message request, and that's how you turn on long polling.
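That wait time on the receive call looks something like this in boto3. This is a sketch of my own with placeholder values; the same behaviour can also be made the default for a queue via its ReceiveMessageWaitTimeSeconds attribute.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/study-sync-jobs"  # placeholder

# Long polling: this call blocks for up to 20 seconds waiting for a message,
# instead of returning immediately with an empty response (short polling).
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,  # any value > 0 turns the request into a long poll (max 20)
)

# Or make long polling the queue-wide default:
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
)
```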
Let's take a look at the Simple Queue Service cheat sheet that's going to help you pass your exam. SQS is a queueing service using messages with a queue; think Sidekiq or RabbitMQ if you know those services. SQS is used for application integration: it lets you decouple services and apps so that they can talk to each other. To read from SQS, you need to poll the queue using the AWS SDK; SQS is not push-based, it's not reactive. SQS supports both standard and first-in-first-out (FIFO) queues. Standard queues allow for nearly unlimited messages per second, do not guarantee the order of delivery, always deliver at least once, and you must protect against duplicate messages being processed. FIFO maintains the order of messages, with a limit of 300, so that's the trade-off. There are two kinds of polling: short (the default) and long. Short polling returns messages immediately, even if the queue being polled is empty; long polling waits until messages arrive in the queue or the long poll timeout expires. In the majority of cases, long polling is preferred over short polling. Visibility timeout is the period of time that messages are invisible in the SQS queue; messages are deleted from the queue after a job has been processed, before the visibility timeout expires, and if the visibility timeout expires, the job becomes visible in the queue again. The default visibility timeout is 30 seconds, and it can be between 0 seconds and a maximum of 12 hours. I highlight the 0 seconds because that is sometimes a trick question on the exams; people don't realize you can set it to 0 seconds. SQS can retain messages from 60 seconds to 14 days, and by default it is four days (14 days is two weeks, which is an easy way to remember it). Message sizes can be between 1 byte and 256 kilobytes, and using the Extended Client Library for Java that can be extended to 2 gigabytes. So there you go, we're done with SQS.

Hey, this is Andrew Brown from ExamPro, and we are looking at Amazon Kinesis, which is a scalable and durable real-time data streaming service to ingest and analyze data in real time from multiple sources. Amazon Kinesis is AWS's fully managed solution for collecting, processing and analyzing streaming data in the cloud, so when you need real time, think Kinesis. Some examples where Kinesis would be of use: stock prices, game data, social media data, geospatial data and clickstream data. Kinesis has four types of streams: Kinesis Data Streams, Kinesis Firehose delivery streams, Kinesis Data Analytics and Kinesis Video Streams, and we're going to go through all four.

First, Kinesis Data Streams. The way it works is that you have producers on the left-hand side which produce data and send it to the Kinesis data stream. The data stream ingests that data, and it has shards, so it distributes the data among its shards. Then it has consumers, and with data streams you have to configure those yourself using some code; the idea is that you have EC2 instances specialized to consume that data and send it somewhere in particular. So you might have one consumer specialized for sending data to Redshift, another for DynamoDB, another for S3, and another for EMR; whatever you want a consumer to send data to, it can. The great thing about data streams is that when data enters the stream, it persists for quite a while: 24 hours by default, extendable up to 168 hours. So if you need to do more with that data, run it through multiple consumers or do something else with it, you can. The way you pay for Kinesis Data Streams is like spinning up an EC2 instance, except you're spinning up shards: as long as a shard is running, you pay a certain cost for that many shards. And that is Kinesis Data Streams.

On to Kinesis Firehose delivery streams, which are similar to data streams but a lot simpler. Firehose also has producers, and those producers send data into Kinesis Firehose. The difference is that as soon as the data is consumed, it immediately disappears; data is not persisted. The other trade-off is that you can only choose one consumer, and you have a few options: S3, Redshift, Elasticsearch or Splunk. Generally people output to S3. So there's a lot more simplicity here, but there are also limitations. The nice thing is you don't have to write any code to consume data, but the trade-off is that you don't have any flexibility in how you consume it; it's very limited. Firehose can do some manipulations to the data flowing through it: it can transform the data, so if you have JSON and want to convert it to Parquet (there are limited options for this), the idea is you can put it into the right data format, so that if it gets inserted into S3 and Athena will be consuming it, it's now a Parquet file, which is optimized for Athena. It can also compress the files, with different compression methods available, and it can also secure them. The big advantage is that Firehose is very inexpensive, because you only pay for what you ingest; only data that's ingested is what you pay for. You can think of it like Lambda or Fargate: you're not paying for running shards, and it's just simpler to use.
And so if you don't need data retention, it's a very good choice. On to Kinesis Video Streams, and as the name implies, it is for ingesting video data. You have producers sending either video or audio encoded data, which could come from security cameras, web cameras or maybe even a mobile phone, and that data goes into Kinesis Video Streams, which secures and retains the encoded data so that you can consume it from services used for analyzing video and audio. So you've got SageMaker, Rekognition, or maybe you need to use TensorFlow, or you have custom video processing, or something that does HLS-based video playback. That's all there is to it: it's so you can analyze and process video streams by applying ML or video processing services.

Now we'll take a look at Kinesis Data Analytics. The way it works is that it takes an input stream and an output stream, and these can be either Firehose or Data Streams. What this service lets you do is run custom SQL queries so you can analyze your data in real time, so if you have to do real-time reporting, this is the service you want to use. The only downside is that you have to use two streams, so it can get a little bit expensive, but for data analytics it's really great. So that's all there is to it.

It's time to look at the Kinesis cheat sheet. Amazon Kinesis is the AWS solution for collecting, processing and analyzing streaming data in the cloud; when you need real time, think Kinesis. There are four types of streams. The first is Kinesis Data Streams, where you pay per shard that's running; think of an EC2 instance, where you're always paying for the time it's running. Data can persist within the stream, data is ordered, and every consumer keeps its own position. Consumers have to be manually coded to consume, which gives you a lot of custom flexibility, and data persists for 24 hours by default, up to 168 hours. With Kinesis Firehose you only pay for the data that is ingested (think of Lambda or Fargate, where you're not paying for a server that's running all the time). Data immediately disappears once it's processed, and for the consumer you only have a choice from a predefined set of services, S3, Redshift, Elasticsearch or Splunk; they're not custom, so you're stuck with what you've got. Kinesis Data Analytics allows you to perform queries in real time; it needs Kinesis Data Streams or Firehose as the input and the output, so you have to have two additional streams to use the service, which makes it a little bit expensive. And you have Kinesis Video Streams, which is for securely ingesting and storing video and audio encoded data for consumers such as SageMaker or Rekognition, to apply machine learning and video processing. To actually send data to the streams, you either use the KPL, the Kinesis Producer Library, which is a Java library for writing to a stream, or you write data to a stream using the AWS SDK. The KPL is more efficient, but you have to choose what's right for your situation. So there is the Kinesis cheat sheet.
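For the SDK route mentioned in the cheat sheet, writing a record to a data stream is a single call. Here's a small boto3 sketch of my own; the stream name, payload and partition key are placeholders.

```python
import boto3
import json

kinesis = boto3.client("kinesis", region_name="us-east-1")

# A producer pushing one clickstream event into a data stream via the SDK.
# The partition key determines which shard the record lands on.
kinesis.put_record(
    StreamName="clickstream-events",  # placeholder stream name
    Data=json.dumps({"user": 42, "page": "/pricing"}).encode("utf-8"),
    PartitionKey="user-42",
)
```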
Hey, this is Andrew Brown from ExamPro, and we are looking at Systems Manager Parameter Store, which is secure, hierarchical storage for configuration data management and secrets management. With Parameter Store, you can store data such as passwords, database connection strings and license codes as parameter values, store configuration data and secure strings in hierarchies, and track versions of them; that will make sense as we work through this. And you can encrypt these parameters using KMS, so you can optionally apply encryption, though that doesn't mean everything in Parameter Store is necessarily encrypted.

To really understand Parameter Store, let's look at what it takes to create a parameter. The first thing is the name, and this is the way you group parameters together based on a naming convention: by using forward slashes, you create hierarchies, and this allows you to fetch parameters at different levels. So if I created a bunch of parameters under /prod, I could use the API to say, for example, give me everything under our application's /prod path, and get all those parameters back; an interesting way to organize your parameters into groups. Then you choose your tier, which we'll talk about more shortly. Then you choose the type: a String, which is just a string; a StringList, which is a comma-separated string; or a SecureString, which is encrypted using KMS. And then you provide the value.

Talking about those tiers, there are two: standard and advanced. Generally you're using the standard tier, and this is scoped per region. If you never exceed 10,000 parameters, Parameter Store is free. But once you go over 10,000 parameters, you're now using advanced parameters; if you need a parameter with a value larger than 4 kilobytes, you're going to have to use an advanced parameter; and if you want to apply parameter policies, you're going to have to use an advanced parameter. The advanced tier can be applied per parameter, so you can mix and match the two, which is interesting to know. One thing you do need to know about advanced parameters is that you can convert a standard parameter to an advanced parameter at any time, but you can't revert an advanced parameter to a standard parameter; it's a one-way process. The reason is that if you were to revert an advanced parameter, you could end up losing data: an advanced parameter holding 8 kilobytes just can't go back to 4 kilobytes without truncating the data.

Let's talk about parameter policies, which are a feature of advanced parameters only. The idea is that they help force you to update or delete your passwords, and they do this using asynchronous periodic scans; after you create these policies, you don't have to do anything, Parameter Store takes care of the rest. You can apply multiple policies to a single parameter. So what policies are available? There are only three at the moment. The first one is expiration: with this policy you say you want the parameter to expire after a given date and time, and then it will just auto-delete.
The next one is expiration notice, and this one will notify you a set number of days, hours or minutes before an expiration is going to happen, so if for whatever reason you need to take action with that stored data, this gives you a heads-up. The last one is no-change notification. Say a parameter is supposed to be modified manually by a developer, meaning they're supposed to update it themselves: after a set number of days, hours or minutes, this policy will tell you that nothing has changed, so maybe you should go investigate. So that is parameter policies.

To understand how the hierarchy works with Parameter Store, I want to show you using the CLI. The first thing we want to do is create some parameters. Using the put-parameter command, we supply the name, and that's how we define our hierarchy; we provide some values and store them as strings. When you run each of these commands, it actually tells you what version it is: if you keep doing a put on the same name, that version count goes up, and this allows you to access older versions, because everything in Parameter Store is automatically versioned. So we've put three parameters here under Vulcan; how would we actually get all of them in one go? That's where we use get-parameters-by-path. Here we can just specify /planets/vulcan, and all the parameters underneath are returned to us; there they are. That's all it takes to get your parameters, and that's generally how you use this within your application. We're using the CLI here, so you'd have to translate this over to the SDK, but this is how you would get parameters into your application.
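Here's roughly what that translation to the SDK looks like in boto3. This is my own sketch, with a /planets/vulcan style hierarchy and values used purely as placeholders.

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# put-parameter: each put on the same name bumps the version number
ssm.put_parameter(Name="/planets/vulcan/capital", Value="ShiKahr", Type="String", Overwrite=True)
ssm.put_parameter(Name="/planets/vulcan/species", Value="Vulcans", Type="String", Overwrite=True)

# get-parameters-by-path: fetch everything stored under the hierarchy in one go
resp = ssm.get_parameters_by_path(Path="/planets/vulcan", Recursive=True)
for param in resp["Parameters"]:
    print(param["Name"], "=", param["Value"])
```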
Hey, this is Andrew Brown from ExamPro, and we are looking at Secrets Manager, which is used to protect the secrets needed to access your applications and services: easily rotate, manage and retrieve database credentials, API keys and other secrets throughout their lifecycle. You're generally going to want to use Secrets Manager to automatically store and rotate database credentials. They say it does API keys and other things like that too, but rotating database credentials is really where Secrets Manager shines. The managed database options available are RDS, Redshift and DocumentDB; then there's "other database", which we'll look at closely in a second; and then key-value, which they say is for API keys. So you go ahead and select the secret type you want. For RDS, Redshift and DocumentDB it's very straightforward; for other databases and other types of secrets it's a little bit different, so let's look at those in greater detail. When selecting credentials for another database, you can select a specific type of relational engine, but here you're providing the server address, the database name and the port; for the three managed options you wouldn't do that, you just provide the username and password and select the resource within your AWS account. For the other types of secrets, it's just key-value pairs, and if you go over to the plaintext tab, that doesn't mean you can encrypt a plain text file; it's just another representation of those keys and values, so you can work with a JSON object directly. So those are all the types.

Just a few other things to highlight about Secrets Manager: when you create any credential, encryption is enforced. With Parameter Store things don't necessarily have to be encrypted, but with Secrets Manager everything is encrypted at rest, and you can use the default encryption key or a different CMK if you want to go make one. The pricing is pretty simple: it's $0.40 USD per secret per month, so some people don't like Secrets Manager because of that cost, since you can pretty much use Parameter Store for free, but you have to decide what makes sense for you, and it's $0.05 per 10,000 API calls. One thing to note is that if you want to monitor access to the credentials, in case you need to audit or investigate, then if you have a CloudTrail trail created it will record access to the secrets for you, so it's probably a good idea to turn on CloudTrail.

The huge value of Secrets Manager is automatic rotation. You can set up automatic rotation for any database credential, the managed services and even the other databases; there's no automatic rotation for the key-value secret type, so it's just the database stuff. Once you go through the wizard for any of those secret types, you come to automatic rotation: you enable it and choose your rotation interval, which can be 30, 60 or 90 days, or a custom value up to 365 days, so up to one year. The way Secrets Manager does this is that it just creates a Lambda function for you, and some would argue that if you wanted to use Parameter Store, you could make your own Lambda and wouldn't need Secrets Manager, which is 100% true, but you have to know how to do that, so decide whether you want to put in the extra effort to avoid paying for Secrets Manager. There's one option down below that lets you select which user's password you're going to rotate, because you might not want to rotate this particular credential; you might want to rotate a developer's password that's connected to the database. So that's another option for you there.

For the Developer Associate it's good to know the CLI commands, so let's look at a couple for Secrets Manager. The first is secretsmanager describe-secret, which describes information about a particular secret. The reason you'd want to do this is to find out what version IDs exist, because you might want to request a specific version of a secret, and you also get some access information, such as the last time it was changed or accessed, so this might be a precursor step before you actually fetch the secret. The other CLI command to know is get-secret-value, and this actually gets the secret. You supply the secret ID and the version stage; if you don't provide a version stage, it defaults to AWSCURRENT, which is the current version, but if you used the prior step you could request a different version stage. Using this command you can see that we have a SecretString field, and that is what's storing our credential information; in this case it's just a key-value secret, and that's what we're looking at here.
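The SDK equivalent of those two calls looks like this: a boto3 sketch of my own, with a placeholder secret name.

```python
import boto3
import json

sm = boto3.client("secretsmanager", region_name="us-east-1")

# describe-secret: metadata only (version IDs and stages, last rotated/accessed),
# it does not return the secret value itself
meta = sm.describe_secret(SecretId="prod/studysync/db")  # placeholder secret name
print(meta["Name"], meta.get("VersionIdsToStages", {}))

# get-secret-value: the actual credentials; VersionStage defaults to AWSCURRENT
secret = sm.get_secret_value(SecretId="prod/studysync/db")
creds = json.loads(secret["SecretString"])  # e.g. {"username": "...", "password": "..."}
```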
To really help make sense of Secrets Manager, because it's not always clear how you access the secrets, versus how the database gets updated, versus how the app accesses them, I made an architectural diagram for you. The first thing is that in Secrets Manager we set up rotation, so every 90 days it rotates the password that is stored in RDS; the password actually lives in RDS, and Secrets Manager stores a version of it as well. What it's doing is running a Lambda, probably set up with a CloudWatch Events rule, and every 90 days it says, okay, run this Lambda and swap out that password; that's how the password gets rotated in RDS. But how does the application access it? Well, you'd have your web app running on EC2, and you'd use the SDK (that's the icon on the EC2 instance there) to make an SDK call to Secrets Manager and get the database credentials. Then your web server uses that username and password to form a connection URL, which looks a bit like an HTTP URL but is used to connect to Postgres databases; not all relational database types have a connection URL, so you might just have to provide the username and password separately, but the point is, you make a connection to RDS. From the developer's side, the way they'd probably use it during development, when they just want to gain access to the database in the cloud, is to use the CLI to obtain the database credentials (we just went through those CLI commands) and then connect to the database using a database manager like TablePlus; if you don't have a database manager, you can just use the terminal and connect that way. Hopefully that makes it very clear how you can use Secrets Manager in all these use cases.

Hey, this is Andrew Brown from ExamPro, and we are looking at DynamoDB, which is a key-value and document NoSQL database that can guarantee consistent reads and writes at any scale. To really understand DynamoDB, we need to understand what NoSQL is, and I can tell you what it's not: it is not a relational database and it does not use SQL to query the data for results. The key difference is how the data is stored, which can be either key-value or documents. Looking first at a key-value store: this is a form of data storage which has a key that references a value and nothing more. That's one data structure you can have in DynamoDB. The other one is a document store: a form of data storage that holds a nested data structure, and that nested structure is what we call the document. Hopefully that makes a bit of sense. So DynamoDB is a NoSQL key-value and document database for internet-scale applications, and it has a lot of great features: it's fully managed, multi-region, multi-master and durable, with built-in security, backup and restore, and in-memory caching. You can see why this is AWS's flagship database that they're always promoting; it has so much functionality at scale. It can provide eventually consistent reads and strongly consistent reads, which we'll talk about in the DynamoDB section, so don't worry if that doesn't make sense just yet. And you can specify the read and write capacity per second, so whatever you need, you just say, I need 100 reads and writes per second.
And then you just pay that cost; we'll talk about this in greater detail in the DynamoDB section. The last thing I want you to know is that the data is stored across three different locations on SSD storage, which are really fast drives, and that makes your data extremely durable against failure. Okay, so I made one really big mistake here which I've corrected, and it's the fact that all data is stored on SSD storage and is spread across three different AZs. The reason I thought it was regions is that when I read the documentation it said geographic locations and didn't really say AZs, so I guessed regions. But when I thought about it, and I talked to some other people who know DynamoDB better than me, they pointed out that no, it's going to be separate data centers, and that makes sense: if there's a feature called global tables that lets you copy to other regions, storing every table across regions by default just doesn't make sense. So sorry for that mistake; it's actually three different AZs, technically three different data centers, but we'll call them AZs for simplicity.

Let's take a look at what a DynamoDB table looks like and understand all the components involved. The first thing is the table itself; tables contain rows and columns, but we don't call them rows and columns in DynamoDB, they have different names. We have items, which is the name for rows, and attributes, which is the name for columns; imagine an entire column as an attribute. Then you have keys, which are the names of those columns (up here IMDb ID is naming the key), and then you have values, which are the actual data itself. Hopefully that makes the structure of DynamoDB very clear.

So DynamoDB replicates your table across three AZs onto three separate drives, and this allows for high availability, so you don't have data loss. But this comes with a trade-off: when you need to update your data, the update has to be written to all those copies, and it's possible for data to be inconsistent when you read from a copy which has yet to be updated. The way you work around this is by choosing your read consistency. With DynamoDB you can choose between two options: eventually consistent reads, which is the default, or strongly consistent reads. Let's talk about eventually consistent reads first. While copies are being updated, it is possible for you to read and be returned an inconsistent copy, because you're reading from a copy which has yet to be updated. Reads are super fast because you're not waiting for data to become consistent; you can read immediately, but there's no guarantee of consistency. The time it takes for everything to become consistent is around a second, so if you're building an application and you can wait up to a second after a write, that's how you'd ensure your data is up to date; or maybe you have an application where briefly inconsistent data isn't a deal breaker, so it doesn't really matter. Now, if consistency is extremely important to you, this is where strongly consistent reads come into play. When copies are being updated and you attempt a read, the result isn't returned unless all the copies are consistent. The trade-off is that you get a guarantee of consistency, but you have to wait longer for the read to come back, so it's a slower read. And all copies will be consistent within a second; that's the guarantee AWS gives you. So there you go, that's read consistency.
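Choosing between the two is just a flag on the read call. Here's a minimal boto3 sketch of my own, with a placeholder table name and key.

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Default read: eventually consistent (fast, but may return a stale copy)
item = ddb.get_item(
    TableName="Movies",                     # placeholder table
    Key={"ImdbId": {"S": "tt0092007"}},     # placeholder key
)

# Strongly consistent read: waits until all copies agree, so it's slower
item = ddb.get_item(
    TableName="Movies",
    Key={"ImdbId": {"S": "tt0092007"}},
    ConsistentRead=True,
)
```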
With strongly consistent reads, when copies are being updated and you attempt to read, DynamoDB won't return a result unless all the copies are consistent. The trade-off is that you get a guarantee of consistency, but you have to wait longer for the read to come back, so it's a slower read. And all copies will be consistent within a second; that's the guarantee it gives you. So that's read consistency.

Let's look at DynamoDB partitions. What is a partition? It's an allocation of storage for a table, backed by solid state drives and automatically replicated across AZs within an AWS region; that's AWS's definition. My definition, which I think is easier to digest, is that partitioning is when DynamoDB slices your table up into smaller chunks of data (partitions), and the purpose is to speed up reads for very large tables by logically grouping similar data together. Imagine you have a giant table: it would be faster if you could partition it, so some data goes to partition A, some to partition B, and some to partition C. If you're wondering how it chooses where data goes, that's what we'll find out next.

Looking at partitions a bit more: DynamoDB automatically creates partitions for you as your data grows. You start off with a single partition, and there are two cases where DynamoDB will create additional partitions: when you exceed 10 GB of data in a partition, or when you exceed the RCUs or WCUs for a single partition. Each partition has a maximum of 3,000 RCUs or 1,000 WCUs; those acronyms mean read capacity units and write capacity units. An example of setting those capacity units is down below: you have a table, I set the reads and the writes, and when a new partition is created, the reads and writes get split across those partitions. So if you go over that mark, that's exactly where it hits the threshold and splits.

When you create a table you have to define a primary key, and this key determines where and how your data will be stored in partitions. It's important to note that the primary key cannot be changed later, so designing your primary key and making the right choice early on is extremely important, because you only get one shot at it. Here's an example of the console where you create your primary key: you define a partition key, which determines which partition data should be written to, and then a sort key, which is optional, which is how your data is sorted within a partition. You'll notice I have a third attribute that's a date, but there's no date type in DynamoDB, so in that case you use a string. Just be aware there is no date data type.

And there are two types of primary keys: a simple primary key, where you only use a partition key, and a composite primary key, where a partition key and a sort key together make up your primary key. To make that concrete, here's a rough sketch of what creating a table with a composite primary key looks like in code.
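This snippet isn't from the course follow-along; it's a minimal boto3 sketch using a made-up "Aliens" table, just to show where the partition key (HASH) and sort key (RANGE) go when you define a composite primary key.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Hypothetical table with a composite primary key: Species + Name.
dynamodb.create_table(
    TableName="Aliens",
    AttributeDefinitions=[
        {"AttributeName": "Species", "AttributeType": "S"},  # S = string
        {"AttributeName": "Name", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "Species", "KeyType": "HASH"},   # HASH = partition key
        {"AttributeName": "Name", "KeyType": "RANGE"},     # RANGE = sort key
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)

Notice the HASH and RANGE key types: that's the old hash/range naming showing through in the API, which we'll talk about again shortly.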
So let's talk about how a primary key with only a partition key chooses which partition it should write data to; again, if you only have a partition key, that's a simple primary key. First we need some data that we want to get into a particular partition. Our primary key only has a partition key defined, and we want to choose a value that is unique. IDs make a great value because they're extremely unique, and that's very important when designing simple primary keys. The way it works is that AWS has something called the DynamoDB internal hash function. We don't actually know how it works, because it's a secret, but it's the algorithm that decides which partition to write data to. It takes your data and your primary key, and then it writes to whichever partition it decides. That's how it works for a simple primary key.

Now let's look at how a primary key with a partition key and a sort key (a composite primary key) chooses which partition to write data to. Again we need some data, and this time we fill in both the partition key value and the sort key value. What's important is that the combination of the partition key and sort key has to be unique. With a simple primary key we wanted the partition key itself to be unique; here it only has to be unique in the scope of the two combined. Then we have the internal hash function again, which is still a secret (I've even asked the DynamoDB team on Twitter and they won't tell me, which is great for security). Our primary key and data get passed to that hash function, which happens automatically when you write to DynamoDB, you don't literally call it, and it decides which partition to use. The difference from the simple primary key, which effectively scatters items across partitions, is that the composite key places data next to data that is similar to it. Here we have an alien that is Romulan, and it's grouped together with the other Romulans and sorted from A to Z, so related data sits close together and is faster to access. That's the idea behind the composite primary key and how it figures out which partition data goes to.

Let's talk about primary key design for simple keys and composite keys. Again, a simple key is only a partition key, and a composite key is a partition key plus a sort key. For a simple key, what's important to remember is that no two items can have the same partition key. So if the ID is the partition key and two values are the same, that's not going to work; if they're different, that's great. For a composite key, two items can have the same partition key, but the partition key and sort key combined must be unique. So if "alien" is the partition key and "name" is the sort key and both are the same on two items, that won't work; but if the partition key is the same and the sort key is different, that's fine and will work out great. Here's a small sketch of that in code.
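Another hypothetical sketch, reusing the made-up "Aliens" table from the earlier snippet: two items share a partition key but have different sort keys, which is allowed, and the comment notes how PutItem behaves when the full key does match.

import boto3

table = boto3.resource("dynamodb", region_name="us-east-1").Table("Aliens")

# Two items can share the partition key as long as the sort key differs.
table.put_item(Item={"Species": "Romulan", "Name": "Vreenak", "Rank": "Senator"})
table.put_item(Item={"Species": "Romulan", "Name": "Sela", "Rank": "Commander"})

# Writing the exact same Species + Name again would not raise an error;
# PutItem simply replaces the existing item with that key.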
So there are two things to keep in mind when designing your primary key. You want the keys to be distinct, as unique as possible, and you want the key to evenly distribute data amongst partitions. That second point matters even more with a composite key, because items are grouped by the partition key: you don't want 90% of your records under one partition key value and 10% under another; you want it as even as possible.

Query and scan are two functions you're going to use a lot in DynamoDB, and you really need to get familiar with them. What I recommend is opening up the DynamoDB console, because under Items there's a way of exploring data where you can choose whether to query or scan. Scan has very few options, but with query there's a lot you can explore based on the partition key, plus what you can sort on in the drop-down. By playing around with this you'll understand in a practical way what you can do. But let's learn a little more about query and scan.

The query function allows you to find items in a table based on the primary key values, and it lets you query a table or secondary index that has a composite primary key. Going back to the console, you can see you have the partition key and the sort key, plus some filtering options; anything you try there will really help you understand what's possible. By default, query reads use eventual consistency; if you want the reads to be strongly consistent while querying, you pass ConsistentRead = true when using the SDK or the CLI. By default it returns all the attributes for the items, that's all of the columns, and you can filter those down using projection expressions to only get the columns you want, which is definitely something you'll want to do. Also by default, the data is returned in ascending order (A to Z); if you want descending order (Z to A), you can set ScanIndexForward to false, though in the console you can just click ascending or descending, which is a lot easier.

I just want to give you an example of a query payload, so this is roughly what you'd pass when using the CLI or SDK: ConsistentRead set to true; a projection expression saying I only want id, name, createdAt and updatedAt; ScanIndexForward so it's returned in reverse; and a limit of 20. There are a lot of different options, but these are the most important ones for you to know, and a sketch of such a call is just below.
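A minimal, hypothetical boto3 query against the made-up "Aliens" table from earlier, just to show where the options we just listed plug in (the attribute names are placeholders, not from the course):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb", region_name="us-east-1").Table("Aliens")

response = table.query(
    KeyConditionExpression=Key("Species").eq("Romulan"),  # partition key is required
    ProjectionExpression="#n, #r",                         # only return the columns we want
    ExpressionAttributeNames={"#n": "Name", "#r": "Rank"}, # aliases for attribute names
    ConsistentRead=True,      # strongly consistent read instead of the eventual default
    ScanIndexForward=False,   # descending sort-key order (Z to A)
    Limit=20,                 # cap the number of items returned
)
items = response["Items"]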
Now let's look at the DynamoDB scan function. A scan goes through all the items in a table and returns one or more items through filters, and by default it returns all the attributes for every item. If you try a scan in the DynamoDB console, you'll see all you can do is add filters; it's really simple. Scans can be performed on tables and on secondary indexes, and they can return specific attributes by using projection expressions to limit the output, just like query. Scan operations are sequential, but you can speed a scan up with parallel scans using the Segment and TotalSegments parameters. That matters because a scan returns everything, no matter how many records there are. But I'm going to tell you right now: avoid scans when possible. They're much less efficient than running a query because, as I just said, they return everything; as the table grows, scans take longer to complete; and a large table can use up all of your provisioned throughput in a single scan. So avoid scans where you can, but they're there if you need them.

One thing we have to do when we create a new table is choose its capacity mode: provisioned or on-demand. Let's look at provisioned throughput capacity first. This is the maximum amount of capacity your application is allowed to read or write per second from a table or index. Here's an example of a table where we've chosen provisioned capacity; down below we fill in our provisioned capacity, and it tells us what it will cost per month at that setting. Throughput is measured in capacity units: RCUs, which stands for read capacity units, and WCUs, which stands for write capacity units; you'll see those abbreviations all over the place. You'll also notice an option for auto scaling. If we turn that on, our capacity can scale up and down based on utilization, so if we need to go beyond that five or ten, we just set a minimum and a maximum. The reason you'd want auto scaling is to avoid throttling: if you go beyond the capacity you've set (say you didn't have auto scaling turned on and you went beyond five reads per second), those requests get throttled, meaning they're rejected and won't make it to your database unless they're retried. Auto scaling gives you a bit more wiggle room, though it has its limits. So that's provisioned capacity.

Now let's take a look at on-demand capacity, which is pay-per-request: you pay only for what you use. Over here we've set the table to on-demand, and you'll notice we can't set provisioned capacity or auto scaling, because it all happens for you automatically. On-demand is really good when you have a new table and you don't know how it's going to turn out, so you don't know what to set your capacity to and it's just easier to go on-demand. It's also good when your traffic is never going to be predictable: you don't want to be throttled and lose requests, and auto scaling won't keep up with traffic that erratic. Or maybe you just like the idea of paying only for what you use, because DynamoDB can get pretty expensive at scale. The only limitation that applies is the default upper limit for a table, which is 40,000 RCUs and 40,000 WCUs. Before going further, here's a rough sketch of how you'd pick between the two modes when creating a table.
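A hedged sketch (table names are invented) showing the two capacity modes side by side when creating a table with boto3: provisioned mode takes explicit RCUs/WCUs, while on-demand is selected with the pay-per-request billing mode and no throughput settings.

import boto3

client = boto3.client("dynamodb", region_name="us-east-1")

# Provisioned mode: you pick RCUs and WCUs up front (optionally with auto scaling).
client.create_table(
    TableName="PredictableTraffic",
    AttributeDefinitions=[{"AttributeName": "Id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "Id", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)

# On-demand mode: no capacity planning, you just pay per request.
client.create_table(
    TableName="UnpredictableTraffic",
    AttributeDefinitions=[{"AttributeName": "Id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "Id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)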
And that upper limit is about the worst damage runaway traffic can do to you, because below it there's no hard limit imposed by on-demand: if you get a lot of traffic and it requires 40,000 RCUs, it will scale up to that. So you just have to be careful you don't have runaway traffic, or you'll end up with a very large bill. But on-demand is still a really good feature because it gives you a lot more flexibility.

In DynamoDB it's important to know how to calculate the reads and the writes. Let's start with read capacity units. One RCU represents one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB in size. The whole point is figuring out what number to put in that read-capacity box: if our items are 4 KB or less, a setting of 10 equals 10 strongly consistent reads or 20 eventually consistent reads per second. But on the exam they're not going to ask you that; they're going to ask you how to calculate the number itself, and that's what we're going to figure out. Remember we have strongly consistent and eventually consistent reads, so we need two different formulas.

First, how to calculate RCUs for strongly consistent reads: round your item size up to the nearest 4 KB, divide by 4, then multiply by the number of reads. Let's go through three examples. First, 50 reads at 40 KB per item: 40 is already divisible by 4, so 40 / 4 = 10, and 10 x 50 = 500 RCUs; that's the number that goes in the box. Next, 10 reads at 6 KB per item: 6 rounds up to 8, 8 / 4 = 2, and 2 x 10 = 20 RCUs. The last one, 33 reads at 16 KB per item: 16 is already divisible by 4, so 16 / 4 = 4, and 4 x 33 = 132 RCUs.

Now let's look at how to calculate RCUs for eventually consistent reads, remembering that each RCU gives us two eventually consistent reads per second. The formula is similar: round the item size up to the nearest 4 KB, divide by 4, multiply by the number of reads, divide the final number by 2, and then round up to the nearest whole number. First example, 50 reads at 40 KB per item: 40 / 4 = 10, 10 x 50 = 500, divided by 2 gives 250 RCUs. Next, 11 reads at 9 KB per item: 9 rounds up to 12, 12 / 4 = 3, 3 x 11 = 33, divided by 2 is 16.5, which rounds up to 17 RCUs. If you'd rather see this as code, there's a small helper sketched below, and then we'll do one more example.
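A tiny helper, assuming the formulas as taught in this section (it also includes the write-side formula, which the course covers next), just so you can sanity-check your own arithmetic; the function names are mine, not an AWS API.

import math

def rcus(reads_per_second, item_size_kb, strongly_consistent=True):
    # Round the item size up to the nearest 4 KB, divide by 4, multiply by reads;
    # for eventually consistent reads, halve the result and round up.
    units_per_read = math.ceil(item_size_kb / 4)
    total = reads_per_second * units_per_read
    return total if strongly_consistent else math.ceil(total / 2)

def wcus(writes_per_second, item_size_kb):
    # Round the item size up to the nearest 1 KB and multiply by the number of writes.
    return writes_per_second * math.ceil(item_size_kb)

print(rcus(50, 40))                            # 500 (strongly consistent)
print(rcus(11, 9, strongly_consistent=False))  # 17  (eventually consistent)
print(wcus(18, 0.5))                           # 18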
One more example: 14 reads at 20 KB per item. 20 is divisible by 4, so 20 / 4 = 5, 5 x 14 = 70, and 70 divided by 2 is 35 RCUs. I know the math isn't fun, but it's something you definitely need to learn, so drill through it until you get it.

Now let's look at how to calculate writes, meaning write capacity units. One WCU represents one write per second for an item up to 1 KB. So if we have a write capacity of 10 and our items are 1 KB or less, that equals 10 writes per second. You'll notice this formula is a lot easier than the reads. To calculate WCUs: round your item size up to the nearest 1 KB and multiply by the number of writes, and that's it. First example, 50 writes at 40 KB per item: 50 x 40 = 2,000 WCUs. Next, 11 writes at 1 KB per item: 1 x 11 = 11 WCUs, super easy. The last one is a little trickier: 18 writes at 500 bytes per item, so we round 500 bytes up to 1 KB and multiply by 18, giving 18 WCUs. Just remember that for writes it's pretty much straight multiplication.

Now let's take a look at global tables. Global tables provide a fully managed solution for deploying a multi-region, multi-master database without having to build and maintain your own replication solution, so this is a very powerful tool when you want to go global. To use global tables there are a few requirements you must meet: you need a customer master key (CMK) with KMS, and you need to enable streams with the stream view type set to "new and old images", I believe; something has to be set there. Those checkboxes show up on the right-hand side, and once they're all ticked you're good to go and can create global tables: you just add the region you want and choose it. So global tables are very easy to use; it's mostly that activation process. Just remember what they're for: deploying a multi-region, multi-master database without building and maintaining your own replication.

DynamoDB also has support for transactions. So what is a transaction? It represents a change that will occur to the database; if any dependent condition fails, the transaction rolls back as if the database changes never occurred. At one point DynamoDB didn't have transactions, and that was one of the reasons people said they used relational databases instead, because relational databases are ACID compliant; but DynamoDB has since bolted on that functionality, which is great. If you're wondering, ACID stands for atomicity, consistency, isolation and durability. I want to give you an example of a transaction so you can conceptualize it. Transactions work really well when you're dealing with money, where you have to complete multiple steps before something goes through and you release the money.
The way it works is that if any of the steps fail, the transaction immediately fails and rolls back the changes. So first we create a new payee, then we verify that the email is in the correct format before sending out the money; but it turns out it's not, so the transaction stops and rolls back, and none of those actions actually occurred. That's transactions in a nutshell; now let's look at them in more detail.

Now that we've covered conceptually what transactions are, let's look at how DynamoDB transactions work. DynamoDB offers the ability to perform transactions at no additional cost using two API calls: TransactWriteItems and TransactGetItems. One is for grouping together a bunch of write actions and the other for a bunch of get actions. Transactions allow for all-or-nothing changes to multiple items, both within and across tables; notice it says across, so you can span multiple tables. DynamoDB performs two underlying reads or writes for every item in the transaction, one to prepare the transaction and one to commit it, and that does consume your RCUs or WCUs, so there's a little extra cost, but it's negligible and you shouldn't even think about it. Those underlying read and write operations are visible in your Amazon CloudWatch metrics if you want to keep track of them. You can also use a condition check with DynamoDB transactions to do a precondition check: it checks that an item exists, or checks specific attributes of the item, before the transaction runs. So those are the specifics of DynamoDB transactions.

TTL, which stands for Time To Live, lets you have items in your DynamoDB table expire after a given amount of time, and when I say expire, I mean they get deleted. This is great if you want to keep your database small and manageable, or if you're working with temporary but continuous data: session data, event logs, or whatever fits your usage patterns. To enable Time To Live, you go to your DynamoDB table, click on TTL, enable it, and give it an attribute. Here I'm providing "expiresAt", and that value needs to be in epoch (Unix timestamp) format. If you've been paying close attention to this section, you know DynamoDB doesn't have a date-time data type, so you can't just drop in an ISO 8601 date string for TTL; you'd have to convert the date programmatically or with an online calculator, and a small sketch of doing that conversion is below. One more advantage of TTL is that it can save you money: the smaller your database, the fewer partitions you'll have, and the more you'll save.
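A quick sketch, under the assumption that your TTL attribute is called "ExpiresAt" (that name is just an example), of converting a future moment into the epoch-seconds value DynamoDB expects:

from datetime import datetime, timedelta, timezone

# Hypothetical example: expire an item 30 days from now.
expires_at = datetime.now(timezone.utc) + timedelta(days=30)
epoch_seconds = int(expires_at.timestamp())  # seconds since 1970-01-01 UTC

# Store this number in the attribute you configured for TTL (e.g. "ExpiresAt");
# DynamoDB deletes the item some time after this moment passes.
print(epoch_seconds)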
DynamoDB also has a feature called streams. When you enable streams on a table, DynamoDB captures every modification to your data items so that you can react to those changes. If an insert, update or delete occurs, that change is captured and can be sent to a Lambda function. Changes are sent in batches to your custom Lambda in near real time, each stream record appears exactly once in the stream for each item that is modified, and the stream records appear in the same sequence as the actual modifications. Here's an example: let's say I update (or insert) an item, Chief O'Brien, and it gets written to the table. We then react to that write: the change goes to the DynamoDB stream, the stream is configured to invoke a Lambda, and in that Lambda we can do anything we want with the data, whether that's sending an email, pushing it to Kinesis Firehose, or whatever we program. It's simply a way of reacting to inserts, updates and deletes.

There are a couple of errors I want to review on DynamoDB that I think you should know. The first is ThrottlingException: the rate of requests exceeds the allowed throughput. This exception might be returned if you perform control-plane operations too rapidly, meaning CreateTable, UpdateTable or DeleteTable. I'd say it's most likely with UpdateTable, since it's not frequent that you're creating or deleting tables, but you might send multiple UpdateTable actions in a row. The other error, and this one is extremely common, is ProvisionedThroughputExceededException: you exceeded your maximum allowed provisioned throughput for a table or for one or more global secondary indexes. This occurs when you've exceeded your capacity units, your reads or your writes, so you're very likely to see it.

When an error does occur, you're probably accessing DynamoDB via the AWS SDK in your code, and the SDK has retries built in: when something fails it automatically tries again, and it implements exponential backoff. If you haven't heard of that before, the idea is: I've hit an error, so I'll wait 50 milliseconds before trying again; if that fails, I'll wait 100 milliseconds; and I'll keep doubling that wait, up to about a minute, before giving up. It's a strategy for making sure changes eventually make it through, so if you're using the SDK it tries and tries again and you're much less likely to lose data. These two exceptions are important because they're very likely to show up on your exam, especially ProvisionedThroughputExceededException. And notice it says provisioned throughput: remember there are two capacity modes, provisioned and on-demand, and this particular error wouldn't happen for on-demand. Maybe something similar occurs if you exceed the 40,000 RCU or WCU upper limit, but I've never exceeded it, so I couldn't tell you what error shows up there. Either way, this error is extremely common, and to make the backoff idea concrete there's a small sketch just below.
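This is only an illustration of the retry-with-exponential-backoff idea, roughly what the AWS SDKs already do for you automatically; it's not the SDK's actual implementation, and real code would only retry throttling-type errors rather than every exception.

import time
import random

def with_backoff(operation, max_wait_seconds=60):
    # Retry an operation, doubling the wait each time, up to about a minute.
    wait = 0.05  # start at 50 milliseconds
    while True:
        try:
            return operation()
        except Exception:
            if wait > max_wait_seconds:
                raise  # give up after roughly a minute of backing off
            time.sleep(wait + random.uniform(0, wait))  # a little jitter helps too
            wait *= 2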
So you really have to understand that one. Now, in DynamoDB there is the concept of indexes, and indexes are extremely common with databases. What is an index? A database index is a copy of selected columns of data from a database that is used to speed up lookups; you're essentially cloning part of your table in a form that's quicker to search. DynamoDB has two types of indexes: LSIs, which stands for local secondary indexes and which can only be created with the initial table, and GSIs, which are global secondary indexes. We'll get into both of these in detail, but the takeaway I want you to remember is that you generally want to use global over local; that's what's recommended in the documentation. Another deciding factor can be strong consistency: a local secondary index can provide strong consistency, whereas a global secondary index cannot. Now that we have an idea of what indexes are, let's jump into LSIs and GSIs.

Let's first take a look at local secondary indexes. An LSI is considered "local" in that every partition of the index is scoped to a base table partition that has the same partition key value; the base table is the initial table you're creating the index from. The total size of indexed items for any one partition key value can't exceed 10 GB, which is a hard limit with local secondary indexes. An LSI shares the provisioned throughput settings for read and write activity with the table it's indexing, the base table, which makes sense because it's local. And there's a limit of five LSIs per table.

Now, how do you actually create a local secondary index? LSIs can only be created with the initial table; here you can see I'm adding the index at table-creation time. You cannot add, modify or delete LSIs outside of that initial creation step, so you really have to get it right there and then, and if you need one later you literally have to make a new table and migrate all your data over. You need both a partition key and a sort key, and there are conditions on them: the partition key must be the same as the base table (again, it's local, so it works off the base table), and the sort key should be different from the base table. You could make the sort key the same, but that defeats the whole purpose of a secondary index, which is supposed to let you sort in a different way, so you'd want a different sort key.

Hopefully that makes local secondary indexes clearer, so let's move on to global secondary indexes. GSIs are considered "global" because queries on the index can span all of the data in the base table, across all partitions. These indexes have no size restrictions, whereas LSIs had that 10 GB limit for all items under a partition key value. They provision their own throughput settings and consume capacity, but not from the base table, which is a good thing. And they're limited to 20 per table; I think you might be able to raise that with a service limit increase, but I'm not sure.
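Backing up to the LSI point for a second, here's a rough sketch (a variant of the hypothetical "Aliens" table from earlier, with an invented "HomeworldId" attribute) of the fact that a local secondary index can only be defined at table-creation time, and that it keeps the base table's partition key while swapping in a different sort key:

import boto3

client = boto3.client("dynamodb", region_name="us-east-1")

client.create_table(
    TableName="Aliens",
    AttributeDefinitions=[
        {"AttributeName": "Species", "AttributeType": "S"},
        {"AttributeName": "Name", "AttributeType": "S"},
        {"AttributeName": "HomeworldId", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "Species", "KeyType": "HASH"},
        {"AttributeName": "Name", "KeyType": "RANGE"},
    ],
    LocalSecondaryIndexes=[
        {
            "IndexName": "SpeciesByHomeworld",
            "KeySchema": [
                {"AttributeName": "Species", "KeyType": "HASH"},       # same as base table
                {"AttributeName": "HomeworldId", "KeyType": "RANGE"},  # different sort key
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)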
But even if that's not the case, it's not going to show up on the exam, so don't worry. If you want to create a global secondary index, you can create it while you're creating the table, or you can make one afterwards, and you can modify and delete them at any time; it's extremely versatile. The partition key can be different from the base table, and generally you'd make it different, because that's usually the point. Although I suppose if you hadn't created a local secondary index at table creation and later wanted the base table's partition key with a different sort key, you'd have to make a GSI for that too; the point is you can set the partition key to whatever you want. The sort key is optional, so you could have just a partition key, which would be a simple key. So that's global secondary indexes.

Now let's pull them up side by side and make sure we understand the differences clearly, going through LSI versus GSI aspect by aspect so we know which one is better in each case. Key schema: local secondary indexes only support composite keys, and remember composite means both a partition and a sort key, while global secondary indexes support both simple and composite keys. Key attributes: for LSIs the partition key must be the same as the base table, because it's local, while for GSIs the partition and sort key can be any attributes you like. Size restrictions: an LSI has to be 10 GB or less for all index items under a partition key value, while GSIs are unlimited. Online index operations: LSIs can only be created at table creation, while GSIs can be added, modified or deleted at any time (and created at table creation too, if you want). Queries and partitions: an LSI queries over a single partition, as specified by the partition key value in the query, while a GSI queries over the entire table, across all partitions. Read consistency: LSIs support strongly or eventually consistent reads, so this is the one place an LSI wins over a GSI, because a GSI only supports eventual consistency. Provisioned throughput consumption: an LSI shares capacity with the base table, so you're more likely to get throttled, while a GSI has its own capacity and doesn't affect the base table. And the last one, which we haven't talked about yet, is projected attributes: when you create the index you say which attributes (columns) are allowed to be in it. With a local secondary index you can request attributes that are not projected into the index, whereas with a GSI you can only request attributes that are projected into the index. Weighing those points up, you can see why GSIs are generally recommended over LSIs, but it all comes down to your use case, so you'll have to decide for yourself.

Here we're going to take a look at DynamoDB Accelerator, also known as DAX. It's a fully managed in-memory cache for DynamoDB that runs in a cluster. DynamoDB's own response times are in the single-digit milliseconds; DAX can reduce read response times to microseconds, and that's really important for certain types of workloads, which we'll talk about more in the next slide. Here's an illustration of what DAX generally looks like.
Let's talk through how this thing works. You have a DAX cluster, which consists of one or more nodes (I have it labelled "cache" here), and each node runs its own instance of the DAX caching software. One of the nodes serves as the primary node for the cluster, and any additional nodes serve as read replicas. Your app accesses DAX by specifying the endpoint for the DAX cluster; there you can see the app hitting that endpoint. The DAX client software works with the cluster endpoint to perform intelligent load balancing and routing, so it just takes care of things: you use the endpoint and it figures everything out. Incoming requests are evenly distributed across all the nodes in the cluster. Now that we have an overview of DAX, let's look at when you would and wouldn't use it.

To best understand the use cases, let's cover what DAX is good for and what it's not good for. It's ideal for apps that require the fastest possible response times for reads, such as real-time bidding, social gaming and trading applications; apps that read a small number of items more frequently than others; apps that are read-intensive but also cost-sensitive (take note of "read-intensive", that's what DAX is usually for); and apps that require repeated reads against a large data set. Notice we keep saying reads. On the non-ideal side: apps that require strongly consistent reads, because DAX is eventually consistent, not strongly consistent; apps that don't require microsecond read response times, or that don't need to offload repeated read activity from the underlying tables; apps that are write-intensive or that don't perform much read activity, because that's not what DAX is intended for, it's for reads; and apps that are already using a different caching solution with DynamoDB and have their own client-side logic for working with it. If you run into the cases on that non-ideal side and still need caching for DynamoDB, that's where ElastiCache comes into play, because you can put ElastiCache in front of DynamoDB; for example, if you're dealing with write-intensive workloads you can take advantage of Redis there.

So we're on to the DynamoDB cheat sheet, and this one is more special than all the rest, which is why I prefixed it with "ultimate". DynamoDB is the most important service you need to know to pass the AWS Certified Developer Associate; it's extremely critical to the certification. This cheat sheet is very long, seven pages, the longest in the course, and it actually started out as only five pages. I published a preview on Twitter, and Kirk, who is a senior technologist at AWS specifically for DynamoDB, noticed I'd made some mistakes and offered to review the entire cheat sheet for accuracy. I sent it over to him, he turned the five-page cheat sheet into a seven-page one, and I even learned a lot of great things, so I think we all benefit from Kirk's help here. If you're on Twitter, do me a favour: tweet at him and thank him for helping us with this ultimate DynamoDB cheat sheet. He did it on his own time; he didn't have to do it, and this was his own effort.
So we greatly appreciate it, and I really hope it helps you pass the exam. Let's jump into the ultimate DynamoDB cheat sheet.

DynamoDB is a fully managed NoSQL key-value and document database. It's suited for workloads with any amount of data that require predictable read and write performance and automatic scaling, from small to large and everything in between. DynamoDB scales up and down to support whatever read and write capacity you specify per second in provisioned capacity mode, or you can set it to on-demand mode, where there's little to no capacity planning. DynamoDB can be set to support eventually consistent reads (the default) and strongly consistent reads on a per-call basis. With eventually consistent reads, data is returned immediately but can be inconsistent; copies of the data will generally be consistent within one second. With strongly consistent reads, DynamoDB always reads from the leader partition, since it always has an up-to-date copy; the data will never be inconsistent, but latency may be higher, and copies of the data will be consistent with a guarantee of one second.

DynamoDB stores three copies of your data on SSD drives across three AZs in a region. I think I'd previously said across three regions, but that was my misinterpretation of the documentation; and I don't even know whether they're strictly three AZs or just AWS-managed data centers, but it's easiest to understand them as AZs, so that's what we'll call them. DynamoDB's most common data types are B (binary), N (number) and S (string). There are a few others, and some of them also start with B, which gets a bit confusing, but these are the three I want you to know. Tables consist of items, which we'd call rows, and items consist of attributes, which we'd call columns. A partition is when DynamoDB slices your table up into smaller chunks of data, which speeds up reads for very large tables. DynamoDB automatically creates partitions in these scenarios: for every 10 GB of data; when you exceed the RCU limit of 3,000 or the WCU limit of 1,000 for a single partition; and lastly, when DynamoDB sees a pattern of a hot partition, it will split that partition in an attempt to fix the issue. That's page one; on to page two.

DynamoDB will try to evenly split your RCUs and WCUs across partitions. Primary keys define where and how your data will be stored in partitions, and they come in two types: a simple primary key, using only a partition key, and a composite primary key, using both a partition key and a sort key. The partition key is also known as the hash, and the sort key is also known as the range. I mentioned this before: I don't know why they were originally called hash and range, but the point is the names changed, and when you're using the CLI or the SDK they're still called hash and range, so it's important to know both. When creating a simple primary key, the partition key value must be unique. When creating a composite primary key, the combined partition and sort key must be unique. When using a sort key, records on the partition are logically grouped together in ascending order.
DynamoDB global tables provide a fully managed solution for deploying multi-region, multi-master databases. DynamoDB supports transactions via the TransactWriteItems and TransactGetItems API calls; the point of these calls is that they let you operate on multiple tables at once in an all-or-nothing approach, so all of the calls must succeed. DynamoDB streams allow you to set up a Lambda function that's triggered every time data is modified in a table, so you can react to changes. I love DynamoDB streams and use them all the time on projects, and while I don't think you need to know too much about them for the developer exam, there's definitely a lot that could be written about them. Streams do not consume RCUs, which is the nice part; they're not going to eat into your read capacity. That's page two, moving on to page three.

Page three is about indexes. DynamoDB has two types: LSIs, which are local secondary indexes, and GSIs, which are global secondary indexes. LSIs support strongly or eventually consistent reads; they can only be created with the initial table; they cannot be modified, and cannot be deleted unless you're also deleting the table; they only use composite keys; they have to be 10 GB or less per partition; they share capacity units with the base table; and they must share a partition key with the base table, which is very important. Moving on to GSIs: they cannot provide strong consistency (the only way to get strong consistency from an index is an LSI, so remember that point), which means only eventually consistent reads; you can create, modify or delete them at any time, which is extremely convenient; you can use simple or composite keys; the partition key and the sort key can be whatever attributes you want; there are no size restrictions per partition; and a GSI has its own capacity settings. That's page three; on to page four.

Page four covers scans and queries. For scans, the first thing I'll tell you is that your table should be designed in such a way that your workload's primary access patterns do not use scans; overall, scans should be needed only sparingly, for example for infrequent reports. A scan goes through all the items in a table and returns one or more of them through filters; by default it returns all attributes for every item, and you can use projection expressions to limit the attributes returned. Scans are sequential, but you can speed a scan up with parallel scans using segments and total segments. Scans can be slow, especially on very large tables, and can easily consume your provisioned throughput; they're one of the most expensive ways to access data in DynamoDB. Queries, next, are about finding items based on primary key values; tables must have a composite key in order to query. By default, queries are eventually consistent; if you want strongly consistent reads, you set ConsistentRead to true. By default, a query returns all attributes for each item found.
Just like scans, you can use projection expressions to filter the attributes returned. By default the results are sorted ascending, and you can set ScanIndexForward to false to reverse the order to descending. I don't know if that option is available on scan, but just know that ScanIndexForward is what flips the order. So there you go.

We're on to the fifth page of seven. DynamoDB has two capacity modes, provisioned and on-demand, and we'll talk about provisioned first. You can switch between these modes once every 24 hours. Provisioned throughput capacity is the maximum amount of capacity your application is allowed to read or write per second from a table or index, and provisioned mode is suited to predictable or steady-state workloads. It's very important to understand RCUs and WCUs, especially for provisioned throughput, because you definitely set these values yourself: an RCU is a read capacity unit and a WCU is a write capacity unit. With provisioned throughput you can also set auto scaling, and it's recommended you enable it: in this mode you set a floor and a ceiling for the capacity you want the table to support, and DynamoDB automatically adds or removes capacity between those values on your behalf, throttling calls that go above the ceiling for too long. If you go beyond your provisioned capacity you'll get an exception, ProvisionedThroughputExceededException. For the exam you 100% want to know this; it will absolutely show up, and it's what happens when throttling occurs. If you're not familiar with throttling, it's when requests are blocked because the read or write frequency is higher than the set threshold. Example causes are exceeding the set provisioned capacity, partition splitting, and a table-versus-index capacity mismatch. That's provisioned throughput; on to on-demand and the next page.

Page six covers on-demand capacity, which is pay-per-request: you only pay for what you use. On-demand is suited for new or unpredictable workloads, and the throughput is only limited by the default upper limit for a table, which is 40K RCUs and 40K WCUs, an extremely high value. Throttling can occur if you exceed double your previous peak capacity (the high watermark) within 30 minutes; I had no idea about this. For example, if you previously peaked at a maximum of 30,000 ops per second, you could not immediately peak at 90,000 ops per second, but you could at 60,000. That's definitely something I didn't know, and I'm really glad Kirk put it in there, because what I had before was much simpler. Since there is no hard limit, on-demand can be very expensive in emergent scenarios, so just be careful; but you definitely get the flexibility of not having to think about setting your capacity, which is pretty nice.

Now let's talk about calculating reads and writes. This is definitely more important for provisioned throughput than for on-demand capacity, but we'll go through it now. For calculating reads, a read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for an item up to 4 KB in size.
To calculate RCUs for strongly consistent reads: round the item size up to the nearest 4 KB, divide by 4, and multiply by the number of reads. To calculate RCUs for eventually consistent reads: round the item size up to the nearest 4 KB, divide by 4, multiply by the number of reads, divide the final number by 2, and round up to the nearest whole number. If you really can't remember that, the examples are right there on the page; I'm hoping you print this cheat sheet out and review it before your exam so you know these for sure. That's page six, and we're on to the last page, page seven.

Let's finish strong with calculating writes. A write capacity unit represents one write per second for an item up to 1 KB. To calculate WCUs, round the item size up to the nearest 1 KB and multiply by the number of writes; the example is there at the end. Then we cover DynamoDB Accelerator, also known as DAX: a fully managed, in-memory, write-through cache for DynamoDB that runs in a cluster. Reads are eventually consistent, incoming requests are evenly distributed across all of the nodes in the cluster, and DAX can reduce read response times to microseconds. As for where it's ideal and where it's not, this is definitely debatable, but I got it from the docs, and for the exam you generally follow whatever the docs say until they've been changed. DAX is ideal for apps that need the fastest read response times possible, apps that read a small number of items more frequently than others, and apps that are read-intensive; that's the one I'm highlighting there. DAX is not ideal for apps that require strongly consistent reads, apps that do not require microsecond read response times, and apps that are write-intensive or that do not perform much read activity. And if you don't need DAX, consider using ElastiCache. That's not a hard rule, but it's a good one for the exam: if you've got a toss-up between DAX and ElastiCache and the workload doesn't need microsecond reads, or it's more write-intensive, consider ElastiCache. The examples are shown there too; you definitely want to print out this cheat sheet and go over it before exam day. I really hope this ultimate DynamoDB cheat sheet makes the difference for your exam.

So it looks like I lied, and there are actually eight pages to this DynamoDB cheat sheet. I almost forgot to include the DynamoDB API commands you use via the CLI, which are really important because they could show up on the exam, so let's go through them. The first is GetItem: it returns a set of attributes for the item with the given primary key; if there's no matching item, it does not return any data and there will be no Item element in the response. Then you have PutItem: it creates a new item, or replaces an old item with a new item.
If an item with the same primary key as the new item already exists in the specified table, the new item completely replaces the existing item. Then you have UpdateItem, which edits an existing item's attributes, or adds a new item to the table if it does not already exist. Then there's BatchGetItem: it returns the attributes of one or more items from one or more tables, and you identify the requested items by primary key; a single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. Then you have BatchWriteItem, which puts or deletes multiple items in one or more tables; it can write up to 16 MB of data, which can comprise as many as 25 put or delete requests, and individual items to be written can be as large as 400 KB. Then CreateTable, which, as the name implies, adds a new table to your account; table names must be unique within each region, so you could have the same table name in two different regions. Then UpdateTable, which modifies the provisioned throughput settings, global secondary indexes, or DynamoDB Streams settings for a given table. DeleteTable is very obvious: it deletes a table along with all of its items. Then you have TransactGetItems, a synchronous operation that atomically retrieves multiple items from one or more tables (but not from indexes) in a single account and region; it can contain up to 25 items, and the aggregate size of the items in the transaction cannot exceed 4 MB. Then TransactWriteItems, a synchronous write operation that groups up to 25 action requests; these actions can target items in different tables, but not in different AWS accounts or regions, and no two actions can target the same item. We're really running out of space, but we also have Query, which finds items based on primary key values (you can query a table or secondary index that has a composite primary key), and lastly Scan, which returns one or more items and item attributes by accessing every item in a table or secondary index. So there you go; that's the real end of the DynamoDB cheat sheet. It's a super long section, but definitely worth it and super critical to passing the developer associate exam.

Hey, this is Andrew Brown from ExamPro. Welcome to the DynamoDB follow-along. What we're going to do is create a table, load it up with data, write some records, delete some records, get some records in batch, and just really understand how DynamoDB works. What I need you to do is get DynamoDB open in a tab; just type in DynamoDB and click through to the DynamoDB page. Make sure you're in us-east-1; AWS loves to put you in Ohio or somewhere else, and just to be consistent, let's always use us-east-1 for these follow-alongs. You're also going to need a Cloud9 environment set up. I showed you how in the Elastic Beanstalk follow-along and a variety of other ones, so if you're not sure how to do it, go check those out or give it a go and try to spin up an environment yourself. I also have the DynamoDB documentation open in a tab so we can poke through it as we work through these commands. And we're going to need a couple of files from the GitHub repo, the free AWS Developer Associate repo: a file that helps transform the data, and the actual data we plan on importing. We're going to be working with starship data from Star Trek, so I have a list of starships.
And we can see that this is the data we have and what we're going to be importing. So let's make our way over to DynamoDB and take a look at what it takes to create a table. We're not actually going to create our table through the console, we're going to use the CLI, but let's just talk through what's on this page. The first thing you do is name your table, so I'd call mine "Starships", and then you set your primary key: you have the option of a partition key and a sort key. AWS used to call these a hash and a range, and those names will show up in the code because they're still named that way there. Looking at our data, we have to decide what would make a good partition key and sort key. In this case, a good partition key is the ship class, because it's a natural grouping of things (you can see Crossfield, Crossfield, Crossfield), and as long as the combination of the sort and partition keys is unique, we're okay. For the sort key we're using the registry number that identifies the ship, and those are all unique, so this will definitely be a unique value. Generally you'd want your sort key to be a date, but it all depends on your data, and we don't have a date value here, so it's going to be ship class as our partition key and registry as our sort key.

So in the console you'd type in the names. I'll call the first one "ShipClass"; for some reason the DynamoDB documentation likes to name things in CamelCase, so let's follow suit, although you could name it lowercase if you wanted. The second one is "Registry". Over here we have the data types: string, binary, number. DynamoDB does not have a date-time format (we'd normally use a string in that case), and there might be a few others, but these are the ones you need to know: S stands for string, B for binary, and N for number. Both of our keys are going to be string values. There are some default settings here: no secondary indexes, provisioned capacity of five reads and five writes, auto scaling turned on, and encryption at rest as the default encryption type. If we untick the defaults checkbox we can see those values, five and five provisioned and auto scaling turned on. If you wanted to create a local secondary index, the only time you can create it is right now at table creation, so you'd have to add it here; and for local indexes you always want the same partition key as the base table with a different sort key, and you have to specify them both. We don't have another attribute that needs it, but if I put in a name I can tick the local secondary index checkbox. We're not going to create any secondary indexes, though; beyond knowing how they work, we don't really need to go through the motions of actually using them.

So this is what we're going to do, but via the CLI. Make your way back to Cloud9. I'm going to create a new folder with mkdir and call it our dynamodb playground (I typed the name wrong the first time, so I'll just rename it). Any files we're working with will go in there. Then I'm going to make a new file in there and call it scratchpad.
That's where we're going to write out all our CLI commands and then paste them into the terminal, just so it's a bit easier to work with. I'll clear the terminal there. So let's get to it: we're going to create our table using the CLI. If we type in aws dynamodb help, it will list out a bunch of information, and that's a great way of getting a lot of detail about the DynamoDB commands, but we're just going to use the docs for this. So let's look up create-table. In here you can see the values we can specify. I'm not sure which ones are required, but I generally know what we need to enter, and the first thing is going to be the attribute definitions. If we go to that section, it's an array of attributes that describe the key schema for the table and its indexes. So what it's asking for is the attributes we're going to use in the actual key schema — the partition key and the sort key — and we decided we're going to use the ship class and the registry. If you look at the syntax here, this is the format we have to type it in, so I'm just going to copy it and save myself some trouble, because I'm terrible at writing these things out. I'll type in create-table and then put a backslash, which lets us do multi-line, and we'll do attribute-definitions. The first one is ShipClass, and it's going to be type S for string. I'm not sure if the docs list the types right here — I'm not seeing them — but we see there's S, N, and B; remember we have string, number, and binary, and that's how they're represented in the CLI and the API. So we want this one to be a string. We'll need a second one, which is Registry, and that's going to be a string type as well. The next thing we're going to need is the key schema, which specifies the attributes that make up the primary key for the table. Below, you can see we need an AttributeName and a KeyType, and they have an example — yes, they do — so we'll make our way back, type in key-schema, and paste that in. It's going to be the same attributes again — I know it's a bit redundant, but that's just how it is — and ShipClass is going to be the HASH, and Registry is going to be the RANGE. Remember I said earlier that DynamoDB used to call these hash and range, and they still appear in the code; this is what I'm talking about. You can see here it says HASH is the partition key and RANGE is the sort key. On the exam for the Developer Associate, you might actually see a bit of CLI stuff like this, and you might need to understand which is which, so just be aware of that. The next thing we need to specify is the provisioned throughput. This might have a default, but provisioned-throughput represents the provisioned throughput for the specified table, and it's probably a required field — I'm not seeing anything that says it's required, but I almost feel that it is. We're going to set that to five and five, so I'll copy that, make my way back over, and put two spaces and a backslash.
Then we're going to set the read and write capacity — we'll just set both to five. And one more thing I want to do is set the region. Always, always set the region if you can when you're using the CLI, because AWS might default it to Ohio and then you're just going to be scratching your head looking for your stuff. So I think this is our create command — I'll note it as "create DynamoDB table" in the scratchpad as nice documentation for ourselves — and let's see if this works. We'll copy that, paste it in, hit enter... and it did not work. Let me double check: backslash, backslash, backslash — looks all good to me. Maybe I didn't copy it correctly, so we'll try this one more time; I'll type clear, paste it in, hit enter. Okay, give me a second to figure this out. Oh my goodness, it is the most obvious thing: I didn't specify the table name. If we don't specify the table name, this thing is not going to get created — we just skipped right over that. The table name is just provided as a string, so I'm going to add it, call it starships, put a backslash there, and save that. We'll copy this, paste it in, hit enter, and we're getting output — that is good. That means it definitely was created, and if you look down below where it says table status, it says CREATING. Table creation is pretty darn quick: if we make our way over to DynamoDB, go back to the service, and go to Tables on the left-hand side, it's already active — super, super fast. Here we can see our ship class and our registry, but we don't have any data in there yet, and until we have data it's going to be a little hard to do anything, so I think that's what we'll do next. Just before we do, I want to show you the describe command. Let's say it was taking a while for our DynamoDB table to create — I don't know why, but let's just say it was really slow — and we wanted to check on its status to see when it became active. What we could do is type aws dynamodb describe-table --table-name starships, and I'll write it up in the scratchpad so we have a reference to it. A lot of CLI commands have this describe pattern, so when you've used the CLI long enough you just start guessing what the command is — I didn't have to look it up. We'll check if there are any other options: no, it's just the table name. If I paste that into Cloud9, we get pretty much the same output as before, and you can see now that it's active. A lot of times I like to change the output format, and this one would probably be good as a table — I think it's a little easier to see the table status that way. We could also use text, but for this one it's a bit messy and not very readable, so in this case I've used table; by default it's JSON. So what we'll do next is move on to getting our data into the table, and we're going to have to prepare it, because right now it's a CSV file and we need it as JSON. We'll do that next.
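Before we move on to the data, here's roughly what the finished commands from this step look like when pulled together. I'm assuming the attribute names ShipClass and Registry, following the CamelCase convention we talked about, and the table output format on the describe is optional:

aws dynamodb create-table \
  --table-name starships \
  --attribute-definitions \
      AttributeName=ShipClass,AttributeType=S \
      AttributeName=Registry,AttributeType=S \
  --key-schema \
      AttributeName=ShipClass,KeyType=HASH \
      AttributeName=Registry,KeyType=RANGE \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
  --region us-east-1

# Check on the table status afterwards (JSON by default; --output table is easier to scan)
aws dynamodb describe-table --table-name starships --region us-east-1 --output table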
So what we're going to do is make a new folder in our DynamoDB playground called batches, because we're going to generate a bunch of batch files — we're going to need to batch import our data. We're also going to need two files from the repo: the CSV file and csv_to_json.rb. The easiest way to get the script in there is to click the Raw button on GitHub, copy it, and make a new file here called csv_to_json.rb. If you're wondering why this file is written in Ruby, it's because that's my favorite language to use, so why not. You don't need to know Ruby — I'll walk you through the file in a moment so you understand what's going on. We'll go back, and the next thing we'll do is grab that CSV file: get the raw data, copy all of it, right-click, New File, starfleet.csv, and double-click it. I had a cached version from doing this project earlier, so Cloud9 was a bit confused, but I pasted the data in — just make sure the file name is correct. Now let's talk about why we have to convert this to JSON and why it has to be in batches. The easiest way to understand is to look at the command we're going to use, which is batch-write-item. The batch-write-item operation puts or deletes multiple items in one or more tables; a single call can write up to 16 megabytes of data, which can be comprised of up to 25 put or delete requests, and individual items to be written can be as large as 400 kilobytes. So we have some limitations. The thing is, if we go back to GitHub — because we have a nice visual there — we have 310 records, and that's more than 25, so we have to break this up into batches of 25. We also have to provide it in the JSON format that batch-write-item is expecting. I'm not aware of being able to import CSV directly — I'll check here — nope, there's no way to import CSV. The format it's expecting looks like this: a request-items JSON document where you put the table name, and then under that a structure of put requests, each with an Item and its values. So that's what we need to do — transform our data into that format — and that's why I wrote this little script. We'll quickly walk through it. What it does is pull in the CSV file and read it using Ruby's CSV library. This encoding option we may or may not need: when I originally created the CSV file it was in Excel and it was encoding the file as something other than UTF-8, but since we just copy-pasted from GitHub, maybe we don't need that line anymore — we'll see. This option says include the headers, so it will detect the headers — name, registry, and so on — and then it maps each row into the JSON structure. The idea is that we iterate through the file line by line; when the counter hits 25 it resets back to one and pushes on a new batch file, named after the starships table, and then it formats each row's information into a put request.
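To make that target format concrete, here's a minimal sketch of the shape each generated batch file takes, plus a loop you could use to import them once they exist. The attribute names and values here are illustrative placeholders, and in the follow-along we'll paste one batch-write-item command per file rather than looping:

# Each batch file is a BatchWriteItem request document: keyed by the table name,
# with up to 25 PutRequest entries (placeholder values shown)
cat > batches/example-batch.json <<'EOF'
{
  "starships": [
    {
      "PutRequest": {
        "Item": {
          "ShipClass":   { "S": "Crossfield" },
          "Registry":    { "S": "NCC-0000" },
          "Name":        { "S": "Example Ship" },
          "Description": { "S": "Placeholder description" }
        }
      }
    }
  ]
}
EOF

# Import every batch file in the folder
for f in batches/*.json; do
  aws dynamodb batch-write-item --request-items "file://$f" --region us-east-1
done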
So it's going to create a bunch of batches — it iterates through those batches of data and creates a bunch of batch files for us. Hopefully that makes sense, but it's better to see it in action than to talk about it. Go down to your terminal, type clear, and we want to make our way into that folder — type cd dynamodb and hit tab to autocomplete, which saves a lot of time — and hit enter. Then we want to run the Ruby file, so type ruby csv_to_json.rb, and if everything is named correctly in the script — starfleet.csv looks correct to me — this should work. And there we go, there are our batch files. We just transformed our data, so let's take a look at one of them. That's the format we're expecting, right? If we look back at the docs, it looks like the same format to me — this one's just a bit more compact — and each of these contains 25 records. Now, to import this, we need to write our import command, and that command is going to use aws dynamodb batch-write-item. We'll make our way over to the docs and see what we need to specify — they have an example — and we just specify the file, so it's as simple as that. I'm going to copy that whole line to save us any trouble with spelling mistakes. The files are in the batches folder, so we'll type batches/batch-000 and so on; I'm just going to paste this a bunch of times and change the number through 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12. Then we'll copy all of those lines and paste them in — there they go — and for number 12 we might just have to hit enter. The output says UnprocessedItems: if there was anything listed in there, it means those items didn't get processed. That could happen if we had too many items or if we hit our write capacity. In our case everything executed cleanly. If we go back to DynamoDB and give it a refresh, we can now see our records — there they all are. How great is that? So now that we know how to batch write, the next thing is to look at how we can actually get this data programmatically through the CLI. To get items, go back to the CLI docs and click back up a level. There's a batch-get-item, which we'll look at in a moment, but we'll look at get-item first. For get-item you specify the table name and the key; there are some additional options, but you can keep it really simple, and we're just going to do a simple get. Looking at the example, yeah, that's about as simple as it gets. So we'll copy this over, make our way back to Cloud9, and type our get-item command: aws dynamodb get-item with the table name starships. That's such a short line I'm just going to keep it on one line. Then we need a key file, so I'll make a new file and call it key.json. The format is pretty darn simple — you just specify the actual key, meaning the key schema attributes we need to provide to get the record: the registry and the ship class.
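Here's a sketch of the key file and the get-item call we're setting up — the key values below are placeholders until we pick a real ship from the data in a second:

# key.json holds the full primary key: partition key (ShipClass) plus sort key (Registry)
cat > key.json <<'EOF'
{
  "ShipClass": { "S": "Crossfield" },
  "Registry":  { "S": "NCC-0000" }
}
EOF

aws dynamodb get-item --table-name starships --key file://key.json --region us-east-1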
Now we'll look through our data and find a record. Let's see if there are any ships I like — how about the Okinawa, that sounds cool. So we'll grab the Okinawa and try to get that record. It's Excelsior class, so we'll copy that, go back, and paste it in, and then we need the registry number, which we'll paste in as well. So now we have our key, and we'll go back to our scratchpad. Yep, that should work, so I'm going to copy the command, paste it in, and there's our record. It returns all the fields by default; I think you can limit the fields that are returned, probably with projection-expression. Let's take a quick look at that: a string that identifies one or more attributes to retrieve from the table — these can include scalars and so on — and if nothing is specified, it returns everything. So just remember projection-expression, that's what it's used for. If we want to look at the result in a different format, let's try table — no, that looks terrible — and then text, which is a little bit better. So just consider that you can play around with those output values. Next we'll look at how to get items back in batch, so go back to the documentation, click back to dynamodb, and let's look at how we get multiple items. For batch-get-item it's not a key, it's request-items, and if we go to the example you can see it's as simple as this. I'm going to copy the command, make my way back, type batch-get-item, paste it in, and make it a single line — that's a little nicer. Then we're going to create this request-items file that it's suggesting: make a new file and paste the example in. It's very similar to the key file, except you can provide multiples: you provide the table name, and then you provide the keys. Looking at the actual example in case it's more pared down — no, it's the same — and you can even provide a projection-expression in there if you want to filter the fields that come back, but I want all the fields, I'm greedy like that. So I'm going to paste this in below — whoops, actually we'll go here and paste it in there — and we'll just grab the same key we already have to save us some time, since we'll just get the same record, and paste that in as one of the keys. I really don't like four-space indentation — it's excessive — but I'm not going to fight with it; I'm just hitting tab to line things up. I only want two records here, so I'll take out the extra one, and we don't need any projection expression — we just want everything. Now we'll get a second ship: I'll go back to the data and look for another cool ship. The USS Reliant sounds pretty darn cool, so I'll grab its class, which is Miranda, paste that in to replace the placeholder, and we also need the registry number, so paste that in there.
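Here's roughly where that request file and command are heading — two keys grouped under the table name, with placeholder registry values standing in for the ships we just picked, and the file name is just what I'd call it:

# request-items.json: for batch-get-item the keys are grouped under the table name
cat > request-items.json <<'EOF'
{
  "starships": {
    "Keys": [
      { "ShipClass": { "S": "Excelsior" }, "Registry": { "S": "NCC-0000" } },
      { "ShipClass": { "S": "Miranda" },   "Registry": { "S": "NCC-0001" } }
    ]
  }
}
EOF

aws dynamodb batch-get-item --request-items file://request-items.json --region us-east-1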
Oh, and we also need the name of the table, sorry, which is called starships. That should do it, so let's make our way back to the scratchpad and copy — whoops, that's not it, I'll copy the one below — and paste it in, and that should give us the records. It gave us back two records with all their information. Let's see what this looks like as a table — terrible, can't even read it — so let's look at it as text; yeah, that's a bit better. Again, just play around with that output stuff. So that is batch-get-item. The next thing I'm going to do is show you how to write items to the database, but we don't have any new records, so I'm just going to delete one and then we're going to re-add it to the database. Let's take a look at delete — I think it's called delete-item — and we'll make our way over to the CLI docs, scroll to the top, go to dynamodb, and look for delete. There it is: delete-item, with a table name and a key. We'll look at the example — yep, it's as simple as that; it looks just like get-item, but it's delete-item. So we'll set that up: I'll go to the top of the scratchpad, find starships, scroll back down, set the table name, and we can just use the same key file we already have, which is going to delete that same record. Before we do that, let's just make sure the data exists. This is the Excelsior-class ship we grabbed earlier, so if we go to the console I wonder if we can quickly find it. I'll copy the name and do a Command-F to see if it shows up — not seeing it. So I'm going to cheat here: I want to show you querying later, but I'll show it to you now. We'll grab the ship class, Excelsior, type that in, hit start search, and also go get the registry number, paste that in, and hit start. There's the record — this is the record we want to delete. We'll go back to our scratchpad, grab the delete command, and that should delete that record. Hit enter — we didn't get any output back, but that doesn't mean it didn't work; that means it did work. If we go back and hit start search again, the record is gone. So that record is 100% gone, and we want to bring it back, which is what we're going to do next. In order to bring back that item, we're going to need to do a put. We'll make our way back to the documentation, go to the top, and search for put — there it is, put-item. We specify the table, then we specify the actual item, and there are a bunch of other options, but we'll go down to the examples. You can see we have aws dynamodb put-item with a table name and the actual item itself, and then we also have return-consumed-capacity. I don't know if we need that, so let's just read about it: it determines the level of detail about provisioned throughput consumption that is returned. So it's just going to give us additional information about the actual consumed capacity — I think that's a good idea, since we can check what kind of capacity is being used up. This is obviously optional, but we'll keep it. We'll go back to the examples, copy the command, paste it here, and make it a single line to make our lives a little easier — it's a little long, but not that bad. And we'll make a new file called item.json.
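To recap the delete we just ran, and to sketch where we're heading with the put — the item attribute values below are placeholders, since in a moment we'll paste the real row back in from the CSV:

# Delete the record identified by key.json (the same key file we used for get-item)
aws dynamodb delete-item --table-name starships --key file://key.json --region us-east-1

# Re-create the item from a JSON file of attribute values (placeholders shown)
cat > item.json <<'EOF'
{
  "ShipClass":   { "S": "Excelsior" },
  "Registry":    { "S": "NCC-0000" },
  "Name":        { "S": "Okinawa" },
  "Description": { "S": "Placeholder description" }
}
EOF

aws dynamodb put-item \
  --table-name starships \
  --item file://item.json \
  --return-consumed-capacity TOTAL \
  --region us-east-1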
We need to grab the name of the table again, starships, and then we need to fill out that item file. So we'll open it up and take a look at what it should look like — that's an example of the format, so let's grab that example. Then we need to go get this record's data again: it was the Excelsior-class ship we deleted, and if we go back to the key file we can confirm that. I'll zoom in — it's this record here — so I'm going to grab the whole row of data, go back to the item file, and paste it in down there. Then we just need to name all the attributes; we'll grab the names from the table — I'll do a refresh — so we have ship class, we have registry, then we have description, and then we have name. I'm not really doing it in any particular order, and the order doesn't matter as long as you get all the information in. So this value is the name of the ship, this is the registry number, this is the ship class, and this is the description. We'll save that, and that should be what we need to get this to work. I'm going to copy the command, paste it in, and... we have an error: --item is expecting a comma delimiter. So maybe we made a minor mistake in our item file — yeah, we're missing a comma on the end here. We'll fix that, go back, hit up, hit enter, and there you go. It also told us how many capacity units were consumed, which was one write capacity unit. That can be good if you need to do something programmatic and make sure you're not over-consuming capacity. Let's go see if that record now exists in the table. We could do a query, but actually we could just use get-item for that — that would probably be smarter — so I'll copy that, paste it in, and yep, the record is back. So now that we know how to put an item, let's talk about how we would actually update an item next. To update an item in DynamoDB, we're going to use update-item. Of course, I think you could just edit it from the console here, but that's not what we're going to do — we're going to use the CLI, of course. We'll fix this file first and refresh, then go back to the CLI docs, go to the top, dynamodb, and look for put-item... and so put-item here, we'll scroll down — oh, sorry, we're not creating an item, we've already done put-item; we want to update an item. So update-item takes a table name, a key, and then a bunch of other stuff. Let's go look at the example, and here you can see it's a lot more complicated. I don't know why it's such a pain, but this is what it is. So I'm going to grab that, paste our example in, rename the table to starships, leave the key file as it is — we'll update the existing file we have — and talk through these values. The first one is update-expression. Let's look that up: an expression that defines one or more attributes to be updated, the actions to be performed on them, and the new values for them. So that is what we're actually going to update, and down below it should give us examples of how to write these expressions. There's not a lot of information here, but I know what we need to do.
And we can see that we have this SET expression here. The next one is expression-attribute-names: one or more substitution tokens for attribute names in an expression, and the docs list some use cases — it could be for reserved words, placeholders, or special characters. So it's basically a remapping of names; if that doesn't make sense yet, don't worry, it will once we work through the example. Then expression-attribute-values: one or more values that can be substituted in an expression. I don't feel like that's the best explanation either, but it makes sense when we go through it. So we're going to need an expression-attribute-names file and an expression-attribute-values file. I'm going to copy the names example, make my way back to Cloud9, and make a new file in our playground — that's one — and then go back and grab the other one — that's two. For the names, the docs suggest this format, and the other file holds the values. The way this works is it lets you remap things: if we go back to the docs example, the values file contains the values we want to update — here it's saying this number should now be 2015 and this string should now be "Louder Than Ever" — and they're keyed as :y and :t, which are placeholders for those values in the update expression. The update expression then uses them, something like SET #Y = :y, and if we look at the expression names, #Y maps to the Year attribute and #AT maps to the AlbumTitle attribute. It seems like a lot of roundabout — it does feel that way — but it gives you flexibility so you don't run into issues with reserved words or special characters. We could pare this down, but let's actually try to do it in full, like the example they have here. The first thing I'm going to do is go back to our scratchpad and change the SET expression — we only need to change the description, which makes our lives a little easier — so I'll call the value :d and save that. Next we'll go back to the expression names file and call the name #d, and that's going to map to description — whoops, can't see what I'm doing — description. Then we'll take out the second entry. Over in the values file, I'm going to change the key to :d and make it a string, and get rid of the second value. What we're going to do is just shorten the item's description. So we'll go back to the item: the description says the ship was commanded by Admiral Leyton and that Benjamin Sisko served on it, and I'm just going to take off the end — or maybe just end it at "served", since that means about the same thing as saying he was first officer. Okay, then we'll go back to our scratchpad and copy all of this. The command also has this return-values option, so let's take a look at that before we move on: use ReturnValues if you want to get the item attributes as they appeared before or after they were updated. NONE is the default if nothing is specified; ALL_OLD returns all the attributes of the item as they were before the update; UPDATED_OLD returns only the updated attributes; ALL_NEW returns all the attributes as they are after the update; and so on. This kind of stuff becomes important when we get into DynamoDB Streams, because there you can also work with the old or the new image of an item.
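Pulling those pieces together, here's a rough sketch of the update we're assembling. I'm assuming the attribute is stored as Description, matching the naming we've been using, the file names are just what I'd call them, and the new description string is a placeholder:

# Remap #d to the real attribute name
cat > expression-attribute-names.json <<'EOF'
{ "#d": "Description" }
EOF

# The new value that :d will stand in for (placeholder text)
cat > expression-attribute-values.json <<'EOF'
{ ":d": { "S": "Shortened placeholder description" } }
EOF

aws dynamodb update-item \
  --table-name starships \
  --key file://key.json \
  --update-expression "SET #d = :d" \
  --expression-attribute-names file://expression-attribute-names.json \
  --expression-attribute-values file://expression-attribute-values.json \
  --return-values ALL_NEW \
  --region us-east-1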
In this case I guess we're returning all of the new attributes, and that seems fine to me — it's just what the example uses, and that's all we're doing here. So we'll copy this, fingers crossed it works the first time... and we've got data back showing the updated item, so that looks great to me. We'll just make sure it has been changed by running our get-item from earlier — and it has been updated. So there you go, that is update-item. Now that we've learned how to get, put, and delete, let's look at the scan and query options next. We'll look at scan first, because scan is a lot easier to understand — there's a lot less going on. If we go over to the console, we can see we have a query and a scan option. If you created a table without a sort key, you'd mostly just be scanning it. The way scan works is that it returns all the records, and then you can apply filters after those records have been returned to narrow down what you want. So if you have a very large table with thousands upon thousands of records, that's not very efficient, and generally you always want to use query when you can. But if you don't have a need for a sort key, you could have a table that you simply scan. Anyway, if we scan and hit start, it returns everything, and if we want to filter stuff out we can choose any attribute: I could say ship class equals Luna and hit start search. We have some other options here too, like begins with, contains, all those kinds of conditions we can use to filter. But understand that even though we filtered this down, it has already returned all 300-odd records and then filtered them, so we're using up a lot of read capacity when we do that. Now let's look at how we can do this via the CLI. We'll go back to the top of the docs and look for scan — I feel like scan would have a lot of options, and oh yeah, there are tons: segments, page size, all sorts of things. But let's go down and look at the example they give us — it looks a lot like the put-item one — and we're just going to do something simple, on starships. I'm thinking what we'll do is filter on the description. Let's find something that starts with "Science"... maybe something more common would be nice, like "Destroyed". So we'll look for all the records whose description starts with "Destroyed". For the scan I don't need to project anything — remember, projection is when we just want to cherry-pick the fields that come back; these are full records we're getting. For the filter expression we're just going to do a begins_with, and for the attribute it's going to be description, with :d as the value placeholder. Down below we'd normally specify our names — actually, we don't need any names here, because we're not remapping anything; remember, we only did that remapping before, and it was extremely verbose. So I'm just going to write curly braces, quotation marks, colon d, make it a string, and have it hold "Destroyed". I just want you to notice that we didn't have to extract this out as a JSON file — we can write the values inline like this.
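Here's a rough sketch of that scan with the filter values written inline — I'm assuming the attribute is called Description to match the naming we've been using, and the exact quoting may need tweaking in your shell:

aws dynamodb scan \
  --table-name starships \
  --filter-expression 'begins_with(Description, :d)' \
  --expression-attribute-values '{":d": {"S": "Destroyed"}}' \
  --region us-east-1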
And that makes things a lot easier for us. But you're probably wondering what this begins_with thing is, so let's go back to the scan docs and specifically look at these filter expressions. If we click into this tab, it gives us more information: down below it shows you can do equals, the comparison operators, begins_with, contains, and that kind of stuff. To use begins_with, you write it and then substitute in the values — and that's what we've done: begins_with on the description, with :d as the placeholder, and :d is going to be "Destroyed", so it matches anything whose description begins with Destroyed. Let's give this a go and see if it works. We hit enter and get an error — expecting a property name enclosed in double quotes. I quickly made a mistake here, and it's a little hard to see: I'm using double quotes on the outside, so we're having a conflict of quotations. I'll use single quotes on the outside instead, and there we go — we get a lot of records back. This is a bit of a mess to look at, so I'll try changing the output: table — no good, you'd think they'd make that actually look like a table — so we'll go to text, which is a little easier to see, and we can see we're getting all the records back that were destroyed. So that is scan. Now let's take a look at how to use query. If we make our way over to our table in the console, where we can visualize it, and switch over to query: with query, you have to specify the partition key, and I think the sort key is optional — let's take a look. If I just want Luna, I click off the filter and that gives us those records, and if I want to narrow it down further, I can grab the registry and use equals. The difference here is that it's only returning these records — it's not returning all 300-odd records — and then you can still apply filters after the fact, so I could filter further and say the description begins with "the". There are also projected columns here, though that option says it can only be used when querying with an index name — I'm not sure about that one. But the point you need to know is that you have a required partition key, an optional sort key condition, and then you can filter further on top of that; you could even repeat the key attributes in a filter if you wanted, but that would be a bit silly. So let's learn how to do a query using the CLI. We'll type the word query in the scratchpad and make our way over to the documentation and search for query. Query probably has a lot of options — holy smokes, look at all those options. We're not going to go through all of them; go down to the examples, and this is the example it gives us: aws dynamodb query with a table name, a projection-expression, and one called key-condition-expression — that one is new — plus expression-attribute-values, which we had before. If we look at key-condition-expression, it's the condition that specifies the key values for the items to be retrieved by the query. So what it's saying is that you're pretty much just providing the key values — that's what we're doing with that attribute. So we'll just go grab the example here.
That will save us some trouble. We'll make our way back, and we're going to specify starships. In this case we're not going to project any values, so I'm going to cut the projection-expression out for now — though I should show you that at least once, and we will in a moment. For the key condition, we're going to do ship class equals :c, and I'm going to write the values inline because I just want to make this super easy for us. We'll do curly braces with single quotations on the outside — we learned our lesson last time about which quotes go on the outside — and doubles inside: colon c, S for string, and let's filter for Galaxy, which I know is a class of starship in Star Trek. If we copy that and hit enter, we get some values back — there you go, those are all the Galaxy-class ships. We can change the output to text to make it a bit easier to read. Now let's say we just wanted to get the registry number back: I'll paste the projection expression in and change the output to text. Oops, it did not like that — we needed a backslash on the end there. Holy smokes. Hmm, now it doesn't like the projection expression I put in there, and I'm pretty certain we can do that — there's a good example in the docs that has one. Oh, you know what, it's because I typed it in all caps: it has to be the actual name of the field, you can't just put all caps. Let's try this again. Hmm — maybe I should go a little slower here. Sometimes when you have these issues, a smart thing to do is make the whole command a single line, because backslashes can be tricky — a trailing space after one will mess things up. There we go, so I had some kind of syntax error in there; that's always a trick you can use to fix your queries. I'll hit enter and just clean this up a little bit, and try it one more time — there you go, that is the query with just the registry returned. Let's take a quick look and see if there are any other important options we glazed over. Scan-index-forward is a great way of flipping the sort order the other way, so let's play around with it. Look at the order the results are in right now — the D ship is first. I tried passing it a value of false, but it turns out it's just a flag: you use scan-index-forward or no-scan-index-forward, and you don't actually provide a value. I should really have just read the docs. If I paste that in, there you go — the results are in the opposite order, and if you want the default order back, you use the other flag. Are there any other values of interest here? We looked at projection already... no, I'd say that's pretty good, so we're all done with query. Next we'll take a look at the transact CLI calls. We're not going to run them, because they're a little bit more work to set up, but first let me pull the query pieces together in a sketch below, and then it's good to at least conceptually know how the transactions work. So let's take a quick look at the transactional API calls for DynamoDB: we have the get and the write.
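As promised, here's that query pulled together into one sketch — the key condition and inline values mirror what we just ran, and the projection and sort-order flags are the optional extras we experimented with:

# All Galaxy-class ships (key condition on the partition key, values written inline)
aws dynamodb query \
  --table-name starships \
  --key-condition-expression 'ShipClass = :c' \
  --expression-attribute-values '{":c": {"S": "Galaxy"}}' \
  --region us-east-1 \
  --output text

# Same query, but only return the registry, sorted in reverse
aws dynamodb query \
  --table-name starships \
  --key-condition-expression 'ShipClass = :c' \
  --expression-attribute-values '{":c": {"S": "Galaxy"}}' \
  --projection-expression 'Registry' \
  --no-scan-index-forward \
  --region us-east-1 \
  --output text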
It's good to conceptually know what you can do with these and to see how they're specified. Transact-get-items is a synchronous operation that atomically retrieves multiple items from one or more tables — that's the key thing, one or more tables — and the idea is that if anything fails, the entire transaction rolls back. This is really good when you have sensitive operations that rely on multiple items, and the same goes for both the gets and the writes. If we go to the examples down below, we can see it's aws dynamodb transact-get-items and then you provide the items that you want. In this example it's actually the same table, and it's saying we want both of these records; if there's a failure grabbing one or the other, it rolls back — it has to get all of the information or none of it. Let's go take a look at the writes. Transact-write-items is very similar, and there's a lot of stuff going on here — you probably want to read through all of it. The transaction can fail when a condition in one of the condition expressions is not met, when another ongoing operation is updating the same item at the same time (a very important one), when there is insufficient provisioned capacity, when an item size becomes too large, when the aggregate size of the items exceeds four megabytes, or when there is a user error. If we go up to the top and look at the example, again it's very similar, and you could specify different tables here, so this is really good for cross-table writes. There's a lot of information, so just be aware of transact-write-items and transact-get-items, and remember that everything must succeed in order for it to commit the writes, and that it works across tables. So we are almost done — we only need to learn one more CLI command, and that's to clean up the entire project: the delete-table command. We're going to go ahead and delete our table. Now, we could just go into the console and hit Delete table, but that's not fun — let's use the CLI. We'll look up delete-table and look at the example... I think it's just as simple as that. Yeah, it's that simple. If you delete through the console, it asks whether you might want to back the table up first; we do not want to back this table up, we want it gone. So I'll type in starships, hit enter, and now you can see it is deleting. If we want an update we can go over to the console and see that it's deleting — it might already be gone; hopefully it's already gone. And if it was taking time to delete, we could use the describe command to check on it, and now we can see that nothing is being returned, because there is no longer a table. So there you go, that's the DynamoDB run-through. If you have more time on your own, you might want to try setting up a DynamoDB stream — those are kind of interesting to see. I'm just going to recreate the starships table again here with ship class and registry; you don't have to do any of this, I'm only going through it so I can show you where DynamoDB streams live. Once this table creates — we'll give it a second — if you want to set a stream up, it's under Triggers: you create a trigger and point it at an existing Lambda function, and the idea is that whenever new records come into the table, it will call that Lambda function.
And then from that Lambda function, you can send the data wherever you want — generally you'd send it to Kinesis Data Firehose or some other Kinesis service. That's just a way to react to data arriving in the table. This is outside the scope of the Developer Associate — it's kind of a stretch goal if you want a little personal project — but our table is deleted, and I'm just going to delete this recreated one here too. You can see it asks whether to create a backup before deleting; we're not going to do that. But yeah, that's DynamoDB.

Hey, this is Andrew Brown from ExamPro, and we are looking at Elastic Compute Cloud, EC2, which is a cloud computing service: you choose your storage, memory, and network throughput, and then launch and SSH into your server within minutes. So we're on to the introduction to EC2. EC2 is a highly configurable server — it's resizable compute capacity, and it takes minutes to launch new instances. Anything and everything on AWS uses EC2 instances underneath, whether it's RDS or ECS or Systems Manager; I highly, highly believe that at AWS they're all running on EC2. We said they're highly configurable, so what are some of the options? Well, you get to choose an Amazon Machine Image, which is going to have your OS — whether you want Red Hat, Ubuntu, Windows, Amazon Linux, or SUSE. Then you choose your instance type, and this is going to determine how much memory versus CPU you get; here you can see that instances can get very large, so there's one server that costs five dollars a month, and one that's a thousand dollars a month with 36 CPUs and 16 gigabytes of memory and 10-gigabit network performance. Then you add your storage — you could add EBS or EFS, and there are different volume types you can attach. Then you configure your instance: you secure it with key pairs, and you can set user data, IAM roles, and placement groups, which we're all going to talk about starting now.

Alright, so we're going to look at instance types and what their usage would be. Generally, when you launch an EC2 instance, it's almost always going to be in the T2 or T3 family, and yes, we have all these little acronyms which represent different instance types. There are broad categories, and then subcategories, or families, of instances that are specialized. Starting with general purpose: a balance of compute, memory, and networking resources, very good for web servers and code repositories, so you're going to be very familiar with this category. Then you have compute optimized instances, which are ideal for compute-bound applications that benefit from high-performance processors — as the name suggests, you're going to get more computing power — so scientific modeling, dedicated gaming servers, and ad serving engines. Notice they all start with C, which makes them a little easier to remember. Then you have memory optimized, and as the name implies, it's going to have more memory on the server: fast performance for workloads that process large data sets in memory.
So the use cases are in-memory caches, in-memory databases, and real-time big data analytics. Then you have accelerated computing instances, which utilize hardware accelerators, or co-processors; they're going to be good for machine learning, computational finance, seismic analysis, and speech recognition — really cool future tech uses a lot of accelerated computing instances. And then you have storage optimized, which is for high sequential read and write access to very large data sets on local storage; your use cases might be NoSQL databases, in-memory or transactional databases, or data warehousing. So how important is it to know all these families? It's not so important at the associate track; at the professional track you will need to know them in more depth. All you need to know here are these general categories, which families fit where, and their general purposes. Within each family of EC2 instance types — here we have the T2 — we have different sizes, so you can see small, medium, large, and xlarge. I just wanted to point out that, generally, the way the sizing works is you always get double of whatever the previous size was. I say generally because it does vary, but the price is almost always double. So from small to medium, you can see the RAM has doubled and the CPU has doubled; from medium to large it isn't exactly double across the board, but the price definitely, definitely has doubled — it's almost always twice the size. So the general rule is, if you're wondering when you should upgrade: if you need double of what you have, you're better off just going to the next size. Now we're going to look at a concept called the instance profile, and this is how your EC2 instances get permissions. Instead of embedding your AWS credentials — your access key and secret — in your code so your instance has permissions to access certain services, you can attach a role to an instance via an instance profile. The concept here is that you have an EC2 instance, you have an instance profile, which is just a container for a role, and then you have the role that actually has the permissions. And I do need to point out that whenever you have the chance not to embed AWS credentials, you should never embed them — that's like a hard rule with AWS, and any time you see an exam question on that, definitely always remember it. The way you attach an instance profile to an EC2 instance, if you're using the wizard, is that you're going to see the IAM role field here, and you choose or create a role and attach it. But there's one thing people don't see: they don't see the instance profile itself, because it's kind of an invisible step. If you're using the console, it's actually created for you; if you're doing this programmatically, through CloudFormation, you'd actually have to create the instance profile yourself. So sometimes people don't realize that this thing exists. Next we're going to take a look at placement groups. Placement groups let you choose the logical placement of your instances to optimize for communication, performance, or durability. Placement groups are absolutely free, and they're optional — you do not have to launch your EC2 instances within a placement group — but you do get some benefits based on your use case.
So let's first look at cluster. Cluster packs instances close together inside a single AZ, and it's good for low-latency network performance for tightly coupled node-to-node communication — when you want servers to be really close together so communication is super fast. They're well suited for high performance computing (HPC) applications, but cluster placement groups cannot be multi-AZ. Then you have partition. Partition spreads instances across logical partitions, and each partition does not share its underlying hardware with the others — they're actually running on separate racks for each partition. They're well suited for large distributed and replicated workloads such as Hadoop, Cassandra, and Kafka, because those technologies use partitions, and now we have physical partitions to match, so that makes total sense. Then you have spread, which is where each instance is placed on a different rack; you use this when you have critical instances that should be kept separate from each other. You can spread a max of seven instances per AZ, and spread placement groups can be multi-AZ, whereas clusters are not allowed to go multi-AZ. So there you go. Next, user data is a script which will automatically run when launching an EC2 instance, and this is really useful when you want to install packages, apply updates, or do anything else you'd like at the launch of an instance. When you're going through the EC2 wizard, there's an advanced details step where you can provide a bash script to do whatever you'd like — here I have it installing Apache and then starting that server. If you were logged into an EC2 instance and didn't really know whether a user data script was run on that instance at launch, you could curl the address 169.254.169.254 with the user-data path on the end from within that instance, and it would return whatever script was run. That's just good to know — user data scripts are very useful, and I think you will be using one. Then metadata is additional information about your EC2 instance which you can get at runtime. If you were to SSH into your EC2 instance and run that curl command with latest/meta-data on the end, you're going to get all this information, and the idea is that you could get things like the current public IP address, the AMI ID that was used to launch the instance, or the instance type. Because you can do this programmatically, you could use a bash script and do something with user data and metadata to perform all sorts of advanced operations. So yeah, metadata is quite useful and great for debugging. Now it's time to look at the EC2 cheat sheet, so let's jump into it. Elastic Compute Cloud, EC2, is a cloud computing service. You configure your EC2 instance by choosing your storage, memory, network throughput, and other options as well, then you launch and SSH into your server within minutes. EC2 comes in a variety of instance types specialized for different roles: you have general purpose, which is a balance of compute, memory, and network resources; you have compute optimized, which, as the name implies, gives you more computing power, so it's ideal for compute-bound applications that benefit from high-performance processors; then you have memory optimized.
So that's fast performance for workloads that process large data sets in memory. Then you have accelerated computing, which uses hardware accelerators or co-processors, and then you have storage optimized, which is high sequential read and write access to very large data sets on local storage. Then you have the concept of instance sizes, and instance sizes generally double in price and in their key attributes, so if you're ever wondering when it's time to upgrade, just think: when you need double of what you have, it's time to upgrade. Then you have placement groups, which let you choose the logical placement of your instances to optimize communication, performance, or durability. Placement groups are free, and it's not so important to memorize the individual types, because I don't think they'll come up much at the associate level. Then we have user data: a script that will be automatically run when launching an EC2 instance. For metadata, metadata is information about the current instance, and you can access it via a local endpoint when SSH'd into an EC2 instance — you have that curl command against the meta-data path, and the metadata could be the instance type, the current IP address, et cetera. And the last thing is the instance profile: it's a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts. Alright, so there you go — that's EC2.

Hey, this is Andrew Brown from ExamPro, and we are looking at Virtual Private Cloud, known as VPC. This service allows you to provision logically isolated sections of the AWS cloud where you can launch AWS resources in a virtual network that you define. Here we are looking at an architectural diagram of a VPC with multiple networking resources or components within it, and I just want to emphasize how important it is to learn VPC and all its components inside and out, because it's on every single AWS certification with the exception of the Cloud Practitioner, so we definitely need to master all of these things. The easiest way to remember what a VPC is for is to think of it as your own personal data center — it gives you complete control over your virtual networking environment. So the idea is that we have the internet, traffic flows into an internet gateway, it goes to a router, the router consults a route table, the traffic passes through a NACL, and the NACL sends the traffic on to the public and private subnets; your resources can be contained within a security group, all within a VPC. So there are a lot of moving parts, and these are not even all the components, and there are definitely a bunch of different configurations we could look at. Looking at the core components, these are the ones we're going to learn in depth — there are a few more than these, but these are the most important ones. We're going to learn what an internet gateway is, what a virtual private gateway is, route tables, NACLs, security groups, public and private subnets, NAT gateways and NAT instances, customer gateways, VPC endpoints, and VPC peering. This section is very overwhelming, but once you get it down, it's pretty easy going forward — we just need to master all these things and commit them to memory. So now that we kind of have an idea of what the purpose of a VPC is, let's look at some of its key features, limitations, and some other little things we want to talk about. On the right-hand side here is the form to create a VPC — it's literally four fields. It's that simple.
Youname it, you give it an address, you can also give it an additional ipv6 address. You can'tbe or it's either this and this. And you can set its tendencies to default or dedicated,dedicated, meaning that it's running on dedicated hardware. If you're an enterprise, you mightcare about that. This is what the ipv6 cider block would look like because you don't enterit in Amazon generates one for you. So v PCs are region specific. They do not span regions,you can create up to five v PCs per region. Every region comes with a default VPC, youcan have 200 subnets per v PC, that's a lot of subnets.You can create, as we said here,an ipv4 cider block, you actually have to create one it's a requirement. And in additionto you can provide an ipv6 cider block. It's good to know that when you create a VPC, itdoesn't cost you anything. That goes the same for route tables, knackles, internet gatewaysecurity groups subnets and VPC peering. However, there are resources within the VPC that aregoing to cost you money such as Nat gateways, VPC endpoints, VPN gateways, customer gateways,but most of the time, you'll be working with the ones that don't cost any money so thatthere shouldn't be too much of a concern of getting over billed.One thing I do want topoint out is that when you do create a VPC, it doesn't have DNS host names turned on bydefault. If you're wondering what that option is for what it does is when you launch easytwo instances, and so here down below, I have an easy to instance, and it will get a publicIP, but it will only get a public DNS, which looks like a domain name like an address.And that's literally what it is. But if this isn't turned on that easy to instance, won'tget one. So if you're wondering, why isn't that there, it's probably because your hostnames are disabled and they are disabled by default. You just got to turn that off. Sowe were saying earlier that you get a default VPC for every single region. And the ideabehind that is so that you can immediately launch EC two instances without having toreally think about all the networking stuff you have to set up. But for a VA certification,we do need to know what is going on.And it's not just a default VPC It comes with otherthings and with specific configurations and we definitely need to know that for the exams.So the first thing is it creates a VPC of cider block size 16. We're going to also getdefault subnets with it. So for every single AZ in that region, we're going to get a subnetper AZ and they're going to be a cider block size 20. It's going to create an internetgateway and connect it to your default VPC. So that means that our students is going toreach the internet, it's going to come with a default security group and associated withyour default VPC. So if you launch an EC two instance, it will automatically or defaultto the security group unless you override it. It will also come with by by default,a knakal. And associated with your VPC, it will also default DHCP options. One thingthat it's implied is that you It comes with a main route table, okay, so when you createa VPC, it automatically comes to the main route table.So I would assume that that comesby default as well. So there are all the default. So I just wanted to touch on this 0.0 dotzero forward slash zero here, which is also known as default. And what it is, is it representsall possible IP addresses. Okay. And so you know, when you're doing a device networking,you're going to be using this to get the GW to have a route like routing traffic to theGW to the internet. 
When you're setting up a security group's inbound rules, you use 0.0.0.0/0 to allow any traffic from the internet to reach your public resources. So anytime you see 0.0.0.0/0, just think of it as access from anywhere, or from the internet.

Next we're looking at VPC peering, which allows you to connect one VPC to another over a direct network route using private IP addresses. The idea is we have VPC A and VPC B, and we want them to behave as if they're on the same network; that's what a VPC peering connection gives us. Creating one is very simple: you give it a name, choose the requester (say VPC A) and the accepter (say VPC B), and you can peer with a VPC in your own account or another account, in the same region or another region. So peering allows VPCs from the same or different regions to talk to each other.

There are some limitations around the configuration. Peering uses a star configuration, so you might have one central VPC with four around it, and each pairing needs its own peering connection. There's no transitive peering: if VPC C wants to talk to VPC B, the traffic will not flow through A; you'd have to create another direct peering connection from C to B. Communication only happens with a directly peered neighbour. And you can't have overlapping CIDR blocks: if both VPCs used 172.31.0.0/16, there would be a conflict and they couldn't talk to each other. So that's VPC peering in a nutshell.

Now let's look at route tables. Route tables are used to determine where network traffic is directed. Each subnet in your VPC must be associated with a route table, a subnet can only be associated with one route table at a time, but you can associate multiple subnets with the same route table. Down below I have the most common example of where you'd use a route table: allowing your EC2 instances to reach the internet. You have a public subnet where the instance resides, that subnet is associated with a route table, and the route table has a route pointing at the internet gateway, which allows access to the internet. That's really all there is to route tables.

Next up is the internet gateway. An internet gateway allows your VPC to access the internet, and an IGW does two things: it provides a target in your VPC route tables for internet-routable traffic, and it can perform network address translation (NAT), which we'll get into in another section, for instances that have been assigned a public IPv4 address. Down below I have a representation of how the IGW works: we have the internet on one side, and to reach it we need an internet gateway, but traffic from our EC2 instances has to pass through the route table first. So we create a new route in the route table with the internet gateway as the target (the igw- ID identifies that resource) and 0.0.0.0/0 as the destination. That's all there is to it.
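Here's a rough sketch of that last step in boto3: creating the internet gateway, attaching it, and adding the 0.0.0.0/0 route. The VPC ID is a placeholder for one you've already created.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc_id = "vpc-0123456789abcdef0"  # placeholder for an existing VPC

# Create an internet gateway and attach it to the VPC (one-to-one relationship).
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Create a route table in the VPC and add the "default" route:
# 0.0.0.0/0 (all possible IPs) pointing at the internet gateway.
rt = ec2.create_route_table(VpcId=vpc_id)
rt_id = rt["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id,
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)

print("Internet-bound traffic in", rt_id, "now routes through", igw_id)
```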
So we talked about how we could use NAT gateways or NAT instances to give internet access to EC2 instances that live in a private subnet. But let's say you wanted to SSH into one of those instances. It's in a private subnet, so it doesn't have a public IP address. What you need is an intermediate EC2 instance that you SSH into first, and then you jump from that box to the private one. That's why bastions are also known as jump boxes. The EC2 instance acting as the bastion is hardened, so it should be very secure, because it's your point of entry into your private EC2 instances. People sometimes ask whether a NAT instance could double as a bastion (NAT gateways obviously can't, since they're managed), and technically it's possible, but given how NATs are usually configured, and from a security perspective, you'd never want to do that; you'd always use a separate EC2 instance as your bastion. There is also a service called AWS Systems Manager Session Manager which replaces the need for bastions, so you don't have to launch your own EC2 instances for this, and generally that's what AWS recommends. But bastions are still commonly used throughout a lot of companies, because they meet existing requirements and people are comfortable with them.

Next we're going to look at Direct Connect. Direct Connect is an AWS solution for establishing dedicated network connections from on-premises locations to AWS, and it's extremely fast. Depending on the configuration you get, the lower-bandwidth options run from roughly 50 Mbps up to 500 Mbps, and the higher-bandwidth options from 1 Gbps to 10 Gbps. So the transfer rate between your on-premises network and AWS is considerably fast, which can be really important if you're an enterprise and you want to keep the level of performance you're used to. The takeaway with Direct Connect is that it helps reduce network costs, increases bandwidth throughput, and provides a more consistent network experience than a typical internet-based connection.

Hey, this is Andrew Brown from ExamPro, and we are looking at auto scaling groups. Auto scaling groups let you set scaling rules that automatically launch additional EC2 instances, or shut instances down, to meet current demand. An auto scaling group (ASG) contains a collection of EC2 instances that are treated as a group for the purposes of automatic scaling and management. Automatic scaling can occur via capacity settings, health check replacements, or scaling policies, which is going to be a big topic. The simplest way to use an auto scaling group is to work with just the capacity settings and nothing else. There are three of them: desired capacity, min, and max. Min is how many EC2 instances should at least be running; max is the most EC2 instances allowed to be running; and desired capacity is how many EC2 instances you ideally want to run. So when min is set to one and, say, you have a new auto scaling group with nothing running, it will always spin up one instance.
And if that instance dies for whatever reason, whether it becomes unhealthy or just crashes, the group will always spin one back up to get to the minimum. Then you have the upper cap, where it can never go beyond the max (two, in this example), because auto scaling could otherwise keep triggering more instances; it's a safety net so you don't end up with lots and lots of servers running. Desired capacity is what you ideally want to run, and the ASG will try to reach that value, but there's no guarantee it will always be at that value. So that's capacity.

Another way scaling can occur in an auto scaling group is through health checks, and there are two types here: EC2 and ELB. Let's look at EC2 first. When this is set, the group checks the EC2 instance to see if it's healthy, based on the two status checks that are always performed on EC2 instances. If either of them fails, the instance is considered unhealthy, the auto scaling group terminates it, and if your minimum capacity is set to one, it then spins up a new EC2 instance. That's the EC2 type. For the ELB type, the health check is based on an ELB health check: the load balancer pings an endpoint on the server over HTTP or HTTPS and expects a specific response, for example a 200 at a specific path. So if you have a web app, you might make a page called /health_check that returns 200; if it does, the instance is considered healthy, and if that check fails, the auto scaling group kills the instance and, again, spins up a healthy new one if the minimum requires it.

The final and most important way scaling gets triggered in an auto scaling group is scaling policies, and there are three types. We'll start with the target tracking scaling policy, which maintains a specific metric at a target value. What does that mean? Down below we choose a metric type, say average CPU utilization, and set a target value of 75%. If the metric exceeds the target, the group adds another server. Whenever we're adding instances we're scaling out, and whenever we're removing instances we're scaling in. The second type is the simple scaling policy, which scales when an alarm is breached: you create whatever CloudWatch alarm you want, choose it here, and tell the group to scale out by adding instances or scale in by removing them. This policy is no longer recommended because it's a legacy policy; there's a newer policy that's similar but more robust that replaces it. You can still use it, and it's still in the console, but it's not recommended. The one that replaces it is the scaling policy with steps (step scaling). It's the same concept of scaling when an alarm is breached, but it can escalate based on the alarm's value as it changes over time. So where before you had a single action, now you can say: if the alarm value is between one and two, add one instance; when it goes between two and three, add another; and when it exceeds three, add another still. It helps you grow in steps as that alarm value changes.
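Here's roughly what that "keep average CPU around 75%" target tracking policy looks like in boto3, assuming an auto scaling group named my-asg already exists (the name is just a placeholder).

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Target tracking: keep average CPU utilization around 75%.
# The ASG scales out when the metric is above the target and scales in below it.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",            # hypothetical ASG name
    PolicyName="keep-cpu-at-75",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 75.0,
    },
)
```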
Earlier I showed that you can do health checks based on ELBs, but I also want to show how you actually associate a load balancer with an auto scaling group. We have classic load balancers on one hand and application and network load balancers on the other, so there's a bit of variation depending on the load balancer, but it's pretty straightforward. In the auto scaling group settings there are two fields: classic load balancers and target groups. For a classic load balancer, you just select the load balancer and it's associated; it's as simple as that. With the newer load balancers there's a target group sitting between the auto scaling group and the load balancer, so you associate the target group instead. That's all there is to it.

To give you the big picture of what happens when you get a burst of traffic and auto scaling occurs, let's walk through this architectural diagram. Say we have a web server with one EC2 instance running, and all of a sudden we get a burst of traffic. That traffic comes in via Route 53, which points to our application load balancer; the ALB has a listener that sends the traffic to the target group, and our EC2 instances are associated with that target group. We get so much traffic that CPU utilization goes over 75%, and because we have a target tracking scaling policy attached that says "above 75%, spin up a new EC2 instance," that's what the auto scaling group does. The way it does it is by using a launch configuration, which is attached to the auto scaling group, and it launches a new EC2 instance. That gives you full visibility into the entire pipeline of how this actually works.

So when an auto scaling group launches a new instance, how does it know what configuration to use? That's what a launch configuration is. When you have an auto scaling group, you set which launch configuration you want it to use. Creating a launch configuration looks a lot like launching a new EC2 instance: you go through and set all the same options, but instead of launching an instance at the end, it just saves the configuration, hence the name. A couple of limitations around launch configurations you need to know: a launch configuration cannot be edited once it's been created, so if you need to update or replace it, you either make a new one or use the convenient button to clone the existing configuration and tweak it. There's also something known as a launch template, which is essentially a launch configuration with versioning; it's AWS's newer take on launch configurations. Normally when something is newer I'd recommend you use it, but so far most of the community still uses launch configurations, so the benefit of versioning hasn't carried a lot of weight.
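For reference, here's a hedged sketch of what a launch template plus an auto scaling group might look like via boto3. The AMI ID, subnet IDs, target group ARN and names are all placeholders, and the capacity and health check settings just mirror the examples from this section.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# A launch template is basically a versioned launch configuration:
# it records the AMI, instance type, key pair, security groups, etc.
ec2.create_launch_template(
    LaunchTemplateName="web-template",        # hypothetical name
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",   # hypothetical AMI
        "InstanceType": "t2.micro",
    },
)

# The ASG references the template plus min / max / desired capacity,
# the subnets to launch into, and (optionally) a target group and
# ELB-based health checks.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=1,
    MaxSize=2,
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",   # hypothetical subnets
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-targets/abc123"],  # hypothetical
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```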
So I'm not pushing you to use launch templates, but I do want you to know the difference, because it can be confusing: they look like pretty much the same thing, just with versioning added.

Now we can review the auto scaling group cheat sheet. An ASG is a collection of EC2 instances grouped for scaling and management. Scaling out is when you add servers; scaling in is when you remove servers; scaling up is when you increase the size of an instance (for example, updating the launch configuration with a larger instance type). The size of an ASG is based on the min, max and desired capacity. A target tracking scaling policy scales when the target value of a metric is breached, for example average CPU utilization exceeding 75%. A simple scaling policy triggers scaling when an alarm is breached. A scaling policy with steps is the newer version of the simple scaling policy and lets you create steps based on escalating alarm values. Desired capacity is how many EC2 instances you ideally want to run, and an ASG will always launch instances to meet the minimum capacity. Health checks determine the current state of an instance in the ASG, and they can be run against either an ELB or the EC2 instance itself. When an auto scaling group launches a new instance, it uses a launch configuration, which holds the configuration values for that new instance, for example the AMI, instance type and role. Launch configurations cannot be edited; they must be cloned or recreated, and you apply the new one manually by editing the auto scaling group settings. So there you go, that's everything for auto scaling.

We're now looking at VPC endpoints, which are used to privately connect your VPC to other AWS services and to VPC endpoint services. I have a use case here to make it crystal clear. Imagine you have an EC2 instance and you want to get something from your S3 bucket. Normally you'd use the AWS SDK and make that call, and the traffic would go out through your internet gateway to the internet, and then back into the AWS network to fetch that object from S3. Wouldn't it be more convenient if we could just keep the traffic within the AWS network? That is the purpose of a VPC endpoint: it keeps traffic inside the network. And because the traffic never leaves the network, we don't need a public IP address to communicate with these services, which eliminates the need for an internet gateway. If the only reason we had the internet gateway was to reach S3, we can now remove it and keep everything private. There are two types of VPC endpoints, interface endpoints and gateway endpoints, and we'll get into both.

Let's look at the first type: interface endpoints. They're called interface endpoints because they actually provision an elastic network interface, an actual network interface with a private IP address, and it serves as the entry point for traffic going to a supported service. If you read a bit more about interface endpoints, you'll see they're powered by AWS PrivateLink, which is described as letting you access services hosted on AWS easily and securely by keeping your network traffic within the AWS network. The PrivateLink branding has always confused me a bit, but you can basically think of interface endpoints and PrivateLink as the same thing.
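As a rough sketch, creating an interface endpoint with boto3 might look like the following; the service name (SSM here), VPC, subnet and security group IDs are all placeholders chosen just for illustration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# An interface endpoint places an elastic network interface (ENI) with a
# private IP into the subnets you choose, powered by AWS PrivateLink.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",               # placeholder VPC
    VpcEndpointType="Interface",
    ServiceName="com.amazonaws.us-east-1.ssm",   # the service to reach privately
    SubnetIds=["subnet-aaa111"],                 # placeholder subnet
    SecurityGroupIds=["sg-bbb222"],              # placeholder security group
    PrivateDnsEnabled=True,
)
```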
Again, an interface endpoint does cost something, because it's spinning up an ENI: it's about $0.01 per hour, so if you had it on for a whole month it would cost you around $7.50. Interface endpoints support a wide variety of AWS services, though not everything; here's a good list of them for you.

The second type of VPC endpoint is the gateway endpoint. A gateway endpoint is a target for a specific route in your route table, used for traffic destined for a supported AWS service. This endpoint is 100% free, because you're just adding an entry to your route table, and you'll use it mostly for Amazon S3 and DynamoDB. That first use case I showed you, where the EC2 instance talks to S3, was using a gateway endpoint. So there you go.

Here we are at the VPC endpoint cheat sheet, and this is a quick one, so let's get to it. VPC endpoints help keep traffic between AWS services within the AWS network. There are two kinds of VPC endpoints: interface endpoints and gateway endpoints. Interface endpoints cost money, whereas gateway endpoints are free. Interface endpoints use an elastic network interface (ENI) with a private IP address, powered by AWS PrivateLink. Gateway endpoints are a target for a specific route in your route table. Interface endpoints support many AWS services, whereas gateway endpoints only support DynamoDB and S3.

Hey, it's Andrew Brown from ExamPro, and we are looking at Elastic Load Balancers, abbreviated ELB, which distribute incoming application traffic across multiple targets such as EC2 instances, containers, IP addresses or Lambda functions. First, what is a load balancer? A load balancer can be physical hardware or virtual software that accepts incoming traffic and then distributes that traffic to multiple targets. It can balance the load via different rules, and those rules vary based on the type of load balancer. With Elastic Load Balancing we actually have three load balancers to choose from, and we'll go into depth on each one: the application load balancer, the network load balancer, and the classic load balancer.

To understand the flow of traffic through an ELB, we need to understand the three components involved: listeners, rules, and target groups. These vary based on the load balancer, as we'll see shortly, but let's quickly summarize what they are and then see them in context with some visualization. Listeners listen for incoming traffic and evaluate it against a specific port, whether that's port 80 or 443. Rules decide what to do with the traffic, which is pretty straightforward. Target groups are a way of collecting the EC2 instances you want to route traffic to into logical groups. Let's look first at the application load balancer and network load balancer. On the right-hand side I have traffic coming in via Route 53, which points to our load balancer. The traffic hits a listener, which checks what port it's on; on port 80 I have a simple rule that redirects to port 443, so the traffic goes to the 443 listener, and that listener has a rule attached that forwards it to target group one.
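As a rough sketch of that listener-rule-target-group chain in code, here's what wiring up an application load balancer could look like with boto3. All the IDs and names are placeholders, and to keep it short this uses a plain HTTP listener rather than the 80-to-443 redirect described above.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Target group: the logical group of EC2 instances traffic gets forwarded to.
tg = elbv2.create_target_group(
    Name="web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",       # placeholder VPC
    HealthCheckPath="/health",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Application load balancer spread across two public subnets.
alb = elbv2.create_load_balancer(
    Name="my-alb",
    Type="application",
    Scheme="internet-facing",
    Subnets=["subnet-aaa111", "subnet-bbb222"],   # placeholder subnets
    SecurityGroups=["sg-ccc333"],                 # placeholder security group
)
alb_arn = alb["LoadBalancers"][0]["LoadBalancerArn"]

# Listener on port 80 whose default rule forwards to the target group.
# (A production setup would typically add an HTTPS listener on 443 with an
# ACM certificate and redirect 80 -> 443, as described in the walkthrough.)
elbv2.create_listener(
    LoadBalancerArn=alb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```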
That target group contains all of our EC2 instances. Down below you can see where the listeners are: I have a listener at 443, and because this is an application load balancer I can also attach an SSL certificate to it. If you look over at the rules, those rules don't appear for a network load balancer, but they do appear for an ALB, so with an ALB I can have more complex rules; an NLB simply forwards to a target and you don't get those richer options (we'll see the richer options in a later slide).

Now let's talk about the classic load balancer. A classic load balancer is much simpler: traffic comes in, goes to the CLB, you have listeners that listen on their ports, and you have registered targets. There are no target groups; you just have EC2 instances registered directly with the classic load balancer.

Let's take a deeper look at all three load balancers, starting with the application load balancer. The ALB is designed to balance HTTP and HTTPS traffic and operates at layer 7 of the OSI model, which makes a lot of sense because layer 7 is the application layer. The ALB has a feature called request routing, which allows you to add routing rules to your listeners based on the HTTP protocol; the rules we looked at previously are exactly that. You can attach a Web Application Firewall (WAF) to an ALB, which also makes sense because both are application-specific. If you want a use case for the application load balancer, it's great for web applications.

Now the network load balancer: it's designed to balance TCP and UDP traffic, it operates at layer 4 of the OSI model (the transport layer), and it can handle millions of requests per second while maintaining extremely low latency. It can perform cross-zone load balancing, which we'll talk about later on. It's great for things like multiplayer video games, or whenever network performance is the most critical thing for your application.

And the classic load balancer: it was AWS's first load balancer, so it's a legacy load balancer. It can balance HTTP or TCP traffic, but not both at the same time. It can use layer-7-specific features such as sticky sessions, or it can use strict layer 4 balancing for purely TCP applications; that's what I mean by it doing one or the other. It can also perform cross-zone load balancing. One more point, because it has come up as an exam question (I don't know if it still appears): it will respond with a 504 error in the case of a timeout, meaning the underlying application is not responding; that could be the web server or the database itself. The classic load balancer is not recommended for use anymore, but it's still around and you can use it; it's just recommended to use an NLB or ALB when possible.

Now let's look at the concept of sticky sessions. Sticky sessions are an advanced load balancing method that lets you bind a user's session to a specific EC2 instance. This is useful when you have session information that's only stored locally on a single instance, so you need to keep sending that user to the same instance. Over here I have a diagram that shows how this works.
In step one, we route traffic to the first EC2 instance and it sets a cookie. The next time that person comes through, we check whether the cookie exists and send them to that same EC2 instance. This feature only works for the classic load balancer and the application load balancer; it's not available for the NLB. And if you need to set it for an application load balancer, it has to be set on the target group, not on the individual EC2 instances.

Here's a scenario you might have to worry about. Let's say a user is requesting something from your web application and you need to know their IP address. The request goes through, and on the EC2 instance you look for the source IP, but it turns out that isn't their IP address; it's the IP address of the load balancer. So how do you actually see the user's IP address? Through the X-Forwarded-For header, which is the standardized header for this when dealing with load balancers. X-Forwarded-For is a common method for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer. So you just make sure your web application reads that header, and that gives you the user's IP address.

Next, health checks for Elastic Load Balancers. The purpose of health checks is to route traffic away from unhealthy instances to healthy instances. How do we determine whether an instance is unhealthy? Through these options, which for the ALB and NLB are set on the target group, and for the classic load balancer are set directly on the load balancer itself. The idea is that we ping the server at a specific URL with a specific protocol and expect a specific response back, and if the check fails a certain number of times over the interval we specify, the instance is marked unhealthy and the load balancer stops sending traffic to it; it sets it as out of service. One thing you really need to know is that the ELB does not terminate unhealthy instances; it just redirects traffic to healthy instances.

Here we're taking a look at cross-zone load balancing, which is a setting you can enable or disable on classic and network load balancers. Let's compare it enabled versus disabled. When it's enabled, requests are distributed evenly across the instances in all enabled availability zones; here we have a bunch of EC2 instances in two different AZs and you can see the traffic is even across all of them. When it's disabled, requests are distributed evenly only across the instances within each load balancer node's own availability zone: in AZ A it's evenly distributed within that AZ, and the same over in AZ B. Down below, if you want to know how to enable cross-zone load balancing, it's under the Description tab: edit the attributes and tick the checkbox for cross-zone load balancing.

Now we're looking at an application-load-balancer-specific feature called request routing, which allows you to apply rules to incoming requests and then forward or redirect that traffic. We can match on a few different conditions; there are six in total.
The conditions are: host header, source IP, path, HTTP header, HTTP request method, and query string. Then you can see the "then" actions: forward, redirect, return a fixed response, or authenticate. Let's look at the use cases down here, where we have five different examples. One thing you could do is route traffic based on subdomain, so the app subdomain goes to the prod target group and the qa subdomain goes to the QA target group. You can do the same thing on the path, so /prod and /qa route to their respective target groups. You could do it with a query string, or by looking at an HTTP header, or you could say all GET methods go to prod and all POST methods go to QA (I don't know why you'd want that exact split, but you could). So that is request routing in a nutshell.

We made it to the end of the Elastic Load Balancer section and on to the cheat sheet. There are three Elastic Load Balancers: network, application and classic. An ELB must have at least two availability zones for it to work. Elastic Load Balancers cannot go cross-region; you must create one per region. ALBs have listeners, rules and target groups to route traffic; NLBs have listeners and target groups to route traffic; CLBs use listeners, and EC2 instances are registered directly as targets with the CLB. The application load balancer is for HTTP and HTTPS traffic and, as the name implies, is good for web applications. The network load balancer is for TCP and UDP and is good for high network throughput, so think multiplayer video games. The classic load balancer is legacy, and it's recommended to use an ALB or NLB when you can. Then you have X-Forwarded-For, and the idea there is to get the original IP of the incoming traffic passing through the ELB. You can attach a Web Application Firewall to an ALB (WAF has "application" in the name), but not to an NLB or CLB. You can attach an AWS Certificate Manager (ACM) SSL certificate to any of the load balancers to get SSL. For the ALB you have advanced request routing rules, where you can route based on subdomain, header, path and other HTTP information. And then you have sticky sessions, which can be enabled for the CLB or ALB, and the idea is that they help the session remember which EC2 instance to go to, based on a cookie.

Hey, it's Andrew Brown from ExamPro, and we are looking at security groups, which help protect our EC2 instances by acting as a virtual firewall controlling inbound and outbound traffic. As I just said, a security group acts as a virtual firewall at the instance level: you have an EC2 instance and you attach security groups to it. So what does a security group look like on the inside? Each security group contains a set of rules that filter traffic coming into the instance (inbound) and out of the instance (outbound). Here we have two tabs, inbound and outbound, and we set rules with a particular protocol, a port range, and who's allowed access. In this case I want to be able to SSH into this EC2 instance, which uses the TCP protocol, and the standard port for SSH is 22, and I'm going to allow only my IP. Anytime you see /32, that always means exactly one IP address: my IP.
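For reference, adding those kinds of inbound rules via boto3 might look like this; the security group ID and the /32 address are placeholders (the IP is from the documentation range).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Add inbound rules to an existing security group:
# SSH (TCP 22) from a single /32 address, and HTTP (TCP 80) from anywhere.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # placeholder security group
    IpPermissions=[
        {
            "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.10/32", "Description": "my IP only"}],
        },
        {
            "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public web traffic"}],
        },
    ],
)
```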
That's all you have to do to add inbound and outbound rules. There are no deny rules, so all traffic is blocked by default unless a rule specifically allows it. And multiple instances across multiple subnets can belong to the same security group. Here I have three different EC2 instances, all in different subnets, and security groups don't care about subnets: you just assign the EC2 instances to the security group, and in this case they're all in the same one, so they can all talk to each other.

Here I have three security group scenarios. They all achieve pretty much the same thing, but the configuration differs, to give you a good idea of the variations available. The setup is a web application running on an EC2 instance, connecting to an RDS database in a private subnet to get its data. In the first case, we have an inbound rule on the database's security group allowing port 5432 (the Postgres port) from a specific IP address, which lets the EC2 instance connect to the RDS database. The takeaway is that the source can be an IP range or a specific IP, and /32 is how you say exactly one IP address. The second scenario looks very similar, except instead of providing an IP address as the source, we provide another security group; now anything within that security group is allowed inbound access on 5432 (there's a code sketch of this just below). In the last case, we have inbound traffic allowed on port 80 and port 22 on the SG-public group for the EC2 instance, and the EC2 instance and the RDS database sit in their own security group. The EC2 instance is allowed to talk to the RDS database, and the database isn't exposed to the internet anyway because it's in a private subnet with no public IP. The point is that the EC2 instance can now receive traffic from the internet and also accept SSH access. The big takeaway is that an instance can belong to multiple security groups, and rules are permissive: when one group has an allow rule, that allow takes effect even though the other group says nothing, because everything is denied by default and any allow overrides that. So you can layer multiple security groups onto one instance; keep that in mind.

There are a few security group limits I want you to know about. You can have up to 10,000 security groups in a single region; the default is 2,500, and to go beyond that you need to make a service limit increase request to AWS Support. You can have 60 inbound rules and 60 outbound rules per security group. And you can have 16 security groups per ENI (elastic network interface); that's defaulted to 5. If you think about how many rules can apply to a single instance, it depends on how many security groups are attached to its network interface: 5 by default, or up to 16 at the upper limit. So those are the limits I thought were worth mentioning.
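Here's a rough sketch of scenario two from above, where the source of the rule is another security group rather than an IP; both group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Anything running in the web tier's security group may reach Postgres (5432)
# on anything in the database's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # placeholder: the database's security group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 5432, "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],  # placeholder: web tier SG
    }],
)
```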
Now let's go through the security groups cheat sheet so we're ready for exam time. Security groups act as a firewall at the instance level. Unless specifically allowed, all inbound traffic is blocked by default, and all outbound traffic from the instance is allowed by default. You can specify the source to be an IP range, a single IP address, or another security group. Security groups are stateful: if traffic is allowed inbound, the return traffic is also allowed outbound; that's what stateful means. Any changes to a security group take effect immediately. EC2 instances can belong to multiple security groups, and security groups can contain multiple EC2 instances. You cannot block a specific IP address with security groups; for that you need to use NACLs. Remember, everything is denied by default and you can only add allow rules. You can have up to 10,000 security groups per region (the default is 2,500), 60 inbound and 60 outbound rules per security group, and 16 security groups associated with an ENI (the default is 5).

Hey, this is Andrew Brown from ExamPro, and we are looking at network access control lists, also known as NACLs. A NACL is an optional layer of security that acts as a firewall for controlling traffic in and out of subnets. So NACLs act as a virtual firewall at the subnet level, and when you create a VPC you automatically get a default NACL. Just like security groups, NACLs have both inbound and outbound rules; the difference is that NACL rules can either allow or deny traffic. With security groups you can only allow, whereas with NACLs you also have deny. When you create these rules, it's much the same as security groups except that there's a rule number, and the rule number determines the order of evaluation: rules are evaluated from the lowest number to the highest, and the highest rule number can be 32766. AWS recommends using increments of 10 or 100 for your rule numbers, so you have flexibility to insert rules in between later if you need to. Again, NACLs work at the subnet level, so for them to apply you associate subnets with a NACL, and a subnet can only belong to a single NACL. So where an instance can belong to multiple security groups, for NACLs it's a one-to-one association per subnet.

Let's look at a use case for NACLs, which really centres on that deny ability. Say there's a malicious actor trying to gain access to our instances and we know their IP address; we can add a rule to our NACL denying that IP address. And say we know we never need to SSH into these instances, and we want an additional guarantee in case someone misconfigures a security group: we can also deny port 22. Now we have both cases covered.

Alright, we're on to the NACL cheat sheet. Network access control list is commonly known as NACL.
VPCs are automatically given a default NACL, which allows all inbound and outbound traffic. Each subnet within a VPC must be associated with a NACL, and subnets can only be associated with one NACL at a time; associating a subnet with a new NACL removes the previous association. If a NACL is not explicitly associated with a subnet, the subnet is automatically associated with the default NACL. NACLs have both inbound and outbound rules, just like a security group, but the rules can either allow or deny traffic, unlike a security group, where you can only apply allow rules. NACLs are stateless: allowing traffic inbound does not automatically allow the response outbound; you have to set rules for both directions individually, and that's why they're considered stateless. When you create a new NACL, it denies all traffic by default. NACLs contain a numbered list of rules that get evaluated in order from lowest to highest. If you need to block a single IP address, you can do it with a NACL, whereas with a security group you cannot, because security groups have no deny rules and it would be very difficult to block a single IP with allow rules alone. So there you go, that's your NACL cheat sheet.

Hey, this is Andrew Brown from ExamPro, and we are starting the VPC follow-along. This is a very long section, because we need to learn about all the networking components we can create: we're going to learn how to create our own VPC, subnets, route tables, internet gateways, security groups, NAT gateways and NACLs; we're going to touch it all. It's very core to learning AWS, and it's great to get it out of the way.

So let's start off by creating our own VPC. On the left-hand side, click on Your VPCs, and right away you'll see we already have a default VPC in this region, North Virginia. Your region might be different from mine; it doesn't matter too much which region you use, although different regions have different numbers of available AZs, so I'd strongly suggest switching to North Virginia to make this section a little smoother for you. Notice that the default VPC uses an IPv4 CIDR block of 172.31.0.0/16. And if I change regions, no matter which region we go to (let's try US West, Oregon), we'll find there's already a default VPC there as well, with the same CIDR block range. So be aware that AWS gives you a default VPC so you can start launching resources immediately without worrying about all this networking, and there's no harm in using the default VPC; it's totally acceptable to do so. But we definitely need to know how to do this ourselves, so we're going to create our own VPC. I'm a big fan of Star Trek, so I'm going to name it after the planet Bajor, which is a very well-known planet in the Star Trek universe. I have to provide my own CIDR block, and it can't be one that already exists, so I can't use the 172.31 range AWS was using; I'm going to use 10.0.0.0/16. There's a bit of rhyme and reason to choosing these, and this one is very commonly chosen. You might be looking at this and wondering: what is this whole thing with the IP address slash 16?
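As a quick illustration of what that /16 means in terms of address count, here's a tiny snippet using Python's standard ipaddress module, with the ranges used in this follow-along:

```python
import ipaddress

# Compare the VPC-sized range with a subnet-sized range.
for cidr in ["10.0.0.0/16", "10.0.0.0/24"]:
    net = ipaddress.ip_network(cidr)
    print(cidr, "->", net.num_addresses, "addresses")

# 10.0.0.0/16 -> 65536 addresses
# 10.0.0.0/24 -> 256 addresses
# (AWS reserves 5 addresses in every subnet, so a /24 subnet leaves 251 usable.)
```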
We'll explain CIDR notation properly in a separate video, but to give you a quick rundown: the IP address you enter is the start of your range, and the /16 says how many IP addresses you want to allocate. We'll cover that more later on. Next we have the option to set an IPv6 CIDR block; just to keep it simple I'm going to leave it off, but IPv6 is obviously supported on AWS and it is the future of the IP protocol, so it's definitely something you might want to turn on and be prepared for. Then we have this tenancy option, which would give us dedicated hosts for our VPC; that's an expensive option, so we'll leave it at default and go ahead and create our VPC. And there it is; creation was basically instantaneous. We'll click through to it, and now we can see our VPC named Bajor. Notice we have our IPv4 CIDR range, there's no IPv6 set, and by default it gives us a route table and a NACL. We're going to replace the route table, because we want to learn how to do that ourselves; the NACL is not as important, so we'll gloss over it.

There's just one more thing we have to do. If you look down below, DNS hostnames are disabled by default, and if we launch an EC2 instance without that setting, it won't get a DNS hostname, which is just a URL you can use to reach the instance. We definitely want to turn that on, so drop down Actions, edit the DNS hostnames setting, and set it to enabled. Now we'll get those hostnames and it won't cause us pain later down the road.

Now that we've created our VPC, we want to make sure the internet can actually reach it, so next we'll learn about internet gateways. We have our VPC, but it has no way to reach the internet, so we need an internet gateway. On the left-hand side, go to Internet Gateways, and we'll create a new one; I'm just going to name it after Bajor as well, since a name never hurts. Our internet gateway has been created, so click through to it, and you'll see it's in a detached state. An internet gateway can only be attached to one specific VPC; it's a one-to-one relationship, so for every VPC you'll have one internet gateway. You can see it's detached and there's no VPC ID, so drop down Actions, attach it to a VPC, select Bajor, and attach. Now it's attached and we can see the VPC ID associated with it.

We have an internet gateway, but that still doesn't mean things inside our network can reach the internet, because we have to add a route to our route table. Closing this tab, you can see there's already a route table associated with our VPC, because creating the VPC created a default route table. I'll click through to it to show you; you can see it's the main route table, because Main is set to yes. But I want you to learn how to create route tables, so we're going to make one from scratch. Hit Create route table.
We'll name it something like our internet route table, shortened to RT, drop down and choose Bajor, then create the route table and hit Close. Clicking out of there, we can see all our route tables: the main one that came with Bajor, and the one we just created. If we click into the new route table, you can see that by default it only has the local route covering our VPC's network. I want to show you how to make this one the main route table, so select it and choose Set as main route table; the main route table is the one subnets use by default. Then we'll go ahead and delete the original default one, since we don't need it anymore. Now select our new route table, edit the routes, and add one for the internet gateway: enter 0.0.0.0/0, which means traffic to anywhere, then drop down, select Internet Gateway, choose the Bajor IGW, and save the routes. Hit Close, and now we have an internet gateway and a way for our subnets to reach the internet.

Now that we have a route to the internet, it's time to create some subnets, so we have somewhere to actually launch our EC2 instances. On the left-hand side, go to Subnets, and right away you'll see some subnets: these are the default ones created with your default VPC, and there are exactly six of them, one for every availability zone in the region, because North Virginia has six AZs. The reason we know these are public subnets is that if we click on one and check the auto-assign public IPv4 setting, it's set to yes. If that's set to yes, any EC2 instance launched into the subnet gets a public IP address, and hence it's considered a public subnet. If we switch over to Canada Central, just to make a point, you'll see that another region has a different number of availability zones: Canada only has two, which is a bit sad (we'd love to have a third), and again there's exactly one default subnet per availability zone. So we'll switch back to North Virginia and proceed to create our own subnets.

We're going to create at least three public subnets if we can, because a lot of companies, especially enterprises, have to run in at least three availability zones for high availability: if one goes out and you only have one other, what happens when two go out? Hence the rule of always having at least two additional AZs. So we're going to create three public subnets and one private subnet; we're not going to create three private subnets, just because I don't want to be making subnets here all day.
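For reference, the same subnet layout we're about to click through in the console could be scripted roughly like this with boto3; the VPC ID is a placeholder for the Bajor VPC, and the names, CIDRs and AZs just mirror the plan above.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc_id = "vpc-0123456789abcdef0"   # placeholder for the Bajor VPC ID

# Three public /24 subnets (one per AZ) and one private /24,
# carved out of the VPC's 10.0.0.0/16 range.
layout = [
    ("bajor-public-a",  "10.0.0.0/24", "us-east-1a", True),
    ("bajor-public-b",  "10.0.1.0/24", "us-east-1b", True),
    ("bajor-public-c",  "10.0.2.0/24", "us-east-1c", True),
    ("bajor-private-a", "10.0.3.0/24", "us-east-1a", False),
]

for name, cidr, az, public in layout:
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)
    subnet_id = subnet["Subnet"]["SubnetId"]
    ec2.create_tags(Resources=[subnet_id], Tags=[{"Key": "Name", "Value": name}])
    if public:
        # Auto-assigning public IPv4 addresses is what makes this a "public" subnet.
        ec2.modify_subnet_attribute(SubnetId=subnet_id,
                                    MapPublicIpOnLaunch={"Value": True})
```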
Let's create our first subnet in the console. I'm going to name it "Bajor public A", select our VPC, choose us-east-1a, and give it a CIDR block of 10.0.0.0/24. Notice this CIDR range is smaller than the VPC's; I know the number is larger, but in terms of how many IP addresses it allocates, there are fewer here. You're taking a slice of the pie from the larger range, so while the VPC is a /16, the subnet's prefix will always be a higher number than 16, which means a smaller range. Go ahead and create the first public subnet and hit Close. It isn't public by default, because auto-assign public IPv4 is set to no, so go up to modify that setting and enable it; now it's considered a public subnet. We'll do the same for B and C: "Bajor public B" in us-east-1b with 10.0.1.0/24, create it, close, and enable auto-assign; then "Bajor public C" in us-east-1c with 10.0.2.0/24, create it, close, double-check which ones have auto-assign set, and enable it where it isn't yet. Then we'll create one more subnet, "Bajor private A", in us-east-1a with 10.0.3.0/24; this is going to be our private subnet.

So we've created all of our subnets, and the next thing is to associate them with a route table. Actually, we don't have to for the public ones, because by default they use the main route table, so they're already automatically associated. But for our private subnet, we don't really want to use the main route table (the one with the internet route), so we'll create our own route table for the private side. I'll create a new one, call it "private RT", drop down, choose Bajor, and hit Close. The idea is that this subnet doesn't need to reach the internet, so it doesn't make sense for it to have that route, and we can add other things to this table later on. So edit the route table association for the private subnet and change it to the private route table. Now our route tables are set up and we can move on to the next step.

Our subnets are ready, and now we can launch some EC2 instances so we can play around with and learn some of these other networking components. At the top, type in EC2, go to the EC2 console, go to Instances on the left-hand side, and we'll launch ourselves a couple of instances. The first instance is going to be for our public subnet: choose t2.micro, go next, and choose the Bajor VPC that we created.
We're going to launch this in the public subnet, Bajor public A, and we're going to need a new IAM role. I'll right-click here and create a new IAM role in another tab, because we want to give the instance access to both SSM, for Session Manager, and S3. Choose EC2 as the service, type in SSM and pick the SSM policy at the top, then type in S3 and give it S3 full access, go next, name it "my-bajor-ec2", and create the role. Now we have the role we need for our EC2 instance, so back in the launch wizard, refresh the IAM role dropdown and choose my-bajor-ec2. We also want to provide a script for the instance to run on boot. I have one pre-prepared that I'll provide to you, public-user-data.sh; if you take a peek at what it does, it just installs an Apache server and serves up a static website page. Go on to storage (nothing needs to change there), skip tags, and on to security groups: create a new security group, call it "my-bajor-ec2-sg", and make sure we allow HTTP, because this is a website, so port 80 needs to be open, restricted down to just our IP; we might as well do the same for SSH. Review and launch the instance. I already have a key pair created; you'll just have to create one if you don't. Launch the instance. Great, so that EC2 instance is for our public subnet.

Now we'll launch another instance. Choose Amazon Linux 2, t2.micro, and this time choose our private subnet. I do want to point out that the auto-assign public IP setting here defaults to disabled, because it inherits whatever the subnet has, whereas for the first one, you might not have noticed, it was set to enable. Give it the same role, my-bajor-ec2, and this time give it the other script, the private one. Let me open it up and show you: it doesn't actually need to install Apache, so we'll remove that line (it's just left over). What it does is reset the password on the ec2-user to "kaiwinn" (Kai Winn is a character from Star Trek: Deep Space Nine), and it enables password authentication, so we can SSH in using a password. That's all the script does. Choose that file, move on to storage (storage is fine), skip tags, and for security groups we'll create a new one. It's not strictly necessary, but I'm going to do it anyway: I'll name it "my-bajor-private-ec2-sg", keeping Bajor in the name.
That way they all stay grouped together by name. This instance only needs SSH; there's no website or anything running on it, so we won't open anything else. Review and launch, choose the key pair, and launch the instance. Now we just wait for these two instances to spin up, and then we'll play around with security groups and NACLs.

I just had a quick coconut water and now I'm back, and our instances are running; they don't usually take that long to start. We probably should have named them to make this easier, so we need to determine which is public and which is private. You can see right away that one of them has a public DNS hostname and a public IP address, so that's how we know it's the public one; I'll name it "Bajor public". The other one is the private one, so I'll name it "Bajor private". Just to reiterate: for the public one we have the DNS name and the public IP address, and for the private one nothing is set.

Let's see if our website is working. I'll copy the public IP address (we could take the DNS name, it doesn't matter) and paste it into a new tab, and here we have our working website; the public instance is definitely working. If we check the private one, there's nothing to copy; we could even paste its private IP in here, but there's no way of accessing a website running on the private instance from the internet. It doesn't really make a lot of sense to run your website in a private subnet anyway, but this makes the difference very clear.

Now that we have these two instances, it's a good opportunity to look at security groups in action. We created a security group, and the reason we were able to access this instance publicly is that our security group has an inbound rule on port 80, which is what websites are served on when you access them through the browser, and that rule allows my IP; that's why I was allowed in. To illustrate, let me show you what happens if I change my IP. At the top here I have a VPN; it's a service you can buy, and a lot of people use it so they can watch Netflix in other regions, but I use it for demos like this, not for that, so don't get any ideas. I'm going to turn it on and change my IP; I think this endpoint is Brazil, so I'll have an IP from Brazil shortly once it connects. Now, if I try to access the site, it shouldn't work. I'll close that tab, try again, and it just hangs, because I'm no longer coming from the allowed IP. That's how security group inbound rules work. I'll turn the VPN off, get my original IP back, and the page resolves instantly again. As for outbound rules, that's traffic going out to the internet, and it's almost always left open with 0.0.0.0/0, because you want your instance to be able to download packages and so on; that's pretty normal business.
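Before moving on, here's a rough sketch of what the public instance launch we just clicked through might look like if scripted with boto3. All IDs and names are placeholders, and the user-data script is only an approximation of the one provided with the course (install Apache, serve a static page).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Approximation of public-user-data.sh: install Apache and serve a static page.
user_data = """#!/bin/bash
yum install -y httpd
echo '<h1>Hello from Bajor</h1>' > /var/www/html/index.html
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder Amazon Linux 2 AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-aaa111",              # placeholder: the public subnet
    SecurityGroupIds=["sg-bbb222"],        # placeholder: allows HTTP/SSH from my IP
    IamInstanceProfile={"Name": "my-bajor-ec2"},   # placeholder instance profile
    KeyName="my-keypair",                  # placeholder key pair
    UserData=user_data,
)
```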
So now that we can see that, let's show off how NACLs (network ACLs) work compared to security groups. Security groups, by default, can only allow things: everything is denied, and you open things up by adding allow rules only; you can't add an explicit deny rule. Where NACLs are very useful is that you can use them to block specific IP addresses, or IP ranges if you will, and you cannot do that with a security group. Think about how you would: if I wanted to block just my IP address with a security group, the only way would be to allow every other IP address in the world except mine, and you can see what an undue burden that would be.

So let's set our NACL to block just our IP address. Security groups are associated with the actual EC2 instances, whereas NACLs are associated with subnets. So in order to block our IP address for this EC2 instance, we have to determine what subnet it runs in, and it runs in our Bayshore public one. Now we have to find the NACL associated with it, so going up to Subnets, I'll click public-a and see which network ACL is attached, and it's this one here, and it has some rules we can change.

Let's try blocking my IP address; I'll just grab it from here. One thing to note: see how it has this /32 on the end? That's a CIDR block range of exactly one IP address; /32 is how you specify a single IP. I'm going to edit the NACL, go to inbound rules, and add a new rule. Rules are evaluated from the lowest number to the highest, so I'll add rule 10, put my IP in as the CIDR range, set it to port 80, and make it an explicit deny. This should mean I can no longer access that EC2 instance. Going back to our instances, grabbing that public IP and pasting it in, I do not have access any more, so the NACL is now blocking it. That's how you block individual IP addresses. Now I'll go back and edit the rules again, remove that deny rule and hit save, and after a refresh I have access again. So there you go, that is security groups and NACLs.

The next thing to figure out is how we actually get access to the private subnet. Our private EC2 instance has no public IP address, so there's no direct way to reach it; we can't just SSH into it from the internet. This is where we're going to need a bastion, so let's go set one up. What I want you to do is launch a new instance; I'm going to open a new tab, just in case I want this old one, and hit Launch Instance.
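Before we get into the bastion, one quick CLI reference for the NACL rule we just added and removed; the ACL ID and IP address below are placeholders:

  # rule 10 is evaluated before the higher-numbered allow rules (lowest number wins)
  aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress --rule-number 10 --rule-action deny \
    --protocol tcp --port-range From=80,To=80 \
    --cidr-block 203.0.113.10/32
  # remove the deny again once you're done testing
  aws ec2 delete-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress --rule-number 10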
Okay, so I'm going to go to the AWS Marketplace tab and type in "bastion". We have some options here; there's a free Bastion Host SSH one, but I'm going to be using Guacamole, and there is an associated cost with it, though they do have a trial version, so you can get away without paying anything. So I'll proceed and select Guacamole. Any time you're using something from the Marketplace, they'll generally have instructions: if you view the additional details and scroll down to the usage instructions, there's extra information there. I'm just going to open that up in a tab; I've done this a few times, so I remember where everything is. Then we hit Continue and start setting up this instance.

We're going to need a small; this AMI doesn't let you pick a micro, so there is an associated cost there. On configure instance, we want it in the same VPC as our private instance, and we have to launch it in a public subnet, so just make sure you select the public one. We also need to create a new IAM role; this is part of the Guacamole instructions, because you need to give it some access so it can auto-discover instances. The instructions tell you to make an IAM role; you could launch a CloudFormation template to create it, but I'd rather make it by hand. So we grab the policy JSON from the instructions, open a new tab, and make our way over to IAM.

Once we're in IAM, we create the policy: go to JSON, paste it in, review the policy, and name it; they have a suggestion for the name, GuacAWS, which seems fine to me. You can see it's going to give us permissions to CloudWatch and STS, so we go ahead and create that policy. In my case it says it already exists, because I've set it up before, so I'll skip that step; you'll be able to get through it without a problem.

Once you have the policy, you have to create a new role. So we create a role for EC2, go next, and we want EC2 read-only access, so attach the AmazonEC2ReadOnlyAccess managed policy, and we also want to attach that new GuacAWS policy; the search is giving me a hard time, so I'll just copy and paste the whole name in, and there it is. Those are the two policies you need attached. Then we name it, I'll call it my-guac-bastion-role, and create the role. That role has now been created, so we go back to the launch wizard, refresh the IAM roles, and there it is, my-guac-bastion-role.
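For reference, the same role setup can be scripted with the AWS CLI. The GuacAWS policy document itself comes from the Guacamole usage instructions, so I'm not reproducing it here, and the account ID and file name below are placeholders:

  # standard trust policy so EC2 instances can assume the role
  aws iam create-role --role-name my-guac-bastion-role \
    --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
  # the custom policy JSON is the one supplied in the Marketplace instructions
  aws iam create-policy --policy-name GuacAWS --policy-document file://guac-aws.json
  aws iam attach-role-policy --role-name my-guac-bastion-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess
  aws iam attach-role-policy --role-name my-guac-bastion-role \
    --policy-arn arn:aws:iam::123456789012:policy/GuacAWS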
By the way, I spelled bastion wrong in that role name, but I don't think that really matters. Then we go to storage, where there's nothing to do, skip tags, and go to security groups; you can see it comes with some default configuration, so we're going to leave that alone and launch this EC2 instance. It's taking a bit of time to launch, and as soon as it's done we'll come back and actually start using this bastion to get into our private instance.

So our bastion is now provisioned. Let's name it bastion so we don't lose track of it later, and grab either the DNS name or the public IP; I'll just grab the DNS one. We're going to get a connection-is-not-private warning, and that's fine, because we're definitely not using SSL here, so just hit advanced and click to proceed. It might also ask you to allow some permissions; we're definitely going to say allow, because that's for the more advanced Guacamole functionality, which we might touch on at the end. Then we need the username and password: the default username is guacadmin, and the password is the instance ID. This is all in the usage instructions, I'm just walking you through it. We hit login, and it has auto-discovered the instances in the VPC it's launched in, so here we have Bayshore private. Let's go ahead and connect to it. As soon as I click, it opens a shell, and we can attempt to log in: the user is ec2-user, and I believe our password is kaiwinn. And we are in our instance. So that's how we gain access to our private instance.

Just before we start doing other things in this private EC2 instance, I want to touch on some of the functionality of Guacamole, and why you might actually want to use a bastion. It is a hardened instance, it allows you to authenticate via multiple methods, so you can enable multi-factor authentication to use it, it has the ability to do screen recordings, so you can really be sure what people are up to, and it has built-in audit logs. So there are definitely some good reasons to use a bastion. But we can also use Session Manager, which does a lot of this for us within AWS, with the exception of screen recording.

Anyway, now that we're in our instance, let's play around and see what we can do. First I want to show you that it doesn't have any internet access. If I ping something like Google, it just hangs and we never get a ping back, because there is no route to the internet. The way we're going to get a route to the internet is by creating a NAT instance or a NAT gateway. Generally you want to use a NAT gateway; there are cases for NAT instances, for example if you're trying to save money you can do that by managing a NAT instance yourself, but we're going to learn the NAT gateway because that's the way AWS wants you to go. So back in our console, we're in EC2 instances, and we're going to have to switch over to VPC, because that's where NAT gateways live.
On the left-hand side, we can scroll down, and under VPC we have NAT Gateways. So we're going to launch ourselves a NAT gateway. NAT gateways do cost money; they're not terribly expensive, but at the end of this we'll tear it down. The idea is that we need to launch this NAT gateway in a public subnet, so I'm going to launch it in Bayshore public A; it doesn't matter which one, it just has to be one of the public ones. We also need an elastic IP here; I wasn't sure whether it was actually required, but it is, so we'll hit Create Elastic IP, and that's just a static IP address that never changes. Now that it's associated with our NAT gateway, we'll go ahead and create it, and it looks like it's been created.

Once your NAT gateway is created, the next thing to do is edit your route table, so there actually is a way for that private instance to reach the internet. We created a private route table specifically for our private EC2, so here we're going to edit the routes and add a route to that NAT gateway: the destination is 0.0.0.0/0, and the target is our NAT gateway, and then we save the route. So now our NAT gateway is configured, and there should be a path for our instance to get to the internet. Back over in our private EC2 instance, we ping Google again, and this time we get pings back. That's all we had to do to get internet access.

So why would our private EC2 instance need to reach the internet? We don't want inbound traffic, but we definitely want outbound, because we'd probably want to update packages on the instance, for example with sudo yum update; we wouldn't be able to do that without an outbound connection. So it's a way of getting access to the internet only for the outbound things we need.

All right, so we had a fun time playing around with our private EC2 instance, and we're pretty much wrapped up here. There are other things in the VPC console, but at the associate level there's not much reason to get into all of them. I do want to show you one more thing for VPCs, though, which is VPC flow logs. Flow logs track the traffic that flows through your VPC, and it's just nice to know how to create one. So go over to your VPC and create a flow log; the filter can be set to accept, reject, or all, and I'm going to set it to all, and it can be delivered either to CloudWatch Logs or to S3. CloudWatch is a very good destination for it. In order to deliver it there, we're going to need a destination log group, and I don't have one, so let's open CloudWatch in a new tab.
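Before we hop over to CloudWatch, here's roughly what that NAT gateway and route table change look like from the AWS CLI, with placeholder subnet, allocation, route table, and NAT gateway IDs:

  # allocate a static (elastic) IP, then create the NAT gateway in a public subnet
  aws ec2 allocate-address --domain vpc
  aws ec2 create-nat-gateway \
    --subnet-id subnet-0publicaexample \
    --allocation-id eipalloc-0123456789abcdef0
  # send all internet-bound traffic from the private route table through the NAT gateway
  aws ec2 create-route \
    --route-table-id rtb-0privateexample \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0123456789abcdef0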
Once we're in CloudWatch, we're going to create ourselves a new log group: go to Actions, Create log group, and I'll call it something like bayshore-vpc-flow-logs, and hit Create. Now if we go back to the flow log form and refresh, that destination is available to us. We're also going to need an IAM role associated with this that has permissions to publish to CloudWatch Logs, so we definitely need to set that up, and I'll pop back with that in place in a couple of seconds. I wanted to collect a little bit of flow log data so I could show it off to you, and you can see here that under our VPC we now have flow logs enabled.

So now we're done the VPC section, let's clean up whatever we created so we're not incurring any costs. We'll make our way over to EC2 instances, and you can easily filter down to the instances in that VPC by filtering on the VPC ID. These are the three instances that are running, and I'm just going to terminate them all, because we don't want to use up our free credits or incur cost because of that bastion. So we hit Terminate and those will shut down. We also still have that VPC endpoint running, and double-check that your NAT gateway isn't still there either. Back in the VPC section, there's our gateway endpoint for S3, so we'll go ahead and delete that; I don't believe it costs us any money, but it doesn't hurt to clean it up.

Hey, this is Andrew Brown from ExamPro, and we are looking at Identity and Access Management, IAM, which manages access for AWS users and resources. So now it's time to look at the IAM core components, and the first of those are the identities: users, groups, and roles. Let's go through them. A user is an end user who can log into the console or interact with AWS resources programmatically. Then you have groups, which is when you take a bunch of users and put them into a logical grouping so they have shared permissions; that could be administrators, developers, auditors, whatever you want to call them. Then you have roles, and roles have policies associated with them; that's what holds the permissions, and you can then assign a role to users or groups. And down below you have policies, which are JSON documents that define the rules for which permissions are allowed. Those are the core components, but we'll get into all of them in more detail.

Now that we know what the core components are, let's talk about how we can mix and match them. Starting at the top, we have a bunch of users in a user group, and if we want to apply permissions en masse, all we have to do is create a role with policies attached to it, and once we attach that role to the group, all of those users have the same permissions; that's great for administrators, auditors, or developers, and it's generally the way you want to use IAM when assigning permissions to users. You can also assign a role directly to a user, and there's also a way of assigning a policy directly to a user, which is called an inline policy.
So why would you do this? Maybe you have exactly one action you want to attach to this user, and you only want it for a temporary amount of time; you don't want to create a managed policy because it's never going to be reused for anybody else. There are use cases for that, but generally you want to stick with the top-level approach. A role can have multiple policies attached to it, and a role can also be attached to certain AWS resources. There are cases where resources have inline policies directly attached to them, and cases where roles are attached to, or somehow associated with, resources. But generally, that's the mix and match of it. If you were taking the AWS security certification, this stuff matters in detail, but for the associate and pro level you just need to conceptually know what you can and cannot do.

In IAM you have different types of policies. The first is managed policies: these are created by AWS out of convenience for the most common permissions you may need. So over here we have AmazonEC2FullAccess; you can tell it's an AWS managed policy because it says it's managed by AWS, and an even further indicator is the little orange box icon. Then you have customer managed policies: these are policies created by you, the customer, and they are editable, whereas the AWS managed policies are read-only; they're marked as customer managed and don't have that orange box. And last are inline policies. Inline policies aren't really managed at all, because they're one-and-done: they're intended to be attached directly to a user or directly to a resource, and you can't apply them to more than one identity or resource. So those are your three types of policy.

Now it's time to actually look at a policy, and we'll walk through all the sections so we can fully understand how these things are put together. The first thing is the version, and that's the policy language version. If this changes, then all the rules here could change, but it doesn't change very often; you can see the last time was 2012, so it's probably going to be years until they change it, and if they did, the changes would probably be minor. Then you have the statement, and the statement is just a container for the other policy elements. Here I have an array, so we have multiple statements, but if you only wanted one, you could drop the square brackets and have a single policy element.

Going into the actual policy element, the first thing we have is the Sid, which is optional; it's just a way of labeling your statements, and Sid stands for statement identifier. Then you have the effect, which can be either Allow or Deny, and that sets the access for the rest of the statement. Next is the action: actions can be individual, like the single IAM action here, or we can use a wildcard (asterisk) to select everything under S3, and these are the actual actions the policy will allow or deny. You can see we have a deny policy here, and we're denying all access to S3 for a very specific user, which gets us into the principal. The principal is a somewhat optional field as well.
What you can do with the principal is specify an account, a user, a role, or a federated user to which you would like to allow or deny access. So here we're really saying: hey, Barkley, you're not allowed to use S3. Then you have the resource, which is the actual thing we're allowing or denying access to; in this case it's a very specific S3 bucket. And the last thing is the condition, which is going to vary based on the resource; we have one here, and I'm just showing you that conditions exist. So there you go, that is the makeup of a policy. If you can master these things, it's going to make your life a whole lot easier, but just learn what you need to learn.

You can also set up password policies for your users, so you can set things like the minimum password length or the rules for what makes up a good password. You can also rotate passwords, so they expire after X days and the user must then reset their password. Just be aware that you have the ability to enforce password rules.

Let's take a look at access keys, because this is one of the ways you can interact with AWS programmatically, either through the AWS CLI or the SDK. When you create a user and allow programmatic access, it's going to create an access key for them, which is an access key ID and a secret access key. One thing to note is that users can only have up to two access keys on their account; down below you can see we have one, and as soon as we add a second, the gray Create access key button vanishes, so if we wanted more we'd have to remove a key first. You can also make access keys inactive.

Let's quickly talk about MFA. MFA can be turned on per user, but there's a caveat: the user has to be the one to turn it on, because when you turn it on you have to connect it to a device, and your administrator isn't going to have that device. So there's no option for an administrator to go in and force MFA on a user; it cannot be enforced directly from an administrator or root account. What the administrator can do, though, is restrict access to resources so that only people using MFA can reach them. So you can't make the user account itself require MFA, but you can definitely restrict access to API calls and things like that.

Temporary security credentials are just like programmatic access keys, except they are temporary, and they're used in the following scenarios: identity federation, delegation, cross-account access, and IAM roles. We're going to look in more detail at identity federation and cross-account access in the upcoming slides. But let's talk about how they're different from programmatic access keys. The first thing is that they last from minutes to an hour, so they're very short-term credentials, and they're not stored with the user; they're generated dynamically and provided to the user when requested. Regular access keys are strongly linked to a specific user, but in this case we just hand out credentials when they're needed. And these temporary credentials are the basis for roles and identity federation.
So you've actually been using temporary security credentials this entire time, you probably just don't know it. Any time you use an IAM role, it generates temporary credentials through STS behind the scenes; you just don't notice, because AWS does it for you. But let's dig deeper into this.

We said that one use case for these temporary security credentials is identity federation, so let's talk about what that actually is. Identity federation is the means of linking a person's electronic identity and attributes stored across multiple distinct identity management systems. That definition is a little confusing, so my take on it is that identity federation allows users to exist on a different platform. An example would be that your users are on Facebook, but they can gain access as if they were users in AWS; the idea is that their identities are hosted somewhere else, whether that's Facebook, Google, Active Directory, or whatever.

IAM supports two types of identity federation: enterprise identity federation and web identity federation. The protocols you can use for enterprise identity federation are SAML, which is compatible with Active Directory, a very popular Microsoft Windows identity system, or custom federation brokers, which let you connect to other identity systems. Generally with enterprise identity federation you're doing single sign-on, but for the scope of the developer associate we don't really need to know about that. What we do need to know about is web identity federation, and you've used it before if you've ever clicked a button like "connect with Facebook" or "connect with LinkedIn" to quickly sign into a service. Amazon has one, Facebook has one, Google has one, LinkedIn, etc. The protocol they generally adhere to is OpenID Connect, OIDC, which is built on top of OAuth 2.0; you might have heard of OAuth, and OIDC is built on top of it.

So now we have an idea of identity federation, let's dig a little deeper. We've been talking about these temporary security credentials, but how do we actually get hold of them? That's where the Security Token Service, STS, comes into play. It's a web service that enables you to request temporary, limited-privilege credentials, either for IAM users or for federated users, the users outside of AWS we just talked about. STS is a global service, and if you were to go into the AWS console and type in STS, nothing would come up, because you can't access it through the console, only programmatically; there's an endpoint for it, sts.amazonaws.com, and that's where the requests go.

What STS returns is an access key ID, a secret access key, a session token, and an expiration. The first two should look very familiar, because it's the same thing you get when you set up programmatic access for a user. In fact, you can take that access key ID and secret access key, put them in your AWS credentials file, and make a profile; of course these are temporary, so it's not great for long-term use, but the point is that they're exactly the same as normal credentials, just temporary.
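Just to illustrate that last point, if you had a set of temporary credentials back from STS, you could drop them into a named profile with the AWS CLI like this; the values are placeholders, and the extra piece compared to a normal user is the session token:

  aws configure set aws_access_key_id     ASIAEXAMPLEKEYID      --profile temp-session
  aws configure set aws_secret_access_key wJalrEXAMPLESECRETKEY --profile temp-session
  aws configure set aws_session_token     AQoDYEXAMPLETOKEN     --profile temp-session
  # works like any other profile until the credentials expire
  aws s3 ls --profile temp-session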
Now, if you want to actually get one of these tokens, you have to use the API actions for STS, through either the SDK or the CLI. There's a list of them, and the top three are the ones most likely to be in use. We have AssumeRole, which is used any time an IAM role gives you temporary credentials; remember, every time you use an IAM role, AWS calls AssumeRole for you behind the scenes, and cross-account roles use AssumeRole as well. Then you have AssumeRoleWithSAML, which is for enterprise identity federation. And then you have AssumeRoleWithWebIdentity, which is the one we really need to know for the developer associate, and that's for authenticating with Facebook, Google, and so on. Let's look at that one in more detail and see how it works.

AssumeRoleWithWebIdentity returns a set of temporary credentials for use in an authorized mobile or web application. So you have a developer, and the first thing they do is authenticate with whatever the web identity provider is, so we send an OAuth call to Facebook as an example, and Facebook sends back a JSON Web Token, a JWT. Once we have that, we can use something like the CLI (you could also use the AWS SDK) to call assume-role-with-web-identity, passing along that JWT. The STS service then determines whether it's going to give us a token or not, and if so it passes back the temporary credentials. Now that we have those temporary credentials, we can use the CLI or other means to access whichever AWS services in our account we've decided that identity is allowed to reach.

So that's the process for getting temporary credentials with AssumeRoleWithWebIdentity. What's really important to note is the order in which this happens, because this definitely shows up on the exam: they'll give you a scenario and ask about the order. Just know that you always authenticate with the web identity provider first, and then you get the token from STS second. Remember that it's always the web identity first and you'll definitely score points on the exam.

I was saying earlier that cross-account roles are another thing that uses STS, the Security Token Service, so let's talk about why you would want to make one. The whole purpose is to allow you to grant a user from another AWS account access to resources within your own account, and the advantage is that you don't have to create a user for them in your own account. Here's an example: I have account A, and account B wants to grant me access to specific resources. What they do is create a role for me, which is a cross-account role, and that's what grants me access. How does that role actually do that? Well, there's a policy attached to it that grants permission to call sts:AssumeRole. We said STS is what issues those temporary credentials, and we saw there were a bunch of actions, one being AssumeRoleWithWebIdentity and another being AssumeRole; AssumeRole is what grants us access cross-account, and it happens seamlessly, so you don't have to do anything extra. But yeah, that is cross-account roles.
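To make the web identity flow a bit more concrete, the STS call itself looks something like the sketch below. The role ARN is hypothetical, and the JWT would be whatever token the identity provider (Facebook, Google, etc.) handed back in the first step:

  # step 1 happens with the identity provider; step 2 exchanges the JWT for temporary AWS credentials
  aws sts assume-role-with-web-identity \
    --role-arn arn:aws:iam::123456789012:role/web-app-federated-role \
    --role-session-name federated-user-session \
    --web-identity-token "$JWT_FROM_IDENTITY_PROVIDER" \
    --duration-seconds 900
  # the response contains AccessKeyId, SecretAccessKey, SessionToken, and Expiration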
Hey, this is Andrew Brown from ExamPro, and we are going to do the IAM follow-along. So let's make our way over to the IAM console: just go up to Services, type in IAM, and we'll get to learning this right away. Here I am on the IAM dashboard, and there are a couple of things AWS wants us to do: it wants us to set MFA on our root account, and it also wants us to apply an IAM password policy so our passwords stay very secure. So let's take what it's saying into consideration and work through it.

Now, I'm logged in as the root user, so we can go ahead and set MFA. What I want you to do is drop down the account menu as the root user and go to manage MFA. There's a general disclaimer here to help you get started; I don't ever want to see it again, so I'm just going to hide it, then go to MFA and activate MFA. We have a few options available: a virtual MFA device, which is what you're most likely going to use, where you use a mobile device or computer; a U2F security key, like a YubiKey, which is a physical device that holds the credentials (I actually have one, but we're not going to use it for this); and other hardware devices. We're going to stick with virtual MFA and hit Continue.

You'll need to install a compatible app on your mobile phone; if you look at the list of supported apps, for Android or iPhone you have Google Authenticator or Authy. So you're going to have to install an authenticator app on your phone, and when you're ready, show the QR code on screen. So I'm just going to click to show it, and now I need to pull out my phone; I know you can't see me doing this, but I'm doing it right now. I'm not too worried that you're seeing this QR code, because I'm going to change this MFA device afterwards, so if you decide you want to add it to your own phone, you're not going to get very far. In the authenticator app I hit plus, scan the barcode with the camera, and it saves the secret, so it's been added to Google Authenticator.

Now that I have it in the app, I need to enter two consecutive MFA codes. This is a little confusing, and it took me a while to figure out the first time I was using AWS: the idea is that you enter the first code you see, say 089265, then wait for the little timer circle to roll over to the next code, and enter that one too, say 369626. It's not the same number twice, it's two consecutive codes. Then we hit Assign MFA, and MFA has now been set up with my phone. So when I go and log in from now on, it's going to ask me for an additional code, and my root account is protected. So we're going to go back to our dashboard and move on to password policies.
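Before we do, as a quick aside, you can do the same MFA dance from the CLI for an IAM user, and it makes the two-consecutive-codes requirement explicit; the device name, user name, and codes below are just example values:

  # create the virtual device and save the QR code to scan with your authenticator app
  aws iam create-virtual-mfa-device \
    --virtual-mfa-device-name harry-kim-mfa \
    --outfile qr-code.png --bootstrap-method QRCodePNG
  # activate it by supplying two consecutive codes from the app
  aws iam enable-mfa-device \
    --user-name harry.kim \
    --serial-number arn:aws:iam::123456789012:mfa/harry-kim-mfa \
    --authentication-code1 089265 \
    --authentication-code2 369626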
So let's take the recommendation and manage our password policy. A password policy allows us to enforce some rules on our users' passwords to make them a lot stronger. We can say it should require at least one uppercase letter, one lowercase letter, at least one number, and a non-alphanumeric character; we can enable password expiration, so after 90 days they're going to have to change their password; we can make password expiration require an administrator reset, so the user can't just reset it themselves; we can allow users to change their own password; and we can prevent password reuse, so for example the last five passwords can't be reused, and I'd probably set that to a high number so there's very little chance of repeats. We'll hit Save Changes, and now we have a password policy in place.

To make it easier for users to log into the console, you can also provide a customized sign-in link. By default it uses the account ID, but we want something nicer, so we can change it to whatever we want; I'll call it Deep Space Nine. Now we have a more convenient link that we can use to log in, so I'm just going to copy that for later. Obviously you can name it whatever you want, but just like picking a Yahoo or Gmail address, it has to be unique, so you won't be able to use Deep Space Nine as long as I have it.

Okay, so now let's move on to actually creating a user. Here I am under the Users tab in IAM, and we already have an existing user that I created for myself when I first set up this account, but we're going to create a new user so we can learn the process. I'll fill in the name, Harry Kim, a character from Star Trek: Voyager; you can create multiple users in one go here, but I'm just going to make one. I'm going to give him programmatic access and also access to the console so he can log in, with an auto-generated password so I don't have to worry about it, and you can see it will require him to reset that password when he first signs in.

Going on to permissions, we usually want to put our users within a group; we don't have to, but it's highly recommended. Here I have one called admin, which has administrator access, but I'm going to create a new group called developers and give it power user access, so not full access, but quite a bit of control within the account. I'll create that group, and now I have a new group and I'm going to add Harry to it, then proceed to the next step. We have tags, which we'll ignore, then review, and we create the user Harry Kim. What it's done here is also create a secret access key and a password, so if he wants that programmatic access he can use these, and we can send an email with this information along to him. Then we'll just close that.
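For completeness, the same password policy, sign-in alias, group, and user could be set up from the CLI along these lines; the alias, user name, and password below are example values:

  aws iam update-account-password-policy \
    --minimum-password-length 12 \
    --require-uppercase-characters --require-lowercase-characters \
    --require-numbers --require-symbols \
    --max-password-age 90 \
    --password-reuse-prevention 5 \
    --allow-users-to-change-password
  aws iam create-account-alias --account-alias deep-space-nine
  aws iam create-group --group-name developers
  aws iam attach-group-policy --group-name developers \
    --policy-arn arn:aws:iam::aws:policy/PowerUserAccess
  aws iam create-user --user-name harry.kim
  aws iam add-user-to-group --group-name developers --user-name harry.kim
  aws iam create-login-profile --user-name harry.kim \
    --password 'TempPassw0rd!' --password-reset-required
  aws iam create-access-key --user-name harry.kim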
Okay, and then we'll just poke around with Harry Kim for a little bit. Before we jump in, you can see that he has never used his access key, the last time his password was used was today, which is when it was set, there's no activity, and he does not have MFA. If we go into Harry Kim, we can look around and see that he has policies applied to him from a group, and we can also individually attach permissions to him: we have the ability to give permissions via a group, copy permissions from an existing user, or attach policies directly to him. So if we wanted to give him S3 full access, we could do so here and just apply those permissions.

To wrap up this section, we'll quickly cover roles and policies. Under Policies we have a big list of policies that are managed by AWS; it says they're AWS managed over here, and you can also tell because the names are CamelCase and they have that nice little orange box. These are policies which you cannot edit, they're read-only, but they're a quick and easy way to start giving access to your users. If we wanted, we could take a look at one of them, like AmazonEC2FullAccess or maybe just the read-only one, and click into it to see the underlying JSON.

Okay, let's jump over to the IAM cheat sheet. Identity and Access Management is used to manage access to users and resources. IAM is a universal system, so it's applied to all regions at the same time, and IAM is also a free service. The root account is the account initially created when AWS is set up, and it has full administrator access. New IAM users have no permissions by default until they're granted. New users get assigned an access key ID and secret when first created, if you give them programmatic access. Access keys are only used for the CLI and SDK; they cannot access the console. Access keys are only shown once, when created; if lost, they must be deleted and recreated. Always set up MFA for your root account. Users must enable MFA on their own; administrators cannot turn it on for each user. IAM allows you to set password policies, to set minimum password requirements or rotate passwords.

Then you have the IAM identities, which are users, groups, and roles. Users are the end users who log into the console or interact with AWS resources programmatically. Groups are logical groupings of users that share the same permission levels of the group, think administrators, developers, auditors. Roles associate permissions to a role, and that role is then assigned to users or groups. Then you have policies: a policy is a JSON document which grants permissions for specific users, groups, or roles to access services, and policies are generally always attached to IAM identities. There are a few varieties of policies: managed policies, which are created by AWS and cannot be edited; customer managed policies, which are created by you and are editable; and inline policies, which are attached directly to the user. So there you go, that is IAM.

Hey, this is Andrew Brown from ExamPro, and we are looking at CloudFront, which is a CDN, a content distribution network. It creates cached copies of your website at various edge locations around the world. To understand what CloudFront is, we first need to understand what a content delivery network is.
A CDN is a distributed network of servers which delivers web pages and content to users based on their geographic location, the origin of the web page, and a content delivery server. Over here I have a graphical representation of a CDN, specifically for CloudFront. The idea is that you have your content hosted somewhere, and here the origin is S3, and CloudFront distributes copies of your website to multiple edge locations, which are just servers around the world that are nearby to your users. So when a user from Toronto tries to access our content, the request doesn't go to the S3 bucket; it goes to CloudFront, and CloudFront routes it to the nearest edge location so that the user gets the lowest latency. That's the concept behind CloudFront.

So it's time to look at the core components of CloudFront, and we'll start with the origin, which is where the original files are located. Generally this is going to be an S3 bucket, because the most common use case for CloudFront is static website hosting, but you can also specify the origin to be an EC2 instance, an elastic load balancer, or Route 53. The next thing is the distribution itself. A distribution is a collection of edge locations which defines how cached content should behave; it's the thing that actually says "pull from this origin, refresh the cache at this frequency, use HTTPS, encrypt this", so it holds the settings. And then there are the edge locations: an edge location is just a server, nearby to the actual user, that stores the cached content. So those are the three core components.

We need to look at the distribution component of CloudFront in a bit more detail, because there's a lot we can set here; I'm not even showing you everything, but let's go through it so we have an idea of the kinds of things we can do. Again, a distribution is a collection of edge locations, and the first thing you'll do is specify the origin, which can be S3, EC2, an ELB, or Route 53. When you set up your distribution, what really determines the cost, and how widely it replicates, is the price class. If you choose all edge locations, you get the best performance because your website is accessible with low latency from anywhere in the world, but if you're only operating in, say, North America and the EU, you can limit the set of edge locations it replicates to. There are two types of distributions: Web, which is for websites, and RTMP, which is for streaming media. You can actually serve streaming video under a web distribution as well, but RTMP is a very specific protocol, so it's its own thing. When you set up behaviors, there are a lot of options: we could redirect all traffic to HTTPS, or restrict specific HTTP methods, so if we don't want PUTs we can leave them out.
Or we can restrict viewer access, which we'll look at in a little more detail shortly. We can set the TTL, the time to live, which says how long until the cached content expires; for example, we could say every two minutes the content should expire and then refresh, depending on how stale we're willing to let our content be. There's also a feature called invalidations in CloudFront, which lets you expire files manually so you don't have to wait for the TTL. This is very useful when you push changes to your S3 bucket, because you'll have to go and create that invalidation manually so the changes appear immediately. You can also serve back error pages, so if you need a custom 404, you can do that through CloudFront. And you can set geo restrictions: if for whatever reason you aren't operating in specific countries and you don't want them consuming traffic, which might cost you money, you can block those countries, or go the other way and whitelist only the countries that are allowed to view content from CloudFront.

There's one interesting feature I do want to highlight, which is Lambda@Edge. Lambda@Edge functions are Lambda functions that override the behavior of requests and responses flowing to and from CloudFront, and there are four hooks available to us: the viewer request, the origin request, the origin response, and the viewer response. On our CloudFront distribution, under behaviors, we can associate Lambda functions, and that allows us to intercept those requests and responses and do things with them. What would you possibly use Lambda@Edge for? A very common use case is protected content: say you want to authenticate against something like Cognito, so only users in your Cognito authentication system are allowed to access that content; that's something we do on ExamPro for the video content. So that's one method for protecting things, but there are a lot of creative solutions with Lambda@Edge. You could use it to serve up A/B testing websites, so when the viewer request comes in you have a roll of the dice and it changes what gets served back, version A or version B; that's something we also do on the ExamPro marketing website. So there's a lot of opportunity here with Lambda@Edge; I don't know if it'll show up on the exam, I'm sure it eventually will, and it's just really interesting, so I thought it was worth talking about.

Now let's talk about CloudFront protection. CloudFront might be serving up your static website, but you might also have protected content, such as video content like on ExamPro, or other content that you don't want to be easily accessible. When you're setting up your CloudFront distribution, you have the option to restrict viewer access, and that means that in order to view content, users will have to use signed URLs or signed cookies. When you check that option on, it also creates an origin access identity, an OAI, which is a virtual user identity that's used to give the CloudFront distribution permission to fetch private objects, and those private objects generally live in a private S3 bucket.
As soon as that's set up, and it's set up automatically for you, you can go ahead and use signed URLs and signed cookies. The idea behind a signed URL is that it's just a URL CloudFront provides that gives you temporary access to those private cached objects. Now, you might have heard of pre-signed URLs: that's an S3 feature, and it's similar in nature, so it's very easy to get these two mixed up because signed URLs and pre-signed URLs sound very similar. Just know that pre-signed URLs are for S3 and signed URLs are for CloudFront. Then you have signed cookies, which are similar to signed URLs; the only difference is that you pass a cookie along with your request, which allows users to access multiple files, so you don't have to generate a signed URL every single time. You set the cookie once, and as long as it's valid and passed along, you can access as many files as you want. This is extremely useful for video streaming, and we use it on ExamPro; we could not protect our video streaming with signed URLs, because the video streams are delivered in parts, so a cookie has to be set. So those are your options for protecting CloudFront content.

It's time to get some hands-on experience with CloudFront and create our first distribution. But before we do that, we need something to serve up to the CDN. We had an S3 section earlier where I uploaded a bunch of images from Star Trek: The Next Generation; you can do the same, or you just need to make a bucket and put some images in it so we have something to serve. Once you have your bucket of images prepared, make your way over to the CloudFront console, just type in CloudFront and click through, and you'll get to the same place as me, and we can go ahead and create our first distribution.

We're presented with two options, Web and RTMP. RTMP is for the Adobe Flash Media Server protocol, and since nobody really uses Flash anymore, we can ignore that option and go with Web. Then we get a bunch of options, but don't get overwhelmed, because it's not too tricky. The first thing we want to do is set our origin, so where is this distribution going to get the files it serves up? It's going to be from S3, so we click into the field, get a dropdown, and choose our S3 bucket. Then we have the origin path, which we'll leave alone, and the origin ID, which we'll also leave alone. Then we have restrict bucket access, which is a cool option. The thing is, let's say you only want people to access your bucket's resources through CloudFront. Right now, if we go to the S3 console, I think we made the data image public, and if we look at that URL, it's publicly accessible. But say we wanted to force all traffic through CloudFront, because then we can be confident we can track things and get some rich analytics, and we just don't want people directly accessing this ugly S3 URL; that's where restrict bucket access comes in, and it will create an origin access identity for us. We're going to leave it set to No, I just want you to know it's there. Then we get down to the actual behavior settings, where we have the ability to redirect HTTP to HTTPS.
That seems like a very sane setting. We can also set the allowed HTTP methods; we're only ever going to be getting things, never putting or posting, so the default is fine. Then we scroll down, and we can set our TTLs, where the defaults are very good. Down here we have restrict viewer access: if we wanted to require signed URLs or signed cookies to protect access to our content, we'd choose Yes here, but again, we just want this to be publicly available, so we'll set it to No. Below that we have the distribution settings, and this is what really affects the price we're going to pay, as it says, the price class. We can either distribute copies of our files to every single edge location, or limit it to US, Canada, and Europe, or to US, Canada, Europe, Asia, the Middle East and Africa, or just the main three. I want to be cost-saving here; it isn't really going to cost us a lot anyway, but I think if we set it to the lowest price class, it will take less time for the distribution to replicate and this tutorial will go a lot faster. Then we have the ability to set an alternate domain name; that's important if we're using a certificate and want a custom domain name, which we'd do in another follow-along, but not this one. And if this were a website, we'd set the default root object to index.html. That's pretty much all we need to know here, so we'll go ahead and create our distribution. It's now in progress, and we're going to wait for it to distribute those files to all the edge locations; it usually takes something like three to five minutes, so we'll resume the video when it's done creating.

So creating that distribution took a lot longer than I was hoping for, more like 15 minutes, but I think the initial one always takes a very long time; after that, whenever you update things it still takes a bit of time, but it's more like five minutes. Anyway, our distribution is created; we have an ID, we have a domain name, and we can click into the distribution and see all the options we have: general, origins, behaviors, error pages, restrictions, invalidations, and tags. When we were creating the distribution, we configured general, origins, and behaviors all in one go, so if we wanted to override the behaviors from before, we'd just click edit here. We're not going to change anything, I just want to show you that the options we set earlier are broken up between these tabs.

Now that we have our distribution working, we have this domain name, and if we had used our own SSL certificate from AWS Certificate Manager, we could add a custom domain, but we didn't, so we just have the domain that's provided to us, and this is how we're actually going to access our cached files. So I'm going to copy that domain and place it in a text editor, and the idea is that we want to pull one of the images from the enterprise-d folder, so we'll take the data image's path and assemble a new URL.
So we're going to try the data image first, and data should work without issue; there it is, so now we are serving it up from CloudFront. Now, data is set to public access, so that isn't much of a trick, but let's look at an object that doesn't have public access, such as Keiko; she does not have public access set. So the question is: will CloudFront make files that are not public in S3 publicly accessible? That's what we're going to find out. We'll assemble another URL, this time with Keiko, and see if we can access her. Oops, I copied the wrong link, let me copy that one more time. And there you go, Keiko is not available, and that's because she is not publicly accessible in the bucket. So just because you create a CloudFront distribution doesn't necessarily mean your files will be accessible. If we go to Keiko now and set her to public, is she accessible through CloudFront? Yes, now she is. So just keep in mind that when you create a CloudFront distribution, you're going to get these URLs, but unless you explicitly set the objects to be publicly accessible, they're not going to be reachable. That's all there is to it; we've created our CloudFront distribution.

We need to touch on one more thing with CloudFront, and that is invalidations. Up here we have this Keiko image being served by CloudFront, but let's say we want to replace it. To replace images on CloudFront, it's not as simple as just replacing them in S3. So here we have the current Keiko image, and I have another version of her, which I'll upload, and that replaces the existing object. I'll just make sure the new one is there, open it, make sure it's set to public, and click the S3 link: it's now the new image. But if we go to the CloudFront URL and refresh, it's still the old image, because in order for these new changes to propagate, you have to invalidate the old cache, and that's where invalidations come into play. To invalidate the old cache, we go into the distribution and create an invalidation; we could put in a wildcard to expire everything, or we could just expire Keiko. Keiko is under the enterprise-d path, so we paste that path in, and we have now created an invalidation. This is going to take around five minutes; I'm not going to wait around to show you, because I know it's going to work, but just know that if you update something, you have to create an invalidation for the change to show up.
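By the way, you can create the same invalidation from the CLI; the distribution ID and object path here are placeholders for your own:

  # expire a single object; use --paths "/*" to expire everything in the distribution
  aws cloudfront create-invalidation \
    --distribution-id E1EXAMPLE2DIST \
    --paths "/enterprise-d/keiko.jpg"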
And that defines how long until a cache expires. So if you set it to expire every hour or every day, that's how fresh, or I guess how stale, your content is going to be. When you invalidate your cache, you're forcing it to immediately expire, so just understand that invalidation means you're refreshing your cache. Refreshing the cache does cost money, because of the transfer cost to update edge locations: if a file has expired, it then has to be sent out to 10, 20, however many servers there are, and there's always that outbound transfer cost. The origin is the address where the original copies of your files reside, and that can be S3, EC2, an ELB, or Route 53. Then you have a distribution, which defines a collection of edge locations and the behavior for how it should handle your cached content. We have two types of distributions: the web distribution, which is for static website content, and RTMP, which is for streaming media; that's a very specific protocol, and you can still serve video streaming via the web distribution. Then we have Origin Access Identity, which is used to access private S3 buckets. If we want to access cached content that is protected, we need to use signed URLs or signed cookies; don't get signed URLs confused with presigned URLs, which is an S3 feature, but it's pretty much the same idea of giving you access to something. Then you have Lambda@Edge, which allows you to pass each request through a Lambda to change the behavior of the response or the request. So there you go, that is CloudFront in a nutshell. Hey, this is Andrew Brown from ExamPro, and we are looking at CloudTrail, which is used for logging API calls between AWS services. The way I like to think about this service is that it's for when you need to know who to blame. So as I said, CloudTrail is used to monitor API calls and actions made on an AWS account, and whenever you see the keywords governance, compliance, operational auditing, or risk auditing, it's a good indicator they're probably talking about AWS CloudTrail. I have a record over here to give you an example of the kinds of things CloudTrail tracks, to help you know how you can blame someone when something has gone wrong. We have the where, the when, the who and the what. The where: the account ID, so which account it happened in, and the IP address of the person who made the request. The when: the time it actually happened. The who: the user agent, which can tell you the operating system, the language and the method used to make the API call, plus the user itself; here we can see Worf made this call. And the what: which service and region it targeted, which in this case is IAM, and the action, so it's creating a user. So there you go, that is CloudTrail in a nutshell. Within your AWS account you actually already have CloudTrail logging things by default, and it will collect the last 90 days under the event history here, with a nice little interface where we can filter these events. Now if you need logging beyond 90 days, and that is a very common use case, you'd have to create your own custom trail. The only downside when you create a custom trail is that it doesn't have a GUI like the event history here.
So there is some manual labor involved to visualize that information, and a very common method is to use Amazon Athena. If you see CloudTrail and Amazon Athena being mentioned in unison, there's a reason for that. Now, there are a bunch of trail options I want to highlight, and you need to know these, because they're very important for CloudTrail. The first thing is that a trail can be set to log in all regions; we have the ability to say yes here, and then we know no region is missed. If you are using an organization, you'll have multiple accounts and you want coverage across all of them, and in a single trail you can check the box to apply it to your entire organization. You can encrypt your CloudTrail logs, which you definitely want to do, using server-side encryption via Key Management Service, abbreviated SSE-KMS. And you want to enable log file validation, because this will tell you whether someone has tampered with your logs; it's not going to prevent someone from tampering with them, but it will at least let you know how much you can trust your logs. I do want to emphasize that CloudTrail can deliver its events to CloudWatch; there's an option after you create the trail where you can configure it to send your events to CloudWatch Logs. I know CloudTrail and CloudWatch are confusing, because they seem to have overlapping responsibilities, and there are a lot of AWS services like that, but just know that you can send CloudTrail events to CloudWatch Logs, not the other way around. There are different types of events in CloudTrail: management events and data events. Generally you're always looking at management events, because that's what's turned on by default, and there are a lot of them, so I can't list them all out for you, but I can give you a general idea with four categories: configuring security (for example AttachRolePolicy), registering devices, configuring rules for routing data, and setting up logging. Around 90% of events in CloudTrail are management events. Then you have data events, and data events are currently only for two services, S3 and Lambda. When you're creating your trail you'll see those two tabs, and I assume if other services get data events we'll see more tabs here. They're turned off by default for good reason: these events are high volume and occur very frequently. They track S3 in more detail, events such as GetObject, DeleteObject and PutObject, and for Lambda it would be every time a function gets invoked. So those are turned off by default. Now it's time to take a quick tour of CloudTrail and create our very own trail, which is something you definitely want to do in your account. But before we jump into doing that, let's go over to event history and see what we have. AWS, by default, will track events from the last 90 days, and this is a great safeguard if you have yet to create your own trail. We have some event history here, and if we expand any of them, it doesn't matter which one, and click view event, we get to see what the raw data looks like for a specific event. We also have this nice interface where we can search via time ranges and some additional information.
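(A quick aside: once you get to the CloudFormation section later in this course, you could also define a trail like this as code. This is just a rough sketch to connect the ideas, not something we build in this follow along, and the trail name, bucket name and key alias are made-up placeholders:)

    Resources:
      MyTrail:
        Type: AWS::CloudTrail::Trail
        Properties:
          TrailName: exampro-trail              # placeholder name
          IsLogging: true
          IsMultiRegionTrail: true              # log in all regions
          EnableLogFileValidation: true         # lets you detect tampering with log files
          KMSKeyId: alias/exampro-trail         # SSE-KMS encryption, placeholder key alias
          S3BucketName: exampro-trail-logs      # placeholder; the bucket needs a CloudTrail bucket policy
          EventSelectors:                       # optional: S3 data events (high volume, off by default)
            - ReadWriteType: All
              IncludeManagementEvents: true
              DataResources:
                - Type: AWS::S3::Object
                  Values:
                    - arn:aws:s3:::my-app-bucket/   # placeholder bucket to track object-level events for

Again, that's just to show how the options map onto properties; in the console tour we click through the same settings.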
But if you need data beyond 90 days, you're going to have to create a trail, and also, to analyze that data, since we won't have this interface, we're going to have to use Athena to really make sense of the CloudTrail information. Now that we know we have event history available to us, let's move on to creating our own trail. Let's go ahead and create our first trail. I'm going to name mine exampro-trail, and I want you to notice that you can apply a trail to all regions, which you definitely want to do. Then we have management events, where we can decide whether we want read-only or write-only events; we're going to want all of them. Then you have data events. These can get expensive, because the S3 and Lambda events they track are high-frequency events; you can imagine how often someone might access something in an S3 bucket with a get or a put. So they're not included by default, and you have to check them on to include them. If you did want to track data events, you could say all S3 buckets or specify particular ones, and Lambda is also high frequency because we'd be tracking invocations, which could be in the thousands or millions. So these are sanely not included by default. Down below we need to choose our storage location, and we'll let it create a new S3 bucket for us, which seems like a good choice. We'll drop down advanced here, because it has some really good tidbits. We can turn on encryption, which is definitely something we want to do, with KMS. I apparently have a key already here, so I'm going to select that one; I don't know if that's a default key, or whether you get a default key with CloudTrail. Then we have enable log file validation, which we definitely want set to yes; it's going to check whether someone has ever tampered with our logs, and whether we should stop trusting them. We could also send a notification about log file delivery, but that's kind of annoying, so I'm not going to do that. And then we should be able to create our trail as soon as we name our bucket, so we'll name it exampro-trails, assuming I haven't already used that in another account. Okay, it doesn't like that one; that's fine, so I'm just going to create a new KMS key here. Keys do cost about a buck a month, so if you want to skip this step you totally can; I'm going to create one for this called exampro-trails. Great, and so now it has created that trail. We'll leave things as they are here, and maybe we'll take a peek in that S3 bucket once we have some data. I do want to point out one more thing: I couldn't set this trail to track across my entire organization. I didn't see that option there, probably because I'm in a sub account; if I were in the root account of the organization, I bet I could turn it on to work across all accounts. So we didn't have that option here, but just be aware that it exists, and you can set a trail to cover the whole organization. In fact, I just switched into my root organization account, because I definitely wanted to show you that this option does exist.
So when you create a trail, we saw we can apply it to all regions, but we can also apply it to all organizations, which means all the accounts within an organization, so just be aware of that. Now that our trail is created, I want you to click into it and notice an additional feature that wasn't available while we were creating the trail, and that is the ability to send our CloudTrail events to CloudWatch Logs. If you wanted to do that, you could configure it here, create an IAM role, and send the events to a CloudWatch Logs log group. Additional fees apply, and it's not that important to go through the motions of this, but just be aware that it's a capability you have with CloudTrail. I said earlier that a trail will collect beyond 90 days, but you're not going to have the nice interface you get with event history, so how would you go about analyzing those logs? I said you could use Amazon Athena, and luckily they have this link here that's going to save you a bunch of setup. If you click it and choose the S3 bucket, which is this one here, it's going to create the table for you in Athena. We used to have to do this manually, and it was quite the pain, so it's very nice that they've added this one link. I can just hit create table, and that's going to create the table in Athena for us, so we can jump over to Athena. It should be created here; give it a little refresh, and I guess we'll click Get Started since we're seeing the splash screen, and our table is there. We get this goofy little tutorial which I don't want to go through, but the table has now been created, and we have a bunch of stuff here. There's a way of running a sample query: I think you can go here and choose preview table, and that will create a query for us and run it, so we can start seeing data. The cool advantage is that if we want to query our data using SQL, we can do that right here in Athena. I'm not doing this on a day-to-day basis, so I can't say I'm the best at it, but let's give it a try and query something, maybe based on event type; I wonder if we could group by event type. We'll say distinct, and I want to be distinct on event type. It doesn't like that little bit there, so I'll take it out. Great, there we go: that was a way to see all the unique event types, and if I took the limit off, the query would just take longer. Anyway, the point is that you have this way of using SQL to query your logs. Obviously we don't have much in our logs yet, but it's important for you to know that you can do this, and that there's that one-button press to create the table and start querying your logs. So we're on to the CloudTrail cheat sheet, and let's get to it. CloudTrail logs API calls between AWS services. When you see keywords such as governance, compliance, operational auditing and risk auditing, there's a high chance they're talking about CloudTrail. When you need to know who to blame, think CloudTrail. CloudTrail by default logs event data for the past 90 days via event history.
To track beyond 90 days, you need to create a trail. To ensure logs have not been tampered with, you need to turn on the log file validation option. CloudTrail logs can be encrypted using KMS. CloudTrail can be set to log across all AWS accounts in an organization and all regions in an account. CloudTrail logs can be streamed to CloudWatch Logs. Trails are output to S3 buckets that you specify. CloudTrail logs come in two kinds: management events and data events. Management events log management operations, so think AttachRolePolicy; data events log data operations for resources, and there are really only two candidates here, S3 and Lambda, so think GetObject, DeleteObject, PutObject. Data events are disabled by default when creating a trail. And trail logs land in S3 and can be analyzed using Athena. So yeah, that is your cheat sheet. Hey, this is Andrew Brown, and we are looking at AWS CloudFormation, which is a templating language that defines AWS resources to be provisioned, automating the creation of resources via code. All of these concepts fall under infrastructure as code, which we will cover again in just a moment. To understand CloudFormation we need to understand infrastructure as code, because that is what CloudFormation is. So let's reiterate what infrastructure as code is: it's the process of managing and provisioning computer data centers, in our case AWS, through machine-readable definition files, which here are CloudFormation template YAML or JSON files, rather than physical hardware configuration or interactive configuration tools. The idea is to stop doing things manually: if you launch resources on AWS, you're used to configuring everything in the console, but through a templating language we can automate that process. Now let's think about a use case for CloudFormation. Here I have an example: let's pretend we have our own Minecraft server business, and people sign up on our website and pay a monthly subscription, and we run that server for them. The first thing they do is tell us where they want the server to run, so they get low latency, and what size of server, since the larger the server, the more performant it will be. They give us those two inputs, we send them to a Lambda function, and that Lambda function triggers the launch of a new CloudFormation stack using our CloudFormation template, which defines how to launch that server: the EC2 instance running Minecraft, a security group, what region, and what size. When it's finished creating, we can watch for completion, maybe using CloudWatch Events, and using the outputs from that CloudFormation stack, send the IP address of the new Minecraft server to the user so they can log in and start using their server. So that's a way of automating our infrastructure. Next we're going to look at what a CloudFormation template looks like, and this is actually the one we're going to use later on to launch a very simple Apache server. CloudFormation comes in two variations: JSON and YAML. Why are there two formats? Well, JSON just came first, and YAML is an indentation-based language, which is simply more concise. It's literally the same thing, except it's indentation based, so we don't have to do all these curly braces, and you end up with something that is roughly half the size.
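Just to give you the flavor of that difference, here's a single made-up resource written out in YAML; the JSON equivalent would need braces, quotes and commas around every one of these lines:

    Resources:
      MyBucket:                           # a logical ID we just made up
        Type: AWS::S3::Bucket
        Properties:
          BucketName: my-example-bucket   # placeholder name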
Most people prefer to write YAML, but there are edge cases where you might want to use JSON; just be aware of both formats, and it doesn't matter which one you use, use what works best for you. Now we're looking at the anatomy of a CloudFormation template. Templates are made up of a bunch of different sections, which are all listed out here, and we'll work our way from top to bottom. The first one is metadata, which lets you provide additional information about the template; I don't have one in the example, and I rarely ever use metadata, but it's just about additional information. Then you have the description, which is simply describing what you want this template to do; you can write whatever you want here, and I described this one as a template to launch a new instance running Apache, hardcoded to work in us-east-1. Then you have parameters, which you'll use a lot: they define what inputs are allowed to be passed to the template at runtime. One thing we ask the user here is what instance type they want to use; it's defaulted to micro, but they can choose between micro and nano. We can have as many parameters as we want, and reference them throughout the template. Then you have mappings, which is like a lookup table: it maps keys to values so you can translate one value into another. A good example is regions, where the image ID string is different for each region, so you'd map region keys to different image IDs; that's a very common use for mappings. Then you have conditions, which are like your if/else statements within the template; I don't have an example here, but that's all you need to know. Transform is very difficult to explain if you don't know what macros are, but the idea is that it's like applying a mod to the template: it changes what you're allowed to use in the template. If I apply a transform, the rules here could be wildly different depending on what extra functionality that transform adds. We see that with SAM; the Serverless Application Model is a transform, so when we get to it, you'll have a better understanding of what I'm talking about. Then you have resources, which is the main show of the whole template: these are the actual resources you are defining that will be provisioned, so think of any kind of resource, an IAM role, an EC2 instance, a Lambda, RDS, anything. And then you have outputs, which is what you want to see as the end result. For example, when I create the server, we don't know the IP address until it spins up, so I'm saying down here, get me the public IP address, and then in the console we can see that IP without having to dig through the EC2 console to pull it out. The other advantage of outputs is that you can pass information on to other CloudFormation templates, creating a chain of stacks. But the number one thing you need to remember is what makes a valid template, and there's only one thing that is required: specifying at least one resource. All the other sections are optional, but resources is mandatory, and you have to have at least one.
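To tie those sections together, here's a stripped-down sketch that touches most of them. The AMI IDs and values are placeholders, and this isn't the exact Apache template we deploy later, just the shape of one:

    AWSTemplateFormatVersion: "2010-09-09"
    Description: Minimal example showing the main template sections
    Parameters:
      InstanceType:
        Type: String
        Default: t2.micro
        AllowedValues: [t2.nano, t2.micro]
    Mappings:
      RegionMap:                            # lookup table: region -> AMI ID (fake IDs)
        us-east-1: { AMI: ami-11111111 }
        us-west-2: { AMI: ami-22222222 }
    Resources:                              # the only required section
      WebServer:
        Type: AWS::EC2::Instance
        Properties:
          InstanceType: !Ref InstanceType
          ImageId: !FindInMap [RegionMap, !Ref "AWS::Region", AMI]
    Outputs:
      PublicIp:
        Value: !GetAtt WebServer.PublicIp   # surfaced after the instance spins up

Don't worry about !Ref, !FindInMap and !GetAtt yet; those are the intrinsic functions and pseudo parameters we cover a little later in this section.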
If you're looking for CloudFormation templates to learn by example, AWS Quick Starts is a great place to do it, because they have a variety of different categories with templates pre-built by AWS and APN partners, and they usually show the architectural diagram. You can launch the template, or you don't even have to run it: you can just press a button and look at the raw template, and that's going to help you understand how to connect all this stuff together. If you went through the AWS documentation instead, you'd spend a lot of time figuring that out, so this can speed things up if it's your interest. It's not really important for the exam, it's not going to come up as an exam question; it's just a learning resource I wanted to point out. Let's talk about stack updates. The idea is that you have a CloudFormation template and you've deployed it, and now you need to make some kind of change. You might think you have to delete the entire stack and recreate it from scratch, but that's not the case: with CloudFormation you just modify your existing template, push a stack update, and CloudFormation intelligently changes, deletes or reconfigures your resources, doing the least amount of work to make those changes rather than taking the most destructive path possible. There are two ways to perform stack updates in CloudFormation. First we have direct update, which is very straightforward: you upload your template to CloudFormation, and you could use the CLI for this, and it immediately deploys, with CloudFormation applying the changes directly to your existing stack. This is super fast to do. The other method is using change sets. The starting process is the same, you upload your template to CloudFormation, but instead a change set is generated, and all that is, is a way of showing you the difference between the current state of the stack and the changes that will be made. The idea is that it gives you an opportunity to audit or review what gets changed, and for those changes to take effect, a developer has to manually confirm, saying yes, I'm happy with these changes, go ahead. So those are the two methods for stack updates. Now, we were saying that stack updates are intelligent and CloudFormation figures out what should be performed, whether a resource just gets reconfigured or has to be recreated, so let's talk about the actions CloudFormation could take during an update on a resource. The first one is update with no interruption. Imagine you have an EC2 instance and you just need something changed on it, like a security group. The idea is that this update will be performed without affecting the operation of the actual service: availability remains, and the physical ID will not change. For EC2 you actually have an ID for the instance, and when they say physical ID, that could also mean the Amazon Resource Name will not change.
So in that case it's really just a configuration change taking effect. The next case is updates with some interruption. There could be cases where we don't need to destroy the server, but we might need to, say, disassociate it from a load balancer and then re-associate it, or the same thing with an auto scaling group. Because that happens, there is a chance for the service to experience some downtime or loss of availability, but the physical ID is going to remain the same. Then the third case is where a replacement has to occur. There's no way around it: the only option is to create a new instance and delete the old one. A good example is launch configurations; launch configurations cannot be modified, they can only be created and cloned, so in that case you're getting a new resource, and that new resource is going to have a new physical ID. So those are the three cases. Now let's talk about preventing stack updates, because it's possible that in certain circumstances you don't want a resource to be replaced. Let's say you had an RDS database, and the action that would be taken would replace the database; that would result in data loss, so you say no, I don't want that updated. Or it could be that you have a critical application, and there are certain EC2 instances that cannot be interrupted. So what you can do is create a stack policy, and specifically say that you're not allowed to do an update-replace on, for example, this DynamoDB table, while all other actions are allowed. That's a great way to make sure critical resources are not affected by stack updates. Next, let's take a look at CloudFormation nested stacks. Nested stacks allow you to reference CloudFormation templates inside of another CloudFormation template, and the advantage we get is modular templates: we get reusability, and we're able to assemble larger templates, which reduces the complexity of managing all of these templates. Just to get a hierarchy view of it: you have the root stack at the top, and underneath you can nest stacks within stacks, and you can go down multiple levels, so it's not just one level deep. In terms of who can access what, generally a stack can access anything from its immediate children, but the root stack is accessible by everybody, so no matter where a stack is, anything defined up there is accessible everywhere. And just to show you how you would use a nested stack: here I have a resource called MyStack, we give it the type AWS::CloudFormation::Stack, and then we just have to point it at the nested template; you're going to want to store that template in S3 and serve it up from there. So there you go. Now let's take a look at drift detection in CloudFormation, and to understand this we first need to understand what drift is. Drift is when your stack's actual configuration differs, so it has drifted, from what CloudFormation expects it to be. So why does drift happen? Well, it happens when developers start making ad hoc changes to the stack. A good example is when they go ahead and delete something: maybe you provisioned a bunch of stuff with CloudFormation and it created an EC2 instance, you didn't need it anymore, so the developer deleted it.
But when you go back to the CloudFormation console, it still says it's there, even though you know it's been deleted, and that can cause complications, or it just doesn't give you good visibility into what resources you have and what state they're in. What the developer should have done is update the CloudFormation template and let CloudFormation delete that resource. So CloudFormation has this feature called detect drift, and it does exactly what we've been talking about: it determines whether something has been deleted or modified. All you have to do in your CloudFormation stack is use the drop down, turn on detect drift, and then view the results. Now, I do want to mention nested stacks and drift detection: when you are detecting drift on a stack, CloudFormation does not detect drift on any nested stacks that belong to it; instead, you can initiate a drift detection operation directly on those nested stacks. So you have to turn it on for each individual stack, it's not going to trickle down to every single thing in your hierarchy. Let's take a quick peek at what drift detection looks like. The idea is that you turn it on, it tells you whether your stack has drifted, and you can see when it last checked for drift. These are the possible statuses your resources can be in: a resource could be deleted, it could be modified, it could be in sync, meaning everything is good, and not checked is the case where CloudFormation simply hasn't checked it yet. In fact, when you first turn on drift detection, you have to wait until that check has happened, so all of them will say not checked. And just to see what that looks like, here I have a bunch of resources, and on the right-hand side you can see some that have been deleted, some that are in sync, and some that are modified. So there you go. Let's talk about rollbacks with CloudFormation. When you create, update or delete a stack, you could encounter an error; an example could be a syntax error in your CloudFormation template, or your stack trying to delete a resource that no longer exists, and so it has to roll back to get things back into the previous state. That's the whole point of rollbacks. Rollbacks are turned on by default, and you can disable them with a command flag (--disable-rollback) if you're triggering this via the CLI; I don't think you can do it via the console. Rollbacks can also fail, so sometimes you'll have to investigate and change resources or configurations to get things back to the state you want, or you might have to use AWS Support to resolve the issue on a failed rollback. These are the states you will see: when a rollback is in progress you'll see ROLLBACK_IN_PROGRESS, when a rollback succeeds you'll see UPDATE_ROLLBACK_COMPLETE, and when a rollback fails you'll see UPDATE_ROLLBACK_FAILED. So there you go. Now let's take a look at pseudo parameters in CloudFormation. These are parameters that are predefined by CloudFormation; you do not declare them in your template, and you use them the same way you would a parameter, so you use the Ref function to access them. Here is an example of us using a predefined parameter, the one called AWS::Region. So what are these? Here's the list.
We have AWS::Partition, AWS::Region, AWS::StackId, AWS::StackName and AWS::URLSuffix. I'm not going to go through all of these, because it's not that important, but let's go through region, because I think that's a very prominent one you'll end up using a lot. The idea is that, let's say you need the current region this CloudFormation template is running in: as you were seeing there, you use AWS::Region, and if this was running in us-east-1, that's what it would return. So that is pseudo parameters. Next, let's take a look at resource attributes for CloudFormation. These are additional behaviors you can apply to resources in your templates to change their relationships, and how things happen when you have stack updates or delete operations. The first one we want to look at is CreationPolicy, and what this does is prevent the resource's status from reaching CREATE_COMPLETE until CloudFormation receives a specified number of success signals, or the timeout period is exceeded. On the right-hand side you can see we expect either three successes or we have a timeout of 15 minutes; it's just an additional check you can put in to make sure everything has been successfully created. Then you have DeletionPolicy, which comes into play when something is being deleted. Let's say you have a resource like an RDS database and you want to make sure that any time it's deleted, a snapshot is taken; that's what you could do here. In most cases you're probably going to want to retain a database, since you generally do not want to delete your database, but it depends on the situation; the options are Delete, Retain and Snapshot. The next one is UpdatePolicy, and this only applies to a handful of resources, such as auto scaling groups, ElastiCache replication groups, Elasticsearch domains and Lambda aliases, and it controls how CloudFormation handles updates to those resources. Then you have UpdateReplacePolicy. This applies when you're doing a stack update and a resource is being replaced: do you delete the old resource, retain it, or take a snapshot? It's similar to DeletionPolicy, but it's for when resources are being replaced. The last one is DependsOn, which is for when you have a resource that depends on another resource, so you want that other resource created first. In our scenario here we have an RDS database and an EC2 instance, and we're saying: before you make the EC2 instance, go make the RDS database first. So those are the resource attributes, and I'll sketch a couple of them in a small example shortly. Now let's take a look at intrinsic functions for CloudFormation. What these do is allow you to assign values to properties that are not available until runtime, and the two most popular ones are all the way at the bottom of the list: Ref and GetAtt. These are so important that we're going to cover them shortly in their own slides, but quickly: Ref returns the value of a specified parameter or resource, and GetAtt returns the value of an attribute from a resource in the template. If that doesn't make sense right now, don't worry, we're going to cover it shortly. So let's go to the top of the list and see what kind of stuff we can do with intrinsic functions. The first one is Fn::Base64.
Fn::Base64 returns the base64 representation of the input string. There are just some cases where you need a value to be base64; I can't think of one off the top of my head, but I've definitely had to use it before. Then you have Fn::Cidr, which returns an array of CIDR address blocks; when you're working with VPC resources, values need to be in CIDR format, so that's when you'd use that function. Then you have the condition functions, Fn::And, Fn::Equals, Fn::If, Fn::Not and Fn::Or, which allow us to have a bit more logic within our CloudFormation template, so if you want that kind of thing, that's what you've got. Then you have Fn::FindInMap, which is used with the mappings section; whenever you're using mappings, you're definitely going to be using FindInMap, and as the name implies, it finds the value corresponding to a key. Then you have Fn::Transform, which is super interesting and is used with SAM, the Serverless Application Model, which we definitely cover in another section, actually right after CloudFormation. What it does is perform a macro on part of the stack, so it essentially changes the logic of how you can write CloudFormation templates, giving CloudFormation abilities it didn't have before. Then you have Fn::GetAZs, which returns a list of availability zones for a specified region. Then you have Fn::ImportValue, which is really important when you're working with multiple stacks: it returns the value of an output exported by another stack, so it's a way to have stacks talk to each other. Then you have Fn::Join: if you have an array and you want to turn it into a string delimited by, say, commas, you use Join. Then Fn::Select: if you have an array and want to pick one item out of it by providing an index, you use Select. Then Fn::Split, which is the opposite of Join: you have a string delimited by commas and you want to turn it back into an array. And you have Fn::Sub, which lets you substitute a variable in an input string with another value, so generally replacing part of a string. Those are the intrinsic functions. Alright, let's take a look at Ref in closer detail. Ref returns different things for different resources, and you'll need to look up each resource in the AWS docs to figure out what it returns, whether that's an ARN, a resource name, or a physical ID. Here's an example: we have a parameter for a VPC, and then we're accessing it down below. This example is actually for a parameter, not a resource, and for parameters Ref is very straightforward; resources are a totally different thing. But I want you to know that if there's something you can't get with Ref, there's a good chance you can get it with GetAtt, so let's go take a look at GetAtt now. GetAtt allows you to access many different attributes on a resource, and sometimes there are lots, sometimes very few, but you'll again have to check the AWS docs to see what's available per resource.
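Here's a small sketch that pulls a few of these ideas together, a DeletionPolicy, a DependsOn, and Ref versus GetAtt, with made-up logical IDs and values, just so you can see the shapes before we look at the slide example:

    Resources:
      Database:
        Type: AWS::RDS::DBInstance
        DeletionPolicy: Snapshot                 # take a final snapshot if this ever gets deleted
        Properties:
          Engine: mysql
          DBInstanceClass: db.t2.micro
          AllocatedStorage: "20"
          MasterUsername: admin                  # placeholder credentials
          MasterUserPassword: change-me-please
      WebSecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          GroupDescription: Open port 80
      WebServer:
        Type: AWS::EC2::Instance
        DependsOn: Database                      # create the database before this instance
        Properties:
          InstanceType: t2.micro
          ImageId: ami-11111111                  # placeholder AMI
          SecurityGroupIds:
            - !GetAtt WebSecurityGroup.GroupId   # GetAtt pulls an attribute off another resource
    Outputs:
      InstanceId:
        Value: !Ref WebServer                    # Ref on an EC2 instance returns its instance ID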
And that's basically what the example on this slide shows: we have a security group at the top, and then down below you can see we reference that resource with GetAtt and grab the GroupId. So that is GetAtt, but let's hop over to the docs and take a look. Here I am on the AWS resource and property types reference, and this is going to help us see what we can return with Ref or GetAtt. Let's take a look at EC2, that's always a good one to compare, and I'll open up a new tab where we'll look at DynamoDB. To figure out what they return, since there are a lot of options for EC2, we'll go to the EC2 instance page, and for DynamoDB we'll go to the DynamoDB table page. There's a lot of stuff here telling you how to work with an EC2 instance, but we just care about Ref, and the fastest way to find it is to search for GetAtt, because Ref is documented right above it. We'll do the same thing in the other tab. For EC2, when you pass the logical ID of the resource to the Ref intrinsic function, it returns the instance ID; that's what it returns. For DynamoDB, Ref returns the resource name. So there's a bit of a difference there, and it's confusing, because with some resources Ref will actually return the ARN, but since DynamoDB doesn't return the ARN, you have to use GetAtt to get it. Then for an EC2 instance, these are the GetAtt attributes you have, so that's pretty much what you have access to. Just be aware that there isn't consistency across the board here, and you'll have to do a bit of digging to figure it out every single time. Now let's take a look at wait conditions. Wait conditions, as the name implies, wait for a condition, and there are two use cases where you're going to use them. The first is to coordinate stack resource creation with configuration actions that are external to the stack creation. What do I mean by that? Well, it's when you're dependent on something outside of your stack: maybe you have to make sure a domain exists and it's not part of your CloudFormation template, or maybe you have to hit an external API endpoint to make sure something is working. So it's something external, not something you're spinning up within your stack. The second case is to track the status of a configuration process: maybe you have a resource that has to be in a particular state, so you poll and continuously check whether it's in that state before proceeding. Those are your two use cases. On the right-hand side we actually have an example of a wait condition, and this one is for an auto scaling group; it's pulled from the AWS docs, which is kind of funny, because AWS doesn't recommend this exact usage. Wait conditions are similar to creation policies, and AWS recommends using creation policies for EC2 and auto scaling groups, yet here they're using an ASG. But there could still be a use case for it, so we can't say you can't use a wait condition with an auto scaling group. If you look on the right-hand side, you see you have a wait handle, and then you have a DependsOn, which depends on the web server group.
And then you have a timeout, and you reference the web server capacity. So there are a lot of different options here, and you'd have to read up on them in the docs, but the takeaway is that a creation policy waits on the resource it's attached to, while a wait condition waits on its wait condition handle, generally for something external to your stack. Hopefully that makes sense. This is Andrew Brown from ExamPro, and welcome to the CloudFormation follow along, where we're going to learn how to write a CloudFormation template and go ahead and deploy an EC2 instance, all using the power of infrastructure as code. Now, you're going to need a Cloud9 environment to do this. You could do this locally on your computer, but it's a lot easier to do everything through Cloud9, and we show how to get started with it in multiple follow alongs; in this particular case we showed how to set it up in the Elastic Beanstalk follow along, so I'd recommend doing that first, or just figure out how to get a Cloud9 environment set up. Once you're in here and you have a Cloud9 environment, I'm going to make a new directory to store this CloudFormation file, so I'll run mkdir cfn-project to create it. Then we need a new file in there, so I'm going to touch a new file and call it template.yaml. It's important that you name it .yaml instead of .yml; if you look up the best practice between the two, the whole community pretty much agrees on the full form, so let's make our lives easier and always do .yaml. Now that we have that file, we can go ahead and open it up, and of course you can also navigate to it and click on it. The first thing we need to specify is the template format version, so that is AWSTemplateFormatVersion, and it's going to be 2010-09-09. This line pretty much says which version of CloudFormation we're using, and everything in this template is affected by that version. It obviously hasn't changed in a long time, since it's nearly ten years old; maybe at some point they will change it. The next thing we'll want to do is add a description, using the little two-character multi-line indicator in YAML, and I'm just going to say this is infrastructure for StudySync. We're not actually deploying the StudySync application, because that's a little too much work, but I figured we'd name it that anyway. I changed the indentation to two spaces; you can have four spaces, but two spaces is generally a lot easier to work with, so that's what I'll be doing throughout. YAML is indentation based, and you have to be very careful with YAML files, because if you're not using spaces, in other words soft tabs, and you use hard tabs, you might receive errors. So we have our format version, we have our description, and the first thing we need to do now is specify an actual resource. I'm going to type Resources, then we'll name our resource WebServer, and it's going to be a type of AWS::EC2::Instance. Now, I could wrap that in single or double quotations, but I'm usually pretty good about this, so I like to leave it naked, so to speak. In order to launch an EC2 instance, we're going to have to do a couple of things.
But before we do that, I just want to show you how you would know what to fill in here if you had to read the docs. I typed this into the docs, and in CloudFormation we have all of these resources; if I really wanted to know about EC2, I'd go down to EC2, read through it, and say, okay, I want to launch an EC2 instance, and this would be the closest matching page. That's how I'd figure out how to add things to a CloudFormation template. Here they give you a full example in JSON and then in YAML; obviously YAML is a lot less verbose, and that's what people prefer to use. When AWS was first starting out with CloudFormation it only had JSON; YAML came later, and it is the preferred way of doing things. You can see both formats, but the question is, do we need to include all of these attributes or properties? The answer is no, but you have to figure that out by looking at the actual properties, where each one is marked Required: No, Yes, or Conditional. The easiest way to find what's required is to scan for those; I don't think there are any hard yeses for EC2, and it says we need an image ID, and a security group ID, which is conditional. I would think that if you didn't provide one, it would just automatically use the default security group; I think that's what it is, and that's probably what we'll do, we just won't include one at this point. So really, I think we just need to include an image ID and an instance type. Actually, I don't think it even says we have to include an instance type; that one is defaulted to m1.small, and we're going to want a t2.micro because we want to save money. So let's make our way back over here. Now that we've learned a little bit, what we're going to do is type in Properties, which is going to hold all the properties for this resource we're making. We saw instance type, and I want this to be a t2.micro; again, I could wrap this in single or double quotations, but I just like to leave them naked. The next thing we need is the image ID. The image ID is the AMI ID, and AMI IDs vary based on region, so in order to get one we'll make our way over to the EC2 console. Type in EC2, and we'll pretend we're launching a new instance; we're not actually going to launch one, we'll just act as if we are, so I'll say launch instance here, and we'll grab the AMI ID for North Virginia. Make sure you're in us-east-1; it's going to be very problematic if you're not consistent about this. There are two versions here, x86 and 64-bit ARM, and we want the x86 version, which is what we're used to using; ARM is very cool, but not for me today. So I'm going to copy that AMI ID and paste it in, and that's all we need to launch an EC2 instance with CloudFormation. Now, in order to deploy this, we should probably put the template in an S3 bucket, so I'm going to go quickly over to the CloudFormation wizard to show you the steps we'll have to do, and why we're going to put it in S3, in a second.
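Just before we hop over there, here's roughly what template.yaml should look like at this point; your ImageId will be whatever AMI ID you copied for us-east-1:

    AWSTemplateFormatVersion: "2010-09-09"
    Description: Infrastructure for StudySync
    Resources:
      WebServer:
        Type: AWS::EC2::Instance
        Properties:
          InstanceType: t2.micro
          ImageId: ami-0abcdef1234567890   # placeholder; paste the AMI ID you grabbed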
So we'll go here. We have a lot of CloudFormation templates already; I never created these, they were automatically created for me when we were doing other tutorials or running example applications. But go up to the top and create a new stack, and we'll say with new resources (standard). The idea is that we're going to provide a template, and we can either point at one in S3 or upload our own. This template is so small we could just upload it, but I think it's good to get in the practice of putting it in S3, because once a template gets to a certain size you have to store it in S3 anyway, and that's where you want it to be, because a lot of services expect to find your CloudFormation templates there. So what we'll do is use the CLI to create a new bucket and store it there. I'm going to type aws s3api create-bucket, pass the bucket name, and I'm going to call it study-sync-andrewb (I think I already used a similar name somewhere), then the region, us-east-1. Be sure to customize this based on your own name or some other value, because bucket names are unique across all of AWS. We'll hit enter, and if this is successful it gives us a location back, which it did, so that's all great. Now that we have our bucket, we can copy the template file to S3. I'm going to type aws s3 cp; note it's the s3 command here, not s3api, and I don't know why it's split up like that, but that's just how it is. Then the path, environment/cfn-project/template.yaml, and then the S3 path, which is going to be s3://study-sync-andrewb/template.yaml. Oops, look at that, I typed .yml; I'm already messing things up, so I'll just run it again correctly. I don't care much about the stray object, but if I can remember the delete command I'll remove it... and I can, there we go, my memory is doing well. So we have now copied the template to S3, and if we go back to CloudFormation, we can provide the URL. Notice that it wants an https:// URL, which is a different URL than the s3:// path, so if we go to S3, into our bucket, and look at the object, it shows us the object URL; that's the one we want. We'll make our way back to CloudFormation, paste that in, and click off. We could go into the designer to view the template, but I don't really care about that, so we'll just go next. We have no parameters, I'm going to name this study-sync, we'll hit next, we'll leave all of these options alone since they all seem good, hit next again, go all the way down to the bottom, and hit create stack. It's going to show create in progress, and now we just have to wait and hit refresh a bunch of times. This is going to take however long it takes to spin up an EC2 instance; if it fails, it will say rollback in progress, and a syntax error is totally possible, since we could have a very minute mistake we missed because we typed this all manually. We'll just wait for this resource to complete. It looks like the template must be valid, because it appears to be launching an EC2 instance, so it's on its way, and we just have to wait for it to finish creating. I'll see you here in a little bit.
Okay, so our EC2 instance is now running, and two out of two checks have passed. There's not much we can do with this instance, because it doesn't have a web server installed and it doesn't have a security group that exposes port 80; it's using the default. So there's a lot of work we still need to do. Let's make our way back to CloudFormation and take a look at the events. Here we can see the previous events and the create complete. A lot of times when you're waiting around here you have to hit the refresh button to see these changes; sometimes you'll get a little blue pill telling you there are available events, so don't just wait around, click refresh and see what's going on. Now let's get a security group in here, because that's the next thing we're going to need. We'll make our way back to Cloud9, and we want to add a security group. If we look it up in the docs, it should be under EC2, because security groups are an EC2 feature; I typed in security group, and there it is. That page tells us all the information we need to know about setting up a security group. So let's go back to our template and create a new resource. I'm going to call this one SecurityGroup, and it's going to be of type AWS::EC2::SecurityGroup. For the properties, we're going to need a VPC ID, and since we need that, we're going to have to provide our own variable for it. We could just copy and paste ours in as we did with the image ID, but I think it's time for us to start making this less brittle. We can give it a group description, which isn't a bad idea, so I'm going to say "Open port 80". We want to set an ingress rule, which means inbound traffic; you have ingress and egress, and ingress is inbound. So we'll hit enter there, type IpProtocol (oops, fixed the spelling) and set it to tcp, then line up FromPort 80, then ToPort 80, and then we have CidrIp. In this case I'll use double quotations just because the value has unusual characters; it could probably be naked, I don't know. The value is 0.0.0.0/0, and that says from anywhere on the internet, open up port 80, which is the common port for plain old HTTP. I think that looks right; I'm just checking for spelling mistakes. Also, while we're here, we might as well go ahead and turn the image ID into a variable, so I'm going to call that ImageId, and we're going to use some parameters. Parameters are a great way for us to pass in variables at creation of a stack, or an update of a stack as well. So we define one called ImageId, and we'll default it to the value we just had there, which I no longer have, so I'll just go back to the launch instance screen quickly and grab it again. Defaulting is a great way to handle an option when we already know what we want it to be. It's also good to give these a description, because when CloudFormation prompts us, it will tell us what they are, so I'll do that: "AMI to use, e.g. ...", just to tell people what format it has to be, and this is going to be type String. The next one we need is a VPC ID; we'll give this a description too and just say "The VPC used by the SG" (small typo there, fixed), and we'll make this type String as well.
We're not going to default this one; we'll pass it in, and we'll figure out what our VPC ID is in a moment. I'm just double checking everything to make sure it's okay, and it looks good to me, but if we really want to know whether our template is good, we can use a tool called cfn-lint, which is a Node module. So I'm going to go ahead and install that right now; the -g flag means it installs globally. Then I'll cd into our directory, cfn-project, and run cfn-lint validate with the template name. If the template is good it will say so; if not, it errors out, and look, it's already complaining, so clearly I've done something wrong. It says the SecurityGroup resource has an invalid type, so I've typed the type wrong: it's supposed to have two colons. I'll fix that, save, and validate again. We're getting closer: now it says that on AWS::EC2::SecurityGroup, VPC ID is not a valid property. Oh, you know what, it needs a lowercase c: VpcId. You have to be really careful with these characters. ImageId has the same problem, I capitalized the D at the end, so let me find it in here and fix that too. We'll save, hit the up arrow to rerun, and now the template is valid. It doesn't tell you exactly what the problem is, so you have to hunt it down a bit and make sense of it, and you will have to double check the exact names of these properties, but that's all there is to it. So now that that's done, we'll upload this back to S3, but let's automate that process, because it's going to get tiresome typing the same commands over and over again. I'm going to touch a new file called update.sh; we're making a bash script here, and we'll chmod it so that it's executable, because if we didn't do that and tried to execute it, it just wouldn't run. Now that we have that file, I'll open it up and supply some information. First we tell it where to find bash with a shebang line; it should already know what to do, but this is just a good habit. Then we type in our AWS CLI command, the same aws s3 cp of template.yaml to s3://study-sync-andrewb/template.yaml, and I accidentally put an extra slash in there, sorry, it's a bit hard to see what I'm doing. That looks pretty good to me, so we'll go back down to the terminal and run ./update.sh. You could run it with sh or bash, but I just like using the ./ form, since that's how you execute scripts on Linux. We'll hit enter, and yep, it gave us output showing it uploaded, so it looks like everything's great. Now we can go back to CloudFormation and update the stack.
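Before we do, here's roughly where template.yaml should be sitting after those lint fixes; again, the default AMI ID is just a placeholder for whichever one you grabbed:

    AWSTemplateFormatVersion: "2010-09-09"
    Description: Infrastructure for StudySync
    Parameters:
      ImageId:
        Description: AMI to use, e.g. ami-0abcdef1234567890
        Type: String
        Default: ami-0abcdef1234567890     # placeholder AMI ID
      VpcId:
        Description: The VPC used by the SG
        Type: String
    Resources:
      WebServer:
        Type: AWS::EC2::Instance
        Properties:
          InstanceType: t2.micro
          ImageId: !Ref ImageId
      SecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          GroupDescription: Open port 80
          VpcId: !Ref VpcId
          SecurityGroupIngress:
            - IpProtocol: tcp
              FromPort: 80
              ToPort: 80
              CidrIp: "0.0.0.0/0"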
So I'm going to go ahead and hit Update, choose to replace the current template, and provide that S3 URL again — I'll make my way over to S3 because I cannot remember the URL, paste it in, and hit Next. It's now going to ask us for that VpcId parameter, so we need to go grab it. I think we still have EC2 open, but an easier way is to go back to Stacks, click into this one, check the resources, and click into the actual instance we're using to get the exact VPC it's in. If you have multiple VPCs it could be in a different one; it should be the default one, because we didn't specify anything. So we find it, hit the little copy button, make our way back to CloudFormation, paste it in — fingers crossed — and I think this will work. Nothing new to choose here, so we hit Next, move all the way down to the bottom, and hit Update stack. And there it goes. We'll just have to wait a little while and hope it doesn't roll back on us — since we linted it, the only thing that could mess it up is if we got those parameters wrong.

Oh, I think it's already done. That was really fast, because it didn't have to create a new EC2 instance — it's just adding a security group. Let's go to the resources: now we can see we have a security group and a web server (I'll close those extra tabs). What I want to do is check out the security group, just to see if it actually set port 80 to be open. If we go to the inbound rules, port 80 is open; that is what we want. Now the question is, if we check our web server, is our security group associated with it? And the answer is no — it's on the default, because we never actually associated it in our template. In the template we have the security group and the instance, but we never put a property on the instance for the security group. So that's what we need to do next, because we want port 80 open once we install our Apache web server so we can view the web application.

So we'll go ahead and add that property — and I'm just stalling for time as I look for the line I have to type; there it is. What we need to add is SecurityGroupIds. (Wouldn't it be nice if this auto-completed? If it auto-completed all the documentation for me, that would be sweet.) For the value we're going to use GetAtt: we reference the SecurityGroup resource and then .GroupId. GetAtt is a way of getting special attributes off a resource — you're limited in what you can grab, but here we can grab the group ID, and that's what we need in this case. So I think that's all we need to do here; the instance resource ends up looking like the snippet just below.
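Here's roughly how the instance resource reads once that property is in — the instance type is an assumption on my part, and the names just mirror what we've been using:

```yaml
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref ImageId
      InstanceType: t2.micro              # assumed; use whatever was picked earlier
      SecurityGroupIds:
        - !GetAtt SecurityGroup.GroupId   # attach the security group we just created
```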
Now, instead of going through the console and clicking through all that work again, let's update our script so we can automate this further. So in update.sh I'll type aws cloudformation update-stack. We're going to provide our region, which is a great idea — we don't want that defaulting to anything funny — put a backslash at the end of the line to continue the command, and provide the stack name, which should be called study-sync. We will provide the template URL, which is that S3 link, and we'll need to provide the parameters. For those you supply ParameterKey= and then ParameterValue= — very verbose, they just don't want to make anything easy for us. The key is VpcId, and the value has to be whatever our VPC ID is, so we'll go back, grab it from the console, and paste it in. That all looks good to me — I'm just double-checking more thoroughly, because it's very easy to make a mistake when typing all this stuff — but I'm pretty sure it's good.

Now that our script is updated, we can just run ./update.sh. Fingers crossed, hopefully it works... and we have an error, oh geez: the template URL or template body must be specified. We have the URL right there, but we forgot a backslash at the end of one of the lines (backslash leans to the left, forward slash leans to the right). The backslashes are what let us split the command across multiple lines — we could make it all one line, but that's really messy. That should fix the issue, so we hit the up arrow and run it again. Oh boy: VpcId must have values. I guess I named the parameter slightly differently — yeah, I did — so I'll make the casing consistent in both the template and the script, just to make our lives a bit easier; a little mistake on my part. And actually, before we update, we should really be linting this, so we run cfn-lint validate template.yaml — everything's good — change the casing, and hit up again.

We're still having a hard time with SecurityGroupIds. So why didn't the linter pick that up? It says invalid template property under the resource properties: SecurityGroupIds. I'll search for it... oh, you know what, it's got to be indented. My bad. It is surprising the linter didn't catch that, so maybe there's another tool out there that's a little better at checking this stuff. And there we go — we know the update was accepted because it gave us back a stack ID. If we go back to CloudFormation and refresh, we can see that it's updating.

If we want to see that status from Cloud9 instead, we can do it programmatically by typing aws cloudformation describe-stacks with the stack name study-sync, and I'll output it as a table because it's a lot of information. (It errored the first time because I typed the name wrong.) There it is — all our information; we could also get it back as JSON by dropping the table output option. It's showing the status as update in progress, so it's kind of a cool way of keeping tabs on it. Now it says update complete, so it's almost done — and we can also monitor it through the console; you've got to choose how you want to work.

So now it says delete in progress — I guess it's deleting the web server, and if it's deleting the web server, that means we have to wait a little bit here, so I'll see you back here when this is 100% done. Okay, and after a little wait everything's updated — it did delete the web server. When I was building out this follow-along earlier, at this stage it didn't actually delete it, so clearly I've made a change; maybe because I was fiddling with the names of the parameters or properties, it forced a replacement.
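By this point the update script has grown into something along these lines — the region, stack name, bucket, and VPC ID here are all placeholders rather than the exact values from the video:

```bash
#!/usr/bin/env bash
# Upload the latest template, then ask CloudFormation to update the stack with it
aws s3 cp template.yaml s3://my-cfn-bucket/template.yaml

aws cloudformation update-stack \
  --region us-east-1 \
  --stack-name study-sync \
  --template-url https://my-cfn-bucket.s3.amazonaws.com/template.yaml \
  --parameters ParameterKey=VpcId,ParameterValue=vpc-1234567890abcdef0
```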
So that's the thing with CloudFormation — it doesn't always delete and replace resources, but in this case it decided to, so we wait a little longer. If we go over to EC2 we can see the new instance is running, and if we check, the security group on it is the correct one.

Now let's say we wanted to do that check through the CLI instead. What we can do is get a list of the stack's resources with aws cloudformation list-stack-resources and the stack name study-sync. In that list we should have the security group, which gives us the SG ID — although actually, what we need is the instance, not the SG, because once we have the instance we can see which security group is attached to it. So we'll use aws ec2 describe-instances with our instance ID, and I'll output it as a table. Notice the naming convention here — describe-stacks, describe-instances — that's why it's great to work with the CLI for a while; you just start to figure things out as you go, though you do have to type it right if you want it to work. So we go back, hit Enter, and there it is. Looking through the output there should be a field called security groups — there it is — and we can see our group is attached. So we definitely know it's correctly attached, and if we just want to double-check ourselves we can go to the console, look at the inbound rule, and see that port 80 is open.

So now we have an instance with port 80 open, which is great; we just need to install our web server. The way we can do that is through user data. If you're launching an EC2 instance manually and going through the launch steps, you get to the advanced details section and you can provide a script as the user data, and that will do the initial setup of whatever you want. That's one way to get it set up, and that's what we're going to do using CloudFormation.

So back in template.yaml, I'll scroll down so I can see what I'm doing, and we're going to add another property to the web server called UserData. We wrap the value in a function called Fn::Base64, because user data has to be base64 encoded — that's just how it is — and then we'll use Sub on it; we don't really need to substitute in any values here, but in case we do, it's a good habit. And by the way, this is the developer associate — you don't need to know all these things in great detail; they matter more at the SysOps and DevOps Pro level, which is why I'm glazing over some of it. I'd rather you get practical experience than memorize exactly how they work.

For the script itself: user data starts you off as the root user, so we switch to ec2-user; then sudo yum install httpd — that is Apache; you'd think they would just call the package "apache", but they don't — then we start the service, and then we enable the httpd service so that if it's stopped for whatever reason, or the server is restarted, it will come back up and keep running. I'm going to double-check this to make sure it's right — the shebang line pointing at bash looks good to me, I got that right.
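Pieced together, the UserData property looks roughly like this — the exact script lines are approximate (I've left out the user-switching step mentioned above), but the important parts are the Fn::Base64/Sub wrapping and the httpd install:

```yaml
      UserData:
        Fn::Base64: !Sub |
          #!/usr/bin/env bash
          yum -y update
          yum install -y httpd
          service httpd start
          chkconfig httpd on   # keep Apache running after a restart (approximate)
```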
Checking the script lines: yum -y update — good; su ec2-user — great; sudo yum install httpd — that looks good to me. So everything is great, and that's all we need to do there.

While we're here, we might as well add some outputs. Outputs make it easy for us to find values without clicking around everywhere, so I'll make an output called PublicIp, because I want to get the IP of the web server, if you don't mind. I believe that's typed correctly — I'm just double-checking — yeah, it looks good to me.

So what we'll do now is scroll all the way back up and lint the template again — I'll just hit the up arrow until I find the cfn-lint command — and the linter says everything is good, though we found out earlier that's not always a guarantee. Then we run ./update.sh. Fingers crossed... yep, that's great. The template has been uploaded to S3 and it's doing a stack update, so we'll make our way back to CloudFormation, refresh, and we can see the update is in progress. We'll have to wait on that, and what we're looking for is that output with the actual IP address. I don't know if it's going to replace the server in this case — it probably won't; it says update in progress, not delete in progress, for the web server. Anyway, we'll check back here in a moment, so see you soon.

So our update is done, and notice that it didn't do a delete in progress — it just updated the instance, which is interesting. If we go to the Outputs tab, we now have that IP address, which is going to make our lives a lot easier. We paste it into the browser... but it's not working, which is a bit of a shame. I actually know why, though: it's because CloudFormation didn't replace the instance — it didn't delete and recreate it, it just updated it in place — and when you use user data, that script only runs the absolute first time an instance is launched. The only way for it to run again is to destroy and recreate the instance entirely; even restarting it wouldn't run it again. There are ways of adding things so that a change triggers a replacement of the instance, but that didn't happen here.

So what we're going to do in this case is just delete our entire stack, which is also a great chance to see how deletions work. We'll go ahead and hit Delete stack. It says delete in progress — it's better to watch it at the stack level — and deletes can actually fail and then roll back, but I don't think this one will. While we wait, let's go to our update script and change it from update-stack to create-stack. We're not going to run it until the delete is done, because you can't have two stacks with the same name — we could just give the new stack a different name, but I want to keep this consistent. Oh, it looks like the delete is already done; that was pretty darn quick, so I don't have to go away. We can move on to the next step: we've changed the command to create-stack, and we'll just run the update script.
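Backing up for one second — the Outputs section I added earlier is tiny, something like this, using the PublicIp attribute that AWS::EC2::Instance exposes to GetAtt:

```yaml
Outputs:
  PublicIp:
    Description: Public IP of the web server
    Value: !GetAtt WebServer.PublicIp
```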
And that is going to create our new stack here. We'll do a refresh — there it goes — and we'll go to Events and hit refresh again. We'll have to wait a little bit, because it's launching an EC2 instance, so I'll see you after that. Okay, and we're back. I don't know if we're going to have the same IP address — this one's 54.227.84.67 — nope, it's different. But we'll give it a go and see if it actually works now. And look at that, we got the test page up. So we fully automated something using CloudFormation, and that is pretty much the scope of what I wanted to show you here.

All we need to do now is delete what we've created, and we'll do it through the CLI this time, because we already did it through the console — might as well learn how to do it both ways. I'll take a guess at the command, but I'm pretty sure it's aws cloudformation delete-stack with the stack name study-sync. I tell you, the more you use the CLI, the more you can just guess and you're pretty much right. I didn't get any output there, so I'll go double-check in the console and refresh — and yes, the delete is in progress, and there we go. Just make sure that it actually deletes, because sometimes deletes fail and roll back and you still have those resources hanging around; if you don't want to be paying for something you don't need, double-check that. But yeah, that's CloudFormation for you — we're all done here.

So we are on to the CloudFormation cheat sheet. This is a long one — it's like three pages — and it's super important for the SysOps and developer associate, so we need to know it inside and out. Let's get into it. When you're asked to automate the provisioning of resources, think CloudFormation. When infrastructure as code is mentioned, think CloudFormation. CloudFormation templates can be written in either JSON or YAML. When CloudFormation encounters an error, it will roll back, showing ROLLBACK_IN_PROGRESS. CloudFormation templates larger than 51,200 bytes (roughly 0.05 MB) are too large to upload directly and must be imported into CloudFormation via an S3 bucket. Nested stacks help you break up CloudFormation templates into smaller, reusable templates that can be composed into larger templates. At least one resource under Resources must be defined for a CloudFormation template to be valid — I'm going to repeat that in a second because it's so darn important.

Now let's talk about the template sections. You definitely need to know these for the exam, because you'll get questions where you have to pick out which section does what. We have Metadata, which is extra information about your template; Description, which says what your template does; Parameters, which is how you get user input into the template; and Transform, which applies macros — this is like applying a mod that can change the anatomy of the template, and a good example is SAM, the Serverless Application Model. Then you have Outputs, which are values you can export and import into other stacks; Mappings, which map keys to values just like a lookup table; Resources, which define the resources you want to provision — and I'm going to repeat it again, at least one resource is required; and Conditions, which control whether resources are created or properties are assigned. So make sure you know all the sections inside and out, because it'll earn you a few points on the exam.
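Purely as a visual aid, here's a bare-bones skeleton with each of those sections in place — the values are placeholders, and I've left Transform out since it only makes sense when you're actually applying a macro like SAM:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: What this template does
Metadata:
  Purpose: Extra information about the template
Parameters:
  Environment:
    Type: String
    Default: dev
Mappings:
  RegionMap:
    us-east-1:
      Ami: ami-12345678            # placeholder lookup value
Conditions:
  IsProd: !Equals [!Ref Environment, prod]
Resources:
  MyBucket:                        # at least one resource is required
    Type: AWS::S3::Bucket
Outputs:
  BucketName:
    Value: !Ref MyBucket
```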
On to the second page. Stack updates can be performed in two different ways. First, there are direct updates: you directly update the stack, you submit your changes, and CloudFormation immediately deploys them — this is what you'll be used to doing when you use CloudFormation. The other way is executing change sets: you preview the changes CloudFormation will make to your stack — that preview is what we call a change set — and then decide whether you want to apply those changes. It's basically a review process.

Stack updates will change the state of resources in different ways depending on the circumstances, and we need to know them. Update with No Interruption updates the resource without disrupting its operation and without changing its physical ID. Update with Some Interruption updates the resource with some interruption but retains the physical ID. Replacement recreates the resource during the update and generates a new physical ID. You can use a stack policy to prevent stack updates on resources, to prevent data loss or interruption to services.

We also have drift detection. This is a feature that lets CloudFormation tell you when your expected configuration has changed due to a manual override. An example: say you have a CloudFormation stack that creates a security group and a bunch of other resources, and another developer comes in and deletes that SG. CloudFormation would still think it's there even though it's not — but with drift detection turned on, it will tell you that's no longer the case.

And on to the last page — this is a long one. Let's talk about rollbacks. A rollback occurs when CloudFormation encounters an error while you create, update, or destroy a stack. When a rollback is in progress, you'll see ROLLBACK_IN_PROGRESS; when a rollback of an update succeeds, you'll see UPDATE_ROLLBACK_COMPLETE; and when it fails, you'll see UPDATE_ROLLBACK_FAILED. Then you have pseudo parameters, which are parameters predefined by AWS — for example, a Ref of AWS::Region would return us-east-1.

Next, resource attributes — there are a lot of different policies under here. CreationPolicy prevents a resource's status from reaching CREATE_COMPLETE until CloudFormation receives a specified number of success signals or the timeout period is exceeded. DeletionPolicy preserves, or in some cases backs up, a resource when its stack is deleted — the options are delete, retain, or snapshot. UpdatePolicy controls how to handle an update for an Auto Scaling group, ElastiCache, an Elasticsearch domain, or a Lambda alias. UpdateReplacePolicy retains, or in some cases backs up, the existing physical instance of a resource when it's replaced during a stack update — again delete, retain, or snapshot. And then we have DependsOn: the resource is created only after the creation of the resource specified in the DependsOn attribute. There are some cases where you'd use DependsOn, and other cases where you'd use a WaitCondition instead.
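To make those resource attributes a bit more concrete, here's a small illustrative snippet — the resource names and values are made up, not from the course material:

```yaml
Resources:
  LogsBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain          # keep the bucket when the stack is deleted
  WebServer:
    Type: AWS::EC2::Instance
    DependsOn: LogsBucket           # only create after LogsBucket exists
    CreationPolicy:
      ResourceSignal:
        Count: 1                    # wait for one success signal...
        Timeout: PT15M              # ...for up to 15 minutes
    Properties:
      ImageId: ami-12345678         # placeholder
      InstanceType: t2.micro
```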
We also have intrinsic functions, which let you assign values to properties that aren't available until runtime. The two most important to know are Ref, which returns the value of a specified parameter or resource, and GetAtt, which returns the value of an attribute from a resource in the template. Then there are a few CLI commands you should recognize — the create-stack command, because sometimes a question will actually show you CLI commands and it's good to know what's what, and the update-stack command. The last thing I want to mention is the Serverless Application Model: it's an extension of CloudFormation that lets you define serverless applications. It doesn't get its own cheat sheet because there just isn't enough information on it. There's a lot more to CloudFormation than this, but for the associates this is good enough — if you were going for the DevOps Pro, this would be a six-page cheat sheet. So there you go; hopefully this helps you on exam day.

Hey, this is Andrew Brown from ExamPro. We're looking at the Cloud Development Kit, also known as CDK, which is a way to write infrastructure as code using an imperative paradigm with your favorite language. So let's get into it. To understand CDK, I want to talk about transpilers for a moment. A transpiler turns one kind of source code into another, and CDK transpiles into CloudFormation templates. So in a simple diagram, we have CDK on the left, and under the hood it turns into CloudFormation templates.

This is really the difference between imperative infrastructure and declarative infrastructure, so let's talk about the two. Imperative is implicit: you generally know what resources will end up being created, but the details are filled in for you. That allows for more flexibility, but you have less certainty, because you don't know exactly what's going to be created and you don't have full visibility over it — the trade-off is that you get to write less code. An example of being imperative is saying "I want an EC2 instance" and letting the tool fill in all the other details; I just want to say that I want one, without worrying about everything else. That's what CDK is — it's imperative. Declarative, on the right-hand side, is explicit: we know exactly what resources will be created in the end state, there's less flexibility, and we're very certain of every single little thing that's going to happen, but we have to write a lot more code. The comparable example is "I want an EC2 instance, and I have to tell you exactly every detail of it" — and that is what CloudFormation is; it's declarative by nature.

I said earlier you can use your favorite language with CDK, so let's talk about its language support. CDK was first available only in TypeScript, and then they eventually started releasing support for other languages, so we now have TypeScript (which runs on Node), Python, Java, and .NET — that's all so far, and the docs list exactly which versions are supported. I'm still waiting for a Ruby version, and hopefully by the time you're watching this a Ruby version is available; generally, whatever languages the AWS SDKs support is what we'll see, so I would not be surprised to see a PHP one and also a Ruby one eventually.
But I don't think you'll get one in PowerShell. I also want to make a note about how up to date CDK is with CloudFormation. The CDK API may not yet have implemented specific constructs for every AWS resource that's available in CloudFormation — it just takes time for them to write this stuff, and they have a lot of languages to support. My best guess is that TypeScript is the one that supports the most AWS resources, with the other languages following behind — I'd think Python next, then Java, and probably .NET last — so keep that in mind. That's one of the things you have to consider with CDK: if you need full control of everything CloudFormation offers, you might have to just use CloudFormation templates. So explore there and see what you can do.

Hey, this is Andrew Brown from ExamPro, and we are looking at the Serverless Application Model, also known as SAM. This is an extension of CloudFormation that lets you define serverless applications, and I always like to think of it as a CloudFormation macro that you use via transforms. So let's get to it. SAM is both an AWS CLI tool and a CloudFormation macro, which makes it effortless to define and deploy serverless applications. You might be looking at that word "macro" and wondering what it means. The textbook definition is "a replacement output sequence according to a defined procedure; the mapping process that instantiates a macro use into a specific sequence is known as macro expansion." That doesn't make a whole lot of sense, at least to me, so I've reworked the definition: a macro allows you to change the rules of how code works, letting you embed a language within a language. Macros serve to make code more human readable, or allow you to write less code. Creating a language within another language is called a DSL — a domain-specific language — and by using macros, you're creating a DSL.

CloudFormation lets you apply macros through the Transform attribute, and this is how SAM is used: you add a Transform specifying AWS serverless, and you then have access to new resource types — a serverless Function, Api, and SimpleTable — along with a bunch of other properties that the SAM macro injects.

To really understand the value of SAM, it's great to put it up against CloudFormation in a one-to-one comparison. So I've set up an example: API Gateway calling a Lambda, and that Lambda gets data from a database such as RDS. On the left-hand side is what it looks like written in pure CloudFormation, without SAM, and we're 100 lines in; on the right-hand side we have it with SAM, and it's about 50 lines. So you have at least a 30% reduction in code when writing in SAM, and this is a fairly verbose example — in most cases I think you could see a 70 to 80% reduction in code for the serverless components. So SAM is going to save you a lot of time.

Now let's talk about the SAM CLI, because I said at the beginning of this section that SAM is both a CLI and a CloudFormation macro. The SAM CLI makes it easy to run, package, and deploy serverless applications — your Lambdas. These are its CLI commands; I don't think you need to learn them all, but it's good to go through them and get some hands-on with this kind of stuff.
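Before we go through the CLI commands, here's what that Transform and one of those serverless resource types look like in a minimal SAM template — the handler, runtime, code path, and route are placeholders of my own, not the course's example:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31   # this is the SAM macro
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler              # placeholder handler
      Runtime: python3.8
      CodeUri: ./src
      Events:
        HelloApi:
          Type: Api                     # implicitly creates the API Gateway piece
          Properties:
            Path: /hello
            Method: get
```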
Starting at the top, we have sam build, which prepares the Lambda source code to be deployed by packaging it for upload — it doesn't upload anything, it just packages it into an artifact. Next is sam deploy, which uploads the packaged Lambda code as an artifact and then deploys it. If you're wondering what an artifact is, it's just a fancy word for a zip file. Then there's sam init: if you haven't started a project yet, you can run this and it will give you a bunch of default folders and files, all set up for a serverless project — I would think it sets things up so they're ready for the Serverless Application Repository as well. Then you have generate-event; I believe this is for testing — I've never used it — and it generates sample payloads for your different event sources. Then invoke, which runs a single Lambda locally; start-api, which runs your serverless application locally for quick development; and start-lambda, which is very similar to invoke — looking at the two, I'd probably say you're more likely to want start-lambda than invoke. Then you have logs, which fetches the logs for a Lambda function from CloudWatch Logs, so you can see them locally without having to log into the console. The last three really have to do with the Serverless Application Repository: package packages a serverless application — it creates a zip and uploads it, so it kind of does build and deploy in one go, but for the application repository; publish actually publishes it to the Serverless Application Repository; and validate checks your SAM templates, I believe for syntax errors. So there you go — that's the big rundown.

Hey, this is Andrew Brown from ExamPro, and we're going to be looking at CI/CD, which covers the automated methodologies that prepare, test, deliver, or deploy code onto a production server. Hey, this is Andrew Brown from ExamPro, and we are looking at the CI/CD models. Before we jump into those models, I want to cover a couple of terms in case you're not familiar with them: production and staging. Production, short for prod, is the live server where real paying users are using the platform. Staging is a private server where developers do a final manual test as if they were a customer — we call that QA, short for quality assurance — and they do this before deploying the code to production; it's that last check. So if you see "staging" and "production" and wonder what those mean, those are the terms.

So this is our pipeline from code to deploy, and I'm going to show you three different models. If you look online about CI/CD you might see some additional steps in here — there's a bit of flexibility in how these are defined — but this is what I'm going to show you, and it pretty much is what it is. The first one we're going to look at is continuous integration, short for CI, which automatically reviews the developer's code. The next one is continuous delivery, which automatically prepares the developer's code for release to production. And the last one is continuous deployment, which automatically deploys code as soon as developers push it
— provided all the tests pass. You'll notice that continuous delivery and continuous deployment share the same initialism, which is a bit confusing, but that's just how it is. So if someone says "CD" to you, you've got to get clarification, because it could mean delivery or deployment. Let's go through these individually and learn more.

Continuous integration is first on our list. This is the practice of automating the integration of code changes from multiple contributors into a single software project. Our pipeline here is code, build, integrate, and test. The reason we do this is that it encourages small changes more frequently, and each commit triggers a build during which tests are run, which helps identify whether anything was broken by the changes. It's like having somebody check over the developer's code as they're coding, which definitely speeds up your team's productivity. Here's an example: a developer pushes a new feature to GitHub in a new branch. GitHub triggers a webhook to something like CircleCI, CircleCI creates a new build server, the build server pulls in the code alongside the other developers' code, runs the test suite, and then reports back the results. The results give us test coverage — how much test code has been written to cover the code — and tell us whether any tests failed. That's what most people are familiar with outside of AWS. If we're using AWS, we can just swap these pieces out: we could replace GitHub with CodeCommit, the webhook with a Lambda, CircleCI with CodeBuild, and the results would go into S3 as what's called an artifact — a fancy word for a zip — with CodePipeline tying it together. So that's continuous integration; let's move on to the next one.

Next is continuous delivery. This is the practice of automating the preparation of code to be released to production or staging branches — you're preparing the codebase for deployment, but deployment itself is still a manual process. So in our pipeline we now have code, build, integrate, test, and release, with a strong focus on that last one, release. Here's an example, and you'll notice half of it we've already seen in the previous one, except now we're using all AWS services: CodeCommit, the Lambda function, CodeBuild, and so on — that was the continuous integration part. Now let's look at the continuous delivery part. We have that S3 artifact, which tells us the test coverage is good — in some projects code can't be accepted unless you write a certain amount of test code, and whatever that threshold is gets defined per project; it could be 30%, could be 70% — and all the tests pass, so that's great. We push that to a Lambda, and that Lambda checks that everything is good and, if so, creates a pull request in CodeCommit. Now it's left for the developers to review: they check over that code, and if they're happy with it, the three of them vote on it, and if there's consensus, one person decides to release that code. They approve it, and the code gets merged into master. And master generally corresponds to production — it could be the production server or a production branch,
but either way, master and production usually mean the same thing. That doesn't mean the code is deployed yet — it means it's ready to be deployed.

So we're on our last model, continuous deployment. This is the same as continuous delivery, but it automatically deploys the changes to production. There's our pipeline — blue all the way across — and here's our technical architecture, which looks very similar to the last one but with a little extra on the end. We know continuous integration, we know continuous delivery, and now we're looking at continuous deployment. The last thing we saw in continuous delivery was the feature branch being merged into master, so all the code is ready to be deployed. This is where continuous deployment comes in: something watches for changes in the source — it would be monitoring CodeCommit or GitHub — and as soon as the code is merged in, it triggers CodePipeline to start doing things. In CodePipeline you would define something such as CodeDeploy, so the source gets checked out, maybe goes to CodeBuild to run the tests one final time, and if those pass it gets handed on to CodeDeploy, which starts the deployment process. And so that is the whole pipeline.

Hey, this is Andrew Brown from ExamPro, and we are at the end of the CI/CD section, so let's do a quick review with the CI/CD cheat sheet. CI/CD is the automated methodology that prepares, tests, delivers, or deploys code onto servers — generally production servers. Production, which can be abbreviated to prod, is the environment intended to be used by paying users; it's your live server. Then you have your staging environment, which is intended to simulate a production environment for last-stage debugging — that's why they call it a staging server. Then you have continuous integration, abbreviated CI: this is automating the review of developers' code, making sure their code is in good shape before we allow it to be turned into pull requests, which speeds up the development cycle. You run the tests with a build server — in our case we used CodeBuild, and that's how that would work. Then you have continuous delivery: this is automating the preparation of the developer's code for release. It's not being deployed — it's one step before that. The example here is very similar to the last one, except after you run the test suite with a build server and the tests pass, it automatically creates a pull request or merges the branch into staging, because sometimes staging is a precursor to production. It's saying "we're ready to deploy this code, but you need to check it over before doing so." Continuous deployment takes that a step further — it's also abbreviated CD — and the idea is that it automatically deploys the developer's code once it's ready for release. So it does all the steps prior: it runs the test suite on a build server, and if the tests pass, it immediately merges into production and deploys. Everything is automated, end to end. The last thing I want to touch on with continuous deployment is that it can refer to the entire pipeline, so when you think of continuous deployment on AWS, you should be thinking of CodePipeline, CodeCommit, CodeBuild, and CodeDeploy all combined. So there you go.
Hey, this is Andrew Brown from ExamPro, and we are looking at CodeCommit, which is a fully managed source control service that hosts secure Git-based repositories — I like to think of it as the GitHub of AWS. To understand CodeCommit, we need to know what a version control system is: a system that records changes to a file or set of files over time, so that you can recall specific versions later. There's a famous story about a codebase that didn't have a version control system — I think it's either Doom or Wolfenstein, I can't remember which. They had a bunch of people programming the game across multiple computers back in the early 90s, and since they weren't using a version control system, they had to move code around on floppies. It was very difficult to manage all that code, and if anything happened to the one computer holding the source, all the code was gone. That's what version control systems alleviate. In 1990 we got CVS, which stands for Concurrent Versions System — not a very creative name, but very clear about what it does. Then came Subversion, which is where I started out, using SVN. And in 2005 we had a renaissance for version control systems with Mercurial and also Git, which you're probably familiar with. Git is actually the most popular version control system, and there's good reason for it: it's a distributed version control system, and its goals — which it definitely delivers on — are speed, data integrity, and support for distributed, non-linear workflows. Because of all those features of Git, it's the one pretty much everything uses now.

So what is CodeCommit, then? CodeCommit is a service that lets you store your Git repositories in the cloud. Developers can push and pull code from the cloud repository, and it has tools to resolve conflicts. You just go ahead and create your repository, and then you can push code to it, pull code down, and so on. If you've ever used something such as GitHub, Bitbucket, or GitLab, this is the same idea — it's AWS's solution for hosting repositories, but with some special features.

So why would you want to use CodeCommit? Let's talk about the key features. The first one, which I think is a really strong point, is that it's in scope with a lot of compliance programs — one of them being HIPAA. That might be an advantage it has over other competitors, or it might simply be more cost effective to get that compliance here: with GitHub you might have to pay for enterprise support, which is very expensive, whereas on AWS it's very inexpensive. Repositories are encrypted at rest as well as in transit, so security is definitely a key feature. It can handle repositories with large numbers of files and branches, large file sizes, and lengthy revision histories — though I've never felt any limitation like that on other platforms, so I feel like they all do this — but it's great that they state there's no limit on the size of your repositories or the file types you can store. And you keep your repositories close to your other production resources on AWS. That, to me, is the largest value: CodeCommit has a lot of synergy with other AWS services, and the benefit is that it's going to help increase the speed and frequency of your development life cycles,
and also lets you build some creative automation around them. To control access to CodeCommit, you use IAM — that's what says, hey, these users are allowed to have access to this repository.

Hey, this is Andrew Brown from ExamPro, and we are looking at Docker, which is a third-party tool designed to make it easier to create, deploy, and run apps by using containers. To best understand containers, we should compare them against virtual machines, which is what we're used to using. Imagine we launch an EC2 instance — any time you launch an instance, you're essentially getting a virtual machine, running on a hypervisor that's used for launching VMs. So you launch your virtual machine, you choose Ubuntu, and then you have to go in and set up your dependencies: if you want to run a Django app, you have to install Python and the additional libraries to run that application, and then you install the Django application itself. But this EC2 instance is really large, so you want to make the best use of it, and you say, okay, I'm going to put my MongoDB database on here too — so you go into the server and install the packages and binaries to run that. And then you say, I also want to run RabbitMQ on here, so you go into the virtual machine and install that as well.

The issue with installing multiple applications on one virtual machine is that some of these programs' dependencies may not work well in a particular OS — maybe RabbitMQ doesn't run best on Ubuntu — or maybe Django and Mongo rely on a similar library but need different, conflicting versions, and that makes it hard to have them installed side by side.
Or let's say you just want to kill MongoDB completely and reinstall it — you can't cleanly do that; or you could, but it would be a lot of labor. Another issue when everything is on the same virtual machine: say RabbitMQ ends up eating all the memory — there's nothing stopping it from consuming everything and starving the Django app and the MongoDB install. Or imagine someone breaks into your virtual machine: they now have access to everything. So there's a real problem with all that shared space, and generally people won't run multiple applications on one virtual machine — they'll launch additional EC2 instances and run one app per VM. But then you always end up with leftover space, because the virtual machine is never exactly the right size for the job at hand, so you can't make the best use of it.

Now let's look at containers. We launch the same kind of EC2 instance, but this one has the Docker daemon installed, and that's what's used to launch containers. When you launch a container, the container has an OS — in this case Alpine — but not everything is virtualized: the hardware virtualization isn't part of the container. What you do get to package in are your custom libraries, packages, and binaries — any dependencies specific to, say, the Django application. When you want to run MongoDB, you just launch another container and package in the libraries and OS bits specific to MongoDB, and you do the same with RabbitMQ. So it's very easy to launch applications, and you can remove them easily as well — whereas when you install everything directly on a VM, removing RabbitMQ doesn't mean its libraries, packages, and binaries go with it. Containers give us a lot of flexibility to easily launch and destroy applications, and we can think of the EC2 instance more as available space: whatever we're not utilizing, it's easy to launch more things into it. Just to reiterate: VMs don't make the best use of space, and apps aren't isolated, which can cause configuration conflicts, security problems, or resource hogging; containers let you run multiple apps that are virtually isolated from each other, launching new containers and configuring the dependencies per container.

Now let's take a look at Dockerfiles. A Dockerfile holds the commands required to assemble your final Docker image. Here's an example Dockerfile for setting up a Ruby on Rails application — it installs the Postgres client so it can use a Postgres database, and it installs Node.js, probably for the front-end asset pipeline. Let's walk through how this Dockerfile works, as sketched just below. The first instruction is FROM, which pulls in another Docker image as the basis of your Dockerfile. The reason you do this is that setting up Ruby from scratch involves a lot of dependencies and would make this file really large, so it's nice to build off another image — and once your Dockerfile is turned into an image, others can build off that one, which is an interesting strategy. Then you have RUN, which lets you execute any command you'd normally run in bash: you can see apt-get installing packages, and mkdir making a new directory. Docker does this in layers — everything in the scope of a RUN command becomes a layer, and Docker caches those layers as it builds the image. If you later change something further down the Dockerfile, it won't rebuild from scratch; it rebuilds from the last cached layer onward, which makes rebuilds a lot faster. Next is WORKDIR, which changes the default folder for future commands, so the rest of the file is less verbose. Then COPY, which copies files or folders from your local computer into the image. Then the ENTRYPOINT: this is the command executed when the container first starts, and unlike CMD it isn't overridden by the arguments you pass when running the container. Then EXPOSE, which declares the network port to listen on at runtime — here it's 3000, the default port for Ruby on Rails in development mode, which tells you this is a Rails app, using Postgres, intended for development. And the last one is the CMD instruction, which passes the default arguments to the entrypoint. We're not sure exactly what's inside entrypoint.sh, but it's clearly a shell script that does something and was copied in; what we're passing it is rails server -b 0.0.0.0, which binds the server and starts up the application.
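Here's a rough reconstruction of the kind of Dockerfile being described — the base image tag, package list, and paths are illustrative, not the exact file from the video:

```dockerfile
# Base image: pulls in Ruby so we don't have to install it ourselves
FROM ruby:2.6

# Each RUN creates a cached layer
RUN apt-get update -qq && apt-get install -y postgresql-client nodejs
RUN mkdir /app

# Default folder for the commands that follow
WORKDIR /app

# Copy the application code from the local machine into the image
COPY . /app

# Script executed when the container starts
ENTRYPOINT ["./entrypoint.sh"]

# Rails default development port
EXPOSE 3000

# Default arguments passed to the entrypoint
CMD ["rails", "server", "-b", "0.0.0.0"]
```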
So hopefully that gives you a perspective of what a Dockerfile is and how it works. I also want to cover some common Docker commands with you — there are definitely more than what's in this list, but I think you should know these. The first is docker build, the command you use when you want to build an image from a Dockerfile. Then docker ps, which produces a list of containers — you'll only see containers that are actually running. docker images is the list of images you have on your machine — the images you built from your Dockerfiles, or pulled from repositories. Then docker run, which runs a command in a new container. Then docker push, for when you have an image you want to push to a repo — a repo here could be Docker Hub or AWS's Elastic Container Registry, ECR. And then docker pull, for pulling an image down from a repo. Those are the ones I think are important for you to get some hands-on experience with. So yeah, there you go.

Hey, this is Andrew Brown from ExamPro, and we are looking at CodeBuild, which is a fully managed build pipeline that creates temporary servers to build and test code projects. CodeBuild is a fully managed build service in the cloud: it compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. It eliminates the need to provision, manage, and scale your own build servers — which, by the way, is really hard to do; I tried it and I'll never do it again. It provides pre-packaged build environments for popular programming languages and build tools such as Apache Maven, Gradle, and more, and you can also customize build environments to use your own build tools, which is very useful — that's where Docker comes in. And it scales automatically to meet peak build requests.

Now that we have a little knowledge of CodeBuild, let's look at the workflow and dig a bit deeper. The first thing is that you have to trigger CodeBuild somehow: you can do that via the console, the CLI, or the SDK, but the most common case is that it's part of your CodePipeline — after the pipeline pulls the source, it passes it on to CodeBuild, and that's how it gets triggered. When you set up CodeBuild you have to choose a build environment, and AWS has some pre-built managed images for you: Amazon Linux 2, Ubuntu, and Windows Server. If they don't have the things you need installed, then you have to provide a custom image, which is a Docker image — you'd normally store it in ECR, the Elastic Container Registry, upload your image there and reference it, though you could reference other Docker sources like Docker Hub; you're probably going to use ECR. The next thing is the source code: you need to get source code into the build project, so you use a source provider — that could be CodeCommit, GitHub, Bitbucket, and so on. You could also run with no source
— the build script, when it triggers, could just pull whatever it needs from the internet — but generally you want to provide a source provider. And as for how the build actually runs, that comes down to the buildspec.yml file. This is generally part of your codebase, so when CodeBuild pulls your code from the source provider, this file is sitting in the root directory, and it spells out all the commands that need to be run. One thing I want to point out: even if you have a buildspec file with your project, you can override it with other build commands — and that's important for the exam, because they might ask, "you have this buildspec file but you want to override it, how can you do it?" The answer is: use the CLI for that. So hopefully that gives you an idea of the CodeBuild workflow.

I just want to touch again on the build environment. The managed Docker images are maintained by CodeBuild — by AWS — and you'll want to check them to see what comes pre-installed. You choose an environment image from Amazon Linux 2, Ubuntu, or Windows Server, and there are two variants each for Amazon Linux 2 and Ubuntu. These are just images, so you could theoretically download them, run them in Docker, and poke around to see what's inside, but I'm pretty sure the contents are listed in the docs. If your needs aren't met by the managed images, you'll have to build a Docker image, store it in ECR, and reference it that way.

Let's look at a couple of use cases for CodeBuild. The first is generating static pages from a JAMstack site — JAM stands for JavaScript, APIs, and Markup. Say we have a website built with Gatsby, a JAMstack framework, that needs to render out static pages and deliver them to S3 static website hosting. Our Gatsby code lives in CodeCommit, we trigger CodeBuild via the console, CodeBuild pulls from its source (CodeCommit), the build renders out the static pages, and the artifact is output to an S3 bucket, ready for static website hosting. The next use case is running test code and reporting test coverage — probably the most common use for CodeBuild. A developer needs to ensure their code passes all tests before being allowed to make a pull request. Let's say you have a Ruby on Rails application and you push some code to a feature branch: that triggers a GitHub webhook, which triggers a Lambda function that uses the SDK to tell CodeBuild to start building the source code. CodeBuild pulls down the Rails project and runs RSpec, which generates test coverage and also reports whether the tests failed or not; those reports get put in a zip, passed on to another Lambda function, and that information is sent back to GitHub to determine whether the pull request should go ahead. So those are two use cases.

Now let's take a look at the buildspec.yml file, which is the most important thing you need to know about CodeBuild — you need to know it inside and out, and definitely get some hands-on experience with it, because it's super important.
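Before walking through the real thing, here's a minimal sketch of what a buildspec.yml can look like — the commands here are placeholders for a Ruby project, not the exact file shown in the video:

```yaml
version: 0.2
phases:
  install:
    commands:
      - bundle install            # only for installing packages into the build environment
  pre_build:
    commands:
      - echo "commands that run before the build"
  build:
    commands:
      - bundle exec rspec         # commands that run during the build
  post_build:
    commands:
      - echo "commands that run after the build"
artifacts:
  files:
    - '**/*'                      # zip everything up as the build artifact
```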
And definitely get somehands on experience with this because it's super important. So the build spec providesthe build instructions, the build spec YAML needs to be at the root of your project folder.And here I have an example. And this is actually the one we use on exam Pro. So we have a Rubyon Rails application. And so we don't use it all the stages, but you generally use prettymuch everything. And so let's walk through it. So the first thing we need to define isthe version of the build spec. And so there's 0.1, and 0.2, and 0.2 is what is generallyrecommended. And if you want to know the difference, I don't think they show up on the exam. Butit's good to know. And this affects the default shell in the built environment.So 0.1 runseach build command in a separate instance. Whereas 0.2 runs all build commands in thesame instance, build environment. So this is important because it tells you, you know,what information is going to get shared into to the next command. So you know, that changesthe way you're going to write that file. So the next thing are phases. So phases are thecommands that run during each phase of the build, and there is a very specific orderthat they go through. The first is install. So this is only for installing packages inthe build environment. Then the next one is pre build.So these are commands that runbefore building, then you have build. So these are commands that you run during the build,then you have post build, these are commands you run after the build. So they're prettystraightforward in terms of their definitions. The only thing that's not clear is actuallyat what step does the source code get pulled? I think it is after the install step. So thatis a step that is not very clear there. But you know, it's not that important for theexam. But you should know these different build phases. And then the last thing thereis artifacts, so we're not showing that here in this document. But you can configure theartifact to build somewhere else. So if you wanted, there's like a default place wherethat build artifact goes. But if you wanted to specify the s3 output, you could do soin here. And again, artifacts are just the results that are zipped that go to s3.Sothere you go. Alright, so we are at the end of the code build section. So let's move onto the code build cheat sheet. So code build is a fully managed build pipeline to createtemporary servers to build and test code, compile source code, run unit tests and produceartifacts that are ready to deploy provides pre pre packaged build environments, or youcan build your own environments as Docker containers, use a build spec yamo to providebuild instructions. This file is stored in the root of your project. And we need to knowthe contents of this file. So let's go through it right now. So we have version 0.1 of thisfile, which runs each build command in a separate instance. Then you have version 0.2, whichruns all build commands in a separate install in the same instance. And I think it's likeit's when it says instance, that means like instance of a shell or bash shell, I'm notnecessarily another easy to instance, that would be ridiculous. Then we have differentphases.So we have runs these commands and through different phases. So we have the installphase. This is only for installing packages in the built environment. We have the prebuild stage, which is for commands that run before building, we have the build stage phase,which is commands that you run during the build. 
Alright, so we're at the end of the CodeBuild section — let's move on to the CodeBuild cheat sheet. CodeBuild is a fully managed build pipeline that creates temporary servers to build and test code: it compiles source code, runs unit tests and produces artifacts that are ready to deploy. It provides pre-packaged build environments, or you can bring your own environment as a Docker container. You use a buildspec.yml to provide build instructions, and this file is stored in the root of your project. We need to know the contents of this file, so here it is again: version 0.1 runs each build command in a separate instance, and version 0.2 runs all build commands in the same instance — and when it says "instance", that means an instance of a shell, a bash shell, not an EC2 instance, that would be ridiculous. Then commands run through the different phases: the install phase, which is only for installing packages into the build environment; the pre_build phase, for commands that run before building; the build phase, for commands that run during the build; and the post_build phase, for commands that run after the build. So there you go, that's CodeBuild in a nutshell, and we're ready for the exam.

Hey, this is Andrew Brown from exam Pro, and we are looking at CodeDeploy, which is a fully managed deploy pipeline to deploy to staging or production environments. CodeDeploy is a fully managed deploy service in the cloud: you can deploy to EC2, on-premises servers, Lambda, or Elastic Container Service. You can rapidly release new features, which is the idea behind CodeDeploy, you can update Lambda function versions as a method of deployment, and you can avoid downtime during application deployment. That's generally done with blue/green deployment, because the idea there is that it replicates an entire environment, moves traffic over, and then kills the old one, whereas in-place generally takes effect on the existing servers. So we have the ability to perform in-place and blue/green deployments, which we'll cover in a moment. It integrates with tools such as Jenkins, CodePipeline and other CI/CD tools, and it integrates with existing configuration management tools such as Puppet, Chef and Ansible.

Alright, so the first thing we're going to look at for core components is creating an application. This is the first thing you do in CodeDeploy, and an application is really just a container for the rest of the components that make up CodeDeploy — it's as simple as going in and naming your application. Then you configure the other components. The first is a deployment group: a set of EC2 instances or Lambda functions that your new revision is going to be deployed to. Once you have your deployment group, so you know where you're going to deploy, you can create a deployment. When you create a deployment you choose the code you want to upload, and you can configure it with a lot of rules — whether you want it to roll back, how you want it to handle failures, and a bunch of other settings, which we'll look at in more detail in the CodeDeploy follow along. Then you have your appspec.yml file, which is extremely important to know, and which we cover here in great detail. It contains all the deployment actions CodeDeploy uses to figure out how to install and restart your application on the actual server — it's just a YAML file. And last, we have the revision itself, which is the embodiment of all the changes that will take effect on the server: the appspec.yml, the application files, any configuration files, and any executables. So there you go, those are the core components of CodeDeploy. Below is a rough sketch of the same components from the CLI.
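This is only a sketch, and the names here (my-app, my-deployment-group, the role ARN, the bucket and key) are placeholders — it's just to show how the application, deployment group, revision and deployment map onto commands:

    # Create the application "container" (Server = EC2/on-premises)
    aws deploy create-application \
      --application-name my-app \
      --compute-platform Server

    # Create a deployment group that targets EC2 instances by tag
    aws deploy create-deployment-group \
      --application-name my-app \
      --deployment-group-name my-deployment-group \
      --service-role-arn arn:aws:iam::111111111111:role/MyCodeDeployServiceRole \
      --ec2-tag-filters Key=app,Value=cd,Type=KEY_AND_VALUE

    # Kick off a deployment of a zipped revision sitting in S3
    aws deploy create-deployment \
      --application-name my-app \
      --deployment-group-name my-deployment-group \
      --s3-location bucket=my-bucket,key=revisions/revision-2.zip,bundleType=zip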
So it's very important that we know the difference between in-place deployments and blue/green deployments for CodeDeploy. We'll start with in-place. When you set up your deployment group you get the option between in-place and blue/green, so we choose in-place, and then you choose the environment: EC2 instances in an Auto Scaling group, EC2 instances selected by their tags, or on-premises instances. Here's how the process works: the application on each instance in the deployment group is stopped, the latest application revision is installed, and the new version of the application is started and validated. You can use a load balancer so that each instance is deregistered during its deployment and then restored to service after the deployment is complete — generally, when you're doing in-place, using a load balancer is a great idea. And lastly, only deployments that use the EC2 or on-premises compute platform can use in-place deployments. Notice that you cannot use a Lambda function here, and there are no ECS clusters either, so just be aware of that. Now we'll move on to blue/green.

Alright, so let's take a look at blue/green deployments for CodeDeploy. Here you choose the blue/green option, and then you choose between "automatically copy an EC2 Auto Scaling group" or "manually provision instances" — you're almost always going to want the first one, and that's what would generally show up on the exam if this comes up. The idea is that if you already have an environment using an Auto Scaling group, you specify it here and CodeDeploy copies it: new EC2 instances spin up, CodeDeploy applies the latest code revision and installs it using the appspec.yml file, traffic shifts over from the old environment to the new one, and then the old infrastructure can be destroyed. Let's go through that again step by step. Instances are provisioned for the replacement environment — that could be the Auto Scaling group being cloned. The latest application revision is installed on the replacement instances; if you're using CodePipeline, that could be the artifact passed from the source stage to CodeDeploy, which then installs it into the correct location (we'll see where in the appspec.yml). An optional wait time occurs for activities such as application testing and system verification. Instances in the replacement environment are registered with an ELB, causing traffic to be rerouted to them — that's the traffic shift. And instances in the original environment are deregistered and can be terminated, or kept running for other uses. People don't always terminate their old environment right away, because sometimes you need to debug it or fall back to it in the case of a disaster, so that part is up to you.

Now we're going to take a look at the appspec.yml file, which is responsible for saying where the code should be installed and how to get the new version of the code running. The example I'm describing here is actually from exam Pro's Ruby on Rails application — it's an older version, but it still makes for a very good example of real-world use of the appspec. There is some variation possible in this file, but this one is a good example, so let's walk through it. The first thing is we choose our os, which can be Linux or Windows. Then, under files, we say where the code should be downloaded to: the forward slash for the source is there because you provide the code in the form of a zip, so it means "take everything in that zip and put it in /home/ec2-user/app", because I want the application to live in an app directory. Then we can apply permissions — here I just want to make sure ec2-user is the owner of that app directory. And then we have our hooks. You have ApplicationStop, and that command should be responsible for stopping the application.
If you notice, we're providing a location to a bash file for each hook. The way the appspec works is that you write bash scripts for all of these hooks and ship them as part of the zip that's provided to CodeDeploy — so the appspec.yml is in the zip, and all of these hook scripts are in that zip as well. Then you have a timeout. There is a default, but if you know roughly how long these scripts run you should set a timeout, because it speeds things up if one of them hangs for whatever reason. You can also set which user the hook should run as — I always say ec2-user, especially on Amazon Linux 2. Let's look at some of the other hooks: BeforeInstall, which runs before the code is downloaded onto your server; AfterInstall, for things you want to happen afterwards; and then a command to restart the application in ApplicationStart. I want to point out that the lifecycle event hooks are different depending on whether you're deploying to EC2/on-premises, ECS or Lambda, so look up the documentation for what's available to you — this example is for an EC2 instance, which is the most common use case.

When you perform a deploy in CodeDeploy, you get to visually see all the lifecycle event hooks as they run. You'll see them go from pending to succeeded, or, if they fail, they'll show additional information, and you can see the duration of each one, so it's a really good way to get an overview of the deployment. In the case of a failure, you'll see that the deploy got through some of the scripts and then failed at a particular point, and you can click into that event to get a bit more information. It's not always clear what has gone wrong — in my case it said start_puma.sh failed, so that particular script I wrote had an issue, and inside there might be some information as to what failed exactly, but it's often not clear at all. So there are a lot of cases where you have to log into the EC2 instance to debug CodeDeploy. Generally, what you want to do is stream your CodeDeploy logs to CloudWatch Logs — I don't show that in this section, but it's something you definitely want to do if you're running CodeDeploy in production, because it's such a pain to log into an EC2 instance and debug this stuff.

I also want to hop over to the documentation quickly to show you which hooks are actually available, because hook availability depends on the deployment methodology. For an in-place deployment we have all hooks available to us, and for blue/green it changes based on the case. It's interesting, because when you do a blue/green deployment you will still see all of these hooks in CodeDeploy — the docs just separate them out to say these ones happen on the original instances and these ones on the replacement instances. And then there are hooks that only apply to rollbacks, so the scope varies quite a bit. Do you need to memorize these for the exam? No, but it's good to know, because it can save you time debugging if you're doing this for real in practical use.
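To tie that together, here's roughly what an appspec.yml along the lines of the one described above looks like. This is a hedged sketch: the script names under scripts/ and the exact timeouts are placeholders, not the actual exam Pro files:

    # appspec.yml -- EC2/on-premises compute platform
    version: 0.0
    os: linux
    files:
      - source: /                         # everything in the revision zip...
        destination: /home/ec2-user/app   # ...gets copied here
    permissions:
      - object: /home/ec2-user/app
        owner: ec2-user
        group: ec2-user
    hooks:
      ApplicationStop:
        - location: scripts/stop_app.sh
          timeout: 10
          runas: ec2-user
      BeforeInstall:
        - location: scripts/before_install.sh
          timeout: 60
          runas: ec2-user
      AfterInstall:
        - location: scripts/after_install.sh
          timeout: 300
          runas: ec2-user
      ApplicationStart:
        - location: scripts/start_app.sh
          timeout: 60
          runas: ec2-user

Each hook points at a bash script that ships inside the same revision zip as the appspec itself.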
So to get CodeDeploy working with your compute, you're going to need two things. First, you need the CodeDeploy agent. I always think this is pre-installed on Amazon Linux 1 and 2, but it's not. It's just a service — it's actually written in Ruby, so you have to install Ruby on your server, download the install script and run it. That leaves the CodeDeploy agent continuously running, and it reports back to CodeDeploy on the progress of the lifecycle hooks and install steps as they run. The other component is the CodeDeploy service role. This is pretty easy to create through the IAM console — if you choose CodeDeploy as the service, there are preset policies for you. What this does is give CodeDeploy access to things like Auto Scaling groups and load balancers. I say you may need to create it because in some cases you might not, but it's what allows CodeDeploy to create an additional Auto Scaling group and shift traffic between elastic load balancers. So those are the two things you'll need to set up, which is not very obvious when you're starting out with CodeDeploy.

Hey, this is Andrew Brown from exam Pro, and we are going to be doing the CodeDeploy follow along. We're going to learn how to set up an automated deployment using CodeDeploy, and the first thing we need to do is get an EC2 instance set up with a basic web page, which we'll then turn into an AMI and use with CodeDeploy. So let's make our way over to EC2. We're going to go to Instances on the left-hand side and launch a new instance. I want you to choose Amazon Linux 2 — the top one there — hit Select, and we'll go with the t2.micro because we want to save money. Then, on the configure step, drop down the IAM role selector and you should have an SSM EC2 service role. If you don't, let's go through the process of creating it right now (we also created one in another follow along) — just in case, I'm going to delete my existing role and make a new one. This role gives us access to Session Manager if we need to log into the instance. So create a role, choose EC2, attach the SSM policy, hit Next, Next, and name it SSM EC2 service role — that's what I like to call it — then create it. Go back to the launch wizard, hit refresh, and the role should appear. It's very important that you set this, because it's going to save us a lot of trouble later.

Now, down below we have the ability to put in a user data script, and this is what we came here to do: set up a basic web application. So I'm going to go over to Cloud9 — you should have a Cloud9 environment, and if you don't, watch the Elastic Beanstalk follow along or one of the other ones where I show you how to set one up, it's very easy. We'll make a new folder on the left-hand side called code-deploy-project, and inside it a new file called user-data.sh, and we'll write a script that installs Apache and serves a basic web page.
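Here's a sketch of roughly where that script ends up — we'll walk through it line by line next. One note: in the video the script also switches to the ec2-user, but since user data runs as root, this sketch just changes ownership of the web directory instead, and the page contents are placeholders:

    #!/bin/bash
    # user-data.sh -- install Apache and drop in a basic page

    # install Apache (httpd)
    sudo yum install -y httpd

    # make /var/www easier to work with as ec2-user
    sudo chown -R ec2-user:ec2-user /var/www

    # write a very basic index page where Apache looks by default
    # (if you copy this, keep the closing EOF at the start of its line)
    cat <<EOF > /var/www/html/index.html
    <html>
      <head><title>My Code Deploy App</title></head>
      <body><h1>My Code Deploy App</h1></body>
    </html>
    EOF

    # start Apache now and enable it so it comes back after a reboot
    sudo systemctl start httpd
    sudo systemctl enable httpd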
So the first thing we need to do is specify our shebang — the shebang tells the system what to use to execute the script, which will be bash. In the video I also switch to the ec2-user; then we install httpd, which is what Apache is packaged as, and we change the ownership of the /var/www directory to ec2-user, which just makes it a lot easier to work with. The next thing is to make a basic web page. We place an index file at /var/www/html/index.html, which is where Apache looks by default, and we use a heredoc to write a multi-line string in bash, which is a very useful thing to know. I'm just making a very basic website — the ugliest website I can think of — so we get an html tag, a head and a body, a title of "My Code Deploy App", and the same text in the body as an h1. Then there are a couple of things down below: we start up Apache using systemctl. You could use the alternative syntax, sudo service httpd start, but I've noticed that if you do that, the enable step doesn't work, which is why I'm using the longer systemctl form — that's for anyone familiar with the short form who's wondering why it isn't used. So that should be everything: switch to ec2-user, install Apache, change ownership, create the initial file, start and enable Apache. That all looks good to me.

So I'm going to copy this, go back over to the launch wizard — as you can see, you could upload a file, but I'll just paste it in as text, nice and plain — and assuming I got it all right, we'll be in great shape. If not, that's okay, we'll get in using Session Manager. Next we need a security group. I'll go Next — actually, let's go to tags first, because that's always a good idea: add a Name tag of my-code-deploy-app, because that names the instance (I usually do it afterwards, but let's do it now). Then we'll make a new security group, my-code-deploy-sg, just so we can see what we're doing, and open up port 80. We do not need port 22, because we will not SSH into this — we can get into it using Session Manager. Hit Launch, and in the key pair dropdown choose "Proceed without a key pair", which is actually the safer option. Click through to the instance, and now all we have to do is wait for it to launch, so I'll see you here in a moment.

Okay, so the instance is in the running state. Both status checks haven't passed yet — generally we should wait for those two checks — but we can already try the website. I'm grabbing the public IP address and refreshing, and there's my application. Now, something that's really important to check whenever you set up an app this way is to restart the server, because we absolutely want to be sure that if we stop and start this instance again, the app comes back up. So we're going to stop the instance, let it shut down — it shouldn't take too long, keep refreshing — and once it has stopped, start it back up. That's just a pro tip for you, because there's a high chance the service won't start on boot if something's wrong in the script.
And finding that out later is such a pain — you'd have to rebake the AMI, and I do not want to do that. So I'm going to start the instance up again, give the page a refresh, and I'll see you back here in a moment when it's active again. Welcome back. I didn't wait for my status checks again, but it is running, and if we copy that IP address we can see the application is there, so we're in great shape. If you did want to connect to this instance, you could go to Session Manager — or just click Connect here and choose Session Manager. We don't need to do anything in there, I'm just showing you around. What you'd normally do is run sudo su - ec2-user, because it always logs you in as root — it would be really nice if it logged you in as ec2-user; if you're listening, AWS, please make that change. If we go to /var/www/html and do an ls -la, there's our file. Alright, that's just getting you a little more comfortable with EC2, and we'll pop back out.

The next thing we need to do is make an AMI of this instance. So go to create a new image, and I'm going to call it my-code-deploy-app-0.0.0. It's great to version this way — always give it three digits so you have a little room to breathe — and we'll go ahead and create that image. It'll sit in pending for a bit, and we'll leave our current server running, which is totally fine.

Next I was going to set up a CodeCommit repo. We will need one later, when we add this to CodePipeline, but we'll come back to CodeCommit. What I'd rather do instead is get CodeDeploy set up, and for that we're going to need a deploy file, which they call appspec.yaml — so I'll make a new file in Cloud9 called appspec.yaml. Then let's make our way over to CodeDeploy. We might get stuck partway through if we don't yet have anything to deploy, but we'll do the best we can. On the left-hand side, go to Applications and create a new application. I can see I have an old test one here that I'm going to delete — you won't have that. Back in Applications, create a new one, call it my-code-deploy-app, choose EC2 as the compute platform, and create the application. Now that we have an application, we have to set up a deployment group, and we'll call it my-code-deploy-deployment-group, which is a bit silly. Next it asks for a service role — a role that grants CodeDeploy permission to access your target instances — and we don't have one of those, so that's something we'll have to go create. (We could probably just edit our existing EC2 role, but let's do it properly.) I just pulled up the documentation quickly, and I believe we need to make this service role. So we'll make our way over to IAM — I already have it on the left-hand side, but you might have to type in IAM — go to Roles, create a new role, and for the service that will use the role, choose CodeDeploy.
We'll choose CodeDeploy, and down below there are preset use cases — we'll pick the CodeDeploy one for EC2 and Auto Scaling, which looks fine to me. Go Next, and the policy is already predefined — I love it when they have those predefined policies for you, it saves so much time — hit Next again, name it as suggested, and create the role. Back in the deployment group wizard, let's see if the service role field autocompletes... there it is, so I'll copy the role ARN and paste it in, and hopefully that works. Then we have in-place versus blue/green — I'm going to keep it simple and do in-place. For the environment configuration, select what you want to deploy to: we'll choose EC2 instances, and you can add up to three tag groups, so I guess I have to tag my EC2 instance. We'll make our way over to EC2, go to Instances, find our app, and add a tag — I'll use the key app with the value CD — and save that. Back in the deployment group, select the app = CD tag; we only really need the one. For deployment settings, "all at once" sounds great to me, we'll skip the load balancer because we don't need it, disable rollbacks, and go ahead and create the deployment group.

So now that we have this deployment group, CodeDeploy knows where to deploy. If we go to create a deployment and scroll down, we need to supply the revision. It's interesting — you can choose S3 or GitHub, but you can't point it at CodeCommit, which is fine, I guess. So we need to create a revision for our server and place it in an S3 bucket. Let's go over to Cloud9. I'm going to make a new file called index.html, copy the page out of the user data script, and change the text to v2 so we can see the difference. Now, I'm pretty sure we're going to need an appspec.yaml file inside the revision, because that's what CodeDeploy uses to figure out the deploy — so give me a second to pull up the documentation for that. Okay, I've got the appspec documentation up, but actually, you know what, I think we should go set up that CodeCommit repo after all, because it's going to keep things a little more organized. So go to CodeCommit, hit Create, call the repository my-code-deploy-app, and create it. We get a clone URL — grab the HTTPS one so it copies to your clipboard — then make your way back to your Cloud9 environment. Get into the project folder (cd ~/code-deploy-project) and run git clone with that URL. That puts a folder inside a folder, which is a little messy, but it works. Then drag the index.html into it, drag the appspec.yaml in as well so it's a bit more organized, and open up appspec.yaml. The first thing we need to define is the version, which is going to be 0.0.
I don't think there's any other version at this point — 0.0 is the only version there is for the appspec. Then we set os, and we'll choose linux because we're not using Windows. Next we can set files and permissions — files is where the code gets placed. I'm trying to decide where this should go; I'll say the source is /, which is the root of the revision zip we'll upload, and for the destination I started typing /home/ec2-user, but no — we'll use /var/www/html, so the page lands where we want it to be. Then we'll make sure the permissions are what we need them to be: I'll add an object for /var/www, owner ec2-user, group ec2-user. We've technically already done this in the user data, but I'm going to force it anyway, and it's also a nice way to see how the permissions section is used.

The next thing is the hooks. There's ApplicationStop, which is something we definitely want: we give it a location of stop_app.sh, a timeout — it's always good to provide a timeout so a hung script doesn't take forever; stopping the app is super fast, so we'll give it 10 seconds, probably more than we need — and we'll run it as ec2-user. We're going to need that stop_app.sh file, so I'll make a new file called stop_app.sh — whoops, and if we name it wrong we'll have a lot of trouble on our hands, so double-check the name. Then I'll go to the user data script, grab the httpd line, and paste it in. Generally, when you run commands from these hooks you want to give the absolute path, so we need the absolute path of httpd. I'll connect to the instance using Session Manager, run sudo su - ec2-user, and type whereis httpd, which gives us the full path — I'll paste that full path into the script, because I do not want any problems.

So that's our stop script... although actually, we don't necessarily have to stop Apache, we could just restart it. Maybe that's a little cleaner, and I think that's what I'll do: I'm going to rename this to restart_app.sh, which is going to avoid a bunch of problems. And instead of wiring it into ApplicationStop, I'll wire it into ApplicationStart, which is one of the last hooks that runs. So for ApplicationStop we're not going to do much. There's also BeforeInstall, but there's nothing we really need there. We are definitely going to need an AfterInstall hook though — AfterInstall means after the code has been downloaded to the destination directory. I'm wondering whether it's smart to place the revision straight into the web root... maybe I'll put it in a folder called revision instead; that's actually what I'd prefer to do. A sketch of the restart script we just wrote is below, and then we'll copy it to make the next one.
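Here's a rough sketch of that restart script. In the video the absolute path comes from running whereis httpd on the instance, so substitute whatever your own box reports; I'm assuming the standard systemctl service here:

    #!/bin/bash
    # restart_app.sh -- ApplicationStart hook: bounce Apache so the new page is served.
    # Replace "systemctl" with the full path from "whereis" if you want to be explicit.
    sudo systemctl restart httpd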
Next we'll make a copy for the update step — so we have restart_app.sh, and now we'll create update_app.sh. I'll make that new file, call it update_app.sh, and open it up. What I need in there is a command that copies the revision into place. Let me just double-check the paths — the revision should be stored in that revision folder — so the first thing I'd do is remove the old file, rm /var/www/html/index.html, and then copy the index.html from the revision folder into /var/www/html. So that should do it, and that should be our appspec and hook scripts.

Now, I could commit all of this to our repo... actually, we'll leave that for a later step, we don't have to do it yet. But I do need to zip up the contents of what's here. Going back to CodeDeploy, it expects the revision to be an archive: you paste in the Amazon S3 path where your revision is stored, and you can see the supported bundle types are zip, tar or tgz. So I'll go figure out the zip command — I never remember it off the top of my head. Okay, I'm back, and it's simply zip followed by the archive name and then the files. So cd into that folder, type zip, and name the archive — since the page says version two, let's name it after that, something like revision-2.zip, so it's less confusing — then list the files we want: the appspec.yaml, the index.html, restart_app.sh and update_app.sh. I think we have everything — update, restart — yes, those are all the files we need, so hit Enter, and that all looks good to me.

Now we just need to get this revision file onto S3, and we'll need a new S3 bucket. Using the CLI, that's aws s3api create-bucket, and we give it a bucket name — my-code-deploy-app. If that doesn't work for you, you might have to choose a more unique name, because S3 bucket names are globally unique, like domain names. We also specify the region — always specify the region, it makes life a lot easier, and I always use us-east-1. I didn't get the flag right on the first try — I used --bucket-name, but if you run aws s3api create-bucket help you can see the option is just --bucket — so hit the up arrow, take out the "-name" part, and there we go, the bucket is created. Now we copy the revision up there, and for that it's aws s3 cp — why one command is under s3 and the other under s3api, I have no idea, but that's just how they have it. So we specify the file, which is the revision zip, and then the destination, which is s3:// followed by the bucket name, and I'll put it in a folder called revisions, under version2. Hit Enter, and it's uploaded. The commands we just ran are sketched below.
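Here's a consolidated sketch of the update script and the packaging/upload commands. The bucket name and folder layout follow what was typed in the video, but treat them as placeholders (the bucket name in particular has to be globally unique), and the revision path inside update_app.sh is an assumption — match it to the destination you set in the files section of your appspec:

    #!/bin/bash
    # update_app.sh -- AfterInstall hook: swap the new page into the web root.
    # /home/ec2-user/revision is a hypothetical unpack location.
    rm -f /var/www/html/index.html
    cp /home/ec2-user/revision/index.html /var/www/html/index.html

    # --- back in Cloud9: package the revision and push it to S3 ---
    cd ~/code-deploy-project/my-code-deploy-app
    zip revision-2.zip appspec.yaml index.html restart_app.sh update_app.sh

    # bucket names are global, so this exact name may already be taken
    aws s3api create-bucket \
      --bucket my-code-deploy-app \
      --region us-east-1

    aws s3 cp revision-2.zip \
      s3://my-code-deploy-app/revisions/version2/revision-2.zip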
So if we make our way back over to CodeDeploy, it's now expecting that bucket path, so we go back to our Cloud9 environment, grab the S3 path of the revision we uploaded, and paste it in. The revision type is zip — that's correct. And there's this option on the ApplicationStop lifecycle event — "don't fail the deployment to an instance if this lifecycle event on the instance fails" — I don't remember that being there before, but it's a cool option; we don't need to do anything with it, and we don't need to override anything else.

Now, normally you have to add the CodeDeploy agent to the EC2 instance for this to work, and I'm wondering if I need to give permissions for that, so maybe we should do that before we proceed, because I feel like it's going to give us trouble if we don't. Our instance is already running with that SSM EC2 role, so what I want you to do is go to IAM and create a new role specifically for this app. Choose EC2, go Next, and add some policies: I want the SSM one for sure, and then a CodeDeploy-related policy for the instance. Hit Next, Next, call it my-code-deploy-app-role, and create the role. Then make our way back to the EC2 console, go to Instances, find our instance, and — we can just replace the role from here — go to Attach/Replace IAM role, swap it out for my-code-deploy-app-role, and hit Apply. Do we have to restart the instance? I don't think so, and the IAM role is not part of the actual AMI — it's attached at launch — so that's another thing we don't have to worry about.

So now we'll go back to CodeDeploy. I think we're in good shape: for the deployment description I'll put "deploy version two", because that's what we want to happen — whether it will work, I guess we'll find out in a moment — and hit Create deployment. I'm so used to CodeDeploy deployments failing that if this fails, I'm totally okay with it. Now, this is an in-place deploy. I was thinking I needed the CodeDeploy agent because CodeDeploy might have to start up another Auto Scaling group or something, but I realize that as an in-place deployment it's not creating a new server — it just takes the server out of service and applies the update in place. We'll click View events and see how it goes. Generally this is super fast, so I'm not sure why it's taking its time, but I'll just wait until we have the results; if it fails, it fails, and we'll find out shortly.

Alright, so this failed, but that's okay — I figured out what the problem was. I hopped on with support because I was really, really stuck, and I realized I hadn't installed the CodeDeploy agent. I just assumed it was pre-installed on Amazon Linux 2, but it's not, which is a bit frustrating, and they don't have a run command for it either, so I guess we'll have to install it manually.
Not a big deal, because we were smart enough to set up SSM just in case we had to get into our server. So let's go ahead and hit Connect, go to Session Manager, connect, and then we'll go ahead and install the CodeDeploy agent.
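For reference, here's a rough sketch of installing the agent on Amazon Linux 2 from that Session Manager shell. The install script is pulled from a region-specific S3 bucket — the one below assumes us-east-1, so check the CodeDeploy docs for the bucket name in your region:

    # the agent is written in Ruby, so install Ruby first
    sudo yum update -y
    sudo yum install -y ruby wget

    # download the region-specific install script and run it
    cd /home/ec2-user
    wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
    chmod +x ./install
    sudo ./install auto

    # confirm the agent is running
    sudo service codedeploy-agent status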
