This certification preparation material will help you get jobs in AWS-related fields.

AWS Developer Certification: Associate Level
AWS SysOps Administrator Certification: Associate Level
AWS Solution Architect Certification: Associate Level
AWS Solution Architect: Professional Level
AWS Certified Security Specialty (SCS-C01)
AWS Professional Certification Exam
AWS Certified Big Data – Specialty (BDS-C00)
AWS Certified Machine Learning (MLS-C01) Certification Preparation Material
AWS Solution Architect: Associate Training
AWS Advanced Networking Certification
AWS Exam Preparation: Kinesis Data Stream
Book: AWS Solution Architect Associate: Little Guide
AWS Security Specialization Certification: Little Guide (SCS-C01)
AWS Package Deal


While applying for a job, please mention referred by: admin@hadoopexam.com | or website: http://www.HadoopExam.com


 

Question-3: You are working with a social networking company that has a huge volume of users from various geographies. You are running a farm of more than 100 EC2 instances, all of which sit behind an Elastic Load Balancer. Your development team wants to deploy a patch to the web servers, but they do not want it deployed to every server in the farm at once; instead, they want to roll it out one server at a time. They will deploy to one server and route minimal traffic to it, e.g. out of 1000 requests only one should go to the newly deployed code while all other requests go to the old servers. If that succeeds, they will gradually increase the share, e.g. 10 requests out of 1000, until finally all servers run the new code, provided each server works as expected. Which of the following solutions meets this requirement?

  1. You will create a new Auto Scaling group and place it behind a new ELB. All servers behind the new ELB will have the newer version of the code deployed, and you will use a Route 53 latency-based routing policy.
  2. You will create a new Auto Scaling group and place it behind a new ELB. All servers behind the new ELB will have the newer version of the code deployed, and you will use a Route 53 weighted routing policy.
  3. You will create a new Auto Scaling group and place it behind a new ELB. All servers behind the new ELB will have the newer version of the code deployed, and you will use a Route 53 geoproximity routing policy.
  4. You will create a new Auto Scaling group and place it behind a new ELB. All servers behind the new ELB will have the newer version of the code deployed, and you will use a Route 53 latency-based routing policy.
  5. You will create a new Auto Scaling group and place it behind a new ELB. All servers behind the new ELB will have the newer version of the code deployed, and you will use a Route 53 geolocation routing policy.

Correct Answer: B (Option 2)
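The chosen approach, a second Auto Scaling group behind its own ELB selected via a Route 53 weighted policy, can be sketched as a change batch of two weighted records. The hosted zone ID, domain name, ELB DNS names, and the helper function below are hypothetical; only the record structure follows the Route 53 API:

```python
# Sketch of the weighted-routing canary setup from the correct answer.
# All names (domain, ELB DNS names, zone ID) are illustrative only.

def weighted_change_batch(name, old_elb, new_elb, old_weight=999, new_weight=1):
    """Build a Route 53 change batch with two weighted records for the same
    name and type: the existing fleet and the canary running the new code."""
    def record(set_id, target, weight):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "CNAME",
                "SetIdentifier": set_id,   # distinguishes records with the same name/type
                "Weight": weight,          # relative share of DNS responses
                "TTL": 60,                 # short TTL so weight shifts take effect quickly
                "ResourceRecords": [{"Value": target}],
            },
        }
    return {"Changes": [
        record("old-fleet", old_elb, old_weight),
        record("canary", new_elb, new_weight),
    ]}

batch = weighted_change_batch(
    "www.example.com",
    "old-elb.us-east-1.elb.amazonaws.com",
    "new-elb.us-east-1.elb.amazonaws.com",
)

# With boto3 this batch would be applied roughly like:
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="Z123EXAMPLE", ChangeBatch=batch)

canary_share = 1 / (999 + 1)
print(canary_share)  # 0.001 -> 1 request out of 1000 hits the new code
```

To ramp up the canary, you would re-run the same UPSERT with a higher new_weight (e.g. 10 out of 1000), and finally retire the old record once every server runs the new code.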

Detailed Explanation: The information below will help you understand the various routing policies available in Route 53.

  1. Geolocation routing policy:
    1. Use it when you want to serve traffic based on the user's location (where the DNS query originates).
    2. For example, if a query originates from India, you can route it to an ELB in the Japan region.
    3. This is one of the best solutions when you want to localize your website content, e.g. all queries originating from India should see prices in INR (Indian Rupee), while everyone else sees prices in USD.
    4. You can even restrict content based on location or region.
    5. Locations can be specified by continent, country, or US state.
    6. If you create separate records for overlapping geographic regions, e.g. one for North America and one for Canada, priority goes to the record for the smallest geographic region.
    7. It works by mapping IP addresses to locations; for locations that are not mapped, you can route queries to a default record.
  2. Geoproximity routing policy:
    1. Use it when you want to route traffic based on the location of your resources and users, and, optionally, shift traffic from a resource in one location to a resource in another.
    2. You can even send less traffic to a given resource by specifying a value, known as a bias, that expands or shrinks the size of the geographic region from which traffic is routed to that resource.
  3. Latency routing policy:
    1. Use it when you have resources in multiple AWS Regions and you want to route traffic to the Region that provides the best latency.
    2. For that, you create a latency record for your resource in each AWS Region.
    3. Whenever a query is received, Route 53 checks these records and returns the record with the lowest latency.
  4. Weighted routing policy:
    1. Use it to route traffic to multiple resources in proportions that you specify.
    2. This policy is best suited for testing new software on a live website and for load balancing.
    3. To configure weighted routing, you create records that have the same name and type for each of your resources.
    4. Route 53 uses the following formula: weight for a specified record / sum of the weights for all records.
    5. Suppose you are testing new software: give the new record a weight of 1 and the old record a weight of 255. Then only 1/256 (1 + 255) of the requests will go to the new software, and as it proves successful you can gradually increase the weight.
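The weighted-routing formula and the 1/256 canary example above can be checked with a short sketch (the record names and weights are illustrative):

```python
# Traffic share under Route 53 weighted routing:
#   share(record) = weight(record) / sum of the weights of all records

def traffic_share(weights):
    """Map each record name to its fraction of DNS responses."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Example from the text: new software with weight 1, old software with weight 255.
shares = traffic_share({"new": 1, "old": 255})
print(shares["new"])  # 1/256 = 0.00390625
print(shares["old"])  # 255/256 = 0.99609375
```

Raising the new record's weight (e.g. to 10) while keeping the old one fixed gradually shifts more traffic to the new code, which is exactly the rollout the question describes.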