
3 AWS certification questions and answers

Answers to real-world scenarios for AWS Solutions Architect, covering RDS disk scaling, mobile app performance, and EIP costs.

In my role as the DevOps Practice Lead at my workplace in the UK, I’ve been trying to think of challenges for the Cloud engineering team that are a little different from those we see in our normal day-to-day work. The aim is to keep our skills and technical thinking sharp, and to stay abreast of the ever-changing Cloud services platforms.

The team already did a great job at pulling together a bank of Google Cloud Platform questions to help them with the GCP Associate Cloud Engineer exam. And that got me thinking about the same for Amazon Web Services.

Couple that with being on lockdown, and we have this article: I’m going to find sample questions and attempt to answer them.

Every week, I’ll post an article with another set of questions. And I’ll use these back at work to discuss with the team.

I’ve chosen a sample of three recent real-world requests for help and sample questions posted to the Facebook groups “AWS Cloud” and “Amazon Web Services (AWS)”.

Disclaimer: I am not claiming that my answers are the correct ones; indeed, I’ve chosen two examples where the online responses were mixed.

I will explain my thinking and how I came to the answer I did. I welcome all feedback in the comments.


Scenario 1. An application uses an Amazon RDS MySQL cluster for the database layer. Database growth requires periodic resizing of the instance. Currently, administrators check the available disk space manually once a week.

How can this process be improved?

A. Use the largest instance type for the database.

B. Use AWS CloudTrail to monitor storage capacity.

C. Use Amazon CloudWatch to monitor storage capacity.

D. Use Auto Scaling to increase storage size.

— posted to “AWS Cloud” on 27th March.

This question looks like the style of those found in Associate certification exams. A useful technique when taking AWS certification exams is to first eliminate any obviously wrong answers. This question is no exception, and we can immediately eliminate (B), AWS CloudTrail.

Often confused with CloudWatch, AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. It does not monitor storage capacity, so it is irrelevant to this question.

The next option to eliminate is (A) for using the largest instance type for the database.

A. Use the largest instance type for the database.

Why? The question has two statements: first, that database growth requires periodic resizing of the instance; and second, that administrators check for available disk space. Three of the answers focus on the disk space.

It’s true that with Amazon RDS you can manually scale up the underlying instance — and that is already happening periodically per the problem statement.

How can this be further improved? Picking the largest instance size, as (A) suggests, would be a hammer to crack that particular nut; a rather expensive hammer, though.

But it would not help at all with the disk space challenge — maximum storage for MySQL is constant at 64 TiB for each of the Latest Generation Standard (m5) or Memory Optimized (r5) Instance Classes — scaling to the largest type in its Class will not offer us an increase in disk storage.

Additionally, if the problem statement was around read performance, the introduction of “read replicas” or Amazon ElastiCache would help in many solutions before opting for vertical scaling. So we eliminate (A).

As it is, our focus in this question is on the statement around disk usage.

Currently, administrators check the available disk space manually once a week.

We have two options remaining, (C) and (D), both of which are improvements on manual checking. But one is better than the other, as I will explain next.

C. Use Amazon CloudWatch to monitor storage capacity.

D. Use Auto Scaling to increase storage size.

Let’s think through the process those administrators follow today: first, they manually check disk usage; second, we have to assume that if disk usage is above a threshold, they increase the disk allocation manually.

Option (C) states to use CloudWatch to monitor storage capacity. There are a couple of techniques for this, including:

  1. Monitor the FreeStorageSpace metric by creating a CloudWatch Alarm, with an SNS topic and a subscription to alert the team automatically.
  2. Check for the predefined RDS events for low storage, again in combination with an SNS topic and subscription.
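As a sketch of the first technique, here is how the alarm parameters might look when built for boto3’s `put_metric_alarm` call. The instance identifier, SNS topic ARN, and 10 GiB threshold are illustrative assumptions, not values from the scenario.

```python
# Sketch: a CloudWatch alarm on the RDS FreeStorageSpace metric.
# The names and threshold below are illustrative assumptions.

def free_storage_alarm_params(db_instance_id: str, sns_topic_arn: str,
                              threshold_bytes: int) -> dict:
    """Build put_metric_alarm parameters for a low-storage alert."""
    return {
        "AlarmName": f"{db_instance_id}-low-free-storage",
        "Namespace": "AWS/RDS",
        "MetricName": "FreeStorageSpace",
        "Dimensions": [{"Name": "DBInstanceIdentifier", "Value": db_instance_id}],
        "Statistic": "Average",
        "Period": 300,               # evaluate over 5-minute windows
        "EvaluationPeriods": 1,
        "Threshold": threshold_bytes,
        "ComparisonOperator": "LessThanThreshold",
        "AlarmActions": [sns_topic_arn],  # SNS topic the team subscribes to
    }

# With AWS credentials configured, the alarm could then be created with boto3:
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(
#       **free_storage_alarm_params("mydb", "arn:aws:sns:...:db-alerts",
#                                   10 * 1024**3))
```

The SNS topic and its email or chat subscriptions would be created separately; the alarm simply publishes to it when free storage drops below the threshold.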

How is (C) an improvement? It means the administrators no longer have a weekly task in their diaries to sign in and check storage usage manually; now they will be notified automatically. That’s better detection. But with (C), fixing the situation still requires manual action.

That leaves us with option (D), which is to use RDS Storage Auto Scaling to increase the storage allocation automatically.

Released in June 2019, RDS Storage Auto Scaling automatically scales storage capacity in response to growing database workloads, with zero downtime.

With storage autoscaling enabled, Amazon RDS will trigger a scaling event when all of the following conditions apply:

  • Free available space is less than 10% of the allocated storage.
  • The low-storage condition lasts at least five minutes.
  • At least six hours have passed since the last storage modification.

When the event triggers, Amazon RDS will add storage in increments of whichever of the following is greatest:

  • 5 GiB
  • 10% of currently allocated storage
  • Storage growth prediction based on the FreeStorageSpace metric change in the past hour.

In this question, our RDS database engine is MySQL — so the auto-scaling will keep going up to a maximum allocation of 64 TiB for all instance types in the latest generation class (m5 or r5).

So that’s our answer; we can detect and fix the problem automatically.

D. Use Auto Scaling to increase storage size.

So how does my answer (D) compare with those on the “AWS Cloud” group? The group was split roughly 50–50 between options (C) and (D).

Leave a comment if you would have chosen (C) in this example.


Scenario 2. A company has a popular multi-player mobile game hosted in its on-premises datacenter. The current infrastructure can no longer keep up with demand and the company is considering a move to the cloud.

Which solution should a Solutions Architect recommend as the MOST scalable and cost-effective solution to meet these needs?

A. Amazon EC2 and an Application Load Balancer

B. Amazon S3 and Amazon CloudFront

C. Amazon EC2 and Amazon Elastic Transcoder

D. AWS Lambda and Amazon API Gateway

— posted to “AWS Cloud” on 26th March.

This is both a great and terrible question to pick next!

It lacks information about the current solution and what else it relies upon: databases, user identity and access management, session handling, and so on.

What we can do is state our assumptions — and progress from there.

What is good and clear about this question is the emphasis on choosing the option that is “MOST scalable and cost-effective”.

Let’s get one thing out of the way to start with: Amazon GameLift, the dedicated game-server hosting platform, is not listed as an option.

As before, let’s first eliminate the obviously wrong answers: options (B) and (C).

B. Amazon S3 and Amazon CloudFront

C. Amazon EC2 and Amazon Elastic Transcoder

We eliminate option (C) as Amazon Elastic Transcoder is for media transcoding in the cloud — not particularly relevant to a multi-player mobile game.

Option (B) is a more interesting idea. The original problem lacks information about where the scaling issues occur in the current on-premises solution. A hybrid approach could be explored whereby all static content is hosted on S3 and served through a Content Delivery Network using Amazon CloudFront, while the dynamic application and database stay where they are on the existing on-premises servers.

For static content, using S3 and a CDN is definitely highly scalable and cost-effective. It’s also a useful pattern for Single Page Applications built with frameworks like React or Angular.

And yes, it could also be part of a phased approach to shift static content first, and even in a fully Cloud-native replacement solution hosted entirely in AWS, we are likely to end up with some use of S3 and Amazon CloudFront.

However, given the crux of this question, and the fact that it’s a “mobile game” (can we infer an iOS or Android native app?), let’s assume the scaling problem is with the backend dynamic logic and database, and we’ll eliminate option (B).

We’re now left with two viable options.

A. Amazon EC2 and an Application Load Balancer

D. AWS Lambda and Amazon API Gateway

Remember our core requirement; we’re looking for the “MOST scalable and cost-effective”. Both of these options allow us to create scalable solutions — so we have some more thinking to pick (A) or (D).

I’ve made an assumption before I start; the current solution uses an RDBMS database and we’ll migrate that over to Amazon RDS — and I’ve assumed that the reason this isn’t mentioned in the problem statement is that we’ll end up picking the same solution for the database regardless of option chosen. So all we have to consider is the application logic and its compute requirements.

With EC2, we can create auto-scaling groups and use Spot instances to achieve the “cost-effective”.

When considering option (A), we have to pay attention to the original problem statement: demand has outstripped the infrastructure available in their on-premises data center. That tells us there’s a lot of compute currently in use, and demand is increasing. I’ve inferred: a lot.

Therefore, migrating as-is to EC2 will also require a lot of compute resources, like for like. While costs can be managed through a combination of Reserved and Spot instances, that much compute is still going to carry a price tag, and we add the cost of the Application Load Balancer on top.

In terms of the effort required to migrate the application, moving across to EC2 is likely to require fewer application code changes; it could even be a straight “lift and shift”.

Next to consider is option (D), which would likely require us to rewrite our code as AWS Lambda functions. Lambda functions are naturally stateless, but we can handle state externally, and API Gateway supports WebSocket APIs for real-time communication.

AWS Lambda lets us run our code without provisioning or managing servers, and we only pay for the compute time we consume. It naturally scales up to meet the peak demands, and we don’t pay for idle instances.
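To make option (D) concrete, here is a minimal sketch of a Lambda handler behind an API Gateway proxy integration. The route, payload shape, and hard-coded score are illustrative assumptions, not details from the scenario.

```python
import json

# Minimal sketch of a Lambda handler behind an API Gateway proxy integration.
# In a real game backend the score would come from a database, not a constant.

def lambda_handler(event, context):
    """Return a player's score; API Gateway maps this dict to an HTTP response."""
    player = (event.get("pathParameters") or {}).get("player", "unknown")
    body = {"player": player, "score": 0}  # placeholder for real lookup logic
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

Each such function scales independently with request volume, which is what makes the pay-per-invocation model attractive for spiky game traffic.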

In terms of costs, as we scale out to the volumes implied in the problem statement, running AWS Lambda is likely to be cheaper than running the equivalent load through a fleet of EC2 instances.

The downside is that we would likely need to redevelop our solution, but avoiding a rewrite wasn’t stated as a constraint, so assume a greenfield build is acceptable.

In conclusion, for the “MOST scalable and cost-effective” solution I’d pick (D) — AWS Lambda and API Gateway.

D. AWS Lambda and Amazon API Gateway

So how does my answer (D) compare with those on the “AWS Cloud” group? The group was almost unanimously for option (A); nearly everyone liked the EC2 approach, so my answer definitely leaves me in the minority. Leave a comment if you would also have chosen (A) in this example and let me know why.


Scenario 3. When will you incur costs with an Elastic IP address (EIP)?

A. When an EIP is allocated.

B. When it is allocated and associated with a running instance.

C. When it is allocated and associated with a stopped instance.

D. Costs are incurred regardless of whether the EIP is associated with a running instance.

A quick one to close off this week. This question came from a bank of AWS sample questions, and it resonated with me because earlier this week I found I was paying for an Elastic IP address in my own AWS account. So I’m sharing this answer from personal experience.

An Elastic IP address is free only while it is associated with a running instance. You pay for an EIP that is allocated but not in use: either unassociated, or associated with a stopped instance.

So of these options the answer is (C); we would pay for an Elastic IP address when it is allocated and associated with a stopped instance.
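That rule can be expressed as a tiny predicate. This is a sketch of the billing logic as described in this question; AWS pricing changes over time, so always check the current pricing page.

```python
# Sketch of the EIP billing rule discussed in this question.
# Treat this as illustrative; AWS pricing rules change over time.

def eip_incurs_cost(allocated: bool, associated: bool,
                    instance_running: bool) -> bool:
    """An allocated EIP is free only while attached to a running instance."""
    if not allocated:
        return False  # nothing allocated, nothing billed
    return not (associated and instance_running)
```

Option (C) corresponds to `eip_incurs_cost(True, True, False)`: allocated, associated, but the instance is stopped, so the address is billed.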

A note from the author

So there we have it. Three example AWS questions and answers. Let me know in the comments if you would have answered these any differently.

Thanks for reading. You can follow me on Twitter and connect on LinkedIn.
