The SAA-C03 tests recall. AWS Solutions Architect interviews test trade-offs, cost decisions, and architecture reasoning. Here are the questions that actually get asked.
You can memorise the difference between EC2 Instance Store and EBS, pass the exam, and still freeze in an interview when someone says "we have a 500GB MySQL database that needs to be available across two regions with RPO of 15 minutes — design the architecture." The exam gives you options. The interview gives you constraints.
"Design an architecture for a web application that needs to handle 10x traffic spikes."
Strong answer: Auto Scaling Group with EC2 (or ECS/Fargate) behind an Application Load Balancer. Scale-out policy on CPU > 70% for 2 minutes, scale-in after 10 minutes below 30%. Static content to S3 + CloudFront (CDN reduces origin load by 80–90%). RDS with read replicas for database read scaling, or Aurora Serverless v2 for automatic scaling. SQS queue to decouple processing of spiky workloads (order processing, image resizing). Route 53 health checks for failover.
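The scale-out/scale-in thresholds above can be sketched as plain logic. This is an illustrative simulation of the decision rule, not a real Auto Scaling API — in AWS, CloudWatch alarms evaluate these windows for you:

```python
def scaling_decision(cpu_samples, high=70, low=30):
    """cpu_samples: per-minute average CPU readings, most recent last.
    Returns 'scale_out', 'scale_in', or 'hold'."""
    # Scale out: CPU above 70% for 2 consecutive minutes
    if len(cpu_samples) >= 2 and all(c > high for c in cpu_samples[-2:]):
        return "scale_out"
    # Scale in: CPU below 30% for 10 consecutive minutes
    if len(cpu_samples) >= 10 and all(c < low for c in cpu_samples[-10:]):
        return "scale_in"
    return "hold"

print(scaling_decision([40, 75, 82]))  # scale_out
print(scaling_decision([25] * 10))     # scale_in
print(scaling_decision([40, 70, 70]))  # hold (70 is not strictly > 70)
```

The asymmetry (fast out, slow in) is deliberate: scaling out too late drops requests, while scaling in too eagerly causes thrashing.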
"What's the difference between EC2 Reserved Instances, Savings Plans, and Spot?"
Reserved Instances: 1- or 3-year commitment for a specific instance type and region. Up to 72% discount. Convertible RIs allow changing instance type/OS/tenancy. Standard RIs can be sold on the Reserved Instance Marketplace. Savings Plans: more flexible — commit to a $/hour spend, not specific instances. Compute Savings Plans apply to EC2, Lambda, and Fargate. EC2 Instance Savings Plans apply to a specific instance family in a region. Spot Instances: use spare capacity, up to 90% discount. Can be interrupted with a 2-minute warning. Use for fault-tolerant, stateless workloads.
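A back-of-envelope comparison makes the trade-off concrete. The on-demand rate below is a made-up number and the discounts are the "up to" ceilings quoted above — always check the current AWS pricing pages:

```python
HOURS_PER_MONTH = 730
on_demand_rate = 0.10  # $/hr, hypothetical instance -- not a real AWS price

ri_3yr = on_demand_rate * (1 - 0.72)  # up to 72% off, 3-year Standard RI
spot = on_demand_rate * (1 - 0.90)    # up to 90% off, interruptible

for name, rate in [("on-demand", on_demand_rate),
                   ("3-yr RI", ri_3yr),
                   ("spot", spot)]:
    print(f"{name:10s} ${rate * HOURS_PER_MONTH:7.2f}/month")
```

The interview follow-up is usually "so why not run everything on Spot?" — because the 90% discount buys you interruptibility, which only fault-tolerant workloads can absorb.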
"When would you use DynamoDB vs RDS?"
RDS: relational data with complex joins, existing SQL workloads, ACID transactions across multiple tables. Good for e-commerce orders, financial systems, reporting. DynamoDB: high throughput with low, predictable latency at any scale. Single-table design eliminates joins. Use for session stores, game leaderboards, IoT event data, real-time bidding. Key rule: if your access patterns are simple and you need millions of reads/writes per second, DynamoDB. If you need complex queries and reporting, RDS/Aurora.
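To show what "single-table design eliminates joins" means, here is a sketch of a session-store key schema (the key format and attribute names are hypothetical, and the query is a local stand-in for a DynamoDB Query on the partition key):

```python
# Composite keys: partition key groups items, sort key orders them.
items = [
    {"PK": "USER#42", "SK": "SESSION#2024-06-01T10:00", "device": "ios"},
    {"PK": "USER#42", "SK": "SESSION#2024-06-02T09:30", "device": "web"},
    {"PK": "USER#99", "SK": "SESSION#2024-06-01T11:15", "device": "web"},
]

def query(pk):
    """Stand-in for a DynamoDB Query: all items sharing a partition key."""
    return [i for i in items if i["PK"] == pk]

# "Get all sessions for user 42" is one key lookup, no join.
print(len(query("USER#42")))  # 2
```

The catch, and the point interviewers probe: every access pattern must be designed into the keys up front. An ad-hoc query like "all sessions on iOS last Tuesday" is trivial in SQL but needs a secondary index or a scan here.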
"How would you architect for 99.99% availability?"
99.99% = about 52 minutes of downtime per year. Requirements: multi-AZ deployments for every component (ALB is multi-AZ by default, RDS Multi-AZ, ElastiCache Multi-AZ). Cross-region replication for disaster recovery. Route 53 with health checks and failover routing. CloudFront for edge caching (reduces origin dependency). S3 for static assets (11 nines durability). Application tier: Auto Scaling across 3 AZs, minimum 2 instances healthy. Database: Aurora with 6-way replication across 3 AZs, automatic failover typically in under 30 seconds.
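The availability target can be sanity-checked with basic probability: components in series multiply, redundant components fail only if all copies fail. The per-component figures below are illustrative assumptions, not AWS SLAs:

```python
def serial(*avail):
    """Components in series: the system is up only if all are up."""
    p = 1.0
    for a in avail:
        p *= a
    return p

def parallel(a, n):
    """n independent redundant copies: down only if all n fail."""
    return 1 - (1 - a) ** n

single_instance = 0.999              # assumed per-instance availability
app_tier = parallel(single_instance, 3)  # 3 instances across 3 AZs
db = 0.9999                          # assumed Multi-AZ database
total = serial(app_tier, db)
print(f"app tier: {app_tier:.9f}, end to end: {total:.6f}")

minutes_per_year = 365 * 24 * 60
print(f"99.99% allows ~{minutes_per_year * (1 - 0.9999):.1f} min/yr downtime")
```

Two takeaways worth saying out loud in an interview: redundancy multiplies nines (three 99.9% instances behind a load balancer are far better than one), and the serial chain means your availability is capped by the weakest single-path dependency, usually the database.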
"Explain the difference between S3 storage classes."
S3 Standard: general purpose, 3+ AZ replication, milliseconds latency. S3 Standard-IA: infrequent access, same performance, lower storage cost + per-GB retrieval fee. S3 One Zone-IA: single AZ, 20% cheaper than Standard-IA, not resilient to AZ failure. S3 Glacier Instant Retrieval: milliseconds retrieval, low storage cost, minimum 90-day retention. S3 Glacier Flexible Retrieval: minutes to hours retrieval (Expedited 1-5 min, Standard 3-5 hrs), minimum 90-day retention. S3 Glacier Deep Archive: 12-hour standard retrieval, lowest cost, minimum 180-day retention. S3 Intelligent-Tiering: automatically moves objects between access tiers based on usage patterns.
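The Standard vs Standard-IA decision is a simple cost equation: cheaper storage against a per-GB retrieval fee. The per-GB rates below are ballpark illustrations, not current AWS prices:

```python
def monthly_cost(gb_stored, gb_retrieved, storage_rate, retrieval_rate=0.0):
    """Simplified model: storage + retrieval only (ignores request charges)."""
    return gb_stored * storage_rate + gb_retrieved * retrieval_rate

data_gb, retrieved_gb = 1000, 50  # 1 TB stored, 5% read back per month

standard = monthly_cost(data_gb, retrieved_gb, storage_rate=0.023)
standard_ia = monthly_cost(data_gb, retrieved_gb, storage_rate=0.0125,
                           retrieval_rate=0.01)
print(f"Standard:    ${standard:.2f}")     # storage only
print(f"Standard-IA: ${standard_ia:.2f}")  # cheaper storage + retrieval fee
```

With these assumed rates IA wins easily at 5% monthly retrieval, but re-run it with heavy retrieval and Standard comes out ahead — which is exactly the kind of break-even reasoning interviewers want to hear.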
"How does IAM least-privilege work in practice?"
Start with the principle that no user or service should have more permissions than they need for their specific task. In practice: use IAM roles (not users) for services, use managed policies where possible, add explicit denies for sensitive actions regardless of other allows, use IAM Access Analyzer to identify overly broad permissions, use permission boundaries to limit what delegated admins can grant, regularly review with IAM Access Advisor (shows last access time for each service). Key exam trap: explicit deny always overrides allow.
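The deny-overrides-allow rule can be modelled in a few lines. This is a deliberately minimal sketch of the evaluation order — real IAM also evaluates conditions, resource ARNs, permission boundaries, and SCPs:

```python
def evaluate(statements, action):
    """Simplified IAM evaluation: explicit Deny > Allow > implicit deny."""
    matched = [s["Effect"] for s in statements if action in s["Action"]]
    if "Deny" in matched:
        return "Deny"    # explicit deny overrides any allow
    if "Allow" in matched:
        return "Allow"
    return "Deny"        # no matching statement: implicit (default) deny

policy = [
    {"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"]},
    {"Effect": "Deny",  "Action": ["s3:PutObject"]},  # block the sensitive write
]
print(evaluate(policy, "s3:GetObject"))     # Allow
print(evaluate(policy, "s3:PutObject"))     # Deny (explicit deny wins)
print(evaluate(policy, "s3:DeleteObject"))  # Deny (nothing matched)
```

Note the third case: you never had to write a deny for `s3:DeleteObject` — absence of an allow is itself a deny, which is why least privilege starts from zero and adds, rather than starting broad and subtracting.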
"What's the difference between a target tracking and step scaling policy?"
Target tracking: define a target metric value (e.g. maintain CPU at 50%) and Auto Scaling automatically calculates how many instances to add/remove. Simpler to configure, recommended for most use cases. Step scaling: define specific adjustments based on metric thresholds (e.g. CPU 60-70% → add 1 instance, CPU 70-80% → add 2 instances, CPU >80% → add 4 instances). More control, useful when you need different responses at different severity levels. Simple scaling (legacy): one adjustment, cooldown period, no concurrent adjustments.
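The step thresholds quoted above translate directly into code. The step values come straight from the example in the answer; in real step scaling these would be CloudWatch alarm breach ranges, not an if-chain you run yourself:

```python
def step_scaling_adjustment(cpu):
    """Instances to add for a given CPU reading, per the example steps."""
    if cpu > 80:
        return 4   # severe breach: add 4 instances
    if cpu > 70:
        return 2
    if cpu > 60:
        return 1
    return 0       # below the alarm threshold: no action

for cpu in (55, 65, 75, 95):
    print(f"CPU {cpu}% -> add {step_scaling_adjustment(cpu)}")
```

Contrast with target tracking, where you would supply only the single number 50 (the target CPU) and AWS would compute the adjustments — which is why target tracking is the default recommendation and step scaling is reserved for cases needing graduated responses like this one.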
AWS interviews are about trade-offs — cost vs performance, simplicity vs resilience. The AWS SAA course on InterviUni covers all SAA-C03 domains. The Cloud Engineer mock interview tests your ability to reason through architecture decisions out loud.
Practice AI mock interviews, check your ATS score, or start a cert course — free.