AWS Interview Questions

1. What is Cloud Computing and what are its features?

Cloud computing is a general term for the delivery of hosted computing services and IT resources over the internet with pay-as-you-go pricing.

Features of cloud computing include:

  1. Resource Pooling: A cloud service provider can share resources among multiple clients, providing each a different set of services according to their needs.

  2. Broad Network Access: Clients can access cloud data or transfer data to the cloud from any location using a device with an internet connection. These capabilities are available across the organization via the internet.

  3. Rapid Elasticity: Workloads can scale out and scale in quickly as needed, which keeps handling them cost-effective. Servers are provisioned whenever the user requires them and released as soon as the requirement ends.

  4. On-Demand Self-Service: Customers can provision computing capabilities, such as server time and network storage, as needed without human interaction with the provider. They can also continuously monitor server uptime, capabilities, and allocated network storage.

  5. Metered Usage: This enables both the provider and the customer to monitor and report which services have been used and for what purposes. It helps in monitoring billing and ensuring optimum utilization of resources.

2. What are the different Cloud Deployment Models?

A cloud deployment model defines your virtual computing environment; the choice of model depends on how much data you want to store and who needs access to the infrastructure.

There are four different cloud deployment models:

  1. Public Cloud: The name says it all. It is accessible to the public. Public deployment models in the cloud are perfect for organizations with growing and fluctuating demands. It is also a great choice for companies with low security concerns. You pay a cloud service provider for networking, compute virtualization, and storage services available over the public internet.

  2. Private Cloud: It is integrated with your data center and managed by your IT team; alternatively, you can choose to host it externally. The private cloud offers greater customization to meet a specific organization's requirements. Companies looking for cost efficiency and greater control over data and resources will find the private cloud a more suitable choice.

  3. Hybrid Cloud: It is a combination of two or more cloud architectures. While each model in the hybrid cloud functions differently, it is all part of the same architecture. Further, as part of this deployment of the cloud computing model, the internal or external providers can offer resources.

  4. Community Cloud: It operates in a way that is similar to the public cloud. There's just one difference - it allows access to only a specific set of users who share common objectives and use cases. This type of deployment model of cloud computing is managed and hosted internally or by a third-party vendor.

3. What are the different types of cloud computing?

There are three main types of cloud computing: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

  1. IaaS: IaaS provides virtualized computing resources over the internet, such as virtual machines, storage, and networks. Users have control over the operating systems and applications they run on the infrastructure, allowing for greater flexibility and customization.

  2. PaaS: PaaS offers a platform for developers to build, deploy, and manage applications without the need to manage the underlying infrastructure. It provides a pre-configured environment that includes development tools, databases, and runtime environments, enabling developers to focus on application development rather than infrastructure management.

  3. SaaS: SaaS delivers software applications over the internet on a subscription basis. Users can access and use these applications without the need for installation or management. Examples of SaaS include email services, customer relationship management (CRM) software, and productivity tools like Google Workspace.

4. What are Data Centers, Regions, Availability Zones (AZs), Edge Locations, Local Zones, and Wavelength Zones?

A data center is a facility that provides shared access to applications and data using a complex network, compute, and storage infrastructure.

A region is a geographic area that is served by a specific set of AWS infrastructure. Each region has multiple Availability Zones, which are isolated from each other by distance and independent power and cooling.

An Availability Zone consists of one or more discrete data centers within an AWS region, designed to provide high availability and fault tolerance for applications and services.

An Edge Location is a data center used to deliver content quickly to your users; it is the site nearest to your users.

Local Zones provide you the ability to place resources, such as compute and storage, in multiple locations closer to your end users.

Wavelength Zones allow developers to build applications that deliver ultra-low latencies to 5G devices and end users. Wavelength deploys standard AWS compute and storage services to the edge of telecommunication carriers' 5G networks.

5. What is AWS?

AWS stands for Amazon Web Services. AWS is a cloud computing platform offered by Amazon. It provides a wide range of cloud services that help businesses and individuals build and deploy various types of applications and services in a flexible, scalable, and cost-effective manner.

6. Name 5 AWS services you have used and describe their use cases.

  1. EC2 (Elastic Compute Cloud): EC2 is a scalable cloud computing service that allows you to launch virtual servers (instances) to run your applications. You can choose the instance type and operating system, making it suitable for a wide range of use cases, from hosting web applications to running data processing workloads.

  2. IAM (Identity and Access Management): IAM is AWS's identity management service. It allows you to control who can access your AWS resources and what actions they can perform. IAM is essential for ensuring the security of your AWS environment by managing user accounts, roles, and permissions.

  3. S3 (Simple Storage Service): S3 is a scalable object storage service that is commonly used for storing and retrieving data. It's great for storing static assets like images, videos, and backups, and it can be integrated with other AWS services to host static websites or store data for applications.

  4. RDS (Relational Database Service): RDS offers managed relational databases like MySQL, PostgreSQL, Oracle, and SQL Server. It is used for running applications, storing user data, analytics, and reporting.

  5. CloudWatch: CloudWatch is AWS's monitoring and observability service. It allows you to collect and track metrics, collect and monitor log files, and set alarms. It's crucial for gaining insights into the operational health and performance of your AWS resources, and for responding to operational events and issues in real-time.

7. What are the tools used to send logs to the cloud environment?

There are several tools available to send logs to the cloud environment. Some commonly used tools are:

  1. Amazon CloudWatch Logs: CloudWatch Logs is a built-in service in AWS that allows you to collect, monitor, and store log data from various AWS resources and applications. You can configure your AWS resources to send their logs directly to CloudWatch Logs.

  2. AWS CloudTrail: AWS CloudTrail captures and logs API activity and events in your AWS account, providing visibility into actions taken by users, services, or resources.

  3. Elasticsearch: Elasticsearch is an open-source search and analytics engine that can be used to store, index, and analyze logs. It is often paired with Logstash and Kibana (ELK stack) for log management and analysis.

  4. Fluentd: Fluentd is an open-source data collector that can collect logs from various sources and send them to multiple destinations, including cloud storage or analysis platforms.

  5. Logstash: Logstash is part of the ELK stack and is used for collecting, parsing, and transforming logs before sending them to a storage or analytics platform.
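Whichever tool is used, log events for CloudWatch Logs' put_log_events API end up as plain timestamp/message pairs. A minimal sketch in Python (the log group and stream names in the comment are hypothetical):

```python
import json
import time

def build_log_batch(messages):
    """Build a CloudWatch Logs put_log_events payload from plain strings.

    The API expects each event as {"timestamp": ms_since_epoch, "message": str},
    sorted by timestamp.
    """
    now_ms = int(time.time() * 1000)
    return [{"timestamp": now_ms + i, "message": m} for i, m in enumerate(messages)]

batch = build_log_batch(["app started", "user login", "app stopped"])
print(json.dumps(batch, indent=2))

# With AWS credentials configured, the batch could be shipped with boto3:
#   import boto3
#   logs = boto3.client("logs")
#   logs.put_log_events(logGroupName="/my/app",        # hypothetical group name
#                       logStreamName="instance-1",    # hypothetical stream name
#                       logEvents=batch)
```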

8. What are IAM Roles? How do you create /manage them?

IAM (Identity and Access Management) roles in AWS are a way to grant permissions to entities that you trust. These entities can be AWS services, applications, or users within your AWS account or external AWS accounts. Roles are a secure way to delegate permissions to access AWS resources without the need for long-term security credentials, like access keys or passwords.

IAM roles are commonly used for services and applications that need to interact with AWS resources on your behalf.

  1. Sign in to the AWS Management Console: Go to the AWS IAM console (console.aws.amazon.com/iam).

  2. Navigate to Roles: In the left navigation pane, select “Roles.”

  3. Create a New Role:

  • Click the “Create role” button.

  • Select the trusted entity type (e.g., AWS service, another AWS account, or SSO identity provider).

  • Choose the use case that best describes the role’s purpose. For example, if you’re creating a role for an EC2 instance, you can choose “EC2” as the use case.

  4. Set Permissions:

  • Attach policies to the role. Policies define what the role is allowed to do. You can choose from existing policies or create custom policies.

  5. Name and Review:

  • Give the role a name and, optionally, add tags to help with organization.

  • Review the role’s configuration and click “Create role.”

To manage an existing role:

  • Update Trust Relationships: Edit the trust relationship to allow or restrict who or what can assume the role.

  • Update Permissions: Attach or detach policies to grant or remove permissions. Review and update policies as needed to ensure the role has only the permissions it requires.

  • Delete a Role: If a role is no longer needed, you can delete it. Be cautious when deleting roles, as this can impact the services and applications relying on them.
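The same steps can be scripted instead of clicked through. A minimal sketch, assuming a role that EC2 instances will assume (the role name and the attached managed policy are illustrative, not from the text above):

```python
import json

# Trust policy allowing the EC2 service to assume the role
# (this is the standard trust-policy document shape).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))

# With AWS credentials configured, the role could be created with boto3:
#   import boto3
#   iam = boto3.client("iam")
#   iam.create_role(RoleName="my-ec2-role",  # hypothetical role name
#                   AssumeRolePolicyDocument=json.dumps(trust_policy))
#   iam.attach_role_policy(RoleName="my-ec2-role",
#                          PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")
```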

9. How to upgrade or downgrade a system with zero downtime?

Upgrading or downgrading a system with zero downtime can be achieved by implementing certain strategies and best practices. Here’s a high-level approach:

  1. Load Balancer: Set up a load balancer to distribute traffic across multiple instances or nodes. This allows for seamless traffic redirection during the upgrade/downgrade process.

  2. Multiple Environments: Create multiple environments (e.g., staging, production) to perform the upgrade/downgrade process. Direct traffic to the unaffected environment while upgrading/downgrading the other.

  3. Blue/Green Deployment: Implement a blue/green deployment strategy where the new version (green) is deployed alongside the existing version (blue). Gradually switch traffic from the blue environment to the green environment.

  4. Database Replication: Use database replication techniques to create a second instance with the upgraded/downgraded version. Sync the database changes and switch the application to use the updated database without downtime.

  5. Rolling Upgrades: Perform rolling upgrades, where you update one instance or component at a time, ensuring the application remains available throughout the process.

  6. Health Checks and Monitoring: Implement health checks to ensure the system’s availability and monitor the process closely for any issues. Roll back immediately if anomalies are detected.
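The gradual traffic switch in the blue/green strategy can be sketched as a simple weight schedule (the 25% step size is an arbitrary choice for illustration):

```python
# Gradually shift traffic from the "blue" (current) environment to the
# "green" (new) environment in fixed percentage steps.
def traffic_steps(step_pct=25):
    weights = []
    green = 0
    while green < 100:
        green = min(100, green + step_pct)
        weights.append({"blue": 100 - green, "green": green})
    return weights

for w in traffic_steps():
    print(w)
# In practice each step would be applied to a load balancer or weighted DNS
# record, with a health-check gate (and rollback) between steps.
```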

10. What is infrastructure as code and how do you use it?

Infrastructure as Code (IaC) is the practice of managing and provisioning infrastructure resources using machine-readable configuration files or scripts, rather than manual processes. It treats infrastructure as software code, enabling version control, automation, and reproducibility.

  • Definition: Infrastructure as Code involves writing configuration files or scripts (e.g., using tools like AWS CloudFormation, Terraform, or Ansible) that define the desired state of infrastructure resources.

  • Automation: IaC enables automated provisioning and management of infrastructure, eliminating the need for manual configuration and reducing human errors.

  • Version Control: Infrastructure code can be versioned and stored in a version control system, allowing teams to collaborate, track changes, and roll back to previous versions if needed.

  • Reproducibility: With IaC, infrastructure can be easily replicated across different environments, ensuring consistency and reducing discrepancies between development, testing, and production.

  • Scalability: IaC simplifies scaling infrastructure resources by defining parameters and policies that can be adjusted programmatically, accommodating changes in workload or demand.
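As a minimal illustration, a CloudFormation template is just a declarative document. The sketch below builds one in JSON form with a single S3 bucket (the logical resource name AppBucket and the stack name in the comment are made up for the example):

```python
import json

# Minimal CloudFormation template (JSON form) declaring one versioned S3 bucket.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal IaC example: one versioned S3 bucket.",
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "VersioningConfiguration": {"Status": "Enabled"},
            },
        }
    },
}

print(json.dumps(template, indent=2))

# With the AWS CLI configured, the file could be deployed via something like:
#   aws cloudformation deploy --template-file template.json --stack-name demo-stack
```

Because the template is plain text, it can be committed to version control and reviewed like any other code change, which is the core of the IaC workflow described above.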

11. What is a load balancer? Give scenarios of each kind of balancer based on your experience.

A load balancer is a device or service that sits between users and a server group, acting as an invisible facilitator that distributes incoming traffic so all servers are used evenly.

  • Application Load Balancer (ALB): ALBs operate at the application layer (Layer 7) of the OSI model. They provide advanced routing capabilities, such as URL-based routing, content-based routing, and support for HTTP/HTTPS protocols.

  • Network Load Balancer (NLB): NLBs operate at the transport layer (Layer 4) and are designed for handling high volumes of traffic with ultra-low latency. They are ideal for TCP, UDP, and TLS traffic, making them suitable for use cases like gaming and IoT applications.

  • Classic Load Balancer (CLB): CLBs are the legacy load balancers provided by AWS. They operate at both Layer 4 and Layer 7 and offer basic load-balancing functionalities.

12. What is CloudFormation and what is it used for?

AWS CloudFormation is a service that allows you to define and provision infrastructure resources in a declarative manner using templates. It provides a way to automate the creation, configuration, and management of AWS resources.

It is used to automate deployments, ensure resource consistency, manage scaling, handle dependencies, and streamline change management in AWS environments. It’s a key tool for infrastructure automation and management in AWS.

13. Difference between AWS CloudFormation and AWS Elastic Beanstalk?

AWS CloudFormation:

  • It’s a service for defining and provisioning AWS infrastructure as code.

  • You create templates in JSON or YAML to specify AWS resources and their configurations.

  • Useful for full infrastructure control, including EC2 instances, databases, networking, and more.

  • Supports complex architectures and can create or modify resources.

  • Typically used for infrastructure orchestration and configuration management.

AWS Elastic Beanstalk:

  • It’s a Platform as a Service (PaaS) for deploying and managing web applications.

  • Developers provide their application code, and Elastic Beanstalk handles infrastructure provisioning.

  • Ideal for simplified application deployment and scalability.

  • Supports multiple programming languages and frameworks.

  • Best for quick and straightforward application hosting without deep infrastructure management.

14. List possible storage options for Amazon EC2 instance.

  1. Amazon Elastic Block Store (EBS)

  2. Amazon EC2 Instance Store

  3. Amazon Elastic File System (EFS)

  4. Amazon Simple Storage Service (S3)

  5. Amazon Glacier

15. What are the kinds of security attacks that can occur on the cloud? And how can we minimize them?

Several security attacks can occur in the cloud environment:

  1. Unauthorized Access: Attackers may attempt to gain unauthorized access to cloud resources and data.

  2. Data Breaches: Breaches can occur when sensitive data is exposed or stolen from cloud storage or databases.

  3. Distributed Denial of Service (DDoS): Attackers flood the cloud infrastructure with excessive traffic, causing services to become unavailable.

  4. Insecure APIs: Vulnerabilities in APIs can be exploited to gain unauthorized access or manipulate cloud resources.

  5. Insider Threats: Malicious insiders with privileged access can misuse or leak sensitive information.

To minimize these attacks, follow security best practices:

  1. Implement strong access controls, including strong passwords, MFA, and least privilege principles.

  2. Encrypt sensitive data in transit and at rest.

  3. Regularly update and patch software and systems.

  4. Monitor and log activities to detect and respond to security incidents.

  5. Implement network security measures like firewalls and intrusion detection/prevention systems.

  6. Regularly perform security assessments and audits.

  7. Train employees on security awareness and best practices.

16. Can we recover the EC2 instance when we have lost the key?

If you have lost the key pair used to authenticate with an EC2 instance, you cannot recover or regain access to the instance using that key.

However, you can still regain access in several ways:

  • Recover the Original Key Pair: If you have a backup of the private key or can retrieve the lost key, you can regain access by replacing the key pair.

  • Create a New EC2 Instance: If you can't recover the original key pair, you can create an AMI of the instance and launch a new one with a new key pair.

  • Replace the Key via the Root Volume (Linux instances): Stop the instance, detach its root EBS volume, attach the volume to another instance, add a new public key to ~/.ssh/authorized_keys, then reattach the volume and start the instance. If AWS Systems Manager Session Manager or EC2 Instance Connect was enabled beforehand, it can also provide access without the key pair.

It’s crucial to emphasize the importance of proactive key management and maintaining backups to prevent such access issues.

17. What is a gateway?

A gateway is a networking device or service that acts as an entry point or interface between different networks, enabling communication and data transfer. It serves as a bridge or connector, connecting different networks with different protocols or architectures.

Gateways can perform various functions, such as routing, protocol translation, security enforcement, and network traffic management. They enable connectivity and interoperability between networks, allowing data to flow seamlessly between them.

Gateways are commonly used in the context of the Internet, where they facilitate communication between local networks and the wider Internet, providing access to external resources and services.

18. What is the difference between the Amazon RDS, DynamoDB, and Redshift?

Amazon RDS (Relational Database Service) is a managed service that allows you to run and scale relational databases like MySQL, PostgreSQL, Oracle, and SQL Server. It offers automated backups, replication, and patch management.

DynamoDB is a fully managed NoSQL database service that provides fast and seamless scalability, ideal for applications requiring low-latency data access. It offers flexible schema design and automatic scaling based on demand.

Redshift is a fully managed data warehousing service optimized for online analytic processing (OLAP). It enables high-performance querying and analysis of large datasets. Redshift is designed for data warehousing and analytics workloads, supporting SQL-based queries on structured data.

19. Do you prefer to host a website on S3? What’s the reason if your answer is either yes or no?

Yes, Host on S3:

  • Cost-Effective: Hosting a website on S3 is cost-effective, especially for static websites with low traffic. You pay only for the storage and data transfer you use.

  • Scalability: S3 can handle high traffic volumes and is automatically scalable. It's suitable for small to medium websites.

  • Simple Setup: Setting up a static website on S3 is straightforward, and AWS provides tools to simplify the process.

  • Security: S3 allows fine-grained control over access permissions, and you can integrate it with other AWS services for added security.

No, Don’t Host on S3:

  • Dynamic Content: If your website relies on dynamic content generated by a server, S3 alone is not suitable. You'd need a web server or serverless architecture to handle dynamic requests.

  • Database: If your website requires a database for user authentication, e-commerce functionality, or content management, S3 is not the best choice. You'd need a more comprehensive hosting solution.

  • Complexity: For complex websites with many features, interactivity, and databases, using S3 alone may become complex to manage, and other hosting solutions might be more appropriate.
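For the "yes" case, a static website on S3 needs only an index and an error document. A sketch of the website configuration payload used by S3's put_bucket_website API (the bucket name in the comment is hypothetical):

```python
# Website configuration in the shape expected by S3's put_bucket_website API.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},  # served for directory requests
    "ErrorDocument": {"Key": "error.html"},     # served on 4xx errors
}

print(website_config)

# With AWS credentials configured:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_website(Bucket="my-site-bucket",  # hypothetical bucket name
#                         WebsiteConfiguration=website_config)
# The site is then served from the bucket's website endpoint, e.g.
#   http://my-site-bucket.s3-website-<region>.amazonaws.com
```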

20. What is AWS Lambda and how does it work?

AWS Lambda is a Serverless computing service that allows you to run your code without provisioning or managing servers.

It follows an event-driven model, where your code is executed in response to events from various AWS services or custom triggers.

Lambda functions can be written in several programming languages and can be designed to handle specific events or perform specific tasks.

Lambda functions scale automatically and can run in parallel, ensuring high availability and efficient resource utilization. With Lambda, you pay only for the compute time consumed by your code.
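A minimal Lambda-style handler illustrates the event-driven model: the function receives an event object and returns a response, with no server code around it (the event shape here is made up for the example):

```python
import json

def handler(event, context):
    """Minimal Lambda handler: returns a greeting for an event like {"name": "..."}."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Invoked locally the same way Lambda would call it (context is unused here):
result = handler({"name": "Alice"}, None)
print(result["body"])  # {"message": "Hello, Alice!"}
```

In AWS, the same function would be wired to a trigger (an API Gateway route, an S3 event, an SQS queue, and so on), and Lambda would invoke it once per event.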

21. Explain VPC (Virtual Private Cloud) and its components.

VPC is a virtual network dedicated to your AWS account, providing a logically isolated section of the AWS cloud. It allows you to define a virtual network environment, including IP addressing, subnets, routing tables, security groups, and network gateways. The key components of a VPC include:

  • Subnets: Segments of IP addresses within the VPC where resources can be provisioned.

  • Route Tables: Define the rules for routing network traffic between subnets and the internet.

  • Internet Gateway: Allows communication between instances in the VPC and the internet.

  • NAT Gateway: Enables instances within private subnets to access the internet while remaining secure.

  • Security Groups: Act as virtual firewalls to control inbound and outbound traffic to instances.

  • Network Access Control Lists (NACLs): Additional layer of network security at the subnet level.
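As an example of the security group component, an ingress rule is expressed as a simple payload. A sketch in the shape used by EC2's authorize_security_group_ingress call (the security group ID in the comment is hypothetical):

```python
# Ingress rule allowing HTTPS from anywhere, in the IpPermissions shape
# used by EC2's authorize_security_group_ingress API.
ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 443,
    "ToPort": 443,
    "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
}

print(ingress_rule)

# With AWS credentials configured:
#   import boto3
#   ec2 = boto3.client("ec2")
#   ec2.authorize_security_group_ingress(
#       GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
#       IpPermissions=[ingress_rule])
```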

22. Explain AWS DevOps tools to build and deploy software in the cloud.

AWS DevOps tools used to build and deploy software in the cloud include:

  • AWS Cloud Development Kit: It is an open-source software development framework for modeling and provisioning cloud application resources with popular programming languages.

  • AWS CodeBuild: It is a continuous integration service that processes multiple builds and tests code with continuous scaling.

  • AWS CodeDeploy: It automates software deployments to compute services such as Amazon EC2, AWS Fargate, and AWS Lambda, as well as on-premises servers.

  • AWS CodePipeline: It automates the build, test, and deploy stages of a release pipeline for rapid and reliable updates.

  • AWS CodeStar: It provides a unified user interface that helps DevOps teams develop, build, and deploy applications on AWS.

  • AWS Device Farm: It works as a testing platform to test applications on different mobile devices and browsers.

23. What is offered under Migration services by Amazon?

Amazon offers various migration services:

  • AWS Database Migration Service (DMS) is a tool for migrating data quickly from an on-premises database to the AWS cloud. DMS supports relational database systems like Oracle, SQL Server, MySQL, and PostgreSQL, both on premises and in the cloud.

  • AWS Server Migration Service (SMS) helps migrate on-premises workloads to the AWS cloud. SMS migrates a client's on-premises VMware VMs to cloud-based Amazon Machine Images (AMIs).

  • AWS Snowball is a data transport solution for data collection, machine learning, processing, and storage in low-connectivity environments.

24. What is offered under Messaging services by Amazon?

Amazon offers various messaging services:

  • Amazon Simple Notification Service (SNS) is a fully managed, secure, highly available messaging service from AWS that helps decouple serverless applications, microservices, and distributed systems. SNS can be set up within minutes from the AWS Management Console, the command-line interface, or a software development kit.

  • Amazon Simple Queue Service (SQS) is a fully managed message queue for serverless applications, microservices, and distributed systems. SQS FIFO queues guarantee exactly-once processing and preserve the exact order in which messages are sent.

  • Amazon Simple Email Service (SES) provides email sending and receiving for informational, notification, and marketing correspondence, which cloud customers can access through an SMTP interface or API.

25. What is the purpose of making subnets?

Subnets divide a large network into smaller networks. This helps reduce congestion, since traffic can be routed within a subnet rather than across the entire network.
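Subnetting can be illustrated with Python's standard ipaddress module, splitting a /16 block (a common VPC size) into four /18 subnets:

```python
import ipaddress

# Split a 10.0.0.0/16 VPC-style block into four equal /18 subnets.
network = ipaddress.ip_network("10.0.0.0/16")
subnets = list(network.subnets(new_prefix=18))

for s in subnets:
    print(s, "-", s.num_addresses, "addresses")
# Prints 10.0.0.0/18, 10.0.64.0/18, 10.0.128.0/18, 10.0.192.0/18,
# each with 16384 addresses.
```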

26. What is Elastic Beanstalk?

Elastic Beanstalk is an orchestration service by AWS, used in various AWS applications such as EC2, S3, Simple Notification Service, CloudWatch, autoscaling, and Elastic Load Balancers.

It is the fastest and simplest way to deploy your application on AWS using either AWS Management Console, a Git repository, or an integrated development environment (IDE).

27. What is Geo Restriction in CloudFront?

Geo restriction, also known as geo-blocking, prevents users in specific geographic locations from accessing content you’re distributing through a CloudFront web distribution.

28. What is the use of Amazon ElastiCache?

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud.

29. Differentiate between stopping and terminating an instance.

When an instance is stopped, the instance performs a normal shutdown and then transitions to a stopped state.

When an instance is terminated, the instance performs a normal shutdown and is permanently deleted; it cannot be restarted. The attached Amazon EBS volumes are deleted unless the volume's DeleteOnTermination attribute is set to false.

30. What are the popular DevOps tools?

The popular DevOps tools are:

  • Chef, Puppet, Ansible, and SaltStack — Deployment and Configuration Management Tools

  • Docker — Containerization Tool

  • Git — Version Control System Tool

  • Jenkins — Continuous Integration Tool

  • Nagios — Continuous Monitoring Tool

  • Selenium — Continuous Testing Tool

31. What are the features of Amazon CloudSearch?

Amazon CloudSearch features:

  • Autocomplete suggestions

  • Boolean searches

  • Full-text search

  • Faceted search and term boosting

  • Highlighting

  • Prefix searches

  • Range searches

32. How do you access the data on EBS in AWS?

Data on EBS cannot be accessed directly through a graphical interface in AWS. Accessing it requires attaching the EBS volume to an EC2 instance.

Once the volume is attached to an instance, whether Windows or Linux, you can read from or write to it. You can also take snapshots of volumes containing data and build new volumes from those snapshots. Each EBS volume can be attached to only a single instance at a time.

33. What are lifecycle hooks in AWS autoscaling?

Lifecycle hooks can be added to an Auto Scaling group. They enable you to perform custom actions by pausing instances as the Auto Scaling group launches or terminates them. An Auto Scaling group can have multiple lifecycle hooks.

34. What is a Hypervisor?

A hypervisor is software used to create and run virtual machines. It pools physical hardware resources and distributes them virtually to each guest. Examples include Oracle VirtualBox, Oracle VM for x86, VMware Fusion, VMware Workstation, and Solaris Zones.

35. Explain the role of AWS CloudTrail.

AWS CloudTrail is a service for monitoring and auditing API calls in your AWS account. With CloudTrail, you can monitor and retain a record of account activity across your AWS infrastructure.

36. Explain Amazon Route 53.

Amazon Route 53 is a scalable, highly available Domain Name System (DNS) web service. It gives developers and companies a reliable and cost-effective way to route end users to internet applications by translating domain names into IP addresses.

37. What are the parameters for S3 pricing?

The following are the parameters for S3 pricing:

  • Transfer acceleration

  • Number of requests you make

  • Storage management

  • Data transfer

  • Storage used

38. Name the different types of instances.

Following are the different types of instances:

  • Memory-optimized

  • Accelerated computing

  • Compute-optimized

  • General-purpose

  • Storage-optimized

39. Name the database types in RDS.

The following are the types of databases in RDS:

  • MySQL

  • PostgreSQL

  • SQL Server

  • Aurora

  • Oracle

  • MariaDB

40. What is CloudWatch?

Amazon CloudWatch is a metrics repository. It allows you to monitor the complete stack, including applications, infrastructure, and services. You can also use alarms, logs, and event data to take automated actions and reduce the mean time to resolution (MTTR).

41. What are Key-Pairs in AWS?

A key pair consists of a public key and a private key and serves as the secure login credential for your virtual machines. Amazon EC2 stores the public key, and you keep the private key.