Each week brings news of companies suffering cloud breaches that could have been prevented. In 2024, exposed cloud storage buckets, misconfigured IAM roles, and leaked API keys have already cost organisations well over $100 million. The most painful part? None of these incidents required a sophisticated zero-day exploit; every one happened because someone didn't get the basics right.
If you are deploying to AWS, managing your cloud infrastructure, or learning about DevOps, this guide is your practical field manual. We will present you with the cloud security mistakes that the security teams continue to see being made, and how to avoid making them.
Why Cloud Misconfigurations Persist
You may wonder why cloud misconfiguration risks, though well understood, still cause so many breaches.
Three factors drive this: speed, complexity, and visibility.
Cloud environments grow rapidly. Suppose a developer launches an EC2 instance on Friday, opens a security group port "just for testing", and leaves the instance running over the weekend; by Monday, it's in production. This is how most misconfigurations happen: not through deliberate action, but through forward motion.
Cloud providers like AWS offer hundreds of services, each with its own permission model, network configuration, and logging options. No single engineer can know every detail of all of them. And the cloud has no physical barriers to access: any internet-connected device can reach a misconfigured cloud resource, and a compromised resource can reach out in return.
The good news is that once you learn how to look for potential problems, there are many simple cloud security tips that can help you prevent these types of misconfigurations. Let's take a closer look.
The Cloud Security Mistakes You Need to Fix
1. Public S3 Buckets with Sensitive Information
What it is: An Amazon S3 bucket configured so that it can be read (or written) publicly over the internet. Some users do this intentionally, e.g. for static website hosting; others do it by accident and never realise.
Why it's dangerous: Anyone who knows the URL of a public S3 bucket can access its data. These buckets get indexed by search engines, and automated scanners crawl the internet looking for them. Worse, S3 bucket names are usually predictable, e.g. companyname-backups.
Real-world example: A US banking company left customer information in a public Amazon S3 bucket for more than a year. The information included names, addresses, and account numbers for millions of customers. It took an independent security researcher less than an hour to discover the publicly accessible Amazon S3 bucket using standard enumeration tools.
How to fix this:
- Ensure S3 Block Public Access is enabled at the account level (AWS now prompts users to do this, but verify it yourself).
- Use bucket policies to explicitly deny public access to your S3 buckets.
- Set up Amazon Macie to automatically identify sensitive data that remains in S3.
- Use AWS Trusted Advisor or AWS Security Hub to audit any existing buckets.
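To make the audit concrete, here is a minimal sketch of the kind of check Trusted Advisor performs: scanning a bucket policy for statements that grant access to everyone. The policy shown is a hypothetical example (the bucket name is invented); a real audit would fetch policies via the S3 API.

```python
import json

def find_public_statements(policy_json: str) -> list:
    """Return the Sids of statements that grant access to everyone."""
    policy = json.loads(policy_json)
    risky = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        # Principal "*" or {"AWS": "*"} means "anyone on the internet"
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if stmt.get("Effect") == "Allow" and is_public:
            risky.append(stmt.get("Sid", "<no Sid>"))
    return risky

# A deliberately public policy, like the ones behind many breaches
policy = """{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicRead",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::companyname-backups/*"
  }]
}"""
print(find_public_statements(policy))  # ['PublicRead']
```

Running this against every bucket policy in an account is a quick way to build your own inventory of publicly exposed data before an external researcher does.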
2. Overly Permissive IAM Roles and Policies
What it is: AWS Identity and Access Management (IAM) roles that grant more permissions than necessary. For example, a Lambda function assigned AdministratorAccess just to read one DynamoDB table.
Why it's dangerous: If that role is compromised, an attacker inherits every permission attached to it. Overly permissive roles are one of the biggest threats to AWS environments.
Real-world example: A start-up's CI/CD pipeline used a deployment role with full iam:* and s3:* access. When the build process was compromised via a malicious npm package, the attacker used the role's full capabilities to create an admin account and exfiltrate three months' worth of data before detection.
How to fix it:
- Apply the principle of least privilege: grant the minimum access each role or user requires.
- Use IAM Access Analyzer to identify permissions that are too broad.
- Regularly audit roles with IAM credential reports.
- Use permission boundaries for roles created by automation.
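The difference between a scoped role and an over-permissive one is easy to check mechanically. The sketch below flags wildcard actions in a policy document; the table name and account ID are hypothetical, and this simple check is no substitute for IAM Access Analyzer, just an illustration of the principle.

```python
def overly_broad_actions(policy: dict) -> list:
    """Flag Allow statements whose actions use wildcards (e.g. iam:* or *)."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        flagged += [a for a in actions if a == "*" or a.endswith(":*")]
    return flagged

# Least-privilege policy: read one DynamoDB table, nothing more
scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
    }],
}

# What the compromised start-up role effectively amounted to
admin_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}

print(overly_broad_actions(scoped_policy))  # []
print(overly_broad_actions(admin_policy))   # ['*']
```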
3. Hardcoded Secrets in the Source Code
What it is: Credentials embedded in an application's source code, such as API keys, database passwords, or AWS access keys, and committed to version control.
Why it's dangerous: Git repositories get shared, cloned, and occasionally made public. Even if you delete a secret, it remains in Git history. And automated bots continuously scan GitHub for leaked credentials, often using them within seconds of a push.
Real-world example: A developer accidentally committed an AWS access key to a public GitHub repository. An automated bot found the key in under four minutes and spun up 130 EC2 instances to mine cryptocurrency. The AWS bill for that weekend was over $50,000.
How to fix this:
- Use AWS Secrets Manager or Parameter Store for credential storage
- Add git-secrets or truffleHog as pre-commit hooks to block any commit that contains a secret
- Enable Secret Scanning on GitHub or an equivalent in your version control
- Immediately rotate any secret that may have been exposed
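The core of tools like git-secrets is pattern matching. As a rough sketch, AWS access key IDs follow a well-known shape (AKIA followed by 16 uppercase letters or digits), so a pre-commit check can be as simple as a regex over the staged diff. The example key below is AWS's own documented placeholder, not a real credential.

```python
import re

# AWS access key IDs start with "AKIA" followed by 16 uppercase
# letters/digits; this is the same shape git-secrets looks for.
AWS_KEY_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def contains_aws_key(text: str) -> bool:
    """Return True if the text appears to contain an AWS access key ID."""
    return bool(AWS_KEY_PATTERN.search(text))

diff = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'  # AWS's documented example key
print(contains_aws_key(diff))          # True
print(contains_aws_key("no secrets"))  # False
```

Real scanners add many more patterns (private keys, tokens, high-entropy strings), but even this one check would have caught the leaked key in the example above.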
4. Misconfigured Security Groups (Open to 0.0.0.0/0)
What it is: AWS security groups, which act as virtual firewalls, configured to allow inbound traffic from any public IP address (0.0.0.0/0) on sensitive ports such as SSH (22), RDP (3389), or database ports.
Why it's dangerous: It exposes your servers directly to the public internet, where automated scanners find open ports within minutes of their creation. The typical sequence is a brute-force password attack against SSH, followed by ransomware or a cryptojacking payload.
Real-world example: A developer opened port 22 on an EC2 instance to troubleshoot a problem remotely and never closed it. Three weeks later the instance had been absorbed into a botnet, discovered only when the billing cycle showed egress traffic far above normal.
How to fix it:
- Never allow inbound rules from 0.0.0.0/0 for SSH or RDP; reach these services through a bastion host or AWS Systems Manager Session Manager instead.
- Restrict database ports to only the application servers that actually need access.
- Use AWS Config to monitor for security group rules open to 0.0.0.0/0 and alert on violations.
- Enable VPC Flow Logs to analyse traffic patterns in your environment.
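The check AWS Config performs here can be sketched in a few lines: an inbound rule is dangerous when its CIDR covers the whole internet and its port range touches a sensitive service. The rule dictionaries below are a simplified stand-in for the real security group API response format.

```python
from ipaddress import ip_network

RISKY_PORTS = {22: "SSH", 3389: "RDP", 3306: "MySQL", 5432: "PostgreSQL"}

def open_to_world(rule: dict) -> bool:
    """True if an inbound rule exposes a risky port to the whole internet."""
    world = ip_network(rule["cidr"]).prefixlen == 0  # 0.0.0.0/0 has prefix length 0
    exposed = any(rule["from_port"] <= p <= rule["to_port"] for p in RISKY_PORTS)
    return world and exposed

rules = [
    {"from_port": 22, "to_port": 22, "cidr": "0.0.0.0/0"},    # bad: SSH to world
    {"from_port": 22, "to_port": 22, "cidr": "10.0.0.0/8"},   # ok: internal only
    {"from_port": 443, "to_port": 443, "cidr": "0.0.0.0/0"},  # ok: public HTTPS
]
print([open_to_world(r) for r in rules])  # [True, False, False]
```

Note that 443 open to the world is intentionally allowed: the point is not to close everything, but to keep administrative and database ports off the public internet.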
5. No Logging or Monitoring Enabled
What it is: Running your cloud infrastructure with audit logging and API call tracking disabled, leaving you blind to what happens in your environment.
Why it's dangerous: You can't respond to what you can't see. Many breaches go undetected for weeks rather than hours simply because logging was never turned on.
How to fix it:
- Enable AWS CloudTrail in all regions; include the global services trail.
- Implement Amazon GuardDuty as an intelligent threat detection tool.
- Use AWS Security Hub to aggregate findings across AWS services.
- Set up CloudWatch Alarms against critical metrics, for example, root account logins or IAM Policy changes.
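Once CloudTrail is on, the alarms in the last bullet reduce to matching fields in the event records. As an illustration, the sketch below detects a root-account console login in a CloudTrail event; the sample event is abbreviated and uses a documentation IP address, a real record carries many more fields.

```python
import json

def is_root_console_login(event_json: str) -> bool:
    """Detect a root-account ConsoleLogin in a CloudTrail event record."""
    event = json.loads(event_json)
    return (
        event.get("eventName") == "ConsoleLogin"
        and event.get("userIdentity", {}).get("type") == "Root"
    )

sample = json.dumps({
    "eventName": "ConsoleLogin",
    "userIdentity": {"type": "Root"},
    "sourceIPAddress": "203.0.113.5",
})
print(is_root_console_login(sample))  # True
```

In practice you would express the same match as a CloudWatch Logs metric filter feeding an alarm, rather than running Python yourself, but the matching logic is identical.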
6. Insufficient Security for CI/CD Pipelines
What it is: CI/CD pipelines with overly permissive cloud credentials, no secret scanning, unreviewed third-party dependencies, or no approval gates before deploying to production.
Why it's dangerous: Your CI/CD pipeline typically has production-level access. If it's compromised, attackers can automatically push malicious code across your entire environment, which is exactly why supply-chain attacks target this pathway.
Real-world example: The SolarWinds attack showed how a compromised build pipeline can distribute malicious code to thousands of downstream customers without triggering traditional detection mechanisms.
How to fix it:
- Utilize OIDC federation (rather than long-lived access keys) to generate temporary AWS credentials for GitHub Actions or other CI systems.
- Include mandatory code reviews/approvals prior to deploying to production.
- Scan all dependencies for known vulnerabilities using tools such as: Snyk, Dependabot, or AWS Inspector.
- Run your pipelines with scoped IAM roles: the deployment role should not be able to read all of your secrets.
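The OIDC federation in the first bullet comes down to an IAM role trust policy that accepts tokens from GitHub's identity provider, scoped to one repository and branch. The sketch below builds such a policy; the account ID, repo, and branch are placeholder values you would substitute with your own.

```python
import json

def github_oidc_trust_policy(account_id: str, repo: str, branch: str) -> dict:
    """Build a trust policy letting one GitHub repo/branch assume an IAM role
    via OIDC, instead of storing long-lived access keys in CI secrets."""
    provider = f"arn:aws:iam::{account_id}:oidc-provider/token.actions.githubusercontent.com"
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Federated": provider},
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    # Only tokens minted for STS, from this exact repo and branch
                    "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
                    "token.actions.githubusercontent.com:sub": f"repo:{repo}:ref:refs/heads/{branch}",
                }
            },
        }],
    }

policy = github_oidc_trust_policy("123456789012", "acme/webapp", "main")
print(json.dumps(policy, indent=2))
```

The `sub` condition is the important part: without it, any GitHub repository could assume the role. Attach this trust policy to a role whose permission policy covers only what the deploy actually needs.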
Detection: Find Security Risks Before They Become Breaches
Finding a misconfigured resource before a breach is worth far more than finding it afterwards. A multilayered detection approach that has proven to work well in production includes:
Use AWS Security Hub as the centralized dashboard, aggregating findings from GuardDuty, Inspector, Macie, IAM Access Analyzer, and Firewall Manager into one view, so you can quickly assess security findings against industry standards. For example, enable the AWS Foundational Security Best Practices standard, which automatically evaluates over 200 controls.
Use Amazon GuardDuty for runtime threat detection. Its machine learning models flag anomalous behaviour such as unusual API call patterns, cryptocurrency mining, and exfiltration attempts, even when no individual configuration looks malicious.
Use AWS Config with managed rules to continuously monitor resource configurations against compliance requirements. AWS Config can detect a newly noncompliant resource (e.g., an S3 bucket made public, or a security group opened on port 22) within minutes and trigger automatic remediation.
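A Config rule evaluation is ultimately a function from a resource's configuration to COMPLIANT or NON_COMPLIANT. As a sketch of how such a rule reasons, the example below evaluates a bucket's Block Public Access settings (the four field names are the real S3 API setting names; the evaluation wrapper itself is simplified).

```python
def evaluate_s3_public_access(bucket_config: dict) -> str:
    """Mimic an AWS Config rule: a bucket is compliant only if all four
    S3 Block Public Access settings are enabled."""
    settings = bucket_config.get("PublicAccessBlockConfiguration", {})
    required = ("BlockPublicAcls", "IgnorePublicAcls",
                "BlockPublicPolicy", "RestrictPublicBuckets")
    ok = all(settings.get(key) is True for key in required)
    return "COMPLIANT" if ok else "NON_COMPLIANT"

locked_down = {"PublicAccessBlockConfiguration": {
    "BlockPublicAcls": True, "IgnorePublicAcls": True,
    "BlockPublicPolicy": True, "RestrictPublicBuckets": True}}
print(evaluate_s3_public_access(locked_down))  # COMPLIANT
print(evaluate_s3_public_access({}))           # NON_COMPLIANT
```

In a real deployment this logic runs inside AWS Config (or a custom-rule Lambda), and the NON_COMPLIANT result can trigger an automatic remediation action.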
Use CloudTrail Lake, or ship your CloudTrail logs to your SIEM. Analysing logs over time is critical for effective incident response, and historical analysis often reveals trends that are invisible in real time.
Best Practices For Preventing Problems
A solid cloud security program becomes ingrained in how a business operates and should be developed continually. "Shift left" means moving security checks earlier in the process rather than waiting until after deployment, e.g. security testing in the CI/CD pipeline instead of post-deployment auditing. Several tools (Checkov, tfsec, cfn-nag) can scan Terraform, CloudFormation, and CDK codebases for configuration issues before they ever reach AWS.
Infrastructure as Code (IaC) is the preferred way to build AWS environments because it provides a version-controlled record of your configuration state. That record helps auditors understand how configurations came to be, and it makes security part of every code review and approval.
Every IAM user must have multi-factor authentication (MFA) enabled; hardware tokens and passkeys are two common forms. The root account and any accounts with elevated privileges should use hardware tokens, since those accounts can perform actions that would cause serious damage to your organisation.
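Verifying the MFA requirement is straightforward because IAM can export a credential report as CSV. The sketch below parses such a report and lists console users missing MFA; the column names (`user`, `password_enabled`, `mfa_active`) match the real report, while the sample rows are invented.

```python
import csv
import io

def users_without_mfa(report_csv: str) -> list:
    """List console users in an IAM credential report who lack MFA."""
    rows = csv.DictReader(io.StringIO(report_csv))
    return [
        row["user"] for row in rows
        # Only users with console passwords need MFA; machine users don't log in
        if row["password_enabled"] == "true" and row["mfa_active"] == "false"
    ]

report = """user,password_enabled,mfa_active
alice,true,true
bob,true,false
ci-bot,false,false
"""
print(users_without_mfa(report))  # ['bob']
```

Running a check like this on a schedule (against the report from `aws iam get-credential-report`) turns the MFA policy from a wiki page into something enforced.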
Design your AWS infrastructure around the principle of immutability. Prefer managed services and containerized workloads over long-lived EC2 (Elastic Compute Cloud) instances: short-lived environments give attackers a smaller window and accumulate less configuration drift.
What's Next in Cloud Security
The cloud security threat landscape is changing fast, but the basics still apply. What's concerning is that the stakes keep rising as the attack surface keeps growing.
AI-driven attacks can now automatically discover your cloud resources, identify misconfigured services, and map possible attack paths within seconds. Your defenses need to be just as automated.
Supply-chain attacks targeting CI/CD pipelines and open-source dependencies will remain significant, pushing organisations to focus on package integrity, Software Bill of Materials (SBOM) compliance, and build-pipeline hardening.
Managing security across multiple clouds (AWS, Azure, and GCP) also introduces new misconfiguration risk. Many organisations now adopt Cloud Security Posture Management (CSPM) tools such as Wiz, Prisma Cloud, or the providers' native CSPM offerings to manage their overall posture and simplify security across clouds.
The engineers who succeed in this environment are the ones who treat security as code: versioned, reviewed, tested, and automated. That mindset shift is often the difference between organisations that suffer a breach and those that don't.
Conclusion
In 2025, cloud security does not depend on a team of 50 security engineers or a seven-figure budget; it depends on discipline. The six mistakes covered in this guide (public S3 buckets, overly permissive IAM roles, hardcoded secrets, misconfigured security groups, missing logging and monitoring, and weak CI/CD security) can all be fixed with tools AWS already provides, and each has been behind real breaches in the past year.
Start with the fixes above. Pick the two or three areas where your team is most exposed and tackle those first. Then make security reviews a routine part of every sprint, not a quarterly fire drill.
The breaches you see in the news often look sophisticated, but most could have been prevented with the fixes listed here. Now that you know what to look for, start closing the gaps before someone else finds them.


















