Finding Easy S3 Cost Savings with Storage Lens
If you’re running applications on AWS, there’s a good chance you have S3 buckets accumulating forgotten data that costs you money every month. The good news is that Amazon S3 Storage Lens makes it straightforward to identify and eliminate these costs without touching production data or introducing any risk to your applications.
This guide will walk you through enabling Storage Lens and using a focused set of metrics to find two common sources of unnecessary S3 costs: Noncurrent Object Versions and Incomplete Multipart Uploads.
What Makes These “Safe” Savings?
Before we dive in, let’s take a look at why these two areas are low-risk targets for cost reduction.
Noncurrent object versions are older versions of files that your applications typically don’t access. If you have versioning enabled on your buckets (which you should for production data), S3 keeps every version of every file you’ve ever uploaded. While this protects against accidental deletions, it also means you might be paying to store dozens of versions of files that haven’t been accessed in years.
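Versioning itself is a per-bucket setting. If you manage your buckets with Terraform, it looks roughly like this (a minimal sketch; aws_s3_bucket.example is a placeholder for your own bucket resource):

resource "aws_s3_bucket_versioning" "example" {
  # Placeholder reference; point this at your own bucket resource
  bucket = aws_s3_bucket.example.id

  versioning_configuration {
    # With versioning "Enabled", every overwrite or delete leaves behind a
    # noncurrent version that keeps accruing storage charges until it is removed
    status = "Enabled"
  }
}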
Incomplete multipart uploads are fragments from uploads that started but never completed. They serve no purpose, yet S3 continues to charge you for storing these orphaned pieces. This happens more often than you might think. Network issues, application crashes, or forgotten test scripts can all leave these fragments behind.
Neither of these affects your current, in-use data. That’s what makes them safe targets.
Enabling Storage Lens
Storage Lens is free for basic metrics (which is all we need). Here’s how to set it up:
For a single account:
- In the S3 console, go to Storage Lens → Create dashboard
- Name it something like “Cost-Optimization-Dashboard”
- Set scope to Account, include all regions
- Keep free metrics selected
- Create the dashboard
For AWS Organizations: Create the dashboard from your management account and select Organization as the scope instead of Account. This gives you a consolidated view across all accounts.
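If you prefer to define the dashboard as code, the AWS Terraform provider exposes Storage Lens through the aws_s3control_storage_lens_configuration resource. Here is a minimal sketch of an account-scoped, free-metrics configuration; the config_id value is a placeholder, and the exact block layout may differ slightly between provider versions:

resource "aws_s3control_storage_lens_configuration" "cost_optimization" {
  # The config_id shows up as the dashboard name in the S3 console
  config_id = "cost-optimization-dashboard"

  storage_lens_configuration {
    enabled = true

    # Free metrics only: account-level rollups with per-bucket breakdowns,
    # which include noncurrent version bytes and incomplete multipart upload bytes
    account_level {
      bucket_level {}
    }
  }
}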
You will need to wait 24-48 hours before meaningful metrics appear.
The Metrics That Matter
Once your dashboard has collected data for a day or two, focus on two metrics:
1. Noncurrent Version Bytes
This metric shows the total storage consumed by object versions that are no longer current. To find it:
- Open your Storage Lens dashboard
- Look for the Noncurrent version bytes metric (in the cost optimization section)
- Check the breakdown by bucket or storage class
What you’re looking for:
- High absolute numbers: Buckets with hundreds of gigabytes or terabytes of noncurrent versions
- High percentages: When noncurrent versions represent 30% or more of total bucket storage
- Growth trends: Noncurrent storage that increases steadily month over month
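To put rough numbers on those thresholds: noncurrent versions are billed at the normal rate for whatever storage class they sit in, so at the published S3 Standard price of about $0.023 per GB-month (us-east-1), a single terabyte of forgotten versions costs roughly 1,024 GB × $0.023 ≈ $23.50 per month, or around $280 per year, per bucket.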
2. Incomplete Multipart Upload Bytes
This metric reveals storage consumed by abandoned upload fragments:
- In your Storage Lens dashboard, find Incomplete multipart upload bytes
- Review both the total across all buckets and the per-bucket breakdown
Even small amounts here represent pure waste. There’s never a legitimate reason to keep incomplete multipart uploads indefinitely.
Interpreting What You Find
Not all noncurrent versions are forgotten data. Here’s how to distinguish between expected retention and optimization opportunities:
Expected Noncurrent Versions
Scenarios where noncurrent versions are intentional:
- Short-term protection windows: Many teams keep 30-90 days of versions as a safety net against accidental changes (a lifecycle sketch of this pattern follows this list).
- Compliance requirements: Regulatory frameworks may mandate keeping version history.
- Active rollback capability: Applications designed to roll back to previous file versions.
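If your buckets fall into that first category, you can encode the protection window directly in a lifecycle rule instead of keeping versions forever. Here’s a sketch, assuming a 30-day window that also keeps the five most recent noncurrent versions regardless of age (the newer_noncurrent_versions argument requires a reasonably recent AWS provider):

resource "aws_s3_bucket_lifecycle_configuration" "protection_window" {
  # Placeholder reference to the bucket you want to protect
  bucket = aws_s3_bucket.example.id

  rule {
    id     = "30-day-version-safety-net"
    status = "Enabled"

    # An empty filter applies the rule to every object in the bucket
    filter {}

    noncurrent_version_expiration {
      # Delete noncurrent versions 30 days after they are superseded...
      noncurrent_days = 30
      # ...but always retain the 5 most recent noncurrent versions
      newer_noncurrent_versions = 5
    }
  }
}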
Forgotten or Excessive Versions
Red flags that suggest optimization opportunities:
- Old versions in non-production buckets: Development, testing, or sandbox buckets that have accumulated years of versions
- Log or artifact buckets: CI/CD build artifacts or application logs rarely need multiple versions (see the prefix-scoped sketch after this list)
- Static asset buckets: Website images, CSS, or JavaScript files don’t typically need extensive version history
- Versions older than any reasonable retention period: Files with 50+ versions or versions spanning multiple years
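For buckets like these you can afford shorter retention, and you can scope a rule to a prefix if only part of a bucket is the problem. A hypothetical sketch for a bucket with a logs/ prefix; both the prefix and the 7-day retention are illustrative values:

resource "aws_s3_bucket_lifecycle_configuration" "log_version_cleanup" {
  # Placeholder reference to a log or artifact bucket
  bucket = aws_s3_bucket.logs.id

  rule {
    id     = "expire-old-log-versions"
    status = "Enabled"

    # Only objects under the logs/ prefix are affected by this rule
    filter {
      prefix = "logs/"
    }

    noncurrent_version_expiration {
      # Logs rarely need version history; keep only a short buffer
      noncurrent_days = 7
    }
  }
}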
The Incomplete Multipart Upload Situation
For incomplete multipart uploads, interpretation is simple: you probably want to delete all of them. The only exception would be if you have uploads actively in progress, but these should complete within hours, not days or weeks.
Turning Findings Into Action
Use S3 lifecycle policies to automate cleanup safely. Changes are gradual, you can set conservative retention periods, and policies prevent future buildup.
For noncurrent versions: Go to your S3 bucket → Management tab → Create lifecycle rule → Enable “Expire noncurrent versions of objects” → Set the number of days (start with 90-180 for safety).
For incomplete multipart uploads: In the same lifecycle rule (or a separate one), enable “Delete expired object delete markers or incomplete multipart uploads” and set days after initiation (7 days is safe, though 1-3 days works for most cases).
You can combine both rules in a single lifecycle policy for simplicity.
Using Terraform
If you manage infrastructure as code, here’s how to implement both rules:
resource "aws_s3_bucket_lifecycle_configuration" "cleanup" {
bucket = aws_s3_bucket.example.id
rule {
id = "cleanup-versions-and-incomplete-mpu"
status = "Enabled"
noncurrent_version_expiration {
noncurrent_days = 90
}
abort_incomplete_multipart_upload {
days_after_initiation = 7
}
}
}
Measuring Your Impact
Return to Storage Lens to track the results. Noncurrent version bytes should decline within days, incomplete uploads should drop quickly (often to zero), and total storage costs will decrease proportionally in AWS Cost Explorer.
Common Pitfalls to Avoid
- Don’t over-optimize. Production data deserves generous retention. Storage is relatively cheap; don’t spend hours to save $5/month.
- Test before broad deployment. Start with 2-3 non-critical buckets, monitor, then gradually expand. Document your standard retention periods. Remember compliance always comes first.
- Communicate changes. Let teams know about new policies and provide a way to request exceptions for specific buckets.
What This Approach Doesn’t Cover
This guide focuses on low-risk savings. We’re not addressing storage class optimization, Intelligent-Tiering, cross-region replication cleanup, or application-level data deletion. These carry more risk and require deeper analysis. Start here, then consider advanced optimizations once you’re comfortable.
TL;DR
Here’s a simple action plan to get started:
- Enable Storage Lens with the default free metrics
- Review the dashboard once data has accumulated and identify your top 3-5 buckets by noncurrent version storage
- Implement lifecycle policies on 1-2 non-critical buckets as a pilot
- Monitor the results, then expand to additional buckets
This approach ensures you gain confidence with the tools while capturing meaningful savings. You’re not trying to architect the perfect storage optimization strategy; you’re identifying and eliminating obvious waste.
Storage Lens makes this analysis straightforward. The metrics are clear, the risks are minimal, and the savings are immediate.
Photo by Lucas van Oort on Unsplash