🪣 AWS S3 Buckets in Next.js: Public & Private with Full Implementation
Aug 25, 2025 • 8 min read

Amazon Simple Storage Service (S3) is one of the most fundamental and widely-used services in the AWS ecosystem. S3 is cloud object storage with industry-leading scalability, data availability, security, and performance, making it ideal for a vast range of applications from simple file storage to complex data analytics workloads.
What is AWS S3?
AWS S3 is an object storage service that stores data as objects within buckets. Unlike traditional file systems, S3 uses a flat namespace where each object is identified by a unique key within a bucket. S3 also offers robust Service Level Agreements (SLAs) that allow you to access your data when you need it, ensuring high availability and durability for your applications.
Core Components
| Component | Description |
|---|---|
| Buckets | Containers for objects; names must be globally unique |
| Objects | Individual files stored in buckets; each can be up to 5 TB |
| Keys | Unique identifiers for objects within a bucket |
| Regions | Geographic locations where buckets are stored |
| Access Control | Permissions and policies that control access to resources |
Key Features
S3 provides numerous features that make it suitable for various use cases:
- Durability: 99.999999999% (11 9’s) durability
- Availability: 99.99% availability SLA
- Scalability: Virtually unlimited storage capacity
- Security: Multiple layers of security including encryption
- Cost Optimization: Multiple storage classes for different access patterns
- Integration: Seamless integration with other AWS services
Public vs Private S3 Buckets
Understanding the difference between public and private buckets is crucial for implementing proper security and access controls.
Public Buckets
Public buckets allow anyone on the internet to access the stored objects. This configuration is suitable for:
- Static website hosting
- Public content distribution
- Open datasets
- Public documentation
Important Security Note: AWS has implemented safeguards to prevent accidental public exposure. New buckets have Block Public Access enabled and ACLs disabled by default, so to make content publicly accessible you must explicitly turn off Block Public Access for the bucket and attach a bucket policy that allows public reads.
Private Buckets
Private buckets restrict access to authorized users only. This is the default and recommended configuration for:
- User-uploaded content
- Sensitive documents
- Application data
- Backup and archival
Setting Up S3 Buckets
Creating a Public Bucket Configuration
// bucket-policy-public.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-public-bucket-name/*"
    }
  ]
}
Private Bucket (Default Configuration)
Private buckets don’t require additional policies as they deny public access by default. Access is controlled through:
- IAM policies
- Bucket policies
- Access Control Lists (ACLs)
- Pre-signed URLs
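As an illustration of the bucket-policy option above, a common hardening pattern for private buckets is a policy that denies any request made without TLS. This is a sketch; the bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::your-bucket-name",
        "arn:aws:s3:::your-bucket-name/*"
      ],
      "Condition": {
        "Bool": { "aws:SecureTransport": "false" }
      }
    }
  ]
}
```

The policy can be attached in the console or with `aws s3api put-bucket-policy`. Because it is a Deny statement, it takes precedence over any Allow that would otherwise permit plain-HTTP access.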
Next.js Implementation with TypeScript
Let’s implement both public and private bucket operations using Next.js and TypeScript.
Environment Setup
First, set up your environment variables:
// .env.local
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_S3_BUCKET_NAME=your-bucket-name
AWS_S3_PUBLIC_BUCKET_NAME=your-public-bucket-name
# Exposed to the browser (used by the download component)
NEXT_PUBLIC_AWS_REGION=us-east-1
NEXT_PUBLIC_S3_PUBLIC_BUCKET=your-public-bucket-name
Installing Dependencies
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
npm install --save-dev @types/node
AWS S3 Client Configuration
// lib/s3-client.ts
import { S3Client } from '@aws-sdk/client-s3';

const s3Client = new S3Client({
  region: process.env.AWS_REGION!,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});

export default s3Client;
Pre-signed URLs for File Uploads
A pre-signed URL is a URL that you generate with your AWS credentials that provides temporary access to upload or download a specific object in an S3 bucket. This approach is secure and scalable for client-side file operations because your credentials never leave the server.
Creating Pre-signed Upload URLs
// pages/api/upload-url.ts
import { NextApiRequest, NextApiResponse } from 'next';
import { PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import s3Client from '../../lib/s3-client';

interface UploadUrlRequest {
  fileName: string;
  fileType: string;
  isPublic?: boolean;
}

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  if (req.method !== 'POST') {
    return res.status(405).json({ message: 'Method not allowed' });
  }

  try {
    const { fileName, fileType, isPublic = false }: UploadUrlRequest = req.body;

    const bucketName = isPublic
      ? process.env.AWS_S3_PUBLIC_BUCKET_NAME
      : process.env.AWS_S3_BUCKET_NAME;

    const key = `uploads/${Date.now()}-${fileName}`;

    const command = new PutObjectCommand({
      Bucket: bucketName,
      Key: key,
      ContentType: fileType,
      // For public buckets, set the object ACL to public-read.
      // Note: this only works if the bucket's Object Ownership setting
      // permits ACLs; otherwise rely on a public-read bucket policy instead.
      ...(isPublic && { ACL: 'public-read' as const }),
    });

    // Generate a pre-signed URL valid for 5 minutes
    const uploadUrl = await getSignedUrl(s3Client, command, {
      expiresIn: 300,
    });

    const publicUrl = isPublic
      ? `https://${bucketName}.s3.${process.env.AWS_REGION}.amazonaws.com/${key}`
      : null;

    res.status(200).json({
      uploadUrl,
      key,
      publicUrl,
    });
  } catch (error) {
    console.error('Error generating upload URL:', error);
    res.status(500).json({ message: 'Error generating upload URL' });
  }
}
Frontend Upload Component
// components/FileUpload.tsx
import React, { useState } from 'react';

interface FileUploadProps {
  isPublic?: boolean;
  onUploadComplete?: (url: string, key: string) => void;
}

const FileUpload: React.FC<FileUploadProps> = ({
  isPublic = false,
  onUploadComplete,
}) => {
  const [uploading, setUploading] = useState(false);
  // Note: fetch() does not expose upload progress events, so this value
  // stays at 0 during the upload; switch to XMLHttpRequest or a library
  // that reports progress if you need a live percentage.
  const [uploadProgress, setUploadProgress] = useState(0);

  const handleFileUpload = async (file: File) => {
    setUploading(true);
    setUploadProgress(0);

    try {
      // Get a pre-signed URL from our API route
      const response = await fetch('/api/upload-url', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          fileName: file.name,
          fileType: file.type,
          isPublic,
        }),
      });
      const { uploadUrl, key, publicUrl } = await response.json();

      // Upload the file directly to S3
      const uploadResponse = await fetch(uploadUrl, {
        method: 'PUT',
        body: file,
        headers: { 'Content-Type': file.type },
      });

      if (uploadResponse.ok) {
        const finalUrl = publicUrl || key;
        onUploadComplete?.(finalUrl, key);
      } else {
        throw new Error('Upload failed');
      }
    } catch (error) {
      console.error('Upload error:', error);
      alert('Upload failed. Please try again.');
    } finally {
      setUploading(false);
      setUploadProgress(0);
    }
  };

  const handleDrop = (e: React.DragEvent) => {
    e.preventDefault();
    const files = Array.from(e.dataTransfer.files);
    if (files.length > 0) {
      handleFileUpload(files[0]);
    }
  };

  return (
    <div className="upload-container">
      <div
        className="drop-zone"
        onDrop={handleDrop}
        onDragOver={(e) => e.preventDefault()}
        style={{
          border: '2px dashed #ccc',
          borderRadius: '8px',
          padding: '40px',
          textAlign: 'center',
          cursor: 'pointer',
        }}
      >
        {uploading ? (
          <div>
            <p>Uploading... {uploadProgress}%</p>
            <div className="progress-bar">
              <div
                style={{
                  width: `${uploadProgress}%`,
                  height: '4px',
                  backgroundColor: '#007bff',
                  transition: 'width 0.3s ease',
                }}
              />
            </div>
          </div>
        ) : (
          <div>
            <p>Drag and drop a file here, or click to select</p>
            <input
              type="file"
              onChange={(e) => {
                const file = e.target.files?.[0];
                if (file) handleFileUpload(file);
              }}
              style={{ display: 'none' }}
              id="file-input"
            />
            <label
              htmlFor="file-input"
              style={{
                padding: '8px 16px',
                backgroundColor: '#007bff',
                color: 'white',
                borderRadius: '4px',
                cursor: 'pointer',
              }}
            >
              Choose File
            </label>
          </div>
        )}
      </div>
    </div>
  );
};

export default FileUpload;
Pre-signed URLs for File Downloads
For private files, you need to generate pre-signed download URLs to provide temporary access.
Creating Pre-signed Download URLs
// pages/api/download-url.ts
import { NextApiRequest, NextApiResponse } from 'next';
import { GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import s3Client from '../../lib/s3-client';

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  if (req.method !== 'POST') {
    return res.status(405).json({ message: 'Method not allowed' });
  }

  try {
    const { key, expiresIn = 3600 } = req.body; // Default 1 hour expiration

    const command = new GetObjectCommand({
      Bucket: process.env.AWS_S3_BUCKET_NAME,
      Key: key,
      // Ask S3 to return a Content-Disposition header so browsers save
      // the file instead of rendering it inline
      ResponseContentDisposition: `attachment; filename="${key.split('/').pop()}"`,
    });

    const downloadUrl = await getSignedUrl(s3Client, command, {
      expiresIn,
    });

    res.status(200).json({ downloadUrl });
  } catch (error) {
    console.error('Error generating download URL:', error);
    res.status(500).json({ message: 'Error generating download URL' });
  }
}
File Download Component
// components/FileDownload.tsx
import React, { useState } from 'react';

interface FileDownloadProps {
  fileKey: string;
  fileName: string;
  isPublic?: boolean;
}

const FileDownload: React.FC<FileDownloadProps> = ({
  fileKey,
  fileName,
  isPublic = false,
}) => {
  const [downloading, setDownloading] = useState(false);

  const handleDownload = async () => {
    setDownloading(true);

    try {
      if (isPublic) {
        // For public files, use the direct bucket URL
        const publicUrl = `https://${process.env.NEXT_PUBLIC_S3_PUBLIC_BUCKET}.s3.${process.env.NEXT_PUBLIC_AWS_REGION}.amazonaws.com/${fileKey}`;
        window.open(publicUrl, '_blank');
      } else {
        // For private files, request a pre-signed URL from our API route
        const response = await fetch('/api/download-url', {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
          },
          body: JSON.stringify({ key: fileKey }),
        });
        const { downloadUrl } = await response.json();

        // Create a temporary link and trigger the download. Note that the
        // download attribute is ignored for cross-origin URLs, so saving
        // (rather than opening) relies on S3 returning a Content-Disposition
        // attachment header.
        const link = document.createElement('a');
        link.href = downloadUrl;
        link.download = fileName;
        document.body.appendChild(link);
        link.click();
        document.body.removeChild(link);
      }
    } catch (error) {
      console.error('Download error:', error);
      alert('Download failed. Please try again.');
    } finally {
      setDownloading(false);
    }
  };

  return (
    <button
      onClick={handleDownload}
      disabled={downloading}
      style={{
        padding: '8px 16px',
        backgroundColor: downloading ? '#6c757d' : '#28a745',
        color: 'white',
        border: 'none',
        borderRadius: '4px',
        cursor: downloading ? 'not-allowed' : 'pointer',
      }}
    >
      {downloading ? 'Downloading...' : 'Download File'}
    </button>
  );
};

export default FileDownload;
Security Best Practices
1. Bucket Policies and IAM
Always follow the principle of least privilege:
// Example IAM policy for S3 operations
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::your-bucket-name"
    }
  ]
}
2. Pre-signed URL Expiration
Set appropriate expiration times based on your use case:
| Use Case | Recommended Expiration |
|---|---|
| File uploads | 5-15 minutes |
| Document downloads | 1-24 hours |
| Temporary sharing | 1-7 days |
| Long-term access | Use IAM roles instead |
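The table above can be encoded as a small helper so API routes don’t hard-code expiration values. The use-case names and specific numbers here are just one possible convention, not an AWS requirement:

```typescript
// Suggested pre-signed URL lifetimes (in seconds) per use case.
const EXPIRATION_SECONDS = {
  upload: 10 * 60,         // file uploads: 10 minutes
  download: 60 * 60,       // document downloads: 1 hour
  share: 3 * 24 * 60 * 60, // temporary sharing: 3 days
} as const;

type UrlUseCase = keyof typeof EXPIRATION_SECONDS;

// Look up the lifetime for a use case, e.g. to pass as expiresIn to getSignedUrl.
const expiresInFor = (useCase: UrlUseCase): number => EXPIRATION_SECONDS[useCase];

console.log(expiresInFor('download')); // 3600
```

Centralizing the values also makes it easy to tighten them later without hunting through every route.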
3. Content Type Validation
Always validate file types on both client and server:
const validateFileType = (file: File, allowedTypes: string[]): boolean => {
  return allowedTypes.includes(file.type);
};

const ALLOWED_IMAGE_TYPES = ['image/jpeg', 'image/png', 'image/gif'];
const ALLOWED_DOCUMENT_TYPES = ['application/pdf', 'text/plain'];
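On the server, the same check should run inside the pre-signed URL route before any URL is issued, so a forged request can’t obtain a signature for a disallowed type. The function below is a sketch; `validateUploadRequest`, the type list, and the size cap are illustrative names and values, not part of the AWS SDK:

```typescript
const ALLOWED_TYPES = ['image/jpeg', 'image/png', 'image/gif', 'application/pdf'];
const MAX_SIZE_BYTES = 10 * 1024 * 1024; // hypothetical 10 MB cap

// Returns an error message, or null if the upload request looks valid.
const validateUploadRequest = (
  fileType: string,
  fileSize: number
): string | null => {
  if (!ALLOWED_TYPES.includes(fileType)) {
    return `File type ${fileType} is not allowed`;
  }
  if (fileSize > MAX_SIZE_BYTES) {
    return 'File exceeds the maximum allowed size';
  }
  return null;
};

// In the upload-url handler, before calling getSignedUrl:
//   const error = validateUploadRequest(fileType, fileSize);
//   if (error) return res.status(400).json({ message: error });
```

Note that the client would also need to send the file size in the request body for the size check to apply server-side.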
Performance Optimization
1. Multipart Upload for Large Files
For files larger than 100MB, consider implementing multipart upload:
// Example multipart upload initiation
import { CreateMultipartUploadCommand } from '@aws-sdk/client-s3';
import s3Client from '../../lib/s3-client';

const initiateMultipartUpload = async (bucketName: string, key: string) => {
  const command = new CreateMultipartUploadCommand({
    Bucket: bucketName,
    Key: key,
  });
  return await s3Client.send(command);
};
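Initiation is only the first of three steps (initiate, upload parts, complete). As a sketch of the part-splitting step, here is the pure range math for slicing a file into parts, assuming S3’s 5 MB minimum part size; each range would then be sent with an `UploadPartCommand` and the upload finished with `CompleteMultipartUploadCommand`:

```typescript
const MIN_PART_SIZE = 5 * 1024 * 1024; // S3 minimum part size (except the last part)

interface PartRange {
  partNumber: number; // S3 part numbers start at 1
  start: number;      // inclusive byte offset
  end: number;        // exclusive byte offset
}

// Split a file of fileSize bytes into sequential part ranges.
const splitIntoParts = (fileSize: number, partSize = MIN_PART_SIZE): PartRange[] => {
  const parts: PartRange[] = [];
  for (let start = 0, n = 1; start < fileSize; start += partSize, n++) {
    parts.push({ partNumber: n, start, end: Math.min(start + partSize, fileSize) });
  }
  return parts;
};
```

In the browser each range maps to `file.slice(start, end)`. If you would rather not manage parts by hand, the `Upload` helper in `@aws-sdk/lib-storage` automates the whole multipart flow.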
2. CloudFront Integration
For frequently accessed files, integrate with Amazon CloudFront for better performance and cost optimization.
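A common pattern is to keep the bucket private, serve it through a distribution, and build object URLs from the distribution’s domain rather than the bucket’s. The helper below only assembles the URL; `CLOUDFRONT_DOMAIN` is a placeholder for your distribution’s domain name:

```typescript
const CLOUDFRONT_DOMAIN = 'd1234abcd.cloudfront.net'; // placeholder distribution domain

// Build a CDN URL for an S3 object key, encoding each path segment.
const cdnUrlFor = (key: string): string => {
  const encodedKey = key.split('/').map(encodeURIComponent).join('/');
  return `https://${CLOUDFRONT_DOMAIN}/${encodedKey}`;
};

console.log(cdnUrlFor('uploads/photo 1.png'));
// https://d1234abcd.cloudfront.net/uploads/photo%201.png
```

The distribution itself would restrict direct bucket access (for example with origin access control), so clients can only reach objects through CloudFront’s cache.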
Monitoring and Logging
Use Amazon CloudWatch to track the operational health of your AWS resources and configure billing alerts for estimated charges that reach a user-defined threshold. Use AWS CloudTrail to track and report on bucket- and object-level activities.
Setting up Basic Monitoring
// Add CloudWatch metrics to your API routes
import { CloudWatchClient, PutMetricDataCommand } from '@aws-sdk/client-cloudwatch';

const cloudWatch = new CloudWatchClient({ region: process.env.AWS_REGION });

const recordMetric = async (metricName: string, value: number) => {
  const params = {
    Namespace: 'YourApp/S3Operations',
    MetricData: [
      {
        MetricName: metricName,
        Value: value,
        Unit: 'Count',
        Timestamp: new Date(),
      },
    ],
  };

  try {
    await cloudWatch.send(new PutMetricDataCommand(params));
  } catch (error) {
    console.error('Error recording metric:', error);
  }
};
Conclusion
AWS S3 provides a robust, scalable solution for object storage needs. By implementing proper security measures, using pre-signed URLs for client-side operations, and following best practices, you can build secure and efficient file management systems. The combination of public and private buckets allows for flexible access control, while Next.js integration enables seamless frontend-backend communication.
Remember to always test your implementation thoroughly, monitor costs and usage, and keep security as a top priority when handling user data and file operations.