Step 1: Create the Storage Bucket
The storage bucket is where files will be uploaded. It’s also the source of events that trigger our function.
What is a Storage Bucket?
A bucket is a container for files. Think of it like a folder in the cloud:
- You can upload files to it
- Files have names and paths
- You can organize files in folders
- Access can be public or private
Bucket Properties
When creating a bucket, you’ll configure:
Region - Where the bucket is stored geographically
- Choose a region close to you (lower latency)
- Or close to your users (better performance)
- Common choices: US East, EU West, Asia Pacific
Access Level - Who can access files
- Private: Only you (and your functions) can access
- Public: Anyone with the URL can access
- For this tutorial: Keep it private
Naming - Bucket names must be unique globally
- AWS S3: Globally unique across all AWS accounts
- Azure: Storage account names are globally unique across Azure; container names only need to be unique within the account
- GCP: Globally unique across all GCP projects
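Because a bad name only fails at creation time, it can help to check a candidate name locally first. Below is a minimal sketch of S3's core naming rules (3-63 characters; lowercase letters, digits, dots, and hyphens; must start and end with a letter or digit). Azure and GCP have similar but not identical rules, and `validate_bucket_name` is just an illustrative helper, not a provider tool:

```shell
#!/bin/sh
# Check a candidate name against the core S3 bucket-naming rules.
# (Azure and GCP rules are similar but not identical -- check their docs.)
validate_bucket_name() {
  name="$1"
  # 3-63 chars; lowercase letters, digits, dots, hyphens;
  # must start and end with a letter or digit
  echo "$name" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$'
}

if validate_bucket_name "my-notifications-bucket-2025"; then
  echo "valid"
fi
if ! validate_bucket_name "My_Bucket"; then
  echo "invalid: My_Bucket"
fi
```

Note this sketch doesn't catch every provider rule (for example, S3 also rejects names formatted like IP addresses), but it covers the mistakes people make most often.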
Create Bucket: AWS (S3)
Using AWS Console
- Go to AWS Console
- Navigate to S3 service
- Click Create bucket
- Enter bucket name (e.g., my-notifications-bucket-2025)
- Select region (e.g., us-east-1)
- Block Public Access: Keep enabled (private bucket)
- Click Create bucket
Using AWS CLI
# Install AWS CLI (if not installed)
# macOS: brew install awscli
# Linux: use the installer from aws.amazon.com/cli (pip install awscli gives the older v1)
# Windows: Download from aws.amazon.com/cli
# Configure credentials
aws configure
# Create bucket
aws s3 mb s3://my-notifications-bucket-2025 --region us-east-1
# Verify bucket created
aws s3 ls
Using Azure CLI
# Install Azure CLI (if not installed)
# macOS: brew install azure-cli
# Linux: curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
# Windows: Download from azure.microsoft.com/cli
# Login
az login
# Create resource group (if needed)
az group create --name notifications-rg --location eastus
# Create storage account
az storage account create \
--name mynotifications2025 \
--resource-group notifications-rg \
--location eastus \
--sku Standard_LRS
# Create blob container
az storage container create \
--name uploads \
--account-name mynotifications2025 \
--auth-mode login
Using gcloud CLI
# Install gcloud CLI (if not installed)
# macOS: brew install google-cloud-sdk
# Linux: curl https://sdk.cloud.google.com | bash
# Windows: Download from cloud.google.com/sdk
# Initialize and login
gcloud init
gcloud auth login
# Create bucket
gsutil mb -p my-project-id -c STANDARD -l us-east1 gs://my-notifications-bucket-2025
# Verify bucket created
gsutil ls
Create Bucket: Azure (Blob Storage)
Using Azure Portal
- Go to Azure Portal
- Click Create a resource → Storage account
- Fill in details:
- Subscription: Your subscription
- Resource group: Create new or use existing
- Storage account name: mynotifications2025 (lowercase letters and numbers only)
- Region: Choose the one closest to you
- Performance: Standard
- Redundancy: LRS (Locally Redundant Storage)
- Click Review + create → Create
Create Container
After storage account is created:
- Go to your storage account
- Click Containers in left menu
- Click + Container
- Name: uploads
- Public access level: Private
- Click Create
Create Bucket: GCP (Cloud Storage)
Using GCP Console
- Go to GCP Console
- Navigate to Cloud Storage → Buckets
- Click Create bucket
- Enter bucket name (e.g., my-notifications-bucket-2025)
- Choose location type: Region
- Select region (e.g., us-east1)
- Storage class: Standard
- Access control: Uniform
- Public access: Prevent public access
- Click Create
Bucket Structure (Optional)
You can organize files in folders:
bucket-name/
├── incoming/ # New uploads
├── processed/ # After processing
└── archive/ # Old files
For this tutorial, we’ll keep it simple and use the root of the bucket.
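It's worth knowing that "folders" in object storage are not real directories: the slash is simply part of the object's key, and tools group keys by prefix for display. When your function later receives an upload event, the key arrives as one string, so routing on a prefix is plain string handling. A sketch (the key below is a made-up example, not a real object):

```shell
#!/bin/sh
# Object stores have no real directories: "incoming/2025/report.pdf" is
# one flat key, and the "incoming/" part is just a prefix that tools
# display as a folder. A function can route on it with string operations.
key="incoming/2025/report.pdf"   # hypothetical object key from an event

prefix="${key%%/*}"              # text before the first slash
filename="${key##*/}"            # text after the last slash

echo "prefix: $prefix"
echo "filename: $filename"
```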
Verify Bucket Creation
Test that your bucket works (commands for each provider are shown below):
AWS
# List buckets
aws s3 ls
# Upload a test file
echo "Hello, World!" > test.txt
aws s3 cp test.txt s3://my-notifications-bucket-2025/
# List files in bucket
aws s3 ls s3://my-notifications-bucket-2025/
# Delete test file
aws s3 rm s3://my-notifications-bucket-2025/test.txt
Azure
# List containers
az storage container list \
--account-name mynotifications2025 \
--auth-mode login
# Upload a test file
echo "Hello, World!" > test.txt
az storage blob upload \
--account-name mynotifications2025 \
--container-name uploads \
--name test.txt \
--file test.txt \
--auth-mode login
# List files
az storage blob list \
--account-name mynotifications2025 \
--container-name uploads \
--auth-mode login
# Delete test file
az storage blob delete \
--account-name mynotifications2025 \
--container-name uploads \
--name test.txt \
--auth-mode login
GCP
# List buckets
gsutil ls
# Upload a test file
echo "Hello, World!" > test.txt
gsutil cp test.txt gs://my-notifications-bucket-2025/
# List files in bucket
gsutil ls gs://my-notifications-bucket-2025/
# Delete test file
gsutil rm gs://my-notifications-bucket-2025/test.txt
Important Settings
Keep It Private
For this tutorial, keep your bucket private:
- Only you (and your functions) can access files
- Prevents unauthorized access
- More secure for real applications
Region Selection
Choose a region based on:
- Your location: Lower latency for you
- Your users: Better performance for them
- Cost: Some regions are cheaper
- Compliance: Some data must stay in specific regions
Naming Best Practices
Good bucket names:
- my-notifications-bucket-2025
- company-uploads-prod
- project-name-storage
Avoid:
- Generic names like bucket1 (might already be taken)
- Special characters (some providers don’t allow them)
- Uppercase letters (some providers require lowercase)
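One common way to get a unique, rule-compliant name is to start from a descriptive base and append a date plus a short random suffix. A sketch, using a hypothetical base name:

```shell
#!/bin/sh
# Build a likely-unique bucket name from a descriptive base plus a
# date and a short random suffix. The base name here is an example.
base="my-notifications-bucket"
suffix=$(date +%Y%m%d)
rand=$(awk 'BEGIN { srand(); printf "%04d", int(rand() * 10000) }')

bucket_name="${base}-${suffix}-${rand}"
echo "$bucket_name"   # e.g. my-notifications-bucket-20250101-0042
```

Keep an eye on total length: S3 caps bucket names at 63 characters, so a long base name plus suffixes can push you over the limit.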
Checklist
Before moving to the next page:
- Bucket created in your cloud provider
- Region selected (note which one you chose)
- Access level reviewed (should be private)
- Bucket name saved (you’ll need it later)
- Test upload successful (verified bucket works)
What’s Next?
Now that you have a storage bucket, let’s write the serverless function. In the next page, you’ll create code that receives events and processes file information.