# Configure Loki with IRSA on Amazon EKS

## What you'll accomplish

You'll configure Loki to authenticate to S3 using IAM Roles for Service Accounts (IRSA), replacing static access keys with temporary credentials that the IRSA webhook injects automatically. Loki accesses both of its S3 buckets (chunks and admin) without storing long-lived keys in your cluster.
## Prerequisites

- UDS CLI installed
- UDS Registry account created and authenticated locally with a read token
- Access to an EKS cluster with UDS Core deployed
- An OIDC (OpenID Connect) identity provider configured on the cluster
- Permission to create IAM roles and policies in AWS
- Two S3 buckets for Loki: one for chunks (log data) and one for admin (compactor state and internal metadata)
- OpenTofu installed
## Before you begin

IRSA works by annotating the Loki service account (`loki/loki`) with an IAM role ARN. The account is named `loki` because UDS Core sets `fullnameOverride: loki` in the Helm values; if you customize that override, update the trust policy `sub` condition to match. The Amazon EKS OIDC webhook automatically injects temporary credentials, which the Loki S3 client uses in place of static access keys.

Loki requires access to two buckets: one for log chunk data and one for admin data (compactor state and internal metadata such as delete markers and tenant configuration). The same IAM role covers both buckets.
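To make the injection mechanism concrete, the webhook mutation looks roughly like the fragment below on the Loki pods. This is illustrative only (the role ARN is an example value); the environment variables and projected token volume are standard EKS pod identity webhook behavior, and you never write this YAML yourself:

```yaml
# Illustrative excerpt of a Loki pod after the EKS pod identity webhook
# has mutated it — not something you author or apply manually.
spec:
  containers:
    - name: loki
      env:
        # Injected from the service account's role-arn annotation
        - name: AWS_ROLE_ARN
          value: arn:aws:iam::123456789012:role/loki-s3-role
        # Projected OIDC token the AWS SDK exchanges for temporary credentials
        - name: AWS_WEB_IDENTITY_TOKEN_FILE
          value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
      volumeMounts:
        - name: aws-iam-token
          mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
          readOnly: true
  volumes:
    - name: aws-iam-token
      projected:
        sources:
          - serviceAccountToken:
              audience: sts.amazonaws.com
              expirationSeconds: 86400
              path: token
```

The AWS SDK inside Loki picks up these two environment variables through its default credential chain, which is why clearing the static keys is all the configuration Loki itself needs.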
## Step 1: Create an IAM policy for Loki S3 access

The following examples use OpenTofu to provision the required IAM resources. They assume an `aws` provider is already configured in your workspace. Create the S3 access policy, which includes `s3:DeleteObject` so Loki's compactor can expire old chunks according to the configured retention period. Save it as `loki-s3-policy.tf`:

```hcl
# Loki S3 storage policy: covers both chunks and admin buckets
# Reference: https://grafana.com/docs/loki/latest/setup/install/helm/deployment-guides/aws/
data "aws_iam_policy_document" "loki_s3" {
  statement {
    effect = "Allow"
    actions = [
      "s3:PutObject",
      "s3:GetObject",
      "s3:DeleteObject",
      "s3:AbortMultipartUpload",
      "s3:ListMultipartUploadParts",
    ]
    resources = [
      "arn:aws:s3:::YOUR_CHUNKS_BUCKET/*",
      "arn:aws:s3:::YOUR_ADMIN_BUCKET/*",
    ]
  }

  statement {
    effect = "Allow"
    actions = [
      "s3:ListBucket",
      "s3:GetBucketLocation",
      "s3:ListBucketMultipartUploads",
    ]
    resources = [
      "arn:aws:s3:::YOUR_CHUNKS_BUCKET",
      "arn:aws:s3:::YOUR_ADMIN_BUCKET",
    ]
  }
}

resource "aws_iam_policy" "loki_s3" {
  name   = "loki-s3-policy"
  policy = data.aws_iam_policy_document.loki_s3.json
}
```
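For reference, the policy document this HCL renders is roughly the following JSON, which is what you'd see attached to the policy in the IAM console (placeholders left in place):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_CHUNKS_BUCKET/*",
        "arn:aws:s3:::YOUR_ADMIN_BUCKET/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_CHUNKS_BUCKET",
        "arn:aws:s3:::YOUR_ADMIN_BUCKET"
      ]
    }
  ]
}
```

Note the split: object-level actions apply to the `/*` ARNs, while bucket-level actions apply to the bare bucket ARNs. Both statements are required.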
## Step 2: Create an IAM role with an IRSA trust policy

Create a role that the `loki` service account in the `loki` namespace can assume. Save the following as `loki-irsa-role.tf`:

```hcl
# The OIDC provider URL for your EKS cluster, without the https:// prefix.
# Example: oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890
variable "oidc_provider" {
  description = "EKS cluster OIDC provider URL (without https://)"
}

# Look up the OIDC provider already registered in IAM for this cluster
data "aws_iam_openid_connect_provider" "eks" {
  url = "https://${var.oidc_provider}"
}

data "aws_iam_policy_document" "loki_irsa_trust" {
  statement {
    effect = "Allow"

    principals {
      type        = "Federated"
      identifiers = [data.aws_iam_openid_connect_provider.eks.arn]
    }

    actions = ["sts:AssumeRoleWithWebIdentity"]

    condition {
      test     = "StringEquals"
      variable = "${var.oidc_provider}:sub"
      values   = ["system:serviceaccount:loki:loki"]
    }

    condition {
      test     = "StringEquals"
      variable = "${var.oidc_provider}:aud"
      values   = ["sts.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "loki" {
  name               = "loki-s3-role"
  assume_role_policy = data.aws_iam_policy_document.loki_irsa_trust.json
}

resource "aws_iam_role_policy_attachment" "loki_s3" {
  role       = aws_iam_role.loki.name
  policy_arn = aws_iam_policy.loki_s3.arn
}
```

Place both `.tf` files in the same directory and replace the bucket name placeholders. Supply `oidc_provider` via a `-var` flag or a `terraform.tfvars` file, then apply:

```shell
tofu init
tofu plan
tofu apply
```
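The trust policy this produces looks roughly like the JSON below (with an example account ID and OIDC provider URL substituted). Comparing it against the role in the IAM console is a quick way to spot a mismatched `sub` or `aud`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890:sub": "system:serviceaccount:loki:loki",
          "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```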
## Step 3: Configure your bundle for IRSA

Add the overrides below to your bundle. The `values` entries clear the access key fields that UDS Core sets by default (populated with MinIO credentials) and remove the MinIO endpoint so Loki derives the correct endpoint from the AWS region. The `variables` entries supply your bucket names, region, and the IRSA role ARN.

`uds-bundle.yaml`:

```yaml
packages:
  - name: core
    repository: registry.defenseunicorns.com/public/core
    ref: x.x.x-upstream
    overrides:
      loki:
        loki:
          variables:
            # S3 bucket for Loki log chunk data
            - name: LOKI_CHUNKS_BUCKET
              path: loki.storage.bucketNames.chunks
            # S3 bucket for Loki admin data (compactor state and internal metadata)
            - name: LOKI_ADMIN_BUCKET
              path: loki.storage.bucketNames.admin
            # AWS region for both S3 buckets
            - name: LOKI_S3_REGION
              path: loki.storage.s3.region
            # IRSA role ARN annotated on the Loki service account
            - name: LOKI_IRSA_ROLE_ARN
              path: serviceAccount.annotations.eks\.amazonaws\.com/role-arn
          values:
            # Set S3 as the storage backend type
            - path: loki.storage.type
              value: "s3"
            # Clear the MinIO endpoint so Loki derives the endpoint from the AWS region
            - path: loki.storage.s3.endpoint
              value: ""
            # Leave access keys empty; Loki will use the IRSA credential chain instead
            - path: loki.storage.s3.accessKeyId
              value: ""
            - path: loki.storage.s3.secretAccessKey
              value: ""
            # Use virtual-hosted-style S3 URLs (required for AWS S3; path style is for MinIO)
            - path: loki.storage.s3.s3ForcePathStyle
              value: false
```

Supply the bucket names, region, and role ARN in your `uds-config.yaml`:

```yaml
variables:
  core:
    LOKI_CHUNKS_BUCKET: "your-loki-chunks-bucket"
    LOKI_ADMIN_BUCKET: "your-loki-admin-bucket"
    LOKI_S3_REGION: "us-east-1"
    LOKI_IRSA_ROLE_ARN: "arn:aws:iam::123456789012:role/loki-s3-role"
```
## Step 4: Create and deploy your bundle

Build the bundle artifact and deploy it to your cluster:

```shell
uds create <path-to-bundle-dir>
uds deploy uds-bundle-<name>-<arch>-<version>.tar.zst
```
## Verification

Confirm the Loki pods are running and can reach your S3 buckets:

```shell
# Verify the IRSA annotation is present on the Loki service account
uds zarf tools kubectl get sa -n loki loki -o jsonpath='{.metadata.annotations}' | grep eks.amazonaws.com

# Confirm access keys are empty in the active Loki configuration (should return no output)
uds zarf tools kubectl get secret -n loki loki -o jsonpath='{.data.config\.yaml}' | base64 -d | grep access_key

# Check that all Loki tier pods are running (write, read, backend)
uds zarf tools kubectl get pods -n loki -l app.kubernetes.io/name=loki

# Check Loki write-tier logs for S3 authentication or connection errors
uds zarf tools kubectl logs -n loki -l app.kubernetes.io/component=write --tail=30
```

Success criteria:

- The `loki` service account in the `loki` namespace has an `eks.amazonaws.com/role-arn` annotation matching your role ARN
- The `access_key` check returns no output (access keys are empty, not the MinIO defaults)
- All Loki write, read, and backend pods are `Running`
- Loki write logs contain no `AccessDenied` or credential errors
- Grafana can query recent logs from the Loki data source (Explore → Loki → run `{namespace="vector"}`)
## Troubleshooting

### Problem: Loki pods in CrashLoopBackOff

Symptoms: Loki write or backend pods restart repeatedly; logs show S3 authentication or connection errors.

Solution: Verify the IRSA annotation is on the service account and that the role ARN is correct:

```shell
uds zarf tools kubectl get sa -n loki loki -o yaml | grep eks.amazonaws.com
```

If the annotation is missing, confirm `LOKI_IRSA_ROLE_ARN` is set in `uds-config.yaml` and redeploy the bundle.

If the annotation is present but S3 errors continue, check that the `loki.storage.s3.endpoint` override is set to `""`. A non-empty endpoint (such as the default MinIO URL) overrides the AWS regional endpoint and prevents Loki from reaching S3.
### Problem: Access denied to S3

Symptoms: Loki logs show `AccessDenied` or `403 Forbidden` errors for S3 operations.

Solution: Verify the IAM role trust policy's `sub` condition exactly matches `system:serviceaccount:loki:loki`, the `aud` condition is set to `sts.amazonaws.com`, and the OIDC provider ARN in the `Federated` principal matches your EKS cluster.

Confirm the S3 policy covers both the chunks and admin bucket ARNs, including the `/*` suffix on the object-level statements. A missing suffix limits the policy to bucket-level actions only and blocks object reads and writes.
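If you have the trust policy JSON locally (for example, from `aws iam get-role`), the `sub` and `aud` checks above can be automated. The sketch below is a hypothetical helper, not part of UDS or any AWS SDK; it only inspects the JSON structure shown earlier:

```python
def check_trust_policy(doc: dict, oidc_provider: str,
                       namespace: str = "loki", sa: str = "loki") -> list[str]:
    """Return a list of problems found in an IRSA trust policy document.

    `doc` is the parsed trust policy JSON; `oidc_provider` is the cluster's
    OIDC provider URL without the https:// prefix.
    """
    problems = []
    expected_sub = f"system:serviceaccount:{namespace}:{sa}"
    for stmt in doc.get("Statement", []):
        # Only inspect statements that allow the web-identity assume-role action
        if stmt.get("Action") not in ("sts:AssumeRoleWithWebIdentity",
                                      ["sts:AssumeRoleWithWebIdentity"]):
            continue
        cond = stmt.get("Condition", {}).get("StringEquals", {})
        sub = cond.get(f"{oidc_provider}:sub")
        aud = cond.get(f"{oidc_provider}:aud")
        if sub != expected_sub:
            problems.append(f"sub condition is {sub!r}, expected {expected_sub!r}")
        if aud != "sts.amazonaws.com":
            problems.append(f"aud condition is {aud!r}, expected 'sts.amazonaws.com'")
    return problems


# Example: a policy whose sub points at the wrong namespace
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {"StringEquals": {
            "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE:sub":
                "system:serviceaccount:default:loki",
            "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE:aud": "sts.amazonaws.com",
        }},
    }],
}
print(check_trust_policy(policy, "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE"))
```

A wrong namespace in `sub` is the single most common IRSA misconfiguration, and this kind of offline check catches it without waiting for Loki to log `AccessDenied`.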
### Problem: Loki pods start but write no data to S3

Symptoms: Loki write or backend pods log `InvalidAccessKeyId` errors, or Loki appears healthy but log queries return no data.

Solution: Verify that `loki.storage.s3.accessKeyId` and `loki.storage.s3.secretAccessKey` are explicitly set to `""` in the bundle values. If the Zarf defaults (`uds` / `uds-secret`) are present in the Loki config, the S3 client uses those credentials directly instead of the IRSA credential chain, causing `InvalidAccessKeyId` errors.

Check the active Loki configuration Secret to confirm the access keys are empty:

```shell
uds zarf tools kubectl get secret -n loki loki -o jsonpath='{.data.config\.yaml}' | base64 -d | grep access_key
```

## Related documentation
- Grafana Loki: AWS Deployment Guide - provider-specific Loki configuration for AWS S3
- AWS: IAM Roles for Service Accounts - IRSA setup and OIDC provider configuration
- Configure HA for Logging - Loki S3 storage, replica tuning, and Vector resource configuration
- Logging concepts - background on the Vector → Loki → Grafana pipeline in UDS Core