Set up Jenkins to access resources in another AWS account using one of these 4 assume role methods.

At some point most Jenkins jobs are going to need to deploy the application they’ve built. If you’re following AWS best practices, you’ll have a different account for your production and development environments. This creates separation between environments, making it less likely something will accidentally get broken in production.

How then can Jenkins deploy your application into the other AWS account?

In this article we’ll explore 4 of the most popular answers to this question, all of which rely on Jenkins assuming a role that exists in the other AWS account.

  1. Assuming a role within a Jenkins pipeline
  2. Assuming a role in a freestyle Jenkins job
  3. Assuming a role using the AWS SDK from within your application’s build
  4. Assuming a role in a Jenkins instance deployed outside of AWS, using Jenkins credentials

If you’re looking for a step-by-step guide for any of these solutions, you’re in the right place.

Why would Jenkins need to assume a role in AWS?

A common setup is that you’re running Jenkins inside one AWS account and you want to deploy your production services into another. Let’s call the account where Jenkins is running development, as this is the same account where you’re building your service and doing other development related activities. You might even deploy a version of your service in this account for the purposes of testing.

Cross-account deployment

Deploying your service into the same AWS account where Jenkins lives is straightforward. Remember that Jenkins running in AWS will have an IAM (Identity & Access Management) role assigned to it. However you’re running Jenkins in AWS (EC2, ECS, or EKS), you can assign the role when you create the AWS resource.

You just need to modify the IAM role Jenkins is running under to have permissions to deploy your service. Depending on how your service is deployed, this might include adding permissions to:

  • create an EC2 instance
  • run an ECS task
  • run an EKS pod

When it comes to deploying the service into production though, this isn’t going to help. We need a way of configuring Jenkins to allow it to deploy the service into a completely different AWS account.

The way we do that is to configure the Jenkins role to allow Jenkins to assume another role in the production account.

How assuming roles works in AWS

Given the development and production multi-account setup described above, we’ll look into how assuming a role works in general. We have the following players:

  • an AWS resource in development that needs access to production: this could be an EC2 instance or ECS task running Jenkins. It will already have a role assigned to it from the development account.
  • an AWS resource in production that we only want modified by a specific resource in development: this could also be an EC2 instance or ECS task, where you might want Jenkins to deploy your service.

In terms of IAM roles we have:

  • the development ‘original’ role assigned to the AWS resource: for us this is the Jenkins role assigned to Jenkins
  • the role to be assumed: this is the magic sauce which is going to allow Jenkins deployed in development to make changes in production

Cross-account roles

Later on we’ll run through a step-by-step example of how to set this up. For now, there are 3 important points:

  1. Access to the production role is granted to the development account through a trust relationship
  2. The production role is given whatever additional permissions are required to access production resources
  3. The original role in development is given permission to assume the production role

Let’s run through these in more detail:

1) Provide cross-account access with trust relationships

In the world of IAM a trust relationship defines what AWS resources can use the role. This might be a specific AWS service, user, or importantly for us, account.

Here’s how that looks in the AWS Console for an IAM role:

AWS trust relationships

This trust relationship means that the specified account can use this role.

By clicking the Edit trust relationship button, we can see the actual JSON policy behind this trust relationship.

  "Version": "2012-10-17",
  "Statement": [
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::504127819189:root"
      "Action": "sts:AssumeRole",
      "Condition": {}
  • the Principal defines what is allowed access to this role. In our case, that’s a specific AWS account.
  • the Action is sts:AssumeRole, a permission of the Security Token Service (STS) which will allow resources in the specified account to assume this role

We’ll get into how to create an IAM role with the correct trust relationship in the step-by-step examples later on. For now, just understand that the trust relationship of the production role means access is allowed from the development account.

2) Providing access to production resources

Maybe you’re already familiar with this part? You can attach policies to an IAM role. This means any AWS resource which is assigned the role will be able to perform the actions defined in those policies.

Here’s an example of an IAM role’s permissions. Permissions are defined in policies, and here it’s a predefined policy allowing read-only access to S3.

Example permission policy

We’ll need to attach whatever policies we need to the production role so it can modify whatever AWS resources we need it to. In the case of our service built by Jenkins, this will be deploying the service using EC2, ECS, EKS, or whatever other mechanism.
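For illustration, the JSON behind an S3 read-only policy looks something like this (a sketch; the actual AWS-managed AmazonS3ReadOnlyAccess policy may contain additional statements):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": "*"
        }
    ]
}
```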

3) Allow AWS resources with the development role to assume the production role

The AWS resource in our development account will need to assume the production role in order to access production resources. For us this means the Jenkins role in development will need a policy allowing the sts:AssumeRole action on the production role.

Here’s what that might look like:

    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::345021171020:role/assumed-role-name"
  • the Action is sts:AssumeRole which means we can use the Security Token Service to assume another role
  • the Resource should be the ARN (Amazon Resource Name) of the production role to be assumed
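To make the role ARN format concrete, here’s a small Python sketch (for illustration only, not part of the setup; the function name is hypothetical) that builds a role ARN from an account id and role name:

```python
def role_arn(account_id: str, role_name: str) -> str:
    """Build the ARN for an IAM role, e.g. for use in an sts:AssumeRole policy."""
    return f"arn:aws:iam::{account_id}:role/{role_name}"

print(role_arn("345021171020", "assumed-role-name"))
# → arn:aws:iam::345021171020:role/assumed-role-name
```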

Prerequisites to examples

If you want to set up cross-account access with a pre-existing Jenkins instance and AWS accounts, you can skip forward to the next section. If you’re starting from scratch, follow these steps to set up the prerequisites from a blank AWS account.

Create two AWS accounts

You’ll need two AWS accounts to demonstrate the cross-account access. If you don’t have that, it’s super-easy to set up more accounts.

  1. Login to your main AWS account and go to Services > Organizations
  2. Click Add account then Create account
  3. Fill out the following details:
    • Account name will be Production
    • Email address must be a unique email address from AWS’s point of view. If you use GMail you can reuse the same email account by adding +xyz after your username.
    • IAM role name will be the suggested value OrganizationAccountAccessRole. This means we’ll be able to switch to this account in the AWS Console using that role.
  4. Click Create

Create AWS account

You now have a production account! Repeat the same process to create a development account with an Account name of Development, another Email address, and the same IAM role name.

You should now have three accounts listed in AWS Organizations, including your main account (AWS calls this your management account):

All AWS accounts

Make note of the account ids, as we’ll now use them to switch from the main AWS account to production and development.

Click on your username and select Switch Roles, then click the blue Switch Role button. On this form fill out:

  • Account: the id for your production account
  • Role: the role name of OrganizationAccountAccessRole (this role got created automatically by AWS when our new accounts were created)
  • Display Name: Production
  • Color: pick a colour to help you identify this account. I normally choose red for production accounts.

Switch roles

Click Switch Role and you’ll be switched into your production account, whose name will be shown in red in the top bar.

Production account name

Switch back to your main account by clicking the account name and clicking Back to <your main account user>.

Repeat the process for your development account, using the development account id, a Display Name of Development, and a different colour.

Role history

Back in your main account, you should be able to easily switch accounts using the Role History which is shown when you click your username.

This is really helpful since it means we don’t need to keep logging in and out of accounts using AWS credentials.

If you do want to access your new account using the root user email address provided in the previous steps, you’ll have to run through the password recovery options on the AWS Console login page.

Create a Jenkins instance

Now we’ve got both AWS accounts set up, let’s switch into our Development account and deploy Jenkins. In a previous article Deploy your own production-ready Jenkins in AWS ECS I described exactly how to set up Jenkins. We’re going to use the CloudFormation template from that article.

Launch CloudFormation stack

Apply the template into your own development account by clicking the Launch Stack button. Click Next, and on the Specify stack details page you’ll need to set these parameters:

  1. CertificateArn should be set to a certificate created in Certificate Manager. This is to enable HTTPS access to Jenkins on whatever domain you want it to be available on. See the Certificate & DNS setup section in the article for full details
  2. JenkinsDockerImage should be overridden to be tkgregory/jenkins-with-aws:latest. This Docker image I’ve made available specifically for this article, and it comes with the AWS CLI as well as some extra plugins we’ll need.

Create CloudFormation stack

Keep clicking Next, accepting all the other default options, and on the final page accept the required capabilities then click Create stack. After 5-10 minutes the stack will be created, and you’ll have a Jenkins instance running in your development account.

You can access it using either:

  • the Application Load Balancer DNS name (see Services > EC2 > Load Balancers), prefixed with https://. You’ll have to accept the warning in your browser because the certificate doesn’t match the domain name.
  • your own DNS CNAME record, set up in your domain’s DNS settings to point at the load balancer.

Follow all the setup steps outlined in the Getting started with the Jenkins instance section of the article to get to a point where you can create new Jenkins jobs.

Welcome to Jenkins

Setting up IAM roles for cross-account access

Given what we learnt in the previous section about assuming roles in AWS, we’ll now need to:

  • create a production role with a trust relationship with the development account. We’ll also edit this role’s permissions so it can access the relevant production resources.
  • edit the Jenkins role in development to allow it to assume the production role

For the following examples, we’re going to get Jenkins to access the AWS S3 Simple Storage Service. Listing all available buckets in the production account will show that Jenkins has access to production resources. Once you’ve seen this working, you’ll be able to configure Jenkins to do whatever kind of updates to production you need for your specific use case.

Jenkins cross-account S3 access

Creating a production cross-account role

Log into your production account (if you followed the earlier steps, switch role to Production). Go to Services > IAM > Roles and select Create role.

AWS makes it easy to set up a role with a trust relationship with the development account. Under Select type of trusted entity just choose Another AWS account then enter the Account ID of your Development account. Select Next: Permissions.

Create cross-account production role

On the Attach permissions policies page we’ll add S3 permissions so any resource using this role will be able to list S3 buckets. Search for s3 then select AmazonS3ReadOnlyAccess. Click Next: Tags.

Attach permissions to role

Click Next: Review, then provide a Role name of cross-account-role. Click Create role.

Review cross-account role

Now it’s created, click on the cross-account-role to go to the role details. Select Trust relationships and you’ll see a relationship with your development account has been setup. Awesome! ✅

View trust relationship

Updating the Jenkins role to allow Jenkins to assume the production role

Now we’ve set up a role in our production account that has a trust relationship with our development account, let’s update the Jenkins role to allow Jenkins to assume the production role.

Head into your development account (if you followed the earlier steps, switch role to Development). Go to Services > IAM > Roles and select the role your Jenkins instance is using. If you applied the CloudFormation template for Jenkins, this role is conveniently called jenkins-role.

On the role details page, select Add inline policy (on the right-hand side) which will allow us to attach permissions to this role.

  • search for the Service called STS. STS is the Security Token Service which we’ll use in the following steps to generate temporary credentials for access to the production account.
  • under Actions expand the Write actions and choose AssumeRole.
  • under Resources choose Specific then click Add ARN. On the popup that appears enter your production account id and the role name of cross-account-role. Click Add and the interface will then generate the role ARN automatically.

Click Review policy.

Create Jenkins role policy

On the review page give the policy a name such as assume-production-role and select Create policy.

Jenkins role policy review

Now Jenkins should be able to assume the production role. Very cool! Shall we give it a go?

Assuming a role for Jenkins running in AWS

OK, enough theory & setup! Let’s jump into some step-by-step examples to show the different ways you can get Jenkins to assume a role in another AWS account.

In order to demonstrate access to the production account, we’ll list the production S3 buckets. If you don’t have a bucket created there already, switch to your production account and go to Services > S3 > Create bucket.

Give your bucket a unique Bucket name, accept all the defaults, and hit Create bucket.

S3 bucket list

Whatever buckets you have listed here, we’ll expect them to be output in each of the following methods for assuming a production role in Jenkins.

Jenkins pipeline assume role (method one)

An easy way to integrate assume role functionality into a Jenkins pipeline is to use the AWS Steps plugin. If you applied the CloudFormation from earlier on, the plugin is already installed. Otherwise, grab it by going to Manage Jenkins > Manage Plugins > Available and searching for aws steps.

Install AWS Steps plugin

Select the check box and hit Install without restart.

Now let’s create a new Jenkins pipeline where we’ll do the assume role (or apply this to your own pipeline). Go to the Jenkins home page and select New Item, give it the name pipeline-assume-role, and select the Pipeline type job. Click OK.

Create pipeline

On the following page, scroll down to the pipeline definition script section and enter the following code, making sure to insert your own production account id.

pipeline {
    agent any

    stages {
        stage('List production S3 buckets') {
            steps {
                withAWS(roleAccount:'<your-production-account-id>', role:'cross-account-role') {
                    sh 'aws s3 ls'
                }
            }
        }
    }
}
  • we use the withAWS function provided by the AWS Steps plugin to assume the cross-account-role in our production AWS account. Any code executed within this block will use the assumed role.
  • we call the AWS CLI to list our production S3 buckets. The Jenkins instance provisioned through CloudFormation as described earlier comes with the AWS CLI pre-installed. If you’re running this against your own Jenkins, you can follow these steps to install it.

Click Save then Build Now to run the new pipeline job. Click on the build id then Console Output. You should see something like this.

Jenkins pipeline console output

  • using the withAWS function we have successfully assumed the cross-account-role from the production account
  • the AWS CLI command successfully listed the S3 buckets that exist in the production account. Success! 👍

Jenkins freestyle project assume role (method two)

A Jenkins freestyle project is one which is defined through configuration in the UI and not by a pipeline script. You can execute whatever shell scripts you like, which we’ll make use of to assume the production role using the AWS CLI.

The steps look like this:

  1. Run the aws sts assume-role command through the AWS CLI to get temporary credentials for assuming the production role
  2. Use the output of that command to define environment variables to be used by the AWS CLI
  3. Run any subsequent AWS CLI commands such as aws s3 ls. These will use the temporary credentials.

Start off by clicking New Item on the Jenkins home page. Enter an item name of freestyle-assume-role, select Freestyle project, then click OK.

Create freestyle Jenkins project

Scroll down to the Build section, and click Add build step > Execute shell. Paste the following bash script into the text box, adding your own production account id in line 1.

ASSUME_ROLE_OUTPUT=$(aws sts assume-role --role-arn arn:aws:iam::<your-production-account-id>:role/cross-account-role --role-session-name jenkins)
ASSUME_ROLE_ENVIRONMENT=$(echo $ASSUME_ROLE_OUTPUT | jq -r '.Credentials | .["AWS_ACCESS_KEY_ID"] = .AccessKeyId | .["AWS_SECRET_ACCESS_KEY"] = .SecretAccessKey | .["AWS_SESSION_TOKEN"] = .SessionToken | del(.AccessKeyId, .SecretAccessKey, .SessionToken, .Expiration) | to_entries[] | "export \(.key)=\(.value)"')
eval "$ASSUME_ROLE_ENVIRONMENT"
aws s3 ls

On line 1 we’re using the AWS CLI sts assume-role command to get temporary credentials to use the production role. We pass the command the ARN of the production role, and a session name used to identify our temporary session. The output of this is assigned to a variable, and looks like this:

    "Credentials": {
        "AccessKeyId": "ASIA3QZHEGINQXLJBUP5",
        "SecretAccessKey": "p466yCx3xCjXopdw1sKcSUnmEAsnj6ElvG4EMw0o",
        "SessionToken": "IQoJb3JpZ2luX2VjEML//////////wEaCWV1LXdlc3QtMSJIMEYCIQDx0dDAzOouoOTA6uFjqyrXVFMgLvDi7ZpwoyoBCmgBjAIhAOS/ao7x0NQGiCYO84YrqLXkuVmaP88QWfs8v4RnzmbNKpQCCCsQABoMNzkxOTY2NjU5MDk5Igxn4RHCYbllmMr+uhMq8QGz00yCQHiELqO2g4attCeXFLfp6WvL7QWRH9sK+g/UM21lYj1BsVukUYGyP39VMP7yvoVTHV3jewgaQTWRJrkE10ddh6SLyD2f7+aErpX5YRGwFw0Xd163MXd1D1/YBzaFR+BmsLEcudEJHulGFMlHZQfKyToaT3tbAe/yJOr7KALS5CmQUNDjoCH+w4mF5/ot8LkfWzEZDKI9NaxEk8LVcrY5YgwRuqh3dVPrbc4fxglgdFCG3TU1MgINUNf51qIsO2cJv2cji8IngWTR11/q+s0Ka5gdVnNjsTAaRxb9t4pXr/2+mNRWMPWTGlTE9SdGMJHkmf0FOpwB+o/OinQmOBUYRtu4kSUizo2FU94yZuaoE+ybBXPOCU/XlaKqpJ7Kz5gquKI37VgeRekP46OzMkM731JUZ+UrQsgLfD4luyIg2zvV4mkg9P1WXKMhEf/tq+Z9lLEUb8knJm8tRej9g4sZCDyH5kM9C4qdm+duNTO4e4pqlC5hV9Qger4Tu33dA/nH1qRwUXFYCs2R3xu3uiF9I8x0",
        "Expiration": "2020-11-07T11:08:17+00:00"
    "AssumedRoleUser": {
        "AssumedRoleId": "AROA3QZHEGINRTWAXJGM2:jenkins",
        "Arn": "arn:aws:sts::791966659099:assumed-role/cross-account-role/jenkins"

It’s a JSON document with 3 keys we’ll need to expose as environment variables for the AWS CLI to use: AccessKeyId, SecretAccessKey, and SessionToken. We need to map them to environment variables like this:

  • AccessKeyId ➡️ AWS_ACCESS_KEY_ID
  • SecretAccessKey ➡️ AWS_SECRET_ACCESS_KEY
  • SessionToken ➡️ AWS_SESSION_TOKEN

To do that, on line 2 we use the JSON parsing tool jq, which is also installed on the jenkins-with-aws Docker image used earlier in the Jenkins CloudFormation setup.

  • jq extracts the Credentials key from the JSON
  • it adds new keys for the environment variables, pulling the values from the relevant JSON keys
  • it removes the old keys, so we’re left with the 3 environment variable names and values
  • we prefix the output of each line with export, which is the command that exports an environment variable to your local shell

On line 3 we execute the jq output using eval. This sets the environment variables in our shell.

Lastly, on line 4 we execute the aws s3 ls command which should print out the list of buckets in our production account.
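If the jq incantation is hard to follow, here’s a hypothetical Python equivalent (for illustration only, not used by the job) that performs the same mapping from the assume-role output to shell export lines:

```python
import json

def to_exports(assume_role_output: str) -> str:
    """Map the Credentials keys from `aws sts assume-role` output to the
    environment variable names the AWS CLI expects, as shell export lines."""
    credentials = json.loads(assume_role_output)["Credentials"]
    mapping = {
        "AWS_ACCESS_KEY_ID": credentials["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": credentials["SecretAccessKey"],
        "AWS_SESSION_TOKEN": credentials["SessionToken"],
    }
    return "\n".join(f"export {key}={value}" for key, value in mapping.items())

# A trimmed-down example of the assume-role output shown above
example = '{"Credentials": {"AccessKeyId": "AKIA123", "SecretAccessKey": "secret", "SessionToken": "token", "Expiration": "2020-11-07T11:08:17+00:00"}}'
print(to_exports(example))
```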

Shall we try it out? OK, go on then. Hit Save, then Build Now, and you’ll see Console Output like this.

Jenkins freestyle project output

There’s a lot going on here, but importantly at the end you’ll see your production bucket list has been printed out as expected! 👏

If you look at the rest of the output you can follow along with each stage of the script, as described above.

AWS credentials security: you may have noticed that the above image contains all my AWS security credentials. By the time you read this, those credentials will have expired, since credentials generated like this through STS expire by default after 1 hour. To prevent Jenkins including credentials in the console output, add set +x to the top of your script.
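To make the expiry concrete, here’s a hypothetical Python helper (not part of the Jenkins setup) that checks an STS Expiration timestamp:

```python
from datetime import datetime, timezone

def is_expired(expiration_iso: str, now: datetime) -> bool:
    """Return True if STS temporary credentials are past their Expiration timestamp."""
    return now >= datetime.fromisoformat(expiration_iso)

# The Expiration value from the example assume-role output above
print(is_expired("2020-11-07T11:08:17+00:00", datetime.now(timezone.utc)))
```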

AWS SDK assume role (method three)

The previous 2 assume role methods were done using functionality provided by Jenkins. What if instead, we relied on our build to do this for us?

To see this working, we’ll be using a project which uses the popular build tool Gradle. If you haven’t used Gradle before, feel free to check out this Gradle tutorial for complete beginners.

Gradle has a very flexible build.gradle build script where you can write any Groovy or Java code you like. We’ll take advantage of that to:

  1. import the Java AWS SDK, allowing us to make calls to AWS services through code
  2. call the STS service to generate temporary credentials to use the production role
  3. use those temporary credentials to list the production S3 buckets

From the Jenkins home page, click New Item. Enter a name of gradle-assume-role, select Pipeline, then click OK.

Jenkins Gradle AWS assume role

On the next page, scroll down to the pipeline definition script section, and enter the following pipeline code, adding in your own production account id on line 7.

pipeline {
    agent any
    stages {
        stage('List production S3 buckets') {
            steps {
                git ''
                sh './gradlew listS3Buckets -ProleArn=arn:aws:iam::<your-production-account-id>:role/cross-account-role'
            }
        }
    }
}

This is a single-stage pipeline that:

  • clones this Git repository
  • executes a Gradle task called listS3Buckets. We pass through a roleArn property containing the ARN of the production role we want to assume.

Click Save, but hold off running the pipeline until we’ve had a look at the Gradle project we’re going to run.

The Gradle project

In the Gradle project is a build.gradle which contains the assume role logic.

import com.amazonaws.auth.AWSStaticCredentialsProvider
import com.amazonaws.auth.BasicSessionCredentials
import com.amazonaws.services.s3.AmazonS3Client
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder
import com.amazonaws.services.securitytoken.model.AssumeRoleRequest
import com.amazonaws.services.securitytoken.model.Credentials

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath group: 'com.amazonaws', name: 'aws-java-sdk-sts', version: '1.11.895'
        classpath group: 'com.amazonaws', name: 'aws-java-sdk-s3', version: '1.11.895'
    }
}

task listS3Buckets() {
    doLast {
        AssumeRoleRequest assumeRoleRequest = new AssumeRoleRequest()
                .withRoleArn(findProperty('roleArn') as String)
                .withRoleSessionName('jenkins')

        Credentials sessionCredentials = AWSSecurityTokenServiceClientBuilder
                .defaultClient()
                .assumeRole(assumeRoleRequest)
                .getCredentials()

        BasicSessionCredentials awsCredentials = new BasicSessionCredentials(
                sessionCredentials.getAccessKeyId(),
                sessionCredentials.getSecretAccessKey(),
                sessionCredentials.getSessionToken())

        new AmazonS3Client(new AWSStaticCredentialsProvider(awsCredentials))
                .listBuckets()
                .each {
                    bucket -> println bucket.name
                }
    }
}
  • this build has a dependency on two AWS libraries, pulled from Maven central:
    • aws-java-sdk-sts will be used to generate temporary credentials to access the production account
    • aws-java-sdk-s3 will be used to list out our production S3 buckets
  • we’re defining a new Gradle task called listS3Buckets. The code within the doLast section will be executed when ./gradlew listS3Buckets is run.
  • in the 1st code block in doLast we create an AssumeRoleRequest. We get the role ARN from a Gradle property, which can be passed in the command line as -ProleArn=<role-arn>.
  • in the 2nd code block we use the assumeRoleRequest to generate temporary session credentials using the AWSSecurityTokenService built by the builder.
  • in the 3rd code block we create a BasicSessionCredentials object using the credentials returned from the AWSSecurityTokenService
  • in the last code block we create a new AmazonS3Client using the temporary credentials, and use it to list out the name of the S3 buckets

Executing the Jenkins job

Now we understand how the Gradle project is doing the assume role, let’s head back to Jenkins. Build the gradle-assume-role job and you’ll see Console Output like this.

Jenkins Gradle AWS assume role console output

We can see that the listS3Buckets Gradle task has been called, outputting the list of S3 buckets in our production account. Nice! 👌

Assuming a role for Jenkins running outside AWS (method four)

The methods above to get Jenkins to assume a role in your production account work only when Jenkins already has a role assigned to it. This is the case when Jenkins is running inside AWS.

But what if you wanted to run Jenkins outside of AWS, and give it access to a production AWS account? Well, here are two options:

  1. Create a Jenkins user in your development account. Store credentials for that user in Jenkins credentials. Perform an assume role using one of the methods discussed earlier.
  2. Create a Jenkins user in your production account. Store credentials for that user in Jenkins credentials. No cross-account role switching is required.

Let’s take a look at the first option, which looks like this.

Jenkins outside AWS

Notice this time we have an IAM user and group rather than a role. A user is required in AWS whenever you need to generate credentials. A group is required to assign permissions to that user.

We’ll run through a quick example of this setup, where we:

  1. Create an IAM group, and attach an inline policy to it to allow sts:AssumeRole on the production role
  2. Create a Jenkins IAM user, belonging to the IAM group
  3. Run a Jenkins instance locally in Docker, outside of AWS
  4. Configure credentials in Jenkins for the Jenkins IAM user
  5. Create a new Jenkins job which uses those credentials to access an S3 bucket in our production account

Creating an IAM group and user for Jenkins

In your Development account, go to Services > IAM > Groups > Create New Group. Set Group Name to jenkins, and click Next Step.

Create Jenkins IAM group

Skip over the remaining configuration by clicking Next Step again, then click Create Group.

Notice we didn’t set any permissions? We’ll attach an inline policy to this group, which we do by editing the group once it’s created. Click on the group name to get to the group details page, then click Permissions. Expand Inline Policies, then click the click here link.

Create Jenkins group inline policy

On the next page click Select next to Policy Generator. Here we can add the sts:AssumeRole permission to allow any users in this group to assume the production role.

  • for AWS Service select AWS Security Token Service
  • for Actions choose AssumeRole
  • for Amazon Resource Name enter the ARN of your production role (e.g. arn:aws:iam::<your-production-account-id>:role/cross-account-role)

Click Add Statement, then Next Step.

Set group permissions

On the next page you can review your policy in JSON format. Hit Apply Policy.
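The generated policy should look something like this (a sketch with the placeholder account id substituted; the exact formatting from the Policy Generator may differ):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<your-production-account-id>:role/cross-account-role"
        }
    ]
}
```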

Now we’ll create a user for Jenkins and assign it to our jenkins group. Go to Users in the IAM dashboard, then click Add user.

Enter a Username of jenkins, and select the Programmatic access checkbox. This will give us user credentials which we can configure inside Jenkins. Click Next: Permissions.

Create Jenkins user

On the next page select the jenkins group and click Next: Tags.

Add Jenkins user to group

Click Next: Review, then Create user.

On the confirmation page you’ll be given an access key id and secret access key for this new user. Make a note of these details because this is the only time AWS will show you them.

Jenkins user credentials

Set up a local Jenkins instance

We’ll use the same jenkins-with-aws Docker image we used earlier. Run it like this:

docker run --name jenkins-with-aws -p 8080:8080 tkgregory/jenkins-with-aws

View the logs by running docker logs jenkins-with-aws. Look for this section, and copy the admin password.

Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:


Go to http://localhost:8080, paste in the admin password, and hit Continue. Since this Docker image comes with all the plugins we need, click Select Plugins to Install, click None, then click Install.

Jenkins plugins none option

Set up a user for Jenkins, then on the last configuration page accept the suggested Jenkins URL. Click Save and Finish, then Start Using Jenkins.

Configure Jenkins credentials

Now we’ll create a Jenkins credential for our AWS jenkins user.

Go to Manage Jenkins > Manage Credentials. Under Stores scoped to Jenkins, click Jenkins. Then click Global credentials.

On the left click Add Credentials.

  • for Username enter the AWS access key id for the jenkins user
  • for Password enter the AWS secret access key for the jenkins user
  • for ID enter aws

Click OK.

Create credentials

Create a Jenkins pipeline to access the AWS production account

We’re going to create a Jenkins pipeline that uses the AWS Steps plugin we used earlier on. This plugin allows you to use a Jenkins credential (like the one we just created) to assume a role and run pipeline commands using that assumed role.

On the home page of your local Jenkins instance click New Item. Enter a name of outside-aws-assume-role, select Pipeline, then click OK.

Create job for Jenkins outside AWS

Scroll down to the pipeline definition script section, and enter the following pipeline script, making sure to enter your own production account id on line 7.

pipeline {
    agent any

    stages {
        stage('List production S3 buckets') {
            steps {
                withAWS(credentials: 'aws', roleAccount:'<your-production-account-id>', role:'cross-account-role') {
                    sh 'aws s3 ls'
                }
            }
        }
    }
}

Maybe you noticed this is almost the same as the pipeline we created in the earlier section for method one? That’s right, except this time we’re passing the credentials parameter to withAWS.

These credentials will be used to perform the STS assume role operation. That will return some temporary credentials providing access into our production account. Any commands within the withAWS block will use them, therefore having access to production.

Click Save. I think you know the drill now. Hit Build Now!

You’ll see some Console Output like this.

Console output from Jenkins outside AWS

We can see the list of S3 buckets in production. So our Jenkins instance deployed outside of AWS has accessed resources in our production account! 🙏

This was done using a Jenkins IAM user and group defined in the development account, then doing an assume role to generate temporary credentials to access the production account. Pretty cool!

Jenkins flag


There’s quite a lot going on in this article, so here’s a summary of the four different approaches we discussed.

Method                                                        Jenkins deployed in AWS?   Additional Jenkins plugins required
Pipeline job using AWS Steps plugin                           Yes                        AWS Steps plugin
Freestyle job                                                 Yes                        None
Gradle project using AWS SDK                                  Yes                        None
Pipeline job using AWS Steps plugin and Jenkins credentials   No                         AWS Steps plugin

There’s no silver bullet, but your choice mainly comes down to your answers to these questions:

  1. Am I able to make changes to Jenkins? e.g. install AWS Steps plugin
  2. Do I want the assume role logic to live in Jenkins or in the projects themselves? e.g. use Gradle or other build tools to handle assuming the production role

Hopefully one of these options will fit your scenario. If not, or if you have any other suggestions please leave a comment and we can start a discussion.

Clean up

Don’t forget to delete any resources you created in AWS while following along with this article.

In your Production account

  • Go to Services > IAM > Roles and delete the cross-account-role
  • Delete any S3 buckets you created

In your Development account

  • Go to Services > IAM > Roles and click on jenkins-role. Next to the assume-production-role policy, click the cross icon to remove it. This is required before deleting the CloudFormation stack as this change was made manually.
  • Go to Services > CloudFormation and delete the jenkins-for-ecs stack

Deleting the Development and Production accounts

You’ll have to login as the root user for these accounts, with the email address you provided when you created each account. When it comes to entering your password, follow the Forgot password? link to gain access.

Once you’re in, click on your username in the top bar, then click My Account. The Close Account option is at the bottom of the page.


jenkins-with-aws Docker image

We used the tkgregory/jenkins-with-aws:latest Docker image throughout this article. It’s a Jenkins image with the AWS CLI preinstalled, as well as the Pipeline, AWS Steps, and Git plugins.

Jenkins CloudFormation

Launch CloudFormation stack

We applied this CloudFormation stack to create a Jenkins instance (using the above Docker image). Follow along with the Deploy your own production-ready Jenkins in AWS ECS article for full instructions.

Jenkins plugins

We used the Jenkins AWS Steps plugin to assume a role in a different AWS account. Awesome plugin!


Check out this Gradle tutorial for complete beginners.
