
Showing posts with label AWS. Show all posts

Thursday, 14 August 2025

Java 21 with Spring Security 6.5.2, Oauth2 and AWS Cognito Pool

Recently we migrated our Spring Boot application to Java 21 and the latest Spring Boot version to enhance the security of our application. We hit major roadblocks related to the handling of JWT tokens. Here I am documenting the solution that worked for us. The main idea is similar to my previous post https://blog.bigdatawithjasvant.com/2023/08/spring-security-60-with-oauth2-and-aws.html

But we faced some new challenges along the way. Let us get started.

We had some public APIs which were open to all without any authentication, while other APIs were secured with Cognito pool JWT tokens. This application was different from the one described in my previous post, so we were not building on top of that application. To configure access to the APIs you need to create a security configuration class like the one below:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.method.configuration.EnableMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configurers.AbstractHttpConfigurer;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.oauth2.jwt.JwtDecoder;
import org.springframework.security.web.SecurityFilterChain;
import org.springframework.web.cors.CorsConfiguration;
import org.springframework.web.cors.UrlBasedCorsConfigurationSource;

import java.util.Arrays;

@Configuration
@EnableWebSecurity
@EnableMethodSecurity
public class SecurityConfig {

    @Autowired
    private JwtDecoder jwtDecoder;

    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        // @formatter:off
        http.cors(c-> {
            UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
            CorsConfiguration config = new CorsConfiguration();
            config.setAllowCredentials(true);
            config.addAllowedOriginPattern("*");
            config.addAllowedHeader("*");
            config.setAllowedMethods(Arrays.asList("OPTIONS", "GET", "POST", "PUT", "DELETE", "PATCH"));
            source.registerCorsConfiguration("/**", config);

            c.configurationSource(source);
        });
        http.csrf(AbstractHttpConfigurer::disable);

        http
                .authorizeHttpRequests((authorize) -> authorize
                        .requestMatchers("/","/api/public/**","/actuator/**").permitAll()
                        .anyRequest().authenticated()
                ).sessionManagement(sm-> sm.sessionCreationPolicy(SessionCreationPolicy.STATELESS))
                .oauth2ResourceServer(rs->  rs.jwt(j-> j.decoder(jwtDecoder)));
        // @formatter:on
        return http.build();
    }

}

For decoding JWT tokens we need a JwtDecoder. It is defined in another configuration class like this:

import com.nimbusds.jose.*;
import com.nimbusds.jose.proc.JWSKeySelector;
import com.nimbusds.jose.proc.SecurityContext;
import com.nimbusds.jwt.proc.DefaultJWTProcessor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.convert.converter.Converter;
import org.springframework.security.oauth2.jwt.JwtDecoder;
import org.springframework.security.oauth2.jwt.MappedJwtClaimSetConverter;
import org.springframework.security.oauth2.jwt.NimbusJwtDecoder;
import java.security.Key;
import java.util.*;

import com.nimbusds.jose.jwk.JWKSet;
import com.nimbusds.jose.jwk.RSAKey;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

import java.text.ParseException;

@Configuration
public class JwtConfiguration {

    public static final String COGNITO_GROUPS = "cognito:groups";
    private static final String SPRING_AUTHORITIES = "scope";
    public static final String COGNITO_USERNAME = "username";
    private static final String SPRING_USER_NAME = "sub";

    @Value("${security.oauth2.resource.jwk.key-set-uri}")
    private String keySetUri;

    @Bean
    JwtDecoder jwtDecoder() throws ParseException {
        // Obtain the JWKS from the endpoint
        RestTemplate restTemplate = new RestTemplate();
        ResponseEntity<String> jwksResponse = restTemplate.getForEntity(keySetUri, String.class);
        String jwksJson = jwksResponse.getBody();

        JWKSet jwkSet = JWKSet.parse(jwksJson);

        DefaultJWTProcessor<SecurityContext> defaultJWTProcessor =  new DefaultJWTProcessor<>();

        defaultJWTProcessor.setJWSKeySelector(new JWSKeySelector<SecurityContext>() {
            @Override
            public List<? extends Key> selectJWSKeys(JWSHeader header, SecurityContext context) throws KeySourceException {
                RSAKey rsaKey = (RSAKey) jwkSet.getKeyByKeyId(header.getKeyID());
                try {
                    return new ArrayList<Key>(Arrays.asList(rsaKey.toPublicKey()));
                } catch (JOSEException e) {
                    // Fail token validation rather than returning null keys
                    throw new KeySourceException("Could not extract public key from JWKS", e);
                }
            }
        });
        NimbusJwtDecoder jwtDecoder = new NimbusJwtDecoder(defaultJWTProcessor);

        Converter<Map<String, Object>, Map<String, Object>> claimSetConverter = MappedJwtClaimSetConverter
                .withDefaults(Collections.emptyMap());

        jwtDecoder.setClaimSetConverter( claims -> {
            claims = claimSetConverter.convert(claims);

            HashMap<String, Object> hashMap = new HashMap<>(claims);
            if (claims.containsKey(COGNITO_GROUPS))
                hashMap.put(SPRING_AUTHORITIES, claims.get(COGNITO_GROUPS));
            if (claims.containsKey(COGNITO_USERNAME))
                hashMap.put(SPRING_USER_NAME, claims.get(COGNITO_USERNAME));

            return hashMap;
        });
        return jwtDecoder;
    }
}
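The keySetUri value comes from the application configuration. A hedged example of the corresponding application.properties entry (the region and pool ID below are placeholders you must replace with your own):

```properties
# JWKS endpoint of the Cognito user pool; replace region and pool ID with your own
security.oauth2.resource.jwk.key-set-uri=https://cognito-idp.ap-south-1.amazonaws.com/ap-south-1_XXXXXXXXX/.well-known/jwks.json
```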

When I tried running this code I faced a ClassNotFoundException which took a lot of time to resolve. The exception was:

[ main] o.s.boot.SpringApplication , 857 : Application run failed java.lang.ClassNotFoundException: org.springframework.security.oauth2.server.resource.authentication.DPoPAuthenticationProvider 
  at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641) 
  at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188) 
  at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:526) 
  ... 57 common frames omitted 
Wrapped by: java.lang.NoClassDefFoundError: org/springframework/security/oauth2/server/resource/authentication/DPoPAuthenticationProvider

The cause of the problem was a mismatch between the versions of the Spring Security libraries. It was resolved when I used the following versions of the dependencies:

	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>3.5.3</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
        <dependencies>
		<dependency>
			<groupId>com.fasterxml.jackson.core</groupId>
			<artifactId>jackson-databind</artifactId>
			<version>2.15.2</version> <!-- match AWS SDK requirements -->
		</dependency>
		<!-- https://mvnrepository.com/artifact/org.springframework.security/spring-security-jwt -->
		<dependency>
			<groupId>org.springframework.security</groupId>
			<artifactId>spring-security-jwt</artifactId>
			<version>1.1.1.RELEASE</version>
		</dependency>
		<!-- https://mvnrepository.com/artifact/org.springframework.security/spring-security-oauth2-client -->
		<dependency>
			<groupId>org.springframework.security</groupId>
			<artifactId>spring-security-oauth2-client</artifactId>
			<version>6.5.2</version>
		</dependency>
		<!-- https://mvnrepository.com/artifact/org.springframework.security/spring-security-config -->
		<dependency>
			<groupId>org.springframework.security</groupId>
			<artifactId>spring-security-config</artifactId>
			<version>6.5.2</version>
		</dependency>
		<dependency>
			<groupId>org.springframework.security</groupId>
			<artifactId>spring-security-oauth2-resource-server</artifactId>
			<version>6.5.2</version>
		</dependency>
		<dependency>
			<groupId>org.springframework.security</groupId>
			<artifactId>spring-security-oauth2-jose</artifactId>
			<version>6.5.2</version>
		</dependency>
		<dependency>
			<groupId>com.nimbusds</groupId>
			<artifactId>nimbus-jose-jwt</artifactId>
			<version>10.0.2</version>
		</dependency>
      </dependencies>

Please comment if you found this useful or need more information.

Friday, 4 August 2023

Spring Security 6.0 with Oauth2 and AWS Cognito Pool

Recently we were migrating our old application from Spring Security 5 to Spring Security 6. The major problem we faced was that there was little documentation available on the internet for Spring Security 6. After struggling for a long time we were able to move our old application to the latest version of Spring Security, 6.0. Here I am documenting our findings in the hope that they will be helpful to readers.

The first class you need to write is SecurityConfiguration. Its content is provided below:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.method.configuration.EnableMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configurers.AbstractHttpConfigurer;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.oauth2.jwt.JwtDecoder;
import org.springframework.security.web.SecurityFilterChain;
import org.springframework.security.web.util.matcher.AntPathRequestMatcher;
import org.springframework.web.cors.CorsConfiguration;
import org.springframework.web.cors.UrlBasedCorsConfigurationSource;

import java.util.Arrays;

@Configuration
@EnableWebSecurity
@EnableMethodSecurity
public class SecurityConfiguration {

    @Autowired
    private JwtDecoder jwtDecoder;

    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        // @formatter:off
        http.cors(c-> {
            UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
            CorsConfiguration config = new CorsConfiguration();
            config.setAllowCredentials(true);
            config.addAllowedOriginPattern("*");
            config.addAllowedHeader("*");
            config.setAllowedMethods(Arrays.asList("OPTIONS", "GET", "POST", "PUT", "DELETE", "PATCH"));
            source.registerCorsConfiguration("/**", config);

            c.configurationSource(source);
        });
        http.csrf(AbstractHttpConfigurer::disable);

        http
                .authorizeHttpRequests((authorize) -> authorize
                        .requestMatchers(AntPathRequestMatcher.antMatcher("/error")).permitAll()
                        .requestMatchers(AntPathRequestMatcher.antMatcher("/api/public/**")).permitAll()
                        .requestMatchers(AntPathRequestMatcher.antMatcher("/api/test/**")).permitAll()
                        .requestMatchers(AntPathRequestMatcher.antMatcher("/actuator/**")).permitAll()
                        .anyRequest().authenticated()
                ).sessionManagement(sm-> sm.sessionCreationPolicy(SessionCreationPolicy.STATELESS))
                .oauth2ResourceServer(rs-> rs.jwt(j-> j.decoder(jwtDecoder)));
        // @formatter:on
        return http.build();
    }
}

In the above class I defined a CORS configuration that allows all origins. CSRF is disabled and a few URLs are allowed without authentication. We need OAuth2 to integrate Cognito pool JWT tokens; for that we specified a jwtDecoder. The JwtDecoder is defined in the next class:

import com.nimbusds.jose.*;
import com.nimbusds.jose.proc.JWSKeySelector;
import com.nimbusds.jose.proc.SecurityContext;
import com.nimbusds.jwt.proc.DefaultJWTProcessor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.convert.converter.Converter;
import org.springframework.security.oauth2.jwt.JwtDecoder;
import org.springframework.security.oauth2.jwt.MappedJwtClaimSetConverter;
import org.springframework.security.oauth2.jwt.NimbusJwtDecoder;
import java.security.Key;
import java.util.*;

import com.nimbusds.jose.jwk.JWKSet;
import com.nimbusds.jose.jwk.RSAKey;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

import java.text.ParseException;

@Configuration
public class JwtConfiguration {

    public static final String COGNITO_GROUPS = "cognito:groups";
    private static final String SPRING_AUTHORITIES = "scope";
    public static final String COGNITO_USERNAME = "username";
    private static final String SPRING_USER_NAME = "sub";

    @Value("${security.oauth2.resource.jwk.key-set-uri}")
    private String keySetUri;

    @Bean
    JwtDecoder jwtDecoder() throws ParseException {
        // Obtain the JWKS from the endpoint
        RestTemplate restTemplate = new RestTemplate();
        ResponseEntity<String> jwksResponse = restTemplate.getForEntity(keySetUri, String.class);
        String jwksJson = jwksResponse.getBody();

        JWKSet jwkSet = JWKSet.parse(jwksJson);

        DefaultJWTProcessor<SecurityContext> defaultJWTProcessor =  new DefaultJWTProcessor<>();

        defaultJWTProcessor.setJWSKeySelector(new JWSKeySelector<SecurityContext>() {
            @Override
            public List<? extends Key> selectJWSKeys(JWSHeader header, SecurityContext context) throws KeySourceException {
                RSAKey rsaKey = (RSAKey) jwkSet.getKeyByKeyId(header.getKeyID());
                try {
                    return new ArrayList<Key>(Arrays.asList(rsaKey.toPublicKey()));
                } catch (JOSEException e) {
                    e.printStackTrace();
                }
                return null;
            }
        });
        NimbusJwtDecoder jwtDecoder = new NimbusJwtDecoder(defaultJWTProcessor);

        Converter<Map<String, Object>, Map<String, Object>> claimSetConverter = MappedJwtClaimSetConverter
                .withDefaults(Collections.emptyMap());

        jwtDecoder.setClaimSetConverter( claims -> {
            claims = claimSetConverter.convert(claims);

            HashMap<String, Object> hashMap = new HashMap<>(claims);
            if (claims.containsKey(COGNITO_GROUPS))
                hashMap.put(SPRING_AUTHORITIES, claims.get(COGNITO_GROUPS));
            if (claims.containsKey(COGNITO_USERNAME))
                hashMap.put(SPRING_USER_NAME, claims.get(COGNITO_USERNAME));

            return hashMap;
        });
        return jwtDecoder;
    }
}

In the above class we decode the JWT token generated by the Cognito pool, which is passed to our API as a bearer token. JWT tokens are signed with a private key and validated with a public key. AWS publishes the public keys as a JSON Web Key Set (JWKS) at https://cognito-idp.<aws-region>.amazonaws.com/<cognito-pool-id>/.well-known/jwks.json. There can be multiple keys in this set, so you need to parse the JWKS and pick the correct key according to the header of the JWT token.

After that you need to customize the claimSetConverter. We wanted to use the Cognito pool username as the user identification rather than the UUID Cognito generates for the user. To do that, we overwrote the value of "sub" with the username received in the claims. This value is returned by principal.getName() when used in the code.

The second customization was to overwrite "scope" with "cognito:groups", because we want to use Cognito groups as authorities for authorizing our API calls. If a Cognito group is named ROLE_ADMIN, it will be mapped to the authority SCOPE_ROLE_ADMIN, and we can use @PreAuthorize("hasAuthority('SCOPE_ROLE_ADMIN')") to authorize the API calls. A sample API is implemented below:

import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.validation.annotation.Validated;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.security.Principal;

@RestController
@RequestMapping("/api/users")
@RequiredArgsConstructor
@Validated
@Slf4j
public class UserNameTestController {
    @GetMapping("/userName")
    @PreAuthorize("hasAuthority('SCOPE_ROLE_ADMIN')")
    public  ResponseEntity<String>  getCurrentUserInfo(final Principal principal) {
        return ResponseEntity.status(200).body(principal.getName());
    }
}
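To call this secured endpoint, the client must send the JWT issued by Cognito as a bearer token. The following is a minimal sketch using the JDK's java.net.http types; the base URL and token value are placeholders, not part of the original application:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class BearerCallSketch {

    // Builds a GET request for the secured endpoint with the Cognito JWT
    // attached as a bearer token in the Authorization header.
    static HttpRequest buildRequest(String baseUrl, String jwt) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/api/users/userName"))
                .header("Authorization", "Bearer " + jwt)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        // Placeholder token; in practice this is the token issued by Cognito.
        HttpRequest request = buildRequest("http://localhost:8080", "eyJraWQiOi...");
        System.out.println(request.headers().firstValue("Authorization").orElse(""));
    }
}
```

Sending this request with HttpClient returns the Cognito username in the response body, because the claim-set converter above copies the username claim into "sub".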

The important Maven dependencies used in this code are:

        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-config</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-web</artifactId>
        </dependency>
		<dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-security</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-oauth2-resource-server</artifactId>
            <version>6.0.2</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-oauth2-jose</artifactId>
            <version>6.0.5</version>
        </dependency>
        <dependency>
            <groupId>com.nimbusds</groupId>
            <artifactId>nimbus-jose-jwt</artifactId>
            <version>9.8.1</version>
        </dependency>

I hope this will be helpful for the readers. Please drop me a comment if you need some more information.

References:

https://medium.com/javarevisited/json-web-key-set-jwks-94dc26847a34 

Saturday, 18 March 2023

Creating nginx docker image for a react application

Recently we were trying to deploy a PWA application developed in React to production. We wanted to create a Docker image for our application for easy deployment. We did not want to run a node process in our Docker container because it would consume a lot of memory, so we decided to compile our application to static files and serve them with nginx to keep the container lightweight.

We were developing the application on a Windows machine and deploying on a Linux machine, so we decided to do the compilation in a Docker build. We used the following Dockerfile:

FROM node:12 as mynode
ARG environment
# Create app directory
WORKDIR /usr/src/app
# Copy source code to build context
COPY . .
RUN npm install
RUN npm run build:${environment}

# Start from a clean nginx image
FROM nginx
#Copy the build generated file to target image
COPY --from=mynode /usr/src/app/build/ /usr/share/nginx/html
#delete the default nginx config
RUN rm /etc/nginx/conf.d/default.conf
#copy the required nginx config file to image
COPY ui-nginx.conf /etc/nginx/conf.d

Following is the content of ui-nginx.conf file:

server {
    listen 80 default_server;
    server_name localhost;
    location / {
        root /usr/share/nginx/html;
        access_log /var/log/nginx/ui-access.log;
        error_log /var/log/nginx/ui-error.log;
        index index.html index.htm;
        try_files $uri /index.html;
    }
}

The following part of the package.json file is used for making the build:

  "scripts": {
    "start": "env-cmd -f ./environments/.env.development react-scripts start",
    "build": "env-cmd -f ./environments/.env.development react-scripts build",
    "start:development": "env-cmd -f ./environments/.env.development react-scripts start",
    "build:development": "env-cmd -f ./environments/.env.development react-scripts build",
    "start:production": "env-cmd -f ./environments/.env.production react-scripts start",
    "build:production": "env-cmd -f ./environments/.env.production react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject"
  }

The following command is used for making the build. We use different environment variables for the development and production environments, and we pass the required build as a build arg:

docker build --build-arg environment=production -t <imagename>:<tag>  . 

The above command generates the production build. Using this method we produced a Docker image less than 160MB in size, which can be deployed in an AWS ECS container with 512MB of memory.

Friday, 10 March 2023

Client is configured for secret but secret was not received

Recently we were trying to authenticate with a user's username and password using an AWS Cognito client that was configured with a secret. We got the following error:

Client <client ID> is configured for secret but secret was not received (Service: AWSCognitoIdentityProvider; Status Code: 400; Error Code: NotAuthorizedException; Request ID: e42de11b-380b-4935-b97e-1069ab925a35; Proxy: null)

We tried many ways of passing the Cognito client secret to the authenticate API but nothing worked. After a lot of trial and error and searching the internet, I came across the following page, which explains how to calculate the secret hash:

https://docs.aws.amazon.com/cognito/latest/developerguide/signing-up-users-in-your-app.html#cognito-user-pools-computing-secret-hash

Following is the code for calculating the secret hash, taken from the above link:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
 
public static String calculateSecretHash(String userPoolClientId, String userPoolClientSecret, String userName) {
    final String HMAC_SHA256_ALGORITHM = "HmacSHA256";
    
    SecretKeySpec signingKey = new SecretKeySpec(
            userPoolClientSecret.getBytes(StandardCharsets.UTF_8),
            HMAC_SHA256_ALGORITHM);
    try {
        Mac mac = Mac.getInstance(HMAC_SHA256_ALGORITHM);
        mac.init(signingKey);
        mac.update(userName.getBytes(StandardCharsets.UTF_8));
        byte[] rawHmac = mac.doFinal(userPoolClientId.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(rawHmac);
    } catch (Exception e) {
        throw new RuntimeException("Error while calculating secret hash", e);
    }
}

The calculated secret hash needs to be passed to the authentication API along with the username and password:

Map<String, String> authParams = new LinkedHashMap<>() {{
	put("USERNAME", username);
	put("PASSWORD", password);
	put("SECRET_HASH", secret_hash);
}};
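Putting it together, the following is a self-contained sketch of computing the hash and assembling the auth parameters; the client ID, secret, and credentials are placeholders. The actual authentication call is then made through the AWS SDK (initiateAuth or adminInitiateAuth) with these parameters:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.LinkedHashMap;
import java.util.Map;

public class SecretHashDemo {

    // HMAC-SHA256 over (username + clientId), keyed with the client secret,
    // then Base64-encoded -- the format Cognito expects in SECRET_HASH.
    static String calculateSecretHash(String clientId, String clientSecret, String userName) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(clientSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            mac.update(userName.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(mac.doFinal(clientId.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new RuntimeException("Error while calculating secret hash", e);
        }
    }

    public static void main(String[] args) {
        String hash = calculateSecretHash("exampleClientId", "exampleClientSecret", "jasvant");
        Map<String, String> authParams = new LinkedHashMap<>();
        authParams.put("USERNAME", "jasvant");
        authParams.put("PASSWORD", "example-password");
        authParams.put("SECRET_HASH", hash);
        // An HMAC-SHA256 digest is 32 bytes, so its Base64 form is 44 characters.
        System.out.println(hash.length()); // prints 44
    }
}
```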

Using this method we were finally able to generate an authentication token for a Cognito client that has a client secret configured.

Wednesday, 8 July 2020

Mounting EFS volume on EC2 machine

In my previous post I explained how to mount an EFS volume inside a Fargate task container. EFS serves as persistent storage for an ephemeral container. Sometimes you may want to see the data the containers are storing in EFS. Most of the time the containers are special-purpose containers, like MongoDB, which do not provide any interface to browse the file system.

The easy way to look at EFS is to mount it on an EC2 instance. An EFS volume can be mounted on multiple machines, so you can mount it on the EC2 machine and inside the container at the same time. You need to look up the DNS name of the EFS volume.

This DNS name is used while mounting the EFS volume as an NFS drive on the EC2 machine.

Create a directory for mounting the EFS volume. For example, I am creating /efs3:


sudo mkdir /efs3

Now you can mount the EFS volume using the following command. Please replace the DNS name with that of your EFS volume:

sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-9ce1684d.efs.ap-south-1.amazonaws.com:/ /efs3

You can follow this link for more information: https://docs.aws.amazon.com/efs/latest/ug/mounting-fs-mount-cmd-dns-name.html

Saturday, 27 June 2020

Mounting EFS volume in fargate task in AWS

In this post I am going to explain the process of mounting an EFS disk in a Fargate service in AWS.
The first step is to define volumes in the task definition. I am going to use a file browser container in this exercise, which allows you to browse the root file system of the container. This container should be used only for experimentation and should be terminated as soon as you are done with your experiment.

I am creating a task definition fargate-filebrowser1 of fargate type.


Click the "Add Volume" link at the bottom of the task definition page and fill in the details of the EFS volume.

After adding the EFS volume to the task definition, add a container to it. You can add multiple containers to one task definition, but we will add only one. After clicking the "Add Container" button, provide the container name and image name:

The image name is jasvantsingh/myfilebrowser:latest. It is based on filebrowser/filebrowser with just one change: it exposes the root file system rather than one directory.

Scroll down to the bottom of the container options and provide the mount point for the EFS volume inside the container.

The EFS volume will be mounted at /vol2 as per this configuration. Any file created inside the /vol2 directory will be persisted across task restarts or deletion and recreation of the service. EFS is a kind of NFS mount that resides on persistent storage outside the container.


Click the "Create" button under the service tab of your cluster.

Provide the service details. Please note that you need to select platform version 1.4.0 and not LATEST. The LATEST version does not work; maybe it does not map to 1.4.0.


On the next screen provide the networking details. Please make sure that "Auto-assign Public IP" is ENABLED. I will not be using any load balancer, so the task needs a public IP address to be accessible from outside.


On the auto-scaling screen, keep auto scaling disabled.


On the review screen click the "Create service" button. The service will be created. Click the "View Service" button; it will take you to the service details page. Wait for some time for the task to be started and listed in the Tasks tab.


Click on the task ID. It will take you to the task details page. Please note the public IP address of the task.


Paste the public IP address into a browser and access the file browser. Use admin as both the username and the password to log in. You can see the "vol2" directory there. This is the persistent directory; any file you place inside it will be persisted.


Please make sure to terminate the service and task, because anybody could hack into this file browser and put malicious content there without your knowledge. This file browser is only for experimentation.

Please comment if something is not working. I will reply to your comment.

EFS does not work with fargate service in AWS

On April 8, 2020 Amazon announced the availability of EFS for Fargate services. I spent one week trying to mount an EFS volume in my Fargate service with no success. I was using platform version LATEST. The service startup always failed with the error:
Service creation failed: One or more of the requested capabilities are not supported.
I was trying to follow the tutorial at:
https://aws.amazon.com/blogs/aws/amazon-ecs-supports-efs/
After struggling for one week I noticed the following line in the tutorial:
It’s also essential here to make sure that I set the platform version to 1.4.0
When I tried platform version 1.4.0 it worked. For some reason 1.4.0 and LATEST are not the same: EFS works on platform version 1.4.0 but not on LATEST.

Tuesday, 14 April 2020

The ACL permission error while exporting Cloud Watch logs to S3

Yesterday I struggled for more than 6 hours to export CloudWatch logs to an S3 bucket. I was getting the following error:
The ACL permission for the selected bucket is not correct. The Amazon S3 bucket must reside in the same region as the log data that you want to export. Learn more.


I tried following all the steps mentioned in the link but it still did not work. Later on I found the mistake; it is an interesting one, so I am writing it up in my blog so that you don't make the same mistake.

The documentation page says that you need to set the following policy on the S3 bucket:

{
    "Version": "2012-10-17",
    "Statement": [
      {
          "Action": "s3:GetBucketAcl",
          "Effect": "Allow",
          "Resource": "arn:aws:s3:::my-exported-logs",
          "Principal": { "Service": "logs.us-west-2.amazonaws.com" }
      },
      {
          "Action": "s3:PutObject" ,
          "Effect": "Allow",
          "Resource": "arn:aws:s3:::my-exported-logs/random-string/*",
          "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } },
          "Principal": { "Service": "logs.us-west-2.amazonaws.com" }
      }
    ]
}

Here my-exported-logs is the bucket name and needs to be replaced with your bucket name, and us-west-2 needs to be replaced with your region code; for Mumbai it is ap-south-1.

The page says that random-string can be replaced with any random string, which makes you believe that this string is not important, but that is wrong. It is the most important string for exporting logs to S3. The random string you use in the bucket permission needs to be provided as the S3 bucket prefix while exporting logs to the S3 bucket. If you don't provide an S3 bucket prefix, or provide a different one, you get the ACL error, because the policy grants s3:PutObject permission only on the random-string directory, so putting logs in any other directory fails.
The only difference between the working and non-working export dialogs is random-string being provided as the S3 bucket prefix. I learned this the hard way by wasting 6 hours.

Monday, 9 September 2019

Running Map-Reduce program on AWS EMR

In this post we will learn how to run a Map-Reduce program on an AWS EMR cluster. This post is a continuation of my earlier post:
https://blog.bigdatawithjasvant.com/2019/09/setting-up-machine-code-and-data-for.html
Please read it before continuing.

The video for this post is available below:

For running any job on AWS EMR, the code and input data need to be in S3 storage. There should be a directory where log files will be stored. The output directory should not exist, because the job is going to create it. If the output directory already exists, the Map-Reduce program will fail.



We put the jar file containing the code for Map-Reduce program in S3 storage.


The input for the Map-Reduce program is also placed in S3 storage.


Now open EMR service console in AWS.


Now create a new cluster. Provide the name of the cluster and a directory where log files for the cluster will be placed; these files can be used later for analyzing the reason for a failure. For adding steps to the cluster, choose "Step type" as "Custom Jar" and click the "Configure" button to specify the JAR file which contains our Map-Reduce program and the arguments for the program.



In the pop-up dialog box select the jar file which contains the code. In the arguments box provide the following arguments:


com.wordcount.WordCountHashPartitioner
s3://bigdatawithjasvant-emr/bigtext
s3://bigdatawithjasvant-emr/output


The first line contains the main class name, the second the location of the input files, and the third the location of the output directory. The output directory must not already exist; if it does, the job will fail. Select "Action on failure" as "Terminate the cluster", because we don't want to keep the cluster running if our Map-Reduce program fails.


Now we select the number of nodes in the cluster. I will be using 5 m5.xlarge nodes: one master node and 4 core nodes.



Once we click the "Create cluster" button, the cluster will be created and started. It takes around 3 minutes for the cluster to come up. You can keep clicking the refresh button to update the status.


Once the cluster is in running state, we can see the status of the steps which we configured in the cluster.



Once our step is in running state, we can click "View jobs" to see the jobs running as part of that step.


To see details of the tasks of our job, we need to click the "View tasks" link.


You can see the total number of tasks and the completed, running, and pending tasks on the tasks page. As you can see, 13 tasks are currently running. There are 4 cores in each of the 4 core instances: 16 cores and 64GB of RAM in total. I don't know the size of one execution slot in EMR in terms of RAM and CPU, but the number of running tasks peaked at 13, which hints that the total number of slots was close to 13.

You can look at the console logs of the job from the web interface.


You can also look at the controller logs using the web console.


Once the Map-Reduce job step is complete, the cluster shuts down automatically. In my case it took around 5 minutes for my program to run; around 5 more minutes were spent on startup and shutdown.

AWS EMR is a good option if you have a lot of data and want to process it quickly. You can launch a cluster with up to 20 instances and tear it down once it is no longer needed. In our case we were billed for 10 minutes on each of the 5 instances, plus 10 minutes of EMR usage fee for each: 50 instance-minutes and 50 minutes of EMR usage fee in total. But you save 40 minutes of wall-clock time by running your program on a multi-node cluster.