
Tuesday 12 September 2023

Uploading a large file to S3 from flutter

We were recently trying to upload a large zip file from a Flutter browser app and discovered that uploads larger than 500 MB failed with a buffer allocation error. We needed to upload files larger than 10 GB.

So we started exploring other options and finally solved the problem using S3 multipart upload. We did not want to expose S3 credentials to the client for security reasons, so we implemented three APIs on the server (a server-side sketch follows the list):

  1. InitiateMultipartUpload - initiates a multipart upload on S3 and returns an uploadId to the client
  2. UploadPart - receives a part from the client, sends it to S3 and returns a PartETag to the client
  3. CompleteMultipartUpload - completes the multipart upload in S3 using the PartETags received from the client.
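
For reference, here is a minimal sketch of what these three server-side operations can look like with the AWS SDK for Java v1 (aws-java-sdk-s3). The bucket name, class wiring and method signatures are placeholders for illustration, not our actual server code, which also handles authentication, validation and error handling.

// Minimal sketch of the three server-side operations, assuming the AWS SDK for Java v1
// (aws-java-sdk-s3). Bucket name and wiring are placeholders, not our production code.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

import java.io.InputStream;
import java.util.List;

public class S3MultipartService {
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    private static final String BUCKET = "my-upload-bucket";   // placeholder

    // 1. InitiateMultipartUpload: returns the uploadId to the client
    public String initiateMultipartUpload(String key) {
        InitiateMultipartUploadResult result =
                s3.initiateMultipartUpload(new InitiateMultipartUploadRequest(BUCKET, key));
        return result.getUploadId();
    }

    // 2. UploadPart: streams one part to S3 and returns its ETag to the client
    public PartETag uploadPart(String key, String uploadId, int partNumber,
                               InputStream partData, long partSize) {
        UploadPartRequest request = new UploadPartRequest()
                .withBucketName(BUCKET)
                .withKey(key)
                .withUploadId(uploadId)
                .withPartNumber(partNumber)
                .withInputStream(partData)
                .withPartSize(partSize);
        return s3.uploadPart(request).getPartETag();
    }

    // 3. CompleteMultipartUpload: stitches the parts together using the collected PartETags
    public void completeMultipartUpload(String key, String uploadId, List<PartETag> partETags) {
        s3.completeMultipartUpload(
                new CompleteMultipartUploadRequest(BUCKET, key, uploadId, partETags));
    }
}
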
For the client side we developed a custom component to upload the file to S3, taking inspiration from https://github.com/Taskulu/chunked_uploader/blob/master/lib/chunked_uploader.dart

After these changes I was able to upload a 5 GB file to S3 in 20 minutes without getting the buffer allocation error.

Monday 11 September 2023

Uploading file to S3 using flutter

Recently we were working on uploading a zip file to S3 using a presigned URL. We exposed an API that generates a presigned URL over an authenticated API call. After generating the presigned URL, the client is supposed to send the content in a PUT call to S3 on that presigned URL. But when we tried sending the file using MultipartRequest the file got corrupted. After struggling for a long time we found StreamedRequest, with which we were able to upload the file to S3 without corruption. Following is the code for uploading the file to S3.

import 'dart:async';

import 'package:file_picker/file_picker.dart' as file_picker;
import 'package:http/http.dart' as http;

Future<http.StreamedResponse?> uploadFileToS3(
  String s3Url,
  file_picker.PlatformFile file,
) async {
  final request = http.StreamedRequest('PUT', Uri.parse(s3Url));
  final streamSink = request.sink as StreamSink<List<int>>;
  print('Uploading file ${file.name} to $s3Url');
  // Start the request first, then pipe the picked file's stream into its sink.
  final resp = request.send();
  await streamSink.addStream(file.readStream!);
  await streamSink.close();
  print('returning response for ${file.name}');
  return await resp;
}

Please note that readStream will be null if you don't pass the required arguments to file_picker. The required arguments are:

FilePickerResult? result = await FilePicker.platform.pickFiles(withData:false, withReadStream: true);

Please let me know if you face any problems using this approach.

Thursday 24 August 2023

Using new HttpClient API for downloading file

Recently we needed to download a CSV file from a partner website in our Java application, so I explored the new Java HttpClient API. The APIs are asynchronous, which is good for scalability but confusing at first. After struggling for a while I was able to figure out how to download a file protected by a username and password.

The code for downloading the file is provided below; it also includes a call to fetch just the headers using a HEAD request.

import java.io.IOException;
import java.net.Authenticator;
import java.net.PasswordAuthentication;
import java.net.URI;
import java.net.URISyntaxException;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class HttpClientMainClass {
    public static void main(String[] args) throws URISyntaxException, IOException, InterruptedException {
        HttpClient httpClient = HttpClient.newBuilder().authenticator(new Authenticator() {
                    @Override
                    protected PasswordAuthentication getPasswordAuthentication() {
                        return new PasswordAuthentication("username","password".toCharArray());
                    }
                })
                .build();

        // HEAD request: fetch only the response headers
        HttpRequest httpRequest2 = HttpRequest.newBuilder().method("HEAD", HttpRequest.BodyPublishers.noBody())
                .uri(new URI("http://example.com/data.csv"))
                .build();
        HttpResponse<Void> resp2 = httpClient.send(httpRequest2,
                HttpResponse.BodyHandlers.discarding());

        System.out.println(resp2);
        System.out.println(resp2.headers());

        // GET request: stream the body directly to a file on disk
        HttpRequest httpRequest = HttpRequest.newBuilder().GET()
                .uri(new URI("http://example.com/data.csv"))
                .build();
        HttpResponse<Path> resp = httpClient.send(httpRequest,
                HttpResponse.BodyHandlers.ofFile(Path.of("/tmp","data.csv"),
                        StandardOpenOption.CREATE, StandardOpenOption.WRITE));

        if(resp.statusCode()==200) {
            System.out.println("File downloaded At : "+resp.body());
        }
    }
}
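
The example above uses the blocking send() call. If you want to exploit the asynchronous nature of the API mentioned earlier, the same download can be expressed with sendAsync(), which returns a CompletableFuture. A minimal sketch (same URL and target file as above):

// Asynchronous variant using sendAsync(). For a password-protected file, reuse the
// authenticator-configured httpClient from the example above; a plain client is shown
// here only to keep the sketch self-contained.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import java.util.concurrent.CompletableFuture;

public class HttpClientAsyncExample {
    public static void main(String[] args) {
        HttpClient httpClient = HttpClient.newHttpClient();
        HttpRequest httpRequest = HttpRequest.newBuilder()
                .uri(URI.create("http://example.com/data.csv"))
                .GET()
                .build();

        CompletableFuture<HttpResponse<Path>> future = httpClient.sendAsync(
                httpRequest, HttpResponse.BodyHandlers.ofFile(Path.of("/tmp", "data.csv")));

        // The calling thread is free to do other work here; join() waits for the download.
        HttpResponse<Path> resp = future.join();
        System.out.println("File downloaded at : " + resp.body());
    }
}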

Hope it will be useful for the readers.

Ref: https://www.baeldung.com/java-9-http-client

Friday 4 August 2023

Spring Security 6.0 with Oauth2 and AWS Cognito Pool

Recently we migrated an old application from Spring Security 5 to Spring Security 6. The major problem we faced was that there was very little documentation available on the internet for Spring Security 6. After struggling for a long time we were able to move our application to the latest version, 6.0. Here I am documenting our findings in the hope that they will be helpful for the readers.

The first class you need to write is SecurityConfiguration. The content of the file is provided below:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.method.configuration.EnableMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configurers.AbstractHttpConfigurer;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.oauth2.jwt.JwtDecoder;
import org.springframework.security.web.SecurityFilterChain;
import org.springframework.security.web.util.matcher.AntPathRequestMatcher;
import org.springframework.web.cors.CorsConfiguration;
import org.springframework.web.cors.UrlBasedCorsConfigurationSource;

import java.util.Arrays;

@Configuration
@EnableWebSecurity
@EnableMethodSecurity
public class SecurityConfiguration {

    @Autowired
    private JwtDecoder jwtDecoder;

    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        // @formatter:off
        http.cors(c-> {
            UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
            CorsConfiguration config = new CorsConfiguration();
            config.setAllowCredentials(true);
            config.addAllowedOriginPattern("*");
            config.addAllowedHeader("*");
            config.setAllowedMethods(Arrays.asList("OPTIONS", "GET", "POST", "PUT", "DELETE", "PATCH"));
            source.registerCorsConfiguration("/**", config);

            c.configurationSource(source);
        });
        http.csrf(AbstractHttpConfigurer::disable);

        http
                .authorizeHttpRequests((authorize) -> authorize
                        .requestMatchers(AntPathRequestMatcher.antMatcher("/error")).permitAll()
                        .requestMatchers(AntPathRequestMatcher.antMatcher("/api/public/**")).permitAll()
                        .requestMatchers(AntPathRequestMatcher.antMatcher("/api/test/**")).permitAll()
                        .requestMatchers(AntPathRequestMatcher.antMatcher("/actuator/**")).permitAll()
                        .anyRequest().authenticated()
                ).sessionManagement(sm-> sm.sessionCreationPolicy(SessionCreationPolicy.STATELESS))
                .oauth2ResourceServer(rs-> rs.jwt(j-> j.decoder(jwtDecoder)));
        // @formatter:on
        return http.build();
    }
}

In the above class I defined a CORS configuration source that allows all origins. CSRF is disabled and a few URLs are permitted without authentication. To integrate Cognito pool JWT tokens we use the OAuth2 resource server support, for which we specified a jwtDecoder. The JwtDecoder is defined in the next class:

import com.nimbusds.jose.*;
import com.nimbusds.jose.proc.JWSKeySelector;
import com.nimbusds.jose.proc.SecurityContext;
import com.nimbusds.jwt.proc.DefaultJWTProcessor;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.convert.converter.Converter;
import org.springframework.security.oauth2.jwt.JwtDecoder;
import org.springframework.security.oauth2.jwt.MappedJwtClaimSetConverter;
import org.springframework.security.oauth2.jwt.NimbusJwtDecoder;
import java.security.Key;
import java.util.*;

import com.nimbusds.jose.jwk.JWKSet;
import com.nimbusds.jose.jwk.RSAKey;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

import java.text.ParseException;

@Configuration
public class JwtConfiguration {

    public static final String COGNITO_GROUPS = "cognito:groups";
    private static final String SPRING_AUTHORITIES = "scope";
    public static final String COGNITO_USERNAME = "username";
    private static final String SPRING_USER_NAME = "sub";

    @Value("${security.oauth2.resource.jwk.key-set-uri}")
    private String keySetUri;

    @Bean
    JwtDecoder jwtDecoder() throws ParseException {
        // Obtain the JWKS from the endpoint
        RestTemplate restTemplate = new RestTemplate();
        ResponseEntity<String> jwksResponse = restTemplate.getForEntity(keySetUri, String.class);
        String jwksJson = jwksResponse.getBody();

        JWKSet jwkSet = JWKSet.parse(jwksJson);

        DefaultJWTProcessor<SecurityContext> defaultJWTProcessor =  new DefaultJWTProcessor<>();

        defaultJWTProcessor.setJWSKeySelector(new JWSKeySelector<SecurityContext>() {
            @Override
            public List<? extends Key> selectJWSKeys(JWSHeader header, SecurityContext context) throws KeySourceException {
                RSAKey rsaKey = (RSAKey) jwkSet.getKeyByKeyId(header.getKeyID());
                try {
                    return new ArrayList<Key>(Arrays.asList(rsaKey.toPublicKey()));
                } catch (JOSEException e) {
                    e.printStackTrace();
                }
                return null;
            }
        });
        NimbusJwtDecoder jwtDecoder = new NimbusJwtDecoder(defaultJWTProcessor);

        Converter<Map<String, Object>, Map<String, Object>> claimSetConverter = MappedJwtClaimSetConverter
                .withDefaults(Collections.emptyMap());

        jwtDecoder.setClaimSetConverter( claims -> {
            claims = claimSetConverter.convert(claims);

            HashMap<String, Object> hashMap = new HashMap<>(claims);
            // Expose Cognito groups as Spring "scope" authorities and the Cognito
            // username as the "sub" claim (see the explanation below).
            if (claims.containsKey(COGNITO_GROUPS))
                hashMap.put(SPRING_AUTHORITIES, claims.get(COGNITO_GROUPS));
            if (claims.containsKey(COGNITO_USERNAME))
                hashMap.put(SPRING_USER_NAME, claims.get(COGNITO_USERNAME));

            return hashMap;
        });
        return jwtDecoder;
    }
}

In the above class we decode the JWT token generated by the Cognito pool, which is passed to our API as a bearer token. The JWT tokens are signed using a private key and validated using a public key. The public keys are provided by AWS as a JSON Web Key Set (JWKS) at the URL https://cognito-idp.<aws-region>.amazonaws.com/<cognito-pool-id>/.well-known/jwks.json. There can be multiple keys in this set, so you need to parse the JWKS and pick the correct key based on the key ID (kid) in the JWT header.
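
The keySetUri injected into the decoder above is simply this JWKS endpoint. For example, in application.properties (keeping the same placeholders for region and pool ID):

security.oauth2.resource.jwk.key-set-uri=https://cognito-idp.<aws-region>.amazonaws.com/<cognito-pool-id>/.well-known/jwks.json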

After that you need to do some customization in the claimSetConverter. The customization we required is to use the Cognito pool username for user identification rather than the UUID that Cognito generates for the user. For that we overwrote the value of "sub" with the username received in the claims. This value is what principal.getName() returns in the code.

The second customization was to overwrite "scope" with "cognito:groups", because we want to use Cognito groups as authorities for authorizing our API calls. If the Cognito group name is ROLE_ADMIN, the authority it gets mapped to will be SCOPE_ROLE_ADMIN, and we can use @PreAuthorize("hasAuthority('SCOPE_ROLE_ADMIN')") to pre-authorize the API calls. One sample API is implemented below:

import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.validation.annotation.Validated;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import java.security.Principal;

@RestController
@RequestMapping("/api/users")
@RequiredArgsConstructor
@Validated
@Slf4j
public class UserNameTestController {
    @GetMapping("/userName")
    @PreAuthorize("hasAuthority('SCOPE_ROLE_ADMIN')")
    public  ResponseEntity<String>  getCurrentUserInfo(final Principal principal) {
        return ResponseEntity.status(200).body(principal.getName());
    }
}

The important Maven dependencies used in this code are:

        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-config</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-security</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-oauth2-resource-server</artifactId>
            <version>6.0.2</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-oauth2-jose</artifactId>
            <version>6.0.5</version>
        </dependency>
        <dependency>
            <groupId>com.nimbusds</groupId>
            <artifactId>nimbus-jose-jwt</artifactId>
            <version>9.8.1</version>
        </dependency>

I hope this will be helpful for the readers. Please drop me a comment if you need some more information.

References:

https://medium.com/javarevisited/json-web-key-set-jwks-94dc26847a34 

Friday 21 July 2023

Reversing a linked list in batch of K

Today I solved the GeeksforGeeks problem https://practice.geeksforgeeks.org/problems/reverse-a-linked-list-in-groups-of-given-size/1 to reverse a linked list in batches of K. The code I wrote is very different from the one suggested by their editorial solution, so I am sharing it here for everyone's benefit.

class Solution
{
    // Reverses the linked list in groups of size k and returns the new head.
    public static Node reverse(Node node, int k)
    {
        Node head = null;     // head of the final, re-linked list
        Node oldtail = null;  // tail of the previously completed group
        if(node==null) {
            return null;
        }
        int i=1;              // count of nodes consumed so far

        // Start the first group: prev is the reversed group's head, newtail its tail.
        Node prev = node;
        node = node.next;
        Node newtail = prev;
        prev.next = null;
        while(node!=null) {
            if(i%k==0) {
                // A full group of k nodes has been reversed; link it after the previous group.
                if(oldtail!=null) {
                    oldtail.next = prev;
                } else {
                    head = prev;  // the first group becomes the new head of the list
                }
                oldtail = newtail;
                // Begin a new group with the current node.
                prev = node;
                newtail = prev;
                node = node.next;
                prev.next=null;
                i++;
            } else {
                // Standard in-place reversal step within the current group.
                Node temp = node.next;
                node.next = prev;
                prev = node;
                node = temp;
                i++;
            }
        }
        // Attach the last (possibly partial) reversed group.
        if(oldtail!=null) {
            oldtail.next = prev;
        }
        if(head==null) {
            head=prev;
        }

        return head;
    }
}
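
For readers who want to try the solution outside the GeeksforGeeks harness, a minimal driver is sketched below. The Node class here is an assumption modelled on the usual GfG definition (an int data field plus a next pointer) and is not part of the original submission.

// Minimal driver for the Solution class above. The Node class is assumed here
// (modelled on the usual GeeksforGeeks definition); it is not part of the submission.
class Node {
    int data;
    Node next;
    Node(int data) { this.data = data; }
}

public class ReverseInGroupsDemo {
    public static void main(String[] args) {
        // Build the list 1 -> 2 -> 3 -> 4 -> 5
        Node head = new Node(1);
        Node cur = head;
        for (int v = 2; v <= 5; v++) {
            cur.next = new Node(v);
            cur = cur.next;
        }

        // Reverse in groups of 2: expected output 2 1 4 3 5
        Node result = Solution.reverse(head, 2);
        for (Node n = result; n != null; n = n.next) {
            System.out.print(n.data + " ");
        }
        System.out.println();
    }
}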


Friday 26 May 2023

Unable to connect to aurora DB from java 11

Recently we were trying to connect to a 5.7.mysql_aurora.2.11.2 DB from Java 11 and were getting a communications link failure in our Spring Boot application. We were getting the following error:
2023-05-26 21:20:43.826  INFO [,] 10788 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-3 - Starting...
2023-05-26 21:20:45.756 ERROR [,] 10788 --- [           main] com.zaxxer.hikari.pool.HikariPool        : HikariPool-3 - Exception during pool initialization.

com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
	at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:836) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:198) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[HikariCP-3.4.5.jar:na]
	at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358) ~[HikariCP-3.4.5.jar:na]
	at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) ~[HikariCP-3.4.5.jar:na]
	at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477) ~[HikariCP-3.4.5.jar:na]
	at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:560) ~[HikariCP-3.4.5.jar:na]
	at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) ~[HikariCP-3.4.5.jar:na]
	at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) ~[HikariCP-3.4.5.jar:na]
	at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122) ~[hibernate-core-5.4.25.Final.jar:5.4.25.Final]
	at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator$ConnectionProviderJdbcConnectionAccess.obtainConnection(JdbcEnvironmentInitiator.java:180) ~[hibernate-core-5.4.25.Final.jar:5.4.25.Final]
	at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator.initiateService(JdbcEnvironmentInitiator.java:68) ~[hibernate-core-5.4.25.Final.jar:5.4.25.Final]
	at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator.initiateService(JdbcEnvironmentInitiator.java:35) ~[hibernate-core-5.4.25.Final.jar:5.4.25.Final]
	at org.hibernate.boot.registry.internal.StandardServiceRegistryImpl.initiateService(StandardServiceRegistryImpl.java:101) ~[hibernate-core-5.4.25.Final.jar:5.4.25.Final]
	at org.hibernate.service.internal.AbstractServiceRegistryImpl.createService(AbstractServiceRegistryImpl.java:263) ~[hibernate-core-5.4.25.Final.jar:5.4.25.Final]
	at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:237) ~[hibernate-core-5.4.25.Final.jar:5.4.25.Final]
	at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:214) ~[hibernate-core-5.4.25.Final.jar:5.4.25.Final]
	at org.hibernate.id.factory.internal.DefaultIdentifierGeneratorFactory.injectServices(DefaultIdentifierGeneratorFactory.java:152) ~[hibernate-core-5.4.25.Final.jar:5.4.25.Final]
	at org.hibernate.service.internal.AbstractServiceRegistryImpl.injectDependencies(AbstractServiceRegistryImpl.java:286) ~[hibernate-core-5.4.25.Final.jar:5.4.25.Final]
	at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:243) ~[hibernate-core-5.4.25.Final.jar:5.4.25.Final]
	at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:214) ~[hibernate-core-5.4.25.Final.jar:5.4.25.Final]
	at org.hibernate.boot.internal.InFlightMetadataCollectorImpl.<init>(InFlightMetadataCollectorImpl.java:176) ~[hibernate-core-5.4.25.Final.jar:5.4.25.Final]
	at org.hibernate.boot.model.process.spi.MetadataBuildingProcess.complete(MetadataBuildingProcess.java:127) ~[hibernate-core-5.4.25.Final.jar:5.4.25.Final]
	at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.metadata(EntityManagerFactoryBuilderImpl.java:1224) ~[hibernate-core-5.4.25.Final.jar:5.4.25.Final]
	at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:1255) ~[hibernate-core-5.4.25.Final.jar:5.4.25.Final]
	at org.springframework.orm.jpa.vendor.SpringHibernateJpaPersistenceProvider.createContainerEntityManagerFactory(SpringHibernateJpaPersistenceProvider.java:58) ~[spring-orm-5.2.12.RELEASE.jar:5.2.12.RELEASE]
	at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:365) ~[spring-orm-5.2.12.RELEASE.jar:5.2.12.RELEASE]
	at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:391) ~[spring-orm-5.2.12.RELEASE.jar:5.2.12.RELEASE]
	at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:378) ~[spring-orm-5.2.12.RELEASE.jar:5.2.12.RELEASE]
	at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.afterPropertiesSet(LocalContainerEntityManagerFactoryBean.java:341) ~[spring-orm-5.2.12.RELEASE.jar:5.2.12.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1853) ~[spring-beans-5.2.12.RELEASE.jar:5.2.12.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1790) ~[spring-beans-5.2.12.RELEASE.jar:5.2.12.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:594) ~[spring-beans-5.2.12.RELEASE.jar:5.2.12.RELEASE]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:516) ~[spring-beans-5.2.12.RELEASE.jar:5.2.12.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:324) ~[spring-beans-5.2.12.RELEASE.jar:5.2.12.RELEASE]
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.2.12.RELEASE.jar:5.2.12.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:322) ~[spring-beans-5.2.12.RELEASE.jar:5.2.12.RELEASE]
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) ~[spring-beans-5.2.12.RELEASE.jar:5.2.12.RELEASE]
	at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1109) ~[spring-context-5.2.12.RELEASE.jar:5.2.12.RELEASE]
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:869) ~[spring-context-5.2.12.RELEASE.jar:5.2.12.RELEASE]
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:551) ~[spring-context-5.2.12.RELEASE.jar:5.2.12.RELEASE]
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:143) ~[spring-boot-2.3.7.RELEASE.jar:2.3.7.RELEASE]
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:758) ~[spring-boot-2.3.7.RELEASE.jar:2.3.7.RELEASE]
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:750) ~[spring-boot-2.3.7.RELEASE.jar:2.3.7.RELEASE]
	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:405) ~[spring-boot-2.3.7.RELEASE.jar:2.3.7.RELEASE]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:315) ~[spring-boot-2.3.7.RELEASE.jar:2.3.7.RELEASE]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1237) ~[spring-boot-2.3.7.RELEASE.jar:2.3.7.RELEASE]
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226) ~[spring-boot-2.3.7.RELEASE.jar:2.3.7.RELEASE]
	at com.influencers.shared.InfluencersBackofficeApplication.main(InfluencersBackofficeApplication.java:12) ~[classes/:na]
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:na]
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:na]
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:na]
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490) ~[na:na]
	at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.protocol.a.NativeProtocol.negotiateSSLConnection(NativeProtocol.java:340) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.protocol.a.NativeAuthenticationProvider.connect(NativeAuthenticationProvider.java:167) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.protocol.a.NativeProtocol.connect(NativeProtocol.java:1348) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.NativeSession.connect(NativeSession.java:157) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:956) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:826) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	... 50 common frames omitted
Caused by: javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
	at java.base/sun.security.ssl.HandshakeContext.<init>(HandshakeContext.java:170) ~[na:na]
	at java.base/sun.security.ssl.ClientHandshakeContext.<init>(ClientHandshakeContext.java:103) ~[na:na]
	at java.base/sun.security.ssl.TransportContext.kickstart(TransportContext.java:238) ~[na:na]
	at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:394) ~[na:na]
	at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:373) ~[na:na]
	at com.mysql.cj.protocol.ExportControlled.performTlsHandshake(ExportControlled.java:317) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.protocol.StandardSocketFactory.performTlsHandshake(StandardSocketFactory.java:188) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.protocol.a.NativeSocketConnection.performTlsHandshake(NativeSocketConnection.java:97) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	at com.mysql.cj.protocol.a.NativeProtocol.negotiateSSLConnection(NativeProtocol.java:331) ~[mysql-connector-java-8.0.22.jar:8.0.22]
	... 55 common frames omitted
This error occurs because the TLS protocol versions offered by the 5.7.mysql_aurora.2.11.2 DB are disabled by default in Java 11. We tried passing connection parameters to the JDBC connector to enable TLSv1.2 on the server side but it did not help. We had to modify the java.security file to remove TLSv1 and TLSv1.1 from the list of disabled algorithms.
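
Concretely, the change is to the jdk.tls.disabledAlgorithms property in $JAVA_HOME/conf/security/java.security. The exact default list depends on your JDK build, but the edit looks roughly like this (TLSv1 and TLSv1.1 removed, the rest of the list abbreviated here):

# Before (default, abbreviated)
jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, DES, MD5withRSA, ...
# After
jdk.tls.disabledAlgorithms=SSLv3, RC4, DES, MD5withRSA, ...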

We were using AWS ECS to deploy our application, so we were building a docker image. We modified the Dockerfile to copy the modified java.security file into the docker image. Following is the content of the Dockerfile:
FROM eclipse-temurin:11
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
COPY java.security ${JAVA_HOME}/conf/security/java.security
ENTRYPOINT ["java","-jar","/app.jar"]
This modification of the java.security file solved our problem; other methods of enabling TLSv1.2 did not work. We were able to connect to a MySQL 5.7 instance running in Docker without modifying the java.security file, but not to the Aurora database.

The modified java.security file can be downloaded from https://www.bigdatawithjasvant.com/blogdata/00/0003/java.security

Saturday 18 March 2023

Creating nginx docker image for a react application

Recently we were trying to deploy a PWA application developed in React to production. We wanted to create a docker image for our application for easy deployment. We did not want to run a Node process in our docker container because it would consume a lot of memory, so we decided to compile the application to static files and serve them with nginx to keep it lightweight.

We develop the application on a Windows machine and deploy it on a Linux machine, so we decided to do the compilation inside the docker build. We used the following code in our Dockerfile.

FROM node:12 as mynode
ARG environment
# Create app directory
WORKDIR /usr/src/app
# Copy source code to build context
COPY . .
RUN npm install
RUN npm run build:${environment}

# Start from a clean nginx image
FROM nginx
#Copy the build generated file to target image
COPY --from=mynode /usr/src/app/build/ /usr/share/nginx/html
#delete the default nginx config
RUN rm /etc/nginx/conf.d/default.conf
#copy the required nginx config file to image
COPY ui-nginx.conf /etc/nginx/conf.d

Following is the content of ui-nginx.conf file:

server {
    listen 80 default_server;
    server_name localhost;
    location / {
        root /usr/share/nginx/html;
        access_log /var/log/nginx/ui-access.log;
        error_log /var/log/nginx/ui-error.log;
        index index.html index.htm;
        try_files $uri /index.html;
    }
}

The following part of the package.json file is used for making the build:

  "scripts": {
    "start": "env-cmd -f ./environments/.env.development react-scripts start",
    "build": "env-cmd -f ./environments/.env.development react-scripts build",
    "start:development": "env-cmd -f ./environments/.env.development react-scripts start",
    "build:development": "env-cmd -f ./environments/.env.development react-scripts build",
    "start:production": "env-cmd -f ./environments/.env.production react-scripts start",
    "build:production": "env-cmd -f ./environments/.env.production react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject"
  }

The following command is used for making the build. We use different environment variables for the development and production environments and pass the required environment as a build arg.

docker build --build-arg environment=production -t <imagename>:<tag>  . 

The above command generates the production build. Using this method we produced a docker image that is less than 160 MB in size and can be deployed in an AWS ECS container of 512 MB.

Sunday 12 March 2023

Streaming output asynchronously in spring boot

We are using Spring Boot in our application and return ResponseEntity<String> from our service methods. The drawback of this is that the whole result has to be loaded in memory, which drives up the memory requirement. We had a requirement to dump data from the database as CSV to the client; the total amount of data can be huge and we can't afford to load it all in memory.

What we were looking for was a way to write the result in chunks so that the memory requirement does not grow. While searching the internet I came across an article explaining asynchronous processing of HTTP requests in Spring Boot, which was really helpful in solving our problem. I am not copying any part of that article into my post; you can read it directly at the original source.
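
The article itself is not reproduced here. Purely as an illustration of the general idea, and not necessarily the exact approach described in that article, one way to stream a large CSV in chunks from Spring Boot is StreamingResponseBody, which writes rows to the servlet output stream as they are produced instead of buffering the whole result:

// Illustrative sketch only: streams CSV rows to the client without holding the
// whole result in memory. The endpoint, row source and row format are placeholders.
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.StreamingResponseBody;

import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

@RestController
public class CsvExportController {

    @GetMapping(value = "/api/export/csv", produces = "text/csv")
    public ResponseEntity<StreamingResponseBody> exportCsv() {
        StreamingResponseBody body = outputStream -> {
            try (Writer writer = new OutputStreamWriter(outputStream, StandardCharsets.UTF_8)) {
                writer.write("id,name\n");
                // In a real service this loop would iterate over a database cursor/stream,
                // writing rows in chunks instead of collecting them in memory first.
                for (int i = 0; i < 1_000_000; i++) {
                    writer.write(i + ",row-" + i + "\n");
                }
            }
        };
        return ResponseEntity.ok().body(body);
    }
}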

Friday 10 March 2023

Client is configured for secret but secret was not received

Recently we were trying to authenticate with a user's username and password against an AWS Cognito app client that was configured with a secret. We got the following error:

Client <client ID> is configured for secret but secret was not received (Service: AWSCognitoIdentityProvider; Status Code: 400; Error Code: NotAuthorizedException; Request ID: e42de11b-380b-4935-b97e-1069ab925a35; Proxy: null)

We tried many ways of passing the Cognito client secret to the authentication API but nothing worked. After trying a lot and searching the internet I came across the following page, which explains how to calculate the secret hash.

https://docs.aws.amazon.com/cognito/latest/developerguide/signing-up-users-in-your-app.html#cognito-user-pools-computing-secret-hash

Following is the code for calculating the secret hash. This code is taken from the above link.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
 
public static String calculateSecretHash(String userPoolClientId, String userPoolClientSecret, String userName) {
    final String HMAC_SHA256_ALGORITHM = "HmacSHA256";
    
    SecretKeySpec signingKey = new SecretKeySpec(
            userPoolClientSecret.getBytes(StandardCharsets.UTF_8),
            HMAC_SHA256_ALGORITHM);
    try {
        Mac mac = Mac.getInstance(HMAC_SHA256_ALGORITHM);
        mac.init(signingKey);
        mac.update(userName.getBytes(StandardCharsets.UTF_8));
        byte[] rawHmac = mac.doFinal(userPoolClientId.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(rawHmac);
    } catch (Exception e) {
        throw new RuntimeException("Error while calculating ");
    }
}

The calculated secret hash needs to be passed to the authentication API along with the username and password.

Map<String, String> authParams = new LinkedHashMap<>() {{
	put("USERNAME", username);
	put("PASSWORD", password);
	put("SECRET_HASH", secret_hash);
}};
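
For completeness, here is a minimal sketch of passing these parameters to the InitiateAuth call, assuming the AWS SDK for Java v1 Cognito Identity Provider client (aws-java-sdk-cognitoidp). The class and variable names are illustrative only.

// Minimal sketch, assuming the AWS SDK for Java v1 (aws-java-sdk-cognitoidp).
// clientId and authParams (USERNAME, PASSWORD, SECRET_HASH) come from the code above.
import com.amazonaws.services.cognitoidp.AWSCognitoIdentityProvider;
import com.amazonaws.services.cognitoidp.AWSCognitoIdentityProviderClientBuilder;
import com.amazonaws.services.cognitoidp.model.AuthFlowType;
import com.amazonaws.services.cognitoidp.model.InitiateAuthRequest;
import com.amazonaws.services.cognitoidp.model.InitiateAuthResult;

import java.util.Map;

public class CognitoAuthExample {
    public static String authenticate(String clientId, Map<String, String> authParams) {
        AWSCognitoIdentityProvider cognitoClient =
                AWSCognitoIdentityProviderClientBuilder.defaultClient();

        InitiateAuthRequest authRequest = new InitiateAuthRequest()
                .withAuthFlow(AuthFlowType.USER_PASSWORD_AUTH)
                .withClientId(clientId)
                .withAuthParameters(authParams);

        InitiateAuthResult result = cognitoClient.initiateAuth(authRequest);
        // The ID token (or access token) can now be used as the bearer token.
        return result.getAuthenticationResult().getIdToken();
    }
}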

Using this method we were finally able to generate an authentication token for a Cognito client that has a client secret configured.

Monday 6 February 2023

Overriding base URL for feign client with every call

We had a requirement to notify our partner of certain events in our system. To implement this callback into their system we decided to use a FeignClient inside our Spring application. The challenge we faced was overriding the base URL of the FeignClient on every API call. Our FeignClient interface is defined as follows:

@FeignClient(name = "BrandPartnerNotificationForEnabledSku")
public interface BrandPartnerFeignClient {
    @RequestMapping(value= "{path}",method = RequestMethod.POST, consumes="application/json", produces = "application/json")
    ResponseEntity<String> callWebhook(@RequestBody String enabledSkuList, @PathVariable("path") String path, @RequestHeader Map<String,String> headers);
}

In the above call we pass the HTTP headers as a map because the header requirements differ from partner to partner. The next challenge was to override the URL and basic authentication for each partner's callback. We used the following code to create the Feign client manually rather than letting Spring create it automatically.

BrandPartnerFeignClient brandPartnerFeignClient = new FeignClientBuilder(applicationContext).forType(BrandPartnerFeignClient.class, "client1")
		.url("http://localhost:8080/receive")
		.customize(feignBuilder -> feignBuilder.requestInterceptor(new BasicAuthRequestInterceptor("dummyuser", "dummypassword")))
		.build();
// create jsonString and headersMap
ResponseEntity<String> response = brandPartnerFeignClient.callWebhook(jsonString, "/headersasmap", headersMap);

We pass the base URL to the url() method while creating the Feign client and the remaining part of the URL as a path parameter. This way the same base URL can host more than one sub-URL.

Thursday 26 January 2023

Increasing container startup timeout of testcontainers Mysql container

Recently I moved my Docker installation from an SSD drive to an HDD drive as per my post https://blog.bigdatawithjasvant.com/2023/01/installing-docker-in-secondary-drive-on.html

After this change my build started failing because the docker containers launched by Testcontainers were slow to come up. Testcontainers waits only 120 seconds for a container to come up; after that it kills the container and starts a new one. The new container also took more than 120 seconds, so it was killed too, and after 3 attempts the build failed. I got the following error in my logs:

[ERROR] Could not start container
java.lang.IllegalStateException: Container is started, but cannot be accessed by (JDBC URL: jdbc:mysql://localhost:51209/test), please check container logs
    at org.testcontainers.containers.JdbcDatabaseContainer.waitUntilContainerStarted (JdbcDatabaseContainer.java:165)
    at org.testcontainers.containers.GenericContainer.tryStart (GenericContainer.java:466)
    at org.testcontainers.containers.GenericContainer.lambda$doStart$0 (GenericContainer.java:329)
    at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess (Unreliables.java:81)
    at org.testcontainers.containers.GenericContainer.doStart (GenericContainer.java:327)
    at org.testcontainers.containers.GenericContainer.start (GenericContainer.java:315)
    at org.testcontainers.jdbc.ContainerDatabaseDriver.connect (ContainerDatabaseDriver.java:118)
    at org.jooq.codegen.GenerationTool.run0 (GenerationTool.java:362)
    at org.jooq.codegen.GenerationTool.run (GenerationTool.java:236)
    at org.jooq.codegen.GenerationTool.generate (GenerationTool.java:231)
    at org.jooq.codegen.maven.Plugin.execute (Plugin.java:207)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
    at org.apache.maven.lifecycle.internal.MojoExecutor.doExecute (MojoExecutor.java:301)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:211)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:165)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:157)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:121)
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:127)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:294)
    at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
    at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
    at org.apache.maven.cli.MavenCli.execute (MavenCli.java:960)
    at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:293)
    at org.apache.maven.cli.MavenCli.main (MavenCli.java:196)
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:566)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:282)
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:406)
    at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
[ERROR] Log output from the failed container:

I was looking for an option to increase this timeout but did not find any provision for it in the Testcontainers source code. So I had to fork https://github.com/testcontainers/testcontainers-java and change the code to add a provision to override the timeout. The enhanced code is available at the following location:

https://github.com/jasvant6thstreet/testcontainers-java-jasvant

You can clone the above repository and build it using the following command:

gradle build -x test

This command compiles the code on your local machine without running the test cases. It takes some time to build, so be patient and wait for it to complete. Once the build is complete you can publish it to your local Maven repository using the following command:

gradle publishToMavenLocal

The above command publishes the enhanced Testcontainers artifacts to your local Maven repository, after which you can use them in your Maven projects. The version number for this fix is 1.17.6-j1.

For setting the new timeouts the following new properties are added:

tc.jdbc.startuptimeout=120
tc.jdbc.connectiontimeout=120

You can set them at any of the three places described in the documentation on the page

https://www.testcontainers.org/features/configuration/

I used the first option and set them via environment variables in the shell where I was running my build:

TESTCONTAINERS_TC_JDBC_CONNECTIONTIMEOUT=600
TESTCONTAINERS_TC_JDBC_STARTUPTIMEOUT=600

With the above environment variables the Testcontainers MySQL connector waited for 600 seconds, the docker container was able to come up within that time, and the build completed successfully.

I have submitted a pull request to testcontainers for incorporating this change.

Installing Docker in secondary drive on Windows

Recently I faced a problem that my C drive on Windows was almost full because Docker was consuming 57 GB of space. I had 900 GB free on my secondary drive, while my C drive is just 165 GB in total. But the Docker installer for Windows does not provide any option to install it on an alternate drive, and setting "data-root" in the configuration file did not help; it made Docker Desktop hang and I had to manually remove that entry to make Docker Desktop come up again.

But there is a solution to this problem. On Windows, Docker uses WSL to run its virtual machine with a Linux distribution inside it. This virtual machine's virtual hard disk is stored in the "C:\Users\<Username>\AppData\Local\Docker\wsl\data" folder. There are two Linux distributions that run to support Docker on Windows.

PS C:\Windows\system32> wsl -l
Windows Subsystem for Linux Distributions:
docker-desktop (Default)
docker-desktop-data
PS C:\Windows\system32>

docker-desktop is the default and takes less space. docker-desktop-data is the one that takes a lot of space and grows over time. We need to move docker-desktop-data to the secondary drive, which is D: in my case. We need to stop docker-desktop before moving docker-desktop-data; we don't need to stop docker-desktop-data itself. The following commands must be run in a privileged PowerShell window, which you open with "Run as administrator". These calls hang if Docker Desktop is running, and it is difficult to make sure that Docker is not running, so I recommend running the commands just after restarting your computer.

wsl --shutdown

wsl --export docker-desktop-data docker-desktop-data.tar

wsl --unregister docker-desktop-data

mkdir D:\docker-desktop-data

wsl --import docker-desktop-data D:\docker-desktop-data  .\docker-desktop-data.tar --version 2

After running the above commands you will notice that a VHD file is created under the "D:\docker-desktop-data" folder. This is the virtual hard disk where Docker stores all its data. Now you can start Docker and use it with no data loss.

After doing that I faced one problem: Docker was running slowly because I had moved it from an SSD to an HDD. Normally this would be bearable, but in my case the build started failing because I was using MySQL in Testcontainers, which gives a container 120 seconds to start; if it does not start in 120 seconds it kills it and starts a new container. The second container also gets killed after 120 seconds, and after 3 retries the build fails.

There was no provision in Testcontainers for increasing the timeout, so I had to fork Testcontainers and add one. After enhancing Testcontainers and increasing the timeout to 600 seconds my build started working. I described that in my post

https://blog.bigdatawithjasvant.com/2023/01/increasing-container-startup-timeout-of.html