
Saturday 26 June 2021

When to use ForkJoinPool vs ExecutorService?

With the introduction of ForkJoinPool in Java, people started getting confused about whether to use ForkJoinPool or ExecutorService. In this post I am going to discuss which thread pool to use in which case.

ForkJoinPool is designed for CPU-intensive workloads. The default number of threads in a ForkJoinPool is equal to the number of CPUs on the system. If any thread goes into a waiting state because it called join() on some other ForkJoinTask, a new compensating thread is started to keep all CPUs of the system utilized. ForkJoinPool has a common pool, which can be obtained by calling the static method ForkJoinPool.commonPool(). The aim of this design is to use a single ForkJoinPool in the system, with the number of threads equal to the number of processors. It can utilize the full computation capacity of the system if all ForkJoinTasks are doing computation-intensive work.
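As a quick check, here is a minimal sketch that prints the parallelism of the common pool next to the processor count (note that the common pool typically defaults to one less than the number of processors, since the submitting thread also joins in the work):

import java.util.concurrent.ForkJoinPool;

public class CommonPoolInfo {
	public static void main(String[] args) {
		// The common pool is shared process-wide (parallel streams use it too)
		ForkJoinPool pool = ForkJoinPool.commonPool();
		System.out.println("Processors:  " + Runtime.getRuntime().availableProcessors());
		System.out.println("Parallelism: " + pool.getParallelism());
	}
}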

But real-life tasks are a mix of CPU-intensive and IO-intensive work, and IO-intensive tasks are a bad choice for a ForkJoinPool. You should use an ExecutorService for IO-intensive tasks, because in an ExecutorService you can set the number of threads according to the IO capacity of your system instead of its CPU capacity.
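For illustration, a minimal sketch of such a pool (the figure 50 here is an assumed IO concurrency limit, not a recommendation):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class IoPool {
	// Threads in this pool mostly wait on the network, so size it by how many
	// concurrent IO operations your system can sustain, not by CPU count
	static final ExecutorService IO_POOL = Executors.newFixedThreadPool(50);
}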

If you want to call an IO-intensive operation from a ForkJoinTask, you should create a class which implements the ForkJoinPool.ManagedBlocker interface and do the IO-intensive operation in its block() method. You then invoke your ForkJoinPool.ManagedBlocker implementation through the static method ForkJoinPool.managedBlock(), which may start a compensating thread before calling block(). The block() method is supposed to do the IO operation and store the result in an instance variable. After calling ForkJoinPool.managedBlock() you call your own accessor method to get the result of the IO operation. This way you can mix CPU-intensive operations with IO-intensive operations. A classic example is a web crawler, where fetching pages from the internet is an IO-intensive operation, and parsing the fetched HTML to extract links is a CPU-intensive operation.

I have not implemented a full web crawler, but sample code where I fetch web pages using an ExecutorService with 10 threads. I use the common pool of ForkJoinPool for submitting ForkJoinTasks. My ForkJoinTask submits the page-fetch request to the ExecutorService and waits for the result using the static method ForkJoinPool.managedBlock(). After getting the page it calculates the SHA-256 hash of the page content and stores it in a ConcurrentHashMap. This way we can make full use of both the CPU capacity and the IO capacity of the system.

The sample code is:


import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.Future;
import java.util.concurrent.RecursiveTask;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
public class ForkJoinPoolTest {
	
	public static class FetchPage implements ForkJoinPool.ManagedBlocker {
		
		private String url;
		private ExecutorService executorService;
		private byte[] pageBytes;
		
		private static ConcurrentHashMap<String,byte[]> pagesMap = new ConcurrentHashMap<>();
		
		public FetchPage(String url, ExecutorService executorService) {
			this.url = url;
			this.executorService = executorService;
		}

		@Override
		public boolean block() throws InterruptedException {
			if ((pageBytes = pagesMap.get(url)) != null) {
				return true;
			}
			Callable<byte[]> callable = new Callable<byte[]>() {
				public byte[] call() throws Exception {
					// Close the client and response so connections are not leaked
					try (CloseableHttpClient client = HttpClients.createDefault();
							CloseableHttpResponse response = client.execute(new HttpGet(url))) {
						return EntityUtils.toByteArray(response.getEntity());
					}
				}
			};
			Future<byte[]> future = executorService.submit(callable);
			try {
				pageBytes = future.get();
				if (pageBytes != null) {
					pagesMap.put(url, pageBytes); // cache the page for later requests
				}
			} catch (InterruptedException | ExecutionException e) {
				pageBytes = null;
			}
			return true;
		}

		@Override
		public boolean isReleasable() {
			// managedBlock() skips block() entirely if the page is already available
			return pageBytes != null;
		}
		
		public byte[] getPage() {
			return pageBytes;
		}
		
	}
	
	private static ConcurrentHashMap<String, String> hashPageMap = new ConcurrentHashMap<>();
	
	public static class MyRecursiveTask extends RecursiveTask<String> {
		
		private String url;
		private ExecutorService executorService;
		public MyRecursiveTask(String url, ExecutorService executorService) {
			this.url = url;
			this.executorService = executorService;
		}

		@Override
		protected String compute() {
			try {
				FetchPage fp = new FetchPage(url, executorService);
				ForkJoinPool.managedBlock(fp);
				byte[] bytes = fp.getPage();
				if(bytes!=null) {
					String code = toHexString(getSHA(bytes));
					hashPageMap.put(url, code);
					return code;
				}
			} catch (InterruptedException | NoSuchAlgorithmException e) {
				return null;
			}
			return null;
		}
		
	}
	
	public static void main(String[] args) {
		ExecutorService executorService = Executors.newFixedThreadPool(10);
		ForkJoinPool forkJoinPool = ForkJoinPool.commonPool();
		
		MyRecursiveTask task1 = new MyRecursiveTask("https://www.yahoo.com", executorService);
		MyRecursiveTask task2 = new MyRecursiveTask("https://www.google.com", executorService);
		
		Future<String> f1 = forkJoinPool.submit(task1);
		Future<String> f2 = forkJoinPool.submit(task2);	
		try {
			String res1 = f1.get();
			String res2 = f2.get();
			System.out.println(res1);
			System.out.println(res2);
			executorService.shutdown();
		} catch (InterruptedException | ExecutionException e) {
			e.printStackTrace();
		}
		
	}
	
    public static byte[] getSHA(byte[] input) throws NoSuchAlgorithmException
    {
        // Compute the SHA-256 digest of the input bytes
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return md.digest(input);
    }
    
    public static String toHexString(byte[] hash)
    {
        // Convert byte array into signum representation
        BigInteger number = new BigInteger(1, hash);

        // Convert message digest into hex value
        StringBuilder hexString = new StringBuilder(number.toString(16));

        // Pad with leading zeros: SHA-256 produces 32 bytes = 64 hex digits
        while (hexString.length() < 64)
        {
            hexString.insert(0, '0');
        }

        return hexString.toString();
    }
}

This article is also published at GeeksForGeeks

Apache HttpClient hangs

Recently we were trying to hit a website from multiple threads and collect the responses, to improve speed. But unfortunately HttpClient hangs when we hit it from multiple threads. Following is my code, which does not work.


import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class HttpClientTest {
	public static class CallableRequest implements Callable<CloseableHttpResponse> {
		CloseableHttpClient client;
		
		public CallableRequest(CloseableHttpClient client) {
			this.client = client;
		}
		
		@Override
		public CloseableHttpResponse call() throws Exception {
			HttpGet request = new HttpGet("https://www.yahoo.com");
			CloseableHttpResponse response = client.execute(request);
			return response;
		}
		
	}
	public static void main(String[] args) throws IOException, InterruptedException, ExecutionException {
		CloseableHttpClient client = HttpClients.createDefault();
		ExecutorService executorService = Executors.newFixedThreadPool(10);
		List<Future<CloseableHttpResponse>> respList = new ArrayList<>();
		
		for(int i=0; i<10; i++) {
			CallableRequest req = new CallableRequest(client);
			Future<CloseableHttpResponse> future = executorService.submit(req);
			respList.add(future);
		}

		for(Future<CloseableHttpResponse> respFuture: respList) {
			CloseableHttpResponse resp = respFuture.get();
			System.out.println("Status code:" + resp.getStatusLine().getStatusCode());
			resp.close();
		}
		executorService.shutdown();
	}

}

HttpClient appears to have an internal pool that admits only a limited number of open requests (by default the pooling connection manager allows only two concurrent connections per route). It can process new requests only after the responses of earlier requests are closed.

I am closing each response after reading it, so that should free up open requests and the code should work. Right? But it does not.

It does not work because I am submitting 10 requests and then waiting for the responses in submission order, but the order in which HttpClient actually executes the requests can be different. If HttpClient can process only 4 concurrent requests, and my first submitted request happens to be the 5th one inside HttpClient, then I will wait forever: the 4 in-flight requests must have their responses closed before my first request can proceed, but the requests that have completed are later in my list. That is why this program hangs.
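As an aside, one mitigation, assuming HttpClient 4.x, is to enlarge the connection pool when creating the client, for example by replacing HttpClients.createDefault() in the code above with:

CloseableHttpClient client = HttpClients.custom()
		.setMaxConnTotal(20)       // total pooled connections
		.setMaxConnPerRoute(10)    // pooled connections per host
		.build();

But that only raises the limit; the robust fix is to stop depending on the pool size altogether, as described next.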

The Solution

The solution is to wait for results in order of completion, not in order of submission. To do that you use an ExecutorCompletionService to submit your requests. Its take() method returns the next completed task, so you can process the results of your requests in the order in which they complete. This way your HttpClient will not get locked up. Following is the code:


import java.io.IOException;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class HttpClientTest2 {
	public static class CallableRequest implements Callable<CloseableHttpResponse> {
		CloseableHttpClient client;
		
		public CallableRequest(CloseableHttpClient client) {
			this.client = client;
		}
		
		@Override
		public CloseableHttpResponse call() throws Exception {
			HttpGet request = new HttpGet("https://www.yahoo.com");
			CloseableHttpResponse response = client.execute(request);
			return response;
		}
		
	}
	public static void main(String[] args) throws IOException, InterruptedException, ExecutionException {
		CloseableHttpClient client = HttpClients.createDefault();
		ExecutorService executorService = Executors.newFixedThreadPool(10);
		CompletionService<CloseableHttpResponse> completionService = new ExecutorCompletionService<>(executorService);

		for (int i = 0; i < 10; i++) {
			completionService.submit(new CallableRequest(client));
		}

		// take() blocks until any submitted task completes, so responses are
		// consumed (and closed) in completion order rather than submission order
		for (int i = 0; i < 10; i++) {
			CloseableHttpResponse resp = completionService.take().get();
			System.out.println("Status code:" + resp.getStatusLine().getStatusCode());
			resp.close();
		}
		executorService.shutdown();
	}

}

This code works correctly regardless of how the processing of the requests gets reordered across threads.

Saturday 12 June 2021

Accessing postgres running on your local machine from Minikube cluster

I was looking to containerize my application, which runs in Wildfly and Tomcat and connects to a Postgres database. For experimenting with this I installed Minikube on my CentOS 7 laptop.

I did not want to move my postgres database into the Kubernetes cluster because of the ephemeral nature of containers. I wanted to move my Wildfly and Tomcat instances to Kubernetes and connect from those containers to the database running on my local machine. In my production environment I plan to use the AWS Postgres service.

Connecting to the postgres server on my local machine was not straightforward, but once you know how to do it, it is an easy process. Here I describe what I learned in the process.


Running postgres service on IP address interfaced with Minikube

Minikube runs inside a docker container. Docker creates several virtual networks on your machine. These interfaces are like multiple LAN cables connected to your computer, with the other end of each cable connected to a different container. Your computer has multiple IP addresses assigned to it, one for each interface. When you try to connect back from a container, you need to connect to the IP address that your computer was assigned on the network hosting that container.

You can run the following command to list all interfaces attached to your computer.

ip addr
On my computer it returned the following output.

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether 70:5a:0f:2a:78:76 brd ff:ff:ff:ff:ff:ff
3: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 44:85:00:58:b2:a9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.7/24 brd 192.168.1.255 scope global noprefixroute dynamic wlp2s0
       valid_lft 61564sec preferred_lft 61564sec
    inet6 fe80::c8fe:901b:5a99:57ca/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:d1:46:6b brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:d1:46:6b brd ff:ff:ff:ff:ff:ff
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:28:3e:b3:9e brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:28ff:fe3e:b39e/64 scope link 
       valid_lft forever preferred_lft forever
8: br-fdf8159cfe40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:22:25:29:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.49.1/24 brd 192.168.49.255 scope global br-fdf8159cfe40
       valid_lft forever preferred_lft forever
    inet6 fe80::42:22ff:fe25:293e/64 scope link 
       valid_lft forever preferred_lft forever
14: veth683e46d@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-fdf8159cfe40 state UP group default 
    link/ether 1e:b5:dd:3f:2c:87 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::1cb5:ddff:fe3f:2c87/64 scope link 
       valid_lft forever preferred_lft forever

Now run the following command to find the IP address of the Minikube container.

minikube ip
On my local machine it returned following IP address:
192.168.49.2
Now you can analyze the output of "ip addr" and see that interface 8 is connected to Minikube, and the IP address of the virtual interface on that network is 192.168.49.1.

For your postgres instance to be reachable from Minikube containers, it needs to listen on 192.168.49.1.

You need to modify the /var/lib/pgsql/13/data/postgresql.conf file to include 192.168.49.1 in the listen addresses. My file reads as follows:
listen_addresses = 'localhost,192.168.49.1'             # what IP address(es) to listen on;
Now you need to modify the /var/lib/pgsql/13/data/pg_hba.conf file to allow access from the Minikube network. My file reads like this:
# TYPE  DATABASE        USER            CIDR-ADDRESS            METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     trust
#local   all         postgres                             password
# IPv4 local connections:
host    all             all             127.0.0.1/32           password
# IPv6 local connections:
host    all             all             ::1/128                password 
#localhost and kubernetes connection
host    all             all             192.168.49.0/24         password
local   all             dashboard                              password
Now restart the postgres service using the following command:
sudo systemctl restart postgresql-13
Now you are ready to connect to your local postgres server from containers in the Minikube cluster. Use 192.168.49.1 as the host IP address when connecting.
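For example, a JDBC URL for an application container would look like this (mydb is a placeholder database name):

jdbc:postgresql://192.168.49.1:5432/mydb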

Testing connectivity to local postgres instance from Minikube cluster

For testing connectivity to your local postgres server from the Minikube cluster, you need to run a postgres image in the cluster. This image contains the "psql" command, which can connect to any remote postgres server. We will use it from a pod running the postgres image to confirm that connectivity is established.

Now run the following command to run the postgres image in a pod in the Minikube cluster.
kubectl run postgres --image=postgres --env="POSTGRES_PASSWORD=mysecretpassword"
Now get a list of running pods using the following command:
kubectl get pods
On my computer it returned the following output:
NAME                                     READY   STATUS    RESTARTS   AGE
balanced-5744b548b4-fnvff                1/1     Running   3          13d
hello-minikube-6ddfcc9757-vfpcj          1/1     Running   3          13d
hello-node-7567d9fdc9-bpcfd              1/1     Running   3          13d
kubernetes-bootcamp-769746fd4-q65cm      1/1     Running   3          13d
kubernetes-bootcamp-769746fd4-tv2bd      1/1     Running   3          13d
postgres                                 1/1     Running   0          2m44s
tomcat-jas-deployment-85f8f68b99-jk6zr   2/2     Running   0          3h46m
I can see that my pod is running. Now run the following command to connect to the postgres pod and open a terminal session in it.
kubectl exec postgres -t -i -- /bin/bash
The transcript of my session is as follows:
localhost.localdomain: ~/docker-files/tomcat-jas >kubectl exec postgres -t -i -- /bin/bash
root@postgres:/# psql -h 192.168.49.1 -U postgres
Password for user postgres: 
psql (13.3 (Debian 13.3-1.pgdg100+1))
Type "help" for help.
The password used here is the password for my local postgres server instance. From this psql prompt I can access all my tables.

Initially I faced connectivity problems, and they were resolved after running the following command:
sudo iptables --flush
Use the above command only if you actually need it; it flushes all current iptables rules, which can affect other services.

Please let me know in comments below if you face any problem in following this post.

Substituting environment variables in tomcat configuration file

I was looking to run my application, which runs under tomcat, in a Kubernetes cluster. For that I need to create a container image. I don't want to put the database username and password in the image, but at the same time I need them in the tomcat configuration. So I was looking for a way to substitute environment variables in the tomcat configuration.

In Kubernetes we can populate environment variables from secrets managed by Kubernetes. This way you can keep your secrets out of the image and still containerize your application.
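A minimal sketch of that wiring, assuming a hypothetical secret named db-credentials with a password key, looks like this in a pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: tomcat-example
spec:
  containers:
  - name: tomcat
    image: tomcat:latest
    env:
    - name: DBPASSWORD            # visible to tomcat as an environment variable
      valueFrom:
        secretKeyRef:
          name: db-credentials    # hypothetical secret name
          key: password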

You need to add the following argument to CATALINA_OPTS:


-Dorg.apache.tomcat.util.digester.PROPERTY_SOURCE=org.apache.tomcat.util.digester.EnvironmentPropertySource
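One common place to set this, assuming a standard Tomcat layout, is $CATALINA_BASE/bin/setenv.sh:

CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.tomcat.util.digester.PROPERTY_SOURCE=org.apache.tomcat.util.digester.EnvironmentPropertySource"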


After passing this argument you can use environment variables the same way as system properties, like this:


${ENV_VARIABLE_NAME}
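For example, a datasource in conf/context.xml could pull its settings from the environment (DBHOST, DBNAME, DBUSER and DBPASSWORD are hypothetical variables, e.g. populated from a Kubernetes secret as shown above):

<Resource name="jdbc/MyDS" auth="Container" type="javax.sql.DataSource"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://${DBHOST}:5432/${DBNAME}"
          username="${DBUSER}" password="${DBPASSWORD}"/>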


Substituting environment variables in Wildfly config

I was looking for a way to run my Wildfly application in Kubernetes. I realized that my database hostname, database name, username and password are written in Wildfly's standalone-full.xml file. I don't want to make these details part of the container image; this information is secret and confidential. Kubernetes provides a way to manage secrets and can make them available as environment variables.

It would be good if I could use those environment variables in my config file; then I would not have to hard-code sensitive information in the container image. Luckily Wildfly has a way of doing this. This is how you use it.


                <datasource jta="true" jndi-name="java:/PostgresDS" pool-name="PostgresDS" enabled="true" use-java-context="true">
                    <connection-url>jdbc:postgresql://${env.DBHOST}:5432/${env.DBNAME}</connection-url>
                    <driver>postgresql</driver>
                    <pool>
                        <min-pool-size>20</min-pool-size>
                        <initial-pool-size>20</initial-pool-size>
                        <max-pool-size>200</max-pool-size>
                        <prefill>true</prefill>
                    </pool>
                    <security>
                        <user-name>${env.DBUSER}</user-name>
                        <password>${env.DBPASSWORD}</password>
                    </security>
                </datasource>

As you can see, environment variable names are prefixed with "env.", which tells Wildfly that we are looking for an environment variable and not a system property. You can also specify a default value to use when the environment variable is undefined. The syntax for that is:


${env.DBUSER:sampleuser}

Here sampleuser is the default value.

Running local Docker image in Minikube

I was using Minikube to learn Kubernetes. Kubernetes pulls images from public container registries or from your private registry hosted on a public server. I did not want to upload my confidential container images to any public repository, or spend money hosting a private registry on a public server.

In this post I describe a method to run container images from my local machine in the single-node Kubernetes cluster hosted in Minikube. I am doing this experiment on a CentOS 7 laptop.

Creating a Dockerfile

To create a container image you first need to create a Dockerfile. In the Dockerfile you specify a base image and the modifications you want on top of the content provided by that base image.

You can refer to https://docs.docker.com/develop/develop-images/dockerfile_best-practices/ for more information on Dockerfile.

I am using a tomcat image and creating my image without any modifications to it. I just want to test whether Minikube will be able to pick up my image locally.

My Dockerfile content is:

FROM tomcat:latest

Setting docker environment variables

Minikube runs inside a docker container on the linux box and hosts its own docker daemon there. You can point your docker CLI at that daemon by setting some environment variables. These environment variables and their values are listed by running the following command.


minikube docker-env

The output on my local machine is:


export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.49.2:2376"
export DOCKER_CERT_PATH="/home/deployer/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"

# To point your shell to minikube's docker-daemon, run:
# eval $(minikube -p minikube docker-env)

Creating Docker image

After setting these environment variables (for example with eval $(minikube -p minikube docker-env), as the output above suggests), you can run the following command to build your image:


docker build -t tomcat-jas .

Now my docker image named tomcat-jas is built against Minikube's docker daemon, so the cluster can use it. Now we can use it in Kubernetes deployments.

Creating Kubernetes deployment

We need to create the deployment using a yaml file instead of providing the image name on the kubectl command line, because there we can't provide the imagePullPolicy parameter. My tomcat-jas.yaml file has the following content:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-jas-deployment
  labels:
    app: tomcat-jas
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat-jas
  template:
    metadata:
      labels:
        app: tomcat-jas
    spec:
      containers:
      - name: tomcat-jas
        image: tomcat-jas:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
      - name: httpd
        image: httpd:latest
        ports:
        - containerPort: 80

I am running two containers inside my pod: httpd listening on port 80 and tomcat listening on port 8080. The imagePullPolicy: Never setting tells Kubernetes to use the local image instead of pulling it from a registry. You can create the deployment using the following command:

kubectl create -f tomcat-jas.yaml

You can check the status of your deployment using the following command:


kubectl get deployment

Output on my machine is:


NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
balanced                1/1     1            1           13d
hello-minikube          1/1     1            1           13d
hello-node              1/1     1            1           13d
kubernetes-bootcamp     2/2     2            2           13d
tomcat-jas-deployment   1/1     1            1           2m21s

You can see that the deployment is running successfully.

Exposing the deployment as a service

You can expose your newly created deployment to the outside world using the following command:


kubectl expose deployment/tomcat-jas-deployment --type LoadBalancer --port 80,8080

You can check the status of your service using the following command:

kubectl get service

Output on my machine is:


NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE
balanced                LoadBalancer   10.105.156.60   <pending>     8080:31961/TCP                13d
hello-minikube          NodePort       10.106.246.13   <none>        8080:31293/TCP                13d
hello-node              LoadBalancer   10.104.148.22   <pending>     8080:31192/TCP                13d
kubernetes              ClusterIP      10.96.0.1       <none>        443/TCP                       13d
kubernetes-bootcamp     NodePort       10.99.64.116    <none>        8080:30470/TCP                12d
tomcat-jas-deployment   LoadBalancer   10.103.14.144   <pending>     80:30147/TCP,8080:30692/TCP   19s

Accessing tomcat and httpd from your browser

Port 80 of our deployment is mapped to port 30147, and port 8080 is mapped to port 30692 of the Minikube host. The IP address of the host can be checked with the following command:

minikube ip

The output of the above command for me is:


192.168.49.2

You can combine the port numbers with the host IP address to form the URLs of the exposed services. For me they are http://192.168.49.2:30147 (httpd) and http://192.168.49.2:30692 (tomcat).
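Alternatively, minikube can look the URLs up for you:

minikube service tomcat-jas-deployment --url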

Thursday 3 June 2021

Fixing JBOSS-LOCAL-USER: javax.security.sasl.SaslException: ELY05128: Failed to read challenge file

Recently I was trying to access EJBs hosted in wildfly 23 from a remote machine. Earlier I had tested the client running on the same machine and it worked fine. But when I put the client on a remote machine, it started failing with a strange FileNotFoundException.


Caused by: javax.security.sasl.SaslException: Authentication failed: all available authentication mechanisms failed:
   JBOSS-LOCAL-USER: javax.security.sasl.SaslException: ELY05128: Failed to read challenge file [Caused by java.io.FileNotFoundException: /xxx/wildfly/standalone/tmp/auth/local3418030740192890591.challenge (No such file or directory)]
        at org.jboss.remoting3.remote.ClientConnectionOpenListener.allMechanismsFailed(ClientConnectionOpenListener.java:109) ~[jboss-client.jar:20.0.1.Final]
        at org.jboss.remoting3.remote.ClientConnectionOpenListener$Capabilities.handleEvent(ClientConnectionOpenListener.java:445) ~[jboss-client.jar:20.0.1.Final]
        at org.jboss.remoting3.remote.ClientConnectionOpenListener$Capabilities.handleEvent(ClientConnectionOpenListener.java:244) ~[jboss-client.jar:20.0.1.Final]
        at org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92) ~[jboss-client.jar:20.0.1.Final]
        at org.xnio.conduits.ReadReadyHandler$ChannelListenerHandler.readReady(ReadReadyHandler.java:66) ~[jboss-client.jar:20.0.1.Final]
        at org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:89) ~[jboss-client.jar:20.0.1.Final]
        at org.xnio.nio.WorkerThread.run(WorkerThread.java:591) ~[jboss-client.jar:20.0.1.Final]
        at ...asynchronous invocation...(Unknown Source) ~[?:?]
        at org.jboss.remoting3.EndpointImpl.connect(EndpointImpl.java:599) ~[jboss-client.jar:20.0.1.Final]
        at org.jboss.remoting3.EndpointImpl.connect(EndpointImpl.java:565) ~[jboss-client.jar:20.0.1.Final]
        at org.jboss.remoting3.ConnectionInfo$None.getConnection(ConnectionInfo.java:82) ~[jboss-client.jar:20.0.1.Final]
        at org.jboss.remoting3.ConnectionInfo.getConnection(ConnectionInfo.java:55) ~[jboss-client.jar:20.0.1.Final]
        at org.jboss.remoting3.EndpointImpl.doGetConnection(EndpointImpl.java:499) ~[jboss-client.jar:20.0.1.Final]
        at org.jboss.remoting3.EndpointImpl.getConnectedIdentity(EndpointImpl.java:445) ~[jboss-client.jar:20.0.1.Final]
        at org.jboss.remoting3.UncloseableEndpoint.getConnectedIdentity(UncloseableEndpoint.java:52) ~[jboss-client.jar:20.0.1.Final]
        at org.wildfly.naming.client.remote.RemoteNamingProvider.getFuturePeerIdentityPrivileged(RemoteNamingProvider.java:151) ~[jboss-client.jar:20.0.1.Final]
        at org.wildfly.naming.client.remote.RemoteNamingProvider.lambda$getFuturePeerIdentity$0(RemoteNamingProvider.java:138) ~[jboss-client.jar:20.0.1.Final]
        at org.wildfly.naming.client.remote.RemoteNamingProvider$$Lambda$80/601221733.run(Unknown Source) ~[?:?]
        at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_31]
        at org.wildfly.naming.client.remote.RemoteNamingProvider.getFuturePeerIdentity(RemoteNamingProvider.java:138) ~[jboss-client.jar:20.0.1.Final]
        at org.wildfly.naming.client.remote.RemoteNamingProvider.getPeerIdentity(RemoteNamingProvider.java:126) ~[jboss-client.jar:20.0.1.Final]
        at org.wildfly.naming.client.remote.RemoteNamingProvider.getPeerIdentityForNaming(RemoteNamingProvider.java:106) ~[jboss-client.jar:20.0.1.Final]
        ... 90 more
        Suppressed: javax.security.sasl.SaslException: ELY05128: Failed to read challenge file
                at org.wildfly.security.sasl.localuser.LocalUserClient.evaluateMessage(LocalUserClient.java:108) ~[jboss-client.jar:20.0.1.Final]
                at org.wildfly.security.sasl.util.AbstractSaslParticipant.evaluateMessage(AbstractSaslParticipant.java:219) ~[jboss-client.jar:20.0.1.Final]
                at org.wildfly.security.sasl.util.AbstractSaslClient.evaluateChallenge(AbstractSaslClient.java:98) ~[jboss-client.jar:20.0.1.Final]
                at org.wildfly.security.sasl.util.AbstractDelegatingSaslClient.evaluateChallenge(AbstractDelegatingSaslClient.java:54) ~[jboss-client.jar:20.0.1.Final]
                at org.wildfly.security.sasl.util.PrivilegedSaslClient.lambda$evaluateChallenge$0(PrivilegedSaslClient.java:55) ~[jboss-client.jar:20.0.1.Final]
                at org.wildfly.security.sasl.util.PrivilegedSaslClient$$Lambda$128/635454149.run(Unknown Source) ~[?:?]
                at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_31]
                at org.wildfly.security.sasl.util.PrivilegedSaslClient.evaluateChallenge(PrivilegedSaslClient.java:55) ~[jboss-client.jar:20.0.1.Final]
                at org.jboss.remoting3.remote.ClientConnectionOpenListener$Authentication.lambda$handleEvent$0(ClientConnectionOpenListener.java:649) ~[jboss-client.jar:20.0.1.Final]
                at org.jboss.remoting3.remote.ClientConnectionOpenListener$Authentication$$Lambda$129/1032360688.run(Unknown Source) ~[?:?]
                at org.jboss.remoting3.EndpointImpl$TrackingExecutor.lambda$execute$0(EndpointImpl.java:991) ~[jboss-client.jar:20.0.1.Final]
                at org.jboss.remoting3.EndpointImpl$TrackingExecutor$$Lambda$127/1508635946.run(Unknown Source) ~[?:?]
                at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35) ~[jboss-client.jar:20.0.1.Final]
                at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1982) ~[jboss-client.jar:20.0.1.Final]
                at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486) ~[jboss-client.jar:20.0.1.Final]
                at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377) ~[jboss-client.jar:20.0.1.Final]
                at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_31]

Initially the problem looked like a bug in the LocalUserClient class, but later I found that it is intended functionality. To authenticate local users over SASL, the server creates a challenge file on the server machine and sends the path of that file to the client, and the client is supposed to return the content of the file. If the client runs on the same machine, it can read the file and return its content, and authentication passes. If you are connecting from a remote machine, it fails.

So how do you authenticate from a remote client?

You need to comment out the local authentication line in your standalone-full.xml file to force it to use a remote authentication mechanism.


            <security-realm name="ApplicationRealm">
                <server-identities>
                    <ssl>
                        <keystore path="application.keystore" relative-to="jboss.server.config.dir" keystore-password="xxx" alias="xxx" key-password="xxx" generate-self-signed-certificate-host="localhost"/>
                    </ssl>
                </server-identities>
                <authentication>
                   <!-- <local default-user="$local" allowed-users="*" skip-group-loading="true"/>-->
                    <properties path="application-users.properties" relative-to="jboss.server.config.dir"/>
                </authentication>
                <authorization>
                    <properties path="application-roles.properties" relative-to="jboss.server.config.dir"/>
                </authorization>
            </security-realm>

After that you need to use the add-user.sh command to add a user to wildfly.
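For example (user and password are placeholders):

./bin/add-user.sh -a -u user -p password

Now create a wildfly-config.xml file with this user's credentials on the client machine.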


<configuration>
    <authentication-client xmlns="urn:elytron:1.0">
        <authentication-rules>
            <rule use-configuration="default"/>
        </authentication-rules>
        <authentication-configurations>
            <configuration name="default">
                <sasl-mechanism-selector selector="#ALL"/>
                <set-user-name name="user"/>
                <credentials>
                    <clear-password password="password"/>
                </credentials>
            </configuration>
        </authentication-configurations>
    </authentication-client>
</configuration>

Now you can pass this credentials file to your EJB client using the following parameter:


-Dwildfly.config.url=<your dir>/wildfly-config.xml
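For example, to run the client (the jar name and config path are placeholders):

java -Dwildfly.config.url=/home/user/wildfly-config.xml -jar ejb-client.jar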

Now your client should start working without any authentication errors.

Please comment if you need any more information on this.