Android Gradle, add native .so dependencies

Background

A few months ago, I wrote a Key-Value database for Android called SnappyDB based on Google’s LevelDB.

Since it uses native C++ code, the generated bundle contains native (.so) libraries along with JARs.

Distribution via a Maven repo is not a problem (once you get past the hassle of the publishing process); maven-android-plugin can help you include the shared libs.
The Maven dependency convention lets you specify the ABI (CPU architecture) and the library format (obviously .so in our case) you want to resolve, using a classifier:

Ex: resolving the ARM shared lib for SnappyDB

<dependency>
  <groupId>com.snappydb</groupId>
  <artifactId>snappydb-native</artifactId>
  <version>0.2.0</version>
  <classifier>armeabi</classifier>
  <type>so</type>
</dependency>

This approach works fine if you use Maven & Eclipse ADT as a build system, until you succumb to Gradle’s siren call!

Android Studio & Gradle

The Android Gradle plugin gracefully handles all JAR dependencies by using Maven repos (among others …)

Ex: declaring a dependency inside build.gradle

dependencies {
     compile 'commons-io:commons-io:2.4'
}

But it struggles when it comes to native dependencies: unlike with Maven, you can’t¹ write something like this:

dependencies {
       compile 'com.snappydb:snappydb-native:2.+:arm-v7a'
}

This is because NDK support is still a work in progress in the Android plugin (as it is in Android Studio).

¹ Technically speaking you can, but Gradle will just ignore these native files since it doesn’t know what to do with them.

jniLibs to the rescue!

In their 0.7.2 release of the Android plugin, Google introduced a new folder, ‘jniLibs‘, to the source sets. This means that you can now add your prebuilt .so files to this folder, and the Android plugin will take care of packaging those native libraries inside your APK.

.
├── AndroidManifest.xml
└── jniLibs
    ├── armeabi
    │   └── libsnappydb-native.so
    ├── armeabi-v7a
    │   └── libsnappydb-native.so
    ├── mips
    │   └── libsnappydb-native.so
    └── x86
        └── libsnappydb-native.so

This feature is great, but developers still need to download and copy their prebuilt .so files manually, which isn’t ideal, especially if you use a Continuous Integration server like Jenkins or Travis.

A lot of hacks and workarounds emerged to sort this out, but most of them are verbose and still require users to download their native dependencies manually.

So, you get the picture. There has to be a better way.

Meet android-native-dependencies

android-native-dependencies is a Gradle plugin I wrote to automate resolving, downloading, and copying the native dependencies into the jniLibs folder, so the Android plugin can include them automatically in your APK build.

The plugin uses the same repositories declared for resolving regular (JAR) dependencies.
Here is an example:

buildscript {
  repositories {
    mavenCentral()
  }
  dependencies {
    classpath 'com.android.tools.build:gradle:0.10.+'
    classpath 'com.nabilhachicha:android-native-dependencies:0.1'
  }
}

apply plugin: 'android'
apply plugin: 'android-native-dependencies'

native_dependencies {
    artifact 'com.snappydb:snappydb-native:0.2+:armeabi'
    artifact 'com.snappydb:snappydb-native:0.2+:x86'
}

dependencies {
    //regular Jar dependencies ...
}

Convention

The artifact DSL follows the Maven artifact naming convention; thus, you can use either of the following syntaxes:

  • abbreviated group:name:version[:classifier]
// adding the x86 classifier will resolve only Intel's (.so) lib
native_dependencies {
    artifact 'com.snappydb:snappydb-native:0.2+:x86'
}

// omitting the classifier will resolve all supported architectures
native_dependencies {
    artifact 'com.snappydb:snappydb-native:0.2+'
}
  • map-style notation
// adding the x86 classifier will resolve only Intel's (.so) lib
native_dependencies {
    artifact group: 'com.snappydb', name: 'snappydb-native', version: '0.2+', classifier: 'x86'
}

// omitting the classifier will resolve all supported architectures
native_dependencies {
    artifact group: 'com.snappydb', name: 'snappydb-native', version: '0.2+'
}

In both notations, the classifier is optional. When omitted, the plugin tries to resolve the artifacts for all supported architectures: armeabi, armeabi-v7a, x86 and mips.
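To make the abbreviated notation concrete, here is a small sketch (my own illustration, not part of the plugin) of how a 'group:name:version[:classifier]' string decomposes:

```java
// Illustration only: decomposing the Maven-style
// "group:name:version[:classifier]" notation used by the artifact DSL.
public class ArtifactNotation {
    static String[] parse(String notation) {
        String[] parts = notation.split(":");
        if (parts.length < 3 || parts.length > 4) {
            throw new IllegalArgumentException(
                "expected group:name:version[:classifier], got: " + notation);
        }
        return parts;
    }

    public static void main(String[] args) {
        String[] p = parse("com.snappydb:snappydb-native:0.2+:x86");
        // p[0] = group, p[1] = name, p[2] = version, p[3] = classifier (optional)
        System.out.println(p[3]); // prints "x86"
    }
}
```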

Conclusion

Until we get full NDK support in the Android Gradle plugin, android-native-dependencies can help you automate your CI builds and the repetitive tasks around native dependencies. Please try it and send your feedback to @nabilhachicha.

Another great Gradle plugin I recommend is android-sdk-manager by Jake Wharton, which downloads and manages your Android SDK.

 

Posted in Android | 5 Comments

Set the best Zoom level for your Maps

The Android Maps V2 API makes it easy to work with maps and add POIs (Markers), but what if you have a lot of Markers to display?

Two possibilities come to mind:

  1. Use this wonderful Map extensions lib, which extends Maps V2 to nicely group stacked markers.
  2. Find an appropriate zoom level that is not overwhelming for the user and display a restricted number of POIs around a location.
    This is actually what I implemented for a professional project.
  • On the left image is what we’re trying to avoid (markers are stacked).
  • On the right image is the desired effect: markers are spread out because we set the perfect zoom level, allowing us to display just 3 POIs within a radius of 1 kilometer.

The idea to achieve this is pretty simple:

  1. First we set the zoom level to maximum (ground level):
  2. mSupportFrag = (SupportMapFragment) getSupportFragmentManager().findFragmentById(R.id.map);
    mMap = mSupportFrag.getMap();
    MAP_ZOOM_MAX = mMap.getMaxZoomLevel();
    MAP_ZOOM_MIN = mMap.getMinZoomLevel();
    
    mMap.moveCamera(CameraUpdateFactory.newLatLngZoom(loc, MAP_ZOOM_MAX));
    
  3. We use LatLngBounds to get the rectangle of the map currently visible to the user, and count the available POIs within this zoom level:
  4. LatLngBounds bounds = mMap.getProjection().getVisibleRegion().latLngBounds;
    
    for (Marker k : mListMarkers) {
       if (bounds.contains(k.getPosition())) {
          currentFoundPoi++;
       }
    }
    
  5. If we reach the desired number of POIs, we stop.
  6. keepSearchingForWithinRadius = (Math.round(location.distanceTo(latlngToLocation(bounds.northeast)) / 1000) > radius) ? false : true;
    
    if (keepSearchingForWithinRadius) {
    mMap.moveCamera(CameraUpdateFactory.newLatLngZoom(loc, currentZoomLevel--));
    }
    
  7. Otherwise, we keep zooming out until we reach the desired radius:
  8. //keep looking, but within limits (we don't want to go to outer space, do we?)
    if (currentZoomLevel < MAP_ZOOM_MIN) {
      break;
    }
    

More details and code are available on GitHub; feel free to hack it 🙂
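The per-step snippets above boil down to a single stopping rule. Here it is extracted as a small pure function (a sketch of my own; the names and parameters are illustrative, not the project’s code):

```java
// Illustration only: the decision "keep zooming out?" from the steps above,
// extracted as a pure function so the three exit conditions are explicit.
public class ZoomSearch {
    static boolean keepZoomingOut(int visiblePoi, int desiredPoi,
                                  double viewportRadiusKm, double maxRadiusKm,
                                  float currentZoom, float minZoom) {
        if (visiblePoi >= desiredPoi) return false;            // found enough POIs: stop
        if (Math.round(viewportRadiusKm) > maxRadiusKm) return false; // radius exceeded: stop
        return currentZoom > minZoom;                          // never zoom past the min level
    }

    public static void main(String[] args) {
        // too few POIs visible, small viewport, plenty of zoom left: keep going
        System.out.println(keepZoomingOut(1, 3, 0.5, 1.0, 21f, 2f)); // prints "true"
    }
}
```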


Using JSONP with JAX-RS

Over the past few weeks I spent some time developing REST services using JAX-RS; those services were invoked from jQuery scripts via Ajax. However, I quickly ran into the same-origin policy limitation.

The same origin policy prevents a document or script loaded from one origin (domain) from getting or setting properties of a document from another origin.

This is where JSONP comes to the rescue.
JSONP is designed to request data from a server in a different domain. Basically, it works by retrieving arbitrary JavaScript instead of plain JSON data: the browser sends the name of a callback JavaScript function, which will be fired once the content is returned.
Here is a detailed article if you want to know more about JSONP.
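As a sketch of the mechanism (the callback name "handleGreeting" below is an arbitrary example of mine, not from the project): the client injects a script tag whose URL carries the callback name, and the server wraps its JSON in a call to that function.

```java
// Illustration only: the "padding" in JSONP. Given a client request like
//   http://localhost:9998/rest/greeting/nabil?callback=handleGreeting
// the server responds with JavaScript instead of raw JSON:
//   handleGreeting({"msg":"Hello nabil"});
// which the browser executes, firing the client-defined callback.
public class Jsonp {
    static String pad(String callback, String json) {
        return callback + "(" + json + ");";
    }

    public static void main(String[] args) {
        System.out.println(pad("handleGreeting", "{\"msg\":\"Hello nabil\"}"));
        // prints: handleGreeting({"msg":"Hello nabil"});
    }
}
```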

To demonstrate this concept let’s create a simple project that contains a REST service and a JavaScript client.

Server part

We’ll create a simple REST service using Jersey and Grizzly.
For this, we use Maven:

mvn archetype:create -DgroupId=dev.nhachicha -DartifactId=restservice

Let’s add some dependencies:

		<dependency>
			<groupId>com.sun.jersey</groupId>
			<artifactId>jersey-grizzly2</artifactId>
			<version>1.12</version>
		</dependency>
		<dependency>
			<groupId>com.sun.jersey</groupId>
			<artifactId>jersey-json</artifactId>
			<version>1.12</version>
		</dependency>
		<dependency>
			<groupId>com.sun.jersey</groupId>
			<artifactId>jersey-grizzly2-servlet</artifactId>
			<version>1.12</version>
		</dependency>

Now we can add a REST resource class (Service). This class defines one method, sayHello, which takes two arguments: callback and username.

@Path("/rest")
public class Service {

	@GET
	@Produces("application/x-javascript")
	@Path("greeting/{username}")
	public JSONWithPadding sayHello (@QueryParam("callback") String callback,
								 @PathParam("username") String username) {
		Gson gson = new Gson();
		Message msg = new Message("Hello " + username);
		String json = gson.toJson(msg);
		return new JSONWithPadding(json,callback);
	}
}

It builds a message using the given username and returns the response as JSONP with the given callback.
Since we use Grizzly as our embedded web server, we need to tell it where to find our service:

ResourceConfig rc = new PackagesResourceConfig("dev.nhachicha");
HttpServer server = GrizzlyServerFactory.createHttpServer(
        UriBuilder.fromUri("http://localhost/").port(9998).build(), rc);
server.start();

To run the service

mvn exec:java

Our server is now responding at the following URL: http://localhost:9998/rest/greeting/

We’re done with the server part.

Client part

The client part is a simple jQuery script that makes Ajax calls to our REST service.
We specify "jsonp" as the expected data type:

var url = 'http://localhost:9998/rest/greeting/' + $("#name").val();
$.ajax({
    type: "GET",
    url: url,
    data: {},
    async: true,
    contentType: "application/json; charset=utf-8",
    dataType: "jsonp",
    success: function (data) {
        showResponse(data);
    },
    error: function (XMLHttpRequest, textStatus, errorThrown) {
        alert('error');
    },
    beforeSend: function (XMLHttpRequest) {
        // show loading
    },
    complete: function (XMLHttpRequest, textStatus) {
        // hide loading
    }
});

The complete project is available on GitHub.

Resources

http://jersey.java.net/nonav/apidocs/1.5/jersey/com/sun/jersey/api/json/JSONWithPadding.html
http://persistentdesigns.com/wp/2009/08/jsonwithpadding-callbacks-json-xml-string-and-the-genericentity/
http://grizzly.java.net/nonav/docs/1.9/apidocs/com/sun/grizzly/http/servlet/ServletAdapter.html
http://www.fbloggs.com/2010/07/09/how-to-access-cross-domain-data-with-ajax-using-jsonp-jquery-and-php/


MySQL & Apache Derby as jdbcRealm for Apache Shiro

In this post I’d like to show you how to use Apache Derby or MySQL as a security realm for Apache Shiro.

Apache Shiro is a powerful and easy-to-use Java security framework that performs authentication, authorization, cryptography, and session management.

Step 1: creating a simple WebApp

First we need to create a simple WebApp using Maven:

mvn archetype:generate -DgroupId=dev.nabil -DartifactId=ShiroDemo -DarchetypeArtifactId=maven-archetype-webapp

Generate Eclipse configuration files (if you want to import the project into Eclipse):

mvn eclipse:eclipse 

Add the Jetty plugin to your pom.xml in order to run the WebApp:

<build>
    <plugins>
        <plugin>
            <groupId>org.mortbay.jetty</groupId>
            <artifactId>maven-jetty-plugin</artifactId>
        </plugin>
    </plugins>
    <finalName>ShiroDemo</finalName>
</build>

At this point, if you start Jetty:

mvn jetty:run

You should be able to access the WebApp at http://localhost:8080/ShiroDemo/

Step 2: securing some content

Now we’re going to use Apache Shiro to secure access to a JSP page.

Create a new directory “auth” and add a new JSP under it; let’s call it “BackOffice.jsp”.

<%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%>
<%@page import="org.apache.shiro.SecurityUtils"%>
<html>
<body>
<h2>Master <%= SecurityUtils.getSubject().getPrincipal() %> I'm Here To Serve You :)</h2>
</body>
</html>

This acquires and displays the currently authenticated user.

Now we have to create a database that will hold the list of authorized users along with their passwords.
I use Apache Derby for my staging environment (we’ll see later how we could use MySQL):

CREATE TABLE T_CUSTOMER
(
   IDCUSTOMER varchar(255) PRIMARY KEY NOT NULL,
   PINCODE varchar(255) NOT NULL
);
INSERT INTO T_CUSTOMER (IDCUSTOMER, PINCODE) VALUES ('nabil','changeit');

Now that we have our database ready, we will enable Shiro in our project by adding a servlet filter to our web.xml:

<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" id="WebApp_ID" version="2.5">
	<filter>
	    <filter-name>ShiroFilter</filter-name>
	    <filter-class>
	        org.apache.shiro.web.servlet.IniShiroFilter
	    </filter-class>
	    <!-- no init-param means load the INI config
	        from classpath:shiro.ini --> 
	</filter>
	<filter-mapping>
	     <filter-name>ShiroFilter</filter-name>
	     <url-pattern>/*</url-pattern>
	</filter-mapping>
	
  <display-name>Archetype Created Web Application</display-name>
</web-app>

Don’t forget to add Shiro’s dependencies to pom.xml:

<dependency>
    <groupId>org.apache.shiro</groupId>
    <artifactId>shiro-core</artifactId>
    <version>1.1.0</version>
</dependency>

<dependency>
    <groupId>org.apache.shiro</groupId>
    <artifactId>shiro-web</artifactId>
    <version>1.1.0</version>
</dependency>

<dependency>
    <groupId>commons-logging</groupId>
    <artifactId>commons-logging</artifactId>
    <version>1.1.1</version>
</dependency>

<dependency>
    <groupId>org.apache.derby</groupId>
    <artifactId>derbyclient</artifactId>
    <version>10.4.2.0</version>
</dependency>

<dependency>
    <groupId>com.jolbox</groupId>
    <artifactId>bonecp</artifactId>
    <version>0.7.1.RELEASE</version>
</dependency>

Finally, create shiro.ini (the configuration file) under the resources directory:

[main]
jdbcRealm=org.apache.shiro.realm.jdbc.JdbcRealm
jdbcRealm.authenticationQuery = select pincode from t_customer where idcustomer = ?

ds = com.jolbox.bonecp.BoneCPDataSource
ds.driverClass=org.apache.derby.jdbc.ClientDriver
ds.jdbcUrl=jdbc:derby://localhost:1527/shiro_schema
ds.username = APP
ds.password = APP
jdbcRealm.dataSource=$ds

[users]
[roles]
[urls]
/auth/** = authcBasic
/** = anon

As you can see, the configuration is pretty straightforward.
First we set up the JDBC realm; this is where Shiro will find the authorized users.
Then we map the URLs to be protected: all URLs under /auth require basic HTTP authentication,
while all other URLs can be accessed without authentication.

Note: the mapping order matters.
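For example (an illustrative snippet of mine, not from the project’s shiro.ini): URL matching is first-match-wins, so a general pattern listed first would shadow the protected one.

```ini
[urls]
# first-match-wins: this order protects /auth/**
/auth/** = authcBasic
/** = anon

# if "/** = anon" were listed first, it would match every request
# and the /auth/** line would never be consulted
```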

Now we’re ready to restart Jetty:

mvn clean jetty:run

Try to access http://localhost:8080/ShiroDemo/auth/BackOffice.jsp and you should be prompted to log in.

What about MySQL?

In this project I wanted to use two different databases: Apache Derby for the dev/staging environment and MySQL for production.

To achieve this, we will use Maven profiles and resource filtering.

First, add a new directory under src (let’s call it production) where we will create a new Shiro configuration file compatible with MySQL:

[main]
jdbcRealm=org.apache.shiro.realm.jdbc.JdbcRealm
jdbcRealm.authenticationQuery = select pincode from t_customer where idcustomer = ?

ds = com.mysql.jdbc.jdbc2.optional.MysqlDataSource
ds.serverName = localhost
ds.user = ADM
ds.password = secret123
ds.databaseName = shiro_schema
jdbcRealm.dataSource = $ds

[users]
[roles]
[urls]
/auth/** = authcBasic
/** = anon

As you can see, the configuration is almost the same, especially the mapping part; what differs is the data source, which uses the MySQL driver.
That’s why we need to add the appropriate dependency to the Maven pom.xml:

<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.6</version>
</dependency>

Putting it all together

<profiles>
  <!-- STAGING ENV (default) -->
        <profile>
            <id>staging</id>
            <activation>
                <activeByDefault>true</activeByDefault>
                <property>
                    <name>environment.type</name>
                    <value>staging</value>
                </property>
            </activation>
            <properties>
                <jdbc.user>APP</jdbc.user>
                <jdbc.passwd>APP</jdbc.passwd>
                <jdbc.url>jdbc:derby://localhost:1527/shiro_schema</jdbc.url>
                <jdbc.driver>org.apache.derby.jdbc.ClientDriver</jdbc.driver>
            </properties>
            <build>
	            <resources>
					<resource>
						<directory>src/main/resources</directory>
						<filtering>true</filtering>
					</resource>
				</resources>
			</build>
			<dependencies>
                <dependency>
						<groupId>org.apache.derby</groupId>
						<artifactId>derbyclient</artifactId>
						<version>10.4.2.0</version>						
			  	</dependency>
            </dependencies>
        </profile>
        
        <!-- PRODUCTION ENV -->
        <profile>
            <id>production</id>
            <activation>
                <property>
                    <name>environment.type</name>
                    <value>prod</value>
                </property>
            </activation>
            <properties>
                <jdbc.user>ADM</jdbc.user>
                <jdbc.passwd>secret123</jdbc.passwd>
                <jdbc.ds>com.mysql.jdbc.jdbc2.optional.MysqlDataSource</jdbc.ds>
                <jdbc.serverName>localhost</jdbc.serverName>
                <jdbc.databaseName>shiro_schema</jdbc.databaseName>
            </properties>
            <build>
	            <resources>
					<resource>
						<directory>src/production/resources</directory>
						<filtering>true</filtering>
					</resource>
				</resources>
			</build>
			<dependencies>
				<dependency>
		            <groupId>mysql</groupId>
		            <artifactId>mysql-connector-java</artifactId>
		            <version>5.1.6</version>
		        </dependency>
	        </dependencies>
        </profile>
        
  </profiles>

Here we separate the staging configuration (Apache Derby) from the production configuration, which includes the MySQL driver and some specific properties. This allows us to switch environments easily while maintaining a single pom.xml.

  • To build and run for staging
mvn clean jetty:run

This uses the staging profile, as it’s the default.

  • To build for production
mvn clean jetty:run -Denvironment.type=prod

This project is available on GitHub.


XMPP client with Android

The use case

I needed to send messages asynchronously to my Android app. The first choice that popped into my mind was JMS, but since Android (I should say Dalvik) does not include all javax packages (especially javax.naming.*), I gave up on using JMS under Android.

Instead, I wrote a simple MessageDrivenBean on the server side (using the Java EE 6 annotation @MessageDriven(mappedName=”jms/myFancyTopic”)) that consumes my JMS messages and then writes them back (as XMPP messages) to an XMPP server (pubsub). The Android application then subscribes and listens for incoming XMPP messages to retrieve the content.

So far so good.

I was looking for a good XMPP library for Android. After some searching, I came to the conclusion that there are two ways to achieve my design:
– Do it yourself: code an XMPP client from scratch that conforms to Dalvik’s limitations
– Use asmack
Since I’m a lazy programmer, I chose the latter :)

Asmack

Asmack is the Android build of the famous Smack API (an open-source XMPP — formerly Jabber — client library for instant messaging and presence).

There are some known bugs, though.

 1) Use IPv4

Otherwise you’ll get an exception: java.net.SocketException: Bad address family

System.setProperty("java.net.preferIPv6Addresses", "false");

2) Thou shalt use BKS !

The second problem I encountered relates to the location and type of the truststore under Android.
Unlike a regular Java Virtual Machine, where the truststore is located under

/lib/security/cacerts

In Android the cacerts is here

/system/etc/security/

(I don’t know if this path is standardized or subject to change – I tested successfully with Froyo 2.2.)

So you also have to set these properties:

config.setTruststorePath("/system/etc/security/cacerts.bks");
config.setTruststorePassword("changeit");
config.setTruststoreType("bks");

Another detail: Android uses a different format to store certificates. Unlike the regular JVM, where the keystore and truststore containers are JKS (Java KeyStore), Android uses BKS (Bouncy Castle KeyStore).

3) Work around ClassCastException: org.jivesoftware.smack.util.PacketParserUtils

The Smack JAR includes a smack.providers file under META-INF; this file allows the XMPP ProviderManager to be configured when initialized. But since Dalvik does not allow loading META-INF files from the filesystem, we have to register every provider manually.
I used the code provided by Florian Schmaus from the gtalksms project:

ConfigureProviderManager.configureProviderManager();

Note: I attached my modified version of ConfigureProviderManager here.

Putting it all together

Here is the Android XMPP client


ConfigureProviderManager.configureProviderManager();
			System.setProperty("java.net.preferIPv6Addresses", "false");

			ConnectionConfiguration config = new ConnectionConfiguration("192.168.0.1");
			config.setDebuggerEnabled(true);// Enable xmpp debugging at Logcat

			// set up cert location
			config.setTruststorePath("/system/etc/security/cacerts.bks");
			config.setTruststorePassword("changeit");// this is the default password
			config.setTruststoreType("bks");

			mConnection = new XMPPConnection(config);
			mConnection.connect();
			// Log into the server as jack, passwd reacher
			mConnection.login("jack", "reacher");

			// Create a pubsub manager using an existing Connection
			String pubSubAddress = "pubsub." + mConnection.getServiceName();
			PubSubManager mgr = new PubSubManager(mConnection, pubSubAddress);

			// Get the node
			LeafNode node = (LeafNode) mgr.getNode("MY_NODE_NAME");
			node.subscribe("jack@192.168.0.1");

			// This will collect all XMPP messages
			PacketCollector collector = mConnection.createPacketCollector(new PacketFilter() {
				public boolean accept(Packet packet) {
					return true;
				}
			});

			while (true) {
				Packet packet = collector.nextResult();

				if (packet instanceof Message) {

					Collection pktExt = ((Message) packet).getExtensions();

					for (PacketExtension ext : pktExt) {
						if (ext instanceof EventElement) {
							String json = ((PayloadItem) ((ItemsExtension) ((EventElement) ext)
									.getExtensions().get(0)).getExtensions()
									.get(0)).toXML();

							System.out.println(">>>>>>GOT JSON?<<<<<<<<: " + json);

						}
					}
				}
			}

Bonus Track!

This is the server code (intercepting the JMS message, transforming it to JSON, and finally sending the JSON message to XMPP pubsub):


    		//=======================================
    		//==========    XMPP CONNECTION    ======
    		//=======================================

    		  ConnectionConfiguration config = new ConnectionConfiguration("127.0.0.1");
			  XMPPConnection connection = new XMPPConnection(config);
			  connection.connect();
			  connection.login("Test", "Test");// Log into the server

		      PubSubManager mgr = new PubSubManager(connection);

		      // Create the node
		      ConfigureForm form = new ConfigureForm(FormType.submit);
		      form.setAccessModel(AccessModel.open);
		      form.setDeliverPayloads(true);
		      form.setNotifyRetract(true);
		      form.setPersistentItems(true);
		      form.setPublishModel(PublishModel.open);

		      LeafNode myNode = null;
		        try{
		          mgr.getNode("MY_NODE_NAME");
		          //node exists, so delete
		          mgr.deleteNode("MY_NODE_NAME");

		        }catch(XMPPException e){//node does not exists,
		        }

				MyConstant.XMPP_PUB_SUB_LEAF = (LeafNode) mgr.createNode("MY_NODE_NAME", form);

			//=====================
    		//========  EJB  ======
    		//=====================

			@MessageDriven(mappedName = "jms/producer/Topic")
			public class XmppEjb implements MessageListener {

			@Override
			public void onMessage (Message message) {

				try {
					TextMessage msgTxt = (TextMessage) message;

					if (null != MyConstant.XMPP_PUB_SUB_LEAF ) {
						MyConstant.XMPP_PUB_SUB_LEAF.send(new PayloadItem(null,
								new SimplePayload("book", "pubsub:test:book", ""+msgTxt.getText()+"")));
					}

				} catch (JMSException e) {
					e.printStackTrace();

				} catch (XMPPException e) {
					e.printStackTrace();
				}

			}
		}

URLs

– My XMPP Server is Openfire

http://www.igniterealtime.org/projects/openfire/index.jsp

– Smack API

http://www.igniterealtime.org/projects/smack/index.jsp

http://www.igniterealtime.org/builds/smack/docs/latest/documentation/

– Asmack

http://code.google.com/p/asmack/


Standalone JMS client

Configuring a standalone JMS client can be a little confusing. On the Internet you’ll find many code snippets showing how to send or receive a message on a queue, but in my experience few sites explain how to create and configure the connection.

In this example I will use GlassFish 3.0.1, as it’s a fully certified Java EE 6 server; in addition, it contains OpenMQ, the reference implementation of the Java Message Service (a message-oriented middleware platform).

If you’re not familiar with the different concepts of JMS, I suggest that you take a look at this article: OpenMQ, JMS under GlassFish

Administered Objects

These are the objects that the client needs to know about and use.

Destination:

This is where the client sends or receives messages.

ConnectionFactories:

These allow the client to create a connection to a destination.

These objects are exposed by the broker (OpenMQ) using JNDI; thus, to obtain them you have to do a JNDI lookup.

For our examples, we will create a ConnectionFactory and a Topic using GlassFish’s asadmin utility:

asadmin create-jms-resource --restype javax.jms.ConnectionFactory jms/producer/ConnectionFactory
asadmin create-jms-resource --restype javax.jms.Topic jms/producer/Topic

You can check that the JMS resources are created using asadmin:

asadmin list-jms-resources

jms/producer/Topic
jms/producer/ConnectionFactory

or the Web Console

Client run within a Container

If you run your client using the ACC (Application Client Container), for example, the container can inject references to those JNDI resources:


@Resource(lookup = "jms/producer/ConnectionFactory")
 private static ConnectionFactory connectionFactory;
@Resource(lookup = "jms/producer/Topic")
 private static Topic topic;

They are also available via the InitialContext:


Context jndiContext = new InitialContext ();

// get Connection Factory

ConnectionFactory connectionFactory = (ConnectionFactory) jndiContext.lookup("jms/producer/ConnectionFactory");

Topic topic = (Topic) jndiContext.lookup("jms/producer/Topic");

Client run without a Container

Here comes the tricky part: what if the client runs as a simple Java application (e.g. java -jar my_jms_client.jar)?

Step 1

Go back to the machine where you installed GlassFish (i.e. the broker).

Open the Open Message Queue Administration Console

$GLASSFISH_HOME/imq/bin/imqadmin

Step 2

Connect to your broker (or create a new connection) as shown

We can see the previously created destinations

Step 3

Create a new object store (right-click on Object Store, then click Add Object Store).
Now you have to specify the following JNDI properties: java.naming.factory.initial and java.naming.provider.url

java.naming.factory.initial = com.sun.jndi.fscontext.RefFSContextFactory
java.naming.provider.url = file:///opt/java/my_broker

Step 4

Now that you have an object store, we need to add the destination that the clients will use, along with the connection factory.
Right-click on Destinations to add a new one.


jms/consumer/Topic is the JNDI name that we will use in our client code.
jms_producer_Topic is the actual destination name, the one we created previously with GlassFish asadmin.

Step 5

We need to do the same with the connection factory.

Right-click under Connection Factories (under our object store).

That’s it, we are done with the Message Queue Administration Console.

Step 6

Remember the directory from step 3: clients will use the configuration file within this directory to look up the administered objects.

Here is a simple client that subscribes to a JMS topic and listens for new messages:

    // ======================================
    // =           CLIENT                   =
    // ======================================

    public static void main(String[] args) {

         try {
            InitialContext jndiContext = new InitialContext();

            ConnectionFactory connectionFactory = (ConnectionFactory) jndiContext.lookup("ConsumerConnectionFactory");
            Topic topic = (Topic) jndiContext.lookup("jms/consumer/Topic");

            Connection connection = connectionFactory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(topic);

            consumer.setMessageListener(new Main());

            connection.start();

        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void onMessage(Message message) {
        try {
            System.out.println("Message received: " + ((TextMessage) message).getText());
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
 
You need to put this properties file in your classpath:

jndi.properties

 java.naming.factory.initial = com.sun.jndi.fscontext.RefFSContextFactory
 java.naming.provider.url = file:///opt/java/my_broker
 

Note: if you don’t want to put the jndi.properties file in your classpath, you can initialize the JNDI context by specifying the naming properties directly:

 Properties env = new Properties();
 env.put("java.naming.factory.initial","com.sun.jndi.fscontext.RefFSContextFactory");
 env.put("java.naming.provider.url", "file:///opt/java/my_broker");
 InitialContext jndiContext = new InitialContext(env);
 

But wait, there’s more 🙂

If you are curious, go look at what’s under the /opt/java/my_broker directory: there is a hidden file called .bindings.
You can edit this file to correct or modify the hostname/IP address of the broker. For example, in my case I needed to replace localhost with the correct IP address of my server so the client could reach the JNDI namespace.

This file can be given to all clients; this way, clients don’t have to worry about the broker configuration (loose coupling). So if I want to install my client on another machine under Windows, I just modify the JNDI property java.naming.provider.url to point to the .bindings file’s location.

This is my Maven configuration, in case you’re interested in the dependencies I’m using:

<dependencies>
    <dependency>
        <groupId>org.glassfish</groupId>
        <artifactId>javax.jms</artifactId>
        <version>${glassfish-version}</version>
    </dependency>

    <dependency>
        <groupId>com.sun.messaging.mq</groupId>
        <artifactId>imq</artifactId>
        <version>4.4.2</version>
        <scope>runtime</scope>
    </dependency>

    <dependency>
        <groupId>com.sun.messaging.mq</groupId>
        <artifactId>fscontext</artifactId>
        <version>4.4.2</version>
        <scope>runtime</scope>
    </dependency>
</dependencies>

Server side code


import javax.annotation.Resource;
import javax.jms.*;

public class Main {
    @Resource(lookup = "jms/producer/ConnectionFactory")
    private static ConnectionFactory connectionFactory;
    @Resource(lookup = "jms/producer/Topic")
    private static Topic topic;

    // ======================================
    // =           PRODUCER                 =
    // ======================================

    public static void main (String [] args){
        try {
            Connection connection = connectionFactory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(topic);

            // Sends a text message to the topic
            TextMessage message = session.createTextMessage();
            message.setText("Hello, Clients !! ");
            producer.send(message);

            connection.close();

        } catch (Exception e) {
            e.printStackTrace();
        }
    }

}

Mutual authentication with Tomcat (using a local Certificate Authority)

This is a quick guide that walks you through setting up mutual SSL authentication. To achieve this, we will create a local Certificate Authority (CA) that signs both the server and the client certificates.

Install OpenSSL

If you’re running Linux, chances are you already have the binaries; if not, a simple

apt-get install openssl
 

will do the job (on Debian/Ubuntu).

If you’re on Windows, you can use Cygwin.

Create new CA Authority

mkdir local_ca
mkdir local_ca/private
mkdir local_ca/public
cd local_ca
openssl req -new -x509 -days 3652 -sha1 -newkey rsa:1024 -keyout private/ca_private.key -out public/ca_public.crt -subj '/O=MyCompany/OU=Secure Root CA'

This creates the private and public key of our new Certificate Authority. (Remember the PEM pass phrase: it is needed for every signing operation.) A 1024-bit RSA key and SHA-1 are shown here for brevity; today you would use at least a 2048-bit key and SHA-256.

Create Server certificate

mkdir ~/local_ca/server
cd ~/local_ca/server
openssl genrsa -des3 -out server.key 1024

(As with the CA, we need to remember the password used to protect the server key; we will use it later.)

In order to get it signed by our local authority, we need to generate a Certificate Signing Request (CSR):

openssl req -new -key server.key -out server.csr

You have to use the password defined earlier to unlock the server private key, then fill in the information about this server (name, organization, etc.).

Now that we have our CSR, we’re going to sign it with our CA:

cd ~/local_ca
openssl x509 -req -days 360 -CA public/ca_public.crt -CAkey private/ca_private.key -CAcreateserial -in server/server.csr -out server/server.crt

Note that you have to enter the CA password here, not the server password.

This produces the signed server certificate (server.crt).
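Before handing the certificate to Tomcat, you can sanity-check that it really chains back to our CA with openssl verify (in our layout: openssl verify -CAfile public/ca_public.crt server/server.crt). Below is a self-contained sketch of the same idea, using a throwaway CA and key; -nodes skips the pass phrases so it runs non-interactively, unlike the real commands above:

```shell
cd "$(mktemp -d)"

# throwaway CA and server certificate, mirroring the steps above
openssl req -new -x509 -days 1 -nodes -subj '/O=Demo CA' \
    -keyout ca.key -out ca.crt 2>/dev/null
openssl req -new -nodes -subj '/CN=demo-server' \
    -keyout server.key -out server.csr 2>/dev/null
openssl x509 -req -days 1 -CA ca.crt -CAkey ca.key -CAcreateserial \
    -in server.csr -out server.crt 2>/dev/null

# should print "server.crt: OK", meaning the cert chains back to the CA
openssl verify -CAfile ca.crt server.crt
```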

We now have a private key and a CA-signed certificate. For Tomcat to be able to use them, we have to store them in a JKS (Java KeyStore) file. To do this, first create a PKCS12 file from server.crt and server.key:

cd ~/local_ca/server
openssl pkcs12 -export -in server.crt -inkey server.key -out server.p12

We then transform the PKCS12 file into a JKS with a small Java utility that ships with Jetty.

Download Jetty (http://repo1.maven.org/maven2/org/mortbay/jetty/jetty/4.2.12/jetty-4.2.12.jar) and copy the jar into your ~/local_ca/server directory:

java -cp jetty-4.2.12.jar org.mortbay.util.PKCS12Import server.p12 server.jks

Create a certificate truststore for the server

Our server certificate is signed by our local CA. For Tomcat to recognize that CA, we have to create another JKS file containing the public certificate of our local CA; we call it a truststore.

This will tell Tomcat to trust certificates (including our own) issued by our local CA.

Create a new, empty keystore:

cd ~/local_ca/server
keytool -genkey -alias foo -keystore truststore.jks
keytool -delete -alias foo -keystore truststore.jks

Note: keytool ships with the JDK. Generating a throwaway entry and deleting it right away is just a trick to end up with an empty keystore.

Once we have our empty truststore, we add the local CA public certificate to it:

cd ~/local_ca/server
keytool -import -alias root -keystore truststore.jks -trustcacerts -file ../public/ca_public.crt

To make sure the certificate was added, you can list the certificates installed in the truststore:

keytool -list -keystore truststore.jks

At this point we are done with the server side: we have server.jks and truststore.jks, which we will install in Tomcat later. But first, let’s create a client certificate.

Create Client Certificate

cd ~/local_ca
mkdir client
cd client
openssl req -new -newkey rsa:1024 -nodes -out client.req -keyout client.key

As we did earlier with the server certificate, we sign the client certificate with our local CA:

cd ~/local_ca/
openssl x509 -CA public/ca_public.crt -CAkey private/ca_private.key -req -in client/client.req -out client/client.pem -days 100

We put the client’s private key and signed certificate together into a PKCS12 file:

cd ~/local_ca/client
openssl pkcs12 -export -clcerts -in client.pem -inkey client.key -out client.p12 -name client

That’s it: we now have a client certificate and a server certificate, both signed by our local CA.

Let’s put this all together under Tomcat

Create a directory under your Tomcat conf directory to hold the JKS files (mine is <YOUR_TOMCAT_ROOT>/conf/certifs), then copy server.jks and truststore.jks into it.

Open your Tomcat configuration file <YOUR_TOMCAT_ROOT>/conf/server.xml

Add the following connector:

<Connector protocol="org.apache.coyote.http11.Http11Protocol"
           port="8443"
           scheme="https"
           secure="true"
           SSLEnabled="true"
           clientAuth="true"
           sslProtocol="TLS"
           keystoreFile="conf/certifs/server.jks"
           keystoreType="JKS"
           keystorePass="server"
           truststoreFile="conf/certifs/truststore.jks"
           truststoreType="JKS"
           truststorePass="server"
           acceptCount="100"
           maxThreads="200"
           minSpareThreads="5"
           maxSpareThreads="75"
           enableLookups="true"
           disableUploadTimeout="true" />

Your Tomcat is now set up to present a server certificate and to require client authentication.
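To smoke-test the handshake you can use openssl s_client with the client key pair. The sketch below is self-contained: openssl s_server stands in for Tomcat, and the CA and certificates are throwaway ones generated without pass phrases (the port 14433 and all file names are illustrative):

```shell
# Against your real Tomcat you would run:
#   openssl s_client -connect localhost:8443 \
#       -cert client/client.pem -key client/client.key -CAfile public/ca_public.crt
cd "$(mktemp -d)"

# throwaway CA plus CA-signed server and client certificates (no pass phrases)
openssl req -new -x509 -days 1 -nodes -subj '/O=Demo CA' \
    -keyout ca.key -out ca.crt 2>/dev/null
for who in server client; do
    openssl req -new -nodes -subj "/CN=$who" -keyout "$who.key" -out "$who.csr" 2>/dev/null
    openssl x509 -req -days 1 -CA ca.crt -CAkey ca.key -CAcreateserial \
        -in "$who.csr" -out "$who.crt" 2>/dev/null
done

# -Verify 1 makes the server demand a client certificate, like clientAuth="true"
openssl s_server -accept 14433 -cert server.crt -key server.key \
    -CAfile ca.crt -Verify 1 -quiet &
server_pid=$!
sleep 1

# the handshake succeeds only because we present a certificate signed by the same CA;
# a successful run reports "Verify return code: 0 (ok)"
result=$(echo quit | openssl s_client -connect localhost:14433 \
    -cert client.crt -key client.key -CAfile ca.crt 2>/dev/null)
echo "$result" | grep 'Verify return code'

kill "$server_pid"
```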
