OpenShift comes with several preconfigured deployment platforms, which they call cartridges. There are cartridges for Java app servers (JBoss, Tomcat), Python (Django), and Ruby on Rails. For my Gomoku app, though, I just use the bare-bones DIY OpenShift virtual machine. To my surprise, the DIY version of OpenShift already comes with JDK 7 (OpenJDK), Maven 3, and Ant 1.8 pre-installed. That saves me a ton of time. Since my Gomoku app is just a regular Maven project that uses Dropwizard, all I need to do is push my code into the RedHat OpenShift git repository (which resides inside my VM) to build and deploy the app.
There are some procedures I need to follow to make sure my app can run on OpenShift. First, if the app needs to open a socket connection, I need to make sure that it binds to the permitted IP address. This is defined as $OPENSHIFT_INTERNAL_IP, and the only allowed port is 8080. In Dropwizard I can easily configure that in the http section:
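A minimal sketch of the relevant config, assuming a Dropwizard 0.6-era `http` block (key names vary across Dropwizard versions):

```yaml
http:
  port: 8080
  adminPort: 8080
  bindHost: "@OPENSHIFT_INTERNAL_IP@"   # placeholder, replaced by the deploy hook
```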
You need to make sure your deploy script (more below) replaces the @OPENSHIFT_INTERNAL_IP@ placeholder with the actual permitted IP address. You also want to make sure that the adminPort is set to the same 8080 as the app server port; as far as I know, the OpenShift free tier only allows port 8080 to be open.
Second, OpenShift has several activation hooks to automate things when you push commits to git. There are hooks to start, stop, build, and deploy the app.
For my application, the build hook is just to build using Maven.
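A sketch of what a minimal build hook could look like (the path variable is OpenShift's standard environment variable; the Maven flags are my own choice):

```shell
#!/bin/sh
# .openshift/action_hooks/build (sketch)
# OpenShift DIY exports OPENSHIFT_REPO_DIR; Maven is pre-installed.
cd "$OPENSHIFT_REPO_DIR" || exit 1
mvn -q package
```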
After the build, the deploy hook is called. My deploy script replaces the @OPENSHIFT_INTERNAL_IP@ string in the app config file with the real permitted IP to bind to.
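A sketch of a deploy hook doing that substitution with sed. File names are placeholders; outside OpenShift the variables fall back to demo values, so the script can be tried anywhere:

```shell
#!/bin/sh
# .openshift/action_hooks/deploy (sketch)
# OpenShift exports OPENSHIFT_INTERNAL_IP; fall back to 127.0.0.1 locally.
CONFIG="${OPENSHIFT_REPO_DIR:-.}/gomoku.yml"
# demo stand-in config when none exists (the real file lives in the repo)
[ -f "$CONFIG" ] || printf 'http:\n  bindHost: "@OPENSHIFT_INTERNAL_IP@"\n' > "$CONFIG"
sed -i "s/@OPENSHIFT_INTERNAL_IP@/${OPENSHIFT_INTERNAL_IP:-127.0.0.1}/g" "$CONFIG"
```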
Finally, the start hook is called. This time I just run the JVM with my jar file. Dropwizard does a great job making this so easy.
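A sketch of a start hook, assuming a Dropwizard fat jar (jar and config names are placeholders):

```shell
#!/bin/sh
# .openshift/action_hooks/start (sketch)
# Dropwizard fat jars bundle everything, so starting is one java command.
cd "$OPENSHIFT_REPO_DIR" || exit 1
nohup java -jar target/gomoku.jar server gomoku.yml \
    > "${OPENSHIFT_LOG_DIR:-/tmp}/gomoku.log" 2>&1 &
```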
After this my Gomoku app is up and ready to serve requests. To summarize, the whole workflow is:
* Push your changes into your OpenShift Git repo
* The commit hooks get called; in turn they build, deploy, and start your app
* Serve!
I was new to OpenShift and got the whole thing done in under an hour. I bet the predefined cartridges (JBoss or Tomcat) could speed up this process even more.
Some service calls are expensive, and the underlying data don't change very often, so we tend to cache the results. A naive approach would be to find such methods and refactor them so that the cache is consulted before the call to the service is made. The better way is to use AOP, where you intercept those calls with 'aspects'. Then, in those aspects, you can decide whether to get the results from the cache or to make the expensive service call. Another common scenario is retrying when you encounter exceptions in your service calls: transient network issues (latency, timeouts, spillover in the load balancer…) or database hiccups. Normally, you should retry the call at least several times before giving up. As noted previously, a naive approach would be to visit every method and apply the retry logic. Or you could use AOP.
In this post, I'm going to talk about how to use an aspect-oriented approach to ease the refactoring effort. I will not cover the full-blown bytecode-level AOP solution that uses AspectJ with bytecode weaving. Instead, I will describe a lighter-weight form of aspect programming using Java's dynamic proxies and its reflection mechanism. I think it's pretty similar to the way Spring AOP works. The only difference is that my code assumes every target object implements an interface, so it does not have to use cglib to generate the proxies. Besides, I think programming to interfaces is a much cleaner and preferred style for your services and data access objects (DAOs).
In the end, you can decorate your methods with annotations/aspects like this:
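A sketch of what such a decorated DAO interface might look like (the annotation and method names here are my own placeholders, not the post's actual code; the annotations themselves are defined properly further below):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.List;

// Placeholder annotation definitions (covered in detail later in the post).
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface Timeit {}

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface Retry { int maxRetries() default 3; }

// A DAO method decorated with both aspects.
interface CustomerDao {
    @Timeit
    @Retry(maxRetries = 3)
    List<String> findCustomers(String region);
}

class DecorationCheck {
    // Verifies reflectively that both aspect annotations are visible at runtime.
    static boolean decorated() {
        try {
            Method m = CustomerDao.class.getMethod("findCustomers", String.class);
            return m.isAnnotationPresent(Timeit.class)
                && m.getAnnotation(Retry.class).maxRetries() == 3;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }
}
```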
Let’s define an example interface for the DAO and its implementation.
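A sketch of the interface and an implementation that fails roughly 30% of the time (names are placeholders):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Random;

interface NameDao {
    List<String> fetchNames();
}

class NameDaoImpl implements NameDao {
    private final Random random = new Random();

    @Override
    public List<String> fetchNames() {
        // Simulate a transient failure roughly 30% of the time.
        if (random.nextDouble() < 0.3) {
            throw new RuntimeException("simulated transient error");
        }
        return Arrays.asList("alice", "bob");
    }
}

class FlakyDemo {
    // Over many calls we should observe both successes and failures.
    static boolean bothOutcomes(int calls) {
        NameDao dao = new NameDaoImpl();
        int ok = 0, failed = 0;
        for (int i = 0; i < calls; i++) {
            try { dao.fetchNames(); ok++; }
            catch (RuntimeException e) { failed++; }
        }
        return ok > 0 && failed > 0 && ok + failed == calls;
    }
}
```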
I intentionally throw a RuntimeException 30% of the times this method runs to simulate a transient error that could be retried. Now comes the fun part: we will add functionality around this method without modifying its code. To begin, I want to time the method's performance and retry it if it fails (up to 3 times before giving up).
The easiest way to do this is to use annotations to denote your new aspects.
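A minimal version might look like this (the name `Timeit` follows the post's own usage):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker annotation for the timing aspect. RUNTIME retention is essential:
// the dynamic proxy inspects it reflectively when the method is invoked.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Timeit {}

class TimeitCheck {
    static boolean runtimeRetained() {
        Retention r = Timeit.class.getAnnotation(Retention.class);
        return r != null && r.value() == RetentionPolicy.RUNTIME;
    }
}
```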
And the Retry aspect with maximum of 3 retries before giving up:
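A sketch of the annotation with a configurable cap defaulting to 3 (the attribute name is my own placeholder):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Retry aspect annotation with a configurable cap, defaulting to 3 attempts.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Retry {
    int maxRetries() default 3;
}

class RetryCheck {
    static boolean defaultsToThree() {
        try {
            Object dflt = Retry.class.getMethod("maxRetries").getDefaultValue();
            return ((Integer) dflt) == 3;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }
}
```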
To make the annotations work with Java's dynamic proxies, we need to create an InvocationHandler for each of those annotations. For this, I first borrow the utility class from "Java Reflection in Action" (you can get the full source at the end of this post). I then create a base Interceptor on top of this invocation handler to make generating annotation-aware dynamic proxies easier.
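A condensed sketch of how such a base interceptor could look, assuming a small `Invoker` callback interface (the names mirror the post's prose, but the code is my own reconstruction):

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Aspect callback interface; each aspect implements this.
interface Invoker {
    Object execute(Object target, Method method, Object[] args) throws Throwable;
}

// Base interceptor: methods carrying the configured annotation are routed
// through the Invoker; everything else goes straight to the target.
class Interceptor implements InvocationHandler {
    private final Object target;
    private final Class<? extends Annotation> annotation;
    private final Invoker invoker;

    Interceptor(Object target, Class<? extends Annotation> annotation, Invoker invoker) {
        this.target = target;
        this.annotation = annotation;
        this.invoker = invoker;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // The post assumes interfaces everywhere, so the annotation is
        // looked up on the interface method.
        if (method.isAnnotationPresent(annotation)) {
            return invoker.execute(target, method, args);
        }
        try {
            return method.invoke(target, args);
        } catch (InvocationTargetException e) {
            throw e.getCause(); // unwrap so callers see the original exception
        }
    }

    @SuppressWarnings("unchecked")
    static <T> T createProxy(T target, Class<? extends Annotation> annotation, Invoker invoker) {
        return (T) Proxy.newProxyInstance(
                target.getClass().getClassLoader(),
                target.getClass().getInterfaces(),
                new Interceptor(target, annotation, invoker));
    }
}

// Tiny smoke test: an @Upper aspect that upper-cases a greeting.
@Retention(RetentionPolicy.RUNTIME)
@interface Upper {}

interface Greeter { @Upper String greet(String name); }

class InterceptorDemo {
    static String run() {
        try {
            Greeter plain = name -> "hello " + name;
            Greeter proxied = Interceptor.createProxy(plain, Upper.class,
                    (target, method, args) -> ((String) method.invoke(target, args)).toUpperCase());
            return proxied.greet("world");
        } catch (Throwable t) {
            return "error: " + t;
        }
    }
}
```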
The nice thing about doing this is that in order to create an aspect based on an annotation, you just need to implement the Invoker interface shown above. Then you can create the dynamic proxy of the targeted object by calling:
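The call itself is a one-liner; assuming a DAO interface named `GameDao` and a timing Invoker (both placeholder names), it might look like:

```java
GameDao timed = Interceptor.createProxy(daoImpl, Timeit.class, new TimeitInvoker());
```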
Interceptor.createProxy takes 3 arguments: the targeted object to be proxied, the aspect annotation class and the object to handle the aspect. For the Timer (or Timeit) aspect, it could be as simple as this:
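A sketch of such a timing Invoker (the `Invoker` interface from earlier is repeated here so the snippet compiles on its own):

```java
import java.lang.reflect.Method;

interface Invoker {
    Object execute(Object target, Method method, Object[] args) throws Throwable;
}

// Timing aspect: note the clock before the call, invoke the real method,
// then report the elapsed time ("before and around" the join point).
class TimeitInvoker implements Invoker {
    @Override
    public Object execute(Object target, Method method, Object[] args) throws Throwable {
        long start = System.nanoTime();
        try {
            return method.invoke(target, args);
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(method.getName() + " took " + elapsedMs + " ms");
        }
    }
}

class TimeitDemo {
    // Time a simple reflective call and check the result came through.
    static boolean timesACall() {
        try {
            Object result = new TimeitInvoker().execute(
                    "hello", String.class.getMethod("toUpperCase"), null);
            return "HELLO".equals(result);
        } catch (Throwable t) {
            return false;
        }
    }
}
```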
Here is why this is an aspect: the execute method takes note of the current time, then invokes the original method call, and finally calculates how long the method call took. I believe in AspectJ terms this would be "around advice".
Similarly, I would create the Retry aspect by implementing the Invoker interface and call
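A sketch of that Retry invoker (again with a local copy of `Invoker` so it stands alone; names are placeholders):

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

interface Invoker {
    Object execute(Object target, Method method, Object[] args) throws Throwable;
}

// Retry aspect: re-invoke the method on RuntimeException, up to maxRetries
// attempts, then give up and rethrow the last failure.
class RetryInvoker implements Invoker {
    private final int maxRetries;
    RetryInvoker(int maxRetries) { this.maxRetries = maxRetries; }

    @Override
    public Object execute(Object target, Method method, Object[] args) throws Throwable {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            try {
                return method.invoke(target, args);
            } catch (InvocationTargetException e) {
                if (!(e.getCause() instanceof RuntimeException)) throw e.getCause();
                last = (RuntimeException) e.getCause(); // transient: try again
            }
        }
        throw last;
    }
}

// Demo target: fails twice with a transient error, then succeeds.
interface Counter { String next(); }

class FailsTwice implements Counter {
    private int calls = 0;
    @Override
    public String next() {
        if (++calls < 3) throw new RuntimeException("transient");
        return "ok on attempt " + calls;
    }
}

class RetryDemo {
    static String run() {
        try {
            Method m = Counter.class.getMethod("next");
            return (String) new RetryInvoker(3).execute(new FailsTwice(), m, null);
        } catch (Throwable t) {
            return "gave up";
        }
    }
}
```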
Now that we have the aspects to handle those annotations, how do we chain them in a correct order, one that makes sense? It all depends on your aspects' logic, but in this case I would put the Timer aspect outside of the Retry aspect. Confused? Here is the order of execution:
1. Enter the Timer aspect, take note of the current time
2. Enter the Retry aspect, retry count set to 0
3. Invoke the actual Dao method
4. If it fails, the Retry aspect catches the exception and retries! It keeps track of the number of retries (up to 3 times by default)
5. Either the call fails once the retries are exhausted, or it exits the Retry aspect and yields control back to the Timer aspect
6. The Timer aspect calculates how long this DAO method took
7. Return the result to the caller
One thing you need to pay close attention to is that the order of execution of those chained aspects is determined by the order in which you create them: each new proxy wraps the previous one, so the innermost aspect's proxy is created first and the outermost aspect's proxy is created last. For this example, this is the order of aspect creation:
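Concretely (using the placeholder names from this post's sketches of Interceptor.createProxy), the Retry proxy wraps the raw DAO first, and the Timer proxy wraps that:

```java
GameDao withRetry = Interceptor.createProxy(dao, Retry.class, new RetryInvoker(3));
GameDao timedAndRetried = Interceptor.createProxy(withRetry, Timeit.class, new TimeitInvoker());
```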
Here is the complete code in two simple classes. I hope you find it useful.
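A compact, self-contained reconstruction that wires everything together, showing both the aspect machinery and the creation order. All names and details are my own; treat it as a sketch of the technique, not the post's original listing:

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.List;

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface Timeit {}

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface Retry { int maxRetries() default 3; }

interface Invoker {
    Object execute(Object target, Method method, Object[] args) throws Throwable;
}

// Annotated methods go through the Invoker; the rest pass straight through.
class Interceptor implements InvocationHandler {
    private final Object target;
    private final Class<? extends Annotation> annotation;
    private final Invoker invoker;

    Interceptor(Object target, Class<? extends Annotation> annotation, Invoker invoker) {
        this.target = target; this.annotation = annotation; this.invoker = invoker;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        if (method.isAnnotationPresent(annotation)) return invoker.execute(target, method, args);
        try { return method.invoke(target, args); }
        catch (InvocationTargetException e) { throw e.getCause(); }
    }

    @SuppressWarnings("unchecked")
    static <T> T createProxy(T target, Class<? extends Annotation> annotation, Invoker invoker) {
        return (T) Proxy.newProxyInstance(target.getClass().getClassLoader(),
                target.getClass().getInterfaces(), new Interceptor(target, annotation, invoker));
    }
}

// Timing aspect: "around advice" reporting elapsed time.
class TimeitInvoker implements Invoker {
    public Object execute(Object target, Method method, Object[] args) throws Throwable {
        long start = System.nanoTime();
        try { return method.invoke(target, args); }
        catch (InvocationTargetException e) { throw e.getCause(); }
        finally {
            System.out.println(method.getName() + " took "
                    + (System.nanoTime() - start) / 1_000_000 + " ms");
        }
    }
}

// Retry aspect: re-invoke on RuntimeException, up to maxRetries attempts.
class RetryInvoker implements Invoker {
    private final int maxRetries;
    RetryInvoker(int maxRetries) { this.maxRetries = maxRetries; }

    public Object execute(Object target, Method method, Object[] args) throws Throwable {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            try { return method.invoke(target, args); }
            catch (InvocationTargetException e) {
                if (!(e.getCause() instanceof RuntimeException)) throw e.getCause();
                last = (RuntimeException) e.getCause();
            }
        }
        throw last;
    }
}

// Demo: a DAO that fails twice with a transient error, then succeeds.
interface GameDao { @Timeit @Retry List<String> loadMoves(); }

class GameDaoImpl implements GameDao {
    private int calls = 0;
    public List<String> loadMoves() {
        if (++calls < 3) throw new RuntimeException("transient");
        return Arrays.asList("e4", "e5");
    }
}

class AspectDemo {
    static List<String> run() {
        // Retry is innermost, so its proxy is created first; the timing
        // proxy wraps it and therefore runs outermost.
        GameDao withRetry = Interceptor.createProxy(new GameDaoImpl(), Retry.class, new RetryInvoker(3));
        GameDao timedAndRetried = Interceptor.createProxy(withRetry, Timeit.class, new TimeitInvoker());
        return timedAndRetried.loadMoves();
    }
}
```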
I spent most of my time troubleshooting this issue and found out that we needed to run "optimize" to make the index perform well. But anyway, I was tired of hearing users' complaints every day, so I got rid of Oracle full-text search and refactored the search service to use Apache Lucene.
Here is how it's done:
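A hedged sketch of what indexing and searching with the classic (2.x-era) Lucene API looks like; class, field, and path names are my own placeholders, not the post's actual code:

```java
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

class SearchService {
    private final String indexDir = "/var/index/search"; // placeholder path

    // Rebuild the index from the rows that used to feed Oracle Text.
    void index(Iterable<String[]> rows) throws IOException {
        IndexWriter writer = new IndexWriter(indexDir, new StandardAnalyzer(), true);
        for (String[] row : rows) {
            Document doc = new Document();
            doc.add(new Field("id", row[0], Field.Store.YES, Field.Index.UN_TOKENIZED));
            doc.add(new Field("body", row[1], Field.Store.NO, Field.Index.TOKENIZED));
            writer.addDocument(doc);
        }
        writer.optimize(); // the step Oracle Text made us run by hand
        writer.close();
    }

    Hits search(String userQuery) throws IOException, ParseException {
        IndexSearcher searcher = new IndexSearcher(indexDir);
        Query query = new QueryParser("body", new StandardAnalyzer()).parse(userQuery);
        return searcher.search(query);
    }
}
```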
I also have to admit that C# has a lot of language features that I really wish Java had too. Among them are the concepts of delegates and function pointers used for asynchronous method calls. I know I can get much the same thing with Groovy or Scala.
This is how I implement an asynchronous call in C#. The DoAsyncHelper class is an example of an extension method, akin to Groovy's category concept.
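A hedged sketch of the idea (names are placeholders, not the post's actual code): an extension method that runs any `Action` on a background thread via the delegate's `BeginInvoke`, then fires a callback when it completes. Note that delegate `BeginInvoke` is a classic .NET Framework feature and is not supported on .NET Core.

```csharp
using System;

public static class DoAsyncHelper
{
    // Extension method: call someAction.DoAsync(callback) on any Action.
    public static void DoAsync(this Action action, Action whenDone)
    {
        action.BeginInvoke(asyncResult =>
        {
            action.EndInvoke(asyncResult); // propagates any exception
            whenDone();
        }, null);
    }
}

public class Demo
{
    public static void Main()
    {
        Action work = () => Console.WriteLine("working...");
        work.DoAsync(() => Console.WriteLine("done"));
        Console.ReadLine(); // keep the process alive for the callback
    }
}
```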
Example: if you make 3 RemoteObject requests simultaneously, and let's assume the 1st request takes 2 seconds, the 2nd takes 10, and the 3rd takes 30, your results will come back after 42 seconds. In other words, you get the results when they ALL complete. This is because the AMF gateway takes all the requests and queues them up; it processes each in turn and, when all have run, returns the results. This is not a bug, and most of the time it is fine. In our application, though, this behavior shows its limitation. The app uses client-side caching to improve performance for everything except the memo-related services. We also fire multiple requests at the same time, which includes both the cached services and the memo-related services. The memo-related services can take a few seconds to run, which defeats the purpose of the cached services, since all the service requests are processed sequentially.
Solution: we define a separate HTTP channel for the memo-related services to use; the cached services use a different channel.
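A hedged sketch of what the BlazeDS side of that might look like (channel ids, destination names, and endpoint URLs are placeholders):

```xml
<!-- services-config.xml: one channel for cached services, one for memos -->
<channels>
    <channel-definition id="cached-amf" class="mx.messaging.channels.AMFChannel">
        <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/amf"
                  class="flex.messaging.endpoints.AMFEndpoint"/>
    </channel-definition>
    <channel-definition id="memo-amf" class="mx.messaging.channels.AMFChannel">
        <endpoint url="http://{server.name}:{server.port}/{context.root}/messagebroker/memoamf"
                  class="flex.messaging.endpoints.AMFEndpoint"/>
    </channel-definition>
</channels>

<!-- remoting-config.xml: point the memo destination at its own channel -->
<destination id="memoService">
    <channels>
        <channel ref="memo-amf"/>
    </channels>
</destination>
```

With the slow memo requests on their own channel, they get their own HTTP connection and queue, so they no longer hold up the fast cached calls.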
Technology stack: Hibernate with JPA through Spring's JpaTemplate helper.
So for example, suppose this is what my SQL template looks like
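A sketch of such a template; the table, columns, and the `@...@` placeholder convention are my own illustration of the approach:

```sql
SELECT id, title, created_date
FROM   documents
WHERE  status = '@status@'
AND    created_date >= TO_DATE('@fromDate@', 'YYYY-MM-DD')
ORDER  BY created_date DESC
```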
This is what the corresponding Java code looks like:
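A sketch of the substitution step. The post uses Apache Commons' StringUtils.replace; plain `String.replace` does the same literal replacement, so this sketch sticks to the JDK. Names are placeholders:

```java
class SqlTemplateFiller {
    // Swap the @...@ placeholders in the template for real values.
    static String fill(String template, String status, String fromDate) {
        return template
                .replace("@status@", status)
                .replace("@fromDate@", fromDate);
    }
}
```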
I use the native query just as a placeholder for my SQL template, then use Apache Commons' StringUtils to replace those variables with the actual values. I can't think of a better way to do it!
If you need a more complicated template, one with logical conditions, then use a real templating engine. Velocity is what I am using, but FreeMarker is also a good candidate.
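A sketch of such a conditional template in Velocity syntax (table and variable names are placeholders):

```velocity
SELECT id, title
FROM   documents
WHERE  1 = 1
#if ($status)
  AND status = '$status'
#end
#if ($fromDate)
  AND created_date >= TO_DATE('$fromDate', 'YYYY-MM-DD')
#end
```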
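And a sketch of rendering it with the Velocity API (names are placeholders, not the post's actual code):

```java
import java.io.StringWriter;

import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.Velocity;

class SqlRenderer {
    // Evaluate the template string against a context of named values.
    static String render(String template, String status, String fromDate) {
        VelocityContext ctx = new VelocityContext();
        ctx.put("status", status);
        ctx.put("fromDate", fromDate);
        StringWriter out = new StringWriter();
        Velocity.evaluate(ctx, out, "sql-template", template);
        return out.toString();
    }
}
```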
In this story I will demonstrate my attempt to bring RIA development to a portlet environment. The technology stack I am using includes Sun's OpenPortal portlet container and Flex RIA with BlazeDS remoting. One of the most challenging obstacles developers face in developing an RIA in a portlet environment is the asynchronous communication channel between the UI and the portlet itself. Portlet development is, most of the time, very similar to servlet development. But when it comes to technologies that are tightly integrated with the servlet API, things start to break: try to put DWR/Wicket/Tapestry… in a portlet environment and you will feel the pain!
My attempt to put BlazeDS remoting into a portlet environment was a success, but that doesn't mean I didn't run into any bumps along the way. Here is how I did it: I have a regular portlet application which provides nothing but a single JSP view. Then I create a Flex component which is embedded in the JSP view. The Flex component has a single combo box widget which makes a remote RPC call via BlazeDS remoting to get its own list data. Now, in the portlet application (which is nothing more than a regular web app), I add BlazeDS and all of the remote services support. Describing that setup is beyond the scope of this article, but you can find many tutorials and articles online regarding the configuration. So far this is what we have:
Now comes the critical part: making the Flex component call the remoting services from a portlet app. Normally, when you build a Flex application with remoting services, you provide a fixed services-config.xml containing the channel/endpoint configuration. This won't work in a portlet environment, since you don't know the hostname, port number, context root, etc. beforehand; all of those depend on the portlet container implementation. To solve this, I configure the Flex remoting services at runtime. In the JSP view embedding the Flex component, I add a flashvars parameter to hold the BlazeDS remoting URL context. I advise that you use the standard portlet API to generate this URL instead of hardcoding it. For example, in the JSP view I get the remote service URL like this:
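A sketch of what that JSP might look like (the SWF name, endpoint path, and flashvars variable are placeholders):

```jsp
<%@ taglib uri="http://java.sun.com/portlet" prefix="portlet" %>
<portlet:defineObjects/>
<%
    // Build the BlazeDS endpoint URL from the portlet request so nothing
    // is hard-coded; the container fills in host, port, and context root.
    String amfUrl = renderResponse.encodeURL(
            renderRequest.getContextPath() + "/messagebroker/amf");
%>
<embed src='<%= renderRequest.getContextPath() %>/ComboView.swf'
       flashvars='amfUrl=<%= amfUrl %>'
       width="300" height="60" type="application/x-shockwave-flash"/>
```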
Now, in the Flex component, you need to create the remote channel/endpoint programmatically:
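A sketch of the runtime channel setup on the Flex side (names are placeholders):

```actionscript
// Create the AMF channel/endpoint at runtime from the flashvars URL
// passed in by the JSP view.
import mx.core.Application;
import mx.messaging.ChannelSet;
import mx.messaging.channels.AMFChannel;

private function initRemoting():void {
    var amfUrl:String = Application.application.parameters["amfUrl"];
    var channel:AMFChannel = new AMFChannel("portlet-amf", amfUrl);
    var channelSet:ChannelSet = new ChannelSet();
    channelSet.addChannel(channel);
    comboService.channelSet = channelSet; // the RemoteObject behind the combo box
}
```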
One of the coolest things about putting an RIA in a portlet via Flex/BlazeDS remoting is that you get asynchronous services for free! That means no whole-page refresh, ever, for your portal. Another nice thing is that, since we use the standard portlet API to expose the BlazeDS services, all communication between your Flex component and the back-end services is proxied through the portlet container's resource servlet. This means you can literally WSRP-enable your Flex portlet application if you want to, and things will continue to work: no firewall settings to mess with, no BlazeDS services to reconfigure.
Source code and binary files: flex-portlet.war (to be deployed in a portlet container) flex-portlet.zip (Source)
Just for reference, a FlatXmlDataSet looks like this:
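A sketch of the format (table and column names are placeholders): one element per row, with the element name naming the table and attributes naming the columns.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<dataset>
    <table1 id="1" name="alice"/>
    <table1 id="2" name="bob"/>
    <table2 id="1" table1_id="1" note="hello"/>
    <table2 id="2" table1_id="2" note="world"/>
</dataset>
```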
DbUnit will create 2 rows for table1 and 2 rows for table2 in your schema; for more info, please see the DbUnit documentation. Now, Groovy's builders, especially the MarkupBuilder, are perfect for this kind of task: extract data with a query and build the XML file based on the extracted data. The code looks like this:
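A hedged sketch of the idea in Groovy (connection string, table names, and queries are placeholders, not the post's actual code):

```groovy
import groovy.sql.Sql
import groovy.xml.MarkupBuilder

// table name -> extraction query
def queries = [
    table1: 'select id, name from table1',
    table2: 'select id, table1_id, note from table2'
]

def sql = Sql.newInstance('jdbc:oracle:thin:@host:1521:orcl', 'user', 'pass',
                          'oracle.jdbc.OracleDriver')

new File('dataset.xml').withWriter('UTF-8') { writer ->
    def xml = new MarkupBuilder(writer)
    xml.dataset {
        queries.each { table, query ->
            sql.eachRow(query) { row ->
                // one element per tuple, attributes named after the columns
                def attrs = [:]
                row.toRowResult().each { k, v -> attrs[k.toString().toLowerCase()] = v }
                "$table"(attrs)
            }
        }
    }
}
```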
For each query the 'queries' map defines, MarkupBuilder generates FlatXmlDataSet-format entries for the returned tuples. There you go: your test data file!
So far, I am nothing but impressed with the OpenPortal implementation: by how easy it is to set up the container/producer in Tomcat, by the ease of portlet deployment, and by the active support of OpenPortal's community. Of course, I encountered a few glitches here and there, but they could all be overcome within a matter of hours by looking at the code or asking the community for support.
What about performance: how does OpenPortal perform compared to the Apache portal suite? I geared up and set up the two frameworks to go head to head in a benchmark test.
What I use:
| OpenPortal | Apache Portal |
| --- | --- |
| OpenPortal portlet container, milestone 4 (July 2008); OpenPortal WSRP implementation, milestone 4 (July 2008) | Apache portlet container Pluto 1.0.1 release (some time in 2006?); Apache wsrp4j revision 327501, August 28, 2005 (they have no stable release yet) |
| Web services stack: Sun's JAX-WS implementation | Web services stack: Apache Axis 1.3 |
Everything runs on JDK 1.6 and a Tomcat 6 servlet container.
How I conducted the test: on the consumer side, I set up a servlet filter to capture the time to complete a request (this benchmarks the web services stacks); on the producer side, I also set up a filter to capture the time the producer takes to process the request (this benchmarks the portlet containers). I'm making one assumption here: the producer-side performance difference between the two implementations can be ignored, since the producer delegates most of its work to the portlet container.
| Benchmark Item | OpenPortal Consumer | Apache wsrp4j Consumer | OpenPortal Producer | Apache wsrp4j Producer |
| --- | --- | --- | --- | --- |
| # of Requests | 10 | 10 | 10 | 10 |
| Min Time (ms) | 1406 | 1641 | 438 | 468 |
| Avg Time (ms) | 1807.8 | 1736.3 | 611.2 | 666.9 |
| Max Time (ms) | 1985 | 2032 | 1047 | 859 |
Conclusion: if we continue to choose an open source solution for our portal, OpenPortal would be the best choice at this moment. wsrp4j has had no activity recently (3 years). Its community support is weak. The code is unstable: the latest unstable release (0.5) isn't even compatible with any Pluto container. And while the Apache Pluto/Jetspeed container already supports the portlet 2.0 spec, wsrp4j is still clinging to WSRP spec 1.0.
From the last couple of weeks working with OpenPortal, I feel the support is pretty strong. Whenever I report a bug, a response always follows the next day, sometimes with patches as well. OpenPortal will not work out of the box with our existing consumer at the moment, but it would not take much time or effort to fix that. Best of all, OpenPortal comes with support for the portlet 2.0 spec and WSRP version 2!
The code presented in this book is all C++, and I had a hard time getting it to build in Visual C++ 2008; it is guaranteed to work with Visual C++ 2005 only! So I rolled up my sleeves and wrote my own version in Java. I think this is the best way to test how much of the book I understand.
So far, I have finished coding the following behaviors: arrive, seek, avoid, wandering and obstacle avoidance.
I put those behaviors in a single applet, illustrated here. It's just a bunch of flies flying around trying to avoid their predator, a spider maybe!
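To give a flavor of what these behaviors look like in Java, here is a minimal sketch of the simplest one, seek: the desired velocity points straight at the target at full speed, and the steering force is the difference between that and the current velocity. `Vec2` is my own tiny vector helper, not code from the book.

```java
class Vec2 {
    final double x, y;
    Vec2(double x, double y) { this.x = x; this.y = y; }
    Vec2 sub(Vec2 o) { return new Vec2(x - o.x, y - o.y); }
    Vec2 scale(double s) { return new Vec2(x * s, y * s); }
    double length() { return Math.hypot(x, y); }
    Vec2 normalized() {
        double len = length();
        return len == 0 ? this : new Vec2(x / len, y / len);
    }
}

class Steering {
    // Seek: head toward the target at full speed; the steering force is
    // the desired velocity minus the current velocity.
    static Vec2 seek(Vec2 position, Vec2 velocity, Vec2 target, double maxSpeed) {
        Vec2 desired = target.sub(position).normalized().scale(maxSpeed);
        return desired.sub(velocity);
    }

    // A stationary agent at the origin seeking (10, 0) at speed 2
    // should be pushed along (2, 0).
    static boolean check() {
        Vec2 s = seek(new Vec2(0, 0), new Vec2(0, 0), new Vec2(10, 0), 2.0);
        return s.x == 2.0 && s.y == 0.0;
    }
}
```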
The learning curve is not so steep. I picked up the tutorial material available on the Adobe website (the animated diagram in particular is very helpful) and learned to use the framework in less than a day. On the second day I had already begun to integrate it into our application.
The thing that annoys me most about this framework is that there is (sort of) no way for the UI to get notified when a remote call has finished getting its data and the result is available. I ended up using a hack to get around it: for example, use a ChangeWatcher to monitor the model; when the result is ready, the command object updates the model, which triggers the ChangeWatcher's propertyChange event.
This is kind of messy, and the more I used ChangeWatcher, the more I wanted to stay away from it if I could. Until I found a solution…
Remember the event class you must extend from CairngormEvent (not the Flash Event)?
You just need to attach a function object (yes, a functor) from the UI component you want Cairngorm to notify to the Cairngorm event object before dispatching it. Then, in your command class, when the result comes back, just invoke that function object.
Sounds complicated? Not so. Here it is in code.
In your event class:
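A sketch of such an event carrying a callback (class and property names are placeholders):

```actionscript
import com.adobe.cairngorm.control.CairngormEvent;

public class LoadDataEvent extends CairngormEvent {
    public static const LOAD_DATA:String = "loadData";

    // Invoked by the command when the data is ready.
    public var resultHandler:Function;

    public function LoadDataEvent(resultHandler:Function) {
        super(LOAD_DATA);
        this.resultHandler = resultHandler;
    }
}
```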
In your Command class
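A sketch of the command side: store the callback from the event, then invoke it when the result comes back (names are placeholders):

```actionscript
import com.adobe.cairngorm.commands.ICommand;
import com.adobe.cairngorm.control.CairngormEvent;
import mx.rpc.IResponder;
import mx.rpc.events.ResultEvent;

public class LoadDataCommand implements ICommand, IResponder {
    private var callback:Function;

    public function execute(event:CairngormEvent):void {
        callback = LoadDataEvent(event).resultHandler;
        // ... kick off the remote call here, passing `this` as the responder
    }

    public function result(data:Object):void {
        var resultEvent:ResultEvent = data as ResultEvent;
        // update the model as usual, then notify the interested UI component
        if (callback != null) callback(resultEvent.result);
    }

    public function fault(info:Object):void {
        // error handling
    }
}
```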
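And in the UI component, pass the callback when dispatching the event (the combo box and function names are placeholders):

```actionscript
private function loadData():void {
    new LoadDataEvent(notifyMeWhenDataReady).dispatch();
}

private function notifyMeWhenDataReady(data:Object):void {
    // the data is ready; update this component directly
    myComboBox.dataProvider = data;
}
```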
When the result is ready, the command object will invoke the callback function. Thus, in the above example, notifyMeWhenDataReady will be called!