This is the third post in my series of “reports” from QCon. My earlier posts are here and here.
In this post, I will briefly talk about a few design and architectural strategies that I came across at QCon. Some of these were new to me; others I have used or known in some form for some time, but it was good to hear them described as established patterns.
Here goes…

Feature Toggle
This is a common problem in Scrum, but I am sure other methodologies can find a spot for it as well. You have two or more features under development, with all code being checked into trunk (branches are evil, remember?). One of the features reaches the finish line first and is now ready for prime time. The other is still in progress. How do you release one feature alone from a trunk that contains both? One (non-)solution would be to work on feature branches, or to roll back the changes for the unfinished feature later on. One of these is ugly and the other is insane.
Consider Feature Toggle.

Basically, you control each feature with a parameter. This parameter acts like a kill switch: you turn it off for the unfinished feature when you decide to take the finished one to production. Once turned off, the feature becomes invisible; it is as if it was never implemented. The parameter can be managed via a property file, Spring, a database, whatever floats your boat.
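A minimal sketch of such a kill switch, assuming the toggles live in a plain properties file (the class and the features.wishlist key are made up for illustration):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class FeatureToggles {

    private final Properties props = new Properties();

    // Loads toggles from a file such as features.properties:
    //   features.wishlist.enabled=false
    public FeatureToggles(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
    }

    // Defaults to off, so an unfinished feature stays invisible
    // unless it is explicitly enabled.
    public boolean isEnabled(String feature) {
        return Boolean.parseBoolean(props.getProperty(feature + ".enabled", "false"));
    }
}
```

The guarded code then reads like `if (toggles.isEnabled("features.wishlist")) { ... }`, and flipping the property is all it takes to make the feature disappear.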
In the past, I have often used Feature Toggles to remove a feature gone wild from production without having to make an emergency release. This is probably not the published reason for using a Feature Toggle, but it has worked very well for me in the past.
However, I would strongly discourage using a Feature Toggle to ship untested code to production; it may work, but it would be kinda immoral.

Event Sourcing
Any application worth its salt has some sort of audit trail that leaves a trail of breadcrumbs showing how an object got to its current state. It is often implemented with database triggers (or AOP, or Hibernate interceptors), and besides satisfying auditors, such trails are generally cumbersome to use and provide little or no benefit to the real application users.
Consider Event sourcing.
The idea is to capture all changes to an object (updates, deletes, and un-deletes) as events. Not only do you now have a very extensive and useful history of the object, you also have numerous opportunities to use this information effectively and to offer end users new features that were not possible earlier.
For example, you can literally reconstruct the current state of the object by replaying the events from scratch (if the need arises), or roll the object back to a previous state, or find which attributes changed between two points in the timeline, or find which attribute changes most often, and so on…
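A minimal sketch of that replay idea, with a hypothetical bank-account event stream (all the type names here are made up for illustration):

```java
import java.util.List;

// A hypothetical event stream for a bank account.
interface AccountEvent {}
class Deposited implements AccountEvent { final long cents; Deposited(long c) { cents = c; } }
class Withdrawn implements AccountEvent { final long cents; Withdrawn(long c) { cents = c; } }

class Account {
    private long balanceCents = 0;

    // Applying one event moves the object one step along its timeline.
    void apply(AccountEvent e) {
        if (e instanceof Deposited) balanceCents += ((Deposited) e).cents;
        if (e instanceof Withdrawn) balanceCents -= ((Withdrawn) e).cents;
    }

    // Replaying the full history from scratch rebuilds the current state;
    // replaying only a prefix of it rebuilds any past state.
    static Account replay(List<AccountEvent> history) {
        Account account = new Account();
        for (AccountEvent e : history) account.apply(e);
        return account;
    }

    long balance() { return balanceCents; }
}
```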
Since you have all the events, you probably do not even need to store the current state (though this makes me personally slightly uncomfortable). Another possible implementation is to store the events and the current state in different data sources and sync them asynchronously. A NoSQL store would probably be a good candidate for the events. Event Sourcing also ties in quite neatly with CQRS (see below).

Strangler Pattern
How often do you come across a legacy application that you wish to replace with a shiny new application? Too often, eh?
We generally talk about refactoring it into the new application, one method or unit test at a time. But there is another way…
Strangle it slowly. You start a new application that provides a subset of the features, and then you somehow siphon requests away from the original application to the new one.
Gradually more and more requests are sent to the new application, with the old application servicing fewer and fewer. Eventually you will have replaced the old with the new. Also see this discussion on Stack Overflow, which has some quite interesting insight into the use of this pattern in practice.
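A minimal sketch of the siphoning step, assuming a servlet filter sits in front of the legacy application (the /orders path and the new application's URL are made up for illustration):

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

// Routes already-migrated URLs to the new application; everything else
// falls through to the legacy application behind this filter.
public class StranglerFilter implements Filter {

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // As more features move over, more prefixes are added here,
        // and the legacy application services fewer and fewer requests.
        if (request.getRequestURI().startsWith("/orders")) {
            response.sendRedirect("http://new-app.example.com" + request.getRequestURI());
        } else {
            chain.doFilter(req, res); // still handled by the legacy app
        }
    }

    public void init(FilterConfig config) {}
    public void destroy() {}
}
```

In practice the routing often lives in a reverse proxy rather than in the application itself, but the shape is the same: one switchable route per strangled feature.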

Command Query Responsibility Segregation (CQRS)
A very simple yet powerful idea. CQRS rests mostly on the observation that in most applications the number of reads far exceeds the number of writes (think eBay, Twitter, Facebook).
Put simply, an interface should either modify something (be a command) or return a value without any side effects (be a query); it should not do both. The idea is to segregate your interfaces by usage, writes in one and reads in the other. This is quite simple in itself, but it alone opens up greater architectural possibilities. You can now deploy the two kinds of services on different servers, and perhaps provision far more servers for reads than for writes. Another possibility is to give each side its own data store, with the two stores kept in sync via some kind of offline process.
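A minimal sketch of the segregation itself, with hypothetical order-service types; the point is only that no method both mutates state and returns it:

```java
import java.util.List;

// Commands: mutate state, return nothing.
interface OrderCommandService {
    void placeOrder(String customerId, List<String> itemIds);
    void cancelOrder(String orderId);
}

// Queries: return state, never mutate it. This side can be scaled out
// across many read-only servers, possibly backed by its own data store.
interface OrderQueryService {
    OrderSummary findOrder(String orderId);
    List<OrderSummary> findOrdersForCustomer(String customerId);
}

// A read-side view, denormalized so queries stay cheap.
class OrderSummary {
    String orderId;
    String status;
    long totalCents;
}
```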

DSL

I started to realize the power of DSLs when I saw Martin Fowler convert a state-change engine into a DSL in one of the presentations.
Software is basically a combination of two elements: an algorithm defined implicitly or explicitly by the business, and the STUFF engineers do to morph it into a software solution. Observe a discussion between a technologist and a business analyst. The business analyst wants to talk about the algorithm, and so does the developer, but the developer's side is often interlaced with boilerplate that makes the conversation about the algorithm difficult, if not impossible. The problem is that technologists often find it hard to separate these two aspects of the software.

This is where a DSL comes in. It provides a common language for both, sans any technological frills.

Languages like Groovy, Ruby, and Scala make it easier to write DSLs. Opportunities to create internal DSLs are probably far more plentiful than those for external DSLs.
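As a minimal sketch, here is an internal DSL in plain Java, using a fluent builder for the kind of state-change rules mentioned above (the states and events are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// A tiny internal DSL for state transitions. The rule definitions read
// almost the way a business analyst would state them, free of boilerplate.
class StateMachine {
    private final Map<String, String> transitions = new HashMap<String, String>();
    private String state;

    StateMachine(String initialState) { state = initialState; }

    // DSL entry point, e.g.: machine.on("lockTurned").in("CLOSED").moveTo("LOCKED");
    Rule on(String event) { return new Rule(event); }

    class Rule {
        private final String event;
        private String fromState;

        Rule(String event) { this.event = event; }
        Rule in(String from) { fromState = from; return this; }
        void moveTo(String to) { transitions.put(fromState + "/" + event, to); }
    }

    void fire(String event) {
        String next = transitions.get(state + "/" + event);
        if (next != null) state = next;
    }

    String state() { return state; }
}

class Demo {
    public static void main(String[] args) {
        StateMachine door = new StateMachine("OPEN");
        door.on("doorClosed").in("OPEN").moveTo("CLOSED");
        door.on("lockTurned").in("CLOSED").moveTo("LOCKED");

        door.fire("doorClosed");
        door.fire("lockTurned");
        System.out.println(door.state()); // prints LOCKED
    }
}
```

The business-facing lines are the three `on(...).in(...).moveTo(...)` calls; everything else is plumbing the analyst never needs to see.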

  • Nadeem

    I kinda like the Strangler Pattern :-)

   