Archive for the ‘Articles’ Category

Printers going Java

January 4, 2006

Embedding Java doesn't always equate to mobile phones. Nor does it necessarily mean stuffing it into a rover on Mars. Sometimes introducing new possibilities is just a question of putting it on a small card hooked up to your printer.

Axis, a well-known maker of print servers, network cameras and the like, has done just this. They took one of their print servers and added Java for a big-time printer manufacturer in Japan. How did they do it? And what did they learn? I teamed up with three of the people behind this Java-enabled print server – Jens Johansson, Kristofer Johansson and Stefan Andersson – to get more of the gory details.

The Start

The project started around four years ago, with a first prototype appearing after some seven months. The release onto the market came roughly one and a half years ago. The first step, however, was to assess the feasibility of it all. Could they meet their customer's demands? Which way of handling Java classes should they use: just-in-time (JIT) or ahead-of-time (AOT) compilation? Even the choice of hardware wasn't a given from the start. Should they use a third-party chip dedicated to running Java, or do it themselves? The former would have meant a bus and shuffling data between the main chip and the Java chip. Not interesting. Ten man-years later, here's what they ended up with.

A schematic view of the print server and its different parts. The functionality is extended by uploading Java classes as a normal jar file.

The Box

First off they had to decide which road to travel – base it on hardware or software? Roll their own or go third party?
Buying an ASIC block to do the Java and integrating it into their chip architecture would have been very expensive and wouldn't have resulted in a cost-effective system. Buying a third-party chip? That would have meant hooking it up on the system bus and making the whole system rather complex. In the end they settled on a software solution built in-house.
Basically they took the Axis ETRAX 100LX chip, added more memory and used software to implement the Java support. The ETRAX 100LX system-on-chip is a 32-bit, 100 MHz RISC CPU. The design uses the Java 2 Platform, Micro Edition (J2ME) with the CDC configuration and Foundation Profile. This resembles the abilities of a PDA or a set-top box on your TV.

The Java foundation classes reside in a separate memory and are loaded by the print server before any uploaded jar files from the file system on the board are run. The memory is divided into a 16 MB file system and 16 MB for all the executable code, including the Java foundation classes. Add to this 64 MB of RAM that houses the system, of which approximately 13 MB is used for Java. This is considerably more than a standard print server, which typically has 4 MB of memory.

The operating system is Axis' own proprietary RTOS for embedded systems. The unit consumes a sparing 0.3 W. Add to this a USB port and you get a pretty good idea of this competent little circuit board.
You change the function of the print server by uploading an ordinary Java jar file. These compiled classes are then run on the print server and have access to the various ports and, of course, to its printer.
There is even a remote call possibility (RMI) to query the server for its status and more.
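The article doesn't show what that remote interface looks like. As a purely hypothetical sketch of the idea (none of these names come from Axis' actual API), a status query over RMI could be declared like this:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical remote interface for querying the print server's status.
// The article does not show Axis' actual API; all names here are invented.
interface PrintServerStatus extends Remote {
    String getStatus() throws RemoteException;
    int getJobsQueued() throws RemoteException;
}

// A trivial server-side implementation. In a real deployment the object
// would be exported with UnicastRemoteObject and bound in an RMI registry,
// so that clients on other machines could look it up by name.
class SimplePrintServerStatus implements PrintServerStatus {
    public String getStatus() { return "IDLE"; }
    public int getJobsQueued() { return 0; }
}
```

A client would normally obtain the stub via Naming.lookup against an RMI registry; here the implementation is kept local so the sketch stays self-contained.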

The result

Even though there were apprehensions about performance at the start of the project, the result speaks for itself. CaffeineMark 3.0 is a Java benchmark used to measure the speed of a Java Virtual Machine (JVM) running on a particular hardware platform. The test is calibrated to return a score of 100 on a Pentium 133 MHz running Windows 95 and Sun's JDK 1.1 interpreter. Even though this feels like an old PC, it gives you a good reference point for embedded solutions.
Axis' print server landed on an impressive 60. This result is even more to write home about since all floating-point math is done in software.

The Lessons Learned

As with most projects, there were problems to overcome. In this case the floating-point library proved to contain bugs. The TCP stacks used at Axis differed from the Sun version. There were compiler issues going from Java to C. As a coup de grâce they needed the Java logo, and that meant going through a grueling TCK (Technology Compatibility Kit) test. This is a suite of tests, tools, etc. that provides a way of testing an implementation for compliance with a Java specification.
“This has been among the most complex printer projects we've done”, says Jens Johansson with a weary smile. However, the Java portion worked without any major problems as soon as the foundation classes were in order. “For one and a half years we haven't had any Java-related bugs”, explained the tech lead Kristofer Johansson.
They chose AOT compiling to implement the foundation classes, but this meant a large code base and it took time to implement. Hindsight is one of the best visual aids: might it have been better to use JIT instead?


So the techno stuff is all just fine, but what can we do with this Java-enabled print server? Ever forgotten an important printout that shouldn't lie around? Well, why not simply hook up a card reader? Once you arrive at the printer you swipe your card and, presto, your pulp stack pops out. Authentication and authorization could be done against an LDAP server.
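How might that card-swipe check look? Here is a minimal sketch using the JDK's own JNDI API – the server address, DN layout and card-to-user mapping are all assumptions for illustration, nothing from the article:

```java
import java.util.Hashtable;
import javax.naming.Context;
// javax.naming.directory.InitialDirContext would perform the actual bind.

// Sketch: build the JNDI environment for a simple LDAP bind.
// The server URL and DN layout below are invented for illustration.
class LdapAuthSketch {
    static Hashtable<String, String> buildEnv(String userDn, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, userDn);
        env.put(Context.SECURITY_CREDENTIALS, password);
        return env;
    }

    // In the real flow, new InitialDirContext(buildEnv(dn, pin)) succeeds
    // only if the swiped user's credentials are valid on the LDAP server.
}
```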
Still too simple? Why not implement a distributed load-balancing scheme for printers – the next available printer pulls a job from a print queue somewhere. Need more power? Just hook up another printer to the network. Could I combine the print server with a web camera and motion detection? Add something to sample the temperature and we get surveillance, fire detection and printing to go.

Puleeze Axis, can I get a couple of print servers to try out?

Originally published in JayView.

Scrum down!

January 4, 2006

Scrum is not just another rugby term*; it is both a rugby term and the name of one of the hottest project management processes to come off the assembly lines. Yes, it actually draws its heritage from Toyota and lean production! Most of it is just common sense though.

Here is the opening explanation from Wikipedia, assuming someone didn’t change it after printing.

“Scrum is an agile method for project management, first implemented by a team led by Jeff Sutherland at Easel Corporation in 1993. It has been called a ”hyper-productivity tool”, and has been documented to dramatically improve productivity in teams previously paralyzed by heavier methodologies.”

That sounds too good to be true! What is this?

* A scrum is when the teams pack in a special formation in rugby, head to head, in order to gain possession of the ball.

The short story

Scrum is an agile, iterative process where small self-organizing teams focus on producing a working demo of the system at the end of each sprint (iteration). One such sprint is typically four weeks.
The product owner handles the product backlog. This is a list of prioritised features or activities. At the start of a sprint, the team and product owner collaboratively select a goal for the coming sprint. They then select the product backlog items that best support that goal. These backlog items are then decomposed into the necessary tasks. This is known as the sprint backlog. While the team may add or remove items on the sprint backlog during the sprint, the sprint goal remains unchanged.
Through 15-minute daily meetings the team shares status information and the Scrum master removes any obstacles (impediments). The Scrum master controls the process, but the team is totally in control of the implementation.
An executable is presented at the end of the sprint. After this demo, a new sprint starts and a sprint backlog is created.

An overview of the Scrum process.

What is the problem anyway?

Several methodologies assume that building a system is something you can define or specify up front. The struggle to attain this level of confidence is often a difficult uphill marathon, or just plain futile. Even small projects need to adapt during their implementation. Scrum, on the other hand, recognizes that IT is so complex that the only way to control it is through an empirical process. Watch the process. Constantly monitor it in order to get the desired output. The complexity – or our inability to fully understand the problem – is accepted as a given.

Who’s who?

The roles in Scrum are actually quite few – Product Owner, Scrum Master and the Team Members.

The Product Owner

First we have the product owner, and this perhaps is the easiest role to understand. This person handles the product backlog and is responsible for setting the priorities for the product.

The Scrum Master

The Scrum master is not far from being a project leader if it wasn’t for the many things he/she does not do. The Scrum master protects the group during the sprint, removes impediments that hinder development and guards the process itself. He or she does not hand out assignments, dictate a solution or even make decisions for the team.

The Team Members

This cross-functional team is made up of around seven people with different backgrounds – programmers, testers, etc. They self-organize and structure their work. This means they alone decide how to form a working group that can implement a solution to reach the sprint goal. This is important, and slightly terrifying. Scrum seeks to unleash the group's potential to strive for a goal as effectively as possible. The group therefore not only has the normal burden of obligations but, more importantly, the right to decide how the work should be done. This means the group could decide to use Scrum's cousin XP (extreme programming) as a base for their development model, or perhaps RUP.

Why so few people? In order to get closely-knit development with less overhead, as is typical in agile projects, the group cannot be too big. Even physical distance affects the outcome. Going further than a “coffee cup walk” could be a problem.

Why so many people? I’ve heard people say that a group should not extend beyond 3-4 people in order to stay tight and focused. Scrum, on the other hand, seeks to reach hyper productivity where the group is drastically more effective at cranking out solutions. Call it synergies. Call it direct coupling of brains. Whatever the name, people can be a lot more effective when interacting without obstacles. You get too little of this if the group is too small.

The Process

Scrum is built up around the sprint, or iteration. But before we can start, we need a Product Backlog. This list contains everything that needs to be done on the project – stories, activities and so on. The product owner prioritizes every item.

The Sprint Planning Meeting

The sprint planning meeting sets off the iteration and produces a sprint goal and a Sprint Backlog. The goal and this list are the sole target for the team during the sprint. Tasks may be adjusted or split during its course, but no new product backlog items can be added to the sprint. No boss sidetracking the Scrum master, no desperate plea from the salespeople can alter the backlog. This produces a box of calm that lets the group do real work instead of endlessly switching between tasks and trying to hit a target that is constantly moving.
Is there no way to change the list? Well, there’s the simple case when tasks are completed faster than planned. The group can then decide to take on more tasks. In some cases, the product owner can OK changes to tasks in order to meet the deadline.
And of course there is the ultimate change: the sprint is cancelled. But that is a rather drastic measure.

The Daily Scrum Meeting

How does the team steer the work effort? How does the Scrum master know that all's well? The answer is the daily Scrum meeting. This meeting lasts roughly 15 minutes and each team member is responsible for answering three questions:

  • What have you done since the last Scrum meeting?
  • What is preventing you from doing your work?
  • What will you do until the next daily Scrum meeting?

The Scrum master will pay close attention to what is said and how it is delivered. Are the team members doing what they should be doing? Is work progressing as it should be? Is the group functional or is something hindering it from performing?


To track progress the team uses a Sprint Burn Down Chart. In this graph we find the number of hours left to do plotted against the number of days available. Hitting the deadline is then a question of making sure the line reaches the x-axis by the last day.
In the example graph above we can see that the group's remaining hours actually increased around day five; only at this point was the level of complexity fully understood. On day six a meeting with the product owner resulted in a reduced ambition that made it possible to reach the sprint goal.

The Sprint Burn Down Chart. Each Scrum team tends to have a “signature” in how the remaining hours decrease. Note the bump around day five.
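The bookkeeping behind such a chart is simple. As a small illustration of the idea (my own sketch, not part of Scrum itself), tracking remaining hours per day might look like:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal burn-down bookkeeping: one entry per day holding the summed
// estimate of remaining hours across all sprint backlog tasks.
class BurnDown {
    private final List<Integer> remainingPerDay = new ArrayList<>();

    void recordDay(int remainingHours) {
        remainingPerDay.add(remainingHours);
    }

    // The sprint goal is met when the line reaches the x-axis.
    boolean goalReached() {
        int n = remainingPerDay.size();
        return n > 0 && remainingPerDay.get(n - 1) == 0;
    }

    // A rising value, like the bump around day five in the figure, means
    // newly discovered work exceeded the work burned off that day.
    boolean roseOnDay(int day) {
        return day > 0 && remainingPerDay.get(day) > remainingPerDay.get(day - 1);
    }
}
```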

The Sprint Review Meeting

The sprint's final meeting is the review, where the most important artifact is shown to the product owner and others who might be interested – the executable. This piece of code should actually run. It might be small, it might do just one of the many things it will eventually do, but it is real. This means no mockups, no PowerPoint slides and no dummies – just real stuff.
This concludes the sprint, in which we've done all the designing, coding and testing needed to get the executable part of our system ready for a demo.
One more thing: the artifacts we need are marked in bold above – not a lot, and very focused.

Scale? Control?

Having come this far you might be saying:
“This all sounds well and good for the programmers, but it can't scale.”
Actually, it can. By setting up a Scrum of Scrums – the Scrum masters from different projects form a Scrum of their own – you can scale up, as Jeff Sutherland has, to more than 500 people.
I asked Sutherland whether Scrum doesn’t fail sometimes:
”A high estimate would be that 5% fail, but they do so in two weeks.”
There are a lot of projects that take a lot longer to implode than a mere two weeks.
Scrum accommodates the complexity of IT and can increase efficiency drastically. It's all about common sense and daring to live close to chaos. You set the team free and then check progress. Set free and monitor. Let go in order to gain control – a true paradox.

In conclusion

The process, the artifacts and hopefully even this article are easy to understand. However, Scrum is more than that; it is also a mindset. Without a deeper understanding of the process, the desired effects might just not appear. It's a bit like mimicking Rembrandt armed only with the angle and speed of the master's stroke. This is the only project management method that I've found I can relate to. So my final question has to be… You wanna Scrum?

Originally published in JayView.

JavaForum – now near you!

December 4, 2005

Once upon a time, before the bubble burst and dotcom withered away, there was a Javaforum in Sweden. Now it is time for a rebirth! Sun and Jayway are the proud parents of Javaforum in southern Sweden, generation two!

There was a time when Java developers could gather to take in the latest in Java and do some mingling. After a few years of dormancy, it is now time to restart this appreciated form of knowledge exchange. Only now we return in a stronger version – both face-to-face and on the net.
Java is about openness and collaboration. Javaforum is one way of realizing that vision.
“There are also big synergy effects in coordinating the activities,” says Bert Rubaszkin (Sun), one of the driving forces behind JavaForum. “With Jayway's engagement in Javaforum, a stable foundation for long-term collaboration is starting to take shape. Now we expect other interested partners who are prepared to get involved to get in touch!”
The seminars will range from the practical to the theoretical, from the small to the large – but they will always be interesting.
You are warmly welcome to Javaforum.

Come in and sign up now at , and you're all set!

Unplugged: The next forum is on 23 March at 13:00 in Jayway's new offices at Hans Michelsensgatan 9. Write to bjorn.granvik (at) and sign up!

See you!
Originally published in JayView.

Øredev 2005

November 20, 2005

Øredev is the biggest conference for developers in Sweden, covering several areas such as Java, .Net, Methods & Tools and Embedded.

The conference in our own backyard got off to a very good start indeed and featured internationally renowned speakers such as Eric Evans, Rickard Öberg and Kai Tödter. The number of attendees was 320, and they came from as far away as Copenhagen and Gothenburg. “Own backyard” is quite literal, since Jayway was one of the principal organizers. Nevertheless, why not try to analyze our own effort?


As Java fans we had more than just the Java track to enjoy. The Methods & Tools and Embedded tracks contained several interesting sessions with a Java base. This sometimes made it difficult to choose between colliding talks – just as it should be.
Looking at the reviews from the sessions, we find “9 ways to hack a webapp” at the top. It was a great talk given by Martin Nystrom (yes, there are no dots above the “o”) – but somewhat embarrassing. The “I've made that mistake!” feeling was a bit too apparent. The top position of this talk came as no surprise: JavaOne attendees gave it an honorable second place in a review of all presentations given there.
The talks covered a lot of ground. They ranged from the challenges of implementing Java on a mobile phone to 64-bit technology and its effects on the JVM, from standards like EJB3 to up-and-coming technologies like aspect-oriented programming. On the anecdotal side we note Bert Rubaszkin, Chief Technologist at Sun Sweden, who in his “10 years with Java” talked about several achievements – some of his examples included Rickard Öberg, who was sitting in the audience. Time flies faster in IT. Or, better still, it's all in the family!
A few rough edges around the conference were apparent: the sun (yes, the one in the sky), together with a curtain on the loose, made life difficult for some lecturers on the main stage – nothing that can't be fixed for round two of this annual conference.

Mingling at Øredev 2005


The success of this first developers' conference in the Öresund region is promising. The conference returns this year in November. Let's hope it continues to build on this strong start.

Originally published in JayView.

DDSteps – data driven sanity

August 1, 2005

“Aaarrrgh” is not only the sound of pirates. It is also the desperate cry of programmers who realize what a mess they've got themselves into. This is especially true in testing: the number of combinations you should run your code through easily outpaces you. But fear no more – here's one great-tasting pill, ordered by Dr Jay: DDSteps.

DDSteps is an open source project, the brainchild of Adam Skogman, which aims to make sense of your unit tests when they consist of God knows how many permutations of input. Read on to find out exactly how.

A Typical Project

Let us tell the tale of a too-successful project. It all started out as a prototype web site, quickly hacked together over a week to show to the investors. Big hit! Some swift fixes later it went live – slow, rocky, but making money. Refactoring the code was the inevitable next step. As new people joined the project, test-driven development was introduced to make sure that programmers did not break each other's code. The unit tests covered more and more of the code. Even the happy hackers in the group had to admit that things were getting better and bugs were fewer.
The business side of the now booming company poured new feature requests into development and demanded that they appear in production “real soon”. Therefore a shorter release cycle was introduced, releasing every month. A quality assurance team was also set up to test each release, so developers had to “pre-test” the release before the test team got it. Testing now took two weeks. Bugs were all around and had to be corrected. At this point the site went global, requiring small country-specific changes.
This is when it all broke down. Testing now took over a month and bugs were often reported months after the release was made. Nobody wanted to re-release and do all those tests again, so the site went into production with known, serious bugs. It was at this point that the programmers went: “Aaarrrgh!”

The Remedy

Testing your solution for every type of input and making sure it will pass function tests is difficult at best. Usually, this is done “by hand” and requires real people. And, yes, it is boring and time consuming. So you have to automate it, just like you did with JUnit tests for unit testing.
By function tests we mean: “Does it work like the use case says it should?” We mean full, system-level, end-to-end tests using a production-like environment. We mean using a browser, surfing the web application and then checking the real database for results.
Automating function tests is hard, mostly since they suffer from the same problem as the manual testing. Many of the tests are the same; just the data is slightly different in each test.

Data-driven testing, on the other hand, separates the testing code from its input. This, together with reusable test steps, makes it easy to maintain and evolve the test cases – you get reuse instead of copy-paste. Need another field? Just add a new column to your input, handle it in your test case, and voilà. There will be no explosion of test methods, no massive round of changes!
Ok, let’s start from the top and look what DDSteps is and how it makes your life easier.

Figure: Overview of the DDSteps’ process. 1) Input is retrieved from an Excel file and populates your test cases and test step POJOs. 2) Yellow is your code. TestSomething is your test method that (re)uses different test steps. 3) The fixture F typically sets the database in the correct state for the test. The Navigators, Executors and Validators in this example use JWebUnit to perform their respective step. 4) Output – reports, console etc.

Since most tests are the same, with just different data, first we separate data (input) and the test code. The data, found in an Excel file, is inserted into your test case, each row becoming a test case run. The second important part is the framework of reusable test steps. Test cases are broken down into steps:

  • Fixture – set up the needed data in your database.
  • Navigator – Navigate your web site.
  • Executor – Input data into the system, like filling out a form on a page and pressing a button.
  • Validator – Validate the output on a page or a row in the database etc.
  • Cleaner – Clean up any mess you have made.

You implement these types of test steps using for instance JWebUnit and Spring JDBC. You can easily imagine that many test cases will use the same executor, since they pass through the same web page, so reuse is everywhere.
Finally, DDSteps is just standard JUnit, so the test outcome is reported back via Ant, CruiseControl or your preferred JUnit compatible IDE. It integrates perfectly into your existing environment.
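The step types above can be pictured as a tiny hierarchy. This is a hypothetical sketch of the reuse idea only, not DDSteps' actual classes:

```java
// Hypothetical sketch of the reusable test-step idea, not DDSteps'
// real class hierarchy: every step knows how to run itself, and a
// test case is essentially an ordered list of steps.
interface TestStep {
    void runStep();
}

// A composite runs a sequence of steps in order, e.g. fixture,
// navigator, executor, validator, cleaner.
class CompositeStep implements TestStep {
    private final TestStep[] steps;

    CompositeStep(TestStep... steps) {
        this.steps = steps;
    }

    public void runStep() {
        for (TestStep step : steps) {
            step.runStep();
        }
    }
}
```

A test case then becomes a composite of steps, and the same navigator or executor can appear in many test cases.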

Divide and Conquer

Let us dive deeper. Again, how do we separate data from the code? You can do this in several ways. We chose Excel: it is not only easy to format and enter data in, it is a de facto standard and you can use formulas, etc. And if you want, you can always use OpenOffice.
Let us take the following use case from PetClinic from Spring – ‘Add new pet’.

Figure: Use case ‘Add New Pet’. Green is data that is put into an Excel file. See next figure. The letters N, E and V are our test steps for this use case.

We have now divided our use case above into reusable test steps. Next is the Excel file. From our test case we can see that we need: Owner, pet name etc.

Figure: The Excel file that holds our input and fixture data.

Each test method is entered on a separate worksheet. The fixture data we need for the database is entered as well. DDSteps finds your test data in the spreadsheet using the method names in your test code, and uses JavaBean get/set methods to inject data into your test case and your test steps.
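The injection mechanism itself is not shown in the article, but the general JavaBean-property technique can be sketched with plain reflection (my illustration of the idea, not DDSteps' actual code):

```java
import java.lang.reflect.Method;
import java.util.Map;

// Sketch of data-driven property injection: for each column name in a
// spreadsheet row, find the matching JavaBean setter on the target and
// call it. This illustrates the general technique only; DDSteps' real
// implementation is more capable (types, nested properties, etc.).
class BeanInjector {
    static void inject(Object target, Map<String, String> row) {
        try {
            for (Map.Entry<String, String> col : row.entrySet()) {
                String setter = "set"
                        + Character.toUpperCase(col.getKey().charAt(0))
                        + col.getKey().substring(1);
                Method m = target.getClass().getMethod(setter, String.class);
                m.invoke(target, col.getValue());
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}

// A tiny bean standing in for a test case or test step.
class PetBean {
    private String name;
    public void setName(String name) { = name; }
    public String getName() { return name; }
}
```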


Next is the test code. Let us look at just one part of it – navigating. The example covers points 1-6,7 of the use case, i.e. the first and second test steps. In order for this part to work, our HTML pages need to be written with JWebUnit in mind, i.e. ids are put on elements so that we can find them and “click”, etc. Let us assume this has been done. We also use Spring, even though this is optional.
The test case would then look something like this.

// Our test case, see PetFTest.xls for input.
public class PetFTest extends PetclinicTestCase {

    protected NavigateToAddPet nav;
    protected ValidatePetForm valForm;

    // The test method, same as tab name in Excel file.
    public void testAddPet_Ok() throws Exception {
        // Run the test steps in order.
        nav.runStep();
        valForm.runStep();
    }

    // Used from data file to access navigator properties, e.g. “”.
    public NavigateToAddPet getNav() {
        return nav;
    }
}
In the code snippet above we declare our PetFTest test case, which inherits from PetclinicTestCase. This base class only holds common things like the web browser, fixture, etc. The test method testAddPet_Ok is simple and reduced to using only the necessary test steps: nav (NavigateToAddPet) and valForm (ValidatePetForm).
Also note getNav(), which is used as the first part of the property path in the data file, “nav.name”. Access to properties is JavaBean-based, i.e. using get/set methods. We will only look at the first test step, the navigator NavigateToAddPet, since the other steps are similar in concept.

public class NavigateToAddPet extends JWebUnitTestStep {

    // Reuse another navigation step
    NavigateToOwner navigateToOwner;

    // The pet's name, injected from the data file.
    private String name;

    public NavigateToAddPet(WebBrowser webBrowser) {
        // The web browser is injected by Spring.
        navigateToOwner = new NavigateToOwner(webBrowser);
    }

    public void runStep() throws Exception {

        // Delegate
        navigateToOwner.runStep();

        // Click ‘Add new pet’ button,
        // which is the submit button in the form ’formAddPet’

        // Snapshot of web page.
        writeTrail("Add New Pet Form");
    }

    // Full name is populated from ”” in the data file.
    public void setName(String name) { = name;
    }
}
This test step is a composite: it reuses another navigator, NavigateToOwner, and then goes to the “Add new Pet” page. writeTrail writes the specified HTML page to the hard disk – a visual page debugger, if you will.


Having come this far, we can now run our test code and get the result.

Figure: And finally the output, in this example, in Eclipse.

Oops, better fix our code! In the example above we see the errors per row, not just per test method.

What happened to the Project?

When our team employs DDSteps, it can automate its tests and cut the time for testing down to minutes, not weeks. This means you can run all function tests every night, or every time the source code changes! Suddenly you can release to production with no bugs, because as soon as you know there is one, you fix it during ordinary development. Function testing is now a part of development, not an afterthought.


DDSteps both tries to break new ground and reuses solutions perfected by others. We think data-driven testing is a good way to implement function tests. Is it for you? Who knows? Get it from to see for yourself.
Originally published in JayView.

Brave new EJB?

March 4, 2005

The new specification of Enterprise Java Beans (EJB) version 3 came out a little while ago. Now this may be an early draft, but the contours are already in place. It’s going to be different – very different. In this article we take a close look at every little detail… not. Rather, we attempt to take an overall view and check out some code!

The story so far….

Within J2EE, EJB has been seen as one of the heaviest technologies – not just for its complexity, but also for the sheer number of steps involved in getting anything done. The list of “points for improvement” could be made long. Onto the stage came Microsoft with its Dotnet, and began to use words like architecture. Sun, which is still used to singing in the final scene when it comes to creating major systems, realized that it had to become easier to develop with EJB, preferably without losing power. Curtain up! Enter EJB3, stage left!


There were several objectives behind the new version:

  • New, simpler configuration. Only state what is different – configuration by exception.

  • Encapsulate the beans to reduce dependency and knowledge of the world around. No artificial methods which are used, etc.
  • Simpler persistence through simpler Java objects – “plain old Java objects” (Pojos).
  • Support for lightweight modelling, easy to test outside the container (no need for heavy application servers).
  • Support for inheritance, polymorphism, etc. Yes, that’s right! Actual object orientation!
  • No checked exceptions, e.g. RemoteException.
  • …and a whole pile of improvements.

In other words: easier, simpler and neater! Let’s take a closer look to see how successful they were.


One of the first simplifications was to make use of the metadata annotations which came with Java 5 (JDK 1.5). To put it simply, they are a way of describing the code from a more comprehensive point of view. For example, why write long XML files when it should be enough just to say, “This method must be accessible from all machines.”

In our example, this will be “@Remote” in front of the declaration. Obviously much simpler!

However, this has some unexpected effects. Now there are many new different meta settings and above all their default values. You don’t need to use all of them, but what is needed is new knowledge. Nevertheless, I still think it is better. The alternative would be to do as in the current version of EJB, where one is forced to always state values for all settings.

A simple bean

So let’s look at some code! A stateless session bean is the closest we get to “Hello World” with EJB.
Do you remember all those methods, for example ejbCreate, which had to be implemented despite being empty? And don’t forget the various interfaces and a description file with lots of XML.
Why not like this?

@Stateless
@Remote
public class FooBean {
    public void printBar() {
        System.out.println( "Bar" );
    }
}
Much better, the way I see it. @Stateless indicates that this class is a stateless session bean. @Remote indicates that it must be capable of being called from another machine. In reality more settings are needed, but these either have default values or are derived from the code. In the example quoted above, for instance, an interface file is generated which we can later use in e.g. a client program that calls the server.

And a session bean that saves its state isn’t all that much more difficult.

@Stateful
public class BarBean {
    // Transaction and security attributes are also set with annotations.
    public void orderSomething( String aDrink ) { ... }
}

This time we get both transaction management and security. Note that it is just as easy to order an alcohol-free alternative ;-).

A wilder bean

However, the great challenge in EJB 3 isn't the session beans but the entity beans (CMP/BMP). The principle for saving these to a database has seen a real change. Gone are the monoliths that were tied to the mother ship, the application server.

An entity bean is marked with “@Entity”.

/** Our customer Pojo. */
@Entity public class Customer {
    private Long id;
    private String name;
    // A customer has one address.
    private Address address;
    private Set orders = new HashSet();

    // Our primary key, including how new IDs are generated.
    @Id(generate = GeneratorType.AUTO)
    public Long getID() {
        return id;
    }
    protected void setID(Long id) { = id;
    }

    // A customer has zero or many orders.
    @OneToMany(cascade = CascadeType.ALL)
    public Set getOrders() {
        return orders;
    }
    public void setOrders(Set orders) {
        this.orders = orders;
    }
}

All fields in the class are saved unless otherwise stated. Access to these fields is typically based on get and set methods in precisely the same way as for ordinary JavaBeans.

Both the name of the table and the columns are calculated in this case from the names we gave them in the code. Naturally, we can specify something different, but for the sake of simplicity we take a straightforward approach here. It is also possible to put the configuration in separate files – as previously – and in this way we could override the meta settings.
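The defaults follow the ordinary JavaBeans property rules. As a quick illustration (this is not the container’s code, and the stripped-down Customer stub below is hypothetical), the standard java.beans.Introspector shows which property names a persistence provider would have to work with:

```java
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical stand-in for the article's entity.
class Customer {
    private Long id;
    private String name;
    public Long getID() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

public class NamingDemo {
    // Lists the JavaBeans property names a persistence provider would see.
    public static List<String> propertyNames(Class<?> c) throws IntrospectionException {
        List<String> names = new ArrayList<>();
        for (PropertyDescriptor pd
                : Introspector.getBeanInfo(c, Object.class).getPropertyDescriptors()) {
            names.add(pd.getName());
        }
        Collections.sort(names);
        return names;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(propertyNames(Customer.class)); // [ID, name]
    }
}
```

Note that getID yields the property name "ID", not "id" – the JavaBeans decapitalization rule leaves a leading run of capitals alone.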

The first reference to another class is "Address". If it is marked as persistent, these instances are also read and written automatically.

Our primary key is “id”, which also contains information on how to generate new IDs.

We have also generated an Order pojo in a similar manner. The relation to this is written with @OneToMany, as a customer has several orders. Order relates back to our Customer in the same way as the Address field above, thus the relation is two-way, which differs in outlook from a relational database. You will find an example of this further down in the article.

If we have read in a customer from the database, we have also automatically received that customer’s Orders. This will be an object tree where the root is the customer with his primitive values and references to the other sub-object Address and the list with Order.

Finally, the relation from customer to order is set as "cascade=ALL", which specifies how events such as "save", "delete", etc are to be propagated. In the example we pass everything on to the order objects in the list.

Read a little, write a little

Let’s move on to retrieving a customer from the database. For that we create a session bean that does just that. First we must acquaint ourselves with what replaces the application server: the EntityManager. In a loose sense this class represents our database/session and manages the lifecycle of our pojos. It also helps us generate queries (Query) against the database.

First a session bean:

@Stateless public class OrderEntryBean {
    @Inject private EntityManager em;
    // ... finder and order methods go here
}

@Inject allows us to manage our pojos, i.e. get at the database – search, create, etc. The EntityManager can come from anywhere; it need not be an application server, but perhaps a somewhat simpler test bench.
Now we can insert a method for searching for customers who have a certain name.

public List findByName( String name ) {
    Query q = em.createQuery(
        "SELECT c FROM Customer c WHERE c.name LIKE :custName" );
    q.setParameter( "custName", name );
    return q.listResults();
}

The result is a list of the customers whose names matched. Retrieving their values is quite simply a matter of looping over the list and calling get and set methods.
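Sketched in plain Java (with a hypothetical, stripped-down Customer that has only a name property), reading the result list is no more than this:

```java
import java.util.List;

// Hypothetical minimal Customer, standing in for the entity class.
class Customer {
    private final String name;
    Customer(String name) { this.name = name; }
    public String getName() { return name; }
}

public class ResultLoop {
    // Collect the names by looping over the result and calling getters.
    public static String names(List<Customer> result) {
        StringBuilder sb = new StringBuilder();
        for (Customer c : result) {
            if (sb.length() > 0) sb.append(", ");
            sb.append(c.getName());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<Customer> result = List.of(new Customer("Ada"), new Customer("Bo"));
        System.out.println(names(result)); // Ada, Bo
    }
}
```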
Now we’ll make it a bit more difficult and add an order.

public void enterOrder( int custID, Order newOrder ) {
    Customer c = em.find( "Customer", custID );
    c.getOrders().add( newOrder );
    newOrder.setCustomer( c );
}

The relation is two-way and therefore they both require a reference to each other.
As yet the object is not in our database. Somewhere we have to tell our EntityManager to do the job and save our customer "c".


The customer is already in the database and will therefore be ignored. The event "save" is passed on to the sub-objects (cascade=ALL), so the order ends up stored in the database.


The new specification does not just involve simplifications. There are also perfectly ordinary improvements to EJB QL, for example:

  • bulk processing of delete and update,
  • group by, having,
  • subqueries and dynamic queries,
  • and more functions (UPPER, CURRENT_DATE, etc) and so on.

One of my favourites is probably polymorphism, which can be seen already in a simple query:

SELECT avg(e.salary) FROM Employee e

So if Manager inherits from Employee, we obtain a calculated mean for everyone – both ordinary employees and managers. Though maybe mixing up salaries like this makes it a less interesting example ;-).
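Outside the database the same polymorphism applies: a Manager is an Employee, so it takes part in the average. A small in-memory sketch, with made-up classes and salaries:

```java
import java.util.List;

// Hypothetical classes mirroring the query's entities.
class Employee {
    private final double salary;
    Employee(double salary) { this.salary = salary; }
    public double getSalary() { return salary; }
}

class Manager extends Employee {
    Manager(double salary) { super(salary); }
}

public class AvgSalary {
    // The in-memory analogue of SELECT avg(e.salary) FROM Employee e:
    // managers are included, since a Manager is an Employee.
    public static double average(List<Employee> all) {
        double sum = 0;
        for (Employee e : all) {
            sum += e.getSalary();
        }
        return sum / all.size();
    }

    public static void main(String[] args) {
        List<Employee> staff = List.of(new Employee(100), new Manager(300));
        System.out.println(average(staff)); // 200.0
    }
}
```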

When and how?

If we use the number of pages in the specification as a measure of how much remains to be done, then there is a lot. EJB 2.1 weighs in at a brave 646 pages. The draft of version 3 is more modest at 93. Moreover, many aspects invite discussion. At the time of writing, a second draft has been released. It is now divided into two documents – persistence is kept separately. Combined they land at nearly 200 pages.

It’s going to take time. I hardly dare guess where the horizon lies. Shall we say a year before the specification is ready? And should we add another year until it is generally available? True enough, this is my pessimistic side talking – I hope. There are already some versions which, so to speak, comply with the standard. That’s why “now” is a good time to prepare for what is to come.

Would you like a deeper understanding of what it’s like to develop such a solution? Trying out a Hibernate/XDoclet combo is a good idea, since it is so similar to EJB 3 persistence. If you are interested in experimenting, JBoss has a beta implementation of the EJB specification based on Hibernate.

Brave new EJB?

Yes, in fact. Sun has chosen the right path to take. And for that they deserve praise. It requires courage to break with old doctrines. The “old” EJB was a heavy affair which hasn’t exactly made life any easier for us programmers. But – and there’s always a but – the new specification is both simpler and more complex. The complexity is tied up with the new power that object orientation, pojos, etc gives us. But extra
power tends to require extra learning. There are going to be quite a few more ups and downs before we can set the “pedal to the pojo”.

JSR #220

Originally published in JayView.

A personal view: JavaOne 2004

August 4, 2004

From every big and important conference there has to be an objective and comprehensive report – this is not that article. Instead you get my personal observations, spiced with assorted speculation and opinion.

With that warning out of the way, and without further reflection, let’s dive into the Java world’s biggest conference for programmers – JavaOne 2004.


One of the most important driving forces at Sun lately is EoD, Ease of Development. The idea that working with Java must get easier has gained a strong foothold. Thank you, Microsoft! Thanks for Dotnet, and for forcing Sun to compete on new terms.
Take, for example, Sun Java Studio Creator (SJSC) 1.0, which was presented during the conference. It is an ordinary development environment dressed up as Visual Basic. That may not sound very exciting, but the fact is that both the interface and the generated solution look good – even allowing for the glossy surface of the demo. The list of acronyms – JSP, JSTL, JSF, etcetera – is exactly the ordinary Java one could hope for. Sun may very well succeed in attracting former VB programmers, and perhaps even old hardened Java hands who want to run J2EE with an airbag.

Imagine creating a prototype in SJSC and then ripping out individual layers and replacing them with something more industrial. I’m keeping my fingers crossed that Sun does the right thing. A little draw-drag-and-test code wouldn’t hurt for once.

More EoD – EJB 3

But there are other, more far-reaching effects of EoD. Take metadata, i.e. the ability to say things about the code. You could simply state that a method is "remote" and presto – the files needed to create a web service are generated. The best example is probably EJB 3.0, which had some really simple PowerPoint slides – they were quite simply empty. The little that remained was expressed as metadata in the code. The applause was immediate.

I have to admit I never liked EJB, and CMP in particular. When talking about its shortcomings I have sometimes felt rather whiny. The EJB expert group has made a sharp turn and created a far more elegant solution. Think simple JavaBeans with metadata (like Hibernate with XDoclet) and you get a pretty good picture. Such a new course means old truths are put to a hard test. How about these quotes from the conference: "You shouldn’t have to be a rocket scientist to be an EJB-developer", or "the Data Transfer Object [DTO] anti pattern"? The latter is nothing less than the death of a dogma of what good EJB programming used to mean.

Somewhat unexpectedly, perhaps, they ended up in the same bed as JDO – Java Data Objects – one of Sun’s other persistence solutions. The discussions, both behind and in front of the curtain, were intense during the conference. Thankfully, results did not take long: the two expert groups have just published an open letter promising to work out a common solution. It will take time, but it is definitely a step in the right direction. I have to admit I probably won’t be able to whine much longer.

There are many more examples, and what these news items have in common is EoD. We will be seeing more of this catalyst.

SOA, now with an XL hype factor

There seems to be a law of nature that every conference must obey – at least one topic has to be talked up. The buzzwords must flow, and an exact explanation is missing. This year I’ll venture to nominate SOA as the main candidate. Service Oriented Architecture was on many speakers’ lips, and the idea of a kind of common architecture is tempting. I could not, however, figure out exactly what separates SOA from web services; possibly the latter is a special case of SOA. Oh well, dig your feet into the soil and be prepared to hear more about this concept. There are many interesting ideas, but also plenty of platitudes we have heard before. It will take time, but chances are it is coming to a server near you.

Clients, fat and mobile

We have had to live for several years with anorexic clients (read: browsers with interfaces so miserable only a mother could love them ;-) but that seems to be changing. It is no longer a given that a client must be available everywhere at the expense of user interface, functionality, and so on. During the conference there were several examples of varied clients – everything from promises of being able to generate fat clients "in the next version" to tailor-made mobile variants.
Are your users still putting up with thin shells that blink and flicker? My guess is that in the coming years more of us will be digging out the old books on GUIs and Swing, because we have forgotten what we once knew.


Sun’s PR department has struck again. Last time this happened they decided to use both 1.2 and 2 as numbers for the same release – the former was the JDK, the latter the broader and fuzzier Java 2 Standard Edition. Now we count the whole splendour up to Java 5.0 – or was that the JDK? And what was J2SE going to be called again…
Let’s take it one more time: 1.5 is the internal number, 5.0 the external. And the two in J2SE is the extraterrestrial number, then?

Far out

The first candidate in this class is Nasa and their Rover – the Mars lander. Both the rover itself and the programs around it use Java. It was fun to hear about their solution, robot arms and all the other things in their pictures. When they were asked which database they had used to store all the image data, a delighted giggle spread through the audience – MySQL. Apparently you don’t need much more if you’re only going to Mars.

The second candidate is Groovy, a new language from veterans like James Strachan (Geronimo, jelly, dom4j, maven, Jakarta commons etc.). They had noticed that a lot of coding goes into gluing different components together. Why not invent a new language that makes this, and much else, easier? Thus Groovy was born. Some absolutely minimal examples:

str = "testing"
// Note: no semicolon!

c = str[-1]
// c is now 'g'

s1 = str[3..1]
// s1 is now "tse"

v = [ 1, 2, 3 ].find { it > 1 }
// v is now 2

In the list [1, 2, 3] we look for the first entry that satisfies the condition we passed in ("it" is a kind of default parameter). In other words, we can pass along code to be executed!
You need many more examples to understand how cool this is, so I won’t even try to do it justice here. Groovy did, however, make my programmer fingers itch.
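For comparison, the closest plain-Java counterpart of that find call (sketched with the streams API; the name "it" is just an ordinary lambda parameter here, mimicking Groovy’s implicit one) might look like:

```java
import java.util.List;

public class FindDemo {
    // Return the first entry that satisfies the condition - Groovy's find.
    public static int firstGreaterThanOne(List<Integer> list) {
        return list.stream()
                .filter(it -> it > 1)  // the condition we pass along
                .findFirst()
                .get();
    }

    public static void main(String[] args) {
        System.out.println(firstGreaterThanOne(List.of(1, 2, 3))); // 2
    }
}
```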

Numbers and visions

When Sun’s management spoke, there were a lot of numbers. The number of Java X, the number of millions of dollars, the number of… well, you get the idea. All of it was certainly interesting, and the vision was, as ever, "Everything and everyone connected to the Internet". And I undeniably got the feeling that Java is everywhere. Scott McNealy put it more subtly when Sun received an award for its market-leading Java Card and its dominance:
We prefer "interesting market share".

So where was the pure, raw vision, the passion? Mostly I found it in the various groups, everything from expert groups within Sun to, perhaps above all, the various communities. Here were the crazy ideas, the promising future technologies and the interesting tools. They certainly won’t make our lives easier in the short term, but boy, am I going to have fun!

Originally published in JayView.

AOP featuring Rickard Öberg

February 2, 2004

I didn’t know what to expect, but the appearance of Rickard Öberg – a renowned programmer on the Java world scene – surprised me.
Apart from having written a book on RMI, he has been one of the core programmers behind JBoss. He instigated the XDoclet tool. WebWork, winning Java programming competitions, lectures and other achievements are also on his CV. Moreover, he’s not your average online personality – or perhaps he is? Does anyone remember the Pet Store debacle*?

* Microsoft vs. Sun over the J2EE blueprint called PetStore. It resulted in a hot debate with many arguments.

I was sort of expecting to meet someone, well, far out. Instead this wholesome dude with a four-day beard greets me with a smile. I can’t help but think of a strong caffè latte: smooth milk blended with mind-altering caffeine.

Q: SiteVision is the content management system (CMS) sold by the company you work for, SenseLogic. OK, without any prenuptial agreement, let’s get right to the first question: why?
Why do you use aspect-oriented programming in SiteVision? After all, there aren’t that many product-grade systems using aspects as a core design choice.

A: There are several reasons. When we began writing SiteVision we soon realized that we had to encapsulate reusable code in a number of places, especially features that were side effects of method calls. Objects had to be saved on setter calls, events had to be sent, logging had to be done, transactions had to be managed, and so on. We couldn’t figure out a way to do it using standard technology, e.g. EJB, so the only solution was to use AOP.

We were also on a very tight schedule. In just a couple of months we had to build a commercially viable CMS-tool using only five developers, and after that we had to maintain and develop it at such a rate as to catch up, and pass, our competition. It really should be impossible, and it would be with standard technology. The solution was to avoid writing code by using AOP. AOP allows an extreme degree of reuse of code, and because of this we managed to write lots of features with a minimal effort, and since we avoided writing unnecessary code to a large extent it also became easier to maintain. In addition, since AOP makes it easy to add functionality incrementally we were also able to enhance the architecture and implementation as we went along without having to change much of what was already written.

Q: Why roll your own aspect engine? Why didn’t you decide to use AspectJ or some other aspect-inspired tool?
A: After the decision to use AOP was made I researched the available tools on the market, and compared their feature sets with our requirements, especially with regard to code maintenance, performance, scalability. None of the tools, including AspectJ, had been written with such requirements in mind, so we had no alternative but to write our own tools. My background in EJB made it possible for me to combine the principles of AOP with our requirements in a good way.

In hindsight I can conclude that it was the right decision, and if I were to make the same decision today I would still do the same. There is, in my opinion, still a lack of good AOP-tools that support the demands of a complex server side application. The awareness of the importance of such tools is on the increase however, so I would be surprised if this has not changed by next year.

Q: AOP being a true paradigm shift, what are the biggest changes when going aspect?
A: There are two things that stand out, from a design and implementation point of view. First of all it takes some time to learn to recognize "cross-cutting concerns", that is, code that can be extracted into an aspect. Second, the use of static introductions as a design tool gives one a whole new set of possibilities with regard to domain model design. Instead of using inheritance as the main building block for reuse of code, one would use design by composition, which gives you many more possibilities for combining existing components. It takes a while to get used to this new freedom, but once you do, everything is fine.

Q: SiteVision is soon to go version 2. Has your view on AOP changed during the trip?
A: In the beginning I was, as most others seem to be now, fascinated with the concept of interception. What differs now, almost two years later, is partly a realization of the importance of static introduction, and partly the understanding of how important it is to have a powerful pointcut model. AOP has so much more to offer than just method call side effects.

As for AOP in general, it is great to see that more developers are beginning to see the advantages of it, even though the examples often are so trivial that it’s hard to see the point. I’m also waiting for more people to realize just how powerful static introductions are as a design tool, and the consequences this has for design patterns and similar issues.
It’s going to be quite a revolution, and several tools that exist today in order to avoid the problems of current system design techniques are going to have a tough time.

Q: Do you dare to mention tools, or types, that will have problems?
A: I suppose XDoclet is the most obvious one. We are not using any code generation at all in SiteVision, and that is solely because we have been able to avoid all the problems of standard design principles by using AOP.

Q: Let’s pull out the tarot cards. What’s in the future for AOP?
A: The theoretical part of AOP seems to be stabilizing, but we will probably see more tools in the AOP space. Some people ask, “Why not only use AspectJ?” but I think that’s like saying, “Why not just drive BMW?”. There’s nothing wrong with a BMW, but sometimes a Volkswagen works better, and sometimes a Porsche is the only choice.
As usual it’s about choosing the tool that best matches your requirements, and I can’t see AspectJ being able to match all different kinds of requirements at the same time.

Apart from the AOP frameworks, it will also be very important to have good support tools, especially with regard to visualization of AOP systems. Since AOP to a large extent is about fragmenting and componentizing code, it can sometimes be difficult to see what a component does just by reading the code. A visualization tool for AOP can put together all the different pieces that make up the whole and present them as one unit, even though they have been developed separately. Access to this kind of tool is, of course, crucial.

We have built such a tool ourselves for our own system, and it has been very helpful when we are expanding the system and when we are looking for bugs.
The step after that, I guess, is the possibility to build packaged aspects that can be reused in your own projects. However, predicting component markets has proven to be a notoriously difficult task, so I’m not going to bet on it.

Q: …and the future for SiteVision?
A: Without revealing too much we can see that our framework and way of building systems gives us interesting opportunities to build application using AOP which run in SiteVision as portlets. Our next version expands the product into the portal segment, and the next logical step after that is to include document management features. We are also looking into how to provide scalability through clustering, and have some rather unique ideas there. But, one thing at a time.

Q: For the finale: what are your top five pieces of advice for the wannabe aspect programmer?

  1. Read "Design Patterns" (the GoF book)
  2. Read the documentation for AspectJ
  3. Find a medium size example application and implement it fully using AOP. Pick the framework which best suits your needs. I would start looking at AspectJ for client side development and AspectWerkz for the server side.
  4. Apply design patterns using AOP as far as possible.
  5. Read the AOP-entries in my blog 🙂

It is somewhere around here, at the end of the interview, that I start to long for a good cup of coffee, some Java and lots of aspects 🙂

SenseLogic: http://
Rickard's blog:

Originally published in JayView.

Yet another Aspect on your code

January 20, 2004

Have you ever copied certain snippets of code – over, and over, and over again? Even though you already have a clean object-oriented architecture? Even though it is full of the best design patterns to be found? Even though you picked the best tools and frameworks? All the same, certain lines – logging, say – sit both in your spine and in every class. What is wrong? And how can you avoid it? Aspect-oriented programming (AOP) may be the answer – if nothing else, it is a paradigm shift worth a few brain cells.

Even though we try our utmost to model our systems to perfection – well, at least very, very close – it is still hard to capture the structure of a problem domain. One symptom is that we copy much-needed little pieces of code and put them everywhere. Not only that: classes that would ideally solve just one task are forced to know about security, databases, logging and so on. All of this leads to systems that are hard to maintain and change.

What would happen if we could pull all the logging out of our projects and put it in a separate class – an aspect? Instead of spreading a given solution throughout our system, we can now maintain it in a single place. When the program is compiled, the code is woven back in where we had it before.

To better understand what aspects are, we will take a closer look at AspectJ, an open source flavour of AOP. It has been around for several years and was created at Xerox PARC as a research project funded by DARPA.

An example: where does the logging code actually live?

A common sight is logging spread across an entire system. When we introduce an aspect, we get a single place to maintain it.

Joinpoints, pointcuts and advice

In aspect-oriented programming, all these little pieces of code can be collected in a single place. But we still have to be able to run them at given points in the program, don’t we? The solution in AspectJ is to identify points in the execution of a program – so-called joinpoints.
There are several kinds of joinpoints to identify: method calls, object creation, setting an attribute, throwing an exception, and so on. A pointcut collects a set of joinpoints and gives them a name.
By specifying code – advice – to run at these joinpoints, we can collect crosscutting concerns in a single file – an aspect. Or, put differently:

aspect = pointcut(joinpoints) + advice

An aspect consists of a set of identifiable points in the program where extra code is to be run.
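AspectJ weaves the advice in at compile time, but the core idea – advice wrapped around a joinpoint – can be sketched with nothing more than the JDK’s dynamic proxies. The Greeter interface and the logging "advice" below are made up for illustration:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// A made-up interface; each call to it is our "joinpoint".
interface Greeter {
    String greet(String name);
}

public class ProxyAspect {
    // Wraps logging "advice" around every method call on the target.
    public static Greeter withLogging(Greeter target, StringBuilder log) {
        InvocationHandler advice = (proxy, method, args) -> {
            log.append("before ").append(method.getName()).append('\n');
            Object result = method.invoke(target, args); // like proceed()
            log.append("after ").append(method.getName()).append('\n');
            return result;
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(), new Class<?>[] { Greeter.class }, advice);
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Greeter greeter = withLogging(name -> "Hello " + name, log);
        System.out.println(greeter.greet("Björn")); // Hello Björn
        System.out.print(log); // before greet, then after greet
    }
}
```

Unlike AspectJ, this only intercepts calls through an interface, but the before/proceed/after shape of the advice is the same.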

What else can you do?

We have only gone through a fairly simple example further down, but there are many things AspectJ can be used for:

  • Various forms of logging and debugging.
  • Design by contract, i.e. checking incoming and outgoing state and parameters. Once you are satisfied that the system meets its requirements, these aspects can be removed.
  • Trying out optimizations, alternative solutions, etc.
  • Caching, synchronization, access control, etc.
  • … and more technical aspects that have nothing to do with our problem domain.

So far we have only talked about dynamically changing a program, i.e. affecting its flow of control. But it is also possible to change the static structure, by adding class variables or methods, changing inheritance, and so on. This opens up further possibilities:

  • Multiple inheritance, like mix-ins in C++.
  • Enforcing a coding standard: "No one may change a variable directly". This check can even be done at compile time!

The list could be made much longer. We are only at the beginning of understanding where and how aspect-oriented programming is best used. Even though AOP will not replace object-oriented programming (OOP), OOP will be the foundation of AOP in the same way that functions are the foundation of OOP.

A simple (well, almost) example

We could walk through a simple "Hello Aspect World", but I’ll dive a little deeper instead. I want to measure how long a database call takes in our home-grown model layer.
Our model code contains an abstract class that defines an interface for working against the database. The Person class represents a row in the person table and implements the actual SQL calls against the database in the methods dbSave, dbUpdate and dbRemove. In a fit of hubris, I insert myself into the database. Demo code:

Our simple model layer, encapsulating the database.

DBObject tPerson = new Person();
tPerson.setField("firstName", "Björn");
tPerson.setField("lastName", "Granvik");
tPerson.setField("company", "Jayway");
tPerson.dbSave();

The same old solution

Now let’s do some simple timing. The old way means copy-and-paste, as much as we can stand. For example, Person.dbSave might look like this:

void dbSave() {
    long tTime = System.currentTimeMillis();
    // Do something...
    tTime = System.currentTimeMillis() - tTime;
    System.out.println( "dbSave: " + tTime );
}

And the same goes for the other methods, dbUpdate and dbRemove. Even if we improve the situation by introducing a timer class, it only gets marginally better – it is still stupid code! Besides, it will take time to execute. Nothing we want to keep in a production build.
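That marginal improvement could look like the hypothetical timer class below: the clock code now lives in one place, but every measured method still has to call it explicitly, so the crosscutting concern has not gone away:

```java
public class MethodTimer {
    // Times a piece of work and prints the elapsed milliseconds.
    public static long time(String label, Runnable work) {
        long start = System.currentTimeMillis();
        work.run();
        long elapsed = System.currentTimeMillis() - start;
        System.out.println(label + ": " + elapsed);
        return elapsed;
    }

    public static void main(String[] args) {
        // Every db method still needs a call like this one.
        time("dbSave", () -> {
            // Do something...
        });
    }
}
```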

Brave new aspect

Let’s look at how the problem could be solved with aspects. Our new solution contains some new words added to the Java language, shown in the listing below, but it is not as hard as it first seems.

/* A timing aspect on DBObject.db methods */
public aspect DbTimerAspect {

    /* Public DBObject methods starting with 'db',
     * taking no parameters and returning anything. */
    pointcut dbMethods() : execution( public * DBObject.db*());

    /* A simple timer advice to be called instead
     * of the original method. */
    void around() : dbMethods() {
        long tTime = System.currentTimeMillis();
        proceed();    // Proceed with call to DbObject.
        tTime = System.currentTimeMillis() - tTime;
        System.out.println( thisJoinPoint + ": " + tTime );
    }
}

Our DbTimerAspect aspect contains both the selection of methods we want to measure and the timer we want to run.
The pointcut dbMethods selects the public methods in DBObject and its subclasses whose names start with "db". We also state that it is the execution of the method that interests us. There are more variants to choose from here: when the method is called, when an attribute is changed, when an object is created, and many more kinds of joinpoints.

When the demo code calls Person.dbSave, it is our advice that runs instead. The keyword for this is around. We could of course have specified that it should run before, after, and so on. The first thing we do is read the time. In the next step we call the original method with proceed. Now Person.dbSave runs as usual.

When we return, we calculate the time spent in the call and print it on the console. Note that we have access to the current joinpoint through thisJoinPoint.

The result

When we compile our code, we do it with the AspectJ compiler. It generates pure Java byte code, i.e. the code can be run just as usual.

The result on the console:

execution(void se.jayway.jayview.aop.Person.dbSave()): 111
execution(void se.jayway.jayview.aop.Person.dbUpdate()): 30

Pretty much what we expected.


This article really isn’t enough to grasp the full extent of what can be done with aspects. If we compare it to a wine tasting, we have only had time to look at the label. But our little example still points to the great possibilities aspects offer.
So when and where should AOP be used? As yet, aspects are not "proven". How will they scale? How is the architecture affected in production, etc.?
On the other hand, improving how we develop these systems is very interesting. My advice is therefore: create aspects that support your coding standard, test variants of your solution without having to modify the code, build tools such as logging aspects, etc.
If I dare venture a guess, I believe that within a few years we will have such "technical" aspects as open source. But I may be wrong – they might arrive in a year or so 🙂


joinpoint – A point in the execution of a program, for example the call of a method in a class.
pointcut – A collection of selected joinpoints.
advice – Code that is run at a selected pointcut under given circumstances; for example, a call can be logged before it is executed.
crosscutting concerns – A system usually has several goals to meet. Besides solving the base problem (e.g. payroll), there are also tasks such as logging, security, etc. These tasks, or concerns, are typically spread throughout the system – they cut across its structure.
OOP – Object-oriented programming: a programming technique and mindset that makes it possible to handle greater complexity and increase reuse.
AOP – Aspect-oriented programming: building stand-alone modules that handle crosscutting concerns in such a way that they can be seamlessly woven back into the code, e.g. at compile time.

Eclipse, AspectJ plug in
JavaWorld: I want my AOP!
AOSD – Aspect Oriented Software Development

Originally published in JayView.

Applets for grown-ups

August 4, 2001

Java Web Start (JWS) is a technology that allows automatic distribution of Java applications over the web. These applications can either run in a restricted environment, like applets, or, if required, get full access to the client machine’s resources. Applets – but this time for grown-ups.

JWS applications are typically started the first time from a link on an HTML page, at which point the application is downloaded to the client. After that, it will only check the server for upgrades. But JWS applications are in no way tied to the browser; they run completely stand-alone, which means they can also be started like ordinary applications.

Another interesting feature is the dynamic resource handling. The first time an application runs, only a minimum of it is downloaded; additional data is then downloaded on demand at runtime.

To provide these features, underneath JWS sits an API called the Java Network Launching Protocol (JNLP). It contains a range of services that allow sandboxed applications to save and open files, access the clipboard and printers, and more. The user must, however, approve such operations via a dialog.

What does it take?

So what does it take to run JWS applications? The client needs Java Runtime Environment 1.2.2 or later and Java Web Start 1.0.1 or later – nothing out of the ordinary. In addition, you need a standard web server that can register new MIME types.

Developing JWS applications requires no special code; execution starts the same way as for ordinary applications. Development involves the following steps:

  • Write the application code.
  • Package the application and its resources into one or more jar files.
  • Sign the jar file(s).
  • Create an XML file describing the application, called the JNLP file.
  • Put everything on the web server, together with a suitable HTML file.

The first time, you must also make sure your web server recognizes .jnlp files by registering the extension as a new MIME type.
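A JNLP file is a small XML descriptor. A rough sketch – the codebase, file names and main class below are invented for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" codebase="http://www.example.com/app" href="app.jnlp">
  <information>
    <title>Demo Application</title>
    <vendor>Example Inc.</vendor>
  </information>
  <!-- Only needed when the application wants full system access;
       requires the jar files to be signed. -->
  <security>
    <all-permissions/>
  </security>
  <resources>
    <j2se version="1.2.2+"/>
    <jar href="app.jar"/>
  </resources>
  <application-desc main-class="com.example.Main"/>
</jnlp>
```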

A fat client’s salvation

Reading the headlines from the days when applets were going to save world peace, it is hard not to smile. Java Web Start feels like client/server with the right touch. If you need fat clients it is definitely worth checking out – otherwise you are likely to end up developing exactly the same code yourself.

JWS is best used by applications under intensive development, or by applications whose upgrades must quickly reach a number of users. Another need may be to break the ties a traditional applet has to its browser. Or if you just feel like taking it easy.

Originally published in JayView.