Wednesday, December 19, 2012

A TDD approach using Spring Framework + Mockito + Lombok

Intro

This is a simple Hello World Java application that combines Spring Framework with Mockito and Lombok. It's just for demonstration purposes, so the presented example is very simple and has been used many times before in the literature.

The final goal is to show how simple it is to combine some of the best features of each technology involved, with Test-Driven Development in mind:

  • Spring Framework: wiring
  • Mockito: isolation
  • Lombok: simplicity

Example

A text editor uses a spell checker, among other things, to make the user's life easier. Depending on the target language, a spell checker may change its behavior and its dependencies as well, for example dictionaries and tokenizers. Simplifying the whole idea, you may end up with a contract-driven (interface-based) design with many possible combinations of spell checkers, dictionaries and tokenizers, where every contract gets replaced by a concrete instance. Combinations are many, and sometimes you cannot predict all of them in a class/interface design.
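The original diagrams aren't reproduced here, but a minimal sketch of those contracts could look like the following. Only Editor and SpellChecker are actually developed later in this post; the Dictionary and Tokenizer signatures and the EnglishSpellChecker combination are illustrative assumptions (each type would live in its own file under demos.sf.editor):

    public interface Tokenizer {                  // splits a text into words
        Iterable<String> tokenize(String text);
    }

    public interface Dictionary {                 // knows the valid words of a language
        boolean contains(String word);
    }

    public interface SpellChecker {               // checks the spelling of a whole text
        void check(String text);
    }

    // one concrete combination among many possible ones
    public class EnglishSpellChecker implements SpellChecker {
        private final Dictionary dictionary;      // e.g. an English dictionary
        private final Tokenizer tokenizer;        // e.g. a whitespace tokenizer

        public EnglishSpellChecker(Dictionary dictionary, Tokenizer tokenizer) {
            this.dictionary = dictionary;
            this.tokenizer = tokenizer;
        }

        @Override
        public void check(String text) {
            for (String word : tokenizer.tokenize(text)) {
                if (!dictionary.contains(word)) {
                    // flag the word as misspelled (see use case 3 below)
                }
            }
        }
    }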

You want to engage TDD for sure, and you want to do it in a top-down fashion. That means testing the text editor first, then the spell checker, then the remaining pieces. You also want to do it in isolation, so the spell checker and the remaining implementations aren't involved in the editor tests, and you want zero infrastructure code (no constructor calls). As a plus, no getter, setter or trivial constructor implementations.

What to test?

Our tests will be very simple as well; here is the top-down ordered list (remember it's a simplification of reality):

1- Given an Editor, when new text is pasted it should be added.
2- Given an Editor, when new text is pasted the spelling should be checked against the pasted text.
3- Given an English spell checker, when a word is not found in the dictionary during the spell check, it must be flagged as misspelled and returned.


You may be wondering: why should I test these obvious use cases? Trust me, it's very important to cover all possible scenarios if you really want to deliver software with built-in quality. This is the hidden synergy behind TDD.


Hands on Spring Tool Suite

  1. Open STS and create a new Maven project.
  2. Check 'Create a simple project (skip archetype selection)'.
  3. Enter a Group Id = demos.sf.editor, Artifact Id = spring-hello-world and Version = 1.0.
  4. Edit your pom.xml and append all the required dependencies. It should end up like this:
  5. 
    <project xmlns="http://maven.apache.org/POM/4.0.0"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
            http://maven.apache.org/xsd/maven-4.0.0.xsd">

      <modelVersion>4.0.0</modelVersion>
      <groupId>demos.sf.editor</groupId>
      <artifactId>spring-hello-world</artifactId>
      <version>1.0</version>
      <name>Spring Framework Hello World</name>
      <description>Spring Framework Hello World demo w/ Unit Tests</description>

      <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spring.version>3.2.0.RELEASE</spring.version>
      </properties>

      <dependencies>
        <dependency>
          <groupId>junit</groupId>
          <artifactId>junit</artifactId>
          <version>4.10</version>
        </dependency>

        <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-test</artifactId>
          <version>${spring.version}</version>
        </dependency>

        <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-beans</artifactId>
          <version>${spring.version}</version>
        </dependency>

        <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-context</artifactId>
          <version>${spring.version}</version>
        </dependency>

        <dependency>
          <groupId>org.mockito</groupId>
          <artifactId>mockito-all</artifactId>
          <version>1.9.5</version>
        </dependency>

        <dependency>
          <groupId>org.projectlombok</groupId>
          <artifactId>lombok</artifactId>
          <version>0.11.6</version>
        </dependency>
      </dependencies>
    </project>
    
  6. Start testing! A golden rule, don't forget it. Create a new JUnit 4 Test Case at src/test/java. Name it EditorTest in package demos.sf.editor. 
  7. Delete the existing test method, called test, and append a new failing test for the first use case, named testPasteSuccess:
  8. package demos.sf.editor;
    
    import static org.junit.Assert.fail;
    
    import org.junit.Test;
    
    public class EditorTest {
    
     @Test
     public void testPasteSuccess() {
      fail();
     }
    
    }
    
    Making the test fail is very important: if you start failing you won't forget it until it's fixed and the test passes. So the "last thing" to do is to remove the failing statement.
  9. Write the assertion first. How's that? See use case 1: "... the text should be added":
  10. ...
    import static org.junit.Assert.assertEquals;
    
    public class EditorTest {
    
     @Test
     public void testPasteSuccess() {
      String expected = "Hello everybody!";
      String actual = editor.getText();
      assertEquals(expected, actual);
      fail();
     }
    }
    The code above will fail to compile, because there isn't any variable/field called editor. This is perfectly normal in TDD: guide your design only by needs (the tests).
  11. Right-click on editor and choose "Create local variable...". Change its type from Object to Editor, a non-existing class:
  12. public class EditorTest {
    
     @Test
     public void testPasteSuccess() {
      Editor editor;
    
      String expected = "Hello everybody!";
      String actual = editor.getText();
      assertEquals(expected, actual);
      fail();
     }
    }
    
    The code above still doesn't compile, due to the non-existing class Editor. Right-click on Editor and choose "Create a new class...".
  13. Annotate the Editor class for auto-implementing the getter using Lombok annotations:
  14. package demos.sf.editor;
    
    import lombok.Getter;
    
    public class Editor {
     @Getter
     private String text;
    }
    
    NOTE: Lombok must be attached to Spring Tool Suite (or Eclipse) for completion at development time. Copy your lombok.jar to STS installation folder and append the following settings to your STS.ini (eclipse.ini):
    -javaagent:/home/lago/Soft/springsource2.9.2/sts-2.9.2.RELEASE/lombok.jar
    -Xbootclasspath/a:/home/lago/Soft/springsource2.9.2/sts-2.9.2.RELEASE/lombok.jar
    
    Alternatively, open a terminal/command prompt and run:
    $ java -jar ~/.m2/repository/org/projectlombok/lombok/0.11.6/lombok-0.11.6.jar
    
    An install wizard gets launched. Choose your STS.ini (eclipse.ini) and press Install/Update. Finally restart your IDE.
  15. Go back to the test, and perform the call to a non-existing paste() method:
  16. public class EditorTest {
    
     @Test
     public void testPasteSuccess() {
      Editor editor;
      editor.paste("Hello everybody!");
    
      String expected = "Hello everybody!";
      String actual = editor.getText();
      assertEquals(expected, actual);
      fail();
     }
    }
    
  17. Press Ctrl+1 on top of the editor.paste("...") call and choose "Create method paste()...". A new method is generated:
  18. public class Editor {
     @Getter
     private String text;
    
     public void paste(String cut) {
     }
    }
    
    Go back to the test, don't waste your time implementing anything at this moment, the test will drive you to that point later.
  19. In the test, the only missing thing is the editor initialization. Let's inject the Editor at test time. Convert the editor local variable to a field declaration and annotate it as @Autowired:
  20. ...
    import org.springframework.beans.factory.annotation.Autowired;
    
    public class EditorTest {
    
     @Autowired
     private Editor editor;
    
     @Test
     public void testPasteSuccess() {
      editor.paste("Hello everybody!");
    
      String expected = "Hello everybody!";
      String actual = editor.getText();
      assertEquals(expected, actual);
      fail();
     }
    }
    
  21. Create a new Spring Bean Configuration file at src/test/resources/test-applicationContext.xml and declare the editor bean in it:
  22. <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans.xsd">

      <bean id="editor" class="demos.sf.editor.Editor" />

    </beans>

  23. Use the Spring test runner and load the application context via annotations:
  24. ...
    import org.junit.runner.RunWith;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
    
    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations = { "/test-applicationContext.xml" })
    public class EditorTest {
    ...
    }
    
  25. Right-click on the test case and Run As > JUnit Test. The test will fail, but the cornerstones are ready to support more agile tests. Make the test pass by implementing the paste() method. Oops! Remember to remove the fail() from the test.
    public class Editor {
     @Getter
     private String text;
    
     public void paste(String cut) {
      if(text == null) {
       text = "";
      }
      text += cut;
     }
    }
    
  26. Re-run the test, this time also using a Maven run configuration like this:
  27. mvn test
    
  28. Spring's default behavior is to cache the application context between the different tests in the same test case. It also provides the @TestExecutionListeners and @DirtiesContext annotations to mark the application context as dirty after each test:
  29. ...
    import org.springframework.test.context.TestExecutionListeners;
    import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;
    import org.springframework.test.context.support.DirtiesContextTestExecutionListener;
    import org.springframework.test.annotation.DirtiesContext;
    import org.springframework.test.annotation.DirtiesContext.ClassMode;
    
    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations = { "/test-applicationContext.xml" })
    @TestExecutionListeners({ DependencyInjectionTestExecutionListener.class,
     DirtiesContextTestExecutionListener.class })
    @DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD)
    public class EditorTest {
    ...
    }
    
  30. Now use case 2 starts. First append the test in failing mode:
  31. ...
    public class EditorTest {
     ...
     @Test
     public void testAddParagraphSpellIsChecked() {
      fail();
     }
    }
    
  32. Using Mockito style, we need to verify that the spelling is checked against the pasted text. The syntax is straightforward: it means that check("Hello everybody!") must be the only interaction with the spell checker mock:
  33. ...
    import static org.mockito.Mockito.only;
    import static org.mockito.Mockito.verify;
    
    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations = { "/test-applicationContext.xml" })
    @TestExecutionListeners({ DependencyInjectionTestExecutionListener.class,
      DirtiesContextTestExecutionListener.class })
    @DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD)
    public class EditorTest {
    
     ...
    
     @Autowired
     private SpellChecker spellChecker;
    
     @Test
     public void testAddParagraphSpellIsChecked() {
      editor.paste("Hello everybody!");
      verify(spellChecker, only()).check("Hello everybody!");
      fail();
     }
    }
    
  34. At this time we need to create a new SpellChecker interface:
  35. package demos.sf.editor;
    
    public interface SpellChecker {
    
     void check(String text);
    
    }
    
  36. The next step is to provide a spell checker mock in the application context:
  37. <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans.xsd">

      <bean id="editor" class="demos.sf.editor.Editor" />

      <bean id="spellChecker" class="org.mockito.Mockito" factory-method="mock">
        <constructor-arg value="demos.sf.editor.SpellChecker" />
      </bean>

    </beans>

  38. Remove the fail() from the test, run all tests, and you will get a test failure saying: 'Wanted but not invoked: spellChecker.check("Hello everybody!")'. That makes sense, since we are not invoking SpellChecker.check() from Editor.paste(). In fact, we haven't created any kind of dependency Editor -> SpellChecker yet. Let's do it at class level and force its injection at construction time using Lombok's @RequiredArgsConstructor:
  39. package demos.sf.editor;
    
    import lombok.Getter;
    import lombok.RequiredArgsConstructor;
    
    @RequiredArgsConstructor
    public class Editor {
     @Getter
     private String text;
    
     private final SpellChecker spellChecker;
    
     public void paste(String cut) {
      if (text == null) {
       text = "";
      }
      text += cut;
     }
    }
    
  40. Declare the constructor injection in the application context as well and re-run the tests:
  41. <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans.xsd">

      <bean id="editor" class="demos.sf.editor.Editor">
        <constructor-arg ref="spellChecker" />
      </bean>

      <bean id="spellChecker" class="org.mockito.Mockito" factory-method="mock">
        <constructor-arg value="demos.sf.editor.SpellChecker" />
      </bean>

    </beans>

  42. Oops! The same failure: 'Wanted but not invoked: spellChecker.check("Hello everybody!")'. Implement the SpellChecker.check() call and run the tests again:
  43. package demos.sf.editor;
    
    import lombok.Getter;
    import lombok.RequiredArgsConstructor;
    
    @RequiredArgsConstructor
    public class Editor {
     @Getter
     private String text;
    
     private final SpellChecker spellChecker;
    
     public void paste(String cut) {
      if (text == null) {
       text = "";
      }
      text += cut;
      spellChecker.check(cut);
     }
    }
    
    All tests should now pass.

Final Snapshot

  • The test case at src/test/java/demos/sf/editor/EditorTest.java:
  • package demos.sf.editor;
    
    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.only;
    import static org.mockito.Mockito.verify;
    
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.test.annotation.DirtiesContext;
    import org.springframework.test.annotation.DirtiesContext.ClassMode;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.TestExecutionListeners;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
    import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;
    import org.springframework.test.context.support.DirtiesContextTestExecutionListener;
    
    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations = { "/test-applicationContext.xml" })
    @TestExecutionListeners({ DependencyInjectionTestExecutionListener.class,
      DirtiesContextTestExecutionListener.class })
    @DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD)
    public class EditorTest {
    
     @Autowired
     private Editor editor;
    
     @Autowired
     private SpellChecker spellChecker;
    
     @Test
     public void testPasteSuccess() {
      editor.paste("Hello everybody!");
    
      String expected = "Hello everybody!";
      String actual = editor.getText();
      assertEquals(expected, actual);
     }
    
     @Test
     public void testAddParagraphSpellIsChecked() {
      editor.paste("Hello everybody!");
      verify(spellChecker, only()).check("Hello everybody!");
     }
    }
    
  • The Editor class at src/main/java/demos/sf/editor/Editor.java:
  • package demos.sf.editor;
    
    import lombok.Getter;
    import lombok.RequiredArgsConstructor;
    
    @RequiredArgsConstructor
    public class Editor {
     @Getter
     private String text;
    
     private final SpellChecker spellChecker;
    
     public void paste(String cut) {
      if (text == null) {
       text = "";
      }
      text += cut;
      spellChecker.check(cut);
     }
    }
    
  • The SpellChecker interface at src/main/java/demos/sf/editor/SpellChecker.java:
  • package demos.sf.editor;
    
    public interface SpellChecker {
    
     void check(String text);
    
    }
    
  • The Spring test context at src/test/resources/test-applicationContext.xml:
  • <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans.xsd">

      <bean id="editor" class="demos.sf.editor.Editor">
        <constructor-arg ref="spellChecker" />
      </bean>

      <bean id="spellChecker" class="org.mockito.Mockito" factory-method="mock">
        <constructor-arg value="demos.sf.editor.SpellChecker" />
      </bean>

    </beans>

You can obtain the source code from here. Enjoy it!

Tuesday, December 11, 2012

Running webtoolkit application as nginx FastCGI on CentOS 6.x

Wt (pronounced as "witty") is a powerful C++ library for developing web applications. Wt-based apps can be integrated as FastCGI with nginx and other web servers. This guide is about integrating Wt w/ nginx on CentOS 6.x.

In this how-to, we'll be using the simplest Wt example: hello.

I assume you already have a minimal CentOS 6.x x86_64 installed.

The Steps

  1. Login as a sudoer user.
  2. Append EPEL repository by creating the file /etc/yum.repos.d/epel.repo with the following content:
  3. [epel]
    name=Extra Packages for Enterprise Linux 6 - $basearch
    #baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
    mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
    failovermethod=priority
    enabled=1
    gpgcheck=0
    
  4. Install required packages: nginx, witty and CentOS development kit:
  5. $ sudo yum install nginx 
    $ sudo yum install wt
    $ sudo yum install fcgi
    $ sudo yum install spawn-fcgi
    $ sudo yum install wt-devel wt-examples      # Only for development env
    $ sudo yum groupinstall "Development Tools"  # Only for development env
    $ sudo yum install nano                      # Only for development env
    
  6. Go to Wt's examples directory and edit the CMakeLists.txt file:
  7. $ cd /usr/lib64/Wt/examples/hello
    $ sudo nano CMakeLists.txt
    
    and replace
    WT_ADD_EXAMPLE(hello.wt hello.C)
    by:
    ADD_EXECUTABLE(hello.wt hello.C)
    TARGET_LINK_LIBRARIES(hello.wt ${EXAMPLES_CONNECTOR})
    
  8. Run CMake specifying FastCGI support & copy resulting binary to nginx document root:
  9. $ sudo rm -rf target
    $ sudo mkdir -p target
    $ cd target
    $ sudo cmake ../ -DEXAMPLES_CONNECTOR=wtfcgi -DCONNECTOR_FCGI=yes -DCONNECTOR_HTTP=no 
    $ sudo make
    $ sudo cp -a hello.wt /usr/share/nginx/html/
    
  10. Create a new file /etc/sysconfig/spawn-fcgi-hello.wt with the following content:
  11. FCGI_SOCKET=/var/run/hello.wt.socket
    FCGI_PROGRAM=/usr/share/nginx/html/hello.wt
    FCGI_USER=nginx
    FCGI_GROUP=nginx
    FCGI_EXTRA_OPTIONS="-M 0700"
    OPTIONS="-u $FCGI_USER -g $FCGI_GROUP -s $FCGI_SOCKET -S $FCGI_EXTRA_OPTIONS -F 1 -P /var/run/hello.wt.socket.pid -- $FCGI_PROGRAM"
  12. Allow nginx to write at /var/spool/wt/run/:
  13. $ sudo chgrp nginx /var/spool/wt/run/
    $ sudo chmod g+w /var/spool/wt/run/
  14. Launch hello app via spawn-fcgi:
  15. $ source /etc/sysconfig/spawn-fcgi-hello.wt
    $ spawn-fcgi $OPTIONS
    
    Now the FastCGI backend is running and waiting; you can double-check it as shown below.
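    For example, something like this should confirm it (the socket path comes from the spawn-fcgi config above):
    $ ls -l /var/run/hello.wt.socket    # the unix socket must exist
    $ ps -ef | grep hello.wt            # the hello.wt process must be running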
  16. Create a new file /etc/nginx/conf.d/wt.conf w/ the following content:
  17. server {
        listen  9091;
        server_name  _;
    
        # by default relative to /usr/share/nginx/html
        location / {
          access_log /var/log/nginx/nginx-fastcgi-access.log;
          gzip off;
    
          # the full path /usr/share/nginx/html/hello.wt
          if ($uri !~ "^/hello.wt/$") {
            fastcgi_pass unix:/var/run/hello.wt.socket;
          }
          include /etc/nginx/fastcgi_params;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }
    
  18. Restart nginx:
  19. $ service nginx restart
    
  20. Visit http://HOST:9091 and enjoy it!

Wednesday, August 8, 2012

Packaging Csync2 in RPM on CentOS 6.X

This is a how-to on packaging the Csync2 synchronization tool in RPM format on CentOS 6.x.

Steps

  1. Append RPM Forge repository by creating the file /etc/yum.repos.d/rpmforge.repo with the following content:
  2. $ sudo dd of=/etc/yum.repos.d/rpmforge.repo << 'EOT'
    [rpmforge]
    name=CentOS-$releasever - Rpmforge
    baseurl=http://apt.sw.be/redhat/el6/en/$basearch/rpmforge
    gpgcheck=0
    enabled=1
    EOT
    
    
  3. Install building dependencies:
  4. $ sudo yum -y groupinstall "Development Tools"
    $ sudo yum -y install openssl-devel 
    $ sudo yum -y install librsync librsync-devel
    $ sudo yum install gnutls openssl libtasn1 gnutls-devel
    
    
  5. Create a temporary cache and define a downloader command:
  6. $ mkdir -p /tmp/cache
    
    #/** 
    # * Downloads a file to the cache if it doesn't exist
    # *
    # * @param $1 the file to download
    # * @param $2 the url where the file is located
    # */
    $ get() {
     [ -f /tmp/cache/$1 ] || wget -t inf -w 5 -c $2/$1 -O /tmp/cache/$1
    }
    
    
  7. Download latest csync2 tarball with sources:
  8. $ lastver=1.34
    $ cs=csync2-$lastver.tar.gz
    $ get $cs http://oss.linbit.com/csync2/$cs 
    
    
  9. Download a compatible sqlite version:
  10. $ sqver=2.8.16
    $ sq=sqlite-$sqver.tar.gz
    get $sq http://pkgs.fedoraproject.org/repo/pkgs/sqlite/$sq/9c79b461ff30240a6f9d70dd67f8faea/$sq
    
    
  11. Create a clean RPM build environment:
  12. $ rm -rf ~/rpmbuild ~/.rpmmacros
    $ mkdir -p ~/rpmbuild/{BUILD,RPMS,S{OURCE,PEC,RPM}S}
    $ cat > ~/.rpmmacros <<< "%_topdir $HOME/rpmbuild"
    
    
  13. Copy the source tarballs to SOURCES and BUILD:
  14. $ cd ~/rpmbuild
    $ cp /tmp/cache/$cs SOURCES/
    $ cp /tmp/cache/$sq BUILD/
    
    
  15. Extract the csync2.spec file from the tarball and modify it to use the sqlite sources and to include some missing files in the final RPM:
  16. $ cd SPECS
    $ tar --strip-components=1 -x csync2-$lastver/csync2.spec -vzf ../SOURCES/$cs
    $ sed -i \
      -e 's/^%changelog/%files\n%defattr(-,root,root,-)\n\/usr\/sbin\/csync2-compare\n\/etc\/csync2.cfg\n\/etc\/xinetd.d\/csync2\n\/usr\/sbin\/csync2\n\/usr\/share\/man\/man1\/csync2.1.gz\n\n&/' \
      -e 's/\(^%configure\)/\1  --with-libsqlite-source=..\/sqlite-2.8.16.tar.gz  --disable-gnutls/' csync2.spec
    $ cd ..
    
    
  17. Build and install the csync2 RPM packages:
  18. $ rpmbuild -bb SPECS/csync2.spec
    $ sudo rpm -ivh RPMS/x86_64/csync2-$lastver-1.x86_64.rpm
    
    

Enjoy it!

Thursday, July 5, 2012

Building and packaging the latest Gearman server in CentOS 6.2

This is a how-to on packaging, installing and testing the latest version of the Gearman server in RPM format on CentOS 6.2.

Motivation

The latest version of Gearman available in the Fedora repository is very outdated, and in CentOS it is even more so. So, if you plan to use the latest Gearman features, you have two choices: 1) compile it from the tarball; 2) package it (hence compile it) in RPM format.
This guide uses the second approach, but not from scratch. Instead, it uses the latest available Gearman SRPM package and performs some minor changes.

At this moment, the latest Gearman version is 0.33 and the latest Fedora-based SRPM is 0.23.

Hands on bash

  1. Append EPEL repository by creating the file /etc/yum.repos.d/epel.repo with the following content:
  2. [epel]
    name=Extra Packages for Enterprise Linux 6 - $basearch
    #baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
    mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
    failovermethod=priority
    enabled=1
    gpgcheck=0
    
    [epel-debuginfo]
    name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
    #baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
    mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
    failovermethod=priority
    enabled=0
    gpgcheck=0
    
    [epel-source]
    name=Extra Packages for Enterprise Linux 6 - $basearch - Source
    #baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
    mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
    failovermethod=priority
    enabled=0
    gpgcheck=0
    
  3. Install building dependencies:
  4. $ sudo yum groupinstall "Development Tools"
    $ sudo yum install libevent-devel libuuid-devel
    $ sudo yum install boost-devel
    $ sudo yum install libmemcached-devel memcached google-perftools-devel 
    
  5. Create a temporary cache and define a downloader command:
  6. $ mkdir -p /tmp/cache
    
    #/** 
    # * Downloads a file to the cache if it doesn't exist
    # *
    # * @param $1 the file to download
    # * @param $2 the url where the file is located
    # */
    $ get() {
     [ -f /tmp/cache/$1 ] || wget -t inf -w 5 -c $2/$1 -O /tmp/cache/$1
    }
    
  7. Download the latest Gearman source tarball and the latest SRPM package provided by Fedora:
  8. $ lastver=0.33
    $ gm=gearmand-$lastver.tar.gz
    $ get $gm https://launchpadlibrarian.net/104788829/$gm 
    
    $ srcver=0.23
    $ srpm=gearmand-$srcver-1.fc16.src.rpm
    $ get $srpm http://www.muug.mb.ca/mirror/fedora/linux/releases/16/Everything/source/SRPMS/$srpm
    
    
  9. Create a clean RPM build environment and install the SRPM package on it:
  10. $ rm -rf ~/rpmbuild ~/.rpmmacros
    $ mkdir -p ~/rpmbuild/{BUILD,RPMS,S{OURCE,PEC,RPM}S}
    $ cat > ~/.rpmmacros <<< "%_topdir $HOME/rpmbuild"
    
    $ rpm -ivh /tmp/cache/$srpm
    
    
  11. Remove the old Gearman tarball installed by the SRPM and copy the new one to SOURCES:
  12. $ cd ~/rpmbuild
    $ cp /tmp/cache/gearmand-$lastver.tar.gz SOURCES/
    $ rm SOURCES/gearmand-$srcver.tar.gz
    
  13. Now the trick. The sed command below performs some changes on the gearmand.spec file: 1) replaces the old version number with the new one; 2) comments out dependencies on systemd packages not yet available on CentOS; and 3) adds various file/directory entries only available in the latest version of Gearman.
  14. $ sed -i \
      -e 's/\(Version:[[:space:]]\+\)'${srcver}'/\1'${lastver}'/' \
      -e 's/^BuildRequires:[[:space:]]\+systemd-units/#&/' \
      -e 's/^Requires(post):[[:space:]]\+systemd-sysv/#&/' \
      -e 's/^Requires(post):[[:space:]]\+systemd-units/#&/' \
      -e 's/^Requires(preun):[[:space:]]\+systemd-units/#&/' \
      -e 's/^Requires(postun):[[:space:]]\+systemd-units/#&/' \
      -e 's/install -m 0644 %{SOURCE3} %{buildroot}%{_unitdir}\/%{name}.service/#&/' \
      -e 's/^%changelog/%files\n%defattr(-,root,root,-)\n\/usr\/include\/libgearman-1.0\n\/etc\/rc.d\/init.d\/gearmand\n\/etc\/sysconfig\/gearmand\n\/usr\/bin\/gearadmin\n\/usr\/bin\/gearman\n\/usr\/sbin\/gearmand\n\/usr\/share\/man\n\n&/' \
      SPECS/gearmand.spec
    
  15. Build and install the Gearman RPM packages:
  16. $ rpmbuild -bb SPECS/gearmand.spec
    
    $ sudo rpm -ivh RPMS/x86_64/libgearman-$lastver-1.el6.x86_64.rpm RPMS/x86_64/gearmand-$lastver-1.el6.x86_64.rpm
    
    
  17. Register gearmand as a daemon on the standard runlevels:
  18. $ sudo chkconfig --add gearmand
    $ sudo chkconfig gearmand on
    
  19. Create an empty log file with permissions for the gearmand user and group (created by the RPM installer), create the pid directory and start the daemon:
  20. $ sudo mkdir -p /usr/local/var
    $ sudo ln -s /var/log /usr/local/var/log 
    
    $ sudo touch /var/log/gearmand.log
    $ sudo chown gearmand:gearmand /var/log/gearmand.log
    
    $ sudo mkdir -p /var/run/gearmand/
    
    $ sudo /etc/init.d/gearmand start
    

Testing

  1. Use the examples provided by Gearman to test. Enter the examples directory and run the reverse_worker example:
  2. $ cd BUILD/gearmand-$lastver/examples
    
    $ ./reverse_worker 
    
    
  3. Open a new terminal, enter examples directory and run the reverse_client:
  4. cd ~/rpmbuild/BUILD/gearmand-$lastver/examples
    
    $ ./reverse_client "Hello Gearman"
    

    in the reverse_worker terminal you should see:
    Recieved 12 bytes
    Job=H:centosarq62.localdomain:1  Reversed=HelloGearman
    
    

Now your Gearman with the latest features is ready for battle.

Enjoy it!

Wednesday, July 4, 2012

The NGINX RAM-only Jail. Adding nginx to a RAM-only diskless Linux box

Without any doubt, nginx is one of the best HTTP servers ever created; it solves the C10K problem with an elegant event-driven architecture. Many big sites around the WWW use it: Wikipedia, Hyves, etc.

This is a how-to on preparing your rambox to run nginx smoothly. It makes sense only after the preparation of the RAM-only PXE boot in my previous post. This post isn't about any kind of nginx optimization, aside from compiling it into a single static binary. At the end of this guide, you will be running nginx very much like in a jail.

Run these steps BEFORE the step "Change ownership to everything" of the RAM-only PXE boot guide in my previous post.
  1. Download the latest stable nginx version and unpack it.
  2. $ pushd /tmp/wrk
    $ ngx=nginx-1.2.1.tar.gz
    $ get $ngx http://nginx.org/download/$ngx
    $ tar -xvzf /tmp/cache/$ngx -C .
    $ mv ${ngx%.tar.gz} nginx
    
  3. Add options depending on the features you want to support. Besides the default options, I only add static compilation options:
  4. $ pushd nginx
    $ ./configure --with-ld-opt="-static -static-libgcc" \
      --with-cc-opt="-static -static-libgcc"
    
    build it (-jN means N parallel jobs for compilation and linking):
    $ make -j2
    
    ensure that it's not a dynamic executable:
    $ ldd objs/nginx
     not a dynamic executable
    $ popd
    
  5. Copy the nginx executable to rambox's /sbin :
    $ popd 
    $ pushd sbin
    $ chmod +w .
    $ cp ../../nginx/objs/nginx .
    $ chmod -w .
    $ popd
    
  6. Create /usr/local/nginx directories:
  7. $ mkdir -p -m 0755 usr/local/nginx/{conf,logs}
    
  8. Copy some needed libs:
  9. $ chmod +w lib
    $ cp /lib64/{ld-2.12.so,ld-linux-x86-64.so.2} lib/
    $ cp /lib64/{libc-2.12.so,libc.so.6} lib/
    $ cp /lib64/{libnsl-2.12.so,libnsl.so.1} lib/
    $ cp /lib64/{libnss_compat-2.12.so,libnss_compat.so.2} lib/
    $ chmod -w lib
    
  10. Copy mime conf file:
  11. $ cp ../nginx/conf/mime.types usr/local/nginx/conf/
    
  12. Create the nginx.conf file with some basic settings:
  13. $ dd of=usr/local/nginx/conf/nginx.conf << EOT
    # user and group to run nginx
    user www www;
    
    # numbers of dedicated CPUs
    worker_processes 1;
    
    # pid archive
    pid /var/run/nginx.pid;
    
    events {
        # max connections in WAIT 
        worker_connections  128;
      
        # accept & enqueue NEW connections, put them in WAIT
        multi_accept        on;
    }
    
    http {
            include       mime.types;
            default_type  application/octet-stream;
    
            server {
                    listen 80;
    
                    autoindex on;
     
                    location / {
                            root /var/www;
                            index  index.php index.html index.htm;
                    }
            }
    }
    EOT
    
  14. Resume the rambox creation process and once it gets started run:
  15. $ nginx
Enjoy it by browsing at http://HOSTADDRESS

Post install steps

Daemonize nginx by launching it from the /init script!
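A minimal way to do that, assuming the /init script from the previous post and the /sbin/nginx binary copied above, is to launch it near the end of /init once the filesystems and devices are ready (nginx forks itself into the background by default):

    # /init (fragment, near the end): start nginx on every boot
    [ -x /sbin/nginx ] && /sbin/nginx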

Tuesday, July 3, 2012

The Parking Permit Demo. README and Sources for Season 01

Here are the README and Sources for Season 01 of The Parking Permit Demo with Oracle BPM/SOA suite.

 

 Enjoy it!

Thursday, June 7, 2012

RAM-only PXE boot & the "smallest" diskless Linux box

This is a how-to on easily creating a very small Linux box running purely on RAM that boots using PXE. This is an introductory topic (not new at all) for further posts about scalability, load balancing and high availability. That's why I mention clustering very often and also start simple & small by preparing a single node that boots smoothly in a controlled environment.

Why RAM-only & diskless?

I must say that such a configuration of a Linux box can bring many advantages if you plan to assemble a cheap cluster without persistence in mind and with low maintenance costs. Consider the following:
  • ram availability: RAM is cheaper & faster than in past years
  • modern hardware, big ram: a lot of hardware supports a large amount of installed RAM, from 1 GB to 128 GB and beyond
  • Linux rocks: 64-bit Linux systems are able to manage a lot of RAM efficiently
  • HDD & the planet: electromechanical HDDs have serious implications for energy consumption, recycling and NOISE, and are more susceptible to failures
  • SSD & your wallet: SSDs are more advanced than their electromechanical counterparts (less susceptible to physical shock, silent, and with lower access times and latency) but, at present market prices, more expensive per unit of storage. So if your problem is not storage, just processing, caching and networking, you are in the right place!
  • less is sometimes cheaper: it's not a bad idea if you have the chance to buy cheaper nodes, by parts or complete, w/o HDD
  • crashing doesn't matter: if a misbehaving node crashes you just need to restart it and it'll wake up again in a healthy state; a single node's state doesn't drift over time
  • scaling better: adding a node to the cluster is easy, just connect it, enable PXE boot and add an entry to the DHCP config
  • network congestion is reduced: the RAM filesystem is copied only once per boot to the target node
Life is easy and the cluster's maintenance costs are reduced, but remember this applies only if you don't need persistence in every single node, just CPU power, networking and RAM.

When don't I need persistence?

I don't have a full inventory of persistence-less & memory/network-only scenarios, but here is a practical and discrete list; I'm sure you can see the benefits:
  • cryptographic stuff, privacy: you need to run a cryptographic algorithm and ensure a full cleanup of private keys after the execution is complete; formatting an HDD is sometimes not enough, while recovering data from RAM after a full power-off is very difficult if not impossible. Also, an encrypted filesystem on top of the RAM will be challenging for hackers
  • caching efficiently: if your RAM is enough and your backend cluster is under a constantly growing demand for static content, you can delegate all your caching needs to a dedicated frontend cluster running purely in RAM, relieving the backend servers so they only process dynamic content
  • time-only algorithms: many algorithms only need processing power and a low/medium memory footprint; some of them even only need volatile (non-persistent) memory for allocating data structures
  • display-only apps: some software solutions only need to display incoming data via graphs, video streaming, etc. So a good display, a RAM-only system and a network are enough

What will I obtain at the end of this guide? A Linux box, call it rambox, running purely in RAM; that means a root (/) filesystem mounted in RAM, which is why memory preservation is a priority, as well as avoiding a filesystem full of never-used files that would also increase memory usage.

We'll also do a customized Kernel compilation to shrink it, with a "minimal" set of features incorporated. Keeping it simple and small! At this point you should be careful not to omit mandatory kernel features; there's another set of features that are not mandatory but useful to obtain the best performance. They mainly depend on your hardware, so take care of them.

What's a RAM filesystem? A filesystem mounted on RAM isn't a new invention; it's an awesome Kernel feature mostly used to load firmware/modules before starting the normal boot process. It's called initrd or initramfs, there are differences between the two (see references), and we'll be using initramfs.

What do I need?

For this guide I use two KVM-virtualized computers, running on a CentOS 6.2 host with bridged networking. For simplicity, the host and the two guests are on the same subnetwork:
  1. pxe: a server computer with CentOS 6.2 amd64 installed, w/ 16 GB HDD, 1 GB RAM, no GUI, networking. With DHCP and TFTP role. Static IP = 192.168.24.202, subnet = 192.168.24.0/24
  2. rambox: a RAM-only computer, w/o HDD installed, w/ networking. With cluster node role. Dynamic DHCP-designated IP = 192.168.24.203, subnet = 192.168.24.0/24

NOTE: The BIOS used by QEMU and KVM virtual machines (SeaBIOS) supports an open source implementation of PXE, named gPXE, so a KVM-based virtual machine is able to boot via the network. Nowadays almost any motherboard should have a BIOS with PXE support. Ensure that your rambox supports it by checking the BIOS setup.

How does it work?

In summary, when the rambox with PXE boot activated wakes up:
  1. the BIOS PXE boot loader requests an address to DHCP server
  2. the DHCP server offers an IP address, a TFTP server IP address (itself), and the Linux PXE boot loader's location on the TFTP server
  3. the BIOS PXE boot loader downloads the Linux PXE boot loader from the TFTP server
  4. the Linux PXE boot loader takes control and uses the same IP configuration to connect to the TFTP server and fetch two files: the kernel and the ramdisk
  5. the Kernel takes control and configures its network interface, either statically or by performing a second round of DHCP, depending on the boot parameters
  6. the Kernel uncompresses the ramdisk in memory
  7. the RAM disk is mounted on / and the /init script gets invoked
What do we have to configure and where? The pxe server computer is where everything takes place:
  1. Install and configure a DHCP server with support for PXE extensions
  2. Install and configure a TFTP server
  3. Create a reduced ramdisk with a minimal set of utils and programs
  4. Compile and optionally shrink the Kernel to include support for Kernel-level IP configuration, including NIC drivers
  5. Locate all the stuff in the correct place and wake up the rambox!
There are several detailed explanations of the Linux boot process, some of them are outdated but still useful. At the moment, I won't make a full description of every single step of the boot process, ramdisk, PXE, Kernel-level IP, etc. (see references)    

Hands on Bash

Log in to pxe as a sudoer user (named bozz in this guide)

Installing phase

  1. Install the dhcp, tftp-server and syslinux packages, syslinux contains the Linux PXE boot loader:
  2. $ sudo yum install dhcp tftp-server syslinux
    
  3. Additionally, install some tools:
  4. $ sudo yum install bc wget
    
  5. Finally install kernel packages for kernel compilation. These packages ensure that you have all the required tools for the build:
  6. $ sudo yum install kernel-devel
    $ sudo yum groupinstall "Development Tools"
    
    # This is required to enable a make *config command to execute correctly. 
    $ sudo yum install ncurses-devel
    
    # These are required when building a CentOS-6 kernel. 
    $ sudo yum install hmaccalc zlib-devel binutils-devel elfutils-libelf-devel 
    
    # These are required when working with the full Kernel source
    $ sudo yum install rpm-build redhat-rpm-config unifdef
    
    # These are needed by kernel-2.6.32-220.el6
    $ sudo yum install xmlto asciidoc newt-devel python-devel perl-ExtUtils-Embed
    
    

Configure DHCP

  1. Ensure that the dhcpd starts at boot time:
  2. $ sudo chkconfig --level 35 dhcpd on
    $ chkconfig --list dhcpd
    dhcpd              0:off    1:off    2:off    3:on    4:off    5:on    6:off
    
  3. Edit dhcpd.conf adding the PXE-specific options:
  4. $ sudo nano /etc/dhcp/dhcpd.conf
    it should finally look like this:
    # dhcpd.conf
    #
    # DHCP configuration file for ISC dhcpd
    #
    
    # Use this to enable / disable dynamic dns updates globally.
    ddns-update-style none;
    
    # Definition of PXE-specific options
    # Code 1: Multicast IP address of boot file server
    # Code 2: UDP port that client should monitor for MTFTP responses
    # Code 3: UDP port that MTFTP servers are using to listen for MTFTP requests
    # Code 4: Number of seconds a client must listen for activity before trying
    #         to start a new MTFTP transfer
    # Code 5: Number of seconds a client must listen before trying to restart
    #         a MTFTP transfer
    option space PXE;
    option PXE.mtftp-ip               code 1 = ip-address;  
    option PXE.mtftp-cport            code 2 = unsigned integer 16;
    option PXE.mtftp-sport            code 3 = unsigned integer 16;
    option PXE.mtftp-tmout            code 4 = unsigned integer 8;
    option PXE.mtftp-delay            code 5 = unsigned integer 8;
    option PXE.discovery-control      code 6 = unsigned integer 8;
    option PXE.discovery-mcast-addr   code 7 = ip-address;
    
    subnet 192.168.24.0 netmask 255.255.255.0 {
    
      class "pxeclients" {
        match if substring (option vendor-class-identifier, 0, 9) = "PXEClient";
        option vendor-class-identifier "PXEClient";
        vendor-option-space PXE;
    
        # At least one of the vendor-specific PXE options must be set in
        # order for the client boot ROMs to realize that we are a PXE-compliant
        # server.  We set the MCAST IP address to 0.0.0.0 to tell the boot ROM
        # that we can't provide multicast TFTP (address 0.0.0.0 means no
        # address).
        option PXE.mtftp-ip 0.0.0.0;
    
        # This is the name of the file the boot ROMs should download.
        filename "pxelinux.0";
    
        # This is the name of the server they should get it from.
        next-server 192.168.24.202;
      }
    
      pool {
        max-lease-time 86400;
        default-lease-time 86400;
        range 192.168.24.203 192.168.24.203;
        deny unknown clients;
      }
    
      host rambox {
        hardware ethernet 08:00:07:26:c0:a5;
        fixed-address 192.168.24.203;
        hostname rambox01.home.dev;
      }
    
    }
    
    NOTE: In this configuration the nodes will always get the same IP addresses, leased by their MAC addresses, and nodes with an unknown hardware address will be rejected. You can easily change this behavior by replacing the "deny unknown clients" directive with "allow unknown clients" and deleting all the host entries, as sketched below.
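    For example, a dynamic variant of the pool could look roughly like this (the wider address range is an assumption, adjust it to your subnet):
    pool {
      max-lease-time 86400;
      default-lease-time 86400;
      range 192.168.24.203 192.168.24.220;
      allow unknown clients;
    }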

Configuring TFTP

  1. To enable the TFTP server, edit /etc/xinetd.d/tftp replacing the word yes on the disable line with the word no. Then save the file and exit the editor:
  2. $ sudo nano /etc/xinetd.d/tftp
    it should finally look like:
    # default: off
    # description: The tftp server serves files using the trivial file transfer \
    # protocol.  The tftp protocol is often used to boot diskless \
    # workstations, download configuration files to network-aware printers, \
    # and to start the installation process for some operating systems.
    service tftp
    {
            socket_type             = dgram
            protocol                = udp
            wait                    = yes
            user                    = root
            server                  = /usr/sbin/in.tftpd
            server_args             = -s /var/lib/tftpboot
            disable                 = no
            per_source              = 11
            cps                     = 100 2
            flags                   = IPv4
    }
    
  3. Restart the xinetd daemon to reload configuration files:
  4. $ sudo service xinetd restart
  5. Verify that xinetd is started at boot time; it should be, and if not, use chkconfig as in the previous step:
  6. $ chkconfig --list xinetd
    xinetd             0:off    1:off    2:off    3:on    4:on    5:on    6:off
    

Concerning the firewall

  • Allow access to TFTP via standard ports:
  • $ sudo iptables -I INPUT -p udp --dport 69 -j ACCEPT
    $ sudo iptables -I INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT
    $ sudo service iptables save
    $ sudo service iptables restart
    

Configuring the PXE environment

  1. Copy the Linux PXE boot loader pxelinux.0 to tftpboot published root directory:
  2. $ sudo cp /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot
  3. Create the PXE config directory in the TFTP root; this directory will contain a single configuration file per node or per subnet:
  4. $ sudo mkdir -p /var/lib/tftpboot/pxelinux.cfg
  5. The Linux PXE boot loader uses its own IP address in hexadecimal format to look for a configuration file under the pxelinux.cfg directory; if it's not found it removes the last hex digit and tries again, repeating until it runs out of digits. That's why I define a helper function to convert a dotted-decimal IPv4 address to a hexadecimal string:
  6. #/**
    # * converts an IPv4 address to hexadecimal format completing the missing 
    # * leading zero
    # * 
    # * @example:
    # *   $ hxip 10.10.24.203
    # *   0A0A18CB
    # *
    # * @param $1: the IPv4 address 
    # */
    hxip() {
      ( bc | sed 's/^\([[:digit:]]\|[A-F]\)$/0\1/' | tr -d '\n' ) <<< "obase=16; ${1//./;}"
    }
    
    test the function via command line:
    $ hxip 192.168.24.203
    C0A818CB
  7. Create PXE Linux config file using the designated IPv4 address in hexadecimal format:
    $ sudo nano /var/lib/tftpboot/pxelinux.cfg/$(hxip 192.168.24.203)
    with the following content:
    DEFAULT bzImage
    APPEND initrd=initramfs.cpio.gz rw ip=dhcp shell
    
    or if you prefer to avoid the second round of DHCP issued by the Kernel:
    DEFAULT bzImage
    APPEND initrd=initramfs.cpio.gz rw ip=192.168.24.203:192.168.24.202:192.168.24.1:255.255.252.0:rambox:eth0:off shell
    
    where DEFAULT provides the Kernel image and APPEND the Kernel parameters passed at boot:
    • bzImage: the name of the compressed Kernel image
    • initrd=initramfs.cpio.gz: tells the Linux PXE boot loader to download this file and pass it to the Kernel, which will later interpret it as a compressed ramdisk filesystem image
    • rw: the Kernel mounts the ramdisk filesystem in read-write mode
    • ip=dhcp: a Kernel-level IP parameter telling the Kernel to perform a DHCP request to obtain valid network parameters; alternatively you can use a fixed network configuration
    • ip=192.168.24.203:192.168.24.202:192.168.24.1:255.255.252.0:rambox:eth0:off
      • node IP address = 192.168.24.203
      • server IP address = 192.168.24.202
      • default gateway IP address = 192.168.24.1
      • network mask = 255.255.252.0
      • node hostname = rambox
      • device = eth0
      • auto configuration protocol = off
    • shell: a custom parameter added by me to run a shell

Creating a compressed root filesystem

The Kernel's support for initramfs allows us to create a customizable boot process that loads modules and provides a minimalistic shell running in RAM. An initramfs disk is nothing more than a compressed cpio archive, which is then either embedded directly into your kernel image or stored as a separate file that can be loaded by the Linux PXE boot loader. Embedded or not, it should always contain at least:
  • a minimum set of directories:
    • /sbin -- Critical system binaries
    • /bin -- Essential binaries considered part of the system
    • /dev -- Device files, required to perform I/O
    • /etc -- System configuration files 
    • /lib, /lib32, /lib64 -- Shared libraries to provide run-time support 
    • /mnt -- A mount point for maintenance and use after the boot/root system is running
    • /proc -- Directory stub required by the proc filesystem. The /proc directory is a stub under which the proc filesystem is placed
    • /root -- the root's home directory 
    • /sys -- Directory stub under which the sysfs filesystem is mounted
    • /tmp -- Temporal directory 
    • /usr -- Additional utilities and applications 
    • /var -- Variable files whose content is expected to continually change during normal operation of the system—such as logs, spool files, and temporary e-mail files.
  • basic set of utilities: sh, ls, cp, mv, etc
  • minimum set of config files: rc, inittab, fstab, etc
  • devices: /dev/hd*, /dev/tty*, etc
  • runtime libraries to provide basic functions used by utilities
Is there any other simple method to create the RAM disk? Creating an initramfs can also be achieved by copying the contents of an already installed Linux distro into an empty directory and then packaging it (a rough sketch follows), but you must be aware of carrying undesired and/or useless files. There are other methods, some of them simple, some of them not, but they are outside the scope of this guide, which aims to show you a handy approach to obtain a lightweight RAM disk and Kernel.
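As a rough sketch of that alternative (the paths are illustrative, and you would still have to prune the copy heavily):

    $ mkdir /tmp/initramfs-copy
    $ sudo rsync -a --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp / /tmp/initramfs-copy/
    $ cd /tmp/initramfs-copy
    $ sudo find . -print0 | sudo cpio --null -ov --format=newc | gzip -9 > /tmp/initramfs-copy.cpio.gz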

Use the following steps to create the initramfs:

  1. Creating a download cache & working zone. Also defining a helper command to download and cache archives:
  2. $ mkdir -p /tmp/cache
    $ mkdir /tmp/wrk
    $ pushd /tmp/wrk
    
    #/** 
    # * Downloads a file to the cache if it doesn't exist
    # *
    # * @param $1 the file to download
    # * @param $2 the url where the file is located
    # */
    $ get() {
     [ -f /tmp/cache/$1 ] || wget -t inf -w 5 -c $2/$1 -O /tmp/cache/$1
    }
    
  3. Creating and entering to initramfs root directory:
  4. $ mkdir initramfs
    $ pushd initramfs
    
  5. Creating filesystem's base directories:
  6. $ mkdir -p -m 0755 dev etc/{,init,sysconfig} mnt sys usr/{,local} var/{,www,log,lib,cache} run
    $ mkdir -p -m 0555 {,s}bin lib{,32,64} proc usr/{,s}bin
    $ mkdir -p -m 0700 root
    $ mkdir -p -m 1777 tmp
    $ pushd var
    $ ln -s ../run run
    $ popd
    
  7. Creating /etc/profile to export environment variables:
  8. $ dd of=etc/profile << EOT
    ## /etc/profile
    
    export PATH="/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/sbin"
    EOT
  9. Creating /etc/fstab with various mount points:
  10. $ dd of=etc/fstab << EOT
    devpts  /dev/pts  devpts  nosuid,noexec,gid=5,mode=0620  0 0
    tmpfs   /dev/shm  tmpfs   nosuid,nodev,mode=0755  0 0
    sysfs   /sys      sysfs   nosuid,nodev,noexec  0 0
    proc    /proc     proc    nosuid,nodev,noexec  0 0
    EOT
    $ chmod 0644 etc/fstab
    
    
  11. Configure passwd & group settings:
  12. $ dd of=etc/passwd << EOT
    root:x:0:0:root:/root:/bin/sh
    nobody:x:99:99:NoBody:/none:/bin/false
    www:x:33:33:HTTP Server:/var/www:/bin/false
    EOT
    $ dd of=etc/group << EOT
    root:x:0:
    nobody:x:99:
    www:x:33:
    EOT
    
  13. Configure some host related settings:
  14. $ dd of=etc/host.conf <<< "multi on"
    $ dd of=etc/hostname <<< "rambox"
    $ dd of=etc/hosts << EOT
    127.0.0.1 localhost.localdomain localhost
    127.0.1.1 $(cat etc/hostname)
    EOT
    
  15. Configure timezone:
  16. $ dd of=etc/timezone <<< "America/New_York"
    $ cp /usr/share/zoneinfo/$(cat etc/timezone) etc/localtime
    
  17. Busybox is a handy tool used very often in ramdisks and small devices with very limited resources, providing a self-contained and minimal set of POSIX-compatible unix tools in a single executable. I'll be using busybox in this guide. Get busybox and create the sh symbolic link:
  18. $ pushd bin
    $ chmod +w .
    $ bb=busybox-x86_64 && get $bb http://www.busybox.net/downloads/binaries/latest/busybox-x86_64
    $ cp /tmp/cache/$bb busybox && chmod +x busybox
    $ ln -s busybox sh
    $ chmod -w . 
    $ popd
    
  19. Additionally we MAY need a DHCP configuration script, so we'll use busybox's udhcpc client and its simple.script. Then we'll create a script named renew_ip that does the whole job:
  20. $ pushd bin
    $ chmod +w .
    $ ss=simple.script
    $ get $ss http://git.busybox.net/busybox/plain/examples/udhcp/$ss
    $ cp /tmp/cache/$ss . && chmod +x $ss 
    
    $ dd of=renew_ip << EOT
    #!/bin/sh
    
    ifconfig eth0 up
    udhcpc -t 5 -q -s /bin/simple.script
    EOT
    
    $ chmod +x renew_ip
    $ chmod -w . 
    $ popd
    
  21. One of the most important phases is the /init script execution; this is a simple shell script that performs the whole initialization process on the ramdisk. It usually mounts all filesystems listed in fstab, creates device nodes (like the udev device manager does), loads device firmware and finally mounts another root (/) directory on another device and launches the newly mounted /sbin/init. This is the point where we intervene, by just launching the shell or by executing our own /sbin/init w/o remounting the root (/). So edit the init script and add the following content:
  22. $ nano init
    
    w/ this content:
    #!/bin/sh
    
    # Make all core utils reachable 
    . /etc/profile
    
    # Create all busybox's symb links
    /bin/busybox --install -s
    
    # Create some devices statically
    
    # pts: pseudoterminal slave 
    mkdir dev/pts
    
    # shm
    mkdir dev/shm
    chmod 1777 dev/shm
    
    # Mount the fstab's filesystems.
    mount -av
    
    # Some things don't work properly without /etc/mtab.
    ln -sf /proc/mounts /etc/mtab
    
    # mdev is a suitable replacement of the udev device node creator for loading 
    # firmware
    touch /etc/mdev.conf 
    echo /sbin/mdev > /proc/sys/kernel/hotplug
    mdev -s
    
    # Only renew the IP address via DHCP if you need it. Not needed if Kernel-level 'ip=...' 
    # was used. 
    #renew_ip 
    
    # set hostname
    hostname $(cat /etc/hostname)
    
    # shell launcher 
    shell() {
     echo "${1}Launching shell..." && exec /bin/sh
    }
    
    # launch the shell if the 'shell' parameter was supplied
    grep -q 'shell' /proc/cmdline && shell
    
    # parse kernel command and obtain the init & root parameters
    # if not then use default values
    for i in $(cat /proc/cmdline); do
     par=$(echo $i | cut -d "=" -f 1)
     val=$(echo $i | cut -d "=" -f 2)
     case $par in
      root)
       root=$val
       ;;
      init)
       init=$val
       ;;
     esac
    done
    init=${init:-/sbin/init}
    root=${root:-/dev/hda1}
    
    # if rambox parameter is supplied then keep the ramdisk mounted, ignore root parameter 
    # and run the other init script. Located at /sbin/init by default
    if grep -q 'rambox' /proc/cmdline ; then
      [ -e ${init} ] || shell "Not init found on ramdisk at '${init}'... " 
    
      echo "Keeping the ramdisk since rambox param was supplied & executing init... "
      exec ${init}
      
      #This will only be run if the exec above failed
      shell "Failed keeping the ramdisk and executing '${init}'... "
    fi
    
    # Neither shell nor rambox parameters were supplied then, try to switch to the new
    # root and launch init 
    mkdir /newroot
    mount ${root} /newroot || shell "An error ocurred mounting '${root}' at /newroot... "
    [ -e /newroot${init} ] || shell "Not init found at '${init}'... " 
    
    echo "Resetting kernel hotplugging... "
    :> /proc/sys/kernel/hotplug
    echo "Umounting all... "
    umount -a
    echo "Switching to the new root and executing init... "
    exec switch_root /newroot ${init}
    
    #This will only be run if the exec above failed
    mount -av
    mdev -s
    shell "Failed to switch_root... "
    
    As you may notice, every single step is commented; however, here is an overall explanation of the process:
    1. /etc/profile is sourced to export PATH variable and make all executables reachable 
    2. All busybox's symbolic links are created
    3. Some special devices are created by hand
    4. All /etc/fstab filesystems are mounted
    5. The rest of the devices are discovered and created by busybox's mdev
    6. The Kernel command line located at /proc/cmdline is parsed to see if the shell parameter was supplied; if so, the shell is immediately launched replacing the current process instance, hence everything else is ignored
    7. The Kernel command line is checked again to see if the rambox parameter was supplied, indicating that we want to keep the ramdisk mounted at / and launch the normal /sbin/init process
    8. If neither the shell nor the rambox parameter was supplied, it tries to mount the new root (/) and launch /sbin/init from this new location
    9. Finally, if the new root cannot be mounted or the /sbin/init script cannot be executed, a shell is launched indicating this situation
    10. If launching the shell at any of these steps fails, a Kernel panic is issued
  23. Append execution permissions to /init:
  24. $ chmod +x init
    
  25. Change ownership to everything:
  26. $ sudo chown -R root:root *
    
  27. Create the initramfs.cpio.gz compressed archive and copy it to tftp's root directory:
  28. $ sudo find . -print0 | sudo cpio --null -ov --format=newc | gzip -9 > ../initramfs.cpio.gz
    $ sudo cp ../initramfs.cpio.gz /var/lib/tftpboot
    
  29. Go back to working directory:
  30. $ popd
    

Now the Kernel stuff:

What I am about to do with the Kernel is very simple: compile it using a minimal set of features that makes it boot and recognize MY hardware, mainly the NIC device. Hence, depending on your hardware, you should probably use a different selection of features for the Kernel compilation. So I recommend first doing a one-time installation of any modern Linux distribution (like I did), such as CentOS, Gentoo, Fedora, Debian or Ubuntu with a modern Kernel version, and checking the modules loaded on boot using /sbin/lsmod (see the sketch below). Then, using this module list, look for the corresponding Kernel options and INCLUDE them all in the Kernel, making it a solid rock! That's what I did.
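For instance, on the reference installation you could capture that list and then look up each module in the Kernel configuration menu (the output file name is just an example):

    # record the modules the running kernel has loaded (first column of lsmod)
    $ lsmod | awk 'NR > 1 { print $1 }' | sort > /tmp/loaded-modules.txt

    # then, inside 'make menuconfig', press '/' and search for each module name
    # to find the corresponding CONFIG_* option, and mark it as built-in [*]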

NOTE: In our journey to make the Kernel simple and small, we should be careful not to omit critical Kernel features and lose the hardware's advantages, for example SMP. So if we really want to use it in a production environment, deep research and customization must be done first.

Start Kerneling...    
  1. Download the Kernel sources from the sky:
  2. $ krn=linux-2.6.39
    $ get $krn.tar.xz http://www.kernel.org/pub/linux/kernel/v2.6/$krn.tar.xz
    
  3. Uncompress it into a working directory and name it linux:
  4. $ tar xvf /tmp/cache/$krn.tar.xz -C .
    $ mv $krn linux
    $ pushd linux
    
  5. Clean all configuration settings and enter the menu:
  6. $ make clean
    $ make allnoconfig
    $ make menuconfig
    
  7. An ncurses menu dialog should be opened. Now check a "minimal" set of features and uncheck the unneeded ones; I'll only list the changes with respect to the clean configuration settings. So [*] means explicitly checked to be EMBEDDED into the Kernel, and [ ] means explicitly unchecked to not be included
    1. General Setup (here the RAM filesystem is the most important feature)
    2. [*] Prompt for development and/or incomplete code/drivers
      (-minimal) Local version - append to kernel release
      [*] Initial RAM filesystem and RAM disk (initramfs/initrd) support
      
    3. Bus options (PCI etc.)  ---> (Enable support for PCI devices, you may add support for your PCI hardware here)
    4. [*] PCI support 
    5. Executable file formats / Emulations  ---> (An important piece! You won't be able to execute much of anything if you don't check it)
    6. [*] Kernel support for ELF binaries
      
    7. [*] Networking support (Besides enabling TCP/IP and disabling Wireless, IPsec, etc., the most important feature to check here is IP kernel-level autoconfiguration with DHCP support)
    8.   [ ]   Wireless  ---> 
       Networking options 
          [*] Packet socket                                                                                 
          [*]   Packet socket: mmapped IO                                                                     
          [*] Unix domain sockets
          [*] Transformation sub policy support (EXPERIMENTAL)
          [*] Transformation migrate database (EXPERIMENTAL)
          [*] PF_KEY sockets
          [*]   PF_KEY MIGRATE (EXPERIMENTAL)                                                                                  
          [*] TCP/IP networking
          [*]   IP: multicasting                              
          [*]   IP: advanced router                           
              Choose IP: FIB lookup algorithm (choose FIB_HASH if unsure) (FIB_HASH)  
          [*]   IP: policy routing                            
          [*]   IP: equal cost multipath                      
          [*]   IP: verbose route monitoring                                                                               
          [*]   IP: kernel level autoconfiguration                                                           
          [*]     IP: DHCP support
          [*]   IP: tunneling                                 
          [*]   IP: GRE tunnels over IP                       
          [*]     IP: broadcast GRE over IP                   
          [*]   IP: multicast routing                         
          [*]     IP: PIM-SM version 1 support                
          [*]     IP: PIM-SM version 2 support                
          [*]   IP: ARP daemon support                        
          [*]   IP: TCP syncookie support (disabled per default)                         
          [*]   IP: AH transformation                         
          [*]   IP: ESP transformation                        
          [*]   IP: IPComp transformation                                                                            
          [ ]   IP: IPsec transport mode                                                                     
          [ ]   IP: IPsec tunnel mode                                                                        
          [ ]   IP: IPsec BEET mode
          [*]   TCP: advanced congestion control  --->                                                                           
           [*]   CUBIC TCP (NEW) (only cubic)
          [*]   TCP: MD5 Signature Option support (RFC2385) (EXPERIMENTAL) 
          [ ]   The IPv6 protocol  --->                                                                      
      
      
    9. Device Drivers  ---> (RAM block device support and Network device support + Ethernet are the most important things, the remaining stuff is related to my current hardware)
    10.   [*] Block devices  --->
          [*]   RAM block device support
        [*] Multiple devices driver support (RAID and LVM)  --->
          [*]   Device mapper support
        [*] Network device support  ---> 
          [*]   Ethernet (10 or 100Mbit)  --->
          [ ]   Wireless LAN  ---> 
       Character devices  --->
          [*] /dev/kmem virtual device support
          [*] Hardware Random Number Generator Core support
        [*] I2C support  --->
          [*]   I2C device interface
          I2C Hardware Bus support  ---> 
            [*] Intel PIIX4 and compatible (ATI/AMD/Serverworks/Broadcom/SMSC)
       Serial ATA (prod) and Parallel ATA (experimental) drivers (ATA [=n])
          [*]   ATA SFF support (NEW) 
            [*]    Intel ESB, ICH, PIIX3, PIIX4 PATA/SATA support
            [*]    Generic ATA support
      
    11. File systems  --->  (File systems are very important; their support depends on what your final goal is: mount a remote NFS for shared storage? use a GlusterFS / Ceph filesystem on top of a NAS? The configuration I used is the simplest one, with support only for initramfs and other pseudo filesystems. I recommend starting with this one, then gradually embedding your filesystems) 
    12.   [ ] Network File Systems  --->
        Pseudo filesystems  --->
          [*] Virtual memory file system support (former shm fs)
          [*]   Tmpfs POSIX Access Control Lists 
      
    13. [*] Virtualization  ---> (As I mentioned earlier, I'm using KVM-virtualized hardware that makes wide use of the Virtio paravirtualization technology. Virtio adds support for a paravirtual Ethernet card, a paravirtual disk I/O controller, a balloon device for adjusting guest memory usage, and a VGA graphics interface using SPICE drivers. Virtio drivers for guest machines are included in Kernels >= 2.6.25, see details here)    
    14.   [*]   PCI driver for virtio devices (EXPERIMENTAL)
        [*]   Virtio balloon driver (EXPERIMENTAL)
      
    15. Device Drivers ---> [for virtualization]
    16.  
        [*] Block devices  --->
          [*]   Virtio block driver (EXPERIMENTAL)
        [*] Network device support  ---> 
          [*]   Virtio network driver (EXPERIMENTAL)
        Character devices  --->
          [*] Virtio console
          [*] Hardware Random Number Generator Core support
          [*]   VirtIO Random Number Generator support
      
    17. Exit the Kernel configuration menu and don't forget to save the settings file.
  8. Compile the Kernel (-j4 means 4 threads devoted to compilation) and copy it to the TFTP root directory:
  9. $ make -j4 bzImage
    $ sudo cp arch/x86/boot/bzImage /var/lib/tftpboot/
    
  10. Do cleanup:
  11. $ popd
    $ sudo rm -rf /tmp/wrk
    
  12. Power on the rambox and enjoy it! It should boot smoothly and launch the BusyBox shell.
You will find the basic tools at /bin, /sbin, /usr/bin, /usr/sbin, /usr/local/sbin; all of these tools are in the PATH environment variable. To renew your IP address just run renew_ip. Finally, notice that no Kernel module is loaded, since everything you need is embedded.

Enjoy it!

Post install

Perform some checks after the install to ensure that everything is OK and to measure resource consumption:

  • Free memory; as you may notice, only about 12 MB is used:
  • $ free -m
                    total    used    free    shared    buffers
    Mem:             1255      12    1243         0          0 
    -/+ buffers:               12    1243
    Swap:               0       0       0 
    
  • Network configuration/connectivity, both interfaces should be listed, eth0 and lo:
  • $ ifconfig
    eth0      Link encap:Ethernet  HWaddr 08:00:07:26:c0:a5  
              inet addr:192.168.24.203  Bcast:192.168.24.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:378 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:43745 (42.7 KiB)  TX bytes:1180 (1.1 KiB)
              
    lo        Link encap:Local Loopback  
              inet addr:127.0.0.1  Mask:255.0.0.0
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:0 (0 B)  TX bytes:0 (0 B)
    
    $ ping -c 3 192.168.24.202
    PING 192.168.24.202 (192.168.24.202) 56(84) bytes of data.
    64 bytes from 192.168.24.202: icmp_req=1 ttl=63 time=0.774 ms
    64 bytes from 192.168.24.202: icmp_req=2 ttl=63 time=0.639 ms
    64 bytes from 192.168.24.202: icmp_req=3 ttl=63 time=0.574 ms
    
    --- 192.168.24.202 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2001ms
    rtt min/avg/max/mdev = 0.574/0.662/0.774/0.085 ms
    
    
  • Mounted partitions:
  • $ mount
    rootfs on / type rootfs (rw)
    devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=0620)
    tmpfs on /run type tmpfs (rw,nosuid,nodev,relatime,mode=0755)
    sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
    proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
    
  • Disk usage, about 1.1Mb: 
  • $ du -chs /*
    1.1M    /bin
    0       /dev
    28.0K   /etc
    4.0K    /init
    0       /lib
    0       /lib32
    0       /lib64
    0       /linuxrc
    0       /mnt
    0       /proc
    0       /root
    36.0K   /sbin
    0       /sys
    0       /tmp
    20.0K   /usr
    0       /var
    1.1M    total
    
  • Loaded modules; since no module was loaded, an empty list or a 'No such file or directory' message is issued:
  • $ lsmod
    lsmod: can't open '/proc/modules': No such file or directory
    
  • Device nodes created by the device manager (mdev/udev). The result depends on many factors:
  • $ find /dev | wc -l
    106
    


Wednesday, April 25, 2012

Matching IP address on Bash

I use bash very often; this is a tip on how to match an IPv4 address and an address list using bash. It makes use of Bash's matching operator =~. Enjoy it!
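
For example, a minimal sketch of the idea, assuming a dotted-quad address and a comma-separated address list:

    #!/bin/bash
    # building blocks: one octet (0-255) and a full IPv4 address
    octet='([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])'
    ipv4="${octet}\.${octet}\.${octet}\.${octet}"

    is_ip()      { [[ $1 =~ ^${ipv4}$ ]]; }            # single address
    is_ip_list() { [[ $1 =~ ^${ipv4}(,${ipv4})*$ ]]; } # comma-separated address list

    is_ip      "192.168.24.203"                && echo "valid address"
    is_ip_list "192.168.24.203,192.168.24.202" && echo "valid address list"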

Tuesday, April 17, 2012

A WSDL-first asynchronous JAX-WS webservice. Correlating the messages using WS-Addressing and a callback interface

This how-to is a WSDL-first (top-down) example of implementing a pure JAX-WS asynchronous web service with the following features:
  • the service, named JobProcessor, is designed top-down, starting from the WSDL and generating the interface and classes
  • the service supports WS-Addressing for correlating the asynchronous request message with another request message targeted to the callback endpoint
  • the service convention is document/literal
  • the service supports a single asynchronous operation: JobProcessor.processJob()
  • the desired message exchange pattern is simple:
    • a caller invokes the JobProcessor.processJob() operation in an asynchronous manner
    • a <wsa:MessageID/> WS-Addressing header is supplied in the request
    • a callback endpoint is also supplied with the call using WS-Addressing headers, specifically <wsa:ReplyTo/> 
    • the service simulates some time-consuming processing (hence the need for asynchronous WS)
    • once the processing is finished, the service calls back the caller, performing correlation via WS-Addressing headers, that is, using <wsa:RelatesTo/> 
  • the service also acts as a client when it invokes the callback endpoint, so Messages, PortTypes, Bindings, etc. are also needed

NOTE: the term caller denotes an external entity/user invoking the JobProcessor service; the term client denotes the role of the service when it acts as a client

Hands on


I use JDeveloper in this how-to, but it can easily be done using another IDE.
  1. Create an empty generic application and project, name them JobProcessorApp and JobProcessorProj respectively, and add only the Java and Web Service technologies. Choose a default package name like dev.home.examples.jobprocessor 
  2. Create a new folder, name it wsdl. 
  3. Create a new XML Schema file in the wsdl folder, name it jobprocessor.xsd and use a custom namespace like http://examples.home.dev/jobprocessor/types and a significant prefix like jpt 
  4. Create a new WSDL Document in the wsdl folder, name it jobprocessor.wsdl, and use a custom namespace like: http://examples.home.dev/jobprocessor 
  5. Edit the WSDL and replace the default namespace prefix tns with a significant one like jp, and declare the XML Schema namespace using the jpt prefix: xmlns:jpt="http://examples.home.dev/jobprocessor/types" 
  6. Edit the WSDL and add the WS-Addressing for WSDL namespace: xmlns:wsaw="http://www.w3.org/2006/05/addressing/wsdl" 
  7. Include the XSD file in the schema section of the WSDL (definitions/types/xsd:schema/xsd:include); at this point the WSDL should look like:
  8. <?xml version="1.0" encoding="UTF-8" ?>
    <definitions targetNamespace="http://examples.home.dev/jobprocessor"
                 xmlns:jpt="http://examples.home.dev/jobprocessor/types"
                 xmlns:jp="http://examples.home.dev/jobprocessor"
                 xmlns="http://schemas.xmlsoap.org/wsdl/"
                 xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                 xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
                 xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/"
                 xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/"
                 xmlns:wsaw="http://www.w3.org/2006/05/addressing/wsdl">
      <types>
        <xsd:schema targetNamespace="http://examples.home.dev/jobprocessor/types"
                    elementFormDefault="qualified">
          <xsd:include schemaLocation="jobprocessor.xsd"/>
        </xsd:schema>
      </types>
    </definitions>
    
    
    
  9. Define schema types and elements for message payloads: Job, JobType, JobReply and JobReplyType, the final XSD should look like:
  10. <?xml version="1.0" encoding="UTF-8" ?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                xmlns:jpt="http://examples.home.dev/jobprocessor/types"
                targetNamespace="http://examples.home.dev/jobprocessor/types"
                elementFormDefault="qualified">
      <xsd:element name="Job" type="jpt:JobType"/>
      <xsd:complexType name="JobType">
        <xsd:sequence>
          <xsd:element name="jobId" type="xsd:string"/>
          <xsd:element name="payload" type="xsd:string"/>
        </xsd:sequence>
      </xsd:complexType>
      <xsd:element name="JobReply" type="jpt:JobReplyType"/>
      <xsd:complexType name="JobReplyType">
        <xsd:sequence>
          <xsd:element name="jobId" type="xsd:string"/>
          <xsd:element name="result" type="xsd:string"/>
        </xsd:sequence>
      </xsd:complexType>
    </xsd:schema>
  11. Define the WSDL messages Job and JobReply, each with its corresponding element-based part, job and jobReply 
  12. Define the WSDL port types: JobProcessor for the service interface and JobProcessorNotify for the callback interface, with their corresponding operations processJob() and replyFinishedJob(); also specify the corresponding WS-Addressing action for each input message. At this point the WSDL looks like:
  13. <?xml version="1.0" encoding="UTF-8" ?>
    <definitions targetNamespace="http://examples.home.dev/jobprocessor"
                 xmlns:jpt="http://examples.home.dev/jobprocessor/types"
                 xmlns:jp="http://examples.home.dev/jobprocessor"
                 xmlns="http://schemas.xmlsoap.org/wsdl/"
                 xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                 xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
                 xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/"
                 xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/"
                 xmlns:wsaw="http://www.w3.org/2006/05/addressing/wsdl">
      <types>
        <xsd:schema targetNamespace="http://examples.home.dev/jobprocessor/types"
                    elementFormDefault="qualified">
          <xsd:include schemaLocation="jobprocessor.xsd"/>
        </xsd:schema>
      </types>
      <message name="Job">
        <part name="job" element="jpt:Job"/>
      </message>
      <message name="JobReply">
        <part name="jobReply" element="jpt:JobReply"/>
      </message>
      <portType name="JobProcessor">
        <operation name="processJob">
          <input message="jp:Job" wsaw:Action="http://examples.home.dev/jobprocessor/processJob"/>
        </operation>
      </portType>
      <portType name="JobProcessorNotify">
        <operation name="replyFinishedJob">
          <input message="jp:JobReply" wsaw:Action="http://examples.home.dev/jobprocessor/replyFinishedJob"/>
        </operation>
      </portType>
    </definitions>
    
  14. Define document-literal bindings with HTTP transport for each port type JobProcessor and JobProcessorNotify, enforce WS-Addressing using <wsaw:UsingAddressing required="true"/>
  15. Define the service JobProcessor and the callback service JobProcessorNotify, the final WSDL should look like:
  16. <?xml version="1.0" encoding="UTF-8" ?>
    <definitions targetNamespace="http://examples.home.dev/jobprocessor"
                 xmlns:jpt="http://examples.home.dev/jobprocessor/types"
                 xmlns:jp="http://examples.home.dev/jobprocessor"
                 xmlns="http://schemas.xmlsoap.org/wsdl/"
                 xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                 xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
                 xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/"
                 xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/"
                 xmlns:wsaw="http://www.w3.org/2006/05/addressing/wsdl">
      <types>
        <xsd:schema targetNamespace="http://examples.home.dev/jobprocessor/types"
                    elementFormDefault="qualified">
          <xsd:include schemaLocation="jobprocessor.xsd"/>
        </xsd:schema>
      </types>
      <message name="Job">
        <part name="job" element="jpt:Job"/>
      </message>
      <message name="JobReply">
        <part name="jobReply" element="jpt:JobReply"/>
      </message>
      <portType name="JobProcessor">
        <operation name="processJob">
          <input message="jp:Job" wsaw:Action="http://examples.home.dev/jobprocessor/processJob"/>
        </operation>
      </portType>
      <portType name="JobProcessorNotify">
        <operation name="replyFinishedJob">
          <input message="jp:JobReply" wsaw:Action="http://examples.home.dev/jobprocessor/replyFinishedJob"/>
        </operation>
      </portType>
      <binding name="JobProcessor" type="jp:JobProcessor">
        <wsaw:UsingAddressing wsdl:required="true"/>
        <soap:binding style="document"
                      transport="http://schemas.xmlsoap.org/soap/http"/>
        <operation name="processJob">
          <soap:operation style="document"
                          soapAction="http://examples.home.dev/jobprocessor/processJob"/>
          <input>
            <soap:body use="literal" parts="job"/>
          </input>
        </operation>
      </binding>
      <binding name="JobProcessorNotify" type="jp:JobProcessorNotify">
        <wsaw:UsingAddressing required="true"/>
        <soap:binding style="document"
                      transport="http://schemas.xmlsoap.org/soap/http"/>
        <operation name="replyFinishedJob">
          <soap:operation style="document"
                          soapAction="http://examples.home.dev/jobprocessor/replyFinishedJob"/>
          <input>
            <soap:body use="literal" parts="jobReply"/>
          </input>
        </operation>
      </binding>
      <service name="JobProcessor">
        <port name="jobProcessor" binding="jp:JobProcessor">
          <soap:address location="http://localhost/JobProcessor"/>
        </port>
      </service>
      <service name="JobProcessorNotify">
        <port name="jobProcessorNotify" binding="jp:JobProcessorNotify">
          <soap:address location="http://localhost/JobProcessorNotify"/>
        </port>
      </service>
    </definitions>
    
  17. Once the WSDL design is finished, start coding. Go to File > New > Java Web Service from WSDL 
  18. Choose Java EE 1.5 with support for JAX-WS RI 

  19. Choose the jobprocessor.wsdl file, ensure that the interface is also generated by marking Add Service Endpoint Interface, and uncheck copying the WSDL locally:

  20. Choose JobProcessor service
  21. Enter dev.home.examples.jobprocessor.ws for the Package Name and dev.home.examples.jobprocessor.types for Root Package for Generated types (SDO types). 

  22. Choose jobProcessor port then click Finish
  23. You will end up with several classes and interfaces generated from the WSDL. But we won't implement the JobProcessorNotify service in Java, so you can delete JobProcessorNotify.java and JobProcessorNotifyImpl.java   
  24. Annotate the JobProcessorImpl class to enforce Addressing with javax.xml.ws.soap.Addressing:
  25. @Addressing(required = true)
    public class JobProcessorImpl {
    ...
    }
    
  26. It's necessary to create a Java-based WS client to invoke the caller via the JobProcessorNotify PortType. Unfortunately, JDeveloper 11.1.1.5.0 cannot do that in a pure JAX-WS RI manner. Instead I use the JDK's wsimport tool; open a terminal/command prompt and type:  
  27. cd [PROJECT HOME DIRECTORY]
    wsimport -p dev.home.examples.jobprocessor.client -d classes/ -s src/ \
      -verbose wsdl/jobprocessor.wsdl 
    
  28. Then delete the unnecessary class dev.home.examples.jobprocessor.client.JobProcessor_Service and the interface dev.home.examples.jobprocessor.client.JobProcessor since we won't call ourselves
  29. At this point you will notice that the SDO types JobType and JobReplyType were generated twice, the first time in the dev.home.examples.jobprocessor.types package and the second time in the dev.home.examples.jobprocessor.client package; if you compare them you'll notice they are almost identical except for the package name. So delete dev.home.examples.jobprocessor.client.JobType and dev.home.examples.jobprocessor.client.JobReplyType, leaving the SDO types only in the dev.home.examples.jobprocessor.types package.
  30. Now for the magic behind it; the correlation is very simple: 1) grab the MessageID and ReplyTo headers from the request message, and 2) on the reply (a new request) message set reply.RelatesTo = MessageID and reply.To = ReplyTo. To achieve this I created a helper class named CorrelationHelper: it takes a javax.xml.ws.Service subtype as a type parameter, a service of this type is passed as a constructor argument, and a WebServiceContext is also needed:
  31. /**
     * Performs correlation between an incoming one-way message and an outgoing 
     * one-way message by mapping the header wsa:To with wsa:ReplyTo and 
     * wsa:RelatesTo with wsa:MessageID
     *
     * @param <S> the service type
     */
    public final class CorrelationHelper<S extends Service> {
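    
        // Assumed imports (not reproduced in this listing): javax.xml.ws.EndpointReference,
        // javax.xml.ws.Service and javax.xml.ws.WebServiceContext, plus the JAX-WS RI internal
        // APIs com.sun.xml.ws.api.SOAPVersion, com.sun.xml.ws.api.addressing.AddressingVersion,
        // com.sun.xml.ws.api.message.HeaderList, com.sun.xml.ws.api.message.Headers,
        // com.sun.xml.ws.developer.JAXWSProperties and com.sun.xml.ws.developer.WSBindingProvider.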
    
        private WebServiceContext wsc;
        private S service;
    
        public CorrelationHelper(S service, WebServiceContext wsc) {
            this.service = service;
            this.wsc = wsc;
        }
    
        /**
         * Retrieves the headers
         * @return
         */
        private HeaderList getHeaders() {
            return (HeaderList)wsc.getMessageContext().get(JAXWSProperties.INBOUND_HEADER_LIST_PROPERTY);
        }
    
        /**
         * Creates the correlated port by appending a WS-Addressing RelatesTo
         * header set to the incoming MessageID
         * @param <P> the port type, castable to WSBindingProvider
         * @return the correlated port
         */
        public <P> P getCorrelatedPort(Class<P> portType) {
            P port = service.getPort(getReplyTo(), portType);
            ((WSBindingProvider)port).setOutboundHeaders(Headers.create(AddressingVersion.W3C.relatesToTag,
                                                                        getMessageId()));
            return port;
        }
    
        /**
         * Grab WS-Addressing ReplyTo/Address header
         * @return
         */
        private EndpointReference getReplyTo() {
            return getHeaders().getReplyTo(AddressingVersion.W3C,
                                           SOAPVersion.SOAP_11).toSpec();
        }
    
        /**
         * Grab WS-Addressing MessageID header
         * @return
         */
        private String getMessageId() {
            return getHeaders().getMessageID(AddressingVersion.W3C,
                                             SOAPVersion.SOAP_11);
        }
    }
    
  32. The processJob() implementation is trivial:
  33. @WebService(serviceName = "JobProcessor",
                targetNamespace = "http://examples.home.dev/jobprocessor",
                portName = "jobProcessor",
                endpointInterface = "dev.home.examples.jobprocessor.ws.JobProcessor")
    @HandlerChain(file = "JobProcessor-HandlerChain.xml")
    @Addressing(required = true)
    public class JobProcessorImpl {
    
        @Resource
        private WebServiceContext wsc;
    
        private CorrelationHelper<JobProcessorNotify_Service> correlationHelper;
        private Random random;
    
        public void processJob(JobType job) {
            
            // do processing
            int seconds = doJob();
    
            // prepare reply message
            JobReplyType jobReply = new JobReplyType();
            jobReply.setJobId(job.getJobId());
            jobReply.setResult(String.format("Job payload %s processed in %d seconds!",
                                             job.getPayload(), seconds));
            
            // do correlation and perform the callback
            JobProcessorNotify jobProcessorNotify =
                correlationHelper.getCorrelatedPort(JobProcessorNotify.class);
            jobProcessorNotify.replyFinishedJob(jobReply);
        }
    
        /**
         * Sleeps random time between 5 and 10 seconds to simulate processing
         * @return
         */
        private int doJob() {
            int seconds = random.nextInt(6) + 5;
            try {
                Thread.sleep(1000 * seconds);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            return seconds;
        }
    
        @PostConstruct
        public void doPostConstruct() {
            correlationHelper =
                    new CorrelationHelper<JobProcessorNotify_Service>(new JobProcessorNotify_Service(),
                                                                      wsc);
            random = new Random(System.nanoTime());
        }
    }
    
  34. As you may have noticed, all the magic occurs in the getCorrelatedPort() generic method, which takes a Class<P> parameter and supplies the wsa:ReplyTo header as the endpoint argument in the javax.xml.ws.Service.getPort() invocation, then relates the outgoing message to the incoming one via wsa:RelatesTo = wsa:MessageID   
  35. I also changed some initialization routines in the generated code, especially in JobProcessorNotify_Service, to be able to load the WSDL from the classpath; notice I changed the @WebServiceClient's wsdlLocation parameter and the url assignment statement:
  36. @WebServiceClient(name = "JobProcessorNotify",
                      targetNamespace = "http://examples.home.dev/jobprocessor",
                      wsdlLocation = "classpath:wsdl/jobprocessor.wsdl")
    public class JobProcessorNotify_Service extends Service {
    
        private final static URL JOBPROCESSORNOTIFY_WSDL_LOCATION;
        private final static WebServiceException JOBPROCESSORNOTIFY_EXCEPTION;
        private final static QName JOBPROCESSORNOTIFY_QNAME =
            new QName("http://examples.home.dev/jobprocessor",
                      "JobProcessorNotify");
    
        static {
            URL url =
                ClassLoader.getSystemClassLoader().getResource("wsdl/jobprocessor.wsdl"); // plain resource path: getResource() does not resolve a "classpath:" scheme
            WebServiceException e = null;
            JOBPROCESSORNOTIFY_WSDL_LOCATION = url;
            JOBPROCESSORNOTIFY_EXCEPTION = e;
        }
    ...
    }
    
  37. Finally, notice that the generic class CorrelationHelper<S extends Service> can be used in any kind of JAX-WS web service (hence the generics) that engages this message exchange pattern; a small reuse sketch follows.
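
For illustration only, here is a minimal reuse sketch with a hypothetical OrderNotify callback service; OrderNotify_Service, OrderNotify, OrderType and OrderReplyType stand for the artifacts you would generate with wsimport from your own WSDL (the @WebService annotation is omitted for brevity):

    @Addressing(required = true)
    public class OrderProcessorImpl {

        @Resource
        private WebServiceContext wsc;

        private CorrelationHelper<OrderNotify_Service> correlationHelper;

        @PostConstruct
        public void doPostConstruct() {
            // same pattern as above: the generated Service subtype is the type argument
            correlationHelper =
                new CorrelationHelper<OrderNotify_Service>(new OrderNotify_Service(), wsc);
        }

        public void processOrder(OrderType order) {
            OrderReplyType reply = new OrderReplyType();
            // ... fill the reply ...
            // the correlated port already carries wsa:RelatesTo and targets the caller's wsa:ReplyTo
            correlationHelper.getCorrelatedPort(OrderNotify.class).replyFinishedOrder(reply);
        }
    }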


What's next?

The testing of course:
  1. Use soapUI to make your life easier.
  2. Deploy the WS to a testing/staging WebLogic/Tomcat/... server; once deployed you will end up with an endpoint address, e.g.: http://HOST/JobProcessor
  3. Create a new soapUI project by supplying the WSDL address; it should be http://HOST/JobProcessor?WSDL, or simply use the WSDL from your file system
  4. Check "Create a Web Service Simulation from the imported WSDL" and uncheck everything else:
  5. Generate the Mock service ONLY for the replyFinishedJob operation, and check Starts MockService immediately. 
  6. Name the mock JobProcessorNotify MockService; your mock will end up listening on the address http://YOURPC:8088/mockJobProcessorNotify
  7. Go to JobProcessor / processJob,  create a new request and fill it:
  8. <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" 
      xmlns:typ="http://examples.home.dev/jobprocessor/types">
       <soapenv:Header/>
       <soapenv:Body>
          <typ:Job>
             <typ:jobId>1</typ:jobId>
             <typ:payload>Euardo Lago Aguilar</typ:payload>
          </typ:Job>
       </soapenv:Body>
    </soapenv:Envelope>
    
  9. At the bottom of the request window, open the WS-Addressing related settings and check Enable/Disable WS-A Addressing
  10. The Action should be already filled up with: http://examples.home.dev/jobprocessor/processJob
  11. Set To equal to: http://HOST/JobProcessor and also set the endpoint address (above) to the same value
  12. Set ReplyTo equal to your mock listening address:  http://YOURPC:8088/mockJobProcessorNotify
  13. Check Randomly generate MessageId, at the end the request should look like:
  14. Send the request; you should receive a reply message between 5 and 10 seconds later, see the mock log in the picture below:


  15. Open the log entry by double-clicking; the content should be:
    <S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
       <S:Header>
          <wsa:RelatesTo xmlns:wsa="http://www.w3.org/2005/08/addressing">uuid:732cf156-6de4-4701-9ae8-1aaa1c7a3bd9</wsa:RelatesTo>
          <wsa:To xmlns:wsa="http://www.w3.org/2005/08/addressing">http://YOURPC:8088/mockJobProcessorNotify</wsa:To>
          <wsa:Action xmlns:wsa="http://www.w3.org/2005/08/addressing">http://examples.home.dev/jobprocessor/replyFinishedJob</wsa:Action>
          <work:WorkContext xmlns:work="http://oracle.com/weblogic/soap/workarea/">rO0ABXoAA...AAAA=</work:WorkContext>
       </S:Header>
       <S:Body>
          <JobReply xmlns="http://examples.home.dev/jobprocessor/types">
             <jobId>1</jobId>
             <result>Job payload Euardo Lago Aguilar processed in 8 seconds!</result>
          </JobReply>
       </S:Body>
    </S:Envelope>
    
That's all folks! Thanks for being patient! Get the code here: JobProcessorApp.tar.gz