August 2007

Here is an essential article by Martin Fowler on the subject: Continuous Integration


Practices of Continuous Integration

  • Maintain a Single Source Repository

    Although many teams use repositories, a common mistake I see is that they don’t put everything in the repository. … everything you need to do a build should be in there including: test scripts, properties files, database schema, install scripts, and third party libraries.

    The basic rule of thumb is that you should be able to walk up to the project with a virgin machine, do a checkout, and be able to fully build the system.

    Keep your use of branches to a minimum. In particular have a mainline: a single branch of the project currently under development.

  • Automate the Build

    Automated environments for builds are a common feature of systems. … Make sure you can build and launch your system using these scripts using a single command.

    A common mistake is not to include everything in the automated build. The build should include getting the database schema out of the repository and firing it up in the execution environment.

  • Make Your Build Self-Testing

    A good way to catch bugs more quickly and efficiently is to include automated tests in the build process.

    For self-testing code you need a suite of automated tests that can check a large part of the code base for bugs. The tests need to be able to be kicked off from a simple command and to be self-checking. The result of running the test suite should indicate if any tests failed. For a build to be self-testing the failure of a test should cause the build to fail.
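    The idea above can be sketched in plain Java: a test runner that reports its own pass/fail result and makes the build fail via a non-zero exit code. This is a minimal illustration, not code from Fowler's article; the class, the tests, and their names are all invented (a real project would use a framework such as JUnit).

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class SelfTestingBuild {

        // A self-checking test: it decides pass/fail itself, so no human
        // has to inspect the output.
        static boolean testAddition() {
            return 2 + 2 == 4;
        }

        static boolean testStringJoin() {
            return String.join("-", "a", "b").equals("a-b");
        }

        // Runs every test and returns the names of the failures.
        static List<String> runAll() {
            List<String> failures = new ArrayList<>();
            if (!testAddition()) failures.add("testAddition");
            if (!testStringJoin()) failures.add("testStringJoin");
            return failures;
        }

        public static void main(String[] args) {
            List<String> failures = runAll();
            if (!failures.isEmpty()) {
                System.err.println("FAILED: " + failures);
                System.exit(1); // non-zero exit status makes the automated build fail
            }
            System.out.println("All tests passed");
        }
    }
    ```

    The build script simply runs this class after compilation; any failed test stops the build, which is exactly what makes the build self-testing.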

  • Everyone Commits Every Day

    The more frequently you commit, the fewer places you have to look for conflict errors, and the more rapidly you fix conflicts.

    Frequent commits encourage developers to break down their work into small chunks of a few hours each. This helps track progress and provides a sense of progress.

  • Every Commit Should Build the Mainline on an Integration Machine

    There are two main ways I’ve seen to ensure this: using a manual build or a continuous integration server.

    The manual build approach is the simplest one to describe. Essentially it’s a similar thing to the local build that a developer does before the commit into the repository. The developer goes to the integration machine, checks out the head of the mainline (which now houses his last commit) and kicks off the integration build. He keeps an eye on its progress, and if the build succeeds he’s done with his commit.

    A continuous integration server acts as a monitor to the repository. Every time a commit against the repository finishes the server automatically checks out the sources onto the integration machine, initiates a build, and notifies the committer of the result of the build. The committer isn’t done until she gets the notification – usually an email.

  • Keep the Build Fast

    Nothing sucks the blood of a CI activity more than a build that takes a long time.

    For most projects, however, the XP guideline of a ten minute build is perfectly within reason.

  • Test in a Clone of the Production Environment
  • Make it Easy for Anyone to Get the Latest Executable

    One of the most difficult parts of software development is making sure that you build the right software. We’ve found that it’s very hard to specify what you want in advance and be correct; people find it much easier to see something that’s not quite right and say how it needs to be changed. Agile development processes explicitly expect and take advantage of this part of human behavior.

    To help make this work, anyone involved with a software project should be able to get the latest executable and be able to run it: for demonstrations, exploratory testing, or just to see what changed this week.

  • Everyone Can See What’s Happening
  • Automate Deployment

The original article:
Plan Your Testing

All tests are not created equal—as project teams that can’t distinguish and selectively allocate resources to the most important tests will discover. These teams spend most of their test time on less important tests.

In general, the more important tests—such as integration tests, which confirm the interfaces between two or more modules—usually come after you complete unit tests. Tests that demonstrate the system’s ability to handle peak loads, and system tests, are usually performed last. The errors these tests reveal are often the most serious, yet these late tests are the ones most likely to be squeezed out when the schedule slips.

There are three key characteristics of how we “proactively” plan tests: First, planning tests before coding; second, planning tests top-down; and third, planning tests as a means to reduce risks.

  • Planning tests before coding
    If you create test plans after you’ve written the code, you’re testing that the code works the way it was written, not the way it should have been written. Tests planned prior to coding tend to be thorough and more likely to detect errors of omission and misinterpretation.
    When test plans already exist, you can often carry out tests more efficiently. First, there’s no delay. You can run tests as soon as the code is ready. Second, having the test plan lets you run more tests in the same amount of time, because you are using your time to run the tests on the plan instead of interrupting your train of thought to find test data.
    Planning tests first can help developers write the code right the first time, thereby reducing development time.
  • Top-down planning
    Planning tests top-down means starting with the big picture and systematically decomposing it level by level into its components. This approach provides three major advantages over the more common bottom-up method.

    1. First, systematic decomposition reduces the chance that you will overlook any significant component. Since each lower level simply redefines its parent in greater detail, the process of decomposition forces confirmation that the redefinition is complete.
    2. Second, by structuring the test design, you can build and manage the tests more easily and economically. The test structure follows the software structure, so you can reuse and refine tests with less effort and less rework.
    3. Third, top-down planning creates the view you need to enable selective allocation of resources. That is, once the overall structure is defined, the test planner can decide which areas to emphasize and which to give less attention.
  • Testing as a means to reduce risks
    At each level, and for each test item, ask the following set of questions to identify risks:

    • What must be demonstrated to be confident it works?
    • What can go wrong to prevent it from working successfully?
    • What must go right for it to work successfully?

    Ensure that key software elements will function properly before building the other elements that depend on them.

Here is an article by Steve McConnell:
Software Quality at Top Speed


Some project managers try to shorten their schedules by reducing the time spent on quality-assurance practices such as design and code reviews. Some shortchange the upstream activities of requirements analysis and design. Others–running late–try to make up time by compressing the testing schedule, which is vulnerable to reduction since it’s the critical-path item at the end of the schedule.

These are some of the worst decisions a person who wants to maximize development speed can make. In software, higher quality (in the form of lower defect rates) and reduced development time go hand in hand.

Software development at top speed

Design Shortcuts

Projects that are in schedule trouble often become obsessed with working harder rather than working smarter. Attention to quality is seen as a luxury. The result is that projects often work dumber, which gets them into even deeper schedule trouble.

Error-Prone Modules

Barry Boehm reported that 20 percent of the modules in a program are typically responsible for 80 percent of the errors.

If development speed is important, make identification and redesign of error-prone modules a priority. Once a module’s error rate hits about 10 defects per thousand lines of code, review it to determine whether it should be redesigned or reimplemented. If it’s poorly structured, excessively complex, or excessively long, redesign the module and reimplement it from the ground up. You’ll shorten the schedule and improve the quality of your product at the same time.
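    The 10-defects-per-KLOC review trigger described above can be sketched in a few lines of Java. The module names and defect counts below are invented for illustration; only the threshold comes from the text.

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class ErrorProneModules {

        // Defect density in defects per thousand lines of code (KLOC).
        static double defectDensity(int defects, int linesOfCode) {
            return defects * 1000.0 / linesOfCode;
        }

        // Returns the modules whose defect density reaches the review
        // threshold; each entry maps a module name to {defects, linesOfCode}.
        static Map<String, Double> flagForReview(Map<String, int[]> modules,
                                                 double threshold) {
            Map<String, Double> flagged = new LinkedHashMap<>();
            for (Map.Entry<String, int[]> e : modules.entrySet()) {
                double density = defectDensity(e.getValue()[0], e.getValue()[1]);
                if (density >= threshold) flagged.put(e.getKey(), density);
            }
            return flagged;
        }

        public static void main(String[] args) {
            Map<String, int[]> modules = new LinkedHashMap<>();
            modules.put("billing", new int[] {25, 2000});  // 12.5 defects/KLOC
            modules.put("reports", new int[] {3, 4000});   // 0.75 defects/KLOC
            // Only "billing" crosses the 10 defects/KLOC review threshold.
            System.out.println(flagForReview(modules, 10.0));
        }
    }
    ```

    Tracking this one number per module is enough to surface the 20 percent of modules that, per Boehm, account for 80 percent of the errors.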

Very funny! Sometimes interviewing developers can be quite amusing.

Seven Deadly Sins of Software Reviews


  • Participants Don’t Understand the Review Process
  • Reviewers Critique the Producer, Not the Product
  • Reviews Are Not Planned
  • Review Meetings Drift Into Problem-Solving
  • Reviewers Are Not Prepared
  • The Wrong People Participate
  • Reviewers Focus on Style, Not Substance

This discussion is very interesting:
Why do people insist on doing EVERYTHING in Java?

Recently I got this error with JSP. It seems quite strange to me!
When I wrote:

<% String includePage = "/simulation/salaries/" + ref.getIncludeListeSalaries(); %>
<jsp:include page="<%= includePage %>" />

it works (with Resin 2.0.5)
But when I wrote:

<jsp:include page="/simulation/salaries/<%= ref.getIncludeListeSalaries() %>" />

it does NOT work, and the error message is:

[21/08/07 15:16:46.814] [2007/08/21 14:16:46] D:\SVN_LOCAL\V1.64\sf\simulation\salaries\creerListeSalariesSaisieManuelleArt83.jsp:415: interpolated runtime values are forbidden by the JSP spec at `/simulation/salaries/<%= ref.getIncludeListeSalaries() %>’
[21/08/07 15:16:46.814] com.caucho.jsp.JspParseException: D:\SVN_LOCAL\V1.64\sf\simulation\salaries\creerListeSalariesSaisieManuelleArt83.jsp:415: interpolated runtime values are forbidden by the JSP spec at `/simulation/salaries/<%= ref.getIncludeListeSalaries() %>’
at com.caucho.jsp.JspGenerator.error(
at com.caucho.jsp.JspGenerator.hasRuntimeAttribute(
at com.caucho.jsp.JavaGenerator.generateInclude(
at com.caucho.jsp.JspGenerator.generateChildren(
at com.caucho.jsp.JspGenerator.generate(
at com.caucho.jsp.JspParser.parse(
at com.caucho.jsp.JspParser.parse(
at com.caucho.jsp.JspManager.createPage(
at com.caucho.jsp.PageManager.getPage(
at com.caucho.jsp.PageManager.getPage(
at com.caucho.jsp.QServlet.getPage(
at com.caucho.server.http.FilterChainPage.doFilter(
at com.caucho.server.http.Invocation.service(
at com.caucho.server.http.QRequestDispatcher.forward(
at com.caucho.server.http.QRequestDispatcher.forward(
at com.caucho.server.http.QRequestDispatcher.forward(
at fr.acmn.simu.ControleArt83.controleListeSalariesNomSaisieManuelle(
at fr.acmn.simu.ControleArt83.service(
at javax.servlet.http.HttpServlet.service(
at com.caucho.server.http.FilterChainServlet.doFilter(
at com.caucho.server.http.Invocation.service(
at com.caucho.server.http.RunnerRequest.handleRequest(
at com.caucho.server.http.RunnerRequest.handleConnection(
at Source)
[21/08/07 15:16:46.814] invocation:/general/page500.jsp -> (host:, context:, servletPath:/general/page500.jsp, pathInfo:null, servlet:com.caucho.jsp.JspServlet, filter:null)

I don’t know why, but this problem seems to be quite common:

  • Possible bug with “interpolated runtime values are forbidden”
  • interpolated runtime values are forbidden?

Here is an extract from the book “Professional Java Server Programming”, Chapter 12: JSP Tag Extensions

There is a curious and confusing inconsistency in JSP syntax when non-String tag attributes are the results of JSP expressions. Let’s suppose we want to pass an object of class examples.Values (a kind of list) to a tag extension. The syntax:

<wrox:list values="<%=values%>">

is problematic, because we know from the JSP specification that an expression “is evaluated and the result is coerced to a String which is subsequently emitted into the current out JspWriter object”. In the case of the custom tag above, however, the value of the expression is not coerced to a String, but passed to the tag handler as its original type.
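The behavior the book describes can be sketched in plain Java. The two classes below are hypothetical stand-ins (a real handler would extend javax.servlet.jsp.tagext.TagSupport, and examples.Values comes from the book): the point is that the container matches the attribute to a setter whose parameter type is Values, not String, so the evaluated expression is handed over in its original type, uncoerced.

```java
import java.util.Arrays;
import java.util.List;

public class TagAttributeSketch {

    // Stand-in for the examples.Values list type from the book.
    static class Values {
        final List<String> items;
        Values(String... items) { this.items = Arrays.asList(items); }
    }

    // Stand-in for the <wrox:list> tag handler.
    static class ListTag {
        Values values;
        // The container matches the "values" attribute to this setter;
        // because the parameter type is Values rather than String, the
        // expression result is passed through without String coercion.
        public void setValues(Values v) { this.values = v; }
    }

    public static void main(String[] args) {
        // What the container effectively does for values="<%=values%>":
        Values expr = new Values("a", "b"); // result of evaluating the expression
        ListTag tag = new ListTag();
        tag.setValues(expr);                // original type, no String coercion
        System.out.println(tag.values.items);
    }
}
```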
