Software Development


An interesting article: Selling your own software vs working for the man

Money is a factor, of course: you need it to live. But when you work for yourself, you can develop software the way you like and do something that is really meaningful to you. In other words, everything you do and every second you spend count for you, and for you alone.

Here is an essential article by Martin Fowler on the subject: Continuous Integration

Summary:

Practices of Continuous Integration

  • Maintain a Single Source Repository

    Although many teams use repositories, a common mistake I see is that they don’t put everything in the repository. … everything you need to do a build should be in there, including: test scripts, properties files, database schema, install scripts, and third party libraries.

    The basic rule of thumb is that you should be able to walk up to the project with a virgin machine, do a checkout, and be able to fully build the system.

    Keep your use of branches to a minimum. In particular have a mainline: a single branch of the project currently under development.

  • Automate the Build

    Automated environments for builds are a common feature of systems. … Make sure you can build and launch your system with a single command using these scripts.

    A common mistake is not to include everything in the automated build. The build should include getting the database schema out of the repository and firing it up in the execution environment.

  • Make Your Build Self-Testing

    A good way to catch bugs more quickly and efficiently is to include automated tests in the build process.

    For self-testing code you need a suite of automated tests that can check a large part of the code base for bugs. The tests need to be able to be kicked off from a simple command and to be self-checking. The result of running the test suite should indicate if any tests failed. For a build to be self-testing the failure of a test should cause the build to fail. (A minimal JUnit sketch follows this list.)

  • Everyone Commits Every Day

    The more frequently you commit, the fewer places you have to look for conflict errors, and the more rapidly you fix conflicts.

    Frequent commits encourage developers to break down their work into small chunks of a few hours each. This helps track progress and provides a sense of progress.

  • Every Commit Should Build the Mainline on an Integration Machine

    There are two main ways I’ve seen to ensure this: using a manual build or a continuous integration server.

    The manual build approach is the simplest one to describe. Essentially it’s a similar thing to the local build that a developer does before the commit into the repository. The developer goes to the integration machine, checks out the head of the mainline (which now houses his last commit) and kicks off the integration build. He keeps an eye on its progress, and if the build succeeds he’s done with his commit.

    A continuous integration server acts as a monitor to the repository. Every time a commit against the repository finishes the server automatically checks out the sources onto the integration machine, initiates a build, and notifies the committer of the result of the build. The committer isn’t done until she gets the notification – usually an email. (A deliberately naive sketch of such a server loop also follows this list.)

  • Keep the Build Fast

    Nothing sucks the blood of a CI activity more than a build that takes a long time.

    For most projects, however, the XP guideline of a ten minute build is perfectly within reason.

  • Test in a Clone of the Production Environment
  • Make it Easy for Anyone to Get the Latest Executable

    One of the most difficult parts of software development is making sure that you build the right software. We’ve found that it’s very hard to specify what you want in advance and be correct; people find it much easier to see something that’s not quite right and say how it needs to be changed. Agile development processes explicitly expect and take advantage of this part of human behavior.

    To help make this work, anyone involved with a software project should be able to get the latest executable and be able to run it: for demonstrations, exploratory testing, or just to see what changed this week.

  • Everyone can see what’s happening
  • Automate Deployment
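
To make the “Make Your Build Self-Testing” practice concrete, here is a minimal sketch, assuming JUnit 4 on the classpath. PriceCalculator and its discount rule are invented for illustration and defined inline so the sketch is self-contained:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Minimal sketch of a self-testing build's test suite (JUnit 4 assumed).
// PriceCalculator is a hypothetical class; in a real project it would live
// in the production code, not inside the test.
public class PriceCalculatorTest {

    static class PriceCalculator {
        double discountedPrice(double price) {
            // hypothetical rule: 10% off for prices of 100 or more
            return price >= 100.0 ? price * 0.9 : price;
        }
    }

    @Test
    public void appliesTheDiscountFromOneHundred() {
        assertEquals(90.0, new PriceCalculator().discountedPrice(100.0), 0.001);
    }

    @Test
    public void leavesSmallerPricesUnchanged() {
        assertEquals(50.0, new PriceCalculator().discountedPrice(50.0), 0.001);
    }
}

Wired into the build script, a single failing assertion here fails the whole build, which is exactly what makes the build self-testing.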
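
And to illustrate the integration-machine practice, here is a deliberately naive sketch of the core loop a continuous integration server runs. The “svn update” and “ant test” commands are assumptions for illustration; real servers add proper change detection and committer notification:

import java.io.IOException;

// Naive sketch of a CI server's core loop: refresh the working copy,
// run the full self-testing build, report the result, repeat.
// The svn/ant commands are illustrative assumptions, not a real setup.
public class TinyCiLoop {

    public static void main(String[] args) throws IOException, InterruptedException {
        while (true) {
            run("svn", "update");                    // pull the head of the mainline
            int buildResult = run("ant", "test");    // full automated, self-testing build
            System.out.println(buildResult == 0
                    ? "BUILD OK"
                    : "BUILD FAILED: notify the committer");
            Thread.sleep(60_000);                    // poll again in a minute
        }
    }

    private static int run(String... command) throws IOException, InterruptedException {
        return new ProcessBuilder(command).inheritIO().start().waitFor();
    }
}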

The original article:

Plan Your Testing

Summary:
Not all tests are created equal, as project teams that cannot distinguish the most important tests and selectively allocate resources to them will discover. These teams spend most of their testing time on the less important tests.

In general, the more important tests, such as integration tests that confirm the interfaces between two or more modules, usually come after you complete unit tests. Tests that demonstrate the system’s ability to handle peak loads, and system tests, are usually performed last. The errors these tests reveal are often the most serious, yet these tests are the most likely to be cut short when the schedule is crunched.

There are three key characteristics of how we “proactively” plan tests: First, planning tests before coding; second, planning tests top-down; and third, planning tests as a means to reduce risks.

  • Planning tests before coding
    If you create test plans after you’ve written the code, you’re testing that the code works the way it was written, not the way it should have been written. Tests planned prior to coding tend to be thorough and more likely to detect errors of omission and misinterpretation.
    When test plans already exist, you can often carry out tests more efficiently. First, there’s no delay. You can run tests as soon as the code is ready. Second, having the test plan lets you run more tests in the same amount of time, because you are using your time to run the tests on the plan instead of interrupting your train of thought to find test data.
    Planning tests first can help developers write the code right the first time, thereby reducing development time. (A small sketch of such a plan, written as test stubs, follows this list.)
  • Top-down planning
    Planning tests top-down means starting with the big picture and systematically decomposing it level by level into its components. This approach provides three major advantages over the more common bottom-up method.

    1. First, systematic decomposition reduces the chance that you will overlook any significant component. Since each lower level simply redefines its parent in greater detail, the process of decomposition forces confirmation that the redefinition is complete.
    2. Second, by structuring the test design, you can build and manage the tests more easily and economically. The test structure can reuse and refine the software structure, so you can test the software with less effort and less rework.
    3. Third, top-down planning creates the view you need to enable selective allocation of resources. That is, once the overall structure is defined, the test planner can decide which areas to emphasize and which to give less attention.
  • Testing as a means to reduce risks
    At each level, and for each test item, ask the following set of questions to identify risks:

    • What must be demonstrated to be confident it works?
    • What can go wrong to prevent it from working successfully?
    • What must go right for it to work successfully?

    Ensure that key software elements will function properly before building the other elements that depend on them.
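
As a small illustration of planning tests before coding, the planned cases can be written down directly as named test stubs before any production code exists. This is a hedged sketch, not the article’s own notation: the order-validation feature and the case names are invented, and JUnit 4’s @Ignore keeps the plan visible in the build without failing it before coding starts:

import static org.junit.Assert.fail;

import org.junit.Ignore;
import org.junit.Test;

// A test plan captured before coding, as named JUnit 4 stubs.
// The order-validation feature and case names are hypothetical examples.
public class OrderValidationTestPlan {

    @Ignore("planned before coding")
    @Test
    public void rejectsAnOrderWithNoLineItems() { fail("not yet implemented"); }

    @Ignore("planned before coding")
    @Test
    public void rejectsNegativeQuantities() { fail("not yet implemented"); }

    @Ignore("planned before coding")
    @Test
    public void acceptsAMinimalValidOrder() { fail("not yet implemented"); }
}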

Here is an article by Steve McConnell:
Software Quality at Top Speed

Excerpt:

Some project managers try to shorten their schedules by reducing the time spent on quality-assurance practices such as design and code reviews. Some shortchange the upstream activities of requirements analysis and design. Others, running late, try to make up time by compressing the testing schedule, which is vulnerable to reduction since it’s the critical-path item at the end of the schedule.

These are some of the worst decisions a person who wants to maximize development speed can make. In software, higher quality (in the form of lower defect rates) and reduced development time go hand in hand.

Software development at top speed

Design Shortcuts

Projects that are in schedule trouble often become obsessed with working harder rather than working smarter. Attention to quality is seen as a luxury. The result is that projects often work dumber, which gets them into even deeper schedule trouble.

Error-Prone Modules

Barry Boehm reported that 20 percent of the modules in a program are typically responsible for 80 percent of the errors.

If development speed is important, make identification and redesign of error-prone modules a priority. Once a module’s error rate hits about 10 defects per thousand lines of code, review it to determine whether it should be redesigned or reimplemented. If it’s poorly structured, excessively complex, or excessively long, redesign the module and reimplement it from the ground up. You’ll shorten the schedule and improve the quality of your product at the same time.
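
As a back-of-the-envelope illustration of that threshold, here is a tiny sketch; the module’s defect and size figures are made up:

// Tiny sketch of the ~10 defects per KLOC redesign threshold cited above.
// The defect count and module size are invented for illustration.
public class ErrorProneModuleCheck {

    static double defectsPerKloc(int defects, int linesOfCode) {
        return defects * 1000.0 / linesOfCode;
    }

    public static void main(String[] args) {
        int defects = 27;         // hypothetical defects logged against the module
        int linesOfCode = 1800;   // hypothetical module size
        double density = defectsPerKloc(defects, linesOfCode);  // 27 * 1000 / 1800 = 15.0
        if (density >= 10.0) {
            System.out.printf("Density %.1f defects/KLOC: review for redesign%n", density);
        }
    }
}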

Seven Deadly Sins of Software Reviews

Summary:

  • Participants Don’t Understand the Review Process
  • Reviewers Critique the Producer, Not the Product
  • Reviews Are Not Planned
  • Review Meetings Drift Into Problem-Solving
  • Reviewers Are Not Prepared
  • The Wrong People Participate
  • Reviewers Focus on Style, Not Substance

This discussion is very interesting:
Why do people insist on doing EVERYTHING in Java?

For any web application, the interaction between the database and the user interface is a fundamental element.
The user reads/updates/deletes/creates information through the user interface; the application handles these actions, which necessarily involve retrieving/updating/deleting/creating the corresponding data in the application’s database.
But what happens in between the database and the user interface, where the code (the Java objects) of the business layer usually lives?
Let’s examine things in order, starting with the database.
The database contains the application’s data, with values that are known but often meaningless on their own (to understand them, you generally need at least a data dictionary, a kind of metadata). This data is usually stored in no particular order (although you can always retrieve it in a given order with SQL queries).
The business layer usually stores the data (retrieved from the database) in a controllable order (a List, for example). Here the data values are known and meaningful, in the business sense that the business-layer developer must master.

A question arises: are the order and the values of the data in the database the same as those in the business layer?
The answer is usually NO, because the relational logic that databases use to store data differs from the object logic that an object-oriented programming language uses to process it. Moreover, when storing data, the DBA thinks about minimizing redundancy, optimizing query time, … (you often find ‘codes’ in the database, hence the need for a data dictionary). But when processing data in the business sense, the developer thinks about encapsulating objects in classes and representing the business logic as clearly and as manageably as possible (you find classes that ‘resemble’ real-life objects, such as contract, folder, formula, …, and methods that perform the business actions).
The result: you usually need a mapping layer between the database and the business layer. You may have heard of the DAO (Data Access Object) layer or, more recently, of O/R (Object-Relational) mapping frameworks such as Hibernate.
In short, for any moderately complex application, and whenever you want maximum flexibility in the application’s structure, you should always plan for a mapping solution between the database and the business layer.
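
To make the idea concrete, here is a minimal sketch of such a mapping layer, built around the “contract” example above; the class and method names are mine, not from the discussion:

import java.util.List;

// Minimal sketch of a DAO-style mapping layer between the database and the
// business layer. Contract and the DAO methods are illustrative assumptions.
public class Contract {
    private final long id;
    private final String holderName;   // a decoded business value, not the raw DB 'code'

    public Contract(long id, String holderName) {
        this.id = id;
        this.holderName = holderName;
    }

    public long getId() { return id; }
    public String getHolderName() { return holderName; }
}

// The business layer talks to this interface and never sees SQL; a JDBC or
// Hibernate-based implementation behind it translates rows and codes into objects.
interface ContractDao {
    Contract findById(long id);                      // one row becomes one object
    List<Contract> findByHolder(String holderName);  // a result set becomes an ordered List
    void save(Contract contract);                    // the object becomes rows again
}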

The user interface layer, for its part, presents the data as information, in a fixed order, with displayed values that do not have to match what is found in the business layer or the database. There are often also hidden values, which differ both from the displayed values and from the business-layer values.
Here we run into the same problem: a mapping layer is needed between the business layer and the user interface. But as far as I can tell, this layer is not taken seriously enough by developers. As evidence, you often find the JSP page’s display rules in the model bean or, conversely, business values exposed in the JSP code, or worse still, in the JavaScript code.
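
A minimal sketch of what that missing layer can look like: a view bean that carries only display-ready values, so neither the JSP nor the domain object holds formatting rules. ContractView, its fields, and the currency formatting are illustrative assumptions:

import java.text.NumberFormat;
import java.util.Locale;

// Minimal sketch of a business-to-UI mapping layer: a view bean exposing only
// display-ready strings. ContractView and its fields are illustrative names.
public class ContractView {
    private final String displayAmount;   // amount formatted for the user's locale
    private final String hiddenId;        // the hidden value the page round-trips

    public ContractView(long id, double amount, Locale locale) {
        this.displayAmount = NumberFormat.getCurrencyInstance(locale).format(amount);
        this.hiddenId = Long.toString(id);
    }

    public String getDisplayAmount() { return displayAmount; }
    public String getHiddenId() { return hiddenId; }
}

The JSP then only reads getters: display rules stay out of the model bean, and business values stay out of the JavaScript.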
(… to be continued)
