Tuesday, February 9, 2010

Testing and the Web


The web is another example of a disruptive technology (like RAD and XP) requiring that a lot of old lessons get relearned.
These lessons are:
• All approaches to non-trivial software development require disciplined and trained practitioners.
• Any information which isn’t written down gets forgotten.
• Users can’t read code, don’t much like diagrams, and need some specification to sign off before they’ll part with their money.
• Changes have ripple effects throughout systems.
• Developers, testers, and test managers need to have some objective way of deciding if a bug exists.
• If developers want to take a risk and try to integrate a system in a big bang, they are simply ignoring 60 years of experience, and will eventually realize that unit and integration testing reduce risk and save time.
 
Early web development was characterized by many programmers quickly hacking together simple static websites with minimal features and quite terrible graphics, using simple tools. Such sites threatened only their owners’ image. As the commercial web developed, the ghastly possibility dawned that such sites represented a commercial threat too, and some degree of formality and accountability began to reassert itself as a counterweight to the supremacy of developmental speed. In short, People Who Mattered realized it didn’t matter how fast a site was developed if it was “wrong.”
Web development is dominated by architectural concerns.
 
There are many web system types, all of which have architectures similar to the one shown above. The elements of the architecture can interact as shown in the figure.
 
The browser pages communicate with the web server by sending it details (form fields filled in, check-boxes ticked, etc.) along with a cookie which identifies the particular browser. A series of scripts held by the web server interprets the details and calls an application (legacy, COTS, or new).
The application in turn interrogates a database or a banking application (for payment and credit card details), and then returns data to the web server. The web server creates a page and returns it with a cookie to the browser.
Variations of this approach include closed intranets for suppliers to (say) car companies, or for clients of private trading systems.
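The round trip described above can be sketched in a few lines. This is a minimal illustration, not production code: the names (`handle_request`, `look_up_order`, `SESSION_COOKIE`) and the order-lookup back end are all assumptions invented for the example.

```python
# Minimal sketch of the browser -> web server -> application round trip:
# the browser sends form details plus a cookie; a server-side handler
# interprets them, calls a back-end application, and returns a page
# together with the cookie that identifies the browser.
from http.cookies import SimpleCookie
from urllib.parse import parse_qs

SESSION_COOKIE = "session_id"  # illustrative cookie name

def look_up_order(order_id):
    # Stand-in for the legacy/COTS/new application and its database query.
    return {"order_id": order_id, "status": "shipped"}

def handle_request(environ, start_response):
    # 1. Identify the browser from its cookie (or issue a new identity).
    cookie = SimpleCookie(environ.get("HTTP_COOKIE", ""))
    session = (cookie[SESSION_COOKIE].value
               if SESSION_COOKIE in cookie else "new-session")

    # 2. Interpret the submitted details (form fields, check-boxes, etc.).
    fields = parse_qs(environ.get("QUERY_STRING", ""))
    order_id = fields.get("order_id", ["unknown"])[0]

    # 3. Call the back-end application, then build a page from its answer.
    order = look_up_order(order_id)
    body = (f"<html><body>Order {order['order_id']}: "
            f"{order['status']}</body></html>")

    # 4. Return the page along with the identifying cookie.
    start_response("200 OK", [
        ("Content-Type", "text/html"),
        ("Set-Cookie", f"{SESSION_COOKIE}={session}"),
    ])
    return [body.encode("utf-8")]
```

The handler follows the WSGI calling convention, so it could be served by any standard Python web server, but the point here is only to make the cookie-plus-form-details flow concrete.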

Monday, February 8, 2010

Spiral Model

The spiral model is based on the need to iterate. It contains as many iterations as are necessary to bring a product to fruition. Each iteration requires that the participants plan, define their life-cycle, prototype, analyze risks, write requirements, build models, produce detailed designs, code, run unit and system tests, and install.
Fig - Spiral Model

The spiral model has a number of advantages:
• It is flexible and allows for multiple iterations.
• It employs prototyping extensively.
• It allows for the coexistence of other models (indeed it expects candidate models to be proposed and adopted if useful).
• It makes risk evaluation explicit.
• It acknowledges the need to validate requirements and design.
• It was originally designed with a particular need to accommodate COTS, and is therefore more amenable to software reuse.

Its dangers are:
• It is less easy to allocate phases to groups and responsibilities than in other models.
• It requires that staff are well-versed in software engineering.
• It requires much team self-discipline in the capture of emerging requirements.
• It does not acknowledge the need to have test input from the start of the project.
• It allocates particular phases to requirements definition and high- and low-level design.
• It doesn’t make the baselines explicit.
• It doesn’t allow for process decomposition.
• Much prototype code may eventually be used in the final version.
• It must be well supported by tools to work, or it will either decay or become enmeshed in the bureaucracy it was intended to minimize.
 
The implications of this for the system testing team are that:
• The status of emerging requirements must be constantly reviewed.
• The team is committed to validating both the requirements and the design.
• Any use of prototype code in the production version will require much more rigorous unit testing than is normal.

Cooper’s Stage-Gate Process Model

Cooper’s stage-gate model is a variant of the waterfall. It splits the life-cycle into six stages separated by “gates.” Each gate is a decision point. It differs from the waterfall in that the activities in each stage may be simultaneous [Cooper].

• Discovery stage: a product manager thinks of a new idea for a product.
– Idea screen: the idea is presented to potential stakeholders for their buy-in.
• Scoping stage: the market for the product is assessed and key features are identified.
– Second screen: the idea is re-presented to potential stakeholders for their buy-in, but with more-rigorous requirements and other information.
• Business case stage: in which the product, market, organization, project management and environment, competitors, budget, RoI, and legal issues are defined.
– Go to development: the moment at which the organization can commit to the large budget required for development.
• Development stage: includes requirements refining, design, code, and build. Its output is a product ready for beta testing.
– Go to testing: the moment when the testing budget and the marketing and operational plans must be committed to. It is based on the continued existence of a market opportunity.
• Testing stage: system and acceptance testing at internal and friendly customer sites. It generates a product fit for launch.
– Go to launch: the moment when marketing and training plans become operative.
• Launch the product.
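The stage/gate sequence above can be sketched as a simple state machine. The stage and gate names come from the list; `run_process` and the decision dictionary are illustrative assumptions, not part of Cooper’s own formulation:

```python
# Cooper's stage-gate process as a linear state machine: each gate is a
# go/kill decision point between two stages.
STAGES = ["discovery", "scoping", "business case",
          "development", "testing", "launch"]
GATES = ["idea screen", "second screen", "go to development",
         "go to testing", "go to launch"]

def run_process(gate_decisions):
    """Walk the stages in order. gate_decisions maps a gate name to True
    (pass) or False (kill/hold). Returns the stages actually completed."""
    completed = []
    for i, stage in enumerate(STAGES):
        completed.append(stage)
        if i < len(GATES) and not gate_decisions.get(GATES[i], False):
            break  # the gate kills or holds the project here
    return completed
```

For example, a project that passes the idea screen but fails the second screen completes only the discovery and scoping stages; the linearity of the walk is exactly what the criticisms below take aim at.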

It is easy to see a number of critical dangers in this approach:
• Half the activities are oriented to developing a business case. Since this is likely to occupy some 5–10% of the total manpower, more detail on the other 90–95% of the manpower’s activities would be useful.
• No allowance has been made for the (inevitable) requirements changes.
• Testing is relegated to the penultimate activity. The possibility that the requirements are deeply flawed will thus tend to be hidden. Similarly, the testers will not learn how to use the product until too late, causing considerable delay. The tests they prepare may thus need much rewriting.
• That a decision can be taken on the marketability of a product which has yet to enter beta testing requires enormous faith in the ability of developers. The amount of iteration between the development and testing groups is not shown, and the delays (which will also affect the go-to-market decision) can be considerable.

To make such a process work it is imperative that testers:
• Focus on the earliest access to the requirements as they are assembled.
• Get early access to prototype versions so they can prepare tests.
• Provide review and possibly modeling feedback to management so that inconsistent or missing requirements are identified as soon as possible.

Models

The Waterfall Model

The waterfall is the best-known model. It consists of five phases:
1. Requirements (in which the customer requirements are written).
2. Design (in which the high and low-level design documents are written).
3. Code (in which the code is written and (hopefully) unit tested).
4. System test (which is where we come in).
5. Installation and cutover: (in which the finished (!) system is installed, the users are trained, and the system is put to use).


This is an approach which everyone says is out of date and which, more or less, everyone uses. It suffers from the dangers that:

• Bureaucrats believe such phases are finite and cannot be iterated upon.
• It doesn’t allow for parallel activities such as prototyping or the development of user interface specifications (which in themselves require their own life-cycle) or for safety-critical system issues such as the development of a safety case.
• It makes no mention of contract preparation, project management, reviews, or audits.
• It implies that system testing starts only when coding is finished.
• It says nothing about software reuse.

It has the merit that:
• Each phase generates some baseline deliverable.
• It is well-known.
• It has been used for many years.
• It is very adaptable.
• Each process can be decomposed into others.
• You can add any process you want.

If people can simply accept that:
• Each phase may have to be repeated (as requirements change, as prototypes evolve).
• It needs to be seen in parallel with a number of other life-cycles with which it must stay synchronized.
• It can be modified for prototypes and software reuse.
• Testing input begins at least as early as requirements definition (and arguably it is as well to have the test team review the request for proposal and contract for test implications).
• Any change to requirements, design, or code must be (manually) reflected through all levels of documentation to ensure all documents are consistent (which rarely happens).

then it is perfectly usable.
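The last of those conditions, manually rippling every change through all levels of documentation, is the one most often skipped, and even a small tool can flag the gaps. Here is a minimal sketch; the requirement-ID convention (REQ-n tags in every document), the `trace` function, and the sample strings are all assumptions for illustration:

```python
# Cross-check requirement IDs between a requirements document and its
# downstream documents (design, code, tests): flag IDs that downstream
# documents reference but the requirements no longer define, and
# requirements that nothing downstream mentions at all.
import re

REQ_ID = re.compile(r"REQ-\d+")

def trace(requirements_text, downstream_texts):
    defined = set(REQ_ID.findall(requirements_text))
    referenced = set()
    for text in downstream_texts:
        referenced |= set(REQ_ID.findall(text))
    return {
        "dangling": sorted(referenced - defined),   # ripple effect missed
        "uncovered": sorted(defined - referenced),  # requirement never traced
    }
```

So `trace("REQ-1 login; REQ-2 logout", ["design covers REQ-1", "tests cover REQ-1 and REQ-3"])` reports REQ-3 as dangling and REQ-2 as uncovered. A script like this cannot prove the documents consistent, but it does make the most common inconsistencies cheap to detect.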

Life-Cycles

There are many life-cycles around. You need to be aware of their strengths and weaknesses, and of the subtle ways they can (be used to) louse up projects. A fervent belief in a particular life-cycle, untrammeled by any concern for its weaknesses, is an early danger sign in a project manager.

Remember that life-cycles only exist as models; they’re simply attempts at describing what we do.

Who Cares about Process Models?
Life-cycle or process models are only useful in that they help us think about how we develop systems.
If we fail to have any process we get chaotic. If we have the wrong process we can at least rethink it.
In the end, process models subtly influence thinking. What matters in any project is that the artefacts (specifications, code, manuals, and tests) get written, tested, and used.

Test Management Principles

I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.
Rudyard Kipling


Here are some principles acquired, like most painful experiences, when I was trying to do something else.
• If you can’t plan or process-model it, what makes you think you can do it?
• Never put into a plan what you can put into a strategy document. Only put the unchanging essentials into the strategy document. Any part of the plan which doesn’t somehow depend on the strategy document is probably not thought through properly. Any part of the strategy document which isn’t reflected somehow in the plan is a risk.
• As soon as you have finished the plan, go through it and see what you can cut out.
• Limit the things you monitor to the phase you are in, the start of the next phase, and the top-priority alerts.
• Anyone should be free to raise a bug. If bugs are duplicated, the test manager can weed out duplicates daily at the bug-clearing meeting.
• You must have access to all bugs reported from the field.
• You are paying test staff primarily for their ability to think. If they cannot or will not do this, then sack them.
• Testers need to work with the best. It’s part of your job to keep the idiots out.
• Pay your staff the compliment of reviewing at least a sample of their tests. If you can’t be bothered, why should they?
• Testers can concentrate on details: you must be able both to take an overview and to concentrate on details.
• Testers know the value of everything and the cost of nothing: you must know both.
• If your processes are wrong you’ll be forever fighting them.
• The more your processes are tool-based, the fewer documents you’ll need: you only write a document because there’s no tool capable of holding the information.
• If no one really needs a document it won’t be read. A document is there to remind you, to tell you how to do something, to tell someone else something, or to help you think. If it doesn’t, don’t write it.
• Do not accept the unacceptable. Even if your customers do. Because someday they won’t. Try to be somewhere else when that happens.

Software Testing

We all know what testing is. We’ve been doing it for years, in and out of school. We took tests, and
teachers gave us marks and told us we were good, bad, or indifferent. People had got the idea that tests might be used to predict things about other people.
In 1904 a Frenchman called Alfred Binet was given the task of deciding whether or not children were subnormal. Monsieur Binet was a member of a committee of Eminent Frenchmen, each eager to propose his own Theory of Child Intelligence and How It Can Be Determined. M. Binet listened, extracted from each Eminent Frenchman a set of tests, added many of his own invention, and tried them out on sets of children. From the behavior of the children he decided which tests were useful. He then threw out those tests which failed to predict successfully, and tried again.

What did these tests prove? Nothing. They indicated the level of a child’s intelligence at the time the test was administered. They did not prove that the child had great ability or potential. They have been given to an enormous number of children. They are still in use today.

Mr. Cyril Burt was a psychologist interested in predicting the ability of children. The British government wanted to reduce the cost of education. The two reached a position whereby Mr. Cyril Burt provided (much-questioned) figures that “proved” it was possible to predict a child’s ability, and the British government imposed a test on all British children at the age of 11 (called the 11-plus), which determined whether a child would go to a Grammar school (for the brighter) or a Secondary-Modern school (for the less bright). Many studies have shown that the tests were very bad predictors of a child’s ability and that they blighted the lives of a generation of children. Britain still has a terrible shortage of graduates and technicians, but Mr. Cyril Burt was knighted.

Conclusions

1. Theories can be disproved.
2. Theories can be very useful and good predictors within a limited range of environments.
3. Tests only work in a limited range of environments.
4. Conclusive tests are very difficult to write.
5. There is always someone who believes despite the evidence.
6. Test the tests before you use them.
7. Faking test results can be a way to social advancement — but destroys your reputation.