Computer Science 121 - Project 2
Design, Review, Implementation and Testing
Fall 2009
Critical Dates
- Preliminary proposals and project plans - midnight Sunday 11/08.
- Final reports and post-mortems - midnight Sunday 12/13.
- All other due dates are under your control.
Project Description
- Introduction
- Overview and Grading
- Project Details
- Rules and Submission
1. Introduction
The first project went from concept through proposal.
The second project deals with the design and implementation phases of a project:
- the development of a design to meet component specifications.
- the preparation of a design document.
- taking a design through a formal design review.
- the preparation of a test-plan.
- taking a test plan through a formal review.
- acting as a reviewer in design and test plan reviews.
- implementing the design and test plans
- post mortems ... on your experiences with both the
original project 2 and this revised one.
What you choose to design and build is pretty much up to you.
You may choose to continue building components you started with as
part of the greater game, some part of your original game proposal,
or something entirely different. The only constraints I intend to
impose on your choice are to ensure that it is of sufficient
complexity to require meaningful design and test analysis, where
considered decisions must be made, and where reviews are likely to
add significant value.
I suspect that these requirements can be met by the game (story arcs) design ...
but if someone wants to do this, we will have some details to work out.
It is strongly encouraged, but not actually required, that you carry
these activities out in teams (e.g. 4-5 person groups) where your individual
components will combine to implement some larger and more interesting
functionality. From a learning perspective, working on a team project
will require you to manage evolving specifications and integration with
work done by other people. A team of people working on related components
provides a natural panel of well-informed people to review your designs
and test plans.
It will also help you by providing other people
to pace your efforts and help you with problems.
If you prefer to complete these activities as an individual, please come
speak to me about your ideas.
As with the first project, the primary goal is to give you real
experience with the skills, processes and work-products that we
will be discussing in this class, but we are also looking to
develop your abilities to intelligently plan a group endeavor,
effectively manage it, and critically evaluate the completed effort.
These skills will serve you well in clinic, and (more importantly)
in the larger projects that you will surely prosecute in your careers
to follow (whether commercial or academic).
In the first project we gave you due dates for each of the major
deliverables. In the second project, you will be responsible for
deciding how to spread the required work over the available time.
The only externally imposed dates are for your initial proposal
and project plan, and for your final report delivery. Everything
in between is under your control.
The following table describes the general model we will use to
compute your grades for this project.
If you have work that you completed for the greater-game, and would like to
have graded and averaged into your final grade, come and talk to me about it.
3.1 Design Documents
The majority of the work involved in project 2 will be the
development and review of designs and test plans.
Each student will prepare a component design to satisfy
provided specifications, package it for design review,
revise it in accordance with the results of that review,
and get the revisions approved by the review team.
- presenting designs for review is the most
effective means of finding and eliminating
design errors.
- the ability to envision solutions to complex
problems is only useful if you can then accurately
describe those solutions to other people.
- the ability to describe complex things in a
comprehensible manner is a skill that has to
be developed ... and preparing material for
design reviews is an excellent way of developing
those skills.
You will find that preparing designs and test-plans for review will:
- force you to more deeply consider your designs and testing plans.
- greatly reinforce the elements of good design.
- effectively exercise both your envisioning and descriptive skills.
- provide you with detailed feedback on where
your descriptions were unclear.
Reviewing plans developed by others will:
- develop your ability to read descriptions of complex software.
- exercise your skills at critically analyzing proposed systems.
- force you to learn how to clearly articulate concerns and issues.
- give you many demonstrations of the difficulty and importance of clear descriptions.
- give you the opportunity to learn design and communications techniques from others.
For all of these reasons, every student will be required to
prepare a design plan and a test plan for review, and to
participate in a graded design review process.
3.1.1 Initial Proposal
Each team will:
- come up with an idea for a set of related modules that
they want to implement, providing at least one "interesting"
module for each person on the team to design, develop a test plan
for, and implement.
Note that the integration of those modules must work (in the
sense that they exchange calls/data/services) but there is
no requirement that it deliver useful functionality to anybody
else. All you will do is initial implementation, integration,
and bench testing.
- write up a brief description of the overall project, and the
individual modules to be developed.
- prepare estimates for:
- (as a team) developing detailed specifications (functionality
and the external interfaces) for each of the key modules.
- (as individuals) preparing, reviewing, and revising detailed designs for each key module.
- (as individuals) preparing, reviewing, and revising test plans for each key module.
- (as a team) developing, reviewing, and revising an integration and system testing plan.
- (as individuals) implementing and unit-testing each key module
- (as a team) integrating and system testing the modules developed by the individuals.
- (as a team) writing up a post-mortem for project 2.
- prepare a preliminary schedule to accomplish all of this work by midnight, Sunday 12/13.
- combine all of these into a proposal to be submitted by midnight, Sunday 11/08
(which I will review and approve by Tuesday 11/10).
The only requirements I would like to put on this work are:
- each student must have a component of sufficient complexity to be worth
designing, design reviewing, developing a test plan, and reviewing the
test plan.
- the modules developed by the various team members must, in some sense,
combine to provide some demonstrable and testable higher level functionality
(which means that they must exchange calls, data or services).
Beyond meeting these requirements, you will not be graded on the amount of work
you have to do to implement this project.
If you would like to meet the project 2 requirements in a different way (e.g. with
work you did for the greater game, where the notions of implementation and integration
are not well defined) come and talk to me about your ideas first.
These initial proposals will be graded on the basis of:
- 25% do they supply all of the above-specified information.
- 25% how well they meet the "sufficient complexity" requirement.
- 25% how well they meet the "integratability" and "system testability" requirements.
- 15% interesting application, or good use of previous work
- 10% overall clarity, readability, and specificity
The grading for the estimates and schedule is discussed under section 3.4.1 (management).
3.1.1+ Architecture and Component Specifications
Between the time you develop your concept, and the time you have component
specifications to which you will design, you must develop an architecture
and component specifications. Had we not been working on the game for the
last few weeks, you would have prepared and reviewed these as graded
deliverables. As a concession to time, I am eliminating these as
graded deliverables ... but you will still have to spend some time
working up, reviewing, and refining an architecture in order to develop
component specifications. You can shorten this step, but you cannot
eliminate it (and I would encourage you not to try to do so :-)
3.1.2 Component Design
Each student will prepare a design document for the
component that they will define, implement and test.
The design presentation should address two goals:
- present the detailed interface definitions,
component specifications, and proposed implementation
in sufficient detail to permit reviewers to examine
their correctness (and raise issues) before you commit
to a particular implementation.
- describe the interfaces, component specifications,
data structures, algorithms, and proposed plan of
implementation in sufficient
detail to enable a developer (familiar with the
domain and tools) to sit down and implement it
(without having to solve additional problems).
A single document can almost always serve both purposes.
While rationale is not (technically) considered part of a design,
an overview of the problems and considerations that have guided
the design process can be extremely valuable to both reviewers and implementers.
In some cases, the actual work to be done may be trivial ... but
the analysis that led to the definition of that work is what
must be carefully scrutinized.
What must be specified?
- all relevant external functionality, specifications and requirements for
your component, and its role in the overall architecture (~20%)
These are the standards against which your design will be evaluated.
- all external interfaces (~20%)
- method/routine signatures
- service protocols and messages
- external data formats
These should be specified in sufficient detail to
- enable reviewers to assess the reasonableness of those interfaces.
- enable reviewers to determine whether or not you can correctly
implement the required functionality within those interfaces.
- enable a would-be partner to start designing components
to interface with yours.
They will be evaluated in terms of how complete, detailed, clear
and reasonable they are, and how well they respond to the requirements.
- key external dependencies (~10%)
Components, frameworks or services upon which your design is based,
and that must be present for your component to work, along with
a discussion of the implications of those dependencies for the
development process and how they will be managed in your development
and testing plan.
- key internal data structures (~10%)
Often our most important design decisions are not about code,
but data.
The data structures we use to represent information determine
the manners in which that information can be used, and impose
numerous constraints on the code that will maintain and manipulate
them.
- internal structure (the architecture within the component ~10%)
A decomposition of the component into the sub-components
(classes, routines, etc) from which it will be implemented,
along with external specifications for each of those.
- non-obvious algorithms (~10%)
There is no need to spell out the details of algorithms that
everybody already knows, but any non-obvious algorithms must
be spelled out in sufficient detail to enable reviewers to
assess the appropriateness of the choice, and the correctness
of the proposed implementation. Note, however, that you are
preparing this input for a "design review" and not a "code review".
- "interesting" issues (~10%)
A few designs are simply "the obvious top-down-decomposition" using
"the obvious algorithms" to implement "the obvious functions".
Usually, however, there are performance, correctness, robustness,
interoperability, or other constraints that require cleverness
and/or trade-offs. These are the places where many of the most
interesting design decisions are made, and so they must be
described for review.
Were significant challenges clearly recognized
and met? This may call for you to present an
analysis of those issues and your rationale for
how they were addressed. This is not merely a
matter of "showing your work". The reviewers
need to see your understanding of the problem,
and to understand how this has motivated your solution.
(Note that it is legitimate to go into a design
review with known open issues ... but if so, those
issues must be well spelled out ... and this is
the place where those discussions will be evaluated)
One particularly "interesting" issue is testability. If any of
your components has complex interfaces or hidden mechanisms, your
design should include a brief discussion of your proposed testing
strategies. Testability problems are much more easily addressed
at design time than later.
- packaging for review (~10%)
A table of contents for the review package, suggesting an order
of review, accompanied by sufficient context and background to
enable reviewers to understand and evaluate the proposed design.
How completely specified?
- This depends on the experience level of the people for whom the design is being prepared.
In some situations it may be sufficient to merely name an algorithm (e.g.
Discrete Cosine Transform), protocol (e.g. NFSv3) or framework (e.g. Kerberos).
People familiar with these can be assumed to understand the costs,
benefits and limitations associated with those decisions, and will consider
the implementations to be well known.
For people who are not familiar with those tools, it might be necessary to
direct them to extensive background reading, enumerate the most important
interfaces, and summarize or even detail the key algorithms.
- This depends on the types of problems you are hoping to find in the review
If you are hoping to find problems in your choice of algorithm, you
must describe the algorithm you have chosen and your reasons for choosing it.
If you are hoping to find problems in error handling, you must thoroughly
enumerate all possible error cases, and describe how you will detect and
respond to each.
You are preparing a design to be reviewed by the other students in your team.
Describe things in sufficient detail to enable them to understand it, but don't
waste your time or theirs with things that should be obvious to all of you.
You are hoping that they can help you find the kinds of design
mistakes that you are likely to make. This means that you must elaborate your
design in sufficient detail to give them a chance to see your "mistakes".
You need not show the details of obvious or trivial algorithms. This is not a
"code review" but a "design review".
3.1.3 Design Updates
You will revise your design to respond to all of the
must-fix issues raised in the design review, until you
have a final approval from the review team. (note, if
no revisions are required, these are free points)
The grading of these updates will be based on
- ~25% response to must-fix issues
The degree to which you respond to and resolve
all must-fix issues raised in the review, and the
quality of those responses.
- ~10% response to should-fix issues
The degree to which you respond to and resolve
all should-fix issues raised in the review, and the
quality of those responses.
- ~10% documentation and process
Documented communication between the submitter and the
review team, and approval of the final design document.
- ~55% quality of the resulting design
This is your chance to earn back points you may have
lost on your initial design by fixing the problems
that cost you those points.
3.1.4 Component Test Plan
Considering the component design, you will prepare a risk analysis,
and propose a comprehensive testing plan (goals, risks, testing
strategy, suites, test cases, automation tools) for your component.
You will submit this test plan for review, revise it in accordance
with that review, and get the revisions approved by the review team.
Deciding on an appropriate plan is not at all a simple thing:
- if you propose to do too much, the work will become unaffordable.
- if you propose to do too little, significant risks will be
unaddressed, with the likely result that the whole schedule
will fall apart when the problems start emerging.
Therefore, test plans often begin with a risk analysis.
Looking at the proposed design, assess:
- the likelihood of your making mistakes that would result
in run-time errors. Where the likelihood of errors is low,
(or the consequences of those errors negligible)
it doesn't make sense to spend a lot of effort looking for them.
- how obvious the presence of those errors would be in
basic component use. Where errors will be obvious, a
few simple tests will probably be adequate to find them.
- how easy it would be to track down those errors if the
component failed in basic use. Where failure modes are
subtle or complex, it may be worthwhile developing more
comprehensive tests for each individual piece of
functionality.
- how easy or difficult it would be to convince yourself
of the correctness of the component's implementation.
Where there is functionality whose correctness would not be obvious
from direct exercise (either for combinatoric or hidden-mechanism
reasons), discuss what makes it difficult to exercise or examine.
NOTE: The best way to deal with such problems is to examine
the overall testability of non-trivial mechanisms in
the design review. You are, in fact, welcome to propose
your design and test plans together, and have both reviewed
at the same time.
These assessments are the rational (expectancy/cost) basis for your testing plan.
Areas that are identified as simple and low risk can be tested
by a small number of simple test cases. For areas where complexity
and risk are high, you will be able to propose (and justify the
adequacy of) a testing plan that directly addresses the risks
(as best you understand them). Your goal is a test plan that
is demonstrably adequate but not wasteful.
This risk analysis will be graded (~20%) on the basis of the
completeness of your analysis (covering most likely problems),
the depth and reasonableness of your assessments, and whether
or not you derive useful conclusions about where you need to
focus your testing efforts.
After your risk analysis, I would like to see a high level testing
strategy, in which you lay out:
- each general class of functionality that you propose
to test.
- the general approach you propose to use for testing it
(general types of test cases, automation framework)
- why you believe the proposed testing to be sufficient
Your testing strategy will be graded (~20%) on the basis of its
completeness (does it cover all component functionality?), the
confidence it is likely to yield about the correctness of your
component, the practicality of your approach, and the clarity
with which you argue for the correctness of this proposal.
I would then like to see detailed test plans (~40%), where you
- enumerate specific test assertions that come (black-box)
from the specifications and requirements.
- assess the degree to which all components can be adequately
tested by black-box tests, and where these are inadequate
enumerate specific white-box test cases. If no white-box
testing is appropriate or needed, explain why not.
- describe specific automation tools/frameworks that will
be used to implement these test cases ... or explain why
such automation is impossible/impractical.
Your detailed testing plans will be graded on the basis of completeness,
response to your risk analysis, congruence with your testing strategy,
the clarity with which you describe the assertions to be tested,
how functionality will be exercised, how you will ascertain correctness,
and the reasonableness of your automation plan.
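To make the notion of a specific test assertion concrete, here is a minimal sketch in Python (the component, its interface, and the assertions are all hypothetical, not part of any assigned module) of black-box test cases derived purely from a specification:

```python
# Hypothetical example: black-box test cases for an imaginary Stack
# component, written against its specification only (no knowledge
# of the implementation's internals).

class Stack:
    """Stand-in implementation; in practice you test the real component."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()
    def size(self):
        return len(self._items)

def test_lifo_order():
    # Spec assertion: pop returns items in last-in, first-out order.
    s = Stack()
    s.push(1); s.push(2); s.push(3)
    assert [s.pop(), s.pop(), s.pop()] == [3, 2, 1]

def test_pop_empty_is_detected_error():
    # Spec assertion: popping an empty stack is a detected error.
    s = Stack()
    try:
        s.pop()
        assert False, "expected an error"
    except IndexError:
        pass

test_lifo_order()
test_pop_empty_is_detected_error()
print("all assertions passed")
```

Each test function corresponds to one enumerated assertion from the plan; an automation framework such as unittest or pytest could drive the same assertions.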
All of this must be packaged for review (~10%) with
a table of contents for the review package, suggesting an order
of review, accompanied by sufficient context and background to
enable reviewers to understand and evaluate the proposed plan.
Note that providing this context by reference to other documents
is completely acceptable.
And the final 10% of your grade will be based on the overall
practicality of this plan and the confidence it inspires.
How much detail?
Specifying test cases in too much detail takes a great
deal of time (to fill out some standardized form) while
doing little to improve the likelihood that the test plan will
be correctly implemented.
Specifying test cases in insufficient detail leaves it
unclear what test cases need to be implemented and what
each needs to do ... increasing the work that has to be
done by the implementor (to interpret the instructions)
and leaving room for important tests to be "missed" in the
interpretation.
You do not want to make either of these mistakes.
- I do not need you to fill out a standard form description for
every test case to be prepared.
- I do need you to enumerate every test case (either explicitly
or by a clear generation rule) that is to be implemented.
- I do not need you to spell out all of the (obvious) details of a
correct implementation.
- I do need you to make it clear what is to be tested, under
what circumstances, and how you will assess the result.
NOTE: the design document can be required reading for all
test plan reviewers, and there is no need to repeat
information already well covered in the design document.
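As an illustration of enumeration "by a clear generation rule", a few lines can define a large, well-specified set of test inputs; the routine under test (parse_date) and its boundary values here are hypothetical:

```python
# Sketch: enumerating test cases by a generation rule rather than
# filling out a form for each one. The routine under test and the
# chosen boundary values are hypothetical.

from itertools import product

years  = [1899, 1900, 2099, 2100]   # just outside / inside a supported range
months = [0, 1, 12, 13]             # invalid and boundary months
days   = [0, 1, 28, 31, 32]         # invalid and boundary days

# Rule: every combination of boundary values is one test input.
test_cases = [f"{y:04d}-{m:02d}-{d:02d}"
              for y, m, d in product(years, months, days)]

print(len(test_cases))  # 4 * 4 * 5 = 80 cases from one rule
```

The rule itself (and the reasoning behind each boundary value) is what belongs in the plan; the 80 resulting cases need not be listed individually.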
3.1.5 Test Plan Updates
You will revise your test plan to respond to all of the
must-fix issues raised in the review, until you
have a final approval from the review team. (note, if
no revisions are required, these are free points)
The grading of these updates will be based on
- ~25% response to must-fix issues
The degree to which you respond to and resolve
all must-fix issues raised in the review, and the
quality of those responses.
- ~10% response to should-fix issues
The degree to which you respond to and resolve
all should-fix issues raised in the review, and the
quality of those responses.
- ~10% documentation and process
Documented communication between the submitter and the
review team, and approval of the final test plan.
- ~55% quality of the resulting test plan
This is your chance to earn back points you may have
lost on your initial test plan by fixing the problems
that cost you those points.
3.1.6 Integration & System Test Plan
In addition to your individual component design and test plans,
each team will be asked to write a System Integration and test plan:
- how will you determine that component A is likely to work
with component B while they are being developed independently
(this will probably show up in the unit test plans)?
- which components will you combine when, how, and how will
you determine that they are working together?
- when you are done, how will you verify that the entire assembly
of components works?
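One common way to answer the first question, sketched here with entirely hypothetical component names, is to unit-test each component against a stub that honors the agreed interface, so the interface contract is exercised before the real partner component exists:

```python
# Sketch (hypothetical names): unit-testing component A (GameEngine)
# against a stub of component B (ScoreStore), so the agreed interface
# can be exercised before B is implemented.

class ScoreStoreStub:
    """Stub honoring the agreed interface: record(name, score) / top()."""
    def __init__(self):
        self.calls = []
    def record(self, name, score):
        self.calls.append((name, score))
    def top(self):
        return max(self.calls, key=lambda c: c[1]) if self.calls else None

class GameEngine:
    """Component A: depends only on the ScoreStore interface."""
    def __init__(self, store):
        self.store = store
    def finish_round(self, player, points):
        self.store.record(player, points)

store = ScoreStoreStub()
engine = GameEngine(store)
engine.finish_round("alice", 42)
engine.finish_round("bob", 17)
assert store.top() == ("alice", 42)   # interface contract exercised
print("interface contract holds")
```

When the real ScoreStore arrives, the same tests can be re-run against it as an early integration check.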
Your integration and system testing plan will be graded on the basis of:
- ~15% a clear enumeration of the components and interfaces
being integrated, and the functionality across those
interfaces.
- ~15% the degree to which this plan enables pre-integration
unit testing of all components and functionality.
- ~15% the completeness and reasonableness of the plan for when
and how the integration will be performed.
- ~40% the plan for how the whole system will be tested
(assessment of testability, completeness and adequacy
of the proposed testing regimen, the clarity and
reasonableness of the proposed test cases).
- ~15% overall confidence and practicality of the plan.
3.2 Reviews
Every student will be required to take part in at least one
graded design/test review process (similar to the architectural
review performed in class).
3.2.1 Review notes
Prior to each review, each of the reviewers will be required
to study the plans to be reviewed, and prepare written comments.
These will be graded on the basis of:
- 40% the depth and completeness of study to which they attest ...
that you read it all, understood what you read,
and noticed the things you would have been expected
to notice. Valid questions about basic functionality
and relationships, or missing pieces and cases are
what we are looking for, along with significant
algorithmic issues.
- 40% the appropriateness and clarity of the observations ...
your notes show that you thought about the problems,
considered their implications, and took the care to
articulate the issues in sufficiently clear and specific
terms to enable correction, or at least exploration of
the issue.
- 20% organization for efficient discussion ...
comments in clear sections, organized according
to the proposed review order ... or, where
your comments did not fit the agenda, this
was noted.
3.2.2 Review process
Each team will conduct a formal review, of each submitted design
or test plan. One of these reviews will be observed and graded
(as a process).
- 10%: agenda and flow ... confirmed that all reviewers
were prepared, reviewed the scope, order (and perhaps
conduct) of the review, and then proceeded to
conduct the review in a methodical order
that honored the stated scope.
- 25%: issue discussions ...
all had an opportunity to inject their concerns,
there was adequate discussion of questionable points,
while avoiding significant rat-holes and digressions,
and most issues reached consensus.
- 25%: adherence to protocol ... focus on the product and
substantive issues, raise (but do not try to resolve)
design issues, avoid trivial and stylistic issues,
with all reviewers prepared and participating constructively.
- 25%: quality of the review ... depth and completeness,
was it a good use of the reviewers' time, did it
add value to the component?
- 15%: issue dispositions ... all discussions resulted
in clear and reasonable dispositions (categorized
as a defect or issue, major/minor/advice, etc).
Also, the final project disposition was determined
(accepted, accepted w/following changes, resolve these
issues and return, etc).
3.2.3 Review Reports
Each review (not merely the graded one) will produce a written
report. It should be written by the person who acted as the scribe
for the review. The entire review team will receive a grade
for it, AND the scribe will receive a primary-authorship work
quality assessment as well. Thus, you probably want to have
different people act as scribe for each review.
The report will be graded on the basis of:
- 15% form: who, when, where, scope
- 35% completeness: captures all significant decisions
- 35% clarity: all issues and dispositions (e.g. must-fix,
advice) are clearly captured.
- 15% results: a clear statement of approval, rejection,
or conditions and what the required action items are
to complete the process.
3.3 Implementation
Each member of the team will implement their component design and test plan.
Then the components will be combined (as described by the integration plan),
tested (as described by the system testing plan), and demonstrated for
the professor and/or grutors.
3.3.1 Implementation
Each implementation will be graded on the basis of:
- 30% did you implement your final design
- 20% did you implement your final test plan
- 20% did you clearly pass those tests
- 10% implementation code quality
- 10% test code quality
- 10% readability
3.3.2 Integration and System Test
The final integration will be graded on
- 20% did you integrate
- 20% did you integrate according to your plan
- 20% did you implement system test plan
- 20% did you clearly pass those tests
- 20% demonstration of working components
3.4 Management
3.4.1 Plan & Maintenance
Each team will submit, along with their initial proposal, an initial plan
for the work to be done in project 2. This plan should include:
- a breakdown of all of the project 2 work into individual
tasks and group meetings.
- specific owners and due-dates for each task or process.
As the project proceeds, the task list and definitions will probably
change, and issues will arise. As issues arise, they should be
documented and the plan revised (all under subversion control).
This management work will be graded on the basis of:
- 30% completeness, specificity, and plausibility of the initial plan.
- 20% extent to which issues were promptly recognized and responded to
- 20% quality and effectiveness of the responses to those issues
- 20% effectiveness of communication and coordination within the group
- 10% extent to which the plan was kept up to date, and the issues documented
3.4.2 On-Time, Issue & Status reporting
Each individual will be graded on:
- the degree to which they are able to do work within the
original estimates, and according to the (checked in) plan-of-record
- the regularity and informativeness of their written status
reports to the team.
3.4.3 Post Mortem
You will, as a team, review all aspects of project 2 (both
your activities in the greater-game project, and these more
limited activities).
One of you will then summarize that process into a post-mortem
analysis report. It should specifically address:
- the organization and operation of the team(s) you were on
in the greater-game project.
- the overall organization and operation of the greater-game
project.
- the process of selecting or defining the functionality
you decided to implement.
- the process of developing/negotiating module specifications.
- the process of submitting your module design to review.
- the process of reviewing other peoples' designs.
- the process of developing a unit-test plan.
- the process of developing an integration and system-test plan
- the actual component implementation.
- the actual integration and system testing.
- project 2 as an educational exercise.
You will be graded on the basis of:
- 50% whether or not you meaningfully discuss each of the required activities.
- 25% whether or not you identify all of the important incidents.
- 25% the extent to which you are able to derive useful lessons
(and good future advice) from those experiences.
Note that if you make no mistakes, you will not be able to earn any points
on the post-mortem. Fortunately, no teams have ever found it necessary
to deliberately make mistakes in order to have something to analyze.