CS181AA Projects (Spring 2024)

This course is about software project skills other than programming (e.g. concept development, requirements, architecture and design, reviews, testing, and management). You will be asked to form teams, and come up with a single (large) project concept. Then, over the course of the semester, you will work on different aspects of that one project:

Due Date  Assignment  Summary of Activities
Phase 1: Concept Development, Requirements and Proposal
Sun 1/21 1a. Team, Concept, Plan Form teams, identify a preliminary concept, write plan for turning it into a proposal.
Sun 1/28 1b. Competitive Research Research existing products in this space, position your proposal against this field, and develop a competitive value proposition.
Sun 2/4 1c. Requirements Development 1. Create a concept presentation to introduce people to the type of product you are exploring
2. Identify and characterize potential users
3. Conduct interviews to gather requirements
4. Analyze and report on the results.
Sun 2/11 1d. Final Proposal 1. Combine all of the above into a complete project/product proposal (what will be built, why you will be successful) suitable to be submitted for funding/approval.
2. Review the processes you followed in this project to see what lessons you can learn for how to do things more effectively in the future.
Phase 2: Architecture and Review
Sun 2/18 2a. Plan and Preliminary Architecture 1. Develop a plan for this project, dividing the work over your team members and the available time.
2. Develop and document an architecture and high-level component specifications for your project. This includes doing any required research and prototyping to address critical questions.
Sun 2/25 2b. Architecture Review 1. Study another team's architecture, and prepare notes for a design review, as they will study and prepare for a review of yours.
2. Conduct a design review with that other team, as they will with you.
3. Write up a report of the review.
4. Continue to work any issues with the team that raised them.
Sun 3/03 2c. Final Architecture 1. Revise your preliminary architecture based on the results of your review and investigations, and submit a report on the identified issues and their resolutions.
2. Prepare and submit a final architectural proposal.
3. Review the processes you followed in this project to see what lessons you can learn for how to do things more effectively in the future.
Phase 3: Component Specifications, Design and Test Plan
Sun 3/10 3a. Component Selection and Specifications 1. Select the system components to be designed and implemented
2. Select an owner for each component
3. Develop a schedule and assign responsibilities for the remaining phases
4. As a team, elaborate the architecture above the chosen pieces to develop detailed specifications for each.
5. As individuals, write up detailed specifications for your assigned component.
3b. Component Design and Test Plan 1. As individuals, prepare a detailed design for your assigned component.
2. As individuals, prepare detailed unit-testing plans for your assigned component.
Sun 3/31 3c. Design Review notes, meeting and report. 1. As individuals, package your specifications, design and test plans for review.
2. As individuals, read each package and prepare notes.
3. As a group, review each package, and write a report for each review.
Sun 4/07 3d. Final Specification, Design, and Test Plan 1. As individuals, revise your specifications, designs, and test plans to address all important issues raised in the review.
2. As a team, review the processes you have followed, and write up a post-mortem report.
Phase 4: Implementation and Testing Sprint
Fri 4/26 4. Final Reports 1. Each of you will implement the component you designed in phase 3.
2. Two of you will spend (at least) one session doing pair-programming and write up a brief report on the experience.
3. (at least) one of you will develop your code in a Test Driven Development fashion (implementing and running tests as you complete each routine or feature) and write up a brief report on your experience.
4. (at least) one of you will implement your component and then submit it to code review by other members of your team and produce a report from that review.
5. Each of you will implement the component test plan you designed (in phase 3) and use it to validate the correctness of your implementation.
6. As a team, design a demo that shows your (independently implemented) components working together.
7. As a team, prepare and present a brief sprint review and demo.
8. As a team, review the processes you have followed, and write up a post-mortem report.

Project Phase 1
Concept Development, Requirements and Proposal

Introduction

What is the difference between a concept and a proposal?

Before a grant is given or a new project is funded, a clear proposal must be developed describing the work to be done, why it is worth doing, and why the proposed effort is likely to be successful. There are many steps on the path from concept to proposal.

The primary goal of this project is to give you real experience with the development of a concept, the gathering, organizing and prioritizing of requirements, and the development of a proposal for a software product. A secondary goal is to develop your abilities to intelligently plan a group endeavor, effectively manage it, and critically evaluate the completed effort.

You are to form small (ideally 4-person) teams and come up with a concept for a new or improved software product. You will spend the next few weeks developing that concept into a proposal.

There are multiple phases to this project, each of which has its own goals, processes, and deliverables: Project 1 Activities
Phase Assignment Value
1a Concept & Initial Plan 10
1b Competitive Research and Positioning 5
1c Preliminary Requirements 5
Concept Presentation 10
Requirements Elicitation 15
Report and Requirements Analysis 10
1d Final Proposal 10
Post-Mortem Report 10
management 5
individual contribution 20

P1A - Concept and Plan

Every project begins with two documents:

A concept document is a very brief description of what you propose to do and a justification for why it is worth doing.

The Management Plan is the living document that lays out the plan du jour, its progress, and its evolution over the course of the project.

P1A.1 Concept

Write up a brief (e.g. no more than one page) description of your idea, including justifications for why you believe it to be:

In order to exercise a full range of software engineering skills (e.g. competitive analysis, requirements, research and prototyping, architecture, etc), the project you propose must be a fairly large one. Many acceptable proposals are too large to be implemented by four people in one semester. Fortunately you will only be required to implement a few pieces of the whole product.

In choosing your project, make sure that it is one that will enable you to meet all of the future project requirements. Your project must ...

  1. be a software development project (designing and implementing a moderately large amount of code). A research project, user interface, web site or extending existing code will not do.
  2. be in a space where there is some existing software that you can research, and against which you can position your proposal.
  3. have potential users from whom you can gather (a few pages of) functional requirements.
  4. require some architecture ... typically this means four or more distinct functional components (other than the platform on which they execute) for which you can specify functionality, interfaces, and the means by which they interact to provide the desired service. Whether these components interact via calls, client-server protocols, or through shared data is entirely up to you. It is acceptable if one or two of these components already exist (e.g. in standard toolkits or services) ... but you must design the architecture. Creating a new plug-in for an existing architecture is unlikely to meet this requirement.
  5. have multiple (e.g. at least one per person) moderately large and complex (e.g. a few hundred lines of non-simple code) components that you are capable of building, testing, and combining to yield demonstrable product functionality.
To enable me to assess how well the proposed project will fulfill these requirements, you should briefly (e.g. 1-3 sentences per component) describe each of the major run-time components that you expect to implement (or use).

Your initial concept will be graded on the basis of:

P1A.2 Management Plan

Most of the individual sub-tasks associated with this project can be done in a couple of hours by one or two people. But there are a great many of these tasks to be performed, and (you will discover) not a great deal of time in which to get them done. The only way you will succeed is if you have a plan (who is going to do what, when, and then who will do what with it) for the entire project (from initial concept through final proposal and Post Mortem report).

Each team will prepare a task breakdown, identify the dependency relationships between tasks, assign an owner and due date to each sub-task, and schedule regular reviews of both work-products and progress (to allow adequate time to deal with the problems that will happen).

The keys to ensuring problems are detected (while there is still time to fix them) are regular communication, and continuous status tracking (of actual progress vs the plan). A good management plan will include regular (e.g. daily) status checks:

You can prepare and submit this plan in any form you like (e.g. perhaps a Google Sheet), but the sort of information I had in mind was:

Task/Deliverable | Due Date | Owner | 1st Draft | Review Due | Final Version | Dependencies | Risks | Comments
concept | 9/9 | Algernon | 9/6 | 9/6 | 9/7 | none | it will be too hard to spec/design | discuss implementability in our review
management plan | 9/9 | Zebulon | 9/7 | 9/7 | 9/8 | careful reading of assignment | leave something out | multiple people read assignment, review against deliverables and grading standards
list of competing products | n/a | Xenophon | 9/9 | 9/10 | 9/11 | concept | miss important competitors | review list before continuing
competitive research | 9/16 | Xenophon | 9/12 | 9/14 | 9/15 | list | miss important features | first review is outline of key features
...

   MEETINGS AND PROGRESS MONITORING:

        We will have a face-to-face meeting every Tuesday after class to review
        our plan for the week.

        We will have a 5 minute chat each day at 8PM for a quick status check.
        Other working meetings will be scheduled as needed.

        If something goes wrong, send email to the rest of the team THAT DAY.

 Zebulon will ensure that all of these are added to minutes.txt
 on a daily basis.

Your initial Management Plan will be graded on the basis of:

Maintain your concept description with history (e.g. ASCII text on Github or in Google Docs). Depending on format, you might prefer to keep your plan in Google Docs, documenting the changes as you go. You will probably be updating them daily, and we will be reviewing this history.

When you are ready to submit them for grading:

P1B - Competitive Research and Positioning

Before building a new software product, one should first study the products that are already available:

Do research to identify existing products in your space (or the closest existing products to a new space), their capabilities, strengths, and weaknesses. The existing products might be commercial products, private projects, or open source efforts. Identify multiple ways in which you could significantly improve on these products (if you cannot significantly improve on existing products, then there is no need for a new one). This research is not merely perfunctory:

Start with a brief overview of your product-space, the way(s) in which you searched for comparable offerings, and your rationale for choosing the ones you chose to study more deeply.

Write up a brief description of the existing products, how they arose, how they are used, and notes on their key strengths and weaknesses. Note the (positive or negative) lessons you can learn (about either features or implementation approaches) from each of these products. Write up a list of feasible and valuable improvements that you could reasonably make in a new product, and briefly justify their value and practicality (e.g. why/how you should be able to do better).

Note that this is a list of ideas, and will be graded on organization, quality of analysis, and clarity of insight. It will not be graded as prose (e.g. complete sentences).

Your research and analysis will be graded on the basis of:

Maintain your research report with history (e.g. on Github or in Google Docs). When you are ready to submit it for grading:

P1C - Requirements Development

Clarity is power.

Many well-funded projects have failed because they did not deliver functionality that people actually needed (or did not deliver it in a usable form). Our success in building a new software product is a function of how clearly we understand its intended functionality. Its success in satisfying its users is a function of how well we understand their needs. If we do not clearly understand what must be done, it will only be by accident that we do the right things. Requirements Development is the (often ignored by developers) process of developing those critical understandings.

In this phase of the project you will:

P1C.1 - Preliminary Requirements

Brainstorm on your product concept and discuss the capabilities of existing products to get ideas for what your product should be able to do.
Identify different types of users (who might have different abilities or needs) and then identify capabilities, characteristics, and/or use cases for each.

The specific form in which you choose to represent these requirements is up to you. Basic capabilities may be best captured by simple declarative sentences. Complex or role based interactions may be best captured by use cases. Use the forms that you think best capture the requirements in question.

Again, there is no need for this document to be any more than organized notes.

Your preliminary requirements document should include:

This is only a brain-stormed initial list. You will gather considerably more input from potential users in your requirements elicitation. As such, it probably doesn't make sense to try to go into too much detail or prioritize these preliminary requirements.

These preliminary requirements will be graded on the basis of:

Maintain your requirements with history (e.g. on Github or in Google Docs). When you are ready to submit it for grading:

P1C.2 Concept Presentation

Before asking potential users for requirements suggestions, we have to give them some idea of what we are talking about. This should be a brief (five minutes or less) presentation on the type of product being considered. This is not a sales pitch! As we will discuss in our lecture on requirements, it is crucial that we not contaminate the panel with our own thoughts. The purpose of a requirements elicitation is for us to get information from potential users. As such, the presentation should be limited to establishing a context for the discussions to follow.

A brief (3-6 minutes) prepared presentation (including slides and/or other visual aids) introducing the product concept as background for the requirements elicitation.

This presentation will be graded on:

This presentation will be given at the start of your requirements elicitation session, but any prepared materials (e.g. slides) must be prepared and made available for review prior to the actual elicitation.

You may prepare and deliver this presentation in any form you choose, but the written submission should be in some relatively universal format (e.g. pdf, HTML, Google Presentation).

Maintain your concept presentation on github (or in a Google Presentation with history).
When you are ready to submit it for grading:

P1C.3 - Requirements Elicitation

As described in the Requirements lecture, there are many possible sources of product requirements. One of the most important sources is the intended users. The better we understand what they do, and how the proposed product would be used, the better we can design a product to meet their needs. Each team will be asked to plan and conduct a session in which they will gather requirements information from potential users.

This is a ~30 minute face-to-face meeting with potential users where you will gather information to develop and validate requirements.

  1. Briefly (in less than 60 seconds) introduce yourselves and the agenda
  2. Give your concept presentation
  3. Start with open-ended information gathering about the relevant activities, how they pursue them, what problems they encounter, and what they wish they could do.
  4. Then move into more specific questions, following up on interesting points raised in the open-ended questions, and posing the more directly product-focused questions that you have come up with.
  5. Get their comments on information you have gained from other sources. Note: it is possible that, in the process of giving you their own requirements, the customers will touch on all of your previously gathered requirements. If this is the case, when you get to that part of the agenda, specifically confirm that they have reiterated (or note how they have changed) that prior input.
  6. Present a summary of the key messages you have gotten from them today, and give them the opportunity to correct/amend those.
  7. Thank them for their participation.
Team roles:

This process will be graded on the basis of:

When you have identified a panel of potential users, schedule an appointment (with me or a grader) for your requirements elicitation session.

P1C.4 - Elicitation Report and Requirements Analysis

After the elicitation session, the scribe should write up a summary report, review it with the team, and add it to your repo. This report should include:

This report is a record/summary of what you learned in the elicitation. It can certainly call out new things, and organize the input into clear messages. But this report should be treated as "raw data", and it is probably best not to add opinions (agreement or disagreement with what they said) to this record. Those can be explored in the (subsequent) requirements analysis.

This report will be graded on the basis of:

When you are ready to submit your elicitation report for grading:

The elicitation report is raw input to the requirements development process. The characteristics of good requirements are discussed in both the reading and the lecture on requirements. Starting with your initial (brain-stormed) requirements:

  1. revise them based on customer feedback (and your evolving understanding of the problem)
  2. add the new requirements gathered from customers.
  3. look for ambiguous requirements, and clarify them.
  4. assign a value (e.g. high/medium/low) to each requirement, and justify that assignment.
  5. assign a confidence (e.g. high/medium/low) to each requirement (that you have properly understood it and/or its value).
  6. look for conflicting requirements, and resolve the conflicts.
  7. assign a difficulty (easy, moderate, hard) to each, and (briefly) justify this assignment.
  8. assign a priority (e.g. must-have, should-have, nice-to-have) (based on the combination of value and difficulty) to each requirement, and then ladder them by priority.
  9. suggest a cut-off line for release 1.0, and justify why that is the right place to draw the line.
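The laddering in steps 4-9 can be sketched in code. The sketch below is purely illustrative: the requirement names, the high/medium/low scales, and the particular mapping from (value, difficulty) to priority are assumptions, not part of the assignment; your team should define its own scales and justifications.

```python
# Illustrative sketch: ladder requirements by a priority derived from
# value and difficulty. All names and scales here are invented examples.

REQS = [
    {"name": "export results", "value": "high", "difficulty": "easy"},
    {"name": "offline mode", "value": "low", "difficulty": "hard"},
    {"name": "user accounts", "value": "high", "difficulty": "moderate"},
]

def priority(value, difficulty):
    """Map a (value, difficulty) pair onto a must/should/nice bucket."""
    if value == "high" and difficulty in ("easy", "moderate"):
        return "must-have"
    if value in ("high", "medium"):
        return "should-have"
    return "nice-to-have"

ORDER = {"must-have": 0, "should-have": 1, "nice-to-have": 2}

# Sort (stably) so that the highest-priority requirements come first;
# the release-1.0 cut-off line would then be drawn somewhere in this list.
laddered = sorted(REQS, key=lambda r: ORDER[priority(r["value"], r["difficulty"])])
for r in laddered:
    print(priority(r["value"], r["difficulty"]), "-", r["name"])
```

However you compute it, the point is that priority is a judgment derived from value and difficulty, and each assignment should carry a brief written justification.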

When you sit down to come up with a final set of requirements, it is likely that you will discover that some of the input you got in your elicitation was not as clear as it seemed at the time. When this happens, you have a few options:

Based on the results of your requirements analysis, prepare a priority ordered list of product requirements. For each, list:

The requirements analysis will be graded on the basis of:

Maintain your requirements analysis with history (e.g. on Github or in Google Docs). When you are ready to submit it for grading:

P1D - Final Proposal and Post Mortem

Having thought about what you want to build, describing it to others, and getting their feedback, you should now be prepared to:

  1. write up a proper proposal for your project.
  2. assess how this process went, and what you have learned from it.

P1D.1 - Final Proposal

The key deliverable is a written proposal (likely to management or a granting agency) about what this project is and why they should approve (or fund) it:

Most of this information has already been assembled (in your initial proposal, competitive analysis, concept presentation and requirements analysis). Now you are assembling it into a final form.

Hints (about the real world):

This proposal will be graded much more on content than on format, so ASCII text is completely acceptable. If you want to submit it in a richer format (e.g. so you can include images), PDF, Google Docs, or HTML are also acceptable.

This proposal will be graded on the basis of:

Maintain your final proposal on github (or in a GoogleDoc with history enabled). When you are ready to submit it for grading:

P1D.2 - Post-Mortem Analysis

This project is a learning exercise, and one of the major ways we learn is by analyzing past experiences (positive and negative). You will, as a team, review all aspects of this project. One of you will then summarize that process into a post-mortem analysis report.

A report, summarizing the key issues raised in your post-mortem, and the conclusions you came to. Your post-mortem discussion should include:

The submission and grading of Post Mortem reports is described in the General Grading information.

Make sure that you have kept your meeting minutes and management plan up-to-date. When you are ready to submit the Post-Mortem report (and management notes) for grading:

P1D.3 - Work Share Estimates

Most of your grade for this project will be based on the team deliverables, but some of it will be based on the quality, amount, and timeliness of work done by each team member. Towards this end, we ask each team member to estimate the amount of work done by each team member on this project. Please prepare your own assessment of how much work was done by whom and submit it as workshare_1.txt (or perhaps workshare_1.csv).

Project Phase 2
Architecture and Review

Introduction

If something is simple, a well-trained person can just sit down and do it. But real-world problems often defy simple solutions, and complex systems tend to be confronted by non-obvious challenges and to exhibit unanticipated behaviors. We would like to simply sit down, start writing the parts that are obvious, and trust that the rest will become clear to us as we move down that path. That story has been told countless times, and it almost always ends in tragedy.

Now that you have a well formulated idea of what your product should do, it is time to figure out how such a product can be built. This design must be elaborated and described well enough that outside reviewers can evaluate it, assess its feasibility, and identify holes in your design. This project will exercise a great many key design skills:

All of the above skills are fundamental to any significant software development effort.

There are multiple parts to this project, each of which has its own goals, processes, and deliverables:
Phase Assignment Value
2a Plan and Preliminary Architecture 25
2b Pre-Review Notes, Architectural Review, and Review Report 15
2c Final Architecture 25
Post-Mortem Report 10
management 5
individual contribution 20

P2A.1 Plan

The development of the architecture may prove much more difficult than you imagine, the process of describing it clearly in writing may prove extremely challenging, and you will probably discover that (after your design review) you have some non-trivial work to do to fix your design. Also, remember that design is a wicked problem:

In project 1, you were given a relatively complete list of things you had to do, along with some guidance on how to do them. In project 2, you will have to work much of this out for yourselves.

These factors can cause the work to very quickly grow much larger than your initial estimates. Many students consider the development and description of the preliminary architecture to be the most difficult part of the entire course. Many final post mortem analyses express regret for not having adequately investigated issues and options in project 2 that caused them considerable difficulty in projects 3 and 4. This means you need to get started as soon as possible, and budget a great deal of time for experimentation, revision, and rework.

Each team will prepare a task breakdown, identify the dependency relationships between tasks, assign an owner and due date to each sub-task, and schedule regular reviews of both work-products and progress (to allow adequate time to deal with the problems that will arise). A good management plan will include regular (e.g. daily) status checks, whose results should be recorded in a minutes file and/or white-board snapshots.

As your understanding of the problem evolves and you respond to unanticipated events, you will have to revise your plan (not merely the estimates, but the work to be done). If deadlines are missed, or deliverables fail to pass review, the fact, as well as the causes and the plan to remedy them, must be documented.

In Project 2B another team will review your P2A architecture. Trying to line up reviewers at the last minute is extremely difficult and stressful. You will find that things go much better if you have lined up which team will review your architecture during this (P2A) phase.

Your initial Management Plan will be graded on the basis of:

Maintain your plan (and status update minutes) in Google Docs. You will probably be updating them daily, and we will be reviewing this history.

When you are ready to submit your plan for grading:

P2A.2 Preliminary Architecture

You will design and specify a set of components that is (in combination) capable of delivering your version 1.0 product functionality. It is tempting to draw some high level pictures and a brief set of descriptions and call it an architecture ... but the devil is in the details. The hard part is working out the implementation and characteristics of the architecture in sufficient detail to demonstrate that the described system is buildable and likely to work:

Please keep in mind that a description of proposed screens/commands and the paths between them is not an architecture. That is a proposal for a user interface and functionality. An architecture describes the structure of the software (e.g. programs or classes) that will implement that functionality, and the interactions of those components with one another and with external services or agents. For each component in your architecture, you should be able to characterize its interfaces (e.g. procedure calls, HTTP GETs) and functionality (e.g. the methods or operations it supports).

In Chapter 3.5 of Code Complete, McConnell provides an overview (and check-list) of the things that you might want to cover in your architectural description; the most relevant (for this project) sections from his presentation are probably:

To create your architecture, it may be helpful to:

You may find it helpful to record your component interfaces as class and method declarations for all external entry points, and then implement trivial mocks (that simply make calls to other components and return simulated results of the appropriate types). Doing so may enable you to build a toy version of your entire application, and more clearly understand the components you have defined and the flows of information and control between them. It will also help to turn up misunderstandings about proposed interfaces, and serve as a starting point for continuous integration.
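As a concrete illustration of this mocking approach, two components might be stubbed as shown below. This is a hedged sketch only: the component names (StorageService, QueryEngine), their methods, and the call-based interaction style are invented for illustration, not prescribed for your project.

```python
# Hypothetical interface-first mocks. The names and methods here are
# invented examples; a real project would substitute the class and
# method declarations of its own architecture.

class StorageService:
    """Mock of a storage component: returns simulated results of the right types."""

    def save(self, key: str, value: str) -> bool:
        # A real implementation would persist the value; the mock just succeeds.
        return True

    def load(self, key: str) -> str:
        # Return a canned result of the declared return type.
        return f"<stub value for {key}>"

class QueryEngine:
    """Mock of a component that depends on StorageService via calls."""

    def __init__(self, storage: StorageService):
        self.storage = storage

    def answer(self, query: str) -> str:
        # Exercising the other component's interface is what turns up
        # misunderstandings about the proposed interfaces early.
        raw = self.storage.load(query)
        return f"answer({raw})"

# Wiring the mocks together yields a "toy" version of the application.
engine = QueryEngine(StorageService())
print(engine.answer("q1"))
```

Even this trivial wiring forces you to commit to method names, parameter types, and return types, and it gives you a skeleton into which real implementations can later be dropped for continuous integration.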

Recall that McConnell described design as a wicked problem, in that you may not even have a list of problems to be solved until you have designed (and discarded) a few solutions. This is a highly iterative process, and these iterations may consume a great deal of your time. Often, the best way to convince ourselves of the feasibility of an architecture is to sketch out possible implementations of each major component, looking for options and issues. Such preliminary research, design, analysis, and prototyping is an important sanity check on the viability of the proposed architecture. The goal of these investigations is not to fully design the major components, but merely to present a compelling argument for the "feasibility" of the proposed component descriptions:

Your architecture need not result in a complete top-down design for the whole system. A component analysis only needs to be complete enough to convince the reviewers that the major likely problems have been identified, and are all likely to be solvable. Every major component should be analyzed to this "feasibility" level. Each member of the team should take ownership of the analysis for at least one major architectural component. If you have many more components than people, each person should choose at least one interesting component for thorough analysis or one interesting question to resolve through a prototyping effort.

If you have too many questions and components, it may be impossible for you to do all of the necessary design and prototyping within a few weeks. After each person has taken on a difficult component or question, you can do a more superficial job on the simpler questions and components (but note explicitly in each that you are doing so). In the unlikely situation that you have more people than interesting components, sub-divide your major components to create more pieces or find a problem that requires further analysis or prototyping.

NOTES ON INVESTIGATIONS

Your architectural description should include:

This should be backed up by a set of component analyses (research, designs, analyses, prototypes, etc) that were done to establish the feasibility of implementing each major component:

You may want to sketch out parts of the user interface (e.g. screens, web pages, operations and navigational options) to guide your definition of required functionality. These may be a key foundation for the architecture that will implement them, and valuable background for reviewers of that architecture. But these are not software architecture, and will not satisfy the requirements for either the preliminary or final architecture.

Getting Started and Early Feedback

To help you through the most difficult parts of this process, we suggest that you (within the first two days):

  1. conceive a list of proposed software components.
  2. prepare a one-line description of what each component does and its role in the product.
  3. draw a diagram showing their relationship to one-another and the context in which they run and interact with the outside world.
  4. prepare a list of the issues (e.g. new technologies, available tools, non-obvious APIs, difficult algorithms, etc) that might arise in the implementation of each component.
  5. schedule a presentation with me (and/or a Grutor) to review those component descriptions and concerns.
  6. get feedback on what else you might need to do before you start researching tools, doing prototypes and elaborating designs.
Once you have a reasonable model of how your functionality can be implemented, and of the questions/risks that must be examined, you will know what research, prototyping, and design examinations you need to complete. But if you do not have a reasonable starting point and list of issues to address, you may waste a few weeks on work, much of which will be thrown away without enabling you to move forward.

Describing an Architecture

There is a stereotype that scientists and engineers tend not to be good writers. To the extent there is any truth in this stereotype, it is strange and tragic ... because story-telling is fundamental to what all researchers and engineers do. They may spend much of their time on experiments, models, and designs, but understandings of complex phenomena and systems are of little value until they have been explained to others (who will apply and/or instantiate them). Scientists and engineers do not (or at least should not, as part of those jobs) write fiction; but they do nonetheless create stories in which characters, with satisfying backgrounds, fulfill their roles and traverse their arcs, in accordance with, and exemplifying, compelling themes. You understand the system you have designed; now you have to devise a story that will recreate that vision for others:

You will note that:

It is suggested that you describe your architecture in multiple, complementary ways:

But I do not give you forms, or even a table of contents for your architectural description. There are a few reasons for this:

But, if you review the Garlan and Shaw paper on software architectures you will see prose and graphical descriptions of many classical architectures. For client-server models, you might find it interesting to look at simple models of the ORB client/server binding, and of the client-to-data-path structure of the Google File System and Hadoop file systems.

Recognize that your reviewers may not have read your initial concept and competitive analysis, and that they may not be familiar with the technologies you propose to use. Include, in your introductory sections, references to your proposal and to overview documentation for the tools you propose to use.

Your preliminary architecture will be graded on the basis of

Maintain your architecture description with history (e.g. on Github or in Google Docs). When you are ready to submit your preliminary architecture for grading:

P2B Architectural Review

As we refine our design, we will attempt to see how it handles the scenarios we have imagined as exercises of all of its major (run-time and non-run-time) requirements. But, it is natural for us to become prone to confirmation bias. It is easy to become convinced that we have made the right decision and that our design is therefore correct. After we have done our best at refining and evaluating our architecture, it is time to seek independent validation from people who haven't already bought-in to our solution.

Each team will read over another team's architectural presentation, prepare detailed notes, conduct a formal design review, produce a written report, and work with the designing team to ensure reasonable resolution of all outstanding issues. You will select a moderator, who will work with the submitting team to ensure that the package is ready for review before it is sent to the rest of your team for review. Please document any discussions between your moderator and the submitting team.

The reviewing team will likely need a little background before they can fully review the proposed architecture. It is suggested that, along with your preliminary architecture, you also send a copy of your (P1D) final project proposal, and URLs for general background on any tool kits or APIs you are proposing to use.

NOTE: for this project, it is the reviewing team that will be graded.

P2B.1 Review Notes (per person submissions)

Digesting other peoples' designs is another fundamental skill for scientists and engineers. Prior to your review meeting, each of you (individually) will read the submitted architecture description and prepare detailed notes on all questions and concerns. These notes must be submitted at least 24 hours prior to the actual review session. They should be neat notes, describing legitimate issues clearly enough to be sent as email, and organized for discussion (e.g. in the recommended review order). But these are notes, and there is no need for polished prose, or even complete sentences.

Note that the primary purpose of the architectural review is to identify issues with the proposed architecture, and you will be graded on your ability to understand and explore the implications of the described architecture. But, you are also invited to make observations about the proposed functionality and competitive positioning. While these are not architectural, they may well be within scope for early stage project reviews.

There are a few standard approaches to architectural reviews:

  1. Imagine scenarios to exercise the key requirements, and see how well the proposed architecture seems able to handle them. Following this path will force you to understand the details of component relationships, functionality, and interfaces.
    After considering the basic flows of information and control, you might then consider obvious modes of failure, and the ability of the specified components to reasonably detect and handle each.
  2. After you believe you understand the components' responsibilities, functionality, and interfaces, consider a list of general architectural goodnesses (e.g. modularity, generality, testability, maintainability, portability, etc) and how well this architecture fares in each of these respects.
  3. Consider areas where you are not yet confident of your understanding, or of the rightness of the proposed solution, and think of questions (to be asked during the review) that would better illuminate those issues.

When you have completed your study notes in preparation for the review, each individual should prefix them with a standard submission prologue and submit them (with the name review_notes_2b.txt).

NOTE: Late points cannot be used for this submission.

P2B.2 Review Meeting (graded for the reviewing team)

Your team will conduct a formal design review of the submitted architecture. The team that submitted that architecture will be present to answer questions, but will have no other formal role in the process.

Schedule your review meeting with the professor (or Grutor). Make sure the submitting team can get you the architectural package far enough in advance to enable you to do the required study and prepare your notes prior to the review meeting.

P2B.3 Review Report (graded for the reviewing team)

The designated scribe for your review session will write up a report of all conclusions reached in the design review. This is specifically not "meeting minutes" (including all questions, answers, and discussion points). Rather it is a distillation of key issues and decisions. It should be carefully written and reviewed. It must contain:

Please note that must fix does not mean I feel strongly about this; it means:

Thus, unless it is obvious, it is often a good idea to accompany must fix designations with a justification/rationale.

When you are ready to submit your review report for grading:

You should also send a copy to the team whose project you reviewed, so they can start working on the required changes.

These submissions will be graded on the basis of:

P2C.1 Revised Architectural Proposal

You will get considerable feedback on your initial architecture proposal from the professor (or Grutor) and the team that reviews it. Based on this feedback, you should revise your architecture to address all of the must-fix issues, and as many as possible of the should-fix issues. You will be graded on these resolutions, so make sure you document all changes made and agreement from the reviewers.

Your final proposal is a new version of the original proposal, revised based on the feedback from the graders and reviewers. The grading criteria for the revised architectural proposal will be similar to the initial architectural presentation, but with the additional expectation that your rationale and issue discussions will include issues raised in the reviews, and your design will have responded to those suggestions:

Please make sure that your final architecture acknowledges this feedback and explains how you have responded to each item.

The key questions we will ask when grading your final architectural proposal are:

Prepare a new architectural description (or revise your original description) with history (e.g. on Github or in Google Docs). When your team has addressed all issues and you are ready to submit your final architecture for grading:

P2C.2 Post-Mortem Report

This project is a learning exercise, and one of the major ways we learn is by analyzing past mistakes. You will, as a team, review all aspects of this project. One of you will then summarize that process into a post-mortem analysis report.

A report, summarizing the key issues raised in your post-mortem, and the conclusions you came to. Your post-mortem discussion should include:

The submission and grading of Post Mortem reports is described in the General Grading information.

Make sure that you have kept your meeting minutes and management plan up-to-date. When you are ready to submit the Post-Mortem report (and management notes) for grading:

P2C.3 - Work Share Estimates

Most of your grade for this project will be based on the team deliverables, but some of it will be based on the quality, amount, and timeliness of work done by each team member. Towards this end, we ask each team member to estimate the amount of work done by each team member on this project. Please prepare your own assessment of how much work was done by whom and submit it as workshare_2.txt (or perhaps workshare_2.csv).

Project Phase 3
Specifications, Design and Review

Introduction

Thus far, most of your work has been team activities. Moving forward, you will do much of your work as individuals, though still with considerable coordination, collaboration, and assistance from other team members. Even though much of the work in the next two phases will be individual, it is important that you continue to work as a team:

The architecture has described the roles, functionality, and interfaces of each of the key components in our system. A few components may be simple enough that we can simply sit down, code them up, and watch them work ... but most components are more complex than that. Before we start coding a non-trivial component we need to make sure that:

In this project, we will complete our implementation prerequisites.

In this phase you will create specifications, designs and testing plans for chosen components. In the next (and final) phase you will execute these plans, building, testing, integrating, and demonstrating working software. I suggest that you review the work you will have to do in projects 3 and 4, and then give considerable thought to which components (or parts of which components) you want to choose. Depending on your architecture, these could each be a complete architectural component, or small pieces (e.g. an applet or a few classes) from a single architectural component, or even from multiple components:

Note, however, that you are each implementing only one (or a few) module(s) ... just enough to require a reasonable amount of design and coding work, and a modest number of test cases. It is not required that the sum of these modules add up to the whole of your proposed product, nor must each of you implement a single complete architectural component. It is very important that you consider these requirements when choosing the components to be implemented; it would be unfortunate if you chose components A, B, and C, and only later discovered a problem. Give some thought to how each component could be (independently) built and tested, and how you might demonstrate their successful integration. If components B and C are fundamentally dependent on component A, perhaps the owner of component A can create a cheap mock (simulator) that can be used for testing until the real component A is available. If you have any questions or doubts, please talk to me or your grutor before finalizing your decisions.

Your pieces can be implemented in any appropriate language or combination of languages, and use any tool-kits or middleware you find convenient ... but it must be compilable/executable code with some algorithmic complexity (much more than U/I widgets, data, HTML, or images), and must be accompanied by a fully automated unit test suite.

The warning about choosing U/I components is based on two concerns:

  1. Putting up U/I widgets and responding to their call-backs is so simple that it is commonly assigned as a project in introductory programming classes.
  2. If the primary inputs to your component are touches/clicks, and your primary outputs are pixels on the screen
    1. it may be difficult to create a comprehensive automated unit-test suite
    2. most of the code being tested is not yours, but the GUI tool kit.
But this does not mean that it is impossible for U/I components to satisfy the design and implementation requirements: If your U/I is rich enough in functionality (e.g. several hundred lines of code involving complex widgets and event processing), does much error checking, and is thoroughly exercisable with automated testing tools, it could easily qualify as a component for design and implementation in the next phases of the project. If you believe you have a U/I component that would qualify, review it with me before completing your selections and plans.
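One common way to make a U/I component unit-testable is to keep the widget and call-back layer thin, and move validation and state logic into plain functions that a test can call directly without a GUI toolkit. A minimal sketch (all names and rules here are hypothetical illustrations, not part of any assigned component):

```python
# Hypothetical example: U/I logic separated from the widget layer so it
# can be unit-tested without any GUI toolkit at all.

def validate_login_form(username: str, password: str) -> list[str]:
    """Return a list of error messages; an empty list means the form is valid."""
    errors = []
    if not username:
        errors.append("username is required")
    if len(password) < 8:
        errors.append("password must be at least 8 characters")
    return errors

# A widget callback would then be a thin wrapper around the testable logic:
#   def on_submit(event):
#       errors = validate_login_form(name_field.text, pw_field.text)
#       show_errors(errors) if errors else do_login(...)
```

With this split, the error-checking logic (often most of the interesting code) is exercised by ordinary unit tests, and only the thin wiring layer depends on the toolkit.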

The first draft designs and test plans for the chosen components are due in the second week of this project ... but you will be defining the interfaces you will be exporting for use by other team members this week. These interfaces represent critical inter-dependencies between otherwise individual development efforts, and must be negotiated between the producers and consumers.

  1. Sketching out and discussing those interfaces up-front will make it much easier to pursue your (individual component) design activities next week.
  2. As you pursue your own designs, you will likely come to a better understanding of what functionality (and interfaces) you need from the other components with which you interact.
Thus, component interfaces must be well specified before you begin your detailed designs, but it is quite likely that those interfaces will change as a result of understandings gained in the process of doing those detailed designs. Even though detailed designs are not due until the next phase, you should already have some pretty good ideas about how each component will be implemented. If not, you may find that you have specified something that cannot be built ... and all of the work done based on that specification may wind up being thrown away.

There are multiple phases to this project, each of which has its own goals, processes, and deliverables (most of which are individual rather than team):
Phase Assignment Value3
3A Plan (for your component) 5
3A Component Specifications (for your component) 10
3B Component Design (for your component) 20
3B Component Test Plan (for your component) 20
3C Review Notes (you prepared for other reviews) 10
3C Review Report (from review of your component) 5
3D Final Specifications, Design and Test Plan (for your component) 20
3D Post-Mortem Report (team) 10

P3A.1 Plan

Perhaps the most important part of your plan is which components or classes each of you will implement. But each team will also prepare a task breakdown, identify the dependency relationships between tasks (and components), assign owners and due dates for each sub-task, and schedule regular reviews of both work-products and progress (to leave adequate time to deal with the problems that will arise).

Most of the work involved in creating your specifications, designs, and test plans will be individual. But team-mates will still have significant dependencies on one another:

Even though much of this work will be done individually, the team must agree on (and meet) a schedule for when materials will be ready and reviews will happen. A slip in one person's schedule may cause delays for team-mates who are depending on those results. One of the advantages of team activities is that regular meetings (e.g. daily stand-ups) keep us on schedule. Thus, you are encouraged to continue having regular status updates and to maintain a minutes text file or white-board snapshots in Google Docs.

The amount of work required to refine your architecture to the point that it is possible to identify and specify your chosen components will vary greatly from one team/product to the next, and I would encourage you to get this behind you as quickly as possible. Once you have a sense of what the chosen components are, you should have a pretty good idea of how much work it will be to do the designs and test plans. You should, however, leave yourself ample time for discovering issues with the chosen components, the initial designs, and making the required changes.

As your understanding of the problem evolves and you respond to unanticipated events, you will have to revise your plan (not merely estimates, but the work to be done). Make sure that you document each of these problems and the manner in which you decide to respond to it. If deadlines are missed, or deliverables fail to pass review, the fact, as well as the causes and the plan to remedy them must be documented.

Your initial Management Plan will be graded on the basis of:

Maintain your plan (and status update minutes) with history (e.g. on Github or in Google Docs). You will probably be updating them daily, and we will be reviewing this history.

When you are ready to submit your plan for grading:

P3A.2 Specifications

Each team member will take ownership of one or more modules (or components). To prepare a specification, you will:

  1. (perhaps) expand the architecture (above the chosen components) to describe them and the components with which they interact. If you have decided to implement pieces of your system that are smaller than complete architectural components (from project 2), you will have to expand and refine that architecture down to the level of the pieces you want to use for projects 3 and 4. Note that you do not have to expand everything in your architecture to this level of detail. You only have to do a top-down refinement along the path to the components you will be using for this project.

    If a component to be implemented was already fully described (detailed specifications for all external interfaces/public methods) in the (Project 2) architecture, no further expansion is required. If further top-down refinement is required, your component specifications should be accompanied by addenda to the (submitted for project 2) architecture. It may be possible that a single addendum (created by the entire team) could be used for all of the component specifications.

  2. generally describe the functionality of each of the component(s) to be designed, and their role(s) in the overall architecture.
  3. determine the requirements to be imposed on each of the component(s) to be designed, based on the product requirements and the components' roles in the overall architecture.
  4. enumerate all of the external interfaces/public methods (in both directions) between the component(s) to be designed and the rest of the system.
  5. write a complete specification (both form and function, detailed enough to define acceptance criteria) for all of the external interfaces/public methods to the chosen components.

There is no universal definition for what "specifications" should describe, in how much detail. The answers to those questions depend on:

In this case, we are asking you to describe

The requirements developed so far most likely describe what the whole product must do. You should be translating these into component-level requirements (what should happen as a result of each call). It is possible that your component will have no requirements beyond delivering above-described functionality. But it is likely that, as a result of its role in the overall architecture, and the things that other components need it to do, there will be additional requirements imposed on the manner in which your component provides its functionality or cases that must be handled.

How can you know if your specifications are adequate?

The form in which you provide this information is up to you, but the most obvious form is (compilable) declarations and comments.

Specifications can be written as prose, but we are probably describing class and method APIs. The most obvious form for representing APIs is as (class, method and field) declarations (in the language in which they will be implemented) with Docstring comments to describe them:
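For example, a specification written this way for a hypothetical session-management component (the class, methods, and response codes below are illustrative assumptions, not a required design) might consist of declarations and docstrings, with the bodies deliberately left unimplemented:

```python
# Hypothetical specification sketch: declarations and docstrings only.
# The bodies are intentionally unimplemented; form and function are
# specified, implementation comes later.

class SessionManager:
    """Tracks logged-in users and their privilege levels."""

    def login(self, username: str, password: str) -> int:
        """Authenticate a user and start a session.

        Returns: 200 on success, 510 on incorrect password,
        511 if the user does not exist.
        """
        raise NotImplementedError

    def status(self, requester: str, target: str) -> int:
        """Report a user's status to an authorized requester.

        Returns: 200 on success, 523 if the requester lacks
        the privileges needed to view the target user.
        """
        raise NotImplementedError
```

Because these declarations compile, the consumers of the component can write code against them (and a docstring tool can generate reference pages from them) before any implementation exists.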

You are free to choose the format that best tells your story, but we have provided an example of an architectural overview, at the end of which is a table of links to sample specifications, designs, and test plans.

Even though you will each be developing specifications for your own components, you are strongly encouraged to review all of these specifications as a team. One person, who will be using a component, may have a different understanding of the specifications than another person, who will be implementing it. In a team review, the users of a component can ask questions about how they can correctly use it ... clarifying ambiguities and making it much more likely that the components will all work together when they are finally integrated.

As a further step, each team member might want to build a cheap mock for their component that implements all of the external entry points (returning trivial artificial results) and makes calls to the other components. This will make it possible to build the entire system, confirm that the components agree on the basic interfaces between them, and enable continuous integration as the implementations of the individual components proceed.
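Such a mock can be only a few lines long. In this sketch (the component, its methods, and the canned values are all hypothetical), every entry point exists and returns a fixed, trivial result:

```python
# Hypothetical cheap mock: implements the real component's external entry
# points but returns hard-coded results, so the other components can be
# built, integrated, and tested against it before the real one exists.

class MockUserDatabase:
    """Stands in for the real user-database component during early integration."""

    def lookup(self, username: str) -> dict:
        # Always "finds" the user, returning a fixed canned record.
        return {"name": username, "role": "normal", "password": "PASSWD1"}

    def authenticate(self, username: str, password: str) -> bool:
        # Trivially accepts only the one canned password.
        return password == "PASSWD1"
```

The real implementation later replaces the mock without changing any caller, since both expose the same interface.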

This submission will be graded on the basis of:

Maintain your specifications with history (e.g. on Github or in Google Docs). When you are ready to submit your component specifications for grading:

Design and Test Plan

P3B.1 Component Design

Each team member will prepare a detailed design for one or more modules, ideally comprising 100-400 lines of code when complete. This design need not be at the level of complete pseudo-code, but (in combination with the specifications) should be sufficient to enable a skilled programmer to easily and correctly implement the specified module(s).

Each module design should include:

If any of these design elements are non-obvious, the rationale for those decisions should be described so that the implementer can better understand what must be done. People are more likely to make mistakes when working on things they do not understand.

Often, in the course of a component design, we realize that it would be easier or better if we made some changes to the specifications. This is normal, and (in most cases) a good thing. But other people may be doing their own designs based on the original specifications. As your understanding of your component evolves, make sure to share this evolution with your team mates. This is another good reason to (prior to full integration) supply (up to date) mocks for your component ... so that the compiler can ensure that you are all working to (at least) the same basic interface specifications.

You can prepare your designs in any form you find convenient, but you may find it easiest to create them as code modules with compilable declarations (complemented by full JavaDoc/PyDoc tags), and algorithmic comments (rather than code). Overview and rationale can be presented as comments in front of the described elements. Many people regularly do most of their designs in this way:
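A design prepared this way might look like the following sketch (the component and its algorithm are hypothetical): the declarations compile, the docstrings carry the rationale and the algorithm, and the body remains a placeholder to be filled in during implementation:

```python
# Hypothetical design sketch: compilable declarations with algorithmic
# comments standing in for the code itself.

class RateLimiter:
    """Limits each user to a fixed number of requests per minute.

    Rationale: a fixed-window counter needs only O(1) memory per user
    and is accurate enough for our (hypothetical) requirements.
    """

    def __init__(self, max_per_minute: int):
        self.max_per_minute = max_per_minute
        self.windows = {}  # username -> (window_start_time, request_count)

    def allow(self, username: str, now: float) -> bool:
        """Return True if this request is within the user's quota.

        Algorithm:
          1. Look up the user's current window; if there is none, or if
             `now` is 60s or more past its start, begin a new window.
          2. If the window's count is below max_per_minute, increment it
             and allow the request; otherwise deny it.
        """
        raise NotImplementedError  # to be written in the implementation phase
```

Because the skeleton compiles, it can double as the starting point for both the implementation and its mocks.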

The sample architectural overview includes links to class descriptions and API documentation that were automatically generated from source code and comments with the Epydoc tool.

If the chosen component is of reasonable size, complexity, and testability, this design will be graded on the basis of:

Maintain your component design with history (e.g. on Github or in Google Docs). When you are ready to submit your component design for grading:

P3B.2 Component Test Plan

Once you have specifications and a design to satisfy them, you have to figure out how you will test your implementation to demonstrate that it actually works. You, as a responsible developer, have an obligation (to the project and your team) to verify that your component is working correctly before integrating it with the other components for system testing.

It is possible that, in the process of developing your test plan, you will find assertions that are hard to test, or functionality that is difficult to verify (or verify automatically). If this happens, you may need to revisit your requirements, specifications and design in order to make those aspects of program behavior more testable. Please feel free to ask for help if you find yourself in such a situation.

The difference between black-box and white-box testing (based on specifications vs implementation) sounds like a simple binary distinction based on a three-word-assertion ... but the real world is often more complex and nuanced than our simple classifications:

The bottom line is that you have been asked to demonstrate your ability to design non-trivial software. I also need to see you demonstrate the ability to recognize situations that might result in different computations, and to devise test cases to verify the correct handling of each.

In general, your test plan should include:

This could be a fair amount of information. Please do not feel you have to give me a half-page of prose for each of a two-digit-number of test cases.

The following is an example of a concise representation of test cases, where the assertions being tested are obvious from the names, set-ups, and expected results.

Name | Set-up | Test | Expected Result
passwd-correct | dbase: USER1 w/password PASSWD1 | command: LOGIN USER1 PW=PASSWD1 | response = 200 (success)
passwd-incorrect | dbase: USER1 w/password PASSWD1 | command: LOGIN USER1 PW=PASSWD2 | response = 510 (failure)
passwd-badname | dbase: no such user as XXX | command: LOGIN XXX PW=XXX | response = 511 (failure)
auth-user-OK | dbase: USER1 as a normal user | logged in as USER1; command: STATUS USER1 | response = 200 (success)
auth-user-priv | dbase: USER1 as normal user; dbase: USER2 as normal user | logged in as USER1; command: STATUS USER2 | response = 523 (not allowed)
auth-manager-priv | dbase: MGR1 as manager; dbase: USER2 as normal user | logged in as MGR1; command: STATUS USER2 | response = 200 (success)
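Concise rows like these translate almost mechanically into an automated suite. The sketch below shows how the three password cases might become tests; the `login` function here is a hypothetical stand-in for the real component under test, not part of any assigned design:

```python
# Sketch: turning the first three table rows into automated tests.
# `login` is a hypothetical stand-in for the real component.
import unittest

DBASE = {"USER1": "PASSWD1"}  # set-up: USER1 w/password PASSWD1

def login(user: str, password: str) -> int:
    if user not in DBASE:
        return 511  # no such user
    return 200 if DBASE[user] == password else 510

class PasswordTests(unittest.TestCase):
    def test_passwd_correct(self):
        self.assertEqual(login("USER1", "PASSWD1"), 200)

    def test_passwd_incorrect(self):
        self.assertEqual(login("USER1", "PASSWD2"), 510)

    def test_passwd_badname(self):
        self.assertEqual(login("XXX", "XXX"), 511)
```

Naming each test after its table row keeps the plan and the suite in obvious correspondence.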

The sample submission on the web site includes a general overview of the proposed approach to testing, and a list of the specific test cases to be written for each class.

The above discussion focuses on the required elements and form for your test plan ... but those are merely wrappings. What is most important is that your test suite gives us high confidence that your component has been correctly implemented. Have you exercised every interesting case in every major function? Have you tried every equivalence class of interesting inputs, and tested the handling of all likely errors? You are encouraged to review your test plans with your team before submitting them.

Your test plan will be graded on the basis of:

Maintain your test plan with history (e.g. on Github or in Google Docs). When you are ready to submit your test plan for grading:

Design and Test Plan Reviews

Because your component has some algorithmic complexity and requires a non-trivial number of test cases, your design and test plans should be submitted for review. In Project 2 we required you to follow a fairly formal process (with another team supplying the facilitator and scribe). This is a simpler design, entirely appropriate to be reviewed by other members of your team (all of whom should already be well-familiar with what your component does). These reviews are your opportunity to:

You are welcome, for this less formal process, to act as the facilitator and scribe for the review of your own component. But the other basic rules (e.g. about content, scope, and behavior) still apply:

Each member of the team will submit his/her preliminary specifications, design and test plans for review by the one or more other team members. Each team member will participate in the reviews of the designs and plans submitted by other members of his/her team.

P3C.1 Review Notes

Prior to each review meeting, each of you (individually) will read the submitted specifications, designs, and test plan and prepare detailed notes on all questions and concerns. These notes must be submitted at least 24 hours prior to the actual review session. They should be neat notes, describing legitimate issues clearly enough to be sent as email, and organized for discussion (e.g. in a reasonable review order).

Each set of review notes will be graded on the basis of:

When your notes are ready for submission and grading:

P3C.2 Review and Report

You will conduct design reviews for each submitted Specification/Design/Test Plan package. The process will be similar to the architectural review ... but because this is simpler, and you have already followed this process (for your architectural reviews), these reviews will not be observed and graded. You also have much more latitude in these reviews:

But each person must create and submit notes for (at least) one review, and must write up a report for one review.

As with the architectural review, the report is not "meeting minutes". Rather it is a distillation of key issues and decisions. It must contain:

Each review report will be graded on the basis of:

When your review report is ready for submission and grading:

P3D Final Component Specifications (P3D.1), Design (P3D.2) and Test Plan (P3D.3)

Note: this is likely to be a relatively light week for project work. You would be well advised to use this opportunity to get an early start on the project 4 implementation ... so that you can have that out of the way in your (likely) otherwise busy final weeks.

It is likely that your design and test case development, and their reviews, will turn up issues that require changes to your specifications, design, and test plan. Address those issues (by revising your specifications, design, and test plan), document the changes that were made to address each, and get agreement from the reviewers that the issues have been satisfactorily addressed.

The primary parts of the final design submission are:

Each of these should include a discussion of the issues that were discovered (since the preliminary submissions), a summary of the changes that have been made, and the reasons for each.

This submission will be graded on the basis of:

When you have addressed all of the issues raised in your review and are ready to submit your final component specifications, design and plan,

P3D.4 Post-Mortem Report

This project is a learning exercise, and one of the major ways we learn is by analyzing past mistakes. You will, as a team, review all aspects of this project. One of you will then summarize that process into a post-mortem analysis report.

A report, summarizing the key issues raised in your post-mortem, and the conclusions you came to. Your post-mortem discussion should include:

The submission and grading of Post Mortem reports is described in the General Grading information.

Make sure that you have kept your meeting minutes and management plan up-to-date. When you are ready to submit the Post-Mortem report (and management notes) for grading:

This report will be graded on the basis of:

Project Phase 4
Implementation and Testing Sprint

The first three projects took us through all of the activities that precede implementation. In this final project you will (using skills you mastered long before you got here) actually implement the components you designed in phase 3. But, by now it should come as little surprise to you that only part of your time will be spent coding (and if you have done it right, very little of your time will be spent debugging). An implementation is not completed when it finally compiles; we must convince ourselves (and others) of its correctness. If we are implementing this as part of a team effort (or for use by others) we will probably be expected to package it and hand it off:

This final project also includes multiple activities, but no intermediate deliverables. You will complete your implementation and testing, and (when you are done) you will conduct a Sprint Review wherein you present your completed implementation (to your product owner).

Part  Assignment                                    Value3
4A    Final Code (individual)                          25
4A    Test Suite (individual)                          25
4B    Pair Programming Exercise and Report (team)      10
4B    Code Review Notes and Reports (team)             10
4B    Test Driven Development Report (team)            10
4C    Integration and Sprint Review/Demo (team)        10
4D    Post-Mortem Report (team)                        10

You may note that there is no management plan or grade associated with this project. You should, by now, be able to plan and coordinate activities for yourselves, and you are already being graded on your ability to deliver the required work on schedule ... which is the point. But, be warned:

P4A.1 Final Code

The primary activity in this project is for each person to implement and test the component(s) they designed in project 3. There will be many processes and exercises surrounding this implementation, but the primary deliverable is working code that implements the requirements and specifications set out in project 3.

The primary deliverable is source files and scripts (e.g. ant/Makefile) required to build them. Make sure that you document the build procedure and the environment that is required to build your components (e.g. in a README.md), because part of your grade will depend on the grader being able to independently build your product from the checked-in sources and instructions. If this is not practical (e.g. because your component cannot be built on a basic Linux developer desktop) make arrangements with the professor (or Grutor) to have him/her either do a check-out and build-from-scratch on an appropriate system, or watch you do so.

You should also re-submit URLs for the specifications and design for this component (from project 3) with any changes you have made since then. These are the standard against which the completeness and correctness of your implementation will be judged.

After you have completed all of your implementation (including reviews, testing and correction) and you believe your code is in final form:

It is likely that you will be submitting your code as part of a larger repo, and I will have a difficult time figuring out which files belong to whose submission. If you are submitting more than a few (1-4) files, please include a table of contents describing all of the directories and files in your submission. This is not merely for grading ... you will find that many repos include a README.md that provides an overview of the contained files.
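A minimal sketch of such a table of contents (all file and directory names here are hypothetical):

```
# Contents
src/parser.py    -- command parser (primary author: Algernon)
src/report.py    -- report generator (primary author: Medea)
tests/           -- unit tests for both components
Makefile         -- `make` builds, `make test` runs the test suite
```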

Each code submission will be graded on the basis of:

P4A.2 Final Test Suite Results

Each team member will, for his/her component, implement the test plan proposed in project 3, and run (and pass) those tests against their component implementation.

The execution of your test cases should be automated (e.g. so that all tests can be run with a single command), and all of the test cases and scripts should be checked in to your repo. Make sure that you document the environment and procedure for running these tests, because part of your grade will depend on the grader being able to independently test your product from the checked-in sources and instructions (in a README.md file). If this is not practical (e.g. because your component cannot be tested on a basic Linux developer desktop) make special arrangements with the professor (or Grutor) to have him/her do a check-out and build-and-test-from-scratch (or watch you do so) on an appropriate system.
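One way to make the whole suite runnable with a single command is a small entry-point script. This is only a sketch: the SmokeTest case stands in for your real tests, which a real project would normally discover from a tests/ directory instead.

```python
# run_tests.py -- single-command entry point for the whole test suite.
# SmokeTest is a stand-in; a real project would use
# unittest.defaultTestLoader.discover("tests") instead.
import sys
import unittest

class SmokeTest(unittest.TestCase):
    def test_truth(self):
        self.assertTrue(True)

def run_all():
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTest)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    # exit status 0 iff every test passed, so a grader (or CI) can gate on it
    sys.exit(0 if run_all() else 1)
```

Then `python run_tests.py` runs everything and returns a non-zero exit status on any failure.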

It is possible that you will, as a result of lessons learned during the implementation, decide you want to change your test plan. If this happens:

  1. update your project 3 test plan accordingly
  2. include, with your submission, a summary and explanation of the changes

After you have completed all of your implementation and testing, and you believe your test suite to be entirely complete:

This activity and report will be graded on the basis of:

P4B.1 Pair Programming Exercise

At least one member of the team will ask another member to join them for at least one pair-programming session. A meaningful pair-programming exercise should produce ~200 lines of code (including tests). How you divide up your effort (think/code, code/review, code/test) is entirely up to you, and you are welcome to try multiple/various approaches.

You should, during this session, do regular (e.g. hourly) commits, and each commit comment should describe the division of responsibilities (who was doing what).
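A commit comment for such a session might look like this (names and module are hypothetical):

```
Add input validation to the command parser (pair session #2)

Driver: Medea (typing, test cases)
Navigator: Algernon (design review, edge cases)
```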

NOTES:

  1. The module used for this exercise must be of moderate to significant complexity (to benefit from two minds), or no points will be earned.
  2. The commit comments must describe the division of work, or no points will be earned.
  3. Whatever component this exercise is done for should not also be used for code review or TDD.

After the end of each pair-programming session, each of the people involved should jot down notes on what happened. After the component has been completed, the two people should get together (ideally discussing it with the entire team) and write up a report on the experience. This report should cover:

When you are ready to submit this report for grading:

This activity and report will be graded on the basis of:

P4B.2 Code Review

At least one member of the team will write all of his/her code, and before running test suites against it, submit that code for review by the other members of their team. The other team members will study the code, prepare notes and conduct a code review, producing a report with must-fix/should-fix/advice items. The author will make the appropriate revisions, and then move on to testing. After the code is working, the author will discuss the process with the rest of the team and then write up a report on the process.

NOTES:

  1. The module used for this exercise must be of moderate to significant complexity (to benefit from review), or no points will be earned.
  2. Whatever component this exercise is done for should not also be used for pair programming or TDD.

The author's review report should include (in addition to the usual information):

When you are ready to submit this report for grading:

The grading of the code review exercise will be based on:

P4B.3 Test Driven Development

At least one member of the team will use Test Driven Development to implement his/her component, building and running the test cases for each increment of code as the new code is added. The rewards for this approach should be:

  1. there should be little debugging to do, and what little there is should be quite simple.
  2. by the time the coding is done, most of the testing will also be done.
But it will require the test framework to be working first, and more up-front planning about the order in which things should be implemented. Write up and commit a plan before you start coding. There are a few tricks to this planning:

As evidence that you did in fact follow a TDD process, and to keep a record of the problems found, please:

NOTE: Whatever component this exercise is done for should not also be used for code review or pair programming.
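By way of illustration, one TDD increment might look like the following sketch (Stack is a hypothetical component, not one from your project; each test is written first, then just enough code is added to pass it):

```python
import unittest

# Hypothetical component under test; each method below was added only
# after a failing test demanded it.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty Stack")
        return self._items.pop()

class TestStack(unittest.TestCase):
    # written before pop() existed; drove its implementation
    def test_push_then_pop(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)

    # written next; drove the empty-stack guard
    def test_pop_empty_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()
```

Committing the test (failing) and then the code (passing) at each step leaves exactly the evidence trail this exercise asks for.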

After completing development, the/each person who uses this methodology will discuss the experience with the team and write a brief report, covering what they did, and specifically addressing the following questions:

When you are ready to submit this report for grading:

This activity and report will be graded on the basis of:

P4C Integration and Sprint Review/Demo

You have built your components and test suites, and you have passed your tests. You were advised to give some thought to independent development and the integration process when you chose these components for specification, design and implementation. It was further suggested that you should have been doing continuous integration (full product builds, starting with trivial mocks of each component) since the middle of project 3. You should now be able to combine your components and demonstrate functionality for the integrated whole.

Now it is time for you to review what you have produced with your "product owner". At the end of each sprint, the team presents the work that was completed during that sprint to the product owner. This is, in part, ceremonial (the team can claim success and receive feedback on the work they have completed) but it is also the SCRUM acceptance/sign-off process:

When higher level modules depend on lower level modules, the correct execution of those higher level modules may be adequate evidence of successful integration. But if the implemented modules are parallel, it may be necessary to create an additional piece to exercise them all together. Part of the score is based on a demonstration that these modules are all working together.
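For the parallel-module case, that additional piece can be quite thin. A sketch, using two hypothetical stand-in components (Parser and Reporter are mine, not names from any project):

```python
# Thin integration driver: exercises two parallel components together
# by passing data through both end-to-end.
class Parser:
    """Stand-in component A."""
    def parse(self, text):
        return text.split()

class Reporter:
    """Stand-in component B."""
    def summarize(self, tokens):
        return f"{len(tokens)} tokens"

def integration_demo(text):
    # output of one component feeds the other
    return Reporter().summarize(Parser().parse(text))

print(integration_demo("all modules working together"))  # prints "4 tokens"
```

Even a driver this small demonstrates that the components agree on their interfaces, which is the point of the integration demo.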

Your review presentation should include:

This is not a long presentation (4-5 minutes will be fine). It might be simplest to have slides to cover the components, their requirements, and their test plans, but this is not necessary.

When you have an idea when you will be ready for your review, contact the professor (or Grutor) to schedule it.

This presentation will be graded on the basis of:

P4D Post-Mortem Report

This project is a learning exercise, and one of the major ways we learn is by analyzing past mistakes. You will, as a team, review all aspects of this project. One of you will then summarize that process into a post-mortem analysis report.

The deliverable is a report summarizing the key issues raised in your post-mortem and the conclusions you came to. Your post-mortem discussion should include:

When you are ready to submit the Post-Mortem report for grading:

This report will be graded on the basis of:

General Comments on Rules and Grading

Submissions

Project deliverables should be submitted by up-loading files to the appropriate Canvas Assignment. Note that several project phases have multiple deliverables (e.g. presentation, meeting notes, and analysis), each of which must be submitted (as a distinct assignment).
Team deliverables need only be submitted by one (any) team member. Personal deliverables (e.g. notes you prepared before a design review, or code you have implemented) must be submitted (individually) by each team member.

Each up-loaded assignment should begin with a standard submission prologue describing:

A typical example might be:
    Team: Kallisti

    Members: Andromeda, Algernon, Medea, Zebulon

    Project: 1D - Final Proposal
       Product proposal: https://github.com/kallisti/proposal.txt
       primary author: Algernon

    Slip Days: 1 (Zebulon)

Where practical, the preferred submission format is ASCII text (the most common format for source code, universally readable on almost any computer), but ...

A short and simple ASCII text submission (e.g. review notes or a post-mortem) can be submitted in the same file, immediately after the submission prologue.
For larger documents maintained on GitHub or in Google Docs (which can therefore be accessed via the web), simply include the GitHub or Google URLs in the submission prologue. If you do this, please ensure that the repo is public or the Google docs are readable by anyone with the URL.

Due Dates and Slip Days

Any assignment that is not turned in by midnight at the end of its due date will have its grade reduced by 10% (of its nominal value) for each late day. I understand that problems happen, and so each student is granted a few (4) slip days. A team of four students would have (between them) four times that number of slip days.

One slip-day will excuse a single day's deliverables being late by up to twenty-four hours.
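The arithmetic, as a sketch (the function name and signature are mine, not part of any course tooling):

```python
def late_grade(nominal, days_late, slip_days_used=0):
    """Grade after the 10%-per-late-day penalty, with each slip day
    excusing up to twenty-four hours of lateness."""
    penalized_days = max(0, days_late - slip_days_used)
    return nominal * (1 - 0.10 * penalized_days)

# e.g. a 25-point deliverable, 2 days late, with 1 slip day applied:
print(late_grade(25, 2, slip_days_used=1))  # prints 22.5
```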

But, be careful about using slip-days:

If you plan on using slip days on a deliverable, please tell me (or your Grutor) that you will be doing so (so that we don't nag you about whether or not we lost your submission).

Team and Individual Grades

When we tackle big projects, we succeed or fail as a team. Consequently, the majority of the grade you earn on a team project will be based on the overall quality of the team's product. But a team can only be successful if everybody is working towards producing quality results. Thus, a non-negligible portion of your grade will be based on your individual contributions:

criterion                       value
quality of primary authorships   10%
share of project work             5%
personal on-time performance      5%

Primary Authorship

While many activities (e.g. Post Mortem review) are fundamentally team activities, each major work product will typically have a primary author: one person who works up the basic organization, pulls together the contributions from the other team members, and writes most of the prose.

The ability to organize large collections of complex information into a coherent narrative is a fundamental skill for any engineer or researcher. Each team member should take primary authorship for multiple work products. The quality portion of the individual contribution grade will be based on the work products for which that person was the primary author (presided over a process, or wrote at least 2/3 of the text in a document).

Each work product submission must include (in its prologue) an indication of the primary author. If (e.g. in a larger document) different sections had different primary authors, include this information in the submission prologue. Primary authorships are important for two reasons:

  1. For a team to handle a complex project, it is usually necessary for a single member to take primary responsibility for each deliverable. I want to see that you can do this.
  2. Even though the whole team will review (and perhaps revise) each major deliverable, most of its quality (depth, clarity, cogency) comes from the original draft. I want to see how well each of you can tell an important technical story.

Team Review of All Deliverables

Since the whole team will be graded on the quality of most work products, it is only prudent for the entire team to review any high-value work product before it is submitted. The description of every deliverable for every project includes:

If you, as a team, review each deliverable (whether team or individual) against those criteria, you should expect to find that both the reviewers and reviewees reap significant benefits:
  1. In reviewing someone else's submission, we have the opportunity to observe and learn from things they have done well.
  2. In analyzing the completeness and cogency of someone else's submission, we will gain a deeper understanding of what makes a good presentation.
  3. We all suffer from confirmation bias, which makes it difficult for us to see things that we have missed. Getting critical review and feedback from others before submitting our final version will greatly improve both our submissions and scores.
This is not merely a trick that students can use to improve their learning and grades in this class. We all depend on our friends and co-workers to help us do our best (or the right thing) on anything that matters.

Share of Project Work

The quantification of individual contributions is an inexact process. The most obvious unit of contribution might be hours worked, though reported hours often seem poorly correlated with results obtained. Nonetheless, team members generally have a pretty good sense of who is working how hard and contributing how much value. At the end of each project, each member of each team should (privately) submit their own assessment of how the overall effort/contributions were divided between the various project activities and team members. Ideally this might be a CSV export from a spreadsheet like:

activity       % tot   Balder   Osirus   Algernon
management      15%     50%      30%      20%
research        25%     75%      25%
report #1       10%     10%      75%      15%
prototyping     35%     25%      15%      65%
final report    15%     50%      25%      25%
overall                 22%      37%      35%
(where the bottom row is a sumproduct of the second column and the per-person columns)
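That sumproduct can be computed as follows; this is a sketch with illustrative (made-up) numbers and names, not the figures from the table above:

```python
# Each member's overall share = sum over activities of
# (activity weight) x (member's share of that activity).
activities = {
    # activity: (weight, {member: share of this activity})
    "management":   (0.20, {"Ann": 0.50, "Bob": 0.50}),
    "research":     (0.30, {"Ann": 0.75, "Bob": 0.25}),
    "final report": (0.50, {"Ann": 0.40, "Bob": 0.60}),
}

def weighted_shares(activities):
    totals = {}
    for weight, shares in activities.values():
        for member, share in shares.items():
            totals[member] = totals.get(member, 0.0) + weight * share
    return totals

print(weighted_shares(activities))  # Ann ~0.525, Bob ~0.475
```

If the per-activity shares each sum to 100%, the overall shares will too.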

These submissions will be kept confidential, and averaged together to get a sense of the individual contributions to each team's efforts.

Management Grading

The team will be graded on how effectively they managed themselves and the work.

  1. Perform the work according to the plan.
    We will determine when a deliverable was completed by looking at the names and dates for the commits.
  2. Monitor progress and detect problems.
    Monitoring plans should be included in your project plan. Status should be reviewed regularly (e.g. daily) with minutes promptly checked in to github.
  3. Reasonably adjust the plan in response to problems.
    Detected problems and responses should be discussed in your post-mortem, and reflected in revisions to the plan.

In assigning the management grades we will review the commit history (or Google Docs history) for your meeting minutes and work plan.
Please include the URLs (e.g. GitHub or Google Docs) for these files in the final submission prologue for each post-mortem you submit.

criterion                                                                               value
regular status checks (as verified by checked-in minutes)                                40%
plan (approach, assignments, schedule) kept up-to-date                                   20%
problems promptly and reasonably addressed, work performed according to (updated) plan   40%

Post-Mortem Analyses

Each project is a learning exercise, and one of the major ways we learn is by analyzing past mistakes. You will, as a team, review all aspects of each project. One of you will then summarize that discussion into a post-mortem analysis report.

Like all good post-mortems, this should be a safe activity ... where there are no penalties for discussing mistakes. Your grade for all post-mortem reports is based not on how well you did in following the project processes, but on what you learned. More specifically, the grade will be based on:

Note that if you made no mistakes and learned no lessons, you will not be able to earn the points for identifying and discussing them. Fortunately, no team has yet found it necessary to deliberately make mistakes in order to have something to analyze. :-)

When your Post-Mortem Analysis is ready for submission, prefix it with a standard submission prologue that includes the primary author and GitHub URLs for:

Collaboration and Citation

This is a team project, but different individuals will have primary responsibility for different processes or work products (or different parts of a single work product or process). Each team will be working on a different type of product. You are free to talk to your team members (and, for that matter, other teams) about the processes you are following. You may review your work products with your own team members, and revise them based on their feedback ... but ...

  1. If you are the primary author of a work-product, you must cite the source of text you did not write (if it is more than a sentence), or any information that did not originate within your team or from your interviews.
  2. You will be doing research to develop your product definition and requirements. I expect that many of the ideas for your product will come from this research. Cite all of your sources for each work product, and explain how each source contributed to your work products.
  3. You may not share any of your work products (other than as required for reviews) with members of other teams.
(Last Updated: "Jan 26 2024")