Patch Peer Review
As in HW 3, our goal is to choose one “patch” (a.k.a. “pull request”) to apply to the OS/161 repository that will provide the implementation of file- and process-related system calls that we'll use in future assignments. We'll do so by reviewing each other's code.
On GitHub, you'll find that there are several new branches in your Homework 3 repository, named review-pseudonym, where each pseudonym anonymizes one team's patch.
One of the solutions is obviously Prof. Melissa's, so we'll disclose that one: it's the alder patch. Also, one of the patches for you to review is your own, which has the name ours.
Important: Maintain Anonymity
To ensure that the review process is as fair and unbiased as possible, do not speculate about who authored any particular branch, or even share with your classmates which pseudonyms you were given to review.
On the CS 131 server, you can test one of these branches by running the following commands (assuming that you previously built your own kernel as described in step 1 of the homework):
```shell
cd ~/cs131/hw4-my-team-name
git pull
git switch review-pseudonym
cd src/kern/compile/SYSCALLS
bmake && bmake install
cd ../../../../root
sys161 kernel
```
In case this fails for some reason, you can also find prebuilt kernels in /usr/local/os161/hw4/kernel- on the CS 131 server. (You can also delete the SYSCALLS directory and try again, starting from configuring the kernel.)
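If you do go back to configuring the kernel, the usual OS/161 sequence looks roughly like this (a sketch assuming the directory layout from the commands above, with SYSCALLS as the kernel configuration name):

```shell
cd ~/cs131/hw4-my-team-name/src/kern/conf
./config SYSCALLS        # regenerate the kern/compile/SYSCALLS directory
cd ../compile/SYSCALLS
bmake depend             # rebuild dependency information
bmake && bmake install   # rebuild and reinstall the kernel
```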
Things to try:
- `p /testbin/hw4test`
- `p /testbin/badcalls`
- or `p /bin/sh` to start the shell, and then run: `hw4test`, `argtest 1 2 3 4`
Pull Requests on GitHub
For each of the review branches, there is a pull request on GitHub to merge that branch into the patch branch. You can find the pull requests by clicking on the Pull requests tab on the GitHub page for your repository.
For each pull request there is a description of the changes that were made (as originally written in lock-cv-design.md). If you click on the Files changed tab, you'll get a page that shows all the changes made to the code in that patch (additions are shown with a green background and deletions with a red background).
There are two different ways of showing the changes: a _unified view_, which shows both the additions and deletions mixed together, and a _split view_, which shows the changes side by side, with the original file on the left and the changed file on the right.
You can switch between these views using the gear icon above the diffs.
If you need more context than is shown by default, there are icons with a dotted line and an up or down arrow you can click to expand the view. (The one in the file header, which has both up and down arrows, shows you the whole file; for each set of changes, the one with just the up arrow shows you more context above the change, and the down arrow icon shows you more context below the change.)
You can add a comment on the Conversation tab by scrolling to the comment box and clicking the green Comment button. You can also add comments on individual files, but that's not necessary.
- For each pull request, you should add a comment with your evaluation of the code.
Your review must use the following template, completed using the guidance given after the template. (You can copy the template with the button beneath it.)
# Review
## Documentation
(Delete this text and replace with a brief evaluation of the documentation.)
**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent
## Clarity
(Delete this text and replace with a brief evaluation of the clarity of the code.)
**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent
## Conciseness
(Delete this text and replace with a brief evaluation of the conciseness of the code.)
**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent
## Fit
(Delete this text and replace with a brief evaluation of how well the code fits with OS/161.)
**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent
## Correctness — FileTable Subsystem
(Delete this text and replace with a brief evaluation of the correctness of the code.)
**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent
## Correctness — OpenFile Subsystem
(Delete this text and replace with a brief evaluation of the correctness of the code.)
**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent
## Correctness — File System Calls
(Delete this text and replace with a brief evaluation of the correctness of the code.)
**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent
## Correctness — Fork and Exit
(Delete this text and replace with a brief evaluation of the correctness of the code.)
**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent
## Correctness — PIDs and Waitpid (and additions to fork and exit)
(Delete this text and replace with a brief evaluation of the correctness of the code.)
**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent
## Correctness — execv
(Delete this text and replace with a brief evaluation of the correctness of the code.)
**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent
Assessment
For each category, you'll delete the placeholder text and write about how the patch meets the requirements for that category.
For the
**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent
block, you should choose the option that best fits your assessment of the code and delete the rest (but don't delete the **Assessment:** part at the front!).
After selecting an assessment, the line should look like this (i.e., all on the same line):
**Assessment:** Proficient
Here's what they mean:
- AboveAndBeyond: The code is excellent in this regard. You're surprised and pleased by how well it was done.
- Good: There's nothing really to nitpick about here. It's solid work.
- Proficient: The code is fine. Maybe a little more could have been done in places, but nothing important has been missed.
- NearlyThere: The code is mostly okay, but there are some minor issues (e.g., errors or omissions) that need to be addressed; the fixes are minor ones.
- NeedsWork: Something important is wrong or omitted. The fix is not trivial.
- Attempted: None of the above, but at least an attempt was made.
- Absent: This aspect is missing entirely.
Note that “Good” and “Proficient” are very similar, and it's often a toss-up which to use. If you're not sure, you can use “Proficient” as a default. “AboveAndBeyond” is for going the extra mile in some way, and you probably won't choose it often, if at all.
Assessing Documentation
For documentation, you should consider the following questions:
- Does the documentation provide a good roadmap for understanding the code?
- Does it provide a rationale for the design decisions taken?
- Does the documentation overdo it, explaining things that are minor details that would readily be understood by someone looking at the code?
Here are some criteria to use to help you choose an assessment:
| Assessment | Criteria |
|---|---|
| AboveAndBeyond | Documentation is Good (as described below) but also makes you think about aspects of the code or design space that you hadn't considered before or weren't obvious from the assignment. |
| Good | Documentation provides a good roadmap for understanding the code in the patch, and provides a rationale for the design decisions taken. |
| Proficient | Documentation provides a good roadmap for understanding the code; or Documentation would be classed as “Good”, but is so verbose as to be annoying. |
| NearlyThere | Documentation is helpful but somewhat incomplete. |
| NeedsWork | Documentation is present, but woefully incomplete, or Documentation attempts to be helpful, but is seriously inaccurate. |
| Attempted | Code has useful comments, but no other documentation is provided. |
| Absent | Essentially no documentation at all. |
Assessing Clarity
For clarity, you should consider the following questions:
- Is the code easy to understand?
- Are the comments helpful?
- Is the code well-organized?
Here are some criteria to use to help you choose an assessment:
| Assessment | Criteria |
|---|---|
| Good | It is very easy to understand all the code and why it's there. |
| Proficient | The code itself is simple and clear, but it feels a bit under or over commented. |
| NearlyThere | Some of the code is hard to follow or seems pointless, or it contains junk comments, commented-out code, or leftover debugging statements. It feels like you might want to clean it up a bit before merging. |
| NeedsWork | Most of the code is hard to follow. |
| Attempted | The code is utterly impenetrable. |
| Absent | There's no code to evaluate. |
Assessing Conciseness
For conciseness, you should consider the following questions:
- Is the code simple and elegant?
- Is there any duplicated or redundant code?
- Does it use its helper functions well?
Here are some criteria to use to help you choose an assessment:
| Assessment | Criteria |
|---|---|
| Good | The code is elegantly simple, it doesn't do anything unnecessary. |
| Proficient | A couple of places where it could perhaps be coded more neatly, but even those might be matters of taste. |
| NearlyThere | Some duplicated code or a few places the code labors too hard to achieve a simple end result. |
| NeedsWork | Code shows "premature optimization", where extra code seems to have been written purely for nebulously justified performance (not correctness) reasons. Or it's way too cryptic. Or it's seriously verbose. |
| Attempted | Large amounts of code missing in action. |
| Absent | There's no code to evaluate. |
Assessing Fit
For fit, you should consider the following questions:
- Does the code match the style of OS/161?
- Does it use the same variable conventions?
- Does it use the same commenting style?
Here are some criteria to use to help you choose an assessment:
| Assessment | Criteria |
|---|---|
| Good | The code seems like it fits very well with OS/161, with identical formatting, variable conventions, commenting style, approach to solving problems, etc. |
| Proficient | Mostly good but one or two small mismatches with OS/161 feel. |
| NearlyThere | Authors seemed to pay little attention to matching style, but the added code is consistent with itself. |
| NeedsWork | Code is inconsistent with itself (and with OS/161). |
| Attempted | Authors seem to believe that OS/161 should change to match their style rather than that they should write their code to match its style. |
| Absent | There's no code to evaluate. |
Assessing Correctness (all categories)
For correctness, you should consider the following questions:
- Does the code basically work?
- Does it have any bugs? Any missed edge cases?
Here are some criteria to use to help you choose an assessment:
| Assessment | Criteria |
|---|---|
| Good | The code seems to run very well, and most edge cases are caught. |
| Proficient | The code seems to run fine, but perhaps crashes in some erroneous-use edge cases. |
| NearlyThere | The code looks okay, but seems to have a bug when run, or The code runs okay, but has a subtle bug we noticed reviewing the code. |
| NeedsWork | Code is clearly buggy. |
| Attempted | Code is unfinished. |
| Absent | There's no code to evaluate. |
Vote for a Favorite by Merging
You may pick one of the patches to merge into the patch branch. To do so, go to its pull request on GitHub and click the Merge pull request button.
If you don't feel that any of the patches you were given were well enough written to merge, you should not merge any of them.