CS 134

Patch Peer Review

As in HW 3 and HW 4, our goal is to choose one “patch” (a.k.a. “pull request”) to apply to the OS/161 repository that will provide the implementation of file- and process-related system calls that we'll use in future assignments. We'll do so by reviewing each other's code.

On GitHub, you'll find that there are several new branches in your Homework 3 repository, named review-pseudonym, where pseudonym is a pseudonym for one of the other teams in the class. This time around, each team is represented by the name of a tree.

One of the patches for you to review is your own, which has the name ours.

Important: Maintain Anonymity

To ensure that the review process is as fair and unbiased as possible, do not speculate about who authored any particular branch, or even share with your classmates which pseudonyms you were given to review.

On the CS 131 server, you can test one of these branches by running the following commands (assuming that you previously built your own kernel as described in step 1 of the homework):

cd ~/cs131/hw5-my-team-name
git pull
git switch review-pseudonym
cd root
(cd ../src/kern/compile/SCHED ; bmake && bmake install)
(cd ../src; bmake && bmake install)
sys161 kernel

In case this fails for some reason, you can also find prebuilt kernels in /usr/local/os161/hw5/kernel-pseudonym on the CS 131 server, and their shell programs as sh-pseudonym. (You can also delete the SCHED directory and try again, starting from configuring the kernel.) Using this approach, you can test the kernel by running:

cd ~/cs131/hw5-my-team-name/root
cp /usr/local/os161/hw5/kernel-pseudonym kernel
cp /usr/local/os161/hw5/sh-pseudonym bin/sh
sys161 kernel
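If an incremental rebuild gets wedged, the note above suggests deleting the SCHED directory and starting again from configuring the kernel. Under the standard OS/161 source layout, that sequence looks something like the sketch below (the directory names assume the usual tree arrangement and the SCHED kernel configuration used earlier; adjust paths to your setup):

```shell
# Rebuild the kernel from a clean slate (sketch; assumes the standard
# OS/161 layout and the SCHED kernel configuration).
cd ~/cs131/hw5-my-team-name/src/kern
rm -rf compile/SCHED                          # discard the stale build directory
(cd conf && ./config SCHED)                   # regenerate compile/SCHED
(cd compile/SCHED && bmake depend && bmake && bmake install)
```

After this, `sys161 kernel` from the root directory should boot the freshly built kernel as before.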

Things to try, from inside their shell:

hw4test >hw4test.out
farm >farm.out

You can also put the following code in a file named echotest.sh:

sleepyecho 250000 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 &
sleepyecho 500000 aa ab ac ad ae af ag ah ai aj ak al am an ao ap aq ar as at au av aw ax ay az ba bb bc bd be bf bg bh bi bj bk bl bm bn bo bp bq br bs bt bu bv bw bx by bz &
sleepyecho 750000 A B C D E F G H I J K L M N O P Q R S T U V W X Y Z &
wait
exit

and run it by writing (in the shell)

sh <echotest.sh

Pull Requests on GitHub

For each of the review branches, there is a pull request on GitHub to merge that branch into the patch branch. You can find the pull requests by clicking on the Pull requests tab on the GitHub page for your repository.

For each pull request there is a description of the changes that were made (as originally written in design.md). If you click on the Files changed tab, you'll get a page that shows all the changes that were made to the code in that patch (where additions are shown with a green background and deletions with a red background).

There are two different ways of showing the changes: a unified view, which shows both the additions and deletions mixed together, and a split view, which shows the changes side-by-side, with the original file on the left and the changed file on the right.

You can switch between these views using the gear icon above the diffs.

If you need more context than is shown by default, there are icons with a dotted line and an up or down arrow you can click to expand the view. (The one in the file header, which has both up and down arrows, shows you the whole file; for each set of changes, the one with just the up arrow shows you more context above the change, and the down arrow icon shows you more context below the change.)

You can add a comment via the Conversation tab or click on the green Review changes button. You can also add comments for each file, but that's not necessary.

  • For each pull request, you should add a comment with your evaluation of the code.

Your review must use the following template, completed using the guidance given after the template.

# Review

## Documentation

(Delete this text and replace with a brief evaluation of the documentation.)

**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent

## Clarity

(Delete this text and replace with a brief evaluation of the clarity of the code.)

**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent

## Conciseness

(Delete this text and replace with a brief evaluation of the conciseness of the code.)

**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent

## Fit

(Delete this text and replace with a brief evaluation of how well the code fits with OS/161.)

**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent

## Correctness — Nanosleep

(Delete this text and replace with a brief evaluation of the correctness of the code.)

**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent


## Correctness — Wake-at-Front

(Delete this text and replace with a brief evaluation of the correctness of the code.)

**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent

## Correctness — More Time Tracking

(Delete this text and replace with a brief evaluation of the correctness of the code.)

**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent

Assessment

For each category, you'll delete the placeholder text and write about how the patch meets the requirements for that category.

For the

**Assessment:** AboveAndBeyond|Good|Proficient|NearlyThere|NeedsWork|Attempted|Absent

block, you should choose the option that best fits your assessment of the code and delete the rest (but don't delete the **Assessment:** part at front!).

After selecting an assessment, the line should look like this (i.e., all on the same line):

**Assessment:** Proficient

Here's what they mean:

  • AboveAndBeyond: The code is excellent in this regard. You're surprised and pleased by how well it was done.
  • Good: There's nothing really to nitpick about here. It's solid work.
  • Proficient: The code is fine. Maybe a little more could have been done in places, but nothing important has been missed.
  • NearlyThere: The code is mostly okay, but there are some minor issues (e.g., errors or omissions) that need to be addressed; in each case the fix is a minor one.
  • NeedsWork: Something important is wrong or omitted. The fix is not trivial.
  • Attempted: None of the above, but at least an attempt was made.
  • Absent: This aspect is missing entirely.

Note that “Good” and “Proficient” are very similar, and it's often a toss-up which to use. If you're not sure, you can use “Proficient” as a default. “AboveAndBeyond” is for going the extra mile in some way, and you probably won't choose it often, if at all.

Assessing Documentation

For documentation, you should consider the following questions:

  • Does the documentation provide a good roadmap for understanding the code?
  • Does it provide a rationale for the design decisions taken?
  • Does the documentation overdo it, explaining things that are minor details that would readily be understood by someone looking at the code?

Here are some criteria to use to help you choose an assessment:

Assessment Criteria
AboveAndBeyond: Documentation is Good (as described below) but also makes you think about aspects of the code or design space that you hadn't considered before or that weren't obvious from the assignment.
Good: Documentation provides a good roadmap for understanding the code in the patch, and provides a rationale for the design decisions taken.
Proficient: Documentation provides a good roadmap for understanding the code; or documentation would be classed as “Good”, but is so verbose as to be annoying.
NearlyThere: Documentation is helpful but somewhat incomplete.
NeedsWork: Documentation is present but woefully incomplete, or attempts to be helpful but is seriously inaccurate.
Attempted: Code has useful comments, but no other documentation is provided.
Absent: Essentially no documentation at all.

Assessing Clarity

For clarity, you should consider the following questions:

  • Is the code easy to understand?
  • Are the comments helpful?
  • Is the code well-organized?

Here are some criteria to use to help you choose an assessment:

Assessment Criteria
Good: It is very easy to understand all the code and why it's there.
Proficient: The code itself is simple and clear, but it feels a bit under- or over-commented.
NearlyThere: Some of the code is hard to follow or seems pointless, or it contains junk comments, commented-out code, or leftover debugging statements. It feels like you might want to clean it up a bit before merging.
NeedsWork: Most of the code is hard to follow.
Attempted: The code is utterly impenetrable.
Absent: There's no code to evaluate.

Assessing Conciseness

For conciseness, you should consider the following questions:

  • Is the code simple and elegant?
  • Is there any duplicated or redundant code?
  • Does it use its helper functions well?

Here are some criteria to use to help you choose an assessment:

Assessment Criteria
Good: The code is elegantly simple; it doesn't do anything unnecessary.
Proficient: There are a couple of places where it could perhaps be coded more neatly, but even those might be matters of taste.
NearlyThere: There is some duplicated code, or a few places where the code labors too hard to achieve a simple end result.
NeedsWork: Code shows "premature optimization", where extra code seems to have been written purely for nebulously justified performance (not correctness) reasons. Or it's way too cryptic. Or it's seriously verbose.
Attempted: Large amounts of code are missing in action.
Absent: There's no code to evaluate.

Assessing Fit

For fit, you should consider the following questions:

  • Does the code match the style of OS/161?
  • Does it use the same variable conventions?
  • Does it use the same commenting style?

Here are some criteria to use to help you choose an assessment:

Assessment Criteria
Good: The code seems like it fits very well with OS/161, with identical formatting, variable conventions, commenting style, approach to solving problems, etc.
Proficient: Mostly good, but one or two small mismatches with the OS/161 feel.
NearlyThere: The authors seemed to pay little attention to matching style, but the added code is consistent with itself.
NeedsWork: Code is inconsistent with itself (and with OS/161).
Attempted: The authors seem to believe that OS/161 should change to match their style rather than writing their code to match its style.
Absent: There's no code to evaluate.

Assessing Correctness (all categories)

For correctness, you should consider the following questions:

  • Does the code basically work?
  • Does it have any bugs? Any missed edge cases?

Here are some criteria to use to help you choose an assessment:

Assessment Criteria
Good: The code seems to run very well, and most edge cases are caught.
Proficient: The code seems to run fine, but perhaps crashes in some erroneous-use edge cases.
NearlyThere: The code looks okay but seems to have a bug when run; or the code runs okay but has a subtle bug we noticed while reviewing it.
NeedsWork: Code is clearly buggy.
Attempted: Code is unfinished.
Absent: There's no code to evaluate.

Vote for a Favorite by Merging

You may pick one of the patches to merge into the patch branch. To do so, you should go to the pull request on GitHub and click the Merge pull request button.

If you don't feel that any of the patches you were given were well enough written to merge, you should not merge any of them.

To Complete This Part of the Assignment…

You'll know you're done with this part of the assignment when you've done all of the following:

(When logged in, completion status appears here.)