The autotesting results are pre-loaded. Switch to the "feedback files" tab to see them.
You will see which tests your code failed, and the code needed to reproduce them. You can run that code and check your output against what the output is supposed to be.
At the top, for most students, you'll see a line like:
uam-master/P1_kazemia3/dattaame/xfor_cov.py 260 6 98% 59, 135, 174, 437-440
59, 135, and 174 are the lines in gamify_cov.py that your testing code didn't run. Any complete testing strategy would run all the lines in gamify_cov.py, but note that an incomplete strategy could also run them all, so full coverage by itself doesn't make a strategy complete. (Please ignore the coverage percentage and line numbers >= 189 -- they are not meaningful.)
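If you want to reproduce a report like this yourself, you can use coverage.py. Below is a minimal sketch, not the exact script the TAs ran; the module names are placeholders, so substitute your own test file and solution file:

    import coverage

    cov = coverage.Coverage()
    cov.start()
    import gamify_tests             # hypothetical name for your test file, which exercises gamify_cov.py
    cov.stop()
    cov.report(show_missing=True)   # prints statement count, missed count, coverage %, and the missed line numbers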
(Some students' tests crashed the coverage script, so there are no coverage results for them.)
init

As mentioned several times in class, initialize() should be run outside of the main block. This was autotested, and was worth 2/100 marks in the assignment.
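In other words, the call should sit at the top level of the module, so that it runs even when the file is imported (for example, by the autotester). A minimal sketch, with illustrative global variable names rather than the project's actual ones:

    def initialize():
        '''Set the global state to its starting values (illustrative body).'''
        global cur_hedons, cur_health        # illustrative variable names
        cur_hedons = 0
        cur_health = 0

    initialize()     # module-level call: runs on import, so the autotester sees initialized globals

    if __name__ == '__main__':
        # code here runs only when the file is executed directly --
        # a call to initialize() placed only here would not run for the autotester
        pass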
The TAs left comments on your code. Move your mouse over the areas highlighted in yellow to see the comments. You can also see the Overall Comments (if there are any) on MarkUs.
Progress was marked out of 18. Marking scheme:
18/18 points for making an effort to code up everything or almost everything, even if there are mistakes
15/18 points for making an effort to code up almost everything that's needed
12/18 points for making a decent attempt, even if it is very incomplete
9/18 points for doing almost nothing (I don't think there was a project like that)
The progress grade (as a percentage) should always be higher than the autotester grade.
The testing strategy grade is out of 20. The marking scheme is as follows:
20/20 points: an excellent attempt, with everything (or nearly everything) systematically tested. It is fine if students argue that by, e.g., testing star_can_be_taken() comprehensively and then testing perform_activity() more lightly in the context of stars, they did their job (see the sketch after this list).
18/20 points: an excellent attempt, but with some omissions (e.g., everything is great, but there is no testing for an interval of exactly two hours).
15/20 points: a clear attempt at testing the functions, but the testing is clearly not comprehensive, and may not be very systematic.
12/20 points: minimal interesting testing beyond the supplied test cases
2/20 points: only the supplied test cases were used
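For example, here is a rough sketch of what "testing star_can_be_taken() comprehensively, and perform_activity() more lightly in the context of stars" might look like. The helper function offer_star(), the signatures, and the expected values below are assumptions for illustration only -- adapt them to the actual interface of your submission:

    import gamify_cov as gamify      # or whatever your solution module is called

    # Comprehensive checks of star_can_be_taken() in several situations
    gamify.initialize()
    assert not gamify.star_can_be_taken("running")       # no star offered yet

    gamify.initialize()
    gamify.offer_star("running")                         # hypothetical name for the star-offering function
    assert gamify.star_can_be_taken("running")
    assert not gamify.star_can_be_taken("textbooks")     # star was offered for a different activity

    # Lighter check that perform_activity() actually uses the star
    gamify.initialize()
    gamify.offer_star("running")
    gamify.perform_activity("running", 30)
    assert gamify.get_cur_hedons() == 3 * 30             # hypothetical expected value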
We have graded about 250 projects -- inevitably, there will be a few mistakes. If you'd like to request a regrade, please send an email to guerzhoy@cs.toronto.edu. Before submitting a regrade request, please keep the following in mind:
My goal is to assign everyone the correct grade. That means I can review your entire submission and regrade all of it. Don't let that discourage you from submitting regrade requests -- if you were graded unfairly, I do want to fix that. At the same time, if you lost a small amount of marks somewhere but got a little more than was fair elsewhere in the project, consider whether you really need to request a regrade.
In a few cases, the autotester assigned a grade that was inappropriately low. For example, a last-minute typo could make it so that nothing ran at all. We tried to fix those kinds of cases.
A borderline case is printing the result in get_hedons() instead of returning it. That is an important mistake, but one we were somewhat lenient about (though people still lost a lot of marks for it), since a large part of the autotester grade relied on get_hedons() working correctly.
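To be clear about the distinction, here is a minimal sketch (the value 3 is made up):

    def get_hedons_wrong():
        print(3)          # displays 3 on the screen, but the function returns None

    def get_hedons_right():
        return 3          # the caller actually receives 3

    assert get_hedons_right() == 3
    assert get_hedons_wrong() is None    # prints 3, but the autotester's comparison would fail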
Cases similar to these might exist, but there were no other common situations in which we adjusted the initial automarking grade.