Hi! My name is Robert Lee and I had the pleasure of supervising events at the inaugural BirdSO invitational. I was the event supervisor for Machines C with Jessica Shah and the co-event supervisor for Fermi Questions B/C with Andrew Zhang and Caleb Chiang. Thank you to all of the teams who competed, and congratulations on surviving a slew of tough tests (from what I've heard). Best of luck in the upcoming regional, state, and national competitions!
Machines C
Statistics:
Mean: 41.4 (27.6%)
Median: 37.3 (24.9%)
St. Dev.: 18.4
Max: 89.8 (59.9%)
Graphs: More in-depth statistics and graphs pertaining to sections and specific questions can be found at this link.
Thoughts:
Overall, I am content with the results, but as always, scores could have been better separated at the bottom. The test was a bit too hard for the field, but I think teams can learn a lot from it. Congratulations to the top teams!
- Section A (Multiple Choice) had a good distribution, with teams averaging around 50%. The questions generally had a high correct rate and were mostly written as simple one-step calculations. The correct rates for questions 7 and 15 were definite outliers: the former was a multiple-select question and the latter used silly EE units (a conversion table is helpful to have on hand!). I do think the number of kinematics questions may have been higher than necessary for a Machines test, but other than that, nothing of note.
- Section B (Free Response) was where many teams hit a wall in terms of points. I hoped that teams would be able to get the first two questions, as I tried to make them more accessible and only used simple machines. I'll try to include drawings next time, as the descriptions may have been unclear. The point allocation in this section was a little top-heavy, with many of the points locked up in questions 4 and 5. Regarding question 3, the device design, I was surprised at the low number of submissions (around 30 in a field of 120 teams), but the teams who did submit averaged around 10-12 points. One point I want to stress is the difference between an inclined plane and a wedge (see the sketch below). The applied forces on the two are different: a mass is pushed parallel to and up an inclined plane, whereas a wedge is pushed into a mass, which then moves upwards (or apart). The directions of the forces on the two machines are fundamentally different, or else they wouldn't be put in separate categories. I saw this confusion happen at GGSO, where planes were substituted for wedges, and, surprisingly, at BirdSO, where the opposite occurred. Questions 4 and 5 were definitely tougher questions aimed at the most prepared teams; however, not a single team made significant progress on them, so I hope teams can attempt them on their own and learn something new!
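A quick sketch of that distinction, using the standard ideal mechanical advantage (IMA) formulas rather than anything specific from the test:

IMA_plane = L / h   (effort applied parallel to the slope, pushing the load up an incline of slope length L and height h)
IMA_wedge = L / t   (effort applied along a wedge of length L and maximum thickness t, driving it into the load, which is lifted or split perpendicular to the motion)

The ratios look alike, but the effort acts along the slope in one case and into the material in the other, which is the difference described above.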
Test Folder:
The exam and all other material can be found in this folder. The folder includes a solution guide walking through each of the questions in section B. I hope teams find it useful!
Fermi Questions C
Statistics:
Mean: 92.4 (30.8%)
Median: 91 (30.3%)
St. Dev.: 30.4
Max: 156 (52.0%)
Graphs: More in-depth statistics and graphs pertaining to specific questions can be found at this link.
Thoughts:
The results turned out as we expected, with a surprisingly nice, even distribution of scores. The test was roughly divided into three (unlabeled) sections: easy (1-15), medium (16-45), and hard (46-60). Teams fared pretty well in the first two sections, as can be seen in the detailed statistics for each individual question, but had a pretty steep drop-off in the last section. We aimed for a good distribution of difficulties and a variety of topics so teams didn't feel overwhelmed by too much of one thing. Unfortunately, no one got the last game theory-esque question.
When grading the tests, we noticed many teams didn't answer using Fermi numbers, but rather in scientific notation or as full numbers (lots of counting digits), and these teams were tiered. Remember! Answers must be Fermi numbers! Regardless, thank you to all of the teams who competed and took our test, even the ones who just took it on a whim and didn't study, because that's really what the event is about: estimating and fudging numbers.
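For reference, here's a minimal sketch (my own illustration, not part of any official grading material) of turning a raw estimate into a Fermi answer, assuming the usual "closest power of ten" convention; the exact rounding cutoff can vary between rule sets, so double-check the rules you're competing under:

import math

def fermi_answer(estimate: float) -> int:
    # A Fermi answer is the exponent of the power of ten closest to the estimate.
    # Rounding here is done on a log scale (cutoff near a mantissa of ~3.16);
    # some rule sets instead round up at a mantissa of 5.
    return round(math.log10(estimate))

# Example: an estimate of 2.4e7 gives a Fermi answer of 7, while 6.0e7 rounds up to 8.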
Thank you to my co-event supervisors, Andrew and Caleb, for being amazing test-writing partners, coming up with some crazy questions, and putting up with my last-minute problem writing.
Test Folder:
The exam and all other material can be found in this folder. The folder includes short write-ups for each question which we hope teams find helpful.
Fermi Questions B
Statistics:
Mean: 63.7 (25.5%)
Median: 65 (26.0%)
St. Dev.: 29.5
Max: 118 (47.2%)
Graphs: More in-depth statistics and graphs pertaining to specific questions can be found at this link.
Thoughts:
The division B test was pretty much the division C test with 10 of the hard section questions taken out. Once again, many teams did not answer in Fermi numbers and had to be tiered because of it. I'm not sure how well Fermi questions can be run in division B, but I hope teams had fun with it!
Test Folder:
The exam and all other material can be found in this folder. The folder includes short write-ups for each question which we hope teams find helpful.
Test Feedback
If you have feedback for either test, feel free to leave it here! I would appreciate it a ton, since feedback helps a lot with gauging what I need to adjust in my tests. The test codes are as follows:
- Machines C: 2021BirdSO-MachinesC-Screw
- Fermi Questions C: 2021BirdSO-FermiQuestionsC-Chickadee
- Fermi Questions B: 2021BirdSO-FermiQuestionsB-Crane