Nationals Event Discussion
-
- Member
- Posts: 25
- Joined: Wed Mar 18, 2015 6:47 am
- Division: C
- State: AL
- Has thanked: 0
- Been thanked: 0
Re: Nationals Event Discussion
Anatomy and Physiology (33rd): 5/10 The overall test was relatively limited. I felt that the diseases got a huge emphasis, which is fine, but there were plenty of concepts from the rules that were left out. The nature of the matching stations allowed the process of elimination to be rather effective, which made those stations really easy. The matching stations had good content, but you didn't really need to understand the content to get questions correct. I liked the question regarding the uptake of amino acids.
Disease Detectives (37th): 10/10 The test was good. This time around I did most of the math, which was probably the reason for our placement. Math is hard.
Remote Sensing (44th): 10/10 Great test. Physics is hard.
2019 Interests: Anatomy, Disease Detectives, Fossils, Experimental Design, Geologic Mapping, Designer Genes
Anatomy/Disease/Experimental/Fossils/Circuit Lab:
MIT: 12/25/13/22
Regionals: 1/1/x/x/1
State: 1/1/2/1/x
Nationals:
-
- Member
- Posts: 2
- Joined: Thu May 10, 2018 2:20 pm
- Division: C
- State: PA
- Has thanked: 0
- Been thanked: 0
Re: Nationals Event Discussion
Herpetology (2): 1/10. The Nationals Herpetology test was very disappointing. The proctor was insistent upon how “challenging” his test was, when in fact it was far too simple for a Science Olympiad Nationals test. The test was made up of 34 stations, each of which had only TWO questions: one was identification, and the other was either a multiple choice question or a fill in the blank (only 2 stations were different: a rest station and one with two trivia questions). The multiple choice and fill in the blank questions were very baseline and simple (“what is the diet of this specimen?” “what does the animal do when in a cool environment?”). These questions are very basic, and each answer could easily be found even on the specimen’s Wikipedia page. Frankly, someone could have printed out the Wikipedia page for each specimen on the list and performed well.

Additionally, some of the multiple choice questions in fact had multiple correct answers (ex: asking what the habitat of soft shell turtles is, with two of the possible answers being “brackish environment” and “freshwater”, when in fact the Chinese soft shell turtle can live in brackish waters and most others live in freshwater). Limiting the answers to multiple choice when the answers are complex and nuanced limited our ability to prove our knowledge and research, and in fact misrepresented many of the specimens. Many teams could have lost points on these sorts of questions (there were many examples of this on the test, most of them relating to habitats).

However, the most egregious part of this test was how the proctor mistakenly provided the answers to the identification portion. He cited the images he provided, many of the citations being URLs that had the NAME of the specimen inside them. He did a very poor job of sharpie-ing out the specimen’s name and thus gave away the answer. This test was too simple for this level of competition, and was evidently poorly run. A Herpetology test like MIT’s, which had in-depth biological and ecological content that required intensive research and knowledge, would have been much more appropriate for Nationals. On the Nationals test, we weren’t asked to identify sounds, know about evolutionary history or the specific anatomy of reptiles and amphibians (barring 1 question), or provide detailed information on every specimen. Months and months of work go into researching and learning this event, and all of it felt wasted after taking this test.
Rocks and Minerals (9): 8/10. This event was run very efficiently and had great samples. However, the questions were on the easy side and were strikingly similar to last year's test.
Experimental Design (10). 7/10. Plenty of materials, allowed for interpretation and creativity with the given topic. Seemed better than years before.
Last edited by heterodon on Sun May 20, 2018 7:19 pm, edited 1 time in total.
Re: Nationals Event Discussion
Astronomy (3rd): 9/10. Overall, a well-written test with a good level of complexity (as per usual). DSOs were pretty doable with maybe a few difficult questions and images, and General Knowledge and Physics had some interesting questions. I guess I expected it to be a little bit harder but I was very happy with the quality of the test.
Herpetology (2nd): 1/10. See heterodon's comment. The main thing I have to say about this test is that unfortunately it was nowhere near the difficulty level that a Nationals test should be, and while I appreciate the effort the supervisor(s) put in, it was a shame to see all the work and studying we put in basically go unused because of how easy the test was. Hopefully changes are implemented next year and we get a test that is more pertinent to a national level of competition.
-
- Member
- Posts: 23
- Joined: Sat Dec 21, 2013 10:06 am
- Division: Grad
- State: TX
- Has thanked: 0
- Been thanked: 0
Re: Nationals Event Discussion
Game On (11): 10/10. It was run well, and I liked how systematic the proctors were about saving your game and everything like that. The prompt was 2-player racing, Polymerization.
Dynamic Planet (6): 6/10. Extremely weird test. I really don't understand why the proctor was so set on making a "different" test (the multiple choice had 20 answer choices each over some random map, and the free response was basically a dump of everything you know about the specified lettered sections of the rules manual). It was a bit disappointing: last year's entire Nationals test was gravity and magnetic anomalies, we primarily studied those for the month we had after State, and none of that showed up. It was an okay test in the end, but definitely not something I would expect at Nationals.
Herpetology (31): 3/10. 34 stations and 1:30 for 2 questions per station. I don't really know what happened here; the test was way too easy for Nationals and I'm guessing we made a few mistakes that dropped us down to the bottom half of teams. It really is disappointing to put so much time into an event, medal in it at MIT, and then bomb it at Nationals. If I could redo this event, I have no idea what I would do differently. Taking the test itself was really boring and tedious; usually the adrenaline of competition makes the time pass fast and makes you forget about everything except the test itself. I constantly found myself trying to stay focused and keep working or checking my answers, and it really irked me that the proctor prided himself on how challenging this test was, when every question asked the most stale and basic questions in existence. Overall, I have no idea what to think about this, from blaming myself for not being good enough to wondering what there was even to be done when no one is immune to a few ID mistakes, which by themselves shouldn't determine your skill level.
The tournament was really nice overall; you could tell it was very well planned, and the opening/awards ceremonies were all amazing experiences. If Nationals were judged purely on this, it'd be a 10/10. Despite that, it was disappointing to do so much to get here in the first place (Texas bloodbath) and end up with tests comparable to Cyfalls in rigor and relevance to the rules. I felt that the proctors tried too hard to make "different" tests, and many of them didn't know the fine line between creative and janky. I've heard of some events (all the physics ones) being run extremely well, which is great, but for me the tests were mediocre in general. I remember at MIT that taking those intensive tests was fun, and the creativity of each test made me smile as I was writing as fast as I possibly could. Here, everything just seemed bland and sketchy. Nationals was a great experience, but the tests really decreased the quality of the competition overall. I've been dreaming about going to Nationals since I joined Science Olympiad in 7th grade, and finally having gone, the tests really changed my view on Nationals. I'd still like to thank all the proctors for taking the time to write all these tests; they had good intentions, and it's a bit unfair of me to judge them so harshly. Sorry for these massive paragraphs :P
Last edited by birdylayaduck08 on Sun May 20, 2018 11:41 pm, edited 1 time in total.
Seven Lakes High School '19
-
- Member
- Posts: 175
- Joined: Sun Jun 25, 2017 7:06 am
- Division: Grad
- State: TX
- Has thanked: 0
- Been thanked: 1 time
Re: Nationals Event Discussion
Experimental Design (19th): 5/10 While this event was run okay overall and the proctors knew what they were doing, I was disappointed with the actual prompt given. The topic of the experiment was momentum, and they gave a ton of different-sized balls as well as some other materials that I don't recall. I was really looking forward to a topic that was more interesting and unique, as those are fun to do and they help differentiate teams better. Also, the proctors seemed a bit pushy with our team, making us describe our experiment halfway through the event and having us redo some portions of the procedure, which took away a good amount of time. Overall, I'd say that it was a pretty mediocre experience.
Hovercraft (4th): 7/10 Although I did not personally get to see the build portion for this event (I stayed behind while my partner handled the device testing), I heard it was run pretty well. The tracks were similar to the ones from MIT (or maybe the same, idk), which caused some issues for other teams because of how slippery they were, but it didn't seem to be a problem for us. As for the test, it was definitely a bit on the easy side, and I wished it could've been more challenging. Furthermore, a lot of the questions were pulled straight out of last year's Hovercraft Nationals test, which was a bit irking to see. Despite this, the test had a couple of good questions (I liked the calc questions spread out across the test!) and I had an overall good experience taking it.
Remote Sensing (1st): 9/10 I felt that this test was exceptionally hard (I actually thought it was harder than last year's), and my partner and I had a ton of fun taking it. So many Remote Sensing tests are just recalling random facts from the notes sheet (knowing the date of launch of a specific satellite or finding the wavelength of a specific band of a given instrument), and none of them seem to really test the interpretation aspect of the event, which is undoubtedly the most difficult part. Overall, I had a really awesome experience taking the test, even if half the time we were lost while taking it. While I'm sad to see that this event is leaving next year (having started this event midway through the year at MIT), I'm glad that I got to end the year this way.
Thermodynamics (12th): 9/10 I was extremely pleased with the proctors because of how well they ran the event (better than any other tournament I have been to this year). I appreciated how much care they took in ensuring that the water bath stayed at the designated temperature and how efficient they were with transferring the water into the beakers using the syringes. I was also happy that they actually followed the rules and allowed us to measure the temperature of the water before making the prediction, which was definitely not the case at other tournaments my partner and I have been to. As for the test, it was of good difficulty, but I wished that it had a lot more calculation questions, which are wayyy more fun in my opinion. However, I did like that there were only a few history questions (unlike some tests, which are literally half history questions). Taken as a whole, this event was a great experience and I hope that other proctors in the future will emulate these same methods.
Last edited by Justin72835 on Mon May 21, 2018 5:41 am, edited 2 times in total.
"The fault, dear Brutus, is not in our stars,
But in ourselves, that we are underlings."
University of Texas at Austin '23
Seven Lakes High School '19
-
- Member
- Posts: 128
- Joined: Sun Apr 30, 2017 12:27 pm
- Division: Grad
- State: MI
- Has thanked: 0
- Been thanked: 0
Re: Nationals Event Discussion
Justin72835 wrote: (I liked the calc questions spread out across the test!)
There were calculus questions on the test? I thought that was generally frowned upon in Scioly.
Unless you just mean calculation problems lol
University of Michigan Science Olympiad Executive Board
-
- Moderator
- Posts: 4315
- Joined: Sun Jan 26, 2014 12:48 pm
- Division: Grad
- State: GA
- Has thanked: 216 times
- Been thanked: 75 times
Re: Nationals Event Discussion
birdylayaduck08 wrote: Herpetology (31): 3/10. 34 stations and 1:30 for 2 questions per station. I don't really know what happened here; the test was way too easy for Nationals and I'm guessing we made a few mistakes that dropped us down to the bottom half of teams. It really is disappointing to put so much time into an event, medal in it at MIT, but bomb it at Nationals. If I could redo this event, I have no idea what I would do differently. Taking the test itself was really boring and tedious, usually the adrenaline of the competition makes the time pass fast and makes you forget about everything except for the test itself. I constantly found myself trying to stay focused and keep working or checking my answers, and it really irked me that the proctor prided himself on how challenging this test was, when every question asked the most stale and basic questions in existence. Overall, I have no idea what to think about this, from blaming myself for not being good enough to wondering what there was even to be done when anyone is infallible to a few ID mistakes and this in itself shouldn't determine your skill level.
This basically exactly describes what I think of the test. I spent the second half of most of the stations sitting on the tables to rest my feet because we had absolutely nothing to do, despite going very slowly on most stations. Also would like to second an earlier comment about incredibly vague answer choices (that one about the habitat of Trionychidae stood out to me as well, and I think there was something similar about Malaclemys).
The only reason I gave it even 5/10 is because the station rotation was well-designed.
-
- Member
- Posts: 592
- Joined: Thu Jan 05, 2017 9:39 am
- Division: Grad
- State: OH
- Has thanked: 0
- Been thanked: 1 time
Re: Nationals Event Discussion
heterodon wrote: Herpetology (2): 1/10. The Nationals Herpetology test was very disappointing. The proctor was insistent upon how “challenging” his test was, when in fact it was exceedingly too simple for a Science Olympiad Nationals test. The test was made up of 34 stations, each of which only had TWO questions: One was identification, and the other was either a multiple choice question or a fill in the blank (only 2 stations were different: a rest station and one with two trivia questions). The multiple choice and fill in the blank questions were very baseline and simple (“what is the diet of this specimen?” “what does the animal do when in a cool environment?”). These questions are very basic, and each answer easily found even on the specimen’s Wikipedia page. Frankly, someone could have printed out the Wikipedia page for each specimen on the list and could have performed well. Additionally, some of the questions that were multiple choice in fact had multiple correct answers (ex: asking what the habitat of soft shell turtles are, two of the possible answers being “brackish environment” and “freshwater”, when in fact the Chinese soft shell turtle can live in brackish waters and most others live in freshwater). Limiting the answers to multiple choice when the answers are complex and nuanced limited our ability to prove our knowledge and research, and, in fact misrepresented many of the samples. Many teams could have lost points on these sorts of questions (as there were many examples of this on the test, most of them relating to habitats). However, the most egregious part of this test was how the proctor mistakenly provided the answers to the identification portion. He cited the images he provided, many of the being urls which had the NAME of the specimen inside of it. He did a very poor job of sharpie-ing out the specimen’s name and thus provided the answer. This test was too simple for this level of competition, and was evidently poorly run. A Herpetology test like MIT, which had in-depth biological, ecological content that required intensive research and knowledge would have been much more appropriate for Nationals. On the Nationals test, we weren’t asked to identify sounds, know about evolutionary history, specific anatomy of reptiles and amphibians (barring 1 question), or provide detailed information on every specimen. Months and months are slaved into researching and learning this event, and all of this felt wasted after taking this test.
Rocks and Minerals (9): 8/10. This event was run very efficiently and had great samples. However, the questions were on the easy side and were strikingly similar to last year's test.
Experimental Design (10). 7/10. Plenty of materials, allowed for interpretation and creativity with the given topic. Seemed better than years before.
I agree with both herp and mission analysis even if I can't complain too much.
Mission was disappointing. If you'd like to know more about why, see the mission thread.
Solon '19 Captain, CWRU '23
2017 (r/s/n): Hydro: 3/5/18 Robot Arm: na/1/1 Rocks: 1/1/1 2018 (r/s/n): Heli: 2/1/7 Herp: 1/4/4 Mission: 1/1/6 Rocks: 1/1/1 Eco: 6/3/9 2019 (r/s/n): Fossils: 1/1/1 GLM: 1/1/1 Herp: 1/1/5 Mission: 1/1/3 WS: 4/1/10 Top 3 Medals: 144 Golds: 80
-
- Member
- Posts: 175
- Joined: Sun Jun 25, 2017 7:06 am
- Division: Grad
- State: TX
- Has thanked: 0
- Been thanked: 1 time
Re: Nationals Event Discussion
MIScioly1 wrote:
Justin72835 wrote: (I liked the calc questions spread out across the test!)
There were calculus questions on the test? I thought that was generally frowned upon in Scioly. Unless you just mean calculation problems lol
There was one calculus free response and one calculus multiple choice. They're always super fun when they come up, even if they are frowned upon by Science Olympiad.
"The fault, dear Brutus, is not in our stars,
But in ourselves, that we are underlings."
University of Texas at Austin '23
Seven Lakes High School '19
-
- Member
- Posts: 34
- Joined: Wed Sep 14, 2016 5:10 am
- Division: C
- State: PA
- Has thanked: 0
- Been thanked: 0
Re: Nationals Event Discussion
Justin72835 wrote:
MIScioly1 wrote:
Justin72835 wrote: (I liked the calc questions spread out across the test!)
There were calculus questions on the test? I thought that was generally frowned upon in Scioly. Unless you just mean calculation problems lol
There was one calculus free response and one calculus multiple choice. They're always super fun when they come up, even if they are frowned upon by Science Olympiad.
Technically, the hydrostatic force problem didn't need calculus if you had the integrated form of the equation.
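(For anyone curious, the "integrated form" being referenced is presumably the standard hydrostatic force result; the actual test problem isn't reproduced here, so this is just the general formula. For a plane surface of area $A$ submerged in a fluid of density $\rho$, with its centroid at depth $\bar{h}$ below the free surface, the gauge force is $F = \int_A \rho g h \, dA = \rho g \bar{h} A$, so instead of setting up the integral you can plug the centroid depth straight into $\rho g \bar{h} A$; the calculus is already baked into the formula.)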