Re: Replaying Events for 2021
Posted: March 12th, 2020, 2:34 pm
by pepperonipi
Unome wrote: ↑March 12th, 2020, 1:48 pm
LIPX3 wrote: ↑March 12th, 2020, 12:05 pm
I know this has been said, but Detector is going to be an absolute joke next year.
I've heard it was already considered very easy back when it was a trial event, do you have any more info?
Sure. These are my issues with the event.
Like Umaroth said somewhere earlier, luck is a huge factor with the devices because most of the top teams are able to build accurate temperature sensors. However, a lot of variation comes from factors like where you place the probe end in the water and timing. In hot water (say, 70 °C), I have found that moving the probe from the bottom of a cup to the top can change the reading by about 1-2 degrees, which is a pretty significant margin. Time can also factor into it: your device may be 0.0-0.1 degrees off 95% of the time, but then the tournament's probe lags behind for a second and suddenly luck has made the difference between the probe and your device 0.2 degrees, while other teams didn't experience this. I understand that the devices at nats are all supposed to be pretty good, but this is ridiculous - at this point it's just luck between the top teams.
The design log is basically free points, if you do it. Most supervisors just scan over the info, and the design log doesn't even make sense in a variety of ways. For example, what's the point of showing your equation on a physical graph when you may just change it during the calibration period anyways?
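To be concrete about what that equation is: it's just a fit of temperature against the sensor's raw reading, recomputed from a few reference points. A rough sketch of what that amounts to is below - every number is made up, and a simple linear fit is only one way to do it, not the official method:

#include <cstdio>

// Hypothetical calibration data: (raw ADC reading, reference thermometer reading in C).
// All of these values are made up for illustration.
const double kReadings[] = {310, 520, 700, 880};
const double kTempsC[]   = {5.0, 25.0, 45.0, 70.0};
const int kN = 4;

int main() {
    // Ordinary least-squares fit of T = m * reading + b.
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < kN; ++i) {
        sx  += kReadings[i];
        sy  += kTempsC[i];
        sxx += kReadings[i] * kReadings[i];
        sxy += kReadings[i] * kTempsC[i];
    }
    const double m = (kN * sxy - sx * sy) / (kN * sxx - sx * sx);
    const double b = (sy - m * sx) / kN;
    std::printf("T = %.4f * reading + %.2f\n", m, b);
    return 0;
}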
The LEDs are obviously easy to get as well - this basically just comes down to simple wiring and coding skills. Literally something along the lines of "if (T > 20) { redLED(true); greenLED(true); blueLED(false); } ..."
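For illustration, here is roughly what the whole LED requirement boils down to in an Arduino-style sketch. The pin numbers, temperature cutoffs, and the readTemperatureC() calibration are all placeholders I made up, not the actual rule values:

// Rough sketch only -- pins and cutoffs are placeholders, not the real rule ranges.
const int RED_PIN   = 2;
const int GREEN_PIN = 3;
const int BLUE_PIN  = 4;

const float LOW_CUTOFF_C  = 25.0;  // placeholder boundary between "cold" and "mid"
const float HIGH_CUTOFF_C = 50.0;  // placeholder boundary between "mid" and "hot"

float readTemperatureC() {
  // Placeholder: convert the analog reading from a thermistor divider on A0
  // using your own calibration equation. This linear map is just an example.
  int raw = analogRead(A0);
  return 0.1 * raw - 10.0;  // made-up slope and intercept
}

void setup() {
  pinMode(RED_PIN, OUTPUT);
  pinMode(GREEN_PIN, OUTPUT);
  pinMode(BLUE_PIN, OUTPUT);
}

void loop() {
  float t = readTemperatureC();

  // One LED per temperature range -- this is the entire "logic" the rules ask for.
  digitalWrite(RED_PIN,   t < LOW_CUTOFF_C                       ? HIGH : LOW);
  digitalWrite(GREEN_PIN, t >= LOW_CUTOFF_C && t < HIGH_CUTOFF_C ? HIGH : LOW);
  digitalWrite(BLUE_PIN,  t >= HIGH_CUTOFF_C                     ? HIGH : LOW);

  delay(200);  // update a few times per second
}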
The parameters for the test are four bullet points in the rules. I don't have a huge problem with this, but considering that in many cases this has been what separated teams (such as at Princeton), it would be nice to see some more expansion in this section.
Also, this event really needs some clarification on what is and isn't allowed. That caused a lot of confusion and FAQs this year, and a lot of frustration when a competitor was told that the piece they were using was no longer allowed. Just my two cents.
Re: Replaying Events for 2021
Posted: March 12th, 2020, 2:36 pm
by pb5754
chalker wrote: ↑March 11th, 2020, 5:51 pm
Another point that hasn't been raised is the running of trial events. It's very likely that many tournaments next year will end up running events that were scheduled for the 20-21 season as trial events. That will be an opportunity for competitors to participate in new things or events they might have been looking forward to.
tbh I think it may be better to instead run the 2019-2020 events as trials and just switch to the originally scheduled events for 2020-2021.
Re: Replaying Events for 2021
Posted: March 12th, 2020, 2:47 pm
by SilverBreeze
pepperonipi wrote: ↑March 12th, 2020, 2:34 pm
Also, this event really needs some clarification on what is and isn't allowed. That caused a lot of confusion and FAQs this year, and a lot of frustration when a competitor was told that the piece they were using was no longer allowed. Just my two cents.
People have had to rebuild after an FAQ came out, and there was a close call with the regression thing at the Solon invite... I hope the Detector rules are fixed next year.
Maybe the Bird List could be fixed as well? The no-annotations rule means spelling mistakes in scientific names can't be corrected, so you just kind of have to remember the correct spelling, and some of the scientific names don't match the given common name... also, only some species have scientific names at all...
Re: Replaying Events for 2021
Posted: March 12th, 2020, 3:38 pm
by LIPX3
pepperonipi wrote: ↑March 12th, 2020, 2:34 pm
Like Umaroth said somewhere earlier, luck is a huge factor with the devices because most of the top teams are able to build accurate temperature sensors. I understand that the devices at nats are all supposed to be pretty good, but this is ridiculous - at this point it's just luck between the top teams.
That's about what I was going to say. Everyone who knows what they're doing is so close in score that who does well just comes down to luck. I might make a video about how to build a competitive device to try to force Science Olympiad into making the rules less simple.
Re: Replaying Events for 2021
Posted: March 12th, 2020, 3:54 pm
by BadDai
Would it still be possible for the Science Olympiad committee to change the events back to what they were originally supposed to be for next year?
Re: Replaying Events for 2021
Posted: March 12th, 2020, 3:58 pm
by BennyTheJett
BadDai wrote: ↑March 12th, 2020, 3:54 pm
Would it still be possible for the Science Olympiad committee to change the events back to what they were originally supposed to be for next year?
Anything's always possible, but Science Olympiad is known to have its mind made up by the time it posts anything that important, so it'd be quite unusual for them to go back on it. I have my fingers crossed, though!
Re: Replaying Events for 2021
Posted: March 12th, 2020, 4:06 pm
by Jdh3
I recognize Science Olympiad is not a democracy, but it would make sense to me to have each State Director send an email (or SurveyMonkey poll) to every coach in their state asking whether they want new events or repeat events. The results could then be tallied and the true sentiment of the competitors recognized. If we can do it with hundreds of millions of votes, we can do it with a small number of schools (no State has more than 800).
I also realize this would mean a possible shift for the Rules writers, but most of the rules were well along anyway.
Thanks
Re: Replaying Events for 2021
Posted: March 12th, 2020, 5:02 pm
by JoeyC
Jdh3 wrote: ↑March 12th, 2020, 4:06 pm
I recognize Science Olympiad is not a democracy, but it would make sense to me to have each State Director send an email (or SurveyMonkey poll) to every coach in their state asking whether they want new events or repeat events.
I think most of the people on the forums support you on this. Unfortunately, SOinc has made it pretty clear that they don't really care about this type of thing.
I mean, just look at the bid situation...
Re: Replaying Events for 2021
Posted: March 12th, 2020, 5:12 pm
by BennyTheJett
JoeyC wrote: ↑March 12th, 2020, 5:02 pm
I think most of the people on the forums support you on this. Unfortunately, SOinc has made it pretty clear that they don't really care about this type of thing.
I mean, just look at the bid situation...
What specifically do you mean about the bid situation? For the most part, I think SOinc does a pretty good job with bids. I know Texas disagrees a little bit for reasons I won't go into, but that's the way some things work. The system isn't as flawed there as it is with repeating the events.
Re: Replaying Events for 2021
Posted: March 12th, 2020, 5:20 pm
by JoeyC
Yeah... but there's a lot that could be done about bids in quite a few states that wouldn't be too hard. However, SOinc doesn't really care.
To put it in different terms, it feels like Nintendo: it makes good stuff, but doesn't support its users.