This year's Gravity Vehicle is different from most other vehicle events. Speed isn't scored, so there's no controllable, consistent factor separating teams. Because of this change (understandable in a virtual format, but far from optimal), distance to the target point is the only thing that counts.
Mousetrap Vehicle, for instance, includes speed in its score, so teams have to optimize how they build the lever arm to score better. In Battery Buggy, teams have to curve the vehicle between two cans, which pushes them to design as thin a car as possible. And Gravity Vehicle is the only vehicle event whose vehicle starts off the ground.
I've pushed for a 3-run/best-of-2 system before (see here or here). For most vehicle events this really isn't an issue - the best run out of two is fine (and has worked for the past few years). But in Gravity Vehicle, scores are far too close: at MIT, medalling (top 6) came down to 3 cm - within the margin of a slanted floor, an off day, or simply an imperfect track. A 3-run/best-of-2 system eliminates some of the luck involved, but not nearly enough.
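To see why extra runs only partially remove the luck, here's a rough Monte Carlo sketch. Everything in it is invented for illustration - the noise level, the 1 cm true skill gap, and the reading that only the best run counts - but it shows how often a truly worse team can still outplace a better one:

```python
import random

# Rough sketch (all numbers invented, not from real data): each run's miss
# distance (cm) is a team's "true" miss plus random noise from slanted
# floors, imperfect tracks, or an off day.

def day_score(true_miss_cm, noise_sd_cm, n_runs):
    """Best (smallest) miss out of n_runs on one competition day."""
    return min(abs(random.gauss(true_miss_cm, noise_sd_cm)) for _ in range(n_runs))

def upset_rate(n_runs, trials=20_000):
    """How often a team that is truly 1 cm worse still places higher."""
    upsets = 0
    for _ in range(trials):
        better = day_score(2.0, 3.0, n_runs)  # true miss: 2 cm
        worse = day_score(3.0, 3.0, n_runs)   # true miss: 3 cm
        if worse < better:
            upsets += 1
    return upsets / trials

for n in (2, 3):
    print(f"{n} runs, best counted: worse team wins ~{upset_rate(n):.0%} of the time")
```

Under these made-up assumptions, going from two runs to three barely moves the upset rate - the noise dominates either way.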
In a virtual setting, every track is different. Many teams simply don't have the track length to run the minimum distance, 9 meters. Many can't even run 6. Disadvantaged teams are forced to find other places - and in many cases to go outside, defeating the purpose of an "at home" competition. At both SO Practice November and SO Practice December, the only teams that finished less than 15 centimeters from the target tested in a school or home setting. Doing well is no longer about having a good vehicle. It's about having a good track.
Even when teams can run at home and have the length, many hallways aren't wide enough for a vehicle that veers even slightly to either side. A few quotes from the video submissions:
Anonymous competitor wrote (after hitting a kitchen island): ...and that's what happens when you're in a house...

Anonymous competitor wrote: ...I don't know anyone that has a 9 meter long [hallway] in their house...

Because of this, competitors are resorting to driveways, sidewalks, and tennis courts to test their vehicles. How can you expect a team to get, let's say, 1 centimeter away when the gaps in the road are larger than 1 cm?
The whole point of vehicle events is to test across a range of distances - announcing the target distance 24-48 hours in advance just lets teams test an unlimited number of times before their run. The distances may as well be announced a week or two before the competition; it wouldn't make a difference.
I've heard of competitions lowering distances to 4-6 or 6-9 meters. This solution also blows my mind - if vehicle events worked at distances this short, the rules would've been changed already. Halving the target distance roughly halves the horizontal miss distances too, putting winning distances at competitive tournaments as low as 0.20 or 0.25 cm (the winning score at MIT was 0.5 cm; GGSO was similar). You can't score accurately at that point - there will be ties at high levels of competition (with measurements to tenths of a centimeter, per the rules).
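As a quick illustration of the tie problem (with invented scores): once winning misses shrink to a quarter of a centimeter, rounding to tenths per the rules leaves only a handful of possible values at the top of the leaderboard:

```python
# Hypothetical miss distances for six "medal" teams, in cm - invented
# numbers, chosen to match the ~0.25 cm winning range described above.
raw_misses_cm = [0.04, 0.11, 0.13, 0.18, 0.22, 0.24]

# Rules measure to the nearest tenth of a centimeter:
rounded = [round(m, 1) for m in raw_misses_cm]
print(rounded)                                    # [0.0, 0.1, 0.1, 0.2, 0.2, 0.2]
print(len(set(rounded)), "distinct scores for", len(rounded), "teams")
```

Six medal-contending teams collapse onto three distinct scores - there's no way to break those ties on measurement alone.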
Teams don't know how to measure distances to target points. A surprising number of teams (~1/3) measured only the straight-ahead distance to the target and ignored the lateral offset - their taped "target point" was a line at the end of the track. It's impossible to grade when so many teams make the same mistake - do you tier them? Do you DQ them, or give participation points?
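For reference, here's the geometry those teams got wrong, with hypothetical numbers: the score should be the straight-line (diagonal) distance from the vehicle's measuring point to the target point, not just the gap to a taped line:

```python
import math

# Minimal sketch of the measurement geometry (numbers are hypothetical).
overshoot_cm = 3.0   # how far past (or short of) the target line
lateral_cm = 4.0     # how far left/right of the target point

to_line = overshoot_cm                           # what ~1/3 of teams measured
to_point = math.hypot(overshoot_cm, lateral_cm)  # diagonal distance to the point

print(f"distance to line:  {to_line:.1f} cm")    # 3.0 cm
print(f"distance to point: {to_point:.1f} cm")   # 5.0 cm
```

A vehicle 3 cm past the line but 4 cm off to the side scores 5.0 cm, not 3.0 - a huge difference when medals are decided by millimeters.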
Many teams laid metersticks end to end nine times to measure out 9 meters. Others had tape measures only 6 meters long. How can you expect a straight, accurate track when teams can't even set the track up properly?
All of this combined - distances given out early, unreliable measurements, lack of testing space - defeats the scientific purpose of testing and trials. I'm doing Science Olympiad to learn engineering, not to pray for good luck every time I compete.
Like some others, I'm a fan of Vehicle Design - some type of report + interview/short-answer event, where the effort put in leads directly to a result (with no luck involved). Teams could use CAD software, mathematics, or physics to actually design vehicles and demonstrate their knowledge of how the event works and of the design process. However, subjectivity must be eliminated:
1) Release a rubric ahead of time, like the one given in Experimental Design.
2) Eliminate the interview and instead ask short-answer questions about designing the vehicle - 3D printing, physics, CAD, or other topics relevant to how one might construct it. These questions could fit a 50-minute test format, but they would have to account for differences in vehicles and in construction processes.
Something to note: I've heard concerns about subjectivity in report grading. But if that's an issue here, shouldn't it be an issue for Protein Modeling and Experimental Design as well?
I've competed in some sort of vehicle event at almost every competition I've been to, most recently BEARSO (which ran Vehicle Design). I was intrigued by the opportunity to apply the skills I've learned in a more formal setting. I do hope this event becomes a possibility in the future; it also lets disadvantaged/underfunded teams compete in an online setting.
At BirdSO, we're running both Gravity Vehicle and Mousetrap Vehicle as trial events - leaving 22 events that count toward team score, plus an extra trial event. Hopefully this shows other competitions the options they have and leads them to reconsider whether these should count toward team score. If any tournament directors are reading this - I hope you take it into consideration.