The programmers find ways to overcome challenges in coding: how to program autonomous routines and write vision code to complete the objectives of the challenge.
For this year's vision code, we are continuing to use Python and OpenCV to give sight to our robot. We are stepping up from a Jetson TK1 to a Jetson TX1, and stepping up our toolchain as well. Instead of using NVIDIA's tuned version of OpenCV 2.4.x and Python 2.7, we are moving to our own optimized OpenCV 3.3.1 and Python 3.5. In our testing, we have seen a 15-20% improvement in execution speed just from the transition from Python 2.7 to 3.5. OpenCV 3.x has significantly improved Python bindings and optimizations for NVIDIA's 64-bit Tegra processors, so we anticipate better performance.
This year, we are focusing our efforts on identifying both scale and switch lighting patterns. We are also working on identifying power cube pose and positioning the robot for the endgame climb.
As a team that programs our robot with LabVIEW, we had been using the tried-and-true basic LabVIEW robot architecture since the introduction of the cRIO, many years ago. With Steamworks, we really started to chafe at the limitations of this framework when we tried to build more semi-autonomous modes to execute during Teleop. Other teams had hit this frustration before us, and worked with National Instruments to create the new Command and Control architecture to address it.
For FIRST Power Up, we will be moving our programming to this new architecture, and we can't wait to realize some of the new benefits. Each subsystem on the robot gets its own space to implement its own methods and modes, which will make both autonomous and semi-autonomous operation much more flexible.
Holonomic means rolling without slipping. The whole point of this drive type is that, when implemented properly, the rollers on the wheels roll without slipping, giving nimble control without the power consumption caused by the wheel scrub seen in tank drive. The basic holonomic drive code provided by WPI is a great solution for most teams, provided your drive system fits within some very specific constraints. You need exactly four wheels that match in type, size, and distance from the spin center of the robot. Further, the wheels must be paired on orthogonal axes if they are omni wheels, or in a similar arrangement of the contact points of the parallel-axis rollers if they are mecanum wheels. These restrictions impose wheel positions that sketch out a square, which doesn't usually match well with the allowed or optimal robot dimensions for each year's contest.
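The square-footprint case that the WPI code handles can be sketched with the classic mecanum mixing equations. This is a Python illustration of the idea, using one common sign convention; WPI's LabVIEW implementation may order the axes differently:

```python
def mecanum_speeds(vx, vy, omega):
    """Classic mixing for four mecanum wheels on a square footprint.

    vx: strafe right, vy: forward, omega: spin, all in [-1, 1].
    One common sign convention; not necessarily WPI's exact axes.
    """
    speeds = [
        vy + vx + omega,  # front-left
        vy - vx - omega,  # front-right
        vy - vx + omega,  # rear-left
        vy + vx - omega,  # rear-right
    ]
    # Scale down so no wheel command exceeds full power.
    peak = max(abs(s) for s in speeds)
    if peak > 1.0:
        speeds = [s / peak for s in speeds]
    return speeds
```

Note how the equal-and-opposite strafe terms only cancel correctly because of the square geometry; change a wheel's size or spin-center distance and these simple sums no longer hold.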
We’ve never let those limitations hold us back. In prior years, we’ve modified the venerable WPI code to build robots with different spin-center distances, including three- and four-wheel hybrid omni-mecanum drives with mixed wheel diameters and spin-center distances. To this point, excepting the three-wheel hybrid, we’ve kept our wheel axes (or effective axes) orthogonal, so we could continue to take advantage of the clever motor calculations offered by the WPI code.
The flexibility of holonomic drives makes tight control a necessity. We do gyro and accelerometer correction for macro deviations from our motion model, and rely on closed-loop speed control at every wheel. This latter portion drastically changed for us this year during the 2018 control system beta, with the substantial changes in the CTRE Talon SRX motor controller’s driver code, which is no longer speed-commandable from the WPI code. Looking to the future, and recognizing our long-term lack of conformance with the status quo, we designed our own n-way holonomic drive code.
Our new code lets any team configure a holonomic drive system in a very flexible way. The only constraint is that each wheel axis (or effective axis, for mecanum) must pass through the robot spin center. Any configuration from two to eight holonomic-type wheels of any size can be accommodated. We will be using this code to drive our six-wheel omni drive train with two different spin-center distances and non-orthogonal wheel axes.
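The team's code itself isn't shown, but the spin-center constraint implies a clean general kinematic model: if a wheel's (effective) axis passes through the spin center, its drive direction is tangential, so its rotational contribution is simply the angular rate times that wheel's spin-center distance. A hedged Python sketch of that model, with our own function name and parameterization:

```python
import math

def n_way_speeds(vx, vy, omega, wheels):
    """Wheel surface-speed commands for an n-wheel holonomic drive.

    wheels: list of (drive_angle, radius) pairs, where drive_angle is
    the direction (radians, robot frame) in which the wheel propels
    the robot and radius is that wheel's spin-center distance.
    Because each wheel axis passes through the spin center, the
    rotation term along the drive direction is just omega * radius.
    """
    speeds = [vx * math.cos(a) + vy * math.sin(a) + omega * r
              for a, r in wheels]
    # Scale down so no wheel command exceeds full power.
    peak = max(abs(s) for s in speeds)
    if peak > 1.0:
        speeds = [s / peak for s in speeds]
    return speeds
```

A six-wheel layout like the one described just supplies six (angle, radius) pairs; nothing requires the angles to be orthogonal or the radii to match.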