CAVS is at the forefront of autonomous mobility research. Our primary focus? Developing solutions for non-urban environments.
Off-road, industrial, and heavy-duty vehicle automation are among the last frontiers of autonomous mobility, and CAVS is combining its unique capabilities to lead in these fields. With top-rated high-performance computing resources and one-of-a-kind vehicle proving grounds, CAVS can validate advanced modeling and simulation developments in real-world situations. CAVS also offers full-suite capability for autonomous system development, including sensor research, artificial intelligence and vehicle robotization.
As artificial intelligence continues to drive the development of autonomous vehicles, the use of practical, real-time deep learning algorithms has also been on the rise. CAVS is leading the way in developing deep learning technology, with teams of faculty, research engineers and students using neural networks to identify objects from camera images. While working with industry leaders like NVIDIA, CAVS is specifically exploring deep learning for use in rural environments.
Training neural networks requires extensive data sets and human effort to manually classify objects. Simulation can shorten this labeling cycle, because every object in a simulated scene comes with a ground-truth label. That matters most for off-road applications, where natural obstacles, like trees, vary far more than man-made objects do.
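The auto-labeling benefit can be sketched in miniature. Everything below is an illustrative assumption rather than CAVS's actual pipeline: the sample generator, the two made-up features (obstacle height and point density) separating "tree" from "rock," and the tiny perceptron trained on them. The point is simply that a simulator emits a ground-truth label with every sample, so no human annotation loop is needed.

```python
import random

random.seed(0)

def simulate_sample():
    """Generate one synthetic, auto-labeled training sample.
    Features (height, density) are hypothetical stand-ins for cues
    that might separate tree returns from rock returns."""
    label = random.choice([0, 1])          # 0 = rock, 1 = tree
    if label == 1:                          # trees: tall, sparse returns
        x = (random.uniform(3.0, 10.0), random.uniform(0.1, 0.4))
    else:                                   # rocks: low, dense returns
        x = (random.uniform(0.2, 2.0), random.uniform(0.6, 1.0))
    return x, label

# The simulator labels every sample for free -- no manual classification.
data = [simulate_sample() for _ in range(1000)]

# Train a minimal perceptron on the simulated, pre-labeled data.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(10):
    for (h, d), y in data:
        pred = 1 if w[0] * h + w[1] * d + b > 0 else 0
        err = y - pred
        w[0] += lr * err * h
        w[1] += lr * err * d
        b += lr * err

correct = sum(
    (1 if w[0] * h + w[1] * d + b > 0 else 0) == y for (h, d), y in data
)
accuracy = correct / len(data)
```

Because the two synthetic classes are linearly separable, even this minimal learner converges; a real perception stack would of course use deep networks on camera or LiDAR data.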
In autonomous vehicle development, designs for unstructured, off-road environments can be very different from those for structured, on-road applications. A typical on-road deep learning system focuses on identifying cars, pedestrians and traffic signals, and is built to respond to roadway cues such as signage and lane markings. In contrast, off-road autonomous vehicles must detect trees, rocks, abrupt changes in elevation and other obstacles. Active sensors, such as LiDAR, also respond differently to organic materials, like foliage, than to man-made materials, like steel and glass.
The CAVS vehicle proving grounds, located on 55 acres of land adjacent to the CAVS building, provide controlled-access testing capabilities for both autonomous vehicles and vehicle mobility in an off-road environment. The proving grounds feature various terrains, including sand, rocks, tall grass, wooded trails and lowlands. Varying courses provide transition points between different terrain types and lighting scenarios. Future improvements to the site include hard-surface test capabilities, such as four-lane roads, entrance and exit ramps, and a general-use concrete and asphalt pad.
Utilizing simulation for autonomous systems provides safe, repeatable conditions in which to develop, test and train autonomy algorithms. By using simulation, algorithms can be trained and debugged before the robot ever reaches the physical test track, or even before the hardware is completed. Test time can be optimized by performing thousands of simulated tests instead of field tests. Simulation can also perform some safety tests more realistically than field tests can. For example, testing obstacle detection and avoidance algorithms is much safer when simulating a pedestrian walking, falling or kneeling in front of the robot than when using a human test subject.
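As a toy illustration of that testing economy, the Monte Carlo sketch below runs thousands of simulated pedestrian-stop trials in well under a second. All parameters here (speed, braking rate, sensor range, latency) are invented values for illustration, not STAR specifications or CAVS test criteria.

```python
import random

random.seed(42)

# Hypothetical vehicle and sensor parameters -- illustrative only.
SPEED = 5.0          # m/s, robot speed
DECEL = 3.0          # m/s^2, braking deceleration
DETECT_RANGE = 20.0  # m, sensor detection range
LATENCY = 0.2        # s, detection-to-brake latency

def stopping_distance(speed, decel, latency):
    """Distance covered during reaction latency plus braking distance."""
    return speed * latency + speed ** 2 / (2.0 * decel)

def trial():
    """One simulated pedestrian event: does the robot stop in time?"""
    ped_distance = random.uniform(1.0, 30.0)   # pedestrian appears here
    if ped_distance > DETECT_RANGE:
        # Pedestrian is first seen at the edge of sensor range.
        ped_distance = DETECT_RANGE
    return stopping_distance(SPEED, DECEL, LATENCY) < ped_distance

# Thousands of simulated tests instead of risking a human test subject.
trials = 10_000
passes = sum(trial() for _ in range(trials))
pass_rate = passes / trials
```

The same loop could sweep latency or speed to find the envelope in which the avoidance policy is safe, something a field test could only sample a handful of times per day.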
CAVS is developing a physics-based simulator for sensors, environments, and vehicle dynamics and mobility that will enable real-time simulation of closed-loop autonomous performance. Featuring MPI-based communication between components, the environment will be scalable and able to simulate large numbers of robots interacting with each other. The software framework will also allow different sensor and vehicle-dynamics codes to be plugged into the simulator. The sensor simulator will feature a high-fidelity ray-tracing engine for simulating realistic LiDAR returns from vegetation, GPS pop and drift error, and complex lighting effects, using MSU's supercomputing resources to perform all computations in real time.
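At its core, LiDAR simulation is ray casting: trace one ray per beam into the scene geometry and record the first hit. The 2-D sketch below, with circular obstacles standing in for trunks and rocks, is a minimal illustration of that idea only; the CAVS engine adds high-fidelity effects (vegetation returns, lighting, real-time parallelism) far beyond this.

```python
import math

def ray_circle_range(ox, oy, angle, cx, cy, r):
    """Distance along a ray from (ox, oy) at `angle` to the nearest
    intersection with a circle at (cx, cy), radius r; inf if missed."""
    dx, dy = math.cos(angle), math.sin(angle)
    fx, fy = ox - cx, oy - cy
    b = 2.0 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - r * r
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return math.inf          # ray misses the circle entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else math.inf

def scan(obstacles, beams=360, max_range=100.0):
    """One simulated LiDAR sweep: nearest hit per beam, clipped to max_range."""
    ranges = []
    for i in range(beams):
        ang = 2.0 * math.pi * i / beams
        best = min(
            (ray_circle_range(0.0, 0.0, ang, cx, cy, r)
             for cx, cy, r in obstacles),
            default=math.inf,
        )
        ranges.append(min(best, max_range))
    return ranges

# Hypothetical scene: a 'tree trunk' 10 m ahead, a 'rock' 5 m to the left.
ranges = scan([(10.0, 0.0, 0.5), (0.0, 5.0, 1.0)])
```

A production engine does the same intersection test against millions of triangles per frame, which is why real-time operation leans on supercomputing resources.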
For a car to be autonomous, a computer must accurately and reliably control the vehicle, a task normally accomplished by a human driver's hands and feet. CAVS has developed a proprietary drive-by-wire vehicle platform for developing and testing autonomous vehicle technology. This platform, called the Systems Testbed for Autonomous Research (STAR), provides fully computerized access to all vehicle functions, including steering, throttle, brakes, shifting, cranking, lights, door locks and horn. Steering and speed control are reinforced by a triple-redundant fail-safe system with hardware overrides, so that any human input from a driver or remote operator immediately reverts the vehicle to stock controls. The system also provides real-time feedback from all vehicle sensors and includes multiple configurable power channels for powering sensors and computers. All of these functions are accessible through a CAN-based API.
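In a CAN-based API, each command is a frame: an arbitration ID plus up to eight data bytes with a fixed byte layout. The sketch below shows what encoding a steering command might look like; the message ID, scaling and byte layout are invented for illustration, since the actual STAR protocol is proprietary and not published here.

```python
import struct

# Hypothetical CAN layout -- NOT the real STAR message set.
STEER_CMD_ID = 0x101   # assumed arbitration ID for a steering command

def encode_steer_cmd(angle_deg, enable=True):
    """Pack a steering command into an 8-byte CAN data field:
    byte 0    = enable flag
    bytes 1-2 = signed angle in 0.1-degree units, big-endian
    bytes 3-7 = reserved (zero padding)."""
    raw = int(round(angle_deg * 10))
    return struct.pack(">Bh5x", 1 if enable else 0, raw)

def decode_steer_cmd(data):
    """Inverse of encode_steer_cmd, e.g. for the feedback path."""
    enable, raw = struct.unpack(">Bh5x", data)
    return raw / 10.0, bool(enable)

# A frame is the (arbitration ID, data bytes) pair put on the bus.
frame = (STEER_CMD_ID, encode_steer_cmd(-12.5))
angle, enabled = decode_steer_cmd(frame[1])
```

On real hardware the frame would be handed to a CAN interface (e.g. via the `python-can` library or SocketCAN) rather than kept as a tuple; the encode/decode symmetry shown here is what lets the same layout serve both command and feedback traffic.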
The STAR platform includes a wireless server with the ability to stream live sensor feeds to an accompanying app, allowing real-time onboard visualization of sensor data.
The STAR platform allows rapid testing of autonomous control system architectures, planning modules, ADAS, sensors, fusion blocks and other algorithms. The vehicle platform is available as a resource for CAVS researchers working on external projects, or as a licensable software/hardware kit.
The sensors that give an autonomous vehicle its view of the surrounding world are among its most critical components. CAVS researchers are working with a variety of sensors, including LiDAR, multispectral cameras, radar and ultrasonic sensors, to investigate a range of factors.
Figure: Three-dimensional scene reconstruction from a single infrared camera on a ground vehicle. Depth images are color-coded, with blue indicating smaller and red indicating larger depth values at different standoff distances from the vehicle.
Figure: Detection of buried explosive hazards in infrared imagery. Top: features (histogram of local binary patterns) extracted from the raw imagery. Bottom: features extracted from a filtered image.