Ford Teams Up With MIT, Stanford to Advance Automated Driving Research


Ford is embarking on new automated driving research projects with MIT and Stanford as part of its Blueprint for Mobility, which envisions fully automated vehicles on the road by around 2025.

The research with MIT focuses on scenario planning, which would allow cars to predict the actions of other vehicles and pedestrians. The work at Stanford is geared toward finding a way for a vehicle's sensors to see around obstructions.

Through these programs, Ford is exploring potential solutions for the long-term societal, legislative, and technological issues posed by a future of fully automated driving.

“To deliver on our vision for the future of mobility, we need to work with many new partners across the public and private sectors, and we need to start today,” said Paul Mascarenas, chief technical officer and vice president, Ford Research and Innovation. “Working with university partners like MIT and Stanford enables us to address some of the longer-term challenges surrounding automated driving while exploring more near-term solutions for delivering an even safer and more efficient driving experience.”

The automated Ford Fusion Hybrid unveiled last month is Ford’s first step toward automated cars. It uses the same technology as other Ford vehicles but adds four LiDAR sensors to generate a real-time 3D map of the vehicle’s surrounding environment.

The Fusion Hybrid is not yet capable of planning future paths for the car. That is what the MIT research aims to address: how a car can plan a path that safely avoids other moving objects. The Stanford research aims to make cars capable of perceiving the surrounding area even when it is not in direct view, which would be useful when a vehicle is stuck behind a large truck.
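The path-planning idea described above can be sketched in a highly simplified form: predict where a nearby object will be at each future time step (here with a naive constant-velocity model) and check whether a candidate path for the car ever comes too close. All function names, coordinates, and thresholds below are illustrative assumptions for this sketch, not part of Ford's, MIT's, or Stanford's actual systems.

```python
# Minimal sketch: constant-velocity motion prediction plus a proximity
# check over a candidate path. Positions are (x, y) in metres; velocities
# are (vx, vy) in metres per second.

def predict_positions(pos, vel, dt, steps):
    """Extrapolate future positions assuming constant velocity."""
    x, y = pos
    vx, vy = vel
    return [(x + vx * dt * k, y + vy * dt * k) for k in range(steps)]

def path_is_safe(ego_path, other_path, min_gap=2.0):
    """A path is safe if the car and the other object never come within
    min_gap metres of each other at the same time step."""
    for (ex, ey), (ox, oy) in zip(ego_path, other_path):
        if ((ex - ox) ** 2 + (ey - oy) ** 2) ** 0.5 < min_gap:
            return False
    return True

# Ego car drives straight ahead; another car crosses from the right and
# reaches the same point at the same time step, so the path is rejected.
ego = predict_positions((0.0, 0.0), (10.0, 0.0), dt=0.5, steps=10)
other = predict_positions((20.0, -10.0), (0.0, 5.0), dt=0.5, steps=10)
print(path_is_safe(ego, other))  # → False
```

A real planner would of course use uncertainty-aware predictions rather than straight-line extrapolation; the point of the MIT work, as described, is exactly to make those predictions about other vehicles and pedestrians more intelligent.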

“Our goal is to provide the vehicle with common sense,” said Greg Stevens, global manager for driver assistance and active safety, Ford Research and Innovation. “Drivers are good at using the cues around them to predict what will happen next, and they know that what you can’t see is often as important as what you can see. Our goal in working with MIT and Stanford is to bring a similar type of intuition to the vehicle.”