It’s not as hard as you think. What if someone writes code for a software update that pushes out to all cars one day and then takes effect at a specific time? What if they push software that makes all self-driving cars suddenly turn right?
Actually, that is very hard. Let's look at Tesla as an example:
1 - Their Level 3 autonomous driving still has redundancy after redundancy built into the system, where a manual override still takes precedence (see the sketch after this list).
2 - Your scenario assumes that when a malicious update is written, all of that layered redundancy misses it.
3 - That also assumes that everyone processed the update in time.
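To make point 1 concrete, here's a rough sketch (Python, purely illustrative, not Tesla's actual code or architecture) of the kind of control arbiter that makes manual input win no matter what the autonomy stack asks for:

```python
# Illustrative sketch only -- not Tesla's actual architecture or code.
# Pattern: a control arbiter that always lets detected driver input
# override whatever the autonomy stack is commanding.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Command:
    steering_deg: float   # requested steering angle in degrees
    throttle_pct: float   # requested throttle, 0-100


def arbitrate(autonomy_cmd: Command,
              driver_cmd: Optional[Command],
              driver_input_detected: bool) -> Command:
    """Return the command that actually reaches the actuators.

    Any detected driver input wins, so even a malicious "turn right
    now" update loses the moment the human grabs the wheel or brakes.
    """
    if driver_input_detected and driver_cmd is not None:
        return driver_cmd
    return autonomy_cmd


# Example: autonomy requests a hard right, but the driver is steering.
auto = Command(steering_deg=35.0, throttle_pct=40.0)
human = Command(steering_deg=-2.0, throttle_pct=10.0)
print(arbitrate(auto, human, driver_input_detected=True))  # driver command wins
```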
I know you are trying to play devil's advocate, but there are reasons terrorists go after single high-value targets instead of many low-value ones: the effort and planning needed to hit all the small targets just isn't worth it compared to one big one. Something like the Colonial Pipeline doesn't have people constantly working on, updating, and watching its software the way something like Tesla's update pipeline does. Not to mention Tesla's software communication is a two-way road: when the car notices something wrong with its programming, it can communicate with Tesla to get itself fixed.
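As a rough idea of what that two-way self-check could look like, here's a minimal sketch assuming a published-hash / signed-firmware scheme. The endpoint, file path, and expected hash are all made up for illustration; this is not Tesla's actual protocol:

```python
# Hypothetical sketch of a firmware integrity self-check that "phones
# home" when something looks off. The endpoint, paths, and expected
# hash below are invented for illustration; this is not Tesla's API.

import hashlib
import json
import urllib.request

EXPECTED_SHA256 = "replace-with-hash-shipped-in-the-signed-release"
FIRMWARE_PATH = "/firmware/current.bin"                    # hypothetical path
REPORT_URL = "https://vehicle-updates.example.com/report"  # hypothetical endpoint


def firmware_hash(path: str) -> str:
    """Hash the installed firmware image so it can be compared against
    the value published with the signed release."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def self_check_and_report() -> bool:
    """Return True if the image matches. On mismatch, report upstream
    so a known-good image can be pushed back down over the same channel."""
    actual = firmware_hash(FIRMWARE_PATH)
    if actual == EXPECTED_SHA256:
        return True
    payload = json.dumps({"expected": EXPECTED_SHA256, "actual": actual}).encode()
    req = urllib.request.Request(REPORT_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=10)
    return False
```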
This article shows how hard it was to defeat Tesla's software six years ago, and it has only gotten better since:
https://mashable.com/2015/08/10/tesla-model-s-hack/
Heck, look at current Ford vehicles: literally scan a barcode in the door jamb and you can take control of a vehicle remotely.
Yeah, hmmmm, that system that starts, locks, unlocks, and locates a parked car and tells you range to empty is so dangerous. And that's still only single-car operations, with little to no back-door way to affect every new Ford on the road.
So I don't think you are considering just how hard what you are suggesting is. There are many more redundancies in place for something as complicated as remote car updates, or even single-control updates. On top of that, the NHTSA is constantly reviewing autonomous-driving software to look for any loophole that could cause something to go wrong and not allow a user override. Hence why there are unlikely to ever be fully autonomous self-driving cars, as a manual override is almost always going to end up being required.
And that's less a hacker question and more a morality question. The one at the heart of this is the Trolley Problem, which goes like this:
There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:
- Do nothing and allow the trolley to kill the five people on the main track.
- Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the more ethical option? Or, more simply: What is the right thing to do?
If a car goes fully autonomous, that means the car is programmed to make this choice for the driver, passengers, and pedestrians. The question at hand comes from a legal standpoint: who's liable? Is it the driver? The auto manufacturer? The car owner? Or, despite people being killed, would it be a no-fault accident?
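Just to show what "programmed to make this choice" could literally mean, here's a toy sketch of a minimize-casualties rule. No manufacturer publishes its real decision logic, and every name and number here is invented; the point is simply that someone has to write such a rule down, and that is exactly where the liability question comes from:

```python
# Purely illustrative: a toy "minimize expected casualties" policy.
# No manufacturer publishes decision logic like this; it only shows
# that full autonomy forces someone to encode an ethical choice.

from dataclasses import dataclass
from typing import List


@dataclass
class Option:
    name: str
    expected_casualties: float  # rough estimate from the perception stack


def choose(options: List[Option]) -> Option:
    """Pick the option with the fewest expected casualties.
    Even this one-liner encodes an ethical policy someone had to pick."""
    return min(options, key=lambda o: o.expected_casualties)


# The classic setup: stay the course (five at risk) or divert (one at risk).
decision = choose([
    Option("stay on the main track", expected_casualties=5.0),
    Option("divert to the side track", expected_casualties=1.0),
])
print(decision.name)  # -> "divert to the side track"
```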