WHAT SELF-DRIVING CARS TELL US ABOUT AI RISKS
In 2016, just weeks before a Tesla operating on Autopilot killed Joshua Brown, I pleaded with the US Senate Committee on Commerce, Science, and Transportation to regulate the use of artificial intelligence in vehicles. Neither my plea nor Brown's death could get the government to act.
Since then, automotive AI in the United States has been linked to at least 25 confirmed deaths and to hundreds of injuries and instances of property damage.
The technical ignorance across industry and government is appalling. People do not understand that the AI running these vehicles, both the cars that operate in actual self-driving mode and the much larger number of cars offering advanced driver assistance systems (ADAS), is based on the same principles as ChatGPT and other large language models (LLMs). These systems control a car's lateral and longitudinal position, changing lanes, braking, and accelerating, without waiting for orders from the person behind the wheel.
Both kinds of AI use statistical reasoning to guess what the next word, phrase, or steering input should be, heavily weighting the calculation with recently used words or actions. Type "it's time" into your Google search box and you will get the suggestion "it's time for all good men." And when your car senses an object on the road ahead, even if it is only a shadow, watch the self-driving module brake suddenly.
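To make the parallel concrete, here is a minimal sketch in Python of that "guess the next item" idea. The bigram counter, the vocabulary, and the scene and action labels are illustrative assumptions of mine, not code or data from any search engine or vehicle.

```python
from collections import Counter, defaultdict

# Toy sketch: both a text autocomplete and a driving policy can be framed as
# "predict the most likely next item given the most recent one."
# The vocabulary and the scene/action labels below are invented for illustration.

def train_bigram(sequences):
    """Count how often each item follows each other item in the training data."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(model, prev):
    """Return the statistically most frequent next item seen after `prev`."""
    if prev not in model:
        return None
    return model[prev].most_common(1)[0][0]

# Text: the model autocompletes with whatever most often followed "time".
text_model = train_bigram([
    ["it's", "time", "for", "all", "good", "men"],
    ["it's", "time", "for", "a", "change"],
    ["it's", "time", "to", "go"],
])
print(predict_next(text_model, "time"))           # -> 'for'

# Driving: the same machinery, but the "tokens" are scene labels and control
# actions. Anything that statistically resembles "object_ahead" (even a
# shadow) maps to "brake", which is how phantom braking can arise.
drive_model = train_bigram([
    ["clear_road", "cruise"],
    ["object_ahead", "brake"],
    ["object_ahead", "brake"],
])
print(predict_next(drive_model, "object_ahead"))  # -> 'brake'
```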
Neither the AI in LLMs nor the AI in self-driving cars can "understand" the situation, the context, or any unobserved factors that a person would consider in a similar situation. The difference is that while a language model may give you nonsense, a self-driving car can kill you. In late 2021, despite receiving threats to my physical safety for speaking the truth about the dangers of AI in vehicles, I agreed to work with the National Highway Traffic Safety Administration (NHTSA) as senior safety advisor. What qualified me for the job was a doctorate focused on the design of joint human-automated systems and 20 years of designing and testing unmanned systems, some of which are now used in the military, mining, and medicine.
My time at NHTSA gave me a ringside view of how real-world applications of transportation AI are working, or not. It also showed me the intrinsic problems of regulation, especially in our current divided political landscape. That deep dive helped me formulate five practical insights. I believe they can serve as a guide to the industry and to the agencies that regulate it.
Human errors in operation get replaced by human errors in coding
Self-driving vehicle advocates routinely claim that the sooner we get rid of drivers, the safer we will all be on the road. They cite the NHTSA statistic that 94 percent of crashes are caused by human drivers. But this statistic is taken out of context and inaccurate. As NHTSA itself noted in that report, driver error was "the last event in the crash causal chain.... It is not intended to be interpreted as the cause of the crash." In other words, there were many other possible causes as well, such as poor lighting and bad road design.
Additionally, the claim that self-driving cars will be safer than human-driven cars overlooks something that anyone who has worked in software development knows all too well: software code is incredibly error-prone, and the problem only grows as systems become more complex.
Consider some recent crashes in which faulty software was to blame. There was the October 2021 crash of a Pony.ai driverless car into a street sign, the April 2022 crash of a TuSimple tractor trailer into a concrete barrier, the June 2022 crash of a Cruise robotaxi that suddenly stopped while making a left turn, and the March 2023 crash of another Cruise vehicle that rear-ended a bus.
These and many other episodes make it clear that AI has not ended the role of human error in road accidents. That role has simply shifted from the end of the chain of events to the beginning, to the coding of the AI itself. Because such errors are latent, they are far harder to mitigate. Testing, both in simulation and predominantly in the real world, is key to reducing the chance of such errors, especially in safety-critical systems. However, without sufficient government regulation and clear industry standards, autonomous-vehicle companies will cut corners in order to get their products to market quickly.
AI failure modes are difficult to predict
A large language model guesses which words and phrases are coming next by consulting an archive assembled during training from pre-existing data. A self-driving module interprets the scene and decides how to get around obstacles by making similar guesses, based on a database of labeled images (this is a car, that is a pedestrian, this is a tree) also provided during training. But not every possibility can be modeled, so the myriad failure modes are extremely hard to predict. All things being equal, a self-driving car can behave very differently on the same stretch of road at different times of the day, possibly because of varying sun angles. And anyone who has experimented with an LLM and changed merely the order of words in a prompt will immediately see a difference in the system's replies.
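A toy example can show how sensitive such guesses are to conditions the training data never covered. The sketch below is offered only as an illustration (the features, the numbers, and the "dusk shift" are assumptions, not measurements from any real perception stack): a simple linear classifier trained on daytime-like image statistics flips its decision when the same scene is shifted toward dusk lighting.

```python
import numpy as np

# Toy perception sketch: a linear classifier trained on "daytime" image
# statistics (mean brightness, edge contrast) flips its decision when the
# same scene appears under different lighting. Purely illustrative numbers.

rng = np.random.default_rng(0)

# Training data: class 1 = "obstacle" (darker, high contrast),
#                class 0 = "clear road" (brighter, low contrast).
X_train = np.vstack([
    rng.normal([0.3, 0.8], 0.05, size=(50, 2)),
    rng.normal([0.7, 0.2], 0.05, size=(50, 2)),
])
y_train = np.array([1] * 50 + [0] * 50)

# Fit a least-squares linear model with a bias term.
A = np.c_[X_train, np.ones(len(X_train))]
w, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def classify(features):
    """Return 1 ('obstacle') if the linear score exceeds 0.5, else 0."""
    scores = np.c_[features, np.ones(len(features))] @ w
    return (scores > 0.5).astype(int)

scene_noon = np.array([[0.65, 0.25]])                # clear road at midday
scene_dusk = scene_noon + np.array([[-0.30, 0.50]])  # same road, long shadows

print(classify(scene_noon))  # [0] -> clear road
print(classify(scene_dusk))  # [1] -> "obstacle": a phantom-braking trigger
```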
One failure mode not previously anticipated is phantom braking. For no obvious reason, a self-driving car will suddenly brake hard, possibly causing a rear-end collision with the vehicle just behind it and with other vehicles farther back. Phantom braking has been seen in the self-driving cars of many different manufacturers and in ADAS-equipped cars as well.
The cause of such incidents is still a mystery. Experts initially blamed human drivers for following the self-driving car too closely (often accompanying their assessments by citing the misleading statistic that 94 percent of crashes are caused by driver error). However, an increasing number of these incidents have been reported to NHTSA. In May 2022, for instance, NHTSA sent a letter to Tesla noting that the agency had received 758 complaints about phantom braking in Model 3 and Y vehicles. Last May, the German publication Handelsblatt reported 1,500 complaints of braking problems with Tesla vehicles, as well as 2,400 complaints of sudden acceleration. It now appears that self-driving cars experience roughly twice the rate of rear-end collisions as cars driven by people.
Clearly, the AI is failing to do what it is supposed to do. Moreover, this is not a problem limited to a single company; all automakers that leverage computer vision and AI are susceptible to it.
As other kinds of AI begin to make their way into society, it is imperative that standards bodies and regulators understand that AI failure modes will not follow a predictable path. They should also be wary of the automakers' tendency to excuse away bad technological behavior and to blame humans for abuse or misuse of the AI.
Probabilistic estimates do not approximate judgment under uncertainty
A decade ago, the rise of IBM's AI-based Watson, a forerunner of today's LLMs, caused widespread concern. People feared that AI would soon lead to massive job losses, especially in the medical field. Some artificial-intelligence experts even said we should stop training radiologists.
Those fears did not materialize. While Watson could be good at making guesses, it had no real knowledge, especially when it came to making judgments under uncertainty and deciding how to act on imperfect information. Today's LLMs are no different: the underlying models simply cannot cope with a lack of information, and they have no ability to judge whether their estimates are even good enough in a given context.
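In code terms, the missing capability is, at minimum, a decision rule that refuses to commit when its own probability estimates are too weak. The sketch below is my own illustrative framing (the threshold, the action names, and the "defer" behavior are assumptions, not how any production vehicle or LLM actually works):

```python
import numpy as np

# Illustrative decision rule: act on the model's best guess only when the
# estimate is confident enough; otherwise defer to a fallback behavior
# instead of committing to a guess. Threshold and action names are invented.

ACTIONS = ["yield", "proceed", "turn_left"]

def decide(action_probs, confidence_floor=0.85):
    """Return the most likely action, or 'defer' if no estimate clears the bar."""
    best = int(np.argmax(action_probs))
    if action_probs[best] < confidence_floor:
        return "defer"   # e.g., slow down, hand off to a human or a safe stop
    return ACTIONS[best]

# The oncoming car's intent is genuinely ambiguous: the estimates are split.
print(decide(np.array([0.40, 0.35, 0.25])))  # -> 'defer'

# A clear-cut situation: commit to the high-confidence action.
print(decide(np.array([0.02, 0.95, 0.03])))  # -> 'proceed'
```

Even this is only a crude stand-in for judgment, since a model can be confidently wrong, as the Cruise left-turn crash described next shows.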
These problems occur in the autonomous-driving world all the time. The June 2022 collision involving a Cruise robotaxi happened when the vehicle decided to make an aggressive left turn between two cars. As car-safety expert Michael Woon detailed in a report on the crash, the car initially chose a feasible path, but halfway through the turn it slammed on its brakes and stopped in the middle of the intersection. It had guessed that an oncoming car in the right lane would turn, even though a turn was not physically possible at the speed the car was traveling. The uncertainty confused the Cruise vehicle, and it made the worst possible decision. The oncoming car, a Prius, was not turning, and it crashed into the Cruise vehicle, injuring passengers in both cars.
Cruise vehicles have also had many problematic interactions with first responders, who by default operate in areas of considerable uncertainty. These encounters have included Cruise cars traveling through active firefighting and rescue scenes and driving over downed power lines. In one incident, a firefighter had to smash the Cruise car's window to remove it from the scene. Waymo, Cruise's main rival in the robotaxi business, has experienced similar problems.
These problems show that although neural networks can classify a great many images and propose a set of actions that work in common settings, they nonetheless struggle to perform even basic operations when the world does not match their training data. The same is true for LLMs and other forms of AI. What these systems lack is judgment in the face of uncertainty, a key precursor to real knowledge.
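One partial mitigation, described here only as a generic sketch and not as anything these companies are known to deploy, is to flag scenes that are statistically unlike the training data before trusting the classifier's output:

```python
import numpy as np

# Generic out-of-distribution check (illustrative only): flag inputs that
# sit far from the training data so the system treats its own guess as
# untrustworthy instead of acting on it.

rng = np.random.default_rng(1)
train_features = rng.normal(loc=[0.5, 0.5], scale=0.1, size=(500, 2))  # "common settings"

mean = train_features.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train_features, rowvar=False))

def out_of_distribution(x, threshold=4.0):
    """Mahalanobis distance to the training distribution; large values mean
    the scene looks like nothing the model was trained on."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d)) > threshold

print(out_of_distribution(np.array([0.52, 0.48])))  # False: familiar scene
print(out_of_distribution(np.array([3.0, -2.0])))   # True: e.g., an active fire scene
```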
Maintaining AI is as important as creating AI
Because neural networks can only be effective when they are trained on significant amounts of relevant data, the quality of the data is paramount. But such training is not a one-and-done scenario: models cannot be trained once and then sent off to perform well forever after. In dynamic environments like driving, models must be constantly updated to reflect new types of cars, bikes, and scooters, new construction zones, new traffic patterns, and more.
In the March 2023 crash, in which a Cruise car ran into the back of an articulated bus, experts were surprised, because many believed such collisions were nearly impossible for a system that carries lidar, radar, and computer vision. Cruise attributed the crash to a faulty model that had guessed where the back of the bus would be based on the dimensions of a normal bus; additionally, the model rejected the lidar data that correctly detected the bus.
This example highlights the importance of keeping AI models current. "Model drift" is a known problem in AI, and it occurs when the relationships between input and output data change over time. For example, if a fleet of self-driving cars operates in one city with one kind of bus, and then the fleet moves to another city with different bus types, the underlying bus-detection model is likely to drift, which could lead to serious consequences.
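Operationally, "keeping models current" implies monitoring for exactly this kind of drift. The sketch below is an assumption about how such monitoring could work in general (the feature, the numbers, and the threshold are invented, and this is not Cruise's actual pipeline): compare a feature the model depends on, such as detected bus length, between the training data and recent deployment data, and trigger a retraining review when the distributions diverge.

```python
import numpy as np
from scipy.stats import ks_2samp

# Illustrative drift monitor: compare the distribution of a model-relevant
# feature (here, detected bus length in meters) between training data and
# recent deployment data, and flag divergence for retraining.

rng = np.random.default_rng(2)
training_bus_lengths = rng.normal(12.0, 0.5, size=2000)    # standard buses
deployment_bus_lengths = rng.normal(18.0, 0.7, size=500)   # articulated buses in a new city

def drift_detected(reference, live, p_threshold=0.01):
    """Two-sample Kolmogorov-Smirnov test: a tiny p-value means the live data
    no longer looks like the data the model was trained on."""
    _statistic, p_value = ks_2samp(reference, live)
    return p_value < p_threshold

if drift_detected(training_bus_lengths, deployment_bus_lengths):
    print("Drift detected: update the bus-detection model before relying on it.")
```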
Such drift affects AI not only in transportation but in any field where new findings constantly change our understanding of the world. This means that large language models cannot learn a new phenomenon until it has lost the edge of its novelty and appears often enough to be incorporated into the dataset. Keeping models current is just one of the many ways in which AI requires periodic maintenance, and any discussion of future AI regulation must address this critical aspect.
AI has system-level implications that cannot be ignored
Self-driving cars are designed to stop cold the moment they can no longer reason and can no longer resolve uncertainty. This is an important safety feature. But as Cruise, Tesla, and Waymo have demonstrated, managing such stops poses an unexpected challenge. A stopped car can block roads and intersections, sometimes for hours, throttling traffic and keeping first-response vehicles out. Companies have set up remote-monitoring centers and rapid-action teams to mitigate such congestion and confusion, but at least in San Francisco, where hundreds of self-driving cars are deployed, city officials have questioned the quality of their responses.
Self-driving cars rely on a wireless connection to maintain their awareness of the road, but what happens when that connection drops? One driver found out the hard way when his car became trapped in a cluster of 20 Cruise vehicles that had lost their connection to the remote-operations center, causing a massive traffic jam.
Of course, any new technology goes through growing pains, but if those pains become severe enough, they will erode public trust and support. Sentiment toward self-driving cars in San Francisco was once favorable to the technology but has now turned negative because of the sheer number of problems the city is experiencing. Such sentiment could eventually lead to public rejection of the technology if a stopped self-driving vehicle causes the death of a person who was prevented from getting to the hospital in time.
So what does the self-driving car experience say about regulating AI more generally? Not only do companies need to ensure that they understand the broader system-level implications of AI, they also need oversight; they should not be left to police themselves. Regulatory agencies must work to define reasonable operating boundaries for systems that use AI and issue permits and regulations accordingly. When the use of AI presents clear safety risks, agencies should not defer to industry for solutions and should be proactive in setting limits.
AI still has a long way to go in cars and trucks. I am not calling for a ban on self-driving cars. There are clear benefits to using AI, and it is irresponsible to call for a ban, or even a pause, on it. But we need more government oversight to prevent unnecessary risk-taking.
And yet the regulation of AI in vehicles has not happened. That can be blamed in part on industry overpromising and lobbying, but also on a lack of technical competence among regulators. The European Union has been more proactive about regulating artificial intelligence in general and self-driving cars in particular. In the United States, we simply do not have enough people in federal and state transportation departments who know the technology well enough to effectively advocate for balanced public policies and regulations. The same is true for other types of AI. This is not any one administration's problem. Not only does AI cut across party lines, it cuts across all agencies and all levels of government. The Department of Defense, the Department of Homeland Security, and other government agencies all suffer from a workforce that lacks the technical skills needed to effectively oversee advanced, rapidly evolving technologies, especially AI.
To engage in effective discussions about the regulation of AI, everyone at the table needs a level of technical competence in AI. Currently, these discussions are heavily influenced by industry (which has an obvious conflict of interest) or by Chicken Littles who claim that machines have achieved the ability to outsmart humans. Until government agencies have people with the skills to understand the critical strengths and weaknesses of AI, conversations about regulation will make very little meaningful progress.
Recruiting such people can be done easily enough. Improve pay and bonus structures, embed government personnel in university labs, reward professors for government service, provide advanced certificate and degree programs in AI for all levels of government personnel, and offer scholarships to undergraduates who agree to serve in government for a few years after graduation. Moreover, to better educate the public, university classes that teach AI topics should be free.
We need less hysteria and more education so that people can understand the promises and realities of AI.