The Cry Wolf Moment of AI Hype Is Unhelpful

Although I am someone who studies end-of-humanity scenarios, I believe the "expert letter" suggesting a 6-month AI moratorium and the more recent statement that AI risk is at the level of pandemic and nuclear risk are both overhyped. The even wilder opinion that we need to shut AI down is irresponsible. Any worry has to be proportional to the risks we face. Right now, we are in no immediate danger from AI.

Current AIs are not capable of taking over society. They don't have feelings and don't deserve protection the way human lives do. They are not superintelligent and don't surpass humans in any general way. In fact, they don't think at all. Right now, if fed abundant data, AIs are very good at specific tasks such as calculation and prediction. That's not worrisome; those are features these systems have by design. The promise of AI includes solving cancer, transforming industrial production, modeling future scenarios, and managing environmental challenges. Having said that, there are legitimate reasons to criticize current AIs for their resource use, transparency, bias, cybersecurity, and future impact on employment.

AIs are computationally expensive, which means they are a grand waste of scarce fossil energy. This has to be addressed immediately. But it is not an existential issue; it is a matter of rational resource use. The fact that AIs relying on big, inefficient data models are becoming too expensive for academia or government to track and investigate is a real issue. But it is eminently fixable. Consortia of elite academic institutions or governments could band together and share computing resources the way they have done for supercomputing.

Large language models (LLMs) are AI models that can generate natural-language texts from large amounts of data. One problem with that is that these texts are directly derived from other people's honest intellectual contributions. They are, in fact, stolen. Generative AI, in particular, recombines both consumer and organizational data as well as creative content in stark breach of copyright. This is serious, but not existential, and moreover, the EU, lobbyists from Hollywood, and the "big five" book publishers are already on the case. Expect this to slow down AI's heft. At the current rate, AIs will run out of good training data well before they approach sentience.

Algorithms already used to calculate our taxes, select our online feeds, or put people in jail have a striking lack of transparency. However, this has been the case for years and has nothing to do with the latest AI developments. AI bias is a feature, not a bug. Stereotyping is, in fact, the main approach through which such models work, except that the prejudice is hidden in impenetrable layers of machine reasoning elusive to humans, experts or not. What we should question is the wisdom of the developers who built such systems, not the capability of the systems they created, which is a given. Systems will rarely be better than the wisdom or intentions of those who build or run them.

AI training data reflects the biases present in the society from which that data was collected. The re-use of bad training data is a worrisome practice that already pollutes AI models. Current AI approaches simply amplify bias to get to a result quickly. This is, admittedly, the opposite of what we want. What we want is to use technology to safeguard against human error. Worrying about machine error is a wasteful use of human intelligence.

Despite the "neural network" metaphor, current AIs don't resemble brains by any stretch of the imagination. Current AI systems cannot reason by analogy the way humans do. This is good. We may not actually want the kind of AI alignment that zealots are advocating and trying to emulate. Machines should be different from humans. That's how we can maximize each other's strengths, and how we can keep machines distinct and apart. Machines should not have any interests to align.

AI increasingly represents a significant cybersecurity threat as an asset for criminals and hostile states. But cybersecurity is a mature industry with plenty of experts well equipped to handle the challenge. There is no reason to shut down AI because of cybersecurity fears.

Disruption of employment because of AI has been a policy issue for years, first with robots, now with software-based AI systems. That means governments will be ready to deal with it. The MIT Work of the Future study found the concern about unemployment due to robots to be overstated. Humans have always found ways to work and will do so in the future as well. Will manufacturing be transformed by AI? It is already happening, but in a fairly controlled fashion.

From time to time, AI suffers from overhyped promises about current functionality or future scope. The first AI winter lasted from 1974 to 1980, as the US government pulled its funding. The second lasted from 1987 to 1993, as costs escalated and AI failed to deliver on its lofty promises.

In the period from 2025 to 2030, as we await new paradigms, we will likely enter a third AI winter, at least compared to the hot AI summer we have been promised. The reason is that, despite the hype, and for all of the reasons outlined above, large language models are about to reach their maximum utility and will eventually need to be superseded by computationally more elegant approaches that are more transparent.

One such candidate is hyperdimensional computing, which would let machines reason more efficiently by giving them semantic understanding: the ability to process the meaning and context behind real-world information. Right now, AI systems don't understand the relationships between words and phrases; they are simply good at guesswork. That's insufficient. We will eventually need embodied AI, because thinking is tied to the perception of space. That is definitely the case in manufacturing, which is a highly physical game. We will also need AI that is capable of human memory features such as prioritizing by foregrounding some information and backgrounding other information. Forgetting is not simply a flaw; it is a tool humans use for abstract thinking, for moving on from obsolete organizational practices, for making decisions, and for staying in the moment. No machine can do that very well yet.
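To make the hyperdimensional-computing idea more concrete, here is a minimal, hypothetical sketch of the general technique (often described as a vector-symbolic architecture). The dimensionality, symbol names, and encoding choices below are illustrative assumptions, not a description of any existing product: symbols become random high-dimensional vectors, a role is "bound" to its filler by element-wise multiplication, role-filler pairs are "bundled" by addition, and a query can still recover the right answer by similarity despite the noise.

```python
import numpy as np

# Illustrative hyperdimensional-computing (vector-symbolic) sketch.
# Assumption: bipolar (+1/-1) hypervectors of dimension 10,000.
D = 10_000
rng = np.random.default_rng(0)

def hv():
    """Random bipolar hypervector standing in for an atomic symbol."""
    return rng.choice([-1, 1], size=D)

def cosine(a, b):
    """Similarity between two hypervectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Atomic symbols (names are purely illustrative).
roles   = {"capital": hv(), "currency": hv()}
fillers = {"Paris": hv(), "euro": hv(), "peso": hv()}

# Encode a record: bind each role to its filler (element-wise product),
# then bundle the pairs into a single vector (element-wise sum).
france = roles["capital"] * fillers["Paris"] + roles["currency"] * fillers["euro"]

# Query "what is the capital?": binding with +/-1 vectors is its own inverse,
# so multiplying by the role strips it off, leaving the filler plus noise.
noisy = france * roles["capital"]
answer = max(fillers, key=lambda name: cosine(noisy, fillers[name]))
print(answer)  # -> "Paris" with overwhelming probability at D = 10,000
```

The point of the sketch is that meaning-like structure, namely which symbol fills which role, is carried explicitly in the representation and remains recoverable after superposition, which is the kind of transparency and semantic handling the paragraph above gestures at.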

In the meantime, we do need to regulate, but not this second. And when we do regulate, we had better do it well. Bad regulation of AI is likely to make the situation worse. Waking regulators up to this challenge can be helpful, but I'm not sure the current generation of regulators is up to the kind of sweeping changes that would be needed to do it well. It would entail curtailing powerful companies (possibly all listed companies), limiting AI's use in governance, and making enormous changes to the way consumer markets currently work. Essentially, we would have to rewire society. It would usher us into degrowth a few decades earlier than we might wish for. The transparency challenge surrounding AI might be more formidable than the control variables everyone seems so worried about, not that they are unrelated, of course.

Moreover, we cannot be equally worried every time an AI benchmark is reached. We need to conserve our energies for truly big moments of cascading risk. They will come and, in fairness, we are not prepared. My envisioned future scenarios (see Extinction Scenarios for 2075) include massive data breaches that keep entire countries locked out of their own processes for months. I also worry about AIs that are helped along by criminal groups or state actors. Most of all, I worry about combinations of AI, nanotech, synthetic biology, and quantum technology: near-invisible, quasi-organic intelligence of unknown capability, perhaps only a few decades away, arriving just when the world is consumed by the cascading effects of climate change.

Current AI models don't yet work well enough to be a threat to humanity. Before we can consider shutting them down, we need better AIs. More than that, we need wiser developers, more sensitized citizens, and better-informed policymakers. We also need a concept for HOW to regulate AI. But this can be done without slowing anything down. It will be an educational journey for all. The moratorium letter concerning GPT-4 (2023) is a cry-wolf moment with only a faint resemblance to the cascading risks humanity faces in coming decades. Putting AI risk at the level of pandemic risk and nuclear risk in 2023 is premature. Will we get there? Perhaps. But crying wolf has consequences. It sucks the oxygen out of coming debates about real scares.

Source: https://www.forbes.com/sites/trondarneundheim/2023/05/31/the-cry-wolf-moment-of-ai-hype-is-unhelpful/