
AI and Emergency Management

Artificial Intelligence has found application in many areas, where its particular ability to find patterns in data makes it a useful tool. Let’s look at AI’s application to Emergency Management, a critical activity in today’s world of climate change, pandemics, and social unrest.

Emergency Management seeks to minimize the impact, on people and property, of emergencies such as earthquakes, hurricanes, floods, disease, and terrorism. EM involves four kinds of interrelated and sometimes overlapping activities:

  1. Mitigation (also called Disaster Risk Reduction) – steps taken to reduce the likelihood of emergencies (e.g., forest management to reduce wildfire risk) or their impact (e.g., flood-protection levees)
  2. Preparation – equipping responders and the public with tools and knowledge that will minimize emergency impacts. Examples include stockpiling personal protective equipment for pandemics, and training Community Emergency Response Teams
  3. Response – actions taken during an emergency or in its aftermath, to prevent further suffering or financial loss. International relief efforts after a devastating earthquake are an example of this, as is medical care for the victims of a pandemic
  4. Recovery – work to return communities back to ‘normal’ after an emergency, for example rebuilding destroyed structures or re-opening a shut-down economy.

Deep neural networks, currently a leading edge of AI, map patterns in data to useful interpretations of the data, such as the condition of a building, the likelihood of a flood, or the best evacuation route. This kind of information can be very useful for the planning, prediction, situation assessment, and decision making that are at the heart of Emergency Management.

Here are some examples of AI’s use in the four kinds of EM activity:

Mitigation

Mitigation seeks to reduce the risks associated with emergencies and disasters. Two ways this can be done are by recognizing human-made dangers and reducing them, or by predicting dangerous natural phenomena in time for actions to be taken to reduce their impact.

Poor urban areas are especially vulnerable to disasters, and poverty data is in scarce supply and difficult to collect. Researchers at Oak Ridge National Laboratory in the US have developed an AI-based technique to identify poor, informal settlements from high-resolution satellite imagery. Their approach uses a variety of spatial, structural, and contextual features to classify areas into formal, informal, and non-settlement classes. The method was tested in Caracas, Kabul, Kandahar, and La Paz, demonstrating that good accuracy could be obtained using the same features across these diverse areas.

El Niño is a climate phenomenon that disrupts normal weather, leading to intense storms in some areas and droughts in others. It happens at irregular intervals of two to seven years, and lasts nine months to two years. The farther in advance an El Niño event can be predicted, the better a region can prepare for it. Recently deep neural networks have been able to forecast El Niño 18 months in advance, which is an improvement of 6 months over previously used methods.

In California, two high school students invented a device to predict the probability of a forest fire occurring. The device is placed in the forest and takes real-time photographs together with measurements of humidity, temperature, carbon monoxide/dioxide, and wind. This data is then used with a deep neural network to predict the probability of a fire.
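As a rough illustration of the idea (not the students’ actual model), sensor readings can be combined into a fire probability with a logistic function. The weights below are made-up placeholders, not trained values:

```python
import math

def fire_probability(humidity, temperature, co_ppm, wind_speed):
    """Toy logistic model: weighted sensor readings squashed to a 0-1 probability.
    The weights are illustrative placeholders, not a trained model."""
    # Lower humidity and higher temperature/CO/wind all push risk upward.
    score = (-0.08 * humidity) + (0.06 * temperature) + (0.02 * co_ppm) + (0.05 * wind_speed)
    return 1.0 / (1.0 + math.exp(-score))

# Dry, hot, windy conditions yield a higher probability than cool, humid ones.
risky = fire_probability(humidity=15, temperature=38, co_ppm=12, wind_speed=30)
safe = fire_probability(humidity=85, temperature=12, co_ppm=2, wind_speed=5)
```

A real deep neural network replaces the hand-picked weights with parameters learned from historical sensor data and fire outcomes.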

Preparation

A primary responsibility of emergency managers is to develop good plans to execute when disaster strikes. Such plans must deal with patterns of natural and social phenomena, and AI can help analyze these patterns and guide effective planning.

For example, Google has been partnering with India’s Central Water Commission to develop AI-enabled flood forecasting and early warning. Google combines a variety of elements, such as historical events, river-level readings, and terrain and elevation data, to run hundreds of thousands of simulations for each location, creating river flood forecasting models that can more accurately predict where and when a flood might occur, and how severe it will be.

Response

Emergency response must provide aid where it is needed. Knowing where and what sort of aid is needed is a challenge, especially in large-scale disasters. Our modern world is flooded with situational information from social media, surveillance cameras (fixed, drones, satellites), and internet-of-things sensors. However, it is very challenging for emergency managers to sort through and interpret this data. This is an ideal application for AI.

A system called Artificial Intelligence for Disaster Response (AIDR) has been developed to help analyze tweets during emergencies and disasters. The system is available as free and open-source software, and it is designed to be tailored to responder needs. The responder first identifies keywords and/or hashtags that are used as a preliminary filter for tweets. Next, responders identify topics of interest such as “Medical Needs” or “Sheltering”, and manually tag example tweets in each category. A deep neural network then learns to classify relevant tweets in each category, and automatically streams relevant information to responders.
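The AIDR workflow (keyword prefilter, then topic classification learned from responder-tagged examples) can be sketched in miniature. The keywords, topics, and the nearest-centroid bag-of-words classifier below are simplified stand-ins for AIDR’s actual filters and deep neural network:

```python
from collections import Counter

# Responder-chosen filter terms (illustrative, not AIDR's defaults).
KEYWORDS = {"#earthquake", "quake", "collapsed"}

# Hand-tagged example tweets per topic, standing in for responder-labeled training data.
EXAMPLES = {
    "Medical Needs": ["need doctors and medicine urgently", "injured people need medical help"],
    "Sheltering":    ["families need shelter and blankets", "looking for a safe shelter tonight"],
}

def keyword_filter(tweet):
    """Preliminary filter: keep only tweets containing a responder keyword."""
    return bool(set(tweet.lower().split()) & KEYWORDS)

def classify(tweet):
    """Pick the topic whose example tweets share the most words with this one.
    A crude stand-in for AIDR's learned classifier."""
    tweet_words = Counter(tweet.lower().split())
    def overlap(topic):
        centroid = Counter(w for ex in EXAMPLES[topic] for w in ex.lower().split())
        return sum(min(tweet_words[w], centroid[w]) for w in tweet_words)
    return max(EXAMPLES, key=overlap)

incoming = "quake survivors injured, need medicine and doctors"
if keyword_filter(incoming):
    topic = classify(incoming)  # routed to the "Medical Needs" stream
```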

AI is being used in the fight against the ongoing COVID-19 pandemic. Deep neural networks are being used to identify patterns in medical imagery in lungs and heart that will allow early detection and personalized therapies. AI is also being used to identify research and drugs most likely to lead to COVID-19 treatments and vaccines, and to track the disease by monitoring the deluge of data on social media and the internet.

Recovery

During disaster recovery a wide range of activities are undertaken to attend to casualties and survivors, restore buildings and infrastructure, and re-establish social systems and businesses. When international aid is involved, complex interactions among multiple organizations must be coordinated. Situation assessment, resource allocation, and planning can all be supported by AI’s ability to recognize patterns in data.

For example Google, in collaboration with the United Nations World Food Program Innovation Accelerator, has developed a system for automatic damage assessment using very high-resolution satellite imagery. The system uses a deep neural network to identify buildings and compare their condition before and after the disaster. This automated damage assessment can greatly improve the timeliness and effectiveness of recovery efforts for disasters that damage large numbers of structures, such as the 2010 Haiti earthquake, which required assessment of over 90,000 buildings in the Port-au-Prince area alone.
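The core idea of automated before/after comparison can be sketched as follows. This toy version flags a building as damaged when its imagery changes beyond a threshold, a crude stand-in for the learned comparison a deep neural network performs; the patches and threshold are illustrative:

```python
def mean_abs_diff(before, after):
    """Average per-pixel change between two same-sized grayscale patches (0-255)."""
    flat_b = [p for row in before for p in row]
    flat_a = [p for row in after for p in row]
    return sum(abs(a - b) for a, b in zip(flat_a, flat_b)) / len(flat_b)

def assess_building(before, after, threshold=60):
    """Flag a building footprint as damaged if its imagery changed substantially.
    A pixel-change threshold stands in for the network's learned comparison."""
    return "damaged" if mean_abs_diff(before, after) > threshold else "intact"

# Tiny 2x2 'satellite image' patches of one building footprint.
intact_before = [[200, 198], [201, 199]]
intact_after  = [[198, 200], [199, 202]]
rubble_after  = [[90, 60], [75, 110]]

assess_building(intact_before, intact_after)  # small change
assess_building(intact_before, rubble_after)  # large change
```

The real system adds a first stage that detects building footprints in the imagery before making the per-building comparison.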

AI and EM

AI’s pattern recognition capability can be an invaluable asset for planning, prediction, situation assessment, and decision making. These activities are critical to many lines of work, especially Emergency Management.


AI’s Superpower

Just putting ‘artificial’ and ‘intelligence’ together in the same term is enough to get people pretty excited.

For some, ‘Artificial Intelligence’ can only be a misnomer. True intelligence is uniquely human, biologically evolved, embodied, necessarily shaped by environment and social relationships, non-algorithmic, and unknowable by mere human consciousness. Anything that becomes possible for human-built computers is by definition not really artificial intelligence.

For others, natural intelligence is simply computation performed on a relatively slow biological computer that took hundreds of thousands of years to evolve. It is only a matter of time before the exponential improvement in computing technology will allow AI to surpass the power of human brains.

The loaded nature of the term AI has also led to a variety of definitions, and identification of subcategories such as Narrow AI and Artificial General Intelligence. Sometimes AI is differentiated from terms such as ‘machine learning’ or ‘automation’.

I prefer a simple and pragmatic definition for AI – technology that can perform tasks previously requiring human intelligence. This definition does not address the limits or scope of AI, it simply acknowledges that we have developed and will continue to develop systems that perform tasks previously requiring human intelligence.

This definition will be too broad for some people’s taste. After all, electronic calculators fit the definition, and nobody considers them AI. However, I think of AI as a pursuit rather than a destination, with a leading edge that continues to advance. In practice, when we talk about AI, we are usually talking about technology near the leading edge.

In a previous post, I addressed why I think AI is neither comparable to human intelligence, nor a threat to humans. But I also think the leading edge of AI, deep neural networks, is very impressive.

Deep neural networks map patterns in data to outputs that represent some useful interpretation of the data, such as the identity of a face or the translation of a spoken sentence. In a sense, this capability is pretty simple; these AIs can be dismissed as mere ‘curve fitters’. What makes deep neural networks so useful?

Here are three things that give these AIs ‘superpowers’:

  • Patterns are everywhere
  • Data is abundant
  • AI learning extends human programming.

Patterns are everywhere

Recognizing patterns is central to the way we humans live, work, and play. For example:

  • Patterns in our environment tell us what we can eat, where we can find food, when we need to take shelter, and how to turn the wheel of our car
  • Social patterns bond children to mothers, attract mates, expose cheaters
  • Humans impose patterns on their environment – constellations in the stars, orbital mechanics – to enrich understanding and guide exploration
  • Patterns of language – spoken, written, schematic – communicate ideas and directions, and preserve the growing body of human knowledge
  • Patterns are used by detectives to fight crime, and by financial analysts to make money
  • We amuse and enrich ourselves through patterns in music and art, and in puzzles and games.

Data is abundant

Much of our reality these days is represented digitally, on the web or in databases. This gives unprecedented access to information about the patterns central to our lives. If only we had enough eyes and brains to examine and digest this huge volume of data! But this task is a perfect fit for deep neural networks: feed them enough data and they can discover extremely complex patterns.

For example, automatic speech-to-text recognition has been revolutionized by deep neural networks. One such network with 5 billion connections is possible only because it could be trained with lots of data: 3 million audio samples, together with 220 million text samples from a 495,000-word vocabulary.

AI learning extends human programming

Obviously, it takes humans to program deep neural networks. But these networks are programmed to ‘learn’, in the sense that they adjust their own parameters during the training process.
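This ‘learning by adjusting parameters’ can be seen in miniature with a one-parameter model trained by gradient descent, the same family of algorithms used to train deep neural networks (a toy sketch, not any production system):

```python
# Examples following the pattern y = 2x; the network must discover the 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # the 'network's' single parameter, initially arbitrary
lr = 0.02  # learning rate

for _ in range(500):
    # Gradient of mean squared error of the prediction w*x with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # the training algorithm adjusts the parameter, not a human

# w converges to 2.0: the pattern in the data, discovered automatically.
```

A deep neural network does exactly this, but with millions or billions of parameters instead of one.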

The fact that very large deep neural networks can be trained and give good results is a relatively recent discovery in AI.  Why these networks work so well is not well understood theoretically, but extensive experimentation has led to innovative designs and good results. This work has been carried out by a thriving, innovative community of AI researchers and engineers, who are building and extending a shared body of open-source software, datasets, and results.

One of the things observed in these experiments is that as deeper neural networks have become feasible, human engineers have needed to do less preprocessing of the inputs to the networks, to identify important features in the data. By letting the networks ‘learn’ what features are important, better results are obtained with less human programming.

An example is automatic speech-to-text recognition, mentioned above. For decades engineers developed these systems using approaches that drew on linguistic analysis of human vocalization and language: speech as composed of elemental sounds, phonemes, which are then built up into words and sentences, all governed by language syntax and semantics. Up through the early 2000s, systems mirrored this analysis: sounds were mapped to phonemes and possible words, sometimes using neural networks, then symbolic or statistical models of language were used to predict, correct, and make sense of words and sentences.

As effective deep neural networks became available, engineers put more and more of the linguistic analysis burden on the networks. Eventually, networks were trained to directly map sound (digital time-frequency plots) to words, resulting in a dramatic improvement in accuracy.

Artificial Intelligence?

Whether or not AI is really ‘intelligent’, AI research and development continues to move the limit of machine capability.


Which Jobs Will AI Impact?

The workplace is being disrupted by Artificial Intelligence. A 2019 report by McKinsey Global Institute projects that by 2030, up to 39 million jobs in the US will be displaced by AI. What is the nature of this displacement? In the shadow of AI, how can workers make sure they stay employed and organizations make sure they have the skills they need?

First let’s talk about what AI does for an organization, how it’s used. Recall that today’s AI uses massive artificial neural networks, trained with massive amounts of data. The ‘superpower’ these systems bring to the table is their ability to recognize very complex patterns in digital data. AIs are essentially pattern recognizers. And they are a good fit to today’s world, much of which is represented online as digital data: images, video, business reports, news, financial transactions, customer preferences, ….

AI’s ability to recognize patterns can do lots of useful work. For example AI can be used to augment or replace a human worker’s eyes, ears, brain, or even arms and legs in performing tasks such as:

  • Industrial inspection, inventory management and warehousing, farming, security, transportation, and elder care
  • Data entry, customer service, market analysis, travel booking, legal document review, and sentiment analysis
  • Personal assistance, language translation, news and weather reporting, image captioning, and document summarization
  • Crime pattern detection, materials and drug discovery, credit checking and fraud detection, business lead generation, and business forecasting
  • Warehousing, factory assembly, taxi service, delivery service, and long-haul trucking.

Where does this leave us humans?

A short answer: as with other workplace revolutions such as steam engines, mass production, computers, or the internet, some jobs will disappear, new jobs will appear, and many jobs will morph into something different.

AI will tend to replace human labor in jobs that involve routine mental or physical work. For example the 2019 McKinsey report projects that the number of office support jobs in the US will decline from 21 million in 2017 to 18 million in 2030 (an 11% loss), while the workforce will grow 9% over the same period for all jobs in the categories analyzed. Factory jobs, over the same period, are expected to decline by 5%.

Strong job growth 2017 – 2030 is expected in occupational categories that emphasize human relations (health care, 36%; business and legal professionals, 20%; education and training, 18%), as well as in STEM professions (37%).

Of course in a growing economy there can be job growth even in job categories where AI will replace a lot of human labor. For example, McKinsey estimates that 25% of today’s work in customer service and sales will be replaceable by AI by 2030; however the number of jobs in this occupational category will still grow by 10% during that time.

Across all job categories, McKinsey estimates that 25% of human labor expended in 2017 will be replaceable by AI by 2030. This replacement potential ranges from 10% to 39% of the current workforce for every job category. This means few of us will be far from AI’s impact.

How can workers and organizations get ready for the AI transition? A 2019 MIT Sloan School report highlights the need for employers and workers to create and maximize the motivation to learn and adapt over their lifetimes. In a previous blog post I address the need for organizations to adopt an “all-in” approach to AI, integrating organization, IT, and operations.

At the individual level, many workers will need to become familiar with and learn to work with AI. Although AI technology is being developed through the work of specialized researchers with advanced university degrees, the AI research community has made learning about and using the technology surprisingly accessible.

For example Google, Amazon, and Apple all offer free cloud-based environments where workers can learn about AI, and develop and run business tools, without an extensive AI background. A 2019 Northeastern University/Gallup study found that organizations are increasingly developing the internal AI skills they need by training existing staff or through non-degreed interns.

Yes, AI is disrupting the workplace. It is accelerating automation. Workers and organizations must adapt and learn, and then adapt and learn again. But this challenge is accompanied by unparalleled opportunity.

So, how is it going so far? Interesting data point: by analyzing job postings, ZipRecruiter estimates that AI created about three times as many jobs as it destroyed in 2018.


How Will AI Impact Organizations?

We can think of AI as software that helps organizations more effectively reach their goals, e.g., by reducing costs and increasing revenues.

Gaining benefits from AI, or any other innovative technology, requires organizational change. New strategies. New job descriptions. New workflows. New org charts. Training.

What makes AI different? Are the challenges faced by organizations adopting AI different from those encountered in adopting other software innovations?

After all, using computers to revolutionize organizations is nothing new. IBM developed the SABRE reservation database for American Airlines back in 1964, replacing manual file cards with a system that could handle 83,000 reservation requests a day. Pretty disruptive!

So how is AI changing organizations? Let’s take an example – the financial industry. AI’s ability to find patterns in mountains of data can help financial organizations:

  • Make more accurate and objective credit decisions by accounting for more complex relationships among a wider variety of factors
  • Improve risk management using forecasts based on learning patterns in high volumes of historical and real-time data
  • Quickly identify fraud by matching continuously monitored data to learned behavioral patterns
  • Improve investment performance by rapidly digesting current events, monitoring and learning market patterns, and making fast investment decisions
  • Personalize banking with smart chatbots and customized financial advice
  • Automate processes to read documents, verify data, and generate reports.
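To make the fraud item above concrete, here is a minimal sketch of out-of-pattern detection, using a simple z-score on transaction amounts as a stand-in for the richer behavioral patterns an AI would learn (all numbers are illustrative):

```python
import statistics

def flag_anomalies(history, new_amounts, z_cutoff=3.0):
    """Flag transactions far outside a customer's usual spending pattern.
    A z-score on amounts stands in for learned behavioral patterns."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [amt for amt in new_amounts if abs(amt - mean) / stdev > z_cutoff]

# Typical grocery-sized purchases, then one wildly out-of-pattern charge.
history = [42.0, 38.5, 55.0, 47.25, 60.0, 44.0, 51.5]
flag_anomalies(history, [49.0, 4800.0])  # only the second is flagged
```

A learned model goes further, conditioning on merchant, location, time of day, and sequences of transactions, but the principle is the same: continuously compare new data to learned patterns.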

To make these improvements using AI, a financial organization needs to undertake the sort of activities needed to introduce any new software into their operations and products, such as:

  • Establish strategic priorities and budgets
  • Clarify and communicate objectives and plans with stakeholders
  • Work with software developers/vendors/users to establish and carry out software/system development projects
  • Create/modify procedures and organizations to take advantage of the new software
  • Hire, train, retrain the workforce as needed
  • Monitor results and adapt as required.

What are the special challenges AI brings to these activities?

The first challenge is AI’s high profile. Managers feel compelled to catch the wave of the future, and workers fear they will lose their jobs. As a consequence:

  • Managers may undertake AI projects with unrealistic expectations. AI can be extremely effective, however only when there is access to large volumes of data relevant to an operational role that truly benefits the organization
  • Employees essential to successful adoption of the new systems may stand in the way or quit if they see AI as a threat.

Clearly due diligence is required in the first case, and effective employee engagement in the second.

A second challenge is an “all or nothing” aspect of AI. To reap the benefits of AI, the core AI technology must be fully integrated with an organization’s IT infrastructure and business operations. Notice in the financial organization example above, how many aspects of the organization could be affected by AI. To successfully integrate AI, an organization must be “all in”. To do this requires particularly high levels of communication, investment, and cross-organizational participation.

A third challenge is that with successful adoption of AI, the requirement for personal growth and change is pervasive, up and down the organization. Leaders, engineers, and operators all need to learn and embrace the changes brought about by AI. For many, this can be an exciting opportunity for career growth and more fulfilling jobs. Others will mourn the lost relevance of hard-won experience. The organization must be prepared to invest in training, re-training, and professional development. The more AI takes over routine data gathering and analysis, the more important ‘soft’ skills will be to every worker.

Finally, a fourth challenge is that even a very capable AI can produce unintended results. For example, although AI-based analysis can lend objectivity to credit decisions, training AIs using historical data can promulgate past biases. Also, when highly-trained AIs encounter situations they have never seen before, the results can be unpredictable. This means AIs need human supervisors, and these supervisors are dealing with a whole new kind of employee!

Next: What kinds of jobs will AI impact?


Will AI Take My Job?

AI is smart – it recognizes our faces, drives our cars, translates our languages, wins at Jeopardy, chess, and Go.

AI is scary – as if movies like Terminator aren’t frightening enough, prominent figures such as Stephen Hawking, Elon Musk, and Bill Gates have expressed strong concern that AI might eventually lead to disaster.

And AI is big, too. CEOs across the business spectrum are putting AI in their strategic plans, because they see the competitive advantage being gained by early adopters. An Oxford University study in 2013 predicted that half of US job categories are “at high risk of automation”.

No wonder people look over their shoulders, waiting to be overtaken by some tireless AI with access to unthinkable amounts of digital data and unbelievable computational power!

There is no question that AI is revolutionizing organizations and the world of work. It continues the digital revolution, which disrupted publication, photography, movies, music, telephones, business meetings and more, even creating new definitions of ‘community’ and ‘coworker’.

But thinking of the AI revolution as machines replacing people is a misleading oversimplification, just as it was earlier in the digital revolution. Consider this:

  • Just like a pocket calculator, AIs are ‘superhuman’ in very narrow specialties. Even Watson, the seemingly ‘wise’ IBM Jeopardy champion, is merely a master at learning statistical patterns and relationships in vast collections of human-composed documents. There is enough of a trace of human knowledge in these millions of documents to allow Watson to win at Jeopardy, but Watson does not possess anything like the practical knowledge of a human, even a child.
  • All human jobs require adaptability, creativity, and social interactions that are way beyond AI in the foreseeable future. We may think what makes us special in our job is a rare talent for data collection and analysis, but that talent would be nothing but a parlor trick without the human ability to apply, adapt, learn, and interact.

These disruptive AI machines need people. In the first place, they need people to build and maintain them. In the second place, the masterfully calculated result an AI might give can only be just a part of any job. The AI’s part may be pivotal, enabling, competitive, disruptive. Can it do your job? No. Will it change your job? Yes.

So as AIs come after our jobs, I think we have a choice. If we insist on doing our job the way it’s always been done, we might lose our job. If we embrace AI as the empowering tool it can be, we can elevate our job to one that contributes more value to our organization and frees us to do work that is more challenging and satisfying.


Is AI an Existential Threat?

“Worrying about AI evil superintelligence today is like worrying about overpopulation on the planet Mars. We haven’t even landed on the planet yet!”

Andrew Ng, Is AI an Existential Threat to Humanity?

Where will AI take us? One school of thought envisions the progression of AI first to human-level Artificial General Intelligence, then on to Superintelligence, by virtue of AI’s ability to self-improve. Once AIs are superintelligent, the speculation continues, humankind will essentially be at the mercy of these superior beings, who may well decide against humans in favor of their own goals. This may sound fanciful, however prominent figures such as Stephen Hawking, Elon Musk, and Bill Gates are among those sounding the alarm that AI might eventually lead to disaster.

Yes, AI can be dangerous – autonomous weapons, social manipulation, fraud. However I think experience with AI so far gives us no evidence that superintelligent AIs will become an existential threat to humankind. Here are some of the reasons:

  • Today’s AI is not in the same league as human intelligence
  • Human intelligence is intertwined with today’s AI
  • Human-level intelligence can’t be achieved simply by scaling up today’s AI on faster computers
  • Even a superintelligent AI would not seek human domination.

Today’s AI is not in the same league as human intelligence

Today’s AIs are impressive. People have made it their life’s work to learn the game of Go, and AlphaGo-Zero learned to play brilliantly and beat any human opponent in just 40 days, through trial and error. AIs have learned to recognize spoken language by mapping patterns of speech to patterns of text, without being given any information about the things and actions and ideas represented by the spoken words, nor any information about the structure of language.

Playing Go and understanding language are certainly evidence of intelligence in humans. But when AIs accomplish these feats, they are drawing on capabilities that are very narrowly specialized and shallow compared to human intelligence.

A single human has the capability to learn pretty much anything they need to know about their environment, from the microscopic to things light-years away. An AI can become an ‘expert’ only in a very limited domain. The “universe” “understood” by AlphaGo-Zero is a 19 × 19 game board, black and white stones, and the simple rules of Go.

The AI community got pretty excited when a single AI mastered 57 Atari video games. Parents might be impressed if their child became an Atari ace, but they pretty much take it for granted that their child might master reading, writing, arithmetic, music, dance, sports, ….

A narrow superhuman capability is not intelligence. Even electronic calculators are superhuman in a sense – they can do arithmetic much faster than we can. But no one thinks of them as ‘intelligent’.

While today’s neural networks may be deep, what they learn is shallow compared to human learning. When a child learns words, the words have meaning in the physical world. An AI can do a great job of mapping spoken words to text, but the AI is only mapping patterns of sound to patterns of text, without any regard to what words mean.

The depth of human understanding is key to human intelligence. That depth allows humans to become competent in many things. It also allows children to learn words from a small number of examples. The voice-to-text AI needed 3 million audio samples and 220 million text samples to learn its mappings!

Human intelligence is intertwined with today’s AI

AIs function in a context totally created by humans. To enable machine learning, humans must first design the machine, structuring perhaps billions of elements controlled by adjustable parameters. It has taken decades of human engineering to find good designs, designs that can, with the right parameters, effectively map input data to recognized patterns. Humans must then run many examples through the neural network to discover parameter values that lead to accurate recognition. The machine ‘learns’ only because humans have developed algorithms to adjust parameters to get better and better results.

Note that AI learning does not take place in the physical world occupied by humans, but instead in artificial worlds created by humans: visual scenes converted to numbers by digital cameras, sound converted to time-frequency arrays by electronics, online retail patterns created by a vast online infrastructure.

Human-level intelligence can’t be achieved simply by scaling up today’s AI on faster computers

It has been simply remarkable how far AI has advanced in the last few years, as artificial neural networks have become bigger and faster – self-driving cars, language translation, personal assistants, advanced robots. Artificial neural networks of larger and larger size have become practical (175 billion parameters!), and the larger the networks, the more the networks seem to be able to learn.

And the way modern deep neural networks learn is impressive. For example, in the old days, the relatively small neural networks built to recognize objects in images relied on humans to give them a significant head start: engineers would write programs that extracted edges, objects, and other features in order to simplify the patterns the networks had to learn to recognize. Modern networks don’t need that head start – they can take digital picture elements directly as inputs, figuring out for themselves what features are important. And they perform far better than earlier networks.

Some argue that since intelligence is based on computation (even the human brain is essentially an organic computer), intelligence is limited only by computational power. Therefore as human-built computers continue their exponential increase in power, it’s only a matter of time before machines will match and then exceed human intelligence.

I (and others) disagree. I think we have no evidence that achieving super intelligence is just a matter of more powerful computers and more elaborate versions of today’s AI. As discussed earlier in this post, there are significant differences between the machine and human versions of ‘intelligence’ and ‘learning’. The gap between machines and humans is too large to expect achievement of even human-level intelligence, let alone superintelligence, by just doing ‘more of the same’. An analogy would be claiming the inevitability of faster than light travel because of the rapid rise in the top speed of human-made vehicles: from the 36 mph steam locomotive ‘Rocket’ 200 years ago, to the 90,000 mph Juno spacecraft today.

Even a superintelligent AI would not seek human domination

A more fundamental error is in thinking that future AIs and humans would act like two tribes competing for world domination. Predicting that AIs and humans would fight each other in the arena of world domination requires mistaking AIs for humans and humans for AIs.

Humans have evolved to live on earth with other humans using conscious and unconscious cognitive capabilities. Our interactions with our environment and each other lead to complex motivations and behaviors – writing symphonies, engineering AIs, seeking power, waging war. The part of our cognitive abilities we capture in AI is just a small part – the kind of problem solving we can understand and model.

Imagine that part of your cognitive ability – let’s say the ability to play chess or understand languages – were suddenly amplified thanks to a new drug. Should your neighbors become worried that you will become a bully, or start trying to dupe them out of money?

Of course some people, finding themselves suddenly smarter, might apply their new capability to crime. However, these would be people already inclined to crime, using their enhanced intelligence as a new tool. Higher intelligence does not increase inclination toward crime. (In fact, human IQ and crime are negatively correlated.)

If smart AIs of the future become dangerous, it won’t be because they are super smart problem solvers. It will be because they are designed by humans to be dangerous. Motivation to dominate humanity is not an automatic consequence of superintelligence, even if it could be achieved.

Another barrier to AI world domination is that agency – getting things done in the world – takes more than intelligence. Materials must be acquired, artifacts fashioned, millions of details coordinated in the physical world. Even if an AI did achieve superintelligence and decide to try to dominate humanity, that would far from guarantee it enough agency in the real world to succeed.

Superintelligence?

The thought of AIs that are somehow superhumanly intelligent yet very human-like in their vices is certainly chilling. Just like faster-than-light travel, it makes a great device for science fiction. However, I think our worries are better spent on more plausible threats.