Which Jobs Will AI Impact?

The workplace is being disrupted by Artificial Intelligence. A 2019 report by McKinsey Global Institute projects that by 2030, up to 39 million jobs in the US will be displaced by AI. What is the nature of this displacement? In the shadow of AI, how can workers make sure they stay employed and organizations make sure they have the skills they need?

First let’s talk about what AI does for an organization, how it’s used. Recall that today’s AI uses massive artificial neural networks, trained with massive amounts of data. The ‘superpower’ these systems bring to the table is their ability to recognize very complex patterns in digital data. AIs are essentially pattern recognizers. And they are a good fit to today’s world, much of which is represented online as digital data: images, video, business reports, news, financial transactions, customer preferences, ….

AI’s ability to recognize patterns can do lots of useful work. For example AI can be used to augment or replace a human worker’s eyes, ears, brain, or even arms and legs in performing tasks such as:

  • Industrial inspection, inventory management and warehousing, farming, security, transportation, and elder care
  • Data entry, customer service, market analysis, travel booking, legal document review, and sentiment analysis
  • Personal assistance, language translation, news and weather reporting, image captioning, and document summarization
  • Crime pattern detection, materials and drug discovery, credit checking and fraud detection, business lead generation, and business forecasting
  • Warehousing, factory assembly, taxi service, delivery service, and long-haul trucking.

Where does this leave us humans?

A short answer: as with other workplace revolutions such as steam engines, mass production, computers, or the internet, some jobs will disappear, new jobs will appear, and many jobs will morph into something different.

AI will tend to replace human labor in jobs that involve routine mental or physical work. For example the 2019 McKinsey report projects that the number of office support jobs in the US will decline from 21 million in 2017 to 18 million in 2030 (an 11% loss), while the workforce will grow 9% over the same period for all jobs in the categories analyzed. Factory jobs, over the same period, are expected to decline by 5%.

Strong job growth 2017 – 2030 is expected in occupational categories that emphasize human relations (health care, 36%; business and legal professionals, 20%; education and training, 18%), as well as in STEM professions (37%).

Of course in a growing economy there can be job growth even in job categories where AI will replace a lot of human labor. For example, McKinsey estimates that 25% of today’s work in customer service and sales will be replaceable by AI by 2030; however the number of jobs in this occupational category will still grow by 10% during that time.

Across all job categories, McKinsey estimates that 25% of the human labor expended in 2017 will be replaceable by AI by 2030, with replacement potential in individual categories ranging from 10% to 39%. This means few of us will be far from AI’s impact.

How can workers and organizations get ready for the AI transition? A 2019 MIT Sloan School report highlights the need for employers and workers to create and maximize the motivation to learn and adapt over their lifetimes. In a previous blog post I address the need for organizations to adopt an “all-in” approach to AI, integrating organization, IT, and operations.

At the individual level, many workers will need to become familiar with and learn to work with AI. Although AI technology is being developed through the work of specialized researchers with advanced university degrees, the AI research community has made learning about and using the technology surprisingly accessible.

For example Google, Amazon, and Apple all offer free cloud-based environments where workers can learn about AI, and develop and run business tools, without an extensive AI background. A 2019 Northeastern University/Gallup study found that organizations are increasingly developing the internal AI skills they need by training existing staff or through non-degreed interns.

Yes, AI is disrupting the workplace. It is accelerating automation. Workers and organizations must adapt and learn, and then adapt and learn again. But this challenge is accompanied by unparalleled opportunity.

So, how is it going so far? Interesting data point: by analyzing job postings, ZipRecruiter estimates that AI created about three times as many jobs as it destroyed in 2018.

How Will AI Impact Organizations?

We can think of AI as software that helps organizations more effectively reach their goals, e.g., by reducing costs and increasing revenues.

Gaining benefits from AI, or any other innovative technology, requires organizational change. New strategies. New job descriptions. New workflows. New org charts. Training.

What makes AI different? Are the challenges faced by organizations adopting AI different from those encountered in adopting other software innovations?

After all, using computers to revolutionize organizations is nothing new. IBM developed the SABRE reservation system for American Airlines back in 1964, replacing manual file cards with a system that could handle 83,000 reservation requests a day. Pretty disruptive!

So how is AI changing organizations? Let’s take an example – the financial industry. AI’s ability to find patterns in mountains of data can help financial organizations:

  • Make more accurate and objective credit decisions by accounting for more complex relationships among a wider variety of factors
  • Improve risk management using forecasts based on learning patterns in high volumes of historical and real-time data
  • Quickly identify fraud by matching continuously monitored data to learned behavioral patterns
  • Improve investment performance by rapidly digesting current events, monitoring and learning market patterns, and making fast investment decisions
  • Personalize banking with smart chatbots and customized financial advice
  • Automate processes to read documents, verify data, and generate reports.

To make these improvements using AI, a financial organization needs to undertake the sort of activities needed to introduce any new software into their operations and products, such as:

  • Establish strategic priorities and budgets
  • Clarify and communicate objectives and plans with stakeholders
  • Work with software developers/vendors/users to establish and carry out software/system development projects
  • Create/modify procedures and organizations to take advantage of the new software
  • Hire, train, retrain the workforce as needed
  • Monitor results and adapt as required.

What are the special challenges AI brings to these activities?

The first challenge is AI’s high profile. Managers feel compelled to catch the wave of the future, and workers fear they will lose their jobs. As a consequence:

  • Managers may undertake AI projects with unrealistic expectations. AI can be extremely effective, but only when there is access to large volumes of data relevant to an operational role that truly benefits the organization
  • Employees essential to successful adoption of the new systems may stand in the way or quit if they see AI as a threat.

Clearly due diligence is required in the first case, and effective employee engagement in the second.

A second challenge is an “all or nothing” aspect of AI. To reap the benefits of AI, the core AI technology must be fully integrated with an organization’s IT infrastructure and business operations. Notice in the financial organization example above, how many aspects of the organization could be affected by AI. To successfully integrate AI, an organization must be “all in”. To do this requires particularly high levels of communication, investment, and cross-organizational participation.

A third challenge is that with successful adoption of AI, the requirement for personal growth and change is pervasive, up and down the organization. Leaders, engineers, and operators all need to learn and embrace the changes brought about by AI. For many, this can be an exciting opportunity for career growth and more fulfilling jobs. Others will mourn the lost relevance of hard-won experience. The organization must be prepared to invest in training, re-training, and professional development. The more AI takes over routine data gathering and analysis, the more important ‘soft’ skills will be to every worker.

Finally, a fourth challenge is that even a very capable AI can produce unintended results. For example, although AI-based analysis can lend objectivity to credit decisions, training AIs using historical data can promulgate past biases. Also, when highly-trained AIs encounter situations they have never seen before, the results can be unpredictable. This means AIs need human supervisors, and these supervisors are dealing with a whole new kind of employee!
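As a toy illustration of that first point, consider a sketch with entirely made-up data (no real lender or dataset is implied): a ‘model’ that objectively learns from biased historical credit decisions will faithfully reproduce the bias.

```python
# Toy illustration (hypothetical data): a model trained on biased historical
# decisions reproduces the bias, even though it treats its training data
# with perfect 'objectivity'.

# Historical records: (neighborhood, income, approved?). Applicants from
# neighborhood 'B' were denied in the past regardless of income.
history = [
    ("A", 40, True), ("A", 60, True), ("A", 30, False),
    ("B", 40, False), ("B", 60, False), ("B", 80, False),
]

def train(records):
    # 'Learn' the historical approval rate for each neighborhood.
    stats = {}
    for hood, _, approved in records:
        yes, total = stats.get(hood, (0, 0))
        stats[hood] = (yes + int(approved), total + 1)
    return stats

def predict(stats, hood):
    yes, total = stats[hood]
    return yes / total >= 0.5   # approve if historically approved

model = train(history)
print(predict(model, "A"))  # True: group A applicants get approved
print(predict(model, "B"))  # False: the past bias against B is learned and repeated
```

The model never sees the word ‘bias’; it simply finds the pattern humans put into the data, which is exactly why human supervision of training data and outcomes matters.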

Next: What kinds of jobs will AI impact?

Will AI Take My Job?

AI is smart – it recognizes our faces, drives our cars, translates our languages, wins at Jeopardy, chess, and Go.

AI is scary – as if movies like Terminator aren’t frightening enough, prominent figures such as Stephen Hawking, Elon Musk, and Bill Gates have expressed strong concern that AI might eventually lead to disaster.

And AI is big, too. CEOs across the business spectrum are putting AI in their strategic plans, because they see the competitive advantage being gained by early adopters. An Oxford University study in 2013 predicted that nearly half of US jobs are at high risk of automation.

No wonder people look over their shoulders, waiting to be overtaken by some tireless AI with access to unthinkable amounts of digital data and unbelievable computational power!

There is no question that AI is revolutionizing organizations and the world of work. It continues the digital revolution, which disrupted publication, photography, movies, music, telephones, business meetings and more, even creating new definitions of ‘community’ and ‘coworker’.

But thinking of the AI revolution as machines replacing people is a misleading oversimplification, just as it was earlier in the digital revolution. Consider this:

  • Just like a pocket calculator, AIs are ‘superhuman’ in very narrow specialties. Even Watson, the seemingly ‘wise’ IBM Jeopardy champion, is at bottom a master at learning statistical patterns and relationships in vast collections of human-composed documents. There is enough of a trace of human knowledge in these millions of documents to allow Watson to win at Jeopardy, but Watson does not possess anything like the practical knowledge of a human, even a child.
  • All human jobs require adaptability, creativity, and social interactions that are way beyond AI in the foreseeable future. We may think what makes us special in our job is a rare talent for data collection and analysis, but that talent would be nothing but a parlor trick without the human ability to apply, adapt, learn, and interact.

These disruptive AI machines need people. In the first place, they need people to build and maintain them. In the second place, the masterfully calculated result an AI might give can be only a part of any job. The AI’s part may be pivotal, enabling, competitive, disruptive. Can it do your job? No. Will it change your job? Yes.

So as AIs come after our jobs, I think we have a choice. If we insist on doing our job the way it’s always been done, we might lose our job. If we embrace AI as the empowering tool it can be, we can elevate our job to one that contributes more value to our organization and frees us to do work that is more challenging and satisfying.

Is AI an Existential Threat?

“Worrying about AI evil superintelligence today is like worrying about overpopulation on the planet Mars. We haven’t even landed on the planet yet!”

Andrew Ng, Is AI an Existential Threat to Humanity?

Where will AI take us? One school of thought envisions the progression of AI first to human-level Artificial General Intelligence, then on to Superintelligence, by virtue of AI’s ability to self-improve. Once AIs are superintelligent, the speculation continues, humankind will essentially be at the mercy of these superior beings, who may well decide against humans in favor of their own goals. This may sound fanciful, but prominent figures such as Stephen Hawking, Elon Musk, and Bill Gates are among those sounding the alarm that AI might eventually lead to disaster.

Yes, AI can be dangerous – autonomous weapons, social manipulation, fraud. However I think experience with AI so far gives us no evidence that superintelligent AIs will become an existential threat to humankind. Here are some of the reasons:

  • Today’s AI is not in the same league as human intelligence
  • Human intelligence is intertwined with today’s AI
  • Human-level intelligence can’t be achieved simply by scaling up today’s AI on faster computers
  • Even a superintelligent AI would not seek human domination.

Today’s AI is not in the same league as human intelligence

Today’s AIs are impressive. People have made it their life’s work to learn the game of Go, and AlphaGo Zero learned through trial and error to play brilliantly and beat any human opponent in just 40 days. AIs have learned to recognize spoken language by mapping patterns of speech to patterns of text, without being given any information about the things, actions, and ideas represented by the spoken words, nor any information about the structure of language.

Playing Go and understanding language are certainly evidence of intelligence in humans. But when AIs accomplish these feats, they are drawing on capabilities that are very narrowly specialized and shallow compared to human intelligence.

A single human has the capability to learn pretty much anything they need to know about their environment, including things microscopic and light-centuries away. An AI can become an ‘expert’ only in a very limited domain. The “universe” “understood” by AlphaGo Zero is a 19 × 19 game board, black and white stones, and the simple rules of Go.

The AI community got pretty excited when a single AI mastered 57 Atari video games. Parents might be impressed if their child became an Atari ace, but they pretty much take it for granted that their child might master reading, writing, arithmetic, music, dance, sports, ….

A narrow superhuman capability is not intelligence. Even electronic calculators are superhuman in a sense – they can do arithmetic much faster than we can. But no one thinks of them as ‘intelligent’.

While today’s neural networks may be deep, what they learn is shallow compared to human learning. When a child learns words, the words have meaning in the physical world. An AI can do a great job of mapping spoken words to text, but the AI is only mapping patterns of sound to patterns of text, without any regard to what words mean.

The depth of human understanding is key to human intelligence. That depth allows humans to become competent in many things. It also allows children to learn words from a small number of examples. The voice-to-text AI needed 3 million audio samples and 220 million text samples to learn its mappings!

Human intelligence is intertwined with today’s AI

AIs function in a context totally created by humans. To enable machine learning, humans must first design the machine, structuring perhaps billions of elements controlled by adjustable parameters. It has taken decades of human engineering to find good designs, designs that can, with the right parameters, effectively map input data to recognized patterns. Humans must then run many examples through the neural network to discover parameter values that lead to accurate recognition. The machine ‘learns’ only because humans have developed algorithms to adjust parameters to get better and better results.
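The human-designed ‘learning’ loop described above can be sketched in a few lines. This is a toy, hypothetical example (a one-parameter model fit to made-up data), not any real AI system, but every piece the machine uses – the model structure, the examples, the adjustment rule – comes from humans:

```python
# Toy illustration of machine 'learning': humans supply the model structure,
# the training examples, and the parameter-adjustment rule; the machine
# just applies the rule repeatedly.
# A one-parameter model y = w * x, fit to examples drawn from y = 3 * x.

examples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # human-chosen training data

w = 0.0      # the adjustable parameter, initially arbitrary
lr = 0.01    # learning rate, chosen by a human engineer

for _ in range(1000):                # human-designed training loop
    for x, target in examples:
        error = w * x - target       # how wrong is the current parameter?
        w -= lr * error * x          # gradient step: nudge w to reduce error

print(round(w, 2))  # w converges toward 3.0
```

A modern neural network differs from this sketch mainly in scale – billions of parameters instead of one – not in the nature of the ‘learning’.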

Note that AI learning does not take place in the physical world occupied by humans, but instead in artificial worlds created by humans: visual scenes converted to numbers by digital cameras, sound converted to time-frequency arrays by electronics, online retail patterns created by a vast online infrastructure.

Human-level intelligence can’t be achieved simply by scaling up today’s AI on faster computers

It has been simply remarkable how far AI has advanced in the last few years, as artificial neural networks have become bigger and faster – self-driving cars, language translation, personal assistants, advanced robots. Artificial neural networks of larger and larger size have become practical (175 billion parameters!), and the larger the networks, the more the networks seem to be able to learn.

And the way modern deep neural networks learn is impressive. For example, in the old days, the relatively small neural networks built to recognize objects in images relied on humans to give them a significant head start: engineers would write programs that extracted edges, objects, and other features in order to simplify the patterns the networks had to learn to recognize. Modern networks don’t need that head start – they can take digital picture elements directly as inputs, figuring out for themselves what features are important. And they perform far better than earlier networks.

Some argue that since intelligence is based on computation (even the human brain is essentially an organic computer), intelligence is limited only by computational power. Therefore as human-built computers continue their exponential increase in power, it’s only a matter of time before machines will match and then exceed human intelligence.

I (and others) disagree. I think we have no evidence that achieving superintelligence is just a matter of more powerful computers and more elaborate versions of today’s AI. As discussed earlier in this post, there are significant differences between the machine and human versions of ‘intelligence’ and ‘learning’. The gap between machines and humans is too large to expect achievement of even human-level intelligence, let alone superintelligence, by just doing ‘more of the same’. An analogy would be claiming that faster-than-light travel is inevitable because of the rapid rise in the top speed of human-made vehicles: from the 36 mph steam locomotive ‘Rocket’ 200 years ago, to the 90,000 mph Juno spacecraft today.

Even a superintelligent AI would not seek human domination

A more fundamental error is in thinking that future AIs and humans would act like two tribes competing for world domination. Predicting that AIs and humans would fight each other in the arena of world domination requires mistaking AIs for humans and humans for AIs.

Humans have evolved to live on earth with other humans using conscious and unconscious cognitive capabilities. Our interactions with our environment and each other lead to complex motivations and behaviors – writing symphonies, engineering AIs, seeking power, waging war. The part of our cognitive abilities we capture in AI is just a small part – the kind of problem solving we can understand and model.

Imagine that part of your cognitive ability – let’s say the ability to play chess or understand languages – were suddenly amplified thanks to a new drug. Should your neighbors become worried that you will become a bully, or start trying to dupe them out of money?

Of course some people, finding themselves suddenly smarter, might apply their new capability to crime. However, these would be people already inclined to crime, using their enhanced intelligence as a new tool. Higher intelligence does not increase inclination toward crime. (In fact, human IQ and crime are negatively correlated.)

If smart AIs of the future become dangerous, it won’t be because they are super smart problem solvers. It will be because they are designed by humans to be dangerous. Motivation to dominate humanity is not an automatic consequence of superintelligence, even if it could be achieved.

Another barrier to AI world domination is that agency – getting things done in the world – takes more than intelligence. Materials must be acquired, artifacts fashioned, millions of details coordinated in the physical world. Even if an AI did achieve superintelligence and decide to try to dominate humanity, that would be far from a guarantee of enough agency in the real world to succeed.

Superintelligence?

The thought of AIs that are somehow superhumanly intelligent yet very human-like in their vices is certainly chilling. Just like faster-than-light travel, it makes a great device for science fiction. However, I think our worries are better spent on more plausible threats.

Five Ways Engineers Struggle to Become Managers

Engineers can make great managers. They have highly developed problem-solving skills, and they have mastered the knowledge needed to lead technology-based organizations. But the road from engineering to management can be surprisingly rocky. Management requires ways of behaving and thinking that can seem contrary to hard-learned engineering values. This can lead to teams of engineers frustrated by managers who can’t manage, and to newly promoted managers in positions of bewildering stress.

Engineers understand that management is different from engineering. New managers learn they need to set objectives, delegate, give feedback, handle finances, etc. The concepts are easy, make sense, and engineers can convincingly articulate them. But engineers frequently struggle with applying them. Why? 

One reason is that engineers try to manage while still thinking like an engineer. They follow the checklist of good management practices, but they approach management with deeply-held engineering values. An engineer who wants to manage must not only learn new skills, but also learn to look at things from a new perspective.  Here are five areas where mindset can torpedo even the best engineer’s attempt to manage: 

  • Identity: What is My Job? An engineer is focused on product, tools, and technical expertise. A manager must focus on people – roles, relationships, organization. 
  • Independence: How do I do my job? Engineers treasure independent thinking and personally coming up with the best idea or solution. A manager needs to treasure effective teamwork and success of the group.
  • Aesthetics: What does excellence look like? Excellent engineering is elegant, flawless, uncompromising, and efficient. Excellent management often compromises, trading performance for affordability, efficiency for buy-in, and perfection for what resource constraints allow.
  • Influence: How do I work with others? Engineers seek unambiguous, data-driven interactions with others, to communicate and resolve issues. Managers engage in rich personal interactions to bridge barriers to communication, pool multiple perspectives, explore ambiguity, and achieve consensus.
  • Learning: How do I develop as a professional? An engineer’s growth is driven by expanding explicit, technical knowledge in an area, so the engineer can operate there without making mistakes.  Managers become good managers by managing, making mistakes, and adding to their base of tacit knowledge. 

It’s not that the engineering mindset is not useful to managers. An engineering background can give a manager a real advantage. And of course, like managers, engineers need to relate to people and learn from experience. However, an engineer stepping into management needs to know that his or her job involves not just new duties, but new ways of thinking. And these new ways of thinking are going to be especially hard to learn, because they can grate against engineering sensibilities. Engineers becoming managers need to honor their engineering instincts, but recognize their limitations and make sure they don’t get in the way.

–Tom

The Thinking Organization

This pyramid summarizes the Thinking Teams view of a successful organization, an organization effectively moving toward its goals, powered by collaborating teams of wholehearted individuals. The pyramid represents building from foundational elements; its layers show focus areas for organizational improvement and growth. Starting from the base:

Self-Aware, Productive Individuals – a wonderful alignment happens when an organization helps its members participate in a way that connects with and acknowledges their full selves, not just the conscious tip of the mind’s iceberg (The Committee in Your Head, The Power of Alignment)

Open, Respectful Person-Person Communication – when teams learn to communicate, not from the deceptively efficient but narrow perspective of concrete positions, but from an expanded awareness of shared goals and the potential for mutual learning, each team member moves closer to full potential and contribution (Apples and Oranges, Another Iceberg)

Focused, Agile, Managed Teams – when the natural tools of human cooperation are focused and enabled by agile structure, the team becomes greater than the sum of its members (Hive Mind, Bendable Concrete, Being Rational About Irrationality)

Thriving Enterprise Led With Vision – when leaders embrace the sometimes contradictory facets of their organization (Corrective Lenses), they can expand their options and tailor effective action by tending to all four aspects of their organization*, seeing it as:

  • “Factory” – creating structure that mutually supports individual and organization (Happy Cogs, Taking Chances)
  • “Family” – holding and fostering organizational values that transcend the daily To-Do list (The 26 Hour-a-Day Manager)
  • “Arena” – creating a transparent, fair space to engage divergent interests and allocate scarce resources (Clean Politics)
  • “Theater” – articulating the symbols that connect an organization’s mission to meaning and value for team members and stakeholders (Theater of Symbols)

–Tom

*Bolman, Lee G., and Deal, Terrence E. Reframing Organizations: Artistry, Choice, and Leadership. San Francisco: Jossey-Bass, 2003.