No one needs reminding of how modern technologies disrupt industries. But technology is also disrupting business functions like supply chain, finance, procurement, human resources, talent management, and organizational change. Artificial Intelligence (AI) is one of the technologies rewriting the playbook for many professionals, and its impact will continue to grow. The challenge for enterprises is to look past the hype and separate myth from reality, enabling them to better understand the opportunity to apply AI to the business of improving business performance.
In talent management, AI is already at work today helping employees design career paths and navigate the corporate ladder without human intervention. These business solutions use natural language processing (NLP) to read and digest a person’s CV, LinkedIn profile, and other data from the employee’s profile. Using machine learning, the system can generate likely career paths and guide employees about how to navigate those paths (e.g., professional development courses or mentors).
This is just one example of AI’s benefit to HR and employees, but the technology can also pose some ethical and other perils. That’s why it’s vital to understand the basics of AI – how it works and how it can be applied – which we’ll address during this three-part series. In this post, we explore AI and how it works. Subsequent posts will address the kind of questions an organisation needs to ask when meeting a solution provider about an AI-enabled application, what introducing AI truly means for an organisation from a people perspective, and the role talent and change management teams can play in this process.
A Little Background on AI
Artificial intelligence is not new. In fact, computer scientists and technologists have been working on the AI challenge since the 1950s. But for decades AI was more talked about than used, until three of today’s most important technologies became mainstream: cloud computing, big data, and mobile devices. Together, these have made AI practical at scale.
AI systems demand lots of computing resources, which is why cloud computing has been critical to the development of contemporary AI. AI engines reside in the cloud and are accessed using application programming interfaces (APIs).
Big data is fuel for AI. The more data an AI system has, the more likely it will be able to uncover insights and correlations that can help with decision making. Unlike systems of the past, AI can process unstructured data – data that is not easily captured in the rows and columns of a spreadsheet, for example. This is crucial, because 80% of all data is unstructured (images, sounds, wearables, sensors, emails, and videos).
Meanwhile, computers, smartphones, and other internet-enabled devices – sensors, IoT equipment, wearables, and portable scanners – generate the big data that fuels AI engines and systems.
These three resources represent the underpinnings or preconditions of AI. But the ideas behind AI are quite different from the rigidly programmed norms of traditional computing. An AI solution is not programmed in the traditional sense; it is taught to understand and to learn, just like people understand and learn.
Machine learning (ML) is one of AI’s core components. ML makes a correlation between a pattern and an outcome and formulates a hypothesis about that correlation. The system then interacts with a human or machine and receives feedback on the hypothesis. The AI system integrates that feedback into the next hypothesis. This process of continuous refinement is how ML learns to predict future outcomes and events.
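The hypothesise-then-refine loop described above can be sketched in a few lines of code. This is a toy online perceptron, not any particular vendor’s system; the data and function names are purely illustrative.

```python
# A minimal sketch of the learn-from-feedback loop: predict, compare
# with the real outcome, fold the error back into the next hypothesis.

def train_online(examples, learning_rate=0.1):
    """Each example is (features, actual_outcome); outcome is 0 or 1."""
    weights = [0.0] * len(examples[0][0])
    bias = 0.0
    for features, actual in examples:
        # 1. Hypothesis: predict an outcome from the observed pattern.
        score = bias + sum(w * x for w, x in zip(weights, features))
        predicted = 1 if score > 0 else 0
        # 2. Feedback: compare the hypothesis with what actually happened.
        error = actual - predicted
        # 3. Refinement: adjust the model so the next hypothesis is better.
        weights = [w + learning_rate * error * x
                   for w, x in zip(weights, features)]
        bias += learning_rate * error
    return weights, bias

# Toy pattern: the outcome is 1 whenever the first feature is present.
data = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1), ([0, 0], 0)] * 25
weights, bias = train_online(data)

def predict(features):
    return 1 if bias + sum(w * x for w, x in zip(weights, features)) > 0 else 0

print(predict([1, 0]), predict([0, 1]))  # prints "1 0"
```

The point is not the algorithm (real systems use far richer models) but the shape of the process: every piece of feedback nudges the next prediction.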
How do systems learn? Deep learning leverages neural networks that can make a large number of connections and learn from being exposed to vast amounts of data. Deep learning models can be built using both supervised and unsupervised learning. Reinforcement learning, on the other hand, applies feedback algorithms for the system to learn through rewards and penalties.
Another component of AI is natural language processing (NLP). NLP systems understand normal (‘natural’) language and can extract meaning from it. They can also detect core emotions (e.g., anger, joy, fear, disgust, or sadness) and interpret the tone of a person’s language (knowing a customer’s mood, for example, can radically improve customer service). NLP is the basis for sentiment analysis and powers applications like intent recognition, chatbots, mood mapping, and smart summarisation.
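To make the emotion-detection idea concrete, here is a deliberately simplified, lexicon-based sketch. Production NLP systems use trained statistical models rather than hand-made word lists; the lexicon and function names below are invented for illustration.

```python
# A toy illustration of mapping a message to core emotions by matching
# cue words. Real sentiment analysis uses trained models, not lists.

EMOTION_LEXICON = {
    "anger":   {"furious", "outraged", "unacceptable"},
    "joy":     {"delighted", "thrilled", "wonderful"},
    "sadness": {"disappointed", "unhappy", "sorry"},
}

def detect_emotions(message):
    """Return the core emotions whose cue words appear in the message."""
    words = set(message.lower().replace(",", " ").replace(".", " ").split())
    return sorted(e for e, cues in EMOTION_LEXICON.items() if words & cues)

print(detect_emotions("I am furious, this outage is unacceptable."))
# prints "['anger']"
```

Even this crude version shows why the capability matters: a service desk that can flag angry messages automatically can route them to senior agents first.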
There is still a certain amount of anxiety and fear surrounding AI. This technology is often compared to the world described by Ray Kurzweil in his popular book The Singularity Is Near, a universe in which machines evolve to become much more intelligent than the entire human race combined and end up merging with people.
But that level of artificial intelligence – sometimes called ‘Artificial General Intelligence’ – will take decades to arrive (and may never). What today’s practitioners are focusing on is called ‘Artificial Narrow Intelligence’ (ANI). ANI performs tasks limited to a specific context, each handled by a separate AI application. For example, an ANI system can perform specific tasks like calculating the flight path of a drone or predicting the weather.
What AI still can’t do is connect many single tasks and come up with overall solutions. A simple example: the AI in a music-streaming app can recommend songs and artists you might want to hear, but it cannot think laterally to recommend food or drink that might go with the music you are playing.
Common AI Applications in Use Today
Most real-world AI applications are examples of ANI limited to a specific context or use case. Many companies use ANI for sound and image recognition; Spotify and Netflix are good examples in the consumer realm. ANI also powers facial recognition, gesture detection, and digitised voices. For example, consider a wearable digital assistant built into an engineer’s or field worker’s vest. It can use an AI-powered system to listen for ‘wake phrases’ like ‘I need help’ or ‘I can’t figure this out’, then provide support by drawing on available material (e.g., manuals, input from experts) to generate answers.
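The wake-phrase step of such an assistant can be sketched very simply: once speech has been transcribed to text, the system scans it for trigger phrases before escalating to the support engine. The phrase list and function names here are illustrative, not from any real product.

```python
# A simplified sketch of wake-phrase detection on a transcribed
# utterance. Real systems match against audio models, not substrings.

WAKE_PHRASES = ("i need help", "i can't figure this out")

def needs_support(transcript):
    """True if the transcribed speech contains any wake phrase."""
    text = transcript.lower()
    return any(phrase in text for phrase in WAKE_PHRASES)

print(needs_support("Hmm, I can't figure this out at all"))  # prints "True"
print(needs_support("All good here"))                        # prints "False"
```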
Sentiment analysis is another popular entry point for companies experimenting with AI. Its most common use case is call centres and service desks dealing with large numbers of emails, calls, and messages. Again, thanks to NLP, the system understands language as it’s written or spoken. It is able to link a message to core emotions, extract tone, and map the communications accordingly. This is also the kind of AI that chatbots, automated translation, and smart summarisation use.
AI’s ability to correlate a pattern with an outcome, form a hypothesis, and learn from feedback underpins predictive analytics. That’s how AI systems can predict future events, assess situations, and anticipate outcomes. Predictive AI is typically used in applications like predictive maintenance (IoT) and intelligent search.
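A minimal, hedged example of prediction from historical patterns: fitting a least-squares trend line to past sensor readings and projecting the next one, as a predictive-maintenance system might at its simplest. The vibration readings below are invented for illustration.

```python
# A toy predictive-analytics sketch: learn a trend from past readings
# and forecast the next one. Real systems use far richer models.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical vibration readings from a machine, sampled hourly.
hours    = [1, 2, 3, 4, 5]
readings = [2.1, 2.3, 2.4, 2.6, 2.8]

slope, intercept = fit_line(hours, readings)
forecast_hour_6 = slope * 6 + intercept
print(round(forecast_hour_6, 2))  # prints "2.95"

# A maintenance rule might then act on the forecast:
if forecast_hour_6 > 3.0:
    print("Schedule maintenance before the next shift")
```

The pattern is the same one described above: past correlations become a hypothesis, and each new reading refines the forecast.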
Siri, the voice assistant in Apple’s iOS devices, is one of the most common and widely used AI-based applications. Siri uses AI to recognise a particular human voice, parse the query to understand its meaning, and provide answers. But Siri is not that intelligent. Its answers generally lack nuance and feel more like programmatic responses, which they are.
But a truly intelligent voice assistant – the Siri of the future – would be much more powerful. It would understand more context by mining a user’s schedule, preferences, and past experiences to be more helpfully predictive, and it would learn to anticipate questions before the user even asks. Imagine a user on a business trip: Siri wakes the person up in a different time zone, suggests a place for breakfast within two blocks of the hotel, and provides an overview of the day’s schedule without prompting. That would be a more advanced level of achievement – and the kind of AI that will really change how people think about the technology.
What’s Happening Now
In the pandemic-related shift to remote working, many companies were driven to put more and more data and computing power into the cloud. With that hurdle cleared, they are redoubling their efforts to embrace AI, the only technology able to mine insight from vast pools of largely unstructured data.
That’s why AI is the next big item on the CXO’s agenda. And it will change the way people work and the way enterprises manage people. Thoughtful enterprises will pay close attention to potential biases in AI-based solutions and to their ethical use – one element where HR and talent management professionals have a particularly important role. Automation, as we’ll see in the next posts in this series, has human consequences, which also underscores the importance of HR and change management in this brave new world. Organisations cannot afford to forget the people side and the work that needs to be done to make their talent ready for this unprecedented technological shift.
Post #2 will focus on understanding what ‘AI-enabled’ means and the questions to ask of business solution vendors that claim their solutions are AI-enabled.