AI (Artificial Intelligence) is getting a lot of talk right now. This is the first of two posts we are planning on the subject - it's not a short topic. In this post we look at AI in general, what you can do with it, and whether it really deserves all the hype it is getting in terms of delivering useful services now and in the future. A follow-up post will look at AI with particular reference to Recruitment and HR. If you work in the HR and Recruitment space, the two posts are broadly complementary. For those not in recruitment or HR, the second post may still prove useful, as it will provide a practical guide to the business application of AI.

In understanding AI, it is worth standing back a little and taking some care to understand what it is, and in turn what it is not - or at least what it may not be yet.

Ask people what AI is and you will get a range of answers. In broad terms, however, they all generally refer to a set of technologies that try to imitate or augment human intelligence. You may also often hear the term 'Machine Learning' when people talk about AI. Machine Learning - or ML - is an application of AI built around the idea that machines with access to data can learn for themselves. In lay terms, the two are often used interchangeably.
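To make the 'learning from data' idea concrete, here is a deliberately tiny, illustrative sketch in Python (a toy, not any vendor's actual system): a classifier that is never given explicit rules, only labelled examples, and 'learns' word weights from them.

```python
from collections import Counter

# Toy training data: the machine is only given examples, not rules.
examples = [
    ("win a free prize now", "spam"),
    ("free money click now", "spam"),
    ("meeting moved to monday", "ham"),
    ("lunch on monday?", "ham"),
]

# "Learning": count how often each word appears under each label.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in examples:
    counts[label].update(text.split())

def classify(text):
    # Score a new message by how many of its words each label has seen.
    scores = {label: sum(c[w] for w in text.split()) for label, c in counts.items()}
    return max(scores, key=scores.get)

print(classify("free prize monday"))  # leans 'spam' on this toy data
```

Real ML systems use vastly larger datasets and far more sophisticated statistics, but the shape is the same: behaviour comes from the data, not from hand-written rules.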

Right now AI is 'hot'. Marketers know people are interested in the concept and are keen to talk about it and to associate their products and companies with it. It's a bit like 'Big Data' and 'The Cloud' were a couple of years ago: the terms cropped up everywhere even though people's specific understanding of what they meant, and of their application, was perhaps a little vague. Popular culture does not help, of course; all those sci-fi films of robots and computers taking over from humans feed the AI hype. Will the machines take over? Is it happening right now?

However, what really is the reality? What is the current state of AI? To answer this, let's look at some "AI" application examples that you will likely have come across.

Google/Amazon/Apple. This is what will likely be most people's experience of using AI-type tools. All three companies have well-known tools that are generally referred to as AI and which are accessible to the average person. Google has its 'Assistant' and now its 'Google Home' speaker device, Amazon has its 'Alexa' home speaker device, and Apple has 'Siri' and has just told the market that it too will soon be launching a home speaker device.

In essence, each of these services is broadly similar. They take your spoken commands or questions and seek to do something with them. All these devices/services can answer general-knowledge questions, tell you the weather and dial phone numbers, and - in the case of Google and Amazon, if paired with other household items - switch a TV channel or turn your lights and heating on or off. They can be quite impressive; we particularly like the 'Google Home' speaker device (by way of disclaimer, we are Google fans). However, if you have used any of these tools you will very quickly find that they often don't 'understand' your requests. Whilst they can 'remember' and reuse items you have told them, they don't really show an innate understanding of what is going on.

By way of example, let's look at the Google Home device - arguably the smartest of the three. When you ask it a question, it will most times remember your preceding question and use that as context to interpret your next one. So asking it for local restaurants in, say, Richmond, London will likely get it to read out a list of restaurants in that area. Let's imagine you then ask a follow-up question about what Indian restaurants there are. In answering, Google Home will assume that you mean Indian restaurants in Richmond. Great - that is really useful. It seemed to remember the context, and it's all quite impressive when you first try it. However, take another example: asking it to play music through your wifi-attached HiFi system. There, you will quickly find that you have to remind it every time that you want the music played through the HiFi and not through its own speaker. In essence, it forgets the context of what you are trying to do fairly quickly. So, in human terms, it has not really picked up what you are trying to do - a human would get this instantly. This is a simple example, but it illustrates at a basic level the limitations of many current AI systems. All three (Google, Amazon and Apple) have very interesting tools, but the results of using them can be quite frustrating. Huge sums of money have been spent on them, yet we think it's fair to say they still have a long way to go before they come across as having more than very simple 'intelligence'.
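The remembering-then-forgetting behaviour described above can be sketched as a simple dialogue 'context' where each remembered item has a lifetime measured in conversational turns. This is purely our illustrative guess at the general mechanism, not Google's actual implementation:

```python
# Illustrative sketch (not Google's implementation): a dialogue context
# in which each remembered slot has a time-to-live counted in turns.
class DialogueContext:
    def __init__(self):
        self.slots = {}  # name -> (value, turns_remaining)

    def remember(self, name, value, ttl):
        self.slots[name] = (value, ttl)

    def recall(self, name):
        entry = self.slots.get(name)
        return entry[0] if entry else None

    def next_turn(self):
        # Age every slot; drop the ones whose lifetime has expired.
        self.slots = {n: (v, t - 1) for n, (v, t) in self.slots.items() if t > 1}

ctx = DialogueContext()
ctx.remember("location", "Richmond", ttl=3)   # survives a few turns
ctx.remember("output_device", "HiFi", ttl=1)  # forgotten almost immediately

ctx.next_turn()
print(ctx.recall("location"))       # still "Richmond" - follow-ups work
print(ctx.recall("output_device"))  # None - you must repeat the HiFi request
```

A human, of course, keeps the 'play it on the HiFi' preference indefinitely; a slot-and-lifetime scheme like this forgets it the moment its timer runs out.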

Other areas where you may come across AI are tools called 'Chatbots'. This is where an AI system picks up email or messenger enquiries and seeks to answer them or provide some feedback online or by phone. As you will see in our next post, these are starting to appear in a few HR and recruitment application areas. You will also likely have heard of AI being used to help make decisions around diagnosing illnesses in medical use, or to uncover trends within large datasets. And lastly there are the robots: physical, mechanical machines controlled by computers equipped with AI tools.

What is evident in all these areas, when you look at how the technology actually works, is that the AI tools require huge sets of data in which they can "look for" patterns and thus come up with answers. The datasets needed to power them can be vast. The Google Home device on your kitchen top is quite small and probably cost just over a hundred dollars. However, that is not where the computing power is - that is all back with Google, and the other companies' tools are the same. Your home speaker simply records your voice and feeds it to Google's servers, which then do the actual work. They use algorithms - basically complex rules - to determine what it is you are asking for. If the algorithm determines it has a matching response, it will give it to you or do the task. Otherwise you will get what will become a very common response: "Sorry, I did not understand that". Trust us - you will quickly tire of hearing this. It's not really the same AI that you see in Star Wars with R2-D2...
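The match-or-apologise flow just described can be sketched as simple pattern-based intent matching. Again, this is a deliberate simplification for illustration - the real server-side systems are enormously more sophisticated - but the overall shape is the same:

```python
import re

# Illustrative sketch: match the transcribed request against simple
# patterns ("algorithms - basically complex rules") and answer, or
# fall back to the familiar apology.
INTENTS = [
    (re.compile(r"\bweather\b"), "It is 18 degrees and cloudy."),
    (re.compile(r"\blights? (on|off)\b"), "Okay, done."),
    (re.compile(r"\bplay (music|radio)\b"), "Playing on the kitchen speaker."),
]

def handle(transcript):
    text = transcript.lower()
    for pattern, response in INTENTS:
        if pattern.search(text):
            return response
    return "Sorry, I did not understand that."

print(handle("what's the weather like?"))  # matched intent
print(handle("order me a pizza"))          # falls through to the apology
```

Anything the rules (or, in real systems, the trained models) have not been prepared for lands in that fallback branch - which is exactly why you hear the apology so often.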


So is it really all just Hype?

The Gartner Group publishes a Hype Cycle which ranks technologies based on how the market perceives them and how far away they are from mainstream adoption. Machine Learning (AI) is right at the top - see graph below.

According to Gartner, ML/AI is about to go through the 'trough of disillusionment' before it comes back to much more even expectations. So, does this suggest it's all hype and very overdone? Is Gartner right?

We think both yes and no. Yes, the expectations are too high and frankly unrealistic, at least in the short to mid-term. However, that does not mean it's just hype. From our own research and discussions, we think that expectations around AI will most likely alter over time within the public consciousness. The term AI will still be there, but people's expectations of it won't be quite what they are today. Here is another way of thinking about this. IBM - who have been working in the field for longer than most - break AI down into three stages.

  1. The first is recognition intelligence, in which algorithms running on ever more powerful computers can recognize patterns and glean topics from blocks of text, or perhaps even derive the meaning of a whole document just from a few sentences. 
  2. The second stage is cognitive intelligence, in which machines can go beyond pattern recognition and start making inferences from data. 
  3. The third stage will be reached when we can create virtual human beings, who can think, act, and behave as humans do.

On the IBM model, we are still at the first stage today - 'recognition intelligence' - where computers use 'learning' to discover patterns faster and better. That said, we are now starting to see companies work on technologies that can be used for inferring meanings - stage two. They are not really there yet outside the labs, but some are getting close. We are, however, a long, long way from stage three, where machines can act and behave even in a limited way like humans.

This may provide some relief: the robots won't, as in the Terminator films, simply try and take over. So yes, it is overhyped. But that does not mean that AI is not worthy of serious application at a more mundane and more basic level. Indeed, if we set aside sci-fi, forget R2-D2 and look at where AI can work, and work well, we think AI-type tools have a very promising future in augmenting human intelligence and activity. Why not think of AI as providing enhancements to your current processes and technology, rather than a radical shift - at least for now. See our next post for a look at some practical applications.