Keep It Smart – Basics #5: What are Artificial Intelligence and Machine Learning?
Outside the tech industry, artificial intelligence or AI tends to conjure up images of psychotic, sentient robots intent on annihilating the human race. In reality, the term ‘artificial intelligence’ was coined in 1956, and we’ve yet to develop anything close to The Terminator’s (1984) T-800. Instead we’re creating increasingly reactive and proactive machines to assist with complex tasks, with seemingly endless scope for research and development. But how does the tech world define true AI, and does it actually exist?
In general terms, AI means a computer acting with human-level intelligence and reasoning. In other words, a machine that learns continuously and reacts, predicts and invents based on that learning. It not only learns but understands what it needs to learn, with no human interference. To date, no computer has achieved human-level intelligence, but machines continue to be trained to emulate human activity with growing complexity, and the term artificial intelligence is frequently used to describe these emulations.
To gain a clearer understanding of the concepts around AI, we need to explore a few of AI’s subfields: artificial neural networks, machine learning and deep learning.
Artificial Neural Networks (ANN)
A typical human brain contains roughly 86 billion cells called neurons that send and receive electrical signals at lightning speed, creating an extraordinarily complex data transfer network. Similarly, a computer’s microprocessor contains up to several billion transistors that act as switches, receiving or blocking electrical signals. However, where one transistor in a traditional computer might be connected to a few other transistors in a series, human neurons can be connected to thousands of other neurons in parallel. It’s the complexity of the human neural network that allows us to be creative, innovative, progressive and diverse in our thinking.
Attempting to connect transistors to thousands of other transistors in parallel would be a gargantuan and impractical task, so programmers created artificial neural networks (ANN): software that simulates the connections between human neurons. The software groups simulated neurons into input, hidden and output ‘units’. By arranging these units in layers, the connections between them mimic the parallel connections of a human neural network. Unlike a human brain, which learns independently from its own experience, an artificial neural network must still be programmed with examples, which keeps it distinct from an actual brain. Over time, by comparing previous and current data and recognising patterns, the artificial neural network can learn to generate new responses and solutions.
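To make the layered structure concrete, here is a minimal sketch of an input–hidden–output network in plain Python. The weights are illustrative placeholders rather than trained values; a real system would learn them from examples.

```python
import math

def sigmoid(x):
    """Squash a summed signal into the range 0-1, like a neuron 'firing'."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights):
    """Each unit sums its weighted inputs, then applies the activation."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, row)))
            for row in weights]

# Two input units feed three hidden units, which feed one output unit.
# These weights are made up for illustration, not learned.
hidden_weights = [[0.5, -0.6], [0.8, 0.2], [-0.4, 0.9]]
output_weights = [[1.0, -1.0, 0.5]]

hidden = layer([0.7, 0.1], hidden_weights)
output = layer(hidden, output_weights)
print(output)  # a single value between 0 and 1
```

Each row of a weight matrix represents one unit’s parallel connections to every unit in the previous layer, which is the parallelism the text describes.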
Machine Learning
Before a machine can learn, it needs to understand how to learn. Machine learning is a subfield of artificial intelligence in which machines use algorithms (sets of rules) to analyse patterns, draw conclusions and apply those findings to subsequent data. A machine requires human programmers at the initial stages, but once trained on examples it should, in theory, require no further coding. The examples must include vast amounts of diverse data – the more extensive the data, the more effective the machine learning. The end result is a feedback cycle of data input and output, processed and analysed by the machine without human interference, that allows the machine to make an informed ‘decision’.
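A toy sketch of learning from examples: the program below is never told the rule, only shown labelled data, and derives a decision threshold it can then apply to new inputs. The temperature values and labels are invented for illustration.

```python
def train(examples):
    """Learn a threshold separating two classes from (value, label) pairs."""
    cools = [v for v, label in examples if label == "cool"]
    warms = [v for v, label in examples if label == "warm"]
    # Place the boundary midway between the two groups of examples.
    return (max(cools) + min(warms)) / 2

def predict(threshold, value):
    """Apply the learned rule to a value the machine has never seen."""
    return "warm" if value > threshold else "cool"

# Temperatures (in °C) labelled by a human once, at the start.
data = [(16, "cool"), (18, "cool"), (19, "cool"),
        (24, "warm"), (26, "warm"), (28, "warm")]

threshold = train(data)
print(predict(threshold, 17))  # -> cool
print(predict(threshold, 25))  # -> warm
```

The more diverse the labelled examples, the better the learned threshold reflects reality, which is the point the paragraph above makes about data volume.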
Deep Learning
Just as machine learning is a subfield within AI, deep learning is a subfield within machine learning. Deep learning uses many-layered artificial neural networks to learn specific types of knowledge from raw data, enabling adaptive and predictive analysis.
A child, for example, will have no concept of a banana until it first receives ‘data’ by seeing, touching and tasting a banana and/or hearing descriptions and examining pictures. The more sensory input the child receives, the more accurate the child’s concept of a banana. This then leads to reactive and predictive behaviour: the child decides whether to eat the banana based on prior knowledge, knows that bananas are likely to be found in the kitchen fruit bowl, can recognise images or descriptions of a banana, can describe or depict one to others, and so on.
We take such basic knowledge for granted in our lives, but there was a time as a child when the word ‘banana’ meant nothing to us. In the same way, deep learning occurs when child-like machines are targeted with specific data in order to understand a concept, then take that understanding to inform simple and complex decisions. Drones, for example, now have the capability to map a designated area, track objects and analyse the input to provide real-time feedback on what they ‘see’.
The AI in Smart
When applied to devices, buildings or environments, the word ‘smart’ is supposed to be synonymous with intelligence. However, the marketing industry tends to apply the term ‘smart’ to any internet-connected device as an attention-grabbing strategy. Everyday smart devices and buildings can certainly be responsive and reactive based on data input, but are they even close to being artificially intelligent?
The short answer is no, but while building systems and assets can’t achieve human-level intelligence, they can emulate human behaviour in specified tasks. Smart, sensor-connected lighting systems now have the capability to predict and identify a fault (machine learning), such as an LED bulb wearing out, and alert a technician for repair (human interaction).
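A simplified sketch of the kind of rule such a lighting system might apply: compare each fixture’s logged burn hours against its rated life and flag fixtures approaching failure. The fixture names, hours and thresholds below are invented for illustration.

```python
RATED_LIFE_HOURS = 50_000   # a typical LED rating; an assumption here
ALERT_AT = 0.9              # flag a fixture at 90% of its rated life

# Burn hours logged automatically by the lighting system (invented data).
fixtures = {
    "L16-meeting-room-1": 47_500,
    "L16-corridor-a": 12_000,
    "lobby-downlight-3": 49_000,
}

for name, hours in fixtures.items():
    used = hours / RATED_LIFE_HOURS
    if used >= ALERT_AT:
        # In a real system this would raise a work order for a technician.
        print(f"ALERT: {name} at {used:.0%} of rated life")
```

This replaces the manual record-keeping described below: the system tracks every bulb’s age continuously instead of relying on a facility manager’s logbook.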
Previously a facility manager would have had to keep extensive records of when every LED in an entire building had been installed and predict when it might need replacing. Realistically, not many facility managers have time to monitor the age of every bulb. Alternatively, the fault in the bulb would only be identified by a building user or facility manager after it occurred (human interaction only).
Similarly, smart sensors within a building can monitor building use and analyse the data to predict how occupants interact with the building. Suppose Meeting Room 1 on Level 16 of a commercial high-rise is only used on Monday and Thursday mornings between 10am and 11am. Analysis of the sensor data can drive recommendations based on predicted use, such as switching lights and HVAC systems off or opening and closing blinds according to the time of day, the weather and the level of occupancy in the building. If no one enters the building on a public-holiday Monday, the sensor data informs the management system, which predicts that no meeting will take place in Meeting Room 1 and keeps the lights off, the heating and cooling off and the blinds closed.
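The prediction step can be sketched as counting past occupancy per time slot and only preparing the room for slots that history says are used. The log entries below are invented for illustration.

```python
from collections import Counter

# Each entry records one observed occupancy event: (weekday, hour).
# Invented data matching the Meeting Room 1 example above.
occupancy_log = [
    ("Mon", 10), ("Thu", 10),
    ("Mon", 10), ("Thu", 10),
    ("Mon", 10), ("Thu", 10),
]

usage = Counter(occupancy_log)

def expect_occupied(day, hour, min_observations=2):
    """Predict occupancy if this slot was used often enough in the past."""
    return usage[(day, hour)] >= min_observations

print(expect_occupied("Mon", 10))  # True  -> run lights and HVAC
print(expect_occupied("Tue", 10))  # False -> keep systems off
```

A real management system would also weigh live sensor input (such as the empty public-holiday Monday), overriding the historical prediction when current data contradicts it.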
In the long term, a building with entirely automated, self-perpetuating and self-repairing systems and assets seems achievable. We’re not there yet, but investment in smart tech and intelligent environments is booming, and the future looks bright for investors and consumers alike.
Ethics and the Future
Development of artificial intelligence is far from limited to the terms defined above, and authorities are recognising a growing need for ethical guidelines to govern AI research. Such guidelines examine legal, social, cultural and environmental concerns, among others. On 8 April 2019, the European Commission published its Ethics Guidelines for Trustworthy AI.

Despite the growing need for ethical AI development, the race to create a machine that truly acts with human-level intelligence shows no sign of slowing down. Fortunately, we’ve passed the year of 2001: A Space Odyssey (1968), and it seems we have a long way to go before HAL refuses to open the Pod bay doors.
At mySmart we’ve been working with smart technology for more than a decade. We’re an Australian company at the forefront of creating intelligent environments.
Contact us to identify how our solutions can effect positive change for your needs – it’s what we’re good at.
Building smart cities, one mySmart building at a time.