Neural Networks: Can Androids Recognise Electric Sheep?

By: Damien Clarke
Senior Consultant, Data Exploitation

20th September 2017

In 2010, Lt. Gen. David Deptula, the US Air Force deputy chief of staff for intelligence, was quoted as saying:

“We’re going to find ourselves, in the not too distant future, swimming in sensors and drowning in data.”

Since then, this flood of data has shown no signs of slowing. In fact, it is accelerating, as greater volumes of data are generated every day. This is just as true in the civilian context as in the military one.

For organisations with access to these large volumes of data, it would be profitable to employ data exploitation techniques to convert the raw data into useful information. This can sometimes be achieved by developing custom data processing techniques for specific situations. However, in many cases, it is better to use machine learning techniques to allow computers to learn from data without being explicitly programmed. At Plextek, we’re passionate about developing and implementing the right data exploitation techniques for the application and are working to ensure that humanity stays afloat (and dry) in Deptula’s prediction.

There is a wide range of potential machine learning techniques to choose from, but one approach is to copy nature and mimic biological brains. This was inspired by the fact that one of the primary purposes of a brain is to process sensory inputs and extract useful information for future exploitation. A biological brain can be produced in software form by modelling a large set of connected neurons. This is an artificial neural network.

How does an artificial neural network work?

The basic building block of a neural network is a single neuron. A neuron transforms a set of one or more input values into a single output by applying a mathematical function to the weighted sum of input values. This output value is then passed to one or more connected neurons to be used as a subsequent input value.
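This behaviour can be sketched in a few lines of Python. The weights, bias, and choice of the logistic sigmoid as the activation function below are illustrative assumptions, not values from any particular network:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: apply an activation function (here,
    the logistic sigmoid) to the weighted sum of the input values."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # output lies in (0, 1)

# Two inputs with illustrative weights: 0.5*0.8 + (-1.0)*0.2 + 0.1 = 0.3
output = neuron([0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
print(round(output, 3))  # sigmoid(0.3) ≈ 0.574
```

The resulting output value would then be passed on as an input to the neurons of the next layer.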

The neural network as a whole can, therefore, be defined by three sets of parameters:

  The weight assigned to each input value for each neuron.

  The function which converts the weighted sum of input values into the output value.

  The pattern of connections between neurons.

A simple example neural network consists of three layers. The first layer contains the input values which represent the data being analysed. This layer is then connected to a hidden layer of neurons. The hidden layer then connects to the third and final layer which contains the output neurons whose values represent the processed data. This design allows a complicated relationship between inputs and outputs.
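Chaining such neurons together gives the three-layer structure described above. The following Python sketch passes two input values through a hidden layer of three neurons to a single output neuron; all the weights and biases are arbitrary hand-picked values purely for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """Compute one layer: each neuron takes the weighted sum of all inputs."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(inputs, hidden_w, hidden_b, out_w, out_b):
    """Three-layer network: input values -> hidden layer -> output layer."""
    hidden = layer(inputs, hidden_w, hidden_b)
    return layer(hidden, out_w, out_b)

# Two inputs, three hidden neurons, one output neuron
hidden_w = [[0.2, -0.4], [0.7, 0.1], [-0.5, 0.6]]   # one weight list per hidden neuron
hidden_b = [0.0, 0.1, -0.1]
out_w = [[0.3, -0.2, 0.8]]                          # one output neuron, three inputs
out_b = [0.05]
print(forward([1.0, 0.5], hidden_w, hidden_b, out_w, out_b))
```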



How is a neural network trained?

Just as with a biological brain, simply creating a neural network is not sufficient to extract information from raw data. It is also necessary to train the network by exposing it to data for which the desired outputs are already known. This process is used to define the weights assigned to each connection throughout the entire network.

As the size and complexity of the neural network increases, the number of weights that must be defined for optimum performance increases significantly. The training process therefore requires a large and representative set of labelled data; otherwise, the neural network may not generalise to future input data. The training process is also computationally expensive and may take significant processing time. GPU acceleration can mitigate this; however, the process may still take days for very large data sets.
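To illustrate what "defining the weights" involves, the sketch below trains a single neuron by gradient descent to reproduce the logical OR function; real networks apply the same idea (backpropagation) across many layers and far more data. The learning rate, epoch count, and choice of the OR task are illustrative assumptions, not taken from the article:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, labels, epochs=2000, lr=0.5):
    """Fit a single neuron to labelled data by gradient descent: after each
    prediction, nudge every weight to reduce the error against the label."""
    weights, bias = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
            err = pred - y  # gradient of the cross-entropy loss w.r.t. the weighted sum
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias

# Learn the logical OR function from four labelled examples
data = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 1, 1, 1]
w, b = train(data, labels)
preds = [round(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)) for x in data]
print(preds)  # the rounded predictions recover the labels: [0, 1, 1, 1]
```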

Conversely, if large volumes of suitable training data are available, it is possible to create a more complex neural network to improve performance. This can be achieved by increasing the number of hidden layers and therefore the total number of connections within the network. This use of complex neural networks with many layers and connections is called deep learning.
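In code, adding hidden layers just means repeating the same layer computation more times. This sketch generalises the forward pass to an arbitrary stack of layers (all weights here are arbitrary illustrative values); training such deeper stacks is what the term deep learning refers to:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, layers):
    """Pass the inputs through any number of layers in sequence.
    Each layer is a list of (weights, bias) pairs, one pair per neuron."""
    values = inputs
    for layer in layers:
        values = [sigmoid(sum(w * v for w, v in zip(ws, values)) + b)
                  for ws, b in layer]
    return values

# A deeper network: 2 inputs -> 3 hidden -> 3 hidden -> 1 output
layers = [
    [([0.2, -0.4], 0.0), ([0.7, 0.1], 0.1), ([-0.5, 0.6], -0.1)],
    [([0.3, -0.2, 0.8], 0.0), ([0.1, 0.4, -0.6], 0.2), ([0.5, 0.5, 0.5], 0.0)],
    [([0.9, -0.3, 0.2], 0.05)],
]
print(forward([1.0, 0.5], layers))
```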



What can neural networks be used for?

With a sufficiently large neural network and suitable training data, it is possible to learn complex non-linear relationships between input and output values. This can reveal insights into data that are not accessible with simple linear mathematical models.

While neural networks are suitable as general-purpose problem solvers, they are particularly suited to tasks where an understanding of the underlying relationships in the raw data is neither available nor strictly required, and where sufficient data is available for training.

An important example of this capability is the recognition of objects in images. This is achieved using a neural network which has been trained on a large volume of photos of known objects (e.g. ImageNet). While the training process can take a long time, subsequent object recognition is much faster and can potentially be performed in real time. Due to the large volume of training data and the complexity of the neural network used, the resulting object recognition performance is close to human-level performance. This can be used in a military context to recognise different vehicles (e.g. a tank) or in a civilian context to see if computers can distinguish between different animals (Do Androids Dream of Electric Sheep?).



Neural networks are not just limited to processing photos and the same approach can be applied to a wide range of sensor and non-sensor data. The most important requirement is that a suitable volume of labelled training data is available to train the network before it can be used on unknown data.


The start-ups using artificial intelligence to solve everyday tasks

By: Dr Matthew Roberts
Senior Consultant, Data Exploitation

5th July 2017

I recently attended the inaugural Cambridge Wireless Artificial Intelligence & Mobility Conference. The event focussed on artificial intelligence (AI), the business use cases enabled by AI, innovative start-up companies, and how start-up companies can gain funding. Unlike the technical conferences that I am used to attending, this event was much more about the business side of AI.

Like many engineers, I usually like to look at the technical aspects of things, but this event gave me a different, and somewhat refreshing, perspective on the use of AI. I enjoy hearing about how companies, like DeepMind, are using AI to play video games and diagnose medical conditions, but perhaps I don’t pay enough attention to the companies that are using AI to solve everyday tasks. The Cambridge-based event gave start-ups the opportunity to talk and exhibit and gave people like me the chance to learn more about them.

You have probably heard of the driverless car technology being developed by organisations like Google and Uber, but what you might not know about are the driverless cars in the UK. Three driverless car projects were awarded funding by the UK government, and members of the public were given the opportunity to ride in driverless cars.

Oxbotica, an Oxford University spinout, was involved in two of the projects. Oxbotica’s Selenium software formed the brains of the vehicles used in both projects. The software almost certainly uses AI to perform two key tasks: understanding the wealth of sensor data that is used to observe the car’s environment and controlling the car.

Another company that is working on self-driving cars is FiveAI. At the event, Stan Boland, CEO of FiveAI, spoke of how FiveAI is aiming to become a customer to large organisations instead of a supplier. FiveAI intends to do this by competing with the likes of Uber, but with self-driving cars. The company is currently part of a consortium that plans to test such cars on public roads in London, and AI will be a key part of making that a success.

Hoxton Analytics is using AI to solve a completely different kind of perception task: it uses cameras combined with AI to measure footfall. The cameras are mounted at ground level in order to avoid privacy concerns. Not only can the system be used by shops to determine how many people they attract, but it can also be used to infer the types of shoppers. This information can help determine which demographics are being lured into shops and at what times. Solving such a task manually can be very labour-intensive.

Another example of the use of AI to solve everyday tasks is the 3D sensor that has been created by Titan Reality. Titan Reality’s sensor can be used in a wide variety of perception and control tasks, from sorting objects to pouring the correct drink based on what kind of glass is placed on the sensor.

This is just a tiny set of examples of where small companies have embraced AI to provide high-tech solutions to everyday tasks that would traditionally be performed by people. It is not just large companies like Google and Netflix that are using AI to make a big impact.
