Quantum Computing and How Cryptography Will Have to Change

By: Laurence Weir
Technology Lead, Biomedical Engineer

23rd January 2020

5 minute read


The creation of quantum computers is one of the great technological challenges of the modern age, alongside the likes of nuclear fusion reactors and low-cost space travel. Algorithms designed for quantum computers promise results with a profound impact on nearly every aspect of our lives. Problems like protein folding (used to find new cancer drugs) or SETI (the search for extra-terrestrial intelligence) could be tackled many orders of magnitude faster than is currently possible with supercomputers. However, this also means most of our secure data is at risk.

Back to basics: the state of a “bit” in computing is binary:

1 OR 0. HIGH OR LOW. VOLTAGE OR GROUND.

With the inception of quantum computing and the quantum bit, or “qubit”, this is going to change. Qubits are not just 1 or 0: they can exist in a superposition of 1 and 0, and everything in between. If you don’t have a background in quantum mechanics, feel free to just go with the flow.

Conventional binary computers are remarkably good at solving linear, logical problems. Many of these problems can be simplified as:

“WE KNOW INPUTS A,B,C…, AND HOW THEY INTERACT TO PRODUCE OUTPUTS X,Y,Z…”

These algorithms respond almost instantaneously to changes in their inputs. Think of problems such as:

“HOW MUCH MONEY DO I HAVE TO SPEND THIS WEEK?”

“WHEN IS MY TRAIN GOING TO ARRIVE?”

“WHAT IS THE WEATHER GOING TO BE LIKE TOMORROW?”

However, with qubits in a quantum computer, as well as doing everything a conventional computer can, new problems become solvable. These can be simplified as:

“WE KNOW OUTPUTS X,Y,Z…, BUT HOW DID WE GET TO THIS?”

Today, these problems are solved either by brute-force algorithms or by identifying patterns. Here are some examples:

“HOW DO I WIN THIS CHESS GAME?”

“HOW DO I FOLD THESE PROTEINS TO CREATE A CURE FOR CANCER?”

“HOW DO I BREAK THIS PASSWORD?”

For instance, a password in binary is just a fixed series of 1s and 0s. A traditional computer can crack it by trying every combination, perhaps also intelligently predicting which series are most likely. However, with limited processing power and a long enough password, the search takes longer than is reasonable (often longer than the age of the universe). With enough qubits, a quantum computer changes the picture: quantum search algorithms such as Grover’s exploit superposition to explore the space of combinations far faster than any classical machine could by trying them one at a time.
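To get a feel for the classical difficulty, here is a minimal, illustrative Python sketch (the secret pattern and bit-length are invented for the example): a brute-force search over every n-bit combination, whose running time doubles with every extra bit.

```python
from itertools import product

def brute_force(check, n_bits):
    """Classically try every n-bit combination until `check` accepts one."""
    for candidate in product((0, 1), repeat=n_bits):
        if check(candidate):
            return candidate
    return None

# Toy example: the hidden "password" is an arbitrary 20-bit pattern.
secret = tuple(int(b) for b in "10110011100011110000")
print(brute_force(lambda c: c == secret, 20))   # found within ~2**20 tries

# Every extra bit doubles the search. At a billion guesses per second,
# a 128-bit secret needs ~2**128 / 1e9 seconds -- around 10**22 years.
# Grover's algorithm would cut this to roughly 2**64 quantum steps:
# still enormous, but a quadratic speed-up over classical brute force.
```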

Modern public-key cryptography relies on multiplying two large primes to create a very long number. Such a number has only two non-trivial factors, both of which are prime. Take the number 889: finding its factors might take you several minutes by hand, and a conventional computer can brute-force them by checking a list of primes. However, if the number were 2000 digits long, this search would take far too long. A quantum computer running Shor’s algorithm could, in effect, use groups of qubits representing the candidate primes and recover them efficiently.

BTW…THE PRIME FACTORS OF 889 ARE 7 AND 127.
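As a quick illustration of the classical approach, here is a short Python sketch: trial division finds 889’s factors instantly, but the step count grows with the square root of the number, which is why a 2000-digit semiprime is out of reach.

```python
def factor_semiprime(n):
    """Recover p and q from n = p * q by trial division."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    return None  # n itself is prime

print(factor_semiprime(889))   # -> (7, 127)

# Trial division needs on the order of sqrt(n) steps. For a 2000-digit
# number that is roughly 10**1000 steps: hopeless for any classical
# machine. Shor's algorithm would factor it in polynomial time on a
# sufficiently large, error-corrected quantum computer.
```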

If such a quantum computer emerges over the next decade, it could break the public-key encryption protecting most of today’s sensitive information. It could also impose encryption of its own that conventional computers would have no realistic hope of breaking. The owner of that quantum computer would, in effect, hold the keys to most of the world’s protected data.

Before that happens, the designers of quantum computers will have to overcome immense technical hurdles. A single qubit currently costs around $10k to create, compared with around $0.0000000001 for a conventional computer bit. Even these $10k qubits are not yet of good enough quality for large-scale computers, which compounds the problem: sophisticated error-correction algorithms are needed to work around their fragility. Controlling multiple qubits simultaneously remains very difficult, and each qubit requires multiple control wires.

Despite these challenges, we should now be designing our cryptography for a post-quantum era. In 2016, the National Institute of Standards and Technology (NIST) put out a call for algorithms that a quantum computer could not break, and it is analysing 26 leading candidates before implementation in 2024. IBM has backed one in particular, called CRYSTALS (Cryptographic Suite for Algebraic Lattices). This method generates public and private keys based on “lattice” problems. A simplified example: publish a set of numbers, together with the sum of a secret subset of those numbers. Determining which combination of numbers produced that sum is believed to be intractable even for quantum computers, owing to the high-dimensional nature of the problem.
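As an illustration only (the real CRYSTALS schemes rest on harder, structured lattice problems such as module learning-with-errors, not plain subset sum), here is a toy Python version of that example; the numbers and target are invented for the demo.

```python
from itertools import combinations

# Public information: a set of numbers and the sum of a secret subset.
# (Numbers and target are invented for this demo.)
numbers = [3, 9, 14, 20, 31, 45, 58, 67]
target = 81                     # the secret subset was (3, 20, 58)

# Brute-force recovery means testing all 2**n subsets:
hits = [s for r in range(1, len(numbers) + 1)
        for s in combinations(numbers, r) if sum(s) == target]
print(hits)                     # every subset summing to 81

# With a few thousand numbers the 2**n search space is unreachable,
# and no known quantum algorithm collapses subset-sum or lattice
# problems the way Shor's algorithm collapses integer factoring.
```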

Quantum computing, then, will solve many of life’s hard problems, but it will make some of our current cryptographic methods redundant. We will soon have to start moving to new methods to keep our future data safe.

If you want to know more about quantum computing, please get in contact with us below.


Early Detection and Treatment of Hearing Loss May Stave off Dementia

By: Nigel Whittle
Head of Medical & Healthcare

29th November 2019

4 minute read


In a recent landmark study, researchers at Columbia University Irving Medical Center in New York have demonstrated a clear link between hearing loss and impairment of memory and cognitive skills.[1]

Previous investigations had already indicated a connection between hearing loss and cognitive decline, but those studies only examined people already diagnosed with hearing loss, defined as the inability to hear sounds quieter than 25 dB. The current study, published in JAMA Otolaryngology–Head & Neck Surgery, takes the investigation a step further.

A team led by hearing specialist Justin Golub MD studied data from 6,451 adults with an average age of 59 who took hearing and cognitive tests. They found that for every 10 dB increase in the lower limit of hearing, there was a significant decrease in cognitive ability. Moreover, the largest decrease occurred in those whose hearing was just starting to become impaired – just 10 dB off normal capability, when hearing is still considered normal.

This is significant, as age-related hearing loss affects about two-thirds of adults over 70, while only 14% of American adults with hearing loss wear a hearing aid.

“Most people with hearing loss believe they can go about their lives just fine without treatment and maybe some can,” says Golub. “But hearing loss is not benign. It has been linked to social isolation, depression, cognitive decline and dementia. Hearing loss should be treated. This study suggests the earlier, the better.”

The current study could not prove that hearing loss caused cognitive impairment and it is possible that declines in both hearing and cognitive performance are related to common ageing-related processes. But the study’s design suggests a causal link: “It’s possible that people who don’t hear well tend to socialise less. Over many years this could have a negative impact on cognition.” Golub said that if that were the case, preventing or treating hearing loss could reduce the incidence of dementia.

Plextek Technology

Plextek has developed innovative hearing analysis technology that can reliably detect early signs of hearing loss well before a person becomes aware of the symptoms. It has been designed for integration within standard everyday consumer headphones and has been described as a potential ‘game-changer’ in the prevention of tinnitus and hearing loss.

“This clinical study indicates the importance of early detection of hearing loss, allowing remedial action to be taken in a timely manner. The data strongly validates our approach to hearing loss and we are excited about the impact that our technology could have on the rising incidence of dementia”, said Dr Nigel Whittle, Head of Medical & Healthcare.

If you would like to chat further to our medical team, please email hello@Plextek.com to set up a call.

 


[1] ‘Association of Subclinical Hearing Loss With Cognitive Performance’, Golub JS, Brickman AM, Ciarleglio AJ, et al. JAMA Otolaryngol Head Neck Surg. November 14, 2019. https://doi.org/10.1001/jamaoto.2019.3375


Railway Revolution


By: Nicholas Hill
CEO

5th November 2019

3 minute read


If you view the railway network as still lodged in the Victorian era, you should think again. A revolution in rail travel is in progress. Ever-increasing road congestion and worsening global warming are pushing more traffic onto the rail network and will continue to do so. Rail travel is an inherently efficient method of moving both people and goods in an environmentally sustainable manner.

We can build more routes, but the existing rail network needs to move more people and more goods every day. This means running more trains, more frequently and more sustainably.

But to push more trains onto the track, train spacing must be greatly reduced. This requires a revolution in train management, abolishing fixed track sections and creating new systems for detecting the precise location of trains, automated control across the network, highly sophisticated scheduling and more robust safety systems.

Building a sustainable network

Further improvements to sustainability will see diesel traction removed, replaced by further track electrification and battery- or hydrogen-fuel-cell-powered trains. Rolling stock will also use advanced materials to reduce weight, regenerative braking to conserve power and more intelligent power control. Routing slow goods traffic in between passenger trains is difficult and inefficient, so goods traffic will increasingly run at night – a time when routes are currently often closed for manual inspection.

Happily, manual inspection will become a thing of the past as track and rolling stock monitoring is performed by automated and robotic systems. Track, overheads and rolling stock will be fitted with extensive sensing for continuous monitoring and diagnostics. Further sensors built into track and overheads will monitor rolling stock while conversely, sensors built into rolling stock will monitor track and overheads, at full train operating speeds. Robotic trains and autonomous drones operating beyond-line-of-sight will conduct automated surveys. Sophisticated data exploitation techniques will process and examine all this data to look for trends in wear and defects, predicting potential failure before it happens and improving network up-time.

All about the passenger

All of the above will benefit the passenger experience through improved punctuality, better reliability and more frequent services. But this is only a start. Better management of passenger flows at busy stations will direct travellers to the most appropriate train carriage. Improved security screening techniques will keep people safe without impeding the flow, while accurate real-time passenger information will make travel decisions easier.

This revolution demands a strong culture of innovation to drive radical changes in train operating practice. It also requires the very best of current technology, including advanced sensing, ubiquitous communications, powerful but trustworthy data processing and enhanced autonomy.

If you need to be sure you are building the very best of current technology into your products and systems, do give us a call. We’d love to talk about how we can help you create the railway revolution.


Could Radar Be a More Cost-Effective Security Screening Alternative to X-Rays?

By: Damien Clarke
Lead Consultant

10th October 2019

5 minute read


A key task in the security market is the detection of concealed threats such as guns, knives and explosives. While explosives can be detected by their chemical constituents, the other threats are defined by their shape. A threat detection system must, therefore, be able to produce an image of an object behind an opaque barrier.

X-rays are probably the most widely known technology for achieving this, and they are used for both security and medical applications. However, while they produce high-quality images, x-ray machines are expensive, and there are health concerns about their frequent use on or near people.

An alternative to x-rays, often used at airports for full-body screening, is microwave imaging. These systems can detect concealed objects through clothes, though the spatial resolution is relatively low and objects are often indistinguishable (hence the requirement for a manual search). The ability to detect and identify concealed items can, therefore, be improved by using a higher-frequency mm-wave (60 GHz) system.

Plextek has investigated this approach using a Texas Instruments IWR6843 60–64 GHz mm-wave radar, a relatively inexpensive consumer component that could be customised to suit many applications. However, a single radar measurement contains only range information, not angle information. It is, therefore, necessary to collect multiple measurements of an object from different viewpoints to form an image. This is achieved with a custom 2D translation stage that enables the radar to be moved automatically to any point in a plane relative to the target object. In this example, radar data was collected across a regular 2D grid of locations with millimetre spacing between measurements.

This large set of radar measurements can then be processed to form an image by analysing the small variations in the signal caused by the change in viewpoint as the object is measured from different positions. The set of range-only measurements is thereby extended to include azimuth and elevation as well. In effect, this process produces a 3D cube of intensity values defining the radar reflectivity at each point in space. Taking a slice through this cube at the range corresponding to the position of the box produces an image of an object behind an (optically) opaque surface.
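The exact processing chain is not published here, but the description matches coherent back-projection. Below is a minimal, illustrative NumPy sketch under simplifying assumptions (a single-frequency complex return per radar position, an invented geometry, and a coarser 10 mm grid than the real millimetre spacing): each image voxel accumulates the measured returns with the round-trip phase compensated, so returns from a true scatterer add up coherently.

```python
import numpy as np

C = 3e8                         # speed of light, m/s
K = 2 * np.pi * 60e9 / C        # wavenumber at 60 GHz

def backproject(positions, samples, voxels):
    """Coherently focus complex returns onto candidate voxel positions.
    positions: (N, 3) radar locations; samples: (N,) complex returns;
    voxels: (M, 3) image points. Returns (M,) focused intensity."""
    image = np.zeros(len(voxels), dtype=complex)
    for pos, s in zip(positions, samples):
        dist = np.linalg.norm(voxels - pos, axis=1)   # one-way range
        image += s * np.exp(1j * 2 * K * dist)        # undo round-trip phase
    return np.abs(image)

# Synthetic test: one point scatterer viewed from a 21 x 21 grid of
# radar positions, imaged on a parallel plane at 0.5 m range.
xs = np.linspace(-0.1, 0.1, 21)
positions = np.array([[x, y, 0.0] for x in xs for y in xs])
scatterer = np.array([0.02, -0.03, 0.5])
ranges = np.linalg.norm(positions - scatterer, axis=1)
samples = np.exp(-1j * 2 * K * ranges)                # ideal measured returns

voxels = np.array([[x, y, 0.5] for x in xs for y in xs])
img = backproject(positions, samples, voxels).reshape(21, 21)
print(np.unravel_index(img.argmax(), img.shape))      # peaks at the scatterer
```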

In this case, a cardboard box containing a fake gun was used as the target object. A visual inspection of the box would clearly not reveal the contents; however, 60 GHz mm-waves penetrate cardboard, so an image of the concealed object can be produced. The resulting image of the box’s contents clearly shows the shape of the concealed gun.

This example simulates the detection of a gun being sent through the post, and automatic image analysis algorithms would presumably be capable of flagging such a box for further inspection. This would remove the need for human involvement in screening each parcel.

A more mature sensor system using this approach could dispense with the manual scanning process by using an array of antennas instead. Similar custom systems could also be produced, optimised for different target sets and applications.

 

Acknowledgement

This work was performed by Ivan Saunders during his time as a Summer student at Plextek before completing his MPhys at the University of Exeter.


Being Your User


By: Nicholas Hill
Chief Executive Officer

19th December 2018


One of the important steps in the Design Council’s recommendations for good design is called “Being Your Users” and is a “Method to put yourself into the position of your user.” Its purpose is “building an understanding of and empathy with the users of your product …” Approaching product design from this perspective is critical to ensuring that the features incorporated are actually beneficial to the user – as opposed to features that are of benefit to the manufacturer, for example, or “because we can” features that have no obvious benefit at all.

It’s clear that domestic appliances are becoming more sophisticated, a trend which is facilitated by the availability of low-cost sensors and processing power. This has some clear benefits, such as the availability of more energy- or water-efficient wash cycles for example. And if designers stay focused on providing something of value to the end user this is a trend to be welcomed.

In practice, I see examples of what looks rather like engineers wondering what else they can do with all this additional sensor data, rather than being driven by user need. One example is the growing size of the error codes table in the back of most appliance manuals. These may occasionally add value, but for the most part, I see them as reasons why the product you paid good money for is refusing to do the job it is supposed to.

Here’s an example: the “smart” washing machine that I own doesn’t like low water pressure. It has a number of error codes associated with this. What does it do if the mains pressure drops temporarily – e.g. if simultaneously a toilet is flushed and the kitchen tap is running? It stops dead, displays the error code and refuses to do anything else until you power off the machine at the wall socket, forcing you to start the wash cycle again from scratch. This gets even more annoying if you’d set the timer and come back to a half-washed load. In the days before “smart” appliances, a temporary pressure drop would have either simply caused the water to fill more slowly, or else the machine would pause until pressure returned.

In what way does this behaviour benefit the user? Clearly, it doesn’t, and a few moments’ thought from a design team focussed on user needs – “being your user” – would have resulted in a different requirement specification being handed to the engineering team. It’s a good example of what happens when you start implementing a solution without properly considering the problem you are trying to solve.

My “intelligent” dishwasher has a different but equally maddening feature: it doesn’t like soft water. Its designers have clearly put water saving above all else, and the machine relies on either hard water or very dirty plates to counteract the natural foaming of the detergent tablets. With soft water, if you try washing lightly soiled dishes on a quick wash cycle (as you might expect appropriate), the machine is unable to rinse off the detergent. About 20 minutes into the cycle it skips to the end and gives up, leaving you with foamy, unrinsed plates.

I say unable, when the machine is actually unwilling, as all that is required is the application of sufficient water to rinse off the detergent – which is what I, as a user, then have to do manually. Who is working for whom here? Once again the user’s needs have not been at the top of the designer’s agenda when the requirement specification was passed to the engineering team. A truly smart device would finish the job properly, using as much water as was needed, and possibly suggest using less detergent next time.

Unless designers get a better grip, keeping the end user experience on the agenda, I fear examples of this type of machine behaviour will proliferate. We will see our devices, appliances and perhaps vehicles develop an increasingly long list of reasons why they can’t (won’t) perform the function you bought them for – because they’re having a bad hair day today, which becomes your problem to solve.

All to a refrain of “I’m sorry Dave, I’m afraid I can’t do that.”

