From Kindles to Protecting Tanks: The Different Uses for Electrophoretic Displays

By: Matthew Roberts
Senior Consultant, Data Exploitation

6th September 2017

Most people have heard of a Kindle or an e-reader: a device with an electronic paper display that lets users read a paper-like screen even in bright sunlight. What many people won’t have heard of are the other uses for this technology.

The display technology used in e-readers is usually an electrophoretic display (an ‘EPD’). EPDs work by suspending charged pigments in a fluid contained within a microcapsule. A voltage applied between electrodes on either side of the capsule moves the pigments. The exact configuration varies, but typically a clear fluid is used with black and white pigments: the white pigments are positively charged and the black pigments are negatively charged. Varying the voltage controls how much of each pigment type sits at the visible surface of the microcapsule, which determines how much light is reflected and therefore how white that part of the display looks.
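
To make the mechanism concrete, here is a toy model of a single microcapsule in Python. It is a sketch only: the ±15 V drive range and the linear relationship between voltage and reflectance are illustrative assumptions, not parameters of any real panel or driver.

```python
# Toy model of a single EPD microcapsule: positively charged white pigment
# drifts towards a negative top electrode, negatively charged black pigment
# towards a positive one. The +/-15 V range and the linear response are
# illustrative assumptions, not real panel parameters.

def capsule_reflectance(top_electrode_voltage: float) -> float:
    """Return reflectance in [0, 1]: 1.0 = fully white, 0.0 = fully black."""
    v = max(-15.0, min(15.0, top_electrode_voltage))  # clamp to assumed drive range
    # A negative top voltage attracts the white pigment to the visible surface.
    return (15.0 - v) / 30.0


if __name__ == "__main__":
    for v in (-15.0, -7.5, 0.0, 7.5, 15.0):
        print(f"{v:+6.1f} V -> reflectance {capsule_reflectance(v):.2f}")
```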

This is a very different approach to producing an image from the display technology used in TVs, laptops, and mobile phones, which typically use liquid crystal displays (LCDs) or organic light-emitting diode (OLED) displays. LCD and OLED technologies control how much light is emitted; an EPD, on the other hand, is a reflective technology.

Reflective display technologies don’t need to compete with sunlight in order to be visible outdoors, which is why it is much easier to read an e-reader than a smartphone in direct sunlight. In addition, an EPD needs no power to maintain an image; power is only needed to move the pigments when the image changes.
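
As a rough illustration of why that matters for power, the sketch below compares the average draw of a display that only uses energy when the image changes with one that draws power continuously. All of the figures (refresh energy, update interval, emissive panel power) are made-up, order-of-magnitude assumptions rather than measured values.

```python
# Back-of-envelope comparison: a bistable display only spends energy on
# updates, so its average power is (energy per refresh) / (time between
# updates). All numbers below are illustrative assumptions.

def average_power_mw(refresh_energy_mj: float, seconds_between_updates: float) -> float:
    """Average power in milliwatts for a display that is only driven during updates."""
    return refresh_energy_mj / seconds_between_updates  # mJ per second == mW


if __name__ == "__main__":
    # Assume 500 mJ per full refresh and one update per hour (e.g. a shelf label).
    print(f"EPD (hourly updates): {average_power_mw(500.0, 3600.0):.2f} mW average")
    # Assume a similarly sized emissive panel drawing about 1 W continuously.
    print("Emissive panel:       ~1000 mW continuous")
```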

The combination of these two properties makes for a very compelling technology for low-power displays that can be used both indoors and outdoors, and its applications are more varied than some people might realise. EPD technology has been used in electronic supermarket price labels, indoor signs, bus timetables, bracelets, and watches. There have even been attempts to incorporate EPDs into phone cases and shoes.

Plextek has experimented with using the same technology to create an adaptive visual camouflage system for vehicles. We essentially cover a vehicle with thin, flexible EPD panels: displays that are low power and visible in daylight conditions. Achieving this with an emissive display would require huge amounts of power (and produce a lot of heat)! It would also need careful control of brightness to blend in, whereas a reflective EPD naturally varies in brightness as the lighting conditions change.
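
The sketch below gives a flavour of how a scheme might be pushed out to such a covering of panels. The tile grid, the per-tile greyscale patches and the send_to_tile() stub are all hypothetical, invented for illustration; they show the general idea of driving many small reflective panels from one scheme image, not Plextek's actual system.

```python
# Hypothetical sketch: split one greyscale camouflage scheme into per-tile
# patches for a grid of EPD panels covering a vehicle side. The grid size,
# patch format and send_to_tile() stub are invented for illustration.

from typing import List, Tuple

Scheme = List[List[int]]  # greyscale image, values 0 (black) to 255 (white)


def split_into_tiles(scheme: Scheme, tiles_x: int, tiles_y: int) -> List[List[Scheme]]:
    """Cut the scheme image into tiles_y rows by tiles_x columns of patches."""
    rows, cols = len(scheme), len(scheme[0])
    th, tw = rows // tiles_y, cols // tiles_x
    return [
        [[row[tx * tw:(tx + 1) * tw] for row in scheme[ty * th:(ty + 1) * th]]
         for tx in range(tiles_x)]
        for ty in range(tiles_y)
    ]


def send_to_tile(tile_id: Tuple[int, int], patch: Scheme) -> None:
    """Stand-in for whatever interface drives an individual EPD panel."""
    print(f"tile {tile_id}: {len(patch)}x{len(patch[0])} patch sent")


if __name__ == "__main__":
    # A tiny 4x6 'scheme' split across a 2x3 grid of tiles.
    scheme = [[(x * 40 + y * 30) % 256 for x in range(6)] for y in range(4)]
    for ty, row in enumerate(split_into_tiles(scheme, tiles_x=3, tiles_y=2)):
        for tx, patch in enumerate(row):
            send_to_tile((tx, ty), patch)
```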

Most EPDs create a greyscale image. We have used a colour filter array to convert black and white into shades of green and yellow; it’s a bit like putting coloured overhead projector acetate over a piece of paper. The resulting colour gamut is surprisingly wide, running from light green and cream through to dark green and dark brown.
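
A minimal sketch of that filter idea is below, assuming a simple multiplicative model: the EPD sets a greyscale reflectance and the overlay tints whatever light is reflected. The specific filter transmission values are made-up illustrations, not measured data.

```python
# Minimal colour-filter-array sketch: the EPD supplies a greyscale
# reflectance (0.0 black .. 1.0 white) and each filter element tints the
# reflected light. Filter transmission values are illustrative assumptions.

GREEN_FILTER = (0.35, 0.75, 0.30)   # assumed RGB transmission of a green element
YELLOW_FILTER = (0.90, 0.85, 0.25)  # assumed RGB transmission of a yellow element


def filtered_colour(reflectance, filter_rgb):
    """Return an 8-bit RGB triple for a greyscale pixel seen through a filter."""
    return tuple(round(255 * reflectance * t) for t in filter_rgb)


if __name__ == "__main__":
    for reflectance in (0.15, 0.5, 0.95):  # dark, mid and light EPD states
        print(f"grey {reflectance:.2f} ->",
              "green:", filtered_colour(reflectance, GREEN_FILTER),
              "yellow:", filtered_colour(reflectance, YELLOW_FILTER))
```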

This allows us to display a wide variety of camouflage schemes similar to those found on military vehicles. We can even display pictures and text, such as messages relating to humanitarian aid. A scheme can be changed in seconds, a versatility very different from the traditional method of repainting a vehicle to change its scheme. It means a scheme can be chosen to work well in the current environment, rather than as a compromise across the range of environments that might be encountered.

The possibilities don’t stop there. Colour EPD technology is currently being developed, in which more pigment colours are used in each capsule instead of an overlay. This will give EPDs a much richer colour gamut, enabling new applications such as tablet PCs with daylight-readable, low-power screens and large colour billboards that can be updated remotely while consuming significantly less power than emissive versions.

Further Reading

The start-ups using artificial intelligence to solve everyday tasks

By: Matthew Roberts
Senior Consultant, Data Exploitation

5th July 2017

I recently attended the inaugural Cambridge Wireless Artificial Intelligence & Mobility Conference. The event focussed on artificial intelligence (AI), the business use cases enabled by AI, innovative start-up companies, and how start-up companies can gain funding. Unlike the technical conferences that I am used to attending, this event was much more about the business side of AI.

Like many engineers, I usually like to look at the technical aspects of things, but this event gave me a different, and somewhat refreshing, perspective on the use of AI. I enjoy hearing about how companies like DeepMind are using AI to play video games and diagnose medical conditions, but perhaps I don’t pay enough attention to the companies that are using AI to solve everyday tasks. The Cambridge-based event gave start-ups the opportunity to talk and exhibit, and gave people like me the chance to learn more about them.

You have probably heard of the driverless car technology being developed by organisations like Google and Uber, but what you might not know about are the driverless cars already being trialled in the UK. Three driverless car projects were awarded funding by the UK government, and members of the public were given the opportunity to ride in them.

Oxbotica, an Oxford University spinout, was involved in two of the projects. Oxbotica’s Selenium software formed the brains of the vehicles used in both projects. The software almost certainly uses AI to perform two key tasks: understanding the wealth of sensor data that is used to observe the car’s environment and controlling the car.

Another company working on self-driving cars is FiveAI. At the event, Stan Boland, CEO of FiveAI, spoke of how FiveAI is aiming to become a customer of large organisations rather than a supplier to them. FiveAI intends to do this by competing with the likes of Uber, but with self-driving cars. The company is currently part of a consortium that plans to test such cars on public roads in London, and AI will be a key part of making that a success.

Hoxton Analytics is using AI to solve a completely different kind of perception task: measuring footfall with cameras combined with AI. The cameras are mounted at ground level in order to avoid privacy concerns. Not only can the system tell a shop how many people it attracts, it can also infer the types of shoppers, helping to determine which demographics are being lured into shops and at what times. Solving such a task manually can be very labour-intensive.

Another example of the use of AI to solve everyday tasks is the 3D sensor that has been created by Titan Reality. Titan Reality’s sensor can be used in a wide variety of perception and control tasks, from sorting objects to pouring the correct drink based on what kind of glass is placed on the sensor.

These are just a few examples of small companies embracing AI to provide high-tech solutions to everyday tasks that would traditionally be performed by people. It is not just large companies like Google and Netflix that are using AI to make a big impact.
