The Future Impact of Artificial Intelligence in Medical Practice

By: Nigel Whittle
Head of Medical & Healthcare

5th December 2018

We are all aware of the challenges facing healthcare in general, and the NHS in particular: shortage of funding, increased demand for services, the rising cost of innovative drug treatments and the needs of an ageing population. All these issues limit the efficacy of our healthcare service.

There is only so much that can be done with improved efficiency. But what if we could improve diagnosis, so that diseases are detected and treated earlier? In almost every disease, early diagnosis would allow cheaper and more effective treatment, with improved patient outcomes. But of course accuracy is important too, so that scarce resources can be targeted to the right patients.

And that is what makes clinical diagnosis so difficult, requiring skilled and knowledgeable practitioners, whether the local GP or the Harley Street physician. These skills, developed over years of medical training, allow effective diagnosis of disease based not just on clinical information, but also on the patient's past history, social background, age and ethnicity.

But at the end of the day, the clinician is simply processing data. And this is the primary strength of Artificial Intelligence.

So which areas are likely to be most impacted by AI?

Medical Imaging

UK hospitals generate a staggering 50 petabytes of data every year, of which the vast majority comes from medical imaging. Yet more than 97% of that data goes unused or unanalysed, perhaps because it is unusable, or redundant, or simply swamps the capacity of clinicians. AI-powered medical imaging systems can now reliably analyse scans and help radiologists identify subtle patterns, allowing them to treat patients with emergent conditions more quickly. Will this lead to the disappearance of radiology as a clinical profession? A more likely outcome is that radiologists will be able to allocate their time more effectively, working closely with the patients with the most serious or complex conditions.

Similarly, cancer diagnosis can be made more accurate through the use of AI systems that analyse scans with complex recognition algorithms. When cancer is detected early, treatment is more likely to be successful, but too often cancers are diagnosed at a late stage, when they are much harder to treat. AI systems are beginning to take on some of the workload: for example, an algorithm has been developed to diagnose skin cancer more accurately than dermatologists (95% compared with 87%). In doing so, we must remember that AI systems are not infallible, and the relationship between patient and doctor remains important so that false negatives are not dismissed out of hand.
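
As a rough illustration of how figures like those are derived, here is a minimal Python sketch (with purely hypothetical counts) showing how accuracy, sensitivity and specificity fall out of a confusion matrix; it is the sensitivity figure that reveals how many cancers are missed as false negatives.

```python
# Minimal sketch of how diagnostic accuracy figures are derived from a
# confusion matrix. The counts used below are hypothetical.
def diagnostic_metrics(true_pos, false_neg, true_neg, false_pos):
    """Return accuracy, sensitivity and specificity for a binary test."""
    total = true_pos + false_neg + true_neg + false_pos
    accuracy = (true_pos + true_neg) / total
    sensitivity = true_pos / (true_pos + false_neg)   # how few cancers are missed
    specificity = true_neg / (true_neg + false_pos)   # how few healthy patients are flagged
    return accuracy, sensitivity, specificity

# Example: 90 cancers correctly found, 5 missed (false negatives),
# 880 healthy patients correctly cleared, 25 wrongly flagged.
print(diagnostic_metrics(90, 5, 880, 25))  # ≈ (0.97, 0.947, 0.972)
```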

In another example, researchers at Imperial College London are working with DeepMind Health to develop AI-based techniques to improve the accuracy of breast cancer screening, using a database of 7,500 anonymised mammograms to develop screening algorithms that can spot early signs of breast cancer whilst reducing over-diagnosis.

But perhaps more interestingly, could there be ways to detect hidden clues in people’s lives that point to cancer? As we generate, collect and share more data than ever before, some of which may be relevant to our health, is there a way to gather this information and help detect diseases such as cancer earlier? And even if it is possible, is it something that we would allow big data systems and corporations to do?

Alzheimer’s Disease

Currently, there’s no easy way to diagnose Alzheimer’s Disease: no single test exists, and brain scans alone can’t determine whether someone has the disease. But alterations in the brain can cause subtle changes in behaviour and sleep patterns years before people start experiencing confusion and memory loss. Artificial intelligence could recognize these changes early and identify patients at risk of developing the most severe forms of the disease, allowing clinicians to target drug and behavioural therapies most effectively.
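
To make the idea concrete, here is a minimal sketch of the kind of classifier that could flag at-risk patients from behavioural and sleep data. The features, the synthetic data and the choice of logistic regression are illustrative assumptions, not a description of any deployed system.

```python
# Illustrative sketch only: a simple classifier over hypothetical behavioural
# features, trained on synthetic data. Not a description of any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Assumed features per patient: [sleep hours, night-time awakenings, activity score]
X = rng.normal(loc=[7.0, 1.0, 0.8], scale=[1.0, 0.8, 0.2], size=(200, 3))
y = (X[:, 1] > 1.5).astype(int)  # stand-in label: disturbed sleep marks the at-risk group

model = LogisticRegression().fit(X, y)
new_patient = [[5.5, 2.4, 0.6]]  # poor sleep, frequent awakenings, low activity
print(model.predict_proba(new_patient)[0, 1])  # estimated probability of being at risk
```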

The role of the doctor

It is clear that managing patient data is a core component of healthcare delivery, and AI systems will increasingly play an important role in this process. AI can process larger amounts of data at a faster rate than human clinicians, can achieve a higher level of accuracy and is not subject to fatigue or burnout.

Which naturally raises a question: what will be the future role of the doctor?

No matter its strengths, AI lacks human sensitivity; clinical applications still require human expertise in the interpretation of data and recommendations. As the role of the physician evolves in the era of AI, the humanity of healthcare delivery will remain critical, and rituals ('the bedside manner') that may have been lost in the rush for efficiency savings may re-emerge with a new-found focus on the patient at the centre of treatment.

If you are interested in talking to Nigel Whittle, our Head of Medical & Healthcare, about how Plextek can assist with your project, please email nigel.whittle@plextek.com.

Further Reading

Distance Finding in Mobile Autonomous Systems

By: Damien Clarke
Lead Consultant, Data Exploitation

7th November 2018

You might already be familiar with FiveAI, a driverless car startup based in Cambridge, and their recent headline-making work; but for those who aren't, allow me to bring you up to speed. FiveAI's vision is to bring a shared driverless taxi service to London by next year, and they have already started gathering data on London's streets with their iconic blue branded cars.

A key component in the development of mobile autonomous systems is the ability to produce a 3D map of the local environment which can be used for route planning and collision avoidance (e.g. sense and avoid). There are various sensors which can be used to achieve this and each one has specific advantages and disadvantages.

Stereo Vision

The first approach (and one that many animals, including humans, use) is to combine images from a pair of cameras placed at slightly different positions to enable depth perception. This is achieved by determining the horizontal disparity between the same object in both cameras. Nearby objects produce a large disparity in position between the two cameras whereas far objects have a small disparity.
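
For a calibrated, rectified camera pair the disparity-to-depth relation is simple: depth equals the focal length multiplied by the baseline, divided by the disparity. A minimal sketch, with assumed calibration values:

```python
import numpy as np

# Assumed calibration for a rectified stereo pair (illustrative values only).
focal_length_px = 700.0   # focal length in pixels
baseline_m = 0.12         # separation between the two cameras in metres

def depth_from_disparity(disparity_px):
    """Convert a disparity map (pixels) into a depth map (metres)."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0                      # zero disparity = no match / infinitely far
    depth[valid] = focal_length_px * baseline_m / disparity_px[valid]
    return depth

# Nearby objects give large disparities, distant objects small ones.
print(depth_from_disparity([40.0, 10.0, 1.0]))   # ≈ [2.1, 8.4, 84.0] metres
```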

This technique can also be used with a single camera if it is moving as the video is effectively a series of images taken at different positions. This is known as Structure from Motion and is commonly used with airborne cameras, such as those used on small consumer drones.

The primary advantage of this technique is that cameras are small and inexpensive. At close range, good depth resolution can be achieved and fused with the image content itself to produce a 3D colour image. A large number of cameras with overlapping fields of view can potentially produce a 360° panoramic 3D map of the environment around a vehicle.

The main limitation of this approach is that it will only work when suitable images can be produced and therefore adverse environmental conditions (e.g. dust, fog, rain, etc.) will prevent the production of a 3D map. Operation at night time is potentially possible with illumination or the use of thermal imagers rather than standard cameras. Poor camera dynamic range can also be a problem as bright lights (e.g. headlights or the sun) will cause glare. In addition, the processing required to locate features within both images and match them is complex and adds computational burden when using this technique to produce a 3D map.

Lidar

An alternative optical approach to stereo vision is a scanning laser range finder, also known as lidar. This approach uses a laser to send a pulse towards a surface and a sensor to record how long it takes for the reflection to return. The measurement of the time of flight can then be used to determine the range. To produce a 3D map of a scene, this beam must then be scanned in azimuth and elevation. To reduce the amount of scanning, some lidar sensors use multiple beams at different elevation angles and then only scan in azimuth.
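
The underlying arithmetic is straightforward: the range is the speed of light multiplied by the round-trip time, divided by two, and the scan angles then place that return in a 3D map. A minimal sketch with illustrative numbers:

```python
import numpy as np

SPEED_OF_LIGHT_M_S = 299_792_458.0

def range_from_time_of_flight(round_trip_time_s):
    """Range to the reflecting surface; the pulse travels out and back,
    so the round-trip distance is halved."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

def to_cartesian(range_m, azimuth_rad, elevation_rad):
    """Place a single lidar return in 3D using its scan angles."""
    return range_m * np.array([
        np.cos(elevation_rad) * np.cos(azimuth_rad),
        np.cos(elevation_rad) * np.sin(azimuth_rad),
        np.sin(elevation_rad),
    ])

# A reflection returning after ~667 nanoseconds lies roughly 100 m away.
r = range_from_time_of_flight(667e-9)
print(r, to_cartesian(r, np.deg2rad(30.0), np.deg2rad(2.0)))
```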

Lidar has very good depth resolution and, due to the narrow beam, can also produce very good lateral resolution. In general, the technology for emitting and sensing light is entirely solid state; however, at present many lidar systems still use mechanical methods to scan the beam across the scene. Fully solid-state systems would be small and cheap, though this promise has not yet been fully realised in commercial lidar systems, which are often large and expensive.

As simple lidar sensors only record the time for the first reflection to return, a drawback of some lidar systems is that they will only detect the nearest object in a specific direction. This is problematic when the environment is dusty or foggy as the first reflection may not be from a solid object and the resulting 3D map will be degraded. More sophisticated (and costly) systems measure the entire reflection over time which then allows a full range profile to be measured through the obscurant. Direct sunlight can also produce problems as the large level of background illumination can make it difficult to detect weak reflections. Similarly, if a surface has low reflectivity (i.e. it is black) then it may not be detected by the lidar. This can be a problem for autonomous vehicles as black car surfaces will only be detected at a closer range than more reflective vehicles.

Radar

Radar is similar to lidar but uses microwaves rather than light (typically 25 or 77 GHz). Lidar was in fact inspired by radar (e.g. laser radar) and only became possible once lasers were invented. The exact mechanism by which the distance is measured varies slightly between different radar systems; however, the concept is the same. A signal is emitted, the length of time it takes for a reflection to return is measured and this is then converted into a range profile. While panoramic mechanically scanned radars are available, it is more common to use an antenna array and calculate the angle of arrival of a reflection by the difference in signal across the array.

One advantage of radar is the ability to measure speed directly via Doppler shift without complex processing. Therefore, objects moving relative to a mainly static scene are generally easy to detect. Poor environmental conditions (e.g. fog, rain and snow) have little impact on the performance of the radar which provides a useful all-weather capability for autonomous vehicles. Single chip radars with integrated processing capabilities are also available for use as small and inexpensive sensor solutions.
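
The Doppler relation behind this is a one-line calculation: the radial speed is the measured frequency shift multiplied by the speed of light, divided by twice the carrier frequency. A minimal sketch, assuming a 77 GHz automotive radar:

```python
# Minimal sketch of radial speed from radar Doppler shift (assumed 77 GHz carrier).
SPEED_OF_LIGHT_M_S = 299_792_458.0
CARRIER_FREQ_HZ = 77e9

def speed_from_doppler(doppler_shift_hz):
    """Radial speed of the reflector; the factor of two accounts for the
    two-way path (out to the target and back)."""
    return doppler_shift_hz * SPEED_OF_LIGHT_M_S / (2.0 * CARRIER_FREQ_HZ)

# A 5.1 kHz Doppler shift at 77 GHz corresponds to roughly 10 m/s (~36 km/h).
print(speed_from_doppler(5.1e3))  # ≈ 9.9 m/s
```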

A disadvantage of radar is the limited lateral resolution. While the depth resolution can be good, the angular resolution is significantly lower than for optical sensors. However, this is partially mitigated if an object can be uniquely separated from other objects and clutter by its range or velocity value.

Ultrasonic

The final sensor used for range finding on autonomous vehicles is an ultrasonic sensor which emits high-frequency sounds beyond the range of human hearing. Bats are, of course, well-known users of this approach. Ultrasonic sensors are very similar to lidar sensors; however, as the speed of sound in air is vastly slower than the speed of light it is much easier to measure the time for a reflection to return from a surface.
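
The same time-of-flight calculation as lidar applies, just with the speed of sound rather than the speed of light, which is why the timing electronics can be so much simpler. A minimal sketch with assumed conditions:

```python
# Minimal sketch of ultrasonic ranging (speed of sound at ~20 °C in air, assumed).
SPEED_OF_SOUND_M_S = 343.0

def ultrasonic_range(round_trip_time_s):
    """Distance to the surface from the echo's round-trip time."""
    return SPEED_OF_SOUND_M_S * round_trip_time_s / 2.0

# An echo after 12 milliseconds corresponds to about 2 m -- note how much
# longer this is than the nanosecond-scale timings a lidar must resolve.
print(ultrasonic_range(0.012))  # ≈ 2.06 m
```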

Ultrasonic sensors work well regardless of light level or environmental conditions and are very small and inexpensive. This makes the technology ideal for ultra-short range collision avoidance sensors on small or slow moving vehicles which can be placed in many locations to provide wide area coverage.

The main disadvantage of ultrasonic sensors is their extremely short range as they can only produce distance measurements for surfaces up to a few metres away. For this reason, it is also uncommon for an ultrasonic sensor to be used to explicitly form a 3D map.

Data Fusion

In practice, to achieve a robust and effective sensor solution for autonomous vehicles it is necessary to combine different sensors and perform sensor fusion. As yet there is no standard sensor suite and research is still ongoing to determine the optimum combination with an acceptable performance across all weather conditions.
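
As a toy illustration of the principle (not any particular vehicle's implementation), the sketch below fuses two independent range estimates by inverse-variance weighting, trusting the less noisy sensor more; production systems typically use Kalman filters or more sophisticated probabilistic methods.

```python
# Toy sensor fusion: inverse-variance weighting of two independent range
# estimates (e.g. camera and radar). Values are illustrative only.
def fuse_ranges(range_a_m, var_a, range_b_m, var_b):
    """Combine two noisy range measurements, weighting by their reliability."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * range_a_m + w_b * range_b_m) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)   # the fused estimate is better than either alone
    return fused, fused_var

# Radar gives good depth resolution (low variance); the camera less so here.
print(fuse_ranges(52.0, 4.0, 50.0, 0.25))  # ≈ (50.1, 0.24)
```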

As an example, Tesla's latest models, which are claimed to be suitable for autonomous operation, have eight cameras (with varying fields of view) and twelve ultrasonic sensors to enable panoramic sensing, while a single forward-looking radar measures the range and speed of objects up to 160 m away.

The combination of cameras with radar is a common sensor choice as it provides good lateral and range resolution under various weather conditions for a relatively low price. It remains to be seen whether or not it is sufficient for safe autonomous operation without the addition of lidar.

Further Reading

A Look Back in Events: Engineering Design Show 2018

By: Ehsan Abedi
Product Designer

24th October 2018

The Engineering Design Show (EDS) exhibition was packed with over 220 exhibitors offering different areas of expertise and services. As this was Plextek's first time attending the show as exhibitors, we were keen to show our creative and technical capabilities, observe how the industry is changing and demonstrate how we can help others adapt to these changes.

Power of being genuine

As designers and engineers at Plextek, we are rarely involved in selling our capabilities, but this proved to be to our benefit at the Engineering Design Show. A lot of industry events are filled with slick salespeople who can sometimes intimidate or distract the designers, engineers and others who are there to solve their own problems. As individuals untrained in selling, we found that by simply being our natural selves, people at the show could chat to us genuinely and naturally on a range of matters.

We had great pleasure in meeting many like-minded engineers and designers with whom we hope to collaborate. They faced a huge variety of problems, from developing new wind farm technology to intricate medical device development.

Breadth and depth in design & development

Plextek's capabilities across the whole design and development process, and our history of working in a diverse range of sectors, meant that we were able to interact with a lot of people at the show and consider which experts within Plextek would be able to help them overcome their specific issues.


So how is the design engineering industry changing?

With a diverse range of exhibitors, workshops and conferences at the Engineering Design Show, it was possible to make observations on how the industry is changing.

Rate of change

Many people I met at EDS thought that the current rate of technological change is beginning to exceed our ability to adapt. This signifies how important it is for companies to implement a collaborative approach and ensure they are able to evolve and adapt to these rapid changes.

Automation

The technology on show at EDS demonstrated some of the major advances being made in automation. There was a range of mechatronic devices on show and it is easy to see how these technologies could be implemented within robotics and for the automation of more production processes.

Newer and more effective rapid prototyping technologies were also on show, which are continually making it cheaper and easier to rapidly design and test ideas to help inform the usability of the final products.

User Centric Design

Whether it is a small component being optimised for assembly or a final product optimised for comfort and usability, user centred design is clearly becoming more prevalent.

Designers in close contact with users are likely to build a sense of empathy for those users and hence develop more pleasing products.

The implementation of user-centred design methods means products are less open to misuse, are safer to use and meet users' expectations and requirements. This in turn can lead to increased product sales and a reduction in the costs incurred by customer services.

Source: http://www.engineering-design-show.co.uk/gallery/

Further Reading

Can Technology Ever Beat Face to Face Interactions?

Interview: Agnieszka Krysztul
Events Manager

17th October 2018

This week sees the last large event we are exhibiting at for 2018. It's been busy: the Disruption Summit, DVD and Mission Critical Technologies were amongst the highlights of a busy season.

As a technology company, we often rely too much on technology to support our human-to-human interactions. We sat down today and chatted to Agnieszka, our Events Manager, to discuss whether there really is still a place for business events in today's modern tech-driven world:

Are business events still popular?

These days we have such a high-tech lifestyle. We are working in highly competitive and fast-changing environments dominated by digital technology, but what hasn't changed is that people still buy from people. Like any event, business events are organised for a strong purpose. They bring people together and give an opportunity to talk to different individuals and meet old contacts. They are also a great place to keep up with the latest trends and technologies, and to learn about and discover new opportunities.

Why do these business events inspire business growth?

We all attend various events in life (whether social or for business) and we go there to meet contacts who are all there for a similar purpose. They brighten up our daily routine and give us a fresh view on what's happening in our interest group. They often strengthen our relationships, broaden our horizons and bring fresh ideas. Ultimately, it's easier to understand a person and determine whether you want to work with them when you can see the whites of their eyes!

Whenever I go to one of these events I feel I have gained something positive. I am inspired by talking to different people and by finding out what they have been up to recently. Being in a different environment with different groups of people can give you fresh ideas and new prospects. They bring value to both the business development team and to all employees who attend.

Is there really an employee benefit to attending events?

Yes. When I see my team getting involved in attending an event, they get engaged with different colleagues in the company that they don’t regularly speak to. It is good for team building inside the organization but also when they come back from the event, the value and the new opportunities get shared across the business. It involves all of us and helps the business work together.

One of our biggest successes for the events team is increasing our company’s reputation in the marketplace, not only as a trusted and valuable partner but also as a growing & successful business. Events are a powerful way to deliver that message.

What’s next for events in the future?

As the business world works more globally, there may be more technology involved in communicating across distance, and we work with international clients on that basis. But I haven't seen that detract from events where you can make great initial contacts. People are still prepared to fly long distances, and indeed seem to prefer it, in order to meet people face to face and make more valuable human connections that accelerate business.

This week we will be showing our product design and concept generation skills at the Engineering Design Show, where our Product Designers, who often get stuck back at the studio, have a chance to meet with our customers and prospects on an open platform. These are real people working on real, exciting projects, and the best way to understand what they do will always be to see them in real life.
