Innovation… What Does It Mean?


By: Stewart Da’Silva
Senior Designer, Product Design

8th November 2017


The buzzword of the moment, constantly being bandied about, is ‘innovation’. There is hardly a departmental or company briefing where that word isn’t mentioned.

Indeed, it seems to be held up in the business world as the holy grail of survival; a panacea against the risk of extinction (in the corporate sense). Market gurus metaphorically stand on tip-toes whilst balancing on rooftops shouting through megaphones…”INNOVATE OR DIE!”

But what exactly does ‘innovation’ mean? What does it mean to us as individuals and as a company?

My perception of ‘innovation’ is that it isn’t something that I, personally, should bother my pretty little head about. After all, I know for certain that having spent my whole working life immersed in the world of engineering… I have never once in all those many, many years had a spark of an original idea that has ever taken seed and germinated in the wilderness that is my brain.

No, I had assumed that this call for us to innovate was directed towards the more intelligent amongst us and that they were being asked to dream up some new ground-breaking idea… a blinding flash of inspiration that our company could exploit in the form of some great new product.

Then the realisation began to dawn that there had, in fact, been very few real inventions of any substance for many years.

A case in point is in our own industry – electronics.

It is accepted that the transistor was the starting point of the phenomenal growth of the electronics industry as we know it today. The ‘invention’ of the transistor took place at Bell Laboratories in 1947, credited to John Bardeen and Walter Brattain; together with William Shockley, they received the 1956 Nobel Prize in Physics for “their researches on semiconductors and their discovery of the transistor effect.”

Except… they didn’t ‘discover’ the transistor effect.

It was, in fact, described by one Julius Lilienfeld in a patent on the ‘field effect transistor’ that he filed in Canada in 1925. Although he patented it, he published no known research articles on the subject. Bell scientists Bardeen and Brattain built a field effect transistor in their research laboratory based on Lilienfeld’s patent and, to their surprise, it worked. They then set about improving and refining the efficiency of the device and published their findings. Although Lilienfeld’s patent was the basis for their transistor, he was never credited in their published papers.

But then Lilienfeld himself had built upon research and observations that had gone before.

In 1833, Faraday’s research on the negative temperature coefficient of resistance of silver sulphide was the first recorded observation of any semiconductor property. The trail from Faraday’s experiments to the Lilienfeld patent had many, many contributors.

My point?

‘Nanos gigantum humeris insidentes’ – dwarfs standing on the shoulders of giants; discovering truth by building on previous discoveries.

The first working transistor wasn’t invented in 1947; it evolved from Faraday’s first observations in 1833. At that time, that is all it was – an observation, with no obvious applications.

This meandering pathway had then progressed towards its conclusion (the transistor) in a succession of incremental steps. Academics and scientists didn’t carry on their given research in splendid isolation from those that went before. If they found some relevance to their own research then they applied those previous observations and investigations to further their own knowledge and that of those that were to follow.

Which brings me back to where I started – ‘Innovation… what does it mean?’

In today’s engineering environment, I believe that it means that we, each and every one of us, could be an innovator. We don’t have to be qualified in a specific field. We just need to be open and have the vision to see how established techniques in the world around us could be transferred and applied to other disciplines to create or improve an existing product: cross-pollination of ideas and skills. Indeed, in the first instance, there is no need for detail… just the vision.

I believe that each and every one of us is capable of doing that.



The Freerunners of Wearables


By: Stewart Da’Silva
Senior Designer, Product Design

19th July 2017


Is the push for new technologies, such as wearables, and the immersive technology of Virtual Reality (VR), a means of us short-cutting natural evolution? Is this the start of a journey where we will enhance our natural abilities with various wearables marketed as specific desirable traits?

Rubbish! I can hear you say – but consider this.

For millennia, Mother Nature has advanced our capabilities in her own slow, haphazard way – it takes generations for what we would perceive as “needed developments” to take place within our bodies. We are impatient, and technology serves as a way of supplementing the limitations that nature has imposed upon us.

Globally, we have already travelled quite a way down that road.

Generally, we no longer walk anywhere – vehicles have become our legs. Our intellect, in various degrees of reliance, has become our smartphone or tablet. Who needs a good memory when we can depend on an internet search engine? Imagination? Why not just slip on your VR headset and become immersed within an installed experience of your choice?

As the examples above show, wearables are rapidly becoming mandatory in order for us to function. They are now becoming more personal – monitoring our pulse, our blood pressure, how active we are and other indicators of health and wellbeing.

What of the future? In some quarters, wearables have moved into semi-permanent implants, from mini defibrillators that monitor the heart for abnormal rhythms and correct them, through to implantable Radio Frequency Identifiers (RFIDs).

Kevin Warwick, the Professor of Cybernetics at the University of Reading, was the first person to have an RFID implanted into his own hand. Inserted by a trained medical staff member, the implant enables him to open card-reading doors and to control lighting. Another of his firsts was to have a device implanted into his median nerves that linked his nervous system directly to a computer, programming a robotic arm to exactly mimic his own arm movements.

Now we come to ‘freerunning’ wearables.

There are people out there that are experimenting with ‘off-grid’ wearable implants to heighten and augment sensations within and outside their own bodies. They call themselves grinders, biohackers or body hackers and many of them manufacture and insert the wearable implants themselves. The favoured wearable for biohackers is a magnet that is implanted into various parts on the body, usually on the end of a finger.

These wearable magnets can detect magnetic fields emanating from various sources. Microwave ovens or power lines, for example, cause the implanted magnets to vibrate against the adjacent nerves – giving the grinder a ‘sixth sense’. It also means that they can attract light ferrous-based objects without physically touching them… magic!

Grinders aren’t only about ‘sixth sense’ stuff – some have a personal objective to enhance or correct what nature has dealt them. Some of these wearable implants, like Warwick’s experiments, could in the near future prove mainstream.

For example, in conjunction with the wearable implanted magnets, the ‘Bottlenose’ device can be slid over the ‘magnet finger’. Bottlenose, a project by open source biotechnology company Grindhouse Wetware, mimics the sonar echolocation that the dolphin of the same name uses. This device sends out electromagnetic pulses and the implanted magnets are extremely sensitive to the returning waves. Vibration intensity also increases the closer the subject is to the obstacle. With practice, a mental picture can be formed of the shape and distance of surrounding objects.
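As a purely illustrative sketch, the distance-to-vibration behaviour described above could be modelled as a simple mapping where a closer obstacle produces a stronger vibration. The maximum range and the linear scaling here are assumptions for illustration, not details of Bottlenose’s actual design:

```python
# Hypothetical sketch: map an echo distance to a vibration intensity.
# Closer obstacle -> stronger vibration, as described for Bottlenose.
# MAX_RANGE_CM and the linear scaling are assumed values, not the
# device's real parameters.

MAX_RANGE_CM = 200.0  # assumed maximum sensing range


def vibration_intensity(distance_cm: float) -> float:
    """Return an intensity in [0, 1]: 1.0 at contact, 0.0 at or beyond range."""
    if distance_cm <= 0:
        return 1.0
    if distance_cm >= MAX_RANGE_CM:
        return 0.0
    return 1.0 - distance_cm / MAX_RANGE_CM


print(vibration_intensity(50))   # 0.75: fairly close, strong vibration
print(vibration_intensity(200))  # 0.0: at the edge of the assumed range
```

With practice, the wearer learns to interpret exactly this kind of graded feedback as shape and distance.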

The implications are obvious: people with sight loss could ‘see’ again using a refined version of Bottlenose. However, what if the whole wearable device could be miniaturised? Perhaps it could become a mainstream wearable implant in its own right?



Fifty Years in Engineering – Part 2


By: Stewart Da’Silva
Senior Designer, Product Design

29th March 2017


In my previous blog, I attempted to explain my journey of manually designing Printed Circuit Boards (PCBs) but, as always, time marches on and the world of taped artworks was consigned to history’s bin…

Around 1978, I started my ‘career’ in Computer Aided Design (CAD).

Racal-Redac’s Redboard was the first CAD system that I used; it was also the first fully integrated PC-based workstation that allowed small single- and double-sided PCBs to be designed. To complement this, there was Redlog, a schematic capture software package. At this time, though, most schematics were still hand drawn by tracers from an engineer’s ‘fag packet’ sketches.

Data entry was a manual task. A print was taken of the original schematic and, on this print, all components were assigned pin numbers (e.g. resistors were designated pins 1 and 2). Then one person marked off and called out the connections whilst another typed the data onto a 5¼-inch floppy disc to create a netlist. This netlist, together with a manually created component list, could then be used in the generation of a PCB. The data then had to be rechecked manually… a time-consuming task!
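To make the idea concrete, a netlist is simply a list of named nets, each joining a set of (component, pin) connections, alongside a component list mapping designators to part types. This minimal sketch uses hypothetical component and net names; the real data was, of course, typed in as plain text rather than built in code:

```python
# Minimal sketch of the netlist data that was typed in by hand:
# each net is a name plus the (component, pin) connections it joins.
# All names here are hypothetical examples.

netlist = {
    "N1": [("R1", 1), ("C1", 1), ("U1", 7)],  # net N1 joins three pins
    "N2": [("R1", 2), ("U1", 14)],
}

# The manually created component list maps designators to part types.
components = {"R1": "resistor", "C1": "capacitor", "U1": "7400 TTL quad NAND"}


def pins_on_net(net_name: str):
    """Return the component pins joined by a named net."""
    return netlist.get(net_name, [])


print(pins_on_net("N1"))  # [('R1', 1), ('C1', 1), ('U1', 7)]
```

One mistyped pin number corrupts a net, which is exactly why the whole thing had to be rechecked by hand.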

Apart from the PC-based Redboard, I did 90% of my work on Racal-Redac Maxi and Mini workstations, which were based on a DEC PDP11/34 16-bit minicomputer system.

Each designer had their own 10 megabyte read/write working disc… yes, you read that correctly… 10 megabytes; it had to be loaded into the 19-inch rack-mounted computer. These RL02 drives were approximately 30 centimetres in diameter and quite hefty!

Both the Maxi and Mini workstations consisted of a bulky monochrome monitor and a line printer. The first task when your shift started (yes, shiftwork!) was to load your RL02 disc into the PDP11/34 and boot up the system. The next task was to log in via the line printer, and then your PCB start dump could be called in.

Much like today, the components and connections – the ‘rat’s nest’, as it was termed in those days – appeared on the screen. There were a couple of major differences from today, though. The image on the monitor’s screen was monochrome, so to differentiate trace widths, various line patterns were used: solid, dash-dot, dash, dot, etc. As you can well appreciate, it was pretty mind-blowing… especially if you were on a 5 a.m. start!

The working environment left a lot to be desired, as each Maxi/Mini was situated in a small darkened room; the only light in there emanated from a small shielded lamp that enabled you to read the printer and, of course, the monitor screen. The basic idea was to eliminate any distracting reflections appearing on the monitor screen. It also meant that the designer was on his own, with only the machine for company.

The second major disadvantage was that all placement and routing was carried out on a 25 thou grid (the Data Structure Unit, or DSU). This was fine for 99% of components, as the boards were a mixture of discrete components and dual-in-line integrated circuits (TTL being the order of the day!), but if a ‘D-type’ connector was called up, it presented a problem. Then, as now, D-type connectors were dimensioned in 1/64ths of an inch, which made the component ‘off-grid’. To place the pads and route to them, these off-grid components needed a special ‘off-grid’ programme to be written. A punched tape was created to give X-Y coordinates to the photo-plotter so that it could move the plotting head to the desired positions to plot these special pads. This in itself created yet another problem: the checking procedures could only check points that fell on the basic DSU grid, meaning that you could only see the results of your programme once the plot had been completed. Inspection of the completed plot revealed whether the clearances were correct or not. As writing these programmes, although not too difficult, was still a pain, it was felt that one person should be responsible for creating them… and yes… it fell to yours truly!
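The arithmetic behind the off-grid problem is easy to sketch. A 25 thou (0.025 inch) grid only contains coordinates that are exact multiples of 25 thou, whereas a pitch of 1/64 inch is 15.625 thou, so most D-type pin positions miss the grid. A minimal check, assuming coordinates expressed in thou:

```python
# Sketch of the DSU grid problem: does a coordinate land on the
# 25 thou (0.025 inch) grid? Pins dimensioned in 1/64ths of an inch
# (1/64 inch = 15.625 thou) generally do not, hence the need for a
# separate off-grid plotting programme.

GRID_THOU = 25  # the Data Structure Unit: 25 thousandths of an inch


def on_grid(coord_thou: float) -> bool:
    """True if the coordinate (in thou) falls exactly on the DSU grid."""
    return coord_thou % GRID_THOU == 0


pitch_thou = 1000 / 64            # 1/64 inch pitch = 15.625 thou
print(on_grid(2 * pitch_thou))    # 31.25 thou -> False: off-grid
print(on_grid(100))               # 100 is a multiple of 25 -> True
```

Anything that failed this test needed explicit X-Y coordinates punched for the photo-plotter, and could not be verified by the grid-based checks until the plot came back.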

Racal-Redac then brought out a new software package, Cadstar, which was, for the first time, affordable PC-based software for an individual or small design office. With this in mind, I left my then contract design house and, with a work colleague, started our own design house: GS Designs.

Apart from its affordability, Cadstar was a leap forward as far as users were concerned. One of the most important innovations was the departure from the basic DSU to a one thou grid for placing and routing, making the system, to all intents and purposes, gridless. No more off-grid programmes to write! Another feature was a colour monitor that enabled multi-layer boards to be designed with comparative ease. There was also an important new step in preparing Gerber data: a preview of the actual end result could be observed for the first time. The buzz phrase was ‘WYSIWYG’ – ‘What You See Is What You Get’ – which obviously lessened the risk of errors being made.

Around this time, it was decided that maybe engineers could be trusted enough to create their own schematics – Orcad schematic capture being the software of choice as this was the favoured university software package of the time. Orcad could generate a netlist that was compatible with most popular PCB software packages, Cadstar being one of them.

We continued to use Cadstar, although I did flirt with PCAD, Cadnetics and Cadstar’s big mainframe brother, Visula. Cadstar remained our main PCB software until we experienced designing with PADS. At the time, PADS was more user-friendly and had more options for PCB design than Cadstar, so… we are still using it.

Read part 1 of Stewart’s blog here.



Fifty Years in Engineering


By: Stewart Da’Silva
Senior Designer, Product Design

22nd February 2017


In 1966, whilst serving my apprenticeship as a mechanical design draughtsman, I was assisting a senior draughtsman on a design that required a simple power supply. The product was going to be manufactured in medium volume, and he suggested that I might like to investigate the possibility of using a Printed Circuit Board to connect it up instead of using wire to make the connections. This was my first introduction to PCBs.

He had really set me a challenge, as no-one in my company had used one before. Suffice to say that that very first layout of mine was constructed with an ink pen, rule and compass, using Indian black ink on white Bristol board. To increase the accuracy of the finished PCB, it was drawn at 4:1 scale. As is probably obvious to the reader, mistakes whilst drawing the PCB usually necessitated starting from scratch again. The next stage was arranging for an industrial photographer to generate a 1:1 positive film from the 4:1 artwork, which a printed circuit board manufacturer could then use to fabricate the PCB.

I finished my apprenticeship in 1968 and not long afterwards started work as a mechanical designer in a ‘Contract Office’. This was essentially a design house offering design capabilities to companies that did not have the necessary skills in-house or had to outsource projects due to a high volume of work.

It was here that I learnt the basics of PCB design. Things had moved on, although designs were still only single- or double-sided. Instead of ink on Bristol board, the initial design was drawn, again at a scale of either 2:1 or 4:1, on a stable semi-transparent plastic film. This was placed over a similar transparent film, printed with a 0.1-inch matrix, that was fixed to an A0 drawing board. The grid was used as a guide for the PCB layout.

If the PCB design was double-sided, the usual convention was blue pencil for the component side and red pencil for the solder side. Once completed and checked, this pencil layout was flipped over and secured over another grid that was in turn attached to the surface of an A0-size light box. A translucent film was positioned over the top.

Using pre-cut adhesive-backed tapes and pads of various sizes, and following the red colour of the layout under this sheet as a guide, the designer built up the artwork of the solder side of the PCB. When the solder-side artwork was complete, it was removed from the light box together with the pencil layout. The artwork was flipped over and secured once again to the light box; another plastic sheet was placed over this and, again using the pre-cut pads, the designer aligned these with the pads on the completed solder-side artwork. Once all the pads were positioned, the solder-side artwork was removed and this ‘pads only’ component side was placed over the now turned-over pencil layout, the blue colour being followed to tape up the component side. The two finished and checked artworks were then sent to an industrial photographer, who generated a 1:1 artwork from the originals.

The next step the industry took was to use only one piece of stable plastic sheet instead of two. The pre-cut black pads were still used but, instead of black tapes, transparent blue and red tapes were placed on opposite sides of the sheet. The industrial photographer would then attach filters such that only the red or the blue traces appeared as black when he created the 1:1 artworks. This may seem a small step, but it meant that alignment of both sides of the PCB artworks was guaranteed, as exactly the same pads were used.

Read part 2 of Stewart’s blog here.
