musings by the electronics design, fabrication and assembly industry's best minds
Category Archives: Dr. Ron
Materials expert Dr. Ron Lasky is a professor of engineering and senior lecturer at Dartmouth, and senior technologist at Indium Corp. He has a Ph.D. in materials science from Cornell University, and is a prolific author and lecturer, having published more than 40 papers. He received the SMTA Founders Award in 2003.
In the last post on tin whiskers, we discussed detection. In this post, we will cover mitigation. Since compressive stresses are a primary cause of tin whiskers, minimizing these stresses will help to mitigate tin whisker formation. There are several approaches to accomplish this compressive stress reduction. The first is to establish a process that produces a matte finish as opposed to a bright tin finish. Experience has shown that a satin or matte tin finish, which has larger grain sizes, has lower internal compressive stresses than a bright tin finish. Studies have shown that avoiding a bright tin finish alone can reduce tin whisker formation by more than a factor of ten. Thicker tin layers will often reduce compressive stresses.
Since a major source of the compressive stresses in tin is due to copper diffusion into the tin, minimizing this diffusion will significantly reduce tin whisker formation. One proven approach to minimizing copper diffusion is to have a flash of nickel between the copper and the tin. Since nickel does not readily diffuse into the tin after initial intermetallic formation, tin whisker formation can be all but eliminated in many cases.
Adding small amounts of bismuth to the tin can also reduce tin whisker formation: the bismuth strengthens the tin by solid-solution strengthening, which often suppresses whisker growth.
Another mitigation approach is the use of coatings. Acrylics, epoxies, urethanes, alkali silicate glasses and parylene C have been used. Parylene C appears to be the most promising.
Often a tin whisker will penetrate the coating as seen in Figure 1. However, to be a reliability risk, it must penetrate a second coating.
Figure 1. A tin whisker about to penetrate a polymer coating. Source: Dr. Chris Hunt, NPL.
This situation is almost impossible as the tin whisker is fragile and will bend as it tries to penetrate the second layer of coating. See Figure 2. So, coatings can be a very effective tin whisker mitigation approach.
Figure 2. To be a reliability concern, a tin whisker must penetrate two protective coatings.
The next and last tin whisker post will be on using FMEA (failure modes and effects analysis) to develop a tin whisker reduction strategy.
One of the great challenges of tin whiskers is detecting them. When one considers that their median thickness is in the 3 to 5 micron range (a human hair is about 75 microns), they can be hard to see with direct lighting. Right angle lighting facilitates visual detection. See Figure 1. In this figure, Panashchenko shows that with direct light (left image) it is impossible to see the tin whisker, whereas with right angle light the tin whisker jumps out.
Figure 1.* It is not possible to see the tin whisker with direct lighting as in the left image. However, in the right image, right angle lighting makes it easy to see the tin whisker.
Start with low magnification and work up to high magnification
Have the ability to tilt the sample in 3 axes
Use a flexible lamp that allows multiple angles of illumination; do not use a ring light
Use LED or fiber optic lighting, not incandescent lights, which can cause shadowing
Vary the brightness of the light source
The most important tip is to vary the angle of lighting while varying the magnification. Thus, analyzing a sample should take several minutes, at least. However, even the most thorough inspection may miss some tin whiskers.
In the next post, I will discuss mitigation techniques.
*The image is from Lyudmila Panashchenko, “The Art of Metal Whisker Detection: A Practical Guide for Electronics Professionals,” IPC Tin Whisker Conference, April 2012.
Tin whiskers are very fine filaments or whiskers of tin that form out of the surface of the tin. See Figure 1. They are the result of stress release in the tin. Tin whiskers are a phenomenon that is surprising when first encountered, as their formation just doesn’t seem intuitive.
They are a concern, as they can cause electrical short circuits, or intermittent short circuits if the current melts the whisker open like a fusible link. Lead in tin-lead solder greatly suppresses tin whisker growth. Therefore, with the advent of lead-free solders, there is justifiable concern about decreasing reliability due to tin whisker growth in electronics.
Tin whiskers can vary in length and width, as is seen in Figure 2. Note that only about 10% are as long as 1,000 microns (1 mm); even so, that length and occurrence rate are enough to cause many reliability concerns.
Figure 2. The length and width of some tin whiskers. The source is also the NASA Tin Whisker Website.
Over the following weeks I plan to post how tin whiskers form and strategies to alleviate them. Most of the information I will post comes from a paper I presented with Annaka Balch at the SMTA PanPac 2019.
NASA has an excellent website that provides much information about tin whiskers and is a source for historic critical failures caused by tin whiskers.
I am reposting an updated blog post on Cp and Cpk calculations with Excel, as I have improved the Excel spreadsheet. If you would like the new spreadsheet, send me an email at [email protected].
One of the best metrics for assessing process capability is Cpk. So, I developed an Excel spreadsheet that calculates and compares Cps and Cpks.
It is accepted as fact by everyone I know that 2/3 of all SMT defects can be traced back to the stencil printing process. A number of us have tried to find a reference for this claim, with no success. If any reader knows of one, please let me know. Assuming this adage is true, depositing the right amount of solder paste, squarely on the pad, is profoundly important.
In light of this perspective, some time ago, I wrote a post on calculating the confidence interval of the Cpk of the transfer efficiency in stencil printing. As a reminder, transfer efficiency is the ratio of the volume of the solder paste deposit divided by the volume of the stencil aperture. See Figure 1. Typically the goal would be 100% with upper and lower specs being 150% and 50% respectively.
Figure 1. The transfer efficiency in stencil printing is the volume of the solder paste deposit divided by the volume of the stencil aperture. Typically 100% is the goal.
I chose Cpk as the best metric to evaluate stencil printing transfer efficiency as it incorporates both the average and the standard deviation (i.e. the “spread”). Figure 2 shows the distribution for paste A, which has a good Cpk as its data are centered between the specifications and has a sharp distribution, whereas paste B’s distribution is not centered between the specs and the distribution is broad.
Figure 2. Paste A has the better transfer efficiency as its data are centered between the upper and lower specs, and it has a sharper distribution.
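As a quick illustration of how the metric behaves (the paste A and B means and standard deviations below are hypothetical examples, not measured data from the figure), Cp and Cpk can be computed directly from their definitions:

```python
# Cp and Cpk for transfer efficiency (TE), using the spec limits above.
# The paste A/B means and standard deviations are hypothetical examples.
USL, LSL = 150.0, 50.0  # upper/lower TE spec limits, in percent

def cp(std):
    """Cp: spec width over six standard deviations (ignores centering)."""
    return (USL - LSL) / (6 * std)

def cpk(mean, std):
    """Cpk: distance from the mean to the nearer spec limit, over 3 sigma."""
    return min(USL - mean, mean - LSL) / (3 * std)

# Paste A: centered, sharp distribution; paste B: off-center, broad.
cpk_a = cpk(100.0, 15.0)  # ~1.11
cpk_b = cpk(115.0, 25.0)  # ~0.47
```

Because paste B is both off-center and broad, its Cpk is penalized twice, which is exactly why Cpk is a better single metric than the average alone.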
Recently, I decided to develop the math to produce an Excel® spreadsheet that would perform hypothesis tests of Cpks. As far as I know, this has never been done before.
A hypothesis test might look something like the following. The null hypothesis (H0) would be that the Cpk of the transfer efficiency is 1.00. The alternative hypothesis (H1) could be that the Cpk is not equal to 1.00; H1 could also be that the Cpk is less than, or greater than, 1.00.
As an example, let's say that you want the Cpk of the transfer efficiency to be 1.00. You analyze 1,000 prints and get a Cpk of 0.98. Is all lost? Not necessarily. Since this was a statistical sampling, you should perform a hypothesis test. See Figure 3. In cell B16, the Cpk = 0.98 is entered; in cell B17, the sample size n = 1000 is entered; and in cell B18, the null hypothesis, Cpk = 1.00, is entered. Cell B21 shows that the null hypothesis cannot be rejected, so we cannot say statistically that the Cpk is not equal to 1.00.
Figure 3. A Cpk = 0.98 is statistically the same as a Cpk of 1.00 as the null hypothesis, Ho, cannot be rejected.
How different from 1.00 would the Cpk have to be in this 1,000-sample example to say that it is statistically not equal to 1.00? Figure 4 shows us that the Cpk would have to be 0.95 (or 1.05) to be statistically different from 1.00.
Figure 4. If the Cpk is only 0.95, the Cpk is statistically different from a Cpk = 1.00.
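The post does not show the spreadsheet's internal math, but a test of this kind can be sketched with a normal approximation to the sampling distribution of an estimated Cpk. The standard-error formula below is one common (Bissell-style) approximation and is my assumption; the spreadsheet's exact method may differ:

```python
import math

def cpk_std_err(cpk, n):
    """Approximate standard error of an estimated Cpk (a common
    Bissell-style normal approximation; an assumption here, not the
    spreadsheet's documented method)."""
    return math.sqrt(1.0 / (9 * n) + cpk ** 2 / (2 * (n - 1)))

def cpk_z(cpk_hat, cpk_null, n):
    """z statistic for H0: Cpk = cpk_null, two-sided alternative."""
    return (cpk_hat - cpk_null) / cpk_std_err(cpk_hat, n)

# 1,000 prints, observed Cpk = 0.98, H0: Cpk = 1.00
z1 = cpk_z(0.98, 1.00, 1000)  # |z1| < 1.96: cannot reject H0
# Observed Cpk = 0.95 against the same H0
z2 = cpk_z(0.95, 1.00, 1000)  # |z2| > 1.96: reject H0
```

With this approximation, a Cpk of 0.98 on 1,000 prints falls inside the 1.96 two-sided cutoff, while 0.95 falls outside it, consistent with the figures above.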
The spreadsheet will also calculate Cps and Cpks from process data. See Figure 5. The user enters the upper and lower specification limits (USL, LSL) in the blue cells as shown. Typically the USL will be 150% and the LSL 50% for TEs. The average and standard deviation are also added in the blue cells as shown. The spreadsheet calculates the Cp, Cpk, number of defects, defects per million and the process sigma level as seen in the gray cells. By entering the defect level (see the blue cell), the Cpk and process sigma can also be calculated.
Figure 5. Cps and Cpks calculated from process data.
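The process-data calculations can be sketched as follows, assuming normally distributed transfer efficiencies. The customary 1.5-sigma shift for the process sigma level is my assumption; the spreadsheet's conventions may differ:

```python
from statistics import NormalDist

def process_metrics(mean, std, usl, lsl):
    """Cp, Cpk, defects per million, and process sigma level from process
    data, assuming a normal distribution. The customary 1.5-sigma shift
    is assumed for the sigma level; the spreadsheet may use another
    convention."""
    nd = NormalDist()  # standard normal
    cp = (usl - lsl) / (6 * std)
    cpk = min(usl - mean, mean - lsl) / (3 * std)
    # Fraction of the distribution falling outside either spec limit
    p_defect = nd.cdf((lsl - mean) / std) + (1.0 - nd.cdf((usl - mean) / std))
    dpm = p_defect * 1_000_000
    sigma_level = nd.inv_cdf(1.0 - p_defect) + 1.5
    return cp, cpk, dpm, sigma_level

# Transfer-efficiency example: USL = 150%, LSL = 50% (mean/std hypothetical)
cp_te, cpk_te, dpm_te, sigma_te = process_metrics(100.0, 20.0, 150.0, 50.0)
```
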
The spreadsheet can also calculate 95% confidence intervals on Cpks and compare two Cpks to determine if they are statistically different at greater than 95% confidence. See Figure 6. The Cpks and sample sizes are entered into the blue cells and the confidence intervals are shown in the gray cells. Note that the statistical comparison of the two Cpks is shown to the right of Figure 6.
Figure 6. Cpk Confidence Intervals and Cpk comparisons can be calculated with the spreadsheet.
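A sketch of the confidence-interval and comparison calculations, again using a normal approximation for the estimated Cpk's standard error (an assumption; the spreadsheet's exact method may differ):

```python
import math

def cpk_ci(cpk, n, z=1.96):
    """Approximate 95% confidence interval for an estimated Cpk
    (normal approximation; the spreadsheet's method may differ)."""
    se = math.sqrt(1.0 / (9 * n) + cpk ** 2 / (2 * (n - 1)))
    return cpk - z * se, cpk + z * se

def cpks_differ(cpk1, n1, cpk2, n2, z=1.96):
    """True if two estimated Cpks differ at roughly 95% confidence."""
    se1 = math.sqrt(1.0 / (9 * n1) + cpk1 ** 2 / (2 * (n1 - 1)))
    se2 = math.sqrt(1.0 / (9 * n2) + cpk2 ** 2 / (2 * (n2 - 1)))
    return abs(cpk1 - cpk2) / math.hypot(se1, se2) > z

lo, hi = cpk_ci(1.00, 1000)                     # roughly (0.95, 1.05)
different = cpks_differ(1.00, 1000, 1.10, 1000)
```

Note how the interval for a Cpk of 1.00 on 1,000 samples is roughly 0.95 to 1.05, consistent with the hypothesis-test result above.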
This spreadsheet should be useful to those who are interested in monitoring transfer efficiency Cpks to reduce end-of-line soldering defects. It is not limited to calculating Cps and Cpks of TE, but can be used for any Cps and Cpks. I will send a copy of this spreadsheet to readers who are interested. If you would like one, send me an email request at [email protected].
In a recent post, I discussed Moore’s Law. I challenged readers to solve for “a” and “b” from the equation a*2^(b*(year-1970)) from the graph in Figure 1.
Moore’s Law posits that the number of transistors doubles every two years. If so, “b” should be 0.5. It turns out that “b”, from the solution in Figure 2, is 0.4885, so a doubling occurs about every 1/0.4885 = 2.047 years, which is really close to two years. The solution follows:
BTW, congrats to Indium Corporation’s Dr. Huaguang Wang as he got a close solution.
Moore’s Law was developed by Gordon Moore in 1965. It predicted that the number of transistors in integrated circuits would double approximately every two years. Surprisingly, it has held true up to today. Figure 1 shows some integrated circuit transistor counts as a function of time. The red line is a good fit.
Figure 1. A plot of transistor count in selected ICs as a function of the year.
A reasonable equation for the red line is Transistor Count = a*2^(b*(year-1970)). What should “b” be if the count doubles every two years? To the first person that can solve for “a” and “b” using the red line and the equation above, we will send a Dartmouth sweatshirt.
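One way to attack the problem is a least-squares fit of log2(transistor count) against (year - 1970). The chip counts below are approximate, widely quoted figures for a few well-known processors, not the exact data behind the red line, so the fitted values are only illustrative:

```python
import math

# Approximate, widely quoted transistor counts for a few well-known chips;
# illustrative only, not the exact data set behind the figure's red line.
chips = [
    (1971, 2_300),          # Intel 4004
    (1978, 29_000),         # Intel 8086
    (1985, 275_000),        # Intel 386
    (1993, 3_100_000),      # Intel Pentium
    (2000, 42_000_000),     # Intel Pentium 4
    (2008, 731_000_000),    # Intel Core i7 (Nehalem)
    (2018, 6_900_000_000),  # Apple A12
]

# Fit log2(count) = log2(a) + b*(year - 1970) by ordinary least squares.
xs = [year - 1970 for year, _ in chips]
ys = [math.log2(count) for _, count in chips]
x_bar, y_bar = sum(xs) / len(xs), sum(ys) / len(ys)
b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sum(
    (x - x_bar) ** 2 for x in xs)
a = 2.0 ** (y_bar - b * x_bar)
doubling_time = 1.0 / b  # years per doubling; ~2 if Moore's Law holds
```

With these illustrative counts, "b" comes out near 0.47, i.e., a doubling roughly every 2.1 years.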
But, I have to admit to being somewhat of a skeptic. Are all, or even most, of these factories up and running without a hitch? I have toured 100 or so factories worldwide, and most are at Industry 2.0 to 3.0.
Getting the multiple AI and IoT technologies in a Lighthouse factory connected and working flawlessly is daunting. To me, it is like self-driving cars: they are at 95% of full self-driving capability today, but the last 5% may not be achieved for decades…if ever.
A recent article in the Washington Post presents a similar perspective. The author, Dalvin Brown, argues that robotics and AI firms have struggled to make anything like a robot butler; their efforts have only succeeded at very focused tasks. Nothing like a robot butler will exist for decades. Steven Pinker’s argument that no AI can empty a dishwasher is still the most powerful way to clarify the primitive state of practical, common-sense, robot-type machines.
Figure 1. Dalvin Brown points out in his article that nothing like The Jetsons’ Rosey the Robot exists today. Image source is here.
As I always state, we in electronics assembly should be cheering these folks on, as even the slow emergence of these complex, interdependent technologies will require more electronics than predicted.
In addition, I think the hype around Industry 4.0 always neglects the important role that people have to play. When we watch something as complex as the landing of a spacecraft on Mars, we always see the control center with scores of people cheering the success. Not all of the important tasks were handled by AIs.
So if anyone reading this article would like to invite me to a Lighthouse factory, please do. If I am wrong, I will write a retraction.
The vast majority of solders used in electronic assembly have, as their base metal, tin. There are some specialty gold solders, like gold-copper or gold-indium, indium based solders, and a few others that do not contain tin. Although these solders have important applications, the sheer volume of tin-based solders is overwhelming in comparison.
Tin was a metal known to the ancients, and it led them out of the Copper Age into the Bronze Age. Ten to twelve percent tin in copper yields bronze, which is much stronger than copper (see Figure 1) and has the added benefit of melting at about 950°C vs. copper’s 1085°C.
This difference in temperature is significant in that with primitive heating technology, 1085°C is hard to achieve. In addition, since bronze freezes at a lower temperature, it fills molds much better. This property enabled the casting of much more complex shaped objects. See Figure 2. All of these benefits resulted in a dramatically increasing demand for tin. This demand established much more sophisticated trade routes for tin and its most common ore, cassiterite; this enhanced overall trade and accelerated the spread of civilization and learning.
Back to solder. Soldering is a technology that has existed almost as long as the Copper Age. It is thought to have originated in Mesopotamia as long ago as 4000 BC. Soldering was used for joining and making jewelry, cooking tools, and stained glass. Today, in addition to these applications, plumbing, musical instrument repair, and metal plating are common uses. However, electronics assembly is by far the largest user of tin-based solder. See Figure 3.
One of the greatest benefits of solder is its reworkability. This property enables rework of electronics assemblies, plumbing, jewelry, and musical instruments. Without the ability to rework electronics, the industry would struggle to be profitable. Another benefit, of course, is the miracle of soldering I discussed in another post.
So, the next time you stare at your smartphone, tablet, TV, etc., remember tin-based solder and soldering are fundamental to its existence.
SMT assembly is an optimization process. There is no single stencil printing process for all PWB designs. The stencil printing parameters (stencil design, squeegee speed, snap-off speed, stencil wipe frequency, and solder paste) will not be the same for all PWBs, just as there is no single reflow oven profile for all PWBs. Fortunately, most solder paste specifications give good boundaries for all of these parameters, but typically some trial-and-error experiments will be needed when assembling a new PWB design that is not similar to past assemblies.
The need for optimization is most obvious when trying to minimize defects. As an example, minimizing graping is often facilitated by using a ramp-to-peak reflow profile. However, the ramp-to-peak profile may exacerbate voiding. See Figure 1.
Figure 1. The ramp-to-peak reflow profile may minimize graping, but exacerbate voiding.
Thankfully your SMT soldering materials and equipment suppliers deal with these optimization issues on a daily basis. So if you are ever stuck with some challenging SMT assembly process, contact these solder materials and equipment experts first.
I read with interest Zohair Mehkri’s SMTAI 2020 paper titled “How Quantum Computing (QC) will Revolutionize Electronics Manufacturing.” I will start by saying that he gives a very good Quantum Computing 101 overview. This is no easy feat, as QC is a difficult technology to understand. I will humbly state that I still struggle to understand the basics, and I’m sure I don’t understand QCs as well as he does.
However, I have two main concerns with Zohair’s paper. One is that it may give the impression that QC is becoming a practical technology and will soon be widely available — to the point that we can use it to solve electronics manufacturing problems.
QCs are rare; there are about 30 worldwide, 15 of which are owned by IBM. Although to be fair, Shenzhen SpinQ Technology gave this recent announcement: “On 29 January 2021 Shenzhen SpinQ Technology announced that they will release the first-ever desktop quantum computer. This will be a miniaturized version of their previous quantum computer based on the same technology (nuclear magnetic resonance) and will be 2 qubit device. Applications will mostly be educational for high school and college students. The company claims SpinQ will be released to the public by the fourth quarter of 2021.”
Since the device has only two qubits, it will more than likely be for educational purposes, not intended to solve real problems. It will be interesting to see how it emerges later in the year.
Almost all QCs are superconducting, meaning that they require operating temperatures as cold as about -459°F, near absolute zero and below the boiling point of liquid helium. They are also extremely delicate; even slight vibrations cause them to fail.
So, we might be able to rent time on a useful QC sometime in the future, but QCs won’t be common any time soon.
The other concern I have is: what is the need for QCs? Most of the practical problems that face us can be solved by conventional computers. In addition, only certain types of problems can be solved by QCs. As stated in Wikipedia: “However, the capacity of quantum computers to accelerate classical algorithms has rigid upper bounds, and the overwhelming majority of classical calculations cannot be accelerated by the use of quantum computers.”
QC is an exciting technology and many wonderful discoveries will no doubt come from it. However, I am skeptical that it will solve practical problems anytime soon.