Why Square Apertures Provide More Solder Paste than Circular Apertures

Folks,

When comparing the volume of solder paste provided by a circular versus a square aperture, consider that if the side of the square is D and the diameter of the circle is also D, the square has about 27% more area (i.e., (1-0.785)/0.785 = 0.274). See Figure 1.

Figure 1. Square vs. circle areas.
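As a quick check of that number, here is a minimal Python sketch of the two areas (the dimension D is arbitrary; any unit will do):

    import math

    D = 1.0  # side of the square = diameter of the circle

    square_area = D ** 2
    circle_area = math.pi * D ** 2 / 4  # ≈ 0.785 * D²

    extra = (square_area - circle_area) / circle_area
    print(f"The square has {extra:.1%} more area")  # ≈ 27.4%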

However, the greater area of a square aperture is not the only reason it deposits more solder paste. The curvature of a circular aperture brings more of the stencil wall into contact with each solder particle's surface. See Figure 2. As a result, solder particles adhere more readily to a circular aperture wall and fail to transfer to the pad, resulting in a smaller solder paste deposit.

Figure 2. The curving of a circular aperture results in more contact area with solder particles than a square aperture

These two effects can produce dramatically different soldering results, as seen in Figure 3. The square aperture deposits so much more solder paste than the circular aperture that the difference in the soldering result is striking.

Figure 3. Circular aperture/pad (left) and square aperture/pad (right), using the same Type 3 powder size, area ratio, flux chemistry (no-clean), and reflow profile (RTP)

Cheers,

Dr. Ron


The Only Way to Demonstrate Zero Defects Is to Sample All the Product

Folks,

John Foster felt very fortunate. Not only did he get his undergraduate degree summa cum laude, but he was now a graduate student at Ivy University under the tutelage of the famous Professor Patty Coleman. While contemplating these pleasant thoughts, he was working on his homework in advanced statistics when Professor Coleman walked up to his desk.

“Hey, John, I have a little assignment for you. Mike Madigan, CEO of ACME, has a vendor that is guaranteeing zero defects in lots of diodes that ACME buys, yet when ACME gets the lots they find a defect rate of around 1% or more. Can you contact ACME’s quality engineer, Frank Ianonne, to see how you can help?” Patty asked. “We covered this topic in the intro stats class you took last term,” she finished.

“Sure, glad to help,” replied John.

“Thanks, I’m going to SMTA’s PanPac for the first time and have a lot going on there,” Patty said thankfully.

“Wow,” John thought, “the pressure is on.”

John contacted Frank and learned that the vendor’s sales engineer, Mike Gladstone, said that they sample 20 diodes from each 10,000-part lot. If they find no defects in the sample of 20, they claim they can say that there are 0 defects with 95% confidence, since 19 out of 20 is 95% and they found no defects.

“Yikes,” John thought, “this can’t be right.”

He thought about it and finally arrived at what he confidently felt was the answer, especially after reviewing his notes from the class Professor Coleman mentioned. He contacted Frank, and they set up a Zoom call with Mike to discuss the issue.

On the Zoom call after introductions, Frank asked Mike how they determine that a lot has zero defects.

“I’m glad to have the opportunity to explain this to youse guys,” said Mike.

It seemed to John that his tone was arrogant.

Mike continued, “Well, you will agree that 19 out of 20 is 95%, right?”

“Yes,” responded Frank and John.

“So, if we don’t get no defects in 20 samples, we got zero defects in the lot with 95% confidence. If we had one defect in the 20 samples, we couldn’t claim zero defects in the lot,” Mike said.

“Mike, look at the image I took of one defect (a red bead) out of 2000 beads.” (See Figure 1.) “If I selected 20 beads on the left side of the container, how would I know that the defect rate is 0.0005 (1 in 2000)?” asked John.

Figure 1. The red bead is one “defect” out of 2000.

There was a long silence.

“Mike, what is your answer?” asked Frank.

Still no answer.

“The answer is that the only way you can assure zero defects is to evaluate all of the product,” said John.

“You’re just confusing the issue with that there photo,” Mike spit out.

“Seems quite clear to me,” said Frank.

“You Ivy League types is all the same. You confuse the issue with mumbo jumbo when any dufuss can see I’m right,” Mike screamed.

Some profanity followed and Frank cut Mike’s Zoom feed.

“I see your point, John,” Frank said, “but can you give me some math to back it up?”

“Sure,” John replied.

“Let’s consider a case where the defect rate is not zero, but quite low, say 1 in 10,000 in a very large population. When we select the first sample, the likelihood of it being good is 0.9999 ((10,000-1)/10,000). What is the likelihood that the second one will be good?” John asked.

“Ah, let’s see…0.9999, right?” Frank answered.

“But what is the likelihood of both events?” John asked.

“Wait, I remember from a statistics class I took a few years ago, it’s 0.9999 x 0.9999,” Frank said triumphantly.

“And the likelihood of three in a row being good?” John asked again.

“0.9999³,” Frank answered confidently.

“So let’s say we sample so many times, let’s call it n times, that 0.9999ⁿ = 0.05. What does this tell us?” asked John.

“Hmm…,” replied Frank.

“Well, how likely is this to happen if the defect rate is 1 in 10,000?” John asked.

“Wait, I see, it would only happen 0.05 or 5% of the time,” Frank responded excitedly.

“So, let’s say we didn’t know the defect rate, what could we say if we sampled n and got no defects?” queried John.

Frank was stumped.

“I’ll tell you what: think about it and we’ll get back together tomorrow. It’s already almost 6 PM. Oh, and see if you can calculate what n is. Let’s Zoom at 10 AM,” John proposed.

Time flew, and soon John and Frank were Zooming again.

“John, you just about killed me. I had trouble sleeping, but I think I have it after reviewing my stats book and doing some YouTubing,” Frank began.

“Well, if we didn’t know the defect rate and wanted to see if it was at least as good as 1 in 10,000, and we sampled n such that 0.9999ⁿ = 0.05, we could say with 0.95 (1 – 0.05) confidence that the defect rate was 1 in 10,000 or less,” Frank said triumphantly.

“Precisely,” John exclaimed.

“But what is n?” John asked.

“That’s where I am stuck. We have the equation 0.9999ⁿ = 0.05, but I can’t solve for n,” Frank said dejectedly.

“Hint: Logarithms,” John replied.

“That’s it, I got it,” said Frank enthusiastically.

Frank worked for a few minutes with a calculator and came up with the solution in Figure 2. (A Python version of the same arithmetic appears after the dialogue below.)

Figure 2. The defects calculation.

“So, to show with 95% confidence that the defect rate is 1 in 10,000 or less, we would have to sample almost 30,000 components and find no defects,” Frank exclaimed.

“By looking at the equation, you can see that if the defect rate were zero, 0.9999 would be replaced by 1, and since the log of 1 is 0, you would need an infinite sample,” said John.

“So, the only way to show 0 defects is to sample all of the components,” Frank said.

“Right!” replied John.
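For readers who want to check the arithmetic in the story, here is a minimal Python sketch covering John’s bead-jar demonstration and the sample-size calculation shown in Figure 2:

    import math
    from math import comb

    # 1. John's bead jar: 1 "defect" among 2,000 beads, a sample of 20.
    N, K, n = 2000, 1, 20  # population size, defects in it, sample size
    p_miss = comb(N - K, n) / comb(N, n)  # chance all 20 sampled beads are good
    print(f"P(a 20-bead sample misses the defect) = {p_miss:.3f}")  # 0.990

    # 2. Samples needed to show, with 95% confidence, that the defect
    #    rate is 1 in 10,000 or less: solve 0.9999^n = 0.05 for n.
    n_req = math.log(0.05) / math.log(0.9999)
    print(f"n = {n_req:,.0f}")  # ≈ 29,956 — "almost 30,000"

    # 3. For a defect rate of exactly zero, 0.9999 becomes 1, and since
    #    log(1) = 0, no finite n works: you must sample all of the product.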

Cheers,

Dr. Ron

Using FMEA to Determine Your Tin Whisker Mitigation Strategy

In our last post, we discussed techniques to mitigate tin whiskers (TW). To help determine what your tin whisker mitigation strategy should be, consider using failure modes and effects analysis (FMEA). The central metric of FMEA is the risk priority number (RPN). For tin whiskers, the RPN is equal to the product of (1) the probability of tin whiskers (P); (2) the severity, if a tin whisker exists (S); and (3) how hard it is to detect a tin whisker (D). In equation form:

RPN = P*S*D

As a first example, consider a consumer product, like a mobile phone, with a life of 5 years. With mitigation, on a scale of 1 to 10, P might be 2. For S, we might rate it a 3, as a failure in the device is unlikely to cause severe harm to anyone. Detection (D) is a problem, because tin whiskers that form later in service cannot be detected during manufacturing; hence, we would have to rate D as a 10. So, the RPN is 2*3*10 = 60, which is not too high. Therefore, with P and S at relatively low values, a tin whisker mitigation strategy would likely be successful for a consumer product like this one. It should be pointed out that determining the RPN numbers would almost certainly require supporting data, brainstorming sessions, and buy-in from the entire product team. The team would also have to determine an appropriate mitigation strategy, such as avoiding bright tin coatings on component leads and perhaps using a flash of nickel between the copper and the tin (Figure 1).

Figure 1. In mission-critical products, coatings may be required. It is almost impossible for a TW to penetrate both layers of coating, as shown above.

Now consider a mission-critical product, such as certain types of military equipment. If we assume that the electronics have a service life of 40 years and that a failure could cause bodily harm or death, we could likely end up with a consensus that RPN = 10*10*10 = 1000, the highest RPN possible. This situation would demand that special tactics be used to address the tin whisker risk. These tactics were discussed in my paper and presentation given at SMTA Pan Pacific 2019.
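Although the arithmetic is simple, here is a minimal Python sketch of the RPN calculation, using the two sets of ratings discussed above:

    def rpn(p, s, d):
        """Risk priority number: probability of occurrence (P), severity (S),
        and difficulty of detection (D), each rated on a 1-to-10 scale."""
        return p * s * d

    print(rpn(p=2, s=3, d=10))    # consumer product example: 60
    print(rpn(p=10, s=10, d=10))  # mission-critical example: 1000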

Cheers,

Dr. Ron

Tin Whiskers IV: Mitigation

Folks,

In the last post on tin whiskers, we discussed detection. In this post, we will cover mitigation. Since compressive stresses are a primary cause of tin whiskers, minimizing these stresses helps to mitigate tin whisker formation. There are several approaches to reducing them. The first is to establish a process that produces a matte finish as opposed to a bright tin finish. Experience has shown that a satin or matte tin finish, which has larger grain sizes, has lower internal compressive stresses than a bright tin finish. Studies have shown that avoiding a bright tin finish alone can reduce tin whisker formation by more than a factor of ten. Thicker tin layers will also often reduce compressive stresses.

Since a major source of the compressive stresses in tin is due to copper diffusion into the tin, minimizing this diffusion will significantly reduce tin whisker formation. One proven approach to minimizing copper diffusion is to have a flash of nickel between the copper and the tin. Since nickel does not readily diffuse into the tin after initial intermetallic formation, tin whisker formation can be all but eliminated in many cases.

Adding bismuth to the tin, in small amounts, can also reduce tin whisker formation. The bismuth solid solution strengthens the tin. This strengthening will often reduce tin whisker formation.

Another mitigation approach is the use of coatings. Acrylics, epoxies, urethanes, alkali silicate glasses and parylene C have been used. Parylene C appears to be the most promising.

Often a tin whisker will penetrate the coating as seen in Figure 1. However, to be a reliability risk, it must penetrate a second coating.

Figure 1. A tin whisker about to penetrate a polymer coating. Source: Dr. Chris Hunt, NPL.

This situation is almost impossible, as the tin whisker is fragile and will bend as it tries to penetrate the second layer of coating. See Figure 2. So, coatings can be a very effective tin whisker mitigation approach.

Figure 2. To be a reliability concern, a tin whisker must penetrate two protective coatings.

The next and last tin whisker post will be on using FMEA (failure modes and effects analysis) to develop a tin whisker reduction strategy.

Cheers,

Dr. Ron

Tin Whiskers 101: Part III: Detection

Folks,

One of the great challenges of tin whiskers is detecting them. Considering that their median thickness is in the 3 to 5 micron range (a human hair is about 75 microns), they can be hard to see with direct lighting. Right-angle lighting facilitates visual detection. See Figure 1. In this figure, Panashchenko shows that with direct light (left image) it is impossible to see the tin whisker; however, with right-angle light, the tin whisker jumps out.

Figure 1.* It is not possible to see the tin whisker with direct lighting as in the left image. However, in the right image, right angle lighting makes it easy to see the tin whisker.

In her excellent presentation, “The Art of Metal Whisker Detection: A Practical Guide for Electronics Professionals,” Panashchenko offers these tips for identifying tin whiskers with a stereo optical microscope:

  • Use a 3x to 100x stereo microscope
  • Start with low magnification and work up to high magnification
  • Have the ability to tilt the sample in 3 axes
  • Use a flexible lamp that allows multiple angles of illumination; do not use a ring light
  • Use LED or fiber-optic lighting, not incandescent lights, which can cause shadowing
  • Vary the brightness of the light source

The most important tip is to vary the angle of lighting while varying the magnification. Thus, analyzing a sample should take several minutes, at least. However, even the most thorough inspection may miss some tin whiskers. 

In the next post, I will discuss mitigation techniques.

Cheers,

Dr. Ron

*The image is from Lyudmila Panashchenko, “The Art of Metal Whisker Detection: A Practical Guide for Electronics Professionals,” IPC Tin Whisker Conference, April 2012.

Tin Whiskers 101: What Are They?

Folks,

Tin whiskers are very fine filaments or whiskers of tin that form out of the surface of the tin. See Figure 1. They are the result of stress release in the tin. Tin whiskers are a phenomenon that is surprising when first encountered, as their formation just doesn’t seem intuitive.

Figure 1. Note how thin a tin whisker can be compared to a human hair. The image is from the NASA Tin Whisker Website

They are a concern, as they can cause electrical short circuits, or intermittent short circuits when the whisker burns open like a fusible link. Lead in tin-lead solder greatly suppresses tin whisker growth. Therefore, with the advent of lead-free solders, there is a justifiable concern about decreasing reliability due to tin whisker growth in electronics.

Tin whiskers can vary in length and width, as seen in Figure 2. Only about 10% are as long as 1,000 microns (1 mm), but that length and occurrence rate are enough to cause many reliability concerns.

Figure 2. The length and width of some tin whiskers. The source is also the NASA Tin Whisker Website.

Over the following weeks, I plan to post on how tin whiskers form and on strategies to alleviate them. Most of the information I will post comes from a paper I presented with Annaka Balch at SMTA Pan Pacific 2019.

NASA has an excellent website that provides much information about tin whiskers and documents historic critical failures caused by them.

Cheers,

Dr. Ron

Hypothesis and Confidence Interval Calculations for Cps and Cpks

Folks,

I am reposting an updated blog post on Cp and Cpk calculations with Excel, as I have improved the Excel spreadsheet. If you would like the new spreadsheet, send me an email at rlasky@indium.com.

One of the best metrics for assessing process capability is Cpk. So, I developed an Excel spreadsheet that calculates and compares Cps and Cpks.

———————————————————————————————————-

Folks,

Everyone I know accepts as fact that 2/3 of all SMT defects can be traced back to the stencil printing process. A number of us have tried to find a reference for this posit, without success. If any reader knows of one, please let me know. Assuming this adage is true, the right amount of solder paste, squarely printed on the pad, is a profoundly important metric.

In light of this perspective, some time ago I wrote a post on calculating the confidence interval of the Cpk of the transfer efficiency (TE) in stencil printing. As a reminder, transfer efficiency is the ratio of the volume of the solder paste deposit to the volume of the stencil aperture. See Figure 1. Typically, the goal would be 100%, with the upper and lower specs being 150% and 50%, respectively.

Figure 1. The transfer efficiency in stencil printing is the volume of the solder paste deposit divided by the volume of the stencil aperture. Typically 100% is the goal.

I chose Cpk as the best metric to evaluate stencil printing transfer efficiency, as it incorporates both the average and the standard deviation (i.e., the “spread”). Figure 2 shows the distribution for paste A, which has a good Cpk: its data are centered between the specifications, and the distribution is sharp. Paste B’s distribution, by contrast, is not centered between the specs, and the distribution is broad.

Figure 2. Paste A has the better transfer efficiency as its data are centered between the upper and lower specs, and it has a sharper distribution.

Recently, I decided to develop the math to produce an Excel® spreadsheet that would perform hypothesis tests of Cpks. As far as I know, this has never been done before.

A hypothesis test might look something like the following. The null hypothesis (Ho) would be that the Cpk of the transfer efficiency is 1.00. The alternative hypothesis (H1) could be that the Cpk is not equal to 1.00; H1 could also be that the Cpk is less than or greater than 1.00.

As an example, let’s say that you want the Cpk of the transfer efficiency to be 1.00. You analyze 1000 prints and get a Cpk of 0.98. Is all lost? Not necessarily. Since this was a statistical sampling, you should perform a hypothesis test. See Figure 3. In cell B16, the Cpk = 0.98 is entered; in cell B17, the sample size n = 1000 is entered; and in cell B18, the null hypothesis, Cpk = 1.00, is entered. Cell B21 shows that the null hypothesis cannot be rejected, so we cannot say statistically that the Cpk is not equal to 1.00.

Figure 3. A Cpk = 0.98 is statistically the same as a Cpk of 1.00 as the null hypothesis, Ho, cannot be rejected.

How much different from 1.00 would the Cpk have to be in this 1000 sample example to say that it is statistically not equal to 1.00? Figure 4 shows us that the Cpk would have to be 0.95 (or 1.05) to be statistically different from 1.00.

Figure 4. If the Cpk is only 0.95, the Cpk is statistically different from a Cpk = 1.00.

The spreadsheet will also calculate Cps and Cpks from process data. See Figure 5. The user enters the upper and lower specification limits (USL, LSL) in the blue cells as shown; typically, the USL will be 150% and the LSL 50% for TEs. The average and standard deviation are also entered in the blue cells. The spreadsheet then calculates the Cp, Cpk, number of defects, defects per million, and process sigma level, as seen in the gray cells. By entering the defect level (see the blue cell), the Cpk and process sigma can also be calculated.

Figure 5. Cps and Cpks calculated from process data.
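For readers who would rather script the arithmetic than use the spreadsheet, here is a minimal Python sketch of the same Cp and Cpk calculations (the mean and standard deviation below are illustrative values, not those in Figure 5, and the defect estimate assumes normally distributed data):

    import math

    def norm_cdf(x):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    def cp_cpk(usl, lsl, mean, sd):
        cp = (usl - lsl) / (6 * sd)
        cpk = min(usl - mean, mean - lsl) / (3 * sd)
        # defects per million, assuming normally distributed data
        dpm = 1e6 * (norm_cdf((lsl - mean) / sd) + 1 - norm_cdf((usl - mean) / sd))
        return cp, cpk, dpm

    # transfer efficiency example: specs of 50% and 150%
    cp, cpk, dpm = cp_cpk(usl=150, lsl=50, mean=102, sd=15)
    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}, DPM = {dpm:.0f}")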

The spreadsheet can also calculate 95% confidence intervals on Cpks and compare two Cpks to determine if they are statistically different at greater than 95% confidence. See Figure 6. The Cpks and sample sizes are entered into the blue cells, and the confidence intervals are shown in the gray cells. Note that the statistical comparison of the two Cpks is shown at the right of Figure 6.

Figure 6. Cpk Confidence Intervals and Cpk comparisons can be calculated with the spreadsheet.
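The confidence interval calculation can also be sketched in code. The version below uses a common approximation for the standard error of an estimated Cpk (Bissell’s); I am assuming, rather than asserting, that the spreadsheet uses something similar, but it does reproduce the conclusions above: with n = 1000, a Cpk of 0.98 is statistically indistinguishable from 1.00, while 0.95 is not.

    import math

    def cpk_ci(cpk, n, z=1.96):
        """Approximate 95% confidence interval for an estimated Cpk,
        using Bissell's approximation for the standard error."""
        se = math.sqrt(1 / (9 * n) + cpk ** 2 / (2 * (n - 1)))
        return cpk - z * se, cpk + z * se

    print(cpk_ci(0.98, 1000))  # ≈ (0.932, 1.028): contains 1.00, Ho stands
    print(cpk_ci(0.95, 1000))  # ≈ (0.904, 0.996): excludes 1.00, Ho rejected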

This spreadsheet should be useful to those who are interested in monitoring transfer efficiency Cpks to reduce end-of-line soldering defects. It is not limited to calculating Cps and Cpks of TE, but can be used for any Cps and Cpks. If you would like a copy, send me an email request at rlasky@indium.com.

Cheers,

Dr. Ron

Solution to Moore’s Law

Folks,

In a recent post, I discussed Moore’s Law. I challenged readers to solve for “a” and “b” in the equation a*2^(b*(year-1970)), using the graph in Figure 1.

Figure 1. Moore’s Law: Note that in 1982, ICs had about 100,000 transistors, whereas in 2016, they had about 10^10.

Moore’s Law posits that the number of transistors doubles every two years. If so, “b” should be 0.5. It turns out that “b”, from the solution in Figure 2, is 0.4885, so a doubling occurs about every 1/0.4885 = 2.047 years, which is very close to two years. The solution follows:

Figure 2. The solution for a and b in the Moore’s Law equation.
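As a cross-check on Figure 2, here is a minimal Python sketch that solves for a and b using the two approximate points quoted in the Figure 1 caption (about 10^5 transistors in 1982 and about 10^10 in 2016):

    import math

    # two approximate points read from the graph
    y1, c1 = 1982, 1e5
    y2, c2 = 2016, 1e10

    # count = a * 2^(b * (year - 1970)), so c2/c1 = 2^(b * (y2 - y1))
    b = math.log2(c2 / c1) / (y2 - y1)
    a = c1 / 2 ** (b * (y1 - 1970))

    print(f"b = {b:.4f}, doubling every {1 / b:.3f} years, a = {a:.0f}")
    # b ≈ 0.4885, doubling ≈ 2.047 years, consistent with the solution above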

BTW, congrats to Indium Corporation’s Dr. Huaguang Wang, who got a close solution.

Cheers,

Dr. Ron

Checking Moore’s Law

Folks,

Moore’s Law was developed by Gordon Moore in 1965. It predicted that the number of transistors in integrated circuits would double approximately every two years. Surprisingly, it has held true to this day. Figure 1 shows the transistor counts of selected integrated circuits as a function of time. The red line is a good fit.

Figure 1. A plot of transistor count in selected ICs as a function of the year.

A reasonable equation for the red line is Transistor Count = a*2^(b*(year-1970)). What should “b” be if the count doubles every two years? To the first person who can solve for “a” and “b” using the red line and the equation above, we will send a Dartmouth sweatshirt.

Cheers,

Dr. Ron

Lighthouse Factories

Folks,

I recently read an article about “Lighthouse Factories,” which appear to be implementations of Industry 4.0. It is encouraging that engineers and scientists are working on these complex systems, which implement artificial intelligence (AI), the Internet of Things (IoT), and other modern technologies. According to the “Lighthouse Factories” article above, 54 such factories have now joined the World Economic Forum’s Global Lighthouse Network.

But I have to admit to being somewhat of a skeptic. Are all, or even most, of these factories up and running without a hitch? I have toured 100 or so factories worldwide, and most are at Industry 2.0 to 3.0.

The number of AI and IoT technologies that must be connected and work flawlessly for a Lighthouse Factory to function is daunting. To me, it is like self-driving cars: they are 95% of the way to full self-driving capability today, but the last 5% may not be achieved for decades…if ever.

A recent article in the Washington Post presents a similar perspective. The author, Dalvin Brown, argues that robotics and AI firms have struggled to make anything like a robot butler; their efforts have succeeded only at very focused tasks. Nothing like a robot butler will exist for decades. Steven Pinker’s argument that no AI can empty a dishwasher is still the most powerful way to convey the primitive state of practical, common-sense, robot-type machines.

Figure 1. Dalvin Brown points out in his article that nothing like The Jetsons’ Rosey the Robot exists today. Image source is here.

As I always state, we in electronics assembly should be cheering these folks on, as the emergence of these complex, interdependent technologies, however slow, will require more electronics than predicted.

In addition, I think the hype around Industry 4.0 always neglects the important role that people must play. When we watch something as complex as the landing of a spacecraft on Mars, we always see the control center with scores of people cheering the success. Not all of the important tasks were handled by AIs.

So, if anyone reading this article would like to invite me to a Lighthouse Factory, please do. If I am wrong, I will write a retraction.

Cheers,

Dr. Ron