The post Polymers in Electronic Packaging: Fan-Out Wafer Level Packaging Part Five appeared first on Polymer Innovation Blog.

The last post described some of the new developments in film-based epoxy mold compounds from Hitachi Chemical (EBIS) and Ajinomoto (see image above courtesy of Ajinomoto Fine-Techno). The main driver for panel level processing is to reduce cost. Another opportunity to reduce cost is in the redistribution layer (RDL) process. The main dielectrics used in wafer-based fan-out are photosensitive polyimide (PI) and polybenzoxazole (PBO) positive tone resists. A commonly used positive tone photosensitive PBO is HD8940 (HD MicroSystems, a Hitachi-DuPont joint venture). A continuing challenge has been to lower the cure temperature of the PBO and polyimide dielectrics so that they are less harsh on the epoxy mold compounds.

High temperature processing of the photosensitive dielectrics will be an even bigger challenge in panel level processing. The panel process will initially use a printed circuit board type process flow which entails lower temperature processes. With this in mind, Dr. John Lau at ASM Pacific has been leading a consortium investigating panel level processing. The consortium includes ASM, Dow, Huawei, Indium, JCET and Unimicron. The team recently published an interesting paper describing the printed circuit board approach to panel level fanout [1]. The paper demonstrates several concepts that will be important for panel level processing:

- Dry film epoxy mold compound
- Build up films used as dielectrics for redistribution layers (RDL1 and RDL2)
- Heterogeneous integration of 4 chips into the fanout package
- Demonstration of a new assembly process called uni-substrate-integrated-package (Uni-SIP)
- Printed circuit board processes such as electroless copper seed layers, laser direct write imaging and copper plating

The Uni-SIP process used Ajinomoto Build Up Films (ABF) for the redistribution layers. The cross-section of the package is shown in Figure 1.

*Figure 1. Cross-section of a package built using the Uni-SIP process (image from reference [1])*

In this case, a liquid epoxy mold compound from Nagase (R4507) was used in a modified compression molding process. The EMC had high filler loading (85%) and a Tg of 150°C (DMA). The CTE below Tg was 10 ppm/K. The redistribution layers were fabricated using Ajinomoto Build Up Films.

Figure 2 shows the attributes of the ABF type of RDL (courtesy of Ajinomoto Fine-Techno). The ABF films are negative tone materials; that is, they crosslink under UV exposure and the exposed area is rendered insoluble. Recall that positive tone PBO and PI RDL dielectrics become soluble under UV exposure, the opposite of a negative tone resist.

*Figure 2. Characteristics of Ajinomoto Build Up Films used for redistribution layers in a panel level process (courtesy of Ajinomoto Fine-Techno).*

In Figure 2, the optical photograph shows the ABF approach is capable of 2 µm lines and spaces and 5 µm diameter UV laser-drilled vias. The ABF dielectric layers are vacuum laminated onto the panel using printed circuit board methods.

The properties of the ABF layers are shown in the following table from reference [1].

The Ajinomoto build-up film RDL layers are highly filled (80-82% loading), resulting in a low CTE in the range of 7-8 ppm/K. Note that this is very close to the CTE of the Nagase R4507 epoxy mold compound. The dielectric properties also matter in the RDL: signal speed is governed by the dielectric constant (Dk) of the RDL layer, while the dissipation factor (Df) determines signal loss.
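The effect of Dk on signal speed can be made concrete with a rough estimate: for a trace fully embedded in a homogeneous dielectric, the propagation delay per unit length is √Dk/c. A quick sketch with illustrative Dk values (assumptions for the example, not vendor data):

```python
# Back-of-envelope only: propagation delay in a dielectric scales with
# sqrt(Dk); Df governs loss, not speed. Dk values below are illustrative.
C_MM_PER_PS = 0.299792458  # speed of light in mm/ps

def delay_ps_per_mm(dk: float) -> float:
    """Propagation delay (ps/mm) for a trace fully embedded in a
    dielectric of relative permittivity dk: t = sqrt(dk) / c."""
    return dk ** 0.5 / C_MM_PER_PS

for dk in (3.0, 3.5, 4.0):
    print(f"Dk = {dk}: {delay_ps_per_mm(dk):.2f} ps/mm")
```

A lower-Dk build-up film therefore shortens the flight time per millimeter of RDL routing, independent of the loss benefit from a low Df.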

A future post will cover the recent advances in build up films from both Ajinomoto and Sekisui.

References:

1. John Lau et al., "Chip-First Fan-Out Panel-Level Packaging for Heterogeneous Integration," IEEE Transactions on Components, Packaging and Manufacturing Technology, Vol. 8, No. 9, September 2018, p. 1561

The post Happy Thanksgiving appeared first on Polymer Innovation Blog.

The post Polymers in Electronic Packaging: Fan-Out Wafer Level Packaging Part Four appeared first on Polymer Innovation Blog.

*Figure 1. Manufacturing landscape for panel-level Fan-Out Wafer Level Packaging (source: Yole Développement)*

As discussed in the last post, the current Fan-Out Wafer Level Packaging format uses a 300 mm (and some 200 mm) round wafer. The existing equipment set was easily adapted for the round format with the exception of the epoxy molding process. Transitioning to a panel format will also require some significant new tooling development. Currently, there is no standard format for panels in FOWLP. In printed circuit board and semiconductor substrate manufacturing there are established panel sizes, typically 18” x 24” (approximately 450 mm x 600 mm).

Advanced Semiconductor Engineering (ASE) has partnered with DECA Technologies to use their Adaptive Patterning and Adaptive Routing technologies in the panel process. ASE has also settled on a 600 x 600 mm panel. The rationale is that the larger panel can be segmented into four 300 x 300 mm quadrants as shown in Figure 2.

*Figure 2. ASE panel approach using a 600 x 600 mm square panel.*

Additionally, there are several advantages of the 600 x 600 format:

- The current 300 mm round re-constituted wafer tooling would be adaptable for processing one quadrant at a time (see the lower left quadrant in Figure 2).
- Processing one quadrant at a time (more specifically the photoimaging process) would allow for multiple designs to be made per panel (i.e. one design in one or more quadrants)
- Panel level processing will ultimately result in lower cost per unit

What will be some of the major technical challenges? At a high level, thin panel handling will need to be addressed, since currently the round re-constituted wafers are processed on a rigid carrier. Secondly, the epoxy molding compound (EMC) encapsulation process will need to be developed for a panel format. Liquid EMC will be difficult to process in the panel format. In response to the need for a panel level EMC, several EMC suppliers have developed EMC in sheet form. As shown in Figure 3, Hitachi Chemical has developed an embedded insulation resin for chip sealing called EBIS.

*Figure 3. Film-based epoxy mold compound for FOWLP (source: Hitachi Chemical)*

In this form, the highly filled epoxy base resin is coated onto a carrier in thicknesses ranging from 50 to 250 µm. The EMC film is used in a vacuum lamination process to encapsulate the chips on the re-constituted wafer or panel. Ajinomoto Fine-Techno has also developed two types of epoxy mold compounds: a liquid EMC (the MI Series) for wafer-level fan-out packages and a film EMC (the LE Series) for the panel-level fan-out process.

*Figure 4. Two types of epoxy molding compounds for fan-out packaging (source: Ajinomoto Fine-Techno LTD)*

From the table in Figure 4, the glass transition temperature (Tg) is in the range typical of cured EMC. One property to note is that for the FOWLP process, the coefficient of thermal expansion (CTE) needs to be very low and is usually matched to the CTE of the carrier. Note that the CTE reported in Figure 4 is identified as the CTE below the glass transition temperature. I also like the details provided, which indicate the Tg was measured using dynamic mechanical analysis (DMA) and the CTE was reported over the temperature range of 30-150°C. The major driver for lowering the CTE and modulus is to reduce warpage after EMC processing.
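The warpage connection can be made concrete with a back-of-envelope mismatch-strain estimate between the molded EMC and the silicon die. The silicon CTE (~2.6 ppm/K) and the 150°C-to-25°C cooldown used below are assumptions for illustration, not values from Figure 4:

```python
# Illustrative thermal-mismatch strain between molded EMC and silicon die
# on cooling to room temperature. Silicon CTE (~2.6 ppm/K) and the
# temperature range are assumptions for this sketch.
def mismatch_strain(cte_a_ppm: float, cte_b_ppm: float, dT: float) -> float:
    """Free thermal strain difference (dimensionless) for a temperature drop dT (K)."""
    return abs(cte_a_ppm - cte_b_ppm) * 1e-6 * dT

dT = 150.0 - 25.0  # assumed cooldown, K
for cte_emc in (10.0, 20.0, 30.0):
    eps = mismatch_strain(cte_emc, 2.6, dT)
    print(f"EMC CTE {cte_emc} ppm/K -> mismatch strain {eps * 100:.3f} %")
```

Halving the EMC-to-silicon CTE gap roughly halves the locked-in strain that drives bow, which is why suppliers push filler loadings high enough to reach ~10 ppm/K.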

There has not been a lot of discussion at the recent packaging meetings about the process that may be used for film EMC. ASE has extensive experience in the manufacturing of complex semiconductor substrates, so a good assumption would be that the EMC would be vacuum laminated onto the reconstituted panel. Precision vacuum lamination processes have been developed for laminating build-up films (Ajinomoto and Sekisui) onto high density interconnect (HDI) printed circuit boards.

The next post will cover more on the materials and processes required for panel level fan-out packages.

The post Remember Our Fallen Heroes on Veterans Day appeared first on Polymer Innovation Blog.

The post Polymers in Electronic Packaging: Fan-Out Wafer Level Packaging Part Three appeared first on Polymer Innovation Blog.

Let’s start with a quick update on the current status of wafer level packaging (WLP). Wafer level packaging has transitioned from “PowerPoint engineering” to volume production. The key event in the explosion of FOWLP was the announcement that Taiwan Semiconductor Manufacturing Corporation (TSMC) was going to package the Apple A10 processor using TSMC’s InFO (**In**tegrated **F**an-**O**ut) process. The large impact of the TSMC entry is shown in Figure 1 (source: Yole Développement).

*Figure 1. FOWLP revenues as a function of time (years)*

Note in Figure 1 that the Apple share currently dominates the FOWLP production volumes from 2016 out to 2020. While there is healthy growth for the rest of the FOWLP packagers (blue portion in Figure 1), the volumes are much smaller. It should be noted this data is a bit dated, but I use it to demonstrate the market trends.

To review, the original FOWLP was developed to leverage the wafer fab equipment and processes. The first FOWLP was done on circular 200 mm wafers, but quickly transitioned to 300 mm wafers to use the existing equipment set for that size wafer. To date, most of the materials have been developed for 300 mm wafer processes (mold compounds, redistribution layers, temporary bonding adhesives). A high level overview of the FOWLP process flow for 300 mm round wafers is shown in Figure 2.

*Figure 2. Fan-Out Wafer Level Process for 300 mm wafers (Source: ASE Group)*

In anticipation of future volume ramps, many packagers started thinking about how to reduce cost. For high-end chips (like the Apple A10 or A11) the wafer format will yield the best results in terms of yield and technology (smaller lines and spaces, more density, etc). For low-end FOWLP such as that used in mobile phones, cost will be an important driver. In order to reduce cost, several packagers in the industry have started to evaluate making the FOWLP on a panel format. The panel format is commonly used in the printed circuit board (PCB) and semiconductor substrate manufacturing process. The typical panel size in the PCB industry is 18 x 24 inches.

Chet Palesko from SavanSys presented a paper at the 2014 IWLPC conference with data on the cost structure of panels versus wafers. His data indicated the following:

- A 300 mm wafer yields **616** 10 mm x 10 mm packages
- A 400 mm x 500 mm panel yields **1,911** 10 mm x 10 mm packages

To graphically illustrate the panel versus wafer production scale, AT&S (a PCB producer, and thus using standard 18 x 24 inch panels) showed the comparison in Figure 3.

*Figure 3. Panel versus wafer for FOWLP (Source: AT&S)*

In Figure 3, one 18 x 24 inch panel is equivalent to 3.8 reconstituted 300 mm wafers, which is similar to the 3.1 ratio SavanSys presented for the slightly smaller 400 x 500 mm panel. At the most recent electronic packaging conferences, there were panel discussions on “panels.” While most of the details are closely held, there is consensus that at some point in the future (likely around 2020) panel formats will be widely available from several packagers. On the other hand, TSMC will likely continue to use the wafer format for the InFO process since it is integrated into the back end of the wafer fab process.
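The ~3.8x figure is roughly the gross-area ratio of panel to wafer, as a quick check shows (edge exclusion and panelization losses, which pull the usable ratio below the gross figure, are ignored here):

```python
import math

# Rough gross-area comparison of an 18 x 24 inch panel with a 300 mm wafer.
# Edge exclusions and panelization losses are ignored, so the real usable
# ratio (the ~3.8x quoted by AT&S) is somewhat lower than this figure.
MM_PER_INCH = 25.4
panel_mm2 = (18 * MM_PER_INCH) * (24 * MM_PER_INCH)   # 457.2 x 609.6 mm
wafer_mm2 = math.pi * (300 / 2) ** 2
print(f"gross area ratio: {panel_mm2 / wafer_mm2:.2f}")  # ~3.94
```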

The next few posts will cover additional recent developments in the area of FOWLP.

References:

- https://polymerinnovationblog.com/polymers-in-electronic-packaging-introduction-to-fan-out-wafer-level-packaging/
- https://polymerinnovationblog.com/polymers-in-electronic-packaging-fan-out-wafer-level-packaging-part-two/

The post Thermoset Cure Kinetics Part 14: Analysis of Autocatalytic Systems from Conversion and Rate of Conversion Data appeared first on Polymer Innovation Blog.

In this post we illustrate the **p1** form described in Part 11 of this series by fitting conversion – rate of conversion data to a phenomenological equation, based on chemical kinetics, that is commonly used to describe autocatalytic cure kinetics. Such data can be measured by DSC, although in these examples we generate the data mathematically as outlined previously.

Analyses utilizing this equation, discussed in detail in Part 10 of this series, will primarily benefit users who purchase thermosetting materials but do not have access to their chemical makeup.

The purpose here is to validate our computational schemes and to illustrate the utility of mathematically analyzing thermoset cure data. Our approach is to generate perfect data from Eq. 1 with a selected set of experimental parameters, add noise to simulate actual laboratory data, fit these noisy data to Eq. 1 and compare the computed results with the selected input parameters. A detailed description of the computational process is given in the Appendix. Three types of input data are examined: one with perfect data (ie no noise added), one set of data with a moderate level of noise, and one very noisy data set. Noise is added as described in Part 11 of this series.
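The generate-then-fit scheme can be sketched in a few lines. Eq. 1 itself is not reproduced in this excerpt, so the sketch assumes the common Kamal-type form dα/dt = (k_{1} + k_{2}α^{m})(1-α)^{n}, which is consistent with the four parameters (k_{1}, k_{2}, m, n) discussed in the text; the noise model follows the Appendix.

```python
import numpy as np

# Sketch of the data-generation step, assuming the Kamal-type form
# dalpha/dt = (k1 + k2 * alpha**m) * (1 - alpha)**n for Eq. 1 (the
# equation itself is not reproduced in this excerpt).
def rate(alpha, k1, k2, m, n):
    return (k1 + k2 * alpha**m) * (1.0 - alpha)**n

k1, k2, m, n = 1e-4, 1e-2, 1.0, 2.0   # k2 = 100 * k1, as in the selected parameters
alpha = np.linspace(0.0, 1.0, 30)     # 30 equally spaced conversions
perfect = rate(alpha, k1, k2, m, n)   # "perfect" rate data

rng = np.random.default_rng(0)        # seeded so the example is reproducible
sigma = 0.04                          # moderate (4%) noise level
noisy = perfect + rng.normal(0.0, sigma * perfect.mean(), alpha.size)
print(f"mean rate: {perfect.mean():.1e}")  # on the order of the ~9e-4 quoted in the Appendix
```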

The selected parameters are shown in the second column of Table 1. This is a simple system where m and n values are those expected from epoxy – amine chemistry and k_{2} is 100 times larger than k_{1}. This is identical to the *Small* catalyst level example in Part 12 of this series and describes a system that is mostly autocatalytic in nature with about a 1% influence of external catalysis.

Rate of conversion (dα/dt) data is tabulated in Table 2 and plotted as *Perfect data*, *Noisy data* and *Computed data*. While we do not show units for rate of conversion, note that they will be the same as the rate constants. For example, if k_{1} and k_{2} have units of sec^{-1} then these dα/dt values will be in sec^{-1}. *Perfect* data is dα/dt computed from Eq. 1 with the selected parameters, *Noisy* data is *Perfect* data plus noise, and *Computed* data is generated by fitting the *Noisy* data to Eq. 1 and plugging those parameters back into Eq. 1. Table 1 contains the kinetic parameters affiliated with the various stages of computation. The last three columns are discussed below. The sum of the squares, described in the Appendix, is a measure of the goodness of the fit to the simulated cure data, ie the noisy data.

**Case 1, No Noise**

The first column of Table 2 contains conversion values between 0 and 1 in equal increments determined by the number of data points, 30 in these examples. In the second column are *Perfect Data *computed from Eq. 1 without any added noise. In this case the *Noisy *data are identical to the *Perfect *data as expected, since no noise was added. The *Computed *results were also found to be identical to the *Perfect *data, and the computed parameters in the *No Noise* column of Table 1 were identical to the actual or input values even though the starting point values for k_{1} and k_{2} were each set to zero. These results combined with the perfect overlap seen in Fig. 1 and the exceptionally low value for the sum of the squares (essentially zero) all lend confidence to our computational schemes.

**Table 1. Kinetic Parameters**

**Table 2. Conversion and Rate of Conversion Data for Three Noise Levels. Data in the Rightmost Five Columns are Rate of Conversion dα/dt**

**Case 2, Moderate Noise**

The third column of Table 2 contains the moderately noisy data for the rate of conversion. Data such as these might be expected from careful experimentation with a well-maintained modern DSC. These data were fit to Eq. 1 as described earlier, giving the *Moderate Noise* results shown in Tables 1 and 2. Values for m, n and k_{2} are within a few percent of the input values. While k_{1} shows the most variation, it has only a minor effect on the overall results due to its small value vis-à-vis k_{2}. The 4^{th} column of Table 2 contains rate data computed from Eq. 1 with the *Moderate Noise* parameters. While the data in Fig. 2 are not superimposed right on top of each other as in Fig. 1, the three sets of data can be seen to correspond remarkably well, especially the *Computed data* with the *Perfect data*. The sum of squares value of 3.8 x 10^{-8} compares favorably with the 1-sigma estimate of 3.9 x 10^{-8} in the Appendix, confirming a good fit of the computed data to the noisy simulated cure data.

**Case 3, Very Noisy**

The fifth column of Table 2 contains the *Very Noisy *data for the rate of conversion. Data such as these might be expected from less careful experimentation or with a not well-maintained or older and less sensitive DSC. These data were fit to Eq. 1 giving the *Very Noisy* results shown in Tables 1 and 2. The sixth column contains rate data computed from Eq. 1 with these *Very Noisy* parameters. From Table 2 and Fig. 3 these results can be seen to be much closer to the *Perfect Data* than to the *Very Noisy *data. Thus even though the simulated data are quite noisy the fit to Eq. 1 is nonetheless quite good as can be seen by the virtual overlap of the *Computed* data with the *Perfect *data in Fig. 3. The sum of squares value of 1.4 x 10^{-7} vis-a-vis the 1-sigma estimate of 1.55 x 10^{-7} in the Appendix supports this conclusion.

To summarize, even with noisy cure data we can expect a good fit to the underlying relationships governing the cure reaction. With its unique ability to independently measure both conversion and rate of conversion DSC is ideally suited to characterize cure processes. In the real world of actual cure data we can anticipate the ability to model cure reactions with good predictability. The k_{2} values for both the *Moderate* and *Very Noisy* cases are sufficiently reproducible to be used in an Arrhenius plot of *ln *k_{2} vs. T^{-1} to estimate the activation energy from data taken at multiple temperatures.
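The Arrhenius step mentioned above can be sketched as follows. The k_{2} values here are synthetic, generated from an assumed activation energy of 60 kJ/mol purely to demonstrate the mechanics of the ln k_{2} vs. T^{-1} fit; they are not data from this series.

```python
import numpy as np

# Arrhenius estimate of activation energy from k2 at several temperatures.
# The k2 values below are hypothetical, generated for Ea = 60 kJ/mol, to
# show the fitting mechanics; they are not measured data from the post.
R = 8.314                                          # J/(mol K)
T = np.array([330.0, 340.0, 350.0, 360.0])         # K
Ea_true, A = 60_000.0, 3.0e6                       # assumed for the demo
k2 = A * np.exp(-Ea_true / (R * T))

# ln k2 = ln A - (Ea/R) * (1/T), so the slope of ln k2 vs 1/T gives -Ea/R
slope, intercept = np.polyfit(1.0 / T, np.log(k2), 1)
Ea_fit = -slope * R
print(f"Ea = {Ea_fit / 1000:.1f} kJ/mol")          # recovers the assumed 60 kJ/mol
```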

This concludes our current series on Thermoset Cure Kinetics. Future posts will focus on the kinetic analysis of conversion – time data, which can be generated by a number of techniques including DSC and FTIR and Raman spectroscopies, and the incorporation of temperature as a variable.

**Appendix: Fitting Data**

In Equation 1 we start with a set of α values. In our experiments, we picked a uniformly spaced sample of conversions between 0 and 1. In addition we chose values for the 4 parameters k_{1}, k_{2}, m and n that are used to generate the “perfect” dα/dt and the noisy dα/dt as described above. We then let the 4 parameters be unknowns and try to recover those values using a numerical least squares process. This process consists of starting with an approximation to the “unknown” parameters and iteratively modifying them to minimize the sum of the squares of the differences between the noisy dα/dt values and the dα/dt values obtained from Eq. 1 with the estimated parameters. There are libraries of excellent software routines for such problems and we used the method in reference [1].

As part of the computation we display the sum of the squares of the differences between the noisy dα/dt and the computed dα/dt in Table 1. The noise value used for each data point is of the form:

noise = r x (dα/dt)_{mean}

where r is a random variable with standard deviation σ representing the noise level (e.g. 4% or 8%) and (dα/dt)_{mean} is the average value of dα/dt. We would expect the sum of squares to be approximately n(σ x (dα/dt)_{mean})^{2}, where n is the number of data points. If we replace r by σ in the equation, we obtain a 1-sigma estimate for the sum of squares. If we have minimized the sum of squares, we would expect its value to be no larger than this 1-sigma estimate. On the other hand, if the sum of squares exceeds the 1-sigma estimate by a significant amount, then we probably have not minimized the sum of squares. In the cases shown in Table 2, the mean value for dα/dt is approximately 9 x 10^{-4} and there are 30 data points. This yields a 1-sigma estimate for the sum of squares of 30(σ x 9 x 10^{-4})^{2}, which is 3.9 x 10^{-8} for σ = 0.04 and 1.55 x 10^{-7} for σ = 0.08. Such values indicate a high degree of confidence in the computed parameters.
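A minimal sketch of this fitting loop, with two substitutions for illustration: an assumed Kamal-type form for Eq. 1 (the equation is not reproduced in this excerpt), and SciPy’s `least_squares` in place of the Apache Commons Math routine the authors used.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of the Appendix procedure. Assumptions: the Kamal-type form below
# stands in for Eq. 1, and SciPy replaces the Apache Commons Math solver.
def rate(alpha, k1, k2, m, n):
    return (k1 + k2 * alpha**m) * (1.0 - alpha)**n

alpha = np.linspace(0.0, 1.0, 30)
perfect = rate(alpha, 1e-4, 1e-2, 1.0, 2.0)        # selected "true" parameters

rng = np.random.default_rng(1)
sigma = 0.04                                       # 4% "moderate" noise
noisy = perfect + rng.normal(0.0, sigma * perfect.mean(), alpha.size)

# Iteratively adjust (k1, k2, m, n) to minimize the sum of squared residuals
fit = least_squares(lambda p: rate(alpha, *p) - noisy,
                    x0=[1e-5, 5e-3, 1.5, 1.5],
                    bounds=([0.0, 0.0, 0.1, 0.1], [1.0, 1.0, 5.0, 5.0]))
ss = float(np.sum(fit.fun ** 2))
one_sigma = alpha.size * (sigma * perfect.mean()) ** 2   # n*(sigma*mean)^2
print(f"k2 = {fit.x[1]:.4f}, sum of squares = {ss:.2e}, "
      f"1-sigma estimate = {one_sigma:.2e}")
```

A residual sum of squares at or below the 1-sigma estimate, as in the post’s Tables, indicates the minimization has converged on a good fit.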

**References**

[1] Commons Math: The Apache Commons Mathematics Library, *https://commons.apache.org/proper/commons-math/*

The post Thermoset Cure Kinetics Part 13: Modelling the Effect of Stoichiometry on Autocatalytic Cure Kinetics appeared first on Polymer Innovation Blog.

This post explores how the epoxy-amine stoichiometry can affect the cure path of autocatalytic cure reactions, providing additional insight into the behavior of epoxy-amine thermosets. We illustrate some interesting benefits as well as precautions of altering the stoichiometry. For a variety of reasons it is not uncommon for systems to be either amine-rich or epoxy-rich. For example, an excess of amine will help ensure that there is no unreacted epoxy that can react with itself at elevated temperatures.

We start with Eq. 2 in Part 10 of this series, an equation based on the chemistry of the amine-epoxy cure reaction, and generate conversion-time and rate of conversion-time data with selected parameter values.

In all examples in this post we keep k_{1} constant at a value of 0.0001 sec^{-1} and k_{2} constant at a value of 0.01 sec^{-1}, the values yielding the *Small* catalyst level curves in the previous post, and examine different stoichiometric levels, both excess amine and excess epoxide, vis-à-vis 1:1 or balanced stoichiometry. Utilizing the methodologies described in Part 11 of this series we generated α_{Ep} and dα_{Ep}/dt vs. time data from Eq. 1, which are plotted in the figures below. α_{Am} and dα_{Am}/dt were computed from the relationship α_{Am} = α_{Ep}/B, where B is the ratio of amine to epoxide equivalents. Ep and Am stand for epoxide and amine, respectively.
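Since Eq. 1 is not shown in this excerpt, the sketch below uses a reconstructed stoichiometric form, dα_{Ep}/dt = (k_{1} + k_{2}α_{Ep})(1-α_{Ep})(B-α_{Ep}), chosen to be consistent with facts quoted later in the post (dα/dt = Bk_{1} at t = 0, and incomplete epoxide conversion when B < 1); treat it as an assumption rather than the authors’ exact equation.

```python
import numpy as np

# Reconstructed stoichiometric form (an assumption; the post's Eq. 1 is not
# shown in this excerpt): dalpha_Ep/dt = (k1 + k2*a)*(1 - a)*(B - a), with
# B the amine/epoxide equivalent ratio and alpha_Am = alpha_Ep / B. This
# gives dalpha/dt = B*k1 at t = 0 and reduces to the balanced case at B = 1.
def simulate(B, k1=1e-4, k2=1e-2, dt=0.1, t_end=3600.0):
    a, t, rows = 0.0, 0.0, []
    while t <= t_end:
        rows.append((t, a, a / B))  # time (s), alpha_Ep, alpha_Am
        a += dt * (k1 + k2 * a) * (1.0 - a) * (B - a)  # explicit Euler step
        t += dt
    return np.array(rows)

balanced, amine_rich = simulate(B=1.0), simulate(B=1.25)
ep_bal = np.interp(1800.0, balanced[:, 0], balanced[:, 1])
ep_rich = np.interp(1800.0, amine_rich[:, 0], amine_rich[:, 1])
print(f"alpha_Ep at 30 min: B=1.0 -> {ep_bal:.3f}, B=1.25 -> {ep_rich:.3f}")
```

With B = 1.25 the extra (B - α) driving force pushes the epoxide conversion ahead of the balanced case at every time point, while the amine conversion plateaus at 1/B.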

Figure 1 illustrates the expected behavior for an amine-rich system. The epoxide conversion is complete, ie α_{Ep} = 1, at about 30 minutes. At the same time the amine conversion reaches its ultimate value of α_{Am} = 1/B, which is less than 1.

Figures 2 and 3 illustrate how increasing the amine concentration in amine-rich systems accelerates the cure reaction and helps drive the epoxide reaction to completion in much shorter times compared with a balanced 1:1 stoichiometry. While the benefits in accelerating the cure process seem obvious other properties will change in ways that may or may not be beneficial and need to be considered. For example, T_{g} and modulus will be lower and the surface chemistry and reactivity will be altered. Both the time to gel and the conversion at the gel point will also be affected [1]. Notice in Fig. 3 how the maximum reaction rate, and as a consequence the maximum rate of heat release, appears to increase as the stoichiometry becomes less balanced. Because there is less material to react the size of the exotherm in Joules/gram will be less but it may be more intense. Also notice that the autocatalytic nature of the reaction is not altered.

We pointed out in the last post that Eq. 3 below leads to dα/dt = k_{1} at t = 0 where α = 0. But it can be seen from Eq. 1 that dα/dt is actually equal to Bk_{1}, as can be observed by close inspection of Fig. 3, and therefore only equal to k_{1} when B = 1.

Figures 4 and 5 are interesting to compare with each other. Figure 4 projects the epoxide conversion vs. time and Fig. 5 the amine conversion vs. time at 1:1 stoichiometry, 25% excess amine and 25% excess epoxide. The effects on epoxide conversion are more dramatic and straightforward. Note in Figs. 4 and 5 that conversion of the epoxide is only complete (α_{Ep} = 1) for B = 1.25 (after 25-30 minutes) and the conversion of amine (α_{Am} = 1.0) for B = 0.8 (after 35-40 minutes). As can be observed, the order of time to complete or ultimate conversion is excess amine first, excess epoxide second, and 1:1 stoichiometry far behind in third place. The effect on amine conversion is more complex but the order of the time to complete conversion remains the same. We ascribe these differences to the epoxide being the source of the hydroxyl. While interesting, we doubt that they are significant.

In summary, there are potential benefits from altering the stoichiometry of the amine-epoxy reaction. It is not uncommon for systems to be amine-rich, with for example a 30% excess of amine. This will help ensure that there is no unreacted epoxy that can react with itself at elevated temperatures leading to unwanted increases in T_{g}, modulus and other physical properties. From Fig. 2 we see that even a small excess of amine will greatly reduce the amount of time needed to reach complete conversion. And Fig. 3 suggests that the intensity of the exotherm may increase as the excess of amine increases. In amine-rich systems the reaction of epoxide will be complete, ie its conversion will be equal to 1. However the reaction of amine will be incomplete, and its ultimate conversion will be less than 1. An excess of epoxy could provide a beneficial increase in T_{g} and related properties in a postcure process if that were desired. A surface rich in either unreacted amine or epoxy could also be useful when building layers, eg in pseudo-isotropic fiber reinforced composites or 3D printed parts. Both the conversion at the gel point and the time to gel will be affected by either excess amine or excess epoxy and need to be considered when formulating epoxy-amine systems [1].

In our next post we shift gears and look at the fitting of cure data to the phenomenological equation below (Eq. 3 in Part 11 of this series).

Reference

1. Polymer Innovation Blog *Thermoset Characterization Part 5: Calculation of Gel Point* (posted May 12, 2014)

The post Thermoset Cure Kinetics Part 12: Modelling the Effect of Adding Catalyst on Autocatalytic Kinetics appeared first on Polymer Innovation Blog.

In this post we explore how adding an external catalyst or accelerator to increase the reaction rate can affect the cure path of autocatalytic cure reactions, providing additional insight into the behavior of epoxy-amine thermosets as well as commenting on some of the pros and cons of doing so. We show how added catalyst can explain the difference between a 60-minute epoxy and a 5-minute epoxy and illustrate how adding catalyst differs from increasing temperature to accelerate the cure process. A good reference on this topic is *Acceleration of Amine-Cured Epoxy Resins* by B. L. Burton, Huntsman Corp.

We start with the chemically based Eq. 2 from the previous post where the stoichiometry is balanced, ie B = 1:

dα/dt = (k_{1} + k_{2} α)(1-α)^{2} (1)

Recall again that the term (1-α) represents the concentration of reactants, eg the epoxide and the amine in an epoxy-amine system, and that α represents the concentration of catalyst generated in the reaction, eg hydroxyl functionality in the case of epoxide reacting with amine. k_{2} is the rate constant for the autocatalytic reaction and k_{1} the rate constant for the externally catalyzed reaction, which can be accomplished by impurities in the reactants, by the surfaces of fillers or by added catalyst.

In this exercise we keep k_{2} constant at a value of 0.01 sec^{-1} and incrementally increase the value of k_{1} to simulate the effect of adding catalyst. We chose the four values shown in the table below and generated α and dα/dt vs. time data utilizing the methodologies described in the previous post. Note that a very small value of 0.00001 sec^{-1} was assigned to k_{1} for the *No added catalyst* case to simulate the unavoidable presence of impurities in the reactants, including absorbed moisture.

The table below summarizes the cases evaluated while Fig. 1 shows projected results for the *Small *catalyst level case. Note that, as predicted by Eq. 1, dα/dt = k_{1} at t = 0. This is demonstrated more clearly in Fig. 3 and suggests the ability of conversion rate measured by isothermal DSC to estimate k_{1} and shed light on the catalyst level. We say estimate, because the conversion rate at t = 0 is obtained by extrapolation and, as shown in the next post, the full relationship is dα/dt = Bk_{1} at t = 0 which only equals k_{1} when B = 1. Also note that k_{1} encompasses an underlying rate constant, the concentration of external or added catalyst, and the strengths of the catalysts.

Several effects can be noted in the conversion-time curves in Fig. 2 below, both in the speed of the reaction and in the shape of the curves. As expected, the addition of catalyst accelerates the reaction. But also notice how the shapes of the curves change from typical autocatalytic behavior with obvious inflection for the *No added catalyst* case to progressively less autocatalytic with increasing k_{1} and eventually to a curve that more closely resembles *n*th order characteristics with no observable inflection. With a typical autocatalytic epoxy the early reaction is slow due to lack of catalyst and only picks up after sufficient catalyst is generated, having the effect of extending the work life. Adding catalyst not only accelerates the overall reaction but diminishes and eventually eliminates this extension of the work life. This amplifies the effect on gelation, resulting in a dramatic shortening of the work life vis-à-vis a smaller effect on the overall cure (eg time to 95% conversion). As shown in *Thermoset Characterization Part 5: Calculation of Gel Point* (posted May 12, 2014) the conversion at the gel point is α_{gel} = 0.58 when B = 1, which is indicated in Figure 2.
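The pull-in of the gel point can be quantified with a quick numerical integration of Eq. 1 at the gel conversion α_{gel} = 0.58. The k_{1} levels below are the *No added catalyst*, *Small* and *Intermediate* values quoted in this and the neighboring posts (the *Very fast* value is not given in this excerpt, so only three levels are shown):

```python
# Effect of added catalyst on time to gel, using Eq. 1 with B = 1 and the
# gel conversion alpha_gel = 0.58 cited in the text. The k1 values are the
# "no added", "small" and "intermediate" levels quoted in the posts;
# simple explicit Euler integration.
def time_to_gel(k1, k2=1e-2, alpha_gel=0.58, dt=0.1):
    a, t = 0.0, 0.0
    while a < alpha_gel:
        a += dt * (k1 + k2 * a) * (1.0 - a) ** 2  # Eq. 1: dalpha/dt
        t += dt
    return t

for label, k1 in [("no added catalyst", 1e-5),
                  ("small", 1e-4),
                  ("intermediate", 7.5e-4)]:
    print(f"{label:>18} (k1 = {k1:g} 1/s): gel at ~{time_to_gel(k1) / 60:.1f} min")
```

The gel time shortens monotonically as k_{1} grows, which is the work-life reduction described above.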

Figure 2 should be compared and contrasted with Fig. 1 in the previous post, reproduced as Fig. 3 here. The material and process descriptions of this epoxy mold compound (EMC) were given in the previous post. Like the addition of catalyst, increasing the temperature clearly accelerates the reaction. But unlike the addition of catalyst, the shapes of the curves, all close to the *Intermediate* catalyst level curve in Fig. 2 (k_{1} = 0.00075 sec^{-1}), remain the same, indicating that increasing temperature only speeds up the reaction but in general doesn’t change it. As a consequence these curves could be superimposed to form a master cure curve (see *Thermoset Cure Kinetics Part 5: Time-Temperature Superposition Kinetics*, posted November 24, 2014).

*Figure 3. Degree of cure (conversion) as a function of time for various mold temperatures for an epoxy mold compound (source: Hitachi Chemical).*

The rate of conversion-time curves in Fig. 4 also show the reaction shifting to shorter times as the reaction accelerates with added catalyst and losing its autocatalytic character. In the *Very fast* reaction there is just a hint of the characteristic autocatalytic peak in the curve. Additional catalyst will result in the maximum rate occurring at t = 0, which is characteristic of entirely *n*th order reactions.

It is interesting to compare the conversion-time curves in Fig. 2 with the T_{g}-time curves from *MTDSC of Thermosets Part 5: Five-Minute Epoxy, Continued* (posted August 10, 2015) and reproduced in Fig. 5 below. The 60-minute epoxy exhibits some autocatalytic behavior similar to the *Moderate* curve in Fig. 2 while the two five-minute epoxies show *n*th order behavior similar to the *Very Fast* curve.

T_{g} begins to increase almost immediately for the two 5-minute epoxies, which then separate as the faster Devcon® epoxy approaches its full-cure T_{g} of 35 to 40°C and the Gorilla® epoxy lags slightly behind. Recall that the 5- and 60-minute labels refer to the work life and approximate gel times. Notice that the T_{g} achieved in 60 minutes at 25°C for the Loctite epoxy is about the same as the T_{g}s achieved for the Devcon and Gorilla epoxies after 5 minutes. The Loctite® 60-minute epoxy undergoes a much slower increase in T_{g} and shows characteristics of autocatalytic cure, where the rate of increase in T_{g} accelerates with time in the early part of cure. According to the Material Safety Data Sheets (MSDS), all three adhesives contain a bisphenol A epoxy resin and a hardener. Devcon lists the hardener as *Trade Secret*; Gorilla lists three amines plus 0.1-1.0% bisphenol A which, with the two hydroxyl groups as shown below, will catalyze the epoxy-amine reaction; and Loctite lists several amines and 1-5% benzyl alcohol with a single hydroxyl as shown below. We speculate that the difference in the level and catalytic strength of these two alcohols accounts for the different reaction speeds.

To summarize, the addition of catalyst accelerates the epoxy-amine reaction and at the same time shifts its character from autocatalytic toward *n*th order, with a marked reduction in the work life and time to gel. Increasing temperature also accelerates the reaction, but without altering its autocatalytic nature. We remind readers that DSC has the unique ability to measure both conversion and rate of conversion vs. time, which accounts for its being a preferred method to characterize the cure process. In the next post we explore the effect of stoichiometry on cure.

The post Thermoset Cure Kinetics Part 12: Modelling the Effect of Adding Catalyst on Autocatalytic Kinetics appeared first on Polymer Innovation Blog.

The post Thermoset Cure Kinetics Part 11. A Mathematical Approach to Cure Kinetics Analysis appeared first on Polymer Innovation Blog.

This post outlines the mathematical approaches used to model cure behavior and then to analyze cure data to extract kinetic parameters. We do this by utilizing equations developed in the previous post that model the cure path of autocatalytic systems such as epoxy-amine.

__Modeling Cure Behavior__

For this case we developed relatively simple software routines for generating and plotting conversion-time and rate-of-conversion-time data from chemically based equations. The objective is to explore thermoset cure behavior such as the effects of stoichiometry and added catalyst. We generate the data as follows: we start with an equation that models the behavior of interest, choose appropriate parameter values, and numerically compute the conversion and rate of conversion at a selected number of points or times. We make use of Eq. 2 in the previous post (Eq. 1 in this post), a detailed equation based on the chemistry of the cure reaction that is able to explore how stoichiometry may affect the cure path.

dα_{Ep}/dt = (k_{1} + k_{2}α_{Ep})(1-α_{Ep})(B-α_{Ep}) (1)

where dα_{Ep}/dt is the rate of conversion of epoxide, α_{Ep} is the fractional conversion of epoxide and B is the ratio of amine hydrogen equivalents to epoxide equivalents. Remember that the amine concentration α_{Am} is represented by the term (B-α_{Ep}); k_{1} is the rate constant for the externally catalyzed reaction and k_{2} is the rate constant for the autocatalyzed reaction. Equation 1 will be used to model the effects of stoichiometry in Part 13.

For applications where the stoichiometry is balanced, B = 1 and α_{Ep} = α_{Am} = α, where α is the overall conversion in an amine-epoxide system, and Equation 1 simplifies to:

dα/dt = (k_{1} + k_{2} α)(1-α)^{2} (2)

In the case of epoxide reacting with amine recall that the term (1-α) represents the concentration of reactants, for example, the epoxide and the amine and that α represents the concentration of catalyst generated in the reaction, for example, hydroxyl functionality. Addition of external catalyst can be modeled by increasing k_{1} and is the subject of the next post.
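As a sketch of how such model curves can be generated, the snippet below integrates Eq. 2 by forward Euler and shows the work-life effect of raising k_{1} (i.e. adding external catalyst). The rate constants and the gel conversion used here are illustrative values, not fitted ones:

```python
def cure_curve(k1, k2, dt=0.1, t_end=3000.0):
    # forward-Euler integration of Eq. 2: d(alpha)/dt = (k1 + k2*alpha)*(1 - alpha)**2
    ts, alphas = [0.0], [0.0]
    alpha, t = 0.0, 0.0
    while t < t_end:
        alpha += (k1 + k2 * alpha) * (1.0 - alpha) ** 2 * dt
        t += dt
        ts.append(t)
        alphas.append(alpha)
    return ts, alphas

def time_to(ts, alphas, target):
    # first time at which the conversion reaches the target (None if never reached)
    for t, a in zip(ts, alphas):
        if a >= target:
            return t
    return None

# illustrative rate constants in sec^-1 (assumed for this sketch)
ts0, a0 = cure_curve(k1=1e-5, k2=5e-3)   # essentially no external catalyst
ts1, a1 = cure_curve(k1=1e-3, k2=5e-3)   # added external catalyst
print(time_to(ts0, a0, 0.58), time_to(ts1, a1, 0.58))  # catalyzed case gels much sooner
```

Increasing k1 while holding k2 fixed reproduces the qualitative behavior described above: the slow induction period disappears and the time to the gel conversion shrinks sharply.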

__Analysis of Cure Data__

This case is more complex; here our ultimate intention is to analyze actual cure data. In the absence of such data we generate “typical” cure data and then fit them to Eq. 3 below. We address the generation of data for ‘model’ autocatalytic systems in one of two forms:

- (**p1**) conversion vs. rate of conversion data, typically generated by isothermal DSC, which is uniquely capable of simultaneously measuring both of these parameters.
- (**p2**) conversion vs. time data, which may be generated by a variety of techniques including DSC, FTIR and Raman spectroscopies.

In particular we examine the analytical approach by generating data in one of these two forms with appropriate noise values such as one might observe under laboratory conditions. In each of these two forms, the assumption is that there is an underlying equation that models the data. Equation 3, derived from Eq. 2 with the reaction orders m and n treated as variables [1,2], is such an equation. It has four parameters (k_{1}, k_{2}, m and n) that can be adjusted to best fit the given data. Generally these parameters enter into the equations in a nonlinear manner, which usually makes an analytical approach to their estimation impossible. Fortunately there are good numerical methods for these problems [3].

dα/dt = (k_{1} + k_{2}α^{m})(1-α)^{n} (3)

A common approach is to estimate the cure parameters (rate constants and reaction orders), measure the discrepancy between the data and the values predicted by the model equation, make appropriate changes to the estimated parameters, and repeat. In the least squares formulation, we attempt to minimize the sum of the squares of the discrepancies over all the data points.

*Generation of simulated cure data*

For the demonstration examples in this blog series we have generated data as follows. We first start with Eq. 3. We then choose parameter values and numerically compute the conversion and conversion rates for the given model at a number of points. To these values we then add “noise” represented by a normally distributed random number with zero mean and a variance which we can specify. The resulting data are then treated as the input to the least squares algorithm, and the goal is to see how well we can recover the parameters used to produce the noise-free data. In the course of exercising these algorithms we have observed that the preferred starting values for the estimated parameters are k_{1} = k_{2} = 0, m = 1, and n = 2.
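The data-generation step just described might be sketched as follows; the parameter values, noise level and helper name are illustrative, not the ones used in the series:

```python
import random

def noisy_rate_data(k1, k2, m, n, n_pts=50, sigma=1e-5, seed=42):
    # Generate (alpha, d alpha/dt) pairs from Eq. 3, then add normally
    # distributed "noise" with zero mean and standard deviation sigma.
    rng = random.Random(seed)  # seeded so runs are reproducible
    data = []
    for i in range(n_pts):
        alpha = i / n_pts  # conversion grid on [0, 1)
        rate = (k1 + k2 * alpha ** m) * (1.0 - alpha) ** n
        data.append((alpha, rate + rng.gauss(0.0, sigma)))
    return data

# illustrative parameter values -- this is the p1-form input to the fit
data = noisy_rate_data(k1=1e-4, k2=2e-3, m=1, n=2)
```

The seed makes the "experiment" repeatable, which is convenient when comparing how well different starting guesses recover the generating parameters.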

*The p1 form*

First we consider the **p1** form. The data consist of pairs (α, dα/dt) at specific (but unspecified) values of time t, where α is the conversion.

For the model we use Eq. 3, a phenomenological equation based on epoxy-amine chemistry [1,2]. If we let y_{i} denote the “measured” rate of conversion corresponding to α_{i}, i.e. the data with noise, and we let β_{i} be the value of the rate of conversion obtained for α_{i} and a particular set of approximate parameter values, then we wish to minimize the value of *e* as given by:

*e* = Σ_{i} (y_{i} - β_{i})^{2} (4)

This is the so-called least squares problem. The goal is to recover the parameter values used to generate the model data. This subject is addressed in Part 14 of this series.
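A minimal sketch of this least-squares objective for the **p1** form. The parameter values are illustrative, and the demonstration data are generated noise-free from Eq. 3 so that the true parameters give *e* = 0:

```python
def sse(params, data):
    # Eq. 4: sum of squared discrepancies e between the "measured" rates y_i
    # and the rates beta_i predicted by Eq. 3 at the same conversions alpha_i.
    k1, k2, m, n = params
    e = 0.0
    for alpha, y in data:
        beta = (k1 + k2 * alpha ** m) * (1.0 - alpha) ** n
        e += (y - beta) ** 2
    return e

# illustrative true parameters and noise-free (alpha, rate) pairs from Eq. 3
true = (1e-4, 2e-3, 1.0, 2.0)
data = [(i / 50, (true[0] + true[1] * (i / 50) ** true[2]) * (1 - i / 50) ** true[3])
        for i in range(50)]
print(sse(true, data) < sse((2e-4, 2e-3, 1.0, 2.0), data))  # True: the generating parameters minimize e
```

A nonlinear least-squares routine simply searches the (k_{1}, k_{2}, m, n) space for the minimum of this function.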

*The p2 form*

In the p2 form, the subject of Part 15 in this series, we are given data pairs of the form (t_{i}, α_{i}). Assuming the same model as given above, the least squares problem is the same except for one wrinkle: β_{i} is not computable directly as in **p1**. Instead, each time the value of any of the parameters is modified, one must solve the differential equation that models the data. There is a numerical software library [3] that we have used effectively to solve both forms. In general, the solution in the p2 form is substantially more computationally intensive.
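The extra cost of the p2 form can be seen in a sketch where every objective evaluation first re-integrates the model differential equation (a forward-Euler integrator with illustrative parameters here, rather than the numerical library used in the posts):

```python
def predicted_alpha(params, times, dt=0.01):
    # Each objective evaluation must integrate Eq. 3 from alpha(0) = 0 to
    # obtain the model conversions beta_i at the measurement times t_i.
    k1, k2, m, n = params
    alpha, t, out, i = 0.0, 0.0, [], 0
    while i < len(times):
        if t >= times[i]:
            out.append(alpha)
            i += 1
            continue
        alpha += (k1 + k2 * alpha ** m) * (1.0 - alpha) ** n * dt
        t += dt
    return out

def sse_p2(params, data, dt=0.01):
    # sum of squared discrepancies between measured and predicted conversions
    times = [t for t, _ in data]
    betas = predicted_alpha(params, times, dt)
    return sum((a - b) ** 2 for (_, a), b in zip(data, betas))

# illustrative "measured" (t_i, alpha_i) pairs generated from known parameters
true = (1e-3, 5e-3, 1.0, 2.0)
times = [10.0 * i for i in range(1, 11)]
data = list(zip(times, predicted_alpha(true, times)))
print(sse_p2(true, data) < sse_p2((2e-3, 5e-3, 1.0, 2.0), data))  # True
```

Because the whole ODE solve sits inside the objective, each trial parameter set costs thousands of integration steps instead of one function evaluation per data point, which is why the p2 fit is substantially slower.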

*Rationale*

Let’s look at a practical example of the type of thermoset curing where modeling would be very important. In this case an epoxy mold compound (EMC) used in a new type of electronic package (called a fan-out wafer level package) is compression molded. The gel point occurs at a conversion (α_{gel}) of approximately 40%. During the compression molding process the EMC must be cured to at least the gel point so the part will have dimensional stability after the molding process. In Figure 1, the Hitachi recommendation is that the degree of cure be at least 40% prior to release from the mold. Figure 1 shows that with increasing mold temperature the time to reach 40% conversion decreases: at a mold temperature of 150°C the molding time is 100 seconds, versus 350 seconds at 120°C. After mold release the EMC is gelled but not fully cured, requiring a second post-mold bake step. From a manufacturing throughput perspective, the time in the mold is optimized to achieve the required 40% conversion and minimize warpage on cooling.

*Figure 1. Degree of cure (conversion) as a function of time for various mold temperatures for an epoxy mold compound (source: Hitachi Chemical).*
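A hedged sketch of such a time-to-demold estimate: Eq. 2 is integrated with rate constants scaled by an assumed Arrhenius law. The activation energy and prefactors below are illustrative placeholders, not Hitachi's values; only the qualitative trend of Figure 1 (higher mold temperature, shorter time to 40% conversion) is reproduced:

```python
import math

def time_to_gel(T_celsius, target=0.40, dt=0.1):
    # Integrate Eq. 2 until the target conversion is reached, with both rate
    # constants scaled by an ASSUMED Arrhenius law (Ea and the prefactors are
    # illustrative placeholders, not the EMC's measured values).
    Ea, R = 70e3, 8.314            # J/mol (assumed) and J/(mol*K)
    arr = math.exp(-Ea / (R * (T_celsius + 273.15)))
    k1, k2 = 2e5 * arr, 4e6 * arr  # sec^-1, illustrative prefactors
    alpha, t = 0.0, 0.0
    while alpha < target:
        alpha += (k1 + k2 * alpha) * (1.0 - alpha) ** 2 * dt
        t += dt
    return t

# higher mold temperature -> shorter time in the mold, as in Figure 1
print(time_to_gel(150), time_to_gel(120))
```

With fitted rate constants and a measured activation energy, the same loop would let a process engineer read off the minimum mold time for any candidate mold temperature.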

*Numerical results*

This example shows the importance of modeling the conversion-time relationship for thermoset curing. In subsequent posts, examples will show how the approaches discussed here are used to generate conversion-time plots such as those in Figure 2. Note that the shapes of the modeled curves are similar to the data in Figure 1.

*Figure 2. Degree of cure (conversion) as a function of time for various model parameters*

References

- K. Horie et al., J. Polym. Sci., Polym. Chem. Ed. __8__, 1357 (1970).
- S. Sourour and M. R. Kamal, Thermochim. Acta __14__, 41 (1976).
- Apache Commons, https://commons.apache.org/.


The post Thermoset Cure Kinetics Part 10: Autocatalytic Equations appeared first on Polymer Innovation Blog.

This post describes the kinetic equations used to characterize thermoset reactions that are autocatalytic in nature. The autocatalytic equations originate from a kinetic equation based on the chemistry of the epoxy-amine reaction and invoke assumptions that simplify the math. Epoxy-amine thermosets are in widespread use, from 5-minute epoxy adhesives to matrix materials for advanced composites. They are also the most widely studied thermosets.

Referring to Schemes 1 and 2 in the previous post, the complete chemical equation, Eq. 1 below, is based on four discrete reactions: epoxy with primary amine catalyzed by the alcohol produced in the cure reaction, epoxy with primary amine catalyzed by catalyst initially present, epoxy with secondary amine catalyzed by the alcohol produced in the cure reaction, and epoxy with secondary amine catalyzed by catalyst initially present (see Ref. 1). k_{1}, k’_{1}, k_{2} and k’_{2} are the corresponding rate constants. The rate of consumption of epoxide, dx/dt, is given by

dx/dt = k_{1}a_{1}ex + k’_{1}a_{1}ec_{0} + k_{2}a_{2}ex + k’_{2}a_{2}ec_{0} (1)

where e is the molar concentration of epoxide, a_{1} the molar concentration of primary amine and a_{2} the molar concentration of secondary amine at time t. e_{0}, a_{0}, and c_{0} are the initial concentrations of epoxide, primary amine and external catalyst, and x is the concentration of epoxide consumed.

Assuming equal reactivity of all amine hydrogens and converting to a fractional concentration basis leads to the following autocatalytic equation (Ref. 1):

dα_{Ep}/dt = (k_{1} + k_{2}α_{Ep})(1-α_{Ep})(B-α_{Ep}) (2)

where dα_{Ep}/dt is the rate of conversion of epoxide, α_{Ep} the fractional conversion of epoxide and B the ratio of amine hydrogen equivalents to epoxide equivalents. Note that the amine concentration α_{Am} is represented by the term (B-α_{Ep}). Equation 2 will be used to model the effects of stoichiometry.

Ideally the only reaction is an epoxide with an amine. When stoichiometric quantities of reactants are mixed, B = 1 and α_{Ep} = α_{Am} = α, where α is the overall conversion in an amine-epoxide system. k_{1} is the rate constant for the externally catalyzed reaction[1] and k_{2} is the rate constant for the autocatalyzed reaction. Under these circumstances Eq. 2 reduces to

dα/dt = (k_{1} + k_{2} α)(1-α)^{2} (3)

This equation will be used to model stoichiometrically balanced reactions. It is also appropriate for the user who has control over, or at least knowledge of, the formulation. For thermosets of unknown composition one must resort to phenomenological equations based on the above considerations, such as

dα/dt = (k_{1} + k_{2}α^{m})(1-α)^{n} (4)

which contains the four variables k_{1}, k_{2}, m and n. Users of thermosets who do not have access to their composition will find this equation, first attributed to Sourour and Kamal (2), useful for analyzing cure data.

It should be pointed out that when k_{1} = 0, Eqs. 2-4 describe purely autocatalytic behavior without any influence of external catalyst. In practice k_{1} can be small compared to k_{2} but will have a finite value due to unavoidable impurities in the starting materials or even absorbed water. It should also be noted that when k_{2} = 0 these equations revert to *n*th order equations, broadening their applicability. For example Eq. 4 becomes

dα/dt = k_{1 }(1-α)^{n} (5)

which describes the second-order cure of a fast-reacting polyurethane with n = 2 (see Part 6 of this series, *A Practical Example of Cure Kinetics in Action*).
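For n = 2, Eq. 5 also integrates in closed form to α(t) = k_{1}t/(1 + k_{1}t), which makes a handy cross-check for any numerical integrator; the rate constant below is illustrative:

```python
def nth_order_alpha(k1, t):
    # Closed form of Eq. 5 with n = 2: integrating d(alpha)/(1 - alpha)**2 = k1*dt
    # from alpha(0) = 0 gives 1/(1 - alpha) - 1 = k1*t, hence:
    return k1 * t / (1.0 + k1 * t)

# cross-check against a forward-Euler integration of Eq. 5 (illustrative k1)
k1, dt, t_end = 0.05, 0.01, 100.0
alpha, t = 0.0, 0.0
while t < t_end:
    alpha += k1 * (1.0 - alpha) ** 2 * dt
    t += dt
print(abs(alpha - nth_order_alpha(k1, t_end)) < 5e-3)  # True
```

Agreement between the closed form and the stepwise integration gives some confidence that the same integrator can be trusted on the autocatalytic equations, which have no such closed-form solution in general.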

In the next blog post we will describe the mathematical approaches taken to generate model cure data from Equations 2 and 3, as well as to fit cure data, e.g. from DSC or FTIR studies, to Eq. 4.

_____________

[1] k_{1} encompasses an underlying rate constant for the epoxy-amine reaction, e.g. k’_{1}, times the concentration of external catalyst, e.g. k_{1} = k’_{1}c_{0}.

References

- K. Horie et al., J. Polym. Sci., Polym. Chem. Ed. __8__, 1357 (1970).
- S. Sourour and M. R. Kamal, Thermochim. Acta __14__, 41 (1976).

