The post Thermoset Cure Kinetics Part 14: Analysis of Autocatalytic Systems from Conversion and Rate of Conversion Data appeared first on Polymer Innovation Blog.

In this post we illustrate the **p1** form described in Part 11 of this series by fitting conversion – rate of conversion data to a phenomenological equation based on chemical kinetics that is commonly used to describe autocatalytic cure kinetics. Such data can be measured by DSC, although in these examples we generate the data mathematically as outlined previously.

Analyses utilizing this equation, discussed in detail in Part 10 of this series, will primarily benefit users who purchase thermosetting materials but do not have access to their chemical makeup.

The purpose here is to validate our computational schemes and to illustrate the utility of mathematically analyzing thermoset cure data. Our approach is to generate perfect data from Eq. 1 with a selected set of experimental parameters, add noise to simulate actual laboratory data, fit these noisy data to Eq. 1 and compare the computed results with the selected input parameters. A detailed description of the computational process is given in the Appendix. Three types of input data are examined: one with perfect data (ie no noise added), one set of data with a moderate level of noise, and one very noisy data set. Noise is added as described in Part 11 of this series.

The selected parameters are shown in the second column of Table 1. This is a simple system where m and n values are those expected from epoxy – amine chemistry and k_{2} is 100 times larger than k_{1}. This is identical to the *Small* catalyst level example in Part 12 of this series and describes a system that is mostly autocatalytic in nature with about a 1% influence of external catalysis.
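As a minimal sketch of this data-generation step, the snippet below builds 30 equally spaced conversions, evaluates the phenomenological rate equation from Part 11 (dα/dt = (k_{1} + k_{2}α^{m})(1-α)^{n}, which we take to be the Eq. 1 referenced here), and adds zero-mean Gaussian noise scaled by the mean rate. The values k_{1} = 0.0001 sec^{-1}, k_{2} = 0.01 sec^{-1}, m = 1 and n = 2 are our assumptions for the *Small* catalyst level inputs described in the text; the actual Table 1 entries may differ.

```python
import numpy as np

def rate(alpha, k1, k2, m, n):
    """Phenomenological autocatalytic rate: da/dt = (k1 + k2*a^m)(1-a)^n."""
    return (k1 + k2 * alpha**m) * (1.0 - alpha)**n

rng = np.random.default_rng(0)

# Assumed inputs: k2 = 100*k1, with m and n typical of epoxy-amine chemistry.
k1, k2, m, n = 1e-4, 1e-2, 1.0, 2.0
alpha = np.linspace(0.0, 1.0, 30)          # 30 equally spaced conversions
perfect = rate(alpha, k1, k2, m, n)        # "perfect" rate data

# Noise: zero-mean Gaussian scaled by the mean rate (4% = the moderate level).
sigma = 0.04
noisy = perfect + rng.normal(0.0, sigma * perfect.mean(), perfect.size)
```

Note that the perfect data reproduce two limits used later in the post: dα/dt = k_{1} at α = 0 and dα/dt = 0 at α = 1.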

Rate of conversion (dα/dt) data is tabulated in Table 2 and plotted as *Perfect data*, *Noisy data* and *Computed data*. While we do not show units for rate of conversion, note that they will be the same as those of the rate constants. For example, if k_{1} and k_{2} have units of sec^{-1} then these dα/dt values will be in sec^{-1}. *Perfect* data is dα/dt computed from Eq. 1 with the selected parameters, *Noisy* data is *Perfect* data + noise, and *Computed* data is generated by fitting the *Noisy* data to Eq. 1 and plugging those parameters back into Eq. 1. Table 1 contains the kinetic parameters affiliated with various stages of computation. The last three columns are discussed below. The sum of the squares, described in the Appendix, is a measure of the goodness of the fit to the simulated cure data, ie the noisy data.

**Case 1, No Noise**

The first column of Table 2 contains conversion values between 0 and 1 in equal increments determined by the number of data points, 30 in these examples. In the second column are *Perfect Data* computed from Eq. 1 without any added noise. In this case the *Noisy* data are identical to the *Perfect* data as expected, since no noise was added. The *Computed* results were also found to be identical to the *Perfect* data, and the computed parameters in the *No Noise* column of Table 1 were identical to the actual or input values even though the starting point values for k_{1} and k_{2} were each set to zero. These results combined with the perfect overlap seen in Fig. 1 and the exceptionally low value for the sum of the squares (essentially zero) all lend confidence to our computational schemes.

**Table 1. Kinetic Parameters**

**Table 2. Conversion and Rate of Conversion Data for Three Noise Levels. Data in the Rightmost Five Columns are Rate of Conversion dα/dt**

**Case 2, Moderate Noise**

The third column of Table 2 contains the moderately noisy data for the rate of conversion. Data such as these might be expected from careful experimentation with a well-maintained modern DSC. These data were fit to Eq. 1 as described earlier, giving the *Moderate Noise* results shown in Tables 1 and 2. Values for m, n and k_{2} are within a few percent of the input values. While k_{1} shows the most variation it has only a minor effect on the overall results due to its small value vis-à-vis k_{2}. The 4^{th} column of Table 2 contains rate data computed from Eq. 1 with the *Moderate Noise* parameters. While the data in Fig. 2 are not superimposed right on top of each other as in Fig. 1, the three sets of data can be seen to correspond remarkably well, especially the *Computed data* with the *Perfect data*. The sum of squares value of 3.8 x 10^{-8} compares favorably with the 1-sigma estimate of 3.9 x 10^{-8} in the Appendix, confirming a good fit of the computed data to the noisy simulated cure data.

**Case 3, Very Noisy**

The fifth column of Table 2 contains the *Very Noisy* data for the rate of conversion. Data such as these might be expected from less careful experimentation or with a not well-maintained or older and less sensitive DSC. These data were fit to Eq. 1 giving the *Very Noisy* results shown in Tables 1 and 2. The sixth column contains rate data computed from Eq. 1 with these *Very Noisy* parameters. From Table 2 and Fig. 3 these results can be seen to be much closer to the *Perfect Data* than to the *Very Noisy* data. Thus even though the simulated data are quite noisy the fit to Eq. 1 is nonetheless quite good, as can be seen by the virtual overlap of the *Computed* data with the *Perfect* data in Fig. 3. The sum of squares value of 1.4 x 10^{-7} vis-à-vis the 1-sigma estimate of 1.55 x 10^{-7} in the Appendix supports this conclusion.

To summarize, even with noisy cure data we can expect a good fit to the underlying relationships governing the cure reaction. With its unique ability to independently measure both conversion and rate of conversion, DSC is ideally suited to characterize cure processes. In the real world of actual cure data we can anticipate the ability to model cure reactions with good predictability. The k_{2} values for both the *Moderate* and *Very Noisy* cases are sufficiently reproducible to be used in an Arrhenius plot of *ln* k_{2} vs. T^{-1} to estimate the activation energy from data taken at multiple temperatures.
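For readers wanting the Arrhenius step spelled out: with k_{2} known at two temperatures, the activation energy follows from the slope of ln k_{2} vs. T^{-1}. A sketch with stdlib Python only; the two k_{2} values and temperatures below are hypothetical placeholders, only the formula is general.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(k_a, T_a, k_b, T_b):
    """Ea from two rate constants via ln k = ln A - Ea/(R*T).

    Temperatures in kelvin; rate constants in any common unit."""
    return R * math.log(k_b / k_a) / (1.0 / T_a - 1.0 / T_b)

# Hypothetical k2 values at 120 C (393.15 K) and 150 C (423.15 K).
Ea = activation_energy(1e-2, 393.15, 4e-2, 423.15)  # ~64 kJ/mol here
```

With more than two temperatures one would instead fit a straight line to (T^{-1}, ln k_{2}) pairs and take Ea = -R × slope.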

This concludes our current series on Thermoset Cure Kinetics. Future posts will focus on the kinetic analysis of conversion – time data, which can be generated by a number of techniques including DSC and FTIR and Raman spectroscopies, and the incorporation of temperature as a variable.

**Appendix: Fitting Data**

We start with Eq. 1 and a set of α values. In our experiments, we picked a uniformly spaced sample of conversions between 0 and 1. In addition we chose values for the 4 parameters k_{1}, k_{2}, m and n that are used to generate the “perfect” dα/dt and the noisy dα/dt as described above. We then let the 4 parameters be unknowns and try to recover those values using a numerical least squares process. This process consists of starting with an approximation to the “unknown” parameters and iteratively modifying them to minimize the sum of the squares of the differences between the noisy dα/dt values and the dα/dt values obtained from Eq. 1 with the estimated parameters. There are libraries of excellent software routines for such problems and we used the method in reference [1].
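A sketch of this fitting step: below we use SciPy's `curve_fit` as a stand-in for the Commons Math least squares routine the authors used, and fit noise-free "Case 1" data back to Eq. 1. The parameter values (k_{1} = 10^{-4}, k_{2} = 10^{-2}, m = 1, n = 2) are our assumed inputs; unlike the authors' solver, which starts from k_{1} = k_{2} = 0, we give `curve_fit` small positive starting guesses.

```python
import numpy as np
from scipy.optimize import curve_fit

def rate(alpha, k1, k2, m, n):
    # Eq. 1: da/dt = (k1 + k2*a^m)(1-a)^n
    return (k1 + k2 * alpha**m) * (1.0 - alpha)**n

alpha = np.linspace(0.0, 1.0, 30)
true = (1e-4, 1e-2, 1.0, 2.0)              # assumed input parameters
y = rate(alpha, *true)                     # noise-free "Case 1" data

# Levenberg-Marquardt style fit from small positive starting guesses.
popt, _ = curve_fit(rate, alpha, y, p0=[1e-3, 1e-3, 1.0, 2.0])

# Goodness-of-fit measure reported in Table 1: sum of squared differences.
sum_sq = np.sum((y - rate(alpha, *popt))**2)
```

With perfect input data the recovered parameters match the inputs and the sum of squares is essentially zero, mirroring Case 1 above; replacing `y` with noisy data reproduces Cases 2 and 3.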

As part of the computation we display the sum of the squares of the differences between the noisy dα/dt and the computed dα/dt in Table 1. The noise values that are used for each data point are of the form:

noise = r (dα/dt)_{mean}

where r is a random variable with standard deviation σ representing the noise level (e.g. 4% or 8%) and (dα/dt)_{mean} is the average value of dα/dt. We would expect the sum of squares to be approximately n times the noise level squared, where n is the number of data points. If we replace r by σ in the equation, we obtain a 1-sigma estimate for the sum of squares. If we have minimized the sum of squares, we would expect its value to be no larger than this 1-sigma estimate. On the other hand, if the sum of squares exceeds the 1-sigma estimate by a significant amount, then we probably have not minimized the sum of squares. In the cases shown in Table 2, the mean value for dα/dt is approximately 9 x 10^{-4} and there are 30 data points. This yields a 1-sigma estimate for the sum of squares of 30(σ x 9 x 10^{-4})^{2}, which is 3.9 x 10^{-8} for σ = 0.04 and 1.55 x 10^{-7} for σ = 0.08. Such values would indicate a high degree of confidence for the computed parameters.
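The 1-sigma arithmetic quoted above is easy to verify directly:

```python
# 1-sigma estimate for the sum of squares: n * (sigma * mean_rate)^2
n_points = 30
mean_rate = 9e-4          # approximate mean da/dt from Table 2

est_moderate = n_points * (0.04 * mean_rate)**2   # ~3.9e-8 for sigma = 0.04
est_very_noisy = n_points * (0.08 * mean_rate)**2 # ~1.55e-7 for sigma = 0.08
```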

**References**

[1] Commons Math: The Apache Commons Mathematics Library, *https://commons.apache.org/proper/commons-math/*


The post Thermoset Cure Kinetics Part 13: Modelling the Effect of Stoichiometry on Autocatalytic Cure Kinetics appeared first on Polymer Innovation Blog.

This post explores how the epoxy-amine stoichiometry can affect the cure path of autocatalytic cure reactions, providing additional insight into the behavior of epoxy-amine thermosets. We illustrate some interesting benefits as well as precautions of altering the stoichiometry. For a variety of reasons it is not uncommon for systems to be either amine-rich or epoxy-rich. For example, an excess of amine will help ensure that there is no unreacted epoxy that can react with itself at elevated temperatures.

We start with Eq. 2 in Part 10 of this series, an equation based on the chemistry of the amine-epoxy cure reaction, and generate conversion-time and rate of conversion-time data with selected parameter values.

In all examples in this post we keep k_{1} constant at a value of 0.0001 sec^{-1} and k_{2} constant at a value of 0.01 sec^{-1}, the values yielding the *Small* catalyst level curves in the previous post, and examine different stoichiometric levels, both excess amine and excess epoxy, vis-à-vis 1:1 or balanced stoichiometry. Utilizing the methodologies described in Part 11 of this series we generated α_{Ep} and dα_{Ep}/dt vs. time data from Eq. 1, which are plotted in the figures below. α_{Am} and dα_{Am}/dt were computed from the following relationship between α_{Ep} and α_{Am}, where Ep and Am stand for epoxide and amine, respectively:

α_{Am} = α_{Ep}/B
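One simple way to generate these curves is forward-Euler integration of Eq. 1. A sketch with the stated rate constants and a 25% amine excess (B = 1.25); the step size and end time are our choices, not the post's:

```python
import numpy as np

def simulate(B, k1=1e-4, k2=1e-2, t_end=3600.0, dt=1.0):
    """Forward-Euler integration of Eq. 1:
    da_Ep/dt = (k1 + k2*a_Ep)(1 - a_Ep)(B - a_Ep), a_Ep(0) = 0."""
    steps = int(t_end / dt)
    t = np.arange(steps + 1) * dt
    a = np.zeros(steps + 1)
    for i in range(steps):
        da = (k1 + k2 * a[i]) * (1.0 - a[i]) * (B - a[i])
        a[i + 1] = min(a[i] + dt * da, min(1.0, B))  # conversion capped at 1 (or B)
    return t, a

t, a_ep = simulate(B=1.25)     # 25% excess amine
a_am = a_ep / 1.25             # amine conversion from a_Am = a_Ep / B
```

With B = 1.25 the epoxide conversion approaches 1 while the amine conversion levels off at 1/B = 0.8, the behavior described for the amine-rich system in Fig. 1.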

Figure 1 illustrates the expected behavior for an amine-rich system. The epoxide conversion is complete, ie α_{Ep} = 1, at about 30 minutes. At the same time the amine conversion reaches its ultimate conversion of α_{Am} = 1/B, which is less than 1 because the amine is present in excess.

Figures 2 and 3 illustrate how increasing the amine concentration in amine-rich systems accelerates the cure reaction and helps drive the epoxide reaction to completion in much shorter times compared with a balanced 1:1 stoichiometry. While the benefits in accelerating the cure process seem obvious other properties will change in ways that may or may not be beneficial and need to be considered. For example, T_{g} and modulus will be lower and the surface chemistry and reactivity will be altered. Both the time to gel and the conversion at the gel point will also be affected [1]. Notice in Fig. 3 how the maximum reaction rate, and as a consequence the maximum rate of heat release, appears to increase as the stoichiometry becomes less balanced. Because there is less material to react the size of the exotherm in Joules/gram will be less but it may be more intense. Also notice that the autocatalytic nature of the reaction is not altered.

We pointed out in the last post that Eq. 3 below leads to dα/dt = k_{1} at t = 0, where α = 0. But it can be seen from Eq. 1 that dα/dt at t = 0 is actually equal to Bk_{1}, as can be observed by close inspection of Fig. 3, and is therefore only equal to k_{1} when B = 1.

dα/dt = (k_{1} + k_{2}α)(1-α)^{2} (3)

Figures 4 and 5 are interesting to compare with each other. Figure 4 plots the epoxide conversion vs. time and Fig. 5 the amine conversion vs. time at 1:1 stoichiometry, 25% excess amine and 25% excess epoxide. The effects on epoxide conversion are more dramatic and straightforward. Note in Fig. 4 that conversion of the epoxide is only complete (α_{Ep} = 1) for B = 1.25 (after 25-30 minutes), and the conversion of amine (α_{Am} = 1.0) only for B = 0.8 (after 35-40 minutes). As can be observed, the order of time to complete or ultimate conversion is excess amine first, excess epoxide second, and 1:1 stoichiometry far behind in third place. The effect on amine conversion is more complex but the order of the time to complete conversion remains the same. We ascribe these differences to the epoxide being the source of the hydroxyl. While interesting, we doubt that they are significant.

In summary, there are potential benefits from altering the stoichiometry of the amine-epoxy reaction. It is not uncommon for systems to be amine-rich, with for example a 30% excess of amine. This will help ensure that there is no unreacted epoxy that can react with itself at elevated temperatures leading to unwanted increases in T_{g}, modulus and other physical properties. From Fig. 2 we see that even a small excess of amine will greatly reduce the amount of time needed to reach complete conversion. And Fig. 3 suggests that the intensity of the exotherm may increase as the excess of amine increases. In amine-rich systems the reaction of epoxide will be complete, ie its conversion will be equal to 1. However the reaction of amine will be incomplete, and its ultimate conversion will be less than 1. An excess of epoxy could provide a beneficial increase in T_{g} and related properties in a postcure process if that were desired. A surface rich in either unreacted amine or epoxy could also be useful when building layers, eg in pseudo-isotropic fiber-reinforced composites or 3D-printed parts. Both the conversion at the gel point and the time to gel will be affected by either excess amine or excess epoxy and need to be considered when formulating epoxy-amine systems [1].

In our next post we shift gears and look at the fitting of cure data to the phenomenological equation below (Eq. 3 in Part 11 of this series).

dα/dt = (k_{1} + k_{2}α^{m})(1-α)^{n}

Reference

1. Polymer Innovation Blog *Thermoset Characterization Part 5: Calculation of Gel Point* (posted May 12, 2014)


The post Thermoset Cure Kinetics Part 12: Modelling the Effect of Adding Catalyst on Autocatalytic Kinetics appeared first on Polymer Innovation Blog.

In this post we explore how adding an external catalyst or accelerator to increase the reaction rate can affect the cure path of autocatalytic cure reactions, providing additional insight into the behavior of epoxy-amine thermosets as well as commenting on some of the pros and cons of doing so. We show how added catalyst can explain the difference between a 60-minute epoxy and a 5-minute epoxy and illustrate how adding catalyst differs from increasing temperature to accelerate the cure process. A good reference on this topic is *Acceleration of Amine-Cured Epoxy Resins* by B. L. Burton, Huntsman Corp.

We start with the chemically based Eq. 2 from the previous post for the case where the stoichiometry is balanced, ie B = 1:

dα/dt = (k_{1} + k_{2} α)(1-α)^{2} (1)

Recall again that the term (1-α) represents the concentration of reactants, eg the epoxide and the amine in an epoxy-amine system, and that α represents the concentration of catalyst generated in the reaction, eg hydroxyl functionality in the case of epoxide reacting with amine. k_{2} is the rate constant for the autocatalytic reaction and k_{1} the rate constant for the externally catalyzed reaction, which can be accomplished by impurities in the reactants, by the surfaces of fillers or by added catalyst.

In this exercise we keep k_{2} constant at a value of 0.01 sec^{-1} and incrementally increase the value of k_{1} to simulate the effect of adding catalyst. We chose the four values shown in the table below and generated α and dα/dt vs. time data utilizing the methodologies described in the previous post. Note that a very small value of 0.00001 sec^{-1} was assigned to k_{1} for the *No added catalyst* case to simulate the unavoidable presence of impurities in the reactants including absorbed moisture.

The table below summarizes the cases evaluated while Fig. 1 shows projected results for the *Small* catalyst level case. Note that, as predicted by Eq. 1, dα/dt = k_{1} at t = 0. This is demonstrated more clearly in Fig. 3 and suggests that the conversion rate measured by isothermal DSC can be used to estimate k_{1} and shed light on the catalyst level. We say estimate because the conversion rate at t = 0 is obtained by extrapolation and, as shown in the next post, the full relationship is dα/dt = Bk_{1} at t = 0, which only equals k_{1} when B = 1. Also note that k_{1} encompasses an underlying rate constant, the concentration of external or added catalyst, and the strengths of the catalysts.

Several effects can be noted in the conversion-time curves in Fig. 2 below, both in the speed of the reaction and in the shape of the curves. As expected the addition of catalyst accelerates the reaction. But also notice how the shapes of the curves change from typical autocatalytic behavior with obvious inflection for the *No added catalyst* case to progressively less autocatalytic with increasing k_{1} and eventually to a curve that more closely resembles *n*th order characteristics with no observable inflection. With a typical autocatalytic epoxy the early reaction is slow due to lack of catalyst and only picks up after sufficient catalyst is generated, having the effect of extending the work life. Adding catalyst not only accelerates the overall reaction but diminishes and eventually eliminates this extension of the work life. This amplifies the effect on gelation, resulting in a dramatic shortening of the work life vis-à-vis a smaller effect on the overall cure (eg time to 95% conversion). As shown in *Thermoset Characterization Part 5: Calculation of Gel Point* (posted May 12, 2014) the conversion at the gel point α_{gel} = 0.58 when B = 1, which is indicated in Figure 2.
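To make the work-life effect concrete, a rough sketch: forward-Euler integration of Eq. 1 (B = 1), recording the first time α reaches the gel conversion of 0.58, for the *No added catalyst* case (k_{1} = 0.00001 sec^{-1}) and the *Intermediate* case (k_{1} = 0.00075 sec^{-1}) mentioned in the text. The step size and cutoff time are our choices.

```python
def time_to_conversion(k1, target, k2=1e-2, dt=1.0, t_max=20000.0):
    """Integrate Eq. 1 with B = 1, da/dt = (k1 + k2*a)(1-a)^2,
    and return the first time (seconds) at which a >= target."""
    a, t = 0.0, 0.0
    while a < target and t < t_max:
        a += dt * (k1 + k2 * a) * (1.0 - a)**2
        t += dt
    return t

alpha_gel = 0.58  # gel conversion for B = 1 (see text)
t_gel_no_cat = time_to_conversion(k1=1e-5, target=alpha_gel)
t_gel_intermediate = time_to_conversion(k1=7.5e-4, target=alpha_gel)
```

The added catalyst shortens the time to gel by a large factor even though k_{2}, which dominates the bulk of the cure, is unchanged, which is the work-life effect described above.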

Figure 2 should be compared and contrasted with Fig. 1 of the previous post, reproduced here as Fig. 3. The material and process descriptions of this epoxy mold compound (EMC) were given in the previous post. Like the addition of catalyst, increasing the temperature clearly accelerates the reaction. But unlike the addition of catalyst the shapes of the curves, all close to the *Intermediate* catalyst level curve in Fig. 2 (k_{1} = 0.00075 sec^{-1}), remain the same, indicating that increasing temperature only speeds up the reaction but in general doesn’t change it. As a consequence these curves could be superimposed to form a master cure curve (see *Thermoset Cure Kinetics Part 5: Time-Temperature Superposition Kinetics*, posted November 24, 2014).

*Figure 3. Degree of cure (conversion) as a function of time for various mold temperatures for an epoxy mold compound (source: Hitachi Chemical).*

The rate of conversion-time curves in Fig. 4 also show the reaction shifting to shorter times as the reaction accelerates with added catalyst and losing its autocatalytic character. In the *Very fast* reaction there is just a hint of the characteristic autocatalytic peak in the curve. Additional catalyst will result in the maximum rate occurring at t = 0, which is characteristic of entirely *n*th order reactions.

It is interesting to compare the conversion-time curves in Fig. 2 with the T_{g}-time curves from *MTDSC of Thermosets Part 5: Five-Minute Epoxy, Continued* (posted August 10, 2015) and reproduced in Fig. 5 below. The 60-minute epoxy exhibits some autocatalytic behavior similar to the *Moderate* curve in Fig. 2 while the two five-minute epoxies show *n*th order behavior similar to the *Very Fast* curve.

T_{g} begins to increase almost immediately for the two 5-minute epoxies which then separate as the faster Devcon® epoxy approaches its full cure T_{g} of 35 to 40°C and the Gorilla® epoxy lags slightly behind. Recall that the 5 and 60 minute labels refer to the work life and approximate gel times. Notice that the T_{g} achieved in 60 minutes at 25°C for the Loctite epoxy is about the same as the T_{g}s achieved for the Devcon and Gorilla epoxies after 5 minutes. The Loctite® 60-minute epoxy undergoes a much slower increase in T_{g} and shows characteristics of autocatalytic cure where the rate of increase in T_{g} accelerates with time in the early part of cure. From the Material Safety Data Sheets (MSDS) all three adhesives contain a bisphenol A epoxy resin and a hardener. Devcon lists the hardener as *Trade Secret*, Gorilla lists three amines plus 0.1-1.0% bisphenol A which, with the two hydroxyl groups as shown below, will catalyze the epoxy-amine reaction, and Loctite lists several amines and 1-5% benzyl alcohol with a single hydroxyl as shown below. We speculate that the difference in the level and catalytic strength of these two alcohols accounts for the different reaction speeds.

To summarize, the addition of catalyst will accelerate the epoxy-amine reaction and at the same time alter its character from autocatalytic toward *n*th order with a marked reduction in the work life and time to gel. Increasing temperature will also accelerate the reaction but without altering its autocatalytic nature. We should remind readers that DSC has the unique ability to measure both conversion and rate of conversion vs. time, accounting for it being a preferred method to characterize the cure process. In the next post we explore the effect of stoichiometry on cure.


The post Thermoset Cure Kinetics Part 11. A Mathematical Approach to Cure Kinetics Analysis appeared first on Polymer Innovation Blog.

This post outlines the mathematical approaches to model the cure behavior and then analyze the cure data to extract kinetic parameters. We do this by utilizing equations developed in the previous post that model the cure path of autocatalytic systems like epoxy-amine.

__Modeling Cure Behavior__

For this case we developed relatively simple software routines for generating and plotting conversion – time and rate of conversion – time data from chemically based equations. The objective is to explore thermoset cure behavior such as the effects of stoichiometry and added catalyst. We generate the data as follows. We start with an equation that models the behavior of interest. We then choose appropriate parameter values and numerically compute conversion and rate of conversion at a selected number of points or times. We make use of Eq. 2 in the previous post (Eq. 1 in this post), a detailed equation based on the chemistry of the cure reaction that is able to explore how stoichiometry may affect the cure path.

dα_{Ep}/dt = (k_{1} + k_{2}α_{Ep})(1-α_{Ep})(B-α_{Ep}) (1)

where dα_{Ep}/dt is the rate of conversion of epoxide, α_{Ep} the fractional conversion of epoxide and B the ratio of amine hydrogen equivalents to epoxide equivalents. Remember that the amine concentration α_{Am} is represented by the term (B-α_{Ep}), k_{1 }is the rate constant for the externally catalyzed reaction and k_{2} is the rate constant for the autocatalyzed reaction. Equation 1 will be used to model the effects of stoichiometry in Part 13.

For applications where the stoichiometry is balanced, B = 1 and α_{Ep} = α_{Am} = α, where α is the overall conversion in an amine-epoxide system, and Eq. 1 simplifies to:

dα/dt = (k_{1} + k_{2} α)(1-α)^{2} (2)

In the case of epoxide reacting with amine recall that the term (1-α) represents the concentration of reactants, for example, the epoxide and the amine and that α represents the concentration of catalyst generated in the reaction, for example, hydroxyl functionality. Addition of external catalyst can be modeled by increasing k_{1} and is the subject of the next post.

__Analysis of Cure Data__

This case is more complex; here our ultimate intention is to analyze actual cure data. In the absence of such data we generate “typical” cure data and then fit them to Eq. 3 below. Here we address the generation of data for ‘model’ autocatalytic systems in one of two forms:

- (p1) conversion vs rate of conversion data, typically generated by isothermal DSC which is uniquely capable of simultaneously measuring both of these parameters.
- (p2) conversion vs time data, which may be generated by a variety of techniques including DSC, FTIR and Raman spectroscopies.

In particular we examine the analytical approach by generating data in one of these two forms with noise values such as one might observe under laboratory conditions. In each of these two forms, the assumption is that there is an underlying equation that models the data. Equation 3 is such an equation (derived from Eq. 2), where the reaction orders m and n are treated as variables [1,2]. It has four parameters (k_{1}, k_{2}, m and n) that can be adjusted to best fit the given data. Generally these parameters enter into the equations in a nonlinear manner, which usually makes an analytical approach to their estimation impossible. Fortunately there are good numerical methods for these problems [3].

dα/dt = (k_{1} + k_{2}α^{m})(1-α)^{n} (3)

A common approach is to estimate the cure parameters (rate constants and reaction orders), measure the discrepancy between the data and the values predicted by the model equation, make appropriate changes to the estimated parameters, and repeat. In the least squares formulation, we attempt to minimize the sum of the squares of the discrepancies over all the data points.

*Generation of simulated cure data*

For the demonstration examples in this blog series we have generated data as follows. We first start with Eq. 3. We then choose parameter values and numerically compute the conversion and conversion rates for the given model at a number of points. To these values we then add “noise” represented by a normally-distributed random number with zero mean and a variance which we can specify. The resulting data are then treated as the input to the least squares algorithm and the goal is to see how well we can recover the parameters used to produce the original, noise-free data. In the course of exercising these algorithms we have observed that the preferred settings for the estimated starting values are k_{1} = k_{2} = 0, m = 1, and n = 2.

*The p1 form*

First we consider the **p1** form. The data consist of pairs (α, dα/dt) at specific (but unspecified) values of time t, where α is the conversion.

For the model, we use Eq. 3, a phenomenological equation based on epoxy-amine chemistry [1,2]. If we let y_{i} denote the “measured” rate of conversion corresponding to α_{i}, ie the data with noise, and we let β_{i} be the value of the rate of conversion obtained for α_{i} and a particular set of approximate parameter values, then we wish to minimize the value of *e* as given by:

e = Σ_{i}(y_{i} - β_{i})^{2} (4)

This is the so-called least squares problem. The goal is to recover the parameter values used to generate the model data. This subject is addressed in Part 14 of this series.

*The p2 form*

In the p2 form, the subject of Part 15 in this series, we are given data pairs of the form (t_{i}, α_{i}). Assuming the same model as given above, the least squares problem is the same except that there is a wrinkle: β_{i} is not computable directly as in **p1**. Instead, each time the value of any of the parameters is modified, one must solve the differential equation that models the data. There is a numerical software library [3] that we have used effectively to solve both forms. In general, the solution in the p2 form is substantially more computationally intensive.
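A sketch of the p2 loop in Python, using SciPy's `solve_ivp` and `least_squares` in place of the Commons Math routines [3]; the parameter values, time grid, and bounds are our assumptions. Note how every residual evaluation re-solves the differential equation, which is the extra computational cost described above.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def alpha_of_t(params, t_eval):
    """Solve da/dt = (k1 + k2*a^m)(1-a)^n, a(0) = 0, at the sample times."""
    k1, k2, m, n = params
    def rhs(t, y):
        a = min(max(y[0], 0.0), 1.0)   # clamp against tiny solver overshoot
        return [(k1 + k2 * a**m) * (1.0 - a)**n]
    sol = solve_ivp(rhs, (0.0, t_eval[-1]), [0.0], t_eval=t_eval,
                    rtol=1e-8, atol=1e-10)
    return sol.y[0]

true = np.array([1e-4, 1e-2, 1.0, 2.0])           # assumed parameters
t = np.linspace(0.0, 1800.0, 30)
data = alpha_of_t(true, t)                        # noise-free (t, alpha) pairs

# Each residual evaluation re-solves the ODE before comparing with the data.
fit = least_squares(lambda p: alpha_of_t(p, t) - data,
                    x0=[1e-3, 1e-3, 1.0, 2.0],
                    bounds=([0.0, 0.0, 0.5, 0.5], [1.0, 1.0, 4.0, 4.0]))
```

Replacing `data` with noisy conversion-time measurements gives the realistic p2 problem treated in Part 15.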

*Rationale*

Let’s look at a practical example of the type of thermoset curing where modeling would be very important. In this case, an epoxy mold compound (EMC) used in a new type of electronic package (called fan out wafer level package) is compression molded. The gel point (α_{gel}) occurs at a conversion of approximately 40%. During the compression molding process, the EMC must be cured to at least the gel point so the part will have dimensional stability after the molding process. In Figure 1, the Hitachi recommendation is that the degree of cure be at least 40% prior to releasing from the mold. Figure 1 shows that with increasing mold temperature, the time to reach 40% conversion decreases. At a mold temperature of 150°C the molding time is 100 seconds versus 350 seconds for a mold temperature of 120°C. After mold release, the EMC is gelled but not fully cured, requiring a second post mold bake step. From a manufacturing throughput perspective, the time in the mold is optimized to achieve the required 40% conversion and minimize warpage on cooling.

*Figure 1. Degree of cure (conversion) as a function of time for various mold temperatures for an epoxy mold compound (source: Hitachi Chemical) *

*Numerical results*

The example shows the importance of modeling the conversion versus time relationship for thermoset curing. In subsequent posts examples will be presented showing how the approaches discussed here are used to generate conversion-time plots as shown in Figure 2. Note the similarity in shape between the modeled curves in Figure 2 and the data in Figure 1.

*Figure 2. Degree of cure (conversion) as a function of time for various model parameters*

References

1. K. Horie et al., J. Polym. Sci., Polym. Chem. Ed. __8__, 1357 (1970).
2. S. Sourour and M. R. Kamal, Thermochim. Acta __14__, 41 (1976).
3. Commons Math: The Apache Commons Mathematics Library, https://commons.apache.org/


The post Thermoset Cure Kinetics Part 10: Autocatalytic Equations appeared first on Polymer Innovation Blog.

This post describes the kinetic equations used to characterize thermoset reactions that are autocatalytic in nature. The autocatalytic equations originate from a kinetic equation based on the chemistry of the epoxy-amine reaction and invoke assumptions that simplify the math. Epoxy-amine thermosets are in widespread use from 5-minute epoxy adhesives to matrix materials for advanced composites. They are also the most widely studied thermosets.

Referring to Schemes 1 and 2 in the previous post, the complete chemical equation, Eq. 1 below, is based on four discrete reactions: epoxy with primary amine catalyzed by alcohol produced in the cure reaction, epoxy with primary amine catalyzed by catalyst initially present, epoxy with secondary amine catalyzed by alcohol produced in the cure reaction, and epoxy with secondary amine catalyzed by catalyst initially present (see Ref. 1). k_{1}, k’_{1}, k_{2} and k’_{2} are the corresponding rate constants. The rate of consumption of epoxide dx/dt is given by

dx/dt = k_{1}a_{1}ex + k’_{1}a_{1}ec_{0} + k_{2}a_{2}ex + k’_{2}a_{2}ec_{0} (1)

where e is the molar concentration of epoxide, a_{1} the molar concentration of primary amine and a_{2} the molar concentration of secondary amine at time t. e_{0}, a_{0}, and c_{0} are initial concentrations of epoxide, primary amine and external catalyst. x is the epoxide consumed.

Assuming equal reactivity of all amine hydrogens and converting to a fractional concentration basis leads to the following autocatalytic equation (1)

dα_{Ep}/dt = (k_{1} + k_{2}α_{Ep})(1-α_{Ep})(B-α_{Ep}) (2)

where dα_{Ep}/dt is the rate of conversion of epoxide, α_{Ep} the fractional conversion of epoxide and B the ratio of amine hydrogen equivalents to epoxide equivalents. Note that the amine concentration α_{Am} is represented by the term (B-α_{Ep}). Equation 2 will be used to model the effects of stoichiometry.

Ideally the only reaction is an epoxide with an amine. When stoichiometric quantities of reactants are mixed B = 1 and α_{Ep} = α_{Am} = α, where α is the overall conversion in an amine-epoxide system. k_{1} is the rate constant for the externally catalyzed reaction [1] and k_{2} is the rate constant for the autocatalyzed reaction. Under these circumstances Eq. 2 reduces to

dα/dt = (k_{1} + k_{2} α)(1-α)^{2} (3)

This equation will be used to model stoichiometrically balanced reactions. It is also appropriate for the user who has control over, or at least knowledge of, the formulation. For thermosets of unknown composition one must resort to phenomenological equations based on the above considerations, such as

dα/dt = (k_{1} + k_{2}α^{m})(1-α)^{n} (4)

which contains the four variables k_{1}, k_{2}, m and n. Users of thermosets who do not have access to their composition will find this equation, first attributed to Sourour and Kamal (2), useful for analyzing cure data.
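As a sketch of how such a fit might be set up (assuming scipy is available; the "true" parameters, noise level and starting guesses below are hypothetical, chosen only to show the mechanics), one can generate noisy rate data from Eq. 4 and recover the four unknowns by nonlinear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

def kamal_sourour(alpha, k1, k2, m, n):
    """Eq. 4: phenomenological autocatalytic rate equation."""
    return (k1 + k2 * alpha**m) * (1.0 - alpha)**n

# Simulated "measurements": perfect rate data plus 1% proportional noise
rng = np.random.default_rng(seed=0)
alpha = np.linspace(0.02, 0.95, 80)
perfect = kamal_sourour(alpha, 1e-4, 1e-2, 1.0, 2.0)
noisy = perfect * (1.0 + 0.01 * rng.standard_normal(alpha.size))

# Nonlinear regression for the four unknowns k1, k2, m, n
popt, _ = curve_fit(kamal_sourour, alpha, noisy,
                    p0=[5e-5, 5e-3, 0.8, 1.8],
                    bounds=([0.0, 0.0, 0.1, 0.1], [1.0, 1.0, 3.0, 3.0]))
k1_fit, k2_fit, m_fit, n_fit = popt
```

With data this clean the recovered parameters should land close to the inputs; real DSC data will carry more noise, which is exactly what the later posts in this series explore.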

It should be pointed out that when k_{1} = 0, Eqs. 2-4 describe purely autocatalytic behavior without any influence of external catalyst. In practice k_{1} can be small compared to k_{2} but will have a finite value due to unavoidable impurities in the starting materials or even absorbed water. It should also be noted that when k_{2} = 0 these equations revert to *n*th order equations, broadening their applicability. For example Eq. 4 becomes

dα/dt = k_{1}(1-α)^{n} (5)

which describes the 2^{nd} order cure of a fast-reacting polyurethane with n = 2 (see Part 6 of this series *A Practical Example of Cure Kinetics in Action*).

In the next blog post we will describe the mathematical approaches taken to generate model cure data from Equations 2 and 3, as well as to fit cure data, eg from DSC or FTIR studies, to Eq. 4.

_____________

[1] k_{1} encompasses an underlying rate constant for the epoxy-amine reaction, eg k’_{1}, times the concentration of external catalyst, eg k_{1} = k’_{1}c_{0}.

References

- K. Horie et al., J. Polym. Sci., Polym. Chem. Ed. __8__, 1357 (1970).
- S. Sourour and M. R. Kamal, Thermochim. Acta __14__, 41 (1976).

The post Thermoset Cure Kinetics Part 10: Autocatalytic Equations appeared first on Polymer Innovation Blog.

The post Thermoset Cure Kinetics; Guest Post Series appeared first on Polymer Innovation Blog.

**Thermoset Cure Kinetics Part 10: Autocatalytic Equations**

**Thermoset Cure Kinetics Part 11: A Mathematical Approach to Kinetics Analysis**

**Thermoset Cure Kinetics Part 12: Modelling the Effect of Catalysis on Autocatalytic Kinetics**

**Thermoset Cure Kinetics Part 13: Modelling the Effect of Stoichiometry on Autocatalytic Kinetics**

**Thermoset Cure Kinetics Part 14: Analysis of Autocatalytic Systems with Unknown Composition: Part A; From Conversion and Rate of Conversion Data**

**Thermoset Cure Kinetics Part 15: Analysis of Autocatalytic Systems with Unknown Composition: Part B; From Conversion –Time Data**

**Biographies of the Presenters:**

**Bruce Prime, Ph.D.**

Dr. Prime is a consultant to industry and government. He has over 40 years’ experience developing polymeric materials and their processes. A focus of his work is the cure and properties of cross-linked polymer systems such as coatings, adhesives and electronic materials. His work is documented in over 50 publications and in the chapter on *Thermosets* in __Thermal Characterization of Polymeric Materials__ (E. A. Turi, ed., Academic Press, 1981 and 1997). He has a Ph.D. in chemistry from Rensselaer Polytechnic Institute with Bernhard Wunderlich. He spent 30 years at IBM where he led teams that developed polymer systems and processes for printer and information storage technologies. He retired as a Senior Scientist from the IBM Materials Laboratory in San Jose, CA in 1998. Bruce is a fellow of SPE and NATAS, and was the 1989 recipient of the international Mettler Toledo Award in Thermal Analysis. He is co-editor of the book __Thermal Analysis of Polymers: Fundamentals and Applications__ (J. D. Menczel and R. B. Prime, eds.), John Wiley & Sons, 2009. Dr. Prime and Dr. Gotro co-authored a book chapter entitled “Thermosets” published in the Encyclopedia of Polymer Science and Technology, John Wiley and Sons (2017).

**John Avila, Ph.D.**

Dr. John Avila is a retired professor who taught Computer Science for 18 years at San Jose State University. Before that he worked for over 20 years in industry as a numerical analyst focusing on very large numerical problems involving the use of supercomputers. His background includes General Electric’s Nuclear Energy Division, and he also worked as a contractor at NASA’s Ames Research Center in their Supercomputer Center. Currently he is focused on the numerical solution of the kinetics equations in this blog and the development of apps for the iPhone and iPad.


The post Thermoset Cure Kinetics Part 9: Review of Thermoset Cure Kinetics appeared first on Polymer Innovation Blog.

Like cooking, thermoset curing is a time-temperature process. Kinetics addresses relationships between time and temperature. For the next seven weeks we continue the series of 8 blogs on Thermoset Cure Kinetics posted in October, November and December 2014. The previous series covered the basic concepts of kinetics, including conversion or degree of cure, rate of conversion, rate constants, activation energy, and reaction orders. We discussed two common types of chemical reactions encountered in thermoset cure: *n*^{th} order and autocatalytic. And we discussed time-temperature-superposition kinetics.

On the analytical side the utility of differential scanning calorimetry (DSC) was highlighted since it has the ability to measure both conversion and rate of conversion. Both isothermal and multiple heating rate kinetic measurements were discussed. We described the use of thermogravimetric analysis (TGA) to measure the progress of cure when the formation of a crosslink is accompanied by a discrete weight loss, for example in the cure of phenolic resins. The capability of dynamic mechanical analysis (DMA) was also described, via measurement of the elastic modulus, which is related to crosslink density, or of the glass transition temperature (T_{g}) as a measure of conversion through the T_{g}-conversion relationship.

In Part 6 of this series *A Practical Example of Cure Kinetics in Action* we highlighted the case study of a fast-reacting polyurethane resin used in a pultrusion process. Multiple heating rate measurements yielded an estimate of the activation energy, which was then used in the construction of a master curve of conversion vs. time at a reference temperature. Polyurethanes typically follow 2^{nd} order kinetics due to the reaction of a difunctional isocyanate with a trifunctional or higher-order alcohol, as embodied in Eq. 1.

dα/dt = k(1-α)^{2} (1)

where α is conversion, *t* is time and *k* is the rate constant. Integration of Eq. 1 yields

(1-α)^{-1} = 1 + kt (2)

Thus for a reaction that follows 2^{nd} order kinetics a plot of (1-α)^{-1} vs. time will be linear with an intercept of *1* and a slope equal to *k*, as illustrated in Figure 1 for the fast-reacting polyurethane resin, confirming 2^{nd} order cure kinetics. A kinetic equation was developed and conversion plotted along a typical pultrusion time-temperature profile.

*Figure 1. Plot of conversion – time data from master curve according to 2 ^{nd} order kinetics equation.*
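This linearization is straightforward to reproduce numerically. The sketch below uses a made-up rate constant (not the pultrusion data) to generate exact 2nd-order conversion-time data from the integrated form 1/(1-α) = 1 + kt, and then recovers *k* from the slope:

```python
import numpy as np

k_true = 0.05   # hypothetical 2nd-order rate constant, 1/s
t = np.linspace(0.0, 100.0, 50)

# Integrated 2nd-order kinetics: 1/(1 - alpha) = 1 + k*t
alpha = 1.0 - 1.0 / (1.0 + k_true * t)

# Linearize and fit a straight line: slope gives k, intercept should be 1
y = 1.0 / (1.0 - alpha)
slope, intercept = np.polyfit(t, y, 1)
```

For real data the quality of this straight-line fit is itself the test of whether the cure really follows 2nd-order kinetics.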

Some other thermosetting systems, most notably epoxies, follow autocatalytic kinetics. As illustrated in Scheme 1 (epoxy reacting with primary amine) and Scheme 2 (epoxy reacting with secondary amine) the reaction of a difunctional epoxy with a tetrafunctional amine results in a bond that adds to the size of the growing chain and eventually leads to crosslinking. But it also produces an alcohol that catalyzes further reaction of the epoxy with amine. The production of this internal catalyst is the basis for autocatalysis.

**Scheme 1**

**Scheme 2**

Equation 3 shows a common autocatalytic rate equation proposed by Sourour and Kamal (1) that is based on epoxy-amine chemistry (Horie et al. (2)):

dα/dt = (k_{1} + k_{2}α^{m})(1-α)^{n} (3)

where k_{1} is the catalyzed rate constant, attributed to added catalyst, impurities and/or fiber or particle surfaces that have a catalytic effect; k_{2} is the autocatalyzed rate constant, due to catalytic species produced by the reaction; and *m* and *n* are reaction orders. The chemistry for a stoichiometrically balanced reaction suggests that *m* = 1 and *n* = 2. For real systems with multiple components and possible imbalances in stoichiometry, values are often close but not identical to these. Since each epoxy-amine reaction produces one hydroxyl, the alcohol concentration can be represented by the conversion α. Note that there are four unknowns, k_{1}, k_{2}, m and n, necessitating nonlinear regression analysis to fit cure data to this equation.

In subsequent postings in this series we will add to our kinetics toolbox by developing mathematical approaches to kinetics and demonstrate their utility in two different ways. First we will explore the effects of added catalyst and stoichiometry on cure behavior, phenomena that could be of use to those designing or formulating thermosetting systems or to those just interested in the behavior of thermosets. We will also look at the fitting of epoxy-amine cure data to Eq. 3 by mathematically generating cure data with random noise to simulate actual experimental data. In this series we focus on isothermal cure and mathematically generated cure data and plan to examine the effects of temperature and the analysis of actual cure data at a later time.

References

- S. Sourour and M. R. Kamal, Thermochim. Acta __14__, 41 (1976).
- K. Horie et al., J. Polym. Sci., Polym. Chem. Ed. __8__, 1357 (1970).


The post Happy Labor Day appeared first on Polymer Innovation Blog.


The post Polymers in Electronic Packaging: Introduction to Filler Dispersion Techniques appeared first on Polymer Innovation Blog.

The last several posts have provided details of the rheological properties of highly filled thermoset resins. Fumed silica was shown to be a very effective rheology modifier when a yield point and shear thinning are required or to control sagging, such as for die attach or coil bond adhesives. Silica fillers are used extensively to reduce the coefficient of thermal expansion (CTE) in many types of thermoset polymers used in electronics. A good example is underfills used in flip-chip packages, where the CTE needs to be as low as possible to reduce the CTE mismatch between the semiconductor chip and the substrate.

This post will provide an introduction to some of the methods used to disperse fillers into thermoset resins. In order to adequately disperse many types of fillers, high shear mixing is required. There are many commercially available high shear mixers to choose from. The right mixer depends on your volume, the type of fillers to disperse and the need for degassing during mixing (which requires a vacuum mixer). The main types of high shear mixers are:

- Single shaft with dispersing blade (Cowles blade)
- Multiple shaft mixer with dispersing blade and sweep blades (dual and triple shaft mixers are available)
- Double planetary mixers
- Double planetary with dispersing blades
- Three roll mills

This post will cover the simple case where high shear is required without vacuum degassing. The mixer consists of a single shaft with a Cowles dispersing blade mounted on the end. The image below is from Mixer Direct (Mixer Direct high speed disperser).

In the image above, the mixing vessel is not shown in order to visualize how the dispersing blade would be positioned. Depending on the volume to disperse, the horsepower and mixing vessel size need to be determined. Once the formulation is close, manufacturing trials are required to “fine-tune” the filler loading and order of addition to obtain optimal rheological properties. Typically, the low viscosity resin(s) are added first and blended together. A low resin viscosity will aid in both dispersing and wetting of the fillers.

The following guidelines were developed for dispersing fumed silica into liquid resins. The fumed silica loading can vary from 1-2% for inks and coatings (to get good anti-sagging properties) up to 4-8% for adhesives and sealants where a good yield point and significant shear thinning are required. High shear dispersion is required to achieve optimum dispersion. The tip speed (peripheral velocity) is a critical variable in the dispersing process. The tip speed (in feet/minute or fpm) can be calculated:

**fpm = RPM X 0.262 X blade diameter (inches)**
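The calculation is simple enough to wrap in a small helper; the RPM and blade diameter below are made-up example numbers, not a recommendation:

```python
import math

def tip_speed_fpm(rpm, blade_diameter_in):
    """Blade tip speed (peripheral velocity) in feet per minute.

    The 0.262 factor is pi/12: it converts the circumference
    (pi * diameter, in inches per revolution) to feet per revolution.
    """
    return rpm * 0.262 * blade_diameter_in

# Hypothetical example: 10-inch blade at 1,150 RPM
speed = tip_speed_fpm(rpm=1150, blade_diameter_in=10.0)   # about 3,013 fpm
in_typical_range = 1800.0 <= speed <= 6000.0
```

Here the example lands comfortably inside the typical 1,800 - 6,000 fpm window quoted below.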

Typical tip speeds are in the range of 1,800 – 6,000 fpm. Degussa recommends starting at a tip speed of at least 30 ft/sec (1800 fpm) for dispersing the AEROSIL grades of fumed silica [1]. Careful experiments need to be performed to determine the optimal tip speed to get proper dispersion and rheological properties. For most high speed dispersing applications a Cowles dispersing blade is used and is shown in the following image:

The “saw tooth” type blade provides high shear in the mixing vessel. Cowles blades rotate in a clockwise manner and usually have a rotation arrow stamped on the blade to ensure proper orientation of the mixing shaft. Most dispersing blades are easily removed for cleaning and for optimizing the size depending on the mixing vessel size and volume.

The second critical variable is the geometry in the mixing vessel. As shown in Figure 1, the placement of the dispersing blade relative to the vessel walls and the bottom is key to get good flow and dispersion.

*Figure 1. Blade diameter and location for optimal mixing (sources: Degussa [1] and MorehouseCowles [2])*

The objective of the dispersing process is to establish a strong vortex that extends down to the blade while ensuring good material movement into the dispersing blades and off the bottom of the tank. If the blade is too small, the material will often cling to the vessel walls and a weak vortex will form resulting in long dispersion times. As shown in Figure 1, four mixing zones should be active. Zones 1 and 2 pull the material off the vessel walls and into the dispersing blade. Zones 3 and 4 pull the material off the bottom of the vessel and into the dispersing blades.

In Figure 1, a range of values is presented. Degussa recommends the blade:vessel diameter ratio to be in the range of 1:2 to 1:3 (ref [1]). MorehouseCowles dispersion instructions suggest a blade:vessel diameter ratio of 1:3 (ref [2]). Degussa recommends that for fumed silica the blade be 0.5-1 blade diameters from the bottom of the vessel. More generally, MorehouseCowles recommends the dispersing blade be 1 to 1.5 diameters off the bottom of the tank.

Degussa recommends a tip speed of at least 1800 fpm and MorehouseCowles suggests tip speeds in the range of 4,000 – 6,000 fpm. The lesson here is that for each formulation, careful experiments should be performed to evaluate all of the appropriate process parameters, such as tip speed and the dispersing blade geometry and placement.
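The geometry rules of thumb above can be encoded in a small helper function. This is purely illustrative (it simply restates the quoted ranges) and is no substitute for the careful experiments recommended:

```python
def blade_geometry(vessel_diameter_in):
    """Rule-of-thumb dispersing-blade geometry from the guidelines above.

    Blade diameter: 1/3 to 1/2 of the vessel diameter.
    Bottom clearance: 0.5 blade diameters (Degussa, fumed silica)
    up to 1.5 blade diameters (MorehouseCowles, general use).
    """
    blade_min = vessel_diameter_in / 3.0
    blade_max = vessel_diameter_in / 2.0
    clearance_min = 0.5 * blade_min
    clearance_max = 1.5 * blade_max
    return (blade_min, blade_max), (clearance_min, clearance_max)

# Hypothetical 36-inch mixing vessel
(blade_lo, blade_hi), (clear_lo, clear_hi) = blade_geometry(36.0)
```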

Guidelines for dispersing:

- **Blade speed too slow**: long dispersion times and potential for material settling
- **Blade speed too fast**: pulls air into the product, causes excess heat build-up and a low quality dispersion
- **Blade too small**: material near the vessel wall will not be pulled into the disperser, poor overall product movement, and potential for filler settling
- **Blade too large**: pulls air into the product and does not achieve optimal product movement into the dispersing blades
- **Blade too low in mixing vessel**: decreased product movement at the top of the tank and potential dead spots that will not be dispersed properly
- **Blade too high in mixing vessel**: decreased flow off the bottom of the tank, potentially allowing filler to settle in the bottom region without being dispersed; poor overall material movement and potential for air entrapment

After Labor Day, I am pleased to have Dr. R. Bruce Prime back again for another detailed series of guest posts on thermoset cure kinetics.

References:

- Degussa, Successful Use of Fumed Silica in Liquid Systems
- MorehouseCowles presentation “Fundamentals of Dispersion”


The post Polymers in Electronic Packaging: Impact of Particle Geometry on Rheological Properties of Highly Filled Compositions appeared first on Polymer Innovation Blog.

In previous posts, the characterization and rheological properties of highly filled systems were discussed. In particular, the rheological response of epoxies filled with the common thixotrope fumed silica was covered in detail. Recall that the particle-particle interactions during shearing control to a large extent the viscosity-shear rate relationship. When using fillers to modify both the rheological and final mechanical properties, there are other fillers to consider. Remember that fumed silica is typically only added in small quantities. So what are some other common fillers, and how do they impact both the rheology and the final properties of the cured formulation?

*Figure 1. Viscosity as a function of aspect ratio and filler volume fraction. (Silflake 135 and atomized silver powder SEM images from Technic, Inc.)*

The data in Figure 1 tells an interesting story. The relative viscosity is plotted so as to normalize all of the starting viscosities at 1, clearly demonstrating the roles of both filler loading (volume fraction) and filler aspect ratio. The most common filler is fused silica, which would have the geometry of the silver powder in Figure 1. The spherical shape has the least impact on the viscosity, since the silver or silica spheres can easily roll past one another during shearing. Even for spherical particles, though, the loading level causes a sharp increase in the viscosity. High filler loadings are required for many applications, for example to reduce the coefficient of thermal expansion or to impart electrical or thermal conductivity.

For electrically conductive adhesives, silver flake is the preferred choice to get the maximum amount of particle-particle interactions (in this case flake-flake interactions). Think of silver flakes as poker chips: one can envision that it is much easier to get large interactions with poker chips than with marbles. The large amount of interaction between flakes helps achieve the percolation threshold at lower filler loadings and facilitates electrical conductivity. However, the “poker chip” shape also carries a significant viscosity penalty, as seen in Figure 1. Rod-like particles have the largest impact on the viscosity, which is potentially the reason nanotubes are typically loaded only in small percentages; rod-like fillers would severely limit the ability to dispense a formulation even at relatively low loadings.

A clever way to manage the particle geometry dilemma is to use a hybrid filler geometry. Let’s say you need electrical conductivity, which requires a relatively high filler loading, but can’t tolerate a huge viscosity because of dispensing constraints. One way around this is to use a combination of flake and powder as shown in Figure 2.

*Figure 2. SEM image of Silflake 771 94-774 from Technic, Inc. *

As clearly observed in Figure 2, the silver filler consists of a combination of flake and powder. Think of this as “raisin bran.” The silver flake will aid in getting good flake-flake interactions, and the silver powder will help lower the viscosity while also participating in the development of electrical conductivity. Depending on the ratio of flake to powder, the viscosity curve will lie somewhere in-between the sphere and the plate curves in Figure 1. Working with your silver supplier, it might be possible to specify a given ratio of flake to powder to achieve the rheology and electrical performance required in your formulation. Other silver suppliers such as Metalor also provide “raisin bran” type silver fillers.

Particle size is another variable to consider as you are determining the role of fillers on the rheological properties.

*Figure 3. Schematic of the viscosity as a function of shear stress for three different particle sizes (at constant filler loading).*

The particle size also has an impact on the viscosity profile. In the case shown in Figure 3, the dashed lines represent the yield points. The larger particles have a lower yield point while the smaller particles have a higher yield point. This seems backwards until you consider the surface area: as discussed earlier, the yield point is largely driven by a network structure resulting from particle-particle interactions. Smaller fillers have higher surface areas and thus potentially more particle-particle interactions. So the formulator has another “dial to turn” when developing highly filled materials.

*Figure 4. Schematic of the viscosity as a function of shear rate for three different particle sizes (at constant filler loading).*

As we have shown before, the viscosity-shear rate curve has a slightly different shape, but the impact of the particle size on the viscosity is the same: over a given shear rate range larger particles give a lower viscosity and smaller particles a higher viscosity, as shown in Figure 4.

**In summary:**

- Filler aspect ratio plays an important role in controlling the viscosity shear rate relationship.
- The particle size is important in controlling the yield point
- The filler loading amount controls both the viscosity magnitude as well as the development of the yield point.

Carefully understanding the role of fillers, and then using this understanding, will allow the formulator to achieve both the desired rheological properties critical for dispensing and optimized properties in the final fully cured material. Fillers in action. Cool stuff!

