Particle Sizing in the Submicron Range by Dynamic Light Scattering†

† Pleinlaan 2, 1050 Brussel, Belgium. Received May 24, 1993.

The application of dynamic light scattering to the determination of particle size distributions in the submicron range is reviewed. First, the basic principles and assumptions used in this application are presented. The practical performance is illustrated with results obtained from round-robin studies. A short comparison with particle sizing by static light scattering is included. Finally, new developments for on-line or in-situ characterization of concentrated and opaque dispersions are briefly presented.


Introduction
In the last two decades, dynamic light scattering (DLS) techniques have been applied in an increasing number of applications 1-5). In this contribution, we will focus on the application of DLS - also referred to as quasi-elastic light scattering (QELS) or photon correlation spectroscopy (PCS) - for the determination of particle sizes. Although the technique was not at all developed for the determination of particle sizes - it was actually meant as a research tool for probing the dynamics of polymers and colloidal systems, and as a way of studying critical phenomena and the statistical nature of light - particle sizing became the main application of DLS. The increasing success of PCS for particle sizing is based on the fact that it provides absolute estimations in very short measuring times without elaborate sample preparation procedures, and that easy-to-use commercial equipment is available.
In this paper, the basic principles of the application of DLS for particle sizing will be briefly reviewed. The practical performance will be illustrated with case studies. Finally, a brief summary of new developments is presented.

Spectral analysis
In a typical experiment, the particles are illuminated with a collimated beam, called the incident beam, and part of the radiation scattered under an angle θ with respect to the incident beam is registered with an ultra-sensitive detector, very often a photomultiplier tube (Fig. 1).
Fig. 1 Schematic DLS set-up: incident beam, sample and scattered beam

In a DLS experiment, the dynamic information of the scatterers can be deduced from the spectrum, i.e. from the "amount" of light as a function of wavelength λ, frequency ν or circular frequency ω = 2πν. The incident light is a monochromatic laser source, i.e. a light source with a single wavelength or circular frequency ω0. The spectrum S(ω) of the incident light therefore has a single peak at ω = ω0 (Fig. 2). The first question that arises is: what is the spectrum of the scattered light? The answer is that when a monochromatic beam with frequency ω0 and incident wave vector k_i (with modulus k_i = 2πm1/λ0, where λ0 is the wavelength in vacuo and m1 is the refractive index of the propagation medium) shines on a particle in motion, the particle emits scattered radiation in all directions. A fixed observer will register a slightly different frequency ω = ω0 + Δω, whereby the frequency shift Δω is nothing more than an (optical) Doppler shift (Fig. 2). The frequency shift depends on the velocity v of the particle and the angle of observation:

Δω = q · v (1)

In Eq. (1), q = k_s − k_i is the so-called scattering vector, with modulus q = 4πm1 sin(θ/2)/λ0.
In the application of particle sizing, the particles do not all move with the same velocity along the same direction. On the contrary, the investigated particles, which are usually smaller than about 1 μm, are in constant thermal or random Brownian motion. Typical for such a motion is that the particles often change their direction of motion and their speed. Intuitively, one can then predict that the spectrum of scattered light will be a superposition of different positive (particles moving towards the detector) and negative (particles moving away from the detector) frequency shifts Δω. Therefore, the spectrum of light scattered by particles in Brownian motion looks like a bell-shaped curve (Fig. 3).

In order to obtain reliable values of the particles' diffusion coefficient, accurate measurements of the spectrum are required. In the early experiments, S(ω) was determined with spectrum analysers. The accuracy of DLS experiments was significantly enhanced by the use of digital correlators instead of spectrum analysers.

Intensity autocorrelation
Typical DLS measurements are performed in the time domain. Information theory tells us that the dynamic properties of the studied scatterers can be obtained equally well from the Fourier transform of the spectrum. The latter is the autocorrelation function G2(τ) of the scattered intensity I(t), i.e. the average value of the product of the intensity registered at an arbitrary time t, I(t), multiplied by the intensity registered at a time delay τ later, I(t + τ):

G2(τ) = ⟨I(t) · I(t + τ)⟩ (3)

The brackets denote an average which is performed practically by forming the product in (3) for a great number of times t.

Some basic equations
For a dispersion of monodisperse particles in Brownian motion, the intensity autocorrelation function G2(τ) is modelled by

G2(τ) = A + B exp(−2Γτ) (4)

In Eq. (4), A and B can be considered as instrumental factors with B < A. The ratio B/A ≤ 1 is often designated as the intercept, as a figure of merit (in %), or as a signal-to-noise ratio. The decay rate Γ is linked to the translational diffusion coefficient D by

Γ = Dq² (5)

Note that it is the diffusion coefficient that is determined, and not the particle size. The latter quantity can only be obtained by relating the diffusion coefficient to the particle size. Unfortunately, there is no general relation that applies in all situations. The frequently used Stokes-Einstein expression for the diffusion coefficient D0,

D0 = kT/(3πηd) (6)

where k is Boltzmann's constant, T the absolute temperature, η the viscosity of the suspension liquid and d the particle diameter, has only limited validity: it applies only to non-interacting, spherically shaped particles. The effect of particle interactions on the diffusion coefficient is discussed in section 3.3. Polydispersity is treated in section 3.4.
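As a sketch of how Eqs. (5) and (6) are combined in practice, the following snippet converts a measured decay rate into a hydrodynamic diameter. The instrument parameters (He-Ne wavelength, aqueous medium at 25 °C) and the decay rate are illustrative assumptions, not values from the text.

```python
import math

def hydrodynamic_diameter(gamma, theta_deg, wavelength_nm=633.0,
                          n_medium=1.33, temp_K=298.15, visc_Pa_s=0.89e-3):
    """Convert a measured decay rate Gamma (1/s) to a hydrodynamic
    diameter (m) via Gamma = D q^2 (Eq. 5) and Stokes-Einstein (Eq. 6)."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    lam = wavelength_nm * 1e-9
    # Scattering vector modulus q = 4*pi*m1*sin(theta/2)/lambda0
    q = 4.0 * math.pi * n_medium * math.sin(math.radians(theta_deg) / 2.0) / lam
    D = gamma / q**2                 # translational diffusion coefficient
    return kB * temp_K / (3.0 * math.pi * visc_Pa_s * D)  # d = kT/(3*pi*eta*D)

# Example: a hypothetical decay rate measured at 90 degrees
d = hydrodynamic_diameter(gamma=2.0e3, theta_deg=90.0)
print(f"hydrodynamic diameter = {d*1e9:.0f} nm")
```

The same routine can be run in reverse to predict the expected decay rate for a given particle size before an experiment.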

3.1 Sample concentration

3.1.1 Dispersion
In order to characterize particles with PCS, the first task that arises is to prepare a suitable sample. Although this is easy compared to the sample preparation procedures required for, e.g., Transmission Electron Microscopy (TEM), it demands due caution. In particular, the particle characteristics (dimensions, interactions) should not be modified by the dispersion procedure. Much can be said about this requirement: it may involve the whole field of colloid chemistry, to which the interested reader is referred. Practically, one has to choose the right solvent and/or dispersing agent and to ensure that no coagulation or physical or chemical change of the dispersed phase occurs.

3.1.2 Single scattering - multiple scattering
The particles must scatter independently. This requirement implies that effects of multiple scattering are to be avoided. Multiple scattering is the phenomenon whereby light scattered at an angle θ1 by a first particle is scattered a second (or even a third or fourth) time at an angle θ2 by another particle (Fig. 4).

Fig. 4 Single and multiple scattering

Since θ1 and θ2 can take any value, the scattering vector magnitude q of the light impinging on the detector is no longer fixed. As a result, the diffusion coefficient and particle size can no longer be determined unambiguously from the decay rate of the autocorrelation function (Eq. 5). Obviously, multiple scattering effects are expected to become more important at relatively high particle concentrations.
The net effect as a function of increasing concentration is as follows:
a. The instrumental factor B/A decreases 9). This factor depends on the degree of constructive interference of the light waves impinging on the detector area. The largest value is obtained when all scattered rays have the same scattering vector q, that is the same scattering angle θ, and only single scattering occurs. In the event of multiple scattering, the scattered waves with different θ values interfere destructively, resulting in a decrease of the instrumental (coherence) factor B/A.
b. At higher concentrations, mostly multiple scattered light reaches the detector. In this case, the particle size estimated by PCS becomes smaller. This is due to the fact that the autocorrelation function G2(τ) decays faster for a multiple scattered signal than for a single scattered one 10). Thus multiple scattering limits the application of the technique to very dilute dispersions. In section 4, some tricks that allow measurements in concentrated dispersions, whereby multiple scattering effects are circumvented, are presented.

3.1.3 Number fluctuations
At low concentrations, another complication can occur.In the derivation of Eq. (4), it is assumed that the registered fluctuations in the scattered intensity arise only from a change in position of the particles inside the measuring or scattering volume.
If the particle number concentration becomes low, additional intensity fluctuations are caused by particles moving in and out of the measuring volume 11). These effects become significant when the average number of particles N in the scattering volume does not satisfy the condition N² ≫ N. In practice, it is assumed that this condition is fulfilled for N larger than about 1000. For a given concentration of dispersed material and a fixed scattering volume V, N decreases with particle size and can be estimated by

N = 6φV/(πd³) (7)

In Eq. (7), φ is the volume fraction of the dispersed phase, related to the concentration in weight per unit volume c and the particle density ρ by c = ρφ. Eq. (7) predicts that, for a volume fraction φ of 10⁻⁴ (at higher volume fractions, multiple scattering may bias the measurements) and for a typical value of the scattering volume V of 10⁻⁶ cm³, effects of number fluctuations are to be expected for particle diameters above about 500 nm. In a round-robin comparison, a monodisperse latex with a TEM diameter of 804 nm was studied by different laboratories 9). At a concentration of 2 × 10⁻⁴ g/cm³, or a volume fraction of about 2 × 10⁻⁴, the PCS diameters ranged from 1082 to 1372 nm. This overestimation can be explained as follows. Number fluctuations lead to an additional time-decaying term in the measured intensity autocorrelation function. Since the characteristic decay of this additional term is usually much slower than the decay attributed to the Brownian motion of the particles 11), the average particle size, proportional to the average decay time, will be overestimated when the effect of number fluctuations is neglected.
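The estimate N = 6φV/(πd³) is easy to evaluate numerically. The sketch below uses the values quoted above (φ = 10⁻⁴, V = 10⁻⁶ cm³) together with the N > 1000 rule of thumb; these numbers are taken as given, not as universal settings.

```python
import math

def particles_in_volume(phi, d_m, V_m3=1e-12):
    """Average number of particles N in the scattering volume, Eq. (7):
    N = 6*phi*V/(pi*d^3), with phi the volume fraction of dispersed phase.
    Default V = 1e-6 cm^3 = 1e-12 m^3."""
    return 6.0 * phi * V_m3 / (math.pi * d_m**3)

for d_nm in (100, 500, 1000):
    N = particles_in_volume(phi=1e-4, d_m=d_nm * 1e-9)
    flag = "OK" if N > 1000 else "number fluctuations expected"
    print(f"d = {d_nm:4d} nm: N ~ {N:9.0f}  ({flag})")
```

Consistent with the text, the N > 1000 condition breaks down between 500 nm and 1 μm at this volume fraction.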
Another complication that arises for larger particles is sedimentation. Micron-sized particles will sediment during the PCS experiment if their density difference with the medium is large enough. The sedimentation rate v_s can be estimated using Stokes' law:

v_s = Δρ g d²/(18η) (8)

where Δρ is the difference in density between the dispersed phase and the dispersion medium, and g is the gravitational acceleration. Eq. (8) predicts that a 1 μm diameter particle sediments at a rate of 1 μm/s in water for a density difference Δρ of about 2 g/cm³. Since a typical vertical linear dimension of the measuring volume is about 100 μm, it takes no more than 2 min for such a particle to sediment through the measuring volume. Particularly in a polydisperse sample, the larger (micron-sized) particles may sediment while the smaller ones (d < 0.5 μm) do not. As a result, the average PCS diameter decreases as a function of the measuring time. Therefore, it is advisable to check the sample at the end of a PCS experiment to see whether or not a sediment has deposited at the bottom of the test tube.
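A quick numerical check of Stokes' law, v_s = Δρ·g·d²/(18η), with the numbers used above (a 1 μm particle, Δρ ≈ 2 g/cm³, water viscosity assumed to be 10⁻³ Pa·s):

```python
def stokes_sedimentation_rate(d_m, delta_rho, g=9.81, eta=1.0e-3):
    """Stokes sedimentation rate, Eq. (8): v_s = delta_rho*g*d^2/(18*eta).
    SI units: d in m, delta_rho in kg/m^3, eta in Pa*s."""
    return delta_rho * g * d_m**2 / (18.0 * eta)

# 1 um particle, density difference ~2 g/cm^3 (2000 kg/m^3) in water
v = stokes_sedimentation_rate(d_m=1e-6, delta_rho=2000.0)
transit_s = 100e-6 / v   # time to cross a ~100 um measuring volume
print(f"v_s = {v*1e6:.1f} um/s, transit time ~ {transit_s/60:.1f} min")
```

The computed transit time of well under 2 min confirms the estimate in the text.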
From the discussion in sections 3.1.2 and 3.1.3, it appears that the concentration of dispersed material must not be too high, in order to avoid complications from multiple scattering, and not too low, to avoid bias from number fluctuations. As a rule of thumb, a volume fraction of dispersed material in the range 10⁻⁴ to 10⁻⁵ fulfils the requirements for particle sizes below about 500 nm. For larger particles, it is not always possible to find a concentration that is neither too low nor too high. For sizes above 1 μm, a concentration suitable for PCS determinations can only be found in exceptional cases, and even then one must keep in mind that biasing due to sedimentation may occur.

3.2 Narrow size distributions
One of the strong points of PCS is that it allows determination of particle sizes on an absolute basis, i.e. without calibration, in only a few minutes. For narrow size distributions, it is difficult to imagine a faster, more repeatable and more accurate technique for sizes below about 500 nm. Both the repeatability and the accuracy are typically better than ±2 %. As a result, some commercially available standards, certified in the past by TEM measurements, are nowadays increasingly certified by PCS measurements. Especially since PCS determinations do not require calibration, improper labelling caused by calibration errors in TEM determinations has been revealed by PCS measurements 12). Typical examples of narrow size distributions that have been characterized very successfully by PCS are polystyrene latex dispersions, microemulsions and liposomes, to name just a few, and there is a continuing need to measure such systems with particle sizes in the submicron range.

3.3 Particle interaction
As mentioned, one of the basic assumptions is that the particles scatter independently. Besides multiple scattering, there is a second complication that leads to dependent scattering, namely particle interaction 13-14). The effect of particle interaction depends on the average interparticle distance and hence on particle concentration, much like the effect of multiple scattering. Hence the question arises: which of the two effects comes first with increasing concentration? The answer depends on particle size. Since single-particle scattering power increases dramatically with particle size, multiple scattering effects will occur first for the larger particle sizes. For a fixed volume fraction, the average interparticle distance decreases with decreasing particle size; hence the effect of interaction will be less pronounced for the larger sizes. On the other hand, smaller particles, typically with diameters below 100 nm, scatter much less light, so that dispersions of such particles often do not scatter enough for reliable PCS measurements. In these cases, the particle concentration can be increased to volume fractions up to 0.1 without effects of multiple scattering. However, at these concentrations, particle interaction effects arise. Particle interaction affects the diffusion coefficient. For small particles at high concentration (average interparticle distance small compared to the inverse scattering vector magnitude q⁻¹), it is the collective diffusion coefficient Dc of an ensemble of interacting particles that is determined by PCS 15-16).
Qualitatively, the effect of interactions can be summarized as follows. Repulsive interactions, such as hard-sphere, electrostatic or steric interactions, lead to an increase of Dc with concentration, whereas for attractive, Van der Waals type interactions, Dc decreases with concentration. In these circumstances, particle sizes can in principle only be determined by extrapolating measurements of the collective diffusion coefficient Dc as a function of particle concentration to infinite dilution. The particle size is then estimated from the extrapolated diffusion coefficient D0 with the Stokes-Einstein equation. If, at finite concentration, an apparent particle diameter is calculated from the collective diffusion coefficient Dc, this apparent diameter will be underestimated for repulsive interactions and overestimated for attractive interactions.
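The extrapolation to infinite dilution is commonly performed with a linear virial-type model, Dc(φ) = D0(1 + kD·φ), valid at low volume fraction; this model and all numbers below are illustrative assumptions, not data from the text.

```python
import numpy as np

# Hypothetical collective diffusion coefficients (m^2/s) measured at
# several volume fractions phi of a repulsively interacting dispersion
# (kD > 0, so Dc increases with concentration).
phi = np.array([0.02, 0.04, 0.06, 0.08, 0.10])
D_c = 4.0e-12 * (1.0 + 1.5 * phi)     # synthetic data: D0 = 4e-12, kD = 1.5

# Linear fit Dc = D0 + (D0*kD)*phi; the intercept is the infinite-dilution
# value D0, which can then be used in the Stokes-Einstein equation.
slope, D0 = np.polyfit(phi, D_c, 1)
print(f"extrapolated D0 = {D0:.3e} m^2/s, kD = {slope/D0:.2f}")
```

With real data, the scatter of the points around the fitted line also indicates whether the linear model is adequate over the concentration range used.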

3.4 Polydispersity analysis
Since PCS proved so successful for the fast determination of average particle sizes, much effort has been spent - especially for narrow size distributions - on the determination of the full particle size distribution. It turned out, however, that the inversion of intensity autocorrelation functions for particle size distributions is considerably more difficult 17-35).
In the application of particle sizing in polydisperse systems, it is assumed that all dispersed particles are homogeneous spheres.The only difference between individual particles is their size or diameter.

3.4.1 Autocorrelation function
The extension of Eq. (4) to non-interacting homogeneous spherical particles is as follows. The intensity autocorrelation function G2(τ) is related to the modulus of the field autocorrelation function g1(τ) by a Siegert relation (if the number of particles N in the measuring volume V is large enough):

G2(τ) = A + B g1²(τ) (9)

For non-interacting monodisperse spherical particles, g1(τ) is an exponentially decaying function:

g1(τ) = exp(−Γτ) (10)

Substitution of Eq. (10) in Eq. (9) then yields Eq. (4). Since the decay rate Γ is inversely proportional to the particle diameter (see Eqs. 5 and 6), there are different decay rates Γi, inversely proportional to the different particle diameters di, in the case of non-interacting homogeneous spherical particles with different sizes. Hence Eq. (10) can be written as a sum of exponentials:

g1(τ) = Σi ci exp(−Γiτ) (11a)

In Eq. (11a), the coefficient ci represents the normalized intensity weight of the particles with diffusion coefficient Di = Γi/q². The continuous form of Eq. (11a) is

g1(τ) = ∫ C(Γ) exp(−Γτ) dΓ (11b)

where C(Γ) represents the normalized intensity-weighed distribution of decay rates. Typical polydispersity data analysis generally involves two steps. In the first, the modulus of the field autocorrelation function g1(τ) is estimated from the experimentally measured autocorrelation function G2(τ). In the second step, Eq. (11a) or Eq. (11b) is inverted for the distribution of decay rates. One of the limitations of the resolution comes from the extremely ill-conditioned nature (in the mathematical sense) of this Laplace inversion. Practically, very small differences in g1(τ) may result in quite different particle size distributions after inversion; in other words, quite different types of particle size distributions correspond, within typical experimental accuracy, to the same autocorrelation function 36-37). Therefore, not only are accurate data required, but the necessary care has to be exercised in both steps of the data analysis procedure. Many efforts have been spent on the second step, i.e. the Laplace inversion, while less attention has been paid to the first one. We will now discuss both steps in more detail.
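The ill-conditioning is easy to demonstrate numerically. The sketch below compares g1(τ) for a monodisperse sample with Γ = 1000 s⁻¹ against a bimodal mixture whose decay rates differ by ±20 %; the two correlation functions differ by only about 1 %, i.e. by less than typical experimental noise. All numbers are illustrative.

```python
import numpy as np

tau = np.logspace(-6, -2, 100)   # delay times, s

def g1(gammas, weights, tau):
    """Multi-exponential field autocorrelation function, Eq. (11a)."""
    w = np.asarray(weights, float)
    w = w / w.sum()              # normalize the intensity weights
    return sum(wi * np.exp(-g * tau) for wi, g in zip(w, gammas))

# A single decay rate vs. a bimodal pair with the same mean decay rate
g_mono = g1([1000.0], [1.0], tau)
g_bi   = g1([800.0, 1200.0], [0.5, 0.5], tau)

print(f"max |difference| = {np.abs(g_mono - g_bi).max():.4f}")
```

Any inversion algorithm must therefore distinguish distributions whose correlation functions are almost indistinguishable, which is why the noise level, the baseline estimate and the prior assumptions matter so much.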

3.4.2 Estimation of the field autocorrelation function
Firstly, it should be noted that experimental data are always "contaminated" by noise and experimental uncertainties, so that Eq. (9) should be written as

G2(τ) = A + B g1²(τ) + ε(τ) (12)

where ε(τ) is the (unknown) experimental uncertainty. In order to extract the field autocorrelation function, in normalized form g1(τ) or unnormalized form G1(τ) = B^1/2 g1(τ), the baseline A has to be evaluated. Two experimental strategies are used to this end:
1. Since for large time delays G2(τ) decays to its background value A, the baseline is approached by a measurement of G2(τ) at large delay times.
2. The baseline can be estimated from the time-averaged intensity, which is monitored by separate counters in the instrumentation.
The unnormalized field autocorrelation function G1(τ) is calculated as

G1(τ) = [G2(τ) − A]^1/2 (13a)

and the normalized field autocorrelation function is given by

g1(τ) = {[G2(τ) − A]/B}^1/2 (13b)

The experimental uncertainties in both G2(τ) and A lead to complications, however. For larger values of the time delay, the experimental estimates of [G2(τ) − A] are sometimes negative, so that it is not possible to extract the square root. In order to circumvent this problem, several strategies are used:
1. Measurements with [G2(τ) − A] < 0 are discarded.
2. If [G2(τ) − A] < 0, G1(τ) is set equal to zero.
3. If [G2(τ) − A] < 0, G1(τ) is estimated as −[A − G2(τ)]^1/2.
The first two strategies lead to a biased estimation of G1(τ). Although the third strategy seems intuitively reasonable, it has not been shown that it does not bias the estimates of G1(τ). Secondly, although the error in the baseline A may be of a random nature, the errors introduced upon normalization (Eq. 13b) are systematic ones, because all data points are divided by the same (erroneous) estimated value of the baseline 36). These normalization errors increase as the values of the field autocorrelation function g1(τ) decrease, i.e. with increasing delay time τ, and can be approximated by 37)

δg1(τ) ≈ ΔA/[2B g1(τ)] (14)

where ΔA is the estimated error on the baseline.
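A minimal sketch of the first data-analysis step on synthetic data: a noisy G2(τ) is generated, the baseline is subtracted, and the sign-preserving square root (strategy 3 above) is applied so that channels with G2(τ) − A < 0 need not be discarded or zeroed. The baseline, intercept, decay rate and noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
tau = np.logspace(-6, -2, 80)
A, B, gamma = 1.0e6, 0.8e6, 1000.0          # baseline, intercept, decay rate

# Synthetic noisy intensity autocorrelation function, cf. Eq. (12)
G2 = A + B * np.exp(-2.0 * gamma * tau) + rng.normal(0.0, 200.0, tau.size)

# Sign-preserving square root: G1 = sign(G2-A) * sqrt(|G2-A|)
diff = G2 - A
G1 = np.sign(diff) * np.sqrt(np.abs(diff))   # unnormalized, cf. Eq. (13a)
g1 = G1 / np.sqrt(B)                         # normalized, cf. Eq. (13b)

print(f"{np.sum(diff < 0)} channels with G2 - A < 0")
```

At short delays the noise is negligible relative to B·exp(−2Γτ) and g1 ≈ 1, while at long delays the baseline-subtracted signal fluctuates around zero, which is exactly where the choice of strategy matters.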

3.4.3 Inversion for the intensity-weighed particle size distribution

The next step is the inversion of the Laplace transform (Eq. 11b) for the distribution function C(Γ), or the inversion of the sum of exponentials (Eq. 11a) for the set of decay rates {Γi, i = 1, ..., n} and intensity weights {ci, i = 1, ..., n}.
There are essentially two kinds of methods used for the inversion: methods that do not require any prior knowledge about the distribution, and methods that do require prior knowledge. The majority of the methods used belongs to the second kind. The most frequently used methods are the following.

3.4.4.1 Methods that require no prior knowledge

i. The cumulants method 17)
This is probably the most widely used method. The idea behind it is that for monodisperse samples, G1(τ) is a mono-exponentially decaying function, so that ln G1(τ) is a straight line with constant slope, proportional to the decay rate or inversely proportional to the particle size. For polydisperse samples, G1(τ) is a multi-exponential. As a result, ln G1(τ) is no longer a linear function of τ: for relatively large values of τ, differences from the initial slope of ln G1(τ) can be observed. The departure from the initial slope is used as a measure of polydispersity. In practice, this method is mostly used to obtain average particle sizes from an average decay rate Γ̄, together with a polydispersity index defined as μ2/Γ̄², i.e. the second cumulant divided by the squared average decay rate. The average particle size dPCS obtained from Γ̄ is a harmonic intensity average 38):

dPCS = Σi ci / Σi (ci/di) (15)

For particles which are small compared to the wavelength of light, the intensity scattered by a particle of diameter di is proportional to the volume squared, i.e. to the sixth power of the particle size. In this case

dPCS = Σi ni di⁶ / Σi ni di⁵ (16)

where ni is the number of particles with diameter di. Note that even then, dPCS is not a z-average, which would be the d7,6 average diameter. However, in the more common case where the particle size is comparable to the wavelength of light, dPCS is not given by Eq. (16), i.e. it is not the d6,5 average, but is usually smaller. The kind of average that dPCS represents depends in most practical cases on the size range of the distribution and the particle refractive index. For polydisperse samples, dPCS is in all cases significantly larger than the number or geometric average diameter that is usually determined by TEM, and one should be aware that even minor amounts of relatively large particles will strongly determine the value of dPCS, while they have little effect on the number average diameter 38). Therefore, the necessary care should be taken when comparing PCS and TEM results for polydisperse samples.
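A minimal numerical sketch of a second-order cumulants analysis on synthetic bimodal data; the decay rates, weights and delay-time range are invented for illustration, and the fit is a simple polynomial fit of ln g1(τ) rather than the weighted fits used in practice.

```python
import numpy as np

tau = np.linspace(1e-5, 1e-3, 50)

# Synthetic bimodal sample: two decay rates with equal intensity weights
weights, gammas = np.array([0.5, 0.5]), np.array([800.0, 1200.0])
g1 = weights @ np.exp(-np.outer(gammas, tau))

# Second-order cumulants fit: ln g1 ~ c0 - mean_G*tau + (mu2/2)*tau^2
coeffs = np.polyfit(tau, np.log(g1), 2)
mean_G = -coeffs[1]                 # average decay rate (1/s)
mu2 = 2.0 * coeffs[0]               # second cumulant
PI = mu2 / mean_G**2                # polydispersity index
print(f"mean decay rate = {mean_G:.0f} 1/s, polydispersity index = {PI:.3f}")
```

For this synthetic sample the true values are Γ̄ = 1000 s⁻¹ and μ2/Γ̄² = 0.04, which the fit recovers closely; with noisy data, the choice of the fitted τ range strongly affects the recovered cumulants.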
ii. The singular value analysis and reconstruction method (SVR) 34-35)

This is one of the methods dealing with the ill-conditioned nature of the problem. Its particularly attractive feature is that the first step in the procedure gives, without any prior knowledge, an answer to the question of how many exponential decay rates can be recovered from noisy data. The main limitation of the method is that it can only be applied to data sampled at equidistant time delays.

3.4.4.2 Methods requiring prior knowledge
A large number of methods of this kind has been reported 18-32). A review of the state of the art up to about 1984 has been published 33). Of all the methods described in 33), the non-negatively constrained least-squares method (NNLS) 21) and the regularization method of Provencher (Contin) 22-23) are the ones most often used. More recently, it has been reported that the maximum entropy method (MEM) also allows reliable reconstructions of particle size distributions 27), 30). We will therefore limit this brief review essentially to these three methods.

The prior knowledge
The common feature of these methods is that they require the following prior knowledge 39):
1. The baseline. In order to extract the field autocorrelation function G1(τ), g1(τ) or g1²(τ) from the experimental data for the intensity autocorrelation function G2(τ), knowledge of the baseline is required. More often than not, constant baseline values determined separately are used. Corrections for normalization errors can be made 37).
2. The weighing of the data. Here, the necessary prior information about the experimental errors in determining the intensity autocorrelation function is lacking. Hence, only empirical weighings are used. Most commonly, it is assumed that the experimental errors are uncorrelated. Following the fact that in many counting processes the data follow Poisson statistics, it is often assumed that the weights are inversely proportional to the square root of the data points.

3. The range of particle sizes (or decay rates) in which solutions for the inversion of g1(τ) for the distribution are expected. This range must be discretized. The setting of the interval and the number of grid-points N in the interval have to be provided by the user, based on (subjective) experience, trial and error, or prior knowledge about the answer. Note that preprocessing with the non-a-priori methods (cumulants and/or SVR) can be used as a guide to set the range. For the discretization of the range of particle sizes, samplings according to geometric series are preferred to equally spaced samplings. This choice, sometimes designated as exponential sampling, is based on the Pike-Ostrowski eigenvalue analysis of the Laplace transform 20).

Once the baseline, the weighing of the data, and the range of particle sizes and its discretization have been set, the following set of simultaneous equations is obtained for a set of M data points:

g1(τj) = Σ(i=1 to N) ci exp(−Γiτj) + εj   (j = 1, 2, 3, ..., M) (17)

In other words, in the first term on the r.h.s. of Eq. (17), the set of decay rates {Γi} has been fixed and the remaining unknowns are the intensity weights ci, including the weighing resulting from the discretization. Since in general the set of equations is overdetermined (M > N), the computation of the set {ci, i = 1, ..., N} can in principle be performed with a linear least-squares procedure. However, owing to the ill-conditioned nature of the problem and the fact that no reliable prior information about the noise terms εj is available, a simple least-squares algorithm often yields strongly biased and even non-physical answers for the distribution: e.g., the set of intensity weights ci should be a set of positive numbers, whereas simple least-squares fitting procedures often yield negative values for some intensity weights. Adding to the confusion is the fact that the results obtained are unstable, in the sense that they are sometimes strongly dependent on the range of particle sizes set and the number of grid-points N, i.e. on the prior choice of the set of decay rates {Γi}. In order to deal with these problems, and in order to come to a selection of the "best answers" out of all possible answers fitting the data set, different strategies are used. We shall briefly review how this selection is carried out by NNLS, Contin and MEM. The procedures are schematically represented in Fig. 5. A first criterion for selecting solutions out of all possible answers fitting the data is to use the prior knowledge that distribution functions are represented by positive numbers. This is the basis of the non-negatively constrained least-squares fitting methods, whereby only solutions with ci ≥ 0, ∀ i, are retained.
Mostly, the NNLS routine published in the book of Lawson and Hanson 40) is used. Several commercially available software packages are based on the NNLS method. Although the positivity constraint on the intensity weights is an important improvement compared to unconstrained least-squares fitting, the final results are still dependent on the range of particle sizes set and on the number of grid-points N. Therefore, even more prior information, allowing a further selection among the possible answers, is needed. This is achieved in two different ways by Contin and MEM.
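As a sketch of the NNLS approach, the snippet below inverts Eq. (17) with the positivity constraint, using the scipy implementation of the Lawson-Hanson algorithm on noiseless synthetic data; the decay-rate grid (geometric, i.e. "exponential sampling") and the two-component test distribution are arbitrary choices for illustration.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic data: two decay rates, noiseless for clarity
tau = np.logspace(-5.5, -2.5, 60)
g1_data = 0.4 * np.exp(-500.0 * tau) + 0.6 * np.exp(-2000.0 * tau)

# Prior choice: grid of trial decay rates (geometric/"exponential" sampling)
gamma_grid = np.geomspace(100.0, 10000.0, 30)
K = np.exp(-np.outer(tau, gamma_grid))     # kernel matrix of Eq. (17)

# Non-negatively constrained least squares: all intensity weights c_i >= 0
c, residual = nnls(K, g1_data)
for g, ci in zip(gamma_grid, c):
    if ci > 0.01:
        print(f"Gamma = {g:7.1f} 1/s, weight = {ci:.3f}")
```

Because the true decay rates do not fall exactly on the grid, NNLS distributes the weight over neighbouring grid points, illustrating the dependence on the prior choice of {Γi} discussed above.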

Contin 22-23)

In addition to the positivity constraints, Contin uses the prior knowledge that the simplest solution, i.e. the one that reveals the least amount of new information or detail in the distribution function, is to be preferred (parsimony principle). Both constraints, i.e. non-negativity and parsimony, are imposed by constrained regularization. Nevertheless, in practical cases the main weak points are the prior choices of the baseline and of the range of particle sizes: the final results are very sensitive to small differences in the baseline estimates and are not always independent of the range of particle sizes.

MEM 27-31)

In the maximum entropy method, the "best solution" is estimated as the most probable solution. The most probable solution is the set of intensity weights {ci}* that maximizes the Shannon-Jaynes-Skilling entropy function

S = −Σi ci ln(ci/ci0) (18)

In Eq. (18), the set {ci0} is the measure of the prior information on the distribution. Without detailed prior information, it is assumed that the intensity weights of all particle sizes are equal over the range of sizes set, i.e. all values of ci0 are made equal. The maximization of the entropy function automatically satisfies the positivity constraints. It is also claimed that the maximum entropy solution is a smooth distribution 27). The main weak points are essentially the same as those of Contin: the final results are very sensitive to the baseline estimates and are not always independent of the range of particle sizes and the number of grid-points N.

Analysis of multi-angle measurements 34), 41-42)

Since the scattering power of submicrometer and micrometer size particles is strongly dependent on the scattering angle and particle size, it may happen that the fraction of particles of a certain size class in a polydisperse specimen is hardly detectable at a given angle, whereas it dominates the scattering at another angle. Hence a survey of the autocorrelation functions over several angles may give more information on the size distribution than any single-angle analysis does. This point is illustrated by the analysis of measurements on binary mixtures of 250 nm and 520 nm diameter latices at several scattering angles, including an angle for which the scattering of the larger 520 nm particles is hardly detectable 41). Since the angular dependence of the particle scattering power is used as a constraint, the method can only be used if the particle shape and refractive index are known.
Simultaneous analysis of multi-angle PCS data that does not require prior knowledge of the angular dependence of the scattering power on particle size is also possible with the singular value analysis and reconstruction method 35). In general, however, our experience is that simultaneous analysis of multi-angle PCS data only slightly improves the resolution in particle size, and that one is still limited by the ill-conditioning of the inversion of multi-exponential PCS data. In order to benefit fully from the angular dependence of the particle scattering power, the simultaneous analysis of multi-angle static light scattering (SLS) data is preferable, owing to the better conditioning of the inversion of SLS data 43-48).

Information content
With the a-priori methods of section 3.4.4.2, typically a set of 40-50 values of the particle size distribution is computed. However, due to the ill-conditioned nature of the inversion of the Laplace transform, this is not a set of 40-50 independent parameters. Therefore, it is important to know how many truly independent parameters, or pieces of information, there are in a solution.
Several estimations of the number of independent parameters are used in the different methods: essentially, the number of degrees of freedom in Contin and the parameter "good" in classical MEM. Typical values for these parameters range from about 2 to 5. These values compare fairly well with the number of parameters determined using the non-a-priori SVR method. This illustrates that the number of independent parameters that can be extracted reliably from PCS data is limited, even for advanced inversion methods. The fact that the number of independent parameters is low implies that mainly average particle sizes and distribution widths can be determined, and that the detailed shape of the distribution can hardly, if at all, be determined reliably.
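This limited information content can be made concrete with a small numerical sketch (the delay-time grid, decay-rate grid and noise floor below are illustrative assumptions, not values from the studies cited): discretizing the Laplace-transform kernel exp(-Γt) and inspecting its singular values shows that only a handful rise above a realistic measurement noise level, which is precisely the small number of independent parameters quoted above.

```python
import numpy as np

# Discretized Laplace kernel: A[j, i] = exp(-Gamma_i * t_j)
t = np.logspace(-6, -2, 80)       # correlator delay times (s), illustrative
gamma = np.logspace(2, 6, 50)     # decay-rate grid (1/s), illustrative
A = np.exp(-np.outer(t, gamma))

s = np.linalg.svd(A, compute_uv=False)
s /= s[0]                         # normalize to the largest singular value

# Count singular values above a typical 0.1 % relative noise floor
noise_floor = 1e-3
n_usable = int(np.sum(s > noise_floor))
print(n_usable)                   # only a handful of independent parameters
```

The singular values of this kernel decay roughly geometrically, so the count above barely changes even if the grids are refined: adding more size classes to the solution does not add information.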

Distributions by weight and number
The primary information obtained from PCS data constitutes intensity distributions, whereby the relative amount of each particle size is weighted by the intensity scattered by all the particles of the considered size 38), i.e. the intensity-weighted coefficients ci in Eq. (11a) are proportional to ni·ii, where ii is the intensity scattered by one single particle of size class i and ni is the number of particles in that size class. The intensity-weighted distributions can be converted into distributions by weight and by number, provided prior knowledge of the particle size and mass on the one hand, and the particle scattering power on the other hand, is available. In the absence of prior information about particle shape, it is assumed that the particles are spherical objects, mostly homogeneous spheres. In this case, the relationship between particle diameter and scattering power depends on the ratio of the particle diameter d to the wavelength λ₀ of light, the ratio of the refractive index of the particles m₂ to the refractive index of the dispersion medium m₁, the scattering angle θ, and the state of polarisation of the incident light, mostly orthogonal to the incident and scattered direction (vertical polarization). The detailed functional dependence is given by the Mie scattering equations for spheres 49). As a result, distributions by number or weight can only be obtained if the refractive index of the particles is known. Without this knowledge, a relationship between particle size and scattering power is available only in the limiting case of particles which are small compared with the wavelength of light (Rayleigh and Rayleigh-Gans-Debye limits).
It should be noted that the parameters of the computed weight and number distributions are less accurate than those describing the intensity distributions, owing to the propagation of errors in the transformation from intensity to weight or number distributions.
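In the Rayleigh limit mentioned above (d much smaller than the wavelength), the intensity scattered per homogeneous sphere scales as d⁶ and its mass as d³, so the conversion reduces to a simple rescaling of the coefficients. The sketch below (two illustrative size classes, not data from the paper) shows how strongly the apparent composition shifts between the three weightings:

```python
import numpy as np

d = np.array([50.0, 100.0])      # two size classes (nm), illustrative
c_int = np.array([0.5, 0.5])     # intensity-weighted fractions (equal scattering)

# Rayleigh limit: intensity per particle ~ d^6, mass per particle ~ d^3
n = c_int / d**6                 # relative number of particles per class
w = n * d**3                     # relative mass (weight) per class

num = n / n.sum()                # number distribution
wt = w / w.sum()                 # weight distribution
print(num)                       # dominated by the small particles
print(wt)
```

Here equal scattered intensities correspond to 64 times more small particles than large ones (a factor (100/50)⁶), which also illustrates why errors amplify in the conversion: a small error in the intensity fraction of the large particles becomes a large relative error in their number fraction.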

Polydispersity analysis. Practice
Most methods of data inversion have been developed and tested with synthetic data and/or with samples for which the size distribution, i.e. the expected answer to the inversion problem, was known a-priori. Practical performances of PCS for the determination of particle size distributions of samples with a-priori unknown size distributions were investigated in several round-robin studies by the Belgian Particle Technology Group (BPTG) 9, 50, 51).
The aim of these round robins was to take a snapshot of the results obtained by routine procedures by different users (industrial and academic research groups and manufacturers) with mostly commercially available equipment and software. In a first study, several commercial monodisperse latex dispersions with particle diameters in the range 30 nm to about 2 µm were investigated 9). The results confirmed that the best accuracy and repeatability are obtained in the deep submicron size range, i.e. the size range below roughly 0.5 µm. For the larger sizes, the accuracy and repeatability suffered from complications due to number fluctuations (see section 3.1.3). Another study investigated in how much detail a particle size distribution can be characterized by PCS in a relatively short measuring time (typically six repeated measurements of 5 min duration) 51). In particular, the ability of PCS to discriminate between monomodal and bimodal distributions of several industrial samples was studied. Four samples were distributed to eight laboratories. The first two samples were monomodal but not monodisperse. Samples 3 and 4 were mixtures of the first two in weight ratios of 1:1 and 1:3. This information was only communicated to the participants after five of them had returned their results. All participants returned results for the average size dPCS and the polydispersity index K₂/⟨Γ⟩² as obtained by a cumulants analysis. The results are summarized in Fig. 6. The larger spread of average diameters for sample 1, compared to sample 2, is due to the fact that for this particle size, large in terms of PCS determinations, the average number of particles in the measuring volume is rather small (see section 3.1.3). From the values of the second cumulants it can be concluded that none of the samples are monodisperse, and that the size distributions of samples 3 and 4 (i.e. of the mixtures of samples 1 and 2) are broader than those of samples 1 and 2.
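The cumulants analysis used by all participants amounts to a low-order polynomial fit to the logarithm of the field autocorrelation function: the linear coefficient gives the mean decay rate ⟨Γ⟩ and the quadratic coefficient the second cumulant K₂. The sketch below uses synthetic, noise-free data for an illustrative bidisperse sample, not the round-robin measurements:

```python
import numpy as np

# Synthetic field autocorrelation for a bidisperse sample (illustrative rates)
t = np.linspace(1e-5, 5e-4, 100)             # delay times (s)
g1 = 0.5 * np.exp(-2e3 * t) + 0.5 * np.exp(-4e3 * t)

# Cumulants fit: ln g1(t) = -<Gamma> t + (K2 / 2) t^2 + ...
coef = np.polyfit(t, np.log(g1), 2)          # highest power first
mean_gamma = -coef[1]                        # first cumulant <Gamma> (1/s)
k2 = 2.0 * coef[0]                           # second cumulant
pi = k2 / mean_gamma**2                      # polydispersity index K2/<Gamma>^2
print(mean_gamma, pi)
```

Even in this idealized case the recovered values depend somewhat on the fitted delay-time window, since higher cumulants are truncated; with real noisy data this sensitivity is what makes the normalized second cumulant roughly an order of magnitude less reproducible than the mean, as noted below.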
However, the results of a cumulants analysis do not allow discrimination between mono- and multimodal distributions. Note that the reproducibility of the determinations of the normalized second cumulants is about an order of magnitude poorer than the reproducibility of the average diameter.
The other analysis methods used in this study basically allowed multimodal distributions to be resolved. In the main, four inversion methods were used. Most participants reported results obtained with commercially available Non-Negative Least-Squares (NNLS) methods. Three participants used the Contin software package, and one participant also used the Maximum Entropy (MEM) and the Singular Value analysis and Reconstruction (SVR) methods. In a first step, each participant was asked to report the number of modes of the size distribution as obtained from an analysis of single-angle PCS experiments. The results are summarized in Fig. 7.
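A minimal non-negative least-squares inversion of the kind used by most participants can be sketched with a projected-gradient iteration on a discretized multiexponential kernel (illustrative grids and noise-free synthetic data; the commercial packages use more refined algorithms such as Lawson-Hanson, and real data carry noise that makes the inversion far harder):

```python
import numpy as np

t = np.linspace(1e-5, 1e-3, 60)                  # delay times (s)
gamma = np.linspace(500.0, 8000.0, 40)           # decay-rate grid (1/s)
A = np.exp(-np.outer(t, gamma))                  # multiexponential kernel

# Synthetic data: two narrow modes at 2000 and 6000 1/s
x_true = np.zeros(len(gamma))
x_true[np.argmin(abs(gamma - 2000))] = 0.6
x_true[np.argmin(abs(gamma - 6000))] = 0.4
y = A @ x_true

# Projected-gradient NNLS: gradient step on ||Ax - y||^2, clip negatives
x = np.zeros(len(gamma))
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / spectral norm squared
for _ in range(20000):
    x = np.maximum(0.0, x - step * (A.T @ (A @ x - y)))

print(np.round(x.sum(), 2))                      # recovered total amplitude
```

Because the kernel is severely ill-conditioned, many non-negative solutions fit the data comparably well; the recovered total amplitude is stable, but the detailed placement of the modes is not, which is consistent with the disagreements between laboratories described next.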
The results for the intensity-weighted distributions are in agreement for samples 1 and 2: all participants agree on monomodal particle size distributions, i.e. for the two samples with the smaller normalized second cumulants. For samples 3 and 4, the results for the intensity-weighted distributions are no longer in agreement. Note that the results for the last two samples, i.e. for the mixtures of samples 1 and 2, reported by the three contributors (labs 1, 7 and 8) who had prior knowledge that the distributions were bimodal, are also in disagreement. Even the results reported by different participants but obtained with the same software package disagree for samples 3 and 4.
The collective simultaneous analysis of data sets obtained at different scattering angles led to the conclusion that samples 1 and 2 were monomodal, whereas samples 3 and 4 were bimodal. However, this collective multi-angle analysis appeared to be only a slight, and certainly not a spectacular, improvement.
This case study illustrates that, due to the ill-conditioning of the (Laplace) data inversion, the amount of information that can be extracted reliably from measurements of relatively short duration is limited. More precisely, the mean and the variance of a distribution, as obtained by a cumulants analysis, can be determined reliably, but not the detailed shape of the distribution.

The angular dependence of particle scattering power is already exploited successfully for the characterization of larger particles (diameters larger than a few micrometres) by forward light scattering, sometimes also referred to as Fraunhofer diffraction 52). This method can be extended to smaller, submicron particles by taking into account the inverse relation between particle size and scattering angle. The dependence of the scattering power of submicron particles at large scattering angles, e.g. in the angular range 10 to 150°, contains enough information for a general interpretation procedure 43-48).
As an example, the angular dependence of the scattering power of the four samples studied by the BPTG 51) is shown in Fig. 8. The distributions obtained by inversion of these data revealed that samples 1 and 2 were monomodal and samples 3 and 4 bimodal. The better performance compared to PCS comes mainly from the higher information content of the variation of particle scattering power as a function of the scattering angle. This can be illustrated by the angular variation of the scattering power of the four samples shown in Fig. 8. For sample 1, with the larger particle size, four minima are obtained in the investigated angular range, whereas for sample 2 (with the smaller particle size), only one minimum is found. The scattering curves of samples 3 and 4 contain the characteristic features (e.g. the angular positions of the minima) of the scattering curves of both samples 1 and 2, and can be recovered by linear superposition of the curves of samples 1 and 2 in a ratio of 1:1 for sample 3 and 1:3 for sample 4. From this observation, the author concluded that sample 3 was a 1:1 weight mixture of samples 1 and 2, and sample 4 a 1:3 mixture by weight of the first two samples, without having prior knowledge of the composition of samples 3 and 4.
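The extra information carried by the angular pattern can be illustrated with the Rayleigh-Gans-Debye form factor of a homogeneous sphere, a simplification of the full Mie result quoted earlier. The particle diameters below are the two latex sizes from the binary-mixture study, while the wavelength and medium index are illustrative assumptions; the point is only that larger spheres push scattering minima to smaller angles, so each size leaves a distinct angular fingerprint:

```python
import numpy as np

def rgd_form_factor(q, d):
    """Rayleigh-Gans-Debye form factor of a homogeneous sphere of diameter d."""
    x = q * d / 2.0
    return (3.0 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2

theta = np.radians(np.linspace(10, 150, 500))   # scattering angles
wavelength = 633.0                              # He-Ne vacuum wavelength (nm), assumed
n_medium = 1.33                                 # water, assumed
q = 4.0 * np.pi * n_medium / wavelength * np.sin(theta / 2.0)

counts = {}
for d in (250.0, 520.0):                        # latex diameters (nm)
    p = rgd_form_factor(q, d)
    interior = p[1:-1]                          # count interior local minima
    counts[d] = int(np.sum((interior < p[:-2]) & (interior < p[2:])))
print(counts)
```

Under these assumptions the 250 nm spheres show no minimum in the 10-150° window while the 520 nm spheres show one; a superposition of the two curves therefore betrays both components, which is the mechanism exploited in the multi-angle SLS analysis described above.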

Recent and Future Developments
Since PCS allows fast determinations of average size and distribution width, the applications for quality control are still increasing. One of the aims thereby is to arrive at preferably portable "PCS sensors" that allow on-line or even in-situ measurements in concentrated dispersions. The downsizing of the different instrumentation components is promising. There is a trend to replace the He-Ne lasers by smaller solid-state lasers 53). The use of monomode fibres allows the construction of particularly simple PCS systems 54-56). Avalanche photodiodes have been proposed as a replacement for photomultiplier tubes 57, 58). In addition to their small size, an interesting feature of avalanche photodiodes is their higher quantum efficiency compared to photomultipliers. Correlators are continually downsized.
On the other hand, great efforts are being made to allow measurements in highly concentrated and opaque samples. In order to extract single particle properties from measurements on concentrated dispersions, two main problems have to be solved, i.e. the effects of multiple scattering (see section 3.1.2) and of particle interaction (see section 3.3).
The effect of multiple scattering can be reduced very substantially by the use of cross-correlation techniques 56-61). In this technique, one illuminates the sample with two antiparallel laser beams of the same wavelength, positions two detectors on opposite sides of the sample at 90° angles so that scattering vectors q and -q are defined, and then studies the cross-correlations of the intensity fluctuations. Although multiply scattered signals reach both detectors, they do not contribute to the cross-correlation of the signals. The reason is that the observed scattered electric fields add coherently only for single scattering events, while for multiple scattering events the different scattered fields interfere destructively. A variant of this cross-correlation technique that allows the scattering angle to be varied by using two laser beams at different wavelengths has been reported by Drewel et al. 62). From a practical point of view, the present problem with cross-correlation is the extreme care required in the alignment of the instrument (both detectors must register signals coming from the same scatterers). As a result, it is not yet used for routine measurements but only in off-line research applications.
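The principle, that only the shared single-scattering component survives cross-correlation while each detector's uncorrelated multiple-scattering contribution averages away, can be sketched with a statistical toy model (random signals standing in for detector outputs; this is not a simulation of the actual optics):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000                                 # number of samples

single = rng.normal(size=n)                 # shared single-scattering fluctuation
det_a = single + rng.normal(size=n)         # detector A: + its own uncorrelated term
det_b = single + rng.normal(size=n)         # detector B: + its own uncorrelated term

auto_a = np.mean(det_a * det_a)             # autocorrelation sees both contributions
cross = np.mean(det_a * det_b)              # cross-correlation keeps only the shared part
print(round(auto_a, 1), round(cross, 1))
```

The autocorrelation of one detector contains the variance of both the shared and the private term, whereas the cross-correlation converges to the variance of the shared term alone, mirroring how the multiply scattered fields drop out of the cross-correlated signal.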
Another, much simpler, way to avoid the complications of multiple scattering is to collect the backscattered signal (i.e. θ = 180°) with a monomode optical fibre 63-66). This technique, whereby the incident beam is launched through the same optical fibre, is particularly promising since it allows in-situ measurements in concentrated and opaque dispersions. There are, of course, a few problems that still have to be solved. Firstly, with the fibre-optic backscatter system, not only light scattered by the dispersed particles is collected, but also incident light reflected by the probe's tip. This problem can be solved by working with carefully cleaned slanted optode probes 66) or by taking the interference between backscattered and reflected incident light into account in the data analysis 67). For instance, with the latter technique, not only were the certified particle diameters of some monodisperse samples recovered, but even closely spaced bimodal mixtures of them were resolved 67).
The other complication in the characterization of very concentrated dispersions (i.e. at volume fractions above roughly 0.01) is particle interactions. From a theoretical point of view, much progress has been made in understanding the behaviour of concentrated systems. An excellent review was recently given by P. Pusey 14). Due to the complexity of this matter and to the variability of particle interactions in different practical systems, no general strategy for accounting for particle interactions has been developed yet. This does not preclude that, for particular quality control applications, particle interactions can be taken into account in the data analysis of measurements on very concentrated dispersions.

Conclusions
Having originated some twenty years ago as a research tool in a form only suitable for experts, PCS has become a routine analytical instrument for the determination of particle sizes. Like all other techniques, it has its strong and its weak points. The major strong point is that it is difficult to imagine a faster technique for sizing submicron particles: average particle sizes and distribution widths can be determined in a few minutes without elaborate sample preparation procedures. The price to be paid for this advantage is the low resolution. Reasonably accurate resolution of the shape of the particle size distribution requires extremely accurate measurements over periods of 10 hours or more, together with careful and critical data analysis, including interaction with a highly qualified operator. Nevertheless, since in many quality control applications an average size is sufficient, PCS is very often an excellent choice. The recent trend in this analytical application is the development of measuring systems that allow the control of production processes by on-line and in-situ measurements, preferably in highly concentrated dispersions.
One has to bear in mind that some of the commercially available PCS equipment also allows the measurement of the time-averaged scattered intensity as a function of the scattering angle, and that the inversion of such data yields more reliable particle size distributions.

Fig. 1 Basic light scattering geometry

Fig. 2 Spectrum of scattered light

Fig. 3 Spectrum of light scattered by particles in Brownian motion. Brownian motion can be quantified by the particles' (translational) diffusion coefficient D. It can be shown that this quantity is related to the half width at half height Δω₁/₂ of the bell-shaped (Lorentzian) spectral curve by Δω₁/₂ = Dq².

Fig. 5 Venn-diagram representation of several methods of data analysis.
A: All possible sets of solutions (an infinite number)
B: All possible sets of solutions (also an infinite number) fitting the data within one standard deviation on average
C: All possible sets of non-negatively constrained solutions (NNLS)
D: Contin preferred solution (NNLS + parsimony principle)
E: Maximum Entropy most probable solution
A first criterion for selecting solutions out of all possible answers fitting the data is to use the prior knowledge that distribution functions are represented by positive numbers. This is the basis for the non-negative constrained least-squares fitting methods (NNLS) 21).

Fig. 6 Single-angle (90°) PCS harmonic intensity-weighted average particle diameters and normalized second cumulants (polydispersity indices, PI) for samples 1 to 4, as reported by the different contributors.

Fig. 7 Number of modes of the particle size distribution of samples 1 to 4 as determined by PCS in different laboratories (1 to 8)

Fig. 8 Variation of sample scattering power as a function of scattering angle