


Chapter 17
Testing and Evaluation


17.1 Introduction


One of the key parameters in high temperature oxidation is the parabolic rate constant. This is true as long as protective oxidation determines the material behavior. Consequently, measurement of the weight change of the specimen is the key experimental technique in high temperature oxidation. Generally, there are two possibilities. The first is to take a number of specimens of the same type, expose them to the respective atmosphere in a closed furnace in which defined atmospheres can be established, and take the specimens out after different exposure times. Before and after the tests, the specimens are weighed on a high-resolution laboratory balance, and the weight change Δm divided by the surface area of the specimen, A, is plotted versus time. Therefore, before exposure, the surface area A of the specimens has to be determined accurately. If the results are plotted in a parabolic manner, that is, Δm/A is squared while t remains linear, the rate constant can be determined directly from the slope, as discussed in Chapter .
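For illustration, the short sketch below (Python, with hypothetical weight-gain data) shows how the parabolic rate constant kp can be extracted as the slope of a linear fit of (Δm/A)² versus t; the numbers are placeholders, not measured values.

# Sketch: estimating the parabolic rate constant k_p from discontinuous
# weight-change data (hypothetical values, for illustration only).
import numpy as np

t = np.array([0.0, 5.0, 10.0, 20.0, 40.0])          # exposure time, h
dm_per_A = np.array([0.0, 0.22, 0.31, 0.45, 0.63])  # weight gain, mg cm^-2

# Parabolic law: (dm/A)^2 = k_p * t, so k_p is the slope of (dm/A)^2 vs t.
y = dm_per_A ** 2
k_p = np.polyfit(t, y, 1)[0]                         # mg^2 cm^-4 h^-1
print(f"k_p ~ {k_p:.3e} mg^2 cm^-4 h^-1")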


A more elegant method is to use continuous thermogravimetry (Grabke and Meadowcroft 1995). In this case, a platinum or quartz string attached to a laboratory balance extends down into a furnace. At the lower end of the string, a coupon specimen is attached, for which the surface area must be determined before the test. A movable furnace can also be installed that allows thermocyclic oxidation testing by periodically moving the furnace over and away from the specimen area. Ideally, the specimen should be in a quartz chamber or a chamber of another highly corrosion-resistant material so that defined gas atmospheres can be used in the tests. The interior of the microbalance must be shielded against the aggressive gas atmospheres, usually by a counterflux of a nonreactive gas, such as argon. In a more sophisticated type of thermobalance, acoustic emission (AE) measurements can be made, for example, by acoustic emission thermography (AET) (see below; also Walter et al. 1993). This becomes possible if a waveguide wire is attached to the specimen-carrying string hanging down from the balance. In particular, under thermocyclic conditions, AE measurements allow the determination of the critical conditions under which the oxide scales crack or spall (Schütze 1997). This type of scale damage is accompanied by mass loss due to spallation of the scale, which is directly reflected in the mass change measurements and can be correlated with the AE results.


In some situations, internal oxidation or corrosion may occur that cannot be detected directly by thermogravimetric measurements. Therefore, it is necessary to perform metallographic investigations as well. In particular, for continuous thermogravimetric testing, at the end of each test, a metallographic cross section should be prepared in order to check whether the mass change effects measured in the tests are caused by surface scales alone or whether the metal cross section has been significantly affected. Furthermore, if the kinetics of internal corrosion are to be determined, it is necessary to perform discontinuous tests where specimens are taken out of the test environment after different testing times and then investigated by metallographic techniques (Baboian 1995; Birks et al. 2006; Glaser et al. 1994; Wouters et al. 1997).


Standard high temperature corrosion investigations also usually include an analysis of the corrosion products formed in the tests or under practical conditions because this allows conclusions as to which are the detrimental species in the environment and whether protective scales had formed. In most cases, this is done either in the scanning electron microscope (SEM) by using energy‐dispersive X‐ray analysis (EDX) or with metallographic cross sections in the electron probe microanalyzer (wavelength‐dispersive X‐ray analysis, WDX). Another tool may be glancing angle X‐ray diffraction (GAXRD) technique, which allows analysis of the composition of thin layers.


Common experimental investigation techniques used to assess corrosion morphology, identify corrosion products, and evaluate mechanical properties are the following (Birks et al. 2006; Marcus and Mansfeld 2006; Rahmel 1982; Sequeira et al. 2008; Taniguchi et al. 2006):



  1. Corrosion morphology assessment

    Optical microscopy


    Scanning electron microscopy (SEM)


    Electron probe microanalysis (EPMA)


    Transmission electron microscopy (TEM)


  2. Corrosion products identification

    X‐ray diffraction (XRD)


    Energy‐dispersive X‐ray analysis (EDX)


    Secondary ion mass spectroscopy (SIMS)


    X‐ray photoelectron spectroscopy (XPS)


    Auger electron spectroscopy (AES)


    Laser Raman spectroscopy (LRS)


  3. Mechanical properties evaluation

    Creep rupture


    Postexposure ductility


    Modulus of rupture (MOR).


In general, creep rupture, hardness, and MOR have been used equally to assess the mechanical properties of corroded test pieces. When the material is difficult to grip (as is a ceramic), its strength can be measured in bending. The MOR is the maximum surface stress in a bent beam at the instant of failure (SI units, megapascals; centimeter–gram–second units, 10⁷ dyn cm⁻²). One might expect this to be exactly the same as the strength measured in tension, but it is always larger (by a factor of about 1.3) because the volume subjected to the maximum stress is small and the probability of a large flaw lying in the highly stressed region is also small. (In tension all flaws see the maximum stress.) The MOR strictly applies only to brittle materials. For ductile materials, the MOR entry in the database is the ultimate strength.


The technical domains in which high temperature corrosion is of importance include thermal machines, the chemical industry, incineration of domestic or industrial waste, electric heating devices, and nuclear engineering. Apart from purely environmental aspects, high temperature corrosion also constitutes a stage of some industrial processes of high financial or human cost (e.g. preparation of materials with controlled properties, thermochemical surface treatment processes, etc.). Moreover, besides the oxidation problems discussed above, high temperature corrosion also involves other gaseous atmospheres (N2, S2, Cl2, etc.), molten liquids (salts, metals, etc.), and more complex environments. It is therefore clear that, to attain the expected performance of the systems and devices subjected to high temperature corrosion, as well as to characterize the corrosion scales and understand the corrosion mechanisms, many experimental techniques must be used. These include spectroscopic, electrochemical, and many other complex techniques. Spectroscopic techniques used for the analysis of corrosion problems and the characterization of thin and thick layers of corrosion scales are of considerable importance, but electrochemical techniques and other techniques using indirect measurements for the study of solid-to-solid, solid-to-liquid, and solid-to-gas properties are now becoming of great interest.


In the next section, brief considerations are included on the basic testing equipment and monitoring, at laboratory scale, as well as on optical microscopy and thermogravimetry. Then, a very brief summary of the main spectroscopic techniques in current use, their main limitations, and scope for development is provided. In fact, spectroscopic techniques used for chemical analysis of oxidation problems and characterization of thin layers of corrosion scales are of ever-increasing importance and thus deserve an entire section for their general discussion. In the following sections, numerous established and more recent experimental techniques are considered, from which more detailed methodology or results of specific techniques can be obtained. In these sections, nondestructive inspection (NDI) techniques are also included.


The effects of temperature on mass transport and on the kinetics of corrosion phenomena, along with their importance in a number of technological industries of particular interest, such as nuclear, fossil-fueled, and geothermal power, high temperature fuel cells, and high-energy batteries, require detailed mechanistic studies involving high temperature aqueous and solid-state electrochemistry. Thus, a number of electrochemical techniques and procedures are discussed in the last section of this chapter.


17.2 Testing Equipment and Monitoring


The reaction vessels for studying high temperature corrosion may be horizontal or vertical tubes, depending on the type of measurement required. Pyrex glass can be used up to 450 °C only and must be changed to vitreous silica (often improperly called “quartz”) that can then be used up to 1050 °C. These glassy materials have the advantage of being transparent to light and of being joined and readily molded to shape by flame processing. For higher temperatures, ceramic materials such as mullite or alumina should be employed. Metallic reaction vessels are seldom used since they themselves may react with the oxidizing gas. For studies of corrosion by fluorine, F2, or its compounds such as HF, SOF2, or SO2F2, it should be noted that silica glasses are not stable, and, similarly, for halogen–carbon reactions where alumina is chlorinated, nickel‐based alloys are used. It will also be necessary to use metallic vessels (termed autoclaves) when high pressure reactions are studied.


Tubular electric furnaces are in common use in air up to 1250 °C with metal wire heating elements (NiCr, FeCrAl, NiCrAl), but, for higher temperatures, these elements have to be protected against oxidation. Heating elements made of SiC or MoSi2 are self-protecting through the formation of a silica layer, and their use extends the temperature range of air furnaces to 1500 °C. Tungsten or molybdenum resistance furnaces can be used up to 2500 °C, provided that they are kept under vacuum to prevent oxide volatilization.


Reliable measurement of test temperature is of great importance because of the strong dependence of the reaction kinetics on temperature. Several types of thermocouples are available, depending on the temperature range being used:



  • NiCr/NiAl (K type) up to 900–1000 °C.
  • Pt‐10%Rh/Pt (S type) for temperatures up to 1700 °C.
  • W‐Re/W up to 2500 °C.

The higher the maximum temperature for these couples, the lower their sensitivity. Locating the thermocouple inside the reaction vessel is obviously the best option for accurate temperature measurement, and the position of the thermocouple relative to the specimen within the furnace is also of major importance. For particularly corrosive environments, it is sometimes useful to use stainless steel or Inconel sheathing, particularly for K-type thermocouples. Even when the thermocouple tip is placed inside the reaction vessel, careful calibration measurements have to be performed to map the longitudinal and radial temperature gradients within the furnace. In all cases, effective draft proofing has to be placed at both ends of the furnace, between the reactor and furnace tubes, to avoid air convection currents that would disturb the temperature distribution within the vessel. The use of such insulation is particularly recommended for vertical furnaces.


Static or dynamic atmospheres may be used during oxidation testing. In the case of static atmospheres, the oxidant, usually a gas, is introduced into the reaction chamber after it has been evacuated, and the reaction vessel is then closed. Such atmospheres are characterized by the total pressure and the molar fraction (partial pressure, in the case of a gas) of each constituent. In the case of dynamic atmospheres, the oxidant continuously circulates in the open reaction chamber. The flow rate is then an additional experimental variable, and it is necessary to know this as part of the complete characterization of the oxidation test.


Static atmospheres should be used only when the reactive oxidizing species, assumed gaseous for illustration, are present in overwhelmingly large concentrations so that the products of reaction do not have any significant effect on the original concentration. In all other cases, only dynamic atmospheres can ensure constant partial pressures of the reacting constituents and control over the buildup of reaction products. Gas flow rates in the range of 0.1–10 mm s⁻¹ are commonly used, and simple mass balance calculations can be used to check that the concentration of oxidant is in large excess compared with the amount lost during the reaction. This condition has to be fulfilled to avoid affecting the kinetics of reaction through a limitation on the supply of oxidant. Such a limitation may occur, for example, when very small amounts of highly reactive gases (HCl, SO2, etc.) are diluted in an inert carrier gas. Experiments conducted at different flow rates can usefully establish whether kinetic limitations apply for particular test conditions.
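As a rough order-of-magnitude check, the sketch below (Python) compares the molar supply rate of a dilute reactive gas with an assumed consumption rate; all conditions (1000 ppm reactive species in argon, tube size, velocity, consumption rate) are illustrative assumptions, not values from the text.

# Sketch: checking that the oxidant supply in a flowing gas greatly exceeds
# the amount consumed by the reaction (all numbers are illustrative).
GAS_CONSTANT = 8.314          # J mol^-1 K^-1

# Assumed test conditions
linear_velocity = 5e-3        # gas velocity in the tube, m s^-1 (within 0.1-10 mm/s)
tube_radius = 0.02            # m
total_pressure = 1.0e5        # Pa
mole_fraction_reactive = 1e-3 # e.g. 1000 ppm HCl in Ar
temperature = 1173.0          # K

# Molar supply rate of the reactive species (ideal gas)
volumetric_flow = linear_velocity * 3.1416 * tube_radius**2            # m^3 s^-1
supply_rate = (total_pressure * volumetric_flow /
               (GAS_CONSTANT * temperature)) * mole_fraction_reactive  # mol s^-1

# Assumed consumption rate by the specimen (e.g. from the measured mass gain)
consumption_rate = 1e-9       # mol s^-1

print(f"supply / consumption ~ {supply_rate / consumption_rate:.1f}")
# A large ratio suggests the kinetics are not supply-limited; repeating the
# test at different flow rates confirms this experimentally.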


To monitor the extent of the corrosion reaction, a few common approaches are described below.



  • Geometrical monitoring uses the thickness of the growing oxide scale or of the recessing metal as a measure of the progress of the reaction. Discontinuous measurements are performed on one or several samples submitted to the same temperature and oxidizing environment. The measurement may be destructive (by using optical microscopy or SEM on cross sections), so that one sample is consumed for each data point, or nondestructive (by using ellipsometry; Rutherford backscattering, RBS; nuclear reaction analysis, NRA) so that the same sample may be used for several successive data point measurements. A continuous geometry‐related measurement, seldom used, consists in following the electrical resistance increase of a recessing metallic wire during scaling.
  • Manometric monitoring can be used when the reaction being studied consumes the gaseous oxidant without any gas release. The decrease of the oxidant pressure in a closed reaction vessel is then used as a monitoring parameter. A highly sensitive pressure transducer must be used to allow experiments to be performed at near‐constant pressure. An improved version of the method consists in using two vessels connected by a motorized vane. The first vessel contains the sample where the gas pressure is continuously measured and adjusted to a constant value by repeated small additions from the second vessel. In this latter vessel, the pressure decrease as a function of time is monitored and used as a measure of the progress of the reaction.
  • Gravimetric monitoring is the most commonly used method and consists in following the mass of the sample as a function of time. This can be done discontinuously by removing the specimen from the furnace, to allow weight measurements at room temperature, and then reinserting it. Continuous measurements are possible using thermobalances (the approach is known as thermogravimetric monitoring), of which highly reliable models are nowadays available, covering a large range of temperature and pressure conditions. It should be pointed out that the choice of the type of gravimetric measurement, continuous or discontinuous, is not without consequence and may influence the kinetic results. Discontinuous monitoring using the same sample imposes thermal cycling, possibly leading to scale degradation and accelerated corrosion. This type of monitoring is, however, closer to many industrial service conditions where high temperature parts are generally submitted to thermal cycling. For more academic purposes, continuous thermogravimetric measurements are preferred in order to better understand the mechanisms of isothermal corrosion.

Laboratory testing equipment and experimental monitoring of the corrosion extent other than oxidation processes are also briefly described in the context of the techniques analyzed below.


17.3 Optical Microscopy


Optical microscopy is a very common technique used for the corrosion morphology assessment, and it is outlined here for studying carbons and other lamellar structures.


Specimens are prepared by mounting the sample in a resin block and polishing the surface to optical flatness using alumina or diamond paste (if the specimen is massive enough, it can be polished without mounting). The polished surface is examined using reflected light microscopy usually with polarized light. To observe interference colors, due to the orientation of the graphitic lamellae at the surface, parallel polars are used with a half‐wave retarder plate between the specimen and the analyzer. The general arrangement is shown in Figure 17.1, together with diagrams illustrating the generation of the interference colors.


Figure 17.1 Polarized light optical microscopy and interference colors.


The appearance of a surface is called its optical texture. Figures 17.2–17.4 illustrate the type of textures observed. The size and shape of the isochromatic areas can be estimated. Dimensions vary from the limit of resolution (∼0.5 µm) to hundreds of micrometers. The nomenclature used to describe the features has been developed over many years, and discussions are still underway to establish a more standard system. Table 17.1 gives definitions of the classes of optical anisotropy together with a definition of the optical texture index (OTI), a factor that is useful for characterizing a carbon material.


Figure 17.2 Optical micrograph of coke surface – fine‐grained mosaic, OTI = 1.


Figure 17.3 Optical micrograph of coke surface – medium and coarse mosaic, OTI = 3.


Figure 17.4 Optical micrograph of coke surface – coarse flow, OTI = 30.


Table 17.1 Description of optical anisotropy and OTI

Nomenclature used to describe optical texture | OTI factor
Isotropic (Is and Ip), no optical activity | 0
Fine mosaics (F), <0.8 µm in diameter | 1
Medium mosaics (M), >0.8 to <2.0 µm in diameter | 3
Coarse mosaics (C), >2.0 to <10.0 µm in diameter | 7
Granular flow (GF), >2 µm in length, >1 µm in width | 7
Coarse flow (CF), >10 µm in length, >2 µm in width | 20
Lamellar (L), >20 µm in length, >10 µm in width | 30

By point counting the individual components of the microscope image and multiplying the fraction of points encountered for each component by the corresponding OTI factor and summing the values, the optical texture index for the sample can be obtained. This number gives a measure of the overall anisotropy of the carbon.
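A minimal sketch of this calculation is given below (Python); the point counts are hypothetical, and the OTI factors are those listed in Table 17.1.

# Sketch: computing the optical texture index (OTI) from point-count data
# (hypothetical counts; OTI factors taken from Table 17.1).
OTI_FACTORS = {
    "isotropic": 0, "fine_mosaic": 1, "medium_mosaic": 3,
    "coarse_mosaic": 7, "granular_flow": 7, "coarse_flow": 20, "lamellar": 30,
}

# Number of points falling on each texture class during point counting
counts = {"isotropic": 120, "fine_mosaic": 260, "medium_mosaic": 90,
          "coarse_mosaic": 20, "granular_flow": 5, "coarse_flow": 5, "lamellar": 0}

total = sum(counts.values())
oti = sum(OTI_FACTORS[c] * n / total for c, n in counts.items())
print(f"OTI ~ {oti:.2f}")   # weighted mean of the factors = overall anisotropy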


It must be stressed that this is only a comparative technique and that material characterized as isotropic may only be so at the available level of resolution. This becomes obvious when a surface is examined with a higher-grade objective (not just higher magnification), allowing finer anisotropy to be distinguished.


Other uses of optical microscopy in the study of carbon materials are mainly concerned with coal petrography, a discipline based on the measurement of reflectance from the coal surface and dealing with coal materials in terms of macerals. These forms are identifiable as deriving from the original plant material via the coalification process. A related area is the study of char forms derived from the pyrolysis of coal.


17.4 Thermogravimetry


The most widely used method to follow the kinetics of high temperature oxidation and corrosion is to measure the mass change (amount of metal consumed, amount of gas consumed, amount of scale produced) as a function of time at temperature: thermogravimetry. The simplest method of mass change monitoring is to use a continuous automatic recording balance. The apparatus suitable for this is shown diagrammatically in Figure 17.5, which is self-explanatory.


Figure 17.5 Typical experimental arrangement for measuring oxidation kinetics with an automatic recording balance (Birks et al. 2006).


To increase the accuracy of these balances toward the microgram range, the physical interactions of the sample with the gas must be taken into account and combated. Three of these interactions are of importance.


17.4.1 Buoyancy


Thermogravimetric kinetic measurements will be perturbed, and errors introduced, in all cases where buoyancy varies with time. To illustrate this, consider a solid of volume V immersed at temperature T in a perfect gas mixture of mean molar mass M at a pressure P. The effect of buoyancy can then be described by


17.1   Fb = αMPVg/T

where g is the acceleration due to gravity and α the ratio T°/P°V° of the temperature, pressure, and molar volume of gases under normal conditions. This relation first shows that buoyancy depends on temperature, so that temperature variations during the experiment lead to apparent mass changes, but these are easy to calculate. As an example, consider a typical rectangular sample with dimensions 20 × 10 × 2 mm³ placed in oxygen. Heating this sample from room temperature to 1000 °C will lead to an apparent increase in mass of approximately 0.5 mg. It follows that, to achieve maximum balance sensitivity under nominally isothermal measurements, care must be taken to ensure a high standard of temperature control of the furnace. For example, with temperature variations of ±3 °C near 300 °C, the apparent mass variations are of the order of ±2 µg; at higher temperatures, the error in the determination of mass is less, e.g. ±0.3 µg at 1000 °C, because of the lower gas density. Buoyancy force variations with the increase of sample volume during oxidation can also be calculated but are generally negligible compared with the accuracy of the thermobalances.
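The worked example above can be reproduced with the short sketch below (Python), which evaluates the mass of gas displaced by the coupon at the two temperatures from the ideal gas law; it is an illustration only and assumes pure oxygen at 1 bar.

# Sketch: apparent mass change due to buoyancy for the example in the text
# (20 x 10 x 2 mm^3 coupon in oxygen at 1 bar, heated from 25 to 1000 °C).
GAS_CONSTANT = 8.314      # J mol^-1 K^-1
M_O2 = 0.032              # kg mol^-1
pressure = 1.0e5          # Pa
volume = 20e-3 * 10e-3 * 2e-3   # m^3

def displaced_gas_mass(temperature_k):
    """Mass of gas displaced by the sample (ideal gas), in kg."""
    return pressure * M_O2 * volume / (GAS_CONSTANT * temperature_k)

apparent_gain = displaced_gas_mass(298.0) - displaced_gas_mass(1273.0)
print(f"apparent mass increase ~ {apparent_gain * 1e6:.2f} mg")  # roughly 0.4-0.5 mg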


17.4.2 Convection Currents


Convection currents result from the effect of gravity on temperature‐induced differences in the specific mass of gas at different locations within the reaction vessel. They become noticeable for pressures greater than 100–200 mbar and are manifested by convection loops that can perturb the thermogravimetric measurements.


In open static reaction vessels, the convection loops curl outside the reaction vessel, and the well‐known chimney effect is observed (Figure 17.6a). Such a configuration has to be avoided, or one end of the reaction vessel has to be plugged.


Figure 17.6 Convection currents in reaction vessels. (a) Vertical open vessel. (b) Closed static vessel.


In closed static reaction vessels, convection may occur due to radial temperature gradients, as shown schematically in Figure 17.6b, where the sample is envisaged to have a temperature slightly lower than that of the vessel wall. The convection loops in this case lead to an apparent mass increase. Near the top of the furnace, where strong radial and longitudinal temperature gradients exist, convection phenomena also occur. Such a region (20 cm above the furnace) is subject to turbulence that may be minimized by the use of thin suspension wires having no geometrical irregularities such as asperities or suspension hooks.


A semiquantitative assessment of the importance of these natural thermal gravity convection currents can be obtained through the use of the Rayleigh number, Ra. This dimensionless number is defined as


17.2   Ra = gβCpρ²b³ΔT/(ηK)

where g is the acceleration due to gravity, β the coefficient of thermal expansion of the gas (for a perfect gas: β = 1/T), Cp the thermal capacity of the gas at constant pressure, ρ the density of the gas, b the length of the non‐isothermal zone, ΔT the temperature difference, η the dynamic viscosity of the gas, and K the thermal conductivity of the gas.


For low values of the Rayleigh number (Ra < 40 000), natural thermal gravity convection may be neglected. For higher values, convection loops may perturb thermogravimetric measurements, sometimes creating very strong movements of the sample suspension system and leading to a drastic decrease in accuracy.
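For orientation, the sketch below (Python) evaluates Ra using the standard definition written with the quantities listed after Eq. (17.2); the property values (air-like gas near 600 K, a 10 cm non-isothermal zone, a 50 K temperature difference) are illustrative assumptions only.

# Sketch: order-of-magnitude Rayleigh number for a furnace gas column
# (illustrative property values; compare the result with the ~4e4 threshold).
g = 9.81          # m s^-2
T = 600.0         # K, mean gas temperature
beta = 1.0 / T    # K^-1, perfect gas
rho = 0.59        # kg m^-3, air near 600 K and 1 bar (approximate)
cp = 1050.0       # J kg^-1 K^-1
eta = 3.0e-5      # Pa s (approximate)
k_gas = 0.046     # W m^-1 K^-1 (approximate)
b = 0.10          # m, length of the non-isothermal zone
delta_T = 50.0    # K, temperature difference

Ra = g * beta * cp * rho**2 * b**3 * delta_T / (eta * k_gas)
print(f"Ra ~ {Ra:.2e}")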


In dynamic reaction vessels, the imposed gas flow leads to forced convection whose effects are generally greater than those due to natural convection. The flow behavior within the vessel can then be described by the Reynolds number (Re). This dimensionless number is defined as


17.3   Re = ρuL/η

where ρ is the density of the gas, u the linear velocity of the gas, L the characteristic length of the system, and η the dynamic viscosity of the gas, calculated using the following equation:


17.4   η = (2/(3σ))(mkT/π)^1/2

where k is Boltzmann's constant, m the mass of one gas molecule, T the absolute temperature of the reaction vessel, and σ the mean collisional cross section of a molecule (σ = 4πr², where r is the molecular radius).


This approach, though necessarily simple, provides insights into the factors that affect the accuracy of the experimental values obtained in thermogravimetric tests. It should be noted that the values given for the Rayleigh and Reynolds numbers have to be considered as an order‐of‐magnitude guide only. For example, a gas flow with a Reynolds number of 2000 may be turbulent in a tube of high internal roughness but perfectly laminar in a smooth silica tube.
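As an illustration, the sketch below (Python) estimates the gas viscosity from an elementary hard-sphere kinetic-theory expression (an assumption here, not necessarily the exact form used in the text) and then the Reynolds number of Eq. (17.3) for assumed conditions (oxygen at 1 bar and 900 °C flowing at 5 mm s⁻¹ through a 4 cm tube); the molecular radius and all other numbers are illustrative.

# Sketch: Reynolds number for the flowing test gas, with a simple hard-sphere
# kinetic-theory estimate of the viscosity (illustrative values only).
import math

k_B = 1.381e-23        # J K^-1
T = 1173.0             # K
m = 32 * 1.661e-27     # kg, one O2 molecule
r = 1.8e-10            # m, assumed effective molecular radius
sigma = 4 * math.pi * r**2          # collision cross section, as in the text

# Elementary kinetic-theory estimate (one common simple form; real gases differ)
eta = (2.0 / (3.0 * sigma)) * math.sqrt(m * k_B * T / math.pi)   # Pa s

rho = 1.0e5 * 0.032 / (8.314 * T)   # kg m^-3, ideal O2 at 1 bar
u = 5e-3                            # m s^-1, linear gas velocity
L = 0.04                            # m, tube diameter as characteristic length

Re = rho * u * L / eta
print(f"eta ~ {eta:.1e} Pa s, Re ~ {Re:.2e}")   # small Re indicates laminar flow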


17.4.3 Thermomolecular Fluxes


In contrast to convection forces, which are active at moderate and high gas pressures, thermomolecular fluxes appear at low pressures in the domain where the gas may be considered as a Knudsen gas. The term Knudsen gas describes a situation where molecules do not collide with each other but only with the vessel wall. Such behavior occurs when the dimensionless Knudsen number (Kn) is greater than 1:


17.5   Kn = λ/d

where λ is the mean free path of molecules and d is a characteristic distance, e.g. the tube radius for a cylindrical reaction vessel.
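The sketch below (Python) evaluates Kn for air at room temperature in a vessel of radius 5 cm (the geometry quoted in Table 17.2), using the standard hard-sphere mean free path with an assumed molecular diameter; it simply illustrates how the pressure domains in Table 17.2 arise.

# Sketch: Knudsen number for air in a cylindrical vessel of radius 5 cm.
import math

k_B = 1.381e-23      # J K^-1
T = 298.0            # K
d_mol = 3.7e-10      # m, assumed effective molecular diameter of air

def knudsen_number(pressure_pa, d_char=0.05):
    """Kn = lambda / d, with lambda the mean free path and d the tube radius."""
    mean_free_path = k_B * T / (math.sqrt(2.0) * math.pi * d_mol**2 * pressure_pa)
    return mean_free_path / d_char

for p_mbar in (1e-5, 1e-3, 1.0, 1000.0):
    kn = knudsen_number(p_mbar * 100.0)          # 1 mbar = 100 Pa
    print(f"P = {p_mbar:g} mbar -> Kn ~ {kn:.2e}")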


In this Knudsen domain, gas pressure relates to temperature according to


17.6   P1/P2 = (T1/T2)^1/2

Consider a cylindrical sample immersed in a Knudsen gas and submitted to a temperature difference T2 − T1 applied across a horizontal plane whose trace is XY in Figure 17.7. This sample is then submitted to two resulting Knudsen forces:



  • A resulting normal Knudsen force, Fn, acting on the horizontal circular section (Figure 17.7a).
  • A resulting tangential Knudsen force, Ft, acting on the vertical surfaces of the cylindrical sample (Figure 17.7b).

Figure 17.7 Normal and tangential forces acting on a vertical cylindrical sample submitted to a temperature gradient in a Knudsen gas.


The resulting force Fn arises from the difference in the pressure forces on the two circular bases of the cylinder (Figure 17.7a):


17.7equation

The resulting tangential force is due to the difference in the momentum of the molecules hitting the vertical surface of the sample and arriving from the upper (hot) or the lower (cold) part of the reaction vessel (Figure 17.7b). Such a force was predicted by Maxwell (1973). It can be expressed by


17.8equation

In the intermediate domain, defined by a Knudsen number between 1 and 10⁻⁵, normal and tangential forces are the result of a gas flux generated according to the exchange of momentum between molecules. The calculation of these forces is complex, but it can be shown that they increase with pressure and pass through a maximum before they decrease. This decrease results from the modification of gas properties and the appearance of a regime where the pressure in a closed isothermal vessel has a unique value.


Table 17.2 describes the different gas flow phenomena that may perturb thermogravimetric measurements and identifies the domains where they are active.


Table 17.2 Gas flow regimes where perturbation of thermogravimetric measurements may occur

Knudsen number | Pressure domain (mbar; air, 298 K, R = 5 cm) | Type of gas flow | Observations
10⁴–1 | 10⁻⁷–10⁻³ | Thermomolecular | Knudsen forces
1–10⁻⁵ | 10⁻³–100 | Intermediate | Knudsen forces
≤10⁻⁵ | ≥100 | Convection | Fluctuations around an equilibrium position

Table 17.3 Methods of material characterization by excitation and emission

Primary excitation | Detected emission | Methods of analysis: name and nomenclature
Photons (optical) | Optical | Spectroscopy: AA (atomic absorption), IR (infrared), UV (ultraviolet), visible
Photons (optical) | Electrons | UPS: vacuum UV photoelectron spectroscopy (outer shell)
Photons (X-rays) | Electrons | XPS: X-ray photoelectron spectroscopy (inner shell); also called ESCA, electron spectroscopy for chemical analysis
Photons (X-rays) | X-rays | XFS: X-ray fluorescence spectrometry; XRD: X-ray diffraction
Electrons | X-rays | EPMA: electron probe microanalysis
Electrons | Electrons | SEM: scanning electron microscopy; TEM: transmission electron microscopy; STEM: scanning transmission electron microscopy; SAM: scanning Auger microanalysis; AES: Auger electron spectroscopy
Ions | Optical | SCANIIR: surface composition by analysis of neutral and ion impact radiation
Ions | X-rays | IIXA: ion-induced X-ray analysis
Ions | Ions (±) | ToFMS: time-of-flight mass spectrometry; SIMS: secondary ion mass spectrometry; IPM: ion probe microanalysis; ISS: ion scattering spectrometry; RBS: Rutherford backscattering spectrometry
Radiation | Optical | ES: emission spectroscopy
Radiation | Ions (±) | SSMS: spark source mass spectrography

In order to limit all the perturbations described above, a symmetrical furnace setup is particularly efficient. To optimize performance, both furnaces are operated as nearly as possible at the same temperature and with the same temperature gradients.


Setaram, for example, supplies accurate thermobalances with this type of symmetrical furnace arrangement.


17.5 Spectroscopy


Chemical analysis by spectroscopy has made rapid advances in high temperature studies and almost always includes equipment for high‐resolution microscopy. Several books and monographs are available, including most of the old and newly developed techniques (Kofstad 1988; Marcus and Mansfeld 2006). Glow discharge mass spectroscopy (GDMS) is fast, sensitive, accurate, simple, and reliable and can be used for surface analysis if the specimen can be attached to a vacuum cell (Bernéron and Charbonnier 1982).


The depth-profiling resolution is similar to that of AES and SIMS. A 1 kV glow discharge causes ion bombardment and surface erosion, and the resulting optical emission is fed to a multichannel spectrometer for elemental analysis. Other long and complex methods of surface analysis, such as AES, SIMS, XPS, ISS (ion scattering spectroscopy), RBS, NRA, IIXA (ion-induced X-ray emission), and ESCA (electron spectroscopy for chemical analysis), are difficult for field use. Several authors have reviewed these methods. Tables 17.3–17.6 compare the techniques, and Figure 17.8 shows the relative sizes of the areas analyzed using these techniques.


Table 17.4 Summary of various characteristics of the analytical techniques

Characteristic | AES | XPS | ISS | SIMS | RBS | NRA | IIX
Sample alteration | High for alkali, halogen, organic insulators | Low | Low | Low | Very low | Very low | Very low
Elemental analysis | Good | Good | Good | Poor | Fair | Fair | Good
Sensitivity, variation, resolution | Good | Good | Fair | Good | Fair | Good | Good
Detection limits | 0.1% | 0.1% | 0.1% | 10⁻⁴% or higher | 10⁻³% or higher | 10⁻²% or higher | 10⁻²% or higher
Chemical state | Yes | Yes | No | Yes | No | No | No
Quantification | With difficulty, requires standards | With difficulty, requires standards | With difficulty, requires standards | With difficulty, requires standards | Absolute, no standards | Absolute, no standards | Absolute, no standards
Lateral resolution | 100 nm | 2 mm | 100 µm | 100 to 1 µm | 1 mm | 1 mm | 1 mm
Depth resolution | 200 nm | To atomic layer | To atomic layer | To atomic layer | 10 nm | 10 nm | None
Depth analysis | Destructive, sputter | Destructive, sputter | Destructive, sputter | Destructive, sputter | Nondestructive | Nondestructive | Very difficult

Table 17.5 Outline of some important techniques to study metallic surfaces

Technique | Abbreviation | Information | Comments
Optical microscopy | OM | Surface topography and morphology | Inexpensive but modest resolving power and depth of field
Transmission electron microscopy | TEM | Surface topography and morphology | Very high resolution but requires replication; artifacts can be a serious problem
Scanning electron microscopy | SEM | Surface topography and morphology; combined with X-ray spectroscopy gives "bulk" elemental analysis | Resolving power > optical microscopy; preparation easier than TEM and artifacts much less likely
X-ray photoelectron spectrometry | XPS (ESCA) | Chemical composition, depth profiling | Especially useful for studying adhesion of polymers to metals
Secondary ion mass spectroscopy | SIMS | Elemental analysis in "monolayer range," chemical composition, and depth profiling | Extremely high sensitivity for many elements
Auger electron spectroscopy | AES | Chemical composition, depth profiling, and lateral analysis | High spatial resolution that makes the technique especially suitable for composition-depth profiling
Contact angle measurement | | Contamination by organic compounds | Inexpensive; rapid

Table 17.6 Types of samples and techniques nearly appropriate for their analysis

Required sample analysis | Appropriate technique
Depth profiling of lower Z elements and thin films; trace or minor analysis of light elements; quantitative analysis | NRA
Depth profiling of higher Z elements and thin films; trace analysis of heavy elements in light matrix; quantitative analysis | RBS
Trace, minor, and major element analysis in thicker samples; quantitative analysis | IIX
Minor and major elements at surface or interface of small samples | AES
Trace elements at surface or interface of medium to small size samples; analysis of insulators; sputter profiling of light elements | SIMS
Chemical state analysis; analyses of organics and insulators | XPS
Analysis of outer atom layer; analysis of insulators | ISS

Figure 17.8 Schematic illustrating relative sizes of areas scanned by spectrometric analytical techniques.


Commonly used methods are optical microscopy and SEM for surface studies. TEM of interfaces has been explored. Selected area diffraction patterns (SADPs) show the orientation relationship between different grains. In a ceramic coating, the interface between different phases can be coherent, semi-coherent, or incoherent. Coherent phases are usually strained and can be studied by TEM contrast analysis. Other aspects of analytical electron microscopy are discussed elsewhere (Hansmann and Mosle 1982; Thoma 1986). TEM resolution is better than 1 nm, and selected volumes of 3 nm diameter can be chemically analyzed. Methods of preparing thin, electron-transparent TEM foils are described elsewhere (Doychak et al. 1989; Lang 1983).


Photoemission with synchrotron radiation can probe surfaces on an atomic scale (Ashworth et al. 1980; Pask and Evans 1980), but this method requires expensive equipment. Complex impedance measurements can separate surface and bulk effects, but problems of interpretation still need to be resolved (Marcus and Mansfeld 2006). X-ray and gamma radiographs, as used in weld inspection, can be used to inspect coatings for defects. The method has been discussed by Helmshaw (1982). Inclusions, cracks, porosity, and sometimes lack of fusion can be detected. Surface compositions of ion-implanted metals have been studied by RBS (Brewis 1982; Marcus and Mansfeld 2006). In this nondestructive way, a microanalysis of the near-surface region is obtained, and interpretation is relatively easy. Assessment of radiation damage in ion-implanted metals by electron channeling in the SEM has been described (Ashworth et al. 1980), as has the characterization of surface films (Marcus and Mansfeld 2006).


AES and XPS analyze only the top of the surface, and erosion by ion bombardment or mechanical tapering is needed to analyze deeper regions. AES detects 0.1% of an impurity monolayer in a surface. Auger electrons are produced by bombarding the surface with low-energy (1–10 keV) electrons. In XPS the surface is exposed to a soft X-ray source, and characteristic photoelectrons are emitted. In both AES and XPS, the detected electrons can escape from only about 1 nm below the surface, so these are surface analytical methods (Brewis 1982; Marcus and Mansfeld 2006).


It is most important to avoid contamination during preparation for surface analysis; semiquantitative in situ analysis by AES has been reported (Bosseboeuf and Bouchier 1985). Nitrides and other refractory compound coatings are frequently analyzed by AES and RBS methods. Depth and crater edge profiling have been carried out for direct-current (DC) magnetron sputtered and activated reactive evaporation (ARE) samples of (Ti, Al)N, TiN, and TiC coatings (John et al. 1987; Kaufherr et al. 1987). Round-robin characterization tests including a range of analyses such as XPS, EPMA, XRS, AES, APMA, and XRD are not uncommon. Among these, XRD was felt to be unreliable (Perry et al. 1987).


Ion spectroscopy is a useful technique for surface analysis (Marcus and Mansfeld 2006). ISS uses low-energy backscattered ions (Czanderna 1975) and has a high sensitivity. SIMS offers the possibility of sputter removal of layers, allowing depth profiling (Brewis 1982), and it can act as a stand-alone system for surface analysis. Three-dimensional (3D) SIMS of surface-modified materials and examination of ion implantation have been reported (Fleming et al. 1987). Lattice vacancy estimation by positron annihilation is another approach (Brunner and Perry 1987). TEM and SEM are valuable techniques, and replication methods using, for example, acetate replicas can nondestructively reveal surface features of specimens too thick for TEM (Brewis 1982; Grabke and Meadowcroft 1995). ARE coatings of V–Ti in C2H2 give wear-resistant (V, Ti)C coatings. The hardness is related to grain size, stoichiometry, free graphite, and cavity networks. SEM and XRD analysis could not explain the large hardness variations obtained by varying temperature and gas pressure, but TEM revealed microstructural changes (Grabke and Meadowcroft 1995; Lang 1983; Marcus and Mansfeld 2006). Beta backscatter and X-ray fluorescence have low sensitivity (0.5 cm² min⁻¹ and 1 cm² h⁻¹, respectively). Thickness and uniformity of silica coatings on steel have been determined by X-ray fluorescence measurements of Si concentrations along the surface (Bennett 1984; Lang 1983). Round-robin tests for microstructure and microchemical characterization of hard coatings have included XPS, UPS (UV photoelectron spectroscopy), AES, EELS (electron energy loss spectroscopy), EDX, WDX (wavelength-dispersive X-ray analysis), RBS, SIMS, TEM, STEM (scanning transmission electron microscopy), and XTEM (cross-sectional transmission electron microscopy) (Bennett 1984; Bennett and Tuson 1989). Field emission STEM has been applied for profiling Y across a spinel–spinel grain boundary (Bennett 1984; Bose 2007; Grabke and Meadowcroft 1995; Sundgren et al. 1986).


In summary, when studying oxidation behavior at high temperatures, the foremost requirement is to monitor the extent and kinetics of attack. To obtain a complete mechanism understanding, such data have to be augmented by precise details of all the processes involved, starting with the chemical reaction sequence, leading to the formation of gaseous products and solid products at the reacting surface. The development and failure of protective surface scales crucially govern the resistance of most materials in aggressive environments at elevated temperatures. Knowledge is also essential on the changes throughout the exposures of the scale chemical composition, physical structure (including topography), stress state, and mechanical properties as well as on the scale failure sequence (e.g. by cracking and spallation).


All these processes involved in high temperature oxidation are dynamic. Therefore, to obtain unambiguous information, the main experimental approach in research should be based on in situ methods. These can be defined as being techniques that either measure or observe directly high temperature oxidation processes, as they happen in real time. Although numerous in situ methods have been developed to date, with several notable exceptions, the most important being controlled atmosphere thermogravimetry, the deployment of these techniques has been often limited. This may be attributed largely to experimental difficulty and also to the lack of suitable equipment. Current understanding of the chemical and physical characteristics, stress state, and mechanical properties of oxidation scales largely derives from postoxidation investigations. In fact, certain detailed aspects, for example, variations in mechanical properties and microstructure through scales, can be revealed only by postoxidation studies. The two main experimental approaches, in situ oxidation and postoxidation, are not mutually exclusive, as they complement and augment each other. Nevertheless, at the current state of mechanistic knowledge of high temperature oxidation, further understanding of many critical facets (e.g. the breakdown of protective scales) will be revealed only by real‐time experimentation. These requirements taken in conjunction with recent advances in both commercial and experimental equipment design/capabilities and in data storage/processing make it imperative that all investigators in this field be fully aware of the available in situ experimental test methods.


The purpose of this section is to provide a very brief summary of some major techniques in current use, their main limitations, and scope for development. Information on the detailed methodology of any technique or the complete results of any specific study using any such technique should be obtainable from the following sections and/or the references given to published papers.


17.6 Diffraction Techniques


Diffraction techniques are the most important for the analysis of crystalline solids, providing both phase and structural information. The techniques of greatest interest in this area include XRD, low-energy electron diffraction (LEED), reflection high-energy electron diffraction (RHEED), and neutron scattering. XRD is included even though it is not a surface-specific technique, since it is by far the most common of the diffraction-based techniques used, i.e. it is the standard method for solving crystal structures for both single-crystal and powdered crystalline samples. Surface specificity is lost in XRD because of the geometry used and the fact that photons have long path lengths within solids for both elastic and inelastic collisions. The remaining methods are surface-specific techniques.


17.6.1 X‐Ray Diffraction


The average bulk structure of many materials can be readily revealed using XRD. The technique provides a measure of the amount of ordered material present and can be used to give an indication of the size of the crystallites that make up the ordered structure.


The samples are usually prepared either as powders in capillaries or spread on a flat sample holder. The XRD pattern is recorded either on film or with a diffractometer. Figure 17.9 shows the arrangement of a powder diffractometer.


Figure 17.9 X‐ray diffraction.


The resulting pattern is the amount of scattering over a range of scattering angles, θ, and can be analyzed in terms of diffraction peaks, their positions, and their widths. For the most accurate work, a standard (usually a crystalline salt) is added to the powder to provide internal calibration of the peak positions and widths, thereby allowing any instrumental factors to be taken into account.


Although the main scattering derives from the ordered material present, some indication of the amount of disorder can be obtained from the background scatter. Similarly, the broadening of the diffraction peaks allows an estimate of the mean particle size to be made. Line broadening arises from both the strain or defects in the lattice and the finite crystal size. Assuming that the defects in the lattice reduce the extent of order, an effective crystallite size, t, can be estimated from the amount of broadening, β, using the Scherrer equation


17.9   t = κλ/(β cos θ)

where λ is the wavelength, θ is the scattering angle, and the value of κ (∼1) depends upon the shape of the crystallite, e.g. κ has the values of 0.9 for Lc and 1.84 for La.


β is the amount of broadening due to the sample, and the observed broadening, B, usually needs to be corrected for the instrumental broadening, b, using relationships such as


17.10   β² = B² − b²
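A minimal sketch of this crystallite-size estimate is given below (Python); the peak position, the observed and instrumental widths, and the Cu Kα wavelength are illustrative assumptions, and the quadratic broadening correction shown above is one of the commonly used relationships.

# Sketch: crystallite size from X-ray line broadening (Scherrer equation),
# with a simple quadratic correction for instrumental broadening.
import math

wavelength = 1.5406e-10     # m, Cu K-alpha (an assumed source)
two_theta_deg = 26.5        # peak position, degrees (e.g. a carbon (002) peak)
B_deg = 0.80                # observed FWHM, degrees (illustrative)
b_deg = 0.10                # instrumental FWHM, degrees (illustrative)
kappa = 0.9                 # shape factor for Lc, as quoted in the text

theta = math.radians(two_theta_deg / 2.0)
beta = math.sqrt(math.radians(B_deg)**2 - math.radians(b_deg)**2)  # sample broadening, rad

t = kappa * wavelength / (beta * math.cos(theta))
print(f"Lc ~ {t * 1e9:.1f} nm")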

The measurement of line broadening is illustrated in Figure 17.10.


Figure 17.10 X‐ray diffraction line broadening.


The parameters usually quoted from XRD experiments are



  • d(002) interlayer spacing, Lc stack height, and La stack width.

These give some indication of the degree of crystallization of the material. One specialized area of diffraction is that of fiber diffraction. Both normal and small‐angle scattering patterns are used for this investigation. Materials with large surface area can also be successfully investigated by a combination of small (low) and normal (high) angle scattering. Small‐angle scattering has also been used to give some indication of pore structure.


17.6.2 Low‐Energy Electron Diffraction


Scattering of electrons from solid surfaces is one of the paradigms of quantum physics. The pioneering experiments of Davisson and Germer, in 1928, confirmed de Broglie's concept of the wave nature of particles (de Broglie 1925), a concept at the very heart of quantum mechanics (wave–particle dualism). Already in these early works, they recognized the potential of LEED as a tool for the determination of surface structures and applied it to gas adsorbate layers on Ni(111) (Davisson and Germer 1928a,b). This success was only possible due to two important properties of LEED: surface sensitivity and interference.


In fact, an alternation of the electron beam intensity with sample thickness was observed by Davisson and Germer (1928a,b). Electrons in an LEED experiment have a typical kinetic energy in the range of 20–500 eV.


Experiments with energies below this range are called very low‐energy electron diffraction (VLEED), and those with higher energies medium‐energy electron diffraction (MEED). At even higher energies, one uses grazing electron incidence and emission to obtain surface sensitivity, that is, RHEED. Due to the interaction of the incoming electron with the electrons in the sample, the former penetrates into the solid only a few angstroms. Typical penetration lengths taken from the “universal curve” (Figure 17.11) range from 5 to 10 Å (Rundgren 1999). Therefore, LEED spectra usually carry less information about the geometrical structure of the volume of the solid, i.e. the bulk, than of the solid’s surface region.


Figure 17.11 Compilation for elements of the inelastic mean free path λm (dots) in monolayers as a function of energy above the Fermi level. This "universal curve" is almost independent of the solid, for example, of surface orientation or elemental composition. The solid line serves as a guide to the eye (Seah and Dench 1979).


Concerning interference, de Broglie showed that a particle with momentum p can be associated with a wave with wavelength 2π/p (p = |p|). For example, an electron in vacuum can be described by a plane wave:


17.11   ψ(r) = exp(ik·r)

with wavenumber k = 2π/λ and energy E = ω = k²/2 (in atomic units). De Broglie's picture of electrons as waves and the interpretation of the Davisson–Germer LEED experiments led to the question: Are electrons waves? (Davisson 1928). Comparison was made to X-ray scattering in view of the determination of structural information, and Davisson came to the conclusion that if X-rays are waves, then electrons are, too. However, he admitted that the picture of electrons as particles is better suited for the explanation of the Compton effect or the photoelectric effect.
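For orientation, the short sketch below (Python) evaluates the non-relativistic de Broglie wavelength for the 20–500 eV energies quoted above, showing why LEED electrons are strongly diffracted by interatomic spacings; the constants are standard values.

# Sketch: de Broglie wavelength of LEED electrons (values in angstroms).
import math

H = 6.626e-34        # J s
M_E = 9.109e-31      # kg
EV = 1.602e-19       # J

def electron_wavelength_angstrom(energy_ev):
    """Non-relativistic de Broglie wavelength, lambda = h / sqrt(2 m E)."""
    return H / math.sqrt(2.0 * M_E * energy_ev * EV) * 1e10

for e in (20, 100, 500):
    print(f"E = {e:>3d} eV -> lambda ~ {electron_wavelength_angstrom(e):.2f} A")
# Roughly 2.7, 1.2, and 0.55 A: comparable to lattice spacings, hence the
# strong diffraction and interference exploited in LEED.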


The schematic setup of a LEED experiment is shown in Figure 17.12. A monoenergetic beam of electrons with kinetic energy E impinges on the sample. The reflected electron beams are detected and analyzed with respect to their direction and energy. Usually, one detects only elastically reflected electrons (for which the energy is conserved) and uses incidence normal to the surface. Therefore, a set of LEED spectra – or I(E) curves – represents the current I of each reflected beam versus the initial energy E. Note that the reflected intensities are roughly as large as 1/1000 of the incoming intensity.


Figure 17.12 Scheme of the LEED setup. An incoming beam of electrons e is elastically scattered by the solid. The latter is considered as a compound of the substrate (gray circles) and a thin film (black circles). A reflected electron beam is detected. The dashed–dotted arrow represents the surface normal.


In the wave picture of electrons, the LEED experiment can be regarded as follows. An incoming plane wave, the incident beam, is scattered at each site, and the outgoing plane waves, the outgoing beams, are measured. Both amplitude and phases of each outgoing wave are determined by the scattering properties and the position of each scatterer. For example, a change in the position of a scatterer will change the wave pattern in the solid and, therefore, will affect both amplitudes and phases of the outgoing waves. Because the LEED current of a beam is given by the wave amplitude, it carries information on both positions and scattering properties of the sites. This mechanism can be used, for instance, to obtain images of the geometrical structure in configuration space by LEED holography (Heinz et al. 2000).


Although LEED is sensitive to the outermost region of the sample, it is capable of detecting fingerprints of the electronic states of the film–substrate system. As mentioned previously, the LEED intensities depend on the electronic structure of the sample above the vacuum level and, therefore, contain information of the electronic structure of the entire film. In particular, quantized states that are confined to the film have a pronounced effect on the LEED spectra.


It took considerable time to develop theories that include multiple scattering of the LEED electron (Kambe 1967), which obviously is necessary for a proper description of LEED spectra. Textbooks that introduce the field and present computer codes for the calculation of I(E) spectra were written by Pendry (1974) as well as van Hove et al. (1986). Van Hove and Tong also provide review articles (1979).


Additional information can be obtained if one uses a spin‐polarized beam of incoming electrons, that is, spin‐polarized low‐energy electron diffraction (SPLEED), and uses a spin‐sensitive detector, for example, a Mott detector or a SPLEED detector. Interestingly, the latter exploits the LEED mechanism itself for a spin resolution in the experiment. Pioneering works were carried out by Feder (1985) and by Kirschner and Feder (1979) on the experimental side.


17.6.3 Reflection High‐Energy Electron Diffraction


The RHEED study of oxides has a relatively short history, and good general introductions to this matter date from Lagally (1985) and Lagally and Savage (1993); there one can find a clear explanation of electron diffraction, the reciprocal-space representation, reflection from imperfect surfaces, and so on.


RHEED patterns result from and contain detailed information on the crystalline properties of surfaces. In the field of oxide thin films, RHEED analysis is currently used mainly for qualitative information, simply to watch the diffraction patterns and to note their evolution in time. Even this can be quite useful and can reveal atomic scale information on how a sample is growing, with reference to phenomenology developed by experience. As discussed below, it is possible to distinguish a flat two-dimensional (2D) surface from one having (usually unwanted) 3D nanoparticles. For the time being, we consider the former case.


At least in principle, the diagnostic value of RHEED monitoring could be substantially increased if it were supplemented by a quantitative analysis of the entire RHEED pattern. Ultimately, one could envision a numerical routine that solves the inverse problem in real time, that is, computes the real‐space atomic arrangement on the surface that corresponds to a given RHEED image.


In Figure 17.13, a schematic of a typical calculation and of an experimental RHEED setup is shown. The electron beam (E) impinges onto the film surface (F) at a nearly grazing angle (θ) and is reflected (E′) onto the screen (S).


Figure 17.13 The model used in this chapter to calculate RHEED patterns. The electron beam (E) impinges onto the film (F) at a nearly grazing angle (θ). The reflected beam (E′) forms a diffraction pattern on the screen S.


The electron beam is coherent over several thousand angstroms and is nearly monoenergetic; in our case, the electron energy is typically ≈8.5–10.0 keV. The film surface is approximated as a perfect but finite rectangular N × M array of atoms with the lattice periods a and b, respectively.


Since typically a ≈ b ≈ 4 Å > λe ≈ 0.15 Å, the electrons are strongly diffracted, and one expects to see multiple interference maxima on the screen. In this case, the refinement procedure provides an estimate of the size of in‐plane crystallographic coherence, that is, an estimate of the numbers N and M.


To calculate the diffraction pattern, the scattering contributions from all of the atoms are summed. The resulting intensity distribution on the screen is given by


17.12equation

where


17.13equation

and


17.14equation

where E and m are the electron energy and mass and r = (ia)ex + (ja)ez, Ra = Xaex + Yaey + Zaez, and R = Xex + Yey + Zez are the positions of the (i, j)th atom in the plaquette, the electron source, and the screen pixel, respectively.


When the atoms are not all identical, their contributions are weighted by the respective form factors, Sij (not shown in Eq. 17.13).
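A hedged sketch of this type of kinematic sum is given below (Python): each atom of a finite N × M lattice contributes a phase set by the source–atom–screen path length, and the screen intensity is the squared magnitude of the summed amplitudes. The geometry, distances, and all numerical values are illustrative assumptions and do not reproduce the exact expressions of Eqs. (17.12)–(17.14), which are not given above.

# Sketch of a kinematic (single-scattering) RHEED pattern calculation for a
# finite N x M lattice; identical atoms, far-field geometry, illustrative values.
import numpy as np

E_KEV = 10.0
LAM = 12.26 / np.sqrt(E_KEV * 1000.0)        # electron wavelength, A (non-relativistic)
K = 2.0 * np.pi / LAM

N, M, A = 10, 100, 4.0                        # lattice size and period (A)
i, j = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
atoms = np.stack([i * A, np.zeros_like(i, float), j * A], axis=-1).reshape(-1, 3)

theta = np.radians(0.5)                       # grazing incidence angle
L_SRC = L_SCR = 1.0e6                         # source / screen distances, A (far field)
source = np.array([0.0, L_SRC * np.sin(theta), -L_SRC * np.cos(theta)])

def intensity(x_pix, y_pix):
    """Summed amplitude at one screen pixel (kinematic approximation)."""
    pixel = np.array([x_pix, y_pix, L_SCR])
    path = (np.linalg.norm(atoms - source, axis=1)
            + np.linalg.norm(pixel - atoms, axis=1))
    return np.abs(np.exp(1j * K * path).sum()) ** 2

# Coarse map of part of the screen (arbitrary units)
xs = np.linspace(-2e5, 2e5, 81)
ys = np.linspace(0.0, 2e5, 41)
pattern = np.array([[intensity(x, y) for x in xs] for y in ys])
print(pattern.shape, pattern.max())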


As the first step to aid the reader's intuition, in Figure 17.14 we show the calculated RHEED patterns for three ideal but finite atomic lattices, 10 × 10, 10 × 100, and 10 × 1000 unit cells, respectively, with a = b = 4 Å. These three cases correspond to different degrees of crystallographic coherence in the direction in which the electron beam is propagating. Here we have taken Za = Z = 1 000 000 Å, ensuring that we are in the far-field regime. We set Ya = 0, which is equivalent to setting the azimuthal angle of the sample crystal to zero; the beam is almost parallel to the <100> film direction, except for the out-of-plane tilt by θ = 0.5°. It can be seen from Figure 17.14 that as the long-range order along the beam direction is increased, the streaks get shorter and eventually evolve into a spot – the specular reflection image of the incoming electron beam. Note that a substantial domain of local flatness is required to obtain coherent scattering into a specularly reflected spot.


Figure 17.14 Calculated RHEED patterns for three finite atomic lattices: (a) 10 × 10, (b) 10 × 100, and (c) 10 × 1000 unit cells, respectively. The unit cell is 4 × 4 Å². The incidence angle is Θ = 2.5°, off the (100) direction.


The presence of the specular spot is suggestive of essentially perfect crystalline order extending over thousands of unit cells. On the other hand, when they appear as streaks, this shows evidence for the presence of terracing and other surface irregularities. Hence, some amount of surface roughness or disorder typically transforms interference spots into streaks.


In Figure 17.15 we show the calculated RHEED pattern for the same 10 × 10 lattice model as in Figure 17.14a but with Ya = 200 000 Å; this corresponds to an azimuthal rotation of the sample by arctan(0.2) = 11.3°. As the crystal is rotated around the (001) axis, the pattern changes correspondingly. Reversing the argument, one can infer the crystal orientation from the shape of the RHEED pattern. In practice, one simply rotates the substrate azimuthal orientation and brings it to a desired "low-index" orientation for analysis during growth.


Figure 17.15 Calculated RHEED pattern for the same model as in Figure 17.14a but rotated azimuthally (around the vertical axis) by 11.3°.


In recent years, there has been a surge of interest in deposition of high‐quality (i.e. single‐crystal) films of complex oxides such as cuprate superconductors or perovskite ferroelectrics. The RHEED method is currently one of the important techniques for in situ real‐time analysis of the growing surface of such films.


Here, we have reviewed some basic issues specific to RHEED monitoring of deposition of complex oxides.


RHEED analysis can also be made quantitative. This is done by performing numerical simulations and comparing the calculated RHEED patterns with the experimentally observed ones. Real‐time RHEED analysis can be expected to improve further; for example, intelligent programs may be developed, including pattern recognition based on a built‐in library of RHEED images, to aid the grower and ultimately even to control the growth.


17.6.4 Neutron Scattering


A neutron is an uncharged (electrically neutral) subatomic particle with mass m = 1.675 × 10−27 kg (1839 times that of the electron), spin 1/2, and magnetic moment −1.913 nuclear magnetons. Neutrons are stable when bound in an atomic nucleus but have a mean lifetime of approximately 1000 seconds as free particles. Neutrons and protons account for nearly the entire mass of atomic nuclei and are therefore both called nucleons. Neutrons are classified according to their wavelength and energy as “epithermal” for short wavelengths (λ ∼ 0.1 Å) and “thermal” and “cold” for long wavelengths (λ ∼ 10 Å). The desired range of λ is obtained by moderation of the neutrons during their production, either in reactors or in spallation sources.


Neutrons interact with matter through strong, weak, electromagnetic, and gravitational interactions. However, it is their interactions via two of these forces – the short‐range strong nuclear force and their magnetic moments – that make neutron scattering such a unique probe for condensed matter research. The most important advantages of neutrons over other forms of radiation in the study of structure and dynamics on a microscopic level are summarized below:



  • Neutrons are uncharged, which allows them to penetrate the bulk of materials. They interact via the short‐range strong nuclear force with the nuclei of the material under investigation.
  • The neutron has a magnetic moment that couples to spatial variations of magnetization on the atomic scale. Neutrons are therefore ideally suited to the study of magnetic structures and of the fluctuations and excitations of spin systems.

The energy and wavelength of neutrons may be matched, often simultaneously, to the energy and length scales appropriate for the structure and excitations in condensed matter. The wavelength, λ, is dependent on the neutron velocity following the de Broglie relation


17.15 \( \lambda = h/(m v) \)

where h is Planck’s constant (6.626 × 10−34 J s) and v the particle velocity. The associated kinetic energy is


17.16 \( E = \tfrac{1}{2} m v^{2} = h^{2}/(2 m \lambda^{2}) \)

Because their energy and wavelength depend on their velocity, it is possible to select a specific neutron wavelength by the time‐of‐flight (TOF) technique. Neutrons do not significantly perturb the system under investigation, so the results of neutron scattering experiments can be clearly interpreted. Neutrons are nondestructive, even to delicate biological materials. The high penetrating power of neutrons allows probing the bulk of materials and facilitates the use of complex sample environment equipment (e.g. for creating extremes of pressure, temperature, shear, and magnetic fields). Neutrons scatter from materials by interacting with the nucleus of an atom rather than the electron cloud. This means that the scattering power (cross section) of an atom is not strongly related to its atomic number, unlike X‐rays and electrons, for which the scattering power increases in proportion to the atomic number. Therefore, with neutrons, light atoms such as hydrogen (deuterium) can be distinguished in the presence of heavier ones. Similarly, neighboring elements in the periodic table generally have substantially different scattering cross sections and so can be distinguished. The nuclear dependence of scattering also allows isotopes of the same element to have substantially different scattering lengths for neutrons. Hence, isotopic substitution can be used to label different parts of the molecules making up a material.
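As a quick numerical illustration of Eqs. 17.15 and 17.16 and of the TOF idea, the minimal sketch below converts a neutron velocity into a wavelength and energy, and a flight time into a wavelength; the 2200 m/s velocity, the 10 m flight path, and the flight time are assumed example values, not instrument parameters from the text.

# De Broglie wavelength and kinetic energy of a neutron from its velocity (Eqs. 17.15/17.16),
# and the inverse: wavelength from a time-of-flight measurement over a known flight path.
h = 6.626e-34          # Planck constant, J s
m_n = 1.675e-27        # neutron mass, kg (value quoted in the text)
eV = 1.602e-19         # J per eV

v = 2200.0             # m/s, typical thermal-neutron velocity (assumed example)
lam = h / (m_n * v)    # Eq. 17.15
E = 0.5 * m_n * v**2   # Eq. 17.16
print(f"lambda = {lam*1e10:.2f} Angstrom, E = {E/eV*1000:.1f} meV")   # ~1.80 Angstrom, ~25 meV

# Time of flight: a neutron detected t seconds after the pulse over a flight path L
L, t = 10.0, 5.55e-3   # m, s (assumed example values)
lam_tof = h * t / (m_n * L)
print(f"TOF wavelength = {lam_tof*1e10:.2f} Angstrom")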


Neutron beams may be produced in two general ways: by nuclear fission in reactor‐based neutron sources or by spallation in accelerator‐based neutron sources. The world’s two most intense neutron sources are the Institut Laue–Langevin (ILL) in Grenoble, France (ILL World Wide Web), and the ISIS facility at the Rutherford Appleton Laboratory in Didcot, UK (ISIS World Wide Web).


Neutrons have traditionally been produced by fission in nuclear reactors optimized for high neutron brightness. In this process, thermal neutrons are absorbed by uranium‐235 nuclei, which split into fission fragments and evaporate very high‐energy (megaelectron volt) neutrons, providing a constant neutron flux (hence the term “steady‐state” or “continuous” source). After the high‐energy neutrons have been thermalized to millielectron volt energies in the surrounding moderator, beams are emitted with a broad band of wavelengths. The energy distribution of the neutrons can be shifted to higher energy (shorter wavelength) by allowing them to come into thermal equilibrium with a “hot source” (at the ILL this is a self‐heating graphite block at 2400 K) or to lower energies with a “cold source” such as liquid deuterium at 25 K (Finney and Steigenberger 1997). The resulting Maxwell distributions of energies have the characteristic temperatures of the moderators (Figure 17.16a). Wavelength selection is generally achieved by Bragg scattering from a crystal monochromator or by velocity selection through a mechanical chopper. In this way, high‐quality, high‐flux neutron beams with a narrow wavelength distribution are made available for scattering experiments. One of the most powerful reactor neutron sources in the world is the 58 MW high‐flux reactor (HFR) at the ILL.


Figure 17.16 (a) Typical wavelength distributions for neutrons from a reactor, showing the spectra from a hot source (2400 K), a thermal source, and a cold source (25 K). The spectra are normalized so that the peaks of the Maxwell distributions are unity. (b) Typical wavelength spectra from a pulsed spallation source. The H2 and CH4 moderators are at 20 and 100 K, respectively. The spectra have a high‐energy “slowing” component and a thermalized component with a Maxwell distribution. Again, the spectra are normalized at unity. (c) Neutron flux as a function of time at a steady‐state source (gray) and a pulsed source (black). Steady‐state sources, such as ILL, have high time‐averaged fluxes, whereas pulsed sources, such as ISIS, are optimized for high brightness (not drawn to scale) (ISIS World Wide Web).


In accelerator sources, neutrons are released by bombarding a heavy metal target (e.g. U, Ta, W) with high‐energy particles (e.g. H+) from a high‐power accelerator – a process known as spallation. The methods of particle acceleration tend to produce short, intense bursts of high‐energy protons and hence pulses of neutrons. Spallation releases much less heat per useful neutron than fission (typically 30 MeV per neutron, compared with 190 MeV in fission). The low heat dissipation means that pulsed sources can deliver high neutron brightness – exceeding that of the most advanced steady‐state sources – with significantly less heat generation in the target. As noted above, one of the most powerful spallation neutron sources in the world is the ISIS facility, which is based around a 200 µA, 800 MeV proton synchrotron operating at 50 Hz and a tantalum (Ta) target that releases approximately 12 neutrons for every incident proton.


At ISIS, the production of particles energetic enough to result in efficient spallation involves three stages:



  1. Production of H− ions (a proton with two electrons) from hydrogen gas and acceleration in a pre‐injector column to an energy of 665 keV.
  2. Acceleration of the H− ions to 70 MeV in the linear accelerator (Linac), which consists of four accelerating tanks.
  3. Final acceleration in the synchrotron – a circular accelerator 52 m in diameter that accelerates 2.8 × 1013 protons per pulse to 800 MeV. As they enter the synchrotron, the H− ions pass through a very thin (0.3 µm) alumina foil, which strips both electrons from each H− ion to produce a proton beam. After traveling around the synchrotron (approximately 10 000 revolutions), with acceleration on each revolution by electromagnetic fields, the 800 MeV proton beam is kicked out of the synchrotron toward the neutron production target. The entire acceleration process is repeated 50 times a second.

Collisions between the proton beam and the target nuclei generate neutrons in large quantities and of very high energies. As in fission, they must be slowed by passage through moderating materials so that they have the right energy (wavelength) to be useful for scientific investigations. This is achieved by hydrogenous moderators around the target. These exploit the large inelastic scattering cross section of hydrogen to slow down the neutrons passing through by repeated collisions with the hydrogen nuclei. The moderator temperature determines the spectral distribution of the neutrons produced, and this can be tailored for different types of experiments (Figure 17.16b). The moderators at ISIS are ambient‐temperature water (316 K, H2O), liquid methane (100 K, CH4), and liquid hydrogen (20 K, H2).


The characteristics of the neutrons produced by a pulsed source are therefore significantly different from those produced at a reactor (Figure 17.16c). The time‐averaged flux (in neutrons per second per unit area) of even the most powerful pulsed source is low in comparison with reactor sources. However, the judicious use of TOF techniques that exploit the high brightness in the pulse can compensate for this. Using TOF techniques on the white neutron beam gives a direct determination of the energy and wavelength of each neutron.


Scattering events arise from radiation–matter interactions and produce interference patterns that give information about spatial and/or temporal correlations within the sample. Different modes of scattering may occur: elastic or inelastic, and also coherent or incoherent. Coherent scattering from ordered nuclei produces patterns of constructive and destructive interference that contain structural information, while incoherent scattering results from random events and can provide dynamic information. In small‐angle neutron scattering (SANS) (Bacon 1977), only coherent elastic scattering is considered; incoherent scattering, which appears as a background, can be easily measured and subtracted from the total scattering. To put it another way, coherent scattering is “in phase” and thus can contribute to small‐angle scattering, whereas incoherent scattering is isotropic and, in a small‐angle experiment, merely contributes to the background signal and degrades the signal‐to‐noise ratio.


Neutrons interact with the atomic nucleus via strong nuclear forces operating at very short range (∼10−15 m), i.e. much smaller than the incident neutron wavelength (∼10−10 m). Therefore, each nucleus acts as a point scatterer to the incident neutron beam, which may be considered as a plane wave. The strength of interaction of free neutrons with the bound nucleus can be quantified by the scattering length, b, of the atom, which is isotope dependent. In practice, the mean coherent neutron scattering length density, ρcoh, abbreviated as ρ, is a more appropriate parameter to quantify the scattering efficiency of different components in a system. As such, ρ represents the scattering length per unit volume of substance and is the sum over all atomic contributions in the molecular volume Vm:


17.17 \( \rho = \frac{1}{V_{m}} \sum_{i} b_{i,\mathrm{coh}} = \frac{D N_{a}}{M_{w}} \sum_{i} b_{i,\mathrm{coh}} \)

where bi, coh is the coherent scattering length of the ith atom in the molecule of mass density D and molecular weight Mw. Na is Avogadro’s constant. Some useful scattering lengths are given in Table 17.7, and scattering length density for selected molecules in Table 17.8 (King 1997). The difference in b values for hydrogen and deuterium is significant, and this is exploited in the contrast variation technique to allow different regions of molecular assemblies to be examined; i.e. one can “see” proton‐containing hydrocarbon‐type material dissolved in heavy water, D2O.
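Eq. 17.17 is straightforward to evaluate numerically. The short sketch below reproduces the D2O entry of Table 17.8 from the b values in Table 17.7; the mass density (1.105 g/cm3) and molar mass (20.03 g/mol) of D2O are assumed literature values not given in the text.

# Coherent scattering length density (Eq. 17.17): rho = (D * Na / Mw) * sum(b_i,coh)
NA = 6.022e23                    # Avogadro's constant, 1/mol

def sld(b_sum_cm, density_g_cm3, molar_mass_g_mol):
    """Return the coherent scattering length density in cm^-2."""
    return b_sum_cm * density_g_cm3 * NA / molar_mass_g_mol

# D2O: two deuterons and one 16O (b values from Table 17.7, in units of 10^-12 cm)
b_sum = (2 * 0.6671 + 0.5803) * 1e-12       # cm
rho = sld(b_sum, 1.105, 20.03)              # density and molar mass are assumed values
print(f"rho(D2O) = {rho/1e10:.2f} x 10^10 cm^-2")   # ~6.36, cf. 6.356 in Table 17.8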


Table 17.7 Selected values of coherent scattering length, b (King 1997)

Nucleus    b/(10−12 cm)
1H         −0.3741
2H (D)      0.6671
12C         0.6646
16O         0.5803
19F         0.5650
23Na        0.3580
31P         0.5131
32S         0.2847
Cl          0.9577

Table 17.8 Coherent scattering length density of selected molecules, ρ, at 25 °C

Molecule                               ρ/(1010 cm−2)
Water, H2O                             −0.560
D2O                                     6.356
Heptane, C7H16                         −0.548
C7D16                                   6.301
AOT, (C8H17COO)CH2CHSO3 (Na+)           0.542
(C8D17COO)CH2CHSO3 (Na+)                5.180a

aValue calculated for the deuterated form of the surfactant ion only (i.e. without sodium counterions) and where the tails only are deuterated (King 1997).


In neutron scattering experiments, the intensity I is measured as a function of a scattering angle, θ, which in the case of SANS is usually less than 10°.


For coherent elastic scattering, the incident wave vector ko and the scattered wave vector ks have equal magnitudes, |ko| = |ks| = 2πn/λ, where n is the refractive index of the medium (for neutrons, n ≈ 1). The magnitude of the scattering vector, |Q| = |ks − ko|, then follows from geometry as


17.18 \( \lvert \mathbf{Q} \rvert = \frac{4 \pi n}{\lambda} \sin\!\left( \frac{\theta}{2} \right) \)

The magnitude Q has dimensions of reciprocal length, with units commonly given in Å−1; large structures scatter at low Q (and low angle) and small structures at higher Q values.
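A small numerical illustration of Eq. 17.18 follows; the 6 Å neutron wavelength and the scattering angles are assumed example values, and n is taken as 1.

# Magnitude of the scattering vector (Eq. 17.18): Q = (4*pi*n/lambda) * sin(theta/2)
import math

def q_magnitude(theta_deg, wavelength_A, n=1.0):
    return 4.0 * math.pi * n / wavelength_A * math.sin(math.radians(theta_deg) / 2.0)

lam = 6.0                                  # Angstrom, typical cold-neutron SANS wavelength (assumed)
for theta in (0.5, 2.0, 10.0):             # scattering angles in degrees
    Q = q_magnitude(theta, lam)
    print(f"theta = {theta:>4} deg -> Q = {Q:.4f} A^-1, probed size ~ 2*pi/Q = {2*math.pi/Q:.0f} A")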


Radiation detectors are not sensitive to phase shifts and therefore do not measure amplitudes; instead, they measure the intensity Isc of the scattering (or power flux), which is the squared modulus of the amplitude


17.19 \( I_{\mathrm{sc}}(Q) = \lvert A(Q) \rvert^{2} \)

For an ensemble of np identical particles, Eq. 17.19 becomes (Dickinson 1995)


17.20 \( I_{\mathrm{sc}}(Q) = n_{p} \left\langle \lvert A(Q) \rvert^{2} \right\rangle_{o,s} \)

where the ensemble averages are over all orientations, o, and shapes, s.


Therefore, there is a convenient relationship (Eq. 17.18) between the two instrumental variables, θ and λ, and the reciprocal distance, Q, which is related to the positional correlations r between point scattering nuclei in the sample under investigation. These parameters are related to the scattering intensity I(Q) (Eq. 17.20), which is the quantity measured in an SANS experiment and contains information on intraparticle and interparticle structure.


17.7 Electron Microscopy


SEM and transmission electron microscopy (TEM) are the two major types of electron microscopy used to examine materials; they present excellent opportunities for structural study and, equally, problems in specimen preparation and interpretation of the images produced. Two other techniques, high‐resolution transmission electron microscopy (HRTEM) and low‐energy electron microscopy (LEEM), also deserve consideration today. In this section, these four techniques are briefly covered.


17.7.1 Scanning Electron Microscopy


SEM is a central tool for surface imaging and analysis. The SEM operates by scanning the surface with a beam of electrons, which are generated by an electron gun and focused by magnetic lenses down to a diameter of about 10 Å at the specimen. The electrons interact with atoms at the surface, leading to the emission of new electrons. These emitted electrons are collected and counted by a detector. SEM can be used in a mode detecting either secondary electrons (SEs) or backscattered electrons (BSEs). SEs can escape only from a shallow region and offer the best image of surface topography. BSEs undergo a number of collisions before eventually scattering back out of the surface. They are generated from a larger region than SEs and provide information about specimen composition, since heavier elements generate more BSEs and thus a brighter image.


The quality and resolution of SEM images are a function of three major parameters: (i) instrument performance, (ii) selection of imaging parameters (e.g. operational control), and (iii) the nature of the specimen. All three aspects operate concurrently, and none of them should, or can, be ignored or overemphasized.


One of the most surprising aspects of SEM is the apparent ease with which SEM images of 3D objects can be interpreted by any observer with no prior knowledge of the instrument. This is somewhat surprising in view of the unusual way in which the image is formed, which seems to differ greatly from normal human experience with images formed by light and viewed by the eye.


The main components of a typical SEM are electron column, scanning system, detector(s), display, vacuum system, and electronic controls (Figure 17.17).


Figure 17.17 Main components of a typical SEM.


The electron column of the SEM consists of an electron gun and two or more electromagnetic lenses operating in vacuum. The electron gun generates free electrons and accelerates them to energies in the range of 1–40 keV. The purpose of the electron lenses is to create a small, focused electron probe on the specimen. Most SEMs can generate an electron beam at the specimen surface with a spot size of less than 10 nm in diameter while still carrying sufficient current to form an acceptable image. Typically, the electron beam is defined by the probe diameter (d), in the range of 1 nm to 1 µm; the probe current (ib), picoamperes to microamperes; and the probe convergence (α), 10−4–10−2 rad.


In order to produce images, the electron beam is focused into a fine probe, which is scanned across the surface of the specimen with the help of scanning coils (Figure 17.17). Each point on the specimen that is struck by the accelerated electrons emits a signal in the form of electrons and electromagnetic radiation. Selected portions of this signal, usually the SEs and/or BSEs, are collected by a detector, and the resulting signal is amplified and displayed on a TV screen or computer monitor. The resulting image is generally straightforward to interpret, at least for topographic imaging of objects at low magnifications.


The electron beam interacts with the specimen to a depth of approximately 1 µm. Complex interactions of the beam electrons with the atoms of the specimen produce a wide variety of radiation. The need to understand the process of image formation for reliable interpretation of images arises in special situations, mostly in the case of high‐magnification imaging. In such cases, knowledge of electron optics, beam–specimen interactions, detection, and visualization processes is necessary to use the SEM to its full potential.


The purpose of the electron lenses is to produce a convergent electron beam with the desired crossover diameter. The lenses are metal cylinders with a cylindrical bore and operate in vacuum. Inside the lenses, a magnetic field is generated, which in turn is varied to focus or defocus the electron beam passing through the bore of the lens.


The general approach in SEM is to minimize the probe diameter and maximize the probe current. The minimum probe diameter depends on the spherical aberration of the SEM electron optics, the gun source size, the electron optical brightness, and the accelerating voltage.


The probe size, which directly affects resolution, can be decreased by increasing the brightness. The electron optical brightness, β, is a parameter that is a function of the electron gun performance and design. For all types of electron guns, brightness increases linearly with accelerating voltage, so every electron source is 10 times as bright at 10 kV as it is at 1 kV. Decreasing the wavelength and the spherical aberration also decreases the probe size.
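The trade‐off between probe current, probe diameter, and convergence angle can be made concrete with the standard definition of electron optical brightness, β = 4ib/(π2d2α2). This relation is not quoted in the text, so the sketch below should be read as an assumed, textbook‐style estimate with invented example numbers.

# Probe current implied by a given source brightness, probe diameter, and convergence angle,
# using the standard brightness definition beta = 4*i_b / (pi^2 * d^2 * alpha^2).
import math

def probe_current(beta_A_per_cm2_sr, d_cm, alpha_rad):
    return beta_A_per_cm2_sr * (math.pi ** 2) * (d_cm ** 2) * (alpha_rad ** 2) / 4.0

beta = 1e5            # A cm^-2 sr^-1, order of magnitude for a tungsten gun at ~20 kV (assumed)
d = 10e-7             # 10 nm probe diameter, expressed in cm
alpha = 5e-3          # 5 mrad convergence angle (assumed)
i_b = probe_current(beta, d, alpha)
print(f"probe current ~ {i_b*1e12:.1f} pA")   # picoampere range, consistent with the text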


The interaction volume of the primary beam electrons and the sampling volume of the emitted secondary radiation are important both in the interpretation of SEM images and in the proper application of quantitative X‐ray microanalysis. The image details and resolution in the SEM are determined not by the size of the electron probe by itself but rather by the size and characteristics of the interaction volume.


When the accelerated beam electrons strike a specimen, they penetrate inside it to depths of about 1 µm and interact both elastically and inelastically with the solid, forming a limiting interaction volume from which various types of radiation emerge, including BSE, SE, characteristic and bremsstrahlung X‐rays, and cathodoluminescence in some materials.


The combined effect of elastic and inelastic scattering controls the penetration of the electron beam into the solid. The resulting region over which the incident electrons interact with the sample is known as interaction volume. The interaction volume has several important characteristics that determine the nature of imaging in the SEM.


The energy deposition rate varies rapidly throughout the interaction volume, being greatest near the beam impact point. The interaction volume has a distinct shape (Figure 17.18): for a low‐atomic‐number target it has a distinct pear shape, whereas for intermediate‐ and high‐atomic‐number materials it approximates a hemisphere.


Figure 17.18 Excited volume and electron interaction within specimen.


The interaction volume increases with increasing incident beam energy and decreases with increasing average atomic number of the specimen. For SEs, the sampling depth is from 10 to 100 nm, and the sampling diameter equals the diameter of the area emitting BSEs. BSEs are emitted from much larger depths than SEs.


Ultimately, the resolution in the SEM is controlled by the size of the interaction volume.


Since the SEM is operated under high vacuum, the specimen that can be studied must be compatible with high vacuum (∼10−5 mbar). This means that liquids and materials containing water and other volatile components cannot be studied directly. Also, fine powder samples need to be fixed firmly to a specimen holder substrate so that they will not contaminate the SEM specimen chamber.


Nonconductive materials need to be attached to a conductive specimen holder and coated with a thin conductive film by sputtering or evaporation. Typical coating materials are Au, Pt, Pd, their alloys, and carbon.


There are special types of SEM instruments, such as variable pressure scanning electron microscopy (VP‐SEM) and environmental scanning electron microscopy (ESEM), that can operate at higher specimen chamber pressures, thus allowing nonconductive materials (VP‐SEM) or even wet specimens (ESEM) to be studied. SEM can also be combined with a number of different techniques for chemical analysis, the most common being energy‐dispersive spectroscopy (EDS). When the electron beam interacts with the surface, X‐ray photons are generated. The energy of the radiated photons corresponds to a transition energy that is characteristic of each element. Figure 17.18 illustrates the interaction volume from which electrons and X‐rays are generated. With wavelength‐dispersive X‐ray spectroscopy (WDS), the radiated photons are diffracted by a crystal, and only X‐rays with a specific wavelength fall onto the detector. WDS analysis is more accurate but also more time consuming than EDS analysis and is used in particular for analyses of light elements or for separating overlaps in EDS spectra.


Carbon surfaces that have been polished for optical microscopy show very few features in an SEM examination (no topography). However, etching the surface either chemically (with chromic acid) or by ion bombardment reveals a wealth of detail that can be related to the optical texture of the sample. A specialized application of this is the “same area” technique, where a specific part of a polished surface that has been identified and characterized optically is reexamined by SEM after etching. Figures 17.19 and 17.20 show micrographs illustrating this technique.


Figure 17.19 Optical micrograph of calcined shot coke – before etching.


Figure 17.20 SEM micrograph of calcined shot coke – after chromic acid etching.


SEM is an excellent method for monitoring the changes in topography following various treatments, such as gasification, heat treatment, etc. Figures 17.21 and 17.22 are micrographs of metallurgical (blast furnace) coke before and after reaction with carbon dioxide.


Figure 17.21 SEM micrograph of blast furnace coke – original surface.


Figure 17.22 SEM micrograph of blast furnace coke – after 75% burn‐off in CO2.


17.7.2 Transmission Electron Microscopy


TEM provides a means of obtaining high‐resolution images of diverse materials. Figure 17.23 includes an outline of the general arrangement of a TEM. Electrons are generated in the same way as in SEM, using a tungsten filament, and focused by a condenser lens system. The electrons strike the thin‐film specimen and may undergo any of several interactions with the specimen. One of these interactions, diffraction of the electrons by the periodic array of atomic planes in the specimen, ultimately produces the contrast that most commonly enables observation of structural details in crystalline material. The electrons that pass through the thin crystal without being diffracted are referred to as transmitted electrons.


Figure 17.23 Schematic of a TEM.


Downstream of the specimen are several post‐specimen lenses: the lower half of the objective lens, a magnifying lens, an intermediate lens, and a projection lens. This series of post‐specimen lenses is referred to as the image formation system. After passing through the image formation system, the electrons form an image either on a fluorescent screen or on photographic film. The theoretical resolution of a TEM image approaches the wavelength of the incident electrons, although this is generally not attained owing to spherical and chromatic aberration and aperture diffraction. Typical line‐to‐line resolution in TEM is around 1.5 Å.


The conventional TEM is thus capable of simple imaging of the specimen and of generating selected area diffraction patterns (SADPs). Images formed using the transmitted electrons are known as bright‐field (BF) images, while those formed using electrons diffracted by specific (hkl) planes are known as dark‐field (DF) images. An EDX system can be attached to a TEM to determine the elemental composition of the various phases.


The techniques of TEM can be divided into three areas: conventional transmission electron microscopy (CTEM), analytical transmission electron microscopy (AEM), and HREM.


CTEM techniques include BF imaging, DF imaging, and selected area electron diffraction (SAD). These methods can be used to identify reaction products and to characterize the microstructure of the scale and the metal/scale interface. Defects such as dislocations, voids, and microcracks, which play an important role in the growth and adhesion of the scale, can also be detected. With electron diffraction techniques, it is possible to determine the crystal structure and the relative orientation relationship between the metal and the scale.


The second area is AEM. The distribution of impurities and dopants as well as chemical profiles and compositional gradients can be determined with AEM techniques. AEM techniques include energy‐dispersive X‐ray spectroscopy (EDS) and EELS. A review and comparison between EDS and EELS was presented by Müllejans and Bruley (1993). The highest spatial resolution (0.4 nm) and smallest probe size (<1 nm) can be reached if these analytical methods are combined with a dedicated STEM, such as a VG HB501 STEM.


EELS can provide information about the chemical bonding at interfaces. For example, the oxidation states of metals can be investigated as has been demonstrated for Nb/α‐Al2O3 interfaces (Bruley et al. 1994). It is possible by analyzing the energy‐loss‐near‐edge structure (ELNES) of EELS spectra to determine the coordination and distance of atoms at interfaces. A line scan across the interface can provide information about the chemical width of the interfacial region.


The third area of TEM is HREM. Using HREM, the crystal lattice can be imaged, and the atomistic structure of materials can be investigated. The point‐to‐point resolution (Spence 1988) of conventional HREM instruments (400 kV) is about 1.7 Å (0.17 nm). However, a new generation of HREM instruments operates at 1250 kV with a point‐to‐point resolution of 1 Å (0.10 nm). With this resolution, the structure of many different interfaces and defects in materials can be investigated.


However, HREM can only be applied if special conditions are fulfilled. The thickness of the TEM sample has to be smaller than 10 nm. This means that very high quality of sample preparation is required. Lattice images of heterophase boundaries or grain boundaries are only possible if both crystals adjacent to the interface are oriented parallel to low‐index Laue zones. The interface itself has to be parallel to the incoming electron beam, since small tilts away can change the lattice image and make interpretation difficult.


The interpretation of HREM micrographs is not possible on a naïve basis owing to phase shifts introduced by lens aberrations, defocus values, sample composition, and thickness. Therefore, computer calculations are necessary to simulate images of model structures of crystals and defects (e.g. interfaces) for comparison with experimental images (Spence 1988).


The preparation techniques for TEM are quite demanding. It is necessary to obtain a very thin section of the carbon, less than 100 nm, of uniform thickness. Specimen breakage can often give good results, but it should be treated with caution as it can lead to random variations in thickness that cannot be fully accounted for when interpreting the image produced. A more controlled method of producing suitable material is to cut a thin section with a microtome and to further thin the center portion by ion bombardment. Uniform thickness is important because, in these investigations, it is the variation in the amount of material through which the electron beam passes that provides the image contrast, and fringe imaging can be an artifact of a tapered sample. When the conditions are correctly established, high‐resolution TEM can provide direct imaging of the layer planes in carbon materials, reveal the complexity of the most regular structures, and show the ordering present down to the nanometer level.


In the following paragraphs (see Figure 17.24), a standard method for TEM cross‐sectional preparation will be explained (Strecker et al. 1993). After characterization of the specimen surface with optical microscopy and SEM, two pieces of the metal carrying the corrosion scale are glued together as a sandwich (Figure 17.24a). The sandwich is glued within a brass tube, with an outer diameter of 3 mm (Figure 17.24b,c). Thin disks are cut from this tube and ground with SiC paper to a thickness of about 200 µm (Figure 17.24d). The disks are dimpled from both sides with a 3 µm diamond, resulting in a residual thickness of 10 µm.


Figure 17.24 Schematic summary of the TEM cross‐sectional preparation technique.


The final step in thinning is ion milling. The samples are ion thinned with a low angle of incidence (4°) from both sides with a BalTec ion mill. This method produces cross‐sectional samples with high efficiency (80%). The details of the preparation method will differ between different systems in order to obtain the optimal sample.


Although the effort required for TEM sample preparation of the relevant systems is very high, TEM yields important information in the investigation of high temperature corrosion, as has been demonstrated by oxidation studies of Ni (Sawhill and Hobbs 1984), Fe and FeCrNi (Newcomb and Stobbs 1985), NiAl (Doychak and Rühle 1989), and Ni3Al (Bobeth et al. 1994; Schumann and Rühle 1994), both on thin oxide films and on cross sections of the scales for identification of the various phases.


17.7.3 High‐Resolution Transmission Electron Microscopy


HRTEM is the ultimate tool for imaging defects. In favorable cases, it shows directly a 2D projection of the crystal, defects included. Of course, this only makes sense if the projection is along a low‐index direction, so that the atoms lie exactly on top of each other. The principle of HRTEM is easy to grasp: consider a very thin slice of crystal that has been tilted so that a low‐index direction is exactly parallel to the electron beam. All lattice planes nearly parallel to the electron beam, and close enough to the Bragg position, will diffract the primary beam. The diffraction pattern is the Fourier transform of the periodic potential seen by the electrons in two dimensions. In the objective lens, the diffracted beams and the primary beam interfere, which provides a back‐transformation and leads to an enlarged image of the periodic potential. This image is magnified by the following electron optical system and finally viewed on the screen at magnifications of typically 10⁶. The practice of HRTEM, however, is more difficult than the simple theory. Even so, significant work is now being conducted to analyze cross sections of the scales to obtain information such as porosity, voids, and the interface morphology between the metal substrate and the oxide, as well as between the various oxide layers in more complex samples. Figures 17.25 and 17.26 show two examples of the resolution of layer planes in both highly ordered and less ordered materials obtained by HRTEM.


Figure 17.25 HRTEM micrograph of highly graphitic carbon – highly ordered.


Figure 17.26 HRTEM micrograph of PVDC carbon (HTT 1473 K) – disordered.


17.7.4 Low‐Energy Electron Microscopy


LEEM is a branch of microscopy that uses low‐energy, elastically backscattered electrons for imaging solid surfaces.


LEEM was invented by Ernst Bauer in the early 1960s and has been widely used in surface research since the 1980s. In the microscope, primary low‐energy (up to 100 eV) electrons are directed onto the sample surface, and the electrons reflected from the surface are focused to create a magnified image of the surface. This type of microscope has a spatial resolution down to a few tens of nanometers. Image contrast depends on variations in the surface’s ability to reflect slow electrons due to differences in crystal orientation, surface reconstruction, or coverage. Because images can be generated very quickly, LEEM is often used to examine dynamic processes occurring on surfaces, including the growth of thin films, etching, adsorption, and phase transitions, in real time. As an illustration, Figure 17.27 shows an image of the Si(111) surface during the phase transition from the 7 × 7 reconstruction to the 1 × 1 reconstruction at a temperature of 860 °C (Tromp 2000).


Figure 17.27 Microscope image of slow electrons during the phase transition from the 7 × 7 reconstruction to the 1 × 1 reconstruction on the Si(111) surface. The 7 × 7 phase (bright sections) decorates atomic steps, while the surface of the terraces is mostly covered with the 1 × 1 structure (dark sections). Image field size is 5 µm (Tromp 2000).


A layout of the Elmitec LEEM system is shown in Figure 17.28. This instrument is capable of BF and DF imaging with 10 nm resolution, selected area low‐energy electron diffraction (μLEED), and spectroscopic low‐energy electron reflectivity (LEER) measurements. It is primarily used for characterizing epitaxial films of 2D materials, in which case the spatial and crystal orientation of the grains is determined (from μLEED imaging), along with the number of 2D layers in each grain (from LEER measurements).


Figure 17.28 Typical LEEM experimental setup used for surface science studies.


The operation of this instrument is easy to grasp: electrons are produced in an electron gun containing a thermionic LaB6 emitter, which is typically biased at −20 kV. Once the electron beam has left the emitter, it is accelerated to high energy by a grounded extractor into the illumination column, after which the beam is deflected toward the sample surface by a magnetic deflector. Passing through the grounded objective lens, the beam is rapidly decelerated to low energy owing to the large potential difference between the objective and the sample, which is also close to −20 kV. A potential difference, Vs (start voltage or sample voltage), can be applied between the sample surface and the gun filament to alter the incident electron energy. Typically, incident energies of 0–50 eV are employed. The electrons are then reflected, or diffracted, from the sample surface. They pass through the magnetic deflector again and are then imaged (either as a diffraction pattern or as a real‐space image) on the micro‐channel plate. A contrast aperture is used to select particular diffraction spots for imaging. BF images are formed using the reflected (0, 0) spot, whereas DF images are formed using other, specifically selected, diffraction spots. Owing to their low energies, the only electrons to leave the surface are those that originate from the top few atomic layers of the sample. Hence, LEEM is a very surface‐sensitive technique.
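For orientation, the diffraction behavior at these low energies can be related to the electron wavelength via the standard non‐relativistic relation λ(Å) ≈ √(150.4/E(eV)); this relation is not given in the text and the sketch below is offered only as an illustrative estimate.

# Non-relativistic electron wavelength at the low incident energies used in LEEM/muLEED.
import math

for E in (10.0, 25.0, 50.0):          # eV, within the 0-50 eV range mentioned in the text
    lam = math.sqrt(150.4 / E)
    print(f"E = {E:>4} eV -> lambda = {lam:.2f} Angstrom")   # comparable to atomic spacings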


LEEM has developed into one of the premier techniques for in situ studies of surface dynamical processes, such as epitaxial growth, phase transitions, chemisorption, and strain relaxation phenomena. Over the last few years, new LEEM instruments have been designed and constructed, aimed at improved resolution, improved diffraction capabilities, and greater ease of operation compared with present instruments (Bauer 1998).


17.8 Electron Spectroscopy and Ion Scattering


Electron spectroscopy is a group of analytical techniques used to study the electronic structure of atoms and molecules and its dynamics. In general, an excitation source such as X‐rays, electrons, or synchrotron radiation ejects an electron from an inner‐shell orbital of an atom. Experimental applications include high‐resolution measurements of the intensity and angular distributions of emitted electrons as well as of the total and partial ion yields. Ejected electrons can escape only from a depth of approximately 3 nm or less, making electron spectroscopy most useful for studying the surfaces of solid materials. Depth profiling is accomplished by combining electron spectroscopy with a sputtering source that removes the surface layers. Ion scattering techniques operate across a large energy range, from 1 keV to >10 MeV, each with different benefits and different aspects that can be investigated. Compared with other surface analytical techniques, the physics governing ion scattering is relatively simple. Because it is a real‐space technique, the complexity of converting reciprocal‐space data to real space is avoided, and the collisions can be described by a simple binary collision model. To obtain quantitative information about the species present at the surface of a material, it is necessary to understand the interaction potentials, the effects of ion neutralization, and the scattering cross sections. The main techniques considered are as follows.


17.8.1 Extended X‐Ray Absorption Fine Structure


The origins of extended X‐ray absorption fine structure (EXAFS) go back to Kossel and Kronig more than 85 years ago. The X‐ray absorption of a material will in general display several sudden upward jumps (termed K and L edges) as the X‐ray photon energy is increased, corresponding to specific electron excitations in the constituent elements of the material. Closer examination of the high‐energy side of these edges (i.e. some hundreds of electron volts above the edge) reveals small oscillations in absorption, which are the fine structure referred to in EXAFS. These oscillations arise from small energy fluctuations associated with interference effects between an outgoing electron wave from the excited atom and the fraction that is scattered back by the surrounding atoms (Figure 17.29). Thus, they carry information about the local environment of the excited atom/ion. An appropriately weighted Fourier transform of an EXAFS spectrum will produce a radial distribution plot, that is, a plot of the surrounding atomic density versus distance from the excited atom.


Figure 17.29 Basic principles of EXAFS. On excitation of atom A by an X‐ray photon, the outgoing electron wave interferes with the fraction scattered back by the surrounding atoms B and C. The result is fine oscillations (arrowed) in the absorption versus energy curve above the edge.
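The weighted Fourier transform mentioned above can be illustrated with a toy, single‐shell χ(k) signal; the single‐scattering form, the 2.0 Å shell distance, and the damping parameters below are assumptions chosen for illustration, not an analysis of real data.

import numpy as np

k = np.linspace(3.0, 14.0, 550)                  # photoelectron wavenumber grid, 1/Angstrom (assumed range)
R, Ncoord, sigma2, mfp = 2.0, 6.0, 0.005, 8.0    # shell distance, coordination, Debye-Waller, mean free path (assumed)
chi = (Ncoord / (k * R**2)) * np.sin(2.0 * k * R) * np.exp(-2.0 * k**2 * sigma2) * np.exp(-2.0 * R / mfp)

# k^2-weighted discrete Fourier transform onto a grid of distances r
r = np.linspace(0.5, 6.0, 400)
dk = k[1] - k[0]
ft = np.array([np.abs(np.sum(k**2 * chi * np.exp(2j * k * ri)) * dk) for ri in r])
print(f"|FT| peaks near r = {r[np.argmax(ft)]:.2f} Angstrom "
      "(the assumed shell distance, since no scattering phase shift is included)")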


The key feature of EXAFS is that each absorption edge is specific to a given atom type in the material, so that each EXAFS spectrum yields an individual atom type’s view of its local environment. The technique does not require the material to be crystalline: indeed, it can be amorphous or liquid. This contrasts sharply with conventional crystallography, which seeks diffraction patterns from crystalline material, the diffraction patterns themselves being complex superpositions of scattering from all of the constituent atomic/electron density. Considerable crystallographic skill is required to disentangle the diffraction contributions from each atom in order to produce a 3D structural map of the material. EXAFS can directly probe the structure around each atom type, though this structural information extends only to the nearer coordination shells around the given atom.


Not surprisingly, being such a direct and versatile technique, EXAFS has become an increasingly popular technique in all walks of materials science. In particular, catalytic processes can be studied where specific atomic interactions are implicated and also amorphous/glassy materials where crystallography can give only limited structural information. Although EXAFS spectra can in principle be collected using the weak bremsstrahlung (white) X‐ray yield from laboratory sources, the synchrotron, with its intense and continuous energy X‐ray spectrum, is ideal for EXAFS. Practical collection times (minutes to hours) are therefore many orders of magnitude faster than laboratory measurements (days to weeks). The case for synchrotron EXAFS is now so strong in academic and industrial materials research programs that dedicated synchrotrons devote a sizeable fraction of their resources to EXAFS instrumentation.


EXAFS has evolved into many forms of measurement, and a number of acronyms have sprung up to describe the more popular versions of its use (Table 17.9).


Table 17.9 Acronyms for various forms of EXAFS

Transmission EXAFS: The standard measurement mode is transmission
Fluorescent EXAFS (or FLEXAFS): Complementary to transmission mode, where X‐ray emission is detected, particularly for dilute elements
XANES: Refers to the near‐edge structure, usually the first 100 eV above the edge that, through multiple scattering, is also sensitive to local symmetry
SOXAFS: Soft (energy) EXAFS, e.g. 800 eV to 8 keV, particularly for K edges of light elements
SEXAFS: Surface EXAFS, usually by photoelectron detection
REFLEXAFS: SEXAFS performed by using the fluorescent mode with glancing incident X‐rays totally externally reflected from the surface
QEXAFS: Quick EXAFS scans (∼1 min)
ED‐EXAFS: Energy‐dispersive EXAFS: a geometry for extremely rapid EXAFS scans, dispersing the transmitted beam and measuring by fast position‐sensitive detector

The use of EXAFS for studies of catalysis has already been mentioned. An exciting development in this pursuit is the marrying of EXAFS data with diffraction data. One example involves nickel‐exchanged zeolite Y, which is used for converting three acetylene molecules into one benzene molecule. By making in situ EXAFS and powder diffraction measurements in controlled environments, it has been possible to study the complex structural changes involved: during dehydration, some nickel cations move from the supercage into the smaller cages, while others remain attached to the walls of the supercage; further movement of the cations occurs during catalytic activity.


Glasses, unlike crystalline zeolites, do not display long‐range order. Conventional X‐ray/neutron scattering can give short‐range structural information, but this is averaged over all atom types, and special techniques (anomalous absorption for X‐rays, isotopic substitution for neutrons) are required to apportion scattering to individual atom types. EXAFS has proved to be a more discriminating probe of local structure in glasses.


The differing environments of silicon, sodium, and calcium in soda lime–silica glass are readily obtained from the K‐edge EXAFS spectra, Figure 17.30 (Greaves 1990): silicon is tetrahedrally coordinated with oxygen at 1.61 Å, sodium is sixfold coordinated with a short bond of 2.3 Å, while calcium is similarly octahedrally coordinated but with a longer oxygen distance of 2.5 Å. These EXAFS data are consistent with a modified random network model of the glass containing channels of mobile cations ionically bonded to non‐bridging oxygens. This model is also consistent with known bulk properties such as the reported emission of sodium on fracture, which appears to occur along these cation channels.


Figure 17.30 EXAFS spectra (a) and resulting atomic distributions (b) around Si, Na, and Ca, respectively, in soda lime–silica glass.


17.8.2 Photoemission


Photoemission is a low‐energy technique exploited by surface science. It uses X‐ray photons in the low 100 eV energy range (i.e. wavelengths ∼100 Å). For example, photoemission spectra from an aluminum (111) surface, before and after exposure to oxygen gas at ∼2 × 10−7 torr, record the very early stages of aluminum oxidation (McConville et al. 1987). The chemical shifts of the photoemission peaks can be interpreted in terms of various proposed models. One possibility is that at low coverage, the oxygen chemisorbs onto the aluminum surface in three different states. This is identified by three peak shifts corresponding to aluminum atoms bonded to one, two, or three chemisorbed oxygen atoms. At higher exposures, the oxygen may diffuse to form a complete oxide‐like underlayer beneath the surface.


Although surface science is an expensive and time‐consuming discipline, it addresses fundamental problems in materials science, involving the structure and composition of surfaces. These influence important technologies such as semiconductor growth, surface catalysis, and corrosion protection.


17.8.3 Auger and X‐Ray Photoelectron Spectroscopy


These processes, described in detail elsewhere (Haugsrud 2003), are schematically illustrated in Figure 17.31. In both techniques, a surface atom is ionized by the removal of an inner‐shell electron. In XPS (also known as electron spectroscopy for chemical analysis, ESCA), this is achieved by bombarding the surface with photons with energies between 1 and 2 keV. The resulting photoelectron has an energy given by


17.21 \( E_{\mathrm{kin}} = h\nu - E_{\mathrm{B}} \)

Figure 17.31 Schematic representation of the Auger process.


The energy of the photoelectron is measured using a concentric hemispherical analyzer. Typically the energy can be measured to an accuracy of ±0.1 eV. Knowing the energy of the incident photon, the binding energy of the electron in the atom can be determined. In addition, changes in binding energy that occur when elements combine together can be detected, and the chemical state of the atom identified.
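A minimal numerical illustration of Eq. 17.21 as reconstructed above follows; the Al Kα photon energy and the measured kinetic energy are assumed example values, and the spectrometer work function is neglected.

# Binding energy from a measured photoelectron kinetic energy (Eq. 17.21, work function neglected).
h_nu = 1486.6          # eV, Al K-alpha photon energy (assumed, common laboratory source)
E_kin = 954.0          # eV, measured kinetic energy (assumed example)
E_B = h_nu - E_kin
print(f"binding energy = {E_B:.1f} eV")   # ~532.6 eV, in the region of the O 1s line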


In AES, ionization is produced by bombarding the surface with electrons with energies from 2 to 15 keV. When an electron is ejected from an inner shell (K shell) of an atom, the resultant vacancy is soon filled by an electron from one of the outer shells (L). This releases energy, which may be transferred to another outer electron (L shell), which is ejected. The energy of the Auger electron is given by


17.22 \( E_{\mathrm{KLL}} = E_{\mathrm{K}} - E_{\mathrm{L}_{1}} - E_{\mathrm{L}_{2,3}} \)

The energy of the Auger electron is measured by either hemispherical or cylindrical analyzers. The former has better energy resolution and hence gives chemical state information, while the latter has a higher transmission function and is useful for kinetic studies.


XPS has the advantage that it can give reliable chemical state information but has limited spatial resolution, while AES has excellent spatial resolution (10 nm) but provides limited chemical state information.


Concerning oxide characterization, it is important to identify the initial stages of oxidation and the corresponding thin films that form. In this respect, both XPS and AES are useful.


Figure 17.32 shows the oxygen 1s peak during the initial stages of exposure of nickel to oxygen at room temperature and then the effect of heating on this oxide (Allen et al. 1979). Initially, undissociated oxygen is detected (stage 1), which then dissociates, forming a layer of dissociated and undissociated oxygen on top of the nickel metal (stage 2). Oxygen then diffuses into the nickel (stage 3); this may be slow at room temperature, but as the temperature is increased, the speed of the reaction increases. By determining, from the O 1s peak shift, the temperature at which this diffusion occurs, the activation energy can be determined.


Figure 17.32 Changes in the oxygen O 1s peak during oxidation of nickel (Allen et al. 1979).


In binary and ternary alloys, the oxide that first forms is determined by the temperature and rate of arrival of the gas atoms. Figure 17.33 shows spectra from stainless steel exposed to low oxygen pressure at room temperature and 873 K. At room temperature, the initial oxide is an iron‐rich spinel of the form Fe3O4, but, at high temperature and low gas pressure, the chromium‐rich rhombohedral oxide Cr2O3 is able to form.


Figure 17.33 Auger spectra from stainless steel exposed to a low oxygen pressure. (a) Room temperature. (b) 873 K.


Elements present in the bulk in very small quantities can have a dramatic effect on the initial oxidation. Sulfur is present in austenitic steels at a level of approximately 200 ppm. At temperatures of 500 °C and above, the sulfur diffuses to the surface where it reacts with the impinging gas atoms to form SO2 that is released into the environment (Wild 1977). Figure 17.34 shows the surface composition of a stainless steel, determined using AES, as a function of time exposed to 10−5 Pa oxygen at 873 K. Initially the surface has a high concentration of sulfur present, but this is gradually reduced by reaction with oxygen. It is only when the sulfur has been effectively removed from the bulk that the surface oxide can form. It then does this by forming the rhombohedral Cr2O3, but almost immediately manganese is incorporated into this oxide forming the spinel oxide MnCr2O4.


Figure 17.34 Surface composition of stainless steel during exposure to 10−5 Pa oxygen at 873 K (Wild 1977).


Surface layers to be analyzed by depth profiling using AES or XPS may be divided into different groups according to their thickness:



  1. Very thin layers of 1–5 nm thickness may be analyzed by angle‐resolved AES or XPS measurements.
  2. Ion sputtering may be used for layer thicknesses of 3–100 nm.
  3. For 100–2000 nm thick layers, depth profiling, lapping, or ball cratering may be applied.
  4. For layer thicknesses >2 µm, lapping or ball cratering is recommended.

Further information on the oxidation process can also be obtained by AES and XPS. Complete characterization of oxides and other products can be obtained by combining these surface analytical techniques with other specialized techniques (Birks et al. 2006; Khanna 2004; Levitin 2005; Schütze 1997; Tempest and Wild 1988).


17.8.4 Rutherford Backscattering


In RBS, the specimen to be analyzed is bombarded by a beam of α‐particles with energy of approximately 0.9–3 MeV. The energy of the backscattered particles depends on the atomic number of the atom at which the particle is scattered and the distance of this atom from the specimen surface. Therefore, RBS is a nondestructive depth profiling method that is element and depth sensitive. Although a standard is needed to calibrate the experimental setup, RBS can be more or less considered as a nondestructive, absolute method (Quadakkers et al. 1992).


The quantitative depth information requires an iterative fitting procedure; in most laboratories, the program RUMP is used. The information depth is c. 0.5–2 µm, depending on the primary energy. The depth resolution is typically 10–30 nm and, in most cases, is therefore determined by the irregularities of the oxide scale. A major requirement for the successful application of RBS is that the specimen surfaces have to be flat. An important limitation of RBS is the overlapping of signals (Quadakkers et al. 1992). Figure 17.35 shows RBS spectra of an alumina scale on an yttria‐containing FeCrAl alloy. The measured intensity at the high‐energy edge of Fe unequivocally reveals the presence of a low concentration of this element at the alumina surface. The yield at slightly lower energies, however, might be caused by the presence of Fe at greater depth and/or of Cr and/or Ti at the surface. Therefore, reliable in‐depth information on the elements Fe, Cr, and Ti cannot be derived. The situation is different if a high‐Z element such as Y is incorporated in the scale. Only then can quantitative in‐depth concentrations of this element be obtained, in this case down to a depth of around 0.3 µm. Because of these significant limitations, RBS cannot be considered a suitable technique for standard depth profiling of corrosion scales. The method can sometimes be advantageous if heavy elements are to be analyzed in a light matrix. For the analysis of light elements in a heavy matrix, NRA can, in some cases, be suitable.


Figure 17.35 RBS spectra (α‐particles, 2 MeV) showing the Y and Fe (Cr, Ti) impurities in the outer part of an alumina scale on an FeCrAl‐ODS alloy after various oxidation times at 1100 °C (Quadakkers et al. 1992).


17.8.5 Secondary Ion Mass Spectrometry


In SIMS, the specimen to be analyzed is bombarded by a beam of ions of energy 3–12 keV. The particles that are eroded from the surface by the bombardment leave the surface in the form of neutrals or ions (Rudenauer and Werner 1987). Analysis of the secondary ions by a mass spectrometer allows a chemical analysis of the specimen (corrosion scale) laterally and as a function of sputtering time, i.e. eroded depth (depth profiling by dynamic SIMS).


SIMS is a suitable method for the depth profiling of laterally uniform scales of 50 nm to 10 µm thickness. For thinner scales, AES/XPS might be more suitable, whereas for thicker scales conventional or tapered cross sections should be used. With modern ion sources, scales of up to several micrometers thickness can be analyzed within one hour. Compared with other surface analysis methods, SIMS is very suitable to measure trace elements because it can theoretically detect all elements and their isotopes, most of them down to concentrations in the ppb range; however the sensitivity differs for the various elements, and, in practice, nitrogen, for example, appears to be difficult to measure.


The analysis of oxide scales can be hampered by the fact that the sputtering process induces charging of the specimen surface. Charging affects the primary ion beam and, more important, alters the part of the energy distribution contributing to the detected signal. For thin films this can be overcome by a gold coating (10–20 nm) of the specimen, but, in most cases, especially for thicker scales, charge neutralization by electron bombardment of the specimen is required.


Studies reported by Benninghoven et al. (1991) have shown that the matrix effect can be strongly reduced by using MCs+‐SIMS. In this technique, the oxidized specimen is bombarded by Cs+ ions, and the positive molecular cluster ions formed by the elements (M) with Cs are analyzed. Although the mechanisms of molecule formation at the specimen surface are not yet completely understood, a number of recent studies seem to confirm the suitability of this technique for quantitative scale analysis, provided that the impact angle is larger than about 65° (depending on material and sputter yield). Figure 17.36 shows an example of the reduced matrix effect during analysis of an alumina scale formed on an FeCrAl alloy. It can be seen that a good approximation of the real concentration profiles is obtained by using relative sensitivity factors (RSFs) for the various elements (the same RSFs for oxide and alloy) that lie within the same order of magnitude, in contrast to conventional SIMS, where the sensitivity factors for the various elements can differ by several orders of magnitude. Figure 17.36 illustrates that MCs+‐SIMS is capable of measuring concentrations as high as 60 at.% and as low as 0.01% simultaneously in one depth profile. If the RSFs are constant throughout the depth profiling, they make it possible to derive the relative sputter yields during the depth profile. Combined with an absolute measurement of the crater depth, a depth scale can then be derived. Further studies are necessary to prove the suitability of the MCs+‐SIMS technique.


Figure 17.36 MCs+ analysis of an alumina scale on an Fe–20Cr–5Al ODS alloy (composition in wt%) after 215 hours of oxidation at 1100 °C. SIMS parameters: 50 nA, 5.5 keV, scanning area 150 × 150 µm, charge compensation by electron gun. (a) Non‐corrected intensities of the main elements Fe, Cr, Al, and O. (b) Intensities after isotope correction, multiplication with sensitivity factors (Al = 1, Cr = 0.85, Fe = 0.9, O = 6.5), and normalization to the total corrected intensity.
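As an illustration of the quantification route described above, the Python sketch below applies the RSFs quoted in the caption of Figure 17.36 to synthetic, isotope-corrected MCs+ intensities and normalizes them to the total corrected intensity; the intensity values, the linear depth calibration, and the crater depth are assumptions made purely for illustration.

```python
# Illustrative MCs+ quantification: multiply isotope-corrected intensities
# by relative sensitivity factors and normalize to the total corrected
# intensity. All numerical values here are placeholders.
import numpy as np

# Synthetic isotope-corrected MCs+ intensities, one value per sputter cycle.
intensities = {
    "Al": np.array([9.0e4, 8.5e4, 1.0e4]),
    "Cr": np.array([1.0e2, 5.0e2, 4.0e4]),
    "Fe": np.array([2.0e2, 8.0e2, 1.2e5]),
    "O":  np.array([2.0e4, 1.9e4, 1.0e3]),
}
# RSFs as quoted in the caption of Figure 17.36 (same factors for oxide and alloy).
rsf = {"Al": 1.0, "Cr": 0.85, "Fe": 0.9, "O": 6.5}

corrected = {el: intensities[el] * rsf[el] for el in intensities}
total = sum(corrected.values())
concentration = {el: 100.0 * corrected[el] / total for el in corrected}  # relative concentration, %

# Depth scale from an absolute crater-depth measurement, assuming a constant
# sputter rate (a simplification; the crater depth is an assumed value).
n_cycles = len(next(iter(intensities.values())))
crater_depth_um = 3.0
depth_um = np.linspace(0.0, crater_depth_um, n_cycles)

for el in concentration:
    print(el, depth_um, np.round(concentration[el], 2))
```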


If the oxide scale composition is not laterally homogeneous, depth profiles can give only limited information about the real element distribution in the scale because they average over the analyzed area. In such cases, the good lateral resolution achieved by SIMS imaging can be used for scale characterization. The achievable lateral resolution depends on a number of experimental factors.


In imaging, even when using the MCs+‐SIMS technique, the matrix effect can make the mapping result difficult to interpret, because differences in the sputter rates and/or ion yields of the different phases in the oxide scale can suggest concentration differences between the various elements that, in reality, do not exist. Therefore, in multiphase scales, the intensities at every measured pixel of the image should be corrected for the differences in sputter rate and ion yield in order to obtain the real element distribution. This requires significant computing capacity; however, promising results have been obtained using MCs+‐SIMS.
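The same normalization can be applied pixel by pixel to element maps; the brief sketch below indicates the idea with placeholder data and assumed sensitivity factors.

```python
# Illustrative per-pixel correction for SIMS element maps: scale each pixel
# by a sensitivity factor and normalize to the total corrected intensity at
# that pixel, so that phases with different ion yields do not mimic
# concentration contrast. Map values and RSFs are placeholders.
import numpy as np

rng = np.random.default_rng(0)
maps = {el: rng.random((128, 128)) for el in ("Al", "Cr", "Fe", "O")}  # raw counts
rsf = {"Al": 1.0, "Cr": 0.85, "Fe": 0.9, "O": 6.5}                     # assumed RSFs

corrected = {el: maps[el] * rsf[el] for el in maps}
total = np.sum(list(corrected.values()), axis=0)
fraction_maps = {el: corrected[el] / total for el in corrected}  # per-pixel fractions
```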


The major advantage of the technique is its high sensitivity for most elements in many materials, including insulating compounds. The profiling mode must be used with care because of the poorly defined relation between sputtering time and depth, particularly in nonhomogeneous materials such as oxidized metals.


An example of SIMS profiling is also given in Figure 17.37 for the case of the oxidation in oxygen of the binary nitride TixAl1−xN obtained by chemical vapor deposition.


Figure 17.37 SIMS spectrum as a function of depth of the oxide scale formed by oxidation of TiAlN at 850 °C for 150 minutes using a 4 keV Xe+ beam at 45° incidence.


The figure shows that alumina, Al2O3, is located at the external part of the oxidized scale, whereas the internal part consists of a mixture of alumina and titania, Al2O3 + TiO2.


17.8.6 Ion Scattering Spectroscopy


Ion beams of much lower energy (0.2–2 keV), produced by low‐cost and easy‐to‐use ion guns, can provide the same analytical information as RBS, but limited to the surface atoms. The physical phenomenon is unchanged, and the kinematic factor describing the energy of the backscattered ions is again given by Eq. 17.23:


$$K = \frac{E_1}{E_0} = \left(\frac{\mu\cos\theta + \sqrt{1 - \mu^{2}\sin^{2}\theta}}{1 + \mu}\right)^{2} \tag{17.23}$$

where μ is the ratio m/M of the masses of the incident ion and the target atom and θ is the angle at which the backscattered ions are collected. The technique is then called ion scattering spectroscopy (ISS), and it is mainly a surface analysis technique comparable with the electron spectroscopies described above. However, it is possible to obtain depth information by sputtering inward from the specimen surface. The lifetime tm of the surface monolayer of the solid under ion sputtering is given by


$$t_m = \frac{C_s}{F_B\, S} \tag{17.24}$$

where Cs is the surface atom concentration of the solid (atoms m⁻²), FB is the ion flux of the beam (ions m⁻² s⁻¹), and S is the sputtering yield (number of sputtered atoms per incident ion). The first two of these parameters are easy to measure and control, but the third is more difficult to evaluate. The sputtering yield depends not only on the nature, energy, and angle of incidence of the sputtering ions but also on the nature, crystalline orientation, and properties of the bombarded surface. For example, using Ar+ ions of 1 keV to bombard pure polycrystalline aluminum at a 60° angle of incidence gives a sputtering yield of about 2. Using a value of 1 × 10¹⁹ atoms m⁻² for Cs, a monolayer removal time of one second (tm = 1 s) requires a beam flux of 5 × 10¹⁸ ions m⁻² s⁻¹, corresponding to a current density of singly charged ions of 0.8 A m⁻². Modern ion guns cover a large range of current densities, from 10⁻⁵ to 10 A m⁻², so that an experimentally convenient sputtering rate can usually be obtained.
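The short Python sketch below reproduces this worked example directly from Eq. 17.24; only the numbers quoted in the text and the elementary charge are used.

```python
# Quick check of the worked example above using Eq. (17.24), t_m = C_s / (F_B * S).
E_CHARGE = 1.602e-19          # C, elementary charge
C_s = 1.0e19                  # surface atom density, atoms m^-2
S = 2.0                       # sputtering yield for 1 keV Ar+ on Al at 60 degrees
t_m = 1.0                     # desired monolayer removal time, s

F_B = C_s / (S * t_m)             # required ion flux, ions m^-2 s^-1
current_density = F_B * E_CHARGE  # A m^-2 for singly charged ions

print(f"required flux: {F_B:.1e} ions m^-2 s^-1")         # ~5e18
print(f"current density: {current_density:.2f} A m^-2")   # ~0.8
```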


17.8.7 Low‐Energy Electron Loss Spectroscopy


Low‐energy electron loss spectroscopy (LEELS) is a surface analysis technique that provides quantitative information about the physicochemical properties of materials from their nanometer‐scale near‐surface region. It is well known that physical phenomena such as secondary electron emission (SEE) can be used to investigate the near‐surface region of a solid and obtain information on its crystal structure, elemental composition, and the electronic configuration of its atoms (Luth 1993). Figure 17.38 shows the total energy distribution N(E) of the secondary electron emission from a surface irradiated by an electron beam of primary energy E0. Four regions can be distinguished in N(E), arising from elastic scattering, inelastic scattering, and SEE.


Figure 17.38 Total energy distribution of secondary electron emission from a surface that is irradiated by an electron beam of primary energy E0.


Region II is due mainly to electrons that have lost part of their energy by inelastic scattering. Directly below the elastic peak, one finds electrons that have suffered discrete energy losses through the excitation of inter‐ and intraband electronic transitions, surface and bulk plasmons, hybrid plasmon modes, and ionization losses (ionization spectroscopy); this range usually extends 30–100 eV below the elastic peak. The losses related to surface and bulk plasmon excitations are usually the most intense lines in the electron energy loss spectrum. The plasma oscillation spectra are potential carriers of information about the composition and chemical state of the elements at the surface of a solid and in adsorbed layers. These energy losses are called characteristic losses because they do not depend on the primary electron energy E0; their values are characteristic of the particular chemical element or compound. The analysis of region II is known as electron energy loss spectroscopy; at primary energies E0 < 1000 eV, it is referred to as LEELS. Figure 17.39 shows LEELS spectra, with an interpretation of the losses, for a Co–Cr–Mo alloy surface measured at a primary electron beam energy E0 = 350 eV in dN/dE mode (Vasylyev et al. 2008).


Figure 17.39 Example low‐energy EELS (LEELS) spectra obtained for a Co–Cr–Mo alloy surface at the primary energy E0 = 350 eV, with identification of the energy losses (Vasylyev et al. 2008).


LEELS, with its variants ionization spectroscopy and surface/bulk plasmon excitation spectroscopy (both potential carriers of information about the composition and chemical state of elements at the surface and in the bulk of a solid as well as in adsorbed layers), is based on the measurement of the energy spectra of electrons that have lost a particular portion of energy ΔEβ in exciting electronic transitions that are typical for a given kind of atom β. The position of an intensity line (IL) in the spectrum with respect to the primary electron energy E0 is determined by the binding energy of the electrons in the ground state and by the distribution of the density of empty states, but it does not depend on the value of E0, on the work function, or on the surface charge.


The calculation of the contribution to the intensity of an IL made by electrons that have lost an amount of energy ΔEβ at a depth Z below the sample surface through ionization of the core states of the atoms β is straightforward when the traditional experimental configuration is used: the incident beam of primary electrons is directed perpendicular to the sample surface (θ0 = 0), and the secondary electrons are registered at an angle θ with respect to the normal. In this case, calculations within the framework of a two‐stage model lead to the following expression for the intensity of an IL (Vasylyev et al. 2006):


17.25equation

where K is an instrumental factor, σβ is the ionization cross section of the core level, nβ(Z) is the concentration of the atoms β at depth Z below the surface, and the remaining factor is the elastic scattering factor of the electrons. Λβ is the effective free path of the electrons in the sample with respect to inelastic collisions, which is determined by the equation


17.26equation

For the Pt–Me (Me: Fe, Co, Ni, Cu) alloys (Seah and Dench 1979)


17.27equation

The effective probing depth in ionization spectroscopy amounts to ∼3Λβ, because the secondary electrons created within a near‐surface region of this thickness contribute about 95% of the total intensity of an IL. An increase in the effective probing depth with increasing energy E0 also results in an increased contribution from the deeper layers of the concentration profile to the IL intensity. This makes it possible to carry out a layer‐by‐layer reconstruction of the concentration profiles of the elements from the energy dependence of the IL intensities.
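The ∼3Λβ figure can be checked with the simple sketch below, which assumes that the signal from depth Z is attenuated as exp(−Z/Λβ); this single-exponential attenuation is a simplification introduced here purely for illustration.

```python
# Small numerical check of the ~3*Lambda probing-depth statement above,
# assuming exponential attenuation exp(-Z/Lambda_beta): the fraction of the
# IL intensity originating within a depth d is then 1 - exp(-d/Lambda_beta).
import math

for multiple in (1, 2, 3, 4):
    fraction = 1.0 - math.exp(-multiple)
    print(f"depth = {multiple}*Lambda: {100 * fraction:.1f}% of the signal")
# depth = 3*Lambda gives ~95%, consistent with the probing depth quoted above.
```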


In summary, LEELS based on ionization energy losses allows investigation of layer‐by‐layer concentration profiles in single‐crystal alloys with monolayer resolution, of the element distribution with depth in polycrystalline alloys, and of the kinetics of surface processes during thermally induced treatment or after ion irradiation of the surface.


Plasmon excitations are very sensitive to the structural and chemical state of the surface and the bulk and can be used to study the electronic states of free electrons in the near‐surface region and the influence of different kinetic processes on changes in the electronic structure of materials.


Accordingly, LEELS based on plasmon excitations can also delineate a near‐surface layer whose physicochemical properties differ from those of the bulk material. These results correlate well with depth‐resolved surface composition data obtained by ISS and AES.


17.9 Surface Microscopy


Surface microscopy includes atomic force microscopy (AFM), scanning tunneling microscopy, topographic confocal Raman imaging, low‐energy electron microscopy, lateral force microscopy (LFM), surface force apparatus (SFA), and other advanced techniques. Here, we give some attention to scanning tunneling microscopy, AFM, SFA, and LFM.


17.9.1 Scanning Tunneling Microscopy


Classically, an object hitting an impenetrable barrier will not pass through. In contrast, objects with a very small mass, such as the electron, have wavelike characteristics that allow such an event, referred to as tunneling.


Electrons behave as waves and, in the presence of a potential U(z) (assuming the one‐dimensional case), the wave functions Ψn(z) and energy levels En of the electrons are given by solutions of Schrödinger’s equation:


$$-\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}\Psi_n(z)}{\partial z^{2}} + U(z)\,\Psi_n(z) = E_n\,\Psi_n(z) \tag{17.28}$$

where ℏ is the reduced Planck constant, z is the position, and m is the mass of an electron.


Inside a barrier, E < U(z), so the wave functions that satisfy the equation are decaying waves. Knowing the wave function allows one to calculate the probability density for finding the electron at a given location. In the case of tunneling, the tip and sample wave functions overlap such that, under an applied bias, there is a finite probability of finding the electron in the barrier region and even on the other side of the barrier (Chen 1993).
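To give a feeling for how strongly the wave function decays inside the barrier, the sketch below evaluates the tunneling probability for an assumed effective barrier height of 4.5 eV, a typical metal work function rather than a value from the text.

```python
# Illustrative estimate of the decay of the tunneling probability in the
# barrier: the wave function decays as exp(-kappa*z) and the tunneling
# probability as exp(-2*kappa*z). The 4.5 eV barrier height is assumed.
import math

HBAR = 1.0546e-34   # J s
M_E = 9.109e-31     # kg, electron mass
EV = 1.602e-19      # J per eV

barrier_eV = 4.5                                      # assumed effective barrier height
kappa = math.sqrt(2 * M_E * barrier_eV * EV) / HBAR   # inverse decay length, m^-1

for gap_nm in (0.4, 0.5, 0.6):
    probability = math.exp(-2 * kappa * gap_nm * 1e-9)
    print(f"gap = {gap_nm} nm: relative tunneling probability ~ {probability:.1e}")
# Changing the gap by only 0.1 nm changes the probability by roughly an
# order of magnitude, which is why STM is so sensitive to tip height.
```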


When a small bias V is applied to the system, only electronic states very near the Fermi level, within an energy eV (the product of the electron charge and the voltage, not to be confused here with the electron volt unit), are excited. These excited electrons can tunnel across the barrier; in other words, tunneling occurs mainly with electrons of energies near the Fermi level. However, tunneling does require that there be an empty level of the same energy as the electron on the other side of the barrier for the electron to tunnel into. It is because of this restriction that the tunneling current can be related to the density of available or filled states in the sample. The current due to an applied voltage V (assuming tunneling occurs from the sample to the tip) depends on two factors: (i) the number of electrons within eV of the Fermi level Ef in the sample and (ii) the number among them that have corresponding free states to tunnel into on the other side of the barrier at the tip. The higher the density of available states, the greater the tunneling current. When V is positive, electrons in the tip tunnel into empty states in the sample; for a negative bias, electrons tunnel out of occupied states in the sample into the tip (Chen 1993).


One can sum the probability over energies between Ef − eV and Ef to get the number of states available in this energy range per unit volume, thereby finding the local density of states (LDOS) near the Fermi level. Using Fermi’s golden rule, which gives the rate of electron transfer across the barrier, together with the Fermi distribution function that describes the filling of electron levels at a given temperature, it is then possible to obtain the full tunneling current.


A scanning tunneling microscope (STM) is an instrument for imaging surfaces at the atomic level, with a 0.1 nm lateral resolution and 0.01 nm (10 pm) depth resolution. It is based on the concepts of quantum tunneling reported above and can be used not only in ultrahigh vacuum (UHV) but also in air, water, and various other liquid or gas ambients and at temperatures ranging from near‐zero kelvin to over 1000 °C.


The components of an STM include a scanning tip, a piezoelectrically controlled height (z) and lateral (x, y) scanner, a coarse sample‐to‐tip positioning control, a vibration isolation system, and a computer (Oura et al. 2003).


The resolution of an image is limited by the radius of curvature of the scanning tip of the STM. In addition, image artifacts can occur if the tip has two apexes at its end rather than a single atom; this leads to “double‐tip imaging,” a situation in which both apexes contribute to the tunneling. Therefore, it has been essential to develop processes for consistently obtaining sharp, usable tips; more recently, carbon nanotubes have been used for this purpose (Lapshin 2007).


The tip is often made of tungsten or platinum–iridium, though gold is also used. Tungsten tips are usually made by electrochemical etching, and platinum–iridium tips by mechanical shearing (Bai 1999). Maintaining the tip position with respect to the sample, scanning the sample, and acquiring the data are computer controlled. The computer may also be used for enhancing the image with the help of image processing, as well as performing quantitative measurements (Lapshin 2007).


17.9.2 Atomic Force Microscopy


AFM, or scanning force microscopy (SFM), is a type of scanning probe microscopy (SPM) with a demonstrated resolution on the order of fractions of a nanometer, more than 1000 times better than the optical diffraction limit. The technique allows force measurement, manipulation, and the acquisition of topographical images.


In force measurement, AFMs can be used to measure the forces between the probe and the sample as a function of their mutual separation. This can be applied to perform force spectroscopy.
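As an illustration, the sketch below shows the usual conversion of a raw force curve into force versus tip–sample separation using Hooke’s law; the spring constant, deflection sensitivity, and data arrays are placeholders, not calibrated values.

```python
# Illustrative conversion of a raw AFM force curve into force versus
# tip-sample separation, as used in force spectroscopy. All numerical
# values are placeholders; in practice the deflection sensitivity and
# spring constant are calibrated for each cantilever, and the sign
# convention depends on the instrument.
import numpy as np

spring_constant = 0.1            # N/m, assumed cantilever spring constant
deflection_sensitivity = 50e-9   # m per V of detector signal, assumed calibration

z_piezo = np.linspace(100e-9, 0.0, 500)    # piezo displacement toward the sample (m)
detector_signal = np.zeros_like(z_piezo)   # photodetector output (V), placeholder data

deflection = detector_signal * deflection_sensitivity  # cantilever deflection (m)
force = spring_constant * deflection                   # Hooke's law (N)
separation = z_piezo - deflection                      # tip-sample separation (m)
# Plotting force against separation (rather than z_piezo) gives the
# force-distance curve used in force spectroscopy.
```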


For imaging, the reaction of the probe to the forces that the sample imposes on it can be used to form an image of the 3D shape (topography) of a sample surface at a high resolution. This is achieved by raster scanning the position of the sample with respect to the tip and recording the height of the probe that corresponds to a constant probe–sample interaction. The surface topography is commonly displayed as a pseudocolor plot.
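The following minimal sketch illustrates the constant-interaction imaging principle just described: a feedback loop adjusts the scanner height to hold a simulated deflection signal at its setpoint while raster scanning, and the recorded heights form the topography image. The synthetic surface and feedback gain are arbitrary illustrative choices, not an actual instrument implementation.

```python
# Minimal sketch of constant-interaction (constant-force) imaging.
import numpy as np

nx, ny = 64, 64
x = np.linspace(0, 2 * np.pi, nx)
y = np.linspace(0, 2 * np.pi, ny)
surface = 5e-9 * np.outer(np.sin(y), np.cos(x))   # synthetic topography (m)

setpoint = 0.0          # target deflection signal
gain = 0.5              # integral feedback gain (arbitrary)
z = 0.0                 # current z position of the scanner
topography = np.zeros((ny, nx))

for j in range(ny):                  # slow scan axis
    for i in range(nx):              # fast scan axis
        deflection = surface[j, i] - z        # simplistic tip-sample interaction signal
        z += gain * (deflection - setpoint)   # feedback: move z to null the error
        topography[j, i] = z                  # recorded height -> one image pixel

# 'topography' can now be displayed as a pseudocolor plot, one pixel per xy position.
```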


In manipulation, the forces between tip and sample can also be used to change the properties of the sample in a controlled way. Examples of this include atomic manipulation, scanning probe lithography, and local stimulation of cells.


An AFM typically consists of a small cantilever carrying a tip or probe, a piezoelectric element, a detector of the deflection and motion of the cantilever, the sample, an x, y, z drive, and the sample stage (Butt et al. 2005).


With this configuration, the interaction between tip and sample, which can be an atomic‐scale phenomenon, is transduced into changes in the motion of the cantilever, which is a macroscale phenomenon. Several different aspects of the cantilever motion can be used to quantify the interaction between tip and sample, most commonly the value of the deflection, the amplitude of an imposed oscillation of the cantilever, or the shift in the resonance frequency of the cantilever; these correspond to the contact (static) mode, the tapping (intermittent contact) mode, and the noncontact mode of operation, respectively.


Various methods of detection can be used, e.g. interferometry, optical levers (beam deflection measurement), the piezoresistive method, the piezoelectric method, STM‐based detectors, capacitive detection, and laser Doppler vibrometry.


The AFM signals, such as sample height or cantilever deflection, are recorded on a computer during the xy scan. They are plotted in a pseudocolor image, in which each pixel represents an xy position on the sample and the color represents the recorded signal.


Noncontact mode AFM does not suffer from the tip or sample degradation effects that are sometimes observed after taking numerous scans in contact mode. This makes noncontact AFM preferable to contact AFM for measuring soft samples, e.g. biological samples and organic thin films. In the case of rigid samples, contact and noncontact images may look the same. However, if a few monolayers of adsorbed fluid lie on the surface of a rigid sample, the images may look quite different: an AFM operating in contact mode will penetrate the liquid layer to image the underlying surface, whereas in noncontact mode the tip oscillates above the adsorbed fluid layer and images both the liquid and the surface.


AFM has several advantages over SEM. Unlike the electron microscope, which provides a 2D image of a sample, the AFM provides a 3D surface profile. In addition, samples viewed by AFM do not require any special treatment (such as metal/carbon coating) that would irreversibly change or damage the sample, and they do not typically suffer from charging artifacts in the final image. While an electron microscope needs an expensive vacuum environment for proper operation, most AFM modes work perfectly well in ambient air or even in a liquid environment. This makes it possible to study biological macromolecules and even living organisms. In principle, AFM can provide higher resolution than SEM. It has been shown to give true atomic resolution in UHV and, more recently, in liquid environments. High‐resolution AFM is comparable in resolution with scanning tunneling microscopy and TEM. AFM can also be combined with a variety of optical microscopy techniques, such as fluorescence microscopy, further expanding its applicability. Combined AFM–optical instruments have been applied primarily in the biological sciences but have also found a niche in some materials applications, especially those involving photovoltaic research (Geisse 2009).


Table 17.10 summarizes the functions of STM and AFM, highlighting the advantageous characteristics of these surface microscopy techniques. It should be noted, however, that disadvantages also exist. For example, a disadvantage of AFM compared with SEM is the limited single‐scan image size; the scanning speed of an AFM is also far slower than that of an SEM. AFM images can also be affected by nonlinearity, hysteresis, and creep of the piezoelectric material, as well as by cross talk between the x, y, and z axes, which may require software correction and filtering (Lapshin 1995, 2007).
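As an example of the kind of software correction mentioned above, the sketch below performs line-by-line flattening, a common generic treatment for scanner bow, drift, and line-to-line offsets; it is not a specific algorithm from the cited references.

```python
# Sketch of line-by-line flattening of an AFM image: a low-order polynomial
# is fitted to and subtracted from each fast-scan line. Generic illustration
# with synthetic data.
import numpy as np

def flatten_lines(image: np.ndarray, order: int = 1) -> np.ndarray:
    """Subtract a polynomial background of the given order from each scan line."""
    x = np.arange(image.shape[1])
    flattened = np.empty_like(image, dtype=float)
    for row in range(image.shape[0]):
        coeffs = np.polyfit(x, image[row], order)
        flattened[row] = image[row] - np.polyval(coeffs, x)
    return flattened

# Example: a synthetic image with a tilted background plus per-line offsets.
rng = np.random.default_rng(1)
raw = np.add.outer(rng.normal(0, 1e-9, 256), np.linspace(0, 20e-9, 256))
corrected = flatten_lines(raw, order=1)
```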


Table 17.10 Summary of STM and AFM functions

 | STM | AFM
Instrumentation | Tip, scanner, controller | Cantilever, scanner, optics, controller
Conducting samples | Yes | Yes
Nonconducting samples | No | Yes
Resolution in vacuum | <0.1 Å | ∼Å
Resolution in dry air | <1 Å | ∼nm
Resolution in liquid | ∼nm | ∼10 nm
Operation in liquid | Tip coating | No coating needed
Modes of operation | Constant height; constant current | Constant height; constant force; contact mode; tapping mode
Applications | Imaging; tunneling spectroscopy; manipulation of atoms/molecules | Imaging; force mapping; nanolithography

17.9.3 Surface Force Apparatus


The SFA (Israelachvili and Adams 1978) was the pioneering scientific instrument for measuring nanoscale forces. It was originally designed to study colloidal interactions, including steric, electrostatic, van der Waals, and solvation forces, and, today, it can also be used to monitor the assembly of biomolecules in real time.


The SFA technique dates back to just after World War II, when David Tabor was studying frictional interactions between surfaces at the Cavendish Laboratory. He had an industrial contract aimed at developing improved windscreen wipers, which led him to model the interactions between rubber and glass in water. Using hemispherical rubber samples pressed against a flat glass surface, his team could follow the interaction by interferometry. When the rubber/glass interaction experiments were performed in air, contact was immediately established, and the rubber hemisphere became flattened over some area even under zero (compressive) force. Under negative (pulling) force, and up to the onset of separation, the contact area remained nonzero, suggesting that attractive forces were operating between the two solid surfaces. This observation led his team to develop the Johnson–Kendall–Roberts (JKR) theory (Johnson et al. 1971) for the adhesion between two solid bodies, which predicts that the pull‐off force F required to separate a deformable sphere of radius R from a plane is equal to


$$F = \frac{3}{2}\,\pi R\, W \tag{17.29}$$

where W is the work of adhesion (per unit area) between the two surfaces.
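For orientation, the short sketch below evaluates Eq. 17.29 for an assumed sphere radius and work of adhesion; both values are arbitrary examples, not data from the text.

```python
# Illustrative evaluation of the JKR pull-off force, Eq. (17.29),
# F = (3/2) * pi * R * W, with assumed example values.
import math

def jkr_pull_off_force(radius_m: float, work_of_adhesion_J_per_m2: float) -> float:
    """Pull-off force (N) for a deformable sphere of radius R on a flat surface."""
    return 1.5 * math.pi * radius_m * work_of_adhesion_J_per_m2

R = 1.0e-2   # sphere radius, 10 mm (assumed)
W = 0.05     # work of adhesion, J/m^2 (assumed)
print(f"JKR pull-off force: {jkr_pull_off_force(R, W) * 1e3:.2f} mN")  # ~2.4 mN
```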
