
NetNotes

Published online by Cambridge University Press:  21 December 2012

Thomas E. Phillips*
Affiliation:
University of Missouri

Abstract

Selected postings from the Microscopy Listserver from September 1, 2012 to October 31, 2012. Complete listings and subscription information can be obtained at http://www.microscopy.com. Postings may have been edited to conserve space or for clarity.

Type
NetNotes
Copyright
Copyright © Microscopy Society of America 2013

Immunocytochemistry: normalization of immunogold labeling

During the last review of a manuscript showing quantitative immunogold data collected by TEM, the question arose as to why we are not normalizing the gold particle density of the desired antigen to an internal control protein, such as actin, through double labeling. Apparently this is especially crucial when using different samples cultivated under different conditions. Looking through the literature in my field (plant sciences), I was not able to find publications where quantitative immunogold labeling data were normalized against an internal standard. So my questions are: 1) Why is this procedure, which is commonly used for RT-PCR and microarrays, not used for quantitative immunogold labeling? 2) Is anybody aware of publications where gold particle density was normalized against an internal standard, e.g., actin? 3) If such normalization were to be performed, which protein could serve as a control protein in plants? In my opinion it would need to occur in all cell compartments. In my case, I've analyzed immunogold labeling density in different cell compartments after incubating ultrathin sections with primary (against the desired antigen) and secondary gold-conjugated antibodies. Then I've compared labeling density in the different cell compartments between control and treated/stressed samples. Bernd Tue Oct

As I understand it, the quantification of actin is used to normalize the amount of material analyzed. So if you quantify a protein, you express it as μg of this protein per mg of total material (cell extract, cell number, whatever). In your case, you normalize the density of labeling by expressing your results per cm², for example. I think it is dangerous to compare the densities of two labelings with each other. You can use double labelings to compare the localization of two epitopes, but not for quantification. Each labeling has its own characteristics, and one condition may show an increase in labeling not because there are more epitopes per cm² but because, for example, the epitope is better presented, or has been relocalized to a compartment where the antibodies have better access to it, or because it was present in a dense complex and after treatment it is no longer associated with the complex, or the protein has refolded. So if after treatment you see that your labeling for actin is denser (gold particles per cm²), can you conclude anything about another labeling? What is the relationship with the other epitope? There is simply none. It doesn't help to quantify actin at the same time as your epitope of interest, because you cannot draw conclusions about one from the results of the other. Acknowledgement: I am a biologist and I do labelings in LM but not in EM. I am just using my common sense here, so it would still be interesting to hear from those who know. Stephane Nizets Tue Oct 2

As usual with these things, potential answers depend on the research question. Often, immunogold is used to identify a structure (what is this funny-looking blob?) or localize an antigen (is my protein present in the lysosome?). For these purposes, it should be fine to count label densities over appropriate regions, with the usual no-primary control. It is more complex if you want to use immunogold to measure the amount of antigen in some region. It has always been my impression that, in sections, the relation between antigen amount and antibody binding is non-linear (or linear over a quite restricted range). This limits the precision fairly severely. Therefore, I think no amount of normalization is going to magic away this problem and give you perfect quantification. I'd say the best you can get is a reproducible approximation. If you want to push the PCR analogy, most careful qPCR runs use several genes to normalize, not just one. Following this, you would need to mix three control primary antibodies, detect them all with one size of gold particle secondary, and then detect yours with another. While this sounds cute in principle, in practice I don't think it will work well enough to justify the effort (for example, steric hindrance is likely to be a big problem). One should be able to control for the different conditions by reporting ratios of labeling rather than absolute amounts. So the ratio of gold particle density in mitochondria to that in chloroplasts, say, would be the parameter reported. This seems robust to most sorts of sample-prep vagaries (quality of fixation, and so on). Tobias Baskin Tue Oct 2
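
Tobias's ratio approach is easy to make concrete. The sketch below (Python; all gold counts and compartment areas are invented purely for illustration) computes labeling densities per compartment and reports only the mitochondria-to-chloroplast ratio for each condition, which is the quantity that is robust to prep-to-prep variation:

```python
# Sketch: report ratios of immunogold labeling densities between compartments,
# rather than absolute densities, to buffer against sample-prep variation.
# All numbers below are invented for illustration only.

def density(gold_count, area_um2):
    """Gold particles per square micrometer of sectioned compartment."""
    return gold_count / area_um2

samples = {
    # condition: (mito gold count, mito area um^2, chloro count, chloro area um^2)
    "control": (120, 85.0, 310, 140.0),
    "treated": (95, 60.0, 410, 150.0),
}

for condition, (n_mito, a_mito, n_chl, a_chl) in samples.items():
    ratio = density(n_mito, a_mito) / density(n_chl, a_chl)
    print(f"{condition}: mito/chloroplast labeling-density ratio = {ratio:.2f}")
```

Comparing the ratio across conditions sidesteps the question of whether an absolute density change reflects more epitope or merely better epitope accessibility.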

Image Analysis: particle quantification

I am a biologist, and I am a little bit lost with a mineralogical issue. Someone asked me to quantify mineral particles in a powder (number of particles per mg of powder). The particles have a size distribution compatible with LM (0.4–10 μm). I could just weigh a small amount of powder (5 mg) onto a glass slide and count the particles in the LM, but there are too many particles, and weighing less than 5 mg with precision is not realistic. So alternatively, I could suspend the particles, put a certain volume on a glass slide, dry the liquid, and count the particles in the LM, but I fear that the particles would agglomerate. Is it realistic to think that I could resuspend the particles in, say, methanol or acetone, sonicate the suspension, and quickly put (quantitatively) a small amount on a glass slide (possibly on a heating plate)? I could probably do something similar with a stub and SEM, but this would probably unnecessarily increase the analysis time (although I could clearly see agglomerates). I would appreciate it if you would share your experience and thoughts with me—even the weirdest ones, which sometimes turn out to be the best. Any other method will be considered too, as long as I can practically use it. I also have access to a TEM and an SEM. Stephane Nizets Wed Aug 29

Many thanks for helping. Most of you proposed using a hemacytometer to count the particles. It is my (limited) understanding that mineral particles around 1 μm will stay in suspension for a long time, probably more than 24 h. Under these conditions it will be impossible to count the particles because they'll be swimming around the whole time. The solution must take the particle size into account; it is not a trivial question. Particles above 10 μm will definitely sediment within minutes, do not agglomerate, and are easy to filter, but the problem is more complex for small particles. Considering an SEM analysis, I wonder how I could reconcile the small field of view with the necessity of counting all the particles on the stub quantitatively (completely). How could I know which particles I have already counted, so as not to count them twice? I would love to have access to a particle counter, but that is not the case. Stephane Nizets Wed Aug 29
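
Stephane's intuition about settling times can be checked with Stokes' law, v = 2(ρp − ρf)g r²/(9μ). The sketch below is a rough estimate only; the densities and viscosity are assumed values (a quartz-like mineral in water at 20 °C):

```python
# Rough Stokes'-law settling estimate: v = 2*(rho_p - rho_f)*g*r^2 / (9*mu).
# Assumed values: quartz-like particle (2650 kg/m^3) settling in water at 20 C.

g = 9.81          # gravitational acceleration, m/s^2
rho_p = 2650.0    # particle density, kg/m^3 (assumed, quartz-like)
rho_f = 1000.0    # fluid density, kg/m^3 (water)
mu = 1.0e-3       # dynamic viscosity of water, Pa*s

for diameter_um in (0.4, 1.0, 10.0):
    r = diameter_um * 1e-6 / 2                        # radius in meters
    v = 2 * (rho_p - rho_f) * g * r**2 / (9 * mu)     # settling velocity, m/s
    hours_per_mm = (1e-3 / v) / 3600                  # time to settle 1 mm
    print(f"{diameter_um:5.1f} um particle: {v:.2e} m/s, ~{hours_per_mm:.2f} h per mm")
```

Note that this bare estimate ignores Brownian motion and convection, which dominate for sub-micrometer particles and can keep them suspended far longer than Stokes' law alone suggests.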

Why not take images of your fields and count the particles from the images, either by hand or with ImageJ? You restrict sedimentation, and the possibility of counting particles twice, if you can collect your fields fast enough. Joachim Siegmund Wed Aug 29

“When you are a hammer, every problem looks like a nail.” Being an SEM-EDS guy, I do think of using SEM rather than light microscopy. I think the analysis could be done rather quickly, which might make it more cost-effective than LM. You have not said what the two phases are—the mineral and the powder. If they are significantly different, I would consider using EDS to determine their relative abundance, SEM to determine the average particle size, and then use known densities to calculate the number of particles in a volume (mass) of sample. Of course, that assumes there is some chemical differentiation between mineral and powder. It also would help if the powder is not an organic material; it would be hard to do a decent chemical analysis in that case. If that were the case, it might be easier to determine the mass loading by some other chemical analysis and use the average particle size from SEM to go through the calculation. Warren Wed Aug 29

Many thanks to all those who spent some time sending me a reply. I think I have found a solution to my problem, and I'll post it on the list for the archive, since it may help someone else someday. Using a glass slide was not possible, since I couldn't make them clean enough. Fortunately, I noticed that the small plastic petri dishes used for cell culture are perfectly clean. Unfortunately, when I tried to dry 2 μL of suspension on the petri dish, the liquid (ethanol) spread a lot, and the surface to analyze was really too big. Since the particles are sometimes quite small, I require a minimum of a 20× objective, and I needed to stitch 540 fields together to get the whole picture. I don't want to pipet less than 2 μL, because I think the pipetting error would increase too much, so I am not able to quantify the whole volume (2 μL). Fortunately, someone gave me the solution: just add a known concentration of latex beads to the powder suspension, and you can relate the number of mineral particles to the number of latex beads (and thus to the volume) at any time. I have the added advantage of being able to recognize the latex beads specifically through fluorescence, which will make the task of separating them from the powder very easy. Once again the list has proved invaluable for me, so thanks again. Stephane Nizets Fri Aug 31
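
The latex-bead trick amounts to a simple ratio: if the beads are spiked at a known number concentration, the mineral concentration follows from the counts in any subset of fields, with no need to image the whole droplet. A minimal sketch (all concentrations and counts below are hypothetical):

```python
# Internal-standard counting with spiked latex beads.
# If C_beads (beads per mL) is known, then for any set of counted fields:
#   C_mineral = (N_mineral / N_beads) * C_beads
# and particles per mg follows from the powder mass suspended per mL.
# All numbers here are hypothetical.

beads_per_ml = 5.0e6      # spiked bead concentration (known from the bead stock)
powder_mg_per_ml = 2.0    # mass of powder suspended per mL

n_mineral_counted = 830   # mineral particles counted across the imaged fields
n_beads_counted = 410     # latex beads counted in the same fields

mineral_per_ml = (n_mineral_counted / n_beads_counted) * beads_per_ml
mineral_per_mg = mineral_per_ml / powder_mg_per_ml
print(f"~{mineral_per_ml:.2e} particles/mL, ~{mineral_per_mg:.2e} particles/mg")
```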

Image Processing: Photomerge tool in Photoshop

I'm looking for tips from those familiar with Adobe Photoshop's Photomerge tool. Has anyone used this for stitching multiple microscope field images, followed by subsequent quantitative image analysis? I tried this feature for the first time yesterday and found that the image magnification calibration changed after merging. Specifically, a 100-μm scale bar placed on one of the original images was measured by Photoshop's ruler at ~200 μm on the resulting merged image. Of course this is not a problem for a handful of images; we would just recalibrate. However, we have a large volume to process this way and are wondering whether anyone has gone through the steps to see how reproducible the outcome is. If one always begins with image sets captured on the same microscope with the same objective, does the merged image always end up with the same, reproducibly changed, pixel-to-micron ratio? Does this depend on the number of images being merged? I would of course run the tests myself, but this is a short-term project on a tight timeline. I am also curious whether anyone has looked into whether there are any spatial alterations following the merge. Visually the result is impressive; however, there may be more than meets the eye? Karen Zaruba Wed Oct 10

How was the image calibrated and how was the measurement done? It would probably help to know which software was used. I don't think that Photoshop enters into this, per se. Some systems store the calibration as the dimensions of each pixel. That should not change with stitching. However, what should be the case and what is actually the case are often enough different. It is good to verify. Other systems will calibrate based on the field width at a given magnification, regardless of the number of pixels in the image. (We have one.) If I took four images at 1000× with a 120-μm horizontal field width and stitched them together, the result would have (nearly) twice as many pixels in each direction and would represent twice the view (240 μm). But 240 μm is the field of view of a 500× image on our system. If I entered the mag of my new image as 1000× and measured a feature, I would be about 2× in error. A 100-μm scale bar would be measured as 200 μm. It seems like that could be what is going on in your case. I would investigate that avenue. Warren Straszheim Wed Oct 10
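
Warren's field-width scenario is worth making explicit, since the factor-of-two he describes matches Karen's observation. A short sketch (Python; the tile pixel width is an assumed value) showing that a per-pixel scale survives stitching while a magnification label does not:

```python
# Magnification bookkeeping for a stitched mosaic (after Warren's example).
# Pixel-size calibration survives stitching; field-width ("mag") calibration
# does not. The pixel width of one tile is an assumed value.

tile_hfw_um = 120.0    # horizontal field width of one 1000x tile
tile_px = 1280         # pixel width of one tile (assumed)
n = 2                  # 2x2 mosaic

um_per_px = tile_hfw_um / tile_px     # ~0.094 um/px, unchanged by stitching
mosaic_hfw_um = n * tile_hfw_um       # 240 um of real field of view

# The mosaic's true effective magnification is the tile mag divided by n:
tile_mag = 1000
effective_mag = tile_mag / n          # 500x, per Warren's note
print(f"mosaic behaves like a {effective_mag:.0f}x image; "
      f"labeling it {tile_mag}x gives a {n}x measurement error")
```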

Thank you for the quick replies! A number of you asked for more detail on the version of Photoshop being used (CS5 for Mac) and the steps for calibration (micrometer slide image, Photoshop Analysis > Set Measurement Scale > Custom). Looking further into the details, I realized somewhat sheepishly that I was unable to reproduce the error we saw yesterday. Possibly we neglected to check the "Use Measurement Scale" box in Photoshop's Ruler menu bar on one image but not the other. In any case, the magnification error seems to have been of operator origin. Spatial alteration concerns remain: I can't answer whether Photoshop's algorithm involves simply translating images or whether any stretching is going on. I had hoped someone else might have explored this? If not, these concerns should be dispelled by a few checks, such as measuring features "before" and "after" in portions including those where the images overlap. Also, thanks for the suggestion of Fiji's TrakEM2 option, which I may want to explore for another project! Karen Zaruba Wed Oct 10

The Photomerge dialogue describes what changes it will make to the pictures under each option: Auto, Perspective, Cylindrical, Spherical, and Collage will all scale images independently to get the best match (and most of those options will apply much worse distortions too). The Reposition option will not scale images. Do not select geometric distortion correction: if the objective is of high quality there should be next to none, and this process may introduce some. Having used this feature extensively, I have learnt that the process is not the same each time it is applied to the same set of images. If you see a misalignment of images, or a wrongly placed image, it is often possible to re-run the process, and it will do the merge differently the next time. Ben Thu Oct 11

The resolution issue in Photoshop could be due to either the method of making the measurement or, possibly, the setting for the image cache. I don't know whether the geometric methods for image stitching in Photoshop contribute, since I only use the Reposition and Interactive methods. If you use the measure tool, the zoom of the image is paramount to getting an accurate measurement. If the image view is at less than 100%, then you will likely get the wrong measurement. The measure tool appears to be tied to the screen resolution, so the only accurate method is to fill the screen with the scale bar and then draw with the measure tool to get an accurate reading. The same is true when drawing a line to create a scale bar. The reading in the Info box is not accurate unless you are zoomed in. A surefire way to obtain an accurate measurement, no matter what the zoom, is through the use of any means that creates a region of interest or, in Photoshop, an outline that is referred to as "marching ants." The magic wand tool or the Color Range tool with Fuzziness set to zero (0) can be used to create regions of interest. The reading from the Info box will then provide an accurate measurement. That is a relief, because the Analysis tool in Photoshop requires regions of interest. As an added safety precaution, you might also want to set the memory cache to 1 (versus the default setting). In Edit (PC) or Photoshop (Mac) > Preferences, find the cache setting (under Performance in CS3) and set this to 1 (one). If the cache is set higher, then Photoshop retains low-resolution copies of the image for display at lower zooms to save on memory. I haven't really been convinced that a setting of 1 prevents caching, but I'm more comfortable with this setting. I'll write to Adobe to get more info. I created a scale bar and put it through the paces to replicate the scale bar differences when using "Reposition Only" as the photostitching method. I could only get a 2-pixel difference in reading, depending on zoom, when using the Measure tool. I'm on a PC. Jerry Sedgewick Thu Oct 11

LM: cleaning camera

I am afraid I am not able to solve a seemingly basic problem. We have an inverted Zeiss light microscope with a CCD camera (PixelLink) just above the oculars. There is a lens (a ring) between the camera and the microscope. We have three fixed halos in the background, and I can't find the origin of the dust. The halos appear as if the dust were near focus. Here are the tests I did to diagnose the problem: When I turn the camera with the ring attached (both turn together relative to the microscope), the halos don't move even a bit. So the dust must be part of the camera/ring, not the microscope. If I detach the camera from the microscope (with the ring attached), the background (when I direct the camera at the light coming from the window) is perfectly clean. So the dust is not part of the camera/ring but is part of the microscope?!! It makes no sense! Or perhaps the ambient light is not enough to give a clear picture? I have a bright uniform background though. One additional note: if I loosen the ring and slowly shake the camera/ring a bit, the halos also shake a bit in the picture. I have already cleaned the lens/ring and the thin glass in front of the camera CCD with lens paper and ethanol. Stephane Nizets Mon Oct 15

The light coming through the microscope is coming at the camera/coupling optics from a small range of angles, making the dust cast sharp shadows on the sensor. Off the microscope the light comes into the camera/coupling optics from a wide range of angles, and the shadows are softened to the point of being invisible. The same happens with dust on DSLR sensors—often dust is not visible until you stop down a lens to f/11 or so. You could try performing the same test in a dark room with a small high-powered LED torch located a few feet away. When you say you cleaned the glass in front of the CCD, do you mean an IR (hot mirror) filter in front of the sensor? If so, there could be dust on the internal side of the hot mirror (if there is a significant gap between that and the sensor). Does the hot mirror have a retaining ring allowing its removal? Regards, Ben Micklem Mon Oct 15
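
Ben's explanation can be put in rough numbers: the blur (penumbra) of a dust shadow grows with the angular spread of the illumination times the dust-to-sensor gap, which is why the same speck goes from a sharp halo to an invisible haze. A back-of-envelope sketch (all dimensions are assumed values for illustration):

```python
# Penumbra width of a dust shadow ~ 2 * gap * tan(half-angle of illumination).
# The dust sits some distance above the sensor (e.g., on a filter window).
# All values are assumed, for illustration only.

import math

gap_mm = 1.0   # dust-to-sensor distance (assumed)

for name, half_angle_deg in (("microscope (near-collimated)", 0.5),
                             ("room light (wide open)", 20.0)):
    blur_um = 2 * gap_mm * 1e3 * math.tan(math.radians(half_angle_deg))
    print(f"{name}: shadow blur ~ {blur_um:.0f} um")
```

With near-collimated microscope light the blur stays far smaller than the speck's shadow, so the halo is visible; with wide-angle room light the blur is hundreds of micrometers and the shadow washes out.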

TEM: correcting beam tilt

I work on a JEM-2100F and do not have much experience with TEM. While aligning the beam, I am a bit confused. If the gun tilt is okay and the rotation center has been centered, then the caustic spot is not symmetrical; rather, it is like a circle with a bright dot on one side. On the other hand, if I make my caustic spot symmetrical using bright tilt, then the image shows some kind of "band" passing through it near exact focus (see link), with severe astigmatism. On closer inspection, it looks similar to the ring of infinite radial magnification in a Ronchigram. What exactly am I missing in the alignment? I exposed a Formvar film for too long and saw that the "burn mark" also looks as if the beam is tilted a bit. What should I focus on more: rotation centering from HT wobbling, or a symmetric caustic spot? Should they not be the same? Link to images: https://sites.google.com/site/auxilliarylinks/ Amit Gupta Thu Sep 6

You clearly have a good question here, as there does not seem to be a flood of replies. The first thing I must say is that it is impossible to have a variation in astigmatism across an image; the streaking that you see seems to be because you are too near the condenser crossover when the illumination is incorrectly aligned. I've set out below my TEM Generic Alignment Data, which I believe is appropriate for you. Try these alignment procedures, because the problem has to be alignment. Steve Chapman Fri Sep 7

EDITOR'S NOTE: Steve's Operating Instructions for a JEOL 200C, 2000F, 2000FX are far too long to include here. We have reluctantly deleted this excellent resource for space reasons, but Steve has given us permission to invite interested readers to contact him directly by e-mail, whereupon he will send the instructions to you directly. — Ron Anderson, Exec. Editor

Just to point out that this is a FEG-TEM—so the part in Steve's protocol about reducing the filament current doesn't apply; you just use the anode wobbler instead and make the beam oscillation radial about its center. The 2100 can be difficult to align since it has a fairly sophisticated condenser lens system—three condenser lenses plus a condenser mini-lens. This means you can independently change the convergence angle (alpha control) and beam dimensions (spot size). Although the alignment procedure is nominally the same as on the older two-lens machines, in practice you may have to go around the loop a couple of times until everything is aligned—that is, when you align one parameter, another one may become misaligned. It's more like "tuning" than aligning. The other very important thing is that you do the alignments at "standard focus" (special button with a cover!). Generally you don't touch the focus apart from very fine tweaks; instead you set the machine to standard focus and use the Z control to bring the specimen to the right height (i.e., into focus). Also, you need to be at 40,000× or above to make sure all the projector lenses are switched on (some changes in focus and alignment will be present at low magnifications, when some projector lenses are turned off). Do this first and then follow the steps below. 1) Because of the interaction between the different alignments, I generally start with the tilt center. Choose your alpha (usually 3) and spot size 1. Focus the beam to a spot, turn the Tilt-X wobbler on, and make the two spots one by selecting the Tilt align and using the Def-X knob. If the spots move on parallel tracks when you do this and never become one, select the Angle align to make the spots move in the perpendicular direction, again with Def-X. You can center the beam with Shift-X while you're doing this, if you need to. Once you have a single spot, turn the Tilt-X wobbler off and repeat for Tilt-Y. 2) Check that the condenser aperture is centered, and next try a voltage (HT) center using the HT wobbler and Bright Tilt → Def-X,Y. If this is way off, just make the beam oscillate concentrically as you did with the anode wobbler. If it is not too far off, make the center of the image stationary (tip: it's easier to find the mid-point of the direction of the image movement than the smallest amplitude of movement). Once you've done this, check the condenser aperture again, and then the anode wobbler again. 3) Finally, do the gun shift alignment (the only alignment in which the Shift-X and -Y knobs change their function). Spot size 5—center with Beam shift; then spot size 1—center with Gun shift. Repeat until both are centered. Go back to (2) and check they are still okay (you will probably have to do at least an HT center). Eventually you should get to the point where all the different alignments are pretty close. There is one more alignment that can have a big effect on everything else—the Shift center—but hopefully this is okay for you; if not, make it #4 in the list above. If you change alpha (or even go to Low Mag and back to Mag again) you may have to repeat the alignment, usually with only fairly small adjustments. Finally, at the end you can save the alignment in a file, so even if someone else comes along and messes everything up, you can reload the same lens currents and use it as a good starting point (but note this will not work after a change of filament!). Hope this helps and good luck. Richard Beanland Fri Sep 7

Thank you, everyone who replied. A major problem is now solved. I aligned the gun shift and tilt more meticulously and avoided an overexcited C2, so those streaks don't appear now. But one more thing keeps bothering me: there is still an asymmetrical beam. I took two more pictures of the beam, as shown in the link below (bottom two). We can see such "spikes" on one half of the beam. What are they? Are they anything that degrades quality? Is there any way to correct them? I noticed that instead of being completely circular, the beam is a bit "flat" on one side, like the shape of a partially deflated football. https://sites.google.com/site/auxilliarylinks/ Amit Gupta Sat Sep 22

The “spikes” are due to a dirty beam-defining aperture (usually the C2 aperture). Something on the aperture edge is charging. Replace or clean the aperture and you should be good to go! As a quick test, you should be able to change to a different C2 aperture and if that aperture is clean, you should see an improved beam. Also be sure to do a proper C2 aperture centering. Hendrik O. Colijn Sat Sep 22

Your third TEM image with the "spikes" has, for me, the look of a dirty inner surface or dirty aperture hole causing astigmatism in the beam. Did you try other (contrast) apertures, and did you do a careful alignment of the condenser astigmatism? There also seems to be an orientation pattern in the edge of the structure. Did you use a hexagonal mesh grid when shooting the image? Stefan Diller Sun Sep 23

Thank you all for the replies. Stefan—no, I was not using any hexagonal grids, and these spikes were there despite careful alignment. So today I used a different condenser aperture, and the beam was, as expected, symmetrical. So it's just a dirty aperture. Is there any way to correct it myself, or is removing it the only way? One more question: look at the second picture in line, the one I posted earlier. There we can see that one side of the beam is a bit different from the other (darkening more). Can this also be attributed to the same problem? Amit Gupta Sun Sep 23

Glad the aperture problem is diagnosed. Apertures can be cleaned; check your microscope owner's guide for the procedure. It will differ depending on whether you are using Pt or Mo apertures. The other option is simply to replace the dirty aperture. The uneven illumination in the second photo appears to be due to the "hot spot" in the emission of a FEG source. I assume that your microscope has a Schottky "FEG" emitter. The emission from a Schottky source is strongly peaked in the forward direction and falls off at larger emission angles. If you use a large beam-limiting aperture (usually C2) you can see this distribution (the so-called "witch's hat" profile). For my Tecnai F20, I see something like this when I use a 150-μm C2 aperture. The fact that the hot spot is off-center means that the gun alignment is off by a little bit. This is corrected using the gun tilt and shift. Once you get the "hot spot" centered in the large aperture, you will generally operate with a smaller C2 aperture, which cuts off the less intense "halo" around the central "hot spot." Hendrik O. Colijn Sun Sep 23

Electron Backscatter Diffraction: differentiating phases

Does anyone have any thoughts on differentiating C14 from C36 Laves phases via electron backscatter diffraction (EBSD)? The ICDD reports crystal files for both polytypes of Cr2Zr. I have (Cr,Fe)2Zr EBSD patterns that fit with extremely high confidence to either phase file. Both have the same space group (#194, P63/mmc), so it is no surprise EBSD is confounded. C14 Laves is the MgZn2 structure and C36 the MgNi2 structure, in which C36 is more-or-less two C14 unit cells stacked together along c, with a slightly different stacking sequence (http://cst-www.nrl.navy.mil/lattice/struk/ctype.html). Looking at predicted band widths (lattice parameters) is less than convincing, due to the similarity of the a-parameters and the fact that the C36 c-parameter is almost exactly twice the C14 c-parameter, so high-order reflections tend to fall in nearly the same place. By squinting at the patterns with superimposed predicted band widths, I think the phase is C14; the literature agrees, but I'd like to convince myself further. Is there anything short of TEM or dynamical simulation I might try? Chad M. Parish Tue Sep 18
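
The overlap Chad describes follows directly from the hexagonal d-spacing formula, 1/d² = (4/3)(h² + hk + k²)/a² + l²/c²: because c(C36) ≈ 2·c(C14), every C14 (h k l) band falls at essentially the same position as the C36 (h k 2l) band. A sketch with idealized lattice parameters (the Cr2Zr values below are assumed approximations; check them against the ICDD cards):

```python
# Hexagonal d-spacings: 1/d^2 = (4/3)*(h^2 + h*k + k^2)/a^2 + l^2/c^2.
# With c(C36) ~ 2*c(C14), the C14 (h k l) and C36 (h k 2l) bands coincide,
# which is why EBSD indexes both polytypes with equally high confidence.
# Lattice parameters below are approximate assumed values for Cr2Zr.

def d_hex(h, k, l, a, c):
    """d-spacing (same units as a and c) for a hexagonal cell."""
    inv_d2 = (4.0 / 3.0) * (h * h + h * k + k * k) / a**2 + l**2 / c**2
    return inv_d2 ** -0.5

a14, c14 = 5.10, 8.27          # C14 (assumed, Angstroms)
a36, c36 = 5.10, 2 * 8.27      # C36: same a, doubled c (idealized)

for (h, k, l) in [(1, 0, 0), (0, 0, 2), (1, 0, 1), (1, 1, 0)]:
    d_c14 = d_hex(h, k, l, a14, c14)
    d_c36 = d_hex(h, k, 2 * l, a36, c36)   # doubled l index in C36
    print(f"C14 ({h}{k}{l}) d={d_c14:.3f} A  ~  C36 ({h}{k}{2*l}) d={d_c36:.3f} A")
```

Only the weak C36 reflections with odd l, which have no C14 counterpart, distinguish the polytypes, and those bands are generally too faint for Hough-based EBSD indexing.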

If you can deposit patches of each of the Laves phases onto the specimen—one hopes they will be epitaxial—the EBSD patterns from areas with the corresponding Laves phase will be completely superimposed. Of course, you would need to know which of the patches corresponded to which phase, and the areas from which the EBSD data are taken must contain both the unknown and the known patch. I have no idea whether this is practical for these substances, so good luck. Bill Tivol Wed Sep 19

Electron Probe Microanalysis: EPMA

I have experience in TEM and SEM but none in EPMA. I would like to know what kind of gun is used: W, LaB6, FEG? I would also like to know what X-Y resolution can be attained. I suppose that it is not possible to go down too much in kV, otherwise you cannot measure the lighter elements, so one probably has to take strongly into account the interaction volume inside the specimen (Z resolution), which increases with kV. To take an example, what X-Y (and Z) resolution can be expected using a Cameca SX100 instrument with an aluminosilicate specimen (a "light" mineral)? Does it make sense to expect to be able to resolve different minerals in particles 3–10 μm in size? Stephane Nizets Wed Oct 3

Roughly speaking, I'd say that SEM and EPMA are the same type of instrument today, but SEM is optimized for resolution, and EPMA is optimized for the needs of X-ray microanalysis (i.e., stability, specimen positioning, ports for a number of WDS spectrometers). Any kind of gun can be used, with the exception of a cold FEG (not stable enough). As for spatial analytical resolution, it is exactly the same as for SEM (same physics). The energy (wavelength) resolution of WDS is, of course, much better than that of EDS. As for accelerating voltage, on an EPMA with a Schottky gun you can go as low as you wish (the same as for SEM/EDS). For aluminosilicates you can use 3–4 kV; the interaction volume, I believe (but I am not sure), will be about 100–200 nm. Lighter elements are good for high-resolution analytical work. As for particles 3–10 μm in size—they are huge; no problems at all with qualitative analysis. Vladimir M. Dusevich Wed Oct 3
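
Vladimir's interaction-volume figure can be sanity-checked with the Kanaya–Okayama range, R(μm) = 0.0276·A·E^1.67/(Z^0.89·ρ), a standard rough estimate of electron penetration depth. The mean atomic weight, atomic number, and density in the sketch below are assumed approximations for an aluminosilicate; for real work a Monte Carlo simulation would be the better check:

```python
# Kanaya-Okayama electron range (a standard rough estimate):
#   R [um] = 0.0276 * A * E^1.67 / (Z^0.89 * rho)
# with A in g/mol, E in keV, rho in g/cm^3. The mean A, Z, and density
# below are assumed approximations for an aluminosilicate mineral.

A = 20.0     # mean atomic weight (assumed)
Z = 10.0     # mean atomic number (assumed)
rho = 2.6    # density, g/cm^3 (assumed)

for e_kev in (3.0, 4.0, 15.0):
    r_um = 0.0276 * A * e_kev**1.67 / (Z**0.89 * rho)
    print(f"{e_kev:4.1f} kV: interaction depth ~ {r_um * 1000:.0f} nm")
```

At 3–4 kV this gives roughly 170–280 nm, consistent with Vladimir's 100–200 nm estimate, while a conventional 15 kV analysis excites several micrometers of material.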

Let me add a couple of comments to the information Vladimir gave. The largest difference between SEM and EPMA is not the electron gun and column; it is the detectors. EPMA utilizes wavelength-dispersive spectrometers with gas detectors, whereas SEM utilizes a solid-state energy-dispersive spectrometer and detector. Of course, many EPMAs also have an EDS detector, and a few SEMs have an add-on WDS. You specifically mentioned the measurement of light-element contents. 1) The spectral resolution of WDS is on the order of 10 times or more better than that of EDS, so interferences from L and M lines of elements present in the sample, falling on or near the light-element line of interest, will in many or most cases not cause errors, whereas with EDS this may not be the case. 2) EPMA-WDS traditionally utilizes standards, so quality control is possible (is the analytical total close to 100 wt%?), whereas SEM-EDS traditionally does not utilize explicit standards and normalizes the results to 100 wt%, so quality control can be an issue. John Fournelle Wed Oct 3
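
John's analytical-total check can be illustrated with a toy calculation. In first approximation (before ZAF or φ(ρz) matrix corrections), each concentration is the measured k-ratio times the standard's concentration, so the unnormalized sum is a genuine diagnostic; normalized EDS results sum to 100 wt% by construction and hide such problems. The k-ratios below are invented for illustration:

```python
# Standards-based quantification keeps the analytical total as a QC check.
# First approximation (before matrix correction): C_sample ~ k * C_standard.
# The k-ratios below are invented; pure-element standards are assumed.

measurements = {
    # element: (measured k-ratio, standard concentration in wt%)
    "Fe": (0.65, 100.0),
    "Ni": (0.20, 100.0),
    "Cr": (0.14, 100.0),
}

total = sum(k * c_std for k, c_std in measurements.values())
print(f"analytical total = {total:.1f} wt%")
if not 98.0 <= total <= 102.0:
    print("total far from 100 wt% -- check standards, surface, missing elements")
```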

It looks like one essential advantage of EPMA (or WDS) over EDS is still missing. On the one hand, the EDS detector receives all the X-ray photons within the collection angle from all elements present under the electron probe; on the other hand, the counting rate is limited by the processor/multichannel analyzer. When analyzing samples with major and minor elements—say, steels or high-temperature alloys—the major element (Fe, Ni, …) reaches a number of counts that gives excellent statistical relevance. In contrast, minor elements like C or addition elements (Mo, W, Ta, Zr, …) have poor counts even after a long acquisition, and their quantification remains quite uncertain. In an EPMA, each detector sees only photons of one element at a time. Thus an educated operator will choose a weak line (or a second-order line) for the major element to bring its count rate down to the level of the most intense line of the minor elements, and then boost the probe current to get a high count rate for every element. This dramatically improves the statistical relevance and composition accuracy for light elements. It also brings better detectability of trace elements, toward tens of ppm or even ppm, instead of the classical 0.1% of EDS. Philippe Buffat Thu Oct 4
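
Philippe's argument about statistical relevance reduces to Poisson counting statistics: a peak with N counts has a relative uncertainty of roughly 1/√N. A minimal sketch (the count totals are invented for illustration) showing why boosting the probe current on a per-element WDS spectrometer pays off for minor elements:

```python
# Poisson counting statistics: relative uncertainty ~ 1/sqrt(N).
# The count totals below are invented, to contrast throughput-limited EDS
# with current-boosted WDS on a dedicated spectrometer.

import math

def rel_err(counts):
    """Approximate relative (1-sigma) uncertainty of a Poisson count."""
    return 1.0 / math.sqrt(counts)

# EDS: total throughput limited by the pulse processor, shared by all elements.
eds_minor_counts = 400            # minor-element peak after a long acquisition
# WDS: one spectrometer per line; probe current boosted for the minor element.
wds_minor_counts = 250_000

print(f"EDS minor element: +/-{100 * rel_err(eds_minor_counts):.1f} %")
print(f"WDS minor element: +/-{100 * rel_err(wds_minor_counts):.1f} %")
```

Going from a few hundred counts to a few hundred thousand cuts the relative uncertainty from about 5% to about 0.2%, which is exactly the gain in "statistical relevancy" Philippe describes.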