
NetNotes

Published online by Cambridge University Press:  06 September 2019

Copyright © Microscopy Society of America 2019 

Edited by Bob Price

University of South Carolina School of Medicine

Selected postings are from discussion threads included in the Microscopy (http://www.microscopy.com) and Confocal Microscopy (https://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy) Listservers from May 1, 2019 to June 30, 2019. Postings may have been edited to conserve space or for clarity. Complete listings and subscription information can be found at the above websites.

Techniques

Confocal Microscopy Listserver

Well-Liked 405 Secondary Antibodies (Thread started May 2, 2019)

We have had some reproducibly disappointing results with Alexa Fluor 405 tagged secondary antibodies. Does anyone have a go-to secondary in this color other than the AF? We are trying to multiplex as many channels as possible, and this is the only one we struggle to get good brightness/SNR with. Thanks! Joe Lebowitz

Whatever happened to Brilliant Violet? I remember a bit of marketing about it a while ago but never got around to trying it myself. Did anyone get a chance to give it a go? Craig Brideau

I've had good success with Becton Dickinson BV dyes. BV421 is the best blue fluorescent dye I've tried.

http://www.bdbiosciences.com/us/solrSearch?text=bv421&x=0&y=0 Kathryn Spencer

Hi Joe, we struggle with this also. I imagine it is a common complaint. I would have to say that when imaging blue fluorophores with a 405 nm laser on a confocal, we have had the best luck with Alexa 405. Sorry. Brian Armstrong

I really like DyLight 405 both for confocal and 3D-SIM. It's very picky regarding anti-fades though. Use Prolong Diamond (uncured), and it is just fine (not so good with Prolong Gold). Just to warn you, if you look down the microscope visually you will think the staining hasn't worked—the autofluorescence overwhelms the dye—but when you start imaging it's totally different. Good luck! Alison North

We also find that AlexaFluor 405 performs the best for our confocal systems, as all except one have 405 nm lasers. One other comment I would make is that depending on which filter set you have available for observation, the signal may look very weak to your eye (as it's violet) but may be much better than you think. I have often been pleasantly surprised by the signal strength of staining with this fluorophore when I switch over to confocal. I would also suggest increasing the concentration of the secondary and putting your best primary antibody into this channel if possible. Cheers, Jacqueline Ross

(Commercial Response) I'm wondering how the results were disappointing. I'd be happy to troubleshoot offline. But what I can add is that violet dyes in general tend to be dimmer and less photostable than many other wavelengths.

In my testing, I have found the DyLight 405 is brighter and a bit more photostable than Alexa Fluor 405 when compared side-by-side, if that might help. Jason A. Kilgore

BV421 is good, but I always use it for highly expressed antigens. Also, since it is a polymer dye, it does not penetrate thick tissue sections well, so some optimization needs to be done. All the best, James Muller

This is complicated by the 405 nm lasers on confocals. Alexa Fluor 350 works well on wide-field fluorescence scopes with standard DAPI blocks but does not work at all on confocal microscopes with 405 nm lasers. Alexa Fluor 405 is weak on our wide-field scopes that use a DAPI block with excitation at 365 nm (except for one with a 395 nm LED instead of a DAPI block) but is ideal with a 405 nm laser on a confocal. We have to find out which scope people want to use before advising on a blue probe, but most people don't ask before they label. Michael Cammer

(Commercial Response) You've probably heard some of this from me, and I know that George McNamara has also been impressed with and written about the Brilliant Horizon fluors from BD (BD owns these from its purchase of Sirigen). Here's a reference using Brilliant Violet 421 and Brilliant Violet 480: “Multilocus Imaging of the E. coli Chromosome by Fluorescent In Situ Hybridization.” https://www.researchgate.net/publication/319289637_Multilocus_Imaging_of_the_E_coli_Chromosome_by_Fluorescent_In_Situ_Hybridization

BV421 is approx. 20- to 40-fold brighter than AF405. BV480 is also approximately 25-fold brighter than CFP (depending on who is measuring). The problem remains the availability of reagents, as BD has decided that their business model is better served by reserving these reagents for their custom-conjugated Ab's for flow. But I believe that BD still entertains custom requests for their reagents. IMHO, directly conjugated primary Ab's are the best way to leverage these fluors because they're so bright that they don't need amplification. This removes the problem of species specificity. Why wouldn't one try this? Here's a link to a presentation that illustrates the efficacy of this approach for filling the violet and blue gap in available fluors. Given the headlong stampede toward increased multiplexing, I don't understand how these fluors could be overlooked: https://www.chroma.com/5-channel-fluorescence-imaging-simplified

I hesitate using the “COMMERCIAL RESPONSE” qualifier, because Chroma Technology has no formal relationship with BD Biosciences and receives no credit for purchases of related products. Chroma Technology profits only from sales of filter sets which are used to image these fluors. Jeff Carmichael

As previously noted in another reply, BV421 = Brilliant Violet 421 works fine with a 405 nm laser. BV421 is available from BD Biosciences (which acquired Sirigen, the manufacturer of the Brilliants) and from BioLegend. We have used a CD## direct-labeled primary antibody added to mouse leukocytes, imaged live on our Leica SP8 confocal microscope (HyD1, 2nd gen hybrid detector), including both time series (100 min) and Z-series. I note that BD Biosciences now has BUVs (excite well at ~350 nm), a few BBs (excite at 488 nm), and one BYG (excite at ~561 nm). I have suggested to a number of users that they move on from DAPI and Hoechst to BioLegend's Zombie NIR (excite ~640 nm, emission peak ~750 nm). BV421-antibody conjugates have a narrow emission spectrum, are bright, and are available on lots of antibodies, whereas DAPI and Hoechst are broad-spectrum, one-trick fluors. George McNamara

Jeff, Excellent post. The BD business model limitation can be dealt with. And the fluorescence microscopy community would be well served by using as many “flow cytometry” compatible reagents as possible. BD Biosciences and BioLegend each sell BV421 streptavidin conjugates:

http://www.bdbiosciences.com/us/reagents/research/antibodies-buffers/second-step-reagents/avidinstreptavidin/bv421-streptavidin/p/563259

https://www.biolegend.com/en-us/products/brilliant-violet-421-streptavidin-7297

I note that streptavidin has four binding sites for biotin, so it may lose some efficiency in conjugation, and there is a need to optimize the BV-SA:biotin-mAb ratio; ideally get 1:1 conjugation. Maybe the best counter to BD's business model is to use Fab (or scFv) instead of full-length IgG. George McNamara

Interesting topic. BD Biosciences and BioLegend each sell BV421 streptavidin conjugates. Does this work? From my days in chemistry I remember that streptavidin sticks to almost anything, which is not what you want to combine with your highly specific antibody. Isn't this why they developed NeutrAvidin?

“Maybe the best counter to BD's business model is to use Fab (or scFv) instead of full-length IgG.” We used single Fabs a lot for single-molecule tracking and STORM, but they come off very easily; and while you have fewer problems with inaccessible binding sites, labeling can be quite inefficient. Andreas Bruckbauer

As I see it, the main reasons for NOT going with directly conjugated primary antibodies are (1) the extra time and effort required to perform the conjugation and clean up the product, and (2) the fact that, in principle, conjugation can alter the specificity of an antibody, which would require repeating its characterization. However, the latter is a theoretical issue rather than one that (to my knowledge) has been documented. Anybody have data on this point? I've found nothing. Martin Wessendorf

I've been using DyLight 405 on our SP8 confocal (and Airyscan), and it works really well (though I am using it with my strongest primaries, for instance against nuclear pores), in combination with 488 (Alexa or DyLight) and red secondaries. The mounting medium is based on sorbitol (https://www.nature.com/articles/s41598-018-32191-x). I haven't tested it in mounting media like ProLong et al. Debora Olivier

Martin, yes, I've had the misfortune of ruining perfectly good primary antibodies in conjugation reactions that resulted in labeled antibodies with excellent fluorophore:protein ratios but no remaining antigenicity. So, yes, this is largely hypothetical, but I do know that in the case of the reference I provided, BD Biosciences did offer assistance with the conjugation. I'm not sure if the author (David Bates) required the assistance or not, and I believe he conjugated oligos and not antibodies, but I believe BD will offer conjugation assistance, as the chemistry with these polymers can be challenging.

My own take on the extra time is: pick your poison. Lots of preferred ways of doing things take more time, and it seems like many folks have a protocol or two that they are obsessive about. In the end it seems worth the effort to me. If imaging is the approach a worker has chosen to focus on to obtain some highly multiplexed data, then it seems that using the best tools is worth the trouble, considering all of the time and effort going into doing the work at the bench, preparing samples/slides, acquiring the images, processing the images, and then doing the image analysis. Being able to exploit this violet and blue fluorescence gap in the spectrum seems like an obvious approach to improve the reach of imaging. If the data is simply supporting in nature and not central to the work, then one probably wouldn't do the extra work. It also seems that in this age of reproducibility crisis, better tools like these may be part of the solution. Jeff Carmichael

Thanks for the interesting discussion. It is worth noting the potential change in binding efficacy when you attach the fluorophore in the case of a polymer. How does this compare to the coating chemistry of quantum dots? Craig Brideau

Interestingly, this is not just an issue for large polymers; it is a potential problem when chemically conjugating anything to an antibody. My unfortunate experience was with Alexa Fluors and Oregon Green 488, small organic molecules. Here's a link to a paper in the Biophysical Journal (2018) describing the alteration of antigen binding following antibody conjugation to one of two different Alexa Fluors: https://www.cell.com/biophysj/pdf/S0006-3495(17)35091-9.pdf

I suspect that there are some additional concerns when it involves large polymers, but in general it seems to be a steric type of conformational hindrance, not necessarily related to the particular chemical moiety, like a conductive polymer vs. a ZnS QDot shell vs. small organic fluorophore vs. the chemical linker used in the conjugation. Jeff Carmichael

Microscopy Listserver

Uranyl Formate - Negative Stain - Freeze/Thaw Quality

Does anyone freeze small aliquots of uranyl formate for storage? Our facility has recently increased its use of uranyl formate for negative staining, but, once prepared, it has a very short shelf life. This means we are generating a lot of waste. I have seen references for freezing small aliquots but have not used thawed UF myself. I am wondering what others’ experiences have been? Do you have a strong opinion on the efficacy or quality of thawed UF as a negative stain? Thank you. Charlene Wilke

Yes, we do store our uranyl formate in 300 ml aliquots in a -80°C freezer. We thaw once just before use (under cold running water). The solution has already been filtered (0.22 µm). We have never had any problems. Good luck. Michael Delannoy

Microscopy Listserver

Permanent Slide Labels (Thread started June 12, 2019)

I need to make some permanent labels for my teaching slide sets. Does anyone have a label maker and tape recommendation? I am particularly interested in tape that resists water and smudging from handling. I want a long-lasting durable label. Thanks. Thomas E. Phillips

To create absolutely permanent, un-removable marking on glass slides, just Google for a machine shop offering laser marking in your area. The shop doesn't necessarily have to be currently working with glass—most laser marking machines will etch glass. Chances are, some lab at your university has pulsed laser ablation or a laser marking system, and they could do the marking for you. If you have a steady hand, then a $15 micro-engraver from Amazon with a diamond burr would do the same permanent marking, or insert the engraver into a pantograph or CNC router. Valery Ray

I second the engraving option (manual on my end—I don't have access to a laser engraving system… yet). It is definitely the most resistant option (although it takes a little practice to engrave properly by hand with a diamond or tungsten carbide pen). Several years ago, several samples were given to my old university collection with sample numbers written on the glass slide with a high-quality permanent marker, and the inscription was covered with a varnish or some kind of glue/epoxy. I don't remember what varnish/glue they were using, but it was like a kind of clear nail polish. I'm not sure how it would hold up over the very long term (>50 years) without cracking or flaking, and the ink might actually diffuse into the epoxy, creating a blurry purplish effect (a slow chromatographic analysis of the ink over the decades?). Another solution that we employ when making epoxy blocks or thin sections (of rock) with epoxy/glue is to print a small label and embed this label on the side of the thin section. However, the last two solutions apply well to rock/mineral sample preparation or samples that are embedded in epoxy—I'm not sure if this would work in your field. Allaz Julien

You might check with the histopathology lab of a nearby hospital. If you like what they use, perhaps you could work out an arrangement with the lab and save yourself the trouble/expense of purchasing a high-end label printer. Their slides often need to be stored for decades, so your interests and theirs are similar. Doug Cromey

Microscopy Listserver

Cell Transfer from PDMS (Polydimethylsiloxane) Wells (Thread started June 18, 2019)

I'm trying to utilize a combination cell stretcher/electrical stimulator to induce certain behaviors in iPSC-derived cardiomyocytes. The system uses PDMS wells to stretch 2D cell monolayers, but I also want to see how said system impacts intercalated disc ultrastructure. Does anyone have any idea how I might transfer a cell monolayer from the PDMS wells onto a substrate that can be prepped for TEM? The kicker: I can't use trypsin or any other enzyme that digests cell/ECM contacts, as that will also disrupt the structures I want to image. I also don't want to damage the very expensive PDMS wells. Any ideas? Thanks! Tristan Raisch

If the well is not too small, you could briefly fix the cells with glutaraldehyde and use a small cell scraper (you can search Google for a supplier) to remove the cells from the bottom of the PDMS wells. Use a pipet to transfer the cells to a microfuge tube, spin to make a firm pellet, let them fix for 1 hr, and then replace the glutaraldehyde with buffer twice (trying not to disturb the pellet). Fix in osmium, rinse, embed in 2% agar (you will need to stir the cells a bit), refrigerate until the agar has hardened, remove the plug of agar, cut it into smaller pieces that contain the cells, and process for TEM. If there are not enough cells to make a visible pellet, you could combine the cells from several wells. The glutaraldehyde fix should help maintain the ultrastructure of the intercalated discs. I hope you find this helpful. Cynthia Goldsmith

You might try growing the cells on Permanox petri plates. These plastic plates are 60 mm in diameter, not very large, and they are compatible with Spurr's resin all the way through the process (there is no need to use propylene oxide—in fact, DO NOT use PO, as it will etch the plate). The cured resin will separate easily from the plastic dish while still warm from the embedding oven. You can select the area you want for ultramicrotomy by examining the cell “cookie” under a light microscope. The plates are available from Electron Microscopy Sciences in sterile bags of 40. Debra Townley

Microscopy Listserver

Raman/FIB Removing a Carbon Coating Layer and Alternatives to Carbon (Thread started May 29, 2019)

Does anyone have a suggestion for how to remove a 50 nm carbon coating OR reduce FIB charging when milling on a rock sample, by other means than carbon coating? We have an experiment where we would like to measure Raman spectra on a rock sample after milling some holes in it with a focused ion beam (FIB). Unfortunately, we are observing heavy charging when imaging/milling with the ion beam. We carbon-coated a sample with a 50 nm carbon layer, which totally removed the charging problem but introduced a new problem—the Raman spectra are severely impaired by the carbon.

Can anyone suggest a way around this? We would like to either remove the carbon coating in a gentle manner, or find a way to reduce FIB charging without impairing the Raman. We have many samples we would like to image in this manner and the opportunity to experiment with different suggestions. I would greatly appreciate any advice. Best regards, Thomas Aarholt

One possibility—no guarantee it will work—is to use a Sharpie marker or highlighter pen instead of carbon coating. A new Sharpie that is very wet will wick out a thin layer past the tip of the pen that *might* be thin enough and conductive enough to FIB through. Touching the Omniprobe to the ink might give you enough grounding to FIB at a low beam current. A plasma cleaner that is operating correctly will remove Sharpie ink after ~15 minutes. Chad M. Parish

Carbon coating is fairly easy to remove with oxygen or oxygen/helium plasma. A dedicated plasma cleaner would work best, but you can even try Evactron or IBSS in-situ cleaners. A plasma cleaner is preferred over a plasma etcher, as it typically works with much lower RF power and therefore provides “gentler” cleaning. For complete chemical removal of a carbon coating, get a UV cleaner (or find a lab that has one)—the removal would be terribly slow (hours or even a couple of days for 50 nm of carbon), but there is absolutely no physical damage to the surface. An easily removable coating for preventing FIB charging would be a sprayed-on conductive polymer, applied either with a regular atomizer or, better yet, with an ultrasonic nozzle. There are plenty of formulations; I've worked with aquaSAVE (removable with water) and E-Shield (removable with alcohol) with great success (although I never tried them on rock samples). Valery Ray

Software

Microscopy Listserver

Photoshop or Alternatives Going Forward (Thread started June 28, 2019)

I'm looking for advice on the use of Adobe Photoshop for image processing and measuring going forward. I have seen news regarding warnings against using old versions of the CC software, and I am wondering if I should be concerned that it may only be supported/approved in its subscription version. Could it be subject to more changes on short notice, making it more difficult for users to keep a stable version? Could Adobe make changes that affect the way scientists use it without affecting [what I assume is] the larger base of artistic users?

For a little background to frame my questions, our lab employs stereological tools to quantify kidney structural features using digital TEM images. We rely on the Photoshop Ruler for calibrating magnification and some measuring of features. We trust that the fundamental image pixel size, image resolution, and the way counting grids are layered will not be altered. We generate large montages of TEM digital micrographs, so handling multiple 400 MB files without locking up is key. And, finally, stability in appearance is important for streamlining procedures like this that involve heavy user input.
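
As a rough illustration of the calibration-and-measurement step described above (and not Karen's actual workflow), a minimal Python sketch might look like the following; the scale bar length and point coordinates are hypothetical values.

```python
# Convert a pixel measurement to physical units from a known calibration.
# All numbers here are placeholders for illustration only.
import numpy as np

# Calibration: a scale bar of known physical length measured once in pixels,
# e.g., a 1 um bar spanning 2174 pixels on the montage.
um_per_px = 1.0 / 2174.0

# Measurement: distance between two points marked on the image, in pixels.
p1 = np.array([512.0, 1033.0])
p2 = np.array([498.0, 1875.0])
length_um = np.linalg.norm(p2 - p1) * um_per_px

print("Pixel size: {:.2f} nm/px; feature length: {:.3f} um".format(
    um_per_px * 1000.0, length_um))
```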

Many of us like the idea of installing a certain version of Adobe Photoshop and using that version throughout the course of analyzing a complete data set (which may take years for a multi-year study). So my question to the community: If you were about to begin a multi-year study, what application would you trust for simple manipulation and measurement of large images? Are you considering cutting the Photoshop cord, or are these concerns of mine nothing but chatter and spin? Thanks in advance! Karen [Zaruba] Feller

I would suggest looking into Gatan Microscopy Suite Software. I've used it, criticized it, and continued to use it for decades. It is not going away, and Gatan does offer a free download version that may provide what you need. I have no commercial or other interest in Gatan, but I do find it pretty good for calibrating images and then making measurements. You can find it under the resources/software tools on the gatan.com website. Good luck. Roseann Csencsits

As Roseann already mentioned, Gatan Microscopy Suite (also referred to as Digital Micrograph) is a very useful software package, with many plug-ins to help analyze images and diffraction patterns. The features and plug-ins are probably weighted to benefit the materials scientist more than other fields, and the image editing functionality is certainly basic compared to something like Photoshop.

ImageJ/Fiji is another image editing, processing, and calibration tool. It's very powerful and has a seemingly unlimited number of plug-ins to extend the functionality. The tools are very beneficial to those in the life sciences community, and you'll find a huge number of tutorials for things like performing automated particle counting, image segmentation to select ultrastructural features, and more. It can handle multi-dimensional data, large files, and pretty much anything you can throw at it. It is also very lightweight and has a portable version for easy installation on almost any computer/OS. I would certainly look into ImageJ. Good luck. Chris Winkler
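
As an example of the automated particle counting mentioned above, a minimal Fiji (Jython) sketch is shown below; the file path and threshold/size settings are hypothetical and would need adjusting for real data.

```python
# Fiji/ImageJ Jython sketch of automated particle counting.
from ij import IJ

imp = IJ.openImage("/data/example_micrograph.tif")   # hypothetical path
IJ.run(imp, "8-bit", "")                              # ensure a thresholdable image
IJ.setAutoThreshold(imp, "Default dark")
IJ.run(imp, "Convert to Mask", "")
# Count particles larger than 50 px^2 and report a summary table.
IJ.run(imp, "Analyze Particles...", "size=50-Infinity show=Outlines summarize")
imp.show()
```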

Thanks to all for the quick responses—and on a Friday! I'm happy to find that so many recommend GIMP or ImageJ/Fiji, both of which I have used a bit in the past and my colleague Ann Palmer has mentioned (thanks to Chris Winkler, Guenter Resch, James Ehrman, Michael Cammer, Mike Marko, and Michael Schoel). I am leaning toward GIMP because I know the interface can be set up to resemble that of Photoshop. This is an important factor for the multiple analysts in our group who have become accustomed to using Photoshop over the years. However, there seem to be more votes for Fiji. Can the Fiji/ImageJ interface be set up with default preferences to resemble Photoshop?

Aperio's ImageScope is an interesting suggestion (thanks, Paula Keene). I have experience using it with the accompanying online Spectrum image database in a histology lab; however, I never considered trying it for basic image handling. I just tried opening one of our finished montages in ImageScope, but I did not have control over the separate layers (it seemed to flatten the image to a single layer). If this is something that can be easily overcome, let me know. Most of my group has this software already downloaded, so having a bit of familiarity with it removes a barrier to acceptance.

Gatan's Digital Micrograph also seems to be popular (thanks to Roseann Csencsits and Chris Winkler). I didn't realize there might be a free version for basic image editing and analysis, so that is something to keep in mind. However, I think we will pursue Fiji or GIMP first unless there is a compelling reason otherwise.

Not many responses regarding stability of Photoshop itself, although a few seem to be comfortable continuing to use older versions. In our academic healthcare setting we cannot use unsupported software unless we go offline on standalone machines, so that was a concern I forgot to mention. I don't want to tie up more bandwidth, so if there are further comments, feel free to reply to me privately unless you feel the response would benefit the group. Have a good weekend all. Karen [Zaruba] Feller

Confocal Listserver

PNG vs TIFF Formats (Thread started June 4, 2019)

May I query the listserver for guidance on which file format is best for converting proprietary .czi and .lif micrographs? Clearly, I am looking for best quality. Thanks so much. Steven Samuelsson

PNG is lossless and highly compatible (for 8-bit-per-channel data, anyway; 16-bit-per-channel data can be hit or miss), so I often prefer it. Unfortunately, it is an extremely slow format, which makes it very painful for things like whole slide images (WSIs). In one script I wrote for rotating and converting WSIs to PNG, the PNG encoder alone took more CPU time than all the other processing combined. TIFF is a more complex format that supports a lot more types of compression than PNG but tends to have compatibility issues, especially for large files, because the format is less commonly used and often poorly tested. The current release of Matlab, for instance, cannot read its own TIFF files if the file size becomes too large. This makes working with TIFF much more annoying, but it is often a better idea than PNG, especially for larger files when PNG is too slow to be practical. Assuming you use lossless TIFF and set the bits per pixel correctly, there will be no difference in the output between PNG and TIFF, only in how compatible and efficient they are. Michael Giacomelli
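
For anyone who wants to check this equivalence themselves, a minimal round-trip test (assuming the Python numpy and imageio packages are available) might look like the sketch below; as noted above, 16-bit PNG support can depend on the reader/writer used.

```python
# Lossless round-trip check between PNG and TIFF with synthetic 16-bit data.
import numpy as np
import imageio.v3 as iio

img = np.random.randint(0, 2**16, (512, 512), dtype=np.uint16)  # synthetic frame
iio.imwrite("frame.png", img)   # PNG: lossless by design
iio.imwrite("frame.tif", img)   # TIFF: lossless here (no lossy compression requested)

assert np.array_equal(iio.imread("frame.png"), img)
assert np.array_equal(iio.imread("frame.tif"), img)
print("Both files round-trip without changing a single pixel value.")
```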

Compressed PNG is usually lossless, and TIFF is typically stored uncompressed, so the quality will be the same, with a smaller size for PNG. However, metadata is better preserved with TIFF (such as ImageJ metadata: spatial scale, acquisition parameters, etc.), so I'd advise storing TIFF rather than PNG. Christophe Leterrier

TIFF is not a single format but a vast variety of formats, depending on what is in the header and how many bits encode the information from each pixel. It is often not readable by software other than that used for its collection. As somebody already mentioned, it is often convertible if you have saved an image as a lossless file. Even then, the new software may misplace some bits, and you get image data that is not suitable for quantitative comparisons. These comments are a pretty perfunctory analysis, but we teach a whole course on this subject! Carol Heckman

Has anyone mentioned OME TIFF yet? Most of the microscope/acquisition info is retained in the header, which is useful for some microscopy-oriented software; and the result is also still a TIFF, so it is also usually readable by most non-microscopy programs (others can add their experience here). https://docs.openmicroscopy.org/ome-model/5.6.3/ome-tiff/ It is relatively easy to set up a macro in ImageJ/Fiji to do the conversion quickly. Jeff Reece
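
As an illustration of the kind of conversion macro mentioned above, here is a hedged Fiji (Jython) sketch using the Bio-Formats plugins that ship with Fiji; the folder paths are placeholders, and the exporter options may need adjusting for real data.

```python
# Fiji/ImageJ Jython sketch: open proprietary files with Bio-Formats and
# re-save each series as OME-TIFF. Paths are placeholders.
import os
from ij import IJ
from loci.plugins import BF

src_dir = "/data/raw"        # hypothetical folder of .czi/.lif files
dst_dir = "/data/ometiff"

for name in os.listdir(src_dir):
    if not name.lower().endswith((".czi", ".lif")):
        continue
    for imp in BF.openImagePlus(os.path.join(src_dir, name)):  # one ImagePlus per series
        out = os.path.join(dst_dir, imp.getTitle() + ".ome.tif")
        IJ.run(imp, "Bio-Formats Exporter", "save=[" + out + "]")
        imp.close()
```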

Hardware

Confocal Listserver

Spinning Disk Microscopes: Yokogawa Borealis (Thread started June 26, 2019)

We are having a problem with field flatness on our Yokogawa-W1-Borealis Unit using Andor 888s as the cameras. We measure a drop of intensity of about 60% from the middle to the sides after a rigid adapter was installed between the stand and the unit. Has anyone encountered similar problems? Thank you! Jens Bernhard Bosse (unknown email)

Jens, maybe you know this already, but you can't use a thick Chroma slide to measure uniformity on a spinning disk due to pinhole crosstalk. When I first got my Borealis update it looked terrible with a Chroma slide, but when I switched to a saturated dye solution it showed excellent uniformity. Or you can test it by imaging a thin slide (Molecular Probes prepared slide #1 for example) and moving a feature from the middle to the corner. The saturated dye solution is a neat idea (first heard about it from John Oreopoulos probably on this listserver—he is now with Andor). Just put concentrated FITC in a cover glass bottom dish, and it quenches everywhere except in a thin layer at the surface thereby creating a thin fluorescent layer that doesn't photobleach (diffusion takes away the photobleached molecules). If the non-uniformity is truly messed up, then I look forward to hearing how you solve it. Good luck! James Jonkman
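
To put a number on the falloff, a rough numpy sketch for quantifying the center-to-edge intensity ratio from a flat-field image (for example, one acquired from the thin dye layer described above) is shown below; the file name and patch sizes are placeholders.

```python
# Quantify field flatness from a flat-field image (2D grayscale assumed).
import numpy as np
import tifffile

img = tifffile.imread("dye_flatfield.tif").astype(float)
h, w = img.shape
center = img[h//2 - 25:h//2 + 25, w//2 - 25:w//2 + 25].mean()   # 50x50 px central patch
edges = np.concatenate([img[:50, :].ravel(), img[-50:, :].ravel(),
                        img[:, :50].ravel(), img[:, -50:].ravel()]).mean()
print("Edge intensity is {:.0f}% of center".format(100.0 * edges / center))
```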

Vignetting can occur when the camera sensor is moved further away from the design position (it's due to clipping of marginal rays). You may be able to look at the image on a sheet of paper at the camera sensor position with bright-field illumination to try to find what is clipping. To fix it you may need to change coupling lens strength and field size. Mark Cannell

If the rigid adapter changed the distance between the scan head and the microscope, a different field lens on the Yokogawa would likely have to be used. If the distance between the Yokogawa and microscope is the same, then it is most certainly an alignment issue. My guess is that you have a combination of both issues. Patrick Deguelle

(Commercial Response) I am intrigued as to why a rigid adapter was installed. Can you expand—possibly off-line if you wish? We will try to assist you in recovering the performance. Mark Browne

Credit must be given to Mike Model (also very active on this listserver) for conceiving the idea of using a concentrated dye solution as a microscope diagnostic sample. He first published about it almost two decades ago (https://www.ncbi.nlm.nih.gov/pubmed/11500847) and subsequently produced other publications showing how these samples can be used for other useful quantitative imaging purposes. A thick fluorescent plastic specimen is probably suitable for measuring field flatness or uniformity on a laser-scanning confocal microscope, but on a spinning disk confocal microscope it produces the worst possible (and unrealistic) case of complete and total pinhole cross-talk, which is superimposed on the illumination profile. As mentioned by James, the beauty of the concentrated dye solutions—besides their cheapness, ease of creation, and spectral variety—is that they only produce fluorescence from a diffraction-limited layer adjacent to the coverslip due to their optical density. An example of the difference can be seen here, where maximum intensity z-projections of a plastic slide and a dye slide, acquired using Andor's Borealis illumination technique on a spinning disk confocal system, are presented: https://drive.google.com/file/d/1vOP0TxLCDAJYVMlKe7dQSv7XxZLEEWfy/view?usp=sharing

In the example above, the illumination was purposefully reduced to be a sub-section of the entire camera field of view to show the pinhole cross-talk effect. Not only does such a specimen allow one to measure the true illumination profile on a spinning disk confocal microscope, but a z-scan will also reveal other useful metrics, such as the thickness of the optical section. They can also be used for flat-field/shading correction, and Kurt Thorn wrote a nice piece on this in his imaging blog:

http://nic.ucsf.edu/blog/2014/01/shading-correction-of-fluorescence-images/

http://nic.ucsf.edu/blog/2014/01/fluorescent-dyes-for-shading-correction/

http://nic.ucsf.edu/blog/2014/04/shading-correction-for-different-objectives-and-channels/

But it turns out that the story on achieving images free of non-uniformity artifacts is more complicated. Making sure the illumination light coming out of the confocal scan head is flat is just one aspect of the situation. How the scan head mates to the microscope and funnels the laser light to the objective lens, and the chromatic aberrations of the objective lens itself, play an equally important role, especially on systems using large field of view sCMOS cameras. John Oreopoulos
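
A minimal sketch of the flat-field (shading) correction described in the blog posts linked above, assuming a dark frame and a dye-slide reference image (file names are placeholders), might look like this:

```python
# Flat-field (shading) correction: divide by a unit-mean illumination profile.
import numpy as np
import tifffile

raw = tifffile.imread("sample.tif").astype(float)
dark = tifffile.imread("dark.tif").astype(float)          # camera offset/dark frame
flat = tifffile.imread("dye_flatfield.tif").astype(float)  # dye-slide reference image

flat_norm = (flat - dark) / np.mean(flat - dark)           # unit-mean illumination profile
corrected = (raw - dark) / flat_norm
tifffile.imwrite("sample_corrected.tif", corrected.astype(np.float32))
```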

Confocal Listserver

Spinning Disk Microscopes: Rolling Shutter sCMOS Cameras (Thread started June 6, 2019)

I would very much appreciate some feedback from those who run spinning disk systems with rolling shutter sCMOS cameras (like the Flash 4 or similar). Have you experienced any problems? I know, of course, that you can create imaging conditions where a rolling shutter can be a problem (can create artefacts), but I am wondering whether this is indeed an issue in real experimental situations. Thanks for your comments. Csucs Gabor

Although I haven't used these cameras on a spinning disk system, I have some experience with them on a light-sheet setup. The Flash4 models have a global reset mode for the exposure, which means all rows will expose simultaneously, and only the readout follows the rolling mode. This way it's possible to avoid the typical imaging artifacts affecting the standard rolling shutter sensors. One drawback, though, is that the exposure and readout for different rows can't be overlapped, which means the maximum frame rate is slightly lower. Bálint Balázs

I've been using the following sCMOS cameras with a Clarity Laser-free Spinning Disk Confocal unit without any problems whatsoever: Hamamatsu ORCA Flash 4 V2, Flash V3, and Fusion; Photometrics Prime 95B and Prime BSI; PCO Edge 5.5; and Andor Zyla 5.5. Perhaps the unique disk architecture, based on structured grids rather than pinholes, plays a role. Camera types other than sCMOS also work fine; right now I'm testing a Retiga R6 CCD. Mika Ruonala

We tried a Prime 95B on our Andor WD spinning disk during an equipment demo, and it seemed to work fine. Silas Leavesley

We're using a Prime 95B, and have also tried an Andor Zyla, on our X-light spinning disk without trouble. We've gone up to 200 fps. I can't say if there are issues if you try to go faster. Moritz Armbruster

Confocal Listserver

SSD-RAID (Solid State Drive Redundant Array of Independent Disks) for a New Wide-Field Microscopy System

(Thread started May 23, 2019)

We are buying a new wide-field fluorescence microscope system. The system includes a workstation with 64 GB of DDR4 RAM, a 1 GB graphics card, 512 GB of storage, and a 2 TB SSD-RAID. There is an option to increase the SSD-RAID to 4 TB for about $6,000 extra. The system will be used both for imaging fixed material and for live imaging with time-lapse, z-stack, tiling, and multi-fluorophore experiments. I was wondering if it is worthwhile to invest the extra money for this option or not. This will be the first time we run time-lapse experiments, so I don't know what image sizes to expect. What do you think? Thanks in advance for your comments and opinions! Mirco Martino

How fast are you acquiring data? Entry level SSDs are about $120/TB (~1 GB/s write), while high-performance SSDs are $250/TB (~2.5 GB/s write). Since SSDs are very inexpensive, that $6,000 is probably getting you more than just a couple hundred dollars’ worth of disks? Michael Giacomelli
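
To put numbers on "how fast," a back-of-the-envelope sketch for a full-frame sCMOS camera streaming at 100 fps is shown below; the frame size and rate are illustrative values, not a specification of any particular camera.

```python
# Back-of-the-envelope data-rate estimate for a streaming sCMOS camera.
width, height = 2048, 2048      # pixels per frame (illustrative)
bytes_per_pixel = 2             # 16-bit data
fps = 100

rate_mb_s = width * height * bytes_per_pixel * fps / 1e6
per_hour_tb = rate_mb_s * 3600 / 1e6
print("Sustained write: {:.0f} MB/s, {:.1f} TB per hour of streaming".format(
    rate_mb_s, per_hour_tb))
# ~839 MB/s and ~3 TB/hour: enough to saturate a single SATA SSD.
```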

Honestly, I see no reason to have so much memory and such a high-spec graphics card for an acquisition computer. Regarding the drives, I would keep the 2 TB SSD and add extra non-SSD space, then just automate data flow from the fast drive to the slow drive. An extra 2 TB for $6,000 is just too much and might be too small anyway. Nuno Moreno

(Commercial Response) $6,000 USD seems a very high price for 2 more TB. Just a few months ago, Samsung had 2 TB SSDs priced around $600 USD and 1 TB SSDs priced around $300 USD. My company offers computers as well for our COAX high-speed cameras, and it should not be very expensive to add 2 TB. Philippe Clemenceau

Multidimensional time-lapse files can grow in size very fast, and I think that is worth considering; I would do some math with the expected experiments. That said, I think the price delta for 2 TB is excessive. If you are not sure about your data production, I would delay the investment and see if it is something you can do yourself later for a few hundred dollars. Meanwhile, consider the infrastructure downstream: How fast can you get the data out of the machine? How are you going to store and process that data? Julio Mateos Langerak

While it can be hard to predict exactly what form new experiments will take, I assume that you are buying a system for live time lapse imaging because you or another user has a particular process in mind that you want to image. It helps to make some back-of-the-envelope type calculations—if the timescale of my process is x seconds, I usually want 2–3 frames every x seconds for minimalist capture, up to maybe 10 frames every x seconds if I want kinetics. That capture rate, stack size (given by the physical dimensions of your sample and the z diffraction limit), and the expected time you can keep a sample on the scope gives you a good maximum estimate of how many frames to expect, and therefore file size. I would echo what others have said about the SSD though—$6 K is an awful lot of money for that much drive, and there is not much benefit to having a RAID be solid state to begin with. If possible I would upgrade with a normal RAID for a fraction of the cost. Pat Robison
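
A minimal sketch of the back-of-the-envelope estimate Pat describes is shown below, with placeholder values for a hypothetical tiled, multi-channel z-stack time lapse.

```python
# Estimate total dataset size for a time-lapse experiment; all values are
# placeholders for a hypothetical 24 h run with one stack every 5 minutes.
width, height = 2048, 2048          # camera frame, pixels
bytes_per_pixel = 2                 # 16-bit
z_planes = 40
channels = 3
tile_positions = 4
timepoints = 24 * 60 // 5           # 288 timepoints

frame_bytes = width * height * bytes_per_pixel
total_gb = frame_bytes * z_planes * channels * tile_positions * timepoints / 1e9
print("Estimated dataset size: {:.0f} GB".format(total_gb))   # ~1160 GB here
```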

Where did you get the numbers? Regular SSDs I've seen are around 500 MB/s, both read and write. And SATA III speed is around 600 MB/s! That's not quite enough even for a single 10-tap CameraLink camera (800 MB/s). SSD RAID may seem overkill, but you don't want to sacrifice speed just because of poor PS specs. Of course RAID 0 array of regular hard drives should work too, but the probability of data loss (due to failure of one of the disks) rises with the number of disks. Zdenek Svindrych

I'd recommend splitting the scope computer from storage/analysis even independent of cost concerns. Connect an external RAID to the scope with high speed Ethernet and have a separate workstation with deconvolution/processing/analysis apps. That way there isn't a traffic jam at the scope computer when one user wants to process their stuff while another is acquiring images, AND you don't pay scope vendor markup for more computer/storage than you strictly need for acquisition. Timothy N. Feinstein

I would second the motion of separating image acquisition from analysis and long-term storage, especially if you will capture lots of multidimensional data sets. You want the acquisition computer to be reserved for acquisition and not sequestered for long times by people doing image analysis and rendering. Leoncio A. Vergara

Agreed. That's what we're doing—streaming to SSD array storage over a dedicated 10 GigE network. Christopher Yip

SATA SSDs have been obsolete for a while now. PCIe 3.0 NVMe devices have a bidirectional 4 GB/s transfer rate. For smaller acquisitions, any cheap device will be protocol-limited (>3 GB/s), but only for the first 50–100 GB, and they'll drop down to about 1 GB/s as the disk fills up. More expensive devices can sustain closer to 2 GB/s over the entire space. Very expensive devices will sustain 3 GB/s, but there is no point with RAID. Just buy another disk and you get more capacity and the same bandwidth for less. By the way, now that SSD storage is hitting $100/TB, and PCIe 4.0 devices are about to launch, affordable devices with 6–7 GB/s bandwidth in a single drive shouldn't be too far away. I think pretty soon there will be little point in RAID; parallelism will be internal to the “disk.” Michael Giacomelli

Thanks for the update, Mike! I missed the moment they grew to usable size (2 TB only, though). And RAID controllers are quite affordable, too. So a decent 8 TB fast storage (4 × 860 EVO 2 TB, PCIe ×16 controller) would come at $1,800. This would easily handle two cameras. I need to remember this in case we do some upgrades. Zdenek Svindrych

NVMe SSDs don't typically use a RAID controller; they're just PCIe devices, so you plug them in and let the OS (or Intel's PCIe chipset, if you use RST) handle RAID. Those 860 EVOs are SATA devices, so the 970s would be a better choice from Samsung. Beware that these are TLC disks and will slow down considerably (to ~1 GB/s) as you write more and more, but this is usually okay unless you require very high sustained performance. There are also much cheaper and only marginally slower options you could use as well: https://www.amazon.com/dp/B07R6V31K8?tag=georiot-us-default-20 Michael Giacomelli

A 1 TB Samsung 860 PRO SSD using SATA600 is around 2500 SEK (~$200). Assuming you will have a decent Intel architecture in the new PC (e.g., an HP Z4 workstation), you can use the Intel Rapid Storage Technology driver to put multiples in RAID 0. You would need to do this if you'd like to stream 100 fps from an sCMOS. In general, I agree with the other posts regarding the graphics card and amount of RAM. For the acquisition PC, I would go for stability and run the heavy computation (deconvolution, advanced 3D analysis, etc.) on an offline station. Good luck! Garner Oliver, BergmanLabora AB

NVMe M.2 or U.2 solid state drives are about the fastest things possible these days. If you hooked a pair up in RAID 0 configuration, you would have the fastest practical read/write speed available. This is great for working with large data sets, but silly for long-term storage. I'd suggest enough “high-speed” drive to handle working on your data sets (however big you believe they will be), and then, as others have suggested, use a 10-gig connection to a much larger and (relatively) slower storage system. Craig Brideau

I assume you inadvertently added an extra “0” to the price. Even so, I'd do a 10 TB or more HDD (not SSD). That is what I have on our wide-field live imaging system and lightsheet microscope, and it works well for semi-temporary storage, and, yes, I do need that much space for wide-field live imaging in a multi-user facility; 4 TB would be a bit inconvenient for me as I'd have to get data out more frequently. One 10 TB HDD should be about $300, and I guess you'd need two for backup/parallel RAID array. G. Esteban Fernandez

SSDs on an acquisition computer really only need to be large enough in capacity for about a day's worth of experiments (obviously more is convenient). The SSD's write speed also needs to be fast enough not to be a bottleneck in your workflow. As someone else pointed out, a 10-tap CameraLink frame grabber, like the BitFlow Karbon that is used with the Andor Zyla, will be bottlenecked by a SATA III SSD. A single NVMe 4-lane PCIe SSD will pretty much beat 4× SATA III drives in RAID 0. SATA III has gone the way of the dinosaur and should not be used in new systems unless you have capacity or budgeting concerns. SATA III supports about 500 MB/sec of writes; 4 drives in RAID 0 will support about 2000 MB/sec. If you need more throughput than the roughly 2000 MB/sec of writes that a single NVMe ×4 drive can handle by itself, you should look into RAID 0 arrays of NVMe drives. There are PCIe cards that fit in a 16-lane slot and can accommodate up to 4 NVMe drives (with 4 lanes each) in a RAID 0 configuration. Of course, you can look at things like sequential vs. random writes, real-world benchmarks, etc., so these are estimations. Rafael Jaimes
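
Finally, a quick way to sanity-check whether an acquisition drive can sustain the required write rate is a simple sequential-write test. The sketch below (the target path and sizes are placeholders) only gives a rough number; a realistic benchmark should write well past any SLC cache, as discussed above.

```python
# Minimal sequential-write sanity check for an acquisition drive.
import os, time

path = "D:/bench.tmp"                 # hypothetical target on the acquisition drive
block = b"\0" * (64 * 1024 * 1024)    # 64 MB blocks
total_blocks = 64                     # 4 GB total for a quick check

start = time.time()
with open(path, "wb", buffering=0) as f:
    for _ in range(total_blocks):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())              # make sure data actually reaches the disk
elapsed = time.time() - start
print("Sequential write: {:.0f} MB/s".format(64 * total_blocks / elapsed))
os.remove(path)
```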