Selected postings are from recent discussion threads included in the Microscopy (http://www.microscopy.com), Confocal Microscopy (https://lists.umn.edu/cgi-bin/wa?A0=confocalmicroscopy), and 3DEM (https://mail.ncmir.ucsd.edu/mailman/listinfo/3dem) listservers. Postings may have been edited to conserve space or for clarity. Complete listings and subscription information can be found at the above websites.
Core Facility Training During COVID-19
Confocal Listserver
As we continue to grapple with the long-term effects of COVID-19, those of us who operate shared instrumentation facilities have been finding ways to address the problem of how to provide the same high level of instruction while maintaining our commitment to social distancing and limited interpersonal contact. While there is no one-size-fits-all solution, I wanted to share the route we have chosen in the hopes that it will provide some benefit to others. Confocal microscopes are the bread and butter of our core, and we have created video-based training that, so far, has been doing a consistent job of replicating our traditional confocal training. The key for us has been creating high-quality video content that replicates every aspect of our in-person training session. The main issue I've noticed with much of the publicly available video-based training is that it is nowhere near as in-depth as we traditionally get during our one-on-one sessions. Creating detailed content that is specific to our instruments has allowed us to almost completely remove ourselves from the initial training. We've been requiring users to watch an introductory video as well as a full training video before arriving for their ‘in-person’ events. When they arrive, we give them a quick overview of the core and then have them go through the full-resolution video on a monitor right next to the instrument. This way they can follow along and start/stop the instruction to move at their own pace. A second session is then used to gauge how well the user is absorbing the material. It has been quite well received, and I've posted the full 4K videos for our first instrument (Zeiss LSM 880 with Airyscan) to YouTube here: https://www.youtube.com/channel/UCVOV2V_G20MvT79gfXBYO5A. We will be adding videos as we expand training for our other instruments, as well as more advanced techniques as our users request them. 
Of course, these videos are specific to our instruments, but I hope they can be useful to others looking to do similar things. Jason Kirk jason.kirk@bcm.edu
You have made some very high-quality videos. Thanks for sharing them with the community. In our core, we've been doing something very similar, and have also had good results. We do a few things slightly differently: 1) We ask users to watch the videos in advance, not during the training. Like yours, those videos are also very detailed and specific to our instruments. 2) For our more complicated systems (laser scanning confocals, LaVision light-sheet) we provide detailed written instructions that they can use to follow along during the video, as well as bring to the training. 3) We have an online quiz they must complete before the in-house training to make sure they watched the videos and got the most important points. 4) We check that their sample has some sort of reasonable signal before the in-house training. 5) The in-house training involves a brief interaction reinforcing the steps for setting up their sample. The rest is remote, via Zoom, with screen sharing and remote control. The “remote” staff member is in an office nearby and can walk over and help if the issue is not resolvable via Zoom. This training process has worked very well. With few exceptions, people come much better prepared and the training is smoother. I will probably keep the requirement to watch the training videos when this mess is over. Here are our video playlists: https://www.youtube.com/channel/UCTqLyJ-2uBIl0hHV9mD4pCQ/playlists. The production quality is significantly worse than what Jason's core has developed, but I think some of the playlists may be useful to others. In addition to instrument trainings, we have some general lectures and software tutorials. I also have a series of maintenance videos for a LaVision Ultra-II light-sheet, mainly as a reference for my own staff. Pablo Ariel pablo_ariel@med.unc.edu
Just looked at the Introduction video and it is superb! It touches on everything I tell people during training with great illustrations to drive home the points. Thank you, Jason. Esteban Fernandez g.esteban.fernandez@gmail.com
I second Esteban's congrats on an excellent confocal training video that covers all the major areas a new user should be familiar with. It is especially useful for people using Zeiss confocals. I wonder why you did not show the “Smart Setup” function and the fluorophore emission spectra database. In my experience, reusing previous experiment/image configurations and Smart Setup are the two main features that allow faster acquisition and better images for beginners. Arvydas Matiukas matiukaA@upstate.edu
I intentionally ignore the Smart Setup in ZEN for a few reasons. For sequential acquisition it forces the design of the beam path to use frame-wise track switching and then proceeds to design light paths that switch beam combiners and filters between tracks when it is completely unnecessary to do so. The Frame mode imposes an ~500 ms delay between tracks to allow for mechanical changes, which doesn't sound like a lot, but once that delay is multiplied across every channel, Z slice, tile, or other multi-dimensional acquisition parameter, it quickly adds up. Over the course of an average 3D acquisition, manually refining the light path can reduce the total acquisition time by up to 50% vs. Smart Setup. Smart Setup also programs extremely wide emission bands for each fluorophore which, if left unmodified, will most certainly lead to emission bleed. Another reason is that it completely ignores the Airyscan detector as a viable 4th confocal detector; on our setup we can do 4-color line switching between tracks for sequential imaging. While I do agree that the beam path configuration has a steep learning curve for new users, I feel that glossing over this aspect of confocal use would be doing them a disservice in the long run. I don't expect my users to manually design beam paths every time they use the instrument. Once they've worked out their ideal parameters, the Reuse and experiment configuration functionality is invaluable, but while they are learning I feel it is important that they understand the level of control they have over the design of their acquisition. Jason Kirk jason.kirk@bcm.edu
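[Ed. note] The back-of-the-envelope arithmetic behind the track-switching overhead described above can be sketched as follows. All acquisition parameters here are hypothetical illustrations, not measurements from any particular instrument:

```python
# Rough sketch of how per-track mechanical switching delays accumulate
# in a multi-dimensional confocal acquisition. Numbers are illustrative.

def switching_overhead_s(n_tracks, n_z, n_tiles, delay_per_switch_s):
    """Total time spent on mechanical track switches alone (seconds)."""
    # One mechanical switch per track at every Z slice of every tile.
    return n_tracks * n_z * n_tiles * delay_per_switch_s

# Example: 4 frame-wise tracks, 50-slice z-stack, 3x3 tile scan,
# ~0.5 s mechanical delay per switch (the ~500 ms figure above).
overhead = switching_overhead_s(n_tracks=4, n_z=50, n_tiles=9,
                                delay_per_switch_s=0.5)
print(f"{overhead:.0f} s (~{overhead / 60:.0f} min) lost to track switching")
```

With these hypothetical numbers the mechanical delays alone cost on the order of minutes, which is why line switching, where available, makes such a difference for large 3D acquisitions.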
I just want to say that my staff and I are 100% behind you on this one! We have many users who do a ton of tiled, multicolor z-stacks and the overall difference in time between the “Smart” Setup track and one that we have set up manually is unbelievable. Not to mention that, if I remember correctly, when Smart Setup is used to establish a single color track for GFP on one of our systems, it decides to avoid using the GaAsP detector. And I also agree entirely that education about what is going on in the track should be a critically important part of training users. Many of them go off to buy their own confocal systems later, and it is important that they learn how to get the best out of the system for themselves. Thank you for making and sharing such great videos, I am hearing very positive comments about them! Alison North northa@rockefeller.edu
These videos are fantastic. Thank you! The intro will now be essential viewing for all my new users. I'll encourage all users (especially experienced ones) to watch them all, as they should reverse any bad habits they've fallen into. To Smart Setup or not? I teach new users to use Smart Setup and then show them that switching to line (for live viewing) and frame (for fast acquisition) is much faster but requires changing a few settings. In the end, I find that easier than setting everything up from scratch. The user gets a helping hand from Smart Setup and then makes it smarter with a little fine tuning. I think on the 880 software, Smart Setup offers a compromise option that always switches by line? Single- and two-channel Smart Setup seems to always put the lower wavelength on Ch1 and the second wavelength on Ch2, so Smart Setup never uses the GaAsP detector for a single-channel image. I find that strange, so I let the users decide whether Smart Setup was smart enough. Dale Moulding d.moulding@ucl.ac.uk
Really good points. Smart Setup can take you to some fairly non-standard setups. However, I use Smart Setup as a learning experience for new users. It lets me delve into the finer points of how to use the microscope more effectively including filter specifications, detector sensitivity, beam path descriptions, etc. And now that the Leica LAS X software has something very similar it's not going away, and it can be a useful prop for an in-depth training session. It is certainly a lot more illuminating than taking someone through a training session with their take-home message being “I can always use Apply/Reuse” for future imaging sessions, which is, unfortunately, all too common when users have not been adequately trained. Darren Clements dkc25@cam.ac.uk
I think the discussion and sharing of training experiences are very interesting and useful. Confocal software and its tools have evolved a lot since 2000, and if it is not yet at the level of the self-driving car, it is getting quite user friendly and flexible with different levels of menus and tools. Therefore, I think it is useful to mention all available setup options/tools (along with their pros/cons) and let the user choose the one that works best. I always assure trainees that any workflow is acceptable (and likely close to optimal) as long as it provides the required quality images and measurements. Smart setup works better on our 780 microscope because it does not have Airyscan. In my experience it provides close to optimal optical setup for 2 (sometimes even 3) fluorophores with well-separated excitation and emission spectra and can be used as a quick template for further optimization. I agree that Smart Setup is not efficient with 4 dyes, multicolor z-stacks and other advanced experiments. How do you teach reduction of bleed-through by narrowing emission bandwidth? Do you use external spectral viewers/calculators, by comparing signals under different settings, by using the built-in spectra database, etc.? Arvydas Matiukas matiukaA@upstate.edu
@Dale – Smart Setup does have a “Smartest” option which forces a line track switch. But with more than 2 colors it does this funky thing where it puts the most disparate fluorophores on a single track. In my book this is a no-no unless they are very far apart, such as DAPI and AF647. Even then, I test controls to make sure there is no cross-talk. At one time I was teaching people to use Smart Setup to get a baseline track configuration and then modify the settings, which is a perfectly reasonable way to do things. The reason I shied away from it was that I ended up spending more time teaching most users how to modify almost all the settings manually anyway. For our video-based references I think the best play was to go into the weeds immediately.
@Darren – I think it is important to keep in mind why these simplified configuration design tools like Smart Setup exist, which is to sell you a microscope. For what they are, these tools do a pretty good job of programming rough settings for instruments that can vary wildly in their configuration. But sadly, they aren't designed to be a rock-solid configuration utility. They are built to help vendors make their very complex instruments look like they can be operated as one operates a microwave. This is a very slippery slope and over time it has contributed to the idea that maybe these things should be as easy to use as a microwave. After a swift punch in the gut, though, users realize these instruments are not microwaves and then you and I are left to work out how to help them overcome their disappointment.
@Arvydas – the emission bandwidth question is a common one, and our general recommendation, which I reinforce during training, is that the user should be familiar with the spectral traces of their fluorophores before training (using documentation from the fluorophore vendor). Depending on the profile, they can start with a collection bandwidth of 30–50 nm centered on the peak of the emission profile. If the profile is broad, they can increase the bandwidth as long as that range stays clear of neighboring fluorophores. Jason Kirk jason.kirk@bcm.edu
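[Ed. note] The band-narrowing rule of thumb above lends itself to a quick sanity check. The sketch below flags proposed collection bands that encroach on a neighboring fluorophore; the peak wavelengths are rough illustrative placeholders, not vendor spectra, and real experiments should always use the vendor's published curves:

```python
# Quick check that proposed emission collection bands do not overlap.
# Peak wavelengths (nm) are rough illustrative values only.

def band(peak_nm, width_nm):
    """Collection band of the given width centered on the emission peak."""
    return (peak_nm - width_nm / 2, peak_nm + width_nm / 2)

def overlaps(b1, b2):
    """True if two (low, high) wavelength bands overlap."""
    return b1[0] < b2[1] and b2[0] < b1[1]

fluors = {"DAPI": 461, "GFP": 509, "AF568": 603, "AF647": 668}
bands = {name: band(peak, 40) for name, peak in fluors.items()}  # 40 nm bands

names = list(bands)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if overlaps(bands[a], bands[b]):
            print(f"Warning: {a} and {b} collection bands overlap")
```

This only checks the collection windows against each other; actual bleed-through also depends on how far each fluorophore's emission tail extends into a neighbor's band, which is where control samples come in.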
The BINA Training and Education working group is gathering links to training resources just like these to be posted shortly as a resource on the BINA website. If you are putting together videos or other training materials and are willing to share them more broadly, we'll be delighted to include a link to your training webpage or YouTube channel. Please don't hesitate to send links our way! We are working on an easy submission portal, but for now, the easiest way to get us the info is via email. Christina Baer christina.baer@umassmed.edu
Standard Practice on Co-Authorship
Microscopy Listserver
I would appreciate input on standard practice concerning co-authorship on TEM data collection by core facility staff. Is it appropriate for the staff to ask for co-authorship if they collect data and the images are used in publication? The TEM user only pays the machine time, not the staff labor time. There is no data analysis involved. Or in this case, the user should just acknowledge the staff? Thanks very much. Yan Xin xin@magnet.fsu.edu
This is an interesting question where I feel that there is a lot of confusion. In an academic setting the question of funding, or who pays for what, isn't really relevant. The only question that needs to be answered is if a person has contributed “enough” to the science to be identified as a co-author. In our lab we try to follow the Vancouver recommendations: http://www.icmje.org/icmje-recommendations.pdf which gives the following guidelines: Who Is an Author? The ICMJE recommends that authorship be based on the following 4 criteria: 1) Substantial contributions to the conception or design of the work, or the acquisition, analysis, or interpretation of data for the work; AND 2) Drafting the work or revising it critically for important intellectual content; AND 3) Final approval of the version to be published; AND 4) Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The criteria are not intended for use as a means to disqualify colleagues from authorship who otherwise meet authorship criteria by denying them the opportunity to meet criteria 2 or 3. Therefore, all individuals who meet the first criterion should have the opportunity to participate in the review, drafting, and final approval of the manuscript. As you see, any person who has given substantial contribution to acquisition of data “for the work” should be recognized as a co-author. Any data that are included in the manuscript would have to fall under the heading of “for the work”, and a person who in actual fact was responsible for acquiring that data would necessarily have to be recognized as giving a “substantial contribution” to the acquisition. So my conclusion would be that yes, it is usually appropriate for staff to be included in the list of authors.
On a more general note, I would like to stress that in my opinion anything beyond the very simplest experiments in TEM require a level of scientific understanding that goes beyond a “technical” task. Decisions must be made on sample preparation and design, finding appropriate projections for diffraction and imaging, in STEM deciding on detectors and illumination conditions to use, in EELS deciding on which edges to select to find the information needed, in EDS understanding if absorption and fluorescence is an issue, and multiple other small and large decisions taken on the fly during the experiment. Many of these are things I expect my students to describe and discuss in their thesis, and if they do not, I would conclude that they have not done a proper scientific job. Personally I would have been hesitant to submit a paper without including the person actually doing this job; who is then ensuring that what we write is correct? Finally, I have never understood the impulse that some people have to limit the author list as much as possible. Of course, we shouldn't include people improperly, but neither should we look for ways of preventing people from becoming authors. That would very easily create a culture where people are guarded and careful about where they help out, and may even foster an atmosphere of mistrust and feelings of being underappreciated. Personally, I would much rather err on the side of being too generous than risk leaving out people who in reality should have been included. If people generally feel secure in the knowledge that their contributions will be acknowledged properly, they will be more inclined to help out, and may in time also more easily feel that they can forgo co-authorships for what they consider “trivial” tasks. This makes for a more healthy scientific environment. Øystein Prytz oystein.prytz@fys.uio.no
The subjective manner of choosing authorship recognition has always been too far open to interpretation and bias, in my opinion. In my lab manager job for a university decades ago, I collected all of the data, from live animal sedation through perfusion, fixation, staining, blocking and embedding, sectioning, UA-LC staining, imaging, darkroom work, writing the Methods section, and all plates and posters labeled and assembled (remember dry mount?). I didn't even receive acknowledgement, since “I was being paid to do that” (as a technician). When I worked in industry, we often had several co-authors, and our department head included himself as last author for no other reason than he had to read and approve the final result. This was company policy, and therefore his job. He has over 700 papers to his name. If this sounds like a gripe, it's because I've experienced both sides of the coin. One or two authors as one standard and seven to twelve as another standard. I think reasonable acknowledgment without bloat is the way to go. My opinion, for what it's worth. Gregg Sobocinski greggps@umich.edu
I think that your experience unfortunately is quite common, and in my opinion that way of doing it is inappropriate. A mention in the acknowledgements should be the very minimum for what you describe. Regarding who gets paid for what, you are entirely on point: we are all doing the job we are being paid for. So what? I should hasten to say that in my first email I was a little imprecise: fulfilling the first of the Vancouver criteria alone is not enough to qualify as a co-author, but it does qualify a person to be invited to contribute on the second (and third) criteria, and thereby further qualify as a co-author. On the point of department heads or lab leaders being co-authors as a matter of policy: in my opinion this is not an acceptable way of doing it. You might very well have a policy that the department head should get the opportunity to be involved as a co-author, but it is the actual involvement that qualifies him/her as a co-author, not the policy! That person should still fulfill the same authorship criteria as everyone else (for example, the Vancouver recommendations). Whether or not he/she actually fulfills those criteria has to be evaluated in each specific case. I don't have much experience with such policies, but I don't think they are common in Norway (or the Nordic countries in general). However, as a student I experienced them in some international collaborations. In those cases, I think it turned out to improve the final papers quite a bit. Perhaps I was lucky, but the policy in those cases secured input from very experienced scientists. If implemented properly, such policies could also be of benefit to junior researchers. Øystein Prytz oystein.prytz@fys.uio.no
You make a very good point. As a PhD student, my mentor even published my results without my name! And yes, he can do it. He is all-powerful and you as a student simply have no right, the university is not a democracy. The same person also never included his technical assistant in his publications, although he couldn't publish anything without her as she was a very skillful technician. Another research group next door included all personnel in all publications, independently of who did the real work (which may be questionable too, but it doesn't hurt anybody, does it?). I completely share your opinion: everyone involved in the paper should be mentioned. But scientists are human beings like others, with their ego problems and the complexity of human relationships as we know them. Stephane Nizets nizets2@yahoo.com
I took a workshop earlier this year on exactly this topic. I suggest checking out the flowcharts and discussion documents on the COPE website (Committee on Publication Ethics): https://publicationethics.org/. This was very helpful for working through who might only warrant an acknowledgement for a minor contribution and who deserves co-authorship. It also includes rather harsh words about ghost, guest, and courtesy “authors”. So, in the original poster's case, the question of whether that technician deserves a nod likely centers on whether they did the bulk of the benchwork or just a little bit of it. As a point of amusement - I am currently working on a review article summarizing how my lab's approach to one particular topic has evolved since our founding in 1971, and I have 49 years’ worth of possible people to credit! In many of those cases, posthumously. That will be a fun one to sort out. Cindy Connelly Ryan crya@loc.gov
When journal articles started (I am thinking late 1600's) and for a while after, authorship was easy. The author (usually just one) wrote the paper. Somewhere along the line the concept morphed from the writer to the doer, usually more than one. Complexity ensued. The guidelines that various societies have written up are helpful, if sometimes overly idealistic. For example, I think it is unrealistic to demand that person in lab X be “responsible” for results from lab Y in a paper that combines results from both. For me, I use the golden rule: If I contributed to the project in such and such a way, and I was *not* given authorship, would I feel cheated? Admittedly subjective but it works pretty well. Write on! Tobias Baskin baskin@bio.umass.edu
Thank you, Cindy Connelly Ryan, for that reference. It took me a bit to find the correct document, but it was appropriately succinct, describing my situations as “ghost author” and “gift author”. Since department heads were specifically mentioned in the article, I'm guessing that my gift author experience was not unusual. Here is the shortcut, for those who wish to go right to it: https://publicationethics.org/files/2003pdf12_0.pdf. Gregg Sobocinski greggps@umich.edu
Each lab and university obviously sets its own policies, but I do have some recommendations. All facilities should require acknowledgments for use of their equipment, whether the user pays for just instrument time or full service. This is an easy argument to make. If users make use of your facility, it means they need it, will want it in the future, and will want it to be excellent and well equipped. Acknowledgements make it easy to track the publication impact of usage, and in doing so they invest in the future of the facility. So, the argument you make to your users is: “Do you want this facility to be here next year? Do you want us to be able to invest in equipment and staff that make your research easier? Do you want to avoid having to outsource experiments at great inconvenience and cost?” An acknowledgement is not a large price to pay as a future investment! The other argument you can make is that users do not pay the full cost and that their work is subsidized by the institution, and as such they need to acknowledge this. This is a negative argument and not as useful as the one above. I have long been a fan of billing users the FULL costs of their microscope and staff time, and then adding a subsidy line to the bill so it explicitly states who is actually paying. Co-authorship is a grey area. As an academic (and a facility scientist) I am able to require co-authorship after a certain amount of work, usually if I spend more than a day on a user's research (not training); this is, as far as I am concerned, non-negotiable. But I appreciate that most facility staff are not in my position. As soon as staff have an intellectual investment in a project, beyond their day-to-day duties, I feel this should be a co-authorship. Doing something new and developing tasks that take more time than the routine should be acknowledged with co-authorship. For small extra work a personalized mention in the acknowledgements is polite, but often this gets forgotten. 
In your situation I think the staff, if they are doing routine microscopy that is part of their job, and not any special analysis, should not expect co-authorship. A personal acknowledgement would be polite, but a facility acknowledgement should be expected. The time factor is also an issue. If your staff are going above and beyond in the amount of time they spend on one project, and in doing so offering intellectual input, this is a different matter. Matthew Weyland matthew.weyland@monash.edu
Using Methanol in Cryo-Electron Microscopy Sample Preparation
Microscopy Listserver
Has anyone used methanol in Cryo-EM instead of water? I wonder because vitrified water has a density of 1000 kg/m3 and vitrified methanol has an approximate density of 800 kg/m3. So, if a protein sample is placed in methanol and vitrified, will it have better contrast due to the lower density of the surrounding material? Of course, there are some issues with the use of methanol. I am just curious. Sayit Ugurlu sayitugurlu@gmail.com
I previously worked with methanol - it's possible. We were looking at block copolymer micelles in solution in methanol. Two points made it tricky: 1) The methanol evaporates very quickly compared to water sample volumes and blotting times need to be experimented with for specific samples, and 2) the boiling point of methanol is obviously much lower than water so you have to work very fast/low dose in the TEM so that your sample doesn't disappear in front of your eyes! A reference: https://pubs.acs.org/doi/abs/10.1021/ma2020936. Thomas Philip Smart tsmart@eastman.com
The important quantity for phase contrast TEM is the electrostatic potential, not the density. You'd have to check how much the mean inner potential of water and methanol differ to see how much contrast you gain. Then there's also the biological question. Is the protein functional in methanol and does it keep its native conformation? It's certainly an interesting idea. Philip Köck koeck@kth.se
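[Ed. note] The point above can be made concrete: for a weak phase object, contrast scales with the difference in mean inner potential between particle and medium (times thickness), not with mass density. The toy comparison below uses rough, illustrative potential values; as the post notes, the actual value for vitrified methanol would need to be looked up or measured:

```python
# Toy weak-phase-object comparison: contrast ~ (V_particle - V_medium) * t.
# Mean inner potentials (volts) below are rough illustrative numbers only;
# the value for vitrified methanol is the unknown quantity to check.

V_PROTEIN = 8.0   # illustrative mean inner potential of protein, V
V_ICE = 3.5       # illustrative mean inner potential of vitreous ice, V

def relative_contrast(v_particle, v_medium):
    """Potential difference driving phase contrast (arbitrary units)."""
    return v_particle - v_medium

baseline = relative_contrast(V_PROTEIN, V_ICE)
print(f"Protein-in-ice potential difference: {baseline:.1f} V")

# A candidate medium only improves contrast if its mean inner potential
# is lower than that of vitreous ice:
for v_medium in (2.5, 3.5, 4.5):  # hypothetical values for the new medium
    gain = relative_contrast(V_PROTEIN, v_medium) / baseline
    print(f"V_medium = {v_medium:.1f} V -> contrast ratio {gain:.2f}x")
```

The sketch simply shows that a lower-density medium helps only insofar as it also has a lower mean inner potential, which is the quantity to verify before the idea pays off.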
It is an interesting question but as always with science, the first question to ask is does it make any sense? What I mean is, the main reason cryo-EM was developed is to keep the system as native as possible and thus avoid the artifacts associated with preparing a sample in artificial conditions. Since most biological systems swim in some sort of watery soup, water seems to be the optimal medium for cryo-EM, not because the properties of water are optimal for observation in TEM but because it is the most meaningful medium. Stephane Nizets nizets2@yahoo.com
Thank you for your input. Does it make sense is a very good question, but I think it does. After all, we use cryo-EM because we cannot observe many systems in their native state. Adding an extra step (methanol) may bring us further away from native conditions but may also enable us to observe features that may not be visible in water. There is always the caveat that artifacts could be caused by the methanol. Stefano Rubino stefano@soquelec.com
Glow Discharge for TEM Grids
Microscopy Listserver
We have a Quorum sputter coater (Q150T ES - turbopumped) with the glow discharge attachment. We would like to use it for making TEM grids hydrophilic and would appreciate any suggestions or comments on how to determine the best parameters for this process. Should we use air or argon as the processing gas? Currently, both gas inlets are hooked to argon, and I am wondering if using air may cause any differences. What is an optimal height of the stage or the distance from the bottom surface of the glow discharge insert to the surface of the stage/TEM grids? In the default procedure, the time is set at 25 seconds and the current at 25 mA. Does anyone have comments on how these parameters may affect the process? Thank you in advance and best regards. Stefano Rubino stefano@soquelec.com
We are using an old Polaron sputter coater for glow-discharge activation of carbon and carbon/formvar coated grids. We modified it for glow discharge over 30 years ago, and since then we have used it with the following settings: electrode distance = 40 mm; current = 10 mA (DC); time = 30 seconds; pressure = 0.1 torr (reduced air atmosphere). Grids are placed exactly at the border of the Crookes dark space on a support made of a mica sheet. I hope this helps in setting up your Q150T ES. Oldřich Benada benada@biomed.cas.cz
Quorum support provided me with the following parameters for glow discharge using the glow discharge insert (10262) on a Q150T ES. I mainly use this function to prepare grids for negative staining. Distance: 35 mm below the insert (optimal is 30–35 mm; a shorter distance takes less time); current: 20 mA (25–30 mA max); duration: 20 seconds max; gas: room air (I use a filter on the air inlet). If you overcook it, for example with excess time or current, or place the grids too close to the insert, you can burn the carbon, resulting in cracks. In addition, if you use argon it will etch the coating. Wai Pang Chan wpchan@uw.edu
I don't process grids, but I can tell you what is going on with your glow discharge. To make the surface hydrophilic, you are cleaning the hydrocarbons from the surface. You need oxygen to do that well, which you can get from air. Argon can give some cleaning, but it is not the best processing gas. If you were to use pure oxygen, your pumps would have to be rated for oxygen. Air, or a mixture of argon with 25% oxygen, is safe for any mechanical pump that uses a hydrocarbon-based oil. What cleans the grids are activated oxygen species that cause the hydrocarbons to break down into H2O and CO2, which are pumped away. Plasma cleaners do the same thing. The difference between plasma cleaning and glow discharge is the energy of the species that strike the surface. With an RF plasma cleaner, the energy is relatively low. In DC glow discharge, the energy is higher and can warm the sample. With respect to your distance, I don't know that system. However, as long as the grids are in the plasma, they won't take long to clean. Plasma cleaning usually takes 5 minutes or less, and I would suspect that glow discharge takes about the same amount of time or less. If you want to experiment with your system, take a piece of material such as copper tape, a silicon wafer, or perhaps even a glass slide (I don't know whether that will charge in a DC plasma). Put a drop of distilled water on the surface and you will see that it beads up: because of the hydrocarbons on the surface, the surface is hydrophobic and has a high contact angle. After processing, a water droplet will wet out and spread across the surface as it becomes hydrophilic and the contact angle drops. To find how long processing takes for a given distance in your coater, use that test surface and see how long it takes to produce a hydrophilic surface. So, for your process parameters, just try them on that test surface. If it doesn't work, process multiple times until it does, and then go from there. 
Scott Walck s.walck@comcast.net
Oxygen in the plasma interacts with the grid surface to make it hydrophilic, so make sure there is enough oxygen in the chamber during discharge. As to your other two questions, I would start with the default and check the grids; if a 3μl droplet spreads evenly and quickly when applied to the grid, it is sufficiently hydrophilic. If not, increase the time until success. Bill Tivol wtivol@sbcglobal.net
Uranyl Acetate
Microscopy Listserver
We are having a hard time finding uranyl acetate up here in Canada. Does anyone: 1) Know a Canadian supplier? 2) Know if this is a Canada-only problem or are other countries also experiencing this? 3) Know what the story is behind this issue (super curious)? Thanks! Andrew Sydor andrew.sydor@sickkids.ca
Regulations on uranyl acetate have become very tough. It is not that there is a shortage of it; we make it and have it in stock. The question becomes: does your facility allow depleted uranium on site? We have an ample supply to ship, so please let us know how we can help get you what you need. We also offer uranyl replacements such as UranyLess and UAR, which may be of assistance to you and others. We are happy to discuss shipping UA or any of the replacements to you. Thank you, and we look forward to hearing from you. Stacie Kirsch sgkcck@aol.com
The lab I was in at Harvard had heard that it was becoming more difficult to get UA in Canada. We don't know of any supplier in Canada as we purchase nearly everything from Electron Microscopy Sciences. Canada is likely going the route that Japan took in severely limiting the use of UA. There is a group at the University of Toronto you might reach out to that uses UA, at least for HPF-FS (ref - Mulcahy et al., Frontiers in Neural Circuits 2018). You might want to consider alternatives such as gadolinium acetate and samarium acetate (refs: Nakakoshi et al., JEM 2011; Odriozola et al., bioRxiv 2017) which have shown promise particularly as en bloc stains. You could also try UranyLess (from EMS - not sure about a Canadian supplier for this), but I have found it useful only for post-staining on grids, not for en bloc work (if that is a concern). Brandon Drescher brandon.drescher@gmail.com
You can get UranyLess from Edge Scientific in Canada: https://edgescientific.com/product/uranyless-uranium-free-aqueous-staining-solution-for-leica-em-stain-200ml-bottle/. In addition you can also get this from our website in the United States: https://ravescientific.com/rave-shop/category/uranyless-staining. Jeff Streger jeff@ravescientific.com
Disappearing Alexa Fluor 647
Confocal Listserver
I wonder if anyone can help me solve a puzzle. I'm able to see Alexa 647 on my Zeiss LSM710 in confocal mode, but when I switch to 2p excitation I can no longer detect it, though I know I'm exciting it. Let me explain further. As a test slide I have fixed cultured cells labeled with DAPI and AF647-phalloidin (coverslipped, Prolong Gold). The objective lens is a 20x/1.0 NA water immersion (expensive, intravital 2p type) and the laser is a Coherent Discovery (680–1300nm tunable, and 1040nm fixed beam). Using 1p excitation, DAPI and AF647 work great. I start with 1p excitation (405nm and 633nm respectively), set up 2 sequential tracks, 1AU pinhole, and adjust the internal PMTs to get nice images. Both channels look very nice, with AF647 requiring 8% laser power (at 633nm) and a modest gain setting of around 600V. (Ok, those are meaningless numbers because the vendors don't bother to calibrate anything in real-world units, but for someone with a similar instrument it might be useful - sorry, pet peeve of mine!). There is no appreciable photobleaching with these settings. Using 2p excitation with DAPI (PMTs in the scan head, pinhole wide open), all works great. Now I switch to 2p mode, first just to the exact same detectors (forget about NDDs for now). I start with the DAPI channel by turning off the 405 laser, switching on the 2p laser at 780nm, and opening the pinhole wide. I keep the detector gain exactly as before, and now I slowly start increasing the 2p laser power until I achieve the exact same image as the previous 1p image. Easy! Again, no appreciable photobleaching with these settings for DAPI. This makes sense: if I keep the detector the same and open the pinhole wide, a 2p excitation intensity that gives me more-or-less the exact same result as the 1p excitation should exist. However, using 2p excitation with AF647 (PMTs in the scan head, pinhole wide open), no emission is detected! Now I do the exact same thing with AF647.
I turn off the 633nm laser, turn on the 2p laser at 1150nm and open the pinhole wide. I keep the detector gain exactly as it was for 1p excitation of AF647. Now I slowly start increasing the 2p laser power… but I see nothing! I can crank it up to 20% power or higher, being mindful of the fact that when you double the excitation, you get 4x the signal for 2p. Now here's the real conundrum: if I pull it down to a more modest 10% power (which is still super high for our Discovery laser), then move the stage to a fresh field of view, I see signal for just a single frame, and then it instantly and completely photobleaches. In other words, I'm exciting tons of fluorescence, but just not detecting it. In theory, I should be able to do exactly as I did for DAPI: if I leave the detector gain as I had it for 1p excitation, I should be able to change to 2p excitation and increase the laser power slowly until I get the exact same image as I had for 1p. But something seems to be blocking the emission when I have the 2p laser engaged. Has anybody else seen this problem? Does Zeiss slip in an IR blocking filter when the 2p laser is scanning? Our local application specialist is very knowledgeable but is not aware of any such thing.
Other things that I've considered/tried: The 2p laser is coupled in with a “MBS 760+” dichroic, which should reflect wavelengths above 760nm and pass everything below it. In fact, I already had this in place for the 1p images. It has virtually no effect on the 1p images so for consistency I just had it in from the beginning. My other option is an MBS 690+ beamsplitter, but the 690 beamsplitter throws away some of the AF647 (as observed during 1p excitation) so I want to avoid it. I didn't try adjusting the GDD compensation, but again I know that I'm getting strong excitation, just not collecting it so this shouldn't affect anything. Maybe the focus is off? But I tried being very careful with the laser power (cranking up the LUT to catch any hint of signal) and adjusting the focus, but there is no better focal plane. In fact, when I move the stage to an adjacent position, I can tell briefly that we're perfectly in focus, before it photobleaches. I tried the NDD detector. We recently upgraded to a 4NDD module with 2 PMTs and 2 GaAsPs, which has a BP 645–710nm IR+ cube on a PMT (first element in the series). This was my first time trying the NDDs with AF647. We don't see anything at all. Not even a hint of light. There was a 690nm LP filter at the start of the NDD unit which I removed but it didn't help. So, this is even more crazy. Could there be a problem with both detectors? DAPI on the NDD looks fantastic. But why is the AF647 disappearing!? Thanks for your suggestions! James Jonkman james@aomf.ca
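[Ed. note] James's sanity check relies on the quadratic intensity dependence of two-photon excitation: doubling the excitation power quadruples the signal, and halving it cuts the signal to a quarter. A minimal sketch of that scaling, with a purely hypothetical proportionality constant lumping together dye cross-section, pulse width, and repetition rate:

```python
def two_photon_signal(power, k=1.0):
    """Two-photon excited fluorescence scales with the square of the
    instantaneous excitation power; k is a hypothetical constant lumping
    together dye cross-section, pulse width, repetition rate, etc."""
    return k * power ** 2

# Doubling the power quadruples the signal; halving it quarters the signal.
print(two_photon_signal(2.0) / two_photon_signal(1.0))  # -> 4.0
print(two_photon_signal(0.5) / two_photon_signal(1.0))  # -> 0.25
```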
I would try to detect second harmonic generation (which can be MUCH brighter than 2p excitation). To do this, get some solid 2-nitro-4-methyl aniline and mount it on a slide (in my case I just mixed it with some Permount to make a permanent SHG reference slide). Mount the slide under the microscope and see if you can see amber light (575 nm) reflecting off the slide. SHG is bright, so you should see the light even with a 10x objective and the room lights on. This way you can be certain that you are getting femtosecond pulses at 1150 nm onto the sample. I would then see if I can image the sample with my detector. If you can't, the issue may be in the detector path. If you can, and you still can't see Alexa 647, you may have a filter blocking the emission range of Alexa 647. I do know that non-descanned detectors require high-OD IR-blocking (shortpass) filters to keep the strong 2p laser reflections from reaching the detectors. While the detectors don't normally detect nIR light, they will saturate if exposed to the several milliwatts of IR light that can reflect off the sample back into the detector. Additionally, laser reflections in the 680nm - 850nm range can be especially damaging, as this is in the detection range of most visible-wavelength photocathodes, so it would not surprise me at all if these wavelengths are being blocked with a high-OD filter. The fact that you can image DAPI at 720nm without seeing a strong laser reflection suggests such a filter. Benjamin Smith benjamin.smith@berkeley.edu
Have you tried exciting the AF647 at 1200 nm? Marc Reinig mreinig2@gmail.com
Have you tried exciting AF647 at around 800 nm? Michael Cammer michael.cammer@med.nyu.edu
As Marc and Michael suggest, try different wavelengths. It may be that 1150nm is doing something strange. In my own experience water and tissue absorption gets complicated between 900 to 1300nm, so you may be hitting some sort of absorption sweet spot and are thermally shifting/cooking/bleaching your sample. The fact you see it for a single frame and then it vanishes is telling. Craig Brideau craig.brideau@gmail.com
We had good results with 2p excitation of AF647, ATTO647N and STAR 635P at 840–850 nm. Alexander Jurkevich jurkevica@missouri.edu
Is it possible the high laser power during 2p excitation puts the dye into a long-lived dark state? Alexa Fluor 647 can be a tricky dye, which is apparent when trying to get good STED images (in 2 colors). AF 647 is bright and photostable and has a great STED efficiency, meaning you can get great images even at very low STED powers. But couple AF 647 with a second dye that requires a higher STED power (so most other red dyes), and the AF 647 signal disappears almost instantaneously. It is impossible to record in a line-interleaved mode, as you can literally watch the far-red signal disappear with the scanning laser. This instead requires two sequential tracks (AF 647 first). What happens is that the 775 nm STED laser bumps the AF 647 into a long-lived dark state, from which it recovers after 30 to 60 minutes. This blinking, photoswitchable dark state is a reason why AF 647 is frequently used in dSTORM imaging modalities. (See here: https://chemistry-europe.onlinelibrary.wiley.com/doi/10.1002/chem.201904117). I've never tried imaging AF 647 with 2p excitation, but the laser intensities of a 2p-excitation laser and a STED laser should be fairly comparable (with the 2p pulses being much tighter, so I would expect a much more severe effect). How about first imaging with 1p, then trying to “bleach” away a square pattern using 2p, then waiting half an hour or longer before recording another 1p image? I have no idea what happens at those longer wavelengths, but this should be an easy test. Nicolai Urban nicolai.urban@mpfi.org
We have also experienced difficulty when imaging AF647 with two-photon excitation. In cases of bright labeling we can image it only once, and then it has been bleached. Furthermore, for sequential imaging that includes shorter excitation wavelengths such as for DAPI, we have to start with the long wavelength first. It might indeed be that AF647 is put in a dark state; however, I have not observed recovery after multiphoton imaging of AF647. In your case, exciting with 780 nm first might have excited AF647 as well. Many red dyes absorb energy efficiently at much shorter wavelengths. I remember an article by Drobizhev et al. (Nature Methods 2011) stating that at these short wavelengths, molecules are excited to higher than the first electronic state. The energy release from these higher states favors chemical alteration of the molecules, causing rapid bleaching. Reversing the sequence and starting with AF647 excitation might give you an image, if the detectors are sensitive enough and the labeling density is sufficient. Gert-Jan Bakker gert-jan.bakker@radboudumc.nl
I think the answer to your question is simple, and you have already described it: the fluorophores disappear after an instant. Nicolai might have explained part of the problem with the dark states, but there is an important difference between 1p and 2p excitation: the pulsing. The peak power is massive, which is fine in water. Wonderful intravital imaging can be done without major damage to the animal because in water the heat dissipates instantly. But you are imaging a fixed sample mounted in Prolong Gold sandwiched between two bits of glass. The heat can't go anywhere, and if you had imaged the DAPI for a bit longer, you would also have observed nice little holes appearing in your Prolong as if the nuclei had exploded (which is exactly what they do). AF647 is probably less heat stable than DAPI, so it is instantly destroyed. Try the same in water and all will be fine. Martin Spitaler spitaler@biochem.mpg.de
Thank you all for the excellent suggestions! I've followed up on a number of them so keep reading below if you're interested. Essentially AF647 doesn't turn out to be a very good far-red fluorophore in our hands, even with a better choice of wavelength and with better sample prep. Does anybody have a suggestion for 4-color simultaneous 2p imaging with just 2 laser lines: fixed 1040nm and tunable 680–1300nm (preferably without having to re-tune the laser in between channels)? The user has a transgenic mouse expressing TdTomato, but wants to perfuse antibody labels for 2 other proteins and add Hoechst or DAPI for good measure (that one is kind of optional), then excise an organ, slice, and image it live (eventually moving to a window chamber model). I had suggested AF488 and AF647 to go along with the TdTomato and DAPI. What do you suggest as a replacement for my far-red choice? And is it possible to excite 4 well-chosen fluorophores without having to change the tunable laser? James Jonkman james@aomf.ca
Here's what I found out over the last couple of days: It's not a microscope problem. I can confirm that there are no extra filters swinging into place on my Zeiss LSM710 when I use the 2p laser. To test this, I set up a regular confocal scan (633 Ex, 640–700 Em) and got a great AF647 image. While scanning (live preview mode), I toggled on the 2p laser at 0% power: there was no change to the image. As I continued to scan, I slowly turned up the 2p excitation power, but I never saw an increase in the AF647 emission: instead, at some point it started to disappear.
Choice of wavelength: thanks to several of you (Gert-Jan, Marco, Craig, Michael, Dan Stevens from Zeiss, and probably others I missed!) for suggesting I try different wavelengths. We had originally tried 800nm (Muetze, Biophysical Journal 2012) and 1150nm (Schuh, Kidney International 2016). The Spectra Database hosted at the University of Arizona shows a peak at 1240nm which I hadn't noticed before. Switching to 1240nm gave me my best results. I still can't get the image quite as bright as the 1p image, but I can get a half-decent image without completely photobleaching in a single scan. Repeated scanning shows almost no photobleaching in 1p mode, but still rapid photobleaching in 2p mode, as also described by Gert-Jan. I've been looking up other references: Kobat et al. (Optics Express 2009) show in Figure 7 that Alexa 680 (best excited at 1280nm) gives much brighter signal than AF647 or Cy5. Ueki et al. (Nature Protocols 2020) tried exciting AF594 and AF647, both using 910nm excitation; while AF594 was moderately bright, AF647 was “not detected”. Unfortunately, the overlap between TdTomato and AF594 is probably too severe.
Thermal Damage: Thanks very much to Martin for pointing out the mistake in my sample prep! Instead of troubleshooting on the user's fresh excised tissues, I just grabbed a slide we had sitting around with cultured cells and DAPI + AF647-phalloidin. I guess I've just never tried 2p imaging before in such a thin sample, but I can also confirm that Molecular Probes Prepared Slide #1 quickly gives severe thermal damage. My colleague Feng made a fresh AF647-tubulin sample with a bit of a spacer and plenty of PBS and there is no immediate thermal damage with 1240nm excitation. Probably not nuclear fusion, but close! :)
Dark state: Nicolai, I loved your suggestion that the AF647 was going into a dark state. Would it recover??!! I tried it today using the better sample prep to avoid thermal damage (see previous paragraph), but alas, after 30min the area is as bleached out as before. Nevertheless, I'm certain that you're on the right track: there is definitely some kind of photophysics happening here where a good proportion of the photons are being absorbed without leading to emission. Maybe, just as blinking fluorophores are now used for STORM/PALM, we might someday think of this as a feature rather than a drawback! But it looks like AF647 just isn't a very good far-red fluorophore for 2p excitation.
SHG, laser: Benjamin, great idea about making an SHG sample—I didn't know you could make one that was visible by eye! I have ruled out any additional optics sneaking into the emission path (see above), but I haven't totally ruled out the possibility that my laser isn't behaving at these higher wavelengths. It is definitely giving me femtosecond pulses from 700 to 800nm to give great DAPI images. The strong absorption I saw at 1150nm suggests a 2p behavior as these thin samples should otherwise be transparent from 1000 to 1300nm. James Jonkman james.jonkman@uhnresearch.ca
I would not image the nucleus if it is not needed. Instead, I would collect second or third harmonics to get the tissue context for the fluorescent signal. If possible, I would acquire SHG/THG together with a channel where the fluorescence signal looks very different from the SHG/THG pattern. This way you can collect 2 pieces of information in the same image, but they can easily be separated during segmentation. This might make it easier to find a combination of excitation wavelengths for 3 relevant signals + the tissue context. Sylvie le Guyader sylvie.le.guyader@ki.se
Thanks for the nice description. Atto647N and Abberior STAR 635P are excellent STED dyes that are depleted with a pulsed 775 nm laser. Although the pulses are quite different for 2p excitation and depletion, this might be a start. You could possibly get DAPI with the same wavelength via 3p excitation, or use SiR-DNA or SiR700-DNA for a counterstain. According to https://www.sciencedirect.com/science/article/pii/S000634951200063X (Figure 2), Alexa 633 should be a lot brighter than A647 and excitable with 800 nm (probably via the S2 state), as is A488. Steffen Dietzel lists@dietzellab.de
I have the same laser setup and have done 4 colors as follows: scan line-sequentially with 1040 and ~920nm. For the 1040, capture a narrow 520/5 or 520/10 BP for SHG and your choice of red filter for tdTomato. For 920, you can capture 2-photon images of EGFP or AF488 (I think; I was using EGFP) with a ~525/50 filter, and you can also grab 3p excitation of Hoechst with a 460/80 filter. I think AF488 and Hoechst can both be excited at about 780nm too. The problem with this setup is that if you use a DNA counterstain it will bleed through into the green and may even be picked up with the 1040 excitation, so this usually requires some unmixing. I agree with an earlier post that leaving it out is usually better, and the SHG signal can give enough cellular resolution of the tissue. Glyn Nelson glyn.nelson@ncl.ac.uk
These papers from 2016 and 2011 from Steffen Dietzel's group might be informative and give another approach to the combination you're trying to achieve. Rehberg, Markus, et al., https://doi.org/10.1371/journal.pone.0028237 and Dietzel, Steffen, et al., https://onlinelibrary.wiley.com/doi/abs/10.1002/smll.201503766. The authors confirmed to me that DRAQ5 was excited with 1275nm for the 2-p visualization of the nuclei alongside the THG in the Small 2016 paper. Roy Edward roy@biostatus.com
Light Sheet Test Target
Confocal Listserver
Are there commercially available imaging targets that can be used for testing a light sheet set up? Anjul Loiacono anjul@doublehelixoptics.com
I've never used a Zeiss light sheet microscope, but one beauty of a light sheet is that almost all critical aspects of system performance and alignment can be visualized using a sample spiked with a little bit of fluorescent dye, for example, fluorescein. For any light sheet system, the two main goals are to check if the excitation beam (for scanned systems) or sheet (for stationary optics) are coming out straight and parallel to the image plane. Straightness is most easily assessed in a scanned system since the beam (when the scanning optic is stationary) should ideally be parallel to the X axis of the camera. To check whether the sheet is parallel, one merely needs to move the imaging objective in and out while observing the stationary beam or sheet. If parallel, it will go evenly in and out of focus across the entire FOV. If not parallel, the waist will translate laterally in the image when the imaging objective is moved. A final check (although one that shouldn't drift much) is to make sure that the scanning mirror that generates the sheet in a scanned system is parallel to the image plane by inspecting the top and bottom edges of the sheet and verifying that both are in focus. One parameter that needs a sample to measure is the thickness of the beam waist. For that we add sub-diffractive beads to the sample at a low concentration, focus on a single bead, and then acquire a stack without moving the imaging objective (just sweeping the sheet over the bead). The axial profile of the bead gives the thickness of the sheet. Measuring system PSF is a good way to get a sense of whether the system as a whole is in decent optical shape but isn't as directly informative as the light sheet alignment checks. For this, we use the same beads as for measuring sheet thickness and pass the resulting stack through PSFj: https://github.com/cmongis/psfj. We usually get 100 nm FluoSpheres or TetraSpeck beads for these checks. 
The diSPIM wiki has excellent illustrations of these alignment checks made by Jon Daniels from ASI. While some details are specific to the diSPIM (software and the dual view alignment) the general principles are common to all instruments. Of course, this doesn't include alignment checks for the beam shaping optics for fancier beam profiles like Bessel, Airy or lattices, but the overall principle of the alignment checks still hold. Pavak Shah pavak@ucla.edu
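[Ed. note] Pavak's bead-based thickness measurement boils down to extracting the FWHM of a single bead's axial intensity profile. A minimal sketch of that step in plain NumPy, on synthetic data; linear interpolation at the half-maximum crossings is one simple choice (fitting a Gaussian works equally well):

```python
import numpy as np

def fwhm(z_um, intensity):
    """Estimate the FWHM of a single-peaked axial profile by linear
    interpolation at the two half-maximum crossings."""
    z = np.asarray(z_um, float)
    i = np.asarray(intensity, float)
    i = i - i.min()                        # crude background subtraction
    half = i.max() / 2.0
    above = np.where(i >= half)[0]         # indices above half maximum
    lo, hi = above[0], above[-1]
    # interpolate the exact crossing position on each flank of the peak
    left = np.interp(half, [i[lo - 1], i[lo]], [z[lo - 1], z[lo]])
    right = np.interp(half, [i[hi + 1], i[hi]], [z[hi + 1], z[hi]])
    return right - left

# Synthetic bead profile: Gaussian with sigma = 1.5 um, so the
# true FWHM is 2 * sqrt(2 * ln 2) * sigma = 2.355 * 1.5 = 3.53 um.
z = np.linspace(-10.0, 10.0, 401)
profile = np.exp(-z**2 / (2 * 1.5**2))
print(round(fwhm(z, profile), 2))  # -> 3.53
```

In practice the profile comes from sweeping the sheet over a sub-diffractive bead as described above, with the imaging objective held fixed.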
Thanks for sharing your experiences. How do you prepare the beads for light sheet testing? Embed them in an agarose block? I'm wondering if there's a way to prepare a more permanent test sample analogous to the bead slides commonly used for PSF measurement on “normal” microscopes. One possibility is a mirror at 45 degrees to the light sheet, which reflects the light sheet toward the imaging objective. Does Zeiss have a test target designed specifically for the Z1 Lightsheet microscope? Radek Machan radek.machan@ntu.edu.sg
When we bought a LaVision BioTec UMII light sheet microscope we received an alignment tool. It is a cleverly designed glass cuvette filled with a fluorescent dye, with a thin opaque membrane in the middle. Holes in the membrane help with seeing whether the light sheets (6 in our case) are illuminating the same area. It is not perfect, but quite useful. I have also been able to use TetraSpeck beads in agar for checking light sheet alignment in PBS/water media. The beads are very useful for seeing camera misalignment, chromatic aberrations, and PSF distortions. I have tried to clear (ECI protocol) some agar blocks containing beads, but I lost the fluorescent signal after 1 week. I agree that it would be very useful to have some commercially available/standardized tools for light sheet alignment. Alessandro Ciccarelli alessandro.ciccarelli@crick.ac.uk
We did almost the same as Alessandro to make a test sample for our Miltenyi/LaVision BioTec UltraMicroscope II. We have a fixed 12× objective for this microscope, and the standard alignment tool (with a kind of crosshair slit) was not good enough. We also made an ECI-cleared agar block with fluorescent beads, though we used another brand which might be more stable (melamine resin-based fluorescent beads, see https://www.sigmaaldrich.com/technical-documents/articles/biofiles/fluorescent-microparticles.html). They worked well for characterization of the PSF and light sheet alignment, especially with the 12× objective. Gert-Jan Bakker gert-jan.bakker@radboudumc.nl
We also have a LaVision light sheet system and mostly use the calibration tool that comes with it. However, when we've tried beads we've used iDISCO+ cleared agarose blocks with the following beads: https://www.micromod.de/de/produkte-3-fluoreszent.html (example: Prod.-Nr.: 40-02-103, 1μm diameter red). These do not dissolve in DBE and were recommended by someone from LaVision. Unfortunately, there are no multi-fluorophore versions to evaluate chromatic aberrations. If anyone knows of solvent-stable multi-fluorophore beads (TetraSpeck-like), I would be very interested. Pablo Ariel pablo_ariel@med.unc.edu
We have always used mirror and grid test targets inserted at 45 degrees into the chamber. These can be homebuilt by cutting a microscope test slide with a diamond pen and gluing it in place (for example, with UV-curable optics glue). If the mirror is translated through the chamber along the light sheet, the beam profile at different positions can be observed and the light sheet waist measured. Regarding bead samples: agarose-embedded beads in a Falcon tube with some distilled water to prevent the agarose from drying out are good for several months. I've also used dye solution mixed with agarose to visualize the light sheets; these bleach after 1–2 weeks when stored at room temperature. For something more durable, I've been wondering whether the UV-curable polymer from this preprint might be used to embed beads. The company has polymers with a range of refractive indices as well. www.biorxiv.org/content/10.1101/2020.10.04.324996v1.article-metrics and www.mypolymers.com/bio-133-enables-diverse-applications-in-fluorescence-microscopy. Wiebke Jahr wiebke.jahr@st.ac.at
I typically use 200 nm fluorescent beads in agarose to acquire PSFs. We have a Zeiss Lightsheet Z1 with dual side illumination. Although it is a very fine microscope, ours appears to have some problem: the waists of left and right light sheets do not exactly coincide in the X dimension (along the light sheet). I think they should (correct me if I am wrong). I know the serviceman uses the mirror preparation to evaluate the system. I imagine that, ideally, the mirror preparation after rotation (that is, changing between the 45-degree-position for left and right illumination) should reflect the laser light towards the detection lens at exactly the same position in the X dimension. However, this might not always be the case, especially with homemade mirror preps. This might be the reason why everything looks fine after service with the mirror preparation, but the waists fail to coincide in dual-side illumination systems. Tomasz Wegierski confocal@iimcb.gov.pl
On a light sheet microscope with double-sided illumination it is not uncommon to have the waists of the two light sheets separated a bit. If you assume that you will use the left light sheet to primarily illuminate the left part of the sample, you may want to have the waist more on that side. You could even put the waist of the left light sheet in the middle of the left half of the FOV and the waist of the right light sheet in the middle of the right half. As a result, both light sheets can be made thinner by a factor of 1/sqrt(2) = 0.7 because each light sheet only needs to cover half of the FOV (Huisken and Stainier, Opt. Lett. 2007). However, since the sample never fills the entire FOV exactly, and some overlap of the two datasets in the middle is optimum, the two waists can be a bit closer.
Regarding the alignment mirror, we started making our own mirrors. We are happy to share them with anyone who is interested. They have a fully reflective part, a grid, parallel lines, and a target similar to the USAF target. A grid is fantastic as an alignment tool, as it nicely shows all the critical parameters of the light sheet(s): the precise position where the waist is in focus, whether the sheet is at an angle or rotated, etc. It does not matter if the mirror's angle is exactly 45 degrees. You are basically looking at the bright stripe that the light sheet produces on a tilted surface. The stripes or grid lines on the target simply help you observe where the focal plane of the detection system is. This figure illustrates the alignment process: https://imgur.com/tUrwtuM. Jan Huisken jhuisken@gmail.com
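[Ed. note] The 1/sqrt(2) factor mentioned above falls out of Gaussian beam optics: if a sheet's usable length is taken as its confocal parameter 2·z_R, with Rayleigh range z_R = π·w0²/λ, then the minimum waist w0 scales as the square root of the length to be covered. A quick numerical sketch (the wavelength and field-of-view values are made-up, purely illustrative):

```python
import math

def min_sheet_waist(coverage_um, wavelength_um):
    """Smallest Gaussian waist w0 whose confocal parameter (2 * Rayleigh
    range z_R, with z_R = pi * w0**2 / wavelength) spans coverage_um."""
    z_r = coverage_um / 2.0                     # Rayleigh range needed
    return math.sqrt(z_r * wavelength_um / math.pi)

# Illustrative numbers: 488 nm excitation, 400 um field of view.
one_sheet = min_sheet_waist(400.0, 0.488)   # single sheet spans the full FOV
two_sheets = min_sheet_waist(200.0, 0.488)  # each of two sheets spans half
print(round(two_sheets / one_sheet, 3))     # -> 0.707, i.e. 1/sqrt(2)
```

Halving the length each sheet must cover therefore thins each sheet by exactly the factor 1/sqrt(2) ≈ 0.7 cited from Huisken and Stainier (Opt. Lett. 2007), independent of the particular wavelength or FOV chosen.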