We need controls in research. We need to pay close attention to our assays, reagents, and the pathways we affect.
And we know you know that. But it’s worth reiterating, and worth sitting with for a while.
This isn’t us preaching the value of controls; it’s an open plea for scientists to love their findings so much that we place them under the same scrutiny we’d apply to the huge body of work that has gone before us.
Biology is an incomplete mess, so let’s control for that
The thing is, the biological systems we all work with are complex. Every protein we knock out, every ion channel we block, and every little thing we modify has myriad downstream effects – often on other signaling molecules and pathways we hadn’t considered.
Because our knowledge of biology is incomplete.
We’re still putting all the pieces together and, in all likelihood, we’ll continue to do so forever. Which at least means there’ll always be more research to do and more discoveries to make.
But since our knowledge is incomplete, we all have to make every effort to control what we can. That means controls to confirm our assays work. Controls to show our reagents do what they’re supposed to. Positive controls. Negative controls. We need them all.
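One common way to put a number on "our assay works" is the Z′ (Z-prime) factor, which measures how cleanly your positive and negative controls separate. A minimal sketch in Python, with made-up plate-reader values purely for illustration:

```python
import statistics

def z_prime(positive: list[float], negative: list[float]) -> float:
    """Z' factor (Zhang et al., 1999): quantifies the separation
    between positive and negative controls. Values above 0.5 are
    generally taken to indicate a robust assay."""
    mu_p, mu_n = statistics.mean(positive), statistics.mean(negative)
    sd_p, sd_n = statistics.stdev(positive), statistics.stdev(negative)
    return 1.0 - 3.0 * (sd_p + sd_n) / abs(mu_p - mu_n)

# Hypothetical signals from control wells (invented numbers)
pos_wells = [100.0, 102.0, 98.0, 100.0]   # e.g. known agonist
neg_wells = [10.0, 12.0, 8.0, 10.0]       # e.g. vehicle only
print(round(z_prime(pos_wells, neg_wells), 2))  # → 0.89
```

A Z′ near 1 means tight, well-separated controls; a Z′ near 0 means your positives and negatives overlap and the assay can’t be trusted to detect anything.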
And yes, of course, we’d all rather be running the experiments to get the answers we crave; and yes, running controls means spending more of our limited budget on additional things; and yes, it also means spending more of our limited and precious time running a whole suite of controls.
But it’s time well spent.
It’s money well invested.
Because controlling for the variables we can means less time wasted in the future. Controls lay the foundations for the months or years of research that lie ahead.
Think back to the feeling of getting a novel result. The excitement. The confusion. The possibilities. At that moment, you are, in all likelihood, the only person in possession of that bit of new information. It’s all yours. And because it’s only you holding that new hint of truth, it’s up to you to disprove it. It’s up to you to throw every control at it and account for every possible alternative explanation for that novel result. Only when you’ve ruled out assay mistakes, reagent issues, and interfering pathways can you start to look at that novel result as truth.
These controls against chance and error are what build your foundations. Your controls prepare your research for the unknown.
And what is research if not a quest into the unknown?
GPCRs: a case for looking up and downstream
Take G-protein-coupled receptors (GPCRs), for example. Most of us work, or have worked, with GPCRs at some point, and we all know how these multifunctional receptors and their subunits form different combinations, couple to different G proteins, and in turn affect multiple ion channels and signaling pathways. Gβγ subunits activate enzymes and ion channels directly, while Gα subunits also influence ion channels by modulating those same Gβγ subunits.
So when we start tinkering with GPCRs, there are going to be consequences – some big, some small. GPCRs can inhibit voltage-gated Ca2+ channels in two ways: fast, membrane-delimited inhibition mediated by G protein βγ subunits, followed by slower, voltage-independent modulation via second messengers. GPCRs also modulate G-protein-gated inwardly rectifying K+ (GIRK) channels, so if you use a GPCR agonist you’re also going to affect pain transmission, the release of multiple neurotransmitter types, and the membrane polarization events that follow.
A lot happens. And our actions can have a lot of consequences, so it’s imperative we at least try to control for these consequences.
Making our research rigorous and reproducible is more than just adding a simple positive and negative control: we need to consider the up- and downstream effects that our experiments will likely have. Doing so gives us a much more biologically relevant perspective and helps unravel the tangle of results when troubleshooting something.
We need careful, thoughtful controls.
Tied into this is reagent selection.
Don’t pick your reagents blindly
Research starts with reagent selection. And it falls to you, as researchers, to scrutinize the reagents you choose for your experiments.
Sometimes that means you have to stop running a protocol on autopilot, even one you may have run dozens of times before. Sometimes you need to pay extra attention to the reagents you pick.
If any of us had to bake a cake from a recipe that calls for “4 eggs”, we’d check to see that the eggs were fresh. That they were chicken eggs and not goose eggs. We wouldn’t blindly add four giant ostrich eggs just because the recipe called for “four eggs.”
So why don’t more of us do that with our protocols? If a protocol calls for an anti-X antibody, how often do we run a BLAST search to check that it cross-reacts with the species we use? Or run our own testing to check that it binds where it’s supposed to? How often do we try alternative channel modulators that have a higher potency? Or silence or activate competing pathways to see how much noise they add to the signal?
Probably not often enough.
Even if you follow the recipe – or the protocol – research can still quickly turn into a disaster. Which is why we all need to pay more attention. Look at the immunogen used to make the antibody; run the BLAST search; calibrate for noise; and look at neighboring pathways.
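One of those checks, comparing the antibody’s immunogen peptide against your species’ version of the target protein, can be sketched in a few lines. This is a deliberately simplified, ungapped stand-in for a real BLAST search, and the sequences below are invented for the example; for real work, search NCBI BLAST against your species’ proteome:

```python
def best_identity(immunogen: str, target: str) -> float:
    """Slide the immunogen peptide along the target protein and
    return the best percent identity. Ungapped and purely
    illustrative -- a real BLAST search also handles gaps and
    scores substitutions properly."""
    best = 0.0
    n = len(immunogen)
    for i in range(len(target) - n + 1):
        window = target[i:i + n]
        matches = sum(a == b for a, b in zip(immunogen, window))
        best = max(best, 100.0 * matches / n)
    return best

# Hypothetical sequences, made up for this example
immunogen = "CKDNPFKARV"            # peptide used to raise the antibody
rat_ortholog = "MSTCKDNPFKARVLLQ"   # fragment of the target in your species
print(best_identity(immunogen, rat_ortholog))  # → 100.0
```

If the best identity comes back low, the antibody’s epitope may simply not exist in your species, and no amount of optimizing the staining protocol will fix that.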
Vendors need to be more transparent about data
But this doesn’t all rest on the researcher’s shoulders. As scientists, you put a degree of trust in reagent vendors like us. If a vendor specifies that their anti-X antibody cross-reacts with your species, you have every right to believe them. You put faith in their validation data.
But for this relationship to work, there must be ample honesty and transparency from reagent vendors. Too many researchers have bought, or heard of others buying, what they thought were different reagents from different vendors, only to find they were the same faulty reagent sold under different names. This is unacceptable.
There is a heavy responsibility resting squarely on vendors to give each reagent a unique identifier so rebranded duplicates can be spotted, to provide sufficient validation data, and to provide researchers with all the information they need to make informed decisions about their reagents. With so many vendors to choose from now, it’s critical that the complete reagent data are freely available.
Researchers in turn need to look for those data. Track down vendors that are honest and transparent with reagent data. Look for validation and testing data; look for immunogen sequences; look for vendors you can freely ask questions of. These are the ones making a change in the industry for the better.
Science depends on us all doing better
For science to be rigorous, it’s simply not good enough to press ahead with the most basic of negative controls, with no thought to the downstream ramifications of modulating a protein, and with no research into the reagents listed in the protocol we have. It’s not good enough for vendors to provide incomplete data, withhold useful data, or engage in ethically questionable reselling practices.
We all have to do more. We all have to be better.
If we want science to be seen as the rigorous pursuit of truth and knowledge that we all deem it to be, then we need to work harder to improve our standards.
Because controls build your research, your paper, and the integrity of science.
Some resources to help you find the right controls
- Isotype and blocking peptide controls for antibodies
- An explainer on the amount of information available to you in the product pages
- Essential immunohistochemistry controls
- Scientific FAQs and troubleshooting
Photo by Daniele Levis Pelusi