University of Cambridge - reproducibility /taxonomy/subjects/reproducibility en ‘Robot scientist’ Eve finds that less than one third of scientific results are reproducible /research/news/robot-scientist-eve-finds-that-less-than-one-third-of-scientific-results-are-reproducible <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/breast-cancer-cell.jpg?itok=A3oLbOmf" alt="Breast Cancer Cell" title="Breast Cancer Cell, Credit: NIH Image Gallery" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>The researchers, led by the University of Cambridge, analysed more than 12,000 research papers on breast cancer cell biology. After narrowing the set down to 74 papers of high scientific interest, less than one-third – 22 papers – were found to be reproducible. In two cases, Eve was able to make serendipitous discoveries.</p> <p>The <a href="https://royalsocietypublishing.org/doi/10.1098/rsif.2021.0821">results</a>, reported in the journal <em>Royal Society Interface</em>, demonstrate that it is possible to use robotics and artificial intelligence to help address the reproducibility crisis.</p> <p>A successful experiment is one where another scientist, in a different laboratory under similar conditions, can achieve the same result. But more than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce some of their own experiments: this is the reproducibility crisis.</p> <p>“Good science relies on results being reproducible: otherwise, the results are essentially meaningless,” said Professor Ross King from the University of Cambridge’s Department of Chemical Engineering and Biotechnology, who led the research. 
“This is particularly critical in biomedicine: if I’m a patient and I read about a promising new potential treatment, but the results aren’t reproducible, how am I supposed to know what to believe? The result could be people losing trust in science.”</p> <p>Several years ago, King developed the robot scientist Eve, a computer/robotic system that uses techniques from artificial intelligence (AI) to carry out scientific experiments.</p> <p>“One of the big advantages of using machines to do science is they’re more precise and record details more exactly than a human can,” said King. “This makes them well-suited to the job of attempting to reproduce scientific results.”</p> <p>As part of a project funded by DARPA, King and his colleagues from the UK, US and Sweden designed an experiment that uses a combination of AI and robotics to help address the reproducibility crisis, by getting computers to read scientific papers and understand them, and getting Eve to attempt to reproduce the experiments.</p> <p>For the current paper, the team focused on cancer research. “The cancer literature is enormous, but no one ever does the same thing twice, making reproducibility a huge issue,” said King, who also holds a position at Chalmers University of Technology in Sweden. “Given the vast sums of money spent on cancer research, and the sheer number of people affected by cancer worldwide, it’s an area where we urgently need to improve reproducibility.”</p> <p>From an initial set of more than 12,000 published scientific papers, the researchers used automated text mining techniques to extract statements related to a change in gene expression in response to drug treatment in breast cancer. From this set, 74 papers were selected.</p> <p>Two different human teams used Eve and two breast cancer cell lines and attempted to reproduce the 74 results. 
Statistically significant evidence for repeatability was found for 43 papers, meaning that the results were replicable under identical conditions; and significant evidence for reproducibility or robustness was found in 22 papers, meaning the results were replicable by different scientists under similar conditions. In two cases, the automation made serendipitous discoveries.</p> <p>While only 22 out of 74 papers were found to be reproducible in this experiment, the researchers say that this does not mean that the remaining papers are not scientifically reproducible or robust. “There are lots of reasons why a particular result may not be reproducible in another lab,” said King. “Cell lines can sometimes change their behaviour in different labs under different conditions, for instance. The most important difference we found was that it matters who does the experiment, because every person is different.”</p> <p>King says that this work shows that automated and semi-automated techniques could be an important tool to help address the reproducibility crisis, and that reproducibility should become a standard part of the scientific process.</p> <p>“It’s quite shocking how big of an issue reproducibility is in science, and it’s going to need a complete overhaul in the way that a lot of science is done,” said King. “We think that machines have a key role to play in helping to fix it.”</p> <p>The research was also funded by the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI), and the Wallenberg AI, Autonomous Systems and Software Program (WASP).</p> <p><em><strong>Reference:</strong><br /> Katherine Roper et al. ‘<a href="https://royalsocietypublishing.org/doi/10.1098/rsif.2021.0821">Testing the reproducibility and robustness of the cancer biology literature by robot</a>.’ Royal Society Interface (2022). 
DOI: 10.1098/rsif.2021.0821</em></p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Researchers have used a combination of automated text analysis and the ‘robot scientist’ Eve to semi-automate the process of reproducing research results. The problem of lack of reproducibility is one of the biggest crises facing modern science.</p> </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">One of the big advantages of using machines to do science is they’re more precise and record details more exactly than a human can</div></div></div><div class="field field-name-field-content-quote-name field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Ross King</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.flickr.com/photos/132318516@N08/28264909965" target="_blank">NIH Image Gallery</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Breast Cancer Cell</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br /> The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. 
Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution-noncommerical">Attribution-Noncommercial</a></div></div></div> Tue, 05 Apr 2022 23:20:02 +0000 sc604 231261 at Codecheck confirms reproducibility of COVID-19 model results /research/news/codecheck-confirms-reproducibility-of-covid-19-model-results <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/christian-wiediger-wkfdrhxdmc8-unsplash.jpg?itok=I1cA74Gg" alt="Closeup of computer keyboard" title="Closeup of computer keyboard, Credit: Photo by Christian Wiediger on Unsplash" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>The code, script and documentation of the 16 March report, which is <a href="https://github.com/mrc-ide/covid-sim/tree/master/report9">available on Github</a>, was subject to an <a href="https://zenodo.org/doi/10.5281/zenodo.3865490"> 
independent review led by Dr Stephen Eglen</a>, from the University of Cambridge’s Department of Applied Mathematics and Theoretical Physics.</p> <p>Eglen co-founded Codecheck last year to help evaluate the computer programs behind scientific studies. Researchers provide their code and data to Codecheck, who run the code independently to ensure the work can be reproduced.</p> <p>Last week, Codecheck certified the reproducibility of arguably the most talked-about computational model of the COVID-19 pandemic, that of the Imperial College group led by Professor Neil Ferguson. The model suggested that there could be up to half a million deaths in the UK if no measures were taken to slow the spread of the virus, and has been cited as one of the main reasons that lockdown went into effect soon after. However, the Imperial group did not immediately make their code publicly available.</p> <p>Codecheck.org.uk provided an independent review of the replication of <a href="https://www.imperial.ac.uk/mrc-global-infectious-disease-analysis/covid-19/report-9-impact-of-npis-on-covid-19/">key findings from Report 9</a> using the CovidSim reimplementation. The process matches domain expertise and technical skills, taking place as an open peer review. The reviewer conducts the codecheck and submits the resulting certificate as part of their review.</p> <p>The results confirm that the key findings of <a href="https://www.imperial.ac.uk/mrc-global-infectious-disease-analysis/covid-19/report-9-impact-of-npis-on-covid-19/">Report 9</a> – on the impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand – are reproducible. Eglen did not review the epidemiology that went into the Imperial model, however.</p> <p>In <a href="https://zenodo.org/doi/10.5281/zenodo.3865490">his analysis, Dr Eglen said</a>: “Each run generated a tab-delimited file in the output folder. Two R scripts provided by Prof Ferguson were used to summarise these runs into two summary files... 
These files were compared against the values generated by Prof Ferguson... The results were found to be identical. Inserting my results into his Excel spreadsheet generated the same pivot tables.”</p> <p>The codecheck found that: “Small variations (mostly under 5%) in the numbers were observed between Report 9 and our runs.” The codecheck confirmed the trends and findings of the original report.</p> <p>Building in part on code originally developed, published and peer-reviewed in <a href="https://pubmed.ncbi.nlm.nih.gov/16079797/">2005</a> and <a href="https://pubmed.ncbi.nlm.nih.gov/16642006/">2006</a>, the code used for Report 9 continues to be actively developed to allow examination of the wider range of control policies now being deployed as countries relax lockdown. The Imperial team is sharing the code to enhance transparency and to allow others to contribute and make use of the simulation.</p> <p>Refactoring the code has allowed changes to be made more quickly and reliably, including incorporating new data that has become available as the pandemic has progressed.</p> <p>In addition to the features presented in <a href="https://www.imperial.ac.uk/mrc-global-infectious-disease-analysis/disease-areas/covid-19/report-9-impact-of-npis-on-covid-19/">Imperial Report 9</a>, further strategies can now be examined such as testing and contact tracing, which was not a UK policy option in March.</p> <p>Users also now have the ability to vary intensity of interventions over time and to calibrate the model to country-specific epidemic data.</p> <p><em>Adapted from a </em><a href="https://www.imperial.ac.uk/news/197875/codecheck-confirms-reproducibility-covid19-model-results/"><em>piece</em></a><em> originally published on the Imperial College London website</em></p> <h2>How you can support Cambridge’s COVID-19 research effort</h2> <p><a href="https://www.philanthropy.cam.ac.uk/give-to-cambridge/cambridge-covid-19-research-fund" title="Link: Make a gift to support COVID-19 
research at the University">Donate to support COVID-19 research at Cambridge</a></p></div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>Cambridge researcher confirms reproducibility of high-profile Imperial College coronavirus computational model.</p> </p></div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://unsplash.com/photos/closeup-photo-of-computer-keyboard-WkfDrhxDMC8" target="_blank">Photo by Christian Wiediger on Unsplash</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Closeup of computer keyboard</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br /> The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. 
We make our image and video content available in a number of ways – as here, on our <a href="/">main website</a> under its <a href="/about-this-site/terms-and-conditions">Terms and conditions</a>, and on a <a href="/about-this-site/connect-with-us">range of channels including social media</a> that permit your use and sharing of our content under their respective Terms.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Mon, 08 Jun 2020 23:00:01 +0000 sc604 215292 at Pilot programme encourages researchers to share the code behind their work /research/news/pilot-programme-encourages-researchers-to-share-the-code-behind-their-work <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/news/crop_22.jpg?itok=-73Q51_p" alt="Close up code" title="Close up code, Credit: Lorenzo Cafaro" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>A new pilot project, designed by a Cambridge researcher and supported by the <em>Nature</em> family of journals, will evaluate the value of sharing the code behind published research.</p>&#13; &#13; <p>For years, scientists have discussed whether and how to share data from painstaking research and costly experiments. 
Some are further along in their efforts toward ‘open science’ than others: fields such as astronomy and oceanography, for example, involve such expensive and large-scale equipment and logistical challenges to data collection that collaboration among institutions has become the norm.</p>&#13; &#13; <p>Recently, academic journals, including several <em>Nature</em> journals, have been turning their attention to another aspect of the research process: computer programming code. Code is becoming increasingly important in research because scientists are often writing their own computer programs to interpret their data, rather than using commercial software packages. Some journals now include scientific data and code as part of the peer-review process.</p>&#13; &#13; <p>Now, in a <a href="https://www.nature.com/articles/nn.4550">commentary</a> published in the journal <em>Nature Neuroscience</em>, a group of researchers from the UK, Europe and the United States have argued that the sharing of code should be part of the peer-review process. In a separate <a href="https://www.nature.com/articles/nn.4579">editorial</a>, the journal has announced a pilot project to ask future authors to make their code available for review.</p>&#13; &#13; <p>Code is an important part of the research process, and often the only definitive account of how data were processed. “Methods are now so complex that they are difficult to describe concisely in the limited ‘methods’ section of a paper,” said Dr Stephen Eglen from Cambridge’s Department of Applied Mathematics and Theoretical Physics, and the paper’s lead author. “And having the code means that others have a better chance of replicating your work, and so should add confidence.”</p>&#13; &#13; <p>Making the programs behind the research accessible allows other scientists to test the code and reproduce the computations in an experiment – in other words, to reproduce results and solidify findings. 
It’s the “how the sausage is made” part of research, said co-author Ben Marwick, from the University of Washington. It also allows the code to be used by other researchers in new studies, making it easier for scientists to build on the work of their colleagues.</p>&#13; &#13; <p>“What we’re missing is the convention of sharing code or the tools for turning data into useful discoveries or information,” said Marwick. “Researchers say it’s great to have the data available in a paper – increasingly raw data are available in supplementary files or specialised online repositories – but the code for performing the clever analyses in between the raw data and the published figures and tables is still inaccessible.”</p>&#13; &#13; <p>Other Nature Research journals, such as <a href="https://www.nature.com/nature-portfolio/editorial-policies/reporting-standards">Nature Methods</a> and <a href="https://blogs.nature.com/tradesecrets/2016/07/18/guidelines-for-algorithms-and-software-at-nature-biotechnology">Nature Biotechnology</a>, provide for code review as part of the article evaluation process. Since 2014, the company has encouraged writers to make their code available upon request.</p>&#13; &#13; <p>The Nature Neuroscience pilot focuses on three elements: whether the code supporting an author’s main claims is publicly accessible; whether the code functions without mistakes; and whether it produces the results cited. At the moment this is a pilot project to which authors can opt in. It may be that in future it becomes mandatory, and only when the code has been reviewed will a paper then be accepted.</p>&#13; &#13; <p>“This extra step in the peer review process is to encourage ‘replication’ of results, and therefore help reduce the ‘replication crisis’,” said Eglen. 
“It also means that readers can understand more fully what authors have done.”</p>&#13; &#13; <p>An open science approach to sharing code is not without its critics, as well as scientists who raise legal and ethical questions about the repercussions. How do researchers get proper credit for the code they share? How should code be cited in the scholarly literature? How will it count toward tenure and promotion applications? How is sharing code compatible with patents and commercialization of software technology?</p>&#13; &#13; <p>“We hope that when people do not share code it might be seen as ‘having something to hide,’ although people may regard the code as ‘theirs’ and their IP, rather than something to be shared,” said Eglen. “Nowadays, we believe the final paper is the ultimate representation of a piece of research, but actually the final paper is just an advert for the scholarship, which here is the computer code to solve a particular task. By sharing the code, we actually get the most useful part of the scholarship, rather than the paper, which is just the author’s ‘gloss’ on the work they have done.”</p>&#13; &#13; <p><em>Adapted from a University of Washington <a href="https://www.washington.edu/news/2017/05/25/uw-anthropologist-why-researchers-should-share-computer-code/">press release</a>.</em></p>&#13; </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p><p>New project, partly designed by a Cambridge researcher, aims to improve transparency in science by sharing ‘how the sausage is made’.</p>&#13; </p></div></div></div><div class="field field-name-field-content-quote field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even">Having the code means that others have a better chance of replicating your work.</div></div></div><div class="field field-name-field-content-quote-name field-type-text 
field-label-hidden"><div class="field-items"><div class="field-item even">Stephen Eglen</div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.pexels.com/photo/close-up-code-coding-computer-239898/" target="_blank">Lorenzo Cafaro</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Close up code</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br />&#13; The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. 
For image use please see separate credits above.</p>&#13; </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div> Fri, 02 Jun 2017 07:30:00 +0000 sc604 189332 at Opinion: The science ‘reproducibility crisis’ – and what can be done about it /research/discussion/opinion-the-science-reproducibility-crisis-and-what-can-be-done-about-it <div class="field field-name-field-news-image field-type-image field-label-hidden"><div class="field-items"><div class="field-item even"><img class="cam-scale-with-grid" src="/sites/default/files/styles/content-580x288/public/news/research/discussion/56134101292091c5602do.jpg?itok=A6BEJC8V" alt="Study of Human Immune Response to HIV" title="Study of Human Immune Response to HIV, Credit: NIAID" /></div></div></div><div class="field field-name-body field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>A survey by Nature revealed that <a href="https://www.nature.com/articles/533452a">52% of researchers</a> believed there was a “significant reproducibility crisis” and 38% said there was a “slight crisis”.</p>&#13; &#13; <p>We asked three experts how they think the situation could be improved.</p>&#13; &#13; <h2>Open Research is the answer</h2>&#13; &#13; <p><em>Danny Kingsley, head of the Office of Scholarly Communication, University of Cambridge</em></p>&#13; &#13; <p>The solution to the scientific reproducibility crisis is to move towards <a href="https://osc.cam.ac.uk/open-research">Open Research</a> – the idea that scientific knowledge of all kinds should be openly shared as early as it is practical in the discovery process. 
We need to reward the publication of research outputs along the entire process, rather than just each journal article as it is published.</p>&#13; &#13; <p>As well as other research outputs – such as data sets – we should reward research productivity itself as well as the thought process and planning behind the study. This is why <a href="http://neurochambers.blogspot.co.uk/2013/04/scientific-publishing-as-it-was-meant_10.html">Registered Reports</a> was launched in 2013, where researchers register the proposal and how the research will be conducted, before any experimental work commences. It allows editorial decisions to be based on the rigour of the experimental design and increases the likelihood that the findings could be replicated.</p>&#13; &#13; <p>In the UK there is now a requirement from most <a href="https://www.data.cam.ac.uk/funders">funders</a> that the data underpinning a research publication is made available. However, although there are moves towards open research, many argue against the sharing of data among the research community.</p>&#13; &#13; <figure class="align-center "><img alt="" src="https://cdn.theconversation.com/files/160520/width754/image-20170313-9613-2cfmqw.jpg" /><figcaption><em><span class="caption">Questionable findings are often hidden.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/product-researching-marketing-team-work-loft-425326300?src=WZtYxmdFeSANhTM2RN1K6w-2-98">Shutterstock</a></span></em></figcaption></figure><p>Researchers often write multiple papers from a single data set and many fear that if this data is released with the first publication then the researcher will be “scooped” by another research group, who will publish findings from similar data sets before the original authors get the chance to publish follow-up articles – to gain maximum credit for the work. 
If the publication of data itself could be recorded as a “research output”, then being scooped would no longer be such an issue, as such credit will have been given.</p>&#13; &#13; <p><a href="https://journals.plos.org:443/plosone/article?id=10.1371/journal.pone.0026828">One benefit of sharing data</a> could be an improvement in its quality – as previous research has shown. And there have been small steps towards this goal, such as a <a href="https://force11.org/info/joint-declaration-of-data-citation-principles-final/">standard method of citing data</a>.</p>&#13; &#13; <p>We also need to publish “null” results – those that do not support the hypothesis – to prevent other researchers wasting time repeating work. There are a few publication outlets for this, and a <a href="https://techcrunch.com/2017/02/28/researchgate-raises-52-6m-for-its-social-research-network-for-scientists/">recent press release from ResearchGate</a> indicated that it supports the sharing of failed experiments through its “project” offering. It lets users upload and track experiments as they are happening – meaning no one knows how they will turn out.</p>&#13; &#13; <h2>Psychology is leading the way out of crisis</h2>&#13; &#13; <p><em>Jim Grange, senior lecturer in psychology, Keele University</em></p>&#13; &#13; <p>To me, it is clear that there is a reproducibility crisis in psychological science, and across all sciences. Murmurings of low reproducibility began in 2011 – the ‘<a href="https://ejwagenmakers.com/2012/Wagenmakers2012Horrors.pdf">year of horrors</a>’ for psychology – with a high-profile fraud case. But since then, <a href="https://osf.io/vmrgu/">The Open Science Collaboration</a> has published the findings of a large-scale effort to closely replicate 100 studies in psychology. 
Only 36% of them could be replicated.</p>&#13; &#13; <p>The <a href="https://arxiv.org/abs/1205.4251">incentive structures</a> in universities and the attitude that you “publish or perish” mean that researchers prioritise “getting it published” over “getting it right”. It also means that some, implicitly or explicitly, use questionable research practices to achieve publication. These may include failing to report parts of data sets or trying different analytical approaches to make the data fit what you want to say. It could also mean presenting exploratory research as though it was originally confirmatory (designed to test a specific hypothesis).</p>&#13; &#13; <p>However, many psychology journals now recommend or require the preregistration of studies which <a href="https://www.apa.org/science/about/psa/2015/08/pre-registration">allow researchers to detail their predictions</a>, experimental protocols, and planned analytical strategy before data collection. This provides confidence to readers that no questionable research practices have occurred.</p>&#13; &#13; <figure class="align-center "><img alt="" src="https://cdn.theconversation.com/files/160499/width754/image-20170313-19247-57184o.jpg" /><figcaption><em><span class="caption">Erasing data: a questionable research practice.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/erasing-data-correction-fluid-427863787?src=1XEqIKb5SpySP5ZUCpLmZg-1-69">Shutterstock</a></span></em></figcaption></figure><p><a href="https://www.elsevier.com/connect">Registered Reports</a> has taken this further. But of course, once results are produced, isolated findings don’t mean much until they have been replicated.</p>&#13; &#13; <p>I make efforts to replicate results before trying to publish, and you’d be forgiven for thinking that replication attempts are common in science, but this is simply not the case. 
Journals seek novel theories and findings, and view replications as treading over old ground, which offers little incentive for career-minded academics to conduct replications.</p>&#13; &#13; <p>This has also led to the introduction of <a href="https://www.psychologicalscience.org/publications/replication">Registered Replication Reports</a> in <a href="https://journals.sagepub.com/home/pps">Perspectives on Psychological Science</a>. This is where teams of researchers each follow identical procedures independently and aim to replicate important findings from the literature. A single paper then collates and analyses them to establish the size and reproducibility of the original study.</p>&#13; &#13; <p>Although psychology is leading the way for improvements with these pioneering initiatives, it is certainly not out of the woods. But it has started to move beyond a crisis and make impressive strides – more disciplines need to follow suit.</p>&#13; &#13; <h2>This is a publication bias crisis</h2>&#13; &#13; <p><em>Ottoline Leyser, director of the Sainsbury Laboratory, University of Cambridge</em></p>&#13; &#13; <p>Reproducibility is a fundamental building block of science. If two people do the same experiment, they should get the same result. But there are many good reasons why two “identical” experiments might not give the same result, such as unknown differences that have not been considered – and some <a href="http://www.plantcell.org/content/2/4/279.abstract">exciting discoveries have been made this way</a>.</p>&#13; &#13; <p>So if a lack of reproducibility is itself not necessarily a problem, why is everybody talking about a crisis? In some cases poor practice and corner-cutting have contributed to a lack of reproducibility, and there have been some <a href="https://www.science.org/news/2012/11/final-report-stapel-affair-points-bigger-problems-social-psychology">high-profile cases of out-and-out fraud</a>. 
It’s a major concern, but what is causing it?</p> <p>In 2014 I chaired a project on the research culture in Britain for the <a href="https://www.nuffieldbioethics.org/publication/the-culture-of-scientific-research-the-findings-of-a-series-of-engagement-activities-exploring-the-culture-of-scientific-research-in-the-uk/">Nuffield Council on Bioethics</a>, which was motivated by <a href="https://theconversation.com/the-dark-side-of-research-when-chasing-prestige-becomes-the-prize-35001">concerns about research integrity</a>, including over-claiming, rushing prematurely to publication and incorrect use of statistics. The main conclusion was that poor practice is incentivised by hyper-competition with overly narrow rules for winning.</p> <p>There is an excessive focus on the publication of groundbreaking results in prestigious journals. But science cannot only be groundbreaking: there is a lot of important digging to do after new discoveries, yet there is not enough credit in the system for this work, and it may remain unpublished because researchers prioritise their time on eye-catching papers, hurriedly put together.</p> <p>The reproducibility crisis is actually a publication bias crisis, driven by the reward structures in the research system. Various approaches have been suggested to address these problems, such as the pre-registration of experiments. However, the research landscape is highly diverse, and this type of solution is only sensible for some kinds of research. The most widely relevant solution is to change the reward structures. In the UK there is a major opportunity to do this by reforming the <a href="https://theconversation.com/qanda-what-is-the-ref-and-how-is-the-quality-of-university-research-measured-35529">Research Excellence Framework</a> (REF). 
Through the REF, public money is allocated to universities based on the “quality” of the four best research outputs, usually papers, produced by each of their principal investigators over approximately six years, and it disproportionately rewards groundbreaking research.</p> <p>We need to reward a portfolio of research outputs, including not only the headline-grabbing results but also confirmatory work and community data sharing, which are the hallmarks of a truly high-quality research endeavour. This would go a long way towards shifting the current destructive culture.</p> <p><em><span><a href="https://theconversation.com/profiles/ottoline-leyser-147196">Ottoline Leyser</a>, Director of the Sainsbury Laboratory, <a href="https://theconversation.com/institutions/university-of-cambridge-1283">University of Cambridge</a>; <a href="https://theconversation.com/profiles/danny-kingsley-3258">Danny Kingsley</a>, Head, Office of Scholarly Communication, <a href="https://theconversation.com/institutions/university-of-cambridge-1283">University of Cambridge</a>, and <a href="https://theconversation.com/profiles/jim-grange-344560">Jim Grange</a>, Senior Lecturer in Psychology, <a href="https://theconversation.com/institutions/keele-university-1012">Keele University</a></span></em></p> <p><em>This article was originally published on <a href="https://theconversation.com/">The Conversation</a>. Read the <a href="https://theconversation.com/the-science-reproducibility-crisis-and-what-can-be-done-about-it-74198">original article</a>.</em></p> </div></div></div><div class="field field-name-field-content-summary field-type-text-with-summary field-label-hidden"><div class="field-items"><div class="field-item even"><p>Reproducibility is the idea that an experiment can be repeated by another scientist and that they will get the same result. It is important for showing that the claims of an experiment are true and for making them useful in further research. However, science appears to have an issue with reproducibility.</p></div></div></div><div class="field field-name-field-image-credit field-type-link-field field-label-hidden"><div class="field-items"><div class="field-item even"><a href="https://www.flickr.com/photos/niaid/5613410129/" target="_blank">NIAID</a></div></div></div><div class="field field-name-field-image-desctiprion field-type-text field-label-hidden"><div class="field-items"><div class="field-item even">Study of Human Immune Response to HIV</div></div></div><div class="field field-name-field-cc-attribute-text field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p><a href="http://creativecommons.org/licenses/by/4.0/" rel="license"><img alt="Creative Commons License" src="https://i.creativecommons.org/l/by/4.0/88x31.png" style="border-width:0" /></a><br /> The text in this work is licensed under a <a href="http://creativecommons.org/licenses/by/4.0/" rel="license">Creative Commons Attribution 4.0 International License</a>. For image use please see separate credits above.</p> </div></div></div><div class="field field-name-field-show-cc-text field-type-list-boolean field-label-hidden"><div class="field-items"><div class="field-item even">Yes</div></div></div><div class="field field-name-field-license-type field-type-taxonomy-term-reference field-label-above"><div class="field-label">Licence type:&nbsp;</div><div class="field-items"><div class="field-item even"><a href="/taxonomy/imagecredit/attribution">Attribution</a></div></div></div> Mon, 20 Mar 2017 09:57:15 +0000 cjb250 186372 at