– My new Twitter follower: Computer History Museum (@ComputerHistory). Note: they read my message about this blog. @ Relevant images, texts and links about this blog. @ My new Twitter follower @gary_lyman also read the message I sent him about this blog. Dr. Gary Lyman – Hutchinson Institute for Cancer Outcomes Research (HICOR), Fred Hutchinson Cancer Research Center and the University of Washington – is an oncologist, health economist and HICOR co-director; his full profile and selected publications appear later in this post. @ This post also covers the editorial ´´It’s time to talk about ditching statistical significance – Looking beyond a much used and abused measure would make science harder, but better´´ (Nature, 20 March 2019) and the editorial ´´Moving to a World Beyond “p < 0.05”´´ by Ronald L. Wasserstein, Allen L. Schirm & Nicole A. Lazar (pages 1-19, published online 20 Mar 2019). @ Very important links, images, websites and social networks of the world


Download the files below and share them!! Thanks!!

The diffusion of relevant information and knowledge is always essential for a country’s progress!!


links-of-this-blog-part-1

links-of-my-blog-part-2

Relevant information related to book reading and its interconnected aspects in the school environment – Rodrigo Nunes Cal – Part 1

Relevant information related to book reading and its interconnected aspects in the school environment – Rodrigo Nunes Cal – Part 2

  • Mestrado – Dissertation – Tabelas, Figuras e Gráficos – Tables, Figures and Graphics – ´´My´´ Dissertation


Impact_Fator-wise_Top100Science_Journals

GRUPO_AF1 – ´´My´´ Dissertation

GRUPO AFAN 1 – ´´My´´ Dissertation

GRUPO_AF2 – ´´My´´ Dissertation

GRUPO AFAN 2 – ´´My´´ Dissertation

Slides – mestrado – ´´My´´ Dissertation

CARCINÓGENO DMBA EM MODELOS EXPERIMENTAIS

DMBA CARCINOGEN IN EXPERIMENTAL MODELS

Avaliação da influência da atividade física aeróbia e anaeróbia na progressão do câncer de pulmão experimental – Summary – Resumo – ´´My´´ Dissertation


´´People worldwide need very efficient research and projects resulting in truly innovative drugs, vaccines, therapeutic substances, medical devices and other technologies, tailored to the person’s age, genetics and medical records. Then treatment, diagnosis and prognosis will, of course, be much more efficient.´´ Rodrigo Nunes Cal

https://science1984.wordpress.com/2021/08/14/do-the-downloads-of-very-important-detailed-and-innovative-data-of-the-world-about-my-dissertation-like-the-graphics-i-did-about-the-variations-of-weights-of-all-mice-control/

Mestrado – Dissertation – Tabelas, Figuras e Gráficos – Tables, Figures and Graphics


Impact_Fator-wise_Top100Science_Journals

GRUPO_AF1

GRUPO_AF2

GRUPO AFAN 1

GRUPO AFAN 2

Slides – mestrado

CARCINÓGENO DMBA EM MODELOS EXPERIMENTAIS

Avaliação da influência da atividade física aeróbia e anaeróbia na progressão do câncer de pulmão experimental – Summary – Resumo

Mestrado – Dissertation – Tabelas, Figuras e Gráficos – Tables, Figures and Graphics

Redefine Statistical Significance

´´We propose to change the default P-value threshold for statistical significance from 0.05 to 0.005 for claims of new discoveries.´´ Daniel J. Benjamin, James O. Berger, […] Valen E. Johnson. Nature Human Behaviour, volume 2, pages 6–10 (2018). https://www.nature.com/articles/s41562-017-0189-z
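As a quick illustration of what this proposal changes in practice, here is a minimal sketch (my own, not from the article; the test statistics are hypothetical) that computes a two-sided p-value from a standard-normal test statistic and checks it against both the conventional 0.05 threshold and the proposed 0.005 threshold:

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic z."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Hypothetical test statistics: a borderline result and a stronger one.
p_borderline = two_sided_p(2.2)  # significant at 0.05 but not at 0.005
p_strong = two_sided_p(3.3)      # significant at both thresholds

for p in (p_borderline, p_strong):
    print(f"p = {p:.4f}  p < 0.05: {p < 0.05}  p < 0.005: {p < 0.005}")
```

Under the proposal, results falling between the two thresholds would no longer be called ´´significant´´; the authors suggest describing them as merely suggestive.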

A world beyond p < 0.05 (Um mundo além de p < 0,05) « Sandra Merlo – Fonoaudiologia da Fluência

American Statistical Association – https://www.amstat.org/

https://www.amstat.org/ASA/about/home.aspx?hkey=6a706b5c-e60b-496b-b0c6-195c953ffdbc


The American Statistical Association is the world’s largest community of statisticians, the “Big Tent for Statistics.” It is the second-oldest continuously operating professional association in the country. Since it was founded in Boston in 1839, the ASA has supported excellence in the development, application, and dissemination of statistical science through meetings, publications, membership services, education, accreditation, and advocacy.

https://en.wikipedia.org/wiki/American_Statistical_Association

  • I made very interesting, innovative, important and detailed graphics of the weight variation of all mice of different ages over the entire experimental period of ´´my´´ dissertation. They are available on this blog and may be of real use to the scientific community!! The diffusion of relevant knowledge is always essential for a country’s progress, and new scientific discoveries need to emerge urgently!! Below you can download these graphics and other documents related to science, technology and innovation. Please share them so that other people can learn about them and perhaps use them as an excellent reference in their scientific research.

Mestrado – ´´My´´ Dissertation – Tabelas, Figuras e Gráficos – Tables, Figures and Graphics


Impact_Fator-wise_Top100Science_Journals

GRUPO_AF1 – ´´my´´ dissertation

GRUPO_AF2 – ´´my´´ dissertation

GRUPO AFAN 1 – ´´my´´ dissertation

GRUPO AFAN 2 – ´´my´´ dissertation

Slides – mestrado – ´´my´´ dissertation

CARCINÓGENO DMBA EM MODELOS EXPERIMENTAIS

Avaliação da influência da atividade física aeróbia e anaeróbia na progressão do câncer de pulmão experimental – Summary – Resumo – ´´my´´ dissertation

Positive feedback on Facebook and Twitter about this blog, including on the very important, innovative and detailed graphics I made of the weight variation of all mice (Control and Study Groups) of different ages over the entire experimental period of ´´my´´ dissertation. Note: I have received positive feedback about this blog on LinkedIn, by e-mail and on Instagram too. @ Internet invitations I received by direct message to take part in very important science events worldwide, in less than 1 year, because I participated in great research in Brazil, and other information. @ Links: The next step in nanotechnology | George Tulevski & Animated Nanomedicine movie & Nanotechnology Animation & Powering Nanorobots & The World’s Smallest Robots: Rise of the Nanomachines & Building Medical Robots, Bacteria sized: Bradley Nelson at TEDxZurich & Mind-controlled Machines: Jose del R. Millan at TEDxZurich & The present and future of brain-computer interfaces: Avi Goldberg at TEDxAsheville & Future of human/computer interface: Paul McAvinney at TEDxGreenville 2014 & Bio-interfaced nanoengineering: human-machine interfaces | Hong Yeo | TEDxVCU @ Very important images, websites, social networks and links – https://science1984.wordpress.com/2019/03/17/feedbacks-on-facebook-related-to-researches-i-participated-in-brazil-for-example-the-graphics-i-did-about-variations-of-all-mice-weights-control-and-study-groups-of-different-ages-during-all-exper/

CARCINÓGENO DMBA EM MODELOS EXPERIMENTAIS

monografia – ´´my´´ monograph

Positive feedback from people about my dissertation, blog and YouTube channel via Facebook Messenger. Year: 2018

My suggestion of a very important Project…

rodrigonunescal_dissertation

Apostila – Pubmed

LISTA DE NOMES – PEOPLE´S NAMES – E-MAIL LIST – LISTA DE E-MAILS

A Psicossomática Psicanalítica

O Homem como Sujeito da Realidade da Saúde – Redação

ÁCIDO HIALURONICO

As credenciais da ciência (1)

Aula_Resultados – Results

Frases que digitei – Phrases I typed

Nanomedicina – Texto que escrevi. Nanomedicine – Text I typed(1)

Nanomedicine123(2)57

Genes e Epilepsia

MÉTODOS DE DOSAGEM DO ÁCIDO HIALURÔNICO

microbiologia-famerp – Copia

Impact_Fator-wise_Top100Science_Journals

Positive feedback from people about my dissertation, blog and YouTube channel via Messenger (Facebook). Year: 2018 – positive-feedback-of-people-about-my-dissertation-blog-and-youtube-channel-by-facebook-messenger-ano-year-2018

https://www.youtube.com/results?search_query=american+statistical+association

https://www.fredhutch.org/en/faculty-lab-directory/lyman-gary.html

https://medicine.duke.edu/faculty/gary-herbert-lyman-md

https://www.linkedin.com/in/gary-lyman-a4811a5/

https://depts.washington.edu/oncology/faculty/lyman.html

https://www.researchgate.net/profile/Gary-Lyman

Gary Lyman, M.D., M.P.H.

Senior Lead, Health Care Quality and Policy
Hutchinson Institute for Cancer Outcomes Research (HICOR), Fred Hutch

Professor, Cancer Prevention Program
Public Health Sciences Division, Fred Hutch

Professor
Clinical Research Division, Fred Hutch
Phone: 206.667.6670
Email: glyman@fredhutch.org
Fax: 206.667.5977
Mail Stop: M3-B232

Dr. Gary Lyman is a medical oncologist, hematologist and public health researcher who focuses on comparative effectiveness, health technology assessment, and health services and outcomes research. Dr. Lyman serves as a senior lead for health care quality and policy within the Hutchinson Institute for Cancer Outcomes Research, or HICOR. An internationally recognized thought leader in cancer care delivery, supportive care and health care policy, his research compares the effectiveness of novel diagnostic and therapeutic strategies; examines clinical decision-making; explores risk modeling and precision medicine; assesses health technology research and synthesis; and delves into the factors that drive disparities in cancer care. He is also interested in cancer prevention, pharmaco-economics, and cancer treatment and supportive care for the elderly. Dr Lyman is among the top 1% of investigators by citations in Web of Science. In addition to his work at the Hutch, Dr. Lyman holds leadership positions within the American Society of Clinical Oncology as well as the SWOG Cancer Research Network, for which he serves as executive officer for Cancer Care Delivery, Symptom Control and Quality of Life Research.

Other Appointments & Affiliations

Professor, Medicine – Oncology
University of Washington School of Medicine

Adjunct Professor
University of Washington School of Public Health

Adjunct Professor
University of Washington School of Pharmacy

Medical Oncologist, Breast Cancer Program
Seattle Cancer Care Alliance

Executive Officer, Cancer Care Delivery, Symptom Control and Quality of Life Research
Southwest Oncology Group (SWOG)

Board of Governors and Executive Officer for Immunotherapy, Cancer Care Delivery, Symptom Control and Quality of Life Research, and Palliative Care
SWOG Cancer Research Network

Board of Directors
The Hope Foundation for Cancer Research

Fellow
Royal College of Physicians

Fellow
American College of Physicians

Fellow
American Society of Clinical Oncology

Editor-in-Chief
Cancer Investigation

Education

M.P.H., Biostatistics, Harvard University, 1982

M.D., School of Medicine, State University of New York at Buffalo, 1972

B.A., Psychology and Physics, State University of New York at Buffalo, 1968

Research Interests

Comparative Effectiveness Research of Novel Diagnostic and Therapeutic Strategies

Clinical Decision Making, Risk Modeling, and Precision Medicine

Health Technology Assessment and Research Synthesis

Health Disparities and Quality of Cancer Care Delivery

Health Economics, Pharmacoeconomics and Healthcare Policy

Cancer Treatment and Supportive Care in the Elderly

Gary Herbert Lyman, MD

Adjunct Professor in the Department of MedicineCampus mail 2424 Erwin Road, Suite 205, Durham, NC 27705Phone (919) 681-1736Email address gary.lyman@duke.edu

Dr. Gary Lyman is Professor of Medicine in the Division of Medical Oncology, Department of Internal Medicine at Duke University School of Medicine and the Duke Cancer Institute. He serves as Director of the Comparative Effectiveness and Outcomes Research Program in Oncology. Dr Lyman is also Senior Fellow in the Duke Center for Clinical Health Policy Research. He is a nationally and internationally recognized authority on comparative effectiveness and health services and outcomes research. Dr Lyman’s research is funded by the National Cancer Institute, the National Heart Lung and Blood Institute, and the American Society of Clinical Oncology, along with industry grants related to supportive cancer care. Dr Lyman has published some 400 research articles in the professional medical literature. Direct funding from all sources supports an annual Comparative Effectiveness and Outcomes Research budget of over $5 million.

Dr Lyman’s research interests include:
Personalized Medicine and Cancer Supportive Care: In addition to the conduct of randomized controlled trials of new cancer diagnostic, prognostic, treatment and supportive care approaches, Dr Lyman’s research interests include the personalized management of early-stage breast cancer and supportive care of patients receiving cancer chemotherapy, most notably those at risk for febrile neutropenia and venous thromboembolism. Based on clinical trial results, Dr Lyman is actively involved in the development and validation of clinical risk models for patient selection and targeted intervention and preventive strategies. Dr Lyman is co-PI on an NCI grant on the comparative effectiveness of cancer pharmacogenomics, aiming to discover and validate new genomic tools for guiding more personalized cancer treatments, and on an NHLBI trial of thromboprophylaxis in high-risk ambulatory patients receiving cancer chemotherapy.

Evidence synthesis, clinical practice guidelines and health policy: Dr Lyman conducts systematic reviews and meta-analyses of major clinical issues in support of clinical practice guidelines. Dr Lyman chairs several guidelines for the American Society of Clinical Oncology, including those on antiemetics, venous thromboembolism, sentinel node biopsy in patients with breast cancer and cutaneous melanoma, and weight-based dosing of chemotherapy. Dr Lyman also conducts analyses of large population studies of clinical outcomes associated with current cancer patient management in a real-world setting, with a particular focus on cancer management in the elderly patient with cancer. Dr Lyman leads several decision simulation studies for improved clinical decision making and cost-effectiveness analysis of new and novel therapies based on the results of clinical trials, systematic reviews and population studies. He serves as an advisor on new oncologic agents to the US FDA. Dr Lyman also serves as Editor-in-Chief of Cancer Investigation and on the editorial boards of several prestigious research journals.

Education and Training

  • Research Fellow, Medicine, Dana-Farber Cancer Institute, 1981 – 1982
  • Medical Oncology Clinical Fellowship, Medicine, Roswell Park Cancer Institute, 1974 – 1976
  • Intern and Junior Assistant Resident, Medicine, University of North Carolina – Chapel Hill, 1972 – 1974
  • M.D., State University of New York – Buffalo, 1972

Gary H. Lyman, MD, MPH, FASCO, FRCP (Edin)

Senior Lead, Healthcare Quality and Policy, Hutchinson Institute for Cancer Outcomes Research
Public Health Sciences and Clinical Research Divisions
Fred Hutchinson Cancer Research Center
Professor of Medicine/Medical Oncology, University of Washington School of Medicine

Mailing Address

Seattle Cancer Care Alliance
825 Eastlake Ave East
Seattle, WA 98109-1023

Admin Contact

Andrea Doherty
206.667.3701
Adoherty@fredhutch.org
Fax: 206.667.5977

Specialty / Expertise

  • Executive Officer, SWOG Cancer Research Network
  • Member, Public Health Sciences Division, Fred Hutchinson Cancer Research Center
  • Member, Clinical Research Division, Fred Hutchinson Cancer Research Center

Research Interests

  • Comparative Effectiveness Research of Novel Diagnostic and Therapeutic Strategies
  • Clinical Decision Making, Risk Modeling, and Precision Medicine
  • Health Technology Assessment and Research Synthesis
  • Health Disparities and Quality of Cancer Care Delivery
  • Health Economics, Pharmacoeconomics and Healthcare Policy
  • Cancer Treatment and Supportive Care in the Elderly

Current Research Projects

  • Personalized cancer supportive care through risk stratification and targeted intervention
  • Systematic evidence summaries of several clinical practice topics in support of ASCO clinical practice guidelines
  • Defining and measuring the value of cancer care using multi-stakeholder engagement (HICOR)

Training

Dr Lyman received his BA and MD from the State University of New York at Buffalo and an MPH in Biostatistics from the Harvard School of Public Health. He completed his Internal Medicine Residency at the University of North Carolina in Chapel Hill and his Fellowships in Hematology and Medical Oncology at the Roswell Park Memorial Institute, as well as a postdoctoral fellowship at the Dana-Farber Cancer Institute in Boston.

Selected Publications

Runowicz CD, Leach CR, Henry NL, Henry KS, Mackey HT, Cowens-Alvarado RL, Cannady RS, Pratt-Chapman ML, Edge SB, Jacobs LA, Hurria A, Marks LB, LaMonte SJ, Warner E, Lyman GH, Ganz PA. American Cancer Society/American Society of Clinical Oncology Breast Cancer Survivorship Care Guideline. J Clin Oncol. 2015 Dec 7. [Epub ahead of print].

Runowicz CD, Leach CR, Henry NL, Henry KS, Mackey HT, Cowens-Alvarado RL, Cannady RS, Pratt-Chapman ML, Edge SB, Jacobs LA, Hurria A, Marks LB, LaMonte SJ, Warner E, Lyman GH, Ganz PA. American Cancer Society/American Society of Clinical Oncology Breast Cancer Survivorship Care Guideline. CA Cancer J Clin. 2015 Dec 7. [Epub ahead of print] Review.

Deverka P, Messner DA, McCormack R, Lyman GH, Piper M, Bradley L, Parkinson D, Nelson D, Smith ML, Jacques L, Dutta T, Tunis SR. Generating and evaluating evidence of the clinical utility of molecular diagnostic tests in oncology. Genet Med. 2015 Dec 3. [Epub ahead of print].

Ma XM, Chen XH, Wang JS, Lyman GH, Qu Z, Ma W, Song JC, Zhou CK, Zhao LP. Evolving Healthcare Quality in Top Tertiary General Hospitals in China during the China Healthcare Reform (2010-2012) from the Perspective of Inpatient Mortality. PLoS One. 2015 Dec 1;10(12):e0140568.

Weycker D, Chandler D, Barron R, Xu H, Wu H, Edelsberg J, Lyman GH. Risk of infection among patients with non-metastatic solid tumors or non-Hodgkin’s lymphoma receiving myelosuppressive chemotherapy and antimicrobial prophylaxis in US clinical practice. J Oncol Pharm Pract. 2015 Nov 14. [Epub ahead of print].

Denduluri N, Patt DA, Wang Y, Bhor M, Li X, Faver AM, Morrow PK, Barron RL, Asmar L, Saravanan S, Li Y, Garcia J, Lyman GH: Dose Delays, Dose Reductions, and Relative Dose Intensity in Patients with Cancer Who Received Adjuvant or Neoadjuvant Chemotherapy in Community Oncology Practices. J Natl Compr Canc Netw 2015;13 (11):1383-93.

Hesketh PJ, Bohlke K, Lyman GH, Basch E, Chesney M, Clark-Snow RA, Danso MA, Jordan K, Somerfield MR, Kris MG.Antiemetics: American Society of Clinical Oncology Focused Guideline Update. J Clin Oncol. 2015 Nov 2.

Lyman GH, Lal L, Radtchenko J, Harrow B, Schwartzberg L. Evaluation Of Resource Utilizaton For Chemotherapy Induced Nausea And Vomiting (Cinv) In Patients Treated With Anthracycline+Cyclophosphamide (Ac) For Solid Cancers With And Without Nk-1 Based Regimens. Value Health. 2015 Nov;18(7):A465.

Halpern AB, Lyman GH, Walsh TJ, Kontoyiannis DP, Walter RB. Evidence-based focused review of primary antifungal prophylaxis during curative-intent therapy for acute myeloid leukemia. Blood. 2015 Oct 26.

Last updated: February 2016


EDITORIAL  20 MARCH 2019

It’s time to talk about ditching statistical significance

Looking beyond a much used and abused measure would make science harder, but better.

https://www.nature.com/articles/d41586-019-00874-8

Editorial

Moving to a World Beyond “p < 0.05”

Ronald L. Wasserstein, Allen L. Schirm & Nicole A. Lazar. Pages 1-19 | Published online: 20 Mar 2019

https://www.tandfonline.com/doi/full/10.1080/00031305.2019.1583913

https://en.wikipedia.org/wiki/American_Statistical_Association

YouTube Channel – Computer History Museum: https://www.youtube.com/channel/UCHDr4RtxwA1KqKGwxgdK4Vg

Hi! How are you? I hope you are well!

For those of you who do not know me or this blog: I graduated in Biomedicine at the Federal University of Triangulo Mineiro (Uberaba, 2003-2007), I hold a Master’s degree for lung cancer research in mice from the Faculty of Medicine of Sao Jose do Rio Preto (2008-2012), and since 2012 I have worked as a student supervisor in Sao Jose do Rio Preto. Maybe I will try to pursue a doctorate and a PhD in order to work as a professor, scientist and/or researcher in Brazil or abroad. As you know, human interdependence is essential for a country’s progress in many respects. In my dissertation I made very interesting, important, innovative and detailed graphics of the weight variation of all mice (Control Group, Study Group 1 and Study Group 2) of different ages over the entire experimental period, and I compared them for a better and more detailed analysis. It was a very innovative and important piece of research, as was my monograph (Chagas disease research: induction of benzonidazole resistance in human isolates of Trypanosoma cruzi). There was no statistically significant difference among the groups, but discussing this fact is very important for the scientific community, in line with an editorial published in Nature (It’s time to talk about ditching statistical significance – Looking beyond a much used and abused measure would make science harder, but better. 20 March 2019). There are posts on my blog about this very important subject for science. Discovering how the initial stages of certain diseases work, such as fatal diseases, is of course of huge importance for science. There are many biological factors in the human body that interact in a very complex way. It is therefore necessary to carry out new, much more detailed and modern scientific research, with a high degree of precision, even when there is no statistically significant difference between certain factors. New scientific discoveries are always essential for the world’s progress.

As you know, we need more efficient vaccines and drugs, so it is very important to carry out more detailed and efficient research in mice and, of course, in humans. There are well-defined stages in the development of vaccines and drugs. So I would very much like you to visit and share this blog I made: https://science1984.wordpress.com. I do not earn money from this blog. Its content is very good, with very high quality! There is a very large amount of excellent information on this blog, on topics such as human health, scientific research in humans, and animal models of human diseases such as cardiovascular disease. The diffusion of knowledge is always essential for a country’s progress!! Article from my dissertation: The influence of physical activity in the progression of experimental lung cancer in mice – Pathol Res Pract. 2012 Jul 15;208(7):377-81. The graphics I made of the weight variation of all mice of different ages over the entire experimental period are not in the scientific article related to my dissertation, nor in the dissertation itself, and neither are the details about the animals’ exercise and rest times. These data can be an excellent reference for many types of research, such as genetic engineering. The age of the mouse or the human being, together with genetics, influences pathophysiology in certain ways in both humans and mice, so research in mice is as important to society as research in humans. I was invited over the Internet, through direct messages, to take part in 72 very important science events in 31 cities in less than 2 years (Auckland, Melbourne, Toronto, Edinburgh, Madrid, Suzhou, Istanbul, Miami, Singapore, Kuala Lumpur, Abu Dhabi, San Diego, Bangkok, Dublin, Sao Paulo, Dubai, Boston, Berlin, Stockholm, Prague, Valencia, Osaka, Amsterdam, Helsinki, Paris, Tokyo, Vienna, Rome, Zurich, London and Frankfurt), because I participated in very important research. Images about this are on my blog, of course.

Many people worldwide have visited and liked my blog, including renowned professors, scientists and researchers!! So sharing this blog is very important to the world’s society!! Visit and share it if possible!! More people worldwide need to know about it!! 02/26/2021

I’d like to point you to relevant links on this blog, which I made with great dedication in a very short time. Its content is very good and important, with very high quality!


There is a very large amount of important information here: links, videos, websites, images, texts and photos. Many people worldwide have visited and liked it, including renowned professors, scientists and researchers around the world. I have received much positive feedback about this blog on Facebook, Twitter, LinkedIn, e-mail and Instagram. Images about this are on my blog.

In less than 2 years I was invited over the Internet, through direct messages, to take part in 77 very important scientific events in 32 cities in different countries, because I participated in very important research in Brazil (my dissertation and my monograph). Data about this, such as images and videos, are available on this blog. Unfortunately, there are fatal diseases without efficient drugs or complete prevention methods such as vaccination.

Video – Gratitude: I am very grateful that I was invited over the Internet, through direct messages, to take part in 55 very important science events in 25 cities of different countries in less than 1 year. I participated in very important research in Brazil. Information about this, such as images, is on this blog.


I do not earn any money from this blog, my social networks or my e-mail accounts. I hope this information always contributes significantly to the world’s scientific progress! 02/26/2021

Human interdependence is always present in the world. The diffusion of very important information and knowledge is always essential for a country’s progress in all respects, as you know. So sharing this blog is very important to the world’s society.

In my dissertation I made very interesting, important, innovative and detailed graphics of the weight variation of all mice (Control Group, Study Group 1 and Study Group 2) of different ages over the entire experimental period, and I compared them for a better and more detailed analysis. It was a very innovative and important piece of research, as was my monograph (Chagas disease research: induction of benzonidazole resistance in human isolates of Trypanosoma cruzi). There was no statistically significant difference among the groups, but discussing this fact is very important for the scientific community, in line with an editorial published in Nature (It’s time to talk about ditching statistical significance – Looking beyond a much used and abused measure would make science harder, but better. 20 March 2019 – https://www.nature.com/articles/d41586-019-00874-8). There are posts on my blog about this very important subject for the scientific community.

– Discovering how the initial stages of certain diseases work, such as fatal diseases, is certainly of huge importance for science. There are many biological factors in the human body that interact in a very complex way. It is therefore necessary to carry out new, much more detailed and modern scientific research, with a high degree of precision, even when there is no statistically significant difference between certain factors.
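To make this point concrete, here is a minimal sketch (with made-up weights, not the actual measurements from the dissertation) of why a non-significant comparison can still be informative: the estimated difference and its confidence interval carry information that a bare ´´p > 0.05´´ verdict discards. A normal approximation is used for simplicity:

```python
import math

# Hypothetical mouse weights in grams (illustrative only, not real data).
control = [24.1, 25.3, 23.8, 26.0, 24.7, 25.1]
study   = [25.0, 26.2, 24.5, 26.8, 25.6, 25.9]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    """Sample variance (n - 1 denominator)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Difference in means, its standard error, a normal-approximation 95% CI,
# and a two-sided p-value for the difference.
diff = mean(study) - mean(control)
se = math.sqrt(var(control) / len(control) + var(study) / len(study))
ci = (diff - 1.96 * se, diff + 1.96 * se)
z = diff / se
p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

print(f"difference = {diff:.2f} g, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), p = {p:.3f}")
```

Here p comes out above 0.05, yet the interval still tells us how large the group difference plausibly is, which is exactly the kind of information the editorials argue should not be thrown away.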

´´Understanding the great difficulties of a given subject can be a very long process, but one of great value for the progress of Science in all aspects´´ Rodrigo Nunes Cal

Link about me and ´´my´´ blog:  https://science1984.wordpress.com/sobre/

Links about ´´my´´ monograph: Induction of benzonidazole resistance in human isolates of Trypanosoma cruzi: 

https://science1984.wordpress.com/2018/07/15/my-monography-chagas-disease-research-in-laboratory-2/

https://science1984.wordpress.com/2019/05/08/21809/

– Links about animal models for human diseases, such as cardiovascular diseases:

https://science1984.wordpress.com/2020/04/29/33426/

https://science1984.wordpress.com/2020/04/29/the-good-thing-about-science-is-that-its-true-whether-you-believe-in-it-or-not/

– Links related to ´´my´´ dissertation: The influence of physical activity in the progression of experimental lung cancer in mice. Pathol Res Pract. 2012 Jul 15;208(7):377-81

https://science1984.wordpress.com/2019/11/28/links-of-my-dissertation-the-influence-of-physical-activity-in-the-progression-of-experimental-lung-cancer-in-mice-and-monograph-induction-of-benzonidazole-resistance-in-human-isolates-of-trypanoso/

https://science1984.wordpress.com/2019/11/29/about-my-dissertation-the-influence-of-physical-activity-in-the-progression-of-experimental-lung-cancer-in-mice-pathol-res-pract-2012-jul-152087377-81-doi-10-1016-j-prp-2012-04-006-epub-20/

https://science1984.wordpress.com/2018/07/15/i-did-very-important-detailed-and-innovative-graphics-about-variations-of-all-mice-weigths-during-all-exerimental-time-my-dissertation-they-can-be-an-excelent-reference-for-future-researches-like-2/

https://science1984.wordpress.com/2019/03/08/it-is-fundamental-professors-students-researchers-and-other-people-worldwide-know-about-my-dissertation-because-the-research-was-very-innovative-and-important-in-the-world-these-data-like-graphics/

https://science1984.wordpress.com/2019/05/08/21809/

https://www.weforum.org/agenda/2019/03/what-happens-in-an-internet-minute-in-2019

https://vanallenlab.dana-farber.org/  https://www.spacex.com/ http://www.nasa.gov http://www.usa.gov

http://www.caltech.edu http://www.nobelprize.org http://www.michigan.edu http://www.columbia.edu http://www.yale.edu

http://www.wordpress.com http://www.forbes.com http://www.science1984.wordpress.com http://www.wikipedia.org

http://www.youtube.com http://www.instagram.com http://www.twitter.com http://www.facebook.com http://www.linkedin.com

My Curriculum Lattes: http://buscatextual.cnpq.br/buscatextual/visualizacv.do?id=K4240145A2

http://www.google.com http://www.gmail.com http://www.yahoo.com http://www.ucla.edu http://www.harvard.edu http://www.wikipedia.org

https://www.dfhcc.harvard.edu/insider/member-detail/member/eliezer-van-allen-md/

https://www.broadinstitute.org/bios/eliezer-van-allen

http://www.youtube.com http://www.instagram.com http://www.nobelprize.org

http://www.facebook.com http://www.linkedin.com http://www.twitter.com http://www.forbes.com

https://www.facebook.com/MetastaticProstateCancerProject/ https://www.tesla.com/

http://www.forbes.com http://www.wordpress.com

http://www.wikipedia.org http://www.microsoft.com http://www.amazon.com http://www.alibaba.com

https://www.linkedin.com/in/eliezer-van-allen-8b587633/

https://www.researchgate.net/scientific-contributions/Eliezer-Van-Allen-2157885122

http://www.google.com http://www.gmail.com http://www.yahoo.com

My YouTube Channel: https://www.youtube.com/channel/UC9gsWVbGYWO04iYO2TMrP8Q

My Facebook Page: http://www.facebook.com/scientificblog

https://de.wikipedia.org/wiki/Konrad_Kleinknecht

https://www.karolinska.se/ https://www.manchester.ac.uk/ https://www.cam.ac.uk/

http://www.ox.ac.uk/ http://www.ita.br http://www.unicamp.br http://www.famerp.br http://www.cornell.edu

http://www.google.com http://www.gmail.com http://www.yahoo.com http://www.forbes.com http://www.facebook.com http://www.twitter.com http://www.linkedin.com http://www.facebook.com/scientificblog

http://www.instagram.com http://www.wikipedia.org http://www.nobelprize.org http://www.nasa.gov http://www.harvard.edu http://www.ucla.edu http://www.princeton.edu http://www.stanford.edu

http://www.youtube.com http://www.yale.edu http://www.duke.edu http://www.columbia.edu

https://www.internetlivestats.com/one-second/

https://www.allaccess.com/merge/archive/31294/infographic-what-happens-in-an-internet-minute

https://cerncourier.com/a/einstein-and-heisenberg-the-controversy-over-quantum-physics/


https://www.facebook.com/konrad.kleinknecht

https://www.cleversow.com/

https://computerhistory.org/

https://twitter.com/ComputerHistory

https://www.youtube.com/channel/UCHDr4RtxwA1KqKGwxgdK4Vg

EDITORIAL  

It’s time to talk about ditching statistical significance

Looking beyond a much used and abused measure would make science harder, but better.

https://www.nature.com/articles/d41586-019-00874-8

Editorial

Moving to a World Beyond “p < 0.05”

Ronald L. Wasserstein, Allen L. Schirm & Nicole A. Lazar | Pages 1-19 | Published online: 20 Mar 2019 | nfl5182@psu.edu allenschirm@gmail.com ron@amstat.org

American Statistical Association

https://www.amstat.org/

https://www.facebook.com/AmstatNews

https://ww2.amstat.org/meetings/jsm/2021/index.cfm?fbclid=IwAR2F5_7eVrrIsB62koou076DQsNp5xJUP9amBV0dMce6YgK3UhMSunrNVcg

blog

https://magazine.amstat.org/blog/

https://www.youtube.com/embed/QucLsumQ3n0

http://www.gmail.com http://www.google.com http://www.yahoo.com http://www.wordpress.com http://www.harvard.edu http://www.facebook.com/scientificblog http://www.wikipedia.org http://www.princeton.edu http://www.facebook.com http://www.twitter.com http://www.youtube.com

http://www.linkedin.com http://www.forbes.com http://www.stanford.edu http://www.nobelprize.org http://www.nasa.gov http://www.mit.edu http://www.famerp.br http://www.unicamp.br http://www.ucla.edu http://www.caltech.edu http://www.michigan.edu http://www.cornell.edu

https://futurism.com/the-byte/flying-cars-are-actually-finally-becoming-a-reality-in-japan?fbclid=IwAR2iiWdOumDHu7CA93p5YKn3nRbHyTnqGZNOerHf-MY3sRHekszZG3phlKE

http://www.yale.edu http://www.columbia.edu http://www.ox.ac.uk/ https://www.cam.ac.uk/ https://www.karolinska.se/ https://www.manchester.ac.uk/ http://cnpq.br/ https://www.jax.org/

https://phys.org/news/2021-02-years-physicists-track-lost-particles.html?fbclid=IwAR0AmcJDP9VVLj6VbR48ae_OXjN7hHMCnlDqphMfJkyX5hl9MaLkZ1EFY2E

https://en.wikipedia.org/wiki/History_of_the_Internet

https://phys.org/news/2021-02-scientists-highly-accurate-digital-twin.html?fbclid=IwAR2sYE0rFi3i-nPXzzFgByYYbhRkPOv5yb0uYaFz1RCYtF61JBB2iXfkQ6c


Tesla Giga Berlin employee hints at new colors from world-class paint facility

https://www.teslarati.com/tesla-giga-berlin-new-possible-colors-world-class-facility/

https://genomebiology.biomedcentral.com/articles/10.1186/s13059-021-02283-5

The American Statistical Association @AmstatNews · Nonprofit Organization: https://www.facebook.com/AmstatNews

Machell Town

https://magazine.amstat.org/blog/2021/02/01/machell-town/

Scientists Have Observed A Rare Phenomenon Expanding Our Understanding Of The Quantum Universe.

https://www.secretsofuniverse.in/higgs-dalitz-decay/

https://www.tandfonline.com/doi/full/10.1080/00031305.2019.1583913

What Does Statistically Significant Mean?

https://measuringu.com/statistically-significant/

There was no statistically significant difference. Now what? (´´Não houve diferença estatística significativa. E agora?´´)

https://posgraduando.com/diferenca-estatistica-significativa/

https://biovignan.blogspot.com/2020/03/this-60-years-old-virus-is-causing-all.html?fbclid=IwAR2ZWmltzqpe2Kd0CEDXQLvN1x1VruOgNnuYn0y9BTPKjWOEpcXo4dO0imo

https://www.ft.com/content/e0ecc6b6-5d43-11ea-b0ab-339c2307bcd4?fbclid=IwAR1pB68Q4wCmbQLA8s3cHJjNtu7o6Lz5YoErkVvDW23OpGEBLZCwabktm3Y

https://medicalxpress.com/news/2020-03-stem-cells-bone.html?fbclid=IwAR1HB_Yi30RsIfvtHTI1qGr_1Qfq8PWII15EIvhDUR6yq06uuF35H4G9R-c

https://phys.org/news/2020-03-years-scientists-reveal-benzene.html?fbclid=IwAR1nSn_mak1epvpIfsEjvfR5TFzloEO2lssHMO0R25CCHiCPBRNGKH74BV8

https://www.sciencemag.org/news/2020/03/115-million-more-80-boston-researchers-will-collaborate-tackle-covid-19?fbclid=IwAR0GmyW_wAOzz15yAo7RRfrUlQGSU-UsqIyoNFaI5EFQpLeljlkutsCWd9I

https://www.quantamagazine.org/tadashi-tokieda-collects-math-and-physics-surprises-20181127/?fbclid=IwAR0VfFko9agROZvrY0a_Z5ihb2JmxdVFiVMB8qXaW-2H8A6x3qrXuYfYe6E



EDITORIAL  20 MARCH 2019

It’s time to talk about ditching statistical significance

Looking beyond a much used and abused measure would make science harder, but better.

[Figure: bar chart made of measuring cylinders filled with different amounts of varied coloured liquids.] Some statisticians are calling for P values to be abandoned as an arbitrary threshold of significance. Credit: Erik Dreyer/Getty

Fans of The Hitchhiker’s Guide to the Galaxy know that the answer to life, the Universe and everything is 42. The joke, of course, is that truth cannot be revealed by a single number.

And yet this is the job often assigned to P values: a measure of how surprising a result is, given assumptions about an experiment, including that no effect exists. Whether a P value falls above or below an arbitrary threshold demarcating ‘statistical significance’ (such as 0.05) decides whether hypotheses are accepted, papers are published and products are brought to market. But using P values as the sole arbiter of what to accept as truth can also mean that some analyses are biased, some false positives are overhyped and some genuine effects are overlooked.
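The idea of a P value as "a measure of how surprising a result is, assuming no effect exists" can be made concrete with a permutation test. The sketch below is a minimal pure-Python illustration with made-up measurements (not data from any study discussed here); note that it returns a continuous number, not a verdict at the 0.05 line.

```python
import random

def permutation_p_value(a, b, n_perm=10000, seed=0):
    """Two-sided permutation test for a difference in means: the p-value
    answers 'how often would shuffled group labels (i.e., no effect)
    produce a difference at least as large as the one observed?'"""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:len(a)]) / len(a) - sum(pooled[len(a):]) / len(b))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Made-up measurements; the p-value is a continuous measure of surprise,
# not a binary significant/nonsignificant label.
control = [5.1, 4.9, 5.4, 5.0, 5.2, 4.8]
treated = [5.6, 5.3, 5.8, 5.5, 5.9, 5.4]
p = permutation_p_value(control, treated)
print(f"p = {p:.4f}")
```

Whether this number sits just above or just below 0.05 says nothing categorical about the effect; it is one input among many.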

Change is in the air. In a Comment in this week’s issue, three statisticians call for scientists to abandon statistical significance. The authors do not call for P values themselves to be ditched as a statistical tool — rather, they want an end to their use as an arbitrary threshold of significance. More than 800 researchers have added their names as signatories. A series of related articles is being published by the American Statistical Association this week (R. L. Wasserstein et al. Am. Stat. https://doi.org/10.1080/00031305.2019.1583913; 2019). “The tool has become the tyrant,” laments one article.

Statistical significance is so deeply integrated into scientific practice and evaluation that extricating it would be painful. Critics will counter that arbitrary gatekeepers are better than unclear ones, and that the more useful argument is over which results should count for (or against) evidence of effect. There are reasonable viewpoints on all sides; Nature is not seeking to change how it considers statistical analysis in evaluation of papers at this time, but we encourage readers to share their views (see go.nature.com/correspondence).

If researchers do discard statistical significance, what should they do instead? They can start by educating themselves about statistical misconceptions. Most important will be the courage to consider uncertainty from multiple angles in every study. Logic, background knowledge and experimental design should be considered alongside P values and similar metrics to reach a conclusion and decide on its certainty.

When working out which methods to use, researchers should also focus as much as possible on actual problems. People who will duel to the death over abstract theories on the best way to use statistics often agree on results when they are presented with concrete scenarios.

Researchers should seek to analyse data in multiple ways to see whether different analyses converge on the same answer. Projects that have crowdsourced analyses of a data set to diverse teams suggest that this approach can work to validate findings and offer new insights.
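One low-tech way to "analyse data in multiple ways" is to run the same permutation test with several different test statistics and check whether the resulting p-values point in the same direction. A sketch under stated assumptions, again with invented numbers (loosely inspired by a mouse-weight comparison, but not taken from any real study):

```python
import random
import statistics

def permutation_p(a, b, stat, n_perm=5000, seed=1):
    """Permutation p-value for an arbitrary test statistic stat(a, b)."""
    rng = random.Random(seed)
    observed = abs(stat(a, b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(stat(pooled[:len(a)], pooled[len(a):])) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical final weights (g) for two groups of mice; illustrative only.
group_1 = [22.1, 23.4, 21.8, 22.9, 23.0, 22.5]
group_2 = [20.9, 21.3, 20.4, 21.0, 21.6, 20.7]

# Two different analyses of the same data; if they converge, that is
# (modest) evidence the finding is not an artifact of one analysis choice.
analyses = {
    "difference in means":   lambda a, b: statistics.mean(a) - statistics.mean(b),
    "difference in medians": lambda a, b: statistics.median(a) - statistics.median(b),
}
results = {name: permutation_p(group_1, group_2, stat)
           for name, stat in analyses.items()}
for name, p_val in results.items():
    print(f"{name}: p = {p_val:.4f}")
```

When several reasonable analyses disagree, that disagreement is itself informative and worth reporting.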

In short, be sceptical, pick a good question, and try to answer it in many ways. It takes many numbers to get close to the truth.

Nature 567, 283 (2019) | doi: https://doi.org/10.1038/d41586-019-00874-8


The American Statistician, Volume 73, 2019 – Issue sup1: Statistical Inference in the 21st Century: A World Beyond p < 0.05 (Open access)

Editorial

Moving to a World Beyond “p < 0.05”

Ronald L. Wasserstein, Allen L. Schirm & Nicole A. Lazar | Pages 1-19 | Published online: 20 Mar 2019


This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way.

March 16, 2019

Some of you exploring this special issue of The American Statistician might be wondering if it’s a scolding from pedantic statisticians lecturing you about what not to do with p-values, without offering any real ideas of what to do about the very hard problem of separating signal from noise in data and making decisions under uncertainty. Fear not. In this issue, thanks to 43 innovative and thought-provoking papers from forward-looking statisticians, help is on the way.

1 “Don’t” Is Not Enough

There’s not much we can say here about the perils of p-values and significance testing that hasn’t been said already for decades (Ziliak and McCloskey 2008; Hubbard 2016). If you’re just arriving to the debate, here’s a sampling of what not to do:

  • Don’t base your conclusions solely on whether an association or effect was found to be “statistically significant” (i.e., the p-value passed some arbitrary threshold such as p < 0.05).
  • Don’t believe that an association or effect exists just because it was statistically significant.
  • Don’t believe that an association or effect is absent just because it was not statistically significant.
  • Don’t believe that your p-value gives the probability that chance alone produced the observed association or effect or the probability that your test hypothesis is true.
  • Don’t conclude anything about scientific or practical importance based on statistical significance (or lack thereof).

Don’t. Don’t. Just…don’t. Yes, we talk a lot about don’ts. The ASA Statement on p-Values and Statistical Significance (Wasserstein and Lazar 2016) was developed primarily because after decades, warnings about the don’ts had gone mostly unheeded. The statement was about what not to do, because there is widespread agreement about the don’ts.

Knowing what not to do with p-values is indeed necessary, but it does not suffice. It is as though statisticians were asking users of statistics to tear out the beams and struts holding up the edifice of modern scientific research without offering solid construction materials to replace them. Pointing out old, rotting timbers was a good start, but now we need more.

Recognizing this, in October 2017, the American Statistical Association (ASA) held the Symposium on Statistical Inference, a two-day gathering that laid the foundations for this special issue of The American Statistician. Authors were explicitly instructed to develop papers for the variety of audiences interested in these topics. If you use statistics in research, business, or policymaking but are not a statistician, these articles were indeed written with YOU in mind. And if you are a statistician, there is still much here for you as well.

The papers in this issue propose many new ideas, ideas that in our determination as editors merited publication to enable broader consideration and debate. The ideas in this editorial are likewise open to debate. They are our own attempt to distill the wisdom of the many voices in this issue into an essence of good statistical practice as we currently see it: some do’s for teaching, doing research, and informing decisions.

Yet the voices in the 43 papers in this issue do not sing as one. At times in this editorial and the papers you’ll hear deep dissonance, the echoes of “statistics wars” still simmering today (Mayo 2018). At other times you’ll hear melodies wrapping in a rich counterpoint that may herald an increasingly harmonious new era of statistics. To us, these are all the sounds of statistical inference in the 21st century, the sounds of a world learning to venture beyond “p < 0.05.”

This is a world where researchers are free to treat “p = 0.051” and “p = 0.049” as not being categorically different, where authors no longer find themselves constrained to selectively publish their results based on a single magic number. In this world, where studies with “p < 0.05” and studies with “p > 0.05” are not automatically in conflict, researchers will see their results more easily replicated—and, even when not, they will better understand why. As we venture down this path, we will begin to see fewer false alarms, fewer overlooked discoveries, and the development of more customized statistical strategies. Researchers will be free to communicate all their findings in all their glorious uncertainty, knowing their work is to be judged by the quality and effective communication of their science, and not by their p-values. As “statistical significance” is used less, statistical thinking will be used more.

The ASA Statement on P-Values and Statistical Significance started moving us toward this world. As of the date of publication of this special issue, the statement has been viewed over 294,000 times and cited over 1700 times—an average of about 11 citations per week since its release. Now we must go further. That’s what this special issue of The American Statistician sets out to do.

To get to the do’s, though, we must begin with one more don’t.

2 Don’t Say “Statistically Significant”

The ASA Statement on P-Values and Statistical Significance stopped just short of recommending that declarations of “statistical significance” be abandoned. We take that step here. We conclude, based on our review of the articles in this special issue and the broader literature, that it is time to stop using the term “statistically significant” entirely. Nor should variants such as “significantly different,” “p < 0.05,” and “nonsignificant” survive, whether expressed in words, by asterisks in a table, or in some other way.

Regardless of whether it was ever useful, a declaration of “statistical significance” has today become meaningless. Made broadly known by Fisher’s use of the phrase (1925), Edgeworth’s (1885) original intention for statistical significance was simply as a tool to indicate when a result warrants further scrutiny. But that idea has been irretrievably lost. Statistical significance was never meant to imply scientific importance, and the confusion of the two was decried soon after its widespread use (Boring 1919). Yet a full century later the confusion persists.

And so the tool has become the tyrant. The problem is not simply use of the word “significant,” although the statistical and ordinary language meanings of the word are indeed now hopelessly confused (Ghose 2013); the term should be avoided for that reason alone. The problem is a larger one, however: using bright-line rules for justifying scientific claims or conclusions can lead to erroneous beliefs and poor decision making (ASA statement, Principle 3). A label of statistical significance adds nothing to what is already conveyed by the value of p; in fact, this dichotomization of p-values makes matters worse.

For example, no p-value can reveal the plausibility, presence, truth, or importance of an association or effect. Therefore, a label of statistical significance does not mean or imply that an association or effect is highly probable, real, true, or important. Nor does a label of statistical nonsignificance lead to the association or effect being improbable, absent, false, or unimportant. Yet the dichotomization into “significant” and “not significant” is taken as an imprimatur of authority on these characteristics. In a world without bright lines, on the other hand, it becomes untenable to assert dramatic differences in interpretation from inconsequential differences in estimates. As Gelman and Stern (2006) famously observed, the difference between “significant” and “not significant” is not itself statistically significant.
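The Gelman and Stern observation can be reproduced in a few lines under a normal approximation. In this hypothetical sketch (the estimates and standard errors are invented for illustration), study A looks "significant" and study B does not, yet the difference between the two estimates is itself unremarkable:

```python
import math

def two_sided_p(estimate, se):
    """Two-sided p-value for estimate/se under a normal approximation:
    erfc(z / sqrt(2)) equals 2 * (1 - Phi(z)) for z = |estimate/se|."""
    z = abs(estimate / se)
    return math.erfc(z / math.sqrt(2))

# Hypothetical study A: estimate 0.25, SE 0.10 -> labelled "significant".
p_a = two_sided_p(0.25, 0.10)
# Hypothetical study B: estimate 0.10, SE 0.10 -> labelled "not significant".
p_b = two_sided_p(0.10, 0.10)
# Yet the difference between the two estimates is far from "significant":
se_diff = math.sqrt(0.10**2 + 0.10**2)
p_diff = two_sided_p(0.25 - 0.10, se_diff)
print(f"p_A = {p_a:.3f}, p_B = {p_b:.3f}, p of (A - B) = {p_diff:.3f}")
```

Declaring A "real" and B "absent" on these numbers asserts a dramatic difference in interpretation from an inconsequential difference in estimates.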

Furthermore, this false split into “worthy” and “unworthy” results leads to the selective reporting and publishing of results based on their statistical significance—the so-called “file drawer problem” (Rosenthal 1979). And the dichotomized reporting problem extends beyond just publication, notes Amrhein, Trafimow, and Greenland (2019): when authors use p-value thresholds to select which findings to discuss in their papers, “their conclusions and what is reported in subsequent news and reviews will be biased…Such selective attention based on study outcomes will therefore not only distort the literature but will slant published descriptions of study results—biasing the summary descriptions reported to practicing professionals and the general public.” For the integrity of scientific publishing and research dissemination, therefore, whether a p-value passes any arbitrary threshold should not be considered at all when deciding which results to present or highlight.

To be clear, the problem is not that of having only two labels. Results should not be trichotomized, or indeed categorized into any number of groups, based on arbitrary p-value thresholds. Similarly, we need to stop using confidence intervals as another means of dichotomizing (based on whether a null value falls within the interval). And, to preclude a reappearance of this problem elsewhere, we must not begin arbitrarily categorizing other statistical measures (such as Bayes factors).

Despite the limitations of p-values (as noted in Principles 5 and 6 of the ASA statement), however, we are not recommending that the calculation and use of continuous p-values be discontinued. Where p-values are used, they should be reported as continuous quantities (e.g., p = 0.08). They should also be described in language stating what the value means in the scientific context. We believe that a reasonable prerequisite for reporting any p-value is the ability to interpret it appropriately. We say more about this in Section 3.3.

To move forward to a world beyond “p < 0.05,” we must recognize afresh that statistical inference is not—and never has been—equivalent to scientific inference (Hubbard, Haig, and Parsa 2019; Ziliak 2019). However, looking to statistical significance for a marker of scientific observations’ credibility has created a guise of equivalency. Moving beyond “statistical significance” opens researchers to the real significance of statistics, which is “the science of learning from data, and of measuring, controlling, and communicating uncertainty” (Davidian and Louis 2012).

In sum, “statistically significant”—don’t say it and don’t use it.

3 There Are Many Do’s

With the don’ts out of the way, we can finally discuss ideas for specific, positive, constructive actions. We have a massive list of them in the seventh section of this editorial! In that section, the authors of all the articles in this special issue each provide their own short set of do’s. Those lists, and the rest of this editorial, will help you navigate the substantial collection of articles that follows.

Because of the size of this collection, we take the liberty here of distilling our readings of the articles into a summary of what can be done to move beyond “p < 0.05.” You will find the rich details in the articles themselves.

What you will NOT find in this issue is one solution that majestically replaces the outsized role that statistical significance has come to play. The statistical community has not yet converged on a simple paradigm for the use of statistical inference in scientific research—and in fact it may never do so. A one-size-fits-all approach to statistical inference is an inappropriate expectation, even after the dust settles from our current remodeling of statistical practice (Tong 2019). Yet solid principles for the use of statistics do exist, and they are well explained in this special issue.

We summarize our recommendations in two sentences totaling seven words: “Accept uncertainty. Be thoughtful, open, and modest.” Remember “ATOM.”

3.1 Accept Uncertainty

Uncertainty exists everywhere in research. And, just like with the frigid weather in a Wisconsin winter, there are those who will flee from it, trying to hide in warmer havens elsewhere. Others, however, accept and even delight in the omnipresent cold; these are the ones who buy the right gear and bravely take full advantage of all the wonders of a challenging climate. Significance tests and dichotomized p-values have turned many researchers into scientific snowbirds, trying to avoid dealing with uncertainty by escaping to a “happy place” where results are either statistically significant or not. In the real world, data provide a noisy signal. Variation, one of the causes of uncertainty, is everywhere. Exact replication is difficult to achieve. So it is time to get the right (statistical) gear and “move toward a greater acceptance of uncertainty and embracing of variation” (Gelman 2016).

Statistical methods do not rid data of their uncertainty. “Statistics,” Gelman (2016) says, “is often sold as a sort of alchemy that transmutes randomness into certainty, an ‘uncertainty laundering’ that begins with data and concludes with success as measured by statistical significance.” To accept uncertainty requires that we “treat statistical results as being much more incomplete and uncertain than is currently the norm” (Amrhein, Trafimow, and Greenland 2019). We must “countenance uncertainty in all statistical conclusions, seeking ways to quantify, visualize, and interpret the potential for error” (Calin-Jageman and Cumming 2019).

“Accept uncertainty and embrace variation in effects,” advise McShane et al. in Section 7 of this editorial. “[W]e can learn much (indeed, more) about the world by forsaking the false promise of certainty offered by dichotomous declarations of truth or falsity—binary statements about there being ‘an effect’ or ‘no effect’—based on some p-value or other statistical threshold being attained.”

We can make acceptance of uncertainty more natural to our thinking by accompanying every point estimate in our research with a measure of its uncertainty such as a standard error or interval estimate. Reporting and interpreting point and interval estimates should be routine. However, simplistic use of confidence intervals as a measurement of uncertainty leads to the same bad outcomes as use of statistical significance (especially, a focus on whether such intervals include or exclude the “null hypothesis value”). Instead, Greenland (2019) and Amrhein, Trafimow, and Greenland (2019) encourage thinking of confidence intervals as “compatibility intervals,” which use p-values to show the effect sizes that are most compatible with the data under the given model.
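Accompanying every point estimate with a measure of its uncertainty can be as simple as the sketch below (normal approximation, invented numbers; a real analysis might use a t-based interval for a sample this small):

```python
import math
import statistics

def mean_with_interval(sample, z=1.96):
    """Point estimate plus an approximate 95% interval (normal approximation).
    The interval shows the range of effect sizes most compatible with the
    data, rather than a yes/no verdict about a null value."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m, (m - z * se, m + z * se)

# Invented weight changes (grams); illustrative numbers only.
changes = [0.8, 1.4, -0.3, 0.9, 1.1, 0.2, 0.7, 1.0]
m, (lo, hi) = mean_with_interval(changes)
print(f"estimate = {m:.2f} g, 95% interval = ({lo:.2f}, {hi:.2f}) g")
```

Reporting the full interval, and interpreting its whole range rather than only whether it excludes zero, is what makes point-and-interval reporting routine.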

How will accepting uncertainty change anything? To begin, it will prompt us to seek better measures, more sensitive designs, and larger samples, all of which increase the rigor of research. It also helps us be modest (the fourth of our four principles, on which we will expand in Section 3.4) and encourages “meta-analytic thinking” (Cumming 2014). Accepting uncertainty as inevitable is a natural antidote to the seductive certainty falsely promised by statistical significance. With this new outlook, we will naturally seek out replications and the integration of evidence through meta-analyses, which usually requires point and interval estimates from contributing studies. This will in turn give us more precise overall estimates for our effects and associations. And this is what will lead to the best research-based guidance for practical decisions.

Accepting uncertainty leads us to be thoughtful, the second of our four principles.

3.2 Be Thoughtful

What do we mean by this exhortation to “be thoughtful”? Researchers already clearly put much thought into their work. We are not accusing anyone of laziness. Rather, we are envisioning a sort of “statistical thoughtfulness.” In this perspective, statistically thoughtful researchers begin above all else with clearly expressed objectives. They recognize when they are doing exploratory studies and when they are doing more rigidly pre-planned studies. They invest in producing solid data. They consider not one but a multitude of data analysis techniques. And they think about so much more.

3.2.1 Thoughtfulness in the Big Picture

“[M]ost scientific research is exploratory in nature,” Tong (2019) contends. “[T]he design, conduct, and analysis of a study are necessarily flexible, and must be open to the discovery of unexpected patterns that prompt new questions and hypotheses. In this context, statistical modeling can be exceedingly useful for elucidating patterns in the data, and researcher degrees of freedom can be helpful and even essential, though they still carry the risk of overfitting. The price of allowing this flexibility is that the validity of any resulting statistical inferences is undermined.”

Calin-Jageman and Cumming (2019) caution that “in practice the dividing line between planned and exploratory research can be difficult to maintain. Indeed, exploratory findings have a slippery way of ‘transforming’ into planned findings as the research process progresses.” At the bottom of that slippery slope one often finds results that don’t reproduce.

Anderson (2019) proposes three questions for thoughtful researchers to ask when evaluating research results: What are the practical implications of the estimate? How precise is the estimate? And is the model correctly specified? The latter question leads naturally to three more: Are the modeling assumptions understood? Are these assumptions valid? And do the key results hold up when other modeling choices are made? Anderson further notes, “Modeling assumptions (including all the choices from model specification to sample selection and the handling of data issues) should be sufficiently documented so independent parties can critique, and replicate, the work.”

Drawing on archival research done at the Guinness Archives in Dublin, Ziliak (2019) emerges with ten “G-values” he believes we all wish to maximize in research. That is, we want large G-values, not small p-values. The ten principles of Ziliak’s “Guinnessometrics” are derived primarily from his examination of experiments conducted by statistician William Sealy Gosset while working as Head Brewer for Guinness. Gosset took an economic approach to the logic of uncertainty, preferring balanced designs over random ones and estimation of gambles over bright-line “testing.” Take, for example, Ziliak’s G-value 10: “Consider purpose of the inquiry, and compare with best practice,” in the spirit of what farmers and brewers must do. The purpose is generally NOT to falsify a null hypothesis, says Ziliak. Ask what is at stake, he advises, and determine what magnitudes of change are humanly or scientifically meaningful in context.

Pogrow (2019) offers an approach based on practical benefit rather than statistical or practical significance. This approach is especially useful, he says, for assessing whether interventions in complex organizations (such as hospitals and schools) are effective, and also for increasing the likelihood that the observed benefits will replicate in subsequent research and in clinical practice. In this approach, “practical benefit” recognizes that reliance on small effect sizes can be as problematic as relying on p-values.

Thoughtful research prioritizes sound data production by putting energy into the careful planning, design, and execution of the study (Tong 2019).

Locascio (2019) urges researchers to be prepared for a new publishing model that evaluates their research based on the importance of the questions being asked and the methods used to answer them, rather than the outcomes obtained.

3.2.2 Thoughtfulness Through Context and Prior Knowledge

Thoughtful research considers the scientific context and prior evidence. In this regard, a declaration of statistical significance is the antithesis of thoughtfulness: it says nothing about practical importance, and it ignores what previous studies have contributed to our knowledge.

Thoughtful research looks ahead to prospective outcomes in the context of theory and previous research. Researchers would do well to ask, What do we already know, and how certain are we of what we know? And building on that and on the field’s theory, what magnitudes of differences, odds ratios, or other effect sizes are practically important? These questions would naturally lead a researcher, for example, to use existing evidence from a literature review to identify specifically the findings that would be practically important for the key outcomes under study.

Thoughtful research includes careful consideration of the definition of a meaningful effect size. As a researcher you should communicate this up front, before data are collected and analyzed. Afterwards is just too late; it is dangerously easy to justify observed results after the fact and to overinterpret trivial effect sizes as being meaningful. Many authors in this special issue argue that consideration of the effect size and its “scientific meaningfulness” is essential for reliable inference (e.g., Blume et al. 2019; Betensky 2019). This concern is also addressed in the literature on equivalence testing (Wellek 2017).

Thoughtful research considers “related prior evidence, plausibility of mechanism, study design and data quality, real world costs and benefits, novelty of finding, and other factors that vary by research domain…without giving priority to p-values or other purely statistical measures” (McShane et al. 2019).

Thoughtful researchers “use a toolbox of statistical techniques, employ good judgment, and keep an eye on developments in statistical and data science,” conclude Heck and Krueger (2019), who demonstrate how the p-value can be useful to researchers as a heuristic.

3.2.3 Thoughtful Alternatives and Complements to P-Values

Thoughtful research considers multiple approaches for solving problems. This special issue includes some ideas for supplementing or replacing p-values. Here is a short summary of some of them, with a few technical details:

Amrhein, Trafimow, and Greenland (2019) and Greenland (2019) advise that null p-values should be supplemented with a p-value from a test of a pre-specified alternative (such as a minimal important effect size). To reduce confusion with posterior probabilities and better portray evidential value, they further advise that p-values be transformed into s-values (Shannon information, surprisal, or binary logworth), s = −log2(p). This measure of evidence affirms other arguments that the evidence against a hypothesis contained in the p-value is not nearly as strong as is believed by many researchers. The change of scale also moves users away from probability misinterpretations of the p-value.
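The s-value transformation is simple enough to compute directly. A minimal sketch in Python (the function name is ours); the coin-flip reading follows the surprisal interpretation: s bits of information are about as surprising as s consecutive heads from a fair coin:

```python
import math

def s_value(p: float) -> float:
    """Shannon information (surprisal) of a p-value: s = -log2(p)."""
    if not 0.0 < p <= 1.0:
        raise ValueError("p must lie in (0, 1]")
    return -math.log2(p)

# p = 0.05 carries only about 4.3 bits of information against the null --
# roughly as surprising as 4 heads in a row from a fair coin.
print(round(s_value(0.05), 2))   # 4.32
print(round(s_value(0.005), 2))  # 7.64
```

The modest size of these numbers is the point: a result at the conventional 0.05 threshold is no more surprising, under the null, than a short run of heads.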

Blume et al. (2019) offer a “second generation p-value (SGPV),” the characteristics of which mimic or improve upon those of p-values but take practical significance into account. The null hypothesis from which an SGPV is computed is a composite hypothesis representing a range of differences that would be practically or scientifically inconsequential, as in equivalence testing (Wellek 2017). This range is determined in advance by the experimenters. When the SGPV is 1, the data only support null hypotheses; when the SGPV is 0, the data are incompatible with any of the null hypotheses. SGPVs between 0 and 1 are inconclusive at varying levels (maximally inconclusive at or near SGPV = 0.5). Blume et al. illustrate how the SGPV provides a straightforward and useful descriptive summary of the data. They argue that it eliminates the problem that classical statistical significance does not imply scientific relevance, that it lowers false discovery rates, and that its conclusions are more likely to reproduce in subsequent studies.

The “analysis of credibility” (AnCred) is promoted by Matthews (2019). This approach takes account of both the width of the confidence interval and the location of its bounds when assessing weight of evidence. AnCred assesses the credibility of inferences based on the confidence interval by determining the level of prior evidence needed for a new finding to provide credible evidence for a nonzero effect. If this required level of prior evidence is supported by current knowledge and insight, Matthews calls the new result “credible evidence for a non-zero effect,” irrespective of its statistical significance/nonsignificance.

Colquhoun (2019) proposes continuing the use of continuous p-values, but only in conjunction with the “false positive risk (FPR).” The FPR answers the question, “If you observe a ‘significant’ p-value after doing a single unbiased experiment, what is the probability that your result is a false positive?” It tells you what most people mistakenly still think the p-value does, Colquhoun says. The problem, however, is that to calculate the FPR you need to specify the prior probability that an effect is real, and it’s rare to know this. Colquhoun suggests that the FPR could be calculated with a prior probability of 0.5, the largest value reasonable to assume in the absence of hard prior data. The FPR found this way is in a sense the minimum false positive risk (mFPR); less plausible hypotheses (prior probabilities below 0.5) would give even bigger FPRs, Colquhoun says, but the mFPR would be a big improvement on reporting a p-value alone. He points out that p-values near 0.05 are, under a variety of assumptions, associated with minimum false positive risks of 20–30%, which should stop a researcher from making too big a claim about the “statistical significance” of such a result.
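To make the FPR idea concrete, here is a sketch of the kind of calculation involved, under a simple two-sided normal model with a prior probability of 0.5 and a study designed for 80% power at α = 0.05. The model, the default power, and the function name are our assumptions for illustration; Colquhoun’s own calculations use a similar framework and reach similar numbers:

```python
from statistics import NormalDist

def min_false_positive_risk(p: float, power: float = 0.8) -> float:
    """Minimum false positive risk for an observed two-sided p-value,
    with prior probability 0.5 that a real effect exists ("p-equals"
    reasoning: compare the chances of seeing exactly this p-value under
    the null and under the alternative)."""
    nd = NormalDist()
    z = nd.inv_cdf(1.0 - p / 2.0)  # observed |z| for this p-value
    # Effect size (in standard-error units) the study was powered to
    # detect at alpha = 0.05:
    delta = nd.inv_cdf(0.975) + nd.inv_cdf(power)
    f0 = 2.0 * nd.pdf(z)                        # density of |Z| under the null
    f1 = nd.pdf(z - delta) + nd.pdf(z + delta)  # density under the alternative
    return f0 / (f0 + f1)  # posterior probability of the null at prior odds 1

print(round(min_false_positive_risk(0.05), 2))   # about 0.29
print(round(min_false_positive_risk(0.001), 2))  # about 0.01
```

Under these assumptions a p-value right at 0.05 carries a minimum false positive risk near 30%, consistent with the 20–30% range cited above, while p = 0.001 brings the risk down to about 1%.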

Benjamin and Berger (2019) propose a different supplement to the null p-value. The Bayes factor bound (BFB)—which under typically plausible assumptions is the value 1/(−e·p·ln p)—represents the upper bound of the ratio of data-based odds of the alternative hypothesis to the null hypothesis. Benjamin and Berger advise that the BFB should be reported along with the continuous p-value. This is an incomplete step toward revising practice, they argue, but one that at least confronts the researcher with the maximum possible odds that the alternative hypothesis is true—which is what researchers often think they are getting with a p-value. The BFB, like the FPR, often clarifies that the evidence against the null hypothesis contained in the p-value is not nearly as strong as is believed by many researchers.
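The BFB is a one-line calculation. A minimal sketch using the formula above; the follow-on step converting the bound into a minimum false positive risk at prior odds of 1 is our addition, echoing the FPR discussion:

```python
import math

def bayes_factor_bound(p: float) -> float:
    """Upper bound on the Bayes factor (alternative vs. null) implied
    by a p-value: BFB = 1 / (-e * p * ln p), valid for 0 < p < 1/e."""
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("the bound applies for 0 < p < 1/e")
    return 1.0 / (-math.e * p * math.log(p))

bfb = bayes_factor_bound(0.05)
print(round(bfb, 1))                 # 2.5: odds of H1 over H0 at most ~2.5 to 1
# With prior odds of 1, posterior odds are at most bfb, so the false
# positive risk is at least 1 / (1 + bfb):
print(round(1.0 / (1.0 + bfb), 2))   # 0.29
```

So even at its most favorable reading, p = 0.05 supports odds of only about 2.5 to 1 for the alternative, far weaker than the 19-to-1 that the 0.05 threshold seems to suggest.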

Goodman, Spruill, and Komaroff (2019) propose a two-stage approach to inference, requiring both a small p-value below a pre-specified level and a pre-specified sufficiently large effect size before declaring a result “significant.” They argue that this method has improved performance relative to use of dichotomized p-values alone.

Gannon, Pereira, and Polpo (2019) have developed a testing procedure combining frequentist and Bayesian tools to provide a significance level that is a function of sample size.

Manski (2019) and Manski and Tetenov (2019) urge a return to the use of statistical decision theory, which they say has largely been forgotten. Statistical decision theory is not based on p-value thresholds and readily distinguishes between statistical and clinical significance.

Billheimer (2019) suggests abandoning inference about parameters, which are frequently hypothetical quantities used to idealize a problem. Instead, he proposes focusing on the prediction of future observables, and their associated uncertainty, as a means to improving science and decision-making.

3.2.4 Thoughtful Communication of Confidence

Be thoughtful and clear about the level of confidence or credibility that is present in statistical results.

Amrhein, Trafimow, and Greenland (2019) and Greenland (2019) argue that the use of words like “significance” in conjunction with p-values and “confidence” with interval estimates misleads users into overconfident claims. They propose that researchers think of p-values as measuring the compatibility between hypotheses and data, and interpret interval estimates as “compatibility intervals.”

In what may be a controversial proposal, Goodman (2018) suggests requiring “that any researcher making a claim in a study accompany it with their estimate of the chance that the claim is true.” Goodman calls this the confidence index. For example, along with stating “This drug is associated with elevated risk of a heart attack, relative risk (RR) = 2.4, p = 0.03,” Goodman says investigators might add a statement such as “There is an 80% chance that this drug raises the risk, and a 60% chance that the risk is at least doubled.” Goodman acknowledges, “Although simple on paper, requiring a confidence index would entail a profound overhaul of scientific and statistical practice.”

In a similar vein, Hubbard and Carriquiry (2019) urge that researchers prominently display the probability the hypothesis is true or a probability distribution of an effect size, or provide sufficient information for future researchers and policy makers to compute it. The authors further describe why such a probability is necessary for decision making, how it could be estimated by using historical rates of reproduction of findings, and how this same process can be part of continuous “quality control” for science.

Being thoughtful in our approach to research will lead us to be open in our design, conduct, and presentation of it as well.

3.3 Be Open

We envision openness as embracing certain positive practices in the development and presentation of research work.

3.3.1 Openness to Transparency and to the Role of Expert Judgment

First, we repeat oft-repeated advice: Be open to “open science” practices. Calin-Jageman and Cumming (2019), Locascio (2019), and others in this special issue urge adherence to practices such as public pre-registration of methods, transparency and completeness in reporting, shared data and code, and even pre-registered (“results-blind”) review. Completeness in reporting, for example, requires not only describing all analyses performed but also presenting all findings obtained, without regard to statistical significance or any such criterion.

Openness also includes understanding and accepting the role of expert judgment, which enters the practice of statistical inference and decision-making in numerous ways (O’Hagan 2019). “Indeed, there is essentially no aspect of scientific investigation in which judgment is not required,” O’Hagan observes. “Judgment is necessarily subjective, but should be made as carefully, as objectively, and as scientifically as possible.”

Subjectivity is involved in any statistical analysis, Bayesian or frequentist. Gelman and Hennig (2017) observe, “Personal decision making cannot be avoided in statistical data analysis and, for want of approaches to justify such decisions, the pursuit of objectivity degenerates easily to a pursuit to merely appear objective.” One might say that subjectivity is not a problem; it is part of the solution.

Acknowledging this, Brownstein et al. (2019) point out that expert judgment and knowledge are required in all stages of the scientific method. They examine the roles of expert judgment throughout the scientific process, especially regarding the integration of statistical and content expertise. “All researchers, irrespective of their philosophy or practice, use expert judgment in developing models and interpreting results,” say Brownstein et al. “We must accept that there is subjectivity in every stage of scientific inquiry, but objectivity is nevertheless the fundamental goal. Therefore, we should base judgments on evidence and careful reasoning, and seek wherever possible to eliminate potential sources of bias.”

How does one rigorously elicit expert knowledge and judgment in an effective, unbiased, and transparent way? O’Hagan (2019) addresses this, discussing protocols for eliciting expert knowledge in as unbiased and scientifically sound a way as possible. It is also important for such elicited knowledge to be examined critically; comparing it to actual study results is an important diagnostic step.

3.3.2 Openness in Communication

Be open in your reporting. Report p-values as continuous, descriptive statistics, as we explain in Section 2. We realize that this leaves researchers without their familiar bright line anchors. Yet if we were to propose a universal template for presenting and interpreting continuous p-values we would violate our own principles! Rather, we believe that the thoughtful use and interpretation of p-values will never adhere to a rigid rulebook, and will instead inevitably vary from study to study. Despite these caveats, we can offer recommendations for sound practices, as described below.

In all instances, regardless of the value taken by p or any other statistic, consider what McShane et al. (2019) call the “currently subordinate factors”—the factors that should no longer be subordinate to “p < 0.05.” These include relevant prior evidence, plausibility of mechanism, study design and data quality, and the real-world costs and benefits that determine what effects are scientifically important. The scientific context of your study matters, they say, and this should guide your interpretation.

When using p-values, remember not only Principle 5 of the ASA statement: “A p-value…does not measure the size of an effect or the importance of a result” but also Principle 6: “By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.” Despite these limitations, if you present p-values, do so for more than one hypothesized value of your variable of interest (Fraser 2019; Greenland 2019), such as 0 and at least one plausible, relevant alternative, such as the minimum practically important effect size (which should be determined before analyzing the data).
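Presenting a p-value for a second hypothesized value requires no new machinery. A sketch under a normal approximation, with hypothetical numbers (the function name, estimate, standard error, and minimal important effect are all invented for illustration):

```python
from statistics import NormalDist

def p_value(estimate: float, se: float, hypothesized: float = 0.0) -> float:
    """Two-sided p-value for a normally distributed estimate against an
    arbitrary hypothesized value, not just zero."""
    z = abs(estimate - hypothesized) / se
    return 2.0 * (1.0 - NormalDist().cdf(z))

# Hypothetical study: estimated effect 0.30 (SE 0.12), with a
# pre-specified minimum practically important effect of 0.25.
print(round(p_value(0.30, 0.12, 0.0), 3))    # 0.012, against "no effect"
print(round(p_value(0.30, 0.12, 0.25), 3))   # 0.677, against the minimal important effect
```

Here the same data yield a small p-value against “no effect” but a large one against the minimal important effect; reporting only the first test would overstate what was learned about practical importance.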

Betensky (2019) also reminds us to interpret the p-value in the context of sample size and meaningful effect size.

Instead of p, you might consider presenting the s-value (Greenland 2019), which is described in Section 3.2. As noted in Section 3.1, you might present a confidence interval. Sound practices in the interpretation of confidence intervals include (1) discussing both the upper and lower limits and whether they have different practical implications, (2) paying no particular attention to whether the interval includes the null value, and (3) remembering that an interval is itself an estimate subject to error and generally provides only a rough indication of uncertainty given that all of the assumptions used to create it are correct and, thus, for example, does not “rule out” values outside the interval. Amrhein, Trafimow, and Greenland (2019) suggest that interval estimates be interpreted as “compatibility” intervals rather than as “confidence” intervals, showing the values that are most compatible with the data, under the model used to compute the interval. They argue that such an interpretation and the practices outlined here can help guard against overconfidence.

It is worth noting that Tong (2019) disagrees with using p-values as descriptive statistics. “Divorced from the probability claims attached to such quantities (confidence levels, nominal Type I errors, and so on), there is no longer any reason to privilege such quantities over descriptive statistics that more directly characterize the data at hand.” He further states, “Methods with alleged generality, such as the p-value or Bayes factor, should be avoided in favor of discipline- and problem-specific solutions that can be designed to be fit for purpose.”

Failing to be open in reporting leads to publication bias. Ioannidis (2019) notes the high level of selection bias prevalent in biomedical journals. He defines “selection” as “the collection of choices that lead from the planning of a study to the reporting of p-values.” As an illustration of one form of selection bias, Ioannidis compared “the set of p-values reported in the full text of an article with the set of p-values reported in the abstract.” The main finding, he says, “was that p-values chosen for the abstract tended to show greater significance than those reported in the text, and that the gradient was more pronounced in some types of journals and types of designs.” Ioannidis notes, however, that selection bias “can be present regardless of the approach to inference used.” He argues that in the long run, “the only direct protection must come from standards for reproducible research.”

To be open, remember that one study is rarely enough. The words “a groundbreaking new study” might be loved by news writers but must be resisted by researchers. Breaking ground is only the first step in building a house. It will be suitable for habitation only after much more hard work.

Be open by providing sufficient information so that other researchers can execute meaningful alternative analyses. van Dongen et al. (2019) provide an illustrative example of such alternative analyses by different groups attacking the same problem.

Being open goes hand in hand with being modest.

3.4 Be Modest

Researchers of any ilk rarely advertise their personal modesty. Yet the most successful ones cultivate a practice of being modest throughout their research, by understanding and clearly expressing the limitations of their work.

Being modest requires a reality check (Amrhein, Trafimow, and Greenland 2019). “A core problem,” they observe, “is that both scientists and the public confound statistics with reality. But statistical inference is a thought experiment, describing the predictive performance of models about reality. Of necessity, these models are extremely simplified relative to the complexities of actual study conduct and of the reality being studied. Statistical results must eventually mislead us when they are used and communicated as if they present this complex reality, rather than a model for it. This is not a problem of our statistical methods. It is a problem of interpretation and communication of results.”

Be modest in recognizing there is not a “true statistical model” underlying every problem, which is why it is wise to thoughtfully consider many possible models (Lavine 2019). Rougier (2019) calls on researchers to “recognize that behind every choice of null distribution and test statistic, there lurks a plausible family of alternative hypotheses, which can provide more insight into the null distribution.” p-values, confidence intervals, and other statistical measures are all uncertain. Treating them otherwise is immodest overconfidence.

Remember that statistical tools have their limitations. Rose and McGuire (2019) show how use of stepwise regression in health care settings can lead to policies that are unfair.

Remember also that the amount of evidence for or against a hypothesis provided by p-values near the ubiquitous p < 0.05 threshold (Johnson 2019) is usually much less than you think (Benjamin and Berger 2019; Colquhoun 2019; Greenland 2019).

Be modest about the role of statistical inference in scientific inference. “Scientific inference is a far broader concept than statistical inference,” say Hubbard, Haig, and Parsa (2019). “A major focus of scientific inference can be viewed as the pursuit of significant sameness, meaning replicable and empirically generalizable results among phenomena. Regrettably, the obsession with using statistical inference to report significant differences in data sets actively thwarts cumulative knowledge development.”

The nexus of openness and modesty is to report everything while at the same time not concluding anything from a single study with unwarranted certainty. Because of the strong desire to inform and be informed, there is a relentless demand to state results with certainty. Again, accept uncertainty and embrace variation in associations and effects, because they are always there, like it or not. Understand that expressions of uncertainty are themselves uncertain. Accept that one study is rarely definitive, so encourage, sponsor, conduct, and publish replication studies. Then, use meta-analysis, evidence reviews, and Bayesian methods to synthesize evidence across studies.

Resist the urge to overreach in the generalizability of claims. Watch out for pressure to embellish the abstract or the press release. If the study’s limitations are expressed in the paper but not in the abstract, they may never be read.

Be modest by encouraging others to reproduce your work. Of course, for it to be reproduced readily, you will necessarily have been thoughtful in conducting the research and open in presenting it.

Hubbard and Carriquiry (see their “do list” in Section 7) suggest encouraging reproduction of research by giving “a byline status for researchers who reproduce studies.” They would like to see digital versions of papers dynamically updated to display “Reproduced by….” below original research authors’ names or “not yet reproduced” until it is reproduced.

Indeed, when it comes to reproducibility, Amrhein, Trafimow, and Greenland (2019) demand that we be modest in our expectations. “An important role for statistics in research is the summary and accumulation of information,” they say. “If replications do not find the same results, this is not necessarily a crisis, but is part of a natural process by which science evolves. The goal of scientific methodology should be to direct this evolution toward ever more accurate descriptions of the world and how it works, not toward ever more publication of inferences, conclusions, or decisions.”

Referring to replication studies in psychology, McShane et al. (2019) recommend that future large-scale replication projects “should follow the ‘one phenomenon, many studies’ approach of the Many Labs project and Registered Replication Reports rather than the ‘many phenomena, one study’ approach of the Open Science Collaboration project. In doing so, they should systematically vary method factors across the laboratories involved in the project.” This approach helps achieve the goals of Amrhein, Trafimow, and Greenland (2019) by increasing understanding of why and when results replicate or fail to do so, yielding more accurate descriptions of the world and how it works. It also speaks to significant sameness versus significant difference a la Hubbard, Haig, and Parsa (2019).

Kennedy-Shaffer’s (2019) historical perspective on statistical significance reminds us to be modest, by prompting us to recall how the current state of affairs in p-values has come to be.

Finally, be modest by recognizing that different readers may have very different stakes on the results of your analysis, which means you should try to take the role of a neutral judge rather than an advocate for any hypothesis. This can be done, for example, by pairing every null p-value with a p-value testing an equally reasonable alternative, and by discussing the endpoints of every interval estimate (not only whether it contains the null).

Accept that both scientific inference and statistical inference are hard, and understand that no knowledge will be efficiently advanced using simplistic, mechanical rules and procedures. Accept also that pure objectivity is an unattainable goal—no matter how laudable—and that both subjectivity and expert judgment are intrinsic to the conduct of science and statistics. Accept that there will always be uncertainty, and be thoughtful, open, and modest. ATOM.

And to push this acronym further, we argue in the next section that institutional change is needed, so we put forward that change is needed at the ATOMIC level. Let’s go.

4 Editorial, Educational and Other Institutional Practices Will Have to Change

Institutional reform is necessary for moving beyond statistical significance in any context—whether journals, education, academic incentive systems, or others. Several papers in this special issue focus on reform.

Goodman (2019) notes considerable social change is needed in academic institutions, in journals, and among funding and regulatory agencies. He suggests (see Section 7) partnering “with science reform movements and reformers within disciplines, journals, funding agencies and regulators to promote and reward ‘reproducible’ science and diminish the impact of statistical significance on publication, funding and promotion.” Similarly, Colquhoun (2019) says, “In the end, the only way to solve the problem of reproducibility is to do more replication and to reduce the incentives that are imposed on scientists to produce unreliable work. The publish-or-perish culture has damaged science, as has the judgment of their work by silly metrics.”

Trafimow (2019), who added energy to the discussion of p-values a few years ago by banning them from the journal he edits (Fricker et al. 2019), suggests five “nonobvious changes” to editorial practice. These suggestions, which demand reevaluating traditional practices in editorial policy, will not be trivial to implement but would result in massive change in some journals.

Locascio (2017, 2019) suggests that evaluation of manuscripts for publication should be “results-blind.” That is, manuscripts should be assessed for suitability for publication based on the substantive importance of the research without regard to their reported results. Kmetz (2019) supports this approach as well and says that it would be a huge benefit for reviewers, “freeing [them] from their often thankless present jobs and instead allowing them to review research designs for their potential to provide useful knowledge.” (See also “registered reports” from the Center for Open Science (https://cos.io/rr/?_ga=2.184185454.979594832.1547755516-1193527346.1457026171) and “registered replication reports” from the Association for Psychological Science (https://www.psychologicalscience.org/publications/replication) in relation to this concept.)

Amrhein, Trafimow, and Greenland (2019) ask if results-blind publishing means that anything goes, and then answer affirmatively: “Everything should be published in some form if whatever we measured made sense before we obtained the data because it was connected in a potentially useful way to some research question.” Journal editors, they say, “should be proud about [their] exhaustive methods sections” and base their decisions about the suitability of a study for publication “on the quality of its materials and methods rather than on results and conclusions; the quality of the presentation of the latter is only judged after it is determined that the study is valuable based on its materials and methods.”

A “variation on this theme is pre-registered replication, where a replication study, rather than the original study, is subject to strict pre-registration (e.g., Gelman 2015),” says Tong (2019). “A broader vision of this idea (Mogil and Macleod 2017) is to carry out a whole series of exploratory experiments without any formal statistical inference, and summarize the results by descriptive statistics (including graphics) or even just disclosure of the raw data. When results from this series of experiments converge to a single working hypothesis, it can then be subjected to a pre-registered, randomized and blinded, appropriately powered confirmatory experiment, carried out by another laboratory, in which valid statistical inference may be made.”

Hurlbert, Levine, and Utts (2019) urge abandoning the use of “statistically significant” in all its forms and encourage journals to provide instructions to authors along these lines: “There is now wide agreement among many statisticians who have studied the issue that for reporting of statistical tests yielding p-values it is illogical and inappropriate to dichotomize the p-scale and describe results as ‘significant’ and ‘nonsignificant.’ Authors are strongly discouraged from continuing this never justified practice that originated from confusions in the early history of modern statistics.”

Hurlbert, Levine, and Utts (2019) also urge that the ASA Statement on p-Values and Statistical Significance “be sent to the editor-in-chief of every journal in the natural, behavioral and social sciences for forwarding to their respective editorial boards and stables of manuscript reviewers. That would be a good way to quickly improve statistical understanding and practice.” Kmetz (2019) suggests referring to the ASA statement whenever submitting a paper or revision to any editor, peer reviewer, or prospective reader. Hurlbert et al. encourage a “community grassroots effort” to encourage change in journal procedures.

Campbell and Gustafson (2019) propose a statistical model for evaluating publication policies in terms of weighing novelty of studies (and the likelihood of those studies subsequently being found false) against pre-specified study power. They observe that “no publication policy will be perfect. Science is inherently challenging and we must always be willing to accept that a certain proportion of research is potentially false.”

Statistics education will require major changes at all levels to move to a post “p < 0.05” world. Two papers in this special issue make a specific start in that direction (Maurer et al. 2019; Steel, Liermann, and Guttorp 2019), but we hope that volumes will be written on this topic in other venues. We are excited that, with support from the ASA, the US Conference on Teaching Statistics (USCOTS) will focus its 2019 meeting on teaching inference.

The change that needs to happen demands change to editorial practice, to the teaching of statistics at every level where inference is taught, and to much more. However…

5 It Is Going to Take Work, and It Is Going to Take Time

If it were easy, it would have already been done, because as we have noted, this is nowhere near the first time the alarm has been sounded.

Why is eliminating the use of p-values as a truth arbiter so hard? “The basic explanation is neither philosophical nor scientific, but sociologic; everyone uses them,” says Goodman (2019). “It’s the same reason we can use money. When everyone believes in something’s value, we can use it for real things; money for food, and p-values for knowledge claims, publication, funding, and promotion. It doesn’t matter if the p-value doesn’t mean what people think it means; it becomes valuable because of what it buys.”

Goodman observes that statisticians alone cannot address the problem, and that “any approach involving only statisticians will not succeed.” He calls on statisticians to ally themselves “both with scientists in other fields and with broader based, multidisciplinary scientific reform movements. What statisticians can do within our own discipline is important, but to effectively disseminate or implement virtually any method or policy, we need partners.”

“The loci of influence,” Goodman says, “include journals, scientific lay and professional media (including social media), research funders, healthcare payors, technology assessors, regulators, academic institutions, the private sector, and professional societies. They also can include policy or informational entities like the National Academies…as well as various other science advisory bodies across the government. Increasingly, they are also including non-traditional science reform organizations comprised both of scientists and of the science literate lay public…and a broad base of health or science advocacy groups…”

It is no wonder, then, that the problem has persisted for so long. And persist it has! Hubbard (2019) looked at citation-count data on twenty-five articles and books severely critical of the effect of null hypothesis significance testing (NHST) on good science. Though the issues were well known, Hubbard says, this awareness did nothing to stem NHST usage over time.

Greenland (personal communication, January 25, 2019) notes that cognitive biases and perverse incentives to offer firm conclusions where none are warranted can warp the use of any method. “The core human and systemic problems are not addressed by shifting blame to p-values and pushing alternatives as magic cures—especially alternatives that have been subject to little or no comparative evaluation in either classrooms or practice,” Greenland said. “What we need now is to move beyond debating only our methods and their interpretations, to concrete proposals for elimination of systemic problems such as pressure to produce noteworthy findings rather than to produce reliable studies and analyses. Review and provisional acceptance of reports before their results are given to the journal (Locascio 2019) is one way to address that pressure, but more ideas are needed since review of promotions and funding applications cannot be so blinded. The challenges of how to deal with human biases and incentives may be the most difficult we must face.” Supporting this view is McShane and Gal’s (2016, 2017) empirical demonstration of cognitive dichotomization errors among biomedical and social science researchers—and even among statisticians.

Challenges for editors and reviewers are many. Here’s an example: Fricker et al. (2019) observed that when p-values were suspended from the journal Basic and Applied Social Psychology, authors tended to overstate conclusions.

With all the challenges, how do we get from here to there, from a “p < 0.05” world to a post “p < 0.05” world?

Matthews (2019) notes that “Any proposal encouraging changes in inferential practice must accept the ubiquity of NHST.…Pragmatism suggests, therefore, that the best hope of achieving a change in practice lies in offering inferential tools that can be used alongside the concepts of NHST, adding value to them while mitigating their most egregious features.”

Benjamin and Berger (2019) propose three practices to help researchers during the transition away from use of statistical significance. “…[O]ur goal is to suggest minimal changes that would require little effort for the scientific community to implement,” they say. “Motivating this goal are our hope that easy (but impactful) changes might be adopted and our worry that more complicated changes could be resisted simply because they are perceived to be too difficult for routine implementation.”

Yet there is also concern that progress will stop after a small step or two. Even some proponents of small steps are clear that those small steps still carry us far short of the destination.

For example, Matthews (2019) says that his proposed methodology “is not a panacea for the inferential ills of the research community.” But that doesn’t make it useless. It may “encourage researchers to move beyond NHST and explore the statistical armamentarium now available to answer the central question of research: what does our study tell us?” he says. It “provides a bridge between the dominant but flawed NHST paradigm and the less familiar but more informative methods of Bayesian estimation.”

Likewise, Benjamin and Berger (2019) observe, “In research communities that are deeply attached to reliance on ‘p < 0.05,’ our recommendations will serve as initial steps away from this attachment. We emphasize that our recommendations are intended merely as initial, temporary steps and that many further steps will need to be taken to reach the ultimate destination: a holistic interpretation of statistical evidence that fully conforms to the principles laid out in the ASA Statement…”

Yet, like the authors of this editorial, not all authors in this special issue support gradual approaches with transitional methods.

Some (e.g., Amrhein, Trafimow, and Greenland 2019; Hurlbert, Levine, and Utts 2019; McShane et al. 2019) prefer to rip off the bandage and abandon use of statistical significance altogether. In short, no more dichotomizing p-values into categories of “significance.” Notably, these authors do not suggest banning the use of p-values, but rather suggest using them descriptively, treating them as continuous, and assessing their weight or import with nuanced thinking, clear language, and full understanding of their properties.

So even when there is agreement on the destination, there is disagreement about what road to take. The questions around reform need consideration and debate. It might turn out that different fields take different roads.

The catalyst for change may well come from those people who fund, use, or depend on scientific research, say Calin-Jageman and Cumming (2019). They believe this change has not yet happened to the desired level because of “the cognitive opacity of the NHST approach: the counter-intuitive p-value (it’s good when it is small), the mysterious null hypothesis (you want it to be false), and the eminently confusable Type I and Type II errors.”

Reviewers of this editorial asked, as some readers of it will, is a p-value threshold ever okay to use? We asked some of the authors of articles in the special issue that question as well. Authors identified four general instances. Some allowed that, while p-value thresholds should not be used for inference, they might still be useful for applications such as industrial quality control, in which a highly automated decision rule is needed and the costs of erroneous decisions can be carefully weighed when specifying the threshold. Other authors suggested that such dichotomized use of p-values was acceptable in model-fitting and variable selection strategies, again as automated tools, this time for sorting through large numbers of potential models or variables. Still others pointed out that p-values with very low thresholds are used in fields such as physics, genomics, and imaging as a filter for massive numbers of tests. The fourth instance can be described as “confirmatory setting[s] where the study design and statistical analysis plan are specified prior to data collection, and then adhered to during and after it” (Tong 2019). Tong argues these are the only proper settings for formal statistical inference. And Wellek (2017) says at present it is essential in these settings. “[B]inary decision making is indispensable in medicine and related fields,” he says. “[A] radical rejection of the classical principles of statistical inference…is of virtually no help as long as no conclusively substantiated alternative can be offered.”

Eliminating the declaration of “statistical significance” based on p < 0.05 or other arbitrary thresholds will be easier in some venues than others. Most journals, if they are willing, could fairly rapidly implement editorial policies to effect these changes. Suggestions for how to do that are in this special issue of The American Statistician. However, regulatory agencies might require longer timelines for making changes. The U.S. Food and Drug Administration (FDA), for example, has long established drug review procedures that involve comparing p-values to significance thresholds for Phase III drug trials. Many factors demand consideration, not the least of which is how to avoid turning every drug decision into a court battle. Goodman (2019) cautions that, even as we seek change, “we must respect the reason why the statistical procedures are there in the first place.” Perhaps the ASA could convene a panel of experts, internal and external to FDA, to provide a workable new paradigm. (See Ruberg et al. 2019, who argue for a Bayesian approach that employs data from other trials as a “prior” for Phase 3 trials.)

Change is needed. Change has been needed for decades. Change has been called for by others for quite a while. So…

6 Why Will Change Finally Happen Now?

In 1991, a confluence of weather events created a monster storm that came to be known as “the perfect storm,” entering popular culture through a book (Junger 1997) and a 2000 movie starring George Clooney. Concerns about reproducible science, falling public confidence in science, and the initial impact of the ASA statement in heightening awareness of long-known problems created a perfect storm, in this case, a good storm of motivation to make lasting change. Indeed, such change was the intent of the ASA statement, and we expect this special issue of TAS will inject enough additional energy into the storm to make its impact widely felt.

We are not alone in this view. “60+ years of incisive criticism has not yet dethroned NHST as the dominant approach to inference in many fields of science,” note Calin-Jageman and Cumming (2019). “Momentum, though, seems to finally be on the side of reform.”

Goodman (2019) agrees: “The initial slow speed of progress should not be discouraging; that is how all broad-based social movements move forward and we should be playing the long game. But the ball is rolling downhill, the current generation is inspired and impatient to carry this forward.”

So, let’s do it. Let’s move beyond “statistically significant,” even if upheaval and disruption are inevitable for the time being. It’s worth it. In a world beyond “p < 0.05,” by breaking free from the bonds of statistical significance, statistics in science and policy will become more significant than ever.

7 Authors’ Suggestions

The editors of this special TAS issue on statistical inference asked all the contact authors to help us summarize the guidance they provided in their papers by providing us a short list of do’s. We asked them to be specific but concise and to be active—start each with a verb. Here is the complete list of the authors’ responses, ordered as the papers appear in this special issue.

7.1 Getting to a Post “p < 0.05” Era

Ioannidis, J., What Have We (Not) Learnt From Millions of Scientific Papers With p-Values?

  1. Do not use p-values, unless you have clearly thought about the need to use them and they still seem the best choice.
  2. Do not favor “statistically significant” results.
  3. Do be highly skeptical about “statistically significant” results at the 0.05 level.

Goodman, S., Why Is Getting Rid of p-Values So Hard? Musings on Science and Statistics

  1. Partner with science reform movements and reformers within disciplines, journals, funding agencies and regulators to promote and reward reproducible science and diminish the impact of statistical significance on publication, funding and promotion.
  2. Speak to and write for the multifarious array of scientific disciplines, showing how statistical uncertainty and reasoning can be conveyed in non-“bright-line” ways both with conventional and alternative approaches. This should be done not just in didactic articles, but also in original or reanalyzed research, to demonstrate that it is publishable.
  3. Promote, teach and conduct meta-research within many individual scientific disciplines to demonstrate the adverse effects in each of over-reliance on and misinterpretation of p-values and significance verdicts in individual studies and the benefits of emphasizing estimation and cumulative evidence.
  4. Require reporting a quantitative measure of certainty—a “confidence index”—that an observed relationship, or claim, is true. Change analysis goal from achieving significance to appropriately estimating this confidence.
  5. Develop and share teaching materials, software, and published case examples to help with all of the do’s above, and to spread progress in one discipline to others.

Hubbard, R., Will the ASA’s Efforts to Improve Statistical Practice be Successful? Some Evidence to the Contrary

This list applies to the ASA and to the professional statistics community more generally.

  1. Specify, where/if possible, those situations in which the p-value plays a clearly valuable role in data analysis and interpretation.
  2. Contemplate issuing a statement abandoning the use of p-values in null hypothesis significance testing.

Kmetz, J., Correcting Corrupt Research: Recommendations for the Profession to Stop Misuse of p-Values

  1. Refer to the ASA statement on p-values whenever submitting a paper or revision to any editor, peer reviewer, or prospective reader. Many in the field do not know of this statement, and having the support of a prestigious organization when authoring any research document will help stop corrupt research from becoming even more dominant than it is.
  2. Train graduate students and future researchers by having them reanalyze published studies and post their findings to appropriate websites or weblogs. This practice will benefit not only the students but also the professions, by increasing the amount of replicated (or nonreplicated) research available and readily accessible, as well as the reformer organizations that support replication.
  3. Join one or more of the reformer organizations formed or forming in many research fields, and support and publicize their efforts to improve the quality of research practices.
  4. Challenge editors and reviewers when they assert that incorrect practices and interpretations of research, consistent with existing null hypothesis significance testing and beliefs regarding p-values, should be followed in papers submitted to their journals. Point out that new submissions have been prepared to be consistent with the ASA statement on p-values.
  5. Promote emphasis on research quality rather than research quantity in universities and other institutions where professional advancement depends heavily on research “productivity,” by following the practices recommended in this special journal edition. This recommendation will fall most heavily on those who have already achieved success in their fields, perhaps by following an approach quite different from that which led to their success; whatever the merits of that approach may have been, one objectionable outcome of it has been the production of voluminous corrupt research and creation of an environment that promotes and protects it. We must do better.

Hubbard, D., and Carriquiry, A., Quality Control for Scientific Research: Addressing Reproducibility, Responsiveness and Relevance

  1. Compute and prominently display the probability the hypothesis is true (or a probability distribution of an effect size) or provide sufficient information for future researchers and policy makers to compute it.
  2. Promote publicly displayed quality control metrics within your field—in particular, support tracking of reproduction studies and computing the “level 1” and even “level 2” priors as required for #1 above.
  3. Promote a byline status for researchers who reproduce studies: Digital versions are dynamically updated to display “Reproduced by….” below original research authors’ names or “Not yet reproduced” until it is reproduced.

Brownstein, N., Louis, T., O’Hagan, A., and Pendergast, J., The Role of Expert Judgment in Statistical Inference and Evidence-Based Decision-Making

  1. Staff the study team with members who have the necessary knowledge, skills and experience—statistically, scientifically, and otherwise.
  2. Include key members of the research team, including statisticians, in all scientific and administrative meetings.
  3. Understand that subjective judgments are needed in all stages of a study.
  4. Make all judgments as carefully and rigorously as possible and document each decision and rationale for transparency and reproducibility.
  5. Use protocol-guided elicitation of judgments.
  6. Statisticians specifically should:
    • Refine oral and written communication skills.
    • Understand their multiple roles and obligations as collaborators.
    • Take an active leadership role as a member of the scientific team; contribute throughout all phases of the study.
    • Co-own the subject matter—understand a sufficient amount about the relevant science/policy to meld statistical and subject-area expertise.
    • Promote the expectation that your collaborators co-own statistical issues.
    • Write a statistical analysis plan for all analyses and track any changes to that plan over time.
    • Promote co-responsibility for data quality, security, and documentation.
    • Reduce unplanned and uncontrolled modeling/testing (HARK-ing, p-hacking); document all analyses.

O’Hagan, A., Expert Knowledge Elicitation: Subjective but Scientific

  1. Elicit expert knowledge when data relating to a parameter of interest is weak, ambiguous or indirect.
  2. Use a well-designed protocol, such as SHELF, to ensure expert knowledge is elicited in as scientific and unbiased a way as possible.

Kennedy-Shaffer, L., Before p < 0.05 to Beyond p < 0.05: Using History to Contextualize p-Values and Significance Testing

  1. Ensure that inference methods match intuitive understandings of statistical reasoning.
  2. Reduce the computational burden for nonstatisticians using statistical methods.
  3. Consider changing conditions of statistical and scientific inference in developing statistical methods.
  4. Address uncertainty quantitatively and in ways that reward increased precision.

Hubbard, R., Haig, B. D., and Parsa, R. A., The Limited Role of Formal Statistical Inference in Scientific Inference

  1. Teach readers that although deemed equivalent in the social, management, and biomedical sciences, formal methods of statistical inference and scientific inference are very different animals.
  2. Show these readers that formal methods of statistical inference play only a restricted role in scientific inference.
  3. Instruct researchers to pursue significant sameness (i.e., replicable and empirically generalizable results) rather than significant differences in results.
  4. Demonstrate how the pursuit of significant differences actively impedes cumulative knowledge development.

McShane, B., Tackett, J., Böckenholt, U., and Gelman, A., Large Scale Replication Projects in Contemporary Psychological Research

  1. When planning a replication study of a given psychological phenomenon, bear in mind that replication is complicated in psychological research because studies can never be direct or exact replications of one another, and thus heterogeneity—effect sizes that vary from one study of the phenomenon to the next—cannot be avoided.
  2. Future large scale replication projects should follow the “one phenomenon, many studies” approach of the Many Labs project and Registered Replication Reports rather than the “many phenomena, one study” approach of the Open Science Collaboration project. In doing so, they should systematically vary method factors across the laboratories involved in the project.
  3. Researchers analyzing the data resulting from large scale replication projects should do so via a hierarchical (or multilevel) model fit to the totality of the individual-level observations. In doing so, all theoretical moderators should be modeled via covariates while all other potential moderators—that is, method factors—should induce variation (i.e., heterogeneity).
  4. Assessments of replicability should not depend solely on estimates of effects, or worse, significance tests based on them. Heterogeneity must also be an important consideration in assessing replicability.

7.2 Interpreting and Using p

Greenland, S., Valid p-Values Behave Exactly as They Should: Some Misleading Criticisms of p-Values and Their Resolution With s-Values

  1. Replace any statements about statistical significance of a result with the p-value from the test, and present the p-value as an equality, not an inequality. For example, if p = 0.03 then “…was statistically significant” would be replaced by “…had p = 0.03,” and “p < 0.05” would be replaced by “p = 0.03.” (An exception: If p is so small that the accuracy becomes very poor then an inequality reflecting that limit is appropriate; e.g., depending on the sample size, p-values from normal or χ2 approximations to discrete data often lack even 1-digit accuracy when p < 0.0001.) In parallel, if p = 0.25 then “…was not statistically significant” would be replaced by “…had p = 0.25,” and “p > 0.05” would be replaced by “p = 0.25.”
  2. Present p-values for more than one possibility when testing a targeted parameter. For example, if you discuss the p-value from a test of a null hypothesis, also discuss alongside this null p-value another p-value for a plausible alternative parameter possibility (ideally the one used to calculate power in the study proposal). As another example: if you do an equivalence test, present the p-values for both the lower and upper bounds of the equivalence interval (which are used for equivalence tests based on two one-sided tests).
  3. Show confidence intervals for targeted study parameters, but also supplement them with p-values for testing relevant hypotheses (e.g., the p-values for both the null and the alternative hypotheses used for the study design or proposal, as in #2). Confidence intervals only show clearly what is in or out of the interval (i.e., a 95% interval only shows clearly what has p > 0.05 or p ≤ 0.05), but more detail is often desirable for key hypotheses under contention.
  4. Compare groups and studies directly by showing p-values and interval estimates for their differences, not by comparing p-values or interval estimates from the two groups or studies. For example, seeing p = 0.03 in males and p = 0.12 in females does not mean that different associations were seen in males and females; instead, one needs a p-value and confidence interval for the difference in the sex-specific associations to examine the between-sex difference. Similarly, if an early study reported a confidence interval which excluded the null and then a subsequent study reported a confidence interval which included the null, that does not mean the studies gave conflicting results or that the second study failed to replicate the first study; instead, one needs a p-value and confidence interval for the difference in the study-specific associations to examine the between-study difference. In all cases, differences-between-differences must be analyzed directly by statistics for that purpose.
  5. Supplement a focal p-value p with its Shannon information transform (s-value or surprisal) s = –log2(p). This measures the amount of information supplied by the test against the tested hypothesis (or model): Rounded off, the s-value s shows the number of heads in a row one would need to see when tossing a coin to get the same amount of information against the tosses being “fair” (independent with “heads” probability of 1/2) instead of being loaded for heads. For example, if p = 0.03, this represents –log2(0.03) = 5 bits of information against the hypothesis (like getting 5 heads in a trial of “fairness” with 5 coin tosses); and if p = 0.25, this represents only –log2(0.25) = 2 bits of information against the hypothesis (like getting 2 heads in a trial of “fairness” with only 2 coin tosses).
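Greenland’s points #4 and #5 lend themselves to a short numeric sketch. The estimates and standard errors below are hypothetical, chosen only to illustrate the mechanics; this is a minimal illustration, not code from the paper.

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def difference_test(est1, se1, est2, se2):
    """Test the difference between two independent estimates directly
    (point #4), instead of comparing their separate p-values."""
    diff = est1 - est2
    se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
    return diff, two_sided_p(diff / se_diff)

def s_value(p):
    """Shannon information (surprisal) against the tested hypothesis
    (point #5): s = -log2(p), in bits."""
    return -math.log2(p)

# Hypothetical sex-specific associations: one "significant", one not,
# yet the evidence for a between-sex difference is itself weak.
diff, p_diff = difference_test(0.40, 0.18, 0.25, 0.16)

# The editorial's examples: p = 0.03 carries about 5 bits of information
# against the tested hypothesis; p = 0.25 carries only 2 bits.
bits_003 = s_value(0.03)
bits_025 = s_value(0.25)
```

Note that `p_diff` here is far larger than either group-specific p-value, which is exactly why the between-group difference must be tested directly.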

Betensky, R., The p-Value Requires Context, Not a Threshold

  1. Interpret the p-value in light of its context of sample size and meaningful effect size.
  2. Incorporate the sample size and meaningful effect size into a decision to reject the null hypothesis.

Anderson, A., Assessing Statistical Results: Magnitude, Precision and Model Uncertainty

  1. Evaluate the importance of statistical results based on their practical implications.
  2. Evaluate the strength of empirical evidence based on the precision of the estimates and the plausibility of the modeling choices.
  3. Seek out subject matter expertise when evaluating the importance and the strength of empirical evidence.

Heck, P., and Krueger, J., Putting the p-Value in Its Place

  1. Use the p-value as a heuristic, that is, as the base for a tentative inference regarding the presence or absence of evidence against the tested hypothesis.
  2. Supplement the p-value with other, conceptually distinct methods and practices, such as effect size estimates, likelihood ratios, or graphical representations.
  3. Strive to embed statistical hypothesis testing within strong a priori theory and a context of relevant prior empirical evidence.

Johnson, V., Evidence From Marginally Significant t-Statistics

  1. Be transparent in the number of outcome variables that were analyzed.
  2. Report the number (and values) of all test statistics that were calculated.
  3. Provide access to protocols for studies involving human or animal subjects.
  4. Clearly describe data values that were excluded from analysis and the justification for doing so.
  5. Provide sufficient details on experimental design so that other researchers can replicate the experiment.
  6. Describe only p-values less than 0.005 as being “statistically significant.”

Fraser, D., The p-Value Function and Statistical Inference

  1. Determine a primary variable for assessing the hypothesis at issue.
  2. Calculate its well defined distribution function, respecting continuity.
  3. Substitute the observed data value to obtain the “p-value function.”
  4. Extract the available well defined confidence bounds, confidence intervals, and median estimate.
  5. Know that you don’t have an intellectual basis for decisions.

Rougier, J., p-Values, Bayes Factors, and Sufficiency

  1. Recognize that behind every choice of null distribution and test statistic, there lurks a plausible family of alternative hypotheses, which can provide more insight into the null distribution.

Rose, S., and McGuire, T., Limitations of p-Values and R-Squared for Stepwise Regression Building: A Fairness Demonstration in Health Policy Risk Adjustment

  1. Formulate a clear objective for variable inclusion in regression procedures.
  2. Assess all relevant evaluation metrics.
  3. Incorporate algorithmic fairness considerations.

7.3 Supplementing or Replacing p

Blume, J., Greevy, R., Welty, V., Smith, J., and DuPont, W., An Introduction to Second Generation p-Values

  1. Construct a composite null hypothesis by specifying the range of effects that are not scientifically meaningful (do this before looking at the data). Why: Eliminating the conflict between scientific significance and statistical significance has numerous statistical and scientific benefits.
  2. Replace classical p-values with second-generation p-values (SGPV). Why: SGPVs accommodate composite null hypotheses and encourage the proper communication of findings.
  3. Interpret the SGPV as a high-level summary of what the data say. Why: Science needs a simple indicator of when the data support only meaningful effects (SGPV = 0), when the data support only trivially null effects (SGPV = 1), or when the data are inconclusive (0 < SGPV < 1).
  4. Report an interval estimate of effect size (confidence interval, support interval, or credible interval) and note its proximity to the composite null hypothesis. Why: This is a more detailed description of study findings.
  5. Consider reporting false discovery rates with SGPVs of 0 or 1. Why: FDRs gauge the chance that an inference is incorrect under assumptions about the data generating process and prior knowledge.
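As a sketch of the mechanics behind these recommendations: the SGPV is the fraction of the interval estimate that overlaps the composite null interval, with a correction for very wide intervals. This follows the definition in Blume et al. as we understand it, and the intervals below are hypothetical.

```python
def sgpv(ci_lo, ci_hi, null_lo, null_hi):
    """Second-generation p-value: the fraction of the interval estimate
    [ci_lo, ci_hi] that overlaps the composite null [null_lo, null_hi],
    with a correction capping the value at 1/2 when the interval estimate
    is more than twice as wide as the null interval (i.e., inconclusive)."""
    ci_len = ci_hi - ci_lo
    null_len = null_hi - null_lo
    if ci_len <= 0 or null_len <= 0:
        raise ValueError("intervals must have positive length")
    overlap = max(0.0, min(ci_hi, null_hi) - max(ci_lo, null_lo))
    return (overlap / ci_len) * max(ci_len / (2 * null_len), 1.0)

# Null interval: effects in [-0.5, 0.5] are not scientifically meaningful.
only_meaningful = sgpv(0.8, 2.0, -0.5, 0.5)   # no overlap -> SGPV = 0
only_trivial    = sgpv(-0.2, 0.3, -0.5, 0.5)  # CI inside null -> SGPV = 1
inconclusive    = sgpv(-0.3, 1.5, -0.5, 0.5)  # partial overlap -> 0 < SGPV < 1
```

SGPV = 0 signals that the data support only meaningful effects, SGPV = 1 that they support only trivially null effects, and intermediate values that the data are inconclusive, matching #3 above.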

Goodman, W., Spruill, S., and Komaroff, E., A Proposed Hybrid Effect Size Plus p-Value Criterion: Empirical Evidence Supporting Its Use

  1. Determine how far the true parameter’s value would have to be, in your research context, from exactly equaling the conventional, point null hypothesis to consider that the distance is meaningfully large or practically significant.
  2. Combine the conventional p-value criterion with a minimum effect size criterion to generate a two-criteria inference-indicator signal, which provides heuristic, but nondefinitive, evidence for inferring the parameter’s true location.
  3. Document the intended criteria for your inference procedures, such as a p-value cut-point and a minimum practically significant effect size, prior to undertaking the procedure.
  4. Ensure that you use the appropriate inference method for the data that are obtainable and for the inference that is intended.
  5. Acknowledge that every study is fraught with limitations from unknowns regarding true data distributions and other conditions that one’s method assumes.
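The two-criteria signal in #2 is a simple conjunction, sketched below; the cut-points are illustrative defaults, not values prescribed by the paper.

```python
def hybrid_signal(p_value, effect_estimate, alpha=0.05, min_effect=0.5):
    """Two-criteria inference indicator: signal only when the p-value
    clears the conventional cut-point AND the estimated effect is at
    least the pre-specified practically significant size. Per the paper,
    this is heuristic, nondefinitive evidence, not a verdict."""
    return p_value <= alpha and abs(effect_estimate) >= min_effect

# A small p-value with a practically negligible estimate does not signal;
# both criteria must be met (and documented before the analysis, per #3).
negligible = hybrid_signal(0.01, 0.1)
both_met = hybrid_signal(0.01, 0.8)
```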

Benjamin, D., and Berger, J., Three Recommendations for Improving the Use of p-Values

  1. Replace the 0.05 “statistical significance” threshold for claims of novel discoveries with a 0.005 threshold and refer to p-values between 0.05 and 0.005 as “suggestive.”
  2. Report the data-based odds of the alternative hypothesis to the null hypothesis. If the data-based odds cannot be calculated, then use the p-value to report an upper bound on the data-based odds: 1/(−e · p · ln p).
  3. Report your prior odds and posterior odds (prior odds * data-based odds) of the alternative hypothesis to the null hypothesis. If the data-based odds cannot be calculated, then use your prior odds and the p-value to report an upper bound on your posterior odds: (prior odds) * (1/(−e · p · ln p)).
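These bounds are straightforward to compute. A minimal sketch (the function names are ours, not the paper’s):

```python
import math

def max_odds(p):
    """Upper bound on the data-based odds (Bayes factor) of the
    alternative to the null implied by a p-value: 1 / (-e * p * ln p).
    The bound applies for p < 1/e."""
    if not 0 < p < 1 / math.e:
        raise ValueError("bound applies for 0 < p < 1/e")
    return 1 / (-math.e * p * math.log(p))

def max_posterior_odds(prior_odds, p):
    """Upper bound on posterior odds: prior odds * data-based odds bound."""
    return prior_odds * max_odds(p)

# p = 0.05 caps the data-based odds in favor of the alternative at
# roughly 2.5 to 1, one motivation for calling results with p between
# 0.005 and 0.05 merely "suggestive".
odds_at_005 = max_odds(0.05)
```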

Colquhoun, D., The False Positive Risk: A Proposal Concerning What to Do About p-Values

  1. Continue to provide p-values and confidence intervals. Although widely misinterpreted, people know how to calculate them and they aren’t entirely useless. Just don’t ever use the terms “statistically significant” or “nonsignificant.”
  2. Provide in addition an indication of false positive risk (FPR). This is the probability that the claim of a real effect on the basis of the p-value is in fact false. The FPR (not the p-value) is the probability that your result occurred by chance. For example, the fact that, under plausible assumptions, observation of a p-value close to 0.05 corresponds to an FPR of at least 0.2–0.3 shows clearly the weakness of the conventional criterion for “statistical significance.”
  3. Alternatively, specify the prior probability of there being a real effect that one would need to be able to justify in order to achieve an FPR of, say, 0.05.

Notes:

There are many ways to calculate the FPR. One, based on a point null and a simple alternative, can be calculated with the web calculator at http://fpr-calc.ucl.ac.uk/. However, other approaches to the calculation of the FPR, based on different assumptions, give similar results (Table 1 in Colquhoun 2019).

To calculate FPR it is necessary to specify a prior probability and this is rarely known. My recommendation 2 is based on giving the FPR for a prior probability of 0.5. Any higher prior probability of there being a real effect is not justifiable in the absence of hard data. In this sense, the calculated FPR is the minimum that can be expected. More implausible hypotheses would make the problem worse. For example, if the prior probability of there being a real effect were only 0.1, then observation of p = 0.05 would imply a disastrously high FPR = 0.76, and in order to achieve an FPR of 0.05, you’d need to observe p = 0.00045. Others (especially Goodman) have advocated giving likelihood ratios (LRs) in place of p-values. The FPR for a prior of 0.5 is simply 1/(1 + LR), so to give the FPR for a prior of 0.5 is simply a more-easily-comprehensible way of specifying the LR, and so should be acceptable to frequentists and Bayesians.
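The arithmetic in this note is easy to reproduce. The sketch below uses the relation FPR = 1/(1 + LR) for a prior of 0.5, and, for a minimum FPR from a p-value alone, the bound LR ≤ 1/(−e · p · ln p). Note this bound is a different calculation from Colquhoun's point-null web calculator, but it gives broadly similar numbers; the function names are mine.

```python
import math

def fpr_from_lr(likelihood_ratio, prior=0.5):
    """False positive risk given the likelihood ratio in favour of a real
    effect and the prior probability of a real effect. For prior = 0.5
    this reduces to 1 / (1 + LR), as in the note above."""
    prior_odds = prior / (1 - prior)
    return 1.0 / (1.0 + prior_odds * likelihood_ratio)

def min_fpr(p, prior=0.5):
    """A lower bound on the FPR implied by a p-value, using the upper
    bound LR <= 1/(-e * p * ln p), valid for p < 1/e."""
    lr_max = 1.0 / (-math.e * p * math.log(p))
    return fpr_from_lr(lr_max, prior)

# p = 0.05 with prior 0.5 implies an FPR of at least roughly 0.29,
# consistent with the 0.2-0.3 range quoted above; a sceptical prior
# of 0.1 pushes the minimum FPR near 0.79.
fpr = min_fpr(0.05)
```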

Matthews, R., Moving Toward the Post p < 0.05 Era via the Analysis of Credibility

  1. Report the outcome of studies as effect sizes summarized by confidence intervals (CIs) along with their point estimates.
  2. Make full use of the point estimate and width and location of the CI relative to the null effect line when interpreting findings. The point estimate is generally the effect size best supported by the study, irrespective of its statistical significance/nonsignificance. Similarly, tight CIs located far from the null effect line generally represent more compelling evidence for a nonzero effect than wide CIs lying close to that line.
  3. Use the analysis of credibility (AnCred) to assess quantitatively the credibility of inferences based on the CI. AnCred determines the level of prior evidence needed for a new finding to provide credible evidence for a nonzero effect.
  4. Establish whether this required level of prior evidence is supported by current knowledge and insight. If it is, the new result provides credible evidence for a nonzero effect, irrespective of its statistical significance/nonsignificance.
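Recommendations 1 and 2 amount to reporting an effect size with its interval and reading the interval against the null effect line. A minimal sketch (normal approximation, hypothetical numbers, descriptive labels of my own choosing rather than a significance verdict):

```python
def compatibility_summary(estimate, se, null_value=0.0, z=1.96):
    """Point estimate plus 95% CI (normal approximation), and the CI's
    location relative to the null effect line."""
    lo, hi = estimate - z * se, estimate + z * se
    if lo > null_value or hi < null_value:
        location = "interval excludes the null value"
    else:
        location = "interval includes the null value"
    return (lo, hi), location

# Hypothetical trial result: risk difference 0.04 with SE 0.015.
(lo, hi), where = compatibility_summary(0.04, 0.015)   # (0.011, 0.069)
```

A tight interval far from the null line, as here, is more compelling evidence for a nonzero effect than a wide interval hugging the line, whatever label a threshold would have attached.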

Gannon, M., Pereira, C., and Polpo, A., Blending Bayesian and Classical Tools to Define Optimal Sample-Size-Dependent Significance Levels

  1. Retain the useful concept of statistical significance and the same operational procedures as currently used for hypothesis tests, whether frequentist (Neyman–Pearson p-value tests) or Bayesian (Bayes-factor tests).
  2. Use tests with a sample-size-dependent significance level—ours is optimal in the sense of the generalized Neyman–Pearson lemma.
  3. Use a testing scheme that allows tests of any kind of hypothesis, without restrictions on the dimensionalities of the parameter space or the hypothesis. Note that this should include “sharp” hypotheses, which correspond to subsets of lower dimensionality than the full parameter space.
  4. Use hypothesis tests that are compatible with the likelihood principle (LP). They can be easier to interpret consistently than tests that are not LP-compliant.
  5. Use numerical methods to handle hypothesis-testing problems with high-dimensional sample spaces or parameter spaces.

Pogrow, S., How Effect Size (Practical Significance) Misleads Clinical Practice: The Case for Switching to Practical Benefit to Assess Applied Research Findings

  1. Switch from reliance on statistical or practical significance to the more stringent statistical criterion of practical benefit for (a) assessing whether applied research findings indicate that an intervention is effective and should be adopted and scaled—particularly in complex organizations such as schools and hospitals and (b) determining whether relationships are sufficiently strong and explanatory to be used as a basis for setting policy or practice recommendations. Practical benefit increases the likelihood that observed benefits will replicate in subsequent research and in clinical practice by avoiding the problems associated with relying on small effect sizes.
  2. Reform statistics courses in applied disciplines to include the principles of practical benefit, and have students review influential applied research articles in the discipline to determine which findings demonstrate practical benefit.
  3. Recognize the need to develop different inferential statistical criteria for assessing the importance of applied research findings as compared to assessing basic research findings.
  4. Consider consistent, noticeable improvements across contexts using the quick prototyping methods of improvement science as a preferable methodology for identifying effective practices rather than relying on RCT methods.
  5. Require that applied research reveal the actual unadjusted means/medians of results for all groups and subgroups, and that review panels take such data into account—as opposed to only reporting relative differences between adjusted means/medians. This will help preliminarily identify whether there appear to be clear benefits for an intervention.

7.4 Adopting More Holistic Approaches

McShane, B., Gal, D., Gelman, A., Robert, C., and Tackett, J., Abandon Statistical Significance

  1. Treat p-values (and other purely statistical measures like confidence intervals and Bayes factors) continuously rather than in a dichotomous or thresholded manner. In doing so, bear in mind that it seldom makes sense to calibrate evidence as a function of p-values or other purely statistical measures because they are, among other things, typically defined relative to the generally uninteresting and implausible null hypothesis of zero effect and zero systematic error.
  2. Give consideration to related prior evidence, plausibility of mechanism, study design and data quality, real world costs and benefits, novelty of finding, and other factors that vary by research domain. Do this always—not just once some p-value or other statistical threshold has been attained—and do this without giving priority to p-values or other purely statistical measures.
  3. Analyze and report all of the data and relevant results rather than focusing on single comparisons that attain some p-value or other statistical threshold.
  4. Conduct a decision analysis: p-value and other statistical threshold-based rules implicitly express a particular tradeoff between Type I and Type II error, but in reality this tradeoff should depend on the costs, benefits, and probabilities of all outcomes.
  5. Accept uncertainty and embrace variation in effects: we can learn much (indeed, more) about the world by forsaking the false promise of certainty offered by dichotomous declarations of truth or falsity—binary statements about there being “an effect” or “no effect”—based on some p-value or other statistical threshold being attained.
  6. Obtain more precise individual-level measurements, use within-person or longitudinal designs more often, and give increased consideration to models that use informative priors, that feature varying treatment effects, and that are multilevel or meta-analytic in nature.
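Recommendation 4's point, that the Type I / Type II tradeoff should depend on costs and probabilities rather than a fixed threshold, can be made concrete with a toy expected-loss comparison. The costs, probabilities, and action names below are entirely hypothetical.

```python
def expected_loss(action, p_real, cost_fp, cost_fn):
    """Expected loss of acting ('treat') or not ('skip'), given the
    probability the effect is real and the costs of the two errors."""
    if action == "treat":
        return (1 - p_real) * cost_fp   # loss only if the effect is not real
    return p_real * cost_fn             # loss only if the effect is real

def choose(p_real, cost_fp, cost_fn):
    """Pick the action with the smaller expected loss."""
    return min(("treat", "skip"),
               key=lambda a: expected_loss(a, p_real, cost_fp, cost_fn))

# A 30% chance of a real effect still favours treating when a missed
# effect (cost 100) is far worse than a useless treatment (cost 10);
# reverse the costs and the decision flips.
decision = choose(0.3, cost_fp=10, cost_fn=100)
```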

Tong, C., Statistical Inference Enables Bad Science; Statistical Thinking Enables Good Science

  1. Prioritize effort for sound data production: the planning, design, and execution of the study.
  2. Build scientific arguments with many sets of data and multiple lines of evidence.
  3. Recognize the difference between exploratory and confirmatory objectives and use distinct statistical strategies for each.
  4. Use flexible descriptive methodology, including disciplined data exploration, enlightened data display, and regularized, robust, and nonparametric models, for exploratory research.
  5. Restrict statistical inferences to confirmatory analyses for which the study design and statistical analysis plan are pre-specified prior to, and strictly adhered to during, data acquisition.

Amrhein, V., Trafimow, D., and Greenland, S., Inferential Statistics as Descriptive Statistics: There Is No Replication Crisis If We Don’t Expect Replication

1. Do not dichotomize, but embrace variation.

(a) Report and interpret inferential statistics like the p-value in a continuous fashion; do not use the word “significant.”

(b) Interpret interval estimates as “compatibility intervals,” showing effect sizes most compatible with the data, under the model used to compute the interval; do not focus on whether such intervals include or exclude zero.

(c) Treat inferential statistics as highly unstable local descriptions of relations between models and the obtained data.

(i) Free your “negative results” by allowing them to be potentially positive. Most studies with large p-values or interval estimates that include the null should be considered “positive,” in the sense that they usually leave open the possibility of important effects (e.g., the effect sizes within the interval estimates).

(ii) Free your “positive results” by allowing them to be different. Most studies with small p-values or interval estimates that are not near the null should be considered provisional, because in replication studies the p-values could be large and the interval estimates could show very different effect sizes.

(iii) There is no replication crisis if we don’t expect replication. Honestly reported results must vary from replication to replication because of varying assumption violations and random variation; excessive agreement itself would suggest deeper problems such as failure to publish results in conflict with group expectations.

Calin-Jageman, R., and Cumming, G., The New Statistics for Better Science: Ask How Much, How Uncertain, and What Else Is Known

  1. Ask quantitative questions and give quantitative answers.
  2. Countenance uncertainty in all statistical conclusions, seeking ways to quantify, visualize, and interpret the potential for error.
  3. Seek replication, and use quantitative methods to synthesize across data sets as a matter of course.
  4. Use Open Science practices to enhance the trustworthiness of research results.
  5. Avoid, wherever possible, any use of p-values or NHST.

Ziliak, S., How Large Are Your G-Values? Try Gosset’s Guinnessometrics When a Little “p” Is Not Enough

  • G-10 Consider the Purpose of the Inquiry, and Compare with Best Practice. Falsification of a null hypothesis is not the main purpose of the experiment or observational study. Making money or beer or medicine—ideally more and better than the competition and best practice—is. Estimating the importance of your coefficient relative to results reported by others, is. To repeat, as the 2016 ASA Statement makes clear, merely falsifying a null hypothesis with a qualitative yes/no, exists/does not exist, significant/not significant answer, is not itself significant science, and should be eschewed.
  • G-9 Estimate the Stakes (Or Eat Them). Estimation of magnitudes of effects, and demonstrations of their substantive meaning, should be the center of most inquiries. Failure to specify the stakes of a hypothesis is the first step toward eating them (gulp).
  • G-8 Study Correlated Data: ABBA, Take a Chance on Me. Most regression models assume “iid” error terms—independently and identically distributed—yet most data in the social and life sciences are correlated by systematic, nonrandom effects—and are thus not independent. Gosset solved the problem of correlated soil plots with the “ABBA” layout, maximizing the correlation of paired differences between the As and Bs with a perfectly balanced chiasmic arrangement.
  • G-7 Minimize “Real Error” with the 3 R’s: Represent, Replicate, Reproduce. A test of significance on a single set of data is nearly valueless. Fisher’s p, Student’s t, and other tests should only be used when there is actual repetition of the experiment. “One and done” is scientism, not scientific. Random error is not equal to real error, and is usually smaller and less important than the sum of nonrandom errors. Measurement error, confounding, specification error, and bias of the auspices are frequently larger in all the testing sciences, agronomy to medicine. Guinnessometrics minimizes real error by repeating trials on stratified and balanced yet independent experimental units, controlling as much as possible for local fixed effects.
  • G-6 Economize with “Less is More”: Small Samples of Independent Experiments. Small-sample analysis and distribution theory has an economic origin and foundation: changing inputs to the beer on the large scale (for Guinness, enormous global scale) is risky, with more than money at stake. But smaller samples, as Gosset showed in decades of barley and hops experimentation, do not mean “less than,” and Big Data is in any case not the solution for many problems.
  • G-5 Keep Your Eyes on the Size Matters/How Much? Question. There will be distractions but the expected loss and profit functions rule, or should. Are regression coefficients or differences between means large or small? Compared to what? How do you know?
  • G-4 Visualize. Parameter uncertainty is not the same thing as model uncertainty. Does the result hit you between the eyes? Does the study show magnitudes of effects across the entire distribution? Advances in visualization software continue to outstrip advances in statistical modeling, making more visualization a no-brainer.
  • G-3 Consider Posteriors and Priors too (“It pays to go Bayes”). The sample on hand is rarely the only thing that is “known.” Subject matter expertise is an important prior input to statistical design and affects analysis of “posterior” results. For example, Gosset at Guinness was wise to keep quality assurance metrics and bottom line profit at the center of his inquiry. How does prior information fit into the story and evidence? Advances in Bayesian computing software make it easier and easier to do a Bayesian analysis, merging prior and posterior information, values, and knowledge.
  • G-2 Cooperate Up, Down, and Across (Networks and Value Chains). For example, where would brewers be today without the continued cooperation of farmers? Perhaps back on the farm and not at the brewery making beer. Statistical science is social, and cooperation helps. Guinness financed a large share of modern statistical theory, and not only by supporting Gosset and other brewers with academic sabbaticals (Ziliak and McCloskey 2008).
  • G-1 Answer the Brewer’s Original Question (“How should you set the odds?”). No bright-line rule of statistical significance can answer the brewer’s question. As Gosset said way back in 1904, how you set the odds depends on “the importance of the issues at stake” (e.g., the expected benefit and cost) together with the cost of obtaining new material.
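G-8's ABBA point can be seen in one line of algebra: the variance of a paired difference is Var(A) + Var(B) − 2·ρ·sd(A)·sd(B), so maximizing the correlation ρ between pair members shrinks the variance of the comparison. A minimal sketch with hypothetical variances:

```python
def var_of_difference(var_a, var_b, correlation):
    """Variance of the paired difference A - B. Positive correlation
    between pair members, which the ABBA layout is designed to
    maximize, shrinks the variance of the comparison."""
    cov = correlation * (var_a ** 0.5) * (var_b ** 0.5)
    return var_a + var_b - 2 * cov

unpaired = var_of_difference(4.0, 4.0, 0.0)   # 8.0
paired = var_of_difference(4.0, 4.0, 0.8)     # 1.6
```

With correlation 0.8 the paired comparison has one-fifth the variance of the independent one, which is exactly why Gosset balanced his plots chiasmically.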

Billheimer, D., Predictive Inference and Scientific Reproducibility

  1. Predict observable events or quantities that you care about.
  2. Quantify the uncertainty of your predictions.
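Predicting an observable quantity, rather than estimating a parameter, changes the width of the interval you must report. This sketch (normal approximation, made-up data) contrasts a 95% interval for the mean with a 95% interval for the next observation:

```python
import math

def mean_ci_and_prediction_interval(xs, z=1.96):
    """95% interval for the MEAN versus a 95% interval for the NEXT
    observation (normal approximation). The predictive interval is
    wider: uncertainty about an observable combines estimation error
    with irreducible variation."""
    n = len(xs)
    mean = sum(xs) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    ci = (mean - z * s / math.sqrt(n), mean + z * s / math.sqrt(n))
    pi = (mean - z * s * math.sqrt(1 + 1 / n),
          mean + z * s * math.sqrt(1 + 1 / n))
    return ci, pi

# Eight hypothetical measurements:
ci, pi = mean_ci_and_prediction_interval([10, 12, 9, 11, 13, 10, 12, 11])
```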

Manski, C., Treatment Choice With Trial Data: Statistical Decision Theory Should Supplant Hypothesis Testing

  1. Statisticians should relearn statistical decision theory, which received considerable attention in the middle of the twentieth century but was largely forgotten by the century’s end.
  2. Statistical decision theory should supplant hypothesis testing when statisticians study treatment choice with trial data.
  3. Statisticians should use statistical decision theory when analyzing decision making with sample data more generally.

Manski, C., and Tetenov, A., Trial Size for Near Optimal Choice between Surveillance and Aggressive Treatment: Reconsidering MSLT-II

  1. Statisticians should relearn statistical decision theory, which received considerable attention in the middle of the twentieth century but was largely forgotten by the century’s end.
  2. Statistical decision theory should supplant hypothesis testing when statisticians study treatment choice with trial data.
  3. Statisticians should use statistical decision theory when analyzing decision making with sample data more generally.

Lavine, M., Frequentist, Bayes, or Other?

  1. Look for and present results from many models that fit the data well.
  2. Evaluate models, not just procedures.

Ruberg, S., Harrell, F., Gamalo-Siebers, M., LaVange, L., Lee J., Price K., and Peck C., Inference and Decision-Making for 21st Century Drug Development and Approval

  1. Apply the Bayesian paradigm as a framework for improving statistical inference and regulatory decision making by using probability assertions about the magnitude of a treatment effect.
  2. Incorporate prior data and available information formally into the analysis of the confirmatory trials.
  3. Justify and pre-specify how priors are derived and perform sensitivity analysis for a better understanding of the impact of the choice of prior distribution.
  4. Employ quantitative utility functions to reflect key considerations from all stakeholders for optimal decisions via a probability-based evaluation of the treatment effects.
  5. Intensify training in Bayesian approaches, particularly for decision makers and clinical trialists (e.g., physician scientists in FDA, industry and academia).

van Dongen, N., Wagenmakers, E.J., van Doorn, J., Gronau, Q., van Ravenzwaaij, D., Hoekstra, R., Haucke, M., Lakens, D., Hennig, C., Morey, R., Homer, S., Gelman, A., and Sprenger, J., Multiple Perspectives on Inference for Two Simple Statistical Scenarios

  1. Clarify your statistical goals explicitly and unambiguously.
  2. Consider the question of interest and choose a statistical approach accordingly.
  3. Acknowledge the uncertainty in your statistical conclusions.
  4. Explore the robustness of your conclusions by executing several different analyses.
  5. Provide enough background information such that other researchers can interpret your results and possibly execute meaningful alternative analyses.

7.5 Reforming Institutions: Changing Publication Policies and Statistical Education

Trafimow, D., Five Nonobvious Changes in Editorial Practice for Editors and Reviewers to Consider When Evaluating Submissions in a Post P < 0.05 Universe

  1. Tolerate ambiguity.
  2. Replace significance testing with a priori thinking.
  3. Consider the nature of the contribution, on multiple levels.
  4. Emphasize thinking and execution, not results.
  5. Consider that the assumption of random and independent sampling might be wrong.

Locascio, J., The Impact of Results Blind Science Publishing on Statistical Consultation and Collaboration

For journal reviewers

  1. Provide an initial provisional decision regarding acceptance for publication of a journal manuscript based exclusively on the judged importance of the research issues addressed by the study and the soundness of the reported methodology. (The latter would include appropriateness of data analysis methods.) Give no weight to the reported results of the study per se in the decision as to whether to publish or not.
  2. To ensure #1 above is accomplished, commit to an initial decision regarding publication after having been provided with only the Introduction and Methods sections of a manuscript by the editor, not having seen the Abstract, Results, or Discussion. (The latter would be reviewed only if and after a generally irrevocable decision to publish has already been made.)

For investigators/manuscript authors

  1. Obtain consultation and collaboration from statistical consultant(s) and research methodologist(s) early in the development and conduct of a research study.
  2. Emphasize the clinical and scientific importance of a study in the Introduction section of a manuscript, and give a clear, explicit statement of the research questions being addressed and any hypotheses to be tested.
  3. Include a detailed statistical analysis subsection in the Methods section, which would contain, among other things, a justification of the adequacy of the sample size and the reasons various statistical methods were employed. For example, if null hypothesis significance testing and p-values are used, presumably supplemental to other methods, justify why those methods apply and will provide useful additional information in this particular study.
  4. Submit for publication reports of well-conducted studies on important research issues regardless of findings, for example, even if only null effects were obtained, hypotheses were not confirmed, mere replication of previous results were found, or results were inconsistent with established theories.

Hurlbert, S., Levine, R., and Utts, J., Coup de Grâce for a Tough Old Bull: “Statistically Significant” Expires

  1. Encourage journal editorial boards to disallow use of the phrase “statistically significant,” or even “significant,” in manuscripts they will accept for review.
  2. Give primary emphasis in abstracts to the magnitudes of those effects most conclusively demonstrated and of greatest import to the subject matter.
  3. Report precise p-values or other indices of evidence against null hypotheses as continuous variables not requiring any labeling.
  4. Understand the meaning of and rationale for neoFisherian significance assessment (NFSA).

Campbell, H., and Gustafson, P., The World of Research Has Gone Berserk: Modeling the Consequences of Requiring “Greater Statistical Stringency” for Scientific Publication

  1. Consider the meta-research implications of implementing new publication/funding policies. Journal editors and research funders should attempt to model the impact of proposed policy changes before any implementation. In this way, we can anticipate the policy impacts (both positive and negative) on the types of studies researchers pursue and the types of scientific articles that ultimately end up published in the literature.

Fricker, R., Burke, K., Han, X., and Woodall, W., Assessing the Statistical Analyses Used in Basic and Applied Social Psychology After Their p-Value Ban

  1. Use measures of statistical significance combined with measures of practical significance, such as confidence intervals on effect sizes, in assessing research results.
  2. Classify research results as either exploratory or confirmatory and appropriately describe them as such in all published documentation.
  3. Define precisely the population of interest in research studies and carefully assess whether the data being analyzed are representative of the population.
  4. Understand the limitations of inferential methods applied to observational, convenience, or other nonprobabilistically sampled data.

Maurer, K., Hudiburgh, L., Werwinski, L., and Bailer J., Content Audit for p-Value Principles in Introductory Statistics

  1. Evaluate the coverage of p-value principles in the introductory statistics course using rubrics or other systematic assessment guidelines.
  2. Discuss and deploy improvements to curriculum coverage of p-value principles.
  3. Meet with representatives from other departments, who have majors taking your statistics courses, to make sure that inference is being taught in a way that fits the needs of their disciplines.
  4. Ensure that the correct interpretation of p-value principles is a point of emphasis for all faculty members and embedded within all courses of instruction.

Steel, A., Liermann, M., and Guttorp, P., Beyond Calculations: A Course in Statistical Thinking

  1. Design curricula to teach students how statistical analyses are embedded within a larger science life-cycle, including steps such as project formulation, exploratory graphing, peer review, and communication beyond scientists.
  2. Teach the p-value as only one aspect of a complete data analysis.
  3. Prioritize helping students build a strong understanding of what testing and estimation can tell you over teaching statistical procedures.
  4. Explicitly teach statistical communication. Effective communication requires that students clearly formulate the benefits and limitations of statistical results.
  5. Force students to struggle with poorly defined questions and real, messy data in statistics classes.
  6. Encourage students to match the mathematical metric (or data summary) to the scientific question. Teaching students to create customized statistical tests for custom metrics allows statistics to move beyond the mean and pinpoint specific scientific questions.

Gratefully,
Ronald L. Wasserstein
American Statistical Association, Alexandria, VA
ron@amstat.org
Allen L. Schirm
Mathematica Policy Research (retired), Washington, DC
allenschirm@gmail.com
Nicole A. Lazar
Department of Statistics, University of Georgia, Athens, GA
nlazar@stat.uga.edu


Acknowledgments

Without the help of a huge team, this special issue would never have happened. The articles herein are about the equivalent of three regular issues of The American Statistician. Thank you to all the authors who submitted papers for this issue. Thank you, authors whose papers were accepted, for enduring our critiques. We hope they made you happier with your finished product. Thank you to a talented, hard-working group of associate editors for handling many papers: Frank Bretz, George Cobb, Doug Hubbard, Ray Hubbard, Michael Lavine, Fan Li, Xihong Lin, Tom Louis, Regina Nuzzo, Jane Pendergast, Annie Qu, Sherri Rose, and Steve Ziliak. Thank you to all who served as reviewers. We definitely couldn’t have done this without you. Thank you, TAS Editor Dan Jeske, for your vision and your willingness to let us create this special issue. Special thanks to Janet Wallace, TAS editorial coordinator, for spectacular work and tons of patience. We also are grateful to ASA Journals Manager Eric Sampson for his leadership, and to our partners, the team at Taylor and Francis, for their commitment to ASA’s publishing efforts. Thank you to all who read and commented on the draft of this editorial. You made it so much better! Regina Nuzzo provided extraordinarily helpful substantive and editorial comments. And thanks most especially to the ASA Board of Directors, for generously and enthusiastically supporting the “p-values project” since its inception in 2014. Thank you for your leadership of our profession and our association.


Konrad Kleinknecht

Konrad Kleinknecht (born 23 April 1940 in Ravensburg) is a German physicist.

Life and Work

Kleinknecht studied physics at the Universities of Munich and Heidelberg from 1958 to 1963. He received his doctorate (Dr. rer. nat.) in Heidelberg in 1966. He then worked as a research scientist at CERN in Geneva until 1969, and as a research assistant at the University of Heidelberg until 1972, completing his habilitation there in 1971. In 1972 he was appointed to a chair at the University of Dortmund, where he built up the particle physics group and served as dean, vice-dean, and institute director. In 1985 he moved to the University of Mainz. From 1989 to 1992 he chaired the particle physics section of the German Physical Society (DPG), and from 1997 to 1999 he was a member of the DPG executive board. From 2000 to 2008 he advised the DPG board as its commissioner for climate protection.

Kleinknecht works in experimental elementary particle physics, in particular on the weak interaction, neutrino physics, and the violation of matter-antimatter symmetry. His developments in particle detectors enabled precision experiments, notably on the violation of CP symmetry in the neutral kaon system. In 1988 the NA31 collaboration at CERN, of which he was a member, led by Heinrich Wahl, provided the first evidence of direct CP violation in kaons.[1]

Since late 2012, Kleinknecht has been the founding chairman of the Heisenberg-Gesellschaft.[2]

Awards

* Half awarded to Heinrich Wahl, the other half to NA31[3]

Publications

Kleinknecht has published more than 500 scientific papers as well as popular-science books, including

References

  1.  NA31 Collaboration: First Evidence for Direct CP Violation. In: Physics Letters B, Vol. 206, 1988, p. 169.
  2.  Die Gründung des Vereins. Heisenberg-Gesellschaft e.V., retrieved 21 March 2016.
  3.  Prizes reward high-energy physics. physicsworld, retrieved 4 May 2019.

Navigationsmenü

Suche

Mitmachen

Werkzeuge

Drucken/­exportieren

In anderen Sprachen

Links hinzufügen

Skip to content

Science, Technology and Innovation – World @ Rodrigo Nunes Cal

https://en.wikipedia.org/wiki/American_Statistical_Association

https://www.youtube.com/embed/_NElMiAEDzI?version=3&rel=1&showsearch=0&showinfo=1&iv_load_policy=1&fs=1&hl=en&autohide=2&wmode=transparent

https://www.youtube.com/embed/FbxvI8bwm9M?version=3&rel=1&showsearch=0&showinfo=1&iv_load_policy=1&fs=1&hl=en&autohide=2&wmode=transparent

EDITORIAL  

It’s time to talk about ditching statistical significance

Looking beyond a much used and abused measure would make science harder, but better.

https://www.nature.com/articles/d41586-019-00874-8

Editorial

Moving to a World Beyond “p < 0.05”

Ronald L. Wasserstein, Allen L. Schirm & Nicole A. Lazar. Pages 1–19 | Published online: 20 Mar 2019. Contact: nfl5182@psu.edu, allenschirm@gmail.com, ron@amstat.org

American Statistical Association

https://www.amstat.org/

https://www.facebook.com/AmstatNews

https://ww2.amstat.org/meetings/jsm/2021/index.cfm?fbclid=IwAR2F5_7eVrrIsB62koou076DQsNp5xJUP9amBV0dMce6YgK3UhMSunrNVcg

blog

https://magazine.amstat.org/blog/
https://www.youtube.com/embed/QucLsumQ3n0?version=3&rel=1&showsearch=0&showinfo=1&iv_load_policy=1&fs=1&hl=en&autohide=2&wmode=transparent

http://www.gmail.com http://www.google.com http://www.yahoo.com http://www.wordpress.com http://www.harvard.edu http://www.facebook.com/scientificblog http://www.wikipedia.org http://www.princeton.edu http://www.facebook.com http://www.twitter.com http://www.youtube.com

http://www.linkedin.com http://www.forbes.com http://www.stanford.edu http://www.nobelprize.org http://www.nasa.gov http://www.mit.edu http://www.famerp.br http://www.unicamp.br http://www.ucla.edu http://www.caltech.edu http://www.michigan.edu http://www.cornell.edu

https://futurism.com/the-byte/flying-cars-are-actually-finally-becoming-a-reality-in-japan?fbclid=IwAR2iiWdOumDHu7CA93p5YKn3nRbHyTnqGZNOerHf-MY3sRHekszZG3phlKE

http://www.yale.edu http://www.columbia.edu http://www.ox.ac.uk/ https://www.cam.ac.uk/ https://www.karolinska.se/ https://www.manchester.ac.uk/ http://cnpq.br/ https://www.jax.org/

https://phys.org/news/2021-02-years-physicists-track-lost-particles.html?fbclid=IwAR0AmcJDP9VVLj6VbR48ae_OXjN7hHMCnlDqphMfJkyX5hl9MaLkZ1EFY2E

https://en.wikipedia.org/wiki/History_of_the_Internet

https://phys.org/news/2021-02-scientists-highly-accurate-digital-twin.html?fbclid=IwAR2sYE0rFi3i-nPXzzFgByYYbhRkPOv5yb0uYaFz1RCYtF61JBB2iXfkQ6c

Tesla Giga Berlin employee hints at new colors from world-class paint facility

https://www.teslarati.com/tesla-giga-berlin-new-possible-colors-world-class-facility/

https://genomebiology.biomedcentral.com/articles/10.1186/s13059-021-02283-5

The American Statistical Association @AmstatNews · Nonprofit Organization: https://www.facebook.com/AmstatNews

Machell Town

https://magazine.amstat.org/blog/2021/02/01/machell-town/

Scientists Have Observed A Rare Phenomenon Expanding Our Understanding Of The Quantum Universe.

https://www.secretsofuniverse.in/higgs-dalitz-decay/

Moving to a World Beyond “p < 0.05”: https://www.tandfonline.com/doi/full/10.1080/00031305.2019.1583913

What Does Statistically Significant Mean?

https://measuringu.com/statistically-significant/

There was no statistically significant difference. Now what? (Não houve diferença estatística significativa. E agora?)

https://posgraduando.com/diferenca-estatistica-significativa/
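The pieces linked above all turn on what a p-value threshold does and does not mean. As a toy illustration (my own sketch, not taken from any of the linked articles), the simulation below runs many experiments in which the null hypothesis is true by construction; a bright-line p < 0.05 cutoff still flags about 5% of them as "significant":

```python
# Toy illustration: under a true null hypothesis, a p < 0.05 threshold
# still "discovers" an effect in about 5% of experiments.
import math
import random

random.seed(42)

def z_test_p(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test p-value for the mean of `sample`,
    assuming a known standard deviation `sigma`."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))); two-sided p = 2 * (1 - Phi(|z|))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 10,000 experiments of 30 draws each, where the null (mean = 0) is true
trials = 10_000
false_positives = sum(
    z_test_p([random.gauss(0, 1) for _ in range(30)]) < 0.05
    for _ in range(trials)
)
print(false_positives / trials)  # close to 0.05
```

The point of the editorials is not that 5% is too high or too low, but that a dichotomy at any fixed cutoff discards most of the information a p-value carries.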

https://biovignan.blogspot.com/2020/03/this-60-years-old-virus-is-causing-all.html?fbclid=IwAR2ZWmltzqpe2Kd0CEDXQLvN1x1VruOgNnuYn0y9BTPKjWOEpcXo4dO0imo

https://www.ft.com/content/e0ecc6b6-5d43-11ea-b0ab-339c2307bcd4?fbclid=IwAR1pB68Q4wCmbQLA8s3cHJjNtu7o6Lz5YoErkVvDW23OpGEBLZCwabktm3Y

https://medicalxpress.com/news/2020-03-stem-cells-bone.html?fbclid=IwAR1HB_Yi30RsIfvtHTI1qGr_1Qfq8PWII15EIvhDUR6yq06uuF35H4G9R-c

https://phys.org/news/2020-03-years-scientists-reveal-benzene.html?fbclid=IwAR1nSn_mak1epvpIfsEjvfR5TFzloEO2lssHMO0R25CCHiCPBRNGKH74BV8

https://www.sciencemag.org/news/2020/03/115-million-more-80-boston-researchers-will-collaborate-tackle-covid-19?fbclid=IwAR0GmyW_wAOzz15yAo7RRfrUlQGSU-UsqIyoNFaI5EFQpLeljlkutsCWd9I

https://www.quantamagazine.org/tadashi-tokieda-collects-math-and-physics-surprises-20181127/?fbclid=IwAR0VfFko9agROZvrY0a_Z5ihb2JmxdVFiVMB8qXaW-2H8A6x3qrXuYfYe6E


Coronavirus and the $2bn race to find a vaccine (Financial Times)


MARCH 5, 2020

Researchers discover new stem cells that can generate new bone

by University of Connecticut

Image credit: CC0 Public Domain

A population of stem cells with the ability to generate new bone has been newly discovered by a group of researchers at the UConn School of Dental Medicine.

In the journal Stem Cells, lead investigator Dr. Ivo Kalajzic, professor of reconstructive sciences, postdoctoral fellows Dr. Sierra Root and Dr. Natalie Wee, and collaborators at Harvard, Maine Medical Research Center, and the University of Auckland present a new population of cells that reside along the vascular channels that stretch across the bone and connect the inner and outer parts of the bone.

“This is a new discovery of perivascular cells residing within the bone itself that can generate new bone forming cells,” said Kalajzic. “These cells likely regulate bone formation or participate in bone mass maintenance and repair.”

Stem cells for bone have long been thought to be present within bone marrow and on the outer surface of bone, serving as reserve cells that constantly generate new bone or participate in bone repair. Recent studies have described a network of vascular channels that help distribute blood cells out of the bone marrow, but no research had proved the existence of cells within these channels with the ability to form new bone.

In this study, Kalajzic and his team are the first to report the existence of these progenitor cells within cortical bone that can generate new bone-forming cells—osteoblasts—that can be used to help remodel a bone.

To reach this conclusion, the researchers observed the stem cells within an ex vivo bone transplantation model. These cells migrated out of the transplant, and began to reconstruct the bone marrow cavity and form new bone.

While this study shows there is a population of cells that can aid bone formation, more research needs to be done to determine the cells’ potential to regulate bone formation and resorption.



More information: Sierra H. Root et al., Perivascular osteoprogenitors are associated with transcortical channels of long bones, Stem Cells (2020). DOI: 10.1002/stem.3159. Provided by the University of Connecticut.


MARCH 5, 2020

After 90 years, scientists reveal the structure of benzene

by ARC Centre of Excellence in Exciton Science

DVMS structures for benzene. (a) Voronoi site for the RHF/6-31G(d) wavefunction; the electron positions of an arbitrary spin are shown as small yellow spheres. (b) Cross sections through the wavefunction around the Voronoi site in (a); C–C bonding electrons are shown as blue lobes, C–H bonds in grey. (c) Voronoi site showing staggered spins; the electron positions of each spin are shown as small yellow and green spheres. (d) Cross sections around the Voronoi site in (c); the two spins of the C–C bonding electrons are shown in blue and red, C–H bonds in grey. Credit: Nature Communications (2020). DOI: 10.1038/s41467-020-15039-9

One of the fundamental mysteries of chemistry has been solved by a collaboration between Exciton Science, UNSW and CSIRO – and the result may have implications for future designs of solar cells, organic light-emitting diodes and other next-generation technologies.

Ever since the 1930s, debate has raged inside chemistry circles concerning the fundamental electronic structure of benzene. It is a debate that in recent years has taken on added urgency, because benzene – which comprises six carbon atoms matched with six hydrogen atoms – is the fundamental building-block of many opto-electronic materials, which are revolutionising renewable energy and telecommunications tech.

The flat hexagonal ring is also a component of DNA, proteins, wood and petroleum.

The controversy around the structure of the molecule arises because, although it has few atomic components, its electrons exist in a state comprising not just four dimensions – like our everyday “big” world – but 126.
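The 126 figure can be checked directly: benzene's 42 electrons (six carbons with six electrons each, six hydrogens with one each) each contribute three spatial coordinates to the wavefunction. A quick sanity check of that count:

```python
# Where 126 comes from: benzene (C6H6) electron and dimension count.
carbon_electrons = 6 * 6     # six carbon atoms, atomic number 6
hydrogen_electrons = 6 * 1   # six hydrogen atoms, atomic number 1
electrons = carbon_electrons + hydrogen_electrons
dimensions = 3 * electrons   # three spatial coordinates per electron
print(electrons, dimensions)  # 42 126
```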

Analysing a system that complex has until now proved impossible, meaning that the precise behaviour of benzene electrons could not be discovered. And that represented a problem, because without that information, the stability of the molecule in tech applications could never be wholly understood.

Now, however, scientists led by Timothy Schmidt from the ARC Centre of Excellence in Exciton Science and UNSW Sydney have succeeded in unravelling the mystery – and the results came as a surprise. They have now been published in the journal Nature Communications.

Solving a mystery in 126 dimensions
An image of how the 126-dimensional wavefunction tile is cross-sectioned into our three dimensions 42 times, once for each electron, showing the domain of each electron in that tile. Credit: UNSW Sydney

Professor Schmidt, with colleagues from UNSW and CSIRO’s Data61, applied a complex algorithm-based method called dynamic Voronoi Metropolis sampling (DVMS) to benzene molecules in order to map their wavefunctions across all 126 dimensions.

Key to unravelling the complex problem was a new mathematical algorithm developed by co-author Dr Phil Kilby from CSIRO’s Data61. The algorithm allows the scientists to partition the dimensional space into equivalent “tiles”, each corresponding to a permutation of electron positions.
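DVMS itself operates in 126 dimensions, but the underlying Voronoi idea it builds on is simple: every point is assigned to whichever "site" it lies closest to, and the set of points nearest a given site forms that site's cell. A generic low-dimensional sketch (illustrative only; the sites and points below are made up, and this is not the paper's algorithm):

```python
# Generic Voronoi assignment: each point belongs to its nearest site.
# Illustration only, not the DVMS method from the Nature Communications paper.
import math

def nearest_site(point, sites):
    """Return the index of the site closest to `point` (Euclidean distance)."""
    return min(
        range(len(sites)),
        key=lambda i: math.dist(point, sites[i]),
    )

sites = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # hypothetical sites
points = [(0.1, 0.2), (0.9, 0.1), (0.2, 0.8)]  # hypothetical points
cells = [nearest_site(p, sites) for p in points]
print(cells)  # [0, 1, 2]
```

In the paper's setting the "points" are configurations of all 42 electrons, so each comparison happens in 126 dimensions rather than two, but the assignment rule is the same.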

Of particular interest to the scientists was understanding the “spin” of the electrons. All electrons have spin – it is the property that produces magnetism, among other fundamental forces – but how they interact with each other is at the base of a wide range of technologies, from light-emitting diodes to quantum computing.

“What we found was very surprising,” said Professor Schmidt. “The electrons with what’s known as up-spin double-bonded, where those with down-spin single-bonded, and vice versa.

“That isn’t how chemists think about benzene. Essentially it reduces the energy of the molecule, making it more stable, by getting electrons, which repel each other, out of each other’s way.”

Co-author Phil Kilby from Data61 added: “Although developed for this chemistry context, the algorithm we developed, for ‘matching with constraints’ can also be applied to a wide variety of areas, from staff rostering to kidney exchange programs.”



More information: Yu Liu et al., The electronic structure of benzene from a tiling of the correlated 126-dimensional wavefunction, Nature Communications (2020). DOI: 10.1038/s41467-020-15039-9. Provided by the ARC Centre of Excellence in Exciton Science.


More than $100 million from an enormous Chinese real estate company is slated to advance research into COVID-19, including studies of patient samples. Credit: Geert Vanden Wijngaert/Bloomberg via Getty Images

With $115 million, more than 80 Boston researchers will collaborate to tackle COVID-19

By Jennifer Couzin-Frankel, Mar. 5, 2020, 6:40 PM

A $115 million collaboration to tackle the rapidly spreading viral disease COVID-19, led by heavy hitters of Boston science and funded by a Chinese property development company, kicked off today as the group’s leaders pledged to take on the virus on many fronts. The project brings together researchers at many of the city’s top academic institutions, along with local biotechnology companies such as Moderna. Those leading it hope they can quickly funnel money into studies that will build off a new repository of samples from infected people and community surveillance, materials that can be rapidly shared among scientists. The project, they anticipate, should answer critical questions about how COVID-19 is spreading and how best to prevent and treat infections.

“It was time to harness the whole breadth of knowledge that’s available” in the Boston region, says immunologist Bruce Walker, a leader in HIV/AIDS research; director of the Ragon Institute of MGH, MIT, and Harvard; and joint head of the collaboration. He leads the project with Arlene Sharpe, co-director of the Evergrande Center for Immunologic Diseases at Harvard Medical School and Brigham and Women’s Hospital. Walker and Sharpe were among more than 80 scientists and clinicians who met Monday at Harvard Medical School—in person or, in the case of collaborators in China, remotely—to hammer out the details of the effort, including how to prioritize funding needs.

Walker and four others, including Sharpe, announced the venture this afternoon in an opinion piece in The Boston Globe. Other prominent researchers in the collaboration include George Daley, dean of Harvard Medical School; Harvard epidemiologist Marc Lipsitch; and immunologist Pardis Sabeti of the Broad Institute. The money comes from the China Evergrande Group, which has supported initiatives at Harvard, including opening the center Sharpe co-leads. The company is not garnering a return on its investment, Walker says.


As part of Monday’s meeting, the Boston team had a video conference with researchers in China led by Zhong Nanshan at the Guangzhou Institute of Respiratory Health. Zhong is helping coordinate China’s response to its massive COVID-19 outbreak and was a scientific leader during the 2002–03 severe acute respiratory syndrome outbreak. (Weeks of negotiation preceded the Chinese government allowing an international team organized by the World Health Organization to visit the country in mid-February, both to offer expertise and to learn from the country’s response to the epidemic.)

For Walker, the 7 February death of 34-year-old Li Wenliang, the Wuhan, China–based ophthalmologist who was punished for alerting colleagues to the outbreak in late December 2019, was especially alarming. “I thought, ‘I’ve never known a health care worker to die from influenza,’” Walker says. “This is not influenza.”

Goals of the new effort include improving diagnostic tests, better modeling to predict how the disease will spread, understanding the coronavirus’s basic biology and how it interacts with the human immune system, and developing new treatments. “There will be challenges in terms of competing priorities,” Walker acknowledges. Decisions about where to direct money will be made by a team of researchers.


The new money was welcomed by other researchers, especially because it came from a nonscientific source—reinforcing the global impact of COVID-19 and the need for varied sources to help combat it. “This is incredibly positive,” says Jeremy Farrar, director of biomedical research charity the Wellcome Trust. “We need the private sector to step up,” as the China Evergrande Group did.

“Coronavirus is not good for real estate,” any more than it’s good for any other part of society, says Sten Vermund, an epidemiologist and dean of the Yale School of Public Health.

The project has many priorities, including developing an animal model to test vaccines and treatments, creating an antibody test of infection to better gauge how deep into communities the virus has reached, and understanding exactly how transmission is happening.

Walker hopes other regions will establish similar collaborations in which researchers drop “institutional allegiances.” The local strategy, he believes, has potential: “We know each other,” he says. “We can’t begin to reorganize the whole world, but we can attempt to reorganize Boston.”

Finally, he argues that using philanthropic funds offers flexibility and speed that federal dollars cannot. Also today, Congress approved $8.3 billion in emergency coronavirus aid, a bill media reports say President Donald Trump is expected to sign. It’s not clear yet when the money will become available, especially to researchers who likely will have to write grant proposals and wait for them to be reviewed. Philanthropic funding allows scientists “to make their own decisions about what can be the most catalytic for entering into a new field,” Walker says. “We just can’t do that with federal funding.”

doi:10.1126/science.abb6021

© 2020 American Association for the Advancement of Science. All rights reserved. AAAS is a partner of HINARI, AGORA, OARE, CHORUS, CLOCKSS, CrossRef and COUNTER.


Q&A

A Collector of Math and Physics Surprises

Tadashi Tokieda discovers new physical phenomena by looking at the everyday world with the eyes of a child.

The mathematician Tadashi Tokieda, pictured at Stanford University, relishes the “toys” he finds in nature. He says that “a child and a scientist can share the same surprise.” Credit: Constanza Hevia H. for Quanta Magazine

Erica Klarreich

Contributing Correspondent


November 27, 2018




Tadashi Tokieda lives in a world in which ordinary objects do extraordinary things. Jars of rice refuse to roll down ramps. Strips of paper slip past solid obstacles. Balls swirling inside a bowl switch direction when more balls join them.

Yet Tokieda’s world is none other than our own. His public mathematics lectures could easily be mistaken for magic shows, but there’s no sleight of hand, no hidden compartments, no trick deck of cards. “All I’m doing is to introduce nature to the spectators and the spectators to nature,” Tokieda said. “That’s an interesting, grand magic show if you like.”

Tokieda, a mathematician at Stanford University, has collected more than 100 of what he calls “toys” — objects from daily life that are easy to make yet exhibit behavior so startling that they often puzzle even physicists. In public lectures and YouTube videos, Tokieda showcases his toys with witty, sparkling commentary, even though English is his seventh language. But his goal is only partly to entertain — it’s also to show people that scientific discoveries are not the exclusive preserve of professional scientists.

“The part of the universe that we can experience with our own biological senses is limited,” he said. “Nonetheless, in that range we can experience things ourselves. We can be surprised, not because we have been told to be surprised but because we actually see [something] and are surprised.”

Tokieda followed an indirect route into mathematics. Growing up in Japan, he started out as an artist and then became a classical philologist (someone who studies and reconstructs ancient languages). Quanta Magazine talked with Tokieda about his journey into mathematics and toy collecting. The interview has been condensed and edited for clarity.

You like to emphasize that the kind of toys for sale in a shop are not toys in your sense of the word.


If you can buy something from a toy store, then it’s not a toy for me, because that means that somebody has already designed a certain use for it, and you’re supposed to use it that way. If you buy some sort of very sophisticated electronic toy, the child is kind of a slave to this product. But it’s often the case that the child is completely uninterested in that toy itself but plays endlessly and happily with the wrapping paper and box, because the child by his own initiative and imagination makes those objects interesting.

People often confuse my toys with games — puzzles, Rubik’s Cubes and so forth. But these are absolutely outside my interest and competence. I’m not interested in games whose rules were set down by humans. I’m only interested in games set down by nature.

You see, puzzles are made by humans to make a situation tricky for other humans to crack. And that’s against my grain. I want all humans to cooperate and find something really good and surprising in nature and just understand it. Nobody should make it any harder. Nobody should put in any extra rules. A child and a scientist can share the same surprise.

How did you become a toy collector?

I used to do very pure mathematics — symplectic topology. And in those days, I could not possibly share what I was doing with friends and family who are not scientific.

But then when I was a postdoc, I was teaching myself physics and becoming a physicist, and some of it was tangible, especially since I’m often interested in macroscopic phenomena. So I decided that every time I wrote a paper or figured out something, however modest, I would design some tabletop experiment, or toy if you like, that I can produce in front of people in the kitchen, in the garden, and so on — some simple but robust thing that will share some of the fun I had in doing this. And of course, as you can imagine, this was a great success with friends and family.

And then it gradually took over, and now it’s the other way around. I look around my daily life and try to find those interesting phenomena. And then I start doing science out of that.

But you came upon one of your very first toylike phenomena much earlier in life, right? One that involves gluing together two Möbius strips and then cutting along their center lines, to produce, well, a surprise.

I stumbled on it when I was about seven. Anyone who is interested in mathematics plays with Möbius strips in childhood, obviously, and there’s a lot of places in popular literature that tell you that it’s interesting to cut a Möbius strip along the center line. And I was a Japanese boy interested in origami, so it’s very natural for such a boy to do this.

But then, between cutting the Möbius strip along the center line, and gluing Möbius strips together and then cutting — well, I wouldn’t call it an inevitable step, but there is a heuristic step there. It’s not that it’s miles off. And once you take that step, you discover a wonderful phenomenon, which is so beautiful and romantic. It’s waiting for you there.


Video: The Stanford mathematician Tadashi Tokieda demonstrates one of his physics “toys”: the curious higher and lower notes you hear when tapping a coffee mug with a spoon. Credit: Constanza Hevia H. for Quanta Magazine

Back then, you were planning to be an artist, right?

That’s what I was best at. I was a precocious child. At the age of five, I held an exhibit in a major gallery in Tokyo. The family legend says that some Hawaiian couple wandered into this gallery and saw one of my still life paintings. They wanted to buy it at a high price, but my mother vetoed it.

Everyone around me assumed, and I also assumed, that I would become a painter. In some sense, drawing and pictures are still what I care about most. I think as a matter of deep personality, I care more about pictures and drawing than languages, which was my next stage in life.

You hit that stage after you moved from Japan to France, entirely on your own, to go to high school at age 14.


It turned out to be a real epiphany of my life. In Japan, you know indirectly that other languages and cultures exist, but we are an island, and we don’t see that day to day. We learn something called English, but it’s an academic discipline, right? Can you actually live in that language? Can you fall in and out of love, and have a baby, and see death in that language? Surely not — it’s not precise enough, it’s not rich enough.

But when I arrived in France, here were people, wonderful people, who were living in French. I had this huge shock, the weight of a revelation. I said to myself, “I have to start learning languages.”

So you became a philologist. And it wasn’t until later, when you were already a philology lecturer in Tokyo, that you got interested in mathematics, right? What’s the story there?

I was finishing my dissertation, and I needed a biography of somebody, so I went to the library. Unfortunately, the biography was not where it was supposed to be, but next to the spot was a biography of Lev Davidovich Landau. He was a Russian physicist who single-handedly created a very powerful school of theoretical physics in Moscow.

I started reading this book, because I was going on a train journey and needed something to read. I had never heard of Landau. Indeed, like the rest of the population, I wasn’t even aware that science existed as a human undertaking. What are mathematicians? What are physicists? I had heard these words, but surely, these people don’t exist in real life.

The biography came to a point where Landau, at the age of 54, has a very serious automobile accident. He’s in a coma for a month and a half. Then his son Igor comes to the hospital to check on his father, and he’s awake. It’s a tear-jerking scene. However, Landau doesn’t say, “Oh, I’m happy to be alive,” or “My son, Igor,” or anything like that. Instead, he says, “Igor, you’re here. What’s the indefinite integral of dx over sin x?”

Photo of Tadashi Tokieda. Credit: Constanza Hevia H. for Quanta Magazine

Well, Igor takes out a scratch sheet of paper and starts doing calculations, but somehow he can’t get it. Landau says, “Igor, you regard yourself as an educated adult, yet you’re incapable of performing such a simple task.”

When I read this, I took it as a personal criticism. I regarded myself, rather arrogantly, as a very educated person, but I had never heard of calculus in my life. I hadn’t the slightest idea what this sequence of symbols meant.

I decided, as a personal revenge on Landau, to study the subject up to the point where I could solve this exercise. Landau said, in the biography, “Don’t waste your time on mathematicians and lectures and so on — instead, find a book with the largest number of solved exercises and go through them all. That’s how you learn mathematics.”  I went back to the library and found the mathematics book with the largest number of problems. The book was in Russian, and I didn’t know Russian, but a young linguist is not afraid to pick up another language.

So I devoted a whole winter to this, and after maybe a month and a half, I came to the point where I could actually do this integral. But I had inertia, so I kept going. I couldn’t stop. And toward the end of about three months, I realized two things. Number one, I was fairly good at this kind of silly manipulative exercise. Number two, maybe this is not the only way to study mathematics. So I looked around and found I could take a two years’ leave of absence from my job.
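For the record (an aside, not part of the interview): the integral Landau asked for has a classical closed form, obtained with the half-angle substitution t = tan(x/2), under which sin x = 2t/(1 + t²) and dx = 2 dt/(1 + t²):

```latex
\int \frac{dx}{\sin x}
  = \int \frac{1+t^2}{2t}\cdot\frac{2\,dt}{1+t^2}
  = \int \frac{dt}{t}
  = \ln\left|\tan\frac{x}{2}\right| + C,
\qquad t = \tan\frac{x}{2}.
```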

And then you went to Oxford to study mathematics.

As far as I could see, Oxford was the only place that would allow you to rush through an undergraduate program in two years. I didn’t know English, but a linguist is not afraid to pick up another language.

After a while, I said, “This is what I want to do.” I resigned from my job and went to Princeton to get a Ph.D.

It’s an unusual path into mathematics.


I don’t think I’ve had an unusual life, but it would be regarded as unusual if you take the standard sort of life people are supposed to have in a certain type of society and try to fit me in it. It’s just a matter of projection, if you see what I mean. If you project on the wrong axis, something looks very complicated. Maybe according to one projection, I have an unusual past. But I don’t think so, because I was living my life day by day in my own way. I never tried to do anything weird — it just happened this way.

And now you’re both a mathematician and a collector of toys. Do you see your toys as a way to try to jolt people out of our complacency about how well we understand the ordinary world around us?

On the contrary — I’m trying to jolt myself out of my complacency. When I share, I just want to share with people. I hope that they’ll like it, but I’m not trying to educate them, and I don’t think people are complacent. People are struggling in their own ways and making efforts and trying to improve. Who am I to jolt them out of complacency?

But I like to be surprised, and I like to be proved wrong. Not in public, because that’s humiliating. But in private, I really like to be proved wrong, because that means that afterward, if I come to terms with it when the dust settles, I am ever so slightly smarter than before, and I feel better that way.

How do you find your toys? You’ve said that it involves looking at the world with the eyes of a child.

Sometimes adults have a regrettable tendency to be interested only in things that are already labeled by other adults as interesting. Whereas if you come a little fresher, and a little more naive, you can look all over the place, whether it’s labeled or not, and find your own surprises.

So, when I’m washing my hands with my child, I might notice that if you open a faucet very thinly — not so that it drips, but a thin, steady stream of water — and you lift your finger gradually toward the faucet, you can actually wrinkle the water stream. It’s really fantastic. You can see beadlike wrinkles.

It turns out that this can be explained beautifully by surface tension. And this was known to some people, but 99.9% of the world population hasn’t seen this wrinkling of the water. So it’s a delightful thing. You don’t want to let go of that feeling of surprise.

And so that’s what you do. You just look around. And sometimes you feel tired, or you feel dizzy, or you feel preoccupied by other things, and you cannot do this. But you’re not always tired and you’re not always preoccupied. In those moments, you can find lots of wonderful things.

Do you find that if a physical phenomenon surprises you, that’s a pretty reliable guide that it will surprise other people?

It’s not a reliable guide at all. Sometimes I think something’s really surprising, and people will say, “Well, so what?”

One thing that is a bit disconcerting is that nowadays, more and more people spend so much time in virtual reality, where anything happens, that then no one is surprised by much in the physical world. That can be a sort of break point between their surprise and my surprise.


One very common question that comes up at the end of a lecture is, “Does all this have any practical applications?” It’s really intriguing because this question is asked in almost exactly the same words wherever I go. It’s like listening to a prerecorded message.

I ask them, what do you think constitutes a practical application? It’s very surprising. Roughly speaking, people converge within five to 10 minutes onto two categories of practical applications. One is, if you manage to make several million dollars instantly. The other is, if you manage to kill millions of people instantly. Many people are actually kind of shocked by their own answers.

Then I tell them that, well, I don’t know about other people, but I have a practical application for my toys. When I show my toys to some children, they seem to be happy. If that’s not a practical application, what is?

Correction: This article was revised on November 29, 2018, to correct an anecdote about the physicist Lev Davidovich Landau. Landau asked his son for the indefinite integral of dx over sin x, not the definite integral.




Bio vignan

Get information about biology and health related stuff

This 60-year-old virus is causing all sorts of problems: a history of the coronavirus.

What is a coronavirus?

Coronaviruses are a group of viruses that cause disease in mammals and birds. In humans they cause respiratory tract infections that are typically mild but can sometimes be fatal.

Constituents of the coronavirus

Coronaviruses belong to the subfamily Orthocoronavirinae. They have a positive-sense, single-stranded RNA genome and a nucleocapsid of helical symmetry. The genome ranges from 27 to 34 kilobases, the largest among the RNA viruses.

Etymology

The name corona derives from the Latin word for “crown,” after the crown-like appearance of the virus particles.

History of the coronavirus

Coronaviruses were first discovered in the 1960s. The earliest ones found were an infectious bronchitis virus in chickens and a virus isolated from two human patients with the common cold, which was named Human coronavirus 229E.

Coronaviruses in humans

Coronaviruses vary significantly in risk. Some, such as MERS-CoV, can kill more than 30% of those infected, while others are relatively harmless. Coronaviruses can cause colds with fever and sore throat from swollen adenoids, chiefly in spring, and can even cause pneumonia. SARS-CoV, identified in 2003, causes severe acute respiratory syndrome, a disease of both the upper and lower respiratory tract. Seven strains of human coronavirus are known:
1) Human coronavirus 229E
2) Human coronavirus OC43
3) SARS-CoV (severe acute respiratory syndrome coronavirus, 2003)
4) Human coronavirus NL63
5) Human coronavirus HKU1
6) MERS-CoV (Middle East respiratory syndrome-related coronavirus)
7) SARS-CoV-2 (2019), the cause of the disease known as COVID-19

Middle East respiratory syndrome

In September 2012 a new type of coronavirus was identified, initially called Novel Coronavirus 2012 and officially named MERS-CoV. The WHO issued a global alert, and on 28 September 2012 it stated that the virus did not seem to pass easily from person to person.


EDITORIAL 20 MARCH 2019

It’s time to talk about ditching statistical significance

Looking beyond a much used and abused measure would make science harder, but better. 


Some statisticians are calling for P values to be abandoned as an arbitrary threshold of significance. Credit: Erik Dreyer/Getty

Fans of The Hitchhiker’s Guide to the Galaxy know that the answer to life, the Universe and everything is 42. The joke, of course, is that truth cannot be revealed by a single number.

And yet this is the job often assigned to P values: a measure of how surprising a result is, given assumptions about an experiment, including that no effect exists. Whether a P value falls above or below an arbitrary threshold demarcating ‘statistical significance’ (such as 0.05) decides whether hypotheses are accepted, papers are published and products are brought to market. But using P values as the sole arbiter of what to accept as truth can also mean that some analyses are biased, some false positives are overhyped and some genuine effects are overlooked.
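As a concrete illustration (my own, not from the editorial): suppose a coin lands heads 61 times in 100 flips. The P value asks how often a fair coin would produce a count at least that far from the expected 50. A minimal Python sketch, using an exact binomial calculation (the function name and numbers are illustrative):

```python
from math import comb

def two_sided_p(heads: int, flips: int) -> float:
    """Exact two-sided P value under a fair-coin null hypothesis.

    Sums the binomial probabilities of every outcome at least as far
    from the expected count (flips / 2) as the observed one.
    """
    expected = flips / 2
    observed_gap = abs(heads - expected)
    total = 0.0
    for k in range(flips + 1):
        if abs(k - expected) >= observed_gap:
            total += comb(flips, k) * 0.5 ** flips
    return total

p = two_sided_p(61, 100)
print(f"P value: {p:.4f}")  # small, but it says nothing about *why* the coin deviated
```

A small P value here only measures surprise under the stated assumptions; as the editorial argues, it cannot by itself settle whether an effect is real or important.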


Change is in the air. In a Comment in this week’s issue, three statisticians call for scientists to abandon statistical significance. The authors do not call for P values themselves to be ditched as a statistical tool — rather, they want an end to their use as an arbitrary threshold of significance. More than 800 researchers have added their names as signatories. A series of related articles is being published by the American Statistical Association this week (R. L. Wasserstein et al. Am. Stat. https://doi.org/10.1080/00031305.2019.1583913; 2019). “The tool has become the tyrant,” laments one article.

Statistical significance is so deeply integrated into scientific practice and evaluation that extricating it would be painful. Critics will counter that arbitrary gatekeepers are better than unclear ones, and that the more useful argument is over which results should count for (or against) evidence of effect. There are reasonable viewpoints on all sides; Nature is not seeking to change how it considers statistical analysis in evaluation of papers at this time, but we encourage readers to share their views (see go.nature.com/correspondence).

If researchers do discard statistical significance, what should they do instead? They can start by educating themselves about statistical misconceptions. Most important will be the courage to consider uncertainty from multiple angles in every study. Logic, background knowledge and experimental design should be considered alongside P values and similar metrics to reach a conclusion and decide on its certainty.

When working out which methods to use, researchers should also focus as much as possible on actual problems. People who will duel to the death over abstract theories on the best way to use statistics often agree on results when they are presented with concrete scenarios.

Researchers should seek to analyse data in multiple ways to see whether different analyses converge on the same answer. Projects that have crowdsourced analyses of a data set to diverse teams suggest that this approach can work to validate findings and offer new insights.

In short, be sceptical, pick a good question, and try to answer it in many ways. It takes many numbers to get close to the truth.

Nature 567, 283 (2019); doi: 10.1038/d41586-019-00874-8


WHAT DOES STATISTICALLY SIGNIFICANT MEAN?

by Jeff Sauro, PhD | October 21, 2014

Statistically significant.

It’s a phrase that’s packed with both meaning and syllables. It’s hard to say and harder to understand.

Yet it’s one of the most common phrases heard when dealing with quantitative methods.

While the phrase statistically significant represents the result of a rational exercise with numbers, it has a way of evoking just as much emotion: bewilderment, resentment, confusion and even arrogance (for those in the know).

I’ve unpacked the most important concepts to help you the next time you hear the phrase.

Not Due to Chance

In principle, a statistically significant result (usually a difference) is a result that’s not attributed to chance.

More technically, it means that if the Null Hypothesis is true (which means there really is no difference), there’s a low probability of getting a result that large or larger.

Statisticians get really picky about the definition of statistical significance, and use confusing jargon to build a complicated definition. While it’s important to be clear on what statistical significance means technically, it’s just as important to be clear on what it means practically.

Consider these two important factors.

  1. Sampling Error. There’s always a chance that the differences we observe when measuring a sample of users are just the result of random noise: chance fluctuations, happenstance.
  2. Probability; never certainty. Statistics is about probability; you cannot buy 100% certainty. Statistics is about managing risk. Can we live with a 10-percent likelihood that our decision is wrong? A 5-percent likelihood? 33 percent? The answer depends on context: what does it cost to increase the probability of making the right choice, and what is the consequence (or potential consequence) of making the wrong choice? Most publications suggest a cutoff of 5%—it’s okay to be fooled by randomness 1 time out of 20. That’s a reasonably high standard, and it may match your circumstances. It could just as easily be overkill, or it could expose you to far more risk than you can afford.
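The sampling-error point can be made concrete with a quick simulation (a sketch of my own, not from the original article): draw many samples from one population with a fixed 5% conversion rate and watch the observed rates fluctuate by chance alone.

```python
import random

random.seed(42)  # fixed seed so runs are reproducible

TRUE_RATE = 0.05    # every simulated "user" comes from the same population
SAMPLE_SIZE = 200
EXPERIMENTS = 2000

# Observed conversion rate in each simulated sample of 200 users.
rates = [
    sum(random.random() < TRUE_RATE for _ in range(SAMPLE_SIZE)) / SAMPLE_SIZE
    for _ in range(EXPERIMENTS)
]

print(f"min observed rate: {min(rates):.3f}")
print(f"max observed rate: {max(rates):.3f}")
# Even though the true rate is fixed at 5%, individual samples
# routinely come out well below or well above it.
```

Two samples drawn from identical populations can easily differ by several percentage points, which is exactly the noise a significance test tries to account for.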

What it Means in Practice

Let’s look at a common scenario of A/B testing with, say, 435 users. During a week, they are randomly served either website landing page A or website landing page B.

  • 18 out of 220 users (8%) clicked through on landing page A.
  • 6 out of 215 (3%) clicked through on landing page B.

Do we have evidence that future users will click on landing page A more often than on landing page B? Can we reliably attribute the 5-percentage-point difference in click-through rates to the effectiveness of one landing page over the other, or is this random noise?

How Do We Get Statistical Significance?

The test we use to detect statistical difference depends on our metric type and on whether we’re comparing the same users (within subjects) or different users (between subjects) on the designs. To compare two conversion rates in an A/B test, as we’re doing here, we use a test of two proportions on different users (between subjects). These can be computed using the online calculator or downloadable Excel calculator.
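For readers who want to verify the arithmetic themselves, a two-proportion z-test can be sketched in a few lines of Python using only the standard library. This is a plain pooled z-test; the calculators mentioned above may use a slight variant (such as an N−1 adjustment), so small rounding differences are possible.

```python
from math import sqrt, erf

def two_proportion_test(x1, n1, x2, n2):
    """Two-sided z-test comparing two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)          # pooled proportion under the null hypothesis
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF (Phi via erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_test(18, 220, 6, 215)  # the landing-page data above
print(round(p, 3))  # 0.014, matching the p-value reported below
```

Running it on the 18/220 versus 6/215 click-through counts reproduces the p-value discussed in the next section.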

Below is a screenshot of the results from the A/B test calculator.

To determine whether the observed difference is statistically significant, we look at two outputs of our statistical test:

  1. P-value: The primary output of a statistical test is the p-value (probability value), the probability of observing a difference at least as large as the one we saw, assuming no real difference exists. The p-value in our example, 0.014, indicates that we’d expect to see a meaningless (random) difference of 5% or more only about 14 times in 1000. If we are comfortable with that level of chance (something we must decide before running the test), we declare the observed difference statistically significant, as most practitioners would here.
  2. CI around Difference: A confidence interval around a difference that does not cross zero also indicates statistical significance. The graph below shows the 95% confidence interval around the difference between the proportions output by the stats package. The observed difference was 5% (8% minus 3%), but we can expect that difference itself to fluctuate. The CI around the difference tells us it will most likely fall between about 1% and 10% in favor of Landing Page A. Because the entire interval sits above 0%, we can conclude that the difference is statistically significant (not due to chance). If the interval crossed zero (if it ran, say, from -2% to 7%), we could not be 95% confident that the difference is nonzero, or even that it favors Landing Page A.
    Figure 1: The blue bar shows the 5% difference; the black line shows the boundaries of the 95% confidence interval around it. Because the lower boundary is above 0%, we can also be 95% confident the difference is at least 0%, another indication of statistical significance. The boundaries of this interval also show the upper and lower bounds of the improvement we could expect if we went with landing page A. Many organizations will change a design only if the conversion-rate increase exceeds some minimum threshold, say 5%. In this example, the smallest increase we can be 95% confident of is 1%, well short of that 5% threshold.
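The interval described above can be reproduced with the standard Wald formula for the difference of two independent proportions. This is a hypothetical sketch; the calculator used in the article may apply an adjusted-Wald variant, which shifts the boundaries slightly.

```python
from math import sqrt

def diff_ci(x1, n1, x2, n2, z_crit=1.96):
    """Wald 95% confidence interval around the difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    # unpooled standard error: each proportion contributes its own variance
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff - z_crit * se, diff + z_crit * se

lo, hi = diff_ci(18, 220, 6, 215)
print(f"{lo:.1%} to {hi:.1%}")  # roughly 1% to 10%; the interval excludes zero
```

Since the lower boundary stays above zero, the same significance conclusion falls out of the interval without computing a p-value at all.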

What it Doesn’t Mean

Statistical significance does not mean practical significance.

The word “significance” in everyday usage connotes consequence and noteworthiness.

Just because you get a low p-value and conclude a difference is statistically significant doesn’t mean the difference is automatically important. That’s an unfortunate consequence of the word Sir Ronald Fisher chose when describing the method of statistical testing.

To declare practical significance, we need to determine whether the size of the difference is meaningful. In our conversion example, one landing page is generating more than twice as many conversions as the other. This is a relatively large difference for A/B testing, so in most cases, this statistical difference has practical significance as well. The lower boundary of the confidence interval around the difference also leads us to expect at least a 1% improvement. Whether that’s enough to have a practical (or a meaningful) impact on sales or website experience depends on the context.

Sample Size

As we might expect, the likelihood of obtaining statistically significant results increases as our sample size increases. For example, in a test of the conversion rates of a high-traffic ecommerce website, about three-quarters of users saw the current ad being tested and the remaining quarter saw the new ad.

  • 13 out of 3,135,760 (0.0004%) clicked through on the current ad
  • 10 out of 1,041,515 (0.0010%) clicked through on the new ad

The difference in conversion rates is statistically significant (p = 0.039) but, at about 0.0006 percentage points, tiny, and likely of no practical significance. However, since the new ad already exists, and since a modest increase is better than none, we might as well use it. (And in case you thought a lot of people clicked on ads, let this remind you how few do!)
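As a rough check on these numbers, the same pooled z-test sketched earlier can be applied to the ad counts. This is a normal-approximation sketch using only the standard library; with only 13 and 10 events an exact test would be more defensible, and calculators may differ in the third decimal place.

```python
from math import sqrt, erf

def p_two_proportions(x1, n1, x2, n2):
    """Two-sided p-value for a pooled z-test of two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# the tiny gap is still significant at these enormous sample sizes
p = p_two_proportions(13, 3_135_760, 10, 1_041_515)
# p comes out just under 0.05: significant, despite the minuscule difference
```

The point of the exercise: at millions of users, even a difference of a few ten-thousandths of a percent clears the conventional significance bar.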

Conversely, small sample sizes (say, fewer than 50 users) make statistical significance harder to find; but when we do find it with a small sample, the differences tend to be large and more likely to drive action.

Some standardized measures of difference, called effect sizes, help us interpret how large a difference is. Here, too, context determines whether the difference warrants action.
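Cohen’s h is one such standardized effect size for comparing two proportions, offered here as an illustrative choice since the text doesn’t name a specific measure:

```python
from math import asin, sqrt

def cohens_h(p1, p2):
    """Cohen's h: a standardized effect size for two proportions (arcsine scale)."""
    return 2 * (asin(sqrt(p1)) - asin(sqrt(p2)))

h = cohens_h(18 / 220, 6 / 215)  # the landing-page example above
print(round(h, 2))  # 0.24, a small-to-medium effect by Cohen's rough benchmarks
```

Unlike a raw percentage-point gap, h lands on a common scale, so an 8%-vs-3% difference can be compared with effects from other studies.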

Conclusion and Summary

Here’s a recap of statistical significance:

  • Statistically significant means a result is unlikely to be due to chance
  • The p-value is the probability of obtaining the difference we saw from a sample (or a larger one) if there really isn’t a difference for all users.
  • A conventional (and arbitrary) threshold for declaring statistical significance is a p-value of less than 0.05.
  • Statistical significance doesn’t mean practical significance. Only by considering context can we determine whether a difference is practically significant; that is, whether it requires action.
  • The confidence interval around the difference also indicates statistical significance if the interval does not cross zero. It also provides likely boundaries for any improvement, to aid in determining whether a difference really is noteworthy.
  • With large sample sizes, you’re virtually certain to see statistically significant results; in such situations, it’s important to interpret the size of the difference.
  • Small sample sizes often do not yield statistical significance; when they do, the differences themselves tend also to be practically significant; that is, meaningful enough to warrant action.

Now say statistically significant three times fast.

© 2004-2020 MeasuringU


Pós-Graduando – All about graduate studies

There Was No Statistically Significant Difference. Now What?

The phrase “there was no statistically significant difference” sounds almost like a death sentence to many undergraduates, graduate students, and even some researchers.

What do you mean, “no statistically significant difference”? I did everything right: I calculated the sample size correctly, set up the experiment carefully, collected the data carefully, chose the appropriate statistical tests, and now all my work is useless? The results I found have no value?

Calm down, it’s not quite like that! If you have the patience (and stamina) to read this text to the end, you’ll see just how misguided the dictatorship of the “significant p” that currently rules scientific research really is.


1. Why do you need statistical analysis in your scientific research in the first place?

Before we dive into the discussion of the “statistically significant difference,” we first need to recall why statistical analysis is needed in scientific research at all.

Thousands of scientific papers are published each year in hundreds of journals, and the overwhelming majority, in basic science and applied research alike, use statistics to support their conclusions.

In any scientific investigation, the general procedure is to formulate hypotheses and verify them, directly or through their consequences (Vianna, 2001). What obliges us to use statistical analysis to test those hypotheses is the presence, in every observation or dataset, of uncontrolled factors whose effects can cause as much variation in our data as the treatments under study (Pinto and Schwaab, 2011).

In a perfect little scientific world, the scientist could control every factor not under study, and the only variation in the data would come from the effect or phenomenon being studied.

In practice, though, a medical researcher cannot fully control the genetics, eating habits, work routines, and exercise routines of every subject in a trial. Likewise, an agronomy researcher cannot control factors such as climate, soil, pest attacks, disease incidence, or competition from weeds.

So anyone who turns to statistics as a decision-making tool runs into the concept of error before computing a single measure or statistical test, or even while still learning the discipline (Martins and Domingues, 2014).

Hypothesis tests, or significance tests, thus let us decide whether or not to reject a given statistical hypothesis with the smallest possible risk of error (Moore and Fligner, 2014).


2. The p-value and the statistically significant difference

When we find a statistically significant difference between groups or treatments, we infer that those differences should not be attributed to chance (or to error, or to uncontrolled factors), but rather to the larger effects of some of the groups or treatments (Rumsey, 2009).

So, when running an experiment, the scientist formulates a null hypothesis (H0), under which there is no difference between the effects under study, to be put to the test. The observed data and the statistical analysis are then used to decide whether to reject that null hypothesis (concluding it is false) or fail to reject it (allowing that it may be true) (Schwaab, 2007).

Assuming at the outset that the null hypothesis is true, if the results obtained from a sample differ sharply from the results expected under that hypothesis, we can conclude, based on probability theory, that the differences are significant, and we reject the null hypothesis in favor of another, called the alternative hypothesis (H1, or Ha) (Vieira, 2011).

The process resembles the presumption of innocence in criminal law. Until proven otherwise, the defendant is innocent; weighing the evidence, the judge or jury decides: guilty or not guilty. By analogy with hypothesis testing, the null hypothesis is held true until sufficiently strong evidence indicates that it is incorrect, with a low probability of error.

That probability of error is the p-value. For Sir Ronald Aylmer Fisher, the smaller the p-value, the greater the probability that the null hypothesis, the one under which there is no difference between groups or treatments, was false.

The irony is that when Fisher introduced the p-value in the 1920s, he never meant it to be a definitive test. Fisher saw the p-value merely as an informal way of judging whether a given piece of evidence deserved a second look. In other words, the p-value was never designed to be used the way it is used today!

When we conclude that a difference is not statistically significant, that does not actually mean the means are equal, or that no substantive effect exists. It means only that the evidence was not strong enough to show that the null hypothesis was false (Rumsey, 2009).
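To make this point concrete, here is a small numerical sketch with hypothetical counts: with only 50 users per group, even a sizable observed difference (16% versus 6%) fails to reach p < 0.05 under a standard pooled two-proportion z-test. The evidence is merely insufficient; it does not show the groups are equal.

```python
from math import sqrt, erf

def p_two_proportions(x1, n1, x2, n2):
    """Two-sided p-value for a pooled z-test of two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

p = p_two_proportions(8, 50, 3, 50)  # 16% vs. 6%, hypothetical small samples
print(round(p, 2))  # 0.11: not significant, yet far from proof of "no effect"
```

The same 10-percentage-point gap would be decisively significant with a few hundred users per group, which is exactly why “not significant” must never be read as “equal.”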


3. Significant difference or statistically significant difference?

Among the consequences of this frantic pursuit of p < 0.05 is a tendency to divert attention from the actual size of an effect. Some differences can be significant according to the statistics yet irrelevant in practice. And vice versa.

In 2013, for example, a study of more than 19,000 people concluded that couples in the United States who first met online were less prone to divorce (p < 0.002) and more likely to report high marital satisfaction (p < 0.001) than couples who first met in person.

That might sound impressive, were the observed effects not minuscule: meeting online moved the divorce rate from 7.67% to 5.96%, while marital satisfaction moved from 5.48 to 5.64 on a scale of zero to seven.

In some scientific articles (especially in English), to save space or for some other reason, authors omit the word “statistically” and write only that “there was no difference between groups” or “there was no significant difference between groups.” In drug studies, for example, different treatments may show no statistically significant difference between them, yet the death of a patient under one of the treatments would be highly significant from a clinical standpoint, for obvious reasons (Moore and Fligner, 2014).


4. The dictatorship of the statistically significant difference

Because significance values have wrongly acquired such scientific authority, we have witnessed a publication bias that favors only papers finding a statistically significant difference, as if studies that found no such differences had no applicability and could arouse no interest!

If our research cannot uncover the explanation for a given phenomenon, it at least tells us which explanation it is NOT. That matters: negative results are results too, just as valid as positive ones, and publishing them prevents duplicated effort, since scientists in the same field will not attempt the same experiments (Vianna, 2001).

Moreover, publishing this kind of article opens space for discussing why the experiments did not produce the expected results. The outcome is the same: savings in time and resources.

Proof that this perception matters is the emergence of journals such as the Journal of Negative Results in Biomedicine, the Journal of Negative Results – Ecology and Evolutionary Biology, the Journal of Pharmaceutical Negative Results, and the Journal of Interesting Negative Results, which publish only research that refutes hypotheses.

Another periodical, the Journal of Errology, for years published only the results of research that did NOT work out, such as protocols that failed to perform as intended or errors that invalidated a study. That journal used an open review system conducted through online discussion.

In some cases, however, not finding a statistically significant difference is as relevant as finding one. One researcher, for example, studied green manures (legumes) and industrial fertilizers (urea) in cornfields and observed that when corn was grown after hairy vetch (a legume used as green manure), there was no statistically significant difference among the urea doses applied to the corn.

How neat is that: all the nitrogen the corn needed was supplied by the green manure (hairy vetch), so whatever dose of urea was applied had no effect on corn yield. A considerable saving for the farmer, given the cost of industrial fertilizers.

So, when you run into “there was no statistically significant difference,” instead of getting upset and going off to “rant furiously on Twitter,” try to understand:

1. That the p-value and the vaunted statistically significant difference are not all they’re cracked up to be.
The p-value is not a definitive test. Take into account the magnitude of the effect, the confidence intervals, the sample size, and the power of the statistical test used.

2. What reasons or causes led to not finding a statistically significant difference.
Was it a problem with the number of samples? With the data-collection method? Did known uncontrolled factors interfere? Is the statistical analysis appropriate? Or, if none of these problems apply, how can the result be explained? What caused it?

3. Whether the result has practical application.
Assuming the expected effect really did not occur, what does that mean? What are the practical implications of the result?

4. Whether the result points to a direction to follow.
Since the effect or phenomenon could not be observed this way, how might it be observed? How can the problem be solved?

After all, if you do science and you’re not making mistakes, you’re probably not doing it right!

By Pós-Graduando | 31-10-2015 | debates


About the Author: Pós-Graduando

Creator and content editor of the blog, he has a hyperactive imagination and a pathological need to always be in a good mood. He believes that graduate school, like everything in life, can be interesting, fun, and uncomplicated.


60 Comments

  1. Paulo César, 31.10.15 at 12:02 – Excellent text! Besides being very didactic, it raises a very important discussion: the misuse of statistical analysis. Academia is full of guesswork and of researchers who know a great deal about their own field but very little about statistics. You still find researchers who run a particular statistical analysis because “everyone does it that way” (as if everyone doing it wrong made it fine to do it wrong too); because “I think it looks better this way” (as if personal opinion outweighed the requirements and assumptions of the tests); or because “I’ve always done it this way” (I’d rather not even comment on that one). This text is almost a public (academic) service announcement.