Adjusting the significance level does not affect the reliability of the study
What an observation like 'it turned out that p < 0.01, but the significance level was 0.05' means exactly is almost impossible to explain in practice. Applied researchers (such as medical researchers, biologists, and psychologists) tend to interpret a small p-value simply as a small probability of a false positive, and even professional statisticians unfortunately sometimes make similar mistakes.
In his recent article, Grünwald provides a mathematical proof showing that if you work with e-values instead of p-values, adjusting the significance level is indeed possible: you may change the significance level at a later stage of the research project, and the research result remains reliable.
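The full proof is in Grünwald's article, but the core intuition can be sketched in a few lines. An e-value is a nonnegative test statistic whose expected value is at most 1 when the null hypothesis is true. Markov's inequality then bounds the false-positive probability for every significance level at once, which is why the level may still be picked after the data have been seen. This is only an informal sketch, not Grünwald's full argument:

```latex
% Informal sketch of the key step, not the full argument in Grünwald's article:
% an e-value E satisfies E_{H_0}[E] <= 1, and rejecting the null hypothesis
% whenever E >= 1/alpha therefore has false-positive probability at most alpha,
% simultaneously for every alpha, so alpha may be chosen afterwards.
\[
  \mathbb{E}_{H_0}[E] \le 1
  \;\Longrightarrow\;
  \Pr_{H_0}\!\Bigl(E \ge \tfrac{1}{\alpha}\Bigr)
  \;\le\; \alpha\,\mathbb{E}_{H_0}[E] \;\le\; \alpha
  \quad\text{for every } \alpha \in (0,1].
\]
```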
Previously published research by Grünwald and colleagues had already shown that with the e-value (in contrast to the p-value) you may adjust the number of participants in your study: you may stop whenever you want and keep adding data for as long as you want. It now also becomes clear that e-values are more flexible than p-values and confidence intervals in yet another way: the significance level may also be determined retrospectively.
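The first kind of flexibility, optional stopping, can be illustrated with a minimal simulation sketch. This is not code from Grünwald's work; the effect size `delta`, the maximum sample size, and the significance level are illustrative assumptions. It monitors a likelihood-ratio e-value after every observation and compares it with the common but invalid practice of 'peeking' at a p-value after every observation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05      # significance level (illustrative choice)
delta = 0.5       # hypothetical effect size under the alternative
n_max = 500       # maximum number of observations per simulated study
n_sims = 2000     # number of simulated studies

false_pos_e = 0   # studies where the e-value procedure rejects
false_pos_p = 0   # studies where p-value peeking rejects

for _ in range(n_sims):
    x = rng.normal(0.0, 1.0, n_max)   # data generated under the null hypothesis

    # Running likelihood-ratio e-value for N(delta, 1) vs N(0, 1):
    # a nonnegative martingale with expectation 1 under the null.
    log_e = np.cumsum(delta * x - 0.5 * delta ** 2)

    # Naive "peeking" p-value: a one-sided z-test recomputed after every observation.
    n = np.arange(1, n_max + 1)
    z = np.cumsum(x) / np.sqrt(n)
    p = stats.norm.sf(z)

    false_pos_e += np.any(np.exp(log_e) >= 1 / alpha)  # reject when E_n >= 1/alpha
    false_pos_p += np.any(p <= alpha)                   # reject when p_n <= alpha

# The e-value rate is guaranteed to be at most alpha (Ville's inequality),
# even with continuous monitoring; the peeking p-value rate is typically far above alpha.
print(f"false-positive rate with e-value monitoring: {false_pos_e / n_sims:.3f}")
print(f"false-positive rate with p-value peeking:    {false_pos_p / n_sims:.3f}")
```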
The confusion between the p-value and the significance level is perhaps the main reason why p-values are so difficult to understand, and this is what makes Grünwald's discovery revolutionary: he shows that this problem is largely eliminated with the e-value.
Earlier this year, Peter Grünwald, senior researcher in CWI's Machine Learning research group, was awarded an ERC Advanced Grant to further develop flexible statistical methods based on the e-value, a robust and flexible alternative to the p-value.