It misses an important element that may seem obvious but isn't: truth. Some insights are indeed general and surprising but, when examined closely, turn out to be false.
That's what happened to many of the ideas presented by Daniel Kahneman in *Thinking, Fast and Slow*. He obviously aimed for "general and surprising" but accepted the outcomes of lone, irreproducible experiments as solid results.
We (our current culture -- it wasn't always like that) love originality and "surprises", but as a rule, the more surprising the result, the more scrutiny it should withstand.
Circulating ideas where people disagree about whether or not they're true is an excellent way to discover things. Not just whether the particular idea is true, but why or why not, and where the uncertainty comes from.
In your example, perhaps the most lasting value of priming studies is a much greater understanding of how p-hacking actually misleads us. Everyone knew that it could in theory, but it was general and surprising that entire fields could have an apparent scientific consensus entirely based on p-hacking and file drawer effects.
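To make the mechanism concrete, here is a minimal Monte Carlo sketch (my own illustration, not from the thread): a thousand hypothetical labs each run an underpowered study of an effect that is truly zero, and only "significant" results escape the file drawer. The published literature then shows a consistent, inflated effect where none exists. The normal-approximation t-test is a simplification I chose to keep the example dependency-free.

```python
import math
import random
import statistics

random.seed(42)

def t_test_p(sample, mu=0.0):
    # One-sample t-test against mu, using a normal approximation for the
    # p-value -- crude at n = 20, but fine for illustration.
    n = len(sample)
    mean = statistics.fmean(sample)
    sd = statistics.stdev(sample)
    t = (mean - mu) / (sd / math.sqrt(n))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))  # two-sided
    return p, mean

published = []
for lab in range(1000):
    # The true effect is exactly zero, and n = 20 is badly underpowered anyway.
    data = [random.gauss(0.0, 1.0) for _ in range(20)]
    p, mean = t_test_p(data)
    if p < 0.05:  # the file drawer: only "hits" see print
        published.append(mean)

print(f"published studies: {len(published)} of 1000")
print(f"mean published |effect|: {statistics.fmean(abs(m) for m in published):.2f}")
```

Running this, roughly 5-7% of the null studies clear the significance bar, and every one of them reports a sizable "effect" (with n = 20, significance forces the sample mean to be about half a standard deviation from zero). A reader of only the published record would see dozens of agreeing studies: an apparent consensus built entirely on noise.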
Maybe we will learn... but maybe not. From Daniel Kahneman himself:
> there is a special irony in my mistake because the first paper that Amos Tversky and I published was about the belief in the "law of small numbers," which allows researchers to trust the results of underpowered studies with unreasonably small samples. (...) Our article was written in 1969 and published in 1971, but I failed to internalize its message.