
> Theory dictates that with stronger magnets, the reactor can be scaled down (with the square/cube, can't remember exactly), and thus cost and time to develop

Here's the quick summary:

B: magnetic field strength

R: length scale

Fusion rate ∝ (plasma pressure)^2 ∝ B^4

Energy gain (Q) ∝ R^1.3 B^3

Power density ∝ R B^4

Cost ∝ R^3

So, say for example you're targeting a fixed Q. Doubling the magnetic field strength gives R1 = R0 / 2^(3/1.3) ≈ 0.2 R0. And 0.2 R0 translates to (0.2)^3 = 0.008, i.e. 0.8% of the cost.

The scaling is absolutely insane, and a stronger magnetic field has other advantages (such as making plasma instability far less of a concern), though structural loads can be an issue (that, at least, is a relatively straightforward engineering problem).

If you take 12T for ITER and 20T for SPARC, that's not actually 2x, it's 1.67x, which translates to roughly 30% the size and 3% the cost (and time). It should also be noted that this is just rough, order-of-magnitude estimation, but it should be broadly accurate.
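
If you want to check the arithmetic yourself, here's a quick sketch in Python. It just evaluates the scaling relations from the summary above (Q ∝ R^1.3 B^3 and cost ∝ R^3); the function name is mine:

    # Scaling relations from the summary above:
    #   Q    ∝ R^1.3 * B^3   =>  at fixed Q, R ∝ B^(-3/1.3)
    #   Cost ∝ R^3
    def scale_at_fixed_q(b_ratio):
        """Relative size and cost when B is scaled by b_ratio, holding Q fixed."""
        r_ratio = b_ratio ** (-3 / 1.3)  # solve R^1.3 * B^3 = const for R
        cost_ratio = r_ratio ** 3        # cost scales with volume
        return r_ratio, cost_ratio

    print(scale_at_fixed_q(2.0))      # doubling B: ~0.20x size, ~0.008x cost
    print(scale_at_fixed_q(20 / 12))  # ITER 12T -> SPARC 20T: ~0.31x size, ~0.03x cost

The second line reproduces the ~30% size / ~3% cost figures above.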

For a more detailed explanation: https://youtu.be/KkpqA8yG9T4



To be fair, the main reason instabilities are less of a concern is wrapped up in that B^4 scaling.


I understand there's a bit more to it than that.

Here's the section in Professor Whyte's talk: https://youtu.be/KkpqA8yG9T4?t=2215

> It's even more subtle than that. In fact, this is really one of the things we've studied at MIT: there's other things that come in, in terms of benefits, particularly when you make the magnetic field very high. It basically starts to tame just the whole suite of plasma instabilities that exist.



