I suspect this is a compute tradeoff they decided to make. It uses much less compute than SORA, so it's feasible to scale up for the public, but it's correspondingly less coherent. If you look at OpenAI's technical report for SORA, they show examples with "base compute" and "4x compute". This output looks like roughly "2x" to me.

I wonder if they're really talking about model size rather than compute.
