
The basic idea is that file IO and socket IO allow a task to be suspended while the IO is in flight, letting another task run in the meantime.
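As a concrete illustration of that, here's a minimal sketch using asyncio's stream API (the echo server and payload are made up for the example): the client task suspends at each awaited read or drain, and the event loop is free to run the server task, or anything else, in the meantime.

```python
import asyncio

async def echo_handler(reader, writer):
    data = await reader.read(100)  # this task suspends until bytes arrive
    writer.write(data)
    await writer.drain()
    writer.close()

async def main():
    # Port 0 asks the OS for any free port.
    server = await asyncio.start_server(echo_handler, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"hello")
    await writer.drain()
    reply = await reader.read(100)  # suspension point: other tasks can run here
    writer.close()
    server.close()
    await server.wait_closed()
    return reply

print(asyncio.run(main()))
```

While the client sits in `await reader.read(...)`, the loop schedules `echo_handler`; neither task blocks the other.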

On top of that, you can write a C extension that lets you run compute-heavy work in a way that allows your task to be suspended (for example, the C extension spins up its own thread). This extension has to be "very careful": it must release the GIL and avoid touching Python-side data while that work is running.

The way this sort of thing ends up working is that you pass data into a C extension, the extension takes ownership of the data (or copies it), does what it needs, and then hands Python back a result.
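A sketch of that hand-off without writing the C extension yourself: `hashlib`'s digest code is implemented in C and releases the GIL while hashing large buffers, so pushing it onto a worker thread with `asyncio.to_thread` keeps the event loop responsive. The payload size here is arbitrary, chosen just to make the work non-trivial.

```python
import asyncio
import hashlib

async def hash_in_background(data: bytes) -> str:
    # to_thread runs the blocking call in a worker thread; the awaiting
    # task is suspended until the digest comes back. Because hashlib's C
    # code drops the GIL for big buffers, other threads/tasks keep running.
    digest = await asyncio.to_thread(hashlib.sha256, data)
    return digest.hexdigest()

async def main():
    payload = b"x" * 10_000_000  # arbitrary large buffer for illustration
    return await hash_in_background(payload)

print(asyncio.run(main()))
```

Note the pattern the comment describes: the bytes are handed over once, the C side works on them without touching other Python objects, and only the finished digest crosses back.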

But if your task is pure-Python and compute-heavy, it won't be suspended: everything will still run, but that task will hog the CPU and the event loop. (Though if the compute-heavy work is, say, looping over data, you can `await asyncio.sleep(0)` every few iterations. This yields control so other tasks get a chance to run, and can be good enough to prevent weird bottlenecks.)
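A minimal sketch of that `sleep(0)` trick; the `ping` task, loop bounds, and yield interval are made up for illustration:

```python
import asyncio

async def crunch(n):
    total = 0
    for i in range(n):
        total += i * i
        if i % 1000 == 0:
            # Yield to the event loop so other tasks get a turn.
            await asyncio.sleep(0)
    return total

async def ping():
    # Stand-in for an IO-heavy task that needs regular turns on the loop.
    for _ in range(5):
        await asyncio.sleep(0.01)
    return "pong"

async def main():
    # Without the sleep(0) calls, crunch() would monopolize the loop
    # until it finished; with them, ping() is interleaved.
    return await asyncio.gather(crunch(100_000), ping())

print(asyncio.run(main()))
```

`asyncio.sleep(0)` is the documented way to deliberately yield one iteration of the loop without adding a real delay.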

But the upshot is that if you have N compute-heavy tasks, you probably won't get speed advantages. If you have 1 compute-heavy task and N IO-heavy tasks, you can get advantages (even if the IO-heavy stuff is interspersed). But if you have N compute-heavy tasks and not much IO-heavy work, multiprocessing can get you where you want, since it's usually IO-heavy workloads that async/await helps the most.
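One way to wire multiprocessing into an asyncio program is a `ProcessPoolExecutor` driven through `run_in_executor`; the job function and counts below are placeholders for whatever your compute-heavy tasks actually do:

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def busy(n):
    # Pure-Python compute; each worker process has its own interpreter
    # (and its own GIL), so the jobs can run in parallel.
    return sum(i * i for i in range(n))

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        # Each compute-heavy job goes to a separate process; awaiting the
        # futures keeps the event loop free for any IO-heavy tasks.
        jobs = [loop.run_in_executor(pool, busy, 200_000) for _ in range(4)]
        return await asyncio.gather(*jobs)

if __name__ == "__main__":  # required on platforms that spawn workers
    print(asyncio.run(main()))
```

This keeps the async program's structure while moving the CPU-bound parts out of the event loop's process entirely.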


