Hacker News
moultano on Oct 20, 2020 | on: Why deep learning works even though it shouldn’t
This isn't true. A loss function that asymptotes can have no minimum at all, as commonly used loss functions do: the loss keeps decreasing forever without ever attaining its infimum.
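To make this concrete, here is a sketch (mine, not from the thread) using the logistic loss on linearly separable data. The loss tends to 0 as the weight grows, but no finite weight attains that infimum, so the loss has no minimum:

```python
import math

# Toy 1-D linearly separable dataset: label = sign(x)
xs = [(1.0, 1), (-1.0, -1)]

def logistic_loss(w):
    # mean of log(1 + exp(-y * w * x)) over the dataset
    return sum(math.log1p(math.exp(-y * w * x)) for x, y in xs) / len(xs)

# The loss strictly decreases as w grows, approaching (but never reaching) 0:
for w in [1.0, 10.0, 100.0]:
    print(w, logistic_loss(w))
# Any candidate minimizer w is beaten by 2*w, so no minimum exists.
```

In real arithmetic the infimum is 0 and is unattained; gradient descent on such a loss pushes the weights toward infinity rather than converging to a minimizer.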
bananaface on Oct 21, 2020
Doesn't the fact that the network is discrete (floats have finite precision) mean this isn't actually the case? There's a finite number of states the net can be in, and one (or more) of them is best.
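This point can be illustrated (my sketch, not from the thread) with float64 arithmetic: `exp(-t)` underflows to exactly 0.0 for large `t`, so the computed logistic loss actually reaches 0 at some representable weight, and floats near any value are separated by a nonzero spacing, so the set of reachable parameter states is finite:

```python
import math

# In float64, exp(-t) underflows to exactly 0.0 once t is large enough
# (around t > 745), so the computed loss log1p(exp(-t)) is exactly 0.0:
# in finite precision the infimum IS attained at a representable weight.
print(math.log1p(math.exp(-800.0)))

# Adjacent floats are a finite distance apart (the "unit in the last place"),
# so there are only finitely many distinct parameter states.
print(math.ulp(1.0))
```

So while the real-valued loss has no minimizer, the floating-point loss, viewed as a function on a finite set of machine states, necessarily has one.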