Hacker News

It's a real problem. Are there any voice dialing programs for Android phones which do voice recognition locally and don't require Google services? That's the way it used to work until Google broke it so they could monitor all your dialing.


Google in 2007: Do no evil.

Google in 2017: Skynet didn't build itself, people! We need more ML training data!


Skynet v. 0.1 didn't build itself. Clearly, though, once it becomes a minimum viable product, it will produce the next generations of itself.

Or will it? If it's an AI intent on approximating human speech processing, why should it be any good at programming? See also https://www.fanfiction.net/s/9658524/1/Branches-on-the-Tree-....


I wonder what makes people think that an AI needs to be able to code to improve itself. I don't see infant brains "programming" themselves to get better.

The next step in programming is probably not anything like programming. Machine learning certainly isn't. Creating a neural net that achieves superhuman image recognition takes about 30 lines of code in Keras. And terabytes of training data. But the programming looks nothing like what someone in the '90s would have expected. Except maybe a bunch of AI researchers.
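For concreteness, here's roughly what that "30 lines" looks like. This is only a sketch, assuming Keras with a TensorFlow backend; the architecture is illustrative, not any particular state-of-the-art model, and the accuracy comes entirely from the (absent) training data:

```python
# A small convnet for 10-class image recognition, assuming Keras on a
# TensorFlow backend. Illustrative only: the code is trivial; the hard
# part is the training data it would need.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),          # small RGB images
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # 10 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(x_train, y_train, epochs=...)  # the part that needs terabytes
print(model.count_params())
```

Nothing in there resembles hand-writing a recognition algorithm; you declare a shape and an objective, and the optimizer does the rest.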


> I wonder what makes people think that an AI needs to be able to code to improve itself. I don't see infant brains "programming" themselves to get better.

That's largely because you haven't looked. Infants are constantly developing and pruning pathways and connections in their brain, which effectively changes both the hardware and the software.


They don't do that consciously (with an agenda). Infants are just being infants. If an AI just needs to keep doing AI things to "reprogram" itself, then there's no barrier to it doing that.


Infants also don't survive by themselves. Leave an infant alone and it will die.


I'm just saying that infants learn a whole lot without consciously making an effort to do so. They just provide data through their sensors by existing.


You build a metaoptimizer. The metaoptimizer decides which new layers are necessary, and whether the cost of "buying new neurons" (adding compute instances & storage) is worth the benefit to the high-level optimization functions (or pruning / reallocating). Meta-optimization is clearly a thing; it's in the research papers. And once you build a good one, you're done programming.
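In miniature, that cost-benefit loop is just penalized model selection. A toy sketch, with lots of assumptions: the "neurons" are polynomial coefficients, np.polyfit is the base optimizer, and COST_PER_PARAM is a made-up price for extra compute/storage:

```python
import numpy as np

rng = np.random.default_rng(1)

# The "task": fit a noisy cubic. The base optimizer is np.polyfit; the
# meta-optimizer chooses model capacity (polynomial degree), charging an
# assumed price per parameter for "buying new neurons".
x = np.linspace(-1, 1, 200)
y = 3 * x**3 - x + rng.normal(0, 0.1, x.size)

x_train, y_train = x[::2], y[::2]
x_val, y_val = x[1::2], y[1::2]

COST_PER_PARAM = 0.002  # hypothetical cost of extra compute/storage

def penalized_score(degree):
    coeffs = np.polyfit(x_train, y_train, degree)      # base optimization
    val_err = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    return val_err + COST_PER_PARAM * (degree + 1)     # benefit minus cost

# The meta-optimizer: search over capacities, keep the best trade-off.
best = min(range(1, 15), key=penalized_score)
print("chosen capacity (degree):", best)
```

Real meta-optimization (neural architecture search, pruning schedules) is far more elaborate, but the shape is the same: an outer loop spends a budget on capacity only where it pays for itself.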

My new prediction is Skynet will come out of the financial bots. It solves the money problem, because those bots will become financially self-sufficient quickly.

Regrettably, the target function is "make money" which will cause our next-gen species to be psychopathic and uninterested in human life, except, in the short term, as a market to be plundered. Not a good outcome.


Thinking about the way we learn, I'd say it is a LOT like programming ourselves. Study and practice both involve learning to carry out the steps of a mechanism to achieve an outcome, and each of us has to build that mechanism from scratch. The difference is that not all of us need to be self-aware for the process to keep working. However, the best of us seem to have very detailed systems worked out for "programming ourselves".


But we are talking about literal programming here. Not conceptual programming.

I don't think that an AI will program itself anything like we program computers.

People who "program" themselves really just provide good data. Positive reinforcement, "healthy thoughts", "gaining valuable reference experiences", things like that. When you want to learn a new language, at the simplest level, you can immerse yourself in that country's culture and you will learn automatically. You don't hop inside your brain and move the axons around. You just provide good data and let the engine do its job, and why would an AI be incapable of doing that?

They already do that. "Backprop" is that mechanism. Not yet on a really advanced level, but a machine learning algorithm already introduces its learning back into itself and incrementally improves on that.
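A toy illustration of that feedback loop, in plain NumPy: a hypothetical two-layer net on XOR, where the "learning fed back into itself" is just the gradient update at the end of each pass:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, the classic non-linearly-separable problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units, sigmoid activations throughout.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))

    # Backward pass: propagate the error back through each layer.
    dp = 2 * (p - y) / len(X)            # dLoss/dp
    dz2 = dp * p * (1 - p)               # through the output sigmoid
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dh = dz2 @ W2.T
    dz1 = dh * h * (1 - h)               # through the hidden sigmoid
    dW1 = X.T @ dz1; db1 = dz1.sum(0)

    # The update: the network feeds its own error signal back into itself.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("loss:", losses[0], "->", losses[-1])
```

No part of that loop rewrites the code; the "reprogramming" is entirely numbers changing in response to data.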


The most obvious source of runaway (as opposed to incremental) improvement would be an AI written by humans that is better at writing AIs than humans are. It could then write an AI even better than itself recursively until diminishing returns are reached, causing a near instantaneous jump in its intelligence.


I don't know, Rogue One had a droid typing on a keyboard /s


That's a pretty interesting interpretation of the Terminator story and rules. I hadn't seen that one before, thanks for the link!

But ... ugh, rationalist fiction always makes me a little sad. I share the motivation, the annoyance at how pervasive the plot-driven, unrealistically stupid decision is in fiction. I want to like it, but it's always so saturated with authorial naivete, inexperience, and smugness.

I guess I just wish they'd revere Iain M. Banks more and Yudkowsky less.


> I guess I just wish they'd revere Iain M. Banks more and Yudkowsky less.

Definitely. Though Banks doesn't have quite the Internet following, possibly because he was publishing before the Internet took off and compounded by the fact that I can't link to a free, online copy of his work.


> Skynet v. 0.1 didn't build itself.

v. 0.1 was based on the chip from the Terminator that had been sent back in time by Skynet. Humans otherwise were decades away from being able to design something like that.

I think it's reasonable to say that Skynet built itself.


> Or will it? If it's an AI intent on approximating human speech processing, why should it be any good at programming?

Exactly! Any sophisticated machine will be as full of errors as a human, just different kinds of errors. Unless it's a Gödel machine [1].

[1] https://en.wikipedia.org/wiki/G%C3%B6del_machine


From that last question, I imagined an AI having a hell of a time trying to figure out programming. Haha..


I think ideas like shoehorning SMS into Hangouts (or vice versa?) are driven more by misguided bonus formulas than any overtly evil intent.


This holds for virtually all evil.


Generally speaking, people aren't out there to screw you over. Voice recognition happening on Google's servers is many, many times better than what can be achieved on your tiny phone. Your phone doesn't even have the storage for the models used for phoneme decoding in state-of-the-art speech recognition systems.

Speech recognition ain't like dusting crops, boy.



