That is a problem entirely of our own making.

The stereo beside me has components spanning 30 years, all connected with the same cabling.

I realise someone will come back with "Oh, but DRM, encryption, needing more information about the source and destination"... but I'd counter by pointing out that telecoms cabling works perfectly well for transporting all kinds of things across it. You can build the things you need into the protocols, without creating hardware problems that require constantly replaced, software-upgradeable cabling.
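To illustrate the point with a minimal sketch (not any particular telecom standard): a dumb physical link can carry arbitrary typed payloads if the framing lives in the protocol, so adding a new payload type never requires new wiring. The type IDs below are made up for the example.

    import struct

    # Hypothetical payload type IDs; a real protocol would define a registry.
    AUDIO, VIDEO, CONTROL = 0x01, 0x02, 0x03

    def frame(ptype: int, payload: bytes) -> bytes:
        # Type (1 byte) + length (4 bytes, big-endian) + payload.
        return struct.pack(">BI", ptype, len(payload)) + payload

    def deframe(buf: bytes):
        # Yield (type, payload) pairs. A receiver can skip unknown types,
        # which is what makes the scheme forward compatible.
        while buf:
            ptype, length = struct.unpack_from(">BI", buf)
            yield ptype, buf[5:5 + length]
            buf = buf[5 + length:]

    wire = frame(AUDIO, b"\x01\x02") + frame(CONTROL, b"volume=11")
    for ptype, payload in deframe(wire):
        print(ptype, payload)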



>That is a problem entirely of our own making.

Aren't all problems in computing this way?

>The stereo beside me has components that span 30 years and uses the same cabling for all components.

Yes. And it does just one thing: output analog, electrically encoded sound signals.

Now, call me again when that stereo cabling has to also support: audio out, digital audio out, video out, different resolutions, MIDI out, connecting to medical and scientific devices for control, charging, data backup and sync, ethernet, etc etc...

>I realise someone will come back with "Oh, but DRM, encryption, needing more information about the source and destination"... but I'd counter by pointing out that telecoms cabling works perfectly well for transporting all kinds of things across it.

Only because there is a computer at the other end (either an actual PC or a TV, etc.) that knows what to expect and how to decode and display it.

In this case, the computer is inside the cable, so the connected devices don't have to know everything.
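Lightning's actual wire protocol is proprietary and undocumented, so the following is only a sketch of the design idea, with made-up names: the device emits one generic stream, and each adapter carries the smarts to translate it for whatever sits on the far end. New outputs then require a new adapter, not a new device.

    # Hypothetical illustration of "the computer is inside the cable";
    # not Apple's actual (proprietary) Lightning protocol.

    class Adapter:
        def translate(self, frame: bytes) -> bytes:
            raise NotImplementedError

    class HdmiAdapter(Adapter):
        def translate(self, frame: bytes) -> bytes:
            return b"TMDS:" + frame    # stand-in for real HDMI signalling

    class VgaAdapter(Adapter):
        def translate(self, frame: bytes) -> bytes:
            return b"ANALOG:" + frame  # stand-in for a DAC stage

    def device_send(adapter: Adapter, frame: bytes) -> bytes:
        # The device side never changes; only the adapter does.
        return adapter.translate(frame)

    print(device_send(HdmiAdapter(), b"pixels"))
    print(device_send(VgaAdapter(), b"pixels"))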


The real issue here, though, is that it's a solution to a non-existent problem. This was potentially a problem before HDMI was a standard, but that's no longer true: modern HDMI carries video and audio, and has an Ethernet channel standardised right on the cable.

All Apple is doing here is proprietary lock-in - it's not more flexible in any way, whereas HDMI is signal-compatible with DVI and DisplayPort.


_Modern_ HDMI supports ARC, 3D and Ethernet. The early versions didn't, and you have to upgrade components to get the functionality. Apple is solving a real problem here: next time the HDMI consortium adds another random feature to their bus, Apple can just ship a new adapter that will work with all Lightning devices since none of the HDMI hardware is in the phone.


Since when did Apple care about not having to replace your iPhone?


Apple still sells millions of iPhone 4(S)s every month. I'd say they care about it to the tune of a few billion dollars.


The 4S is what, 18 months old?


Since they decided to issue OS updates to existing owners?


Those with original iPhones, 3G, 3GS or original iPads no longer get updates.


But you have to buy new HDMI cables every time the standard is upgraded. Right now the Ethernet channel is only 100 Mbps and the cable only supports 4K at 24 fps; when these are improved, you need a new cable. HDMI can also have latency issues: if you want to play something like Rocksmith, you need to use an analog output to circumvent them.
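Rough numbers behind that claim (back-of-the-envelope only, ignoring blanking intervals and coding overhead): HDMI 1.4's aggregate TMDS rate tops out around 10.2 Gbps, which fits 4K at 24 fps but not at 60 fps, hence HDMI 2.0's 18 Gbps and, with it, new cables.

    # Back-of-the-envelope: why 4K@24 fits an HDMI 1.4 link but 4K@60
    # needs HDMI 2.0. Ignores blanking intervals and coding overhead.
    def raw_video_gbps(width, height, fps, bits_per_pixel=24):
        return width * height * fps * bits_per_pixel / 1e9

    HDMI_1_4_GBPS = 10.2  # aggregate TMDS rate, HDMI 1.4
    HDMI_2_0_GBPS = 18.0  # aggregate TMDS rate, HDMI 2.0

    for fps in (24, 60):
        rate = raw_video_gbps(3840, 2160, fps)
        print(f"4K@{fps}: {rate:.1f} Gbps raw, "
              f"fits 1.4: {rate < HDMI_1_4_GBPS}, fits 2.0: {rate < HDMI_2_0_GBPS}")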


The amount of engineering that goes into making devices work over the huge installed base of cat5/cat5e/cat6 twisted pair is pretty remarkable; I was amazed Gig-E worked, let alone 10GE.

If there weren't the huge installed base of wires in the walls of buildings, we probably would have different standards for patch cables instead of fairly obscenely complex network interfaces.
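In rough numbers, the trick that makes Gig-E fit on that old wiring: 1000BASE-T keeps the 125 Mbaud symbol rate of 100BASE-TX but runs all four pairs in both directions at once (with echo cancellation), using PAM-5 coding that nets 2 information bits per symbol per pair.

    # Why 1000BASE-T squeezes 1 Gbps out of Cat5e: four pairs, each used
    # in both directions simultaneously at 125 Mbaud, with PAM-5 coding
    # netting 2 information bits per symbol per pair.
    pairs = 4
    baud_per_pair = 125e6   # symbols/second, same rate as 100BASE-TX
    bits_per_symbol = 2     # effective, after PAM-5 / trellis coding

    throughput_bps = pairs * baud_per_pair * bits_per_symbol
    print(f"{throughput_bps / 1e9:.0f} Gbps")  # -> 1 Gbps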


They all use Ethernet, and the standards are set for it: all peripherals need to talk the same protocol, and newer devices should be backward compatible.

But with Lightning you can design cables that don't need the device's computing power for all these transformations. It's pretty smart: they built this cable for the next decade, and we will see interesting applications from Apple and its partners in the coming years.


The protocol certainly is not the same, and it's not exactly backward compatible. 1000BASE-T is a fairly complex interface that is more similar to S(H)DSL than to traditional Ethernet over twisted pair, or to anything a sane person would call a "baseband interface". This, by the way, is the reason SFP/GBIC-to-1000BASE-T transceivers are not supported by all devices with the relevant slot, are not exactly compatible between manufacturers, and sometimes are not compatible with non-gigabit devices at the other end of the link (i.e. not backward-compatible on the RJ45 side of things).

And before that, in the days of the first Fast Ethernet implementations, there were also incompatibilities of a similar type, which is mostly the reason every managed or even "smart" switch lets you disable auto-negotiation quite prominently in its configuration interface. Although that was more about software issues than about an interoperable implementation being impractical, as is to some extent the case with copper SFP modules.

Bottom line: "I don't know what the next network interface will look like, but it will be called Ethernet and use RJ45 connectors on CatN (for some value of N) twisted pair ..."


At the physical level, there are big differences between 10baseT and 10GE, even though both are "ethernet".


While I sympathize, this isn't just about A/V standards.

This allows for absolutely ANYTHING, barring bandwidth concerns.

Ubiquitous body computers with skin access ports? We can make an interface for that.

"Quantum broadcast" antenna technology? We can make a dongle for that.


Which is precisely why I gave the example of cabling for telecommunications.


I find your example lacking in perspective. In telecom, even under TDM standards like SDH, SONET, etc., there was a plethora of incompatible connectors: RJ45, RJ12, BNC (literally dozens of variations). The same is true in optics, with SC, ST, LC, FC, etc. The last decade's shift towards Ethernet everywhere is the result of "mass" consumer desire (i.e. you guys on HN building things that use Ethernet, a protocol originally designed for a limited number of workstations in a small environment), and that has put the RJ45 at the top for a lot of devices. Recall that 15 years ago AS/400, ATM, and FDDI all had their own connectors and standards. All of this (and much more) just to send some bits over a physical connection. Just crack open a Grays catalog and go to town.

For the record, each of these has a purpose and a reason. Some were a function of the materials available at the time; others were due to specific environmental concerns. I would not class any of them in the realm of "lock-in". Given the capital-intensive nature of the industry, each was a good business decision at the time. It wasn't as easy as buying a $40 plug.


Which actually doesn't make any sense. Sure, copper and cable have been around for thirty years, but as Google Fiber, Verizon FiOS, and AT&T U-verse show, it's often necessary to install new cabling and other hardware. This is true of phone (DSL upgrades) and cable internet as well. I used to work as a subcontractor for Time Warner's Road Runner internet service when it first arrived in Northeast Ohio. A common part of the work was installing new routers and switches after the new cable lines were installed in the individual neighborhoods.


It's not clear that it can support "absolutely ANYTHING" for values of anything beyond what can be achieved with USB 2.0 OTG, though. So far it hasn't.



