Hacker News

> someone trusted it was safe to use a Google link.

That someone made a poor decision to rely on anything made by Google.



Hindsight is 20/20. Google was considered by geeks to be a very reliable company at some point.


Using a link shortener for any kind of long-term link, no matter who hosts it, has never been a good idea. They're for ephemeral links shared over limited mediums like SMS or where a human would have to manually copy the link from the medium to the browsing device like a TV ad. If you put one in a document intended for digital consumption you've already screwed up.


Link shorteners are old enough that, by now, more of the URLs they pointed to have likely rotted away than the shorteners themselves.

Go look at a decade+ old webpage. So many of the links to specific resources (as in, not just a link to a domain name with no path) simply don't work anymore.


I think it would be easy for these services to audit their link database and cull any that have had dead endpoints for more than 12 months.

That would come off as far less user-hostile than this move while still achieving the goal of trimming truly unnecessary bloat from their database. It also doesn't require you to keep track of how often a link is followed, which incurs its own small cost.
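The audit-and-cull idea above can be sketched in a few lines. This is a hedged illustration, not any real shortener's code: the `links` record shape, `target_is_alive`, `should_cull`, and `audit` are all hypothetical names, and the 12-month grace period is the one proposed in the comment.

```python
# Sketch of the audit idea: track a "last seen alive" timestamp per
# shortened link, probe targets periodically, and cull only links whose
# targets have been dead for more than 12 months. All names here are
# hypothetical, not any real shortener's schema.
from datetime import datetime, timedelta
from urllib.request import Request, urlopen
from urllib.error import URLError

DEAD_GRACE = timedelta(days=365)  # ~12 months before a dead link is culled

def target_is_alive(url: str, timeout: float = 5.0) -> bool:
    """Probe the target with a HEAD request; 2xx/3xx counts as alive."""
    try:
        req = Request(url, method="HEAD")
        with urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError):
        return False

def should_cull(last_alive: datetime, now: datetime) -> bool:
    """A link qualifies for culling only once its target has been
    dead for longer than the grace period."""
    return now - last_alive > DEAD_GRACE

def audit(links: dict[str, dict], now: datetime) -> list[str]:
    """Return the short codes whose targets qualify for culling."""
    to_cull = []
    for code, record in links.items():
        if target_is_alive(record["target"]):
            record["last_alive"] = now  # refresh the liveness timestamp
        elif should_cull(record["last_alive"], now):
            to_cull.append(code)
    return to_cull
```

The key design point is that `should_cull` is pure: a link is never removed on the first failed probe, only after the dead state has persisted past the grace period.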


> cull any that have had dead endpoints

That actually seems just as bad to me, since the URL often has enough data to figure out what was being pointed to even if the exact URL format of a site has changed or even if a site has gone offline. It might be like:

kmart dot com / product.aspx?SKU=12345678&search_term=Staplers or /products/swingline-red-stapler-1235467890

Those URLs would now be dead, and kmart itself will soon be fully dead, but someone can still understand what was being linked to.

Even if the URL is 404, it's still possibly useful information for someone looking at some old resource.


Totally. Furthermore one can input that (now broken) URL into the Internet Archive to see if they might have snapshotted that red stapler page.


We knew that. But it is very useful in documents that will be printed, especially if the original URL is complicated. That is why one would not use a random URL shortener, but Google's. After all, Google would never destroy those URLs, and the company will likely outlive us.

I'm completely serious, and I have a PhD thesis with such links to back it up. Just in some footnotes, but still.

Yes, maybe this shows how naive we were/I was. But it definitely also shows how far Google has fallen: it had so much trust and completely betrayed it.


> We knew that. But it is very useful in documents that would be printed, especially if the original url is complicated.

Maybe for ads in periodicals or other content where accessing the link isn't going to matter down the line, but absolutely not in a document that is expected to be useful years down the line. It's already enough of a problem dealing with link rot without adding another stage of rotting redirections to the mix.

> After all, Google would never destroy those URLs, and the company will likely outlive us.

I will give you that the Google URL shortener came out in 2009 and the "Google Graveyard" didn't really pick up speed until 2011, but I feel like any thought of "the company will likely outlive us", no matter the company, should have been dead after 2008.

> I'm completely serious, and I have a PhD thesis with such links to back it up. Just in some footnotes, but still.

> Yes, maybe this shows how naive we were/I was. But it definitely also shows how far Google has fallen: it had so much trust and completely betrayed it.

I would absolutely agree that was a naive choice. A shortened URL could still be useful in a long-lived document as a convenience measure for fitting the link in to a page of content and/or humans copying it to a device, but the full link should then be placed at the end of the chapter or document so it's still discoverable when the shortener eventually disappears.

The only exception I'd be willing to grant is for where the referenced content and the shortened URL are hosted by the same company, for example many programming books have a printed URL at the publisher's site for accessing errata, examples, etc. which is itself a redirect to a deeper link on the publisher's site. In that case those hosting the redirect have an actual interest in the target content being accessible.


I am constantly annoyed at O’Reilly and similar book vendors, which seem to have a policy that all links should go through a shortener.


Yeah, when Google was founded, people acted like they were your normal smart, benevolent, forward-thinking Internet techies (it was a type), and they got a lot of support and good hires because of that.

Then, even as that was eroding, they were still seen as reliable, IIRC.

The killedbygoogle reputation was more recent. And still I think isn't common knowledge among non-techies.

And even today, if you ask a techie which companies have certain reliability capabilities, Google would be at the top of some lists (e.g., keeping certain sites running under massive demand, and securing data against attackers).


It’s not the sites with massive demand we’re concerned about. It’s anything that Google considers niche, even if that niche is still a few million users.



