Hacker News

Honest question: Why?

I don't understand why people inherently dislike Javascript (aside from, y'know, creepy ad networks).



Because we want the information to be free. If the web server is serving a document, you can do all sorts of stuff with it - you can index it, you can transform it, you can save it for later.
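That machine-friendliness is concrete: when the server sends the document itself, a few lines of code can index or transform it. A minimal sketch using only Python's standard library (the HTML snippet here is invented for illustration):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text of a plain HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

# A served document: the content is right there in the markup.
page = "<html><body><h1>Free information</h1><p>Save it, index it, transform it.</p></body></html>"
extractor = TextExtractor()
extractor.feed(page)
print(" ".join(extractor.chunks))  # Free information Save it, index it, transform it.
```

None of this works when the "document" is an empty shell that fetches its text over ad-hoc JSON endpoints at runtime.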

If the web server is serving a DRM-ed program, that loads the human-viewable data over non-standard interfaces, all that breaks. Only humans in front of the web browser will be able to see the data.

Or people sufficiently dedicated to run a headless browser, execute the JavaScript, and work on the rebuilt DOM. But then we also need NoScript, JS blockers, and third-party blockers, and the publishers invest in anti-adblock measures. It's a never-ending arms race between those who want to publish information restrictively and those who want the information without restrictions. So all this JS-based machinery to DRM things only creates more work for everybody involved, with minimal results.

NOTE: when I say DRM, I actually mean Digital Policy Enforcement - the publishers want to maintain their policies around access to their information (e.g. you cannot see this article without seeing this ad) using digital means. But DRM has a nicer twist to it - the uninformed may mistake the R for My Rights.


I like the term DRP (Digital Revenue Protection)

Seems to cover the intent of things quite nicely :)


RMS always refers to DRM as Digital Restrictions Management.

Also conveys the intent quite nicely - plus you can keep the acronym ;-)


I thought he used it to refer to delicious ripped foot manifolds?

http://youtube.com/watch?v=I25UeVXrEHQ


Because, besides the creepy ad networks and stuff, it's most often used in a mix of shitty engineering and user-hostile practices.

Let's consider a web document like this article here. Its stated goal is to be read by the visitor and thus deliver them value. So presumably, an article that's easier to read is better than one that's harder to read. An article that, ceteris paribus, consumes fewer resources on the user's end is better than one that consumes more.

Now we have a perfect technology to deliver that article: plain old HTML, with a little bit of CSS on top. When what you want to send is text communicating a message, you need exactly zero JavaScript to do that successfully[0]. You barely even need much CSS - the default browser styles, raw as they are, are better than what most web designers produce, if you care about providing value to the user.
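As a sketch of how little is needed (the title and text here are invented for illustration), the entire delivery can be one self-contained document:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>An article</title>
  <style>
    /* About all the styling an article really needs: a readable measure. */
    body { max-width: 40em; margin: 2em auto; padding: 0 1em; }
  </style>
</head>
<body>
  <h1>An article</h1>
  <p>The entire message, delivered with zero JavaScript.</p>
</body>
</html>
```

This loads in one request, renders progressively, works in every browser ever made, and can be saved, indexed, and grepped forever.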

If you don't, that's where JavaScript comes in. Look at just what the JS on Wired does and find me one line of code that actually serves the user. The JS there tracks you, shows you ads, shows you nagging popups[1], and adds social media buttons that are somewhat useful if you want to trade being tracked everywhere for the convenience of not having to Ctrl+Tab to that Facebook tab. In general, the JS here is a waste of electricity (often from users' phone batteries).

You can run a similar analysis on other websites[2], and rarely if ever will you find one where JavaScript does anything other than fuck users over more or less subtly. The technology is fine, but everyone[3] is using it for user-hostile purposes, and/or because of bad engineering. Think of all the scroll hijacking, JS rendering article text dynamically on a blog page, etc. Personally, I dislike it for the very same reason I dislike crappy code.

--

[0] - Sure, JS can be used to qualitatively enhance the reading experience, to make it more pleasant and efficient. I accept that in principle, but I'll cede the point only when I see anyone other than Bret Victor actually doing it.

[1] - Wired, I appreciate that you wanted to say "thank you" for my turning off uMatrix for a second, but could you please not do that with a popup?

[2] - Web apps are a different topic; I don't think anybody is saying you should turn off JavaScript for GMail or Google Docs. But most sites on the web are not web apps, and should not behave like them.

[3] - Except Bret Victor.


I once was smart enough to burst into a rant about exactly this when answering an interview question.

I didn't get the job.

Next I was joking with a friend that all the layers of abstraction added to the web are probably part of some big conspiracy by web developers to create artificial demand and job security. You run a heavy CMS, but the bells and whistles confuse rather than help the user, and they'd rather pay for an hour of your time than figure things out themselves. Your SPA makes a simple series of documents feel like an app, with all the added complexity, but in the end the user couldn't care less about the full-page transitions or parallax scrolling.

I have one simple rule: if it's about information retrieval, it's supposed to be a simple bloody document. If it's supposed to act like an app, it should look like an app. The latter is the proper use case for JS, but people are applying the latter to the former. This is not user-centric design.


Agreed.

JS on a website that is otherwise not an app has its use cases, e.g. tabulated data, search, filtering, realtime data, etc.

But a static website displaying a simple article has no fucking business running any code other than HTML/CSS on my computer.


Reddit's mobile site takes 1-2 seconds to load comments through JavaScript, and heaven forbid you click a link; that destroys your scroll position. Disable JavaScript and comments load instantly and the back button actually works.

JavaScript, it seems, is more often used to degrade basic site functionality than to enhance it.


Because JS-heavy sites are slow on slightly older PCs that are otherwise perfectly capable for most other tasks.

See idlewords.com/talks/website_obesity.htm


Speed and bandwidth usage are part of it. I visit a site, wanting to see specific content. The vast majority of the 44 domains (according to uBlock Origin) that Wired connects to aren't actually needed to render what I'm trying to look at. They're mostly ads, tracking scripts, analytics. None of which help me and most of which are never used to help shape the content, only monetise user data and serve adverts.


I recently found a stash of web pages I had saved locally over a decade ago. They still load... a modern one wouldn't.


You should create screenshots instead.


Those don't scale with resolution changes.

A saved webpage can stay as responsive as an ebook.


Screenshots are also bigger than necessary (image instead of text + metadata), and are not greppable.
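The greppability point can be made concrete: a locally saved page is plain text underneath, so ordinary text search works on it with no OCR step. A sketch (the stash directory, filenames, and phrases are all invented for illustration):

```python
import pathlib
import tempfile

# Hypothetical stash of locally saved pages.
stash = pathlib.Path(tempfile.mkdtemp())
(stash / "article.html").write_text("<p>ceteris paribus, simpler pages age better</p>")
(stash / "notes.html").write_text("<p>nothing relevant here</p>")

# The moral equivalent of `grep -rl 'ceteris paribus' stash/`.
hits = [p.name for p in stash.glob("*.html") if "ceteris paribus" in p.read_text()]
print(hits)  # ['article.html']
```

Try doing that with a folder of PNGs.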


Why? So I can fuck around with OCR later?

The data is already structured.



