Nyxt seems amazing, but it's hard for me to imagine using a browser without uBlock Origin. I understand that it doesn't really make sense for every browser to support WebExtensions, and it's a huge amount of extra work, but the lack of it is a big barrier to adoption.
It's hard to trust the smaller browsers to keep me private online if there isn't something that gives me that same level of control over blocking. And there's a huge adblocking community built around this software that maintains lists, fixes websites, and it's just a bunch of privacy-focused community effort that is hard for any small team to replicate.
But aside from webextension support, Nyxt looks really exciting, and I'm glad that people are building browsers that are actually innovating in this space.
It's a really compelling project; with WebExtension support I could easily see myself moving over to a scriptable browser built on something like Servo.
It makes a lot more sense to ad block system-wide than to expect every bit of software you use to implement extensions to allow you to do it. There are plenty of services like AdGuard and Lockdown that can use the same lists uBlock uses, and you protect not just your browser but your email and everything else you use. Some routers will let you do something similar, and you can add a Pi-hole to your network if your router won't help.
It would be great to move adblocking to an OS/network level, but DNS blocking just isn't comparable to what a browser-level adblocker does.
To get to the point where I would feel comfortable having my OS handle browser blocking, a reasonable chunk of web browser functionality would need to be moved out of the browser and onto the OS. I just don't see that happening any time soon, and I'm not sure that's the direction we would want to move with browsers anyway.
uBlock Origin will do things like stub Javascript methods on the page. A Pi-hole can't do that. And even for simple things like blocking requests based on the current domain -- the movement towards DoH and encrypted SNI is going to make that harder and harder as time progresses. You really need an interface that has insight into not just what requests you're making, but where/why you're making them.
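To make the stubbing point concrete, here's roughly what that looks like in uBlock Origin's scriptlet syntax (the domain and property names here are made up for illustration):

    example.com##+js(abort-on-property-read, open)
    example.com##+js(set-constant, canRunAds, true)

The first rule makes any access to `window.open` throw (a cheap way to kill scripted popups), the second stubs out a property that an anti-adblock check expects to find. A DNS-level blocker has no hook into the page's JavaScript at all, so this entire class of filters is simply unavailable to it.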
Not to say that system wide blockers don't have value, but they're really there for the apps that can't handle their own adblocking. They're a defense in depth for the requests that slip through other parts of your setup, but they're not a good replacement for a browser extension.
All you're asking for is already possible with Privoxy[1], which is even stronger than a browser adblocker. It's very old software: it used to be unmaintained and lacking some essential features, but thankfully development has resumed and it's now fully functional again on the modern web.
It can be used as an adblocker based on domain, request path, HTTP headers, etc., but it can do much more. It can redirect requests (for example, replacing assets from a CDN with a local cache), modify headers (stripping or making cookies temporary, changing the user agent, etc.) and even rewrite the content of web pages using regular expressions or any external program.
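For a rough sense of what that looks like (hostnames and patterns below are placeholders, not a tested config), a user.action entry and a content filter are written like this:

    # user.action: block an ad host and strip cookies on another site
    { +block{Ad server.} }
    .ads.example.com

    { +crunch-outgoing-cookies +crunch-incoming-cookies }
    .tracker.example.net

    # user.filter: sed-style rewrite of the page content
    FILTER: no-inline-tracker Remove an inline tracking snippet
    s@<script[^>]*>[^<]*trackPageView[^<]*</script>@@sig

Action files map URL patterns to actions, and filters are Perl-style substitutions applied to the response body; the Privoxy manual has the full list of actions.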
By default, it has only a basic configuration that blocks tracking and ads, but there are tools[2] that convert adblock rules to the Privoxy format, so it will be functionally equivalent to adblock.
It acts as a CONNECT proxy, so you can run it locally or on a router, and if combined with a NAT rule it can also work transparently (obviously, you need to manually trust a CA certificate for HTTPS).
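For the transparent setup, the NAT rule is just the usual redirect-to-local-port pattern; something like this on a Linux router (the interface name is a placeholder, 8118 is Privoxy's default listen port, and Privoxy needs accept-intercepted-requests enabled):

    # send LAN HTTP traffic to the local Privoxy instance
    iptables -t nat -A PREROUTING -i br-lan -p tcp --dport 80 -j REDIRECT --to-port 8118

Intercepting HTTPS transparently additionally needs Privoxy's HTTPS inspection support, which is where the manually trusted CA certificate comes in.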
Can it let me pick any visible HTML element on a website to filter out just by clicking on it? Can it block content of a website without interfering with a `curl` request or a file download from a random messenger app?
All those things are trivial to do in a browser plugin but probably a total workaround-filled pain on any other layer in the system.
> Can it let me pick any visible HTML element on a website to filter out just by clicking on it?
The best it can offer is a CGI editor to change its configuration from the browser. I don't see how you could implement something like this: it's either interactive or a passive network element, but not both.
> Can it block content of a website without interfering with a `curl` request or a file download from a random messenger app?
This is really trivial: just don't proxy them (i.e. don't set the http_proxy variable).
Oh, Privoxy. Must've been 20 years since I used that. But, can it block certain elements with certain IDs? And do I have to disable DoH to make it work?
If you mean hiding an element, yes it's possible: you can either inject CSS into the page or write a filter to remove the HTML entirely. For example, adblock2privoxy generates both Privoxy rules to block requests and stylesheets to hide elements (you need a local webserver for this, though).
> And do I have to disable DoH to make it work?
It's Privoxy, not the browser, that will do the DNS queries. So, no: it will work regardless of DoH.
1) The converter you link has 62 stars and hasn't been updated in 2 years. Additionally I'm seeing multiple issues about basic adblock rules not taking effect. Short version, I would not trust this repo to convert rules.
This is kind of exactly what I'm talking about with the difficulty of keeping pace with what is essentially a shared standard in the adblocking community. It's not enough to write one converter that gets updated every 2 years; in the space of those 2 years, uBlock Origin has expanded the syntax it supports. Adblocking is a cat-and-mouse game; there isn't a single set of features that can be implemented once and then the software marked as "done".
2) Even assuming that converter does work (which I am doubtful of), Ublock Origin uses a superset of the adblock rules format, so you have to target what Ublock Origin supports, not just what adblockers in general do.
And obviously I'm not going to try and recreate those lists myself manually, I don't have the time or energy to do that. They have to be 100% consumable from upstream.
----
Okay, moving on to Privoxy itself:
1) On the community aspect again, I don't see dedicated Reddit groups devoted to finding every single broken website on this software. I don't see a public issue tracker. It seems to be following the old FOSS philosophy of developing software primarily on mailing lists someplace, which is fine for some software but not fine for something that is highly community dependent like adblocking.
You say the software is being actively developed again, but I don't see any way to easily confirm that. I don't see any way to easily figure out how many people are using this and verify that it works.
2) As far as I can tell, this doesn't support DoH. That is also kind of a dealbreaker for me: I don't want to make myself less secure in one area to make myself more secure in another. This is a solvable problem: if Privoxy were set up as a local DoH server as well, and used DoH itself to query/cache results, then the issue would almost completely go away.
However, am I correct in guessing that Privoxy is also going to struggle in the future with encrypted SNI, or with the fact that my browser strips referrer headers from requests?
3) I'm looking at Privoxy's pattern documentation[0], and correct me if I'm wrong but it doesn't seem to support contextual blocking at all. In Ublock Origin I can do rules like:
$script,third-party,domain=imgbox.com
My original criticism of DNS blocking in general was that it lacked context information, so it's just flat-out not acceptable for a Ublock Origin replacement to lack the ability to distinguish between a third-party request and a first-party request. That's critical functionality. Maybe I'm missing something here, but I've gone over the Actions and Template file documentation and I don't see the words "third-party" even mentioned anywhere.
4) Privoxy seems to lack the ability to block iframes, or at most it seems to have the ability to strip them from the HTML itself. That's not enough; sometimes iframes get dynamically created after a page is loaded, and modifying the HTML is not enough to block that.
5) I don't see any way to mark sites as trusted (probably related to point #3). So there doesn't seem to be a way for me to disable Privoxy when I'm on a specific site.
6) I don't see anything in the docs about CNAME unmasking. And CNAME cloaking isn't a theoretical attack, there are websites in the wild using that technique.
7) Browser integration also seems to be lacking. This isn't the biggest problem, I can tolerate annoyance, but it's a little bit of a quality of life issue.
8) And so on. Most of UBlock Origin's dynamic filtering syntax[1] seems to be unsupported. It's very possible I'm misreading the docs, or the docs are out of date or there's a trick to make it work, but if that's the case, that's also a problem, because then the docs need to be clearer.
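For reference, the dynamic filtering rules I'm talking about in points 3 and 8 look roughly like this in uBlock Origin's "My rules" pane (hostnames are examples):

    * * 3p-script block
    * * 3p-frame block
    a-bank-website.com * * noop

The first two block third-party scripts and frames everywhere; the last one relaxes everything on a site I've decided to trust. Every one of those decisions depends on knowing which page originated the request, which is exactly the context a standalone proxy doesn't have.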
----
None of that is to say that Privoxy is bad software. It's just to say that it doesn't seem like it's an all-in-one replacement for what my browser does.
We often do defense-in-depth in this area. You can set up a Pi-hole, or a firewall, or a proxy server to handle adblocking for devices and applications that don't expose interfaces like the browser does. That's a good idea. But the farther away from the context of the application you get, the harder it is to do really detailed blocking based on that context.
This is something fundamental about adblocking that people don't always seem to understand -- it's not an either/or proposition, it's not like you set up a proxy server and all of your browser configs become useless. The proxy server just adds another layer of defense.
No, there's nothing JavaScript-specific in Privoxy, but there's also nothing stopping you from implementing it. You could write filters to inject custom code (like Greasemonkey userscripts) or modify the scripts in transport.
It makes sense to use both, but there is no alternative to "context-aware" ad blocking within the browser. YouTube video ads cannot be blocked via hosts file, same for any ad that's served directly from that site.
Also uBlock Origin lets you block any HTML element on any site. This lets you (interactively!) cut out autoplaying videos, annoying popups and basically anything that you as an individual user find distracting.
Or how about the option to block loading of large images? Try implementing that system wide without accidentally breaking certain requests using `curl` and while still allowing the user to flip that switch in seconds.
There's no single abstraction layer where it makes sense to do everything at once.
this is why i think Gemini is the most interesting thing happening in the web space today. the browser itself can't overcome the fundamental bloat and decay of the web as we know it today; the heavier and heavier js load, the ridiculous ad load that necessitates entire extensions just to escape from it; etc, etc.
Gemini is a project that actually attacks the root of the problem by presenting an extremely stripped-down hypertext format and giving an alternative at the protocol level.
at least for my circle, there's a consensus that the web has calcified and become such a walled garden that we've reached a time where it makes sense to "start over" at a pretty base level; rather than trying to build on top of a platform that, by its nature, inevitably tends towards centralization and capitalization.
> the browser itself can't overcome the fundamental bloat and decay of the web as we know it today
There's nothing "fundamental" about web bloat.
Here's a simple litmus test for Gemini: take a look at a few of the personal websites of people who've spoken highly of Gemini. How many of them have exercised restraint and eschewed all the sorts of things that Gemini disallows? How many of them have embraced features of the Web that are not permitted in Gemini?
> rather than trying to build on top of a platform that, by its nature, inevitably tends towards centralization and capitalization
If we take it for granted that it's true of the Web (and that's being generous), why is Gemini going to fare better?
These all seem like a bunch of vague and empty talking points for a piece of tech whose main catalyst for coming into interest and capturing attention comes down to NIH, and not actually any fundamental correctness about its nature.
This is primarily why I use Brave, particularly on phones. I've had some form of ublock and js whitelisting since 2002 or so and don't even recognize most sites without them.
I assume a lot of the potential "early adopters" for any new browser are privacy/security focused and as neat as it is conceptually that's going to be a steep climb for any new project.
The problem is that uBlock Origin goes way beyond just blocking a list of domains; it's not just a Pi-hole.
It's also doing CNAME uncloaking and request rewriting, stubbing functions in pages; it has syntax to handle CSS changes; it has rules that allow requests to only go through in certain contexts or if they originate from certain domains. And most importantly, there's a giant community of people basically standardized on uBlock Origin who maintain all of these lists and who are constantly identifying new threats and proposing new features.
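A couple of concrete (made-up) examples of what "more than a domain list" means in uBlock Origin's filter syntax:

    ||tracker.example^$third-party,domain=~shop.example
    news.example##.sponsored-content

The first is a network rule that only applies when the tracker is loaded as a third party, and not on one particular site; the second is a cosmetic rule that hides an element without blocking any request at all. Neither concept even exists at the DNS layer.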
So a competitor to Ublock Origin would need to be constantly competing with it, it would need to be pulling in new features as they got released, and it would need to be consuming the same lists that Ublock Origin uses. It's just really hard to keep pace with that.
If Mozilla came out tomorrow and said they were going to do native adblocking, I would trust them less than Ublock Origin. I don't think even they would be able to keep pace if they were trying to build a community from scratch.
What about blockit[0]? It does implement some of the features you need. It's still a WIP, but I'm actively working on it and it uses the adblock-rust library from Brave, which is already able to compete with uBlock Origin.
Looks very promising! Correct me if I'm wrong, but it looks like adblock-rust also handles CNAME resolution? That's pretty good. And I also like that it directly consumes adblock rules, it doesn't require them to be converted into a new format.
That being said:
> it has rules that allow requests to only go through in certain contexts or if they're originating from certain domains
I haven't personally seen a proposal for server-based blocking outside of the browser that could begin to tackle this problem, and contextual blocking is a huge part of what makes browser adblockers work so well. As far as I can tell, adblock-rust isn't an exception to that.
It's a very difficult problem. In order for you to have a rule like "block 3rd-party requests on a-bank-website.com", you need more information than just the request itself. Maybe this is something where sites could piggyback off of CORS requests? But browsers don't always send CORS requests.
I'm not sure what the solution would be, but regardless I don't think that blockit would be a replacement in its current state. Still looks like a promising project as a network-based blocker though.
Actually I would invert the problem: I wish Nyxt itself were a webextension. IME, using WebKit forks (or whatever) will always eventually be insufficient.
Unfortunately, it can't. Even something as simple as catching all keypresses globally is impossible with WebExtensions. Having to insert UI elements into pages is also pretty limiting and not a great user-experience with resource-hogging sites.
DNS blocks are good, but not really comparable to what a modern browser adblocker does. The state of adblocking on the web has gotten a lot more involved as both websites and adblockers have evolved in the kind of cat-and-mouse game they play with each other.
The "lossless tree history" is an amazing feature to have. I have yet to find a modern, usable replacement for the Norwell History Tool XUL extension for older Firefox versions. It allowed one to see browsing as it happened in a logical, orderly timeline instead of a collection of URLs and their last accessed value, so that you could see for example what webpage led to another. I wish this project success for allowing users control of their browsers and the vast data contained in them can make the difference between choosing from the average Chromium reskins.
For example, my last browsing session only became unusable (16GB RAM + 8GB swap 100% used, machine OOMing renderers, system freezing for about a minute or two at a time) after I reached about 1,081 tabs.
The Marvellous Suspender on Chrome and Auto Tab Discard on Firefox will let you scale to that (there are other options too). I have something like 7-8000 tabs open counting all my browsers and devices. 1081 tabs is no big deal - I have single windows with more than that, and everything is snappy.
Here is a counterpoint, and I encourage anyone here to tell me where I am wrong:
Why have tabs at all? Are you really needing to save the state of the vast majority documents and their JS? The suspender says no.
Why not consider all windows in which you aren’t typing a document to have a very small state, such as scroll position. And even the ones where you ARE typing a document can save the form fields in an encrypted file.
No, what you're basically saving is the already-loaded DOM. And what if browsers took a radical approach to it, as they are doing with third-party cookies, and... removed everything except maybe the latest 10 documents?
Yes the latest accessed 10 documents would be actually in buffers. The rest would be UNLOADED and browsers would save the state of their textboxes or scrolling, and restore it once the “same” elements appeared. But mostly they’d enable this new API to save state beforeunload and restore it, and that’s it. It’s not even a new API, you should already be playing nice by using this event and not storing some crazy state. Sure, infinite scrolling thingies would be broken, and the caches of many images would be purged but so what. Users can MANUALLY mark sites where they really NEED the caches to grow so large.
Instead, index the text on ALL sites and give the user a way to search their history of all their titles and bodies of all sites, as easily as they search google.
Every time the user opens a new tab, what they’re really saying is “bookmark this current site”. But why should they even make those decisions to bookmark? You should be storing their history locally (and making it searchable and making encrypted backups of it across all their browser sessions on all their personal devices).
That’s what the user really WANTS to do. It’s the same idea as “gmail search” had when Google first launched GMail versus ordering all your mail in hierarchical folders. Think about it!
I've been dreaming of a system like this myself. Every time a page is loaded it would be written on-disk in a format that the browser can easily re-render, and can be nicely displayed by any third-party app. So you'd essentially have a local save of each and every single page you've ever visited; no more "this worked when I last visited it" because the browser could switch to this backup in case upstream is down. Also when you close a tab or switch to another webpage you don't really "close" the website, you just put it back to storage so it can be loaded at a later time. It seems to me using a browser should be closer to using a text editor, where you have one resource you're interacting with at the moment and others are in the background ready to replace the current one, but in a manner that loading them is a benign operation.
Tabs, as you say, essentially mean "I want to keep this page in an easy-to-reach place". If you look at gmail in comparison, it's exactly the same as keeping emails in the Inbox: they are important (for varying values of important) and there is something to do regarding them, so keep them there. It's not exactly the same as starring messages because starring is opt-in (needs manual action to mark importance) while the Inbox is opt-out (needs manual action to unmark importance). It seems many people have been using tabs this way and close them when work is done so we can transcribe GTD to browsers: when a tab is open it means I need to do something with it, when I'm done close it. Regarding the previous paragraph it shouldn't be harder to load a file from upstream than from a tab.
For this to work there needs to be a very strong search and history side. For me the best representation of browsing is a directed graph: nodes are websites and edges are clicks with a timestamp. There can be multiple edges between the same 2 nodes, if I clicked multiple times. The problem is such a graph is not only hard to represent efficiently, it's even harder to use it to search in history. But the good side is that as long as this data structure is used, you can represent any history (flat, threaded, ...) as you want.
I agree with the comparison of inbox-vs-tabs. The people out there that have 30,000 emails in their inboxes... those are the people that lose tabs because their computer forces them to close them.
Hrm, a directed graph. Interesting! I think that's how vim stores edit history?
I won't say you're technically or foundationally wrong, but I will say the view presented seems to be looking at solving the problem from the bottom up ("green-field the current implementation, identify the simplest possible alternative architecture that's just complicated enough to solve the majority of use-cases, and let the long tail fix itself"), instead of looking at things from the top down and making tiny/trivial incremental permutations to the bigger picture. Let me explain this counter-counter-argument.
> Why have tabs at all? Are you really needing to save the state of the vast majority documents and their JS? The suspender says no.
Tumblr, Pinterest and other websites that use infinite scrolling say yes. These sites are large and nontrivial. Tumblr is a major social networking platform. Pinterest is... Uber for Google Image search, or something. Both are, it would seem, not going anywhere. Pinterest's client UI is a labyrinthian mess. Tumblr is more manageable. But both use a rummage-around-in-dev-urandom approach to feed delivery; no page load delivers the same content twice.
I have been bitten by this enough that I actually gave up on the sites a few months ago. Well, the app, actually. I'd load a particular image, get lost in the "related" section (this would be especially problematic for industrial-design collections...), find something related that would catch my eye, my finger would tap the wrong image, I'd go back, and... it's gone. The layout's using a different seed now, the images are 98% the same but the one I was specifically looking for is now no longer in the results. This would happen with alarming regularity - like 80% of the time I'd mis-tap, which would be 60% of the time I was browsing, this would happen.
This is what got me so wound up about Chrome not having tab serialization in the first place (my other comment and the links it points to have some further frustration about tab suspension).
> Why not consider all windows in which you aren’t typing a document to have a very small state, such as scroll position. And even the ones where you ARE typing a document can save the form fields in an encrypted file.
> No, what you basically are saving is the already loaded DOM.
Zooming out somewhat, browsers are not just a DOM, and textboxes and scroll position are not the only bits of state in a page. To be pedantic, there's the DOM, but there's also the CSSOM (the CSS object model) which is built from all the CSS files, JS-injected style tags, and manually-applied JS .style.<blah> manipulations; and over in JS land Service Workers now mean pages are running multiple virtual threads of execution at the same time (not actually sure whether these map to OS threads), and WebAssembly bolts an entire new world onto the end of the JavaScript runtime too.
When I look at the web I don't see a single web browser running some specific set of "pet applications", if I can word it that way; rather, I envisage the terabytes (eep) of JavaScript code keeping the world turning around every day as the focus, and that the web runtime is kind of at the mercy of keeping that eye-wateringly head-spinningly large installed base in the air, while moving the platform forward and making substantive progress noises. This point of view is my only explanation for why things often feel so irritatingly stagnant.
--
> And what if browsers took a radical approach to it as they are doing to third party cookie and... removed everything except maybe the latest 10 documents.
Well, IT support teams around the world would need to add staff to deal with the exponential increase in complaints.
> Yes the latest accessed 10 documents would be actually in buffers. The rest would be UNLOADED and browsers would save the state of their textboxes or scrolling, and restore it once the “same” elements appeared. But mostly they’d enable this new API to save state beforeunload and restore it, and that’s it. It’s not even a new API, you should already be playing nice by using this event and not storing some crazy state. Sure, infinite scrolling thingies would be broken, and the caches of many images would be purged but so what. Users can MANUALLY mark sites where they really NEED the caches to grow so large.
Google tried almost exactly this with tab discarding a few months ago, working in exactly the way you describe.
It blew up the entire world's workflow, and they had to back it out. :(
--
> Instead, index the text on ALL sites and give the user a way to search their history of all their titles and bodies of all sites, as easily as they search google.
I've wanted this FOR SEVERAL UMPTY MILLION YEARS HEY GOOGLE WHY DON'T YOU ACTUALLY USE YOUR 100-EXABYTE OR WHATEVER IT IS BIGTABLE FOR SOMETHING ACTUALLY USEful okay sorry I'll stop with the shouting but serIOUSly this would honestly fix 100% of 100% of my problems (yes, 100% of 100% of my problems) with short term memory loss and trying to remember things online and... [sad violin noises]
In all seriousness, my guess is regulatory restriction. The Wayback Machine is this obscure little dorky project in the corner because it can't be anything else.
1. Malicious user creates Google account
2. User uses newly created account to search for contentious $thing, saves $thing (or maybe it auto-saves), then signs out of account and never uses it again
3. Time passes
4. User re-logs back in and re-views $thing from Google's cache, creating <legal/sociopolitical/military/etc> $problem. Fireworks ensue.
Open challenge: solve the general use case of the external brain (searching the history of pages that I've viewed), _while_ not invoking the above problem.
I don't believe this can be done :'(
--
> Every time the user opens a new tab, what they’re really saying is “bookmark this current site”.
Not quite. Not always.
I can actually say this with some authority: the moment I learned that history is volatile (Chrome caps it at 3 months IIRC) I immediately began using bookmarks as nonvolatile history, "just in case".
About 10 years ago.
I have about ~30,000 bookmarks. They're all in Other bookmarks, because Chrome doesn't offer a tagging system that will sync with Chrome on Android.
I have accessed about 3 of them; the other ~5000 times I needed a bookmark I was unable to brave the Tide Of Bookmarks.
:'(
--
> But why should they even make those decisions to bookmark? You should be storing their history locally (and making it searchable and making encrypted backups of it across all their browser sessions on all their personal devices).
Yes please. (Imagine that all-caps scroller again)
> That’s what the user really WANTS to do.
Yes it is!
> It’s the same idea as “gmail search” had when Google first launched GMail versus ordering all your mail in hierarchical folders. Think about it!
I don't need to :P
I've been thinking about this for a while now myself. All the existing solutions out there seem to revolve around snapshotting the DOM, or storing the exact requests then replaying them, etc etc. None treat the web as the black-box it is.
My alternative idea would unfortunately require participation at the renderer level but would scale to all current and future apps: save the render display lists instead.
In (recent) devtools, in the 3-dot menu at the top-right, select More tools > Layers. Click into the image you see, then click the Paint profiler link that appears. I argue, save _that_. It's the set of Skia operations that drew the page.
My arguments this is a good idea:
1. It doesn't perfectly save the entire page, but it *does* mathematically-perfectly save the parts that have been rendered, and which you have read. If you want to save the whole thing you'll need another imperfect solution. But if you just want to remember what you have read, this will *always* work, regardless of future development.
2. This scheme works with infinite-scrolling systems, _and_ with annoying websites that arrange bits of text into overflow:scroll divs that don't scroll the entire page, and which completely foil those page-screenshot extensions that scroll the page in chunks. (Sidenote: those extensions are actually the only correct way to snapshot websites currently; if you hit CTRL+SHIFT+P in the devtools and select "full page screenshot", you'll often crash the renderer for the tab if the page's full height is over 10,000 pixels, _especially_ on sites that use synthetic/virtualized DOMs for giant listviews and such.)
Unfortunately, I don't really have the resources to implement this right now, nor (depressingly) sufficient knowledge of C++ either. Sigh.
I would definitely definitely like this to be simpler...
I used TGS a few years ago, back when I was still limping along on a 32-bit machine. Generally either the browser process would hit ~3GB VIRT and very abruptly terminate, or (back when Chrome would lump all of the open tabs owned by an extension into a single renderer) the renderer would simply thrash so much (because suspending the current tab, or switching between suspended tabs, invoked the mostly-swapped-to-disk renderer process) the browser would effectively become unusable, eg, 30 second stalls switching between tabs or 2 minute stalls opening new tabs.
Chrome's built-in tab discarding actually closes the renderer process, which solves all of those design fails, but then there's the browser process to contend with; a few days ago the browser process was basically sitting on about 2.5-3GB RAM. Apparently the data structures associated with remembering/showing a few tabs require a lot of memory...?!
The only annoyance with TGS and tab discarding without proper tab serialization/dehydration is page state is thrown out the window. Scrolled 350 pages deep in a tumblr blog or pinterest feed? Permanently gone on restore. I actually actively avoid sites with infinite scrolling where I can ._.
Of course, the real problem is that browsers don't provide good simple mnemonics for "I want to come back to this later" that effectively translate from "this is open and currently a thing" to something that works for squishy brains and finite hardware. Chrome's new reading list feature (which doesn't quite work in Dev yet, clicking the menu item randomly decided to start SEGV/MAPADDRing the other day, glad I wasn't using it lol) will be interesting to watch, but looks about as potent as the tag-less bookmark system, sadly.
I've been making "I need to fix this with an extension" noises to myself for years, but the gigantic annoyance is that, at the end of the day, whatever fun system I come up with on desktop will never seamlessly integrate on mobile because of course I can't run extensions there. What on earth is the point of having an external brain if I can't access it without needing to invoke a 30-step process that I have to fully context switch away from whatever I'm doing to perform?!?
I forgot the actual number, but I remember that above ~700 tabs Firefox started to lag; usable, but slow. It was funny to hit a structural limit like that, and kudos to the devs for allowing such tab-hoarding behavior.
I'd love to work on a quick tab-group freeze feature (a 2D graph to order/select tabs and then stash their DOM without external resources, maybe?).
This + tree style tabs works pretty well for me. It’s kind of a clunky system. I tend to end up with a _lot_ more tabs than I need but it’s nice that the hierarchy in tree tabs shows which site a new tab was opened from, even if I’m not using the original site.
Not exactly about Nyxt (which looks amazing), but I'm glad that people are starting to "emacs" the web. There's a social process whenever humans make a thing where we rumble around a lot and try different things and eventually we settle on a kinda-ok-for-now-if-I-can-make-some-adjustments thing (like how I just used written English).
I'm a little skeptical (for me) that we've arrived at a 'durable state' for the web. I think it may change a little too quickly for it to make sense (for me) to invest time into learning this particular browser but I am very excited about the trend.
Edit: I wrote this comment without even seeing that they support Lisp, but of course they do.
I used to use the keyboard a lot in Opera back in the day. Favorites included searching for links on the page by text and pressing enter to get there, and navigating back/forward using "link rel" tags in the document (for example, you could go to the "next page" regardless of the browser history).
I seem to recall that web pages were steadily making it harder for that to work, e.g. no "a href" or "link rel"s.
Here's hoping that Nyxt will have success in bringing this era back!
I have fond memories of Presto Opera as well. It also had spatial navigation between links (go to the link that is visually closest to the left/right/top/bottom), which normally may require tabbing to death if something is visually close but far away in the DOM.
BTW, "go to next/prev page" would also look for user-configurable strings inside. So you could define the strings from DOM on pages that you often visit and indeed you would breeze over those pages then. Useful on discussion boards etc.
And oh and the built-in and configurable ultra-powerful mouse gestures...
So I compiled the prerelease today. There were a number of hiccups with the process but nothing big, and I managed to get it running.
On first load you have a tutorial, and a menu to select the session. There's only one session so it should default to that. There is no way to click the menus like there is in Emacs, so I have to type Default in full each load.
Running "vi-normal-mode" doesn't switch the keybindings, and it only runs on new buffers, not old ones. It would be nice to have a "global mode" feature.
Furthermore, n and N do not work for searching in vi mode, and it would be nice to have "?" bound to the help buffer, simply because half the time I can't remember what key switches to which buffer. There should also be autocomplete on tab, like Doom Emacs has.
All in all, it feels half-baked in its current state, which I guess it is. I'll be happy to use it more when it's in a better state, but I don't think it'll be replacing Firefox any time soon.
- You can press `enter` instead of typing "default" in full if it's selected.
- Running `vi-normal-mode` enables VI bindings in the current buffer only. If you want to enable them everywhere, you can use the graphical configuration menu that's presented on startup (or see the init-file sketch below).
- `tab` inserts the current selection in the input. Do you mean something else?
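- If you'd rather do it in your init file, a minimal sketch looks something like this (this follows the Nyxt 2.x configuration style; slot names may differ between versions):

    ;; Enable VI bindings in every new buffer (Nyxt 2.x-style config)
    (define-configuration buffer
      ((default-modes (append '(vi-normal-mode) %slot-default%))))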
Thanks for responding! That wasn't my experience with using enter, but I just tried it and it works, so I'm pretty sure that was just me :) The command being relegated to buffers is useful to know.
> - `tab` inserts the current selection in the input. Do you mean something else?
In emacs (specifically ivy/helm), tab is context-sensitive. Using tab on a command input autocompletes that input, whereas elsewhere it behaves differently. There doesn't actually seem to be a nice way to select from the command lists given (like the session selector) without typing it in, and that means I have to type it in full, which is awkward and cumbersome.
One final note is I just tried using it again -- first problem is duckduckgo.com does not load, I just get a blank page after it says "finished loading". The other is that the default key of 'd', I had assumed would close the buffer, but I guess that must be 'x' because it closed the entire window, which I would have expected from 'q'.
The prompt buffer (previously known as "minibuffer") supports fuzzy completion: you never have to type the suggestion in full. Just type a portion of what you want and the appropriate suggestion should come to the top. Even typos are supported.
If duckduckgo does not load, maybe you enabled noscript-mode, proxy-mode or similar? Try starting the browser with `nyxt -I` (no config file).
Do other HTTPS sites work?
`d` is not bound by default.
To see the full list of commands and bindings, press `Ctrl+space`, it will display them all.
> The prompt buffer (previously known as "minibuffer") supports fuzzy completion: you never have to type the suggestion in full. Just type a portion of what you want and the appropriate suggestion should come to the top. Even typos are supported.
Thank you :) I tried this with a few commands and it does select the topmost entry when I hit "enter". However, typing in "duck" and hitting enter does not go to https://duckduckgo.com, which is what was intended. There doesn't seem to be a way to select or cycle through the available entries? Mouse clicking does not work and neither do the arrow keys.
> If duckduckgo does not load, maybe you enable noscript-mode, proxy-mode or similar? Try starting the browser with `nyxt -I` (no config file). Do other HTTPS site work?
https://news.ycombinator.com works with nyxt -I, however https://duckduckgo.com/ shows a blank screen. I compiled with the latest webkit2gtk on Alpine Linux. I've had this problem with Surf so it might straight up be a webkit problem.
> `d` is not bound by default.
When I press it it exits nyxt. Running it in a console we can now see this error: https://tpaste.us/x1zJ
Another thing I just noticed, is that hitting "emacs mode" in the command list loads emacs mode, but vi-normal is still active! Hitting "vi-normal" to make it go back into vi-mode does not work, but selecting 'vi' in the settings menu shifts the current buffer back to vi mode. Should I select 'emacs-mode' again to get out of it, like a toggle?
Any individual or group of people that task themselves with making a browser deserve some sort of accolade. What I appreciate here is the minimalist approach and similarities to qutebrowser, uzbl, dwb and probably others.
No uBlock Origin is not a deal breaker for me. Even with ubo on and JavaScript blocked the web is bloated as hell. I feel bad for all the folks who work on these projects because they need to deal with the same comments over and over again. That doesn't help the project.
And Qutebrowser has implemented ABP-style ad blocking since v2.0.0, which a lot of sibling comments are concerned about. Whereas Nyxt still only has host-based blocking, if I'm reading blocker-mode.lisp correctly.
It also reminds me of Tridactyl[0], which promises to bring vim shortcuts to Firefox. I tried it for a while but found that I just enjoy using the mouse more to explore the web, even though I'm mostly a keyboard-only user on the rest of my system.
Nyxt seems really cool. I have a vague hope in the back of my mind that it'll eventually be an emacs-like editor as well as a browser. It seems like it could be a cool lispy answer to things like vscode etc.
Every web browser should have a tree based history by this point.
This part is cool:
> Nyxt is web engine agnostic. We utilize a minimal API to interface to any web engine. This makes us flexible and resilient to changes in the web landscape. Currently, we support WebKit and WebEngine (Blink).
Does this browser have anything to do with Next? The logo seems similar.
After briefly looking into this, my understanding is that the embedding documentation hasn't been updated in a very long time, and the overall architecture doesn't lend itself to embedding.
Gecko itself is almost impossible to use outside of firefox at this point so I don't think it would work well in this kind of a browser unfortunately. I'm not sure about servo though other than it just not being ready for normal browsing.
What does that mean in practice though? Since it’s not a browser plug-in, I assume the download is an actual fully fledged browser. Perhaps I’m missing something obvious.
What rendering engine did they choose? Or is it possible to switch between them?
In practice Nyxt is a Common Lisp environment that has an FFI binding to WebKitGTK (and probably QtWebEngine). I've only used the WebKitGTK one, so Nyxt is a lot like Eolie and GNOME Web.
Nice to see some people still trying to innovate on web browsing experience. With all mainline browsers stuck with almost identical UI, it is great to see some experimentation still going on.
Indeed. Though I'm kind of sad that mouse gestures and voice control haven't gone mainstream yet. Keeping one hand on mouse and another on keyboard with voice scrolling was a nice combo. And the lower the bar for customizability the more accessible for differently abled users.
That looks like someone bundled a prepackaged browser-like starter package: Emacs compiled with GTK widgets (WebView), plus Helm and some other niceties like the status bar.
It even uses the same terminology with words like “buffers” instead of tabs.
Is this actually an Emacs-based browser being shipped to the masses?
I think it's more accurate to say "emacs inspired". The system is written in Common Lisp and uses a lot of emacs ideas like you point out (buffers vs tabs, chorded keyboard shortcuts (though there is a vi mode too), configurable in Lisp, etc.).
If you’re on macOS there’s Vimac which gives you this for the whole OS including Safari and the contents of web pages (since it supports macOS accessibility features).
That tree based history thing is amazing. The Firefox history pane seems to always contain exactly the web pages I don't want, and never the ones I do.
I would love to give Nyxt a try once there is bitwarden integration.
I am using Tridactyl on Firefox, which has many of these keyboard navigation features. I tried using qutebrowser (a bit more like Nyxt) but I stick with Firefox for the password manager integration and easy installation on all platforms. Also, Ctrl+Tab on Firefox switching between most recently used tabs is what I want most of the time.
I'm sad to see that Windows is not an officially supported target. This doesn't feel like the kind of project that anything gains from assuming POSIX, yet a lot of software is written this way. I'd love to take advantage of asdf, nvm, ranger, and now Nyxt!
This is going to sound flippant (because, it is, and I hate myself for saying this);
But POSIX users would love games!
It’s just such a low priority because it would be a very low percentage of total users.
I could add that my own experience of supporting developer workflows on Windows/MacOS and Linux is about 3x more complicated than it needs to be because of Windows. :(
Nyxt does not assume POSIX, in fact it is completely independent of any POSIX-ness.
It has been reported to run on Windows via WSL and also without it, but it needs much more work because WebKitGTK is not trivial to get to run on Windows.
I've never used Common Lisp; how common is it (heh heh)? I really enjoyed hacking around in Clojure, so I may have to pick it up. I'd love to hear from anyone who's done serious Common Lisp work. Why'd you choose Common Lisp vs any alternative?
It's just a great balance of dynamicism and pragmatism.
I can redefine a function without restarting, and then get assembly level profiling of a benchmark faster than a typical C++ program can link.
Most of the features that made lisps unique when I started learning lisp have been adopted by modern languages, but the debug tooling and dynamicism in a compiled language are both rare.
Note this is the "fully contained Guix pack" (a bit like a container) which contains all the recursive dependencies, which includes WebKitGTK, GTK and the like.
Nyxt alone is about 100-150 MiB uncompressed, which is what it costs you if you install it via your package manager.
I did not see mention of this on the download page, so I was taken aback. It does seem the Ubuntu package works in Debian, but that wasn't clear on the website, either.
I am looking for info. Why is the Firefox web engine not the go-to choice for FOSS projects? Is it not embeddable, or just really complicated, or what? If so, are they making any progress towards more adoption?
Mozilla is not very developer friendly. Gecko uses a two-decade-old method of RPC, the lib interface (XPCOM, based on Microsoft's COM) is about the same age, the documentation hasn't been updated in a decade even though the code has evolved, and I've heard from 2 employees directly that it's not important to them.
It's not really a surprise that gecko isn't the primary choice for browser developers.
However, I think that regular browsers like Chrome/Firefox are better in that they offer 99% of the customizability at 1% of the effort one would have to spend configuring Nyxt. There are extensions that let you inject JavaScript into a website. You can do a lot with that. Not to mention that mice are very well suited to browsing the web.
Configuring Nyxt is part of the fun. This is the same reason one might choose emacs or vim over vscode. Sure there are extensions in other software but I'm really getting a kick out of setting up Brave's ad-block server with this browser.
Similarly, I've been using Modeless Keyboard Navigation[0] (a vimium fork to remove the normal/edit mode distinction) for about 2 years now, and it works wonderfully for navigating to links via keyboard.
I've been looking for a more mouseless browser for a while now; qutebrowser doesn't cut it because there are no extensions and you need to configure it to get videos & other things working. So I've been stuck with Vimium... when Nyxt supports an ad blocker and other extensions, I'm on board.