I don't understand why people keep reinventing routers when routing doesn't take much time anyway, especially in a compiled language like Go.
Most APIs seem to respond within 20 to 200ms. Even if routing takes 1ms, what's the point of spending so much time optimizing routing instead of SQL queries, cache layers, or developer productivity with a nice ORM? Trying to squeeze out nanoseconds seems pointless to me at the moment, especially since newcomers to Go end up seeing 20 different routers and might not know where to start.
> The latest SQLite 3.8.7 alpha version (available on the download page
> http://www.sqlite.org/download.html) is 50% faster than the 3.7.17 release
> from 16 months ago. That is to say, it does 50% more work using the same
> number of CPU cycles.
>
> The 50% faster number above is not about better query plans. This is 50%
> faster at the low-level grunt work of moving bits on and off disk and
> search b-trees. We have achieved this by incorporating hundreds of
> micro-optimizations. Each micro-optimization might improve the performance
> by as little as 0.05%. If we get one that improves performance by 0.25%,
> that is considered a huge win. Each of these optimizations is unmeasurable
> on a real-world system (we have to use cachegrind to get repeatable
> run-times) but if you do enough of them, they add up.
A router is rather easy to create, so I reckon this sort of easy project appeals to people as a beginner project or pet project; they implement one and put it on the internet for kudos, or karma, or merely to show off to potential employers.
It's particularly jarring for me: whenever there's a post about someone moving away from Amazon and seeing significant performance and/or cost benefits, the HN zeitgeist tends to rubbish it as too much work for the benefit, yet endlessly re-solving the same problems in software for much smaller gains tends to be wildly applauded.
If 80% of posters on a thread about diets say vegetarian is healthier, and 80% of posters on a thread about fun recommend riding a motorcycle, it doesn't follow that there is a single HN zeitgeist saying both things. A simpler explanation is that there are two groups which speak up in different situations. Those groups don't necessarily overlap much.
At first I came here thinking the same thing. Then, after looking at the benchmarks in the README, I decided it's a worthy attempt at refining the craft and eliminating the cruft of what has come before.
It's no 100 mile mountain race, but it is more interesting than just some random HelloWorld router.
It's not only about time, it's about GC pauses.
If you allocate hundreds of KB on each request (even for simple stuff like health checks, or frequent polls for "is there anything ready yet?" that hit memory rather than the database), you will incur a higher GC cost, and possibly long GC pauses.
Go 1.5 is going to improve on the GC pauses, but another way of improving things is to simply not create garbage, by reusing memory. Go doesn't have manual heap allocation and free, but you can get much of the effect of manual memory management by simple means like maintaining your own pools.
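A sketch of that pooling approach using the stdlib sync.Pool (bufPool and render are illustrative names, not anything from echo):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable byte buffers, so per-request scratch
// space isn't freshly allocated (and later garbage-collected) on
// every request.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// render builds a response body using a pooled buffer instead of
// allocating a new one per call.
func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset()            // pooled objects keep old contents; clear first
	defer bufPool.Put(buf) // return the buffer for the next request
	fmt.Fprintf(buf, "hello, %s", name)
	return buf.String()
}

func main() {
	fmt.Println(render("world")) // prints: hello, world
}
```

The caveat is visible in the code: you must reset pooled objects yourself, and anything you forget to Put just becomes garbage again.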
Every kind of program known to mankind will be implemented if a language remains popular long enough. There are web servers written in awk, but nobody cares because awk isn't linguistically threatening to them.
I mean, you could even make a web server/router/framework out of Javascript, but what kind of lunatic would do that?
Interesting approach. I guess there are use cases where the small number of allocs of some of the other routers is enough to cause problems.
I'll definitely stick to httprouter as my default starting point, though. The params interface is much nicer, and it's pretty damn good with performance and allocation count: https://github.com/julienschmidt/httprouter
We're also using httprouter and so far have been pretty happy with it. The lack of middleware isn't really an issue; you can inline the key stuff (gzip, logging) yourself from other sources.
It isn't too difficult to add basic middleware support. We actually wrote a tiny framework[0] around httprouter for that to use on our API servers. It works pretty well.
It is interesting and surprising how many allocations per request other frameworks make, from hundreds to tens of thousands.
They're sort of cheating by having a pool, and sort of punting with things like "// MaxParam sets the maximum allowed path parameters. Default is 5..."
I've programmed without pools or allocations, where everything had statically allocated storage. Both for microcontroller firmware, and telephony software. It's noticeably harder.
It would have been nice if the library conformed to http.Handler. That has nothing to do with routing, yet whenever a new router comes out, it's non-idiomatic. I'll take free performance any day, but not if I have to rewrite code.
I'm curious as to why they chose to use a separate echo.Context. Why not create a type with an embedded io.ReadCloser and a map[string]interface{}? Then replace req.Body (it's an io.ReadCloser) with the custom type. You could then write methods that take an http.Request to add/delete/modify the piggybacking context field. No need to have a Context type in your handlers.
It appears to be storing routes in a tree structure allocated via struct literals during router setup. If performance relies on those always being on the stack, that's an implementation detail of the Go runtime and may change.
Looks like a decent implementation, but aesthetically I dislike monolithic "frameworks" (e.g. echo.New()). The "Go way" is to write small composable libraries, not opaque frameworks. Gorilla would have been a good model to draw inspiration from.
That's standard terminology in libraries, especially in parsers. You don't really care about overhead memory; you're concerned with scaling once the requests start coming in.
With things like C-based HTTP or JSON parsers, they will use the same memory space that is passed to them to parse/lex/etc., but in this case, with a Go library, I'm honestly not sure what Go does behind the scenes (I haven't looked at the source).
I thought it meant that they were only allocating on the stack instead of the heap (which would still be wrong terminology-wise), but they're clearly doing heap allocations from the examples: https://github.com/labstack/echo/blob/master/example/main.go
The benchmark section in the README clearly shows that echo is the only benchmarked router with exactly 0 allocations per request.
I'm genuinely wondering.