Echo: A fast HTTP router and micro framework in Go (github.com/labstack)
172 points by Spiritus on March 31, 2015 | 39 comments


I don't understand why people keep reinventing routers even though routing doesn't take much time anyway, especially in a compiled language like Go.

Most APIs seem to respond within 20 to 200ms. Even if you take 1ms to route stuff, what's the point in spending so much time trying to optimize routing instead of SQL queries, cache layers, or developer productivity with a nice ORM? Trying to squeeze nanoseconds out seems pointless to me at the moment, especially since newcomers to Go end up seeing 20 different routers and might not know where to start.

I'm genuinely wondering.


http://permalink.gmane.org/gmane.comp.db.sqlite.general/9054...

  > The latest SQLite 3.8.7 alpha version (available on the download page
  > http://www.sqlite.org/download.html) is 50% faster than the 3.7.17 release
  > from 16 months ago.  That is to say, it does 50% more work using the same
  > number of CPU cycles.
  > 
  > The 50% faster number above is not about better query plans.  This is 50%
  > faster at the low-level grunt work of moving bits on and off disk and
  > search b-trees.  We have achieved this by incorporating hundreds of
  > micro-optimizations.  Each micro-optimization might improve the performance
  > by as little as 0.05%.  If we get one that improves performance by 0.25%,
  > that is considered a huge win.  Each of these optimizations is unmeasurable
  > on a real-world system (we have to use cachegrind to get repeatable
  > run-times) but if you do enough of them, they add up.


A router is rather easy to create, so I reckon this sort of easy project appeals to people as a beginner project or pet project. They implement one and put it on the internet just for kudos or karma, or merely to show off to potential employers.


20ms is not fast enough for requests served from memory.

Many routing implementations are awful: comparing the URL against every single entry instead of using a trie adds up.
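For illustration, a per-segment trie lookup might look like the sketch below. This is a made-up minimal version, not any real router's code; production routers like httprouter use a compressed radix tree, but the complexity argument is the same: lookup cost depends on the number of path segments, not on the number of registered routes.

```go
package main

import (
	"fmt"
	"strings"
)

// node is one segment of a path trie.
type node struct {
	children map[string]*node
	handler  string // stand-in for an http.HandlerFunc
}

func newNode() *node { return &node{children: map[string]*node{}} }

// insert registers a handler under a path like "/users/profile".
func (n *node) insert(path, handler string) {
	cur := n
	for _, seg := range strings.Split(strings.Trim(path, "/"), "/") {
		child, ok := cur.children[seg]
		if !ok {
			child = newNode()
			cur.children[seg] = child
		}
		cur = child
	}
	cur.handler = handler
}

// lookup walks one trie level per path segment: O(segments), independent
// of how many routes are registered. A naive router compares the URL
// against every registered pattern instead.
func (n *node) lookup(path string) (string, bool) {
	cur := n
	for _, seg := range strings.Split(strings.Trim(path, "/"), "/") {
		child, ok := cur.children[seg]
		if !ok {
			return "", false
		}
		cur = child
	}
	return cur.handler, cur.handler != ""
}

func main() {
	root := newNode()
	root.insert("/users/profile", "profileHandler")
	root.insert("/users/settings", "settingsHandler")
	h, ok := root.lookup("/users/profile")
	fmt.Println(h, ok) // profileHandler true
}
```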


It's particularly jarring for me given that whenever there's a post about someone moving away from Amazon and seeing significant performance and/or cost benefits the HN zeitgeist tends to be rubbishing it as too much work for the benefit, but endlessly re-solving the same problems over and over in software for much smaller gains tends to be wildly applauded.


If 80% of posters on a thread about diets say vegetarian is healthier, and 80% of posters on a thread about fun recommend riding a motorcycle, it doesn't follow that there is a single HN zeitgeist saying both things. A simpler explanation is that there are two groups which speak up in different situations. Those groups don't necessarily overlap much.


Shaving 1ms off of every single request can have a larger overall impact than optimizing queries/etc. that are not used for every. single. request.

I'm not saying I agree, but just one possible answer.


At first I came here thinking the same thing. Then, after looking at the benchmarks in the README, I decided it's a worthy attempt at refining the craft and eliminating the cruft of what has come before.

It's no 100 mile mountain race, but it is more interesting than just some random HelloWorld router.


It's not only about time, it's about GC pauses. If you allocate hundreds of KB on each request (even for simple stuff like health checks, or frequent polls for "is there anything ready yet" served from memory, not db access), you will have a higher GC cost and possibly long GC pauses.

Go 1.5 is going to improve on the GC pauses, but another way of improving things is to just not create garbage (by reusing memory). Go doesn't have manual heap allocation and free, but you can easily achieve manual memory management by simple means like handling your own pools.
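A rough sketch of what that looks like with `sync.Pool` (the `context` type, its fields, and `handleRequest` are all hypothetical, not any framework's actual API): steady-state requests pull a pre-allocated object from the pool, reset it, use it, and return it, so nothing new hits the heap per request.

```go
package main

import (
	"fmt"
	"sync"
)

// context is a made-up per-request object.
type context struct {
	params []string
}

// ctxPool hands out reusable contexts so steady-state requests allocate nothing.
var ctxPool = sync.Pool{
	New: func() interface{} {
		return &context{params: make([]string, 0, 5)} // cf. MaxParam-style caps
	},
}

func handleRequest(param string) int {
	c := ctxPool.Get().(*context)
	c.params = c.params[:0] // reset reused state before use
	c.params = append(c.params, param)
	n := len(c.params)
	ctxPool.Put(c) // hand it back for the next request
	return n
}

func main() {
	// each call sees exactly one param, proving the reset worked
	fmt.Println(handleRequest("a"), handleRequest("b")) // 1 1
}
```

If the reset were missing, a reused context could carry the previous request's params, which is exactly the subtle-coupling risk mentioned further down the thread.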


> Most APIs seems to respond within 20 to 200ms.

Many of our internal services respond in < 5ms while handling thousands of requests per second. 1ms of routing is a big deal.


It's satisfying to roll your own?


Basically.

Every kind of program known to mankind will be implemented if a language remains popular long enough. There are web servers written in awk, but nobody cares because awk isn't computationally threatening to them.

I mean, you could even make a web server/router/framework out of Javascript, but what kind of lunatic would do that?


Most other things (like the ones you mentioned) require a level of expressiveness that Go lacks.

  SQL: Pointless without Generics.
  Cache layer: Pointless without Generics.
  ORM: Pointless even with Generics.


I didn't want to release this today... but there we go. Gin will also be a zero allocation http router.

https://github.com/gin-gonic/gin/issues/249


Awesome! Gin is really the best Go web framework at the moment. Fast, easy to use, and being maintained and improved upon!


That's great!


Interesting approach. I guess there are use cases where the small number of allocs of some of the other routers is enough to cause problems.

I'll definitely stick to httprouter as my default starting point, though. The params interface is much nicer, and it's pretty damn good with performance and allocation count: https://github.com/julienschmidt/httprouter


We're also using httprouter and so far have been pretty happy with it. The lack of middleware isn't really an issue; you can inline the key stuff (Gzip, logging) yourself from other sources.


It isn't too difficult to add basic middleware support. We actually wrote a tiny framework[0] around httprouter for that to use on our API servers. It works pretty well.

[0] https://github.com/VividCortex/siesta


It is interesting and surprising how many allocations per request other frameworks make, from hundreds to tens of thousands.

They're sort of cheating by having a pool, and sort of punting with things like "// MaxParam sets the maximum allowed path parameters. Default is 5..."


How is that cheating? That's just how you program without allocations.


Pool allocations are still allocations. If you look at the implementation of Pool.Get, it works a lot like malloc:

http://golang.org/src/sync/pool.go

I've programmed without pools or allocations, where everything had statically allocated storage. Both for microcontroller firmware, and telephony software. It's noticeably harder.


I'm curious what they did differently from httprouter to remove the allocations, or if they trick the benchmark tool by reusing memory with a pool.


Looks like it's using a `sync.Pool` to amortize allocations.

https://github.com/labstack/echo/blob/master/echo.go#L17


I noticed that, I didn't verify where it was used. It's smart, although it cheats the benchmark... kind of. =P

I'm always suspicious of usages of `sync.Pool`, it's easy to reuse things that aren't reset properly and end up with subtle coupling between requests.
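To illustrate that failure mode with made-up names (`ctx` and `buggyHandle` are hypothetical, not Echo's code): a pooled object that isn't reset before reuse can leak one request's state into the next. Note that `sync.Pool` makes no guarantee which object `Get` returns, but in a sequential run it will typically hand back the object just `Put`.

```go
package main

import (
	"fmt"
	"sync"
)

// ctx is a made-up request-scoped object.
type ctx struct{ userID string }

var pool = sync.Pool{New: func() interface{} { return &ctx{} }}

// buggyHandle forgets to reset the pooled object before use.
func buggyHandle(userID string) string {
	c := pool.Get().(*ctx)
	if userID != "" {
		c.userID = userID // only written when the request carries a user
	}
	id := c.userID
	pool.Put(c)
	return id
}

func main() {
	fmt.Println(buggyHandle("alice"))
	// an anonymous request can observe the previous request's user:
	fmt.Println(buggyHandle("")) // typically prints "alice" again
}
```

The fix is a `c.userID = ""` (or a dedicated `reset()` method) immediately after `Get`.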


It would have been nice if the library conformed to http.Handler. That has nothing to do with routing, yet whenever a new router comes out, it's non-idiomatic. I'll take free performance any day, but not if I have to rewrite code.


I'm curious as to why they chose to use a separate echo.Context. Why not create a type with an embedded io.ReadCloser and a map[string]interface{}? Then replace the req.Body (it's an io.ReadCloser) with the custom type. You could then write methods that take an http.Request to add/delete/modify the piggybacking context field. No need to have a Context type in your handlers.


It appears to be storing routes in a tree structure allocated via struct literals in router setup. I'm imagining that if performance is relying on them always being on the stack, that would be an implementation detail of the Go runtime and may change.

Looks like a decent implementation, but aesthetically I dislike monolithic "frameworks" (e.g. "echo.New()"). The "Go way" is to write small composable libraries, not opaque frameworks. Gorilla would have been a good model to draw inspiration from.


> I'm imagining that if performance is relying on them always being on the stack

In which cases does it make a performance difference whether memory is on the stack or somewhere else in memory?
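One concrete case: heap allocations create work for the garbage collector, while stack values are reclaimed for free when the function returns, and the compiler's escape analysis decides which is which. A small sketch using `testing.AllocsPerRun` (the `point` type and functions are made up for illustration):

```go
package main

import (
	"fmt"
	"testing"
)

type point struct{ x, y int }

// sumStack: the value never escapes the function, so the compiler
// keeps it on the stack and no GC work is created.
func sumStack() int {
	p := point{1, 2}
	return p.x + p.y
}

var sink *point

// sumHeap: storing the pointer in a package-level variable forces the
// value to escape to the heap, costing one allocation per call.
func sumHeap() int {
	p := &point{1, 2}
	sink = p
	return p.x + p.y
}

func main() {
	fmt.Println(testing.AllocsPerRun(100, func() { sumStack() })) // 0
	fmt.Println(testing.AllocsPerRun(100, func() { sumHeap() }))  // 1
}
```

As the parent says, which values escape is a compiler/runtime detail (`go build -gcflags=-m` reports the decisions), so code relying on stack placement can regress across Go versions.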


It seems like it takes most of the important interfaces, in which case I feel like it is close enough to the "Go way".


What do you mean by zero memory allocation? The title seems contradictory. You have to allocate at least some memory for a program to run.

Do you mean it doesn't allocate any memory per request, it just uses pre allocated memory?


That's standard terminology in libraries, especially in parsers. You don't really care about overhead memory; you're concerned with scaling once the requests start coming in.

With things like C-based HTTP or JSON parsers, they will use the same memory space that is passed to them to parse/lex/etc., but in this case with a Go library I'm honestly not sure what Go does behind the scenes (I haven't looked at the source).


I thought it meant that they were only allocating on the stack instead of the heap (which would still be wrong terminology-wise), but they're clearly doing heap allocations from the examples: https://github.com/labstack/echo/blob/master/example/main.go


We removed "with zero memory allocation" from the title, because it may be misleading.


"Constant space" and "zero garbage" perhaps?


He means zero _dynamic_ memory allocation.


Yeah, seems like a sensationalist title tbh. The project README doesn't give any info to support that claim, either.

It doesn't make any sense.


The benchmark section in the README clearly shows the echo router to be the only one of all benchmarked routers that has exactly 0 allocations per request.


I think he wants it to be a framework without any heap allocations.



