
Been using .NET for years now for backend web development, after having taken a break from C#. It is such an improvement over the old .NET Framework. When I started building my first backend with it, I was surprised how much was included and "just worked". Need to add authentication? A few lines. OAuth? Also built in. Response caching? Yes. ORM? EF Core is pretty good. Need to use env variables to override your JSON config? You are going to have to build a... just kidding, that works with one more line too.

Coming from NodeJS, the amount of stuff that can be added with a single line from an official package is great. No more worrying about hundreds of unvetted dependencies.



It really is like finding enlightenment after having to figure out which third party package is best for every little thing in Node. Visual Studio is a pretty powerful IDE as well.


JetBrains Rider is great for .NET too, although I haven't tried backend development with it.


I'm a web developer that has used VS Code for years, but these past few months have been using Rider for developing a Unity project. One thing that really stands out to me about Rider is the "Refactor" (Ctrl+Shift+R) feature; extremely useful.

It allows you to just build things rapidly without worrying much about patterns/naming because once your vague ideas solidify and you realize where you went wrong, it's really easy to just Ctrl+Shift+R and make any codebase-wide adjustments instantly.

It's also great at picking up sub-optimal implementations or patterns with helpful warnings, and you just Ctrl+. to have it auto-fix for you. Using these Rider features combined with GitHub Copilot, I was able to pretty easily learn intermediate-level C#, because it's like having a mentor working alongside you.


Except for the botched C# style rules. Rider still insists on Java style guidelines for .NET projects. And sometimes it spawns a never-ending background process that can only be gotten rid of by restarting the app. Otherwise, it's okay.

edit: I've been using Rider on Linux for a few years now


I've been using Rider style rules for two years now, exclusively in a VS-based team, and aside from some linebreak rules I need to tweak here and there I've seen no issues?

But I use an .editorconfig-tuned StyleCop and Roslynator on all projects, so maybe that overrides some defaults.

Mind sharing which Java style rules you are seeing?


Rider highlights Pascal-cased class methods/attributes and suggests using camelCase, for example. I haven't changed any style settings; this is Rider's default. Maybe it's a Linux thing.


That's just a few .editorconfig lines away from being consistent for the whole team.


> Rider still insists on Java style guidelines for .net projects.

No, it uses ReSharper as the backend.


It’s awesome. What I enjoy most is that all the different tools feel the same, and you can use the same kind of IDE for web development, JavaScript and SQL too. Rider also has WebStorm and DataGrip bundled as plugins, so you can do everything in one IDE. I prefer to use the standalone tools for that, though; they have fewer features than Rider/IntelliJ, but I would consider that a plus. Fewer menu items, fewer tool windows.


The DataGrip integration is so cool whenever I need to write raw SQL queries inside Rider.


I migrated a while ago from VS + ReSharper to Rider. Mainly backend work. It’s been great.


I’m using it for backend work day in and day out and it’s a joy.


Can you install it [edit: the .Net 7/8 platform that sounds interesting, not the IDE] for free on a Linux server and get the same benefits? (This isn't advocacy, it's a literal question about something I don't know.)


Yes:

https://learn.microsoft.com/en-us/dotnet/core/install/linux

Also VS Code, an open source IDE with first class support for .NET languages:

https://code.visualstudio.com/docs/setup/linux


Emacs also has a pretty good C# language server integration. I personally use the Spacemacs "dotnet" layer.


Yes, everything is completely free and open source.


Everything but the debugger sadly


I know! How crap is that? As a result, no debugging on a Raspberry Pi.

It turned me right off C# outside of work. There are plenty of other languages that are better suited for hobby development.


Monetizing software is so incredibly indirect. Obviously companies don't make any money on free frameworks and tooling. These days, using C#/.NET maybe/kinda/sorta increases the chances that you might deploy on Azure or use Azure services. That tiny (or not so tiny) uptick easily funds all C# / .NET ecosystem development.

I'm not sure why they do not open source the debugger. I suppose it is obvious to some extent that some companies (JetBrains) are able to charge for high quality tooling. Microsoft makes a trickle of money from Visual Studio professional.

Though the objective function and decision variables are somewhat opaque, this is clearly an optimization problem.


The why is Visual Studio. The VPs running that division are a bunch of Muppets who are constantly trying their best to destroy the OSS .NET project's reputation.

The cynic in me suspects all the drama was directly related to Scott Hanselman suddenly losing interest in blogging.


I prefer Rider as a C# IDE whether it's Windows or Linux, although the last time I was doing this there were still a handful of things you needed VS for.


Yes, since VS2022 became a 64-bit app it is much more pleasant to use now.


It better be at a 30GB install size.


Install size depends on the features you pick when you install. Web / service / library development is small; desktop / mobile / C++ gets bigger, as the toolchains + SDKs + emulators are big. If your VS install is 30 GB and that's a problem, run the installer, click Modify, and uncheck the stuff you're not using. A lot of devs check off everything "just in case", but since you can add other features when you need them, it's better to start with what you know you'll use.


A 2 TB NVMe drive is less than $200. If VS is saving significant dev time, people would install it even if the install were much larger.


I'm not sure it's 10x as nice as competitors, which is the context here. If it were just about space who cares, but when your competitor does the exact same at a much smaller size and ultimately better price too, it's a little confusing what is going on with VS.


The competitor doesn't ship a full OS SDK for all kinds of development.

Go see how many GB the Apple or Android development stacks require.


    $ pacman -Qi emacs | grep Size 
    Installed Size  : 111,46 MiB


I guess if everything you write is Emacs Lisp, targeting the Emacs OS, then OK.


It needs to be 10x nicer to make up for a 10x disk space footprint? What if it were only 1GB and the competitors were only 100Mb? Would it still need to be 10x nicer?


I think people question why VS is so freaking huge when its contemporaries are not.

Most of that disk size is supposedly in the toolchain (requiring 2-60 GB of disk space alone [1]). Why is this so big? Modern toolchains from other vendors are not this large.

[1] https://learn.microsoft.com/en-us/visualstudio/releases/2022...


I have spotted someone that has never done Apple, Google, UNIX or game console native development.

It turns out native code for all kinds of OS workloads takes some space.


Also, transparent compression is a thing in case you need to squeeze out more space. And I don't mean the shoddy NTFS active compression: since Windows 10 it has supported much more efficient compression algorithms, accessible using compact.exe[0], albeit passive rather than active, so there is decay once files get modified.

[0] https://learn.microsoft.com/en-us/windows-server/administrat...


Come on, my whole OS fits 3 times into that, let alone every single other programming language’s build tools with debug symbols, everything combined.

So what exactly does VS save you? Hell, I can’t even imagine what’s inside that.


IDK if it's related but I have like a 50% install success rate. Seems to be complicated.


Mine's generally around 10GB as I omit all the mobile dev stack and images.


> ORM? EF Core is pretty good.

We're moving to .Net, and I was surprised by how poor the built-in DB stuff is. It's like either assembly or Python, but nothing in the middle.

That said I've also been impressed about how nice it is to get stuff going. I used C# back in the .Net 1.1 days and yeah massive difference in ergonomics.


EF Core was pretty rough when it first came out, but they have been adding a ton, especially in .NET 7 and the upcoming 8: https://learn.microsoft.com/en-us/ef/core/what-is-new/ef-cor... Most of the pain points for me are solved now. Like someone else mentioned, Dapper can fill the gaps.


>It's like either assembly or Python, but nothing in the middle.

Dapper seems to be in the middle and it is pretty popular


Yeah, but from what I saw it doesn't help much with master-detail setups? Like, inserting or updating an order with order lines etc. We rely heavily on those.


Use transactions => if something fails, roll back. No need to correct partial failures.

Important concepts: .NoTracking and setting the EntityState correctly ( mark as deleted, updated, ... )

Additionally, sometimes it's easier to update things granularly, instead of all at once.

I suppose the annoyance comes from updating in bulk on a POST and trying to map everything?
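For what it's worth, a minimal sketch of the EntityState approach (EF Core assumed; `Order`, `db` and the property names are hypothetical):

```csharp
using Microsoft.EntityFrameworkCore;

// A detached entity, e.g. deserialized from a POST body.
var order = new Order { Id = 42, Status = "Shipped" };

// Setting the state attaches the entity and tells the change
// tracker exactly what SQL to emit, no prior SELECT needed.
db.Entry(order).State = EntityState.Modified;  // full UPDATE
// db.Entry(order).State = EntityState.Deleted; // or a DELETE

await db.SaveChangesAsync();
```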


Maybe also check PetaPoco. But at this point you're getting closer and closer to code-first EF Core anyway. :)

https://github.com/CollaboratingPlatypus/PetaPoco


I haven't actually tried either Dapper or PetaPoco, only perused their documentation. But I was sold on LinqToDb after seeing how it supported CTEs, and seeing how close our code to generate updates [1] and joins [2] ended up looking to the actual intended SQL.

  [1]: https://linq2db.github.io/#update

  [2]: https://linq2db.github.io/articles/sql/Join-Operators.html


"seeing how close"


I use ServiceStack's ORM, OrmLite, and it seems to handle those 1-to-many references well.

https://docs.servicestack.net/ormlite/


It can. Basically you separate the query response by the parent and child in the row using `SplitOn` and it can materialize it.
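A rough sketch of that multi-mapping, using Dapper's `Query<TFirst, TSecond, TReturn>` overload; the `Order`/`OrderLine` types and connection string are hypothetical:

```csharp
using Dapper;
using Microsoft.Data.SqlClient;

// Columns are ordered parent-first; SplitOn names the column
// where the child entity's columns begin in each joined row.
var sql = @"
    SELECT o.Id, o.CustomerName, l.Id, l.OrderId, l.Product
    FROM Orders o
    JOIN OrderLines l ON l.OrderId = o.Id";

var orders = new Dictionary<int, Order>();
using var conn = new SqlConnection(connectionString);

conn.Query<Order, OrderLine, Order>(sql, (order, line) =>
{
    // Group the child rows back under their parent.
    if (!orders.TryGetValue(order.Id, out var existing))
    {
        existing = order;
        existing.Lines = new List<OrderLine>();
        orders.Add(order.Id, existing);
    }
    existing.Lines.Add(line);
    return existing;
}, splitOn: "Id");
```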


It is pure SQL, so it should, I think?


>We're moving to .Net, and I was surprised by how poor the built-in DB stuff is

Right now EF Core is probably the best ORM that has ever existed. What exactly is missing?

For raw performance you would probably reach for something like Dapper, though that is not really an ORM.


I really like EF Core, but I feel it's only recently started to hit parity with EF 6 in a lot of areas (though it has moved well past it in others), and it's missing some odd stuff that's table stakes these days. Stuff I'd like to see:

* Support for "FOR UPDATE" and "SKIP LOCKED"; it's easy enough to tag the queries and modify the SQL in an interceptor, though... yuck.

* Transaction attributes à la Spring; though I can make do without them, it's nice being able to specify what sort of transaction propagation a method is interested in.

* Better support for "bulk updates". SQL Alchemy, as rough an onboarding experience as it is, has really good support for executing db-side update logic.

That said, I largely agree with you, and nobody should overlook LINQPad. Anyone interested in babysitting the generated SQL should be using it; too bad it's not on Linux :| :( ;(

The optimizations they cover in the docs should all be done by default IMHO: compiled models, pooling DbContexts, etc. I also open and close a db connection at app start, which further reduced the first-request latency after putting a process in rotation.
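For reference, a Program.cs sketch of those two optimizations (hypothetical `AppDbContext` and connection string name; assumes the SQL Server provider package):

```csharp
// Pooling reuses DbContext instances across requests instead of
// constructing a fresh one each time.
builder.Services.AddDbContextPool<AppDbContext>(o =>
    o.UseSqlServer(builder.Configuration.GetConnectionString("Default")));

var app = builder.Build();

// Warm-up: pay the model-building and connection cost once at
// startup rather than on the first real request.
using (var scope = app.Services.CreateScope())
{
    var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
    await db.Database.OpenConnectionAsync();
    await db.Database.CloseConnectionAsync();
}
```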


> Right now EF Core is probably the best ORM that has ever existed. What exactly is missing?

Wish I could agree, but they would have to fix the very slow time to first query when using big models (500+ tables in our case). Compiled models are not a solution for us since our model changes a lot and the compilation is just as slow. It's disappointing because it used to work fine under the ancient Linq2SQL library.


Can't you generate the compiled model as part of the CI/CD?

Also, you might benefit from splitting your context into multiple smaller ones. I am considering this option for one of my code bases.


This is what I plan to do, and I might use something like Husky to ensure models are built before code is committed. It IS sort of a PITA, and I wonder if they could introduce a system on top of content hashes or the like to verify that optimized models match the source in dev, etc.


I can, but the DX would still be miserable: starting a local instance would still incur the cost of building the model, and a dev needs to do that a lot of times during a day.

I can't split the model without massive refactorings, and even then, some tables are common across all modules and would need to be duplicated. Your advice is unfortunately the standard answer in my case, I guess EF Core is just not for me, really disappointing but "c'est la vie".

edit: ah, you probably mean commit it to source control so other devs can also use it? I guess it's a compromise; it would still slow down our DbContext refresh command a lot.


Are your models code first or reverse engineered?

Are you relying on model conventions or spelling out everything in modelBuilder calls?


Well that was my point, either you're writing a lot of code yourself ("assembly"), or you use EF ("Python").

We're not used to something like EF, perhaps it would work for us. But debugging generated queries due to performance issues is something we'd like to avoid. For now the decision was made to not use EF.


You can easily echo them to the console or debug window.

To be honest, you should keep all ORM queries fairly simple if you can. Where clauses are fine. Inserts, updates, deletes: ORMs save so much code, and so much pain when you add or remove properties.

But if a query is more than a few includes or joins you should be handcrafting it with FromSQL() or loading it piecemeal using Load().

And don't even think about using it to make complicated reports, that is not a good idea. Make a stored procedure or view.

And that's especially true if you are using anything other than SQL Server. I've seen abysmal performance myself on MySQL/Maria with moderately complex EF queries. I've not really looked since EF 6, but it used to love making nested selects instead of JOINs, which were fine in SQL Server but terrible performance-wise in MySQL. Postgres I've never used with EF in anger, so I can't comment.

You can use EF with Hot Chocolate to make a GraphQL endpoint really easily, but I'd imagine that's an easy way to saddle yourself with serious performance problems unless you limit how many levels it can go. I'd be interested to hear if anyone's using it and how they find it.


What I personally dislike about it is, for the easy stuff, I'm not sure it really saves that much code over a micro-ORM like Dapper, and for the hard stuff, well, everyone's a Linq wizard already so they're tempted to use it, especially if not in the habit of writing much SQL. With today's tools you can even get Intellisense on inlined SQL queries.

Also the lazy-loading and the in-memory provider for tests are both kind of misfeatures.


EF Core generates pretty good queries for common cases. I'm not entirely sure about really complex analytical queries, but you might want to drop down to SQL for those anyway.

There is one gigantic footgun in EF Core, that is the decision between single query and split query. If you choose single query in the wrong situation you can end up with truly pathological queries. I might blame EF Core here a bit for a dangerous default, but to be honest the other choice would be dangerous in a different way, so there is no obvious good default choice here. This is a part that you need to understand to use this ORM, and fortunately it generates warnings now and kind of forces you to choose the strategy.

The one other aspect that helps to generate good queries with EF Core is to use "Select()" for any case where you want to request fewer columns than available in your tables. I find it quite natural to write queries this way in any case.
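A quick sketch of the shapes described above, against a hypothetical Blog/Posts model (`AsSplitQuery` needs EF Core 5+):

```csharp
// Single query (the default): one JOIN per Include. With several
// collection includes the row count can explode multiplicatively.
var single = db.Blogs
    .Include(b => b.Posts)
    .ToList();

// Split query: one SQL statement per included collection instead.
var split = db.Blogs
    .Include(b => b.Posts)
    .AsSplitQuery()
    .ToList();

// Projecting with Select() keeps the generated SQL narrow; only
// the listed columns are fetched.
var titles = db.Blogs
    .Select(b => new { b.Name, PostTitles = b.Posts.Select(p => p.Title) })
    .ToList();
```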


I think you might find Linq2Db[0] to be the right fit for you then, which brands itself on being typesafe SQL in C#.

[0]: https://github.com/linq2db/linq2db


You can use EF Core also with plain SQL queries, if you don’t like/trust the query builder. You can also disable change tracking completely, if you prefer to only INSERT/UPDATE directly with SQL statements. You get a lot of awesome features, but nobody forces you to use them.


Well that sounds a lot better than what my coworkers told me. Will definitely check out EF Core.


Yeah, EF's Interpolated Execute methods are a massive step up over raw ADO.NET ("assembly"), even if you don't use most of the other modeling tools and context tracking.

The Interpolated family of methods takes nice, clean string interpolation like $"SELECT * FROM Table WHERE Id = {id}" and makes sure it becomes a properly parameterized SQL query (i.e., avoiding things like SQL injection attacks).

It's a killer feature and I have some idea why it lives in the EF side of the house rather than being generally applied across all of ADO.NET, but it should still probably be a more reusable library of its own beyond just EF.
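A sketch of what that looks like in practice (hypothetical `Customers` table and entity; `FromSqlInterpolated` is the DbSet-side counterpart):

```csharp
// The interpolated value is sent as a DbParameter, not spliced
// into the SQL text, so user input cannot inject SQL.
var name = userInput;
var rows = await db.Database.ExecuteSqlInterpolatedAsync(
    $"UPDATE Customers SET Active = 0 WHERE Name = {name}");

// Querying entities the same way:
var matches = await db.Customers
    .FromSqlInterpolated($"SELECT * FROM Customers WHERE Name = {name}")
    .ToListAsync();
```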


EF handles migration and you can re-use the DbConnection and execute plain SQL.

If you don't want to debug difficult queries, then extend it to use Dapper and use the best of both worlds.


You don’t have to use migrations, it’s an optional feature. It works equally well to just generate entities from an existing database, that is migrated/set-up in any other way (as long as the schema is not crazily complex).


Not my project / I'm not affiliated, but Evolve is wonderful for handling db migrations.


Haven't heard of Evolve before, but used DbUp in the past with success too: https://dbup.readthedocs.io/en/latest/


Can someone familiar with both EF Core and the Java ecosystem’s Hibernate/etc describe how are they different?


For performance, .NoTracking() and not calling .SaveChanges() on every loop already does wonders.

I usually call .SaveChanges() when i % 20 == 0
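Roughly like this (hypothetical `items` list; `ChangeTracker.Clear()` needs EF Core 5+):

```csharp
// Flush the change tracker every 20 entities instead of once per
// row (chatty) or only once at the very end (huge tracker).
for (var i = 0; i < items.Count; i++)
{
    db.Add(items[i]);
    if (i % 20 == 0)
    {
        await db.SaveChangesAsync();
        db.ChangeTracker.Clear(); // keep tracked-entity count small
    }
}
await db.SaveChangesAsync(); // flush the remainder
```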


We'll need tracking (if I understand correctly, we need previous values). When calling SaveChanges often, how do you handle rollback in case something fails?


You can use .NoTracking() and an SQL query for bulk changes ( and minor changes). Eg. Updating one column.

Works faster than with .Tracking. The SQL script runs in your transaction, so rollback runs as usual.

The i%20==0 condition with .SaveChanges() is used with .Tracking(), yes.
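Worth noting: since EF Core 7 there is also a first-class way to do that kind of set-based column update without loading or tracking anything (hypothetical `Orders` model):

```csharp
// Translates to a single UPDATE statement; no entities are
// materialized or tracked, and it runs in the ambient transaction.
await db.Orders
    .Where(o => o.Status == "Pending")
    .ExecuteUpdateAsync(s => s.SetProperty(o => o.Status, "Cancelled"));
```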


Either wrap everything into a transaction or just call SaveChangesAsync only once. The exact behaviour depends on the database here, under the hood this uses database transactions.


I'd kill for anything close to EF for Node. Selecting complex structures from a database in any of the existing JS ORMs is just painful.


I've enjoyed the ORM in adonisjs: https://docs.adonisjs.com/guides/models/relationships#preloa...

Having used it on an actual product and dealing with some of the pain points, it's my go-to since the typescript version (v5) came out.

The ORM uses Knex.js internally, which is very simple to drop into if you just want a query builder. Having Knex be accessible also makes it simple to just write your query in plain sql as well, or as the Lucid ORM has available, just fragments of your query (say the join statement) as raw sql: https://docs.adonisjs.com/reference/database/query-builder#w...

Along with debugging, printing out the sql, and support via the Adonisjs REPL "Ace", it makes for a very nice experience.


These are more akin to Dapper which is a thin layer on top of SQL. Currently I use Prisma for JS, it's a bit better, but if you haven't used EF then you probably don't know what you're missing.


I've used EF quite a bit in the past. While there may be some features missing, Lucid is an ActiveRecord implementation, which I would figure falls into the "ORM" category.

Which features would you say are the key ones that make Lucid seem more like a query mapper (Dapper, Knex) than an ORM (ActiveRecord, EF)?

Specifically, in my Adonis projects, I'm mostly working with the Model objects through the ORM methods, and only dropping to Knex/SQL when necessary (complex CTE, etc). Since it's such a Model-centric seeming way of development, it naturally seems like an Object-Relational Mapping to me.


Context tracking - selecting multiple entities, updating and pushing them back to the db.

Selecting complex DTOs; this isn’t query building. A lot of magic turns this into SQL.

    var topPaidMayors = cities
        .Where(c => c.State.Governor.Party == "dem")
        .Select(c => new {
            c.Name,
            HighestPaid = c.Mayors
                .OrderByDescending(m => m.Salary)
                .Take(10)
        })
        .OrderBy(c => c.HighestPaid.First().Salary);


Well, you put a fairly normal select query there, which can be accomplished in Lucid as well. Additionally, change tracking exists since it's an ActiveRecord implementation.

It's not all magic either. Looking into the internals of EF, ActiveRecord, Hibernate, or other ORMs reveals patterns that, once familiar, help you reason about the behavior of complex queries. I only state this to push back against the commonly found wisdom of "big frameworks are magic" that tends to scare learning developers away from trying to understand them.

There are intersections and disjunctions of feature sets between the various ORMs, with some features for EF still only available via extensions (or nonexistent). I don't think this makes the Lucid ORM any less of an ORM.

Again, I like EF Core. I simply think that as far as node-based ORMs go, that Lucid is the one I've had the best experience with, so wanted to highlight it.


Query builders are a thin wrapper on raw sql. Real ORMs that understand the relations between objects are not.

There are a ton of joins and sub-selects in that query. 8 very succinct lines. Can you do anything close in Lucid? I don't think so.


Well, I've put sources and links to multiple pages about the ActiveRecord implementation. The Lucid ORM's "query building" (which is the same in EF; LINQ has "Integrated Query" in the name) does track entities and subentities, which is how you can query things, update them, then later call `.save()` to persist them back to the database.

However, you seem set on making this a combative conversation rather than a collaborative discussion, so I'll end my participation here.


I had hoped you'd put some actual code in either of your last two replies.


Mikro ORM is the best I had come across while avoiding Prisma.


Prisma is a bit higher-level; Mikro is more of a query builder. EF, though, is way beyond both.


What would you like the built-in db stuff to be like?


Well, I mean DataTable and friends can handle master-detail, for example, but you gotta do a lot of plumbing to set it all up, especially with autoincs involved. I was kinda expecting it to be less work.

Ideally I'd like to supply some selects, fill up some DataTables with master-detail data, manipulate it and commit changes.

But yeah, maybe I gotta check out the latest EF stuff and see if I can't convince the others...


DataTables are a construct from .NET Framework 1.1. You owe it to yourself (and would be doing your employer a massive favour) to check out EF Core.


Well that'd explain why they feel clunky :D

Will def look at it more carefully.


EF Core: 1. load order and related line items 2. edit anything 3. call SaveChanges()

Definitely check it out if you haven't recently!
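A sketch of those three steps against a hypothetical order model:

```csharp
// 1. load the order and its related line items
var order = await db.Orders
    .Include(o => o.Lines)
    .FirstAsync(o => o.Id == orderId);

// 2. edit anything: change tracking watches all of it
order.Lines[0].Quantity = 5;
order.Lines.Add(new OrderLine { Product = "Widget" });

// 3. EF computes the needed UPDATE/INSERT statements
await db.SaveChangesAsync();
```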


Just do yourself a favor and don’t use DataTables directly. If you want to be as close to the database as possible, check out Dapper. EF Core should cover nearly all Dapper functionality and much more.


I am just old-fashioned: as many stored procedures as possible. Why waste network traffic?


The new ASP.NET took a lot of good concepts from the Node ecosystem and feels really modern. It has a lot of batteries included. I think .NET is an awesome platform for building backends. I wouldn’t use it for frontend though; Razor and Blazor never really convinced me.


Blazor Server is perfect for internal applications and dashboards though. It's just so easy to use, especially if you plug in any of the community-made component libraries like MudBlazor. I had pure backend devs actually happily making frontends for once. It's essentially Phoenix's LiveView, but in C# and built on the already-proven SignalR.


For me Blazor is a mess. Its component system is even worse than Angular's, a complete OOP mess. Yes, it's easy to use, but some things don't work and it takes ages to find out why.

I prefer next.js for the frontends.


I have been having a ridiculous time trying to find out how to override an appsettings JSON variable (a DB connection string) with an environment variable. I could not find any good answer. What is the right way?


Configuration is applied in layers. If you’re using the default setup you get the following layers applied in this order:

1. appsettings.json

2. appsettings.{env}.json

3. user secrets (only in Development environment)

4. environment variables

5. command line args

You can fully customize the setup if you desire; there are packages to support things like external secret stores. If you Google "ASP.NET Core configuration" there's an MS page that goes into great detail on all of this.

Anyway, your env vars must match the name as you'd structure it in a JSON object, but with the periods replaced with double underscores. So ConnectionStrings.MyConnection becomes CONNECTIONSTRINGS__MYCONNECTION, FeatureFlags.Product.EnableNewIdFormat becomes FEATUREFLAGS__PRODUCT__ENABLENEWIDFORMAT, etc.


Awesome overview, stuff like this is often hard for newcomers to find/figure out. I'm sure it's in the docs somewhere but people often miss it.

One nitpick: environment vars don't have to be capitalized. You can do ConnectionStrings__MyConnection so the casing matches what you see in appsettings.json.

And a word of warning: make sure to understand how the configuration overrides work. If you have appsettings.json define an array of auth servers with 5 elements, then in appsettings.Production.json (or env variables) define an array of auth servers with only 1, auth servers 2-5 from the default appsettings.json will still be there! (something similar to this may or may not have caused a scare in the past)


Yep, it really just works with key/value pairs, so config overrides happen to individual keys. The config system itself doesn't really have a concept of nested objects or arrays. The config provider that reads the JSON file takes the path, like JobSettings[0].JobName, and transforms it into a key like JobSettings:0:JobName (IIRC).

I tend to avoid using arrays in config because of the unexpected behavior, and the risk of overriding something you didn't mean to. Anywhere you'd use an array you can usually use an object and bind it to a Dictionary<string, Whatever>, then ignore the keys.


A word of caution: Azure Functions implements app settings differently.


Thank you - I had read that portion of the docs, but for whatever reason that part of it just didn’t click. For some reason it made me think I could only use some special subset of env vars prefixed with ASPNETCORE_ or similar.


You can have multiple appsettings files, and they are additive. E.g., you can have appsettings.production.json which only contains an override for the connection string in the base appsettings.json.


I believe GP is asking about using runtime environment variables passed in through the shell or injected when starting a Docker container, i.e. the 12factor.net approach.


I think it's just:

builder.Configuration.AddEnvironmentVariables();

Then you can use an env variable like "Foo__Bar=X" to override Foo.Bar from your appsettings json.


If using the default builder, environment variables are included automatically as a configuration provider.


Probably not the right way, but if all else fails just prefix with

  System.Environment.GetEnvironmentVariable("NAME") ?? ...


> I was surprised how much was included and "just worked".

A simple HTTP server? Maybe I'm missing something, but when I needed it I hadn't found one.

I believe the closest it has is System.Net.HttpListener, which is a very different thing from your typical Go net/http.Server or Python http.server.HTTPServer.

I believe at some point they switched from HTTP.sys to Kestrel, so at least it doesn't need admin privileges anymore. But this whole thing is so tied to ASP.NET that it's pretty hard to figure out how to create the simplest possible HTTP server without anything else forced upon you (no services, no "web applications", no router, just a plain and simple bare HTTP protocol handler). So my impression is that maybe .NET can make certain complex things easy, but it has some issues with keeping simple things simple.


It sounds like you want minimal APIs: https://learn.microsoft.com/en-us/aspnet/core/tutorials/min-...

If for some reason you don’t want the framework to handle things like routing or request/response deserialization/serialization then you can go bare bones and implement everything via custom middleware: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/m...
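For the "no router, just a bare handler" case, the whole app can be a single terminal middleware; a sketch, assuming the .NET 6+ minimal hosting model:

```csharp
// No MVC, no endpoint routing: Kestrel plus one terminal
// middleware that handles every incoming request.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Run(async context =>
{
    // Direct access to the parsed request and raw response.
    context.Response.ContentType = "text/plain";
    await context.Response.WriteAsync($"You hit {context.Request.Path}");
});

app.Run("http://localhost:5000");
```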


… how often are you hand-parsing HTTP requests and hand-crafting HTTP responses one character at a time?

Productive devs want the request/response wrapper objects and routing constructs to handler methods to get work done and can still drop down into fine-grained request/response crafting as and when required.


> how often are you hand-parsing HTTP requests

That's exactly what I don't do, and what I want to see available in a standard library.

But I quite frequently implement custom request handling before any routing happens (if there's even any routing). That's super easy to do in Go, Python or Rust, but when I needed something comparable in C# I haven't found any similar composable independent pieces that I can join together in a way I see fit.


As a sibling commenter mentioned, there’s a minimalist functional web server available out of the box, that can later have other more complex components added on:

https://learn.microsoft.com/en-us/aspnet/core/tutorials/min-...

It’s .NET 101 stuff.


I'm sorry, I think I really must be missing something and/or explained myself poorly, but I don't see how this is comparable... Doesn't `WebApplication.CreateBuilder(args).Build()` create a whole web-application type thing? In my understanding it's comparable to `gin.Default()` or `flask.Flask(__name__)`, rather than the lower-level basics `http.Server{Addr: addr}` or `http.server.HTTPServer(address)` (which still don't require any manual HTTP protocol parsing).

And this stuff is ASP.NET Core, not a bare .NET [Core], isn't it? What I'm talking about is something comparable to just Kestrel, except that I failed to find any documentation on using it "raw" without the whole ASP.NET thing (maybe I misunderstood what it is and it's tightly coupled with the whole framework?).


WebApplicationBuilder is the mechanism for configuring the Kestrel web server. Kestrel is the foundation of ASP.NET Core; there's no separating the two. But Kestrel is a totally modular system. If you configure your request pipeline with a single custom middleware, then that is literally the only thing the server is running. If you use only minimal APIs, then the middleware that handles their routing is the only thing running. ASP.NET Core has dozens of bells and whistles, but none of them affect your app if they aren't part of your request pipeline.


WebApplicationBuilder itself is mostly the base .NET Generic Host [1] (which is "bare" .NET and is used as a common host for dependency injection plus common utilities such as configuration and logging) plus activating the Kestrel hooks for HTTP middleware. Kestrel even at its most "raw" is always slightly higher-level and closer to Flask or Express (JS) as a middleware-focused HTTP execution engine.

(This partly reflects the classic HTTP.SYS role in IIS/Windows as well, because Windows' HTTP.SYS is surprisingly high level for a "raw" kernel component for hosting web servers. From my understanding, most of "Kestrel" under the hood is a cross-platform semi-recreation of the HTTP.SYS abstraction on top of things like (but maybe not exactly) libuv/io_uring. So yes, everything is "naturally" higher level in .NET than Python's lowest level, just because it assumes a higher-level "OS server" base.)

Also, yes, the boundary between "Kestrel" and ASP.NET is really hard to define at this point. Almost all of ASP.NET is just "Express-style" middleware (though I believe many of these middleware patterns in ASP.NET predate Express) cumulatively stacked on top of each other as you add more high-level ASP.NET features, and at this point just about all of them are optional depending on what you are looking to do.

Even many alternatives to ASP.NET at this point are built on top of the core basics like WebApplicationBuilder, they just diverge at which sets of middleware stack on top of that.

As others point out the recently expanded "Minimal APIs" experience is most tuned for "Flask-like" out-of-the-box behavior: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/m...

That's as low level as it gets in .NET, but not so much because of "strong coupling" but because "everything is middleware" in .NET.

[1] https://learn.microsoft.com/en-us/dotnet/core/extensions/gen...
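For reference, the "Flask-like" Minimal APIs shape described above looks roughly like this (routes and payloads are made up for illustration):

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Flask-style route registration: no controllers, no extra middleware.
app.MapGet("/", () => "Hello World!");
app.MapGet("/users/{id:int}", (int id) => Results.Ok(new { id }));

app.Run();
```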


Thank you very much for your comment!

I don't have any issues with the IHost and builder patterns. I actually like those - although I've only used the very basics, so I don't really know about the intricacies and possible drawbacks.

Thanks for clearing up my misunderstanding about the coupling. I really thought Kestrel was something different; I hadn't expected it to be this high level. It being a replacement for HTTP.sys totally makes sense, of course.

I've found and read https://learn.microsoft.com/en-us/aspnet/core/fundamentals/m... and it's starting to make more sense now.


There's an ancient low-level API I just remembered, which you can explore; it still remains around for backwards compatibility but isn't recommended for new code: Kestrel was (via a long scenic route) forked from System.Net.HttpListener [1], which is the closest thing to a strict bare-bones HTTP.SYS wrapper that has existed in .NET.
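For the curious, HttpListener usage looks roughly like this (a minimal synchronous sketch handling a single request; the URL prefix and response body are placeholders):

```csharp
using System.Net;
using System.Text;

// Bare-bones server via the legacy HttpListener API
// (kept for backwards compatibility, not recommended for new code).
var listener = new HttpListener();
listener.Prefixes.Add("http://localhost:8080/");
listener.Start();

var context = listener.GetContext();        // blocks until one request arrives
var bytes = Encoding.UTF8.GetBytes("hello");
context.Response.ContentLength64 = bytes.Length;
context.Response.OutputStream.Write(bytes, 0, bytes.Length);
context.Response.Close();
listener.Stop();
```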

There's a long issues thread on whether HttpListener should be more strongly marked as deprecated [2], to avoid people accidentally using it despite today's recommendation to use Kestrel / the "most core" parts of ASP.NET. One fun part of the thread is an example repo of the absolute most "bare-bones" and "raw" Kestrel bootup possible [3], including a "TODO: implement TLS handshake here" bit.

[1] https://learn.microsoft.com/en-us/dotnet/api/system.net.http...

[2] https://github.com/dotnet/platform-compat/issues/88

[3] https://github.com/davidfowl/BasicKestrel/tree/master/BasicK...


>In my understanding it's something comparable to `gin.Default()` or `flask.Flask(__name__)`, rather than lower level basic `http.Server{Addr: addr}` or `http.server.HTTPServer(address)` (which still doesn't require any manual HTTP protocol parsing).

It's not. Keep reading.


You just add middleware before you register any controllers (or, leave out all that stuff entirely)
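Something like this, roughly (the header name is made up for illustration):

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
var app = builder.Build();

// Middleware registered before MapControllers runs first for every request.
app.Use(async (context, next) =>
{
    context.Response.Headers["X-Request-Id"] = Guid.NewGuid().ToString();
    await next();
});

app.MapControllers();   // or leave the controller stuff out entirely
app.Run();
```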


This works in practice, but comparing to Python, it's like pulling in Django when all I need is in the standard library (although some people do and their projects are successful, that's for sure). I just don't want to bring a whole industrial grade CNC machine to do something a handsaw would be perfect for.


It's nothing like pulling in Django.


Mind sharing a couple of examples? For me it's more natural to do custom-headers stuff on the webserver's side.

Disclaimer: looking from sysadmin's POV.


You can just use Kestrel without anything else.
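The barer, older hosting form looks roughly like this (a sketch; the port and response are placeholders):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;

// Kestrel plus a single request delegate --
// none of the WebApplication conveniences at all.
var host = new WebHostBuilder()
    .UseKestrel(options => options.ListenLocalhost(5000))
    .Configure(app => app.Run(ctx => ctx.Response.WriteAsync("ok")))
    .Build();

host.Run();
```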


I'm definitely a fan of the "batteries included" approach, but I am ambivalent on EF because I feel like it is still a bit too magic and gets abused. Though it's not like it's a big deal to use whatever else you prefer instead (I am a big fan of the inline SQL with Dapper approach).


I stuck to a DB-first model + LINQ + SaveChanges() and largely managed to keep the magic out quite successfully for a .NET 6 web project last year. Records are fantastic when composing queries. I didn't touch inheritance or any fancy mapping strategies -- one table, one class.

The only bit of framework-specific / hidden-magic debugging I really had to do was realizing I needed AsSplitQuery() when creating objects composed of independent datasets, and AsNoTracking() for a decent perf bump. Single group-bys are fine, but when you start nesting them it gets really hairy really quickly -- those usually ended up becoming views.

Otherwise, change-tracking worked wonderfully for batch updating (though everything I did was short-lived), and outside of group-bys the LINQ -> SQL mapping (and vice versa) was extremely predictable: EF Core generated the SQL I expected, and I could write the correct LINQ from the SQL I knew I wanted.

9/10 would use again; inline sql is for nerds
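The AsNoTracking()/AsSplitQuery() calls mentioned above look roughly like this (hypothetical `db` context and entity names, purely for illustration):

```csharp
// Read-only query over a hypothetical Orders table with two collections.
var orders = await db.Orders
    .AsNoTracking()        // skip change-tracking for a read-only perf bump
    .Include(o => o.Lines)
    .Include(o => o.Payments)
    .AsSplitQuery()        // one SQL query per included collection,
                           // instead of one giant cartesian join
    .Where(o => o.CreatedAt >= cutoff)
    .ToListAsync();
```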


Yeah, if you stick to a very narrow subset of what it can do then you will have no problems. Hopefully everyone else on your project is on the same page about what that subset is.


I mean, if your alternative is no ORM and inline SQL, then the "narrow subset" I've chosen is precisely competitive, highly effective, and IMO strictly an improvement. If your alternative is a different ORM, then my opinion is moot, but at least I've never seen an ORM be worth the headache (which is why we avoided using the full feature set in the first place).

I don't know why one would have an issue with not using the features they're not looking for anyway. Keeping everyone aligned on patterns/usage is half the point of code review, and it was managed there without much trouble (you can't really do the truly magical incantations without quite a bit of setup).


I’m a fan of Dapper for doing the tedious mapping and result set handling stuff but still sticking to SQL. I don’t really find writing LINQ instead of simple SQL queries is much of a time saver.


It's not the LINQ syntax itself I'm a fan of so much as the fact that I'm no longer dealing with arbitrary strings being smashed together that happen to form a valid SQL statement, plus all the benefits that inevitably come with not having stringly-typed logic (being able to refactor properly, type safety, proper autocomplete, no typos at runtime, etc.). It maps closely enough to SQL itself that the negatives of having an "intermediate" language/API aren't significant, and the magic can be made negligible, so it's largely pure gains.
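A side-by-side sketch of the contrast (hypothetical `User` entity, `conn` connection, and `db` context, all just for illustration):

```csharp
// Stringly-typed SQL (e.g. with Dapper): a typo in "Email"
// only surfaces at runtime.
var byEmailSql = await conn.QueryAsync<User>(
    "SELECT * FROM Users WHERE Email = @email", new { email });

// The LINQ equivalent: refactor-safe, checked at compile time,
// and autocomplete works on u.Email.
var byEmailLinq = await db.Users
    .Where(u => u.Email == email)
    .ToListAsync();
```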


The tools are good enough that my IDE can detect SQL strings and offer suggestions, plus once the database starts getting used by multiple applications even type safety won’t really make it safe to go renaming existing columns.


What is DB-First model ?


There was a specific feature in the .NET Framework version of EF called "database first" that would analyze your existing database and generate an XML file describing it, which EF would use to generate models on the fly. It was pretty horrible, and thankfully it isn't supported by modern EF Core. However, the term "database first" stuck, and it's really come to refer to any scenario where you manage your database using a tool other than Entity Framework migrations. Could be a dedicated migration tool, a SQL Server database project, whatever. Then you either create EF models to match the tables or use a tool to generate them.
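The "use a tool to generate them" part is typically the `dotnet ef dbcontext scaffold` command; something like this (the connection string and output directory are placeholders):

```shell
# Reverse-engineer EF Core entity classes and a DbContext
# from an existing SQL Server database.
dotnet ef dbcontext scaffold \
  "Server=.;Database=MyDb;Trusted_Connection=True;" \
  Microsoft.EntityFrameworkCore.SqlServer \
  --output-dir Models
```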


Coming from 10 years of spring and NodeJS development, last year of .NET has been incredible. I share that "batteries included" experience. So much useful functionality is just packed right in. And there's a lot of backward compatibility and support.

So much better than spring and NodeJS.


NestJS provides a comparable experience for Node.js as well


When we update .NET libs years later, we still don't have to change any code in our own software. With anything Node.js, updates of any kind, even after mere months or sometimes weeks, mean everything breaks. Not sure how anyone builds things with it that don't need updates beyond security fixes. Smaller companies might have software running for years or decades that only needs security updates, not new features. A lot of our stuff is over 10 years old and needs only security updates; none of it is Node (or in the npm ecosystem, for that matter), as we've simply shot ourselves in the foot with that too many times already.


It's been a few years (2018?) since I used NestJS but back then our experience with it was far from stellar. It lacked documentation beyond the basic "getting started" examples (just checked, it doesn't seem like they've improved much on that front), it had quite a few footguns and as soon as we strayed off the beaten path, things tended to become painful, especially on the GraphQL side of things and general 'plumbing' like interceptors, schemas, and data validation.

Internal error handling was sometimes abysmal too, a misconfiguration of certain dependencies in `AppModule` could leave the application in a broken state on startup where it wouldn't bind to its port and no error messages were printed to console. On a few occasions I had to spend an hour or more reading and understanding NestJS source code to resolve those issues, which could have been avoided if they had better internal validation and error logging in place.

That's not to say it was all terrible, some aspects of it were genuinely good, but the overall experience and many hours of needless pain it caused left a really bad taste in my mouth. Back then, at least, it felt like a Lego set where the pieces didn't all quite fit together.

Depressingly enough, it seemed like NestJS was the best that Node.js world had to offer which made me quit the ecosystem altogether.



