C was famously well designed. The problems it was well designed for have disappeared and its good ideas are so well understood that, like many great early works, they are cliches that can no longer be recognised for their brilliance because everyone is doing that.
C's flaws are real, but they exist in the context of that design. For example, C has bad memory management because it was built when programmers had kilobytes to megabytes to play with and needed perfect control.
I'm not sure anyone could ever accuse anything related to the internet of being 'designed'. Even being standardised is a stretch. Javascript isn't C, because C is an extremely focused attempt to solve specific problems. Javascript simply does not attempt to tackle the same problems as C and it doesn't attempt to tackle them in the same way.
It may be that they both fail to solve the big problems in web development, but that is not really a thing worth comparing languages on.
This, a thousand times. It's just so easy for people to fall into the trap of thinking the current list of things to do or avoid has always been obvious. The path is always obvious after a million people have trodden it. But at one time it wasn't. At one time people had to figure out things we now take for granted - like network APIs or concurrent programming models or, most relevantly, memory management.
At the time manual management of object lifetimes was the only feasible option for a systems programming language, and it was feasible in part because control flows were simpler. Yes, even in things like OS kernels. They were kept simple by design in part to keep lifetime management tractable. Many of the things that make that approach non-viable today - e.g. massive concurrency, callbacks, exceptions - weren't part of the idiom then. They were irrelevant to C's design.
C was good for its time. Go ahead, take a look at PL/I and BLISS. See whether you'd rather program in those. It might or might not be relevant today, but C was better designed in its context than anything most of its modern critics have ever designed (or will design) themselves.
The Lisp machines were contemporaneous with Unix, as were Wirth's Pascal based systems, the Burroughs with its bound checks, Algol 68.
Don't get me wrong: C gets the job done. It's successful in a wide range of domains, and I do like using it. But its success was not inevitable and involved a lot of external and soft factors.
C was introduced in 1972. Lisp machines started in 1975, so can't have influenced C without some time travel.
> Wirth's Pascal based systems
Pascal predated C by only two years, so it's not clear how much it could have influenced C's design. (Side note: Wirth originally intended Pascal to be a pure teaching language, and was reportedly furious when a TA wrote the first compiler to make grading easier.) More importantly, Wirth's Modula machines could not have existed until at least 1979 when the language itself was introduced.
> Burroughs with its bound checks
That's a machine, not a language. What language(s) were its OS and compilers written in?
> Algol 68
Now we're getting somewhere. Yes, Algol 68 had existed for a whole four years. Was it suitable as a systems programming language? Not in its canonical form, but there were certainly members of its family (e.g. Wirth's PL/360 and ALGOL/W) that were worthy alternatives to C.
> its success was not inevitable
Never said so. I'm not even trying to claim that C was the best possible language at the time. The question is whether it was well designed for its time, with specific reference to the issue of memory management. I'd say it was, and leave it as an exercise for the reader to determine what strategies those other alternatives - even if they were superior in some ways - used to deal with heap-allocated memory.
> C was good for its time. Go ahead, take a look at PL/I and BLISS. See whether you'd rather program in those.
I don't think that's a fair comment. PL/I wasn't really in the same domain as C, and there were other alternatives to PL/I (such as FORTRAN, COBOL, ALGOL, Simula, LISP, etc.).
In fact by the time C was released, there were already literally dozens of programming languages. And C itself was an evolutionary progression, taking ideas from B, ALGOL, and a few others. Pascal pre-dates C as well.
I don't mean the following as a criticism of C, because I do quite like the language, but it does sometimes feel like it's put on a pedestal because it ended up dominating the industry -- either directly or via other languages heavily inspired by C. I do wonder if we'd be seeing the same rose-tinted comments had UNIX been based on Pascal and had Object Pascal not lost to C++.
> by the time C was released, there were already literally dozens of programming languages
Hundreds, actually (I was there), but writing a compiler in COBOL or an OS kernel in FORTRAN would have been quite an exercise. For those kinds of domains there were few alternatives, and assembler was still a common choice. C was certainly an improvement on that.
Pascal was my favorite language for a number of years. But it was completely inappropriate for systems programming because it was too protective, it kept you from doing the things that are regularly required at an OS level.
> Pascal was my favorite language for a number of years.
Likewise :)
> But it was completely inappropriate for systems programming because it was too protective, it kept you from doing the things that are regularly required at an OS level.
I guess it depends on what systems you're programming. I seem to recall Pascal was heavily used in the early days of Windows and Macintoshes. Albeit my memory is hazy so perhaps that was "just" user applications with the lower level stuff being done in assembly?
I started my professional programming career working in Pascal on the Mac. I clearly recall having to write one piece (a MIDI driver) in assembly. I have fuzzier memories of the interfaces using a rather explicitly Pascal-based string format and stack calling convention, and MPW was also Pascal-oriented rather than C-oriented, so I suspect that at least some of the system software was written in Pascal. OTOH, I'm pretty sure I recall hearing that practically all of what Atkinson and Hirschfeld and Capps and so on wrote for the original ROMs was in assembly (as distinct from what had been done on the LISA) because that was the only way it would fit.
The Pascal and C calling conventions were different in the way the stack got cleaned after a function call, and Pascal turned out to be ever so slightly faster. So the OS functions used that as the standard. Microsoft had a keyword that could be applied to any function in C to make it compatible.
Assembly sounds very believable for early OS work.
> The problems it was well designed for have disappeared
I think C is still an incredibly relevant language. There are still plenty of cases where it's useful to have a very simple, un-opinionated language for working close to the metal. I think C as "portable assembly" is still incredibly useful, and no other language has really filled that slot perfectly.
> C has bad memory management because it was built when programmers had kilobytes to megabytes to play with and needed perfect control.
I don't think C's memory management is bad. It's not right for plenty of problems where that control isn't needed; for front-end work, say, or for server software where the main concern is stability, manual memory management is unnecessary. But for domains like computer graphics, or for optimizing compute-bound workloads, manual memory management is still completely relevant.
To me, the parts of C which seem a bit dated are the lack of some basic "quality of life" features we take for granted in most languages. For instance, I would be happy with a language almost exactly like C, but with a slightly more powerful type system, first-class optionals, and some syntactic sugar for dealing with arrays (i.e. counts, map/reduce/filter functions, etc.).
I mean, there are always matters of degree. Sure, C imposes some structure by nature of its syntactic choices, but it's extremely wide open compared to other programming languages.
Lisp, for example, is very opinionated when it comes to things like side-effects. Go is opinionated when it comes to how memory should be managed. Rust is opinionated about when data should be readable and writable, and passed between threads.
C doesn't impose any of these restrictions, largely by virtue of being low-level, and moreover, it should be possible to implement whatever data transforms are implemented in any of those other languages in C as well. The inverse is not the case, so I would argue that C is more permissive than any of those languages.
> C doesn't impose any of these restrictions, largely by virtue of being low-level, and moreover, it should be possible to implement whatever data transforms are implemented in any of those other languages in C as well.
You're conflating lack of guardrails & helpers with "low-level". C is not lower level than Rust or C++, not at all. It's just more dangerous, and that danger brings no "low-level-ness" advantages.
> it should be possible to implement whatever data transforms are implemented in any of those other languages in C as well.
Define what you consider "possible" here. There are a bunch of things I'd consider "not possible" in C that you can do trivially in C++/Rust, such as the codegen that templates & coroutines perform, for example. If you throw enough preprocessor macro hacks at it, to the point where it doesn't resemble C in the slightest, sure, you can sort of pull it off. But is that sort of thing considered "possible in C"? Or have you just made a second language in front of C that happens to compile to C via the hello world of compilers?
Even with macros, I'm not entirely sure you can actually do something like static_assert in C, since you can't put an #error in a macro substitution. Or, more practically, defining the backing type of an enum.
> Lisp, for example, is very opinionated when it comes to things like side-effects
Ermm...no, it's not. Functional programming languages do tend to do that, but most lisps aren't functional--certainly, it's not a defining characteristic of the language family.
> C doesn't impose any of these restrictions, largely by virtue of being low-level, and moreover, it should be possible to implement whatever data transforms are implemented in any of those other languages in C as well. The inverse is not the case, so I would argue that C is more permissive than any of those languages.
1. Turing equivalence.
2. Because it's so close to the hardware, C inherits the opinions of all the major ISA designers.
> Ermm...no, it's not. Functional programming languages do tend to do that, but most lisps aren't functional--certainly, it's not a defining characteristic of the language family.
Fair enough. I haven't worked with Lisp since university days, but I think the point stands with any strict functional language.
> 1. Turing equivalence.
Ok, technically it's possible to write programs which perform the same computation in any two Turing-complete languages, but what I mean is that, for example, it's easier to emulate how an arbitrary Go program performs that computation in C than in the other direction.
edit:
> 2. Because it's so close to the hardware, C inherits the opinions of all the major ISA designers.
That's one way to look at it. Another way is that C just gives you an interface to the hardware as it is, and doesn't make many judgements about which abstractions should be built on top of it by default.
Is Lisp opinionated about side-effects? Maybe Clojure is, but have you looked at Common Lisp or Emacs? I can't think of a less-opinionated-about-side-effects application than Emacs.
* In the absence of language extensions, you don't have a pointer type that can be incremented all over the place and dereferenced. You don't just mutate memory willy nilly.
* In relation to the above point, (in the absence of a language extension like "locatives") there is no address-of operator; we can't pass the address of a variable somewhere to have it changed. This can be simulated with a lexical closure, whereby we provide a function to perform the change, which is called from somewhere. "Use lexical closures as thunks if you want var parameters" is quite opinionated.
* All mutation takes place through data-structure-specific functions (like rplaca for mutating the car of a cons cell) or dedicated special forms (like setq for mutating a variable). The use of generalized places like (incf (car x)) compiles into type-safe function calls; on one particular Lisp, CLISP, it becomes a call to SYSTEM::%RPLACA.
There is no idea there of calculating some effective address and just clobbering bits through that pointer. The original value is accessed using car, and the mutation happens opaquely inside the SYSTEM::%RPLACA function. That could be inlined by the compiler, but the model is basically that mutation of objects is done via an API.
* The argument evaluation order of standard functions is well-defined, so side effects occurring in argument forms are well sequenced:
(let ((x 0))
  (list (incf x) (incf x) (incf x)))
--> (1 2 3) ;; required result in ANSI Lisp
(Scheme bungles this by having less of an opinion; it has C-like unspecified argument eval order.)
C is not opinionated enough w.r.t. side effects and order of evaluation, which means that programmers have to learn to be highly opinionated "human compilers" to avoid writing bogus code.
C is not close to the hardware: vectors are a bolted-on afterthought, especially multidimensional arrays. The carry flag and the floating-point environment are afterthoughts. One of my coworkers uses assembly for just this reason!
C does not have vectors. C++ has them. Do you mean arrays? Arrays are just translated to pointer arithmetic formulas based on the type/size of the array's elements.
There's a reasonably unambiguous way to measure how "opinionated" a language is, which is to look at the size of its runtime necessary to run programs in that language [1]. C's is not huge, but it is necessary and non-zero. By contrast, many other languages have huge runtimes.
There are some languages that can get smaller than C. Assembler, obviously, is basically the smallest. Rust does have a default runtime, but it's thin, and most of the rest of it can be shut off. Still, it's not a long list of languages that are less opinionated than C.
[1]: If you want to get really technical, this is "relative to the hardware it is running on". C gets a bit of a bonus here because the hardware has over the decades actively targeted C. Particularly visible in the processor's support for calling conventions, and manipulating a "stack" so natively. (While a stack is really, really convenient, it is not, strictly speaking, necessary for programs to be structured that way.) But for the sake of simplicity I'll ignore that for now.
I think of "opinionated" more as a measure of how much structure a language imposes on you as a programmer.
For instance, Javascript has a massive runtime, but I'd say it's an extremely unopinionated language. If you want, you can implement a Javascript function which rewrites itself every time it's executed.
In my opinion Rust is an extremely opinionated language.
That's a perfectly valid definition of opinionated, and it measures another aspect of a language.
Languages have a lot of aspects.
I used this aspect because upthread we were discussing memory management and "portable assembly". In another context I'd happily use this concept instead.
I've always argued against the idea that "portable assembly" is a useful way to look at C. It is terrible for the purpose in the sense that the compiler is allowed to generate code that has little to do with what you wrote in terms of procedure. Assembly on the other hand is useful exactly because it affords minute control of procedure.
In my view, C is a secretly declarative language where your program is a declaration of constraints for the optimizing back-end that just happens to look a lot like procedural code.
I can write what looks like a function containing a loop or a memory write and call it from main, only to end up with a program that just loads a constant into a register and returns. Because of this, and the sometimes complex conditions that enable some optimizations, it's hard to reason about C code in terms of its performance without falling back on analyzing the actual "non-portable" assembly output of the compiler.
C will remain relevant because Linux is written in C. Accessing all the core functionality of the OS (networking, files, IPC...) requires using the C libraries the OS provides.
Using libc isn't a strong requirement. There's nothing stopping you from making your own syscalls for everything in the language of your choice. I reckon C remains relevant because replacing what's already been written in it is a lot of work for very little benefit.
I think that many of C's issues come from the APIs rather than the language. You could do safe memory management by only using refcounts in the API and sticking to that. You could have bounds checking in the strings library. But strings are null-terminated (more an API design choice, since strings are not part of C itself). I'm not saying that's bad design, because it allows strings longer than 255 characters with only one byte of overhead, but it is a compromise that made sense then and has its drawbacks. There are a few stdlib complements in C (strlcpy, antirez's sds library for strings), but it's a shame a more modern lib/runtime has not been established as a standard while keeping C's strengths. Or maybe I'm ignorant.
I think you're right. I'm by no measure a C programmer, but I've always regarded it as something that people don't like coding in because it's difficult, not because it's a bad language. Most people complaining about C are like guitar players complaining about the piano. It's more complex, but it has to be given its role.
That said, the fact that K&R didn't specify the size of an int or long complicated matters, but it seems to be an accident of history, not a mistake in design. They had no choice.
I'm a C programmer. I don't know if I'd say it's a bad language, but I definitely hate it. I agree wholeheartedly with OP about the quality of life stuff.
I'm constantly amazed by what C will let you do: the footguns, the lack of type checking. C compilers will link a function call to a variable, for instance. Don't forward-declare your functions, and you might not even get a warning (yes some compilers don't give warnings by default for implicit functions). The linker will then go to the symbol table, find anything that matches, and emit a function call to it. That's just one example, but there are many. C builds erroneous code.
Although it was great for its time, I think now (2019) it's rather silly that you de facto must learn three languages to use C: C, the C preprocessor, and make.
If I had to do that every day (I don't use C, except in passing, and usually in read-only mode) I'd tear my hair out, and that's even before the inevitable footguns I'd write.
That's not even the half of it. There's dealing with the differences between compilers, oftentimes preemptively, and various standards. For a very long time it was common to build under c89 "just in case". Now you have to deal with the differences between c89, c99, and various industry/internal standards. That's before you even consider how configurable compilers are, and all the flags involved. Then, as you said, make or an IDE.
Much of this isn't necessarily specific to C (lots of toolchains include build scripts, complicated settings, different language revisions, standards, etc.). It does feel especially bad in C though.
> I think now (2019) it's rather silly that you de facto must learn three languages to use C: C, the C preprocessor, and make.
That's basically the case with nearly every language. Most will have some sort of equivalent of the preprocessor, and it's usually a lot more complicated: C++ template craziness, code generation tools, dynamic metaprogramming tools, etc. For all its faults, the simple text substitution of most macros is the least complicated option.
Most languages have their own build tools as well; sometimes it's the same language with something like rake, other times it's XML or its own language. And they're nearly always inferior to make in many ways.
My preferred workaround for this is to write C code that is accepted by a C++1x compiler under fairly strict warning settings, and use a few standard macros that compile to the usual unsafe constructs in C but allow the C++ compiler to enforce extra rules. E.g. I use {S,K,R}_CAST macros instead of plain C casts.
>yes some compilers don't give warnings by default for implicit functions
Broken or ancient compilers. That's as much C's fault as it's Intel's fault that Windows 10 doesn't run on a Pentium MMX. Implicit function declarations are not part of C; they were removed back in 1999, in C99!
I agree to some extent (I'd also add that undisciplined and apathetic programmers are an even bigger problem).
I have to disagree about implicit functions not being a part of C anymore. I've yet to see one generate an error; that's optional, and frequently not enabled. I'm including brand-new, state-of-the-art compilers, even (default) gcc. Of course, not even emitting a warning is bananas.
>I have to disagree about implicit functions not being a part of C anymore. I've yet to see one generate an error;
This is not really a debatable matter. There's an international standard for what is, and what is not a part of the C language.
Errors as you seem to understand them are optional for everything except the #error preprocessor directive. For every other invalid C program, only a diagnostic (like the warning you got) is required, and a conforming implementation is free to finish translating the invalid translation unit. I don't see why that would be an issue, as it's very easy to turn those warnings into translation errors if wanted.
> I disagree. I'm glad they didn't -- we may have been stuck with 2 byte ints and 4 byte longs!
At least then "long" would have a use, whereas currently it's entirely dead & useless, and it'd be much easier to deal with printf's string formatting than the PRId64 nonsense.
The named-size types (int32_t, etc.) are king anyway.
> I'm not sure anyone could ever accuse anything related to the internet of being 'designed'. Even being standardised is a stretch.
I think you are talking about the ecosystem surrounding JavaScript, which was created in 10 days for incoherent marketing reasons, not the Internet, which was designed via a rigorous standardisation process. The Internet protocol suite is a beautiful and lasting design that dates back to the 1970s, and it's proved flexible enough for everything from the dial-up era to modern wireless video streaming.
I suppose I could have been fairer. HTML/CSS/JavaScript doesn't grow in a controlled way, and its standardisation tends to be reactive rather than proactive.
Most of the components that aren't about presenting data to the user are quite well designed. That being said, the internet is very much a sum of parts and is more reminiscent of a naturally forming compost heap than of anything planned out. And the TCP protocol is a very interesting study from a design perspective: an unusual example, in my book, of excellent engineering paired with middling design. The original protocol was not good enough (e.g., congestion collapse), but its evolution was quite something.
> C was famously well designed. The problems it was well designed for have disappeared and its good ideas are so well understood that, like many great early works, they are cliches that can no longer be recognised for their brilliance because everyone is doing that.
Another way of looking at this is that C is a step on a staircase. There were many steps before it, and there are many after.
It's okay to pause on a step for a while and catch your breath. Pause too long and you might start thinking this is a good place to stop, since the steps in front of you seem more trouble than they're worth. You may be tired of climbing, and that's okay.
No one person has to keep climbing, but we need people to continue on, so we can see what's at the top of the staircase. So even if some of us decide to make our step a cozy place and settle down, let's be sure to cheer those continuing on, and not try to convince them all to stop with us.
C is an amazing language, and I’m glad that I spent the first five years of my career writing it pretty much every day. (Some days I was writing FORTRAN, of an era when it actually was capitalized, or working on some very bad late 90s C++. But most days, C.) The fact that it was a phenomenally well-designed language doesn’t mean we can’t try to improve upon it. My love for it doesn’t mean I don’t want to improve on it! Quite the opposite in fact.
What most people see as manual memory management in C, malloc and free, doesn't even give you that much control.
Stuff on the stack also gets handled pretty automatically.
You can however implement completely manual memory management with C. I think the Linux kernel does a lot of that. (Though mostly do it via libraries they've written, of course. Manual doesn't mean you have to do everything with copy-and-paste.)
It's a greater degree of control than you get with most languages. For instance, in most high-level, garbage collected languages with RAII, allocations and frees happen at a fairly granular level when objects are created and destroyed, and the programmer has zero ability to ensure things like cache-friendly, contiguous memory layouts.
For things like real-time applications, or even video games, this can make the difference between success and failure.
Yes. Though funnily enough, most implementations of malloc/free are not suitable for hard real-time code. One of malloc or free usually has an unbounded worst-case runtime, because of the data structures they maintain to track free memory.
To the original point, at least C/C++ allow you to do this type of memory management, even if it's not the 'default' strategy. It's not so clear how you would achieve something like this in Go or Java.
Agreed. I wasn't contesting that point at all, just elaborating.
Most of the time for most people, the risk of arbitrarily long pauses from malloc/free is only a theoretical concern. Pauses from your typical GC are a practical concern more often.
Reference counting lies in the middle. If you drop the last reference to a big part of your object graph, it can take an arbitrarily long time to clean up as well.
You can bound it by either being careful how you construct (and deconstruct) your object graph at application level; or you can go for a more sophisticated real-time variant of reference counting.
Google has lots of hits for 'real time reference counting'.
C has no memory management; it's left to the programmer. So if a C program has bad memory management, it's not the language but the programmer (admittedly not easy to get right if you need to do it yourself).
C is specific in that it tries to give a good, direct interface to the hardware. When working within other software, like an OS, it's tedious, as there are a lot of things outside of C itself to take into account. Again, not a downside of the language, but of the way it is used.
JavaScript has no relation whatsoever to C, in either its design or its implementation. It's a scripting language :/
C is far superior to a lot of languages, but it's often badly applied, giving it a bad name. It has a more limited scope in today's world of rapid development on top of operating systems. For embedded/bare-metal systems it rules the day, and there's barely an alternative unless the bare metal provides enough spare resources to allow for things like C++ or Rust, which add overhead in exchange for more ease of use (i.e., not always worth it or even possible, depending on the target system).
Assembly would be a good alternative to C, but it's incredibly more tedious; C is a good high-level replacement for assembly, which is what it was designed to be.
C is great, just don't use it if you don't need to use it.
You might argue that C's memory management is simplistic, lacking, or error prone, that it is not automatic, that there is no garbage collection, but to say that C has no memory management is objectively wrong and detrimental to the understanding of how C works.
Stack memory is tracked by both the compiler and, in some cases, the runtime (see "alloca"), by generating code that manipulates a set of registers. Memory on the heap is managed by a minimal set of calls: you can allocate a chunk of memory of a certain size, release it, or attempt to re-allocate it. Each of those calls maintains a data structure for tracking available chunks in the raw address space, which the programmer does not have to write. This is memory management.
The difficulty of dealing with strings in C is a specific instance of the larger problem of missing the key idea that an array shouldn't be just memory; it should be memory plus dimensions.
C would have been a much better language if its arrays/buffers were always sized. It would have made its stdlib and application APIs much safer and would even have enabled having bounds checking as an optional compilation flag.
Pascal strings were not uncommon in C at one point, rather than null-terminated strings. They lost, but in a sense they didn't, since all the string-manipulation functions we use nowadays take a length parameter anyway.
The notion was that strings would be consumed fully, so the starting pointer could be used for iteration, with the null char as a natural end condition. There it is, saving an int and the cycles spent tracking a length.
It's just that coding in such a way turned out to be too low-level. The pointer in the programmer's hand became akin to a neurosurgeon's scalpel: casually performing surgery, often on 'self', and with shaky hands.
People usually add their own or use existing libraries for strings with bounds. All I'd argue is that it's somewhat annoying that the C standard library doesn't have these, so there are too many variants of basically the same thing out there, because most programs benefit from having strings with (unchecked) bounds.
Bounded arrays and array slices should not be part of the stdlib; they should be a primitive type of the language. Only then can you have automatic bounds checking for every array indexing operation.
It is not entirely missing that idea, since what you malloc() must be aware of its own size, even if that’s not usually something you get access to as a programmer.
One way I’ve seen this done is you malloc() 30 bytes, it allocates 34 and gives you a pointer to 4, and bytes 0-3 hold internal information for the malloc/free implementation.
I would happily argue that JavaScript is _worse_ than C.
For example, this [0] is quite obviously outputting a string to the console, isn't it? (Hint: some of those seemingly random characters pull a double quote out of JS' leaky guts.)
And whilst TypeScript can help by identifying some types, the syntactic difference between one type and another can easily be hidden from the programmer.
One mistake, one service sending one type when you expect another (we promise we will always send a fully-formed object or plain text, oops we sent a list), and you end up only hitting the error ten levels deep and not knowing how things got so badly mangled.
Type safety is hard. Type safety in a language that desperately wants to be a string is nearly impossible. If you look at anything the wrong way, it will become something else.
At least with C, if I try and treat something as a size_t it will try and behave like a size_t. It might not actually be the right type and cause problems that way - but the behaviour is within the expected bounds.
You can pick out this kind of nonsense code in pretty much any language. Are you writing this kind of code? Not really.
You have several options when validating input: hand-rolled asserts, flow-runtime, JSON Schema, several helper libraries, or writing your own DSL with the help of nearley or something similar if needed.
JS that doesn't check the type of every argument down to the prototype and dependent-function level (not knowing what functions are going to be required further on) is written by JS devs all day, every day.
The situation isn't nonsense; it is the state of the art in JS development.
I don't get your argument. You receive some JSON over the wire, it's not the right format, and everything blows up in your face. Likewise, you receive some bytes over the network, interpret them as one struct when they're really a slightly different struct, and you have your problem all over again. Arguably worse.
This is going to happen with any technology unless you validate your data at the boundary. Good C code does that. You can do it in TypeScript as well, a good part of it automated (see e. g. https://github.com/woutervh-/typescript-is).
I don't agree with his overall thought process, but JavaScript is not without issues. I believe what he's referring to here is that JS will (usually) determine its actions based on the runtime type, whereas C will (usually) determine its actions based on the compile-time type. So if I am expecting a number but get passed a string, in JS I am now executing string operations (with all operators), whereas in C I'm going to be doing arithmetic as expected but on incorrect data (probably the address of the string). Neither of these is likely to be what we want, but the JS can end up with extremely unexpected outcomes, especially if we end up with object mismatches, whereas the C is restricted to the outcomes possible for different numeric inputs.
> You receive some JSON over the wire, it's not the right format, and everything blows up in your face.
If I was using a statically typed JSON parser, like say Rust's or Crystal's, then it would blow up, right there. It'd throw an exception, or pass back a failure value that I have to handle.
JS doesn't do that. It just keeps going, and due to the way it implicitly handles types, getting a list instead of an object can mean you end up with a string later on.
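A small sketch of that failure mode, assuming the service was supposed to send an object with an `id` field:

```javascript
// The parse succeeds even though the shape is wrong, the property access
// silently yields undefined, and the + operator happily turns the array
// itself into a string. No error is raised at any step.
const expected = JSON.parse('{"id": 7}'); // what we were promised
const actual = JSON.parse('[7]');         // what we were sent; still no error

const goodMsg = "id: " + expected.id;     // "id: 7"
const badMsg = "id: " + actual.id;        // "id: undefined"
const mangled = actual + "!";             // "7!" — the list became a string
```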
I guess that, often, people just JSON.parse and think that's enough 'cause they already typed their stuff. But that's not really a problem of TS or JS.
Just because you can write horrible things in a language doesn't mean it's bad. Just like being able to drive your car into a wall doesn't mean your car sucks.
As a C++ programmer from before it was standardised, I would happily say "yes". Unfortunately it has some properties which make it too useful not to use in some situations, and the default shared library linkage on most platforms only works for C or C++ style languages.
Inter-linkage of C++ programs tends to be done over "extern C" ABIs, or "object broker" systems (COM, CORBA etc).
Possibly the next closest system is C#'s MSIL, which is almost a first-class platform citizen on Windows, and allows you to interlink assemblies produced in different languages (F#, "managed" C++, VB). Even then it's capable of alarming confusion depending on whether you've used a "framework" or "core" version.
Using anything other than C makes one's code unusable as a general purpose library due to the complexity of the binary interfaces. Writing C++ ensures the code won't be touched by anything but more C++ code. A Python script can load a C library shared object, resolve a symbol to a function pointer and call it. I don't think I'll ever see Python load a C++ library shared object, obtain a pointer to a C++ object and call methods on it using the C++ ABI and handle the C++ exceptions it may throw.
The only way to interoperate is to sacrifice all the benefits of C++ and adopt a C interface. Rust has this feature as well. I like how the creator of Zig put it: if it doesn't speak the C ABI, someone's gonna rewrite it in C.
In practice the C++ ABI issues only arise on Microsoft Windows, from what I can tell. Everything else seems to have long ago standardized on the Itanium C++ ABI, with one C++ standard library for everything in the system. Upgrades do happen, but they are major system updates where you are required to rebuild everything (generally by installing packages) so it all works. It is possible to violate this, either with a different ABI or a different incompatible library, but in practice nobody does.
Microsoft has their own ABI (which is annoying, but otherwise there is nothing wrong with it), but they made the "mistake" of tying it and the standard library to the compiler, not the system, thus ensuring you will generally have more than one ABI to worry about unless you are careful. They do this because they have a rule that if it ever ran on a Microsoft OS it will still run (I think they have dropped some DOS and 16-bit Windows support, but otherwise they are famously bug-compatible even when bugs are exploited). I understand why Microsoft made these decisions, but for developers they are a problem.
Even if it is stable, it's still far too complex. The compilers generate so much machinery it's not realistic to expect interoperability with foreign code.
C++ templates allow you to do some brilliant metaprogramming. They can also be computationally expensive to compile, produce incomprehensible error messages, and are a great way of teleporting surprising behavior into your program. Especially when combined with operator overloading.
> (Hint: some of those seemingly random characters pull a double quote out of JS' leaky guts.)
Nope! It’s just JSFuck [0]
1. All objects have a `toString` method that returns a string.
2. Object properties can be accessed by subscript (`object.prop === object["prop"]`) and numeric property keys are really strings (`object[0] === object["0"]`).
3. String characters can be accessed by index (`"hello"[0] == "h"`).
4. JavaScript has string concatenation support. It occasionally will implicitly convert something to a string if it’s used like a primitive value.
The script you posted just abused those facts to generate the string “Hello World”. It does not “pull a double quote out of JS’ leaky guts”.
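Each of those facts can be checked directly:

```javascript
// The four building blocks above, demonstrated one by one.
const obj = { 0: "zero" };
const fact1 = obj.toString();      // objects stringify: "[object Object]"
const fact2 = obj[0] === obj["0"]; // true: numeric keys are really strings
const fact3 = "hello"[0];          // "h": characters by index
const fact4 = [] + "JS";           // "JS": [] implicitly converts to ""
```

JSFuck composes exactly these mechanics (plus `!`, `+` and `()`) to spell out arbitrary programs; nothing undefined or leaky is involved.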
Is it weird? Sure. But using strict types prevents this type of stupid stuff from occurring. Do you really think TypeScript doesn’t try to prevent you from using an object as a string? Of course it does.
If you use strict types internally and validate at the edges then you really shouldn’t have a problem.
There's no double quote involved; you saying that implied you didn't know how the statement worked (regardless of whether you actually understood it or not), so their reply was valid. I feel like this whole avenue doesn't fit your overall point. You can certainly create a string in C without using quotes, and you can also write heavily obfuscated code that would be difficult to understand and look very strange.
I'd say for learning programming (the basics: variables, functions, if-statements and loops) JavaScript is better.
Why?
1. You can visualize your code more easily.
2. You do get introduced to types, but don't have to worry too much about them (I got really upset when I learned that in Java you could only make arrays of one type, as I didn't see the point; now I do, but back then I didn't).
3. Error handling is easier.
4. No memory management; that's not something you want to deal with as a beginner (to someone just starting to learn programming, memory management is nonsense; later on it's a necessity for performance).
5. A nicer debugging experience with Chrome dev tools.
So yea, for learning programming JS > C or C++ for that matter.
Disclaimer: my first programming language was Java. In light of that, I'd rather have chosen C (that hello world example tripped me up for 8 weeks! After 8 weeks, I still didn't know what command-line arguments were, as they taught us to use Eclipse... sigh).
Though, for learning how to hack: <3 C, <3 assembly, <3 opcodes. For learning how to hack, I love C. They made everything easy; there are leaks everywhere! I wonder how that will go a hundred years from now, when things are presumably a lot more tight and secure. Entry-level hacking seems to become rarer every day (aka a simple stack smash and voila! You're in!), so I wonder what difficult antics you'd need a hundred years from now. But for now, we still have C.
Disclaimer: I really enjoyed C when learning how to hack, I might sound sarcastic, I'm not.
Oh boy, I'm arguing over the Internet. Sorry for that. You do bring good points and I agree upfront that they are indeed JS features that either suck or are not very pedagogical.
1. You don't need to learn about the event loop straight away (or asynchronous programming) by the time you get to that concept, you might as well go to another language that's easier for that. But I do concede that once you get to async programming: JS is not the best language to learn from.
With that said for people who got past the basics, here's an intuitive video about the event loop [1] (note: if you're past the basics, don't watch it while you're still doing the basics xD).
2. Basic objects are easily learnable with JS, since you can type them in a literal sense, whereas otherwise you need to instantiate them with a constructor (e.g. Java) or you don't really learn about objects (e.g. C, though I've heard that it does have objects, right? Despite it being an obscure feature. Anyways, I'm not sure if you should be happy to learn about objects; learning about dictionaries or hashing is cool though). More importantly, when you start out learning programming, primitives + some HTML + simple objects are all you need to know.
3. Yes, try/catch is useless. I wouldn't call that basic though, but I'm already noticing we're differing on definitions (note: my definition was relatively clear: variables, if-statements, for loops and functions). And in coding bootcamps, the concept came long after I was taught the basics of programming (e.g. it came in week 5 because of NodeJS). I remember Python being better for this.
Errors in try/catch are as infuriating as segfaults or other memory leaks / coredump issues, they simply seem to happen in C a lot more when you're just starting out with programming (sorry if I mangle terms, C has been a while for me, and I like C, just not as a starter language, just as I used to like Charizard but not as a starter pokémon :D). Later on, it's a different story and I have suffered enough pain from it.
Note: I'm not saying JS is the best language to learn from in the world. I think Python is better suited for that, or in some cases Scratch, or in other cases C# + Unity3D.
I'm simply saying it's better than C (in terms of learning programming basics).
First, I didn't make it clear that I'm not advocating for introducing types to beginners. I advocate against JavaScript in particular for beginners.
Re: 1, you absolutely have to introduce it as soon as you move beyond the most trivial syntax example. Node.js, browser, whatever you do, the most trivial thing that actually runs somewhere _will_ have callbacks that are run in the event loop queue. Not run in line order, which is what beginners learn first. Now you have to go down a rabbit hole about the event loop: what and how and when functions get put on the queue, etc. Not ideal IMO, since it's pretty specific to JS.
2. I buy that it's better than nasty Java, but any weakly typed language applies.
So yea, Python is probably the best in terms of easiness and less paradigm-specific cruft.
I personally found JS much easier and accessible than Python, but YMMV.
I tried a number of different languages when I first started seriously pursuing a job in dev - C++, PHP, Ruby, Python, and eventually I settled on JS. JS let me quickly get something working while being able to understand what was going on. The other platforms had various barriers that made it difficult to focus on what I wanted to do because I had to know a ton of other things first, or to accept not knowing how something functioned (especially true with Rails).
The only real gotcha I struggled with in my first year as a professional dev was reference variables, but that is a gotcha you could run into in most standard languages; it's not unique to JavaScript.
Also, in the only study really done about programming language types, the initial overhead of learning types is outweighed by the value types bring almost as soon as you start.
Is it just me or is teaching Python way more popular than it should be? Just because a lot of us learned on a BASIC dialect doesn't mean imperative programming 'makes sense' for teaching. Related to the article, Elm seems a fit language for on-boarding new programmers given its training-wheels type system, friendly error messages, and the fact that its target is visual, so newcomers can see the output of what they're building. If not, the LISP family has a pretty good story here too for simplicity; it's no wonder Scheme & Racket are also popular languages to begin with.
> Hint: some of those seemingly random characters pull a double quote out of JS' leaky guts.
If one is going to critique a subject, the least one could do is learn the most basic elements of said subject.
There are no "double-quotes" pulled from anywhere, and there is nothing "leaky" nor buggy about the behaviour. Sure, many people are not fans of type coercion but it's not in any way unique to javascript.
> At least with C, if I try and treat something as a size_t it will try and behave like a size_t. It might not actually be the right type and cause problems that way - but the behaviour is within the expected bounds.
Usually the C behaviour is undefined, at which point it can do absolutely anything. If it just gives you an arbitrary value and chugs along (as many C implementations will do), that's harder to understand and diagnose. If you run a big calculation in Javascript and it comes out as "true5", it's pretty clear what's gone wrong (and pretty easy to bisect the calculation to find where the mistake is). If you make the same code error in C you'll get an answer like 823491 and might not even realise that it's incorrect.
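A sketch of that difference on the JS side (`addTax` is a made-up helper): the wrong result is visibly a string, so the mistake is easy to spot and bisect.

```javascript
// A string leaking into an arithmetic path produces a value that is
// obviously wrong, because * coerces to a number but + concatenates.
function addTax(price, rate) {
  return price + price * rate;
}

addTax(100, 0.05);   // 105: the intended arithmetic
addTax("100", 0.05); // "1005": "100" * 0.05 is 5, then "100" + 5 concatenates
```

The analogous mistake in C tends to yield a plausible-looking integer that sails through unnoticed.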
> If you make the same code error in C you'll get an answer like 823491 and might not even realise that it's incorrect.
Or, worse, your code will give you "true5", but only on Tuesdays at 2 AM when your server load is low, and you'll look at the code and wonder how it's possible something like this could ever happen…
The C behavior in this case isn't undefined, unless you go out of your way to violate aliasing rules or something -- but that's a far reach from what parent was saying.
> At least with C, if I try and treat something as a size_t it will try and behave like a size_t. It might not actually be the right type and cause problems that way - but the behaviour is within the expected bounds.
Yeah, until the compiler figures out it's really not a size_t and that it's free to do whatever it likes to your operation.
> Yeah, until the compiler figures out it's really not a size_t and that it's free to do whatever it likes to your operation.
That's not true. Being gracious, parent was saying that if you treat another scalar type as a size_t (or vice versa) something sane happens. The standard guarantees that size_t is an unsigned integer type. If you use any other scalar type where a size_t is expected, it will be converted to a size_t correctly, even if it has less precision than a size_t. If you use a size_t where a less precise scalar type is expected, the behavior is perfectly defined as long as the value stored by the size_t can be stored in the less precise type. The result is implementation defined, or an implementation defined signal is raised, if it cannot be represented in the less precise type.
Either way, the compiler is never free to do whatever it likes in this case. In most cases, doing this is perfectly safe. Compilers are free to warn (and they do!) users when this happens. For instance, GCC and Clang both will warn on this behavior if you supply `-Wconversion`. But you never will see totally unreasonable code generated by compilers when this happens.
I went into some detail about the type system of JS and why I believe that justifies that JS is worse than C, the very same type system that is discussed in the majority of the article.
To add insult to injury, compilation time is longer in JavaScript than in C, with all the things that Webpack (or one of its alternatives) has to do.
Next week: "Haskell is C", because you can write a program that compiles and doesn't do what you want.
Nobody ever claimed that flow/typescript magically makes your code bug free.
It will help with stupid mistakes. It will help with invariants at the type level, but it won't guarantee anything because there are ways to suppress it. It will not help with value/runtime invariants because it doesn't have any dependent-type constructs (you can use things like flow-runtime to get some extra help at runtime if needed, but it will likely be slow if you use it everywhere; maybe cherry-picked places are a good compromise, depending on what you're trying to solve).
Things like immutable collections and/or better architecture should help.
They might not claim that, but the article's title and gist still amount to "JavaScript is C because of this one reason". This kind of comparison is just not useful.
I think the point the article makes is good, I just don't like how the comparison was made. I don't think the article would be of less value if the author dropped the "JavaScript is C" peppered all around it.
It's a fair point. That's the trick with a blog post where you're basically "thinking out loud." This is just the way that the little switch flipped in my head as I was thinking through things that day: The experience felt very similar.
There is. But in this particular case I stand by my point: a cursory glance will show that JavaScript is nothing like C. Starting an article with such a premise and re-stating it repeatedly smells like clickbait to me. It also devalues the point made by the article, which is actually good.
It is still considered incorrect by many. Some but not all dictionaries and style guides have adopted “they” and “them” as singular gender-ambiguous pronouns.
Then use "they" or if you're really that concerned with making a mistake, spend 2 seconds to google them, and you will see that this Chris is a "He" from the first hit on a google of "Chris Krycho".
The difference is in what a developer can express. As has often been noted: you can write FORTRAN in any language. But that doesn't mean I will opt for Fortran (except perhaps for some specific numeric computing cases! Then I might!) when other tools allow me to capture more invariants in my program.
The fact that the compiler doesn't make me is orthogonal to the point I was getting at in this article. It allows me to capture constraints and invariants that matter to me, and it enforces certain constraints and invariants on all developers. That combo can be—and in my experience working a lot in C and JS as well as in richly typed modern languages like TS, Rust, Elm, etc.—very, very powerful and rewarding.
(Also: TS gets surprisingly far on dynamic runtime inspection. While I'd love to see the ideas in Idris become far more mainstream, in the meantime, the stuff TS is doing with type narrowing, mapped types, conditional types, and so on is nothing short of astounding. You can do some pretty wild stuff that isn't quite dependent types but gets awfully close. But caveat emptor: compile times may get you.)
> Earlier this week, I was working on a problem in the Ember app where I spend most of my day job, and realized: JavaScript is the same as C.
No sir, you have realized that Ember is promoting JS as C.
Throughout the whole article the author is trying to shift responsibility to the compiler, or he is seeking a solution that is closer to a DSL.
If compiler or typechecker discipline is that important, why not explore ESLint plugins and TypeScript's advanced types, and use them appropriately, before jumping directly to a new programming language with a relatively limited community?
I would guess that whatever you do on top of TypeScript, it's still a superset of JavaScript, which will always have holes in it. It's that 100% guarantee that the author is looking for, and that Elm provides.
The difference between 99% and 100% is enormous in my experience. A part of your brain that was constantly evaluating the situation can now be totally at rest, spending that energy on other things.
Note that I have not used Elm myself, but looking at it I get the point. I have had the 99%-vs-100% experience in other things.
Yes, precisely so. Type definitions are only as good as the programmers who wrote them, and since humans are imperfect they make mistakes. After you have defined your types wrong, there are devious bugs in your code that you have no way of knowing about (until you step on that mine and the program hopefully crashes). Although every language is by design mostly bug-free, in JS/TS there is a much greater human factor (similar to C/C++) that just adds a large error margin to the correctness of the application. Especially when there are very loose runtime guarantees for type safety, there is always a chance for something ugly to happen.
This is especially true when it comes to working with JS libraries. Since no two libraries or JS developers have the same set of ideas for how the language's components work and fit together - how objects are intended to be used, standard ordering of arguments, whether to use a functional or an OO style, etc. - your application will be torn between the dozens of wildly different libraries you end up selecting, or worse, by those you have no choice but to select. Every crack in JS will be pulled wide open by that tension, and even TS' attempt at some normalization will not fully erase that.
In general, there is a sense among Python coders of at least concrete schools of thought for these kinds of abstractions. Java certainly is looser, since it never clung tightly enough to 'pure OO' principles. Rust is young, and - I expect - is still developing its standard idioms and common abstractions, though it has certainly settled on a broad swathe of it already, and holds strong opinions on what good Rust code looks like.
If I asked something similar of Javascript, you would see very little commonality between people. It's designed as a Self-y language with functional leanings, and gives far too unopinionated and broad a leeway to nudge its developers in any one direction. Furthermore, early browser APIs for JavaScript - especially when it came to features like window and cookie interaction - pulled the rug out from under every reasonable abstraction the language could have been geared towards, choosing unreasonable over reasonable abstractions.
This leaves the language where it is now - incohesive, incoherent, unsound when it comes to stitching multiple parts together, because each developer behind each of those components has a wildly different perception of how the JavaScript world 'ought to work'.
Among others, Haskell has bottom, unsafeCoerce, unsafePerformIO, and Data.Dynamic. Even in the dependently typed Agda, you have postulates and pragmas like NON_TERMINATING and NO_POSITIVITY_CHECK. Is there any language out there with no unsafe hatches? Even in the most rigorous languages I'm aware of, I don't think 100% guarantees exist.
Some languages get a lot closer than others, though -- I think Elm, Rust, Haskell, and so on are still a big improvement over the status quo of languages. I'd rather get 90% of the way there than 10%.
Well… I have. I helped bootstrap the TypeScript community in Ember back in 2017 (it existed before that, but I pushed hard and slowly pulled together a small team that works on it). I've written ridiculously advanced types with TS to solve problems (and experienced the compile time pain that can come with that). I'm deeply familiar with TS's strengths and weaknesses. I love TS. I'm working on paving the path to land it in one of the largest JS apps in the world. So when I say that Rust and Elm do things TS doesn't and can't, I'm speaking from experience. Lots of it! And that's not a dig on JS (or C), which I do actually like. It's just that I see how things can be better, and look forward to a world where we have yet nicer things.
I'm in the camp that sadly does not have easy options to choose a different language at work (I mainly work in C#), but I'm still very interested in learning new programming paradigms and techniques.
From this view, and from regularly having similar experiences with code I have to dig up for whatever reason, I've come to wonder whether the argumentation about languages here doesn't ignore a very important aspect: the blog post (like many before it) presents the process of understanding your code as a tooling problem. I'm slowly coming to think that the real problem is educating people to write understandable code, and that isn't something those tools really solve.
I've wrestled with some horrible in house Java application multiple coworkers produced for quite some time now. It has mutable state everywhere and is full of hacks, bugs and race conditions, although the actual problem really isn't that complex. (It is so bad that it looks like management even might get past the sunk cost fallacy regarding a rewrite directly after it has been "finished"! I've never experienced that one so far.)
After thinking it through for quite some time, I'm convinced that there's a very simple (and rather elegant) solution to this, but if you want to tackle such a project, you should at least know about immutable data structures, record types, algebraic datatypes, and the basic concepts of idempotency and CSP. But every time I try to nudge someone in the dev team in the "right" direction, I feel considerable friction in this regard.
So, more and more, I come to think of the biggest worth of languages like Rust, Elm, Clojure, Erlang and so on being that they force programmers to think in different ways (and grow in the process) -- something that gets handwaved away in other circumstances with "well, it has worked for me so far…".
I think most people learn those things by experience. Let them maintain their code for a while, and they will eventually discover why things like "mutable state everywhere" and "race conditions" are bad. It's only after they have felt the pain they are willing to buy the solution.
I would have followed the same reasoning a couple of years ago, but have since had the "pleasure" of working with people who have been producing this kind of code for multiple decades. At this point, I've grown more amazed at how well humans can ignore self-inflicted pain, and more cynical when it comes to believing in their adaptability.
I like to use something often called "regression tests": whenever I find a bug, before fixing it, I write a test that would detect that bug. I confirm that the test catches the bug, then write the fix, and re-run the test to confirm that it's fixed. All tests are then run automatically before deployment.
Also, if you already know those 2% of the code that can go wrong, write tests for those too! The only disadvantage with tests is that writing the tests can be more difficult than writing the actual code.
Another strategy I use is defensive programming: "throw an error if the invariant wasn't properly maintained", used together with code coverage and testing. So when the tests run the code (or during manual testing), those defenses would trigger. I leave the defenses on even in production, and log all errors (if on a server) or generate a bug report that users can submit with the click of a button (if in a user interface). The problem with defensive programming is that even if you have a call stack and state, it can still be hard to figure out the steps that led up to the bad state. So you need some level of logging.
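A minimal sketch of that defensive style, with a hypothetical `withdraw` function as the example:

```javascript
// Throw the moment an invariant breaks, so the failure points at the
// cause rather than surfacing ten calls later as a mangled value.
function invariant(condition, message) {
  if (!condition) throw new Error("invariant violated: " + message);
}

function withdraw(account, amount) {
  invariant(Number.isFinite(amount) && amount > 0, "amount must be positive");
  invariant(account.balance >= amount, "insufficient funds");
  account.balance -= amount;
  invariant(account.balance >= 0, "balance went negative");
  return account.balance;
}
```

In production the thrown errors are what get logged or bundled into the user-submitted bug report.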
I do not consider testing (TDD) or defensive programming a band-aid for bad language design, they catch many sorts of issues, not only type errors. Even if your code is very beautiful, you still want to verify that it works. Although demoing the app, or putting the app in-front of thousands of users, is probably the most effective way of finding issues.
I used to make this argument, but I think I was wrong. I sort of hate working in javascript projects I’ve written with a million tests, because if I want to refactor something, it takes 3x as much work because of all the tests I need to rewrite.
Lately I’ve been really enjoying typescript and rust. The ability to write code and if it compiles run it and have it be correct is like a cool breeze on a summer day. In typescript I need to write fewer tests (because the compiler catches bugs that I would need a unit test for in JS). But also, when I refactor something the compiler gives me a list of all the places in my code which I need to update. In JS I have learned to worry about lingering bugs that I just forgot to write unit tests for or something. With a typed language, that fear is (mostly) gone. It’s a lovely quality of life improvement, and it noticeably improves my velocity.
Unit tests and type checking solve very different problems. In practice there is a lot of overlap, but there are gaps in each that the other covers, so you need both if you want the best quality. (If you want the best quality you will also look at formal proofs, which are a superset of type checking and can catch even more problems that the above two won't; but even that will not catch everything.)
Unit tests prove that everything you thought of works correctly. Formal proofs show that some areas where you didn't think of something still work, but they can fail because the assumptions are wrong.
I agree that you need both. But in my experience you need far fewer unit tests in a typed language to be reasonably confident of correctness. Like, maybe 1/3 of the number of tests. The reason you need so many tests in JS is that it's just so easy to accidentally refer to a variable that doesn't exist anymore, or something like that. And the only real way to be safe from that sort of mistake is to go overboard with unit tests and aim for 100% coverage, which destroys velocity.
As you say, unit tests and type systems are good for different things. Unit tests can be used as a poor man's type system, but that seems like an amateurish deal now that we have TypeScript & friends. Well, at least for code you want to maintain long term.
I do not like traditional unit testing, as the tests are glued to the code; then I would rather write assertions in-line together with the function! Instead I take a black-box approach: for example, an HTTP server is tested by making actual HTTP requests to it. That way you do not have to rewrite the tests when you change the code; you just add additional tests.
That is another useful way of testing. The problem with it is eventually you write some low level code and realize that sometime in the future someone is likely to use your new code in a way that will expose a bug - but right now something higher level prevents that bug, and you have no idea how to prevent it with assertions inline and the outside black box tests can't cover this case yet.
For large complex systems all levels of testing are needed to get the best quality.
I use defensive programming, as in throwing errors when something is used the wrong way, assuming that the developer always runs the code at least once before shipping. But I try to avoid thinking into the future. I'm already too good at finding edge cases; if I would also think about possible issues that might arise in the future, I would get nothing done. I instead wait for it, and fix it only when it has become an issue. Especially in optimization, I often fall into thought traps like "what about when this table has millions of rows, this query would take a long time"; I have to remind myself of the problem and what the priority is right now. While there is code debt, there is also opportunity cost.
That's another checkmark for keeping functions pure so you can easily do substitution. Problems will arise if your language doesn't have managed IO because you can never be sure.
If you're rewriting tests, you're not refactoring; you're just rewriting.
If tests need to be rewritten to support a refactor, those aren't tests; at best they're testing the implementation rather than the requirements, and at worst they're just a second implementation.
Defensive programming leads to hell. The next person looking at the code won't be able to tell the actual business logic from the defensive checks, or what may happen from what should never happen. I've been badly burned by this, especially while trying to reconstruct flows from badly documented code.
Rust is also C, if the fact that you can't encode all possible invariants into the type system makes a language C. But this is a spectrum, not a dichotomy, and TypeScript falls somewhere way beyond C, but below Rust, in what you can statically encode. And Idris lies beyond Rust.
> In Rust, I can be 100% confident that I will not have memory-unsafe code. Not 98%-and-I’d-better-check-those-last-2%-really-closely. One hundred percent. That’s a game-changer.
Also achievable, and far more easily, with a tracing GC. You know all those people worrying about memory safety in Lisp decades ago? Me neither.
I agree. Memory safety is a problem solved 60 years ago and adopted by most mainstream languages, or rather most languages period. Rust doesn't solve memory safety, it just lets you do it in a more performant way.
But the compiler does quite a bit more than just allow for performant memory safety. It enforces rules around mutability and handling errors which are usually only seen in functional languages.
None of these things is special to Rust. The special thing is that Rust can do them with fewer computer resources while asking more of the programmer.
And this is a general truth. Type systems add friction and force programmers to deal with details from the very beginning. In the long term this might often be a good thing, but it certainly isn't ergonomic.
>Type systems add friction and force programmers to deal with details from the very beginning. In the long term this might often be a good thing, but it certainly isn't ergonomic.
In Rust, I agree, but I find coding in C#/F#, which are decently typed but have a GC, much faster for me than using e.g. Python. TypeScript is much easier than JavaScript.
Lack of types usually leads to worse auto-completion, to errors found only in edge cases I didn't try, and to a huge slowdown if I need to add a feature after I've forgotten about the code, since types provide a lot of documentation that you know is not stale.
I wouldn't use python at all if it didn't have pandas.
Auto-completion for some dynamic languages has gotten much better with the right tools/editors/plugins.
I get where you are coming from but I would argue that there is a bit of a perception issue here. You can surely write fast in typed languages but you also have to write more and think more.
But I fully agree with the benefits. Errors, changing/refactoring and documentation.
I wonder if we can have the best of both worlds some day. Prototype dynamically and then add types and abstraction progressively in the same language and code base. In theory this is doable with Lisps, but only in a self-imposed, possibly unstructured way.
Totally agree. If I'm writing anything nontrivial then it's going to be in a statically and explicitly typed language. There can be some friction, but the additional tools and speed in refactoring make up for it, IMO.
At this point I only use Python for pandas and matplotlib, or for sending small pseudocode snippets to coworkers.
> it just lets you do it in a more performant way.
That's not necessarily true. It lets you do it without a tracing GC, which is marketing genius. Whether or not it's more performant depends. A lot.
> It enforces rules around mutability
These are needed more because Rust achieves memory safety in the absence of a tracing GC. I like its rules for this; I'm just saying that they're a necessary consequence.
> and handling errors
I prefer exceptions myself. But one can write library types that force the user to handle error cases in other languages. It's not as nice as when sum type support is baked in, of course.
Not all invariants relate to allocation. Rust protects you from a whole class of shared-mutable-state issues that JavaScript and Lisp are zero help with.
Hard realtime, I do agree: there you often can't afford any kind of dynamic memory allocation, and everything usually needs to be statically defined.
Embedded devices, depends on the use case, PTC, Aicas, MicroEJ, Astrobe have plenty of happy customers.
The same applies to OSDEV; Midori did not have any big problems powering a couple of Microsoft systems before the powers that be decided it wasn't going to be a Windows replacement.
GC in osdev has quite a large number of problems, performance being one of them; another is that some resources simply can't be handled cleanly by a language that can't opt out of the GC.
You somewhat need the option to at least manage some memory very, very manually. And of course, the GC needs to work without allocating memory (i.e., a true in-place GC), otherwise you're going to blow your kneecaps off fairly spectacularly (this goes double when you need to initialize memory management, which can't rely on a memory allocator being present or free memory being available).
> Midori did not have any big problems powering a couple of Microsoft systems.
I will believe that when I see the performance under load with my own eyes, and when I can convince myself that the code is not more cumbersome than manual memory management would be.
These situations exist and in those, something like Rust is a very good idea. I maintain that these situations are far rarer than most people imagine (and I've had to deal with more than a few).
This is not memory safety per se, but Rust goes further. You can't have two handles that can modify the same object at the same time, for instance, as in:
l1 = [1, 2, 3];
l2 = l1;
l2.push(0); // Oops, I also modified l1, was it expected?
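For comparison, here is a sketch of how the same situation plays out in Rust. The commented-out line is the one the borrow checker rejects; the variable names mirror the snippet above.

```rust
// Aliasing + mutation, translated to Rust.
fn main() {
    let l1 = vec![1, 2, 3];
    let mut l2 = l1; // ownership MOVES: l1 can no longer be used
    l2.push(0);

    // println!("{:?}", l1); // compile error: use of moved value `l1`

    // To get two independent lists, you must ask for a copy explicitly:
    let a = vec![1, 2, 3];
    let mut b = a.clone();
    b.push(0);
    assert_eq!(a, vec![1, 2, 3]);    // `a` is untouched
    assert_eq!(b, vec![1, 2, 3, 0]); // only `b` changed
}
```

Either the second handle takes ownership (and the first becomes unusable), or you clone and the two lists are independent; the "oops, I also modified l1" outcome is not expressible in safe Rust.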
IMHO, Javascript is the C++ of dynamic languages. It suffers from similar early bad design decisions, which now a "committee" tries to fix by piling layer upon layer of complexity upon the broken base without ever being able to undo those early bad decisions.
Actual C is a clean and simple language both compared to Javascript and C++.
> Actual C is a clean and simple language both compared to Javascript and C++.
Why do people keep on calling C "simple"? C isn't simple, it's basic; it is absolutely not simple to use at all, and given the mess around macros and preprocessing, I wouldn't call it clean either.
I don't think the committee's options are really so limited. They could totally have defined a JavaScript 2 not backward compatible with JavaScript 1. Browsers, of course, would then have had to support both, but who cares? Especially if there were a good FFI between the two, no one would have been inconvenienced (except browser developers).
Everyone loves a good Javascript punching bag but I would argue that the language/ecosystem has the most inertia and positive change within it. It's not perfect and it's exhausting at times, but I'm confident that the language will continue to move towards something more robust and ergonomic.
One more thing to note. From the post:
> In a C application, try as hard as I may, at the end of the day I am always on my own, making sure the invariants I need for memory safety hold. In Rust, I can be 100% confident that I will not have memory-unsafe code. Not 98%-and-I’d-better-check-those-last-2%-really-closely. One hundred percent. That’s a game-changer.
I think if you're always leaning on a compiler for correctness, maybe your problem doesn't reside in the static vs dynamic debate.
In the computer world, the worst thing often ends up winning.
The reason? Market forces are more powerful than technical features.
This means that if you rush a bad implementation out the door and it gets adopted by the masses, the network effects will ensure your bad solution wins.
Meanwhile, competitors that are working on a high quality competing language or product will be later to market and miss the train.
It's pretty obvious that people are reacting to the headline without having actually internalized the content of the post. Fair, I suppose: it was a bit wild of a title, and the post was pretty off the cuff. This is the bit I really wish people would take away from it, though:
> Neither of those is a guarantee I won’t have bugs. (A compiler that could guarantee that would have to be sentient and far smarter than any human!) Neither of them means I can’t intentionally do stupid things that violate invariants in ways that get the program into broken states from the user’s point of view. But both of them give me the tools and the confidence that I can absolutely guarantee that certain, very important kinds of invariants hold. We’re not looking for an absence of all bugs or a system which can prevent us from making any kind of mistake. We’re looking to be able to spend our times on the things that matter, not on minutiae the computer can check for us.
Indeed! Thus the Assumed Audience header at the top of the post! It's unreasonable to expect anyone to qualify everything they say in every blog post they write, I think. ;) See also: https://v4.chriskrycho.com/2018/assumed-audiences.html
The post is not _about Rust_. It _mentions_ Rust and it also _mentions_ Elm, as examples of higher-level, safer languages and uses them to demonstrate some of the benefits this category of languages offers.
I thought this was going to compare C and JavaScript as being imperfect languages that are ubiquitous as the Lingua Franca of their respective platforms, Unix and the web. I’d be interested in seeing that explored.
I also have some unpublished notes on the value of taking the “make JS/C better via tools etc.” path even when the other is available and why. It’ll likely generate less interest than this one—less provocative by a lot!—but it’s actually what I’m doing in my day job right now!
I would love to have an example of what exactly he is writing about. I spend most of my time at the moment rewriting some code from JS to Rust, and I still fight with invariants. I would like to know how to achieve the 100% safety. I know that I get most of the null pointer exceptions etc. handled with the Option and Result types. But if you need to deal with string values etc., the compiler can't really help out when the value is not the one you're expecting. And I also moved most of these to enums.
That is how I would handle that - you lex or parse your strings into an Option<Token> (or Result, etc.), so that all logic operating on the Token enum doesn't need to worry about whether it's a correct value - it's statically enforced.
I personally find the most value out of sum types (enums with data) and the ability to use Option/Result with the try operator '?' for this kind of stuff, where you can float errors up the call chain
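The parse-into-an-enum approach described above can be sketched like this. `Token`, `first_token`, and the specific variants are illustrative names, not from the thread; the point is that string validation happens exactly once, at the boundary, and `?` floats the error up the call chain.

```rust
use std::str::FromStr;

// Once a string has been parsed into a Token, downstream code can
// rely on it being valid: the type system enforces it.
#[derive(Debug, PartialEq)]
enum Token {
    Start,
    Stop,
    Number(i64),
}

impl FromStr for Token {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "start" => Ok(Token::Start),
            "stop" => Ok(Token::Stop),
            other => other
                .parse::<i64>()
                .map(Token::Number)
                .map_err(|_| format!("unknown token: {}", other)),
        }
    }
}

// `?` propagates any parse error to the caller, so this function's
// body never handles an invalid string past this point.
fn first_token(input: &str) -> Result<Token, String> {
    let raw = input.split_whitespace().next().ok_or("empty input")?;
    let token = raw.parse::<Token>()?;
    Ok(token)
}

fn main() {
    assert_eq!(first_token("start now"), Ok(Token::Start));
    assert_eq!(first_token("42"), Ok(Token::Number(42)));
    assert!(first_token("bogus").is_err());
}
```

This also addresses the follow-up about configuration values: data that arrives at runtime (URLs, etc.) gets validated once at load time, and everything after that works with the already-checked type.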
OK, I'm kind of doing that already. The main issue I have is that some data comes from configuration and I can't create static values for those (URLs, for example). Thanks
The author means the title in a specific way that they explain in the article, so it's somewhat disappointing to see the comments taking it as either just C-bashing or saying "JS is good for some things" as if the article said it isn't.
Some people here might remember the mental shift from programming in old school BASIC to doing it in C or another procedural language; having functions as a tool gives you certain guarantees and allows you to program on a different mental landscape. The author is talking about a similar mental shift here, and saying that JS and C are similar to him in that they require him to program in the older landscape. This kind of thing is something we as programmers experience often at many levels, and something we should be able to empathise with - whether we agree or vehemently disagree with it, we can at least start by understanding the sense in which the author is writing the article.
This could really use an example of an invariant that couldn't be elegantly enforced with property descriptors[0] and decorators[1]. Without such an example it's just flamebait.
This reminds me of my early C days dilemma: "Check for a disallowed NULL pointer argument on function entry or just trust that it will have a valid value?" And a mixed feeling that would settle after putting an assert there.
Eventually, one develops a "pragmatic trust", appropriate to the level of interactions in the code.
After all a state change is a kind of input resulting from an interaction. Some interactions are more trusted than others.
Asserting reality after every handshake is not going to make one feel safer. Operating in a trusted environment, and understanding and upholding its premises, does add one more member to enforce that "safety in [sane] numbers".
" If you have only ever programmed in C/C++/Java and Lisp and scripting languages, you have been sitting in a corner your whole life. Perl, Python, Ruby, PHP, Tcl and Lisp are all the same language."
~ Frank Atanassow, "Some words of advice on language design"
Fuller quote:
> Go study Scheme and Prolog and ML and Haskell and Charity and Lucid Synchrone and OBJ and Erlang and Smalltalk. Look at Epigram or Coq or HOL or LEGO or Nuprl. Aside from Java, these are the important ones. If you are familiar with all of these, then you are in a decent position. If you have only ever programmed in C/C++/Java and Lisp and scripting languages, you have been sitting in a corner your whole life. Perl, Python, Ruby, PHP, Tcl and Lisp are all the same language. (Scheme itself is only interesting for hygienic macros and continuations.)
If you think Perl, Python, Ruby, PHP, Tcl and Lisp are all the same language, you've been sitting in another corner all your life, one possibly not as well illuminated as the C/C++/Java corner.
Well, I don't think he's being literal. He's saying (I think) that all of {C, C++, Java, Perl, Python, Ruby, PHP, Tcl, Lisp, Go, etc...} are of an ilk when considered against the wider realms of programming languages. In other words, they are each more similar to each other than any of them are to, say, Prolog.
I think it's fair to say that, "Javascript's packaging is as bad as in C".
While ES6 modules make minor improvements in syntax, the entire npm ecosystem relies on CommonJS resolution, which isn't that far, under the hood, from including header files wholesale.
Then you have the build system, where you have to know your way around Babel, webpack, and a whole suite of arcane tools, each a very fragile equivalent of makefiles, with similar nonsense errors when builds fail.
Javascript isn't C, but the ecosystem is a nightmare compared to modern languages with modern language features such as first class package management.
2) There is nothing wrong with C. It was DESIGNED to service specific needs and does that just fine.
3) I think this obsession with "safe languages" is being taken too far. Safety does not come free; we pay for it, mostly in performance, which is a valuable "feature" for many types of software. Not gobbling all the computer's memory running "Hello world" helps too. Besides, automatic memory management and "everything is immutable" do not really make software safe either.
3) It's not "safe languages", it's "memory safe languages". The whole point is to make very common security vulnerabilities due to unsafe memory usage much less common.
Rust and garbage-collected languages have memory safety. C/C++ do not. That's the whole point.
The analogy doesn't make sense; they analogize C to Rust, as ECMAScript is to Elm. I am forced to conclude that these are the only four Turing-complete languages that they know.
That's an interesting fact about Elm I wasn't especially aware of. Maybe I should have a closer look at it, even though web front-end stuff is not my cup of tea.
I think this is exactly right! I didn't go back and add it to the article when it later occurred to me, because this was just a "thinking out loud" kind of post, but the exact same analogy occurred to me.
So the author can't solve a problem, goes ahead to say that javascript is c because of one situation, then promotes rust by the end. All with a clickbait title to boot.
I'd be more angry but this type of formula for an article is used so often now that it should be expected.
I can't wait until people figure out that the individuals that aren't competent enough to handle C's issues, aren't going to be competent enough to handle Rust's either.
Hopefully the Rust hype train dies soon. There is simply too much bandwagoning involved in this language to keep bias out of the equation when considering using it to solve a problem.
Have you given Rust an honest try yet? I thought it was a hype train too at one point, but I started using it, realized that the hype is well founded and never looked back.
I haven't but that wasn't my point at all when saying that. It is on a hype train, whether that hype is justified or not is not what I was trying to get at. What I was saying was that this hype and massive surge in use and publicity causes an inevitable bias which clouds judgement when deciding whether or not to use it. That crowd of programmers that seem to hop on every computer fad, the ones that have been adding hashtag reactjs to their twitter posts for the past year, they're probably going to choose rust regardless of whether or not it should actually be used. The bitter contrarian is probably deciding not to use it because he's sick of seeing it mentioned in every other tech blog post in the past year. As soon as this hype dies off, both of these groups can move on and hopefully decide whether rust is actually worth using for whatever they need done.
I would use it if it's the right tool for the job. I just want to stop seeing it mentioned 20+ times a day and snuck into blog posts that started off as critiques/comparisons of two other languages.
I don't think you understood the blog post (which is presumably my failure as an author; it was just an off-the-cuff post and I didn't edit it especially carefully). The point is not to compare JS and C (interesting though that would be) but to note ways in which both of them may be slowly superseded by languages which target the same environments they do but allow much more developer confidence along certain axes—and, in my experience, therefore also a better developer experience and higher productivity along those axes. The whole point was that Rust, Elm, etc. can and do prevent whole classes of problems that are difficult (at best) to eliminate when writing C or JS respectively—Rust and Elm (and F# and OCaml and Haskell and Idris and…) were essential to the point I was getting at.
One of the challenges of dealing with hype is that things which are legitimately really good tend to get hype, but things which get hype aren't necessarily good. In my experience, Rust is actually really good (so much so that I ran a podcast about it for a couple of years). So are lots of other things. It's not perfect, by a long shot. But while there is some of the "people hyping it because it's cool" phenomenon—that's real!—people also get excited by finding a tool they really enjoy, one that solves real problems they experienced previously.
TL;DR: some pattern in language X seems similar to language Y, therefore X ~= Y. Argue why said pattern makes it hard to program in language X or Y, then posit languages A and B as saviors.
I find it hard to be charitable to these kinds of articles. Not that I disagree with the problems expressed, or with the advantages of certain languages making certain guarantees. Everything is a tradeoff, and I think you have to be very aware, contextually, of exactly what you are leveraging in order to get a certain result. That extends beyond the product itself to things like lifetime and human factors. I'd like to see these kinds of articles take more of the form "Consider A or B given these kinds of conditions; beware, this may not be suitable for reasons H, I, J...".
I actually strongly agree with this. Notably, the post doesn't say anything about what you should choose in a given context. I'm in a context right now where I support a pure JS (not even TS yet!) app, and while a colleague and I occasionally joke about rewriting it in Elm, that's literally a non-starter in reality (and I'd never suggest it in reality!).
Most of the comments here seem not to have grasped that the basic point of the post is that advances in programming language design—here, specifically around types—can make for real differences in your confidence when shipping. And that’s valuable! It’s also not the only value… but then, I never said it was, either.
I just wish there was a decent way around Javascript, but even With TypeScript, React, Redux and Redux-Saga, basically using everything we can to give some structure to the application, it is still a clusterfuck.
I really, really enjoy the paradigm of the basic HTML application because so many problems just magically go away.
tl;dr: No code was presented, but it feels like Ember was pushing unexpected arguments into a TypeScript function, messing it up. The author then goes all in on Rust, hoping that it would catch the broken invariant automagically because it leaks no memory. Thus JS is C.
I guess I just accept es5 for the necessary evil it is.
i'm not sold on the orm approaches.
typescript, reason, svelte, etc, all bring interesting ideas to the table.
end of the day, you can probably eliminate a lot of cruft from your ui before actually being required to resort to these things. I still include mithril.min.js via a script tag and stuff like that plenty, and the world didn't stop, nor did I die writing hyperscript. or js. or html. or sql. or whatever...
of course, the ultimate loop is simply to use some barely-functioning type-inferred js to c++ driver to emscripten to webasm. that should satisfy the minimalist simple-life cravings of node devs.