No, that's definitely not the worst part. The worst part is that there is no nice way to convert the abstract prose of CSS 2.1, the part everyone cares about right before JavaScript integration, into a standards-compliant algorithm. Why? Because no such definition exists. It is left as an exercise to the reader.
WHATWG doesn't improve on this either; in fact, it leaves the processing model out entirely, whereas at least W3C's original text makes it clear that the description is descriptive, not prescriptive, and that you need to devise an algorithm that makes it work yourself.
Edit: The section describing the processing model for CSS is non-normative. The authors provide an example flow, but a normative algorithm doesn't exist for CSS.[1]
The first was merely part of a parallel compiler project and also covers table layout, whereas the second is a Racket (Scheme) program that formulates the HTML document and CSS rules as a theory for the Z3 SMT solver, which can then answer all kinds of decision problems about the layout (it can also produce a rendering).
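To make the idea concrete, here is a toy sketch (mine, not the Racket tool's code, and using brute-force search in place of a real SMT solver): instead of hand-coding a layout algorithm, you state the constraints the CSS prose implies and ask a solver for a satisfying assignment. The box sizes and the `margin: 0 auto` rule below are invented for illustration.

```python
# Toy "layout as a decision problem" sketch. A real tool would hand these
# constraints to an SMT solver like Z3; here we brute-force instead.
CONTAINER = 100  # hypothetical container width in px
WIDTH = 60       # CSS: width: 60px

def satisfies(x):
    return (
        x >= 0                            # box starts inside the container
        and x + WIDTH <= CONTAINER        # box ends inside the container
        and x == CONTAINER - (x + WIDTH)  # margin: 0 auto => equal margins
    )

# Enumerate every candidate x-offset and keep the ones meeting all constraints.
solutions = [x for x in range(CONTAINER + 1) if satisfies(x)]
print(solutions)  # [20]
```

The point is the shape of the approach: the spec's prose becomes a set of constraints, and "where does this box go?" becomes satisfiability, which is exactly what makes other decision problems (overlap? overflow?) expressible in the same framework.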
Not sure that's very helpful; it would be cool if W3C could invest some time into better specs (not just prose).
Oh almost certainly not. That technique was obsolete ten years ago. I was just showing my battle scars from having had to do a ground-up browser in deference to the magnitude of the accomplishment.
Writing a browser was no walk in the park like 6 standards revs ago: I bet it’s a fucking nightmare now.
LAME is an LGPL-licensed MP3 encoder that you always had to download separately from whatever software used it, because of MP3 patent fears. Even if a patent doesn't create direct financial liability for the creators, it can have a chilling effect on adoption.
The worst part is the fucking patent lawyers: https://pdfpiw.uspto.gov/.piw?docid=09576068