Maybe trivial was the wrong word, but your second paragraph is exactly why I reject the approach of splitting all nullable values out into their own relations. Yes, from a data-modeling theory perspective it works, but from a pragmatic perspective, having to join everything back together again is worse than just thinking through the behavior of null in that situation.
But even then things are awful. What do I do when I want to include the value from the nullable table in my result, but I want all of my records, not just the ones with a non-null value? So even though you've purged null from your data model, you're still bringing it back with an outer join.
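A minimal sketch of that point, using SQLite from Python (the table and column names here are my own invention): ages get split into their own NOT NULL table, and the outer join needed to list all users promptly reintroduces the missing value as NULL (`None` in Python).

```python
import sqlite3

# Hypothetical schema: nullable ages split into their own table so that
# neither table contains a NULL.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY);
    CREATE TABLE user_ages (id INTEGER PRIMARY KEY REFERENCES users(id),
                            age INTEGER NOT NULL);
    INSERT INTO users VALUES (1), (2);
    INSERT INTO user_ages VALUES (1, 42);
""")

# To list ALL users with their age (when known), the outer join
# reintroduces the very NULL the split was meant to eliminate.
rows = con.execute("""
    SELECT u.id, a.age
    FROM users u LEFT OUTER JOIN user_ages a ON u.id = a.id
    ORDER BY u.id
""").fetchall()
print(rows)  # → [(1, 42), (2, None)]
```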
It's the implicit semantics of the tables. We start by modelling the world itself: "user with id ID is AGE years old and lives at ADDRESS", and end up modelling our particular instance of knowledge about the world: "user with id ID exists", "user with id ID is known to be AGE years old", "user with id ID is known to live at ADDRESS"; and with the "closed world principle" you can explore more of the limits of that knowledge: "which users do we know to exist and know the addresses of, but not the ages of?". But the query "what is the id of the oldest user?" is simply unanswerable unless you know all of their ages; the best you can generally do is answer "what is the id of the oldest of the users whose ages we know?".
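Worth noting that SQL quietly answers the weaker question for you. In a sketch like the one below (schema and data are mine), `ORDER BY age DESC` and aggregates like `MAX(age)` skip or demote NULLs, so "what is the id of the oldest user?" silently becomes "oldest of the users whose age we know":

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, age INTEGER)")
# User 2's age is unknown (NULL); they might well be the oldest.
con.executemany("INSERT INTO users VALUES (?, ?)",
                [(1, 30), (2, None), (3, 25)])

# SQLite treats NULL as smaller than every value, so DESC puts the
# unknown-age user last and we confidently (and wrongly?) get user 1.
oldest = con.execute(
    "SELECT id FROM users ORDER BY age DESC LIMIT 1").fetchone()
print(oldest)  # → (1,)
```

The database doesn't warn you that the answer is conditional on the unknowns; that's the confusion of map and territory in one line.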
It's when we start confusing the map and the territory that a lot of miscalculations start to happen.
I think you meant to write "open world principle", as every non-stated fact is implicitly false under the closed world assumption.
But yeah, I agree that databases should always be thought of as a model of the world (crisp), and not as the world itself (fuzzy).
But it's also noteworthy that this is a leaky abstraction, and that any kind of database that has to operate in the real world (in contrast to, say, an entity-component game engine) will face this leakiness.
The only way I see to resolve this is to turn the database into something that remains crisp in the face of fuzziness.
E.g. by storing only observations or interpretations of real-world processes: since an observation is a "subjective" statement made from the database's perspective, it holds true regardless of the actual "objective" condition.
It's just not easy to program and model in such a system.
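A minimal sketch of that "store observations, not facts" idea, with a made-up schema: each row is a subjective statement ("source S reported at time T that user U is age A"), which stays true no matter what the world actually is. Current belief then becomes a query over observations, and "no answer" is a possible, explicit outcome rather than a NULL.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE age_observations (
        observed_at TEXT NOT NULL,   -- when the statement was made
        source      TEXT NOT NULL,   -- who made it
        user_id     INTEGER NOT NULL,
        age         INTEGER NOT NULL -- observations themselves are crisp
    )
""")
con.execute("INSERT INTO age_observations VALUES (?, ?, ?, ?)",
            ("2024-01-01T00:00:00Z", "signup_form", 1, 42))

# "What do we currently believe user 1's age to be?" = the latest
# observation; for a user with no observations this simply returns
# nothing, making the gap in knowledge explicit.
belief = con.execute("""
    SELECT age FROM age_observations
    WHERE user_id = 1
    ORDER BY observed_at DESC LIMIT 1
""").fetchone()
print(belief)  # → (42,)
```

The hard part, as the comment says, is that every read now has to decide how to collapse possibly conflicting or stale observations into a belief.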
Well, what's the semantics of your result? Say you have
TABLE UsersWithKnownAge(id ID, age INTEGER)
TABLE UsersWithUnknownAge(id ID)
CONSTRAINT UsersWithKnownAge JOIN UsersWithUnknownAge IS EMPTY
with the obvious predicates: "User with id ID is known to be AGE years old" and "the age of user with id ID is unknown". You join them and get a relation with the predicate "User with id ID..." How do you continue?
"We know AGE_KNOWLEDGE about the age of the user with id ID", where AGE_KNOWLEDGE is Option<Integer> that has semantics "number is unknown" or "number is known to be NUM". Okay. What information about the whole dataset can you extract from this join?
If your query needs to know the users' ages, you can't execute it unless you restrict yourself to querying UsersWithKnownAge, in which case, just run it on UsersWithKnownAge.
If your query doesn't need to know users' ages, you can run it on "UsersWithKnownAge(id) UNION UsersWithUnknownAge(id)", not on an OUTER JOIN.
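A runnable sketch of that dichotomy, reusing the two-table layout from above (the sample data is mine; the disjointness constraint is honored by hand): queries that don't need ages run over the UNION and never see a NULL, while queries that do need ages are honestly scoped to UsersWithKnownAge.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE UsersWithKnownAge (id INTEGER PRIMARY KEY,
                                    age INTEGER NOT NULL);
    CREATE TABLE UsersWithUnknownAge (id INTEGER PRIMARY KEY);
    -- disjoint by construction, per the CONSTRAINT in the comment above
    INSERT INTO UsersWithKnownAge VALUES (1, 42), (3, 25);
    INSERT INTO UsersWithUnknownAge VALUES (2);
""")

# No ages needed: UNION of ids, and no NULL ever appears.
all_ids = con.execute("""
    SELECT id FROM UsersWithKnownAge
    UNION
    SELECT id FROM UsersWithUnknownAge
    ORDER BY id
""").fetchall()
print(all_ids)  # → [(1,), (2,), (3,)]

# Ages needed: the query is explicitly about the known-age subset only.
oldest_known = con.execute(
    "SELECT id FROM UsersWithKnownAge ORDER BY age DESC LIMIT 1"
).fetchone()
print(oldest_known)  # → (1,)
```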