> No, the best thing you can do for simplicity is to not conflate concepts.
This presumes the framework in which one is working. The type of a map is, and always will be, the same as the type of a function. This is a simple fact of type theory, so it is worthwhile to ponder the value of providing a language mechanism to coerce one into the other.
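A minimal Haskell sketch of that claim (the map and the names here are my own illustration, not from the thread): a `Data.Map.Map k v`, partially applied to `lookup`, simply *is* a function of type `k -> Maybe v`.

```haskell
import qualified Data.Map as Map

-- Illustrative only: a finite map over String keys.
ages :: Map.Map String Int
ages = Map.fromList [("alice", 30), ("bob", 25)]

-- "Coercing" the map into a function: same data, function type.
lookupAge :: String -> Maybe Int
lookupAge = flip Map.lookup ages
```

No special mechanism is involved; partial application alone gives the map a function type.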
> This is cleverness over craftsmanship. Keeping data and execution as separate as possible is what leads to simplicity and modularity.
No, this is research and experimentation. Why are you so negative about someone’s thoughtful blog post about the implications of formal type theory?
> This presumes the framework in which one is working.
One doesn't have to presume anything, there are general principles that people eventually find are true after plenty of experience.
> The type of a map is, and always will be, the same as the type of a function. This is a simple fact of type theory, so it is worthwhile to ponder the value of providing a language mechanism to coerce one into the other.
It isn't worthwhile to ponder because this doesn't contradict or even confront what I'm saying.
> No, this is research and experimentation.
It might be personal research, but people have been programming for decades and this stuff has been tried over and over. There is a constant cycle where someone thinks of mixing and conflating concepts together, eventually gets burned by it and goes back to something simple and straightforward. What are you saying 'no' to here? You didn't address what I said.
You're mentioning things that you expect to be self evident, but I don't see an explanation of why this simplifies programs at all.
> One doesn't have to presume anything, there are general principles that people eventually find are true after plenty of experience.
I guess I just disagree with you here. Plenty of programmers with decades of experience have found no such general principle. There is a time and place for everything and dogmatic notions about "never conflate X and Y" because they're "fundamentally different" will always fall flat due to the lack of proof that they are in fact fundamentally different. It depends on the framework in which you're analyzing it.
> It isn't worthwhile to ponder because this doesn't contradict or even confront what I'm saying.
This is a non sequitur. What is worthwhile to ponder has no bearing on what you say. How arrogant can one person be?
> It might be personal research, but people have been programming for decades and this stuff has been tried over and over.
Decades? You think that decades is long enough to get down to the fundamentals of a domain? People have been doing physics for 3 centuries and they're still discovering more. People have been doing mathematics for 3 millennia and they're still discovering more. Let the cycle happen. Don't discourage it. What's it to you?
> You're mentioning things that you expect to be self evident, but I don't see an explanation of why this simplifies programs at all.
It may not simplify programs, but it allows for other avenues of formal verification and proof of correctness.
----
Do you have other examples of where concepts were conflated that ended up "burning" the programmer?
> What is worthwhile to ponder has no bearing on what you say.
Ponder all you want, but what you said wasn't a reply to what I said.
> Decades? You think that decades is long enough to get down to the fundamentals of a domain?
It is enough for this because people have been going around in circles constantly the entire time. It isn't the same people; it is new people coming in, thinking up something 'clever' like conflating execution and data, then eventually getting burned by it when it all turns into a quagmire. Some people never realize why their projects turned into a mess that can't move forward quickly without breaking, or can't be learned without huge effort because of all the edge cases.
> It depends on the framework in which you're analyzing it.
No it doesn't. There are a bunch of fundamentals that are already universal that apply.
First is edge cases. If you make something like an array start acting like a function, you are creating an edge case where the same thing acts differently depending on context. That context is complexity and a dependency you have to remember. This increases the mental load you need to get something correct.
Second is dependencies. Instead of two separate things you now have two things that can't work right because they depend on each other. This increases complexity and mental load while decreasing modularity.
Third is that execution is always more complicated than data. Data is simple because it is static and self-evident; execution is complicated because it can't be observed unless it runs and the state at each line or fragment is inspected. Execution is largely a black box, data is clear. Mixing them makes the data opaque again.
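The "context" worry in the three points above can be sketched in Haskell itself (the example is mine, not the commenter's): the same name, `fmap`, does something quite different depending on whether it is applied to stored data or to a function.

```haskell
-- fmap over data: transforms the stored elements.
doubledList :: [Int]
doubledList = fmap (* 2) [1, 2, 3]

-- fmap over a function: composes behavior instead, via the
-- Functor instance for ((->) r), i.e. fmap = (.).
doubledAfterAdd :: Int -> Int
doubledAfterAdd = fmap (* 2) (+ 10)
```

Whether this counts as an edge case or as a uniform abstraction is precisely what the two commenters disagree about.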
It is clear now that you don't understand the claim. It sounds like you're conflating pure and impure functions. It's obvious from context (did you even read the blog post?) that the title of the blog post is referring to pure functions.
You obviously can't treat an impure function as an array and no one would ever claim that. The blog itself isn't claiming that either given that the author is commenting on a nugget from Haskell documentation, and the author is explaining design choices in his own pure functional language.
Your three points only make sense if your definition of "function" allows side effects. If we're talking about pure functions, then due to referential transparency, arrays are in fact equivalent to functions from contiguous subsets of the integers to another type, as the Haskell documentation indicates.
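A sketch of that equivalence under the standard `Data.Array` API (the example values are mine): the array and the function agree at every in-range index, and either can be recovered from the other.

```haskell
import Data.Array

-- A pure array over the index range 0..2.
xs :: Array Int Char
xs = listArray (0, 2) "abc"

-- The same mapping, written as an ordinary function on 0..2.
f :: Int -> Char
f = (xs !)

-- And back again: rebuilding the array from the function.
xs' :: Array Int Char
xs' = listArray (0, 2) (map f [0 .. 2])
```

The equivalence only holds because the array is immutable and lookup is pure; with mutation or effects the two sides would diverge.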
> It sounds like you're conflating pure and impure functions.
No I'm not, this applies to both in different ways.
> If we're talking about pure functions, then due to referential transparency, arrays are in fact equivalent to functions
Never ever. You're talking about a function that generates data from an index, which is trivial to make. Just because it is possible to disguise it as an array in Haskell, C++ or anything else doesn't mean it is a good idea. An array will have vastly different properties fundamentally and can be examined at any time because the data already exists.
Confusing the two is again conflating two things for no reason. Making a function that takes an index and returns something is a trivial interface, there is no value in trying to mix up the two.
Evidence of this can be found in the fact that you haven't tried to explain why this is a good idea, only that it works under Haskell's semantics. Being clever with semantics is always possible; that doesn't mean conflating two things and hiding how things actually work is a good idea.
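The "trivial interface" being described can be made concrete (the names are illustrative, not from the thread): both a stored array and a computed formula can sit behind the same plain `Int -> Int` signature, with no language-level conflation required.

```haskell
import Data.Array

-- Backed by data that already exists and can be inspected.
fromStored :: Int -> Int
fromStored = (listArray (0, 3) [0, 1, 4, 9] !)

-- Backed by computation; nothing exists until it is asked for.
fromFormula :: Int -> Int
fromFormula i = i * i
```

The two agree on 0..3, which is the commenter's point: an index-to-value function is easy to write without pretending an array is a function or vice versa.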
Notice that I never once in this discussion claimed that it was a "good idea". This is about research and experimentation. The blog post has the title because of the Haskell documentation. Also, claiming that the lack of an argument is evidence to the contrary is argument from silence, a logical fallacy.
The good idea surrounding this isn't about treating functions as data, but maintaining the purity of the type system, allowing the implications of the type system to run their course. You seem to be a very pragmatic programmer, and so the way Haskell builds up its own universe in terms of its own type system and Hask (the pseudo-category of Haskell objects) probably seems pointless to you. I can't say for certain, though.
I completely reject most of your claims because they appear to be incoherent within the framework of type theory and functional programming. It looks like you're using reasoning that applies only to procedural programming to attempt to prove why an idea in functional programming is bad.
Haskell is 36 years old. It isn't research and experimentation any more; it is ideas that everyone with experience has had a look at. Some people might be learning Haskell for the first time, but that doesn't mean it's still research: that all happened decades ago.
> The good idea surrounding this isn't about treating functions as data, but maintaining the purity of the type system, allowing the implications of the type system to run their course.
And what are the benefits here? How do they help the things I talked about? How do they avoid the pitfalls?
> Also, claiming that the lack of an argument is evidence to the contrary is argument from silence, a logical fallacy.
Not really, because if you could contradict what I've said you would have.
> within the framework of type theory and functional programming.
People can gather in a room and tell each other how great they are and how they have all the answers, but in 36 years there is a single email filtering program that was made with Haskell.
> you're using reasoning that applies only to procedural programming to attempt to prove why an idea in functional programming is bad.
I explained why this is a bad idea from fundamental and universally agreed upon ideas about the best ways to write software.
Functional programming languages had lots of ideas and features that turned out to work well. That doesn't mean that conflating two completely separate concepts is a good idea, no matter what 'type theory' someone comes up with to support it.
Saying something is good because Haskell says it is good is circular, religious thinking. These two things aren't the same; they are literally two different things, and trying to unify them doesn't make life easier for anyone. It's just programming cleverness, like the 'goes to' operator -->