Maybe :-)---this was a year and a half ago, or more. I do remember spending a fair amount of time trying to wrap my head around the implications of the difference between `f[x] = x^2` and `f[x_] = x^2`, and the differences between "Module", "Block", and "With". I also had a hard time with closures & symbols---I was trying to use something not unlike the bank-account example from SICP (https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-20.htm...) to cache some intermediate computations, and it got very messy very quickly.
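For anyone following along, a minimal sketch of those two distinctions (standard WL semantics, nothing specific to my problem):

```
(* f[x] = x^2 defines a value for the literal expression f[x]:  *)
(* it only fires when the argument is the symbol x itself.      *)
f[x] = x^2;
f[2]       (* stays unevaluated as f[2] -- no pattern matches   *)

(* f[x_] = x^2 uses the pattern x_, which matches any argument. *)
g[x_] = x^2;
g[2]       (* 4 *)

(* Module/Block/With all "localize" x, but differently:         *)
Module[{x = 2}, x^2]   (* lexical-style: a fresh symbol x$nnn   *)
Block[{x = 2}, x^2]    (* dynamic: temporarily rebinds global x *)
With[{x = 2}, x^2]     (* substitutes 2 into the body up front  *)
```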
Now I recognize that this is in no way idiomatic Mathematica code, and I'm sure there is a nice Mathematica-y way to achieve this, but that's sort of my point. I have a problem, think "Aha! I know exactly how to deal with this in Scheme!", try to translate the Scheme solution into Mathematica, and either fail miserably or spend way too much time trying to figure out how to persuade Mathematica to do what I want. (I should perhaps admit that when I hear "Lisp" I think Scheme and to a certain extent CL, with lexical scoping, as opposed to elisp, with dynamic scoping. This is a flaw in my thinking I've never really gotten around to rectifying.)
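For the caching use case specifically, the usual WL idiom (as I later learned) avoids closures entirely: a delayed definition whose right-hand side makes an immediate definition, so each result is stored as a new rewrite rule the first time it's computed. A sketch, with Fibonacci standing in for the real intermediate computation:

```
(* Memoization via the ":= ... = ..." idiom: the first call      *)
(* computes fib[n] and records it as an immediate definition,    *)
(* so later calls hit the cached rule instead of recursing.      *)
fib[0] = 0;
fib[1] = 1;
fib[n_] := fib[n] = fib[n - 1] + fib[n - 2];

fib[30]   (* 832040, in linear rather than exponential time *)
```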
> The functional parts of Mathematica/WL (by which I mean the equivalents of fold, filter, map, etc) are pretty straightforward and should be quite easy for a Lisp programmer to pick up. See [0].
Yeah, those were pretty natural.
> Note to self: clearly we aren't making it easy enough to understand what WL can do, especially for people who pick it up for one specific thing.
This is important, I think, but I also think the problem's less in your documentation than in how I "learned" Mathematica. I got started using Mathematica for things like messy integrals, or differentiating and then simplifying some huge expression. Something comes up in a problem set, I try three or four times to do it by hand and always lose signs/factors/whatever, and then farm it out to Mathematica; as I gradually started doing more and more, I kept just porting Lisp experience, and this always worked just barely well enough that I wasn't forced to learn Mathematica on its own terms.
I suspect if I had thought about it as "basically m4 with a crazy-awesome standard library", rather than "basically Scheme with a crazy-awesome standard library", I might have been happier, but ultimately I needed (need) to recognize that Mathematica is its own thing, with its own strengths, weaknesses, and fundamental metaphors.
> Btw, it's called a "term rewriting system", not a "rule rewriting engine".
Ack. Thanks; I'll try to bear that in mind.
> Did you really want to write diagonalization from scratch? Why not use the superfunction Eigensystem [2], which has been honed by many experts over many years?
No! You're right, that would be a terrible idea. I was using Eigensystem; in Julia I farmed it out to LAPACK via eigs(). My problem was in setting up the matrix to be diagonalized. The state space had something like seven degrees of freedom (four two-dimensional and three with arbitrary dimensions), so I was calculating matrix elements for up to something like N = 10^5 basis states. (There were some tricks to pull so I was iterating through O(N) states, not O(N^2).)
Ultimately, for sufficiently large N, the diagonalization is going to take much longer than the setup, just because the scaling's worse. For small-ish N, though, the setup was interminable, and those small-N test cases are precisely where I need to be able to move quickly when I'm trying to figure out which stupidity I perpetrated this time.
> It's a bit odd to criticize a functional programming language for not having super-fast 'for' loops.
Yeah, no kidding. This was a classic Fortran-style problem, which is why I feel kind of stupid for having even tried to do it in Mathematica.
>> The lesson I learned, though, was to never use Mathematica for anything beyond basic symbolic calculations---integrals, that sort of thing.
>That's basically nonsense. Wolfram|Alpha is a multi-million line Mathematica/WL system that goes far beyond symbolic manipulation. As a totally random example, take Facebook social network analysis [3].
Fair enough. And I have a very smart friend who has sworn by Mathematica for years (and just spent some time working for you guys)---maybe it really is a combination of how I approach Mathematica, the types of problems I've tried to solve in it, and the fact that de gustibus non disputandum. I can't shake the feeling, though, that I'd want to run away very quickly from a large Mathematica/WL project like Wolfram|Alpha.
That's a really good example: you'd have to find the tutorial [0] to know what to do. And we could make that job easier by detecting your probably incorrect use of = instead of := and giving you an "I see you're trying to define a function" kind of prompt. Of course people hated Clippy, so we have to tread carefully with that kind of thing :)
> This is important, I think, but I also think the problem's less in your documentation than in how I "learned" Mathematica.
Yes, I think you hit the nail on the head with this paragraph. It is possible to 'accrete' tricks in a way that potentially blocks you from having a holistic knowledge of the language. The workflows for symbolic manipulation, which involve lots of global state and symbols representing variables, are probably a prime culprit.
> My problem was in setting up the matrix to be diagonalized. The state space had something like seven degrees of freedom (four two-dimensional and three with arbitrary dimensions), so I was calculating matrix elements for up to something like N = 10^5 basis states.
That makes more sense. There might have been higher level ways to do this using functions like Array and Table, but perhaps not. And Julia is a really interesting language, I think we can learn a lot from them.
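For what it's worth, the higher-level version might look something like this -- a hedged sketch where `element` is a made-up stand-in for the actual matrix-element computation:

```
(* Hypothetical example: build an n x n symmetric matrix from a *)
(* matrix-element function, then hand it to Eigensystem.        *)
n = 100;
element[i_, j_] := If[i == j, N[i], 0.1];  (* stand-in physics  *)
h = Array[element, {n, n}];
{vals, vecs} = Eigensystem[h];
```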
> I can't shake the feeling, though, that I'd want to run away very quickly from a large Mathematica/WL project like Wolfram|Alpha.
Huge codebases in any language get hairy. I'd say we're on a par with C++ in that respect (meaning: not very good, but workable).
Modern languages have had some innovations around clean package systems and API boundaries (though the ML family showed the way), so perhaps it's for the best that we've held off on modernizing our package system. Plus, I think we have a chance in the next year or two to really leapfrog other languages with some amazing static analysis tools.
Oh, man, that bit me so many times, especially when it had to interact with some kind of scoping trick. I read that tutorial (or the equivalent from before the WL), and never really got good intuition for how immediate & delayed evaluation worked and when to use which.
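The one-line mental model I eventually settled on: Set (=) evaluates its right-hand side once, at definition time; SetDelayed (:=) re-evaluates it at every use. A minimal illustration:

```
r = RandomReal[];    (* Set: one number, frozen at definition time      *)
rd := RandomReal[];  (* SetDelayed: a fresh number on every access      *)

{r, r}     (* two copies of the same number                             *)
{rd, rd}   (* two (almost certainly) different numbers                  *)
```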
> There might have been higher level ways to do this using functions like Array and Table, but perhaps not.
I was actually using Table, but I was thinking of it as "iterate over these variables". Table's nice, although every once in a while it would break the picture I had in my head of it as "map-over-cartesian-products".
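For concreteness, multiple iterators in Table do behave like a map over the cartesian product, except that the result comes back nested one level per iterator rather than flat; Tuples gives the flat version directly:

```
Table[{i, j}, {i, 2}, {j, 2}]
(* {{{1, 1}, {1, 2}}, {{2, 1}, {2, 2}}} -- nested by iterator  *)

Flatten[Table[{i, j}, {i, 2}, {j, 2}], 1]
(* {{1, 1}, {1, 2}, {2, 1}, {2, 2}} -- the flat product        *)

(* Equivalent "map over Tuples" formulation: *)
Map[f @@ # &, Tuples[{Range[2], Range[2]}]]
(* {f[1, 1], f[1, 2], f[2, 1], f[2, 2]} *)
```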
> Plus, I think we have a chance in the next year or two to really leapfrog other languages with some amazing static analysis tools.
Great! Another beef I have with Mathematica (and Scheme, for that matter) is that it lacks a static type system: since I learned bits and pieces of Haskell and started using Julia seriously, I've come to love the way a type system can save me from my own stupidity. This is definitely a matter of taste, though.