It's definitely closer to MATLAB than Python, but it's closer to Python than most mainstream programming languages. I manually ported ~20k lines of Python code to Julia over a couple of years, and for the most part I could do line-by-line translations that worked (though they weren't necessarily performant until I profiled and switched to Julia idioms).
Well, my workflow uses Revise.jl. I develop either in Jupyter notebooks or in the REPL, prototyping code there and then moving functions to files when they're ready. In that context, rapid iteration is fairly fast.
Nowadays I often use Claude Code, working with a Julia REPL in a tmux or zellij session via send-keys. I'll have it prototype and try to optimize an algorithm there, then create a notebook to "present its results", then I'll take the bits I like and add them to the production codebase.
How do you develop a program that will run for a long duration on HPCs? How do you quickly modify struct definitions, and how do you define imports (using vs include syntax is so confusing!)?
A REPL-based workflow doesn't make sense to me for anything other than scripting work.
Re: REPL use, you just use it to run code and look at results, e.g. for TDD: you modify your code files normally in the IDE, the changes get picked up by Revise, and then you re-run the tests in the REPL.
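To make that loop concrete, here's a minimal sketch. The file name and `double` function are made up; in a real project you'd use Revise's `includet` instead of plain `include` so that saving an edit in your IDE is enough, but plain `include` is used here so the sketch runs without Revise installed.

```julia
using Test

# With Revise: `using Revise; includet("mymath.jl")` tracks the file,
# so edits saved in your editor are picked up automatically.
write("mymath.jl", "double(x) = 2x\n")   # toy stand-in for a real source file
include("mymath.jl")

@testset "mymath" begin
    @test double(2) == 4   # after editing mymath.jl, just re-run the tests
end
```

The point is that the REPL stays up the whole time: the edit-run cycle is "save file, re-run one line", with no process restart.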
For long-running jobs, I basically follow the same process as in any other language: make the functions I want to run, test them locally on a small dataset that runs relatively quickly, then launch them on the remote machines with the full data.
Revise.jl has struct redefinition now, but before that I would just use NamedTuples while iterating, then make a struct when I was ready to move something to production.
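A sketch of that prototyping pattern (the field names are just for illustration):

```julia
# While prototyping: a NamedTuple needs no type definition, so there's
# nothing to redefine when the shape changes between iterations.
params = (alpha = 0.1, iters = 100)
params = (; params..., alpha = 0.05)   # "update" by splatting into a new NamedTuple

# Once the shape has settled, freeze it into a struct for production:
struct FitParams
    alpha::Float64
    iters::Int
end

p = FitParams(params.alpha, params.iters)
```

The NamedTuple version has essentially the same field-access syntax as the struct, so the eventual switch is mostly mechanical.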
`using` is for importing modules, `include` is for specific files. At work, we currently have a monorepo, with one top-level OurProject.jl file that uses `using` to import external packages, and `include` for all the internal files.
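A tiny runnable sketch of the difference (the file and module names here are invented, not our actual project):

```julia
# `include` textually evaluates a file's code inside the current module;
# `using` loads an already-defined module and brings its exports into scope.
write("helpers.jl", "greet() = \"hi\"\n")   # toy stand-in for an internal file

module OurToyProject
using Statistics        # external dependency: brings in `mean`, `std`, etc.
include("helpers.jl")   # internal file: its definitions land in this module
end

OurToyProject.greet()          # defined via include
OurToyProject.mean([1, 2, 3])  # visible via `using Statistics`
```

So a monorepo's top-level file is mostly a list of `using` lines for dependencies followed by `include` lines that stitch the internal source files into one module.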
> How do you develop a program that will run for a long duration on HPCs?
The main strategy is to have a way of parameterizing the program to bring the runtime down to seconds or minutes on a laptop. E.g., for PDEs, you may be running the HPC version on a giant mesh, but you can run the same algorithm on your local machine on a much coarser mesh.
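As a toy illustration of that scale-down strategy (the solver below is a made-up 1-D relaxation, not anything from the thread), the only thing that changes between the laptop run and the HPC run is the resolution parameter:

```julia
# Same algorithm at any resolution: n is the only knob you turn.
function relax(n::Int; iters::Int = 100)
    u = zeros(n)
    u[1] = 1.0
    u[end] = 1.0                          # fixed boundary values
    for _ in 1:iters, i in 2:(n - 1)
        u[i] = (u[i - 1] + u[i + 1]) / 2  # simple nearest-neighbor relaxation
    end
    return u
end

u_laptop = relax(64)         # coarse grid: finishes instantly on a laptop
# u_hpc = relax(10_000_000)  # fine grid: same code, launched on the cluster
```

You debug and profile the `relax(64)` version interactively, and only the fully-tested code ever sees the big mesh.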
> How do you quickly modify struct definitions?
Thankfully, as of 1.12 this has been solved: you can redefine structs while keeping the REPL up.
> how do you define imports (using vs include syntax is so confusing!)
Yeah, Julia made this confusing. The rough rule: `include("file.jl")` just evaluates a file's code in place (like C's `#include`), while `using SomePackage` loads a module and brings its exported names into scope.
I'd have to guess that this is because of ease of use. C++ lets you get as close to the metal as you choose to, so there is no reason why a C++ solution shouldn't be at least as fast as one written in any other language, and yet ...
Of course it also depends on what additional libraries you are using, especially when it comes to parallel/GPU programming in C++, but it's easy to believe that Julia out of the box makes it easy to write high-performance parallel software.
> C++ lets you get as close to the metal as you choose to
This only ends up being true (for any language, but it's too often cited for C++) in a pretty useless Turing Tarpit sort of sense.
So it's not "no reason"; it's just that it's sometimes impractical to solve some problems as well in C++ as in a language better suited to them.
Now people do do impractical things sometimes. It's not very practical to swim across the English channel, but people do it. It's not very practical to climb Mt Everest, but loads of people do that for some reason. Going to the moon wasn't practical but the Americans decided to do it anyway. But the reason even the Americans stopped going for a long time is that actually "that was too hard and I don't want to" is in fact a reason.
Yes, with unlimited development time I would expect C++ solutions to be as fast or faster. But Julia hits a really nice combination of development speed and performance that I haven't found in other languages, at least for number crunching and data pipelines.
I actually do this, but that's mostly because our team reviewed all the existing autoformatters for the relatively obscure language we use, and either really hated the formatting or found that they actually introduced errors!
Are you aware that you can use tmux (or zellij, etc.), spin up the interpreter in a tmux session, and then the LLM can interact with it perfectly normally by using send-keys? And that this works quite well, because LLMs are trained on it? You just need to tell the LLM "I have ipython open in a tmux session named pythonrepl"
This is exactly how I do most of my data analysis work in Julia.
I think the "incompatible" was more in the Dvorak sense, which I believe is that whenever you are on another computer, it most likely won't have Dvorak.
For Jujutsu, it's fine on your own computer, but you probably have to use git in CI or on remote servers. And you probably started with git, so moving to Jujutsu was an added effort (similar to Dvorak).
>I think the "incompatible" was more in the Dvorak sense, which I believe is that whenever you are on another computer, it most likely won't have Dvorak.
That's not a problem, just switch to QWERTY when you use a different computer. For me at least, it's not hard at all to switch between Dvorak and QWERTY.
Yes - this puts it perfectly. I am a very fast typist - I can type ~156wpm in QWERTY. When I was a kid and learned about Dvorak, it was tempting - but I already typed so fast that learning another keyboard just seemed like it would cause "misfires" in my brain - and virtual keyboards are all QWERTY.
Same deal with jj vs. git. I've learned git. I've used it for 15 years. I'm proficient with it. I'm sure jj is better - but I'm not sure it's better enough to be worthwhile.