
I think it is easier to write fast, complicated code in Julia than in C, C++, or Fortran. Not just because of the syntax, but because great support for generic code by default, plus metaprogramming, makes it relatively simple to write code that generates the code you actually want for any given scenario. Interactive benchmarking and profiling are a boon too.

An example of the value of generic code is that forward mode AD is extremely easy, and almost always just works on whatever code you run it on.
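A minimal sketch of why generic code makes forward-mode AD "just work", using dual numbers in plain Python (an illustration of the mechanism only; `Dual`, `derivative`, and `poly` are made-up names here, not ForwardDiff.jl's API): any function written generically over `+` and `*` picks up derivatives for free.

```python
class Dual:
    """A number carrying a value and a derivative component."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        # Product rule: (uv)' = u'v + uv'
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def derivative(f, x):
    # Seed the input with derivative 1 and read the result's derivative.
    return f(Dual(x, 1.0)).deriv

# A "generic" function whose author never thought about AD:
def poly(x):
    return 3 * x * x + 2 * x + 1

derivative(poly, 2.0)  # 6*2 + 2 -> 14.0
```

The same trick scales to whole programs when, as in Julia, library code is generic over its number types by default.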

Then, once that's done, multiple dispatch (and possibly macros for a DSL) allows for a much cleaner user interface than Python offers for numeric code.

I have a lot more experience with R than Python, but seeing more of the scientist/mathematician/researcher side of things, I have to strongly disagree with the view that they should write slow code and contact a CS guy to write fast code in another language when they need it. Do you honestly think that's practical for grad students' projects? Recently, one of my friends wrote a simulation in R. Most of the work was done by hcubature -- a package written in C -- integrating a function written in R. It could just as easily have been written in Python. That function was slow, and the simulation ran for days before an error caused it to crash, losing days of compute time. I -- a statistics grad student -- helped him rewrite it in Julia, and it finished in 2 hours.

The fact that C/C++ code will still run slowly if it has to call your R/Python code is a problem. They also can't apply things like AD easily. A common solution, used by Stan for example, is to create a whole new modeling language and have users interface through that. Learning a new language -- albeit a relatively simple, domain-specific one -- which they then cannot debug interactively, is another pain point. All this can be avoided by simply using Julia.



The reason I ran away from Julia and don't plan on ever using it again, and don't recommend anyone use it outside of academia, is that so much of the community is made up of grad students. So you get a lot of research code and people who have never been professional programmers maintaining most of the ecosystem. Julia Computing is largely made up of people they've hired from the community straight out of grad school.


I don't see your point about academia and hiring from the community.

What I see on Github is as professional as it can get. Issues, discussions, triage, review, CI-tests for example.

Maybe you started too early, before Julia had settled? And/or were too over-enthusiastic to begin with? I think Julia had to grow and find the 'correct' solution for, e.g., NA/Missing/Nullable; break things because they didn't work out as expected; postpone things (the debugger, maybe?) in favor of more important areas, or because base was not stable yet.

Two years ago in a project I hoped that people would switch immediately from R to Julia. But in retrospect it was good they didn't. Julia was not ready for them and too much ecosystem stuff missing/unclear still. (This said, Julia would in principle have been much much better suited for that project).


Things are decent on average, but there's a persistent carelessness and rush to do things without paying attention to the consequences. More in packages than base nowadays, but there's a lot of merging and releasing things immediately without waiting for code review that could have caught mistakes before breaking users.


Out of curiosity: what language has a package ecosystem that in your opinion does do this right?


Large Apache projects, notable widely-used C++ projects like Boost, LLVM, ZMQ, and CMake, and the C++ language standards committee itself, all take their time and rarely if ever release changes/bugfixes immediately. Things go through review, testing, and release candidates, and people other than the original authors of the code provide input before normal users get their hands on anything. The core PyData projects take their time and are cautious about breaking things.


I complained also about the "cowboy" culture I saw among the Julia developers when I first started with it (people making a change directly to master, merging their own PR without giving time for people around the world to review, or not having a minimum number of qualified reviewers before merging), but those days are gone, and I feel they've matured quite a lot in the past few years in that respect. Some of it I think was simply the great excitement that comes from being able to be so creative with the language, and a rush to get things figured out and nailed down to finally get to v1.0. As far as projects in other languages, I don't really feel it has much to do with the languages themselves; it's more about the type of people a particular project attracts.


Just in the last few months BinDeps was broken by a "deprecation fix" that was completely wrong and using a name that didn't exist, and it got merged and released by the patch author before anyone else could look at it, breaking many downstream packages.

Refactorings and major changes in ZMQ.jl and the web stack similarly get merged and released immediately with zero review, still. This is a major problem.

Features in the base language have been deleted during 0.7-DEV because a single core developer didn't like them, despite multiple other core developers voicing disagreement that the features were useful and removing them was not urgent or necessary.

It's not a development culture I would rely on when products and money and jobs are at stake. Even the startup you were working with abandoned Julia, correct?


> Just in the last few months BinDeps was broken ...

What I don't understand is why you didn't just stay with old stable versions. You wouldn't be exposed to such issues then, would you?

> It's not a development culture I would rely on when products and money and jobs are at stake

On the other hand this 'development culture' has brought brilliant results in a relatively short amount of time with a relatively small team.

There was a talk [1] at the Juliacon 2018 where a company very successfully replaced an IBM product with Julia code. At 48:07 there was a question 'about problems with changes in Julia'. Answer: they started with v0.3 and 'didn't really have many problems'. They 'didn't use anything particularly exotic'. So, yes, I'd say if you adapt to the given situation it can (could have) work(ed).

I'm not convinced that a non-cowboy style would have been better. (And besides, this doesn't come free moneywise).

[1]: https://www.youtube.com/watch?v=__gMirBBNXY


These incidents were with respect to 0.6-supporting versions of packages. Pinning is a good idea for reproducibility, but it's not the default, so package updates or new installs break when something careless makes it instantly into a release.

Talk to me when google, amazon, microsoft, facebook etc are publicly using and officially supporting julia on cloud platforms or even infrastructure libraries like protobuf.

The carelessness isn't responsible for or helping anything. A good diffeq and optimization suite have been built despite the prevalence of careless practices, not because of them.

It's not a question of money either, just patience and code review and recognition of how many things downstream are going to be affected by mistakes. You'll save more time in not having to put out as many fires than it will cost to slow down and not be in such a rush at all times.


I was not expecting the change to MDD (Marketing Driven Development) at the last minute. But at least 1.0 is out; I hope those wild west times are now far behind us. I'll wait for Julia 1.1 and most packages at 1.0 before diving back in.


Dlang. But it's a compiled language and you have the option of statically compiling all your dependencies. The package manager is also quite simple and just works.

https://github.com/dlang/dub


SciPy's ODE solver doesn't have a stable contributor who contributes more than about once a year even though it has many long-standing issues, and PyDSTool hasn't had a commit in 2 years and doesn't work on Python 3 (and most of PyCont is incomplete...). R's deSolve isn't even in a repository you can track and still hasn't fixed its DDE solver issues even though the docs directly mention how it's incorrect. So it's not like other open source software has strong maintenance structures....


SciPy solvers are mostly interfaces to existing established solvers, and I’ve not had any problems with them. We’ve also used PyDSTool without problem, and it appears to support Python 3:

https://github.com/robclewley/pydstool/blob/master/README.rs...

If you think these are poorly maintained you should see XPPAUT, a tool still quite widely used.


It just sounds like your code doesn't require more than the occasional hot loop. That's fine then. There is no reason to leave numba.

If you have anything that requires more complexity, numba becomes painful. You seem to somehow insist that your use case is the only one out there. We are actively developing a scientific simulation library in Julia. The prototype was in Python + numba. The Julia code is vastly simpler, and that is because Julia is not "an interface to LLVM for fast loops". It's a full-fledged language with performant abstractions, closures, inline functions, metaprogramming, etc. To get things fast in numba I ended up doing code generation (I talked to the numba developers; it seemed the only way). Talk about brittle, painful, and impossible to generalize.

Now we have Julia code, using sparse matrices in the hot loop is easy, Automatic Differentiation just works, etc...

The correct comparison for Julia in this context is C++, not Python.


I’m not insisting I have the only use case, but apart from the examples of traversing language boundaries, I haven’t seen a good example of what’s so painful in Numba. What is so challenging that it requires code generation?


I have a data structure from which I generate a dynamical behaviour that I want to integrate. So I construct a right hand side function (RHS).

I further want the user of the library to be able to pass it new functions that can be integrated into the overall dynamical behaviour.

There are different ways to achieve this; the simplest version is with closures. Pass a list of functions and some parameters, and I construct a right hand side function from them. Unfortunately this does not work with numba. What I ended up doing is passing not the function itself but the function text, using it to generate the code of the function to be jitted, and then eval'ing that. It worked, but it was horrible to maintain and required users to pass function bodies as text with a very specific format.

Now in Julia we will probably eventually transition to a macro based approach, but the simple closure based model just worked.

Previously I had large-scale, inhomogeneous right hand side functions that I wanted to jit in numba and that needed sparse matrices. So I ended up having to implement sparse matrix algorithms by hand because I can't call scipy.sparse.
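As a sketch of what implementing "by hand" means here: the kernel one typically ends up rewriting is a CSR matrix-vector product, shown below in plain Python (the names and layout are illustrative; in practice a body like this would sit inside a @numba.jit(nopython=True) function):

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    # y = A @ x for a matrix stored in CSR form:
    #   data    -- nonzero values, row by row
    #   indices -- column index of each value
    #   indptr  -- indptr[i]:indptr[i+1] slices out row i
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# A = [[1, 0], [2, 3]] in CSR form:
data = np.array([1.0, 2.0, 3.0])
indices = np.array([0, 0, 1])
indptr = np.array([0, 1, 3])
csr_matvec(data, indices, indptr, np.array([1.0, 1.0]))  # array([1., 5.])
```

Not hard on its own, but every sparse operation the model needs (products, solves, factorizations) has to be reimplemented this way once scipy.sparse is off the table.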

Another instance: I implemented a solver for stochastic differential equations with algebraic constraints in numba, partly to be able to use it with numba-jitted functions and get a completely compiled solver out of it. This already constrained my users to use numba-compatible code in their right hand side functions.

In order to get this to work I had to implement a non-linear solver from scratch in numba rather than being able to use SciPy's excellent set of solvers.

Julia is not a magic silver bullet. Getting the ODE solvers to make full use of sparsity still requires some care and attention. But I simply spend a lot less time on bullshit than before. (so I have more time to spend on HackerNews :P)

I decided to switch over when for one paper I was able to implement a problem using the standard tools and packages available in Julia within half a day. The Python equivalent would have involved using a new library that came with its own DSL, which would have meant rewriting quite a bit of my code to take advantage of it. Easily several days work.

With DifferentialEquations.jl I also could just test half a dozen different numerical algorithms on a problem in a matter of minutes, find out which performed best, and use that for Monte Carlo. That saved about a week of computation time on one project alone. It's not a critical amount -- nobody cares if the paper comes out a week later or earlier -- but it's nice (and I don't waste supercomputer time). With Python libraries with different DSLs this would have taken considerably longer, and I probably would not have done it. This is the result of having one library and interface rather than a whole bunch; if everyone agreed on SciPy's ODE interface (which just got properly established in SciPy 1.0.0) this would be easy in Python as well. But that's also the point that people have been making: Julia's design for composition over inheritance makes it convenient to rally around one base package.

I also personally very much like being able to enforce types when I want to. This is a big win for bigger projects for us.


> With DifferentialEquations.jl I also could just

yep... I took a look at the DE packages in Julia today, and quite frankly they're much better than the situation in Python, perhaps because one or more prolific applied mathematicians are making a concerted effort, which is lacking in Python? I dunno, but I did recommend my colleagues look at Julia for DEs, for this reason.

That said,

> Pass a list of functions, and some parameters and I construct a right hand side function from it. Unfortunately this does not work with numba.

I'm pretty sure I've done this before with numba, so maybe getting concrete would help, e.g. an Euler step

    def euler(f, dt, nopython=False):
        @numba.jit(nopython=nopython)
        def step(x):
            return x + dt * f(x)
        return step

where the user can provide a regular Python function or a @numba.jit'd function. If a @numba.jit'd function is provided, and nopython=True, this should result in fused machine code. This sort of code gen through nested functions can be done repeatedly, e.g. for the time stepping loop.

I've done this for CPU & GPU code for a complex model space (10 neural mass models, 8 coupling functions, 2 integration schemes, N output functions, ...) which, by the above pattern, results in flexible but fast code.

Is this a pattern that captures your use case or not yet?

> implement sparse matrix algorithms by hand because I can't call scipy.sparse.

agreed, this is a surprising omission, which I attribute to not much of the numerical Python community making use of Numba, but could be fixed rapidly.

> constrained my users to use numba compatible code in their right hand side functions

what did you run into that was problematic?

> I had to implement a non-linear solver from scratch in numba rather than being able to use scipys excellent set of solvers

I didn't follow; passing @numba.jit'd functions to scipy is in the Numba guide, so what exactly didn't work?


This pattern is how I wrote the SDE solver in Python. That works great and is really useful and the reason why I teach closures.

The library we're building now though does something different. Something like this:

  def network_rhs(fs, Network):
    def rhs(y, t):
      y2 = np.dot(Network, y)
      r = np.empty_like(y)
      for i, f in enumerate(fs):
        r[i] = f(y2[i])
      return r
    return rhs

> what did you run into that was problematic?

For more complex model building the right hand side functions actually make use of fairly complex class hierarchies. That was the major stumbling block. But people also were using dictionaries and other non-numpy data structures and just generally idiomatic Python that is not always supported. Some of that stuff is inherently slow/bad design of course, but it still ended up killing the use of my solver for this project.

They are now rewriting in C++, which is absolutely a great choice for their case (and probably would have been viable for us too if we had had more people with a C/C++ background in the team).

> passing @numba.jit'd functions to scipy is in the Numba guide

I wanted to use scipy.root from numba. Not the other way around.

Now if all of the numerical Python community were standardized on numba, a lot of this would not be an issue. SciPy's LowLevelCallable is a great step in the right direction. But fundamentally I don't see how you will ever get the different libraries to play together nicely in a performant way. It would require every API to expose numba-jittable functions. Last I checked, the only functions you could call from within numba code were other numba functions and the handful of numpy features the numba authors implemented themselves (I remember waiting for dot and inv support). If I have an algorithm by a student implemented on a networkx graph as a data structure, I can't just jit that. In Julia it automatically is.


I see what you mean. I’ve done exactly that sort of thing in C with an array of function pointers, but I’m not sure it would work in Numba.

The churn is exhausting but I see the merit of starting over and getting everything done in a fully fledged JITd language.


As said in another part of the thread, we were in a very good spot to do so. There absolutely are many reasons to not jump in at this time.


This just sounds like bad software design to me. You are miswanting something overly generic that’s super not needed, and regardless of the implementation language, it sounds like it would benefit hugely from taking a more YAGNI approach: restricting its genericity based on likely usage (not intended or imagined usage), and either just manually writing stuff for an exhaustive set of use cases, or code-genning just those cases and not allowing or encouraging arbitrary code gen of possible other cases.

I love it when libraries limit what can be done with them and document an extremely specific scope they apply to.

When libraries try to be all things to all people, it’s bad. A sophisticated code gen tool that enables library authors to choose to do that is a bad thing, not a good thing.


You don't know my use case, and you are not right. I have a network of heterogeneous interacting nodes with quite different dynamics on the nodes. I pay great attention to YAGNI, and constantly tell my students and colleagues to cut generality and work from the specific case outward. But this is just essential complexity of the problem domain. I've spent years implementing the concrete cases; I know what research we couldn't and didn't do because it was too painful to do by hand, and this is the minimum level of generality I can get away with.

I have ideas for a more general library of course, :P But I'm not spending time on them.


SciPy's solvers cannot handle events which are nearby, most return codes aren't documented, you cannot run the wrapped solvers in stepping control mode, you cannot give it a linear solver routine, etc. So it wraps established solvers but still only gives the very basic solving feature of it, and most of the details that made the methods famous are not actually available from SciPy's interface.

And it wasn't Python 3 for pydstool. It's SciPy 1.0.0. Some of the recent maintenance for this stuff has actually come from the Julia devs though:

https://github.com/robclewley/pydstool/pull/132


You mentioned Python 3, not me. Btw, I did look through your DE packages, and they are definitely an amazing contribution not seen in Python; I've recommended them to colleagues.


>You mentioned Python 3, not me.

Yeah sorry, I was just acknowledging that I was wrong when I found the PR and noticed the mistake. I guess it came across oddly.


> Julia Computing is largely made up of people they've hired from the community straight out of grad school.

Where do you think most companies get "professional programmers" from, exactly?

Julia's been designed and implemented by some very bright people, and it shows.


Industry gets professional programmers by hiring people who have been hammering out shipping code in paying products for years, and years, doing support, maintenance, and new product development and research.

Grad students may be brilliant, but that does not give them any insight into what makes an ecosystem, toolchain, and feature set good.


We would love to have more professional programmers contribute: unfortunately those 1-based indices put them off.

More seriously: part of the problem does seem to be that Julia does have some significant differences from "traditional" languages (e.g. the concept of a "virtual method" is a bit fuzzy in Julia, what we call a JIT is probably better described as a JAOT, whether it has a "type system", homoiconicity, etc.).

That said, this JuliaCon I have met a lot more people from classical "programmer" backgrounds. So hopefully that is changing.


I've seen quite an evolution over the past 3-4 years I've been using Julia and the 4 JuliaCons I've attended so far. Back at the 2015 JuliaCon, a number of us "older" professional programmers felt like we should stage a palace coup, because it did feel like the input of people who had "been around the block" a few times was not really valued. That's changed quite a lot (maybe because in the intervening years many of the core contributors have gotten their Ph.D.s and are having to live off the blood, sweat, and tears (plus lots of joy, to be sure) of producing things with Julia that people will actually pay money for). Yes, it was young and brash, but those awkward years seem to be past, and I feel the future of Julia is quite bright.


How was it with the origin and design with Python, NumPy, Matplotlib, Pandas? Were the people who originated these projects in their time any more professional and seasoned than Julia people are currently?


Well, if they'd been as brilliant as the GP indicates there would be no need for Julia, would there?


Also, I was actually a postdoc...


I hope that if/as Julia gets adopted in industry, more libraries get written and maintained by professionals. If the language is successful, that may change.

Although AFAIK it hasn't really in the case of R, outside of tidyverse.

As a grad student without a CS background, I don't think I'm qualified to say much more on this.


The issue isn't "professionals", it is domain specialists. In the past, data specialists didn't have a Computer Science specialty. They were awesome with stats and numbers but lacked strong programming skills.

Hadley Wickham is special because he has both the stats, data science AND programming skills.

data.table is also an amazing library. R to me is the most improved language in the history of programming languages over the past 5 years.

Also R allows anyone with basic hackery R skills to create libraries easily and that is why so many of them are not optimal.


What happens for that scientist when they have to dive into Julia's stack to debug something weird? In Python and C, you have established debuggers, semantics, etc., which means that, yes, there are two languages instead of one, but neither is a moving target compared to a language which just had a 1.0 release.

I get the issue with scientists writing poor code, but Numba has largely solved this problem, by packing an LLVM JIT into a decorator which can be applied to any numerical code to get the same speed-ups as Julia, except no language switch is required.

Citing slow code in the wild with a fast rewrite is a hilariously poor anecdote performance wise. I’ve rewritten Fortran code into Python and gotten speed ups. Regardless of the language, garbage in, garbage out.

Stan is an example where the modeling is “just” a DSL implemented as C++ templates. Does that make that a good choice?


Sometimes low level debugging is a surprisingly pleasant experience as the julia JIT generates proper DWARF debug info. So for instance, you can break in gdb and see the julia source code for any julia generated stack frames, neatly intertwined with the frames of the C runtime.

To be clear, I don't remember needing to do this as a regular user. As an occasional compiler hacker it's been quite nice though.


As someone who's spent decades programming in C/C++, and diving into assembly code (and writing a fair share when the compiler just couldn't do what I needed), I love being able to directly inspect the output code at many levels, including all the way down to "bare metal". Yes, there's a lot of work to be done in the area of debuggers for Julia, but there are already useful debugging tools (like Rebugger) that I haven't seen for any other language.


That’s cool, I wasn’t aware. I think that’s most useful when working with external libs.


It'd probably be a lot easier to debug something somewhere in the Julia stack, than in the C/C++/Fortran code that many R libraries run through.

My point with the rewrite was not garbage in, garbage out. It was that even though the original R code was using a library written in C, that library had to call a function he wrote in R millions of times. That R function being inherently slow is part of the problem. (The easiest fix for that is just writing the function you pass to that library in RCpp, but the overhead on that is still close to a microsecond -- not sure how it is in R. Numba is probably easier.) It is nicer to not have to worry about that.

An alternative approach some libraries provide, like my Stan example in RStan or PyStan, is the DSL they implement to make it easier for end users in R or Python to write fast C++ code.

But, now lets say you're working on an optimization problem. You want to use a gradient-based optimization method, while your code is heavily dependent on the FooBars and Widgets libraries. If these libraries are written in Julia, you can write code in Julia using these libraries, and automatic differentiation will just work as you pass it to Optim.jl.

If FooBars and Widgets were Cython libraries, optionally wrapping C/C++ code, would this work? Could you write functions making use of these libraries, and get efficient gradients for optimization for use with an optimization library and have everything be fast?

'Stan is an example where the modeling is “just” a DSL implemented as C++ templates. Does that make that a good choice?'

I gave Stan as an example of a less than ideal situation, because people normally use Stan from R or Python, not from within C++. Therefore R and Python don't integrate well with it. If you're already working in C++, Stan seems ideal. You can use arbitrary templated C++ code with Stan, include external libraries, etc. It needs to be C++ because of their autodiff.


Optimization (or any gradient based algorithm) is a good use case for AD, but I don’t see why Julia’s approaches are any better than Python’s, eg autograd, theano, pytorch etc.

And sure that wouldn’t work with arbitrary Cython modules because Cython was designed as a Pythonic syntax over the Python C-API, and it just happened to become popular for numerical work.

I don’t think that’s a strong argument, though, because anything small enough to be usable with AD can be rewritten without too much time lost, whereas those massive Fortran routines with iterative algorithms wouldn’t produce useful gradients in any language.


>And sure that wouldn’t work with arbitrary Cython modules because Cython was designed as a Pythonic syntax over the Python C-API, and it just happened to become popular for numerical work.

>I don’t think that’s a strong argument, though, because anything small enough to be usable with AD can be rewritten without too much time lost, whereas those massive Fortran routines with iterative algorithms wouldn’t produce useful gradients in any language.

No. In Julia you can just stick the entire delay differential equation solver into the AD functions and get a gradient for parameter estimation. Saying you cannot use arbitrary Cython code is a limitation, and saying you cannot put a random large code base into AD is a limitation. It wouldn't be an issue if these weren't already solved problems, but having performant software with simple, available AD is not something that's unreasonable anymore. If you use Julia, it's just something you can expect to work.
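The mechanism claimed here can be sketched in Python using complex-step differentiation as a stand-in for Julia's dual-number forward AD (euler_solve and the model dy/dt = -p*y below are hypothetical examples): a solver written generically over its number type yields derivatives without modification.

```python
import math

def euler_solve(p, y0=1.0, t_end=1.0, n=1000):
    # Explicit Euler for dy/dt = -p * y; written generically, so it runs
    # unchanged whether p is a float or a complex number.
    dt = t_end / n
    y = y0
    for _ in range(n):
        y = y + dt * (-p * y)
    return y

def complex_step_grad(f, x, h=1e-30):
    # f'(x) ~= Im(f(x + i*h)) / h: the "richer number type" carries the
    # derivative through the whole solver, with no subtractive cancellation.
    return f(complex(x, h)).imag / h

g = complex_step_grad(euler_solve, 2.0)
# Analytically y(1) = exp(-p), so d/dp y(1) at p=2 is -exp(-2) ~= -0.1353
```

Forward-mode AD works the same way, propagating a dual component instead of an imaginary one; the point is that the solver itself needed no rewrite.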


No you can’t; at least, not if you want it to work. This has nothing to do with Julia, though, and I didn’t say you can’t run AD over the solver, but that it generally will not produce good gradient estimates unless the solver is written with AD in mind. DDEs are a mixed case where I’d expect some parts to be workable, but not in general. Another example is the FFT: totally worthless to use AD.


In Python I can choose whether to use autograd or numba; I can't JIT the autograd'ed function.

Seriously, read this issue:

https://github.com/HIPS/autograd/issues/47


I’m aware of that, but high performance, fat vector AD would be done with eg Theano or PyTorch, not autograd.


That's the problem: You're locked into one particular library. You can not combine libraries without sacrificing massive performance. There is just no way around that.

The numba story in that github issue mirrors my own experience: Excitement! This works! It's fast! OK, here are some limitations that I can work around. Hmm, I would really like to use this library; in principle it should be possible to JIT its output / make it JIT-compatible. In practice this turns out to be way too subtle. OK, I'm giving up; either I reimplement an algorithm directly in my hot loop or run slow code.

So yeah, as long as your problems do not cross the domain of one package, Python and its ecosystem is great. Stay with it. But I fully expect that we'll see a lot more innovation in Julia. Already now there are classes of problems for which no Python solution exists but which actually have library support in Julia. That's really really remarkable.

Despite my misgivings about the state of tutorials, the release handling process, and the approach to tooling, this is why I switched already. I also hope that all these aspects will improve massively post-1.0.

It's genuinely a liberation to no longer be confined to silos of DSLs that do not allow for low cost abstractions.

It's fine if you don't need it. I just don't understand why you hang out in a thread about Julia insisting that I don't need it either and could just use numba + Python when that's exactly what I've been using prior to Julia.


> That's the problem: You're locked into one particular library.

But that's not an issue resolved by any particular language; Julia appears to be free of lock-in because it hasn't had time to develop multiple, exclusive approaches to the same problems. Perhaps Julia builds the ultimate performance solutions into the language; OK then, for example, wait until there are N different web frameworks, and there you will find your silos. Python has many approaches to making things fast, which is why there are silos. /shrug

> I switched already

I probably would too if I was still a grad student.

> why you hang out in a thread about Julia

I was perusing whilst waiting for my Python code to complete, when I saw someone suggesting Python is already quite OK, catching some hate. I'm more than happy for the Julia community, but I think it's helpful to get exchanges accurate and critical.


Well I'm not a grad student, but my grad students were happy to switch, too. :) I think you are ignoring the structural reasons why Python has to lead to silos, and that these reasons are addressed in Julia. But in the end, time will tell.


I’m not ignoring anything, but trying to evaluate whether Julia will be worth supporting as the N+1 scientific stack in the lab I work for, and what to recommend to incoming students and people who consider getting off of MATLAB.

I think Julia looks very sexy and students jump on that, often without considering whether they will get their actual work done or spend time porting libs or debugging things I can’t help them with.


I was worried about the same. I think it's a valid reason to sit back and wait. Especially if Numba works for you.

I was also, and continue to be, worried about the tooling, the lack of good tutorials, and especially the type-driven system. I like it so far, but object-oriented is a lot more familiar to many people. The library situation for me specifically has tipped to be a net positive. I could also transition my very small team off Python completely.

So it was not an ad hoc decision; I tried it several times over the last few years and decided it wasn't there. In my specific situation, with a rewrite of a core library coming up and the library situation being what it now is, that changed early this year.


> What happens for that scientist when they have to dive into Julia’a stack to debug something weird?

The same things that happened when we had this conversation about what happens to the Fortran-writing scientist when Python and NumPy came along; even at that time it wasn't the first time. I am sure it would not have been a whole lot different when a COBOL alternative came along.


Not quite: the argument for Julia is that a casual user won’t drop down into C from Python for performance while in Julia it wouldn’t be necessary, thus easier.


Anecdote time: I hit a non-deterministic bug in one of the C-based Python packages we were using. Most of the time it worked, but we were running Monte Carlo on it and saw many errors.

I guessed correctly that it was using uninitialized memory, and errored when that wasn't zeroed out, but my C wasn't good enough to find where. Had this been Julia the whole C code would have been Julia code and I would have had a chance to dive in and debug. I ended up having to get a colleague who's fluent in C to help.


Let me guess, this can’t happen in Julia because memory is always initialized? Sounds like a performance hit if you know what you’re doing, so maybe you can use uninitialized memory in Julia and run into the same bug. Perhaps Julia makes it easy to use LLVM sanitizers? But you could’ve done this with your C code as well.


The bug could totally happen in Julia. The point was your question: "What happens for that scientist when they have to dive into Julia's stack to debug something weird?"

You then claimed that this was somehow better in Python + C, which does not match my experience. I expect this to be easier in Julia than in Python and C.

I totally agree that the tooling is not where it needs to be, btw, but now that the target has stopped moving I expect it to get there soon.



