Davidbrcz's comments | Hacker News

Just need to check whether it's plain Ada, one specific profile, or SPARK.

Refreshing stories between all the AI ones (and crypto/web3 before that)

Ironically, once upon a time Prolog and logic programming in general were part of the cutting edge of AI. There's quite a fascinating history of Japan's fifth-generation computing efforts in the 1980s, when Japan focused on logic programming and massively parallel computing. My former manager, who is from Japan, earned his PhD in the 1990s in a topic related to constraint logic programming.

I remember when so-called "expert systems" written in Prolog or LISP were supposed to replace doctors. Then came the (first) AI winter after people realized how unrealistic that was.

Nowadays LLMs are supposed to replace doctors... and that makes even less sense, given that LLMs are error-prone by design. They will hallucinate; you cannot fix that because of their probabilistic nature. Yet all the money in the world is thrown at people who preach that LLMs will eventually be able to do every human job.

The second AI winter cannot come soon enough.


The LLM collapse will see classical AI reborn, with environments like Common Lisp.

Even now NEC makes some cool massively parallel chips and accelerators that I wish were more mainstream because they look like they'd be fun to play with.

You said AI: https://github.com/stassa/louise

  Louise (Patsantzis & Muggleton 2021) is a machine learning system that learns Prolog programs.

  Louise is a Meta-Interpretive Learning (MIL) system. MIL (Muggleton et al. 2014), (Muggleton et al. 2015), is a new setting for Inductive Logic Programming (ILP) (Muggleton, 1991). ILP is a form of weakly-supervised machine learning of logic programs from examples of program behaviour (meaning examples of the inputs and outputs of the programs to be learned). Unlike conventional, statistical machine learning algorithms, ILP approaches do not need to see examples of programs to learn new programs and instead rely on background knowledge, a library of pre-existing logic programs that they reuse to compose new programs.

This is what Douglas Lenat did from the late 1970s on [1]. He did his work in Lisp; this thing does something close using Prolog.

[1] https://en.wikipedia.org/wiki/Eurisko


If we're going down that path: Ehud Shapiro got there back in 1984 [1]. His PhD thesis is excellent and shows what logic programming could do (/could have been).

He viewed the task of learning predicates (programs/relations) as a debugging task. The magic is in a refinement operator that enumerates new programs. The diagnostic part was wildly insightful -- he showed how to operationalise Popper's notion of falsification. There are plenty of more modern accounts of that aspect but sadly the learning part was broadly neglected.

There are more recent probabilistic accounts of this approach to learning from the 1990s.

... and if you want to go all the way back you can dig up Gordon Plotkin's PhD thesis on antiunification from the early 1970s.

[1] https://en.wikipedia.org/wiki/Algorithmic_program_debugging


People manually doing resource cleanup by using goto.

I'm assuming that using defer would have prevented the gotos in the first place, and the bug.


To be fair, there were multiple wrongs in that piece of code: avoiding typing with the forward goto cleanup pattern; not using braces; not using autoformatting that would have popped out that second goto statement; ignoring compiler warnings and IDE coloring of dead code or not having those warnings enabled in the first place.

C is hard enough as is to get right and every tool and development pattern that helps avoid common pitfalls is welcome.


The forward goto cleanup pattern is not something "wrong" done to "avoid typing". Goto cleanup is the only reasonable way I know to semi-reliably clean up resources in C, and it is widely used in most of the large C code bases out there. It's the main way resource cleanup is done in Linux.

By putting all the cleanup code at the end of the function after a cleanup label, you have reduced the complexity of resource management: you have one place where the resource is acquired, and one place where the resource is freed. This is actually manageable. Before you return, you check every resource you might have acquired, and if your handle (pointer, file descriptor, PID, whatever) is not in its null state (null pointer, -1, whatever), you call the free function.

By comparison, if you try to put the correct cleanup functions at every exit point, the problem explodes in complexity. Whereas correctly adding a new resource using the 'goto cleanup' pattern requires adding a single 'if (my_resource is not its null value) { cleanup(my_resource) }' at the end of the function, correctly adding a new resource using the 'cleanup at every exit point' pattern requires going through every single exit point in the function, considering whether or not the resource will be acquired at that time, and if it is, adding the cleanup code. Adding a new exit point similarly requires going through all resources used by the function and determining which ones need to be cleaned up.

C is hard enough as it is to get right when you only need to remember to clean up resources in one place. It gets infinitely harder when you need to match up cleanup code with returns.


In theory, for straight-line code only, the If Statement Ladder of Doom is an alternative:

  int ret;
  FILE *fp;
  if ((fp = fopen("hello.txt", "w")) == NULL) {
      perror("fopen");
      ret = -1;
  } else {
      const char message[] = "hello world\n";
      if (fwrite(message, 1, sizeof message - 1, fp) != sizeof message - 1) {
          perror("fwrite");
          ret = -1;
      } else {
          ret = 0;
      }
  
      /* fallible cleanup is unpleasant: */
      if (fclose(fp) < 0) {
          perror("fclose");
          ret = -1;
      }   
  }
  return ret;
It is in particular universal in Microsoft documentation (but notably not in actual Microsoft code; e.g. https://github.com/dotnet/runtime has plenty of cleanup gotos).

In practice, well, the “of doom” part applies: two fallible functions on the main path is (I think) about as many as you can use it for and still have the code look reasonable. A well-known unreasonable example is the official usage sample for IFileDialog: https://learn.microsoft.com/en-us/windows/win32/shell/common....


I don't see this. The problem was a duplicate "goto fail" statement where the second one caused an incorrect return value to be returned. A duplicate defer statement could directly cause a double free. A duplicate "return err;" statement would have the same problem as the "goto fail" code. Potentially, a defer based solution could eliminate the variable for the return code, but this is not the only way to address this problem.


Is that true though?

Using defer, the code would be:

    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        return err;
        return err;
This has the exact same bug: the function exits with a successful return code as long as the SHA hash update succeeds, skipping further certificate validity checks. The fact that resource cleanup has been relegated to defer so that 'goto fail;' can be replaced with 'return err;' fixes nothing.


It would have resulted in an uninitialized variable access warning, though.


I don't think so. The value is set in the assignment in the if statement even for the success path. With and without defer you nowadays get only a warning due to the misleading indentation: https://godbolt.org/z/3G4jzrTTr (updated)


No it wouldn't. 'err' is declared and initialized at the start of the function. Even if it wasn't initialized at the start, it would've been initialized by some earlier fallible function call which is also written as 'if ((err = something()) != 0)'



You do a cross analysis.

- Compile it with the maximum number of warnings enabled

- Run linters/analyzers/fuzzers on it

- Ask another LLM to review it


For main it's explicitly allowed by the standard, and no return is equivalent to return 0.


Which is super weird. If they can tell the compiler to allow no return, only for main, they can also tell it to pretend a void return is an int return of 0, only for main.


Taxes, that's called taxes.


I'm visually impaired: dark mode is about making the web usable for me.

https://drgrizz.xyz/dark-mode.html


Yes, I'm working on FAA SWIM services now. The tech feels old school (UML, XSD...), but having documentation and class generation from XSD is very helpful.


That's some Westworld level of discussion

