Sounds like you're expecting the AI-based tools that are finding bugs to also provide fixes.
I've been dealing with a bunch of AI-generated (or at least -assisted) vulnerability reports lately. In many cases the reports include proposed patches to fix the issues.
It's been... interesting. In many cases, the analysis provided in the report has been accurate and helpful. In some cases, the proposed patches have also been good, and we've accepted them with minimal or no changes.
In other cases, despite finding a valid issue, and even providing a good analysis of the problem, the AI tool's suggested patch has been, quite simply, wrong.
Careful review from somebody who really _understands_ the code -- and the wider context in which it is operating -- is still absolutely necessary. That's not always going to happen in an hour.
Yes, that's why I specified "patched product ready for QA testing". It speeds up the development cycle by making a first pass and ensuring the fix basically works before passing it to a developer for manual review and to a QA tester to ensure it doesn't break anything else. Both dev and QA are still in the feedback loop and can make changes until it's ready for release.
An example worth considering is TeX, which is now 43 years old (considering only TeX82; the earlier TeX78 was a substantially different piece of software). There has been some maintenance over the years, it's true, including a few feature additions in 1990 (TeX 3.0), but I would suggest it has shown itself to be extremely durable.
At the heart of this are two wildly different technologies:
- Literate Programming which was developed so as to work around limitations of the Pascal development stack as it existed when the project was begun: http://literateprogramming.com/
- web2c which allows converting .web source into a format which may be compiled by pretty much _any_ C compiler
LP was described by Knuth as more important than TeX, but it suffers a bit from folks not understanding that it's not so much documentation (if it were, then _The TeXbook_ would be the typeset source of plain.tex) as code commentary, useful only to developers working to extend or make use of the application. There really does need to be some sort of system for manual documentation, and I suspect that for the foreseeable future it will continue to be a talented technical writer.
Except that it's not clear whether the intelligence† that underlies their homing ability is equally effective in helping them evade predators.
†Is "intelligence" even the right word here? I don't know. Much depends on how you define it, I guess, combined with the unknowability of the pigeon's own mental processes.
If you view the source it looks like they goofed and don't actually have any white space between the elements in the rendered example. It's just `<ul class="inline-block-list"><li></li><li></li></ul>`.
When I try the code they show on the page myself the space is displayed.
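For anyone who wants to reproduce the difference: with `display: inline-block`, any literal whitespace between the `</li>` and `<li>` tags in the source is rendered as a gap, so the two markups below look different. A minimal sketch (the item text and styling here are made up for illustration):

```html
<style>
  /* Assumed styling: the class name matches the one in the rendered example. */
  .inline-block-list li { display: inline-block; background: #eee; }
</style>

<!-- No whitespace between the elements: the items sit flush together. -->
<ul class="inline-block-list"><li>one</li><li>two</li></ul>

<!-- Newlines and indentation between the elements: a space appears between items. -->
<ul class="inline-block-list">
  <li>one</li>
  <li>two</li>
</ul>
```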
Yes, agreed! The first time I visited San Antonio (as a Brit who was based in Dallas for some months) happened to be at exactly that time. We didn't know anything about it, but found ourselves on the river walk in the evening, and along came the parade of boats bringing Pancho Claus & co... it was a lovely surprise and a beautiful evening.
It does, but those would only be applied if the `font-variant-ligatures: historical-ligatures` declaration were specified, so they don't appear on this site.
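To make that concrete, opting in looks something like this (the selector is hypothetical; the low-level `font-feature-settings` line is the OpenType-feature equivalent, shown only as an alternative):

```css
/* Historical ligatures stay off by default; a page has to opt in. */
.fancy-text {
  font-variant-ligatures: historical-ligatures;
}

/* Roughly equivalent opt-in via the raw OpenType feature tag: */
.fancy-text-alt {
  font-feature-settings: "hlig" 1;
}
```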
I inspected for a ligature and any evidence of CSS kerning being turned on before commenting, but I didn't test to see what the page looked like with it turned on, so I wasn't aware of the possibility of a ligature. If I'd known, it would have been better to allow for the possibility that kerning was somehow being activated by OP's browser. I should have known better than to make a remark about a font without absolutely scrupulous precision! I actually appreciate the comments and corrections.