JSON is less verbose and is the de facto standard for the web; JSON is easier to parse in JavaScript (in the browser or Node.js), and serialization/deserialization is unambiguous. In XML you could put some information in attributes and other information in tags or as parameters.
instead of
<event name="gamma_size">
<arg name="size" type="uint"/>
</event>
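For comparison, here is a hypothetical JSON encoding of that event, parsed alongside the XML form with Python's stdlib (the JSON shape is my own sketch, not from the original post):

```python
import json
import xml.etree.ElementTree as ET

# A hypothetical JSON equivalent of the XML event above
event_json = '{"event": "gamma_size", "args": [{"name": "size", "type": "uint"}]}'
event = json.loads(event_json)

# The same data in the XML form, parsed with the stdlib parser
xml_src = '<event name="gamma_size"><arg name="size" type="uint"/></event>'
root = ET.fromstring(xml_src)

# Both carry the same information; the JSON version has exactly one
# obvious place for each field, while the XML version could equally
# have used child tags instead of attributes.
assert event["event"] == root.get("name")
assert event["args"][0]["type"] == root.find("arg").get("type")
```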
Your example is more verbose and harder to read... XML is definitely misused in many applications where JSON is appropriate, but this is not one of them.
We have TOML in the sense that TOML is currently v0.4.0 and bears an explicit warning that the specification is unstable and should be treated accordingly.
That's unfortunate, but there are a lot of projects out there that depend on TOML, and it hasn't changed since 2015. It's de facto stable at this point.
It's not. It's barely more verbose than JSON and a hell of a lot more readable. JSON is appropriate for machines to talk with. XML is appropriate for writing documentation or API specs.
Thanks for the insight. Some of the comments revealed why XML was picked. I personally agree that, depending on the problem, you must pick the right tool: what is a good fit in one context might not be the best fit in another. Moreover, there are probably more mature and performant parsing libs in C for XML than for JSON.
However, XML might be overkill, as it may be too complex for this particular scenario. XML parsing is complicated, and from time to time there are bugs in those parsers. A simpler format might also get the job done and might be safer. On the other hand, XML might provide the flexibility to scale.
Finally, this is what Torvalds said about XML when someone suggested using it in git:
> XML is crap. Really. There are no excuses. XML is nasty to parse for humans, and it's a disaster to parse even for computers. There's just no reason for that horrible crap to exist.
My experience: while at university I applied for an open position at Google on their website, and after a few mails an interview was arranged. I was positively surprised by their attention to detail in setting the interview at the right time of day for me. Wow! When I received the call, though, I was surprised, because they called me on the wrong day, at the wrong time: it was dinner time for me, and I had drunk a couple of beers. It was stressful, the interview was difficult and unsuccessful, and after a couple of days they told me by mail that the interview had not been successful. I'm from Italy; here companies adopt the Hollywood Principle, "don't call us, we'll call you", because there are too many applicants and few good jobs. I digested it: maybe I'm not good enough for the position, but they are not perfect either; at least they try to be gentle.
After a couple of years I received a mail about another interview. I was surprised because I hadn't solicited it, and I was very relaxed because I had been discarded the previous time, so why should this time be different? (I'm fine with the fact that better developers exist.) The interview seemed to go easily; at the end the interviewer asked me where I'd like to work if I could decide, and described some Google offices in different countries. It was too exciting to be true, so I replied that they were all the same to me, but asked why they had called me, since in the previous interview I had been discarded. He was surprised, gave me some info, and asked for some info about the previous interview. This time too I was informed by mail that the interview was not successful.
Lesson 2: I think they called me by mistake, so that's two mistakes in two interviews. Nobody is perfect.
In other words, TempleOS is a unikernel with a JIT compiler.
The name is odd and the author is strange, but do you judge a movie by the private life of its author?
The illnesses of van Gogh and Gödel didn't affect their creations.
The missing graphical interface is a defect, though.
A unikernel is a strange beast; it sounds crazy, but it has some advantages: it is simpler, it can be understood completely by a human (and not only by a genius), there is no penalty for system calls, and no CPU time is wasted on context switches. Because machines are getting cheaper it is common to have only one user, so all this security becomes useless; in the long run unikernels will be mainstream. Kudos to him.
A relevant discussion about unikernels is here:
https://news.ycombinator.com/item?id=10362897
The advantage of multiple "users" isn't just blocking one person's access to another person's stuff. It's also a valuable tool for sandboxing system processes which need to run in parallel on the same host system.
In addition to that, a monolithic unikernel (as unikernels usually are) would have a higher tendency for kernel panics.
So I really can't see unikernels becoming mainstream. If anything, the reverse trend is true, with more complex kernel designs like microkernels becoming more favourable as computing hardware gets cheaper.
The real growth area for unikernels is virtualized appliances, eg running a single purpose service as a Xen unikernel. But even that is awfully niche and often better served (particularly in terms of developer and sysadmin productivity) with containers these days.
In that case, you need process isolation and permissions, not user privileges. Prior models for mandatory access control and capability-based security can already do what you're describing. KeyKOS did it in production on mainframes decades ago, with the extra benefit of persistence for app data. System/38 did one of those models, too, at the CPU level. It later became AS/400 and IBM i. AS/400s run and run and run.
So, if you want POLA and damage containment, one option is imitating old designs that pulled that off. Patents expired, too. ;)
Oh I'm fully aware there are a thousand different ways to accomplish similar results. Further to your point, you can also support multiple physical users without actually running a multi-user system as well (eg Windows 95).
However, you have to bear in mind that this tangent did start off as an exercise in generalisations, so I was following on from that by pointing out that many current multi-user systems also use user accounts as a tool for reducing the exposure a process has. While you'd obviously agree that it's a long way from being the most secure method of hardening an OS, it is still a pretty typical way for many desktop systems to operate.
I think unikernels make sense in conjunction with languages like Rust. If your compiler is making sure it won't generate code that could cause a segfault, the run-time checks to do the same are unnecessary overhead.
(There are, of course, some details that would need to be worked out like how to handle unsafe code blocks, how to run programs written in unsafe languages, and how to enforce a policy of only executing code compiled with a trusted compiler, but none of these seem like fundamentally insurmountable obstacles and the benefits of being able to make a system call without any more overhead than a function call are pretty big for certain applications.)
I'm assuming that applications are written in Rust as well and that the OS is configured to either refuse to run binaries compiled with an unknown compiler or it runs them inside a CPU emulator.
And no, I don't think this solves all of the potential security problems that could exist. What it does accomplish (if you're willing to trust a compiler in the same way that we're expected to trust the MMU in our computers, which might not be warranted at this stage of the development of Rust) is that it solves the problem of one program reaching into another program's address space when it isn't supposed to.
Strictly speaking, it would be more secure to have both compiler-enforced protections and an MMU, so that a compiler bug won't compromise the whole system. It is, however, at least theoretically possible to have secure process isolation without relying on an MMU. That's a big deal, because context switches are expensive and if there's a way to get the same safety without the overhead, someone is likely to build a system that takes advantage of that.
I didn't actually say a unikernel couldn't be a micro kernel though; just that there was a tendency for them to be monolithic.
Unless I've overlooked a bunch of kernels - and I'm happy to be proven wrong - usually you'll find unikernels to be monolithic, because those two approaches address similar objectives in kernel design (namely a simpler layout and improved performance).
However correlation or not, you're probably right that my point regarding monolithic kernels wasn't entirely relevant to the rest of my post and thus just risks confusing things.
In photo #4 you can see a man pointing at a pretty insecure computer, Windows XP :)
XP should not be used anymore because there are no more security updates, or am I wrong?
The image has been used in a number of articles, the oldest of which I was able to find was from 2013 - I would assume the image is older than that even.
Enterprises can buy expensive support contracts directly from MS and still receive security updates. This is fairly common in big companies and in government. There are articles about them paying for XP custom support from 2014; I'm assuming they still are.