embedding-shape 15 hours ago

This seems like a better introduction than the kernel repo specifically: https://github.com/charlotte-os/.github/blob/main/profile/RE...

> URIs as namespace paths allowing access to system resources both locally and on the network without mounting or unmounting anything

This is such an attractive idea, and I'm gonna give it a try just because I want something with this idea to succeed. Seems the project has many other great ideas too, like the modular kernel where implementations can be switched out. Gonna be interesting to see where it goes! Good luck author/team :)

Edit: This part scares me a bit though: "Graphics Stack: compositing in-kernel", but I'm not sure if it scares me because I don't understand those parts deeply enough. Isn't this potentially a huge hole security wise? Maybe the capability-based security model prevents it from being a big issue, again I'm not sure because I don't think I understand it deeply or as a whole enough.

  • LavenderDay3544 3 hours ago

    OP here.

    The plan is to hand out panes, which are just memory buffers to which applications write pixel data as they would on a framebuffer. When the kernel goes to refresh the display, it composites any visible panes onto the back buffer and then swaps buffers. There is nothing unsafe about that, any more than any other use of shared memory regions between the kernel and userspace, and those are quite prolific in existing popular OSes.

    If anything the Unix display server nonsense is overly convoluted and far worse security wise.

    • idle_zealot 2 hours ago

      Does this mean that window management has to be handled in the kernel? Or is there some process that tells the kernel where those panes should be relative to one another/the framebuffer?

  • Philpax 15 hours ago

    The choice of a pure-monolithic kernel is also interesting; I can buy that it's more secure, but having to recompile the kernel every time you change hardware sounds like it would be pretty tedious. Early days, though, so we'll see how that decision works out.

    • vlovich123 13 hours ago

      Why would you buy that it's more secure? Traditionally, in Windows, in-kernel compositing was a constant source of security vulnerabilities. Sure, Rust may help with the obvious memory corruption possibilities, but I'm not convinced.

      • LavenderDay3544 3 hours ago

        As opposed to the Unix way, where a networked display server is used? Exposing something that doesn't need to be exposed over a network is oh so secure, right? It must be, because Linux does it, and everyone knows Linux is the be-all and end-all of operating systems...

        But seriously, a lot of the design decisions Linux and other Unix-like systems make are horrible and poorly bolted onto a design from the 70s that aged very poorly. One of my goals with this project is to highlight that by showing how a system with a more modern design, derived from the metric ton of OS research done since the 70s, can be far better, and to show just how poorly designed and put together the million and one Unix clones actually are, no matter how much lipstick Unix diehards try to put on that pig.

    • LavenderDay3544 3 hours ago

      Incremental compilation makes that a lot less heavyweight than you would think and the idea is to automate the process so the average non-technical user doesn't need to know or care how it works.

    • astrange 10 hours ago

      A monolithic kernel and resource locators that automatically mount network drives? That's just macOS.

      (You don't have to recompile the kernel if you put all the device drivers in it, just keep the object files around and relink it.)

    • Rohansi 14 hours ago

      Why would you need to recompile if hardware changes? Linux manages just fine as a monolithic kernel that ships with support for many devices in the same kernel build.

      • ofrzeta 14 hours ago

        It's true that you can compile everything in but it's not really the standard practice. On a stock distro you have dozens of dynamic modules loaded.

        • ori_b 5 hours ago

          OpenBSD removed support for loadable modules. Hardware today is big enough that compiling everything in is fine, and we don't need a ton of fiddly code to put a special-purpose linker into the kernel. Saving a bit of memory isn't worth the risk.

  • incognito124 14 hours ago

    Recompiling the whole kernel just to change drivers seems like a deal-breaker for wider adoption

    • pjmlp 14 hours ago

      Quite common in Linux's early days.

      Also the only approach for systems where people advocate statically linking everything; yet another reason why dynamic loading became a thing.

    • skissane 9 hours ago

      Recompiling (or at least relinking) the kernel to change drivers (or even system configuration) is a bit of a blast from the past - from the 1960s through the 1980s it was a very common thing, called "system generation". It was found in mainframe operating systems (e.g. OS/360, OS/VS1, OS/VS2, DOS/360), in CP/M, and in NetWare 2.x (3.x onwards dropped the need for it).

      Most of these systems came with utilities to partially automate the process, with some kind of config file to drive it; NetWare 2.x even had TUI menuing apps (ELSGEN, NETGEN) to assist with it.

      • Brian_K_White 5 hours ago

        Not just old stuff like that, either. At least also all the SCO Xenix and Unixes up to the technically current OSR5, OSR6, and UnixWare. I don't know other (commercial) Unixes as well as SCO, but given where they all come from, I assume Solaris and most of the other commercial Unixes that still technically exist today have something at least somewhat similar.

        The sysadmin scripts would even relink just to change the IP address of the NIC! (I no longer remember the details, but I think I eventually dug under the hood and figured out how you could edit a couple of files and merely reboot without actually relinking a new kernel. But if you only followed the normal directions in the manual, you would use scoadmin, and it would relink and reboot.) And this is not because SCO sucked. Sure they did, but that was actually more or less normal and not part of why they sucked.

        Change anything about which drives were connected to which SCSI hosts on which SCSI IDs? Fuggeddabouddit. Not only relink and reboot, but also pray, and have a bootable floppy and a cheat sheet of boot: parameters ready.

    • surajrmal 12 hours ago

      If this kernel ever gets big enough where this might matter, I'm sure they can change the design. Nothing is set in stone forever and for the foreseeable future it's unlikely to matter.

      • LavenderDay3544 3 hours ago

        If there's enough demand for dynamic kernel modules, they can be added later. That's not a feature that you have to build your whole kernel around from the start. Linux definitely didn't, but it has it now, so it's definitely something that can be revisited or even made an opt-in feature.

  • jadbox 10 hours ago

    In theory, wouldn't it be possible for the Linux kernel to provide a URI "auto mount" extension too?

  • BobbyTables2 13 hours ago

    Wish OP had put that as the main readme.

    The intro page is currently useless.

    • embedding-shape 11 hours ago

      To be fair, the submission URL goes to the kernel specifically, so the README is good considering the repository it's in. The link I put earlier I found via the GitHub organization, which does give you an overview of the OS as a whole (not just the kernel): https://github.com/charlotte-os/

  • KerrAvon 13 hours ago

    In practice, the problem with URIs is that they make parsing very complex. You don't really want a parser of that complexity in the kernel if you can avoid it, for performance reasons if nothing else. For low-level resource management, an ad-hoc, much simpler standard would be significantly better.

  • whatpeoplewant 12 hours ago

    This looks like a very interesting project! Good luck to the team.

  • user3939382 14 hours ago

    I’m working on one with a completely new hardware, comms, networking, and infra stack. Everything.

the__alchemist 14 hours ago

I love seeing projects in this space! Non-big-corp OSes have been limited to Linux etc.; I'd love to explore the space more and have non-Linux, non-MS/Apple options. For example, Linux has these at its core, which I don't find to be a good match for my uses:

  - Multi-user and server-oriented permissions system.
  - Incompatible ABIs
  - File-based everything; leads to scattered state that gets messy over time.
  - Package managers and compiling-from-source instead of distributing runnable applications directly.
  - Dependence on CLI, and steep learning curve.
If you're OK with those, cool! I think we should have more options.
  • LavenderDay3544 17 minutes ago

    Linux is a big corp OS. Look at who the biggest contributors are and who funds the Linux foundation, ultimately paying Linus and friends' salaries.

  • grepfru_it 14 hours ago

    Haiku, Plan 9, Redox, and Hurd come to mind.

    ReactOS if you need something to replace Windows.

    Implementing support for Docker on these operating systems could give them the life you are looking for.

    • irusensei 4 hours ago

      I don't think they will like Plan 9 if file-based everything is a turn-off.

      Did you know the Go language supports Plan 9? You can create a binary from any system using GOOS=plan9, with amd64 and i386 supported. You might need to disable CGO and use libraries that don't have operating-system specifics, though. You can even bootstrap Go from it, provided you have the SDK.

      Incidentally 9Front is a modern fork of Plan9.

  • Zardoz84 10 hours ago

    BSD exists. Also OpenSolaris, Minix, etc.

    • ogogmad 8 hours ago

      I reckon each of these has at least 3/5 of the complaints the OP has about Linux, because they're all still Unix clones.

  • ogogmad 14 hours ago

    > Package managers and compiling-from-source instead of distributing runnable applications directly.

    Docker tries to partially address this, right?

    > Dependence on CLI, and steep learning curve.

    I think this is partially eased by LLMs.

    • LavenderDay3544 13 minutes ago

      They shouldn't have to. OS interfaces, including command-line ones, should be user-oriented, not bogged down in Unix dogma created when computers used physical text terminals as their primary I/O device. It's not the 60s anymore, and modern PCs, servers, and embedded devices aren't ancient mainframes with physical terminal hardware, where making everything appear to be a file and using convoluted scripting interfaces like the Unix shell made at least some sense.

    • the__alchemist 14 hours ago

      But you can see the theme here: Adding more layers of complexity to patch things. LLMs do seem to do a better job than searching forum posts! I would argue that Docker's point is to patch compatibility barriers in Linux.

    • Levitating 7 hours ago

      > Docker tries to partially address this, right?

      Docker is a good way of turning a 2kb shell script into a 400mb container. It's not a solution.

      Flatpak would be a better example.

ofrzeta 14 hours ago

So, what's modern about it? "novel systems like Plan 9" is quite funny because Plan 9 is 30 years old.

  • LavenderDay3544 11 minutes ago

    Plan 9 is novel compared to Unix, which almost every OS in common use mimics. But the reference to Plan 9 was more a nod to its namespaces and suitability for distributed computing, which partially inspired my design.

  • pjmlp 14 hours ago

    The sad part is that there are too many ideas from old systems lost in a world that, 30 years later, seems too focused on putting Linux distributions everywhere.

    • linguae 10 hours ago

      Indeed. I am reminded of what Alan Kay has repeatedly referred to as a “pop culture” of computing that has become widespread in technical communities since the 1980s, when the spread of technology grew faster than educational efforts. One result is there are many inventions and innovations from the research community that never got adopted by major players. The corollary to “perfect is the enemy of the good” is good-enough solutions have amazingly long lifetimes in the marketplace.

      There are many great ideas in operating systems, programming languages, and other systems developed in the past 30 years, but these ideas need to work with existing infrastructure due to costs, network effects, and other important factors.

      What is interesting is how some of these features do get picked up by the mainstream computing ecosystem. Rust is one of the biggest breakthroughs in systems programming in decades, bringing together research in linear types and memory safety in a form that has resonated with a lot of systems programmers who tend to resist typical languages from the PL community. Some ideas from Plan 9, such as 9P, have made their way into contemporary systems. Features that were once the domain of Lisp, such as anonymous functions, have made their way into contemporary programming languages.

      I think it would be cool if there were some book or blog that taught “alternate universe computing”: the ideas of research systems during the past few decades that didn’t become dominant but have very important lessons that people working on today’s systems can apply. A lot of what I know about research systems comes from graduate school, working in research environments, and reading sites like Hacker News. It would be cool if this information were more widely disseminated.

      • pjmlp 9 hours ago

        There is actually a talk like that from a couple of years ago; I'll have to see if I can find it again.

    • grepfru_it 13 hours ago

      There was also a period of time when everyone and their mom was writing a new operating system trying to replicate Linux's success.

      • pjmlp 12 hours ago

        Isn't what all those UNIX clones keep trying to do?

    • Razengan 13 hours ago

      Yeah, the more you read up on computing history from barely even 40 years ago, the more it seems that most of the things we take for granted today became so through politics (and, in the case of Microsoft, bullying) rather than merit.

      • Razengan 10 hours ago

        Regarding Microsoft: this was before even the "Browser Wars". They'd send suited people to the offices of Japanese PC manufacturers and threaten to revoke their Windows licenses if they even OFFERED customers the CHOICE of an alternative operating system!!

        This and other dirt is on any YouTube video about the history/demise of alternative computing platforms/OSes.

  • userbinator 2 hours ago

    Some people seem to like throwing around "modern" as a buzzword. I tend to automatically filter that out.

    • LavenderDay3544 9 minutes ago

      How else would you describe a system that isn't modelled after one designed in the 60s, as almost all the ones in common use today are?

      Your complaint is more pointless than what you're complaining about.

  • IshKebab 13 hours ago

    That's still newer than Linux's system design.

    • ofrzeta 13 hours ago

      In an operating systems course I attended, it was mostly Unix, and everyone was used to bashing Windows NT ("so crappy, BSOD, etc."), but we had Stallings' book and I was surprised to learn that NT was in many ways an improvement over Unix and Linux.

      • irusensei 4 hours ago

        NT is the brainchild of Dave Cutler, who also had a leading role in developing DEC's VMS.

      • exe34 12 hours ago

        NT the kernel is quite good. Windows NT itself was not always great.

        • brendoelfrendo 11 hours ago

          Is not always that great. Windows 11 is still based on the NT kernel. It's probably still good! Unfortunately the userland experience they put on top of it is just awful.

kragen 14 hours ago

It's comforting to see that capabilities with mandatory access control have become the new normal.

  • LavenderDay3544 7 hours ago

    Why choose one when combining both is better?

    • kragen 7 hours ago

      Exactly.

jancsika 12 hours ago

> GPLv3 or later (with proprietary driver clarification)

What's that parenthetical mean?

  • nathcd 12 hours ago

    Looks like it's explained here: https://github.com/charlotte-os/Catten/blob/main/License/cla...

    Specifically, "Users may link this kernel with closed-source binary drivers, including static libraries, for personal, internal, or evaluation use without being required to disclose the source code of the proprietary driver.".

    • jancsika 8 hours ago

      Ok, even Doug Crockford has mucked around with licensing before, so this is definitely a digression and not aimed at CharlotteOS which looks fascinating:

      I wish there were a social stigma in Open Source/Free Software against doing anything other than just picking a bog-standard license.

      I mean, we have a social stigma even among OS developers against rolling your own crypto primitives. Even though it's the same very general domain, we know from experience that someone who isn't an active, experienced cryptographer has close to a zero percent chance of getting it right.

      If that's true, then it's even less likely that a programmer is going to make legally competent (or even legally relevant) decisions when writing their own open source compatible license, or modifying an existing license.

      I guess technically the "clarification" of a bog standard license is outside of my critique. Even so, their clarification is shoe-horned right there in a parenthetical next to the "License" heading, making me itchy... :)

      • hunterpayne 6 hours ago

        It's almost impossible to have a non-begging-based business model with a standard open-source license. So unless you want to donate a lot of work to some huge company's bottom line for free, a standard open-source license is a non-starter. I'm sorry that you don't seem to understand the events that led to this state, but if you ever wrote an open-source platform that people wanted to use, you would know why the standard licenses don't work. That's why the social stigma is the other way around. Your position, from the POV of open-source devs, is naive at best and likely destructive to the developers themselves.

      • LavenderDay3544 7 hours ago

        OP here. It's not mucking around with the license, just making sure people know how the GPLv3 works. You are not required to provide source code for the combined work unless it is conveyed. If you combine the covered work with closed source but don't convey the resulting product, you are not required to provide any source to anyone.

        Many people don't know that, hence the clarification note.

  • LavenderDay3544 3 hours ago

    There's a note in the repo that clarifies the meaning of the GPLv3 regarding combining covered works with proprietary libraries when the resulting combined work is never conveyed. It doesn't modify the license; it just explains what the license means in that specific case, as we interpret it.

    Also to be clear I am not a lawyer and nothing I say constitutes any form of legal advice.

not4uffin 9 hours ago

I’m very happy I’m seeing more open source kernels being released.

More options (and thus more competition) are very healthy.

shevy-java 10 hours ago

Written in Rust. Hmm.

SerenityOS is written in C++.

I'd love some kind of meta-language that is easy to read and write, easy to maintain - but fast. C, C++, Rust, etc. are not that easy to read, write, and maintain.

  • LavenderDay3544 7 hours ago

    Being maintainable comes down to code quality, comments, and documentation. These are things that I really want to emphasize for this project, but for now I'm just one guy and it's very early days, so I have to focus on developing core kernel components first.

  • cultofmetatron 10 hours ago

    Fast necessitates manual control -> more semantics for low-level control that need to be expressible, i.e. more complex.

    Easy to understand and maintain -> the computer does more work to "figure things out" for you, in a way that simply can't be optimal under all conditions.

    TL;DR: what you're asking for isn't really possible without some form of AGI.

    • card_zero 8 hours ago

      What languages are easy to understand and maintain, anyway?

      • cultofmetatron 4 hours ago

        I'd argue that Python, Elixir, Ruby, and all manner of languages are easy to understand and maintain. I don't have to think about memory management or buffer overruns, and it's much easier to avoid race conditions since I'm not stressing about low-level details.

        By that same definition, Rust is pretty easy to maintain. I won't say it's easy to write, though.

ForHackernews 13 hours ago

How does this compare to SerenityOS? At a glance, it looks more modern and free from POSIX legacy?

  • LavenderDay3544 7 hours ago

    I don't know anything about SerenityOS so I can't really say but if you have any more specific questions I'd be happy to answer them.

pjmlp 15 hours ago

Interesting, and kudos for treading other paths and not being yet another POSIX clone.

varispeed 13 hours ago

A modern operating system, ready to face the challenges of today's political landscape, should natively support "hidden" encrypted containers: you would log in to a completely different, separate environment depending on the password. That way, when under threat, you could disclose a password to an environment you are willing to share, and the attacker would have no way of proving any other environment exists.

  • LavenderDay3544 7 hours ago

    The only thing political about this project so far is that I insist on it being free software and also free from tivoization. Well that and not going to insane lengths to support hardware whose vendors are clearly hostile to third party operating systems and free software in general like Apple and Qualcomm.

  • Razengan 13 hours ago

    It would be easy to tell for anyone seriously after you: if I kidnap you and make you log into your computer, and you log into the decoy state, it'd be obvious that the last time you visited any website etc. was over a month ago, and so on.

    • varispeed 12 hours ago

      For sure you'd have to use it from time to time.

      • mixmastamyk 11 hours ago

        Or, write a login script to touch files at random.

        • Razengan 10 hours ago

          That won't mask your online interaction history etc.

          Maybe an LLM agent posting crap at random? lol

          • varispeed 10 hours ago

            There could also be a "cross-over", so your real account could mount certain parts of the other file system as an overlay. That way you could have a browser that would be the same across the two environments, and the throwaway account could be seen as real, but it wouldn't show things you don't want compromised.

      • Razengan 10 hours ago

        Something I thought about long ago was that it would be better/easier to divide user accounts into "personas": different sets of public-facing IDs, settings etc.

        This could be done at every level: the operating system, the browser, websites..

        So if you don't care about the website knowing it's the same person, instead of having multiple user accounts on HN, Reddit, etc., you could log into a single account, then choose from a set of different usernames, each with its own post history, karma, etc.

        If you want to have different usernames on each website, switch the browser persona.

        At the OS level, people could have different "decoy" personas if they're at risk of state/partner spying or wrench-based decryption, and so on.