A Survey of System Languages, 2024
Introduction
In case you missed it, there’s a whole new generation of low-level programming languages being created right now. Rust demonstrated vividly in 2016 that there is a massive unmet need in this space, and it has spawned a bunch of successor languages, though none of them have really hit the relative big-time like Rust has. I’m one of those programmers that has spent 20 years or so asking “can we PLEASE do low-level code in anything besides C and C++?” so I wanted to take an actual look at the languages I know about and do a bit of compare-and-contrast.
This is an opinionated take, and honestly not a particularly deep one, since I don’t know most of these languages and I don’t want to spend months researching each one. I’m also not gonna quibble about what is and is not low-level. I’m moving long-distance soon, and my brain needed something interesting and productive-feeling but not terribly deep, so I’m gonna spend a max of a day or two on each language. I’m gonna give a little bit of history but confine my opinions to the actual languages rather than the whole ecosystem and tooling around them, and just go down the list in chronological-ish order. Tools are very important, but they’re also far more mutable than the languages themselves, and easier to fix later.
To try to have some kind of consistent framework to judge each language, I made a list of ten Arbitrary Criteria:
- Performance – Any language can be fast; the question is, does it take shitloads of work to make it fast?
- Basic type system – Does it have sum types?
- Generics – Do I have to use `void *` everywhere?
- Spatial safety – Are arrays bounds-checked?
- Temporal safety – Is it trivial to return a pointer to a local variable?
- Modules and package system – Do I need to write header files and makefiles by hand?
- Low-level junk and FFI – Does it put in the work to make it easy to talk to hardware or foreign software?
- Separate compilation – Does it take 10 minutes to recompile my code? (I just learned that these things are more loosely connected than I thought. Maybe it should be “incremental compilation” instead, but it’s not just that… oh well, I’ll leave it as it is this time.)
- Joy – Does using it make me happy?
- Dread – Does having to use it give me The Fear? (Higher is still better here: 1 means lots of fear, 10 means hardly any.)
All ratings are out of 10. This isn’t IGN: if something deserves a 2, I’m gonna give it a 2. That doesn’t mean it’s crap and whoever thought of it deserves the eternal fires of hell, just that I can think of ways it could be better.
This list is incomplete, of course. There are plenty of other criteria you can consider, these are just ones that matter to me, and they can be judged fairly easily without having to dig too deep. Again, these are all fairly superficial takes. I’m not gonna spend a year or two getting real good at each language.
Last updated in December 2024.
C
C is the baseline of low-level programming; hopefully it doesn’t need much more introduction. If you can’t write something in C, it’s probably not worth writing. We all know it, we often use it whether directly or indirectly, and some of us even like it. Those poor suckers.
Performance: 9/10 – Compilers and CPUs both put lots of work into making C faster, and it doesn’t give you many abstractions that add hidden runtime costs to things. It’s possible to be Faster Than C without writing hand-tuned assembly code, but it’s not very common.
Basic type system: 3/10 – How big is a `long int`? Does `x = y;` for random integer types silently truncate `y` or not? If `x` is signed and `y` is unsigned, does it zero-extend or sign-extend `y`? What if you do `y = x;` instead? What the hell order do you write `typedef`’s in? That said, it could be a lot worse, and C99 makes it a lot better; all the basics you need to answer these questions unambiguously are more or less there if you look hard enough. Even though `main(void)` and `main()` are still very different things.
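For contrast, here’s roughly how those same assignments look in Rust, where every width or sign change has to be written out (a quick sketch of my own, not from any spec):

```rust
fn main() {
    let y: u64 = 300;
    // let x: i32 = y;    // compile error: no implicit integer conversions
    let x = y as i32;     // explicit cast; truncation is defined, if lossy
    let z = i64::from(x); // lossless widening, still spelled out
    println!("{x} {z}");
}
```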
Generics: 1/10 – Oh boy, do I feel like writing this generic API using void pointers or textual macros! Which should I choose? How about… neither, please.
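For reference, the thing I actually want looks something like this in Rust: one generic definition, stamped out per concrete type at compile time, no void pointers or macros in sight (a quick sketch):

```rust
// Find the largest element of a non-empty slice.
// The compiler monomorphizes this for each T it's used with.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut best = items[0]; // panics on an empty slice; fine for a sketch
    for &item in &items[1..] {
        if item > best {
            best = item;
        }
    }
    best
}

fn main() {
    println!("{}", largest(&[3, 7, 2]));       // i32
    println!("{}", largest(&[1.5, 0.2, 9.9])); // f64
}
```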
Spatial safety: 1/10 – Bounds-checked arrays? Not in my code! What do you mean the costs of bounds checks are negligible? Get that communist propaganda out of here!
Temporal safety: 1/10 – Oh my no. Even when the compiler can prove beyond all doubt that something is an error, like returning a pointer to a local variable, it’s a warning. If you’re lucky.
Modules: 1/10 – Textual inclusion and “choose a unique prefix for your names”, truly the state of the art in software engineering. It works so badly that there is a cottage industry of single-header libraries to dodge the whole issue, at the cost of recompiling lots of code without sharing any of it. Looking at it that way, C’s support for sharing code is so poor that a significant number of people prefer to not even try to use it.
Low-level junk: 6/10 – I’m not only a hater; it is actually pretty nice to do bit-twiddling and memory-mongling in C. When writing a hobby OS or embedded software in C, a lot of its flaws matter surprisingly little. However, I think that what you need for “low-level programming” in 2024 is different than it was in the 1970s and 80s, and C only really gives you some of that. C’mon, you don’t have a standard `popcount()` function? Git gud. But more than such details, some of C’s assumptions about what programs are and are not allowed to do are so arcane and compiler-dependent, and so poorly explained in the standard, that it’s pretty difficult to actually learn all the ins and outs. Large, real-world low-level C projects like the Linux kernel will always have to build in compiler-specific assumptions, and probably not just a few of them.
Separate compilation: 8/10 – Actually pretty good considering the total lack of a module system; it sucks a lot for the human to use but it’s actually very hard to do better from a compiler’s perspective. It’s possible, but hard. There’s just lots of delicate design tradeoffs around generating generic code and doing inlining and stuff where a C compiler gets to explicitly say “not my problem, just write your code better.”
Joy: 5/10 – Few things piss me off more than having to write header files to tell the compiler things it can figure out on its own. But apart from that… It’s really kinda fun to write C! It’s not like, good, but it is fun. It’s an involving sort of puzzle.
Fear: 2/10 – It’s kinda fun up until you have to actually figure out someone else’s weird code.
Mean: 3.7 +/- 2.9
Median: 2.5
Closing thoughts: yeah, C runs a lot of the world. But really now. Really. We can do so much better.
Ada
Ada was developed in the 1980s by the US military as a “big complicated systems language” similar to C++, and then spent 30 years with the stigma of being a big complicated systems language like C++ when most of the languages anyone actually wanted to use were small, easy to implement and growing organically. Further hindering its adoption, for the longest time there was no particularly good open-source Ada compiler. The main one is GNAT which is part of GCC, but I tried to use it a couple times Back In The Day of 2005 or something and found it perennially incomplete, not distributed by distros, difficult to build from source, and/or with questionable licensing of the compiler and standard lib. IIRC it was mainly developed by a company called AdaCore as a semi-proprietary product and nobody really knew where the lines were drawn, and nobody felt like digging in and finding out the hard way. Then when Rust cropped up in the late 2010’s some people said “you know Ada has had a lot of these nice features for decades” and people actually started looking seriously at it again. Turns out the standards and expectations for “this language is too complicated” have changed since 1985, GNAT (and GCC) has matured into a much more solid set of tools, and AdaCore has established itself as a fairly friendly company selling tools and services for program verification rather than “it’s 2002, tomorrow they might sue us for using their library code, or they might vanish entirely, nobody really knows”. Open source was a much more risky endeavor back when a huge chunk of the tech industry was still built on proprietary OS’s and languages.
Ada really deserves a serious look from a modern Rustacean’s perspective, but, well, I don’t wanna. I’m curious though. Oh, I actually have some code lying around from the last time I tried to learn Ada in… uh, 2004. And AdaCore has a decent tutorial… Fiiiiiiiine, let’s do this.
Performance: 7/10 – It’s fine. Much like C, there’s not many built-in features that have runtime performance costs. Unlike C, it does do array bounds checking and stuff, so I’ll ding it a little. To me that’s a tradeoff that is absurdly worthwhile, and probably has been since 1995 or something. However, some of the pointer lifetime shenanigans described below probably require a nontrivial amount of bookkeeping, and will result in more restrictive designs to get around the limitations on pointer lifetimes and aliasing.
Basic type system: 6.5/10 – Types are heavily nominal and there’s few implicit conversions, compared to C’s “yeah looks like an `int`, good enough”. Along with structs there’s “aggregates”, which are more or less tuples. There’s sum types in there, under the title of “variant records”, but there’s also lots of other stuff like dynamically sized structs. There’s integers defined on specific ranges, and just plain ol’ ranges that for loops iterate over. In fact you don’t define integers with particular sizes, you define them by ranges and let the compiler decide how big they need to be. This is a kinda neat approach that is uncommon in modern languages, but I also feel like you usually only want two kinds of integer: “exactly the machine size I tell you”, or “potentially infinitely large”. (In reality, without other qualifications GNAT’s `Integer` type appears to be 32 bits on x86_64.) You can also define integer types that explicitly wrap when they exceed their range, and make integers that are subtypes of other types including enums and integer ranges, and all integers have various “attributes” that can be metadata about the types or might be full-blown methods, and if you try you can define what machine word size under the hood is actually used to represent integers… Okay, this is going from “neat” to “overdesigned”. There’s lots of powerful and interesting stuff in Ada’s type system which accomplishes the same sort of stuff as Rust or Zig’s strong and sophisticated types, but this is still a language made by the US military in the 1980s so you really can’t call it “progressive”. There’s no building things up from basic concepts, just piling feature atop feature until you have an endless pile of one-off special cases. What else is there… procedures and functions are different things. You can have nested functions that can access their outer function’s local variables, but no real closures. It looks like you can even write first-class functions, if you try hard enough. That said, functions mostly aren’t first-class values, there’s basically no type inference… Everything you want is probably in there, if you can find it.
Generics: 5/10 – They exist! There’s both data structures and functions with generic type parameters, and functions can also be overloaded with different argument types. However, it doesn’t loooook like there’s any good way to do what type theory goons call “existential types”? This is basically what OO inheritance or Rust traits is for; “existential types” are basically any feature that lets you write `fn foo(x: impl Something)` and then call it with any type that has a `Something` trait which defines some interesting operations. The examples I can find for generics in Ada do things like swap values or concatenate arrays where the only operation done on the generic type is “copy”. I wouldn’t be surprised if there was functionality for these in there somewhere, but I also wouldn’t be surprised if it’s so complicated that nobody uses it.
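Since I keep using Rust as the yardstick, here’s the shape of the feature I mean, as a tiny sketch (the trait and type names are made up for illustration):

```rust
trait Something {
    fn interesting_op(&self) -> String;
}

struct Gizmo;

impl Something for Gizmo {
    fn interesting_op(&self) -> String {
        "whirr".to_string()
    }
}

// Callable with *any* type that implements Something.
fn foo(x: impl Something) {
    println!("{}", x.interesting_op());
}

fn main() {
    foo(Gizmo);
}
```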
Spatial safety: 9/10 – `case` statements must be exhaustive. Integer overflows are checked. Pointers are called “access types”, and are nullable but are null-checked when dereferenced. Variables more or less can’t be uninitialized, as far as I can tell. There’s a notion of “definite” vs “indefinite” types, which appear to be types whose size is vs. is not known at compile time; the latter can only be touched via pointers. So, there’s a notion like Rust’s `Sized`. Apparently access types can contain bounds checks for indefinite types, so they really are kinda like slices, not just plain pointers; they presumably can also be implemented like Rust references, where they may be a pointer but don’t have to be if the compiler doesn’t feel like it. But definite and indefinite types aren’t interchangeable, so for example there’s different stdlib containers for the definite and indefinite types. All in all it seems pretty foolproof? There’s just so much going on it’s hard to know for sure, so I’ll give it a 9.
Temporal safety: 7/10 – Pointers/access types can usually only point to dynamic memory. The AdaCore tutorial recommends not deallocating dynamic memory at all, which is certainly one way to solve the problem. It says that memory in the stdlib dynamic containers like vectors and sets is managed automatically for you, but not much about how. Ooooh, it looks like each pointer– erm, access type is bound at compile-time to a particular memory pool, of which the stack is one. Turns out you can also limit the size of memory pools and it’s explicitly checked; I wonder if it also uses that to detect stack overflows. I’m slowly getting the picture: it’s region-based memory management, where pointers can point to other values in the same regions, but cannot point between regions and cannot alias values unless explicitly annotated as doing that. That’s actually fairly based. It’s very restrictive, but also compile-time-provably safe except in the places where you say you want to break the rules. Ada’s memory pools might be managed manually, or might be handled by the compiler in a stack-based fashion, I can’t figure it out. Freeing a single pointer is possible, but it modifies the metadata attached to the access type so a double-free doesn’t corrupt the heap. I haven’t checked, but this might also imply that access types are actually double-pointers or some other kind of handle, so that `free` can invalidate all pointers pointing to a memory location? Coaxing Ada into producing an invalid pointer to a local variable is pretty difficult; they are mostly just not allowed. Aliasing where multiple pointers point to the same thing is also mostly not allowed. You can do all those things if you try, but it takes extra work and the language does its best to not let different sorts of pointers interact freely with each other. There appears to be nothing like refcounted or smart pointer types in the standard, though GNAT seems to offer an add-on lib that contains something like them. It seems like there’s also “controlled types” which loooook like C++-style RAII, complete with objects and copy constructors. I wonder if anyone actually uses them? Sheesh, that’s a lot of stuff.
Modules: 5/10 – Better than expected, really. Ada itself doesn’t enforce any file naming conventions but GNAT creates its own which seem to work fine. `gnat make` will trace through the file you give it looking for module imports and compiling them as necessary, leaving behind machine-generated interface files. You can write interface files by hand but it seems like you usually don’t have to, perhaps unless you want to declare private vs public functions and types? Oh, public types might also require interface files. Forward declarations are necessary if you need mutual recursion, but mutual recursion is pretty rare in general. There’s more to explore but it’s frankly so tedious and complicated that I have a hard time wanting to. Figuring out how the heck to put multiple functions into the same file is unironically difficult.
Low-level junk: 5/10 – I’m getting a little tired by now so I don’t feel like digging too hard. I’m sure it has everything you want and more, if you can find it and figure out how you’re supposed to use it. There’s first-class support for FFI and you can probably do whatever you want, if you put the time in.
Separate compilation: 8/10 – Some poking around suggests that each file is a compilation unit, and GNAT seems disinclined to inline between compilation units. It’ll inline inside them just fine though. All in all, I’ll call this pretty much on par with C.
Joy: 1/10 – From my brief playing around, defining new functions and even new variables is so much work that you find yourself really trying pretty hard to avoid it. For something that’s intended to be easy to read, the verbose and optional-keyword-laden data declarations sure confuse my eyes. A lot of that is probably just lack of familiarity, but still. From my inexpert skimming it feels like Ada has no orthogonal, powerful abstractions that combine to make emergent features like Rust’s `Result` or `Send`, or Erlang’s processes and messages. Every single thing is a hardcoded special case that you are permitted to put in the places where the compiler likes it and combine in the ways that the compiler deems acceptable. The tedium of all the extra keywords and how programs are laid out is unfamiliar, but I have a really hard time imagining myself getting good at it and thinking “boy this really DOES make reading code so much easier”. Maybe it does if all the code is printed out in 3-ring binders.
Fear: 1/10 – It has the Java Problem in package and file layout; everything is so broken up by the design and conventions that you need to jump around a lot to actually figure out what the hell anything is. The amount of work it takes to arrange and declare various functions and types probably makes refactoring hell, so there’s probably plenty of nice big chunky Ada programs out there that have never been refactored. Understanding what the hell you’re looking at is hard, and every single type and statement and such has so many possible annotations on it that all do utterly unique things, it’s heckin’ impossible to predict how any of the pieces should go together. For example, I mentioned above that some access types might have to be double-pointers under the hood. Not to worry, pointers default to non-aliasing, so you can declare a pointer type with an annotation as aliasing, and then only those ones have to be double-pointers. But then you add another annotation for regions. And another for cross-region pointers. And another for bounds-checked vs non-bounds-checked pointers. And so on, and so on, for every single feature in this language; it’s the same kind of design that makes me annoyed at SQL. And people think Rust’s `&'foo mut [&'bar u8]` is hard to figure out? Sure, I don’t actually know this language, but I can just imagine working on a large codebase and having it be like C++, where every single person has their own different dialect of the language and has to get out the language reference –oops, it’s probably unreadable– and sit down with another programmer to get a tutorial on how their dialect of code actually works. (At least you’re allowed to download the reference though; one up on C there.)
Mean: 5.45 +/- 2.6
Median: 5.75
Closing thoughts: Boy, after some digging around you can really see why our anti-authoritarian forebears considered Ada a travesty. I am all in favor of a bit of tedium in exchange for a lot of compile-time validation – I like Rust, after all – but Ada seems like a pretty good example of what life is like when you take it too far. Ada has a lot of good things going for it, especially with its region-based memory management… IF I have understood any part of it correctly. It’s entirely possible I’ve screwed up a lot of this ’cause it’s so complicated and alien.
But you can tell deep in your soul that Ada is made to appeal to people who think that all engineering is done by starting with a large, empty sheet of white paper, drawing a single, perfect rectangle on it with a ruler and square, and then dividing the rectangle up into boxes and filling up each box with diagrams until it’s full without ever erasing or changing any lines. Hopefully I don’t have to tell you that’s not how the world usually functions. Large, complicated systems that work well are almost always grown out of small, simple systems that work well; the best design validation tool is a working prototype. Is Ada’s joyless and incredibly detailed nature a benefit in the high-reliability and safety-verified systems it was designed for? Mmmmmaybe. But Ada also makes so much stuff just harder and more hairy than I feel like it needs to be. I think that much like Modula-2 there could be a fairly useful, pleasant language somewhere in there waiting to be discovered. Someone should steal Ada’s memory management model, write a language with a Rust-y type system atop it, and compile it to Ada so it can use all the existing verification tools. So I think I’ll put Ada in the same category as C: “we can do far better by now”.
C++
Nah.
Jai
Jai stands out from the pack by kinda being significant before it was cool, mainly because it is the pet project of game dev Jonathan Blow. Apparently he made Braid and The Witness, made a shitload of money in 2016 at the peak of the indie game renaissance, and proceeded to spend the next eight years dicking around with programming language development. I could try to criticize that, but it would only be out of jealousy ’cause he’s basically living my perfect life. I’m still tempted to cast some shade though, ’cause as far as I can tell Jai is still not available for public consumption. It exists, he’s done livestreams of him using it, there was a closed beta of it in 2023 that spawned some vaguely interesting blog posts… but how the hell are you going to figure out how to make a tool better if you don’t let people use it? The number of beta testers is apparently in the hundreds, but still, I brought myself up reading Eric S Raymond and Paul Graham. They’ve certainly aged poorly in many ways, but the basic principles are ingrained deep: distrust authority, show me the code. No matter what you say about something, if I can’t actually use it, tinker with it, and open it up to see how it works, it’s vaporware at best and an intentional prison at worst.
That said, for something that’s vaporware Jai has quite a following. Maybe Jonathan Blow not letting people touch it is all clever marketing; I wouldn’t put it past him. So I’ve talked a lot more about the person than the language, but I don’t have the language except for second-hand sources. Those sources are out of date by definition now, but I’ll still try to cobble together some guesses based on those. Again, all this information is second-hand, and most of it comes from the fan-maintained docs. I can’t verify that any of it is correct.
Performance: 10/10 – Jai is kinda my baseline for what a 10/10 would look like in this category. There’s a built-in arena allocator, with support for a stack of arenas and for changing allocators in the `New()` builtin that allocates memory of various types. There’s built-in vector types, though apparently they aren’t as complete as one might want. One of the flagship features I remember getting a lot of attention is still there: automatic support for turning arrays-of-structs into structs-of-arrays. After some digging I think that one is honestly less interesting than a lot of the other ideas in this language, but I can confirm that writing SoA code in other languages is kinda miserable, so it does matter.
Basic type system: 5/10 – Basic numerical types of various sizes exist, as well as `int`, which is a 64-bit signed integer. Strings are all what Rust would call string slices: a pointer+length pair. There’s no sum types as far as I can tell, and there’s C-like unions with no tagging or checking. Bruh. Types are first-class values apparently, either evaluated at compile-time or via RTTI. It’s… well, better than C.
Generics: 5/10 – There is an `Any` type that appears to be a pointer to some value, tagged by RTTI. There’s hygienic macros. Oh, there actually are generic types that are resolved at compile-time, I honestly didn’t expect that. There’s some kind of subtyping and interface types. Along with RTTI it looooooooks like you can actually do something like dispatching on a generic’s type to do what you’d normally do with sum types. It’s hard to find too much more info online, so I’ll give it a 5.
Spatial safety: 1/10 – Variables default-initialize to zero, which never works as well as you’d like ’cause not everything has a meaningful “zero” value, let alone a useful one. You can mark variables explicitly as uninitialized, at which point reading them is “undefined behavior”. Is this at all like C’s UB where it blows up your entire program? Who knows. There are nullable raw pointers, and plenty of pointer arithmetic if you want it. You can write `some_struct.foo` and if `some_struct` is a pointer to a struct, it will dereference that pointer for you; no word on whether it will dereference through multiple pointers. I can’t tell whether it checks for null pointers for you, but I bet not. I think most arrays are accessed via “array views”, what Rust would call slices. I think those are bounds checked? It’s… technically better than C, I think? But still. You should know better by now.
Temporal safety: 3/10 – There’s built-in support for iterators over arrays, and a slightly odd way to define new iterators via a macro called `for_expansion`. Can iterators be invalidated? Ahahahahah hell yeah they can, there’s a special-case `remove` statement to remove an item from an array being iterated over: “The remove statement assumes an unordered remove, the remove swaps the current element that is being iterated on with the last element, and then removes the last element.” Clever solution, and it probably does the right thing and performs the next iteration of the loop on the now-current element, but hoo boy I could imagine that driving me crazy in the rare case where it isn’t what I want and causes some sneaky bug. I can’t find any docs on what the arena allocator does when it decides it needs more space and allocates another chunk; does it move the existing objects in it or not? I assume it can’t, ’cause anything else wouldn’t make sense, but still. There’s no RAII, but there are `defer` statements to release resources. That said, you can mark the return value of a function as “this must be used”, which is a touch I didn’t expect.
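For what it’s worth, that unordered-remove trick is exactly what Rust’s `Vec::swap_remove` does, just as a plain method instead of a loop statement; a quick sketch:

```rust
fn main() {
    let mut v = vec!["a", "b", "c", "d"];
    // Remove index 1 by swapping in the last element: O(1), order not preserved.
    let removed = v.swap_remove(1);
    assert_eq!(removed, "b");
    assert_eq!(v, ["a", "d", "c"]);
}
```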
Modules: 6.5/10 – There’s some kind of module mechanism with `#import`, which looks pretty reasonable, but there’s also `#load`, which seems to do textual inclusion of a file? No word on whether it will include the same file multiple times; it looks like it does. Have fun writing your own include-guard equivalents! Or maybe it does something else, I can’t tell. You can define “module parameters” which are… well, constant values in a module that you can override in the `#import` statement. So you can do `#import "Module" (VERBOSE=true);` and have what are effectively C `#ifdef`’s on the `VERBOSE` variable. I… kinda don’t hate that actually. I can think of a thousand ways it can go wrong or bite you in the ass, and it really is glorified C `#define`’s, but it’s a neat idea. It lets the importer of a module have some control over what that module does, not just the module writer, which sounds like it could be really nice sometimes. Modules can explicitly offer a configuration API! Hazardous, but cool.
Low-level junk: 9/10 – There’s support for easily binding to C code, and apparently it’s designed to make rewriting C code into Jai mostly Just Work with fairly mechanical changes. That explains a fair amount of what I don’t like about Jai’s type system and safety properties, I guess? There’s tons of annotations available on types like cache layout and SIMD stuff. Inline asm support is built-in and extensive; there’s even features to make sure you can have macros generating asm and it works the way you’d want it to. The inline asm is pretty interesting: there’s a register allocator that will try to fit your asm program into the registers that are available and yell at you if you exceed them, there’s ways to specialize different versions of your asm for different instruction sets like SSE3 and SSE4, and stuff like that. On the flip side the ability to do function calls and jumps inside asm is very limited, which probably means the optimizer has more freedom to reshape the rest of the function around the inline asm; it’s very much intended to write hot-path optimizations rather than as a general tool. There’s built-in support for doing things like arranging struct layouts, making enums that are bitsets or flags, all sorts of things. All in all… yeah, really seems very solid and pragmatic. There’s lots of kinda disparate features but you tend to look at them and say “yeah, I would want to use this for…”
Separate compilation: 5/10 – There’s support for creating DLL’s and linking with C stuff. Apart from that, I… can’t actually find much about the separate-compilation story. Does it actually compile stuff in parallel and then link it like C? Does it compile as much as it can and cache some stuff, then do whole-program optimization like Rust? No idea, and I’m tired of looking. I’ll just give it a 5. If it does it like C consider this a 7, if it does it like Rust consider this a 3. I rather suspect the latter, but who knows.
Joy: 2/10 – It has some cute syntactic sugar for var decls, where `name := value;` declares a new var inferring the type and `name : type = value;` declares one with the given type. Odin does this too; did Jai or Odin come up with that first? Probably Jai, since it’s the older language. There’s a significant number of features that make me stop, blink, and go “…ooooh, neat”. But apart from that… ugh, I personally would hate using this language. I’m too OCaml-brained, and this language has far too many pieces that make me go “but why would you do it THAT way instead of the obviously better way?! Why do you hate life this much?”
Dread: 1/10 – Like C, it lets you write `if foo { thing(); } else { other_thing(); }` as well as `if foo thing(); else other_thing();`, which I consider a needless and hopeless footgun. Unlike C you don’t have to put parens around the `foo`, so I have no idea how it keeps the syntax unambiguous. It might manage it; maybe this is why it apparently has no unary minus, so it doesn’t have to disambiguate `if x - y;`? There’s a ternary `ifx x then foo else bar;` expression instead of just having an `if` statement be an expression. The `case` statement doesn’t allow fallthrough by default (whew!) but doesn’t default to requiring exhaustive matching of enums (argh!). You can specify default return values for functions, which might work out great but I know personally would cause me to blow my own foot off so many times when I forget to return something and it just gives me the default. So yeah, Jai repeats almost everything C does wrong, plus adding pervasive macros and metaprogramming just to make your life extra-full of sharp tools prone to abuse. Sigh.
Mean: 4.75 +/- 3.0
Median: 5.0
Closing thoughts: Jai is very obviously written by a C programmer who said “man, I wish C was just better at some things…”, without really caring to look too hard beyond it at what other languages in the world might be doing. …Except that when you keep looking there’s weird bits here and there that show some awareness of the outside world: compile-time function arg currying, hygienic macros, etc. It’s very strange; the simple stuff that I consider table stakes like “no uninitialized variables” or “no null pointers” are all horribly scuffed, but there’s some very savvy advanced features like its module parameters or its inline asm that look really, really useful in the right contexts. Haven’t you ever wanted to compile just one module in debug mode, or have your compiler do register alloc on your inline asm for you? To me the core language is kinda garbage, but from a design perspective there’s a lot of interesting stuff to cherry-pick out for use in other languages. Our lad Mr. Blow just needs to sit down and write some games in OCaml or Erlang, broaden his horizons a bit. All in all, Jai is a good example of why society needs artists: not to create things that are useful, practical or popular, but to create things that are interesting.
Rust
I’m gonna try not to fanboy too hard, but Rust is the baseline I compare everything else against. I’ve used Rust as my Language Of Choice for most things since 1.0 came out in 2015, and IMO if you are writing a program where you have to care a lot about how resources are used, it still has no real competition. That said, Rust is certainly not flawless, it has evolved a lot since then, and it has made a lot of design decisions which were usually pretty good at the time but in retrospect are not ideal. It’s also made a lot of tradeoffs, and consistently chosen to err in favor of lots of compile-time analysis, powerful optimizers, sophisticated type systems, and dictatorial inflexibility in the face of errors. The result is, IMO, far more useful than most human artifacts ever could hope to be, but there’s other useful design goals that Rust has sacrificed in the process. I love Rust but there’s no reason not to look at it critically and say “how can we make this yet better?”
A quick sidebar: when Rust started as Graydon Hoare’s pie-in-the-sky side-project in 2006(?), it was a very different language than it was when it hit 1.0 nearly a decade later. (At some point in 2010 or something I actually used it a little and talked with some of its people on IRC, which is very weird to think about now.) But Primordial Rust was much more like Erlang in concept: garbage-collected, plentiful multi-processing with lightweight threads and message-sending, and so on. Somehow that design evolved and got a bit co-opted by the community that started growing up around it, which saw some potential for it to be a lower-level language with advanced static analysis and dragged it more and more in that direction. Graydon seems to have concluded “that isn’t really what I wanted, but I’m glad you’re happy with it” and moved on to other things.
Performance: 9/10 – Very, very, very good. Until Rust came around you kinda didn’t notice how much C code jumped through hoops in the name of performance, and in my experience 98% of it came from the fact that you didn’t have strong nominal types and couldn’t trust the optimizer to inline functions. Rust got rid of that BS, deluged the compiler with all the information it could wish for, and let it go ham on optimizing as hard as it wanted. That said, it’s not all gravy. In theory, Rust’s aliasing constraints could make a Rust compiler generate better code than a C compiler could, but in practice that doesn’t seem to have happened much? Part of it is ’cause LLVM still is not that great at said alias analysis, and as it gets better at optimizing Rust code it also gets almost as good at doing the same to C code. (This may now be changing.) Additionally, Rust’s hardcore approach to monomorphization tends to generate more bloat in instruction caches, and the fact that Rust code just does a bit more work on runtime checks tends to slow down numerical stuff a little. So, on average, Rust code seems to often weigh in at a little slower than comparable C, but you can always change that if you put some work into it. There’s also plenty of cases where naive Rust outperforms naive C, or does so after only a tiny bit of work, so I’m happy to consider it “too close to call”.
Basic type system: 9/10 – Detailed and explicit as all hell, and I love it that way. Having almost no implicit conversions between types is one of those things that feels like it should be terrible, but becomes incredibly liberating in practice when you realize just how much time you spend in other languages figuring out what the compiler is going to do for you. In Rust you just think a bit harder about what you actually want, and then can mostly ignore it from then on. A big heaping helping of type inference keeps it from driving me insane, and also kinda proves to me that “local type inference within functions with explicitly-typed signatures” is a really good design sweet-spot. If the compiler can’t figure out what type something should be then I probably can’t either when I return to the code I wrote 6 months later, and this is one of the places where Rust has really defined a new baseline in the state of the art.
Generics: 7/10 – Typeclasses for the masses, and a pretty good point in that design space. Rust traits do a very good job at compiling down to “the code you would have written by hand anyway”, and also let you do high-level type-metaprogramming to satisfy the little goblin inside you that says “these things SHOULD be similar; how can I express that?” So in general traits are a very large contributor to the “if it compiles, it works” feel that Rust gives you. That said, there’s definite downsides: coherence and the orphan rule means I’ve made numerous PR’s to random crates to add `Hash` or `Clone` impl’s to their code, because they didn’t need them at the time and I can’t implement my own versions on their types. Traits in general kinda need whole-program analysis to work well, and the generated code leans on the optimizer pretty hard, which makes compile-times slow. And the type-metaprogramming goblin often makes your life harder; iiuc Rust traits have some limitations that Haskell typeclasses don’t, for the sake of fast code and no memory allocation, and that means some things you might expect to work Just Can’t Be Done for reasons that aren’t obvious. When you get beyond the simple stuff, you sometimes need to be pretty savvy and have a good vision of the whole design before you can write traits effectively.
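To make the orphan-rule gripe concrete, here’s a minimal sketch; `SomeCrateType` stands in for a type from somebody else’s crate:

```rust
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Pretend this comes from another crate and its author didn't derive Hash.
// (It's defined locally here only so the sketch compiles on its own.)
#[derive(PartialEq, Eq)]
pub struct SomeCrateType {
    pub id: u32,
}

// If SomeCrateType really lived in another crate, writing
// `impl Hash for SomeCrateType` here would be rejected by the orphan rule:
// foreign trait, foreign type. The classic workaround is a local newtype:
#[derive(PartialEq, Eq)]
struct HashableKey(SomeCrateType);

impl Hash for HashableKey {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.0.id.hash(state);
    }
}

fn main() {
    let mut m = HashMap::new();
    m.insert(HashableKey(SomeCrateType { id: 1 }), "hello");
    println!("{:?}", m.get(&HashableKey(SomeCrateType { id: 1 })));
}
```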
Spatial safety: 10/10 – If you don’t use `unsafe` blocks then you can’t have spatial safety violations. All arrays are bounds checked, all type conversions are checked, there’s no unchecked pointer arithmetic or uninitialized variables. If you don’t write `unsafe`, then unless there’s a compiler bug, stdlib bug, or you use someone else’s `unsafe` code, you just can’t have out-of-bounds accesses or invalid variables. What more can one ask for?
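What that looks like in practice, as a tiny sketch: indexing out of range panics with a clear error instead of reading garbage, and there’s a checked accessor for when you’d rather branch than panic:

```rust
fn main() {
    let a = [10, 20, 30];
    let i = 7;

    // a[i] here would panic at runtime with "index out of bounds".
    // The checked form hands you an Option instead:
    match a.get(i) {
        Some(v) => println!("got {v}"),
        None => println!("index {i} is out of bounds"),
    }
}
```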
Temporal safety: 10/10 – This is the real good shit, here. This is what the borrow checker gets you. You know what, forget about preventing use-after-free or any of that basic shit; none of the other languages in this list even come remotely close to Rust’s ability to save you from fucking up multithreading. In safe Rust, you simply can’t access multithreaded data without some sort of mutex or other synchronization; it’s a compile-time error that is detected infallibly. Most languages don’t even try to do this. Austral comes closest, as its spec actually talks about multithreading quite a bit and how errors in threads are treated, but I can’t find mention of anything like Rust’s `Send` and `Sync` constraints. It’ll probably get there eventually, but its borrow checker is simpler than Rust’s and so you’ll either spend more time fighting it, use more unsafe code, or make design compromises in your program. The Rust borrow checker sucks ass until you learn how to design things that it likes, but once you do, it is absolutely magical. Could it be better? Sure. Has anyone actually made anything better, even as a research project? Not that I know of.
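For the unfamiliar, here’s a minimal sketch of what “you simply can’t access multithreaded data without synchronization” means; take away the `Mutex` and try to mutate the shared value, and the program won’t compile:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Arc gives shared ownership across threads, Mutex gives synchronized access.
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    println!("count = {}", *counter.lock().unwrap());
}
```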
Modules: 9/10 – Rust’s modules and file-based compilation have had a kinda rough evolution, as upon release they just had lots of little edge cases and non-obvious rules. But most of those got ironed out by the 2018 edition and continue to slowly get smoothed out, so now it’s in a pretty good place IMO. There’s nothing super groundbreaking here, it just pretty much works. Each file is a module, you stick them together into a tree of bigger modules by putting them in directories, and there’s various ways to fiddle around with it and adjust it in odd ways if you really want to. The main convenience other languages don’t have is that the names inside traits and enums are treated just like the names inside any other module, so if you don’t want to write `AstNode::Whatever` 37 times you can import the names inside the `AstNode` type and just write `Whatever`.
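Concretely, with a made-up `AstNode` enum (a tiny sketch):

```rust
enum AstNode {
    Whatever,
    SomethingElse,
}

// Variants live under the enum's path, so they import like module items:
use AstNode::{SomethingElse, Whatever};

fn describe(node: &AstNode) -> &'static str {
    match node {
        Whatever => "whatever",
        SomethingElse => "something else",
    }
}

fn main() {
    println!("{}", describe(&Whatever));
}
```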
Low-level junk: 4/10 – One of the complaints about Rust I’ve heard from people is that writing low-level code in Rust actually kinda sucks, and from my modest experiences with it I have to agree. Unsafe pointers have shitty ergonomics. Lots of small but necessary features tend to be locked behind unstable compiler features, which may never become stable. It’s real easy to accidentally cause UB or otherwise screw up in `unsafe` code. And it’s hard to understand the compiler’s assumptions about how things fit together under the hood. The docs help, and the story in general has slowly gotten better as the stdlib has improved and evolved, but it still often feels a lot like a second-class citizen. The feeling a lot of the time is that Rust is not really for writing OS and embedded code, it’s for writing high-performance applications. Rust’s own marketing also doesn’t help here; the guide to writing unsafe code is called the Rustonomicon, and is very dramatic about how dangerous this black-magic knowledge is, which is plenty of fun and sure makes you feel special… but frankly 70% of the Rustonomicon talks about stuff that’s pretty useful to know even in safe code, just hidden under a layer of ergonomics and conventions. “You should never need to write unsafe code” is a compelling design (and marketing) goal, but that artificial divide also comes with costs.
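A taste of the ergonomics complaint, as a sketch: even a trivial raw-pointer walk ends up pretty wordy, with `unsafe` blocks and method calls where C would just use operators:

```rust
fn sum_raw(data: *const u32, len: usize) -> u32 {
    let mut total = 0;
    for i in 0..len {
        // Every dereference needs an unsafe block, and pointer math
        // goes through methods like .add() instead of + and [].
        unsafe {
            total += *data.add(i);
        }
    }
    total
}

fn main() {
    let v = [1u32, 2, 3, 4];
    println!("{}", sum_raw(v.as_ptr(), v.len()));
}
```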
Separate compilation: 3/10 – This is one of the big sacrifices that Rust makes for its powers. Rust looooves inlining and doing whole-program optimizations. Rust’s compilation unit is the crate, which is often a very large chunk of code compared to an individual file, and compiling traits well can involve a lot of whole-program analysis and optimization even between compilation units. The “linking” step in Rust, rather than just sticking together chunks of code and fixing up addresses, can involve more or less arbitrary amounts of code-generation and optimization. Rust more or less has no stable ABI, which is on purpose so the compiler is free to add new optimizations to inlining and calling conventions wherever it feels like. All in all, whenever Rust has had to make a choice between “generate better code” and “parallelize compilation/linking”, it’s chosen “generate better code”, and is now rather infamous for its long compile times. It’s not really worse than C++, and you can compile regular-ass C plenty slowly if you really try, but you have to try a lot harder with C to make a lib or program that scales badly as the program grows. Oh, I’d forgotten: Rust’s highly-integrated approach to libraries and build tools also makes life pretty hard for anyone who wants to redistribute Rust code. Again, Rust makes really useful decisions, but they are still tradeoffs that have real downsides.
Joy: 9/10 – You should git gud at living with a borrow checker. Even if you don’t need it, it will change how you think about your programs for the better. Though I also think you should play Elden Ring, so I have my biases when it comes to how much joy is involved in difficult tasks. Apart from that, Rust puts a lot of work into making developer experience nicer, probably precisely because Rust is a bit of a big complicated beast. On the other hand, while IMO development in Rust doesn’t have many unexpected speedbumps once you internalize how it works, its reputation for being hard to learn still seems quite justified.
Dread: 8/10 – There’s not a lot I dread about reading other peoples’ Rust programs! There’s a few things that tend to crop up, like trait salad where people try way too hard to encode their design in the type system, or crates that are far too happy to over-design something simple in search of perfection and result in lots of breaking changes. Nobody’s gonna be perfect. But you can generally just, you know, not use those crates if you don’t feel like it. Apart from that, Rust’s clear conventions, built-in tools for formatting and documentation, culture of pervasive documentation and example code, and amazing compiler messages set a high bar for quality that people writing lots of Rust often feel like they should try to live up to.
Mean: 7.8 +/- 2.3
Median: 9.0
Closing thoughts: Rust is just a really good tool, and that’s a hella nice change from most programming languages that have become mainstream. Those tend to be things that, you know, were a fairly okay tool in the 1980s (like C), or are mediocre tools but are better than the alternatives (like Java or C# or Go), or are pretty decent at some important things but you wouldn’t want to deal with their tradeoffs all the time (like Python or Ruby or Haskell), or that are pretty shitty tools but there’s nothing better available in its problem domain (like JS or C++). I’m very happy to have one programming language in my life that I can use to write programs other people can actually run regardless of OS or tech-savviness (unlike Lisp, OCaml, Erlang/Elixir…) and which is actually, you know, really well-made.
(No language wars here, if you want to add nuance to some of those languages I probably know about it and likely agree. Let me have this.)
Zig
Zig first appeared in February 2016, and is currently on version 0.13. After Rust, it’s definitely the language with the most momentum; I wouldn’t quite call it “mainstream” yet, whatever that means, but real people use it for real things and sometimes even programmers who are not language nerds have played with it. Zig is masterminded by Andrew Kelley, who seems to have diverse interests but one of them that he sometimes writes about is hobby gamedev. I don’t know much else about Zig’s history and evolution though, just that it’s in the “fairly stable but definitely still breaking backwards-compat sometimes” stage of development. What really got my attention was the Zig project’s tendency to just casually toss out articles about absolutely wild things like wrapping a C compiler or writing an incremental linker that can hot-patch existing executables (can’t find a link for that one alas; maybe it was a conference talk?). Zig people seem to consistently come up with creative and ambitious ideas, like writing a bug-for-bug copy of the Windows resource compiler, and then usually follow through on them pretty well.
Performance: 9.5/10 – The Zig docs I could find don’t actually talk about this very much, but it seems to try to exceed C’s limitations in small but important ways, such as not allowing mutable function args and making comptime evaluation more common and controllable. On the other hand, some Zig idioms like dynamic allocator interfaces lean more heavily on function pointers than C or Rust code would. On the gripping hand, comptime evaluation can probably inline out a lot of that sort of dynamic dispatch, only leaving a runtime cost in the places where C or Rust would need to use a function pointer or something anyway.
Basic type system: 10/10 – Types are quite strict overall. Tagged and untagged unions both exist, and you can specify an enum type to use as a tag for a union; that’s one of my minor annoyances with Rust nicely fixed. You have your usual zoo of signed and unsigned integer types, and widening them is transparent but narrowing them or mixing up signed and unsigned requires a cast. There’s built-in SIMD vector types, and special types for arrays and pointers-to-arrays terminated by a null or other sentinel value. One nice touch is that string literals are null-terminated in addition to being handled via ptr+len slices, so it’s often easier to pass Zig strings to C code instead of needing to copy them like Rust.
Generics: 8/10 – This is one of the crazy cool things that Zig does. Zig generics are just functions that can take and return types the same way as any other value. They’re just all evaluated at compile time, and RTTI fills in some gaps to let you do a bit of inspection of types at runtime as well. Zig can do a lot of compile-time evaluation, so its compile-time type functions end up acting a lot like C++ or D’s templates. But for better or worse (mostly better, people seem to say?) instead of “templates” having their own special little DSL in Zig, you pretty much just write Zig code. This is actually quite similar to how formal programming language type theory thinks about the world! The theorists just tend to be more interested in proving stuff about their type systems, and so they tend to try to find out how far they can push a type system without it becoming Turing-complete ’cause Turing completeness makes proving interesting things suddenly become much harder. As far as I can tell Zig looks at all of the theory stuff, nods thoughtfully, then says “fuck it, we ball” and lets you write arbitrary type-level programs to evaluate the Ackermann function at compile-time if you really want to. Is it a good idea? Well… I mean, I’m not taking that approach in my pet programming language, but I’m legitimately glad that someone out there is giving it a go. Does it let you write hairy, confusing type-metaprogramming code that puts the most fearsome Rust trait-salad to shame? Well, yes, of course it does. But it also lets you write very straightforward and clean code as well, even for very complicated cases. The compiler doesn’t have to prove that it works, just evaluate it and find out.
Spatial safety: 10/10 – Pointers don’t have a null value; instead optional types are built into the language. Error values are a somewhat magical built-in, and Zig doesn’t let you forget to handle them, but does have a `try` statement that handles them like Rust’s `?`. Matching on errors or tagged unions is exhaustive. Uninitialized values are only allowed with a particular keyword, and are set to an (unchecked) guard value in debug mode. All in all, it’s pretty hard to think of anything it’s missed. Out-of-bounds array accesses, dereferencing null pointers, and integer overflows are all things that Zig calls “undefined behavior”. But as far as I can tell “undefined behavior” in Zig is not treated as “don’t do this or the compiler explodes your program” like C/C++ does, but more “this is an error that the compiler may not be able to catch”. (I thiiink they’re renaming it to “illegal behavior” to distinguish those cases? But if so the documentation doesn’t seem to have caught up yet.) It will still try to catch these errors though! If you do something Bad and the compiler notices, it will refuse to compile the program. If the compiler can’t figure out if it’s Bad, it will generally insert a runtime check where possible, at least in debug builds. If you know the check isn’t needed, you can tell the compiler to skip it, with scope or compilation-unit granularity. Is it perfect? Hell no; efficiently detecting something like a double-free is really hard to do even at runtime, unless you build a whole borrow checker or garbage collector and never let anyone bypass it. But while Jai and Odin seem to say “do whatever you want, we trust you”, Zig seems to say “do whatever you want, we’ve got your back”.
Temporal safety: 4/10 – Zig provides `defer` for cleanup at the end of scopes. As noted earlier, Zig is pretty persnickety about letting you do stupid things like return pointers to local variables. You can do it, but there’s enough “suspicious things are errors” and “default to assuming everything is constant” around pointers and local variables that it actually took me a bit of work to figure out how to make something invalid that would compile. It has pretty good facilities for debugging and double-checking memory issues as well, but leaking memory and use-after-free’s are pretty easy. For temporal memory safety it seems to be less about preventing you from screwing up, and more about making it hard to create bad, difficult-to-validate designs. It’s a kinda weird vibe, and I don’t really have enough context to predict how well it will work in practice.
Modules: 10/10 – Zig has anonymous structs. A Zig file is implicitly an anonymous struct constant with its definitions as values. You import the code in the file foo.zig by writing `const whatever = @import("foo");` and it just assigns that struct to a constant in your program, the same as any other constant. So, all the programs in your modules are just treated exactly like any other value. This is incredibly based, to the level that the only other language I know that really embraces it is Lua. However, it has a cost: it means you have to do all the lookup of function names and such at runtime via indirect loads/jumps rather than more efficient direct ones… unless you can optimize out all the lookups to values you know are constant. Oh look, Zig really likes optimizing away lookups of constants. So unless I’m missing something, you just get modules that can be manipulated exactly like any other data in the language, with lots of options for compile-time and runtime(?) reflection, built out of normal types and data structures. Naturally, I’m sure you can do incredibly cursed things with this if you try hard enough. I really am not qualified to explore the deep implications of this in practice, but I promise you that it has plenty. So it might be that this is all a huge mistake in the end, for broad and subtle reasons, but it does all the basic stuff you need and seems to be working out okay thus far, so I’ll give it a 10/10 for style.
Low-level junk: 9/10 – Lots of thought has been put into this; for example the “freestanding” compiler target can still provide stack traces, with a little bit of fucking around with debug info. C interop and FFI are beyond pervasive: some languages can parse C include files to generate FFI info from them, but Zig just straight up includes a C compiler in its build system so it can build and link all your C code itself. In fact I have heard of people using Zig as a build system for C projects without a single line of Zig code. There’s plenty of options for laying out structures and fiddling with alignment and doing bit-casts and stuff like that, but as mentioned there’s also types for the common idiom of having pointers and arrays terminated by a fence value you can define, which makes them a lot safer to use. The convenience of its `opaque` types for “pointer to something I don’t know anything about” frankly puts Rust to shame. It doesn’t have all the crazy things that Jai does, but it also has things that Jai doesn’t, so I’m calling it at least as good.
Separate compilation: 6/10 – I haven’t found much up-to-date documentation written about this, but what I can find is kinda surreal. Some old posts I’ve found by Andrew Kelley basically say that Zig doesn’t intend to have multiple compilation units like C files, but rather make incremental compilation work at a more or less arbitrarily fine-grained level. For example, Zig’s syntax is quite carefully designed to let each line of a file be run through the tokenizer separately, so you can check an entire file for changes in parallel and then only recompile the portions of it that changed. I’m not sure that I quite trust this to work without ending up with a massive bottleneck in “linking” or whatever other way they come up with to resolve global dependencies, but it’s a pretty fiery idea. But the standard build tool also says plenty about how to build and link static and dynamic libraries, so you can break up your project however you want with a bit of work. It doesn’t have much to say about what it considers a “stable ABI” though, so distributing static libraries might take some more research.
Joy: 7/10 – Zig outright says in its feature overview “Zig competes with C instead of depending on it”, which is a hell of a flex that would have been impossible to take seriously a decade ago. I am trying not to gush, honest, but Zig’s approach to everything seems to be to look at it, think really carefully about it, and then say “okay we have a solution we think will work, let’s take this idea and go hard”. It seems to also consistently have the coxones to pull it off, so to me the result is absolutely glorious, even if also mind-bending and surreal. For example, Zig highly prizes “what you see is what you get”: there are no accessors, type-directed dispatch, or function/operator overloading. If something looks like an assignment then it’s an assignment and nothing else, if something looks like a struct member then it’s a struct member and nothing else, and if something looks like a function call then it’s a function call and nothing else. So, “clarity over convenience”. Is it kinda a pain in the ass sometimes? Yeah, probably. But low-level code is always meticulous and tedious, so removing sources of confusion from it is a tradeoff Zig has decided to make deliberately. All in all I don’t know if I’ll actually enjoy writing code in Zig yet, but I’m pretty motivated to find out.
Dread: 5/10 – Like Rust, Zig makes very distinct
tradeoffs for its superpowers; unlike Rust, I haven’t spent years
digging around in the guts of the language and ecosystem to learn how
they all fit together. However, one thing I’ve heard spring up multiple
times is that Zig only compiles code that is actually used. Afaict this
is part of how Zig does its comptime evaluation and incremental
compilation, so you can write code that doesn’t typecheck or such and it
will compile just fine as long as nothing ever calls it. This results in
some very weird stuff sometimes. I was playing with numerical types, and assigning a 64-bit signed integer constant to a 32-bit unsigned integer variable works without a type cast or error… ’cause I wrote `const thing: i64 = 1` and Zig knows that 1 will always fit into a `u32`. Change `thing` to -1, or make it a variable rather than a constant, and it’s a compile-time error. But just changing `const` to `var` makes Zig politely point out “local variable `thing` is never mutated, consider using `const`”… and then steadfastly refuse to compile my program. So it feels like Zig is very strict about not allowing errors, but very liberal in how it defines “error”; proof is always in what the code actually does, not what the language’s rules are. This is uncomfortable to me, to say the least. I suspect that a sufficiently experienced Ziguana is used to this and knows how to work with the flow that sort of thing creates, but coming from Rust or OCaml I find it really hard to trust. It’s probably fine. Probably.
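Here’s roughly what that experiment looked like, reconstructed from memory, so don’t hold me to the exact error messages:

```zig
const std = @import("std");

pub fn main() void {
    const thing: i64 = 1;
    // Fine: `thing` is comptime-known and Zig can see that 1 fits in a u32.
    const small: u32 = thing;
    std.debug.print("{}\n", .{small});
    // Make `thing` equal to -1, or declare it with `var` instead of
    // `const`, and the coercion above becomes a compile-time error.
}
```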
Mean: 7.9 +/- 2.1
Median: 8.5
Closing thoughts: On the surface Zig looks pretty unremarkable to someone versed in C or Rust, but that’s a sneaky lie. When you try to actually write it, it becomes vividly obvious just how weird Zig is, and IMO it’s weird in a good way. It miiiiight be one of those languages that you should learn even if you never actually use it, like Rust or Erlang, ’cause it’ll change how you think about problems. After playing around with it a little, its pervasive compile-time evaluation and its weird approach to what is and is not valid are so strange to my brain that I really can’t tell whether it’s brilliant or insane, and the more I learn about it the less certain I feel. I’d have to get a lot better at Zig and write a few non-trivial programs in it to really judge, but no matter what I’m very glad Zig exists to try out these weird things. Zig’s median rating of 8.5 is higher than I really expected even while writing this, since I still have no clue whether I’d actually like using Zig in practice. But that’s the point of a rating system I suppose; Zig benefits from not having many especially low scores anywhere.
However, I’m not yet convinced that Zig is actually a smaller, simpler language than Rust. In the section on Odin I grump at it a little for handling lots of special cases with “just add one more little feature” instead of finding large, powerful features that specialize to handle a lot of different things. I think Zig does this better; its “big feature” is palpably “compile-time evaluation”, and it makes that feature do a startling amount of heavy lifting. But I still feel like it has the “just add one more little feature” problem to some extent; lots of things with no better home just get added to the list of over 120 compiler built-in functions, from `@addrSpaceCast()` to `@wasmMemorySize()`. Besides that, looking at error-value handling and error sets exposes a whole new sub-language around them, then more about result types and locations behind pointers, and so on.
These may get refactored into libraries or other language features as
time goes on. In any complex system there tends to be a cycle of
reincarnation that alternates between “add new things” and “refactor to
generalize special cases”, and in reality, 120 built-in functions is not
exactly a bloated mess, especially when you actually use about 10 of
them regularly. But it still feels a little spooky to an outsider.
Odin
Odin’s git history starts in late 2016, a year and change after Rust hit 1.0; I have no idea if these things are related at all. It’s palpably inspired by Jai but also palpably its own creature; apparently it started off as far more Pascal-y and evolved from there. Odin is the child of a chap named gingerBill, though all I can find about him in terms of public persona is that he –surprise surprise– does indie gamedev as a hobby, and sometimes does it live on stream with beer. Sounds like a cool guy, really. Odin thus seems somewhat gamedev-tilted with built-in vector types and a collection of bindings for gamedev-y physics and graphics libraries, but it’s less monomaniacal about it than Jai. Instead it sells itself more or less as “C for modern systems, except more joyful to use”. There’s actually a kinda neat and not-too-long interview video with gingerBill here that talks about the history, motivation and evolution of Odin.
Performance: 9.5/10 – Like Zig, function params are immutable to let the compiler do optimizations that C cannot, so they’re certainly thinking about how to exceed C’s powers rather than simply matching them. You can do vector-programming operations on fixed-length structures and built-in matrix types, and arrays double as built-in vector types. There’s built-in support for SoA data types and such. There’s lots and lots of little knobs to twiddle to tune performance and data layout, but there’s also some things that rely more on RTTI or dynamic typing than static analysis. However, it looks like Odin’s approach is generally to take dynamic type information and evaluate it as much as possible at compile time; it only actually falls back to RTTI and such if necessary. All in all Jai feels a little more hardcore on the performance front, but Odin feels like a slightly more pragmatic design.
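A taste of the arrays-as-vectors stuff, as far as I understand it:

```odin
package main

import "core:fmt"

Vec3 :: [3]f32

main :: proc() {
	a := Vec3{1, 2, 3}
	b := Vec3{4, 5, 6}
	// Fixed-length arrays support component-wise arithmetic out of the box.
	c := a + b
	fmt.println(c) // prints the component-wise sum
}
```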
Basic type system: 7/10 – There’s a broad assortment of numerical types, as well as built-in maps and dynamic arrays a la Go. There’s also floating-point vectors, quaternions, and matrices, your bread-and-butter types for graphics and physics. Variables are initialized to 0 values if possible, though the docs say not to rely on it for complex types. All type conversions are explicit. Like Jai, it allows variables to be explicitly left uninitialized, sigh. Strings are UTF-8 and are represented as ptr+len. Array slices are also ptr+len, which are separate from fixed-size arrays, which are separate from dynamic arrays. You can define nominal types like Rust’s “newtype structs”. There’s some Pascal-lineage things too, like bitsets, arrays indexed by enums, etc. Sum types exist! But are only indexed by type, not arbitrary tags. They kinda feel like the language developer was talked into them by someone else. Odin prefers functions to have multiple return values vs. using tuples, and there’s no pattern matching I can find.
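A minimal sketch of what those type-indexed sum types look like, if memory serves:

```odin
package main

import "core:fmt"

Shape :: union {
	f32,    // radius of a circle
	[2]f32, // width/height of a rectangle
}

area :: proc(s: Shape) -> f32 {
	result: f32
	// You switch on the *type* of the stored value; there are no named tags.
	switch v in s {
	case f32:
		result = 3.14159 * v * v
	case [2]f32:
		result = v[0] * v[1]
	case:
		result = 0 // the union itself can also be nil
	}
	return result
}

main :: proc() {
	fmt.println(area(f32(2.0)))
	fmt.println(area([2]f32{3, 4}))
}
```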
Generics: 5/10 – Functions can be overloaded but they have to be specified as such, you can’t just add new random overloads to anyone’s function. There’s a zoo of operators and comparison interactions and such. There are some options for subtyping of structs with overlapping layouts, and there’s a fair amount of RTTI and reflection to let you essentially do dynamic typing in the cases where you really need it. …oh, there actually ARE full-fledged generics, I didn’t expect that; Odin calls it “parapoly” for “parametric polymorphism”, which is probably a better term than my vague go-to of “generics”. The docs don’t have super in-depth information on this feature yet, and I think it didn’t exist when I looked at Odin a couple years ago. It seems similar to C++’s templates in concept: substitute types into the program like special-purpose macros, then compile it and see if the normal type-checker likes it. There’s some kind of specialization possible with it as well, though maybe only with built-in types yet? Good luck with that, I look forward to seeing how it evolves.
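The basic parapoly flavor looks something like this (my own toy example, so caveat lector):

```odin
package main

import "core:fmt"

// $T is inferred at each call site and the body is re-checked per
// instantiation, C++-template-style.
max_of :: proc(a, b: $T) -> T {
	if a > b {
		return a
	}
	return b
}

main :: proc() {
	fmt.println(max_of(3, 7))     // instantiated for int
	fmt.println(max_of(1.5, 0.5)) // instantiated for float
}
```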
Spatial safety: 7/10 – Switch statements require exhaustive matches. For loops are C-like, with some iterator-like functionality for special cases, but Odin doesn’t have immutable pointers so only some things can be iterated over by reference; others have to be copied. Integers are defined to overflow without error; it is not UB. Arrays are bounds checked, and there’s explicit blocks where you can elide the checks. Pointers are pervasive and nullable, but pointer arithmetic is uncommon and provided only by stdlib built-ins. I really can’t find whether they’re null-checked by default though. A lot of things like maps and unions/sum types are reference types under the hood and are also nullable.
Temporal safety: 4/10 – Uses `defer` for cleanup; I’d really like to see more languages seriously play with move semantics and RAII. However the docs call out that its `defer` statement executes at the end of the scope rather than at the end of the function like Go’s version does, so you have a bit more fine-grained control (see the example after this paragraph). There’s an implicit global “context” object that gets passed to functions for things like allocators and loggers, like Jai. Unlike Jai as far as I can find, the context has a few knobs you can tweak to make it truly global or thread-local, though it also doesn’t let you put anything on it; it’s a predefined struct that gets threaded through function calls implicitly. Manual memory management is the name of the game, but there’s built-in options for allocators with debugging/tracing to help detect leaks and other bugs. Can we dereference a null pointer? Hell yeah we can, it makes a segfault rather than a panic. Can we return a pointer to a local variable? Ooh, at least in some simple cases the compiler catches it and tells you not to do that, though it’s not that hard to fool it. I appreciate the effort!
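Backing up to defer: the end-of-scope behavior in action, as I understand it:

```odin
package main

import "core:fmt"

main :: proc() {
	{
		defer fmt.println("block cleanup")
		fmt.println("inside block")
	} // the defer fires here, not at the end of main
	fmt.println("after block")
}
```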
Modules: 7/10 – They exist, they seem fairly sensible and foolproof if maybe a little tedious. The options for namespacing are generally powerful and have sensible defaults without footguns, and there’s support for tying C/FFI libs into it too. There’s no `cargo`-like package manager, so far. On the whole it’s a little hard to figure out though; if you make nested modules in the same directory, the `odin build` tool will try to find and compile them together if they use each other, or you can put them in separate directories and tell the tool where to find them. I thiiiiink I get how it all fits together, I’m just used to `cargo`’s strong conventions for how modules and sub-crates work and it appears Odin expects you to do a bit more of your own configuration.
Low-level junk: 8.5/10 – There’s fairly strong support for expressions that must be explicitly evaluated at compile-time, which is cool, and they’re fairly flexible. There’s a whole zoo of compile-time constants that influence compilation modes: entry point, debug info, RTTI, etc. Lots of things where you won’t need it 98% of the time, but that last 2% is really really useful. There’s built-in integer types with different endianness, as well as boolean bit strings. There’s lots of options for layout and user-defined annotations on structs. There’s support for C-like unchecked pointers for FFI and such. There’s an entire zoo of annotations for low-level compiler stuff, from linkage models to calling conventions to marking hot/cold branches. You can specify optimization modes on individual procedures, which is heckin’ rad.
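For instance, the endian-specific integer types, which are handy for wire formats (a sketch from memory):

```odin
package main

import "core:fmt"

// A header whose fields have a fixed byte order in memory,
// regardless of the host CPU.
Header :: struct {
	magic: u32be, // always stored big-endian
	size:  u32le, // always stored little-endian
}

main :: proc() {
	h := Header{magic = 0xCAFE_BABE, size = 512}
	fmt.println(h.magic, h.size)
}
```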
Separate compilation: 6/10 – I actually couldn’t find much about this in the docs, to my surprise. I think this is an artifact of The Gamedev Lineage; the docs talk about how to link to C libraries, but not how to write, say, a regex library and then use it from C, because that’s not what people tend to think about when they write video games. It appears that each package you write (a dir with a collection of files all in the same namespace) is treated as one compilation unit? Oh but when you build multiple packages together I think it’s still one compilation unit, but you can tell `odin build` to output a DLL or object file instead of an exe. I haven’t found a way to pass that object file into `odin build` and have it link it, but you can ask it to print out the linker flags it would call the linker with, so you can presumably do it yourself with the right linker invocation. While playing around with it I also managed to produce a panic with the message `Internal Compiler Error: TODO(bill): -build-mode:static on non-windows targets`, so they’re still working on it. That’s fine. Combined with the slightly-odd module system, it feels like they kinda know the shape of what they want, but are still exploring how to fill it out. My impression is they want to default to large heavily-optimized compilation units like Rust crates, but not go quite as hardcore on whole-program optimization and have better support for separate libraries built as object files or DLLs. No idea yet how that interacts with any options for doing LTO, or with the two design killers of separate compilation: inlining and generic types. Keep exploring the design space!
Joy: 6/10 – Right off the bat the tutorial calls out a variety of nice ergonomic features, particularly things it does better than C. There’s no equivalent of Rust’s `Result` type, all error handling is done via Go-like multiple return values, but they’re sane enough to add an `or_return` operator to automatically check them instead of forcing you to write `if err != nil { return err }` after every single function call (see the sketch after this paragraph). There’s some automatic conditional compilation and unit test harness based on file extensions. There’s a lot of care for debugging and developer-friendliness in Odin, and it tends to favor safe defaults whenever feasible. All in all it seems pleasant and detailed without being baroque, though it also– how do I explain it. You can design a complex system in two ways: add small pieces and make them fit together until it has the “shape” you want, or start with a big over-arching idea and carve away at it until it has the “shape” you want. Odin definitely builds up out of small pieces, and thus ends up with lots of tables of rules, operators and options that all seem like things You Just Have To Know. I prefer the other approach, where you say “My gut says that Feature X should exist and look like this…” and then try it and it works exactly like you think it should. All in all though, Odin feels pretty comfy. It has a bit of the same nebulous low-level-fun factor that C does.
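The or_return dance, roughly, with a toy error enum of my own invention:

```odin
package main

import "core:fmt"

Error :: enum {
	None,
	Too_Big,
}

check :: proc(x: int) -> (int, Error) {
	if x > 100 {
		return 0, .Too_Big
	}
	return x * 2, .None
}

// or_return bails out early with the error value if it isn't the zero
// value (.None here), instead of an explicit check after every call.
run :: proc() -> Error {
	a := check(10) or_return
	b := check(a) or_return
	fmt.println("got", b)
	return .None
}

main :: proc() {
	if err := run(); err != .None {
		fmt.println("failed:", err)
	}
}
```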
Dread: 4/10 – There’s also a lot of fiddly
little options that I can imagine biting you in the ass somehow unless
you know exactly what to look for, and that also means it feels like the
design space the language covers is often under-documented. On the whole
Odin seems well put-together, but again, my impulse is to search for big
over-arching ideas that make it possible to do what I want, and Odin’s
impulse leans more towards “just add one more little feature”. I can’t
point to anything in particular that obviously clashes with other
features in ways it shouldn’t, but when
you have this many special-case type conversions built in, I start
to get cautious. To paraphrase Alan Perlis, if you have a language with ten type conversions, you probably missed some. –Fuck, the Odin docs for that table literally say “If some cases are missing please let us know”, I honestly had not noticed that when I wrote this.
Mean: 6.4 +/- 1.7
Median: 6.5
Closing thoughts… I shouldn’t badmouth Jai. I’m not gonna badmouth Jai. But if you are interested in Jai, just go ahead and start using Odin instead of waiting around. Odin just straight up does a lot of the more pragmatic parts of Jai right now, and takes a similar control-freak-oriented approach to giving you all the tools you need for bossing a CPU around. Odin has also grown up a bit since I last skimmed over it a year or two ago, and mostly for the better… But now what worries me a little is that it feels like it’s on the way to becoming a maximalist kitchen-sink language, and deserves a chance to step back and go through some refactoring. Generics and sum types in particular add a whole lot of power and thus really work best when the language is designed to fit around them, rather than them being tacked on after. But Odin is also still in active development, and does not seem like it’s rushing to meet a 1.0 stable release on any sort of arbitrary deadline, so there’s plenty of space for it to keep evolving. I don’t think I’ll reach for Odin next time I want to fiddle around with some gamedev or hardware hacking, but it seems like a pretty good tool.
Hare
Hare is Drew DeVault’s foray into this space; it was made public in 2022 (though there’s writing about it from before then) and seems to have carried on since then with a modest amount of steam. When I first looked at it, it seemed very much like a Modernized C, which rather turned me off of it. I’ve tried making a Better C myself, and it’s actually really hard to make a Better C without either leaving a lot of good features on the table, making it fragile as hell, or fundamentally breaking the low-level nature of C in the first place. But ddevault’s pretty smart, and we’ve all learned a few things about programming languages since I last tried to make a Better C in like 2012, so let’s take another look at Hare version 0.24 now that it’s had a couple years to cook.
Performance: 7/10 – With its pervasive Typescript-like union types represented as tagged unions, it probably does more runtime type-checking than something with a more sophisticated static type system would. Without things like closures with pervasive inlining and whole-program optimizations I expect its typical performance ceiling to be a bit lower than Rust’s. But I don’t see any part of it that really shouts “bad idea!” to me. I’m giving this a relatively low score more because the devs seem to be a little bit more willing than average to make design choices with small runtime costs, if it keeps the language itself simpler. Contrast with Rust’s unspoken rule of “if someone suggests a language feature and we can’t monomorphize it and put it on the stack, we aren’t doing it”. There’s no reason someone can’t write a high-perf compiler for Hare that throws LLVM at it for super-magical optimization, but that’s not the intention of the creators and so there will probably be some design decisions that make such things harder in the long run. And you know what? That’s fine.
Basic type system: 9/10 – Well it has tagged unions, which can also be Typescript-style `(int | foo)` types, and often are. Types seem fairly strong in general, there’s few implicit or unchecked conversions. It has pattern matching on its tagged unions, which must be exhaustive (though apparently there’s a compiler bug around that at the moment). Its numerical types are mostly explicit in size; I expect that `int` and `uint` are shortcuts that will eventually become more trouble than they’re worth, but that’s just how I roll. Integer overflow is unchecked but defined. There’s tuples, even if I had to dig a little to find them. There’s far greater sins possible though; on the whole I like it.
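As a taste, here’s what matching on one of those tagged unions looks like, using stdlib’s strconv (modulo any API churn since I wrote this):

```hare
use fmt;
use strconv;

export fn main() void = {
	// strconv::stoi returns (int | strconv::invalid | strconv::overflow).
	match (strconv::stoi("123")) {
	case let n: int =>
		fmt::printfln("got {}", n)!;
	case strconv::invalid =>
		fmt::println("not a number")!;
	case strconv::overflow =>
		fmt::println("out of range")!;
	};
};
```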
Generics: 2/10 – Basically no generic types outside of C-like arrays and pointers. Pointers to structs allow some amount of subtyping, though it’s pretty limited. Basically you can have one struct be a prefix of another, and then you can have a collection of pointers that point to either type. Maybe it will grow more capabilities as time goes on? As I said with C, if you’re writing mostly low-level or embedded software, lack of powerful generics matters surprisingly little… but when you start making fundamental libraries and larger, more chonky programs that Do Complicated Things and get maintained for years, generics get a lot more useful.
Spatial safety: 10/10 – Arrays are bounds checked. Type conversions are checked. You can skip ’em if you want but you have to try. Most functions return an error value and you generally can’t ignore errors; there’s a `!` operator which serves the same purpose as Rust’s `Result::unwrap()`. Pointers generally can’t be null and variables can’t be uninitialized. There’s a nullable pointer type that has to be checked explicitly. All in all I don’t know what it could do better, really. Maybe have `unsafe` blocks? But there’s no “safe” pointers, so idk what that would get you.
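The nullable-pointer checking, for instance, looks about like this:

```hare
use fmt;

export fn main() void = {
	let x = 42;
	let p: nullable *int = &x;
	// The compiler won't let you dereference p directly; you have to
	// match away the null case first.
	match (p) {
	case let q: *int =>
		fmt::printfln("points at {}", *q)!;
	case null =>
		fmt::println("nothing there")!;
	};
};
```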
Temporal safety: 2/10 – Can I return a pointer to a local variable? Oops, hell yeah I can, with no warning even. There’s some iterator-ish structure but modifying it can absolutely fuck your life up via iterator invalidation. It even has C’s function-scoped `static` variables, which seem useful mainly for making absolutely sure your programs aren’t thread-safe. Dangling pointers and double-frees and stuff are all basically on par with C, I don’t see anything resembling smart pointers. There’s a `defer` statement for cleanup rather than move semantics with automatic cleanup, but I guess it still gets a point for that.
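Concretely, something like this compiles without a peep, if my memory of poking at it is right:

```hare
use fmt;

// Returns a pointer to a stack slot that is dead by the time the
// caller looks at it. No warnings, C-style.
fn dangle() *int = {
	let x = 5;
	return &x;
};

export fn main() void = {
	let p = dangle();
	fmt::printfln("{}", *p)!; // undefined behavior, enjoy
};
```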
Modules: 5/10 – Unremarkable but solid. No `cargo`-like package repository system, or even a way to incorporate random git repos into your program like Go or Crystal, but maybe it will grow something like that eventually. The compiler includes a build system that will build module dependencies and seems to do a good job of caching intermediate results, though for some reason the project structure docs seem to have a bit of an unhealthy appreciation of Makefiles. Gotta ding them a point or two for that. If you insist upon not reinventing the world, make your build tool output ninja or makefiles for me. Life is too fucking short.
Low-level junk: 7/10 – Low-key but pretty good. You can specify symbols for functions, talk to C functions easily (even weird ones like `printf()`), stuff like that. There’s documentation for what the built-in runtime expects to have, stuff like `memmove()`, `abort()`, etc, and how to implement it “freestanding” – ie, in a microcontroller or OS kernel. It’s nice to have people actually thinking about this stuff. It’s actually really interesting to contrast this with Jai. While I know Jai entirely from secondhand sources, its attention to low-level stuff was all about “here’s how you can make a function really fast”. Hare’s seems much more about “here’s how you connect a bunch of pieces together”.
Separate compilation: 9/10 – Hare is explicitly described as using the system ABI, and outputs normal-ass C-like `.o` files along with textual metadata files. It will also attempt to automatically link to C libraries without much extra effort, though this feature looks like it’s rather WIP. It also outputs its `.ssa` IR files (it uses the QBE compiler backend), so theoretically you could make a backend that does whole-program compilation and optimization like Rust does. But currently it just appears to stick to a C-like linking model and accept the costs of this.
Joy: 7/10 – I haven’t written much Hare but it seems pretty satisfying, and without too many unexpected horrible footguns. The language core is far closer to what I would design than something like Odin or Jai is, and the user-interface shell around it is far closer to something I would design than Austral is. It’s one of the only languages on this list that really thinks hard about “how do we connect together existing software binaries written in multiple languages”, not just “here’s how you wrap the new language around existing C libraries”. It does have some biases I don’t like (Makefiles? In this economy???), but on the whole Hare seems like the best attempt so far at the “small simple systems language” space.
Dread: 4/10 – I’d really like to see some attempt at temporal memory safety, and you’ll have all the aliasing bugs and weird low-level hackiness around lack of generics that you have in C. But apart from that, Hare seems mostly as reasonable as I would like something to be.
Mean: 6.2 +/- 2.7
Median: 7.0
Closing thoughts: I actually like Hare a lot more than I did the first time I looked at it. There’s a bunch of Things I Would Do Differently, but that’s fine. You know how I said it’s really hard to make a Better C without screwing up some portion of it or another? Hare seems like a pretty good attempt at it. I think it should have generics and a borrow checker, but powerful generics like I want would be a very fundamental change and, speaking from experience, are a massive source of emergent complexity. If they really want to keep the language small, leaving them out makes sense. And you know, I think you could actually drop a borrow checker into Hare pretty easily. If you make a limited one that tries to be helpful rather than all-powerful it probably won’t entirely upend the entire language, and might not even make the authors hate you for it. Might be fun to try sometime.
Also, it has the cutest mascot ever. Just look at it. Look at it!
[Shitposting about the Go mascot elided; insert your own here.]
Honorable mentions
There’s a handful of languages that aim for “lower-level than C#, Java or Go” but “higher-level than C, Zig or Jai”. Most of these were started before Rust proved that there was a big need in the low-level lang space and that you didn’t absolutely need a garbage collector in the 21st century. I wanted to talk more about these but I’m tired of writing and researching, so I’ll just say they exist and toss out what my general impressions are.
Nim & D
Both of these languages started off roughly in the “C#/Java/Go” level of abstraction and slowly worked their way downward. D is much older than Nim, starting in 2001, and its motivation can be summarized as “C++ but better”. Nim on the other hand first appeared in 2008 and is more “Python but lower-level– PSYCH, it’s actually much more like Pascal! Fooled ya!” I like both of them but they never quite scratched the itch for me, which for a long time was “this language is really good for hobby gamedev”. You can do gamedev in either but neither of them quiiiiite met my bar. To me Rust was the first language that was really entirely better for this than C, without the acceptable-but-irritating compromises you’ll get with a managed runtime.
D especially never seemed to hit critical mass in terms of popularity: if I understand correctly it started with a GC, then made it optional, then realized that to make it optional you needed an entirely different stdlib, then had a community split over the different stdlib, and so on. D is pretty okay but it was created 5-10 years too early to hit when the market really wanted such a language, and has had a rough time growing up. I haven’t looked at it in a while but I have a hard time imagining its future being particularly bright; by now IMO most of what it does has been done better by other tools. Shame, it deserved better.
Nim on the other hand grew up in a somewhat later era, learned some things from the mistakes of the past, and didn’t commit to “don’t break backwards compat too much” until much later in its life. As of the end of 2024, Nim 2.0 is– oh, it actually had its first release in late 2023! Shit, I totally missed it! Looks like after a long dev cycle, version 2.2 was released in October 2024, so go check it out! Nim always leaned more on reference counting than tracing GC, if I understand correctly, but Nim 2 defaults to refcounting with a cycle collector, and has several different allocators available as compilation flags so you can just plop the Boehm GC into your program if you want and see how it compares. The biggest changes in Nim 2 seem to be around move semantics and RAII, so it can do a better job of removing unneeded refcounts and copies, and even without automatic refcounting it offers an above-average amount of control over pointers and memory. I’ll have to take another look at it someday!
Swift
Wasn’t originally in this list but if I left it out someone would have asked about it. I don’t consider Swift a systems language, it’s more of what I call an “application language” like Nim and D are, but it’s much more popular than Nim and D so is, to me, less interesting to talk about. As far as I’ve seen Swift Seems Fine(tm) but it seems to have the Scala Problem where they made some bad design choices that they will never be able to shake off. I’ve also heard complaints about its CoW memory model doing Sneaky Things in bad ways. Since it’s also made by Apple and will probably never be a first-class citizen on any system that doesn’t treat you as a captive ATM, I consider it “useful reference point to learn from” but I’ll probably never have a reason to use it in anger.
Circle & Carbon
These are both very explicitly “C++ successors”. It is now clear that Circle is primarily an experiment by the dauntless Sean Baxter, intended to get the C++26 committee talking about memory safety. It doesn’t seem like Sean intends to develop it much further, as far as I can tell, though there’s nothing stopping someone else from picking it up and running with it. Carbon on the other hand is a Google research project to make a cleaned up C++ while still having some amount of interoperability with it. I think it’s kinda trying to answer the question of “if we wanted a language we could incrementally port existing C++ code to, what would it look like?” But it’s also a Google research project, so who knows whether it will go anywhere in the long term.
Austral
Austral released in late 2021, and its first commit was sometime around 2018. It’s notable for being one of the few languages around with an actually functioning borrow checker comparable to Rust’s, though it’s a lot closer to what Rust 1.0 was like and still a bit more limited even then. I’ve written about Austral before and it’s pretty wild, but I don’t think I have a lot to change or update about what I previously said, and I don’t have energy to double-check what’s changed since then too deeply. It’s very borderline between “maybe kinda actually usable for real things” and “weird small hobby language”, but I like it for its aggressive disdain of fashion so I’m putting it here.
Weird indie shit
I wanted to highlight some of the more interesting small language projects doing interesting things kinda-sorta in this space, but I’m tired of writing, and they are by definition hard to find and under-developed. So I’m just gonna say a few words about the ones that I like.
Vale is a language being made by the inimitable Verdagon, who first caught my attention by writing articles with titles like “Borrow checking, RC, GC, and the Eleven Other Memory Safety Approaches” or, more recently, “Crossing the Impossible FFI Boundary, and My Gradual Descent Into Madness”. As you may suspect from these titles, Vale seems like a nice but not-especially-remarkable Rust-descended language right up until you start getting into the borrowing model, which is heavily in development but also apparently hard at work exploring every possible approach and weighing them on pragmatism and ergonomics. Lately it seems to have settled on some form of generational references, which is a pretty cool idea, but who knows if it will stay there long term. Pragmatism and ergonomics are really the hard design issues when dealing with compile-time memory safety, so whether or not I ever touch Vale for real I’m very happy to have someone tread this ground so I can learn from their meanderings.
Lobster’s docs mention that it manages memory with reference counting, with borrow checking used to optimize out refcounts when possible. I was going to complain in the conclusion of this that not enough languages are attempting this sort of thing, but now I don’t have to! I’m not sure I’d call it a “systems language” since it seems to have a runtime with a JIT, but it looks quite interesting as a “next gen application language” on the level of Nim or Crystal. Or– well, Swift, which also aims at this level of abstraction and does “reference counting but cooler”. I dunno if Lobster is super compelling to me, but it looks pretty neat, so I’ll mention it here. It’s my list, you can’t stop me.
Red is one of those strange and startlingly-powerful little systems like Haxe or ROX that gets very little mainstream press or usage, but just quietly badgers along in the background having its own weird adventures. Apparently inspired by REBOL, another language I’ve never heard of, it’s been chugging along steadily since 2011 and looks like the love-child of Tcl and Lisp. While it features raw pointers and implements its own runtime in itself, it seems to be much better at mongling complex data and embedding/being embedded into other programs as a DSL.
Scopes was started in 2016 “as an alternative to C++ for programming computer games and related tools” – what IS it with indie gamedev people making languages? It’s almost like low-latency programming with good FFI and a high performance ceiling is a field that has been forced to put up with shitty tools for decades… 🤔 Scopes fearlessly describes itself as “describing source code with S-expressions but delimiting blocks by indentation rather than braces”, so you know already that the author has a high sense of self-esteem and is not worried about what other people will think of their aesthetic choices. More interestingly it has manual memory management with “view propagation”, which it describes as a novel approach to borrow checking. It appears to be some form of Rust’s move semantics/affine types and scope-limited borrow checking, but there isn’t too much else written about it yet that I can find. Worth keeping an eye on.
Hylo used to be called Val;
considering that Vale also exists I am glad they changed it up a little
to disambiguate. Hylo is a language with compile-time memory safety
based on mutable
value semantics rather than borrow checking. It looks like this is a
formalism for the same sort of thing that Swift does – local
values are mutable, when you share them between functions they become
immutable-ish and backed by copy-on-write. Going from the examples
though, in practice it looks like a more powerful version of Rust’s move
semantics. You can do things like write a function that can take two
different values by “reference”, mutates them, and decides which one to
return at runtime. That’s the sort of interaction that needs some work
to convince Rust’s borrow checker to accept, since you can’t just write `fn(&'a Thing, &'b Thing) -> &('a | 'b) Thing`.
It seems like an interesting idea taking from both Rust and Swift, so
we’ll see how it goes.
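For contrast, the closest thing Rust will let you write forces both inputs to share a single lifetime, which over-constrains the caller’s borrows; this compiles, but both values stay locked for as long as the result lives:

```rust
struct Thing {
    n: i32,
}

// Both references get the same lifetime 'a, so the caller's two borrows
// are tied together for as long as the returned reference is alive.
fn pick<'a>(a: &'a mut Thing, b: &'a mut Thing, first: bool) -> &'a mut Thing {
    if first {
        a
    } else {
        b
    }
}

fn main() {
    let mut x = Thing { n: 1 };
    let mut y = Thing { n: 2 };
    let chosen = pick(&mut x, &mut y, true);
    chosen.n += 10;
    println!("{}", chosen.n);
}
```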
And of course there’s Garnet, my own humble entry into
this space. Its goal is really to take Rust, leave the borrow checker
mostly the way it is, and instead simplify and trim down the rest of the
language until writing a compiler for it is not a multi-person-year
undertaking. Ironically my own understanding of type systems and type
checkers has been stuck in design hell for several years now, which has
been frustrating as shit but also… well, educational. I finally feel
pretty sure that I know how to write a type checker that does everything
that I want, and that what I want will make a pretty interesting
language. Besides chopping out async, replacing traits with a simpler
but lower-level system, and doing some ergonomic cleanup around moving
and pinning of memory, I want Garnet to put more work than Rust into
making low-level unsafe code nicer to use and easier to verify as
correct. Every embedded toolchain in existence currently starts with a C
compiler, and porting rustc
to a new platform (especially a
tiny weird one) is a pretty significant undertaking. So I wanna make it
so people can start bootstrapping their new systems with a Garnet
compiler instead, similar in spirit to how Lua can be hacked up and
squished into just about any kind of environment you care to stuff it
into. It seems like a valuable niche to fill. Will my baby really be
simpler than Rust to implement and mostly as nice to use? Welllll,
I think it probably will, but we’ll have to find out
the hard way.
Conclusions
Whew, this was fun! Turns out I quite like Zig and Hare; I might have to come up with a smol video game so I have an excuse to write stuff in them. Heckin’ all these languages have unplumbed depths full of weird and unique decisions worth rummaging around in, and all of them (except maybe Ada) sound like more fun and less work to use than C. Hey, maybe even Ada will surprise me if I git gud at it. There’s also a lot more variety in the design space than I expected, with some interesting differences in priorities. For example Hare and Zig seem to come from “people who build systems”: they think a lot about building programs that are there to be reliable and to interact with other programs. Jai and Odin on the other hand come much more from “gamedev people”, so they think a lot more about building programs that are nice to experiment with and which interact with people. The different priorities and assumptions involved shape the different approaches really strongly.
I also feel like there’s two evolutionary lineages of development, partially overlapping with those different priorities. Instead of “what is important to you” it’s more about the assumptions in the program. There’s the “functional-programming lineage” that includes Rust and (to my modest surprise) Hare, and the “imperative programming lineage” that includes C, Jai and Odin. Zig again makes the Hard And Interesting Choice by seeming to straddle the lines in atypical ways. The distinction between these heritages isn’t really cosmetic (the world is slowly becoming more comfy with changing up C-like syntax and its many papercuts); rather, it’s about how the type system works and how variables are treated. Are variables immutable by default, non-nullable, and is initialization and pass-by-reference always explicit? Or are variables mutable by default, initialized to some default “zero” value, and do you have types that are obviously represented by pointers under the hood? I definitely prefer the Rust lineage personally, as you may have noticed. To me it seems far less error-prone, and I suck with details and like thinking more about how pieces connect to each other, but it’s definitely also the more tedious style. Different people have different happy places.
A random observation that entertains me: fucking nobody in this list wants to do C-style `type varname = value;` variable declarations. Everyone does some variation of name-then-type, whether they choose to do `keyword varname: type = value` like Zig or Hare, or just `varname : type = value` like Odin or Jai. Similarly, everyone declares function args as `varname: type` rather than C’s `type varname`, and the return types of functions always end up after the function rather than before. It’s a small thing, but it’s noticeably simpler and less ambiguous to parse2. Syntax Kinda Doesn’t Matter, but this is a palpable improvement and it’s so seldom that half a dozen separate programming languages agree on anything that it’s kinda surreal to see it happen.
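Lined up, the convergence is pretty striking:

```
C:     int x = 5;            int f(int a) { ... }
Zig:   const x: i32 = 5;     fn f(a: i32) i32 { ... }
Odin:  x: int = 5            f :: proc(a: int) -> int { ... }
Hare:  let x: int = 5;       fn f(a: int) int = { ... };
```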
So yeah. I started really programming in 2001 or so, and for most of my life it seemed like there would never be a practical substitute for C or C++ that anyone would actually use. Fortunately, it seems like I’ve been proven wrong. Now go out and do something crazy with this information! Make a video game! Write a Jai compiler! Port Nim to an ESP8266! Sky’s the limit!
I asked a Spanish-speaking friend if there was a gender-neutral version of “cojones” and he instantly wrote “coxones” and then said “…I nearly gave myself a seizure typing that”. He was very upset with me when I responded with “that’s brilliant, thanks”, so now I have to use it.↩︎
From a compiler’s perspective, this is mainly ’cause variable names are usually just single symbols, while types are complicated compound things that can nest and contain other types. So when you parse a program from start to end, figuring out `simple_identifier complicated_type = complicated_expression maybe_more_stuff` is way easier than `complicated_type simple_identifier = complicated_expression maybe_more_stuff`. The trick is when you repeat in the `maybe_more_stuff` portion of the grammar; the question the parser has to ask is “am I looking at a name or a type?” and types and names can look like each other. But types are more complicated than names, so having a keyword or other simple-to-confirm token to latch on to so the parser knows “this is a variable declaration, not a function call or assignment or something” makes life a lot easier.↩︎