Static News
wspeirs | 66 comments

haberman|next|

> Non-goals: Drop-in replacement for CPython: Codon is not a drop-in replacement for CPython. There are some aspects of Python that are not suitable for static compilation — we don't support these in Codon.

This is targeting a Python subset, not Python itself.

For example, something as simple as this will not compile, because lists cannot mix types in Codon (https://docs.exaloop.io/codon/language/collections#strong-ty...):

    l = [1, 's']

It's confusing to call this a "Python compiler" when the constraints it imposes pretty fundamentally change the nature of the language.

BiteCode_dev|parent|next|

For a real compiler, try Nuitka.

bpshaver|parent|prev|next|

Who is out here mixing types in a list anyway?

dathinab|root|parent|next|

Parsing JSON is roughly of the type:

type Json = None | bool | float | str | dict[str, Json] | list[Json]

You might have similar situations for configs, e.g. float | str for a time in seconds or a human-readable time string like "30s", etc.

Given how fundamental such things are, I'm not sure there will be any larger projects (especially wrt. web servers and similar) which are compatible with this.

Also, many commonly used features for libraries/classes etc. are not very likely to work (but I don't know for sure; they are just very dynamic in nature).

So IMHO this seems to be more like a Python-like language you can use for some scientific computations and similar, than a general-purpose faster Python.
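To make the JSON point concrete, here is a minimal check in stock CPython using only the stdlib `json` module:

```python
import json

# One list, four element types: exactly the shape a strict list[T] rejects.
data = json.loads('[1, "s", null, {"k": 2.5}]')
print([type(x).__name__ for x in data])  # ['int', 'str', 'NoneType', 'dict']
```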


bpshaver|root|parent|next|

Agreed, I was just joking. I understand heterogeneous lists are possible with Python, but with the use of static type checking I feel like it's pretty rare for me to have heterogeneous lists unless it's duck typing.

JonChesterfield|root|parent|next|

If your language obstructs heterogeneous lists your programs will tend to lack them. Look for classes containing multiple hashtables from the same strings to different object types as a hint that they're missed.

Whether that's a feature is hard to say. Your language stopped you thinking in those terms, and stopped your colleagues from doing so. Did it force clarity of thought or awkward contortions in the implementation? Tends to depend on the domain.


orf|root|parent|prev|next|

It’s common to have a list of objects with different types, but which implement the same interface. Duck typing of this kind is core to Python.
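A minimal sketch of that pattern (the class names are illustrative, not from any library):

```python
class Dog:
    def speak(self) -> str:
        return "woof"

class Cat:
    def speak(self) -> str:
        return "meow"

# A heterogeneous list of distinct types sharing the same informal interface.
animals = [Dog(), Cat()]
print([a.speak() for a in animals])  # ['woof', 'meow']
```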

bpshaver|root|parent|next|

Good point.

itishappy|root|parent|prev|next|

The json module returns heterogeneous dicts.

https://docs.python.org/3/library/json.html


bpshaver|root|parent|next|

Yeah, just because it can do that doesn't mean that it is good design.

dekhn|root|parent|prev|next|

I've been mixing types in Python lists for several decades now. Why wouldn't you? It's a list of PyObjects.

CaptainNegative|root|parent|prev|next|

I often find myself mixing Nones into lists containing built-in types when the former would indicate some kind of error. I could wrap them all into a nullable-style type, but why shouldn't the interpreter implicitly handle that for me?
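That pattern might look like the following minimal sketch (`safe_div` is an illustrative name, not a real API):

```python
from typing import Optional

def safe_div(a: float, b: float) -> Optional[float]:
    # None signals an error condition instead of raising an exception.
    return a / b if b != 0 else None

# The resulting list mixes floats and Nones.
results = [safe_div(1, 2), safe_div(1, 0), safe_div(3, 4)]
print(results)  # [0.5, None, 0.75]
```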

bpshaver|root|parent|next|

Yeah, that seems fair.

nicce|root|parent|prev|next|

Everyone who chooses Python in the first place.

bpshaver|root|parent|next|

Well, I'm one of those people, and I feel that I rarely do this. Except if I have a list of different objects that implement the same interface, as another commenter mentioned.

__mharrison__|root|parent|prev|next|

Someone who is using Python the wrong way.

RogerL|root|parent|prev|next|

    return [key, value]

ghxst|root|parent|next|

Why would you do this over `return key, value` which produces a tuple? Just curious.
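For comparison, a quick sketch of the two return styles; both unpack identically at the call site:

```python
def as_list(key, value):
    return [key, value]   # a two-element list

def as_tuple(key, value):
    return key, value     # a tuple; the parentheses are optional

k, v = as_list("age", 32)   # lists unpack too
k, v = as_tuple("age", 32)  # but a tuple conveys fixed size and mixed types
print(type(as_list("age", 32)).__name__, type(as_tuple("age", 32)).__name__)
# list tuple
```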

0xDEADFED5|root|parent|next|

javascript refugee?

dgan|root|parent|prev|next|

Not the parent, but I return heterogeneous lists of the same length to Excel to be used by xlwings. The first row is the headers, but every row below is obviously heterogeneous.

quotemstr|parent|prev|next|

It's not even a subset. They break foundational contracts of the Python language without technical necessity. For example,

> Dictionaries: Codon's dictionary type does not preserve insertion order, unlike Python's as of 3.6.

That's a gratuitous break. Nothing about preserving insertion order interferes with compilation, AOT or otherwise. The authors of Codon broke dict ordering because they felt like it, not because they had to.

At least Mojo merely claims to be Python-like. Unlike Codon, it doesn't claim to be Python then note in the fine print that it doesn't uphold Python contractual language semantics.


orf|root|parent|next|

Try not to throw around statements like “they broke dict ordering because they felt like it”.

Obviously they didn’t do that. There are trade-offs to preserving dictionary ordering.


baq|root|parent|next|

dicts ordering keys in insertion order isn't an implementation detail anymore, and hasn't been for years.
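That guarantee is easy to demonstrate; since Python 3.7 it is part of the language spec, not an implementation detail:

```python
# Insertion order is preserved regardless of key hash order.
d = {}
for k in ["b", "a", "c"]:
    d[k] = True
print(list(d))  # ['b', 'a', 'c'], guaranteed by the spec since 3.7
```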

dathinab|root|parent|prev|next|

If you claim

> high-performance Python implementation

then no, these aren't trade-offs but a break with the standard without it truly being necessary.

Most importantly, this will break code in a subtle and potentially very surprising way.

They could just claim to be Python-like, and then no one would hold it against them for not keeping to the standard.

But if you are misleading about your product, people will take offense even if it isn't intentional.


actionfromafar|root|parent|prev|next|

The trade-off is a bit of speed.

cjbillington|root|parent|next|

This might be what you meant, but the ordered dicts are faster, no? I believe ordering was initially an implementation detail that arose as part of performance optimisations, and only later declared officially part of the spec.

Someone|root|parent|next|

> but the ordered dicts are faster, no?

They may be in the current implementations, but removing an implementation constraint can only increase the solution space, so it cannot make the best implementation slower.

As a trivial example, the current implementation that guarantees iteration happens in insertion order also is a valid implementation for a spec that does not require that guarantee.


adammarples|root|parent|prev|next|

Well, would you claim that Python 3.5 isn't Python?

stoperaticless|root|parent|next|

All versions of python are python.

If a lang is not compatible with any of the Python versions, then the lang isn’t Python.

False advertising is not nice (even if the fine print clarifies).


thesz|root|parent|next|

> If a lang is not compatible with any of the Python versions, then the lang isn’t Python.

Python versions are not compatible among themselves, as Python does not preserve backward compatibility; ergo, Python is not Python.


KeplerBoy|root|parent|next|

The words "any" and "all" have a different meaning.

odo1242|parent|prev|next|

Yeah, it feels closer to something like Cython without the Python part.

jjk7|parent|prev|next|

The differences seem relatively minor. Your specific example can be worked around by using a tuple, which in most cases does what you want.

itishappy|root|parent|next|

Altering Python's core datatypes is not what I'd call minor.

They don't even mention the changes to `list`.

> Integers: Codon's int is a 64-bit signed integer, whereas Python's (after version 3) can be arbitrarily large. However Codon does support larger integers via Int[N] where N is the bit width.

> Strings: Codon currently uses ASCII strings unlike Python's unicode strings.

> Dictionaries: Codon's dictionary type does not preserve insertion order, unlike Python's as of 3.6.

> Tuples: Since tuples compile down to structs, tuple lengths must be known at compile time, meaning you can't convert an arbitrarily-sized list to a tuple, for instance.

https://docs.exaloop.io/codon/general/differences

Pretty sure this means the following doesn't work either:

    config = { "name": "John Doe", "age": 32 }

Note: It looks like you can get around this via Python interop, but that further supports the point that this isn't really Python.
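The integer difference alone is observable in one line of stock CPython (nothing here is Codon-specific):

```python
# CPython ints are arbitrary precision; a 64-bit signed int tops out at 2**63 - 1.
big = 2 ** 80
print(big)                # 1208925819614629174706176
print(big > 2 ** 63 - 1)  # True: this value would overflow a 64-bit int
```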

dathinab|root|parent|next|

> Strings: Codon currently uses ASCII strings unlike Python's unicode strings.

Wtf, this is a super big issue, making this basically unusable for anything handling text (and potentially even just identifiers: if you aren't limited to the EU+US, non-US-ASCII identifiers in code or text are common, i.e. while EU companies mostly code in English, that's much less likely in Asia, especially China and Japan).

It isn't even really a performance benefit compared to UTF-8, since UTF-8 that uses only US-ASCII characters _is_ US-ASCII, and you don't have to use Unicode-aware string operations.
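The ASCII-compatibility point is easy to check in plain CPython:

```python
# A pure-ASCII string encodes to identical bytes under ASCII and UTF-8,
# so an ASCII fast path costs a UTF-8 implementation nothing for ASCII data.
s = "hello"
print(s.encode("ascii") == s.encode("utf-8"))  # True

# Non-ASCII text still round-trips cleanly through UTF-8.
t = "héllo"
print(t.encode("utf-8").decode("utf-8") == t)  # True
```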


actionfromafar|prev|next|

I immediately wonder how it compares to Shedskin¹

I can say one thing: Shedskin compiles to C++, which was very compelling to me for integrating into existing C++ products. Actually, another thing too: Shedskin is open source under GPLv3 (like GCC).

1: https://github.com/shedskin/shedskin/


crorella|parent|next|

It looks like Codon has fewer restrictions compared to Shedskin.

actionfromafar|root|parent|next|

I suppose that's right; I don't think Shedskin can call NumPy yet, for instance. On the other hand, it seems easier to put Shedskin on an embedded device.

Lucasoato|prev|next|

> Is Codon free? Codon is and always will be free for non-production use. That means you can use Codon freely for personal, academic, or other non-commercial applications.

I hope it is released under a truly open-source license in the future; this seems like a promising technology. I'm also wondering how it would match C++ performance if it is still garbage collected.


troymc|parent|next|

The license is the "Business Source License 1.1" [1].

The Business Source License (BSL) 1.1 is a software license created by MariaDB Corporation. It's designed as a middle ground between fully open-source licenses and traditional proprietary software licenses. It's kind of neat because it's a parametric license, in that you can change some parameters while leaving the text of the license unchanged.

For Codon, the "Change Date" is 2028-03-01 and the "Change License" is "Apache License, Version 2.0", meaning that the license will change to Apache 2.0 in March of 2028. Until then, I guess you need to make a deal with Exaloop to use Codon in production.

[1] https://github.com/exaloop/codon?tab=License-1-ov-file#readm...


axit|root|parent|next|

From what I've seen, the "Change Date" is usually updated with each release, so you always have a few-years-old version of the software under the Apache License and the latest software under the BSL.

actionfromafar|root|parent|next|

Just to make it clear - the cutoff date on previously released software remains the same. So if you download it now and wait a few years, your software will have matured into its final form, the Apache 2 license.

troymc|root|parent|prev|next|

That makes sense. Thanks for clarifying.

w10-1|prev|next|

Unclear if this has been in the works longer than the GraalVM LLVM build of Python discussed yesterday [1]. The first HN discussion is from 2022 [3].

Any relation? Any comparisons?

Funny, I can't find the license for GraalVM Python in their docs [2]. That could be a differentiator.

- [1] GraalVM Python on HN https://news.ycombinator.com/item?id=41570708

- [2] GraalVM Python site https://www.graalvm.org/python/

- [3] HN Dec 2022 https://news.ycombinator.com/item?id=33908576


mech422|parent|prev|next|

Might want to look at PyPy too: https://pypy.org/features.html

codethief|prev|next|

Reminds me of these two projects which were presented at EuroPython 2024 this summer:

https://ep2024.europython.eu/session/spy-static-python-lang-...

https://ep2024.europython.eu/session/how-to-build-a-python-t...

(The talks were fantastic but they have yet to upload the recordings to YouTube.)


veber-alex|prev|next|

What's up with their benchmarks [1]? The page just shows benchmark names; I don't see any numbers or graphs. Tried Safari and Chrome.

[1]: https://exaloop.io/benchmarks/


sdmike1|parent|next|

The benchmark page looks to be broken; the JS console is showing some 404'd JS libs and a bad function call.

pizlonator|parent|prev|next|

Also those are some bullshit benchmarks.

It’s not surprising that you can make a static compiler that makes tiny little programs written in a dynamic language into fast executables.

The hard part is making that scale to >=10,000 LoC programs. I dunno which static reasoning approaches Codon uses, but all the ones I’m familiar with fall apart when you try to scale to large code.

That’s why JS benchmarking focused on larger and larger programs over time. Even the small programs that JS JIT writers use tend to have a lot of subtle idioms that break static reasoning, to model what happens in larger programs.

If you want to get in the business of making dynamic languages fast then the best advice I can give you is don’t use any of the benchmarks that these folks cite for your perf tuning. If you really do have to start with small programs then something like Richards or deltablue are ok, but you’ll want to diversify to larger programs if you really want to keep it real.

(Source: I was a combatant in the JS perf wars for a decade as a webkitten.)


amelius|prev|next|

The challenge is not just to make Python faster, it's to make Python faster __and__ port the ecosystem of Python modules to your new environment.

GTP|prev|next|

People that landed here may be interested in Mojo [0] as well.

[0] https://www.modular.com/mojo


tony-allan|prev|next|

I would love to see LLVM/WebAssembly as a supported and documented backend!

big-chungus4|prev|next|

So, assuming I don't need integers bigger than int64 and don't rely on the order of built-in dicts, can I just use arbitrary Python code with Codon? Can I use external libraries? NumPy, PyTorch? I also noticed that it isn't supported on Windows.

shikon7|prev|next|

From the documentation of the differences with Python:

> Strings: Codon currently uses ASCII strings unlike Python's unicode strings.

That seems really odd to me. Who would use a framework nowadays that doesn't support unicode?


Sparkenstein|prev|next|

Biggest problem at the moment is async support, I guess

https://github.com/exaloop/codon/issues/71


timwaagh|prev|next|

It's a really expensive piece of software; that's why they don't publish their prices. I don't think it's reasonable to market such products to your average dev because of it. Anyhow, Cython and a bunch of others provide free and open-source alternatives.

zamazan4ik|prev|next|

I hope one day the compiler itself will be optimized even more: https://github.com/exaloop/codon/issues/137

jitl|prev|next|

What’s the difference between this and Cython? I think another comment already asks about shedskin.

rich_sasha|parent|prev|next|

Cython relies heavily on the Python runtime. You cannot, for example, make a standalone binary with it. A lot of unoptimized Cython binary is just Python wrapped in C.

From a quick glance this seems to genuinely translate into native execution.


edscho|root|parent|next|

You absolutely can create a standalone binary with Cython: see the `--embed` option [1].

[1] https://cython.readthedocs.io/en/stable/src/tutorial/embeddi...


jay-barronville|prev|

Instead of building their GPU support atop CUDA/NVIDIA [0], I’m wondering why they didn’t instead go with WebGPU [1] via something like wgpu [2]. Using wgpu, they could offer cross-platform compatibility across several graphics APIs, covering a wide range of hardware including NVIDIA GeForce and Quadro, AMD Radeon, Intel Iris and Arc, ARM Mali, and Apple’s integrated GPUs.

They note the following [0]:

> The GPU module is under active development. APIs and semantics might change between Codon releases.

The thing is, based on the current syntax and semantics I see, it’ll almost certainly need to change to support non-NVIDIA devices, so I think it might be a better idea to just go with WebGPU compute pipelines sooner rather than later.

Just my two pennies…

[0]: https://docs.exaloop.io/codon/advanced/gpu

[1]: https://www.w3.org/TR/webgpu

[2]: https://wgpu.rs


MadnessASAP|parent|

Well, for better or worse, CUDA is the GPU programming API. If you're doing high-performance GPU workloads, you're almost certainly doing it in CUDA.

WebGPU, while it states that compute is within its design goals, is I'd imagine focused on presentation/rendering, and probably not on large, demanding workloads.