Ramblings

Stop teaching Git to the new generation (move to Jujutsu instead)

April 27, 2026

Most content in my tech bubble these days is about how AI is shifting software development. The “new age”, the major “new way” to build. But I believe a different, quieter shift has happened, one that is well worth writing about. It has made my daily life better for the past few months.

I’ve come to a conclusion that might sound like heresy: Git’s time as our primary interface is nearing its end. We are at a stage where new projects and new learning material should focus exclusively on the next tool: Jujutsu, aka jj.

My pitch is this: Git won the masses thanks to its network effect, and the vast majority of projects rely on it for day-to-day operations. Let’s learn from that journey, keep the good parts (the efficient storage model and broad adoption), but switch to a different user frontend that focuses on making our lives easier and less error-prone.

My VCS experience

I am a bit of a version control system nerd. I’m old enough that my first projects used CVS. I saw the gradual shift to SVN. University was a chaotic mix of SVN, CVS, and a newcomer named Git.

While looking for a Google Summer of Code project in my second-to-last year, I stumbled upon a friendly community, the Mercurial folks, and stuck around for a while. The tool felt intuitive, foolproof, and honestly – kind. I spent that summer hunting for bugs in the Mercurial codebase, debugging inotify interactions, hoping to make the experience smoother for everyone.

My first job was at a shop working with Subversion. For my second job, I interviewed at Atlassian, because they were a fierce supporter of the Mercurial ecosystem. I ended up joining a Mercurial colleague at Google instead, and I’ve been there ever since. (Funny how small VCS choices shaped so much of my life’s trajectory.)

At Google, I ran into Perforce, then Piper. There’s fig, a Mercurial-based CLI for some limited DVCS-like operations. And recently, I’ve switched exclusively to jj for all of my needs, at work and for personal projects.

Can a VCS be intuitive?

The Mercurial vision was that the version control experience should be intuitive and foolproof: a tool which works and does no harm. As a 20-something software developer, this felt like a very appealing philosophy.

For a variety of reasons, Git won over the VCS market, and is now a quasi-monopoly. Like all of you, I have to use Git often. Like most of you, I acknowledge the limitations of Mercurial and Python (speed, mostly). And yet this Hacker News comment still echoes how I feel (emphasis mine):

I now use git but always liked fig/mercurial better and wish it had won the version control space. When people ask me what I liked about mercurial, I tell them that I could rebase, cherry pick, branch and do whatever else I could think of without needing to Google it or going to stackoverflow, it was just intuitive. I can’t seem to grok git that way. Any time I need to do anything more than branch, stage, commit, push, I have to [look it up].

I’ve lost weeks of my life looking up how to do “simple” things in Git, or trying to recover data after passing the wrong argument to git reset (soft reset? hard reset? Urgh.). My biggest gripe is that the tool does not attempt to curb its own power: it hands you a loaded footgun and wishes you luck.

I honestly think we developers suffer from a mild case of Stockholm Syndrome. We’ve convinced ourselves that Git is a “hackerz l33t tool” and find it hard to admit that it has glaring UX pitfalls.

When I think of the next generation of software developers, I know that they will need high proficiency with a DVCS, and I know that they’ll be exposed to very many Git repositories. That being said, these questions make me uncomfortable:

Do I want to expose a beginner to the sharp edges of git? Am I encouraging negative systemic patterns without self-reflection? Could we break free from this historical habit?

To me, it’s clear: if we can save a generation from the git pains and only keep the DVCS “goodness”, we should do it.

Why the Jujutsu approach works

The memes from this blog are pretty good; you might want to skim through it.

Technology switches are notoriously hard. It would be nearly impossible to shift thousands of companies and developers away from their Git repositories and ecosystems (GitHub, GitLab, etc.) overnight.

This is exactly why I believe jj has a real shot at taking over the world: it doesn’t ask you to leave your familiar Git ecosystem; it just gives you a better way to interact with it.

I used to juggle several tools depending on the context (git, Mercurial/Fig, or Piper). But for the last five months, I’ve been using Jujutsu exclusively for everything: work, GitHub, all of it. Using a single tool for all my development workflows has been, frankly, quite liberating.

Overall, jj feels like it finally offers the best of all worlds. Responsiveness and performance are excellent, comparable to Git, which addresses the old speed complaints people had about Mercurial. Unlike Git, though, the learning curve was refreshingly brief: I feel fully onboarded, and more importantly, I never feel like I’m about to accidentally “nuke” my repository or working copy.

I could gradually move my Git-based workflows over without really thinking about it: the scripts which use git as a tool or as a library (e.g. GitHub Actions) didn’t need any changes. I just had to pick a good VSCode extension for jj and could get started without much friction (I like jj-view).

One of my older repositories was using Git submodules instead of the modern npm-like way of pulling in dependencies. Submodules are famously not supported in jj. Git fans often point this out as a dealbreaker, but honestly? I’m thankful for it. It forced me to migrate to a modern approach, which now allows for automatic upgrades of dependencies via GitHub Actions. Sometimes, losing a feature is actually a blessing in disguise.

Wrapping up: my recommendations

  • Do give jj a try. It’s free, and quite addictive: you probably won’t look back.
  • I believe that the git CLI era is over. Jujutsu is production-ready for over 90% of use cases.
  • We should be teaching jj to the next generation of developers. Schools and company onboarding materials should shift their focus to recommend Jujutsu. Let’s stop exposing new developers to Git’s sharp edges.
  • I strongly think that new companies should set themselves up with jj in mind from day one.
  • Existing companies should encourage a gradual migration to jj for improved productivity.

This post was authored with jj’s help, and Gemini helped me fix about 10% of its content.

AI: The confidence trap

March 16, 2026

I read an opinion piece recently (fine, it was a TikTok) about Nasdaq potentially changing rules ahead of the SpaceX IPO. The claim was that this would cause a “force buy” for QQQ index managers – effectively forcing passive investors to buy in at any price. This would likely allow non-index holders to predict index manager buys, while inflating the exit value for SpaceX employees and VCs.

I decided to do what I suspect most people will do in 2026 – I asked an AI agent to stress-test the idea. What followed was a fascinating look at how AI logic can slowly come apart at the seams during a long conversation.

The Competent Start

At first, we had a very solid discussion. The model confirmed the possible index rush and explained the underlying mechanics clearly. It seemed well-sourced, and I couldn’t spot any immediate errors. Frankly, that initial chat helped me deepen my understanding of the controversy and was very valuable.

The “Big Number” Trap

The shift started when I fed it a simulated portfolio. I asked the model to simulate several different market conditions over 15 years. Across various scenarios, it became clear that some parameters had a minimal impact – a single-digit percentage.

But then it got stuck. The model couldn’t get past the fact that 1% of a $5M portfolio is $50,000. It anchored so hard on those 150k changes that it started to lose the thread. It began insisting on various urgent rebalances to control most parameters, because the numbers felt big.
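
To see why those absolute figures are so seductive, here is the back-of-the-envelope arithmetic in a few lines of Python (the numbers are illustrative placeholders, not my actual inputs):

# Illustrative arithmetic only: a single-digit-percentage parameter looks enormous
# in absolute dollars, yet it is dwarfed by the spread between market scenarios.
portfolio = 5000000        # hypothetical portfolio size, in USD
parameter_impact = 0.01    # a "single-digit percentage" effect from one parameter
scenario_spread = 0.30     # made-up gap between a good and a bad 15-year scenario

print("Parameter effect: $%d" % (portfolio * parameter_impact))  # $50,000 -- feels huge
print("Scenario spread:  $%d" % (portfolio * scenario_spread))   # $1,500,000 -- the real driver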

The Decay of Precision

The longer the conversation went, the less precise it became. I don’t think these models are really evaluated on, or great at, lengthy interactions yet. It started getting stuck on earlier bits of the chat and losing its ability to make sense of new information.

By the time we discussed currency risks (CHF/USD), it had completely lost the plot. To “prove” that currency hedging was cheap, it hallucinated an ISIN for a “CHF-hedged S&P 500” ETF. (In reality, the ISIN pointed to an unrelated – but cheap – UBS green energy fund).

The model then used that green fund’s ultra-low TER (0.06%) as the “cost” of currency hedging, while ignoring both that real hedged ETFs have much higher TERs – and that TER costs are in any case dwarfed by the actual currency hedging penalties (maybe a 3-4% penalty? I don’t claim to be an expert)… Of course, if you could currency-hedge at < 0.1% total cost, everyone would take that option, duh … :-/

(I’m eliding many other strange mistakes and nonsensical interactions for brevity).

The “Expert Posture” Problem

Now, I expected some disagreement. Discussing investment strategies, even with an educated human, leads to diverse outcomes. It’s highly personal, and predicting the future is hard. I didn’t expect the agent to have a single “correct” answer.

What really struck me was how convincing the model was. Even when its assumptions were false or its data was fabricated, it spoke with massive authority and certainty.

Positioning itself as a confident expert, it tried really hard to have me change my investment strategy. And several times I had to pause and question myself: am I wrong? Did the AI find something new I had ignored so far? Am I the idiot?

As the human in the loop, I found it surprisingly difficult to force the model to see its own mistakes once it had decided on a “helpful” (but wrong) path. Even calling out specific mistakes did not prevent them from re-appearing later in the conversation.

Looking Ahead

This leaves me with an amused smile and a bit of a concern for the year ahead. How many long-term mistakes – financial, technical, or strategic – will be AI-facilitated in 2026? We are entering an era where these agents claim to know outcomes with high certainty, and they will try very hard to “help” you right over a cliff.

Authority is not the same thing as accuracy.

GCP, then GitHub Pages

May 13, 2021

Earlier this year I spent a fair amount of time trying to set up a Kubernetes cluster on GCP, for fun.

I wanted to learn and better understand Docker / Kubernetes. I also thought it could be a good idea to serve a private subdomain behind IAP, restricting access to only the people meant to see that content.

It definitely kept me busy for a while, and I learned a lot. I feel a little less rusty in terms of modern web dev. As a result, for a few months nicdumz.fr had been serving from GCP, instead of Digital Ocean. GKE for a static website is absolutely overkill, but then again I had another, actually private, website on that cluster, so I thought: why not.

What I had not anticipated was the high price of the setup, which ended up costing a little over 100 USD a month, way more than I wanted to spend on a few educational-only, mostly abandoned websites.

So I took a step back. Shut down all of those fancy things. Instead, since http://github.io/ pages are perfectly able to serve static / generated content for free, I’ve moved there. I had to migrate from Blogofile to Jekyll, but that only took a rainy morning. Bye bye hosting costs :-)

Visual refresh

May 02, 2017

I’ve been ignoring warnings from Google Webmaster Tools for quite some time now. Supposedly, they said, my site wasn’t responsive or mobile-friendly. What do you mean, people actually use smartphones? Fine, I’ll admit, the previous setup was a bit clunky; how 2010 of me.

Which framework?

It’s all about frameworks nowadays. So here we are, then. Took the bait. Found a neat, minimalist CSS framework called Bulma. It comes with a couple of simple elements and guidelines, and it was quite straightforward to adapt it into something working for me, even with a Blogofile-like site doing content generation. The result is mobile- and tablet-friendly, almost for free.

Colors, next

I’m obviously a poor designer, and decided not to come up with my own color scheme. Colors are from base16, a syntax highlighting scheme which I rely on, every day, for all terminal things. This is the flat variant, transposed as best I could to something web-like.

No Disqus

Comments have also been completely disabled:

  • I don’t envision this place becoming a very active social forum requiring that much interaction.
  • So far, comments have been quite spammy, requiring some moderation, even for this tiny blog (!)
  • More to the point though, Disqus integration added unacceptable extra latency, at times requiring full seconds to load. I find this unreasonable in 2017.

Bye Disqus!

Webfaction was great, Digital Ocean seems better

March 14, 2017

Webfaction

A few years ago, Webfaction had been offering something somewhat unique on the market. Hosting by developers, for developers, for less than 10 USD a month. For that price I could get SSH access to a shared machine, deploy almost whatever I wanted and get it served by nginx, fast. This was unique: at that time you either had to pay for pricier dedicated machines to get SSH access, or downgrade to simple already-configured bricks that had very little flexibility.

I particularly enjoyed the idea that I could push-over-SSH my content via my version control system, and get it deployed via a hook.

Adopting Webfaction was a pretty great experience. Ticket support a few years ago was responsive and helpful. A much better experience than what I had hoped for.

First they came for Let’s Encrypt

And then Let’s Encrypt happened. Free SSL certificates, I thought, great!

You see, I’m not great at it yet, but I do care about security, and in that respect an opportunity to participate and help spread SSL further seemed like a great idea. I jumped in, and moved this modest website to https.

This is where trouble started. Webfaction has no first-party support for SSL certificates. You’re not root, they do not offer direct access to nginx configuration files. Which means that the only way to deploy a certificate is to bother a human via a ticket. And since Let’s Encrypt certificates expire every 90 days, it means bothering a human every ~2 months. It’s fine, support is responsive enough, and I had a script automating renewal and the ticket email. But for a coder, it does give you a bad conscience.
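
That script was nothing fancy. A hypothetical sketch of its shape (not the original; the host, addresses, and SMTP setup are placeholders) could look like this:

# Hypothetical sketch of a renewal-reminder script: check how many days are left
# on the served certificate and email a support ticket when renewal is due.
import datetime
import smtplib
import socket
import ssl
from email.mime.text import MIMEText

HOST = "nicdumz.fr"                 # site whose certificate we watch
SUPPORT = "support@example.com"     # placeholder ticket address
SENDER = "me@example.com"           # placeholder sender address

def days_left(host, port=443):
    ctx = ssl.create_default_context()
    sock = ctx.wrap_socket(socket.create_connection((host, port)), server_hostname=host)
    not_after = sock.getpeercert()["notAfter"]
    sock.close()
    expires = datetime.datetime.utcfromtimestamp(ssl.cert_time_to_seconds(not_after))
    return (expires - datetime.datetime.utcnow()).days

if days_left(HOST) < 14:
    msg = MIMEText("Please install the renewed Let's Encrypt certificate for %s." % HOST)
    msg["Subject"] = "Certificate renewal for %s" % HOST
    msg["From"], msg["To"] = SENDER, SUPPORT
    smtplib.SMTP("localhost").sendmail(SENDER, [SUPPORT], msg.as_string())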

After a while (I suppose because support started receiving way too many tickets), they implemented a way, via their webadmin dashboard, to upload your own certificates. So instead of my script bothering a human every 2 months, I had to upload new certificates by hand, via a web form, every 2 months. Not really an improvement.

Webfaction always claimed that full Let’s Encrypt automation was coming, but a year after the CA launched, nothing had landed.

Then they came for gzip

BREACH happened.

tl;dr: this is a compression side-channel attack. For HTTPS websites that (a) do use compression, (b) include query data in the response, and (c) serve some kind of secret, bad guys might be able to guess what the secret is by repeatedly issuing requests with various payloads and observing how the compressed (encrypted) response changes.
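
To make the mechanism concrete, here is a toy model of the leak in Python (the page and secret are entirely made up, and a real attack measures TLS record sizes rather than calling zlib directly):

# Toy model of the BREACH leak: when the attacker's guess shares bytes with the
# secret, the page compresses slightly better and the response gets shorter.
import zlib

SECRET = "csrf_token=s3cr3t_value_42"  # hypothetical secret reflected in every page

def compressed_size(query):
    # The response echoes the query next to the secret, then gets compressed.
    page = "<p>You searched for: %s</p><form>%s</form>" % (query, SECRET)
    return len(zlib.compress(page.encode("utf-8")))

print(compressed_size("csrf_token=s3cr3t"))   # overlaps the secret: typically a bit smaller
print(compressed_size("qwerty-uiop-asdfg"))   # same length, no overlap: larger

Repeat that comparison one guessed character at a time and the compressed lengths slowly spell out the secret.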

It’s a serious class of attack. Any website serving secrets should care and implement counter-measures. But if you’re serious about security (and you should be, if you’re serving secrets over the web, right?), I do hope that way before the publication of this class of attacks, you were already implementing some, if not all, of the following countermeasures:

  • Try really hard to avoid responses that include query data, or responses that depend deterministically on user input. Any website allowing such a thing probably has way too many XSS vulnerabilities lurking. Hello ?username=alert(1);.
  • Defend against XSRF issues, using CSRF tokens. For instance, on login pages, it’s common for servers to return a single-use random CSRF token/nonce that the client must submit along their request, to prevent repeatability.
  • Randomize response length and content.
  • Rate-limit requests.

All of those basic security measures prevent repeatability and randomize responses, which, de facto, prevents exploiting the BREACH class of attacks.

Unfortunately, Webfaction had a very different response to BREACH, possibly because they are not security experts. Or more likely because customers that were at-least-as-ignorant-of-security asked for that specific change: they disabled gzip compression for all their SSL websites.

That’s quite a pity. Sure, it’s a safe, large hammer to use which appears to immediately hide away all problems. But websites written with poor security practices are still vulnerable websites, very likely prone to XSS or CSRF attacks. Disabling gzip as a way to save them from BREACH is a poor decision, not improving web security as a whole.

Webfaction is working on nginx config fragment support, to allow users to e.g. enable compression per website. But as of writing, there’s no ETA for that feature, and their support denied requests for manual config overrides. There is no way to re-enable gzip compression on your website.

Time to speak out?

Do I really care that much about running a command and submitting a form every 2 months to renew a certificate? Do I even need compression that bad on this static website? No, and no. Definitely not that critical.

But I reached a clear, fundamental disagreement with the way this host is running their services. I care enough about SSL to require first-party, automated support for certificate changes. And the way to secure websites is to, duh, actually fix the root cause, not disable the perceived cause of a vulnerability. Sending more bytes over the wire by disabling compression does not protect websites from basic XSS/CSRF issues.

And why won’t you just let me customize my nginx configuration, urgh.

Digital Ocean

I moved out. Looking around, Digital Ocean seemed like a great option. 5 USD a month, and in a few seconds I could create an instance running a clean OS. SSH access, root. I get to run whatever I want. Switch gzip on, and off, and back on, as much as I want.

In a couple of hours, on a lazy Sunday, I migrated my data from Webfaction to Digital Ocean. Configuring nginx and auto-renewal for Let’s Encrypt was easier than Hello World in Haskell, thanks to their community-maintained guides that even sysadmin dummies like me can follow.

Looking back, it’s even somewhat strange. How did I survive not having full access to the machine before?

Hello Digital Ocean, and thanks: I, for one, embrace change.

Why Blogofile?

December 30, 2010

I recently started following Planet Python, and two posts by Mike Pirnat caught my attention. I have to admit that after the first post I did not really try to learn much about Blogofile – “yet another blog engine”, I thought. The second post, however, got me more interested: pushing a changeset to publish a new blog post looked exciting, and I decided to learn more. That led to this blog.

I’ve been meaning to start a technical blog for some time now, but I never quite found a setup that would look appealing enough for me to do it. Blogofile got me started: let’s find out why.

Not a $wordpress

I guess that we all fall for free stuff or simplicity, but I’m still fairly amazed by the number of developer blogs that are hosted on Blogger or Wordpress.com: my experiments with those services have not been so great, and most of all, posting any kind of code seemed, at the time, difficult. But, fair enough, setup is simple, and every penny you save is a penny you can spend on troll food.

Using a blog platform was not an option for me: I want control over the content I serve, and, ideally, a blog would only be a starting point.

But then, for those who do spend time setting up a blog themselves, why would you pick $wordpress? ($wordpress being a placeholder for any CMS serving content dynamically). And this applies to me as well: why did I only consider dynamic applications so far? After all, we’re geeks, we’re good coders! We write blogs about code generation or compilation, Vim good practices, about details of load balancing or cache invalidation, but we would not be able to come up with a few lines of Python to generate HTML from a set of files formatted with ReST or markdown? Ridiculous! No, the reality is that most of us have hard-coded the blog == dynamic equivalence in our minds, and it’s hard to work around it.
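
And truly, the core of such a generator does fit in a handful of lines. A minimal sketch, assuming the third-party markdown package and a made-up posts/ directory layout:

# Minimal static-generation sketch: turn every Markdown post into an HTML page.
# (Assumes the third-party "markdown" package; the posts/ layout is made up.)
import glob
import markdown

for source in glob.glob("posts/*.md"):
    html = markdown.markdown(open(source).read())
    with open(source.replace(".md", ".html"), "w") as out:
        out.write("<html><body>%s</body></html>" % html)

A real engine adds templates, feeds, and an archive on top, but the dynamic database is nowhere to be found.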

Static goodness

What’s dynamic in a blog? Article content, and comments. Do we need a database for this? I say that comments can be handled by Disqus: you can argue with that, and I’m pretty sure the FSF will someday pick a fight with such platforms, but the service is good enough for me, and I can risk losing this data. For articles, you are in control, and from my viewpoint, we can afford to regenerate pages each time we write a post.

A blog is not this dynamic.

Fireballs

We’re getting fairly good at serving dynamic content. But we still see a lot of blogs that get fireballed or go down after a couple hundred visits from a news aggregator: setting up a dynamic stack is quite easy; mastering this art is something else. Static content, on the other hand, is easier: nginx with proper Cache-Control is a good start.

Security

My first hacks as a kid were done in PHP. I used PHP to exchange Caesar-ciphered blobs with my friends; I was creating pages issuing 50 or so unindexed requests to MySQL… I was young.

I then forgot about this sandbox. And when, feeling nostalgic, I visited it years later… my browser blocked my request, warning me that I was accessing a malicious website. Hacked, of course :)

Static content means fewer things to sanitize and fewer permission issues. Happy me.

User control

I’m the kind of person that has trouble writing a lot of text without my favorite editor. Here, I wrote this post in Vim. I hit :make, and the site is regenerated. I have a “high-end” SimpleHTTPServer running in the background, and I can preview everything offline, without a complex setup. I also know that everything I view offline will be rendered as-is online: no backend can play tricks on us.
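
For the curious, the “high-end” server is nothing more than Python’s standard library; a sketch of what runs from the generated output directory (the port is an arbitrary choice):

# Python 2-era sketch of the local preview server: the stdlib is enough.
import SimpleHTTPServer
import SocketServer

PORT = 8000  # arbitrary local port
httpd = SocketServer.TCPServer(("", PORT), SimpleHTTPServer.SimpleHTTPRequestHandler)
print "Previewing on http://localhost:%d/" % PORT
httpd.serve_forever()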

I version everything in a Mercurial repository (edit from the future: moved to Git and GitHub).

“Backups” are achieved with a simple hg clone.

“Deployments”? hg push and a changegroup hook.

The core blogofile sources are a mere 1200 lines of Python.

The blog controller is an additional 500 lines.

That’s really easy to analyze or patch, if you think you will ever need to.

I have yet to find something I really dislike about blogofile.

Hello world!

December 29, 2010

All I wanted for Christmas was a blog.

def test():
  print "Hello, world!"
  return 42

test()

So here we are: a simple static blog, with a fairly simple skin derived from the default Blogofile blog. Untested under IE: if you encounter any trouble, feel free to submit patches to the repository on Bitbucket.

What should you expect?

Not much content about romance, but surely some about Sarcasm, Math and Language! I’m mostly into Python: my first posts are likely to cover details of Mercurial, Zope or ERP5, CPython or various other Python interpreters. I’ll categorize posts accordingly so that readers can pick one topic or another.

And who knows what might happen next?