
I also started using Jove back when 30 of us shared one PDP-11/44 running BSD Unix, and it was antisocial to use something as heavyweight as Emacs. 40 years later, I'm still using UNIX and Emacs.

It was the same for new CS undergrads at UC Berkeley back in the early 90s. There were still labs full of VT220 or similar serial terminals all hooked up to a shared computer.

On reflection, it probably explains why I've used Emacs for my whole career but never really got into any of the elisp customization or other advanced features. I still base my work in the shell (and filesystem) and launch ephemeral Emacs processes rather than living in it as some folks do. I never got interested in IDE functions like controlling compilers or debuggers from within Emacs.

I never even wanted Emacs to split a terminal window into smaller "screens". I learned the key combo to abort that, much like I learned only enough vi to kill off an unintended launch. But I do get a lot of mileage out of the XEmacs "frames", i.e. independent X windows all fronting the same set of editing buffers. And I have terminal windows alongside to do all the other things from the shell that some people prefer to do from inside the editor...


I've spent the last few weeks writing a non-trivial distributed system using Codex (OpenAI's agentic coding system). I started by writing a design brief, and iterated with o3 to refine it so it was more complete and less ambiguous. Then I asked it to write a spec of all the messages - I didn't like its first attempt, but iterated on it until I liked it. Then I got it to write a project plan, and iterated on that. Only then did I start on the code. The purpose of all this is to give it context.

It generated around 13K lines of Go for me in just over two weeks. I didn't previously speak Go, but it's not hard to skim-read to get the gist of its approach. I probably wrote about 100 lines, though I added and removed a lot of logging at various times to understand what was actually happening. I got it to write a lot of unit tests, so test coverage is very good. But I didn't pay much attention to most of those tests at first, because it generally got all the fine detail exactly right on the first pass. So why all the tests? First, if something seems off, I have a place to start a deep dive. Second, they pin down the architecture, so functionality can't creep without me noticing that it needs to change the unit tests.
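
As an illustration of what I mean by pinning things down - this is an invented, self-contained example, not code from the actual project - a unit test like this makes one behavioural decision explicit, so changing the architecture forces a visible test change:

    // Invented example: a tiny state holder that must ignore stale updates,
    // and a unit test that pins that behaviour down. (Save as a _test.go file.)
    package main

    import "testing"

    type Update struct {
        Seq     int
        Payload string
    }

    type Node struct {
        seq   int
        state string
    }

    // Handle applies an update only if its sequence number is newer.
    func (n *Node) Handle(u Update) {
        if u.Seq > n.seq {
            n.seq = u.Seq
            n.state = u.Payload
        }
    }

    func TestStaleUpdateIsDropped(t *testing.T) {
        var n Node
        n.Handle(Update{Seq: 5, Payload: "new"})
        n.Handle(Update{Seq: 3, Payload: "old"}) // stale: must be ignored

        if n.state != "new" {
            t.Fatalf("stale update overwrote state: got %q", n.state)
        }
    }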

Some observations.

- Coding this way is very effective - the new models almost never make fine-detail mistakes. But I want to step it through chunks of new functionality at a size that I can at least skim and understand. So that 13K LoC is about 300 PRs. Otherwise I lose track of the big picture, and in this world, the big picture is my task.

- Normally the big design decisions are separated by days of fine-detail coding. Using Codex means I get to make all those decisions nearly back-to-back. This is both good and bad. The experience is quite intense - I used to find the fine-detail coding "therapeutic", but I don't get that anymore. On the other hand, not needing to pay attention to the fine detail (at least most of the time) means I think I have a better picture in my head of the overall code structure. We only have so much attention at any time, and if I don't have to hold the details, I can pay attention to the more important things.

- It's very good at writing integration tests quickly, so I write a lot more of them. These I do pay a lot of attention to. It's these tests that tell me whether I got the design right, and if not, they're where I start digging to understand what I need to change.

- Because it takes 10-30m to come back with a response, I try to keep it working on around three tasks at a time. That takes some effort, as it requires some context switching, and effort to give it tasks that won't result in large merge conflicts. If it were faster, I would not bother to set multiple tasks in parallel.

- Codex allows you to ask for multiple solutions. For simpler stuff, I've found asking for one is fine. For slightly more open questions, it's good to ask for multiple solutions, review them and decide which you prefer.

- Just prompting it with "find a bug and suggest a fix" every now and then often turns up real bugs. Mostly they tend to be some form of internal inconsistency, where I'd changed my mind about part of the code, and something elsewhere needed to be changed to be consistent.

- I learned a lot about Go from it. If I'd been writing it myself, my Go would have looked more like C++, which I'm very familiar with. But it wrote more idiomatic Go from the start, and I've learned along the way.

- Any stock algorithm stuff it will one-shot. "Load this set of network links, build a graph from them, run Dijkstra over the graph from this node, and tell me the histogram of how many equal-cost shortest paths there are to every other node." (A sketch of that kind of task appears just after this list.)

- It's much better than me at reasoning about concurrency. Though of course this is also one of Go's strengths.
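
To give a feel for the Dijkstra example above, here is a minimal sketch of that task - my own illustration, not Codex's output; the Link type and the assumption that links are bidirectional are mine:

    // Sketch: count equal-cost shortest paths from a source node, then
    // histogram the counts per destination. Illustrative only.
    package main

    import (
        "container/heap"
        "fmt"
    )

    type Link struct {
        From, To string
        Cost     float64
    }

    type item struct {
        node string
        dist float64
    }

    // pq is a min-heap of items ordered by distance.
    type pq []item

    func (p pq) Len() int           { return len(p) }
    func (p pq) Less(i, j int) bool { return p[i].dist < p[j].dist }
    func (p pq) Swap(i, j int)      { p[i], p[j] = p[j], p[i] }
    func (p *pq) Push(x any)        { *p = append(*p, x.(item)) }
    func (p *pq) Pop() any {
        old := *p
        it := old[len(old)-1]
        *p = old[:len(old)-1]
        return it
    }

    // ecmpCounts runs Dijkstra from src, counting equal-cost shortest paths.
    func ecmpCounts(links []Link, src string) map[string]int {
        adj := map[string][]Link{}
        for _, l := range links { // assume links are bidirectional
            adj[l.From] = append(adj[l.From], l)
            adj[l.To] = append(adj[l.To], Link{From: l.To, To: l.From, Cost: l.Cost})
        }
        dist := map[string]float64{src: 0}
        count := map[string]int{src: 1}
        q := &pq{{src, 0}}
        for q.Len() > 0 {
            it := heap.Pop(q).(item)
            if it.dist > dist[it.node] {
                continue // stale queue entry
            }
            for _, l := range adj[it.node] {
                d := it.dist + l.Cost
                old, seen := dist[l.To]
                switch {
                case !seen || d < old:
                    dist[l.To] = d
                    count[l.To] = count[it.node] // strictly shorter: reset count
                    heap.Push(q, item{l.To, d})
                case d == old:
                    count[l.To] += count[it.node] // equal cost: accumulate
                }
            }
        }
        return count
    }

    func main() {
        links := []Link{{"a", "b", 1}, {"a", "c", 1}, {"b", "d", 1}, {"c", "d", 1}}
        hist := map[int]int{} // key: #equal-cost paths, value: #destinations
        for node, k := range ecmpCounts(links, "a") {
            if node != "a" {
                hist[k]++
            }
        }
        fmt.Println(hist) // map[1:2 2:1]: one path each to b and c, two to d
    }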

Now I don't have any experience of how well it would do maintaining a much larger codebase, but for a project of this scale, I'm very impressed with how effective it has been.

Disclaimer: I work at OpenAI, but on networks, not AI.


That sounds reasonable for access to actual content, but it creates a huge new incentive to constantly churn out vast amounts of AI-generated slop served via Cloudflare. Is there a way to disincentivize this?


That's a more general problem. As content gets cheaper to produce with AI, how do consumers discriminate between good content and slop? We already have this problem with YouTube, Twitter, and Reddit.

It's interesting that the AI companies will now be on the other end of this issue.


I presume the onus will now be on the AI scrapers to decide whether that AI-slop site is worth paying for. It will be interesting to see how they figure that out.


The ones in towns will mostly disappear. There will be enough chargers at supermarkets, malls, restaurants, anywhere people actually want to go, and most people will charge at home or work. The remaining business won't be enough to keep in-town gas stations in business. Range anxiety will become more of an issue for gas cars.

On highways, the situation will be different. There will be plenty of gas and diesel still available, as the remaining business from towns becomes more concentrated. You won't find a gas station without a restaurant attached, though. Fast chargers will be common, but ultra-fast ones won't be as common as we'd like, as operators will want to keep you just long enough to buy a meal, etc.


I came of age in the 8-bit era of the early 80s, rode the Internet wave of the 90s and early 2000s, kind of missed the mobile wave but spent that time developing ideas that would eventually turn out to be useful for AI, and now I'm having great fun on the AI wave. I'm happy to have grown up and lived when I did, but I feel that each era of my life has had its own unique opportunities, excitement and really interesting technical problems to work on. And perhaps most importantly, great people to work with.


With computerized control and a comms link between the vehicles, you could probably have one vehicle follow 1m behind another, so they are effectively a train. If you still have a driver at all, you only need one in the front vehicle.
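
To illustrate, purely as a toy sketch - the message format, gains, and numbers are all invented - the follower's control loop could be as simple as matching the leader's broadcast speed plus a correction on the measured gap:

    // Toy follower controller for a two-vehicle "train": match the leader's
    // speed, received over the comms link, and correct toward a 1m gap.
    // All names and numbers are invented for illustration.
    package main

    import "fmt"

    type LeadState struct {
        Speed float64 // m/s, broadcast by the lead vehicle
    }

    // followerSpeed returns the commanded speed for the following vehicle.
    func followerSpeed(lead LeadState, gap float64) float64 {
        const targetGap = 1.0 // metres
        const kP = 0.5        // proportional gain on the gap error (invented)
        return lead.Speed + kP*(gap-targetGap)
    }

    func main() {
        // Gap has opened to 1.2m at 10 m/s: speed up slightly to close it.
        fmt.Println(followerSpeed(LeadState{Speed: 10}, 1.2)) // 10.1
    }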


I don't think you could do that for CVLR specifically as it's not segregated from traffic and the second car would have to react individually to vehicles, pedestrians, roundabouts, etc.


If it's really just 1m behind, it doesn't need to respond individually to anything except pedestrians. And you can solve that with some extensible tapes that actually do connect the vehicles to prevent pedestrians walking between them.


It's only there for a month.


When I used to do gliding (sailplane, not hang gliding or paragliding) many years ago, it was not classed as a dangerous sport for insurance purposes. Don't know about the other forms of gliding. General aviation was classed as riskier - I guess glider pilots are more used to the fact that they don't have a working engine!


With a high fraction of renewables, the reverse is probably better in the long run. The larger the geographic area you connect, the less you're affected by individual weather systems, and the wider the area from which you can draw dependable dispatchable power such as hydro. But that depends on having enough grid capacity to move power around, which is currently a problem.

But I wonder, from a reliability (or avoidance of cascading failures) point of view, whether synchronous islands interconnected by DC links are more robust than one large synchronous network.


Beck, meaning stream (small river), is one I remember from growing up in the north east.

