panarky's comments | Hacker News

If you make a manufacturing error without intentionally deceiving your customers through a deceptive feature name, you have to pay millions per death.

If you intentionally give the feature a deceptive name like "autopilot", and then customers rely on that deceptive name to take their eyes off the road, then you have to pay hundreds of millions per death.

Makes sense to me.


Wouldn't that logic mean any automaker advertising a "collision avoidance system" should be held liable whenever a car crashes into something?

In practice, they are not, because the fine print always clarifies that the feature works only under specific conditions and that the driver remains responsible. Tesla's Autopilot and FSD come with the same kind of disclaimers. The underlying principle is the same.


There are plenty of accurate names Tesla could have selected.

They could have named it "adaptive cruise control with assisted lane-keeping".

Instead their customers are intentionally led to believe it's as safe and autonomous as an airliner's autopilot.

Disclaimers don't compensate for a deceptive name, endless false promises and nonstop marketing hype.


If it was called "comprehensive collision avoidance system" then yes.

Right, this is the frustrating thing about courtroom activism and the general anger towards Tesla. By any reasonable measure, this technology is safe and useful: it has logged over 3.6 billion miles and is currently adding about 8 million miles per day. I can see why plaintiffs go after Tesla. They have a big target on their back for whatever reason, and activist judges go along. But I don't get how someone on the outside can look at this and think that this technology, or the marketing over the last 10 years, is somehow deceptive or dangerous.

https://teslanorth.com/2025/03/28/teslas-full-self-driving-s...


> activist judges

Wait what? What activism is the judge doing here? The jury is the one that comes up with the verdict and damage award, no?


That's my experience too, when I give Gemini CLI a big, general task and just let it run.

But if I give it structure so it can write its own context, it is truly astonishing.

I'll describe my big, general task and tell it to first read the codebase and then write a detailed requirements document, and not to change any code.

Then I'll tell it to read the codebase and the detailed requirements document it just wrote, and then write a detailed technical spec with API endpoints, params, pseudocode for tricky logic, etc.

Then I'll tell it to read the codebase, and the requirements document it just wrote, and the tech spec it just wrote, and decomp the whole development effort into weekly, daily and hourly tasks to assign to developers and save that in a dev plan document.

Only then is it ready to write code.

And I tell it to read the code base, requirements, tech spec and dev plan, all of which it authored, and implement Phase 1 of the dev plan.
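
For concreteness, the prompts look roughly like this (paraphrased, not my exact wording, and the doc file names are just placeholders):

    Read the codebase. Write a detailed requirements document for <feature>
    in docs/requirements.md. Do not change any code.

    Read the codebase and docs/requirements.md. Write a detailed technical
    spec in docs/tech-spec.md with API endpoints, params, and pseudocode
    for any tricky logic. Do not change any code.

    Read the codebase, docs/requirements.md, and docs/tech-spec.md.
    Decompose the work into weekly, daily and hourly tasks and save the
    plan in docs/dev-plan.md. Do not change any code.

    Read the codebase, docs/requirements.md, docs/tech-spec.md and
    docs/dev-plan.md. Implement Phase 1 of the dev plan.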

It's not all mechanical and deterministic, or I could just script the whole process. Just like with a team of junior devs, I still need to review each document it writes, tweak things I don't like, or give it a better prompt to reflect my priorities that I forgot to tell it the first time, and have it redo a document from scratch.

But it produces 90% or more of its own context. It ingests all that context that it mostly authored, and then just chugs along for a long time, rarely going off the rails anymore.


In my experience with chat, Flash has gotten much, much better. It's my go-to model even though I'm paying for Pro.

Pro is frustrating because it too often won't search to find current information, and just gives stale results from before its training cutoff. Flash doesn't do this much anymore.

For coding I use Pro in Gemini CLI. It is amazing at coding, but I'm actually using it more to write design docs, decomp multi-week assignments down to daily and hourly tasks, and then feed those docs back to Gemini CLI to have it work through each task sequentially.

With a little structure like this, it can basically write its own context.


I like Flash because when it's wrong, it's wrong very quickly. You can either change the prompt or just solve the problem yourself. It works well for people who can spot an answer as being wrong.

> Flash has gotten much, much better. It's my go-to model even though I'm paying for Pro.

Same here. I think Pro has also gotten worse...


Interesting. Out of all the "thinking models," I struggle with Gemini the most for coding. I just can't make it perform. I feel like they silently nerfed it over the last few months.

I'm getting 100 Gemini Pro requests per day with an AI Studio API key that doesn't have billing enabled.

After that it's bumped down to Flash, which is surprisingly effective in Gemini CLI.

If I need Pro, I just swap in an API key from an account with billing enabled, but usually 100 requests is enough for a day of work.


Obviously I'm not talking about API keys; this is what I would recommend, though: https://ai.google.dev/gemini-api/docs/rate-limits#free-tier

I'm talking about "logging in with a Google Account".


Your link shows the free tier gets 100 Pro requests per day.

That matches my experience with a free account. With Gemini CLI it doesn't seem to matter if I log in with a Google Account or use an API key from AI Studio with billing disabled.

Yesterday I had two coding sessions in Gemini CLI with a total of 73 requests to Pro with no rate limiting.

https://imgur.com/a/Ki6g1qc

I can't explain why you're seeing something else, but my experience has been pretty consistent.

Maybe your usage pattern is different from mine and you're getting hit by the 5 RPM limit??


I created an AI Studio key (unbilled) probably over a year ago or so. Is it still good for the current models, or should I be creating a new key?

There is no difference.

Whoa. I'm definitely getting just a handful of requests on the free tier, just like the parent commenter…

> assume I haven't thought the problem through

This is the essence of my workflow.

I dictate rambling, disorganized, convoluted thoughts about a new feature into a text file.

I tell Claude Code or Gemini CLI to read my slop, read the codebase, and write a real functional design doc in Markdown, with a section on open issues and design decisions.

I'll take a quick look at its approach and edit the doc to tweak anything I don't like and answer a few open questions, then I'll tell it to answer the remaining open questions itself and update the doc.

When that's about 90% good, I'll tell the local agent to write a technical design doc to think through data flow, logic, API endpoints and params, and test cases.

I'll have it iterate on that a couple more rounds, then tell it to decompose that work into a phased dev plan where each phase is about a week of work, and each task in the phase would be a few hours of work, with phases and tasks sequenced to be testable on their own in frequent small commits.

Then I have the local agent read all of that again, the codebase, the functional design, the technical design, and the entire dev plan so it can build the first phase while keeping future phases in mind.

It's cool because the agent isn't only a good coder, it's also a decent designer and planner too. It can read and write Markdown docs just as well as code and it makes surprisingly good choices on its own.

And I have complete control to alter its direction at any point. When it methodically works through a series of small tasks it's less likely to go off the rails at all, and if it does it's easy to restore to the last commit and run it again.


1. Shame on you, that doesn't sound like fun vibe coding, at all!

2. Thank you for the detailed explanation, it makes a lot of sense. If AI is really a very junior dev that can move fast and has access to a lot of data, your approach is what I imagine works - and crucially - why there is such a difference in outcomes using it. Because what you're saying is, frankly, a lot of work. Now, based on that work you can probably double your output as a programmer, but considering the many code bases I've seen that have 0 documentation, 0 tests, I think there is a huge chunk of programmers that would never do what you're doing because "it's boring".

3. Can you share maybe an example of this, please:

> and write a real functional design doc in Markdown, with a section on open issues and design decisions.

Great comment, I've favorite'd it!


>> non-participation

> flip over to something else

A flow state is possible with 100% focus at any level of abstraction.

If you just "flip over" to HN while the agent thinks, then you're not 100% focused.

But if you're managing three agents at the same time on the same codebase, and while Agent 2 is thinking you "flip over" to Agent 3, you're still fully participating, just at a higher level of abstraction.


Right, it's the same deal as running multiple requests over the network: you parallelize them instead of idling while you wait for each one to complete.
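
In code terms, it's the difference between awaiting each request in sequence and firing them all off at once; a toy sketch (the URLs and the aiohttp choice are arbitrary, just for illustration):

    import asyncio
    import aiohttp  # any async HTTP client would do

    async def fetch(session, url):
        async with session.get(url) as resp:
            return await resp.text()

    async def main():
        urls = [
            "https://example.com/a",
            "https://example.com/b",
            "https://example.com/c",
        ]
        async with aiohttp.ClientSession() as session:
            # Fire all requests concurrently instead of awaiting each one
            # before starting the next.
            return await asyncio.gather(*(fetch(session, u) for u in urls))

    results = asyncio.run(main())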

I would like to believe this, but in practice, the context switch involves purging my mental working state, which drags me out of the flow state. I'm not sure how to solve this, but I imagine that the context I switch to should be as close as possible to the one I started with - the problem then is that the agents might trample over each other.

Might be fun to respond with one of these to malicious requests for /.env, /.git/config and /.aws/credentials instead of politely returning 404s.

I thought someone posted a blog post about doing exactly that within the last couple of months? Any time they got hits on their site from misbehaving bots, I think they returned a gzip bomb in the HTTP response.

I remember that also.

edit - this? https://idiallo.com/blog/zipbomb-protection


Yes that's the one.

It’s definitely tempting, but I prefer not to piss off people who are already being actively malicious.

It's all just spray-and-pray crap. You're extremely unlikely to be their target; they're just looking for a convenient shell for a botnet. The most likely way they'll handle it, if you do actually break them, is to just blacklist your address. You're not going to be worth the effort.

Isn’t this how a court system works?

I've been sending a nice 10GB gzip bomb (12MB after compression, rate limited download speed) to people that send various malicious requests. I think I might update it tonight with this other approach.

Can't you just serve /dev/urandom?

I could, at the expense of a lot of bandwidth. /dev/urandom doesn't compress, so to send something that would consume 10GB of memory, I'd have to use up 10GB of bandwidth. The 10GB of /dev/zero that I return in response to requests takes up just 11MB of bandwidth. Much more efficient use of my bandwidth.
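
For anyone curious, building that kind of payload takes only a few lines; a minimal sketch (file name and sizes are just examples, not my exact setup):

    import gzip

    # Write 10 GiB of zeros through gzip; the on-disk file ends up around
    # 10 MB because long runs of zeros compress extremely well.
    chunk = b"\x00" * (1024 * 1024)  # 1 MiB of zeros
    with gzip.open("bomb.gz", "wb", compresslevel=9) as f:
        for _ in range(10 * 1024):   # 10 GiB total, pre-compression
            f.write(chunk)

Then you serve bomb.gz with a Content-Encoding: gzip header so the client inflates it on its end.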

A more effective (while still relatively efficient) alternative would be to have a program that returns an infinite gzip compressed page. That'll catch anyone that doesn't set a timeout on their requests.

I don't imagine it would be too difficult to write a Python app that dynamically creates the content; just have the returned content be the output of a generator. Not sure it's worth it though :)


I had a few minutes. This turns out to be really easy to do with FastAPI:

    from fastapi import FastAPI
    from starlette.responses import StreamingResponse
    from fastapi.middleware.gzip import GZipMiddleware
    
    app = FastAPI()
    
    app.add_middleware(GZipMiddleware, minimum_size=0, compresslevel=9)
    
    def lol_generator():
        while True:
            yield "LOL\n"
    
    @app.get("/")
    def stream_text():
        return StreamingResponse(lol_generator(), media_type="text/plain")

Away it goes, streaming GZIP compressed "LOL" to the receiver, and will continue for as long as they want it to. I guess either someone's hard disk is getting full, they OOM, or they are sensible and have timeouts set on their clients.

Probably needs some work to ensure only clients that accept GZIP get it.


Yikes, the gzip stdlib module is painfully slow in Python, even by "I'm used to Python being slow" standards, and even under PyPy. Even if I drop it down to compresslevel=5, I'm more likely to consume all of my own CPU than the target's memory.

A quick port to Rust with Gemini's help has it running significantly faster for a lot less overhead.
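
Another way to dodge the CPU cost without leaving Python would be to compress one chunk up front and stream the same bytes forever: a gzip stream may consist of multiple concatenated members, so the output stays valid while the server does no per-request compression (though not every client decodes past the first member). A rough sketch, not what I actually ran:

    import gzip
    from fastapi import FastAPI
    from starlette.responses import StreamingResponse

    app = FastAPI()

    # Compress one chunk once at startup; repeating it yields a stream of
    # concatenated gzip members, which is still valid gzip.
    CHUNK = gzip.compress(b"LOL\n" * 65536, compresslevel=9)

    def bomb():
        while True:
            yield CHUNK

    @app.get("/")
    def stream_text():
        # Content-Encoding is set by hand because the bytes are pre-gzipped.
        return StreamingResponse(
            bomb(),
            media_type="text/plain",
            headers={"Content-Encoding": "gzip"},
        )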


And eat up your bandwidth?

The goal is to DOS the abuser, so the cost to the server needs to be much lower than to the client.

/dev/urandom is not at all that.


> Capitalism is about private ownership of the means of production

That's part of the definition but not the whole definition.

Other parts of of the definition include:

Capitalism is about free markets.

Capitalism is about competition.

Capitalism is about converting the commons to private ownership.

Capitalism is about the more wealthy and powerful exploiting those with less wealth and power.

Capitalism is about mystifying how the system works so it's hard for people to imagine that it could work any other way.


> Capitalism is about free markets. [...] Capitalism is about competition.

Okay, sure. But IP laws prevent all that. As before, IP laws are antithetical to capitalism — and the source of breakdown witnessed earlier.


The market may or may not be "bonkers" to price MSTR at a premium to the market value of the assets it owns.

But there are other non-bitcoin examples.

Berkshire Hathaway is a corporate wrapper around a bunch of assets that are expected to increase in value over time. Berkshire's market value is much greater than the sum of the market values of the assets it wraps.

You might argue that Berkshire is different because the assets it wraps are productive and produce reliable profits for the holding company, while bitcoin just sits there and doesn't do anything.

I'd suggest that producing profits is the attribute that gives Berkshire's assets value, while bitcoin has other attributes that make it valuable. The difference in the source of value shouldn't matter when asking why the wrapper is worth more than the assets it wraps.

Reasonable minds can differ about the sources of value for the underlying assets, but there are many real-world examples where the wrapper around valuable assets is worth a multiple of the assets themselves, and this premium can persist for decades.


It's the second most common four-letter acronym in crypto hype threads, right after HFSP.


The Urban Dictionary definition is hilarious, opens with "HFSP is an acronym used typically in the crypto community against non-belivers".

Hasn't defined the term yet and I know I'm in for a hell of a ride.

