I think the argument comes from the full title being "The United Kingdom of Great Britain and Northern Ireland". If you drop Northern Ireland, the other three countries can already be grouped under the term Great Britain.


I wonder how much effort it would take to put together something expanding on this that's really friendly to new developers exploring a system. The goal would be to have something that looks more like a traditional swimlane diagram, but with human-readable labelling of the connections. The OpenTelemetry data should provide a nice skeleton, and then maybe use some LLM summarisation (with access to the source code) to label what's going on.


I've been loving this service called CodeRabbit, which auto-generates these diagrams from my pull requests. It's fantastic -> https://github.com/jsonresume/jsonresume.org/pull/131#issuec...


This looks super neat.

I didn’t see anything on the page about running the tool locally (remote code analysis is a deal breaker).

Anyone know if that’s an option?


Given it's a paid service – probably not.

There are local code <-> LLM interfaces though (CLI tools, editor extensions), and if you can figure out a suitable prompt you can get a pretty similar experience. (Of course, you'll want to run the LLM locally as well.)


And a ...poem?


That looks positively futuristic.


diagrams + poems sounds like a wonderful time in code reviews!


I love diagrams to represent how systems are set up and run. At one employer, they had hundreds of spreadsheets around the network drive, which often linked together via formulas or VBA code, along with queries out to databases.

I built a file parser (in VBA, because that is what was available) to log every file reference to a big table, then generate Graphviz code to visualize it.

It's easy to say "tons of stuff uses $datasource", but it's way better if you can show exactly how much and the impact of changes.

It was incredibly useful when we changed our domain and all network drive mappings broke.
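
For illustration, a rough sketch of the same idea in Python rather than VBA; it assumes the formulas have already been dumped to plain-text files, and the directory name and path pattern are made up:

    import re
    from pathlib import Path

    # Match UNC paths to workbooks inside extracted formula text (illustrative).
    REF = re.compile(r"\\\\[\w$.\\ -]+\.xls[xmb]?", re.IGNORECASE)

    def references(dump: Path) -> set[str]:
        """Every workbook path mentioned in one file's extracted formulas."""
        return set(REF.findall(dump.read_text(errors="ignore")))

    edges = {p.stem: references(p) for p in Path("formula_dumps").glob("*.txt")}

    # Emit Graphviz DOT: one node per file, one edge per reference.
    print("digraph refs {")
    for src, targets in edges.items():
        for dst in targets:
            print(f'  "{src}" -> "{dst}";')
    print("}")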


Yes, it looks like they could be adapted to create Story Maps.


I vaguely remember this being a technique for getting past doors in the Splinter Cell stealth games (2002).


Retrieval-augmented generation. In short, you use an LLM to classify your documents (or chunks from them) up front. Then, when you want to ask the LLM a question, you pull the most relevant ones back to feed it as additional context.
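
A toy sketch (in Python) of those two phases; classify() is a stand-in for whatever LLM labelling call you use, and the documents are made up:

    documents = {
        "invoice.txt": "Invoice #42 is due on March 1st.",
        "lunch.txt": "Biff: how about lunch on Thursday at noon?",
    }

    def classify(text: str) -> str:
        # Hypothetical: a real system would ask an LLM for this label.
        return "scheduling" if "lunch" in text else "billing"

    # Phase 1 (offline): label every document once.
    labels = {name: classify(text) for name, text in documents.items()}

    # Phase 2 (query time): pull matching documents back as extra context.
    topic = "scheduling"
    context = "\n".join(t for n, t in documents.items() if labels[n] == topic)
    print(f"Given these documents:\n{context}\n\nWhen is lunch?")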


I don't get it. To my understanding, it takes huge amounts of data to build any form of RAG, simply because it enlarges the statistical model you later prompt. If the model is not big enough, how would you expect it to give you quality answers? It simply can't.

So I don't really buy it, and I have yet to see it work better than any RDBMS search index.

Tell me I'm wrong; I would like to see a local model based on my own docs being able to give quality answers to quality prompts.


RAG doesn't require much data or involve any training; it's a fancy name for "automatically paste some relevant context into the prompt".

Basically, if you have a database of three emails and ask when Biff wanted to meet for lunch, a RAG system would select the most relevant email based on any kind of search (embeddings are the most fashionable) and create a prompt like:

"""Given this document: <your email>, answer the question "When does Biff want to meet for lunch?" """

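A toy version of that flow, using naive word overlap as the "any kind of search" (no embeddings involved; the emails are made up):

    emails = [
        "Marty, the report is attached.",
        "Biff here, let's meet for lunch Thursday at noon.",
        "Reminder: dentist appointment on Friday.",
    ]
    question = "When does Biff want to meet for lunch?"

    # Score each email by how many words it shares with the question.
    def overlap(doc: str) -> int:
        return len(set(doc.lower().split()) & set(question.lower().split()))

    best = max(emails, key=overlap)
    print(f'Given this document: {best}, answer the question "{question}"')
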

That's not how RAG works. What you're describing is something closer to prompt optimization.

Sibling comment from discordance has a more accurate description of RAG. There's a longer description from Nvidia here: https://blogs.nvidia.com/blog/what-is-retrieval-augmented-ge...


Right, you read something nebulous about how "the LLM combines the retrieved words and its own response to the query into a final answer it presents to the user", and you think there is some magic going on, and then you click one link deeper and read at https://ai.meta.com/blog/retrieval-augmented-generation-stre... :

> Given the prompt “When did the first mammal appear on Earth?” for instance, RAG might surface documents for “Mammal,” “History of Earth,” and “Evolution of Mammals.” These supporting documents are then concatenated as context with the original input and fed to the [...] model

Finding the relevant context to put in the prompt is a search problem. Nearest-neighbour search on embeddings is one basic way to do it, but the singular focus on "vector databases" is a bit of a hype phenomenon IMO - a real-world product should factor a lot more than just pure textual content into the relevancy score. Or is your personal AI assistant going to treat emails from yesterday as equally relevant as emails from a year ago?
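
As a sketch of that point, a relevancy score might blend embedding similarity with recency instead of using cosine similarity alone (the 30-day half-life is an arbitrary choice):

    def relevance(cosine_sim: float, age_days: float) -> float:
        # Exponential decay: a document loses half its weight every 30 days.
        return cosine_sim * 0.5 ** (age_days / 30)

    print(relevance(0.9, age_days=1))    # recent and similar -> high score
    print(relevance(0.9, age_days=365))  # equally similar, a year old -> low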


Legit explanation, that's how it works AFAIK.


RAG:

1. First you create embeddings from your documents

2. Store those in a vector DB

3. Take the user's question and do a search in the vector DB (cosine similarity, etc.)

4. Feed the relevant search results - the matching chunks of the documents - to your LLM and do the usual LLM stuff with them
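
A minimal end-to-end sketch of those four steps; embed() here is a toy bag-of-words vector, and a real system would call an embedding model and a proper vector DB instead:

    import math
    from collections import Counter

    def embed(text: str) -> Counter:
        # Toy stand-in for a real embedding model.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # Steps 1-2: embed the document chunks and store them (a list as "vector DB").
    chunks = ["Biff wants lunch Thursday at noon", "Invoice #42 is due March 1st"]
    db = [(embed(c), c) for c in chunks]

    # Step 3: embed the question and search by cosine similarity.
    q = embed("when does biff want lunch")
    _, best = max(db, key=lambda pair: cosine(q, pair[0]))

    # Step 4: feed the best chunk back to the LLM as context.
    print(f"Context: {best}\nQuestion: when does Biff want lunch?")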


Although RAG is often implemented via vector databases to find 'relevant' content, I'm not sure that's a necessary component. I've been doing what I call RAG by finding 'relevant' content for the current prompt context via a number of different algorithms that don't use vectors.

Would you define RAG only as 'prompt optimisation that involves embeddings'?


Sure thing, your RAG approach sounds intriguing, especially since you're sidestepping vector databases. But doesn't the input context length cap affect it? (ChatGPT Plus at 32K [0] or GPT-4 via the OpenAI API at 128K [1].) Seems like those cases would be pretty rare, though.

[0]: https://openai.com/chatgpt/pricing#:~:text=8K-,32K,-32K

[1]: https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turb...


Yes, context window is a limiting factor, but that's true however you identify the content to augment generation.


You're misunderstanding. Imagine your query is matched against chunks of text from the database, where the relevance of the information is evaluated for the window each time it slides. Then, collecting the n most relevant chunks, these are included in the prompt so the LLM can provide answers from the source documents verbatim. This is useful for cases where precise and exact answers are needed - for example, searching the docs of some package for the right API to call. You don't want a name that's close to right; it has to be correct to the character.
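
A toy sketch of that sliding window: score each chunk against the query by word overlap and keep the top n (the window size, stride, and text are all made up):

    words = "call frobnicate_v2 not frobnicate to reticulate splines safely".split()
    query = set("how do i reticulate splines".split())

    window, stride, n = 4, 2, 2
    chunks = [" ".join(words[i:i + window]) for i in range(0, len(words), stride)]

    # The most relevant chunks go into the prompt verbatim, so exact API names survive.
    best = sorted(chunks, key=lambda c: len(set(c.split()) & query), reverse=True)[:n]
    print(best)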


Ahh, OK, I see. It's basically what Microsoft 365 Copilot does too, with its "grounding" step.


Yes.


Prices have dropped quite considerably recently. You can now pick up an e-bike for as little as €500. Once you get out of the "budget" range, I'd expect to pay more like €1500 for an average one.


Where can I pick up these €500 e-bikes? I've never seen one under €900, and that is with absolute garbage parts. Once you have some decent parts on there, they are quickly €2000+.


The Jetson Haze is $549 at Costco. I didn't buy one because it's too small for me to ride comfortably, and the battery is too small.

Plus, as an all-Chinese bike, I'd want to read some long-term reliability reports before buying.


The cheapest here is around €500 (depending on exchange rates), with plenty under €1500.

https://electroheads.com/collections/all-electric-bikes?filt...


All the reasonably good e-bikes I've checked recently are now several hundred dollars more expensive than a year ago. You can watch a year-old review on YouTube which mentions prices, then go to the company's website to see how the prices have increased.


You're still having to breathe the cold air though, which isn't great for your health below certain temperatures. https://www.theguardian.com/society/2022/sep/01/how-turning-...


This is my problem. I don't mind the cold on my skin, but breathing the air gets me. My throat and nose dry up and get itchy and scratchy. Very uncomfortable, so keeping the heater on is really the only solution I see.


In all the work environments I've been in (UK) the culture is the opposite, as the birthday person you are expected to bring in cake.


And I've had exactly the opposite experience in the UK. I think this depends entirely on the workplace.


Yeah. I've worked in multiple places in Europe and people expect you to bring a cake or something.

They might pitch in and buy you a small present, in return.

But they definitely expect you to bring a cake or at least some sweets.


In that case, I'd be trying to keep my birthday a secret!


"I wasn't born. I just ... slowly coalesced over a year or so, just like celestial bodies do."


Single-Page App. Simply: the application is loaded once when you navigate to the page, and further AJAX calls are used to retrieve data and perform actions as you use the app.


Yes, they are completely separate companies.


That would enable the truest form of helicopter parenting (https://en.wikipedia.org/wiki/Helicopter_parent)



I heard the phenomenal term "curling parenting" the other day, where the kid is the stone and the parents are the two people frantically working to reduce the friction in front of it.

