edent's comments | Hacker News

I wonder if Ladybird will ever become a member of the WHATWG Steering Group. It would be nice to see more / any independent voices on there.

I doubt they're as interested in bigco politics as they are in hacking out features.

On the other hand, I think they had a dev or two on TC39. I remember it being mentioned in one of Andreas’ videos (years ago)

Just wanted to give my thanks for ChromaDoze. I use it on flights all the time to help drown out noise. A brilliant open source app.

Anything with under 7 million users in the UK is a "smaller" service - so has lighter requirements. See https://ofcomlive.my.salesforce-sites.com/formentry/Regulati...

If it allows unmoderated communications, it might be higher risk. See https://www.ofcom.org.uk/siteassets/resources/documents/onli...

But most of the requirements are stuff that Mastodon services should be doing anyway - responding to complaints, having a code of conduct, having moderators, perhaps using a CDN to filter out CSAM, etc. See https://www.ofcom.org.uk/siteassets/resources/documents/onli...

If you're self-hosting purely for yourself, there are no users other than yourself - so no need to worry.


Good answer.

Not really. I was a civil servant and gave advice on this.

Civil servants aren't there to say whether a policy is good, sensible, or a vote-winner. The CS policy profession is there, in part, to advise on risks. Ministers decide whether to accept those risks.

There were plenty of people (like me) who would have pointed out the various risks and problems. Some of which caused policy to change, and some were accepted.

I don't think I've seen the CS blamed for something like this in recent years.


I've written a bit about logos at https://shkspr.mobi/blog/2010/11/hiding-space-invaders-in-qr...

Basically, at the maximum level of error correction, you can obscure up to 30% of the code (excluding the corner target) and the data is still readable.

However, most QR readers will adjust the colours they see to pure black and white - so light colours will be squashed down to white. That means you can have some pretty colourful designs and still keep the codes readable.

Ideally, the border should be at least 2 blocks wide - but modern scanners are pretty good at picking out the targets.
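
If you want to experiment with this, here's a minimal sketch using the node-qrcode npm package (the output filename, URL and colour values are just made-up examples):

  // Sketch: generate a QR code at the highest error-correction level ('H'),
  // which tolerates roughly 30% damage - enough budget to paste a logo over it later.
  // npm install qrcode
  const QRCode = require('qrcode');

  QRCode.toFile('logo-friendly.png', 'https://example.com/my/long/link', {
    errorCorrectionLevel: 'H', // maximum redundancy
    margin: 2,                 // quiet zone of 2 modules around the code
    color: {
      dark: '#1a1a6e',         // dark colour; scanners threshold this towards black
      light: '#f5f5ff'         // light colour; thresholded towards white
    }
  }, err => { if (err) throw err; });

Whatever logo you then paste over the finished image eats into that ~30% budget, so keep it away from the three corner targets.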

As for size - that depends on your target audience. If your users are sat down and have the QR in front of them, you can cram in as much data as you like. If the code is on a billboard people are far away from, use as little data as you can and make the code as physically large as possible.


About 60k academic citations about to die - https://scholar.google.com/scholar?start=90&q=%22https://goo...

Countless books with irrevocably broken references - https://www.google.com/search?q=%22://goo.gl%22&sca_upv=1&sc...

And for what? The cost of keeping a few TB online and a little bit of CPU power?

An absolute act of cultural vandalism.


https://wiki.archiveteam.org/index.php/Goo.gl

https://tracker.archiveteam.org/goo-gl/ (1.66B work items remaining as of this comment)

How to run an ArchiveTeam warrior: https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior

(edit: i see jaydenmilne commented about this further down thread, mea culpa)


They appear to be doing ~37k items per minute; with 1.6B remaining, that is roughly 30 days left. So that's just barely enough to do it in time.
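
(Back of the envelope: 37,000 items/minute × 1,440 minutes/day ≈ 53 million items/day, and 1.6 billion ÷ 53 million/day ≈ 30 days.)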

Going to run the warrior over the weekend to help out a bit.


7 days later and they're down to 1.2B remaining!

Thank you for that information!

I wanted to help and did that using VMware.

For curious people, here is what the UI looks like: you have a list of projects to choose from (I chose the goo.gl project) and a "Current project" tab which shows the project activity.

Project list: https://imgur.com/a/peTVzyw

Current project: https://imgur.com/a/QVuWWIj


Also available as a Dockerfile, for those not running VMs: https://github.com/ArchiveTeam/warrior-dockerfile

For those in the know, is this heavy on disk usage? Should I install this on my hard drive or my SSD? Just want to avoid tons of disk writes on an SSD if it's unnecessary.

No, it seems to download entirely into memory. Moderate CPU usage on this i7 MacBook Air.

IMO it's less Google's fault and more a crappy tech education problem.

It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.

And really it's not much different than anything else online - it can disappear on a whim. How many of those shortened links even go to valid pages any more?

And no company is going to maintain a "free" service forever. It's easy to say, "It's only ...", but you're not the one doing the work or paying for it.


> It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.

It's a great idea, and today in 2025, papers are pretty much the only place where using these shortened URLs makes a lot of sense. In almost any other context you could just use a QR code or something, but that wouldn't fit an academic paper.

Their specific choice of shortened URL provider was obviously unfortunate. The real failure is that of DOI to provide an alternative to goo.gl or tinyurl or whatever that is easy to reach for. It's a big failure, since preserving references to things like academic papers is part of their stated purpose.


Even normal HTTP URLs aren't great. If there was ever a case for content-addressable networks like IPFS it's this. Universities should be able to host this data in a decentralized way.

A DOI handle type of thing could certainly point to an IPFS address. I can't speak to how you'd do truly decentralized access to the DOI handle. At some point DNS is a thing and somebody needs to host the handle.

Content-addressable networks usually have long, hash-based URLs, so you still have the compactness problem.

Ahh classic free market cop out.

Free market is a euphemism for “there’s no physics demanding this be worked on”

If you want it archived do it. You seem to want someone else to take up your concerns.

An HN genius should be able to crawl this and fix it.

But you’re not geniuses. They’re too busy to be low affect whiners on social media.


if the smartest among us publishing for academia cannot figure this out, then who will?

Someone being smart in one field doesn't necessarily mean they can solve problems in another.

I know some brilliant people, but, well, putting it kindly, they're as useful as a chocolate teapot outside of their specific area of academic expertise.


Well, is the free market going anywhere?

Who's lost out at the end of the day? People who didn't understand the free market and lost access to these "free" services? Or people who knew what would happen and avoided them? My links are still working...

There are digital public goods (like Wikipedia) that are intended to stick around forever with free access, but Google isn't one of them.


Nope! There have in fact been education campaigns about the evils of URL shorteners for years: how they pose security risks (used for shortening malicious URLs), and how they stop working when their domain is temporarily or permanently down.

The authors just had their heads too far up their academic asses to have heard of this.


>"It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors"

???

DOI and ORCID sponsored link-shortening with Goo.gl. Authors did what they were told would be optimal, and ORCID was probably told by Google that it'd hone its link-shortening service for long-term reliability. What a crazy victim-blame.


Jm2c, but if your reference is a link to an online resource, that's borderline already (at any point the content can change or disappear).

Even worse, if your reference is a shortened link from some other service, you've just added yet another layer of unreliable indirection.


Citations are citations, if it's a link, you link to it. But using shorteners for that is silly.

It's not silly if the link is a couple hundred characters long.

Adding an external service so you don’t have to store a few hundred bytes is wild, particularly within a pdf.

It's not the bytes.

It's the fact that it's likely gonna be printed in a paper journal, where you can't click the link.


I find it amusing that you are complaining about not having a computer to click a link while glossing over the fact that you need a computer to use a link at all.

This use case of "I have a paper journal and no PDF but a computer with a web browser" seems extraordinarily contrived. I have literally held a single-digit number of printed papers in my entire life while looking at thousands as PDFs. If we cared, we'd use a QR code.

This kind of luddite behavior sometimes makes using this site exhausting.


Perhaps times have changed, but when I was in grad school circa 2010 smartphones and tablets weren't yet ubiquitous but laptops were. It was super common to sit in a cafe/library with a laptop and a stack of printed papers to comb through.

Reading paper was more comfortable than reading on the screen, and it was easy to annotate, highlight, scribble notes in the margin, doodle diagrams, etc.

Do grad students today just use tablets with a stylus instead (iPad + pencil, Remarkable Pro, etc)?

Granted, post grad school I don't print much anymore, but that's mostly due to a change in use case. At work I generally read at most 1-5 papers a day tops, which is small enough to just do on a computer screen (and have less need to annotate, etc). Quite different than the 50-100 papers/week + deep analysis expected in academia.


>Perhaps times have changed, but when I was in grad school circa 2010 smartphones and tablets weren't yet ubiquitous but laptops were. It was super common to sit in a cafe/library with a laptop and a stack of printed papers to comb through.

I just had a really warm feeling of nostalgia reading that! I was a pretty average student, and the material was sometimes dull, but the coffee was nice, life had little stress (in comparison) and everything felt good. I forgot about those times haha. Thanks!


But in that case you have no computer to type the link into even if you wanted to.

> I have literally held a single-digit number of printed papers in my entire life while looking at thousands as PDFs.

This is by no means a universal experience.

People still get printed journals. Libraries still stock them. Some folks print out reference materials from a PDF to take to class or a meeting or whatnot.


And how many of those people then proceed to type those links into their web browsers, shortened or not?

Sure, contributing to link rot is bad, but in the same way that throwing out spoiled food is bad. Sometimes you've just gotta break a bunch of links.


> And how many of those people then proceed to type those links into their web browsers, shortened or not?

That probably depends on the link's purpose.

"The full dataset and source code to reproduce this research can be downloaded at <url>" might be deeply interesting to someone in a few years.


So he has a computer and can click.

In any case a paper should not rely on an ephemeral resource like internet links.

Have you ever tried to navigate to the errata of computer science books? It's one single book, with one single link, and it's dead anyway.


I’m unconvinced the researchers acted irresponsibly. If anything, a Google-shortened link looks—at first glance—more reliable than a PDF hosted god knows where.

There are always dependencies in citations. Unless a paper comes with its citations embedded, splitting hairs between why one untrustworthy provider is more untrustworthy than another is silly.


The Google shortened link just redirects you to the PDF hosted god knows where...

I feel like all that is beside the point. People used goo.gl because they largely are not tech specialists and aren't really aware of link rot or of a Google decision rendering those links inaccessible.

> People used goo.gl because they largely are not tech specialists and aren't really aware of link rot or of a Google decision rendering those links inaccessible.

Anyone who is savvy enough to put a link in a document is well-aware of the fact that links don't work forever, because anyone who has ever clicked a link from a document has encountered a dead link. It's not 2005 anymore, the internet has accumulated plenty of dead links.


Very much an xkcd.com/2501 situation

> This kind of luddite behavior sometimes makes using this site exhausting.

We have many paper documents from over 1,000 years ago.

The vast majority of what was on the internet 25 years ago is gone forever.


What a weird comparison. Do we have the vast majority of paper documents from 1,000 years ago?

We certainly have more paper documents from 1000 years ago than PDFs from 1000 years ago! Clearly that's the fault of the PDFs.

25?

Try going back 6/7 years on this very website; half the links are dead.


That’s an even worse reason to use a temporary redirection service. If you really need to, put in both.

which makes url shorteners even more attractive for printed media, because you don't have to type many characters manually

Fix that at the presentation layer (PDFs and Word files etc support links) not the data one.

Let me know when you figure out how to make a printed scientific journal clickable.

Scientific journals should not rely on ephemeral data on the internet. It doesn't even matter how long the url is.

Just buy any scientific book and try to navigate to the errata it links to in the book. It's always dead.


Sure, just turn the three page article into a 500 page one with all the data and code.

Take a photo on your phone, OS recognises the link in the image, makes it clickable, done. Or, use a QR code instead


This is the answer; turns out that non-transformed links are the most generic data format, without any "compression" - QR codes or a third-party-intermediary - needed.

For people wanting to include URL references in things like books, what’s the right approach to take today?

I’m genuinely asking. It seems like it's hard to trust that any service will remain running for decades.


https://perma.cc/

It is built for the task, and assuming the worst-case scenario of a sunset, it would be ingested into the Wayback Machine. Note that both the Internet Archive and Cloudflare are supporting partners (bottom of page).

(https://doi.org/ is also an option, but not as accessible to a casual user; the DOI Foundation pointed me to https://www.crossref.org/ for ad hoc DOI registration, although I have not had time to research further)


perma.cc is an interesting project, thanks for sharing.

other readers may be specifically interested in their contingency plan

https://perma.cc/contingency-plan


Crossref is designed for publishing workflows. Not set up for ad hoc DOI registration. Not least because just registering a persistent identifier to redirect to an ephemeral page without arrangements for preservation and stewardship of the page doesn’t make much sense.

That’s not to say that DOIs aren’t registered for all kinds of urls. I found the likes of YouTube etc when I researched this about 10 years ago.


Would you have a recommendation for an organization that can register ad hoc DOIs? I am still looking for one.

It really depends what you’re trying to do. Make something citable? Findable? Permalink?

Crossref isn’t the only DOI registration agency. DataCite may be more relevant, although both require membership. Part of this is the commitment to maintaining the content.

You could look at Figshare or Zenodo? https://docs.github.com/en/repositories/archiving-a-github-r...

Then Rogue Scholar is worth a mention. https://rogue-scholar.org/

Sorry that doesn’t answer your question but maybe that’s a clue that DOIs might not be right for your use case?


perma.cc is great. Also check out their tools if you want to get your hands dirty with your own archival process: https://tools.perma.cc/

While Perma is a solution specifically for this problem, and a good one at that, citing the might of the backing company is a bit ironic here.

If Cloudflare provides the infra (thanks Cloudflare!), I am happy to have them provide the compute and network for the lookups (which, at their scale, is probably a rounding error), with the Internet Archive remaining the storage system of last resort. Is that different than the Internet Archive offering compute to provide the lookups on top of their storage system? Everything is temporary, intent is important, etc. Can always revisit the stack as long as the data exists on disk somewhere accessible.

This is distinct from Google saying "bye y'all, no more GETs for you" with no other way to access the data.


This is much better positioned for longevity than google’s URL shortener, I’m not trying to make that argument. My point is that 10-15 years ago, when Google’s URL shortener was being adopted for all these (inappropriate) uses, its use was supported by a public opinion of Google’s ‘inevitability’. For Perma, CF serves a similar function.

Point taken.

> Websites change. Perma Links don’t.

Until the Cocos Islands are annexed by Australia.


The full URL to the original page.

You aren't responsible if things go offline. No more than if a publisher stops reprinting books and the library copies all get eaten by rats.

A reader can assess the URL for trustworthiness (is it scam.biz or legitimate_news.com), look at the path to hazard a guess at the metadata and contents, and - finally - look it up in an archive.


>The full URL to the original page.

I thought that was the standard in academia? I've had reviewers chastise me when I did not use wayback machine to archive a citation and link to that since listing a "date retrieved" doesn't do jack if there's no IA copy.

Short links were usually in addition to full URLs, and more in conference presentations than the papers themselves.



I think this is the only real answer. Shorteners might work for things like old Twitter where characters were at a premium, but I would rather see the whole URL.

We’ve learned over the years that they can be unreliable, security risks, etc.

I just don’t see a major use-case for them anymore.


Real URL and save the website in the Internet Archive as it was on the date of access?

What's the right approach to take for referencing anything that isn't preserved in an institution like the Library of Congress?

Say the interview of a person, a niche publication, a local pamphlet?

Maybe to certify that your article is of a certain level of credibility you need to manually preserve all the cited works yourself in an approved way.


The act of vandalism occurs when someone creates a shortened URL, not when it stops working.

The vandalism was relying on Google.

You'd think people would learn. Ah, well. Hopefully we can do better from lessons learned.

The web is a crap architecture for permanent references anyway. A link points to a server, not e.g. a content hash.

The simplicity of the web is one of its virtues but also leaves a lot on the table.
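
As a rough illustration of the difference, using plain Node crypto rather than actual IPFS (a real IPFS CID is derived from the content in a similar spirit, just with its own chunking and encoding):

  // Sketch: a content-addressed reference is derived from the bytes themselves,
  // so it stays valid for as long as *anyone* still hosts an identical copy.
  // 'paper.pdf' is a hypothetical local file.
  const crypto = require('crypto');
  const fs = require('fs');

  const bytes = fs.readFileSync('paper.pdf');
  const digest = crypto.createHash('sha256').update(bytes).digest('hex');

  console.log(digest); // 64 hex characters - note that it isn't exactly compact either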


While an interesting attempt at an impact statement, 90% of the results on the first two pages for me are not references to goo.gl shorteners, but are instead OCR errors or just gibberish. One of the papers is from 1981.

In the first segment of the very first episode of the Abstractions podcast, we talked about Google killing its goo.gl URL obfuscation service and why it is such a craven abdication of responsibility. Have a listen, if you’re curious:

Overcast link to relevant chapter: https://overcast.fm/+BOOFexNLJ8/02:33

Original episode link: https://shows.arrowloop.com/@abstractions/episodes/001-the-r...


Can't someone just go through programmatically right now and build a list of all these links and where they point to? And then put up a list somewhere that everyone can go look up if they need to?


When they began offering this, their rep for ending services was already so bad I refused to consider goo.gl. It's amazing how, for many years now, they have introduced and then ended services with large user bases. Gmail being in "beta" for five years was, weirdly, to me, a sign they might stick with it.

I have always struggled with this. If I buy a book I don’t want an online/URL reference in it. Put the book/author/isbn/page etc. Or refer to the magazine/newspaper/journal/issue/page/author/etc.

I mean preferably do both, right? The URL is better for however long it works.

We are long, long past any notion that URLs are permanent references to anything. Better to cite with title, author, and publisher so that maybe a web search will turn it up later. The original URL will almost certainly be broken after a few years.

The cost of dealing with and supporting an old codebase, instead of burning it all and releasing a written-from-scratch replacement next year.

> And for what? The cost of keeping a few TB online and a little bit of CPU power?

For the immeasurable benefits of educating the public.


Truly, the most Googly of sunsets.

> An absolute act of cultural vandalism.

It makes me mad also, but something we have to learn the hard way is that nothing in this world is permanent. Never, ever depend on any technology to persist. Not even URLs to original hosts should be required. Inline everything.


[flagged]


Gosh! It is a pity Google doesn't hire any smart people who know how to build a throttling system.

Still, they're a tiny and cash-starved company so we can't expect too much of them.


It's almost as if once a company becomes this big, burning it to the ground would be better for society or something. That would be the liberal position on monopolies if they actually believed in anything.

Must not be any questions about that in Leetcode.

It is a business, not a charity. Adjust your expectations accordingly, or expect disappointment.

Modern webservers are very, very fast on modern CPUs. I hear Google has some CPU infrastructure?

I don't know if GCP has a free tier like AWS does, but 10kQPS is likely within the capability of a free EC2 instance running nginx with a static redirect map. Maybe splurge for the one with a full GB of RAM? No problem.
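
As a very rough sketch of how little is involved (plain Node rather than nginx, and the short codes and targets below are invented):

  // Sketch: serve a fixed, read-only short-code -> URL table as 301 redirects.
  // A real table would be generated from the goo.gl data, not hard-coded like this.
  const http = require('http');

  const redirects = new Map([
    ['abc123', 'https://example.org/some/very/long/research/page'],
    ['xyz789', 'https://example.com/another/archived/destination'],
  ]);

  http.createServer((req, res) => {
    const target = redirects.get(req.url.slice(1)); // drop the leading '/'
    res.writeHead(target ? 301 : 404, target ? { Location: target } : {});
    res.end();
  }).listen(8080);

An in-memory Map obviously wouldn't hold billions of entries, but the point stands: answering a known, static redirect is about as cheap as HTTP gets.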


You could deprecate the service and archive the links as static HTML. 200 bytes of text for an HTML redirect (not JS).
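
Something like this (the target URL is just a placeholder) comes in at roughly a couple of hundred bytes:

  <!DOCTYPE html>
  <html>
  <head>
  <!-- pure-HTML redirect, no JS: the meta refresh sends the browser straight on -->
  <meta http-equiv="refresh" content="0; url=https://example.org/the/original/long/url">
  <link rel="canonical" href="https://example.org/the/original/long/url">
  </head>
  <body><a href="https://example.org/the/original/long/url">Moved here</a></body>
  </html>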

You can serve immense volumes of traffic from static HTML. One hardware server alone could so easily do the job.

Your attack surface is also tiny without a back end interpreter.

People will chime in with redundancy, but the point is Google could stop maintaining the ingress, and still not be douches about existing URLs.

But... you know, it's Google.


Exactly. I've seen goo.gl URLs in printed books. Obviously in old blog posts too. And in government websites. Nonprofit communications. Everywhere.

Why break this??

Sure, deprecate the service. Add no new entries. This is a good idea anyway, link shorteners are bad for the internet.

But breaking all the existing goo.gl URLs seems bizarrely hostile, and completely unnecessary. It would take so little to keep them up.

You don't even need HTML files. The full set of static redirects can be configured into the webserver. No deployment hassles. The filesystem can be RO to further reduce attack surface.

Google is acting like they are a one-person startup here.

Since they are not a one-person startup, I do wonder if we're missing the real issue. Like legal exposure, or implication in some kind of activity that they don't want to be a part of, and it's safer/simpler to just delete everything instead of trying to detect and remove all of the exposure-creating entries.

Or maybe that's what they're telling themselves, even if it's not real.


> Why break this??

We already told you: people are likely brute-forcing URLs.


I'm not sure why that is a problem.

Those numbers make it seem fairly trivial. You have a dozen bytes referencing a few hundred bytes, for a service that is not latency sensitive.

This sounds like a good project for an intern, with server costs that might be able to exceed a hundred dollars per month!


Thanks for your support, I really appreciate it :-)

hey i should also apologize. I am really trying to not be so nit-picky. You had the idea and executed it in less than 1kb. I really meant to just kinda educate about numbers stations in general and i know it came off in a way that was unintentional.

and i used copilot because i am not a programmer, i just wanted to see if it was, in fact, possible to add noise and fix the way the numbers were read in 1kb. and i kept your code essentially the same, only adding stuff to split the numbers up closer to how they sound on RF.

so, sorry!


I'd love it if you forked my code and managed to fit a full numbers station into 1024KB.

my kid wanted it to say "point" between the groupings, which is done by the speech engine, so this may sound different depending on your browser:

  <!DOCTYPE html><html><body><button onclick="f()">Start</button><script>
  function f(){with(window){
  a=new AudioContext;b=a.createBuffer(1,c=2*a.sampleRate,a.sampleRate);d=b.getChannelData(0);
  for(i=0;i<c;i++)d[i]=(Math.random()*2-1)*.4;
  e=a.createBufferSource();e.buffer=b;e.loop=1;g=a.createGain();g.gain.value=.05;
  e.connect(g).connect(a.destination);e.start();
  const l=n=>((n.match(/[A-Z]/g)||[]).length==1&&(n[0].match(/[A-Z]/g)||[]).length==1);
  setInterval(()=>{s=Object.getOwnPropertyNames(globalThis).filter(l).sort(()=>.5-Math.random())[0];
  if(Math.random()>.3){
    n=String(Math.ceil(Math.random()*1e4).toString().padStart(4,'0'));
    s=n[0]+'. '+n[1]+'. point. '+n[2]+'. '+n[3]+'.';
  }
  m=new SpeechSynthesisUtterance;m.text=s;
  v=speechSynthesis.getVoices();m.lang=v[(Math.random()*v.length)|0].lang;
  m.rate=Math.random();m.pitch=Math.random()*2;speechSynthesis.speak(m);},866);
  //m.rate=1.7;m.pitch=2;speechSynthesis.speak(m);},866);
  }}</script></body></html>
the comment at the end can be switched with the preceding line so it sounds like he wanted it to (high-pitched and fast, please) <1kb

and i got something stuck in my craw about noise so here's one with more accurate noise:

  <!DOCTYPE html><html><body><button onclick="f()">Start</button><script>
  function f(){with(window){
  a=new AudioContext;
  g=a.createGain();g.gain.value=.05; 
  h=a.createScriptProcessor(256,1,1);p=0;
  h.onaudioprocess=e=>{
    b=e.outputBuffer.getChannelData(0);
    for(i=0;i<b.length;i++)b[i]=p+=(Math.random()*2-1)/10;
  };
  h.connect(g).connect(a.destination);
  const l=n=>((n.match(/[A-Z]/g)||[]).length==1&&(n[0].match(/[A-Z]/g)||[]).length==1);
  setInterval(()=>{
    s=Object.getOwnPropertyNames(globalThis).filter(l).sort(()=>.5-Math.random())[0];
    if(Math.random()>.3){
      n=String(Math.ceil(Math.random()*1e4).toString().padStart(4,'0'));
      s=n[0]+'. '+n[1]+'. point. '+n[2]+'. '+n[3]+'.';
    }
    m=new SpeechSynthesisUtterance;m.text=s;
    v=speechSynthesis.getVoices();m.lang=v[(Math.random()*v.length)|0].lang;
    m.rate=Math.random();m.pitch=Math.random()*2;speechSynthesis.speak(m);
  },866);
  }}</script></body></html>
apologies to the HN servers for using 2kb to display these

i don't know javascript so apologies if i messed anything up (because it will eventually pop and click which is extremely accurate to numbers station reception but also crashes the page - audio stops.)


They literally didn't. They expelled anyone working on what they dismissed as Jüdische Physik ("Jewish physics").

It is one of the (many) reasons they fell behind in atomic research.


Ooh! This is actually a bit of a passive, niche interest of mine. It should be noted I am not a professional historian. I just read a lot of material and watch a lot of interviews and documentaries.

The Nazis fell behind in atomic research for a variety of reasons, each with its own underpinnings. One of the most interesting in my mind was organizational failings. Although many different groups were working in this area, the regime leadership was rather disconnected and didn’t prioritize a coherent or integrated research effort. They didn’t provide much funding either. In some ways this created more room for unstructured scientific inquiry and creativity, but it also meant that no particular group could make any real progress toward usable reactors or weapons.

Contrast this with the Manhattan Project in the US (and the UK’s efforts at radar), which was supported and managed from the highest levels of government with a figurative blank check and despite immense compartmentalization also had a high degree of integration among disciplines and sites. There was one goal.

In my view this is an interesting manifestation of the foundation of the Third Reich. In Martin Davidson’s The Perfect Nazi, Davidson notes that the Nazi party was in many ways a child’s cosplay turned into a nightmare. Davidson writes that one of the key failings of the regime is that it was run by broken people who had more of an interest in catharsis than any real sense of society, advancement, or cohesion.


For radar, RV Jones' "Most Secret War" has an anecdote where the British raid a German coastal radar site (in France), nab the radar operator, and are annoyed to discover that he knows almost nothing about German radar. Pre-war Germany is already a fascist dictatorship, so "ham" radio operators are enemies of the state because they're outside the centrally controlled narrative, whereas pre-war Britain has the usual number of amateurs with radios. So when war broke out and they were conscripting towns at a time, the British would see you're a ham and divert you from infantry training or whatever and make you a radar operator - which means the average British radar operator actually has some idea how radio works, but the Germans are obliged to basically just pick a soldier and train him to operate the radar from scratch.

This apparently had significant operational consequences because if you don't know how it works all you can do when there's a fault is order spare parts. So German radar stations would be offline more often and for longer. Although Chain Home's transmitters were wildly more powerful than anything even a rich British amateur might have seen before, not to mention operating on frequencies unachievable with prior technology, the principles were familiar.


That is a fantastic contribution to the conversation. I think I’ve heard or read accounts that, if I’d thought long and hard about, might have led me to understand this, but this is new information to me.

I have seen Most Secret War recommended to me by basically every physical and ebook seller I have an account with, so I guess it’s time to take one of them up on the offer. Thank you!

Any other similar insights from your readings?


Interesting, thanks. I tested it on Chromium, but didn't test other Chrome-based browsers.
