That post really needs some editing... but the two big premises I got through before giving up are entirely wrong. NVIDIA's stock price is not based on an expectation that GPU sales increase quarter over quarter; that's not how asset pricing works. And looking at today's revenue when projecting the return on capital expenditures also doesn't make any sense: you need to look at future revenue. If we look at various measures of AI capability, they increase with available computing power (roughly, a linear increase in performance for an exponential growth in compute). Right now, Cursor makes for a pretty decent programmer. Give it 10x the computing power and it's very likely to be a pretty good programmer; another 10x, and it could be a very good one.
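To make the shape of that claim concrete, here's a minimal sketch. The performance function and its constants are pure assumptions for illustration, not measured values; only the logarithmic shape matters:

```python
# Sketch of the scaling claim above: if benchmark performance grows
# roughly linearly while compute grows exponentially, then performance
# is roughly logarithmic in compute. Constants are made up.
import math

def performance(compute, base=50.0, gain_per_10x=10.0):
    """Hypothetical benchmark score as a function of relative compute."""
    return base + gain_per_10x * math.log10(compute)

for compute in (1, 10, 100):  # 1x, 10x, 100x today's compute
    print(f"{compute:>4}x compute -> score {performance(compute):.0f}")
# Each 10x in compute buys a fixed bump in score:
# decent -> good -> very good.
```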
These services are also clearly priced for trial use. Cursor Ultra at $200/month is expensive relative to Microsoft Office, but ridiculously cheap relative to an entry-level programmer, despite being more capable than one. The article notes that Cursor changed its pricing model, and yes, it's gotten more expensive to use the advanced models... but barely, and so what? You can't look at the cost of something without also looking at the value: Cursor with Sonnet in max mode is better than anything that was available a year ago. If Cursor were a very good programmer, paying $100,000/month for a subscription would be a bargain for a company: it's not replacing one programmer, it's replacing ten of them. Or a hundred. What you then need is a small number of (very highly paid) software engineers who can manage an army of AI agents, instead of an army of programmers. We're likely already at the stage where a programmer plus Cursor is more productive than a programmer with outsourced support in India for routine tasks. (There's a paper claiming that programmers with AI are less productive than programmers without it... and the proposed mechanism is that programmers browse the web or social media while they wait for the AI to finish, and stay distracted longer than it takes the AI to code. Everything has a learning curve. Build in a beep when the task is done, and fire programmers who spend all day on social media.)
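The arithmetic behind that bargain claim, with the salary figure as a loud assumption of mine (a fully loaded cost of $150k/year per programmer is my number, not the article's):

```python
# Break-even sketch for the $100,000/month subscription claim above.
# The fully loaded cost per programmer is an assumption for illustration.
MONTHLY_SUBSCRIPTION = 100_000             # hypothetical "very good programmer" tier
LOADED_COST_PER_PROGRAMMER = 150_000 / 12  # assumed $150k/year, fully loaded

breakeven = MONTHLY_SUBSCRIPTION / LOADED_COST_PER_PROGRAMMER
print(f"Break-even at {breakeven:.0f} programmers replaced")
# -> 8: if the subscription does the work of ten programmers, let alone
# a hundred, it pays for itself with room to spare.
```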
The other obvious application area that the article is missing (as far as I can tell) is manufacturing. Factories already rely on robots for a lot of tasks, but it's very expensive to train them because you need to enumerate every potential issue that can arise and manually feed it into the training data. GenAI can let robots address issues they haven't encountered in their training data. If you can cut down on the lengthy and expensive training phase, and also use robots in situations that are less predictable, you can substantially increase automation across factories. For reference, there are roughly 500 million people employed in factories globally. How can someone go on for more than 14,000 words and not talk about manufacturing? In fact, the whole article seems to be about individual user subscriptions... that's obviously not where the money is going to be.
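A quick back-of-envelope on why that headcount matters, where the labor cost and automation share are loud assumptions of mine, not numbers from the article:

```python
# Back-of-envelope on the manufacturing opportunity. The headcount comes
# from the comment above; the average labor cost and automation share are
# assumptions chosen only to show the order of magnitude.
FACTORY_WORKERS = 500_000_000   # rough global figure cited above
AVG_ANNUAL_LABOR_COST = 15_000  # assumed global average, fully loaded
AUTOMATED_SHARE = 0.05          # assume GenAI-trained robots take 5% of tasks

addressable = FACTORY_WORKERS * AVG_ANNUAL_LABOR_COST * AUTOMATED_SHARE
print(f"~${addressable / 1e9:.0f}B/year in labor cost addressed")
# -> ~$375B/year: even a small slice dwarfs individual user subscriptions.
```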